Astro.IQ

About

Machine Learning (1) + Spacecraft Trajectory Optimisation (2)

Inspiration

A housefly is a rather simple organism, yet it can independently make decisions in pursuit of its goals, such as navigating to a food source or avoiding obstacles. On closer inspection, a housefly makes these decisions nearly instantaneously, for instance when evading a human's swat. The descent of a lander onto the Martian surface is much the same situation: because communication with Earth is subject to long delays, the lander must make decisions on its own in order to reach the surface safely. If a common housefly can make decisions independently, in real time, in uncertain and dynamic environments, then surely a spacecraft should be able to do the same in an environment where the objective is clearly defined.

Goal

This library aims to implement various machine learning and computational intelligence techniques in a variety of common astrodynamics applications.

Applications

Trajectory Optimisation Architectures

Machine Learning Architectures

Definitions

(1) A type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed.
(2) An especially complicated continuous optimisation problem, characterised by: 1) nonlinear dynamics, 2) many practical trajectory and state-variable discontinuities, 3) implicitly defined terminal conditions (e.g. departure and arrival planet positions), 4) time-dependent influences (i.e. planet positions determined from ephemerides), and 5) an a priori unknown optimal trajectory structure.
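Definition (2) can be made concrete with a minimal sketch in the spirit of the Sims & Flanagan transcription listed in the references: a continuous low-thrust arc is approximated by coast segments punctuated by small impulsive delta-v's, turning the infinite-dimensional control problem into a finite set of decision variables for a nonlinear programme. The function names, step counts, and the choice of applying each impulse at the start of its segment are illustrative assumptions here, not part of the library:

```python
import numpy as np

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter [m^3/s^2]

def two_body(state):
    # Time derivative of [position, velocity] under point-mass gravity.
    r, v = state[:3], state[3:]
    a = -MU_SUN * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

def rk4_step(state, dt):
    # One classical Runge-Kutta 4 step of the coast dynamics.
    k1 = two_body(state)
    k2 = two_body(state + 0.5 * dt * k1)
    k3 = two_body(state + 0.5 * dt * k2)
    k4 = two_body(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate_leg(state0, tof, dvs, substeps=50):
    """Sims-Flanagan-style leg: the time of flight is split into
    len(dvs) segments; each segment applies one impulsive delta-v
    (here, at the segment start, for brevity) and then coasts."""
    dt = tof / len(dvs) / substeps
    state = np.asarray(state0, dtype=float).copy()
    for dv in dvs:
        state[3:] += dv  # impulsive approximation of continuous thrust
        for _ in range(substeps):
            state = rk4_step(state, dt)
    return state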

References

(Yam et al.) Yam, C. H., Izzo, D., & Biscani, F. (2010). Towards a High Fidelity Direct Transcription Method for Optimisation of Low-Thrust Trajectories. 4th International Conference on Astrodynamics Tools and Techniques, 1–7. Retrieved from http://arxiv.org/abs/1004.4539
(Sims & Flanagan) Sims, J. A., & Flanagan, S. N. (1997). Preliminary Design of Low-Thrust Interplanetary Missions.
(Mnih et al.) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., … & Petersen, S. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
(Levine & Koltun) Levine, S., & Koltun, V. (2013). Guided Policy Search. In ICML (3) (pp. 1-9).
(Dachwald) Dachwald, B. (2005). Optimization of very-low-thrust trajectories using evolutionary neurocontrol. Acta Astronautica, 57(2), 175-185.