The low cost and short lead time of small satellites have led to their use in science missions, Earth observation, and interplanetary missions. Today, they are also key instruments in orchestrating technology demonstrations for on-orbit operations (O3) such as inspection and spacecraft servicing, with planned roles in active debris removal and on-orbit assembly. This paper provides an overview of the robotics and autonomous systems (RAS) technologies that enable robotic O3 on smallsat platforms. Major RAS topics such as sensing and perception; guidance, navigation and control (GN&C); microgravity mobility and mobile manipulation; and autonomy are discussed from the perspective of relevant past and planned missions.
In recent decades, advances in terrain modelling and reconstruction techniques have driven growing research interest in precise short- and long-distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors such as Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification and depth estimation in terrestrial robotics, but is still under development as a viable technology for space robotics. This paper first reviews current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; we then propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either individual sensor for planetary exploration. A comprehensive analysis is presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation.
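Any camera-LIDAR fusion scheme of the kind this abstract proposes first requires registering the two sensors: LIDAR returns must be projected into the camera image plane before their depths can be combined with stereo estimates. A minimal sketch of that projection step, using a made-up pinhole intrinsic matrix and a hypothetical LIDAR-to-camera extrinsic calibration (all values illustrative only, not from the paper):

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point, pixels).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical LIDAR-to-camera extrinsics: rotation and translation (metres).
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])

def project_lidar_to_image(points_lidar):
    """Project 3-D LIDAR points into the image plane.

    Returns pixel coordinates and depths for points in front of the camera;
    these sparse depths could then be fused with dense stereo disparity.
    """
    pts_cam = points_lidar @ R.T + t       # transform into the camera frame
    in_front = pts_cam[:, 2] > 0.0         # discard points behind the lens
    pts_cam = pts_cam[in_front]
    pix = pts_cam @ K.T                    # apply the pinhole model
    pix = pix[:, :2] / pix[:, 2:3]         # perspective divide
    return pix, pts_cam[:, 2]

points = np.array([[ 2.0, 0.5, 10.0],
                   [-1.0, 0.2,  5.0]])
pixels, depths = project_lidar_to_image(points)
```

Once projected, each LIDAR return supplies an accurate but sparse depth anchor at a known pixel, which is the basic mechanism by which fusion can extend range and accuracy beyond what stereopsis alone provides.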
Interest is increasing in the use of neural networks and deep learning for on-board processing tasks in the space industry. However, development has lagged behind terrestrial applications for several reasons: space-qualified computers have significantly less processing power than their terrestrial equivalents; reliability requirements are more stringent than those of most applications for which deep learning is currently used; and the long requirements, design and qualification cycles in much of the space industry slow the adoption of recent developments. GPUs are the first choice of hardware for implementing neural networks on terrestrial computers, but no radiation-hardened equivalent parts are currently available. Field Programmable Gate Array (FPGA) devices are capable of efficiently implementing neural networks, and radiation-hardened parts are available; however, the process of deploying and validating an inference network is non-trivial, and robust tools that automate the process are not available. We present an open-source tool chain that can automatically deploy a trained inference network from the TensorFlow framework directly to the LEON 3, and an industrial case study of the design process used to train and optimise a deep-learning model for this processor. This does not directly address the three challenges described above; however, it greatly accelerates the prototyping and analysis of neural-network solutions, allowing these options to be considered more easily than is currently possible. Future improvements to the tools are identified, along with a summary of some of the obstacles to using neural networks and potential future solutions to them.
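The tool chain itself is not reproduced here, but one core transformation implied by deploying a float-trained network to a processor with limited floating-point throughput, such as the LEON 3, is weight quantisation. A minimal sketch of symmetric 8-bit post-training quantisation for a single dense layer (the weights and scales are made-up values, and this is a generic illustration, not the paper's actual tool chain):

```python
import numpy as np

# Toy trained weights (hypothetical float32 values, as TensorFlow would store them).
W = np.array([[0.5, -0.25],
              [0.125, 0.75]], dtype=np.float32)

def quantize(x, scale):
    """Map float values onto int8 using a fixed symmetric scale factor."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

w_scale = float(np.max(np.abs(W)) / 127.0)   # symmetric per-tensor scale
Wq = quantize(W, w_scale)

def int8_dense(x, x_scale):
    """Integer matrix multiply with a 32-bit accumulator, then rescale to float."""
    xq = quantize(x, x_scale)
    acc = xq.astype(np.int32) @ Wq.astype(np.int32).T
    return acc * (x_scale * w_scale)

x = np.array([1.0, -2.0], dtype=np.float32)
y_int8 = int8_dense(x, x_scale=2.0 / 127.0)  # quantised inference
y_float = x @ W.T                            # float reference result
```

The integer result tracks the float reference to within the quantisation error, which is why this trick is standard practice when targeting constrained embedded processors.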
Space telescopes are our ‘eyes in the sky’ that enable unprecedented astronomy missions and also permit Earth observation integral to science and national security. On account of the increased spatial resolution, spectral coverage, and signal-to-noise ratio they afford, there is a constant clamour from the science and surveillance communities for larger-aperture telescopes. This paper addresses a 25 m modular telescope operating in the visible wavelengths of the electromagnetic spectrum; such a telescope located in geostationary Earth orbit would permit 1 m spatial resolution of a location on Earth. Specifically, it discusses the requirements and architectural options for a robotic assembly system, called the Robotic Agent for Space Telescope Assembly (RASTA). Aspects of a first-order design and initial laboratory test-bed developments are also presented.
The size of any single spacecraft is ultimately limited by the volume and mass constraints of currently available launchers, even if elaborate deployment techniques are employed. The cost of a single large spacecraft may also be infeasible for some applications, such as space telescopes, owing to the increasing cost and complexity of very large monolithic components such as polished mirrors. The capability to assemble in orbit will be required to address missions with large infrastructures or large instruments/apertures for the purposes of increased resolution or sensitivity. This can be achieved by launching multiple smaller spacecraft elements with innovative technologies to assemble (or self-assemble) once in space, building a fractionated spacecraft much larger than the individual modules launched. Until now, in-orbit assembly has been restricted to the domain of very large and expensive missions such as space stations. However, we are now entering a new and exciting era of space exploitation, with new mission applications/markets on the horizon that will require the ability to assemble large spacecraft in orbit. These missions will need to be commercially viable and use both innovative technologies and small/micro-satellite approaches in order to be commercially successful, whilst still being safety compliant. This will enable organisations such as SSTL to compete in an area previously exclusive to large commercial players. However, in-orbit assembly brings its own challenges in terms of guidance, navigation and control, robotics, sensors, docking mechanisms, system control, data handling, optical alignment and stability, and lighting, as well as many other elements, including non-technical issues such as regulatory and safety constraints. Nevertheless, small satellites can also be used to demonstrate and de-risk these technologies.
In line with these future mission trends and challenges, and to prepare for future commercial mission demands, SSTL has recently been making strides towards developing its overall capability in “in-orbit assembly in space” using small satellites and low-cost commercial approaches. This includes studies and collaborations with the Surrey Space Centre (SSC) to investigate the three main potential approaches to in-orbit assembly: deployable structures, robotic assembly, and modular rendezvous and docking. Furthermore, SSTL is currently developing an innovative small ~20 kg nanosatellite (the “Target”) as part of the ELSA-d mission, which will include various rendezvous and docking demonstrations. This paper provides an overview and the latest results/status of these recent in-orbit assembly activities.