When the Soviet Union launched the Sputnik satellite in 1957, the beeping metal sphere’s radio transmissions inaugurated the Space Age. Since then, satellites have grown in complexity, but their core functions have remained surprisingly static. Most still function as passive tools: capturing images, relaying communications, broadcasting navigation signals for GPS receivers on Earth, and so on.
But a quiet revolution is now underway above us. Satellites are becoming smarter and more autonomous, powered by artificial intelligence (AI).
Now, say an autonomous satellite operated by a private company malfunctions in orbit. The AI system onboard mistakenly interprets a routine atmospheric anomaly as a collision threat and initiates an unplanned evasive manoeuvre. In doing so, it passes dangerously close to a military reconnaissance satellite belonging to a rival nation. A collision is narrowly averted, but the rival nation lodges a diplomatic protest and alleges hostile intent. The satellite’s AI system was developed in one country, launched by another, operated from a third, and registered by a fourth. Who is liable? Who is accountable?
Understanding autonomous satellites
AI is transforming satellites from passive observers into active, thinking machines. Thanks to recent breakthroughs — from large AI models powering popular applications like ChatGPT to smaller, energy-efficient systems capable of running on smartphones — engineers can now fit satellites with onboard AI. This onboard intelligence, technically known as satellite edge computing, allows satellites to analyse their environment, make decisions, and act autonomously, much as self-driving cars do on the ground.
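To make the idea concrete, the following Python sketch shows the shape of such an onboard loop: sense, classify locally, act when confident, and defer to ground operators when not. Every function, name, and threshold here is a hypothetical stand-in, not real flight software.

```python
# A minimal, illustrative "sense, decide, act" loop for an AI-equipped
# satellite. All names and thresholds are hypothetical stand-ins.
import random

CONFIDENCE_FLOOR = 0.9  # assumed threshold below which the satellite defers

def read_sensors() -> list[float]:
    """Placeholder for an imaging/radar/telemetry snapshot."""
    return [random.random() for _ in range(4)]

def onboard_model(frame: list[float]) -> tuple[str, float]:
    """Placeholder for a small classifier running on the satellite itself."""
    score = sum(frame) / len(frame)
    label = "possible_debris" if score > 0.5 else "all_clear"
    return label, abs(score - 0.5) * 2  # crude confidence in [0, 1]

def control_cycle() -> str:
    frame = read_sensors()
    label, confidence = onboard_model(frame)
    if confidence >= CONFIDENCE_FLOOR:
        return f"act onboard: {label}"                      # no ground round-trip
    return f"defer to ground: {label} ({confidence:.2f})"   # human in the loop

print(control_cycle())
```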
These AI-powered satellites are emerging from prestigious national labs and startup garages alike, and they bring game-changing applications:
Automated space operations: Independent manoeuvring in space to perform tasks like docking, inspections, in-orbit refuelling, and debris removal
Self-diagnosis and repair: Monitoring their own health, identifying faults, and executing repairs without human intervention
Route planning: Optimising orbital trajectories to avoid hazards or to save fuel (see the sketch after this list)
Targeted geospatial intelligence: Detecting disasters and other events of interest in real-time from orbit and coordinating with other satellites intelligently to prioritise areas of interest
Combat support: Providing real-time threat identification and potentially enabling autonomous target tracking and engagement, directly from orbit
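The route-planning capability, for instance, rests on what engineers call conjunction screening: estimating how close two objects will pass and deciding whether to manoeuvre. The sketch below uses the textbook straight-line (linearised) relative-motion approximation; the window and threshold values are illustrative assumptions, and real systems add full orbit propagation and uncertainty estimates.

```python
# First-pass conjunction screening under a straight-line relative-motion
# approximation. Window and threshold values are illustrative assumptions.
import numpy as np

SCREENING_WINDOW_S = 600.0   # assumed look-ahead window (seconds)
MISS_THRESHOLD_M = 1_000.0   # assumed minimum safe miss distance (metres)

def closest_approach(r_rel: np.ndarray, v_rel: np.ndarray) -> tuple[float, float]:
    """Time (s) and distance (m) of closest approach for linear relative motion.

    r_rel: relative position (m); v_rel: relative velocity (m/s).
    """
    v2 = float(np.dot(v_rel, v_rel))
    if v2 == 0.0:                                   # no relative motion
        return 0.0, float(np.linalg.norm(r_rel))
    t_star = -float(np.dot(r_rel, v_rel)) / v2      # minimises |r + v*t|
    t_star = min(max(t_star, 0.0), SCREENING_WINDOW_S)  # clamp to the window
    return t_star, float(np.linalg.norm(r_rel + v_rel * t_star))

def needs_avoidance(r_rel, v_rel) -> bool:
    """Flag a conjunction that breaches the assumed miss-distance threshold."""
    _, miss = closest_approach(np.asarray(r_rel, float), np.asarray(v_rel, float))
    return miss < MISS_THRESHOLD_M

# Example: an object 5 km ahead, closing at 10 m/s -> manoeuvre needed
print(needs_avoidance([5_000.0, 0.0, 0.0], [-10.0, 0.0, 0.0]))  # True
```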
Smarter sats, smarter risks
This autonomy is not without consequence.
AI hallucinations are already an important source of misinformation on the ground, and they pose a similar threat in the space domain. A satellite that hallucinates, misclassifies a harmless commercial satellite as hostile, and responds with defensive action would take us into entirely uncharted territory. Misjudgments like this could escalate tensions between nations and even trigger a geopolitical crisis.
As satellites become more intelligent and autonomous, the stakes rise concomitantly. Intelligence brings not just power but also responsibility in technological design and legal, ethical, and geopolitical oversight.
In particular, AI’s ability to confer autonomy on satellites exposes gaps in the Outer Space Treaty (OST) of 1967 and the Convention on International Liability for Damage Caused by Space Objects of 1972 (the Liability Convention). The OST’s assignment of state responsibility for space activities (Article VI) and liability for damage (Article VII), like the Liability Convention’s liability provisions, assumes a human is in control. AI autonomy challenges that assumption.
For example, the OST’s requirement of “authorisation and continuing supervision” becomes ambiguous when a satellite makes its own decisions, and the Liability Convention’s definitions struggle to accommodate AI-caused incidents.
The core legal dilemma is fault attribution. When an AI’s decision causes a collision, who is liable: the launching state, the operator, the developer, or the AI itself? This human-AI gap, coupled with the transnational nature of space ventures, entangles accountability in jurisdictional and contractual complexity.
Further, AI’s dual-use capabilities (civilian as well as military) create misinterpretation risks in geopolitically sensitive contexts. Addressing these shortcomings requires a multifaceted approach: adapting existing legal principles while developing new governance mechanisms.
Legal and technical solutions
Space safety amid AI developments demands synchronised legal and technical evolution. A first step is categorising satellite autonomy levels, similar to autonomous vehicle regulations, with stricter rules for more autonomous systems. Enshrining meaningful human control in space law is crucial, as the 2024 IISL Working Group’s Final Report on Legal Aspects of AI in Space emphasised.
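What such a tiered scheme could look like is easy to sketch. The levels below loosely mirror the SAE levels used for self-driving cars, but they are purely illustrative assumptions, not an existing standard.

```python
# A hypothetical satellite-autonomy ladder, loosely mirroring the SAE levels
# used for self-driving cars. Levels and rules are illustrative assumptions.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    TELEOPERATED = 0  # every manoeuvre commanded from the ground
    ASSISTED = 1      # onboard alerts; humans approve every action
    CONDITIONAL = 2   # routine station-keeping autonomous; the rest escalated
    SUPERVISED = 3    # autonomous manoeuvres; ground retains a real-time veto
    FULL = 4          # acts without ground contact (e.g. deep-space missions)

def requires_human_approval(level: AutonomyLevel, action: str) -> bool:
    """Meaningful human control: the higher the stakes, the stricter the rule."""
    safety_critical = action in {"evasive_manoeuvre", "proximity_operation"}
    if level <= AutonomyLevel.ASSISTED:
        return True                    # everything needs approval
    if level == AutonomyLevel.CONDITIONAL:
        return safety_critical         # only risky actions escalate to humans
    return False                       # SUPERVISED/FULL operate on a veto model

print(requires_human_approval(AutonomyLevel.CONDITIONAL, "evasive_manoeuvre"))  # True
```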
Global certification frameworks, such as those under the United Nations Committee on the Peaceful Uses of Outer Space or the International Organization for Standardization (ISO), could test how satellite AI handles collisions or sensor faults; subject it to adversarial (but controlled) tests with unexpected data; and log key decisions, like manoeuvres, for later review.
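The logging requirement in particular is simple to prototype. Below is a minimal sketch of a tamper-evident decision log: each entry is hash-chained to the previous one, so an auditor reviewing an incident can detect gaps or after-the-fact edits. The field names are illustrative assumptions.

```python
# Minimal hash-chained decision log for post-incident review. Field names
# are illustrative assumptions, not a certification standard.
import hashlib
import json
import time

class DecisionLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, event: str, inputs: dict, action: str, confidence: float) -> str:
        entry = {
            "t": time.time(),          # onboard timestamp
            "event": event,            # e.g. "conjunction_alert"
            "inputs": inputs,          # sensor summary that drove the decision
            "action": action,          # e.g. "evasive_manoeuvre"
            "confidence": confidence,  # model confidence for the classification
            "prev": self._prev_hash,   # link to the previous entry
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = digest
        self.entries.append((digest, entry))
        return digest

log = DecisionLog()
log.record("conjunction_alert", {"miss_distance_m": 740}, "evasive_manoeuvre", 0.97)
```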
Since they manage high-risk, cross-border operations, the aviation and maritime sectors offer useful templates. The 1996 International Convention on Liability and Compensation for Damage in Connection with the Carriage of Hazardous and Noxious Substances (the HNS Convention) and the 1999 Convention for the Unification of Certain Rules for International Carriage by Air (the Montreal Convention) use strict liability and pooled insurance to simplify compensation. These models could inform space law, where a single AI malfunction may affect multiple actors.
Ethical, geopolitical imperatives
AI in space raises critical ethical and geopolitical concerns as well. The potential for AI-driven autonomous weapons is a topic of ongoing discussion within the Convention on Certain Conventional Weapons and its Group of Governmental Experts on Lethal Autonomous Weapons Systems. The concerns raised there, about the lack of human control and the risk of escalation, apply equally to the development of autonomous weapons in space. International safeguards to prevent an arms race in that domain are thus necessary.
Ethical data governance is also vital, given the vast amount of data AI satellites collect and the attendant privacy and misuse risks. And since autonomy can inadvertently escalate tensions, international cooperation is as crucial as legal and technical development.
Shared orbits, shared responsibilities
The rise of AI-powered satellites marks a defining moment in humanity’s use of outer space. But with thousands of autonomous systems projected to operate in low-Earth orbit by 2030, the probability of collisions, interference or geopolitical misinterpretation is rising rapidly. Autonomy offers speed and efficiency, but without legal clarity it also breeds instability.
History shows that every technological leap demands corresponding legal innovation. Railways required tort law. Automobiles brought about road safety legislation. The digital revolution led to cybersecurity and data protection regimes. Space autonomy now demands a regulatory architecture that balances innovation with precaution and sovereignty with shared stewardship.
We are entering an era in which the orbits above us are not just physical domains but algorithmically governed decision spaces. The central challenge is not merely our ability to build intelligent autonomous satellites but our capacity to develop equally intelligent laws and policies to govern their use. That demands urgent international collaboration to ensure legal frameworks keep pace with technological advances in space.
Shrawani Shagun is pursuing a PhD at National Law University, Delhi, focusing on environmental sustainability and space governance. Leo Pauly is founder and CEO, Plasma Orbital.
Published – May 27, 2025 05:30 am IST