Artificial Intelligence in Space


George Anthony Gal, Cristiana Santos, Lucien Rapp, Réka Markovich, Leendert van der

Legal Parallax, LLC, USA
University of Luxembourg
Lucien Rapp, Université Toulouse Capitole 1, SIRIUS Chair

1 Introduction

The governance of space activities is undergoing a progressive transformation associated with the emergence of satellite systems and space-based services utilizing AI, including machine learning (ML). This chapter identifies and examines some of the fundamental legal challenges linked to using AI in the space domain. The legal challenges emanating from the reliance on and use of AI in space necessitate ascertaining a linkage between space systems and services using AI and a system of governing rules and guiding legal principles.

The nature of the space and satellite industry presents a quintessential use case for AI. Virtually all space activities and ventures constitute fertile ground for employing AI: it is ripe for use in Earth orbit activities like active debris removal (“ADR”), near-Earth ventures such as abiotic resource extraction, and deep space exploration. Generally, AI applications in space occur along two principal vectors:

• autonomous robots (or space objects): whether in the form of an autonomous spacecraft, a satellite constellation, or a swarm exhibiting collective intelligence, autonomous or intelligent space objects can not only collect, analyze, and use data for informational and operational purposes, but can also go where no human has gone or could go, collecting probes and data. They assist space activities such as mining and use of abiotic resources, exploration, in-orbit servicing (IoS), active debris removal (ADR), and protection of space assets, which includes self-preservation from rogue and unknown natural space objects; and

• analyzing and, if necessary, acting upon space big data, related to (1) debris monitoring, (2) self-preservation based on potential threats by rogue and unknown natural objects in the space domain as well as perceived threats from other human-manufactured objects, (3) predictive analytics of very-high-resolution (VHR) satellite imagery, (4) real-time geospatial data analysis, and (5) analysis of data products derived from a convergence of a wide spectrum of sources (e.g. satellite, drone, IoT, and UAV imagery and location data). Space big data also enables space cloud computing services, in which data is stored on space-based assets.1 Indeed, the development of AI-based technologies combined with space data can enhance the production, storage, access and diffusion of data in outer space and on Earth.

Space is undergoing seismic shifts driven by New Space (promoting a Smart, Fast and Now Space),2 the GAFA web giants, newcomers, venture capital firms and start-ups. There is significant growth in the number of space activities, space objects, and space actors. However, new challenges emerge in the course of such active exploration and use whilst deploying AI in space. From a forward-looking perspective, harnessing AI and ML technologies in accessing and exploring outer space, as well as in space-enabled downstream commercial applications and services, will in all likelihood span a broad array of intended and unintended consequences. These consequences stem from the use and misuse of such technologies and cannot be downplayed or disregarded. Accordingly, the following risks merit attention:

1 To increase data capacity, reduce the cost of services, and provide real-time access to data storage.

(i) privacy issues associated with the use of these technologies, e.g. citizen tracking and surveillance, potential re-identification of individuals, function creep, fake imagery, biased automated decision-making, unjust discrimination based on nationality, gender, race or geographic localization, lack of transparency, etc.; and

(ii) liability issues emerging from potential damage created by autonomous spacecraft, e.g. collisions, or by hacking/malware that aims to weaponize AI, and the consequences for space data (security of sensitive data stored in outer space, malicious data capture).

These risks are more acute when acknowledging important facets of the space field. Firstly, space is a service-and-needs-oriented market driven mostly by demand and competitive industry logics, without a main centralized regulatory body. Secondly, the repercussions of space activities on Earth are increasing, since the benefits and solutions that space provides for the problems and needs of mankind are becoming ubiquitous3 (transport, smart city management, security, agriculture, climate change monitoring, etc.). Along these lines, the European Space Agency (ESA) estimates that for every Euro spent in the sector, there has been a six Euro benefit to society. This correlation reflects an ever more accentuated dependence of Earth on space-based services.

The breadth of these space-based services, many of them AI-enabled, requires consideration of a broad range of legal and regulatory issues that the space industry alone cannot answer. Moreover, the UN Space Treaties leave much uncertainty about permitted AI uses and activities in space. Clearly there is a need to develop or reinterpret ‘rules of the road’ to enable continued and legally compliant access to space for commercial and civilian actors.

The principal objectives of this Chapter include:

1. Identifying and discussing the potential risks and challenges associated with implementing AI and ML in space;

2. Analyzing to what extent the current corpus iuris spatialis (dating from the seventies) can still answer these risks and challenges, and which methodology to follow onwards; and

3. Discussing how AI-based legal tools can support space law.

Consistent with these objectives, Section 2 examines the specificities of AI in space, describes the features that distinguish it from AI on Earth, and demonstrates the usefulness and benefits of AI in space. Section 3 analyzes certain legal, ethical and governance risks associated with AI in space. Section 4 discusses limitations in the current space law framework relating to AI in space. Section 5 offers a methodological approach for determining the legal regime applicable to AI in space, while Section 6 addresses AI-based legal tools that enable knowledge representation and reasoning at the service of space law. Section 7 summarizes this analysis of AI in space.

2 Contextual dynamics of space and specificities of AI in Space

Space technology, data and services have become indispensable in the daily lives of Europeans, as well as of the majority of global inhabitants. Space-based services and activities also play an essential role in preserving the strategic and national security interests of many States. Europe seeks to cement its position as one of the major space-faring powers by implementing extensive freedom of action in the space domain

3 Hon. Philip E. Coyle, Senior Advisor, Center for Defense Information,


that encourages scientific and technical progress and supports the competitiveness and innovation capacity of space sector industries.

To boost the EU’s space leadership beyond 2020, a proposed Regulation1 establishes the space programme of the Union and the European Union Agency for the Space Programme. The proposed budget allocation of €16bn for the post-2020 EU space programme2 has been received by the European space industry as a clear and strong signal of political willingness to reinforce Europe’s leadership, competitiveness, sustainability and autonomy in space.3 AI is one area where Europe is exerting its leadership role in the space domain.

The use of AI in space capitalizes on the context of ‘New Space’, which is creating a more complex and challenging environment in the physical, technological and operational realms. The current contextual dynamics of space, and the specificities of space amenable to AI, are discussed below.

2.1 Contextual dynamics of space

Currently space is defined as Space 4.0, which refers to an era of pro-activeness and open-mindedness to both technological disruption and opportunity,4 where trends include space big data (e.g. data imagery) and the predictive and geospatial analytics applied thereto. In particular, this era is underpinned by AI-based technology, machine learning (ML), and the Internet of Things (IoT). IoT is forecast to be pervasive by 2025, with connected “things” driving a data explosion through sensors deployed by mega-constellations of smallsats (such as Hiber, Astrocast and Kepler).

The use of such technologies promotes a digital revolution, unlocking access to space-based benefits:5 the space industry is now moving toward leveraging full digitalization of products (high-performance spacecraft infrastructure; on-board computers, antennas, microwave products), new processes (increasing production speed and decreasing failure rates), and data uptake (the ability to assess the data right away, distribution, as well as data analytics, processing, visualization and value adding), enabling Earth Observation (EO) to become part of the larger data and digital economy.

These space-based benefits (products, processes, data uptake) increase the repercussions of space activities on Earth. A growing number of key economic sectors (in particular land and infrastructure monitoring, security, as well as the digital economy, transport, telecommunications, environment, agriculture and energy) use satellite navigation and EO systems.

Space democratization and privatization reflect the access to and participation in space by space-faring nations and non-governmental entities such as privately owned juridical entities. Among space actors, the private sector currently accounts for 70% of space activity6 (UNOOSA, 2018). This percentage will only increase given the emergence of new private actors who seek commercial opportunities in the exploration

1 In a vote on 17 April 2019, the European Parliament endorsed a provisional agreement reached by co-legislators on the EU Space Programme for 2021-2027, bringing all existing and new space activities under the umbrella of a single programme to foster a strong and innovative space industry in Europe.

2 These benefits represent a return on investment for Europe of between 10 and 20 times the costs of the programme.

3 This budget will be used, first, to maintain and upgrade the existing infrastructures of Galileo and Copernicus, so that our systems remain at the top. Second, we will adapt to new needs, such as fighting climate change, security or the internet of things.

4 ESA, What is Space 4.0?, 2016 (accessed 4 of May



6 “(…) Nowadays, private sector augments all segments of the space domain, from ground equipment and commercial space


and exploitation of space and its resources thanks to frontier technologies, such as AI and the data revolution.7

New actors, together with emerging technologies such as AI, develop new global business models driven by demand, such as satellite constellations, tourism, asteroid and lunar mining, in-situ resource utilization8 (ISRU), 5G, in-orbit servicing (IoS), 3D printing of satellite parts (e.g. solar panels), and commercial space stations. These new business segments9 are leveraging the space economy, which is expanding enormously, with predictions that it will generate revenues of US$1.1-2.7 trillion or more by 2040.10

New high-end technologies embedding small-satellite design characterize the current landscape of the space industry. Smaller, lightweight satellites based on affordable off-the-shelf hardware and less expensive spacecraft (small, nano- and pico-satellites) can be replaced more easily, thereby refreshing technology rapidly;11 combined with the ability to launch thousands of these satellites into mega-constellations, this opens up possibilities for more missions and applications using space infrastructure.

2.2 Specificities of space amenable to AI

It is important to consider the specificities of AI in outer space and why it is distinct from its terrestrial use. Some of these specificities and distinctions are as follows:

i. Space conditions are harsh and amenable mainly to AI machines. Space is a remote, hostile and hazardous environment12 for human life, and in some cases impossible for humans to explore and survive in; this renders space activities dependent on technology and processes related to AI.13 AI-based technologies are fit for operational decision-making, being robust, resilient, adaptable and responsive to changing threats.

ii. Upstream and downstream impact of AI in space. In a fast-approaching future, AI will impact all sectors of the space industry, from launch to constellation control and satellite performance analysis,14 and from AI logic directly on board the payload for deep space applications to the downstream sector of telecommunications and Earth observation in commercial applications, e.g. for image classification and predictive analysis of phenomena.

iii. Autonomy of intelligent space objects. Using AI, a spacecraft may be able15 to recognize a threat, capture data, learn from and counteract it or take evasive action, and even propagate its newly acquired knowledge to other satellites. For example, “when a Mars rover conducting exploration of Mars needs to contact Earth, it takes up to 24 minutes to pass the signal between the two planets in one direction. It is a rather long time for making decisions, which is why engineers are increasingly providing space robots with

7 European Investment Bank, The future of the European space sector: How to leverage Europe’s technological leadership and boost investments for space ventures,

8 Lucas-Rhimbassen M., Santos C., Long G., Rapp L., Conceptual model for a profitable return on investment from space debris as abiotic space resource, 8th European Conference for Aeronautics and Space Sciences (EUCASS), 2019.

9 And others, like scalability and agility, media/advertising, B2C, vertical integration, position in value chain.

10 (
11 Livemint, Mini satellites, maximum possibilities, 2018 (accessed 4 May 2019).

12 E.g. difficult accessibility, the complexity of extra-atmospheric missions, the extreme physical and climatic conditions, new gravitational forces, different temperature ranges and unknown collisions with dust or an asteroid.

13 Larysa Soroka, Kseniia Kurkova, Artificial Intelligence and Space Technologies: Legal, Ethical and Technological Issues, Advanced Space Law, Volume 3, 2019: 131-139.




the ability to make decisions by themselves.”16 AI provides space objects with the ability to collect and analyze data and to decide when and what information to send back to Earth without any human involvement, and to predict, self-diagnose problems, and fix themselves while continuing to perform.17 When collisions occur between intelligent space objects and debris, legal issues like liability are triggered, some of which are dealt with in Sections 3.1 and 4.
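The signal-delay figure in the rover example can be checked with simple arithmetic: one-way light time equals distance divided by the speed of light. The sketch below is purely illustrative; the Earth-Mars distances are approximate astronomical values, not figures taken from this chapter.

```python
# One-way signal delay: delay = distance / speed_of_light.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the one-way light time in minutes for a given distance in km."""
    return distance_km / C_KM_PER_S / 60.0

# Approximate Earth-Mars distances (they vary with orbital positions).
closest_km = 54.6e6
farthest_km = 401e6

print(f"closest:  {one_way_delay_minutes(closest_km):.1f} min")   # roughly 3 min
print(f"farthest: {one_way_delay_minutes(farthest_km):.1f} min")  # roughly 22 min
```

The worst case of roughly 22 minutes is consistent with the “up to 24 minutes” quoted above, and illustrates why on-board decision-making matters for deep space operations.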

iv. Asset Protection. AI assists with the protection of space assets by allowing the development of automatic collision avoidance systems that will assess the risk and likelihood of in-space collisions, improve the decision-making process on whether or not an orbital manoeuvre is needed, and transmit warnings to other potentially at-risk space objects.18
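As a purely hypothetical sketch (the thresholds, names and structure below are assumptions for illustration, not any operational system), the decision chain just described, assessing collision probability, deciding whether a manoeuvre is needed, and transmitting warnings, might look like:

```python
# Hypothetical collision-avoidance decision rule; thresholds are assumed values.
from dataclasses import dataclass

MANOEUVRE_THRESHOLD = 1e-4   # act above this collision probability (assumption)
WARNING_THRESHOLD = 1e-6     # notify other operators above this (assumption)

@dataclass
class Conjunction:
    other_object: str         # identifier of the other object (hypothetical)
    probability: float        # estimated probability of collision
    miss_distance_km: float   # predicted closest approach

def decide(conjunction: Conjunction) -> str:
    """Map an assessed conjunction risk to an action."""
    if conjunction.probability >= MANOEUVRE_THRESHOLD:
        return "plan avoidance manoeuvre and notify " + conjunction.other_object
    if conjunction.probability >= WARNING_THRESHOLD:
        return "monitor and transmit warning to " + conjunction.other_object
    return "no action"

print(decide(Conjunction("DEBRIS-4421", 3e-4, 0.2)))
```

In practice such thresholds are mission-specific, and the risk assessment itself (conjunction screening, orbit and covariance propagation) is far more involved than this toy rule.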

v. Big data from Space. Big data from space19 refers to the massive spatio-temporal Earth and space observation data collected by a variety of sensors, ranging from ground-based to space-borne, and to its synergy with data coming from other sources and communities. Spatial big data, when combined with big data analytics, delivers “value” from big datasets whose volume, velocity, variety, veracity, and value are beyond the ability of traditional tools to capture, store, manage and analyse. Geospatial intelligence is one of many ways to use artificial intelligence in outer space. It refers to employing AI for extracting and analyzing images and other geospatial information relating to terrestrial, aerial, and/or spatial objects and events. It also allows for real-time interpretation of what is happening in a specific geolocation in connection with events such as disasters, refugee migration and safety, and agricultural production. These aspects are analysed in Section 3.2 of this Chapter.
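As one concrete, hedged illustration of extracting value from satellite imagery for agricultural monitoring (standard Earth-observation practice, not an example drawn from this chapter), the Normalized Difference Vegetation Index computes NDVI = (NIR - red) / (NIR + red) for each pixel:

```python
# Illustrative NDVI computation over a tiny toy "image" (pure Python, no EO libraries).
# Values near +1 indicate dense vegetation; values near 0 indicate bare soil.

def ndvi(nir_band: list[list[float]], red_band: list[list[float]]) -> list[list[float]]:
    """Compute per-pixel NDVI for two equally sized reflectance bands."""
    return [
        [(nir - red) / (nir + red) if (nir + red) else 0.0
         for nir, red in zip(nir_row, red_row)]
        for nir_row, red_row in zip(nir_band, red_band)
    ]

# Toy 2x2 scene: top row vegetated (high near-infrared), bottom row bare soil.
nir = [[0.8, 0.7], [0.3, 0.2]]
red = [[0.1, 0.1], [0.25, 0.2]]
for row in ndvi(nir, red):
    print([round(v, 2) for v in row])
```

Real pipelines operate on full raster bands with libraries such as GDAL or rasterio; the pure-Python version above only shows the per-pixel formula.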

3 Risks of AI in Space

AI in space is igniting a gradual shift from “computer-assisted human choice and human-ratified computer choice”20 to non-human analysis, decision-making and implementation of action. The emerging deployment and use of intelligent space objects21 present novel challenges to the current space law regime, especially when, not if, the use of such objects causes terrestrial and/or extraterrestrial injury by an AI system or service, such as a violation of privacy rights, a violation of data protection requirements, or injury resulting from a collision involving a space object.22

16 Downer, Bethany. The Role of Artificial Intelligence in Space Exploration, 2018. https://www.
17
18

19 P. Soille, S. Loekken, and S. Albani (Eds.), Proc. of the 2019 conference on Big Data from Space (BiDS’2019), EUR 29660 EN, Publications Office of the European Union, Luxembourg, 2019,

20 Mariano-Florentino Cuellar, A Simpler World? On Pruning Risks and Harvesting Fruits in an Orchard of Whispering Algorithms, 51 U.C. Davis Law Review, 27, 39 (Nov. 2017).

21 A space object is limited to an object, including its component parts, which was “launched” into space. The issue can become a bit murkier if intelligent space objects can be manufactured and deployed in situ in outer space.

22 It would be naive to think that intelligent space objects will not cause any injury. The experience associated with implementing


The space law treaty regime consists of the foundational Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies (“Outer Space Treaty”)23 and its progeny treaties. The OST embeds the cornerstone principles for the current international space law jurisprudence.24 Its principles are expanded by the progeny treaties of the Agreement on the Rescue of Astronauts, the Return of Astronauts and the Return of Objects Launched into Outer Space (“Rescue Agreement”),25 the Convention on International Liability for Damage Caused by Space Objects (“Liability Convention”),26 the Convention on Registration of Objects Launched into Outer Space (“Registration Convention”),27 and the Agreement Governing the Activities of States on the Moon and Other Celestial Bodies (“Moon Treaty”).28 Liability issues associated with AI risks concern an analysis of the Outer Space Treaty and the Liability Convention.

3.1 Liability of Intelligent Space Objects

Liability under the space law treaty regime is rooted in Outer Space Treaty Article VII, which is the genesis of the Liability Convention. Article VII imposes international liability only on a launching State.29 The Liability Convention establishes a restricted framework for assessing international liability which only applies to launching States.30 The determination of liability and allocation of fault is based on where the damage occurs. Liability Convention Article II imposes absolute or strict liability for damage a space object causes on Earth or to an aircraft in flight. On the other hand, if a space object causes damage in outer space or on a celestial body, then liability is based on the degree of fault as allocated by Article III. This section applies these rules on liability in the context of intelligent space objects.

3.1.1 Some Notes on Liability Associated with Intelligent Space Objects

The “damage” covered by the Liability Convention is neither comprehensive nor unambiguous. Article 1(a) defines “damage” to mean “loss of life, personal injury or other impairment of health; or loss of or damage to property of States or of persons, natural or juridical, or property of international intergovernmental organizations.” This definition creates uncertainty regarding the parameters or scope of damage subject to the Convention. It is unclear whether the damage is limited to physical damage caused by a space object31 and whether it extends to non-kinetic harm, indirect damage or purely economic injury.32 Similarly, it is unsettled how far the phrase “other impairment of health” reaches in connection with a person. For instance, is the phrase limited to physical injury, or does it extend to emotional and/or mental injury?

23 entered into Force Oct. 10, 1967, 18 UST 2410; TIAS 6347; 610 UNTS 205; 6 ILM 386 (1967).

24 Frans von der Dunk, Sovereignty Versus Space - Public Law and Private Launch in the Asian Context, 5 Singapore Journal of International and Comparative Law 22, 27 (2001).

25 entered into Force Dec. 3, 1968, 19 UST 7570; TIAS 6599; 672 UNTS 119; 7 ILM 151 (1968)
26 entered into Force Sept. 1, 1972, 24 UST 2389; TIAS 7762; 961 UNTS 187; 10 ILM 965 (1971)
27 entered into Force Sept. 15, 1976, 28 UST 695; TIAS 8480; 1023 UNTS 15; 14 ILM 43 (1975)

28 entered into Force July 1, 1984, 1363 UNTS 3; 18 ILM 1434 (1979). The Moon Treaty is viewed differently than the other space treaties because it has not received the international ratification of the other space law treaties, and the major space-faring nations such as the United States, Russia and China have neither signed nor ratified the treaty.


29 Liability Convention Article 1(c) defines the term “launching State” as a State which launches or procures the launch of the space object and the State from whose territory or facility the space object is launched. A non-governmental space actor does not have international liability under the Liability Convention for damage caused by a space object, regardless of its culpability. This means that a State space actor can only have international liability if it comes within the definition of a “launching State.”
30 George Anthony Long, Artificial Intelligence and State Responsibility Under the Outer Space Treaty at 5, 69th IAC, Bremen, Germany (Oct 5, 2018), published in 2018 Proceedings Of The International Institute Of Space Law (Eleven Int’l Publishing 2018).

31 Major Elizabeth Seebode Waldrop, Integration of Military and Civilian Space Assets: Legal and National Security Implications, 55 A.F.L. Rev. 157, 214 (2004).

32 George Anthony Long, Small Satellites and State Responsibility Associated With Space Traffic Situational Awareness at 3, 1st


Resolution of the reach of a damage claim, like all legal issues associated with the Liability Convention, depends upon whether the definition is given a restrictive or expansive interpretation. Intelligent space objects, i.e., autonomous space objects utilizing AI, present challenges for the strict and fault liability scheme imposed on launching States.

Liability Convention Article III reads as follows:

[i]n the event of damage being caused elsewhere than on the surface of the Earth to a space object of one launching State or to persons or property on board such a space object by a space object of another launching State, the latter shall be liable only if the damage is due to its fault or the fault of persons for whom it is responsible. (Emphasis added)

Intelligent space objects disrupt Article III’s fault-based liability scheme since decisions, acts or omissions of an intelligent space object may not be construed to be conduct of a person and may not always be attributable to a launching State.

3.1.2 Fault Liability is Predicated on Human Fault

Generally, we think of a person as a human being.33 In the legal arena, the term “person” generally refers to an entity which is subject to legal rights and duties.34 The law considers artificial entities like corporations, partnerships, joint ventures, and trusts to be a “person” as they are subject to legal rights and duties.35 Additionally, in certain instances, the law recognizes and imposes legal rights and duties on certain inanimate objects like ships, land, and goods, which results in such inanimate objects being subject to judicial jurisdiction as well as to judgments rendered against them.36 However, the legal rights and duties imposed on artificial entities and inanimate objects flow from actions or conduct engaged in by human beings.

This is not necessarily the case for actions or conduct taken based on machine intelligence. Although a machine can learn independently from human input and make decisions based on its learning and available information, that ability does not necessarily equate with natural or legal personhood. As noted, decisions and conduct of legal persons are ultimately decisions made by a human being. This means the decision is not based solely on intellect or data but is also the product of human factors such as conscience, emotion, and discretion.37 Thus, the concept of legal personhood is ultimately premised on humanity. Decisions and conduct based on AI divorced from human oversight or control arguably lack consideration of human factors such as conscience, emotion and discretion.38 Even more so, there is currently no law which grants “personhood” to an intelligent space object. The lack of direct or indirect human considerations in the decision making of an intelligent machine, together with such an object not having any legal rights or duties under existing law, strongly suggests that decisions by an intelligent space object are not made by a natural or legal person.39

Since fault liability under Liability Convention Article III is premised on the fault of a State or the faults of persons, a decision by an intelligent space object will, in all likelihood, not be the “fault of persons.”

33 Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C.L. Rev. 1231, 1238 (1992).
34 Id., at 1238-1239.

35 Id.

36 Id., at 1239.
37 Id., at 1262-1287.
38 Id.


Accordingly, assessing fault liability under Article III for a decision made by an intelligent space object may very well depend upon whether such a decision can be attributed to the launching State.

3.1.3 Fault Liability in Absence of Human Oversight in the Decision Making Process

Generally, liability for damage or injury attributable to States is traceable to human acts or omissions. This basis for imposing liability appears to be inapplicable when damage or injury in outer space is caused by an analysis, decision, and implementation of a course of action made by a machine without human approval.40 Liability premised on human acts or omissions fails when no particular human possessed the ability to prevent the injury, short of making the decision to utilize AI in a space object.41 To be sure, it is substantively difficult to draw a line between reliance on AI to supplant the judgment of a human decision maker and the propriety of allowing a machine, or nonhuman, to decide and implement a course of action.42 To this extent, it seems that launching State fault-based liability should not be premised solely on the decision to launch an intelligent space object, as such a sweeping basis for liability would effectively retard the development and deployment of intelligent space objects.43 Thus, the appropriate analysis appears to be what conduct is necessary to attribute fault liability to a State for damage caused by an intelligent space object when human oversight is not involved in the occurrence causing the damage.

Resolving this dilemma presents novel and complex issues associated with the standard of care, foreseeability and proximate cause which are crucial elements for establishing fault (under Liability Convention Article III).44 This matter is further complicated by the distinct possibility that it may not be possible to ascertain how an intelligent space object made a particular decision.45

Nevertheless, untangling these nuanced legal obstacles may not be necessary to assess fault liability. Outer Space Treaty Article VI requires a State to assure that the space activities of its governmental and non-governmental entities comply with the Outer Space Treaty. It not only makes a State internationally responsible for its national activities in outer space, but it also imposes a dual mandate of “authorization and continuing supervision” that is not limited to the launching State or the space actor’s home State.46 Outer Space Treaty Article VI does not expressly burden the launching State with the authorization and supervision obligation. Instead, it vests the authorization and continuous supervision on the “appropriate State.” Since neither Outer Space Treaty Article VI nor any other provision of the space law treaty regime define the term “appropriate State”, or sets forth any criteria for ascertaining the appropriate State(s), there are not any agreed upon legal standards for determining what constitutes the “appropriate State”. Nevertheless, it has been articulated that a launching State is generally always an appropriate party for purposes of Outer Space Treaty Article VI.47 This is a reasonable and accurate extrapolation since the liability scheme is predicated on launching State status.

40 Curtis E.A. Karnow, Liability For Distributed Artificial Intelligences, 11 Berkeley Technology Law Journal 147, 189-190 (1996).
41 Id.

42 Id.

43 See Long, supra Note 39, at 7. See also Weston Kowert, The Foreseeability of Human-artificial Intelligence Interactions, 96 Texas Law Review 181, 183 (2017).

44 Long, supra Note 39, at 8. While the decision to launch an intelligent space object may not be the basis for fault liability, as discussed infra, how the decision was made may serve as a vehicle for assessing fault liability.

45 Id.
46 See Id.

47 See Generally Bin Cheng, Article VI Of The 1967 Space Treaty Revisited: “International Responsibility,” “National Activities,”


Since fault liability is generally predicated on the breach of a standard of care,48 the dual responsibility of “authorization and continuing supervision by the appropriate State party” arguably establishes a standard of care with which a launching State must comply,49 especially in connection with an intelligent space object. This essentially means that a launching State bears the duty to ensure that appropriate authorization and supervision will be exercised in connection with an intelligent space object that it launches for a non-governmental entity, regardless of whether the object is owned or operated by a national. The standard of care analysis, therefore, shifts from the specific occurrence which caused the damage to examining whether the launching State exercised sufficient authorization and supervision over the activities engaged in by the intelligent space object.

By analogizing to the “due diligence” standard under international law,50 determining whether a launching State exercised sufficient authorization and supervision involves a flexible and fluid standard. “Due diligence” is not an obligation to achieve a particular result; rather it is an obligation of conduct which requires a State to engage in sufficient efforts to prevent harm or injury to another State or its nationals51 or the global commons.52 The breach of this duty is not limited to State action, but it also extends to the conduct of a State’s nationals.53 While there is “an overall minimal level of vigilance” associated with due diligence, “a higher degree of care may be more realistically expected” from States possessing the ability and resources to provide it.54 In any event it would appear that a launching State’s standard of care entails assuring that there is some State authorization and supervision over the space activities engaged in by the intelligent space object. However, based on the flexible standard of care, it seems that the function of the intelligent space object determines whether human input or oversight is required and if so, what is the appropriate degree and extent of the human oversight.

This flexibility is consistent with the approach recognized by the European Commission (EC) in connection with artificial intelligence in general.55 The EC has adopted the policy that human oversight is a necessary component in the use of AI.56 The policy is premised on the reasoning that human oversight ensures that an “AI system does not undermine human autonomy or cause other adverse effects.”57 Human oversight requires the “appropriate involvement by human beings,” which may vary depending upon the “intended use of the AI system” and the “effect,” if any, it may have on people and legal entities.58 The EC then enumerates certain non-exhaustive manifestations of human oversight, which include 1) human review and validation of an AI decision either before or immediately after implementation of the decision, 2) monitoring of the AI system “while in operation and the ability to intervene in real time and deactivate” the AI system, and 3) imposing operational restraints to ensure that certain decisions are not

48 Joel A. Dennerley, supra Note 3, 29 Eur. J. Intl. L. at 295.

49 See generally Bin Cheng, supra Note 67.

50 Dennerley, supra Note 3, at 293-295.

51 ILA Study Group, supra Note 71, at 29, citing Responsibilities and Obligations of States Sponsoring Persons and Entities with Respect to Activities in the Area, Seabed Mining Advisory Opinion at ¶ 117 (Seabed Disputes Chamber of the International Tribunal for the Law of the Sea, Case No. 17, 1 February 2011); Jan E. Messerschmidt, Hackback: Permitting Retaliatory Hacking by Non-State Actors As Proportionate Countermeasures to Transboundary Cyberharm, 52 Colum. J. Transnat'l L. 275, 302-305 (2013). See United States Diplomatic and Consular Staff in Tehran (U.S. v. Iran), 1980 I.C.J. 3, 61-67 (May 24).

52 See Mark Allan Gray, The International Crime of Ecocide, 26 Cal. W. Int'l L.J. 215, 238 (1996), at 242; Robert Rosenstock and Margo Kaplan, The Fifty-Third Session of the International Law Commission, 96 Am. J. Int'l L. 412, 416 (2002).

53 Mark Allan Gray, supra Note 73, 26 Cal. W. Int'l L.J. at 243.

54 Id.; see ILA Study Group, supra Note 71, at 4 and 31.

55 European Commission, White Paper on Artificial Intelligence - A European approach to excellence and trust, COM(2020) 65 final (Brussels, 19.2.2020).


made by the AI system.59 This EC policy presents a flexible framework that can be utilized to determine whether a launching State has met its standard of care in relation to a non-governmental intelligent space object that causes damage in outer space.

The flexible due diligence standard can also be used by the launching State to negate or mitigate its liability for damage caused by an intelligent space object. The flexible standard will allow a launching State to argue that the home State of the non-governmental space actor bears a greater degree of oversight responsibility than the launching State. Accordingly, it should be reasonable and sufficient for a launching State to rely on assurances that the non-national’s home State exercises adequate authorization and oversight procedures for its nationals’ use of intelligent space objects. This shifts the supervisory obligation from the launching State to the home State of the non-governmental space actor. The home State’s failure to properly exercise its standard of care may, depending upon the circumstances, mitigate or absolve the launching State of fault liability under Liability Convention Article III. This shift, however, is not automatic, as the due diligence standard makes it dependent on the home State’s technological prowess in the area of AI or its financial ability to contract for such expertise.

3.1.4 Intelligent Space Objects and Absolute Liability

Liability Convention Article II imposes absolute liability on a launching State if its space object causes damage on the Earth’s surface or to aircraft in flight. Article VI(1), however, allows exoneration from absolute liability if the damage results “either wholly or partially from gross negligence or from an act or omission done with intent to cause damage on the part of a claimant State or of natural or juridical persons it represents.” This defense, however, may not be available if the damage results “wholly or partially” from an act or omission of an intelligent space object deployed or controlled by the claimant State or a natural or juridical person the claimant State represents.

“Gross negligence” and the mental element of an act or omission are products of human thought, which is absent from the machine decision-making process.60 Even more so, Liability Convention Article VI may also defeat exoneration from absolute liability if the claimant State is able to show that the launching State’s deployment of the intelligent space object breached its State responsibility under international law, the United Nations Charter or the Outer Space Treaty. This defense to the negation of absolute liability thrusts consideration of Outer Space Treaty Article VI into the equation.

3.1.5 Intelligent Space Objects and Liability Under Outer Space Treaty Article VII

Outer Space Treaty Article VII imposes international liability on the launching State without qualification or exception. Moreover, Article VII does not predicate liability on human involvement in the damage-causing event or on fault being otherwise attributable to the launching State. This unqualified launching State liability may present an alternative recourse for pursuing compensation for damage in space caused by an intelligent space object. The pursuit of monetary compensation under Outer Space Treaty Article VII may very well arise when fault cannot be assessed under Liability Convention Article III because the decision causing the damage was not made by a person and the decision is not otherwise attributable to a launching State. The issue can also surface if a claimant State seeks financial compensation for an injury or harm caused by an intelligent space object which does not come within the meaning of “damage” as defined by Liability Convention Article I(a). For instance, if an intelligent space object is used to interfere with, jam or hijack a commercial satellite transmission, then the financial injury suffered as a consequence of

59 Id.


such conduct may not be compensable under the Liability Convention, given its definition of “damage.” Outer Space Treaty Article VII, however, may provide a basis for recovery under such a circumstance. Of course, a party seeking such a remedy under Outer Space Treaty Article VII will, in all likelihood, encounter the objection that, since the Liability Convention is the progeny of Outer Space Treaty Article VII, a State is estopped from pursuing a remedy directly under Outer Space Treaty Article VII. Such an objection can be premised on the public international law principle that “when a general and a more specific provision both apply at the same time, preference must be given to the special provision.”61 It is unclear whether this principle applies to the relationship between Outer Space Treaty Article VII and the Liability Convention.

Although the Liability Convention expressly proclaims that one of its principal purposes is to establish rules and procedures “concerning liability for damage caused by space objects,”62 the treaty does not assert that its rules and procedures are exclusive when liability is assessed through other means. Most important, though, is that neither the Outer Space Treaty nor the Liability Convention precludes a recovery of damage under Outer Space Treaty Article VII. This point is significant given the general principle of international law that what is not prohibited is permitted.63 In other words, “‘in relation to a specific act, it is not necessary to demonstrate a permissive rule so long as there is no prohibition.’”64 The determination of whether Liability Convention Article III estops a State from pursuing recourse under Outer Space Treaty Article VII for an injury caused by a space activity is, like most current space law issues, largely an academic exercise, inasmuch as there is scant guidance from national or international courts, tribunals, or agencies on interpreting the provisions of the space law treaty regime. Nevertheless, resolution of the issue presents a binary choice: either the Liability Convention precludes resort to Outer Space Treaty Article VII or it does not. The resolution will have a significant impact on whether the Liability Convention needs to be amended or supplemented to accommodate the deployment and use of intelligent space objects. To be sure, if relief can be obtained under Outer Space Treaty Article VII when a remedy is not available under the Liability Convention, then Outer Space Treaty Article VII should provide sufficient flexibility to address liability issues associated with intelligent space objects during this period of AI infancy.

3.2 Data protection and ethical challenges related to AI in Space

Every year, commercially available satellite images are becoming sharper and are taken more frequently. The leading-edge imagery resolution commercially available limits each pixel in a captured image to approximately 31 cm.65 There is increasing demand from private commercial entities pushing66 for lowering the resolution restriction threshold to 10 cm.67-68 The significance of using AI in connection with satellite

61 Alessandra Perrazzelli and Paolo R. Vergano, Terminal Dues Under the UPU Convention and the GATS: An Overview of the Rules and of Their Compatibility, 23 Fordham Intl. L.J. 736, 747 (2000).

62 Liability Convention Preamble, 4th Paragraph. The other purpose is to ensure prompt payment “of a full and equitable measure of compensation to victims” in accordance with the Convention.

63 S.S. Lotus, P.C.I.J. Ser. A, No. 10 at 18 (1927).

64 Roland Tricot and Barrie Sander, “Recent Developments: The Broader Consequences of the International Court of Justice’s Advisory Opinion on the Unilateral Declaration of Independence in Respect of Kosovo,” 49 Columbia Journal of Transnational Law 321, 327 (2011), quoting Accordance with International Law of the Unilateral Declaration of Independence in Respect of Kosovo (Kosovo Advisory Opinion), Advisory Opinion, 2010 I.C.J. at 2 (July 22) (declaration of Judge Simma).


imaging is best illustrated by the United States, in January 2020, imposing immediate interim export controls regulating the dissemination of AI software that possesses the ability to automatically scan aerial images to recognize anomalies or identify objects of interest, such as vehicles, houses, and other structures.69

Speculation revolves around satellite imagery discerning car plates, individuals, and “manholes and mailboxes.”70 In fact, in 2013, police in Oregon used a Google Earth satellite image depicting marijuana growing illegally on a man’s property.71 In 2018, Brazilian police used real-time satellite imagery72 to detect a spot where trees had been ripped out of the ground to illegally produce charcoal, and arrested eight people in connection with the scheme. In China, human rights activists used satellite imagery73 to show that many of the Uighur reeducation camps in Xinjiang province are surrounded by watchtowers and razor wire. A recent case deployed ML to create a system that could autonomously review video footage and detect patterns of activity; in one of the test cases, the system monitored video of a parking lot and identified moving cars and pedestrians. The system established a baseline of normal activity from which anomalous and suspicious actions could be detected.74
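The baseline-and-anomaly approach described in that last example can be sketched in a much-simplified form. The following is a hypothetical illustration only – frame differencing against a slowly adapting background model – and not the actual system used in the reported case; all class names, parameters and thresholds are invented:

```python
import numpy as np

# Hypothetical sketch of baseline-then-anomaly monitoring of video frames.
# A real system would use trained ML models; this only shows the principle.

class BaselineMonitor:
    def __init__(self, alpha=0.05, threshold=40.0):
        self.alpha = alpha          # how quickly the baseline adapts
        self.threshold = threshold  # pixel deviation treated as "activity"
        self.background = None      # running estimate of the normal scene

    def observe(self, frame):
        """Update the baseline and return the fraction of anomalous pixels."""
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()
            return 0.0
        deviation = np.abs(frame - self.background)
        # Slowly fold the new frame into the learned "normal" scene.
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return float((deviation > self.threshold).mean())

monitor = BaselineMonitor()
static = np.full((64, 64), 100.0)   # an empty parking lot
monitor.observe(static)             # first frame establishes the baseline
quiet = monitor.observe(static)     # nothing moving: no anomalous pixels
moving = static.copy()
moving[10:20, 10:20] = 220.0        # a "car" enters the scene
busy = monitor.observe(moving)      # deviation flagged in the changed region
print(quiet, busy)
```

Anything that raises the anomalous-pixel fraction above a baseline rate would then be surfaced to a human analyst, which is the "detect suspicious actions" step described in the text.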

Even if current image and video resolution does not suffice to distinguish75 individuals or their features, it is no longer far from doing so. The broad definition of personal data in the General Data Protection Regulation76 (GDPR) means that all EO data relating to an identified or identifiable natural person (such as location data) can be considered personal data.77 The attribute “identified” refers to a known person, while “identifiable” refers to a person who is not yet identified but whose identification is still possible. An individual is directly identified or identifiable by reference to “direct or unique identifiers.” These direct and unique identifiers cover data types which can be easily referenced and associated with an individual, including descriptors such as a name, an identification number or username, location data, card or phone numbers, online identifiers, and the like, as described in the GDPR. An individual is “indirectly identifiable” through combinations of indirect (and therefore non-unique) identifiers that allow the individual to be singled out; these are less obvious information types which can be related, or “linked,” to an individual, such as video footage, public keys, signatures, IP addresses, device identifiers, metadata, and the like.
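The “singling out by combination” mechanism can be made concrete with a toy sketch: no single attribute below identifies anyone, but their combination isolates one record. The data and attribute names are invented purely for illustration:

```python
# Toy illustration of indirect identifiability: each attribute alone is
# shared by several people, but a combination singles one person out.

records = [
    {"postcode": "L-4365", "car": "red SUV",    "boat": False},
    {"postcode": "L-4365", "car": "blue sedan", "boat": False},
    {"postcode": "L-1855", "car": "red SUV",    "boat": True},
    {"postcode": "L-4365", "car": "red SUV",    "boat": True},
]

def matching(**attrs):
    """Records consistent with a set of observed quasi-identifiers."""
    return [r for r in records if all(r[k] == v for k, v in attrs.items())]

print(len(matching(postcode="L-4365")))                 # → 3 candidates
print(len(matching(postcode="L-4365", car="red SUV",
                   boat=True)))                         # → 1: singled out
```

This is why combining VHR imagery (which can reveal homes, cars and boats) with other spatial and non-spatial datasets can turn apparently anonymous data into personal data under the GDPR.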

69 85 Fed. Reg. 459 (January 6, 2020).

70 See US lifts restrictions on more detailed satellite images, BBC.

74 See

75 Cristiana Santos, Delphine Miramont, Lucien Rapp, “High Resolution Satellite Imagery and Potential Identification of Individuals”, in P. Soille, S. Loekken, and S. Albani (eds.), Proc. of the 2019 Conference on Big Data from Space (BiDS’2019), EUR 29660 EN, Publications Office of the European Union, Luxembourg, 2019, pp. 237-240.

76 Regulation (EU) 2016/679 (General Data Protection Regulation) on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, OJ L 119, 04.05.2016.

77 The attribute “identified” refers to a known person, and “identifiable” refers to a person who is not identified yet, but identification is still possible. An individual is directly identified or identifiable by reference to “direct or unique identifiers,” which cover data types that can be easily referenced and associated with an individual, including descriptors such as a name, an identification number or username, location data, card or phone numbers, online identifiers, etc. (art. 4(1)).



Arguably, a person – as a whole – can be depicted in these pictures, as the resolution might allow for the identification of a person, considering, for example, the person’s height, body type and clothing. Likewise, objects and places (location data) linked to a person, such as the person’s home, cars, boats and others,79 could also enable identification of a person via very high-resolution (VHR) images. The lawfulness of such processing must then be assured.

As massive constellations of small satellites80 become a staple in LEO, a larger influx of data, observation capabilities and high-quality imagery from EO satellites81 is expected to become more widely available on a timely basis. Massive EO constellations may provide more frequent image capture and updates (capturing a single point several times a day) at a much lower cost. Users can plan both the target and the frequency, allowing for more specific analysis of a particular tracking task. Ordinarily, these collected terabytes of data must be downlinked to a ground station before being processed and reviewed. Now, however, enabled satellites can carry mission applications on board, including AI that conducts that processing on the satellite.82 This means that only the most relevant data would be transmitted, not only saving on downlink costs but also allowing ground analysts to focus on the data that matters most. For example, one company has developed algorithms relying on AI to analyze stacks of images and automatically detect change, allowing users to track changes to individual properties in any country: “This machine learning tool, it's constantly looking at the imagery and classifying things it finds within them: that's a road, that's a building, that's a flood, that's an area that's been burned.”83 Other analytics companies feed visual data into algorithms designed to derive added value from mass images.
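The onboard “transmit only what changed” idea can be sketched as follows. Real missions run trained classifiers; this simplified tile-differencing example (the function name, tile size and threshold are invented) merely shows how filtering at the source reduces what must be downlinked:

```python
import numpy as np

# Hypothetical sketch of onboard change filtering: compare today's scene
# with yesterday's, and queue only the changed tiles for downlink.

def changed_tiles(previous, current, tile=32, min_change=15.0):
    """Return (row, col) indices of tiles whose mean pixel change is large.

    Only these tiles would be transmitted, cutting downlink bandwidth."""
    h, w = previous.shape
    selected = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            diff = np.abs(current[r:r+tile, c:c+tile].astype(float)
                          - previous[r:r+tile, c:c+tile].astype(float))
            if diff.mean() > min_change:
                selected.append((r // tile, c // tile))
    return selected

before = np.full((128, 128), 80.0)   # yesterday's scene
after = before.copy()
after[0:32, 0:32] = 200.0            # a new structure in the top-left tile
print(changed_tiles(before, after))  # → [(0, 0)]: one tile of 16 is sent
```

Here only 1 of the 16 tiles would be downlinked, which is the bandwidth-saving effect described in the text; a production system would replace the mean-difference test with a trained change-detection model.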

AI may be used, in breach of EU data protection and other rules, by public authorities or private entities for mass surveillance. Very high-resolution (VHR) optical data may have the same quality as aerial photography, and therefore may raise corresponding privacy,84 data protection and ethical issues.85-89 In addition, EO data may be exploited by smart video or face recognition technologies90-91 and combined with other data streams such as GPS, security cameras, etc., thus raising privacy concerns even if the raw or pre-processed data itself does not.

Several scenarios can be anticipated where the identifiability of individuals may be at stake: applying very high-resolution satellites to scanning the landscape, inspecting it, capturing images of

79 Aloisio, G. (2017). Privacy and Data Protection Issues of the European Union Copernicus Border Surveillance Service. Master thesis, University of Luxembourg.

80 The EO constellation will be centered at 600 km, which spans a large range of altitudes. It comprises 300 non-maneuverable 3U cubesats and so is much smaller in both total areal cross-section and aggregate mass.

81 Popkin, G., “Technology and satellite companies open up a world of data”.

85 ITU-T Study Group 17 (SG17).

86 Von der Dunk, F., “Outer Space Law Principles and Privacy”, in Evidence from Earth Observation Satellites: Emerging Legal Issues, Denise Leung and Ray Purdy (eds.), Leiden: Brill, pp. 243-258, 2013.

87 European Space Policy Institute, “Current Legal Issues for Satellite Earth Observation”, 2010, p. 38.

88 Cristiana Santos, Lucien Rapp, “Satellite Imagery, Very High-Resolution and Processing-Intensive Image Analysis: Potential Risks Under the GDPR”, Air and Space Law, 2019, vol. 44, issue 3, pp. 275-295.

89 Cristiana Santos, Delphine Miramont, Lucien Rapp, “High Resolution Satellite Imagery and Potential Identification of Individuals”, in P. Soille, S. Loekken, and S. Albani (eds.), Proc. of the 2019 Conference on Big Data from Space (BiDS’2019), EUR 29660 EN, Publications Office of the European Union, Luxembourg, 2019, pp. 237-240.


91 See: Facial recognition technology: fundamental rights considerations in the context of law enforcement.


buildings, cars, real estate showcasing, stock image production, production of footage for publicity purposes, and the like. Those familiar with the area and/or with the individuals who may be in the vicinity may be able to identify them and their movements as well as their social patterns. The actual risk is prompted by making this data available open-source, to be used for any unforeseen purpose.

The European strategy for data92 aims at a secure and dynamic data-agile economy – empowering Europe with data to improve decision-making and better the lives of all its citizens. The future regulatory framework for AI in Europe aims to create an ‘ecosystem of trust’. To do so, it must ensure compliance with EU rules, including the rules protecting fundamental rights and consumers’ rights, in particular for AI systems that pose a high risk, as explained in this chapter.93 If a clear European regulatory framework is sought to build trust among consumers and businesses in AI in space, and thereby speed up the uptake of the technology, it is necessary to be aware of the risks of AI in space.

While AI can do much good, it can also do harm. This harm may be material (risks to the safety and health of individuals, including loss of life, and damage to property) or immaterial (loss of privacy, limitations on the right of freedom of expression, human dignity, discrimination, for instance in access to employment), and can relate to a wide variety of risks. From a forward-looking perspective, harnessing AI technologies in accessing and exploring outer space, as well as in engaging in space-based commercial activities, will in all likelihood entail a broad array of intended and unintended consequences flowing from the use and misuse of such technologies, which cannot be downplayed or disregarded. However, the two most prominent and complex legal issues are considered to be privacy and data protection on the one hand, and liability for erroneous positioning on the other.94

3.2.1 Privacy, data protection issues

Employing AI in connection with satellite imaging raises concerns relating to personal privacy and data protection. Some of the potential forecasted risks95 include the following:

• Ubiquity of “facial recognition data.”96 Facial recognition data can potentially be obtained from a plethora of sources. The facial images collected and registered in a multitude of widely available databases can be used to track the movement of people through time and space, and therefore constitute a potential source for identifying individuals through an analysis of the images captured by various facial recognition systems. More generally, any photograph can potentially become a piece of biometric data with more or less straightforward technical processing. The dissemination of data collected by facial recognition devices is taking place amid a context of permanent self-exposure on social media, which increases the porosity of facial recognition data. This indicates that a massive amount of data is technically accessible for which AI can potentially be mobilized for facial recognition-based identification.

• Lack of transparency. Transparency requires that the data controller inform the data subject of the personal information collected and of the purpose of the collection and use of the data. Transparency also requires imagery operators to inform data subjects of their rights to access, correct and erase the personal data, as well as the procedure for exercising such rights. The transparency obligation is

93 White Paper on Artificial Intelligence - A European approach to excellence and trust, COM(2020) 65 final.

94 F. von der Dunk, “Legal aspects of navigation - The cases for privacy and liability: An introduction for non-lawyers”, Coordinates Magazine, May 2015.


difficult to document, monitor and enforce given the volume of different companies involved in the collection and intelligent processing of personal data.

• Data maximization and disproportionality of data processing. Space technology entails a tendency toward the extensive collection, aggregation and algorithmic analysis of all available data for various reasons, which hampers the data minimization principle. In addition, irrelevant data is also collected and archived, undermining the storage limitation principle.

• Purpose limitation and repurposing of data. Since data analytics can mine stored data for new insights and find correlations between apparently disparate datasets, space big data is susceptible to reuse for secondary unauthorized purposes, profiling, or surveillance.97 This undermines the purpose specification principle, which conveys that the purpose for which data is collected must be specified and lawful. As for repurposing, personal data should not be further processed in a way that the data subject might consider unexpected, inappropriate or otherwise objectionable and, therefore, unconnected to the delivery of the service. Moreover, once the infrastructure is in place, facial recognition technology may easily be used for “function creep,”98 which refers to situations where, for instance, the purposes of VHR usage expand, either to additional operations or to additional activities within the originally envisaged operation. It also contemplates such imagery being disseminated over the internet, which naturally increases the risk of the data being reused widely. Given this circumstance, it is questionable whether the data subject can effectively control the use of the facial recognition data by giving or withholding consent.

• Retrace. By analyzing large amounts of data and identifying links among them, AI can be used to retrace and de-anonymize99 data about persons, creating new personal data protection risks even with respect to datasets that per se do not include personal data.

• Rights of access, correction and erasure. Results drawn from data analysis may not be representative or accurate if their sources are not subject to proper validation. For instance, AI analysis combining online social media resources is not necessarily representative of the whole population at stake. Moreover, machine learning may contain hidden bias in its programming or software, which can lead to inaccurate predictions and inappropriate profiling of persons. Hence, AI interpretations of data collected through high-resolution imagery need human validation to ensure the trustworthiness of a given interpretation and to avoid incorrectly interpreting an image. “At best, satellite images are interpretations of conditions on Earth – a ‘snapshot’ derived from algorithms that calculate how the raw data are defined and visualized.”100 This can create a black box, making it difficult to know when or why the algorithm gets it wrong. For example, one recently developed algorithm was designed to identify artillery craters in satellite images – but the algorithm also identified locations that looked similar to craters but were not craters. This demonstrates the need for metrics to assist in formulating an accurate interpretation of big space data.

• Potential identification of individuals. For instance, if the footage taken through VHR imaging only shows the top of a person’s head and one cannot identify that person without using sophisticated means, it is not personal data. However, if the same image was taken in the backyard of a house, with additional imaging analytical algorithms that may enable an identification of the

99 White Paper on AI.


house and/or the owner, that footage would be considered personal data. Thus, personal data is very much context-dependent. This scenario escalates with the advances in “ultra-high” definition images being published online by commercial satellite companies, and the consequent application of big data analytic tools. It may be possible to indirectly identify an individual (and to depict individual households, etc.) when high-resolution images are combined with other spatial and non-spatial datasets. Thus, while the footage of people may be restricted to “the tops of people’s heads,” once these images are contextualized by particular landmarks or other information, they may become identifiable. “The combination of publicly available data pools with high resolution image data, coupled with the integration and analysis capabilities of modern Geographic Information Systems (GIS) disclosing geographic keys such as longitude and latitude, can result in a technological invasion of personal privacy.”101

• Risk to anonymity in the public space.102 Erosion of anonymity, by public authorities or private organizations, is likely to jeopardize some of the fundamental privacy principles established by the GDPR. Facial recognition in public areas can end up making harmless behavior look suspicious: wearing a hood, sunglasses or a cap, or looking at one’s telephone or at the ground, can affect the effectiveness of these devices and serve as a basis for suspicion in itself.103 Additionally, the interface between facial recognition systems and satellite imaging creates the opportunity for an unprecedented level of surveillance, whether by a governmental or private entity. It is not unthinkable that coupling satellite imagery with facial recognition software and other types of technology, such as sound capturing devices, would further increase the level of surveillance of people and places.

• Fallible technologies might produce unfair bias104 and outcomes.105-106 Like any biometric processing, facial recognition is based on statistical estimates of the match between the elements being compared. It is therefore inherently fallible because it produces a probability of a match. The French data protection authority explains, furthermore, that the biometric templates calculated are always different depending on the conditions under which they are calculated (lighting, angle, image quality, resolution of the face, etc.). Every device therefore exhibits variable performance according, on the one hand, to its aims and, on the other hand, to the conditions under which the faces being compared are collected. Space AI-embedded devices with facial recognition can thus lead to “false positives” (a person is wrongly identified) and “false negatives” (the system does not recognize a person who ought to be recognized). Depending on the quality and configuration of the device, the rates of false positives and false negatives may vary. The model’s result may be incorrect or discriminatory if the training data renders a biased picture of reality, or if it has no relevance to the area in question. Such use of personal data would be in contravention of the fairness principle.

• Transparency and (in)visibility. This risk applies when individuals on the ground may not know that VHR satellites are in operation and, if they do, may be unsure about who is operating them and the purposes for which they are being used, causing some discomfort.
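The false-positive/false-negative trade-off noted in the fallible-technologies point above can be illustrated numerically: applying a match threshold to similarity scores yields both error rates, and tightening the threshold reduces false positives at the cost of more false negatives. The scores below are invented for the example and do not come from any real biometric system:

```python
# Illustrative false-positive / false-negative trade-off for a
# threshold-based face matcher (scores are made up for this example).

genuine  = [0.91, 0.84, 0.77, 0.62, 0.88]  # same-person comparison scores
impostor = [0.35, 0.48, 0.66, 0.22, 0.51]  # different-person scores

def error_rates(threshold):
    """Return (false positive rate, false negative rate) at a threshold."""
    false_neg = sum(s < threshold for s in genuine) / len(genuine)
    false_pos = sum(s >= threshold for s in impostor) / len(impostor)
    return false_pos, false_neg

# A strict threshold rejects impostors but also some genuine users;
# a lenient one does the opposite.
print(error_rates(0.7))   # strict:  → (0.0, 0.2)
print(error_rates(0.4))   # lenient: → (0.6, 0.0)
```

The operating point must therefore be chosen for the deployment context, and – as the text notes – both error rates can differ across demographic groups when the training data renders a biased picture of reality.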

101 Chun, S., Atluri, V., Protecting Privacy from Continuous High-Resolution Satellite Surveillance, in Thuraisingham, B., et al. (eds.), Data and Application Security, IFIP, vol. 73, Springer, 2002.


104 Joy Buolamwini, Timnit Gebru, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77-91, 2018.




