
EU adopts new AI Act: what does it mean for the space sector?

John Worthy
15/07/2024


The EU Artificial Intelligence Act ("EU AI Act"), approved by the Council of the European Union on 21 May 2024, introduces a comprehensive regulatory framework for AI systems within the EU. It also applies to non-EU businesses supplying AI systems into the EU, so its geographic reach is very wide. For a useful overview of the EU AI Act, please see our separate briefing.

AI is increasingly valuable in numerous space domains, from manufacturing and in-orbit operations to sensing and data analytics. The reliance on AI is only likely to grow, making AI an integral part of the space sector. Accordingly, the new Act will affect many players across the industry.

Who is affected by the new compliance obligations? Businesses up and down the space AI value chain, including providers, deployers, importers and distributors of AI systems, will have new compliance obligations if they are involved in creating, using, importing or distributing AI systems (especially high-risk AI systems) within the EU, regardless of where the AI system was developed.

An AI system will need to comply with requirements that vary according to the level of risk it presents.

How are high-risk AI systems covered? High-risk AI systems are a major focus of the new regulatory regime. Providers of high-risk AI systems are affected, alongside other businesses that supply systems, tools, services, components or processes that are incorporated into high-risk AI systems. Businesses will therefore need to assess whether their AI systems are treated as ‘high-risk’.

AI systems are classified as ‘high-risk’ in the following two scenarios:

  • The AI system is a safety component of a product, or the AI system is itself a product covered by the EU harmonisation legislation listed in Annex I of the EU AI Act. Examples of products that could be classified as high-risk where AI systems are deployed include:
    • radio equipment (e.g. ground station/satellite communication equipment, in-orbit systems, terminal equipment, wireless network infrastructure, satellite navigation/positioning equipment, range control systems);
    • aviation (e.g. launch vehicles, high altitude platform stations and drones);
    • machinery (e.g. factory equipment used in the production of satellites and/or launch vehicles, ground equipment or similar products);
    • equipment and protective systems intended for use in potentially explosive atmospheres (e.g. spaceports, launch testing sites); and
    • vehicles (e.g. transmitters/receivers used in automated vehicles).
  • The AI system is used for one of the high-risk purposes listed in the EU AI Act. These include areas such as:
    • employment and workers management, such as the use of AI systems in the recruitment process and monitoring the performance of employees;
    • law enforcement;
    • safety components in critical infrastructure, such as AI-driven safety components used in road traffic and utility supply; and
    • determining access to public and private essential services, such as healthcare, credit, insurance, and emergency responses.

What is the impact on high-risk AI systems? If an AI system is classified as high-risk, its developer (i.e. the provider) must establish a risk management system, ensure that high-quality data is used during training, and provide technical documentation and appropriate instructions for use with the AI system. Significantly, the requirements also include ensuring appropriate human oversight, ensuring that the system achieves an appropriate level of accuracy, robustness and cyber security, having a quality management system in place for post-market monitoring, drawing up an EU declaration of conformity and registering the AI system.

Deployers, on the other hand, have a more limited set of obligations. For example, they must follow instructions provided with the AI system, apply suitable human oversight, monitor system operation, and inform workers’ representatives when using AI technology in the workplace.

Importers and distributors have more limited obligations than providers and deployers. However, if they substantially modify an AI system, put their trademark on it, or use it for a high-risk purpose not foreseen by the provider, they will be treated as providers themselves and so become subject to the obligations described above.

It may be that the widely anticipated EU Space Law will refine some of the effects of the EU AI Act on the space sector.

What are the timelines? The EU AI Act is expected to enter into force on 1 August 2024, with its obligations applying in phases: prohibitions on certain AI practices from February 2025, rules on general-purpose AI models from August 2025, and most other obligations from August 2026.

What should I do to prepare? To prepare for the EU AI Act, companies should begin by evaluating whether they currently develop or use AI systems, or plan to acquire or use them. They should then assess how far they will be treated as providers, deployers, importers or distributors of these AI systems or subsystems (or act as a supplier to an AI provider) and determine whether the AI systems are high-risk.

If so, businesses should take proactive steps, including assessing the risks associated with their AI systems, establishing appropriate policies and governance, providing training to personnel handling AI systems and monitoring the AI systems. Businesses should also review their key contracts with suppliers and customers where AI systems are involved to ensure that sufficient protections are in place, and prepare for the upcoming obligation to register their high-risk AI systems under the EU AI Act.

Please contact us if you would like to discuss the impact of the EU AI Act.
