With this letter, we bring the perspective of the Center for Responsible AI, a consortium of ten AI startups, seven research centers, one law firm, and five industry-leading companies. Together, we are building the next generation of AI products driven by the principles and technologies of Responsible AI.

The Center for Responsible AI supports the goals of the EU AI Act and, as an organization actively engaged in the field, offers recommendations to clarify the Act's impact on assisting humans while driving innovation, promoting the democratization of AI, and enhancing literacy and legal clarity around the terms it uses. With this letter, our foremost goal is to make the text of the EU AI Act explicit, so that it achieves its purpose of benefiting citizens, researchers, and industry, our community at the Center.

The Article references below are to the European Parliament's compromise text of the EU AI Act adopted on 14 June 2023.

1. Driving AI product innovation in Europe by permitting limited real-world testing of high-risk products

Product innovation relies on the ability to iterate swiftly, which depends largely on early customer feedback.

However, Article 2(5d) prohibits the testing of high-risk products under real-world conditions in all situations. This implies that high-risk products seeking early customer validation would need to go through sandbox testing, which could significantly slow the pace of innovation and place Europe at a competitive disadvantage.

To address this issue, we propose that Article 2(5d) be amended to permit limited testing of high-risk products under controlled real-world conditions. This modification would foster a more favorable setting for product innovation in Europe while maintaining a balanced approach to risk management.

2. Eliminating barriers for the development of open-source foundation models

As mentioned in recital 12a, the contribution of open-source software to Europe's economy is immense, with a value estimated at between €65 billion and €95 billion of European Union GDP. Open-source software is a key instrument for AI research and innovation at startups and research centers.

Startups and SMEs use open-source software as a means to compete with large corporations. Collaboration in this space involves the free public sharing of open-source components, which, in accordance with recital 12b, is not considered placing on the market. However, Article 2(5e) excludes foundation models from this exemption.

We propose an amendment allowing the open-source community to develop foundation models without restriction. This would mirror the nature of open-source software development and further aid collaborative AI innovation.

3. Revisiting certain provisions regarding foundation models

The definition of foundation models in Article 3(1c) and (1d) has faced criticism from AI computer scientists. While the effort to avoid referencing specific AI learning methodologies and techniques from a legal perspective is appreciated, it is essential to align the definition with EU standards. We recommend an alternative definition that clearly outlines what qualifies as a large machine learning model covered under the AI Act and how it relates to generative AI and general-purpose AI.

The first obligation that is particularly difficult for startups and research centers is the safeguard requirement in Article 28b(4), which in practice demands scaling human feedback, a significant financial burden for these organizations. To avoid this burden, we propose introducing exceptions to the obligations imposed on foundation models under Article 28b.

Regarding Recital 60h, Article 28b(4c), and Article 52(3a), the introduction of the obligation to publicly disclose detailed summaries of training data protected by copyright law has ignited a vigorous debate. Key points of contention include defining copyrightable works, the objective of such disclosures, and enforcement methods. Considering the nuanced differences between the EU and US discussions on using proprietary information for AI training, and drawing from academic discourse and the findings of the Special Committee on Artificial Intelligence in a Digital Age, we propose:

(i) Removing the requirement to summarize copyrightable works used for foundation model training due to practical challenges, potential conflicts with ongoing agreements, and commercial interests.

(ii) Replacing it with a general obligation for model providers to comply with existing EU intellectual property laws, including trade secrets, and mandating documented due diligence and possible audits to ensure lawful use of proprietary information.

4. Reinforcing environmental concerns and obligations

The initial AI Act draft missed a significant opportunity by not addressing the crucial issue of environmental sustainability in AI. This oversight contradicted the approach proposed by the High-Level Expert Group on AI and ongoing discussions highlighting the imperative of sustainability.

Although the European Parliament's compromise text partially remedies this omission, it is important to recognize that environmental impact is not confined solely to high-risk or specific-risk AI systems, including foundation models.

A more comprehensive approach encompassing all AI systems covered by the Act is necessary, together with voluntary considerations for low- and minimal-risk AI systems.

5. Promoting regulatory clarity and making Annex III even more dynamic