Reflecting industry-wide concerns, a group of tech companies including GitHub, Hugging Face, and Creative Commons has issued a joint paper to EU policymakers. The document urges stronger support for open-source AI development as the legislative process for the AI Act nears its conclusion. Other signatories include EleutherAI, LAION, and Open Future.
Aiming to set a global precedent
The companies have put forward several suggestions for the European Parliament to consider before the final rules are enacted: clearer definitions of AI components, recognition that hobbyists and researchers working on open-source models do not derive commercial benefit from them, allowance for limited real-world testing of AI projects, and proportional requirements for different foundation models.
“The AI Act holds the potential to set a worldwide example in terms of AI regulation, allowing for risk management while also fostering innovation,” the companies write in their joint document. By endorsing the emerging open-ecosystem approach to AI, they argue, the Act could play a significant role in furthering that objective.
AI regulation has become a priority for governments worldwide, with the EU at the forefront of these deliberations. However, the EU’s proposed AI Act has faced scrutiny for defining AI technologies too broadly while failing to adequately address the application layer.
Open-source AI: A double-edged sword?
Several developers of generative AI models have embraced the open-source principle of sharing access to models and promoting transparency within the AI community. Yet this ethos has also created challenges for the companies building these systems. Notably, OpenAI has stopped sharing much of its GPT research, citing competition and safety concerns.
In light of such challenges, the companies highlight the potentially detrimental consequences of some current proposals affecting high-risk models. They caution against burdensome measures such as mandatory third-party audits, and argue that open-source library tools should not fall under regulatory scrutiny when they are not used for commercial activity.
Learning from real-world testing
According to the joint paper, prohibiting real-world testing of AI models “will considerably hinder any research and development.” The signatories contend that open testing yields valuable insights for improving how models function. Under the current stipulations, however, AI applications cannot be tested beyond closed experiments, a restriction intended to avoid legal complications arising from untested products.
As the EU’s AI Act takes shape, AI companies have not shied away from expressing their opinions on what it should encompass. OpenAI has notably lobbied EU policymakers against stricter regulations around generative AI, with some of its suggestions making it into the Act’s most recent version.
We value your thoughts
We’re eager to hear your perspective on this issue. What do you make of these companies’ appeals for more open-source support in the AI Act? Share your thoughts with us in the comment section below!