Spain Pushes for Stricter Regulation and Vulnerability Testing
The European Union’s proposed AI Act, which aims to regulate artificial intelligence, is under debate as European officials weigh how to oversee foundation models. Spain, which currently holds the rotating presidency of the Council of the EU, is pushing for enhanced vulnerability testing and a tiered regulatory framework based on the number of users a model serves.
Multiple Trilogues Held, with Fourth Meeting Expected This Week
European lawmakers have already held three trilogues, the three-way negotiations between the European Parliament, the Council of the European Union, and the European Commission, on the AI Act. A fourth trilogue is expected this week. If no agreement is reached there, another meeting is scheduled for December, raising concerns that a final decision on the law could slip into next year. The original goal was to pass the AI Act before the end of this year.
Proposed Requirements for Foundation Model Developers
One draft of the EU AI Act would require developers of foundation models to assess potential risks, test their models both during development and after market release, examine bias in training data, validate data, and publish technical documentation before release.
Call for Consideration of Smaller Companies
Open-source companies have urged the EU to take into account the challenges smaller companies face in complying with the regulations, arguing that the law should distinguish between commercial foundation models and those built by hobbyists and researchers.
EU AI Act as a Potential Model for Other Regions
Many government officials, including those in the US, have looked to the EU’s AI Act as a potential template for regulating generative AI. However, the EU has moved more slowly than some other international players, such as China, which implemented its own AI rules in August of this year.