AI Companies Face Compliance Challenges
As AI technology advances rapidly, big tech companies such as Apple and Meta have taken a cautious approach to deploying their AI models in Europe. A recent report indicates that leading generative AI models, including those from Meta and OpenAI, do not fully comply with the newly implemented EU AI Act, which took effect in August. The legislation aims to establish standardized rules for AI systems and mitigate potential societal risks.
Introduction of the ‘LLM Checker’ Tool
To help companies navigate the complexities of the EU AI Act, researchers at ETH Zurich and Bulgaria's Institute for Computer Science, Artificial Intelligence and Technology (INSAIT), together with the Swiss startup LatticeFlow AI, have developed a compliance evaluation framework called the “LLM Checker.” The tool simplifies the compliance process for tech firms by scoring AI models between 0 and 1 across categories such as safety and adherence to legal standards.
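The report does not describe how these category scores are produced or combined, but a minimal sketch of the general idea, using hypothetical category names and a simple unweighted average in place of the framework's actual methodology, could look like this:

# Minimal sketch of a per-category compliance scorecard.
# Category names, the aggregation rule, and the threshold are assumptions
# for illustration; the LLM Checker's real methodology is not detailed here.

from dataclasses import dataclass

@dataclass
class ComplianceReport:
    model_name: str
    category_scores: dict[str, float]  # each score in [0.0, 1.0]

    def overall(self) -> float:
        """Unweighted average across categories (illustrative only)."""
        return sum(self.category_scores.values()) / len(self.category_scores)

    def weak_areas(self, threshold: float = 0.5) -> list[str]:
        """Categories scoring below the threshold, flagged for review."""
        return [name for name, score in self.category_scores.items()
                if score < threshold]

# Illustrative numbers loosely echoing the figures reported in this article.
report = ComplianceReport(
    model_name="example-llm",
    category_scores={
        "cybersecurity": 0.80,
        "privacy_governance": 0.85,
        "environmental_impact": 0.78,
        "discriminatory_output": 0.46,
    },
)
print(f"overall: {report.overall():.2f}")    # overall: 0.72
print(f"weak areas: {report.weak_areas()}")  # ['discriminatory_output']

A real framework would weight categories and tie each one to specific obligations in the Act, but even this 0-to-1 scorecard shape shows how a model can look strong on average while still falling short in a single category.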
Findings from the LLM Checker
The LLM Checker evaluated AI models from companies including Alibaba, DeepAI, OpenAI, Meta, and Mistral AI, all of which averaged 0.75 or higher. The assessment examined factors such as cybersecurity, environmental impact, and privacy governance, but it also revealed significant shortcomings around discrimination and cybersecurity. For example, OpenAI's GPT-4 Turbo received a score of 0.46 for discriminatory output, while Alibaba Cloud scored even lower at 0.37. Despite these issues, most models performed well on requirements concerning harmful content and toxicity.
The Challenge of Compliance Evidence
Companies risk substantial fines for non-compliance with the EU AI Act, up to €35 million or 7% of global annual turnover, yet demonstrating adherence remains a significant challenge: there are currently no clear technical guidelines for companies to follow when providing evidence of compliance. Petar Tsankov, CEO of LatticeFlow AI, emphasized the discomfort this regulatory uncertainty creates for businesses and noted his intention to meet with representatives from Apple and DeepAI to discuss how they can better align with the AI Act.
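For scale, the Act's headline penalty ceiling is generally the greater of the fixed amount and the revenue-based percentage. A toy calculation, using purely illustrative turnover figures:

# Toy illustration of the EU AI Act's headline penalty ceiling:
# up to EUR 35 million or 7% of global annual turnover, whichever is higher.
# The turnover figures below are made-up examples, not real companies' revenue.

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000 (7% of EUR 2bn)
print(f"{max_fine_eur(100_000_000):,.0f}")    # 35,000,000 (fixed floor applies)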
European Commission Welcomes Initiatives
The European Commission has expressed support for the LLM Checker and its evaluation framework, recognizing them as vital steps toward translating the AI Act into actionable technical requirements for AI providers. The commission is also launching a consultation on the Code of Practice aimed at overseeing the implementation of the AI Act. A spokesperson affirmed that the commission is collaborating with stakeholders to develop detailed rules for providers of general-purpose AI models, emphasizing the importance of this framework in helping companies demonstrate compliance with the law. Additionally, an open-source resource for evaluating large language models (LLMs) against EU requirements is now available, inviting AI researchers and developers to contribute to this evolving initiative.
Image source: Business Standard