Report Finds Top AI Developers Lack Transparency in Disclosing Societal Impact

Stanford HAI Releases Foundation Model Transparency Index

A new report from Stanford HAI (Human-Centered Artificial Intelligence) suggests that leading developers of AI foundation models, such as OpenAI and Meta, are not effectively disclosing information about the potential societal effects of their models. The Foundation Model Transparency Index, unveiled today, evaluated the transparency practices of the makers of the top 10 AI models. Meta’s Llama 2 ranked highest, with BLOOMZ and OpenAI’s GPT-4 following closely behind, but none of the models achieved a satisfactory rating.

Transparency Defined and Evaluated

The researchers at Stanford HAI used 100 indicators to define transparency and assess the disclosure practices of the model creators. They examined publicly available information about the models, focusing on how they are built, how they work, and how people use them. The evaluation considered whether companies disclosed partners and third-party developers, whether customers were informed about the use of private information, and other relevant factors.

Top Performers and their Scores

Meta scored 53 percent, earning the highest marks on model basics because the company released its research on how the model was created. BLOOMZ, an open-source model, followed closely at 50 percent, and GPT-4 scored 47 percent. Despite OpenAI’s relatively closed approach, GPT-4 tied with Stability’s Stable Diffusion, which has a more locked-down design.

OpenAI’s Disclosure Challenges

OpenAI, known for its reluctance to release research and disclose data sources, still managed to rank high due to the abundance of available information about its partners. The company collaborates with various companies that integrate GPT-4 into their products, resulting in a wealth of publicly available details.

Creators Silent on Societal Impact

However, the Stanford researchers found that none of the creators of the evaluated models disclosed any information about the societal impact of their models, including where to direct privacy, copyright, or bias complaints.

Index Aims to Encourage Transparency

Rishi Bommasani, society lead at the Stanford Center for Research on Foundation Models and one of the researchers behind the index, explains that the goal is to provide a benchmark for governments and companies. Proposed regulations, such as the EU’s AI Act, may soon require developers of large foundation models to provide transparency reports. The index aims to make models more transparent by breaking the concept down into measurable factors. The group evaluated one model per company to make comparisons easier.

OpenAI’s Research Distribution Policy

OpenAI, despite its name, no longer shares its research or code publicly, citing concerns about competitiveness and safety. This approach contrasts with the large and vocal open-source community within the generative AI field.

The Verge reached out to Meta, OpenAI, Stability, Google, and Anthropic for comments but has not received a response yet.

Potential Expansion of the Index

Bommasani states that the group is open to expanding the scope of the index in the future. However, for now, they will focus on the 10 foundation models that have already been evaluated.
