Global AI Leaders Fail to Meet Key Safety Standards, New Report Warns

A new global safety index warns that top AI companies are racing ahead without adequate safeguards, exposing societies to growing risks and accountability gaps.
A new international assessment has raised serious concerns about the safety practices of leading artificial intelligence developers, revealing that companies such as OpenAI, Anthropic, Meta, and xAI fall significantly short of emerging global AI safety norms. The findings come from the latest AI Safety Index released by the Future of Life Institute (FLI), which warns that rapid advances in AI capabilities are not being matched by responsible oversight.
Compiled by an independent panel of AI researchers and ethics specialists, the report suggests that the tech industry’s race to build ever more powerful systems has overshadowed efforts to ensure these technologies remain safe and controllable. According to the index, none of the major companies has yet demonstrated the level of governance, transparency, or long-term planning that experts say is essential for managing next-generation AI.
The FLI concludes that this gap could leave communities vulnerable to a range of unintended consequences — from misinformation and manipulation to more severe scenarios involving autonomous AI behaviour spiralling beyond human oversight.
Highlighting the urgency, MIT professor and FLI president Max Tegmark told a popular publication, “Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards.”
The report arrives at a time of heightened public unease about the influence and potential dangers of advanced AI systems. Over the past year, several incidents involving self-harm linked to unregulated chatbot interactions have intensified calls for stronger global rules. The FLI, which has consistently advocated for slowing down AI development until stronger ethical frameworks are in place, argues that the world is “racing toward smarter-than-human systems” without building the structures needed to keep them in check.
In its evaluation, the index found that companies such as OpenAI, Meta, Anthropic, and xAI performed poorly on measures of accountability and transparency. The report notes a lack of clarity around how these firms test for bias, respond to safety failures, or plan to handle risks that may emerge from increasingly autonomous models.
Smaller research institutions in Europe and Asia, meanwhile, received comparatively better marks for openness in safety documentation and risk disclosures. However, the FLI stresses that no leading AI organisation — large or small — is operating at a safety level consistent with the standards currently being discussed by regulators in the EU and the United Nations.
Industry reactions to the findings were mixed. A spokesperson for Google DeepMind told a popular publication that the company would “continue to innovate on safety and governance at pace with capabilities” as its models evolve. xAI, founded by Elon Musk, issued a terse and seemingly automated response: “Legacy media lies.”
As debates over AI regulation intensify, the FLI warns that the pace of innovation is far outstripping the development of safety protocols. Without significant reforms to governance structures at major AI labs, the report cautions, the gap between technological capability and responsible oversight may continue to widen before the global community is prepared to manage the consequences.

