OK, that's a clickbait title, but only a little. They're actually amongst the heaviest users of Claude, according to Anthropic (PDF), via the 2025 AI Index Report from Stanford's Institute for Human-Centered AI. The report itself is a 456-page PDF, so do start with the key takeaways, then either search for specific terms of interest or pick the one of its 8 chapters that most interests you and take a deeper look. So many graphics to support your next presentation! And they're all available, along with the supporting data, on a Google Drive. There's also a cool Interactive Data Visualization page where you can build your own charts, including the option to show some data per capita, which the charts in the report don't offer.
A couple of interesting highlights from the Responsible AI chapter:
9. LLMs trained to be explicitly unbiased continue to demonstrate implicit bias.
Many advanced LLMs—including GPT-4 and Claude 3 Sonnet—were designed with measures to curb explicit biases, but they continue to exhibit implicit ones. The models disproportionately associate negative terms with Black individuals, more often associate women with humanities instead of STEM fields, and favor men for leadership roles, reinforcing racial and gender biases in decision making. Although bias metrics have improved on standard benchmarks, AI model bias remains a pervasive issue.
and
10. RAI gains attention from academic researchers.
The number of RAI papers accepted at leading AI conferences increased by 28.8%, from 992 in 2023 to 1,278 in 2024, continuing a steady annual rise since 2019. This upward trend highlights the growing importance of RAI within the AI research community.
And what have they been talking about in Las Cortes Generales (Spain's parliament)?!?