Google’s updated public AI ethics policy removes its pledge not to use the technology for weapons or surveillance applications. In a previous version of the principles, viewed by CNN via the Internet Archive’s Wayback Machine, the company listed applications it would not pursue. One such category was weapons or other technology intended to injure people; another was technology used for surveillance that violates internationally accepted norms. That language is gone from the updated principles page. Since OpenAI launched its chatbot ChatGPT in 2022, the artificial intelligence race has advanced at a dizzying pace. While AI use has boomed, legislation and regulation on transparency and ethics in AI have yet to catch up, and Google now appears to have loosened its self-imposed restrictions. In a blog post Tuesday, James Manyika, senior vice president of research, labs, technology & society, and Google DeepMind head Demis Hassabis said that AI frameworks published by democratic countries have deepened Google’s “understanding of AI’s potential and risks.”
Google removed a pledge not to build AI for weapons or surveillance from its website this week. The change was first spotted by Bloomberg. The company appears to have updated its public AI principles page, erasing a section titled “applications we will not pursue,” which was still included as recently as last week. Asked for comment, the company pointed TechCrunch to a new blog post on “responsible AI.” It notes, in part: “we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” Google’s newly updated AI principles note that the company will work to “mitigate unintended or harmful outcomes and avoid unfair bias,” as well as align itself with “widely accepted principles of international law and human rights.”
The Washington Post reports that in a significant shift from its earlier stance, Google has revised its AI principles, eliminating a section that outlined four “Applications we will not pursue.” Until recently, this list included weapons, surveillance, technologies likely to cause overall harm, and use cases that violate international law and human rights principles. The company declined to comment specifically on the changes to its weapons and surveillance policies. Google executives Demis Hassabis, head of AI, and James Manyika, senior vice president for technology and society, explained the update in a blog post on Tuesday. They emphasized the need for companies based in democratic countries to serve government and national security clients, given the global competition for AI leadership within an increasingly complex geopolitical landscape. The executives stated that democracies should lead AI development, guided by core values like freedom, equality, and respect for human rights.