Google Removes Ban on Using Its Artificial Intelligence for Weapons and Surveillance

Google updated its artificial intelligence (AI) principles on Tuesday, removing a pledge not to use AI in ways “that cause or are likely to cause overall harm.” The deleted passage had also committed Google to avoiding AI for surveillance, weapons, or technology designed to harm people. The Washington Post first noticed the change; the earlier version of the guidelines is preserved by the Internet Archive.

At the same time, Google DeepMind CEO Demis Hassabis and James Manyika, Google’s senior executive for technology and society, published a blog post laying out new “core tenets” for AI. These focus on innovation, collaboration, and “responsible” AI development, though the new guidelines make no concrete commitments.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” the blog post says. “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

Hassabis joined Google after it acquired DeepMind in 2014. In a 2015 interview with Wired, he said the acquisition included terms barring DeepMind’s technology from being used for military or surveillance purposes.

Despite its pledge not to develop AI weapons, Google has still taken on military work. This includes Project Maven, a 2018 Pentagon contract under which Google used AI to analyze drone footage, and Project Nimbus, a 2021 cloud computing contract with the Israeli government. Both deals, struck before AI reached its current capabilities, sparked protests from Google employees who believed the work violated the company’s AI principles.

Google’s updated AI principles bring it in line with other major AI developers. Meta allows some military uses of its Llama models, OpenAI permits certain military applications of ChatGPT, and Anthropic has partnered with Amazon and government software maker Palantir to offer its Claude AI to U.S. military and intelligence agencies.
