Artificial intelligence pioneer Geoffrey Hinton quits Google and warns that AI could soon become more intelligent than humans

A man widely regarded as a pioneer of artificial intelligence (AI) has resigned from Google, warning of the growing dangers posed by developments in the field.

Geoffrey Hinton, 75, told the New York Times that he was resigning from Google and that he now regrets his contributions to the field of AI.

Hinton told the newspaper that until last year he believed Google had managed the technology responsibly. He changed his mind after Microsoft integrated a chatbot into its Bing search engine, a move that raised concerns at Google about the threat to its own search business.

He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Speaking to the BBC, Hinton said some of the dangers of AI chatbots were "quite scary", warning that they could surpass human intelligence and be exploited by "bad actors".

Dr Hinton also said that his age had played into his decision to leave the tech giant, telling the BBC: "I'm 75, so it's time to retire."

"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning.

"And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.

"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

"And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

Hinton's more immediate concern has already begun to materialize: AI-generated photos, videos and text are flooding the internet, making it harder for people to tell what is authentic and what is fabricated.

Hinton also expressed concern that AI will eventually replace workers such as paralegals and personal assistants, taking over "drudge work" and possibly more in the future.

Google's chief scientist, Jeff Dean, said in a statement that the company appreciated Hinton's contributions over the past decade.

“I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!

“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

In March, prominent figures in tech signed a letter calling on artificial intelligence labs to pause training of the most powerful AI systems for at least six months, citing "profound risks to society and humanity." The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came two weeks after OpenAI announced GPT-4, a more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.
