The company behind ChatGPT will evaluate the possible “catastrophic risks” that could come with the unchecked development of the technology.
OpenAI has formed a Preparedness team that will assess, test and evaluate artificial intelligence (AI) models to address their potential dangers.
Some of the risks the company aims to mitigate include the technology’s capacity to pose “chemical, biological, and radiological threats” and to facilitate “autonomous replication”. The team will also evaluate an algorithm’s ability to persuade and deceive humans, for instance in phishing attacks or the generation of malicious code.
The team will be led by Aleksander Madry, the director of MIT’s Center for Deployable Machine Learning.
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI said in a blog post making the announcement. “But they also pose increasingly severe risks.”
Coinciding with the launch of the team, OpenAI has also made a community call-out for risk-study ideas, announcing a $25,000 prize and a possible job offer for the top 10 submissions.
The company’s CEO Sam Altman has been one of the most outspoken advocates of AI regulation. Last spring, he testified before a US Senate committee about the possibilities and dangers of the new technology that powers ChatGPT, saying: “As this technology advances, we understand that people are anxious about how it could change the way we live. We are too.”
He called on US lawmakers to impose stricter restrictions on AI tools, including the creation of a US or global agency that would provide licences for companies aiming to develop AI tools – and take them away if they fail to comply with safety standards.
The hearing was prompted by the Biden administration’s concerns over the rapid development of generative AI. Over the last few months, there has been a dramatic rise in the popularity of AI-powered chatbots such as ChatGPT. These free tools can generate text in response to a prompt, including articles, essays, jokes and even poetry. However, recent studies have shown that they can be used by non-state actors to carry out cyber attacks or to design bioweapons.
Earlier this week, the UK government published a report on the capabilities and risks that AI poses to the UK’s society and economy. The country will be hosting the first global AI Safety Summit on 1-2 November.