2023-05-30 09:16:19 ET
A group of tech executives and others associated with artificial intelligence penned a letter on Tuesday stating that the risks posed by the technology are on par with those of pandemics and nuclear war.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the signatories of the letter wrote.
The letter was released by the Center for AI Safety and was signed by OpenAI CEO Sam Altman, executives from Microsoft ( NASDAQ: MSFT ) and Google ( GOOG ) ( GOOGL ), and people from several other companies and universities.
OpenAI has received billions in funding from Microsoft ( MSFT ), which has integrated the popular ChatGPT into many of its products and services.
Earlier this month, a Reuters/Ipsos poll found that nearly two-thirds of Americans believe AI poses a risk to humanity.
In March, tech luminaries such as Elon Musk and Steve Wozniak signed a separate letter calling for a six-month pause in the development of many AI tools so that new safety standards for the technology could be established.
Related tickers: C3.ai ( AI ), SoundHound AI ( SOUN ), Nvidia ( NVDA ), BigBear.ai Holdings ( BBAI ), Microsoft ( MSFT ), Alphabet ( GOOG ) ( GOOGL ), Apple ( AAPL ), Amazon ( AMZN )
More on artificial intelligence and its impact
- Generative AI is all the rage. Tim Cook is asking for caution.
- What happened during the White House meeting on artificial intelligence?
- Chamber of Commerce lays out AI regulation ideas