Summary: AI Doomerism Is a Decoy

Yet “even under that charitable interpretation,” Bender told me, “you have to wonder: If you think this is so dangerous, why are you still building it?”

The solutions these companies have proposed for both the empirical and fantastical harms of their products are vague and filled with platitudes, straying from the established body of work on what, experts told me, regulating AI would actually require.

Hundreds of AI executives, researchers, and other tech and business figures, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI.

Silicon Valley has shown little regard for years of research demonstrating that AI’s harms are not speculative but material; only now, after the launch of OpenAI’s ChatGPT and a cascade of funding, does there seem to be much interest in appearing to care about safety.

He added that the public needs to end the current “AI arms race between these corporations, where they’re basically prioritizing the development of AI technologies over their safety.” That leaders from Google, Microsoft, OpenAI, DeepMind, Anthropic, and Stability AI signed his center’s warning, Hendrycks said, could be a sign of genuine concern.

Testifying before a Senate panel, he said that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” Both Altman and the senators treated that increasing power as inevitable, and associated risks as yet-unrealized “potential downsides.”

But many of the experts I spoke with were skeptical of how much AI will progress from its current abilities, and they were adamant that it need not advance at all to hurt people—indeed, many applications already do.

Big Tech’s warnings about an AI apocalypse are distracting us from years of actual harms their products have caused.

Read the complete article at: www.theatlantic.com
