
AI leaders sound the alarm with a 22-word wake-up call

We can’t ignore these pressing issues, even if advanced AI is still just cooking in the lab.

Image: OpenAI's "Introducing ChatGPT" page, showing the chatbot's examples, capabilities, limitations, and the upgrade-to-Plus option.
Image: Pexels

AI’s looming dangers have brought big-name researchers and CEOs to their keyboards, dashing off a concise 22-word wake-up call that urges humanity to treat AI risks as seriously as pandemics and nuclear war.

The brief and intensely focused message urges us to take AI’s risks seriously:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

AI’s heavy hitters, including Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, and Turing Award winners Geoffrey Hinton and Yoshua Bengio, all backed this statement.

Notice anything missing here? As first noted by The Verge, one signature is conspicuously absent: Yann LeCun, chief AI scientist at Meta (Facebook’s parent company).

Tackling AI safety, the great debate still rolls on

This development is part of an ongoing debate around AI safety. Remember that open letter demanding a six-month “pause” in AI development? Yeah, reactions were all over the place.

Dan Hendrycks, executive director of the Center for AI Safety, thinks keeping it short and sweet is key. To him, less is more when it comes to a stark message – you can’t dilute what’s potent.

With this warning, he aims to show the world that anxiety about AI isn’t confined to fringe worrywarts; the concern is widespread, and it matters.

AI is nowhere near perfect

Amidst all the squabbling and hypotheticals surrounding AI safety debates, there’s one thing all sides agree on: AI is already causing real headaches – from mass surveillance and error-riddled predictive policing to a misinformation epidemic.

We can’t ignore these pressing issues, even if advanced AI is still cooking in the lab.

What do you think? Is there a better way to go about this? Have any thoughts? Drop us a line below in the comments, or carry the discussion to our Twitter or Facebook.


Kevin is KnowTechie's founder and executive editor. With over 15 years of blogging experience in the tech industry, Kevin has transformed what was once a passion project into a full-blown tech news publication. Shoot him an email at kevin@knowtechie.com or find him on Mastodon or Post.

2 Comments

  1. Mason Pelt

    May 31, 2023 at 10:58 am

    Would you sign my letter calling for a moratorium on letters calling for a moratorium on AI research?

    • Kevin Raposo

      June 1, 2023 at 3:55 am

      Yes, 1,000 percent.

