Stop ‘Giant AI Experiments’


The Gist

  • AI: Don’t do it. Tech and business leaders call for a six-month pause on training AI systems more powerful than GPT-4.
  • We’re ‘out of control.’ The letter expresses concerns about an “out-of-control race” to develop AI systems that cannot be understood, predicted or controlled.
  • Better safety protocols. Signees suggest focusing on AI safety and design protocols, improving AI systems and enhancing AI governance during this pause.

You know that whole artificial intelligence innovation thing? Stop it. Now. Or the government should stop you.

At least that’s what a group of technology and business leaders — including Elon Musk, Steve Wozniak and tech leaders from Meta, Google and Microsoft — say in a jointly signed letter hosted by the Future of Life Institute and made public this week. Specifically, these leaders “call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

OpenAI, the company behind ChatGPT and GPT-4, probably doesn’t mind this directive. After all, AI innovators outside of OpenAI are scrambling to beat ChatGPT right now.

Why the call for this “pause”? AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” according to the letter. As of the publication of this article early Wednesday afternoon ET, March 29, there were 1,124 names listed as signees of the “Pause Giant AI Experiments: An Open Letter” (ChatGPT says there are 1,125, but, alas, we digress).

“This pause should be public and verifiable, and include all key actors,” the co-signees go on to say. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Who Signed This ‘Pause Giant AI Experiments’ Letter?

Among the more than 1,100 signees (so far):

  • Yoshua Bengio, founder and scientific director at Mila, Turing Award winner and professor at the University of Montreal
  • Stuart Russell, Berkeley professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”
  • Yuval Noah Harari, author and professor, Hebrew University of Jerusalem
  • Andrew Yang, co-chair of the Forward Party, 2020 US presidential candidate, NYT bestselling author, Presidential Ambassador of Global Entrepreneurship
  • Connor Leahy, CEO, Conjecture
  • Jaan Tallinn, co-founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute
  • Evan Sharp, co-founder, Pinterest
  • Chris Larsen, co-founder, Ripple
  • Emad Mostaque, CEO, Stability AI
  • Maxim Khesin, Meta, machine learning engineer
  • Noam Shazeer, founder and CEO of Character.ai, major contributor to Google’s LaMDA
  • Andrew Brassington, Microsoft, senior software engineer

“Contemporary AI systems are now becoming human-competitive at general tasks,” according to the letter, “and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

AI Leaders Are Failing at Responsible Development

Wait, so with all the lightning-speed advancements in AI over the last few months — with OpenAI’s ChatGPT chatbot leading the way — we need to put the brakes on AI innovation? How did this happen? Should marketers and customer experience professionals take the advice of the leaders in this joint letter? Or is this more targeted to “giant AI experiments,” whatever those are? What qualifies as giant? What’s more powerful than GPT-4, and who decides that?

Of course, there have been calls for responsible AI practices that balance the needs of customer experience, marketing efficiency and creativity with morality, ethics and accuracy. Ethical AI is a term much bandied about, and we’re still trying to figure out what it means, even as our artificial intelligence friends veer off the beaten path once in a while.


