Dear clients,
Elon Musk, together with a panel of AI experts and industry leaders, is calling for a six-month pause in developing systems more powerful than the recently launched OpenAI GPT-4, in an open letter citing potential risks to society. The letter proposes suspending the development of advanced AI until independent experts have developed shared safety protocols, and calls on developers to work with policymakers on governance.
Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) artificial intelligence program that wowed users by engaging them in human conversation, writing songs, and summarizing long documents.
“Powerful artificial intelligence systems should only be developed when we are confident that their effects will be positive and the risks associated with them will be manageable,” reads a letter published by the Future of Life Institute.
The letter was signed by more than 1,000 people, including Musk. However, the signatories did not include Sam Altman, CEO of OpenAI, or Sundar Pichai and Satya Nadella, the CEOs of Alphabet and Microsoft, respectively.
Experts note that while the letter's wording is not perfect, its message is sound: the big players are becoming less open about what they do, making it harder to protect society from any harm that might arise.
Since its release last year, OpenAI’s ChatGPT has spurred competitors to accelerate the development of similar large language models, and companies are rushing to incorporate AI into their products. Investors wary of relying on a single company warmly welcome competition to OpenAI.