Billionaire Elon Musk and a group of AI experts and industry executives have called for a six-month pause in the development of artificial intelligence (AI) systems more powerful than OpenAI’s newly launched GPT-4, to allow time to ensure their potential risks to society and humanity can be safely managed.
An open letter, signed by more than 1,000 people so far, including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4 by San Francisco-based OpenAI.
The company says its latest model is far more powerful than the previous version, which powered ChatGPT, a bot capable of generating tracts of text from short prompts.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the open letter, titled “Pause Giant AI Experiments.”
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Musk was an initial investor in OpenAI and spent years on its board, and his car company Tesla develops AI systems to help power its self-driving technology, among other applications.
The letter, hosted by the Musk-funded Future of Life Institute, was signed by prominent critics as well as competitors of OpenAI, such as Stability AI chief Emad Mostaque.
“Trustworthy and loyal”
The letter quoted a blog post by OpenAI founder Sam Altman, who suggested that “at some point, it may be important to get independent review before starting to train future systems.”
“We agree. That point is now,” the authors of the open letter wrote.
“Therefore, we call on all AI labs to immediately pause training of AI systems more powerful than GPT-4 for at least six months.”
They called on governments to step in and impose a moratorium if the companies failed to agree.
The six months should be used to develop safety protocols and AI governance systems, and to refocus research on ensuring AI systems are more accurate, safe, trustworthy and loyal.
The letter did not detail the risks posed by GPT-4.
But the signatories include Gary Marcus of New York University, who has long argued that chatbots are great liars with the potential to spread misinformation.
However, author Cory Doctorow has compared the AI industry to a “pump and dump” scheme, arguing that both the potential and the threats of AI systems have been overrated.