GPT-3 is the latest and largest language model from the AI company OpenAI, which began rolling it out gradually in mid-July. Its predecessor, GPT-2, made headlines in February last year when the company withheld its full release for fear it would be abused. In November, however, OpenAI reversed course and released the model, stating that it had detected no “strong evidence of misuse so far.”
The lab took a different approach with GPT-3; it neither withheld it nor granted public access. Instead, it gave the algorithm to select researchers who applied for a private beta, with the goal of gathering their feedback and commercializing the technology by the end of the year.
A man named Liam Porr submitted an application for the private beta.
He filled out a simple form about his intended use, but he didn’t wait for a response. After reaching out to several members of the Berkeley AI community, he quickly found a PhD student who already had access. Once the student agreed to collaborate, Porr wrote a small script for him to run: it gave GPT-3 the headline and introduction for a blog post and had it generate several complete versions.
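Porr’s actual script was never published, so the following is only a minimal sketch of the kind of workflow he describes: combine a headline and introduction into a prompt, then ask a completion backend for several drafts. The function names, the prompt format, and the stand-in completion callable are all assumptions for illustration, not his code.

```python
def build_prompt(headline, intro):
    """Combine a headline and introduction into a single prompt string
    (assumed format; Porr's real prompt layout is unknown)."""
    return f"{headline}\n\n{intro}"


def generate_drafts(complete_fn, headline, intro, n=3):
    """Ask a completion backend (e.g., a GPT-3 API wrapper supplied as
    complete_fn) for n candidate blog-post drafts."""
    prompt = build_prompt(headline, intro)
    return [complete_fn(prompt) for _ in range(n)]


# Example with a stand-in completion function instead of a real API call:
drafts = generate_drafts(
    lambda p: p + " ...generated body...",
    "Feeling unproductive? Maybe you should stop overthinking",
    "Intro paragraph here.",
    n=2,
)
print(len(drafts))  # 2
```

In a real run, `complete_fn` would wrap a call to the GPT-3 API with sampling parameters such as temperature and maximum length; here it is a placeholder so the structure of the workflow is clear.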
Porr then published the first post. Titled “Feeling unproductive? Maybe you should stop overthinking,” it reached the number-one spot on Hacker News. Porr kept posting AI-generated pieces like it with little to no editing.
“From the time that I thought of the idea and got in contact with the PhD student to me actually creating the blog and the first blog going viral—it took maybe a couple of hours,” he says.
And that’s the scary part for Porr, who studies computer science at the University of California, Berkeley: the process was “super easy.”
Porr says he wanted to prove that GPT-3 could be passed off as a human writer. Indeed, despite the algorithm’s somewhat weird writing pattern and occasional errors, only three or four of the dozens of people who commented on his top post on Hacker News raised suspicions that it might have been generated by an algorithm. All those comments were immediately downvoted by other community members.
Porr showed that AI models can be used to churn out mediocre clickbait, diluting the value of online content. He also demonstrated how easily the language model could be misused, a fear experts have held for a long time.
Porr plans to do more experiments with GPT-3. But he’s still waiting to get access from OpenAI. “It’s possible that they’re upset that I did this,” he says. “I mean, it’s a little silly.”
What are your thoughts about this one?
(Image Credit: geralt/ Pixabay)