AI is finally good at stuff, and that’s a problem
Move over Shakespeare, here comes ChatGPT. A new storyteller and, apparently, a pretty good BS’er.
By Rebecca Heilweil – A few weeks ago, Wharton professor Ethan Mollick told his MBA students to play around with GPT, an artificial intelligence model, and see if the technology could write an essay based on one of the topics discussed in his course. The assignment was, admittedly, mostly a gimmick meant to illustrate the power of the technology. Still, the algorithmically generated essays — although not perfect and a tad over-reliant on the passive voice — were at least reasonable, Mollick recalled. They also passed another critical test: a screening by Turnitin, a popular anti-plagiarism software. AI, it seems, had suddenly gotten pretty good.
It certainly feels that way right now. Over the past week or so, screenshots of conversations with ChatGPT, the newest iteration of the AI model developed by the research firm OpenAI, have gone viral on social media. People have directed the tool, which is freely available online, to make jokes, write TV episodes, compose music, and even debug computer code — all things I got the AI to do, too. More than a million people have now played around with the AI, and even though it doesn’t always tell the truth or make sense, it’s still a pretty good writer and an even more confident bullshitter. Along with the recent updates to DALL-E, OpenAI’s art-generation software, and Lensa AI, a controversial platform that can produce digital portraits with the help of machine learning, GPT is a stark wakeup call that artificial intelligence is starting to rival human ability, at least for some things.
“I think that things have changed very dramatically,” Mollick told Recode. “And I think it’s just a matter of time for people to notice.”
If you’re not convinced, you can try it yourself here. The system works like any online chatbot, and you can simply type out and submit any question or prompt you want the AI to address.
How does GPT even work? At its core, the technology is based on a type of artificial intelligence called a language model: a prediction system that essentially guesses what it should write next, based on the previous text it has processed. GPT was built by training its AI on an extraordinarily large amount of data, much of it drawn from the vast supply of text on the internet, and with billions of dollars in funding, including early backing from prominent tech billionaires like Reid Hoffman and Peter Thiel. ChatGPT was also trained on examples of back-and-forth human conversation, which helps its dialogue sound a lot more human, as a blog post published by OpenAI explains.
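The core idea of "guess the next word from the previous text" can be illustrated with a toy example. GPT itself is a large neural network trained on web-scale data, but a minimal sketch of the same prediction principle is a bigram model: count which word tends to follow each word in a corpus, then predict the most frequent follower. The corpus below is an invented stand-in, not anything GPT was actually trained on.

```python
from collections import Counter, defaultdict

# A tiny invented corpus, standing in for the web-scale text a real model sees
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows each word: the simplest possible language model
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Guess the most likely next word, given only the previous one."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("on"))  # "the" always follows "on" in this corpus
```

Real language models like GPT condition on far more context than one word and use learned neural representations rather than raw counts, but the objective, predicting what comes next, is the same.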