Should you worry about ChatGPT coming for your job?
If you've spent any time scrolling social media feeds over the past week (who hasn't?), you've undoubtedly heard of ChatGPT. The captivating, mind-blowing chatbot, built by OpenAI and released last week, is a neat little AI that can generate remarkably convincing, human-sounding prose in response to user prompts.
You might, for example, ask it to write a plot summary of Knives Out in which Benoit Blanc is actually Foghorn Leghorn (just me?), and it'll produce something reasonably coherent. It can also help fix broken code and write essays so convincing that academics say they'd earn an A on college exams.
Its responses have astonished people to such a degree that some have even declared, "Google is dead." And some say it goes beyond Google: Human jobs are in danger, too.
The Guardian, for instance, announced "professors, programmers and journalists might all be out of a job in only a few years." Another take, from the Australian Computer Society's flagship publication, Information Age, suggested the same. The Telegraph said the bot might "perform your job better than you."
I'd advise holding your digital horses. ChatGPT isn't going to put you out of a job just yet.
A great example of why is provided by the story published in Information Age. The publication used ChatGPT to write an entire story about ChatGPT and published the result with a short introduction. The piece is about as straightforward as you could ask for: ChatGPT gives a basic retelling of the facts of its own existence. But in "writing" the piece, ChatGPT also fabricated quotes and attributed them to an OpenAI researcher, John Smith (who is real, apparently).
This underscores the key shortcoming of a large language model like ChatGPT: It doesn't know how to separate fact from fiction. It can't be trained to do so. It's a word organizer, an AI built to produce sensible-sounding sentences.
That's a critical distinction, and it effectively prevents ChatGPT (or the underlying large language model it's built on, OpenAI's GPT-3.5) from writing news or commenting on current affairs. (It also isn't trained on up-to-the-minute data, but that's another story.) It definitely can't do the job of a journalist. To say it can diminishes the act of journalism itself.
ChatGPT won't be heading out into the world to talk to Ukrainians about the Russian invasion. It won't be able to read the emotion on Kylian Mbappe's face when he wins the World Cup. It certainly isn't jumping on a ship to Antarctica to write about its experiences. It can't be surprised by a quote, completely out of character, that inadvertently reveals a secret about a CEO's company. Hell, it would have no chance of covering Musk's takeover of Twitter: It's no judge of truth, and it simply can't read the room.
It's interesting to see how positive the response to ChatGPT has been. It's absolutely worthy of praise, and the documented improvements OpenAI has made over its previous offering, GPT-3, are interesting in their own right. But the real reason it's captured so much attention is that it's so readily accessible.
GPT-3 didn't have a slick, easy-to-use web interface, and though outlets like the Guardian used it to generate stories, it made only a brief splash online. Building a chatbot you can converse with, and share screenshots from, completely changes how the product is used and talked about. That's also contributed to the bot being a touch overhyped.
Strangely enough, this is the second AI to cause a sensation in recent weeks.
On Nov. 15, Meta AI unveiled its own artificial intelligence, called Galactica. Like ChatGPT, it's a large language model, and it was touted as a way to "organize science." Essentially, it could provide answers to questions like "What is quantum gravity?" or explain math problems. Much like ChatGPT, you type in a question, and it provides an answer.
Galactica was trained on more than 48 million scientific papers and abstracts, and it gave convincing-sounding answers. The development team pitched the bot as a way to organize knowledge, noting it could generate Wikipedia articles and scientific papers.
Problem was, it was mostly pumping out garbage: nonsense that sounded authoritative and even included references to scientific papers, though those references were fabricated. The sheer volume of misinformation it produced in response to simple prompts, and how insidious that misinformation was, bothered academics and AI researchers, who let their thoughts fly on Twitter. The backlash saw the experiment shut down by the Meta AI team after two days.
ChatGPT doesn't seem like it's headed the same way. It feels like a "smarter" version of Galactica, with a much stronger filter. Where Galactica would offer up instructions for making a bomb, for instance, ChatGPT filters out requests that are racist, offensive or inappropriate. ChatGPT has also been trained to be conversational and to admit its mistakes.
And yet, ChatGPT is still limited in the same way all large language models are. Its purpose is to construct sentences or songs or paragraphs or essays by studying billions (trillions?) of words that exist across the web. It then puts words together, predicting the most likely way to arrange them, one after another.
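If you want to see that core mechanic stripped of the chat interface, here's a minimal sketch. It uses the small, open GPT-2 model through Hugging Face's transformers library as a stand-in, since the model behind ChatGPT isn't publicly available; the prompt and settings here are purely illustrative.

```python
# A bare-bones illustration of what a large language model does:
# repeatedly predict a plausible next token given the text so far.
# GPT-2 stands in here; ChatGPT's GPT-3.5 model is not public.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "ChatGPT is a chatbot that"
# Sample up to 25 new tokens, each one predicted from what precedes it.
result = generator(prompt, max_new_tokens=25, do_sample=True)
print(result[0]["generated_text"])
```

Notice that nothing in that process checks whether the output is true. The model just keeps choosing plausible next words, which is exactly why fluency and accuracy can come apart.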
In doing so, it generates some pretty convincing essay answers, sure. It also writes garbage, just like Galactica did. How can you learn from an AI that might not be giving you a truthful answer? What kind of jobs could it replace? Will the audience know who, or what, wrote a piece? And how can you tell the AI isn't being truthful, especially when it sounds convincing? The OpenAI team acknowledges the bot's shortcomings, but these are open questions that limit the potential of an AI like this today.
So, even though the little chatbot is entertaining, as shown by this great exchange about a guy who brags about pumpkins, it's hard to see how this AI would put academics, programmers or journalists out of a job. Instead, in the short term, ChatGPT and its underlying model will more likely complement what journalists, educators and programmers do. It's a tool, not a replacement. Just as journalists use AI to transcribe long interviews, they might use a ChatGPT-style AI to, let's say, come up with headline suggestions.
Because that's exactly what we did with this piece. The headline you see on this story was, in part, suggested by ChatGPT. But its suggestions weren't perfect. It proposed using phrases like "Human Employment" and "Human Workers." Those felt too stiff, too... robotic. Emotionless. So we tweaked its suggestions until we got what you see above.
Does that mean a future version of ChatGPT, or the underlying AI model it's built on (which may be released as early as next year), won't come along and make us irrelevant?
Maybe! For now, though, I feel like my job as a journalist is pretty safe.
First published on Dec. 7, 2022 at 2:31 p.m. PT.