The recent development of automated content-creation tools has become a conversation amongst digital marketers that is gaining momentum. These tools take keyphrases, entities, topics and competitor data as input and attempt to create the most optimised content for clients.
SEOs are always looking for ways to streamline and save time, and one of the lengthiest efforts in SEO is content creation. Marketers are seemingly both relieved at the idea of a helping hand and a little unnerved at the prospect of potentially being replaced by a machine.
However, thus far these tools have fallen short because of their inability to create human-sounding content. While their output may be ideally optimised for search engines, it reads mechanically. Enter GPT-3. This new release from OpenAI seems to create both long- and short-form content relatively easily and coherently from some fairly basic input parameters.
This is assuredly one of the most advanced forms of this type of technology. Last year GPT-2 got a resounding nod of approval for its attempt at writing a story based on the opening lines of George Orwell’s Nineteen Eighty-Four. If there was a tone of voice guide for dystopian literature, it would seem this AI had mastered it.
What is GPT-3?
GPT-3 is the latest language model developed by OpenAI, a machine learning company. It’s the most sophisticated of its kind produced by OpenAI so far and has now been released with closed access.
It’s relevant to content marketers because GPT-3 allows you to input a content brief in standard written English (meaning no prior programming knowledge is needed) and it will create content based on said requirements.
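In practice, "inputting a brief" means sending a prompt to OpenAI's API. As a rough sketch of what that request looks like (the field names follow OpenAI's public completions API, but the brief and the settings here are invented for illustration), the whole thing is little more than a prompt string plus a few sampling parameters:

```python
# Sketch of a GPT-3 completion request payload. Field names follow
# OpenAI's completions API; the brief and values are illustrative only.

def build_completion_request(brief: str, max_tokens: int = 250,
                             temperature: float = 0.7) -> dict:
    """Turn a plain-English content brief into an API request payload."""
    return {
        "prompt": brief,
        "max_tokens": max_tokens,    # upper bound on generated length
        "temperature": temperature,  # higher = more varied wording
        "n": 1,                      # number of completions to return
    }

brief = (
    "Write a 150-word product description for a stainless-steel kettle. "
    "Tone: warm and conversational. Audience: first-time homeowners."
)
payload = build_completion_request(brief)
print(payload["max_tokens"])
```

The point is that the "programming" is entirely in the brief itself: there is no markup or query language to learn, only the natural-language instructions a marketer would write anyway.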
It can even understand different styles and genres, as evident from the variety of ways it describes a giraffe:
Its intelligence comes from its vastness. GPT-3 is 350GB in size and has 175 billion parameters. For perspective, GPT-3’s predecessor, GPT-2, had a mere 1.5 billion parameters. For an even greater perspective, I’m not confident that I have any at all.
GPT technology uses its existing knowledge to respond to input requirements. This existing knowledge comes from a dataset: a large body of text from the internet that the model has ingested during training.
GPT-2’s dataset was 8 million web pages. GPT-3’s dataset is far larger still: a filtered crawl of much of the public web, supplemented by books and Wikipedia, amounting to hundreds of billions of words. That means the responses it can construct are far more complex than its predecessor’s, based on its greater understanding of language as a whole. The AI will then feed back the most cogent content based on what a human has inputted (at least machines haven’t progressed beyond us entirely).
The limitations of GPT-3
So far GPT-3 sounds both fascinating and terrifying. You may now be worried not only about your job security but also about the potential destruction of mankind. However, luckily for us, we’re not as close to a Promethean nightmare as it might seem.
While GPT-3 is exceedingly intelligent, it suffers from several flaws when producing content.
1. It cannot reason
The content GPT-3 produces doesn’t mean anything to GPT-3, as it has no common sense. That makes its output unreliable without human review.
Often when you ask GPT-3 simple questions, it cannot apply reason to its answer and can be easily confused. You can see a hilarious example of that here.
Or, I’ve included my favourite:
2. It’s too obedient
GPT-3 doesn’t know a good idea from a bad one. If a client requested that you create product page content for home appliances with the target audience of infant children in the style of a military commander, you’d be sending a Slack to your project lead. However, GPT-3 will not only accept those conditions, but return said piece of content within the same day.
What humans can do that GPT-3 cannot is apply rationale, make the case for why that style of content may not have the most impact, and instead suggest a data-led alternative to drive conversions.
3. It may suffer from memory loss
GPT-3 creates content based on the surrounding content. That means that GPT-3 may start strong, but can lose focus on longer pieces of content or take too much influence from the preceding content, making it susceptible to incoherence.
This series of “interviews” with celebrities was created using GPT-3 and, with a Q&A setup, GPT-3 performs to an almost creepy level of flawlessness. However, when given the slightest whiff of a red herring, you can almost smell the smoke coming out of GPT-3’s burning cogs:
So…should we be worried?
In a word, no. As Forbes explained, “GPT-3 is an extremely sophisticated text predictor”. This reassuring sentence now places GPT-3 in the same category as my iPhone’s autocorrect as it naively assumes I meant to say “duck”.
That analogy may be an oversimplification…
Oversimplification aside, the sentiment is still true. If you’re creating content, you shouldn’t see GPT-3 as an enemy. In fact, it’s an ally.
GPT-3 could change how your content and copy teams work
Although GPT-3 does not offer the level of sophistication required to replace human content marketers, it does offer a host of benefits that content marketers can utilise to make work smoother.
Here are some things GPT-3 might allow you to do:
- Quickly create variations of content to deliver to different audiences (think seriously bespoke press releases for targeting publications)
- Identify the most important pieces of information from a large dataset for creative campaigns
- Design sample headers
- Create rough drafts of content that you can refine
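The first of those uses, audience-specific variations, is really just prompt templating. A minimal sketch (the template, publications and audiences here are invented; in practice each rendered prompt would be sent to the model as a separate request):

```python
# Hypothetical prompt templating for audience-specific press releases.
# Each rendered prompt would be sent to the model as its own request.

TEMPLATE = (
    "Rewrite the press release below for {publication}, whose readers are "
    "{audience}. Keep all facts unchanged.\n\n{release}"
)

def build_variant_prompts(release: str, targets: dict) -> dict:
    """Return one tailored prompt per target publication."""
    return {
        pub: TEMPLATE.format(publication=pub, audience=audience, release=release)
        for pub, audience in targets.items()
    }

targets = {
    "TechCrunch": "startup founders and investors",
    "Good Housekeeping": "busy parents shopping for the home",
}
prompts = build_variant_prompts("Acme launches its new smart kettle.", targets)
print(len(prompts))  # one prompt per publication
```

The human still writes the source release and reviews every variant; the model only handles the repetitive reshaping for each audience.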
GPT-3 isn’t coming for our jobs, it’s coming to make our jobs easier.
If the digital marketing world chooses to embrace GPT-3, SEOs will likely use the technology for a host of applications, such as writing code, designing websites and scraping internet data.
The sophisticated forms of natural language processing that GPT technology offers may also be used to build automated content tools that help streamline content creation. However, until this AI develops a thinking, sentient brain, we don’t need to fear the rise of Skynet just yet.