Our writing staff here at TextGoods is 100% human, despite the continuing rise of believable content generated by artificial intelligence.
Large language models like OpenAI’s GPT-4 are becoming ever more capable of generating content across a huge array of topics, but it is important to understand the myriad drawbacks of using artificially generated content on your site.
#1 Citing the Real Source
ChatGPT does not provide sufficient citations to verify the authenticity and relevance of the data it sources.
Further, the most commonly used large language model (ChatGPT) only has access to data published prior to September 2021.
This means that all content generated by the model is out of date by nearly two years, even at the time of publishing.
ChatGPT’s knowledge cutoff makes it extremely difficult to generate timely, up-to-the-minute articles that drive viewership.
However, the knowledge cutoff is not the only missing piece of the puzzle.
When ChatGPT returns statistics, it does not cite a source directly.
While it often provides a general sense of where the data was derived from, the citation is incomplete and does not allow for verification of the statistic.
A lack of proper citation fundamentally undermines the credibility of statistic-driven conclusions published online.
Further, large language models are ultimately prompt driven tools.
This means that the quality of the output is often driven by the quality of the input.
Poor prompt structuring can lead to inaccurate, nonsensical, or incomplete answers.
Citing ChatGPT as a source is similar to a middle schooler citing Wikipedia: the information may well be correct, but it is improper to cite an aggregator in place of the original source.
Our human writers seek out topical, reliable, up-to-date sources to populate articles.
#2 Human Connection in Writing
ChatGPT’s responses lack human intent and style. The content generated by large language models is formulaic and based on the average output of all human writing. This means, at best, ChatGPT writes at an average level.
Large language models use a parameter called “temperature” to control how predictable their word choices are, keeping responses coherent and generally readable by humans.
Deployments typically keep the temperature “low,” meaning that each sentence is completed with the most standard, statistically predictable wording available.
This leads to the general feeling that ChatGPT talks in the voice of a young student first learning to write technically.
Simple sentences are interspersed with predictable complexities, generally without any overarching intent.
Ultimately, this results in responses that lack human intent, human connection, and personality.
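The effect of temperature can be sketched in a few lines of Python. This is an illustrative toy, not ChatGPT’s actual sampler — real models apply temperature to logits over tens of thousands of possible tokens — but the scaling step works the same way: dividing the logits by a small temperature sharpens the distribution toward the single most likely word, which is what produces that uniform, predictable cadence.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from logits scaled by temperature.

    Low temperature sharpens the distribution toward the most
    likely choice; high temperature flattens it toward uniform.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding
```

At temperature 0.1, logits of `[2.0, 1.0, 0.5]` yield the first option nearly every time; at temperature 10, all three options appear regularly. A low-temperature model, in other words, keeps picking the safest word.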
The result is most clearly seen when reading aloud one of ChatGPT’s responses. Try it out with the response below:
In reading the snippet aloud, you may have noticed that the response forces a dull cadence that perhaps reminds you of the most boring academic lecture of your life.
This is the result of “low temperature” writing, driving a chasm between your reader and your content.
Search Engine Journal emphasizes this in an article arguing that content creators do not need to fear being replaced by artificial intelligence:
Robots can’t replicate the human touch.
Robots are Robots.
They don’t have emotions, memories, or preferences, and they don’t love or hate anything.
This is exactly why human writers are irreplaceable. When writing is deeply human, it moves people much better than bare statements of facts.
We care about stories. To tell stories, you need a background of meaningful experiences.
Robots don’t have them.
–Search Engine Journal, “The Future of SEO & Content: Can AI Replace Human Writers?”
Art, fundamentally, is an expression of human connection and shared emotion.
This may seem esoteric and unrelated to content generation, yet human connection and shared emotions are the foundation of successful content.
Using AI content generation inserts a third party that is fundamentally incapable of sharing emotions with readers.
This is not a moralistic exposé on the human experience; the difference is expressed directly in tone and style.
#3 AI Content Violates Google’s Guidelines
AI-generated content also has severe impacts on the search engine optimization and viewership of an article.
In April of 2022, Google’s John Mueller put out a statement on the use of AI-generated content:
For us these would, essentially, still fall into the category of automatically generated content which is something we’ve had in the Webmaster Guidelines since almost the beginning.
And people have been automatically generating content in lots of different ways.
And for us, if you’re using machine learning tools to generate your content, it’s essentially the same as if you’re just shuffling words around, or looking up synonyms, or doing the translation tricks that people used to do. Those kind of things.
My suspicion is maybe the quality of content is a little bit better than the really old school tools, but for us it’s still automatically generated content, and that means for us it’s still against the Webmaster Guidelines.
So we would consider that to be spam.
–John Mueller, SEJ: Google Says AI Generated Content Is Against Guidelines
As Mueller clearly stated, AI-generated content is treated like the translation tricks of old and conflicts with the Webmaster Guidelines.
This can have serious impacts on SEO and click-through statistics.
However, there are acceptable uses of AI in the writing space that can enhance content creators’ productivity.
AI can be used to summarize web pages and scrape important tidbits from massive data sets.
While the “vanilla” version of ChatGPT is not truly capable of this at this time, custom-built applications using ChatGPT’s API can offload tedious tasks from content creators.
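One such tedious task is feeding a document that is far too long for a single prompt into a summarization pipeline. The sketch below shows a minimal chunking helper; the downstream model call is left as a commented, hypothetical step (`ask_model` is not a real API — an actual application would substitute its own client for ChatGPT’s API).

```python
def chunk_text(text, max_words=800):
    """Split a long document into word-bounded chunks small enough
    to fit into a single model prompt."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Hypothetical downstream step (assumes an API client and key):
# summaries = [ask_model("Summarize: " + chunk)
#              for chunk in chunk_text(report)]
```

The point is not the three lines of Python but the division of labor: the machine handles the rote splitting and condensing, while a human writer decides what the summary means and how to use it.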
#4 AI Companies Are Seeking to “Watermark” Content
Current methods for detecting AI-generated content can be largely unreliable if the writing is edited or style guidelines are applied.
However, AI companies, including OpenAI, are actively attempting to create a sort of digital “watermark” so that identifying AI-generated content becomes simple, reliable, and repeatable.
The proposed “watermark” feature is based on standard cryptographic methods that would allow any user with a key to tell whether or not a piece of content was automatically generated.
This approach was discussed by OpenAI researcher Scott Aaronson:
One simple example would be robots.txt: if you want your website not to be indexed by search engines, you can specify that, and the major search engines will respect it.
In a similar way, you could imagine something like watermarking—if we were able to demonstrate it and show that it works and that it’s cheap and doesn’t hurt the quality of the output and doesn’t need much compute and so on—that it would just become an industry standard, and anyone who wanted to be considered a responsible player would include it.
–Scott Aaronson, OpenAI’s attempts to watermark AI text hit limits
Even if the use of AI content has not affected your site to date, AI companies and regulators alike are actively developing new ways to ensure that AI content is detectable.
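The detection side of a keyed watermark can be sketched in a few lines. This is a toy illustration of the general idea, not OpenAI’s actual scheme: the secret key, the pairing of adjacent tokens, and the scoring rule are all assumptions made for demonstration. A generator holding the key would bias its word choices toward pairs the keyed function marks “green”; a detector with the same key then checks whether the green fraction is suspiciously far above the roughly 50% expected from human text.

```python
import hashlib
import hmac

def green_score(tokens, key):
    """Fraction of adjacent token pairs that fall in the keyed
    'green' set. Watermarked text would be generated to favor
    green pairs, so a score well above 0.5 suggests watermarking.

    `key` is a secret byte string shared by generator and detector.
    """
    if len(tokens) < 2:
        return 0.0
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        digest = hmac.new(key, f"{prev}\x00{cur}".encode(),
                          hashlib.sha256).digest()
        hits += digest[0] % 2  # one pseudorandom bit per token pair
    return hits / (len(tokens) - 1)
```

Because the bit for each pair is an HMAC of the tokens under a secret key, only key holders can compute the score — which is what would make such a watermark “simple, reliable, and repeatable” for them while staying invisible to everyone else.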
#5 The Human Factors
The TextGoods team employs a fully human writing staff, providing jobs and invigorating the economy.
These humans also provide a crucial function: feedback.
Our writers provide input to the TextGoods process and allow for an atmosphere of continual improvement.
When you use our services, you can rest assured that a full team of human writers has developed a process to deliver you the most successful content for your application.
The best content generators of the modern world are, simply put, humans! As such, we promise that every piece of content you receive from us is 100% human crafted and edited.