While many of us are grateful for the benefits of AI tools such as ChatGPT, we can’t ignore the growing concern about their potential misuse. This has led to calls for a six-month pause on AI development. And most recently, Geoffrey Hinton, often called the "Godfather of AI," acknowledged the risk of amplifying existing biases and inequalities if AI is not developed responsibly.
So what does this mean for us as marketers? How can we use AI consciously to avoid spreading disinformation and misinformation? And is our creativity proving to be the first casualty of the rise of the robots?
AI-generated content is like the internet consuming 10,000 calories a day.
Let’s take a moment to acknowledge how Large Language Models (LLMs) like ChatGPT and AI chatbots work behind the scenes. When you ask a question, the model analyzes the text to build a programmed understanding of the context and meaning of the input.
It then generates an answer based on patterns learned from its training data, which (in its own words) includes the following sources:
- Books from Project Gutenberg and other online libraries
- Online encyclopedias such as Wikipedia
- News articles from various sources
- Online forums and discussion boards
- Academic papers and publications from research institutions
- Social media posts and comments
- Online blogs and articles from various industries and niches
- Chat logs and customer service interactions
It then delivers the answer to your question as a digestible, conversational response that reads like your smartest friend giving you sound advice.
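To make "patterns learned from training data" less abstract, here’s a deliberately tiny, hypothetical sketch in Python: a bigram model that predicts the next word purely from word-pair counts in a toy corpus. Real LLMs use neural networks trained on vastly larger corpora, but the core principle is the same: generate whatever statistically tends to follow, with no notion of whether it’s true.

```python
# A toy illustration (not ChatGPT's actual mechanism) of the core idea
# behind language models: predicting the next word from training text.
from collections import Counter, defaultdict

# A hypothetical miniature "training corpus" standing in for the
# web-scale sources listed above (books, Wikipedia, forums, blogs...).
corpus = (
    "the internet is full of good content . "
    "the internet is full of bad content . "
    "the internet is full of bot content ."
).split()

# Count which word follows each word (a simple bigram model).
next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return next_word[word].most_common(1)[0][0]

print(most_likely_next("internet"))  # "is" - seen after "internet" 3 times
print(most_likely_next("full"))      # "of" - seen after "full" 3 times
```

Notice the model has no concept of accuracy: feed it bot-generated or false text and it will reproduce those patterns just as confidently.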
But is the response accurate?
Well, just look at its sources: editable sites like Wikipedia, manipulative clickbait articles, poorly researched blogs, reactive social media posts, and chat logs. A slew of humanity’s most ill-formed thoughts mixed in with its best.
AI also doesn’t distinguish between human-created and bot-created content. So information from weaponized troll bots that have been deployed with the sole purpose of spreading misinformation and disinformation may also be read and included in responses. Online content is one and the same to AI.
Oh, and it left out a source: its own imagination. Sometimes ChatGPT just makes things up, a failure mode researchers call "hallucination."
As credible content creators and content marketers, we need to be aware of this. Some of us are now over-automating our content for the sheer sake of churning it out at speed. Not only can this play a role in fattening up the internet with the disinformation and misinformation that AI feeds us, it also dilutes our authenticity as creators.
At the time of writing, there’s no official third party that’s been tasked with fact-checking AI chatbots.
For professionals, creatives, and anyone seeking to engage people with their content on a deeper, more meaningful level, there’s nothing worse than finding out that trusted organizations or the people we follow have lazily automated the bulk of their content.
Take CNET, for example, a trusted source for technology news and reviews. It had to issue corrections on 41 of the 77 stories it had quietly published using AI over a period of months. After this PR nightmare, CNET owner Red Ventures then hit its remaining human staff with a fresh round of layoffs.
All of this sounds like disaster porn, but it’s not. AI can make creatives more efficient, inspire us to push the boundaries of what we thought was possible with our content, and enhance our reach. But like all things in life, moderation is encouraged.
Use AI tools to lay the foundation, but don’t let them architect your house. For now, Google is taking a fair stance on AI-generated content while stressing that the best way to safeguard your ranking is to write for humans. The same holds for digital marketing across the board. Put your audience back at the heart of what you do instead of churning out content. It’s the best way to maintain trust and authenticity for your brand.