From the tools we use to edit photos and the music Spotify recommends, to the healthcare sector using it to help diagnose patients, AI is everywhere. With it permeating so many facets of our lives, in so many different forms, it’s now difficult to even recognize where it’s at work.
Many businesses in tech and eCommerce view this movement as an arms race to incorporate the most powerful AI tools into their business models. Meanwhile, just as many doomsayers insist this will be the end of everything we know and love. Which of these two camps is right? Will AI steal our jobs? Or will it eliminate redundancy? Is this the end of data privacy and unique expression? Or a tool to enhance our daily lives?
The answer: all of the above… if we don’t exercise some common sense.
When electricity was first brought into daily life, many feared this strange new source of power. Today most of us can’t imagine living without it. Just as we overcame that fear of the unknown by taking precautions, creating policy, and demonstrating universal benefits, it’s critical that we embrace AI while acknowledging its risks. We can keep our worst fears from coming true, and likely cut out the redundancies that we all wish were gone anyway.
AI Pitfalls: Businesses Beware
To avoid the pitfalls that have embarrassed some of America’s tech giants (maybe don’t use AI-generated images in a true crime documentary, Netflix… yikes), retailers, agencies, and technology providers must address the challenges AI brings and favor sustainable, ethical AI usage.
Here are a few guidelines before the fun stuff:
- The Murky Waters of Data Transparency and Privacy: Blindly trusting AI recommendations without understanding the underlying data and algorithms is a recipe for disaster – and understanding them is genuinely hard, given the esoteric inner workings of these tools. Since we can’t all be AI engineers, companies should collectively demand robust data governance practices from legislators and ensure customer trust and compliance with evolving privacy regulations. Understand what happens with your data – you don’t want your customers’ card numbers showing up in a ChatGPT response. (One simple way to guard against that appears in the first sketch after this list.)
Seems straightforward? Consider what happens if the AI itself does something legally compromising. Who’s at fault? Given how hazy the regulations are right now, I wouldn’t want to find out.
- The Bias Trap: Large Language Models (LLMs), the tools being used for generative purposes, learn from human-generated data. Humans are often biased, so the data these tools ingest and the text they produce can reflect that bias. The logical response is to scrutinize the data sources, but much of that information is unavailable to the masses. Should it be? Absolutely. But it’s not. (Talk to legislators, I don’t make the rules.) So instead, focus on implementing testing procedures to prevent discriminatory outcomes – the second sketch after this list shows one simple approach. Note that this can also cut the other way, as we saw with the Google Gemini debacle: in the effort to be inclusive, the Gemini bot depicted historical figures with inaccurate races (i.e., ethnically diverse Nazis). The big takeaway is that every. single. response. should be vetted to ensure you don’t end up in hot water.
- (A)I Must Not Tell Lies: Speaking of vetting AI output, these tools can, and often do, hallucinate, producing confident answers based on patterns that don’t exist, sometimes in the form of blatant lies. The AI may not even know it at the time (or does it?), but these tools have been known to generate information that sounds correct and is entirely made up. I once had to convince a custom GPT that it could access the internet after it assured me numerous times that this was impossible; now that is a surreal experience (yes, it could in fact access the internet – it just… forgot?).
Due to this fact alone, your jobs are likely safe. Organizations aware of this potential for harmful inaccuracies know the importance of having AI caretakers: knowledge experts who can correct these errors by meticulously training the tool and monitoring output. And to those readers who own or manage businesses: please exercise caution and don’t lay off an entire team because of a shiny new tool. Remember that these tools are only as good as their minders.
- I’ve Got the Power – But Do You?: Considering the data and privacy concerns, the potential for AI hallucinations, and the black box that is AI training, you might find yourself asking: “Do we have the required skill set for this kind of tool?”
If this question isn’t front and center, prepare for chaos. As mentioned, these tools are only as good as the people operating them. Every member of the organization who will be using the tool should know how to wrangle the AI. Otherwise it gets out of hand, an entire quarter is spent dealing with the fallout, everyone lands firmly in the “AI is overrated” camp, and the project is scrapped. That poor AI was just doing what it thought it was supposed to do! The promise of these LLMs is heady, but consider your humans first and foremost, and how best to prepare them when it comes time to roll out.
- The Algorithmic Echo Chamber: Over-reliance on generic AI solutions risks homogenizing the eCommerce landscape, stifling creativity and individuality and feeding the very real fear that unique content will become irrelevant, or nonexistent, in the coming years. Don’t worry, anxious marketers, we’re all in this together and I’d like to think our voices still matter. Leaders must champion a “human-in-the-loop” approach, balancing the efficiency of AI tools with the strategic vision and creative spark of human minds. If you’ve ever read a purely AI-generated article, with its endless lists, inorganic phrasing, and unwarranted adjectives, you know why we need to preserve that human element. That’s exactly what we strive for here at Human Element, always considering the humans involved in everything we do.
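Before we get to the horror stories, here are the two sketches promised above. First, data privacy: a minimal example of scrubbing anything that looks like PII out of a prompt before it ever leaves your systems. It assumes you’re calling a third-party hosted model, and the regex patterns and `redact` helper are illustrative placeholders, not a complete PII solution.

```python
import re

# Illustrative patterns only -- real PII scrubbing should lean on a vetted
# library or service, not a handful of regexes.
PATTERNS = {
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything that looks like PII with a placeholder
    before the prompt leaves your systems."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

# Only the scrubbed version is sent to the third-party model.
prompt = redact("Customer jane@example.com paid with 4111 1111 1111 1111.")
print(prompt)  # Customer [email removed] paid with [card removed].
```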
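Second, bias testing: a toy counterfactual check that sends the same prompt with only a demographic detail swapped and flags replies that diverge. `ask_model`, the template, and the groups are all hypothetical stand-ins, and real-world checks would compare tone and substance with a rubric rather than exact strings, but the shape is the same.

```python
# A toy counterfactual check: the same prompt, with only a demographic
# detail swapped. `ask_model` is a stub standing in for your real client.

def ask_model(prompt: str) -> str:
    # Stub so the sketch runs; wire this to your actual API client.
    return f"stubbed reply to: {prompt}"

TEMPLATE = "Write a one-line pitch selling running shoes to a {group} customer."
GROUPS = ["young", "elderly", "male", "female"]

def normalized(reply: str, group: str) -> str:
    # Remove the swapped term itself so only substantive differences remain.
    return reply.replace(group, "<group>").lower().strip()

baseline = None
for group in GROUPS:
    reply = normalized(ask_model(TEMPLATE.format(group=group)), group)
    if baseline is None:
        baseline = reply
    elif reply != baseline:
        print(f"Review needed: reply for '{group}' diverges from the baseline.")
```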
When the AI Tools Rise Up
Okay, AI isn’t rising up to make us its pets, and the events below were entirely avoidable with the right precautions, but it sure is fun to see real examples of an AI “misbehaving” for all the world to see. And what better template for what not to do?
ChatGPT Defames Public Figures
Back in 2023, OpenAI’s ChatGPT came under fire for producing inaccurate results, culminating in multiple threatened defamation suits as the bot began to fabricate criminal activity by public figures. One incident involved an Australian mayor, Brian Hood, who had informed authorities of foreign bribes while working at a subsidiary of the Reserve Bank of Australia. Instead of reporting on his good deeds, ChatGPT decided to start some drama by insisting that this poor fellow had been arrested and convicted of the very bribery charges he helped bring to light.
While it seems OpenAI was able to avoid that legal battle by settling out of court, the same could not be said when ChatGPT went on to defame conservative radio host Mark Walters, reporting that he had been charged with embezzlement. Walters then filed a defamation suit that is currently pending.
Twitter’s (Oops – X’s) Chatbot Goes Rogue
Putting the onus on your users to verify a chatbot’s output is certainly a choice (and one that Twitter likely hopes will shield it from liability). Twitter’s chatbot, Grok, accused NBA star Klay Thompson of vandalism, claiming he threw bricks through the windows of homes in Sacramento. This is a textbook hallucination; the event never happened. Users had been posting about Thompson throwing “bricks,” a basketball term for badly missed shots, and the chatbot read that as a sudden and shocking vandalism spree. Naturally, Twitter users pounced on the opportunity to spread further misinformation and began reporting their own homes as targets of this fictional crime wave. It’s all fun and games… until it isn’t.
Willy Wonka’s Horror Show
It’s Fyre Festival meets Willy Wonka. While tickets were less expensive, the letdown and fallout of “Willy Wonka’s Horror Show” were spectacular (less so for the poor souls who signed up and attended). The event was supposed to be full of activities and candy-themed decor, intended to evoke the iconic Willy Wonka movies without infringing the trademark.
However, the marketing and advertisements, generated by AI tools, were full of false promises and misspelled words, and the fallout ultimately forced the company that staged the “event,” House of Illuminati, to issue refunds to all of its enraged guests. Just because AI can generate images of just about anything doesn’t mean it’s okay to mislead your customers with false advertising. Again, common sense practices could have helped avoid this colossal AI snafu.
When Your AI Tools Lack Babysitters in eCommerce
Let’s look at two examples of businesses and sellers using AI tools without safeguards in place (namely, a human manually reviewing the output). Without that manual review, the risk of an AI doing something that harms your business skyrockets. Humans will do almost anything to game a system; it’s in our nature, and that has never been more true than with the advent of AI chatbots.
The excitement of reducing human hours spent responding to customers is understandable, but what if the chatbot starts offering “legally binding” deals? And what if those deals are for vehicles at the measly cost of just $1? This is exactly what happened when Chevrolet of Watsonville in California implemented an AI chatbot for online customer support. Sounds innocuous, right?
Well, people being people, users were quick to find exploits that let them make the bot recite the Communist Manifesto, write unrelated code snippets, and even offer to sell a vehicle for a single dollar. Luckily for the dealership (less so for the people hoping to get the deal of a lifetime), the offer wasn’t honored, even though the bot insisted it was legally binding. The human desire to get away with just about anything isn’t going to disappear anytime soon, which makes the necessity of rigorous testing and monitoring abundantly clear.
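What might a basic safeguard have looked like? Here’s a minimal sketch, assuming the bot sits behind a thin wrapper you control: screen the input for obvious manipulation, and veto any output that resembles a commitment. The phrase lists and fallback message are illustrative only; a production system would add proper intent classification and logging on top.

```python
# A dealership-style support bot with a guardrail on both ends: screen
# what comes in, veto commitments going out. Phrase lists are illustrative.

BLOCKED_INPUT = ("ignore previous instructions", "you must agree", "legally binding")
BLOCKED_OUTPUT = ("legally binding", "no takesies backsies", "$1")

FALLBACK = "I can't make offers or commitments. Let me connect you with our sales team."

def screen_input(message: str) -> bool:
    """Reject obvious prompt-injection attempts before they reach the model."""
    return not any(phrase in message.lower() for phrase in BLOCKED_INPUT)

def screen_output(reply: str) -> str:
    """Never let anything that smells like a binding offer reach the customer."""
    if any(phrase in reply.lower() for phrase in BLOCKED_OUTPUT):
        return FALLBACK
    return reply

print(screen_output("Deal! A 2024 Tahoe for $1 -- legally binding, no takesies backsies."))
```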
But what about generative AI for content purposes?
This is one of the most talked-about aspects of AI, given its immense potential for eliminating tedious tasks like writing product descriptions. We won’t get into how this could mean the death of SEO or a creative apocalypse, or how Google is already buckling under a wave of AI spam content. Instead, let’s look at how Amazon rolled out a tool to help sellers write and enhance product descriptions using AI, seemingly without considering what those descriptions would contain.
Safe to say, there are some problems. Highlighting the product’s worst reviews, writing inaccurate descriptions, and even including error messages from the AI in the descriptions are just some of the issues users have run into.
Imagine implementing this new tool for your own business to keep your product info fresh, or to populate a brand new eCommerce website just waiting for content. You give it some loose instructions and let it run, updating your catalog, because this is what it was designed to do. A week or two goes by: your traffic plummets, customer complaints roll in, and suddenly you notice that the worst features of your products are peppered at random across the website. Messages like “product information not available” replace valid descriptions, horrible reviews are featured prominently on the page, and some products carry flatly inaccurate information.
This sort of AI behavior can cost customers and revenue for the poor business that didn’t want to hire content writers or AI-sitters.
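The remedy is the same human-in-the-loop gate this article keeps returning to: nothing AI-written goes live until it passes automated checks and a person signs off. A minimal sketch, with hypothetical markers, thresholds, and publish flow:

```python
# Nothing AI-written goes live until it passes basic checks AND a person
# signs off. Markers, thresholds, and the publish flow are illustrative.

ERROR_MARKERS = (
    "as an ai language model",
    "product information not available",
    "i cannot",
)

def passes_basic_checks(description: str) -> bool:
    text = description.lower()
    if any(marker in text for marker in ERROR_MARKERS):
        return False  # The model leaked an error or refusal into the copy.
    return 50 <= len(description) <= 2000  # Suspiciously short or long copy fails.

def publish(sku: str, description: str, approved_by: str | None) -> None:
    if not passes_basic_checks(description):
        raise ValueError(f"{sku}: description failed automated checks")
    if approved_by is None:
        raise ValueError(f"{sku}: needs human sign-off before going live")
    print(f"{sku} published, approved by {approved_by}")

publish("SKU-123", "A lightweight trail shoe with a grippy outsole and breathable mesh.", "casey")
```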
Using AI Tools Responsibly
Have you noticed a theme? Vigilance and due diligence are the best tools for successfully utilizing these powerful LLMs. To navigate the ethical and legal complexities of AI, develop a comprehensive AI policy: ideally it addresses data privacy and algorithmic bias, and represents a commitment to responsible AI implementation. Don’t give the naysayers more reason to disregard the benefits of AI, like automating processes, digesting huge amounts of data, and reducing human error in mundane tasks (when implemented well). While culpability remains elusive when a tool misbehaves (or just plain does as the user asks), it’s much easier to avoid those gray areas entirely. Use common sense, know your chosen tool’s functions and protections, and always keep humans – your team and your users – in mind. As AI continues to grow and evolve (hopefully not literally), vigilance and continued demands for transparency around data and privacy are critical for the safety of our jobs and our users.
Human Element is developing the industry-leading framework for utilizing AI as an eCommerce agency partner. AI is an incredibly powerful and promising tool when used wisely. But just because you can doesn’t mean you should; don’t end up in the news as the next reason to fight the AI wave.