<img alt="" src="https://secure.insightful-enterprise-intelligence.com/784283.png" style="display:none;">
Mark DiMattei

Generative AI and the Problem Factory

Blind faith in AI is causing trouble for its users

Mar 10, 2024 8:00:00 PM

 

Sign up for The Key Point of View, our weekly newsletter of blogs and podcasts!

 

Recently, an event calling itself “Willy’s Chocolate Experience” was held in Glasgow, Scotland. The brightly colored promotional artwork promised an event that would bring the wonders of Roald Dahl’s book (and the associated movies) to life. Instead, attendees found what one visitor described to the BBC as “little more than an abandoned, empty warehouse,” with the poor actors forced to make the best of a bad situation. Part of the outrage came from the use of artificial intelligence (AI) to create the promotional artwork…as well as the basic concept of the experience itself. Countless people on social media have glommed onto the appearance of an actor who emerged from behind a mirror in a white face mask and black robes to scare passing children. (The being, called “the Unknown” in the event’s mythos, doesn’t exist in any of the source material…)

 

Source: BBC

 

While some could excuse the use of AI in creating the promotional materials (despite the danger of setting false expectations), there’s little defense when it’s abused to create other types of content. Paul Connell, one of the main actors hired for Willy’s Chocolate Experience, told the BBC that he was forced to make the best of “15 pages of AI-generated gibberish” that involved a plot of Willy trying to stop the Unknown from stealing his “anti-graffiti gobstopper”.

 

An Everlasting Gobstopper That Changes Flavors…

While Willy’s Chocolate Experience is one of the most recent examples of AI run amok, it certainly isn’t the only one. In May and June of last year, two different sets of lawyers found themselves in trouble after submitting legal briefs in which the generative AI they had used fabricated the cases they cited as precedent. And in 2021, Zillow Offers over-relied on AI technology to price homes; the algorithm purchased houses at prices higher than its own estimates of their future selling prices, ultimately forcing the parent company to let go of around 2,000 employees.

 

There are also some disturbing trends regarding the use of AI that are tied to human biases.

  • In August 2023, tutoring company iTutorGroup came under fire for using AI recruiting software that automatically rejected female applicants older than 55 and male applicants older than 60. Likewise, Amazon had to scrap an AI program designed to review job applications in 2018 because it consistently favored male applicants over female ones.
  • According to Scientific American, a study conducted in 2019 found that an AI program used in the US to help select patients in need of “high-risk care management” was less likely to select Black patients than White ones.
  • In 2016, Microsoft released an AI chatbot named Tay. The bot was designed to behave like an ordinary teenage girl and interact with users on Twitter using a combination of machine learning and natural language processing, but within a matter of hours it devolved into tweeting racist, misogynistic, and anti-Semitic statements.
  • Facial recognition technology is a huge issue when it comes to AI. In 2018, the ACLU found that Amazon’s Rekognition software falsely matched 28 members of the US Congress with mugshots of people who had been arrested, and these false matches disproportionately affected BIPOC members. Apple’s Face ID is also notorious for allowing people with similar facial structures to unlock someone else’s device, with Black and Asian users being much more likely to experience this than White users.

 

Finding a Golden Ticket

We, as a society, have become so used to technology that it’s safe to say we’ve come to rely on it without question. Many of us take for granted that we can spellcheck our e-mails and reports, look up the answer to any question that pops into our heads, or contact people around the world with just the push of a button. We are also more than aware that AI isn’t the perfect technology that science fiction books, TV shows, and movies have made it out to be.

 

When it comes to using generative AI, we need to apply the same scrutiny we once did to Wikipedia and other collaborative websites. Artificial intelligence has the potential to be something truly great, but we are still in the stage of working out the bugs and need to be much more vigilant about the content we create using tools like ChatGPT or Google Bard.

 

Ultimately, we should treat AI output less like finished work done for us and more like the results of a search engine. Instead of trusting that our prompts have created something final, we would be better served by thoroughly reviewing any “complete” product and cross-checking it against other sources or expert experience. Otherwise, the Unknown will ruin more than just Willy’s Chocolate Experience…

 

Browse through our Industry Reports Page (latest reports only). Log in to the InfoCenter to view research on artificial intelligence and generative AI through our WorkPlace CompleteView Advisory Service. If you’re not a subscriber, contact us for more information.

 

Keep Reading

Consumer Perspective of Artificial Intelligence

What to Do When Artificial Intelligence Gets Dumber?

Deepfakes Are Making It Harder to Discover Deception in Media