Mark DiMattei | Mar 9, 2026 | 4 min read

Things Are Getting Sloppy at Work

How workslop is invading our offices


 

Artificial intelligence (AI) has become more controversial than initially expected. Beyond the environmental toll of AI data centers (each consuming enough water to supply a small town), the concerns over AI hallucinating information that doesn't exist, and the flood of books, songs, and images poorly created with generative AI, we now face a new danger in the form of "workslop."

While we were once free of dealing with slop on a professional level, that reprieve has ended now that companies have begun to introduce specialized chatbots dedicated to their industries as well as AI tools of their own creation. As defined by the Harvard Business Review, workslop is "low-quality, AI-generated content in the workplace that looks polished but lacks substance." This content often wastes time and money as employees are forced to correct inaccurate information or transform generic output into something more in line with corporate standards.

 


 

These Aren’t Your Parents’ Spam Emails…

Unfortunately, this new type of slop isn’t as harmless as our earlier encounters with images of people with too many hands or autotuned “girl power” anthems that turn out to be about drug-fueled bacchanals. According to BetterUp Labs and the Stanford Social Media Lab, workslop “creates the illusion of progress—slick slides, lengthy reports, overly tightened summaries, or code without context. Rather than saving time, it leaves colleagues to do the real thinking and clean-up.” In another article from the Harvard Business Review, the authors noted that employees forced to deal with workslop not only saw it create friction in the office (they lost time correcting bad code or statistically incorrect data in presentations), but also came to think less of their coworkers and to resent higher-ups who indiscriminately push “AI in everything” initiatives, which in turn feeds the production of workslop.

 

Workslop statistics (Source: BetterUp Labs)

 

There is a glut of AI tools out there. According to SellersCommerce, 78% of companies use AI for at least one operational function, especially within IT initiatives or for marketing and sales tasks. That said, AI start-up Vectara has found that AI models can hallucinate anywhere from 3% to 27% of the time, with more complex tasks seeing higher rates. (The company has also begun to maintain an AI hallucination leaderboard to track which tools fabricate the most incorrect content.) Hallucination rates also differ across platforms and industries: a study by the Stanford Institute for Human-Centered AI found that general-purpose chatbots hallucinated anywhere between 58% and 82% of the time on legal queries.

The main problem with this “we need AI functionality company-wide” mentality, however, is that it doesn’t produce real results. In a study by MIT’s Project NANDA, researchers found that 95% of organizations are seeing zero return despite $30 to $40 billion in enterprise investment in generative AI. While 60% of the study’s respondents evaluated enterprise-grade options, just 20% reached the pilot stage of their AI solution and only 5% got past that point to an actual release. The study notes that “The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.”

 

Keypoint Intelligence Opinion

At the risk of sounding like a broken record, a lot of the issues that feed into these slop-creating initiatives come from a lack of any formal regulation. Generative AI relies on large language models (LLMs) populated with any data their creators can find, often stealing content outright and pulling information from dubious sources that give equal weight to incorrect nonsense and factually accurate content. This feeds the AI ouroboros: AI-generated content gets fed into another LLM, the diversity of the training data shrinks, and hallucinations multiply. What could’ve been a great tool that helped us on a professional and personal level is now so inbred that it better resembles a European noble dynasty in the 1700s than anything trustworthy or fully functional.

When weighing the pros and cons of creating an AI tool for a company, we really need to consider much more than the idea that “I think this could be revolutionary,” especially given the proliferation of AI-based tools that already exist. There needs to be a genuine hole in the market (a real position to fill) to warrant such a tool, particularly when we already have widespread social backlash against the wastefulness of AI, concerns about the AI bubble popping, and a general distaste for engaging with subpar content.

After all, how useful is an AI tool that analyzes data and collates it into presentations if we still need someone to go through and make sure everything is correct? A shortcut isn’t helpful if it puts you further back on your path.

 

Stay ahead in the ever-evolving print industry by browsing our Report Store for the latest insights. Log in to the InfoCenter to view research on AI through our Workplace- and Production-based Advisory Services. Not a subscriber? Contact us for more information.

 

Keep Reading

The Artificial Leading Artificial Intelligence

Artificial Intelligence: Who Writes This Junk?

 

Mark DiMattei

Mark DiMattei is the Manager for Keypoint Intelligence’s Publishing, Editing, and News (PEN) group. He is responsible for editing all of the company’s deliverables for grammar and content, ensuring that all documents adhere to the company’s standards. He also assists in authoring reports and blogs on topics spanning the production printing and office document technology markets.
