Mark DiMattei

INFOGRAPHIC: The Ethics of AI

Democratizing artificial intelligence

Nov 26, 2024 7:00:00 PM


I’ve spoken before about the issues concerning artificial intelligence (AI), especially generative AI, which has earned the nickname “Plagiarism Machine” on social media and other online forums. And while it’s easy to point out shortcomings and issues, it’s far more fruitful to look for ways to take a problematic technology like generative AI and make it better.

The issue is that there are no ethical standards governing the creation of large language models (LLMs) or the harvesting of the data that feeds them. For AI to be better, we (as its users) need to be better…

Keypoint Intelligence Opinion

We, as a society, have made a point of regulating so many other aspects of our lives. We have laws governing how food is produced for sale, licenses required to drive cars, and official safety practices for workplaces, whether they are industrial construction sites or offices. So why is AI being treated as something special, as something above our laws on ownership and copyright?

Whether we ultimately decide to pursue a data-ownership democracy, digital socialism, or some other form of content control, the main point is that we need some framework so that artificial intelligence can be held to the same standards as everything else in our lives. As it stands now, many of generative AI’s issues (hallucinations, incorrect information, junk content creation) stem from the fact that there is no regulation of how LLMs collect the data they use. We can’t fix these issues if we can’t see where the data is being pulled from and assert control over who and what gets included.
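
To make that concrete, here is a minimal sketch in Python of what provenance-aware data collection could look like. The record fields, license list, and filtering rule are all hypothetical (not any vendor’s actual pipeline); the idea is simply that every training record carries its source, license, and the creator’s consent, and anything that can’t be traced or wasn’t permitted is excluded before training.

from dataclasses import dataclass

@dataclass
class TrainingRecord:
    text: str                 # the content itself
    source_url: str           # where the data was pulled from
    license: str              # e.g., "CC-BY-4.0", "all-rights-reserved"
    creator_opted_in: bool    # explicit consent from the creator

# Hypothetical allow-list of licenses that permit reuse in training
PERMITTED_LICENSES = {"CC0-1.0", "CC-BY-4.0", "public-domain"}

def is_usable(record: TrainingRecord) -> bool:
    """Keep a record only if its origin is known and its terms allow reuse."""
    return (
        bool(record.source_url)
        and record.creator_opted_in
        and record.license in PERMITTED_LICENSES
    )

corpus = [
    TrainingRecord("A thoughtful essay", "https://example.com/essay", "CC-BY-4.0", True),
    TrainingRecord("Scraped artwork description", "", "all-rights-reserved", False),
]

training_set = [r for r in corpus if is_usable(r)]  # only the first record survives

None of this is hard to build; what’s missing is any requirement to build it.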

There is also no way for content creators to be credited when their work ends up in an LLM, nor an easy opt-out structure to keep that work from being taken against their will. This is the crux of the issue: no one is going to want to spend hours of their own time and effort crafting a thoughtful essay, drawing an emotionally driven image, or writing the perfect code if it’s going to be chopped up and fed to an LLM, only to be spat out in worse quality and held together with pieces of other writers’, artists’, and coders’ work. This isn’t Frankenstein’s monster we’re making…it’s your neighbor Frank Stein’s monster, and Frank lost two fingers and got tetanus trying to repair the stairs on his front porch.
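
One partial opt-out mechanism does already exist: the decades-old robots.txt convention, which some AI crawlers say they honor. As a minimal sketch (assuming a hypothetical crawler that identifies itself as “ExampleAIBot”), a scraper that actually respected opt-outs would check each site’s robots.txt before collecting a page:

from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

# Hypothetical user-agent string for the crawler
CRAWLER_USER_AGENT = "ExampleAIBot"

def may_collect(page_url: str) -> bool:
    """Check the site's robots.txt before scraping a page for training data."""
    parsed = urlparse(page_url)
    robots = RobotFileParser()
    robots.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    robots.read()  # fetch and parse the site's robots.txt
    return robots.can_fetch(CRAWLER_USER_AGENT, page_url)

if may_collect("https://example.com/my-essay"):
    print("Site permits crawling; the page could be collected.")
else:
    print("The creator opted out via robots.txt; the page is skipped.")

The catch, of course, is that honoring robots.txt is voluntary; an opt-out only protects creators if crawlers are obligated to respect it.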

At the risk of sounding like those anti-piracy ads shown before movies on DVDs, generative AI (in its current state) is stealing, and we need to start caring a little more about it.

Browse through our Industry Reports Page (latest reports only). Log in to the InfoCenter to view research on data ownership, generative AI, and large language models through our Artificial Intelligence Advisory Service. If you’re not a subscriber, contact us for more info by clicking here.

Keep Reading

The Artificial Leading Artificial Intelligence

INFOGRAPHIC: Poisoning Data to Protect Artists

Generative AI and the Problem Factory