In a recent New York Times article, the authors discuss how OpenAI could add a watermark-like feature to the text ChatGPT generates, helping to address concerns about artificial intelligence (AI)-generated content being submitted as academic work or passed off as news despite drawing on unreliable sources. Their thesis rests on the fact that ChatGPT is built on a large language model (LLM), which assigns weights to candidate words as it generates a response to a prompt. A watermark would nudge the model toward certain words, which would then appear in the generated text more frequently than a human writer would use them—making it easier to identify whether a passage was written by a person or a machine.
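To make the idea concrete, here is a toy sketch of how such a detector might work. This is not OpenAI's actual method; the hashing scheme, the "green list" split, and the threshold are all illustrative assumptions. The premise: if generation is biased toward a secret, pseudo-random list of words, a detector holding the same secret can count how many words fall on that list and flag statistically improbable counts.

```python
import hashlib
import math

def is_green(word: str, key: str = "secret") -> bool:
    """Deterministically assign roughly half the vocabulary to a 'green list'
    using a keyed hash. A watermarking generator would favor these words."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str, key: str = "secret") -> float:
    """z-score of the green-word count versus the ~50% expected by chance.
    A large positive score suggests the text was generated with the bias."""
    words = text.split()
    n = len(words)
    greens = sum(is_green(w, key) for w in words)
    expected, stdev = n / 2, math.sqrt(n / 4)
    return (greens - expected) / stdev
```

Ordinary human text should hover near a z-score of zero, while heavily watermarked text scores well above it—which is the statistical signature the article describes.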
But what exactly is an LLM and how does it function?
Generally speaking, an LLM is an AI system trained on a vast volume of text to perform tasks such as generating prose, answering questions posed as prompts, and even solving mathematical problems. LLMs power many of the chatbots that have recently gained attention for producing children’s books, magazine articles, and menus.
The following infographic offers some key details about LLMs and recent developments in the industry.
Log in to the InfoCenter to view research on AI and Internet 4.0 devices via our Office CompleteView Advisory Service. If you’re not a subscriber, just send us an email at firstname.lastname@example.org for more info.