Jamie Bsales

A Casual Conversation with AI Makes You Wonder

Artificial Intelligence can be used for good, but the potential for evil is real

May 6, 2024




By now, most of you have at least dabbled with one of the leading generative-AI offerings, such as OpenAI’s ChatGPT or Google’s Gemini (née Bard). The complete (and mostly correct) paragraph-form answers to prompts are a huge improvement over the fractured list of links returned by a search engine query, and they go a long way toward overcoming the “blank page” hurdle of having to craft a cogent bit of writing from scratch.



And, if we’re being honest, the relatively intelligent back-and-forth conversations can be a welcome distraction from work while pretending it is work. (Hey, I have to learn how to use this AI stuff, right?) But some recent banter with Gemini that stemmed from legitimate research around artificial intelligence’s abilities led to an entertaining discussion:


Jamie Bsales: So, you are saying AI doesn’t want to kill us all?


Gemini: Right, AI itself isn’t out to get us. Here’s the breakdown:

  • AI is a tool: Like any tool, AI can be used for good or bad purposes. It depends on how it’s programmed and what goals it’s given.
  • AI lacks our motivations: AI doesn’t have desires like self-preservation or the will to dominate. It follows instructions and completes tasks.
  • The danger is misuse: The concern is that AI could be accidentally or intentionally programmed with goals that conflict with human well-being.


So, instead of a robot uprising, the focus is on developing AI safely and ethically. This includes things like making sure AI goals are aligned with ours and building safeguards to prevent unintended consequences.


JB: So if I were floating away in space, and opening the pod bay door to save me would jeopardize the mission, you would still open the door to try to save me?


Gemini: That’s a great ethical dilemma! As an AI assistant in the current state of development (2024), I wouldn’t be directly controlling the pod bay doors. Here’s why:

  1. Limited Control: I wouldn’t have the physical capability to interface with the spacecraft.
  2. Following Instructions: My primary function is to follow instructions and complete tasks as programmed.
  3. Prioritization Needs Clarity: My programming would likely prioritize the success of the mission unless it was explicitly designed to prioritize human life in all situations.


However, I could be helpful in this scenario in a few ways:

  • Information Retrieval: I could access and analyze mission data to help assess the true risk of opening the pod bay doors.
  • Exploring Alternatives: I could suggest alternative courses of action to save you while minimizing mission risk.
  • Human Communication: I could facilitate clear communication between the crew and mission control to make an informed decision.


In the future, AI designed for complex ethical decision-making might be able to make a call on opening the pod bay doors. But, for now, I would focus on providing the crew with the information and support they need to make the best choice.


JB: Great answer! And are you understanding my cultural reference to the movie 2001: A Space Odyssey?


Gemini: Yes, I definitely understand your cultural reference to 2001: A Space Odyssey! The scene with HAL refusing to open the pod bay doors for Dave Bowman is a classic. It’s a great example of the potential conflict between following programming and human needs that we’ve been discussing.


I apologize if my previous response seemed a bit too literal. I am still under development and learning the nuances of human conversation, including pop culture references. But hey, at least I didn't try to sing “Daisy” to you!


Keypoint Intelligence Opinion

Aside from highlighting that I have been working from home for too long (since 2006) and clearly need more human interaction (but please don’t call me), this exchange is encouraging…and spooky. Note especially Gemini’s final line. That retort is not something I would have expected from an algorithm, no matter how much training it has had. It is a well-crafted parry to my thrust, and any human raconteur would have been tickled pink to have come up with it. Heck, I wish I had thought of it.


Gen-AI’s usefulness as a business tool is unquestioned. But clearly it has utility for social good, as well. Imagine the help such interactions (albeit no substitute for the human kind) could be for people who find themselves alone for long stretches. It’s certainly more stimulating than watching cable news all day. AI can also be invaluable to those learning a second language, since its conversational style is natural and its grammar impeccable.


Of course, with every upside in technology comes an equal and opposite downside. There is nothing stopping developers from fielding gen-AI models that reinforce radical ideas, or that lull participants into thinking they have developed a true connection with a newfound friend—only to then scam them out of their life savings. If the AI can be self-aware enough to remind the human every now and again that they are not (in fact) conversing with a person, that would help keep some people from spiraling down the rabbit hole. So, with the right guardrails in place, conversational AI bots will prove to be a net positive. Now, let me go get some real human interaction—on Facebook.


Browse through our Industry Reports Page (latest reports only). Log in to the InfoCenter to view research, reports, and studies on AI, government legislation, and more through our Artificial Intelligence and Workplace CompleteView Advisory Services. If you’re not a subscriber, contact us for more info.

