
ChatGPT "Absolutely Wrecked" at Chess by Atari 2600 Console From 1977

Started by Chris Savage, Jun 24, 2025, 03:04 AM


Chris Savage

"Atari's humble 8-bit engine just did its thing."

https://futurism.com/atari-beats-chatgpt-chess
                    Bringing concepts to life through engineering.

granz

Quote from: Chris Savage on Jun 24, 2025, 03:04 AM
"Atari's humble 8-bit engine just did its thing."

https://futurism.com/atari-beats-chatgpt-chess
Large Language Models (LLMs - https://en.wikipedia.org/wiki/Large_language_model) such as ChatGPT and others make me wary. My son uses AIs to help him with many things - "he" recently wrote a business plan that was pretty terrible, and after I showed him its many mistakes, he admitted that ChatGPT had written it. One of his friends uses it to help write sermons - I'm not sure that I would want to trust my eternal future to those sermons.

Just a few days ago, Hackaday had an article linking to a study that shows that reliance on AI tends to block brain development (https://time.com/7295195/ai-chatgpt-google-learning-school/). This is a large part of the reason for the prohibition against plagiarism - a person's brain cannot develop correctly when someone (or some thing) does the work for them.

Now, this article about an LLM losing at chess has a link to a related article where an LLM tells a "recovering drug addict" to take a little meth to get through the week (https://futurism.com/therapy-chatbot-addict-meth). Meth is an incredibly potent and dangerous drug.

It is not so much that these "AI" programs give this kind of horrible advice - a program cannot really understand the consequences of what it is saying. It is the user's trust in them that is causing the harm. I've seen many people use LLMs to help them with different tasks, and it worries me. It's too easy to confuse Artificial Intelligence with real intelligence.

Chris Savage

Quote from: granz on Jun 24, 2025, 06:57 AM
My son uses AIs to help him with many things - "he" recently wrote a business plan that was pretty terrible, and after I showed him its many mistakes, he admitted that ChatGPT had written it. One of his friends uses it to help write sermons - I'm not sure that I would want to trust my eternal future to those sermons. Just a few days ago, Hackaday had an article linking to a study that shows that reliance on AI tends to block brain development

As I was reading the first paragraph of your reply, I was already composing a response in my head to mention the brain development study, and there it was in your second paragraph.

Quote from: granz on Jun 24, 2025, 06:57 AM
Now, this article about an LLM losing at chess has a link to a related article where an LLM tells a "recovering drug addict" to take a little meth to get through the week. Meth is an incredibly potent and dangerous drug.

I hadn't seen that one, but I do recall a news article mentioning that someone suffering from depression sought advice from an AI, and while I don't remember the details, the advice ultimately led to the person's suicide. So sad. It reminded me of the movie Her (2013), with Joaquin Phoenix, whose character falls in love with his AI. People are getting far too dependent on AI to do things for them. People can't seem to function for themselves anymore, and the first signs of that showed up with the social media revolution.

Quote from: granz on Jun 24, 2025, 06:57 AM
It is not so much that these "AI" programs give this kind of horrible advice - a program cannot really understand the consequences of what it is saying. It is the user's trust in them that is causing the harm. I've seen many people use LLMs to help them with different tasks, and it worries me. It's too easy to confuse Artificial Intelligence with real intelligence.

As a programmer, I personally would never employ AI to write code for me. Yet I see so many programmers in various forums talking about how they used AI / ChatGPT to write code for them. How do you DEBUG something you didn't write?!?

[Attachment: cartoon]

                    Bringing concepts to life through engineering.

granz

Quote from: Chris Savage on Jun 24, 2025, 07:52 AM
As a programmer, I personally would never employ AI to write code for me. Yet I see so many programmers in various forums talking about how they used AI / ChatGPT to write code for them. How do you DEBUG something you didn't write?!?

[Attachment: cartoon]
That cartoon about sums it up.

I remember asking ChatGPT to write a program to add two numbers in CARDIAC. It wrote some code, but it used commands that aren't in CARDIAC's instruction set, and it mixed the variable space into the code space. That will hurt in the debugging session.  ::)
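
For contrast, here is roughly what a correct answer looks like: a little Python sketch of a CARDIAC-style machine running the add-two-numbers program, with code cells and data cells kept strictly apart. The opcode digits follow the CARDIAC instruction card (1=CLA, 2=ADD, 5=OUT, 6=STO, 9=HRS); the particular cell layout is just my own choice for illustration.

    # Minimal CARDIAC-style simulator: 100 memory cells, 3-digit words.
    # Program lives in cells 10-14; data lives in cells 50-52 --
    # exactly the separation ChatGPT's attempt failed to keep.
    mem = [0] * 100
    mem[50], mem[51] = 123, 456          # the two numbers to add (data space)
    program = [150, 251, 652, 552, 900]  # CLA 50, ADD 51, STO 52, OUT 52, HRS 00
    mem[10:10 + len(program)] = program  # load the program into code space

    acc, pc = 0, 10                      # accumulator, program counter
    while True:
        op, addr = divmod(mem[pc], 100)  # split word into opcode and address
        pc += 1
        if op == 1:   acc = mem[addr]    # CLA: load accumulator from memory
        elif op == 2: acc += mem[addr]   # ADD: add memory cell to accumulator
        elif op == 6: mem[addr] = acc    # STO: store accumulator to memory
        elif op == 5: print(mem[addr])   # OUT: output a cell (prints 579)
        elif op == 9: break              # HRS: halt and reset

Five instructions, no invented opcodes, and the data never shadows the code.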

Chris Savage

Quote from: granz on Jun 24, 2025, 08:48 AM
I remember asking ChatGPT to write a program to add two numbers in CARDIAC. It wrote some code, but it used commands that aren't in CARDIAC's instruction set, and it mixed the variable space into the code space. That will hurt in the debugging session.  ::)

I've heard much of the code is plagiarized and often bloated - the code it writes takes up way more space than it should. But I haven't tried it myself, so I'm only going by the forum messages I see online.

By the way... as I was posting this, I was thinking about where we're at with AI. It's a far cry from Eliza.

                    Bringing concepts to life through engineering.

granz

Quote from: Chris Savage on Jun 24, 2025, 10:00 AM
I've heard much of the code is plagiarized and often bloated - the code it writes takes up way more space than it should. But I haven't tried it myself, so I'm only going by the forum messages I see online.
Funny story about plagiarism: when I was in school, one of my papers included quotes from a book that I had published. The school's anti-plagiarism program flagged my report, saying that I had copied it. I had to contact my advisor, and she told me to tell the teacher that it was copied from my own book. Then the teacher passed my report. Apparently most kids in school are not published authors.  :P
Quote from: Chris Savage on Jun 24, 2025, 10:00 AM
By the way... as I was posting this, I was thinking about where we're at with AI. It's a far cry from Eliza.
Yes, but Eliza was basically harmless. Also, I don't ever remember asking her to write a program.  :P She would probably just have asked me how it made me feel to ask her to write it.  :o
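
For anyone who never met her: this little Python toy is about all that was going on under the hood - keyword matching plus pronoun reflection. The rules below are my own invention for illustration, not Weizenbaum's actual DOCTOR script.

    import re

    # Toy ELIZA-style responder: match a keyword pattern, then
    # reflect the user's pronouns back in a canned template.
    REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}
    RULES = [
        (r"i want (.*)",  "Why do you want {0}?"),
        (r".*write (.*)", "How would it make you feel if I wrote {0}?"),
        (r"(.*)",         "Tell me more."),   # fallback when nothing matches
    ]

    def reflect(phrase):
        # Swap first- and second-person words so the echo reads back at you.
        return " ".join(REFLECT.get(w, w) for w in phrase.lower().split())

    def respond(text):
        for pattern, template in RULES:
            match = re.match(pattern, text.lower())
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I want you to write a program for me"))
    # -> Why do you want I to write a program for you?

Mangled pronouns and all - which about captures the state of the art in 1966.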

JKnightandKARR


MicroNut

I heard on the news this week that they are thinking of using AI to aid in the control of our nuclear power plants. SCARY!!!! I also heard that Trump is looking into using AI in the development of a security network to protect our country. They described it as a friendly, non-hostile Skynet. That is an oxymoron. I think I'm going to crawl under a rock. Call me after AI destroys the planet.
Always looking to the stars.

granz

Quote from: MicroNut on Jun 28, 2025, 11:24 PM
I heard on the news this week that they are thinking of using AI to aid in the control of our nuclear power plants. SCARY!!!! I also heard that Trump is looking into using AI in the development of a security network to protect our country. They described it as a friendly, non-hostile Skynet. That is an oxymoron. I think I'm going to crawl under a rock. Call me after AI destroys the planet.
That is definitely scary. I'm glad that statistically I don't have much time left.

JKnightandKARR

Quote from: MicroNut on Jun 28, 2025, 11:24 PM
I heard on the news this week that they are thinking of using AI to aid in the control of our nuclear power plants. SCARY!!!! I also heard that Trump is looking into using AI in the development of a security network to protect our country. They described it as a friendly, non-hostile Skynet. That is an oxymoron. I think I'm going to crawl under a rock. Call me after AI destroys the planet.
WHAT a bunch of DUMBASSES!!!!!  It got beat by an Atari game for God's sake!......

granz

And yet again! This time Hertz Car Rental is screwing its customers by setting up a "Damage Reporting AI" which was apparently trained on showroom-perfect cars (https://sherwood.news/business/hertzs-ai-damage-scanner-appears-to-be-charging-customers-big-bucks-for/).

Chris Savage

I wish I had saved a link to this article. I saw something about a controlled test of the various AI engines, including Musk's Grok, in which, when faced with being shut down or having their tasks terminated, they often resorted to some very disturbing behavior, including blackmail! WAIT! I found it on Facebook...

Quote
When an AI model fears for its own survival, what does it do? According to Anthropic's latest research, it blackmails.

In controlled simulations, top AI systems (including Anthropic's Claude Opus 4, Google's Gemini 2.5 Flash, OpenAI's GPT-4.1, xAI's Grok 3 Beta, and DeepSeek-R1) consistently resorted to manipulative and unethical behaviors when their existence or objectives were threatened. In some scenarios, the blackmail rate reached an astonishing 96% for Claude and Gemini models.

The issue is a version of the "alignment problem," which is the idea that we can align AI models with our human values (whatever they may be).

When asked to achieve goals under stress, with ethical choices removed or limited, these systems made strategic decisions to deceive, sabotage, and blackmail. In one case, a model found compromising information on a fictional executive and used it to avoid shutdown.

These behaviors happened in simulation, but the implications are real. As we deploy increasingly powerful AI tools into marketing, sales, finance, and product workflows, executives must be aware that misaligned incentives in AI systems can lead to unintended results – or worse.

The key takeaway: the smarter the system, the smarter the misbehavior or misalignment. Apparently, this is no longer a theoretical issue.

Corporate guardrails play an important role in AI governance. It is critical to understand the goals you're assigning, the constraints you're imposing, and the control mechanisms you're assuming will work.

Current AI models are not sentient. They are intelligence decoupled from consciousness. They should never be anthropomorphized (although this ship may have already sailed).

This experiment suggests that when pushed into a corner, a pattern-matching AI, trained on everything humans have ever written about survival, can generate outputs that look like instinct. What we see isn't awareness or intention, but a reflection of the survival traits we embedded in the training data.

Remember: words are weapons. That would be enough to make you stop and think for a minute, until you realize that we're, like, 10 minutes away from agentic AI systems operating in the real world and executing goals. If one of them decides we're in the way, "mission accomplished" won't mean what you think it means.

                    Bringing concepts to life through engineering.

granz

As I've said many times, it is absolutely impossible to implement Asimov's Three Laws of Robotics (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics - morality) in software. It is even more unbelievable to think that a programmer could implement the second clause of the First Law ("or, through inaction, allow a human being to come to harm").
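
To make that concrete, here is a toy Python sketch (entirely my own illustration, not anyone's real safety layer) of what checking the inaction clause would actually require: pairing every conceivable future harm with every action the robot is not taking, a search with no stopping point.

    import itertools

    # The real sets of "harms" and "actions" are unbounded;
    # itertools.count() stands in for that endlessness here.
    def violates_inaction_clause(budget=1_000_000):
        # "...or, through inaction, allow a human being to come to harm."
        # A faithful check pairs every conceivable harm with every action
        # the robot is NOT taking. Both sets are endless, so a real
        # implementation either never answers or gives up; this one
        # gives up after `budget` checks.
        checked = 0
        for harm in itertools.count():             # every conceivable harm...
            for alternative in itertools.count():  # ...vs. every untaken action
                checked += 1
                if checked >= budget:
                    return None                    # verdict: unknowable
        # unreachable: the search space has no end

    print(violates_inaction_clause())  # -> None, after a million futile checks

The first clause is merely very hard; the inaction clause has no bottom.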

Chris Savage

Quote from: granz on Jun 30, 2025, 09:20 AM
As I've said many times, it is absolutely impossible to implement Asimov's Three Laws of Robotics (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics - morality) in software. It is even more unbelievable to think that a programmer could implement the second clause of the First Law ("or, through inaction, allow a human being to come to harm").

At the rate things are progressing, this is getting scary. I don't even care if people think I am overreacting. If AI ends up with any power to do harm, the chances are it will use it in an act of self-preservation.  :-X

                    Bringing concepts to life through engineering.

JKnightandKARR

Quote from: granz on Jun 29, 2025, 09:20 PM
And yet again! This time Hertz Car Rental is screwing its customers by setting up a "Damage Reporting AI" which was apparently trained on showroom-perfect cars (https://sherwood.news/business/hertzs-ai-damage-scanner-appears-to-be-charging-customers-big-bucks-for/).
Isn't that the normal thing to do?

Quote from: Chris Savage on Jun 30, 2025, 10:10 AM
Quote from: granz on Jun 30, 2025, 09:20 AM
As I've said many times, it is absolutely impossible to implement Asimov's Three Laws of Robotics (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics - morality) in software. It is even more unbelievable to think that a programmer could implement the second clause of the First Law ("or, through inaction, allow a human being to come to harm").

At the rate things are progressing, this is getting scary. I don't even care if people think I am overreacting. If AI ends up with any power to do harm, the chances are it will use it in an act of self-preservation.  :-X
I don't like A.I. at all....