So the computer lied to you: it claimed it had read something in a manual when it hadn't.
This is not making a very good case for the programs currently referred to as "AI." You sound like you're saying that it's quite untrustworthy, which seems to make my argument for me.
That said, what makes you believe that it hadn't "read" any 8080 documentation, i.e., that it was not included in its training set? The training set is claimed to include massive amounts of information from the Internet, and 8080 documentation is widely available there in many copies.
I've kinda lost the thread on what point you're trying to make; I'm just trying to make my own point that humans and AIs are both capable of shoveling out garbage code....
Humans can use logic to determine whether code is reliable or garbage. LLMs cannot.
Given enough time and resources, perhaps an AI can even produce high quality code like a human.
I see no indication that we're anywhere near that, and strong indications that the current methods of pursuing AI, via stochastic parrots, cannot get there.
I didn't say that, and that argument is stupid. You know that, I know that, yet you use it.
You were the one using that argument, unless I completely misread: "On the other hand, some humans hallucinate constantly, too." Why make that point if not to argue that these "AI" programs are "intelligent" even though they "hallucinate"?
The number of people knowing technical details about obscure platforms (and the 8080 is obscure by today's standards) is quite small.
I don't see how that's relevant. The point is that there can be, and are, humans who know about these things, and information about them is well documented on the web. I happened to pick this example because it was handy, but with a little thought I'm sure you can come up with plenty of other things well documented in the LLM's training data that the LLM clearly cannot reason about.
Again, you seem to be going back to, "Many humans fail on this, so that justifies saying that LLMs are intelligent when they fail on this." That's not how intelligence works. That is, in fact, how stupidity works.
Ask a random person on the street about i8080's flags - and you will get as much hallucination as any AI system if you force an answer.
I'm not "forcing" an answer at all; I'm simply asking. The "AI" always has the option (as does any human) of saying, "I don't know." Yet AIs virtually never say that. It claims over and over again that it's writing correct 8080 code that's clearly wrong, even after being told it was wrong, and even after saying it's correcting its work, it still comes out with the exact incorrect code it previously printed. That is not showing any sign of intelligence at all.
You are expecting more from the AI system than from regular humans...
I am not. I can go up to any person in this forum and have a much more sensible conversation with them compared to the one I started this thread with, even if it's just the human coming back with, "I don't know" at the start, or at least accepting that they don't know after a couple of tries.
Imagine I went up to someone in this forum and had this conversation:
Me: Does 8080 POP AF preserve the carry flag?
Person: Yes.
Me: You're wrong. It does not; here are two separate references to manuals that make it clear it doesn't, because it's loading the previous value from the stack, overwriting the current value of the carry flag.
Person: ...
Me: Does 8080 POP AF preserve the carry flag?
Person: Yes.
You would no doubt consider this person irrational, as would I. You certainly wouldn't (I hope) consider this an intelligent conversation on the part of that person.
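For anyone who wants to see the right answer worked through, here's a minimal Python sketch of the instruction's semantics (my own illustration, not code from any real emulator): on the 8080 the Intel mnemonic is POP PSW ("POP AF" is the equivalent Z80 spelling), and it loads the flag byte straight off the stack, so whatever carry value was current before the instruction is simply overwritten.

```python
# Minimal sketch of 8080 PUSH PSW / POP PSW semantics.
# The Cpu class and method names are illustrative, not from a real emulator.
class Cpu:
    def __init__(self):
        self.a = 0x00       # accumulator
        self.flags = 0x00   # PSW flag byte; bit 0 is the carry flag on the 8080
        self.sp = 0xFFFE    # stack pointer
        self.mem = bytearray(0x10000)

    def push_psw(self):
        self.sp = (self.sp - 2) & 0xFFFF
        self.mem[self.sp] = self.flags
        self.mem[self.sp + 1] = self.a

    def pop_psw(self):
        # Flags come straight off the stack: whatever carry was set
        # before this instruction is overwritten by the saved byte.
        self.flags = self.mem[self.sp]
        self.a = self.mem[self.sp + 1]
        self.sp = (self.sp + 2) & 0xFFFF

cpu = Cpu()
cpu.flags = 0x00          # carry clear
cpu.push_psw()            # save A and flags (carry = 0) on the stack
cpu.flags |= 0x01         # something later sets the carry flag
cpu.pop_psw()             # restore the saved PSW
print(cpu.flags & 0x01)   # 0: the popped value clobbered the carry flag
```

The answer to the question is therefore "no": POP PSW restores the flags saved at the matching PUSH PSW, exactly as the manual references in the conversation above say.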
Programming can be supported by AI.... It doesn't do the business logic well, but it handles scaffolding and boilerplate and can deal with common tasks and API support.
IDEs that do not claim to be using any sort of AI have been doing this for decades. And the ability to do this falls far short of the claims being made for the current generation of "AI" programs.