
Discuss AI Here

You can pretty much bet that the primary use will be for the basest, greediest applications: Internet porn, trackers, and massive databases of user information.

I got into an argument once with a Google insider who claimed they weren't using search results to plot general short- and long-term trends; not analytics for ads based on search queries, which we know they do.
 
I would believe you've got any clue at all about what you're talking about if you explained why this time it's different. But if you're even aware that this is the third or fourth major "AI" "revolution" in the past fifty years, you're not telling us that, much less why this isn't a fad when the other ones were.

It is different this time because they've abandoned everything in the expert systems space and gone all in on statistical networks. This time they have the distributed compute and storage power they didn't have in the past, and people anthropomorphize the results, allowing them way too much benefit of the doubt, especially in areas where the users have little knowledge of what the right answer might be.
 
Well, every programmer, for example. When I use an AND or OR instruction on the 8080, the CPU cares not at all about word and sentence patterns, but always sets the carry flag to zero, regardless of what you've read from people reading or misreading the manual. I understand this and can make use of this fact; that's why I can write 8080 code and LLMs can't.
Every programmer who has read and understood the 8080 ISA manual and derivative works. I could point to all sorts of programmers who don't know how the carry flag on 8080 instructions works; they don't know until they absorb the patterned knowledge put down by Intel in their literature. People are just as capable of misinterpreting a manual or simply not knowing a fact exists within one. You can understand and make use of this fact to write better 8080 code. An LLM, if trained with this fact in mind, can also stand to use it. It can also stand to be stubborn and insist that it is right, just like a human who doesn't understand the 8080 can.

Long story short, without the knowledge, you couldn't act on it, same as an AI system; and there are varying degrees to which people actually understand and retain knowledge they've been exposed to, just like AI systems. I don't think the technology is great, but I don't really see "it doesn't know minute details of the 8080 ISA" as meaning code it creates is completely worthless; it just means this specific model's 8080 code is about as useful as the average person's 8080 code (for perspective, the majority of humans don't even know what an 8080 is...)
 
It costs eye-watering amounts of money to run
A little over half a century ago, computers cost eye-watering amounts of money to run. That's a short-sighted argument. Also, we have proof (= the goo between our ears) that more energy-efficient pathways to intelligence exist.

Building efficient airplanes is not done by building efficient birds. It is done by understanding how birds fly. Early attempts at flying were neither efficient nor useful, nor cheap, and definitely not safe. Just ask Lilienthal.

hallucinates constantly
It's manageable for sufficiently short conversations, but hallucination is currently a problem. On the other hand, some humans hallucinate constantly, too.
And while problematic for large language models, hallucination is less of a problem with image generation, for example. Just try again with a different seed.

has yet to demonstrate useful capability beyond "it can fill out boilerplate text for you, but you still have to edit the content yourself,"
In some areas, AI can currently handle 80% of the jobs 80% of the time. Far from being universally useful, but ... still better than some outsourcing efforts I've seen.

I believe that AI-generated images are already widely used and only spreading. Producing a decent image in 15 seconds locally on a modern-ish laptop at 40W is definitely not insanely expensive. My attic light bulb uses more energy.
 
"more energy-efficient pathways to intelligence exist."

and a techno-parrot which requires its own nuclear power plant to run, going down the current technical trajectory, will not bring us any closer to understanding how that meat bag actually works.

The parrots do sidestep that whole bioethics problem, though, if you wanted to build artificial meatbags.
 
As Joseph Weizenbaum said, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

I could point to all sorts of programmers who don't know how the carry flag on 8080 instructions works; they don't know until they absorb the patterned knowledge put down by Intel in their literature.
Except that someone just absorbing patterns rather than reading and understanding the documentation and being able to apply that knowledge doesn't really know how the carry flag works on 8080 instructions. Like LLMs, they have no way of actually ascertaining what their code does.

Once you've read that AND and OR always clear the carry flag, if you have intelligence you can apply this to any situation, knowing that there is no possible situation where you will execute one of those instructions and end up with the carry flag set. This is very simple logic. It's also something that LLMs are inherently incapable of doing. I don't think I need to point out that our technological civilisation relies on logic like this, not pattern matching.
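
To make that concrete, here's a minimal, untested sketch (standard Intel 8080 mnemonics; the label name is mine) of the kind of guarantee I mean: after ANA, ORA or XRA, the carry flag is always clear, so the conditional jump below can never be taken.

        MVI  A,0FFH     ; load an arbitrary value into A
        STC             ; force the carry flag to 1
        ORA  A          ; A OR A: A is unchanged, but CY is cleared to 0
        JC   NEVER      ; never taken: ANA/ORA/XRA always leave CY = 0
                        ; carry is known to be clear here, e.g. before an SBB chain
NEVER:  HLT

Being able to rely on that one sentence of the manual in every program you will ever write is exactly the "very simple logic" I'm talking about.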

People are just as capable of misinterpreting a manual or simply not knowing a fact exists within one.
That people are capable of making the same mistakes as an LLM does not make LLMs intelligent.

Ironically, understanding that requires understanding that this argument is a formal fallacy, something that, being reliant on logic rather than pattern matching, LLMs simply cannot do.

I think it's dangerous to argue that these are even the same type of mistake; that seems likely to be a case of ELIZA effect, though there's probably a whole Ph.D. thesis in a discussion of that.

...it just means this specific model's 8080 code is about as useful as the average person's 8080 code (for perspective, the majority of humans don't even know what an 8080 is...)
This argument mystifies me. "LLMs are as ignorant and stupid as some people who've not studied a subject at all" is not exactly a ringing endorsement.

Let's be clear about the situation here. The LLM in question did claim to have "read" the manuals that clearly explained, in unambiguous terms, that AND and OR always clear the carry flag. We'd expect a person of any reasonable intelligence, who's read this recently and is focused in particular on how the carry flag is being used, never to claim that, after executing an AND or OR instruction, the carry flag could possibly be set. (This doesn't exactly take a massive amount of intelligence.) Yet an LLM cannot do the same thing, because it is incapable of even simple reasoning.

It's manageable for sufficiently short conversations, but hallucination is currently a problem. On the other hand, some humans hallucinate constantly, too.
Ah, another example of, "Humans can be broken, ignorant or stupid, therefore when an 'AI' produces similar results to such a human it's exhibiting intelligence."

You'll note that we don't let such humans design our bridges. Or trust them much at all in the areas where they're broken/ignorant/stupid, unless we're also broken/ignorant/stupid in those areas.

In some areas, AI can currently handle 80% of the jobs 80% of the time. Far from being universally useful, but ... still better than some outsourcing efforts I've seen.
And what exactly are these areas? It's certainly not doing that in programming, where you still absolutely need a programmer to get anything done. Even "full self driving" is far ahead of AIs on the programming side, and that has been more or less stalled for a decade or more.
 
It is different this time because they've abandoned everything in the expert systems space and gone all in on statistical networks. This time they have the distributed compute and storage power they didn't have in the past...
I'm not sure I see how that difference applies. Every previous "AI summer" has also had more compute power and storage than the previous one. And, more generally, every AI summer has been doing something significantly different from the previous one, so we'd expect to see that in this iteration of it, too. But having seen several examples of, "it's different now, we have more power, and we're doing things differently," followed by an AI winter, my pattern matching is starting to kick in....

and people anthropomorphize the results, allowing them way too much benefit of the doubt, especially in areas where the users have little knowledge of what the right answer might be.
That is definitely not a difference; that's something we saw from the start. It's called the ELIZA effect.
 
So the computer lied to you: it claimed it read something in a manual and didn't. Sounds like a college kid saying they totally read the assignment and getting obstinate when challenged. I've kinda lost the thread on what point you're trying to make; I'm just trying to make my own point that humans and AIs both are capable of shoveling out garbage code, and both are capable of shoveling out somewhat workable code. Given enough time and resources, perhaps an AI can even produce high quality code like a human. I don't really have a horse in the race, but I also don't see it as impossible; everything exists on a spectrum.
 
Ah, another example of, "Humans can be broken, ignorant or stupid, therefore when an 'AI' produces similar results to such a human it's exhibiting intelligence."
I didn't say that and that argument is stupid. You know that, I know that, yet you use it.

The number of people knowing technical details about obscure platforms (and the 8080 is obscure by today's standards) is quite small. Ask a random person on the street about i8080's flags - and you will get as much hallucination as any AI system if you force an answer. You are expecting more from the AI system than from regular humans, and are unhappy that it does not deliver. How surprising.

And what exactly are these areas?
For example generating video subtitles or image descriptions automatically. Sure, the AI makes mistakes (sometimes severe), but the overall quality is acceptable. Blind people happily prefer an 80% accurate image description to a 0% clue. And even if you think the accuracy is garbage, it's still good enough to make subtitling videos substantially easier and faster for humans.

Image generation is widespread already. Apparently, it's good enough.

Programming can be supported by AI. I don't use it myself, but I know people both online and offline who find it useful to save time, especially for one-offs. It doesn't do the business logic well, but it handles scaffolding and boilerplate and can deal with common tasks and API support. The latter is more important in modern languages and libraries than in traditional assembly or C programming.
 
So the computer lied to you: it claimed it read something in a manual and didn't.
This is not making a very good case for the programs currently referred to as "AI." You sound like you're saying that it's quite untrustworthy. Which seems to be making my argument for me.

That said, what makes you believe that it hadn't "read" any 8080 documentation, i.e., it was not included in its training set? The training set is claimed to include massive amounts of information from the Internet, and 8080 documentation is widely available on the Internet in many copies.

I've kinda lost the thread on what point you're trying to make; I'm just trying to make my own point that humans and AIs both are capable of shoveling out garbage code....
Humans can use logic to determine whether code is reliable or garbage. LLMs can not.

Given enough time and resources, perhaps an AI can even produce high quality code like a human.
I see no indication that we're anywhere near that, and strong indications that the current methods of pursuing AI, via stochastic parrots, can not.

I didn't say that and that argument is stupid. You know that, I know that, yet you use it.
You were the one using that argument, unless I completely misread, "On the other hand, some humans hallucinate constantly, too." Why are you making that point if not to justify that these "AI" programs are "intelligent" even though they "hallucinate"?

The number of people knowing technical details about obscure platforms (and the 8080 is obscure by today's standards) is quite small.
I don't see how that's relevant. The point is that there can be and are humans knowing about these things, and information about them is well documented on the web. I happened to pick this example because it was handy, but with little thought I'm sure you can come up with plenty of other things well documented in the LLM's training data that the LLM can clearly demonstrate that it can't reason about.

Again, you seem to be going back to, "Many humans fail on this, so that justifies saying that LLMs are intelligent when they fail on this." That's not how intelligence works. That is, in fact, how stupidity works.

Ask a random person on the street about i8080's flags - and you will get as much hallucination as any AI system if you force an answer.
I'm not "forcing" an answer at all; I'm simply asking. The "AI" always has the option (as does any human) of saying, "I don't know." Yet AIs virtually never say that. It claims over and over again that it's writing correct 8080 code that's clearly wrong, even after being told it was wrong, and even after saying it's correcting its work it still comes out with the exact incorrect code it previously printed. That is not showing any sign of intelligence at all.

You are expecting more from the AI system than from regular humans...
I am not. I can go up to any person in this forum and have a much more sensible conversation with them compared to the one I started this thread with, even if it's just the human coming back with, "I don't know" at the start, or at least accepting that they don't know after a couple of tries.

Imagine I went up to someone in this forum and had this conversation:

Me: Does 8080 POP AF preserve the carry flag?
Person: Yes.
Me: You're wrong. It does not; here are two separate references to manuals that make it clear it doesn't, because it's loading the previous value from the stack, overwriting the current value of the carry flag.
Person: ...
Me: Does 8080 POP AF preserve the carry flag?
Person: Yes.

You would no doubt consider this person irrational, as would I. You certainly wouldn't (I hope) consider this to be an intelligent conversation on the part of that person.
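
For anyone following along, here's a minimal, untested sketch of why the answer is "no" (standard 8080 mnemonics, where POP PSW is what the Z80 calls POP AF; the label name is mine):

        XRA  A          ; A = 0 and, as a side effect, CY = 0
        PUSH PSW        ; save A and the flag byte (with CY = 0) on the stack
        STC             ; now set CY = 1
        POP  PSW        ; reload A and the flags from the stack: CY is 0 again
        JC   LOST       ; never taken: the carry set by STC has been overwritten
LOST:   HLT

POP PSW loads the flag byte from the stack, so whatever carry value was current before the POP is simply gone; that's the point the manuals make and the point that kept being denied.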

Programming can be supported by AI.... It doesn't do the business logic well, but it handles scaffolding and boilerplate and can deal with common tasks and API support.
IDEs that do not claim to be using any sort of AI have been doing this for decades. And the ability to do this is far short of the claims that are being made for the current generation of "AI" programs.
 
You were the one using that argument, unless I completely misread,
You misunderstand. People being stupid does not mean that AI doing the same is smart.
Assuming that AIs must always perform as well as or better than all humans is unfair. That is a different argument.

The point is that there can be and are humans knowing about these things,
Yet very few people do. You expect LLMs to perform at least equal to experts in any field; you would not expect your medical doctor to know about 8080 flags.

I'm not "forcing" an answer at all; I'm simply asking.
Yes, current LLMs are trained to always answer. Some cultures or activities (such as torture) train humans to do the same. And to no surprise, the result is hallucination. I see this as a huge problem in the current generation of LLMs, not something which is impossible to overcome in the next decade.

In my experience, Japanese people prefer to lie to my face rather than admit failure on their part. To me, that is unacceptable - yet I work with Japanese people and have to deal with it somehow. I see this as a cultural problem in Japan.

Imagine I went up to someone in this forum and had this conversation:
Try this conversation with a six-year-old and it may sound very similar. Try it with a bricklayer and it may also sound very similar. The latter one does build bridges.

Again, I'm not saying "AI smart because human stupid", but if we are trying to understand and emulate the way human brains work, we will get brain-like behaviour. Even the parts we are not interested in. Adapting it perfectly will take time, but I don't think it's purely impossible.

IDEs that do not claim to be using any sort of AI have been doing this for decades.
That's not an argument about AI.

I've been taking university courses on speech recognition and processing. Dictation (speech-to-text) devices have been around for many decades and work quite well for narrow fields and drop off sharply for general input. AI-driven approaches have fully replaced them, otherwise speech-driven assistants would have been impossible. Yes, there is human involvement, but the technology itself is far, far better.

Just because a good solution has existed for decades does not mean that AI cannot replace it.
 
You misunderstand. People being stupid does not mean that AI doing the same is smart.
Exactly my point. LLMs are stupid, not smart. If humans did what LLMs do, the humans would be considered "stupid."

Assuming that AIs must always perform as well as or better than all humans is unfair. That is a different argument.
I am not assuming that. However, it's immediately clear that 1. LLMs have no idea what they know or don't know, and 2. LLMs cannot do even basic reasoning. Thus, stupid. See my example conversation.

Yet very few people do. You expect LLMs to perform at least equal to experts in any field; you would not expect your medical doctor to know about 8080 flags.
I don't know why you keep coming back to this. The "AI" I use has been trained on 8080 assembly books, claims to know (and even be good at) 8080 assembler, and writes actual, runnable 8080 assembly code.

Yes, current LLMs are trained to always answer.
Well, any entity that confidently claims knowledge about something it has no knowledge about is stupid, not intelligent. Being able to understand something about what you do and don't know is a fundamental aspect of intelligence.

I see this as a huge problem in the current generation of LLMs, not something which is impossible to overcome in the next decade.
Are you aware of the fifty-year-plus history of AI? This is exactly the claim that many AI proponents have been making during that entire period, and they've invariably been wrong about this. What's changed so dramatically? Only that "AI" programs now generate text that sounds more convincing, and thus many more people (even researchers deep into the field who should know better) have succumbed to something along the lines of ELIZA effect.

In my experience, Japanese people prefer to lie to my face rather than admit failure on their part. To me, that is unacceptable - yet I work with Japanese people and have to deal with it somehow. I see this as a cultural problem in Japan.
Interesting. I've lived and worked in Japan for more than twenty years, and don't see this any more often here than I saw it in North America. Take from that what you will.

Try this conversation with a six-year-old and it may sound very similar.
Perhaps, but I'm not so sure about that. Do you have any evidence that it would?

Try it with a bricklayer and it may also sound very similar. The latter one does build bridges.
I very much doubt that.

Regardless, would you consider someone who denies reality to the degree that they're saying contradictory things in the same breath to be intelligent?

Again, I'm not saying "AI smart because human stupid", but if we are trying to understand and emulate the way human brains work, we will get brain-like behaviour.
But we are not getting brain-like behaviour here. People with brains can use logic to find inconsistencies in what they say and believe. LLMs cannot do that. Thus they are lacking in something basic to and essential for intelligence.
 
I still haven't seen anything said other than that AIs that don't "know" facts can come up with the same garbage as humans that don't know facts. I don't think anyone is saying an AI is better than a human that it literally does worse than. If whoever created the model you're using has positively claimed "this model has been trained on the intricacies of the Intel 8080 ISA", then you've been lied to. If the model simply claimed to be trained from a mass of information available to it, and you assumed that meant it was trained on the 8080 ISA, that assumption is on you. That's like, as already mentioned in the thread, assuming a medical doctor can write 8080 asm because of the sheer time they spent in school and the materials that school had in its library. Maybe the library has a copy of the 8080 manual; that doesn't mean every student of the school has read it cover to cover. Heck, I have an Intel 8080 manual on my bookshelf and I couldn't tell you every detail in it; it's there for when I do need to educate myself on the details.

I don't know if anyone else is advocating that a hallucinating AI should be put in a position of power in a mission-critical application. I'm saying anything but; I would hold it to the same standards I hold people to. That is, I both wouldn't task it with things I know it can't handle *and* won't immediately dismiss it when it doesn't know some niche fact that it has not been guaranteed to know. If I dismissed every person that came before me that didn't know about 8080 carry flag behavior, I'd have no peers in this life. Instead it's about knowing people's strengths, or in this case, the strengths of any given LLM. If it is not explicitly stated to be strong in programming, expecting that is a fool's errand. If it is stated as such, and fails, that's a reflection on its "patron" as a snake oil salesman.
 
In fact, I suspect that most AIs have only been trained on text/information available as html cleartext; that is, I've not found one yet that has probed the depth of the photo-imaged manuals on, say, bitsavers. Given the pitfalls of OCR, I suspect that might not happen for a very long time.
 
In fact, I suspect that most AIs have only been trained on text/information available as html cleartext; that is, I've not found one yet that has probed the depth of the photo-imaged manuals on, say, bitsavers. Given the pitfalls of OCR, I suspect that might not happen for a very long time.
Tesseract OCR software (https://github.com/tesseract-ocr/tesseract) has been out for a long time as open source. If desired, it could be done; the only question is how important the AI trainers thought this information was.
 
I've not found one yet that has probed the depth of the photo-imaged manuals on, say, bitsavers.
I've been OCRing files for years now. I still have a ways to go to do all the backlog. When all of this started getting hyped, I noticed a sharp increase in bulk downloading of the site's contents by address ranges of well-known companies and from really weird locations, like 30 miles west of Madison, WI. I assume they are using it for techno-parrot food.
 
If anything, this just leads me to believe LLM systems could at least serve as a litmus test: if an LLM can manage to run circles around you on a given subject, see that as meaning you aren't up to snuff in that subject rather than interpreting it as meaning the LLM is. The LLM may not be, but if you can't even reach its standard... maybe you need to do some more studying until you can. Not to put anyone down, by the way; just speaking generally. Like cjs, you've more than demonstrated you're a better programmer than the model you're citing, but it may still prove a better programmer than most. Would I trust it with mission-critical work? No, but I likewise wouldn't trust a human who knows as much or less than it. The humanity or not isn't the problem; it's the statistical reliability of its solutions, which can be plotted against the statistical reliability of its human peers to determine where it falls on any given subject.
 
Would you trust an LLM AI against a human SME in the area where the SME has real understanding? Of course not.

If you're worried about an AI taking your job, then the obvious solution to me is to be better-versed than the AI.
 