Discuss AI Here

Yeah, the scraping is a pain; that's on the industry players to enforce some sort of ethics, which of course they refuse to do. Even if there were rules, though, I suspect they'd amount to a robots.txt situation, where it's a suggestion that nobody materially holds anyone else accountable to.

I have opinions on the cost of standards that I could fill multiple replies with, so I'll just leave it narrowly at this: If it is a fact that LLMs do not have access to a given standard governing something you're prompting it on, then imo nothing it says on the subject should be trusted. It should be held to the same standards as publications subject to rigorous peer review. Otherwise it's the same hearsay you'd get from some rando on Stack Overflow who also doesn't cite their sources.

If someone wants to put forth their given AI of the day as a programming assistance tool, then it'd be nice if they demonstrated that they've ensured the tool has access to these sorts of specifications. That might justify the licensing terms involved in that sort of thing. Still, do you get into an Internet Archive situation, where they have a finite number of licenses to the material but theoretically infinite requests to refer to it? If the model isn't spitting back verbatim text, just analysis based on it, is that copyright-friendly? That's one of the many reasons I just don't bother engaging: the legal frameworks haven't caught up yet, but you know there are lawyers out there ready to strike the second they smell blood.
 
Today, like most days, I had my ups and downs with AI. Turns out it doesn't know the difference between a Bond and an I-Bond, so that wasn't super helpful.

On the other hand it did write perfect Arduino code for taking serial input and driving a stepper motor the specified number of steps. Like compiled and ran the first time perfect.
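For reference, the sketch it produced was along these lines. This is my own minimal reconstruction, not the AI's actual output, and the pin assignments, steps-per-revolution, and baud rate are assumptions:

```cpp
// Minimal sketch (my reconstruction, not the AI's actual output): read a
// step count from the serial port and drive a stepper that many steps.
// Assumed: 200 steps/rev motor, driver inputs on pins 8-11, 9600 baud.
#include <Stepper.h>

const int STEPS_PER_REV = 200;                 // typical 1.8-degree motor
Stepper stepper(STEPS_PER_REV, 8, 9, 10, 11);  // four-wire driver on pins 8-11

void setup() {
  Serial.begin(9600);
  stepper.setSpeed(60);  // motor speed in RPM
}

void loop() {
  if (Serial.available() > 0) {
    int steps = Serial.parseInt();  // e.g. "200\n" = one full revolution
    if (steps != 0) {
      stepper.step(steps);          // negative values reverse direction
    }
  }
}
```

Type a number into the serial monitor and the motor turns that many steps; negative numbers run it the other way.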

But whoever said, "when the AI fad is over" is in for a surprise. This fad isn't ending. Someone described the current state of AI as "primordial goo" and I believe it.

I would like to have some input as to the order in which it's applied, though. Very first on the list is autocorrect on my phone. (I mean make it better, not worse. Though I'm not sure how they could make it worse)
 
The fad will be over when AI is only used in the places it actually works well, and the hype that it will replace everyone and do everything has died down.
 
The primary money-maker for AI will be to shape public opinion by whatever means. If you want to witness psychological pressure refined to target specific individuals, just wait. The old-style ad campaign is a dying duck.
 
The DoD does use a portion of spent nuke fuel in various armament configurations.
Sure, we used depleted uranium 5-inch shells up until recently. A Gunner's Mate on my ship told me in 2007 or 2008 they just shot off their last round of DU... since they never got a chance to use it during live fire.
 
AI is improving. A few weeks back, I tried Gemini, and it blew me away.

It was accurate, specific, and reliable. It helped me locate chips, was able to give me specific part numbers, tell me the order in which the manufacturers took on the part, provide good suggestions for alternatives (pin- and non-pin-compatible), and had insights I didn't expect. It also had untouchable... I'm trying to think of the word... Morals? Ethics? Simply put, it was impossible to trick it into doing something it didn't want to do, in this case, telling me a spoiler about a book I once wrote, right up until I claimed to have already read the story and just wanted to discuss the twist with it.

It was an amazing experience. I was shocked at how smart it was. It was almost a human-like interface. Getting something like a correct chip designator out of an AI is rare enough, so I checked them against a datasheet search, and it located twice as many of the different manufacturers as I could with Google alone. Also, it understood context around the chips, and that I was looking for specific chip characteristics. It also claimed things I've never heard an AI claim before, such as that it would remember our interactions and would learn from them, feeding them back into its own processes.

Then I went back the next day, wanting to see how much further it had progressed, and Gemini was complete crap. It wasn't able to do any more (and I'd say a bit less) than ChatGPT this time. It could no longer give me specific information that I could verify, understood none of the prompts from the previous day, and failed at basic tasks. It was back to being a normal AI.

Apparently they test next-gen AI in the wild at times - and that's likely what I encountered the first time. Honestly, this thing would have been good enough to help me troubleshoot a vintage repair - it was doing that well on questions.

Current AIs aren't great. But having had that brief exposure, I suspect the next generation is going to be useful. Imagine an AI that could read every post on this forum and then cross-reference that with other information. It could likely provide step-by-step instructions to fix just about anything.

Current AI is pretty bad. It hallucinates a lot, and has absolute confidence that it's right when it's wrong. This AI did not have those flaws.

AI is going to be as big a revolution for the next generation as AltaVista was to the Internet Generation and 8-bit home PCs were to the Information Generation. Couple that with VR, realistic animation, and AI's ability to mix dreaming with physical models, and Hollywood is finished. Within a few years, AI should be able to direct movies from a book and invite you into the world it creates to share that story.
 
Sure, we used depleted uranium 5-inch shells up until recently. A Gunner's Mate on my ship told me in 2007 or 2008 they just shot off their last round of DU... since they never got a chance to use it during live fire.
Check this Quora article out:

 
And how exactly are these techno-parrots ingesting these standards, which are rather expensive per copy and not officially available for free anywhere?
I suspect they're not.
Whether they are or not, it doesn't matter, at least as far as the correctness of the output is concerned. The output these are designed to produce is sentences that sound like they could plausibly be in the input they've been trained on. Not consistent with the input, just plausibly sounding like it, but randomly varying.

If it is a fact that LLMs do not have access to a given standard governing something you're prompting it on, then imo nothing it says on the subject should be trusted... If the model isn't spitting back verbatim text, just analysis based on it...
Nothing should be trusted anyway. Not only are models unable to spit back verbatim text, they are designed precisely not to do that. And the "analysis" is not based on any meaning in the text, just on what words and phrases tend to appear near others.

Today, like most days, I had my ups and downs with AI. Turns out it doesn't know the difference between a Bond and an I-Bond, so that wasn't super helpful.
It doesn't "know" anything. Again, the output is not based on facts in the input; it's based on word and sentence patterns and frequencies.

On the other hand it did write perfect Arduino code for taking serial input and driving a stepper motor the specified number of steps. Like compiled and ran the first time perfect.
A stopped clock is right twice a day.
 
For AI, you can try Claude 3.5, but I am not sure if it costs anything. I tested Phind.com and got good results; however, ASM seems to be something that AIs have trouble with, judging from other people's experiences with it. Whether it is because the short mnemonic codes get ignored, or the significance of English-like words used in ASM, or maybe even the formatting of ASM being so different from other languages, it doesn't have the comprehension that it does for C or Python.

Would be interesting to hear your results from the same prompt given to Phind.com or Claude.
 
But whoever said, "when the AI fad is over" is in for a surprise. This fad isn't ending. Someone described the current state of AI as "primordial goo" and I believe it.
It costs eye-watering amounts of money to run (and to be clear, it is currently heavily subsidized by a couple of major corporations who are desperate to make it A Thing, and they aren't charging anything like what it costs, let alone turning a profit on it), hallucinates constantly (often in ways that could get credulous, unwary people killed), and has yet to demonstrate useful capability beyond "it can fill out boilerplate text for you, but you still have to edit the content yourself," let alone fulfill the pie-in-the-sky promises of obsoleting all your hu-man fleshworkers or being some ultra-smart personal assistant that knows everything and can take care of all the parts of life you'd rather not bother with. Even its own boosters admit that it's long-term unviable without Unspecified Future Innovations in energy and number-crunching hardware; they're just hoping that everyone is too distracted by the hype to notice. It's going away; it's just a question of when the bubble bursts.

Ed Zitron has done a lot of good writing on this over the last year or so; I'd highly encourage anybody interested in understanding what the real motives behind the LLM bubble are to check it out. We may well end up in a future where machines think, for all I know - but we're not gonna get there by making ELIZA run faster.
 
It's going away; it's just a question of when the bubble bursts.
I'll bet you a bag of chips :)

Just as primordial goo isn't Andy Warhol, AI hasn't even gotten started yet.

They (the ubiquitous they) are working on a new generation of chips that don't require as much power, AI has already been integrated into more systems than we even know about, etc.

But I won't pepper you with arguments since you've already made up your mind.

Now, the question of recovery of investment is an interesting one. In order to recover that kind of money, AI would have to be like cell phones, and that is probably the model they are after, but they have to find that killer app, the one everyone would be willing to pay $20 a month for. No company wants to lose that race.

I am curious as to what that will be. Anyone willing to go in on that thought experiment is welcome to chime in - what killer app would you be willing to spend the money on? If your answer is "nothing" then, you know, don't bother to answer :)

A couple of things come to mind for me:
0) If it weren't available free, I would pay $20 for code assistance. Currently I only use it a couple of times a week, so I never exceed their free-tier limit. But the company I worked for before I retired did pay for everyone to have it.

1) Circuit design
2) In-home assistant in physical form for when I get really old (I mean older than I already am), with a physical off switch. (I know, a lot more than $20.)
3) Complete elimination of spam, scams, and other online annoyances from my life :) . I really shouldn't have to endure this; it should have been blocked from the get-go. But here we are.
 
But whoever said, "when the AI fad is over" is in for a surprise. This fad isn't ending.
I might believe you had a clue about what you're talking about if you explained why it's different this time. But if you're even aware that this is the third or fourth major "AI" "revolution" in the past fifty years, you're not telling us that, much less why this isn't a fad when the other ones were.

0) If it weren't available free, I would pay $20 for code assistance.
In other words, it is a fad because you're not willing to pay anywhere near what the service costs to provide? Or were you under the terrible misapprehension that code assistance used daily (as a developer would at a job) can be provided for $20/month?

3) Complete elimination of spam, scams, and other online annoyances from my life :) . I really shouldn't have to endure this; it should have been blocked from the get-go. But here we are.
This would be wonderful, and it kind of feels like better "AI" should be able to help with this in some way. But if they can, why aren't they even starting on this now, several years in? There is a huge demand for it. My guess is that they're not because LLMs are good at generating text, but not at doing any real analysis of it.
 
It doesn't "know" anything. Again, the output is not based on facts in the input; it's based on word and sentence patterns and frequencies.
I mean... find me a person who operates on universal primitives and undeniable facts rather than just on word and sentence patterns and frequencies, and then this point will have some meat on it. That is the one area that does make me admit AI may actually have something resembling "intelligence": our own intelligence is just pattern recognition on steroids.
 
I mean... find me a person who operates on universal primitives and undeniable facts rather than just on word and sentence patterns and frequencies...
Well, every programmer, for example. When I use an AND or OR instruction on the 8080, the CPU cares not at all about word and sentence patterns, but always sets the carry flag to zero, regardless of what you've read from people reading or misreading the manual. I understand this and can make use of this fact; that's why I can write 8080 code and LLMs can't.
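To put that in concrete terms, here's a toy model of exactly that behavior. This is an illustrative sketch, not a real emulator; everything beyond the accumulator and the carry flag is omitted:

```cpp
// Toy model of the 8080 fact above: the logical AND (ANA) and OR (ORA)
// instructions unconditionally clear the carry flag. A programmer can
// rely on that regardless of what any text "sounds like" it says.
#include <cstdint>
#include <cstdio>

struct Cpu8080 {
    uint8_t a = 0;       // accumulator
    bool carry = false;  // carry flag (CY)

    void ana(uint8_t r) { a &= r; carry = false; }  // ANA r: CY <- 0, always
    void ora(uint8_t r) { a |= r; carry = false; }  // ORA r: CY <- 0, always
};

int main() {
    Cpu8080 cpu;
    cpu.a = 0xF0;
    cpu.carry = true;  // set by some earlier arithmetic instruction
    cpu.ana(0x0F);     // after ANA, carry is guaranteed clear
    std::printf("A=%02X CY=%d\n", cpu.a, cpu.carry);  // prints: A=00 CY=0
}
```

The point being: that's a hard invariant of the hardware, not a statistical tendency, which is why it can be relied on.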

Humans' use of logic is not "pattern recognition on steroids"; it's in fact quite the opposite: it's what we use when being correct is more important than coming up with an answer quickly.
 