The problem with "any source" is that it's pretty easy for a slightly trained human to assess the veracity of a web page or a written manual, because low effort is immediately visible. A "wrong" font, typos, or design elements and content that shouldn't be there are enough for people to start questioning.
Large language models take those sources at face value and recast them with superior oratory skills. Like in professions where people talk without saying anything yet still give listeners the impression they received a fact or an opinion - it's the form that counts, not the content.
I'm pretty much sure there is nothing I would use LLMs for, which is why I don't use them. They're usable for text transformation if one keeps the target domain in check. For programming languages, one could throw a construct from a language they know at a language they don't and get a tailored example back, as opposed to sitting down for 30 minutes in front of manuals and learning the basic syntax from zero. But it highly depends on which languages we're talking about. Java to Kotlin? Sure. NASM to A86? Err, I don't think so. Something like the sketch below is the kind of one-off translation I mean.
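To make the Java-to-Kotlin case concrete: this is my own illustrative example, not output from any model. The Java original is shown as a comment, followed by the kind of idiomatic Kotlin equivalent you'd hope to get back instead of reading the syntax manual first.

```kotlin
// Java original (for comparison, as a comment):
//   List<String> names = new ArrayList<>();
//   for (Person p : people) {
//       if (p.getName() != null) names.add(p.getName().toUpperCase());
//   }
// Tailored Kotlin equivalent: null-safety and mapNotNull replace the manual loop.
data class Person(val name: String?)

fun upperCaseNames(people: List<Person>): List<String> =
    people.mapNotNull { it.name?.uppercase() }

fun main() {
    val people = listOf(Person("ada"), Person(null), Person("grace"))
    println(upperCaseNames(people)) // [ADA, GRACE]
}
```

The point is not that the snippet is hard, it's that someone who already knows Java can read the answer immediately and verify it, which is exactly the "text transformation with the target domain kept in check" case.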
When throwing a computing problem at "AI", the first thing to consider is how frequent your problem is: how many sources on the biggest internet sites were written by humans thinking through that problem or a similar one. If we're talking basic problems in prevalent, popular technologies, the sources are plentiful and the chances of the "AI" hallucinating the correct thing get high. But then again, you can google those yourself. With more obscure technology stacks the sources diminish and accuracy starts crumbling.

Text transformation is one thing; sifting through technology documentation is another, because there we're after an exact piece of data. When you give LLMs source documents and ask for a reinterpretation, and the output is prosaic, any errors in it can be handled by manual correction, because the brunt of the workload, the prosaic writing, has been lifted off your shoulders. For people who actually use LLMs to enhance productivity, this is the tradeoff they're after: the LLM does the form, people correct the content. I do believe this is useless for "us", because if you ask for a fact, you want a fact. Asking and then fact-checking just makes the asking part redundant.
The downside of using it as a Google intermediary is not obvious at first: it's a barrier between you and the real source. Very simple example: start googling vintage computing and you'll run into vcfed in no time. Then you realize it's even better to sign up and talk to real people with real experience and real facts. Put an LLM in between, and you might never become aware of how easy it is to get help with whatever concerns you.