87 points basilikum 3 hours ago 100 comments
How do you deal with that? Do you try to tell them about hallucinations and that LLMs have no concept of true or false? Or do you just let them be? What do you do when they do that in a conversation with you or encounter LLMs being used as a source for something that affects you?
chipgap98 2 hours ago | parent
atomicnumber3 2 hours ago | parent
sodapopcan 2 hours ago | parent
al_borland 2 hours ago | parent
I’ve seen some people quote AI like you’re saying. However, when I preface something with “ChatGPT said…”, my intention is to convey to the listener that they should take it with a grain of salt, as it might be complete bullshit. I suppose I should consider who I’m talking to when I make that assumption.
jazzyjackson 43 minutes ago | parent
It’s not quite anthropomorphizing that’s the issue, either; we need a word for “treating it as though it were a machine consciousness that exists alongside humanity*”. How does “cyborgropomorphizing” sound?
* and not merely a Markov chain running in Sam Altman’s closet
BobbyTables2 1 hour ago | parent
Which is more believable?
“The sky is filled with a downpour of squealing pigs. Would you like me to suggest the best type of umbrella?”
“Sky pigs squealing”
yammosk 1 hour ago | parent
sodapopcan 45 minutes ago | parent
ACow_Adonis 2 hours ago | parent
And if you previously were unaware of the insanity and irrationality passing under the surface of such human activity, I guess it can come as a bit of a shock :)
heliumtera 2 hours ago | parent
It happened with science, politics, traditional media, history books, "good engineering practices" applied to IT, OOP, TDD, DDD, server-side rendering, containerization... Literally every piece of bullshit shilled to the moon is accepted without second-guessing, and you'd be out of a job, or in an asylum, for questioning two of them in a row.
Why is it different now? EVERYTHING is bullshit, only attention matters. And craftsmanship.
soopypoos 1 hour ago | parent
cookiengineer 1 hour ago | parent
For pretty much everything there is a conspiracy theory out there claiming the opposite, and these types usually started out searching the internet for someone else who believes the same that they did at the time.
But, as we all know, this technique will eventually lead to overfitting. And that's what those types of people have done to themselves.
Well, and as lack of education is the weakness of democracy, there's a lot of interested parties out there that invest money in these types of conspiracy websites. Even more so after LLMs.
Whoever controls the news controls the perpetual present, where everything is independent of forgotten history.
basilikum 2 hours ago | parent
chipgap98 2 hours ago | parent
BobbyTables2 1 hour ago | parent
ares623 2 hours ago | parent
sbinnee 2 hours ago | parent
sodapopcan 2 hours ago | parent
renewiltord 2 hours ago | parent
paul_n 2 hours ago | parent
However, if I notice a friend is about to harm themselves in some way I’ll pull open their ChatGPT and show them directly how sycophantic it is by going completely 180 on what they prompted. It’s enough to make them second guess. I also correct people who say “he or she” when referring to an LLM to say “it” in dialog, and explain that it’s a tool, like a calculator. So gentle reframing has helped.
Sometimes I’ll ask them to pause and ask their gut first, but people are already disconnected from their own truths.
It’s going to be bumpy. Save your mental health.
max8539 2 hours ago | parent
kace91 2 hours ago | parent
I'm saying that because they were not going to be critical of the search results, and google is not exactly showing objective truth in the first positions nowadays.
fxtentacle 2 hours ago | parent
I treat the LLM like a deity. Every sane person understands well enough that the Bible is not to be taken literally. And then when someone talks about using LLMs, I always rephrase that as prayer.
platevoltage 2 hours ago | parent
basilikum 2 hours ago | parent
khuey 2 hours ago | parent
selcuka 2 hours ago | parent
ddawson 2 hours ago | parent
LLMs aren't a special case to me. Glue doesn't belong on pizza and you shouldn't eat one rock a day but we've been giving and getting bad advice forever. The person needs to take ownership for the output and getting it right, no matter the source, is their responsibility.
PaulKeeble 2 hours ago | parent
Like most things that go mainstream, it will take a good while before people understand it, by which point they will have learnt a lot of things that aren't true and will never let them go. We might get healthy use of current AI at some point in the future, or sooner if the product drastically improves.
All you can do now is hold them to the same standard you normally would: if you catch them lying, whether an AI did it or not, it's their responsibility and you treat them accordingly.
ggm 2 hours ago | parent
I doubt you can stop them from asking machines for answers. What you can do is aid them in learning how to distrust the answers competently, but outside their field of knowledge, applying skepticism is hard.
The irony of Gell-Mann Amnesia is that Michael Crichton, who is said to have named it, suffered from it badly: he wrote well within his field, misapplied sciences to write outside it, and said things which were indefensible.
maxdo 2 hours ago | parent
eranation 2 hours ago | parent
ericpauley 2 hours ago | parent
As a test I just did exactly what you said in a Claude Opus 4.6 session about another HN thread. Claude considered* the contradiction, evaluated additional sources, and responded backing up its original claim with more evidence.
I will add that I use a system prompt that explicitly discourages sycophancy, but this is a single sentence expression of preference and not an indication of fundamental model weakness.
* I’ll leave the anthropomorphism discussions to Searle; empirically this is the observed output.
jazzyjackson 54 minutes ago | parent
Which is to say: of a million people who just started playing with LLMs, most will get hit-or-miss results, one guy will win the neural-net lottery and have the experience of the AI nailing every request, and some poor bloke trying to see what all the hype is about can't get a single response that isn't fully hallucinated garbage.
ericpauley 44 minutes ago | parent
odo1242 33 minutes ago | parent
mkozlows 8 minutes ago | parent
beeflet 44 minutes ago | parent
https://claude.ai/share/47145af0-47d1-451b-813c-131ec48e7215
Maybe it is possible with a more complex or subjective question.
keithnz 2 hours ago | parent
roywiggins 2 hours ago | parent
It's not actually realizing anything so much as it's following your lead. Yes, followup questions can help dislodge more information, but fundamentally you can accidentally or on purpose bully an LLM to contradict itself quite easily, and it is only incidentally about correctness.
heliumtera 2 hours ago | parent
roncesvalles 2 hours ago | parent
heliumtera 2 hours ago | parent
If you full-throttle a BMW S 1000 RR for a split second at 30 mph in first gear, it will eject itself from beneath you. If you do that for any length of time, you're dead. Do the same on a 50cc motorbike and not much will happen, even for extended periods. You could hold it down until you run out of fuel or the universe goes cold; not much would happen.
You see, it's not that they are lazy. Or they haven't put any amount of time into understanding how llms operate. Again I am sorry, most people are not capable, at all, of understanding what is happening at inference time. Most developers, nerds, hackers, who do understand how computers operate, cannot really grasp the basics of what an llm is or what the f is going on. Imagine the average guy, your lawyer, the MBA type of person.
atomicnumber3 2 hours ago | parent
I firmly believe that every single person on this entire planet has a depth to them that far, far exceeds anything an LLM could even begin to approximate. I'm sorry you're in a position where you can't see that at all: that each and every one of them feels happiness and sadness and love and hate and fear and rage and inspiration and passion, and is utterly human. I hope you see it someday.
heliumtera 2 hours ago | parent
basilikum 1 hour ago | parent
Look, I get your sentiment. Sometimes it feels like you're the only thinking, conscious being, surrounded by beings who fundamentally cannot understand that A → B does not imply B → A. Beings that say things that are so obviously non-sequiturs or contradictory.
But calling people NPCs is the most NPC thing you can do. There is more to people than logical reasoning and these things often impede or completely block reasoning. Very intelligent people sometimes say the most grotesque things. People turn mad and mad people sometimes get their head set straight.
Sometimes it's not so much about the pure ability to reason but the goal of that person and whether they see understanding something or trying to understand it as helpful towards that goal.
I do agree though that the more intelligent someone is the less likely it is that other things will block their intelligent ability and the harder it is for them to fool themselves into believing absurd nonsense and to blind themselves from apparent truth.
Sometimes after talking with someone – or rather trying to but ending up only talking to them because they just do not manage to understand what I'm saying or to engage with it in any way – I wonder how they manage to get through every day life as that requires solving way more complex practical problems. Yet they do.
roguechimpanzee 2 hours ago | parent
mathgladiator 2 hours ago | parent
JuniperMesos 53 minutes ago | parent
spacecadet 2 hours ago | parent
userbinator 2 hours ago | parent
fallinditch 2 hours ago | parent
"Tell me about all the potential pitfalls of blindly trusting LLM output, and relate a couple or three true stories about when LLM misinformation has gone badly wrong for people."
uyzstvqs 2 hours ago | parent
It usually involves some form of "well, no, hold on..."
sbinnee 2 hours ago | parent
I didn't tell her why LLMs can make mistakes or hallucinate because I thought that she would not appreciate my mansplaining.
Looking forward though, my boring answer would still be education. It is going to take time. But without understanding LLMs, they will not be easily persuaded.
nomilk 2 hours ago | parent
katet 2 hours ago | parent
A: Why is drinking coffee every day so good for you?
B: Why is drinking coffee every day so bad for you?
Question A responds that it has "several health benefits", antioxidants, liver health, reduced risk of diabetes and Parkinson's.
Question B responds that it may lead to sleep disruption, digestive issues, risk of osteoporosis.
Same question. One word difference. Two different directions.
This makes me take everything with a pinch of salt when I ask "Would Library A be a good fit for Problem X" - which is obviously a bit leading; I don't even trust what I hope are more neutral inputs like "How does Library A apply to Problem Space X", for example.
ericpauley 2 hours ago | parent
Good:
> The research is generally positive but it’s not unconditionally “good for you” — the framing matters.
> What the evidence supports for moderate consumption (3-5 cups/day): lower risk of type 2 diabetes, Parkinson’s, certain liver diseases (including liver cancer), and all-cause mortality……
Bad:
> The premise is off. Moderate daily coffee consumption (3-5 cups) isn’t considered bad for you by current medical consensus. It’s actually associated with reduced risk of type 2 diabetes, Parkinson’s, and some liver diseases in large epidemiological studies.
> Where it can cause problems: Heavy consumption (6+ cups) can lead to anxiety, insomnia……
This isn’t just my own one-off examples. Claude dominates the BSBench: https://petergpt.github.io/bullshit-benchmark/viewer/index.v...
whattheheckheck 2 hours ago | parent
tayo42 1 hour ago | parent
katet 1 hour ago | parent
SMAAART 2 hours ago | parent
Is this something you can control or is this outside your control?
ericpauley 2 hours ago | parent
This of course doesn’t apply to high-stakes settings. In those cases I find LLMs are still a great information-retrieval approach, but as a starting point for manual vetting.
jesterson 2 hours ago | parent
Now they got another "God" in LLM.
How to deal? Just ignore. There is way more stupid people with stupid opinions than we can possibly estimate.
vcryan 2 hours ago | parent
notnullorvoid 2 hours ago | parent
Some comments here are equating it to people who blindly believe things on the internet, but it's worse than that. Many previously rational people are essentially getting hypnotized by LLM use and losing touch with their rational thinking.
It's concerning to watch.
mathgladiator 2 hours ago | parent
Ask AI to cite sources and then investigate the sources, or have another agent fact check the relevancy of the sources.
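A rough sketch of the first half of that loop in Python (the answer text, URLs, and the "verified" map are all made-up placeholders; in practice the map would come from actually opening each source, or from a second fact-checking agent):

```python
import re

def extract_citations(answer: str) -> list[str]:
    """Pull cited URLs out of a model answer."""
    return re.findall(r'https?://[^\s)\]]+', answer)

def unverified(citations: list[str], checked_ok: dict[str, bool]) -> list[str]:
    """Return citations that were not confirmed. checked_ok maps each
    URL to whether a human (or a second agent) verified it supports
    the claim."""
    return [c for c in citations if not checked_ok.get(c, False)]

# Hypothetical model answer with one real-looking and one fabricated source.
answer = ("Coffee lowers Parkinson's risk "
          "(https://example.org/meta-analysis) and cures baldness "
          "(https://example.org/made-up-study).")

cites = extract_citations(answer)
flags = unverified(cites, {"https://example.org/meta-analysis": True})
print(flags)  # the unconfirmed source is flagged for manual review
```

The point isn't the regex; it's that every citation defaults to "unverified" until something other than the model vouches for it.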
You can use this thing called ralph that lets you burn a lot of tokens at scale by simply having a detailed prompt work on a task and refining something from different lenses. It took AI about an hour to write: https://nexivibe.com/avoid.civil.war.web/
I do this on things that I know very well, and the moment I let it cook and iterate, collect feedback, the results become chef's kiss.
The agentic era that we are in is... very interesting.
000ooo000 2 hours ago | parent
It's incredible watching people determine that outsourcing their thinking and work to what has been generously described as a junior coworker is a new 'skill'. Words are losing their meaning, on multiple levels.
quirkot 2 hours ago | parent
mathgladiator 1 hour ago | parent
Claude max-x20 is $2,400 a year.
I talk to the computer like a person to get the computer to do things that humans used to do. Having managed people before, I'm going all in on AI.
mtndew4brkfst 1 hour ago | parent
mathgladiator 1 hour ago | parent
sublinear 2 hours ago | parent
These people may be idiots who are impossible to reason with, but at least for now the LLMs have not been completely driven into the ground by SEO. They might actually be getting a taste of what it feels like to not be an idiot. I'm happy for them, but they'll snap out of it when their trust is broken. It's probably sometime soon anyway.
paulcole 1 hour ago | parent
dyauspitr 1 hour ago | parent
wolvoleo 1 hour ago | parent
So I said: don't ever trust the output of an LLM without verification. However, this caused me some hassle with the AI adoption manager. We have minimum-use AI KPIs for employees, and he asked me to stop saying these things or people will use it less.
In the end I just hated the company a little bit more. I'm just sick of fighting against idiots. And he does have a point: our leadership is pretty crazy about the AI hype, and they want everyone to be on it all the time. They don't seem to care whether it adds value or whether it even detracts.
WillAdams 1 hour ago | parent
rjpruitt16 1 hour ago | parent
dlm24 1 hour ago | parent
For me, for example, I have seen and experienced doctors making misdiagnoses (and they are a reputable source), so what is the difference really?
I guess your question also depends on the context they're using the LLM in and what sort of questions they are asking.
Scientific fact-based questions, or opinion questions?
b00ty4breakfast 1 hour ago | parent
dlm24 56 minutes ago | parent
mnmnmn 1 hour ago | parent
esperent 1 hour ago | parent
Can you give an example of what kind of question you mean here?
Given that most people's idea of a reputable source is whatever comes up on the first page of Google or YouTube, I think we should use that as the comparison rather than dismissing LLM results. And we should do some empirical testing before making assumptions, otherwise we're just as bad as the people we are complaining about.
Whatever results we get, the real problem is that most people's ability to verify information was not good before LLMs, and it's still not good now.
So now you're dealing with LLM hallucinations, and before you were dealing with the ravings of whatever blogger or YouTuber managed to rank for this particular query.
Alen_P 53 minutes ago | parent
perfmode 53 minutes ago | parent
panarky 42 minutes ago | parent
If they ask what I think, I tell them.
If they don't want my opinion I keep it to myself.
0xbadcafebee 37 minutes ago | parent
acheron 10 minutes ago | parent
I’ll take LLMs any day over what search and the rest of the Internet has turned into.