Ask ChatGPT to estimate the carbs in your lunch. Now ask it again. And again. Five hundred times. You’d expect the same answer each time. It’s the same photo, the same model, the same question. But you won’t get the same answer. Not even close — and the differences are large enough to cause a
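One reason for the variation: at each step an LLM produces scores over possible next tokens and then samples from them. A toy sketch of that sampling step (the numbers and token labels here are made up for illustration, not from any real model):

```python
import math
import random

def sample_next(logits, temperature=1.0, rng=random):
    """Sample one index from softmax(logits / temperature).

    This mimics how an LLM picks its next token: identical input
    gives identical logits, but with temperature > 0 the draw is
    random, so repeated runs can produce different answers.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Same "photo, same model, same question" -> same logits every time,
# yet 500 draws typically yield more than one distinct answer:
logits = [2.0, 1.5, 0.5]  # hypothetical scores for three carb estimates
answers = {sample_next(logits) for _ in range(500)}
```

With the temperature pushed toward zero the sampler collapses to always picking the highest-scoring token, which is why "deterministic mode" exists but is not the default in chat products.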
When are people going to realize that an LLM is not a calculator and doesn’t actually know anything?
That it is not a calculator and is horrible at determinism is not debatable; however, its (very biased) huge knowledge base is its core feature.
How come it’s inaccurate about 40% of the time when I know the answer, then? It’s a bullshit factory. A chatbot that’s fundamentally designed to sound like a person and be able to respond to any prompt. But truth isn’t any part of the fundamental architecture of an LLM.
Bullshit factory is very apt. I was using it for an open-book exam and it gave answers entirely skewed to the way the question was asked.
For example, if I asked “is X bacteria a pathogen in Y disease”, it would say yes, it was a very bad pathogen.
If I asked “what effects does X bacteria have in this body system”, it said it was a beneficial bacteria.
Never trust the AI summary; you have to fully read the studies.
Well, first the AI tech corporations need to stop advertising that AIs can do all this.
Probably never. Just like people never realized how computers work, how networks work, how businesses work, how economies of scale work, how financial markets work, how…
We the people don’t give a shit about how anything works, for the most part. Exceptions include your narrowly focused expertise. We convince ourselves that we understand things, using top-down perspectives, because it’s easier than actually understanding things from a bottom-up perspective.
Even the strongest critics of AI can’t substantively explain how AI works. They use misnomers like “glorified autocomplete” to reason about its inaccuracy, rather than understanding the fundamental limitations of the approach used.