• pHr34kY@lemmy.world
    · 11 hours ago

    I appreciate the honesty when they say it’s an AI response and not genuine knowledge.

    When I tell someone “an LLM told me that…”, it’s usually followed by “Let’s see if there’s any truth to it.” An AI response should always be treated as a suggestion, not an answer.

    Hell, Google’s AI still doesn’t know which day the F1 GP is on this week. It was wrong by a whole week a while back. Now it’s only off by a day.

    • mcv@lemmy.zip
      · 6 hours ago

      An AI response should always be treated as a suggestion, not an answer

      Exactly. An AI response can be a great way to get started on a topic you know little about, but it’s never a definitive answer. You have to verify whether it’s actually true. Whether it works. Never trust it blindly.

      • Panthenetrunner@lemmy.dbzer0.com
        · 4 hours ago

        I feel like a big barrier is people anthropomorphizing the AI. It’s not “ChatGPT generated this”, it’s “ChatGPT said this”. I don’t necessarily blame people for it; a machine that speaks to you short-circuits something in people’s brains, and it’s not like we’ve got better language to talk about it. It’s just that… people treat it as an opinion, not as software output. And as long as that’s how people handle it, I just don’t know if a “healthy” use of the technology is possible.