• TheTechnician27@lemmy.world · 3 days ago

    LLMs do not possess the ability to reason over the information they are fed.

    Ah, yes, I forgot that if an LLM has no conscious ability to reason, then we shouldn’t have any terminology to describe the general process it uses to create an output. Case closed. I’m glad you’ve enlightened us that useful jargon isn’t actually useful. Data goes in, data goes out; you can’t explain that.

    • Leon@pawb.social · 3 days ago

      That isn’t what I said. You’re doing a pretty good LLM impression yourself.

      I hate it when people use unnecessary terms to describe something. Hiding the actual workings behind silly marketing buzzwords only serves to sensationalise what these things actually do, and that is exactly why I hate them.

      Using an LLM to process the output of a search over a repository of scientific papers isn’t going to automatically make that output useful or accurate. Papers aren’t necessarily high quality just because they’ve been published; just look at the garbage that Lisa Littman, Kenneth Zucker, and their ilk have shat out over the decades.

      An LLM, no matter how many scripts or cleverly written prompts you augment it with, will never be able to differentiate good science from bad; it will give just as much credence to garbage papers as to genuinely good ones. That’s a problem even without “hallucinations” entering the picture.

      Edit: I think the overall idea of the site is awesome; knowledge should be freely available. I just don’t see what value an LLM adds. I only see problems with it.

      • TheTechnician27@lemmy.world · 3 days ago

        Using an LLM to process the output of a search over a repository of scientific papers isn’t going to automatically make that output useful or accurate. Papers aren’t necessarily high quality just because they’ve been published.

        For someone who likes to get riled up about people not responding to “what you said”, this whole tangent about the accuracy of RAG and the fact that scientific papers aren’t automatically 100% reliable is pretty hilarious. Literally nobody was arguing that it makes output “automatically useful or accurate” or that published papers are “necessarily high-quality”.

        You’re genuinely acting like you’re taking issue with terminology describing a process because that process isn’t perfect. “RAG” (retrieval-augmented generation) adequately describes a general technique for improving the accuracy of an LLM’s output to a query, and all you’re doing now is pissing and moaning that “um, just because it’s published doesn’t mean it’s *high-quality*” – which has categorical fuck-all to do with the usefulness of the term.
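
        For the peanut gallery, here’s a minimal sketch in Python of what the term actually names. The corpus, the bag-of-words scoring, and the llm() stub are toy assumptions of mine, not anyone’s real pipeline; the point is only that retrieved text gets put into the prompt before generation.

        ```python
        # Minimal RAG sketch: toy corpus, bag-of-words retrieval, and a
        # hypothetical llm() stub standing in for a real model call.
        from collections import Counter
        import math

        # Toy stand-in for a repository of paper abstracts (a real system
        # would index full papers, typically with learned embeddings).
        CORPUS = [
            "Retrieval grounds a model's output in indexed documents.",
            "Publication alone does not guarantee a paper's quality.",
            "Language models assign probabilities to token sequences.",
        ]

        def cosine(a: Counter, b: Counter) -> float:
            """Cosine similarity between two bag-of-words count vectors."""
            dot = sum(c * b[t] for t, c in a.items())
            norm = math.sqrt(sum(c * c for c in a.values())) * \
                   math.sqrt(sum(c * c for c in b.values()))
            return dot / norm if norm else 0.0

        def retrieve(query: str, k: int = 2) -> list[str]:
            """Rank corpus passages by similarity to the query; keep top k."""
            q = Counter(query.lower().split())
            ranked = sorted(CORPUS, reverse=True,
                            key=lambda d: cosine(q, Counter(d.lower().split())))
            return ranked[:k]

        def llm(prompt: str) -> str:
            """Stub for an actual model call (hypothetical)."""
            return "[generation conditioned on]\n" + prompt

        def answer(query: str) -> str:
            # The "augmented" part: retrieved passages are prepended to the
            # prompt, so generation is conditioned on them rather than on
            # the model's parametric memory alone.
            context = "\n".join("- " + p for p in retrieve(query))
            return llm("Sources:\n" + context + "\n\nQuestion: " + query)

        print(answer("does publication guarantee quality?"))
        ```

        Note what the sketch doesn’t do: nothing in it can tell a good paper from a bad one, which is exactly the point. “RAG” names a technique, not a guarantee of quality.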

        We’ll continue to use it, and you’re welcome to continue being annoyed by it.

        PS: As a hobby, I write material that LLMs are trained on; sorry if it annoys you that my writing style is coincidentally similar.