• Deebster@programming.dev · 2 days ago

    Have they taken out the AI-generated papers? We know that training LLMs on LLM-generated text leads to an absolute collapse in quality, and we also know that AI has been showing up in papers, so if they haven’t, this will be quite unreliable.

    • brucethemoose@lemmy.world · 2 days ago

      > We know that training LLMs on LLM-generated text leads to an absolute collapse in quality.

      This is often repeated, and it’s true, but it needs to be qualified.

      Modern LLMs use tons and tons of “augmented” data, which is code for LLM-generated or LLM-massaged data. Some is even generated during training and judged on the fly; the papers on that technique are what made DeepSeek famous.

      Training on LLM trash will, of course, yield greater trash, and obviously good text has to come from something real. But that’s because slop is slop. And there are issues with “deep frying” LLMs, yes, but simply training an LLM on LLM output does not necessarily reduce quality. It often helps, significantly.
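      In pseudocode, that “generate and judge” loop looks something like the sketch below. To be clear, this is a minimal illustration: generate_candidates() and judge() are stand-ins I made up, not any real pipeline’s API, and real setups (like the DeepSeek papers) use trained reward models and verifiers.

      ```python
      import random

      def generate_candidates(prompt: str, n: int = 4) -> list[str]:
          # Stand-in: in practice, sample n completions from the
          # current model at some temperature.
          return [f"{prompt} -> candidate {i}" for i in range(n)]

      def judge(candidate: str) -> float:
          # Stand-in: in practice, a reward model, a verifier (unit
          # tests, math checkers), or an LLM judge scores this.
          return random.random()

      def augment(prompts: list[str], threshold: float = 0.7) -> list[str]:
          # Keep only generations the judge scores highly; those become
          # new training data, so slop is filtered out, not fed back in.
          return [c for p in prompts
                  for c in generate_candidates(p)
                  if judge(c) >= threshold]
      ```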


      > And we also know that AI has been showing up in papers, so if they haven’t, this will be quite unreliable.

      Now this is a problem.

      TBH, LLMs would be pretty good at flagging papers for humans to check, similar to what Wikipedia is already doing. But yeah, if you just feed bad papers into the prompt, LLMs generally assume the context is true, and that’s a tremendous problem.
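      Flagging-not-deciding would look something like this. Again just a sketch: Paper, PROMPT, and llm_score() are invented names, and any chat-completion API would slot into llm_score().

      ```python
      from dataclasses import dataclass

      @dataclass
      class Paper:
          title: str
          abstract: str

      PROMPT = ("On a scale of 0 to 1, how likely is this abstract to be "
                "LLM-generated? Reply with only the number.\n\n{abstract}")

      def llm_score(prompt: str) -> float:
          # Stand-in for a call to whatever model you host; a real
          # version would parse the number out of the response.
          return 0.0

      def flag_for_review(papers: list[Paper], threshold: float = 0.5) -> list[Paper]:
          # Return papers a human should look at. The LLM only flags;
          # it never decides, since it mislabels confidently both ways.
          return [p for p in papers
                  if llm_score(PROMPT.format(abstract=p.abstract)) >= threshold]
      ```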

    • T156@lemmy.world · 2 days ago

      I would be surprised if it were something they trained themselves, and not an off-the-shelf model hooked up to a search backend.

      • brucethemoose@lemmy.world · 1 day ago

        It’s probably their own search/RAG backend, or at least their configuration of some open-source project.

        And that’s the important part. Get the article retrieval right, and the LLM performance isn’t that important; they could self-host Qwen 27B or something and it’d work fine.
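        The shape is retrieve-then-generate, with retrieval carrying the load. A rough sketch; search() and local_llm() are placeholders, not any specific project’s API:

        ```python
        def search(query: str, k: int = 5) -> list[str]:
            # Stand-in: a real backend would be BM25 or vector search
            # over the article corpus (Elasticsearch, Qdrant, pgvector...).
            return ["<retrieved article text>"] * k

        def local_llm(prompt: str) -> str:
            # Stand-in: e.g. a modest self-hosted model behind llama.cpp
            # or vLLM; with good retrieval, the model matters much less.
            return "<answer grounded in the context above>"

        def answer(query: str) -> str:
            context = "\n\n".join(search(query))
            prompt = ("Answer using ONLY the articles below; say 'not "
                      f"found' if they don't contain it.\n\n{context}\n\nQ: {query}")
            return local_llm(prompt)
        ```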