• Tamps@feddit.uk · 123 points · 11 hours ago

    Just another form of vendor lock-in. If your business model is mostly/entirely dependent on an external party, that should be a well understood risk.

    • shirasho@feddit.online · 32 points · 8 hours ago

      I am responsible for gathering information on AI to determine whether we should use it for our next project. The ask was to use it for a critical process task. Immediately I was thinking “no, we are not using AI at all,” but I obviously need quantifiable data. This is just another thing to add to my list of reasons why using AI for core processes is one of the stupidest things you could ever do.

      • boonhet@sopuli.xyz · 3 points · 56 minutes ago

        Well, you could also spec out a few machines for local LLMs as a sensible alternative to show the higher-ups. “This is what it’ll cost us if we don’t want to be caught with nothing but our dicks in our hands when a vendor decides to shut us down for actually using the shit we’re paying for,” and “it’ll end up saving us money eventually” (if you can make the case for that; you’ll have to do your own calculations).

        Keep in mind the top-of-the-line open models will require some 600–700 GB of VRAM IIRC; you may want to check ollama for examples. And you’d want redundancy, of course, not a single machine.
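        To sanity-check that figure, a rough back-of-the-envelope: weight memory is roughly parameter count times bytes per parameter, before KV cache and activation overhead. A minimal sketch, where the ~671B parameter count and the quantization levels are illustrative assumptions, not vendor specs:

        ```python
        # Rough VRAM needed just to hold an LLM's weights:
        # params (in billions) x bytes per parameter = GB of weights.
        # Real deployments need extra headroom for KV cache and activations.
        def weights_vram_gb(params_billion: float, bytes_per_param: float) -> float:
            return params_billion * bytes_per_param

        # e.g. a ~671B-parameter model (illustrative size):
        print(weights_vram_gb(671, 1.0))  # 8-bit weights -> 671.0 GB
        print(weights_vram_gb(671, 0.5))  # 4-bit quantized -> 335.5 GB
        ```

        Which is roughly where the 600–700 GB figure comes from at 8-bit precision; aggressive quantization halves it, at some quality cost.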

        Capex will usually seem more sensible to businesses than opex since it’s a one-time cost, but the number here should be big enough to deter them unless you work at a really big company.
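        That capex-vs-opex tradeoff can be made concrete with a break-even calculation. A quick sketch; all the dollar figures below are hypothetical placeholders, not real quotes:

        ```python
        # Months until buying hardware (capex) beats paying a vendor's API
        # bill (opex). All figures are made-up placeholders; use your own quotes.
        def breakeven_months(hardware_cost: float,
                             monthly_api_cost: float,
                             monthly_power_and_ops: float) -> float:
            saving_per_month = monthly_api_cost - monthly_power_and_ops
            if saving_per_month <= 0:
                return float("inf")  # self-hosting never pays off at these numbers
            return hardware_cost / saving_per_month

        # e.g. $250k of GPU servers vs a $15k/month API bill, $3k/month to run:
        print(round(breakeven_months(250_000, 15_000, 3_000), 1))  # ~20.8 months
        ```

        If the break-even lands past the hardware’s useful life, the capex argument dies on its own.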

        But also, what type of task is it? Perhaps AI is not a bad fit, just LLMs. There are plenty of decent use cases for other types of AI. For example, you could tell whether something is a hot dog or not with pretty good accuracy.