TheCornCollector@piefed.zip to LocalLLaMA@sh.itjust.works · English · 7 days ago
Qwen3.6 27B released (huggingface.co) · 30 comments · 59 upvotes
Abrinoxus@thelemmy.club · 6 days ago
Koboldcpp is an easy way to get into running local LLMs: they have executables for Linux, Mac, and Windows on their GitHub, and a simple GUI to load a model.
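For anyone new to it, getting started is roughly a one-liner once you have a release binary; the binary and model filenames below are placeholders, so substitute whatever you downloaded:

```shell
# Grab the release binary for your OS from the KoboldCpp GitHub releases page,
# plus a GGUF model file from Hugging Face (names here are hypothetical).
chmod +x koboldcpp-linux-x64
./koboldcpp-linux-x64 --model qwen3-27b.Q4_K_M.gguf
# This serves a local web UI (default http://localhost:5001) for chatting with the model.
```

Running the binary with no arguments instead opens the GUI launcher mentioned above, where you can pick the model file and settings by hand.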
sexual_tomato@lemmy.dbzer0.com · 6 days ago
I use Ollama to run it, LiteLLM to put an OpenAI-compatible API in front of it, and then use it via any app that can talk to an OpenAI API.
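That stack sketches out to something like the following; the model tag is a placeholder, and the LiteLLM proxy listens on port 4000 by default:

```shell
# Hypothetical model tag; substitute whatever model you actually pulled.
ollama pull qwen3:27b                  # download and serve the model via Ollama
litellm --model ollama/qwen3:27b       # LiteLLM proxy exposes an OpenAI-compatible API

# Any OpenAI client can now point at http://localhost:4000/v1, e.g.:
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ollama/qwen3:27b", "messages": [{"role": "user", "content": "Hello"}]}'
```

The nice part of this setup is the last step: any app with a configurable OpenAI base URL and API key can be pointed at the local proxy unchanged.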
Abrinoxus@thelemmy.club · 4 days ago
Np. Kobold has a very active Discord as well (I think Matrix too).
venusaur@lemmy.world · 4 days ago
I ended up starting with llama.cpp. I'll check out Kobold too.
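For comparison, the llama.cpp route looks something like this; the model filename is a placeholder, and recent builds ship the `llama-server` binary used below:

```shell
# Build from source (prebuilt releases also exist on the llama.cpp GitHub page).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && cmake -B build && cmake --build build --config Release

# Serve a GGUF model; llama-server also exposes an OpenAI-compatible
# /v1/chat/completions endpoint alongside its built-in web UI.
./build/bin/llama-server -m qwen3-27b.Q4_K_M.gguf --port 8080
```

So all three options in this thread (KoboldCpp, Ollama+LiteLLM, llama.cpp) end up in the same place: a local HTTP endpoint your chat frontend of choice can talk to.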
Thanks!