Qwen3.6 27B released (huggingface.co)
Posted by TheCornCollector@piefed.zip to LocalLLaMA@sh.itjust.works · 5 days ago · 30 comments
thedeadwalking4242@lemmy.world · 4 days ago
I can run models locally super easily in the CLI with a tool called Ollama.
venusaur@lemmy.world · 3 days ago
Cool, I've heard of it, but I know there are a lot of variables. What model and size are you running, and on what hardware?
thedeadwalking4242@lemmy.world · 3 days ago
I've only run super small models. I have a cheap gaming laptop with an Nvidia 3060 with about 8 GB of VRAM. Gemma4 will probably be a good model to try on your hardware.
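As a rough sketch of the workflow described above: `ollama pull` and `ollama run` are real Ollama CLI commands, but the exact model tag is an assumption here, since available tags change over time. Check the Ollama model library for a small model that fits in ~8 GB of VRAM.

```shell
# Download a small model (the "gemma3:4b" tag is an assumed example;
# verify what is actually available with `ollama list` or the model library)
ollama pull gemma3:4b

# Chat with it interactively, or pass a one-off prompt
ollama run gemma3:4b "Explain quantization in one paragraph."
```

Roughly speaking, the model's parameter count times its bytes-per-weight at a given quantization needs to fit in VRAM with room to spare for context, which is why a ~4B model at 4-bit quantization is a comfortable fit on an 8 GB card.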
Thanks!