When using llama.cpp, does it pass your prompts through a web server for processing? Any privacy concerns?
Sounds like I’m looking at a few grand to run something decent. I’ll need to do more research before I commit to that big of a purchase, but your machine sounds nice!
Are there any small models you recommend that can run on 16GB DDR4 and an i7? No dedicated graphics card with separate VRAM. Maybe I’ll just experiment with something very small first.
Thanks again!
No - it absolutely does NOT send your prompts off to some remote web server; inference runs entirely on your machine. llama.cpp has thousands of eyes on the code; there’d be an uproar if there were any sneaky bullshit telemetry built in. (The llama-server binary does start a web server, but it’s a local one bound to your own machine, and nothing leaves it.)
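You can even verify this yourself. Here’s a minimal sketch, assuming you’ve already started llama-server with some GGUF model on the default 127.0.0.1:8080 (the prompt is just an example): everything goes to your own machine’s loopback address and nowhere else.

```python
# Minimal sketch: talk to a llama-server instance you started yourself.
# Assumes llama-server is already running on the default 127.0.0.1:8080,
# e.g. via `llama-server -m your-model.gguf` (the model file is yours).
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",  # loopback only; never leaves your machine
    json={
        "messages": [{"role": "user", "content": "Say hi in five words."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```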
PS: llama.cpp has its own built-in web UI front end (think: ChatGPT, but local on your machine) that’s really, really nice and worth considering as your daily chat front end.
Small models in the 16GB range: sure. What would you like to do with your LLM? General use or something specific?
Thanks. I don’t understand web UIs well enough; I thought they had to be hosted somewhere. I’ll try it out.
I just want to use it for general web search, data processing, and maybe some light automation. In the beginning I just wanna understand how it all works and how to set it up, so a small model is fine.
The web UI is the thing you type into :) You host it yourself. llama.cpp is the backend runner… it just so happens that it now has a built-in front end too. You can see it below:
https://github.com/ggml-org/llama.cpp/discussions/16938
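If you want to try it, here’s a rough sketch of launching it from Python. The flags are real llama-server flags, but the model filename is a placeholder for whatever GGUF you downloaded.

```python
# Rough sketch: launch llama.cpp's built-in server + web UI and open it.
# Assumes `llama-server` is on your PATH and you have a GGUF model file.
import subprocess
import time
import webbrowser

server = subprocess.Popen([
    "llama-server",
    "-m", "your-model.gguf",   # placeholder: path to the GGUF you downloaded
    "--host", "127.0.0.1",     # bind to loopback only: local and private
    "--port", "8080",
])

time.sleep(5)  # crude: give it a moment to load the model

# The same address serves both the web UI (in a browser) and the API.
webbrowser.open("http://127.0.0.1:8080")
```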
(Most things run llama.cpp underneath btw and then slap something else on top)
You’re probably going to be better served by Jan.ai until you’re up on your feet; it’s a little friendlier / less cryptic when starting out. Jan bundles both llama.cpp AND a different web UI and other stuff on top. All of it stays on your machine.
https://www.jan.ai/docs/desktop/quickstart
As I recall, Jan has a few one-touch-install models (older but pretty decent ones; worth trying when you’re just starting out).
Are you running llama.cpp via Docker, or did you compile it on your machine?
I run it bare metal. No Docker.
Cool, I’m doing the same. I thought you had to run it via Docker, but I’m running it via PowerShell and the web UI.
Got it. I’ve built simple webpages and opened those files in my browser to preview them, but didn’t know those would connect to anything.
I’ll check out Jan and llama.cpp and see which works for me.
Thanks!!
You would be surprised by the smaller 7-12B LLMs. Give them tools and they can work well.
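For a taste of what “give them tools” means, here’s a hedged sketch against llama-server’s OpenAI-compatible endpoint. Tool support depends on the model and its chat template (you may need to start the server with --jinja), and `get_time` is a made-up example tool, not part of any library.

```python
# Hedged sketch: OpenAI-style tool calling against a local llama-server.
# Assumes llama-server is running on 127.0.0.1:8080 with a model whose
# chat template supports tool calls.
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_time",  # hypothetical tool, for illustration only
        "description": "Get the current local time.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "What time is it?"}],
        "tools": tools,
    },
).json()

# If the model decided to use the tool, the reply carries a tool_calls entry
# instead of plain text; your code then runs the tool and sends the result back.
message = resp["choices"][0]["message"]
print(message.get("tool_calls") or message.get("content"))
```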
Thanks! I imagine you can create and pull tools from somewhere. Where is a good place to find prebuilt tools?
Depends on what you settle on. E.g., Open WebUI has a bunch here:
https://openwebui.com/search?sort=top&t=all&page=1&type=tool
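For reference, Open WebUI tools are, as I understand it, just Python files: a class named Tools whose typed, docstring’d methods become functions the model can call. A rough sketch of the shape, from memory, so treat the details as approximate and check the docs:

```python
# Rough sketch of an Open WebUI tool file (shape from memory; verify against
# the Open WebUI docs). Each method of `Tools` becomes a callable tool.
class Tools:
    def word_count(self, text: str) -> str:
        """
        Count the words in a piece of text.
        :param text: The text to count words in.
        """
        return f"{len(text.split())} words"
```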
Thanks! The LLM Council tool sounds cool. Will I be able to just grab an MCP server link and plug it into the llama.cpp UI, or is it more involved?