
LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.


Recently made a post about the 35B MoE. Now the dense 27B variant has been released.


[–] SuspciousCarrot78@lemmy.world 0 points 5 days ago* (last edited 4 days ago)

For anyone who doesn't like reading model cards or whitepapers (i.e. most people), Bijan just did a review of it:

https://www.youtube.com/watch?v=N-0WtgxJ7ZU

It's...very, very impressive.

[–] TheCornCollector@piefed.zip 0 points 5 days ago

Artificial Analysis just posted their results, and there seems to be a similar increase in output token usage to that of the 35B model.
[Graph: Artificial Analysis benchmark token output shows a 40% increase]

[–] venusaur@lemmy.world 0 points 6 days ago (4 children)

Can I get a recommendation for an idiot's guide to running this model locally, and what kind of expectations should I have with the recommended setup(s)?

I presume I need at least a 24GB GPU? Can I trust buying a used one, or should I buy new?

What about GPU rental/cloud services?

Do I run it through the terminal, or does it have a UI?

Does it have all the same features and ease of use as public LLMs like Claude and ChatGPT, including MCP?

What else should I know?

Tons of questions. Where can I learn this stuff without breaking the bank?

Thanks!

[–] Abrinoxus@thelemmy.club 0 points 4 days ago* (last edited 4 days ago) (2 children)

KoboldCpp is an easy way to get into running local LLMs: they have executables for Linux, Mac, and Windows on their GitHub, and a simple GUI to load a model.

[–] venusaur@lemmy.world 0 points 4 days ago (1 children)
[–] Abrinoxus@thelemmy.club 0 points 3 days ago (1 children)

Np. Kobold has a very active Discord as well (I think Matrix too).

[–] venusaur@lemmy.world 0 points 3 days ago

I ended up starting with llama.cpp. I’ll check out Kobold too.

I use Ollama to run it, LiteLLM to put an OpenAI API in front of it, and use it via any app that can talk to an OpenAI API.

[–] SuspciousCarrot78@lemmy.world 0 points 5 days ago (1 children)

Rules of thumb:

  • For a 27B: if you want it to run entirely on your GPU, you will need a quantisation that fits, plus room for the KV cache. So (for example), if your model GGUF was 10GB, I'd leave another 2GB for KV cache, meaning you'd need 12GB to run it with a reasonable context length. I haven't looked at the quants for Qwen3.6 27B yet... I imagine the "good baseline" quant is what... 12? 15GB?

Having said that, remember that 1) you can run partially on CPU/GPU, and 2) you can use lower quants, etc. So, if you have "just" 12GB, a lower quant (I dunno... IQ3_XS?) might get you over the line. There's a quick sketch of this arithmetic after the list below.

  • You can run it however you want :) For someone brand new, the best all-in-one is Ollama or Jan.ai.

  • Yes. Jan.ai has MCP tooling (I imagine Ollama does as well), so you can follow the how-tos to set that up. Read their docs? What do you need to do with MCP?

  • What you should know: you'll reach a point where "more parameters = better performance" needs to be balanced against cost and smarter tooling. Don't be tempted to drop $$$ on something thinking you can just throw money at the problem to make it go away.

[–] venusaur@lemmy.world 0 points 4 days ago (1 children)

Thanks! I'm experimenting on my laptop with 16GB RAM and no GPU/VRAM. I installed llama.cpp and am testing Gemma 7B Q5, but it's not answering prompts correctly. It's analyzing the prompt instead of answering the question, or it gives me a poem, haha. Trying to figure it out.

Any lightweight model you recommend for just chat experimenting for now? Can they connect to the internet?

[–] SuspciousCarrot78@lemmy.world 0 points 4 days ago* (last edited 4 days ago)

I'll never not recommend Qwen3-4B 2507 Instruct... because despite being ancient in AI terms (so, 8 months, lol) it's solid. Notably, the base models in Jan are all Qwen3-4B variants.

Most models can search the web if they have access to a web-search tool.

[–] Rookeh@startrek.website 0 points 6 days ago (1 children)

There's a lot to cover here but I'll try to touch on each point:

The key requirement is fast memory that can be addressed by your GPU, and ideally a lot of it - hence the insane cost of this hardware right now.

Remember that you need space for the model's weights (think of this as its 'knowledge base') and the context window, which is basically the data needed for the LLM to keep track of your current conversation with it (effectively its short term memory).

With smaller pools of VRAM (8-16GB) you will have to compromise: either a more capable model that will lose context quickly and start hallucinating, or a less capable model that can maintain a session a bit longer but is overall less 'smart'.

For software - there are a couple of options for running the LLM itself, Llama.cpp is one of the more popular tools and is the one that I use. It has a web UI with the usual chat interface, and also exposes an API that you can plug other tools (e.g. opencode) into, depending on your use case.

In terms of hardware recommendations, at 20GB+ of VRAM you do have a bit more headroom compared to typical consumer-grade GPUs, but to be honest the most cost-effective way to get a shitload of VRAM is likely not a dedicated GPU but a system based around a recent APU.

I got a Minisforum MS-S1 last year for exactly this purpose. It is based on AMD's Strix Halo platform which it has in common with the Framework Desktop and a couple of other similar devices.

It has 128GB of unified RAM which can be divided between the GPU and CPU however you like, so there's plenty of capacity for even fairly chunky models. It also uses a tiny amount of power compared to a more traditional system with a dedicated GPU, while giving really reasonable performance for most AI workloads - more than enough for use in a homelab.

For cloud rental - doable, but pricing is a factor, and of course this will not actually be running locally.

Usability - manage your expectations, but depending on the model you run and the resources you throw at it, for a lot of use cases it can be comparable with older iterations of ChatGPT, Gemini, etc.

But remember, you are not a Google or an Anthropic and do not have an infinite pool of compute to throw at your model, nor do you have access to the specific models they are using.

[–] venusaur@lemmy.world 0 points 5 days ago (2 children)

Thank you!!! This is awesome!

When using llama.cpp, does it pass your prompts through a web server to process? Any privacy concerns?

Sounds like I’m looking at a few grand to run something decent. I’ll need to do more research before I commit to that big of a purchase, but your machine sounds nice!

Are there any small models you recommend that can run on 16GB DDR4 and an i7? No dedicated graphics card with separate VRAM. Maybe I'll just experiment with something very small first.

Thanks again!

[–] SuspciousCarrot78@lemmy.world 0 points 5 days ago* (last edited 4 days ago) (1 children)

No - it absolutely does NOT pass anything to a clandestine web server. llama.cpp has thousands of eyes on the code; there'd be an uproar if there were any sneaky bullshit telemetry built in.

PS: llama.cpp has its own built-in web UI front end (think: ChatGPT, but local on your machine) that's really, really nice and worth considering as your daily chat front end.

Small models in the 16GB range: sure. What would you like to do with your LLM? General use or something specific?

[–] venusaur@lemmy.world 0 points 4 days ago (1 children)

Thanks. I don't understand the web UI part enough - I thought it had to be hosted somewhere. I'll try it out.

I just want to use it for general web search, data processing, and maybe some light automation. In the beginning I just wanna understand how it all works and how to set it up, so a small model is fine.

[–] SuspciousCarrot78@lemmy.world 0 points 4 days ago* (last edited 4 days ago) (1 children)

The web UI is the thing you type in :) You host it yourself. llama.cpp is the back-end runner... it just so happens that it now has a built-in front end too. You can see it below:

https://github.com/ggml-org/llama.cpp/discussions/16938

(Most things run llama.cpp underneath, btw, and then slap something else on top.)

You're probably going to be better served by Jan.ai until you're up on your feet; it's a little friendlier / less cryptic when starting out. Jan has both llama.cpp AND a different web UI and stuff on top. All of it always on your machine.

https://www.jan.ai/docs/desktop/quickstart

As I recall, Jan has a few one-touch install models (older but pretty decent ones; worth trying when just starting out).

[–] venusaur@lemmy.world 0 points 4 days ago (1 children)

Are you running llama.cpp via Docker, or did you compile it on your machine?

[–] SuspciousCarrot78@lemmy.world 0 points 4 days ago

I run it bare metal. No Docker.

[–] Stiggyman@ani.social 0 points 5 days ago (1 children)

You would be surprised by the smaller 7-12B LLMs. Give them tools and they can work well.

[–] venusaur@lemmy.world 0 points 4 days ago (1 children)

Thanks! I imagine you can create and pull tools from somewhere. Where is a good place to find prebuilt tools?

[–] SuspciousCarrot78@lemmy.world 0 points 4 days ago

Depends on what you settle on. E.g. OpenWebUI has a bunch here:

https://openwebui.com/search?sort=top&t=all&page=1&type=tool

[–] thedeadwalking4242@lemmy.world 0 points 6 days ago (1 children)

I can run models locally super easily in the CLI with a tool called Ollama.

[–] venusaur@lemmy.world 0 points 5 days ago (1 children)

Cool, I've heard of it, but I know there are a lot of variables. What model and size are you running, and on what hardware?

I've only run super small models. I have a cheap gaming laptop with an Nvidia 3060 with like 8GB of VRAM.

Gemma4 will probably be a good model to try on your hardware.

[–] rando@sh.itjust.works 0 points 6 days ago

Excited to try it - 3.5 was great. I am going to wait for llama.cpp to get some updates before I try it, though.