this post was submitted on 16 Apr 2026
1 points (100.0% liked)

LocalLLaMA

4588 readers
9 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard's autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago

32 GB of VRAM for less than $1k sounds like a steal these days, and I'm sure it's not getting any cheaper any time soon.

Does anyone here use this GPU? Or any recent Arc Pros? I basically want someone to talk me out of driving to the nearest place that has it in stock and getting $1k poorer.

[–] afk_strats@lemmy.world 0 points 1 day ago (1 children)

I find llama.cpp with Vulkan EXTREMELY reliable. I can have it running for days at a time without a problem. As for tokens/sec, that's a complicated question because it depends on the model, quant, speculative decoding, KV cache quantization, context length, and card distribution (see the example launch command after the table). Generally:

Typical speeds per model at deep context for agentic use; simple chats will be faster.

| Model | Quant | Prompt Processing (tok/s) | Token Generation (tok/s) | Hardware | Quality |
|---|---|---|---|---|---|
| Qwen 3.5 397B | Q2_K_M | 100-120 | 18-22 | 2 x 7900 + 4 x MI50 | ★★★★★ |
| Gemma4 31B or Qwen3.5 27B | Q8_0 | 400-800 | 20-25 | 2 x 7900xtx | ★★★★ |
| Qwen 3.6 35B | Q5_K_M | 1000-2500 | 60-100 | 2 x 7900xtx | ★★★★ |
| Qwen 3.5 122B | Q4_0 | 200-300 | 30-35 | 4 x MI50 | ★★★★ |
| gpt-oss 120b | mxfp4 (native) | 500-800 | 50-60 | 3 x MI50 | ★★ |
| Nemotron 3 Nano 30B | IQ3_K_XXS | 2500-3000 | 150-180 | 1 x 7900xtx | |
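
For anyone new to multi-GPU llama.cpp, here's a minimal sketch of the kind of setup behind numbers like these. The model path, context size, and split ratio are illustrative placeholders, not my exact configuration; the flags are from the standard llama-server CLI:

```sh
# Build llama.cpp with the Vulkan backend enabled
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Serve a model split across two cards.
# -ngl 99 offloads all layers to the GPUs, -c sets context length,
# -ts 1,1 splits tensors evenly across two devices, and
# -ctk/-ctv quantize the KV cache to cut VRAM use at long context.
# Model file and values below are placeholders, not my exact setup.
./build/bin/llama-server \
  -m ./models/Qwen3.5-27B-Q8_0.gguf \
  -ngl 99 \
  -c 32768 \
  -ts 1,1 \
  -ctk q8_0 -ctv q8_0 \
  --port 8080
```

Speculative decoding is just another flag on top of this (a small draft model passed via -md), which is part of why tok/s varies so much between setups.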
[–] lavember@programming.dev 0 points 1 day ago

that's sick, thanks for sharing