this post was submitted on 23 Jul 2025

homeassistant


Home Assistant is open source home automation that puts local control and privacy first.
Powered by a worldwide community of tinkerers and DIY enthusiasts.

Home Assistant can be self-installed on Proxmox or a Raspberry Pi, or purchased pre-installed: Home Assistant: Installation



What LLM is everyone using for HA voice when self-hosting Ollama? I've tried Llama and Qwen, with varying degrees of success at understanding my commands. I'm currently on Llama, as it seems a little better. I just wanted to see if anyone has found a better model.

Edit: as pointed out, this is more of a speech-to-text issue than an LLM issue. I'm looking into alternatives to Whisper.
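For anyone comparing models the same way, a minimal sketch of sending the same voice command to different models through Ollama's local HTTP API (`/api/generate` on the default port 11434; the model names in the example are placeholders, use whatever tags you've pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, command: str) -> dict:
    # Minimal non-streaming payload for Ollama's /api/generate endpoint
    return {"model": model, "prompt": command, "stream": False}

def ask(model: str, command: str) -> str:
    # Sends the command to a locally running Ollama instance (requires a live server)
    data = json.dumps(build_request(model, command)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example: run the same command through each candidate model and eyeball the results
# for m in ["llama3.1", "qwen2.5", "gemma2:27b"]:
#     print(m, "->", ask(m, "Turn off the kitchen lights"))
```

This only exercises the LLM side; if the transcription coming out of Whisper is already wrong, no model choice will fix it.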

[–] doodlebob@lemmy.world 0 points 9 months ago (2 children)

The Gemma 27b model has been solid for me. I'm using Chatterbox for TTS as well.

[–] smashing3606@feddit.online 0 points 9 months ago

Thanks, I'll give Gemma a shot.

[–] spitfire@lemmy.world 0 points 9 months ago (1 children)

27b - how much VRAM does it use?

[–] doodlebob@lemmy.world 0 points 9 months ago (1 children)
[–] spitfire@lemmy.world 0 points 9 months ago (1 children)

So basically this is for people with graphics cards that have 24GB of VRAM (or more). While I do have one, it's probably something most people don't ;)
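For a rough sense of why 27b lands in 24GB territory, a back-of-the-envelope estimate (assuming a ~4-bit quantization, which is typical for Ollama builds, plus a couple of GB of overhead for KV cache and activations; the function and numbers are illustrative, not exact):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: quantized weights plus a fixed overhead
    for KV cache and activations. Illustrative only."""
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb + overhead_gb

# 27B parameters at 4 bits/weight -> 13.5 GB of weights + ~2 GB overhead
print(round(estimate_vram_gb(27, 4), 1))  # -> 15.5
```

That fits on a 24GB card with room for context; at 8-bit or fp16 the same model would overflow it.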

[–] doodlebob@lemmy.world 0 points 9 months ago

Yeah, I went a little crazy with it and built out a server just for AI/ML stuff 😬