hydrofoiling catamarans, but yes
lime
boooo +1
...you okay man?
sounds like you should transition to being an architect
it's melted snowflakes do try to keep up
not tried gemma yet, i've stayed away from google stuff. maybe i'll give it a shot.
yeah one of those framework machines with 128GB shared ram would have been amazing. shame they're sending money to racists.
one of my most recent fun activities came from discovering the "allow editing" button in koboldcpp. since the model is fed the entire conversation so far as its only context, and doesn't save data between iterations, you can basically re-write its memory on the fly. i knew this before but i'd never though to do it until there was an easy ui option for it, and it turned out to be a lot of fun, because when using a "thinking" model like qwen3.5 you can convince it that it's bypassing its own censorship.
basically you give the model a prompt to work off of, pause it in the middle of the thinking process, change previous thoughts to something it's been trained to filter out (like sex or violence or opinions critical of the ccp), and it will start second-guessing itself. sometimes it gets stuck in a loop, sometimes it overcomes the contradiction (at which point you can jump in again and tweak its memory some more) and sometimes it gets tied up in knots trying to prove a negative.
a previous experiment was about feeding stable diffusion images back into itself to see what happens. i was inspired by a talk at 37c3 where they demonstrated model collapse by repeatedly trying to generate the same image as they put in (i think this was how sora worked).
i will never understand that attitude. four hours of exploration, learning and puzzle-solving sounds like the best part of the job. an isolated, well-specified problem that can be completed in a day is like the most fun you can have programming. why swap that for an hour of code review?
i did my first machine learning course more than 10 years ago, so i'm not ashamed to admit that i bought beefier hardware to play around with local models in early 2023. i still like doing that. mostly because i know my gpu is powered entirely off of fossil-free energy and because i decided early on not to spew the output all over the internet unless it was poignant. or funny. not as in "the llm told a good joke", more as in "i compressed this poor thing to fit on a cd and now it can only talk about dolphins".
qwen3.5-12B really screams along on a 7900xtx. like, up to 70-100 tokens a second. perfect for seeing the results of your torture methods quickly.
no, robert deniro is not "in heat"



holy fuck the resolution to the cow thing had me in stitches.