Large Language Models

262 readers
1 user here now

A place to discuss large language models.

Rules

  1. Please tag [not libre software] and [never on-device] services as such (those not green in the License column here).
  2. Be useful to others

Resources

github.com/ollama/ollama
github.com/danny-avila/LibreChat
github.com/Aider-AI/aider
wikipedia.org/wiki/List_of_large_language_models

founded 2 years ago

Hello,

I have been looking into a new laptop and kept coming across ones with these NPUs heavily advertised. From what I've read, they don't seem very functional at this stage.

They top out at around 45-50 TOPS, it seems. I found some articles and comments suggesting that could be useful for running smaller models locally, but also statements conflicting with that. As well, most, if not all, technical use of them seems locked into the Windows environment. Even AMD's program for local LLM use (GAIA) requires a Windows server for it to communicate with, iirc.

So, is there currently any technical use for these, such that it makes sense to grab a device with one for tinkering?

I'd considered experimenting with smaller models and seeing what comes of those (if small model improvements come through as DeepSeek proponents might suggest).

I'm also just generally new to the technology, but intrigued by the potential to run things locally, not least because of the potential to limit the environmental impact of large data center use.

Any comments, ideas, suggestions, or general pointing in a direction is very appreciated.

Thank you for taking the time. Have a good day!

Hallucinations are destroying under-resourced languages

These have been abundant since even before #GenAI, when they were generated by machine translation, and for whatever motivation, naive users have flooded crowdsourced resources with such hallucinations.

https://www.technologyreview.com/2025/09/25/1124005/ai-wikipedia-vulnerable-languages-doom-spiral/

@llm

submitted 7 months ago* (last edited 7 months ago) by VoxAliorum@lemmy.ml to c/llm@lemmy.world

Yesterday I had a brilliant idea: why not parse the wiki of my favorite tabletop roleplaying game into YAML via an LLM? I had tried the same with BeautifulSoup a couple of years ago, but the page is very inconsistent, which makes it quite difficult to parse using traditional methods.

However, my attempts with a local Mistral model (the one you get with ollama pull mistral) were not very successful: it first insisted on writing more than just the YAML code, and later had trouble with more complex pages like https://dsa.ulisses-regelwiki.de/zauber.html?zauber=Abvenenum So I thought I had to give it some examples in the system prompt, but while one example helped a little, when I included more, it sometimes started to just return one of the examples I had given it via the system prompt.

To give some idea: the bold text should become keys in the YAML structure, and the part that follows each key becomes the value. Sometimes values need to be parsed a bit further, like separating page numbers from book names; I would give examples for all of that.

Any idea what model to use for that or how to improve results?
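
One thing I haven't tried yet: newer Ollama versions support structured outputs, where you pass a JSON schema in the format field of /api/chat and the model is constrained to emit only JSON matching that schema, which would also stop the extra prose around the output. A rough sketch, with made-up field names since I haven't settled on a schema:

curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    {"role": "system", "content": "Extract the spell data from the page text the user sends."},
    {"role": "user", "content": "...page text here..."}
  ],
  "format": {
    "type": "object",
    "properties": {
      "name": {"type": "string"},
      "cost": {"type": "string"},
      "source": {"type": "object", "properties": {"book": {"type": "string"}, "page": {"type": "integer"}}}
    },
    "required": ["name"]
  },
  "stream": false
}'

The returned JSON could then be converted to YAML with any converter, so the model never has to produce YAML syntax itself.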

Since OpenAI removed access to GPT-4.5, I am looking for something comparable from any other company.

Personally, I used it when 4o was not good enough. 4.5 was way better at research and doing more complex programming tasks.

What is comparably good in your experience?

cross-posted from: https://lemmy.world/post/32632017

Is there a good LaTeX OCR model that I can run with Ollama?

I have tried https://github.com/lukas-blecher/LaTeX-OCR but the results are just not promising. Is there a good open-source LaTeX OCR tool that I could use with ease?

I am hoping that there is an Ollama LLM with image support that specializes in LaTeX OCR.
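
For reference, this is the kind of workflow I mean: any Ollama model with image support can be sent a picture through the images field of /api/generate and asked for LaTeX (llava here is just an example model, not a LaTeX specialist, so results may be rough):

curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt": "Transcribe the formula in this image into LaTeX. Reply with the LaTeX source only.",
  "images": ["'"$(base64 -w0 equation.png)"'"],
  "stream": false
}'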

thanks a lot in advance!

submitted 10 months ago* (last edited 10 months ago) by autonomoususer@lemmy.world to c/llm@lemmy.world

They cry when companies profit from their work, while ignoring the most blatant solution from the start: the AGPL.

Now, its libre software license text file has been replaced with a fake one, banning us users from freely forking new versions.

Open WebUI v0.6.6+ ... now adds a ... branding ... clause.

The original BSD-3 license continues to apply for all contributions made to the codebase up to and including release v0.6.5.

Open WebUI lets you download and run large language models (LLMs) on your device using Ollama.

Install Ollama

See this guide: https://lemmy.world/post/27013201

Install Docker (recommended Open WebUI installation method)

  1. Open Console, type the following command and press return. This may ask for your password but not show you typing it.
sudo pacman -S docker
  2. Enable the Docker service [on-device and runs in the background] to start with your device and start it now.
sudo systemctl enable --now docker
  3. Allow your current user to use Docker.
sudo usermod -aG docker $(whoami)
  4. Log out and log in again, for the previous command to take effect.
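
Before moving on, you may want to confirm Docker works without sudo; the standard check downloads and runs a tiny test image:

docker run --rm hello-world

If this prints a greeting, the group change took effect.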

Install Open WebUI on Docker

  1. Check whether your device has an NVIDIA GPU.
  2. Use only one of the following commands.

Your device has an NVIDIA GPU:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

Your device has no NVIDIA GPU:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
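
Either way, a quick way to confirm the container is up:

docker ps --filter name=open-webui

The STATUS column should read Up, and the 3000:8080 port mapping means the interface listening on port 8080 inside the container is reachable on port 3000 of your device.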

Configure Ollama access

  1. Edit the Ollama service file. This uses the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  2. Add the following, save and exit.
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
  3. Restart the Ollama service.
sudo systemctl restart ollama
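
To confirm the new setting took effect, check which address Ollama is listening on; it should now show 0.0.0.0:11434 (all interfaces, reachable from the Docker network) instead of 127.0.0.1:11434:

ss -tln | grep 11434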

Get automatic updates for Open WebUI (not models, Ollama or Docker)

  1. Create a new service file to get updates using Watchtower once every time Docker starts.
sudoedit /etc/systemd/system/watchtower-open-webui.service
  2. Add the following, save and exit.
[Unit]
Description=Watchtower Open WebUI
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
  3. Enable this new service to start with your device and start it now.
sudo systemctl enable --now watchtower-open-webui
  4. (Optional) Instead of the one-shot service above, run Watchtower continuously so it checks for updates at regular intervals (every 6 hours here) while Docker is running.
docker run -d --name watchtower --restart unless-stopped --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --interval 21600 open-webui
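
You can check that the one-shot service from step 3 ran:

systemctl status watchtower-open-webui

Because of RemainAfterExit=true, it should report active (exited) once the update check has completed.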

Use Open WebUI

  1. Open localhost:3000 in a web browser.
  2. Create an on-device Open WebUI account as shown.

I'm running Ollama with llama3.2:1b, smollm, all-minilm, moondream, and more. I am able to integrate it with coder/code-server, VSCode, VSCodium, Page Assist, the CLI, and I also created a Discord AI user.

I'm an infrastructure and automation guy, not a developer so much. Although my field is technically devops.

Now, I hear that some LLMs have "tools." How do I use them? How do I find a list of tools for a model?

I don't think I can simply prompt "Hi llama3.2, list your tools." Is this part of prompt engineering?

What, do you take a model and retrain it or something?

Anybody able to point me in the right direction?
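
Edit: the closest thing I've found so far is Ollama's tools parameter on /api/chat. You describe your own functions in each request (so no retraining, and there is no built-in per-model list to ask for), and a tool-capable model such as llama3.2 replies with a tool_calls field asking your code to run one. A sketch, where get_weather is just a made-up example function:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }],
  "stream": false
}'

Your own code then executes the function and sends the result back in a message with role "tool" so the model can produce the final answer.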

Did any of you already take a look at the A2A protocol page on GitHub?

cross-posted from: https://lemmy.dbzer0.com/post/41844010

The problem is simple: consumer motherboards don't have that many PCIe slots, and consumer CPUs don't have enough lanes to run 3+ GPUs at full PCIe gen 3 or gen 4 speeds.

My idea was to buy 3-4 cheap computers, slot a GPU into each of them, and use them in tandem. I imagine this will require some sort of agent running on each node, connected through a 10GbE network. I can get a 10GbE network running for this project.

Does Ollama or any other local AI project support this? Getting a server motherboard with CPU is going to get expensive very quickly, but this would be a great alternative.
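
The closest thing I've found so far is llama.cpp's RPC backend, though I haven't tried it, so treat the flags below as a sketch and check the rpc example README for your version: you build with GGML_RPC enabled, start rpc-server on each GPU node, and point the main instance at them so the model's layers get split across machines (LAN addresses here are made up):

rpc-server --host 0.0.0.0 --port 50052    # on each worker node
llama-cli -m model.gguf -ngl 99 --rpc 192.168.1.11:50052,192.168.1.12:50052    # on the head node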

Thanks

This is an update to a previous post found at https://lemmy.world/post/27013201


Ollama uses the AMD ROCm library, which can work well with many AMD GPUs not listed as compatible if you force an LLVM target.

The original Ollama documentation is wrong: the following cannot be set for individual GPUs, only for all or none, as shown at github.com/ollama/ollama/issues/8473

AMD GPU issue fix

  1. Check your GPU is not already listed as compatible at github.com/ollama/ollama/blob/main/docs/gpu.md#linux-support
  2. Edit the Ollama service file. This uses the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  3. Add the following, save and exit. You can try different versions as shown at github.com/ollama/ollama/blob/main/docs/gpu.md#overrides-on-linux
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
  4. Restart the Ollama service.
sudo systemctl restart ollama
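
To check whether the override worked, you can look at Ollama's startup log after the restart; it usually mentions the detected GPU and gfx version:

journalctl -u ollama | grep -i gfx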

submitted 1 year ago* (last edited 11 months ago) by autonomoususer@lemmy.world to c/llm@lemmy.world

Ollama lets you download and run large language models (LLMs) on your device.

Install Ollama on Arch Linux

  1. Check whether your device has an AMD GPU, NVIDIA GPU, or no GPU. A GPU is recommended but not required.
  2. Open Console, type only one of the following commands and press return. This may ask for your password but not show you typing it.
sudo pacman -S ollama-rocm    # for AMD GPU
sudo pacman -S ollama-cuda    # for NVIDIA GPU
sudo pacman -S ollama         # for no GPU (for CPU)
  3. Enable the Ollama service [on-device and runs in the background] to start with your device and start it now.
sudo systemctl enable --now ollama

Test Ollama alone

  1. Open localhost:11434 in a web browser and you should see Ollama is running. This shows Ollama is installed and its service is running.
  2. Run ollama run deepseek-r1 in one console and ollama ps in another, to download and run the DeepSeek R1 model while checking whether Ollama is using your slow CPU or your fast GPU.
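
In the ollama ps output, the PROCESSOR column is the part to watch: 100% GPU means the model is fully offloaded, 100% CPU means Ollama fell back to the CPU, and a mixed value means a partial offload. To watch it update live while the model loads:

watch -n1 ollama ps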

AMD GPU issue fix

https://lemmy.world/post/27088416

Use with Open WebUI

See this guide: https://lemmy.world/post/28493612
