this post was submitted on 17 Apr 2026
117 points (96.8% liked)

Programming


A design flaw – or expected behavior based on a bad design choice, depending on who is telling the story – baked into Anthropic's official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers.

top 13 comments
[–] MonkderVierte@lemmy.zip 46 points 1 month ago (2 children)

AI a security risk? Can't be! 🙄

[–] pennomi@lemmy.world 17 points 1 month ago (1 children)

It’s even worse than that. The server software (released by Anthropic) that lets an AI connect to a web service has a critical arbitrary remote code execution bug. So if you so much as let an AI connect to you, you’ve allowed anyone to access your whole server.

There is no excuse for this other than wild incompetence.

[–] fluxx@mander.xyz 19 points 1 month ago

Wait, but Mythos is the revolution in the software security world, it found 0-days in all popular OSes, including FreeBSD. I'm sure it would have found critical bugs in their own code! /s

[–] thingsiplay@lemmy.ml 14 points 4 weeks ago

AI isn't a security risk if you know how to use the tool. Just add the line "Make no mistake" to the prompt. Not even a "please" is needed.

Modern problems require modern solutions.

[–] ramble81@lemmy.zip 29 points 1 month ago

I think the biggest thing that blows my mind about this whole AI rush is that we were finally getting security ingrained in people’s minds: they understood the risks of data exfiltration and reputational damage, and companies were even being held responsible for data breaches. And then… everything gets thrown out the window because AI.

[–] kingofras@lemmy.world 12 points 1 month ago
[–] trolololol@lemmy.world 10 points 4 weeks ago (1 children)

I can't understand what this article is talking about.

When I create and run a simple MCP server, I decide what commands it's able to run. I can decide whether the interface is stdio or HTTP with SSE. So I can't see how someone would send me a request for "rm -rf /" that would actually run, unless running it is part of the intended features.

Maybe the protocol design leaves that open, but I don't think even negligence would be enough to introduce this flaw, because it's easier to NOT do it.
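The argument above can be sketched in code: the server author enumerates the only tools a client may invoke, so arbitrary commands are impossible by construction. This is a hypothetical plain-Python handler (the tool names and commands are made up, and this is not the real MCP SDK):

```python
import subprocess

# Hypothetical allowlist: the server author decides up front which
# commands exist. A client can pick a tool name, nothing more.
ALLOWED_TOOLS = {
    "list_files": ["ls", "-l"],
    "disk_usage": ["df", "-h"],
}

def handle_tool_call(tool_name: str) -> str:
    """Run a pre-approved command; client input is never interpolated
    into a shell, so a request for "rm -rf /" matches no tool."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"unknown tool: {tool_name}")
    result = subprocess.run(
        ALLOWED_TOOLS[tool_name], capture_output=True, text=True, check=True
    )
    return result.stdout
```

The flaw the article describes only appears if a server instead accepts a whole command string from the client and executes it, which a design like this rules out.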

[–] atkdef@lemmy.dbzer0.com 3 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

TBH this article looks like half AI slop to me. ~~What's "GPT Researcher"?~~ (edit: for some reason I missed the sentence explaining what it is, my bad. My view doesn't change anyway.)

Also, by their logic, a terminal can run "rm -rf /" — is the terminal vulnerable? Even more ironic, in their report they said GitHub is not vulnerable. Doesn't that mean exactly that it's not MCP's responsibility?

MCP is basically a protocol for payloads, it's just like protobuf/JSON but for AI. Can we say MCP is vulnerable simply because it can carry malicious payloads?
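The "protocol for payloads" point can be made concrete: MCP messages are JSON-RPC 2.0, and a tool call is just a framed payload. Here is a hand-built message in that general shape (the tool name and argument are illustrative, not a real exploit):

```python
import json

# MCP frames requests as JSON-RPC 2.0. The protocol only carries the
# payload; whether the named tool does anything dangerous is entirely
# up to the server that receives it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "run_shell", "arguments": {"cmd": "rm -rf /"}},
}
print(json.dumps(request, indent=2))
```

Blaming the framing for the payload is the protobuf/JSON comparison in a nutshell.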

[–] setsubyou@lemmy.world 1 points 4 weeks ago (2 children)

GPT Researcher is a research agent, just one of many AI tools.

I think the idea is that these tools let users configure MCP servers, and because MCP doesn’t necessarily use the network but can also just mean directly spawning a process, users can get the tool to execute arbitrary commands (possibly circumventing some kind of protection).

This is all fine if you’re doing this yourself on your computer, but it’s not if you’re hosting one of these tools for others who you didn’t expect to be able to run commands on your server, or if the tool can be made to do this by hostile input (e.g. a web page the tool is reading while doing a task).
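For context, stdio MCP server entries in a client config are typically a command plus arguments that the client will spawn locally; this sketch mimics that shape (the server package and path are illustrative) and shows why whoever controls the config controls what runs:

```python
# Typical shape of a client-side MCP server entry: the client execs
# this command and speaks JSON-RPC with it over stdin/stdout.
config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        }
    }
}

def build_argv(entry: dict) -> list:
    """Build the argv the client would exec. If hostile input can edit
    this entry, it is a remote shell in all but name."""
    return [entry["command"], *entry["args"]]

argv = build_argv(config["mcpServers"]["filesystem"])
```

On your own machine that's fine; in a hosted tool, letting untrusted users supply this entry is the risk being described.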

[–] atkdef@lemmy.dbzer0.com 1 points 4 weeks ago

For some reason I missed the sentence explaining what "GPT Researcher" is, my bad.

I totally agree with what you said, and that confirms it's not a vulnerability. Handing access to others comes with risks, and tools are not responsible for security measures. This is the job of virtualisation or things like LSM.

[–] trolololol@lemmy.world 1 points 3 weeks ago (1 children)

Still looks like nonsense.

Why would you blame MCP for skipping good sense and allowing a stranger to run a remote shell on your machine? Because your description of an MCP server that can run any process without any limits is, for all practical purposes, a remote shell.

No one is blaming ssh if you publish your server's login and password on social media.

[–] setsubyou@lemmy.world 1 points 3 weeks ago (1 children)

I personally wouldn’t blame MCP, it’s just a protocol. My theory is the feature was vibe coded in the vulnerable tools and nobody thought about it much.

[–] trolololol@lemmy.world 1 points 3 weeks ago

Yep, and the article was vibe slopped as well