We CAN do both. They might contribute to each other.
But what we can definitely fucking all agree is that spending all of our money on weapons in an effort to kill each other over which colour clothes Santa is wearing is pretty dumb.
30 TB of memory?!
You’re a billionaire!!
Behold the Splendor!

Hence the “all” in “all their hardware”.
Ah fuck. Just as I’m about to be RTO’d into the Chelsea office. The only respite was walking over to Battersea Park at lunch.
Can you freely choose what OS you run on all their hardware? If not, it’s a walled-garden.
I’d buy you a beer for that summary. That is exactly SPOT ON.
Anthropic right now are the good people.
That probably won’t last. But out of a bad bunch they’re the least bad.
To call AI useless is quite a strong statement.
There’s a million places to use it!
The problem is that the market thinks there’s a billion places to use it. And right now we’re funding 999 million places that shouldn’t be using AI but have the funding to do that dumb thing, so that we can figure out the one million places where it makes fantastic sense.
Aren’t we saying exactly the same? Give it an MCP server or a native skill that CAN track time.
Everyone’s getting their knickers in a twist over nothing here.
Of course an AI can track time, if it’s given access to a timer MCP server.
Can we track time without tools, just in our heads? Certainly not very accurately. We can, however, track it reasonably accurately if given access to a quartz stopwatch (typically +/-15 s/month).
A language model is built around language and reasoning with words/symbols. It’s no surprise that it has no timing capability of its own.
What Altman SHOULD be embarrassed about is that the model lies about its capabilities. That implies the context is still not right - it should be adequately trained and given context to prevent the lying. That points to a much more worrying issue - and something that Anthropic handles far better, IMHO (when asked if it can track time, it says “no, not on my own”, and then proceeds to build a JavaScript timer that it offers up to track time).

I love how a specific question was being asked and then you get downvotes for answering it.