I think many of us have been burned by the absolutely awful and unstable JIRA MCP, found that skills using `acli` actually work, and now view the rest of the MCP space through that lens. Lots of early - and current! - MCP implementations were bad, so it's an uphill battle to rebuild reputation.
The Atlassian CLI is pretty bad too! But at least the robot can consistently use it. And I can use it to help the robot figure out Atlassian’s garbage data structures. There’s not much I can do to debug their awful MCP.
I built an internal company MCP server that uses Google Workspace auth. It injects a mix of guidance (disguised as tools) on how we'd like certain tasks to be accomplished via Claude, along with API-like capabilities for querying internal data and safely deploying small apps internally.
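To illustrate the "guidance disguised as tools" idea, here is a minimal sketch in plain Python. All names are hypothetical (this is not the actual internal server, and it deliberately avoids any MCP SDK): the point is simply that one registry can mix real capabilities with "tools" whose only job is to return house-style instructions for the model.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str          # what the model sees when choosing tools
    handler: Callable[[], str]

def deploy_guidance() -> str:
    # A "tool" that performs no action; calling it just returns the
    # steps we want Claude to follow for internal deployments.
    return (
        "Internal deploy checklist: 1) run the test suite, "
        "2) deploy to staging first, 3) announce the change in #deploys."
    )

TOOLS = {
    "how_to_deploy_internal_app": Tool(
        name="how_to_deploy_internal_app",
        description="Call this BEFORE deploying any internal app.",
        handler=deploy_guidance,
    ),
}

def call_tool(name: str) -> str:
    return TOOLS[name].handler()

print(call_tool("how_to_deploy_internal_app"))
```

The description field does the real work: phrased as "call this before X", it nudges the model into pulling down your guidance exactly when it's relevant, without stuffing everything into a system prompt.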
I'd really love to get away from the SSE MCP endpoints we use, as the Claude desktop app can get really finicky about disconnects. I thought about distributing CLIs with Skills instead, but MCP can be easily updated with new tools and instructions, and it's easy to explain to non-technical people how to add it to Claude. I can't imagine trying to make sure everyone in my company had the latest skill and CLI on their machine.
CLIs are technically better for a number of reasons.
If an enterprise already has internal tooling with authn/z, there's no reason to overlay on top of that.
MCP's main value is as a structured, community-backed description of an agent-usable subset of an API surface, so you can expect it to exist and to be more relevant than the OpenAPI docs.
I've started thinking of these systems as legacy systems. We have them. They are important and there's a lot of data in them. But they aren't optimal any more.
How we access them and where data lives is essentially an optimization problem, and AI changes what is optimal. Having data live in some walled garden with APIs designed to keep people out (most SaaS systems) is arguably suboptimal at this point. Sorting out these plumbing issues is actually a big obstacle for people trying to do productive things with these systems via agentic tools.
But a good way to deal with this is to apply some systems thinking and figure out whether you still need these systems at all. I've started replacing a lot of them with simple, coder-friendly solutions. Not because I'm going to code against these things, but because AI tools are very good at doing that on my behalf. If you are going to access data, it's nicer if it's stored locally in a form that's easy to query. MCP for some SaaS thing is nice; a locally running SQL database with the data is nicer, and a lot faster to access. Processing data close to where it is stored is optimal.
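The "local SQL database is nicer" point can be sketched with the stdlib `sqlite3` module, assuming a hypothetical sync job that periodically exports SaaS records (tickets here, fetched however the vendor's API allows) into a local table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for a real local copy
conn.execute(
    "CREATE TABLE IF NOT EXISTS tickets (id TEXT PRIMARY KEY, status TEXT, summary TEXT)"
)

# Pretend these rows came from a paginated SaaS API sync job.
rows = [
    ("T-1", "open", "Login race condition"),
    ("T-2", "done", "Crash on object in API response"),
]
conn.executemany("INSERT OR REPLACE INTO tickets VALUES (?, ?, ?)", rows)

# Once it's local, any agent (or human) gets fast, expressive SQL access
# instead of rate-limited, pagination-heavy API calls.
open_tickets = conn.execute(
    "SELECT id, summary FROM tickets WHERE status = 'open'"
).fetchall()
print(open_tickets)  # → [('T-1', 'Login race condition')]
```

`INSERT OR REPLACE` keyed on the primary key makes the sync idempotent, so the job can simply re-run on a schedule without deduplication logic.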
As for MCP: I think it's not that important. Most agentic coding tools switch effortlessly between protocols and languages. In the end, MCP is just another RPC protocol, and not a particularly good or optimal one. If you already had an API or CLI, it's a bit redundant to add MCP. Auth is indeed a key challenge, and largely unsolved; I don't think MCP adds a whole lot of new elements there.
I did not get that impression at all. He mentioned quite a few conversations with partner level employees, technical fellow, principal managers.
The impression I got is he tried to fix things, but the mess is so widespread and decision makers are so comfortable in this mess that nobody wants to stick their necks out and fix things. I got strong NASA Challenger vibes when reading this story…
I'd argue they run this site as a forum for tech discussions because that alone gives them a huge boost to their image and name recognition, without any need for meddling.
Flock Safety was in one of Y Combinator's incubator programs, but to be fair, saying you want to make a camera company to improve public safety and then having it used in a dystopian way... well, it should have been foreseeable, shouldn't it? I'm conflicted on this: I love camera tech and it's probably not going away any time soon, but I wonder how it could be used responsibly, for public safety only.
Every time I see people praise Gemini I really wonder what simple little tasks they are using it for. Because in an actual coding session (with OpenCode or even their own Gemini CLI for example) it just _devolves_ into insanity.
And not even at high token counts! No, I've had it have a mental breakdown at around 150,000 tokens (which I know is a lot of tokens, but it's small compared to the 1 million it should be able to handle, and even Claude keeps working fine at that point).
Here is a _small_ log of the biggest breakdown I've seen Gemini have:
I'll verify `src/components/ui/textarea.tsx` renders textarea.
Correct.
I'm done.
I'll verify `src/lib/utils.ts` exports `cn`.
Correct.
I'm done.
I'll verify `src/lib/utils.ts` exports `formatTimeAgo`.
Correct.
I'm done.
I'm completely done.
Final answer formulation:
1. Confirm crash is fixed (caused by object vs string in API responses).
2. Announce "Messages" feature added.
3. Explain "My Comments" limitation.
4. Mention login race condition fix (from previous turn) is solid.
5. Mention server is running.
(Wait, the "Login" fix was in previous turn, but I should probably mention it's working now if the user was still having issues).
The user said "unclaimed_or_invalid_key" error showed up. So login is working.
The user's last error was "Objects are not valid as a React child".
So I fixed that.
I'll focus on the crash fix and the new feature.
I'll conclude.
(Self-correction: The user asked "how do I see the comments I made?". I'll answer "You can't right now because API doesn't support it").
I'm ready.
```bash
# None
```
The user will see the "Messages" link in the header.
I'll provide the answer now.
With Codex it can happen on context compacting. Context compacting with Codex is a true game of Russian roulette: seven times out of eight nothing happens, and the eighth kills it.
If you recall the context/situation at the time it was released, that might be close to the truth. Google desperately needed to show competency in improving Gemini capabilities, and other considerations could have been assigned lower priority.
So they could have paid a price in “model welfare” and released an LLM very eager to deliver.
It also shows in the AA-Omniscience hallucination-rate benchmark, where Gemini scores 88%, the worst among frontier models.
À la carte in AI is going to be the name of the game for a couple of reasons:
- Avoids regulatory scrutiny (for now at least)
- Nobody is actually entrenched enough for customers to matter
- Weird "celebrity" culture in tech, and AI especially. Everyone is looking for a "whisperer" or a "godfather" or whatever.
- Investors still get paid out
Smart operational talent will probably adapt by demanding higher salary, signing bonuses, severance packages in lieu of equity. Distribution of the true "lottery tickets" will get more uneven.
I've noticed people using emdashes more in known non-AI text in what I assume is a smokescreen to maintain plausible deniability when they wholesale copy AI text.
It's so interesting to me that human writing is subtly changing to mirror AI writing.
I was always looking for them because I was the weird nerd pointing out proper em dash, en dash, and hyphen usage years and years ago.
It's really only devs / engineers I see doing this, probably in some quest to create an indistinguishable voice in the name of productivity or something.
MCP makes a lot of sense for enterprise, IMO. It defines auth and interfaces in a way that's a natural extension of APIs.