One of my most humbling experiences was entering a puzzle hunt at CMU in the mid 2000s with early undergrad friends. We completely flailed. I still remember watching other groups out the window running around the campus while it took us most of the day just to get out of our room on the first puzzle.
It was a great reminder for a bunch of hotshot underclassmen that there were so many skills we all had yet to develop.
Yeah, I think you hit the nail on the head with a good way to use it, though. I'm not on macOS, but KDE has a little tool called krunner[1] that lets you perform simple tasks from a small pop-up on your desktop. It would be cool if I could do slightly agentic things from there with a local model, like asking what the capital of Austria is, or what the current exchange rate between two currencies is.
iPad is around 10% of Apple's revenue. How many parents are going to give their kids their $1500 iPad Pro instead of just buying them a $300 low-end iPad?
Except they're still not accepting any feedback around AGENTS.md as a standard. You need to explicitly symlink CLAUDE.md to AGENTS.md in a workspace in order for Claude to work like every other agent when it comes to loading context.
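Until that changes, the workaround is a one-time symlink per workspace. A minimal sketch (the scratch directory and file contents are just for demonstration; in practice you'd run the `ln` line in your repo root):

```shell
# Demo in a scratch workspace; in a real project, run the ln command
# in the repo root where AGENTS.md already lives.
mkdir -p /tmp/agents-demo && cd /tmp/agents-demo
printf 'Project context for agents\n' > AGENTS.md

# Keep AGENTS.md as the single source of truth and point CLAUDE.md at it,
# so Claude Code loads the same context file every other agent reads.
ln -sf AGENTS.md CLAUDE.md

readlink CLAUDE.md   # -> AGENTS.md
```

Since it's a relative symlink, it survives moving or cloning the repo as long as both names stay in the same directory.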
I hacked together something similar to the concept they describe a few months ago (https://github.com/btucker/agentgit) and then ended up not actually finding it that useful and abandoning it.
I feel like the value would be in analyzing those rich traces with another agent to extract (failure) patterns and learnings, as part of a flywheel setup. As a human I would rarely if ever want to look at this -- I don't even have time to look at the final code itself!
> value would be in analyzing those rich traces with another agent to extract (failure) patterns and learnings
Claude Code supports hooks. This allows me to run an agent skill at the end of every agent execution to automatically determine whether there were any lessons worth learning from the last session. If there were, new agent skills are automatically created, or existing ones updated, as appropriate.
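For reference, a sketch of what that wiring can look like in Claude Code's settings. The lesson-extraction script here is hypothetical (that's the part you'd write yourself), and you should check the hooks docs for the exact event names your version supports:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "/path/to/extract-lessons.sh"
          }
        ]
      }
    ]
  }
}
```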
Yes, I've done the same. But the issue is that the agent tends to learn too many lessons, or to overfit those lessons to that single session. I think the benefit of a tool like this is that you can give that agent a wider view when formulating recommendations.
Completely agree. But I wonder how much of that is just accomplished with well-placed code comments that explain the why for future agent interactions, to prevent them from misunderstanding. I have something like this in my AGENTS.md.
There is no such command, according to the docs [0]. /s
I continue to find it painfully ironic that the Claude Code team is unable to leverage their deep expertise and unlimited token budget to automatically keep the docs even close to up to date. Either that, or they have decided accurate docs aren't important.
I've been working on https://github.com/btucker/selkie, which is a complete implementation of the Mermaid parser & renderer in Rust, as an experiment in what's possible with Claude Code. It's still rough around the edges, but I've been blown away by what's been possible. (I'm now using it as a test repo for https://github.com/btucker/midtown)
Last year here in Chicago my wife's bike was stolen overnight. It had an AirTag hidden in a bell on the handlebars. When we woke up and noticed it was missing, we traced it to a park not too far away. We ran over there and called the Chicago PD, who showed up in <10min. We gave them a description of the bike and showed them where Find My said it was. They went and retrieved it. Surprisingly happy ending, and I was impressed the Chicago PD were so helpful!
I haven't dug too deep, but from what I can tell it runs a bubblewrap sandbox inside a VM on the Mac using Apple's Virtualization.framework. It then uses Unix sockets to proxy network traffic via socat.
I disagree with labeling AI a cargo cult. Crypto fits the description, but the definition of a cargo cult has to imply some sort of ultimate end in which its followers' expectations are drastically reduced.
What AI feels like is the early days of the internet. We've seen the dot-com bubble, but we ultimately live in the internet age. There is no doubt that the post-AI-bubble world will be very much AI oriented.
This is very different from crypto, which isn't by any measure a technological leap, but rather a crowd frenzy aimed at self-enrichment via Ponzi mechanisms.