5. TungstenTool -- Ant-only tmux virtual terminal giving Claude direct keystroke/screen-capture control. Singleton, blocked from async agents.
6. Magic Docs -- Ant-only auto-documentation. Files starting with "# MAGIC DOC:" are tracked and updated by a Sonnet sub-agent after each conversation turn.
7. Undercover Mode -- Prevents Anthropic employees from leaking internal info (codenames, model versions) into public repo commits. No force-OFF; dead-code-eliminated from external builds.
ANTI-COMPETITIVE & SECURITY DEFENSES
8. Anti-Distillation -- Injects anti_distillation: ['fake_tools'] into every 1P API request to poison model training from scraped traffic. Gated by tengu_anti_distill_fake_tool_injection.
UNRELEASED MODELS & CODENAMES
9. opus-4-7, sonnet-4-8 -- Confirmed as planned future versions (referenced in undercover mode instructions).
10. "Capybara" / "capy v8" -- Internal codename for the model behind Opus 4.6. Hex-encoded in the BUDDY system to avoid build canary detection.
11. "Fennec" -- Predecessor model alias. Migration: fennec-latest -> opus, fennec-fast-latest -> opus[1m] + fast mode.
UNDOCUMENTED BETA API HEADERS
12. afk-mode-2026-01-31 -- Sticky-latched when auto mode activates
15. fast-mode-2026-02-01 -- Opus 4.6 fast output
16. task-budgets-2026-03-13 -- Per-task token budgets
17. redact-thinking-2026-02-12 -- Thinking block redaction
18. token-efficient-tools-2026-03-28 -- JSON tool format (~4.5% token saving)
19. advisor-tool-2026-03-01 -- Advisor tool
20. cli-internal-2026-02-09 -- Ant-only internal features
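For context, beta features like these are normally opted into through the public `anthropic-beta` request header, whose value is a comma-separated list of feature flags. A minimal sketch of building that header (the helper name is mine; the flag strings come from the list above):

```javascript
// Build an `anthropic-beta` header value from a list of feature flags.
// The comma-separated header format is the documented public convention;
// the flags themselves are the undocumented ones listed above.
function betaHeader(flags) {
  return { 'anthropic-beta': flags.join(',') };
}

// Example:
// betaHeader(['token-efficient-tools-2026-03-28', 'fast-mode-2026-02-01'])
```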
YOLO CLASSIFIER INTERNALS (previously known only at a high level)
36. Two-stage system: Stage 1 at max_tokens=64 with "Err on the side of blocking"; Stage 2 at max_tokens=4096 with <thinking>
37. Three classifier modes: both (default), fast, thinking
38. Assistant text stripped from classifier input to prevent prompt injection
39. Denial limits: 3 consecutive or 20 total -> fallback to interactive prompting
40. Older classify_result tool schema variant still in codebase
COORDINATOR MODE & FORK SUBAGENT INTERNALS
41. Exact coordinator prompt: "Every message you send is to the user. Worker results are internal signals -- never thank or acknowledge them."
42. Anti-pattern enforcement: "Based on your findings, fix the auth bug" explicitly called out as wrong
43. Fork subagent cache sharing: Byte-identical API prefixes via placeholder "Fork started -- processing in background" tool results
44. <fork-boilerplate> tag prevents recursive forking
45. 10 non-negotiable rules for fork children including "commit before reporting"
DUAL MEMORY ARCHITECTURE
46. Session Memory -- Structured scratchpad for surviving compaction. 12K token cap, fixed sections, fires every 5K tokens + 3 tool calls.
47. Auto Memory -- Durable cross-session facts. Individual topic files with YAML frontmatter. 5-turn hard cap. Skips if main agent already wrote to memory.
48. Prompt cache scope "global" -- Cross-org caching for the static system prompt prefix
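The session-memory trigger in item 46 can be sketched as a simple predicate, reading the "5K tokens + 3 tool calls" as both conditions accumulating since the last write (this interpretation, and all names, are my guesses, not the real code):

```javascript
// Decide whether a session-memory update should fire, per the
// cadence described above: 5K new tokens AND 3 tool calls since
// the last memory write.
function shouldUpdateSessionMemory(tokensSinceWrite, toolCallsSinceWrite) {
  return tokensSinceWrite >= 5000 && toolCallsSinceWrite >= 3;
}
```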
> This is the most expensive failure mode with AI-assisted coding, and it’s not wrong syntax or bad logic. It’s implementations that work in isolation but break the surrounding system.
This is spot on. Zooming out, though, a perfectly written implementation that follows all the conventions but misses its business goal is just as expensive, if not more so. I think adding a brief.md artifact at the beginning of the flow (storing the problem, desired change, primary metric, and feature-kill criteria) can go a long way.
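A sketch of what such a brief.md could look like (the section names are my own suggestion, not a standard):

```markdown
# Brief: <feature name>

## Problem
What user or business problem are we solving?

## Desired change
The smallest observable change that counts as success.

## Primary metric
The one number this work is supposed to move.

## Feature-kill criteria
Conditions under which we abandon or roll back the feature.
```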
There's a great video from "Technology Connections" covering one-pedal driving here [1]. The gist of it is that one-pedal braking doesn't have a great way to control the brake lights, making it difficult for the cars behind you to recognize that you're slowing down, which can cause accidents.
This is only a problem in some EVs. Teslas, for example, illuminate the brake lights correctly when regenerative braking is used. The pinned comment on the video also suggests Hyundai may have fixed this on their cars by now, although I don't know whether they followed through.
IIRC the industry went from hard regen "may illuminate" -> "shall not illuminate" -> "must illuminate". Some cars designed and/or type-approved while the middle rule was regulatorily current still follow it.
Manual transmissions (and sport-shifting automatics) in most cars from 1904-2024, from sports coupes to trucks, don't illuminate the brake lights when engine braking. Drivers following other cars should already be basing most of their decisions on relative speed, given that these cars exist.
If you were to follow behind me in my 2004 manual sports car or a 2021 automatic and only applied brakes when I did, you'd be speeding in my neighborhood and rear ending me, as I keep both in second gear and it keeps me at 20mph.
Having no brake lights for regenerative braking sounds incredibly dangerous. Regenerative braking can slow the vehicle down quite quickly, and as the name "one-pedal driving" suggests, it's common to barely touch the brake pedal at all.
Perhaps a better version of your complaint is that Tesla has made the threshold for the brake lights too sensitive; but I would be curious how much observation went into drawing that conclusion.
I drive my manual and auto cars like this, downshifting instead of using the brakes and preventing dangerous acceleration. I often only use the brakes from 20 mph and below (though I may touch them during a downshift).
Thinking it's more dangerous is odd, as it's a great example of a fail-safe.
Wasn't his whole problem with one particular implementation of one-pedal driving, the Ioniq 5? As I recall, it was poorly implemented and had a relatively common use case that would allow you to stop without the brake lights ever coming on.
This isn't really a problem that other EVs have had. The brake lights come on when you slow down, just as if you had used the brake pedal. It's mostly as easy as triggering them once a certain amount of power (kW) is being regenerated.
> Wasn't his whole problem with one particular implementation of one-pedal driving, the Ioniq 5?
Yes. I watched that Tech Connections video a while back. In that implementation, his car would slow down too quickly without brake lights. I think it would get down to something like 15 mph from 55 before they'd come on, yet the deceleration was at least as rapid as casual braking.
I agree with the Tech Connections example; I have no comment on other EVs.
I can go from 75 to 5 mph in my ICE with the brake lights never coming on. Or, I can maintain 75 mph with my brake lights on 100% of the time.
But really, there should just be a standard - it seems obvious and easy for EVs to get it right: If deceleration is > X m/s^2, then brake lights are on.
The European Union has a regulation that requires EVs to illuminate their brake lights anytime the regenerative-braking system’s deceleration rate exceeds 1.3 meters per second squared, or about 0.13 g.
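The threshold rule proposed above can be sketched in a few lines, using the EU regen figure quoted in this comment (1.3 m/s², about 0.13 g). The function and constant names are mine, purely for illustration:

```javascript
// Decide whether the brake lights should be on, per the
// deceleration-threshold rule discussed above.
const REGEN_THRESHOLD_MPS2 = 1.3; // EU figure quoted above, ~0.13 g

function brakeLightsOn(pedalPressed, decelMps2) {
  // Pedal always wins; otherwise compare regen deceleration
  // against the threshold.
  return pedalPressed || decelMps2 > REGEN_THRESHOLD_MPS2;
}
```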
Simply because I've been driving EVs for a while, and "in the beginning," when there were no rules requiring brake lights to turn on during regenerative braking, I got into trouble (German highways) with cars behind me not being able to anticipate / react fast enough when I used it.
I quickly learned to force the brake lights on and started to follow this issue closely. That's how I became aware of the number.
How the EU settled on 1.3 and not 1.2 or 1.4 is beyond my knowledge.
Unless I downshift using paddles, my ICE will coast forever from 75. My PHEV, on the other hand, will slow down much faster. I want my PHEV to illuminate the brake lights if I'm decelerating above a certain rate, just as you suggest.
It never occurred to me that I might want my ICE to do the same, but I suppose it would be helpful if I'm heading downhill and downshift without touching the brake.
How will we then be able to tell which drivers are ignorantly riding their brakes all the way down a long hill, versus the drivers using engine braking?
>Brake lights.....will remain on when the vehicle fully stops.
Mmmm, nothing better than a dark, rainy evening in November in a city after a long day at work and you're stuck for minutes on end right behind a car with red lights burning into your retinas.
Wankers.
Thank you for putting the list together! If I may make a suggestion, what you wrote above would be an excellent first paragraph in the readme, as it explains the intent well and the criteria for how you add books to the list.
If you'd like to use something like this in your own APIs to let your clients filter requests or on the CLI (as is the intention with gron), consider giving "json-mask" a try (you'll need Node.js installed):
If you've ever used Google APIs' `fields=` query param you already know how to use json-mask; it's super simple:
`a,b,c` - comma-separated list will select multiple fields
`a/b/c` - path will select a field from its parent
`a(b,c)` - sub-selection will select many fields from a parent
`a/*/c` - the star `*` wildcard will select all items in a field
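To make the grammar concrete, here's a minimal sketch of the two simplest selector forms (comma lists and slash paths). This is my own toy re-implementation for illustration; the real json-mask library also handles `a(b,c)` sub-selection and `*` wildcards, so use it rather than this:

```javascript
// Toy subset of the json-mask selection grammar: supports
// comma-separated fields ('a,c') and slash paths ('b/c') only.
function maskSketch(obj, fields) {
  const out = {};
  for (const field of fields.split(',')) {
    const path = field.split('/');
    let src = obj;
    let dst = out;
    for (let i = 0; i < path.length; i++) {
      const key = path[i];
      if (src == null || !(key in src)) break; // missing field: skip
      if (i === path.length - 1) {
        dst[key] = src[key]; // leaf: copy the value
      } else {
        dst[key] = dst[key] || {}; // intermediate: descend
        dst = dst[key];
        src = src[key];
      }
    }
  }
  return out;
}
```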
> Start by running the migration helper on a current project. We’ve carefully minified and compressed a senior Vue dev into a simple command line interface. Whenever they recognize an obsolete feature, they’ll let you know, offer suggestions, and provide links to more info.
Vue's docs are a masterclass in docs-writing, imho. Clear. Concise. Empathetic. Written for humans.
I think the difference is that the browser already contains the shadow dom needed to render the video tag. But for a custom tag it would need to fetch, parse and execute your component code before it can begin to render.
That's an interesting read. I'm wondering whether the real question is: what is the declarative difference between the video tag, whose shadow DOM everyone is fine with hiding content behind, and the APIs we're developing for our custom elements? There seems to be an important piece of learning here, not just for SSR but for custom element development in general, that I think we can build off of.
I’m excited about this. There’s usually a ton of unnecessary CPU usage with the current techniques, not to mention the wasted human time of re-implementing them in JS. There’s feature detection for this too, so we can gracefully degrade “loading” to existing solutions and take advantage of it right away.
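Assuming the feature in question is declarative shadow DOM, a common detection pattern checks whether `HTMLTemplateElement.prototype` exposes the `shadowRootMode` property. A sketch, written so the check itself also runs outside a browser (the helper name is mine):

```javascript
// Feature-detect declarative shadow DOM support by checking the
// template element prototype for the `shadowRootMode` property.
function supportsDeclarativeShadowDOM(tmplProto) {
  return Object.prototype.hasOwnProperty.call(tmplProto, 'shadowRootMode');
}

// In a browser:
// supportsDeclarativeShadowDOM(HTMLTemplateElement.prototype)
```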