It should be possible to do this w/ eBPF. Monitor network I/O & rewrite requests on the fly to include the proper tokens & signatures. The agent can just be given placeholder tokens. That way all the usual libraries work as expected & the secrets/signatures are handled w/o worrying about another abstraction layer. Here is some prior art: https://riptides.io/blog/when-ebpf-isnt-enough-why-we-went-w...
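The substitution step itself is trivial; a minimal sketch in Python of just that rewrite logic, ignoring the eBPF plumbing entirely (`PLACEHOLDER`, `VAULT`, and the token values are all hypothetical names for illustration):

```python
# Sketch of the rewrite step only: the agent's library sends a request
# containing a placeholder token, and the interceptor substitutes the real
# secret before the bytes go out. A real eBPF version would have to hook
# the decrypted buffer (e.g. via uprobes on the TLS library) instead.
PLACEHOLDER = b"TOKEN_PLACEHOLDER"
VAULT = {PLACEHOLDER: b"sk-real-secret-value"}  # hypothetical secret store

def rewrite_request(raw: bytes) -> bytes:
    """Swap placeholder tokens for real credentials in an outgoing request."""
    for placeholder, secret in VAULT.items():
        raw = raw.replace(placeholder, secret)
    return raw

req = b"GET /v1/data HTTP/1.1\r\nAuthorization: Bearer TOKEN_PLACEHOLDER\r\n\r\n"
rewritten = rewrite_request(req)
```

Doing this post-TLS and keeping lengths & signatures consistent is where the in-kernel approach gets hard, which is presumably what the linked post is about.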
Wait until you folks learn about the quantum panopticon. It sounds fake but governments everywhere are recording as much encrypted data as possible in hopes of decrypting it in the future w/ quantum computers: https://link.springer.com/article/10.1007/s11023-025-09723-2
Off the top of my head, connections on fiber bundles (which define a notion of "parallel transport" of points in the total space, allowing you to "lift" curves from the base space to the total space) are more general than Riemannian metrics, so maybe there are some ML concepts that can be naturally represented by a connection on a principal bundle but not by a metric on a Riemannian manifold? At least this approach has been useful in gauge theory; there must be enough theoretical physicists working in ML that someone would have tried to apply fiber bundle concepts.
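For concreteness, the standard way "lifting curves" is made precise: given a principal $G$-bundle $\pi : P \to M$ with connection one-form $\omega$ (a $\mathfrak{g}$-valued one-form on $P$), the horizontal lift $\tilde{\gamma}$ of a base curve $\gamma$ is the curve satisfying

```latex
\pi \circ \tilde{\gamma} = \gamma,
\qquad
\omega\bigl(\dot{\tilde{\gamma}}(t)\bigr) = 0 \quad \text{for all } t,
```

i.e. the lift projects back down to $\gamma$ and its velocity is everywhere horizontal. Parallel transport is then the map between fibers obtained by following these lifts, and none of it requires a metric.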
Lie brackets are bilinear, so whatever you do per example automatically carries over to sums: the bracket for the batch is just the sum of the pairwise brackets for the elements in the batch, i.e. {a + b + c, d} = {a, d} + {b, d} + {c, d}. Similarly for the second argument.
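That's easy to sanity-check numerically with matrix commutators [X, Y] = XY − YX, a concrete example of a Lie bracket:

```python
import numpy as np

def bracket(x, y):
    """Matrix commutator -- a standard concrete Lie bracket."""
    return x @ y - y @ x

rng = np.random.default_rng(0)
a, b, c, d = (rng.standard_normal((3, 3)) for _ in range(4))

# Bilinearity in the first argument: [a + b + c, d] = [a, d] + [b, d] + [c, d]
lhs = bracket(a + b + c, d)
rhs = bracket(a, d) + bracket(b, d) + bracket(c, d)
assert np.allclose(lhs, rhs)
```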
Haven't read the post yet, but I think the general technique is a variation on spectral analysis: break the image up into spectral components & then define a relative similarity metric based on spectral statistics.
Edit: That's exactly what they do. Basic stuff if you know the fundamental techniques.
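A toy version of that pipeline: FFT the image, summarize the magnitude spectrum into a signature, compare signatures with cosine similarity. The radial-binning scheme here is an arbitrary choice, just to show the shape of the approach:

```python
import numpy as np

def spectral_signature(img: np.ndarray, bins: int = 16) -> np.ndarray:
    """Radially binned log-magnitude spectrum of a grayscale image.
    The binning is an arbitrary illustrative choice of 'spectral statistic'."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)  # distance from spectrum center
    edges = np.linspace(0, r.max() + 1e-9, bins + 1)
    sig = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        vals = spectrum[(r >= lo) & (r < hi)]
        sig.append(np.log1p(vals.mean()) if vals.size else 0.0)
    return np.asarray(sig)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two spectral signatures."""
    sa, sb = spectral_signature(a), spectral_signature(b)
    return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb)))

img = np.arange(1024, dtype=float).reshape(32, 32)
same = similarity(img, img)  # identical images score 1.0
```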
I recently wrote a simple interpreter for a stack-based virtual machine for a Firefox extension, to do some basic runtime programming b/c extensions can't generate & evaluate JavaScript at runtime. None of the consumer AIs could generate code of even moderate complexity for the stack VM, even though the language specification fit on a single page.
We don't have real AI, & no one is anywhere near anything that can consistently generate code of moderate complexity w/o bugs or accidental issues like deleting files during basic data processing (something I ran into recently while writing a local semantic search engine for some of my PDFs using open-source neural networks).
I am building an assembler+compiler+VM for a Python-like statically typed language with monomorphized generics and Erlang-style concurrency. Claude Sonnet/Kimi/Gemini Pro (and even ChatGPT on occasion) handle the task reasonably well because I give them specs for the VM that have been written and rewritten 200+ times to remove any ambiguities and make things as clear as possible.
I go subsystem by subsystem.
Writing the interpreter for a stack vm is as simple as it gets.
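True: the whole interpreter is one dispatch loop over an operand stack. A minimal sketch in Python (the instruction set here is hypothetical, not the poster's):

```python
def run(program):
    """Interpret a list of (opcode, arg) pairs on an operand stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "DUP":
            stack.append(stack[-1])
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# (2 + 3) * 4
result = run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
              ("PUSH", 4), ("MUL", None)])  # -> [20]
```

The hard part in a real VM is everything around this loop (calls, control flow, error handling), but the core really is this small.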
At which point you tell them they are being extremely reckless but subtly mention that something new & even scarier is being developed internally that's going to blow everything else out of the water.
We know it, we’re just susceptible to it. Like not eating for an extended time, you know you’ll get hungry and then you do. There’s a very basic but powerful response to danger and a need for safety.