1) commit messages often capture the "why" behind a change - versus the code/tests, which focus on the what/how as of right now.
2) when you have a regression, being able to see the code as it was before the regression was introduced, alongside the code that changed at the same time, is very helpful for understanding the developer's intent, blind spots in their approach, etc.
well i also think there's another bottleneck.. knowing how to structure your conversations with the LLM so you don't muddle things up.
when i work with these models, I try to work the same way I would in real life, and I tend to notice these tasks take me a while to finish because now I have to read each line to be sure it wrote things the way I would.
Ok so in a situation like regular orchestration, you would essentially lay out all possible steps the LLM can take in a big orchestration layer in your code, and if it hits the sensitive endpoint, any orchestration that can occur past that point blocks off web search. In the design, that is. But for something like a Manus-style agent, where you're outsourcing all the work but allowing it to hit your MCP, the MCP just becomes a regular API the LLM can call.
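the gating idea above could be sketched roughly like this - a minimal, hypothetical example where the orchestration layer tracks whether a sensitive endpoint has been touched and revokes the web-search tool afterward (all names here, `ToolGate`, `SENSITIVE_ENDPOINTS`, the tool names, are made up for illustration, not any real agent framework's API):

```python
# Hypothetical sketch: once the agent touches a "sensitive" endpoint,
# the orchestration layer stops offering the web_search tool, so later
# steps can't exfiltrate what was just read.

SENSITIVE_ENDPOINTS = {"/customers/pii", "/billing/export"}  # illustrative paths


class ToolGate:
    def __init__(self):
        self.tainted = False  # flips once any sensitive endpoint is hit

    def record_call(self, endpoint: str) -> None:
        # called by the orchestration layer on every endpoint the LLM invokes
        if endpoint in SENSITIVE_ENDPOINTS:
            self.tainted = True

    def allowed_tools(self) -> set:
        # the tool list handed to the LLM on the next step
        tools = {"web_search", "read_file", "call_endpoint"}
        if self.tainted:
            tools.discard("web_search")  # block the outbound path after taint
        return tools


gate = ToolGate()
gate.record_call("/health")
print("web_search" in gate.allowed_tools())  # still allowed
gate.record_call("/customers/pii")
print("web_search" in gate.allowed_tools())  # now blocked
```

the contrast with the Manus-style setup is that this gate lives in *your* loop; if the agent loop is on someone else's side and your MCP is just an API it calls, there's no point past which you can withhold a tool.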
what does “exist” mean in this case.. what factors determine that a place exists? the building is there? people are talking about it on social media? they have an ad on Google pointing to the local address, etc.?
even humans don’t do this unless there’s a crazy bug forcing them to search from every possible angle.
that said, this sounds like a great and fun project to work on.