I totally fell for the "obscene memory" trap myself. My first encounter with suffix trees outside of a textbook was an ITA Software 'Instant Search' puzzle. The requirement was sub-0.1ms search over a large string database, so I went straight for a generalized suffix tree. Then I realized they had asked for the solution to fit within a 1GB heap. :(
Author here. It’s a very hopeful time for interactive learning, and it's inspiring that, coincidentally, both OpenAI and Anthropic released improved visualization capabilities just this week.
Using Gemini's canvas for chord progressions is a great example of this. When I was building this suffix tree visualizer, I kept thinking about how much "spatial" intuition is required to understand the algorithm; having these live, interactive environments available to students is a massive step forward.
Hi HN,
I wrote this post after an experiment over the holidays. I had Claude write a cross-stack feature at work, touching our database, cloud infra, mobile app, and the embedded application running on our hardware devices. What would usually take me a week took an afternoon to generate.
But it still took weeks to test and merge.
The takeaway for me was that for teams operating with legacy debt, or teams where verification requires physical interaction (you can't throw prompt engineering at a hardware test bench), AI doesn't solve the bottleneck; it just shifts it. We are making code generation incredibly cheap, but the cost of verification and code review isn't shrinking, and the burden is falling on our most senior engineers.
I’d be curious to hear how others managing complex or non-standard codebases are adapting their CI/CD and review processes to keep up with the volume of AI-generated code.
I was asking because if you optimize for pitch response rate, you end up with better pitches but not necessarily good matches between candidates and companies.
I tend to agree with this point of view: whiteboard interviews fail to predict how you will write real code once you are hired; your actual code is definitely a better indicator.
Asking candidates for their GitHub usernames seems to be a common trend, in my experience.
I have seen a lot of articles on HN suggesting that someone somewhere is using GitHub for hiring, but I have not heard of anyone actually doing it.
Sounds like a smart idea to me. Since I started using Alfred on my Mac I'm able to control almost everything from the keyboard, and I'm faster at almost everything.
Being able to guess the user's intentions from context would be a great advantage, but the challenge, in my opinion, would be balancing context-dependent commands against more general ones.
I'd be curious to know how much of this applies to non-native English speakers.
I guess their native language can shape the way they build sentences and influence the way they communicate in different ways. Or maybe they just try to mirror native English speakers more closely.
I wrote up the full 'war story' of how I had to profile the heap and optimize the node representation (shaving bytes off the edge storage) just to get it to boot without an OOM error: https://www.abahgat.com/blog/the-programming-puzzle-that-got...
It’s the most tangible example I've run into of where theoretical O(n) space complexity meets the reality of object pointer overhead.
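That per-node overhead is easy to demonstrate in isolation. Here's a minimal sketch (hypothetical node layouts, not the actual code from the post) comparing a naive dict-backed node with a `__slots__` version, one of the standard byte-shaving tricks in CPython:

```python
import sys

class DictNode:
    """Naive suffix-tree node: every instance carries a full __dict__."""
    def __init__(self):
        self.children = {}  # edge label -> child node
        self.start = 0      # edge start index into the text
        self.end = 0        # edge end index

class SlotNode:
    """Compact node: __slots__ removes the per-instance __dict__."""
    __slots__ = ("children", "start", "end")

    def __init__(self):
        self.children = {}
        self.start = 0
        self.end = 0

def shallow_size(obj):
    """Size of the object itself plus its __dict__, if it has one."""
    size = sys.getsizeof(obj)
    if hasattr(obj, "__dict__"):
        size += sys.getsizeof(obj.__dict__)
    return size

print("dict-backed node:", shallow_size(DictNode()), "bytes")
print("slotted node:    ", shallow_size(SlotNode()), "bytes")
```

Multiply the difference by the millions of nodes a generalized suffix tree creates for a large corpus and the "O(n) space" of the textbook analysis turns into a very different number on a real heap.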