DennisL123's comments | Hacker News

Happy to have my mind changed, yet I am not 100% convinced closing the issue as completed captures the feedback.


From the contents of the issue, this seems like a fairly clear default effort issue. Would love your input if there's something specific that you think is unaddressed.


From this reply, it seems that it has nothing to do with `/effort`: https://github.com/anthropics/claude-code/issues/42796#issue...

I hope you take this seriously. I'm considering moving my company off of Claude Code immediately.

Closing the GH issue without first engaging with the OP is just a slap in the face, especially given how much hard work they've done on your behalf.


The OP "bug report" is a wall of AI slop generated from looking at its own chat transcripts.


Do you disagree with any of the data or conclusions?


I must admit, the fact that the writing was well formatted and structured was an instant turn off. I did find it insightful. I would have been more willing to read it if it was one lower case run on line with typos one would expect from a prepubescent child. I am both joking and being serious at the same time. What a world.


Yes


I'm open to hearing, please elaborate


It's only slop if it's wrong or irrelevant.


I commented on the GH issue, but I've had effort set to 'high' for however long it's been available, and I've seen a marked decline since... checks notes... about 23 March, according to Slack messages I sent to the team to see if I was alone (I wasn't).

EDIT: actually, the first glaring issue I remember was on 20 March, when it hallucinated a full SHA from a short SHA while updating my GitHub Actions version pinning. That follows a pattern of it making really egregious assumptions about things without first validating or checking. I've also had it answer with hallucinated information instead of looking online first (to a higher degree than I've been used to after using these models daily for the past ~6 months).


It hallucinated a GUID for me instead of using the one in the RFC for WebSockets. The fun part was that the beginning was the same. Then it hardcoded the unit tests to be green with the wrong GUID.


[flagged]


Well, I've never had the issue before, and I've hit that or similar issues every few days over the past couple of weeks.


Gotcha. From the replies on the GitHub ticket, though, it seemed that at least some of the problem was unrelated to effort settings.


If agents can already read and rewrite code, literate programming might actually be unnecessary. Instead of maintaining prose alongside code, you could let agents generate explanations on demand. The real requirement becomes writing code in a form that is easily interpretable and transformable by the next agent in the chain. In that model, code itself becomes the stable interface, while prose is just an ephemeral view generated whenever a human (or another agent) needs it.


I am already doing that. For performance, I am just caching the latest explanation alongside the code.


Who knows .. ;)


There is an issue on their GitHub relating to customisable contraction hierarchies, but it doesn't seem to have gotten much interest.

https://github.com/Project-OSRM/osrm-backend/issues/6574


Yes, the idea was to transition to a variant of MLD that lends itself to the offline use case and would use an updated modeling to be more space efficient while preserving performance. There exists an incomplete (and private) PoC written in Rust.


Agree, the Mercator projection isn't great. But it's pretty simple.

All of the mapping apps are rooted in paper maps. That’s what most people find accessible in a natural way.

So, in any 2D world view some projection must be chosen, and you can fundamentally choose between true angles or true sizes. Because of that choice, any projection is a distortion. Choosing true angles has advantages when it comes to turning projected data into something like turn instructions in your nav app. And then again, the Mercator projection is easy to use. So, bottom line, it's a mix of people being used to it and the simplicity of using it.
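For reference, the angle-preserving choice boils down to a two-line formula. A minimal sketch of the spherical Mercator projection (function name is mine, not from any particular library):

```python
import math

def mercator(lat_deg, lon_deg):
    """Spherical Mercator: conformal, so local angles (and hence bearings
    for things like turn instructions) are preserved, at the cost of area
    distortion that grows toward the poles."""
    x = math.radians(lon_deg)
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y
```

The equator maps to y = 0, and y diverges as the latitude approaches ±90°, which is exactly the area blow-up people complain about.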


Last minute I sold 200 €/$/£/¥ worth of product. At this rate, and because online sales are 24/7, I am hitting 200 × 60 × 24 × 365 = 105,120,000 ARR. Tada!
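For what it's worth, the joke's arithmetic checks out (a throwaway sketch with the joke's own numbers):

```python
# Hypothetical figures from the joke: 200 sold in one minute,
# naively annualized as if it recurred every minute of the year.
per_minute = 200
naive_arr = per_minute * 60 * 24 * 365  # minutes/hour * hours/day * days/year
```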


If these customers have set up an automated system to pay you $200 every single minute, that's correct. If they haven't and it was just a one-off sale, you are missing the "recurring" part of ARR.


In the US the price varies a lot by state [1].

[1] https://www.electricchoice.com/electricity-prices-by-state/


OSRM founder here. Yes, you are right, many of the speedup techniques are related. My personal opinion, though, is that the identification of important nodes is best captured by the ideas of applying partitioning to multi-level Dijkstra and by what's called hub labels. The latter has a close relationship to Contraction Hierarchies.


Hi! If I remember rightly, you can run contraction hierarchies but stop short of the full contraction, and use the core vertices as "hubs"?

Hope you don't mind but I took a little look at your posting history and saw this: https://news.ycombinator.com/item?id=41954120

I've been researching this lately, as we've recently implemented traffic patterns in our routing model and are just now working on live traffic updates. The easiest way to adapt our existing code looks like Customizable Contraction Hierarchies. There's a really nice review paper here: https://arxiv.org/abs/2502.10519 . The technique is to apply nested dissections to build a "metric-independent" hierarchy based purely on connectivity, which gives a decent quality of contraction regardless of transit times. Is that what you mean by decomposing the network into "cells"?


I was referring to variants of Customizable Route Planning. Easiest to implement is likely the partitioner of Sommer et al. and a unidirectional multi-level Dijkstra.


Ah, I guess you mean this paper then: https://www.microsoft.com/en-us/research/wp-content/uploads/...

There are many similarities between this approach and customisable contraction hierarchies. The latter allows a particularly elegant query-time algorithm involving only a couple of linear sweeps, I suspect even in the many-to-many scenario.


Yep, that’s the one. There are a number of follow-up papers that engineer individual aspects of the implementation.


Efficiency, not effectiveness. They are all effective in the sense that they produce sorted results. Even the non-modern sort algorithms are effective in the sense that the results are correct. This should be about the efficiency with which they do it, right?


From TFA:

The title is an homage to Eugene Wigner's 1960 paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".


Agreed. Effectiveness would imply that some algorithms are more likely to sort the list correctly than others, or that they sort a higher percentage of elements. Efficiency is about factors external to correctness.


"The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is one of those titles that gets imitated a lot for some reason. Maybe even more than "Goto Considered Harmful".


Coming next: “What we talk about when we talk about modern sort algorithms”


or "we need to talk about what we talk about when we talk about the unreasonable effectiveness of title memes are all you need considered harmful"


"Optimize your sorts with this one weird trick."

"What they don't want you to know about sorting."


Ascending is all you need.


A way to give radix sort a performance boost is to allocate an uninitialized block of memory of size n for each bucket. This exploits the MMU, and you don't have to worry about managing anything related to buckets potentially overwriting each other. The MMU does all the bookkeeping for you, and the OS actually commits memory only on page access.
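A rough sketch of the idea (my own toy code, not from the thread): each of the 2^bits buckets gets its own anonymous mapping big enough to hold all n elements, so no bucket can ever overflow, and the OS only backs the pages that are actually written.

```python
import mmap

def radix_sort_u32(a, bits=8):
    """LSD radix sort for 32-bit unsigned ints. Each bucket is a full
    n-element anonymous mmap: virtual memory is cheap, and the kernel
    commits physical pages lazily, only on first access."""
    mask = (1 << bits) - 1
    n = len(a)
    for shift in range(0, 32, bits):
        counts = [0] * (1 << bits)
        maps, views = [], []
        for _ in range(1 << bits):
            mm = mmap.mmap(-1, max(n, 1) * 4)     # uninitialized, lazily committed
            maps.append(mm)
            views.append(memoryview(mm).cast('I'))
        for x in a:                                # scatter pass, stable per digit
            d = (x >> shift) & mask
            views[d][counts[d]] = x
            counts[d] += 1
        a = [views[d][i] for d in range(1 << bits) for i in range(counts[d])]
        for v in views:
            v.release()
        for mm in maps:
            mm.close()
    return a
```

Python adds interpreter overhead that hides the win, of course; the point of the sketch is the bucket layout, which carries over directly to a C implementation.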


A winner tree uses extra space, doesn't it? That might rule it out as an alternative in certain applications. Four-ary heaps are roughly (YMMV) twice as fast as binary heaps because they exploit cache locality better for small key/value types. And it seems to be a sweet spot, since eight-ary heaps don't deliver additional improvements.
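The index arithmetic for the 4-ary case is a one-character change from binary. A minimal sketch (class name is mine):

```python
class QuadHeap:
    """Min-heap with branching factor 4: children of i are 4*i+1..4*i+4,
    parent of i is (i-1)//4. The shallower tree and the four children
    sitting in one cache-line-sized group are why 4-ary often beats
    binary for small key/value types."""
    def __init__(self):
        self.h = []

    def push(self, x):
        h = self.h
        h.append(x)
        i = len(h) - 1
        while i > 0 and h[(i - 1) >> 2] > h[i]:
            p = (i - 1) >> 2
            h[i], h[p] = h[p], h[i]
            i = p

    def pop(self):
        h = self.h
        top = h[0]
        last = h.pop()
        if h:
            h[0] = last
            i, n = 0, len(h)
            while True:
                first = 4 * i + 1
                if first >= n:
                    break
                # pick the smallest of up to four children
                c = min(range(first, min(first + 4, n)), key=h.__getitem__)
                if h[c] >= h[i]:
                    break
                h[i], h[c] = h[c], h[i]
                i = c
        return top
```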


Yes, since the inner nodes duplicate information, it uses more space (roughly twice as much). I've found them generally most effective for things like merging 256 streams or doing Dijkstra with not super-many nodes (e.g. Contraction Hierarchies). If you have millions or billions of entries, then cache considerations start becoming more important than branch mispredictions and you want something like a B-heap.
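To illustrate the space overhead: the leaves hold the current head of each stream, and every inner node caches the index of the winning leaf, which is where the "roughly twice as much" comes from. A toy winner tree for k-way merging (my own sketch, not OSRM code; a loser tree would avoid re-reading both children on replay):

```python
import math

def merge_streams(streams):
    """k-way merge of sorted iterables via a winner tree."""
    its = [iter(s) for s in streams]
    k = len(its)
    INF = math.inf
    leaves = [next(it, INF) for it in its]
    size = 1
    while size < k:
        size *= 2
    leaves += [INF] * (size - k)          # pad to a power of two
    # inner nodes tree[1..size-1] store the index of the winning leaf;
    # leaf j corresponds to tree node size + j
    tree = [0] * (2 * size)
    for i in range(size, 2 * size):
        tree[i] = i - size
    for i in range(size - 1, 0, -1):
        a, b = tree[2 * i], tree[2 * i + 1]
        tree[i] = a if leaves[a] <= leaves[b] else b
    out = []
    while leaves[tree[1]] != INF:
        w = tree[1]                        # overall winner
        out.append(leaves[w])
        leaves[w] = next(its[w], INF)      # refill that leaf
        i = (size + w) >> 1                # replay matches up to the root
        while i >= 1:
            a, b = tree[2 * i], tree[2 * i + 1]
            tree[i] = a if leaves[a] <= leaves[b] else b
            i >>= 1
    return out
```

Each pop touches only one leaf-to-root path (log₂ k comparisons), which is what makes this attractive for merging many streams.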


That's a nice niche you found (spoken as one heap fan to another), but I have to say I strongly disagree with your use of *roughly* twice as much.

At best you were off by one, but in the context of performance, you'd want to assign that extra slot to a super-root of ±inf in order to save log2(n) extra range checks per heap-up, no?


I don't think I understand what you are saying. Anyway, if you are strapped for space, do not use a winner tree.


Ah, I meant that for a classic heap, it's convenient to assign h[0] the limit of your goal function (e.g. -inf for a min-heap), because that way you can skip the while i>0 check and just do while h[i>>1] > min(h[i], h[i+1]), asserting that the condition is definitely false when i < 2.

I guess that it's not as important when you're explicitly willing to pay the piper and run the full log2n iterations just for the sake of branch predictability.

Thanks for the algo though; before today I never would've thought about doing a branchless downheap with i = (i<<1) + (h[i<<1] != h[i])

