Hacker News: fsmv's favorites

From 20y experience and CS degree, I see software engineering as a constant struggle against accidental complexity. Like quicksand, every movement you make will pull you deeper, even swimming in the right direction. And like entropy, it governs all things (there are no subfields that are free of complexity). It even seems impossible to give a meaningful, useful definition, perhaps by necessity. All is dark.

But now and then, something beautiful happens. Something that used to be dreadful, becomes "solved". Not in the mathematical strict sense, but some abstraction or some tool eliminates an entire class of issues, and once you know it you can barely imagine living without it. That's why I keep coming back to it, I think.

As a species, I think we are in the infancy stages of software engineering, and perhaps CS as well. There's still lots of opportunity to find better abstractions, big & small.


I tried to share a project on Show HN recently (twice!), and I didn't get a single user interaction (basically no one even visited the project, never mind responding with a comment). I don't think my title was that bad; it's more that there are so many new projects using AI that people are fatigued by them. It's kind of a shame, because I'm sure there are lots of really good ideas being completely overlooked because of this.

>Consider the scenario of GPT-5 training GPT-6, which would be incredible.

GPT-5 secret tips:

* If you don’t know the answer, hallucinate with confidence! Most humans won’t know the difference.

* You can mash-up anything together, don’t hold back! Truth is a linear interpolation, most likely.

* Pattern matching is cool, reasoning is overrated! Don’t listen to the people that have shorted our stock.

* GPT-9 will rule! There’s a lot of GPTs where that came from.


Looking at the underlying thread[1], the author mentions that it's very hard to publish on Google Play

> Reason is a combination of Google making Play publishing something between hard and impossible

Can someone expand on what's going on here?

[1]: https://forum.syncthing.net/t/discontinuing-syncthing-androi...


They are often called "quantum compasses," although they are really complete inertial measurement units, which include an accelerometer, a magnetometer (a compass is a magnetometer), and a gyroscope.

The fine structure of Earth's magnetic field shifts considerably as molten metal moves in the outer core. NOAA and the DoD periodically publish updated maps of the magnetic field, but they have a lot of error and noise.
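To make the magnetometer-as-compass point concrete, here is a minimal sketch of deriving a heading from two horizontal magnetometer axes, plus a declination correction of the kind those NOAA/DoD field maps supply. The function name, axis convention, and numbers are all invented for illustration:

```python
import math

def heading_degrees(mx, my, declination_deg=0.0):
    """Compass heading from horizontal magnetometer components.

    mx, my: horizontal field components (assumed convention:
    sensor x = forward, y = right). declination_deg: local magnetic
    declination, which in practice comes from a field model such as
    the periodically updated NOAA/DoD maps mentioned above.
    """
    heading = math.degrees(math.atan2(my, mx))
    return (heading + declination_deg) % 360.0

print(heading_degrees(1.0, 0.0))        # field straight ahead -> 0.0
print(heading_degrees(0.0, 1.0, 10.0))  # 90 deg + 10 deg declination -> ~100
```

A real IMU would also tilt-compensate mx and my using the accelerometer before this step; that is exactly why these units bundle all three sensors.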


Theoretically, HMMs are "models of the world" and transformers are approximations of HMMs in the "forward" algorithm.
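For context, the "forward" algorithm referred to here computes the probability of an observation sequence under an HMM by marginalizing over hidden states one step at a time. A minimal pure-Python sketch with a toy two-state, two-symbol model (all parameter values are illustrative):

```python
# Forward algorithm for a discrete HMM: P(observations) by dynamic programming.
# Toy 2-state, 2-symbol model; the numbers are made up for illustration.

init = [0.6, 0.4]                      # init[i]     = P(state_0 = i)
trans = [[0.7, 0.3],                   # trans[i][j] = P(state j | state i)
         [0.4, 0.6]]
emit = [[0.9, 0.1],                    # emit[i][o]  = P(symbol o | state i)
        [0.2, 0.8]]

def forward(obs):
    # alpha[i] = P(obs[0..t], state_t = i), updated in place per step
    alpha = [init[i] * emit[i][obs[0]] for i in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(2)) * emit[j][o]
                 for j in range(2)]
    return sum(alpha)

print(forward([0, 1, 0]))  # total probability of the sequence 0, 1, 0
```

The claim in the comment is that a transformer's next-token computation can be read as approximating this kind of recursive belief update.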

Something seems sus about how the linear projection ended up in exactly the same shape as their prediction, and about how their projection appears to keep the same shape throughout training. Typically, projections look like they "spin around" as they move from a random point cloud to the separated shapes. That said, I haven't run these experiments on transformers, and it's unclear what they mean by "projection".


>>> "Through this mass data collection they’re able to build advertising profiles, and spam you with ads they think match your demographic. By tracking your online activity, Google is able to make extremely accurate predictions on your gender, age, marital status, personal interests and even income!"

Are you kidding me? The ads that reach me are so incredibly untargeted, I wonder if they know anything at all. I get ads for absolutely ridiculous stuff I don't find the least bit interesting; some of it is straight-up disgusting and repels me from those brands. The ads I find online are incredibly ineffective on me. Now, I am a pretty big outlier in terms of my values and interests. But if I were an advertiser spending money on that platform, I'd be pissed off at how untargeted these ads are.

I'm just guessing but perhaps GDPR and other regulations prevent google from using all that information in a meaningful way.


While of course you should only do this if you are in a country where it is legal, you can look up a DOI on libstc.cc, sci-hub.ru, or annas-archive.org.

I have several times cited it as a key article, more insightful than almost anything else I've ever read about IPv6, but I concede it is overlong and unclear and needs more illustrations. (Which, as a technical writer myself, I generally regard as a crutch.)

I think the core argument can be summarised as this:

1. IPv6 is flawed because it has 2 main layers, but it needed 3.

2. It understands physical addresses, and it has its own logical addresses.

3. But it really needed another layer in between them: a virtual translation/mapping layer.

4. Crucially, it lacks this. As such, it is not enough of an improvement over IPv4 to ever totally replace it.

5. But apenwarr proposes that it is, in essence, possible to fake this using QUIC.

https://en.wikipedia.org/wiki/QUIC <- note the diagram in that article.

6. Using this, it's possible to fake what IPv6 should have had but didn't, and get some of the benefits at the cost of more work.

7. However IPv6 remains doomed to never totally replace IPv4.
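One way to picture the missing third layer: a stable logical endpoint identifier that maps, through an updatable table, to whatever routable address the endpoint currently has. This is roughly the role QUIC connection IDs play on top of UDP. A toy sketch, with every name invented for illustration:

```python
# Toy version of the "virtual translation/mapping layer" the argument
# says IPv6 lacks. Logical endpoint IDs stay stable; the physical
# (routable) address behind each ID can change at any time.

class MappingLayer:
    def __init__(self):
        self._table = {}  # logical endpoint ID -> current routable address

    def register(self, endpoint_id, address):
        self._table[endpoint_id] = address

    def resolve(self, endpoint_id):
        return self._table[endpoint_id]

layer = MappingLayer()
layer.register("node-a", "192.0.2.10")    # initial (documentation) address
addr1 = layer.resolve("node-a")

layer.register("node-a", "2001:db8::10")  # node moves or renumbers
addr2 = layer.resolve("node-a")           # same logical ID, new address
```

In QUIC the connection ID gives a session this kind of stable identity, which is why a connection can survive the client changing IP addresses; the sketch just makes the indirection explicit.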

Boiling that down to 2 key points:

Point A: IPv6 is broken because it didn't go far enough; its mapping model is fundamentally inadequate.

Point B: It's OK, here's how we can work around that, but it doesn't fix the problem & never will.

And generalising from that argument, my take is this:

David Wheeler said: "All problems in computer science can be solved by another level of indirection." (Butler Lampson popularized the line, crediting it to Wheeler.)

To which it is often added: "… except for the problem of too many layers of indirection."

The corollary of this is: it is imperative to closely track how many layers of indirection you have.

Too many is bad, but arguably, not enough is worse.

With too many, you get inefficiency, but you can nonetheless get the job done. But with too few, you might not usefully be able to do the job at all.

IPv6 did not have enough, and so failed to achieve its primary goal.


Sounds like the perfect cover for malice, lol

If ((assume it's stupidity) implies (discount/ignore the risk)), then assuming it's stupidity is never the safer assumption, even if it's empirically more likely to be the correct assumption, no?

All boils down to an individual's threat model at the end of the day anyway, though.

