Hacker News | DenisM's comments

>to a language which has already surpassed its complexity budget

I've been thinking that way for many years now, but clearly I've been wrong. Perhaps C++ is the one language to which the issue of excess complexity does not apply.


In essence, a standards committee thinks like a bureaucracy. It has little to no incentive to get rid of cruft, and only piling on new stuff is rewarded.

In D, we are implementing editions so features that didn't prove effective can be removed.

I don't know what you mean by effective - I can come up with several different/conflicting definitions in this context.

I think what you meant to say is popular. If a feature is popular, it doesn't matter how bad it turns out in hindsight: you can't remove it without breaking too much code (you can slowly deprecate it over time; I'm not sure how D handles deprecation, so perhaps that is what editions give you). However, if a great feature turns out not to be used, you can remove it (presumably to replace it with a better version that you hope people will use this time, possibly reusing the old syntax in a slightly incompatible way).


I am sadly not in the position to use D at work, but I appreciate your work!

Yeah dude but you've really marketed D poorly. I remember looking at D what must be 15 years back or so? And I loved the language and was blown away by its beauty and cool features. But having no FOSS compiler and the looming threat of someone claiming a patent (back then it was unclear that Mono/C# was "legal" and even Java hung in the balance) was too scary for me to touch it.

Now I'm old and I believe D has missed its opportunity. Kinda sad.


D is 100% open source. The gnu D compiler and the LLVM D compiler were always 100% open source.

I don't recall anyone making a patent claim.

Open source and free software aren't the same thing. Nobody made a claim on Java either, until someone did. I just distinctly remember explicitly not exploring D for that reason. Also, this was way before LLVM, and I don't think GNU had a D compiler back then. There was only the (and I really believe it was closed source) Digital Mars compiler.

15 years ago, both LLVM and GNU had a D compiler. gdc (the GNU compiler) was not an official part of the gcc collection, but it was definitely there and 100% open source.

All three compilers shared the open source D front end. The DMD backend source code was available for anyone to use, it just couldn't be redistributed. We were eventually able to fully Boost license it.

The DMD compiler always had source available for free from Digital Mars. I never sold a single copy :-)


The scheme folks managed to shed complexity between R6RS and R7RS, I believe.

So perhaps the issue is not committees per se, but how the committees are put together and what the driving values are.


Notably they didn't fully shed it, they compartmentalized it. They proposed to split the standard into two parts: r7rs-small, the more minimal subset closer in spirit to r5rs and missing a lot of stuff from r6rs, and r7rs-large, which would contain all of r6rs plus everyone's wildest feature dreams as well as the kitchen sink.

It worked remarkably well. r7rs-small was done in 2013 and is enjoyed by many. The large variant is still not done and may never be done. That's no problem though, the important point was that it created a place to point people with ideas to instead of outright telling them "no".


In the good old days Netflix had "Dynamic HTML" code that would take a DOM element which scrolled out of the viewport and move it to the position where it was about to be scrolled in from the other end. Hence the number of DOM elements stayed constant no matter how far you scroll, and the only thing that grows is the Y coordinate.

They did it because a lot of devices running Netflix (TVs, DVD players, etc.) were underpowered and Netflix was not keen on writing separate applications. They did, however, invest in a browser engine that had HW acceleration not just for video playback but also for moving DOM elements. Basically, sprites.
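A minimal sketch of the windowing math behind that kind of node recycling (all names and numbers here are illustrative, not Netflix's actual code):

```typescript
// Given a scroll position, compute which rows should currently own the
// fixed pool of DOM nodes. Rows outside [start, end) don't get a node;
// nodes scrolled out are repositioned and reused, not recreated.
function visibleRange(
  scrollTop: number,  // current scroll offset in px
  viewportH: number,  // height of the visible area in px
  itemH: number,      // fixed height of each row in px
  total: number       // total number of rows in the data set
): { start: number; end: number } {
  const start = Math.floor(scrollTop / itemH);
  const end = Math.min(total, Math.ceil((scrollTop + viewportH) / itemH));
  return { start, end };
}
```

Each recycled node is then absolutely positioned at `index * itemH`, which is why the only thing that grows with scroll distance is the Y coordinate.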

The lost art of writing efficient code...


> Hence the number of DOM elements stayed constant no matter how far you scroll and the only thing that grows is the Y coordinate.

This is generally called virtual scrolling, and it is not only an option in many common table libraries, but there are plenty of standalone implementations and other libraries (lists and things) that offer it. The technique certainly didn't originate with Netflix.


Yes, tables and lists, since they have a fixed height per item/row. Chat messages don't have a fixed height, so it's more difficult. And by more difficult I mean that every single virtual paging library that I've looked at in the past would not work.

But they do have constant height in the sense that, unless you resize the window horizontally, the height doesn’t change.

For what it’s worth, modern browsers can render absurdly large plain HTML+CSS documents fairly well except perhaps for a slow initial load as long as the contents are boring enough. Chat messages are pretty boring.

I have a diagnostic webpage that is a few million lines long. I could get fancy and optimize it, but it more or less just works, even on mobile.


Exactly, browsers can render it fast. It's likely a re-rendering issue in React. So the real solution is just preventing the messages from getting rendered too often instead of some sort of virtual paging.

Dynamic height of virtual scrolling elements is a thing. You just need to recalculate the scrollable height on the fly. tanstack's does it, as do some of the nicer grid libraries.
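A rough sketch of how the dynamic-height case can work, assuming measured row heights are available (illustrative names, not tanstack's actual API):

```typescript
// Prefix sums over measured row heights give each row's top offset,
// and the last entry is the total scrollable height to set on the spacer.
function buildOffsets(heights: number[]): number[] {
  const offsets = [0];
  for (const h of heights) offsets.push(offsets[offsets.length - 1] + h);
  return offsets; // offsets[i] = top of row i; final entry = total height
}

// Binary search for the first row whose bottom edge is below scrollTop,
// i.e. the first row still (partially) visible in the viewport.
function firstVisible(offsets: number[], scrollTop: number): number {
  let lo = 0;
  let hi = offsets.length - 2; // last valid row index
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (offsets[mid + 1] <= scrollTop) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}
```

When a row is re-measured (say a chat message reflows), you rebuild or patch the prefix sums and the scrollable height updates accordingly, which is the "recalculate on the fly" part.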

To be fair I haven't looked at any solutions in about a decade lol

It's been about three years, but infinite scroll is nuanced depending on the content that needs to be displayed. It's a tough nut to crack and can require a lot of maintenance to keep stable.

None of which chatgpt can handle presumably.


And yet ChatGPT does not use it.

GP was mentioning that a solution to the problem exists, not that Netflix specifically invented it. Your quip that the technique is not specific to Netflix bolsters the argument that OpenAI should code that in.


I'm ignorant of the tech here. But I have noticed that ctrl-F search doesn't work for me on these longer chats. Which is what made me think they were doing something like virtual scrolling. I can't understand how the UI can get so slow if a bunch of the page is being swapped out.

Ctrl-A for select all doesn't work either. I actually wondered how they broke that.

They didn't actually name the solution: the solution is virtualization.

They described Netflix's implementation, but if someone actually wanted to follow up on this (even for their own personal interest), Dynamic HTML would not get you there, while virtualization would across all the places it's used: mobile, desktop, web, etc.


This is how every scrolling list has been implemented since the 80s. We actually lost knowledge about how to build UI in the move to web

The biggest issue is that there is no native component support for that. So everyone implements their own and it is both brittle and introduces some issues like:

- "ctrl + f" search stops working as expected
- the scrollbar has wrong dimensions
- sometimes the content might jump (a common web issue overall)

The reason why we lost it is because web supports wildly different types of layouts, so it is really hard to optimize the same way it is possible in native apps (they are much less flexible overall).


Right. This is one of my favorite examples of how badly bloated the web is, and how full of stupid decisions. Virtual scrolling means you're maintaining a window into content, not actually showing the full content. Web browsers are perfectly fine showing tens of thousands of lines of text, or rows in a table, so if you need virtual scrolling for less, something already went badly wrong, and the product is likely to be a toy, not a tool (working definition: can it handle a realistic amount of data people would use for productive work - i.e. 10k rows, not 10 rows).

Agreed - I've had this argument with people who've implemented virtual scroll on technical tools and now users can't Ctrl-F around, or get a real sense of where they are in the data. Want to count a particular string? Or eyeball as you scroll to get a feel for the shape of it?

More generally, it's one of the interesting things about working in a non-big-tech company with non-public-facing software. So much of the received wisdom and culture in our field comes from places with incredible engineering talent but working at totally different scales with different constraints and requirements. Some of the time the practices, tools, and approaches advocated by big tech apply generally, and sometimes they do things a particular way because it's the least bad option given their constraints (which are not the same as our constraints).

There are good reasons why Amazon doesn't return a 10,000 row table when you search for a mobile phone case, but for [data ]scientists|analysts etc many of those reasons no longer apply, and the best UX might just be the massive table/grid of data.

Not sure what the answer is, other than keep talking to your users and watching them using your tools :)


Desktop GUI toolkits aren't less flexible on layout, they're often more flexible.

We lost it because the web was never designed for applications and the support it gives you for building GUIs is extremely basic beyond styling, verging on more primitive than Windows 3.1 - there are virtually no widgets, and the widgets that do exist have almost no features. So everyone rolls their own and it's really hard to do that well. In fact that's one of the big reasons everyone wrote apps for Windows back in the day despite the lockin, the value of the built-in widget toolkit was just that high. It's why web apps so often feel flaky and half baked compared to how desktop apps tend(ed) to feel - the widgets just don't get the investment that a shared GUI platform allows.


Laser turrets near highest-value targets?

It becomes defense in depth though; perimeter defense is no longer enough. That's kinda new.


A laser takes as much as 10s to disable a single plastic drone at a distance of ~400 meters. The slowest drone flies at 20 m/s.

So realistically a laser turret can eliminate just a couple of drones before a third or a fourth one comes through and destroys it.
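The arithmetic behind that estimate, using the figures given above:

```typescript
// Back-of-envelope check of the claim, using the numbers from the comment.
const dwellTimeSec = 10;      // seconds of laser time to disable one drone
const droneSpeedMps = 20;     // m/s, the slowest drone
const engagementRangeM = 400; // meters at which the laser can start engaging

// A drone covers the engagement range in 400 / 20 = 20 seconds, so the
// turret fits at most two 10-second kills into that window before impact.
const windowSec = engagementRangeM / droneSpeedMps;
const dronesStopped = Math.floor(windowSec / dwellTimeSec); // 2
```

Faster drones, or several approaching at once, shrink that window further, which is the point about a third or fourth drone getting through.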


Create a separate Mac / Windows non-admin account just for coding? I'm sure there are parental control measures for either platform. As time goes on, you can update the deny list of websites.

Another thing that helps is recording your screen for the whole day. Once you start doing review in the evening it will create back-pressure on the monkey brain that jumps to distractions.

Yet another thing is to set up a separate computer. You can browse crapnet as long as you want, but you have to walk to another desk. The back pressure is subtle but has a long-term effect and requires very little will power.


>Create a separate Mac / Windows non-admin account just for coding?

Yes, I got as far as creating a separate account on my MBP a few years ago and I do programming and open source stuff with that account. And it has helped quite a bit! Although it's not perfect (case in point, I am here on HN right now).


If X is against law Y the recourse is to seek judgment from courts. If it’s not against the law the recourse is to seek new law from Congress.

The difference is significant for that reason alone. The other reason is that if you’re looking to recruit supporters you will get more of them if you get your ducks in a row. Disorganized ducks impair credibility and create friction.

Not making the distinction between the two is only helpful for the purpose of blowing off steam and the only outcome is outrage fatigue.


> Your opinion.

A matter of opinion.

It’s also the law of this place, and that is not a matter of opinion.


> It’s also the law of this place

Guideline. Guidelines are not laws. Want to know what else is an HN guideline? "Please don't complain that a submission is inappropriate" right along with "Please don't post comments saying that HN is turning into Reddit".

And even if it were true that it's a law, it would still be entirely irrelevant to the discussion of whether it's dumb or not.

> and that is not a matter of opinion

"unless they're evidence of some interesting new phenomenon" is a matter of opinion. Evidence of, interesting, new, all matters of opinion.


The classic argument for step-up is that a farmer's son inheriting the farm suddenly owes a lot of taxes, which he can't pay without selling the farm.

Is this a real problem? And how do we fix that?


It is a real problem, but it's presented in a misleading way.

If you inherit a farm worth $1m and have to pay estate taxes of $100k on it, you do not have to sell the farm; you can instead take out a mortgage to cover that $100k.

When we frame it that way, it's still very fair to the person inheriting the farm, because who wouldn't want a $1m farm for $100k?
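To put rough numbers on the "small loan" framing (the interest rate and term here are assumptions for illustration, not from the comment):

```typescript
// Standard fixed-rate amortization: the monthly payment on a $100k
// mortgage taken out to cover the estate tax on a $1m farm.
// 6% APR over 15 years is an assumed, illustrative scenario.
function monthlyPayment(principal: number, annualRate: number, years: number): number {
  const r = annualRate / 12; // monthly interest rate
  const n = years * 12;      // total number of payments
  const growth = Math.pow(1 + r, n);
  return (principal * r * growth) / (growth - 1);
}

const payment = monthlyPayment(100_000, 0.06, 15); // roughly $840-850/month
```

Under those assumptions, keeping a $1m asset costs on the order of $850 a month, which is a very different picture from being forced to sell the farm.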


Except you don't pay any estate tax unless your estate is larger than $15 million.


In America maybe that's the case. I'm talking about situations where we are trying to implement an estate tax and the discussion is framed as basically forcing the receiver to sell the estate to pay the tax, instead of presenting it accurately as taking out a small loan to pay the tax.


You fix it by sidestepping it. Don't do it, even though it is the simplest/easiest to implement, and instead create 5 other laws that surround it and accomplish the same goal. Loop the hole back at them.


The key flaw in the argument is that high-quality interfaces do not spring to life by themselves. They are produced as an artifact of numerous iterations over both interfacing components. This does not invalidate the entire article, but the interface discovery phase has to become an integral part of disposable-systems engineering.


I agree with your comment, but in defense of the article's scope: as someone trying to build long-term utility and developer infrastructure, I find that vibe-coded prototypes help me dial in on interfaces. Instead of waiting to hear back only from downstream user experiences (because you previously didn't have time to experience the various consequences of your choices yourself), you can get the various experiences under your own belt and better tune the interfaces :)

Which is to say, the interface exploration comes part-and-parcel with the agent tooling in my experience

This comment is a little orthogonal to the content of the article, but my experience made what they wrote click with me


I found it a lot safer to cherry pick into a new temp branch, test things out, rename the old branch to an archive name, and rename the new temp branch to old name.

That is until I started using graphite, that solved the problem completely for me. The only trick is to never mix graphite and git history editing.


If you have not already, try Graphite. You will be delighted as it serves that exact purpose.


I use magit which afaik is still undefeated for this workflow. Particularly with this snippet to “pop down” individual ranges of changes from a commit: https://br0g.0brg.net/notes/2026-01-13T09:49:00-0500.html .

