Hacker News: throwaway892238's comments

That's a lot of mays. One might imagine that before this stuff becomes the latest version of an internet standard, these theoretical qualifications might be proven out, to estimate its impact on the world at large. But it was useful to one massive corporation, so I guess that makes it good enough to supplant what came before for the whole web.


HTTP/2 and /3 were never about optimizing bandwidth, but latency.


Google did a great deal of research on the question using real-world telemetry before trying it in Chrome and proposing it as a standard to the IETF’s working group. And others including Microsoft and Facebook gave feedback; it wasn’t iterated on in a vacuum. The history is open and well documented and there are metrics that support it. See e.g. https://www.chromium.org/spdy/spdy-whitepaper/


Lol, wait, HTTP2 and HTTP1.1 both trounce HTTP3? Talk about burying the lede. Wasn't performance the whole point behind HTTP3?

This chart shows HTTP2 running at barely half the speed of HTTP1.1, and HTTP3 at half the speed of HTTP2. Jesus christ. If these get adopted across the whole web, the whole web's performance could get up to 75% slower. That's insane. There should be giant red flags on these protocols that say "warning: slows down the internet"


If the last decade of web protocol development seems backwards to you after reading one benchmark then why immediately assume it's insane and deserves a warning label instead of asking why your understanding doesn't match your expectations?

The benchmark is meant to compare how resource efficient the new backend for curl is by using localhost connectivity. Localhost connectivity sidesteps any real-world network considerations (such as throughput discovery, loss, latency, jitter, or buffering) to allow a direct measurement of how fast the backend alone is. You can't then assume those numbers extrapolate directly to the actual performance of the web, because you don't know how the additional things the newer protocols do affect performance once you add a real network. Ignoring that, you still have to consider notes like "Also, the HTTP/1.1 numbers are a bit unfair since they do run 50 TCP connections against the server." before making claims about HTTP2 being half the speed of HTTP1.1.


> Wasn't performance the whole point behind HTTP3?

Faster, more secure, and more reliable, yes. The numbers in this article look terrible, but real-world testing¹ shows that real-world HTTP/3 performance is quite good, even though implementations are relatively young.

"…we saw substantially higher throughput on HTTP/3 compared to HTTP/2. For example, we saw about 69% of HTTP/3 connections reach a throughput of 5 Mbps or more […] compared to only 56% of HTTP/2 connections. In practice, this means that the video streams will be of a higher visual quality, and/or have fewer stalls over HTTP/3."

¹https://pulse.internetsociety.org/blog/measuring-http-3-real...


With their other losses they'll need some impressive financing to get anything new off the ground. Good luck securing that in 2024. Even if they get it, they're gonna take another bath if they try to target sub-$25K with a crossover. That's ICE territory; hybrids and all-electric crossovers are already $10-15K more.


What are you even talking about? They have plenty of cash and no debt. They are also not losing money.

They have plenty of money to invest. And they have been working on the project for a while now.


Every developer needs to learn two lessons about great interfaces:

1. Layout, format [aka style], and interactivity are all hard and necessary

2. Interfaces are subtly complex

Number 2 is the reason that "simple" formatting markup, like Markdown, will never be good enough to make a great interface. You can make a poor interface with Markdown, and that poor interface is good enough for very simple use cases. But people quickly get annoyed by simple use cases. They create friction, both for the designer and the user. More useful functionality is desired, and you need something more complex to enable that.

I've written a lot of tech docs. Markdown is one of the worst methods I've found to write them. Not because its formatting isn't intuitive, easy to remember, and easy to use. But because the documents it creates are not conducive to the purpose of the document: making it easy to read and remember information.

As for more general interfaces, the easiest way to understand why you need more elaborate methods is HTML. When HTML was created by Tim Berners-Lee ~1990, he intended for format and layout to be independent and configurable. However, for years it lacked both formatting and layout options, and lots of people complained. It took six years for CSS to eventually become a standard, and before that there were lots of hacks and incompatibilities (and, it turned out, more of them afterwards). The entire time, people complained that any document editor could easily format and lay out exactly what the author wanted, yet web browsers persisted in being difficult and inferior. Tables were only standardized in HTML 3.2, seven years after the Web's birth.

You can still get by with not-very-good markdown, for not-very-good interfaces. But you will never have a Great interface without the ability to control layout, format, and interactivity, in subtly complex ways.

There is no easy way for a human to do that with text. But there is an invention that enabled users to create complex and rich layout, formatting, and interactivity: the WYSIWYG interface. With this tool, the human no longer needs to remember control characters, markup syntax, or formatting codes. The user no longer needs to spend endless time typing in code, displaying the result, being unsatisfied, and tweaking the code again, trying desperately to just get the screen to show them what they know they want. A WYSIWYG editor enables a human to completely discard any knowledge of how to format or layout content, and instead simply move things around on their screen with a mouse, completely eliminating complexity for the human, and enabling the creation of better content.

I think the reason modern software abandoned WYSIWYG is that modern software isn't created for users. Most modern software is open-source (or close to it), and as such, it's created by developers, for developers. Developers don't want to spend time engineering a WYSIWYG, because they have no need for a good interface, and they don't care about wasting their time writing code. To them, writing code is the whole point, regardless of how long it takes, or what the outcome is. That's why Markdown was invented, and why people persist in trying to force Markdown to make something better, without realizing that that's impossible. They want to build a castle, but they don't want to use anything other than the twigs and rocks lying about their feet.


This one weird trick will fix your memory leaks forever


The US, among other industrializing nations at the turn of the 19th century, had a pretty poor record for reliability, safety, etc. Over the years, local groups gathered together and lobbied for changes, usually to protect or defend a particular trade. Towards the late 19th century, more organizations were popping up to push for professional standards, licensing requirements, codes of ethics, public safety and social responsibility. For the better part of two centuries, this country (and others) have been very slowly yet incrementally improving how engineering is done.

Developing nations like India need their own time to move through similar processes. There still need to be people lobbying, doing the work of organizing, and changing the way things work. But it's going to take decades, and maybe even centuries, like it did for other nations.


I think this paper is well-intentioned, but is trying to treat the symptoms rather than the cause.

The paper focuses on microservices, and then tries to avoid claims of "they just don't like microservices" by describing the ways in which microservices are improperly used. Do they go back and compare this to monoliths or other architectures? Nope; it's really just "hey I have another microservices idea", heavily gilded. They mention "monolithic applications divided into logically distinct components", but you could just claim your microservices are divided into logically distinct components.

They also seem to completely ignore the problem that a logical separation doesn't mean your components are better off. In a complex system, often completely separate components still need to be integrated together in order for the system to function at all, much less operate efficiently. It's not a design flaw to combine different things. It depends on the application. So just separating things logically isn't some scientific computing advancement, it's just categorization.

In reality, their solution (a "single binary business logic application" and "an interface that can combine them") is literally a description of shell scripting with Unix tools. Don't get me wrong, that obviously works great, since it's been popular for 44 years (older than IPv4). But if you want to come up with some kind of modern paradigm for distributed computing, maybe we should flesh it out a bit more. What we have here is a Google engineer's attempt to make a paper suggesting we make shell scripting for the web, without much to show for it.

(Personally, I think the more people try to control the interface, the worse things get. The best and most long-lived solutions in all of computing have had almost no interface at all; a raw TCP stream, 3 raw file descriptors, a set of random arguments, and a set of random key=value pairs, have enabled all modern computing paradigms to flourish)
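A pipeline is the simplest instance of what the paper describes: small single-purpose programs combined through the one interface they all share, a byte stream. A minimal sketch (the sample text and choice of tools are mine, not the paper's):

```shell
# Five independent "business logic" programs composed only through
# pipes; none knows anything about the others' internals.
# Finds the most frequent word in some sample text.
printf 'the cat sat on the mat the end\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -n 1
# -> "3 the" (with uniq's leading padding)
```

Each stage can be swapped out independently, which is exactly the property the paper wants from its "combinable" components.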


Cute trick, but it's not actually what the title claims.

Since this is actually env calling bash first, not docker, this should just be a Bash script. You can still feed the Dockerfile to docker build via STDIN. But you'd gain the ability to shellcheck the Bash, the code would be easier to read, write, maintain, add comments to, etc. You could keep the filename the same, run it the same way, etc. The way they've done it here is just unnecessarily difficult.


> You can still feed the Dockerfile to docker build via STDIN.

But you'd then have to work out how to "filter out" the bash commands inside this bash script to make it a valid Dockerfile.

Unless, of course, you store the Dockerfile contents entirely inside heredocs. That works fine, but it's not as "cool" as "executing" Dockerfiles as a script.
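For reference, the heredoc version might look something like this (the file name, image tag, and argument guard are made up for illustration; the guard just lets the script be linted or sourced without a docker daemon):

```shell
#!/usr/bin/env bash
# Hypothetical "build.sh": plain bash that feeds an inline Dockerfile
# to `docker build` over STDIN. shellcheck can lint this file, and the
# Dockerfile text stays verbatim inside a quoted heredoc.
set -eu

dockerfile() {
  # Quoting the delimiter ('EOF') stops bash from expanding anything
  # inside, so the Dockerfile reaches docker exactly as written.
  cat <<'EOF'
FROM alpine:3.19
RUN echo "hello from the build"
EOF
}

# `-` makes docker build read the Dockerfile from STDIN. Guarded behind
# an argument so the script can be sourced or linted without docker.
if [ "${1:-}" = "build" ]; then
  dockerfile | docker build -t myimage -
fi
```

You get normal bash above and below the heredoc, and the Dockerfile itself is untouched by the shell.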


You can say it is wrong without being insufferably condescending


I mean, it's electromagnetic waves. The right wavelength will bounce off things and increase the likelihood of receiving a coherent signal. Works for a lot of things: RF, sound, light. RF has low energy and huge wavelengths. IIRC keyfobs are in the 200-600MHz range.

Ha, found some (kind of) evidence:

Radiowave Effects on Humans - March 28, 1980 / T. Neil Davis (https://www.gi.alaska.edu/alaska-science-forum/radiowave-eff...)

  One reason the question is unanswered is that the energy absorbed by a human from radio waves depends upon the relationship between the size of the human and the frequency of the radio waves. Just as a TV antenna of the right length and orientation picks up the best signal (the most energy) from a transmitted wave, so it is with a human being. It appears that the cranial cavity of a mammal will resonate at specific radio frequencies determined by the size of the brain cavity. At these resonant frequencies the human head will absorb vastly more radiowave energy than it will at other nearby frequencies.
  
  An adult's head will resonate at a frequency between 350 and 400 MHz (megahertz). Being smaller, a child's head will resonate at a higher frequency, somewhere between 600 and 850 MHz. Since each individual may have his or her own resonant frequency, a particular frequency radiowave might affect one person more than another. Consequently, testing on humans--even if people are willing to let this happen--can be rather complicated.
Basically the human head is a resonance chamber that probably amplifies the signal. But also your body is made of water, and RF bounces off metal and water. The capacitive coupling of skin probably adds an enhancement to the effect.
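For scale, the free-space wavelengths at the quoted resonances are under a metre (inside tissue the effective wavelength is much shorter, since water's permittivity is high; this is just the textbook λ = c/f):

```shell
# Free-space wavelength (lambda = c / f) at the resonance bands the
# article quotes: 350-400 MHz (adult head), 600-850 MHz (child's head).
awk 'BEGIN {
  c = 299792458                      # speed of light, m/s
  split("350 400 600 850", f, " ")   # MHz values from the article
  for (i = 1; i <= 4; i++)
    printf "%s MHz -> %.2f m\n", f[i], c / (f[i] * 1e6)
}'
# 350 MHz -> 0.86 m
# 400 MHz -> 0.75 m
# 600 MHz -> 0.50 m
# 850 MHz -> 0.35 m
```

So the quoted resonances sit where a quarter-wavelength is roughly head-sized, which is consistent with the antenna analogy in the article.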


> For the past year, I have been searching for a replacement for my Lenovo Ideapad. Its 8 GB RAM couldn't keep up with the bloated software I run these days.

I have the same laptop (using it right now). You can add another 8GB, even though the specs say you can't. (thanks, lenovo...) The big limitation seems to be CPU, despite having an i7-6500U in this one. Opening Google Maps makes the machine crawl, but most other sites don't have a problem. It's definitely not speedy but it is functional with about 100 tabs open.

The main reason I want to replace it is to use a USB-C docking station, have real graphics support ("hybrid" graphics is crap in Linux), and have more than 5 hours battery life. Otherwise it's kind of amazing how well it's working after 7 years.

Even though it's technically "heavy" for a modern laptop, it actually feels easier to pick up and carry around than lighter laptops. The corners are angled/sloped so that they're very easy to slip a hand around, and the case is sturdy when held in one hand even when open (thanks to the case being half-metal, half-plastic). Way easier to move around than my Macbook. A great example of how design makes a big difference to usability.


You can buy used Lenovo USB-C docks very cheaply but I've had some issues with those.

I also tried those big rectangular HP USB-C docks, but those things break so quickly they often arrive DOA.

My point is: it's a _great_ idea, but I haven't found a dock that works reliably yet.


I have a Dell D6000 USB-C dock that works decently with my MacBook. Monitor isn't detected at first, but I unplug and re-plug and it detects it. There's supposed to be Linux support for the USB DisplayLink chipset but it doesn't work for me.

