Hacker News: slashdev's favorite comments

> It's much more comfortable to be the person that "could be X" than to be the person that tries to actually do it.

Brilliant insight.

Reminds me of this, from Theodore Roosevelt's Citizenship in a Republic:

> It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.

Good luck, and go get 'em.


Most software engineers are seriously sleeping on how good LLM agents are right now, especially something like Claude Code.

Once you’ve got Claude Code set up, you can point it at your codebase, have it learn your conventions, pull in best practices, and refine everything until it’s basically operating like a super-powered teammate. The real unlock is building a solid set of reusable “skills” plus a few agents for the stuff you do all the time.

For example, we have a custom UI library, and Claude Code has a skill that explains exactly how to use it. Same for how we write Storybooks, how we structure APIs, and basically how we want everything done in our repo. So when it generates code, it already matches our patterns and standards out of the box.

We also had Claude Code create a bunch of ESLint automation, including custom ESLint rules and lint checks that catch and auto-handle a lot of stuff before it even hits review.
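For a concrete flavor of what such a rule can look like, here is an illustrative sketch of a custom ESLint rule (the rule name, message, and the "prefer the shared Button" convention are all made up for this example, not taken from the repo above):

```javascript
// Hypothetical custom ESLint rule: nudge people toward an in-house UI
// library's <Button> component instead of raw <button> elements.
// In a real plugin this would be exported as rules["prefer-ui-button"].
const preferUiButton = {
  meta: {
    type: "suggestion",
    docs: { description: "Prefer the shared <Button> over raw <button>" },
    messages: {
      useUiButton: "Use <Button> from the UI library instead of a raw <button>.",
    },
    schema: [],
  },
  create(context) {
    return {
      // Fires for every JSX opening tag in the linted file.
      JSXOpeningElement(node) {
        if (node.name.type === "JSXIdentifier" && node.name.name === "button") {
          context.report({ node, messageId: "useUiButton" });
        }
      },
    };
  },
};
```

Rules like this are cheap to write once an agent knows your conventions, and they turn "please use our component" review comments into automated checks.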

Then we take it further: we have a deep code review agent Claude Code runs after changes are made. And when a PR goes up, we have another Claude Code agent that does a full PR review, following a detailed markdown checklist we’ve written for it.

On top of that, we’ve got like five other Claude Code GitHub workflow agents that run on a schedule. One of them reads all commits from the last month and makes sure docs are still aligned. Another checks for gaps in end-to-end coverage. Stuff like that. A ton of maintenance and quality work is just… automated. It runs ridiculously smoothly.

We even use Claude Code for ticket triage. It reads the ticket, digs into the codebase, and leaves a comment with what it thinks should be done. So when an engineer picks it up, they’re basically starting halfway through already.

There is so much low-hanging fruit here that it honestly blows my mind people aren’t all over it. 2026 is going to be a wake-up call.

(used voice to text then had claude reword, I am lazy and not gonna hand write it all for yall sorry!)

Edit: made an example repo for ya

https://github.com/ChrisWiles/claude-code-showcase


rr has always worked with Python in the sense that it can record and replay Python programs.

However, when you try to debug the program you can only debug the C code the Python interpreter is written in.

I suppose you want to be able to debug the Python code itself. Here is a project that could do this: https://pypy.org/posts/2016/07/reverse-debugging-for-python-... . I don't think the project is active nowadays, though. Also, I haven't used it, so I can't say whether it is good or not.

It should be possible to build a Python reverse debugger on top of rr. I know this should be possible because I built something similar for PHP: https://github.com/sidkshatriya/dontbug

There are other fancy (and possibly better) things that are possible -- instead of building a Python debugger atop rr, you can record the full trace of the Python program and then, e.g., store the values of important variables at each executed line in a database. This would again use rr as the record/replay substrate, but with a slightly different approach. This is an area in which I've done some work internally, but nothing has been publicly released yet :-) !


It depends hugely on how you decide to manage the connection objects. If you have a single-threaded / single-core server that only ever opens a single connection, then connection-open overhead is never a problem, even under infinite load.

The two main issues with opening a connection are:

1. There is a fixed O(database schema) cost spent building the connection state. Ideally SQLite could use a “zygote” connection that refreshes itself and gets cloned to create each new connection, instead of doing this work from scratch every time.

2. There is O(number of connections) time spent looking at a list of file descriptors in global state under a global lock. This one is REALLY BAD if you have >10,000 connections so it was a major motivator for us to do connection pooling at Notion. Ideally SQLite could use a hash table instead of a O(n) linear search for this, or disable it entirely.
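To make issue 2 concrete, here's a toy illustration (not SQLite's actual code) of why scanning a global list of open connections hurts at high connection counts, versus a hash-map lookup:

```javascript
// Toy sketch: tracking open connections by file descriptor.
// ListRegistry mimics an O(n) scan of global state; MapRegistry is O(1).
class ListRegistry {
  constructor() { this.entries = []; }
  open(fd, conn) { this.entries.push({ fd, conn }); }
  lookup(fd) {
    // Linear search: every lookup walks the whole list (conceptually,
    // under a global lock), so cost grows with the connection count.
    for (const entry of this.entries) {
      if (entry.fd === fd) return entry.conn;
    }
    return null;
  }
}

class MapRegistry {
  constructor() { this.entries = new Map(); }
  open(fd, conn) { this.entries.set(fd, conn); }
  lookup(fd) {
    // Hash lookup: cost stays flat even with tens of thousands of
    // connections.
    return this.entries.get(fd) ?? null;
  }
}
```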

Both of these issues are reasons I’m excited about Turso’s SQLite rewrite in Rust - it’s so easy to fix both of them in Rust (adopting a good hash table is ~2 LoC), whereas in the original C it’s much more involved to fix them safely and correctly in a fork.

Furthermore, it would be great to share more of the cache between connections as a kind of “L2 cache”; again tractable and safe to build in Rust but complicated to build in a fork of the original C.

Notion uses a SQLite-backed server for our “Database” product concept, which I helped write, and we ran into a lot of these kinds of issues scaling reads. We implemented connection pooling over the better-sqlite3 Node module to mitigate them. We also use Turso’s existing SQLite C fork “libsql” for some connections, since its Node driver offers a true async option backed by a thread pool under the hood. That helps when you have a bottleneck serializing or deserializing data between the “Node” layout and the “SQLite C” layout, or many concurrent writes to different DBs from a single NodeJS process.
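The pooling approach can be sketched as follows (a minimal toy, not Notion's actual implementation; `openConn` stands in for whatever creates a better-sqlite3 connection in practice):

```javascript
// Minimal connection-pool sketch (illustrative only). The per-open costs
// described above are paid `size` times at startup, not once per request.
class Pool {
  constructor(openConn, size) {
    this.idle = Array.from({ length: size }, () => openConn());
    this.waiters = [];
  }
  async acquire() {
    if (this.idle.length > 0) return this.idle.pop();
    // No idle connection: wait until someone releases one.
    return new Promise((resolve) => this.waiters.push(resolve));
  }
  release(conn) {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn);   // hand straight to a waiter
    else this.idle.push(conn);  // or park it for reuse
  }
}
```

Bounding the pool size also caps how many file descriptors SQLite's global state has to track, which sidesteps issue 2 above.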


A 98-lane PCIe 4.0 fabric switch costs $850 as just the chip (PEX88096), to solder onto a motherboard/backplane. You could, for example, take 2 x16 GPUs, pass them through (2×2×16 = 64 lanes), and have 2 x16 left over that bifurcate to at least x4 (might even be x2; I didn't find that part of the docs just now) for anything you want, plus 2 x1 for minor stuff. They claim to have no problems being connected up into a switching fabric, and very much allow multi-host operation (you will need signal retimers quite soon, though).

They're what enables cloud operators to pool, say, 30 GPUs across 10 CPU sockets while letting you virtually hot-plug them to fit demand, or to build a SAN with real NVMe-over-PCIe. They're far cheaper than normal networking switches with similar port counts: assuming hosts use just x4 bifurcation, a port is very comparable to a 50G Ethernet port, so the above chip matches a 24-port 50G Ethernet switch, trading reach for only needing retimers, not full NICs, in each connected host. That's easily better for HPC clusters up to about 200 kW built from dense compute nodes. Sadly, there are still no affordable COTS parts that don't require soldering or contacting sales for pricing (the only COTS parts with list prices seem to be Broadcom's reference designs, priced befitting an evaluation kit, not a Beowulf cluster).


Makes sense!

I was just talking with a Temporal solutions engineer this week and this metric is their recommended one for autoscaling on. Instead of autoscaling on queue depth, you scale on queue latency! Specifically for them they split up the time from enqueue to start, and then the time from start to done, and you scale on the former, not the total ("ScheduleToStart" in their terms).
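The idea can be sketched as follows. All names and thresholds here are made up for illustration; Temporal exposes ScheduleToStart as a metric, not an API like this:

```javascript
// Scale on queue *latency* (time from enqueue to start), not queue depth.
function scheduleToStartMs(task) {
  // How long the task sat in the queue before a worker picked it up.
  return task.startedAt - task.enqueuedAt;
}

function desiredReplicas(recentTasks, current, targetMs) {
  const latencies = recentTasks.map(scheduleToStartMs).sort((a, b) => a - b);
  const p95 = latencies[Math.floor(latencies.length * 0.95)] ?? 0;
  if (p95 > targetMs) return current + 1;                    // work waits too long: scale up
  if (p95 < targetMs / 4 && current > 1) return current - 1; // mostly idle: scale down
  return current;
}
```

The appeal over scaling on depth is that latency directly measures the thing you care about (how long work waits), regardless of how bursty arrivals are.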


There was recently an article about distributed systems that showed up here. (Harry Doyle: Christ, I can't find it. To hell with it!)

And the author made a very interesting point about message queues. Simply, any problem that could be resolved by a message queue could be resolved by load balancing or persistence, and, therefore, message queues were actually kind of a bad idea.

There were two basic issues.

The first is that, because of the nature of message queues, they're either empty or full. The second is that, for many of the ways queues are used, the unit putting the request on the queue in the first place may well be waiting for the response to that request. So you've just turned an everyday synchronous request into a more complicated, out-of-band, multi-party asynchronous request.

If your queues are not empty, then they are filling up. And queue capacity is a twofold problem. One is that you simply run out of space. But, more likely, referring to the earlier point about waiting for a response, is that you run out of time: the response does not return fast enough to meet your response window to the unit making the request.

This is a load balancing problem. If the queue is filling you simply don't have the capacity to handle the current traffic. It's also a mechanically simpler thing to send out a request and wait for the response than to do the dance via a message queue.

The second part is that if you're throwing items onto a message queue and you "don't care" about them -- that is, it's a true "fire and forget" kind of request and direct response time is not a concern -- then what does the queue gain you over simply posting it to a database table? If the request is Important, you certainly don't want to trust it to a message queue, a device not really designed for the storage of messages. Messages in a queue are kind of trapped in no man's land, where the easiest way to get to a message is to dig through the ones piled in front of it.

They're interesting insights and worth scratching your chin and going "Hmmm" over.


Would be interesting to get a list of those 40 papers mentioned

My favorite quote on the meaning of life, from Viktor Frankl's "Man's Search for Meaning":

> For the meaning of life differs from man to man, from day to day and from hour to hour. What matters, therefore, is not the meaning of life in general but rather the specific meaning of a person's life at a given moment. To put the question in general terms would be comparable to the question posed to a chess champion: "Tell me, Master, what is the best move in the world?" There simply is no such thing as the best or even a good move apart from a particular situation in a game and the particular personality of one's opponent. The same holds for human existence. One should not search for an abstract meaning of life. Everyone has his own specific vocation or mission in life to carry out a concrete assignment which demands fulfillment. Therein he cannot be replaced, nor can his life be repeated. Thus, everyone's task is as unique as is his specific opportunity to implement it.


A few books that I've found useful:

The Goal - Eli Goldratt - It is a novel about optimizing a factory, but it is immensely valuable in thinking through what are the actual constraints on your team's ability to deliver software and how do you fix it without making other things worse. The Phoenix Project is kind of a modern retelling, but I'd start with The Goal.

The Principles of Product Development Flow - Reinertsen - Great for thinking deeply about how you deliver value through your system and the tradeoffs you are making in what you choose to focus on next. Once again it isn't specific to software, but very relevant.

The Five Dysfunctions of a Team - Lencioni - Common team problems and what to do instead.

Influence: Science and Practice - Cialdini - How to get people into agreement so you can move forward--a skill that can be used for both good and evil.

The Little Schemer - Friedman and Felleisen - It is kind of hard to describe why this would be useful, but I felt it made me think more deeply about what it means to program a computer in ways that indirectly supported the topics you mentioned. Or maybe I just happened to work through it at a point in my life when I was growing in those other areas too.


The first idea on the list, “Create an auto-updating website from a Google Doc or Sheet”, can be done pretty easily already, without writing code.

(I’m the founder of a company, Makerpad[0], that teaches people no-code)

Glide[1] - lets you create apps from a google sheet. Set up the template connected to your data and as it updates, your app will too.

Stacker[2] - build web-apps without code. Supports Airtable and most recently, Sheets.

Softr[3] - similar for websites but using Airtable as the data source. I think Sheets support will come soon.

Sheets2Site [4] - website builder with google sheets.

For Google Docs, you can use parameters in the doc that change using something like Zapier (disclaimer: they acquired my company last year). I haven’t explored it the other way around, e.g. data in a Google Doc updating something elsewhere.

[0] https://www.makerpad.co/ [1] https://www.glideapps.com/ [2] https://www.stackerhq.com/ [3] https://www.softr.io/ [4] https://www.sheet2site.com/


Unrelated, but one of my favorite quotes from PKD is this one from Now Wait for Last Year:

---

"All right," Eric agreed. "If you were me, and your wife were sick, desperately so, with no hope of recovery, would you leave her? Or would you stay with her, even if you had traveled ten years into the future and knew for an absolute certainty that the damage to her brain could never be reversed? And staying with her would mean-"

"I can see what it would mean, sir," the cab broke in. "It would mean no other life for you beyond caring for her."

"That's right," Eric said.

"I'd stay with her," the cab decided.

"Why?"

"Because," the cab said, "life is composed of reality configurations so constituted. To abandon her would be to say, I can't endure reality as such. I have to have uniquely special easier conditions."

"I think I agree," Eric said after a time. "I think I will stay with her."

"God bless you, sir," the cab said. "I can see that you're a good man."


> This eliminates the need for an on-battery or plugged-in performance mode because it changes frequencies so fast, you're no longer losing performance by waiting for the software to speed up the cpu. So you can keep it on all the time.

Here's my /etc/rc.d/rc.local script:

    #!/bin/sh
    for policy in /sys/devices/system/cpu/cpufreq/policy*
    do
        echo "power" > "$policy"/energy_performance_preference
    done
Here are the available preferences:

    $ cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_available_preferences 
    default performance balance_performance balance_power power
Basically I can choose between "performance" and "power" (low performance) modes. I'm choosing power, so my fans are silent. And they're silent indeed, no matter the load.

Here's Christopher Hitchens in his 2001 book, Letters to a Young Contrarian:

>Beware of Identity politics. I'll rephrase that: have nothing to do with identity politics. I remember very well the first time I heard the saying "The Personal Is Political". It began as a sort of reaction to defeats and downturns that followed 1968: a consolation prize, as you might say, for people who had missed that year. I knew in my bones that a truly Bad Idea had entered the discourse. Nor was I wrong. People began to stand up at meetings and orate about how they 'felt', not about what or how they thought, and about who they were rather than what (if anything) they had done or stood for. It became the replication in even less interesting form of the narcissism of the small difference, because each identity group begat its sub-groups and "specificities". This tendency has often been satirised—the overweight caucus of the Cherokee transgender disabled lesbian faction demands a hearing on its needs—but never satirised enough. You have to have seen it really happen. From a way of being radical it very swiftly became a way of being reactionary; the Clarence Thomas hearings demonstrated this to all but the most dense and boring and selfish, but then, it was the dense and boring and selfish who had always seen identity politics as their big chance. Anyway, what you swiftly realise if you peek over the wall of your own immediate neighbourhood or environment, and travel beyond it, is, first, that we have a huge surplus of people who wouldn't change anything about the way they were born, or the group they were born into, but second that "humanity" (and the idea of change) is best represented by those who have the wit not to think, or should I say feel, in this way.


We run a fork of GoTrue, using the migrations in this PR:

https://github.com/netlify/gotrue/pull/254

tbh, our fork[1] has deviated a bit from Netlify's so we need to spend some time with them upstream'ing any changes that they would want to merge (perhaps magic links, Azure logins, OAuth scopes).

Long term, I think we will need to run 2 different forks because we have different requirements for multi-tenant. So the benefit here would be sharing "OAuth providers" (eg, if we add Okta, we upstream it, if they add Twitter logins, we pull it)

[1] https://github.com/supabase/gotrue


I cannot answer this question, as I moved abroad for work and my education. However, I do travel a whole lot more, especially during the pandemic with everything being remote.

Most importantly, you only live once: Travel while you still have your health! Travel becomes a lot harder with health problems.

Anyways, if you are an American, you may want to try Croatia, which offers a digital nomad visa for remote workers worldwide. Croatia lets third-country nationals (non-EU/EEA/Swiss citizens) in extremely easily. In fact, if you stay in Croatia for 14 days (with or without the digital nomad visa) you can travel throughout the EU, as long as you follow the pandemic rules. It is a "loophole" for Americans who want to get into the European Union for travel: Croatia is part of the EU but not of the Schengen ("freedom of movement") zone, so it is allowed to set its own, much easier entry rules without controversy within the EU.

If you get the digital nomad visa, you are also eligible for Croatian national health insurance (which Croatian/EU/EEA/Swiss citizens are likewise eligible for), which is quite a good deal. Worst case, Croatia costs about the same as the US, and usually less. The people are extremely friendly and accommodating, too.

Here is some info on the Croatian Digital Nomad Visa: https://www.total-croatia-news.com/news/digital-nomads-in-cr...

This is the website to consult for getting administrative stuff done in Croatia: http://expatincroatia.com/


Early last year, around May, when the stock market seemed to pick up after the COVID crash, I purchased a number of deep OTM LEAPs (long-expiry options; these ran until early this year). They were incredibly cheap. I often had trouble finding a market maker for them.

I only chucked about $2000 in, but these options have grossed $15,000 in realised returns and $25,000 in unrealised returns.

Options allowed me to make a “bet” that stocks would go crazily up. Options can be a valuable tool if you know how to use them.


Great questions.

1. We do attempt to attack cancers by reducing their available energy. That's why, at one point, a major field of research in cancer therapeutics was interfering with angiogenesis, because cancers will secrete messengers that help grow them dedicated (if crappy, low-quality) blood vessels. The issue with "starving" them more starkly is that they're very good at getting a share (e.g., forcing the body to supply them with blood vessels), so you're going to be hitting other labile tissues as fast or faster (skin, GI mucosa, blood and immune cells.)

Another way of targeting their rapid metabolism is pointing our therapy at cells with high replication rates. A number of our cancer therapeutics are aimed directly at cells that are currently replicating, which should selectively hit cancer cells (though again, it hits skin, GI mucosa, blood and immune cells, etc. because they're also high-turnover cells.)

We use methotrexate to interfere with DNA synthesis, thus reducing the rate of replication altogether (in cancer cells, as well as.... above).

The problem, besides the dose-limiting toxicities of all of these things (because targeting metabolism hits all high-metabolism cells), is that cancer cells are really good at developing resistances. So, for instance, if you starve them of blood supply, they'll switch to anaerobic metabolism of glucose. If you starve them of glucose, well, you can't really - I'll discuss that below. If you give them methotrexate or other nasty drugs, they alter the cells' native drug-efflux pumps to target those drugs better and pump them right out of the cell. Cancer cells have a broken mechanism for protecting DNA - the result is really high rates of cell death among cancer cells, and also really rapid evolution.

In terms of starving cells of glucose: glucose is the least common denominator of cellular metabolism. It's the primary food source for the brain. Different cells have different receptors for absorbing it, with different levels of affinity. If you're running low, pretty much every cell in the body that can will kick up metabolic products to the liver to turn into glucose it can share with the bloodstream - because the best receptors in the bloodstream for picking up glucose belong to the brain. You'll starve, or poison, the brain long before you manage to starve out a cancer. (Yes, Ketone bodies are a thing, but that happens alongside your body mobilizing everything it can to feed the brain, not instead of.)

We also can't 'see' all the tumor. The way cancers actually develop is you have an abnormal cell A, which grows into a tiny nest. These are below detection in any practical clinical way, and we don't want to treat them because they're ridiculously common - your immune system wipes them up. If we tried to detect and treat them all, we'd kill everyone with side effects long before we prevented a fatal cancer.

Out of the bunches of these that develop and die, or develop and go permanently quiet, one gets active enough to start seeding tumor cells into the blood stream. Most of those cells will die, too, because blood is rough for cells not built to withstand it. Most of these are going to be undetectable in any way, and do nothing to people.

(Every time I say something is undetectable, I mean "Except for high precision laboratory experiments used to detect just such things").

Eventually a tiny pre-pre-tumor will start seeding cells into the blood stream that can survive the blood. These will get seeded effing everywhere. Most of these are permanently quiescent and do nothing, ever. They exist at the level of single cells - we can't see them. They don't do anything, metabolic activity very low, so we can't target them.

Once in a blue moon you get one seeded that is actually metabolically highly active. Or maybe it mutates into metabolic activity later. Most of those die.

Once in a blue moon, one of these will live enough to start replicating for real. Most of those get wiped out.

And once in a blue moon, they start replicating for real, and develop immune evasion, and you have something that becomes a cancer, maybe. Or it gets triggered by something external and becomes a cancer. There's a "seed and soil" element here. It'll often start seeding back into the blood stream.

By the time you have a detectable mass, your entire body has been seeded with these cells, most of them both un-image-able and un-selectively-treatable. Luckily, the overwhelming majority of these cells - lots of nines - won't do jack. Of the trillions that will seed your body, if we stimulate them just right, you might get a couple of new tumors, or none at all.

We know this because, early in modern oncology, we learned that tumors benefit from circulating inflammatory markers. When a surgeon took out a tumor, not infrequently the patient would come in a year later with one or two new ones that weren't previously detectable. We eventually learned that the inflammatory growth signals that come with surgical trauma can provoke an otherwise sleepy tumor cell into metabolic activity.

Which is a roundabout way of saying "cancers are more metabolically varied than the late, aggressive stage of the process we usually refer to as 'cancer' would suggest."

That being said, if you could inject something directly into the tumor (rather than into the bloodstream, which would prioritize sending said poison-pill glucose to the brain or liver) and take advantage of its metabolism, that would be great. We do kind of do that: we implant radioactive pellets directly, with the added benefit that we know they won't affect much tissue outside the immediate area.

I hope my answer was actually useful in providing some biological context? I'm afraid I might have just word-vomited instead of being helpful.


More subtly, here's an example of a hidden race condition in JS:

  async function totalSize(fol) {
    const files = await fol.getFiles();
    let totalSize = 0;
    await Promise.all(files.map(async file => {
      totalSize += await file.getSize();
    }));
    // totalSize is now way too small
    return totalSize;
  }
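The bug is that `totalSize += await file.getSize()` reads `totalSize` before the `await`, so every concurrent callback adds its size to the stale initial value and increments are lost. One fix (using the same hypothetical `fol`/`file` API as above) is to await all the sizes first and sum afterwards:

```javascript
// Race-free version: no read-modify-write of shared state spans an await.
async function totalSizeFixed(fol) {
  const files = await fol.getFiles();
  // Fetch all sizes concurrently, then sum them once everything resolved.
  const sizes = await Promise.all(files.map((file) => file.getSize()));
  return sizes.reduce((sum, size) => sum + size, 0);
}
```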

Questions that I've had good experience with:

* What's your problem? (with "Tell me more" as a follow-up)

* What are you doing right now for that problem?

* What other solutions have you tried?

* Are those other solutions working? Why / why not?

* Why do you feel our product might work for you?

The last question is a BS filter. If they're just using you for their own research, or to get their existing vendor to bid lower, their response here will sound thin. Prospects who are early in their decision-making won't have much to say here (or they're already late in their decision-making, have decided to go with another vendor, and are just getting on a call with you to say they've done their DD). The more they have to say, the closer they are to the finish line (i.e. a sale).


At Mobile Jazz (7 years old now) and Bugfender (4-5 years old now) we always had the rule that people could choose to work as much as they want and when they want, as long as the output and quality were there. Obviously, to achieve high-quality output you need to be there at certain times (to overlap with other team members) and you need to do a certain amount of hours. The problems we had because of this are almost non-existent (a few specific people who then didn't stay very long in the company), and the advantages by far outweigh the "loss of control". In the end, I as a business owner, and our managers, have far less stress by trusting people and empowering them to do great work.

With this framework, people take breaks whenever they feel like it. They go do sports, play with their kids, run errands. They even take whole days off to go surfing or skiing. And I really don't mind, because I do the same.

But then, if a server goes down late at night, people all of a sudden show up by themselves and fix problems. On a rainy Saturday or Sunday, people will all of a sudden be online and working.

Give people the opportunity to be in charge of their own work schedule, give them the responsibility, make them feel that they're actually responsible and they will shine.

We also just released our company handbook, which goes into a lot of detail on how we run a 20+ person remote business:

* https://mobilejazz.com/company-handbook (landing page, if you want to get email updates)

* https://mobilejazz.com/docs/company-handbook/mobile-jazz-com... (direct link to the PDF)

EDIT: Added some more details


(1) Start a freelance practice.

(2) Raise your rates.

(3) As you work for clients, keep a sharp eye for opportunities to build "specialty practices". If you get to work on a project involving Mongodb, spend some extra time and effort to get Mongodb under your belt. If you get a project for a law firm, spend some extra time thinking about how to develop applications that deal with contracts or boilerplates or PDF generation or document management.

(4) Raise your rates.

(5) Start refusing hourly-rate projects. Your new minimum billable increment is a day.

(6) Take end-to-end responsibility for the business objectives of whatever you build. This sounds fuzzy, like, "be able to talk in a board room", but it isn't! It's mechanically simple and you can do it immediately: Stop counting hours and days. Stop pushing back when your client changes scope. Your remedy for clients who abuse your flexibility with regards to scope is "stop working with that client". Some of your best clients will be abusive and you won't have that remedy. Oh well! Note: you are now a consultant.

(7) Hire one person at a reasonable salary. You are now responsible for their payroll and benefits. If you don't book enough work to pay both your take-home and their salary, you don't eat. In return: they don't get an automatic percentage of all the revenue of the company, nor does their salary automatically scale with your bill rate.

(8) You are now "senior" or "principal". Raise your rates.

(9) Generalize out from your specialties: Mongodb -> NoSQL -> highly scalable backends. Document management -> secure contract management.

(10) Raise your rates.

(11) You are now a top-tier consulting group compared to most of the market. Market yourself as such. Also: your rates are too low by probably about 40-60%.

Try to get it through your head: people who can simultaneously (a) crank out code (or arrange to have code cranked out) and (b) take responsibility for the business outcome of the problems that code is supposed to solve --- people who can speak both tech and biz --- are exceptionally rare. They shouldn't be; the language of business is mostly just elementary customer service, of the kind taught to entry level clerks at Nordstrom's. But they are, so if you can do that, raise your rates.


Hi, tech lead of Workers here.

This is tricky to answer concisely because we've implemented a huge number and variety of defense-in-depth measures. (Plus, I'm in transit right now and can't type much -- maybe I'll edit to add more details later.)

First, we believe Chrome has done a pretty good job hardening V8 over the years, and we get a lot of comfort knowing there's a $15,000 bug bounty for V8 breakouts. We update V8 continuously, tracking the version shipping in Chrome.

That said, we obviously don't simply rely on V8 for everything. We've modeled what a V8 breakout is likely to look like, and added a variety of mitigations.

Presumably, a V8 breakout bug is likely to allow an attacker to run arbitrary native code within the Workers runtime process. That's bad, but they will quickly run into some barriers to weaponizing it. For example, we obviously run with ASLR, so an attacker looking for another isolate's data would be flying blind and would likely raise segfaults. Any segfault in production raises an alert and is investigated. Similarly, we run a tight seccomp filter that denies all filesystem and network access, and if an attacker ever invokes such syscalls, it will raise an alert and be investigated.

It's worth noting that we do not allow eval() (or any other mechanism of "code generation from strings"), hence all code we execute has to have been uploaded through our code deployment pipeline. This implies we have a copy of all code. When a segfault raises an alert, we immediately look at the code that caused it. Anyone using a zero-day against us is very likely to burn their zero-day long before they manage to pull off a useful attack.

You're probably also wondering about Spectre. Here's a previous comment of mine on that topic: https://news.ycombinator.com/item?id=18280156

Again, this is just a couple of the things we're doing... there's really too much to list in a HN comment. I hope to find time to write this up more formally in the future. :)


This is a relatively new recommendation on V8's part specifically in response to Spectre-type vulnerabilities.

We've spent a lot of time thinking about and building mitigations for speculative side channel attacks. For example, early on in the project -- before anyone even knew about Spectre -- we made the decision that `Date.now()` would not advance during code execution, only when waiting for I/O. So, a tight loop that calls `Date.now()` repeatedly will keep getting the same value returned. We did this to mitigate timing side channels -- again, even though we didn't know about Spectre yet at the time.
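As a toy illustration of that mitigation (this is not Workers' actual implementation, just the shape of the idea):

```javascript
// A clock that only advances at I/O boundaries, so a tight loop cannot use
// it as a high-resolution timer for a timing side channel.
function makeIoClock(realNow = Date.now) {
  let frozen = realNow();
  return {
    now() { return frozen; },             // same value for all calls between I/O
    onIoEvent() { frozen = realNow(); },  // advance only when I/O completes
  };
}
```

Code that busy-loops calling `now()` sees a constant value, while code that genuinely waits for I/O still observes time moving forward.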

Chrome has indeed stated that they believe process isolation is the only mitigation that will work for them. However, this statement is rather specific to the browser environment. The DOM API is gigantic, and it contains many different sources of non-determinism, including several explicit timers as well as concurrent operations (e.g. layout, rendering, etc.). Side channel attacks are necessarily dependent on non-determinism; a fully-deterministic environment essentially by definition has no covert side channels. But, there's no way Chrome can get there.

The Cloudflare Workers environment is very different. The only kinds of I/O available to a worker are HTTP in, HTTP out, `Date.now()`, and `crypto.getRandomValues()`. Everything else is perfectly deterministic.

So, for us, the problem is much narrower. We need to make sure those four inputs cannot effectively be leveraged into a side channel attack. This is still by no means trivial, but unlike in the browser, it's feasible. `getRandomValues()` is not useful to an attacker because it is completely non-deterministic. `Date.now()` we've already locked down as mentioned above. HTTP in/out can potentially be leveraged to provide external timers -- but the network is extremely noisy. A practical attack would require a lot of time in order to average out the noise -- enough time that we can do a bunch of higher-level things to detect and disrupt possible attacks. It helps that Workers are stateless, so we can reset a worker at any time and move it around, which makes attacks harder.

NetSpectre demonstrated that even physical network separation does not necessarily protect you against Spectre attacks. There's simply no such thing as a system that's perfectly secure against Spectre, process isolation or not. All we can do -- aside from going full BSG and giving up on networks altogether -- is make attacks harder to the point of infeasibility. Luckily, we have lots of tools in our toolbox for making Spectre attacks infeasible in the case of Cloudflare Workers.


We use Apollo for the http://expo.io/ client and have found it to be extremely pleasant to use. I love how flexible it is too -- you can use it for weird things like building an ORM layer for SQLite. For example: https://github.com/brentvatne/apollo-sqlite-experiment/blob/... -- the queries here go through a custom NetworkInterface which use a pretty simple graphql resolver (https://github.com/brentvatne/apollo-sqlite-experiment/blob/...) built on graphql-anywhere to pull the data out of the DB.
