I think this was a pretty honest write-up of what went well and what didn't, and I think its takeaways are directionally pragmatic.
> The agent knew the experiment ended at Day 30 since I told it as much in the system instructions, and so it played it safe. It doubled down on what was already working rather than taking creative risks, whereas a (good) human strategist would’ve experimented aggressively in weeks 1-2 and refined later. The agent just tried to ride out the month at a predictable rate.
> Then when I tried to fix quality myself (the email validation gate), it caused the worst performance of the entire experiment. Same trap that human-run campaigns fall into - optimizing for what’s measurable rather than what matters. Main difference is an AI agent just does it faster and with more confidence, which honestly makes it more dangerous.
> If you’re running any kind of recurring workflow where you pull data, make decisions, and act on them, the loop pattern here probably applies to your work already. The hard part is figuring out what to actually optimize for, and clearly articulating that.
I have mentioned this before, but age verification can be solved by hash chains. They can prove age without compromising privacy.
It is crazy that the solutions Discord goes for are IDs and selfies. It definitely gives the impression that there are shady ulterior motives.
Hash chains are simple. If they were adopted, the steps Discord takes now would clearly look like bad faith. If you search you will find quite a bit of information; my introduction to hash chains was for age verification specifically:
https://spredehagl.com/2025-07-14/
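For readers who haven't seen the primitive before, here is a minimal sketch of a plain hash chain in Python. It shows only the generic mechanism (repeated hashing, then revealing a deeper link to prove a minimum count), not necessarily the exact age-verification protocol described in the linked post:

import hashlib

def hash_chain(seed: bytes, n: int) -> list[bytes]:
    # link[0] = seed, link[i] = H(link[i-1])
    links = [seed]
    for _ in range(n):
        links.append(hashlib.sha256(links[-1]).digest())
    return links

def verify(revealed: bytes, committed_tip: bytes, k: int) -> bool:
    # hashing `revealed` k times must reproduce the committed tip,
    # proving the holder has a value at least k links below the tip
    value = revealed
    for _ in range(k):
        value = hashlib.sha256(value).digest()
    return value == committed_tip

# An issuer builds a chain and publishes (or signs) only the tip.
chain = hash_chain(b"secret-seed", 32)
tip = chain[-1]

# Revealing chain[-1 - 18] later proves knowledge of a value 18 steps
# before the tip without exposing the seed; in an age scheme, the step
# count could encode "at least 18 years".
assert verify(chain[-1 - 18], tip, 18)

In this sketch the verifier learns only the step count and the revealed link, nothing about who the holder is.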
Going to recommend "Addiction by Design" here. Superb book about the addiction design dynamics in the gambling industry and very reminiscent of what we see in the smartphone/internet universe today. Shout out to the forgotten HN user who recommended it originally, one of the best and most salient books I've read in years.
I wonder if the new drug of choice is actually technology. In some ways I think that addiction to technology has some of the same mellowing effects as drugs. Some research indicates that smartphone addiction is also related to low self-esteem and avoidant attachment [1] and that smartphones can become an object of attachment [2]. The replacement of drugs by technology is not surprising, as it significantly strengthens technological development, especially now that technology is already well past the point of diminishing returns for improving everyday life.
The first approach (the 'It’s "obviously" the only way to go' one) is called an adjacency list.
The second (the 'vastly simpler method') I don't recall seeing before. It has some fairly obvious deficiencies, but it clearly suffices in some cases.
The third ('namespacing') is called a materialized path.
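To make the terminology concrete, here is a rough illustration of the two named representations (field layout and names are mine, not the article's):

# Adjacency list: each node stores only a reference to its parent.
adjacency = {
    "electronics": None,            # root
    "phones": "electronics",
    "android": "phones",
    "laptops": "electronics",
}

# Materialized path ("namespacing"): each node stores its full ancestor
# path, so fetching a whole subtree is a simple prefix match.
paths = {
    "electronics": "/electronics",
    "phones": "/electronics/phones",
    "android": "/electronics/phones/android",
    "laptops": "/electronics/laptops",
}
phones_subtree = [n for n, p in paths.items() if p.startswith("/electronics/phones")]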
> Except in very minor cases, duplication is virtually always worth fixing.
I disagree with the severity of this, and would posit that there are duplications that can't be "fixed" by an abstraction.
There are many instances I've encountered where two pieces of code coincided to look similar at a certain point in time. As the codebase evolved, so did the two pieces of code, their usage and their dependencies, until the similarity was almost gone. An early abstraction that would've grouped those coincidentally similar pieces of code would then have to stretch to cover both evolutions.
A "wrong abstraction" in that case isn't an ill-fitting abstraction where a better one was available, it's any (even the best possible) abstraction in a situation that has no fitting generalization, at all.
Elixir has a nice take on this with the `with` keyword/macro
with {:ok, contents} <- File.read(filename),
     {:ok, parsed} <- MyModule.parse(contents) do
  {:ok, parsed}
end
What this does is run the functions in order, top to bottom; if the return value of a function doesn't match the pattern on its left, the `with` returns early with the value that didn't match, otherwise it continues.
This means you don't need to write each function to take a tuple of {:ok, value} plus another clause to take {:error, reason}; you can just write your functions to take the value they care about and let the pattern matching in the with block take care of error propagation.
So if File.read returns {:error, reason}, then MyModule.parse never executes and the result of the with is {:error, reason}.
It essentially means you can program the happy path and let the caller match on the sad paths (if they want to)
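For example, a caller of a hypothetical read_and_parse/1 function wrapping the `with` above could handle both outcomes in one place:

case read_and_parse(filename) do
  # happy path
  {:ok, parsed} -> do_something_with(parsed)
  # sad path: whichever {:error, reason} fell out of the with block
  {:error, reason} -> IO.puts("failed: #{inspect(reason)}")
end

(read_and_parse/1 and do_something_with/1 are placeholder names, not functions from the comment above.)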
For sure, anything that I could say along the lines of "we're not shutting Flutter down" might be taken as having overtones of the Baghdad Bob meme. And indeed, why should you trust my word?
The reason you should feel confident to use Flutter is because it's strongly in our business interest to invest in it. Over 600,000 apps in the Play Store alone are already written using Flutter, to say nothing of the countless apps for iOS, Windows, macOS, Linux and web. The list includes big brands like Alibaba, BMW, eBay, and SHEIN. Neither Google as a whole, nor Android in particular would be better off if Flutter didn't continue to flourish.
Aside from that, there are thousands of engineers at Google who use Dart and Flutter internally to build a wide variety of apps. There are many millions of lines of code written that power everything from Ads to our internal CRM system. Google wouldn't be better off if we had to throw all that code away and start over.
Lastly, Flutter is very successful. It has a developer base of several million, is growing quickly, and developers tell us it makes them more productive (https://medium.com/flutter/does-flutter-boost-developer-prod...). Happy developers are a prerequisite for a wide variety of other Google APIs and services, so we have a vested interest in continuing that.
Even if it weren't for Google, there are more contributors to Flutter from outside Google than there are Flutter team employees. Those contributors include big companies like Samsung, Canonical and Sony, as well as prolific individual developers like @a14n (https://github.com/a14n).
We're working hard on lots of fun new stuff right now, including a rewrite of our graphics rendering engine. If you haven't seen it, check out https://wonderous.app, which is using the new engine on iOS. We think it shows the potential of Flutter well!
I've been doing this kind of thing for years with two notable differences:
1. I don't believe people actually type these values in by hand, so I'm not really concerned about the 'l' vs '1' issue. I do base 32 without `eiou` (vowels) to reduce the likelihood of words (profanity) sneaking in.
2. I add two base-32 characters as a checksum (salted, of course). This prevents having to go look at the datastore when the value is bogus, whether by accident or malice. I'm unsure why other implementations don't do this.
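A rough Python sketch of that scheme (the trimmed alphabet and checksum construction here are illustrative, not the parent's exact implementation):

import hashlib
import secrets

# base-32 alphabet: digits plus lowercase letters, with e, i, o, u removed
ALPHABET = "".join(c for c in "0123456789abcdefghijklmnopqrstuvwxyz" if c not in "eiou")
SALT = b"application-specific-salt"  # hypothetical salt

def checksum(body: str, length: int = 2) -> str:
    # short salted checksum over the identifier body
    digest = hashlib.sha256(SALT + body.encode()).digest()
    return "".join(ALPHABET[b % 32] for b in digest[:length])

def new_id(body_len: int = 10) -> str:
    body = "".join(secrets.choice(ALPHABET) for _ in range(body_len))
    return body + checksum(body)

def looks_valid(candidate: str) -> bool:
    # reject obviously bogus IDs without touching the datastore
    body, check = candidate[:-2], candidate[-2:]
    return len(candidate) > 2 and checksum(body) == check

The checksum only filters out garbage early; uniqueness still has to be enforced by the datastore.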
The Elements of Computing Systems: Building a Modern Computer from First Principles [0] [1]
Easily one of the most interesting and engaging textbooks I've read in my entire life. I remember barely doing any work for my day job while I powered through this book for a couple weeks.
Also, another +1 to Operating Systems: Three Easy Pieces [2], which was mentioned in this thread. I read this one cover to cover.
Lastly, Statistical Rethinking [3] really did change the way I think about statistics.
The one that I find easiest to understand is still the one that I wrote about a decade ago, when I first had to work with OAuth 2. All the others I understand by mapping what they say onto concepts in mine, and that seems to work pretty well.
My wife and I have a rule, "everything needs to have a home." Because if it doesn't have a home, it becomes clutter, and after enough clutter, it finds a home... usually a sub-optimal one, like a junk drawer.
The kids (4 and 5) have adapted to this wonderfully. It really helps them. It makes cleanup a trivial task because everything is known to belong somewhere specific.
Related to this: the recognition that everything is harder in a messy home. If you have stuff everywhere, you are paying a small tax any time you want to find or do something. Even cluttering your cupboards and drawers means you're tediously sifting through too much stuff or constantly worried about knocking something over while getting something else out. It's been especially good to avoid the dance of removing items to get the items underneath, then putting them back.
Finally: the lesson that when you keep stuff, you are paying a "tax" on keeping it. Throw away stuff you don't think you'll ever need again. It's cheaper to re-buy 1 or 2 things than to keep 100 of them for years and years. That storage space could be better used.
Bonus: If everything has a home, and you run out of homes, you quickly recognize that you have too much stuff and it might be time to make trade-offs. This puts an upper bound on the amount of stuff in our home.
Note that this could all easily sound super hardcore but it's not. It's just a general guide we have. We aren't forcing our kids to throw excess toys away and we're not writing a book about it. A flexible tool to guide behaviour, not enforce it.
> Do not fall into the trap of anthropomorphising Larry Ellison. You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle. — Bryan Cantrill (https://youtu.be/-zRN7XLCRhc?t=33m1s)
Remember how medieval scholars would argue over how many angels could dance on the head of a pin, crafting beautiful logical arguments that relied on their view of cosmology?
These weren’t dumb people. These were likely the smartest and best educated.
Tying theories to empirical science is what keeps logic grounded in reality.
I failed the interview for an internship I really wanted in my 2nd year of engineering. I did get a shit internship that summer, but being really shaken by my incompetence, I took up this book, and quite honestly, it changed everything!
It truly sparked an interest in systems for me. The book helped me build a strong foundation in systems: processes, memory, filesystems, networks, concurrency, synchronization and more. After reading OSTEP, it felt like an epiphany, and I charted a path for my remaining two years of college around distributed systems, systems research, and virtualization.
And the best part is that all this knowledge is free! Kudos to Professor Remzi and his work!
For people interested in more details about Postgres internals, I cannot recommend https://www.interdb.jp/pg/ enough; it's an excellent text with lots of detail.
Along similar lines, I've adopted a hyper-frequent commit pattern in git. I make a bunch of meaningless micro-commits as I'm making progress, and then rewrite them all into one or two meaningful commits once I've reached a working state of whatever I was trying to do.
I find it's helpful for not losing work / easily backing up if as I'm going along I realize I want to change approach.
(For the micro-commits I have a git command "git cam" that just commits all changes with the message "nt". Then once I'm ready to do a "real commit", I have "git wip", which rolls back all the nt commits but leaves the changes in the working tree; then I can make one or two "real" commits.)
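For reference, here is one plausible way to define aliases like those (the parent's actual definitions aren't shown):

git config --global alias.cam '!git add -A && git commit -m nt'
# "wip": soft-reset to the newest commit whose message is not "nt",
# leaving all of the micro-committed changes staged for a real commit
git config --global alias.wip '!git reset --soft $(git rev-list -n 1 --invert-grep --grep="^nt$" HEAD)'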
I wonder if dura would be even better, or if the commit frequency would end up being too fine-grained and obscure?
100% this. There is an extremely small, extremely vocal minority which gets amplified by social media, and the media tends to report on this small group because its views are outrageous and generate outrage, which in turn generates clicks and ad revenue for those outlets. Boring, dependable/sensible government is not something that generates huge ad revenue.
One of the links is to Google’s Site Reliability Workbook (https://sre.google/workbook/table-of-contents/). At my last company, we implemented SLOs based on the advice in this workbook, and I thought it was an excellent approach. Having internal reliability targets that map directly to the meaningful parts of the customer experience, and setting up dashboards and alerts based on those targets, is a very powerful way to achieve reliability that matters for users.
These stories usually spur more enthusiasm for buying cryptocurrency, but ironically those buyers aren’t interested in spending cryptocurrency using their MasterCard.
They’re hoping other people will buy cryptocurrency on the back of these announcements, driving the price up. Or, more likely, they’re just hoping other people will buy cryptocurrency and not use it in these spending systems.
Spending cryptocurrency would result in selling that cryptocurrency, which would drive the price down. That’s not what cryptocurrency investors actually want.
Should also note that MasterCard crypto transactions almost certainly won’t be settled on the blockchain. Not with $8 Bitcoin transaction fees. They’ll just be denominated in cryptocurrency and people can deposit/withdraw in certain cryptocurrencies. MasterCard only needs to buy and sell on the blockchain as needed to provide an FX window. The actual transactions would be stored in traditional database systems (aside from customer deposits/withdrawals, just like Coinbase)
Why? Because MasterCard would get to act as an exchange and collect exchange fees. Exchange fees are a great way to charge consumers for spending their own money in 2021, when normal credit cards actually pay people 1-2% to use them. Cryptocurrency’s inefficiency is MasterCard’s financial upside.
The Cambridge Bitcoin Electricity Consumption Index gives a much higher estimate (117 TWh vs. 78 TWh), which makes it about the same as the annual electricity usage of the United Arab Emirates (pop. 9.9 million).
Bitcoin uses as much electricity as Chile, and a single transaction has the same carbon footprint as ~700k Visa transactions. These figures may themselves be underestimates; a recent study suggests Bitcoin accounts for around half of global data centre energy consumption. [1]
Modern shells are powerful enough to help you remember, if you learn to configure them appropriately. My histories are always saved because each shell instance gets its own HISTFILE, like so:
export HISTFILE=$HOME/.history/${TTY##*/}.$SHLVL
As I use different terminal windows for different tasks, this keeps history files rather concise thematically.
And I let the shell add timestamps too, so I can grep for entries produced during a certain time span:
zsh:
setopt EXTENDEDHISTORY # add timestamps
bash:
HISTTIMEFORMAT="%F %T "
I write perl or shell script files, of course, if it's more than a handful of lines.
I maintain a very popular piece of FOSS software as my full time job (you've all heard of it, many of you use it).
Easily the worst part of the job is toxic users who hop on to issues demanding you implement them immediately and belittling your planning ability. Worse when you were planning on implementing it soon anyways, but now if you do it's "rewarding" their behaviour (in their eyes at least), and they become invigorated to go and spread their toxicity even further. Alternatively, you can hold off on implementing it until things cool down, but then all the nice users who have been patiently waiting get screwed.
I'm forever grateful that I actually get a FAANG salary to do this -- I wouldn't keep it up if I were getting the little to nothing many FOSS contributors get.
I think what people are missing with all these analogies about burglaries and negligence is the funny difference between cyberspace and meatspace: In cyberspace, your attacker can be anywhere on the planet, located in virtually any jurisdiction, and reliably tracing and attributing attacks is a very difficult task. In meatspace, your attacker must be physically present and is generally obvious and thus vulnerable. This difference has dramatic implications on the ability of the enforcement model to reduce incidence of attacks.
In meatspace, assigning 100% of the burden of blame to the attacker and absolving the victim of any blame at all agrees with our ideas of morality and sort of works because there is a non-negligible chance of holding the attacker accountable. This provides a measure of deterrence to would-be attackers.
In contrast, in cyberspace, the chance of holding attackers accountable is much lower. There is little deterrence to would-be attackers, especially state-sponsored attackers. Here we need to let go of our fantasy that blame must be assigned according to our idea of who is morally at fault.
Of course the attacker is always morally at fault. But legally, we must hold accountable organizations who are breached, because we need them to improve their security posture. An improved security posture is the only realistic path to a future with fewer and less impactful cyberspace attacks.
Strict liability or "victim blaming" for cyberspace attacks goes against our notions of morality but IMO it is essential.
The reason you can’t buy a good webcam is the same reason you can’t buy a high-quality monitor outside of LG’s unreliable Apple collab.
The lazy conglomerates who sell these peripherals often don’t actually produce the parts in them. They simply rebrand commodity cameras and IPS panels in a crap plastic housing and slap their logo on it.
Then they give the product a hilariously user-hostile product name, like “PQS GRT46782-WT” as an extra f-you to the user.
They don’t care about you because they have no ongoing relationship with you, and their executives mistakenly see their own products as commodities.
Combine this with the fact that most home users don’t care about good quality or even know what it is, and you have the current situation.
A friend once described the peripheral market as “Assholes selling crap to idiots.”