
the nostalgia for 1999 isn't really about missing dial-up or basic html, it's about mourning the loss of user agency. back then the browser actually worked for you. today the client is basically a hostile environment running megabytes of third-party js just to collect telemetry.

going back to gopher or text-only browsers feels like admitting defeat tbh. we can still build incredibly fast modern apps if we just stop treating the user's hardware like an infinite resource for adtech. you don't need massive frameworks and client-side bloat to make something good.


In 1999, hostile websites would pop up endless new windows full of advertisements that you were powerless to stop unless you resorted to ALT+F4 or CTRL+ALT+DEL. Part of Mozilla Firefox's appeal was that it came with a pop-up blocker. [0]

Do you know anything about the Browser Wars? People literally had to put up images telling you which browser to use if you wanted to actually experience their website the way it was intended. Otherwise, it was just broken. [1]

[0] https://www.nytimes.com/2004/01/19/business/as-consumers-rev... (sorry for the tracking code, but this is a "gift" article and it was the best source I could find on popup ads)

[1] https://en.wikipedia.org/wiki/Browser_wars


Hostile pages did that. Today, nearly every page has a dozen tracking scripts, starts off with a cookie popup, probably pops up a "please log in" or "please give me money" after you scroll halfway down, still has ads that even more effectively mimic the site topic and design to trick you into clicking them, pops up a newsletter or coupon-code popup if your cursor leaves the viewport, and might be secretly running experiments on you by A/B testing titles, images or testimonials...

The assault on your attention is way worse these days, it's just (mostly) contained to the viewport.


In 1999, most websites were not hostile because they weren't chasing diminishing returns from ad-tech companies. Most of the companies I worked with were trying direct revenue models, getting people to buy things or subscribe directly, and the ad market paid a lot better for less obtrusive ads; the whole real-time bidding pipeline that runs arbitrary JavaScript was yet to come.

Now almost everyone is under pressure to find new revenue streams and maximize income while Google and Facebook have sucked up most of the ad revenue, so you see more, and sleazier, ads everywhere. Sites which rely on you reading or watching are on a much more aggressive treadmill, constantly pushing new things to hang ads on, so the experience is more frenetic.

It feels not unlike how in the mid-20th century people left work at work when they went home and only extraordinary circumstances would result in phone calls home, whereas in this century it's just expected that everyone carries a smartphone and checks email/Slack. More efficient in some ways, for sure, but a lot of extra stress loaded onto people for no extra compensation.


You nailed it. Retreating to text-only kinda misses the whole point. The browser back then wasn't just a document viewer. It was a portal.

People forget that the internet used to be a place you went. It was entirely separate from our analog lives. You sat down, you fired up the machine, and your screen became this portal cut right into the universe. The juxtaposition between that visually-stunted, industrial-grade gray interface and the shocking, immediate global access we suddenly had... it was everything. The UI wasn't 'boring'. It was the clunky machine whose buttons you pressed (literally) to touch the world.

Today it's all hyper-lubricated feeds, and scammy-shiny UI trying to hijack your dopamine. But back then, the machine worked for you. It was a tool for discovery. A fucking frontier.

I've been trying to build a shrine to that precise feeling, to see if I can grab the modern web and force it to face its beautiful, glorious past - that specific gateway-to-the-world, electronic frontier feeling. Just a small set of experiments. Incomplete as a monument to the totality of it. Merely a partial body of work, trying to express what it felt like to be there. I built a Win98/1999 environment: https://win9-5.com/desktop.html to browse the web from an (abominably) multi-tab Netscape re-imagining. It runs a live, remote modern browser session inside a pixel-perfect 98 shell. Forcing the modern web through that dial-up era lens... it's evocative, modem sounds and all. The aim is to remind you what it felt like when the web was a boundless horizon, not a walled garden of weirdo NIMBYs and microstates and regulatory capture etc etc etc. Sometimes I catch a flash of that fire again using it. Sometimes...


"going back to gopher or text-only browsers feels like admitting defeat tbh."

Interesting perspective

I have been using text-only browsers continually since the mid-90s (no lynx after 1999)

As such, I never "went back" to using the text-only browser as I have always used one, but as graphical browsers became worse I used them less

The customised text-only browsers I use today are 1.3 and 2.0 MB. I can compile them in seconds on underpowered computers

The so-called "modern" browser is [rapidly-expanding size] MB, is not as easy to customise by editing the source code, and takes substantial resources plus time to compile

Today, most www use for me is text-only. I am consuming information, not graphics. I prefer textmode to X11 or the like

I avoid making HTTP requests to remote servers with graphical, Javascript-enabled browsers

I prefer using TCP clients and TLS forward proxies for making HTTP requests (at least one forward proxy now even has its own TCP client)

I use the text-only browser to read HTML files

For example, yc.htm is a file I create each day that contains all HN submissions where discussion is still open

Today yc.htm is 12 MB and there are 7268 stories

Using tiny command line utilities I wrote, the HTML in yc.htm is processed to CSV and added to an SQLite database. The unique domain names, today about 3704 of them, are extracted from all item URLs in yc.htm and DNS data is obtained via 1-3 pipelined lookups, each over a single TCP connection. The DNS data is then processed and inserted into an SQLite database
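
The extraction step is simple enough to sketch in Python (illustrative only, not my actual utilities; those also handle the CSV conversion and the pipelined DNS lookups):

    # toy version: pull unique domains out of yc.htm into SQLite
    import re
    import sqlite3
    from urllib.parse import urlparse

    html = open("yc.htm", encoding="utf-8", errors="replace").read()
    urls = re.findall(r'href="(https?://[^"]+)"', html)
    domains = sorted({urlparse(u).netloc for u in urls})

    db = sqlite3.connect("yc.db")
    db.execute("CREATE TABLE IF NOT EXISTS domains (name TEXT PRIMARY KEY)")
    db.executemany("INSERT OR IGNORE INTO domains VALUES (?)",
                   [(d,) for d in domains])
    db.commit()
    print(len(urls), "urls,", len(domains), "unique domains")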

The TLS forward proxy stores the DNS data in memory so that I do not have to make any remote DNS when accessing the www

The yc.htm file can be opened in a text-only browser but I would not attempt it in a graphical one. The text-only browser feels more robust, less likely to stall or crash. I prefer the text-only formatting

By not using a graphical browser I would not say I am "admitting defeat". I have full control over HTTP headers, I only retrieve the data I want, I never see any ads, I do not send data to trackers or telemetry collection points

On the other hand, using a graphical browser does feel like "admitting defeat" as by doing so I allow "web developers" to destroy all preferences I have for how and when I want data retrieved and presented. If I allow the graphical browser unfettered access to the internet, and allow it to run unreviewed Javascripts, I lose all control over DNS and HTTP requests. For me, the experience of using a graphical browser where I have no such control is slow and painful, a horrible "user experience"

I do not see using "old" software as "nostalgia". I see it as being practical. "New" software generally sucks

The loss of "user agency" as some call it only occurs if one uses a so-called "modern" browser and runs Javascript. The seizure of user agency is accomplished by getting people to use a particular "user agent" that is controlled by online advertising companies (e.g., Google) or their business partners (e.g., Mozilla). If we called this a "choice" perhaps some readers might disagree. But so-called "Big Tech" have consistently argued that people "choose" Big Tech's "user agents" that, via their design and "default settings", effectively remove user agency for the majority of people who use them


"(at least one forward proxy now even has its own TCP client)"

Correction: s/TCP/HTTP/

I also use a separate TCP client from the TLS proxy author


You can use Dillo and explore the light web, Gopher and Gemini with no JS or big plugins at all.

The Gopher and Gemini plugins are simple scripts, and so is the actions plugin; I have my own written in 'rc', a shell borrowed from Plan 9 and a bit improved (readline keys, history).

For audio/video I just spawn mpv+yt-dlp on the spot.

https://dillo-browser.github.io/


charging an enterprise premium just to give users the privilege of opting out of metadata harvesting is wild. they know exactly how valuable project structures, issue titles and sprint cadence data are for training models.

this trend is exactly why relying on client-side 'trust' or saas toggles is failing us. if you want real privacy now you have to build it into the architecture. aggressive edge caching and stateless proxies that sanitize payloads before they even hit the upstream provider are basically mandatory now. if the data never reaches their persistence layer, they can't train on it. we need to start trusting our own infra, not their updated tos.
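
a minimal sketch of the proxy idea, python stdlib only (the upstream URL and header blocklist are made-up assumptions, not any particular vendor's API):

    # toy sanitizing pass-through proxy: strip identifying headers,
    # forward the request, keep nothing (response headers are dropped
    # here for brevity; a real proxy would whitelist a few)
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import Request, urlopen

    UPSTREAM = "https://api.example.com"   # hypothetical upstream
    STRIP = {"cookie", "referer", "x-device-id", "x-client-trace", "host"}

    class Sanitizer(BaseHTTPRequestHandler):
        def do_GET(self):
            headers = {k: v for k, v in self.headers.items()
                       if k.lower() not in STRIP}
            with urlopen(Request(UPSTREAM + self.path, headers=headers)) as r:
                body = r.read()
                status = r.status
            self.send_response(status)
            self.end_headers()
            self.wfile.write(body)   # nothing is logged or persisted

    HTTPServer(("127.0.0.1", 8080), Sanitizer).serve_forever()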


the npm supply chain attacks were a massive wake-up call. the fact that we normalized storing sensitive tokens in localstorage for the last decade is wild.

moving to a bff pattern isn't just about hiding tokens, it's about reducing the client attack surface entirely. shifting api orchestration and sanitization to edge proxies makes so much more sense. the browser should just be a dumb terminal rendering ui, not a secure vault managing state and credentials
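
the core of the pattern fits in a few lines (a sketch; the session store, token and upstream URL are all hypothetical):

    # BFF sketch: the browser only ever holds an opaque session id in
    # an HttpOnly cookie; the real upstream token never leaves the server
    import secrets
    from urllib.request import Request, urlopen

    SESSIONS: dict[str, str] = {}   # session id -> upstream API token

    def login() -> str:
        """Create a session; return the Set-Cookie value for the browser."""
        sid = secrets.token_urlsafe(32)
        SESSIONS[sid] = "upstream-api-token"   # obtained out of band
        # HttpOnly means client-side JS (and any XSS) can never read it
        return f"sid={sid}; HttpOnly; Secure; SameSite=Strict"

    def proxy_get(sid: str, path: str) -> bytes:
        """The BFF attaches credentials; the client never sees them."""
        token = SESSIONS[sid]   # a real BFF would 401 on a missing session
        req = Request("https://api.example.com" + path,
                      headers={"Authorization": f"Bearer {token}"})
        with urlopen(req) as r:
            return r.read()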


the silent downgrades are the worst part. dropping the cache ttl from an hour to 5 minutes and spinning it as an optimization is just insulting. relying on their official clients feels like a trap now since they will always throttle the ux to save on their own compute costs. building any serious daily workflow on top of these managed tools right now just feels like building on quicksand.

exactly. calling it 'anonymized' is pure security theater once you have enough data points to map out someone's daily routine.

waiting for legislation or eulas to fix this is a lost cause since adtech always finds a loophole. the fix has to be architectural: stateless proxies that strip device identifiers at the edge before they even hit upstream servers. if the payload never touches a persistent db there is literally nothing to de-anonymize. stateless infra is the only sane way forward
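
the stripping itself is the easy part; a sketch (the field names are hypothetical, every tracker picks its own):

    # scrub device identifiers from a JSON payload before it goes
    # upstream (toy example; nested lists are ignored for brevity)
    import json

    DROP = {"device_id", "idfa", "gaid", "imei", "mac", "lat", "lon"}

    def scrub(payload: dict) -> dict:
        # recursively drop identifying fields, keep everything else
        return {k: scrub(v) if isinstance(v, dict) else v
                for k, v in payload.items() if k not in DROP}

    raw = '{"event": "open", "device_id": "abc123", "props": {"lat": 52.1, "screen": "home"}}'
    print(json.dumps(scrub(json.loads(raw))))
    # -> {"event": "open", "props": {"screen": "home"}}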


To be honest, I feel like this is where iOS and Android are failing us. Why is every app allowed to embed a bunch of trackers? Only blocking cross-app tracking on user request as iOS does is not enough (and data of different apps/websites can be correlated externally).

Because we don’t enforce antitrust law in this country and the people that make those decisions profit from the ads.

> To be honest, I feel like this is where iOS and Android are failing us. Why is every app allowed to embed a bunch of trackers? Only blocking cross-app tracking on user request as iOS does is not enough (and data of different apps/websites can be correlated externally).

Even if Google and Apple both want to commit to fighting this, it becomes a game of whack-a-mole, because there are all sorts of different ways to track users that the platforms can't control.

As an easy example: every time you share an Instagram post/video/reel, they generate a unique link that is tracked back to you so they can track your social graph by seeing which users end up viewing that link. (TikTok does the same thing, although they at least make it more obvious by showing that in the UI with "____ shared this video with you").
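
Mechanically it's trivial to do, which is why it's everywhere. A sketch of the general technique (illustrative only, not Instagram's or TikTok's actual scheme):

    # every share event mints a unique token tied to the sharer, so
    # whoever opens the link is joined to their social graph
    import secrets
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE shares (token TEXT PRIMARY KEY,"
               " user_id TEXT, post_id TEXT)")

    def share_link(user_id: str, post_id: str) -> str:
        token = secrets.token_urlsafe(8)   # unique per share event
        db.execute("INSERT INTO shares VALUES (?, ?, ?)",
                   (token, user_id, post_id))
        return f"https://example.com/p/{post_id}/?share={token}"

    print(share_link("user42", "reel987"))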


i'm not sure about 'allowed'. perhaps 'required' is closer.

why would someone include tech that makes people think twice about using the app, unless it's required if you want to "sell" in a particular venue?

if you're developing geolocation-based apps, location tracking is a core function.

a calendar absolutely does not require location tracking beyond which side of the prime meridian you are on.


> if you're developing geolocation-based apps, location tracking is a core function.

But the subsequent sale of that data is not, and that is the discussion here.


and the reason that data is available for sale starts with forced data collection if you want to participate in an app store as a developer.

you can't sell what you don't have, unless you lie lower than a rug.

fix the data collection problem and a second-order effect emerges: no data for sale.


Are you suggesting Android/iOS app developers are forced into data collection somehow? If so, how? I'm genuinely curious.

> why would someone include tech that makes people think twice about using the app, unless it's required if you want to "sell" in a particular venue?

Because the overwhelming majority of people don't think twice about this tech.

I do, and that's why I use a lot of web tools or old-fashioned phone calls, but most people think metadata=unimportant and assume that the purpose of the app is what it does for them rather than to gather their personal information for sale.


How is this legal under the GDPR? There are clear examples in the Citizen Lab document of a user being tracked inside the EU from outside it.

Is there not also a requirement for clear consent? I.e., a weather app can't track your precise location without asking?


w3m is still the goat for quick lookups when you don't want to open a browser and lose 2gb of ram instantly. funny how we need forks of text browsers in 2026 just to keep the web readable. really like the addition of gemini support here...

definitely giving this a spin on my dev box later. the fact that it still uses a quilt patch series for dev is such a throwback tbh.


The quilt patch series comes from the time when I was basing my work on the Debian version; it was easier for me to follow upstream that way than by rebasing branches.

Most patches are now merged into master, only some unfinished work is still in that series. I should update the docs.


people wildly underestimate the os page cache and modern nvme drives tbh. disk io today is basically ram speeds from 10 years ago. seeing startups spin up managed postgres + redis clusters + prisma on day 1 just to collect waitlist emails is peak feature vomit.

a jsonl file and a single go binary will literally outlive most startup runways.
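
the whole 'database' for that use case really is a handful of lines. a python sketch of the idea (the go version is barely longer):

    # the entire waitlist "database": one JSON object per line,
    # appended to a flat file the OS page cache keeps warm
    import json
    import time

    def add_to_waitlist(email: str, path: str = "waitlist.jsonl") -> None:
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps({"email": email, "ts": time.time()}) + "\n")

    add_to_waitlist("someone@example.com")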

also, the irony of a database gui company writing a post about how you dont actually need a database is pretty based.


The irony isn’t lost on us, trust me. We spent a while debating whether to even publish this one.

But yeah, the page cache point is real and massively underappreciated. Modern infrastructure discourse skips past it almost entirely. A warm NVMe-backed file with the OS doing the caching is genuinely fast enough for most early-stage products.


props for actually publishing it tbh. transparent engineering takes are so rare now, usually it's just seo fluff.

we've basically been brainwashed to think we need kubernetes and 3 different databases just to serve a few thousand users. gotta burn those startup cloud credits somehow i guess.

mad respect for the honesty though, actually makes me want to check out db pro when i finally outgrow my flat files.


I feel like I could write another post: do you even need serverless/cloud? We've also been brainwashed into thinking we need to spend hundreds or thousands a month on AWS when a tiny VPS will do.

Similar sentiment.


Serverless is cheap as hell at low volumes. Your tiny VPS can't scale to zero. If you're doing sustained traffic your tiny VPS might win, though. The real value in Cloud is turning capex spend into opex spend. You don't have to wait weeks or months to requisition equipment.

i'd 100% read that post. the jump from 'free tier serverless' to 'why is my aws bill $400 this month for a hobby project' is a rite of passage at this point. a $5 hetzner or digitalocean box with dokku/docker-compose is basically a superpower that most newer devs just bypass entirely now.

You are both right, with the exception that it requires knowledge and taste to accomplish, both of which are in short supply in the industry.

Why set up a Go binary and a JSON file? Just use Google Forms and move on, or pay someone for a dead-simple form system so you can capture and communicate with customers.

People want to do the things that make them feel good - writing code to fit in just the right size, spending money to make themselves look cool, getting "the right setup for the future so we can scale to all the users in the world!" - most people don't consider the business case.

What they "need" is an interesting one because it requires a forecast of what the actual work to be done in the future is, and usually the head of any department pretends they do that when in reality they mostly manage a shared delusion about how great everything is going to go until reality hits.

I have worked for companies getting billions of hits a month and ones that I had to get the founder to admit there's maybe 10k users on earth for the product, and neither of them was good at planning based on "what they need".


> we've basically been brainwashed to think we need kubernetes and 3 different databases just to serve a few thousand users. gotta burn those startup cloud credits somehow i guess.

I don't think it makes any sense to presume everyone around you is brainwashed and you are the only soul cursed with reasoning powers. Might it be possible that "we" are actually able to analyse tradeoffs and understand the value of, say, having complete control over deployments with out-of-the-box support for things like deployment history, observability, rollback control, and infrastructure as code?

Or is it brainwashing?

Let's put your claim to the test. If you believe only brainwashed people could see value in things like SQLite or Kubernetes, what do you believe are reasonable choices for production environments?


i think you missed the "on day 1" part of my comment. k8s, iac, and observability are incredible tools when you actually have the scale and team to justify them.

my point is strictly about premature optimization. i've seen teams spend their first month writing helm charts and terraform before they even have a single paying user. if you have product-market fit and need zero-downtime rollbacks, absolutely use k8s. but if you're just validating an mvp, a vps and docker-compose (or sqlite) is usually enough to get off the ground.

it's all about trade-offs tbh.


> i think you missed the "on day 1" part of my comment. k8s, iac, and observability are incredible tools when you actually have the scale and team to justify them.

No, not really. It's counterproductive and silly to go out of your way to set up your whole IaC in a tool you know doesn't fit your needs just because you have an irrational dislike for a tool that does. You need to be aware that nowadays Kubernetes is the interface, not the platform. You can easily use things like minikube, k3s, microk8s, etc., or even have sandbox environments on local servers or cloud providers. It doesn't matter if you target a box under your desk or AWS.

It's up to you to decide whether you want to waste your time making your life harder. Those who you are accusing of being brainwashed seem to prefer getting stuff done without fundamentalism.


Definitely appreciate the post and the discussion that has come from it... While I'm still inclined to just reach for SQLite as a default starting point, the flat-file approach is often worth considering depending on your needs.

In practice, I almost always separate the auth chain from the service chain(s), so that if auth gets kicked over under a DDoS, at least already-authenticated users stand a chance of still being able to use the apps. I've also designed auth systems essentially abstracted to key/value storage, with adapters for differing databases (including SQLite) for deployments...
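
The abstraction is roughly this (a hypothetical sketch of the shape, not my production code):

    # the auth layer only ever sees get/set; adapters supply the
    # backing store, so swapping SQLite for Redis or LevelDB is
    # just another adapter
    import sqlite3

    class SQLiteKV:
        def __init__(self, path: str = "auth.db"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

        def get(self, k: str):
            row = self.db.execute(
                "SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
            return row[0] if row else None

        def set(self, k: str, v: str) -> None:
            self.db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))
            self.db.commit()

    store = SQLiteKV()
    store.set("session:abc", '{"user": "u1"}')
    print(store.get("session:abc"))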

Would be interested to see how LevelDB might perform for your testing case, in that it seems to be a decent option for how your example is using data.


Except that eventually you'll find you lose a write when things go down, because the page cache is write-behind. So you start issuing fsync calls. Then one day you'll find yourself with a WAL and a buffer pool, wondering why you didn't just start with sqlite instead.
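
Concretely, the fix you end up writing by hand looks something like this (a sketch):

    # without the fsync, an acknowledged write can vanish on power
    # loss, because the page cache is write-behind
    import os

    def durable_append(path: str, line: str) -> None:
        fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
        try:
            os.write(fd, line.encode() + b"\n")
            os.fsync(fd)   # force the page cache out to the device
        finally:
            os.close(fd)

    durable_append("waitlist.jsonl", '{"email": "a@example.com"}')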

The second paragraph sounds eerily AI-generated.

> people wildly underestimate the os page cache and modern nvme drives

And worse, overestimate how safe their data is!

All this fancy talk about not using an RDBMS could only be true if the APIs and actual implementations across the ENTIRE IO path were robust and RELIABLE.

But they are not!

EVERY LAYER LIES

ALL of them

ALL OF THE TIME

That is why one of the biggest burdens of building a real database (whatever the flavor) is that there is no way to avoid paying performance taxes all over the place: you can't trust the IO, and having a single file (or a few files) getting hammered over and over makes this painfully obvious.

One of the most sobering experiences is that you write your IO with all the care in the world, let your brand-new DB run for hours on good, even great, hardware, and in less than a week you will find that it breaks in funny ways.

P.S.: I was part of a team building a DB


> seeing startups spin up managed postgres + redis clusters + prisma on day 1 just to collect waitlist emails is peak feature vomit.

I'm pretty sure most startups just use a quick and easy CRM that makes this process easy, and that tool will certainly use a database.


this hits home. ai makes it way too easy to ship 'more' without asking if we actually should. we've been in feature-vomit mode for years now, shipping megabytes of js for the most basic crud tasks. simplicity is becoming an act of engineering discipline rather than just a design choice tbh. less is actually more

still haven't found anything that replaces mc for me. the 2-pane layout is basically muscle memory at this point. everything modern just feels way too bloated or slow. mc is great but customizing the bindings is a total headache tbh. really like the idea of better vim integration here. curious how it handles performance on large directories with 10k+ files. giving it a spin...


I recently switched to almost exclusively using vifm with some zoxide/fzf extra commands + some custom previewers/openers.

vifm is great if you live in vim, but mc is just hardcoded into my brain at this point. totally agree on zoxide + fzf though, can't imagine navigating a large fs without them now. they're the only 'modern' additions to my workflow that actually felt worth the setup time.

I'm using fman, which is like a lightweight graphical alternative to mc.


seeing that Panda-70M research paper linked above makes this even crazier. the 'good faith' part of the DMCA is basically never enforced in reality. platforms have used the 'shoot first, ask questions later' approach for so long that individual creators are just collateral damage. it's about time someone actually challenged this in court. the power imbalance here is just wild tbh.

