
I love the onesec extension, and I've often thought society would be better off if this were the way Apple and Google implemented their app timer functionality on iOS and Android. If you could just mark certain apps as addictive and get a brief few-second prompt before each of them opens, it would stop or soften a lot of the addictive loops, I think. I use the onesec app on Android solely to do this to YouTube, but the fact that it isn't native introduces some weird bugs, especially when opening YouTube links from other apps (which I live with anyway, but alas).


I wrote up my own version of this idea in "The smartphone app audit."

https://amontalenti.com/2024/03/26/the-smartphone-app-audit

If the idea of auditing all your apps seems daunting, take a look at how I did it in bulk: I took screenshots of my app launcher screens, then used OCR and LLMs to help me make an initial pass at categorizing them. That let me do one quick bulk cleanup.
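
A minimal sketch of that kind of bulk pass (assuming the tesseract OCR CLI is installed and you have some shell-accessible LLM tool; the llm CLI below is just one possibility, not necessarily the exact workflow from the post):

    # OCR each launcher screenshot into a text file of recognized app names
    for img in screenshots/*.png; do
        tesseract "$img" "${img%.png}"   # writes screenshots/<name>.txt
    done

    # Hand the recognized names to an LLM for a first-pass categorization
    cat screenshots/*.txt | llm "Group these app names into categories: tools, communication, addictive, unknown"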

I found that it's better to simply delete apps and keep the total app count on your phone low, rather than use the various parental control / digital minimalism / Freedom.to style app blocking ideas.

Removing browsers from my phone never seemed like an option for me, but even so, removing all the addictive apps really reduces doomscrolling and other mindless scrolling a good bit.

Lately, I also put any newly installed apps in a "Purgatory" app launcher group and if I notice any of them having addictive qualities, I uninstall them. I did this recently with the Bluesky and Discord apps, for example.


Here is what I think is going on in this announcement. Take the four major commodity cloud companies (Google, Microsoft, Amazon, Oracle) and ask two questions: do they have big data centers, and do they have their own AI product organization?

- Google has a massive data center division (Google Cloud / GCP) and a massive AI product division (DeepMind / Gemini).

- Microsoft has a massive data center division (Azure) but no significant AI product division; for the most part, they build their "Copilot" functionality atop their partner version of the OpenAI APIs.

- Amazon has a massive data center division (Amazon Web Services / AWS) but no significant AI product division; for the most part, they are hedging their bets here with an investment in Anthropic and support for running models inside AWS (e.g. Bedrock).

- Oracle has a massive data center division (Oracle Cloud / OCI) but no significant AI product division.

Now look at OpenAI by comparison. OpenAI has no data center division; the whole company is basically an AI product division and related R&D. At the moment, their data center capacity comes exclusively from their partnership with Microsoft.

This announcement is OpenAI succeeding in a multi-party negotiation with Microsoft, Oracle, and the new administration of the US Gov't. Oracle will build the new data centers, which it knows how to do. OpenAI will use the compute in these new data centers, which it knows how to do. Microsoft granted OpenAI an exception to their exclusive cloud compute licensing arrangement, due to this special circumstance. Masayoshi Son of SoftBank helps raise the money for the joint venture, which he knows how to do. US Gov't puts its seal on it to make it a more valuable joint venture and to clear regulatory roadblocks for big parallel data center build-outs. The current administration gets to take credit as "doing something in the AI space," while also framing it in national industrial policy terms ("data centers built in the USA").

The clear winner in all of this is OpenAI, which has politically and economically navigated its way to a multi-cloud arrangement, while still outsourcing physical data center management to Microsoft and Oracle. Probably their deal with Oracle will end up looking like their deal with Microsoft, where the trade is compute capacity for API credits that Oracle can use in its higher level database products.

OpenAI probably only needs two well-capitalized hardware providers competing for their CPU+GPU business in order to have a "good enough" commodity market to carry them to the next level of scaling, and now they have it.

Google increasingly has a strategic reason not to sell OpenAI any of its cloud compute, and Amazon could be headed in that direction too. So this was more strategically (and existentially) important to OpenAI than one might have imagined.


I'm a generation older. To me, there were three big shifts.

One was that Facebook/Twitter/etc. proved that web publishing could be made more convenient by making it more centralized, and that access to an audience was, in some way, more important than access to publishing tools. No matter how good open web publishing tools got, they couldn't compete with Facebook et al. at providing some access to an audience, even if that audience was as small as your friends and family.

The second was a shift in who developed "internet infrastructure." In the 80s and 90s (and before), it was mainly academics working in the public interest, and hobbyist hackers. (Think Tim Berners-Lee, Vint Cerf, IETF for web/internet standards, or Dave Winer with RSS.) In the 00s onward, it was well-funded corporations and the engineers who worked for them. (Think Google.) So from the IETF, you have the email protocol standards, with the assumption everyone will run their own servers. But from Google, you get Gmail.

The third -- and perhaps most important shift -- was the move from desktop software to web + mobile software as the primary computing platform for most people. Such that even if you were a desktop user, you did most of your computing in the browser. This created a whole new mechanism for user comfort with proprietary fully-hosted software, e.g. Google Docs. This also sidelined many of the efforts to keep user-facing software open source. Such that even among the users who would be most receptive to a push for open protocols and open source software, you have strange compromises like GitHub: a platform that is built atop an open source piece of desktop software (git) and an open source storage format meant to be decentralized (git repo), but which is nonetheless 100% proprietary and centralized (e.g. GitHub.com repo hosting and GitHub Issues).

You ask how to "dismantle" this. I've long pondered the same question. I am not sure it can be dismantled. It doesn't seem like these shifts can be undone. Where I've personally ended up is that small communities of enthusiast programmers and power users can embrace open source, open protocols, and decentralization for its obvious benefits, but that it won't ever be a mass market again.


I think you're right, and I don't think it's just about public content being "exploited" to train AI models and the like. Rather, even before LLMs, there was a growing sense that publishing ideas or essays publicly is "risky" with very little reward for the very real risks.

I wrote about this a little in "The Blog Chill":

https://amontalenti.com/2023/12/28/the-blog-chill

Speaking personally, among my social circle of "normie" college-educated millennials working in fields like finance, sales, hospitality, retail, IT, medicine, civil engineering, and law -- I am one of the few who runs a semi-active personal site. Thinking about it for a moment, out of a group of 50-or-so people like this, spread across several US states, I might be the only one who has a public essay archive or blog. Yet among this same group you'll find Instagram posters, TikTok'ers, and prolific DM authors in more private spaces like WhatsApp and Signal groups. A handful of them have admitted to being lurkers on Reddit or Twitter/X, but not one is a poster.

It isn't just due to a lack of technical ability, although that's a (minor) contributing factor. If that were all, they'd all be publishing to Substack, but they're not. It's that engaging with "the public" via writing is seen as an exhausting proposition at odds with everyday middle class life.

Why? My guesses: a) smartphones aren't designed for writing and editing, hardware-wise; b) long-form writing/editing is hard and most people aren't built for it; c) the dynamics of modern internet aggregation and agglomeration make it hard to find independent sites/publishers anyway; and d) the risk of your developed view on anything being "out there" (whether professional risk or friendship risk) seems higher than any sort of potential reward.

On the bright side, for people who fancy themselves public intellectuals or public writers, hosting their own censorship-resistant publishing infrastructure has never been easier or cheaper. And amateur writers like me can take advantage of the same.

But I think everyday internet users are falling into a lull of treating the modern internet as little more than a source of short-form video entertainment, streams for music/podcasts, and a personal assistant for the sundries of daily life. Aside from placating boredom, they just use their smartphones to make appointment reminders, send texts to a partner/spouse, place e-commerce orders, and check off family todo lists. I expect LLMs will make this worse, as a younger generation may come to view long-form writing not as a form of expression but as a chore to automate away.


Reminds me of:

"Why should I change my name? He's the one who sucks."

-Michael Bolton in 'Office Space'

Also funny since the main character in Office Space is named "Peter." I now wonder if this character name is, itself, a reference to the Peter Principle. I wouldn't be surprised, given that it was written by Mike Judge (who went on to create HBO/Max's 'Silicon Valley' years later).


Such a great project. Cool to know that one of my favorite authors, Sinclair Lewis, now has all his popular novels starting to enter the public domain. I had only just recently started reading "Dodsworth," so I'll now switch to the Standard Ebooks version!


Obligatory meme:

https://rakhim.org/honestly-undefined/19/

I'm personally in the top left corner and bottom right corner at the same time, which is sort of funny.

I have used WordPress since 2004-2005, and I've also written a Python static site generator before, using Flask + Frozen-Flask[1]. I've also made stops through tools like Sphinx, Hugo, Gatsby, and VitePress[2]. But my personal site continues to run WordPress[3].

I think I'd prefer something like VitePress these days for a technical documentation site. It has a lot going for it for that use case. And it feels built to last.

On true wikis that one can self-host, I recently learned that MediaWiki with a reasonable theme like Citizen[4] is a nice choice for an open source powered private wiki, although I do find the MediaWiki markup language a little cumbersome versus simpler markup languages like reST or Markdown/MyST in the Python community (or GitHub-Flavored Markdown or AsciiDoc supported elsewhere). But MediaWiki has a lot of nice features -- after all, MediaWiki powers Wikipedia. The theme makes it work properly on mobile, adds a little more structure for automatic ToC, and makes content editing a bit simpler.

It still isn't nearly as polished as commercial wiki-like software (e.g. Notion) but it's better than open source wikis used to be.

On the subject of the blog post, I think bit-rot or info-rot is the natural order of things. The kind of software you run isn't going to change that. And if you're curating knowledge about technical computing subjects (other than durable topics like, say, C and Linux system calls), you should expect exponential decay.

I do find it kind of amusing how many tools and frameworks developers have created for making it easier to edit HTML pages, though. Truly a foundational 21st century problem that deserves a technical solution that can last for decades without itself bit-rotting.

[1]: https://frozen-flask.readthedocs.io/

[2]: https://vitepress.dev/

[3]: https://amontalenti.com/archive

[4]: https://www.mediawiki.org/wiki/Skin:Citizen


Using a smartphone isn't "computing" in the same sense as programmers using a laptop for programming. Neither is it in the same sense as writers using a laptop for writing.

Consider: Some devices are consume-only (e.g. Kindle, original iPod); some devices are consume-heavy, create-light (modern iPhone or Android smartphones); some devices are consume-light, create-heavy (modern programmer laptops); some devices are create-only (e.g. typewriters, or modern alternative experiments like Freewrite).

If you only use the consume-only or consume-heavy devices, you're not "computing" in any creative sense. In the same way, you usually aren't writing in any creative sense if you only carry books (or a Kindle) in your backpack and read all day.

Most people spend weeks or months per year "consuming content with a computing device" while not creating a single thing with that computing device. Some of that is communication, like text and group chat messages. But quite a lot of it is short-form content that is passively delivered to eyeballs and ears.

Either way, this isn't creative computing. It may not even be deep content consumption. The typical and popular content doesn't resemble books. There is a widespread decline in deep reading as short-form video rises and app notifications flit users from app to app. You have to be very intentional to wrangle a smartphone media diet that even leaves space for deep reading. To escape the trap, most people find they need to put the smartphone away and pick up a device with different affordances (a Kindle to deep read, or a laptop to code or write, for example).

Even if you manage to escape attention farming, due to the design of smartphones as omnipresent devices (on-screen keyboards, touch screens, cellular data plans), creativity will still usually be out of the question for the vast majority of users. For that, you usually need to simply put the smartphone away. I personally find that to do creative computing I need a device with a physical keyboard, a mouse-equivalent, and, optionally, a speedy wired/wifi connection and a large monitor on a comfortable desk in a quiet workspace.

See also "Putting Your Media on a Diet":

https://amontalenti.com/2024/01/31/media-diet

And also "The smartphone app audit":

https://amontalenti.com/2024/03/26/the-smartphone-app-audit


I knew you could do that, but never knew you could do it with "|&" as a shorthand. TIL!

I found this comment in a bash documentation example to verify:

    # To send stderr through a pipe, "|&" was added to Bash 4 as an abbreviation for "2>&1 |".
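
A quick way to check the equivalence yourself (a small sketch, assuming bash 4 or later):

    # Both lines pipe stderr along with stdout into grep and print the same match;
    # "|&" is just shorthand for "2>&1 |".
    ls /nonexistent 2>&1 | grep -i "no such"
    ls /nonexistent |& grep -i "no such"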


Fair warning - this is a bash-ism, so if you're writing a script that needs to run on multiple shells and/or a more primitive shell, you should avoid it.

