Apart from the UI, which has been crap since their last major update. There are menu options everywhere: two ribbons on the top, a hamburger menu on the right and another on the left. For a long time you opened Thunderbird and it didn't default to the last message you received but to somewhere in the middle of the heap.
> Apart from the UI which is crap since their last major update
But when they updated the UI, they
- Added options to make it very close to the old layout
- Set those options for you if you had it customized that way in the previous version
Which is IMHO much better than how Mozilla handled the redesign: you can get the old style from a GitHub repo thanklessly maintained by one person[0], enable userChrome support in about:config (until they decide to take it away one day!), and enable compact mode (also gated behind about:config and called "Compact (not supported)"). Oh, and remember to update the userChrome every few updates because they keep breaking it.
That's the difference between user-centric and not user-centric.
It's hard to be in charge of a project like this. You're criticized no matter what you do.
The old UI was criticized by some for being outdated: a mix of old and new styles that didn't fit well with new OS/app styles, etc. It was crap. So they updated the UI and it's still crap... for other users. Damned if you do and damned if you don't.
Both outlook.office.com and mail.google.com use much more memory and CPU than any "fat" client, and are constantly changing little things about the UI. Safari now often closes Outlook automatically on an M5 Mac because it's using significant amounts of energy.
And I use a fat client because I like having all of my email addresses aggregate to one place, and I like it when that software gives me a modern look and feel ¯\_(ツ)_/¯
All your mail in one place does not require a 'fat' client; something like Claws Mail [1] (not in any way related to the recent LLM claws craze) can handle it without problems. Modern looks, well... it looks the way it looked about 25 years ago, give or take a few iterations of GTK. Compact, efficient, to the point. If that's not your thing and you'd rather have large amounts of empty space and unrecognisable buttons, it can be skinned to look 'modern'. In my startup sequence I launch 4 communications tools on one screen: gajim, telegram-desktop, signal-desktop and claws-mail, in that order. Even though Claws gets launched last, it appears first on screen because it is lightweight while the other three are anything but: Telegram is a native Qt application, gajim is Python (nothing more needs to be said) and signal-desktop is Electron (even less needs to be said).
As a long-term Thunderbird user I find this annoying. I appreciate that it's being maintained more actively again, but I really liked the fact that the UI stayed stable for years. Changing things to make them "more modern" is just annoying. No one asked for this.
When you delete a message, the old message remains selected afterwards, so if you hit delete again, thinking the last press didn't do the trick, you've deleted another message and now have to go to Deleted Messages and try to find what you deleted.
The app keeps a phantom message selected even in empty folders: an unread bubble and nothing else, an empty message. You can't even delete it. Sometimes it persists between app restarts.
It shows unread count on a folder just because it feels like it.
That hasn't been my experience. I am able to search for information faster; speed to the correct answer has increased on average, I feel. Once we start providing examples, your argument starts to fall apart fast. Think about the average Google search: do you really think it gets it wrong? Your search queries are probably more obscure than those of mainstream web users.
They ALSO know that and are taking a stand about this particular use of figurative language, since anthropomorphizing LLMs is something we're already seeing used for accountability washing. If we, the public, don't let the language shift toward treating these LLMs as actual people, then we can do a better job of keeping our intuitions right about who is responsible when these products do wacky/destructive/abusive/evil things, instead of falling into the trap of "<personified name of LLM product> did/said it".
There have been discussions about this chip here in the past. Maybe not that particular one, but previous versions of it. The whole server, if I remember correctly, eats some 20 kW of power.
A first-gen Oxide Computer rack puts out max 15 kW of power, and they manage to do that with air cooling. The liquid-cooled AI racks being used today for training and inference workloads almost certainly have far higher power output than that.
(Bringing liquid cooling to the racks likely has to be one of the biggest challenges with this whole new HPC/AI datacenter infrastructure, so the fact that an aircooled rack can just sit in mostly any ordinary facility is a non-trivial advantage.)
> Bringing liquid cooling to the racks likely has to be one of the biggest challenges with this whole new HPC/AI
Are you sure about that? HPC has had full rack liquid cooling for a long time now.
The primary challenge with the current generation is the unusual increase in power density per rack. This necessitates capacity upgrades; notably, getting 10-20 kW of heat away from a few Us is generally tough, but doing so increases density.
A watt is a measure of power, that is, a rate: joules per second, [energy/time].
> The watt (symbol: W) is the unit of power or radiant flux in the International System of Units (SI), equal to 1 joule per second or 1 kg⋅m²⋅s⁻³.[1][2][3] It is used to quantify the rate of energy transfer.
You would hope that an EV reporting x kWh/hour considers the charge curve when charging for an hour; then it would make sense to report that instead of the peak kW rating. But in reality they just report the peak kW rating as the "kWh/hour" :-(
I asked because that's the average power consumption of an average household in the US per day. So, if that figure is per hour, that's equivalent to one household worth of power consumption per hour...which is a lot.
Others clarified the kW versus kWh, but to re-visit the comparison to a household:
One household uses about 30 kWh per day.
20 kW * 24 = 480 kWh per day for the server.
So you're looking at one server (if parent's 20kW number is accurate - I see other sources saying even 25kW) consuming 16 households worth of energy.
For comparison, a hair dryer draws around 1.5 kW of power, which is just below the rating of most US home electrical circuits. So this is something like 13 hair dryers going at full blast.
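The arithmetic in this thread can be sketched in a few lines (all inputs are the thread's rough figures, not measurements):

```python
# Rough arithmetic using the thread's figures (all approximate).
server_power_kw = 20.0        # claimed server draw (other sources say up to 25 kW)
household_kwh_per_day = 30.0  # typical US household electricity use per day
hair_dryer_kw = 1.5           # typical hair dryer draw

# Energy = power * time: kWh the server consumes in one day
server_kwh_per_day = server_power_kw * 24

# How many households' worth of daily electricity that is
households = server_kwh_per_day / household_kwh_per_day

# Instantaneous comparison: how many hair dryers draw the same power
hair_dryers = server_power_kw / hair_dryer_kw

print(server_kwh_per_day, households, round(hair_dryers, 1))  # 480.0 16.0 13.3
```

Note the two comparisons are of different kinds: the household figure compares energy per day (kWh), while the hair-dryer figure compares instantaneous power (kW).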
At least with GPT-5.3-Codex-Spark, I gather most of the AI inference isn't rendering cat videos but mostly useful work.. so I don't feel tooo bad about 16 households worth of energy.
To be fair, this is 16 households of electrical energy. The average household uses about as much energy in the form of natural gas (or butane or fuel oil, depending on what they use) as it uses in electricity, and then roughly as much again in gasoline. So really more like 5 households of total energy. And that's just direct energy use, not accounting for all the products, including food, consumed in the average household.
Consumption of a house per day is measured in kilowatt-hours (an amount of energy, like litres of water), not kilowatts (a rate of energy flow, like one litre per second of water).
I think you are confusing kW (kilowatts) with kWh (kilowatt-hours).
A kW is a unit of power while a kWh is a unit of energy. Power is a measure of energy transferred per unit of time, which is why you rate an electronic device's energy usage using power; it consumes energy over time.
In terms of paying for electricity, you care about the total energy consumed, which is why your electric bill is denominated in kWh: one kWh is the amount of energy used if you draw one kilowatt of power for one hour.
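As a toy illustration of why the bill uses kWh (the tariff below is made up, not a real rate):

```python
# Energy (kWh) = power (kW) * time (h); the bill is energy * price per kWh.
power_kw = 1.0           # draw one kilowatt...
hours = 1.0              # ...for one hour
rate_usd_per_kwh = 0.15  # hypothetical tariff

energy_kwh = power_kw * hours        # 1.0 kWh
cost_usd = energy_kwh * rate_usd_per_kwh

print(energy_kwh, cost_usd)  # 1.0 0.15
```

The meter integrates power over time, so the same 1 kWh could equally be 2 kW for half an hour.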
He allegedly spent $70M to market that dreadful documentary about Melania Trump. Surely he could afford to spend that much every year to keep a historic paper afloat.
That movie will be quite a case study in media bias. Depending on who is reporting in my social media feed, it was either the most successful movie of all time (every single showing at capacity, the run being extended, and Gen Z girls as the main demographic for a movie certain to clean up at the awards) or a flop that lost money.
It can be both! You can fill up the seats with people that you pay to watch it!
You can also look it up on Rotten Tomatoes, where it currently has a 99/100 audience score, and then look it up on IMDb, where it has 1.3/10. I personally believe neither of the two is completely legitimate, but I think it's pretty obvious which of the two is more astroturfed.
Instead of both-sides-ing this, you can look at objective data. Here's BoxOfficeMojo: https://www.boxofficemojo.com/release/rl4287397889/. Right now it says $8.1M in the US, $75k worldwide. Not bad for a movie that cost $40M to make and about as much to market, huh?
One rationalisation I've heard is that it made more money than expected for a documentary. If we take that at face value, it's worth asking why Bezos felt the need to pay Melania tens of millions more than the budget for the typical documentary.
Your case study in media bias writes itself. All it took was a google search.
Eh, he also spent $35M on marketing; most other documentaries get a couple of million at most. So, sure, it's breaking records by tweaking the inputs on a previously unseen scale.
Amazon hasn't done any theater releases like this before; this is the first year they are putting multiple streaming movies into theaters as an experiment. They usually spend marketing money just on the platform. This was likely counted a big success because it did double the box office everyone was projecting. Even bad PR is good for business sometimes.
There’s an abundance of public data on people’s interests (their comments and reactions to posts), which we evaluate with our in-house AI agent to build high-intent contact lists.
That's fishy and depending on the jurisdiction it could also be illegal. I wouldn't want to receive a personalized e-mail from someone who scraped my public comments on some platform. It would seem too fucking intrusive.
How do we know that ad revenue will be huge? 80% of the questions that I ask can't be monetized because they're not about purchase intent. And even if they could be, has OpenAI built an auction system to bid on keywords? How exactly will all this work and be streamlined in the next 18 months to the point that it could generate the revenue they need to keep up with the ridiculous infrastructure investment requirements?
The thing I keep coming back to is that an LLM-backed query is so, so much more expensive than a typical web request. What kind of advertising is going to deliver the value necessary to cover those costs, plus margin? Chatbots aren't YouTube; users aren't going to sit through 30-second ads, I don't think.
The difference for me is that 20+ years ago, if you went to an online community, there was no filter to protect you. You'd have to choose wisely what to say and avoid being confrontational, because there was the very real chance of someone more knowledgeable than you putting you in your place. With social media that exposure diminished, because everyone can build a bubble of like-minded people. I like being corrected; while it hurts on a personal level, it also helps you evolve as a human being because you become more knowledgeable. An Internet where everyone agrees with you is boring. And because everything is a bubble, we went full circle and now there are culture wars everywhere. You can't have a decent conversation where people just present their arguments without throwing punches and accusations all around. I feel like Hacker News is the last bastion of civilized conversation at scale.
I love tailwind. AI chatbots are useless. Old internet was bad. Asians are proven to have the highest IQ. There's no logical reason for humans to exist. DOOM was a bad game.