Hacker News: 1980phipsi's comments

How is this for an ordering:

Good human-written docs > AI-written docs > no docs > bad human-written docs


Any idea why swapping didn't pan out for Tesla? My understanding is they are doing that in China.


What do you mean? It did pan out for Tesla. Faking a single demo granted them 75% more ZEV emission-credit government subsidies [1]. That increased their profits by hundreds of millions of dollars.

All they had to do was go on stage and “swap” a battery without any clear video of the process and never “demonstrate” it ever again.

This is a company known for faking prominent demos like the FSD demo (where it crashed into a wall during filming), the solar roof demo (where they used regular roof tiles and claimed they were solar panels), the Optimus demo (where the robots were teleoperated), etc.

Assuming they even did a battery swap, for which the official demo presents no clear video evidence, preferring overhead views over a close-up of the process or a glass enclosure to see the inner workings, it was at best a one-off custom-made device at the time. The one battery-swap station they claim existed has zero stories of any actual battery swaps, instead only evidence of it operating as a regular Supercharger [2].

[1] https://thewaroncars.org/episode-88-tesla-is-a-fraud-with-ed...

[2] https://slate.com/technology/2022/05/elon-musk-tesla-twitter...


They didn’t fake the demo, but the legislature quickly rewrote the law because it was intended to give Toyota ZEV credits for hydrogen cars.

Tesla did briefly operate a swap station at the site of the Harris Ranch Supercharger until California changed the rules.

There are several reports from people who used it on teslamotorsclub.com, and I saw it with my own eyes.

Hilarious that your source is Ed Niedermeyer. Perhaps the only thing more impressive than Elon’s lies about the state of self-driving are Ed’s lies about how Tesla is going bankrupt Any Day Now.

It’s ok though, Ed’s stock manipulation antics enabled me to stuff my IRA with Tesla shares (since sold, when Elon went nuts) and make a nice little headway on my retirement savings.


Cool. If you are not lying, then it should be easy to present a clear video demonstrating the automatic battery-swapping machine in action, swapping out a battery in 90 seconds as claimed in the demo.


Battery swap was and remains really risky for anyone doing it. You're taking a $10k asset, and swapping it for another $10k asset of unknown provenance. Does anyone really want to be in a situation where they purchase a new Tesla with a brand new, max-range battery pack, then swap it once on a road trip and get one that's been used for 300k miles and is at 75% of original capacity?


The bigger risk is that you need a standard battery pack. Sure, you can put 3 in a truck or something, but you lose all the spaces where a standard battery size wouldn't fit but where you could still cram a cell in. Electric car design is about stuffing batteries wherever there is space - you need a lot of cells, but the individual cells are small.


> of unknown provenance

I don't understand the comment. Of course they know where the batteries come from. They know everything about the battery.


As long as you can always swap your battery again I don't see the problem

As long as the average battery health in the system is like 90% and the minimum is say 80% why would you care if you're getting a new battery every few days?

If anything it removes a big cause of depreciation from your car


That is fine if you are always swapping. However, if you normally charge at home that becomes a big deal - if your current battery wears out/fails because it has 500,000 miles on it, are you out a new one? Do you pay for the tow to get to a swap station? Again, if everyone always swapped this would be easy to amortize and not a problem, but the mixed use makes it hard.

Of course this isn't a new problem. I know people who own their own welding gas tank - but they always swap the tank out. The place they swap at somehow handles when the tank needs to be re-certified - and people don't ask questions.


Eh?

In some mysterious future where swapping EV batteries during a road trip is a normal activity, then the battery packs won't be living in a vacuum -- their status can be known. Whether it is known by reading the pack's own electronics, by status reports from connected vehicles and charging stations, by direct measurement, or by some combination of these things: The status is knowable. It doesn't have to be a big ball of mystery.

How much value the marketplace finds in this health status is a different question. And this question is one that we cannot yet know the answer to -- this is not a reality that we presently live in.

We can speculate about how that potential future may be shaped, but that kind of speculation is kind of meritless since that version of the future may never actually happen (and at the present, it sure does seem very unlikely to happen any time soon).


> Any idea why swapping didn't pan out for Tesla?

~~Two~~ Three things:

1. California changed the rules shortly after Tesla demonstrated their swap station, which practically eliminated the tax credit for battery swap (at the behest of lobbyists for Toyota, who were backing Hydrogen Fuel Cell technology). Specifically, the credit would be prorated by the percent of “fast refueling” sessions a car did, so EVs primarily charged at home received almost nothing while HFCV got the full credit. Building swap capability adds complexity to the car (think about all the fluid connections), which isn’t worth it without credits.

2. It was also around this time that a Model S ran over an anvil (or something) which punctured the pack and started a fire. In response, Tesla added an aluminum battery shield, which further complicates swapping and was probably the final nail in the coffin.

3. The logistics of storing your very expensive battery (so you could get it back later) basically make the system unworkable. When the Tesla swap station at Harris Ranch (you can still see the former building, next to Harris BBQ, which currently houses the restrooms) was operational, you had to make a reservation some hours in advance so that Tesla could have a pack ready and be ready to take your pack to/from storage.

3a. Gresham’s Law. Without eventually returning the pack to the original owner, there is an adverse selection problem: people with very weak packs will gladly roll the dice on a swap, but those with brand new packs are reluctant. So the average quality of packs in the swap network will quickly decline, creating a death spiral.

3b. You could probably fix 3a by leasing the battery (or selling battery-as-a-service) but car buyers mostly don’t like that, especially back in 2013.


In addition to all of that, it just doesn't outcompete recharging in the end.

Even if the swap station got to where it was hoped (no reservation needed, automated, drive in, swap, drive out), you'd have a choice between a ~5 minute stop at a swap station, or a ~10-20 minute stop at a charging station. The swap station is always going to be more expensive since it's inherently more complicated, so you're spending more to save a few minutes. And when you stop at the swap station you can go answer the call of nature and grab a snack while you wait. If you want to do either of those anyway, then visiting the swap station means you'll do the swap, then go do those other things, and probably not save any time at all.

Charging time just isn't that much of an issue at this point. I've been driving a Tesla for a decade now, with thousands of miles of road trips around the eastern US, and I've never found myself wishing for battery swap infrastructure. Newer cars charge much faster than mine does, too.


I could potentially see value in a car with a smaller built-in battery for use around town, and an empty space for a larger battery, that you rent from swap stations for longer road trips. Of course, that doesn't work with anything on the road today.


Probably because the economics just don't make sense here. You'd have to have so many compatible cars on the road, driving all day with no opportunity to charge. I'm having a hard time imagining a place I've been to in North America where that'd seem logical.

> they are doing that in China

Are they actually doing that at scale?


A little out of date now but:

> As of June 2024, Nio had installed 2,432 power swap stations in China, including 804 along highways, representing the largest battery swapping network in the country. Nio aims to expand to 4,000 stations globally by 2025. By February 2025, Nio had 3,106 battery swap stations in China, with 964 located along highways. In January 2025 alone, Nio added 111 swap stations and provided 2,949,969 battery swap services, averaging 95,160 daily.

https://enertherm-engineering.com/chinas-battery-swap-revolu...


From the country that brought you mass wastage of bicycles, now we get battery swapping.

This is pretty much just a "gamble by deploying as quickly as possible making our system the standard if it catches on" type of investment.


> Reduced Upfront Costs: Battery swapping allows drivers to purchase EVs without bearing the full cost of the battery, often the most expensive component.

I also wonder if it's a scheme to get people through the door and then leech off them with a lifetime subscription.


> Anyone familiar with basic economics is pulling their hair out reading this, because there's one extremely obvious way to lower the price of building new housing: Reducing or eliminating tariffs on construction equipment and materials and ensuring a robust supply of low-cost labor.

And just in general reducing the restrictions on building in places with high rent to income ratios.


Yeah, I don't mind paying for something, but they broke a bunch of stuff last year and it's still not fixed. That's what annoys me about Plex.


> Game publishers have already publicly floated the idea of not selling their games but charging per hour. Imagine how that would impact Call of Duty or GTA.

MMORPGs have had monthly subscription fees for a long time.

For a lot of games, charging by the hour would probably mean less revenue...people buy tons of games and then barely ever play them.


You assume there's only one type of player. Some players fall into a category I lovingly call Madden Guy. These people will play some other games, but they will play one game *a lot*. Call of Duty, Arc Raiders, Destiny, Battlefield, Fortnite - these are the types of games that attract that kind of player. If a game has 600 purchasable items, a seasonal battlepass-type thing, multiple female-rapper skins, and a publisher-financed pro scene, it's probably one of those games.

Those games 100% already have game modes you pay for by the hour. They will have special modes you access with currency, and you need to keep paying to keep playing. Those modes are usually special, with increased and unique drops.


I think it's reasonable to argue something like, "some IP protection is good, but too much is bad, and we probably have too much right now." It would be impossible to calibrate the laws so that the amount of IP protection is socially optimal, but we can look at the areas where the protection is too much and start there.


It's not impossible at all. We should do 5 year copyright - 99% of all commercial profit of all media is collected within 5 years of publishing.

Copyright is granted to media creators in order to incentivize creativity and contribution to culture. It's not granted so as to empower large collectives of lawyers and wealthy people to purchase the rights and endlessly nickel and dime the public for access to media.

Make it simple and clear. You get 5 years total copyright - no copying, no commercial activity or derivatives without express, explicit consent, require a contract. 5 years after publishing, you get another 5 years of limited copyright - think of it as expanded fair use. A maximum of 5% royalties from every commercial use, and unlimited non-commercial use. After 10 years, it goes into public domain.

You can assign or sell the rights to anyone, but the initial publication date is immutable, the clock doesn't reset. You can immediately push to public domain, or start the expanded fair use period early.

No exceptions, no grandfathering.

There's no legitimate reasons we should be allowing giant companies like Sony and HBO and Paramount to grift in perpetuity off of the creations and content of artists and writers. This is toxic to culture and concentrates wealth and power with people that absolutely should not control the things they do, and a significant portion of the wealth they accumulate goes into enriching lawyers whose only purpose in life is to enforce the ridiculous and asinine legal moat these companies and platforms and people have paid legislators to enshrine in law.

Make it clear and simple, and it accomplishes the protection of creators while enriching society. Nobody loses except the ones who corrupted the system in the first place.

We live in a digital era, we should not be pretending copyright ideas based on quill and parchment are still appropriate to the age.

And while we're at it, we should legally restrict distribution of revenues from platforms to a maximum of 30% - 70% at minimum goes to the author. The studio, agent, platform, or any other distribution agent all have to divvy up at most 30%.

No more eternal estates living off of the talent and creations of ancestors. No more sequestration of culturally significant works to enrich grifters.

This would apply to digital assets, games, code, anything that gets published. Patents should be similarly updated, with the same 5 and 10 year timers.

Sure, it's not 100% optimal, but it gets a majority of the profit to a majority of the creators close enough and it has a clear and significant benefit to society within a short enough term that the tradeoff is clearly worth it.

Empowering and enabling lawyers and rent seekers to grift off of other people's talent and content is a choice; we don't have to live like that.


I'm fairly certain that would not work at all for media such as sci-fi/fantasy books, where a system like this would result in people just forever reading older books which are free and effectively kill the market.

There is a limited amount of time to read in a day, and the amount of 10+ year old content that is still amazing is more than anyone could ever read, and it's hard to compete with free.

I think video games are actually kind of an anomaly when it comes to copyright, because they have been, on average, getting better and better than games released even in the recent past, mostly due to hardware getting better and better. Also, any multiplayer game has the community issue where older games tend to no longer have a playerbase to play with.

Same could be said about movies/tv shows that rely on CGI up until somewhat recently where the CGI has pretty much plateaued.


I think the sales of books is pretty much uncoupled from the supply or price, as piles and piles of great books are available for free online or at the local library.

More recorded shows exist than any one man can watch in a lifetime, and yet there are multiple concurrent series ongoing right now.

I think the real kicker is that IP law was built around things like books, that don't suddenly stop working or need to be maintained, etc. Modern laws should take software into account and deal with it differently.


> 99% of all commercial profit of all media is collected within 5 years of publishing.

If that were true of music, companies wouldn't be buying back catalogues for (upwards of) hundreds of millions[0][1].

[0] https://apnews.com/article/music-catalog-sales-pop-rock-kiss...

[1] https://uk.finance.yahoo.com/news/ranked-biggest-music-catal...


If it were modified to "99% of media has commercial profit collected within five years" it's probably pretty close to the mark, given how much is released and never reprinted/etc.

However, even 1% of a very large market is a huge tail, which is valuable.


Regardless, change the game. If you have a valuable, useful platform, and compete with other platforms for quality and delivery of service, then you're optimizing for the right things. If you have valuable media and the platform only serves to collect fees for the privilege of accessing the media, then you're optimizing the thing that is net negative for society, and ends up with adtech and degraded service and gotchanomics to try to nickel and dime you at every opportunity.

Imagine a world in which spotify and youtube and netflix had to compete on product and service quality, instead of network effects and legal technicalities. In which you could vibe code an alternative platform and have it be legally feasible to start your own streaming service merely by downloading a library of public domain content, then boot-strapping your service and paying new studios for license to run content, and so on.

The entire ecosystem would have to adapt, and it would be incredibly positive for creatives and authors and artists. There wouldn't be a constant dark cloud of legal consequences hanging over people's heads, with armies of lawyers whose only purpose in life is to wreck little people who dare "infringe" on content, and all the downstream nonsense that comes from it.

Make society better by optimizing the policies that result in fewer, less wealthy, and far less powerful lawyers.


>> A few billionaires might have additional vacation homes, but they are not going to consume a million homes, much less 10 million.

> Sumner is somehow unfamiliar with the concept of a landlord or vacant property investment.

I'm sure he is not unfamiliar with either...

Not sure what landlords have to do with anything since washing machines are often included as part of a rental (or the apartment doesn't have a washing machine, but what does that have to do with landlords?).

And vacant property investment is a small fraction of total property ownership in the US. It's more common that people have a vacation home and rent it out part of the year.

>> progressive consumption taxes

> When someone proposes one, let me know.

They have been proposed...many times. In fact, the US's system has elements of a progressive consumption tax already, since people can put retirement savings in IRAs/401ks. What would make it a more complete progressive consumption tax would be to raise the limits on contributions to these retirement accounts (and remove income limits), and also to introduce accounts like these that are meant as more universal savings vehicles. This is preferred (in my view at least) to just cutting dividend and capital gains rates to 0%, since that would benefit existing rich people.


> It is also important to note that, until recently, the GenAI industry’s focus has largely been on training workloads. In training workloads, CUDA is very important, but when it comes to inference, even reasoning inference, CUDA is not that important, so the chances of expanding the TPU footprint in inference are much higher than those in training (although TPUs do really well in training as well – Gemini 3 the prime example).

Does anyone have a sense of why CUDA is more important for training than inference?


NVIDIA chips are more versatile. During training, you might need to schedule things to the SFU (Special Function Unit that does sin, cos, 1/sqrt(x), etc.), you might need to run epilogues, save intermediary computations, save gradients, etc. When you train, you might need to collect data from various GPUs, so you need to support interconnects, remote SMEM writes, etc.

Once you have trained, you have frozen weights: feed-forward networks that you can just program in and run data over. These weights can be duplicated across any number of devices and just sit there running inference on new data.
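To make that concrete, here's a minimal numpy sketch of the point: once weights are frozen, inference is just streaming new inputs through fixed matmuls, so the whole thing can be copied verbatim to as many devices as you like (all layer sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen weights for a tiny 2-layer feed-forward net.
# After training these never change, so they can be replicated to any
# number of devices that just run new data through them.
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 4)), np.zeros(4)

def infer(x: np.ndarray) -> np.ndarray:
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                # linear output (logits)

out = infer(rng.standard_normal((3, 8)))  # 3 inputs in, 4 logits each out
print(out.shape)
```

No gradients, no optimizer state, no inter-device collectives: that's why inference hardware can be so much simpler than training hardware.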

If this turns out to be the future use case for NNs (it is today), then Google are better set.


All of those are things you can do with TPUs


Won't the need to train increase as the need for specialized, smaller models increases and we need to train their many variations? Also what about models that continuously learn/(re)train? Seems to me the need for training will only go up in the future.


That's the thing - nobody knows. LLM architecture is constantly evolving and people are trying all kinds of things.


This is a very important point - the market for training chips might be a bubble, but the market for inference is much, much larger. At some point we might have good enough models and the need for new frontier models will cool down. The big power-hungry datacenters we are seeing are mostly geared towards training, while inference-only systems are much simpler and power efficient.

A real shame, BTW, all that silicon doesn't do FP32 (very well). After training ceases to be that needed, we could use all that number crunching for climate models and weather prediction.


It's already the case that people are eking out most further gains by layering "reasoning" on top of what existing models can do - in other words, using massive amounts of inference to substitute for increased model performance. Wherever things plateau, I expect this will still be the case - so inference ultimately will always be the end-game market.


Some more traditional number crunching has long looked at lower- and mixed-precision hardware.


It's just more common as a legacy artifact from when nvidia was basically the only option available. Many shops are designing models and functions, and then training and iterating on nvidia hardware, but once you have a trained model it's largely fungible. See how Anthropic moved their models from nvidia hardware to Inferentia to XLA on Google TPUs.

Further, it's worth noting that Ironwood, Google's v7 TPU, supports only up to BF16 (a 16-bit floating point that has the range of FP32 minus the precision). Many training processes rely upon larger types, quantizing later, so this breaks a lot of assumptions. Yet Google surprised everyone and actually trained Gemini 3 with just that type, so I think a lot of people are reconsidering assumptions.
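For intuition on "range of FP32 minus the precision": BF16 is literally the top 16 bits of an FP32 value (same sign and 8 exponent bits, but only 7 mantissa bits). A plain-Python sketch of that rounding (the helper below is illustrative, not any library's API):

```python
import struct

def to_bf16(x: float) -> float:
    """Round an FP32 value to bfloat16 (round-to-nearest-even),
    returned as a float. BF16 keeps FP32's 8 exponent bits, so it
    covers the same ~1e38 range, but only 7 mantissa bits survive."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    # RNE: add half of the discarded field plus the sticky tie bit,
    # then drop the low 16 bits.
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", bits))[0]

print(to_bf16(3.0e38))   # huge values survive: same dynamic range as FP32
print(to_bf16(1.0001))   # but fine detail is lost: rounds to 1.0
```

That big dynamic range is why BF16 training can skip the loss-scaling tricks FP16 needs, which is part of what made the Gemini result less surprising in hindsight.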


This is not the case for LLMs. FP16/BF16 training precision is standard, with FP8 inference very common. But labs are moving to FP8 training and even FP4.


When training a neural network, you usually play around with the architecture and need as much flexibility as possible. You need to support a large set of operations.

Another factor is that training is always done with batches. Inference batching depends on the number of concurrent users. This means training tends to be compute bound where supporting the latest data types is critical, whereas inference speeds are often bottlenecked by memory which does not lend itself to product differentiation. If you put the same memory into your chip as your competitor, the difference is going to be way smaller.
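That compute-bound vs memory-bound split falls out of a one-line arithmetic-intensity estimate: for a single d x d weight matmul, FLOPs scale with batch size but the weight bytes streamed from memory do not (hypothetical sizes below):

```python
# Back-of-the-envelope arithmetic intensity for one d x d weight matmul.
# FLOPs = 2 * batch * d^2 (multiply-accumulates), weight traffic = 2 * d^2
# bytes (FP16), so FLOPs-per-byte grows linearly with batch size: large
# training batches are compute-bound, batch-1 inference is memory-bound.
def arithmetic_intensity(batch: int) -> float:
    d = 4096                    # hypothetical hidden dimension
    flops = 2 * batch * d * d   # multiply-accumulate count
    bytes_read = 2 * d * d      # FP16 weights streamed from HBM
    return flops / bytes_read

print(arithmetic_intensity(1))    # batch-1 inference -> 1.0 FLOP/byte
print(arithmetic_intensity(512))  # training batch    -> 512.0 FLOP/byte
```

Modern accelerators can do hundreds of FLOPs per byte of memory bandwidth, so at batch 1 the matrix units sit mostly idle waiting on HBM, exactly the regime where exotic compute datatypes stop mattering.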


Training is taking an enormous problem and trying to break it into lots of pieces and managing the data dependency between those pieces. It's solving 1 really hard problem. Inference is the opposite, it's lots of small independent problems. All of this "we have X many widgets connected to Y many high bandwidth optical telescopes" is all a training problem that they need to solve. Inference is "I have 20 tokens and I want to throw them at these 5,000,000 matrix multiplies, oh and I don't care about latency".


I can't think of any case where inference doesn't care about latency.


I can't think of any reason training isn't going to become real-time with a significant CPU budget.


I think it’s the same reason Windows is important to desktop computers. Software was written to depend on it. Same with most of the training software out there today being built around CUDA. Even a version difference of CUDA can break things.


CUDA is just a better dev experience. Lots of training is experiments where developer/researcher productivity matters. Googlers get to use what they're given, others get to choose.

Once you settle on a design, then doing ASICs to accelerate it might make sense. But I'm not sure the gap is so big; the article says some things that aren't really true of datacenter GPUs (Nvidia DC GPUs haven't wasted hardware on graphics-related stuff for years).


That quote left me with the same question. Something about decent amount of ram on one board perhaps? That’s advantageous for training but less so for inference?


Inference is often a static, bounded problem solvable by generic compilers. Training requires the mature ecosystem and numerical stability of CUDA to handle mixed-precision operations. Unless you rewrite the software from the ground up like Google did, for most companies it's cheaper and faster to buy NVIDIA hardware.


> static, bounded problem

What does it even mean in neural net context?

> numerical stability

also nice to expand a bit.


Aka vertical integration.


It’s much clearer when you write these problems in terms of matrix math. The minimum variance portfolio is very important in finance.


How would you write this with matrices? It seems like there are many ways you could generalize.


Let w be the vector of weights and S be the conformable matrix of covariances. The portfolio variance is given by w'Sw. So just minimize that with whatever constraints you want. If you just assume weights sum to one, it is a classic quadratic optimization with linear equality constraints. Well-known solutions.
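A minimal numpy sketch of that setup (the covariance numbers are made up): with only the budget constraint w'1 = 1, the Lagrangian gives the closed form w* = S⁻¹1 / (1'S⁻¹1).

```python
import numpy as np

# Hypothetical 3-asset covariance matrix (annualized variances on the
# diagonal, covariances off-diagonal).
S = np.array([
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.16],
])

# Minimize w'Sw subject to sum(w) == 1.
# First-order condition: S w = lambda * 1, so solve and renormalize.
ones = np.ones(S.shape[0])
w = np.linalg.solve(S, ones)
w /= ones @ w

print("weights:", w)
print("portfolio variance:", w @ S @ w)
```

Adding more constraints (no short sales, target return) turns it into a general quadratic program, at which point you'd reach for a QP solver instead of the closed form.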

