No, it doesn't really matter if they pay in cash or stock. If you think NVDA has room to run you're welcome to use your buyout money to buy NVDA on the open market.
I'm not sure in this specific case. They could choose to pay the employees some portion of the funds.
If not, the owners are likely liable to be sued for what is, in effect, a sale without paying equity holders, presuming the company becomes a de facto subsidiary of Nvidia (even if not legally so).
My guess, without researching it, is they will compensate existing equity holders to avoid that possibility. I mean the valuation multiple is enormous, it's worth it simply to derisk the legal aspect.
For vested RSUs it's likely that the Groq husk will pay out the $20B as a dividend or buyback or something. I don't know if unvested RSUs are accelerated or just canceled. Of course the employees will receive new RSUs when they join Nvidia.
Local tools/skills/function definitions can already invoke any API.
There's no real benefit to the MCP protocol over a regular API with a published "client" that a local LLM can invoke. The only downside is that you'd have to pull the client beforehand.
I am using local "skill" as reference to an executable function, not specifically Claude Skills.
If the LLM/Agent executes tools via code in a sandbox (which is what things are moving towards), all LLM tools can be simply defined as regular functions that have the flexibility to do anything.
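As a toy illustration of that point (the function names and schema shape here are my own invention, not any particular framework's), a plain function plus a bit of introspection is all it takes to expose a "tool" to a sandboxed agent:

```python
import inspect
import json

# Hypothetical sketch: any ordinary function can serve as an LLM tool.
# The description the model sees is derived from the signature and
# docstring -- no MCP server or transport layer required.

def get_weather(city: str, unit: str = "celsius") -> str:
    """Return a fake weather report for a city."""
    return json.dumps({"city": city, "unit": unit, "temp": 21})

def describe_tool(fn) -> dict:
    """Build a minimal tool description from a plain function."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            name: p.annotation.__name__
            for name, p in sig.parameters.items()
        },
    }

schema = describe_tool(get_weather)
print(schema["name"])        # get_weather
print(schema["parameters"])  # {'city': 'str', 'unit': 'str'}
```

The sandbox just imports the function and calls it like any other code, which is the flexibility the comment above is pointing at.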
I seriously doubt MCP will exist in any form a few years from now
Yes, which is why the companies that develop the models aren't cost viable. (Google and others who can subsidize it at a loss obviously are excepted)
Where is the return on the model development costs if anybody can host a roughly equivalent model for the same price and completely bypass the model development cost?
Your point is in line with the entire bear thesis on these companies.
For any use case that is analytical/backend oriented and doesn't scale 1:1 with the number of users (of which there are a lot), you can already run a close-to-cutting-edge model on a few thousand dollars of hardware. I already do this at home.
Open-source models are still a year or so behind the SotA models released in the last few months. The price-to-performance ratio is definitely in favor of open-source models, however.
DeepMind is actively using Google’s LLMs on groundbreaking research. Anthropic is focused on security for businesses.
For consumers it’s still a better deal for a subscription than to invest a few grand in a personal LLM machine. There will be a time in the future where diminishing returns shortens this gap significantly, but I’m sure top LLM researchers are planning for this and will do whatever they can to keep their firm alive beyond the cost of scaling.
I am not suggesting these companies can't pivot or monetize elsewhere, but the return on developing a marginally better model in-house does not really justify the cost at this stage.
But to your point, developing research, drugs, security audits or any kind of services are all monetization of the application of the model, not the monetization of the development of new models.
Put more simply, say you develop the best LLM in the world, that's 15% better than peers on release at the cost of $5B. What is that same model/asset worth 1 year later when it performs at 85% of the latest LLM?
Already, any 2023 (and perhaps even 2024) vintage model is dead in the water and close to zero value.
What is a best in class model built in 2025 going to be worth in 2026?
The asset is effectively 100% depreciated within a single year.
(Though I'm open to the idea that the results from past training runs can be reused for future models. This would certainly change the math)
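The depreciation argument above can be sketched with illustrative numbers (all of these figures are made-up assumptions from the hypothetical, not real data for any company):

```python
# Illustrative sketch of the model-depreciation argument.
# All numbers are made-up assumptions, not real figures.
dev_cost = 5e9            # cost to train the model
lead_at_release = 1.15    # 15% better than peers on release day
relative_after_1y = 0.85  # performs at 85% of the newest model a year later

# If customers only pay for the best available model, the old model's
# pricing power roughly tracks its performance relative to its launch lead.
value_retained = relative_after_1y / lead_at_release
print(f"share of launch-day edge retained after 1 year: {value_retained:.0%}")
# → share of launch-day edge retained after 1 year: 74%
```

The exact ratio is hand-wavy, but it makes the point: the $5B asset loses most of its competitive premium within a single generation of releases.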
For sure, all these companies are racing to have the strongest model, and as time goes on we quickly start reaching diminishing returns. DeepSeek came out at the beginning of this year, blew everyone's minds, and now look at how far the industry has progressed beyond it.
It doesn't even seem like these companies are in a battle of attrition to not be the first to go bankrupt. Watching this would be a lot more exciting if that was the case! I think if there was less competition between LLMs developers could slow down, maybe.
Looking at the prices of inference for open-source models, I would bet proprietary models are making a nice margin on API fees, but there is no way OpenAI will make its investors whole by earning a few dollars of revenue per million tokens. I am terrified of the world we will live in if OpenAI manages to repair its balance sheet. I think there's nowhere else investors want to put their money.
The other nightmare for these companies is that any competitor can use their state-of-the-art model to train another model, as some Chinese models are suspected of doing. I personally think it's only fair, since those companies trained on a ton of data in the first place and nobody agreed to it. But it shows that training frontier models has a really low return on investment.
I'm confused by all the takes implying decode is more important than prefill.
There are an enormous number of use cases where the prompt is large and the expected output is small.
E.g. providing data for the LLM to analyze, after which it gives a simple yes/no Boolean response. Or selecting a single enum value from a set.
This pattern seems far more valuable in practice than the common and lazy open-ended chat-style implementations (lazy from a product perspective).
Obviously decode will be important for code generation or search, but that's such a small set of possible applications, and you'll probably always do better being on the latest models in the cloud.
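A sketch of that prefill-heavy shape (the `call_llm` parameter is a stand-in for whatever completion API you use, and `Verdict` is my own name, not a real library type): thousands of input tokens go in, a single constrained token comes out.

```python
from enum import Enum

# Sketch of a prefill-heavy workload: a large document in,
# a one-word enum answer out. `call_llm` is a placeholder
# for any text-completion function.

class Verdict(Enum):
    YES = "yes"
    NO = "no"

def classify(document: str, question: str, call_llm) -> Verdict:
    """Feed a large document in, get a single enum value out."""
    prompt = (
        f"{document}\n\n"
        f"Question: {question}\n"
        "Answer with exactly one word, yes or no:"
    )
    raw = call_llm(prompt).strip().lower()
    return Verdict(raw)  # raises ValueError on anything off-enum

# Stubbed model: the prompt is document-sized, the output is one token.
fake_llm = lambda prompt: "yes"
print(classify("...long report text...", "Is revenue growing?", fake_llm))
# → Verdict.YES
```

Here nearly all the compute is prefill; decode is a single token, which is why prefill throughput dominates the cost of this kind of pipeline.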
The US nuked Japan in the 1940s and only a short few years later were close allies.
Similarly, Germany was the de facto enemy of the Allied-aligned world in the '40s and only a few short decades later was best friends with most of it.
You can identify countless similar atrocities in US history after which relations stabilized within a few years.
Most people are short-term oriented, and it's rare that these things produce lasting effects on perception.
Does anyone care or even remotely think about what Bush did as president 20 years ago? Some politically oriented and historically minded folks, yes; 99%, no.
They really have to be long lasting and persistent transgressions to produce generational distrust e.g. Japan invading China/Korea many times over the last few hundred years.
However, the new generation seems far less concerned about this too.
Given Trump's larger-than-life character/ego/presence, it's more likely that anything he does will be attributed to him rather than to the country as a whole, which makes his actions perhaps even less impactful than the same actions by a more neutral-presenting president.
> The US nuked Japan in the 1940s and only a short few years later were close allies.
I don't think that's a great example. A few years after the bombs, Douglas MacArthur was ruling Japan. They didn't have much of a choice. Japan was occupied for almost 7 years and had its constitution rewritten to make it essentially reliant on the US for security.
That goodwill is almost entirely due to the US' need to rebuild the Japanese economy rapidly to counter threats from China, North Korea and the Soviet Union. That happened through favourable trade deals and mass outsourcing of automotive and electronics jobs and diverting capital away from domestic military as per the US-written constitution of Japan.
Western and Japanese companies alike started moving production outside Japan once wages started eating into profit margins. Today, underemployment is common in Japan and the low birth rate is one of several symptoms of the economic stagnation that began in the early '90s. Populist governments won't be far behind.
I don't think Japan after WW2 is the right comparison here.
My understanding is that generally they expected a MUCH more severe set of penalties and occupation after losing - especially given the unconditional surrender, and instead got a stable and functional provisional government for the next 7 years.
---
Basically - Japanese sentiment is not a parallel to this. That time:
1. Japan started hostilities with a surprise attack
2. Lost, complete with unconditional surrender
3. Was then occupied by a government that was more stable, left much of the existing civilian infrastructure in place, and forgave many key figures (not the least of which was the emperor)
4. Then that government helped them roll out "new deal" style social reforms.
That is absolutely, utterly at odds with the current situation. There - the Allied powers were relatively graceful, culturally aware, and interested in a stable, functional government.
Here - We're insulting our friends, from a position where there's no moral high ground to stand on. Personally, I don't think they'll forget so quickly, and I think things will get much worse if the US continues down this path.
Can we rebuild those relationships? Sure, seems likely on a long enough timespan, but it takes a hell of a lot more effort to get it back than it does to throw it away.
Those two examples relied on heavy investment over many years by the world's sole superpower to bounce back. See the Marshall Plan and MacArthur's reconstruction of Japan. These things do not happen automatically, and unlike those examples, there doesn't seem to be a rich, well-run superpower waiting in the wings to lift anyone back up. I'm sure some relations will normalize post-47, but economic might and attractiveness to the world may not.
The US military physically occupied and restructured the governments of Japan and Germany. A better example might be Vietnam, which has become a US partner but it took a couple generations.
It took about 50 years before India started to trust that relations with America could be positive. Then it took a lot of trust and concessions on both sides to strengthen that over the next 20 years under Bush, Obama, Trump and Biden - all of whom went out of their way to court India as an ally and a counterweight to China.
Then Trump decided his shitty Nobel Prize mattered more and threw 20 years of hard work in the dump.
The consensus in India is that America is perfidious. You claim that countries have goldfish memories, but that doesn’t seem to be the case with Indo-American relations. It was immensely hard to build this relationship and easy to burn it.
But who knows? Maybe you have an insight into how Indian people think.
Going from an empire/reich to a democracy ain't quite the same thing as a presidential election. The USA will have a hard time recovering from institutional damage being done to it right now.
There's a regime change like "the second TV debate was a clown show, most swing states saw +5% move away from the incumbent, with the usual stronghold states still holding their established positions, and the latest poll listed inflation as voters' top concern."
...and then there's a regime change like "your capital city and every industrial center is firebombed to oblivion, kindergarteners are begging on streets, soldiers are coming back from POW camps to find their home burnt to ground and their whole family dead, and the occupying forces are executing high ranking officers of the previous regime for war crimes, just to drive the point home that their ideology will never be tolerated ever again."
No, uncertainty and stability matter in business, a lot. Not that Trump is the reason we lack political stability. It's the two-party system Washington warned us about: Boaz and Jachin. But that lack of stability is arguably pretty stable itself. American dynamism has pros and cons. Much of today's context goes back to political and trade decisions made back in the '90s.
No amount of hardening or fine-tuning will make them immune to takeover via untrusted context.