No, not surprising. But some of your statements are hard to square with the current circumstances, so help us understand where you're coming from:
> When he says "I will end the war in three days", I think he genuinely believes it. So I think he is genuine.
This is what Putin said about his "Special military operation" that has stretched on for 4 years now. Hostomel turned into an irreversible, taxing conflict on the Russian people: https://en.wikipedia.org/wiki/Battle_of_Antonov_Airport
If the United States' hubris costs it half as much as Russia's did, we'll never be respected on the global stage again. The best option was to honor and enforce Iran's JCPOA agreement with the IAEA, but that's not possible now that America and Israel climbed the escalation ladder. Any deal we strike under duress will be worse and cost taxpayers more than peacetime diplomacy.
I think we can see eye-to-eye with each other, but I'd have to hear how you think this type of strategy benefits America. From the macro-scale, this does nothing to bolster a conflict over the First island chain and weakens America's strategic credibility abroad. Iran and Israel are a sideshow compared to the eventual conflict with China, and the results we've seen from the Persian Gulf do not bode well for America's power projection.
Largely agreed, but if one expects any sort of escalation with China, that actually makes this (and Ukraine) quite useful: it provides real-world data about recent technological changes to war and serves as a minor stress test for the military's organizational structures. To be clear, I don't mean to imply that this is even remotely reasonable as a justification.
>I'd have to hear how you think this type of strategy benefits America
The only answer I have is the one that Trump keeps repeating, about how Iran should not have a nuclear weapon.
> JCPOA agreement
If I were the POTUS and were really concerned about what happens to the US after my term, I would be very concerned about what happens when the term of the agreement expires. This fits with what Trump has been saying: that he became the POTUS because he found that those who came before him were not doing a good job. So it follows that he might think his successors also would not do a good job. So when he says he wants a permanent solution to Iran's nuclear threat, I think that is why.
This is what I said before. If you take what Trump says and does, in different contexts, it matches. I mean, he has an underlying philosophy and worldview that he has built up himself, not derived from the thoughts and philosophy of others. This is another reason why I like him, but it causes a huge majority of intellectuals to hate him, because most intellectuals derive or outsource their views and thinking to other thought leaders...
> The only answer I have is the one that Trump keeps repeating, about how Iran should not have a nuclear weapon.
Then you don't have an answer. The Israeli media played that line for close to 40 years, lamenting Iran being "mere months" away from a nuke - for decades at a time.
Iran has nothing to do with American homeland security. America's involvement in Iran is purely for political and economic reasons, there is no credible threat to America in Iran any more than there is in Sudan or Yemen.
> If I am the POTUS [...] I would be very concerned about what happens when the term of the agreement expires.
Genuinely, why? The JCPOA is a joint plan, America's opinion only matters insofar as we can compel Iran to comply. Pulling out increases the likelihood that Iran races to build a bomb with their HEU. Bombing them, like in the Twelve-Day War and Operation Midnight Hammer, did not compel any compliance. The uranium is still a problem.
> So when he says he wants a permanent solution to Iran's nuclear threat, I think that is why.
I think that is bogus. Iran is not a credible nuclear threat to the United States or its citizens, so the US would only be going to war to protect Israel. In which case, we don't even need the nuke pretext and we can just admit that it's a protectionist war to defend our fragile satellite state instead of lying about ICBM threats.
So... This is worth a personal Google search on your part. This organization is a large part of the lifeblood of all research and development in the United States. It funds research, students, projects.
You know how the US had people from all over the world trying to get into our schools, and how they regularly made important economic, healthcare, and other discoveries by being ahead of the curve? This group is a huge reason why.
Here's a good link on just nine things that came from NSF-funded studies, the first being GPS: https://www.nytimes.com/2025/05/16/science/federally-funded-... There are way more, plus the obvious ripple effect of having trained people who went into industry and innovated in the private sector.
> You know how the US had people from all over the world trying to get into our schools, and how they regularly made important economic, healthcare, and other discoveries by being ahead of the curve? This group is a huge reason why.
A lot of people from all over the world are trying to get into US schools because it's a way to get a US visa, and legal residency in the US is valuable enough that people are willing to go to great lengths in order to obtain it.
This isn't just a hypothetical, a guy I know from Europe who is currently enrolled in a master's degree program in the US told me outright that the primary motivating factor for him to get a master's degree was because it was the most expedient way for him to get a visa to the US. I know other people who have deliberately enrolled or sought to enroll in higher education programs in other countries they wanted to spend time in, as a way of getting a visa, plus in some cases a stipend from the government of that country.
Certainly some people who attempt to immigrate to the US via the educational system are in fact doing valuable research work - but I think the vast majority of them are not, and I don't really trust the current leadership of the NSF to set up systems that accurately discern between research programs that genuinely benefit the country or the world; and research programs that are actually an excuse to get US visas to smart but ultimately mediocre people from other countries who would prefer to be in the US rather than their own country.
Speaking as someone with a bit of higher-education experience: there was a large number of Chinese students paying cash to do their Ph.D.s in the US, probably 50% of the student body at some schools. A large portion of that 50% went immediately back to China after obtaining their degrees.
LLMs "hallucinate" because they are stochastic processes predicting the next word without any guarantee of being correct or truthful. It's literally an unavoidable fact unless we change the modelling approach, which very few people are bothering to attempt right now.
Training data quality does matter but even with "perfect" data and a prompt in the training data it can still happen. LLMs don't actually know anything and they also don't know what they don't know.
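To make the "stochastic process" point concrete, here's a minimal sketch of categorical next-token sampling. The prompt and probabilities are made up for illustration (a real LLM derives them from learned weights over a huge vocabulary), but the mechanism is the same: the sampler only sees probability mass, never truth, so a plausible-but-wrong token can always be emitted.

```python
import random

# Hypothetical next-token distribution for the prompt
# "The capital of Australia is". Probabilities are invented
# for illustration; a real model computes them via softmax.
next_token_probs = {
    "Canberra": 0.6,   # correct
    "Sydney": 0.3,     # plausible but wrong
    "Melbourne": 0.1,  # plausible but wrong
}

def sample_next_token(probs, rng):
    # Standard categorical sampling: walk the cumulative distribution
    # until the random draw falls inside a token's probability mass.
    # Nothing here checks factual correctness.
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point edge cases

rng = random.Random(0)
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]
wrong = sum(1 for t in samples if t != "Canberra")
print(f"{wrong / 1000:.0%} of samples are confidently wrong")
```

Even with the correct answer holding the most probability mass, roughly 40% of draws here land on a wrong token, and the output gives no signal that anything went wrong.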
I won't quibble even though I likely should. Have to remember this is HN and companies need to shill their work otherwise ... Yes.
I will play along and assume this is sound. 10-40% +/- 10% is along the lines of "sort of", in a completely unreliable, unguaranteed, and unproven way, sure.
That’s not the only issue. They also have the problem that they’re built to always give an affirmative answer and to use authoritative wording, even when confidence is low. If they were trained to answer “I don’t know” instead of guessing, they’d hallucinate a lot less, but nobody seems to want that.
It calls to mind the issue of search engines that refuse to return “0 results found” anymore. Now they all try to give you related but ultimately incorrect results.
To me, that feels like gaslighting. It’s like if you ask someone to buy cheddar cheese at the store and they come back with mozzarella, and instead of admitting that the store was out of cheddar, they try to convince you that you actually really want mozzarella.
If they were trained that an answer of "I don't know" was acceptable, the model would be prone to always say "I don't know", because it's a universally acceptable answer.
Though it could be a honeypot: they are probably hoping to train on all the ways someone might try to do this. Or maybe funds are really low and they need a smokescreen for a really bad actor to go in and try to do it for real.
I could probably do this, but why on earth would I want to immediately put myself on a list as a dangerous person? The main problem with this is that even if they somehow stopped all points of failure with gpt5.5 (which they can't), you can distill a new model from gpt5.5 or any other model and get anything you would want in probably under 4B parameters. A lot of this is theater so they don't get sued as easily when it inevitably happens.
Distillation doesn't have to use weights. Think of it as a fine-tune. The basic form of it is: you ask a large model lots of questions and you train the small model on the results. Even better if you ask it to explain its rationale. There are tons of schemes for it; do some searching around. One I remember is: for each prompt, ask the small model to answer, have a big model review and critique the answer, then train on the results.
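The basic scheme above can be sketched in a few lines. Note the `teacher` here is a hypothetical stub standing in for an expensive large-model API call, and the "fine-tune" step is only indicated; a real pipeline would train an actual small model on the collected pairs.

```python
def teacher(prompt):
    # Stub for a large-model call; canned answers stand in for
    # real API responses (hypothetical examples).
    canned = {
        "2+2": "4, because adding two and two gives four.",
        "capital of France": "Paris, the seat of the French government.",
    }
    return canned.get(prompt, "I don't know.")

def build_distillation_set(prompts):
    # Step 1: ask the large model lots of questions and keep its
    # answers (ideally including its rationale) as training targets.
    return [(p, teacher(p)) for p in prompts]

dataset = build_distillation_set(["2+2", "capital of France"])
for prompt, target in dataset:
    # Step 2 (not shown): fine-tune the small model on these
    # (prompt, target) pairs with an ordinary supervised objective.
    print(prompt, "->", target)
```

The point is that only the teacher's outputs are needed, never its weights, which is why rate limits and terms of service are the only real barriers.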
I won't go into how that applies specifically to this article. But there are even distillation-as-a-service tools; I believe some providers support this to some extent, though probably not for ChatGPT.
I think a year or so ago there was some sort of scandal about other companies doing this to ChatGPT, as well as individuals dumping their entire training sets. Lots of ways; hypothetically of course, things like this could be, and likely are, being done right now.
By making millions of queries to frontier models from a lot of accounts, collecting the results as a dataset, and finetuning your model on it. Chinese companies have been caught doing it on an industrial scale several times now.
Google search has gone way downhill since they nerfed it and then did nothing to stem the flood of AI-slop SEO websites. So unfortunately, instead of sharing links, everyone now gets sent to the inefficient text generator that hallucinates nonsense and colors the average summary of a topic by whoever trained it and by your most recent chat history.
I haven't run a Google search in two years. Your comment just made me realize that. Doing a Google search is like trying to watch cable after being on YouTube for years.
I use different search engines than Google. They have similar issues, but some are better at ignoring the slop.
I just cannot justify the environmental impact and surveillance of using LLMs for everything. I prefer to summarize recent information myself. LLMs are not particularly good at it.
Funny thing about the cable analogy. Ever since the streaming providers started cranking up prices while still forcing users to see hundreds of ads, my family has been buying second-hand DVDs. So we have regressed from streaming to right after cable. I know one family that went back to cable; they still watch YouTube here and there, but they got sick of it.
If my user cannot install software on their own computer, then I do not want their money. They have issues they need to work out on their own, and they might be better off saving their money.