I understand that calling it ‘agentic’ is nice for marketing, but most of what is described in this blog post is not related to agents. The design patterns you describe are explicitly non-agentic. Many of the use cases described are better handled by a single LLM call rather than an agent.
Finally, saying that agents can have predictable behavior is wrong (except on simple tasks where you shouldn’t be using an agent anyway). Agents loop and compound their input, making them highly non-deterministic even for the same prompt.
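The compounding point can be shown in a few lines. This is a minimal sketch, not any real framework: the `llm` function is a stand-in for a sampled model call (temperature > 0 or nondeterministic kernels), and the loop simply appends each output to the context, so any variation at one step changes the input to every later step.

```python
import random

def llm(context, rng):
    # Stand-in for a sampled LLM call: a tiny bit of randomness
    # per step, mimicking temperature>0 sampling.
    return f"step-{len(context)}:{rng.random():.3f}"

def agent_loop(prompt, steps, seed):
    rng = random.Random(seed)
    context = [prompt]
    for _ in range(steps):
        # Each output is fed back into the context, so one divergent
        # step changes the input to all subsequent steps.
        context.append(llm(context, rng))
    return context

# Same prompt, different sampling noise -> diverging trajectories
run1 = agent_loop("same prompt", steps=5, seed=1)
run2 = agent_loop("same prompt", steps=5, seed=2)
```

With a single LLM call you get one sample's worth of variance; with a loop, that variance is injected at every step and compounds.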
They may not be acting in good faith but there is extremely clear evidence that UCLA has engaged in illegal racial hiring and admissions practices and has supported antisemitism on campus. UCLA chose to give them that ammunition.
(Both links are German language - that should not be a problem anymore these days I hope)
I don't see why you refuse to even consider that there is more than 1 and 0 absolutes.
The majority of protests do not create such headlines. We have had smaller and larger protests in many cities (here in Germany). People meet, walk, speak, and that is fine and nobody complains. But especially in 2024 that was not all that happened.
I read similar stories about threatened Jewish students at US universities.
It is, as so often, caused by a violent and vocal minority. But if a hundred protesters are peaceful and only one hits you on the head you still have a very bad day as the victim.
Maybe it is getting better, this report says that in 2025 things have improved for both Muslim and Jewish students: "The biggest takeaways from Harvard’s task force reports on campus antisemitism and anti-Muslim bias" -- https://edition.cnn.com/2025/04/29/us/harvard-reports-antise... (May 2025)
Israel is probably the “strongest” nation in figuring out propaganda and mass disinformation campaigns. It is extremely easy to say an entire protest is invalid because of the actions of a few either legitimate or illegitimate protestors.
This is why this tactic is so easily deployed across any protest that seriously threatens states. We saw the US use this tactic in BLM protests of 2020.
Anyway these protests are legitimate despite the events that you’re mentioning happening. “Fed posting” is a thing people are slowly learning to spot.
> “Fed posting” is a thing people are slowly learning to spot.
Are you accusing me? I waited patiently for years building a profile here, for this one moment, to derail this comment, yay!
> Anyway these protests are legitimate despite the events that you’re mentioning happening
And so is the violence I linked to. If you have ANY proof whatsoever that the lecture hall and the Jewish student were actually attacked by Jewish saboteurs - despite the identity of the people involved known - please post it.
Otherwise you are the one doing the shitposting here.
> Something we have evidence of Israel actually doing:
So now you can just tell everybody to ignore any and all events that don't fit your narrative? How convenient!
And by the way, those "protests" would gain a lot of credibility if they were also targeted against Hamas.
Not at all. I’m saying that it’s useless to measure a protest based off of an action of one or two individuals. Which is always just one or two individuals.
I wonder why the last Trump administration didn't do anything when there was a rally where people flew nazi flags, chanted "jews will not replace us" on the grounds at UVA, and murdered a counterprotestor. Why was that antisemitism overlooked?
The term agent is just way overloaded. This guy defines it completely differently than the big labs do, and I've seen half a dozen different definitions in the last few months.
In the long run the definition used by OpenAI, Anthropic et al will win out so can we just all switch to that?
Even as someone who is skeptical about LLMs, I’m not sure how anyone can look at what was achieved in AlphaGo and not at least consider the possibility that NNs could be superhuman in basically every domain at some point
They claim that a small Chinese hedge fund acquired $1bln in GPUs with no state support, including many sanctioned chips, then trained a model optimized for a far smaller cluster, and that they have a source at this very small fund who is willing to admit to export violations. A model with ~40bln active parameters is exactly the size you would expect from a cluster of the size they claim.
What’s more likely - that semianalysis made it up like they have a bunch of other things, or that all the above is true?
SemiAnalysis is wrong. They just made their numbers up (among many other things they have invented - they are not to be trusted). I have observed many errors of understanding, analysis and calculation in their writing.
DeepSeek R1 is literally an open-weight model. It has <40bln active parameters. We know that for a fact. A model of that size is roughly compute-optimal for the training duration and cluster size claimed. In fact, the 70bln parameter Llama 3 model used almost exactly the same compute as the DeepSeek V3/R1 claims (which makes sense, as you would expect a bit less efficiency from the H800 and from the complex DeepSeek MoE architecture).
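You can sanity-check this with the standard 6·N·D approximation (roughly 6 FLOPs per active parameter per training token, forward plus backward). The figures below are the publicly reported claims, not independently verified numbers:

```python
def train_flops(active_params, tokens):
    # Dense-transformer rule of thumb: ~6 FLOPs per active
    # parameter per training token (forward + backward pass).
    return 6 * active_params * tokens

# Publicly reported figures (treat as claims):
deepseek_v3 = train_flops(37e9, 14.8e12)   # ~37B active params, ~14.8T tokens
llama3_70b  = train_flops(70e9, 15e12)     # ~70B params, ~15T tokens

print(f"DeepSeek V3 ~ {deepseek_v3:.2e} FLOPs")   # ~ 3.29e+24
print(f"Llama 3 70B ~ {llama3_70b:.2e} FLOPs")    # ~ 6.30e+24
```

Same order of magnitude, within a small factor — consistent with the "similar compute, somewhat lower per-chip efficiency" reading, and nothing that requires a secret datacenter to explain.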
The comment talks as if Texas were a bastion of energy, when it has made the news multiple years in a row because entire cities collapse over what most states would consider trivial amounts of snow. It is hard to be a world leader in energy while also having a wiki page on your recurring energy crises.