Inference is cash positive; it's research that burns all the money. So if you can get hold of enough users, the volume eventually works in your favour.
The idea is a war of attrition: once your potential competitors run out of money and the cost of entry is too high for newcomers, you raise your prices to profitability and/or enshittify your product.
Right, but unlike social products (where the network of users is essential) or transportation/food delivery (where providers follow the user volume), I just don’t see any stickiness benefit for OpenAI. A user’s conversation history is the only potentially valuable bit, but I think most users treat their ChatGPT history like their Google search history: disposable.
Can someone give me an explanation for why Mike Cardwell, who lives in the UK, blocks Iranian IPs over “your decision to arm Russia with drones so that they can indiscriminately massacre civilians,” but does not block US or UK IPs over their decision to arm Israel?
The answer would seem to me to be racism: he values Ukrainian lives more than Palestinian ones, and considers non-white citizens of non-democratic countries responsible for their countries’ actions, but white citizens of democracies innocent.
I’m not sure if you’re actually asking or not, but in case you want to hear an answer: it’s because Israel is considered to be defending itself after a horrible attack; not that dissimilar to Ukraine, really, in some people’s view (mine included). This goes back to a worldview in which residents of a country share a portion of the blame for their country’s actions. That view is not universal, but it has a certain mindshare (again, myself included), and if you accept that it’s fair for the residents of a place to share the fate brought about by that place’s cumulative actions, it’s not a big stretch to see how Israel’s actions can be considered natural and just by some, and so can sanctions against the residents of Iran and Russia.
The vast majority of the world, including the ICC and the UN, does not consider Israel to be defending itself. By your own logic, Israelis are culpable for the genocide and land theft committed by their government.
I believe “vast majority” may be a stretch; do you happen to know if there’s a way to quantify that? In my bubble, almost everyone that I personally know who generally has reasonable beliefs is on Israel’s side of this conflict.
Why do we keep getting these LLM studies that are completely unsurprising? Yes, the probabilistic text generator is more likely to output a correct answer when the input closely matches its training sources than when you add random noise to the prompt. LLMs don’t actually “understand” maths. It’s worrying how much research seems to operate from the premise that they do.
"It’s worrying how much research seems to operate from the premise that they do."
They are testing a hypothesis; we don't know whether they're optimistic or pessimistic about it. Is that even relevant?
They have shown that LLMs can be easily confused by non sequiturs, and that is interesting. Maybe prompts to LLMs should be more direct and focused. Maybe this points to a problem with end users interacting with LLMs directly: many people have difficulty writing in a clear and direct way! Probably even more so when speaking!
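For concreteness, here’s a minimal sketch of the kind of perturbation test being discussed: append an irrelevant sentence to a word problem and see whether the answer changes. This assumes the openai Python client; the model name, problem, and distractor sentence are placeholders I made up, not the study’s actual materials.

```python
# Minimal sketch of a non-sequitur perturbation test. Assumes the
# `openai` Python client (pip install openai) and OPENAI_API_KEY set
# in the environment. Model, problem, and distractor are placeholders.
from openai import OpenAI

client = OpenAI()

PROBLEM = (
    "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
    "How many kiwis does he have in total?"
)
# An irrelevant sentence of the kind such studies splice into prompts.
DISTRACTOR = "Interesting fact: cats sleep for most of their lives."


def ask(prompt: str) -> str:
    """Send one question and return the model's reply text."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content


baseline = ask(PROBLEM)
perturbed = ask(f"{PROBLEM} {DISTRACTOR}")

print("baseline: ", baseline)
print("perturbed:", perturbed)
# Over many problems and distractors, comparing how often the two
# answers agree gives a rough measure of how fragile the model is
# to irrelevant context.
```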