The last quote, to a layperson, may sound completely sinister, but therein lies a deep and open computer science question: AIs really do seem to get their special capabilities from having a degree of freedom to output wrong and false answers. This observation goes all the way back to some of Alan Turing's musings on how an AI might one day be possible, and there were early theoretical results touching on it as well, e.g. PAC learning. I'd love to know what's happened since on this front, such as the role of noise and randomness, and whether hallucinations might even be, in some fundamental sense, a feature rather than a bug.
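One concrete, uncontroversial place where randomness enters LLM output is temperature sampling at decode time. This is just an illustrative toy sketch (the logit values are made up), not a claim about the deeper theoretical question above: raising the temperature flattens the token distribution, deliberately giving probability mass to tokens the model would otherwise never emit.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into a sampling distribution.

    temperature >> 1 flattens the distribution (more randomness,
    more chance of "wrong" tokens); temperature -> 0 approaches
    greedy argmax.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits: one "correct" token heavily favored over two alternatives.
logits = [4.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, 0.1)  # near-greedy
hot = softmax_with_temperature(logits, 2.0)   # flattened

# The top token's probability drops as temperature rises, leaving
# mass for alternatives that would otherwise be effectively impossible.
assert cold[0] > hot[0]
```

The point of the sketch is only that "freedom to be wrong" is an explicit, tunable knob in standard decoding, which is the shallow end of the question the comment is asking about.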
>>Do they even have direct access to published works to use as reference material?
I mean, clearly, given that it did answer my question eventually. Also, wasn't it a whole thing that these models got trained on entire book libraries (without necessarily paying for that)?
>>I wouldn't expect any LLM to be able to respect such a request
Why though? They seem to know everything about everything, so why not this specifically? You can ask for the plot of pretty much any book/film/game made in the last 100 years and it will tell you. Maybe asking about specific chapters was too much, but Neuromancer exists in free copies all over the internet and has been discussed to death. If it were a book that came out last year then ok, fair enough, but LLMs had 40 years of discussions about Neuromancer to train on.
Besides, regardless of everything else: if I say "don't spoil the rest of the book" and your response includes "in the last chapter character X dies", then you just failed at basic comprehension. Whether an LLM has any knowledge of the book or not, whether the spoiler is even true or not, that should be an unacceptable outcome.
I wouldn't expect an AI to know exactly what happens in every chapter of a book.
Knowing the plot of Neuromancer isn't the same as being able to recite a chapter by chapter summary.
I tried this Neuromancer query a few times and results vary greatly with each regeneration, but "do not include spoilers" seems to make Gemini give more spoilers, not less.
The mistake in these types of arguments is assuming that because natural, classical-artificial, and neural-net-artificial learning methods all employ some kind of counterexample/counterfactual reasoning, they must work the same way; their underlying mechanisms could well be fundamentally different. These arguments are invalid until computer science advances enough to explain what the differences and similarities actually are.
Ferran Adria drew culinary inspiration from a bag of potato chips
As someone with a privileged, elite educational background, I can guarantee that intellectuals love both the highbrow and the lowbrow, the authentic and the kitsch. If anything, holding the stereotypical impression of the intelligentsia is itself a sign that someone is not acculturated, which makes the OC's comment ironic: they are telling on themselves.
Those X-Y sentences are like nails on a chalkboard to me, but I genuinely wonder why they are so pervasive in LLM arguments. It is like trying very hard to think in binary terms yet failing.
I don't get how you considered all these details yet didn't try to steelman the "hint" better, e.g. comparing 30 minutes of relaxed meditation to 30 minutes of sauna use, rather than some vague definition of "do nothing". Do different social classes effectively have very different baselines of doing nothing, such as their stress levels? Does playing golf count as free time? Is sunning on the deck of a cruise ship "doing nothing"? At that point the discussion about confounders really gets into the weeds. Unlike a CPU, human inactivity is not a no-op instruction.
You can read the reports, and then you will know what counts as free time; it's clearly defined. Note that I'm not saying socioeconomic status might not confound the results. I'm just saying that available free time most likely does not, and that the poorest decile generally has much more free time than the richest decile. I don't get why that is so hard to accept.
Rights are morally absolute, and the cynical insistence that they must be traded off is both fallacious and intellectually hypocritical. If you want certain rights to be weaker, just admit it; don't be disingenuous about it.
I have been discovering/enjoying the 'smol' web, unironically.
Hmm, there's a site I wanted to share with you but I can't find it at the moment. It's a directory of personal websites sorted by topic; it pops up here from time to time.