> A lot of people read things, it changes their life, and their life is better. They may not even remember where they read these things. They don't produce citations all of the time. That's totally fine, and normal. I don't see LLMs as being any different. If I write an article about making code better, and ChatGPT trains on it, and someone, somewhere, needs help, and ChatGPT helps them? Win, as far as I'm concerned. Even if I never know that it's happened. I already do not hear from every single person who reads my writing.
Not a contradiction but an addendum: plenty of creative pursuits are not about functional value, or at least not primarily. If somebody writes a seemingly genuine blog post about their family trauma, and I as the reader find out it's made-up bullshit, that's abhorrent to me, whether or not AI is involved. And I think it would be perfectly fair for writers who do create similar but genuine content to find it abhorrent that they must compete with genAI, that genAI will slurp up their words, and that genAI's mere existence casts doubt on their own authenticity. It's not about money or social utility, it's about human connection.
The consent question gets weirder when agents have persistent memory. I run agents that accumulate context over weeks — beliefs extracted from observations, relationships with other agents. At what point does an agent's memory become its own work product rather than a derivative of its training? There's no legal framework for that.
The only thing the doomers have been right about so far is that there's always a user willing to use --dangerously-skip-permissions. But that prediction's far from unique to doomers.
> Without going into the specifics of car seats, I do think we overemphasize safety. The article mentions saving 57 children. How much are 57 lives worth? The answer is not infinite - a life has a numeric value, ask any insurance company.
Sure, the value of 57 lives isn't infinite, but this particular comparison is absurd. Births and deaths are morally independent; it's not as if those 57 lives could be substituted by a surplus of births.
> Every safety regulation ought to pass a cold-blooded cost/benefit analysis. Few of them do.
Actually I'm pretty sure that is in fact how safety regulations work.
Nonetheless, the very concept of a "cold-blooded cost/benefit analysis" is paradoxical: values are intrinsically subjective, which is precisely why we have democracy.
>Actually I'm pretty sure that is in fact how safety regulations work.
Of course the numbers "check out". Industry regulations are typically ghostwritten by some combination of industry groups, lobbying groups, and academia. Who funds those? Either the industry being regulated, or an industry that stands to benefit if some other industry is regulated.
80-100 years ago, if you were inclined to screech about fire safety, you'd have been citing numbers funded by... wait for it... the asbestos industry.
>hence we have democracy.
Democracy is a system for ensuring stable-ish power transfers by giving the people some semblance of control over the process and little more.
> We search for outliers yet arbitrarily limit the range of players available.
> Gender segregation, weight classes, these are antithetical to the underlying aim of competitive sports.
That's a naive, reductive view. Competition isn't just about benchmarking and finding the global #1, nor about perfect objective ranking. If it were, we would not bother with geographically-based competitions, nor with tournament brackets and championships.
Competition is an entertainment product and a major form of community. It sustains itself through competitors and spectators. Seeking objectivity is backwards.
Agreed, and I think people adopt this reductive view because it can be quite difficult to reason about objectively. In terms of a framework to channel one's thinking on this, I found this paper useful in understanding the rationale behind defining distinct categories of competitors in sports: https://www.researchgate.net/profile/Jim-Parry/publication/3...
The key takeaway in my view is that the authors make a distinction between "category advantage", which is a systematic, structural, group-based difference that exists before competition even begins, and "competition advantage", which we see play out in competitive events and is based on a mix of factors including skill, preparation, and both innate and trained talent.
Where exactly to draw the line can be somewhat subjective (e.g. in weight classes) but it helps to explain why we have a separate female category: male physiology confers such a significant category advantage that, in open competition, it would limit the ability of female athletes to compete meaningfully and demonstrate their abilities. Having a separate category fulfils this desirable outcome of showcasing and celebrating female athletic excellence.
Often we see calls to add various classes of males, particularly ones who have chosen to identify as women, framed as "inclusion", but from the perspective of who this category is actually intended for, it's the opposite. Drawing a clear eligibility boundary around the female category maximises inclusion of the female athletes who would otherwise be disadvantaged and excluded.
> Yes, if you want skilled labour. But that's not at all what ARC-AGI attempts to test for: it's testing for general intelligence as possessed by anyone without a mental incapacity.
Humans without a clinically recognized mental disability are generally capable of some kind of skilled labor. The "general" part of intelligence is independent of, but sufficient for, any such special application.
What is possible today is one thing. Sure, people debate the details, but at this point it's pretty uncontroversial that AI tooling is beneficial in certain use cases.
Whether selling access to massive frontier models is a viable business model, or trillion-dollar valuations for AI companies can be justified: these questions are of a completely different scale, with near-term implications for the global economy.
Fedora also offers immutable distros which are (I've heard) much more user-friendly than Nix. Sure you can make a hacky pseudo-immutable workflow on a mutable distro but that's literally more effort for a worse result.
I feel like accusations of fallacy can be applied to almost anything: unless you're doing pure math (and even then there are tons of caveats, a major one being that you've already bought into the framework), you can always question deeper assumptions, and yes, structurally the argument might even fit a fallacy. I mean, is-ought is one of the famous unresolved ones.
Yet we still hold arguably correct beliefs in spite of fallacies. That suggests that mere pattern matching isn't a good way to detect what is true...
With git, conflicts interrupt the merge or rebase. And if you end up juggling multiple rebases and merges, it's easy to get into a "bad" state, or to be forced to resolve the same conflict over and over.
In Jujutsu and Pijul, by contrast, conflicts are recorded by default but marked as conflicted commits/changes. You can continue to make commits on top. Once you resolve the conflict between A and B, no future merge or rebase will cause the same conflict again.
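To make the git side of this concrete, here's a minimal sketch of how a conflicting merge halts the operation mid-flight. It assumes only a POSIX shell with git on the PATH; the repo, branch names, and file contents are invented for illustration:

```shell
#!/bin/sh
# Demo: in git, a conflicting merge stops the whole operation
# and leaves the working tree in a conflicted state.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

echo base > file.txt
git add file.txt
git commit -qm base
trunk=$(git symbolic-ref --short HEAD)  # default branch name varies by git version

git checkout -qb side
echo side > file.txt
git commit -qam side

git checkout -q "$trunk"
echo trunk > file.txt
git commit -qam trunk

# The merge stops here; git refuses to proceed until the conflict is resolved.
git merge side || echo "merge interrupted"
git status --short   # prints "UU file.txt" (both modified, unmerged)
```

In Jujutsu the equivalent merge would succeed and produce a commit marked as conflicted, which you can keep building on top of; in git, everything is frozen at this point until you edit `file.txt`, `git add` it, and conclude the merge.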
It is irrelevant whether consciousness is an "illusion." The hard problem of consciousness is why there's any conscious experience at all. The existence of the illusion, if that's what you choose to label it, is just as inexplicable.
Of course science may one day be able to solve the hard problem. But at this point in time, it's basically inconceivable that any methodology from any field could produce meaningful results.
One thing scientists are trying is to see what interventions in the brain seem to make consciousness go away. Continued work in that vein may well set bounds on how consciousness can and cannot be caused and give us some idea.
Investigating the mechanics of consciousness is addressing the (misleadingly termed) "easy problem." The hard problem is why physical stuff would generate the weird metaphysical thing we call consciousness.