Of course it's a high bar! Why should anyone care about this work outside of the time of their release? It's modern culture but it ain't gonna be passed down anytime soon.
If the bar is that people will continue reading their books in 200 years, then which fiction writers of the last few decades would go into your list of "good"?
I like his reasoning about feedback, but found his rejection of this funny:
> I was afraid then that I had consigned myself to writing stories about children in jeopardy. But in fact I was writing character stories rather than idea stories. And THAT was how I built a career, not by self-imitation ...
From my perspective, having read about a dozen of his books and enjoyed his writing, I couldn't help but feel that the protagonists and the adversity facing them get really repetitive - almost all of his books really are about unusually smart children in various forms of jeopardy, all achieving almost-impossible-to-believe control over their situation. They're great power fantasies for a kid, but when I tried to revisit his writing when I was older, I found myself really disappointed with the lack of range.
If you google it, tobacco revenue is at £8 billion (Office for Budget Responsibility) and in decline, while NHS spending on smoking is at £2.6 billion in England, which is the bulk of it (NHS England).
I do not have the specific info/ref to hand, but at one point some years ago, smoking brought in something on the order of nine times as much for the NHS as was spent on smoking-related illnesses. I was very surprised by this.
Even so, the NHS's goals are rightly such that greatly reducing the harm done by smoking is preferred over keeping this revenue. Unlike a tobacco company that would not factor harms external to the organisation into the profit and loss calculation.
Indeed, and if we accept the argument of this tech approaching AGI, we should expect that within x years, the subscription cost may exceed the salary cost of a junior dev.
To be clear, I'm not saying that it's a good thing, but it does seem to be going in this direction.
If LLMs do reach AGI (assuming we have an actual agreed upon definition), it would make sense to pay way more than a junior salary.
But also, LLMs won’t give us AGI (again, assuming we have an actual, meaningful definition)
I absolutely do not accept that argument. It's clear models hit a plateau roughly a year ago, and all incremental improvements come at increasingly higher cost.
And junior devs have never added much value. The first two years of any engineer's career are essentially an apprenticeship. There's no value add from having a perpetually junior "employee".
For me too, it was around that time last year, with GPT-5, Claude Sonnet 4.5 and then Gemini 3 that I started feeling that these models are clearly becoming great at reasoning. I'm not at all opposed to saying that they are around PhD-level on at least some domains.
> We want to make these capabilities available to the scientists and research organizations best positioned to advance human health, while maintaining strong safeguards against biological misuse. The Life Sciences model is launching through a trusted-access deployment structure for qualified Enterprise customers in the U.S. to start, with controls around eligibility, access management, and organizational governance.
I'm absolutely OK with a legitimate lab scientist conducting biochemical research getting suggestions about substances that are generally considered dangerous but might be appropriate for their study; it'll be up to the scientist to discern whether a substance is indeed appropriate to use.