> Which means we need people like Alice! We have to make space for people like Alice, and find a way to promote her over Bob
The solution is relatively simple though - not sure the article suggests this as I only skimmed through:
Being good in your field doesn't only mean pushing out articles but also being able to talk about them. I think academia should drift away from the written form toward a more spoken form, e.g. conferences.
What if, say, you could only publish something after presenting your work in person, answering questions, etc.? The audience can be big or small, doesn't matter.
It would make publishing anything at all more expensive but maybe that's exactly what academia needs even irrespective of this AI craze?
I thought that was kind of how the hard sciences work already?
My grad school friend who was a physicist would write his talk just before his conferences, and then submit the paper later. My experience in CS was totally backwards from that.
15 years ago I was thinking about switching my career to a different industry altogether, just didn't know what it would be. One thing I knew was that I was so tired of building web sites and backends. Boring, repetitive, uninspiring.
Then a friend asked me to write a simple iPhone app. I had no idea what development for Apple platforms would be like...
Fast forward to 2026, I'm 57 now, still in tech, building apps for Apple platforms, still enjoying it very much.
The durability of their products still surprises me. I still own and use an iPhone 11 (it was my first iPhone when I switched from Android). It's still getting the latest iOS updates, still works very well, and may last two more years. What other phone could do this?
I’ve had the exact opposite journey: native apps, then, disillusioned and frustrated with the backwards tooling, I moved on to more open platforms (web apps and backends).
I’m curious what you find “backwards” about native tooling. I know the sentiment is common, and there must be some truth to it. But my partner works in web infra and frequently laments her inability to trace a single request through her company’s monolith while trying to reconstruct a failure from logs, and I am baffled that there’s no equivalent to attaching a debugger and stepping through execution.
> AI should learn to say two things: ‘I don’t know’ and ‘you’re wrong.’
My guess is the next evolutionary step for LLMs should be yet another layer on top of reasoning: some form of self-awareness and theory of mind. The reasoning layer already shows glimpses of this ("The user wants ...") but apparently not enough to suppress generation and say "I don't know".
Well they managed the "you're wrong" bit at least. Sometimes ChatGPT tells me I'm wrong when I'm not. Still can't do "I don't know" which is probably the bigger problem.
Claude models have made very good progress (see the BS benchmark), and that probably explains why they're leading now. Others will follow this precedent shortly, no doubt.
While Swift now has the `borrowing` and `consuming` keywords, support for storing references is nonexistent, and returning or storing `Span`s and the like is only possible through the experimental `@lifetime` annotations.
Swift is a nice language, and its new support for the bare necessity of affine types is a good step forward, but it's not at all comparable with Rust.
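For context, here is a minimal sketch of what the stable part of that support (noncopyable types with `borrowing`/`consuming` parameters) looks like today; the `FileHandle` type and functions are made up for illustration:

```swift
// A noncopyable (~Copyable) type: ownership must be moved or
// borrowed explicitly, never implicitly copied.
struct FileHandle: ~Copyable {
    let fd: Int32

    // `consuming` takes ownership; the handle cannot be used
    // by the caller after this call.
    consuming func close() {
        // the actual close(fd) syscall would go here
    }
}

// `borrowing` grants temporary read access without taking ownership.
func describe(_ handle: borrowing FileHandle) -> String {
    "file descriptor \(handle.fd)"
}
```

What has no stable spelling yet is the other direction: storing a borrow in a property, or returning a `Span` whose validity is tied to the borrowed value. That is exactly where the experimental `@lifetime` annotations come in, and why the comparison with Rust's full borrow checker falls short.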
You don't even need to reinvent a walkable city, just look at any medieval historical town that is say ~500 years old, almost untouched, and has restricted traffic today (possibly with no public transport whatsoever). These towns are a pure joy to live in, they are walkable with no other options, quiet, pleasant and overall healthy to live in in all respects.
We keep rediscovering that we're happier and more fulfilled when we live in ways closer to how we've lived for most of the last million years. Yet we're also disgusted by our ancestors and look down on them.
I'm a big fan of Swift (and SwiftUI), such a concise and elegant language. Beauty.
Also, I appreciate how you made all backend calls just static functions, which is what they should be. People tend to overcomplicate these things and add a lot of boilerplate and unnecessary bureaucracy.
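For anyone unfamiliar with the pattern, it reads something like this; the `API` namespace, `User` type, and endpoint URL are hypothetical, not from the project being discussed:

```swift
import Foundation

struct User: Decodable {
    let id: Int
    let name: String
}

// Backend calls as plain static functions: no client objects,
// no protocols, no dependency graph — just a namespace of calls.
enum API {
    // Hypothetical endpoint for illustration only.
    static let baseURL = URL(string: "https://example.com/api")!

    static func fetchUser(id: Int) async throws -> User {
        let (data, _) = try await URLSession.shared.data(
            from: baseURL.appendingPathComponent("users/\(id)"))
        return try JSONDecoder().decode(User.self, from: data)
    }
}
```

Call sites then stay equally simple: `let user = try await API.fetchUser(id: 42)`.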
Absolutely. Even worse, when you ask AI to solve a problem, it almost always adds code, even when a better solution exists that removes code. If the AI's new solution fails and you ask it to fix it, it throws in even more code, creates more mess, and introduces unnecessary new state. Rinse and repeat ad infinitum.
I did this a few times as an experiment, already knowing how the problem could be solved. In difficult situations Cursor invariably adds code and creates even more mess.
I wonder if this can be mitigated somehow at the inference level because prompts don't seem to be helping with this problem.
Same thing happens with infrastructure config. Ask an AI to fix a security group issue and it'll add a new rule instead of fixing the existing one. You end up with 40 rules where 12 would do and nobody knows which ones are actually needed anymore.
Being more of an AI skeptic than not, I don't think the article's conclusion is true.
What LLMs can potentially do for us is exactly the opposite: because they are trained on pretty much everything there is, if you ask one how the telephone works, or what happens when you enter a URL in the browser, it can actually answer and break it down for you nicely (and that would be a dissertation-sized text). Accuracy and hallucinations aside, that's already better than a human who has no clue how the telephone works, or where to even begin if said human wanted to understand it.
Human brains have a pretty serious gap in the "I don't know what I don't know" area, whereas language models have such a vast scope of knowledge that it makes them somewhat superior, albeit at the price of being, well, literally quite expensive and power hungry. But those are technical details.
LLMs are knowledge machines that are good at precisely that: knowing everything about everything on all levels as long as it is described in human language somewhere on the Internet.
LLMs consolidate our knowledge in ways that were impossible before. They are pretty bad at reasoning or e.g. generating code, but where they excel so far is answering arbitrary questions about pretty much anything.
You're missing a very important distinction here: they don't know things, they generate plausible things. The better the training, the more similar those are, but they never converge to identity. It's like if you asked me to explain the S3 API and I'm not allowed to say "I don't know": I'll get pretty close, but you won't know what I got wrong until you read the docs.
The ability of LLMs to search out the real docs on something and digest them is the fix for this, but don't start thinking you (and the LLM) don't need the real docs anymore.
That said, it’s always been a human engineer superpower to know just enough about everything to know what you need to look up, and LLMs are already pretty darn good at that, which I think is your real point.
So the market is going to be flooded with this type of soulless book that has no distinct character or style, just pure dry facts?
In a sense, "I wrote a book about it" is disingenuous and I agree the author's bullet list would probably be more interesting and would save us a lot of time.
I would take back my negative feedback in that case! I am reading the book, and the content is interesting, but I am never sure what is actually your thought vs. LLM filler!