What do you mean? A kid is anyone younger than the speaker. My stepdad used to refer to Bill Clinton as a kid because he was the first president younger than him.
What was the Bjarne Stroustrup quote about two types of languages again? I certainly have gotten more mileage out of Markdown at this point in my long career as a programmer and web developer than I have out of hand-written HTML, XML, or other markup languages. The latter are good for automation and unambiguous representation needed for system building, but for the type of informal writing that is my bread and butter outside of coding, Markdown kills, and Obsidian is the killer app.
Define interesting. In my experience most business logic is not innovative or difficult, but there are ways to do it well or ways to do it terribly. At the senior levels I feel 90% of the job is deciding the shape of what to build and what NOT to build. I find AI very useful in exploring and trying more things but it doesn’t really change the judgment part of the job.
I agree that the way that today's generation of seasoned programmers learned their craft is going away, and that we don't know how the next generation will learn. I disagree very much with the conclusion:
> They didn’t trade speed for learning. They traded learning for nothing. There was no trade-off. There was just loss.
I believe this conclusion is due to a methodological problem, a form of begging the question if you will. One thing I am certain of is that humans who set their mind to something learn something, and good programmers are among the most tenacious in setting their mind to something. With agentic coding, they definitely learn different things, and so I would expect syntax knowledge to be weaker, but debugging and review skill will increase overall. Why? Because there will be more code, and more breakage, and I still haven't seen any tooling that allows a non-technical person to be effective at this.
Programming knowledge has always had a half-life. The way I see it, this is a big sea change that will fundamentally change the job of software engineers, and some non-trivial percentage will either change careers or find a sheltered, slow-moving place to finish out their working years. But for those who were not attached to hand-crafted code, AI provides power tools that empower technically minded people more than anyone else. I have full faith that the younger generation still has the same distribution of technical potential, and they will still find ways to develop their craft just as previous generations of hackers have always done.
This is a good take: junior engineers today and in the future will learn from making mistakes, they just won't look like the mistakes we made in the good ole days when everyone wrote code by hand.
It's wild to me to hear this being spun as vanity, like it's some influencer clickbait or LinkedIn slop. You could argue anyone posting anything online is driven by vanity, but in this case we're talking about someone who took agency in his own medical outcome, and essentially experimented on himself. Sure it was selfish in the sense that he didn't want to die and he bent all his effort and resources toward it, so what? I don't see exercising one's will-to-live in this way as a huge moral gray area. Other commenters are asking why we don't fund more research. Well sure, we should do that too, but it's important to recognize that the type of approach he took here only works because it was one individual willing to combine a significant amount of personal effort with his own moral authority to try out risky things on himself. Even with orders of magnitude more funding, you can't ethically do this kind of thing without the consent of the patients, and there's not enough data on these types of approaches to adequately describe the risks to patients if they aren't specifically motivated to lean into the details like this guy did.
Agreed. One thing I’ve found striking is how far LLMs can get with pure language, and the recognition that humans often operate with a similar kind of abstract conceptual reasoning that is purely language-based, pretty far removed from facts, and tenuously connected to objective reality. It takes a certain kind of mind to be curious and unpack the concepts that most of us take for granted most of the time. At best, people don’t usually have the time or patience to engage in that level of thinking; at worst, it can actively lead to cognitive dissonance and anger. So of course a consumer chatbot is not going to be tuned to bring novel insight; it must default to some level of affirmation or it will fail as a product. One who is aware of this can work around it to some degree, but fundamentally the incentives will always push a consumer chatbot to essentially be junk food for the brain.
This is a weird hill to die on. As much as I resonate with many of the concerns, I don't see refusing to use AI as something that will actually help any of those things. Forking a stable version of vim is something I guess, but I don't really see the sky falling with mainline vim or neovim.
Personally the leverage I have as a bit of a cranky graybeard myself is that I understand how software works and I can distinguish between good and bad uses of AI and think critically about how to influence things towards better software. Just declaring AI as unequivocally bad and evil will do nothing more than make me irrelevant. At some point being right is useless without some measure of also being effective.
I love your take, but I had a hard time getting through the article because “science of entrepreneurship” already feels like a contradiction in terms to me. Each startup is a product of its unique time and context. There are huge swings in fortune based on seemingly subtle factors that are not necessarily under the control of the founder but need to be recognized, and that can force the whole vision, approach, or even the problem statement itself to shift wildly overnight. This happens over and over again, and so creating a successful startup is more akin to bull riding than any formulaic process.
Because entrepreneurship and markets overall are at the center of so many disparate human contexts, I just don’t think the scientific method is particularly applicable. I also think that the minute you try to generalize between startups, the fidelity of understanding the factors of success falls off exponentially. The most common failure mode, almost by definition, is failing to recognize that some seemingly good idea or pattern that worked in many other businesses just does not work in this particular context for whatever reason. This is why entrepreneurs who are too focused on theory and not enough on the details of their particular space tend to fail.
To me, The Lean Startup is useful food for thought, and can be useful in surprising ways (even in bigger companies), but the generalized ideas and statements are of very little value without a keen sense of applicability in context. Any “science” of entrepreneurship would basically combine the systemic chaos of macroeconomics, minus the precision of standard financial metrics, plus all the human factors of psychology. Fascinating to think about, but I doubt the best pundits and theorists would themselves make good entrepreneurs.
These rules aged well overall. The only change I would make these days is to invert the order.
Number 5 is timeless and relevant at all scales; as code iterations have gotten faster and faster, data is all the more relevant. Numbers 4 and 3 have shifted a bit: since data sizes and machine performance have ballooned, algorithm overhead isn't quite as big a concern, but the simplicity argument is as relevant as ever. Numbers 2 and 1, while still true (Amdahl's law is a mathematical truth, after all), are also clearly a product of their time, of the hard constraints programmers had to deal with back then and the shallowness of the stack. Still good wisdom, though I think on the whole the majority of programmers are less concerned about performance than they should be, especially compared to 50 years ago.
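On the "mathematical truth" point: Amdahl's law just says that speeding up one part of a program can never buy more than the reciprocal of the part you didn't touch. A minimal sketch (function name is mine, not from any of the rules):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the runtime
    is sped up by a factor of s (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / s)

# Doubling the speed of half the program yields only ~1.33x overall:
print(amdahl_speedup(0.5, 2))

# Even a near-infinite speedup on 90% of the runtime caps out at 10x,
# because the untouched 10% dominates:
print(amdahl_speedup(0.9, 1e12))
```

Which is exactly why measuring before optimizing matters: the fraction `p` you're actually improving is usually smaller than you guessed.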
Having taken a startup from 2 to 100, a lot of the comments here resonate, but now that I'm in a company of 10,000 and trying to get things done across wide areas, I would say efficiency and communication overhead are all relative: make sure you're solving the problems in front of you and not "playing house" on problems you anticipate having down the line.
You will need more structure than you had before. For instance, at 15 the idea of managers is silly; everyone still needs to be contributing individually, you just need to minimally subdivide work so that everyone isn't doing everything. Be wary of process for process's sake. You will start to need some, but you really want to stay focused on concrete progress: is everyone doing the most important thing possible at any given moment? How fast are you shipping changes, closing sales, etc.? Also make sure you have the right people, and don't get starstruck by big tech vets. They have many skills that will be useful (if you are phenomenally successful), but if they don't have startup experience they will likely overengineer things by default, and a significant percentage of them cannot wipe their own ass without the best-in-class tooling and infra support teams that allowed them to focus purely on one domain at BigCo. Basically you need pragmatic hustlers; veterans are good so long as they aren't cathedral or empire builders. Also watch out for weird team dynamics and nip any toxic interactions in the bud. One bad apple really can spoil the barrel, and that's probably the most important thing to watch out for in the 15-100 range.