Very preliminary testing is promising: it seems far more precise in its code changes than the GPT-5 models, avoiding pulling in code sections irrelevant to the task at hand, a habit that tends to make GPT-5 as a coding assistant take longer than expected. If that holds, Haiku 4.5 may be less expensive in actual day-to-day use than the raw cost breakdown initially suggests, though the price increase is significant.
Branding is the true issue that Anthropic has, though. Haiku 4.5 may (not saying it is, it's far too early to tell) be roughly equivalent in code output quality to Sonnet 4, which would serve a lot of users amazingly well. But given the connotations smaller models carry, alongside recent performance degradations that have made users more suspicious than before, getting them to adopt Haiku 4.5 over even Sonnet 4.5 will be challenging. I'd love to know whether Haiku 3, 3.5, and 4.5 are roughly in the same ballpark in terms of parameters, and of course, nerdy old me would like that to be public information for all models; but in fairness to the companies, many customers would just go for the largest model, thinking it serves all use cases best. GPT-5 to me is still the most impressive because of its pricing relative to performance, and Haiku may end up similar, though with far less adoption. Everyone believes their task requires no less than Opus, it seems.
I find it funny that with the WFH movement, people seem to have forgotten the lessons learned from the early 2000s outsourcing/offshoring craze.
Many companies tried to move their development offices offshore to India, Vietnam, Bulgaria, Ukraine: anywhere with significantly cheaper labour. And the first lesson these companies learned was that the second you've trained a person there to the point that you're happy, they will quit, because now they have the skills to get hired by another multinational and leave for more affluent pastures.
It wasn't until later that companies realized it took more than that to build offshore workforces. You have to build teams of people, you have to support their families (especially in South East Asia), and you need to provide benefits and incentives for them to stay in their home town. But most of all, you need to become part of the local community. Otherwise you will always be a naive multinational to be taken advantage of for as long as it's convenient. And the intelligent and driven people, the exact people you want to hire there, all know this.
I can see the exact same thing playing out in the WFH movement, with managers proclaiming that now that they can hire anywhere, they can do away with much of the support and non-financial compensation inherent in office work. But they will run headfirst into the same problems as offshoring. A job is more than money, especially in a creative, problem-solving industry like software development (and if you disagree, you are lying to yourself): your work, your colleagues, and your company shape you, and (should) support you and reflect on you to all future employers. Loyalty has to be earned, and if all you are relying on is salary, then your loyalty is worth the difference in dollars between you and your competitor.
My view is that we'll settle into a bimodal steady state: WFH-first companies that put in the time, money, and effort needed to support a remote workforce, and traditional office work (with maybe a bit more flexibility) going back to full-time face to face. I don't think the hybrid model can work.
If you really want to push yourself, I strongly suggest looking into OMSCS. If you get in, you can take classes in areas you may not be familiar with (ML, AI, compiler optimisation, and many other things). It will stretch your knowledge for sure and expose you to a wide variety of areas that don't come up at all in typical web dev.
I'm working on a new chapter for my course Mastery with SQL (https://www.masterywithsql.com) covering query performance and indexing with PostgreSQL.
I really want everyone to see first-hand the impact of rewriting queries to be more performant and adding the right indexes, so I've been spending a lot of time creating great exercises where you get to optimise poorly performing queries over some very large and interesting data sets.
I launched the course on HN a bit over a week ago and had a really great response (https://news.ycombinator.com/item?id=20260292), so it has been great motivation to keep working hard! Really enjoying myself at the moment.
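Not from the course itself, but as a minimal, self-contained sketch of the before/after effect an index has on a query plan (using SQLite rather than PostgreSQL, and an invented `orders` table, purely for illustration):

```python
import sqlite3

# Hypothetical table: 100k orders, frequently filtered on a non-key column.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(100_000)],
)

query = "SELECT COUNT(*) FROM orders WHERE customer_id = 42"

# Without an index, the planner has no choice but to scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
print(plan_before)  # a SCAN over the orders table

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With the index, the planner can seek directly to the matching rows.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
print(plan_after)  # a SEARCH using idx_orders_customer
```

PostgreSQL's `EXPLAIN` (and `EXPLAIN ANALYZE`) gives the same kind of visibility, with much richer cost and timing detail.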
For reference (I = input, O = output, USD per million tokens):
Haiku 3: I $0.25/M, O $1.25/M
Haiku 4.5: I $1.00/M, O $5.00/M
GPT-5: I $1.25/M, O $10.00/M
GPT-5-mini: I $0.25/M, O $2.00/M
GPT-5-nano: I $0.05/M, O $0.40/M
GLM-4.6: I $0.60/M, O $2.20/M