
It's not that strange - the industry wants customer lock-in, not commodification.

> He didn’t say that.

Actually he did, or something very close to it.

Obviously SOMETIMES you can add more developers to a project to successfully speed it up, but Brooks' point was that it can easily also have the opposite effect and slow the project down.

The main reason Brooks gives for this is the extra overhead you've just added to the project in terms of communications, management, etc. In fact, increasing team size always makes the team less efficient - it adds more overhead - and the question is whether each new person adds enough value to offset or overcome this.

Most experienced developers realize this intuitively - it's always faster to have the smallest team of the best people possible.

Of course some projects are just so huge that a large team is unavoidable, but don't think you are going to get linear speedup by adding more people. A 20-person team will not be twice as fast as a 10-person team. This is the major point of the book, and the reason for the title "The Mythical Man-Month". The myth is that men and months can be traded off, such that a "100 man-month" project that would take 10 men 10 months could therefore be accomplished in 1 month if you had a team of 100. The team of 100 may in fact take more than 10 months, since you have just turned a smallish efficient team into a chaotic mess.
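
Brooks quantifies the overhead too: with n people there are n(n-1)/2 pairwise communication channels, so coordination cost grows quadratically while added labor only grows linearly. A quick back-of-envelope illustration (my sketch, not code from the book):

    def channels(n):
        # pairwise communication paths among n team members: n(n-1)/2
        return n * (n - 1) // 2

    print(channels(10))   # 45
    print(channels(100))  # 4950 - 10x the people, ~110x the communication paths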

Adding an AI "team member" is of course a bit different to adding a human team member, but maybe not that different, and the reason is basically the same - there are negatives as well as positives to adding that new member, and it will only be a net win if the positives outweigh the negatives (extra layers of specifications/guardrails, interaction, babysitting and correction - knowing when context rot has set in and it's time to abort and reset, etc).

With AI, you are typically interactively "vibe coding", even if in a responsible fashion with specifications and guardrails, so the "new guy" isn't working in parallel with you, but is rather taking up all your time, and now his/its prodigious code output needs reviewing by someone, unless you choose to omit that step.


>> He didn’t say that.

> Actually he did, or something very close to it.

Yeah, the "something very close to it" is what I quoted. And I'll repeat: the distinction matters.

> don't think you are going to get linear speedup by adding more people.

I neither said nor implied this. Of course communication and coordination are overhead. Let's quote Brooks from the same article some more: "The maximum number of men depends upon the number of independent subtasks."

Which is why in modern times you have a bunch of theoretical and practical research around team topologies, DORA, Reverse Conway Manoeuvre, the push to microservices, etc, etc. You can boil all that down to "maximize team independence while making each team as productive as possible."

This is a wonderful tangent (and if this interests you, I heartily recommend the Team Topologies book), but can we just keep in mind the gp never actually said he was overhiring for a single project? Parent latched onto a wrong idea and ran with it.


There already exist products like LiteLLM that adapt tool calling to different providers. FWIW, incompatibility isn't just an open-source problem - OpenAI and Anthropic also use different syntax for tool registration and invocation.

I would guess that lack of standardization of what tools are provided by different agents is as much of a problem as the differences in syntax, since the ideal case would be for a model to be trained end-to-end for use with a specific agent and set of tools, as I believe Anthropic do. Any agent interacting with a model that wasn't specifically trained to work with that agent/toolset is going to be at a disadvantage.
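
For example, the same tool ends up declared differently per provider - a rough sketch of the two JSON shapes, with field names as I recall them from each vendor's function-calling docs:

    # OpenAI-style tool declaration: nested under "function", schema in "parameters"
    openai_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }

    # Anthropic-style tool declaration: flat, schema in "input_schema"
    anthropic_tool = {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }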


Presumably the hosting services are resolving all of this in their OpenAI/Anthropic compatibility layer, which is what most tools are using. So this is really just a problem for local engines that have to do the same thing but are expected to work right away for every new model drop.

You're apparently assuming that AI related layoffs are rational, based on those making the decisions having good information about what their own organizations are achieving with AI.

I think this is far from the truth. In many companies AI has become a religion, not a new technology to be evaluated and judged. Employees are told to use AI and to report how much they are using it, and all understand the consequences of giving the wrong answer. The CEO hears the tales of rampant AI use and productivity that he is demanding to hear, then pats himself on the back and initiates another layoff. Meanwhile, in the trenches, little if anything has actually changed.


> assuming that AI related layoffs are rational

Nope. I’m saying if firms lay off on the assumption of AI gains that never come, they’ll be beaten by firms who don’t.


OK, but your post reads as if you think that AI can't be the cause of layoffs if AI is "worthless" (less capable than they are assuming), which is false.

CEOs are laying people off because of AI because they think it will save them money, but are doing so based on misinformation, largely due to their own insistence that everyone use AI and report how much they are using it - they are just hearing what they asked to hear (just like Mao hearing about impossible levels of rice production during the "Great Leap Forward"). I'm not making this up - I've seen it first hand.

You can see the proof of this - companies laying off because of what they mistakenly believe AI can do - in companies like Salesforce, forced into an embarrassing U-turn and hiring people back when reality set in. At least Salesforce were quick to correct - most big companies are not so nimble or ready to admit their own mistakes.

We seem to have reached mania-like levels of rice-production reporting, with companies like Meta now taking AI token usage as a proxy for productivity and/or a measure of something positive, and apparently having a huge leaderboard displaying who is using the most (i.e. spending the most money!). The only guaranteed outcome of this is that they will indeed see massive use of tokens, and a massive AI bill, and then in a year or so will likely be left scratching their heads wondering why nothing much appears to have changed.


> your post reads as if you think that AI being the cause of layoffs can't be true

Sorry, I was unclear. Those statements can’t both be true in the long run. They can absolutely be true in the short run.


"You think that AI will take your job, disrupt society, and has a 25% chance of being an EXISTENTIAL threat?! Who told you that?!"

Meta apparently now has a "leaderboard" for who is using the most AI - consuming the most tokens. Must make Anthropic happy, since Meta is using Claude, and accounts for some significant percentage (10%? 20%?) of their total volume.

Token usage is a different and more sympathetic heuristic than LOC produced.

The metric by itself tells you nothing about what value those tokens produced, but to some extent it represents the amount of thinking you are able to offload to the computer.

Wide-breadth problems seem to scale well with usage, like scanning millions of lines of code for vulnerabilities, such as the recent Claude mythos results.


The trouble with rewarding token usage is the same as rewarding LOC written/generated - if that's what you are asking for then that is what you will get. Asking the AI to "scan the entire codebase for vulnerabilities" would certainly be a good way to climb the leaderboard!

Absolutely, no one should be rewarded for either tokens used or LOC generated. I just think in the absence of any incentives, token usage is a better heuristic as to value produced than LOC generated.

If you don't want/need to program at the lowest level possible, then PyTorch seems the obvious option for AMD support, or maybe Mojo. The Triton compiler would be another option for kernel writing.

I don't think that's something that can be pitched as a CUDA alternative. Just different level.

Triton, while a compiler, generates code at a lower level than CUDA or ROCm.

The machine code that actually runs on NVIDIA and AMD GPUs is, respectively, SASS and AMDGCN, and in each case there is also an intermediate level of representation:

CUDA -> PTX -> SASS

ROCm -> LLVM-IR -> AMDGCN

The Triton compiler isn't generating CUDA or ROCm - it generates its own generic MLIR intermediate representation, which then gets converted into PTX or LLVM-IR, with vendor-specific tools then doing the final step.
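
For concreteness, a Triton kernel is written in a Python DSL and compiled down from there - a minimal vector-add sketch, following the shape of the standard Triton tutorial:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements  # guard the tail of the array
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    x, y = torch.rand(4096, device="cuda"), torch.rand(4096, device="cuda")
    out = torch.empty_like(x)
    add_kernel[(triton.cdiv(4096, 1024),)](x, y, out, 4096, BLOCK_SIZE=1024)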

If you are interested in efficiency and want to write high-level code, then you might use PyTorch's torch.compile, which then generates Triton kernels, etc.
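
i.e. something along these lines (a minimal sketch):

    import torch

    def f(x, w):
        return torch.relu(x @ w)

    # on GPU, torch.compile's default Inductor backend emits Triton kernels
    f_compiled = torch.compile(f)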

If you really want to squeeze the highest performance out of an NVIDIA GPU then you would write in PTX assembler, not CUDA, and for AMD in GCN assembler.


Writing twice makes sense if time permits, or the opportunity presents itself. The first time may be somewhat exploratory (maybe a throw-away prototype); the second time you better understand the problem and can do a better job.

A third time, with a new abstraction, is where you need to be careful. Fred Brooks ("The Mythical Man-Month") refers to it as the "second-system effect", where the confidence of having done something once (for real, not just a prototype) may lead to an over-engineered and unnecessarily complex "version 2", as you are tempted to "make it better" by adding layers of abstraction and bells and whistles.


I agree with what you're saying about writing something twice or even three times to really understand it, but I think you might have misunderstood the WET idea: as I understand it, it's meant in opposition to DRY, in the sense of "allow a second copy of the same code", and then when you need a third copy, start to consider introducing an abstraction, rather than religiously avoiding repeated code.

Personally, even for a prototype, I'd be using functions immediately, as soon as I saw (or anticipated) that I needed to do the same thing twice - mainly so that if I want to change it later there is one place to change, not many. It's the same for production code of course, but when prototyping the code structure may be quite fluid and you want to keep making changes easy, not have to remember to update multiple copies of the same code.

I'm really talking about manually writing code, but the same would apply for AI written code. Having a single place to update when something needs changing is always going to be less error prone.

The major concession I make on modularity when developing a prototype is often to put everything into a single source file - making it fast to iteratively refactor, etc - rather than split it up into modules.


> mainly so that if I want to change it later there is one place to change, not many

But what happens when new requirements come in for just one of the things? If you left them separate, it's an easy change of a few lines. If you created an abstraction, now you either have to add a bunch of if statements, or spend time undoing the entire abstraction that you spent X amount of time creating.

If a bunch of other code has built up around that abstraction, undoing it can become a serious chore. I've worked on apps that had way too many premature abstractions, and we just had to live with it because it would be too risky and onerous to try to undo them.

In my experience, it's generally an order of magnitude easier to add an abstraction to a mature app when you get tired of making changes in multiple places, than to remove one when the app evolves and you realize these things aren't actually that similar. Also when you wait to abstract, you might see a better way to do it, or how to reduce the scope so that you're using composition to share a bunch of smaller pieces vs. sharing the entire page/object/interface/endpoint/etc.

Obviously, this isn't a blanket rule. There's an aspect of soothsaying to guessing which things might diverge and which are likely to spawn a lot more similar copies, as the sketch below illustrates.
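
A toy sketch of the failure mode (hypothetical names): a helper extracted from two similar call sites starts sprouting flags as requirements diverge for just one caller.

    # v1: one "shared" helper extracted from two similar call sites
    def format_price_v1(amount):
        return f"${amount:,.2f}"

    # v2: after one-off requirements land for individual callers, flags accrete
    def format_price_v2(amount, euro=False, hide_cents=False, accounting_negatives=False):
        symbol = "€" if euro else "$"
        if accounting_negatives and amount < 0:
            return f"({symbol}{abs(amount):,.2f})"
        if hide_cents:
            return f"{symbol}{amount:,.0f}"
        return f"{symbol}{amount:,.2f}"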


> But what happens when new requirements come in for just one of the things?

I guess it could happen, but that depends on your mental model when coding - if you're just pattern matching similar chunks of code (which are not being used in a semantically identical way) then all bets are off, although that seems a very alien concept of how someone might code.

OTOH, if you have a higher-level mental model of what you are doing, then it's not a matter of "this looks like common code" but rather "I need to do the exact same operation here" (same inputs/outputs/semantics). Maybe I'm expressing it poorly, but I can't recall ever having to fork a function because requirements at two call sites diverged.


The danger with people who claim to follow DRY is that they don't first check that they are actually repeating themselves. As soon as they encounter similarity, they assume equality and rush to abstract it. But if one knows the domain well enough to know that some logic is the same, not just similar, then there's no need to write it twice first.

Are competitors doing well? It's really a bit of a weird product category - not really appealing to vegetarians or meat eaters. Who are they marketing it to?

People who eat meat but feel bad about it, apparently.

This is an extremely loud online group but apparently barely exists in real life.


So, like most extremely loud online groups then?

For an international perspective, I can tell you that their competitors are doing very well in my corner of Europe, but the competition's quality is 10x-100x that of Beyond.

People buy competitors' products because they are simply legitimately fine-tasting products on their own; no vegetarian-vs-meat marketing required.

Beyond just has a shit product, even if they genuinely were the first to develop the technology.


I guess I just don't get it. Obviously there's a decent sized market for vegetarian convenience food, but the meat-based branding, and attempts to copy texture/flavor of meat products would seem a turn off for that market. Good flavors and mouth feel (not tofu!) are important, but why explicitly try to copy meat unless meat eaters are the market you are targeting?

It'd make more sense to me to have different products/brands/advertising for different market segments. For the meat eaters the marketing would be "healthy/cheap, tastes just like beef/chicken" (which seems to be what Beyond Meat are going for), and for the vegetarians "delicious flavors, plant based, high in protein" (not "fake beef").


> Good flavors and mouth feel (not tofu!) are important

as one of those vegetarians who isn't particularly compelled by anything intentionally imitating meat, this is always somewhat funny to me. tofu already has good flavor and mouthfeel if prepared well, and presumably the rest of this alleged market segment is as capable of preparing tofu adequately as i am personally. so even if beyond was to pivot to (also) being beyond tofu, i fail to see how that would capture appreciably more of the market.

i could be wrong, but it's always seemed to me that most of the apparent demand for something better than tofu is not in fact coming from inside the house.


Could you share a recipe or two?

I love tofu too, but mostly buy pre-prepared ones to open and bite into cold (yummy!). My current favorites are the "dried" Taifun ones, like their Japanese Style.

https://www.taifun-tofu.de/en/products/tofu-filets-japanese-...


sorry, can't say i've ever cooked from a recipe, but i highly recommend caramelizing already-crispy tofu with soy sauce, sriracha, and ginger (+ fish sauce if you swing that way)

> I guess I just don't get it

Imagine you liked something, then realized that thing was bad and you didn't want to do it anymore. Then somebody offered you an alternative without the ethical problems.

I ate meat for like 30 years, then as I learned more about the realities of the meat industry in the US (suffering of animals, development of antibiotic resistant bacteria, pollution of air and water, exploitation and harm to workers, etc) I decided I couldn't buy meat anymore. I like having burgers and sausages and such, and Beyond meat gives me something that tastes good, is easy to cook, fits into a healthy diet.


>not really appealing to vegetarians or meat eaters

Why not? I've been a vegetarian/vegan for a long while now (I started during covid) and I enjoy a fake-meat burger, or fake meat as protein in my meal, once in a while. Same goes for my girlfriend. I assume most (ethical) vegetarians are in the same boat. I am a former meat eater; I enjoy the taste of meat.

FWIW vegan meat substitutes are popular and getting even more popular here (an EU country). For example, all burger places and many regular restaurants have something similar on the menu. I avoid Beyond though; it's always the most expensive option, without the quality to justify it.


Vegetarian and vegan menu options are extremely common here in the US too, but I'd say not so much these meat substitute products at fast food places. One of the big chains (Burger King? McDonalds?) had a Beyond burger when it first came out, but otherwise you need to avoid the big chains and may find a veggieburger on the menu, just called that - a veggie patty of some nature, not pretending to be meat. You can buy Quorn etc products in all the supermarkets.

America only has the shallowest appearance of a democracy where voters get to control who is elected.

The electoral college system, coupled with its winner-takes-all implementation in most states, means that voting is a sham for 80% of the population. The other 20% live in a swing state and their vote can at least potentially affect the outcome of an election, but even there "your vote" will literally be cast opposite to what you put on the ballot unless you end up being part of the winning majority.

