I'm not going to dogpile criticism on Tailwind or Adam, whose behavior seems quite admirable, but I fundamentally agree with the thrust of the parent comment. It's unfortunate for Tailwind and anyone who was invested in the project's pre-2022 trajectory, but no one is entitled to commercial engagement by unaffiliated third parties.
Here's a similar example from my own experience:
* Last week, I used AI to help me prepare a set of legal agreements that would have easily cost $5k+ in legal fees pre-2022.
* I've also started a personal blog and created a privacy policy and ToS that I might otherwise have paid lawyers money to draft. Or more realistically, I'd have cut those particular corners and accepted the costs of slightly higher legal risk and reduced transparency.
* In total, I've saved into the five figures on legal over the past few years by preparing docs myself and getting only a final sign-off from counsel as needed.
One perspective would be that AI is stealing money from lawyers. My perspective is that it's saving me time, money, and risk, and therefore allowing me to allocate my scarce resources far more efficiently.
Automation inherently takes work away from humans. That's the purpose of automation. It doesn't mean automation is bad; it means we have a new opportunity to apply our collective talents toward increasingly valuable endeavors. If the market ultimately decides that it doesn't have sufficient need for continued Tailwind maintenance to fund it, all that means is that humanity believes Adam and co. will provide more value by letting it go and spending their time differently.
Laws are not the intellectual property of individuals or companies; they belong to the public. That's a fundamentally different type of content to "learn" from. I totally agree that AI can save a lot of time, but I don't agree that the creators of Tailwind should see no form of compensation.
It does not feel right to me that revenue is being taken from Tailwind and redirected to Google, OpenAI, Meta, and Anthropic with zero compensation.
I'm not sure how this should be codified in law, or what the correct words are to describe it properly yet.
I see what you're getting at, but CSS is as much an open standard as the law. Public legal docs written against legal standards aren't fundamentally dissimilar to open source libraries written against technical standards.
While I am all for working out some sort of compensation scheme for the providers of model training data (even if indirect via techniques like distillation), that's a separate issue from whether or not AI's disruption of demand for certain products and services is per se harmful.
If that is the case, it's a very different claim than that AI is plagiarizing Tailwind (which was somewhat of a reach, given the permissiveness of the project's MIT license). Achieving such mass adoption would typically be considered the best case scenario for an open source project, not harm inflicted upon the project by its users or the tools that promoted it.
The problem Tailwind is running into isn't that anything has been stolen from them, as far as I can tell. It's that the market value of certain categories of expertise is dropping due to dramatically scaled up supply — which is basically good in principle, but can have all sorts of positive and negative consequences at the individual level. It's as if we suddenly had a huge glut of low-cost housing: clearly a social good on balance, but as with any market disruption there would be winners and losers.
If Tailwind's primary business is no longer as competitive as it once was, they may need to adapt or pivot. That doesn't necessarily mean that they're a victim of wrongdoing, or that they themselves did anything wrong. GenAI was simply a black swan event. As a certain captain once said, "It is possible to commit no mistakes and still lose. That is not a weakness; that is life."
Anyone who spends any amount of time perusing discussions on social media will quickly observe what a rare gift strong reading comprehension turns out to be.
I think there's an important material difference between the two. China's single party is authoritarian and uncontested. America's two major parties are mildly authoritarian on different axes, but average out to a mostly liberal status quo in practice. The relative chaos and transparency of America's system are what they are, but it isn't an autocracy at this point.
There's also a significant and growing political push to transition away from FPTP voting in the US, which would dismantle the current duopoly.
> The relative chaos and transparency of America's system are what they are, but it isn't an autocracy at this point.
You can get locked up on a $2 million bond for posting a Facebook meme in the US, as demonstrated by a recent case[1]. I don't know what value the transparency holds here. It's certainly already crossed the Rubicon into overt authoritarianism in the past year.
Thanks for sharing, that is really bad. I did say "mostly", though. A bizarre anomaly that will most likely get thrown out in court is still pretty different from comparable practices in e.g. the UK that are standard procedure.
The transparency I referred to was primarily the American political system's airing of its dirty laundry out in the open, which is inherently going to look more chaotic than disputes between internal factions of a single party because so much of it is performative.
In summary, the Luddites had a point. It doesn't mean they were ultimately correct, just that their concerns were valid.
Regardless of anyone's thoughts on genAI in particular, it's important for us as a society to consider what our economic model looks like in a future where technology breaks the assumption of near-universal employment. Maybe that's UBI. Maybe it's a system of universally accessible educational stipends and pumping public funds into venture capital. Maybe it's something else entirely.
I think the author has a fair take on the types of LLM output he has experience with, but may be overgeneralizing his conclusion. As shown by his example, he seems to be narrowly focusing on the use case of giving the AI some small snippet of text and asking it to stretch that into something less information-dense — like the stereotypical "write a response to this email that says X", and sending that output instead of just directly saying X.
I personally tend not to use AI this way. When it comes to writing, that's actually the exact inverse of how I most often use AI: I throw a ton of information at it in a large prompt and/or use a preexisting chat with substantial relevant context, possibly have it perform some relevant searches or calculations, and then iterate over successive prompts before landing on a version that's close enough to what I want for me to touch up by hand. Of course the end result is clearly shaped by my original thoughts, with the writing being a mix of my own words and a reasonable approximation of what I might have written by hand anyway given more time allocated to the task, and not clearly identifiable as AI-assisted. When working with AI this way, asking to "read the prompt" instead of my final output is obviously a little ridiculous; you might as well also ask to read my browser history, some sort of transcript of my mental stream of consciousness, and whatever notes I might have scribbled down at any point.
> the exact inverse of how I most often use AI, which is to throw a ton of information at it in a large prompt
It sounds to me like you don't make the effort to absorb the information. You cherry-pick stuff that pops into your head or that you find online, throw it into an LLM, and let it convince you that it created something sound.
To me it confirms what the article says: it's not worth reading what you produce this way. I am not interested in the eloquent text your LLM produced (and that you modify just enough to feel good calling it your work); it won't bring me anything I couldn't get by quickly thinking about it or running a quick web search. I don't need to talk to you; you are not interesting.
But if you take the time to actually absorb that information, realise that you need to read even more, form your own opinion, and get to a point where we could have an actual discussion about the topic, then I'm interested. An LLM will not get you there, and getting there is not done in two minutes. That's precisely why it is interesting.
You're making a weirdly uncharitable assumption. I'm referring to information which I largely or entirely wrote myself, or which I otherwise have proprietary access to, not which I randomly cherry-picked from scattershot Google results.
Synthesizing large amounts of information into smaller, more focused outputs is something LLMs happen to excel at. Doing the exact same work more slowly by hand just to prove a point to someone on HN isn't a productive way to deliver business value.
> Doing the exact same work more slowly by hand just to prove a point to someone on HN isn't a productive way to deliver business value.
You prove my point again: it's not "just to prove a point". It's about internalising the information, improving your ability to synthesise and be critical.
Sure, if your only objective is to "deliver business value", maybe you make more money by being uninteresting with an LLM. My point is that if you get good at doing all that without an LLM, then you become a more interesting person. You will be able to have an actual discussion with a real human and be interesting.
Understanding or being interesting has nothing to do with it. We use calculators and computers for a reason. No one hires people to respond to API requests by hand; we run the code on servers. Using the right tool for the job is just doing my job well.
> We use calculators and computers for a reason. No one hires people to respond to API requests by hand; we run the code on servers
We were talking about writing, not about vibe coding. We don't use calculators for writing. We don't use API requests for writing (except when we make an LLM write for us).
> Using the right tool for the job is just doing my job well.
I don't know what your job is. But if your job is to produce text that is meant to be read by humans, then it feels like not being able to synthesise your ideas yourself doesn't make you excellent at doing your job.
Again, maybe it makes you productive. Many developers, for instance, get paid for writing bad code (either because those who pay don't care about quality, can't tell the difference, or something else). Vibe coding obviously makes those developers more productive. But I don't believe it will teach them how to produce good code. Good for them if they make money like this, of course.
> We were talking about writing, not about vibe coding. We don't use calculators for writing. We don't use API requests for writing (except when we make an LLM write for us).
We do however use them to summarize and transform data all the time. Consider the ever present spreadsheet. Huge amounts of data are thrown into spreadsheets and formulas are applied to that data to present us with graphs and statistics. You could do all of that by hand, and you'd probably have a much better "internalization" about what the data is. But most of the time, hand crafting graphs from raw data and internalizing it isn't useful or necessary to accomplish what you actually want to accomplish with the data.
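To make the spreadsheet analogy concrete, here's a toy version of that workflow as a sketch in Python; the rows and column names are invented for illustration:

```python
import statistics

# Raw rows, as you might paste into a spreadsheet (invented sample data).
sales = [
    {"region": "east", "amount": 120.00},
    {"region": "west", "amount": 95.50},
    {"region": "east", "amount": 230.25},
    {"region": "west", "amount": 80.00},
]

# The "formula" layer: group and aggregate without computing anything by hand.
by_region: dict[str, list[float]] = {}
for row in sales:
    by_region.setdefault(row["region"], []).append(row["amount"])

for region, amounts in sorted(by_region.items()):
    print(f"{region}: total={sum(amounts):.2f} mean={statistics.mean(amounts):.2f}")
```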
You seem not to see the difference between maths and, say, literature or history.
Do you actually think that an LLM can take, say, a Harry Potter book as an input, and give it a grade in such a way that everybody will always agree on?
And to go further, do you actually use LLMs to generate graphs and statistics from spreadsheets? Because that is probably a bad idea, given that there are tools that actually do it right.
> Do you actually think that an LLM can take, say, a Harry Potter book as an input, and give it a grade in such a way that everybody will always agree on?
No, but I also don't think a human can do that either. Subjective things are subjective. I'm not sure I understand how this connects to the idea you expressed that doing various tasks with automation tools like LLMs prevents you from "internalizing" the data, or why not "internalizing" data is necessarily a bad thing. Am I just misunderstanding your concern?
Many of the posts I find here defending the use of LLMs focus on "profitability": "You ask me to give you 3 pages about X? I'll give you 3 pages about X, and you may not even realise that I did not write them." I completely agree that this can happen and that LLMs, right now, are useful for hacking the system. But if you specialise in being efficient at getting an LLM to generate 3 pages, you may become useless faster than you think. Still, I don't think that this is the point of the article, and it is most definitely not my point.
My point is that while you specialise in hacking the system with an LLM, you don't learn about the material that goes into those 3 pages.
* If you are a student, it means that you are wasting your time. Your role as a student is to learn, not to hack.
* More generally as a person, "I am a professional in summarising stuff I don't understand in a way that convinces me and other people who don't understand it either" is not exactly very sexy to me.
If you want to get actual knowledge about something, you have to actually work on getting that knowledge. Moving it from an LLM to a word document is not it. Being knowledgeable requires "internalising" it. Such that you can talk about it at dinner. And have an opinion about it that is worth something to others. If your opinion is "ChatGPT says this, but with my expertise in prompting I can get it to say that", it's pretty much worthless IMHO. Except for tricking the system, in a way similar to "oh my salary depends on the number of bugs I fix? Let me introduce tons of easy-to-fix bugs then".
> We were talking about writing, not about vibe coding.
No one said anything about vibe coding. Using tools appropriately to accomplish tasks more quickly is just common sense. Deliberately choosing to pay 10x the cost for the same or equivalent output isn't a rational business decision, regardless of whether the task happens to be writing, long division, or anything else.
Just to be clear, I'm not arguing against doing things manually as a learning exercise or creative outlet. Sometimes the journey is the point; sometimes the destination is the point. Both are valid.
> I don't know what your job is.
Here's one: prepping first drafts of legal docs with AI assistance before handing them off to lawyers for revision has objectively saved significant amounts of time and money. Without AI this would have been too time-consuming to be worthwhile, but with AI I've saved not only my own time but the costs of billable hours on phone calls to discuss requirements, lawyers writing first drafts on their own, and additional Q&A and revisions over email. Using AI makes it practical to skip the first two parts and cut down on the third significantly.
Here's another one: writing technical reports for my day job. Before they'd integrated AI into their platform, I would frequently get rave reviews for the quality and professionalism of my issue reports. After they added AI writing assistance, nothing changed other than my ability to generate a greater number of reports in the same number of billable hours. What you're suggesting effectively amounts to choosing to deliver less value out of ego. I still have to understand my own work product, or I wouldn't be able to produce it even with AI assistance. If someone thinks that somehow makes the product less "interesting", well then I guess it's a good thing my job isn't entertainment.
Don't get me wrong: I don't deny that LLMs can help trick other humans into believing that a text is more professional than it actually is. LLMs are engineered for exactly that.
I'd be curious to know whether your legal documents are as good as they would be without LLMs. I wouldn't be surprised at all if they were worse, but cheaper. Speaking of security audits, that's actually a problem I've seen: LLMs make it harder to detect bad audits, and in my experience I have been confronted with bad security audits more often than with good ones.
For both examples, you say "LLMs are useful to make more money". I say "I believe that LLMs lower the quality of the work". It's not incompatible.
There's no "tricking" involved, and no basis for your assumption that LLMs lower the quality of work. I would suggest that what you and the author are observing is actually the opposite effect: LLMs broadly help improve the quality of work, all else being equal. The caveat is that when all else is not equal, this manifests in bad work being improved to a level that's still bad. The issue here is students using advanced tooling as an excuse to be lazy and undercut their own learning process, not the tool itself. LLMs are just this generation's version of Wikipedia and spell check.
As much as the author rightfully complains about the example in the post, a version that only said "explain the downsides of Euler angles in robotics and suggest some alternatives" would obviously be far worse. In this case, the AI helped elevate clear F-level work to maybe a C. That's not an indictment of AI; it's an indictment of low-quality work. LLMs lower the bar to produce passable-looking bad work, but they also lower the bar to produce excellent work. The confirmation bias here is that we don't know how many cases of B-level work became A papers with AI assistance, because those instances don't stand out in the same way.
In the audit example, LLMs aren't doing the audit. They synthesize my notes into a useful starting point to nullify writer's block, and let me focus more of my time on the hard or unique aspects of a given report. It's like having an intern write the first draft for me, typically with some mistakes or oversights, occasionally with a valuable additional insight thrown in, and often with links to a few helpful references for the customer that I wouldn't necessarily have found and included on my own. That doesn't lower the quality; it improves it.
As far as the legal example, it really depends on the complexity of a given instance and the guidance you've provided to your lawyers. A good lawyer won't sign off on something that fails to meet the requested quality bar (if anything, the financial incentive would be for them to err on the side of conservatism and toss out the draft you'd provided). But of course this all depends on you having a clear enough understanding of what you're trying to accomplish, and enough familiarity with legal documents and proficiency with language to shape everything into a passable first draft. AI speeds this up, but if you don't know what you're doing then the AI won't solve that for you. It's a tool like any other, and can be used properly or improperly.
I think that mindset directly correlates with the kind of AI use that prompted this article: "It doesn't matter" in your eyes. You don't see the task as important, only the output and the fact that it makes you money. The craft is less important than what you can sell it for.
Yes, a learning exercise has the goal of extending your own knowledge. Business, especially as of late, is figuring out how to get the cheapest still-acceptable work to the recipient at the highest price. I suppose it's on the recipient for not checking the quality of their commission.
If you present your AI-powered work to me, and I suspect you employed AI to do any of the heavy lifting, I will automatically discount any role you claim to have had in that work.
Fairly or unfairly, people (including you) will inexorably come to see anything done with AI as ONLY done with AI, and automatically assume that anyone could have done it.
In such a world, someone could write the next Harry Potter and it would be lost in a sea of one million mediocre works that are roughly similar. Hidden in plain sight forever. There would be no point in reading it, because it is probably the same slop I could get by writing a one-paragraph prompt. It would be too expensive to discover otherwise.
To be clear, I'm not a student, nor do I disagree with academic honor codes that forbid LLM assistance. For anything that I apply AI assistance to, the extent to which I could personally "claim credit" is essentially immaterial; my goal is to get a task done at the highest quality and lowest cost possible, not to cheat on my homework. AI performs busywork that would cost me time or cost money to delegate to another human, and that makes it valuable.
I'm expanding on the author's point that the hard part is the input, not the output. Sure someone else could produce the same output as an LLM given the same input and sufficient time, but they don't have the same input. The author is saying "well then just show me the input"; my counterpoint is that the input can often be vastly longer and less organized or cohesive than the output, and thus less useful to share.
> someone could write the next Harry Potter and it would be lost in a sea of one million mediocre works that are roughly similar.
To be fair, the first Harry Potter is a kinda average British boarding school story. Rowling is barely an adequate writer (and it shows badly in some of the later books). There was a reason she got rejected by so many publishers.
However, Netscape was going nuts and the Internet was taking off. Anime was going nuts and produced some of the all-time best anime. MTV animation went from Beavis and Butthead to Daria in this time frame. Authors were engaging with audiences on Usenet (see: Wheel of Time and Babylon 5). Fantasy had moved from counterculture for hardcore nerd boys to something that the bookish female nerds would engage with.
Harry Potter dropped onto that tinder and absolutely caught fire.
I don't really associate Harry Potter's rise with that of the internet. By the time it lit the internet ablaze, it was the 2000s, after the first few movies had come out.
It certainly wasn't the writing that elevated it. I think it was as simple as tapping into an audience that, for once, wasn't raised in some nuclear family: a Cinderella-esque tale of being whisked away from abuse, mixed with a hero's journey toward his inevitable clash with the very evil that set it all in motion.
The movies definitely helped too. The first few were very well done, with excellent child actors. Watching many other fantasy adaptations try to replicate that really shows just how the stars aligned to make HP a success.
I was surprised to find how not true that is when I eventually read the books for myself, long after they became a phenomenon. The books are well-crafted mystery stories that don't cheat the reader. All the clues are there, more or less, for you to figure out what's happening, yet she still surprises.
The world-building is meh at best. The magic system is perfunctory. But the characters are strong and the plot is interesting from beginning to end.
> In such a world, someone could write the next Harry Potter and it would be lost in a sea of one million mediocre works that are roughly similar. Hidden in plain sight forever. There would be no point in reading it, because it is probably the same slop I could get by writing a one-paragraph prompt. It would be too expensive to discover otherwise.
This has already been the case for decades. There are probably brilliant works sitting out there on AO3 or whatnot. But you'll never find them because it's not worth wading through the junk. AI merely accelerates what was already happening.
> AI merely accelerates what was already happening.
I think "merely" is underselling the magnitude of effect this can have. Asset stores overnight went form "okay I need to dig hard to find something good" to outright useless as it's flooded with unusable slop. Google somehow got worse overnight for technical searches that aren't heavily quieried.
I didn't really desire such an acceleration of slop, thanks. At least with human-made slop, I could feel good knowing its creator sometimes learned something from making it.
I've found that AI is incredibly valuable as a general thinking assistant for those tasks as well. You still need enough expertise to know when to reach for it, what to prompt it with, and how to validate the utility and correctness of its output, but none of that consumes as much time as the time saved in my experience.
I think of it like a sort of coprocessor that's dumber in some ways than my subconscious, but massively faster at certain tasks and with access to vastly more information. Like my subconscious, its output still needs to be processed by my conscious mind in order to be useful, but offloading as much compute as possible from my conscious mind to the AI saves a ton of time and energy.
That's before even getting into its value in generating content. Maybe the results are inconsistent, but when it works, it writes code much more quickly than any human could possibly type.

Programming aside, I've objectively saved significant amounts of time and money by using AI to help not only review but also revise and write first drafts of legal documents before roping in lawyers. The latter is something I wouldn't have considered worthwhile to attempt in most cases without AI, but with AI I can go from "knowing enough to be dangerous" to quickly preparing a passable first draft on my own and having my lawyers review the language and tighten up some minor details over email. That's a massive efficiency improvement over the old process of blocking off an hour with lawyers to discuss requirements on the phone, then paying the hourly rate for them to write the first draft, and then going through Q&A/iteration with them over email.

YMMV, and you still need to use your best judgement on whether trying this with a given legal task will be a productive use of time, but life is a lot easier with the option than without. Deep research is also pretty ridiculous when you find yourself with a use case for it.
In theory, there's not really anything in particular that I'd say AI lets me do that I couldn't do on my own*, given vastly more hours in the day. In practice, I find that I'm able to not only finish certain tasks more quickly, but also do additional useful things that I wouldn't otherwise have done. It's just a massive force multiplier. In my view, the release of ChatGPT has been about as big a turning point for knowledge work as computers and the Internet were.
*: Actually, that's not even strictly true. I've used AI to generate artwork, both for fun/personal reasons and for business, which I couldn't possibly have produced by hand. (I mean with infinite time I could develop artistic skills, but that's a little reductive.) Video generation is another obvious case like this, which isn't even necessarily just a matter of individual skill, but can also be a matter of having the means and justification to invest money in actors, costumes, props, etc.
I would argue that it isn't only possible, but on track to arrive sooner than most people realize:
* AI models are steadily continuing to improve in capabilities and efficiency
* Massive investments are being made in scaling up AI infrastructure (see Stargate and xAI Colossus)
* Tesla expects to produce a few thousand Optimus robots this year and use them for some level of internal production workload; meanwhile, Hyundai has acquired Boston Dynamics with what I can only assume is a plan to take its tech out of the research labs and commercialize it at scale
* Aside from all the other recent and ongoing advances in energy tech and infrastructure, production fusion power is coming; if you take sama-backed Helion's word for it, they may be fulfilling a contract to deliver it to Microsoft as soon as 2028 (knock on wood)
Add all that together, and it's not difficult to see a trend that converges on a rapid massive expansion of global and particularly US manufacturing output kicking off within the next decade or two. As soon as the hardware and software are good enough for robots to outcompete average unskilled human laborers at most tasks on cost and quality, expect fully automated assembly lines to start pumping out humanoid robots 24/7, which will then be put to work 24/7 on any number of manufacturing and construction projects with logistics based around autonomous vehicles.
The overhead of US labor cost and safety regulations will become moot with machines doing the work, while our abundance of resources and first-mover advantage on AI will give us a big head start over the rest of the world. Meanwhile, our low population density means we'll have a ton of empty land to build on and a population size that will make UBI payments comparatively easy. In that scenario, eclipsing 2025 China's shipbuilding capacity will be the least of our concerns. Whoever wins the AI race wins global hegemony, and right now that race is America's to lose.
All of which is to say, there's a reasonable argument that America is currently sitting at a firm local minimum in strength and prosperity, which conversely means that China is plausibly approaching a ceiling on its own relative military and economic power for the foreseeable future. If that is the case, the next decade or so may be an exceptionally high-risk period for Taiwan. However, it also means that competent US leadership would throw everything it has at a defense of Taiwan in the event of an invasion; irrespective of any fabrication capacity that may end up built out in the US, allowing a Chinese takeover of the main TSMC facilities would surrender far too great a strategic asset in the AI race.

That being the case, while Chinese leadership may or may not agree, I would argue that the rational move on China's part would actually be to give up on Taiwan and focus on investing heavily in SMIC and other fronts of the AI race. Invading would at best yield a Pyrrhic victory, and at worst an expensive defeat that burns a bridge with the people of Taiwan for generations. The right move would be to put aside the short-term economic gambit and nationalistic fervor, and instead lay out a roadmap for a possible future peaceful unification or alliance by proving themselves a good neighbor over time.
That was my exact reaction. I'm not sure how much I like Nvidia acquiring Arm, but on some level as an American my instinct is to encourage it.
If it is the FTC's actual position that the deal would harm consumers or the industry as a whole, it's certainly admirable that they would ostensibly prioritize that over US strategic interests.
This makes me wonder if their analysis shows that the merger would do sufficient harm within the industry as to actually run counter to US interests. If Arm is shaping up to become a pillar of the Western world/economy while China and its sphere of influence consolidate around RISC-V, then anything that harms Arm's market position is also a geopolitical risk to the West. The US government pushing for such a merger, at a time when China is investing heavily in semiconductor manufacturing capabilities while eyeing a conquest of Taiwan/TSMC, would therefore be shooting itself in the foot. Better to grow the pie than risk blowing it up for a slightly larger slice.
I don't think it's the US strategic interest for Nvidia to own ARM. International corporations don't have any particular loyalty to the country where their corporate HQ is located, and fewer semiconductor companies just mean higher prices and fewer choices. Also the acquisition would accelerate movement by Nvidia's competitors away from ARM. The only pie that is grown by merging two successful companies is the wealth of the stockholders, everyone else is worse off.
Looking at this from the perspective of a US citizen I see it as a very bad merger and I'm glad they're suing to stop it. We need more competition not less. Nvidia doesn't exactly have the best record in serving the public.
"then anything that harms Arm's market position is also a geopolitical risk to the West"
This is some seriously flawed thinking, extremely convenient for corporate interests. Mixing up markets with national security leads to the kind of atrocities that make totalitarian states proud.
"Coca-Cola Co.'s Colombian bottlers are working with death squads to kill, threaten and intimidate plant workers,"
I haven't read the article or the book in question, but this reminds me of the hullabaloo about git master branches.
Never in my life have I associated git or its default branch name with American slavery. I would be shocked if a single black programmer in the world had considered before last year that it might be offensive. I haven't discussed the issue with any black friends or family, but it's just such an odd leap to make even if the claim about its etymology is potentially correct.
Of course, if even a minority of black programmers were or are offended, it's not my place as a non-black person to tell them they're wrong. I'll happily rename all my branches, update all my code that references master, rename all "master" test environments/hostnames, and update the gitconfig on all my machines, if credible surveys come out showing that a statistically significant number of black programmers genuinely consider master branches offensive.
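For concreteness, the per-repo portion of that work would look something like this sketch (Python driving git; the repo path handling and remote name are assumptions, and code references, hostnames, and CI config would still need updating by hand):

```python
import subprocess

def rename_default_branch(repo: str, old: str = "master", new: str = "main") -> None:
    """Rename a repo's default branch locally and on its remote."""
    def git(*args: str) -> None:
        subprocess.run(["git", "-C", repo, *args], check=True)

    git("branch", "-m", old, new)           # rename the local branch
    git("push", "-u", "origin", new)        # publish it and set upstream
    git("push", "origin", "--delete", old)  # retire the old remote branch

# One-time gitconfig change so new repos start on "main" (git 2.28+):
subprocess.run(["git", "config", "--global", "init.defaultBranch", "main"], check=True)
```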
Until then, I'm not going to do free labor to back up what so far appears to be an ill-conceived PR stunt on GitHub/Microsoft's part. The only arguments I've seen in favor of renaming master have been from white people appealing to technicalities/etymology, as if solving a puzzle to prove whether it should be offensive according to some logic, rather than any kind of appeal to the feelings of actual humans.
"Robotic wokeness" is a nice term to describe this phenomenon. I'm all for Black Lives Matter and correcting genuine injustices, but it's harmful to everyone when private interests artificially manufacture outrage and social justice causes solely for their own benefits.
Because y'know, branches? Trunk and branches? It's the older term. I ran into it using Fossil and liked it, now all my git repos have trunks as well.
Of course there's nothing wrong with master, that was just histrionic bullying. At best it was people doing something they can do (change variable names) as a substitute for something they can't (meaningfully affect racial injustice).
But like I said. I took the opportunity. Way better name than main... or master for that matter.
Makes sense. I'm not familiar with Fossil, but I know trunk was also the standard when I used SVN back in the day. I think the trunk-branch metaphor makes more sense in the centralized VCS model where there is actually something special about the trunk that makes it distinct from branches, whereas in a DVCS like git master is just another branch except that it happens to be the default one checked out.
Ultimately, it doesn't matter to me in and of itself. Even before last year, it wasn't that unusual to see git/GitHub repos with default branch names other than master (dev is probably the next most common one I've seen), and presumably not because of anything to do with racial justice. "Master" seems most practical to me for the moment, since it's the most standard/common term and is relatively non-overloaded, whereas "main" may be more ambiguous unless expanded to "the main branch" (depending on context).
It almost feels silly to bikeshed about something so minor, but it's not fair for anyone to demand that millions of people change their behavior and/or spend money without meeting some reasonable burden of proof. It's a slippery slope to allowing such unfalsifiable Pascal's-wager-type propositions ("foo may or may not be harmful, so better do bar instead just in case") to be used for more malicious manipulation of markets and geopolitics.
Trunk worked fine for Subversion since it kind of sucked at branching. You generally had to have a reference branch where everything else was understood to be a copy of it.
Modern systems like git don't really work this way. I don't think trunk is properly descriptive anymore because it isn't necessarily the root of all branches. In a GitFlow branching model, commits can move back and forth on feature, bugfix and release branches without ever being merged to develop. On top of that you have branches like gh-pages which should have no shared history at all with other branches.
In keeping with GitFlow, the main development branch on many of my projects is called "develop". I like the name: this is where development happens, where new features are first merged. Unlike "master" or "main" it doesn't imply that it's stable or deployable, and unlike "trunk" it doesn't imply that it's special or that it's the root of all other branches.
> Modern systems like git don't really work this way. I don't think trunk is properly descriptive anymore because it isn't necessarily the root of all branches. In a GitFlow branching model, commits can move back and forth on feature, bugfix and release branches without ever being merged to develop. On top of that you have branches like gh-pages which should have no shared history at all with other branches.
This criticism applies to every other branch name, especially "master" (it's not a master copy of anything, then). Maybe "main" would be the best one, since it highlights the most important branch of that git tree.
Sure, and this is one of the nice things about git, that it supports a number of flexible workflows.
My repos are in fact trunk and branch shaped: the trunk branch is always the first commit, and I tend to just have feature branches. That won't last forever, I'll need to start having release branches and the like, but 'trunk as root' with the first commit is going to be a constant.
So 'develop' is a fine choice. I worked at a company which had a 'master' branch but really only used it for releases. It wasn't a continuous deployment model, so everything would land on 'develop', turn into a candidate branch near release time, and then land once on master when it was blessed for release.
So master was just a series of squashed deltas between point releases, which kind of matches the semantics of 'master' if you squint: it was like the gold master from which release copies (including for one client who took delivery by CD-R!) were pressed.
Most of the Git projects I've worked with followed the "trunk" model of there being one core branch; in theory, you can use git in other ways but in practice that's the main one. That said, trunk works, but so does main, production, or even development depending on whether you want to communicate that the mainline branch is considered stable or unstable.
One good thing that has come of Github changing the master/main/default/primary branch to "main" is that it forces git-related software/products to drop the assumption that the primary branch is `master`.
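For example, instead of hardcoding `master`, a tool can ask git which branch the remote's HEAD points at. A minimal sketch in Python, assuming the `origin/HEAD` ref is set (as it is after a normal clone):

```python
import subprocess

def default_branch(remote: str = "origin") -> str:
    """Ask git for the remote's default branch instead of assuming 'master'."""
    # "git symbolic-ref --short refs/remotes/origin/HEAD" prints e.g. "origin/main".
    ref = subprocess.run(
        ["git", "symbolic-ref", "--short", f"refs/remotes/{remote}/HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return ref.removeprefix(f"{remote}/")  # "main", "master", "trunk", ...
```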
Unfortunately, Heroku "fixed" this problem in completely the wrong way: it now allows deployments from `master` or `main`, instead of making the branch name user-configurable.
So although I'm not particularly a fan of the PC brigade researching the etymology of technical terms to find new ways to feel outraged, I do appreciate the fact that in many cases, it forces developers to create software without arbitrary assumptions.
For the record, I do think in many (most?) cases these crusades [0] are the social justice equivalent of bike-shedding, so that companies and individuals can pat themselves on the back and say "we're fixing racism/sexism/foobarism" without actually doing any real work to solve inequality issues in the tech industry. It also allows them to squarely put the blame on others (i.e. whichever 1970s white men came up with master/slave or blacklist/whitelist), while avoiding the need for meaningful introspection and contemplation of their own role (as people in the industry, regardless of identity) in the inequality of the tech industry.
[0] Apparently, this is also a word we're not allowed to use
In my opinion and experience, people bringing up "the 'master' name for hard drive cables is bad" and "look, they want to change the name of an insignificant detail in computers, but no black people really care" are put in the same bag and forced to tag along.
Apparently my original reply to this was controversial. Let's try again. Please explain how it has nothing to do with me when I'm the one being asked to foot the bill.
Let's say it costs a total of $500-$1000 worth of resources to excise the term "master" from all the code and infrastructure under my control. You're asking me to spend that uncritically and unquestioningly. If that's what you want, send me $1000 and it'll be done by the end of the month.
As I said, I'm perfectly willing to donate to progressive causes I believe in (and do, frequently); I have no problem doing this if someone makes the minimal effort to prove that it would be a positive use of my money. I have a very big problem with donating to what amounts to Microsoft's marketing department. Convince me that this is more worthwhile than donating the same amount of cash to charity.
You're free to ignore my request and suggest (again) that I'm a racist, but that's not going to open my wallet; I'll just continue using master branches and also think you're a jerk.
Devil's advocate: why should they be selective about who they receive donations from?
Obviously they shouldn't name a building or something after the guy, but, to give an extreme example, would it be a problem to take money with no strings attached from a serial killer or war criminal?
Edit: Okay, this is a pretty good answer: https://news.ycombinator.com/item?id=20963945. I mean maybe it could be an acceptable solution for them to make a public statement like "we hate this guy but we're taking his money anyway", but having no association whatsoever is probably safer.
Epstein cited his relationships with these schools (built through fundraising) repeatedly both to intimidate his victims into silence and convince authorities to let him go. This wasn't a case of him sending a check and walking away, he was extracting a lot of value for his predatory operation from these institutions. They should face the music for their role in what happened.