As very much an outsider and, to some extent, apostate to all this, it's pretty astonishing to see.
Unironically not just delegating all thinking to a sketchy and untrustworthy machine, but doubling down on it by aping the caveman in the belief that this will more effectively summon the great metal-wing sky god and bring limitless yum stuff.
Wow. I don't even have to do anything. You guys are disemvoweling yourselves in some kind of strange ritual. You sure are trusting souls!
I believe that everyone deserves food, housing, and security regardless of what they do or don't do professionally. I have no logical argument, only a moral one, which I sense would not be sufficient to convince you.
From the post it's clear that the shop has a set schedule of services and prices that the bot is pulling from. All the things you're saying are true for a shop that needs to custom quote each job but do not apply to the situation as presented.
It's clear that the author interpreted the data that way, yes.
And perhaps the shop actually charges the same for brakes whether it is a Ford F-150 or a Toyota Corolla.
But that seems very unlikely to me. While they're both very common vehicles, they are also very different and the parts have substantially different costs associated with them.
As the resident Diagram Maker at my job I really appreciate any and all discourse on the topic. Knowing the purpose of your diagram is a hugely under-appreciated part of the process. Service flow chart or system architecture? High level system overview or actionable, followable flow-chart? The engineer in me always wants to put All The Things in the chart, to make it maximally "correct". It's never the right move. But how to make it clear what's included or not, and why?
I still struggle with finding the best approach each time; I'd love more discussion of this stuff.
Just because you said that you were interested in some opinions: one of the least appreciated aspects of any documentation (but especially diagrams) is defining who the stakeholders are at the start of the document. It's the difference between frustrated users who can't understand things and happy users who understand the limitations.
The corollary to this is that the best diagram boundaries are often along communication lines between teams. This is Conway's law all the way down. And the reason is that most often people use diagrams to get a spatial sense of where 'they' fit into things. I have only anecdotal evidence for this, but the most helpful and lasting diagrams I've ever made are when 1) they define (and stick to) specific stakeholders, and 2) they are delineated by groups/teams.
Yep! That's almost always the correct solution. It can be a lot to figure out, tho: which perspectives are most valuable to present? Are the linkages clear? Does this kind of box belong on THIS chart or THAT chart?!
The issue is that in domains novel to the user they do not know what is trivially false or a non sequitur and the LLM will not help them filter these out.
If LLMs are to be valuable in novel areas then the LLM needs to be able to spot these issues and ask clarifying questions or otherwise provide the appropriate corrective to the user's mental model.
In my job the task of fully or appropriately specifying something is shared between PMs and the engineers. The engineers' job is to look carefully at what they received and highlight any areas that are ambiguous or under-specified.
LLMs AFAIK cannot do this for novel areas of interest. (ie if it's some domain where there's a ton of "10 things people usually miss about X" blog posts they'll be able to regurgitate that info, but are not likely to synthesize novel areas of ambiguity).
They can, though. They just aren't always very good at it.
As an experiment, recently I've been using Codex CLI to configure some consumer networking gear in unusual ways to solve my unusual set of problems. Stuff that pros don't bother with (they don't have the same problems I face), and that consumers tend to shy away from futzing with. The hardware includes a cheap managed switch, an OpenWRT router, and a Mikrotik access point. It's definitely a rather niche area of interest.
And by "using," I mean: In this experiment, the bot gets right in there, plugging away with SSH directly.
It was awful with this at first, mostly consisting of a long-winded way to yet-again brick a device that lacks any OOB console port. It'd concoct these elaborate strings of shit and feed them in, and then I'd wander over and reset whatever box was borked again. Footgun city.
But after I tired of that, I had it define some rules for engaging with hardware, validation, constraints, and for order of execution, and commit those rules to AGENTS.md. It got pretty decent at following high-level instructions to get things done in the manner that I specified, and the footguns ceased.
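For the curious, a minimal sketch of what such an AGENTS.md might contain. Everything here is illustrative (device names, commands, and rules are my assumptions about this setup, not the actual file), though `uci set`/`uci commit` is the real staged-change workflow on OpenWRT:

```markdown
# Network hardware rules (illustrative sketch)

## Before any change
- Dump and save the device's current config locally before modifying anything.
- State the intended change and its expected effect in one sentence first.

## Making changes
- One change per command batch; never chain config commands blindly.
- On the OpenWRT router, stage changes with `uci set` and ask for
  confirmation before running `uci commit`.
- Never modify the management VLAN or the SSH listening interface in the
  same session as other changes.

## Validation
- After each change, verify SSH connectivity to the device before proceeding.
- If a device stops responding, stop and report; do not retry destructive steps.
```

The point of rules like these is less the specifics than the ordering: snapshot, declare intent, change one thing, validate, and bail out loudly on failure, which is roughly the footgun-prevention pattern described above.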
I didn't save any time by doing this. But I also didn't have to think about it much: I never got bogged down in wildly-differing CLI syntax of the weirdo switch, the router (whose documentation is locked behind a bot firewall), and access point's bespoke userland. I didn't touch those bits myself at all.
My time was instead spent observing the fuckups and creating a rather generic framework that manages the bot, and just telling it what to do -- sometimes, with some questions. I did that using plain English.
Now that this is done, I get to re-use this framework for as many projects as I dare, revising it where that seems useful.
(That cheap switch, by the way? It's broken. It has bizarro-world hardware failure modes that are unrelated to software configuration or firmware rev. Today, a very different cheap switch showed up to replace it. When I get around to it, I'll have the bot sort that transition out. I expect that to involve a bit of Q&A, and I also expect it to go fine.)
I caught that too. The piece is otherwise good imo, but "the luddites were wrong" is wrong. In fact, later in the piece the author essentially agrees – the proposals for UBI and other policies that would support workers (or ex-workers) through any AI-driven transition are an acknowledgement that yes, the new machines will destroy people's livelihoods and that, yes, this is bad, and that yes, the industrialists, the government and the people should care. The luddites were making exactly that case.
> while it’s true that textile experts did suffer from the advent of mechanical weaving, their loss was far outweighed by the gains the rest of the human race received from being able to afford more than two shirts over the average lifespan
I hope the author has enough self-awareness to recognize that "this is good for the long term of humanity" is cold comfort when you're begging on the street or the government has murdered you, and that he's closer to being part of the begging class than the "long term of humanity" class (by temporal logistics if not also by economic reality).
> We should hate/destroy this technology because it will cause significant short term harm, in exchange for great long term gains.
Rather:

> We should acknowledge that this technology will cause significant short term harm if we don't act to mitigate it. How can we act to do that, while still obtaining the great long term gains from it?