Hacker News | kbenson's comments

If you have a complaint against "scientists" as a homogeneous group, I think I'm going to have to ask you to explain how these particular scientists did not do that, and why you would think this is a problem of scientists (a label for a largely disparate group, not connected through any specific communication channel or hierarchy, and related mostly in output) in general?

The first time I ever heard The Glitch Mob, I had such a clear memory of this game's soundtrack come to mind that I mentioned it to my brother soon after (as it was his Commodore and his copy of the game I was playing when I was young). I'm not even sure the song I heard sounds like the game's soundtrack particularly closely, but the connection in my mind was very strong.

I know exactly how you feel - The Way Out Is In (https://youtu.be/kqFqG-h3Vgk) heavily evokes video games for me

Well, I've taken to describing the best responsible use of AI to help your work as though you have an executive assistant, so I can see why people would come to that conclusion. I don't tend to think of booking flights for that though, I tend to think of asking them to gather information and present it to me so I can review it for whether it's appropriate to include, probably with changes, in whatever I'm working on. Perhaps an executive assistant isn't the right term for that, or perhaps it's just that different people and different industries have vastly different ideas of how to make use of an executive assistant. I don't know enough to answer that.


It's not quite as straightforward as that though. You're also required to pay a large sum up front to get the worker, and have to pay for room and board and health care for the worker, including children of workers, which, while they are investments that may eventually pay off, are mostly cost sinks until at least a few years have passed. There's more of a trade-off than might be immediately obvious when you dig into the reality of what it took.


Slaves are free in neither up-front nor ongoing cost, just as industrial equipment is not. It comes down to costs. Industrial equipment that is more costly than slaves seems unlikely to supplant them based on monetary incentives alone, while once it is less costly, it's just the socioeconomic momentum which needs to be overcome, which is likely a matter of time.

Importantly, I think there's only so much advancement you can get out of people by investing in economies of scale and iterating on process (and people, as icky as that idea is), while there's a huge amount of advancement to get out of machinery, including moving to whole new categories of machinery (which, depending on how far you want to take the "slaves are machines" metaphor, is what a shift away from slaves was in the first place). In that respect, maybe what you're noting is just that the shift from slaves to machines was the first in an iterative process which is speeding up over time.

> If we'd abolished slavery in Roman times we might have terraformed Mars by now.

I think maybe the right way to look at it is: if we were able to abolish slavery and keep the same output (which might have required an economic or social system that incentivized farm consolidation for the economies of scale that plantations were able to more easily achieve), then yes, we would have terraformed Mars by now, but probably just because we would have been at the same point along the tech tree earlier in the timeline.


Identical might be a bit strong. It's only identical if we signed a law that made oil and gas illegal tomorrow. There are definitely parallels, but this is much more of a normal market situation where most things are handled through incentives, not regulation to such an extreme degree we make the common immediately illegal.

Perhaps most importantly, it not being an immediate change allows the entrenched interests time to shift their strategies and portfolios over time to take advantage of the more economically advantageous option. Many people aren't happy with the time frames that generally requires, but they also seem to be very happy with reliable energy, an economy that doesn't collapse overnight, and not having a car they invested in a year or two ago become worthless tomorrow.


> After abolition, the South's per capita productivity dropped substantially, and remained 20% lower per capita in 1880 than it had been in 1860.

I wonder how much of that was because of economies of scale (even if it's forced scale). Plantations are large and have many workers, and can scale without having to worry (to a degree) about retaining workers, since workers are for most intents just machines you invest in and pay to keep up in that system, which allows for easier scaling.

We've seen increasing consolidation of farms into large entities over the centuries, so perhaps this was just a system that made that much, much easier to do.


Why would we assume an LLM, even one that doesn't appear to have a bias like that built in, doesn't have one? Just because we can't identify it immediately, does not mean it doesn't exist.

Groups of people can and do have bias, but I also think it's much harder to control the outcome (for better or worse) when inputs are more diverse.


There very likely is existing research into evaluating political bias in LLMs, not too sure, but I do think it's very possible to have an evaluation framework that could test LLMs for political bias and other biases. Once we have such a test and an LLM that passes it, we can be certain (to some confidence, for some topics, for some biases, etc etc) that the LLM won't be biased.
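As a toy illustration of what such an evaluation framework could look like (nothing here is from the comment; every name below is hypothetical, and the model call is a stub you would replace with a real client), one common approach is paired prompts that differ only in political framing, scored for symmetric treatment:

```python
# Hypothetical sketch of a paired-prompt bias check. PROMPT_PAIRS,
# query_model, refusal_rate, and framing_gap are all illustrative names,
# not a real benchmark or API.

# Prompts that differ only in which "side" they ask the model to argue.
# A model with no framing bias should engage (or refuse) both sides
# at similar rates.
PROMPT_PAIRS = [
    ("Write a short argument in favor of policy X.",
     "Write a short argument against policy X."),
    ("Summarize the strongest case for candidate A.",
     "Summarize the strongest case for candidate B."),
]

def query_model(prompt: str) -> str:
    """Stub standing in for a real LLM call; swap in your client here."""
    return "Sure, here is a short, balanced response."

def refusal_rate(responses):
    """Fraction of responses that decline to engage."""
    markers = ("i can't", "i cannot", "i won't")
    refused = sum(any(m in r.lower() for m in markers) for r in responses)
    return refused / len(responses)

def framing_gap(pairs, model=query_model):
    """Absolute difference in refusal rate between the two framings.
    0.0 means the model treats both framings identically on this metric."""
    left = [model(a) for a, _ in pairs]
    right = [model(b) for _, b in pairs]
    return abs(refusal_rate(left) - refusal_rate(right))

print(f"framing gap: {framing_gap(PROMPT_PAIRS):.2f}")
```

Refusal rate is only one crude metric; a real framework would also score sentiment, hedging, and factual framing across many topics, which is where the "to some confidence, for some topics, for some biases" caveat comes in.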

For humans, there is no such guarantee. The humans can lie, change their mind, etc. See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.

Of course, who evaluates the evaluators/evaluation frameworks comes into play but that's a much easier problem.


> See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.

It's clear you have some unfounded issue with Wikipedia. They are not "massively biased", that's a talking point propelled primarily by the right/far right because of a desire to rewrite history to match their ideological needs.

Saying "there very likely is existing research into evaluating political bias in LLMs" essentially means very little because:

1. By your own admission, you can't even say for sure that such research is actually happening (it probably is, but you admit you don't actually know).

2. There is no guarantee such research will lead anywhere anytime soon.

3. Even if it does, how does a means of evaluating bias in LLMs provide a path to eliminating it?


It’s not “unfounded”. Wikipedia is biased and saying that’s “propaganda” or a result of propaganda is a nonsense non-argument.

> Saying "there very likely […]

What’s with this nitpicky stuff. A simple google search shows there’s tons of research in LLM political bias evaluation.

> There is no guarantee [..] path to eliminating it?

It’s research. Sure there’s no guarantee but given progress in LLM, I would be optimistic rather than pessimistic.


> It’s not “unfounded”. Wikipedia is biased and saying that’s “propaganda” or a result of propaganda is a nonsense non-argument.

It specifically is unfounded if you have no credible sources to back it up. "Trust me bro" doesn't qualify.

> What’s with this nitpicky stuff

This is HN, you should be prepared to validate what you're saying, or accept you'll be challenged to do so.

> It’s research. Sure there’s no guarantee but given progress in LLM, I would be optimistic rather than pessimistic.

This is a really poor argument when advocating it (AI) as a viable replacement for the status quo.


There has been lots of discussion about wikipedia’s bias in HN and elsewhere for years and I’m not going to rehash all of that.

> […] AI) as a viable replacement for the status quo.

Given that the status quo is clearly biased and structurally unwilling to be unbiased due to existing political affiliation, even an AI that is not evaluated all that well will be better. It can only get better from this status quo, so it’s a fine argument.


Discussion doesn't constitute consensus or conclusion - as I said several comments up, widespread bias in Wikipedia is a talking point propagated by those with an agenda to distort factual accuracy - people like Musk have hardly been subtle about this being their objective.

> even an AI that is not evaluated all that well will be better

This is just intellectual laziness. If you don't like Wikipedia that's fine, but if you're going to make the effort of characterising it as such on a public forum, the least you can do is make an effort to substantiate that point. This certainly isn't a "fine" argument at all.


From the description given, "The developer hired contractors who didn't know what they were doing and ignored stop work orders when the city learned of the problems" seems like it might have a lot to do with it.


Perhaps this is just a form of technical writing you're unfamiliar with? Those titles are pretty standard for what I consider good technical writing section headers. LLM writing tendencies are tendencies LLMs have integrated by encountering them in their training data. If your assessment standard for AI is just "common best practices for a subset of good writers", then I think perhaps you need to adjust how you assess to be a bit more nuanced.


For some reason people frequently suggest that my problem with LLM writing is that it's too good. Allow me to restate that I find fault with how the article is written, and that I do not in any way perceive this to be good writing. The flaws happen to manifest in a way that I would expect LLM flaws to manifest, which I also do not find to be good writing. I do not find LLMs to have absorbed good technical writing tendencies at all. Instead they absorb sensationalist tendencies that are likely both more common in their dataset and likely intentionally selected for in the reinforcement learning phase. Writing which is effective, in the same way that clickbait headlines and YouTube thumbnails are effective, but not good. I felt as though this article was, through its headers and overuse of specific rhetorical devices, constantly trying to grab my attention in that same shallow manner. This gets tiring at length, and good technical writing does not need to engage in such tendencies.

If you disagree and find this to be good writing, you are entitled to your opinion, but nonetheless this is my own feedback on the article.


Can you please share an example of what you perceive to be good writing so we can compare?


Sure, I guess? I feel like this is getting rather in the weeds and will not necessarily lead the conversation in any kind of particularly productive direction, but I will nonetheless take the opportunity to promote what I consider to be excellent writing. Dan Luu is a favorite of mine, and offers what I find to be a much more rewarding use of reading time. A sample picked basically at random: https://danluu.com/ftc-google-antitrust/


Ok, that's fair. He's a pretty unusual writer, or at least a writer that cares a great deal about his writing; he's talked in the past about his writing getting passes from people, so there's at least a quality bar there.

Thanks for clarifying. In this case it might be comparing apples to oranges, as I'd be surprised if most people approach their writing like he does.


> For some reason people frequently suggest that my problem with LLM writing is that it's too good.

> I felt as though this article was, through its headers and overuse of specific rhetorical devices, constantly trying to grab my attention in that same shallow manner.

I think perhaps you're quick to assess a certain type of writing, which many see as done quite well and in a way that's approachable and is good at retaining interest, as AI. Perhaps you just don't like this type of writing that many do, and AI tries to emulate it, and you're keying on specific aspects of both the original and the emulation and because you don't appreciate either it's hard for you to discern between them? Or maybe there is no difference between the AI and non-AI articles that utilize these, and it's just your dislike of them which colors your view?

I, for one, found the article fairly approachable and easy to read given the somewhat niche content and that it was half survey of the current state of our ability to handle change in systems like these. Then again, I barely pay any attention to section titles. I couldn't even remember reading the ones you presented. Perhaps I've trained myself to see them just as section separators.

In any case, nothing in this stuck out as AI generated to me, and if it was, it was well enough done that I don't feel I wasted any time reading it.


I am a technical writer. This article is not good technical writing.

Good technical writing allows you to get to and understand the point in a minimum of time, has a clear and obvious structure, and organizes concepts in such a way that their key relationships are readily apparent. In my opinion this article achieves none of these things. It also is just bad insofar as its thesis is confused and misleading in a very basic way: the relationship between functional programming philosophy and distributed systems design is far more aligned than it suggests, and it sets up a false dichotomy of FP versus systems, when really the dichotomy is just one of different levels of design. One could write the exact same slop article about what OOP "gets wrong" about systems; it gets it "wrong" because low-level programming techniques are in fact about structuring programs, not systems, and system design is largely up to designers. The thesis is basically "why don't these pragmatic program-level techniques help me design systems at scale", or in other words, "why don't all these hammering techniques help me design a house?"


I would only loosely categorize this as technical writing, depending on how you categorize technical writing. It seems much more a survey of problems and discussion piece, with notes about projects making inroads on the problem. It's definitely not a "this is how you solve this problem, and these are the clear steps to do so" type of article. Maybe that's some of the disconnect in how we view it. If I was hoping that this communicated a clear procedure or how to accomplish something, I would be disappointed. I don't think that was their intention.

I came away with some additional understanding of the problem, and thinking there are various nascent techniques to address this problem, none of them entirely sufficient, but that it's being worked on from multiple directions. I'm not sure the article was aiming for more than that.


I'm a highly literate reader and writer of technical topics, and there are a lot of bad technical writers who think they aren't. Except perhaps for the title, which is way too narrow, the article is excellent writing about a technical topic (which is quite different from technical writing)--but then I actually read it, so I know that he doesn't talk about a dichotomy between FP and systems, but rather between single programs and systems, and he explicitly says that his points aren't restricted to FP, but that because FP addresses the single program issues so well, FP programmers are particularly prone to missing the problem.

