i've been using AI for as long as GPT has been out, so if you can't see through the rambling, overly complex to make you sound smarter kind of text, as well as the written patterns that are always used ad nauseam like "this thing isn't JUST this, it's THIS" -- i dunno how else to prove it to you. IYKYK.
I have also used GPTs since 2020. I am also a writer. Much of the writing equated with “generated by AI” is so precisely because it’s broadly trained on real writing.
So the claim of “AI slop” without proof is little more than hearsay. It would be helpful to have any evidence.
It’s not about just the writing in one example, it’s about writing patterns—which are common—being equated with AI simply because they’re common.
if you're a writer, and you're using GPT for so long and you can't see it as obvious, i dunno what to tell you at this point. i guess LLMs are trained particularly on this guy's writing.
If you read his original draft you can see how much of it was carried over, as well as how his original writing exhibits many of the same patterns you argue prove an AI wrote the final text.
I don’t think your point is as strong as you believe it is.
Lastly, I work directly with AI models and utilize all popular generators every single day, so I don’t know why you think you’re the expert here.
my point was that it's AI slop. whether his original point is intact doesn't matter to me. the fact that you're now defensively doubling down and steering the conversation in a direction that serves you better is just cringe. i bet you're a pleasure to work with. c ya later nerd.
i just don't feel like enumerating all of the common patterns ai slop produces. again, if you don't see this as obvious, i can tell you're clearly not using this stuff often enough (which might be a good thing)
The thing is, these stylistic patterns existed before AI, and weren’t completely atypical. Maybe you’re using LLMs so much that you’re over-associating them with AI now. Or maybe the author is using LLMs so much that he’s unconsciously adopted some of the patterns in his own writing.
Well he literally confirms it was from ChatGPT in a later reply, so there's that.
And his original draft is conspicuously missing the telltale "it's not X -- it's Y" and overall breathless dramatic flair that people like the poster you're replying to (correctly) picked up on.
i think the much higher probability isn't that this guy wrote literally like an LLM before LLMs came out, but rather that he just used an LLM to write all of it. You can see even more of these examples directly on his campaign site.
> certainly feels nice to place these obnoxious HN know-it-alls into their place.
You don't have to take the time to explain your reasoning if you don't want to, but "obnoxious know it all" is not a stone you should throw while at the same time refusing to explain yourself and saying anyone who can't see what you see is necessarily missing the obvious.
it's too difficult honestly. there are a lot of classic easy traps -- "it's not just X, it's Y" -- which are a dead giveaway, especially when they're used like 3-4 times in one essay. But the harder ones to spot, IMO, are ones where the overall tone is unnecessarily complex. E.g.:
"When replacement is cheaper than retention, the decision gets framed as strategy instead of consequence."
This sentence is tight and on paper reads well, but it's robotic. It's kind of like turning a dead simple if/else statement that's pleasurable to read into a one-line ternary statement. Technically a one-line sentence, but now I have to re-read it like 5 times to understand it. The flow is dead.
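To make the analogy concrete, here's a rough sketch in Python (the function and label names are made up purely for illustration):

```python
# Plain if/else: the branching reads top to bottom, no mental unpacking.
def tone_label(score):
    if score > 0.5:
        label = "breathless"
    else:
        label = "flat"
    return label

# The same logic as a one-line conditional expression ("ternary"):
# equivalent and tighter, but the reader has to re-parse it.
def tone_label_terse(score):
    return "breathless" if score > 0.5 else "flat"
```

Both functions do exactly the same thing; the difference is only in how much work the reader does.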
Another example:
'AI becomes the excuse, not the cause. It’s the clean narrative that hides what’s actually happening: experienced workers being swapped out through global labor substitution while leadership talks about “efficiency” and “the future of work.”'
Starts off with a short & trite sentence (LLMs love this if you don't steer them away). The other thing LLMs _love_ to do unprompted is: "It's the X: _insert_next_loaded_statement_here"
It's hard to get my point across, and I hope you kinda see it? I'm not a linguist, but these patterns are literally in every piece of LLM writing I've ever seen.
Again, you don't have to explain yourself, just don't be rude about it. It's hypocritical to call someone obnoxious and a know it all while you are engaging in schoolyard behavior and refusing to allow them to challenge your reasoning.
Saying nothing is an option. Other people who agree with you will be happy to explain their reasoning. Or maybe they won't and the conversation quietly fades away. Both are preferable.
give me a break. have you read the other comments? asking for proof with the most smug attitude possible. it's the definition of obnoxious HN commenters. and that's not even counting the one guy that wrote "you sound and write like a bot", got downvoted and deleted the post. i don't need to take any high roads here--it's the internet. As far as being "rude" goes, it's a solid 2/10.
I'm not saying "take the high road" as much as "don't wrestle with a pig." It certainly isn't appropriate to call you a bot. But they probably insulted you to provoke you, right? Why give them any additional ammunition?
That's just my two cents, ultimately it's your business.
When what you’re presenting is something you have actual knowledge about, it can be easier to say what you think rather than stress about “sticking to the script.”
True of public speaking just as much as interviewing.
Many people unnecessarily stress about public speaking because they believe the script is the only thing that matters.
Though I admit there is no one size fits all when it comes to speaking.
Much of blogging (and creation in any form) is often exactly that: an existing idea remixed or reconsidered from someone else’s perspective.
After all, the telephone wasn't an especially original idea for a long while before it finally reached a salient and effective form.
My take has always been: just because you and I have heard the concept before does not mean everyone has. And if one person finds it helpful to read in this way, that’s a nice thing to have provided the world. :)
In my experience, when you’re applying to hundreds or thousands of jobs, odds are you aren’t clear on what your strengths are and what you want from your career.
And, unfortunately, that often comes across in a résumé and application.
Much better, I think, to spend time better conveying your strengths and interests as they relate to a specific job (or type of job). You’re much more likely to get the interest of relevant recruiters and hiring managers in this way.
The discourse around "are social websites too powerful?" is important to have, and part of the reason, I think, is specifically the role individuals play within those websites.
Elon's public statements around the purchase of Twitter are a prime example, having impacted the company's stock and morale. What responsibility do individuals have when it comes to making statements on a social website? What responsibility do the websites themselves have related to those statements?
This is an excellent take. I personally never want to work in an office again, but I know many—many, many—people who never want to work remotely again after the past two years of pandemic.
We need both: companies that want to support fully remote staff and those that want everyone in-office every day. Having companies with strong stances is a good shift for everyone because it enables workers to easily figure out where they do (or do not) want to work.
I believe we're conditioned to understand various fidelities of information based on the principles of design. These fundamental principles—things like contrast, balance, proportion, hierarchy, motion, and variety—help us determine how to interpret what we experience.
For example: a webpage that has a clear hierarchy of information, is visually balanced, and uses motion to attract attention and convey concepts is much more likely to be interpreted as a final product. Whereas a page that is a bit disorganized may be understood as being in early development.
The problem is most landing pages are one page, so the creator invests considerable time in making them look and work well, leading to the perception of a complete project.
Then when the time comes to build a fully functional website/product, there's a lot more to invest in and so less time is spent.
Paradox of shipping an MVP product or business, I guess.
When someone comes to me and asks how to improve their knowledge of front-end or interaction design, I always recommend reading through these guides.
Why? Companies like Apple, Google, and Microsoft have put a ton of work into identifying usable patterns which have become convention across platforms.
If you want to be a better designer or front-end engineer, take advantage of the work these companies and organizations have done by identifying and sharing these guidelines!
I certainly appreciate learning from history rather than repeating mistakes. Understanding the change over time lets you at least identify why some things worked and some didn't, which allows you to make new work more easily and more completely.
Gem | UI Software Engineer | San Francisco, CA (remote until 2022) | Full-time
At Gem, our mission is to build the operating system for modern recruiting. Gem is an all-in-one recruiting platform that integrates with LinkedIn, email, and your applicant tracking system, enabling recruiting teams to find, engage, and nurture top talent.
Our technology stack is built with velocity in mind: GraphQL, React, and Python are just a few of the technologies we use. Reliability and consistency are also very important to us. We sync millions of emails, résumés, and applications, and rely on that information to inform our customers (has anyone on the team reached out to this person?) and our services (should we send a follow-up email?). You would be the first front-end engineer to partner closely with design and engineering to help design and implement a seamless recruiter experience.
Throwback to the StumbleUpon days.