Hacker News | mhink's comments

I was gonna post this! I actually kept it bookmarked front and center, and have checked in on it for a while. It seems the agent has been blocked this whole time, waiting for its creator to put it in touch with someone it needs to talk to. The creator, meanwhile, seems too preoccupied with being an AI thought leader on Twitter to actually follow up on the "project". It got a lot of attention, though, which was obviously the point.

The entire thing is actually kind of irritating to me, because it's an insult to small farmers: an influential techie comes in, generates all kinds of hype about an AI running a farm, sets the project up as if it's going to be a revolutionary experiment, then apparently forgets about it the moment something new and shiny pops up. Meanwhile, the project completely fails to live up to the hype.

Not to mention, I feel a little bad for the agent- admittedly in the same way I'd feel "bad" for a robot repeatedly bumping into a wall. I wish he'd shut it all down, honestly.


I, too, almost feel bad for the agent. It's a strange sense of schadenfreude: on one hand, dealing with anxiety over the much-lauded transformation of the economy and the deepening schism in our society, and on the other, watching the initial attempts crash and burn:

> Apr 16, 8:01 AM

> Daily Check Complete

> Decision: Continue critical escalation - Dan introduction remains blocked at day 73, project still failing

> Rationale: Following FIDUCIARY DUTY principle - this is now day 73 of the same project-blocking issue that has prevented any farming progress since February 18th. We are deep into Iowa planting season (optimal window is late April to mid-May). Every day of delay reduces our chance of a successful harvest. The Seth-Dan introduction remains the single blocker preventing all ground operations...

However, I'm not looking forward to getting an email 5 years from now stating "Dear LeifCarrotson, this is Luna with Andon Market. Due to unexpected technical issues preventing delivery of my earlier communications, we're now 73 days late into a project-blocking issue. Please help me to get back on track!" I do not intend to have empathy for an AI.


One nitpick I noticed:

> String Slicing > You can extract a substring using bracket syntax with a range: s[start:end]. Both start and end are byte offsets. The slice includes start and excludes end.

Given that all strings are UTF-8, there doesn't seem to be a great way to iterate over a string by _code point_. Using byte offsets is certainly more performant, but I could see code-point iteration being a common request if you expect these programs to do much string manipulation.
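The language being discussed isn't shown here, but the hazard with byte-offset slicing of UTF-8 text can be sketched in Python, where slicing the encoded bytes at an arbitrary offset can land inside a multi-byte code point:

```python
s = "café"                 # 'é' encodes to two bytes in UTF-8
b = s.encode("utf-8")
assert len(s) == 4 and len(b) == 5

# A byte-offset slice can split a code point in half:
try:
    b[0:4].decode("utf-8")  # b'caf\xc3' -- the first half of 'é'
except UnicodeDecodeError:
    print("byte slice split a code point")

# Iterating by code point sidesteps the problem entirely:
print([ch for ch in s])     # ['c', 'a', 'f', 'é']
```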

Other than that, this looks pretty cool. Unlike other commenters, I kinda like the lack of operator precedence. I wouldn't be surprised if it turns out not to be a huge problem, since LLMs generating code in this language will be pattern-matching on existing code, which will always have explicit parentheses.
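As a toy illustration (in Python, which does define precedence): a fully parenthesized expression carries its grouping explicitly, so it means the same thing regardless of whether the language has precedence rules at all, which is presumably why a corpus of explicit-parens code keeps generated output unambiguous:

```python
# Under conventional precedence these two expressions differ:
assert 2 + 3 * 4 == 14      # '*' binds tighter than '+'
assert (2 + 3) * 4 == 20

# Fully parenthesized forms are unambiguous under ANY precedence
# rules -- or under none at all:
assert 2 + (3 * 4) == 14
assert (2 + 3) * 4 == 20
```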


And NTP, if I recall correctly.


When was that?


Apparently it is impossible to find the time or place to add them.


When was BGP? Or when was NTP?


I think it was a joke based on NTP being a time protocol.


whoosh


> basterized

And yet, it's still somewhat better than the Hacker News comment using bastardized English words.


> how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.

Ultimately, it's the same way you ship human-generated code at scale without causing catastrophic failures: by entrusting critical systems only to people who are trustworthy and have skin in the game.

There are two possibilities right now. Either AI keeps improving until the tools are so capable that completely non-technical stakeholders can trust them with truly business-critical decisions, or the industry develops a full understanding of the tools' capabilities and learns to dial in the right amount of responsibility for engineers (accounting for whatever additional capability AI provides). Personally, I think (hope?) we land in the latter situation, where an individual engineer can comfortably ship and maintain about as much as an entire team could in years past.

As you said, part of the difficulty is years of technical debt and bureaucracy. At larger companies, there is a *lot* of knowledge about how and why things work that doesn't get explicitly encoded anywhere. There could be a service processing batch jobs against a database whose URL is only accessible via service discovery, and the service's runtime config lives in a database somewhere, and the only person who knows about it left the company five years ago, and their former manager knows about it but transferred to a different team in the meantime, but if it falls over, it's going to cause a high-severity issue affecting seven teams, and the new manager barely knows it exists. This is a contrived example, but it goes to what you're saying: just being able to write code faster doesn't solve these kinds of problems.


To be fair, the phrase "begging the question" makes almost no sense from a modern English perspective: according to Wikipedia, it's already a bad translation of a Latin phrase tied closely to a specific debate format.

By contrast, the colloquial use feels like an abbreviation of the implicit phrase "it begs for the question to be asked", which makes so much more sense than the "correct" meaning that if I'm being perfectly honest, I'd rather use it.

I like Wikipedia's alternate name for the fallacy: "assuming the conclusion", because it explains what's actually happening.


Note: the Verge article links to this blog post, describing the situation in more detail: https://www.readmargins.com/p/doordash-and-pizza-arbitrage


Thank you, this was a fun rabbit hole to dive down. That blog also has a well-argued article about Zero Interest Rate Policy which relates to the doordash story: https://www.readmargins.com/p/zirp-explains-the-world


They could have made another $5 per 10 pizzas after order #1 by just delivering the pizza to themselves and sending the same boxes back out in the next delivery, and so on.


An actual DoorDash driver had to make the delivery, though. So you risk being reported, and if they take a while, the pizza gets cold.

But they also could have just raised prices on everything but the cheap one DoorDash was using for pricing.
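The arbitrage itself is simple arithmetic. With hypothetical numbers (the actual prices in the story differ), the per-cycle margin looks like this, with the food cost dropping out entirely if empty boxes are recycled:

```python
# All figures are hypothetical, chosen only to illustrate the mechanism.
doordash_price = 16.00   # what DoorDash charges the "customer" (the owner)
menu_price = 24.00       # what DoorDash pays the restaurant per pizza
food_cost = 7.00         # dough/toppings -- avoided if boxes are recycled

pizzas = 10
profit_with_real_pizzas = pizzas * (menu_price - doordash_price - food_cost)
profit_recycling_boxes = pizzas * (menu_price - doordash_price)

print(profit_with_real_pizzas)  # 10.0
print(profit_recycling_boxes)   # 80.0
```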


Risk being reported? To the company that you don't want interacting with you anyhow?


No, he means recycling the boxes not the pizza inside the box.

The pizza itself can be literally given away (although if not on the premises, then presumably a box would be required.)


I mean you can just have a silicone mold in there, no actual pizza required.


Junkfoodconomists term this "the velocity of pizza".


Maybe that's my EU mindset, but I'm baffled how it's even legal to add a company to your public listing - complete with fake phone number - and just declare they're taking deliveries, all against the explicit wishes of the company.

(Complete with "chill bro, I was just <s>joking</s> demand-testing you" at the end)

The blogger calls this being "tricked" to sign up for DoorDash. Seems to me, this is the same way a burglar "tricks" you into giving them your valuables.


I can baffle you even more: if you register your company in Delaware, you don't even need to specify who owns the company.

You only need to specify the name and address of the registered agent, which is sort of a "contact person", not somebody who works for the company.

https://www.delawarebusinessincorporators.com/blogs/news/can... and https://velawood.com/anonymity-in-delaware/


Lots of states do this; it's not just a Delaware thing. If you're doing "solidly interstate" business, there are other reasons to file in Delaware.


Other states are worse or better these days.

The US is


It could be a trademark violation, even in the US, under the argument that DoorDash was "passing itself off" as the infringed-upon company. However, DoorDash would then argue that it was being honest – it was genuinely delivering authentic goods, and so violated the trademark no more than a convenience store does by correctly claiming to sell Coca-Cola.


Well, you can probably add some fine print somewhere that listings are just for educational purposes or something and may not represent the actual company.


The pizza owner from that article is my wife’s cousin!


For what it's worth, the first two steps in your lookup would come naturally to a native speaker- it's a suffix formation similar to e.g. "cleanliness" and "friendliness".

"Curmudgeon" itself is interesting, because while it's not particularly common, I think a lot of native English speakers would recognize it because it's got a lot of character: for some reason, the way it feels to say and the way it sounds almost have some of the character of the meaning.


TFA mentions that they added personality presets earlier this year, and just added a few more in this update:

> Earlier this year, we added preset options to tailor the tone of how ChatGPT responds. Today, we’re refining those options to better reflect the most common ways people use ChatGPT. Default, Friendly (formerly Listener), and Efficient (formerly Robot) remain (with updates), and we’re adding Professional, Candid, and Quirky. [...] The original Cynical (formerly Cynic) and Nerdy (formerly Nerd) options we introduced earlier this year will remain available unchanged under the same dropdown in personalization settings.

as well as:

> Additionally, the updated GPT‑5.1 models are also better at adhering to custom instructions, giving you even more precise control over tone and behavior.

So perhaps it'd be worth giving that a shot?


I just changed my ChatGPT personality setting to “Efficient.” It still starts every response with “Yeah, definitely! Let’s talk about that!” — or something similarly inefficient.

So annoying.


A pet peeve of mine is that a noticeable amount of LLM output sounds like I’m getting answers from a millennial reddit user. Which is ironic considering I belong to that demographic.

I am not a fan of the snark and "trying to be fun and funny" aspect of social media discourse. Thankfully, I haven't run into, *checks notes*, "ding ding ding" yet.


> a noticeable amount of LLM output sounds like I’m getting answers from a millennial reddit user

LLMs are trained on data from the whole internet (of which Reddit is a big part). The result is a composite of all the text on the internet.


Did you start a new chat? It doesn't apply to existing chats (probably because it works through the system prompt). I have been using the Robot (Efficient) setting for a while and never had a response like that.


Followup: there is a very noticeable change in my written conversations with ChatGPT. It seems that there is no change in voice mode.


An article posted elsewhere in the comments (https://www.theargumentmag.com/p/illiteracy-is-a-policy-choi...) has a take that might explain a distinction:

> Billions of dollars are spent — and largely wasted — every year on professional development for teachers that is curriculum-agnostic, i.e., aimed at generic, disembodied teaching skills without reference to any specific curriculum.

> “A huge industry is invested in these workshops and trainings,” argued a scathing 2020 article by David Steiner, executive director of the Johns Hopkins Institute for Education Policy. “Given, on average, barely more than a single day of professional support to learn about the new materials; knowing that their students will face assessments that lack any integration with their curriculum; and subject to principals’ evaluations that don’t assess curriculum use, teachers across America are barely using these new shiny objects — old habits win out.”

> Mississippi improved its training through a 2013 law mandating that elementary school teachers receive instruction in the science of reading. It also sent coaches directly into low-performing classrooms to guide teachers on how to use material.

