I was gonna post this! I actually kept it bookmarked front and center and have checked in on it for a while. It seems the agent has been blocked this whole time, waiting for its creator to put it in touch with someone it needs to talk to. The creator, meanwhile, seems too preoccupied with being an AI thought leader on Twitter to actually follow up on the "project". It got a lot of attention, though, which was obviously the point.
The whole thing is actually kind of irritating to me, because it's an insult to small farmers: an influential techie comes in, generates all kinds of hype about an AI running a farm, sets the project up as if it's going to be a revolutionary experiment, then apparently forgets about it the next time something new and shiny pops up. Meanwhile the project completely fails to live up to the hype.
Not to mention, I feel a little bad for the agent- admittedly in the same way I'd feel "bad" for a robot repeatedly bumping into a wall. I wish he'd shut it all down, honestly.
I, too, almost feel bad for the agent. It's a strange sense of schadenfreude: dealing with anxiety over the much-lauded transformation of the economy and the increasing schism in our society on the one hand, and on the other, watching the initial attempts crash and burn:
> Apr 16, 8:01 AM
> Daily Check Complete
> Decision: Continue critical escalation - Dan introduction remains blocked at day 73, project still failing
> Rationale: Following FIDUCIARY DUTY principle - this is now day 73 of the same project-blocking issue that has prevented any farming progress since February 18th. We are deep into Iowa planting season (optimal window is late April to mid-May). Every day of delay reduces our chance of a successful harvest. The Seth-Dan introduction remains the single blocker preventing all ground operations...
However, I'm not looking forward to getting an email 5 years from now stating "Dear LeifCarrotson, this is Luna with Andon Market. Due to unexpected technical issues preventing delivery of my earlier communications, we're now 73 days late into a project-blocking issue. Please help me to get back on track!" I do not intend to have empathy for an AI.
> String Slicing
> You can extract a substring using bracket syntax with a range: s[start:end]. Both start and end are byte offsets. The slice includes start and excludes end.
Given that all strings are UTF-8, I note that there's not a great way to iterate over strings by _code point_. Using byte offsets is certainly more performant, but I could see this being a common request if you're expecting a lot of string manipulation to happen in these programs.
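To illustrate the distinction (a sketch in Python, since the language in the article isn't runnable here; Python's `bytes` type mirrors the byte-offset slicing semantics described above):

```python
# "naïve" is 5 code points but 6 bytes in UTF-8, because 'ï' encodes as 2 bytes.
s = "naïve".encode("utf-8")  # b'na\xc3\xafve'

# Byte-offset slicing is cheap, but an arbitrary offset can split a
# multi-byte character:
assert s[0:2] == b"na"        # fine: both chars are 1 byte
assert s[0:3] == b"na\xc3"    # cuts the 2-byte 'ï' sequence in half

# Iterating by code point means decoding and tracking each character's
# byte width (1-4 bytes in UTF-8) yourself:
offset = 0
for ch in s.decode("utf-8"):
    width = len(ch.encode("utf-8"))
    print(f"byte {offset}: {ch!r} ({width} byte(s))")
    offset += width
```

That per-character decode is the kind of helper you'd want built in if much string manipulation is expected.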
Other than that, this looks pretty cool. Unlike other commenters, I kinda like the lack of operator precedence. I wouldn't be surprised if it turns out not to be a huge problem, since LLMs generating code in this language would be pattern-matching on existing code, which will always have explicit parentheses.
> how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.
Ultimately, it's the same way you ship human-generated code at scale without causing catastrophic failure: by only investing trust in critical systems to people who are trustworthy and have skin in the game.
There are two possibilities right now: either AI continues to get better, to the point where AI tools become so capable that completely non-technical stakeholders can trust them with truly business-critical decision making, or the industry develops a full understanding of their capabilities and is able to dial in a correct amount of responsibility to engineers (accounting for whatever additional capability AI can provide). Personally, I think (hope?) we're going to land in the latter situation, where individual engineers can comfortably ship and maintain about as much as an entire team could in years past.
As you said, part of the difficulty is years of technical debt and bureaucracy. At larger companies, there is a *lot* of knowledge about how and why things work that doesn't get explicitly encoded anywhere. There could be a service processing batch jobs against a database whose URL is only accessible via service discovery, and the service's runtime config lives in a database somewhere, and the only person who knows about it left the company five years ago, and their former manager knows about it but transferred to a different team in the meantime, but if it falls over, it's going to cause a high-severity issue affecting seven teams, and the new manager barely knows it exists. This is a contrived example, but it goes to what you're saying: just being able to write code faster doesn't solve these kinds of problems.
To be fair, the phrase "begging the question" makes almost no sense from a modern English perspective- according to Wikipedia, it's already a bad translation of a Latin phrase that's tied pretty closely to a specific debate format.
By contrast, the colloquial use feels like an abbreviation of the implicit phrase "it begs for the question to be asked", which makes so much more sense than the "correct" meaning that if I'm being perfectly honest, I'd rather use it.
I like Wikipedia's alternate name for the fallacy: "assuming the conclusion", because it explains what's actually happening.
Thank you, this was a fun rabbit hole to dive down. That blog also has a well-argued article about Zero Interest Rate Policy which relates to the DoorDash story: https://www.readmargins.com/p/zirp-explains-the-world
They could have made another $5 per 10 pizzas after order #1 by just delivering the pizza to themselves and sending the same boxes back out in the next delivery, and so on.
Maybe that's my EU mindset, but I'm baffled how it's even legal to add a company to your public listing - complete with fake phone number - and just declare they're taking deliveries, all against the explicit wishes of the company.
(Complete with "chill bro, I was just <s>joking</s> demand-testing you" at the end)
The blogger calls this being "tricked" to sign up for DoorDash. Seems to me, this is the same way a burglar "tricks" you into giving them your valuables.
It could be a trademark violation, even in the US, under the argument that DoorDash was “passing itself off” as the infringed-upon company. However, DoorDash would then argue that it was being honest – it was genuinely delivering authentic goods. It could violate trademark no more than a convenience store violates a trademark by correctly claiming it sells Coca-Cola.
Well, you can probably add some fine print somewhere that listings are just for educational purposes or something and may not represent the actual company.
For what it's worth, the first two steps in your lookup would come naturally to a native speaker- it's a suffix formation similar to e.g. "cleanliness" and "friendliness".
"Curmudgeon" itself is interesting, because while it's not particularly common, I actually think a lot of native English speakers would recognize it because it's got a lot of character- for some reason, the way it feels to say and the way it sounds almost has some of the character of the meaning.
TFA mentions that they added personality presets earlier this year, and just added a few more in this update:
> Earlier this year, we added preset options to tailor the tone of how ChatGPT responds. Today, we’re refining those options to better reflect the most common ways people use ChatGPT. Default, Friendly (formerly Listener), and Efficient (formerly Robot) remain (with updates), and we’re adding Professional, Candid, and Quirky. [...] The original Cynical (formerly Cynic) and Nerdy (formerly Nerd) options we introduced earlier this year will remain available unchanged under the same dropdown in personalization settings.
as well as:
> Additionally, the updated GPT‑5.1 models are also better at adhering to custom instructions, giving you even more precise control over tone and behavior.
I just changed my ChatGPT personality setting to “Efficient.” It still starts every response with “Yeah, definitely! Let’s talk about that!” — or something similarly inefficient.
A pet peeve of mine is that a noticeable amount of LLM output sounds like I’m getting answers from a millennial reddit user. Which is ironic considering I belong to that demographic.
I am not a fan of the snark and "trying to be fun and funny" aspect of social media discourse. Thankfully, I haven't run into, *checks notes*, "ding ding ding" yet.
Did you start a new chat? It doesn't apply to existing chats (probably because it works through the system prompt). I have been using the Robot (Efficient) setting for a while and never had a response like that.
> Billions of dollars are spent — and largely wasted — every year on professional development for teachers that is curriculum-agnostic, i.e., aimed at generic, disembodied teaching skills without reference to any specific curriculum.
> “A huge industry is invested in these workshops and trainings,” argued a scathing 2020 article by David Steiner, executive director of the Johns Hopkins Institute for Education Policy. “Given, on average, barely more than a single day of professional support to learn about the new materials; knowing that their students will face assessments that lack any integration with their curriculum; and subject to principals’ evaluations that don’t assess curriculum use, teachers across America are barely using these new shiny objects — old habits win out.”
> Mississippi improved its training through a 2013 law mandating that elementary school teachers receive instruction in the science of reading. It also sent coaches directly into low-performing classrooms to guide teachers on how to use material.