> Good time to be in business if you can see through the bs and understand how these systems actually function
You missed out the most crucial and least likely requirement (assuming you're not self-employed): management also needs to be able to see through the bs.
"The muse visits during the act of creation, not before. Start alone."
That has actually been a major problem for me in the past, where my core idea is too simple and I don't give "the muse" enough time to visit, because building it doesn't take me long enough. Any time I have given the muse time to visit, it always has.
> No more AI thought pieces until you tell us what you build!
Absolutely agree with this; the ratio of talk to output is insane, especially when the talk is all about how much better the output is. So far the only example I've seen is Claude Code, which is mired in its own technical problems and is literally built by an AI company.
> Write your own code without assistance on whatever interval makes sense to you, otherwise you'll atrophy those muscles
This is the one thing that concerns me, for the same reason "AI writes the code, humans review it" does. The fact of the matter is, most people will get lazy and complacent pretty quickly, and the depth to which they review the code, and the frequency with which they "go it alone", will shrink until eventually it just stops happening. We all (most of us, anyway) do it; it's just part of being human, for the same reason that thousands of people start going to the gym in January and stop by March.
Arguably, AI coding was at its best when it was pretty bad, because you HAD to review it frequently, and there were immediate incentives to just take the keyboard and do it yourself sometimes. Now there are still some serious faults; they're just not as immediate, which will lead to complacency for a lot of people.
Maybe one day AI will be able to reliably write 100% of the code without review. The worry is that we stop paying attention first, which, all in all, looks quite likely.
> Absolutely agree with this, the ratio of talk to output is insane, especially when the talk is all about how much better output is.
Those of us building are having so much fun we aren't slowing down to write think pieces.
I don't mean this flippantly. I'm a blogger. I love writing! But since a brief post on December 22 I haven't blogged because I have been too busy implementing incredible amounts of software with AI.
Between Christmas and New Year's Day I was on vacation, so I had plenty of time. Since then, it's only been nights & weekends (and some early mornings and lunch breaks).
RatatuiRuby is pretty new still: its beta launch was Jan 20. Octobox's TUI is built on it [0], and Sidekiq is using it to build theirs [1].
I believe they'll be maintainable long-term, as they've got extensive tests and documentation, and I built a theory of the program [2] on the Ruby side of it as I reviewed and guided the agent's work.
I am getting feedback from users, the largest of which drove the creation of (and iteration upon) Rooibos. As a rendering library, RatatuiRuby doesn't do much to guide the design or architecture of an application. Rooibos is an MVU/TEA framework [3] to do exactly that.
Tokra is basically a tech demo at this stage, [4] so (hopefully) no users yet.
Lights-out manufacturing is always the bogeyman that's being built or coming tomorrow. It never seems to happen, though. The Wikipedia article for it cites only two such factories, and at least one of them still requires humans and isn't fully lights-out.
Can't believe someone set up some kind of AI religion with zero nods to the Mechanicus (Warhammer). We really chose "The Heartbeat is Prayer" over servo skulls, sacred incense, and machine spirits.
I guess AI is heresy there, so it does make some sense, but c'mon.
> Re: Django is OK for simple CRUD, but falls apart on anything complex
Maybe my experience of working with Django on complex applications has coloured my view of it a bit, but I always think the opposite: it seems overkill for simple CRUD, even if I love using it.
I'll add one: add shell_plus. It makes the Django shell so much nicer to use, especially on larger projects (mostly because it auto-imports all your models). IIRC, it involves adding django-extensions and ipython as dependencies, and then adding django_extensions (annoyingly, note that the dash changes to an underscore; this trips me up every time I add it) to your installed apps.
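For reference, the setup looks roughly like this (a minimal sketch; the rest of your INSTALLED_APPS will obviously depend on your project):

```python
# settings.py -- assumes you've already run:
#   pip install django-extensions ipython
INSTALLED_APPS = [
    # ... your existing apps ...
    "django_extensions",  # pip package name uses a dash; the app label uses an underscore
]
```

After that, `python manage.py shell_plus` drops you into an IPython shell with all your models auto-imported, and `shell_plus --print-sql` also echoes the SQL behind each query, which is handy on larger projects.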
That said, I'm sure django-extensions does a lot more than shell_plus, but I've never actually explored what those extra features are, so I think I'll do that now.
Edit: Turns out you can use bpython, ptpython or none at all with shell_plus, so good to know if you prefer any of them to ipython
In the default shell? I've definitely started new Django projects since 2023, and I seem to remember always having to use shell_plus for that, though maybe that's just become something I automatically add without thinking.
Edit: Yep, you're right. Wow, that's pretty big for me.
To me it seems like it'd only get more visible as it gets more normal, or at least more predictable.
Remember back in the early 2000s when people would photoshop one animal's head onto another and trick people into thinking "science has created a new animal"? That obviously doesn't work anymore, because we know that's possible, even relatively trivial, with Photoshop. I imagine the same will happen here: as AI writing gets more common, we'll begin a subconscious process of determining whether the writer is human. That's probably a bit unfairly taxing on our brains, but we survived Photoshop, I suppose.
The obviously fake ones were easy to detect, and the less obvious ones took some sleuthing. But the good fakes totally fly under the radar. You literally have no idea how many of the images you see are doctored well, because you can't tell.
Same goes for LLMs in the near future (or perhaps already). What will we do when we realize we have no way of distinguishing man from bot on the internet?
I'd say the fact that you know there are some photoshop jobs you can't detect is proof enough that we're surviving it. It's not necessarily that we can identify them with 100% accuracy, but that we consider it a possibility with every image we see online.
> What will we do when we realize we have no way of distinguishing man from bot on the internet?
The idea is that this is a completely different scenario if we're aware of it being a potential problem versus not being aware of it at all. Maybe we won't be able to tell 100% of the time, but it's something we'll consider.
> Are there really people who "spend weeks planning the perfect architecture" to build some automation tools for themselves?
Ironically, I see this very often with AI/vibe coding, and whilst it does happen with traditional coding too, it happens with AI to an extreme degree. Spend 5 minutes on Twitter and you'll see a load of people talking about their insane new vibe-coding setup and next to nothing about what they're actually building.
The argument I usually hear is that you only truly get the 10x improvements with <new-model> (right now, Opus 4.5), so they've only had a few months, not years. In a few months, it'll turn out that <new-model> wasn't actually capable of that, but <new-new-model> is, and since that hasn't been out long, it's unfair to judge so early. And so the cycle begins anew.
This is where you get to the weird juxtaposition of "AI can now replace humans" existing simultaneously with "it's unfair to compare human work to AI work".
Like, if a human said they started a farm, but it turned out someone else did all the legwork and they were just asked for an opinion occasionally, they'd be called out for lying about starting a farm. Meanwhile, that flies for an AI, which would be fine if we acknowledged that there's a lot of behind-the-scenes work a human needs to do for it.