I'd like to share my project that lets you hit Tab to get a list of possible methods/properties for your defined object, then choose a method or property to complete the object string in code.
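For what it's worth, Python's standard library ships the same idea: `rlcompleter` computes the attribute completions the interactive interpreter shows on Tab. A minimal sketch of that stdlib mechanism (not the poster's project):

```python
import rlcompleter

# Build a completer over a namespace containing our "defined object".
completer = rlcompleter.Completer({"obj": "hello"})

# Collect all completions for the partial expression "obj.up",
# the way a readline-driven REPL would when Tab is pressed.
matches = []
state = 0
while True:
    match = completer.complete("obj.up", state)
    if match is None:
        break
    matches.append(match)
    state += 1

print(matches)  # includes a completion for "obj.upper"
```

Wired into `readline.set_completer`, this is the Tab-to-list-methods behavior described above.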
This applies well enough to beef. You still want to maximize pasture rotation, and quick moves improve feed quality and speed recovery between grazings.
Source: I have 320 Angus/Simmental pairs. Working on an open-source cow collar (agopencollar.com)
Oh, that's been a thing for a couple years now. Going in and fixing vibecoded projects, especially shit produced in the early goldrush, seems to be a burgeoning industry. Funny thing is, the fixer is probably vibe coding, just with better models. Which might be perfectly cromulent depending on how good they are at whipping that mule team.
Funny enough, I was processing some handwritten tables into Excel with Sonnet. It did way better than I thought it would; I'd say around 95% accurate.
I did have it put confidence indexes next to the output per line, and that was pretty useless: they were either really high or really low, and the confidence didn't match the mistakes at all.
IMHO LLMs cannot provide statistically meaningful confidence measures, and they are terrible even at pretending to.
What worked: use an OCR engine that provides character/word-level bounding boxes (these typically come with per-word confidences too) and let the LLM extract from that data. Then the LLM is capable of "calculating" a confidence for the extracted data.
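The pipeline above can be sketched like this. I'm assuming the OCR step (e.g. Tesseract's word-level TSV output) has already produced word records with text, a bounding box, and a per-word confidence; the record shape and the min-aggregation rule are my own illustration, not the commenter's exact setup:

```python
# Aggregate word-level OCR confidences into a per-field confidence,
# so a downstream LLM works from real numbers instead of inventing
# its own. Record shape and the min-rule are assumptions for illustration.

def field_confidence(words):
    """words: list of dicts like {"text": str, "conf": float 0-100, "bbox": (l, t, w, h)}.
    Returns the joined text and a conservative confidence:
    the weakest word bounds the whole field."""
    if not words:
        return "", 0.0
    text = " ".join(w["text"] for w in words)
    conf = min(w["conf"] for w in words)
    return text, conf

# Example: words an OCR engine might return for one table cell
cell = [
    {"text": "320", "conf": 96.0, "bbox": (10, 5, 30, 12)},
    {"text": "pairs", "conf": 41.5, "bbox": (45, 5, 40, 12)},
]
text, conf = field_confidence(cell)
print(text, conf)  # "320 pairs" with the low word dragging confidence to 41.5
```

Taking the minimum rather than the mean is a design choice: one badly-read word should flag the whole cell for review.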
This reads less like LLM output than like someone transcribing their brief notes as they did their research. Lots of missing subject nouns, which is not something I'd expect from AI output.
You can ask an LLM to write in a different voice; they don't all sound exactly the same, though this one is no different from other examples.
When I use an LLM, it tries to sound like me but there are still tendencies it falls back on, especially when the context window begins to expand.
The missing subject nouns are probably the LLM's way of sounding like an authoritative source in a technical field, since many programmers like to write that way.
- Subtitles have the rhetoric turned up to 11 with LLMs. (Note: who has ever used multiple sentences as a blog post heading? It's bizarre):
  - LLM: "The Demo Works. Production Does Not."
  - Human: "AI is why this project exist, and why it's as complete as it is"
- Sources for claims that call for evidence:
  - LLM: "Six months ago, a practitioner could name a preferred OCR engine with confidence. Based on what I read, that confidence is gone." - *What was read?*
  - Human: "AI coding tools and playing slot machines"[ref]
- Variable paragraph lengths, where things that need more explanation get longer paragraphs (and vice versa):
  - LLM: *Scroll through: each section is about the same length*
----
There are lots of tells like this. This is a moment to get good at detecting LLM text in case it's surreptitiously used to your detriment.
Absolutely. You got the joke, right? That was the main point of the full article: no primary sources, only unverified aggregates. A strong contrast to what I normally do once per month.
> Variable paragraph lengths
I tried comparing it to the URL you posted; it's quite similar. I would rather have said: shorter sentences, shorter paragraphs. But let's not fight over this ;)
Sounds about right. Yes, I've been doing it for decades now and besides telling you who's selling email lists, it makes filtering much easier. Filtering by To: is pretty low effort compared to Bayesian spam filters etc. They get tossed in a Sieve filter as soon as they become a problem, and I'll send a bitch letter to the leaker with another random email address to see how dedicated they are to screwing me.
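Routing on To: really is low effort. Here is a rough Python equivalent of such a Sieve rule, as a sketch (the burned-alias list and folder names are made up for illustration):

```python
import email
from email import policy

# Aliases I handed out that have since been leaked/sold (hypothetical list).
BURNED_ALIASES = {"shop2023@example.org", "forum-signup@example.org"}

def route(raw_message: bytes) -> str:
    """Return a destination folder based on the To: address alone."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    to_addr = str(msg["to"] or "").strip().lower()
    return "Junk" if to_addr in BURNED_ALIASES else "INBOX"

raw = b"To: shop2023@example.org\r\nSubject: Deal!\r\n\r\nBuy now"
print(route(raw))  # the burned alias goes straight to Junk
```

No content analysis, no training, no probabilities: one set-membership test per message, which is why it's so much cheaper than Bayesian filtering.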