I wonder if this sort of corruption will become a new negotiation tactic.
Give us what we want and we'll delay the announcement long enough for you to make preparations.
I don't know how Polymarket works, so maybe you can enlighten me: can Polymarket be subpoenaed to provide the recipients of the payouts? Is there some insulation to keep them ignorant of their identity?
New York is suing them for access, as the state has the authority to regulate gambling.
The federal government is fighting these attempts, backing the company’s assertion that a “prediction market” is not gambling and that the Feds have sole regulatory power. Coincidentally, Donald Trump Jr. is an investor in Polymarket and an advisor to Kalshi.
I was going to write, "not for long," which might be true for some. But then I realized there will always be a difference between LLM output and human writing. We don't read blogs because of their facts, we read them because of how the facts are presented and how the author's personality comes through on the page.
EDIT: That said, LLMs are great at faking it, and a lot of amateur writing will be difficult to distinguish from LLM output. So I'm disagreeing with myself a bit.
But we are talking about "slurping up" IP and regurgitating it, right? OK. So if I slurp up Mickey Mouse and output Mickey Mouse, that's an offense. But what if I slurp up a billion images and output some chimera? That's what the LLMs do. And that's what humans do too.
Browsers don't allow notifications if you don't have the site open. Browser ads can get blocked by browser extensions. Browsers make it harder to have an icon for a site/service directly on the home screen. Browsers make it harder to get extensive permissions. Browsers allow content to be displayed without first being run through an approval process.
For companies these are all downsides but for me they are all upsides. It really is us vs them when it comes to apps vs browsers. The only reason they offer websites at all is out of fear of losing a big chunk of users.
Browsers most definitely do allow for notifications if you don't have the site open. I use that feature all the time and it works perfectly.
Google Chrome does seem to catch spam sites that abuse notification permissions to send ads, though, so for a certain category of crapware, websites aren't an option.
> what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?
Well, it's not. There's a small moat around that right now because the UX is still being ironed out, but in a short while, “able to use coding agents” will be the new “able to use Excel.”
What will remain are the things that already differentiate a good developer from a bad one:
- Able to review the output of coding agents
- Able to guide the architecture of an application
> in a short while able to use coding agents will be the new able to use Excel.
Yeah, but there’s “able to use Excel”, and then there’s “able to use Excel.”
There is a vast skill gap between those with basic Excel, those who are proficient, and those who have mastered it.
As an intermittent user of Excel I fall somewhere in the middle, although I’m probably a master of knowing how to find out how to do what I need with Excel.
The same will be true for agentic development (which is more than just coding).
And the last two are much more important.
Don't forget that most decision makers and people with capital are normies, they don't live in a tech bubble.
If we know the outcome of that code, such as whether it caused bugs or data corruption or a crappy UX or tech debt -- which is potentially available in subsequent PR commit messages -- it's still valuable training data.
Probably even more valuable than code that just worked, because evidently we have enough of that and AI code still has issues.
I see this line of thought put out there many times, and I've been thinking: why do people do anything at all? What's the point? If no one at all is even reviewing the output of coding agents, genuinely, what are we doing as a society?
I fail to see how we transition society into a positive future without supplying means of verifying systemic integrity. There is a reason that Upton Sinclair became famous: wayward incentives behind closed doors generally cause subpar standards, which cause subpar results. If the FDA didn't exist, or they didn't "review the output", society would be materially worse off. If the whole pitch for AI ends with "and no one will even need to check anything" I find that highly convenient for the AI industry.
You could e.g. write specs and only review high level types plus have deterministic validation that no type escapes/"unsafe" hatches were used, or instruct another agent to create adversarial blackbox attempts to break functionality of the primary artifact (which is really just to say "perform QA").
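The "deterministic validation that no type escapes were used" idea above can be sketched mechanically. This is a minimal illustration, assuming Python as the target language; the pattern list, the `find_escape_hatches` name, and the sample string are all hypothetical, and a real check would cover far more patterns (or use the type checker itself):

```python
import re

# Illustrative escape-hatch patterns -- ways generated Python code can
# silently opt out of type checking. Not exhaustive; hypothetical list.
ESCAPE_PATTERNS = [
    r"#\s*type:\s*ignore",  # mypy suppression comment
    r"\bcast\(",            # typing.cast -- asserts a type without proof
    r"\bAny\b",             # typing.Any -- opts out of typing entirely
]

def find_escape_hatches(source: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern) for every escape hatch found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in ESCAPE_PATTERNS:
            if re.search(pattern, line):
                hits.append((lineno, pattern))
    return hits

# An agent-review gate could simply fail the PR when this is non-empty.
sample = "x = cast(int, raw)\ny: int = 3\n"
print(find_escape_hatches(sample))  # -> [(1, '\\bcast\\(')]
```

The point isn't the regexes themselves; it's that this class of check is deterministic and cheap, so it can run on every agent-produced artifact without a human reading the code.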
As a simple use-case, I've found LLMs to be much better than me at macro programming, and I don't really need to care about what it does because ultimately the constraint is just that it bends the syntax I have into the syntax I want, and things compile. The details are basically irrelevant.
Code quality will impact the effectiveness of AI. Less code to read and change in subsequent changes is still useful. There was a while where I became more of a paper architect and stopped coding, and I realized I wasn't able to do sufficient code reviews anymore because I lacked context. I went back into the code at some point, realized the mess my team was making, and spent a long while cleaning it up. This improved the productivity of everyone involved. I expect AI to fall into a similar predicament: without first-hand knowledge of the implementation details, we won't know about the problems we need to tell the AI to address. There are also many systems that are constrained in terms of memory and compute, and more code likely puts you up against those limits.
I don't disagree that code quality is currently more important than it's ever been (to get the most out of the tools). I expect that quality will increase, though, as people refine either training or instructions. A couple of months ago, with some coding guidelines I wrote, I was able to get much better output (well factored, aligned to business logic) that I'm generally happy-ish with. It's possible that newer models don't even need that, but they work well enough with it that I haven't touched those instructions since.
I mean, sure, for programming macros. Or programming quick scripts, or type-safe or memory-safe programs. Or web frontends, or a11y, or whatever tasks for which people are using AI.
But if you peel back that layer to the point where you are no longer discussing the code, and just saying "code X that does Y"... how big is X going to get without verifying it? This is a basic, fundamental question that gets deflected by evaluating each case where AI is useful.
When you stop being specific about what the AI is doing and start speaking in general terms, there is a massive and obvious gap that nobody is adequately addressing. I don't think anyone would say that details are irrelevant in life-threatening scenarios, and yet no one is acknowledging where the logical end of this line of thinking goes.
I mean, the promise of perfect AI and perfect robotics is that humans would no longer have to do anything. They could live a life of leisure. Unfortunately, we're going to get these perfect AI and perfect robotics before we transition socially into a post-scarcity, post-ownership society. So what will happen is that ownership of the AI and robots will be consolidated into the hands of the few, the vast rest of us will have nothing economically relevant to do, and we'll probably just subsist or die.
We're already seeing this today. Every year, thousands of people are becoming essentially irrelevant to the economy. They don't own much, they don't invest much, they don't spend much money, they don't make much money, and they are invisible to economics.
> They don't own much, they don't invest much, they don't spend much money, they don't make much money, and they are invisible to economics.
Indeed. Sometimes I think the so-called “lower classes” end up functioning more like crops to be farmed by the rich. Think, dollar stores that sell tiny packages of things at worse unit cost, checking account fees, rent-a-center, 15% interest auto loans and store credit cards with 30% interest…
I've definitely felt this kind of way in the past. But these days I'm not so sure.
Setting aside the AI point, the idea of people becoming essentially irrelevant to the economy is an indictment of society. But I'd argue the indictment is really of what the economy chooses to measure, not of society itself, or of technology.
Sure, someone may not spend much money or produce much money, but if they produce scientific research or cultural work that is intangibly valuable, it is still valuable regardless of whether economists can point to a metric. The same goes for the countless contributions to our world from nature: what is the economic value of a garden snake or a beetle? A meaningless question when the economy can only see things in dollars.
They will still be turning out the same problematic code in a few years that they do now, because they aren’t intelligent and won’t be intelligent unless there is a fundamental paradigm shift in how an LLM works.
I use LLMs with best practices to program professionally in an enterprise every day, and even Opus 4.6 still consistently makes some of the dumbest architectural decisions, even with full context, complete access to the codebase and me asking very specific questions that should point it in the right direction.
I keep hearing that they “aren’t intelligent” and “spit out crap code.” That’s not been my experience. LLMs have prevented, and also caught, intricate concurrency issues that would have taken me a long time.
I just went “hmmm, nice” and went on. The problem there is that I didn’t get that sense of accomplishment I crave and I really didn’t learn anything. Those are “me” problems but I think programmers are collectively grappling with this.
They are not intelligent. Full stop. Very sophisticated next word prediction is not intelligence. LLMs don’t comprehend or understand things. They don’t think, feel or comprehend things. That’s just not how they work.
That said, very sophisticated next word predictors can and sometimes do write good code. It’s amazing some of the things they get right and then can turn around and make the weirdest dumbest mistakes.
It’s a tool. Sometimes it’s the right tool, sometimes it’s not.
None of those things will be necessary if progress continues as it has. The AI will do all of that. In fact, it will generate software that uses already proven architectures (instead of inventing new ones for every project, as human developers like to do). The testing has already been done: they work. There are no vulnerabilities. They are able to communicate with stakeholders (management) using their native language, not the technobabble that human developers like to use, so they understand the business needs natively.
If this is the case then none of us will have jobs; we will be completely useless.
I think, most likely, you'll still need developers in the mix to make sure the development is going right. You can't just have only business people, because they have no way to gauge if the AI is making the right decisions in regards to technical requirements. So even if the AI DOES get as good as you're saying, they wouldn't know that without developers.
For some definition of work, yes, but not every definition. Their product is not without flaw, leaving room for improvement, and room for improvement by more than only other AI.
> There are no vulnerabilities
That's just not true. There are loads of vulnerabilities, just as there are plenty in human-written code. Try it: point a vuln-hunting AI at the output of an AI that's been through the highest-intensity, highest-scrutiny workflow, even code that has already been AI-reviewed for vulnerabilities.
Buying a (relatively expensive) CO2 monitor is one of those purchases I was hesitant about at first but have zero regrets about a few years later. I was ignorant of a lot of things related to air quality, CO2 in particular. We were foolishly running a gas stove in a house with no ventilation, which probably had us up in the 1500+ ppm range every time. This may seem like an obvious no-no to most of you, but it was not a lesson we had ever learned.
You also get to see some other interesting observations, like how local construction digging up dirt on your street can cause elevated radon levels for months on end.
For in-house monitoring it's tricky because pretty much every vendor who makes more than bare-bones units goes out of business or discontinues the product 12-18 months after you've bought it (Air Mentor, Awair, BlueAir, EdiGreen, Foobot, the list goes on). The best ones I've found are QingPing's: colour LCD touch-screen display with WiFi access, they've been around for years, regularly update the firmware and hardware, actually provide real product support, and have things like MQTT integration if you're using HA.
I have a few of those around my house as well. However, I have noticed that my VOC readings are not consistent, even if nothing in the room has changed. I've reached out to their support about it but they're not much help. One thing I have noticed is a correlation between VOCs and CO2: one (CO2) seems to impact the other (VOC), which I don't think is supposed to be the case. I was digging through the forums a while back, but the only conclusion I came up with is that you can't trust the VOC readings on these (or most consumer) devices; there are just too many variables, and the sensors don't know/measure the full picture. It still bothers me, though, to look in our son's room and see elevated VOC measurements.
There was a moment where the VOC measurement was stuck at an elevated level. The suggested solution was to blow some air in there to knock (presumably) dust off the sensor, which worked. It could be that the VOC sensor is not great, but it could also be that it gets dirty.
That's like saying without gas stations good luck getting gasoline to the people. It goes without saying that batteries are an essential part of most renewable solutions.
I'm still reading a lot about theoretical storage ideas, but much less than I'd like about massive deployments, so I think it doesn't quite go without saying.