This bit of the article is what I'm hopeful will happen:
> That explanation broadly matches what we’re seeing in recent versions of Google Play, where new warning messages emphasize developer verification, internet requirements, and potential risks, while still allowing users to proceed.
Yes, exactly. A lot of demos just don’t fail in the real world. They were never designed for real usage in the first place. They work once, in a clean flow, and fall apart as soon as people behave… like people.
The only big noticeable issue for me was building large enterprise images (like Splunk). That issue has since been fixed [1]. Other than that I have not seen any issues with IO or performance: running Splunk/OpenSearch/ElasticSearch, some performance tests, and enterprise software written in Go (building for arm64/amd64). No issues at all.
Why not just bet heavily against it and then inform the maintainers? Just betting on it instead makes it look like you, or someone you know, planted the malware.
That is what I meant to say: that you'd inform the maintainers along with your bet against the commit. In this thought experiment I assumed that the maintainers are already being spammed by AI so heavily that the bet is necessary to get their attention. (Neal Stephenson had something similar going on in Anathem; he called them "bogons".)
In the case where you're betting heavily in favor of a commit, maybe because you've reviewed it and think it's good, maybe because it contains malware you want to inject... you'd be attracting reviewer attention to that commit because if they can talk the maintainers out of it they end up with more of your money.
Probably the best strategy for a malicious committer would be to sneak through a low value nothing-to-see-here commit, because the low bet would not attract extra reviewer attention, so the maintainers have to set it high enough that it still incentivizes review.
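The incentive structure above could be made concrete as a toy model. To be clear, this is purely a sketch of the thought experiment: the function names, the payout rule, and the linear attention scaling are all hypothetical, not any real system.

```python
# Toy model of the commit-betting thought experiment (all rules and
# numbers here are hypothetical assumptions, not a real mechanism).

def reviewer_payout(stake_for: float, overturned: bool) -> float:
    """If reviewers talk the maintainers out of a commit, they win the
    committer's stake; otherwise the committer keeps it."""
    return stake_for if overturned else 0.0

def review_attention(stake_for: float, min_stake: float) -> float:
    """Extra reviewer attention a commit attracts, scaled by how far its
    stake exceeds the maintainer-set minimum. Staking only the minimum
    is the 'nothing-to-see-here' loophole: zero extra eyes."""
    return max(stake_for - min_stake, 0.0) / min_stake

# A malicious committer staking the bare minimum attracts no extra review:
print(review_attention(stake_for=10.0, min_stake=10.0))  # 0.0
# A confident (or bait) high stake attracts proportionally more review,
# because overturning it pays reviewers more:
print(review_attention(stake_for=50.0, min_stake=10.0))  # 4.0
print(reviewer_payout(stake_for=50.0, overturned=True))  # 50.0
```

This is why the maintainers' choice of minimum stake matters: it is the only lever that forces even "boring" commits to carry enough money to make review worthwhile.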
I don't want to live in this world, by the way, I'm just afraid we might have to.
I was thinking of a similar idea to this, but for news/tweets/posts. As a consumer of media, I might decide to only read media with $x staked, so AI media factories need to be willing to stake that much to reach audiences, and will get penalized when they are wrong…
I imagine a hard problem is building a system to resolve these markets.
> And it is very disheartening to see people without any skills to behave the way they do.
The way they do, which is? I've skimmed the comments and a lot of them are hate and hostility towards OP's project and towards coders "without skill" in general, plus denial, because there's no way anything vibe-coded could have worked. At best, there is strong tribalism on both ends.
There is definitely tribalism. I think a lot of the negativity is from people who recognize the long-term goals of these companies, which threaten more than just tech workers. Right now, these models are a threat to people who worked hard and invested their time, while letting inexperienced or lazy people appear more competent. I think that less experienced developers (or people who don't care anymore, or maybe never did) see what an LLM can do and immediately believe it will solve all their problems. That if you are not embracing this at full force, you are going to be left behind.
You might see more opposing views in this thread, but if you browse this site often you'll see both sides.
Those embracing it heavily do not see the nuances of carefully creating maintainable solutions, planning for and recognizing tech debt, and knowing where it's acceptable short term. They are also missing the theory-building behind what is being created. Sure, AI models might get even better and could solve everything. But I think it's naive to believe that will be generally good for 90% of the population, including people not in tech.
Using these models (text or image) devalues the work of everyone in more than one way. It is harmful for creative work and human expression.
This tech, and a lot of tech, especially tech built by large corporations for profit extraction and human exploitation, is very unlikely to improve lives at a population level long term. The same can be said of much other tech (e.g. social media = powerful propaganda). The goal of the people creating these models is to no longer need humans for their work. At which point I don't know what would happen. Kill the peasants?
No need to go that far. I bounced off weekend projects many times because I lost interest the moment I had to relive fighting the "modern" frontend ecosystem setup (or whatever else was unrelated to the actual building), which is what I was already doing at the day job. In the end I just gave up because I'd rather get some rest and fun out of my time off. Now I can just skip that part entirely instead of tanning in front of <insert_webpack_or_equivalent> errors for hours on a Saturday afternoon.
No, but if something becomes close to free to produce, the consequence will be that humans will have no incentive to produce commercial music.
Commercial music isn't the only way to make music, but it pays people that want to professionally work as musicians.
In other words, the current system allows a select few artists to make money/fame from doing something they want to do (as opposed to something they have to do to make a living). Or also, AI music will lessen the good feeling some people get when they believe that musicians can make money producing music.
I don't disagree that these things exist, but I do believe that these are mostly propped up by dynamics that will soon no longer exist.
> Or also, AI music will lessen the good feeling some people get when they believe that musicians can make money producing music.
If that is your way of saying that AI will remove the possibility for humans to create music full time due to there being no money anymore in music then sure.
> I don't disagree that these things exist, but I do believe that these are mostly propped up by dynamics that will soon no longer exist.
The same dynamic that propped up blacksmiths, potters, tailors, etc.: the absence of scaling/automating technology. There is still demand for authentic artisanal crafts, and for the "good feeling" that these people can earn money, but the magnitude has been reduced to farmers' markets.
I can see a similar thing playing out with music. There will still be some token demand, but people will not pay the same when they can have a magic, infinitely producing on-demand, tailored-to-their-taste music machine, at vanishingly small marginal costs.
Just a realistic outcome. Or: a good thing for consumers, a bad thing for producers (and a bad thing for "producers" who are actually, in disguise, consumers of a desired lifestyle and/or status).
Good for consumers is highly debatable since we'd lose one more social connection in life. Something we are running a very high debt tab for already.
We would also lose musical knowledge since all the full-time musicians would have to stop playing. Only amateur musicians would remain.
And the "desired lifestyle" / "desired status" would be transferred to the already very, very rich and powerful AI company CEOs. Such an improvement ...
Looking at the surface, it is true, but there are caveats:
- Not all musicians are in the field because it pays, some of them haven't earned a cent
- There are talented people who would like to create music but are forced to work long hours, which leaves them drained. Perhaps in the future, humans won't have to work that much, which will allow them to pursue creative hobbies such as music making
- Artists will be able to continue performing live, which will act as a huge filter against AI-generated content and keep them paid.
Aside from that I agree, though musicians are just one of many groups disrupted by AI, and I wouldn't say they'll be the ones hurt most by it, mostly because they can continue to "exist" outside of the Internet, and experiencing music live could become more popular because of it. A lot of assumptions here, I know.
> Perhaps in the future, humans won't have to work that much,
I think that this is the fairytale part that I have trouble accepting.
Coming from a country that has a very limited social welfare system I don't believe that the political or social climate is adapted to take such steps in a future where a lot of things are automated.
It goes against everything that we've seen in the last 150 years.
> Artists will be able to continue performing live, which will act as a huge filter for the AI-generated content and keep paying them.
Or AI "musicians" will play live events as holograms.
> Aside from that I agree, though musicians are just one of many groups disrupted by AI, and I wouldn't say they'll be the ones hurt most by it, mostly because they can continue to "exist" outside of the Internet, and experiencing music live could become more popular because of it.
Sure, they might not be the most affected by AI, but they would still be affected, which is the reason I'm not a fan of AI in music. This pushback doesn't need to be reserved for the most impacted activities.