Also, with hardware now being more expensive, just being able to swap parts for upgrades or repairs is far more appealing than before.
Not to mention they don't spend time with marketing fluff about AI, which in the current market is winning them some clients.
But I also think the fact that they have been around for a long time now, and that they made the Pro backward compatible with the old 13, means people trust them now. They delivered.
Hasn't this always been the case? If a movie or show features product placement, a TV station playing said movie/show doesn't get any of the proceeds from that advertisement, do they?
This is just the pros having more tact than amateurs, and having actual writers. I do see some “influencers” who do more of a pure product placement: they just happen to be drinking a specific energy drink in every video, where it sits perfectly with the label facing out. I see some YouTubers trying to get better at integrating the ad into the video, but most of them can’t be bothered to write and record a custom script.
That said, Subway often seemed to get pretty heavy with its product placement. The last season of Chuck had a good amount of this, even what was essentially an ad read right in the middle of an episode by Big Mike. On Community they personified Subway and based a whole episode on him. In the Office they brought in Ryan Howard to say “eat fresh” over and over again, and even called out that it was for Subway to make sure it didn’t go over anyone’s head. Subway was big on sponsoring the last seasons of struggling shows with loyal fanbases, and littering the episodes with Subway product placement to the point where it became a plot point. I remember Zachary Levi (Chuck) tweeting out to ask everyone to go buy some Subway before the finale. It sounded like if Subway saw enough of a spike in buying from the sponsorship, they might fund yet another season.
I know, but I don't see a fundamental difference. If TV networks are happy to pay for a show that also gets advertising revenue from product placement, I don't see why YouTube would not be happy to deliver ads and pay some percent of that to a channel that displays its own ads. Especially given that YouTube has much, much less cost per video than a traditional network, which can only broadcast one program at a time.
Many of today’s ransomware groups operate the same way a legitimate tech startup would: a large organization with clear goals, not just some guys fooling around. It’s a funny thought, though.
How do laypersons (noobs) like me learn about this stuff? Something at the technical level of Wired magazine, say.
I've just started the Darknet Diaries podcast. So great.
When I worked on electronic medical records, I assumed it was just a matter of time until we were hacked (too). For all the most banal reasons: many vendors, shared passwords, root/admin access, etc.
> One major problem I see with the use of AI is that it will prevent people from building an understanding of <insert problem domain X here>.
I don’t really think this is a problem. AI is a tool; you still learn while using it. If you actually read, debug, and maintain the produced code, which I consider a must for complex production systems, it’s not really that different from reading documentation and using Stack Overflow (i.e., coding the way it was done 10 years ago). It’s just much more efficient, though it does make problems easier to miss. Standard practices for AI-assisted development are slowly forming, and I expect them to improve over time.
I’ll bite - I’ve been a dev at a new company for about a year and a half. I had mostly done front end work before this, so my SQL knowledge was almost nonexistent.
I’m now working on the backend, where SQL is a major requirement: writing what I would call “normal” queries. I’ve been reaching for AI to handle this pretty much the whole time, because it’s faster than I am.
I am picking up tidbits along the way, so I am learning, but there’s a huge caveat: I notice I’m learning extremely slowly. I can now write a “simple”-complexity query by hand with no assistance, and grabbing small chunks of data is getting easier for me.
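For context, this is roughly the kind of “normal” query involved: a join plus an aggregate. A minimal sketch using Python’s sqlite3, with a made-up customers/orders schema (all table and column names here are hypothetical, not from my actual job):

```python
import sqlite3

# Toy schema: customers and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# The "simple"-complexity query: one join, a GROUP BY, two aggregates.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.total) AS spent
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY spent DESC
""").fetchall()

print(rows)  # [('Ada', 2, 15.0), ('Grace', 1, 7.5)]
```

Nothing exotic, which is exactly why an LLM nails it instantly and why I never get forced to internalize the join/group-by mechanics myself.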
I am “reading, debugging, and maintaining” the queries, but LLMs bring the effort that task requires down to pretty much zero.
I guarantee that if I spent even one week taking an actual SQL class and just… doing the learning, I would be MUCH further along and wouldn’t need the AI at all. It’s now my “query tool”. Yes, it’s faster than I am, but I’m reliant on it at this point. I will SLOWLY improve, but I’ll still continue to just use AI for it.
All that to say, I don’t know where the future goes. Our company doesn’t have time to slow down for me to learn SQL, and the tool does a fine job. It’s been 1.5 years and the world hasn’t ended; I can READ queries rather quickly, but writing them is outsourced to the model.
In the past, if a query was posted on Stack Overflow, I would have to modify it (sometimes significantly) to achieve my goal, so maybe the learning was “baked in” to that translation process.
Now, the LLM gives me exactly what I need, no extra “reinforcement” work done on my end.
I do think these tools can be used for learning, but that effort needs to be dedicated. In many cases I’m sure other juniors are in a similar position. I have a higher output, but I’m not quickly increasing my understanding. There’s no incentive for me to slow down, and my manager would scoff at the idea, really. It’s a tough spot to be in.
I can corroborate this. I coached mechanical engineers who had to learn some programming to conduct research by analyzing factory machine data I provided (they were the domain experts). The ones who learned Python and SQL using AI had hardly learned anything after half a year; the ones I pointed to the API docs and a beginner tutorial weren’t just much further along, they were also on a faster trajectory for the future. I think AI is a beginner trap because it lets them throw shit at the wall and see what sticks. In the long term it is much more useful in the hands of an expert.
I think this has been shown for the vast majority with homework: you just don't learn much by copying homework from somewhere else. Actual effort is needed for the learning process. Unless you are some rare genius...
It also makes me think of all the incidental learning that can go on, like noticing other things while looking at API docs. They might not be useful now, but could very well be later.
That would be groundbreaking news. A tool works either deterministically or it is broken.
A more helpful analogy is "AI is outsourced labor". You can review all the code from overseas teams as well, but if you start to think of them as a tool, you've been promoted too far into management.
> It’s just much more efficient, though it does make problems easier to miss. Standard practices for AI-assisted development are slowly forming, and I expect them to improve over time
Bravo! IMHO, AI just underscores the core engineering practices that any good engineer has already been following.
AI is a tool that provides high leverage - if you've been following practices that allow sloppy coding, AI will absolutely amplify it.
If anything, I would guess that an AI-assisted future will require engineers to think through the problem more upfront and consider edge cases, instead of jumping in and typing out the first thing that comes to mind, since the AI can spit out code way faster.
There's an alternate, vibe-coded universe where engineers just spit out slop, but as I wrote in another comment here, there are tools to detect that. These tools sound "enterprisey", and that's because before AI no one had to deal with code at such scale: it was just far too expensive to read, update, and create PRs.
Those boundaries are coming down, and now almost everyone who can pay for oxygen tanks has a shot at scaling Mt. Everest.
I think the advantage of Flash-MoE compared to plain mmap is mostly the coalesced representation, where a single expert layer is represented by a single extent of sequential data. That could be introduced into existing binary formats like GGUF or HF: there is already a provision for differently structured representations, and this would fit easily.
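To illustrate what "one contiguous extent per expert" buys you: fetching expert i becomes a single sequential slice of the mapped file rather than many scattered reads. A minimal sketch in Python, assuming a completely made-up on-disk layout (file name, expert count, and extent size are all hypothetical, not the actual GGUF or Flash-MoE format):

```python
import mmap
import os
import struct

N_EXPERTS = 4
EXPERT_FLOATS = 1024          # float32 values per expert extent (toy size)
STRIDE = EXPERT_FLOATS * 4    # bytes per expert

# Write a toy file where each expert's weights are stored back-to-back.
path = "experts.bin"
with open(path, "wb") as f:
    for i in range(N_EXPERTS):
        # toy data: expert i's extent is filled with the value i
        f.write(struct.pack(f"<{EXPERT_FLOATS}f", *([float(i)] * EXPERT_FLOATS)))

def load_expert(mm, idx):
    """Return one expert's weights: a single contiguous slice, no seeking around."""
    off = idx * STRIDE
    return struct.unpack(f"<{EXPERT_FLOATS}f", mm[off:off + STRIDE])

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    expert2 = load_expert(mm, 2)
    mm.close()
os.remove(path)

print(expert2[0], expert2[-1])  # 2.0 2.0
```

With the tensors of one expert scattered across the file (the usual per-tensor layout), the same load would touch many separate regions and fault in far more pages; the coalesced extent turns it into one streaming read the OS can prefetch.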
In some ways AI sounds almost utopian. In theory it could redistribute manpower more evenly between small and large businesses, allowing them to compete more fairly and improving the efficiency of capitalism (the idealistic model, not the real-world state). However, then you remember that AI tech is currently almost fully controlled by big tech (and its next generation), and you have to ask whether they will sabotage that improvement, since liberating the market is not beneficial to them. Let’s hope that despite the odds and current trends we actually reach a state where AI can be run on-prem/locally and there are still SOTA models at least as open as they are today.
I strongly dispute this. Compute depreciates very rapidly, and inference is cheaper than training. DeepSeek was the warning shot across their bow, but the big AI firms can't afford to change course without jeopardizing their "Wile E. Coyote off the cliff" economics.
LLM performance is already plateauing; models will get more efficient. Good-enough models will be deployed on chips, the same way H.264 is a good-enough video codec but used ubiquitously.
More than your points, I'm very curious how these AI companies are going to turn a profit without making AI insanely expensive to use. Some time ago each prompt was heavily subsidized, and I doubt the picture has changed much.
Edit: maybe the model efficiency you mentioned is the key, we'll see.
I suspect they just won't. First-mover disadvantage is real for many markets. Everyone knows Amazon, but how many remember Kozmo.com?
My assumption is that OpenAI, Anthropic, etc. will go bankrupt and eventually be subsumed into Microsoft/Google/ByteDance & friends. New entrants will take their pioneering work and sell inference for pennies on the dollar without investing in massive R&D spend.
Supposedly they could make money if they didn't have to burn so much on research. There was an interview with Dario where he stated this and hinted that a monopoly would not have the research problem, and thus could start making money.