@dang, just wanted to say that the response to your statement also seems to be AI generated. Dead-internet theory is turning real day by day, oof.
I dabble in correcting other people’s spelling on occasion (can’t help it). Somewhat frustratingly, the usual reaction is “language evolves” and “everyone uses it this way” and “if it is understood, it does not matter how you wrote it”.
> I'd argue work on meaningful security improvements is mostly available outside industry security roles.
I drift in and out of security roles and definitely agree. If a company truly wants secure products the proper way is to do that from the ground up as the product is designed, architected and developed. The optimal role for building secure products is to have security awareness and priority embedded in the design and engineering team. Not as an afterthought from a security team.
Alas! Most companies don't care that much, so if you want to drive the product to be more secure, it can sometimes be more effective to do it from the security organization. If the company culture is to ignore security, you can drive more improvement from infosec because then that's your job. But it's not the optimal way to get there.
LT1812 is my weapon of choice for ultra low RF stuff (think Tayloe mixer frontends). Readily available, pennies to buy, reasonably flat to about 20MHz although THD is getting a little rough up there, possibly because I'm using it wrong.
The network effect means it will be a huge and risky undertaking, and one needs to solve the bootstrap problem. Meanwhile, the cost of video delivery means one would have to burn serious cash while solving it. The two problems compound.
TikTok kinda did manage to make a dent though - I suspect it substitutes for YouTube in some cases (though not all).
I too am waiting for the pendulum to swing from clean corporate cookie cutterism back to dumb fun and I believe it's up to us to make that change. It probably won't happen on its own.
You're probably right on the margin. Anthropic doesn't break it out, but enterprise spend on Bedrock is the highest-quality revenue in the AI stack right now. It's sticky, multi-year, and embedded in existing AWS commits. OpenAI was watching that compound while stuck on Azure.
I feel like one of the takeaways here is that Rust protects your code as long as what your code is doing stays predictably in-process. Touching the filesystem is always rife with runtime failures that your programming language just can't protect you from. (Or maybe it also suggests the `std::fs` API needs to be reworked to make some of these occurrences, if not impossible, at least harder.)
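To make the point concrete, here's a minimal sketch (my own illustration, not code from uutils) of the classic check-then-use race: the type system can't guarantee a file still exists by the time you read it, so the only robust pattern is to attempt the operation and handle the `Result`.

```rust
use std::fs;
use std::io::ErrorKind;
use std::path::Path;

// Checking `p.exists()` first and then reading would be a race: the file
// can disappear between the two calls. Attempting the read directly and
// matching on the error is the pattern Rust's Result pushes you toward.
fn read_if_present(p: &Path) -> Option<String> {
    match fs::read_to_string(p) {
        Ok(s) => Some(s),
        Err(e) if e.kind() == ErrorKind::NotFound => None,
        Err(e) => panic!("unexpected I/O error: {e}"),
    }
}

fn main() {
    // A path that presumably doesn't exist on this machine:
    assert!(read_if_present(Path::new("/no/such/file/hopefully")).is_none());
    println!("ok");
}
```

Even this doesn't protect against the file vanishing mid-read on some platforms; the filesystem is shared mutable state that no language can fully tame.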
On a separate note: I have a private "coretools" reimplementation in Zig (not aiming to replace anything, just for fun), and I'm striving to keep it 100% Zig with no libc calls anywhere. Which may or may not turn out to be possible, we'll see. However, cross-checking uutils I noticed it does have a bunch of unsafe blocks that call into libc, e.g. https://github.com/uutils/coreutils/blob/77302dbc87bcc7caf87.... Thankfully they're pretty minimal, but every such block can reduce the safety provided by a Rust rewrite.
> The root cause is not thinking: Why is root chrooting into a directory they do not control?
Because you can't call chroot(2) unless you're root. And "control a directory" is weasel wording; root technically controls everything, in one sense of the word. It can also gain full control (in a slightly different sense of the word) over a directory: kill every single process owned by the owner of that directory, then avoid setuid-ing into that user in this process and in any other process that root currently executes, or will execute, until you're done with the directory. But that's just not practical, is it?
Secure things should be simple to do, and potentially unsafe things should be possible.
My fiancée's company has no developers, yet everyone has a paid subscription to LLMs. Certainly not $15/hour, and I don't think it's likely they'll ever pay that for everyone, but I don't find it hard to picture the aggregate cost of subscriptions on a global basis far exceeding $600m/day, between far more people on subscriptions cheaper than $15/hour but more expensive than today's, and companies ending up paying far more than $15/hour averaged over their developers for additional use. E.g. I already run agents 24/7 just for me. I can't yet justify $15/hour, but the amount I'm spending is steadily increasing as I manage to squeeze returns from more and more things.
Sure, it's back of napkin math, and I also think that several of the companies we see today won't survive and/or will only survive due to consolidation, but I also think the spend is going to be immense.
With respect to the datacentres, I expect we'll see inference costs crash over the coming years. We're only seeing the beginning of what dedicated ASICs will do to inference, and of what work on model efficiency will do to the need for the very largest models. While that might drive down the spend on individual subscriptions, I think it will drive up the total spend dramatically as cheaper models become capable enough to put them "everywhere".
But, yeah, ultimately we're guessing. I'm happy to put my guesses on the record, though, and look forward to looking back and seeing how wrong I got it in a couple of years.
The problem, though, is the use of undocumented communication channels on private phones by people who are not technically inclined. That Signal is an American company subject to NSA scrutiny, while the users are politicians of a foreign government, only makes this worse.
So national messengers, controlled by experts, that archive communication and run on trusted hardware would be the best solution for the work of democratic governments, I would think.
Of course, the possibility of software quality and security experts in the service of the government is probably just wishful thinking.
You're not wrong, but it will always be the case that the web platform lags native. There will always be stuff you can't do without a native client. The proportion of apps that it's viable to run as a PWA will probably increase over time, but the platforms have both the ability and incentive to stay out ahead.
I don't think so? In the case of p2p (including radicle) you can run multiple not-so-reliable nodes. Of course that won't work for serving up a web frontend but that's distinct from the git repo itself.
OpenAI just gave up Azure exclusivity, killed the AGI clause, and stopped paying Microsoft revenue share to get on AWS. Anthropic figured out 18 months ago that enterprises buy from their cloud, not from the best model. OpenAI is just catching up.
I've recently built a script that periodically (every 25 minutes) fetches the latest merged PRs to check for some potential rule violations. I'm not an admin and couldn't get the events API working, so I just resorted to polling.
On an average ~8 hour working day, there's at least one failed request. In fact, looking over the logs, I can't spot a single day that did not have a failed request.
Now, I can't guarantee that these are all caused by GitHub (as opposed to my connection), but it is pretty funny.
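When roughly one request per working day fails transiently, the usual fix is to wrap each fetch in a retry loop with backoff rather than treat a single failure as fatal. A hypothetical sketch (the actual script and its API calls aren't shown here):

```rust
use std::thread;
use std::time::Duration;

// Retry a fallible operation up to `max_tries` times with linear backoff.
// Generic over the operation, so the flaky GitHub fetch (or anything else)
// can be dropped in as the closure.
fn with_retries<T, E>(
    mut attempt: impl FnMut() -> Result<T, E>,
    max_tries: u32,
    base_delay: Duration,
) -> Result<T, E> {
    let mut last_err = None;
    for n in 0..max_tries {
        match attempt() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                thread::sleep(base_delay * n); // 0, 1x, 2x, ... the base delay
            }
        }
    }
    Err(last_err.unwrap())
}

fn main() {
    let mut calls = 0;
    // Simulated flaky request: fails twice, then succeeds.
    let result = with_retries(
        || {
            calls += 1;
            if calls < 3 { Err("transient") } else { Ok("merged PRs") }
        },
        5,
        Duration::from_millis(1),
    );
    assert_eq!(result, Ok("merged PRs"));
    assert_eq!(calls, 3);
    println!("succeeded after {calls} calls");
}
```

For a 25-minute polling interval, even a generous backoff adds negligible latency while absorbing these once-a-day blips.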
This is not really a complex question as much as it is an analogy demonstrating that allowing third parties to dictate how you live leads to a huge loss of your freedom, with bad consequences for your independence and control. But you are right: I could have said this in my above comment.
So yeah, their implementation of chmod checked if a path was pointing to the root of the filesystem with `if file == Path::new("/")`.
How the f** did this sub-amateur slop end up in a big-name Linux distribution? We've de-professionalized software engineering to such a degree that people don't even know what baseline-competent software looks like anymore.
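For anyone wondering why a literal comparison like that is broken: it only catches the exact spelling `/`, not other paths that resolve to the root. A minimal sketch (my illustration, not the uutils code):

```rust
use std::fs;
use std::path::Path;

fn main() {
    // "/.." resolves to the root on every Unix filesystem, but Rust's
    // Path comparison keeps the ParentDir component, so the naive guard
    // `file == Path::new("/")` lets it straight through:
    assert!(Path::new("/..") != Path::new("/"));

    // Resolving the path first (which also follows symlinks, closing the
    // symlink-to-"/" hole) catches the equivalent spellings:
    let resolved = fs::canonicalize("/..").expect("canonicalize failed");
    assert_eq!(resolved, Path::new("/"));
}
```

Even canonicalizing leaves a time-of-check/time-of-use window, which is why "compare the path string" is the wrong shape of fix for this class of bug in the first place.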