It's also not about lack of support, but the fact that you have to pay extra for every single decompiler. This sucks if you're analyzing a wide variety of targets because of the kind of work you do.
IDA also struggles with disasm for Harvard architectures, which tend to make up the bulk of what I analyze - it's all faked around synthetic relocations. Ghidra has native support for multiple address spaces.
I really want to like Binary Ninja, but whenever I have the choice between not paying (Ghidra), paying for something that I know works (IDA) and paying for something that I don't know if it works (Binja) then the last option has always lost so far.
Maybe we need to get some good cracked^Wcommunity releases of Binja so that we can all test it as thoroughly as IDA. The limited free version doesn't cut it unfortunately - if I can't test it on what I actually want to use it for, it's not a good test.
(also it doesn't have collaborative analysis in anything but the 'call us' enterprise plan)
Well, Ghidra's strength is batch processing at scale (which is why P-Code is less accurate than IDA's, but still good enough) while allowing a massive number of modules to execute. That allows huge distributed fleets of Ghidra. IDA has idalib now, and hcli will soon allow batch fleets, but IDA's focus is very much highly accurate analysis (for now), which makes it a lot less scalable performance-wise (for now).
> Warp, while excellent, requires individual approval for each command—there’s no equivalent to Claude’s “dangerous mode” where you can grant blanket execution trust.
That’s a lie. I simply added “.*” to the whitelist. It’s a regex.
Spending significant time adapting core kernel code or developing a safe Rust abstraction for DMA, only to be summarily shut down by a single gatekeeper who cites "not wanting multiple languages" is demotivating. It's especially incongruent given that others have championed Rust in the kernel, and Linux has begun hosting Rust modules.
If the project leadership — i.e. Linus — truly wants Rust integrated, that stance needs to be firmly established as policy rather than left up to maintainers who can veto work they personally dislike. Otherwise, contributors end up in a limbo where they invest weeks or months, navigate the intricacies of the kernel's development model, and then find out a single personality is enough to block them. Even if that personality has valid technical reasons, the lack of a centralized, consistent direction on Rust's role causes friction.
Hector's decision to leave is understandable: either you have an official green light to push Rust forward or you don't. Half measures invite exactly this kind of conflict. And expecting one massive rewrite or an all-encompassing patch is unrealistic. Integration into something as large and historically C-centric as Linux must be iterative and carefully built out. If one top-level developer says "no Rust", while others push "Rust for safety", that is a sign that the project's governance lacks clarity on this point.
Hector's departure highlights how messy these half signals can get, and if I were him, I'd also want to see an unambiguous stance on Rust — otherwise, it's not worth investing the time only to risk that your code, no matter how well engineered, gets turned down over personal preference.
I think Linus's initial stance of encouraging Rust drivers as an experiment, to see how they turn out, was the right decision. There should be some experience before making long-term commitments to a new technology.
But a lot of experience has been gained since then, and my impression is that the Rust drivers have been quite a success.
And now we are at a point where proceeding further does need a decision by Linus, especially as one of the kernel maintainers is actively blocking further work.
> Spending significant time adapting core kernel code or developing a safe Rust abstraction for DMA, only to be summarily shut down by a single gatekeeper who cites "not wanting multiple languages" is demotivating
Christoph Hellwig is one of the oldest subsystem maintainers.
Maybe the Rust developers should behave more carefully. Nobody wants to break core kernel code.
This is a fundamental misunderstanding of the structure of the Linux kernel, the nature of kernels in general, and the ways one performs automated verification of computer code.
Automated verification (including what Rust's compiler does) does not involve anything popularly known as AI, and automated verification as it exists today is more complete for Rust than for any other system (because no other widely used language today places the information needed for verification into the language itself, which results in Rust code being widely analyzable for safety).
Human validation is insufficient and error prone, which is why automated verification of code is something developers have been seeking and working on for a long time (before rust, even).
Having "explicit" (manual?) memory management is not a benefit for verification, whether by humans or by machines. Neither is using a low-level language that does not specify enough detail in the type system to perform verification.
Kernel modules aren't that special. One can put a _lot_ of code in them, that can do effectively anything (other than early boot bits, because they aren't loaded yet). Kernel modules exist for distribution reasons, and do not define any strict boundary.
If we're talking about out-of-tree kernel modules, those are not something that tends to exist for a long time. The only real examples today of long-lived out-of-tree modules are zfs (filesystem) and nvidia (gpu driver), and those only stay out-of-tree because of licensing and secrecy. Getting code in-tree generally helps keep it up to date with less effort from everyone involved: the people already making in-tree changes can see how certain APIs are being used, and if those in-tree folks are more familiar with the API they can/may improve the now-merged code. And the formerly out-of-tree folks don't have to run their own release process, don't have to deal with constant compatibility issues as kernel APIs change, etc.
>Human validation is insufficient and error prone,
Basically, if you assume that it's impossible for humans to be correct, or that it's impossible to write correct memory-safe C code, you start down the path that leads to things like Java, Haskell, and now Rust. And then when nobody takes you seriously, you wonder why - well, it's because you are telling people who know how to write correct and memory-safe C code that we are insufficient and error-prone
>Kernel modules aren't that special.
By definition, they interface with the core kernel code. They are not core kernel code
Because it forces the developer to think about what is being written at every step of the way, instead of relying on language features that are far from complete in terms of providing memory safety.
A naive take would be that it adds an abstraction layer you need to keep checked, in addition to the kernel code itself. Not making a value statement at all on the level of impact that actually has in practice.
That's shifting the goal post.
I don't know of a single language that manages to make you write secure and correct code without sacrificing ergonomics.
Does that mean usage of all file lockers are equally flagged as suspicious?
My memory gets a bit patchy here, but I'm pretty sure the detective said something along the lines of "MEGA is only ever used for <thing I was accused of>". Which struck me as blatantly false and indicative of either an agenda or incompetence.
And I tend to subscribe to Hanlon's razor:
Never attribute to malice that which is adequately explained by stupidity.
Hearsay, but I hear (and say) they basically got netflow data, and hell, many ISPs are selling it; at least one firm exists that re-correlates it back to users. Now mix networks are going to need to be set to X GB/day and then uniformly emit traffic whether it's legit or padding. Also say goodbye to low latency.
I think Hyphanet or IPFS or the like with fixed and padded bandwidth links will be the next step in privacy. Again, there are consumer ISPs selling netflow data!
I'm renting a cube (of sorts) in a building in Alexandria VA that, just the other month, sold for something like 30% of what the owners originally paid for it.
We're definitely witnessing a crash in US office space prices.
19h's comment that started this sub-thread is a perfect example of the horrible state of civics education.
I'm not an accountant and I definitely don't know taxes properly either, but I do know enough to know that businesses pay taxes on their net profit (gross income minus deductible expenses). Those expenses can be payroll costs, cost of goods, shipping and handling, rent, loan payments, insurance premiums, equipment purchases and upkeep, travel and lodging expenditures, and so on.
Assuming that the losses are legitimate and can be audited if necessary, this is not tax evasion and to baselessly accuse anyone of it is literally defamation.
Presumably you weren't planning to do that when you set it up. But sometimes people do lose money, particularly if they took a risk. If you take enough risks, you'll probably always have some losses ready to be realized.
Brilliant. I own some Bed Bath and Beyond stock. I was going to sell it for a ton of money, but now that the company is bankrupt, I can sell it for practically nothing and avoid taxes!
You probably can. You wouldn't have bought them strategically for that purpose of course.
Most people probably pick a mix of winners and losers. Once you find out which ones were the losers, you can cash them in strategically to lower your taxes. I think that's the idea.
...depends on the jurisdiction. In many, property taxes aren't based on the last transaction price; they're based on what the city assesses the value at. Selling the property for $1 won't change the assessed value, unless all the other buildings in the same area do the same thing.
>- Money laundering? Check.
Except property transfers are public information so it's obvious what's going on.
Surely, the loss of some user metadata is a catastrophe on par with the burning of the Library of Alexandria. Perhaps next time, the Internet Archive should consult their vast army of paid engineers and their bottomless coffers to ensure such a calamity never befalls humanity again. After all, what's the point of preserving vast swathes of human knowledge and culture if one can't access their personal bookmarks from 2007?
Absolutely phenomenal quality. Subscribed to the pro plan! Please add an option to use Claude, as I absolutely prefer it to GPT-4, and it's probably cheaper too.