Hacker News | 19h's comments

Which exotic architectures is IDA missing from your perspective?


Stuff I've recently analyzed that IDA has no decomp support for (and Ghidra's is anywhere from good enough to actually good):

  - AVR
  - Z80
  - HC08
  - 8051
  - Tricore
  - Xtensa
  - WebAssembly
  - Apple/Samsung S5L87xx NAND controller command sequencer VLIW (custom SLEIGH)
And probably more that I've forgotten.

It's also not about lack of support, but the fact that you have to pay extra for every single decompiler. This sucks if you're analyzing a wide variety of targets because of the kind of work you do.

IDA also struggles with disasm for Harvard architectures, which tend to make up the bulk of what I analyze - it's all faked around synthetic relocations. Ghidra has native support for multiple address spaces.


Binary Ninja supports some of them as well, highly recommend.


I really want to like Binary Ninja, but whenever I have the choice between not paying (Ghidra), paying for something that I know works (IDA) and paying for something that I don't know if it works (Binja) then the last option has always lost so far.

Maybe we need to get some good cracked^Wcommunity releases of Binja so that we can all test it as thoroughly as IDA. The limited free version doesn't cut it unfortunately - if I can't test it on what I actually want to use it for, it's not a good test.

(also it doesn't have collaborative analysis in anything but the 'call us' enterprise plan)


Ghidra has a slightly different focus than IDA, so they're definitely not just using Ghidra :-)


I have only a very basic understanding of the two tools. Can you give me just some highlights regarding their differences?


Well, Ghidra's strength is batch processing at scale (which is why P-Code is less accurate than IDA's but still good enough) while allowing a massive number of modules to execute. That allows huge distributed fleets of Ghidra. IDA has idalib now, and hcli will soon allow batch fleets, but IDA's focus is very much highly accurate analysis (for now), which makes it a lot less scalable performance-wise (for now).
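To make the batch angle concrete, here's a hypothetical Python sketch that builds one invocation of Ghidra's headless launcher (`analyzeHeadless`) per binary. The paths, project name, and post-script are made up for illustration; a real fleet would fan these commands out to workers.

```python
# Hypothetical batch driver for Ghidra's headless analyzer.
# analyzeHeadless is Ghidra's standard headless launcher; every path,
# project name, and script name below is an illustrative placeholder.
def headless_cmd(ghidra_home, project_dir, project_name, binary, post_script):
    return [
        f"{ghidra_home}/support/analyzeHeadless",
        project_dir, project_name,
        "-import", binary,
        "-postScript", post_script,
        "-deleteProject",  # don't accumulate per-binary projects
    ]

# One command per binary; a fleet would dispatch these via subprocess
# or a job queue.
cmds = [headless_cmd("/opt/ghidra", "/tmp/proj", "batch", b, "dump_pcode.py")
        for b in ["/bin/ls", "/bin/cat"]]
print(len(cmds))  # → 2
```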


> Warp, while excellent, requires individual approval for each command—there’s no equivalent to Claude’s “dangerous mode” where you can grant blanket execution trust.

That’s a lie. I simply added “.*” to the whitelist. It’s a regex.
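For what it's worth, the regex semantics are easy to demonstrate. How Warp applies its allowlist internally is an assumption here; this only shows why a `.*` entry matches every command:

```python
import re

# A ".*" allowlist entry matches any single-line command string, which is
# why it behaves like a blanket "dangerous mode". (That Warp matches its
# allowlist entries as regexes against the full command is assumed here.)
allowlist = [r".*"]

def is_allowed(command: str) -> bool:
    return any(re.fullmatch(pattern, command) for pattern in allowlist)

print(is_allowed("rm -rf /tmp/build"))  # → True
```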


The frustration seems justified.

Spending significant time adapting core kernel code or developing a safe Rust abstraction for DMA, only to be summarily shut down by a single gatekeeper who cites "not wanting multiple languages" is demotivating. It's especially incongruent given that others have championed Rust in the kernel, and Linux has begun hosting Rust modules.

If the project leadership — i.e. Linus — truly wants Rust integrated, that stance needs to be firmly established as policy rather than left up to maintainers who can veto work they personally dislike. Otherwise, contributors end up in a limbo where they invest weeks or months, navigate the intricacies of the kernel's development model, and then find out a single personality is enough to block them. Even if that personality has valid technical reasons, the lack of a centralized, consistent direction on Rust's role causes friction.

Hector's decision to leave is understandable: either you have an official green light to push Rust forward or you don't. Half measures invite exactly this kind of conflict. And expecting one massive rewrite or an all-encompassing patch is unrealistic. Integration into something as large and historically C-centric as Linux must be iterative and carefully built out. If one top-level developer says "no Rust", while others push "Rust for safety", that is a sign that the project's governance lacks clarity on this point.

Hector's departure highlights how messy these half signals can get, and if I were him, I'd also want to see an unambiguous stance on Rust — otherwise, it's not worth investing the time only to find that your code, no matter how well engineered, might be turned down over personal preference.


I think Linus's initial stance of encouraging Rust drivers as an experiment, to see how they turn out, was the right decision. There should be some experience before making a long-term commitment to a new technology.

But a lot of experience has been gained since then, and my impression is that the Rust drivers have been quite a success.

And now we are at a point where proceeding further does need a decision by Linus, especially as one of the kernel maintainers is actively blocking further work.


> Spending significant time adapting core kernel code or developing a safe Rust abstraction for DMA, only to be summarily shut down by a single gatekeeper who cites "not wanting multiple languages" is demotivating

Christoph Hellwig is one of the longest-serving subsystem maintainers.

Maybe the Rust developers should behave more carefully. Nobody wants to break core kernel code.


> Christoph Hellwig is one of the longest-serving subsystem maintainers.

Also, it's not his domain; per Marcan's Reddit response, Greg is the one in charge of maintaining this area.

So this is drive-by reviewing.


[flagged]


This is a fundamental misunderstanding of the structure of the linux kernel, the nature of kernels in general, and the ways one performs automated verification of computer code.

Automated verification (including as done by Rust in its compiler) does not involve anything popularly known as AI, and automated verification as it exists today is more complete for Rust than for any other system (because no other widely used language today places the information needed for verification into the language itself, which results in Rust code being widely analyzable for safety).

Human validation is insufficient and error prone, which is why automated verification of code is something developers have been seeking and working on for a long time (before rust, even).

Having "explicit" (manual?) memory management is not a benefit to enabling verification either by humans or by machines. Neither is using a low level language which does not specify enough detail in the type system to perform verification.

Kernel modules aren't that special. One can put a _lot_ of code in them, that can do effectively anything (other than early boot bits, because they aren't loaded yet). Kernel modules exist for distribution reasons, and do not define any strict boundary.

If we're talking about out-of-tree kernel modules, those are not something that tend to exist for a long time. The only real examples today of long-lived out-of-tree modules are zfs (filesystem) and nvidia (gpu driver). These only exist out-of-tree because of licensing and secrecy. This is because getting code in-tree generally helps keep code up to date with less effort from everyone involved: the people already making in-tree changes can see how certain APIs are being used, and if those in-tree folks are more familiar with the API they can/may improve the now-merged code. And the formerly out-of-tree folks don't have to run their own release process, don't have to deal with constant compatibility issues as kernel APIs change, etc.


>Human validation is insufficient and error prone,

Basically, if you assume that it's impossible for humans to be correct, or that it's impossible to write correct memory-safe C code, you start down the path that leads to things like Java, Haskell, and now Rust. And then when nobody takes you seriously, you wonder why. Well, it's because you are telling people who know how to write correct and memory-safe C code that we are insufficient and error prone.

>Kernel modules aren't that special.

By definition, they interface with the core kernel code. They are not core kernel code


This doesn't make sense to me. Why is a manual language that requires validation better than a language that enforces some safety on its own?


Because it forces the developer to think about what is being written at every step of the way, instead of relying on language features that are far from complete in terms of providing memory safety.


Naive take would be that it adds abstraction that you need to keep checked, in addition to the kernel code itself. Not making a value statement at all on the level of impact that actually has in practice.


I don't get this whatsoever. Are there memory allocation bugs in GC languages such as Java?

Even if that is the case, Rust specifically is designed not to use a GC.

On top of that, you can do manual memory management in Rust if you want, afaik.


Rust with manual memory management is just more cumbersome than C


That's moving the goalposts. I don't know of a single language that tries to make you write secure and correct code without sacrificing ergonomics.

Closest is most likely Ada.


The patch is just a binding / abstraction of DMA for writing Rust drivers.


You should explicitly mention that this is your blog post, and that this is a "members-only" post?


As someone working in comint I can assure you that there's nothing special about MEGA compared to others in terms of flagging.


That's what someone in comint would say ;)

Does that mean usage of all file lockers are equally flagged as suspicious?

My memory gets a bit patchy here, but I'm pretty sure the detective said something along the lines of "MEGA is only ever used for <thing I was accused of>". Which struck me as blatantly false and indicative of either an agenda or incompetence.

And I tend to subscribe to Hanlon's razor:

Never attribute to malice that which is adequately explained by stupidity.


Hearsay, but I hear (and say) they basically got netflow data; hell, many ISPs are selling it, and at least one firm exists that re-correlates it back to users. Now mix networks are going to need to be set to X GB/day and then uniformly emit traffic regardless of whether it's legit or padding. Also say goodbye to low latency.

I think Hyphanet or IPFS or the like with fixed and padded bandwidth links will be the next step in privacy. Again, there are consumer ISPs selling netflow data!


Or maybe, just maybe, this is tax evasion?


I'm renting a cube (of sorts) in a building in Alexandria VA that sold for something like 30% of what the owners paid for it originally just the other month.

We're definitely witnessing a crash in US office space prices.


I never understood the argument that “locking in your losses is actually a form of tax evasion” - or am I misunderstanding what you are trying to say?


I don't really know how taxes work, but I'd imagine it's because somehow it lowers your tax bill for that year.


19h's comment that started this sub-thread is a perfect example of the horrible state of civics education.

I'm not an accountant and I definitely don't know taxes properly either, but I do know enough about taxes to know that businesses pay taxes on their net profit (gross income - losses = net profit). Those losses can be payroll costs, costs of goods, shipping and handling costs, rent, loan payments, insurance premiums, equipment purchases and upkeep, travel and lodging expenditures, and so on.
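The net-profit point can be shown with toy numbers (the figures and the flat 21% rate here are purely illustrative, not actual tax rules):

```python
def taxable_profit(gross_income, deductible_costs):
    # Businesses are taxed on net profit, not gross income:
    # net profit = gross income - deductible losses/costs.
    return gross_income - sum(deductible_costs)

# Made-up example figures.
costs = {"payroll": 400_000, "rent": 60_000, "goods": 240_000}
net = taxable_profit(1_000_000, costs.values())
tax = round(net * 0.21, 2)  # illustrative flat rate, not real tax law
print(net, tax)  # → 300000 63000.0
```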

Assuming that the losses are legitimate and can be audited if necessary, this is not tax evasion and to baselessly accuse anyone of it is literally defamation.


The alleged tax avoider is in no way harmed by a kook on HN claiming illegal avoidance, so the elements of defamation are not met.


which is offset by the fact that you just lost a ton of money, so it doesn't really add up


Presumably you weren't planning to do that when you set it up. But sometimes people do lose money, particularly if they took a risk. If you take enough risks, you'll probably always have some losses ready to be realized.


Selling a NYC skyscraper for 97.5% under market value? Yeah, that's tax evasion 101.

Reasons:

- Property taxes? Tanked.

- Capital gains? What capital gains?

- Money laundering? Check.

- Gift tax dodge? Probably.

- Transfer tax? Lol.

- Asset value shenanigans? You bet.

IRS gonna love this one. Good luck explaining that "market rate" to the auditors.


Brilliant. I own some Bed Bath and Beyond stock. I was going to sell it for a ton of money, but now that the company is bankrupt, I can sell it for practically nothing and avoid taxes!


You probably can. You wouldn't have bought them strategically for that purpose of course.

Most people probably pick a mix of winners and losers. After you find out which ones were the losers, then I guess you can cash them in to lower your taxes strategically. I think that's the idea.
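The loss-harvesting idea sketched above, with made-up numbers and a purely illustrative 20% rate:

```python
def capital_gains_tax(realized_gains, realized_losses, rate=0.20):
    # Realized losses offset realized gains; tax applies to the net.
    # The 20% flat rate is purely illustrative, not real tax law.
    net = max(realized_gains - realized_losses, 0)
    return round(net * rate, 2)

# Without selling the losing position:
print(capital_gains_tax(50_000, 0))       # → 10000.0
# After realizing a 30_000 loss on the loser:
print(capital_gains_tax(50_000, 30_000))  # → 4000.0
```

The tax bill drops, but only because real money was lost on the losing position, which is the offset the sibling comment points out.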


>- Property taxes? Tanked.

...depends on the jurisdiction. In many, property taxes aren't based on the last transaction price; they're based on the value the city assesses the property at. Selling the property for $1 won't affect the assessed value, unless all the other buildings in the same area do the same thing.

>- Money laundering? Check.

Except property transfers are public information so it's obvious what's going on.


Surely, the loss of some user metadata is a catastrophe on par with the burning of the Library of Alexandria. Perhaps next time, the Internet Archive should consult their vast army of paid engineers and their bottomless coffers to ensure such a calamity never befalls humanity again. After all, what's the point of preserving vast swathes of human knowledge and culture if one can't access their personal bookmarks from 2007?


It's a lot simpler these days with modern iOS versions: https://schlaubischlump.github.io/LocationSimulator/


Absolutely phenomenal quality. Subscribed to the pro plan! Please add an option to use Claude, as I absolutely prefer it to GPT-4, and it's probably cheaper too.

