Weapons are designed with an opponent in mind, and guarded against the expected threat models from that opponent. Everything breaks down when the opponent does not do what you expect them to.
When you decapitate a well-organised military, all you achieve is installing a new enemy: one you know little about, whose actions you can't predict, and who now knows they are fighting for their own survival.
That's a lovely thing to say, but if your existence is being threatened by an aggressor, I wouldn't blame you for throwing out the rulebook.
In my view, if someone invades your territory and starts attacking you, you have no obligation to follow any sort of "principles" or "rules" when it comes to how you fight back. Anything you need to do to the attackers in order to defend yourself and your people is, by definition, morally defensible.
(Do note that I said "need". Doing arbitrary messed-up things that don't actually further the goal of driving back the attackers is not ok.)
There is no if. We've already done that. So yes, we are no better than them. So answer the question: why would Iran follow conventions that its enemy, which started a war of aggression, is not following?
Right, but the reason we have rules against declaring no quarter is to prevent a race to the bottom. It is absolutely reasonable to respond to a no-quarter declaration in kind, which is, again, the entire reason we have prohibitions on it.
They won't face any US law. AIUI, they have been getting letters from the DOJ office of legal counsel that say it's legal. This effectively immunizes them (the DOJ can't turn around and charge you with a crime, if they advised you beforehand it was not a crime).
The best shot would be to turn them over to the ICC
> they have been getting letters from the DOJ office of legal counsel that say it's legal. This effectively immunizes them (the DOJ can't turn around and charge you with a crime, if they advised you beforehand it was not a crime).
This is not true.
OLC opinions are just that: opinions. They are non-binding and non-promissory. As a norm they are an important factor in any assessment, but they are definitely not dispositive and not legally binding.
The only real barrier is the pardon power, but I'm personally fine at this point with totally breaking the seal, trying and jailing every criminal in the administration(++), and consider the pardon power gone for good. Small price to pay.
Yep. And war crime seems to have lost all meaning in the US.
But, even if you dismiss the idea of international standards, this is clearly very bad for US soldiers (and sailors, airmen, etc). I wonder if they see that.
A 4x RPi Zero W Docker Swarm cluster running the dockerised versions of Hercules with VM/370 Sixpack, VM/370 CE, and MVS TK4. All in an IKEA picture frame.
That, weirdly, should be fine; ARM is bi-endian in the sense of being perfectly happy to run either way. In fact, the easiest way I know of to test software on a big-endian system is to run a perfectly ordinary Raspberry Pi with NetBSD's big-endian port for it. :)
Yeah, I know ARM is bi-endian (pretty much all non-x86 archs in use nowadays are), but the question is whether there's actually enough of a software base for it. NetBSD having a big-endian ARM port is great, but most ARM software targets LE systems, since macOS, NT, and most Linux stuff is LE. This isn't much of a problem in the free software world, because we like to test things on obscure architectures, but the kind of proprietary stuff you'd want to run on ARM might have problems (assuming it wasn't already ported to AIX).
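To make the byte-order difference concrete, here's a minimal Python sketch (not from the thread, just an illustration) showing how the same 32-bit integer is laid out in memory under big- and little-endian conventions, and how to check what the host uses:

```python
import struct
import sys

# Pack the 32-bit value 0x01020304 in both byte orders to show how the
# same integer lays out differently in memory.
value = 0x01020304
big = struct.pack(">I", value)     # most significant byte first
little = struct.pack("<I", value)  # least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# The host's native order; 'little' on x86 and on most ARM Linux distros.
print(sys.byteorder)
```

Code that assumes one of these layouts (e.g. by casting raw buffers) is exactly the kind of proprietary software that breaks when moved to a big-endian port.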
Not in the real world, but this is kind of how Asimov's robots interpret their three laws: it's about consequences much more than the literal order. They also weigh the consequences of inaction, and might be driven to act when not acting could cause a violation.
Our AI is nowhere near the level of sophistication required to implement something like that, but it’s still an interesting idea.
You're right that current systems aren't close to that level of reasoning.
What I'm wondering is whether we can approximate some of it structurally, by defining when execution is allowed or not, even without that level of sophistication in the model itself.
Curious how far you think simple constraint systems can go before something like that kind of reasoning becomes necessary.