This isn't even that new. Part of the motivation for building autonomous nuclear response programs during the cold war was specifically to remove accountability, and guilt, from human operators. But AI does bring it to a new level.
> Part of the motivation for building autonomous nuclear response programs during the cold war was specifically to remove accountability, and guilt, from human operators.
Details, please. Because the more likely reality is an attempt to avoid conflict by solidifying MAD, i.e. by preventing a human from vetoing a second strike.
sadly that's also true within Ukraine. like, I know that Russians are handling Ukrainian prisoners of war very brutally (no sources, why: [0]), but if not for [0], AND if I wouldn't be killed by my co-citizens for it, I would point out a good chunk of misconduct on the Ukrainian side as well.
I also recall the history lessons. I can't recall anyone who committed a war crime against Nazi Germany being internationally prosecuted. yep, the West did prosecute domestically, and there were some high-profile cases involving German POWs, but I can't recall a single Soviet soldier being charged for, e.g., rape.
[0]: there is nothing public to link to that remained up, and I'm long out from private Telegram channels where such videos are posted; plus, even if I could, you and mods wouldn't want to see the video of someone getting beheaded
I'm not sure "being gamed" is the lens I would view this particular instance through. People (some, at least) have gotten it into their heads that they can ask LLMs objective questions and get objectively correct answers. The LLM companies are doing very little to disabuse them of that belief.
Meanwhile, LLMs are essentially internet regurgitation machines, because of course they are, that's what they do. Which makes them useless for getting "hard truth" answers especially in contested or specialized fields.
I'm honestly afraid of the impact of this. The internet has enough herd bullshit on it as it is. (e.g. antivaxxers, flat earthers, electrosensitivity, vitamin/supplement junk, etc.) We don't need that amplified.
By that logic, LLMs would be essentially useless considering the amount of garbage that exists on the internet. And, honestly, for things like this they are. But they're not marketed as such, and _that_ is the problem.
The public at large doesn't seem to care about this distinction.
Here's an example. Search Google for "ai data centers heat island". Around 80 websites published articles based on a preprint that was later shown to be wrong and misleading.
It matters because for medical questions, you [are supposed to] go to a medical professional, and they very much care about and make that distinction.
Which is exactly the problem here; it "used to be" that reasonable people would disbelieve random things they find on the internet at least to some degree. "Media literacy". LLMs don't seem to have that capability, and a good number of people are using LLMs in blissful ignorance of that fact. LLMs very confidently proclaim things that make them sound like experts in the field in question.
Would it have made a difference for the AI data center heat island thing you're quoting? maybe not. But for medical matters? Most people wouldn't even have caught wind of this odd fake disease. LLMs just amplify it and serve it to everyone.
I agree with you, but I don't think the companies have solved it. They should be more skeptical of medical articles in general and be more conservative.
> Which is exactly the problem here; it "used to be" that reasonable people would disbelieve random things they find on the internet at least to some degree. "Media literacy". LLMs don't seem to have that capability, and a good number of people are using LLMs in blissful ignorance of that fact.
I completely disagree with this part. LLMs absolutely have the ability to be skeptical, but skepticism comes at a cost. LLMs did what used to be a reasonable thing: trust articles published in reputable sources. But maybe they shouldn't do that; they should spend more time and processing power on being skeptical.
The definition of a preprint is that it isn't peer reviewed. Unless you're an expert in the field, you IMHO shouldn't be looking at preprints. Might be OK if they come recommended by multiple unaffiliated experts (i.e. kinda half reviewed), but definitely not by default.
Meson is a Python layer over the ninja builder, as cmake can be. xmake is both a build tool and a package manager; it's fast like ninja and has no DSL, since the build file is just Lua. It's more like cargo than Meson is.
I didn't claim it was a package manager, just that it looked similar. The root post said "build tool", and that's what Meson is as well.
Other than that, both "python layer" and "over the ninja builder" are technically wrong. "python layer" is off since there is now a second implementation, Muon [https://muon.build/], in C. "over the ninja builder" is off since it can also use Visual Studio's build capabilities on Windows.
Interestingly, I'm unaware of other build-related systems that have multiple implementations, except Make (which is in fact part of the POSIX.1 standard.) Curious to know if there are any others.
Shipping anything built with -march=native is a horrible idea. Even on homogeneous targets like one of the clouds, you never know if they'll e.g. switch CPU vendors.
The correct thing to do is use microarch levels (e.g. x86-64-v2) or build fully generic if the target architecture doesn't have MA levels.
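A minimal sketch of what that looks like in practice (the flag values are real GCC/Clang options; the variable names are just illustrative):

```shell
# Portable baseline for modern x86-64 servers: x86-64-v2 assumes
# SSE4.2/POPCNT-era features, but nothing vendor- or generation-specific.
CFLAGS_BASELINE="-O2 -march=x86-64-v2"

# Fully generic fallback for targets without defined microarch levels.
CFLAGS_GENERIC="-O2 -march=x86-64 -mtune=generic"

echo "$CFLAGS_BASELINE"
```

Unlike -march=native, either of these produces a binary that runs on any reasonably recent CPU from either vendor.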
Not the OP, but: -march tells the compiler it can rely on the features of that particular CPU architecture family, broken out by generation. In the worst case the compiler could generate code that does not run on older CPUs of the same family or on CPUs from different vendors.
-mtune says "generate code that is optimised for this architecture", but it doesn't enable arch-specific features.
Whether these are right or not depends on what you are doing. If you are building Gentoo on your laptop, you should absolutely use -mtune=native and -march=native. That's the whole point: you get the most optimised code you can for your hardware.
If you are shipping code for a wide variety of architectures, and crucially shipping it in binary form, then you want to think harder about what you want to support. If you're shipping standard software, pick a reasonable baseline (check what your distribution uses in its cflags). If you're shipping compute-intensive software, perhaps you load a shared object per CPU family, or build your engine in place for best performance. The Intel compiler quite famously optimised per family, included all the copies in the output, and selected the worst one on AMD ;) (https://medium.com/codex/fixing-intel-compilers-unfair-cpu-d...)
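One common shape for the "shared object per CPU family" approach is a tiny launcher that inspects the CPU's feature flags and picks a build at runtime. A hedged sketch (the library names are hypothetical, and /proc/cpuinfo is Linux-specific):

```shell
# Pick the most capable build of a hypothetical compute kernel at startup.
# Falls back to a generic build on non-Linux systems or older CPUs.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null || echo "")

case " $flags " in
  *" avx512f "*) kernel=libkernel_avx512.so ;;
  *" avx2 "*)    kernel=libkernel_avx2.so ;;
  *)             kernel=libkernel_generic.so ;;
esac

echo "selected: $kernel"
```

The same idea can live inside the binary instead: GCC's function multiversioning dispatches per-CPU at load time without a launcher script.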
> Not the OP, but: -march tells the compiler it can rely on the features of that particular CPU architecture family, broken out by generation. In the worst case the compiler could generate code that does not run on older CPUs of the same family or on CPUs from different vendors.
Or on newer CPUs of the same vendor (e.g. AMD dropped some instructions in Zen that Intel didn't pick up) or even in different CPUs of the same generation (Intel market segmenting shenanigans with AVX512).
Just popping in here because people seem to be surprised by
> I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.
This is exactly the use case in HPC. We always build with -march=native and go to some trouble to enable all the appropriate vectorization flags (e.g., for PowerPC) that don't come along automatically with -march=native.
Every HPC machine is a special snowflake, often with its own proprietary network stack, so you can forget about binaries being portable. Even on your own machine you'll be recompiling your binaries every time the machine goes down for a major maintenance.
it certainly has scale issues when you need to support larger deployments.
[P.S.: the way I understand the words, "shipping" means "passing it off to someone else, likely across org boundaries" whereas what you're doing I'd call "deploying"]
On every project I've worked on, the PC I've had has been much better than the minimum PC required. Just because I'm writing code that will run nicely enough on a slow PC, that doesn't mean I need to use that same slow PC to build it!
And then, the binary that the end user receives will actually have been built on one of the CI systems. I bet they don't all have quite the same spec. And the above argument applies anyway.
If you use a cloud provider with a remote development environment (VSCode Remote/JetBrains Gateway), then you're wrong: cloud providers swap out CPUs without telling you, and they can sell newer CPUs at older prices if there's less demand for the newer CPUs. You can't rely on the CPU staying the same.
To take an old naming convention, even an E3-Xeon CPU is not equivalent to an E5 of the same generation. I’m willing to bet it mostly works but your claim “I build on the exact hardware I ship on” is much more strict.
The majority of people I know use either laptops or workstations with Xeon workstation or Threadripper CPUs, but the deployment target will be a Xeon Scalable datacenter CPU or an Epyc.
Hell, I work in gamedev and we cross compile basically everything for consoles.
So you buy the exact same generation of Intel and AMD chips for your developers as for your servers and your customers? And encode this requirement into your development process for the future?
No? That would be ridiculous. You’re inventing dumb scenarios to make your argument work.
It’s more like: some organizations buy many of the same model of server, make one or two of them their build machines, and use the rest as production. So it’d be totally fine to use march=native there.
You just wouldn’t use those binaries anywhere else. Devs would simply do their own build locally (why does everyone act like this is impossible?) and use that. And obviously you don’t ship these binaries to customers… but, why are we suddenly talking about client software here? There’s a whole universe of software that exists to be a service and not a distributed binary, we’re clearly talking about that. Said software is typically distributed as source, if it’s distributed at all.
There’s a thousand different use cases for compiling software. Running locally, shipping binaries to users, HPC clusters, SaaS running on your own hardware… hell, maybe you’re running an HFT system and you need every microsecond of latency you can get. Do you really think there are no situations ever where -march=native is appropriate? That’s the claim we’re debunking, the idea that "-march=native is always always a mistake". It’s ridiculous.
We use physical hardware at work, but it's still not the way you build/deploy unless it's for a workstation/laptop type thing.
If you're deploying the binary to more than one machine, you quickly run into issues where the CPUs are different and you would need to rebuild for each of them. This is feasible if you have a couple of machines that you generally upgrade together, but quickly falls apart at just slightly more than 2 machines.
Lots of organizations buy many of a single server spec. In fact that should be the default plan unless you have a good reason to buy heterogeneous hardware. With the way hardware depreciation works they tend to move to new server models “in bulk” as well, replacing entire clusters/etc at once. I’m not sure why this seems so foreign to folks…
Nobody is saying dev machines are building code that ships to their servers though… quite the opposite, a dev machine builds software for local use… a server builds software for running on other servers. And yes, often build machines are the same spec as the production ones, because they were all bought together. It’s not really rare. (Well, not using the cloud in general is “rare” but, that’s what we’re discussing.)
As a FOSS maintainer… code was already cheap before. Good code wasn't. And it still isn't… even if only because the cost includes review, but still often enough for the code itself too.
Ships would not have been able to pass freely at a later point. That's why Iran was building and buying these missiles. Folks look around and say, wow, they did so much damage; now imagine 2x-5x the number of missiles and launchers, and by the way, why not build a nuclear bomb to really make sure the rest of the world pays them for oil and energy.
Of course Iran wasn't going to close the Strait yet; they didn't yet have the ability to inflict enough pain to deter the US, Israeli, and/or Gulf State strikes that would prevent them from closing it.
Where are you getting this idea that anyone is paying Iran? Genuinely confused about this. The only thing that has happened is that the US made Iran open the Strait up for two weeks in exchange for a pause in bombing. Nothing else has been agreed to. What source are you looking at that says anyone is paying them and that it has been agreed to?
-Complete cessation of the war on Iraq, Lebanon, and Yemen
-Complete and permanent cessation of the war on Iran with no time limit
-Ending all conflicts in the region in their entirety
-Reopening the Strait of Hormuz
-Establishing a protocol and conditions to ensure freedom and security of navigation in the Strait of Hormuz
-Full payment of compensation for reconstruction costs to Iran (via reparations in the form of a USD 2 million per-ship Hormuz fee, to be shared with Oman[?] for some reason? Again, I don't understand why anyone is paying anything to anyone else?)
-Full commitment to lifting sanctions on Iran
-Release of Iranian funds and frozen assets held by the United States (Also to be used as reparations to Iran. Again, why?)
-Iran fully commits to not seeking possession of any nuclear weapons (More on this below. And it's a doozy.)
-Immediate ceasefire takes effect on all fronts immediately upon approval of the above conditions
------------
OK. Now that is the English-language version. The Farsi version, which is not being reported in the media, also contains the following language: "acceptance of enrichment". (Which, again, seems to me like it would be a non-starter.) The idea being that enrichment is a dual-use technology, I assume?
The full version isn't being reported in English language media, but the Administration has it. When asked about what's in the plan, the White House will only confirm that "yes", it is 15 points and not just the 10 we know about. So that answer at least confirms there are additional points. Which, again, even if there weren't added points, the 10 we know about mean that everyone still pays Iran for passage through the straits.
I'm gonna be honest here, this seems totally unworkable. I'll even go further, and characterize this as Iran giving us a list of conditions for our surrender. This is not acceptable. This is materially worse than the status quo that existed 2 months ago.
This isn't answering what I asked though. This is a statement of Iranian talking points but there is no agreement, the US hasn't "capitulated", nor have further talks taken place. Nobody has agreed to pay Iran anything. It doesn't matter what they say.
When you write things like this:
> Which, again, even if there weren't added points, the 10 we know about mean that everyone still pays Iran for passage through the straits.
It's like who cares what they wrote in these 10 points? They can demand the moon be made of cheese too. There will be no paying to use the Strait because like other points in these 10 demands the US and Gulf States won't agree to it.
When Iran wrote this did you like, think that they made these demands and then other countries are trying to comply with them or something? It doesn't matter what Iran writes. It only matters what the US says will happen as we see fit.
> Doing nothing would have been better than this.
Doing nothing means the following:
- Iran continues to stock pile missiles
- Iran gets to a point where they have so many missiles that it becomes untenable for the US to stop them from buying and building more, because the destruction they would inflict on the Gulf States and others they hold hostage isn't worth the risk
- Because Iran can't be stopped they would continue their pursuit of a nuclear weapon
Then Iran can exact whatever toll they want on the Strait, and there's nothing anyone can do about it, and we're right where we are now except the US has pulled out of the region and Iran's crazy regime is making billions from Gulf States and the international community by taxing trade. That's why the US struck now instead of waiting: if we wait, there's nothing we can reasonably do!
Sit down and think this through for yourself. Of course you can argue "Iran wouldn't do that", but you have to judge them by their word and by their activities, which indicate that is indeed what they planned on doing. Doing nothing means we have a much, much bigger problem down the line. Doing something now means we can likely prevent that bigger problem from occurring in the first place.
Maybe I should have been clearer? These are the points in the proposal that the Iranians/Pakistanis sent to Trump, which Trump said formed the basis for the ceasefire. Which it doesn't. There is nothing there for us.
It doesn’t matter anymore in any case as Israel just launched a massive barrage. So there will be no ceasefire now anyway.
No worries, sorry if I wasn't clear as well. To your point, I didn't really think a ceasefire would last long anyway because neither side has any interest in changing their perspective and at the end of the day the US holds the upper hand and the folks they are "negotiating" with are, well, rather delusional.
You or your subordinates target an elementary school: that's a war crime.
Your "battlefield AI" targets an elementary school: software bug, it happens, can't be helped.