Ever since I was young I've been fairly divided on the subject. I've dealt with high school students affected by the downing of flight MH17, and that led to a lot of grief among them. It usually led to strong anti-war sentiments, but some also felt a need to "do" something with it.
If no one works on defence systems, then everything we have could be jeopardized, perhaps not this week but in five years. On that basis I can reconcile the idea of working in defence-related R&D. I also know that these sentiments are exploited by unscrupulous individuals to gain influence, but I don't think we should let that drive a divide between people with a strong moral compass and those without, since we'd be worse off if no one in a position of power were making moral decisions. That requires people to judge work by its content instead of its domain. It also requires the workforce to have enough collective pressure to stall immoral defence (or rather attack) systems.
Automated decision-making tools throw a wrench into this because they bring us steps closer to the mass deployment of questionable and potentially unhinged munitions. If laws mandated human-in-the-loop systems, that would be a better outcome.
No one should apologize for feeling conflicted while giving an issue considerable thought. Constantly reassessing your position as the world changes should be encouraged as the default approach ("constantly" within reason, of course).
I can imagine some Americans making a decision based on the threat of other authoritarian states, and being left completely bewildered when they have to grapple with the notion that their own government may be the bigger threat to their security.
> If no one works on defence systems, then everything we have could be jeopardized, perhaps not this week but in five years. On that basis I can reconcile the idea of working in defence-related R&D.
I am not saying this line of thinking is completely absurd. But I think every individual considering it should reflect a lot: (1) Is your country using its "defense" systems wisely? (2) Won't the technology be replicated by adversaries anyway? (3) And so on.
Overall, the people and resources devoted to weapons R&D probably far exceed those devoted to things like diplomacy, ethics, or activism for international human rights (assuming human rights violations are the only legitimate reason for war).
It's significantly safer, for individual nations and humanity as a whole, if we're not all armed to the teeth and constantly on the brink of large-scale conflict, and are instead more or less ethically aligned, all respecting basic human rights and respecting other nations.
> If no one works on defence systems, then everything we have could be jeopardized, perhaps not this week but in five years
Everything we have is already jeopardized, because those systems are actually attack systems and were just used to start a war. We'll be lucky if it doesn't grow into WWIII.
And I just read an article about how those "defense" systems are used to bomb hospitals with the double-tap tactic, meaning you bomb the rescuers when they arrive. On literally the first day of that no-defense war, they were used to bomb a school. And before that, they were used to execute fishermen, and maybe smugglers, with no judicial review. Just to make someone feel manly.
> If no one works on defence systems, then everything we have could be jeopardized
The reality is that the US government has not historically been engaged in defense. It has been engaged in offense. If you live in the United States and work on "defense," you are working on offense. Even if you are designing something like missile interceptors, those have historically been used primarily to protect US assets in wars that the US started.
That's a psychotic thing to say about starting new wars and aiding genocides. The only thing being defended is the profits of Western oligarchs.
At the same time, I remember growing up in the internet's wild west, and bad encounters weren't an issue for me because of the golden rule I was taught from the start: you don't give out your personal information and you don't interact with complete strangers. Learning to navigate the web instead of living in a walled garden was helpful in many ways.
The better question to ask ourselves is: does the capability to gather more information also lead to more power to act on that information? If investigative resources are already spread thin, it's not like they're going to catch more criminals by investing more there. Driving questionable individuals off the platform with lots of transparency *is* effective, but it's just one specific tool aimed at a symptom.
I think part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially, the real problem is discovery and the warding off of social outliers, in a way that doesn't put all responsibility on opaque algorithms or corporations.
Part of their E2E keys could be shared in an intentionally high-friction way, like mailing a physical item or a "friend code". That way parents and vetted friends can keep their privacy.
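To make that concrete, here's a minimal sketch of what such a "friend code" could look like: a short secret printed on a card and handed over or mailed, which both devices then mix into their key agreement. Everything here is hypothetical (the names, the 8-symbol format, the ~39-bit size); a real system would generate the secret with a CSPRNG and run it through a proper KDF such as HKDF.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: an 8-symbol "friend code" carrying a ~39-bit
 * secret exchanged physically (printed card, mailed item). Only
 * contacts vetted out-of-band ever learn it, so only they can
 * complete the E2E handshake. */

/* 30 symbols; 0/O, 1/I/L and U (vs V) are excluded for legibility. */
static const char ALPHABET[] = "ABCDEFGHJKMNPQRSTVWXYZ23456789";

/* Encode a secret (< 30^8) as a code like "K7QW-M3ZP". */
void encode_friend_code(uint64_t secret, char out[10]) {
    for (int i = 0; i < 8; i++) {
        int pos = i < 4 ? i : i + 1;   /* leave slot 4 for the dash */
        out[pos] = ALPHABET[secret % 30];
        secret /= 30;
    }
    out[4] = '-';
    out[9] = '\0';
}

/* Recover the secret; returns 0 if the code contains bad symbols. */
int decode_friend_code(const char code[10], uint64_t *secret) {
    uint64_t s = 0;
    for (int i = 7; i >= 0; i--) {
        int pos = i < 4 ? i : i + 1;
        const char *p = code[pos] ? strchr(ALPHABET, code[pos]) : NULL;
        if (!p) return 0;
        s = s * 30 + (uint64_t)(p - ALPHABET);
    }
    *secret = s;
    return 1;
}

int main(void) {
    char code[10];
    uint64_t secret = 123456789012ULL, back;  /* from a CSPRNG in reality */
    encode_friend_code(secret, code);
    printf("friend code: %s\n", code);
    if (decode_friend_code(code, &back) && back == secret)
        printf("round-trip ok\n");
    return 0;
}
```

The point isn't the encoding itself but the ceremony: the secret never touches the open network, so the social graph gets bootstrapped only by people who actually met.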
You don't need to tie an ID to a person to get positive confirmation of someone's poor behaviour. If someone crosses the line, parents can see it and escalate. In addition, what would happen to a child with abusive parents, who could then arbitrarily restrict and deny the child's freedom to communicate? I did not have this problem myself, but without free access to other minds and information I would have been duller. Does a large information dragnet really serve our collective interests, or are more precise tools needed?
> I think part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially, the real problem is discovery and the warding off of social outliers, in a way that doesn't put all responsibility on opaque algorithms or corporations.
This is actually a key consideration for the proposed implementation. The biggest issue for parents when restricting their children's online activity is that they simply don't understand the tools available for it.
By having a "child mode" iPhone, parents don't have to know any of that. They simply buy the iPhone Kids for their children and then get a plain iPhone for themselves.
If these restrictions were actually enforced by law as well, it would be very easy for teachers and other guardians to check whether a device is appropriate for the child using it.
From what I've seen, the bad effects don't necessarily come from free access to the internet alone, but from the fact that everyone around them in their social group has a video camera that can covertly record. They're all immature children, so you can't slip up even once without getting kid-cancelled, and as a result they go into a collective dissociative freeze response inside a self-imposed, emergent panopticon.
So if the teen phone turned into a restricted "call mom" device with no cameras, obvious neon-yellow fuck-you coloring, and a restricted set of apps, and police took away a full phone much like they take away cigs and beer, it might be enough to break the critical mass that creates this issue. Kids can have dedicated cameras for video club, use the family computer, have an Xbox or a Switch, and get whatever tech experience millennials had: the last generation without exponential increases in anxiety, depression, and sexlessness.
It's the combination of covert camera plus internet that's the key issue.
There are cache control instructions already. The reason they go no further than prefetch/invalidate hints is probably that exposing a fuller chip-level API for controlling the cache would overcomplicate designs and wouldn't yield a backwards-compatible, stable API. Treating the cache as RAM would also require a controller, which would then also need to receive instructions, or the CPU would suddenly have to manage the cache itself.
I can understand why they just decided to bake the cache algorithms into hardware, validate them, and be done with it. I'd love it if a hardware engineer or more well-read fellow could chime in.
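For concreteness, here's a minimal sketch of the hints that already exist on x86, using the GCC/Clang SSE intrinsics (the loop and the 64-element lookahead distance are just illustrative):

```c
#include <stddef.h>
#include <xmmintrin.h>  /* _mm_prefetch, _MM_HINT_T0 */
#include <emmintrin.h>  /* _mm_clflush (SSE2) */

/* Sum an array while issuing the cache hints discussed above. */
double sum_with_hints(const double *data, size_t n) {
    double acc = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* Prefetch hint: request the line ~64 elements ahead, into
         * all cache levels. The CPU is free to delay or ignore it. */
        if (i + 64 < n)
            _mm_prefetch((const char *)&data[i + 64], _MM_HINT_T0);
        acc += data[i];
    }
    /* Flush: write the first line back to memory and evict it, e.g.
     * because we know we won't touch it again soon. */
    _mm_clflush(data);
    return acc;
}
```

Note that even the flush only names an address: nothing in the ISA lets software pin a line or pick a placement, which is exactly the stable-API boundary described above.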
I've honestly wondered why people don't expect, install, or demand orange-tinted lights. Back when we had sodium vapour lamps, I was always under the impression that the orange wavelengths were specifically picked because our visual system is keenest under dusk-like lighting conditions. Why then do we shift so much towards cold, harsh lights? I don't think it's just the brightness that makes them unpleasant but also the wavelengths.
> Back when we had sodium vapour lamps, I was always under the impression that the orange wavelengths were specifically picked because our visual system is keenest under dusk-like lighting conditions.
The yellow from sodium vapor lamps was an artifact of the emission spectrum of excited sodium atoms (the sodium D lines at about 589 nm).
Sodium vapor lamps were used because, before the advent of LEDs, they were the most power-efficient option (lowest electrical energy consumption for a given light output). The yellow light was just a byproduct of "more efficient bulb".