Slippery slope only applies if there's actually a slippery slope to slide down. Nothing about this change indicates that Wikipedia will start charging the general public for access.
10x software engineers are mostly 10x problem solvers who happen to be programmers. The driving attitude is refusing to work hard to accomplish something of low significance, which is mostly done thanks to: finding clever solutions, reducing feedback loops when debugging or testing, gaining proper understanding of systems to avoid empirical tweaking, asking around quickly when getting stuck, and also cleverly navigating company politics in order to ship something in environments where it is hard.
I would say that, from the point of view of the kernel maintainers, that question is irrelevant, as they never agreed to take part in any research. Therefore, from their perspective, all the behaviour is genuinely malevolent regardless of the individual intentions of each UMN researcher.
I'm surprised it passed their IRB. Any research has to go through them, even if it's just for the IRB to confirm "No, this does not require a full review". Either the researchers here framed it in a way that suggested no damage was being done, or they relied on their IRB lacking the technical understanding to realize what was going on.
According to one of the researchers who co-signed a letter of concern over the issue, the Minnesota group also only received IRB approval retroactively, after said letter of concern [1].
I'd love to see what they submitted to their IRB to get the determination of no human subjects:
It had a high human component because it was humans making decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch. If the researchers even considered this possibility, I doubt the IRB would have approved this experimental protocol if laid out in those terms.
The only relevant question is:
"Will the investigator use ... information ... obtained through ... manipulations of those individuals or their environment for research purposes?"
which could be idly thought of as "I'm just sending an email, what's wrong with that? That's not manipulating their environment".
But I feel they're wrong.
https://grants.nih.gov/policy/humansubjects/hs-decision.htm would seem to agree that it's non-exempt (i.e. potentially problematic) human research if "there will be an interaction with subjects for the collection of ... data (including ... observation of behaviour)" and there's not a well-worn path (survey/public observation only/academic setting/subject agrees to study) with additional criteria.
Agreed: sending an email is certainly manipulating their environment when the action taken (or not taken) as a result has the potential for harm. Imagine an extreme example of an email death-threat: That is an undeniable harm, meaning email has such potential, so the IRB should have conducted a more thorough review.
Besides, all we have to do is look at the outcome: Outrage on the part of the organization targeted, and a ban by that organization that will limit the researcher's institution from conducting certain types of research.
That this human-level harm was the actual outcome means the experiment was a de facto experiment on human subjects.
I have to admit, I can completely understand how submitting source code patches to the linux kernel doesn't sound like human testing to the layman.
Not to excuse them at all, I think the results are entirely appropriate. What they're seeing is the immune system doing its job. Going easy on them just because they're a university would skew the results of the research, and we wouldn't want that.
Agreed: I can understand how the IRB overlooked this. The researchers don't get a pass though. And considering the actual harm done, the researchers could not have presented an appropriate explanation to their IRB.
One of the important rules you must agree to is that you cannot deceive anyone in any way, no matter how small, if you are going to claim that you are doing exempt research.
These researchers violated the rules of their IRB. Someone should contact their IRB and tell them.
This was (1) research with human subjects (2) where the human subjects were deceived, and (3) there was no informed consent!
If the IRB approved this as exempt and they had an accurate understanding of the experiment, it makes me question the IRB itself. Whether the researchers were dishonest with the IRB or the IRB approved this as exempt, it's outrageous.
Just so you know, you appear to have been shadowbanned. I'm not sure why, probably for having a new account and getting quickly downvoted in this thread. (Admittedly you come across slightly strong, but... not outside of what I think is reasonable, so I dunno what's going on.)
I do recommend participating more in other threads and a little less in this thread, where you're repeating pretty much the same point over and over.
Yeah, I don't think they can claim that human subjects weren't part of this when there is outrage on the part of the humans working at the targeted organization and a ban on the researchers' institution from doing any research in this area.
It does prevent anyone with a umn.edu email address, be it a student or professor, from submitting patches of _any kind,_ even if they're not part of research at all. A professor might genuinely just find a bug in the Linux kernel running on their machines, fix it, and be unable to submit it.
To be clear, I don't think what the kernel maintainers did is wrong; it's just sad that all past and future potentially genuine contributions to the kernel from the university have been caught in the crossfire.
I looked into it (https://old.reddit.com/r/linux/comments/mvd6zv/greg_khs_resp...). People from the University of Minnesota have 280 commits to the Linux kernel. Of those, 232 are from the three people directly implicated in this attack (that is, Aditya Pakki and the two authors of the paper), and the remaining 48 commits are from one individual who might not be directly involved.
The professor, or any students, can just use a non edu email address, right? It really doesn't seem like a big deal to me. It's not like they can personally ban anyone who's been to that campus, just the edu email address.
No, that would get them around an automatic filter, but the ban was on people from the university, not just people using uni email addresses.
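To illustrate the distinction: an automated filter can only ever match on the address a patch claims to come from, not on the identity of the person sending it. A minimal sketch (hypothetical, not the kernel's actual tooling; the domain list and function names are invented for illustration):

```python
# Hypothetical domain-based filter. It catches umn.edu (and subdomain)
# addresses, but the same person submitting from a personal account
# sails straight through -- which is why the ban is on people, not
# merely on an email domain.

BANNED_DOMAINS = {"umn.edu"}  # assumed example ban list

def is_blocked(submitter_email: str) -> bool:
    """Return True if the address belongs to a banned domain."""
    domain = submitter_email.rsplit("@", 1)[-1].lower()
    # Match the domain itself or any subdomain (e.g. cs.umn.edu).
    return any(domain == d or domain.endswith("." + d) for d in BANNED_DOMAINS)

print(is_blocked("student@cs.umn.edu"))    # True: caught by the filter
print(is_blocked("same.person@gmail.com")) # False: filter can't see who they are
```

The filter is only a convenience for enforcement; the policy itself ("no one from the university") has to be applied by humans.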
I'm not sure how the law works in such cases, but surely the IRB would eventually have to realize that an explicit denouncement by the victims means that the "research" cannot go ahead.
Which is completely fine, IMO, because, as pointed out already, the university's IRB has utterly failed here. There is no way this sort of "research" could have passed an ethics review.
- Human subjects
- Intentionally misleading/misrepresenting things, potential for a lot of damage, given how widespread Linux is
- No informed consent at all!
Sorry but one cannot use unsuspecting people as guinea pigs for research, even if it is someone from a reputable institution.
I think explicitly stating that no one from the university is allowed to submit patches includes disallowing them from submitting using personal/spoof addresses.
Sure they can only automatically ban the .edu address, but it would be pretty meaningless to just ban the university email host, but be ok with the same people submitting patches from personal accounts.
I would also explicitly ban every person involved with this "research" and add their names to a hypothetical ban list.
As a Minnesota U employee/student you cannot submit officially from campus or using the minn. u domain.
As Joe Blow at home who happens to go to school or work there you could submit even if you were part of the research team. Because you are not representing the university. The university is banned.
It would be hard to show this wasn't genuine behaviour but rather a malicious attempt to infect the Linux kernel. That still doesn't give them a pass, though. Academia is full of copycat "scholars"; kernel maintainers would end up wasting significant chunks of their time fending off this type of "research".
The kernel maintainers don't need to show or prove anything, or owe anyone an explanation. The University's staff/students are banned, and their work will be undone within a few days.
The reputational damage will be lasting, both for the researchers, and for UMN.
One could probably write a paper about universities doing evil, stupid things. In any case, evil actions are evil regardless of context: research 100 years ago was often intentionally harmful without being questioned, whereas today ethics review should filter which research gets done and which does not.
And because web app publishers have full control. If I have a web application, I can update it right now and it will be updated for ALL my users at the same time. No store policies bullshit, no need to somehow notify users that a new version has to be downloaded and installed, no fragmentation of your user base because half of your customers stay at an old version due to whatever reasons out of your reach.
But you don’t get full control, and that’s why people go native. Because platforms rightly distrust web apps. This will likely be true for a long time as security becomes a bigger deal everyday.
Most "modern" platforms distrust native apps too. (Android, iOS and increasingly macOS, Windows and even Linux)
This is one of the main reasons that I often prefer web apps. (My preference obviously depends on the use case.) I would much rather run a random company's messaging/video chat/whatever app inside my browser with strong sandboxing, because browsers have truly accepted that applications should be considered malicious by default.
This. I cannot stress how important this is for modern app development. Just like Deno doesn't trust any script that is passed to it, no user should trust an app just because it's installed directly in the system (and I can't believe I lived with that mindset as well). Developers should grow cautious of any tool and library they install, and users should inspect more often what kind of access they're granting to anything they browse on the web, because at least there they can block it.
But in those more modern platforms, web apps still have a big disparity vs native in terms of privilege, and it's not obvious why the gap should shrink; after all, apps that run native have passed their internal bureaucracy process and are thus more trustworthy.
You're talking about the relationship between publisher and platform, and how there are web wins in the direction of managing and updating apps.
But those wins must also be balanced with the loss of control due to those same platforms distrusting your app. Now your app, for all the wins it's going to achieve on maintenance and updating, is also going to take hits from its inability to do things that other people take for granted on native apps.
They are part of the same story of balancing pros and cons.
Of course. Everybody here is aware of the tradeoff of using web technologies versus native ones. The fact that I emphasize what I see as the strongest benefits doesn't mean I'm not aware or ignore the tradeoff.
As someone who works with a lot of large businesses, I can say they vastly prefer SaaS delivered over the web these days. Not only does it mean there's no infrastructure to manage, there are also no deployment / upgrade headaches if you need to roll a client out to a few thousand corporate users.
Web apps are also a hell of a lot easier to secure these days (as long as you trust / validate the platform’s security, which should be in the contract anyway). Which is also why a lot of “native” apps are pretty much web apps with some wrappers around platform-native hardware integration.