Discrimination is not bad in itself. A scientific approach is to apply discrimination to a biased sample in order to compensate for the bias. The difficulty is how to compensate fairly. And when you do compensate fairly, some people will say you compensated too much and others will say you compensated too little.
When I look at the other opinions and values of the majority of people who say that DEI compensates unfairly and too much, I see that either 1. they refuse to even consider that maybe there was a bias, or 2. they also defend politically marked policies. These two things make me think the compensation is not really too big, and that it is just people with different values saying they disagree. If it were indeed unfairly balanced, there would be more people who are "pro-DEI on principle" reacting to the dysfunction. (Not saying they don't exist, but they are just too few.)
If they actually wanted to correct the bias, they would push for standardized testing. Instead, they parasitically push for their own positions and salaries, and for putting on good theater. I see the difference and I am not amused.
So what you are saying is that you see a bias against males, or a bias in the behavior of "pro-DEI" people.
But where is your standardized testing to prove that? Does it mean that you are not interested in finding the correct correction, but rather in pushing for a situation that profits you (either because it is directly advantageous, or because it says that "your side" is right and "their side" is wrong)?
This is what I don't understand. Saying "bias = 0" is as much a critical decision as saying "bias = 17". But it looks like people can say both "they say there is a bias of 17, but they don't test it enough in my eyes, so they are bad; we should act as if the bias were 0" and "I don't do any testing, but that's okay, because my criticism of them doesn't apply to me".
(Edit: also, I think your "standardized testing" argument is just in bad faith. A lot of DEI policies come with methods to measure the impact and processes to adapt the correction level based on how things evolve. You may not like these methods, but these methods simply would not exist if they were not interested in finding the correct correction. It feels like an easy argument: "they do stuff, but let's just decide it doesn't count unless I arbitrarily decide it does".)
I think that you are right, but which is worse (illustrative numbers):
- 70% of men being allowed to get $20, and 30% being excluded from this opportunity and getting only $5
- 100% of women being excluded from the opportunity to get $20 and getting only $12
Sure, an excluded woman gets more than an excluded man, but it is also very unfair that men have a 70% chance to "make it" while women have a 0% chance to "make it".
Not saying one is worse than the other (these are illustrative numbers anyway), but just to illustrate that 1. in both cases, looking at only one metric is not enough, and 2. in the end, the answer is not really "objective" or "mathematical", and two people can reach different conclusions based on their values.
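To make the "one metric is not enough" point concrete, here is a minimal sketch of the illustrative numbers above (all values are hypothetical, taken from the example, not from any real data):

```python
# Each group has a percentage with access to the $20 opportunity and a
# fallback payoff for everyone excluded from it.

def summarize(pct_access, pay_in, pay_out):
    """Return (average payoff, payoff of the excluded, % with access)."""
    average = (pct_access * pay_in + (100 - pct_access) * pay_out) / 100
    return average, pay_out, pct_access

men = summarize(70, 20, 5)      # 70% get $20, 30% get $5
women = summarize(0, 20, 12)    # 0% get $20, 100% get $12

print(men)    # (15.5, 5, 70)  -> higher average, but a worse floor
print(women)  # (12.0, 12, 0)  -> lower average, better floor, zero access
```

Depending on which of the three numbers you weight most (average outcome, worst-case outcome, or access to opportunity), either group can look like the worse-off one, which is exactly the values judgment described above.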
No one is going to take you seriously if your argument is the very reductive "because they choose to" (being tongue-in-cheek here; don't take it seriously).
It is very clear that public image has a huge impact on what people choose. For example, people who consider themselves introverts choose, in the majority, to avoid fields that have a strong extrovert vibe. Similarly, people will tend not to choose fields that "give a vibe" they don't feel they belong to. So, if there is an initial bias toward men, the fact that some people don't choose the field is in no way proof that there is no bias.
I agree that the 10% number is not the best, but the "corrected" number, where you take samples in the same job and position, makes the same mistake. In fact, there are arguments that in these cases you have a selection bias (some of the men in the field see it as their calling, but some of the men are just doing it as a job without being overly passionate, while women who are not overly passionate just don't choose this job at all), and that with this methodology, women should overperform because of that gap. The "real" number is probably in between.
A first comment said "without saying why". The second comment just says that this is blatantly not true, and that the rationale presented has since been confirmed as a very accurate prediction.
It is strange to pretend that there is no cultural bias and then give an example that is usually explained by Asians culturally valuing education more than white British people do.
How else would you explain that Asians outperform white British, knowing that the idea that Asians and white British are genetically different enough to explain this has been scientifically debunked, and that adopted Asians don't show the same pattern as non-adopted Asians?
(And, yes, of course the SAT is highly predictive of college performance; isn't that the point: people who get better training get better college performance while not being "smarter", just "better trained".)
I’m talking about the supposed cultural bias of the test itself, not cultural differences among test takers. A culturally biased test is one that requires familiarity with a particular culture, generally that of the people who wrote the test. If Asians do better on a test developed by British people, that suggests that the test itself is not culturally biased.
Your argument would have been more convincing if the Asians were getting the same results as British people.
But as soon as the Asians do better, the whole comparison becomes meaningless. It means that Asians and British score differently. Maybe Asians would normally score 14 and British score 10, but the Asians score only 12 because the test is culturally biased.
(Sure, we expect that the difference in a "normal situation" should be relatively small, but the samples are defined as biased, so you cannot really rely on them without making unproven assumptions.)
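The "14 vs 10 vs 12" point above can be made concrete with a tiny sketch. All numbers are hypothetical, taken from the comment itself; the point is only that a single observed score mixes an unknown "true" level with an unknown bias penalty, so the two cannot be separated:

```python
# Hypothetical decomposition: observed score = true level - bias penalty.
true_score_asian = 14     # assumption: what they "should" score
true_score_british = 10   # assumption
bias_penalty = 2          # assumption: cost of the test's cultural bias

observed_asian = true_score_asian - bias_penalty    # 12
observed_british = true_score_british               # 10 (test in their own culture)

# The Asians still outscore the British (12 > 10) even though, in this
# scenario, the test IS biased against them. So "group X scores higher"
# does not by itself prove "the test is unbiased against group X".
assert observed_asian > observed_british
```

In other words, the higher score is consistent with both "no bias" and "bias smaller than the underlying gap", which is why the comparison alone cannot settle the question.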
There is no reason to expect that the test results would be the same across all demographic groups, and in fact, everything we know about psychometrics (i.e. the science of mental testing) suggests that we should expect exactly the opposite. See e.g. "Intelligence: Knowns and unknowns", which described the consensus position of the American Psychological Association as of 1995:
> The cause of [test achievement] differential is not known; it is apparently not due to any simple form of bias in the content or administration of the tests themselves.
Not sure what your point is; the "test achievement" mentioned in the document refers to totally different tests than the ones we were talking about.
Also, on pure logic alone, I don't think the document shows what you think it shows. The document you provide (which is 30 years old, so from this alone we should not assume it reflects today's consensus) explains that the difference is not understood, and that there is no _obvious_ answer, whether from biology, from group culture, or from bias in the tests. In other words: the difference is due to something _not obvious_, for example (but not limited to, of course; it's just an example) a _not obvious_ form of bias.
It misses the point: if you split the universe into two identical parallel universes, take the same individual, and train them 2 hours per week in one universe and 8 hours per week in the other, do you really think, whatever the test is, that the second one will not perform better?
This is the point of training: the more training you have access to, the better you do. If that were not the case, then the notion of school itself, as a way of training people to think for themselves, would not make any sense.
And that is just training. Even with the same number of class hours, kids who don't have to worry about taking care of their siblings, doing the house chores, or even having access to decent conditions to relax will get higher scores even if they are in fact less smart.
Yes, more training will invariably give better outcomes for a given individual. But some people are just incredibly more talented than others due to genetics alone.
If you want to build an elite sports team, I don't think you want to artificially include less athletic kids just because they had to work harder.
I think the question is why we need elite higher education at all. Maybe we don't. In my view, we want to funnel the brightest people there and make sure they get access to the best resources.
That's the point: someone who reaches a high grade with a lot of training can be LESS GOOD than someone who reaches a lower grade with little training.
You are saying that you don't want less athletic kids being accepted artificially. That's exactly the point: the score does not correspond to the talent, so you have to correct for it. To compare two people on their merit when they have had different amounts of training, you need to calibrate the score into a variable that corresponds to merit.
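A minimal sketch of that calibration idea. The linear model and every number in it are hypothetical assumptions, just to make the argument concrete: estimate talent by subtracting an assumed training effect from the raw score.

```python
# Assumption: each weekly hour of training is worth a fixed number of
# score points (a crude linear model, purely for illustration).
GAIN_PER_HOUR = 0.5

def adjusted_score(raw_score, training_hours):
    """Rough estimate of underlying talent, correcting for training received."""
    return raw_score - GAIN_PER_HOUR * training_hours

# A heavily trained candidate with the higher raw score...
a = adjusted_score(raw_score=90, training_hours=40)   # 90 - 20.0 = 70.0
# ...versus a lightly trained candidate with the lower raw score.
b = adjusted_score(raw_score=80, training_hours=5)    # 80 - 2.5 = 77.5

# After calibration, the lower raw score reflects MORE underlying talent.
assert b > a
```

The real difficulty, of course, is that the correction factor is not known and has to be estimated, which is where the disagreements in this thread come from.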
Probably said somewhere else, but "green" originally means "does not create toxic waste when used". Nuclear is nice and good for the environment, but it does fit the description "produces waste", even if this waste can be considered small or can somehow be treated.
I'm surprised by what you say; it is not at all my experience. Are you sure you are not over-interpreting what your friend said, or that your friend's experience was not unusual?
1) People at CERN publish papers in "normal" physics journals, which do the usual peer review. Several of the articles that I've peer-reviewed myself were not from my own experiment. There is, of course, also internal review within each collaboration, but it exists to improve quality, and it is totally natural and obvious if you want to have a collaboration at all (by definition, a collaboration is a place where people read each other's work and give each other feedback). But that is totally different from "the work is only reviewed by the collaboration".
2) I've worked ~5 years in one experiment and ~5 years in another, and I did not notice any difference in terminology. In both experiments, I very rapidly met and learned the names of people from other experiments working on similar subjects. I don't know of any workshop or conference where the invited scientists are not from different experiments. During these events, there are a lot of exchanges.
3) What is true, and is maybe the reason for your misunderstanding, is that you are strongly advised not to share non-cross-checked material outside of the collaboration. The goal is to avoid biasing the independent experiments: if you notice a strange phenomenon that will later turn out to be a statistical fluctuation, or if you use a new methodology that will later turn out to have unnoticed systematic biases, and you mention it to the other experiment, you will "contaminate" them: they may refocus their research or adopt the flawed methodology. But this applies only to non-cross-checked material, and it does not make any sense to pretend it has a negative impact (plenty of scientists, in collaborations or not, throughout history, prefer not to share their preliminary results before they have acquired good confidence that what they saw is reliable).
4) Do you have examples of things that one could not understand even though the work was done down the hall? I don't recall "not being able to understand" (the point of a publication is to explain, so people care about making it understandable). I do recall "harder to understand", but that was often with people from my own collaboration, and the reason was that they needed to use mathematical tools I did not know, and there was not really any other way.
I'm sure there are cases where two groups end up diverging and it makes the collaboration more challenging. But I really doubt it is anything but exceptional, and everyone in the collaborations will try to mitigate it.
Your comment makes me wonder to what extent outsiders to CERN hold plenty of crazy myths totally disconnected from reality. I guess it is a good example of why people like Hossenfelder are a problem: they feed on these myths and cultivate them.
Even if we are generous and accept that GU was more criticized than other bullshit papers, the claim still requires proving that the difference in treatment is due to some real bias and not a simple fluctuation.
"I saw 2 people being judged by a judge, and it turned out they were both guilty of the same crime, but the first one got a lighter sentence than the second. The first one had the same second letter in their family name as the judge, so that's proof that judges are favorably biased toward people who share their second letter."
But then, the problem is that "their own bullshit papers" is doing very heavy lifting here. Hossenfelder's point is that String Theory is as bad as GU. But is it really the case? Hossenfelder keeps saying it's true, but a lot of people are not convinced by her arguments and provide convincing reasons for not being convinced. The same kinds of reasons don't apply to GU, so that already shows that GU and String Theory are not on the same level. Even if String Theory has some flaws or is misguided in some aspects, it does not mean that the level of rejection in an unbiased world would obviously be the same as for any other bullshit theory.
Another aspect that is unfair is that a lot of "bullshit theories within the sector" die without any publicity. They stop rapidly because, from within the sector, it is more difficult to surface them without being criticized early. For example, you can have 100 bullshit theories "within the sector", of which 3 survive and surface without being criticized as much as GU, while the other 97 were criticized "as much as" GU at their beginning, which stopped them from growing. Then you can just point at one of the 3 and say "look, there is a bullshit theory there; it's proof that scientists never confront bullshit theories when they come from within". Without being able to properly quantify how GU-like theories are treated when they are "within", it is just impossible to conclude "when it is from within, it is less criticized".
I think I get your point. Unfortunately I'm in no way able to speak to string theory other than what I know from pop culture, so it's way out of my league. I only commented on this thread because after reading the blog and having watched the video, it felt that I got something else from the video. Perhaps being "in" you get other nuances. That makes sense.
"she is essentially equating Weinstein's theory to all other theoretical physics"
Sure, it is an extrapolation to say "all other".
But this sentence still makes the point of showing how unfair and unscientific the basis of Hossenfelder's argument is. Even if you don't know String Theory, you should stop and think "ok, but how can she claim that conclusion is valid" (in my previous comment, I provided 3 elements she overlooked: the fact that GU and String Theory may not be the same level of bullshit-ness, the fact that 2 different theories receiving different treatment can be explained by reasons other than a bias for the insider, and the fact that she has no access to the rate of criticism of GU-like theories that come from the inside).
Even if you don't know String Theory, you should ask "did she even consider that maybe there are differences in the level of bullshit-ness that make some people criticize GU and not String Theory".
Did they account for the "usefulness" of the code produced?
In my company, one problem is that developers produce internal tools that do not correspond to what other employees need. It is even worse when developers are distant from the users and don't socialize with them.
"Creativity" can increase, but that does not mean it is a good thing if they invent things that are not what people need.
Not sure I understand. My point is that "having devs be more creative" is not always a good thing. If the dev is creating more inventive code that solves what they incorrectly think is the problem while not solving the real problem, then it is a waste of time and money.
I'm sure that, obviously, the dev is convinced that their inventiveness is genius and solves the problem. But we need someone else, impartial, to estimate whether the amount of code is worth it.