# Optimal discrimination II

In the last post we argued that labor market discrimination in a functioning market is quite unlikely. But call-back rates for identical applications differ consistently by race or ethnicity. Even if we allow for ability differences, this seems difficult to explain without discrimination. Of course, if 50% of all companies hire in a discriminatory fashion and the other 50% are completely fair, the end result will likely be minimal differences in wages or unemployment after controlling for ability, while call-back rates can still differ quite a bit. This might well be part of the story.

However, even if all companies hire exactly according to the expected job performance of the applicants, call-back rates for applicants of different ethnicities will still differ, even with identical resumes. The main reason for this is regression to different means. An applicant's qualification can be seen as one measurement of his ability, and his actual job performance as another. Because these two measurements correlate only imperfectly, the second is expected to regress toward the population mean.

If the population mean is lower for one ethnic group than for another, the expected regression to the lower mean leads to a lower expected job performance and therefore fewer call-backs.
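This regression step fits in one line. Here is a minimal sketch; the correlation of 0.5 and the one-standard-deviation gap between the group means are illustrative assumptions, not estimates:

```python
def expected_performance(qualification, group_mean, r=0.5):
    """Regression to the mean: with correlation r between qualification
    and job performance (both standardized to SD 1), the best guess for
    performance pulls the qualification score toward the group mean."""
    return group_mean + r * (qualification - group_mean)

# Two applicants with identical resumes (qualification = 1.0), but from
# groups whose population means differ by one standard deviation:
print(expected_performance(1.0, group_mean=0.0))   # 0.5
print(expected_performance(1.0, group_mean=-1.0))  # 0.0
```

Identical paper qualifications, yet the applicant from the lower-mean group gets a prediction half a standard deviation lower, so fewer call-backs follow from unbiased prediction alone.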

Yes, that means that for every level of ability a member of a lower performing group will be expected to perform worse than equally able people of a higher performing group. This sure sounds like discrimination, doesn’t it? If for every level of ability one group is underestimated compared to the other, obviously the whole group has to be underestimated, right?

Well, actually not. This is an instance of the famous Simpson's paradox, where a pattern holds in every subgroup yet fails for the population as a whole. The easiest way to see this is to realize that it doesn't actually matter which group you belong to if you look at people x standard deviations out from their group's mean. An Asian-American who is one standard deviation above the Asian-American mean in qualification will still be expected to be only 0.5 standard deviations above the Asian-American mean in job performance (if we assume a correlation of 0.5 between qualification and job performance). So an Asian-American who is one standard deviation above his group's mean in actual ability is underestimated to the same degree as an African-American who is one standard deviation above his group's mean. Summed over the respective bell curves, the underestimation both groups suffer is exactly the same.
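A quick Monte Carlo sketch makes this concrete. The model and all its numbers are my own illustrative assumptions: a latent ability, measured by two equally noisy scores (qualification and performance) that correlate at 0.5, with group means one SD apart:

```python
import math
import random

def prediction_errors(group_mean, r=0.5, n=100_000, seed=1):
    """Simulate one group. Latent ability has variance r, so the two
    noisy unit-variance measures of it (qualification and performance)
    correlate at r. Predictions regress qualification toward the group
    mean. Returns (overall bias, bias among the group's own top talent)."""
    rng = random.Random(seed)
    sd_ability, sd_noise = math.sqrt(r), math.sqrt(1 - r)
    errors, top_errors = [], []
    for _ in range(n):
        ability = rng.gauss(group_mean, sd_ability)
        qual = ability + rng.gauss(0, sd_noise)
        perf = ability + rng.gauss(0, sd_noise)
        err = perf - (group_mean + r * (qual - group_mean))
        errors.append(err)
        if ability > group_mean + sd_ability:  # one ability-SD above own mean
            top_errors.append(err)
    return sum(errors) / len(errors), sum(top_errors) / len(top_errors)

bias_a, top_a = prediction_errors(group_mean=0.0)
bias_b, top_b = prediction_errors(group_mean=-1.0, seed=2)
print(bias_a, bias_b)  # both near zero: no bias at the group level
print(top_a, top_b)    # both clearly positive, and about equal
```

Within each group the able are underestimated and the less able overestimated by the same amount, so each group's average prediction error is zero, and the underestimation of the top talent is identical across groups.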

The second statistical phenomenon that might play a role concerns the tails. If you set a cutoff above which you want to interview candidates, the people above the cutoff from the higher-performing group will on average be more capable than the people above the cutoff from the lower-performing group. This is because a bell curve falls off ever more steeply the farther out into the tail you go, so the lower-mean group's above-cutoff applicants are bunched more tightly just above the cutoff.
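This can be checked with the standard truncated-normal formula E[X | X > c] = μ + φ(c − μ)/(1 − Φ(c − μ)) for unit variance; the group means below are made up for illustration:

```python
import math

def pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def mean_above_cutoff(group_mean, cutoff):
    """E[X | X > cutoff] for X ~ N(group_mean, 1), via the standard
    truncated-normal (inverse Mills ratio) formula."""
    z = cutoff - group_mean
    return group_mean + pdf(z) / (1 - cdf(z))

# Same cutoff, one group's mean a full SD below the other's:
print(round(mean_above_cutoff(0.0, 1.0), 3))   # 1.525
print(round(mean_above_cutoff(-1.0, 1.0), 3))  # 1.373
```

With an identical cutoff, the above-cutoff applicants from the lower-mean group average noticeably less, even though every one of them cleared the same bar.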

So to get the same average ability, and perhaps the same success rate, of interviewed candidates, you might want to use a higher cutoff for applicants from lower-performing groups. I am not suggesting that HR departments know this explicitly. I am just suggesting that if you average over enough companies, the practices followed might be close to statistically optimal. Otherwise there would be arbitrage opportunities, and those tend to be found and exploited (and once they are, they vanish).
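As an illustration with made-up numbers, one can bisect for the cutoff that equalizes the expected ability of above-cutoff applicants across two groups whose means are one SD apart:

```python
import math

def pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def mean_above(mu, c):
    """E[X | X > c] for X ~ N(mu, 1)."""
    z = c - mu
    return mu + pdf(z) / (1 - cdf(z))

def equalizing_cutoff(mu_low, mu_high, cutoff_high, lo=-5.0, hi=5.0):
    """Bisect for the cutoff on the lower-mean group that makes its
    above-cutoff average match the higher-mean group's. mean_above is
    increasing in the cutoff, so bisection converges."""
    target = mean_above(mu_high, cutoff_high)
    for _ in range(60):
        mid = (lo + hi) / 2
        if mean_above(mu_low, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = equalizing_cutoff(mu_low=-1.0, mu_high=0.0, cutoff_high=1.0)
print(round(c, 2))  # somewhat above 1.0
```

The equalizing cutoff for the lower-mean group comes out a bit above the other group's cutoff, which is exactly the kind of de facto double standard the argument predicts from statistically optimal hiring.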