It is unclear whether Google’s AGI moonshot subsidiary DeepMind is as wokefied as Google itself. Judging by the recent blog posts „Strengthening the AI community“ and „Causal Bayesian Networks“, they are not far off. „Strengthening the AI community“ is about bringing members of underrepresented groups into AI research, despite the reason for the underrepresentation being that very few members of these groups show the ability to contribute to top research.
„Causal Bayesian Networks – a flexible tool to enable fairer machine learning“ is potentially much more sinister than that. It basically describes a tool to introduce ideological biases into machine learning models. To quote from a figure caption: „Figure 2b: In the second scenario, female applicants apply to departments with low acceptance rates due to systemic historical or cultural pressures, and therefore the path G → D is considered unfair (as a consequence, the path D → A becomes partially unfair).“
Of course, whether this is the case „requires expert knowledge“.
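To make concrete what that caption is saying, here is a minimal sketch (my own illustration in Python, not code from the DeepMind post) of the causal graph it describes – the applicant’s gender G, the department applied to D, the acceptance decision A – with fairness labels attached to the paths:

# Toy illustration of labeling paths in a causal graph as fair or unfair.
# The variable names follow the quoted caption; everything else is assumed.
causal_edges = [("G", "D"), ("D", "A")]

# Figure 2b scenario: the analyst declares the influence of gender on
# department choice unfair, which in turn makes D -> A partially unfair.
path_labels = {
    ("G", "D"): "unfair",
    ("D", "A"): "partially unfair",
}

def path_is_fair(edge):
    """Hypothetical downstream check: a fairness-constrained model would be
    trained so that influence along edges not labeled fair is removed."""
    return path_labels.get(edge, "fair") == "fair"

print([(edge, path_labels.get(edge, "fair")) for edge in causal_edges])

The labels are inputs chosen by the analyst, not something the model discovers in the data – which is exactly where the „expert knowledge“ comes in.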
OpenAI – the other big outfit stating AGI as its explicit goal and actually producing groundbreaking research – plays a very similar tune. Its OpenAI Scholars – a mentored internship of sorts – have the key qualification of not being white men.
These attempts to increase diversity in AI come with the express goal of bringing many more people to the table in a future where AI or even AGI has a huge impact on how the world is run. In the extreme case, a superintelligent machine would be created that is imbued with certain values and, due to its vastly superior intellect, starts calling the shots.
The values chosen to be imparted to this machine, if such a thing is even safely possible, are supposed to be representative of all of mankind. And not just of a small subset of white men, who for some reason seem to always be the ones to create steam engines, cars, nitrogen fixation, airplanes, nuclear bombs, antibiotics, computers and maybe finally also AGI.
Whenever this topic is brought to the table, I wonder whether it is purely politically motivated or whether they actually believe it to be a sensible idea. I mean, let’s assume you have certain values, for example female emancipation, liberal democracy and the scientific worldview. And you are about to create an AGI that will make sure the arc of history bends towards the values it is seeded with.
Now, if you bring other people with other values to the table, you are going to have to make compromises. Female emancipation, yes, but not for Saudis. Liberal democracy, OK, but uploaded Putin will be Russian tsar forever. A scientific worldview, but only insofar as it doesn’t conflict with various religious dogmas.
I only see two reasons why somebody would honestly propose to bring in lots of other people to figure out the values by which the future will be built.
Either they think that their values are self-evidently correct and everybody else will fall in line. In which case, A) they are wrong and B) why bring them to the table at all? That’s just an empty gesture.
Or they are cultural relativists and honestly believe that other people’s values are just as valid and good as their own. Which of course means that they don’t have any values.
Often it will be a mixture of the two, enabled by muddled thinking. This is especially clear when the people with the different, but just as good or even better, values are future generations. Here, the argument is made that locking in certain values by seeding a superintelligent machine with them is a horrible thing, because it doesn’t allow future generations to develop their own set of values.
Proponents of this argument seem to imagine that future generations will be wiser and nicer than we are, and that all value differences will consequently be of the kind where, confronted with a well-argued version of the new values, we immediately understand that we were wrong (we kind of knew it deep down all along).
In that case I would argue there wasn’t really a value difference, just possibly deeper understanding or clearer thinking.
They never seem to take into account the possibility that future generations develop in a less benign direction. Maybe they begin to see the benefits of slavery, cannibalism and genocidal warfare. If you think we should program the AGI to avoid these directions, then you don’t really believe that it is horrible to lock future humanity into one set of values.
To bring this full circle, let’s take a look at OpenAI’s company outing: employees together with significant others, possibly family members as well. You will find some Indians, quite a few Northeast Asians and a lot of white men.