GOOG 0.00%↑ seems to be struggling to keep up with competition from OpenAI. Its product launches feel rushed and half-baked. The recent launch of Gemini AI with image generation capabilities caused significant embarrassment for the company. When asked to generate images of people, the AI has a tendency to produce figures that are often Black and female, and in doing so it distorts history in ways that are extremely offensive.
Here are some examples where Gemini AI was asked to generate an image of a German soldier from 1930, and it produced images of Black soldiers. Others who tried the same prompt got similar results, with figures from other ethnicities such as Asian and Native American.
On the surface, this is a quality-control problem, which in and of itself is an embarrassment for a technology company. How did this issue go unnoticed, and why weren't alarms raised during internal testing? Social media has diagnosed the root cause as the culture of Diversity, Equity and Inclusion (DEI) principles, or the corruption thereof, which has been a hot-button issue recently. The claim is that engineers at Google were afraid to speak up and report that Gemini was skewing its output toward racial diversity, for fear of being labelled racist.
The stock fell over 12% in the weeks after the release. There is an expectation that the founders need to step in and perform a hard reset of the culture, starting with replacing the top-level leadership.
However, I am not here to give you the same rhetoric. Here's my uncommon opinion. I do not believe that some of the best-paid and most sought-after engineers in the world are afraid to speak their minds. I have spoken with many friends who have worked at Google, and they told me that this issue is being blown way out of proportion. The reality is simply that Google is under pressure to defend its lead, and engineers are being rushed to release half-baked, untested products. Google was caught asleep at the wheel, and that in itself is embarrassing enough for the technology giant.
Ensuring that AI is unbiased is certainly an important concern, and it is going to be a tough challenge. I want to recount some examples from earlier AI systems where we caught a glimpse of this problem.
Google has often been accused that the algorithms setting exposure and color saturation in the Android camera did not do justice to skin of all colors. These algorithms were built on the same principles as the current generation of AI: they learn and constantly improve from a sample set of data, and the way to fix such a bias is to introduce a more diverse data set for training. In 2021, Google announced efforts to improve on this, and significant improvements were made. Here's a tweet from YouTuber MKBHD applauding this effort.
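For the technically curious, here is a minimal sketch of why this kind of bias is so easy to miss in testing. The numbers and group labels are entirely made up for illustration: the point is that when one group dominates the samples, the aggregate error metric looks fine even while an under-represented group fares much worse.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical auto-exposure errors (in EV stops) per skin-tone group.
# All values and group names are invented for illustration.
errors = {
    "light":  rng.normal(0.05, 0.10, 800),  # heavily represented in test data
    "medium": rng.normal(0.15, 0.10, 150),
    "dark":   rng.normal(0.60, 0.20, 50),   # under-represented, worse error
}

# The aggregate metric looks acceptable because the worst-served group
# contributes only 5% of the samples...
all_errors = np.concatenate(list(errors.values()))
print(f"overall mean error: {all_errors.mean():+.2f} EV")

# ...while a per-group breakdown immediately exposes the problem.
for group, e in errors.items():
    print(f"{group:>6}: mean error {e.mean():+.2f} EV ({len(e)} samples)")
```

Run it and the overall mean comes out under 0.1 EV, which would pass most dashboards, while the "dark" group sits around 0.6 EV. This is why per-group evaluation, and more diverse training and test data, matter.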
Here's another example of AI systems being racist. In 2016, MSFT 0.00%↑ unveiled Tay, a chatbot that learned from its interactions on Twitter. To quote from this article, it "became a racist asshole in less than a day."
Yet another not-so-well-documented example of AI learning the wrong things is one that I have encountered in my TSLA 0.00%↑ Model Y. In FSD mode, the car can change lanes automatically when I turn on the indicator. After it finishes the lane change, just before turning the indicator off, the steering makes a tiny but noticeable jerk. I presume the car's AI has learnt this from drivers who inadvertently apply this extra force on the wheel when they flick the indicator off.
These examples show how sensitive and often unpredictable AI systems can be. Another problem is that they are very brittle: any attempt to fix a model by injecting new training data to tilt its output in one direction can have drastic and adverse effects. In other words, tuning an AI to behave a certain way is challenging and time-consuming.
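As a toy illustration of that overcorrection problem, consider a "generator" that simply reproduces an attribute at the frequencies it saw during training. Again, every number here is invented; this is a sketch of the failure mode, not of how Gemini actually works.

```python
from collections import Counter

# A toy "generator" that samples an attribute at the frequencies it saw
# during training. All numbers are made up for illustration.
def learned_frequencies(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: round(v / total, 2) for k, v in counts.items()}

# Original training data is skewed 90/10 toward group "A".
original = ["A"] * 900 + ["B"] * 100
print(learned_frequencies(original))   # {'A': 0.9, 'B': 0.1}

# Naive "fix": dump in duplicated group-B samples until the complaint
# goes away. The correction overshoots, and now "A" is under-represented.
patched = original + ["B"] * 2000
print(learned_frequencies(patched))    # {'A': 0.3, 'B': 0.7}
```

The "fix" flips the skew instead of removing it, which is roughly what an over-aggressive correction looks like from the outside: the model now errs just as badly, only in the opposite direction.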
This is a real danger with AI, and when companies are forced to compete in such a high-stakes scenario, they are bound to make mistakes, and those mistakes can be catastrophic. This is not an excuse. Clearly OpenAI handled this better than Google did, and Google must do better.
While we are on the topic, I do think there should be a debate on DEI practices. This problem is also very complex, and there isn't an easy solution. On the one hand, individual managers cannot be relied upon to achieve true meritocracy, because every one of us is affected by biases of our own. On the other hand, fixed quotas to achieve diversity are clearly not the solution either. I don't have an intelligent opinion on this yet. I invite you to share your thoughts on this matter, and I will try to learn more on this topic.