Main points:
- How AI might make you a slave
- How propaganda machines can use AI chatbots to teach you to ruin your own country
- AI already amplifies biases, and deliberate propaganda can make it even worse
Context
Bias itself is a big problem. It is deeply rooted in human history and is, at its core, a psychological shortcut for coping with the complexity of the world (a bug that, by the way, helped us evolve).
Imagine all the history-scale data we have as humans, and how bad that data is: for most of history, most people were slaves, and most of the record was written by the winners. Then we train AI on that data. Should we expect AI to teach us, in return, to keep our own slaves or to live like slaves?
Google was trying to fight the most problematic part of AI. The thing is, AI multiplies biases: modeling and training nuances make them even worse, which is why we have all these funny stories about OpenAI models knowing only white engineers, or picturing women only as servants.
I think society is just realizing how hard the problem is to tackle, and that even Google can't get it right: they essentially tried to even out the main types of bias, which led to those funny stories.
I do not believe they will lose much more ground in the AI race, as everyone already treats OpenAI as the winner. And it seems safer to have a biased AI chatbot than these embarrassing cases where European kings become Black women, so your service can't even deliver value to the user.
Problem statements
BUT, with that context in mind, imagine a bit of the future.
Look at it from this side. In the Russia-Ukraine war, Russians are using a tremendous number of bots, with more than 173 million contacts per month just for the Ukraine narrative, and that is only one of their big propaganda centers, called "Центр Ц" (they do the same to influence elections in the US).
The nuance is that companies such as OpenAI use all the open data, right? Wikipedia, forums, comments, websites. So, to change the world, you can simply have bots generate a bit more of the "right text". This time not about race or gender, but about, e.g., "Should a country like America exist at all?". When the next models at OpenAI or Google train on that data, these services will be teaching your kids how to leave your country, just because Russian bots suggested it by seeding lots of discussions in the right corners of the internet (see the sketch below for why such content slips into a training corpus).
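To make that mechanism concrete, here is a minimal, hypothetical sketch of a scrape-and-filter ingestion step. Every name and filter in it is my illustrative assumption, not any real company's pipeline; the point is what typical filters check (text quality, exact duplication) and what they do not check (who wrote the text, or whether thousands of "authors" are one bot farm).

```python
# Hypothetical sketch of a web-scrape ingestion pipeline, not any real
# company's code. It illustrates one thing: standard filters gate on
# quality and duplication, not on authorship or coordination.

from hashlib import sha256

# Simulated crawl results: (source_url, text). In reality this would be
# billions of pages from Common Crawl-style dumps, forums, comment threads.
crawled = [
    ("https://forum.example/thread/1", "Long thoughtful post about history and politics."),
    ("https://forum.example/thread/2", "Long thoughtful post about history and politics."),  # verbatim copy
    ("https://blog.example/post/9", "Coordinated bot essay pushing a single narrative."),
    ("https://blog.example/post/10", "Another bot essay pushing that same narrative, reworded."),
]

def quality_ok(text: str) -> bool:
    # Stand-in for typical heuristics: length, language, boilerplate ratio.
    # Note what is NOT here: no check of authorship, intent, or coordination.
    return len(text.split()) >= 5

def dedupe_key(text: str) -> str:
    # Exact-hash dedup drops verbatim copies only; lightly reworded
    # bot posts each count as an "independent" document.
    return sha256(text.encode()).hexdigest()

seen: set[str] = set()
corpus: list[str] = []
for url, text in crawled:
    if not quality_ok(text):
        continue
    key = dedupe_key(text)
    if key in seen:
        continue
    seen.add(key)
    corpus.append(text)

print(len(corpus), "documents kept")  # both reworded bot essays survive
```

Because lightly reworded bot posts pass exact-hash dedup as independent documents, a narrative's weight in the training mix scales with how much the bots post.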
So we might laugh at Google's attempt to fight some prejudiced biases, but it does not look so funny when an entire nation's security can be threatened this easily.
Final
Are we putting technology safeguards in the right places?