Few people realize it, but Artificial Intelligence (AI) has been around for more than half a century. Despite that long history, it is only now, with the arrival of increasingly powerful processors, that we have started to take advantage of this technology in our day-to-day lives.
AI may sound like something out of science fiction, as the movies have taught us to expect, but it is really just computers executing algorithms in an attempt to learn. Artificial Intelligence is the field that studies how machines can solve problems the way humans do.
In terms of its possibilities, it is easier to say what AI cannot do than what it can, since its applications are almost countless. Yet while the possibilities are practically limitless, the challenges these technologies face grow in proportion to their capabilities.
One of the areas where AI most often stumbles involves social issues. This happens because Artificial Intelligence is trained to recognize patterns in data, and when that data lacks diversity, the patterns it learns tend to generate exclusion rather than inclusion.
Imagine, for example, smartphone camera software trained to adjust brightness on a database made up mostly of photos of white people. It sounds absurd, but that is exactly what happens. Anyone who enjoys photography knows that controlling brightness is difficult and depends on several factors, including skin tone. Because of this “standardizing”, diversity-blind training, smartphone camera software has repeatedly failed to capture black skin tones.
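To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of how such a skew could be audited and naively rebalanced before training. The dataset, the file names, the “skin_tone” field and the bucket labels are all invented for illustration; this is not any vendor’s real pipeline.

```python
# Purely illustrative sketch: auditing and naively rebalancing a hypothetical
# camera-tuning dataset by a skin-tone label. All names and numbers here are
# made up for the example.
import random
from collections import Counter, defaultdict

# A heavily skewed, made-up training set: 900 light, 80 medium, 20 dark.
dataset = [
    {"image": f"img_{i:04d}.jpg", "skin_tone": tone}
    for i, tone in enumerate(["light"] * 900 + ["medium"] * 80 + ["dark"] * 20)
]

# Step 1: audit how unbalanced the data the model would learn from really is.
counts = Counter(record["skin_tone"] for record in dataset)
print("Before rebalancing:", dict(counts))

# Step 2: oversample the under-represented buckets so every skin-tone group
# contributes equally to training. (The real fix is to collect genuinely
# diverse data; duplication only papers over the gap.)
by_tone = defaultdict(list)
for record in dataset:
    by_tone[record["skin_tone"]].append(record)

target = max(counts.values())
balanced = []
for tone, records in by_tone.items():
    balanced.extend(records)
    balanced.extend(random.choices(records, k=target - len(records)))

random.shuffle(balanced)
print("After rebalancing:", Counter(r["skin_tone"] for r in balanced))
```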
Aware of this problem, Google took the lead in working to create technologies that promote inclusion and are capable of learning and understanding diversity. During a live broadcast at Google I/O, a developer conference held annually in the United States, Google announced that it was working to make its cameras more accurate, avoiding excessive glare and representing people more realistically. To do this, it will start using inclusive databases that include people of all skin colors and tones.
While algorithms can generate exclusion, it is our responsibility, as experts in the field, to do the opposite: to create technologies like Google’s, which promote inclusion and equality rather than exclusion and segregation.
Imagine another example: a virtual assistant trained on a database containing mostly the voices of native speakers of American English. When it hears an Indian or Chinese person speaking English, it will likely struggle to interpret the variation in sounds and pronunciation. A very frustrating and embarrassing experience.
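One simplified way to surface this kind of gap is to measure a recognizer’s error rate separately for each accent group. The sketch below does that in Python, assuming word error rate as the metric; the transcripts and accent labels are invented examples, not any real assistant’s output.

```python
# Purely illustrative sketch: comparing how a hypothetical speech recognizer
# performs across accent groups. The transcripts below are made up.
from collections import defaultdict


def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic-programming edit distance over words.
    dist = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dist[0] = dist[0], i
        for j, h in enumerate(hyp, start=1):
            prev, dist[j] = dist[j], min(
                dist[j] + 1,        # deletion
                dist[j - 1] + 1,    # insertion
                prev + (r != h),    # substitution (free when the words match)
            )
    return dist[-1] / max(len(ref), 1)


# (accent label, what the user said, what the recognizer heard) -- all invented.
results = [
    ("US English", "set a timer for ten minutes", "set a timer for ten minutes"),
    ("Indian English", "set a timer for ten minutes", "set a time for tin minutes"),
    ("Chinese-accented English", "play my morning playlist", "pay my morning play list"),
]

errors_by_accent = defaultdict(list)
for accent, reference, hypothesis in results:
    errors_by_accent[accent].append(word_error_rate(reference, hypothesis))

# A large gap between groups is the tell-tale sign that the training data
# under-represented those accents.
for accent, errors in errors_by_accent.items():
    print(f"{accent}: mean word error rate {sum(errors) / len(errors):.2f}")
```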
Virtual assistants have not been left out of these changes, either. Technologies that were once unable to recognize non-native speakers now have this capability. A pioneer here was Apple, which adjusted Siri to understand, and even speak with, different accents, including British, Indian, Irish and Australian.
Today, thanks to sensitive people and to companies with an inclusive perspective, important social issues such as diversity have become part of technological discussions. The future of inclusive AI is exciting, and the opportunities for us to make a difference by promoting solutions like Apple’s and Google’s are numerous. It is a topic that gains traction every day and will become indispensable in the technology field.
By People + Data + Product team:
– Tamires Carneiro – Tech Recruiter | Diversity and Inclusion Expert
– Mateus G. Ignácio – Product Owner | Product & Machine Learning Expert
– Juliana Lilian Duque – Data Engineer | Data Science Expert
– Fábio Zanin – Data Engineer | BI Expert
– Gilson Bernichi – Data Engineer