Last month at MWC in Barcelona, the session panels focused on the hottest topics in mobile, such as 5G, artificial intelligence and blockchain. The more controversial panels discussed the bias found in data, and how that data goes on to inform algorithms, which then produce unethical conclusions. Speakers and panelists pointed to racial bias in prison sentencing, gender bias in mortgage lending and other financial services, age-related bias in job recruitment, and pre-existing-condition bias in health care coverage.
Danny Guillory, the head of global diversity and inclusion at Autodesk, told Fast Company that searching a professional social network for software engineers returned results that were primarily Caucasian men. Guillory pointed out that when you engage or ask for more results, the AI delivers candidates with similar attributes: more Caucasian men. Another example of AI bias is Microsoft's notorious Tay chatbot, which, after its release on Twitter in March 2016, turned misogynist and racist within a staggering 24 hours.
AI may seem like an auxiliary technology to how we live our daily lives today; however, it will soon be the primary driver across the tech industry. PricewaterhouseCoopers estimates that artificial intelligence will add $15.7 trillion to the world economy by 2030. To put this into perspective, the top five technology companies today (Apple, Amazon, Microsoft, Google and Facebook) have a combined value of about $4 trillion. The annual global technology spend is similar, at roughly $3 trillion. Over the next decade, AI will drive a market five times the size of tech's current global spend.
Although this growth is exciting on many levels, the panelists at MWC 2019 voiced concerns about the handling of the inherent biases that come from data: discrimination by age, race, gender, education or other factors within audience segmentation is clearly counterproductive to the societal advancement that AI promises.
AI algorithms are responsible for making consequential decisions, and they are trained to find lookalikes or other markers in order to learn patterns. Some argue that bias occurs when the computer system reflects the humans who designed it. Downsides to artificial intelligence have surfaced in recent years, such as how fake news allegedly influenced the 2016 presidential election. These accusations underscore that we have run out of time in addressing these concerns, especially as we near the precipice of a much larger, multi-trillion-dollar AI market.
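The lookalike dynamic described above can be illustrated with a toy sketch. This is not any real system's code; the data, the `lookalike_score` function and all attribute names are invented purely to show how a scorer trained on skewed historical decisions favors candidates who resemble past picks, even when qualifications are identical.

```python
# Hypothetical sketch: a naive "lookalike" scorer built from skewed history.
from collections import Counter

# Imagined historical hires, as (group, degree) tuples. The history skews
# toward group "A" because of past human decisions, not candidate quality.
past_hires = [("A", "cs"), ("A", "cs"), ("A", "ee"), ("A", "cs"), ("B", "cs")]

def lookalike_score(candidate, history):
    """Score a candidate by how often their attributes appear among past hires."""
    counts = Counter(attr for hire in history for attr in hire)
    return sum(counts[attr] for attr in candidate)

# Two candidates with identical qualifications but different group membership.
cand_a = ("A", "cs")
cand_b = ("B", "cs")

print(lookalike_score(cand_a, past_hires))  # -> 8
print(lookalike_score(cand_b, past_hires))  # -> 5
```

The group-A candidate scores higher solely because the training history is skewed, which is the pattern panelists warned about: the algorithm does not invent the bias, it amplifies the bias already encoded in the data.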
Even assuming more diversity within the field of artificial intelligence, many of the panelists asked who should regulate the infractions of algorithmic bias: governments or markets? Many felt there should be an international community to establish guidelines for AI. But even then, would the lower classes be invited, and what level of inclusivity would such a community realistically provide, given that the world's most vulnerable and marginalized people are unlikely to be represented? In this way, AI could widen the gap between lower and upper classes along socioeconomic lines, if it has not done so already: AI is currently in use by the largest financial funds in capital markets.
The unanimous solution among the panelists and speakers was to broaden the conversation and not limit artificial intelligence jobs to technical experts alone. "Requiring someone to know Python in order to work with AI is not democratizing AI," one panelist pointed out. Along these lines, a more human-centric approach is necessary.