UNO November 2019

Biases and Artificial Intelligence: careful with the data

“Systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.” That is how the RAE (the Royal Spanish Academy) defines ‘bias’. But how do these biases affect Artificial Intelligence? When we talk of bias in AI, we do so in the same way as in any other activity or area of knowledge: we are talking about prejudices, conceptions of reality on the basis of which we subconsciously make decisions.

If we stop to think about it, the biases affecting Artificial Intelligence lie in the data and in the algorithms. But when we talk about data, we mean not only gender, age and race, which may be the first that come to mind, but any data referring to a person. And we go one step further, to the importance assigned to each of these data points, the way they are structured, and the algorithm chosen to operate on them. Granting a mortgage, hiring someone at a company… the algorithm created for each of these tasks makes decisions in which the biases have often gone unexamined.
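A minimal sketch can make this concrete. The scenario below is entirely hypothetical (the applicant data, the feature names and the weightings are invented for illustration): the same applicant is scored with two different choices of feature weights, and the choice of weights alone flips the decision. That choice is precisely the kind of bias hidden inside a scoring algorithm.

```python
# Hypothetical mortgage scoring: the same applicant data, two different
# feature weightings. The choice of weights is itself a source of bias.

applicant = {"income": 0.7, "savings": 0.4, "zip_code_score": 0.2}

def score(data, weights):
    """Weighted sum of normalized features (ignores features with no weight)."""
    return sum(data[feature] * w for feature, w in weights.items())

# Weighting A considers only financial factors.
weights_a = {"income": 0.6, "savings": 0.4}
# Weighting B heavily weights the applicant's neighborhood, a classic proxy bias.
weights_b = {"income": 0.3, "savings": 0.2, "zip_code_score": 0.5}

THRESHOLD = 0.5
print(score(applicant, weights_a) >= THRESHOLD)  # True  (score 0.58)
print(score(applicant, weights_b) >= THRESHOLD)  # False (score 0.39)
```

The applicant has not changed; only the designer's judgment about what matters has. That judgment is exactly what needs to be made explicit and reviewed.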

Therefore, we need to stress the important role of everyone who works with these algorithms: people able to work with that “raw material”, i.e. the data. Just as we consider which materials we are going to use when constructing a building (to meet standards of environmental sustainability and the health and safety of its future occupants), here our material is data, and those people are therefore responsible for knowing their raw material thoroughly: where the data come from, what type of data they are, their quality, and so on. They must also know what parameters an algorithm applies to reach a decision: under the GDPR, any decision based on automated processing must be explained if the data subject so requests, and Article 22 GDPR even acknowledges the right not to be subject to a decision based solely on automated processing.

“We need to stress the important roles of all those who work with these algorithms, people who are able to work with that ‘raw material’, i.e. the data”

To understand what we are talking about, let me give an example. If we do a quick Google search for famous researchers in history, the results will contain far more men's names than women's. This is another case in which we can point to the algorithm. In fact, we recently saw a little girl propose an algorithm to Google that would include at least one female scientist in every search of this kind, thereby also training the algorithm. But returning to the matter at hand, the question of biases, there is a way to address this problem: making the people who work directly or indirectly with the data aware of it. Above all, they should be aware that there is no single truth, and that balancing is necessary, putting all the data into the equation so that the decision reached is unbiased. In other words, acknowledging the implicit bias in any human decision, they should select which parameters are or are not important, and which algorithm is going to be used, so as to have the least negative impact on the person affected by the decision.
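One practical first step in that balancing is simply to audit how each group is represented in the data before an algorithm ever trains on it. The sketch below uses a tiny invented candidate list (the records and the `gender` field are hypothetical, used only for auditing) to compute each group's share of the dataset:

```python
from collections import Counter

# Hypothetical candidate records; the gender field is kept only for auditing
# representation, not for scoring anyone.
candidates = [
    {"gender": "F", "experience": 5},
    {"gender": "M", "experience": 7},
    {"gender": "M", "experience": 3},
    {"gender": "M", "experience": 6},
    {"gender": "F", "experience": 8},
]

def group_shares(records, field):
    """Return each group's share of the dataset for a given attribute."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

shares = group_shares(candidates, "gender")
print(shares)  # {'F': 0.4, 'M': 0.6} — an imbalance the team should flag
```

A report like this does not fix the bias by itself, but it makes the imbalance visible so that the people responsible can decide how to rebalance or reweight before any decision-making model is built.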

But what about those people? Until now, in IT, computer engineers were assumed to have “absolute knowledge”, without taking account of other areas of knowledge directly related to the user as a data subject, as a person. With Artificial Intelligence, we have taken a different course: we have realized that the humanities are important. Think, for example, of the role of linguistics in information automation and conversational systems, or the role of ethics in regulation and social good, among many other fields. So why this sudden change? Because we can answer questions that machines cannot: creativity, empathy, intuition, moral values…

“The challenges posed by AI are both diverse and thrilling, since this technology impacts both personal and socioeconomic facets”

As regards ethics, or responsibility for producing AI solutions, consider for a moment self-driving cars. Whose fault is it if there is an accident? The person who developed the hardware or the software? Who should I run over if there are several pedestrians crossing the road and my car has a fault? Here we have to face the complexity of ethics. Therefore, everyone who works with algorithms must take account of the huge social impact of their technology when making decisions.

There is a place where data biases proliferate every second, even every thousandth of a second: social networks. Never in human history has so much content been produced as now, from our cell phones, every time we publish on our networks, comment on articles, and so on. When people publish content in these channels, they wittingly or unwittingly reveal their prejudices, opinions and way of seeing the world: in short, their biases. Companies that operate with this kind of data to build products or make decisions must be particularly careful when processing unstructured data from social networks or open forums.

“With Artificial Intelligence, we have taken a different course: we have realized that humanities are important”

With respect to the impact of AI on our work, another claim that those of us who work in AI are forever hearing and reading in the press and other media is that AI is going to destroy a lot of jobs, as repetitive tasks and processes currently performed by people are automated. Indeed, certain jobs that can be automated will disappear; this has happened throughout history (ice delivery men, milkmen, night watchmen…) alongside technological progress. AI is undoubtedly one of the principal driving forces of the current industrial revolution, which was and is triggered by digital technology. This situation invites self-reflection, asking ourselves: what differential value can I contribute to my job or my company compared with a machine? Take, for example, a job that entails reading, understanding and extracting information from legal documents, something a machine can already do, as we do at Taiger. Using a machine to automate that processing allows the person to concentrate on contributing more value, particularly to their client, because they can devote more time to listening, strategy, empathy… In short, to personalized service and tasks with a high personal and professional impact, such as strategy, creativity and problem solving.

I invite anyone, whether or not they are connected with technology or the digital world, to learn what Artificial Intelligence is, how algorithms work (in selection, interpretation, decision making…) when used to automate processes or make decisions and, above all, to find out how to manage and discover what is contained in the data, the principal raw material of the current industrial revolution. Just as we, as a society, are saying “No to plastic”, we should start saying “Careful with the data”.

And those of us who work in this exciting field of AI must be aware of the nature of the data we are working with, or are going to work with (source, quality, biases, etc.), and of what we are going to do with them, attending to their direct and indirect socioeconomic impact. Other prominent people in AI and I are already talking to those responsible in central and regional governments to find out what measures are being taken to hack those biases and what regulatory framework they are working on.

Once we know the actual scope of AI, we will be reassured. For the time being, machines do not have the associative capacity to combine ideas or use intuition, abilities we have gained through experience. These will take some time to develop in machines, because they require a highly complex cognitive system, one of understanding first and producing later, as in the case of jokes or irony. If people often find them difficult to understand, imagine machines.

In short, the challenges posed by AI are both diverse and thrilling, since this technology impacts both personal and socioeconomic facets. Hence anyone, whatever their profile, has a place in this technology. Never before has a technology valued the humanities so much, because if companies are customer-oriented and geared towards personalization, what could be better than working on emotional innovation? As Maya Angelou said: “People will forget what you said and what you did, but they will never forget how you made them feel”.

“We must be aware of the nature of the data we are working or going to work with […] and what we are going to do with them to observe their direct and indirect socioeconomic impact”

Cristina Aranda
Business Development for Europe at Taiger and co-founder of Mujeres Tech
Cristina works in Business Development for Europe at Taiger, an Artificial Intelligence company. She is also co-founder of Mujeres Tech, an association promoting initiatives among girls, young people and adult women and men with a view to increasing the presence of women in the digital sector. She sits on the Red.es gender round table (Ministry of Economy and Enterprise). She has a PhD in Theoretical and Applied Linguistics, a BA in Hispanic Studies and a Master's in Internet Business, and heads the Data in Real Life module of the Master in Data Analytics at ISDI (Internet Development Institute). [Spain]
