Telekom CR-facts
Social, January 2022

Bias and fairness in AI


When AI makes the headlines these days, it is all too often because of problems with bias and fairness. This happens whenever AI models systematically disadvantage certain groups or individuals. Deutsche Telekom AG's T-Labs are therefore working with Magenta Austria to check for bias before an AI model is used.

If a model's decisions are influenced by bias, the model has not learned a realistic or complete picture of the environment in which it will later be used. Once such a model goes into production, it will not evaluate different situations equally well. Learned bias of this kind is usually caused by training data that are not sufficiently complete or balanced. There are many reasons why data can unintentionally end up incomplete or unbalanced, and just as many risks associated with this:
Interaction bias arises when people influence AI systems through the way they interact with them, or deliberately try to manipulate them and thereby produce biased results. A well-known example is users intentionally teaching chatbots offensive language.
Selection bias occurs, for example, in applicant screening when men are preferred because the AI model was previously "trained" mainly on data from male applicants and therefore cannot handle hobbies or foreign languages in a CV that are more typically associated with women (a simple data check for this is sketched below).
Implicit bias refers to unconscious prejudice and the associated risk that technology systematically disadvantages certain groups, for example People of Color.
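Selection bias of this kind can often be spotted before any model is trained, simply by profiling the training data: if one group dominates the examples, the model has little chance of learning the others well. The following is a minimal sketch of such a check, not an actual Telekom pipeline; the table, the column names gender and label, and the 20 % threshold are hypothetical and only illustrate the idea.

```python
import pandas as pd

# Hypothetical training data for an applicant-screening model;
# column names and values are illustrative only.
train = pd.DataFrame({
    "gender": ["m", "m", "m", "m", "m", "f", "m", "m"],
    "label":  [1, 0, 1, 1, 1, 1, 0, 1],
})

# Share of each group in the training set.
group_share = train["gender"].value_counts(normalize=True)
print(group_share)

# Positive-label rate per group: large gaps here hint at a bias
# that the model is likely to reproduce.
positive_rate = train.groupby("gender")["label"].mean()
print(positive_rate)

# A simple alarm: flag any group that makes up less than 20 % of the data.
underrepresented = group_share[group_share < 0.20]
if not underrepresented.empty:
    print("Warning, underrepresented groups:", list(underrepresented.index))
```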

Intelligent systems make mistakes similar to those of humans, but in an automated way and potentially on a much larger scale. These mistakes can have serious consequences for individuals and companies alike, ranging from lost revenue to legal risks to damage to brand and reputation. Wherever personal data are used, or the technological participation of diverse user groups is to be ensured, close attention must be paid to whether biases in the data could exclude or discriminate against people. To evaluate common bias measurements and matching countermeasures for telecom use cases, T-Labs worked with Magenta Austria on their churn and propensity models to ensure that certain groups of people are not systematically neglected.
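What such a pre-deployment check can look like in practice is sketched below for a binary churn model. The metrics shown, demographic parity difference and the disparate-impact ratio, are standard fairness measures; the column names, example data and the 80 % rule threshold are hypothetical stand-ins and do not describe the actual T-Labs or Magenta setup.

```python
import pandas as pd

# Hypothetical output of a churn model; "churn_pred" stands for the model's
# decision (e.g. whether a customer is offered a retention deal).
scores = pd.DataFrame({
    "group":      ["a", "a", "b", "b", "a", "b", "a", "b"],
    "churn_pred": [1, 0, 1, 1, 0, 1, 1, 0],
})

# Selection rate per group: how often the model decides positively for each group.
rates = scores.groupby("group")["churn_pred"].mean()

# Demographic parity difference: gap between the highest and lowest
# selection rate across groups (0 means perfectly equal treatment).
dp_difference = rates.max() - rates.min()

# Disparate impact ratio: lowest selection rate divided by the highest;
# the common "80 % rule" flags ratios below 0.8.
di_ratio = rates.min() / rates.max()

print(f"Selection rate per group:\n{rates}")
print(f"Demographic parity difference: {dp_difference:.2f}")
print(f"Disparate impact ratio: {di_ratio:.2f}")
if di_ratio < 0.8:
    print("Potential bias: review the model before putting it into production.")
```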

