The introduction of artificial intelligence (AI) into video surveillance has been transformative. It has made physical security systems faster and more intelligent by converting the data they collect into actionable insights that organization leaders and security teams can use to better protect their sites.
The positive impact of AI is plain to see. Video can now be monitored from a unified security center, allowing a faster, more targeted response to threats. AI has also given cameras the ability to analyze video using facial recognition technology, both streamlining access to commercial facilities and strengthening their security.
Further, artificial intelligence has made surveillance data more useful and more visible without relying on human intervention. An advanced system can notify facility managers of serious events that require immediate action, as well as abnormal behaviors that could escalate into a critical emergency. The ability to identify these situations in a timely manner makes AI a core component of modern surveillance systems.
Although the benefits of AI-powered video surveillance are substantial, a growing body of research indicates that artificial intelligence can produce gender- and race-biased outcomes, particularly to the detriment of women and minority populations. Since AI systems are entrusted with making decisions, ensuring those decisions are fair and just is of the utmost importance.
Biased insights can stem in part from the persistent lack of diversity in the technology sector. Without realizing it, developers may be "baking" biases into the AI systems they build by training them on skewed data or rules that encode implicit inequalities.
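This kind of skew can often be caught before a model is ever trained, simply by checking how well each group is represented in the training set. The sketch below is purely illustrative: the file name, the column names ("skin_tone", "gender"), and the 5% representation floor are all hypothetical, not a real dataset schema.

```python
import pandas as pd

# Hypothetical training manifest for a face dataset; file name and
# column names are assumptions for illustration only.
labels = pd.read_csv("training_manifest.csv")

# Share of training examples per demographic group.
composition = labels.groupby(["skin_tone", "gender"]).size() / len(labels)
print(composition)

# Flag any group that falls below a chosen representation floor
# (5% here is an arbitrary illustrative threshold).
underrepresented = composition[composition < 0.05]
if not underrepresented.empty:
    print("Warning: underrepresented groups in training data:")
    print(underrepresented)
```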
Biased anomaly detection
Anomaly detection, the process of identifying items or events that deviate from the norm, is especially prone to bias. Research has found that AI algorithms often classify most white faces as "normal" while assessing many Black faces as "anomalies". Likewise, more women tend to be placed in the "normal" group, while more men are predicted to be "anomalies".
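The mechanism is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not a study of any real surveillance product: it fits an off-the-shelf isolation-forest detector on synthetic "embeddings" in which one group is heavily underrepresented, and the rare group ends up flagged as anomalous far more often.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic face embeddings: a majority group the detector sees often
# and a minority group it sees rarely. Purely illustrative data.
majority = rng.normal(loc=0.0, scale=1.0, size=(950, 8))
minority = rng.normal(loc=3.0, scale=1.0, size=(50, 8))
X = np.vstack([majority, minority])
group = np.array(["majority"] * 950 + ["minority"] * 50)

# Fit an off-the-shelf anomaly detector on the skewed data.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
pred = detector.predict(X)  # +1 = "normal", -1 = "anomaly"

# The minority group, being rare in the data, is flagged far more often.
for g in ("majority", "minority"):
    rate = np.mean(pred[group == g] == -1)
    print(f"{g}: flagged as anomaly {rate:.1%} of the time")
```

Nothing about the minority samples here is inherently "abnormal"; they are simply rare in the data the detector learned from, which is exactly how underrepresentation becomes bias.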
Discriminatory categorizations can result in serious security failures, angry customers, and public embarrassment, as detection algorithms are more likely to flag African Americans or darker-skinned men as deviations from the norm. Meanwhile, if a white person has malicious intent, a skewed system may well fail to detect the threat.
Ways to prevent biased AI algorithms
As machines take over more tasks from humans, transparency about how they process information and reach decisions is key to trusting them. AI-powered video surveillance systems must therefore not only deter crime but also demonstrate ethical standards that guard against gender and race bias.
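One practical step toward that transparency is to report outcome rates per demographic group alongside the system's decisions, so disparities are visible rather than buried. Below is a minimal sketch of such an audit, assuming the detector's flags and the corresponding group labels are available as arrays; the function name and the example data are hypothetical.

```python
import numpy as np

def audit_flag_rates(flags, groups):
    """Report the anomaly-flag rate per demographic group and the ratio
    between the highest and lowest rates (a rough disparity measure)."""
    rates = {g: float(np.mean(flags[groups == g])) for g in np.unique(groups)}
    worst, best = max(rates.values()), min(rates.values())
    disparity = worst / best if best > 0 else float("inf")
    return rates, disparity

# Hypothetical example: flags from a deployed detector (1 = flagged).
flags = np.array([0, 1, 0, 0, 1, 1, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
rates, disparity = audit_flag_rates(flags, groups)
print(rates, f"disparity ratio: {disparity:.2f}")
```

A disparity ratio far above 1.0 does not prove the model is unfair on its own, but it is a concrete, reviewable signal that the system's decisions deserve scrutiny before they are trusted.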