
How to prevent bias in AI-powered video surveillance
The introduction of artificial intelligence (AI) into the video surveillance arena has been transformative. It has enabled physical security systems to become faster and more intelligent, as valuable collected data is converted into functional and applicable insights, which organization leaders and security teams can use to better protect their sites.

The positive impact of AI is evident. Video can now be monitored from a unified security center for a faster, more relevant response to threats. AI has also given cameras the ability to analyze video using facial recognition technology, streamlining access control while strengthening the security of commercial enterprises.

Further, artificial intelligence has amplified data usefulness and visibility without relying on human intervention. An advanced system can notify facility managers of serious events requiring immediate action, as well as abnormal behaviors that could escalate into a critical emergency. The ability to identify these situations in a timely manner makes AI a prime component of modern surveillance systems.

Although the benefits of implementing AI-driven video surveillance solutions are outstanding, a growing body of research indicates that artificial intelligence can produce gender- and race-biased outcomes, particularly to the detriment of minority populations and women. Since AI is a system capable of making decisions, ensuring those decisions are fair and just is of utmost importance.

Biased insights may stem from the persistent lack of diversity in the technology sector. Human creators can unconsciously “bake” biases into the AI systems they build by training the artificial brains on skewed data or rules that encode implicit inequalities.
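One way skew gets baked in is through an unbalanced training set. As an illustrative sketch (the helper and group labels are hypothetical, not part of any specific product), a simple check of demographic shares can surface an imbalance before training begins:

```python
from collections import Counter

def training_set_balance(labels):
    """Share of each demographic label in a training set. A heavily skewed
    distribution is one way implicit bias gets 'baked in' during training."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Hypothetical training set: 80 samples from one group, 20 from another
sample = ["group_a"] * 80 + ["group_b"] * 20
print(training_set_balance(sample))  # {'group_a': 0.8, 'group_b': 0.2}
```

A model trained on such data has seen far fewer examples of the underrepresented group, which is exactly the condition under which it is likely to mislabel that group.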

Biased anomaly detection

Anomaly detection — the process of identifying items or events that differ from the norm — can be especially prone to bias. Studies have found that AI algorithms often predict most white faces to be “normal”, while many black faces are assessed as “anomalies”. Likewise, more females are predicted to be in the “normal” group, with more males predicted to be “anomalies”.

Discriminatory categorizations can result in grave security mistakes, angry customers, and public embarrassment, as detection algorithms are more likely to predict that African Americans or darker-skinned males are deviations from the norm. Meanwhile, if a white person has malicious intent, a one-sided system will probably fail to detect the threat.
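This kind of disparity can be measured directly. A minimal audit sketch (the function and group names are hypothetical) tallies how often a detector flags each group as anomalous over a labeled evaluation set:

```python
from collections import defaultdict

def anomaly_rates_by_group(predictions):
    """Fraction of samples flagged as anomalous per demographic group.

    predictions: iterable of (group_label, is_anomaly) pairs, e.g. the
    output of a detector run over a labeled evaluation set.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, is_anomaly in predictions:
        counts[group][0] += int(is_anomaly)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

# Toy example: one group is flagged half the time, the other never
results = [("group_a", True), ("group_a", False),
           ("group_b", False), ("group_b", False)]
print(anomaly_rates_by_group(results))  # {'group_a': 0.5, 'group_b': 0.0}
```

A large gap between the per-group rates is the signature of the one-sided behavior described above, and a reason to retrain or recalibrate the detector.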

Ways to prevent biased AI algorithms:
  1. Behavioral detection rather than facial detection
    Focusing on detecting anomalous conduct rather than faces is key to mitigating biased conclusions. A first-class system can detect fighting, aggressive actions, abnormal shopping behavior, or slips and falls. In addition, an equitable system should support real-time, multiple-stream processing, as well as offline analysis of images. Once an event is detected, an alert is sent to the designated users.
  2. AI Governance
    It is essential to designate someone within the organization who is responsible for evaluating and tracking AI usage and fairness. This includes understanding exactly how machine algorithms process data and how judgments are made.
    Since an AI system is continuously self-learning, it can be adjusted to each particular case to reflect real demographic information.
  3. Choose a professional integrator
    Most end-users are primarily concerned with the ability of their AI-based video surveillance systems to deter crime. Nevertheless, it is critical to choose an expert integrator that understands the risks of bias and uses ethically designed algorithms to prevent unethical behavior by computer systems.
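For the governance role described above, one widely used screening metric is the disparate impact ratio, often paired with the “four-fifths rule” (a ratio below 0.8 warrants review). The sketch below is illustrative — the rates and threshold are example values, not figures from any specific deployment:

```python
def disparate_impact_ratio(rate_privileged, rate_unprivileged):
    """Ratio of favorable-outcome rates between two groups.

    Under the common 'four-fifths rule' screen, a ratio below 0.8
    suggests the system's outcomes deserve closer review.
    """
    if rate_privileged == 0:
        return float("inf")
    return rate_unprivileged / rate_privileged

# Example: 90% of one group passes a screening check vs. 60% of another
ratio = disparate_impact_ratio(0.9, 0.6)
print(round(ratio, 2))  # 0.67 -- below 0.8, so the outcomes merit review
```

A check like this gives the designated AI-governance owner a concrete, repeatable number to track over time, rather than relying on anecdotal impressions of fairness.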

As machines replace humans in more occupations, transparency about how artificial brains process information and make decisions is key to trusting them. Hence, AI-powered video surveillance systems must demonstrate moral and ethical standards to avoid gender and race biases, in addition to being able to deter crime.