Does artificial intelligence help protect vulnerable groups or increase the damage they suffer?



The world is witnessing an accelerated adoption of artificial intelligence technologies in areas aimed at preventing abuse and protecting vulnerable groups, such as children in the alternative care system, adults in nursing homes, and students in schools. These tools promise to detect risks instantly and alert authorities before serious harm occurs.

But behind this bright promise lie deep challenges and real risks, and a fundamental question arises: are we building smarter protection systems, or are we automating the same mistakes and biases that these groups have suffered from for decades?

In this report, we highlight the uses of artificial intelligence in protection, the challenges and ethical concerns surrounding its use in this sensitive field, and the solutions that can be applied.

What are the uses of artificial intelligence in protection?

Artificial intelligence offers great potential to enhance the efficiency and effectiveness of social protection systems when applied wisely. Its uses stand out in several key areas, including:

Language pattern analysis: natural language processing techniques are used to analyze written or transcribed language, such as text messages, to detect patterns of threat, manipulation, and control. This can help identify cases of domestic violence and enable authorities to intervene early.

Predictive modeling: child welfare agencies rely on predictive AI models to calculate risk indicators for families, helping social workers prioritize high-risk cases and intervene early (a minimal sketch of this kind of risk scoring follows after this list).

Surveillance: AI-powered cameras can analyze people's body movements, rather than their faces or voices, to detect physical violence in care facilities.

Decision support: these tools have shown that they can help social workers intervene earlier by surfacing patterns in data that may not be visible to the naked eye.
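
To make the predictive-modeling and decision-support uses above more concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The feature names, training data, and cases are invented for illustration only; a real child-welfare risk model would require far more careful feature selection, validation, and governance, and its scores should inform human screeners rather than trigger automatic decisions.

```python
# Hypothetical sketch: ranking incoming cases by a learned risk score.
# All data below is synthetic; feature names are placeholders, not a real schema.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [prior_reports, caregiver_support_score, housing_instability_flag]
X_train = np.array([
    [0, 8, 0],
    [3, 2, 1],
    [1, 6, 0],
    [5, 1, 1],
    [0, 9, 0],
    [2, 3, 1],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = a serious incident later occurred

model = LogisticRegression().fit(X_train, y_train)

# New calls coming into a hotline queue (synthetic).
new_cases = np.array([
    [4, 2, 1],
    [0, 7, 0],
    [1, 5, 1],
])
risk_scores = model.predict_proba(new_cases)[:, 1]

# Present the queue to a human screener, highest estimated risk first.
# The score is decision support only; a person decides what happens next.
for rank, idx in enumerate(np.argsort(risk_scores)[::-1], start=1):
    print(f"rank {rank}: case {idx}, estimated risk {risk_scores[idx]:.2f}")
```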

However, Dr. Aislinn Conrad, an associate professor of social work at the University of Iowa, offers a critical perspective on the use of artificial intelligence to protect vulnerable groups. Drawing on 15 years of research on domestic violence, she believes that existing systems, despite their good intentions, often fail the very people they are supposed to protect.

Dr. Conrad is currently involved in developing iCare, an AI-powered camera system designed to analyze people's body movements, rather than their faces or voices, to detect indicators of physical violence.

Dr. Conrad poses a fundamental question at the heart of the debate about the future of artificial intelligence in social welfare: can AI really help protect vulnerable groups, or does it merely automate the very systems that have long harmed them? The question reflects a legitimate concern: will new technologies overcome the shortcomings of human systems, or will they repeat the same mistakes in new ways?

Major challenges: when technology inherits the injustice of the past


Many AI tools learn by analyzing historical data. The danger is that history is full of inequality, bias, and erroneous assumptions, and that same reality is reflected in the humans who design and test AI systems, which can lead to harmful and unfair results.

Because of these biases, inherent both in the data and in the people who build the systems, AI algorithms may end up replicating systemic forms of discrimination, such as racism or classism.

For example, a 2022 study of Allegheny County, Pennsylvania, examined a risk-scoring model that estimates families' risk levels and gives scores to hotline staff to help them triage calls. Used without human oversight, the model would have flagged Black children for investigation more than 20% more often than white children; when social workers were involved in the decision, the disparity fell to 9%. This suggests that total reliance on the machine amplifies existing injustice.

Similarly, another study showed that natural language processing systems misclassified African American Vernacular English as aggressive at far higher rates than Standard American English, up to 62% more often in certain contexts.

At the same time, a 2023 study found that AI models often struggle to understand context: messages that are sarcastic or humorous can be wrongly classified as serious threats or signs of distress, which can lead to unnecessary and harmful interventions.

These shortcomings risk repeating long-standing problems in social protection systems. People of color have long been subjected to excessive scrutiny in child welfare systems, sometimes because of cultural misunderstandings and sometimes because of deep-rooted prejudice.

Studies have shown that Black and Indigenous families face disproportionately higher rates of reports, investigations, and family separation than white families, even after accounting for income and other socio-economic factors.
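
As a rough illustration of how such disparities can be surfaced before a system is deployed, here is a minimal, hypothetical audit sketch in Python using the open-source Fairlearn library (one of the tools mentioned later in this report). The predictions and group labels are synthetic placeholders; a real audit would use a model's actual outputs and carefully defined demographic categories.

```python
# Hypothetical bias-audit sketch: compare how often a model flags cases
# for investigation across demographic groups. All data is synthetic.
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate

# Ground truth (1 = substantiated concern), model predictions (1 = flagged),
# and a sensitive attribute for each case.
data = pd.DataFrame({
    "y_true": [0, 1, 0, 0, 1, 0, 1, 0, 0, 0],
    "y_pred": [1, 1, 1, 0, 1, 0, 1, 1, 0, 0],
    "group":  ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

audit = MetricFrame(
    metrics=selection_rate,          # share of cases the model flags
    y_true=data["y_true"],
    y_pred=data["y_pred"],
    sensitive_features=data["group"],
)

print("Flag rate by group:")
print(audit.by_group)                # e.g. group A vs group B
print("Largest gap between groups:", audit.difference())
```

A gap in flag rates between groups, like the 20% disparity described above, is exactly the kind of signal such an audit is meant to catch before deployment.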

Monitoring at the expense of privacy


Even when AI systems succeed in reducing harm to vulnerable groups, they often do so at an alarming cost. A 2022 pilot program in Australia illustrates these challenges: an AI camera system used in two care homes generated more than 12,000 false alarms in one year. This flood of incorrect alerts exhausted staff so thoroughly that at least one real incident was missed. Although the system's accuracy improved over time, an independent audit concluded that, over the 12-month period, it had not reached a level acceptable to staff and management, highlighting the gap between technical promises and operational reality.
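
To put that alarm volume in perspective, here is a small back-of-the-envelope sketch in Python. The number of true alerts is an invented assumption, used only to show how precision and daily alert load translate into staff fatigue.

```python
# Back-of-the-envelope alert-fatigue arithmetic for the pilot described above.
# The count of true alerts is a made-up assumption for illustration only.
false_alarms_per_year = 12_000        # figure reported by the pilot
assumed_true_alerts = 50              # hypothetical; not from the report
days = 365
homes = 2

total_alerts = false_alarms_per_year + assumed_true_alerts
precision = assumed_true_alerts / total_alerts
alerts_per_home_per_day = total_alerts / (days * homes)

print(f"Alerts per home per day: {alerts_per_home_per_day:.1f}")   # roughly 16
print(f"Precision (true alerts / all alerts): {precision:.3%}")    # under 1%
```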

Artificial intelligence applications also affect students. In American schools, AI monitoring systems such as Gaggle, GoGuardian, and Securly are marketed as essential tools for keeping students safe. These programs are installed on students' devices to monitor their online activity and flag anything worrying, but they have also been shown to flag harmless behavior as concerning, such as writing short stories that contain mild violence or researching topics related to mental health. Other systems that use cameras and microphones in classrooms to detect aggression often misidentify ordinary behavior, classifying laughing or coughing as danger signals, sometimes leading to unjustified interventions or disciplinary measures that harm students instead of protecting them.

These problems are not just isolated technical errors; they reflect deep flaws in how AI is trained and deployed. These systems learn from historical data selected and labeled by humans, data that often reflects the inequality and social biases present in our societies.

Dr. Aislinn Conrad believes that artificial intelligence can still be a force for good, but only if its developers prioritize the dignity of the people these tools are designed to protect. She has developed a framework called trauma-responsive AI, built on a set of core principles:

Survivor control over monitoring and data: individuals, especially those under surveillance, should have the right to decide how and when their data is used. This builds trust and increases their engagement with support services, such as creating customized safety plans or obtaining needed assistance, and it ensures that technology acts as an enabler rather than a coercive surveillance tool.

Bias auditing to ensure fairness: governments and developers should test their systems regularly to detect and reduce racial and economic biases. Open-source tools such as IBM's AI Fairness 360, Google's What-If Tool, and Fairlearn can help detect and reduce these biases in machine learning models before they are deployed, ensuring that algorithms do not reproduce or amplify the societal biases found in historical data.

Privacy by design: systems must be built from the outset to protect privacy. Open-source tools such as Amnesia (from OpenAIRE), Google's Differential Privacy library, and Microsoft's SmartNoise can help de-identify sensitive data by removing or masking identifying information. In addition, face-blurring technologies can hide people's identities in video or photo data, allowing analysis without compromising individual privacy (a minimal sketch of this kind of de-identification follows at the end of this report).

Ultimately, artificial intelligence cannot replace the unique human ability to empathize and understand context. But if it is designed and implemented according to strict ethical principles, it can become a tool that helps us offer vulnerable groups more care and protection, not more punishment and surveillance.
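
As referenced above, here is a minimal, hypothetical sketch of the privacy-by-design idea in Python: pseudonymizing identifiers and adding simple differential-privacy noise to an aggregate count. The field names, salt, and privacy parameter are invented for illustration and are not drawn from the specific libraries named above.

```python
# Hypothetical privacy-by-design sketch: pseudonymize identifiers and
# release an aggregate count with basic differential-privacy noise.
# Field names, the salt, and the epsilon value are placeholders.
import hashlib
import numpy as np

records = [
    {"name": "resident_001", "incident_flagged": True},
    {"name": "resident_002", "incident_flagged": False},
    {"name": "resident_003", "incident_flagged": True},
]

def pseudonymize(identifier: str, salt: str = "replace-with-secret-salt") -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]

deidentified = [
    {"id": pseudonymize(r["name"]), "incident_flagged": r["incident_flagged"]}
    for r in records
]

# Release only an aggregate statistic, with Laplace noise (the basic
# mechanism behind differential privacy) so no single person is exposed.
epsilon = 1.0                                  # privacy budget (assumed)
true_count = sum(r["incident_flagged"] for r in deidentified)
noisy_count = true_count + np.random.laplace(scale=1.0 / epsilon)

print(deidentified)
print(f"Noisy incident count for reporting: {noisy_count:.1f}")
```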
