18/11/25

Latin American women drive change in AI algorithms

Organisations across Latin America are seeking to combat gender inequalities and racial biases fostered by AI algorithms. Credit: Sula Batsú Cooperative, Costa Rica (CC BY 4.0)

Speed read

  • Argentinian platform uses AI to curb domestic violence
  • In Chile, an NGO explores how to avoid bias in AI
  • Mexico faces AI ‘errors’ in tackling school dropout rates

This article was supported by the International Development Research Centre (IDRC), Canada

[BUENOS AIRES, SciDev.Net] “There is a lot of data on femicides, but we need to know what happens before a femicide—what causes a woman to be killed at the hands of a man.”

So says Ivana Feldfeber, a feminist activist from Argentina and founder of Latin America’s first gender data observatory, DataGénero.

To try to answer such questions, Feldfeber and her team created AymurAI, an artificial intelligence (AI) programme that sifts through court case documents for information that could be useful in tackling gender-based violence.

“Privacy, justice and equity must be built into AI by design. AI allows for calculations, statistics and pattern identification, but it depends on training databases that reflect specific worldviews.”

Jamila Venturini, co-executive director, Derechos Digitales

The open-source programme, installed on local servers to ensure data security and confidentiality, makes light work of aggregating data that is often scattered across numerous different files.
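AymurAI's own code is not reproduced here, but the workflow the team describes can be pictured with a short Python sketch: rule-based patterns pull fields verbatim out of a free-text ruling and write them, unmodified, to a database held on a local server. The field names, patterns and database path below are illustrative assumptions, not the project's actual implementation.

    import re
    import sqlite3

    # Hypothetical patterns for illustration only -- not AymurAI's actual rules.
    PATTERNS = {
        "case_number": re.compile(r"Causa\s+N[°º]\s*(\S+)"),
        "offence": re.compile(r"delito de\s+([a-záéíóúñ ]+?)[,.]", re.IGNORECASE),
        "sentence_date": re.compile(r"(\d{1,2}/\d{1,2}/\d{4})"),
    }

    def extract_fields(ruling_text: str) -> dict:
        """Copy matching fields verbatim from a free-text court ruling,
        without interpretation or modification."""
        row = {}
        for field, pattern in PATTERNS.items():
            match = pattern.search(ruling_text)
            row[field] = match.group(1).strip() if match else None
        return row

    def store_locally(row: dict, db_path: str = "aymurai_local.db") -> None:
        """Append one extracted row to a database kept on a local server."""
        with sqlite3.connect(db_path) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS rulings "
                "(case_number TEXT, offence TEXT, sentence_date TEXT)"
            )
            conn.execute(
                "INSERT INTO rulings VALUES (:case_number, :offence, :sentence_date)",
                row,
            )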

“It does not solve violence, but it helps us to make inequalities and violence visible so that civil society and the government can take action,” Feldfeber told SciDev.Net.

“It is important evidence, especially for those who say that it does not exist and that the reports are false.”

Since its introduction in 2021, AymurAI has been used in courts in Argentina, Chile and Costa Rica.

Big tech bias

The AI, a natural language processing model developed before the rise of ChatGPT, received funding from Canada’s International Development Research Centre (IDRC) and the Patrick McGovern Foundation, among others. Unlike many other models, it sends exactly what it finds to a database, without interpretation or modification.

It is one of a number of Latin American initiatives, from Mexico to Argentina, seeking to fight back against the algorithms created by big tech companies, which often reinforce biases and accentuate gender gaps.

“[These are] algorithms that claim to be intelligent when in reality they amplify inequalities, perpetuate racial and gender biases, and consolidate a new form of knowledge extractivism,” writes Italian philosophy professor Matteo Pasquinelli in his book The Eye of the Master: A Social History of Artificial Intelligence.

In an effort to counter this, AymurAI was built to be open source, published online and free to use, with data protected and anonymised. So far, it includes data from more than 10,000 court rulings.

The team also plan to add a function that converts audio files to text. Once validated, this could serve as lasting evidence to prevent victims of abuse from having to testify repeatedly about traumatic events.

“We are interested in protecting victims—ensuring that their addresses are not leaked and that they are not stigmatised at work or within their families. This is very sensitive data,” Feldfeber adds.
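How that protection might work in practice can be sketched in broad strokes. The snippet below is a hypothetical illustration rather than DataGénero's pipeline: it replaces sensitive strings, here an entirely fictional name and address, with a neutral placeholder before a record leaves the local server.

    import re

    # Hypothetical redaction step -- not DataGénero's actual pipeline.
    def redact(text: str, sensitive_terms: list[str]) -> str:
        """Replace each sensitive term (a victim's name, a street address)
        with a neutral placeholder before the record is published."""
        for term in sensitive_terms:
            text = re.sub(re.escape(term), "[REDACTADO]", text, flags=re.IGNORECASE)
        return text

    # Entirely fictional example.
    excerpt = ("La denunciante, María Pérez, con domicilio en Av. Siempreviva 742, "
               "declaró que los hechos ocurrieron el 3 de mayo.")
    print(redact(excerpt, ["María Pérez", "Av. Siempreviva 742"]))
    # -> La denunciante, [REDACTADO], con domicilio en [REDACTADO], declaró ...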

‘Oppressive technologies’

Jamila Venturini (second from left), co-executive director of Derechos Digitales, explaining plans to make AI more focused on Latin American needs and gender inclusive. Copyright: Press/Derechos Digitales.org

Elsewhere in Latin America there are similar efforts to close the gender gap through AI.

Jamila Venturini is co-executive director of Derechos Digitales, a non-profit organisation founded in Chile more than two decades ago, which analyses technology-related public and private policies across the region.

Venturini and her team start from the premise that AI decisions are generally made far from Latin America and without considering regional realities.

For Venturini, technology often reflects worldviews that are “alien to the demands of Latin American communities in terms of gender, race, age, abilities”.

Her organisation seeks to call out what she describes as “oppressive technologies”, while also highlighting people who are developing new alternatives.

“We want to propose public policies to change this oppression and […] promote them from a feminist perspective,” she explains.

However, deployment of new technologies is patchy, due to difficulties with the use and storage of consistent data.

“AI depends on the processing of large amounts of data, and the lack of structure in the public sector hinders these uses,” explains Venturini.

“The absence of protection frameworks associated with this data lowers human rights guarantees and reliability.”

She adds that governments feel the urge to implement AI, even if it is in its infancy or risky.

“Privacy, justice and equity must be built into AI by design,” she says, adding: “AI allows for calculations, statistics and pattern identification, but it depends on training databases that reflect specific worldviews.

“If I create a system […] and only use one segment of the population, I am creating a biased and oriented system.”

School dropouts

In Mexico, Cristina Martínez Pinto founded PIT Policy Lab, a company that seeks to combine technology with social good, in 2021.

Among other things, she set up a project with the state of Guanajuato to see how AI can be used to predict school dropout rates.

“Using the tool, we detected that there was misidentification of 4,000 young people who were at risk of dropping out,” says Martínez Pinto.

With detailed monitoring, the organisation observed that the technical team tasked with the job was made up only of men and that the relevant questions were not being asked.

They proposed using open-source tools to root out and mitigate biases, while providing training for local officials on human rights and gender in the context of AI.

“We use an open-source tool that reviews the data and provides recommendations for correcting biases,” Martínez Pinto explains.
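Martínez Pinto does not name the tool in question. As one hedged illustration, the open-source Fairlearn library can run the same kind of review: given a model's dropout-risk flags and a sensitive attribute such as gender, it reports how often, and how accurately, each group is flagged, so that large gaps can be investigated. The data below is fictional.

    import pandas as pd
    from sklearn.metrics import recall_score
    from fairlearn.metrics import MetricFrame, selection_rate

    # Fictional data: real dropout outcomes, the model's risk flags,
    # and each student's gender as the sensitive attribute.
    y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = pd.Series([0, 0, 0, 1, 0, 1, 0, 1])
    gender = pd.Series(["F", "F", "F", "M", "M", "M", "F", "M"])

    frame = MetricFrame(
        metrics={"flag_rate": selection_rate, "recall": recall_score},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=gender,
    )
    print(frame.by_group)      # flag rate and recall broken down by gender
    print(frame.difference())  # gap between groups; large gaps point to bias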

“The intention is not to be passive consumers of what other countries put together […]; partnerships between public and private sectors are needed.”

Improving the data on which AI is trained, to account for local factors, could help eliminate bias. Credit: Fernando de Rosa/Wikimedia (CC BY-SA 3.0)

Local factors

Computer scientist Daniel Yankelevich is principal researcher at Fundar, an organisation dedicated to the study, research and design of public policies for social and economic development in Argentina.

“If you have an AI system with the country’s laws and it uses confidential information, it must be local and have locally installed servers,” he says, reflecting on work such as that of DataGénero.

He gives an example of the need for a local approach: “If a company wants to use a predictive model, it is very different to predict the behaviour of an Argentinian than that of someone from Finland.

“Behaviour is marked by local factors. It is obvious that models from other cultures cannot be exported, much less if one thinks about eliminating the bias that is embedded in societies.”

The solution for Yankelevich is to improve the data with which the AI is trained and provide technical solutions so that biases can be spotted and corrected.

He adds: “We seek to go down to more local data and not have such biased samples. For that we have to make models oriented to our region.”

This article was produced by SciDev.Net’s Latin America and Caribbean desk. It was sponsored by the International Development Research Centre (IDRC) of Canada.