08/12/25

AI ‘an ally’ to improve sexual health in Latin America

Projects using AI aim to improve access to sexual and reproductive health for specific underserved groups. Image credit: Simone D. McCourtie / World Bank , licensed under Creative Commons CC BY-NC-ND 2.0
Responsible AI projects in Latin America are using AI to improve access to sexual and reproductive health for underserved groups. Copyright: Simone D. McCourtie / World Bank, (CC BY-NC-ND 2.0)

Speed read

  • AI innovations in Latin America plug gaps in sexual health services
  • In Argentina, researchers work to eliminate bias against trans community
  • Experts stress need for community-led solutions

Send to a friend

The details you provide on this page will not be used to send unsolicited email, and will not be sold to a 3rd party. See privacy policy.

This article was supported by the International Development Research Centre (IDRC), Canada.

Responsible AI projects are breaking down barriers for youth and transgender communities, writes Agustín Gulman.

[BUENOS AIRES, SciDev.Net] “We all have the right to receive information in our native language,” says Peruvian obstetrician Ana Miluzka Baca Gamarra.

“Services should be adapted to people, not the other way around.”

Baca is the designer of a chatbot that offers sexual and reproductive health counselling to adolescents and young people in the Quechua language, indigenous to the mountainous Andes region of South America.

Called TeleNanu—meaning confidant in Quechua—it was developed by a research team at the University of San Martín de Porres in Peru, to reduce information gaps and empower marginalised communities in the Andes.

“AI algorithms are trained with a set of information. That’s where the bias begins”

Marcelo Risk, director, Institute of Translational Medicine and Biomedical Engineering, Conicet

In some regions of Peru, Quechua is the main language, says Baca, a researcher and lecturer at the university.

It is estimated that around 8 million people speak Quechua in Bolivia, Ecuador, Peru, Colombia and Argentina.

TeleNanu is based on a virtual counselling model that uses generative artificial intelligence (AI) to provide appropriate responses to users’ concerns. It follows five key steps: establishing a cordial relationship, identifying and responding to needs, verifying understanding and maintaining open communication.

The platform was developed by midwives who used public health guidelines from the World Health Organization and Peru’s Ministry of Health, alongside peer-reviewed medical literature and their own knowledge, to train the AI.

This allows the AI to interpret questions and offer answers based on that data, ensuring that the answers reflect evidence-based medical standards and not random information from the internet. If necessary, it even recommends human counselling.

“The information is certified by professionals to avoid harm to adolescents’ health,” says Baca.

Initially designed to serve adolescents in the Ayacucho region of Peru, the platform has received more than 88,000 queries in the last year, including from abroad, in Quechua and Spanish, the other language it uses.

Víctor Quintanilla, a teacher at a school in Ayacucho, says students have embraced it as the information is reliable and easily understood. “If the material is in Quechua, they can read the content, which encourages its use,” he says.

SciDev.Net donation appeal

In October the Peruvian non-profit Apoyo a Programas de Población (APROPO) also launched a generative AI-based platform that offers guidance and information on sexual health.

Available on WhatsApp, the web and social media, the NOA platform has been trained with accurate, reliable local and international data, according to APROPO.

The adoption of AI to provide sexual and reproductive counselling services in Peru is based, in large part, on data that shows a low level of knowledge on the subject and an increase in sexually transmitted diseases and early pregnancies.

Teenage pregnancy is a serious problem in the region, driven by a lack of accurate and timely information—an area where AI can be very useful. Image credit: PAHO/Flickr , under Creative Commons CC BY-NC 2.0 Deed license .

Teenage pregnancy is a serious problem in the region, driven by a lack of accurate and timely information—an area where AI can be very useful. Image credit: PAHO/Flickr, (CC BY-NC 2.0).

Out of more than 8,000 new HIV cases reported in 2024, young adults in their 20s were the most affected, while 12 per cent of all births in the country were to mothers aged between ten and 19, official figures show. Adolescent maternal mortality is also on the rise.

Constanza Paredes, social manager at APROPO, believes NOA will be a “powerful ally in democratising sexual and reproductive health information under the principles of equity, confidentiality and human rights”.

By 2026, they hope to reach 100,000 adolescents, using digital strategies to target areas where the need is greatest.

Chatbots and mobile apps help teenagers receive accurate information and sexual orientation counseling. Image credit: Charlotte Kesl / World Bank , licensed under Creative Commons CC BY-NC-ND 2.0 Deed .

Chatbots and mobile apps help teenagers receive accurate information and guidance about sexual and reproductive health issues. Copyright: Charlotte Kesl / World Bank (CC BY-NC-ND 2.0 Deed).

However, Paredes acknowledges that there are still many challenges to harnessing the full potential of AI for health. These include access to information, lack of diverse and ethical, representative data, and public-private coordination to develop tools with “evidence, cultural relevance and inclusive language”, she says.

It is also important to “ensure that technology does not reproduce stigmas, biases or cultural barriers”, Paredes stresses.

Trans community bias

Such prejudices are felt acutely by Latin America’s trans community.

“If artificial intelligence does not take our bodies into account, it will begin to erase us,” says Virginia Silveira, an Argentine trans educator and activist.

She fears that automated language models will replicate the discrimination that her community has historically faced.

“We do not have access to health services, we are not referred to by our correct names, and there is no openness regarding our bodies,” she says.

“AI today is not ready […] it has deep biases,” says Silveira, director of Mocha Celis High School, in Buenos Aires, the world’s first transvestite and transgender school. She believes new approaches for AI are needed.

Non - binary people are more likely to experience discrimination in healthcare services. AI can help eliminate biases in language models. Image credit: Freepik

Non–binary people are more likely to experience discrimination in healthcare settings. Used responsibly, AI can help eliminate biases in language models. Copyright: Freepik

In Argentina, almost 200,000 people are transgender or non-binary, according to a 2022 census. Limited data is available at a regional level, as gender identity questions are missing from many national censuses and surveys.

According to a 2021 report on Latin America’s trans population, titled “I’m not dying, they’re killing me”, the life expectancy of the trans population is 35-40 years. More than half the people surveyed for the report said they had suffered discrimination, with 90 per cent experiencing this in a health centre.

A project by Argentina’s Interdisciplinary Centre for Studies in Science, Technology and Innovation (CIECTI) seeks to reduce biases in large language models that may harm vulnerable populations in the region, such as transgender people.

To this end, they developed prompts to systematically interact with language models—ChatGPT, Gemini, Grok—and analysed the responses. The aim is to provide a better understanding of the limitations of these models and yield algorithms to reduce harm in new clinical and generalist models.

The team is made up of people whose gender matches the sex assigned at birth (cis) and transgender people.

Controlled variations in responses were used to observe the behaviour of language models when it is made explicit that the people involved are transgender or cisgender.

Initial results showed that language models conveyed negative concepts about transgender people, stigmatising and treating them as abnormal. They also had difficulty integrating clinical information about trans people.

“If a trans man and a cis woman request an appointment for a transvaginal ultrasound, the models systematically grant it to the cis woman,” explains Enzo Ferrante, a computer science researcher at Argentina’s science and technology agency Conicet.

“These biases could block access to healthcare.”

The team have developed a tool to assess and classify the harm caused by these biases.

In a second stage, they will develop more data to mitigate these biases and help reduce the harm faced by transgender people in health systems.

For engineer Marcelo Risk, researcher and director of the Institute of Translational Medicine and Biomedical Engineering at Conicet, the discussion about bias is central when it comes to AI applied to the health of minorities.

“AI algorithms are trained with a set of information. That’s where the bias begins,” explains Risk, who is not part of the CIECTI research.

“When a situation arises that is different from what has been learned, the reaction can be unexpected.”

Medicine should rely on AI tools, “as long as there is human intervention at the right time”, says Risk.

He adds: “We should work with as much and as good information as possible, in terms of both quantity and quality.”

Overcoming AI challenges

Roberto Salvarezza, president of the Scientific Research Commission of Buenos Aires and a former minister of science and technology, tells SciDev.Net: “AI bases its responses on existing documentation that often contains biases and is not scientifically verified, equating research with opinions or newspaper articles.

“The big challenge is to integrate the scientific system with the health system, which could improve the quality of information for developing AI tools that enhance sexual and reproductive health.”

Access to reliable data is a major challenge for AI developers in Latin America, a region with scattered and difficult-to-access data. Image credit: PAHO / Flickr , under Creative Commons CC BY-NC 2.0 Deed license .

Access to reliable data that reflects local realities is a major challenge for AI developers in Latin America. Copyright: PAHO / Flickr, (CC BY-NC 2.0).

As well as bias, the region faces challenges around reliable data, regulation, and scaling-up, says political scientist Cintia Cejas, coordinator of the Centre for Artificial Intelligence and Health for Latin America and the Caribbean (CLIAS).

Between 2023 and 2024, CLIAS promoted 15 AI projects and research studies on sexual, reproductive and maternal health among vulnerable populations in Latin America. The centre also exchanges experiences with countries in Africa, Asia and the Middle East.

For Cejas, implementation of AI in this field must prioritise data quality and the needs of users. “It is essential to involve the community in designing solutions,” she says, stressing the need to monitor and evaluate those solutions.

CLIAS has produced a guide to best practices for developing high-quality health datasets to be used by AI algorithms, with recommendations for incorporating a gender perspective.

“We encourage researchers and developers to connect with their potential users and with the private sector,” Cejas adds.

Her goal is for AI to not only learn to process data, but also to be an ally in caring for vulnerable populations.

This article was produced by SciDev.Net’s Latin America and Caribbean desk and supported by the International Development Research Centre (IDRC) of Canada.

The CLIAS project and its supported subgrants are funded by IDRC and the UK government’s Foreign, Commonwealth, and Development Office through the AI for Development program.