What happens when your language has no data? No training set, no voice model, no translation engine. For hundreds of millions of people, that is not a technical gap. It is a daily barrier to education, news, healthcare, and work.

Anna Mae Yu Lamentillo built NightOwl AI to answer that question practically. Two million words digitized. Twenty countries. A platform that transcribes oral speech, digitizes archives, and delivers news in regional Philippine languages, all designed to run offline as well as online.

AI is moving quickly into daily life, but access, literacy, governance, and protections are not keeping pace, especially for marginalized communities. When inclusion and access lag behind deployment, the future gets decided by default.

Lamentillo is a Filipina lawyer, Oxford and LSE-educated researcher, and founder of NightOwl AI, a platform incubated at the London School of Economics that uses machine learning to preserve endangered languages.

Her platform has digitized over two million words across languages mainstream technology ignores, built a volunteer network spanning 20 countries, and earned recognition from She Shapes AI, which first awarded her the 2024/25 AI and Learning prize and then appointed her to its Global Awards Council for 2025 to 2026. She is also a member of One Young World’s Indigenous Advisory Circle, joining 26 representatives shaping global programs through Indigenous perspectives.

This conversation covers how NightOwl AI was born, and what responsible AI looks like when you build it from the ground up.

This text was checked for grammar and the introduction refined using an AI tool (Claude). Before publication, it was reviewed and verified by a human to ensure accuracy and clarity.
Q & A

Branislava Lovre:  Was there a specific moment that made you realize this was the problem you wanted to address with AI?

Anna Mae Yu Lamentillo: Yes. It was the contrast between two “voices” in my life. In school, English felt like a gate I had to force open, especially with my speech defect, the stutter, the mocking, the constant feeling that my words were being judged before they were even heard. At home, when my mother whispered proverbs in Kinaray-a, my mouth would loosen and my confidence would return because the sounds were familiar, ours.

Years later, when I tried speaking Kinaray-a to a mainstream conversational AI system and it couldn’t respond, it hit me. We were building a digital future where millions of people would be unheard by default. That was the moment I knew I wanted to use AI to make language access fairer and to preserve what is at risk of disappearing.

Branislava Lovre: NightOwl AI began in your student dorm room. How did a project initially focused on preserving Philippine languages grow into a global network spanning communities in more than 20 countries?

Anna Mae Yu Lamentillo: It grew because the problem is painfully universal. We started in the Philippines, with Tagalog, Cebuano, Ilokano, and my own experience of Kinaray-a, but people from other places immediately recognized the same pattern. Their languages were not in mainstream technology, their elders’ knowledge was not being digitized, and their communities were being pushed further to the margins online.

Once we proved the pilot could work, the growth became relationship-driven. Volunteers, linguists, teachers, archivists, journalists, and community leaders reached out. That is how we became a network, from Colombia to Nigeria to the United Kingdom, built around shared urgency and local trust.

Branislava Lovre: When you first started building it, what did the very first version of the tool look like, and what were the biggest lessons or mistakes you encountered in those early stages?

Anna Mae Yu Lamentillo: The earliest version was simple and scrappy. It was a basic pipeline to collect text and speech, build initial dictionaries, and produce rough translations and transcripts. It was more a prototype that proved it was possible than a polished platform.

The biggest early lessons were:

• Data is not just data. Language carries identity, history, and rights, so consent, ownership, and context cannot be afterthoughts.
• Quality is contextual. A good translation is not only grammatically correct, it has to be culturally faithful and community-approved.
• Offline matters. If you are serious about reducing digital exclusion, you design for limited connectivity and real-world constraints from day one.
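To make the “scrappy pipeline” idea concrete, here is a minimal Python sketch of the kind of first-pass tooling described above: collecting sentences, building a frequency dictionary, and producing a rough word-by-word gloss. The function names and the sample Kinaray-a lexicon entries are purely illustrative assumptions, not NightOwl AI’s actual code.

```python
from collections import Counter

def build_dictionary(corpus):
    """Count word frequencies across collected sentences --
    the seed of an initial dictionary for a low-resource language."""
    counts = Counter()
    for sentence in corpus:
        counts.update(sentence.lower().split())
    return counts

def rough_gloss(sentence, lexicon):
    """Produce a word-by-word gloss; unknown words pass through
    unchanged, giving the kind of rough translation a prototype yields."""
    return " ".join(lexicon.get(word, word) for word in sentence.lower().split())

# Illustrative entries only -- a real lexicon would be built and
# validated with community speakers, per the lessons above.
lexicon = {"mayad": "good", "aga": "morning"}
corpus = ["Mayad nga aga", "Mayad nga gab-i"]

dictionary = build_dictionary(corpus)
print(dictionary.most_common(2))              # most frequent words first
print(rough_gloss("Mayad nga aga", lexicon))  # unknown words stay as-is
```

Even a toy version like this surfaces the lessons in the list: the gloss is grammatical nonsense without cultural and linguistic review, and everything here runs offline by design.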


Branislava Lovre: Your career spans so many vital topics, and most of them are at the heart of today’s biggest debates. In this conversation, I’d like to focus primarily on ethics, and I think algorithms and data are always the right place to start. You often emphasize that there is no such thing as a neutral algorithm. Can you explain what you mean by that, and why understanding how algorithms work and how data is collected is really the foundation of any serious ethical discussion about AI?

Anna Mae Yu Lamentillo: NightOwl AI is a purpose-driven AI company incubated under LSE Generate, harnessing cutting-edge machine learning to preserve endangered, low-resource languages with complex morphology and close the digital divide in marginalized communities.

In practice, it helps communities and institutions:

• Document oral speech by recording, transcribing, and organizing stories and conversations
• Digitize archived texts, including older materials like 1930s archives, so they can be searched, studied, and taught
• Support learning and intergenerational transfer so elders can teach and children can learn in their own language, online or offline, with pride and consent

We have also partnered with media organizations to pilot systems that deliver news in regional Philippine languages, so information access is not locked behind a single dominant language.

Branislava Lovre: You have pointed out that only around 2% of the world’s languages receive AI funding, while nearly 90% are completely ignored. Why do you think this imbalance exists, and do major technology companies genuinely recognize this problem?

Anna Mae Yu Lamentillo: The imbalance exists because AI investment follows scale and convenience. High-resource languages have massive datasets, clear commercial markets, and easier benchmarks, so they become the default.

Low-resource languages require deeper community engagement, harder data collection, and more careful ethics, so they are often treated as too complex or not profitable enough.

Some major tech companies do recognize the issue at the level of statements and limited initiatives, but recognition is not the same as structural change. Until inclusion is treated as a core requirement, not a side project, the gap will remain.

“Stop assuming AI is neutral or equally available to everyone. It reflects what it was trained on and who was excluded.”

“Insist on inclusion. Local-language integration, affordable internet and devices, and community-owned data practices. A future where only a handful of languages are digitally alive is not progress. We can and must leave no community behind.”

Branislava Lovre: You often emphasize that AI should be “community-owned.” What does that mean in practice?

Anna Mae Yu Lamentillo: It means communities are not just data sources. They are decision-makers. Practically, that includes:

• Consent-first data practices, clear permission, clear limits, and the right to say no
• Shared governance over what gets collected, how it is labeled, and how it is used
• Benefit-sharing so communities gain tools, training, access, and long-term value, not extraction
• Respect for cultural protocols, what can be shared publicly, what must remain private, what requires elders’ guidance

This differs from typical big-tech development where data is often gathered at scale, centralized, and optimized primarily for product growth rather than local ownership and cultural safety.

Branislava Lovre: What does collaboration with marginalized communities look like in practice?

Anna Mae Yu Lamentillo: Collaboration looks like co-creation. Community workshops, local language experts guiding standards, elders validating meanings and cultural nuance, educators shaping learning use cases, and local teams deciding priorities such as translation, archives, storytelling, or news.

To keep communities as active partners, we build feedback loops into the process. Community review is not optional, and success is not measured only by model metrics. It is measured by whether the tool is trusted, used, and beneficial locally.

Branislava Lovre: When your work spans different countries and communities, how do you ensure cultural sensitivity comes first?

Anna Mae Yu Lamentillo: By assuming nothing is universal except dignity.

We start with local leadership and local priorities.
We adapt workflows to cultural protocols.
We validate outputs with speakers, not just engineers.
We build safeguards around sacred, sensitive, or private knowledge.

The goal is not just to add a language, but to support the community’s right to represent itself on its own terms.


Branislava Lovre: As Chief Future Officer, what worries you most right now?

Anna Mae Yu Lamentillo: The mismatch between the speed of AI development and the slowness of social and institutional readiness worries me most.

AI is moving quickly into daily life, but access, literacy, governance, and protections are not keeping pace, especially for marginalized communities. When inclusion and access lag behind deployment, the future gets decided by default.

Branislava Lovre: You received the She Shapes AI Award for your work. How important is recognition like this?

Anna Mae Yu Lamentillo: It matters beyond me. Recognition helps shift attention and resources toward approaches that are inclusive, community-driven, and ethical by design.

It signals that building with communities, preserving languages, and fighting digital exclusion are not niche concerns. They are central to responsible AI.

Branislava Lovre: As AI becomes part of everyday life, what is one thing people should stop assuming about it, and one thing they should start paying more attention to?

Anna Mae Yu Lamentillo: Stop assuming AI is neutral or equally available to everyone. It reflects what it was trained on and who was excluded.

Start paying attention to who is represented and who is missing. Which languages, which communities, which accents, which realities. If your language cannot be understood by the systems shaping education, work, and information access, that is not a technical detail. It is a fairness issue.

Branislava Lovre:  If you could share one message directly with our readers about living and working with AI today, what would you want them to keep in mind?

Anna Mae Yu Lamentillo: Access is a prerequisite for responsibility.

We can debate AI’s power and risks all day, but if millions of people are locked out by language, by cost, or by connectivity, then the AI future is already unequal.

Insist on inclusion. Local-language integration, affordable internet and devices, and community-owned data practices. A future where only a handful of languages are digitally alive is not progress. We can and must leave no community behind.

About The Author

Branislava Lovre

Branislava Lovre works with media organizations, CSOs, and institutions to implement ethical AI in practice, delivering hands-on training, strategic guidance, and keynote talks on responsible AI adoption.
