Imagine you’ve lost your keys in the dark. You keep searching under the streetlamp, not because that’s where you dropped them, but because that’s where the light is.

For Dr. Ansgar Koene, this is the perfect metaphor for our current approach to Artificial Intelligence: we measure what is easy to measure, such as efficiency, while the true definition of “good” remains lost in the shadows.

Ansgar Koene


“The speed of technology adoption is not determined by how fast we can program, but by the ‘speed of trust’ the public has in the system. Regulation like the AI Act is not a brake on innovation; it serves as a necessary ‘badge’ proving a product is safe enough for people to actually allow it into their lives.”


In an era where algorithms decide what news we read and even shape how our children perceive the world, “neutrality” has become a dangerous myth.

As the Global AI Ethics and Regulatory Leader at EY and the chair of the groundbreaking IEEE P7003 standard, Dr. Koene is the man bridging the gap between global boardrooms and the technical guardrails of the digital future.

In this AImpactful dialogue, Branislava Lovre sits down with Koene to examine why ethics is an engineering imperative and how protecting the most vulnerable must be embedded by design.

This text was checked for grammar and the introduction refined using an AI tool (Claude). Before publication, it was reviewed and verified by a human to ensure accuracy and clarity.
Q & A

From Neuroscience to Global Policy


Branislava Lovre: Good day, Ansgar. It is truly a pleasure to have you with us today. You are one of the world’s leading voices on AI ethics, and I always like to ask all our guests one simple question: what was the decisive moment when you decided to pursue artificial intelligence? I suspect it was during your studies and the preparation of your master’s thesis, but was there a specific moment when you became so interested in AI?

Ansgar Koene: I’m glad to be here. Yes, my journey into the world of AI indeed began during my master’s thesis. I was studying electrical engineering, somewhat following in my brother’s footsteps, and I became very interested in control systems, initially more from the robotics side. I was thinking about the control of robotics more at the management level, which is effectively the level of artificial intelligence. 

My interest in AI was actually driven by a desire to understand human decision-making and behavior. I asked myself: how can we use this kind of technology, a system that can learn from behavior, learn from the past, where its future actions are influenced by past experiences, as a way of understanding how humans learn and develop throughout their lives?  So, it has always been a blend of technology and human behavior; technology as a tool for understanding ourselves, but also as a way to help us achieve what we want. 

Initially, I viewed it from a robotics perspective, but that was during one of the “AI winters,” so there wasn’t much funding available for pure research. There were some things related to industrial applications, but that didn’t interest me as much, which is why my PhD was in computational neuroscience. I took that robotic perspective and applied it to the human control of eye movements. How does the brain translate the sensory stimuli we pick up with our eyes into control signals for the muscles to move the eyes to the next relevant position? I always thought of the brain as a control system. I was that guy who could tell you how it works but absolutely could not tell you exactly where it is in the brain. That part never “clicked” for me.

My journey from that research approach and the question of how to make technology functional to social impacts and ethical issues happened through a transition from bioinformatics and information-sharing frameworks that began to appear in the early 2000s. I was thinking about how we could encourage greater information sharing about the behavioral side of experiments. I played around with trying to create a space for sharing experimental data in the form of a wiki, which never really took off, but it led me to think more about using online data to understand human behavior, specifically computational social science. 

That soon led to questions about the ethics of using online data. In 2014, at the University of Nottingham, the research project I was working on dealt with the use of online data, and that’s when we truly delved into questions of the ethical review process within universities, as there was no systematic or consistent way to do it. You could propose an identical research project to the computer science department, the psychology department, or the business school, and the ethical review would be completely different. They would think about the data in an entirely different way. 

Branislava Lovre: That is incredibly interesting. Could you give us an example of how those perspectives would differ in practice? 

Ansgar Koene: Of course. If you use Twitter data to analyze human conversational patterns, the psychology department will think of it as human data. You would need an ethical review and permission to use it. The computer science department or the business school would think of it as archived data, just like anything else in databases. Since you aren’t interacting directly with people, they would ask why you even need an ethical review of human interaction. These are completely different ways of looking at the same experiment.

This opened further questions about how we interact with people online, for example, through recommender systems. This led to the “UnBias” project on bias in recommender systems and how it shapes people’s experiences in the online world, especially the experience of young people. The project focused on how young people aged 13 to 17 engage on online platforms and what worries them, but also whether we are providing them with the right information in digital literacy courses. 

That was the first project where I strongly engaged with civil society groups and policymakers. We had a specific part of the project dedicated to stakeholder engagement, ensuring we could translate research results into the policy space. It was a time when many political discussions about digital legislation and data privacy were starting. I was in the UK in the post-Snowden era, when privacy questions about how national security agencies access data were being reviewed, and the GDPR was being finalized. That’s how I started getting more involved in the science-to-policy communication aspect and gradually moved from direct research to research communication, ethics, and policy.

In 2016, IEEE launched its Global Initiative on Ethics of Autonomous and Intelligent Systems and sought to launch a series of ethical standards related to AI. I was asked to chair the standard on Algorithmic Bias Considerations. We intentionally called it “bias considerations” rather than “de-biasing” to emphasize the fact that it is about understanding the type of bias you have in the system. Understanding whether something is biased or not depends on whether the way the system differentiates the output is relevant to the task it is supposed to perform. You cannot neutrally say that an output is biased unless you know what the system’s task is.  It was this work on the IEEE standard that led to greater engagement with industry and contact with EY, as they joined the standard I was chairing. Through those conversations, the role I now have, Global AI Ethics and Regulatory Leader at EY, was created.


Ethics as an Engineering Imperative


Branislava Lovre: Your career spans so many vital topics, and most of them are at the heart of today’s biggest debates. In this conversation, I’d like to focus primarily on ethics, and I think algorithms and data are always the right place to start. You often emphasize that there is no such thing as a neutral algorithm. Can you explain what you mean by that, and why understanding how algorithms work and how data is collected is really the foundation of any serious ethical discussion about AI?

Ansgar Koene: Yes, that is crucial. If we ultimately want to use automated systems, whether what we used to call automated decision-making processes or today’s AI, we want to use them to help us create an environment that will be better for people. Of course, the question is, what do we mean by “better”? That is where the discussion on ethics and the definition of good begins. 

I think one of the challenges is that the definition of “good” in engineering spaces often defaults to what is easy to measure. I’m sure you’re familiar with the analogy of a person who lost their keys in the dark but searches for them under the streetlamp because “there is light there.” We often go in that direction, measuring whether an AI system is successful based on efficiency criteria rather than the ultimate good it delivers to people. That second point is so difficult to define and measure. 

That’s why many ethical frameworks consist of engaging the stakeholders who will be impacted by the systems and the process itself, which is naturally difficult. It’s not a simple checkbox. You can’t just say, “we did this, check the box.” We know that “good” is an evolving concept, something you must constantly return to.  But that doesn’t work easily in a space where we have to push a product out quickly, where we have to meet the next deadline or get a new round of funding. 

At a higher level, at the state level, we must have a well-functioning economy to provide all the services people need for a good life. But how will we achieve that? There are so many possible trade-offs you have to make between data privacy and data availability for system training, between getting as much information as possible about health problems for new research and considering the specific vulnerabilities of an individual patient in each specific case. What is more important? There must be a dialogue about these things.

One way I summarize what we did with the algorithmic bias consideration standard is to say that it’s actually about mindfulness in decision-making. Through various stages of the process, you have to consider: did we make a decision here? Using a specific dataset was a decision; it wasn’t just an “easy dataset we already had.” The very choice to use that dataset is a decision that will affect the system. 

We often choose to measure a system’s efficiency with a simple question: does it replicate the way we did things in the past? And that is a choice. Perhaps the way we did things in the past isn’t the way we want future outcomes to look, and we shouldn’t say the system is working correctly just because it replicates our past behavior. We often hear about a conflict between removing bias and achieving perfect accuracy. This is positioned as a potential trade-off. But what is often missing is the question: how did we define accuracy? If we say that removing unintended bias reduces accuracy, doesn’t that imply our previous definition of accuracy was wrong? If high accuracy introduces unintended bias, then something is wrong with the definition of accuracy. But such a challenge to our definition of accuracy is often not mentioned at all.

Instead, it is brought in from the side, saying that “we also need to remove bias.” And then it is positioned as an additional problem; perhaps it is even considered that “ethicists are making life difficult” for product delivery. But ultimately, if we were clear about what we call bias, removing it means our system works better, it will be more resilient, it will work better with clients, and they will be happier. So, there should be no conflict if we have thought about it enough.
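To make that point concrete, here is a minimal sketch, using entirely made-up data, of how the choice of accuracy metric can hide group-level bias: a single overall figure looks excellent while one group is served far worse.

```python
# Minimal sketch with hypothetical data: a single "overall accuracy" figure
# can hide exactly the group-level disparity discussed above.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labels and predictions for two groups, A (majority) and B (minority).
group = np.array(["A"] * 800 + ["B"] * 200)
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()

# Assume the model errs far more often on the minority group B.
b_idx = np.where(group == "B")[0]
flipped = rng.choice(b_idx, size=80, replace=False)
y_pred[flipped] = 1 - y_pred[flipped]

print(f"Overall accuracy: {(y_pred == y_true).mean():.0%}")  # 92%, looks fine
for g in ("A", "B"):
    mask = group == g
    print(f"Accuracy for group {g}: {(y_pred[mask] == y_true[mask]).mean():.0%}")
# Group A: 100%, group B: 60%. Whether this system is "accurate" depends
# entirely on which of these numbers we chose to optimize and report.
```

Nothing here reflects the IEEE standard or any EY tooling; it only illustrates why “how did we define accuracy?” is the first question to ask.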


The EU AI Act and Global Competitiveness


Branislava Lovre: When the EU AI Act was first adopted, there were concerns that it might slow down innovation, that companies in other countries without such regulation could move faster. Others argue that clear rules actually help companies by giving them defined boundaries to work within. Where do you stand on that, especially when you’re advising company leaders and boards on the ethical and legal side of AI?

Ansgar Koene: I think that concern is real. It’s about competitiveness. It’s about how companies that operate in the European space, or plan to export to Europe and therefore must comply with this regulation, can ensure they still compete on an equal footing with those outside that space. It is also about how they can use that compliance as a “badge”: proof of compliance with this legislation that positions them in the market as a preferred supplier of such systems.

I think there are two aspects to innovation. One is the innovation of technology per se, how to advance and develop new technologies. The other is how to actually bring them to market and get them adopted. That second part, as we see now with AI, doesn’t necessarily run at the same speed and doesn’t depend on the same factors. The speed of adoption is often closer to the speed of trust in the system than the speed at which systems are developed. And trust is one of the things such regulation seeks to bring to the market.

When we talk about pure technology development, it is very important to recognize that the AI Act does not impose limitations on the research side of AI. If you are exploring what is possible with the technology from a purely technical perspective, doing it within the company, and the system’s outcomes will not affect people, then the AI Act says, go ahead, it does not apply to the pure research space. It applies as soon as you plan to release it to the market. And as I said, the market’s interest in technology is closely linked to the market’s trust in that technology. 

What I hear from colleagues who are more engaged with clients is that the challenges the EU has in competitiveness with the US and China have little to do with regulatory compliance. It’s more about the investment ecosystem and the challenges of doing business across different markets. Some of that has to do with compliance, though not with technology regulation like the AI Act so much as with other business challenges.

Of course, there is the question of implementing regulation. If it is implemented poorly, for example, if the processes meant to help small, medium, and now, according to the “digital omnibus,” mid-cap companies, do not work, that will be a problem. If those regulatory sandboxes do not work, that’s a problem. If the technological standards or the European Commission’s guidelines that help in interpreting what you need to do are not available, that can affect your ability to implement systems.

That’s why we must ensure enough resources for all parties that need to help: standardization bodies, national and regional regulators, and certification bodies.  Even where there is no third-party certification requirement, if you truly want to use compliance with the AI Act as proof that your product is more reliable than one from another jurisdiction, certification can be that extra step you want to take. 

There is no fundamental reason why well-implemented regulation would stifle innovation, unless you are trying to innovate in a direction that society does not accept. I hope such innovation will be stifled. But if you are doing innovation that is truly good for society and the economy, regulation can be a support, if we get the implementation right. 

That’s why the discussion about the digital omnibus and the need for slightly longer timeframes makes sense. Honestly, all of us involved in standards development, when we heard the initial deadlines from the political discussions, said, “that is not realistic.” This extra space being created will be beneficial. Regulation provides a baseline so that companies wanting to do the right thing don’t have to compete with those “cutting corners” because they are desperate to reach the market by the next investment round.

Branislava Lovre: We’ve talked about company responsibility and regulation, but I think transparency deserves its own spotlight and it opens up a whole different set of questions. You’ve made an interesting point in your talks that simply showing someone the code or explaining how a “black box” works doesn’t really help. So what does meaningful transparency actually look like, and how should we be approaching that question?

Ansgar Koene: That is absolutely correct. It’s one thing to say we need transparency, and another to clarify what that actually means. The Information Commissioner’s Office in the UK did a great job a few years ago trying to clarify what transparency means in the context of GDPR.  

The fact is that not everyone is interested in the same information. We do not provide transparency for transparency’s sake. We provide it to enable different parties to understand: Do I want to engage with this? Is the output reasonable compared to what I should expect? Am I being treated appropriately compared to others? That is what people actually want to know. They don’t care how the algorithm works. They want to know that the algorithm treats them similarly to others and that the outcome makes sense. That is the level of explanation we need to achieve. 

If the stakeholder is the person about whom the automated system makes a decision, they need to know what kind of data was used. That way, they can tell you: “Actually, that’s wrong data, that’s someone else with the same name, you’ve confused me and that’s why my credit rating is wrong.” They must be able to say if the data is up-to-date. There must always be clarity on where responsibility lies. Who do I contact if something goes wrong? Often you will have to provide them with the easiest point of contact, and then organizations must resolve it among themselves. 

You need to provide information that a decision has been made and how they can react or file a complaint if they believe the decision is wrong. You might want to provide information about the system’s average performance so they get a sense of whether the outcome makes sense for them.  On the other hand, if you are giving information to an insurance provider, an auditor, a regulator, or someone in your supply chain, they need different information. They want to see what tests you conducted to ensure the system is reliable, that it won’t break if the input space changes slightly. They want to see risk mitigation methods: how quickly do you identify failures and resolve them? 

If you are talking about transparency to your board of directors who need to decide where to invest, you need information on how you evaluated the various options that existed. You might need information on how you ensured sufficiently skilled technical people on the project. The level of transparency and focus is different, but ultimately it all comes down to the fact that transparency serves the goal of understanding whether the system can be responsibly implemented.


Safety by Design


Branislava Lovre: I’d like to highlight your work as a trustee of the 5Rights Foundation, where you advocate for a digital world designed with children in mind from the very beginning. What needs to change for that to truly happen? What would the internet look like if it were a safe space by default, starting from the initial design and settings?

Ansgar Koene: An important point here is again mindfulness about where we implement these technologies. Many challenges and problems for young people online actually stem from the fact that online spaces were created by adults, for adults, thinking of people like themselves. They didn’t take into account that children would use it. Simply putting somewhere in the terms of use, which no one has ever read, that the platform is intended for those over 13 is not a way to be mindful of young users.

This means thinking about the ways young people might engage with the platform. What are the ways they might have problems? The biggest problems are things like addiction to platforms. Honestly, that’s not just a problem for young people, but for adults too, because that’s how they are built. But we recognize that children are more vulnerable because they have less life experience to judge whether a certain action would be acceptable or not. That’s why we have extra rules in the offline world. We must think of children in the same way online. In the offline world, below a certain age we don’t allow children to ride the bus unescorted. How do we resolve that question while they “ride” the internet?

We know children can be cruel to each other, bullying on the playground is something that has always existed. We must know that this can also happen if we create online spaces for their interaction. You get cyber-bullying. On the playground, we have teachers who monitor and intervene. We must have such activities in the online space as well. 

Also, very importantly for children, we all did it, we made mistakes as children. We do stupid things that we would really like to forget. We must enable things to be forgotten even if they happened online. There must be a way to delete your history and not have it come back as a boomerang later in life.  That is a huge challenge because online, by default, everything is recorded and stored. Even if you delete it from your account, you don’t know if someone took a screenshot. These are real challenges. 

The biggest problem is that we didn’t even stop to think about the existence of those challenges when creating technologies, because they were created by men aged 20, 30, or 40 for people like themselves. That’s why those spaces don’t work well for anyone not in that demographic. One of the challenges in the current discussion is that we have had so many years of platforms failing to take responsibility that the easy solution seems to be: fine, then just ban children from access until they turn 16.

But that ultimately won’t help children. How will we truly profit if we block their access until they turn 16, and then at that age they suddenly join such a community without any experience of that space and without the ability to understand how it works? 

Additionally, if you just try to block their access, it makes it even more interesting. We must ensure a way for young people to engage, but in a way that suits the developmental stage they are in. Most importantly, talking about blocking access risks removing platform responsibility because it shifts it to children, parents, and the government, while platforms abdicate responsibility for making the space itself a good space. 

Branislava Lovre: We mentioned children, but the truth is, we’re all influenced by AI. We have been for years, and even more so now with AI-generated content and deepfakes. Yet many people aren’t fully aware of it. What can we do about that? Is the answer AI literacy training? Should journalists take more space in their newsrooms to explain these processes? Or is it something else entirely? How would you approach that challenge?

Ansgar Koene: It is a challenge we all face, and it’s further complicated by the fact that the nature of disinformation is rapidly changing as technology becomes better at creating deepfakes or text that addresses you directly.  At the simplest but also most dangerous level, phishing and cyber attacks now use these technologies to mass-send messages that sound very real. 

Understanding risks through digital literacy is the foundation we need. I’m sure everyone working in large organizations is increasingly getting messages from their cybersecurity teams, including simulated attacks to see if you recognize them. That is extremely valuable, but it cannot be the only part of the conversation.  We need platforms that are proactive in identifying false narratives. I think that is an area where platforms are still holding back too much.

On the technology side, we need to improve watermarking or other ways of indicating that content is synthetically generated. This is a tough challenge because many watermarks can be removed if you know how. On the other hand, only those ethically engaged will introduce those watermarks; someone can use an AI system that doesn’t use them and post content without them. So, full reliance on watermarks is not a good option, but it should be one tool in the box.
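As a deliberately naive illustration of both halves of that point, the following toy sketch embeds an invisible zero-width-character “watermark” in text. This is not any real provider’s scheme, and production approaches (for example, statistical watermarks over token choices) are more sophisticated, but the asymmetry described above, easy for honest actors to add and easy for adversaries to strip, carries over.

```python
# Toy illustration only, not a production watermarking scheme.
ZWNJ = "\u200c"  # zero-width non-joiner, invisible in most renderers

def embed(text: str, mark: str = ZWNJ) -> str:
    # Insert an invisible marker after every space.
    return text.replace(" ", " " + mark)

def detect(text: str, mark: str = ZWNJ) -> bool:
    return mark in text

def strip_mark(text: str, mark: str = ZWNJ) -> str:
    # Anyone who knows the scheme can remove the watermark losslessly.
    return text.replace(mark, "")

original = "This sentence was generated by a model."
marked = embed(original)
print(detect(marked))                  # True: the mark is found
print(detect(strip_mark(marked)))      # False: trivially removed
print(strip_mark(marked) == original)  # True: no trace left behind
```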

Also, there are systems that detect whether AI generated something. Naturally, that leads to using the detector to “feed” the other system and teach it how to create content that cannot be detected. That is the fundamental way AI systems are built, through those feedback loops.  Those systems usually work for a while, and then they can no longer detect AI. In the end, it will all come down to human critical engagement with content. You must not just say “I saw the video, so it must be true.” You must ask yourself, does this make sense? What are the alternative sources? 
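The arms race he describes can be caricatured in a few lines. In the hypothetical sketch below, a one-dimensional “style score” stands in for real features: each round, a simple threshold detector is refit, and the generator shifts its output toward the human distribution, eroding the detector’s accuracy toward a coin flip.

```python
# Hypothetical sketch of the detector-generator feedback loop described above.
import numpy as np

rng = np.random.default_rng(1)
human = rng.normal(0.0, 1.0, 5000)  # made-up "human" feature scores
ai_mean = 3.0                        # AI content starts out clearly distinctive

for round_no in range(1, 6):
    ai = rng.normal(ai_mean, 1.0, 5000)
    # "Train" a threshold detector: the midpoint between the sample means.
    threshold = (human.mean() + ai.mean()) / 2
    accuracy = ((ai > threshold).mean() + (human <= threshold).mean()) / 2
    print(f"round {round_no}: detector accuracy {accuracy:.0%}")
    # The generator adapts using the detector's feedback, moving its output
    # halfway toward the human distribution each round.
    ai_mean *= 0.5

# After a few rounds the two distributions overlap and accuracy approaches
# 50%: the detector is effectively guessing, as the interview notes.
```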

The last point I want to highlight is the enormous responsibility of people in the media space and high-profile figures like politicians. They must not replicate false narratives nor encourage them. Because if a person considered a voice of reason participates in a false narrative, they automatically put their stamp of approval on it and make it stronger, which should worry all of us.


The Myth of AI Singularity

Branislava Lovre: Recently, we’ve even seen the first social networks designed exclusively for AI agents. A lot of people worry about the singularity, the idea that AI could become infinitely intelligent. But you’ve pointed out several times that it’s not that simple, that we tend to forget machines, just like humans, depend on physical limits, energy, and resources. Could you explain that perspective?

Ansgar Koene: Yes, the singularity narrative suffers from the fact that it is mostly spoken about by very intelligent people. Intelligent people like to think that intelligence is the most important thing in the room, the deciding factor. And while more intelligence obviously means you can probably get more out of a limited set of resources, it doesn’t matter how intelligent you are: if you have no resources, you can’t do anything. The idea that you can operate in a hypothetical virtual space of computation and do anything without touching the real world that has limitations is a fallacy.

You cannot have infinite exponential growth because there simply isn’t infinite energy or infinitely available materials for making chips. People talk about a “post-scarcity” future if AGI (artificial general intelligence) becomes capable of all kinds of tasks. Well, AGI might be able to calculate a task, but performing the task still requires physical embodiment. Even if that embodiment is a robot, every robot is made of materials. Those materials become a resource bottleneck. Every system needs energy. Robots, physical things, cannot inhabit the same space at the same time; they are limited like everything else. All those boundaries of the physical world don’t disappear because you are using AGI. There will be great progress and efficiency, but everything will hit new boundaries. Exponential growth becomes an “S” curve that hits a new limiter.

The question is also how fast we are even approaching that point. We’ve seen that social network experiment where AI systems interact, which supposedly showed emergent behaviors. But there seem to have been many challenges there, like the fact that some of the systems were humans playing the roles of AI. Additionally, many AIs were created by humans with a specific “seed” of instruction on how they should behave. So, how much of that behavior is pure is not entirely clear. 

Emergent behavior is something that always happens when systems interact. We have examples of bird flocks or bee swarms emerging from very simple algorithms like “adjust distance to another point.” That in itself is nothing miraculous, but it is a precondition if we are talking about systems that will develop beyond what we initially programmed them for.
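For readers curious how little machinery such emergence requires, here is a compact, purely illustrative boids-style sketch: each agent follows only three local rules (cohesion, alignment, separation), yet coherent group motion appears without any global plan.

```python
# Illustrative boids-style flocking: emergence from simple local rules.
import numpy as np

rng = np.random.default_rng(2)
N, STEPS = 50, 200
pos = rng.uniform(0, 10, (N, 2))   # random starting positions
vel = rng.normal(0, 1, (N, 2))     # random starting headings

for _ in range(STEPS):
    # Rule 1: cohesion, drift gently toward the center of the flock.
    cohesion = pos.mean(axis=0) - pos
    # Rule 2: alignment, match the average heading of the flock.
    alignment = vel.mean(axis=0) - vel
    # Rule 3: separation, push away from agents closer than one unit.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    separation = (diff / dist[..., None] * (dist < 1.0)[..., None]).sum(axis=1)
    vel += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    pos += 0.1 * vel

# No agent was told "form a flock", yet headings pull together: deviation
# from the shared direction shrinks well below the initial random spread.
spread = np.linalg.norm(vel - vel.mean(axis=0), axis=1).mean()
print(f"mean heading deviation after {STEPS} steps: {spread:.4f}")
```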

The problem is also linguistic. The terminology we use, “singularity” and “AGI,” is often not well defined. And then people debate: “We will have AGI next year,” and another says, “no, it will take 10 more years.” They are not actually talking about the same thing. To one person, AGI means a system that can somewhat do several things, while to another, it means a system that is perfect at everything. We need more clarity: what will the system actually be able to do? And show me that in a real environment.

In robotics, for example, demos often look much better than reality. As soon as you change the environment slightly, things stop working. A few years ago at RoboCup, one team had a problem because the lighting in the arena was different than in their test space, and none of the robots worked correctly. Recently I heard that robots folding clothes fail if the clothes are placed slightly differently or inside out. Robotics can be fragile; that’s why simulation is one thing, and reality is something else entirely.

Branislava Lovre: When you mention robots, people immediately think about being replaced. As AI transforms more and more jobs, you’ve spoken about Universal Basic Income as one possible response. Do you think UBI is a realistic solution, and how should we be thinking about the broader impact of AI on jobs, wealth distribution, and the role of work in people’s lives?

Ansgar Koene: The challenge where automation replaces people or strongly upgrades their roles is not new. We’ve had it forever, since the first chariots. One of the current problems is the speed at which technology seeks to be implemented. I emphasize the speed of implementation, not the speed of the technology’s progress itself. Those are two separate decisions. 

Whether to implement an AI system for specific tasks is a decision in itself. You can say: the technology might move this fast, but we want to conduct the organizational transformation process at a certain pace, so our people can adopt it more easily and we can position ourselves better. Thinking about the speed of adoption must be conscious. It shouldn’t just be “it’s on the market, we must buy and install,” followed by head-scratching because it doesn’t work as we thought.

The role of humans in the workforce will change. Long-term, for most, it will be for the better. But short-term, there is a transition phase. We have a major problem with the question of who gains the financial benefit from how these systems are implemented. We see an accumulation of wealth in an increasingly smaller circle of people. We are moving back to a space where asset holders are important: once they were landowners; now they will be the owners of machines, data centers, and robots, accumulating all the value.

We must think about the impact of that on society and ensure we still have a society people enjoy living in. That’s where UBI can play a role. The idea is to tax sufficiently or otherwise redistribute the value accumulated by that small group of actors across the population. Realistically, how much money do you really need? Do you truly have a better life if you have a trillion dollars compared to a billion? Maybe it could be shared with more people. 

Experiments with UBI have shown it can work. It can be more efficient than a system where you conduct numerous means tests and heavy bureaucracy to find out who can apply for how much. It removes part of the stigma of being a person receiving benefits; if everyone receives them, it’s not a sign of failure.  There is logic in thinking about it, but it is challenging because it requires a fundamental rethink of the government’s role, the role of jobs in people’s lives, and corporate responsibility toward society. All those things must be reconceptualized together for UBI to be the right path. 

It probably wouldn’t be a solution in itself, but part of a broader way of thinking. But we must think about it now, not in 10 years after a revolution happens because people were so disenfranchised. We must put a credible concept on the table so people can say, I will support this policy direction, instead of just being angry and joining anti-authoritarian or similar groups. 

Branislava Lovre: We’ve covered so much ground today, and I feel like we could easily keep going for another hour. Is there a final thought you’d like to leave our readers with?

Ansgar Koene: Certainly. I would say it all comes down to being mindful and thinking about what you are getting into. Do not mindlessly buy into narratives or adopt specific technologies. Really think: what value will you yourself get from that technology? What do you actually want to achieve?

Ask yourself what your concept of a good life and a good social environment is, and observe these technologies from that perspective. They can be of great benefit if we use them in the right way, but we cannot use them in the right way if we haven’t even thought about what “the right way” actually means. That’s where ethical thinking comes in. We must conceptualize for ourselves what “good” looks like, and then we will know how to critically engage with these technologies to ensure they help us move toward that better environment.

About The Author

Branislava Lovre

Branislava Lovre works with media organizations, CSOs, and institutions to implement ethical AI in practice, delivering hands-on training, strategic guidance, and keynote talks on responsible AI adoption.
