AI is moving fast. Can the law keep up?
Every day, AI tools are reshaping how we work, create, and innovate. But as AI evolves, so do the legal and ethical challenges around it. How do we regulate a technology that’s still unfolding? How do we balance innovation with responsibility? And what exactly is a regulatory sandbox, and why is it becoming one of the most powerful tools in AI governance?
In this episode of AImpactful, we sit down with Katerina Yordanova, an AI law expert whose work dives deep into the legal complexities of artificial intelligence. She doesn’t just analyze AI laws—she helps shape them.
Katerina is currently a doctoral researcher at the Centre for IT & IP Law at KU Leuven, where she leads research on AI governance and regulatory sandboxes, a project funded by imec’s prestigious PhD research grant.
With over five years of experience at KU Leuven, Katerina has worked on European and Belgian commercial projects, advising on IP licensing, data protection, cybersecurity, biometric data processing, corporate due diligence, and AI policy. She is also a frequent speaker at international conferences, shaping discussions on the legal and ethical challenges of emerging technologies.
Beyond academia, she has gained hands-on experience in both the private and public sectors, working with international law firms, the United Nations, and NGOs. Her expertise spans digital rights, corporate social responsibility, and the intersection of AI and human rights.
But don’t worry—this isn’t a dry legal discussion. Katerina has a gift for breaking down complex legal topics with clarity, humor, and a healthy dose of skepticism.
What You’ll Learn in This Episode:
- AI regulation: What’s real, what’s hype, and what actually matters
- Regulatory sandboxes explained—why they’re game-changers for AI innovation
- The European AI Act—why Katerina is one of its biggest critics
- The hidden risks of AI that no one is talking about (hint: it’s not just bias and ethics!)
- Why AI governance is like coding—one wrong word can change everything
Who Should Tune In?
- Tech innovators & startups
- Legal professionals & policymakers
- Journalists & researchers
- Anyone curious about how law & AI collide—and what it means for the future
Episode Details:
- Duration: 31 minutes
- Guest: Katerina Yordanova – ICT Lawyer & Lecturer
- Host: Branislava Lovre
- Format: Video podcast
AI Usage Notice: In preparing this introduction and the episode transcript, AI tools were used with careful human oversight and editing. We believe in transparency regarding the use of AI in our work.
Transcript of the AImpactful Vodcast
Branislava Lovre: Welcome to AImpactful. Today we will talk about AI regulation, with a special focus on sandboxes. Our guest is Katerina Yordanova, an ICT lawyer and lecturer. Welcome, Katerina.
Katerina Yordanova: Thank you. Thank you for inviting me.
Branislava Lovre: To start, I have a general question. What should we know about AI regulation?
Katerina Yordanova: Well, that’s the million-dollar question, right? Everyone is talking about AI regulation these days, and we find ourselves in a unique period where governments around the world are trying to regulate AI, attempting to predict the future and even set rules before technologies fully emerge. This is entirely new for us as lawyers and, of course, for lawmakers. Traditionally, regulation comes into play after a technology has evolved and issues become apparent, but now we are operating on a completely different scale. Governments are regulating AI at national, interstate, and even global levels, and only time will tell whether these measures prove effective; perhaps in five years we will know if we are headed in the right direction or need to adjust our approach.
Branislava Lovre: Today we will talk about sandboxes. What exactly are they and why are they so important?
Katerina Yordanova: Regulatory sandboxes are a relatively new concept, having emerged in 2014, not very long ago. They stem from the idea of anticipating technological developments, where regulators and legislators aim to set up frameworks before AI or any other technology hits the market. In computer science, a sandbox is a safe environment where you can test software without risking your system. Similarly, regulatory sandboxes offer a controlled space for companies to test their products while engaging in a knowledge exchange with regulators. Many view them as a regulatory tool, but I see their true value in fostering a symbiotic relationship. Regulators gain firsthand insights to craft better policies, while innovators receive early guidance to identify and mitigate potential legal risks, ensuring compliance and avoiding hefty fines, especially in regions like Europe. This mutual exchange is at the core of what makes regulatory sandboxes so effective.
Branislava Lovre: Can you give an example of a sandbox?
Katerina Yordanova: Yes, of course. I often worry that people do not really understand what a sandbox is. They don’t necessarily need to, but it helps that we have resources like Google and experts who can explain these specific tools more clearly. Usually, when I give an example, I start with the very first functioning sandbox we have: the one still operated in the UK by the Financial Conduct Authority (FCA). They created the first sandbox, which is logical, since sandboxes began to be used around 2014. That was when fintech was booming and new technologies created both exciting opportunities and significant regulatory challenges. In the UK, with a market saturated by new technologies and constantly emerging financial services, the regulator decided to establish a relationship with innovators so both parties could learn from each other. This arrangement allows innovators to feel more secure about their products and to reach the market faster.
They created what I call a tool, although it is really more of a process, because a sandbox consists of several stages. It begins with an application process, followed by a selection phase. Once the candidates are selected, the regulator develops a testing process to determine what aspects of the technology need to be evaluated and then conducts the testing itself. Throughout this process, there are ongoing discussions with regulators at each step, and at the end, the regulator typically gives the green light for the innovator to go to market. In short, a sandbox is both a tool for regulators and a process that innovators must navigate in order to bring their products to market more quickly and safely.
Branislava Lovre: How are sandboxes regulated?
Katerina Yordanova: Traditionally, sandboxes have been regulated very loosely. For instance, the FCA had the mandate to create them and determined on its own how they should be structured. This mandate is crucial, especially for states with complex governmental structures such as Belgium, where different regions have distinct competencies. Some states even have laws specifically regulating experimental spaces like regulatory sandboxes or testbeds, although this is not very common.
However, with the introduction of the AI Act, we are taking a different step. Not only does the Interoperable Europe Act—which was adopted before the AI Act—include a section on sandboxes, but the AI Act itself makes it mandatory for Member States to have at least one sandbox or to participate in one, possibly as a cross-border project. The regulation provides a legal definition and outlines the aims and goals of a sandbox, although many detailed aspects, such as entrance criteria, may be left to the Member States. An implementing act is expected to be adopted by the Commission, perhaps next year, which will provide more in-depth details on the structure of sandboxes. This means that in Europe the regulatory framework for sandboxes is becoming more defined.
Branislava Lovre: What is the current state of sandboxes worldwide?
Katerina Yordanova: As I mentioned earlier, in Europe, sandboxes—especially those for AI—are at the center of the discussion. We are awaiting the implementing act, while most Member States have already begun various initiatives to prepare and establish regulatory sandboxes. Since sandboxes must be operational by the time the AI Act becomes applicable, which is about two years after it enters into force, many countries have started preparing reports or even creating sandbox environments based on existing infrastructures. In Europe, the picture is very dynamic. We see sandbox initiatives emerging worldwide—from Africa to Asia (with notable examples being Singapore and South Korea). In the United States, the term “regulatory sandbox” is less common; they are more often referred to as testbeds with regulator participation. Other countries, like Brazil, have also begun establishing regulatory sandboxes for AI. Every country is clearly trying to get on board with the sandbox model, and it will be interesting to see which approach proves most successful and what challenges arise.
Different governmental structures also present challenges. For example, in Belgium and Germany, where responsibilities are divided among various levels of government, coordinating a unified sandbox initiative can be complicated. There can be conflicting requirements between different governmental levels, and these technical issues must ultimately be resolved through legal means that respect constitutional rights.
Then there is the monetary challenge. Sandboxes, especially for AI, are expensive. In the fintech sector, a single financial regulator oversaw innovation in a relatively homogeneous market. In contrast, AI applications can span the medical, financial, and energy sectors, each with its own regulatory body. Coordinating among these different regulators to agree on testing protocols requires substantial resources.
Because of these costs, some smaller Member States have adopted alternative approaches. For instance, Bulgaria has taken a bottom-up approach by leveraging an existing public institute dedicated to AI. This institute already has testing facilities, which means there is no need to create a sandbox and its physical testing infrastructure from scratch. I even participated as an external consultant on that project, and I really love the idea. In this model, the public institution provides the infrastructure and testing protocols, while regulatory authorities, such as the AI Authority and the Privacy Authority, contribute legal and regulatory expertise. They can even share expenses, thereby saving resources. This model is attractive because it repurposes existing facilities rather than building new ones from the ground up.
Another challenge in Europe is the issue of regulatory exemptions. Many EU-level rules apply to regulated industries, and national regulators cannot simply opt out of these rules. Without the ability to offer regulatory leeway, it becomes harder to attract innovative companies to test their products in a sandbox environment. I have considered other types of incentives, but so far, few alternatives have been made official.
Branislava Lovre: What motivated you to choose this topic for research?
Katerina Yordanova: It was a side research line. I started in 2019, I think. I wrote a blog post because I saw the term somewhere. I became interested since I like the creative use of multidisciplinary terms. Then I began familiarizing myself with the concept and found it very pragmatic and interesting—after all, it offers a very different way of regulating compared to traditional approaches. Initially, it was just a side interest, even before we started talking about AI sandboxes. It was before the AI Act, and due to various circumstances in my life, it eventually became a main topic. What really drove me was the pragmatism and logic behind it—it’s a simple yet effective and original approach.
Branislava Lovre: Did you discover anything very interesting or important?
Katerina Yordanova: As a legal researcher, one might expect me to uncover something strictly legal—and indeed there were many fascinating legal insights. For example, I discovered that the concept of legal certainty is not as clear-cut as we might assume. If you ask a lawyer, “Do you know what legal certainty is?” they might reply, “Of course, that’s something we learned in our first year,” but when pressed to define it, they often backtrack because they aren’t entirely sure what it means. That uncertainty was quite surprising. Beyond that, I noticed that especially in Europe we tend to create complex problems and solutions for issues that might not even exist, while simple, effective solutions to actual problems are overlooked. Without spoiling the end of my thesis, one of my main recommendations is that we focus on addressing real needs rather than inventing problems.
Branislava Lovre: It is important to mention the European AI Act in this context. What are your thoughts on it?
Katerina Yordanova: I am a very harsh critic of the AI Act. My main issue isn’t just with the content—I have concerns about that too—but primarily with the way it is written. I often joke that law is a bit like coding because it forms its own separate language. For us lawyers, our legal code is second nature, but for someone outside the field, the same words might mean something completely different. I usually use privacy as an example: for computer scientists, privacy is a technical concept, whereas for us, privacy is a fundamental right. Although the underlying idea may be similar, the operational meaning is different. This leads to frequent misunderstandings, especially when working with tech clients. The language of the AI Act is atrocious—it uses words that require interpretation, and ultimately, it will be up to lawyers, courts, and regulators to interpret it. If we don’t understand it, we have a big problem. When the Act is written in a way that doesn’t align with our familiar legal language, it forces us out of our comfort zone and encourages creative interpretations, which undermines legal certainty. I’ve compared different language versions of the Act, and although every official version is supposed to have equal power, discrepancies exist—for example, between the Bulgarian and French versions—leading to uncertainty.
Branislava Lovre: How do regulatory sandboxes manage data protection?
Katerina Yordanova: The AI Act contains a whole provision dedicated to data protection. The core principle remains the same—all the data protection rules from the GDPR apply. During testing, both regulators and companies must ensure that if personal data is used, those rules are not disregarded, even in a sandbox environment. However, the Act offers an incentive: during testing, certain personal data that was lawfully collected for one purpose can be used for another purpose without obtaining new consent from the data subjects. Of course, this is subject to many additional requirements—around 13 conditions if the product or service is considered to be in the public interest. So, while this incentive exists, meeting all the conditions simultaneously is nearly impossible. Still, in theory, it is usable.
Branislava Lovre: What are the biggest AI risks that are not talked about enough?
Katerina Yordanova: Everyone discusses AI risks these days, but it’s hard to distinguish what is truly mainstream from what is amplified within our echo chambers. For example, the risk of using AI for autonomous weapons is widely recognized—it’s already a reality we’ve witnessed. However, one risk that tends to be swept under the rug is the environmental impact of AI, especially generative AI. Many people use tools like ChatGPT as if they were just search engines, without realizing the enormous energy and resources consumed with each use. This is particularly problematic for data centers in developing countries, where resources such as water are becoming scarce due to global warming. Such environmental concerns deserve more attention. Additionally, while some worry that AI might kill creativity or disrupt the job market, I don’t believe it will destroy jobs entirely. However, over-reliance on AI tools might reduce our willingness to put effort into tasks, ultimately lowering the quality of services and products.
Branislava Lovre: Thank you, Katerina. It was a pleasure to meet you.
Katerina Yordanova: Thank you. Thank you.
Branislava Lovre: You’ve just watched another episode of AImpactful. Thank you, and see you next week.