In today’s rapidly evolving technological landscape, it’s paramount to have visionaries and experts who not only understand the intricacies of Artificial Intelligence but also navigate its ethical, legal, and societal dimensions. One such luminary is Irina.

Irina Buzu

“We should harness AI’s potential to address pressing societal challenges, such as healthcare, climate change, and education.”

Passionate about information technology, innovation, art, and artificial intelligence, Irina stands at the forefront of AI policy and cybersecurity. As an Advisor on AI and Cybersecurity to the Deputy Prime Minister of the Republic of Moldova, her influence is far-reaching. Her PhD research in international law zeroes in on AI policy and regulation, signaling her deep commitment to understanding and shaping this transformative technology.

Beyond her advisory role, Irina is intricately woven into the fabric of European AI discussions. She’s a proud member of the European AI Alliance, a fellow at the Center for Artificial Intelligence and Digital Policy (CAIDP), and an Emerging Tech affiliated expert at EUROPULS. Through these roles, she delves deep into the intersection of algorithmic decision-making, ethics, and public policy. Her focus? To unravel how the technology that underpins automated algorithmic decision-making not only functions but also impacts our global perspective and daily choices.

As we connect with Irina through the digital space, we engage in an in-depth conversation, exploring her insights on the role of AI in our future society, the ethical implications of algorithmic decision-making, and the evolving landscape of AI regulation and literacy. What follows is an enlightening exchange that brings to the forefront the important role AI will play in shaping our world.

Q & A

Q. Irina, your journey from techlaw and intellectual property to AI regulation is fascinating. Could you walk us through what drew you into the complex world of AI legality?

A. As an attorney and intellectual property counsel, I remember once explaining to a client how important it was to value and protect their intellectual creations. Later that same day, I stumbled upon an article about a portrait generated by an AI that sold for over $400k at Christie’s in New York. In terms of technological advancement it was utterly fascinating, but from a legal perspective it immediately raised a hundred questions: specifically, who is the actual author of that portrait, and who owns the copyright? From then on, I started researching the legal loopholes that needed to be filled in order to regulate AI, considering that legal frameworks are essential to ensure that AI is used responsibly and to address the challenges and questions that arise as AI continues to evolve.

Q. Your PhD topic is generating quite an interest – ‘Legal personhood and accountability of AI in the era of digital creativity.’ Can you elaborate on the primary objectives of your PhD research in AI regulation?

A. Essentially, my research aims to deepen our understanding of the legal and ethical dimensions of AI in the context of digital creativity. By addressing questions of legal personhood, accountability, IP rights, and regulatory frameworks, it seeks to contribute to the responsible development and regulation of AI systems in the creative domain.

This image was created with the assistance of DALL·E 3

Q. AI’s ability to create content brings forth intriguing questions about intellectual property. How do you envision the future of IP rights in an AI-driven world, and how is AI challenging traditional notions of intellectual property?

A. In the context of an AI-driven world, the future (or rather present, considering the impact of ChatGPT, for instance) of IPRs is undergoing a transformation, posing intriguing challenges to traditional notions of IP. While AI is often viewed as a tool employed by human creators, traditional IP rights may continue to apply when AI operates under human direction. However, distinguishing between AI-generated and human-generated content will be imperative.
One of the central challenges is determining authorship and ownership when AI independently generates creative content. We’ll need to consider recognizing AI as a creative contributor and define whether AI-generated works qualify for copyright protection and, if so, who holds those rights.
We can also speculate that, in light of AI-generated content, new copyright categories tailored to AI-created works could emerge, potentially featuring distinct copyright terms or attribution requirements. We may also expect new licensing models to emerge for AI-generated content. Stakeholders, including rights holders, AI developers, and content creators, may engage in negotiations to fairly compensate all involved parties through royalties and licensing agreements.
Legal boundaries of fair use and transformative works will be redefined in the context of AI-generated content. Determining when AI-generated content qualifies as a transformative use of existing works will be a key legal challenge.

Q. With AI’s decision-making processes often being a ‘black box’, how can we ensure those behind these algorithms are held accountable?

A. AI decision-making processes are often characterized as ‘black boxes’ due to their complexity and limited transparency, and the main challenge lies in establishing accountability. I think this can only be addressed via a comprehensive approach.
Transparency is the foundational step toward understanding and assessing AI accountability, as regulations can be enacted to compel organizations to disclose the inner workings of their AI algorithms and decision-making processes. In addition to transparency, the development and adoption of explainability tools and techniques will shed light on the decision-making processes of AI systems, making them more comprehensible to both experts and the general public.
Accountability can be further reinforced through independent audits of AI systems and through addressing bias, ensuring that AI-driven decisions are not discriminatory or unfair.
Moreover, legal frameworks, ethical guidelines and robust data governance practices should be integral to AI development practices. Prioritizing ethics ensures that AI aligns with moral principles, fostering trust and accountability.
Finally, public engagement and oversight are key to a democratic approach to AI accountability. Including the public in AI policy decisions ensures that diverse perspectives are considered.

“The rapid growth of AI technology, along with its ethical implications and wide-ranging societal impacts, has drawn my attention to the complex ecosystem of AI regulation.”

“In the coming decade, I think AI regulation is expected to mature and prioritize ethical considerations, sector-specific regulations, and global coordination.”

Q. Considering the rapid pace at which AI is evolving, how challenging is it to set up a robust legal framework?

A. The rapid growth of AI technology, along with its ethical implications and wide-ranging societal impacts, has drawn my attention to the complex ecosystem of AI regulation, and specifically to AI’s potential legal status. There are many ethical concerns associated with the societal impacts that AI systems can have, ranging from bias and discrimination to privacy and transparency. The question is, how can we ensure that the ethical pillars and principles upon which we build AI regulation are future-proof and also technologically neutral? If we manage that, we would have the advantage of being broad enough to adapt to changing circumstances, albeit with the risk of being so vague as to not offer meaningful guidance in specific cases. This brings us back to the Collingridge dilemma: Collingridge argued that instead of trying to anticipate risks, more promise lies in laying the groundwork to ensure that decisions about technology are flexible and reversible.

Q. At Europuls, you’re deeply involved in emerging technologies and digital transformation policy analysis. What should we know about this sector?

A. Technological innovation is widely recognized as a key driver of economic growth, and today’s developments in the design and deployment of digital technologies are no exception, helping humans reach new heights of productivity and efficiency. Given the current context driven by economic recovery, the role of technology grows even further: building a more sustainable and resilient future should be a top priority on a global scale.
While Europe’s recovery is deeply intertwined with macroeconomic policies and reforms, there is a consensus on the critical role of digital technologies and the need to foster green growth. The EU’s plans to harness the opportunities brought by digital transformation are commonly referred to as Europe’s “digital transition.” Strategic documents such as the Digital Decade Communication and the subsequent Path to the Digital Decade decision proposal articulate its vision and commitments for the coming years.
Policy analysis in the field of emerging technologies and digital transformation is dynamic, requiring ongoing research, stakeholder engagement, and adaptation to evolving challenges and opportunities. Since emerging technologies are transforming and profoundly shaping economies, societies, and industries worldwide, at Europuls we aim to conduct comprehensive policy analysis in this sector.

This image was created with the assistance of DALL·E 3

Q. How do you think Europe is positioning itself in terms of technological legality compared to the rest of the world?

A. It is important to note that Europe’s approach to technological legality reflects its cultural and societal values, emphasizing individual rights, privacy, and ethical considerations. This approach may differ from that of other regions, such as the US or China, which may prioritize different aspects of technology regulation and development. Apart from taking a proactive approach to regulating AI, Europe aims to strike a balance between fostering innovation and ensuring regulatory oversight. The AI Act, for instance, seeks to create a regulatory framework that encourages innovation while addressing the associated risks. Another important aspect is that the EU has shown a commitment to sustainable and inclusive technology development, such as initiatives promoting green AI and social inclusion in AI projects.

Q. You’re leading interdisciplinary research at the crossroads of algorithmic decision-making, ethics, and public policy. What are the primary objectives in understanding how technologies shape our worldview and influence our decisions?

A. First and foremost, the aim is to assess how emerging technologies influence our decisions and shape our worldview. This involves a thorough examination of the impact of algorithmic decision-making systems on individuals, communities, and society as a whole. One of the primary goals is to investigate potential biases within these algorithms and their consequences on fairness, equity, and justice.
Transparency and explainability are also crucial aspects in terms of making algorithmic decision-making processes more understandable and accountable to users and stakeholders. Additionally, we focus on developing ethical frameworks for algorithmic decision-making that emphasize principles like autonomy, privacy, accountability, and human dignity. Our work extends to finding ways to protect individuals’ privacy in the era of data-driven decision-making while still harnessing the power of data.
Collaborating with policymakers is another key objective. We work closely with them to establish regulatory guidance and standards for the responsible development and deployment of algorithmic systems, particularly in contexts where ethical considerations and the public interest are paramount.
Public awareness and education are also high on our agenda. We believe it’s essential to engage in outreach and educational initiatives that help individuals understand how technologies influence their decisions and lives. This empowers people to make informed choices about technology use. Our interdisciplinary approach involves collaboration with experts from various fields, such as computer science, ethics, social sciences, and law, to gain a comprehensive understanding of the intricate relationship between technology and society.
Ultimately, our overarching objective is to ensure that technology is developed and used in ways that respect ethical values, safeguard individual rights, and promote the common good, all while recognizing the potential benefits of algorithmic decision-making in solving complex societal challenges.

This image was created with the assistance of DALL·E 3

Charting the Course: A Deep Dive into AI’s Ethical, Regulatory, and Educational Horizon

Q. Being part of the European AI Alliance offers a unique perspective. How would you characterize the collective vision of this alliance for AI in Europe?

A. The European AI Alliance is a platform for stakeholders, including experts, researchers, policymakers, and industry representatives, to engage in discussions and collaboration on AI in Europe. The initiative is part of a broader European AI strategy, which aims to promote AI development and adoption while addressing ethical, legal, and societal challenges. The European AI Alliance is both a gateway to resources and a forum dedicated to all the legal, technical, and economic implications that AI presents to our societies. Documents such as the Ethics Guidelines for Trustworthy AI and the AI Act were shaped in the discussions generated by the AI Alliance. I think this platform helps bridge the gap between various domains, ensuring that AI is developed and deployed in a manner that is not only technologically advanced, but also ethical, legally compliant, and aligned with societal values. This role is pivotal in guiding the responsible evolution of AI within the European context.

Q. How is it working in the AI expert working group, especially with the aim of promoting and converging AI literacy among the youth?

A. Raising awareness and knowledge about AI is an ongoing effort that requires a multifaceted approach involving governments, educational institutions, tech companies, and civil society. AI literacy among the youth is critically important for several reasons, including future of work preparation, economic competitiveness, digital inclusion, empowering creativity and global collaboration, and most importantly, digital governance and informed decision-making. On the one hand, effective digital cooperation requires multistakeholderism. On the other hand, digital governance calls for broader and more meaningful digital youth participation in governance processes, by empowering young people and enriching frameworks for representative democracy, thus making digital governance more democratic, effective and fair. With this in mind, the youth sector as a stakeholder has a key role to play in ensuring that the sector’s needs are reflected in all topics concerning the digital transformation of areas that directly impact the youth. In this way, the youth sector can create opportunities for the development of innovative and value-based perspectives on the digital transformation, and become a co-creator of a robust, inclusive and sustainable digital future.

Q. Do you have any literature or resource recommendations for those who want to delve deeper into the topic of AI and its regulations?

A. With the vast universe of literature, research, and articles on the topic, I’m sure everyone can find something that would spark their interest. I would definitely recommend reading ‘We, the Robots?’ by Simon Chesterman, but also ‘Human Compatible’ by Stuart Russell. ‘Superintelligence’ by Nick Bostrom is a must-read as well. For those Alan Turing fans looking for a modern twist on Asimov’s tech sci-fi, ‘Machines Like Me’ by Ian McEwan is a really good one, too.

Q. For young professionals intrigued by techlaw and AI, which skills or knowledge domains would you suggest they concentrate on?

A. For young professionals intrigued by techlaw and AI, I think it’s essential to develop a combination of legal knowledge, technical understanding, and soft skills to thrive in this field. I would say this has to be a mix of a sound foundation in traditional legal studies, tech literacy and technical skills, data privacy and security, AI ethics, policy and governance, and cybersecurity. But above all, young professionals need to approach this with a view to continuous, lifelong learning, committing to staying current with legal and technological advancements. And most importantly, remain curious!

Q. Looking a decade into the future, where do you foresee AI regulation heading, and what potential obstacles might we face?

A. In the coming decade, I think AI regulation is expected to mature and prioritize ethical considerations, sector-specific regulations, and global coordination. Regulations will focus on ensuring ethical and responsible AI, sector-specific safety standards, and data privacy. However, challenges may arise due to the rapid pace of technological advancement, varying global regulatory approaches, ethical dilemmas, effective enforcement, and potential unintended consequences. Navigating these challenges will be an ongoing endeavor, demanding adaptability and responsiveness from regulatory frameworks.

Q. In light of your rich experience in various roles, how do you envision the future of artificial intelligence globally, and what are the key areas we should pay attention to?

A. The future of AI globally holds immense promise and potential, but it also presents profound challenges and considerations. To envision this future, I think ethical considerations will need to continue to be at the forefront of AI development and deployment. We should pay close attention to the responsible and ethical use of AI, ensuring that AI systems respect human rights, fairness, transparency, and accountability. Ethical guidelines and regulatory frameworks will play a critical role in shaping AI’s impact on society.
I also feel that the continuous development of comprehensive AI regulations will be crucial. We need regulatory frameworks that strike a balance between fostering innovation and managing AI risks. These regulations should be adaptable, addressing evolving technology and applications while ensuring public safety and trust.
Global collaboration on AI regulation and standards will be vital. AI is a global endeavor, and harmonizing regulations across borders can promote innovation while maintaining consistent ethical standards. International cooperation will be essential in addressing transnational AI challenges.
Last but not least, we should harness AI’s potential to address pressing societal challenges, such as healthcare, climate change, and education. Policies and investments in AI for the common good will have far-reaching positive impacts. This, coupled with public awareness and engagement, will lead to informed citizens who can actively participate in shaping AI policies and advocate for responsible AI development.

About The Author

Branislava Lovre

Branislava is a Media Expert, Journalist, and AI Ethicist who leverages her expansive knowledge and experience across various media outlets and digital landscapes.
