In this episode of AImpactful, I am excited to welcome Olivia Gambelin, an expert in AI ethics and the author of the upcoming book Responsible AI: Implement an Ethical Approach in Your Organization. Olivia specializes in using ethics-by-design to drive AI innovation and works with leaders to redefine success in emerging technologies.

As the founder and CEO of Ethical Intelligence, Olivia advises a wide range of organizations—from Fortune 500 companies to startups—on the ethical design and strategic development of AI solutions. Her new book guides readers through establishing ethical AI practices, focusing on three core pillars: people, process, and technology.

Beyond her work at Ethical Intelligence, Olivia is active in shaping AI policy and regulation. She is a member of the Founding Editorial Board for Springer Nature’s AI and Ethics Journal, Co-Chair of IEEE’s AI Expert Network Criteria Committee, and serves on several advisory boards.

Someone once advised me that introducing a book with the utmost seriousness would lend it credibility and respect. Now I realize how much it matters whose advice we follow, especially when it works against our natural demeanor. :)

Fortunately, Olivia’s smile and charisma set exactly the right atmosphere, just as it should be ahead of the book’s release on June 25, 2024.

Tune in to hear Olivia share her experiences in AI ethics and explore the field’s key themes. The conversation is full of practical advice for any organization aiming to implement responsible AI practices.

Transcript of the AImpactful Vodcast

Branislava Lovre: Welcome to AImpactful. Today, we will talk about the responsible use of AI. Our guest is Olivia Gambelin, a globally recognized expert in the field of AI ethics. Welcome, Olivia.

Olivia Gambelin: Thank you so much, Branislava. I’m excited to be here.

Branislava Lovre: Today, we will speak about your book, Responsible AI, which will be published soon. What motivated you to start writing it?

Olivia Gambelin: Well, I have always been fascinated by ethics and artificial intelligence. I’ve been working in this field really since it first made its transition out of academia and into industry. So I had all of these years of experience built up, advising and working on very specific AI ethics projects and responsible AI challenges for companies. And I started noticing that a lot of the focus when it comes to responsible AI was a little bit narrow. It was focusing specifically on technical solutions, which are important. But the problems I was encountering with the clients I was advising had to do not just with the technology, but with the simple business functions around people and processes, the ones that go on behind the scenes to support the building and use of that technology. I started seeing it over and over again: a bigger and bigger hole in the resources out there on responsible AI, with companies simply not knowing where to start. You’re not usually starting with the technology; you’re starting with the people. So that was some of the motivation that really kickstarted this entire book.

Branislava Lovre: What was the most challenging part during the writing process?

Olivia Gambelin: The hardest part for me, honestly, was figuring out what not to include in the book, because it was very easy for me to try to put all of the information I know into all of the sentences and words. I think I could have written a whole library if I had decided to. So I actually had to narrow it down and understand what preliminary, foundational knowledge was missing and needed to be there. That was the hard part, the narrowing down of the information.

Branislava Lovre: What makes this book different? And who is it written for?

Olivia Gambelin: What makes this book different is that it’s a business book, not a technical book. So if someone is looking for specific solutions around, say, how to implement fairness, privacy, transparency, and accountability technically, they’re going to be very disappointed in this book. This book is specifically written for a leadership audience, for management, an audience that is putting into place the key structures needed to execute on responsible AI. I’ve tried to write it to be as evergreen as I can get it, since the pace of AI changes so quickly. I put a lot of focus into distilling the primary impact points where leaders can translate ethical values into action. But instead of looking at specific ethical principles, I looked at the structures that are needed: the operational, procedural, and cultural structures needed to translate those values into action. So I like to think of it more as providing the scaffolding and the structure of the building that a company will need to execute on responsible AI, rather than giving a recipe for very specific solutions.

Branislava Lovre: Your book covers three main areas: people, process, and technology.

Olivia Gambelin: Yes. One of the biggest mistakes I find business leaders running into is approaching both AI and responsible AI as a purely technical problem. And yes, there is a very important technical layer to it all; AI is technology at the end of the day. But that’s really the tip of the iceberg. It’s what shows above the surface of much more going on below. So you have the tip of the iceberg, which is technology, and underneath that you have the people and the process. How I like to explain this: the people are who is building your technology. You need to have a who if you’re going to have technology. Then the process is how they are building it. You need to know how they’re building or using AI in order to actually have the AI at the end of the day. And it’s only then that you’re able to ask what they are building or what they are using. So you need to know the who and the how before you can even begin to approach the what. Because if your people aren’t in place, if they’re not trained, if the culture isn’t right, if they don’t know how to build, or if they don’t have the proper protocols and governance in place, then the what, the technology they’re building or using at the end of the day, is going to be full of problems. And it’s going to be very difficult to make any significant changes or adjustments without having the people and process in place first. So they’re all interconnected, and they’re all equally important. The reason I stress people and process is that they are often overlooked when companies are engaging in AI. So I’m trying to give voice to the side of good business practice as the world of AI develops.

Branislava Lovre: We cannot forget to mention company values.

Olivia Gambelin: How I like to describe ethics and values: they’re kind of like your directions. You have your company values, and you have your company mission. The mission is what you’re trying to achieve. Your values, the ethics of your company, are the directions that help you move towards that company mission and achieve it. Company values, along with societal, regional, and industry standards and values, exist for a reason: they’re what support you in actually achieving a mission or a goal. Ethics is really what helps you assess whether your decisions are in alignment with those values. The closer you are in alignment with those values, the closer you are to success, to achieving the mission and objectives you set out for in the first place.

Branislava Lovre: Also, we’ll have a chance to learn a bit more about ethics by design methods.

Olivia Gambelin: Absolutely. Ethics by design is my favorite side of ethics. Think of ethics as a two-sided coin: you have the risk mitigation side and the innovation side. In risk mitigation, you’re looking to protect your values. In innovation, you’re looking to align with your core values. And it’s really on that innovation side that you see ethics by design at play. What I mean is that when you are doing ethics by design, you are looking at specific ethical principles like fairness, trust, and transparency. I know these can sound like buzzwords sometimes, with how often I say them. But you’re taking these values and designing specifically for them. They’re not an afterthought. They’re not something that you have to retrofit or go back and fix. They are part of the design features, and that’s really what ethics by design means: incorporating these key values into the actual DNA of the systems that we’re building.

Branislava Lovre: So what are the unique challenges and opportunities when we are talking about implementing responsible AI?

Olivia Gambelin: Probably one of the biggest challenges is the scale at which it needs to be done. There’s a tendency toward communication silos, with departments not understanding what’s going on even across departments within the same branch of a company. For example, I’ve worked on cases where I’ve introduced colleagues to each other from different teams. It blows me away, because I would think, of course you would naturally be talking to each other. But no, some of my projects have been to find where the communication has broken down and needs to be reestablished. This is all to say, the scale at which responsible AI needs to operate, and the amount of communication it needs for support, can be a challenge for large enterprises. One of the strengths of larger enterprises is the ability to actually set standards of practice. For example, we look to companies like Microsoft, IKEA, and Salesforce as the standard setters for AI practice in their respective industries. These large enterprises have a great opportunity to set that standard in terms of expectations. Now, looking at the SME side, one of the bigger challenges that SMEs currently face is time and resources. They don’t have the same access to resources. Right now, responsible AI is still a bit of a niche expertise. So finding someone with good experience, or building out a team with good experience, can be costly, and that expertise is necessary. There are free resources out there, but to make effective change in a company, you do need that expertise. So it can sometimes be difficult for SMEs to access the expertise they need, or the training and more customized resources, since those are still a little bit more on the upper end. But the opportunity for SMEs is that it’s much easier for them to pivot. They are in a position where they can embrace the innovative side of ethics much more easily. For them, standing out by their values is a competitive advantage. So they have a great opportunity to embrace this cutting-edge technology and quickly change and adapt at the same time. I can’t say whether large enterprises or SMEs are better or worse off. There are just unique challenges and opportunities in both directions.

Branislava Lovre: During implementation, it’s important for everyone in the company to collaborate, but it’s also beneficial to have the help of an AI specialist.

Olivia Gambelin: Definitely. There’s a whole chapter in the book just about these kinds of roles. One of the things I stress is that, as a reader going through the book, it may feel like there is a lot to handle, a lot to cover. There’s a lot that needs to be done. And that’s true. There’s no beating around the bush here, no easy button out. This does require change. This does require significant work and investment for the long run. Granted, it’s work up front for very significant long-term benefits. If you try to add these very critical business tasks onto individuals who are already covering other responsibilities, it’s very easy to overload a team. It’s very easy to push people towards burnout when they don’t have both the skill set and the time necessary to execute. Whereas if you’re bringing in an AI ethicist, someone trained to do this, their sole focus is on executing, creating responsible solutions, and ensuring alignment with values. So it’s almost a relief: oh, there’s someone covering it. I have the questions in my head, I want to make sure this is being covered, and there’s someone responsible for it who will tell me if something’s not aligned or not working. That relief is huge. But also, ethicists are trained to do these tasks, trained to do this kind of critical thinking. So something that would take someone not trained in the field of ethics or responsibility, say, two weeks could take an ethicist two days. So it’s also a matter of efficiency. You’re bringing in someone who doesn’t need to learn from the ground up. They already have a good, strong understanding of what needs to be done, so they’re focused on customizing and executing rather than just trying to wrap their minds around the problem in the first place.

Branislava Lovre: When we are talking about responsible AI, what are the most challenging situations at this moment?

Olivia Gambelin: There’s a phrase in English that goes something like, “Every happy family is alike, but each unhappy family is unhappy in its own way.” I know I’ve butchered the phrase there, but it works much the same with AI. Responsible, good AI all looks fairly similar: a lot of the solutions, a lot of the structures. That’s what my book is based on, pulling out that structure, the good pieces that are common across the board, and highlighting those. That’s the most common standard practice. The challenges and significant issues, though, will look very different depending on what industry you are in, what kind of AI you’re using, and what day of the week it is. There’s always something going wrong with AI. If I were painting with a broad brush, the first significant challenge would be understanding AI as a holistic challenge. It needs the people and process behind it. It is not just a piece of technology. The adoption and use of AI, how it’s being used, depends less on the technology and more on the people and the process. You’re going to hear me repeating that over and over again: it depends less on the technology, more on the people and process. So that challenge requires a mental shift, a mindset shift in the approach to AI. That would be one of the big challenges with responsible AI right now, the need for that mindset shift. From a business perspective, adoption of AI is actually one of the biggest challenges. We have all these new generative AI tools, but companies are struggling to understand the right use cases for them. I see fewer challenges with companies creating new AI solutions and more challenges with companies actually being able to put that kind of tooling in place and use it. For example, I was talking to someone the other day whose team had just adopted Copilot. She said, “We have it. We don’t know what to do with it. It’s a cool tool, but what do we do with it?” In cases like that, if your end users don’t know what to do with the technology, they’re not going to be your users for very long. So I would say that’s actually a bigger challenge: doing that responsibly, learning how to create responsible use cases, because, like I said, the use of AI is less about the technology and more about how it’s being used by the people. If we’re looking at the ethics of it all, there are too many different challenges in different directions depending on the industry. So instead of saying, “Oh, this is the challenge,” I’d rather challenge people to open their minds to more ethics-by-design techniques: instead of tacking on these ethical values at the end and trying to retrospectively fix things, incorporating them into the very beginning, into the very foundations of their AI build and use. That’s what I’d like to challenge people with in the upcoming years.

Branislava Lovre: If someone wants to learn more about this topic, where should they start?

Olivia Gambelin: As I was saying, one of the biggest challenges is companies not recognizing that responsible AI is a holistic challenge. So my book really stresses the whole picture. The other part, and this leads into one of the tools accessible through my book, which I’ve released under a Creative Commons license, is something called The Values Canvas. The Values Canvas is a holistic management template for developing responsible AI strategies and documenting existing ethics efforts. It gives you a full-picture view of what needs to be done and what you are currently doing in terms of ethics and responsibility, and it pinpoints the high-impact points where you translate ethics into action. The whole motivation behind creating The Values Canvas was to answer the two biggest questions that companies are stuck on right now: “Where do I start?” and “What am I missing?” You can’t order five ethics in blue. So where do you start? The Values Canvas is designed to direct you towards where you need to go next. It also addresses that second question, “What am I missing?”, by giving you the whole picture and pinpointing where you’re missing key factors. The book goes into depth on it, but it’s also online at thevaluescanvas.com. You can download your own copy and use it, and there is a series of case studies being published on the website. It’s a great resource, and my hope is that it becomes a commonly used resource for people in responsible AI, creating the shared terminology we need to operate on and the ability to pinpoint what kind of solutions are needed.

Branislava Lovre: I would like to motivate everyone by saying that I have completed the canvas myself. The questions are easy to answer, and it’s a great, user-friendly tool.

Olivia Gambelin: Thank you, that’s actually great motivation for people, and it’s always helpful to hear. The canvas is free to download. If you come from an entrepreneurial background and have ever run across the Business Model Canvas, The Values Canvas was modeled with the same intention, so hopefully that gives you a reference point for what kind of tool this is.

Branislava Lovre: As we come to the end of this episode, I will remind everyone that your book, Responsible AI, will be published this month.

Olivia Gambelin: A message I would love to share, specifically as the book is coming out, is that it is part of a bigger picture. It’s a key part of the strategic side of responsible AI, a necessary part that’s been a blocker for a long time, and I’m hoping it helps get through some of that inertia and encourages people that the solutions are out there. Five years ago, this space was like the Wild West. There wasn’t a clear path towards success. It was confusing. People didn’t know where to start or what was effective. That’s not true anymore. We’ve erased a lot of that confusion. We’ve tackled it. We’ve found good ways to actually apply responsible AI and ethics, and there are set methods and templates for success. I say all of this as a word of encouragement that this is not an untamable beast or an impossible mountain to climb. It is actually quite doable, and it has significant business impact across the board. The book goes more into that. One of my favorite statistics, which I’ve been quoting every year now, comes from a study by MIT and BCG, which found that companies with holistic responsibility initiatives and strategies in place experience a 28% reduction in the failure rate of their AI, which is unheard of. That’s huge. The encouragement is that the reduction in risk that everyone’s after is attainable, and there’s a clear path to that success.

Branislava Lovre: This is the perfect message to end the interview.

Olivia Gambelin: Thank you. Thank you so much. Those were great questions.

Branislava Lovre: You’ve been watching another episode of AImpactful. Thank you, and see you next week.