In the fast-evolving realm of Artificial Intelligence, it is essential to have thought leaders who can adeptly navigate the complex interplay between technology, ethics, and societal impact. Elizabeth M. Adams stands out as one of these influential figures.

Elizabeth M. Adams

“Biased data can lead to discriminatory outcomes, particularly in areas like housing, education, and employment.”

With over two decades of experience at the intersection of business, technology, and society, Elizabeth has carved a niche as a Responsible AI influencer, recognized by Forbes as one of the “15 AI Ethics Leaders Showing The World The Way Of The Future.” Her work spans a broad spectrum, from leading large-scale tech initiatives in Fortune 500 companies and government organizations to scholarly pursuits that meld theory with impactful results.

Elizabeth’s expertise lies in bridging the gap between technical and non-technical realms, forging alliances that translate complex theories into tangible outcomes. Her dedication to the ethical dimensions of AI is evident in her involvement in numerous projects focusing on AI ethics, governance, and the importance of diversity and inclusion in technology.

As an advocate for Responsible AI, Elizabeth actively participates in shaping policies and frameworks that guide the ethical development and deployment of AI technologies. Her approach is holistic, encompassing not just the technological aspects but also the profound societal implications of AI.

In our conversation with Elizabeth, we delve into her perspectives on the current and future landscape of AI, exploring themes like ethical tech design, the significance of diversity and inclusion in AI, and the challenges of aligning AI strategies with organizational values and societal expectations. The ensuing dialogue provides a rich exploration of how AI is shaping our world and the crucial role responsible innovation plays in this journey.

Q & A

Q. How would you describe the current landscape of AI’s impact on society?

A. The current impact of AI on society is profound and far-reaching, leading many of us to be more deeply engaged with technological advancements than ever before. While this presents exciting opportunities, it also raises serious concerns for those in technology deserts. AI’s transformative influence spans industries from healthcare to entertainment, revolutionizing operations through efficiency and innovation. However, concerns about bias, privacy, job displacement, and ethical considerations have come to the forefront alongside these benefits. Research indicates that AI has affected different groups in distinct ways, often disproportionately affecting historically marginalized communities without avenues for recourse.

In light of these dynamics, organizations must modernize their AI practices. This involves engaging a more comprehensive range of stakeholders and adhering to standards and regulations that encompass responsible AI and innovation principles. These principles include fairness, accountability, explainability, and trustworthiness. By adopting such measures, leaders can effectively address this technology’s promises and perils while striving for a more equitable and responsible future.

Q. For those unfamiliar with these terms, could you explain what AI Ethics and AI Governance mean and why they’re important?

A. AI Ethics refers to the ethical principles guiding the development, deployment, and use of AI technologies; in practice, these principles become integral to a Responsible AI culture. AI Governance involves establishing policies, regulations, and frameworks to ensure Responsible AI practices. Both are pivotal in mitigating potential risks, addressing biases, managing societal impacts, and engaging a broader group of stakeholders in the Responsible AI space.

This image was created with the assistance of DALL·E 3

Q. Can you provide a simple explanation of what “Ethical Tech Design” is and why it matters to everyday technology users?

A. Ethical Tech Design encompasses the creation of technology that strongly emphasizes ethical considerations, user well-being, inclusivity, and broader societal impacts. This approach ensures that technology is aligned with the genuine needs of users, upholds their rights, and prevents any potential harm or biased outcomes. For example, when designing technology, incorporating features that prioritize the well-being and safety of individuals with diverse physical abilities prevents harm to those who are more vulnerable.

Furthermore, integrating employees’ lived experiences as a significant factor in the design and development of AI fosters a culture of trustworthiness and empathy. This practice bridges the gap between technology and humanity, reminding us that our innovations profoundly impact people’s lives. It’s essential to avoid getting swept up in the rush to be the first to market, as this can sometimes result in overlooking the intricate ways humans are affected.

Ethical Tech Design protects against AI decisions driven by malicious intentions and relies on thorough scientific research, critical thinking, and a forward-looking perspective. This involves considering the potential impact on the workforce, including upskilling and reskilling, to ensure that technology adoption doesn’t inadvertently lead to negative consequences. By embracing these principles, Ethical Tech Design sets a course for technology to be innovative and considerate of the broader human context.

Q. You’re involved as an expert in important projects about diversity and inclusion in Artificial Intelligence. Can you explain to us why diversity and inclusion are essential in this field?

A. My primary focus centers on promoting broader stakeholder participation. Diverse thought and inclusion are significant in themselves, but the catalyst that turns these principles from a mere topic of discussion into an actionable approach with tangible outcomes is active participation from various stakeholders. My research focus has been shaped by countless conversations surrounding diversity and inclusion, many lacking substantial follow-through.

This realization propelled me to direct my doctoral research towards Leadership of Responsible AI. This focus aims to comprehend how we can cultivate more ethically sound AI solutions by involving a more comprehensive group of stakeholders, mainly employees from groups negatively impacted by AI. It’s crucial to translate intent into action, and one effective way to gauge participation is by asking pertinent questions: Who played a role? In what capacity? What level of influence did they wield? How did their contributions enhance the outcome?

Moving beyond the theoretical, it’s clear that biased data can lead to discriminatory outcomes, particularly in areas like housing, education, and employment. By actively involving broader employee stakeholder groups, we can harness diverse perspectives. This diversity of input holds the potential to mitigate bias in AI systems, thereby fostering a more equitable and inclusive environment through active participation.

“Addressing AI bias demands careful data selection, algorithmic adjustments, and consistent monitoring.”

“The potential of AI to positively transform lives motivates me to address challenges, contributing to a more harmonious technological future.”

Q. From your perspective, what would an ideal framework for inclusive, responsible innovation look like?

A. I devised a conceptual “Leadership of Responsible AI” framework in my doctoral research. It begins by identifying appropriate stakeholders and missing perspectives guiding responsible AI design. This includes involving employees in creating informative artifacts like policies and guidelines. The framework emphasizes modernizing processes with responsible innovation, design science research, human-centered design, and a comprehensive approach that evaluates societal impact. An ideal framework ensures unbiased, fair, equitable, inclusive, explainable, trustworthy, and transparent AI systems.

Q. Given your extensive research into AI biases, can you share with us some of the key insights?

A. My research has revealed seven tensions faced by organizations addressing AI bias. These tensions stem from diverse perspectives on Responsible AI, varying notions of Responsible AI responsibilities, reconciling conceptual ideals with practical implementation, maintaining upskilling amidst innovation, balancing the pace of innovation with ethical accountability, leadership versus employee contributions, and integrating evolving Responsible AI principles. Addressing AI bias demands careful data selection, algorithmic adjustments, and consistent monitoring.
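To make the monitoring piece of that answer concrete, here is a minimal, hypothetical sketch of one common audit check: comparing the rate of favorable model decisions across demographic groups and computing a disparate-impact ratio. The data, group labels, and the "four-fifths" threshold are illustrative assumptions, not part of Elizabeth's research or framework.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the favorable-outcome rate for each group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    1 for a favorable decision (e.g. loan approved) and 0 otherwise.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    Under the common 'four-fifths' rule of thumb, a ratio below 0.8
    is often treated as a signal worth investigating further.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, decision) pairs.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)     # {'A': 0.75, 'B': 0.25}
ratio = disparate_impact_ratio(rates)  # 0.25 / 0.75 ≈ 0.33
print(rates, round(ratio, 2))          # flags a disparity to review
```

A check like this is only a starting point; as the answer above notes, it must be paired with careful data selection and ongoing review rather than treated as a one-time test.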

This image was created with the assistance of DALL·E 3

Envisioning the Future: Integrating Ethics and Humanity in the AI Revolution

Q. You help leaders align AI strategies with organizational values and societal expectations. Can you provide an example of how this might look in practice?

A. Aligning AI strategies with values and societal expectations entails crafting guidelines for AI development, fostering diverse teams, and routinely auditing AI systems for bias and fairness. It also necessitates cultivating empathy within leaders, helping them recognize the importance of technology working inclusively for all individuals.

Q. What inspires you to continue working in this complex and rapidly evolving field of AI Ethics and Tech Inclusion?

A. My inspiration stems from translating research into practical applications and witnessing leaders and employees championing Responsible AI. The growing awareness, vocal advocacy, and proactive education in Responsible AI are beautiful indicators of progress. The potential of AI to positively transform lives motivates me to address challenges, contributing to a more harmonious technological future.

Q. As we look towards the future of work, what are the major impacts of AI that you anticipate?

A. In the evolving landscape of work, we’re on the brink of witnessing a transformation driven by the automation of routine tasks. This shift will necessitate a comprehensive approach to reskilling and upskilling the workforce. Alongside this transformation, a new wave of job roles will emerge, specializing in AI management, ethics, and development to ensure the responsible evolution of technology.

Moreover, I envision the emergence of a unique role: that of a human evangelist. This individual, much like myself, embodies a distinctive blend of qualities. They possess a deep affinity for technology, yet their primary goal is cultivating an environment where everyone can thrive. This role is akin to being a unicorn—a rare and precious combination of characteristics. The human evangelist will play a pivotal role in ushering in a fresh generation of tech enthusiasts who view the world through a different lens with a mission to nurture an ethos that prioritizes the well-being and aspirations of all individuals.

As we look ahead, I aspire for this concept to become the new norm. Organizations of the future should wholeheartedly embrace these roles—roles that uphold the fundamental principle of placing humans at the forefront of any technological innovation. This approach ensures that our progress remains aligned with our shared humanity, guiding us toward a future where every innovation and advancement places humans at the heart of the equation.

About The Author

Branislava Lovre

Branislava is a Media Expert, Journalist, and AI Ethicist who leverages her expansive knowledge and experience across various media outlets and digital landscapes.
