Connecting technology with society to create a safer, more equitable world is at the heart of Theodora Skeadas’ work. With over a decade of experience in public policy at the intersection of technology, society, and safety, Theodora has significantly impacted how these domains interact. During her tenure on Twitter’s global Public Policy team, she managed a range of policy projects addressing human rights, disinformation, counter-terrorism, and content moderation.

As the Executive Director of Cambridge Local First, Theodora champions a robust local economy, supporting and promoting community businesses. Her previous experience includes consulting at Booz Allen Hamilton on cybersecurity and data innovation for U.S. Federal Government agencies, and advising political campaigns for the Massachusetts Lieutenant Governor and Cambridge City Council.

Don’t miss the latest episode of the AImpactful video podcast, where Theodora shares her compelling journey and insightful perspectives. We discussed the broad and multifaceted nature of technology policy, addressing critical issues such as antitrust, data localization, and misinformation. Her insights emphasize the importance of interdisciplinary collaboration and responsible AI, ensuring technology serves the public good while mitigating potential risks.

Transcript of the AImpactful Vodcast

Branislava Lovre: Welcome to AImpactful. In this episode, we will speak about tech policy, responsible AI, and ethical considerations. Our guest is Theodora Skeadas, Responsible AI and Tech Policy Advisor. Thank you for being our guest.

Theodora Skeadas: Thank you so much for having me. It’s a pleasure to be here with you.

Branislava Lovre: Can we start by explaining what tech policy is?

Theodora Skeadas: Technology policy, as a broad category, encompasses a whole range of different spaces that result from our use of technology. Primarily, this has referred to social media companies: Twitter; Facebook and Instagram, under the Meta umbrella; YouTube, under the Google umbrella; and then TikTok and others. But it has expanded to include gaming companies, dating apps, and of course, now artificial intelligence as well. So it really is a very broad range of organizations.

The kinds of policies that folks refer to in the trust and safety field include antitrust and competition; child sexual exploitation, which is a really big issue and one where a lot of organizations work very collaboratively; crisis response; data localization; and elections. Civic integrity is technically a broader category that doesn't just include elections, it also covers other civic events like ballot initiatives, but it mostly refers to elections. There are also a lot of online violence issues; media reliability; mis- and disinformation, which connects closely to platform manipulation, a very big issue and one that we're seeing a lot of in the news these days; as well as the open Internet, privacy, sanctions, and terrorism and violent extremism.

So those are some, not all, but most of the issues that we looked at in trust and safety at Twitter, for example. When I think of tech policy, I think of all of these issues. And some people specialize in a subset of them. Some people are privacy experts, or experts on researcher access to data, which is a theme that the new Digital Services Act and Digital Markets Act cover; other people work specifically on violent extremism online or child safety. People come to the space from different areas. It's a really exciting field because there are so many ways to be involved, and it's developing very rapidly. It's changing quickly, it's always in the news, and it's global. So, for those of us who feel globally connected and globally motivated, it's a way to be in touch with issues and people all over the world.

I would say that’s how I explain technology policy. And I would just nuance that to say that artificial intelligence, which is the theme of this conversation, makes all of those things easier in the sense that it lowers the barriers to engagement for adversarial actors in all of those spaces. So, for example, in the context of manipulated content in elections, it lowers the barrier to entry for actors who are looking to manipulate content and influence electoral outcomes. AI, as a vehicle in this space, accelerates all of the different issues that we were already addressing.

Branislava Lovre: Could you explain what Responsible AI means and why it’s important more than ever?

Theodora Skeadas: Artificial intelligence, it's worth noting, has been used by companies for a very long time. It's using data to predict future text, and now potentially image and video content. It's been around, for example, in the predictive texting that we've used. If you were texting a friend and typed something like “Hi, how”, then “are you” might be filled in for you. That's an AI-based system. Or, for example, in emails: for several years now, we've had predictive text in our emails. If you were writing to someone, “hi, it was nice,” then you might see the words “to meet you” suggested. Anything that's suggested like that is an application of artificial intelligence. But where we've seen generative AI proliferate in the last year is in interactive AI systems that all of us can use to generate text and image content. It's become so much more widespread because it's now easily at all of our fingertips.

AI can be used by people responsibly, but it can also be used maliciously. It's a powerful tool, and like any tool, it can be used in all directions. So, when I think of responsible AI, I think it means recognizing the incredible potential of AI to disrupt existing systems, both for good and for bad, and thinking about appropriate responses. For example, in the private sector, how can industry, how can companies, leverage AI carefully and thoughtfully in their systems? How can government respond appropriately, to make sure that it's leveraging AI in chatbots or in other ways to serve its residents and citizens? And how can civil society engage responsibly in this space? So to me, it refers to all of the different stakeholders that intersect with technology across all of the sectors, and the different responsibilities that we have to make sure that we're using this powerful tool to help people and not hurt people.
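To make the predictive-text idea concrete, here is a minimal sketch of a toy frequency-based next-word predictor in Python. It is illustrative only: the tiny corpus and the bigram-counting approach are assumptions for the example, not how any production keyboard or email system actually works.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real system learns from vastly more data.
corpus = [
    "hi how are you",
    "hi it was nice to meet you",
    "it was nice to see you",
]

# Count which word tends to follow each word.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def suggest_next(word):
    """Return the most frequent follower of `word`, or None if it was never seen."""
    counts = next_word_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(suggest_next("how"))   # "are"
print(suggest_next("nice"))  # "to"
```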

Branislava Lovre: Your academic journey is outstanding. How have your studies shaped your perspective on the ethical considerations in AI?

Theodora Skeadas: In college, I studied philosophy with a focus on ethics in times of conflict, and I wrote my college thesis on just war theory, which is broken up into three major sections: thinking about justice in the approach to war, thinking about justice in war, which is the most widely studied, and then thinking about justice after a war ends. So how do you approach, engage in, and follow a war in a just way? And there are all kinds of very complicated questions about what is referred to as “jus in bello,” or justice in war. At that time, I wasn't really thinking about AI, but I was thinking about the ethical considerations of leveraging technologies in the context of conflict. Now I think about it as the ramifications of AI in the military. We're seeing AI being used in industry, in civil society, and in academia, but we're also seeing the proliferation of AI usage in the military. There is a range of ramifications that come to mind as ones that definitely merit consideration, because they are impactful.

AI systems can be made vulnerable to adversarial attacks, where malicious actors manipulate or deceive AI algorithms to produce incorrect or harmful outputs, which can affect critical systems like autonomous vehicles, surveillance systems, and decision-making algorithms. AI can also be used to launch cyber attacks on critical infrastructure and data systems, which can disrupt essential services and cause economic damage. That's one category. I also see the implications of AI for data security and privacy. AI systems rely on very large amounts of data, including sensitive and classified information, and the security and privacy of this data become very important, because unauthorized access or data breaches can compromise national security, intelligence sources, and military operations. There's also concern around bias and discrimination. As many of us now understand, AI algorithms can inherit biases present in training data, resulting in discriminatory outcomes. This can affect areas like law enforcement, immigration, and counterterrorism, which can disproportionately impact certain groups and undermine trust in national security institutions. And synthetic media, which refers to manipulated or fabricated content, including images, video, and audio, can convincingly imitate real events or individuals. Sophisticated AI algorithms make it increasingly hard to distinguish between real and synthetic media, which raises concerns about potential deception. So those are some.

There are also issues around systemic vulnerabilities: interconnected AI systems create vulnerabilities that adversaries can exploit around critical infrastructure, communication networks, and command and control systems. There are weaponization and arms races: advancing AI can lead to the development of autonomous weapons systems, which can potentially change the dynamics of warfare, and their uncontrolled proliferation can really fuel an arms race and raise concerns about their use. Then there is disinformation and propaganda, which is a huge category of issues but connects, for example, to election integrity, and loss of control is another issue. There are a lot of potential ramifications or applications of AI in the national security space. And when I think about my undergraduate work in philosophy, to me it connects very closely to the uses and concerns of AI in national security applications.

Branislava Lovre: After your studies, you’ve built an impressive career working in diverse fields, from global tech to government roles. Could you share how your experiences have influenced your approach to tech policy and strategy?

Theodora Skeadas: Yes, and I would maybe start by highlighting how each of these roles plays into this space. For example, at Twitter, we relied on algorithms that made recommendations about what content to respond to. Content that was deemed violative of certain policies, for example, was removed. And this relied on trained systems, systems trained on very large data sets, that made decisions that impacted communities. For example, I submitted a public comment to the Oversight Board last year about the term “shaheed,” which is heavily moderated in the Meta ecosystem. It shows the intersection of policy and technology in tech policy, because the policy that Meta has is enforced by algorithms that facilitate the takedown of a lot of content.

Certainly, technology companies have a real role to play, but so does civil society. For example, last year I worked at the National Democratic Institute looking at the full range of reports on online violence against women as it connects to women's political participation. We created a huge database, and out of that database we summarized recommendations for tech companies on how they can better address the issue of online violence against women. We met with and advised all of the major social media platforms, and some of the AI companies as well, on how they could better address this issue. It shows that civil society has a role to play in identifying issues that individuals, especially vulnerable individuals, are experiencing on the ground, and then translating that into actionable recommendations that companies can act on.

When I worked with government, I consulted for the U.S. federal government on a range of technology issues, including platform manipulation and mis- and disinformation, to help the government understand the scope of the issue and take action. I also briefly had a role in state government, where I was rolling out a pilot program for the state to try out different chatbots to better serve constituents in Massachusetts. So I've had the privilege of working across all of these sectors, and I would say my primary impression is that all stakeholders have a role to play. All of these experiences have reinforced for me the idea that technology policy and strategy doesn't come from one stakeholder alone. It requires all of the stakeholders.

Branislava Lovre: Reflecting on your achievements, which ones are the most important to you?

Theodora Skeadas: Thank you for that question. One interesting project that stands out to me, and it connects to the previous question about how my experiences across different sectors have shaped my approach to technology policy and strategy, was a project that I supported at Twitter called the Twitter Moderation Research Consortium. This was an initiative that looked to share data on state-backed information operations with independent researchers. We launched a partnership initially with three research institutions: the Stanford Internet Observatory, which is based in California in the United States; Cazadores de Fake News, which covers Latin America; and the Australian Strategic Policy Institute. We launched this program in the summer of 2022, planned to expand it, and subsequently did expand it to researchers all over the world. Everybody was invited to participate. We shared this data, special data that we wouldn't necessarily have shared in the same way with other institutions, with these individuals and institutions so that we could better understand the scope and nature of government-backed manipulation campaigns.

We worked with the Stanford Internet Observatory on a piece that they ultimately published about an influence operation that actually promoted U.S. interests abroad. The work could not be directly attributed, but it was widely attributed to the U.S. government. The New York Times covered the published report, which was called “Unheard Voice,” in an article titled “Facebook, Twitter and Others Removed pro-U.S. Influence Campaign.” It was notable because it was the first time that an influence operation promoting U.S. interests abroad had been discovered and taken down from social media platforms. As a result, the Pentagon ordered a review of its overseas social media campaigns. What I think is really special about this initiative is that it shows the collaboration between sectors: Twitter, a for-profit company, engaged in data sharing with independent researchers at academic institutions, who published the work; a journalistic entity, The New York Times, covered it; and that ultimately resulted in a policy change and a review of operations at the U.S. federal government level, in the Pentagon. You can see how all of these stakeholders are intersecting in ways that are quite pivotal, and none of this work could have happened without the others contributing. I think it's a cool project because it shows how much you need every stakeholder to be active and contributing.

Taking a step back, multi-stakeholder collaborations are the ones that I'm particularly excited about. I really loved managing the Trust and Safety Council because it was a collaboration between industry and civil society. More recently, I have been working with a nonprofit called the Partnership on AI, and we put together an AI policy forum in London in October where we brought together global policymakers across all sectors to talk about some of the really big technology issues affecting AI. We talked about issues around AI governance and safe model deployment, AI safety policy, and others. I really enjoy working at the intersection of different sectors, and the roles and experiences that I've had that are really at the intersection of sectors and communities are the ones that are most meaningful to me.

Branislava Lovre: Can you discuss a specific initiative or project you’re currently working on that shows how the state is integrating AI?

Theodora Skeadas: Earlier in the fall, I was working for a state government agency, the Executive Office of Technology Services and Security in Massachusetts, where I was working with the team to identify possible use cases for AI in state government agencies. It's a really interesting question, because governments in general have a really high threshold for engagement, in the sense that there really is very little, if any, room for error. You cannot deny citizens or residents any public benefits because AI algorithms have recommended specific beneficiaries of public programs. So when thinking about AI as implemented in government, it's really important to distinguish appropriate from inappropriate use cases. Not all use cases are appropriate for government. Specifically, the kinds of use cases that we wanted to stay away from were those that recommended decisions to state government authorities about which individuals were and were not eligible for benefits, because there was too much room for error there and it was beyond the threshold of acceptability. But what we did think was appropriate was a chatbot for different government agencies that would help nudge users toward services that they might be able to benefit from. For example, if you think about the Executive Office of Veterans Services, which works with veterans: if a model is trained on data about what kinds of services veterans are already benefiting from, it can help recommend to them other services that they might benefit from. That's the kind of nudge that we were thinking about implementing in chatbots.
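As a rough sketch of the "nudge" idea described above, the snippet below suggests services that co-occur with the ones a user already has. The service names and usage records are purely hypothetical, and this is only one simple way such a recommendation could be backed, not the state's actual design.

```python
from collections import Counter

# Hypothetical, anonymized records of which services each person already uses.
usage_records = [
    {"housing_assistance", "job_training"},
    {"housing_assistance", "mental_health_support"},
    {"job_training", "education_benefits"},
]

def suggest_services(current_services, records, top_n=2):
    """Suggest services that frequently co-occur with the user's current services."""
    co_occurrence = Counter()
    for record in records:
        if record & current_services:              # shares a service with this user
            co_occurrence.update(record - current_services)
    return [service for service, _ in co_occurrence.most_common(top_n)]

# Someone already enrolled in housing assistance might be nudged toward, e.g.:
print(suggest_services({"housing_assistance"}, usage_records))
```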

Branislava Lovre: What ethical considerations are important when implementing AI solutions in public policy?

Theodora Skeadas: The ethical considerations that are important when implementing AI solutions in public policy are varied. They include transparency and explainability. It's really important that systems be transparent in their operations and their decisions, because policymakers and the public need to understand how AI solutions are arriving at their conclusions and recommendations. Another piece is privacy and data protection. Like I said earlier, AI systems rely on very large data sets, which can include sensitive personal information, so it's important to ensure that data is collected, stored, and used in a way that respects privacy rights and complies with data protection laws. I mentioned this earlier, but bias and fairness are really critical. AI systems can and do inadvertently perpetuate and amplify biases that are present in the training data, and it's really important to actively identify and mitigate biases in these models to ensure fairness and avoid discrimination against certain groups.

Accountability and responsibility are also really critical. It's important for organizations that are deploying AI to create clear lines of accountability so that it's very clear who is responsible for decision making. There should always be a human in the loop: AI systems should not be making decisions on behalf of people, they should be supporting people in making decisions. It's also important to include the public in this development, because we want to ensure that a diverse range of stakeholders have their voices represented, since they're all affected by AI systems. And sometimes those of us who are most vulnerable are in many ways most affected but least empowered to address outcomes. So inclusiveness and participation are critical.

There are other issues like sustainability. The environmental impact of developing and running AI systems is huge. It's not being talked about as much, but the energy consumption involved in running and sustaining large AI models is immense, so thinking about sustainable practices to minimize the ecological footprint of AI technologies is important. Then there are longer-term impacts, including potential changes in the labor market, economic implications, and how AI can shape public discourse and democracy. These are things that we should all be thinking about. And lastly, I'd point to regulatory compliance. We're increasingly seeing regulatory advancements like the EU AI Act and, in the U.S., the executive order on AI, and companies and solutions will need to comply with the laws, regulations, and policies that are being put in place.
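As a small illustration of the "human in the loop" point above, here is a sketch in which an AI system only drafts a recommendation and a named human reviewer makes the actual decision. The case, reviewer, and confidence values are invented for the example and do not describe any real deployment.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    suggestion: str
    confidence: float

def ai_recommend(case_id):
    # Stand-in for a model call; a real system would derive this from case data.
    return Recommendation(case_id, "refer applicant to a job-training program", 0.72)

def finalize(rec, reviewer, approved):
    """Nothing is decided until a human reviewer explicitly approves or rejects."""
    return {
        "case_id": rec.case_id,
        "decision": rec.suggestion if approved else "returned for manual handling",
        "decided_by": reviewer,            # accountability sits with a person
        "model_confidence": rec.confidence,
    }

rec = ai_recommend("case-001")
print(finalize(rec, reviewer="caseworker_jane", approved=True))
```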

Branislava Lovre: How important is it for different countries to align on policies in terms of global cooperation?

Theodora Skeadas: I think this is a huge question. I would say it's critical, because the key word here is interoperability. If countries are creating different standards, it is hard for companies to comply with very different standards. We are seeing a fragmentation of regulatory frameworks; for example, in the U.S., with the challenges that our federal government is facing, we're seeing more regulation at the state level. But that means there are potentially up to 50 different pieces of regulation that companies need to comply with. The more interoperable a system is, the easier it is to implement and respect globally. And I'll screen share very quickly just to show: in October, the Partnership on AI had a policy forum in London, and one of the panels was on this issue specifically, on governing globally, international standards, trade, and interoperability. What it says here is that AI tools reach across borders, and so will their potential impacts. As nation states grapple with how to govern this technology, multilateral bodies are also seeking solutions for safe, responsible, and ethical development and deployment. What is the right balance between domestic and international policy frameworks and technical approaches? It was a conversation specifically on this issue of interoperability. Increasingly, we are hoping for an alignment on policies and standards, but where there is not alignment, it does create complications for companies looking to comply.

Branislava Lovre: What role does public trust play in the adoption of AI technologies?

Theodora Skeadas: I would say the public plays a big role. One engagement option that comes to mind for me is public comment periods. Usually, governments will have public comment options where they're looking for feedback from experts, or really anybody, but especially experts, on different pieces of regulation that they are thinking of putting out. So that's a really specific way to engage. I would say that public trust is important for a lot of reasons. Trust facilitates legitimacy and acceptance, so it acts as a form of social license; it grants governments the legitimacy to engage with transformative technologies like AI. Without trust, citizens might perceive AI as intrusive, biased, or even potentially threatening, which can hinder its adoption. And like I said, we want the public to engage in the process, so collaboration and participation can boost trust, but they can also shape AI outcomes so that they are more relevant and more user centric. It improves the AI that's leveraged in government because it is done with public feedback; it's a better outcome as a result. It also can mitigate potential risks; it's important for risk mitigation and resilience. Issues like algorithmic bias, privacy concerns, and misuse can all be at least somewhat mitigated when the public is engaged in the process of producing and disseminating AI in government.

Branislava Lovre: At AImpactful we always try to showcase at least one tool. What do you recommend we try?

Theodora Skeadas: Partnership on AI is a nonprofit that I have been working with since September of last year, and the organization put out guidance for safe foundation model deployment. It is actually open for public comment, so if anybody has feedback that they want to share, please do; it's very much welcomed. The context for this tool, which is meant to be an interactive tool, is that we've seen incredible advancements in foundation models, also known as large language models or general-purpose AI, and there's already recognition that this technology can both help us and hurt us. For all of the reasons that I mentioned earlier, it's very critical that organizations deploying these systems think carefully about how to deploy them. This is meant to be an interactive tool that generates custom guidance. As you can see, there are a few steps, and then it outputs guidance based on the inputs that you provide.

First, you choose your foundation model. You have three separate options: specialized narrow purpose; advanced narrow and general purpose; and paradigm-shifting or frontier. Each option is defined, and there are definitions for the others as well. Let's say we start with narrow purpose; you can select it here. Then you choose the type of release. There are four options: open access; restricted API and hosted access; closed development; and research release. Again, you can learn more about them by selecting the text that appears beneath each option. Let's say we start with restricted API and hosted access, and we select it here. The third question is, is it an update? If it is, you select yes; otherwise, you select no. Let's say no, it's not an update, so we won't select this option. Then step four is show guidance, and the guidance appears here.

There are different categories of guidance: research and development, pre-deployment, post-deployment, and then societal impact. Under each of these is information about how the organization can proceed with care. Just as an example, for research and development, you can scan for novel or emerging risks, assess upstream security vulnerabilities, and establish risk management and responsibility structures for foundation models. And the guidance continues onward. These are very specific recommendations that have been thought out by a multi-stakeholder working group, and you can learn more information here. There are some key takeaways, as well as a list of the supporters, the people who were involved in this effort, and ultimately an opportunity to send feedback. I wanted to share this tool because I think it's really useful, and I definitely recommend folks check it out if you're interested in learning more.
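To summarize the flow Theodora walks through, here is a rough sketch of the tool's inputs and output shape. The option and category names follow her description; the guidance strings are abbreviated paraphrases, not the tool's actual text, and the update-handling rule is an illustrative assumption.

```python
MODEL_TYPES = {
    "specialized narrow purpose",
    "advanced narrow and general purpose",
    "paradigm-shifting or frontier",
}
RELEASE_TYPES = {
    "open access",
    "restricted API and hosted access",
    "closed development",
    "research release",
}
GUIDANCE_CATEGORIES = [
    "research and development",
    "pre-deployment",
    "post-deployment",
    "societal impact",
]

def show_guidance(model_type, release_type, is_update):
    """Return guidance grouped by category for the selected options."""
    if model_type not in MODEL_TYPES or release_type not in RELEASE_TYPES:
        raise ValueError("unknown option")
    guidance = {category: [] for category in GUIDANCE_CATEGORIES}
    # Paraphrased examples from the research-and-development guidance mentioned above.
    guidance["research and development"] += [
        "scan for novel or emerging risks",
        "assess upstream security vulnerabilities",
        "establish risk management and responsibility structures",
    ]
    # Illustrative assumption: a first release gets extra pre-deployment attention.
    if not is_update:
        guidance["pre-deployment"].append("plan evaluations before a first release")
    return guidance

print(show_guidance("specialized narrow purpose",
                    "restricted API and hosted access",
                    is_update=False))
```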

Branislava Lovre: What advice do you have for young professionals and students interested in entering the field of tech policy?

Theodora Skeadas: My number one recommendation is to think of this space as a community. There are a lot of us who are involved in the ecosystem, who are connected to each other and in conversation with each other, so you can join the conversation. For example, All Tech is Human, where I am an affiliate, is an incredible group of over 7,000 people globally who are all interested in and passionate about responsible technology, including responsible AI. Joining a group like All Tech is Human and getting involved in the different working groups and writing initiatives that the group runs is a really good way to plug directly into the heart of the responsible technology movement globally. There are other ways to contribute too, like writing public comments and then sharing those on social media, which helps elevate one's role in this space and is a way both to contribute and to share. And then I would say reading and learning; there's a lot to learn. So follow newsletters and follow accounts on social media, whether it's LinkedIn, which has become more popular in the last year, or Twitter, or other channels like TikTok and Instagram. There are all kinds of channels that people use to learn about this space, and reading is a really good way to educate oneself as one enters the field of technology policy.

Branislava Lovre: Perfect message for the end of this episode. Thank you so much for your time.

Theodora Skeadas: My pleasure. Thank you so much for inviting me to join. It was a pleasure to speak with you.

Branislava Lovre: You've watched another episode of AImpactful. Thank you, and see you next week.