A journalist’s social media account gets hacked. A human rights defender faces waves of coordinated attacks. A newsroom becomes the target of AI-generated disinformation.

In this episode of AImpactful, we’re joined by Elodie Vialle, who brings unique expertise in defending journalists and human rights defenders online. Through her work with PEN America and Harvard’s Berkman Klein Center, she bridges journalism, human rights, and technology.

From her early days as an editor-in-chief and radio columnist, she saw how digital threats silence voices. Now, she works to protect journalists and human rights defenders from digital attacks.

Key Insights:

  • How AI creates fake content to discredit journalists
  • Digital surveillance and spyware threats
  • Practical security steps, from risk assessment to two-factor authentication
  • Gender-based disinformation as a weapon

Through her work, she tackles:

  • Why journalists can’t simply leave social platforms
  • Building better reporting systems on social media
  • Creating emergency response channels for attacks
  • Supporting at-risk users and communities

Her solutions help journalists and human rights defenders face coordinated online campaigns designed to silence them.

Episode Details:

  • Duration: 21 minutes
  • Guest: Elodie Vialle – Journalist and Tech Policy Advocate
  • Host: Branislava Lovre
  • Format: Video podcast

Made for: Journalists, human rights defenders, and media professionals who need real solutions to digital threats. Learn from someone actively shaping safer digital spaces for at-risk communities.

AI Usage Notice: In preparing this introduction and the episode transcript, AI tools were used with careful human oversight and editing. We believe in transparency regarding the use of AI in our work.

Transcript of the AImpactful Vodcast

Branislava Lovre: Welcome to AImpactful.

Today we will talk about AI, digital security, and the protection of journalists and human rights defenders. Our guest is Elodie Vialle, a journalist and tech policy advocate. Welcome, Elodie.

Elodie Vialle: Thanks so much for having me.

Branislava Lovre: How did you become interested in connecting journalism, technology, and human rights?

Elodie Vialle: I used to be the editor-in-chief of an online media outlet dedicated to social innovation, a radio columnist on French radio, and a TV journalist. Then, I worked as a consultant for newsrooms internationally, advising them on their editorial and digital strategy. At some point, I began to wonder, “What’s the point of chasing the next digital innovation and implementing it within the media industry if people in many countries aren’t even free to express themselves?”

That realization led me to focus on defending press freedom and working on human rights. In 2017, I joined Reporters Without Borders as the head of the tech desk, where my role was to identify violations of freedom of information online and to create and develop solutions. During that time, I started working on online harassment of journalists and noticed that journalists from various countries and contexts faced similar issues: intimidation, insults, and disinformation.

In fact, online harassment of journalists is often a tactic used in disinformation operations. I began to build a framework for addressing this issue by conducting research, monitoring the global situation, and working on solutions.

I also collaborated with PEN America in the U.S., which had developed one of the most comprehensive resources for journalists facing online abuse. My role there involved adapting these resources to different contexts and engaging internationally.

Over time, I developed an international and holistic approach to combating online harassment of journalists. This included conducting research for platforms and the tech industry, creating programs for journalists and newsrooms, advising newsrooms, especially in France, and engaging in tech advocacy.

Currently, I’m also active in policy work, particularly in Brussels with the Digital Services Act enforcement and in Geneva, where I aim to amplify the voices of journalists, human rights defenders, and marginalized communities who are attacked online. My overarching goal is to bridge journalism, human rights, and technology.

Branislava Lovre: What challenges do journalists most commonly face today regarding digital security?

Elodie Vialle: Internationally, we observe a rise in digital authoritarianism. There’s a clear connection between attacks on democracies, democratic processes, and the rise of digital tools designed to undermine them. Journalists are on the frontline of these attacks. They face intense surveillance, as seen in the scandals surrounding Pegasus spyware, which is just one of many examples.

Journalists also face hacking attempts, especially on social media, where they are targeted for scams. Recently, there have been numerous cases in Europe where journalists’ accounts were hacked, mainly due to their large followings. Civil society organizations and journalist associations are also very concerned about doxing.

Doxing is when people search for personal information about someone online and use it to discredit them. For instance, they may find an old photo or something trivial you did as a student. This information is then pieced together and shared widely, often by abusive trolls, to create damaging narratives. This kind of orchestrated operation aims to discredit the journalist and is a significant threat.

Unfortunately, this threat disproportionately affects women journalists and journalists from marginalized communities. Addressing this issue is critical, not just because of the harm it does to those individuals, but because it’s a direct threat to freedom of expression, press freedom, and democratic processes.

Branislava Lovre: How does AI affect the safety of journalists and human rights defenders?

Elodie Vialle: To me, AI represents just a new iteration of what I’ve been working on over the last few years. What we observe is that it’s never been easier to create fake content—audio, video, text, entire stories, and narratives—specifically aimed at discrediting journalists.

We’re seeing a rise in gender-based disinformation online, which has become a new weapon in the larger disinformation ecosystem. For instance, a few months ago, ahead of the Slovak parliamentary election, a journalist named Monika Tódová was harassed online. A fake video was produced to fabricate a conversation between her and a politician. Even though the video was completely fake, it reached thousands of users on social media.

This case, monitored and documented by civil society organizations, shows how the reputation of journalists can be undermined as part of a broader disinformation campaign targeting democratic processes. This is why the use of AI in disinformation is a significant concern for us.

At the same time, we are exploring AI’s potential to better protect journalists online. For example, chatbots could be used to support journalists, as we can’t always be available to assist them directly. We are also considering how AI might help detect and monitor coordinated inauthentic behavior and disinformation campaigns, providing insights into who is behind these attacks.

Many journalists and nonprofit organizations supporting journalists are actively exploring these opportunities right now.

Branislava Lovre: What are the most important steps journalists can take to protect themselves online?

Elodie Vialle: First, you need a plan. When you’re online, it’s helpful not only for journalists and human rights defenders but for all users to have a structured approach. How do you build a plan? The first step is to start with a risk assessment.

Who is your potential adversary? Who might try to attack you or undermine your reputation online? It could be anyone, so it’s essential to identify potential adversaries by creating personas: a specific person, a political group, or even a company.

Next, you need to understand their power. It’s different when you’re attacked by a random person versus a political group, a private company, or even a state actor. While even a single stalker can be dangerous, the impact varies based on the power and resources of the adversary.

Then, consider their intent. Are they trying to damage your reputation? Are they attempting to censor your next report? They might try to hack your account, report your social media account as spam to get it blocked, or launch a discrediting hashtag campaign against you.

Finally, you need to think about how to prevent these attacks or at least mitigate their impact. Realistically, we can’t perform miracles. People sometimes ask if we can stop online harassment, and I have to say, “No, it’s not entirely possible.” But what we can do is make it harder for your adversary to achieve their goals.

That’s why we work with journalists through digital safety trainings, focusing on strong, unique passwords and separating personal and professional online spaces. Many outside the journalism community don’t realize that social media is a necessary tool for our work—it’s not just for connecting with friends. Journalists can’t simply leave social media; it’s integral to our work.

This is a perspective I also share with social media platforms when engaging them to support journalists as users of these platforms.

Branislava Lovre: What advice do you have for journalists regarding digital security?

Elodie Vialle: This is the main focus of my work. First, as a journalist, you should go through all your social media accounts and enable two-factor authentication; it’s a much stronger way to secure your accounts online. Without it, the risk of being hacked is much higher.

Secondly, really work on passwords. I recently spoke with a journalist who realized, “Oh my God, I should use a unique password for each platform.” Using a distinct password for every account is essential, so a breach on one platform doesn’t compromise the others.

So that’s my takeaway, not only for journalists but for everyone watching this. Social media platforms can actually do a lot more to protect journalists, human rights defenders, and other at-risk users who are more likely to be targeted.

PEN America, a nonprofit supporting writers, journalists, and artists globally, has published several reports with specific recommendations for social media platforms on how they can improve user safety. I worked on one of these reports, No Excuse for Abuse: What Social Media Platforms Can Do Now to Protect and Empower Their Users. We made concrete and simple recommendations, like improving the reporting feature.

Right now, if you face abuse, reporting it is complicated. You can’t access a simple dashboard to see all your submitted reports, so there’s no way to document the situation. During an attack, it could be helpful to delegate account access to someone you trust without giving them full control.

Our recommendations are based on interviews with user communities and journalists. We’ve shared these ideas with platforms, and some of these features are now required under the EU’s Digital Services Act (DSA). Platforms must do risk assessments, establish crisis mechanisms, and conduct due diligence to understand their users’ needs and risks online.

Another project I’ve worked on involves creating escalation channels with social media platforms for civil society organizations to protect vulnerable users. Basically, there needs to be someone to respond when there’s an attack. In some documented cases, hate on social media has even led to violence or death, so it’s essential that harmful content is moderated quickly in such situations.

In the European Union, major platforms are now obligated to create these points of contact and crisis protocols, which is vital for civil society. I’m part of the Coalition Against Online Violence, representing over 80 nonprofit organizations. One of our main projects is advocating for these escalation channels to better support users online.

Branislava Lovre: What are the key obstacles you face in your work?

Elodie Vialle: There are quite a few obstacles, honestly. When we engage with social media platforms, there’s often resistance because they lack incentives to implement these changes. This is why we also engage with regulators, investors, and other stakeholders.

In working with journalists and media organizations, we face challenges because they’re already dealing with so much—legal threats, potential arrest, physical danger—and online harassment just adds another layer. Many journalists are now traumatized and, unfortunately, some are even considering leaving journalism because of the economic, political, and social pressures they face.

That’s why we use a holistic approach, which includes psychological support. Many organizations lack the resources to handle these challenges fully, so we try to implement solutions that don’t add to their financial or operational burden.

Branislava Lovre: How can social media platforms improve to better protect journalists and human rights defenders? What can we do as a first step?

Elodie Vialle: A critical first step is to include civil society organizations in these discussions. There’s a lot of focus now on AI standards and principles, which is great, but enforcement is what really matters, and that depends on civil society involvement.

To work effectively, user communities need to be integrated into data processes, like labeling datasets and content moderation. But this requires funding. Currently, tech vendors are selling content moderation solutions, yet the communities most affected—at-risk users—aren’t included in the decision-making.

I represent journalists and human rights defenders, but all user communities need to be involved transparently in these processes. Civil society organizations are willing to help, but working at scale requires resources.

Branislava Lovre: That was the last question for today. Thank you, Elodie.

Elodie Vialle: Thank you.

Branislava Lovre: You’ve watched another episode of AImpactful. Thank you, and see you next week.