Welcome to another episode of AImpactful! Today, our guest is Dr. Courtney Radsch, a renowned journalist, academic, and advocate at the forefront of technology, media, and human rights.

With extensive experience in tech policy, internet governance, and media viability, Dr. Radsch delves into the significant influence of big tech on journalism, addressing the challenges and opportunities that arise from this interplay. She shares her research findings, highlighting the critical issues faced by journalists and media organizations.

Dr. Radsch discusses the impact of platforms on journalism, the role of AI in news dissemination, and the importance of ethical considerations in tech policy. Her experiences and research provide a comprehensive look at the current state of media and technology, emphasizing the need for thoughtful regulation and innovation to ensure a fair and informed society.

Join us as we explore the intricate relationship between technology and journalism, and gain valuable insights into navigating the evolving media environment.

Transcript of the AImpactful Vodcast

Branislava Lovre: Welcome to AImpactful. In this episode, we will speak about misinformation, disinformation, news media, and big tech companies. Our guest is Courtney Radsch. She holds multiple positions, all related to technology, media, and human rights. Welcome, Courtney.

Courtney Radsch: My pleasure. Thanks for having me.

Branislava Lovre: Let’s start with the basics. How does big tech impact the journalism industry?

Courtney Radsch: So a handful of tech platforms control almost the entire underlying infrastructure on which journalism is built as an industry. We need a financially viable journalism industry, but it has been very difficult, as we've seen over the past two decades, for news media to create viable business models in the new economy. And this is not a problem of their own making. It is because not even a handful, just a couple of firms, exert outsized influence over key parts of the production chain. These platforms control not only access to audiences but also the publication tools. Think about where you get your email services, your web hosting, your cybersecurity services: these companies are at the root of those. And a duopoly controls the ad tech infrastructure on which this entire digital economy is built. They control the ad servers, the ad exchanges, and the process by which that transaction occurs. So there are no real alternatives for news media. You have to go where your audiences are, and now, instead of being out on the open internet, audiences are largely in these walled gardens created by platforms and apps. That is the first condition we saw develop over the past several years, this transformation into walled gardens, and these companies are able to extract monopoly rents from it.

We know from behavioral studies, for example, that people act differently on platforms when there is news versus when there is not. News adds value. If you're searching for baby formula, you certainly want to see the news about the recall or the dangers posed by the latest import from whatever country. If you're searching for information about your local community, chances are some of that information is going to come from local news organizations. So news organizations have provided this value, but they've never been able to recapture it. And efforts by tech companies to narrow down the concept of what value news provides are really dangerous, because of course we know there is this huge problem with mis- and disinformation. In previous eras we called it propaganda, so call it what you will. There is a real problem of low-quality, dangerous, problematic content online.

Part of that is because of the logic of platforms like Facebook, social media engagement platforms that encourage people, and not only people but also bots, to create and post content that is engaging. What's engaging? Something that gets people to stay on or interact with that content. And we know from behavioral psychology and other fields that the content most likely to be engaging is likely to be more negative, to act on your emotions, potentially to be more extreme. So the logic of the information environment has exacerbated polarization and this negative trend toward engagement. Meanwhile, all of this is driven by a profit incentive. It is more profitable for the tech platforms to keep you in their walled gardens, so they have used addictive design features, everything from infinite scrolling to dark patterns, to keep you on those platforms and serve you ads that keep your eyeballs there.

Branislava Lovre: Disinformation and misinformation are being mentioned more and more often. But how, or more precisely why, are we in this situation?

Courtney Radsch: Because influence operations now have this amazing infrastructure to microtarget you, or target specific groups or types of people, with messaging that they hope will sell their product, whether that product is a politician or a pair of shoes. What we've seen is that countries around the world have adopted influence operations as a core part of their political processes. According to the Oxford Internet Institute, at least 70 countries have weaponized information operations. We know that PR industries around the world are using these tactics. Content farms are putting up low-quality content, and this is only going to get worse with generative AI.

Meanwhile, you've got all these beleaguered journalists and news organizations who have seen their economic model devastated, who are really struggling to figure out how to remain viable, trying to compete in this information environment. And it's very difficult, because you are held hostage by the content moderation systems on these platforms, which are opaque and for which there is no accountability. If your content gets taken down while you're covering the latest protest or issue in your community, well, news has a shelf life, and by the time you figure out how to get your content restored, if you can at all, it's probably too late. So it's a really challenging information environment, where the platforms are siphoning off value from the news organizations and news organizations are having to reorient journalism to the logic of platforms.

You see this, for example, in the types of positions that news organizations have created and the ones they've gotten rid of. Now you need search engine optimization and data analytics experts; you can't afford to pay the reporter who used to cover the city council. You can see it in the type of journalism people do. And you can see it in fact-checking. We think of misinformation fact-checking as a real positive. But because there is so much crud on social media, so many inaccuracies, so much fake and problematic information circulating, fact-checking ends up reorienting editorial independence and editorial decision-making toward fixing and addressing the problematic claims circulating online, rather than putting attention on, say, more important things like what that local official or those local agencies are doing. So there is this editorial distortion effect. And it's becoming harder and harder for quality information, for journalism, for actual human-generated information to get through the information glut that we're in. I think the generative AI turn we are seeing right now is going to make that even more difficult, with even more information, and journalism having to struggle to be seen and to compete not only with the logic of the platforms but also with all of this generative content that can be created so easily.

Branislava Lovre: What are some potential solutions to the problem of misinformation and disinformation?

Courtney Radsch: One of the problems with the situation we're in right now is that misinformation is profitable. So I think we need a few different types of interventions to address the widespread problem of disinformation, misinformation, and propaganda.

First, we need to separate publication and dissemination platforms from the ad tech infrastructure and from data collection. Right now one platform does it all, and that creates really problematic incentives. And here in the United States we really need a privacy law, because we have no federal privacy law governing what type of data can be collected or what kind of information can be datafied in the first place: all of our behavioral and biometric data that are now being collected and turned into data points by these companies, then packaged and sold to advertisers of all types. So: a national privacy law, and breaking up companies that own the entire value chain, so that you have more separation and introduce more friction.

I also think we should look at common carrier laws for some of these core infrastructural platforms, like search. Google controls 90% of the search market in the U.S. and in most countries. That is a huge amount of influence over the public sphere, over what people know and how they know it. Breaking that up, or making these platforms common carriers, would mean they could not self-preference their own products, their own information, their own news outlets if Google decides to get into news.

For things like search and social media platforms, I think we also need to impose data portability requirements, which would allow people to better control their data and to move across services. I don't know if the people listening to this podcast or watching this episode remember when, in the U.S., we could not take our telephone number to a different carrier, until a law made number portability a requirement. We need the same sort of thing here, because the network effects of social media are what make it valuable: it's who is on there. So, data portability, or again, requiring that these companies be treated as common carriers, which would require them to publish their terms of service and apply them equally. It would prohibit self-preferencing. It would prohibit discrimination. And that could also help address the disinformation issue, because it would reduce the economic incentives to spread it, and it would require that the terms of service be applied equally, so that if you violate them, even if you're a well-known person or a politician, you don't get special treatment.

Branislava Lovre: When we talk about the impact of technology on our society and media, you have done a lot of research that has been published. What were the most interesting things you discovered?

Courtney Radsch: Well, that's a great question, thanks. One of the most surprising things I found recently came after I left the Committee to Protect Journalists, where I had been the advocacy director for many years, in part because I wanted to work more on technology policy issues. I saw that there were these recurring things happening to journalists around the world: their websites, and their content and accounts on digital platforms, were being taken down, including in countries where they were often the only, or one of the only, independent critical media. I saw this with 100% Noticias in Nicaragua, which was really the emblematic case. So I decided to research it: why are news sites in third countries being taken down under the Digital Millennium Copyright Act, which is essentially U.S. copyright law that we've exported internationally through our trade agreements and through the terms of service of platforms, which have determined how to implement it technologically? What I found is that this is happening all over the world. I hadn't realized how extensive the weaponization of copyright, and now of privacy law because of the GDPR and the right to be forgotten, had become. We've seen public relations firms, influence operations, and state-based actors targeting independent media by leveraging automated notice-and-takedown systems, and other features of these techno-legal systems, to get legitimate news content taken down.

Meanwhile, we see that news organizations are being held hostage to content farms that siphon off their content: take it, republish it, repackage it, and then earn the revenue on it. It's a really challenging dynamic, because although the platforms are aware of this, news is such a small part of their market cap that it really doesn't impact them. So the question is how to change that balance so that they have a responsibility to address those shortcomings.

I have also been studying content moderation issues for a long time, so I knew a lot going into some research I did for a report for Internews, for which I interviewed and surveyed more than 300 journalists. Even so, it was striking to hear from journalists all over the world how content moderation by Google and Facebook, and to a lesser extent Twitter (most media are primarily on those first two platforms), has this huge impact on their bottom line and on their ability to build trust with their users. How they have no way to get in touch with these platforms. How even news organizations that are fact-checking partners, or that have received some benefit from a platform, a partnership or a training or whatever, couldn't get verified on the platform. And then how acute all of that becomes during a crisis.

Branislava Lovre: What is the role of legal initiatives?

Courtney Radsch: So I think in the near term there are a few promising types of legal and regulatory initiatives aiming to rebalance the playing field between big tech and news media. We see this in news media bargaining codes like the ones in Australia and Canada, as well as the one proposed here in the U.S. So there is interest in this around the world, and it's basically designed to make big tech pay news publishers some part of the value generated through the use of news on their platforms. Critics like to call it a link tax, but in fact it requires these companies, which create tremendous value through the use of news snippets, images, and headlines, to compensate the journalists and journalistic organizations that go out and create that content. As my colleague Anya Schiffrin likes to say, right now it's like a muffin shop full of beautiful muffins with blueberries and nice crispy tops, and the tech companies come in, take all of the tops, and sell just those. All the publishers are left with are the crumbly bottoms, with nothing juicy in them, and they're somehow supposed to sell the muffin bottoms. That doesn't work. So these news media bargaining codes are trying to rebalance some of that.

And I think the most important thing about them is that they create a mandatory forum for negotiation between news media and tech platforms, which is very important as we enter the AI era, because we know how valuable news content is both to large language models, as inputs to AI systems, and at the application layer, for example in search. Search is not going to be useful without local news, without news content. So we really need to get the framework right. I think copyright holds some promise there. We need to make sure we don't consider the unfettered crawling and scraping of content, including content from behind paywalls, to be fair use, especially when it is done by the largest, wealthiest companies in the world. So we have to get the copyright issue right, though I don't think that will be the perfect solution. This technology is new, but it is also similar to what came before, so we really need to understand the similarities to past technologies, to past advances. That's why we need to look at how we have governed our information and communication systems in the past. The railroad, the telegraph, radio, television: all of these we treated as common carriers and imposed rules on accordingly. So I think we need to resurrect that as part of the solution.

Branislava Lovre: The last question for today: what are some good resources to learn more about big tech and news media?

Courtney Radsch: I think two foundational reports that we published at the end of last year at the Open Markets Institute's Center for Journalism and Liberty would be really helpful. One is Saving Journalism, which looks at how the United States has governed its information and communications ecosystems and takes those lessons into the present day to suggest policies that would help rebalance the playing field between big tech and media. The second is a report we did on AI in the public interest, which breaks down what we mean by AI and looks at how we govern its various components in the public interest. I think that's been really useful, because we can talk all we want about safety issues way down the value chain, but we need to make some really important structural interventions, and this report lays those out. I also mentioned the research on the weaponization of copyright to censor media, which I did for the Centre for International Governance Innovation. If you want some crazy examples of the unintended consequences of how technology companies implement legal and regulatory standards when there is a lack of oversight and review, that might be interesting. And then maybe the last one would be the AI and disinformation paper I wrote for the OSCE a couple of years ago, which looks specifically at how state-aligned influence operations leverage AI to target and harass journalists and media outlets around the world, and suggests some things we can do to mitigate and address that. And for those interested in news media bargaining codes, which are one of the issues du jour, we have a global tracker on journalismliberty.org, the center's website, where you can see all of the countries around the world that are considering or have passed this legislation and what its status is. So check those out; maybe you can put them in the show notes.

Branislava Lovre: You’ve watched another episode of AImpactful. Thank you and see you next week.