A thousand people spent months imagining what journalism looks like when AI runs the show. They wrote scenarios, flew to Italy, argued in workshops, and produced a report with six possible futures. One year later, three people sat down with an agentic AI system and did the whole thing again in two weeks.
Same project design. Same steps. A hundredth of the cost.
David Caswell and Shuwei Fang led both editions of AI in Journalism Futures, with the Tinius Trust backing the 2025 version through its representative Nicklas Stavnar. The 2024 edition was all human. The 2025 edition was all machine. And the results were close enough to make a lot of people uncomfortable.
Branislava Lovre spoke with David about the 2024 project in an earlier episode. This conversation picks up where that one left off.
David walks through the full process: how the team created a thousand AI personas, each with detailed backgrounds and professional histories, and had them write scenarios for the future of news. How AI judges selected the 40 strongest submissions using the same criteria human judges had used the year before. How 20 digital twins of real experts joined the selected personas for 31 simulated workshop conversations. And how the agentic system then analyzed everything and wrote the report on its own.
Something worth knowing before you press play. Branislava and David recorded this conversation online. That part is completely real. It was Branislava and David’s idea to do something different with the production, and the AImpactful team made it happen. With the help of AI, they took real faces and built animated versions of them, placed them in a generated studio, and gave the conversation a visual form it wouldn’t have had otherwise. Every step was guided and approved by humans.
What we explore:
- How a year-long project with a thousand participants was replicated by agentic AI in two weeks
- The full agentic pipeline: personas, digital twins, simulated workshops, automated analysis
- The difference between automating tasks and automating workflows
- Why this comparison caught attention well beyond journalism
- What a hybrid human-AI edition could look like in 2026
Episode details:
- Duration: 30 minutes
- Guest: David Caswell, author of AI in Journalism Futures
- Host: Branislava Lovre, co-founder of AImpactful
- Format: Animated video podcast with full transcript
Transcript of the AImpactful Vodcast
Branislava Lovre: Welcome to AImpactful. Today we’re joined by David Caswell, one of the leading voices exploring how AI is reshaping journalism. David is one of the authors of AI in Journalism Futures, a scenario-based project that maps out possible futures for news and information. The 2024 edition involved around a thousand people contributing scenarios and more than 60 of them at a workshop in Italy. The 2025 edition took a different approach. Agentic AI systems built the entire project, from the personas to the analysis to the final report. We’ll talk about what that process looked like, what the findings tell us, and what all of this means for journalists, audiences and the future of trust in news.
David Caswell: You’re very welcome. Thank you for inviting me.
Branislava Lovre: The first AI in Journalism Futures report came out in 2024, and it was a major undertaking. What was the moment when you realized there had to be a 2025 edition?
David Caswell: It’s hard to say exactly when. I mean, the 2024 project was a very large, expensive, and time-consuming undertaking. It involved a thousand people contributing scenarios and more than 60 people at this big workshop in Italy last year. And when that concluded, I kind of assumed that was the end of that phase of work.
But then in probably around February or March of 2025, it became apparent that these agentic systems, these agentic AI systems, were becoming incredibly powerful and were especially good at working with kind of synthetic personalities, you know, personas, digital twins, this kind of thing. And at that point, I was working with the Chinese agentic system called Manus and was incredibly impressed with it and was doing a project with that.
And so when we got into the summer and OpenAI released the GPT-5 model and then the agent mode that goes with that model, it kind of occurred to me that, well, hang on a second. We can recreate everything we did last year. The thousand people we can recreate with personas, the experts that we brought in, we can recreate with digital twins, we can recreate the workshop with conversations between the personas and digital twins. We can do all of the analysis using the agentic mode. We can write the report using the agentic mode. In fact, we can do the whole project from end to end purely with agentic AI.
So that’s what we decided to try. We worked through the project over about two weeks in August of 2025, and it worked better than we expected. We ended up with a product that was maybe not quite as high quality as the manual one, but it was at an equivalent level of quality.
Branislava Lovre: Not everyone watching might be familiar with this project. Can you walk us through what AI in Journalism Futures is, and how the whole process works?
David Caswell: These are both scenario development projects. There’s a field of scenario planning, which is quite common for governments, large corporations, or militaries trying to understand the range of possible outcomes in a situation. What we were trying to do was identify the range of possible outcomes when AI is very well established in journalism and is essentially mediating our information environment.
So this is a very uncertain future and nobody really knows how it’s going to play out. But what we wanted to do was use these scenario planning techniques to sort of examine the range of plausible scenarios for how it might occur, what might happen.
And so that was the objective of both projects, and the design of both projects was identical, with the exception that the 2025 one was purely agentic and the first one was purely human. Other than that, the process was exactly the same.
And it combined two different techniques for scenario planning. So the first technique is essentially a quantitative version where you solicit a very large number of small scenarios from a large number of people. So we got about a thousand of those. And then the other technique is a smaller group in a very formal, carefully structured workshop, kind of working through a set of very specific steps towards the scenario outcome. And so both of these projects blended those two techniques, one purely with humans and the other one purely with agentic AI.
Branislava Lovre: You’ve already mentioned that the 2025 version was built entirely by agentic AI. Can you walk us through how that worked, step by step? How did you set up the process when AI agents were building the scenarios?
David Caswell: What we did in the agentic version is essentially follow the exact same design as with the human version the previous year. And the reason that we did that was because we wanted the comparison between the manual version and the agentic version to be exact. We wanted it to be an apples to apples type comparison. So we needed it to be structured in the same way. So that was the orchestration of the agentic version of the project.
And so what we did was we started off by creating a thousand diverse AI personas. So these are essentially descriptions of people that don’t exist, but they’re descriptions of plausible people in a lot of detail. So they have hobbies, they live in a place, they have some history, they have some background, they have personal characteristics, they have professional experience. They have skills and talents, a very comprehensive view of these made-up people. And they’re not randomly created. They’re created to be in or adjacent to the media innovation space. What we’re trying to do with those AI personas is recreate the thousand people that submitted scenarios the previous year.
So then we had those personas, and then we had each of them write a three or four hundred word description of what they thought the AI mediated information ecosystem would look like in a 5 to 15 year horizon. And that was what we’d asked the humans to do the previous year. And that produced a thousand scenarios, small scenarios.
We had a judging process with AI judges. Last year we had human judges; this year we had AI judges. They took some very specific criteria, the same criteria as we used the previous year, and picked the 40 best scenarios from the thousand that the personas wrote.
Then we took those 40 scenarios and the 40 personas that had written them, and we added in 20 digital twins. Digital twins are basically written descriptions of real people in extreme detail. To build one, you find a person, ideally with a lot of online activity, and do a kind of deep research analysis of that person. You end up with a very, very detailed report about that person, how they approach things, and how they interact. That then becomes the basis of a synthetic, AI version of that person to participate in the project.
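The pipeline described so far — generate a thousand personas, have each write a short scenario, then let AI judges shortlist 40 — can be sketched in a few lines. This is an illustrative sketch only: the `write_profile`, `write_scenario`, and `score` functions are hypothetical stand-ins for calls to an agentic model (the project used GPT-5’s agent mode), and none of these names come from the actual project.

```python
import random

def write_profile(i):
    # Stub for a model call that writes a detailed, plausible persona
    # in or adjacent to the media-innovation space (hobbies, history,
    # location, skills), as described in the interview.
    roles = ["local editor", "media researcher", "product manager", "founder"]
    return {"id": i, "role": roles[i % len(roles)]}

def write_scenario(persona):
    # Stub for the brief the human contributors received in 2024:
    # 300-400 words on the AI-mediated information ecosystem,
    # on a 5-to-15-year horizon.
    return {"author": persona["id"], "text": f"scenario by {persona['role']}"}

def score(scenario):
    # Stub for an AI judge applying the same criteria the 2024
    # human judges used; here a random score stands in.
    return random.random()

random.seed(0)
personas = [write_profile(i) for i in range(1000)]
scenarios = [write_scenario(p) for p in personas]
shortlist = sorted(scenarios, key=score, reverse=True)[:40]
print(len(shortlist))  # 40 scenarios, and their authors, advance
```

The point of the sketch is the shape of the orchestration: each stage consumes the previous stage’s output, which is what the agent mode had to coordinate end to end.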
Branislava Lovre: So you created the personas, selected the best scenarios, brought in digital twins, and then ran an AI version of the workshop. What did that actually look like in practice?
David Caswell: And that’s just like it sounds. It was discussions, using the same instructions as we used with the humans the year before, and it worked in the same way: AI personas talking to other AI personas and digital twins, with the same objectives. There are transcripts. It was just a normal conversation, and that was very interesting, because you could see a lot of ideas that were surfaced by the interaction between the different personas.
So, you know, I think that produced creativity that wouldn’t normally exist in an AI persona or a digital twin, because of the interaction and its unexpected characteristics. Those were very rich conversations. We ended up with 31 different one-hour conversations between groups of personas and digital twins, in a very structured way.
And then with all of that material, those transcripts and scenarios and so on, there was an analysis phase where we did all this with the GPT-5 agent mode and then the agent mode wrote the report. We ended up with a 40 page PDF and six scenarios and that report was broadly equivalent to the report from the previous year.
And so it wasn’t just a report. It had been that whole process, the entire interaction leading up to the report that the agentic mode had succeeded in producing. So that was a very good learning because we had this human version of exactly the same thing to compare it with.
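The simulated workshop phase can be sketched as a simple turn-taking loop. Again, this is a hypothetical sketch, not the project’s actual implementation: the `chat` function stands in for an in-character model call, and the grouping of participants into sessions here is arbitrary.

```python
def chat(speaker, history, instructions):
    # Stub for an in-character model response; the real sessions were
    # generated by an agentic model following the same structured
    # instructions given to the human workshop in 2024.
    return f"{speaker} responds to {len(history)} prior turns"

def run_session(participants, instructions, turns=12):
    """One simulated workshop session: participants speak in rotation,
    each turn conditioned on the transcript so far."""
    transcript = []
    for t in range(turns):
        speaker = participants[t % len(participants)]
        transcript.append((speaker, chat(speaker, transcript, instructions)))
    return transcript

# 40 scenario-author personas plus 20 digital twins, grouped into
# 31 structured sessions (the group size of 4 is an assumption).
participants = [f"persona-{i}" for i in range(40)] + [f"twin-{i}" for i in range(20)]
sessions = [run_session(participants[s:s + 4], "2024 workshop instructions")
            for s in range(31)]
print(len(sessions))  # 31 transcripts feed the analysis phase
```

Conditioning each turn on the accumulated transcript is what lets ideas surface from the interaction itself, the effect David describes, rather than from any single persona in isolation.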
Branislava Lovre: Out of everything that came out of this project, what are the findings that surprised you the most, or that you think people should really pay attention to?
David Caswell: So, I’d encourage people to read the report, but one of the big insights for me came from two of the scenarios: one around the tokenizing of trust, and another around the potential use of nuanced interactions, like emotional affect, as part of personalization.
And these were interesting because they both related to this phenomenon where there are things in the human world that are very difficult to be specific about. Trust is like that; reputation is like that; emotional affect is very difficult to be specific about. Everybody has their own opinion. It’s very subjective.
Whereas what these scenarios pointed out, correctly, was that AI can interpret these kinds of things very specifically. And what that means is you can now generate signals around characteristics that had previously been implicit, including signals for marketplaces, for tracing, for analytics. Essentially, AI enables these previously implicit characteristics to become explicit: very obvious, measurable, and marketable. That was a new insight, and it came out of two of the scenarios that were produced.
Branislava Lovre: Those are fascinating insights. Now, this is a question a lot of people will have. When it comes to human oversight in the agentic version, how did that work?
David Caswell: The only human oversight in the agentic version was in the creation of the prompts. We very, very intentionally provided zero human oversight through the whole process. In fact, not only did we do no oversight, we took the first version of every prompt. We didn’t run prompts multiple times and then cherry-pick the best results. We wanted this to be very representative of the capabilities of GPT-5 agentic mode, so we used the first pass of every prompt: what you’d call one-shot use of AI.
And so the only oversight, if you can call it that, was in the prompts. And the prompts themselves were very heavily influenced, as I said, by the design of the human project in the previous year because we wanted to copy that project exactly. So a lot of the instructions in those prompts were basically copy and pasted from the instructions that we gave the humans the previous year.
In the report for the human version, the 2024 version, the only AI we used was in the executive summary. We wrote the report manually and then used AI to create the executive summary, and even then we edited that summary a little bit before publishing it in the report.
In the 2025 version, it was the exact opposite. So in the 2025 version, not only did the agentic process do the project and write the report, but it did that entirely with no human input at all. And then in the report, we added a single page, a preface that just sort of gave a human introduction to what had happened. So it was the exact opposite of the previous year. The previous year there was basically only one page of AI in the whole project. And in the 2025 version, there was only one human produced artifact, one page of human produced text in the entire project. It was the exact opposite. And that was because we wanted to make these things equivalent. So the comparison could be made.
Branislava Lovre: That’s a remarkable difference compared to the 2024 process. Give us a sense of scale. How long did the whole agentic version take, start to finish, in terms of time and in terms of expense?
David Caswell: The agentic version was about a hundredth of the cost and time, right? And so that is two orders of magnitude. So that is dramatic.
I think the quality, again, as I said earlier, and I’m biased, but the quality of the agentic version was slightly less than that of the human version. But it was achieved at a hundredth of the cost and in a hundredth of the time.
So I think it’s very clear that these agentic systems have an enormous range of use cases in any knowledge production field.
I should mention here that scenario development is a relatively easy knowledge production field, because for a long time there are no objective criteria as to whether you got it right or wrong. That’s just the nature of scenario development. Things like investigative journalism have much higher standards, and these systems are probably not quite ready for some of those complex investigations yet.
But it’s very clear that they’re in the ballpark. They’re rapidly becoming competitive in these complex multitask situations. There were probably hundreds and hundreds of different tasks that these systems had to organize and do in this process to create this project and report. And so these systems can clearly do that to a very high degree.
Branislava Lovre: You mentioned that scenario planning is a relatively easy knowledge production task. But more broadly, what role do you see agentic AI systems playing in journalism?
David Caswell: I think most of the investment in AI that most publishers have made so far has essentially been in automating tasks, or making tasks more efficient. Things like headline suggestion: people write headlines, and machines can provide ten suggestions in a few seconds, so headline writing becomes more efficient. Or summarization, or search engine optimization, or copyediting, these kinds of things.
I think what agentic AI offers is the potential to automate workflows. That’s essentially what this was: the automation of a very complex, human-originated workflow, with personas and digital twins. And it was automated quite successfully.
And so I think if you apply that into the journalism world, you could completely see simple investigative journalism being done by agentic systems. That’s quite plausible. You could see, for example, the interview process and processing being done by agentic systems. You could see a certain amount of fact checking, including quite complex fact checking where there are multiple steps. There are some fact checking exercises where the fact checking is almost like a little investigation on its own. You could see these systems quite easily doing that.
I think a lot of the more complex news gathering where you don’t just sort of go and get a press release or a PDF and do something with it, but you really have to make connections and assumptions and read between the lines and assess. Those kind of complex newsgathering steps, I think those are in reach. So I think there’s a lot of these multitask workflows, including workflows with many, many tasks that are very close to being within reach of these systems.
Branislava Lovre: You’ve painted a picture of what’s possible on the production side. But when we think about audiences, communities, trust, verification, and credibility, what should journalists and media leaders be paying the closest attention to right now?
David Caswell: If you look at the usage and the very, very rapid adoption of models like ChatGPT, you can see it in the numbers. There’s a very good report from the Pew Research Center in the United States, for example, from September of 2025, that gets into this. You can see that people are becoming more trusting as consumers of these models. And that includes the willingness to allow these models to do tasks on behalf of the user, which is an agentic function. So it’s not just: I prompt it with a question and get a response. It’s also: are people willing to let these things do tasks for them? And according to that Pew research, a significant portion of people are.
So I think a lot of that is trust in, essentially, the models. I think it’s a little different when you’re talking about trust in agentic systems. And I understand that from a consumer’s point of view, the difference between a model and an agentic system is kind of blurred; they don’t really care about the details. But with agentic systems, you have to trust not only that they can do the individual tasks, but that they can coordinate the tasks. That’s the orchestration function.
And so I think things like memory, the ability of these agentic systems to stay focused on the same project and the same goal, not to drift off and do other things, and to stay accurate doing multiple tasks one after the other, not just accurate in each task itself, that’s important as well. But that’s a new kind of risk, and we need new ways to assess it.
Branislava Lovre: That’s a really important distinction. I’m curious about how people in the industry have been responding to your project. What kind of reactions have you gotten? Anything that surprised you or questions you didn’t expect?
David Caswell: Yes, there’s been some very interesting, very unexpected stuff. One unexpected thing is how interested the world of people who do scenario planning was in this: governments and militaries and so on. We didn’t do it because we were interested in automating scenario planning. We did it because we were interested in AI in journalism. That’s our focus, right?
But people who do scenario planning for other things were very interested in the technique. And so that’s been a little distracting because, you know, my work is journalism and news and societal information. So I’ve had to take a little time out here and there to talk about this project with people from other disciplines, about using this technique in other ways. So that’s been an interesting reaction.
I think that in the journalism community, the biggest impact is the comparison. It has sort of made people aware. I think there was a period of time, maybe a year or so, where people had started to get comfortable and thought, well, maybe this is it, maybe the performance has plateaued.
And the fact that we could do this in a way that enables a direct comparison, on a complicated, expensive, time-consuming project, between a human approach and an agentic approach: that really got a lot of people’s attention. That was our objective in doing the project, and it really resonated. I think it woke a lot of people up and caused a lot of people to take a closer look at what was happening with agentic systems.
And remember, it’s only really been since about February or March of 2025 that agentic systems could do this kind of thing. The first one was this Chinese system, Manus. Now Google have their agent mode, OpenAI have theirs, and so on. But this is all happening in 2025, and the thing that has enabled it is the reasoning models, which are very new as well; they’ve only been around for less than a year. So I think this was a bit of a wake-up call for people who are tracking this.
Branislava Lovre: It sounds like this project resonated well beyond the journalism community. So what happens next? Is there a 2026 edition in the works, and if so, will you go with agentic systems again or change the approach?
David Caswell: This isn’t a certainty, but one of the lessons I took from doing both the human version in 2024 and the agentic version in 2025 is how each process, humans versus machines, has different strengths and different weaknesses.
So the humans are much better at sort of sense making and narrative and storytelling and all that stuff. The machines are much better at being systematic and complete and thorough, but they’re not as good at the storytelling and sense making.
And so I’ve got this growing conviction that if you could combine the best of both systems in a hybrid way, you could get a substantially more powerful result that would be far better than either human or pure machine versions of this.
Now, that would be difficult, because in the agentic project we copied the manual project step by step. You couldn’t do that for a hybrid project. You would have to design it from scratch, using each of the humans and the machines for their respective strengths. But I think that’s a very valuable experiment.
And there are others in the news innovation and kind of journalism futures community who are quite interested in that. So it would be a very difficult and challenging design undertaking. But I think for AIJF or AI in Journalism Futures 2026, if it happens, it will be a hybrid system that uses both humans and the machines.
Branislava Lovre: A hybrid approach sounds like a really exciting next step. Before we start wrapping up, is there anything about the report we haven’t touched on? Contributors, key messages, something you’d want to make sure people know?
David Caswell: As I mentioned, the human version had about a thousand people that were involved. And about 60 of them were deeply involved. You know, four days in Italy at a workshop kind of involvement.
Whereas in the agentic version, there were only three of us. There was myself; there was Shuwei Fang, the colleague I worked with on the human version the previous year; and because the project was sponsored by the Tinius Trust, we had a representative from the Trust involved as well, Nicklas Stavnar.
And so it was really only the three of us that were involved. I mean, this is part of the problem: not enough people were involved. And even so, the three of us are just readers of the report like everybody else. We’re a little more familiar with the prompts, but in terms of the results and the output, we’re just readers as well.
Branislava Lovre: That really puts things into perspective. Looking a few years ahead, what do you think will change most in the relationship between journalists, their audiences, and AI?
David Caswell: Significant change is kind of baked in already. There’s a question about how long it’s going to take to play out. It might be two years, it might be five years or even longer. But I think it’s going to be dramatic.
I think there’s two things that change in terms of the audience experience or the consumer experience.
One thing is the sheer quantity, the amount of information that’s available. And that’s because of newsgathering. AI can gather relevant, personal, interesting information at enormous scale.
And the other is the creation of experiences of information using AI. We already see this with summaries of videos and audio and so on. But that process of adapting information to different experiences, which some people call liquid content, I think is unstoppable as well.
And so you have this combination of much, much more access to information, not all information, but much of our information environment, combined with a much more customized or personalized or accessible experience of information. And that’s a very powerful combination.
If we get that right, people can be dramatically better informed and more comfortable with information than they are now. So it could dramatically improve people’s ability to navigate the information space and access information.
Branislava Lovre: Last question. If there’s one thing you’d want someone watching this to take away, what would it be?
David Caswell: The big suggestion I would have is to engage with these agentic systems in large, ambitious projects. You can only really understand how powerful these systems are, and what they’re not good at, the range of capabilities they have, if you do relatively sophisticated projects. If you try to assess these systems using something like summarization or copyediting, you won’t see the difference between them and the earlier models. You have to do more complex projects to understand that difference.
Branislava Lovre: David, thank you so much for your time and for sharing all of this with us.
David Caswell: You’re very welcome. This was fun.
Branislava Lovre: Thank you for watching AImpactful. Follow us, and see you in the next episode.