Laurens Vreekamp has a way of saying things that stay with you.
Like when he talks about journalists who say they don’t want to use AI, and gently points out that they already are. Every time they unlock their phone. Every time they use Google Translate or let Spotify choose the next song. AI is already woven into daily life.
So there’s the practical side: how we use AI in our daily work. But there’s an equally important task that often gets overlooked: how we report on AI itself.
Here, Laurens keeps it simple. Good AI reporting requires the same thing as any quality journalism: asking who benefits, seeking diverse voices, and thinking critically about claims that sound too good, or too scary, to be true.
Laurens is a journalist and educator. He founded the Future Journalism Today Academy and has trained thousands of journalists across Europe, not just in how to use AI tools, but in how to think critically about them.
In this episode, we talk about the stories that get lost when AI coverage is driven by press releases. About the people who should be quoted but rarely are. About what it really means to use AI responsibly when you’re understaffed and overstretched. And about why the saved time matters less than what you choose to do with it.
What we explore:
- Why AI coverage so often follows Big Tech’s script, and how to write your own
- The voices missing from most AI stories
- What “responsible use” looks like when resources are tight and deadlines are real
- The quiet revolution: using AI to make space for deeper journalism, not just faster content
- Why Laurens believes journalists must become “beacons of trust”, and what that actually means
Episode Details:
- Duration: 18 minutes
- Guest: Laurens Vreekamp, journalist, educator, and founder of Future Journalism Today Academy
- Host: Branislava Lovre
- Format: Video podcast with full transcript
Transcript of the AImpactful Vodcast
Branislava Lovre: Welcome to AImpactful. AI is everywhere: in our news feeds and in our daily lives. But how do we actually talk about it in the media? Today, we’ll explore what makes good AI reporting, how to move beyond the hype, and why the human voice behind every story is still the most important.
Our guest today is Laurens Vreekamp, a journalist and educator. He works with newsrooms and creators around the world, exploring how artificial intelligence and new technologies are changing the way we create and share stories.
You’ve been at the intersection of media and technology for years, helping shape how we talk about AI. How would you describe the current state of AI reporting?
Laurens Vreekamp: The way media and journalists are reporting on AI right now, the agenda is mostly set, I think, by the big tech companies. So the big players, with their marketing and PR, are sort of setting the agenda, and a lot of journalists are covering whatever new tools those companies turn out. And the tools all come with these promises that they’re going to change the world, the future of education, of work. It’s all about optimization and efficiency – or the opposite, the dystopian vision that AI will take over and sort of blast us all into space. I don’t consider either of these scenarios very realistic. So that’s what I see happening now.
And some people in the industry are saying we might be losing some of the hype right now. If you look at the Gartner hype cycle, a lot of people say that if you follow this model – which goes from early adoption to where we are now, the peak of inflated expectations – the next phase is the trough of disillusionment. Especially with generative AI, we might be heading for that phase. And it takes another one or two phases before a new technology becomes settled in society and in people’s everyday lives. So I see that the hype might be waning, in a way.
Branislava Lovre: What are some essential tips for making AI reporting more clear, relevant, and engaging?
Laurens Vreekamp: An important thing, as always with quality journalism, is to ask: What’s the agenda? Who gains from this message? And obviously, ask different sources – not just developers, but also the people this software might impact. So if it’s citizens, or farmers, or office workers, ask them. Inquire with people; I think that’s always very important.
And definitely try to get some developers who are not working for the company that makes the tools you’re reporting on, to get their opinion. And maybe get some independent academics to assess the software and fact-check the claims the companies are making, or what the AI software says it can do. Have those independent academics reflect on that and put that in your reporting as well, so that you get a more nuanced picture.
Branislava Lovre: Would you say that one key piece of advice is to include a variety of perspectives and voices, especially those outside the tech world, when reporting on AI?
Laurens Vreekamp: You have to zoom out and set your own agenda. So maybe set out your own topic: how is AI actually impacting the lives of people in my region or my country, or for my specific publication, my niche target audience? And ask them. And also ask policymakers, psychologists, sociologists, all kinds of experts and peers. So as you said, a diversity of perspectives really helps you produce a more nuanced and better-balanced report on AI.
Branislava Lovre: When it comes to ethics, what are the main dilemmas journalists should keep in mind when covering the impact of AI?
Laurens Vreekamp: Well, it’s interesting, because when we say ethical AI, we mostly mean simply that when we use it, it’s responsible and it’s not biased. But the more I learn about it, the more I read about it, the more I discuss it with people, the harder it gets.
And I think Joy Buolamwini, who wrote the book “Unmasking AI” and is the subject of the Netflix documentary “Coded Bias” – she says you can’t undo bias, not in humans and not in data sets. So it will be there. And there’s another quote – I don’t know who said it – that if society is not balanced, not fair, then the data will not be fair. So you can’t fix society with technology. You will never have a balanced, diverse, and equal data set.
You have to take into account that things might go wrong. What I would consider ethical, responsible, and fair use of AI is use where you’ve taken into consideration the things that might go wrong: you know the likely consequences and you have a mitigation plan.
So one step is to know which kinds of problems might occur, and then put a sort of risk number on them: how high is the risk, and for whom? If you take all these things into account, you can put a mitigation plan into practice. But you can also then decide not to use an algorithm, not to use an AI model, for certain activities or certain tasks.
There will be bias; you can’t remove it from your datasets or your models. And the funny thing is, when I spoke about this for one of the first times, I met Agnes Stenbom – she’s from Sweden and does a lot of AI and journalism work. She made a tool to uncover human bias in the newsroom, using a machine learning model to analyze whether there is a preference for, say, older white men being quoted, called up as experts, or appearing on talk shows – and in the kinds of illustrations and photographs we use with our reporting. There are many human biases as well. And again, that’s not a bad thing in itself – that’s how we cope with everyday life. But if you acknowledge it and have a mitigation plan, or an awareness plan, that already makes a lot of impact.
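[Editor’s note: as a concrete illustration of the kind of audit Laurens describes, the sketch below simply tallies how often each pre-labeled group of sources is quoted as an expert. The data format and labels here are hypothetical examples, not Agnes Stenbom’s actual tool.]

```python
# Hypothetical sketch of a newsroom source-diversity audit, in the spirit of
# the tool Laurens mentions (not the actual implementation). It assumes each
# quoted source has already been labeled, by hand or by a classifier.
from collections import Counter

# Toy data: one record per quoted source across recent articles.
quoted_sources = [
    {"article": "a1", "role": "expert",  "gender": "male",   "age_group": "60+"},
    {"article": "a1", "role": "citizen", "gender": "female", "age_group": "30-44"},
    {"article": "a2", "role": "expert",  "gender": "male",   "age_group": "45-59"},
    {"article": "a3", "role": "expert",  "gender": "female", "age_group": "30-44"},
]

# Count who gets quoted in the "expert" role.
experts = [s for s in quoted_sources if s["role"] == "expert"]
by_gender = Counter(s["gender"] for s in experts)
total = sum(by_gender.values())

for gender, count in by_gender.most_common():
    print(f"{gender}: {count}/{total} expert quotes ({count / total:.0%})")
```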
Branislava Lovre: Looking ahead, how do you see the future of AI reporting over the next few years? What trends should we expect?
Laurens Vreekamp: Yeah, that’s an interesting one. So, the trends in AI reporting – I think there will be more nuanced reporting, especially now since some people say that the hype might be fading, the AI bubble might be bursting a bit.
But some people also compare it to the internet, where we had the dot-com boom and bust around 2000–2001, and then the infrastructure got laid out on a technical level. But people also had to learn how to work with the internet – to order things, for example. If you ask people today whether they have ever ordered clothing online, most have. But back then, people were building startups to do that, and it was too early, because you could not even pay very easily with credit cards or other solutions.
So maybe with AI, we’ve seen a lot of things already that might not work now but might work in the near future. We don’t know that yet, if you compare it to the dot-com sort of hype cycle.
But the reporting might become more nuanced, because we had the hype reporting, and then we might get the negative reporting – “so it was all a fuss” – and then we’ll get into the more productive, normal phase, where the technology is being adopted and we know in which areas and use cases it works, and where we should not use it.
But especially for that last part, I think we need a lot of conversations, with general audiences as well: where do we want this technology to have a role in our society, and where should we not use it? And yeah, I think it’s a natural cycle that we are going through.
For the near future, I think we will have more data scientists and machine learning engineers as part of editorial teams, instead of being tucked away at the back doing personalization and recommendation in the data science department. I think data scientists are becoming a regular part of the makeup of the editorial team. That’s my prediction for the near future.
Branislava Lovre: Let’s talk about training and support. How does your work help journalists improve their understanding of AI? You’ve collaborated with many teams – what’s your approach to training and capacity-building?
Laurens Vreekamp: One of the things is to educate them in the classical way, by telling them a bit of the theory and how it actually works. But I think the most important and most impactful thing is what we always do in our workshops: we have journalists train their own model.
There are many very simple tools where you can actually train a real model that does computer vision and object detection, so they can use it on satellite imagery, or for finding specific things within a pile of thousands of photos.
And the moment they train these models themselves, a lot of things click and make sense. They say, “Oh, I have to think about labels. I have to think about examples that fit into Box A but not into Box B. I have to think about goals and thresholds. When is 80% accurate or precise enough?”
By actually building these things yourself, you come across a lot of questions that need to be addressed. And most of them get that it’s very human – that it’s just technology, math and statistics – and that you as a human have to make a lot of subjective decisions, which are very biased, very non-objective.
I think educating them, showing them examples of what others are doing, but also having them do it themselves and then having a conversation about it – that is, I think, the most important part.
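[Editor’s note: for readers who want to try the exercise themselves, here is a minimal sketch of the decisions Laurens mentions – choosing labeled examples and deciding when an accuracy number is “enough” – using scikit-learn’s built-in digits dataset. The library, dataset, model, and 80% bar are illustrative assumptions, not the workshops’ actual setup.]

```python
# Minimal sketch of the "train your own model" exercise: a simple image
# classifier on scikit-learn's built-in digits dataset. The dataset, model,
# and 80% threshold are illustrative assumptions, not the workshop's setup.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()  # the "labels": each 8x8 image is tagged 0-9

# Decision 1: which examples go into training versus evaluation?
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Decision 2: what accuracy is "good enough" for your use case?
accuracy = accuracy_score(y_test, model.predict(X_test))
THRESHOLD = 0.80  # is 80% enough? That depends on who bears the errors.
verdict = "meets" if accuracy >= THRESHOLD else "is below"
print(f"accuracy: {accuracy:.1%} ({verdict} the 80% bar)")
```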
Branislava Lovre: You also published a book, “The Art of AI.” Could you tell us a bit about that project – who it’s for and what inspired you to write it?
Laurens Vreekamp: Well, so for one thing, the book was published in June 2022, and then at the end of November 2022, ChatGPT came out. And that sort of started the real hype, especially on generative AI.
In our book, we addressed some of the predecessors of, say, DALL-E and ChatGPT, but we would never have guessed that generative AI – especially text-to-text, working with chatbots and using them to create content – would move so fast after June 2022, or be there so soon for the general audience to use.
We thought it might help you with doing editorial stuff and maybe using it for brainstorms, for visual brainstorms and imagery and stuff like that. But we never thought that the photorealism would be so good already, so soon.
Our aim with the book was more that you can use the understanding and processing side of machine learning – finding documents that are similar, finding entities in texts, summarizing, doing transcriptions and translations – which is really helpful. So we focused on that, and on how AI can help and augment you along the journalistic process or the design process.
And then generative AI came, and a lot of newsrooms started drawing up these guidelines, because all their columnists had written their first columns with ChatGPT and revealed in the third paragraph, “So this was all written by a machine” – everyone did that. And then they learned: well, we can’t use this for everything, because it has to be fact-checked.
And I think nowadays almost every journalist knows that, because of what we call hallucinations – which is not the best term, but it’s the one everyone knows now – you can’t trust the chatbots for facts. They’re really, really big auto-suggest systems, not search engines. Or like writers whose grammar is great, but what they write doesn’t always make sense and isn’t always right. It’s not factual. I think most journalists know that by now.
But what you do see is that a lot of journalists are using generative AI, especially in smaller newsrooms, not primarily for journalism tasks but more for organizational work. They’re making plans and getting feedback on their budget proposals from ChatGPT, and when they’re looking for grants or sponsors, they ask Claude AI for feedback on their slide deck. For emails and pitches and things like that.
So what journalists and other people are using these chatbots for is not the main output of what a journalist does, but more the secondary, more organizational stuff. And I think that’s an interesting sort of evolution of what these generative AI tools can do for you.
Branislava Lovre: What do you think is the role of journalists in the future, and what about those who are hesitant to use AI?
Laurens Vreekamp: I think the most important part, to answer your question – which is a very good question – is for the journalist to be a sort of center of trust for their audience. I think that role will become far more important than writing well. Obviously, you need to have your facts straight, and it needs to be a good story. But I think journalists need to become more of a beacon of trust, I would say, because regaining the trust of our audiences is a sort of global challenge.
And that means being more in conversation with your audience, rather than just talking at them in the old way.
And then one piece of advice that might sound a bit harsh: if you’re saying “I don’t want to learn about AI, I don’t want to use it” – that’s a very ignorant way of coping with it, because you’re already using it. If you unlock your phone, if you use Google Translate, Netflix, or Spotify – there is AI in that.
So you’d better know how it works, because the word on the street is that you might not lose your job to AI, but you might lose it to a journalist who knows how to use AI. Right? And then I hope – and this is wishful thinking – that when managers and editors-in-chief see “we can do stuff with AI,” they realize that what AI helps them achieve is not churning out more content, because there is already too much content, but actually focusing on quality reporting. The time you save should be invested in being in contact with your audience again: being in conversations, being out there on the streets, in the fields, in the region, talking to the people you are actually working for – instead of churning out more digital content or podcasts or newsletters.
I would say that if AI frees up some of your time, use it to talk to people – your audience – because I think that investment will give you a lot more return in the long run than being able to churn out seven articles instead of five because you can now write, edit, or transcribe faster. Those gains are marginal. The best gains are where we become a sort of beacon of trust for our audiences.
And that can only be achieved if we know what they feel, how they live, what they need to know, and where we can augment them – because they might not know what they need. That’s our job: to tell them something. But we should be aware of why it is important to them, what things matter, and what they should know in order to live their lives well.
So the beacon of trust, I would say, that would become a very important role for every journalist and reporter.
Branislava Lovre: And finally, what message would you give to those who are afraid of AI?
Laurens Vreekamp: Well, I don’t want to end with a negative or pessimistic view, but I would say: don’t fear the machines, but fear the humans that power the machines.
But on a positive note – AI is not a natural phenomenon. We can influence what it does, where we use it, and where we don’t. There’s a lot of regulation, obviously, but we have our own influence. We have a responsibility to use it wisely, and many times a responsibility not to use it at all.
Take that into account. It’s not something that’s being laid upon us top-down. It’s something you can choose, and we have to tell the audience that as well. It’s not a given – we can direct its course.
I hope journalists will take that responsibility and help direct its course, to help their democracy. It sounds very grand, but yeah.
Branislava Lovre: Laurens, thank you for this conversation.
Laurens Vreekamp: Thank you.
Branislava Lovre: And to everyone watching or listening – thank you for being with us. If you enjoyed this episode, follow AImpactful for more conversations at the intersection of AI, journalism, and ethics.


