What happens when newsrooms swap coffee-fueled debates for brainstorming sessions with AI? Can curiosity, honest mistakes, and real teamwork survive in a world buzzing with algorithms, or could they even thrive?
Álvaro Liuzzi knows the answers to these questions. As an Argentine journalist, digital strategist, and one of Latin America’s leading advocates for integrating AI into journalism, he’s spent the last twenty years shaping how newsrooms actually work. His journey stretches from hands-on experiments in local Argentine news outlets to spearheading groundbreaking digital projects for Chequeado.com, Deutsche Welle, La Nación, and the Redacciones5G initiative.
But for Liuzzi, innovation and technology have never been just about new tools. He’s the kind of expert who invites his team in on every experiment, comfortable with awkward starts, invested in learning through both small wins and messy failures, and determined to keep the “human spark” at the core of journalism.

Álvaro Liuzzi
“Remaining customer-obsessed is still the key to product management. If you’re not creating a solution for someone’s problem, you’re likely just building something for the sake of it, which is not a good approach.”
In this conversation, we discuss his practical approach — learn, research, experiment, document — and the essential role that trust, teamwork, and open conversation play in keeping journalism both relevant and human, even as algorithms enter the beat.
Here, Álvaro Liuzzi takes us behind the scenes:
— How did his first AI newsroom experiment nearly fall flat, only to spark a whole new way of working?
— Why does “AI policy” mean more than a list of rules, and what’s the secret for making it work?
— What does it feel like to watch a skeptical team become innovators, solving real problems together?
From lessons learned in Argentine newsrooms to strategies for building real AI policies, Liuzzi offers an unfiltered, hope-filled roadmap for anyone who believes the future of news belongs to people bold enough to ask: Who do we serve, and how can technology help, not hinder, that mission?
Q. Álvaro, you’re widely recognized as one of the pioneers in Latin America exploring how artificial intelligence can reshape journalism. What first drew you to this field, and how did you manage to identify AI’s potential in media before it became a global trend?
A. For me, it wasn’t the result of a sudden revelation but rather a natural evolution. I’ve spent over two decades working at the intersection of media, technology, and innovation. Throughout that journey, I’ve witnessed several key moments that have reshaped the industry. The emergence of Web 2.0, the rise of social media, and the widespread adoption of mobile platforms are just a few. Each of these shifts deeply transformed how news is produced and consumed, and they’ve always fascinated me, especially because they challenge us to adapt.
Artificial intelligence, particularly in its generative form, stood out to me from the beginning. I sensed early on that this wasn’t just another tool to add to the newsroom — it represented a true paradigm shift. One that would require us to rethink both traditional methods and professional roles in journalism. That intuition led me to dive in fully and begin a process of active exploration that continues to this day.
I started studying AI well before the rise of ChatGPT. My interest wasn’t rooted in the technical aspects but in understanding its cultural, ethical, and productive implications. I wanted to explore how this technology could reshape our relationship with knowledge, how it might change the way we tell stories, what effects it could have on newsroom workflows, and what role journalism should play in that new context.
That curiosity pushed me to write, research, teach, design experimental projects, and support change processes in newsrooms across Latin America. And along the way, I arrived at a conviction that still guides me today: artificial intelligence is not here to replace journalism, but it does force us to rethink it. That’s where its true value lies. It’s not an end goal in itself — it’s a very real opportunity to redefine the kind of journalism we want to create for this new era.
Q. You often speak at international conferences and lead training programs in newsrooms. How important is it to create learning opportunities, and why should newsrooms actively support this kind of professional development for their teams?
A. Artificial intelligence is quickly advancing across all areas of the newsroom, from content production to distribution. This progress is bringing about a profound transformation in professional roles, daily routines, and newsroom culture.
In this context, media organizations that invest in continuous learning are not only strengthening their internal capabilities — they are also building a clear competitive edge.
In the training programs I run with journalists and media outlets across Latin America, I see a recurring pattern. There’s a growing demand from professionals who want to understand what artificial intelligence is, how it works, and above all, how they can apply it in practical ways in their day-to-day work. What until recently felt like science fiction is now part of everyday editorial decisions. If media outlets don’t support this learning journey, they will inevitably be left behind.
One key aspect of this transformation is the ability to attract and retain professionals who may not be technical experts but who have the ability to learn quickly, adapt, and reinvent themselves as new tools and languages emerge. In a constantly changing environment, the ability to unlearn and relearn becomes just as important as any specific skill.
There’s also a strategic dimension that shouldn’t be overlooked. When teams understand how these technologies work, they lose their fear, become more autonomous, and are able to contribute value from a fresh perspective. Artificial intelligence is not just about automating tasks. It also prompts us to rethink workflows, redefine roles, and reexamine dynamics through a critical and innovative lens.
I’m convinced that the media organizations embracing this kind of training are not just adding technical skills. They are creating the conditions for their teams to become active drivers of change. In a landscape as dynamic and challenging as today’s, that mindset can make a meaningful difference in any newsroom’s ability to adapt and innovate.

Álvaro Liuzzi during a presentation organized
by the Asociación de Entidades Periodísticas Argentinas.
(Photo from the private archive of Álvaro Liuzzi.)
Q. In your work, you often rely on the approach “Learn. Research. Experiment. Document.” Why is it so important for journalists to test new technologies and document what they do? How does that process contribute to the growth of the entire newsroom?
A. I’ve always tried to balance my role as a university professor in communication with my work as a journalist and media consultant. These are often two separate worlds, but I strongly believe they should be more connected. Academia brings depth, critical thinking, and methodological tools. The industry, on the other hand, confronts you with urgency, practical challenges, and the day-to-day reality of newsrooms. Being able to move between both spaces allows me to take the best from each and apply useful tools from one to the other.
That’s how I began building and committing to a work model that’s now central to everything I do: learn, research, experiment, and document. I try to be methodical about it because it allows me to organize what I do and share it clearly with colleagues and newsrooms. It’s not just about exploring new technologies out of curiosity. It’s about understanding them in depth, testing them in real-world settings, figuring out what works and what doesn’t, and leaving behind useful records for others.
That documentation can take many forms, from internal guides to case studies on how AI has been implemented in local Latin American media. What matters is that the knowledge doesn’t stay with just one person. Sharing it is also an open and ethical way of working, because it allows others to build on it, apply it, and strengthen the broader ecosystem.
Q. When we talk about AI and journalism, we also have to talk about ethics. You’ve often emphasized the need to create ethical guidelines for the use of AI in newsrooms, and you even developed a custom version of ChatGPT to support that. What’s the most effective way for a newsroom to build its own AI policy? And what’s the most common mistake they tend to make in that process?
A. I think a good starting point for building an AI policy in a newsroom is to ask a simple but fundamental question: why do we want to use this technology? From there, the process should be collective, transparent, and grounded in the newsroom’s context. There are no one-size-fits-all solutions, because every media outlet has its own identity, business model, and workflow. That’s why it’s important that these policies aren’t written in isolation or copied from somewhere else. They should emerge from real conversations among journalists, editors, tech teams, and newsroom leaders.
A good AI policy shouldn’t be a static document. It needs to be a living framework that helps guide practical decisions. What tasks will be automated? How will AI-generated content be validated? What level of human oversight is required? How will readers be informed when AI is involved in the process? These are just a few of the questions that need to be addressed. And above all, the policy should align with the newsroom’s values and its responsibility to the audience.
One of the most common mistakes I’ve seen in this area is treating the issue as purely technical, as if it were just about choosing the right tools or software. But ethical policies aren’t really about technology. They’re about editorial judgment, professional responsibility, and trust. Most importantly, we need to shift the focus from machines to people. To the teams who will be integrating this technology into their daily work. To how they’ll be trained, how decisions will be made, and how human oversight will be maintained throughout the process. Because in the end, it’s not just about what AI can do — it’s about how we choose to use it.
As you mentioned, in 2024 I created a custom version of ChatGPT called AI Guidelines to support these processes. It was trained on more than 40 editorial guidelines from media outlets around the world. That allows it to offer contextual advice, reference frameworks, and practical examples. Its goal isn’t to replace human decisions, but to serve as a support tool that helps newsrooms think through, discuss, and build their own informed and responsible approaches.
Q. You often say that ethics isn’t a one-time task but an ongoing conversation between journalists, technical teams, and audiences. How do you recommend keeping that conversation alive and ensuring accountability in the face of rapidly evolving technology?
A. The integration of AI in newsrooms isn’t just about efficiency or technological innovation. It’s also a deeply ethical issue that shapes how we report the news and how the world is understood. That’s why I always emphasize that ethics can’t be treated as a checklist you complete once and forget. It’s a continuous conversation — one that needs to evolve over time and respond to context.
To keep that conversation alive, it’s essential to create real spaces within newsrooms where the use of AI, its boundaries, and its implications can be openly discussed. Having a written protocol is not enough if no one reviews or talks about it. Ethics requires time, but it also requires structure — whether it’s a planning meeting, internal training, or an editorial committee. Most importantly, it requires diverse voices. These conversations can’t be led only by developers or executives. They need to include journalists, editors, designers, audiences, and the broader community.
It’s also important to document processes and decisions. That helps us reflect on what’s been done, identify mistakes, stay accountable, and learn. And in parallel, we need to communicate those decisions. Audiences value transparency. News organizations that explain how they’re using AI, under what criteria, in which contexts, and with what kind of human oversight, are building trust in a time when the relationship between journalism and technology is raising serious questions.
Q. When you introduced AI at Todo Jujuy, you involved journalists, developers, and content editors. Why is it essential to include the entire team in these kinds of transitions? And how does that impact the overall success of AI initiatives in a newsroom?
A. When I worked as a consultant on the integration of AI at Todo Jujuy, I knew from the start that it couldn’t be a project limited to the tech team or a small group of people. That’s why I involved journalists, developers, content editors, and other team members. Because beyond the specific tool being introduced, what really needs to change is the culture of the organization — how people work, how content is created, how editorial decisions are made, and how collaboration happens within the team.
Artificial intelligence is inherently cross-cutting. It touches everything, from how information is gathered to how headlines are written, how stories are published on social platforms, and how audience behavior is analyzed. If only part of the team understands what’s happening or how the technology works, it’s very likely that misunderstandings, resistance, or poor implementation will follow.
Involving the whole team not only improves the execution of an AI initiative — it also builds ownership. When journalists are included, they understand that this isn’t about replacing their work, but about complementing it. Automating repetitive tasks frees up time and energy for the things that really matter. When developers are exposed to editorial challenges, they can design solutions that are better suited to the realities of the newsroom. And when editors grasp the potential of these tools, they’re able to make stronger strategic decisions. AI becomes a great excuse to open up communication and collaboration across all areas of the newsroom.
Collaborative work also helps identify risks, biases, or unintended consequences more quickly. And it fosters a shared culture of innovation, where technology isn’t something imposed from the outside but something that’s built from within, based on the newsroom’s actual needs.
This kind of holistic approach doesn’t guarantee that everything will go perfectly — but it significantly increases the chances that the change will be sustainable and have a real, positive impact on the organization.

Álvaro Liuzzi at TVMorfosis, a media and innovation forum
on the future of public communication.
(Photo from the private archive of Álvaro Liuzzi.)
Q. You created the guide “AI in Journalism” and lead the newsletter #Redacciones5G, where you share trends and practical advice. What topics and recommendations are currently generating the most interest among journalists?
A. One of the things I enjoy most about running the Redacciones5G newsletter and writing the AI in Journalism guide is being in direct contact with what really matters to people working in newsrooms every day. Right now, there’s a lot of interest — and just as many questions — about the practical uses of artificial intelligence. What journalists and editors are looking for most are applied examples, easy-to-use tools, and hands-on advice they can put into practice without needing to be tech experts.
But the conversation is also expanding beyond the technical side. There’s growing curiosity about how newsroom teams themselves are changing. We’re seeing a clear shift in roles, with positions like product editors, data analysts, innovation leads, and audience specialists becoming more common. That shift is sparking new questions about the kinds of skills needed today and how to support teams so they can adapt and thrive in this evolving environment.
The issue of media sustainability is also very present. Many newsrooms are exploring more diverse business models, including membership programs, paid newsletters, live events, community building, and strategic partnerships with other organizations. What people really want to know is what’s working, why it works, and under what conditions. There’s a growing understanding that there’s no longer a single model that fits all.
Another strong area of focus is local journalism. In a time of global information overload, what’s close to home is becoming meaningful again. Many media outlets are looking for ways to cover their communities using accessible technology, but more importantly, with a participatory approach that’s genuinely connected to the everyday realities of their readers.
And of course, misinformation remains an ongoing and growing concern. Not just from a fact-checking standpoint, but in terms of the urgent need to build clear, trustworthy, and consistent narratives over time.
Q. Based on your experience, what’s the most important lesson you would share with a newsroom that’s just beginning to explore AI?
A. If I had to sum it up in one idea, I’d say the most important lesson is to start with a clear purpose. Don’t bring AI into the newsroom just because it’s trendy or because other outlets are doing it. Do it because it can solve a specific, real need within your newsroom. That kind of clarity from the start helps avoid frustration, leads to better tool selection, and allows for a more accurate evaluation of the impact you’re creating.
It’s also important to understand that you don’t have to do everything at once. There’s a common misconception that integrating AI means undergoing a total and rapid transformation, and that often creates unnecessary anxiety among teams. In my experience, the most effective approach is to begin with small, focused projects — things you can test, learn from, adjust, and eventually scale. Experimentation is key, but it needs to go hand in hand with time for analysis, reflection, and documentation.
Most importantly, you have to recognize that this isn’t just a technical shift — it’s a cultural one. That’s why training and collaborative work are so essential. The more involved the team is from the beginning, the stronger their sense of ownership will be, and the more authentic the innovation will feel. Newsrooms that understand this aren’t just adopting new technologies. They’re also strengthening their ability to adapt to whatever comes next.

Álvaro Liuzzi speaking at the 80th General Assembly of the Inter American Press Association (Sociedad Interamericana de Prensa), Córdoba, Argentina, October 2024.
(Photo from the private archive of Álvaro Liuzzi.)
Q. More and more newsrooms are using AI to personalize content and better understand their audiences. What tools and strategies do you find most effective for building a stronger connection with readers in today’s digital environment?
A. Personalization through artificial intelligence is one of the most powerful avenues newsrooms are exploring to strengthen their connection with audiences. But it’s important to understand that personalization doesn’t mean repeating patterns or serving up more of the same. It means offering relevant, useful, and context-aware content tailored to the interests, habits, and needs of each reader.
There are highly effective tools that support this, such as content recommendation systems and real-time behavior analysis engines. These tools can help create more precise user experiences and increase engagement. However, it’s just as important to take a critical look at the algorithms behind those decisions.
The ubiquity of algorithms — their ability to be everywhere, all the time — makes them central players in today’s information ecosystem. They shape what we read, what we watch, what we listen to, often without us even noticing. This kind of mediation has a direct impact on how societies stay informed and how people understand the world. That’s why it’s urgent for media organizations not only to use these tools but to audit them, understand how they operate, identify their biases, and assess the effects they produce.
We often ask what algorithms mean for us as humans. But we should also reverse that question: what do we mean to them? Algorithms interpret us as a chain of clicks. Their core logic is to turn us into an endless sequence of future clicks. That’s why they tend to reinforce our existing preferences and expose us to more of the same, fueling filter bubbles, confirmation bias, and in many cases, the spread of misinformation.
In this context, building a strong connection with audiences is not just about applying technology — it’s about applying it responsibly. It means blending data with journalistic intuition, using algorithms without being used by them, and always keeping one principle front and center: personalization should never come at the cost of information diversity, exposure to different viewpoints, or the public’s right to be well informed.
AI can help deepen the relationship between journalists and readers. But that relationship can only be sustained if it’s built on trust, transparency, and a clear editorial commitment.
Q. With technology evolving at such a rapid pace, how do you personally stay inspired and up to date with the latest in journalism and artificial intelligence?
A. I try to stay constantly aware of what’s happening. I read widely and regularly — not just about artificial intelligence and journalism, but also about digital culture, innovation, education, and design. I like to move across disciplines because, more often than not, the most valuable ideas emerge at those intersections.
I also consider myself a curious person when it comes to tools. I enjoy testing new technologies — not because everything is immediately useful, but because the only real way to understand a tool is to use it. I explore its possibilities, try to understand its limitations, and think about how it might be integrated into newsroom workflows. It often happens that a tool that seemed irrelevant in one context ends up solving a very specific need in another.
Another essential part of this process is my direct work with media organizations. Training sessions, workshops, and consulting allow me to stay in touch with very diverse realities and with journalists facing real-world challenges. That constant exchange forces me to reflect on my own approaches, stay grounded, and — above all — keep learning.
And there’s something else I consider key. For me, the best way to deeply understand a technology or a trend is to systematize what I’ve learned and explain it to others. Teaching isn’t just about sharing knowledge — it’s also a powerful tool for deep comprehension. When you force yourself to organize ideas, contextualize concepts, and translate something complex into accessible language for others, that knowledge becomes clearer, stronger, and more useful.
About The Author

Branislava Lovre
Branislava is a Media Expert, Journalist, and AI Ethicist who leverages her expansive knowledge and experience across various media outlets and digital landscapes.