How much time do grantmakers and nonprofits lose to paperwork before they ever get to talk about what actually matters? Behind every grant application there are people on both sides: grantmakers trying to make the right call, and nonprofits trying to show the impact of their work. But too often, what stands between them is hours of spreadsheets, forms, and manual data entry.

That’s exactly what the team at the Patrick J. McGovern Foundation discovered. Their analysts were spending two hours per grant application just extracting numbers, calculating ratios, and filling in cells.

So they built Grant Guardian, a free AI tool that streamlines the financial review process for both grantmakers and nonprofits. Grantmakers receive customizable analyses of their grantees’ financial position, while nonprofits simply submit their existing financial documents. Today, more than 420 foundations are using it.

But ask Hazem Mahmoud, who leads the product development behind Grant Guardian, what he’s most proud of, and he won’t talk about speed or scale. He’ll talk about the principles behind it.

Seven of them, to be exact. From privacy and bias mitigation to environmental sustainability, every product the Foundation builds has to pass a set of ethical standards before it sees the light of day.

Hazem spent over 15 years in tech, from electrical engineering to big data, before choosing to bring those skills to the nonprofit sector. And he has a clear message for nonprofits that feel overwhelmed or left behind in the AI race: you’re not alone, and the first step isn’t a tool, it’s understanding what problem you’re actually trying to solve.

We believe in practicing what we talk about, using AI openly and responsibly. We used Claude by Anthropic to help prepare this episode’s introduction and transcript, always under human supervision before publication. The episode itself was recorded on Riverside, with Magic Audio handling the sound quality. When we use AI, we tell you. Transparency matters to us.

What we explore:

  • The real problem Grant Guardian was built to solve
  • The seven responsible AI principles behind every product the McGovern Foundation builds
  • Why “just because you can use AI doesn’t mean you should” is the most important question to ask first
  • What “human in the loop” looks like in practice
  • How nonprofits can start their AI journey without feeling overwhelmed
  • Why AI conversations are really human conversations about the world we want to build

Episode Details:

Transcript of the AImpactful Vodcast

Branislava Lovre: Welcome to AImpactful. This episode is dedicated to the nonprofit sector. We’ll talk about how foundations can use AI in a way that’s practical, responsible and truly helpful without getting lost in the hype.

Our guest today is Hazem Mahmoud. Hazem spent over 15 years in tech before deciding to bring his skills to the nonprofit world. Today he leads the development of Grant Guardian, a free AI tool that is already helping more than 400 foundations in their work.

Branislava Lovre: Welcome, Hazem. It’s great to have you with us.

Hazem Mahmoud: Thank you so much for having me here, Brana. I appreciate it, and I'm excited to kick off this conversation.

Branislava Lovre: Grant Guardian was launched last year and is already being actively used, which is quite impressive. For those who are just hearing about it, can you briefly walk us through how it was created and how it all started?

Hazem Mahmoud: Absolutely. So Grant Guardian was actually created based on a need that we were trying to solve for. Here at the Patrick J. McGovern Foundation specifically, we found ourselves in a situation where the analysts who perform due diligence on incoming grant applications were spending a lot of their time on busy work, busy work that takes away from the time they could spend with grantees building more meaningful, long-lasting relationships. So one of the areas that we looked at was this idea of financial due diligence. For those who may not be familiar with it: when a nonprofit organization applies for funding from a foundation, the foundation goes through a review process, a number of due diligence steps, where they try to understand what the organization is working on, the impact they're going to have, their financial stability and so on. One of those steps is financial due diligence, right? It's this process of reviewing the financial statements, audited or unaudited, things like Form 990s, and being able to assess the financial health and well-being of that organization. And what we found was, within the Patrick J. McGovern Foundation, we used a spreadsheet, right? We had this very complex spreadsheet where you punch in all these indicators and numbers, you know, assets and liabilities and all these financial variables, and then we get these ratios, like a current ratio or cash reserve ratio. So we had, by industry standards, a fairly complex and advanced approach to this. As I continued to have more conversations with other foundations in the sector, we found that others are also struggling with this process. Typically, these financial statements are reviewed by folks who may not necessarily have the financial background or expertise to understand what the statement overall is trying to tell them. But it also took a lot of time: for our organization, for a single grantee, it would take our analysts about two hours to go through and input all these numbers into the spreadsheet and then do this financial due diligence process. Anyway, long story short, we realized that AI could actually be a good fit for this problem we were trying to solve. And we decided to develop this product called Grant Guardian that allows an analyst or a program officer or a grants manager to streamline that workflow of financial due diligence. And instead of it taking two hours for our staff here, it dropped to around two minutes per grantee.
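For readers who want a concrete picture of the ratio math Hazem describes, here is a minimal Python sketch of a current ratio and a months-of-cash calculation. The field names and figures are hypothetical; this illustrates the spreadsheet arithmetic in general, not Grant Guardian's actual code.

```python
# Illustrative only: the kind of ratio math a due-diligence spreadsheet encodes.
# Field names and numbers are hypothetical, not Grant Guardian's implementation.

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Ability to cover short-term obligations with short-term assets."""
    return current_assets / current_liabilities

def cash_reserve_months(cash_and_equivalents: float, annual_expenses: float) -> float:
    """Roughly how many months of operating expenses the cash on hand covers."""
    return cash_and_equivalents / (annual_expenses / 12)

# Example figures entered by hand from a hypothetical audited statement.
statement = {
    "current_assets": 480_000.0,
    "current_liabilities": 320_000.0,
    "cash_and_equivalents": 250_000.0,
    "annual_expenses": 1_200_000.0,
}

print(f"Current ratio:  {current_ratio(statement['current_assets'], statement['current_liabilities']):.2f}")
print(f"Months of cash: {cash_reserve_months(statement['cash_and_equivalents'], statement['annual_expenses']):.1f}")
```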

Branislava Lovre: It’s always powerful when people start using what you’ve built. Who is using Grant Guardian today?

Hazem Mahmoud: We have, last I checked, a little over 420 foundations using Grant Guardian today, and some of them are power users where, you know, they’re constantly giving out smaller grants and are constantly running through a financial due diligence process. Others use it maybe on a quarterly basis. So it depends from one foundation to the other on their actual usage. But like I said, we have over 420 foundations using it today.

Branislava Lovre: We’ve talked about how the tool works and what users can get from it, including time savings. But from a practical point of view, how does Grant Guardian work in everyday practice, step by step? How can a nonprofit actually use it?

Hazem Mahmoud: So the first thing to call out, because I forgot to mention this, is that Grant Guardian is actually 100% free for any US-based foundation or grantmaker. Right now it's US-based because we're looking at how to take it international and the privacy laws we'd have to take into consideration, because that's something we care about very heavily here. But that being said, it was built by a foundation for foundations, and so we feel it should be freely available to the sector. In regards to actually using it on a day-by-day basis: typically, when a grant application comes in for a foundation to review, the applicant sends financial statements along with the application, and the grants manager or program officer (there are a lot of different names for this role; within PJMF we call them strategy analysts) takes those financial statements and uploads them into the system, into Grant Guardian. One of the principles we developed, this idea of responsible AI products, is the idea of a human in the loop, right? We should not just rely on AI to provide us with an output; we should be engaged in that process to validate it, to verify that it actually is accurate. So after the user uploads these financial statements, there's a step where they need to validate that every financial variable the AI model extracted is accurate. And then from there it generates the financial due diligence report. Now, many of our foundation users, of course, have grants management systems, or GMSs, so they'll export the report as a PDF and then input it into their GMS, because that's their central repository of all documents related to a grantee. But prior to running through a financial due diligence, there is this idea of creating what we call a financial profile. Every foundation can create one or many financial profiles. And in fact, what we've seen is that a lot of foundations are now able to do financial due diligence in a more equitable way because they've created multiple financial profiles for different types of organizations. So, for example, a lot of our users will create a financial profile for a small organization that is maybe one or two years in operation, and then a different financial profile for larger organizations that have been around for decades, because it makes sense, right? You should not be assessing the two the same way. You should be looking at different metrics; maybe the thresholds will be a little bit different, the weight scores will be different. So it allows for that level of detail, which ends up in many ways creating more equitable grantmaking. After they create the financial profile, they can then use it in the financial due diligence report creation workflow. When you go through it, you can select which organization you want to do diligence on, then select which profile you want the due diligence to happen with, and from there it'll extract just the variables that we care about for that financial profile.
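To make the financial-profile idea concrete, here is a rough Python sketch of how different profiles might weight different metrics and thresholds for small versus established organizations. All metric names, thresholds, and weights are hypothetical and are not taken from Grant Guardian.

```python
# Illustrative only: how a "financial profile" could pair metrics with
# thresholds and weights. Names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    threshold: float  # value at or above which the metric "passes"
    weight: float     # relative importance within the profile

@dataclass
class FinancialProfile:
    name: str
    metrics: list

    def score(self, values: dict) -> float:
        """Weighted share of metrics that meet their threshold (0.0 to 1.0)."""
        total = sum(m.weight for m in self.metrics)
        passed = sum(m.weight for m in self.metrics
                     if values.get(m.name, 0.0) >= m.threshold)
        return passed / total

# A young, small organization is judged against gentler thresholds than a
# decades-old one, as described above.
small_org = FinancialProfile("Small / early-stage", [
    Metric("current_ratio", threshold=1.0, weight=2.0),
    Metric("cash_reserve_months", threshold=1.5, weight=1.0),
])
established_org = FinancialProfile("Established", [
    Metric("current_ratio", threshold=1.5, weight=1.0),
    Metric("cash_reserve_months", threshold=3.0, weight=2.0),
])

# Values the analyst has already validated (the human-in-the-loop step).
validated = {"current_ratio": 1.2, "cash_reserve_months": 2.0}
print(small_org.score(validated))        # 1.0 -> passes both metrics
print(established_org.score(validated))  # 0.0 -> passes neither
```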

Branislava Lovre: If someone listening to this conversation is interested and wants to try it out, what do they need to do to get started?

Hazem Mahmoud: So you can visit mcgovern.org and navigate to the Grant Guardian page. Once you're there, there's a link where you can sign up, and you go ahead and fill out a form. What we ask for is basically the name and email of the administrator, because the account we create will be for the administrator, and the name of the foundation as well. The reason that's important is that we have to make sure, on our end, that it's a US-based philanthropic institution. Once we validate that, which typically takes about 24 hours, we create an account and the user gets an automated email. They can then log in and set up their foundation account, and there are some steps there to do that. And then they can invite as many users as they want, and there are multiple user roles or user types that you can create within Grant Guardian.

Branislava Lovre: Today we've been talking about Grant Guardian as a great example of responsible AI in practice. Responsible AI is at the core of everything you do at the McGovern Foundation.

Hazem Mahmoud: Yeah, absolutely. And it's a very good point, because at the end of the day, what really sets the Patrick J. McGovern Foundation apart in the work that we do as it pertains to AI product development is this idea of being very cognizant of what it means to build responsible, safe, ethical AI products. That's not something we state just for marketing or PR purposes; it's something we truly believe, and there are a number of products that we've considered in the past and decided not to go forward with because of the challenges they would pose for doing this in a responsible way. So we have seven principles by which we do that today. The first one is privacy and transparency. For any product we develop, we are very transparent about the models we use and the backend technology being incorporated into it, and very, very careful when it comes to data privacy. All that privacy and transparency is kind of one bucket. The second is being people- and community-focused. We are not going to develop a product just because it's a cool thing to do; we're going to develop a product because we truly feel it's going to bring value and impact to a community. And that leads into the third principle, bias mitigation: working with the community hand in hand helps us understand the potential biases that can arise, and it allows us to address those biases as we're designing, developing, testing and finally deploying. Working with the community is an essential part of the work that we do, whoever that community may be for whatever product we're looking to build. The fourth one is IDEA integration, this idea of inclusion, diversity, equity and accessibility. So we, for example, build accessibility features into our products. The fifth one is human in the loop, which is what I referred to earlier: a human needs to be engaged with these products; we don't just let AI loose on them. The sixth one is enterprise-grade security. What that means to us is that we actually hire a security firm that comes in and does a full end-to-end vulnerability assessment and penetration test on the product, the application itself and the infrastructure it resides on. If the report that comes out of that shows any vulnerabilities, we address them immediately, and we pay to have an organization do that for us because it's that important to us. And the last one is sustainability, and when we talk about sustainability, we're talking about environmental sustainability. We just recently published a blog by our platform engineer Nick Trimmer, who works on how we minimize our carbon footprint and the negative impacts we could potentially be having on the climate through various technical techniques. So those are the seven principles that we work through within our product development.

Branislava Lovre: When you speak about responsible implementation, what were the main ethical concerns or questions your team had to address when building Grant Guardian, especially given how sensitive this space is?

Hazem Mahmoud: Yeah, that's a very good point. And there were a few that we had to work through. The first one is this question: is AI the right tool for this? One of the things that I'll always say is that just because you can use AI doesn't mean that you should. In many cases, when you do a true problem definition exploration session, an AI solution may not be the right fit. So you have to be honest with yourself: do I really need to use AI or not? In this case, we found a very narrow use case where AI can be beneficial for this product, and everything else around it isn't really AI-driven at all; the overall workflow and the financial profiles and all that aren't really AI. Ultimately, we wanted to make sure AI is used just where it's needed. The other part is that these models don't always perform at the accuracy level we'd like to see. We've now gotten the model to extract information with about 93% accuracy on average, which is great, right? But there's that 7% we still need to help our users catch if there's an error there, so we made sure to build those kinds of capabilities into the product. I will say probably the biggest challenge we faced was that in the sector, broadly speaking, there's still hesitation and a little bit of resistance around using AI, and it's completely justifiable. I think a use case like Grant Guardian, which is very focused and has responsible and ethical principles embedded in it, helps lower the barrier to entry for a lot of organizations to realize that there are good use cases for AI we can incorporate within our workflows. We don't have to open the doors and let everything in, but there are very, very specific use cases where it can be beneficial. So I think initially there was resistance to adopting something like Grant Guardian, and other AI tools as well, within the sector. But through conversations, and through addressing what's real and what's hype within AI, we've been able to move forward. There's a lot of fear built into AI today; some of those fears are valid, but many are not necessarily based on anything realistic right now. Just being able to have those conversations with folks within the sector has been very fruitful for us.
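As one illustration of the kind of guardrail that helps a reviewer catch the remaining extraction errors Hazem mentions, the sketch below checks a basic accounting identity (total assets should roughly equal liabilities plus net assets) and flags inconsistent values for human review. It is a hypothetical example, not a description of how Grant Guardian flags errors.

```python
# Illustrative only: a simple consistency check that surfaces likely
# extraction errors for the human in the loop. Hypothetical sketch,
# not Grant Guardian's actual error-flagging logic.

def flag_for_review(extracted: dict, tolerance: float = 0.01) -> list:
    """Return human-readable warnings for values that look inconsistent."""
    warnings = []
    assets = extracted.get("total_assets")
    liabilities = extracted.get("total_liabilities")
    net_assets = extracted.get("net_assets")
    if None not in (assets, liabilities, net_assets):
        expected = liabilities + net_assets
        # Flag when the identity is off by more than the allowed tolerance.
        if abs(assets - expected) > tolerance * max(abs(assets), 1.0):
            warnings.append(
                f"total_assets ({assets:,.0f}) != liabilities + net assets "
                f"({expected:,.0f}); please re-check these fields."
            )
    return warnings

# A hypothetical extraction with a transposed digit in total_assets.
print(flag_for_review({
    "total_assets": 1_530_000,
    "total_liabilities": 400_000,
    "net_assets": 950_000,
}))
```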

Branislava Lovre: So far we've talked about Grant Guardian as a valuable support for NGOs. If a nonprofit is interested in exploring AI more broadly but isn't quite sure where to begin, what would you suggest as a first step?

Hazem Mahmoud: Yeah, I would say the first move, honestly, is education, right? Educating yourself about what AI is, what is possible with AI today and what is not, where the real fears and threats are, and where the real opportunities are that we can realize. So education is always the first thing. We have at the McGovern Foundation what we call the Learning Hub. Anyone can get access to it; it's located at learn.mcgovern.org. If you go there, there's a ton of resources that are highly curated and highly vetted, right? It's not just like we're dumping things there. And we take users of the Learning Hub through what we call the AI journey, and understanding what the first step is. The first step, the way that we see it, is problem definition. You have to have a true problem that you're trying to solve, and AI has to truly be a solution to that problem. The reality is, in working with a lot of nonprofit organizations, they'll go through the problem definition (we actually offer a workshop on that), they'll go through the materials, and then they'll realize that what they really need is, you know, some data analytics, for example, but they don't really need true AI to solve the problem they're facing. All to say: go through it slowly, understand what problem you're trying to solve, and take it one step at a time. Those would be the important first steps: education, taking it one step at a time, and understanding what problem you're trying to solve. And the last part of all of this is collaboration, working with other organizations and learning from them.

Branislava Lovre: It’s also important to think about organizations that have already tried using AI but didn’t get the results they hoped for or even decided to step back. If they now feel that everyone else is far ahead of them, what advice would you give in that situation?

Hazem Mahmoud: Honestly, there are organizations doing some incredible work with AI today, and you're right, some of them are really, really far ahead with the work that they're doing. Some of them were working with machine learning and AI models before this whole hype around large language models; they've been doing a lot of different types of inference and prediction work. But ultimately, know that within that network, even among those advanced users, a lot of them are very open to collaborating and very open to sharing their experiences. So if you find an organization that's that far ahead and already doing that much with AI, reach out to them and be open to the idea of collaborating with them. What I've found, especially in the nonprofit sector, is that it is a true community, right? The idea of competition or conflicts of interest doesn't exist as much here as it does in the private sector. Generally speaking, there is an openness to support each other, especially when it comes to AI. So just don't hesitate to reach out to others.

Branislava Lovre: When it comes to AI, it’s important that everyone involved stays up to date. But how can an organization do that without feeling exhausted or overwhelmed?

Hazem Mahmoud: The sector is overwhelmed with AI knowledge and AI solutions, and it's continuing to get more and more of that, right? When I think about the technology I was dealing with 20 years ago, for example, you'd see advancements on an annual basis, a year-to-year kind of thing. With AI, it's literally happening on a weekly, if not daily, basis. So it is overwhelming, and it can be too much information to understand how to move forward. What I would recommend is to look for online resources that you trust and that you know can guide you through this process, and try to stick with those for some time. It can get somewhat confusing if you're going across the board and trying to understand what one set of publications is telling you, and then another set, and then a third; it can be a little too much, because many of them might have conflicting information. The sector is so dynamic and still growing that a publication from a month ago might be completely nullified by a new publication today. So going through that journey with one or two or a few sets of courses that you know you can trust, and then moving forward, is probably the best approach. Like I said, with learn.mcgovern.org, we walk people through the journey and take them step by step: we have our problem definition workshop, then our data readiness workshop, and we walk users through that. But yeah, it can be overwhelming. Just know that you're not the only one; everyone else is also overwhelmed and trying to make sense of all of this. And that's a completely normal feeling to have as it pertains to AI today.

Branislava Lovre: Thank you, Mahmoud. Do you have a final message you would like to share with our audience?

Hazem Mahmoud: Yeah. I mean, thank you for having me again, Brana. This has been great, just chatting about some of this stuff. At the end of the day, I think it's important to recognize that this is a journey we're all on. We haven't figured this out; even within the Patrick J. McGovern Foundation, we're still trying to understand, you know, what the outcomes of these AI models are and how we can leverage them to help humanity get past the challenges we're facing as a global society. So it's a journey that we're all on. And AI and technology are not really the core of that, right? That's just a tool for us to use. What we're finding is that a lot of these AI conversations are really more human conversations about what kind of society we want to build for ourselves. How do we make sure that we're including marginalized communities in the decision-making processes? It really shines a light on who we are as humans and what kind of world we want to have for ourselves now and into the future. And I think that's the most exciting part about this AI wave that's happening. It's really helping us all start to think about humanity and about the world we want to live in, and those are really the conversations that we end up having. But again, we're here to support organizations who are on this journey. Please reach out, follow us on LinkedIn, and visit the Learning Hub and take a look at the resources there. Looking forward to potentially collaborating with anyone who may be listening.

Branislava Lovre: Thank you so much, Mahmoud.

Hazem Mahmoud: Thank you, Brana, I appreciate you having me.

Branislava Lovre: Thank you for watching this episode of AImpactful. Don’t forget to follow us. And see you next time.