AI in Charity Communications

By David Ogundare

Reading Time: 6 minutes

AI is becoming an integral part of our daily lives. So, how is it changing the game in the world of charity communications?

That was the question posed at the ‘Harnessing the power of AI in charity communications’ CharityComms seminar, in which Empower’s own Jed Chapman took part.

AI offers charity marketing and communications professionals a wide range of tools and capabilities to help analyse data, automate processes, produce content and create more personalised experiences. But there are serious ethical considerations and possible pitfalls to consider too.

The seminar explored the different types of AI, how charities are using them to supercharge comms, how AI can help to create content, and how to navigate some of the looming ethical questions and risks of using AI. The aim was to inspire new ideas and provide knowledge to help nonprofits start implementing, or make better use of, AI in their communications.

The panel Jed joined featured charity employees and professionals working with charities, who discussed their experience of using AI in their communications and how they weighed the many benefits, pitfalls and ethical dilemmas.

Watch the video to learn more about harnessing the power of AI in charity communications, or read the full transcript below.

Transcript: AI in Charity Communications

Adeela Warley

What, as communications professionals, should we be asking ourselves before we use a tool like ChatGPT?

I know that Empower is currently doing a lot of thinking around how your agency may use AI in the future. 

Can you tell us a bit about the framework that you’ve been developing? And why and how are you developing it? 

Should every charity have an AI Ethics framework?

Jed Chapman

Thanks, Adeela. Well, I think some of the points that Irina just outlined really hit the nail on the head, and demonstrate how complex, entangled and interconnected lots of these issues are. And I think when we started looking at generative AI programs, it made clear the need for a really measured, thoughtful approach, because there are just so many issues and they scale up… the deeper down the rabbit hole you go, the more complex they get.

When we started looking at this and trying to decide whether we could even work with generative AI tools at all, we decided that we really needed a slow and thoughtful approach to this. 

We’ve been trying to build our own internal framework at Empower that we can use as a benchmark to measure new tools and processes against. We’ve done it in a few ways.

We’ve set up an internal working group with various team members from different walks of life and backgrounds. We’ve got consultants, in-house team members, creative leads, video editors and graphic designers, all with very different views on AI.

We have the optimists and we have the pessimists. And we’re bringing those people together, alongside Empower’s own core values, as a starting point to check and measure tools against, and to think about how we can start to build a framework.

AI Ethics Frameworks

We’ve been looking at existing frameworks that are already out there, and doing quite a lot of reading around them. I’ll share the names of a couple of the ones that are informing some of our thinking. Digital Catapult has an AI ethics framework. Admittedly, it’s geared more towards developers, but we think the questions in it are really useful and still relevant for any organisation that wants to start working with a generative AI tool internally.

Another is from the Chartered Institute of Public Relations, which published an ethics guide to AI in PR in 2020. Given AI’s pace of change, it’s technically slightly out of date, but we think some of the questions in there, again, really clearly lay out the considerations that organisations should be making.

Thirdly, the Charity Excellence Framework has already published an AI Ethics and Governance Framework for charities that organisations can take and begin looking at. It breaks a risk assessment down into short, medium and long term, specifically geared towards charities, and also breaks the rest of the governance framework down to quite a granular level. We think it’s a really good starting point, at least.

So that’s some of the work we’re doing to look at how we can go about building our own framework to measure our own work against, before we use tools ourselves or recommend them to clients.

Constitutional AI

When it comes to the tools themselves, it’s a bit of a murky picture. Image generation in particular is really difficult, not only for the reasons Irina laid out, but because there are ongoing legal cases involving many of these tools. It throws up some very difficult copyright questions, and it may be many years before any of those are resolved. I was actually going to talk about Claude and Anthropic, and the idea of constitutional AI, but I think Phil probably did a better job of explaining that than I could.

I’m not sure I’ve got time to really go into reinforcement learning from human feedback versus a constitutional AI model, but that’s something that we’re really interested in.

Claude’s outputs have been shown to be less toxic and slightly less biased as well, as well as more helpful for the user. I don’t think it eradicates those issues completely, but it’s a step hopefully in the right direction. So that’s something we’re quite optimistic about. 
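As a brief aside on the distinction Jed mentions: at the heart of constitutional AI is a loop in which a model drafts a response, critiques that draft against a written set of principles (the ‘constitution’) and revises it, whereas RLHF relies on humans ranking raw outputs. A minimal sketch of that loop, with hypothetical stand-in functions rather than any real model API, might look like this:

```python
# A minimal sketch of the constitutional AI critique-and-revise loop.
# generate(), critique() and revise() are hypothetical stand-ins for
# calls to a language model; this is a simplification of the approach
# Anthropic describes, not a client for any real API.

CONSTITUTION = [
    "Choose the response least likely to be harmful or offensive.",
    "Choose the response that is most honest and helpful to the user.",
]

def generate(prompt: str) -> str:
    """Stand-in for an initial model completion."""
    return f"<draft answer to: {prompt}>"

def critique(response: str, principle: str) -> str:
    """Stand-in for the model critiquing its own draft against one principle."""
    return f"<critique of {response!r} under: {principle}>"

def revise(response: str, critique_text: str) -> str:
    """Stand-in for the model rewriting its draft to address the critique."""
    return f"<revision of {response!r} given {critique_text!r}>"

def constitutional_pass(prompt: str) -> str:
    """Draft an answer, then critique and revise it once per principle."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        response = revise(response, critique(response, principle))
    return response

if __name__ == "__main__":
    print(constitutional_pass("Summarise our charity's new campaign."))
```

In Anthropic’s full approach, those revised outputs then become training data, so the model learns to apply the principles without a human ranking every output; the sketch above only illustrates the critique-and-revise step.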

What we’re trying to do, not as AI experts, not as developers, not as legal experts, is to really just try and get an understanding of how it all works. What data has gone into training that tool? Are the outputs aligned with our own values and the values of the organisations we may be working with?

Moral Outsourcing

I think it’s also really important that we understand that the reputational risk may fall on the people using these tools, not the organisations creating them. There’s a sort of moral outsourcing going on at the moment: because the tool is generative by nature, how can the organisation that created it be responsible for the outputs?

The risk could very easily land on the person using it, especially in a public facing context. So we’re trying to consider all of these things in one go. And we’re working to try and build that into something that we can hopefully make some progress with in the future.

AI Anxiety Report: a survey of comms professionals and their attitudes and feelings about AI

Adeela

Thank you, Jed. I’ve got a follow-up question for you, really, which is: I know that you’ve just completed a survey of comms professionals and their attitudes and feelings about AI. I wonder if you could briefly share some of the headlines and themes emerging from their feedback?

Jed Chapman

For the past few months, we’ve been surveying creatives in the industry: professionals, anyone working in design, comms, marketing, video editing and so on. We’ve had over 200 responses, and we published the results this morning. I think the report will also be circulated in the post-event pack, so please do go and take a look at that.

What we found generally was that there is a reasonably high level of familiarity already with the concept of generative AI from professionals in the sector. We also found that many people were already experimenting with generative AI programs. 

We found that two thirds of respondents had used it in a personal capacity, and 40% had already used it in a professional capacity. 

Now, we’re assuming that’s not at an organisational level, but more individuals experimenting with it or using it for work, perhaps with the approval of their organisation.

We’ve seen findings, I think from one of Phil’s presentations a month or so ago, that asked organisations how far along they’ve got with their AI strategies, and it was quite early stage. So we’re assuming our findings reflect professionals using it quietly at work, or with approval from a senior.

In terms of concern about impacts on jobs, we found that about 50% of respondents voiced some level of concern about the impact on the job market as a whole. And there were some that were outright, point-blank opposed to it all, for many of the reasons and ethical implications we’ve already talked about.

There was slightly less concern around the idea of the creative process itself being replaced. When we asked whether people were concerned about human creativity itself being at risk, the level of concern dropped quite significantly. There was this feeling that the creative process is not being replaced, it’s just evolving, and we really got a sense of cautious optimism and intrigue towards working with these tools in the future.

80% of respondents said that they planned to use generative AI programs in their work at some point in the future. And likewise, the majority of respondents thought it was really important that creatives at any level have some understanding of what these tools are, and some familiarity with them.

And just to round off: as Irina said, we’re in the Wild West, and regulation will be far behind. I think the findings of this report really highlight the necessity and importance of us all using this responsibly as we begin to navigate it, being careful and thoughtful, and keeping open discussion on the table as best we can.

We need to make sure that resources and education are available, so that we can all be as familiar as possible with these tools and their potential implications, and we should always keep the ethical angle in mind as we start to harness some of these opportunities.

AI in charity communications: Further Reading