
Ben

Co-Founder

05 September, 2025 • Reading time: 5 minutes

As the AI transformation continues to accelerate, so does its impact on the charity sector. Here are some of the main AI challenges that Empower’s clients and team have been working on recently.

The charity sector has changed dramatically since we published our original AI Manifesto in 2023 and our annual update last year. What we’re seeing now reinforces our cautious, values-led approach whilst also revealing opportunities that deserve serious attention.

The numbers tell part of the story: 76% of charities now use AI, up from 61% in 2024. But behind this rapid uptake lies a messier reality that bears out our earlier concerns about the gap between AI promises and practical impact.

Here are some of the main AI challenges that Empower’s clients and team have been working on recently.

A sector in AI transition

Charities find themselves caught in an impossible bind. 75% continue prioritising digital transformation despite crushing constraints, while 69% say squeezed finances are their biggest barrier.

The result? A sector splitting in two. Well-resourced organisations race ahead with thoughtful AI implementation. Everyone else gets left behind by technological change they can’t afford to navigate properly, widening the sector’s skills gap.

AI leadership skills gap

Despite 76% of charities using AI, 36% admit their CEOs have poor AI skills, and 44% say the same about their boards.

We’re not talking about coding expertise. We’re talking about basic literacy needed to oversee technology that could reshape operations. The corporate world isn’t doing much better. Harvard Law School research shows only 13% of S&P 500 companies have directors with AI expertise, whilst 45% haven’t even put AI on the board agenda.

The California Management Review offers a framework spanning strategy, expertise, processes, ethics and culture. Most organisations, they note, remain stuck at the reactive stage, firefighting rather than planning.

The alarming bit? Only 23% of charities have updated their risk registers for AI, and just 22% have reviewed their governance frameworks. These aren’t nice-to-have activities. They’re basic safeguards that most organisations are ignoring.

For those seeking strategic AI guidance, this governance vacuum means serious risk – especially when it comes to disintermediation.

As the field of Generative Engine Optimisation (GEO) emerges, generative engines are reshaping the search experience:

   Generative Engine Optimisation (GEO) is the process of ensuring AI-driven search tools such as Claude, ChatGPT and Gemini surface your content. Unlike traditional search engines, AI search engines go beyond simply retrieving information: they generate comprehensive responses by synthesising information from multiple sources.

GEO has upended traditional search behaviour, resulting in ‘Zero-Click’ patterns, where the user is satisfied with the AI-generated summary alone. For example, the Patient Information Forum found only one in ten people click through to charity websites after reading AI summaries.

This means AI tools are inserting themselves between organisations and beneficiaries, creating what researchers call disintermediation:

   Disintermediation is a term with origins in banking, essentially summed up as: “cutting out the middlemen”. 

GEO is cutting charities out – blocking the connection between people seeking help and the nuanced, expert information they need.

Say someone with cancer needs treatment information. Instead of visiting a trusted cancer charity’s website, they ask ChatGPT or rely on AI-generated search summaries. They might get advice from other countries that don’t apply to the UK, or unverified forum posts mixed with legitimate guidance. 

The implications stretch far beyond website traffic. When people bypass trusted sources for critical information, preventable crises become more likely. And who picks up the pieces? Not the tech companies that created the problem. 

It’s values-driven organisations that are left dealing with the consequences, which places a hidden burden on them.

Trust

“Signs of AI writing should be treated as signs of a potential problem, not the problem itself” – so said Wikipedia in August 2025 as it launched a detailed page on Signs of AI writing.

Discussing Wikipedia’s stance, our team probed the complexity of our own experience: clients’ expertise on their topic and with AI, audience expertise and complexity, sign-off processes, budgets and more.

Broadly, we see the risk: AI-generated copy can create a dangerous disconnect between nonprofits and the communities they hope to serve.

When charities lean heavily on AI to craft their thank-yous, donor appeals or volunteer communications, they’re outsourcing the very thing that makes their work meaningful: authentic human connection. It creates a sense that someone’s ticking boxes rather than genuinely grappling with the lived reality of the issues at hand.

This isn’t about writing quality; it’s about trust. Non-profit work is built on the understanding that these organisations genuinely care about the people they serve, that they’ve taken the time to listen, learn, and respond thoughtfully.

AI is a useful copywriting partner, so Empower and our clients use it where appropriate – but always with an expert human eye and a clear audience in mind, to avoid eroding trust.

The hidden AI infrastructure burden

While everyone debates AI ethics, charities are quietly subsidising the AI revolution through a hidden infrastructure tax that nobody talks about.

Tania Cohen at 360Giving has documented nine months of battling AI bots that hammer their platform with thousands of queries, sometimes bringing it down entirely. The cruel irony? Their data is freely available to download, but these bots scrape aggressively anyway.

The costs cascade through everything. Developer time gets diverted from building new features to fighting bot attacks. Infrastructure bills spike as servers strain under artificial load. Smaller charities without 360Giving’s technical resources face challenges that could silently destroy their sustainability.
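For teams wondering whether the same pattern is hitting their own site, the first step is simply to look at the server logs. Below is a minimal, illustrative Python sketch (not a definitive tool) that tallies requests from a handful of widely reported AI crawler user agents; the signature list and the access.log path are assumptions to adapt to your own platform.

```python
# Minimal sketch: tally requests from AI crawler user agents in a web access log.
# The signatures below are illustrative examples of widely reported AI crawlers,
# not an exhaustive list, and "access.log" is a placeholder path.
from collections import Counter

AI_BOT_SIGNATURES = ["GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Bytespider"]

def count_ai_bot_hits(log_path: str) -> Counter:
    """Count requests per AI crawler signature found in an access log."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            for signature in AI_BOT_SIGNATURES:
                if signature in line:
                    hits[signature] += 1
    return hits

if __name__ == "__main__":
    for bot, total in count_ai_bot_hits("access.log").most_common():
        print(f"{bot}: {total} requests")
```

Even a rough tally like this can show whether bot traffic, rather than genuine visitors, is driving infrastructure costs, and which crawlers to address via robots.txt or server rules.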

Then there’s the misinformation problem. AI systems repurpose charity content without attribution, often introducing errors that can’t be corrected once absorbed into a model. 360Giving now fields more queries from people with wrong impressions of their work, consuming precious staff time.

AI companies extract value from charity expertise while pushing costs back onto organisations trying to serve society. Those working to make the world better end up subsidising AI’s infrastructure – even as new offerings such as agentic AI promise to change the equation.

The promise and peril of agentic AI

The latest AI frontier promises tools that work independently, handling complex tasks with minimal oversight. Cwmpas uses AI agents for analysis and tender responses, with one consultant estimating agents handle 70% of project development’s heavy lifting.

For stretched organisations, the appeal is obvious. Imagine agents managing volunteer rotas, analysing crisis data, or compiling case studies around the clock. But autonomy brings risks that demand careful management, particularly when teams have mixed AI knowledge. This compounds the leadership skills gap we’ve already identified.

McKinsey’s analysis suggests agentic AI could break the “gen AI paradox” where 80% of companies use AI but see no bottom-line impact. However, it also introduces “systemic risks that traditional gen AI architectures were never built to handle.”

The reality check comes from Gartner’s prediction that over 40% of agentic AI projects will be cancelled by 2027 due to escalating costs, unclear value, inadequate controls and environmental concerns.

AI’s environmental concerns

Our concerns about AI’s environmental impact have intensified. 

The investment rush in generative AI has seen Microsoft’s emissions jump 30% since 2020, whilst Google’s have increased 48% since 2019. These increases represent a fundamental contradiction between the tech sector’s climate commitments and its AI ambitions.

Google claimed that its Gemini AI uses just 0.26ml of water per prompt – dramatically lower than previous research estimates. UC Riverside researchers thoroughly debunked this claim, revealing a critical flaw in Google’s methodology.

Google’s researchers engaged in statistical manipulation. They compared their on-site water consumption figures to total water consumption estimates from other studies – a false equivalence that dramatically understates AI’s true environmental cost. As UC Riverside’s Associate Professor Shaolei Ren pointed out: “Their practice doesn’t follow the minimum standard we expect for any paper, let alone one from Google.”

AI’s water footprint is far more serious than Google’s selective statistics suggest. Google’s own estimates show that about 80% of water removed from watersheds near its data centres is consumed by evaporative cooling alone. But water is also consumed off-site, through cooling towers at power plants, in generating the electricity needed to power AI systems.

UC Riverside researchers criticised Google’s cherry-picked comparisons. Google selected the highest total water consumption figure from the UC Riverside study (47.5ml, which it rounded up to 50ml) and compared it to its own on-site-only figure. Ren noted: “They not only picked the total, but they also picked our highest total among 18 locations.”
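To see how much that mixing of scopes distorts the picture, here is a small, illustrative Python calculation using the figures quoted above; the off-site ratio is a purely hypothetical assumption for illustration, not measured data.

```python
# Illustrative arithmetic only. The 0.26ml (Google, on-site cooling only) and
# 47.5ml (UC Riverside, highest total across 18 locations) figures come from the
# sources discussed above; the off-site ratio below is a hypothetical assumption.

google_on_site_ml = 0.26   # Google's claim: on-site evaporative cooling, per prompt
study_total_ml = 47.5      # UC Riverside: on-site plus off-site (electricity) water

# Comparing an on-site-only figure against another study's total mixes scopes,
# making Google's number look roughly 99% lower than the research estimate.
apparent_gap = 1 - google_on_site_ml / study_total_ml
print(f"Apparent gap from mixed scopes: {apparent_gap:.0%}")

# If off-site water use (power-plant cooling) were, say, three times the on-site
# share - a purely hypothetical ratio - the like-for-like total would be:
hypothetical_offsite_ratio = 3
like_for_like_ml = google_on_site_ml * (1 + hypothetical_offsite_ratio)
print(f"Illustrative like-for-like total: {like_for_like_ml:.2f} ml per prompt")
```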

This pattern of environmental data manipulation extends beyond water consumption. MIT’s comprehensive analysis of generative AI’s environmental impact reveals that the computational power required to train models like GPT-4 demands staggering amounts of electricity, whilst manufacturing and transporting AI hardware creates substantial embodied carbon emissions.

Cause-led organisations considering AI adoption must weigh this environmental cost in decision-making. 

Our sustainability-focused approach to digital strategy helps organisations balance technological adoption with environmental responsibility, reaffirming our AI manifesto.

Reaffirming Empower’s AI manifesto

AI’s promise continues to be oversold, whilst its risks become clearer and more pressing.

These AI developments reinforce our original AI manifesto.

For Empower and our clients, this means we need to:

  • Proceed with caution: The skills gaps, governance failures, and systemic risks we observe suggest that slow, thoughtful adoption remains the wisest path.
  • Focus on genuine utility: Where we use AI – primarily for internal analysis of anonymised data – we ensure it serves a clear purpose and aligns with our values.
  • Prioritise human connection: As disintermediation threatens the vital links between charities and their beneficiaries, maintaining authentic human touchpoints becomes more important.
  • Consider broader impact: Every AI implementation decision must weigh not just immediate utility, but environmental cost, societal impact, and alignment with our mission to better the world.

Recommendations for cause-led organisations using AI

For our clients and other cause-led organisations grappling with these AI developments, we recommend the following:

  • Start with governance: Update risk registers and governance frameworks before implementing any AI tools. This isn’t bureaucracy – it’s essential infrastructure for responsible adoption. Our AI governance and strategy services can help establish these foundations and ensure your approach aligns with your organisation’s values.
  • Address skills gaps at the top: Leaders don’t need to become technical experts, but they need sufficient understanding to provide proper oversight and make informed decisions. We offer executive training programmes and workshops designed specifically for charity leadership teams navigating AI adoption.
  • Counter disintermediation: Invest in direct channels like email and CRM. Strengthen your brand presence within large language models. Consider how to maintain visibility and trust in an AI-mediated world. Our digital strategy and communications work addresses these challenges directly.
  • Be selective about agentic AI: The potential is significant, but so are the risks. Ensure proper technical infrastructure, data management, and skills development before delegating tasks to autonomous AI agents.
  • Measure what matters: Look beyond productivity metrics to consider environmental impact, beneficiary experience, and alignment with organisational values. Our impact measurement and evaluation expertise helps organisations develop holistic assessment frameworks.

 

The question facing our sector isn’t whether AI will continue to develop. Instead, we must decide whether we’ll shape that development to serve our missions and values, or allow those shaping AI’s priorities to shape us.

At Empower, we remain committed to the path we set in our original AI manifesto: thoughtful, values-led engagement with AI that prioritises human flourishing over technological hype. This year’s developments have strengthened our conviction that this remains the right approach for organisations committed to bettering the world.

If your organisation is navigating these AI challenges, we’re here to help with our AI transformation services. The Empower team brings deep experience in helping charities and purpose-driven organisations make thoughtful technology decisions that serve their mission.
