Empower's AI Manifesto: Reflections 1 year on

Jed

Strategist & Account Manager

16 August, 2024 • Reading time: 9 minutes

Our AI Manifesto was published a year ago, so we ask: does bringing AI into your communications cause more problems than it solves?

When the AI furore was at its peak, Empower moved quickly. From internal workshops to temperature-check our feelings and anxieties, through to developing a values-aligned usage policy and ethics framework, we prepared ourselves in the face of uncertainty. 

And we’re glad we did, as Empower’s AI Manifesto has given us a firm grounding in our ethics-based approach to using Generative AI tools.

Since then, we’ve been better prepared to respond to unexpected challenges, and have been able to provide guidance and counsel to the organisations we work with, many of whom are grappling with this technology and whether, when or where it could be used.

We’ve also been able to identify a tidal wave of AI in job applications, and develop a sympathetic but thorough approach to getting to know the real candidate behind the banal gloss of AI copy.

We’ve found that we seldom use it ourselves, save for occasionally running anonymised social media metrics through an LLM to verify insights we’ve already found and to look for gaps in our analysis. Or for non-client work, like crunching research, meeting transcripts, or operational data into cleaner packages, for a human to then spend time crafting insights and takeaways.
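
To make that concrete, here’s a minimal sketch of the kind of anonymisation step we mean before any metrics reach an LLM – the field names and sample data below are hypothetical stand-ins, not a real client schema:

    # A minimal sketch (Python) of the anonymisation step described above.
    # The field names and sample record are hypothetical, not a real client schema.
    IDENTIFYING_FIELDS = {"author_handle", "post_url", "user_id"}

    def anonymise(record: dict) -> dict:
        """Keep only the fields that cannot identify a person or account."""
        return {key: value for key, value in record.items()
                if key not in IDENTIFYING_FIELDS}

    posts = [
        {"author_handle": "@someone", "post_url": "https://example.com/p/1",
         "likes": 120, "shares": 14, "comments": 9},
    ]

    safe_metrics = [anonymise(post) for post in posts]
    # Only these anonymised aggregates would then go into an LLM prompt, as a
    # second pair of eyes on insights a human analyst has already drawn.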

But now, as the dust begins to settle and we can see more clearly, it feels like the right time to check in on our stance.

Does AI carry overblown expectations?

The promise of increased productivity through AI.

In the midst of peak hype, there was enormous buzz around the impact LLMs – and generative AI more broadly – might have on our day-to-day lives and, especially for us as communicators, on our work.

We wrote about that here and developed this thinking into our AI Manifesto.

The promise was increased productivity (with OpenAI boldly asserting 80% of the US workforce could have their jobs affected) and reduced costs, as many “mundane” tasks typically handled by humans could then be automated. Image and video generation models also promised to revolutionise creative industries, despite a slew of lawsuits over copyright. 

However, the tune appears to be changing. 

There was the Goldman Sachs report, suggesting that the estimated $1tn of upcoming expenditure on AI is disproportionate to the actual impact generative AI currently has: nowhere near what it needs to be to deliver significant economic or productivity benefits.

These grumblings were echoed in a scathing paper from a researcher at MIT, suggesting that the real-world productivity gains from using generative AI may be as little as 0.56% over the next decade.

When it comes to implementation itself, research from Upwork has shown similarly ‘unimpressive’ results: almost 80% of respondents using generative AI in their work report that it is hampering their productivity and has added to their workloads.

There’s also a clear disconnect between C-suite leaders and workers themselves: 47% of employees using AI say they have no idea how to achieve the productivity gains their employers expect.

That’s not to say there aren’t viable use cases for AI in your work. There are plenty – the internet is still rife with AI-powered ‘productivity hacks’. The same research from Upwork conceded there are still some useful applications, but it’s far from a silver bullet for under-resourced and over-stretched organisations.

Generative AI is not climate conscious

The impact on truth and planet is worse than we thought.

There were plenty of other well-placed concerns around these models – data worker exploitation, bias, hallucinations, and copyright infringement, to name a few. These are still persistent issues today, and all worthy of serious consideration before adopting any tool.

At the time these felt like issues that were an unwelcome part of this new technology, but largely unavoidable to any organisation hoping to use and implement the tools. It would likely be years before legislation could catch up – and it’s not like the other pillars of big tech that we regularly and unquestioningly rely upon have a particularly clean sheet either.

However, almost a year on, the list only seems to be growing.

On the environmental front, a gold rush of investments into generative AI models by big tech has done little to support their climate goals. 

Microsoft, a leading investor in the technology, has seen its emissions jump by 30% since 2020 – putting its ambitions to become carbon negative by 2030 in peril. Google’s similar goal of net zero by 2030 also now seems a distant promise – their emissions have jumped 48% since 2019, with a 13% increase in 2023 alone. 

Then there’s the risk of creative roles being replaced. If the thought of our brilliant team being sidelined by what are often still ‘hit and miss’ creative outputs wasn’t bad enough, OpenAI’s CTO Mira Murati did little to allay our concerns when she laid bare the Silicon Valley stance: “Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place”.

Finally, there’s the impact on audiences – those who are on the receiving end of much of our work. Studies suggest social media audiences are less likely to trust content that uses generative AI, and this distrust increases further for generated content that deals with “hard” news topics. 

If you’re a cause-led organisation considering using this technology, there’s a lot to mull over here.

And for us, as a values-led agency that works with organisations trying to better the world, how could we conscientiously engage with this technology for the sake of disputable productivity gains? 

AI won’t take your job, but it might make it harder

The quote below from Ludic Mataroa, although overlooking some of the larger elephants in the room, does well to summarise our opinion on whether your organisation really needs to be adopting AI in any form.

The promises of Silicon Valley are landing quite differently in the real world, and you may well be wasting time and resources incorporating a technology that, in many cases, creates more challenges than it solves.

Compound this with the other, more significant teething issues, and the importance of careful consideration when approaching generative AI investment cannot be overstated. Empower stand by our AI Manifesto from 2023 – and even though we’re using AI in a far more limited capacity than we had braced ourselves for, the sentiment laid out there still stands.

What we at Empower believe in, how we ensure our use of AI is aligned with our core values, and how we will and won’t be using AI on your behalf – all of this is still our position one year on, which is a long time in AI.

You either need to be on the absolute cutting-edge and producing novel research, or you should be doing exactly what you were doing five years ago with minor concessions to incorporating LLMs. Anything in the middle ground does not make any sense … if you don’t have a use case then having this sort of broad capability is not actually very useful. The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant.

– Ludic Mataroa

What AI’s developments mean for our clients

Here’s what Empower recommend if you’re a cause-led organisation trying to approach this technology:

  • Weigh up the perceived gains against the costs – not just financially, but to your people and audiences too. 
  • Identify your use cases and consider whether generative AI, in its current state, will really solve a problem or streamline a process. 
  • Bring your team along for the journey. Consider developing your own AI manifesto and ethical use policy – internally discussing comfort zones, parameters and acceptable use cases will help you better understand your organisation’s feelings towards this technology, and better define your own stance. 
  • Look at ways AI could help with your internal admin – meeting notes, transcripts, consolidating data – saving your human brains for analysis or creative communications (see the sketch after this list).
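
As a worked example of that last point, here’s a minimal sketch of drafting meeting notes from a transcript. It assumes the OpenAI Python client and a placeholder model name; any LLM provider with a chat API would work the same way:

    # A minimal sketch of the internal-admin use case: condensing a meeting
    # transcript into draft notes for a human to refine. Assumes the OpenAI
    # Python client ("pip install openai") with an OPENAI_API_KEY set in the
    # environment; the model name is an assumption, swap in your own.
    from openai import OpenAI

    client = OpenAI()

    def draft_meeting_notes(transcript: str) -> str:
        """Ask the model for rough bullet notes; a human still crafts the insights."""
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model; use whatever your organisation allows
            messages=[
                {"role": "system",
                 "content": "Summarise this internal meeting transcript as "
                            "concise bullet-point notes with clear action items."},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

    # Example usage:
    # print(draft_meeting_notes(open("team_meeting_transcript.txt").read()))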
