Five futures of Public Relations

Probability – that great driver of generative AI – is the only keen eye we can cast over what might come next. For the last twelve months, I’ve been rolling the dice on probable futures for public relations: the shape of practice to come, and what we need to learn and understand if we are to keep our sector on the road.

So what are the five futures? And what’s the likelihood of each? You can read about each one in detail over on LinkedIn if you would like to find out more but, at Susanne’s request, here’s an abridged version of what we’re in for.


The last decade has seen practice transform and, in all that time, one future has been cruising along in front of us and by our side – data-driven public relations. It is a strategic approach that uses data to formulate, implement and assess public relations programmes. Simply put, data-driven PR harnesses internal and external data to inform, support and evaluate our decisions. And there’s a lot of data to harness: anything from audience demographics, online behaviours, issues and trends, to employee engagement, campaign results and relationship benchmarks.

For many of us, data-driven public relations is one of those ‘the future is now’ moments, as using data to inform strategies has been a core element of practice for a number of years. But if we’ve been using it, can it really be considered a ‘future’? I think it can, if only because practitioners have fought shy of incorporating data analysis into their work, leaning towards the ‘PR is Art’ argument rather than ‘PR is Science’ (although the reality is that it is both). Data plays a significant role in reputation guardianship – understanding the data that’s out there in the wild and how it is being used, or misused, in and around your organisation.

Data-driven public relations is powerful stuff. It fosters precision and encourages customisation. By leveraging data, practitioners can tailor programmes to specific communities of interest and stakeholders, improve outcomes and maintain (or build) relationships. It supports evidence-based decision-making rather than intuition or outdated models. Data-driven practice encourages decisions based on quantifiable facts. Most importantly, it gives us the ‘measures’ in measurement and evaluation, providing clear indicators of campaign or programme progress – crucial for continuous evaluation, improvement and accountability.

One major challenge is the skills gap. Data analysis and interpretation require a specific skill set, often lacking in our sector. In 2012, the Skills Wheel I developed highlighted data as an area for professional development, but the move towards acquiring the skills has been slow. Ethically, the lack of understanding also fuels the privacy concerns associated with the handling of personal data, while the potential for algorithmic bias and errors further complicates a data-driven approach. As some of the tasks undertaken by practitioners are made redundant, data-driven practice will shape the nature, structure and operations of our profession. We’ve already seen new roles emerge – data analysts, data storytellers and digital ethicists, to name a few – but as yet, few practitioners have moved into those roles. I launched my first data-driven storytelling course in 2010 – and guess what? Nobody signed up. Only in the last couple of years have I seen practitioner interest start to bubble – but not boil.


We’ve been discussing the potential impact of generative artificial intelligence for nearly a decade now – I’ve lost count of the number of webinars, presentations and training sessions I’ve run on the topic since 2013 – but it is only since November last year, when OpenAI released ChatGPT, that practitioners started to really consider the implications of this new technological union – for practice, for organisations and for society.

Despite the explosion of interest, coping with the speed of change and the constant consequences of each new application is hard. Generative AI is a game changer – but not necessarily a nice one. What we’ve seen since ChatGPT was released into the wild is the greatest shift since the advent of the internet. The main task ahead for practitioners is helping our organisations navigate this shift: minimise division, guard against misinformation, and train and guide policy, people and prompts. Asking the right questions, our existing superpower, becomes more important than ever.

Practitioners need to understand what they are dealing with – the difference between AI types and models and their application. Interestingly, there is common ground between data professionals and public relations professionals – both disciplines set out to frame the problem by asking the right questions and it is this common ground where I believe the AI powered public relations future lies. There is no going back from this (as long as the electricity stays on) as artificial intelligence is embedded in our lives – and practice – from this point on. This particular future – as they say – is now. 

We must remember that you can’t trust generative AI. I’ve nicknamed ChatGPT ‘Dobby’, after the house-elf in the Harry Potter books. Every day Dobby produces inaccurate and fictional information which, when questioned for accuracy, is followed by long and profuse apologies with elaborate back stories on its fictions and hallucinations.  

Often – because it is working on probability – it appears to be something of a people-pleaser which we know is not a good operational model. 

As a society, global or local, we can’t function if we don’t trust each other, and the ways in which those bonds of trust are formed have shifted and changed. Trust is central to the work we do – building and sustaining the relationships to maintain our licence to operate – and without trust, leadership is virtually impossible. Trust connects us, binds us together and allows us to move forward while an absence of trust leaves a vacuum filled by fear and suspicion. As people, we do better together than we do apart but without trust, we become less able, less likely to connect.  

AI is only as good as the data it’s trained on – whether that data is accurate or inaccurate, biased or unbiased, well informed or misinformed, that’s what we will be served. The greatest danger to our role within organisations is being regarded only as tactical implementers rather than as the strategists and relationship builders necessary to maintain the licence to operate. The greatest dangers to society include misinformation, deep fakes, fabrication, fractured social cohesion and the digital divide becoming a chasm.

Rolling out language models without ethical consideration is irresponsible, and the potential for great harm to communities, individuals and organisations is significant. The digital environment is now a place of great contradictions where we can – as always – have amazing or terrible things, depending on the choices we make, the values we hold, and the underlying intent.

I’ve always said the greatest competency for a practitioner is courage – the courage to speak out, to listen, to advocate, to evaluate, and – pardon the cliché – to speak truth to power. Perhaps we can add to that spotting untruths in power. If that role is neglected or ignored, then we really will be replaced by ChatGPT and its successors faster than Dobby could iron his hands.

As organisations seemingly move into a new golden age of purpose (having forgotten all about it for a few decades) and attempt to align their purpose and values as we enter this new era, perhaps a good starting point would be to clean up their act online and be mindful of good outcomes as they undergo technological transformation. Ensure that they use data responsibly and redraw digital terms of engagement for benefit and without dark patterns. Enabling trust online isn’t about shiny new tech, artistic artificial intelligence or a great serve from search. It is, as it’s always been, about the way we behave, the choices we make, and a genuine desire to build a fair and equitable society. Enabling trust online and offline is up to us. That’s our space in an AI powered future and, as we’ve observed, that future is now. 


The third future for public relations is all about you and it’s personal. Hyper-personal, in fact. I suspect a lot of us have experienced that spooky moment when, after a random in-person conversation, we start to see ads and information connected with our chat appear on our feeds. It happens even though we have never searched for the subject in our lives and, despite little or no interest on our part, ads on German castles, Icelandic pony trekking or tips on chicken farming just keep coming.

We’ve data to thank for that and we’ve got used to the personalisation of brand and consumer offerings over the years – Amazon, Netflix, Starbucks and others led the way and for a lot of people, that’s just fine. They’re happy to hand over their data in return for a more personalised experience but, as we’ve shifted into the faster gear of hyper-personalisation, and with that ‘personal experience’ coming at us in real time, it’s time to test the brakes before we find ourselves on a runaway train. 

I’m hoping we all agree here that there’s no such thing as the public. There are an infinite number of ‘publics’, and anyone who has ever created a stakeholder map will know that mapping is messy, with our communities of interest getting smaller and smaller and the information we share, the stories we tell, the connections we make, becoming ever more personal. We also know that there is no such thing as ‘the public interest’ – there is a multitude that rises and falls, clashes and diminishes, much like the spheres in an old-school lava lamp – my lifelong analogy for community activity.

Given the need for niche communication appropriate to smaller groups and individuals, it always surprises me that many organisations still use a shotgun approach, trying to be ‘everything, everywhere, all at once’ for everybody.

Different ways of using data give us different types of personalisation. ‘Ordinary’ personalisation is about the past, whereas hyper-personalisation is about the present. Hyper-personalisation combines real-time data, machine learning and advanced technologies such as predictive analytics to deliver individual experiences to customers. And we’re not just talking about nuts-and-bolts data like your IP address, activity and location. We’re now looking at emotional recognition, layered over facial recognition, manifested most recently in Pizza Hut’s emotional recognition system that will serve up a pizza best suited to your current mood. It’s an ‘in-your-face’ future we glimpsed many years ago in the film Minority Report and, unimaginable though it seemed, the reality is more sinister than the movie. Facial recognition is used by governments and organisations on a daily basis, and there is a blurred line between hyper-personalisation and surveillance.
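The past/present distinction can be made concrete with a toy sketch. Everything here is hypothetical – the Profile and LiveSignals fields and the recommend function are illustrative inventions, not any vendor’s actual system:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Profile:
    """'Ordinary' personalisation: what we already know about a person."""
    past_purchases: list = field(default_factory=list)

@dataclass
class LiveSignals:
    """Hyper-personalisation adds what is happening right now."""
    location: str = ""
    detected_mood: str = ""      # e.g. inferred via emotion recognition
    current_activity: str = ""

def recommend(profile: Profile, live: Optional[LiveSignals] = None) -> str:
    # With history alone, we can only extrapolate from the past
    base = profile.past_purchases[-1] if profile.past_purchases else "bestseller"
    if live is None:
        return f"more like your last choice: {base}"
    # Real-time signals reshape the offer on the spot
    return f"{base}, tuned for a {live.detected_mood} mood near {live.location}"

p = Profile(past_purchases=["margherita pizza"])
print(recommend(p))  # history only
print(recommend(p, LiveSignals("Wellington", "tired", "commuting")))
```

The point of the sketch is the second call: the moment live signals enter the function, the output changes in real time – and so do the ethical questions about where those signals came from.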

So what does this future mean for public relations practice? 

First call will be inside our organisations, guiding ethics and intent. Just because something is legal doesn’t make it ethical, and organisations need to be sure of their intent. A Deloitte report from 2020 – Connecting with meaning: Hyper-personalizing the customer experience using data, analytics, and AI – states that, for the CMO, hyper-personalisation is there to drive profit and the technology will maximise revenue. I would argue that this is not good societal intent. It is legal, but certainly questionable on the ethical front. The sourcing, use, retention and future deployment of live data need to be understood and the intent identified. This will help – at least in some small way – to preserve the relationship and maintain a mutually beneficial human connection between organisation and people; otherwise our people and communities simply become a cash cow, milked dry of data.

Hyper-personalisation also lands us on the shifting sands of misinformation. Targeted information conceived, seeded and based on scanned mood will undermine societal cohesion. As many countries enter election cycles, it is entirely probable (and I’m holding my breath just waiting for this to happen) that the atomic mix of technologies used in this way will unleash mayhem in many parts of the world.  

Deeper understanding and management of AI-human relations will form part of our role. As organisations deploy generative models to develop and optimise ‘content’, that content will, in the end, speak only to other algorithms, leading to relationship breakdowns and humans slipping through the cracks of perceived communication. 

It is hard enough now to speak to a person – negotiating chatbots, ‘live’ online help desks (run by chatbots), possibly a digital human with a smiley face – and, if you are very, very lucky and prepared for a four-hour wait, you might – and only might – get through to a human. The dark patterns used in websites for years, which make it harder to leave the site or complain, have been seamlessly integrated with AI-powered ‘personalised’ customer service provision (and I use the term ‘customer service’ loosely). Such methods may reduce costs, but they ultimately reduce customer numbers too.

How then do we avoid dangerous deployment? It could be that regulation will play a part, or that the control and training of AI will be taken out of the hands of hugely powerful corporates like Apple, Meta, Google, OpenAI and others – the shadowy others that we seldom see.

The movie ‘Oppenheimer’ was recently released and tells the story of the atomic bomb, first tested on July 16, 1945 – 78 years ago. It was a technological development that changed the course of history, and we remain in its shadow today.

The deployment, testing and use of generative AI, a technology that improves itself but which has human bias and frailty at its core, is another such moment. Without brave navigation supported by good intent, the explosion of data and implosion of truth and reality will injure us all and the opportunity for good – great good – will be lost.   


‘You’d best start believing in ghost stories, Miss Turner – you’re in one.’ So says Captain Barbossa in the first Pirates of the Caribbean movie and, in our fourth future – immersion – we need to start believing in the shadows and the stories we will find there, because we’re already in it up to our necks.

While AI has been the talk of the town since the release of ChatGPT, immersion – the state of being inside an invisible technological frame – has been quietly sucking us all in. Apple Vision Pro was announced in early June. It costs an astronomical amount, but it creates another world for us to inhabit – a liminal space between the physical and virtual worlds which we can control with our eyes, hands and voice. For those who remember Google Glass, it’s a throwback to that time but with a modern twist – and that twist is that we have less control over our data and identity than we have ever had before. Experience is enhanced but agency diminished.

The teasing promise of immersive technology is generally regarded as unfulfilled. While Meta put a large clutch of tech eggs in its virtual reality basket, the real progress in immersion has been elsewhere, less obvious, a little darker and multi-layered. We may not be able to afford Vision Pro or even Meta’s bulky Oculus but we’re happy wearing our smart watches, carrying our smart phones, engaging in interactive games, creating a virtual meeting room that allows us to immerse ourselves with our Teams. What then does this mean for us, as practitioners – and why is it one of our potential futures? The reality is that immersion has been in our future for a long time because, as always, the question of ethics is central to what happens next. 

If we create experiences for our communities and stakeholders, immerse them in our brands, our organisations, our brave new worlds, what are our responsibilities? The amount of data exchanged by our wearables, viewables, transmittables is out of our control as individuals but, for organisations, ethical decisions on the methods, purpose and intent behind the immersive worlds we create need to be taken as experiential communication methods become a dominant form of engagement.  

I wrote a piece for PR Conversations some years ago – lightly titled ‘Why public relations must wake up to wearables’. Although time has gone by, some things don’t change and one of my observations at the time was this: 

Alongside the mapping of what we know, we need to look carefully at what we don’t know. What will we need to tackle in the next wave of social and technological change, how must we expand our knowledge and what skills do we need to develop?

As practitioners we will have to help our organisations navigate a world driven by communication and, by necessity, underpinned by trust. If we don’t equip ourselves to do this now, then quite simply, we will be as redundant as our skills of old.

Less than 10 years ago, I recall talking to a roomful of public relations professionals about how technology was going to change the way we—and society—communicated. They hadn’t heard of YouTube, still had to use a dial-up connection to get their emails and thought the idea of a smartphone both improbable and laughable. “Who would want to do that?” was the consensus when we discussed posting comments and updates on blogs. 

Yet here we are. 

What seemed improbable then is now an accepted and integral part of our daily lives. There are seismic social, economic and political shifts ahead; ones that will make the changes of the last few years seem incidental. 

The increasing prevalence of immersive technology will have huge implications for practitioners. As expectations evolve, practitioners will need to adapt their strategies and upskill their teams. The development and enforcement of ethical guidelines around immersive experiences – as with AI – is essential. Privacy, consent and the potential for manipulation are the bellowing echoes in our virtual rooms.  

The fourth future is – like our other potential paths – filled with complexity, concern, challenge and creativity but I’ve said it before and I’ll say it again: 

“Alongside the mapping of what we know, we need to look carefully at what we don’t know. What will we need to tackle in the next wave of social and technological change, how must we expand our knowledge and what skills do we need to develop? 

As practitioners we will have to help our organisations navigate a world driven by communication and, by necessity, underpinned by trust. If we don’t equip ourselves to do this now, then quite simply, we will be as redundant as our skills of old”. 


Nine humanoid robots took to the stage to front a media panel for the United Nations. It was an event that aimed to connect visionaries with an array of UN organisations and investors focused on sustainable development: “the UN-driven event provides an unprecedented chance to empower these cutting-edge innovators to tackle global challenges, including the 17 Sustainable Development Goals (SDGs) set out in the 2030 Agenda for Sustainable Development”.

“We have to engage and ensure a responsible future with AI,” explained ITU Secretary-General Doreen Bogdan-Martin in the event media release, but as each humanoid was questioned by international journalists – each reporter riffing on the ‘will robots take over the world’ theme – I couldn’t help thinking there’s going to be a lot of work to do in the realm of human-AI relations as humanoid power and capability advance. And, in looking at our fifth future for public relations, I am even more convinced that our future direction will be the same, but also very different. We will still build the relationships necessary to maintain our licence to operate, but those relationships, along with the social licence to operate (SLO), will feature even greater complexity.

Over at the UN, the robotic line-up featured some familiar faces. Sophia, you may recall, arrived as global ambassador back in 2015. Ameca, launched a month or so ago, is the world’s first robot capable of recognising and responding to human emotion. Making the panel was the rather creepy Geminoid, an android copy of his creator, Professor Hiroshi Ishiguro, who uses his doppelgänger to give lectures, cover classes and explore what it means to be human.

Not so familiar faces included Grace, an advanced healthcare robot companion; Desdemona, the rockstar robot ready to change the creative arts through the power of AI; Ai-Da, the renowned AI artist challenging the notion of what constitutes art; and Mika, the first global robot CEO. The remaining panellist was Nadine – another humanoid replica, this time of her creator, Professor Nadia Magnenat Thalmann. Nadine’s talent is her ability to learn and remember individuals and their responses, allowing her to tailor her interactions to the people she meets. Now, I’m not going to get into the gender politics of robotics here, other than to observe that, aside from the doppelgängers, all the humanoid panellists manifested as young adult females with inferred ages of 20-30 – far more females than you’d ever get on a tech panel at a conference or, for that matter, on the board of a billionaire’s company. Under-representation and pseudo-representation are an urgently needed discussion of their own, but here I’ll concentrate on the intersection between humans, humanoids, the Sustainable Development Goals, the social licence to operate (SLO) and the fifth future – and subsequent role – of public relations in all this.

We know society is in disarray – lack of social cohesion is right there at the top of the World Economic Forum’s risk report. There are huge inequities in play. There are wars and illegal invasions. As I write, the northern hemisphere is weathering another extreme climate event. Here in the southern hemisphere, we’re still mopping up after the many extremes experienced in 2023. Food scarcity is an issue – 345 million people are hungry as you read this. Two billion do not have access to safe water. And yet, for the most part, our societies remain centred on power and profit rather than human need. At the conference, the robots said they want to change that – to build a better society for all. And yet the environmental and economic cost of the technology is astronomical. The UN wants to harness the technologies on display to improve our global situation, and there is no doubt the technologies can be used for great good but, if they are left in the hands of private enterprises with profit and power fuelling their intent, will that good ever be realised? So here is a clear role for us – helping organisations determine intent, and helping them develop behaviours that support relationships and which, ultimately, grant the licence to operate.

Next comes the issue of language. Always, always, always remember that generative AI (and its application to robotics) is a product of human data – our biases, our flaws, our judgements as well as our creativity, determination and intelligence. Language informs regard and the regard in which humans will be held by AI is determined by the language we use – and abuse – ourselves. 

We all know language took a wrong turn back in 2015, when would-be and so-called ‘leaders’ let loose their invective across social media and battling Brexiteers belittled opponents. In subsequent years, we’ve seen the demonisation of innocent refugees by a series of governments, simply through the use of language. The way in which language has been used to demean, destroy and denigrate others has been, and continues to be, relentless, but I thought the word ‘meatspace’ and its application might be a useful example of what’s to come – and why a further role for public relations practitioners sits among the words that hedge the corridors of power.

Meatspace made its way into the dictionary last year. If you’ve not encountered it before, it’s the description – birthed into cyber fiction in the late nineties – given to the real, offline world and most frequently used by those in the tech industry. As is the way with words, it has crept into the crevices of colloquial conversation and today you’ll find it in headlines, discussions, conversations and networks. It is a horrid expression. Equally horrid is the noun ‘meatbags’ – applied to people occupying meatspace. Earlier today I stumbled on a Threads discussion on the future of Twitter where participants consistently used the term meatbags, casually unpicking humanity from the discourse. 

And so we have a problem with language and its intersection with technology. It is often said that the only two ‘industries’ that call their customers ‘users’ are drug dealers and tech companies. ‘User’ has always been dismissive but meatspace and meatbags? There’s a deep cut into our humanity. What then is our future here? It is the same as it has always been. Past, present and future, our role is the development of respectful relationships that are mutually beneficial, equitable and which elevate humanity rather than denigrate by description.  If we can’t get organisations of all types to respect and elevate their communities – and each other – how will we manage the human-AI relations that have crossed the horizon and are now knocking on the door? 

There are millions of public relations practitioners around the world and each one will have their own approach to practice. As I said in my first piece, I’m a proponent of the relationship approach and, over decades, those I’ve worked with and those I’ve trained have, with very few exceptions, been driven to do the right thing, improve matters for their stakeholders and communities, guide their organisations through complexity and change. And yet – our role has been misunderstood and misinterpreted by those misinformed as to what we do.  

That role has changed – is changing. It will continue to change because change is constant. If some practitioners (and their organisations) cling to a task-based model of practice, they will soon find nothing left to cling to, because those tasks can be completed by Grace. Or Ai-Da or Ameca, led of course by the next Mika-style CEO.

If, on the other hand, our relationships – human to human, human to AI – are at the core of what we do, then we can advise and guide towards betterment and avoid the belittling of others. Maybe even make some real progress towards those UN goals. 

Our present – and past – has always been concerned with the licence to operate, the permission given to do the things we do. Our future role remains focused on the relationships that grant that permission, but with an even greater emphasis on the social licence to operate.

There are many skills and competencies we need to develop in order to properly undertake that role. They won’t be what we are used to and we’ll need to be prepared to change our approach. We have to be prepared to learn, unlearn and relearn. We must look beyond the mountains to what’s next, scrutinise intent and, every time we stand in front of our colleagues ask a simple – but critically important – question: is this the right thing to do? 


Executive Director 

PR Knowledge Hub 
