Sophie Scott

The AI battle lines have been drawn. On one side, you have evangelists who see artificial intelligence as progressive, positive and problem-solving. To this group, AI is the secret weapon that’s going to boost our economies and make society more productive, efficient and, well, better.

The other side, though, is deeply skeptical about AI’s potential, wary of the technology leading to a Big Brother society and convinced it will change the way we interact with each other — for the worse.

Of course, as with all things tech, the debate is complex. There will be both benefits and problems along the way as AI becomes more pervasive in our lives. As communicators in the tech sector, we’re already pretty well-versed in this debate.

This article is featured in O'Dwyer's Nov. '18 Technology PR Magazine.

But one of the most important aspects of AI — its reputational impact — still doesn’t get the same level of attention or scrutiny. And it should.

We recently took a deeper look at the way organizations in every industry are thinking about AI and communicating around it. The report — “Artificial Intelligence & Communications: The Fads. The Fears. The Future.” — surveyed U.S. and UK consumers on their attitudes towards AI, and assembled a group of 25 experts to offer opinions on how and where the technology will make the greatest impact over the coming years.

The report reveals that the levels of hype around AI have influenced consumers’ views, leading to a disconnect between the popular understanding of AI and what it means in reality. It’s undeniable that we need much more education on the issue: 53 percent of global consumers believe there’s not enough education about the role of AI in society and more than a quarter (26 percent) say they have poor or no understanding of what AI is.

And regardless of age group, respondents agreed that the responsibility for educating the public about AI should be shared between key stakeholders in business, government and academia, a view held by 61 percent of all those surveyed.

This lack of understanding, paired with a clear desire for knowledge, provides vital context for communicators looking ahead to the coming months and years. We must make sure brands are aware of all of AI’s potential consequences and how these could be interpreted (or misinterpreted) by their audiences. Here are a few that are particularly important:

The bias problem

Humans have inherent biases and, because we effectively “teach” AI via the algorithms we develop, we pass many of those biases on in the process. With these inherited biases, AI becomes far less able to make decisions fairly. And, because there’s an expectation that AI is neutral and treats everyone fairly, evidence of bias will prove disastrous for any associated brand.

In the future, our clients that use AI will have to be active in defending the integrity of their algorithms against accusations of racism, sexism and other forms of discrimination. You might think this will concern only Silicon Valley tech brands, but government institutions that use AI to vet applications will have to be proactive in disproving any sense of bias. Likewise, if a bank uses an AI tool to assess the creditworthiness of mortgage applicants, will its algorithms treat people from different socioeconomic or ethnic backgrounds equally?
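That question of equal treatment can actually be tested. As a purely illustrative example (none of this comes from the report), here’s a minimal Python sketch of a demographic-parity check: comparing a model’s approval rates across groups and flagging gaps using the common “four-fifths” rule of thumb. The decision data, group labels and threshold are all hypothetical assumptions.

```python
# A minimal, illustrative audit of an AI lending model for group bias:
# compare approval rates across demographic groups (demographic parity).
# The data and the 0.8 threshold (the "four-fifths" rule of thumb) are
# hypothetical assumptions, not figures from the article or the report.

from collections import defaultdict

# Hypothetical (group, approved) outcomes from a credit model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1, False as 0

rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rates:", rates)

# Flag the model if any group's approval rate falls below 80 percent
# of the best-treated group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact: {group} at {rate:.0%} vs. best {best:.0%}")
```

Real audits are far more involved, but even a simple check like this gives communications teams concrete evidence to point to when defending an algorithm’s integrity.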

It’s still imperfect

AI in 2018 is by no means a finished technology. Human language and interaction are full of subtlety and nuance, refined over centuries. Likewise, every image comes loaded with symbolism and meaning. If human beings can regularly get these signals and social cues wrong, we can expect AI to make mistake after mistake along the way.

Communicators will need to understand the nuances of the AI technology used in their businesses. It can’t be something that the communications side of an organization just leaves to the techies; it’s vital that communicators are completely on top of the technology’s potential public impact.

They also need to be able to defend the integrity of AI’s decision-making. Communications teams will have to work with the public to build trust and goodwill. For example, how can the public help organizations build better technology?

Acting ethically

The data privacy and ethics issues aren’t going away, and they’re tied closely to the whole question of AI’s role in society. Between the rollout of GDPR in Europe this year, the tightening of privacy legislation worldwide and the impact of privacy breaches at tech and non-tech brands alike, the issue will continue to consume communicators’ time. Data is the lifeblood of AI; the more data it can access, the better it can perform. Communicators must ensure our clients have permission to use this data, and provide clarity and transparency over how it is obtained and used.

So, what can organizations do right now to start preparing for AI? Internal audits to determine how AI is being used in the business and how it will be deployed in the future can give communications teams proper oversight. Following this thorough assessment, they’ll have a far better idea of what customers, employees and partners need to know. Then they can develop a full communications plan, where an organization can set out how to communicate the benefits AI will deliver to key audiences.

At the same time, consider a risk-assessment program to work out everything from the potential impact of AI on employees to the consequences of a data breach.

AI will undoubtedly become more of a focus for communicators as society’s shift to automation gains pace. And it’s already clear that PR and communications will be critical in managing the issues and mitigating the reputational risks, while also telling the positive story of AI’s potential.

Our industry is uniquely positioned to do this. Our commitment to authenticity and transparency will help businesses build goodwill and understanding amongst audiences, deepen relationships in this grey area of new technology, and earn our place in the AI conversation that’s just getting started.

***

Sophie Scott is global managing director for technology at FleishmanHillard.