Or, better question: can a company use generative AI for anything at all? Yes … but.
Generative AI can very quickly produce content that addresses the specific characteristics of a crisis, if prompted with thoughtful, targeted questions. Tools like Bard and ChatGPT can help create news releases, statements, and social media posts.
… but don’t expect AI to take the place of a skilled strategic communications professional. And that’s only partly because you can’t count on a generative AI tool to supplant human judgment.
You can use AI to create a public statement. But you should not rely on it for the finished product.
AI Can “Talk,” but How Well Can It Communicate?
Generative AI has certainly arrived at the workplace door of virtually all human endeavors. However, it’s still a work in progress. To use it, you have to prompt the tool with a question. The more deftly crafted the prompt and the more fully formed the information provided along with the question, the more useful the answer will be. (Prompt Developer is quickly becoming a valuable organizational position.)
Obviously, that means you already need some expertise in the area you’re asking about. Ill-considered or poorly worded prompts can (and will) generate responses that contain factual errors or include much that is irrelevant. Responses can also miss recent developments, since many of these tools cannot access up-to-date information on the Web. In addition, depending on the dataset the AI tool was trained on, your prompt can unintentionally surface deceptively written source material and generate biased views. That’s a big reputational danger in the current societal climate.
AI’s Information Base Can Be Limited
A recently released upgrade of OpenAI’s ChatGPT uses a new user-agent crawler, GPTBot, to “scrape” content on the Web in real time, overcoming an earlier limitation that confined generative AI’s knowledge base to material published before 2021. However, some news outlets – The New York Times, The Washington Post, and others – have already configured their websites to block the crawler from accessing current information available only behind their paywalls. Similar digital tools let non-journalism websites “disallow” GPTBot from accessing proprietary data and information too, further limiting generative AI’s range, power and accuracy.
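For context, that blocking typically happens through a site’s robots.txt file, the standard plain-text file that tells crawlers what they may visit. A minimal example, following OpenAI’s own published guidance (you would adjust the paths to match your site), looks like this:

    User-agent: GPTBot
    Disallow: /

Those two lines ask OpenAI’s crawler to stay off the entire site; a publisher could instead disallow only specific directories, such as the ones holding paywalled content.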
Can a Company Use Generative AI to Create Public Statements? Maybe, but Don’t Trust It
Here’s an example of what we mean.
We prompted ChatGPT: “My company needs to install a new president ASAP. Write a holding statement for me.”
Within seconds (literally 3 seconds), the tool drafted a one-page letter to “employees, stakeholders and partners.” It started by saying the Board of Directors had decided to appoint a new president to ensure “the continued growth and success of the company.” Good enough, so far – if true. But then it went on for two paragraphs detailing the company’s months-long search and selection process, which certainly could not have been the case if, as the prompt indicated, there was a need to install a new president ASAP. It continued with two more paragraphs praising the outgoing president and expressing gratitude for the departing individual’s work. Clearly this response doesn’t match circumstances that required a sudden leadership change. Frustrating, to say the least.
How Can a Company Use Generative AI Effectively?
To ensure that AI content is accurate, appropriate, and consistent with your company’s values, it’s crucial that you train the tool on a diverse dataset (so wrote both Bard and ChatGPT). Generative AI tools learn to perform better (generate more accurate, timely and pertinent responses) as you refine your prompts over time. When it comes to generative AI, a saying from the early days of computing is profoundly relevant: “garbage in/garbage out.” For generative AI to be a useful communications tool, let alone an effective one during a crisis, experience and skill in drafting prompts are a must. The more specific you can be, and the more targeted and content-rich the information that supports your prompt, the better the result. In other words, it helps to know what should be in the answer before you ask the question.
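To see the difference, compare our vague prompt above with a hypothetical, more fully formed version: “Our president resigned unexpectedly yesterday. Draft a brief holding statement for employees, stakeholders and partners that announces [name] as interim president, acknowledges the suddenness of the transition, avoids speculating about the reasons for the departure, and notes that the Board has begun a formal search for a permanent successor.” A prompt like that tells the tool what the answer must – and must not – contain, which is exactly the point.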
As the earlier example shows, it is crucial that a human review and fact-check any statement or longer content before you share it publicly. Think of generative AI as a pretty good intern. Even the best interns need mentoring and management. You would never trust them, unsupervised, with your company’s reputation or that of its brand. Without proper oversight – and without the skills, knowledge and savvy needed to provide proper oversight, in this case that of an experienced crisis communications consultant – you’re doing just that.
Transparent Comms Best Practice: Always Cite Your Sources
Finally, transparency is an important consideration in using generative AI to create effective crisis communications – or any public-facing statement, for that matter. Companies that want to hide the fact that generative AI was employed to create messaging do so at their peril – AI tools are already available to detect whether ChatGPT, Bard, or another generative tool authored your content. These AI “detectives” are keeping pace with the sophistication of the tools themselves, so you might be outed anyway. As with all reputational threats, taking responsibility is the first step to managing the message.
I used both ChatGPT and Bard “interns” to draft early versions of this piece. Then I reviewed the work for accuracy and completeness, revised and edited the outcome, added content, and restructured the entire piece for the audience I was writing for – just as I would do for a piece drafted by a human intern. Hopefully, it shows.
The Takeaway
While AI tools can help, if you’re hoping to avoid hiring a crisis communications consultant by letting AI do the work for you, you could find yourself worse off than when you started.