Content Creation and ChatGPT: A PR Pro’s POV One Year Later
March 12, 2024
Last year, I ruminated about how generative AI would change the content creation game in PR. I didn’t have any hard-and-fast conclusions, except to say that we’d have to pay attention to these tools because they’d become really, really good.
Most people were probably hoping for something more definitive. But if you can say for certain where these tools are headed, I’m impressed by your confidence.
While the exact scope of change in our industry is unclear, I’ve come to a conclusion about its value to me as a writer specializing in thought leadership for B2B tech executives.
Without a doubt, generative AI is the best intern I’ve ever worked with. But that intern grew up in a small, isolated commune and had limited contact with the outside world. Generative AI isn’t a substitute for professionals’ decades of lived experience. If you give it something to do, it does it without nuance. It can’t draw on the ambition, the success, the joy, and the boredom of being human. It doesn’t know what it’s like to eat sugary cereal for breakfast and watch Saturday morning cartoons.
It also makes stuff up far more than you’d expect, given its wholesome, communal upbringing. Still, it’s pretty useful, despite its mendacity.
Where Are We Now?
At the end of 2022, most of us were amazed that generative AI could produce text seemingly indistinguishable from human-generated content. As colleagues kicked the tires on the technology through 2023, the novelty wore off. AI-generated text isn’t as indistinguishable as it once seemed, because most of us have gotten pretty good at spotting it.
That said, I’ve used generative AI nearly every day over the past year. It’s saved time and allowed me to arrive at a better product faster. I’ve even created a few custom GPTs to automate things that have been particular sticking points in my writing.
But I’ve been puzzled by some of the bolder claims out there. A few of the technology’s biggest proponents in our industry have announced that junior staffers can now write bylines in one-tenth of the time. It’s true, generative AI will write a byline very quickly. I’m sure all of us are waiting to read it.
The fact is that generative AI is best viewed as an assistant that requires a good amount of hand-holding, fact-checking, and editing. Discussions with practitioners at my firm and others have echoed that view. It’s a great way to automate mundane tasks with low requirements for originality, like email campaigns in which each piece of content is closely related to the others. Using it for higher-level, long-form content requires a step-by-step, systematic approach with much more human input. It won’t shave off 90% of the time, but it will save some.
How Much Better Can It Make Us?
The question remains: How good and how fast can it be?
A recent report from Harvard Business School provides some insight. Researchers gave a series of tasks to more than 700 consultants from the Boston Consulting Group. Some consultants were given access to ChatGPT, while others served as a control group. One set of tasks dealt with things that generative AI is clearly good at. Another set served as a kind of trick question: consultants were asked to complete tasks that seemed like the sort of thing ChatGPT would handle well, but that were explicitly designed to fall outside the capabilities of generative AI.
On the tasks generative AI is clearly good at, consultants with access to ChatGPT completed 12% more tasks on average and finished them 25% more quickly. Human graders also rated the quality of their work higher. These tasks included prompts like “Generate ideas for a new shoe aimed at a specific market or sport that is underserved. Be creative, and give at least 10 ideas.”
Then there were the tasks outside the bounds of generative AI’s competence. These involved analyzing a spreadsheet and transcripts of interviews with executives from a fictional company, then making recommendations to the CEO. While the spreadsheet seemed unambiguous at first glance, the interviews revealed nuances about the state of the company. On these tasks, consultants using ChatGPT were more prone to error and misinterpretation. In fact, a subgroup of the ChatGPT-using consultants received additional training on the tool, and they were even more prone to error.
That finding raised questions for the researchers. Would regular people know when they were using generative AI for tasks outside the tool’s competence? Could the trust we put in these tools be a double-edged sword?
Another finding: Many ChatGPT-generated pieces of content sounded the same, prompting researchers to note that in “a competitive landscape where many are leveraging AI, outputs generated without AI might stand out and achieve notable success due to their distinctiveness.”
The questions the study raises, and the lessons it offers, are largely transferable to other generative AI models, for now at least. They’ll undoubtedly get better, and the question facing most of us is how fast these tools will make the jump from intern to partner.
I’d bet that the improvements will be incremental rather than exponential, and that’s a good thing. It gives us all time to learn and experiment with these tools, rather than wait around for the thing that will blow GPT-4 out of the water. The benefits will accrue to those who understand both the strengths of these tools and their limitations.
As agencies continue to refine and implement their AI strategies, these questions will loom large: When should we use it? When should we not? How do we train employees on the limitations of these tools, and how do we restructure our workflows to get the most out of them? As the tools improve, we’ll have to ask ourselves these questions over and over again.