What effect will AI have on the content industry?

[Image: Mona Lisa in a futuristic background]

These are exciting but volatile times in the content industry, and that’s because the introduction of generative AI is causing the greatest upheaval since the introduction of the internet. That’s not hyperbole; navigating this upheaval and its consequences is probably going to be the ongoing, defining challenge of this entire generation of content professionals.

We’re still only in the earliest stages of this upheaval, though, and one new startup’s ingenious breakthrough could turn the world upside down tomorrow. That’s why trying to answer this question by focussing on a particular platform that’s one update, buyout or court ruling away from changing dramatically is probably not the right approach.

Instead, let’s examine the major opportunities, the biggest hazards and where the jokers in the pack might be, and how you can wisely respond to them. Bottom line up front? Even though it may feel like science fiction come to life when the conversation turns to AI, we recommend keeping your feet firmly on the ground.

The fundamentals of what AI can do with content

Let’s start with a very quick logical exercise:

Q: Can you eat soup with a spoon?
A: Yes, of course.

Q: Can you cut down a tree with a chainsaw?
A: Yep, same again. That’s what it’s designed for.

Q: Can you cut down a tree with a spoon?
A: Well, uh, maybe. You might be able to eventually, but it’s hardly ideal.

Q: Can you eat soup with a chainsaw?
A: No! Merely failing would be the best outcome you could realistically hope for here. Seriously, don’t even think of trying this at home!

As this sequence illustrates, the tool you use for a task matters. If you get that choice right, life becomes much easier. If you get it wrong, the consequences sit somewhere on a spectrum between difficulty and disaster.

Right now, and probably for the short-term future too, the role AI can play in the content industry is situational: it can sometimes be a useful tool, but it is not a truly independent agent (philosophically speaking) in its own right. This state of play is best described as ‘human-machine teaming’1, where an intelligent combination of an AI’s brute force and a human’s finesse can offer the best of both worlds.

For example, we’ve been using AI-based transcription software for some time now. Although it occasionally needs a hand with names and accents, it has saved our editors countless hours of typing everything up (and, worse, listening to our own voices and wondering ‘Do I really sound like that?!’) every time we interview someone.

Actually turning the transcript of that interview into a decent article is another matter entirely, though. Generally speaking, our experience has been that the quality of work done by a generative AI is usually adequate – no more, no less – and that perfection is still a long way off. In plenty of contexts, that’s satisfactory because a human can provide that crucial polishing to get a first draft to the finish line and there is still a healthy saving in time and trouble.

In cases like these, it’s possible to be honestly optimistic about AI’s positive effects. Take the example of serial entrepreneur and crime thriller writer Ajay Chowdhury, who recently spoke to Sky2 about how generative AI has been a genuine help. “Eight out of 10 times, whatever AI gives you might be thrown away, but the other two times you might think it’s a decent idea that can be expanded on,” he explained. “Using a combination of these tools is giving me a much better draft to submit. I am finding that I get to what would have been a fifth draft by the second draft.”

The practical limitations of AI

That’s all well and good, but what about the less clear-cut cases? Like trying to cut down a tree with a spoon, what’s theoretically possible on paper and what’s actually best practice in the real world are two different things. Using AI in the wrong context or without sufficient care can be actively counterproductive, and there have already been high-profile cases that ended in avoidable disasters.

And, like trying to eat soup with a chainsaw, some things really are impossible and to try at all inevitably leads to failure. As things stand, I could ask a human writer to test-drive a car, explain how music made them feel, give their impression of the atmosphere at an event, describe the adrenaline rush from jumping out of a plane, review the food at a restaurant and taste-test whisky. An AI could do none of these things because it doesn’t have the relevant senses, emotions and personal capacity for subjective judgment.

That’s why we need to protect ourselves from getting carried away with enthusiasm and appreciate the limitations of AI as much as its capabilities. An AI only acts in ways it has been programmed to act, whether by accident or design, but humans can go further. We can act of our own accord, create something truly original instead of just an average of billions of data points, take the road less travelled instead of the obvious path, apply live emotion and humane morals as well as sterile calculation and ruthless efficiency, make leaps of logic to create unexpected connections and earn the right to bend certain rules – or even break them outright.

What’s on the horizon for AI?

…is a difficult question, but it is reasonable to predict that several major factors will influence AI’s role in the content sector. To some extent, they are interlinked.

  • The first is regulation. At the time of writing, the AI industry is not heavily regulated, but it is realistic to expect that to change soon. For example, the use of so-called deepfakes to impersonate someone with illegal, immoral and/or hostile intent is already a matter of grave concern. Norway, France and Israel have set an interesting precedent with laws requiring edited images to be clearly marked with a disclaimer, so marking AI-generated work in some way may likewise become a legal requirement – enforcement would be a major headache, though – or part of a voluntary code of conduct for one or more industries.
  • This overlaps with the second problem: rampant piracy. Instructing or training a generative AI to write in the style of a particular writer is fairly straightforward, so scammers are doing just that and fraudulently publishing imitation works under real authors’ names – as Jane Friedman found out the hard way3. How to reliably spot these fakes, what to do when they are detected and how to prevent them appearing in the first place are, as yet, unsolved problems. It may become effectively impossible for a writer to completely protect their reputation from these digital doppelgangers, and exactly the same problem applies to brands and their respective brand voices too.
  • The third is how existing legislation might also be applied. There has already been plenty of publicity – and at least one ongoing class action lawsuit – about how various generative AI models have been trained using vast amounts of content that they may not have legally been able to use. It would not be unthinkable for the courts to respond by awarding heavy damages, which will doubtless have an effect on the long-term future of the companies involved. Good luck to the judge who has to untangle exactly how those damages should be awarded, though!
  • The fourth can perhaps best be called a spam problem. Generative AIs can create a lot of content quickly, but whether that content has any inherent merit is another matter. Publishers and readers alike are already being deluged by dubious AI-generated works, and that’s probably only going to get worse. In one case that prompted uproar and crossed the line into sheer bad taste, Forbes4 highlighted a 44-page e-book about the then-ongoing 2023 Maui wildfires, launched on August 10, which made the logically impossible promise to “chronicle the events of August 8-11”. Unsurprisingly, the use of generative AI was suspected, accusations of a cynical attempt to exploit a disaster were plentiful, an investigation failed to locate the “mysterious” author and the book has since been withdrawn from sale.
  • Finally, most generative AIs aren’t secure yet. Feeding an ordinary generative AI raw material covered by an NDA, the GDPR or a similarly ironclad duty to protect sensitive information is off the table. However, as this is the closest of the five to being a pure technology problem, a technology-based solution is probably most readily found here, and some generative AIs adapted with security in mind, such as ChatGPT Enterprise, are already available – for a price.

It's the end of the world as we know it…

AI will have a dramatic effect on the content industry to say the least, but perhaps it can best be understood as multiple effects all happening at the same time and all ricocheting off each other. The future of the sector will probably be a situation-dependent mix. Some things will be done by AIs instead of humans; some things will be done faster, better or just with less hassle by humans with AI support; some things will be done by AI because humans don’t want to do them even though they could; and some things will still be done entirely by humans because that’s the only way they can be done at all.

But, although it’s too soon to say exactly how fast or how much AI will change the face of the content industry, we can already be certain that it’s not going to change the following fundamental principle: what matters most about a tool is the intent of the person using it. The same hammer that can be used to build a house can also be used to break into one, and that’s why we should take care to use tools responsibly and make sure they are used to help people instead of causing harm. Sometimes that means consciously not using a tool, because a genuine human can sidestep the golden hammer fallacy and realise that not every problem can or should be solved the same way.

The constant under all this is the judgment that comes with sentience. An AI might be able to write something in a fraction of the time a human could, but a human editor could take one look at it and tell you that you can’t publish it because you’ll libel someone. A wider appreciation of context, which stretches above and beyond the limits of content, is crucial and that is still the prerogative of human beings. Perhaps it always will be.

That’s why the best move you can make when it comes to AI is to talk to human beings, and Dialogue’s award-winning ones at that, who can help you navigate the changing but still valuable content industry. For one thing, we can definitely promise that we won’t ever glitch on you and tell you to eat soup with a chainsaw!

