Feeling behind on AI? A practical guide for education marketers
A graphic of a bell curve, the same curve that is used in the innovation adoption curve developed by Everett Rogers in the 1960s
There’s a pattern playing out amongst higher ed marketing and comms teams right now. You might recognise it. It goes something like this:
AI came onto the scene fast. At first, it felt like something to keep an eye on, while some of your colleagues jumped straight in. Then it was everywhere - dominating headlines, strategy meetings and LinkedIn feeds. And if you didn’t get on the learning curve early, it’s easy to feel like you’ve been left behind.
It feels like 2004 all over again, when social media adoption played out in much the same way. And, in a way, it’s a classic innovation adoption curve pattern, as defined by Everett Rogers in the 1960s. We have our:
Innovators
Early adopters
Early majority
Late majority
Laggards
If you’re not familiar with the innovation adoption curve, this guide by High Tech Strategies gives a useful overview and understanding of the different segment needs.
With any new technology or approach, it’s easier for some folks to ignore it entirely. We’re all busy, and we don’t always have time to immerse ourselves in something that doesn’t yet feel embedded or proven. For others, there’s a subtle shift into scepticism - not because you truly believe AI is useless or dangerous, but because it’s easier to dismiss it than admit you’re not sure where to start.

We see this all the time. And we get it. When you’ve left it a while, the gap between “I should probably look into this” and “I’m too embarrassed to ask basic questions” can feel insurmountable. You’re not alone, and it’s not too late.
Caught between the hype and the fear
The AI conversation has become increasingly polarised. On one side, we have the evangelists: bold claims, rapid adoption, promises of productivity and transformation. On the other, the naysayers: (genuine) concerns about ethics, job losses, misinformation, or simply a flat-out “not for us.”
Neither extreme helps us move forward with clarity. The evangelists can optimistically gloss over crucial hurdles, while the naysayers often write it off without truly exploring it.
Here’s where I stand: AI has huge potential. Real, practical, immediate potential. But it’s not a magic fix. It can’t replace critical thinking, creativity, or human connection. And it’s only as useful as the purpose we set for it, and the quality of the input (training) we provide.
So rather than sit in fear, or mask uncertainty with cynicism, we need a different way to engage. One that’s grounded, strategic, and human-first.
But if we want to move to a different place on the adoption curve, we need to admit that we’re a little in the unknown and just take that first step forwards. That’s where I want to help, and what this post is all about today.
What are we really talking about when we talk about AI?
“AI” isn’t one thing. It’s a collection of tools and systems designed to interpret, generate or automate information. You’re probably already using it, even if you don’t realise: in your inbox, your search results, your content recommendations, your online shopping experiences, your movie or music recommendations.
Generative AI - the type of AI that writes, designs or codes - is just one part of the story. That’s the type we’re talking about when we use tools like ChatGPT, Claude, Perplexity and DALL-E. And it’s the part that has sparked both the most excitement and the most concern amongst marketing and communications professionals.
But the real opportunity lies in how we combine these tools with our own expertise to solve real problems in our institutions.
AI still feels inaccessible for so many
There are plenty of reasons why professionals in education marketing and communications still feel unsure about AI:
The pace of change is dizzying (it’s dizzying even for those of us immersing ourselves in it)
Many tools feel built for tech teams, not content teams
The learning curve feels steep, especially under time pressure
There’s real concern about ethics, bias and data privacy
And let’s be honest, nobody wants to look like they don’t know what they’re doing.
But here’s the truth: most tools are only as good as the person using them. AI is not a magic wand. If you feed it vague prompts, biased data or unclear instructions, you’ll get weak, unhelpful results.
It’s the classic line - and you’ll hear it said a lot about AI - rubbish in, rubbish out. And while some naysayers have genuine, tested concerns about the tools, others have become naysayers simply because they’ve been feeding AI rubbish prompts, then blaming the tool for the rubbish outputs.
AI is not a replacement for your skills. It’s a tool that requires your strategic thinking, creative direction, and contextual understanding to function effectively. Used well, it can amplify your impact. Used carelessly, it’ll just add noise. Awful, robotic noise at that.
Where AI can help (and where it can’t)
We’ve seen first-hand where AI can genuinely support education marketing and comms professionals. With the right guidance, it can:
Speed up the synthesis of research, transcripts and qualitative data
Help teams surface and reuse great content that already exists
Ideate for content and campaign planning
Represent audience perspectives through custom trained “persona” approaches
Generate content variants for different channels or audiences
Check alignment with style and tone guidance, and with accessible and inclusive writing
Make policy and strategy documents more digestible
Build searchable tools that make audience insights accessible across teams
But what it can’t do is understand your audience, your tone, your institutional politics, or your strategic goals. That’s still on you. It needs structure. It needs context. It needs clear instructions. And - most importantly - it needs your human judgement at the helm.
All AI and machine learning models are essentially prediction machines. They look at the patterns that have gone before, and they produce outputs predicting what they think you want, based on those patterns. So, if you’re looking for originality, you need to build that originality into your prompts - it can’t come from within the model itself. It’s not built for, or capable of, generating original thought.
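If it helps to see the pattern-matching idea concretely, here’s a toy sketch: a tiny bigram model in Python. It’s nowhere near the scale or sophistication of a real language model (the corpus and words here are made up for illustration), but the principle is the same - predict the most likely continuation based on what has appeared before, with no capacity for anything genuinely new.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words most often follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent continuation seen in training."""
    counts = model.get(word.lower())
    if not counts:
        return None  # word never seen: the model has nothing to predict
    return counts.most_common(1)[0][0]

# A made-up scrap of "training data"
corpus = "open day open day open evening campus tour open day"
model = train_bigrams(corpus)
print(predict_next(model, "open"))  # "day" follows "open" most often
```

The model can only ever echo the most common pattern in its training data - which is exactly why originality has to come from you, not from the tool.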
Start with strategy, not shiny tools
Feeling overwhelmed by tools is understandable. New ones launch daily, each promising the earth. And if your social media feed looks anything like mine (and thanks to machine learning algorithms it probably doesn’t, but stay with me), you’re inundated with ad after ad for yet another AI agent assuring you that your life admin will be wholly fixed if you just pay out that $19.99 a month. Sound familiar? Before you sign up for another free trial, stop and ask:
What challenge am I actually trying to solve?
What would success look like for me or my team?
Who else needs to be involved to make this work?
When you start from purpose, AI becomes a useful assistant, not a confusing distraction. And you stay in control of how and where it adds value.
So, instead ask:
What tasks do we do that feel repetitive?
What tasks do we do that could be automated?
What tasks do we do that take me away from using my best skills and expertise?
What tasks do I sometimes get stuck or blocked on?
Therein lie the clues to some of the tasks you might start with when using AI tools.
Navigating the ethics
We can’t talk about AI without addressing the big questions: who is responsible for its outputs? What data is it trained on? Who owns that data and those inputs and how is copyright respected? Are we amplifying bias? Are we being transparent with our audiences?
Responsible AI use in our sector must consider:
Bias and representation in training data. We humans are biased AF (no matter how much we may think we aren’t), and our biases have been built straight into AI training models
Transparent use and disclosure. What are the boundaries around declaring our use of AI? When should we (like when creating wholly AI-generated videos), and when does it simply not matter (like when using a tool like Grammarly to check your grammar)?
Data protection and legal compliance. How are we handling and protecting personal data or copyright protected content?
Ongoing monitoring, not just one-off checks. What structures do we need in place for checking AI outputs for quality and reliability?
We don’t need all the answers to get started. But we do need to be asking the questions, and committing to using these tools with care and curiosity.
Ready to explore with confidence?
If any of this feels familiar - if you’ve quietly put off learning about AI, or you’ve dabbled but never felt fully confident - then our upcoming webinar is for you.
AI back to basics: tackle your FOMO fast is a practical, non-judgemental session for education marketing and communications professionals who want to stop feeling stuck and start getting smart about AI.
Join us on Friday 30 May 2025 at 1pm BST for the live session, or catch up on demand in the ContentEd+ hot topics archive.
Sign up to attend for free (you’ll need to subscribe to a free account on ContentEd+).
We’ll cover:
The fundamentals of how AI works (in plain English)
The best use cases for our sector (and the ones to avoid)
How to brief and prompt AI tools effectively
Where to start, what to try, and how to stay in control
Whether you’re curious, cautious or somewhere in between, this is your hour to reset your relationship with AI and take the first step forward - on your terms.
How else can Pickle Jar help?
Over the last 2-3 years we’ve been immersing ourselves in AI trends and concerns so that we can help you adopt and embrace it effectively, ethically and with impact. We have helped institutions by:
Building custom GPTs and chatbots
Providing introductory training workshops to get started with exploring generative AI
Advising on the development of AI policies and ethical frameworks
Coaching and mentoring teams exploring the applications of the tools for marketing purposes.
Want support with a measured and meaningful adoption of AI? Tell us where you’re starting from and we’ll share how we can help you move towards those next steps.