So you’re using AI now. Cool. Try not to break anything.

Matthew Hellicar

AI Safety
noun

  1. The art of not accidentally leaking your company’s entire roadmap into the cloud
  2. Also known as “please don’t get us sued”
  3. A delightful blend of common sense, paranoia and corporate policy
  4. Often ignored until it’s too late

AI Tool
noun

  1. A system that can generate essays, code, poems, lawsuits, conspiracy theories and pizza recipes, sometimes all at once, and usually a mix of the good, the bad, the plain wrong and the frankly inedible
  2. A miracle of modern computing
  3. A potential HR headache
  4. Definitely not your new brain

AI is fun – until it’s not

AI is the cool kid in tech right now. It drafts, edits, summarises, suggests, plans, generates, and sometimes invents things you never asked for.

But with great power comes great potential to accidentally share your salary spreadsheet with the internet.

Let’s take a look at what can go wrong, and how to make sure it doesn’t.

1. Sharing secrets with strangers (aka the AI server)

Most public AI tools work by sending your data to a bunch of servers you don’t own. Which means:

  • Your input could be stored
  • Your input could be used to train future models
  • Your input could be hacked, leaked or accidentally turned into a motivational quote (or an article)

Tip: If you wouldn’t write it on a whiteboard in front of your CEO or share it in the pub on a Friday night, don’t put it in an AI prompt.

2. Believing everything AI tells you (please don’t)

AI is confident. AI is eloquent. AI will tell you, with straight-faced certainty, that Abraham Lincoln invented email. It’s not lying. It just doesn’t know what “wrong” is.

Tip: Use AI like a helpful intern: enthusiastic, fast, but not entirely trustworthy. Always fact-check. Especially when it sounds suspiciously perfect.

3. Know where the robot learned its stuff

AI is trained on the internet. Yes, this internet. The one with Wikipedia, Reddit threads, and someone’s blog from 2009 claiming juice cleanses fix everything.

  • Bias? Yup, plenty of it.
  • Contextual confusion? Definitely. It can’t reliably read humour, sarcasm or irony, or spot the rubbish people post online purely to stave off boredom.
  • Advice based on your actual business needs? Probably not.

Tip: Your job probably relies on you being mostly right, and being wrong is likely to get expensive for you and/or your company. You don’t need advice from something that doesn’t know the difference between glue and pizza sauce.

4. Just because it made it doesn’t mean you own it

You asked the AI to draw a dragon holding a coffee mug. It did. It’s glorious. But can you put it on a T-shirt and sell it? Maybe… but probably not.

AI content is likely to be:

  • Based on copyrighted material
  • Unclear in terms of ownership
  • A lawsuit waiting to happen

Tip: Treat AI output like you found it in a Google Image search – cool, but not necessarily yours.

5. Trust but verify. Especially with faces.

Generative AI can now create videos of people saying things they never said. It can also make fake tweets, fake screenshots and fake news.

It’s equal parts entertaining and terrifying.

Tip: If something looks too good – a shocking quote, a pixel-perfect scandal – give it a side-eye. Then verify it. Twice.

6. Follow the rules (even the boring ones)

Using AI might make you feel like a digital cowboy, but that’s probably not what your company needs you to be. Most organisations have AI policies, or are busy writing them.

Tip: When in doubt, ask.

7. AI moves fast. You should too.

That article you read about AI ethics last month? Already outdated. There are new tools, new laws and new problems popping up daily.

Tip: Stay plugged in. Read things. Take a course. Attend a webinar. Or at the very least, listen to the one person on your team who’s actually read the policy doc.

Conclusion: The AI is not your parent. Or your conscience.

Generative AI tools are brilliant at a few things – but they have no instincts, no ethics and no sense of consequence. That’s still up to you.

So yes, use the tools. Enjoy the magic. But stay human. Ask questions. Be sceptical. Keep your brain turned on.

Because the real danger isn’t AI making mistakes.

It’s us letting it.

Here at The Maverick Group, we spend every day thinking about how to use AI safely. We often add a little (or a lot of) AI to our solutions, but we don’t blindly follow AI trends. Instead, our Technology Solutions unit helps our clients navigate the increasingly complex AI space in a somewhat obsessive search for the simplest, most efficient solutions (whether they involve AI or not), while keeping ourselves and our clients safe.
