AI slop and brand risk in zero-error environments
I am what some people would describe as sports-illiterate. Watching a soccer game is like observing an alien ritual conducted in a language without words.
But there are certain sporting events that even suckers like me can’t ignore. The Winter Olympics is one of them. I have spent maybe 30 minutes total in my life thinking about curling, but now that the Olympics have wrapped up, I am coming off a high from the Shakespearean drama of Canadians obsessively insisting on grazing the rock with their finger. Or a Norwegian skier confessing adultery after winning bronze.
The Olympics is one of those rare global moments where everyone is watching.
So with that in mind:
Is this really the best place to experiment with AI in promotional material?
If you’ve missed it, the official Olympics accounts posted promotional videos and images that were clearly AI-generated.

One example shows a luge athlete sliding down a track whose base appears to be made of penne pasta – a cute nod to Italy, the host nation. The problem? The Olympic rings in the image intersect incorrectly, violating the strict brand guidelines governing how they must appear.
In other words: the Olympics broke their own rules.
This wasn’t an overworked animator making a small mistake. It was a failure to respect the limitations of the tool.
While I might not know much about sports, I am fairly deep in the AI swamp. I belong to the generation that started using AI at the tail end of university – back when the models were bad enough that you intuitively understood their limitations.
Text models contradicted themselves constantly. Image models produced spaghetti-fingered hands and surreal nonsense.
Today, image and video models are vastly more capable. They can produce photorealistic visuals that are indistinguishable from real photography at a glance.
But that’s the key phrase: at a glance.
If you look closer, the artifacts are still there. They’re just more subtle now.
Most people won’t notice that the rings intersect incorrectly. And if we’re honest, most people couldn’t draw the Olympic rings correctly off the top of their heads.
But for a brand with billions of eyes on it, it only takes a handful of people to notice. And once they do, the mistake spreads.
The narrative shifts instantly.
AI goes from being a creative accelerator to a global laughingstock – and a legitimate brand risk.
The lesson isn’t “don’t use AI.”
The lesson is context.
When we use AI imagery commercially, we have to understand where it will live and how much error tolerance the environment allows.
A meme account experimenting? Fine.
An automated email flow with dynamic product images? 80% accuracy might be more than enough.
The Olympic Games? That’s zero error tolerance.
At Curamando, this is where we always start: the context.
Is this part of an automated flow with no human intervention?
If so, we design for “good enough at scale.” That might mean accepting that 80% is sufficient – because the tradeoff is speed, volume, and efficiency.
But if the output carries brand equity – logos, visual identity, high-reach campaigns – then the tolerance drops to zero.
And that changes how we use the tools.
For example, when we need to include brand assets like logos, we don’t ask the model to generate them correctly. That’s gambling. Instead, we generate the core image without the logo. Then we add brand elements afterward – either through controlled AI workflows or traditional tools like Photoshop – where precision is guaranteed.
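The compositing step can be sketched in a few lines. This is a deliberately minimal toy, not Curamando’s actual pipeline: images are plain pixel grids, and the logo, sizes, and coordinates are made up for illustration. In practice this step would happen in Photoshop or an imaging library such as Pillow, but the principle is the same: the model never touches the brand asset, it is pasted on afterward, pixel-exact.

```python
def overlay(base, logo, x, y, transparent=None):
    """Paste `logo` onto `base` at (x, y); pixels equal to `transparent` are skipped."""
    out = [row[:] for row in base]  # copy so the generated base image is never mutated
    for dy, row in enumerate(logo):
        for dx, px in enumerate(row):
            if px != transparent:
                out[y + dy][x + dx] = px
    return out

# "AI-generated" 4x4 background of '.' pixels (stand-in for the model's output)
base = [["." for _ in range(4)] for _ in range(4)]

# 2x2 brand mark with one transparent corner (' ') -- a hypothetical logo
logo = [["O", "O"],
        [" ", "O"]]

result = overlay(base, logo, x=1, y=1, transparent=" ")
for row in result:
    print("".join(row))
```

The point of the sketch is the separation of concerns: the generative step produces `base`, and the deterministic `overlay` step guarantees the brand mark lands exactly where and how it should.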
AI can draft.
AI can accelerate.
AI can inspire.
But AI should not be the final gatekeeper of brand integrity. At least not as of today – who knows what might be released tomorrow.
Because in high-visibility environments, it only takes one pasta-made luge track or one broken Olympic ring to remind the world that automation without oversight isn’t innovation – it’s negligence.
About the author
Leon Henzel is an AI Engineer/Developer in the Insights & Analytics team at Curamando, with a background in socio-technical systems engineering focused on machine learning and AI.