
GenAI: the hottest seat in the room
I sit in another meeting, watching the usual GenAI hysteria unfold: the business leaders are in full sprint mode, like there's no tomorrow. "We're late!" "We need AI everywhere!" "We'll fall behind and become obsolete!" "We're going to lose customers!" And, who knows, maybe even collapse into complete irrelevance... well, wait a minute!
Let's admit it: the pressure is real. AI pretty much looks like the next industrial revolution, and of course nobody wants to be the fool who bet against the steam engine. But let's also remember: hype doesn't build sustainable businesses; execution does.
GenAI: What it can (and can’t) do
Let's get this straight: GenAI is an amplifier, not a strategist. It can generate, summarize, and accelerate. It automates boring work, boosts human creativity, and crunches vast amounts of information in seconds. But AI doesn't think. It doesn't understand. It scales what you give it, good or bad: if your processes are broken, GenAI won't fix them; if your data is garbage, it will just generate more garbage, faster. And if you blindly trust its outputs, prepare for hallucinations that make some politicians' made-up success stories sound like audited financial reports. :-)
So how do you navigate these wild waters and stay standing when the storm hits?
The Punk CIO Playbook for a reasonable GenAI approach
- Ignore the "Fear of Missing Out". Being late and right is better than being early and reckless. Full stop.
- Demand real use cases. “We need AI” is not a strategy. “We need AI to reduce contract review time by 60%” or “We need to save 50% on our Client Acceptance process” are.
- Own the risks. AI can introduce security risks, compliance nightmares, and unintended bias. Treat GenAI like you would treat an intern: useful, full of potential, but not to be left unsupervised :-).
- Push for AI readiness, not just AI adoption. Clean data, clear workflows, and upskilled teams matter more than rushing an implementation.
- Centralize or democratize? Keep critical AI guardrails, such as governance, compliance, and security, centralized. For innovation and experimentation, empower teams to test and deploy within clear boundaries. All in all, you need a well-governed AI ecosystem that balances speed with protection and allows crowd-sourced progress within those guardrails.
- Pragmatic compliance. Total protection is an illusion: people will always find workarounds, especially when security measures make their jobs harder. Instead of blocking everything and hoping for the best, the smarter approach is to guide behavior. Give employees access to secure, approved AI tools so they don't feel the need to bypass restrictions. Focus on education over enforcement: train people on which tools to use and why, ensuring accountability without paranoia. Compliance shouldn't create friction; it should enable secure and productive work. Don't let it become an excuse to stall progress, but don't treat it as an afterthought either. Focus on the key issues: data privacy, AI decision accountability, and regulatory transparency. Build compliance into your AI approach rather than retrofitting it later.
AI isn't going to replace CIOs, but bad AI decisions might very well do it. Our job now isn't to chase hype (there are plenty of others doing that 🙂). It's the same as with every other technology: to make sure it actually delivers. Without real strategy and execution, AI isn't a weapon; it's a distraction, a compliance headache, a security risk, and, last but not least, a money pit. Standing firm means asking hard questions, cutting through the noise, and ensuring GenAI creates real value, not just another layer of complexity.
Your turn—what’s on your mind?