The AI Strategy Dilemma: Are You Ready for More Than Just a Pilot?
By Andrej Hudoklin, Executive Head Data & AI: Europe
Most businesses have now dipped their toes into AI, but dipping toes won’t drive real transformation. And here’s the part we don’t like to talk about:
Up to 90% of AI initiatives never make it beyond the pilot phase – not because the technology fails, but because there’s no plan for ownership, scaling, or value realisation.
Pilots often start strong, attract interest, maybe even deliver encouraging early results. And then… nothing. The pilot wraps, everyone claps, and the model gets quietly parked.
It’s not a technology problem, it’s a strategy problem.
We’ve seen this happen too many times across our work in retail and route-to-market in Europe, MENA, and Africa. Businesses invest time, money, and talent into pilots, but without the clarity, ownership, or structure required to turn them into something scalable.
Pilots Are Good. But They’re Not the Strategy.
Let’s be fair: pilots serve a valuable purpose. They help organisations learn, test assumptions, de-risk decisions, and explore what AI can do in a relatively safe environment. We run pilots ourselves and recommend them when they make sense.
But increasingly, we see companies treating pilots like endpoints rather than stepping stones.
A pilot is not a win. It’s the beginning. And unless it’s designed with a clear path to production, scale, and ownership, it doesn’t matter how clever the model is. It’s just a prototype with better PR.
If your team doesn’t know what happens after the pilot (who will use it, where it fits, how it evolves), then you don’t have a strategy. You have a science project.
What a Real AI Strategy Looks Like
There’s no shortage of AI frameworks out there. McKinsey, BCG, Gartner, Microsoft… they’ve all published layered models, value chain diagrams, and maturity curves. Most of them are pretty good.
But here’s my advice:
Don’t follow any single framework to the letter. Pick two or three that fit your business reality, and apply them pragmatically.
Adapt them to your culture, your teams, and your systems. Build for what works, not just what looks good in theory.
Within our team, we rely on a practical, layered approach based on what we’ve seen succeed and fail on the ground. We think of it as the five layers of a scalable, sustainable AI strategy, and it’s become a common lens for assessing our own roadmap and how we support clients.
Figure: The five layers of a scalable, sustainable AI strategy
1. Business Alignment
Everything starts here. AI must solve a real problem tied to a real objective like revenue, cost, margin, execution, efficiency, or customer experience. If your AI model can’t tie back to a KPI, business process, or behavioural outcome, it doesn’t matter how technically sound it is. It won’t stick. Strategy starts by answering: what’s the point?
2. Operating Model
This is where many pilots collapse. The operating model defines ownership, usage, monitoring, and integration into business rhythms. You can’t just “plug in AI” and hope it runs. Risk management and governance need to be embedded here too:
- Who is accountable when the model fails?
- How do you handle model drift, bias, or compliance issues?
Without clear operating models, AI projects gather dust rather than gaining momentum.
3. Data, Technology, and Trust Foundations
Yes, you need the right data and tech, but that’s only the starting point. Usability, adaptability, and trust are non-negotiable. Focus on:
- Modern pipelines and data governance
- Version control and retraining
- Real-time risk, security, and compliance monitoring (TRiSM)
- Building explainability and transparency into every model
Trust is not an add-on. It’s the foundation that determines if AI scales or fails.
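Drift monitoring, one of the items above, can start small. Here is a minimal sketch in Python, assuming you retain a reference sample of a numeric feature from training time and compare it against recent production values. The function name, threshold, and data below are illustrative, not a production monitoring design:

```python
# Flag possible drift for a single numeric feature by checking whether the
# production mean has moved more than `threshold` reference standard
# deviations away from the training-time mean.
import random
import statistics

def mean_shift_drift(reference, current, threshold=0.5):
    """Return True if the current sample's mean deviates from the
    reference mean by more than `threshold` reference std deviations."""
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.stdev(reference)
    shift = abs(statistics.fmean(current) - ref_mean)
    return shift > threshold * ref_std

rng = random.Random(42)
baseline = [rng.gauss(0.0, 1.0) for _ in range(5000)]  # training sample
stable = [rng.gauss(0.0, 1.0) for _ in range(5000)]    # unchanged production
shifted = [rng.gauss(1.0, 1.0) for _ in range(5000)]   # drifted production

print(mean_shift_drift(baseline, stable))   # False: no drift
print(mean_shift_drift(baseline, shifted))  # True: flag for retraining
```

A real setup would run a check like this on a schedule for every monitored feature, use a proper statistical test, and wire a positive result into an alert or a retraining pipeline rather than a print statement.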
4. People, Change Enablement, and Ethics
Even the best models fail if no one trusts them, understands them, or knows what to do with them. Change enablement isn’t just training; it’s about:
- Communication
- Trust-building
- Clear support structures
- Mindset shifts around working with AI
Responsible AI design, which ensures fairness and transparency and minimises bias, must be embedded from day one. Ethics isn’t an afterthought. It’s part of how you build AI that earns adoption and survives scrutiny.
Scaling AI is often less a technical problem than a behavioural and trust problem.
5. Experimentation-to-Scale Loop
Pilots are necessary, but they are only the beginning. Success depends on having a clear scaling path:
- Who owns the pilot’s output once it succeeds?
- How is it funded, integrated, monitored, and evolved?
Without these answers, even the best pilots turn into “another thing” on the shelf.
What Changes When You Actually Scale
We often use this table with clients to explain the shift in mindset and mechanics between experimenting and scaling.
Table: Pilot Trap vs Scaling for Success
Scaling means thinking differently about where AI lives, who owns it, and how it becomes part of daily execution and not something extra that needs to be “used”.
Want Trust? Then Build Governance.
Governance isn’t bureaucracy. It’s the safety system that prevents you from crashing once AI speeds up. It answers essential questions early:
- Who owns the model once it’s live?
- How do we manage updates, risk, and bias?
- What happens when something breaks?
Without trust, there’s no adoption. Without adoption, AI is just code.
Good governance doesn’t slow AI down. It enables AI to scale safely, sustainably, and with confidence. It’s less about setting up committees and more about building lightweight but real structures for ownership, versioning, bias management, and incident response before the system becomes too critical to fail.
Governance is not a barrier to AI innovation. It’s the bridge that turns experiments into lasting outcomes.
Where We’re Putting This Into Practice
At Smollan Technologies, we’ve had to work through all of this ourselves, and we’re still evolving. We’re building AI capabilities across three key tracks: generative AI, predictive intelligence, and image recognition. But we’re doing it with a strong bias toward real-world integration, not experimentation for its own sake.
We’re working with field and planning teams across markets to build tools that actually help them make better decisions. Our GenAI agents, for example, are designed to surface insights through natural language so that anyone can ask questions and get clear, context-relevant answers. Our PredictRetail and PredictManufacturer products use forecasting and pricing models to support commercial teams with real-time trade-offs. Our Data-Driven Execution solution for field teams delivers daily execution alerts and short-term demand signals to the front lines, so people can fix problems before they become losses. And we’re combining image recognition with execution logic to reduce the in-store reporting burden.
But all of this, no matter how smart or sophisticated, is ultimately designed to answer one question: “What is my next best action?”
If your AI isn’t helping people at different levels of the business answer that question, it’s just “another thing” that sits on the shelf. The real challenge is not building the model, it’s making sure it lands.
Final Thought
We’ve seen too many clever pilots die quietly. Not because they failed. But because they were never designed to live. So before you greenlight another proof-of-concept, ask the hard questions:
- What happens if this works?
- Who owns it after the demo?
- How does it scale, evolve, and become part of how the business actually runs?
If you can’t answer these questions, you don’t have a strategy. And without a strategy, no amount of AI will stick.
Your Turn
Scaling AI is messy, complex and absolutely worth it when done right. If you’re thinking about AI and are not sure where to start, let’s connect and figure it out together – data@smollan.tech