From Pilot to Adoption: Overcoming the AI Implementation Gap
AI experiments always work in the lab because the lab has the ideal conditions: isolated scope and boundaries, dedicated teams and tools, freedom from operational constraints, and executive sponsorship. The problem is those conditions don’t exist in the rest of the business.
Video Transcript
Philippe Bonneton
There is an interesting one with AI. I mentioned as a scenario: you've been appointed as head of AI for your company or for your organization, and you start to go through your inventory and you realize there are 20 or 30 little experiments. You have your developers trying GitHub Copilot and so on. To your point about how the technology gets in the way or how your organization can get in the way, there are a couple of things that will happen early on. The first one is, okay, you have 30 pilots. You look at them, you talk to the teams, you interview the leaders, you put them in a nice grid to say, okay, this is what the pilot is about, this is how it drives value, and so on. And it looks very successful. It looks very promising. Now, your pilot by definition has been built in a kind of controlled environment that doesn't suffer from all the bottlenecks of your organization and your technology.
So the first thing is this gap between the conditions under which your pilot was created and run, and your real-life conditions when you are talking about your entire IT stack and your broader product teams that need to consume part of the data that comes from this pilot. So right there you have a gap. What's in that gap are the things that get in the way of you being able to say, okay, I'm not just experimenting with AI — AI is actually driving real business value and real business outcomes. For me, that's the first thing. And I would say those are the technological and organizational aspects that could get in the way of scaling this implementation.
Mike Cottmeyer
That’s kind of where Eric and I got at the end of our conversation. It was very similar to our agile transformation story. It’s like what we would see companies do is they’d be in a chaotic mess and then they’d go and they’d run an agile pilot and they’d create the ideal conditions for the Agile pilot to be successful and they would Greenfield development, all the data is locally accessible. The dev teams dedicated to solving the problem. And then they would do something like use Scrum or extreme programming or lean startup or some methodology to enable those set of conditions. And then what would happen often is that people would say, oh, it was Scrum or XP or whatever that enabled this, so let’s go do this in the rest of the organization. Totally missing the point that the conditions that were created in the pilot were not the conditions.
And so when Eric and I were talking about this, it was interesting the way that story emerged because it was like, like you go and you run this AI pilot and you have all the data available to you. You’re solving a localized problem and AI is able to do amazing things. And then you take that AI and those prompts and you unleash it on the legacy organization and the legacy data and you know that you don’t have the data in the right place. It’s not available via API. You’re not able to change the applications and interact with them in a different way. And it was just really interesting the parallel between why an AI initiative will fail and why an agile initiative will fail. I thought that was fascinating.
Philippe Bonneton
Yeah, absolutely. Absolutely. And those are real. This is a real scenario, a real client context as well, right? We have clients who are facing that, and we're looking for ways to deal with that sort of implementation gap from pilot. In the pilot, what you do is simulate all the heavy things that you don't want to bring into it. You can simulate real-life conditions, but not really. Then when you implement your pilot, you realize that the data you need to run through your now-scaled AI application is actually scattered across, I don't know, 25 different repositories across your organization. Your pilot simulated all that. You had a clean data feed coming in.
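To make that gap concrete, here's a minimal sketch in Python. Everything in it is hypothetical — the record shape, the source names, the field formats — but it shows the shape of the problem: the pilot consumes one curated feed, while the scaled application needs a hand-written adapter for every place the data actually lives.

```python
# A sketch of the pilot-vs-production data gap. All names and schemas
# are hypothetical; the pilot sees one clean feed, while production must
# reconcile many differently shaped sources.
from dataclasses import dataclass


@dataclass
class CustomerRecord:
    customer_id: str
    revenue: float


def pilot_feed() -> list[CustomerRecord]:
    """The pilot's world: a single curated, pre-cleaned source."""
    return [CustomerRecord("c-001", 1200.0), CustomerRecord("c-002", 87.5)]


# In production the "same" data is scattered across repositories, each
# with its own field names and formats (simulated here as raw dicts).
CRM_EXPORT = [{"CustomerId": "c-001", "Rev": "1,200.00"}]
BILLING_DB = [{"cust": "c-002", "revenue_cents": 8750}]


def from_crm(row: dict) -> CustomerRecord:
    # The CRM stores revenue as a formatted string.
    return CustomerRecord(row["CustomerId"], float(row["Rev"].replace(",", "")))


def from_billing(row: dict) -> CustomerRecord:
    # Billing stores cents as integers, under different field names.
    return CustomerRecord(row["cust"], row["revenue_cents"] / 100)


def production_feed() -> list[CustomerRecord]:
    """The scaled world: one adapter per source, each written only after
    someone discovers where the data lives and how it is shaped."""
    return [from_crm(r) for r in CRM_EXPORT] + [from_billing(r) for r in BILLING_DB]


if __name__ == "__main__":
    # Same records either way; the cost of producing them is very
    # different — and the adapter work multiplies by 25 repositories.
    assert pilot_feed() == production_feed()
```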
Mike Cottmeyer
Well, you and I have a shared colleague we talked to about a year and a half ago, and you were talking about how, after the pilot, you take some of these initiatives and move them into the rest of the organization and the data's not in the right place. This colleague said something to the effect of: data, data governance, and the culture around data are going to be the thing that gets in the way. I thought that was an interesting comment, because that means there's an organizational change process that's going to be required to overcome where data's located, how data's accessed, and the current cultural impediments to doing that. And if we don't fundamentally solve that problem, it's going to make all of this that much harder. Any thoughts on that?
Philippe Bonneton
No. It’s interesting that you mentioned, so you talk about a couple of things here. You talk about where is the data located, but then you also, I think you’re talking also about what processes are in place to access the data, validate the data, work with the data.
And the way I like to put it is when AI is done, it can augment or amplify a sort of value creation process that’s already working in your organization. What typically happens though with a ai, as you start to work towards this sort of value creation process, value stream that’s now AI enabled is AI will shine a light on all the dark corners of operation. It’s going to point to the teams that are untethered from the bigger data lake or it’s going to uncover the inconsistencies in formatting of this data or the lack of process to validate vet filter that data. And so all these terms that I like to summarize that those are the dark corner of your organization or your operation, all the ways you have loosely tied data architectures, that you have APIs that maybe are configured for one application, not for another or not for the new state of your application.
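As an aside, the kind of validate/vet/filter gate being described can be illustrated with a short sketch. The field names and rules below are invented for illustration; the point is that even a trivial gate shines a light on formatting inconsistencies before records reach an AI pipeline.

```python
# A hypothetical validate/vet/filter gate. Running records through it
# surfaces the inconsistencies (missing fields, mixed date formats)
# that would otherwise flow silently into a model's inputs.
from datetime import datetime

REQUIRED_FIELDS = {"customer_id", "signup_date", "region"}


def vet(record: dict) -> list[str]:
    """Return the problems this record carries; an empty list means clean."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    date = record.get("signup_date")
    if date is not None:
        try:
            datetime.strptime(date, "%Y-%m-%d")
        except (TypeError, ValueError):
            # e.g. one team logs MM/DD/YYYY while another logs ISO dates
            problems.append(f"bad date format: {date!r}")
    return problems


records = [
    {"customer_id": "c-001", "signup_date": "2024-03-01", "region": "EMEA"},
    {"customer_id": "c-002", "signup_date": "03/01/2024", "region": "EMEA"},
    {"customer_id": "c-003", "region": "AMER"},
]

clean = [r for r in records if not vet(r)]
flagged = {r["customer_id"]: vet(r) for r in records if vet(r)}
print(f"{len(clean)} clean, {len(flagged)} flagged: {flagged}")
```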
So as soon as you start to play with AI and you want to see the output of transforming and processing this data, if the output's not what you expected, if it's not working right, usually that shines a light on all the bottlenecks and all the impediments you have underneath it. I like to say AI is the big revealer of everything you swept under the rug as you went. And it's not just technology, it's process, right? It's how teams are designed, the incentives they have to do certain work, where the checks and balances are. So the interesting part, and I think what our clients will feel or already feel, is we could come in and tell you, okay, you have a bunch of AI pilots. We're going to take the top three that are most successful and so on, and we're going to help you implement those.
We'll help you bridge the gap. We're going to remove what gets in the way. And now you have three scaled pilots. They're no longer pilots, they are applications. That's great. But you're not done. Where is the next batch of 30 AI experiments that your team is going to work on in the next two years? How are you going to continue to innovate? Right now it's great — we got three of those AI experiments to an implementation phase, to a scaled phase — but did you learn how to start your next wave of experiments so that you can remove a lot of the heavy lifting and take care of this implementation gap early on? So the point being: how do we teach, how do you enable and empower your team, your organization, to envision and design those next experiments, the next pilots, the quick test-and-learn cycles, so that implementation down the road is much easier?
I think what I'm trying to get to here is, again, take the example of being the new head of AI for company Y. You have 30 experiments that you're sorting through. You find three that are valuable. You figure out a way to bridge the gap from those three into scaling and implementation. But you still have teams that are going to continue to come up with use cases around AI, and pilots are going to continue to pop up in the organization. The problem is your teams are not taught or not equipped to work with data, to work with AI innovation. So how do you do that organizationally? How do you put governance over it so that teams can do it in a more sustainable way moving forward?