CapTech Trends

From Hype to ROI: How Leaders Navigate AI Adoption

CapTech Episode 50


In a landscape where AI dominates boardroom conversations, CapTech’s 2025 Executive Research reveals that while the pressure to “do AI” is intense, true value comes from a business-first, not tech-first, approach. CapTech’s CTO Brian Bischoff sits down with Sr. Consultant Maddy Stevens to discuss why:

  • Many executives feel pressured to implement AI without clear business goals.
  • Short-term, value-driven projects outperform long-term, vague AI mandates.
  • Security and compliance concerns often stall innovation but can be managed with early stakeholder involvement.
  • Employee adoption is critical, and resistance can derail even the best initiatives.
  • Iterative, experimental approaches and clear communication drive sustainable AI success.

Read the full research here: https://www.captechconsulting.com/articles/why-so-many-ai-initiatives-are-failing-to-deliver-roi




Brian Bischoff:

Hello and welcome back to the CapTech Trends podcast. I'm excited today 'cause we're going to dive into some of our recent research that we've done, specifically related to AI and some of the executive perspective, which is a new space for us. I'm Brian Bischoff, Chief Technology Officer at CapTech. And here with me today, we have Maddy Stevens, who is an expert on the research that we did and was a big part of the team that helped us not only execute it, but get to the point where we were able to get these insights that we're going to talk about today.

So welcome.

Maddy Stevens:

Yes. Thanks for having me, Brian.

Brian Bischoff:

Absolutely.

So it's hard to see any sort of marketing message today that doesn't mention some sort of hook or something around AI. And we've done research in this space. We've done a number of podcasts in this space. Maybe kind of tee up this research a little bit for us and why it's maybe different or what's unique about this compared to other things that may be out there.

Maddy Stevens:

Yeah. There's a lot of talk of AI. There's a lot of hype. And for this research in particular, our methodology was to interview C-suite and VP-level executives from multi-billion dollar organizations across the US, spanning industries including healthcare, financial services, retail, hospitality, and telecom. And what's different about this research is that we really wanted to understand their landscape as they're navigating technology investments. So what are their concerns, what are their priorities and objectives, and what influences their decision-making process, so that we can come in as consultants and help them navigate that. And you mentioned the AI hype. There's so much pressure on executives right now to "do AI." In one of the executive interviews, the executive actually said their board asked, "What's your AI strategy, folks?" So you're seeing a lot of tech-first strategies.

Brian Bischoff:

So let's dive into that because I think the way we coined this in our research was around the AI mandate.

Maddy Stevens:

Yes.

Brian Bischoff:

And specifically just doing AI for either the novelty or the sake of doing AI. So maybe dive into that a little bit more and explain what that means and what we found in that research.

Maddy Stevens:

Companies are approaching AI from this tech-first perspective. They're coming in and seeing AI as a solution, but they don't really know what problem they're addressing. It's, let's bring AI, let's do AI, but why? The path to value there is really unclear. So instead of that tech-first approach to AI, we're talking about a business-first approach.

Brian Bischoff:

Yeah. And that's actually pretty natural in many tech lifecycles: you explore the technology and see how it might be applied, and then you get to the point where you really need to see that investment pay off.

Maddy Stevens:

Exactly.

Brian Bischoff:

And so the mandate side of things might be something that's new. I think what's unique about the mandate concept is that there is so much press and so much buzz around AI that if you weren't doing something related to AI as a company, maybe a publicly-traded company or even a small company, you either felt like you were left out or certainly your investors probably thought that you were.

Maddy Stevens:

Absolutely.

Brian Bischoff:

So that was kind of where this mandate came from, this feeling of, we have to have something on the table to prove that we're doing AI because everyone else is doing it.

Maddy Stevens:

Yeah. And on top of that pressure to do AI, there's this pressure on the executives to still meet their short-term financial goals. So they're focused on today, not this kind of long-term AI investment. And we heard one executive say, "Before you get all this glamorous AI stuff, you need to hit your baseline of business continuity." So there's this perception that AI is a long game, a long-term investment, and you're not going to see returns for years to come, if at all. So how would you recommend that our clients navigate this need for immediate results while also thinking through long-term planning?

Brian Bischoff:

Yeah. Well, there's multiple parts. One, just to close a thought on the mandate side of things, I'll tell a little bit of a story. I was with a client a couple months ago, and of course the topic of conversation was AI. Their responsibility was propelling AI forward inside the organization. And one of the things that I asked very early on was, "What are your goals for the rest of this year around either AI or just your business objectives?" And, surprising to me, they came back and said that they'd been told they need to have 10 examples of AI in production by the end of the year, regardless of the use case, regardless of the results. And I think that sort of pressure is very common right now. So that's what we've got to try to get away from.

And I think what you're teeing up here is some of the opportunities that we see around near-term value, near-term investment, starting small in a lot of these scenarios in order to get value. We've had a lot of examples over the years where we've worked with clients to define three to five-year roadmaps. And to try to look at an AI example right now or even come up with any sort of technology strategy right now that's three to five years in the future and have any sort of confidence that that's actually the direction that the technology is going to go is probably not very realistic. So what we're seeing a lot more now is these shorter-term, 12 to 18-month type of plans, not necessarily that you don't have to have a long-term objective in place, but certainly those near-term plans that hopefully drive value.

Because the other thing that I think you teed up, and I'll flip that back to you in a second to see what we heard in the research, is that we're seeing a lot of pressure for the value realization, or examples of the value, very early on in those investments. Normally, when you invest in a program of any size, you're going to have 18 months to maybe two years before you see a return on that investment. But there's so much pressure right now to say, "Well, we need to have value now to demonstrate that the investments in AI are worth it, so how quickly can we turn that around?" And so we're seeing a lot more smaller, call them bite-sized, but still impactful, business opportunities to really drive change in that space.

Maddy Stevens:

Yeah. I think that goes back to what you said about value-driven purpose behind your AI initiative, starting with a business use case first. And one astounding stat that we learned in this research is that 95% of organizations report zero ROI on their AI investments. So that's nearly everyone saying they're not seeing ROI from their investments. And I think that comes from this tech-first approach as opposed to the business-first approach.

Brian Bischoff:

So I've read and seen that stat as well. I also think sometimes the expectations are misaligned. Sometimes people are immediately looking for that opportunity to completely replace a job family, for example. And that's not reality. Maybe we're going to be more efficient in certain scenarios, but not completely replacing jobs, not that we want to get into the job replacement conversation. But sometimes the expectations are just in the wrong place. Maybe the opportunity is really more incremental, and that's just what you should be figuring out how to measure.

Maddy Stevens:

Right.

Brian Bischoff:

So what are some other examples of this starting small, things that came out of the research that were highlights for you and really made it more impactful?

Maddy Stevens:

Yeah. So on the note of starting small and starting now, we kind of encountered some non-starters actually. So organizations are kind of viewing security and compliance as this full stop. They're afraid of experimenting with AI. One healthcare exec said, "We can't experiment with patient data." Another one said that you have to get it right every single time, but a hacker only has to get it right once. So security is seen as this big barrier, this kind of beast, which is blocking that AI innovation.

Brian Bischoff:

Yeah. Yeah. Well, that's true. Our perspective on that is that you can; there is a risk, you just need to manage the risk. And the whole opportunity for getting to the next step would be to engage those security teams, engage those compliance and risk officers as you're coming up with the ideas, so that you understand what risks you have to overcome in order to get to the end solution you're trying to get to.

You gave the example in the research of the healthcare executive who said, "We can't use patient data in an AI scenario. It's too much risk." At the same time, we just finished a project with one of our healthcare clients where they used patient data to help with the onboarding of patients by summarizing that information. There are examples, I think, as long as you're bringing key participants into the decision-making process early on, in the ideation, the proof of concept, and the validation. That's the opportunity where you can actually start to overcome some of the fears around security and risk and still get to a viable solution.

Maddy Stevens:

Yeah, that's huge. You mentioned risk a lot, and we've seen a lot of risk-averse executives in our research. A majority of executives want to see successful implementation of AI in their industries before they bring it in house and implement it in their own company. And one exec actually said, "We don't want to be first, but we also don't want to be last." So how do you juggle innovating while also not getting left behind?

Brian Bischoff:

Yeah. Looking for that Goldilocks zone of not being first. I think also in some businesses, in some scenarios, you might want to be. If you are super confident and you see the opportunity in your business where there's a chance to really change the market, then you might want to try to see how you can be first to market in those scenarios. Some businesses are not going to do that. Some businesses are not in a position where they want to be a market changer, but others that are maybe in growth cycles or early on in the development of products or services, where they see an opportunity in the marketplace, want to pursue that. And so taking a calculated risk in that scenario and making that jump is a reasonable approach for some people.

I think you need to balance the risk with, again, the opportunities where they exist. If your tolerance for risk inside your organization is very low, then maybe you don't choose the most customer-facing or forward-facing type of thing that's going to dramatically change your business. Look for opportunities where there may be just small efficiencies that you can gain that prove value, but also set you up for that next investment. So it's about walking that path of testing some things out that are very progressive and forward-thinking, while basing the core of your investments on things you know are safe and secure with your customers and clients.

Maddy Stevens:

Yeah, that makes sense. And you mentioned the risk level or risk tolerance within an organization, and I think a lot of that goes back to employees. There's this real fear of AI among workers: a third of workers believe that AI will replace their jobs or limit their job opportunities in the future. And up to 70% of AI-related change initiatives fail due to employee pushback. And we heard that in the research. One executive actually joked that they have some AI-resistant employees, and his response was, "Well, give me back your computer." That seemed a little drastic to me, but another executive said, "We have 500 licenses, nobody's using them. How do we get our employees to adopt this technology?"

Brian Bischoff:

Yeah, I've been an advocate for a number of years now that the biggest hurdle these AI initiatives have to overcome is going to be employee adoption. You mentioned those two examples of buying licenses or products and services that the people inside your company just aren't using. It's because we haven't made it very clear. We haven't empowered those people to make decisions, to take those tools and see how they provide value for them in their jobs. I half-jokingly referred earlier to job loss and those sorts of things from a business objective. I think it's a realistic fear that people have right now: if I use these capabilities that our companies are creating, then I'm not going to be needed in my job or my role anymore.

And one of the unique things that's been pointed out to me recently by one of our consultants, Liz McBride, who leads a lot of the change initiatives related to AI for our clients, is this concept called job crafting. The idea is that the people whose jobs are impacted are the ones who should be helping formulate what their new roles look like in the future. That's just another example of how you can bring your employees along in this process, so that people feel like they're, again, empowered to make the decision to use whatever widget or tool you've created, but also in control of their destiny as far as what the next steps look like from a career perspective.

Maddy Stevens:

I love that, framing it from a perspective of empowerment. And as you were talking, I was thinking of this through line. So we talked about the AI mandate at the top, how boards are pressuring executives to adopt AI and then the executives are pressuring their employees, but you flipped it on its head and started from the grassroots. So a human-first approach to implementation, a people-first approach is the way to go, focusing back on that value-driven idea.

Brian Bischoff:

Yeah. I think that there's a huge part of it that is definitely human-centered and human-focused. At the same time, you need the leadership inside the organization to say this is an important part of our direction moving forward. So it's a balanced approach, but without the employee side of things, without the human side of the impact, then, as the stats you mentioned indicate, it's going to fall flat.

Maddy Stevens:

So we've seen that nearly half of organizations are abandoning their AI initiatives before they even reach production. That's a lot of wasted resources and investment. How do we help organizations choose their projects to ensure that they see that ROI from their investment?

Brian Bischoff:

Well, that's choosing the right problem to solve. The easiest questions we find to ask when we come into clients are: Where are you spending the most time? Where do you feel like you're least efficient inside your organization? Where is there an opportunity that you just haven't pursued because you didn't feel like it was possible? Those sorts of questions allow you to narrow in on the right opportunity and then figure out exactly what the metrics are and how you'd measure them, so that you can figure out whether there will be true ROI for that opportunity or not. You used the phrase tech for tech's sake; it's not about trying to deploy AI. That's not the goal. The goal is to transform your organization in various ways.

And that's why you've got to invert this a little bit and look at things from a business perspective: What are your goals? What are you trying to accomplish? What projects do you have in flight right now that might be good opportunities to see how you could do things better, faster, cheaper, more impactful, all those sorts of things? Questions like those are really where you start, as opposed to sitting in a room and saying, "Well, I've got AI, where do I deploy it?" The questions you should be asking are: Where am I spending the most time? Where am I least efficient? Where are my opportunities in the marketplace? And that's how you narrow in on the right use case. And you're not going to get that answer immediately.

Maddy Stevens:

Yeah, that's true.

Brian Bischoff:

And so it's an iterative process where, again, hopefully you'll be able to test some things initially to get to a point where you can validate that return.

Maddy Stevens:

That makes a lot of sense. And speaking of choosing the right problem, what kind of use cases have you seen with our clients that they're seeing success with this approach, with this business-first approach?

Brian Bischoff:

Well, I think across industries. I mentioned one earlier, a healthcare example, where the process that doctors take to onboard patients is taking longer than it should. If you go to a new doctor, you have to find your medical history and share it with the doctor, and the doctor has to look at all of it and figure out, "Okay, based off all of this, what do I know about this person?" So how do I make it more efficient for the doctors and nurses to review all that information in ways that make it more consumable, so they can get to the point of, "Okay, I know this person. I know Maddy's had these issues in the past, she's on this medication, and this is how I can help her going forward." It doesn't mean removing their analysis from it; it just means you're surfacing insights faster. So that's one example in the healthcare space.

In financial services, there are opportunities around anomaly detection, where it's been hard to identify fraud or errant transactions, whatever the case may be. Those are great use cases for AI. These are things that were hard to solve previously, and sometimes even more difficult as the technology advances and people find new ways to exploit things. It's harder and harder to find and prevent these things, but there are tool sets now available that allow you to do that. So looking at what's most applicable in different industries is one way to look at it, but you've still got to look at the things that are affecting your business uniquely. And I think that's another place to start.

Maddy Stevens:

We talked about iteration and experimentation and starting small, and you were talking about financial services too. That made me think of a Fortune 500 financial services company that we worked with. They just had tool sprawl, AI sprawl all over the place. They had different business streams using different technologies, and they had us come in and do an assessment: let's look at your AI landscape, see where you are. And then we were able to implement some AI governance for them, which resulted in a 50% decrease in tool sprawl, which is a huge savings.

Brian Bischoff:

To me, there's a couple of aspects to that story that I'll build on. One is, if you happen to be in an environment where AI is mandated and people are going out on their own and implementing a variety of different solutions, you're going to get a lot of different solutions with potentially very little control over them. And so as part of a strategy moving forward, making sure that you have solid governance in place, so that you can manage risk, manage security, all those sorts of things, but also make it easier for people to find use cases that already exist inside your organization and say, "Oh, well, that's already been solved. I'm going to take that one and just re-implement it." Those sorts of things are important from a leadership perspective when pushing forward with AI objectives inside your organization.

Well, we've hit on leadership a lot. Obviously, the survey was geared around executives, getting insight from executives at, as you mentioned, large companies across a variety of industries. What are some other things that came out of this from a leadership perspective, examples or insights from leaders as to how to be effective in this space?

Maddy, if you could summarize this conversation in your mind, what are some of the key takeaways you'd like people to leave with, specifically as it relates to the research, or just in general about providing strong vision and strategy around AI?

Maddy Stevens:

Yeah. A key takeaway I definitely want to leave here with, and I might break the fourth wall a little bit, is only 1% of organizations feel that they are AI mature with fully integrated workflows and measurable ROI. So that's 1%. So if you're feeling-

Brian Bischoff:

1% is pretty small.

Maddy Stevens:

... Yeah, so if you're feeling left behind, if you're feeling alone, you're not. We're here to help with navigating this landscape, helping you identify those purpose-driven, business-oriented objectives for your AI initiatives and making sure that you see your ROI so that you can be in that 1%.

Brian Bischoff:

That's an important thing to remember, because we started this out talking about the AI mandate, and the genesis of a lot of those mandates came from people feeling like they were trying to keep up with the Joneses, trying to keep up with other companies that were doing all this stuff. In reality, what we found in the research is that most people are in the same boat. There are some leaders out there who are pushing things forward, and they're putting out their marketing materials and making sure you know that. But most people are in the same boat of trying to navigate and figure out exactly where this can be employed to actually drive value inside of organizations.

Maddy Stevens:

Right.

Brian Bischoff:

Yeah. So what else? What are the things to do, key takeaways for this conversation for the audience?

Maddy Stevens:

One of the key takeaways for me is that you have to have this iterative experimental approach to AI. You've got to start small, you've got to prove your ROI early, which is possible and doable when you have a small initiative that is business-driven and value-oriented. And once you prove that success, you can grow and continue to build that momentum, prove to the board that this AI initiative is working, and then it could turn into an enterprise-wide strategy. You don't have to start big. Start small and build on your success.

Brian Bischoff:

I think we advise people against starting big.

Maddy Stevens:

Right.

Brian Bischoff:

Don't go ask for those million-dollar investments at this point. Start with the small ones that you can justify, prove that and get to a point where people are comfortable with the results of that before the next step happens.

Maddy Stevens:

Exactly.

Brian Bischoff:

Yeah. Something that I learned from this research, and from this conversation in general: the technology is one thing, but one of the biggest things we have to get over as an industry is the adoption side. Whoever the audience is for, I keep calling it a tool or a widget, you need to be very intentional about engaging them and making sure they understand what the impact is to them, whether it's a consumer or an individual in their role inside a company. That is really the key to adoption: getting people comfortable with these tool sets and making sure they're clear on what the next step is.

Maddy Stevens:

Right. You can't just take away their computer. That won't work.

Brian Bischoff:

Well, you definitely can't take away their computer unless you have other objectives in mind.

Maddy Stevens:

Yeah.

Brian Bischoff:

Well, thank you, Maddy, for the research and the insights you brought to this conversation. It's good to hear a little of the behind-the-scenes story, certainly your thoughts while developing the research and how we've evolved it over the last month or so. You can find the full research on our website, where you can look at both the executive survey and the consumer survey.

Maddy Stevens:

Right.

Brian Bischoff:

I think both of them provide interesting perspectives of both leadership inside of organizations and the challenges they're solving for, as well as consumers and what they're expecting.

Maddy Stevens:

Absolutely. Thank you for having me, Brian.

Brian Bischoff:

Thank you.

The entire contents and design of this podcast are the property of CapTech or used by CapTech with permission and are protected under US and international copyright and trademark laws. Users of this podcast may save and use information contained in it only for personal or non-commercial educational purposes. No other use of this podcast may be made without CapTech's prior written permission. CapTech makes no warranty, guarantee, or representation as to the accuracy or sufficiency of the information featured in this podcast.

The information, opinions, and recommendations presented in it are for general information only, and anyone relying on the information provided in it does so at their own risk. CapTech makes no warranty that this podcast or the server that makes it available is free of viruses, worms, or other elements or code that manifest contaminating or destructive properties. CapTech expressly disclaims any and all liability or responsibility for any direct, indirect, incidental, or any other damages arising out of any use of, reference to, reliance on, or inability to use this podcast or the information presented in it.