CapTech Trends
CapTech Trends features thought leaders and subject matter experts discussing emerging technology, design, and project methodology. Our goal is to unite diverse skills and perspectives to show how data, systems, and ingenuity can transform and enable organizations to advance what’s possible in a changing world.
Consumer AI Study 2024: Shifting Attitudes and Increased Adoption
Listen in as we discuss how CapTech’s Annual Consumer AI Study findings have evolved. Brian Bischoff, a CapTech Principal leading our Practice Areas & Services, and Elliott Hartz, a Senior Consultant in our CX practice, join Vinnie for a discussion around what’s changed, what’s surprised them, and how organizations can positively adopt AI technologies for their customers.
- Over half of consumers are willing to pay a monthly fee for enhanced AI-powered benefits.
- The shift from simply "trying out" AI to demonstrating clear ROI from AI initiatives.
- Generative AI’s widespread adoption in both personal and professional settings.
Vinnie
Hello and welcome back to CapTech Trends. Today we're going to talk about our 2024 Consumer AI Study, and I have with me Brian Bischoff. He's a principal leading our practice areas and services, and in particular, he has a strong focus on leading our AI work internally and externally with our clients. Also with me is Elliott Hartz, a senior consultant in our CX practice; Elliott actually published the survey and led all the analysis that we're going to be discussing today. So Brian, Elliott, welcome.
Brian
Thank you, Vinnie.
Elliott
Thank you.
Vinnie
Elliott, why don't you kick us off? Tell us a little bit about the consumer study, and this seems specific, Consumer AI Study. So what has the CX group been doing every year and fill us in on where this fits?
Elliott
Sure. So this year, we ran the second iteration of our consumer perception of AI study. Our customer insights service offering, in partnership with our marketing team, identified the opportunity to take a survey that we conducted in 2023, when artificial intelligence was becoming popular and beginning to gain a lot of consumer awareness, and run it again in 2024 so that we could start to see, as awareness grows and adoption rises, how consumer perceptions and sentiments around artificial intelligence change.
So we conducted a study of roughly 360 participants. We ran them through 36 questions that were largely identical to what we asked in 2023, with some additional questions that dug a bit deeper into sentiment.
Vinnie
It's a different set of people who responded, right, not the same group?
Elliott
Yes, yes, yes. Both studies drew random participants from the general population, spanning all age and demographic groups, and we collected similar sample sizes between the two.
Vinnie
So what have we noticed between last year and this year, high-level findings, and really is there something that surprised you or not?
Elliott
Yeah, so across the board, I mean, sentiment is still very positive, which it was in 2023 as well, but we are noticing an increase in awareness around artificial intelligence, which I think was expected in our hypotheses, but with that increased awareness comes a really large jump in adoption. Artificial intelligence is beginning to become a household name and we're seeing consumers start to use that in their personal life, their professional life, and really start to see especially formats like generative AI take off.
Vinnie
Yeah, it's funny to me because I would say in 2023, 2022, my thinking was, and I guess the industry was thinking this way too, that people would have a little bit more fear around it, AI taking over, or a lack of transparency into the security aspects of it. And you still see some things online, on social media, where people, if they don't like what someone they like says, claim it was AI, so they're using it as the bad guy in some cases. But the survey shows that people really do trust it and want to use it and want it to be there.
So I'll throw that to you, Brian. Why do you think so many of us were wrong about people's perception, about fearing AI, when now the adoption and the approval ratings seem to be really high?
Brian
I think there's still a lot of fear out there in certain pockets. I don't know if we were necessarily wrong. I think the reality is there's so much value in leveraging these tools and these capabilities, as Elliott mentioned, just for personal or professional, that people are starting to get over that, I guess, insecurity of what's possible with these tool sets.
I think when you look out there, you see all kinds of bad examples of AI, whether it be images that are obviously created in a bad light, or content that you can observe and realize just isn't very productive. That's on the fringe now, I feel like, and for most of it, people are finding ways to be very purposeful and useful with the tool sets.
Vinnie
It's funny because from a computer science perspective, I can understand some of these differences and capabilities, but if you're not deep in it, it can be strange. And I'll give you an example. There was this famous one where you asked, I don't know if it was ChatGPT, I don't want to be specific about the generative AI model that was used, but the question was, how many Rs are there in strawberry? And it came back with two, not three, because it looked at the word berry. And you try to correct it and it's like, "Nope, it's two."
And so it gets the number of Rs wrong in strawberry. If you ask it to create a picture of a guitar, it has the wrong number of strings, people have the wrong number of fingers, but we're going to trust it on a cancer scan for your brain or your lungs.
Brian
Well, here's what's different about that, right?
Vinnie
Right.
Brian
Because you're asking it, in a one-time use, to create a picture of a guitar and you get the invalid strings, or you're asking how many Rs are in strawberry. And that's a one-time, point-in-time response.
Vinnie
Yes.
Brian
What's important about those advanced uses, like you just mentioned around medical devices and detecting cancer, is that we're actually weeding out all those invalid scenarios, right?
Vinnie
Right.
Brian
You're reviewing and saying, "Oh, that's clearly not right," or, "After further review, that's not right," so you're building a more intelligent system as you go along. And I think that's what people are getting more comfortable with: the fact that yes, at one point in time, you might get an errant or an invalid response from AI, but as the system gets more and more training and more and more usage, you're going to get more valid results.
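A minimal, hypothetical sketch of the review-and-correct loop Brian describes: outputs a human reviewer rejects are logged with corrections and accumulate as training data for the next model iteration. The class names, scan IDs, and labels below are illustrative only, not from any CapTech tool.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Prediction:
    case_id: str
    model_output: str

@dataclass
class ReviewQueue:
    # Rejected outputs, along with the reviewer's correction, accumulate here.
    corrections: List[dict] = field(default_factory=list)

    def review(self, prediction: Prediction, is_valid: bool,
               corrected_label: Optional[str] = None) -> None:
        """Record the human verdict; invalid outputs are kept for retraining."""
        if not is_valid:
            self.corrections.append({
                "case_id": prediction.case_id,
                "rejected_output": prediction.model_output,
                "corrected_label": corrected_label,
            })

    def retraining_examples(self) -> List[dict]:
        """These corrections would feed the next training or fine-tuning run."""
        return list(self.corrections)

# Illustrative usage with made-up scan IDs:
queue = ReviewQueue()
queue.review(Prediction("scan-001", "no anomaly detected"), is_valid=False,
             corrected_label="anomaly present")
queue.review(Prediction("scan-002", "anomaly present"), is_valid=True)
print(len(queue.retraining_examples()))  # 1 correction collected so far
```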
Vinnie
Right, and I think as technologists, we can understand that and understand that anomaly detection across that many scans of a brain is actually easier than trying to generate something from scratch the first time, right?
Brian
Yeah.
Vinnie
Yeah. So second question to this. When I look at how we were evolving the user experience prior to AI, we focused on personalization, we focused on journey mapping where we started at the very beginning of an experience as opposed to just the point-in-time experiences, categorization of the personas that we have that are target audiences and stuff. Those things still exist. So does AI just do those things better, and/or does it do things we couldn't do before?
Elliott
It's a good question, and when we look at some of the survey data, I think it points more towards the latter, with AI really being an inspiration or sounding board to get started from versus the end-all, be-all answer. We noticed the top scenarios in which consumers found value were making complex tasks easier and automating tasks that take a lot of manual labor and time to complete. Using AI helps expedite that process; it doesn't necessarily work in lieu of that manual time, if that makes sense.
Vinnie
Is that more agentic AI, taking multiple steps and linking them together?
Brian
I mean, that's the definition of agentic AI. Agentic AI would certainly be that more goal-oriented approach to solving a problem, as opposed to just responding to a prompt. But to get to your point around personalization, for example, and whether AI is just doing these things better? Absolutely, I think it is. We've joked about, everyone's joked about these retargeting ads that you get from things-
Vinnie
You buy a tent and then for the next month, Facebook wants you to buy a tent. Right?
Brian
You already bought a tent-
Vinnie
Already bought a tent, right.
Brian
... and you already know that-
Vinnie
And now I'm annoyed.
Brian
Yes, that's exactly right. And that's not AI. People may think it is because they're uninformed on it, but that's not AI. What is AI is when you're going through a purchase process, evaluating certain products, and then the next item that gets presented to you is based off of either previous purchases you've made or your current history as you're going through that particular site, and the site says, "Okay, we're actually going to recommend you a better product, not necessarily something you already bought or something you already looked at." Being able to take all those inputs in and provide the best recommendations is the sort of thing AI is really good at right now.
I'll give another example of that just because it's something that I've recently experienced. I enjoy fishing. And so I was asking ChatGPT to provide me with some recommendations on different gear I could have, whether it be different types of fishing rods or reels for different types of fishing, and it came back with great recommendations. And at the end of it, it said, "Well, based off of your history of vacationing in this place, this is what I would recommend for you." And I was like, "Well, hang on a second. I never told you I was vacationing in that place." And I realized that on previous interactions, I had asked things like, "If I'm going to go on vacation in this place, what sort of activities would I do in this environment?"
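A minimal sketch of the mechanic behind Brian's anecdote: the model can personalize a later answer because earlier turns travel with the request. This assumes the OpenAI Python client (v1.x); the model name, destination, and prompts are placeholders, not details from the study or the conversation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Earlier turns are kept in the history the model sees, so a later gear question
# can be tailored to the vacation mentioned before, as in Brian's anecdote.
conversation = [
    {"role": "user", "content": "If I vacation on the Outer Banks, what activities would I do there?"},
    {"role": "assistant", "content": "Surf fishing, kayaking the sound, and lighthouse visits are popular."},
    {"role": "user", "content": "Recommend a few fishing rods and reels for me."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=conversation,
)
print(response.choices[0].message.content)
```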
Vinnie
How did you feel in that moment? Did you feel happy that they knew that about you or a little freaked out?
Brian
And in the moment, I questioned it, right?
Vinnie
Right.
Brian
I was like, "Well, how does it know that I did that?" And I realized through my interactions and history-
Vinnie
You told it that.
Brian
... that I told it that. And so there's that line, and that's where we're finding ourselves right now. When you look at the survey, certain parts of the demographics are really accepting: probably earlier-career folks, earlier generations, are looking at this and saying, "Well, I'm okay, because I recognize all the data I'm providing these systems is going to provide better experiences for me," whereas some of the Gen Xers or even beyond into Boomers are probably looking at this from the standpoint of, "Well, how do you know that? Why would you know that?" And so there's definitely a balancing act in finding the right line for each persona.
Vinnie
Gotcha. And in the survey and also on the website, which by the way, if you guys want a copy of this, just head out to the CapTech website, you'll find it out there, we're pretty direct in saying, "You got to do this or you're behind." I mean, it's basically that direct. Support that or back that up for me.
Elliott
Yeah, one of the really interesting insights we gleaned from the survey this year was that consumers are not only becoming more familiar with AI and adopting it more, they're also starting to form opinions. So we are seeing far fewer responses with a neutral sentiment, where consumers are still trying to understand AI on their own before forming an opinion. This year, that has shifted, and we are seeing consumers, in both a positive and a negative direction, form strong opinions on how they think AI can provide value within their daily lives.
Vinnie
So my thinking is, I got two sides of the coin here. One is there are going to be people who are excited that AI is available and there are going to be people who don't want to use the site because AI is being used. So are you basically chopping off some percentage of your client base by saying you're using it? That's one.
My thought going into this, with pretty much all technology and advancements in technology, is that it's better when you don't know it's there. So if the experience is better, it's more personalized, it's more efficient, there's less friction, all those things, and AI happens to be powering that, great. We don't have to know that it's doing that. Now, we can be transparent on the site and have a link that says how we use data and all that, so we're not being sneaky about it. But still, the experience isn't an AI-forward experience. It's an AI-supported experience.
So having said that, do you agree with that or is it important for some brands to say, "No, we want to show we're tech-forward," and what does that do to the client base in terms of alienating some people?
Elliott
It's a good question. I think in terms of brand perspective, in the survey, we did specifically ask, "Do you feel brands and businesses are communicating both the benefits and risks, and how does that go into your decision to engage with that brand or business?" Although consumer sentiment about AI as a technology is pretty opinionated, how that translates to a brand is a little bit more vague. We saw a bit more of an even spread in how that translates into brand perception.
So I think that all goes to say that consumers want it, they expect it, and it's a really good opportunity for brands to leverage because the backlash is not quite as high from a business or a brand perspective.
Vinnie
You have an opinion on that, Brian?
Brian
I mean, it goes back to the way that I think we like to talk about AI with our clients specifically, which is that we don't try to force AI into a solution. You're not trying to say, "Where can I use AI?" When you do that, it becomes very obvious, I think, how AI is being positioned to the consumer, whatever the case may be. You're really trying to accomplish certain goals or certain tasks. With consumers, you might be trying to create a better, more personalized experience with shopping, like I mentioned earlier. Maybe you want better, higher-quality customer support, and you need to make sure you're drawing on previous interactions that customer might've had with that brand; in that case, you might want to implement a chatbot to handle those sorts of interactions.
So to me, my opinion is that we should try to focus on those sorts of items, which I think inherently pushes the AI conversation a little bit to the back burner. To your point, that might mean we're being a little bit less transparent, but I don't think so. I think if the goals you're trying to accomplish are to improve the citizen experience, the customer experience, the consumer experience through various interactions, and you're open about the fact that you're collecting data and you're collecting all of it to make a better experience, then I think that enables what you're trying to accomplish without specifically highlighting, "We're using AI and this is the purpose we're using it for."
Vinnie
I mean, famously, or most recently famously, Apple has done a good job with Apple Intelligence of saying, "Oh, we're using it everywhere," but they've built trust with the user base that they're not going to misuse that data. So I think there's a way to be quiet about it and a way to be loud about it.
Brian
Yeah.
Vinnie
I'm going to go back to the "everyone needs to be doing it now" comment that I made earlier about the survey and ask you, Brian, to go a little bit deeper on what that means, either from a technical perspective or a methodology perspective. What are you seeing specifically at your clients? Are some of them stuck in that trial, workshopping it, doing it in a lab? And if not, great, but if so, what's either the methodology or the technical steps to get out of the lab and get into a product?
Brian
I think there is a lot of... I mean, I think last year was a lot of trial and exploration around what was happening with AI or what you could do with AI. And the reason why a lot of those got stuck is there was no attributable value to what was actually being accomplished there.
And so the idea now, to take things out of the lab, it's not just, "Let's try this technology out and see how it might apply in certain scenarios," it's, "What real value are we going to get? How much additional revenue are we going to get because we can drive this new AI capability? How much higher customer satisfaction are we going to get because we do better personalization, and that means a better overall score from that perspective? What are we going to do from an internal operations perspective to make our call center agents more efficient, because they're either more connected to the interactions that customers have had or they're better prepared to answer questions that customers have?" Those-
Vinnie
Or auto-generating responses.
Brian
Auto-generating, all those sorts of things are attributable to value. I think where last year, a lot of it was, "Let's make individuals more productive and let's introduce Copilots and these sorts of things to be more productive," now it's about really transitioning to, "How do we use this technology to drive value?" And when you can demonstrate that, then that's an easy answer to, "Yeah, let's move this forward and move it out of pilot stage because we can demonstrate the value that we're getting."
Vinnie
Yes, it's the second time you've reinforced in this conversation that the business drives the technology, not the other way around. And I think when something new comes out, whether it be blockchain or machine learning or large language models, you do have to have that technology-for-technology's-sake phase to fully understand it and get people spun up. But then quickly, I think your point is well-made, it's not the responsibility of that lab to then force it into the marketplace. It's the responsibility of the business to say, "These are the expectations we have for how we want to engage with our citizens or customers, whoever, and we need to do it better," and that's what drives it.
Brian
Yeah. And it's not always about just doing things better, right? Because there are times where, well, maybe this actually does align to that, but there are times where you just need... Now, with these additional tools in your tool belt, you've got to think about how to solve problems differently, things that you maybe couldn't do before that you now can do. Maybe you have legacy systems that people just didn't understand before, or, because of aging, you have a workforce that's aging out, or it's just that a system's been around for a long time, it's complex, and you have two or three folks who know it, but they don't really know all the edges. Right?
Vinnie
Right.
Brian
Leveraging tool sets like AI to understand those systems allows you to think differently about how to solve problems, how to possibly either replace that system or provide better experiences outside of it, because you now know how that system works thanks to AI-related technology. So I think it's about leveraging these tool sets to solve problems in ways that you maybe hadn't done before.
Vinnie
And I'm not trying to plug it, but LegacyLift is an accelerator that we have, and I just want to use it as an example of what you said about those legacy systems, and correct me when I go off track here, but basically you're able to use AI to look at the entire code base of a legacy system and ask questions in plain English, like, "How many classes do I need to create to mimic this type of thing?" Or, "How many workflows are there, and what are they?" And basically, it would then interrogate the code, go against the models, and give you those answers back. You still have to do human work to make some of these changes, but the analysis side of it goes way down.
So you want to fill in some of the gaps that I missed on that?
Brian
Yeah, and those are exactly the types of things, again, thinking about ways to solve problems differently, looking into how many interfaces, how many systems does this-
Vinnie
Touch.
Brian
... application touch?
Vinnie
How many internal, how many are external? Right, yes.
Brian
Right. And knowing that, I think, is helpful in diagnosing whether there are problems, or in thinking about how to replace these things in the long term.
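To make the pattern concrete, here is a minimal, hypothetical sketch of the general approach Vinnie describes: gather legacy source files and ask a model questions about them in plain English. This is not how LegacyLift itself is implemented; it assumes the OpenAI Python client, and the model name, file extension, and paths are placeholders.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_about_codebase(root: str, question: str, max_chars: int = 40_000) -> str:
    """Concatenate source files (naively truncated) and ask the model a question."""
    sources = []
    for path in sorted(Path(root).rglob("*.cbl")):  # COBOL extension is illustrative
        sources.append(f"--- {path} ---\n{path.read_text(errors='ignore')}")
    corpus = "\n".join(sources)[:max_chars]  # real tools would chunk and index instead

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You answer questions about the provided legacy code base."},
            {"role": "user", "content": f"{corpus}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Example questions in the spirit of the conversation (paths are hypothetical):
# print(ask_about_codebase("./legacy-app", "How many distinct workflows does this code implement?"))
# print(ask_about_codebase("./legacy-app", "Which external systems or interfaces does this application touch?"))
```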
Vinnie
I'm going to move on to more use cases and maybe some of them are coming from the survey as well. I know when I talk about AI to a lot of people, they go straight to generative AI, whether it's text or images. That seems to be the thing that's most easily understood. Did that come through in the survey? And either way, yes or no, what are some of the other things that are outside of just the generative AI that people are excited about or using or starting to experience more?
Elliott
Yeah, it's a good question and a topic that we were very interested in when planning our survey this year. Considering some shifts in the market, we were not surprised, but delighted, to see that generative AI had jumped in the ranks of what consumers find valuable, up to the third position. We had hypothesized this would happen, and so we asked more in-depth questions around generative AI: how consumers are using it, and in what use cases, industries, and scenarios. And what we saw is that consumers are really using it to create, summarize, and synthesize content and information, in both professional and personal settings. That can take the form of planning a trip with your family to some far-off destination, or it can help you expedite writing business proposals.
So it has a lot of different use cases that it applies to, but we saw across the board a lot of adoption and interest in generative AI this year.
Brian
Generative AI, yes, is where all the buzz is. That's where everybody seems to want to try things out. And I think where we're stepping into now, the next phase of that, both on the consumer side and on the business side right now is what is called agentic AI, and that's something that you mentioned earlier, Vinnie. And so agentic AI being the concept of instead of being very prompt-oriented, where I'm going to ask a question to see what kind of response I get from it-
Vinnie
And then I'd go perform those tasks myself. Right?
Brian
And go perform those tasks myself.
Vinnie
Like buying your fishing reel by yourself, right?
Brian
That's exactly right. It would be more goal-oriented, right? So in Elliott's travel example, instead of just, "Give me information about this Hawaiian vacation that I want to take," it's, "I want to plan a Hawaiian vacation, my budget is $5,000, and I'm looking to go this time of year. Plan a week-long excursion with different activities," and actually have it go out and do the research, not necessarily automate everything behind the scenes, but provide input back to you and say, "Here's what I've come up with. Are these aligned with what you want to do, yes or no, and which of these would you like to do?" Maybe you want to go visit Pearl Harbor, for example, and it recognizes that Tuesday is maybe the best day as opposed to a Friday because of the amount of traffic they get.
That sort of analysis, things you wouldn't think about are things that an agentic AI system could do. And I think that's the next part of this, both on the business side, if you think about automating workflows inside your own business, but also on the consumer side.
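As a rough illustration of the difference between prompt-and-response and a goal-oriented loop, here is a minimal, hypothetical sketch: a stubbed "agent" breaks a trip goal into steps, calls a placeholder research tool, and checks results against a budget constraint before presenting a plan back to the user. The activities, prices, and best days are made up.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    cost: float
    best_day: str

def search_activities(destination: str) -> list[Activity]:
    """Stub tool: a real agent would call travel APIs or do web research here."""
    return [
        Activity("Pearl Harbor visit", 90.0, "Tuesday"),   # lighter crowds midweek
        Activity("Snorkeling excursion", 150.0, "Thursday"),
        Activity("Luau dinner", 120.0, "Friday"),
    ]

def plan_trip(destination: str, budget: float) -> list[Activity]:
    """Goal-oriented loop: keep adding activities while the budget constraint holds."""
    plan, spent = [], 0.0
    for activity in sorted(search_activities(destination), key=lambda a: a.cost):
        if spent + activity.cost <= budget:
            plan.append(activity)
            spent += activity.cost
    return plan  # a real agent would present this back for a yes/no before booking

for item in plan_trip("Hawaii", budget=300.0):
    print(f"{item.name} on {item.best_day} (${item.cost:.0f})")
```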
Vinnie
I was smiling because I like that you used a travel analogy for agentic AI because of travel agents. You know?
Brian
Yeah. Except that was... Yeah.
Vinnie
Yeah. I don't know if that was intentional or not, but I-
Brian
Elliott brought up the travel example, so.
Vinnie
Okay, so I can blame him.
One thing that we talked about, I think on the last podcast we had about AI, about jumping in, doing it now, making it part of your solution: one of the cons to that was that it's moving so fast that what you're doing now is going to be very different from what you're doing a year from now, so why not wait till the pace slows down?
So I guess, technically, Brian, I'd throw that at you first. Are the models evolving in such a way that you're not really reworking a bunch of stuff, it's just improving, and it's evolutionary, not revolutionary at this point? And are there things from six months ago that we just couldn't do in AI that we can now? I'm looking for examples that actually showcase that, as opposed to, "Yeah, it's moving fast." Well, we know it's moving fast, but show me an example of that.
Brian
My opinion is the steps within the last few years have been very evolutionary from that first leap that was made-
Vinnie
That was revolutionary.
Brian
That was revolutionary. And I think we're probably close to having another revolutionary type of advancement. I'm not going to try to predict what that is, but the things that we're seeing, these model releases every two weeks, those are definitely evolutionary-type things. But things we've seen in the last six months, like multimodal models, let you do more than just text: not just asking via text, "Give me information about my trip," and having it respond in text, but prompting with a photo. Put in a photo of a trip you made, as in, "This was a great vacation. I want to reproduce this," and it can look at that photo and say, "Oh, I recognize that that is Pearl Harbor," and we can certainly build another trip around that.
So that whole multimodal thing, and adding voice and audio to it as well, those are things that are evolutionary, but they're definitely advancements in how you can interact with these models. And I think those sorts of things are going to continue in the near term, and then I'll let the Sam Altmans of the world make predictions on when the next revolutionary thing will be.
Vinnie
I was thinking about the question I asked and how I would address it in terms of what do I use AI for now that I didn't six months ago? Copilot is something that people either have or are starting to get in large organizations, and just being able to take meeting minutes and summarizing notes and asking, "What were the key points in this meeting?" I mean, that is something, and really interacting with it and asking it those questions, "What are my to-dos? What are my tasks?" That type of stuff wasn't... I mean, six months ago, it was there, but I think the adoption now and the use of that is much higher.
Apple Intelligence is obviously rolling out. It's in the beta form now. So there are some things, I think, in the use case of everyday life, everyone's going to be using it every day. Yeah.
Brian
Yeah. To your point though, and actually to answer your question about whether or not people should wait until the next revolution to jump on board: we've had a revolution, and the way you operate and the way you consume and get information is changing. The way you, I guess, achieve those outcomes that I've referred to before is changing right now. And to do it more efficiently and effectively, and probably more accurately, you have to start reframing how you think about solving those problems now. I think that if you don't get on board with understanding how to leverage these tools now, when that next revolution comes, it's going to be increasingly, or exponentially, harder to make the change and adapt.
Vinnie
You'll be flat-footed. Yeah. It's like trying to time the market and just get in the game, right?
Brian
Yeah.
Vinnie
And I think that's a very valuable comment, which is it changes the way you think, even if the technology is going to change, your style of thinking is going to continue to evolve with it.
Brian
Yeah. And I've said this a number of times, and I talk with our internal folks a lot and our clients: because of that, the importance of managing change inside of your organization, as far as getting people to adopt and understand how to leverage these tools, is as significant, if not more significant, than just rolling out some of these tool sets, like the Copilots you just referred to. Getting people to understand how to use it, what the right use cases are, and what the right ways are to achieve the value you're trying to obtain is not about rolling out new technology. It's about reframing and changing how the workforce thinks about leveraging this technology.
Vinnie
So I'm going to try to summarize, as best I can, some of the key themes that we discussed today. The first is that AI is more accepted than we thought it was going to be, and that acceptance continues to grow. The trend is continuing up, which is amazing. We're solving problems we've always solved in a much better way, and we're solving problems we couldn't have solved before. We're solving new problems, right? That alone is great. Much of this is focused on elevating the experience of the end user, whether it be a citizen or a customer or an employee, so the value is immediately observable, which I think is amazing. And then there's a value side to it as well: the experience is better, but we're also doing things in fewer steps and fewer transactions, costing less.
Did I capture the points?
Brian
I think you did. I think the survey really backs all of this up. Consumers are now expecting these sorts of capabilities, and I think that came through loud and clear in the survey. So driving a better experience for your customers, or for your citizens in a government scenario, by leveraging this technology is something they're expecting, and it's something we should be striving for.
Vinnie
Good closing thought. Elliott, would you have any final words before we wrap up?
Elliott
I'll say, if that's not selling it enough, the final point I'll leave you with is that when we asked consumers if they would be willing to pay for an enhanced AI-powered benefit, 52% of consumers reported that they would be willing to pay some monthly fee for that enhanced benefit.
Vinnie
Do you remember if that was asked last year?
Elliott
It wasn't, no. That was a net new question this year, yeah.
Vinnie
So it'd be good to see next year if that goes up.
Elliott
It will.
Vinnie
Yeah, great. Well, really good conversation. Elliott, Brian, thank you for joining me, and for everyone listening, thank you for listening. Please subscribe to the podcast, continue to listen. We very much appreciate it, and we'll talk to you next time.
The entire contents and design of this podcast are the property of CapTech or are used by CapTech with permission and are protected under US and international copyright and trademark laws. Users of this podcast may save and use information contained in it only for personal or other non-commercial educational purposes. No other use of this podcast may be made without CapTech's prior written permission. CapTech makes no warranty, guarantee, or representation as to the accuracy or sufficiency of the information featured in this podcast. The information, opinions, and recommendations presented in it are for general information only, and any reliance on the information provided in it is done at your own risk.
CapTech makes no warranty that this podcast or the server that makes it available is free of viruses, worms, or other elements or codes that manifest contaminating or destructive properties. CapTech expressly disclaims any and all liability or responsibility for any direct, indirect, incidental, or any other damages arising out of any use of, or reference to, reliance on, or inability to use this podcast or the information presented in it.