Agentic AI: Where Hype Meets Business Reality

Big Data LDN 09.12.2025


Video description
Data & AI Strategy Theatre, Thursday 25th Sep, 14:40 - 15:10

The term 'agentic AI' is all the rage these days, but there's still not much clarity around what it means. We'll walk through the basic building blocks of these agentic AI systems - predictive AI, generative AI, and workflow automation - and discuss why it's harder (and more important) than ever to ensure a trusted, enterprise-grade, and secure data backbone to get the reliable and trusted solutions our end users are looking for. We'll also touch on market trends where we see the technology and capabilities evolving in the coming months.

Kyle Jourdan, Master Principal, AI Practice, Qlik. As the head of Qlik's AI Practice, Kyle helps craft solutions for customers on the front lines, supporting the sales and marketing teams directly in the field while remaining deeply involved with the engineering and design teams, ensuring a smooth connection between what customers and the market are looking for and what the team is working towards in its development efforts.

Table of contents (2 segments)

<Untitled Chapter 1>

Hello everyone, good to meet you all. I'm Kyle Jourdan, and I run the AI practice for Qlik. I'm not going to talk much about Qlik during this presentation, but if you haven't heard of it before, I describe it as an AI platform that supports your AI projects end to end, from data integration all the way to application delivery for your end users. If you want to learn more, we have a booth out there, I think it's K88, so feel free to stop by and have a conversation, whether you're looking to integrate data from your source systems into some sort of warehouse or to build AI applications and deliver them to your end users.

So what does "head of AI practice" mean? I came from an acquisition Qlik made a few years ago of a company called Big Squid. We are now the automated machine learning platform inside Qlik Cloud: no-code machine learning model building. You bring a data set in, you give it a target, and we'll train multiple algorithms, do hyperparameter optimization, model deployment, model ops, everything you'd need to do with machine learning and data science, all managed in one place without you having to write any code. That's where I came from, but Qlik has made a few other AI-related acquisitions along the way. Recently we introduced generative AI capabilities around retrieval-augmented generation with our Qlik Answers product, which gives you the ability to ask questions of your unstructured data: PowerPoints, PDFs, HTML files. More recently we've brought structured data into that as well, so if you have data applications and dashboards and want to ask questions there, you can.
And then, whether you call it a strategic decision or we stumbled into it by chance, we made an acquisition roughly eight years ago of a company that now aligns very closely with agentic AI. We'll talk a little about what that means, how people view it, and where it fits into everything; there are a lot of interesting pieces to plug in there.

So to start: how did we get here? Hopefully this isn't your first agentic AI session of the day; if it is, you haven't been going to enough sessions, because just about every talk I've seen has said "agentic AI" at some point. So what does it mean? We're coming from a place where generative AI took the world by storm, and that was really about finding knowledge. These large language models could answer a lot of questions because they were trained on the internet. Two concerns rose out of that. One was data recency: was the answer you were getting the most up-to-date information, or the most accurate for the context you were asking in? The other was hallucination, because if you believe everything on the internet is true, then I have oceanfront property in Arizona to sell you. So the question became: what do we do with these large language models, and what's the most practical implementation? Out of that came the world of RAG, retrieval-augmented generation, where we said: we'll use these models to help with inference and with understanding what the user is asking, and we'll use their creative ability to write responses, but we'll ground their truth in trusted data sources from my organization.
So maybe I have a SharePoint site with lots of policies and documents that my internal employees should be using to answer questions. We'll have the LLM use only trusted information from that source rather than its general knowledge from the internet, where it could answer incorrectly. That's now evolving into this world of agentic AI, where we're starting to look for more business value in what we can do with these generative models. The question becomes: can we use them to help automate workflows in our organizations? Can we have them understand what the user is trying to accomplish, give them access to tools and systems that can go and do various operations, and ultimately take some of the monotonous task work out of our day-to-day jobs?

We've seen a rapid rise of LLMs that can help with this. This slide is probably already out of date since we last updated it, but just in the last year there has been a significant evolution of LLM versions: the GPT models from OpenAI are at version five now, Meta's Llama models are up to Llama 4, and Hugging Face has crossed two million models in its repository, including the rise of a new concept called the small language model. Maybe some of you have heard of these. NVIDIA put out a paper a few months ago arguing that small language models are the future of agentic AI, and that makes a lot of sense if you think about it, because large language models are getting very big and becoming expensive resources to manage and keep up to date, especially if you're fine-tuning, updating their parameters so they better understand your organization.
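Returning to the RAG pattern described above, here is a minimal sketch of the idea: retrieve the most relevant trusted document, then restrict the model's answer to that context. The scoring function, document names, and prompt wording are all illustrative assumptions, not any vendor's API.

```python
# Minimal RAG sketch: ground the answer in trusted documents instead of the
# model's general knowledge. All names here are illustrative assumptions.

def score(query: str, doc: str) -> int:
    """Crude relevance: count words shared between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: dict, k: int = 1) -> list:
    """Return the names of the k most relevant documents."""
    return sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)[:k]

def grounded_prompt(query: str, docs: dict) -> str:
    """Build a prompt that restricts the LLM to the retrieved context."""
    context = "\n".join(docs[name] for name in retrieve(query, docs))
    return ("Answer ONLY from the context below. If the answer is not there, say so.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

policies = {
    "travel.md": "Employees may expense flights booked 14 days in advance.",
    "security.md": "Laptops must use full-disk encryption.",
}
print(grounded_prompt("When must flights be booked to expense them?", policies))
```

A real system would swap the word-overlap score for vector embeddings and send the grounded prompt to an LLM, but the shape of the pipeline, retrieve then constrain, is the same.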
A small language model can be a compact, modular, efficient version of an LLM that's very good at specific tasks. Maybe it does data analysis really well, or it's an HR model that knows HR terminology and policies really well. We can plug these small language models into the appropriate parts of the workflow automations our agents are working on, and that lets us route tasks more effectively. The large language model can be the router, if you think of it that way: it understands the incoming question and sends the request to the various tools in its tool belt, while a small language model does the task-specific jobs instead of an expensive large language model.

So there's a question: if AI holds all the potential it's telling us to, why are so many of these projects failing? Gartner says something like 80%, and I'm sure everyone saw the recent study in which MIT said something like 95% of agentic projects are failing. You can debate the merits of that data; I'd say that among early adopters of any new technology there's going to be a lot of failing, because people are still figuring out how it works, where to implement it, and how to use it effectively. But there's a real question about how we get from interest in the agentic space to businesses actually seeing value from it, and there are a few reasons why projects fail. Obviously there's the bad-data concern. That's not new to data, AI, or machine learning: garbage in, garbage out. It's the same with agentic tools; if you let them use systems built on bad data, they're going to produce bad results.
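The router idea above can be sketched very simply: a routing step classifies the request and hands it to a cheap task-specific handler, falling back to a general model otherwise. The keyword matching and handler names below are stand-ins for real small language models; they are assumptions for illustration only.

```python
# Sketch of LLM-as-router: classify the request, then delegate to a
# task-specific "small model" handler instead of one big LLM for everything.
# The handlers and keyword lists are illustrative stubs, not real models.

def hr_slm(task: str) -> str:
    return f"[HR model] handling: {task}"

def analytics_slm(task: str) -> str:
    return f"[analytics model] handling: {task}"

ROUTES = {
    ("vacation", "policy", "benefits"): hr_slm,
    ("churn", "revenue", "forecast"): analytics_slm,
}

def route(task: str) -> str:
    """Pick the specialist whose keywords match; fall back to a general model."""
    words = task.lower()
    for keywords, handler in ROUTES.items():
        if any(k in words for k in keywords):
            return handler(task)
    return f"[general LLM] handling: {task}"

print(route("What is our vacation policy?"))
print(route("Forecast churn for Q3"))
```

In practice the routing decision itself is usually made by a model rather than keywords, but the economics are the same: the expensive model only orchestrates, while cheap specialists do the repetitive work.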
Then, when you go to implement, there's the concept of insufficient AI, where, as I just described, throwing an LLM at every use case is not an efficient use of AI. In many of the conversations I have with customers every day, they show up and say, "We need generative AI to do X, Y, and Z," but I'd say 90% of those use cases we end up routing to traditional, predictive AI. We say: actually, what you're looking for is a predictive model, and using an LLM to build a predictive model is like using a sledgehammer to put a nail in the wall to hang a picture. It's over the top. A lot of the time it's about finding the right AI for the right situation.

And then we're seeing a lot of complex custom builds, where people choose to build it themselves. I was meeting with a Gartner analyst a few weeks ago, and he asked what I see as the theme for 2025 into 2026 in this generative AI space. I said I see it as the year of failing to scale. A lot of people can go on their laptops today, download open-source libraries and tools, and quickly build the things we're talking about: a RAG assistant, a predictive model, an agent if you will. But what they're going to find, as soon as it's time to deploy to their organization, is that they can't scale from the 10 documents their RAG system was working on to thousands of documents in a repository, or handle the information management of all the different versions of documents out there. How do I make sure I have the right one? How do I access this information correctly?
And then there's the security side: how do I expose the right data and the right documents to the right people, to make sure no one sees something they shouldn't, and that what someone should see is available to them at the time they ask? There are a lot of moving pieces involved in why people are failing to realize value from these agents and AI tools.

So we've seen the rise of agents. This is just an illustrative example of how many vendors are popping up in the space; the screenshot is from a few months ago, and there's a website where you can see the whole landscape, but there are literally thousands of vendors entering this agentic AI space, so it's obviously becoming very popular very quickly. We're seeing the emergence of agent development kits from all the major players, whether that's Mistral or OpenAI, and protocols like MCP coming out of the labs. So an interesting framework is developing around how to build these systems in a unified way; we're starting to see convergence around how to do it correctly and create consistency. But there are a lot of options out there, so the question becomes: how do I focus on the right option for the project I'm trying to build?

We also did a research study recently at Qlik, asking people about their agentic AI strategy. Not surprisingly, some 97% told us their organizations have made a commitment to invest money in agentic AI. Not surprising, but still pretty spectacular when you realize how quickly that money became available to those organizations.
A pretty significant chunk, 39%, said they're going to spend more than a million dollars on these investments, which is a big deal. More interesting to me: 77% say they can distinguish agentic AI from other tools, and I think that's a relatively new development, because when I was having these conversations just a few months ago there was a lot of blurriness around what agentic AI means. I still ask, whenever someone tells me "we want agentic AI": tell me what an agent is to you. What do you expect an agent to be and to do? Then I can better align what they're looking for with what we have to offer. I still see a lot of gray in how people define an agent, but it's becoming clear that more and more people understand what it means to them.

Similarly, the responses to that question matched what I'd hear when I asked someone to explain it to me: it was really a lot of workflow automation. People were telling us: when this happens in my data set, I want you to send an alert in Teams to someone; write up a summary of what's happened and maybe suggest some proactive things we can do; maybe create a Salesforce task for someone to respond, or create a ticket in ServiceNow; maybe this system needs to be updated or queried; and then I need an LLM to synthesize all this information. So what we saw emerge as the definition of an agent was a workflow automation.
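The workflow-automation definition above can be sketched end to end: an event in the data fans out into the follow-up actions the audience described. Every "integration" below is an illustrative stub, not a real Teams, Salesforce, or ServiceNow call, and the severity rule is an invented example of routing logic.

```python
# Sketch of "agent as workflow automation": a data event triggers a fixed set
# of follow-up actions, with an LLM-style summary step. All system calls are
# illustrative stubs, not real Teams/Salesforce/ServiceNow integrations.

def handle_event(event: dict) -> list:
    """Fan a data event out into alert, summary, and follow-up actions."""
    actions = [
        f"Teams alert to {event['owner']}: {event['name']}",
        f"summary drafted for: {event['name']}",   # LLM synthesis stub
    ]
    if event["severity"] == "high":
        actions.append(f"ServiceNow ticket opened: {event['name']}")
    else:
        actions.append(f"Salesforce task created: {event['name']}")
    return actions

print(handle_event({"name": "Q3 revenue dipped 12%",
                    "owner": "ops-team",
                    "severity": "high"}))
```

Note how deterministic this is: the "agentic" part in practice is letting a model decide which of these branches to take, while the branches themselves remain ordinary integrations.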
Going back to that earlier point: the acquisition we made some eight years ago was an integration platform as a service, a workflow automation tool. I'm sure some executive at Qlik will tell me that was a strategic, on-purpose decision; I still think we just got lucky when we made that acquisition and integrated it into our platform.

You can see there are really two ways people look at it. A large chunk of people want something goal-oriented: there's a specific outcome in mind, and the agent does the tasks required to reach that outcome or goal. And then there's proactive AI, "I see a trend emerging," which again involves predictive AI: I see something emerging in my data, and I'd like to make something happen before it occurs. Think of the age-old problem of customer churn. My model predicts a customer is likely to churn in the next six months. What can I start doing with an agent to prevent that customer from churning? How do I get them into a Marketo campaign? How do I alert the appropriate account manager to start proactive outreach and send emails? Maybe I have AI put together some sort of plan that gives the customer a discount or an incentive. That's where we're starting to see people find a lot of interest in what these tools can do for them.

So, to put agentic AI in perspective, as I've been leading up to: it's really a convergence of the traditional AI components we've seen historically. There's the generative AI we've been talking about, these LLMs, where people have been saying, "Okay, I get how it works.
I understand the value of it, but how do I get business value or return out of what this LLM can do?" And then there's predictive AI, which has been around for a long time. I've been selling predictive AI for close to 15 years now, and for the longest time we had to convince people there was value in predictive models, probably until about two years ago. A rising tide lifts all boats: it was the emergence of the generative AI trend that really drove broad adoption of predictive AI.


Suddenly we had to tell people: no, what you're looking for is predictive AI; you just think it's generative AI. So we're now seeing people come back and say: I want to start putting predictive models in place to influence my decisions and what the organization is doing. When we synthesize those pieces together, that's what leads to the ability to execute agentic AI well. Now we're saying: let's be proactive and make predictions about what's likely to happen; let's use generative AI to understand the tasks that need to be done and to orchestrate and route the appropriate tasks in the workflow; and then let's use some sort of workflow automation to execute them. When all those pieces come together, we start to see people get real business value out of agentic AI, because they're proactively making decisions using systems that ultimately reduce expenses or increase revenue in some way, and that's what drives the business value and impact.

That involves an evolution of complexity. We're moving from a world where, with generative AI, you put a prompt in and got an output: the LLM gave you an answer to the prompt. Now the LLM acts as the terminal, if you will. It's the point where the user comes and submits their request, says "here's what I'm looking to do," and the LLM routes the work to all the appropriate places.
So whether that's a knowledge graph, a vector database, or some data source, unstructured or structured, we're doing reinforcement learning from the feedback we get along the way to understand whether what the agent did was good or bad, and we have small language models doing task-specific things. Maybe part of the process is: great, I just retrieved a data set related to the solution you're trying to get, but now I need someone to analyze the data. A small language model that's good at customer-retention analysis can go through that data and point out interesting things you might want to surface. It might be a predictive model: let's run this scenario through a predictive model, optimize which combination of features produces the most likely retention of a customer, and then do whatever third-party integrations we need to execute what we've learned from the model. It's becoming much more complex as we build these composite systems to run these agents.

That's leading to a new class of enterprise software and systems, and there are a few evolutions that will happen along the way; I think it will be a while before we get there. There's an interesting generational gap: a lot of us in this room, myself included, are probably still somewhat skeptical of generative AI. You ask a question, you take everything with a grain of salt, and you probably say: maybe I'll verify that a little further. But there's a new generation out there that expects everything to be generative AI first.
They ask a question and tend to believe wholeheartedly everything that comes out, without much verification. And that's true in the analytics world too. A lot of people now expect the analytics experience to be: I just type in my question and get the answer I'm looking for. I don't want to go to a dashboard and hunt for the answer, I don't want to create a chart, I don't want to analyze a data set in Excel; I just want to ask a question and get the answer.

So early on, I think we'll see pretty simple deterministic workflows built as agents, call them safe automation tasks: do this thing, do this thing, check with a human before you continue to the next step, then finish your job. That's going to be the safe implementation of agents at first. Then you'll see that evolve into AI augmentation, where more and more people become willing to hand off some tasks to the AI models: look, go pull the data set, I'll trust you to find the right document for me; retrieve the data, do some analysis, and summarize what you've done up to this point before you commit any changes to my production systems. That's the augmentation of a deterministic workflow with some AI along the way, being fact-checked, if you will, by a human in or on the loop. And eventually maybe we get to a world where AI becomes trustworthy enough that we allow it to take certain tasks and run fully autonomously: I've put all the checks in place to make sure this won't cause any problems, so go ahead, run in the background, and do your thing.
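The "safe automation" stage described above can be sketched as a fixed sequence of steps with an explicit human gate before anything irreversible runs. The step names and the approval callback are illustrative assumptions, not a real product feature.

```python
# Sketch of a deterministic workflow with a human-in-the-loop checkpoint:
# steps run in order, and any step marked risky needs approval to proceed.
# Step names and the approve callback are illustrative assumptions.

def run_workflow(steps, approve):
    """Run (name, risky) steps in order; halt at an unapproved risky step."""
    log = []
    for name, risky in steps:
        if risky and not approve(name):
            log.append(f"halted before: {name}")
            break
        log.append(f"done: {name}")
    return log

steps = [
    ("retrieve data set", False),
    ("summarize findings", False),
    ("commit changes to production", True),  # human gate before this step
]

# Human declines, so the agent stops short of touching production.
print(run_workflow(steps, approve=lambda step: False))
```

The progression the talk describes is essentially moving the `risky` flags: first everything is gated, then only irreversible steps, and eventually, for low-cost tasks, perhaps nothing.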
There may be some tasks where that can happen today, relatively low-cost things. For example, in a customer-retention world, if you predict a customer is likely to churn and you enter them into a retention marketing campaign, where you're just sending promotional offers or product-related information, there isn't much cost to a false positive. If someone gets an email or two promoting your products whom you wouldn't have put into that cadence anyway, it's not a big deal. But if you're a credit card processor and you have a fully autonomous system declining people's credit cards because a fraud model thinks it's fraud, there's a very high cost to a false positive: now someone's at a grocery store, swipes their card, and can't buy their groceries because you incorrectly predicted fraud, and you've got a really bad customer experience that's potentially very costly to the organization. It will be interesting to see how these evolve over time. I'm not going to read through all of these, but you can see there are a lot of ways to think through the right approach here, starting with questions around strategic alignment.
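The false-positive cost argument above can be framed as a simple threshold: an action is safe to run fully autonomously only while its expected false-positive cost per decision stays under some tolerance. All the rates and dollar figures below are invented for illustration.

```python
# Sketch of the autonomy decision: expected false-positive cost per decision
# versus a tolerance. All numbers are invented for illustration.

def allow_autonomy(false_positive_rate: float,
                   cost_per_false_positive: float,
                   tolerance: float) -> bool:
    """Autonomous only while expected FP cost per decision is within tolerance."""
    return false_positive_rate * cost_per_false_positive <= tolerance

# A stray marketing email is cheap: safe to automate even at a 10% FP rate.
print(allow_autonomy(0.10, cost_per_false_positive=0.50, tolerance=1.0))
# Wrongly declining a card is expensive: keep a human in the loop.
print(allow_autonomy(0.01, cost_per_false_positive=500.0, tolerance=1.0))
```

The useful point of the framing is that a very accurate model (1% false positives) can still be unsafe to automate when each mistake is expensive, while a sloppy model can be fine when mistakes cost almost nothing.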
Strategic alignment is actually where we see the most consistent outcome among the organizations that are successful and getting value out of agentic AI: the business and the executive sponsorship agreed on what the people executing the agentic AI projects were working on. A lot of times we see a disconnect between the business and the lab, if you will. The business says, "here's my problem, can you help me solve it?" Data science goes and works on it for a few months, comes back and says, "here's the model I made," and the business says, "actually, that's not useful at all; I can't do anything with this, there's no way for me to use this information, I can't tie it into my production systems." So getting that strategic alignment is important.

Building a feasible tech stack is very important too, because another area where we see failure is cost: as soon as you start stacking up all of the components involved, it can become very expensive as you scale out. So how do we find the right partners to make sure we can ultimately do this at scale at the right cost? Then I'll lump the next two together as building a trusted foundation: how do I make sure the data I'm feeding into these systems is accurate, is the right information, and grounds the truth of these AI models in a way that the user can, maybe not blindly, but trust and verify in their responses? And enterprise-grade is about making sure this can be integrated wherever users want to consume it. I don't want to make them go to a specific tool; it needs to be in the tools they're using day-to-day anyway, in a way that feels seamless and integrated to them.
And how do I make it governable and bring observability? I want feedback from my users on what's going well and what isn't so I can make improvements, but I also need to make sure people are accessing the right information and not being exposed to information they shouldn't have access to.

So I'll close with what I guess will be the one pitch for the Qlik stack, but it's relevant to the AI projects you're working on. It's moving from data quality, that trusted foundation, and data access, being able to go into source systems and bring data into some sort of data product, a data warehouse somewhere, where you're building data transformation pipelines and applying trust scores. At Qlik we have this concept of a trust score that evaluates the freshness of the data, the usability, the completeness, so you can really understand: should users trust these data products, and should they be used in an AI model because they're the right information? And it goes all the way to the delivery side, where we look at how to not only build predictive models and generative AI assistants but deploy them to places people can actually use them and act on them. How do I put it on their phone? How do I embed it into Salesforce? How do I make it approachable in the systems they consume every day, so they don't need to log in to Qlik, in this case, to consume that information? So I'll end on this note: how does this look from a simplistic view? How do we start to think about these agents?
These are some of the agents we're working on in the Qlik ecosystem as we find ways to make the analytics experience more consumable and approachable using agents: going through the structured data in your applications; going through unstructured data, like the SharePoint site or file repository we talked about; and proactively looking for insights you maybe weren't looking for in your data and surfacing them to users. So this is how we're using agents internally at Qlik to make our product better, but it gives you an idea of how people start to think about where they can embed this into their business processes. With that, I'll stop there, and I'm happy to take any questions if anyone has any.

— So where does agentic AI fall under that? Is it under predictive AI or under generative AI? And the second question: aside from predictive AI and generative AI, what other AI is there?

— The question was whether agentic AI falls into predictive or generative. I'd say it's both; it's a combination of those. Predictive AI and generative AI are inputs into an agent. I take a bit of issue with the term "agentic AI," because it's really not a new form of AI; it's a combination of other AI models to do tasks, or workflows. So we bring those together along with integrations into third-party systems: some predictive models, LLMs and generative models, and then whatever tools are available to us.
So maybe that's Salesforce, maybe Teams or Slack, maybe sending an email or working through a Marketo campaign system: orchestrating all those pieces together to make decisions for us, route the appropriate work to the tools, and let the work execute in those tools. That's what makes agentic AI. It's about saying: here's something you've detected going on; solve it for me, maybe before I'm even aware it exists, then get the work done and tell me what you did. That's how I see agentic AI. As for other kinds of AI, I'm not aware of any beyond those two. You could define a lot of things as AI, but as I see it, it really falls into those two buckets: predictive and generative. Any other questions? I can't have been that thorough. Come on. Okay, there's one.

— I was just wondering, what's the key takeaway for businesses to successfully implement agentic AI?

— One of the things I talked about was strategic alignment: before you even start a project, make sure the executives, the stakeholders who are investing in this, are bought into the use cases you're working on. I think the second most important thing beyond that is trustability: how do I present results in a way that someone can verify that what I've put in front of them is accurate information? To give you an example, in our Qlik Predict product, our predictive AI, we output what are called SHAP values, the explainability of a predictive model. It says: here's the prediction I'm making, and here's why I've made it; here are the things that influenced this prediction.
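To make the SHAP idea just mentioned concrete: for a plain linear model, per-feature attributions in the SHAP spirit reduce to weight times the feature's distance from a baseline (typically the mean). The feature names, weights, and numbers below are invented toy values; real SHAP tooling (the `shap` library) handles arbitrary nonlinear models.

```python
# Illustrative only: linear-model attributions in the spirit of SHAP values.
# Each feature's attribution is how far it pushes the prediction away from
# the prediction at the baseline (mean) input. All numbers are invented.

def linear_attributions(weights: dict, x: dict, baseline: dict) -> dict:
    """Per-feature attribution for a linear model: w_i * (x_i - mean_i)."""
    return {f: weights[f] * (x[f] - baseline[f]) for f in weights}

weights  = {"tenure_months": -0.02, "support_tickets": 0.15}
baseline = {"tenure_months": 24.0, "support_tickets": 2.0}   # population means
customer = {"tenure_months": 6.0, "support_tickets": 5.0}

attr = linear_attributions(weights, customer, baseline)
# Short tenure and many tickets both push this customer's churn risk up.
print(attr)
```

The end-user story is the same as in the talk: rather than a bare score, the consumer sees which features pushed the prediction up or down and by how much.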
That way, my end users consuming the prediction can say: I understand why it got there, that makes sense, and I believe it. On the generative side, we do RAG, retrieval-augmented generation. That means we're retrieving specific documents and chunks of text from those documents, and we can say: here's the document that answer was generated from. So you don't have to wonder whether it came from a trusted or untrusted source on the internet; you can go to that document, decide whether you believe it, and see exactly where the answer came from. That visibility into how the AI made its decisions is important. And the final piece is getting it into the frontline systems people are using. You can build the best AI in the world, but if you don't deploy it outside of the lab I talked about, it means nothing; there's no value, because no one can consume it. So from my perspective, those are probably the three most important things.

— Thank you. Do you see any limitations on the types of data sources that you can link up to agentic AI?

— In terms of data-source limits, I'll say PDFs are notoriously hard to work with. A PDF is basically an image in a document format, and think about how many PDFs you've seen where someone scanned a page straight into a PDF.
So one of the things our research teams are working on is how to make images and multimodal inputs, video, audio, image files, even the pictures inside PDFs, understandable to the large language model so it can produce responses from them, because there's often a lot of value in a chart embedded in a report inside a PDF. Those are the sources we see the most challenge with today. We're also thinking through chunking: if you're familiar with RAG at all, the traditional chunking process basically goes a thousand characters, stops, and creates a new chunk, another thousand characters, and so on. Imagine you hit a table inside a PDF: one chunk could hold the top of the table with the headers, and the next chunk the bottom. So you might only get part of the table and not the rest, and then you don't do a complete analysis. That's another area where we're seeing a lot of evolution in how to improve the results. Okay, I'm getting a very stern finish signal. If I didn't answer your question, please come find me; if there's another session after this, I'll just be in the hallway and happy to have a conversation. Thank you, everyone.
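As a closing illustration of the chunking failure mode described in that last answer: naive fixed-size chunking cuts wherever the character count says to, so a table's header can land in one chunk and its rows in another. The chunk size and toy table below are assumptions for demonstration.

```python
# Sketch of naive fixed-size chunking splitting a table: the header ends up
# in one chunk and later rows in another, so a retriever that returns a
# later chunk loses the column meanings. Chunk size is a toy assumption.

def chunk(text: str, size: int) -> list:
    """Naive chunker: cut every `size` characters, ignoring structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

table = "region,revenue\nEMEA,120\nAMER,300\nAPAC,90\n"
chunks = chunk(table, size=20)
print(chunks)
# The header "region,revenue" sits only in the first chunk; a chunk holding
# the APAC row alone no longer says what its numbers mean.
```

Structure-aware chunkers address exactly this by splitting on document elements (paragraphs, table boundaries) and by repeating table headers into each chunk that contains rows.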
