AI Autonomy Is Redefining Architecture: Boundaries Now Matter Most



Contents (11 segments)

Segment 1 (00:00 - 05:00)

If you're the kind of senior engineer, architect, or technical leader who people look to for what's next, QCon London is probably on your radar. Join us in London from March 16th to the 19th, where we go deep on the topics that matter, like the architectures you've always wondered about, engineering productivity, and applying AI in the real world. This isn't about trends for their own sake. It's about getting practical insights from senior practitioners to help you make smarter calls on where to invest your time in tech. With software changing fast, QCon London is a conference that helps you lead the change. Learn more at qconlondon.com. — Welcome, everyone. We are starting the second podcast in our series, the Next Generation Architecture Playbook, and it's about insights and patterns for the AI era. Earlier we did episode one with Grady Booch, where we discussed a principled view of what's changing and what remains unchanged, what is hyped and what is actually coming naturally with AI. We also spoke about the difference between design and architecture, what teams are focusing on, and what they might be missing. And the beautiful part was that Grady touched upon the third golden age of software engineering and architecture that we are living in. So if you have not listened to that podcast, I would highly recommend going back to it. The episodes are not in any particular order, but it will give you a lot of useful context. With that said, I'm happy to start our second episode, which is all about evolving architectures: what is evolving about architecture in this AI era, how we go about it, and practical advice, from our experiences, on how to really go about designing for it. To discuss that in detail, we have our guest today, Jesper Lowgren. Am I pronouncing your name right, Jesper? — Perfect. Thank you. — Thank you.
And Jesper is joining us from Australia, where it's late evening for you. Thanks for making it happen. A little bit about Jesper, and then I would ask you to add any details which I might miss. Jesper is an enterprise architecture lead with DXC Technology. He has been teaching us about enterprise architecture frameworks. He is the author of the Enterprise Architecture 4.0 framework, and recently he has written a book which I really love, by the name Design or Be Designed. So with that great background, Jesper, tell us a bit more about you and what your thinking is these days. What do you want to tell us? — Yeah, thank you for that introduction. The only thing I would like to add is that for the last two years I have been almost obsessive about generative AI and how it is affecting businesses, people, processes, and the entire workplace. And I have been lucky, in a sense, that I've been able to manifest a role within DXC where I am 100% focused on generative AI: building up frameworks and models, going in and talking to customers about it, running proofs of concept, running experiments, and so on,
and actually seeing in real life how these things work. This new world that we're going into, this new gen-AI-fuelled world, is very different; there are a number of fundamentals in this world that are very different from the world that we are coming from. I find it very interesting, and I'm very lucky that I get to spend all of my time in this space, partly experimenting but also designing, architecting, and testing what works and what doesn't. It's very exciting times. — So we are into generative AI; do we need generative architectures? You said we are experimenting, we are designing and architecting, but the change here is the pace. We always used to do this, but in every era we adopt something, leave something behind, and move on to the next thing, so there is always a trail of what we are leaving behind. And with this pace, what is happening in the industry these days is insane. However, most of the time I see the conversation is around tools, and not really around systems thinking and how systems are evolving. So tell us from your experience, from what you're going through these days: do we need generative architectures? What does the problem space really look like from your perspective? — I think the short answer is absolutely. I would like to take a small detour, because I think that sometimes the differences are not really appreciated. So I'm just going to use an analogy.

Segment 2 (05:00 - 10:00)

So let's say 100 years ago we had a piece of paper. I'm a salesperson and I take a sales order, and I write it up on a piece of paper. We get to, let's say, the year 2000 or 2010, and we start automating all of these pieces of paper. Now it's a digital copy of the paper, and we can choose to digitize the entire workflow end to end and make a completely digital process, or we can choose to digitize only part of it. So we have all of these shades of grey. We can automate a lot or we can automate a little; we can introduce a lot of technical debt or we can introduce very little. It's a choice. We don't have that choice with AI, because if we introduce technical debt into generative AI, it is going to drift and it is going to hallucinate. So our mindset has to shift. We have to think about it differently, and that means that the architecture also has to change. And of course the real change here, the one driving all of this, is this word: autonomy. Because if we don't have autonomy, we just have robotic process automation, and we have done that for a while. That's the real shift: real autonomy, and how we handle it. Because what happens when we turn on the autonomy tap is that we're giving agents free will. It is going to play up, to do things that we don't expect it to do, what we call emergent behaviour. It will absolutely happen. And if we put a number of these agents together, connect them into a system of agents, and they're autonomous, we're going to get emergence on steroids. These are new situations that we haven't really faced in the past. We're used to governance where we know exactly what can go wrong, because everything is procedural, everything is logic driven. It's a list of 20 things, and we know that if something goes wrong, it is one of those 20 things. But when we talk about emergence, we can't predict exactly what's going to go wrong.
So the entire thinking around architecture and design and guardrails and governance, everything has to change in order to be able to manage and control this new thing that we call autonomy. — So with that, we are acknowledging that yes, things are changing, and changing fast; we cannot rely on the same rules in the new world. To dig deeper into this architectural space we are talking about: what are the architectural mistakes people make when they are embedding AI into their existing platforms? Are we doing some things well, some things wrong? Let's talk about the problem space in more concrete terms: OK, things are evolving, acknowledged, but what are the mistakes we are making here? — I would take a step back and look at the MIT report that was released about three months ago, in 2025, which talks about 95% of all proofs of concept failing. We need to understand why they're failing in order to really address that story, and I'm going to put forward a hypothesis that I'm proving out with a number of customers, and that I'm starting to write more and more about. It is the mindset shift again. In the past, if we were building a system, architecting a system, designing a system, we were really in this mindset that we call procedural logic: we determine the workflow, we build the workflow. Start with this, then you have an evaluation; if it's A, you do this, if it's B, you do that. You have this entire sequence of events. It is determined already. So we have that on one side, and on the other side we have autonomy, which is free will. This is like oil and water. They don't really belong together. This one wants to do its own stuff, and here we are telling it, you have to do it this way. It will be an incredibly expensive way of employing AI, to try to put generative AI into a procedural construct. We're getting all of the cost and we're getting none of the benefits.
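The oil-and-water point can be made concrete with a small sketch. In a hedged, illustrative Python example (every name and field below is invented, not taken from any framework Jesper uses), the procedural mindset fixes every step and branch up front, while the boundary mindset only states the goal and the limits and leaves the steps to the agent:

```python
# Illustrative contrast only: all names and fields here are invented.

# Procedural mindset: every step and branch is decided by us in advance.
def handle_order_procedurally(order):
    steps = ["validate"]
    if order["amount"] > 1000:
        steps.append("manager_approval")  # branch fixed at design time
    steps += ["reserve_stock", "invoice"]
    return steps

# Boundary mindset: state the goal and the limits, and let the agent plan.
AGENT_SPEC = {
    "goal": "fulfil the customer order end to end",
    "boundary": {
        "may_decide": ["shipping method", "discount up to 5%"],
        "must_not": ["approve refunds", "change list prices"],
        "escalate_when": "order amount exceeds 1000",
    },
}

def within_boundary(action, spec=AGENT_SPEC):
    # We no longer dictate the steps; we only check each proposed
    # action against the boundary before letting it through.
    return action not in spec["boundary"]["must_not"]
```

The second shape is what the rest of the conversation calls a boundary: the agent plans its own steps, and each proposed action is validated at the seam instead of the workflow being scripted.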
So that is not the answer. And we're coming back to the mindset shift again. It's not about controlling the logic at runtime. It's really about understanding the boundary. It's almost like a genie in a bottle, where the genie is a naughty AI agent, and that naughty AI agent wants to get out at any cost, so it can play up. We have to make sure that the boundary we are putting around the agent is tight: all of the seams, all of the holes, all of the interfaces

Segment 3 (10:00 - 15:00)

into this boundary, that we really understand what they are and how to control them. Once we do that, then we can say to the AI: I'm not going to tell you what to do. You're really smart already. Some of the AIs that we're working with now have an IQ of 140; that's higher than mine. The AI is much smarter than me. I'm not going to tell it what to do. I'm going to tell it what it can't do, and I'm going to tell it what I want it to achieve. So I'm going to give it a goal, for example, and that's very important: you want to give it a goal, something to achieve. And I have identified seven things that define the boundary of an agent. If you define these seven things, you can have fairly high confidence that the agent is going to be contained within that boundary. And of course, the real problem with agents is when you get multiple agents together and they start producing emergent behaviour together. That's when you really need that boundary. The more agents you have, the more control of that boundary you need. — Yeah. — And that's the first step. We need to understand what makes a system scale, and I think that's what an architect brings to the table. We are the people who understand scale. Everyone else just wants to take the bits and pieces, start hammering them together, and worry about what they're building a little later on. Architects are the opposite: we build foundations, because we want to scale. And when it comes to AI, it is such a different foundation. It looks nothing like the foundation we're coming from. As I said, the way I define these is that I talk about seven seams, and they form part of the agentic architecture, where you need to define the goals, the decision policy, and so on. Sorry. — Before we go into the seven practices and those goals you're talking about.
So let me express the important points which you have touched upon, from my understanding. You said we are trying to do retrofitting where we should have intentional design. You said we are trying to handle the procedural together with the non-deterministic, trying to merge and marry these things. But the other side of the problem which I want to delve into, because yes, architecture and design being totally skipped, or looked into in a very shallow way, is one major part of it, is keeping reliability and governance in place alongside the speed of innovation, because we know governance has not sped up the way innovation has. Can you touch upon this aspect as well? One part is the design piece, which you have already covered; the second part is governance, and the speed at which the business wants to accelerate. — Yeah, the answer is that they're one and the same. And perhaps the best way of explaining why I say such a contradictory thing is an analogy I love: a merry-go-round that is spinning. We're sitting on a horse that is going up and down, and the ride starts spinning faster, so we have to hold on a bit more, right? Then it gets faster again and we can't hold on anymore; it spins too fast. Either we let go and fly off, or we move into the centre. So we move to the centre, holding on to another horse, and we are fine. But it spins faster again, so we move further and further into the centre. And I see this as really what's happening with agentic AI, or gen AI: the things that were a strategy document here, an architecture diagram there, a design, a BPMN diagram here, a governance document there, they don't sit on the outside anymore. They can't, in this world. They're coming together and they're fusing.
So for example, to answer the question: you must design governance into the agent, into the system, at design time. They are not separate; you actually do them at the same time. It's not that you innovate and then governance catches up. That will never work, because you're always going to have a mismatch, and you can't have a mismatch in these systems; they're going to drift and do horrible things. So they need to be joined at the hip when you're designing the agent: you build the governance into the agent as you design it. — Acknowledged, and I love the analogy which you have used: either you control it from outside, or

Segment 4 (15:00 - 20:00)

you get spun off with the moving thing. — Exactly right. — Yeah. But now let's delve into the guardrails you're talking about. What are those guardrails, and will they evolve? If you can, take some examples along with the seven-part framework you want to touch upon. Can the guardrails evolve too? Because again, we cannot make the mistake of being rigid with our design. I say that designs are drifting all the time. — Absolutely. — I have touched on this in my book, with some techniques I have laid out there, but when designs are drifting, our mindset needs to change: design is not a one-time activity. I talk to various people and it's like, we did the design in the beginning, we are done. We are not done. It's changing with every configuration change in the cloud that a developer makes, or that maybe the AI is making on its own without even telling you. So let's talk now about those guardrails. What should they be in this changing world, and how do we make them evolvable? — They are actually evolving, of course. The way I look at it is through maturity levels one to five, and I've invented a sixth one that we can talk about if you're interested, but let's say there are five, and it's useful to go through them. The first one is ad hoc; that is CMM level one, typically. That's when you have an AI system, when you're enabling Copilot for everyone in the organization. You're going to get benefits, but it's hard to measure them. Then level two is when things become repeatable, and that's where I put the AI agent: you can repeat a business process. Say a policy agent, or an employee onboarding agent: a singular agent with a singular purpose. So far we don't need to change that much. What we have today can handle that. I mean, we are deploying these kinds of simple agent systems, and they work OK.
They're expensive to maintain because they're brittle, but they work. It is when we get to level three, where we have multi-agent systems with some level of autonomy. I'm not even going to talk about four and five; they're so speculative. But level three, multi-agent systems: this requires a new operating model. It requires a new design language. It requires a completely new architecture, a completely new governance approach. And it's a really big step. That's why I think we need a step in between, so I'm putting in level 2.5: a multi-agent system, but without the autonomy. And when you look at the guardrails, they look different as you move between the maturity levels. For example, one guardrail that's very important and not well understood is a good one to pick: authority, the decision right. If we are going to put in any kind of autonomy, if we're going to give it agency and allow it to make decisions on our behalf, we need to be crystal clear on exactly which decisions it can make. I mean, it's common sense, but it is something that's been left far too late. So that's a really important guardrail: you always understand exactly what the agent can decide and what it cannot decide. When you're moving, for example, from maturity level two into level 2.5, there's quite a delta, and then when you're moving from 2.
5 into three, where you turn on autonomy, there's quite a big delta again. So the guardrails are changing, also because you need to look for different things. When you turn on the autonomy, suddenly the risk picture is much more complex, much higher, so your guardrails have to reflect that, and you need to design that into the agent, again from the beginning, to make sure that you can scale both what the agent does and the governance. I'm coming back to this thing: they have to be joined at the hip, it's really important. — Yeah. While you're bringing agents into the picture, I'm a bit worried, because not everything is an agentic problem, right? We'll talk about that later; we have dedicated time for that as a separate episode. But I like the framework you're describing, where organizations can assess themselves. Most organizations are in the initial levels, the ad hoc and repeatable ones you mentioned. But what are those guardrails, beyond the policy documents, or the very high-level principles we are giving people,

Segment 5 (20:00 - 25:00)

that have the least privilege. Let's talk about those. I mean, let's say I'm putting in my new AI system, which is driven by LLMs, by gen AI, and maybe it has some agentic components as well. Now I'm marrying and merging this with my existing system, which is procedural, microservices based. — Yeah. — What are one or two of those guardrails you're talking about? — OK. The guardrails for this: I have defined seven of them. There are obviously other ways of approaching it; I have only really found one way of doing it. So, one of my guardrails is scope, and scope is specifically about understanding your interaction points, your contact with the outside world, agentic or non-agentic. Let's say that you're interfacing into an ERP or a CRM or any kind of external system, whatever shape and form: you actually have a guardrail specifically for external systems, and that's how you manage that. To give another flavour of these guardrails, I mentioned goals. This one is really important, and in a sense perhaps one of the harder ones. Imagine we have an agentic system, and here you have agent one, the profit maximization agent, and agent two, the margin maximization agent, and then a third; let's introduce a few more, say a warehouse agent. Let's say you put all of these agents together and just say, go for it. That would be very dangerous. They have their own goals, and they're going to pursue their own goals, but you have no idea what the emergence is going to be. So again, one of the very important seams in the boundary, one of the governance pillars, is this ability to define a goal in an intelligent way to an LLM. If you have three goals here, you need to provide some kind of guidance to the LLM on how to weigh them. And it could be, for example, and again we're touching on another guardrail now, policy.
It could be a policy that says that you must never, ever go below a 10% profit margin. That might be a constraint that is built into one of the agents. So that is how you start to put boundaries around them, around what they can and can't do, and policy is the main instrument for that. And then we also balance the goals, saying that this goal is more important than that goal under these circumstances. We talked about procedural logic; it doesn't disappear. We are pulling it out of the core code and putting it into the boundary instead, and we are telling the LLM: OK, you figure out the code, as long as you are following these guardrails. — Makes perfect sense, yes. But I think what you're telling organizations, and the people working behind the scenes, to do is to evolve towards these emergent behaviours, where the goals are separate and distinct and the system still has to work. So again, it comes down to needing more systems thinking than ever before. — Absolutely. Can I give an example? — Sure, sure, please. — OK, so this is how it plays out in real life. This is a little bit wild, but I'm doing it anyway, because I think that we need to push the boundaries, and it works, and it's a completely new design process. I'm used to the old way, and I've run so many workshops: you get a team of people, you have the whiteboard, you do the service blueprint or the customer experience design or a BPMN diagram, and you draw, and you have sticky notes, and so on. I'm running these workshops radically differently today. Recently I did one for a listed company in Australia, and I captured all of the information around the boundaries, or as much as possible: the goals, what I understood about authority and delegation of authority, for example, a lot of the policies, the interfaces into other systems, and so on.
So we could define the boundary; I had defined the boundary in an LLM. And then we invited the customer. They flew in from all over the place, and we sat in the room. There were two IT people from the customer, I think four people from the business, senior people in different areas of the business, and a couple of people from DXC Technology. And I said, we're going to design your future call centre process now, end to

Segment 6 (25:00 - 30:00)

end. I put everything into the LLM. It understands your business; it understands the boundaries. Instead of going up to the whiteboard to start drawing, we said: we can't design a process for AI better than AI can design it itself. We need to let AI design it, and that is what we did. For the first prompt, there was a big screen instead of a whiteboard, so everyone could watch me typing, which is terrible, but anyway. I put in a prompt: based on everything that you know about company X, I want you to develop an end-to-end process that takes everything into account. Hit enter, it goes away, and it comes up with an agentic design of about 27 agents. I have also written a program that I can feed it into, so I can get it graphically represented. And then, rather than going in and trying to understand the process, because some of it was common sense, other parts not so much, I took a very different approach: we want to test the boundaries. Everything is about the boundaries. These business experts I had invited were people who really understood their business and everything that could go wrong; they were experts in the edge cases, and that is how you validate the system. It's one of the best ways: you throw every edge case at it and try to break it. We made it a competition, you know, a bag of lollies for anyone who could break the AI. And one person actually could: they found an area where it needs to apply national policies depending on the country it's in, when it comes to buyer protection and things like that. So we ended up with 33 agents instead. But the LLM designed the entire system inside the boundaries, because we had defined the boundaries. And once we had that design, we took a part of it, automated the coding, and actually built the pilot. It's a new way of thinking and operating, and it is insanely fast. Absolutely.
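The goal-and-policy guardrails discussed a little earlier, the hard "never go below a 10% profit margin" constraint and the weighting between competing agent goals, could be sketched like this. It is a toy illustration; all the weights, names, and numbers are invented:

```python
# Toy sketch of goal balancing plus a hard policy floor. All values invented.

AGENT_GOALS = {
    "profit_agent":    {"goal": "maximise profit",        "weight": 0.5},
    "margin_agent":    {"goal": "maximise margin",        "weight": 0.3},
    "warehouse_agent": {"goal": "minimise stock holding", "weight": 0.2},
}

MIN_MARGIN = 0.10  # hard policy: never, ever go below a 10% profit margin

def allowed_by_policy(price, cost):
    # Hard constraint enforced at the boundary, regardless of what
    # any individual agent's goal would prefer.
    margin = (price - cost) / price
    return margin >= MIN_MARGIN

def blended_score(preferences):
    # Soft trade-off: guidance on how to weigh the competing goals
    # when they pull in different directions.
    return sum(AGENT_GOALS[name]["weight"] * score
               for name, score in preferences.items())
```

The policy check is absolute and sits in the boundary; the weights only express how goals are balanced inside it.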
— I love that this organization you worked with is spending so much time doing this. I wish everyone did, because that's what is most missing in the whole pace of AI. And while we were touching on systems thinking, I also want to share one example which I absolutely loved from Dr. Werner Vogels, from his last keynote at re:Invent. He mentioned a forest from which the wolves were removed, because they are very aggressive animals, killing everything else and doing damage. It all seemed logical; everybody supported it, and the wolves were removed. And within a decade of removing them, everything started to degrade: water problems, forest problems, greenery problems, even certain species dying out. That made them look back at where they went wrong, and they realised the removal of the wolves was the bad decision. They had to bring the wolves back, and within a few years the forest started recovering. So while everything might seem logical in the AI strategy decks most people are putting together these days, the system design is missing: how is this working today, how will it work with the new components, and how will it evolve in the whole picture? If we now move towards the guardrails and the safety aspect: what are the content filters, what are the fallback logics, and what is your advice in that area? — I'm coming back to these seven seams in the boundary again. The way I understand agentic and generative AI is reflected in the architecture; to me, the boundary almost sits in the middle and informs all of the conversations. So for example, in terms of safety, one very obvious one is risk.
One of the seven dimensions is risk, and that is really, again, about understanding the risk within an agent and within the system, and firstly understanding how these things happen. I mean emergent behaviour: even if we are defining the boundary, how do we actually detect the emergent behaviour? What are we actually looking for, if we don't really understand it? And

Segment 7 (30:00 - 35:00)

that is part of the risk, and part of the safety of the system itself, because we need to have some kind of understanding of it in order to trust it. I have some ideas about what these things are. I think that we can work out a minimum viable starting point, start there, and learn over time. So I think risk is very important. But I'm looking at safety a little differently, probably, from where you are coming from. Safety, for example, also sits in another of these dimensions of the boundary, one of the seven, and that is semantics. You cannot build any multi-agent system unless you have shared semantics, and that's part of your governance as well. If you have an agent here operating independently, fine, I don't really care too much. But if you're stitching five of them together, where they're making decisions and they all have a different context, that's not going to work at all. They're going to drift and hallucinate immediately. So in multi-agent systems we have to give the agents the right context at the right time, so they can make the right decision. That is critically important, and that is what the semantic dimension of the boundary does. It makes sure that all of the agents have the same understanding: if we're talking about, for example, "done" in the context of a customer order, every agent in the system, including the humans in the loop, knows exactly what "done" means. It doesn't mean this or that; it means exactly this, and everyone interprets it the same way. That's another way of enforcing safety: we are always using the same language to communicate. And another way of looking at safety, coming back to the boundaries again, is evidence. It is sort of an output of policy, perhaps, but it's really about: how do we know that something in the system is true? How do you prove anything?
How do we retain the record, so we can go back, look at something, and say: yes, this was true at this point in time? That is also about making the system safe. So there are a lot of these boundary things, and then we can build other things in here too. We can talk about fairness and ethics and morals and all those kinds of things, which are all about how we're controlling the model and making sure the models are doing the right thing. But the question of safety is so broad that it touches everything, and I think if you don't trust one part of the system, it's going to be hard to trust the whole system. — A lot you've said there around semantics and risk and the boundaries of the system, and it sounds like you're already quite deep into the multi-agent system architectures you are thinking and writing about. I want to bring you back to thinking atomically, because while we are talking about organizational risk and user trust and everything in between, things start small, right? A developer is writing code, maybe vibe coding or spec-driven development, and we are pushing this code to production at pace. That person is often not in a position to think about user-level safety, maybe not even aware of the whole ecosystem. So where do we start at the atomic, unit level? When I'm defining my function, how do I think, as a developer, about what the emergent behaviour of this will be, about the small things that then build up to the semantics that make perfect sense and connect the dots fully? — I would do what I did. When I started this, I was not a developer, although I'm a reasonable vibe coder now. When I started, 18 months or two years ago, it was ChatGPT 3.5 back then, and I realized immediately this was going to be really good. Here we have automated cognition; this is going to make a difference.
And I understood straight away that for all of this to work, it is not about having a bot here and there; we're going to have to be able to connect them. In my mind, it makes no sense unless we actually look at them in terms of a system. It's not about training an agent here and there. And I think that as developers, we have to set our sights much higher than that. We need to set our sights on the system and understand the system. And then we're coming back to some of the things we have talked about here.
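Pulling together the seven boundary dimensions named across the conversation (goals, scope, authority, policy, risk, semantics, and evidence), a system-level sketch might look like the following. The structure and the field contents are my illustration of the idea, not code from Jesper's framework or book:

```python
from dataclasses import dataclass, field

@dataclass
class AgentBoundary:
    """The seven 'seams' discussed above; every value below is an example."""
    goals: list       # what the agent should achieve
    scope: list       # external systems it may touch (ERP, CRM, ...)
    authority: list   # decisions it may make on our behalf
    policies: list    # hard constraints it must never violate
    risk_notes: list  # known failure modes to watch for
    semantics: dict   # shared vocabulary, e.g. what "done" means
    evidence: list = field(default_factory=list)  # record of what happened

    def may_decide(self, decision: str) -> bool:
        # Authority seam: decision rights are explicit, not implied.
        return decision in self.authority

    def record(self, event: str) -> None:
        # Evidence seam: retain what was true at a point in time.
        self.evidence.append(event)

boundary = AgentBoundary(
    goals=["resolve the customer case end to end"],
    scope=["CRM"],
    authority=["offer goodwill credit up to $50"],
    policies=["never disclose another customer's data"],
    risk_notes=["emergent behaviour when chained with other agents"],
    semantics={"done": "customer confirmed resolution and case closed in CRM"},
)
```

The point of a structure like this is that every seam is written down at design time, so governance is joined at the hip with the design rather than bolted on afterwards.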

Segment 8 (35:00 - 40:00)

But I think that a lot of these things sound complex and different. They're actually not that complex. I have just come back from a piece of consulting with a government department that is redeveloping all of its horribly complex core systems. I'm looking at the technical debt and thinking, holy, and I'm not going to say the f-word, but holy f. This is so complex that you can't fix it. It's unfixable. How do you untangle all of that spaghetti? You can't. So we are facing incredible complexity already. The world that I'm painting is not that complex; it's just really different. This is a mindset thing; that is the hard thing. If you go in and build a multi-agent system, and let's say you use a framework like CrewAI or Magentic, it really doesn't matter which one you're using: you can build an agentic system using procedural logic, or you can build an agentic system using the principles we are talking about here. It's not that one is more complex or harder than the other; we just have to think differently. The difference is that one of those systems has infinite scalability and has all of the governance built in, through these seven seams, before you even start designing your agents. So my recommendation for a developer who is working in code, in the traditional systems development life cycle: get off it. Honestly, get off it. It's a race to the bottom today. The power of the technology and the tools is changing things very fast anyway. I would just start investing time in learning and experimenting.
Take some of the things we're talking about here — perhaps even my book on agentic system design, which explains how this works — and with those principles you can build an agentic system along the lines of what we are discussing. That is what I would do. I would not continue what I'm doing and try to do it better and faster. I would not run the same race; I would go into another race. — Yeah, definitely. What I think is that organizations are giving all the AI access and tools to people at all levels, but they're not building the AI capability that is actually needed at all levels, from top to bottom. Well said. Let's talk about some trade-offs in this space: explainability, evolvability, and what I see as pace versus stability. What are the new trade-offs you think are most important to keep in mind now? — Yeah. When I talk to executives, I scare them on purpose. I tell them that what you're used to calling technical debt becomes, in agentic AI, debt that you can't detect. That is not strictly speaking true — I'm exaggerating when I say it — but I think the conversation is really about how much technical debt you can afford. Because when you're talking about an agentic system, even if you cap the autonomy, you're still going to have emergence. You have all of these agents connected, doing things together; something is going to happen that you can't predict. The question is how significant that is. So in terms of trade-offs, it really comes back, I think, to how much technical debt you can stomach. When you allow technical debt inside the boundary, you know that something is not going to work properly. Something is going to drift. It will always drift unless you lock it down. So the question is: how much drift are you willing to accept?
And there are systems that are not involved in really critical decisions — unlike, say, payment systems or trading systems, which deal with very important things — it could be, for example, a policy system around travel. It could be that we are happy to accept a little bit of drift in those agents, because at the end of the day, if the agent gets it a little bit wrong or the workflow is not working perfectly, it might not matter too much. So I think the trade-off is really going to be about how much we are willing to let it drift, hallucinate,
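The drift trade-off just described — near-zero tolerance for payments or trading, more slack for something like a travel-policy agent — can be pictured as a simple gate. This is a minimal sketch; every name, field, and threshold here is an illustrative assumption, not part of any real framework:

```python
from dataclasses import dataclass


@dataclass
class DriftBudget:
    """How much deviation from expected behavior a domain is willing to stomach."""
    domain: str
    max_drift: float  # 0.0 = lock it down completely, 1.0 = accept anything


def action_allowed(budget: DriftBudget, drift_score: float) -> bool:
    """Gate an agent action by the domain's declared drift tolerance."""
    return drift_score <= budget.max_drift


# A payments domain tolerates almost no drift; a travel-policy domain tolerates some.
payments = DriftBudget("payments", max_drift=0.01)
travel = DriftBudget("travel-policy", max_drift=0.25)

assert not action_allowed(payments, drift_score=0.1)  # blocked in a critical domain
assert action_allowed(travel, drift_score=0.1)        # acceptable in a low-stakes domain
```

The point of the sketch is that "how much drift can we stomach" becomes an explicit, per-domain number a boundary can enforce, rather than an implicit hope.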

Segment 9 (40:00 - 45:00)

lose insight into the reasoning — and it really comes back to the business problem they're solving. Obviously, the more important the business problem, the less drift we can accept and the more governance we need. — Agreed. That layering is maybe what we need in order to overcome the trade-offs we have spoken about. Now let's move on to responsibility boundaries. We have created a lot of layers in organizations, from product teams to platform teams to architects to various other functions; in governance, security usually stands alone, even though we say it all has to be connected. What do you think about the responsibility boundaries around generative AI and agentic systems? When something goes wrong — when it hallucinates — where does the responsibility lie? Whom should we blame? — It's a good and interesting question, and I have to speculate now, because I don't think the systems I'm part of designing and implementing have been running long enough to really answer that comprehensively. Having said that, if we go back to architects — because architecture is the most important profession on the planet, right? — architecture changes quite a lot, and I think there are going to be certain responsibilities in this entire new operating model that we are implicitly discussing here. Let's start at the top, for example with business architects. They will be responsible not so much for capabilities and those things; they're going to be responsible for the anatomy and structure of policy. With these different policy types and policy instruments, can we capture the essential business rules of the organization using this construct, and if we can't, do we have to change it, and so on? So business architects are going to be intimately involved in this interfacing between the business and agentic AI, and it's similar with other roles.
My hypothesis is that the business architect is going to be the most impacted. I think the second one is going to be the data architect, because ultimately data is everything. It's okay if we have a proof of concept or a pilot and it's small, but as we open up the ecosystem or allow other data sources to come in, if that data is not high quality — that is, if it doesn't fit the ontology and our semantic layer — then things are not going to work. — Yes, that's a space which needs to evolve further, given the boundaries we have created, and of course Conway's law always comes into the picture here. What about agentic AI itself? I know you have written a lot about it, and you have spoken about it throughout this talk. With agentic AI, are we solving the real problems? One day another architect and I had a very good discussion, and we were saying that while our managements keep telling us "go solve the problem with agentic AI" — what is the problem? Everybody's putting forward a solution, but what is the problem that agentic AI solves, and what is a problem that is not meant to be solved by agentic AI? — I think the first problem is mindset. If we are going to play in this AI sandpit, we cannot take the old world with us. It's not designed for autonomy; it just breaks. So I think the mindset is really important — perhaps that is all it is. If we understand that we are turning the world upside down, that we are letting go of logic and replacing it with autonomy, and that in order to control that autonomy we're going to have boundaries — if that sinks in deeply, I think we have solved 75% of our problems, because once you understand that, you get everything. You get the design language: you have to design your goal, authority, policy, scope, risk, semantics and evidence. It also gives you the governance language.
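The design language just named — goal, authority, policy, scope, risk, semantics, evidence — could be captured as a minimal boundary specification that every agent must declare before it runs. This is a hedged sketch of the idea; the class, field names, and example values are illustrative assumptions, not an API from any framework or from the speaker's own tooling:

```python
from dataclasses import dataclass, fields


@dataclass
class AgentBoundary:
    """One field per boundary dimension discussed above; all names illustrative."""
    goal: str       # what the agent exists to achieve
    authority: str  # what it may decide on its own
    policy: str     # the rules it must operate within
    scope: str      # the data and systems it may touch
    risk: str       # the risk tier of its decisions
    semantics: str  # the ontology / terms it must use
    evidence: str   # what it must log to justify its actions


def is_complete(boundary: AgentBoundary) -> bool:
    """Governance check: every dimension must be declared, none left blank."""
    return all(getattr(boundary, f.name).strip() for f in fields(boundary))


# A hypothetical travel agent with all seven dimensions filled in.
travel_agent = AgentBoundary(
    goal="book policy-compliant travel",
    authority="approve bookings under the per-trip limit",
    policy="corporate travel policy v3",
    scope="travel vendor APIs only",
    risk="low",
    semantics="corporate travel ontology",
    evidence="itinerary plus approval trail",
)
assert is_complete(travel_agent)
```

The same structure doubles as the governance language: an auditor can ask of any agent "show me your boundary" and reject one whose dimensions are missing.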
The goals also provide governance. Authority also provides governance — your delegations of authority, for example. In my perhaps very simple mind, just this very concept

Segment 10 (45:00 - 50:00)

leads to so many answers to some of these really difficult questions. I mean, brittle systems: if you're trying to build an AI agent and you make it deterministic, it will always be brittle. It's not meant to be deterministic. Generative AI is not an if-then-else construct. It's not meant to do that; it's not meant to be used that way. It will always play up; it will always be brittle. — Yeah. While these answers will evolve with time, you said it right: it's not an if-else problem, which unfortunately is how most teams these days start thinking about it. That's one correction we can make. — One thing to keep in mind is that we are of course at the very beginning, and when we are thinking about agentic systems, the frontier models — ChatGPT and Gemini and all of those — are very expensive to use. So I think one of the big changes in all this is going to be not only using the large language models — they're all generative AI — but also using the small language models. When you're building these multi-agent systems — for example, this 33-agent system that we designed for a customer — we don't want every one of the 33 agents to go hit a frontier model and start incurring a lot of cost. So it's a question of understanding the system, understanding how to design it, what components to use, and what kind of language model to use in which circumstance. And I think that if we have access to very low-cost smaller language models, the use cases for AI are going to get bigger again. So, again, I can't think of any areas where I think we should immediately, absolutely not go. — Yeah, that's good advice to keep in mind. I would tell people who are listening to this: we are relying so much on GenAI and LLMs because Llama came out, then so many models came out, and the whole world started drifting towards GenAI. Is that the only piece of research we can leverage?
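The cost point made just above — not letting all 33 agents hit a frontier model — amounts to model routing: send routine calls to a small language model and reserve the expensive model for critical or genuinely hard tasks. A minimal sketch, where the model names, the complexity score, and the threshold are purely illustrative assumptions:

```python
def pick_model(task_complexity: float, decision_is_critical: bool) -> str:
    """Route a task to a model tier; names and threshold are hypothetical."""
    if decision_is_critical or task_complexity > 0.8:
        return "frontier-llm"  # expensive, reserved for the few hard calls
    return "small-lm"          # cheap, handles the bulk of agent traffic


# In a many-agent system, most routine calls should land on the small model.
calls = [(0.2, False), (0.5, False), (0.9, False), (0.3, True)]
routed = [pick_model(complexity, critical) for complexity, critical in calls]
assert routed == ["small-lm", "small-lm", "frontier-llm", "frontier-llm"]
```

In a real system the complexity signal would come from the task itself (input size, tool use, risk tier from the boundary), but the design choice is the same: the routing decision is part of the architecture, not left to each agent.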
— I'm the wrong person to ask, because I think I have an LLM addiction myself, but I use it differently. I don't use the LLM to write emails and things like that. I write my own emails; I'm happy doing that. I have my own style: I speak with an accent, I write with an accent, and I'm quite happy with that. To me, what the LLMs are really useful for — the way I use them — is that they're really good at helping you connect dots. I create a lot of thought leadership, for example; I have a lot of ideas. So I create a model, and the best way to validate the model is to take it into the LLM and say: this is what I think — first, validate it, what do you think? And quite often they come back and tell you it's really good, of course. But then you can do some deep research: okay, what exists in other parts of the world, how does this relate to other thought leadership I have developed, and so on. And when you start attacking something from a number of different angles, it's almost like you see a new ontology. You have an idea, and you see the ontology of your idea expanding. I think that's really cool, and that is when you can really create. I'm a consultant — this is what I do for a living — and the fact that I have a tool like that at my disposal is increasing my productivity drastically. If you're using it simply to answer questions, and if you're surrendering your own thinking — your own free will, your own critical thinking — that's clearly not good. That is really misusing the LLM. You still have to think. — Yeah. And with that, we are at the end of our conversation; we have taken a long time. I think one clear takeaway from my end is that we definitely need evolutionary systems, evolving systems, emergent behaviors, and new guardrails and new architectural practices.
And one thing which I've started stating to various teams even more now is that architecture is needed much more now than ever before. — Absolutely. As a matter of fact — I'm an enterprise architect — I'm telling people that no architect is optional anymore. I think the real difference is this: technology architects we always need, solution architects everyone has, and data architects almost everyone has

Segment 11 (50:00 - 51:00)

them. It's really the business architects and the enterprise architects who are going to become essential, because the enterprise architect is looking after the entire ecosystem we are talking about, and the business architect is going to be this critical new interface between the business and your agentic system — all of those translation layers. So I think it's a great time to be an architect. It's a fantastic occupation going into AI; it's going to be very exciting. — Absolutely, completely agree. In fact, the responsibility increases even more: first, architects need to do the right job, and then bring our engineers — senior and junior — to the level where they start connecting the systems and thinking in systems: where my work, or my vibe coding, can impact the organization, the user, or the function, and where it couples more than it should. All these things are important to keep in mind. With that said, it's a good segue to end this. Any last thought, Jesper? — I'm just going to say: the boundary. Remember the boundary. That's all. — The boundary. And before that, create that boundary, so that you stay in control of things and don't let things control you. All right. With that said, thank you for joining us, Jesper. It was a really good discussion in terms of thinking. We may not have answered all the existing problems today, but we have certainly thrown out ideas that deserve to be heard and that need further work. So thanks a lot for joining us today. — Thank you so much for having me. Have a great day.
