# Turning Governance Into the “Yes” Guys

## Metadata

- **Channel:** Domino Data Lab
- **YouTube:** https://www.youtube.com/watch?v=kBX8gDI9ULo
- **Date:** 01.04.2026
- **Duration:** 30:36
- **Views:** 22

## Description

When Cindy Tu (https://www.linkedin.com/in/xin-cindy-tu/) first stepped onto a conference stage, it wasn’t part of a long-term plan. It was a turning point. A single speaking invitation shifted her role from quietly reviewing AI systems to actively shaping how governance is practiced across financial services. With a background spanning IT, data, and audit, Cindy brought a rare systems-level view to the table.


Now a rising voice in enterprise AI risk, she’s influencing how institutions think about oversight, how governance frameworks evolve, and why people are at the heart of successful implementation. Her perspective is informed not only by technical expertise, but by lived experience. 


Cindy brings:

- How to frame governance as an enabler, not a gatekeeper
- Why third-party risk keeps rising as GenAI adoption accelerates
- How to flip governance from the “no” guys to the “yes” guys

## Contents

### [0:00](https://www.youtube.com/watch?v=kBX8gDI9ULo) Segment 1 (00:00 - 05:00)

That's the area that I worry about, that keeps me up at night: it's really third-party risk management, because it was not designed for the GenAI era, and now we really need to revamp it. Otherwise, we're going to get in big trouble. That's Cindy Tu, director of IT and data audit. Cindy has had a long career as an auditor in finance, and she has a unique vantage point on the industry's evolution of risk management and data governance. A vantage point that has evolved alongside her finding her voice on the main stage. She joined me to discuss the misunderstood world of governance and the importance of standing up and speaking your mind. I'm Thomas Bean, and this is Data Science Leaders. In her early days of finance audits, Cindy saw her primary role as focused on data integrity concerns under an IT risk framework. And though governance was often seen as something of a negative word, Cindy was quick to remind me: I mean, governance is still a negative word nowadays, because when people talk about governance, right? They're thinking, oh no, these are the guys who are going to slow us down. These are the no-no guys. But Cindy doesn't think that's actually the case. Instead, she sees governance as an undervalued asset. How do I say it? Maybe: innovate more responsibly, right? Not the no-no guys. Like, how can we turn that no into a yes, or a maybe? Now, that take might seem like an unpopular opinion, but Cindy isn't shy about sharing her opinions, at least not anymore. So I was born in China, right? I grew up in China, got my degree, and then started working here. But I think that Asian culture, the upbringing, is still very rooted in my DNA. So, my mom, absolutely, you're right, she has a strong character. She worked her entire life, and she had a very, very specific philosophy about work. I'll tell you what it sounds like in Mandarin, right?
She would say shao shuo duo zuo, and that translates to: don't speak up, just work hard. So, you know, everyone is informed by their life experience, and that's what her life experience told her: just work hard, don't draw attention to yourself. That's how she lived her life and worked her entire career. But as I've found out over the past two decades I've worked, that's not how it works here, right? So it's very interesting to have that upbringing but then find out how you actually live in American culture. The work experience here was a big culture shock, to be honest. As an expat who has lived in the United States for some time, I can tell you how big this American culture shock really is. And so the first half of Cindy's career was defined by her drive to succeed by showing up, by listening, and then learning. She's attending every conference she can get to, thinking all along: among the voices, the leaders who dominate these sessions, there are really not many people who look like me. I'm a first-generation immigrant, right? English is not my native language, and I never really pictured, oh, that could be me. That's how it was back in the day. I was just there attending the conference, listening to what they had to share. So Cindy goes to countless conferences, and one day she hears a speaker who particularly resonates with her. She reaches out to say, "Hey, you know, I really enjoyed your session." And the speaker? She actually responded and invited me to speak at a local event, and I remember it was a local event for women in data. I was surprised, and after I thought about it for a little bit: well, why not? It was very scary, but I thought to myself, well, why not? Let's just try it out, see how it goes.
It was amazing how many people approached me after the session, and a lot of them made comments that my story actually inspired them a lot, which is very, very surprising given that it was the very first session I did. What did you realize about yourself? You were in a zone; your story made an impact, your speech made an impact. What I realized is that I really, really, deep down, enjoyed the whole experience, right? Before, during, and after. It was not the fact that I shared my

### [5:00](https://www.youtube.com/watch?v=kBX8gDI9ULo&t=300s) Segment 2 (05:00 - 10:00)

perspective; it was that I was able to engage and connect with the audience. And it looks like as you were going through this realization on stage, it also changed the way you saw your place in the audit, risk, and AI governance conversation. Can you tell us a bit more? Did it feel like shifting from reviewing the system to helping shape it, or taking a more active role? I think it's not the fact that I'm just a good speaker, right? I really found out I have a unique angle to share, because as I told you earlier in the conversation, I'm oftentimes the only odd person in the room at these data conferences and AI conferences. And that means I can actually see the big-picture view because of my unique background: the intersection between IT, cyber, data, and AI risk. Because, as you know, AI risk is very much a top-level, enterprise-level risk. It's not just related to IT, data, third party, or model, right? It's everything. So it requires a lot of different stakeholders and parties to collaborate, to sit on the steering committee or whatever organization is providing the oversight for AI risk, to be able to ensure that you consider all the aspects of AI risk. So that was the start of this whole journey of making sure that I can actually engage with the right leaders at the enterprise, especially in financial services. It's a surprisingly small world; everyone kind of knows everyone, right? Data leaders, AI leaders, audit leaders, and risk professionals. We're actually a very tight-knit community. We all share perspectives about what works and what doesn't, and through that sharing you understand the leaders at a personal level and build connections that last a lifetime as well.
But at the same time, you took a way more active role in shaping the AI governance frameworks of the financial institutions you were working at. So when did that switch happen, in terms of: I'm not just evaluating, I'm now influencing? You spoke about the audience and your speaking activities, but in the context of the organization you were working at, how did this shift happen, in terms of: hey, I'm taking the wheel, and now we're going to drive and define, because I have this unique perspective? That also happened slowly and gradually, right? Because of my role, and because there are not really a lot of risk professionals and audit professionals who have that big-picture view across different risk types, and at the time that was IT, cyber, data, and AI, we were invited in and sat on the GenAI council. That was our first iteration of a GenAI governance framework. And how it came about is really through our engagement and the effective challenge that we were performing at the time on all the GenAI use cases that we were finding, honestly from inception to proof of concept to wider rollout and implementation. And because of our input, we were able to shape the second iteration of the GenAI governance framework: defining what the acceptable AI use cases are, defining what risk assessment should be involved, and defining what level of approval would be needed based on the GenAI use case's classification, right? Through the triaging process, deciding which path it should take, and which management-level committee's approval would be required before it can go to proof of concept or wider rollout and implementation.
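The triaging process just described, classifying a GenAI use case and routing it to the approval level required before proof of concept or wider rollout, can be sketched as a simple decision table. The tiers, criteria, and committee names below are illustrative assumptions for the sketch, not any institution's actual framework.

```python
# Hypothetical sketch of a GenAI use-case triage process: classify the use
# case from a few coarse risk attributes, then look up the approval body and
# the furthest stage it may proceed to. All names and thresholds are invented
# for illustration.

def classify_use_case(uses_customer_data: bool, customer_facing: bool,
                      third_party_model: bool) -> str:
    """Assign a risk tier by counting how many risk attributes apply."""
    score = sum([uses_customer_data, customer_facing, third_party_model])
    if score >= 2:
        return "high"
    if score == 1:
        return "medium"
    return "low"

# Required approval body and furthest allowed stage per tier (hypothetical).
APPROVAL_PATH = {
    "high":   {"approver": "enterprise AI steering committee", "max_stage": "proof of concept"},
    "medium": {"approver": "line-of-business risk committee",  "max_stage": "limited rollout"},
    "low":    {"approver": "team lead sign-off",               "max_stage": "wider rollout"},
}

def triage(uses_customer_data: bool, customer_facing: bool,
           third_party_model: bool) -> dict:
    """Return the tier plus its approval path for a proposed use case."""
    tier = classify_use_case(uses_customer_data, customer_facing, third_party_model)
    return {"tier": tier, **APPROVAL_PATH[tier]}

# Example: a customer-facing chatbot on customer data, using a vendor model.
print(triage(uses_customer_data=True, customer_facing=True, third_party_model=True))
```

In practice the classification criteria would be far richer (data sensitivity, model explainability, regulatory scope), but the shape is the same: classification feeds triage, and triage determines which committee must approve before the next stage.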
All of that is really through gradual effort. I became that opinion leader, that influencer, and you get invited to sit at the table, and you have a voice, and then you move from being a participant to more of an influencer and stakeholder at that point. So it also happened slowly and gradually, just like trust. It doesn't build overnight. It's through gradual influencing, connection, and relationship building. I love what you just mentioned about trust. Can you give us an example of the type of problem, of risk, that only your unique perspective could highlight? Because that's also part of this moment. What I was able to bring to the table was because I have an extensive background in IT, cyber, data, and AI. And also because I have experience designing and implementing IT and data

### [10:00](https://www.youtube.com/watch?v=kBX8gDI9ULo&t=600s) Segment 3 (10:00 - 15:00)

audit frameworks at various financial institutions. I also specialize in data governance. So I understand that implementing a governance framework is not just technology and process; it's also people, right? With any change management, it's people, technology, and process. And oftentimes the most important part, and the most difficult part, guess what? It's people. Right? Because it involves more of a culture shift, and oftentimes organizational changes as well. Which is why it's hard: our human nature, the first instinct, is to gravitate to comfort, to routine, and to resist change. So it requires a lot of people, a lot of effort, a lot of influencing. My perspective is that I have experience implementing the framework, I know how hard it is, and I understand it's not an overnight effort. It requires a lot of people, a lot of buy-in, a lot of influencing. It's not just that the technology and the process have to be right; the people have to buy into it. That's number one. And number two, because I have experience in all these different risks, I understand that if there's a specific risk we're considering for AI use cases, and there are multiple control breakdowns related to the same area, those risks tend to compound, which makes it much more likely that a specific attack, or a specific thing that could go wrong, actually happens, because of two control breakdowns in the same area. And because we oftentimes do a lot of trend analysis and root cause analysis on the issues that we see across the board, not in a single line of business but across the board, looking at the quarter-over-quarter trend in the issues in a specific area, we understand, right?
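Cindy's point about compounding control breakdowns can be made concrete with a back-of-the-envelope defense-in-depth calculation. The probabilities below are invented for illustration, and the two controls are assumed to fail independently:

```python
# Illustration of compounding control breakdowns: two layered controls in the
# same area, each stopping a given attack with some probability. The numbers
# are made up; independence between the controls is assumed.

def residual_risk(p_control_a_works: float, p_control_b_works: float) -> float:
    """Probability an attack gets past BOTH controls."""
    return (1 - p_control_a_works) * (1 - p_control_b_works)

healthy     = residual_risk(0.9, 0.9)  # both controls operating: ~1% get through
one_broken  = residual_risk(0.0, 0.9)  # one control broken: ~10% get through
both_broken = residual_risk(0.0, 0.0)  # both broken in the same area: everything gets through

print(healthy, one_broken, both_broken)
```

The jump from roughly 1% to 100% residual exposure is why a second breakdown in the same area is far worse than two breakdowns in unrelated areas, and why audit's cross-portfolio trend analysis looks for exactly that clustering.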
If you get a collection of five different issues, that speaks to more than a single control breakdown, especially if it's across different lines of business. It tells us that maybe we didn't fix the right root cause in the first place. And if it's a widespread breakdown in this very specific control in the same area, then it tells us maybe the root cause is in governance, in training, in guidance, right? So there's a lot of this big-picture view that audit has the luxury of seeing from 3,000 feet up, not just looking at one issue in isolation, but looking across the board. Why there's a lot we can offer by having a seat at the table is because of the experience I have had in this field specifically. Yeah, the risks, just like the solutions you were talking about, are compounding as well. You open up to a little bit of risk that might be acceptable, but taken at the group level, you might have something much, much more significant. Yeah. You mentioned earlier that governance is still being figured out across the industry, and that was especially true as you were seeing GenAI arrive. What were you starting to see in these early conversations that others were not talking about? So I definitely see, because I connect regularly with chief AI officers, chief data officers, and CISOs and CIOs as well, that these AI governance frameworks are much more mature than before. And the rate of innovation is definitely speeding up. But on the flip side, I'm also seeing a lot of the stories they share about what could go wrong, or what went wrong already, and that is actually pretty scary, right? So on one hand, you are seeing the AI governance frameworks maturing.
And on the other side, because we are doing more risk management, we're finding out more about the risk exposure now. That in turn is making us more cautious when we are rolling out GenAI use cases: to do more risk assessment, or to have a way to feed these GenAI use cases back into the risk assessment process. Because once the GenAI model has been implemented in production for a while, the performance can drift if you're not being careful, if you don't have continuous monitoring in place. So that's the general trend. I do think the governance

### [15:00](https://www.youtube.com/watch?v=kBX8gDI9ULo&t=900s) Segment 4 (15:00 - 20:00)

framework is maturing, but we're finding out more about the risk exposure now. And the other thing I want to point to on the horizon is that I'm seeing a lot of gaps in third-party risk management frameworks right now, because they were obviously not designed for the GenAI era we're in. So I do see a lot of exposure in the third-party space. One, with a third party, you have less control over what they do with your data. Even if they say, well, your data is protected, it's not going to get cross-trained on, how do you know, right? Until you can validate that, until you can verify that. Especially with those bigger players, we always had trouble validating these third parties' environments anyway, with big players like AWS and so on. Because they're too big for any single client; no single client is big enough for them, so they don't want to open their books for your audit and all that. But then, deep down, how do you know they're actually doing the right thing, actually doing what they say they're doing? And as you see, there are a lot of GenAI models out there, like ChatGPT and Perplexity and Anthropic's. They've already used all the publicly available data; there's no more data for them to train on. They're actually using AI to create data for their training. So what they're after next is your enterprise's confidential and proprietary data. You have to safeguard it, because that's what gives you the competitive advantage, and you also have to safeguard it to comply with relevant laws and regs. It's not a nice-to-have, it's a necessity, especially for a financial services company. So that's the area I worry about, that keeps me up at night: third-party risk management, because it was not designed for the GenAI era, and now we really need to revamp it. Otherwise, we're going to get in big trouble.
In such a fast-evolving domain, I mean, I know the regulations are moving slowly, but the technology is moving super quickly, and business use cases are being invented right in front of us. As a thought leader, but also as an executive, how do you stay current, or actually, how do you keep a forward-looking perspective in a domain that's evolving so quickly? What I do regularly is connect with industry leaders, because that way you actually hear what's happening on the ground. What's reported in the news is not the reality on the floor, right? You want to see the behind-the-scenes story: hey, this banking company has rolled out an AI agent. Was it actually an AI agent? Was it simply just automation? Through your connections with these industry leaders, you are able to see what's truly happening behind the scenes, what works, what doesn't, what they have tried out with GenAI use cases, and what could go wrong. That's actually the most valuable thing I've gained from being an industry leader: the ability to connect with people at different levels, hear their stories, and share what works. And more importantly, these are not just funny stories to tell. You actually bring it back to your daily work and say: because I heard this story, what should we do to better manage risk while we are seeing these use cases, so that we can get in front of it, and we're not going to get into trouble with the regulator or violate any laws and regs? Because we're doing this for a reason: to bring it back and apply it to your work, so that you can become a better professional and actually look out for the company's best interest as well. We spoke about how you came to become a speaker, and also about the early days of AI and GenAI governance. Let's speak about today.
You spoke a lot about people being a big part of the solution. So what has changed in terms of how AI is governed, or how risk is assessed for AI in enterprises today? What have you seen? I do see a lot of different flavors of AI governance frameworks out there, and that is for a reason. Because if you think about the AI organization, say the chief AI officer's or the CDO's organization, organizationally, at every company it sits at a different part of the business. Some report to the CFO, some report to the CIO, some report to the CEO, right? And that is due to the organization's culture, their maturity, their level of scrutiny from

### [20:00](https://www.youtube.com/watch?v=kBX8gDI9ULo&t=1200s) Segment 5 (20:00 - 25:00)

a regulation perspective, and also their lines of business, what kind of products they're selling, what agency will be governing them. All of those factors matter, right? So you are seeing a lot of different flavors of AI governance frameworks out there, and that's for a reason. And I see a big potential in the AI governance framework field because it's not a one-size-fits-all solution. Even if, let's say, we announce in 2026 that there's a gold standard out there for an AI governance framework, companies are not going to be able to adopt it right away. And the reason is that change management is just a different story at every company. And the governance framework, how it works: is it a centralized model? Is it a decentralized model? Who has a say in it? What level of approval, what risk assessment is needed? Every company's risk appetite framework is also very different. What is acceptable risk for company A may not be for company B, and there's a reason for that. So even if there's a standard, let's say, today, January 2026, we have a gold standard, companies are not going to be able to adopt it, because you have to figure out what that looks like for your company. So I'm seeing a lot of different flavors out there, which is okay, because every company has a different answer when it comes to the right balance between innovation and risk management. And that is creating a lot of opportunity for risk professionals like me. If you think about audit, the risk management skills are very transferable to the first and second lines as well. It's that you are able to translate that risk requirement, what is acceptable risk, into a framework that is designed for that company specifically. So yeah, I'm seeing a lot of different flavors out there, but they're all maturing into more rigor, more of a framework, a structure, which is a very good trend for the overall industry.
We're all trying to figure it out, and we're doing that experimentation because no one has the right answer, so we have to do more test-and-learn, so that as an industry we can figure out all the risk exposures out there, what could go wrong, and how we can safeguard against it. And through these exchanges of insight, sharing with industry leaders, we're going to be able to get to the final answer of what the minimum level of rigor should look like. We can come up with what the minimum standard is, but to figure out your individual AI governance framework, your company needs risk professionals to tailor and design it for you. So that's what I'm seeing in the industry out there. Do you think we'll ever have a gold standard, or do we even need one? Because I'm listening to you, and I'm thinking a gold standard might actually be a good step forward for the companies that may be a little bit behind, but for the ones that are pushing ahead, it might actually hold them back. So there's a tension there. Yeah, so I think there will be a baseline requirement. Let's say maybe NIST will release something for the financial services industry specifically. If you think about the category one banks, they're probably already far ahead. But for mid-size banks, maybe there are a few areas they still need to patch up. That's the baseline requirement that you have to be compliant with, and then there's a different level of rigor that comes with being a category one, category two, or category three bank. So I do think an industry baseline will be released, but to your point, if you're far ahead, maybe you need to do more, because the risk is much greater: the volume of transactions going through is much higher, and the amount of customer data you hold is much greater. So you are held to a very different bar.
And that's what they're doing, and that's their job, right? To make sure that these are tailored in different ways, so that there are different bars they have to meet. You were just talking about the importance of figuring it out together as an industry. So what does that community look like to you today, from your vantage point? And what kinds of collaboration are happening now that maybe could not happen without a community? Yeah, absolutely. So there are a lot of communities. I've attended, for example, the EDM Council and the ABA, the American Bankers Association. They all have data governance working groups and AI governance working groups. And as a matter of fact, the EDM Council is working on, or has already released, the next iteration of the CDMC framework, which includes AI capabilities as well,

### [25:00](https://www.youtube.com/watch?v=kBX8gDI9ULo&t=1500s) Segment 6 (25:00 - 30:00)

right? So these wouldn't happen without industry professionals coming together and sharing what the bar for data risk management in the cloud environment should be in this AI era. There are a lot of people trying to get into the AI field, because they're seeing this as an exponentially exploding area right now. It's very hot. So there are a lot of different professionals trying to get in, in financial services particularly as well. But honestly, I see a stronger community coming together. There are a lot of people, a lot of players in there, but there's never a dull moment, because people are sharing stories about their work experience and their different flavors of AI governance frameworks, so that we can say, "Hey, maybe we should try that, right?" It's all about experimentation, it's all about sharing, and that's what's making this community stronger. What's your perspective on the kind of tooling that can be provided to these governance or risk teams? And are agents going to help in that domain? What do you expect to happen on this front? I think we have to have trust in AI agents first before we can actually apply them in the AI governance world. And the tooling itself, honestly, that is also a question for yourself: what tools do you have already? How easy is it to embed the new tooling into your overall ecosystem? And what kind of features are you looking for that your current tools don't have? That's a very individual answer as well. Yeah. Where do you think we're headed with this notion of AI governance? And when I say AI governance, I want to go back to the fact that you said it needs to be all-encompassing risk management. Where do you think we're headed? What excites you? What's the next frontier? What is possible in that domain that will unlock even more value for financial institutions, or really for any enterprise?
Yeah, absolutely. So I would say, generally, I'm very excited for AI innovation. I attended the Nvidia GTC conference in DC just a few months back. What they have shown with Omniverse, the virtual reality work, how they're collaborating with different industries: for example, with Johnson & Johnson, they're creating virtual reality for surgical use. They are able to simulate the surgery environment so that doctors can actually rehearse a surgery before they even perform it on a real patient. That in itself excites me and scares me at the same time. One, think about the endless possibilities of what they could do, with the AI factory, with weather simulations, with surgery, with everyday life. They're going to cover all aspects of our lives, which is very exciting to me because of the endless possibilities it brings. The other part, being a risk professional, that scares me is: how do you know it actually simulates the real world? Because for surgical use particularly, the consequences could be very dire. It could be a life-and-death situation. It's not like the AI use cases we're managing, where the stakes are not that high. When you are performing surgery on a real patient, the stakes are very high. So, deep down, I worry about how we can validate that it actually simulates reality, with all the physical aspects and all the elements of the environment. How do we continue to ensure it still simulates the real-world experience? That's what scares me, but it also creates job security and opportunity for risk professionals, so I'm not complaining. But yeah, if you attended and heard Jensen Huang's keynote, it's very exciting what the future could bring with AI agents, with AI factories, with the endless possibilities AI could bring. But the risk management framework has to catch up as well.
Or evolve as fast as AI innovation. I totally hope for that. That's what we're all working towards. The way that Cindy found herself on stage is the way that all AI leaders need to be thinking of progress. Had she never believed that the spotlight was for her, we would be deprived of her unique and holistic approach to governance. This is especially true for those who see the stage but don't see themselves on it. The thing I particularly appreciated hearing was how every part of Cindy's story, on both a personal and a professional level, actually brings together this unique perspective that she uses to assess risk and identify and enable value in enterprise AI. If you enjoyed this episode, please stop

### [30:00](https://www.youtube.com/watch?v=kBX8gDI9ULo&t=1800s) Segment 7 (30:00 - 30:00)

to leave us a rating and, if you have time, a review. You are the reason we produce this show. This is Data Science Leaders, where we learn from the real experiences of those building and governing the intelligent systems shaping our world. A show by AI leaders, for AI leaders. This show is brought to you by Domino Data Lab, trusted by the world's most advanced enterprises to operationalize AI responsibly and at scale. I'm Thomas Bean. Thanks for joining us.

---
*Source: https://ekstraktznaniy.ru/video/46011*