Video: Identity, AI, and the SaaS Supply Chain: The Next Security Reckoning | Duration: 1:15:12 | Summary: Identity, AI, and the SaaS Supply Chain: The Next Security Reckoning | Chapters: SaaS Security Landscape (0:28), SaaS Security Challenges (2:17), Expert Panel Introduction (4:41), SaaS Integration Challenges (6:50), SaaS Security Challenges (9:11), OAuth Access Risks (12:58), AI Security Challenges (18:29), Agent Security Challenges (22:26), Evolving Identity Management (27:05), Shared Security Responsibility (33:38), SaaS Security Integration (41:14)
Transcript for "Identity, AI, and the SaaS Supply Chain: The Next Security Reckoning":
Welcome to the Obsidian Security webinar on Identity, AI, and the SaaS Supply Chain: The Next Security Reckoning. I first want to start by talking about what we're seeing in the industry. There's something interesting about the number 30%. If we look at the latest breach investigations report from Verizon, 30% of breaches worldwide actually start with third-party compromises. This is up 2x from the year prior. At the same time, we're seeing research from IBM showing that when there's an AI-related breach, it too starts from the SaaS supply chain about 30% of the time. Different studies, different angles, same conclusion: the fastest way into modern environments, including AI, is still through the integrations we have trusted. If we take a step back and look at the SaaS security problem, over the last decade we've built systems that help humans connect to a cloud-first world. Applications moved to servers we don't control. Users moved outside the corporate perimeter. And to secure that access, we invested heavily. We put in cloud identity providers. We've got device agents. We've got proxies that route traffic. We've established protocols like RBAC and MFA to prove that the right user has zero trust access into SaaS. All of it, however, was designed around a single assumption: that it was humans on a device accessing these applications. Then, as we've moved on, we've seen the business intelligence layer move atop the SaaS applications. What was once a system of record turned into intelligence, and then we've built connections across applications, between other SaaS or between other AIs. In these workflows, we've actually lost a lot of the protections that we spent the last decade building. In this new business layer, data moves directly. There's no human, there's no device, and there's no browser. That means that we can't proxy that traffic. 
It also means that we can't entirely verify the identity on the other side. You can't put MFA between two systems, and you can't put an agent on a SaaS application. So that gap is starting to show up against where we've invested over time. If we think about traditional security, the current market for network security is over $200 billion. Meanwhile, we've established these new networks across SaaS and across AI, but we haven't brought those same protections into those systems. We've lost segmentation. We've lost observability. We've lost detection and response. And that's created a controls gap that modern businesses need to solve. Solving that gap doesn't come from stretching old controls. The issue here isn't a tooling problem; we have a visibility problem. And so that requires a new model. We want a model that looks at all the individual SaaS and AI systems for what they are: each is also an identity provider. So we need to normalize these non-standard systems, across different commercial versions, to understand what is a human, what is a SaaS identity (often in the form of a non-human identity), and what is AI. Second, we need network effects. We need to understand how different systems and different users behave with these integrations. The last thing we want to do is interrupt business productivity or business agility, so we need to understand what real access risk looks like, or in the case of a threat, what abnormal behavior looks like. And last, to do that we need context. We can't just know there's anomalous behavior without understanding how these systems are configured, what the sensitivity of the data they're accessing is, and what proper behavior looks like. That's why we at Obsidian Security believed in a different model, where SaaS security doesn't just think about human to SaaS, but also extends to SaaS to SaaS and AI to SaaS. 
This makes them first-class citizens as we think about single-actor movement across all of these systems, and builds a system where we restore what we've lost in security architecture back into modern integrations that come over the public internet. And so with that, I'm fortunate to be joined by two experts who prove that this isn't just theoretical, and that these gaps aren't just showing up in research papers or future-state diagrams. They're coming up in real environments. We've seen recent breaches like SalesLoft Drift; Drift was an AI application that was connected into SaaS. Or breaches like Gainsight, which also reused some of the stolen integrations from SalesLoft to then go into other applications, harm the ecosystem, and put SaaS at risk. We're joined today by both Ravi and Jason, who will talk about how they're building controls for a world where data moves between systems and between AIs, not just between humans. And these systems are already in production. So instead of asking what might happen next, let's hear from them. Jason, Ravi, thank you for joining us. Ravi, do you mind introducing yourself? Thank you, Sean, for having me. This is Ravi here. I work for S&P Global, where I lead identity and access management. I've been in the identity space for almost sixteen years now, and I look after everything related to customer identity and internal identity. The problem that Sean is describing is a real problem that we have, and it's going to grow as we move forward. We can discuss more during Q&A; I'll hand it over to Jason for an introduction. Thanks Ravi, and thanks Sean for having me as well. I'm Jason Poppe, the director of security architecture here at Workday, focusing on our enterprise infrastructure and application security space in terms of SaaS, AI, and other related areas. Like Ravi, I've been in the business a while. 
I started in infrastructure engineering in the late nineties at Microsoft and have traveled through a number of enterprise environments, working both infrastructure and security roles. And to reiterate Ravi's point, this is a scenario where the explosion of attack surface in the SaaS space and in the AI space is really driving this to the forefront. From our perspective at Workday, we recognize this is a growing area of exposure and a growing need for us to mature security capabilities, so it's a heavy focus for us moving forward. Cool. Thank you for joining us, guys. What we've prepared here today are a few questions to discuss and think about how to assess this risk. What's working, what's not working? What were the moments when you realized that you needed to add to your programs and your controls environment? And so with that, let's kick it off. I think the first question that's top of mind here is: what was the experience or the signal that pushed your organization to think beyond just user connectivity into SaaS and to think about the greater SaaS integration supply chain as well? Ravi, do you mind answering first? Yeah. So when we started looking at the SaaS problem initially, if you look back five to ten years, SaaS started growing heavily in every organization. From the stats I have seen, every organization has at least 80 to 100 SaaS instances with different vendors. And we used to have network-level controls for all the internal applications. With SaaS, you don't have any network-level control anymore. Any SaaS instance can be accessed from anywhere, and a lot of the controls tied to internal applications and the network are completely gone. Then we started adding human-level controls on top of SaaS: does the user have a username and password to enter it? And we started hardening it: okay, they need to have multifactor authentication. 
They need to have device trust and whatnot. Then we started seeing additional problems: okay, who is connecting to the SaaS? There are other SaaS instances that connect to the SaaS, with a bunch of IP addresses coming from different areas. So SaaS started interacting with SaaS, with a lot of data exchange between HR systems, identity systems, sales systems, and whatnot. It started exploding from that point. And we started thinking, okay, we need to find a system that consolidates all these things and brings them together into a central place, where the security teams can start consuming the data and start hardening all these new integrations that are coming on board. So that's how we started looking into SaaS. Very interesting. And actually, Ravi, you talk about SaaS to SaaS, but I also know that the org is making a lot of progress in AI to SaaS. Do you see this as a natural extension, or do you look at it as a different problem? I think it's going to get much harder with AI, because with AI you have things running much faster. It becomes a speed problem as well. Before, with SaaS to SaaS, some of the systems might be running once an hour or once a day. With AI, we don't know what type of queries are going to come. Sometimes it could be hallucinations coming in and executing on top of your SaaS, and we don't know how the pattern detection is going to work. We need to put pattern detection in place, we need to add just-in-time provisioning, and whatnot. A bunch of other new things are going to come with AI. And it's also hard to distinguish between human and AI, because it may behave like a human. So that's the additional problem we need to tackle with AI on to SaaS. Very interesting. Jason, can you share your experience? What was the inflection point when the organization started to think more about the broader connections into SaaS? 
Yeah, I'll build on Ravi's point. I think we all know and have seen the evolution of computing from networks to domains, to cloud, to applications, and this sort of meta explosion of SaaS and now AI services. To me, SaaS has been an area for probably the last five to ten years where it sat in the gray area between infrastructure security, endpoint security, system security, application architecture, and even code-level application security. It's been an area where, and I think Ravi, you illustrated this, we were aware of the credentials, we were aware of the use patterns, but perhaps hadn't really integrated them into the full model with the same level of diligence that we applied from a network security perspective, as you touched on earlier, Sean. To be honest with you, my personal watershed moment was the Midnight Blizzard revelation in early 2024. As somebody who worked at Microsoft for many years back in the day, I had the realization that even though our security program was on solid ground, we had an exposure point we didn't fully understand in terms of the utilization of SaaS credentials on the back end of the platform. That led us to the structural realization that we have other SaaS integrations and utilization patterns that we do not have enough visibility into. So for me personally, that was my realization. Speaking from a Workday perspective, it's always been a focus, being a SaaS company. So maybe a little bit easier here at Workday, which maybe is cheating. 
But ultimately, for me personally, as a security architect and as a security leader, that was my key moment, and one that really helped me understand, to your point, Sean, that we need to elevate this and look at these non-human identities and these integrations at the first-class citizen level, as we do with human identities, because the risk is significant. Very helpful. And I think you give us a good launching point, right? You both started describing credentials or backdoor access into certain SaaS systems, and they often appear as OAuth-based tokens. I think there's a lot of early and emerging understanding of what security looks like for these types of systems. So what risks do you see in a world where we exchange OAuth tokens? And what's the security model that's most appropriate for helping the business use them correctly and responsibly as they attach all these systems to SaaS? Ravi, do you have an initial point of view? Yeah, sure. Coming to the SaaS integrations, you touched on OAuth. That's a critical point, because a lot of these integrations go through OAuth. The requirement starts with someone saying, okay, these are all the permissions needed. Sometimes they're overly permissive, and we go ahead and grant them anyway. We have to go back and continuously review those permissions, which is one thing a lot of companies keep missing. The other thing is that the access we give is persistent. We often ignore the permissions we grant, and they continue to live for years and years. And they don't have human-level controls. For humans, we have a termination process that removes the access. If they move within the organization, we go and recheck the permissions; we do access recertifications for the human accounts. 
A lot of these things are missing for OAuth-based tokens. And we can't put MFA-level controls or device-trust-level controls on them, because the access is happening from some other third-party SaaS or some other system that we don't control. The only limited set of controls we can put in are: can we put IP-based whitelisting on where the request is coming from, and can we continuously monitor what activity is happening? If there is a behavior change, for example an all-of-a-sudden change in the IP range it's coming from, it has to be automatically detected so we can find the behavior change and go back and fix it, or find out from the vendor what's happening with those integrations. That's the critical piece: we need to continuously monitor and reevaluate all those things. And looking at traditional IAM systems, I don't think they fit well. If you take some systems from five years back, they may not fit OAuth and API keys and all of those. So we need to rebuild our systems, or recheck whether modern systems handle these types of integrations. Ravi, you bring up a good point, right? So maybe some systems aren't designed for OAuth, but even earlier, third-party APIs hitting third-party APIs didn't give you a lot of granularity either. I think it goes to your point: you need a different security model to also accommodate two third-party systems, your Microsoft Excel talking to another system. Those aren't APIs that you control, however necessary they are. Or bringing in a low-code, no-code SaaS-managed agentic platform that will eventually want to touch data in SaaS introduces a split in what you can do from first-party versus third-party integrations. 
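The monitoring Ravi describes, flagging when an OAuth token suddenly starts being used from an unfamiliar IP range, can be sketched as a simple baseline check. Everything below is illustrative, not any vendor's API: the token names, the networks, and the event shape are made up.

```python
from ipaddress import ip_address, ip_network

# Hypothetical baseline: the networks each OAuth integration has
# historically been used from (e.g. the vendor's published egress ranges).
BASELINE = {
    "crm-sync-token": [ip_network("203.0.113.0/24")],
    "hr-export-token": [ip_network("198.51.100.0/24")],
}

def flag_anomalous_calls(events, baseline=BASELINE):
    """Return events whose source IP falls outside the token's baseline."""
    anomalies = []
    for event in events:
        networks = baseline.get(event["token"], [])
        addr = ip_address(event["source_ip"])
        # Unknown tokens, or known tokens used from a new range, get flagged.
        if not any(addr in net for net in networks):
            anomalies.append(event)
    return anomalies

events = [
    {"token": "crm-sync-token", "source_ip": "203.0.113.7"},  # within baseline
    {"token": "crm-sync-token", "source_ip": "192.0.2.15"},   # new range
]
print(flag_anomalous_calls(events))  # only the second event is flagged
```

In practice the baseline would be learned from activity logs rather than hand-written, and a flagged event would open an investigation with the vendor rather than auto-revoke, per Ravi's point.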
Jason, to you then: OAuth, and I think Ravi helped illuminate a distinction between security controls and a first-party versus third-party gap. When you think about OAuth access, what risks are you thinking about, and what's the security model you've envisioned to address them? Yeah, I think it's a good question, and I'll build on Ravi's point, because ultimately the foundation sits with the identity structure itself. I think we can all generally agree on that. And I hear your point, Ravi, and I agree on IGA, but I think fundamentally identity is probably a little further ahead in these areas, because we simply understand the actual identity a little better, and to your point, we can operationalize that. As we extend that risk posture, we start to think: do we understand the identity? Yes. Then we start to think, can I identify the ways in which that identity is actively being used within the constraints of the platform I'm looking at? So the context of the platform utilization becomes really fundamental. I don't always necessarily know all the ways a given SaaS vendor may implement authentication on the back end, or may have application integrations with orphaned OAuth tokens. So there's a visibility piece, understanding what's happened to those OAuth tokens after they've been provisioned into the environment, that becomes fundamentally important. I think Ravi touched on that with his comment about continuous monitoring. Step one for us is understanding the identity structure. Step two is: do we understand the posture of the application structure, and can we assess the critical controls that need to be implemented to control the use of that identity to the extent possible? The third is: can we continuously monitor that posture and understand whether it has drifted over time? Do we need to make changes back? 
And the fourth one for us is becoming increasingly important: how do I add context to detections of anomalous behaviors? Frequently, detections come from a suite of advanced table-join structures in something like Splunk. But the context of the application behavior and the application controls, and the visual way we string that together between the knowledge base and the activity elements, becomes something that actively helps our incident responders and even our insider threat teams, because we can give them additional context on utilization patterns that, in a lot of respects, were invisible to them before as they worked with the architects designing the security elements. So that structure for us encompasses the foundation, the posture, the monitoring, and of course the detection capability. The combination of those things improves our efficiency across the board. Helpful. And Jason, you did a good job of describing context, right? It can be a heavy word, but in the world you've laid out, understanding whether access is appropriate also means understanding how the application was configured to permit access, whether there are exceptions, and of what type. So when you look at access and try to determine whether it was safe within guardrails or anomalous, you need all of that together. And that's where indexed and joined tables make it very hard to figure all that out at a single point in time. Cool. Double-clicking a little bit then: context is an interesting topic to stay on. Jason, when your security teams are making either a risk decision, do we permit this integration, do we de-scope it, or a response decision, what types of context are you and your team looking for to understand the safety of SaaS integrations? Yeah, I suppose to a certain extent the answer depends. 
It depends on the team. Is it the architecture team analyzing a particular configuration, the engineering team handling a particular implementation, or the response team trying to understand a utilization pattern they're seeing? So to me, the context is to a large extent the sum of all of those parts. My CISO was famous for saying, we can do anything we want, but we can't do everything. And when I look at context, I think about the fact that we need to do so many things that we can't literally do everything, and build on those ideas. The context to me is: I have my set of security standards, maybe I'm using the NIST identify, protect, detect, respond structure, or I'm looking at it from more of an old-school OSI view, so I understand the connectivity, the authentication structures, and the authorization model. Now I'm moving into understanding the interplay of that within the application architecture. How is the account being used? Is there a disabled app with an active auth token that I need to take action on? Things of that nature. And the last part again is on detection. The context to me is: if my detection engineers see an alert for something, they can use the platform to understand the behavior they're seeing for that identity in that application. Or there's a set of TTPs for a given integration that may exist in another integration on my platform, so I can start to contextualize: hey, I may have additional risk over here that just hasn't been identified yet. Or for a specific identity I see is an issue, I can see the utilization patterns across a number of different application integrations for that identity, and I can start to look at the IP address structures and realize it's being utilized from structures I haven't approved, and so forth. 
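One of the posture checks Jason calls out, a disabled app with a still-active auth token, reduces to a join between the app inventory and the token inventory. The inventories below are hypothetical stand-ins for whatever a SaaS admin API actually returns:

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    enabled: bool

@dataclass
class Token:
    app: str      # the app this grant belongs to
    active: bool  # token has not been revoked or expired

def orphaned_tokens(apps, tokens):
    """Tokens still active for apps that are disabled or no longer listed."""
    live_apps = {a.name for a in apps if a.enabled}
    return [t for t in tokens if t.active and t.app not in live_apps]

apps = [App("drift-connector", enabled=False), App("payroll-sync", enabled=True)]
tokens = [Token("drift-connector", active=True), Token("payroll-sync", active=True)]
print([t.app for t in orphaned_tokens(apps, tokens)])  # grants to revoke
```

The same join pattern generalizes to the other drift checks in Jason's list: any mismatch between what the posture inventory says should exist and what the identity inventory says does exist is a candidate finding.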
So I think those pieces, the identity structure, the posture, and the detection capabilities, come together so that my organization can build context based on the role. It's not just a one-stop thing; it depends on the use case in my organization. It's not just my architects; it's my incident responders and my insider threat teams tracing things. So the context is a little bit contextual, to double up on the word, but the platform is what brings it together, and then we can go back and forth: when we design a platform, we can create an alert and work with that team so they can understand it a little better. Interesting. Ravi, I think S&P has done a tremendous job of enabling the use of new technologies very quickly, and that's allowed it to grow beyond its heritage. As a result, with AI adoption, and S&P being a highly regulated and important institution in the financial markets, you too had to be an early adopter of AI security. What were the gaps you saw in your program, across your tool sets, that you had to reconcile to help the company embrace and responsibly use AI, given that no doubt you've been at the forefront of security investments up to this point? Yeah, so going back to the context Jason mentioned, I think that's really important, and it's going to become even more important with AI. For anything we're doing, what's the context related to the usage? So one thing we want to continuously do is bring in as much data as we can, and give that visibility to security teams, so that they can easily understand what's happening at a given point in time. 
For example, if you have an HR system, an identity system, and a finance system, three different systems working independently, and you ask the security team to find out what's really happening, they need to jump between a bunch of systems; it's time consuming, and the correlation becomes very difficult. They don't understand the context of which system is being accessed at which point in time, or where someone is moving from one to another. So bringing everything into one single place and giving that visibility is one thing we're trying to improve for every security person. On top of that: how can we establish the context of the human, the agents, or any other type of integration we have? For example, if they go into the HR system and then the finance system and do some activity, we should be able to track each and every point, each and every thing they have touched. And the other thing we looked at is that a similar approach applies to AI agents as well. If an AI agent is designed to access the payroll system, it only has to access the payroll system. Similarly, depending on the context of what it is applicable for, it should not be doing more than what it's supposed to do. So that's where we're trying to bring everything into a centralized place and add that context for our security teams. And if there is some change in the behavior, alert the systems so that they can go and check what's really happening, whether it's for AI or for humans. But for AI, we need to be doing it much faster than for humans. That's the main difference, I would say. Yeah, I think you raise a good point, Ravi, because agents, even if they're deterministic, might not be a single workflow. They might make hops across systems, right? 
And so that's where, to your point, absent context, how do you know that those hops are safe, or that the functions on that data are safe? If I can ask you to take it one step further: you've designed strong identity programs, and you're already showing fluency with the non-human problem. You're bringing in agents. What do you think about the risks of agent to agent? I think that's a whole different set of topics, right? Maybe describe to us how you all are thinking about that or preparing for it. Is that a technology or a pattern that's in play today, or is there a little more work before you get there? Yeah, I think the agent-to-agent problem comes when, for example, agent A is designed to do only specific tasks and doesn't have permission to do task number two, let's say, and it goes and talks to a different agent to get it done. That's where the impersonation problem comes in. Does it have permission to go and talk to the other agent to execute a different task? I think a lot more work has to be done in this area: how do we control this impersonation problem of one agent talking to another agent and using its permissions to execute other things? It becomes more and more complex as the number of agents grows and each talks to the others: which agent can talk to which other agent, and what permissions does it have that it can hand over? We may need to think of delegated-level access with agents as well. So: okay, I'm delegating you this access to go and do some payroll-related work, you can go and do it, rather than having an agent hand the task over to a new agent. So we need to think about delegated agent-level access in the future. That's a good point, especially because all those agents, when they're integrated into SaaS, whether an agent or an AI platform, were integrated with very specific tasks in mind, right? Or the underlying grant, right? 
The admin had a certain level of permission, and that allows for some lateral movement there too. So I think that's a really interesting point: integrations are multiplicative; they're not just two spokes at any given time, agent, SaaS, or otherwise. You both have a very strong history of designing identity programs, and we've talked about the morphing of OAuth tokens and AI agents. Jason, for you, have you seen the definition of identity change over time? That's a good question. For myself, honestly, I think it's more about the scope and focus of the way identity is used. Obviously, as we've moved into modern authentication protocols like OAuth, the structural implications of how it's used and some of those complexities make it a little harder to trace, and also introduce some new ways for those identities to be misused, through token theft and other mechanisms. I come at it a little more from the need for, and awareness of, security teams being plugged into the different ways identity can be used. Sometimes we've looked at identity as just a credential, right? In certain cases, I have this human identity that's doing a thing, and I have good tracing. Now that identity, when applied to a SaaS and third-party structure or used in AI, takes on more of an implication from a security perspective. It is a credential, but there's runtime activity around that credential that is greater than the credential itself. And that brings us back to the concept of context. Understanding the context in which that credential is operating becomes more of the definition than just the identity itself. We have to look at the context, because the identity itself may look fine, right? The credential's permissions may be structured correctly, but the way it's being used, or the methodology it's using, may not be fine. 
I think security has to really extend itself to think about those cases in a broader sense, especially with non-human identities. And as Ravi was pointing out, when we get into agent-to-agent structures and the complexity there, whether it's a protective scenario, a detective scenario, or even a triage scenario: how do I trace those sessions across agents in the event of an anomalous activity I need to understand? Jason, within how you've described it, there are many ways to connect into a SaaS application, right? It's not just OAuth credentials. How do modern teams reconcile all the different ways you can connect into a SaaS application? Is there a blueprint, or some pattern of success you've seen, as this represents a new class of interaction with SaaS applications themselves? Yeah, I mean, I think the blueprint for us is about establishing the guardrails that are appropriate for the use case. We can obviously look at a case that says, I want to use a cleartext username and password for this simple integration, and we can point out why, what the specific risks are in that environment, and what the specific mitigation paths would need to be. Then we move, usually, to more of an OAuth structure. Again, we talk about these are the risks in this use case, and these are the mitigation structures. So I think it's about architectural patterns and consistency. I tend to approach this as: the better we as an architectural organization can work with our dev teams on the cases we have, whether it's agentic interactions, a SaaS integration, or third-party app structures, the better off we are. We try to work from a place of establishing a key set of security principles, requirements, and patterns, and we try to enable our dev teams with a set of functional guardrails and give them options. 
The more we manage it, the more we believe we present a structure where they can operate within the secure guardrails we've established, and operate more rapidly. When there are new opportunities and new patterns, we try to bring those back into our architectural model. The reality is that dev teams are always going to have new ways of building things, and we're covering those now with AI, with MCP interactions and those types of scenarios. We look at it as: the better and faster we can build key patterns and manage them, the easier it is to translate that from a protective implementation into a detective and response structure, because we know how it should be working, and we can guide our teams when it isn't working the way we think it should. Cool. Ravi, when you opened, you shared a little about your remit and programs. Identity isn't just about access; there's also authorization, governance, and lifecycle management. You've been leading a large identity program for S&P. How have you seen your program, your definition, your systems change to accommodate this changing definition of identity? Yeah, I think when identity started, it was just a username and password, as I said before, and then it evolved as a lot of requirements came in for different kinds of access. For example, system-to-system interactions started coming into play. That's when APIs came in, SOAP-based and REST-based APIs. After that, we started looking at API-based permissions and said, okay, we need to add scopes: what kind of permissions are we giving for these APIs? And we started introducing more concepts; OpenID came into the picture, and OAuth came into the picture for authorization. So the core concept is always the same, but so much complexity got added after that around the different workflows we need to build between systems and how we make the processes much faster. 
So our process is always evolving around what new technology is coming into play and what new types of machine identities are appearing. Now the new thing is: how do we make sure we manage all the non-human identities, and continue to manage them? Some of the basics still apply. We still need to govern, we still need to follow the least-privilege model, even for non-human identities. We need to continuously monitor and verify what activities are being performed by non-human identities. But we need to do it at a much faster pace. The scale has to increase, and the detection has to be much faster than before. Yeah, that's a very good point. And I have a question for you, Ravi. SaaS integrations, I think, have crept up on many organizations, right? And one of the opening comments was that each SaaS is also an IdP. So you typically have a delta between each SaaS and the IdP. If everyone thinks the other team has it covered, then no one has it covered. So maybe help walk me, or the audience, through: how do you assign ownership over the integration problem? Does it sit in identity? Does it sit in application security? Is it joint ownership? How do you find yourselves working well with other leaders in a greater cybersecurity org? I think it's a shared responsibility. I don't think it should be sitting with just one particular team. And if we think of the whole lifecycle of a SaaS, responsibility has to start with the procurement team, to make sure that when they're bringing in any new SaaS tool, it follows the set of controls that need to be followed. After that, it's the SaaS owner's responsibility to make sure all the controls that are set by the security team are being followed and the configurations are secure in the SaaS that they're operating.
And the third thing is that it's security's responsibility to continuously monitor what is happening in each of these applications. And if there is drift, make sure they go back and talk to the application owner and make sure they fix the things they need to. So I think it's a continuous interaction and monitoring, and it's a shared responsibility for sure. Yeah, thank you. Jason, I'm curious, how does Workday create a culture where enterprise security and the app owners also pride themselves on joint responsibility and safe usage of SaaS applications and third-party apps? Yeah, I would build on Ravi's point. I agree. I think it's a shared responsibility, and it really becomes an element of the culture. When we have these conversations with our leadership team, it starts from our CEO down to our president, our CIO, our CISO. We have those shared conversations: we know we need to promote a secure environment; it's part of our culture, it's part of our product, it's what we do. And from a security perspective, I think the balance point for us is, we won't say we're discovering new areas. We know these things are coming in, and it's our job on the security side, and Ravi illustrated this really well, to ensure the teams understand the guardrails within which we need to operate securely. Right? And we hand those guardrails to teams and give them opportunities to develop securely. They then need to be accountable for ensuring that they're implementing those choices. I find, at least for my team, it helps the more we approach this in an integrated manner, and it maybe seems simple to say, but that means engaging in the conversation like, hey, are you guys doing okay? We run weekly office hours where people can show up and just ask questions. So we try to add the personal element to how we handle these conversations, just to move past the static, hey, here's a set of requirements, go do it, let me know when you're done.
And I think it's just building that culture. As a security team, it's important and incumbent upon us to drive those conversations from an education perspective and not from the usual security-hammer perspective that so many of us used for so long. And then, to Ravi's point, it's flowing that through the stack, enabling and supporting that process all the way through, and not punishing people when things go wrong, because it happens. It may seem oversimplified, but I do think that's really important in this environment. And I think you both did an excellent job describing how a group of teams assembles to make sure there's safe adoption of SaaS. So, to your point, Ravi, there's due diligence, which brings in TPRM and others that look at controls. Then there's identity governance, there's application security, there's ongoing use and continuous visibility. You and Jason have helped establish those programs. Now SaaS security is becoming a higher priority for many, right? We're seeing more SaaS security leads, et cetera. How do we hold SaaS and AI vendors to a standard where they can participate in the shared responsibility of SaaS security as well? I think the SaaS vendors have done a phenomenal job in giving controls for user access and securing the infrastructure that they build on. What more can they give back to the security community to make sure that everyone has safe access to SaaS, and between SaaS, or AI to SaaS? Ravi? Yeah, I would start with intel sharing between the SaaS vendors. I think they need to come and talk to each other and say, okay, these are the things that we're observing, because it could be helpful for other SaaS vendors as well. So that's the first thing.
And maybe use AI to build a common shared-signal framework that each SaaS vendor can share with others, so that if somebody is seeing some anomalous behavior somewhere, they can immediately report it to all the SaaS partners they have, and everybody can benefit from the shared signaling. Once we have that ecosystem, I think the whole SaaS community, the whole InfoSec community, can come together and share a common framework where they can use everybody's signals and benefit from each other. Cool. And Jason, you represent a company that is emblematic of what it means to deliver a secure service. So maybe you could share, for others here who represent vendors, what are the steps or capabilities Workday has been making available, or certain security practices, to make sure it is the safest platform in the world for its customers to use? Yeah, I'm going to go back to my previous comment. I think the key is to approach your environment from the perspective of secure by design. Maybe it seems glib, but I think always treating security as a functional part of the element that you're building creates an environment where shared responsibility becomes shared action. Everybody in the organization needs to understand that you have to approach the thing you're working on with security in mind, and to gather and share that information; there are a number of ways to do that. But driving home the point that we are all collectively accountable for that, and that we should all hold ourselves to the standard of producing the most secure solution that we can within the abilities that we have, makes security a default action, not a review at the end of the process. Security has to be part of how we think about designing our environment.
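The shared-signal framework Ravi proposes has a real-world analogue in Security Event Tokens (RFC 8417) as used by the OpenID Shared Signals Framework. A rough sketch of how one vendor might package an observed anomaly for its peers; the issuer, subject, event-type URI, and field values below are illustrative, not taken from the spec's registered event types:

```python
import json
import time

def build_security_event(issuer, subject, event_type, details):
    # One vendor packages an observed anomaly so peer vendors can act on it.
    # Structure loosely follows a SET payload: issuer, issued-at, events map.
    return {
        "iss": issuer,                      # who observed the anomaly
        "iat": int(time.time()),            # when it was reported
        "events": {event_type: {"subject": subject, **details}},
    }

evt = build_security_event(
    issuer="https://saas-a.example.com",
    subject="svc-account-42",
    event_type="https://example.com/events/anomalous-token-use",
    details={"reason": "token replayed from new ASN"},
)
print(json.dumps(evt, indent=2))
```

In a production exchange the payload would be signed as a JWT and pushed to subscribed receivers, so that one vendor's detection can trigger revocation or step-up checks across the ecosystem, which is the shared-benefit loop described above.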
I know for us, it's thinking about: we protect customer data, we run a service, and our job is to provide a secure service that customers can trust. You have to place that at the forefront and look at it in everyday design, in everyday email and web usage on your own workstation. It's just part of the culture and how we operate. And I have long believed that if everybody in a given organization just steps in a little bit and puts a secure-by-design hat on when approaching their work, it up-levels the entire environment, makes those conversations easier, and makes the work seem not like a tax you're paying on the work you have to do, but part of the opportunity to produce a better product. Good point. So, to both of your points, you've given us some interesting insight on how the community is prioritizing SaaS security, and from different vantage points, to make sure that everyone is participating in it. When a company reaches a similar inflection point as either one of you about building a SaaS security program, how do you think about best of breed versus best of suite when it comes to SaaS security? Maybe Jason, you want to answer first? Yeah, I think that's always a good question, best of breed versus best of suite. I come from the philosophy that foundational security and base security hygiene are probably two of the most important things you can focus on. As you reach a point where you've covered the core of your environment, from infrastructure, from an identity perspective as Ravi was discussing, from internet access, once you've covered those areas, you have to take the same approach from a SaaS security perspective: understanding your full-scale, end-to-end ecosystem, really looking at the top risks you see in your specific ecosystem, and identifying the most optimized way to solve the highest number of problems.
Just mathematically, we apply the whole 80/20 rule. If you can provide the right platform model and you build there, then from that point, where you have base coverage in a foundational layer, it becomes more effective to think about a best-of-breed solution for a particular gap specific to your environment, and you can evaluate that not just on what its capabilities are, but on how well it integrates with the broader system. Because, I'll go back to this: it's not just the app itself or the capability itself, it's the ability to integrate the data into the core platform, which ultimately gives everybody the context and visibility they need. So you don't want to push it as a one-off. It's building on a solid platform, using best-of-breed options to fill in any gaps in the environment, and doing so in a way that ensures they're operationalized on day one, not bolted on as an independent piece. I'm a believer in architecture and integration. At our level, I go back to this: the more we integrate, the better the visibility and the context are, and the more informed the teams in those spaces can be. Great. Ravi, what about you? When you think about SaaS security, how should individuals think about best of breed versus best of suite? I agree with what Jason has said. In specific areas, I would look at best of breed, and then, when it comes to integrating all of those into a centralized place for information security, they need to come together as best of suite, bring in all the signals that are needed, and give our security teams whatever ammunition they need to go and execute their job. And, borrowing from the 80/20 rule, you want to optimize for the risks the business needs to face, or to embrace new technology. And there's always tail risk, right? You can't solve for everything.
Maybe a little bit of wisdom or foresight for the audience: as you think about emerging risks that you are prioritizing against, technology, et cetera, Ravi, what are two or three concepts that are very much in focus for you to get in front of this year? So, I'm mostly thinking about non-human identities, the ones we discussed today, and how we make sure we have a proper governance structure for all of them. At a minimum, we need a way to go and look at what activities happened for all the non-human identities, and that includes agents. So that's one of the primary things we're looking at. The other thing is AI identity governance. I think regulations are going to catch up on this one. Similar to human governance, regulators might come and ask us questions: what activities has which agent performed? Did they have those privileges? Are we governing them properly? Are we decommissioning the unused agents, similar to what we discussed with OAuth? These are going to become part of our life going forward. I'm thinking we'll have more agents running around than humans in every company, so the governance of those agents is going to become very critical. That's another thing we're thinking about. Good points. Before I ask you, Jason: Ravi, the focus on NHIs, do you think that's where the greatest growth in technology comes from, or do you see a different reason to focus on it? I think it's mostly technology, but also the risk that comes with the number of identities that are going to exist. Today we have 80 non-human identities for every one human identity, and I think that number is going to go to a thousand or more within a couple of years. So how can we manage that number of non-human identities with limited human resources? We need to build AI around it.
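The governance loop Ravi describes, continuously flagging non-human identities with no recent activity for decommissioning, can be sketched as a simple staleness sweep over an identity inventory. The inventory, identity names, and 90-day threshold below are all hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: any non-human identity idle longer than this window
# is flagged for review and decommissioning.
STALE_AFTER = timedelta(days=90)

# Fixed "now" so the example is deterministic; in practice use datetime.now(timezone.utc).
now = datetime(2025, 6, 1, tzinfo=timezone.utc)

# Illustrative inventory, e.g. pulled from an IdP or secrets manager.
inventory = [
    {"id": "agent-reporting", "last_used": now - timedelta(days=3)},
    {"id": "svc-legacy-sync", "last_used": now - timedelta(days=200)},
    {"id": "agent-triage", "last_used": now - timedelta(days=120)},
]

def stale_identities(identities, as_of, ttl=STALE_AFTER):
    # Flag every non-human identity with no activity inside the TTL window.
    return [i["id"] for i in identities if as_of - i["last_used"] > ttl]

flagged = stale_identities(inventory, now)
assert flagged == ["svc-legacy-sync", "agent-triage"]
```

At a ratio of hundreds or thousands of NHIs per human, this kind of sweep has to run automatically and feed a decommissioning workflow; manually reviewing each identity does not scale, which is the point about limited human resources above.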
So it's going to become complex pretty soon, and if we don't start early, it's going to be too late. Good point. And so then for you, Jason, as you forecast ahead, what are the security priorities, or the risks, that the business is getting ready to mitigate so that you can embrace the use of new technology? Yeah, I think for us it's very similar to what Ravi said: the explosion of non-human identity structures, as compared to the single human identity, or two human identities if you've got good segmentation. I think it's that area in particular. And if I pull back just a little bit from that platform view, which I agree with, the concept for us becomes: are we doing enough to stay ahead of the ongoing developments? Because we're development shops: the ongoing developments on the code side, your Claudes and your Cursors, into your code management areas, low-code scenarios, through to the other AI patterns, whether it's AI integrations, AI services, third-party AI elements, or MCP server utilization. So we think of it in terms of using our core foundations and building on them, but making sure we've got enough of a framework for our developers to operate securely in the areas they're exploring. Because we can't, as a security organization, get in front of all the development, but we can keep laying those tracks and giving people the guardrails as they're exploring their own options. So we're trying to build ways for them to explore internally, in secure sandboxes and things of that nature. But I think those AI development models and the SaaS security models are going to continue to be at the forefront, and, along with the ongoing cloud development underneath from some of the platform pieces, those are probably our top three. Great, thank you. Sean Broch again with Obsidian Security. Thank you so much for joining us in the conversation and attending the webinar to hear from S&P and from Workday.
We're happy to continue the conversation, chat, and answer any questions. And know that Obsidian Security is here to spend more time with you, share threat briefings on what we're seeing across the industry, or discuss our suite of solutions and capabilities that address SaaS supply chain threats pre-breach, during breach, and post-breach, from mitigation to response. Thank you so much for joining us today, and we look forward to continuing the conversation with you soon.