Episode Transcript
Robert Wood (00:00.878)
Hello everyone. This is Robert Wood and this is the Security Program Transformation podcast. I am joined today by my good friend Gunnar Peterson, and I'm going to turn it over to him to let Gunnar introduce himself so I don't steal all of his thunder. So please take it away, sir.
Gunnar (00:20.205)
Thanks, Robert. It's good to be here and good to talk to you today. So for my background, I'm currently CISO at Forter, which is a trust platform for digital commerce. Before that, I was chief security architect at Bank of America. And before that, I did a lot of security consulting on everything from payment systems to the early days of cloud security, web application security. Application security is a lot of the stuff we're going to be talking about today. So really looking forward to it.
Robert Wood (00:48.652)
Yes, sir. And one of the things I love about Gunnar is, so, we intersected years ago at Cigital. So Cigital was, for anyone that doesn't know, one of those, like, OG AppSec firms. And we ended up working together on several big, what we referred to as architecture risk analysis projects, that were basically, like, you went in and just kind of rolled up your sleeves and got nasty with an application, and you figured out all the ins and outs and the data flows
and where it was going to break and fall over and all of that. And I learned a ton from this man and very much look up to him. And so one of the things I found most interesting, and why I was very excited to have this conversation, is that you have worked in very, very large environments and very, very small environments. And I've done the same throughout my own career, and they are very, very different in terms of
org politics and budgeting and how things get prioritized and all the things. So, thinking about application security, which, it's been around for a while now, but interestingly still seems like somewhat of an emerging field in terms of truly grappling with the skills needed to do AppSec right, I wanted to ask, from your perspective:
If somebody was starting out the process of building an AppSec program or an AppSec initiative, what do you think are the highest-ROI, return-on-investment, activities that they should be thinking about first? And thinking about that, contrasting the very, very large environment that probably has a bunch of legacy technology, legacy process, legacy people even, against the emerging product technology company,
which, I would put Forter kind of in that category now, of just a more bleeding-edge, single or small product footprint company.
Gunnar (02:59.041)
Yeah, I think, you know, you and I have both been doing this for a long time, like you said, and I don't know about you, but I suspect that, similar to me, when we started out, we probably thought a lot about technology. We thought about the tools. We thought about the "code doesn't lie" sort of view of the world, which is true. And you need good technology, you need good tools, you certainly need quality code.
But one of the things I appreciated about your work over the years, and that you've shared publicly, is your focus on the human side of cybersecurity as much as any of the technology side. So it's fair to say, I think, my world has evolved from looking really super deep into the technology, and that being the driving force, to, like, how do you scale this in a real live organization? And to your point,
it is a different game if you're talking about two or three developers versus 20 developers versus 200 developers versus 2,000 developers. Those are not the same shops. They could be writing the exact same line of Python right now, but the way that those things get put into a build pipeline, the way that they get deployed, the outputs of those, the ecosystem in which they exist is really totally different. And certainly when it comes to securing them, it's totally different.
Robert Wood (04:06.957)
Well.
Gunnar (04:25.773)
So I think that a healthy way to start is, like, where are you? I think it's important for teams to have, not a skeptical mindset in the sense that they don't want to be skeptical about what can be achieved, because everyone can really improve their application security with a little bit of work. You can make some pretty large improvements in a short amount of time. But also don't assume that this is an easy problem. So first understand,
you know, where you are, and don't assume that it's an unsolvable problem, but don't assume it's the hardest problem. You know, it's not the hardest problem, it's not the easiest problem, but start with understanding where you are. What's your ratio of application security people to developers? That's an interesting question, you know, and back to that two-person team, 20-person team, 200, 2,000, you'll end up with different things. So I think...
The first principles I always like to start with, from a risk standpoint, are coverage and efficacy. What parts of a pipeline, what parts of a dev toolchain are you going to cover, and what are all the places you can play? There's three, four, five different parts of the build pipeline that you could probably think of going after first, from, like, dependency scanning to checking repo permissions to, you know, scanning on the outbound side. So there's a whole
set of things you could do. Which ones are you doing now? And which ones, this is the second question, efficacy, which ones do you think you're actually doing a good job with? And this is where the skepticism is really important. You're running a tool, great. Is somebody consuming the tool output? Is somebody making decisions differently than if the tool wasn't running at all? You know, is everything a high, is everything a low, is everything a critical? Like, these are signs of a system that's probably not working.
And so I think the basic answer to your question is, it starts with coverage and efficacy. My feeling is that most teams are served better proceeding from there and thinking almost in Lego-block-by-Lego-block terms, where you don't do five things halfway, you know, pick one thing, maybe two things at most, and try to feel like you have that problem nailed down tight.
Gunnar (06:50.165)
Maybe it's container scanning. Maybe it's a golden build. Maybe it's dependency scanning. I think you can make different arguments depending on where you are at in your organization and your business model and stuff like that. I would tend to say that scanning on the outbound side is my default starting point, in terms of making sure I have a really good understanding of the output of the build pipeline
and the health and quality of those products and the findings that we're getting out of those products. There are a hundred ways it can go wrong left of that. But the product of all of this is something that's going to be manifested on the right-hand side of that build pipeline. And so I think starting from that side and then finding what the next part left of that would be, which doesn't mean left-adjacent. It could mean all the way back at the beginning
of dependency scanning or something. But, like, doing that next thing really well, one by one. Rather, I've seen teams fail by doing a pretty good job at five things instead of a really good job at one or two.
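Gunnar's coverage-and-efficacy inventory can be sketched in a few lines of Python. Everything here is hypothetical: the stage names and scores would come from reviewing your own pipeline. The point is the Lego-block idea, find the one covered control with the weakest efficacy and nail it down before adding another.

```python
# A toy inventory of build-pipeline security controls. Stage names and
# scores are invented for illustration. "covered" = are we doing it at
# all; "efficacy" = how well (0.0-1.0), judged by whether anyone is
# actually consuming the tool's output and deciding differently.
pipeline = {
    "dependency_scanning": {"covered": True,  "efficacy": 0.4},
    "repo_permissions":    {"covered": True,  "efficacy": 0.8},
    "container_scanning":  {"covered": False, "efficacy": 0.0},
    "outbound_scanning":   {"covered": True,  "efficacy": 0.3},
}

def next_focus(pipeline):
    """Pick the covered control with the weakest efficacy --
    finish nailing that down before starting a new Lego block."""
    covered = {k: v for k, v in pipeline.items() if v["covered"]}
    return min(covered, key=lambda k: covered[k]["efficacy"])

print(next_focus(pipeline))  # outbound_scanning
```

With these made-up numbers, the outbound scan is the next thing to fix, which happens to match Gunnar's default starting point.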
Robert Wood (08:04.086)
Yeah. Now, the reason I love that answer is it really touches on the process dynamics in building an AppSec program. And this was something that I personally fumbled the ball on when I stepped out of Cigital into the startup ecosystem: thinking about coverage.
That was one of the things I over-optimized on, coverage, trying to have too many balls in the air instead of really pinning down a few things and doing them supremely well. And over time, you fumble the ball and you learn and you get a little bit better at your job, and hopefully, as the years go on, you get more gray hair and you're a little bit better with each gray hair that comes.
Gunnar (08:58.937)
Yeah, the one I like on that is: good judgment comes from experience, and experience comes from bad judgment. So you have to have the humility to recognize it. You can only get the good judgment if you recognize, through humility, where you actually had bad judgment in the first place.
Robert Wood (09:07.788)
I love that.
Robert Wood (09:15.916)
Yes, that is very meta. That might be the quote takeaway from the show. Now, if you were in an organization, because, especially in a product organization, you've got engineers doing a lot of the work that ends up resulting in security. It's not necessarily the security team doing these things. It's the security team evangelizing
or supporting or building some sort of scaffolding or tooling that engineers use. But ultimately the engineers end up driving a lot of the progress and the outcomes. Now, let's say you were to measure the overall efficacy of a given activity on a one-to-ten scale, and you were at, say, seven or eight on a given activity. Let's say you've got two things,
DAST and SCA, dynamic scanning and library scanning for anyone who's not tracking. And you were at a seven or eight of efficacy on those two activities. Would you recommend, and of course, like everything in security, it's an economics game on some level, and so you're choosing to invest your time and focus and money, sprint over sprint, quarter over quarter. Would you really try to
optimize those two activities before you take something else on? Or, I guess, when is enough enough from an efficacy standpoint, in your experience?
Gunnar (10:57.133)
Yeah, that's a great question. I think coverage is part of it. And one thing we didn't touch upon is that coverage can be looked at in really two different ways. There's coverage of, like, here are all the things I'm doing as an enterprise. I don't know, let's take Uber. I don't know how many applications run inside of the Uber, Incorporated ecosystem, probably hundreds or thousands.
But there's one that I'm pretty sure needs to work really well every time, which is the rideshare app, and the payments app, right? So knowing what your crown jewels are, the coverage that you might go after could be deeper in your crown jewels, which is different than the rest of your enterprise writ large. So there could be a way to slice that answer a little bit more thinly.
I think when it comes to efficacy, one of the things I like about doing less stuff and doing it better, to put it simply, is it also helps you with developer engagement and working with developers. Because it's the way most of your developers think. Like, why am I doing this thing? Engineers wake up to solve a problem. They don't wake up to partially solve a problem. They wake up to solve a problem. And so, you know, pushing that, all else equal,
and all else is not always equal, pushing that seven, eight, or nine to a nine or a ten, sometimes that can be the best way to get a developer engaged. It also is pushing your advantage, in the sense that you have people who know how to run those tools and who know how to run those processes, whether it's DAST or SCA. All else equal, I'd probably say it's DAST, just because that's gonna be the closest thing to the runtime and the closest thing to what a real attacker is gonna see.
I think the tiebreaker in that argument that I would use is really about exploitability, reachability, the ability of a threat to take action on any flaw that you have. Every piece of software that everybody has ever written, certainly every piece of software I've ever written, has some number of vulnerabilities in it. That by itself isn't interesting; what's interesting is the ones that can be exploited, the ones that are gonna make
Gunnar (13:15.743)
a really, really bad day. I think those are, thankfully, a lot fewer. They're certainly not zero. So finding the place where you can wield the right tool at the right time is where I would go with it.
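The exploitability and reachability tiebreaker Gunnar describes can be illustrated with a toy prioritization function. The findings, field names, and weights below are all invented for the sketch; real reachability and exploit data would come from your own analysis or tooling.

```python
# Hypothetical findings: a high raw severity that nothing can reach
# should rank below a lower-severity flaw with a known, reachable
# exploit -- that's the "bad day" distinction.
findings = [
    {"id": "VULN-1", "severity": 9.1, "reachable": False, "exploit_known": False},
    {"id": "VULN-2", "severity": 6.5, "reachable": True,  "exploit_known": True},
    {"id": "VULN-3", "severity": 7.8, "reachable": True,  "exploit_known": False},
]

def priority(finding):
    """Weight raw severity by reachability and known exploitability.
    The multipliers are arbitrary illustrative choices."""
    score = finding["severity"]
    score *= 1.0 if finding["reachable"] else 0.1
    score *= 2.0 if finding["exploit_known"] else 1.0
    return score

ranked = sorted(findings, key=priority, reverse=True)
print([f["id"] for f in ranked])  # ['VULN-2', 'VULN-3', 'VULN-1']
```

Note how the unreachable 9.1 drops to the bottom: the ranking follows the threat's ability to act, not the scanner's raw number.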
Robert Wood (13:28.908)
Okay. Now, does that answer change at all in a bigger, federated environment? Which, like, Bank of America I would put into that category, any big enterprise that has centralized IT or security resources and developer teams spread across business units. You've got more hierarchy, more bureaucracy just naturally in place. And CMS is very much like this. Like, we sat in the Office of Information Technology,
which had some software development happening, but most of the action happened out in what was referred to as the centers and offices. So healthcare.gov and the Office of Program Integrity and all of that good stuff. And so that was all an out-there problem. And so we had to find these ways of bridging the gap between an activity we wanted to drive home and the way that we engaged developers who were
loosely under our same organizational structure but were really far away. And it's harder to reach farther across an organization, just thinking through red tape. It's like literally swimming through a sea of red tape. And so I'm curious if you would apply the same answer and the same decision process in that bigger, federated environment.
Gunnar (14:55.277)
It's a great question. And I do think an enemy of security is when it can become going through the motions. And so, I don't think this is true of most compliance activities, but there are organizations where certain compliance activities are received
as going-through-the-motions, check-the-box activities. And I think the moment you feel like anything that you as a security engineer or leader are prescribing feels like a check-the-box activity, it probably should be a wake-up call to say, like, I'm probably on the wrong track here. And I think that becomes much more prevalent the larger the organizations become. So in a smaller organization, if you and I decided tomorrow to start a cloud startup,
go and build a product, and someday we say we want to sell this to customers, and then we'd say we have to have a SOC 2. A SOC 2, I don't think, would feel like a check-the-box activity. It would feel like, this is us putting our big-boy pants on now as a company, and we're trying to get something up and running. But you scale out to hundreds of thousands of people, that means hundreds of middle managers, hundreds of regulations, and then there are things that feel like check-the-box activities. And I don't...
Robert Wood (16:19.117)
Right.
Gunnar (16:19.927)
I think security is too important, frankly. Like, information security, cybersecurity is too important to be a check-the-box activity. Any organization has a limited budget for security, a limited number of resources to throw at it. One of our jobs is to make sure, like, every dollar we spend, every hour we invest from our team, is used for the highest and best use that it can be from a risk perspective,
which we should know better than anybody else. And because it's in a regulation, it means we have to pay attention to it and we have to satisfy it. But we also need to look at the overall risk to find out where the lion's share of the investment is going to go, to addressing application security or any other risk. And I think larger organizations make that a little bit more difficult.
The goal I would go for always in a large organization is leverage. And this is something that, I don't know if you've ever seen it, I've never seen it talked about in a security book. You know, the dearly departed Ross Anderson, Matt Bishop, Gary McGraw, wonderful ideas, tons of technical ideas. All the ideas that I built my early and middle parts of my career on came from them.
And leverage was never one of the things that anybody mentioned, right? So, like, how do I put in an X and how do I derive two X or three X out of that resource? And I think that's the friend that you can try to make in a larger organization. Where, in a federated world, maybe it's about getting a super strong identity provider
that is fed via SCIM or some automated provider, that mandates your authentication policies in the exact way that you want them and provides the highest quality identity the rest of the organization can have. There are ways to scale certain things. Not everything in security scales. But start from an x, y axis, where maybe the x-axis going
Gunnar (18:45.355)
left to right is strength. And we're trained as security people just to go stronger, stronger, stronger on that x-axis. That's what we're trained to do. We've got passwords? Well, let's make complex passwords. We've got complex passwords? Well, let's make them MFA. We've got kind of crummy MFA because it's SMS? Let's make it, you know, Google Authenticator, like, phishing resistant. So, like, that's the entire canon of security literature: how to move
Robert Wood (19:05.496)
Phishing resistance and all that.
Gunnar (19:14.773)
left to right on the x-axis. I would argue in any organization, and specifically in large organizations, there's a y-axis, and that y-axis is scale. How many people, applications, assets in your organization are using the highest, rightmost part of that x-axis that I just figuratively drew? And probably it's not 100%. So then the question is, at the same time I'm moving left to right
on the x-axis of strength, how am I moving from bottom to top in scale? And those are different things. There are fewer things, frankly, that scale. A lot of security stuff just doesn't scale very well. So I think the discipline you can have as a senior security person is: take a hard look at your risk, take a hard look at your capabilities, like I said at the beginning, and then say, which of these actually scales well? And that's probably the place you should start thinking about, I'm going to get the most
juice out of this lemon by squeezing and getting as close to 100% scale as I possibly can on three capabilities across the organization. I think that'll work in large organizations, that'll work in medium-sized organizations. Smaller organizations, probably not as big of a problem, to your point, but even medium-sized organizations have this problem.
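The strength-versus-scale framing lends itself to a small sketch. The capability names and scores below are made up for illustration; the idea is to rank capabilities by how much payoff there is in pushing a strong control toward 100% adoption across the organization.

```python
# "strength" = how far right a control sits on the hardening x-axis,
# "scale" = what fraction of the org actually uses it (the y-axis).
# All values here are hypothetical.
capabilities = {
    "phishing_resistant_mfa": {"strength": 0.9, "scale": 0.35},
    "sms_mfa":                {"strength": 0.4, "scale": 0.95},
    "golden_images":          {"strength": 0.7, "scale": 0.60},
}

def best_leverage(caps):
    """Rank by headroom: strong controls with low adoption offer the
    most leverage, scored as strength * (1 - scale)."""
    return sorted(
        caps,
        key=lambda k: caps[k]["strength"] * (1 - caps[k]["scale"]),
        reverse=True,
    )

print(best_leverage(capabilities))
# ['phishing_resistant_mfa', 'golden_images', 'sms_mfa']
```

With these invented numbers, scaling out phishing-resistant MFA beats further hardening the already-ubiquitous SMS MFA, which is the leverage argument in miniature.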
Robert Wood (20:33.58)
Yeah, big time, big time. And not to brag, but I just got delivered some fresh-baked cookies and I'm very, very excited.
Gunnar (20:41.12)
I, yeah, geez.
Must be nice.
Robert Wood (20:45.068)
That's an unexpected treat. So, the point about leverage, and I agree with you, I have not read that in anything. The only person I can think of who is this intentional around getting things done is probably Phil Venables, the blogs that he put out and such.
Gunnar (21:10.282)
Absolutely.
Robert Wood (21:13.122)
He's obviously very steeped in organizational business dynamics as they relate to security. So I don't know personally if he has ever talked about this, but it wouldn't surprise me. He's one of the few people who I could see talking about that y-axis and how to take advantage of it. And so this was something that we wrestled with a lot at CMS when
Executive Order 14028 came out. And that was basically the big zero trust executive order. So President Biden puts out this executive order saying everyone's going to do zero trust, among other things. And so everyone has to figure out how to jump through the zero trust hoops, and to do it quickly, and to have a plan. And so for us at CMS, a very large enterprise, the way we thought about leverage was looking at
shared services. So how many identity providers, for example, how many systems or parts of the organization were using a particular identity platform? How many were in a particular cloud hosting environment? How many were using golden images or EDR or some kind of shared resource where we could push our security configuration into those things? Even if it didn't get us all the way
down the path on the security maturity scale, if we have limited resources, 10 million bucks or whatever, and we put 5 million of it into this one place, we're able to get 80% coverage across the enterprise. That was kind of the way we were breaking that problem down. And that's not an easy problem to solve. It's also something that I think speaks really
directly into the need for having relationships and figuring out how to engage people in things that matter. So the thing I wanted to jump into next was on effectively engaging developers in the security process. I think it's useful maybe to think about not just engaging software engineers or full-stack engineers, but engaging product
Robert Wood (23:37.048)
teams, because you might have infrastructure engineers, you might have DBAs, you might have a product manager who's the one responsible for maintaining a backlog and doing sprint planning and prioritization and all of that stuff.
From your background, have you found any particularly useful approaches or tools or conversations to have with these different individuals or roles, in terms of getting them engaged on security activities?
Gunnar (24:18.317)
Yeah, I mean, I think understanding the culture, culture is a thing, and what works at Netflix may be exactly the right thing for your company, and it might have nothing whatsoever to do with your company. And it's really on you to figure out, you know, not just is this a good idea for some dev team, is this a good idea for my dev team? And probably for most of the people watching this, it's my dev teams,
because you have one team in New Jersey and you have one team in California and you have a team in India and you have a team in Australia. Like, if you really look, you might have three or four teams, a couple of teams spread around the world. So I think figuring out who you are as an organization, who your dev managers are, what are your dev managers incentivizing their teams to do, what are they hearing, you know. So the top-down view is helpful. The bottom-up you have to have, and,
you know, I don't think there is a developer worth their salt that wakes up out of bed and wants to create vulnerabilities. They frequently don't know what AppSec and secure coding practices are, but they don't intentionally want to create them. So you have a knowledge gap, number one. You might have a skills or a tools gap, and frequently
we as security people will start with tools, but it's probably a knowledge and awareness gap more than anything. And, skipping ahead to how we're going to solve this, Robert, once we get them engaged, you know, your best developers, if they're enrolled in the problem of application security, will be better than any tool anyone has ever created at finding and rooting out vulnerabilities you didn't know existed
and finding effective ways to deal with them. So to me, it's a hearts-and-minds campaign for the heart of the engineer. And nobody wants to be the person who built a bridge and the bridge fell down. But I don't think that many average developers fully believe all of the things that come out of security teams' mouths. And the second you flip that bullshit detector,
Gunnar (26:45.453)
you're probably not going to get a second or third chance, which goes back to what I said at the beginning, doing a few things and doing them better. So, like, I would much rather come with three known vulnerabilities than, you know, 23 vulnerabilities where five of them turn out to be true positives, because you'll lose people everywhere along the way. You have to respect that these people are working
many, many hours, and especially if they're getting ready to ship a big release and things like that, that's the time you want to be working with them. That's also the time where you're putting a tax on their time, you know, you're asking them to stay nights and weekends and things like that. Obviously you need to do all of these things, but you have to do it in a way that works in your culture, works in your organization. And you better not be asking people to work nights and weekends on false positives. That is,
Robert Wood (27:41.09)
Yeah.
Gunnar (27:42.081)
that is really not going to be a winning strategy. I always say, like, security teams can win any one battle. There's no world in which security can't have a 1-and-0 record. The question is, like, what happens in the second game, the third game, and the fourth game? You can ram that list of 23 vulns down somebody's throat and make them stay nights and weekends and fix it. And if it comes back with three true positives, or one, or,
you know, heaven help us, zero true positives, then... And also, think about it: if you're rooting for true positives, that's also not good. But, you know, you have to know that these tools are working the way you want them to, that they're designed, that they're tuned, that they're pointed at the right targets, and that the results they're producing are something that is worth your team investing in. And I think when you do that, you get credibility.
And when you don't do that, you lose credibility so fast and it's so hard to get back.
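One hedged way to operationalize that credibility point is to check a tool's historical true-positive rate before pushing its findings onto a dev team. The triage records and the 0.5 threshold below are purely illustrative assumptions; the data would come from your own review of past findings.

```python
# Hypothetical triage history for one scanner: each past finding was
# manually reviewed and marked true or false positive.
triaged = [
    {"id": 1, "true_positive": True},
    {"id": 2, "true_positive": False},
    {"id": 3, "true_positive": True},
    {"id": 4, "true_positive": False},
    {"id": 5, "true_positive": False},
]

def precision(history):
    """Fraction of reported findings that were real (true positives)."""
    true_positives = sum(1 for f in history if f["true_positive"])
    return true_positives / len(history)

# A crude credibility gate: don't ask developers for nights and
# weekends on a tool that's wrong more often than it's right.
if precision(triaged) < 0.5:
    print("tune the tool before escalating")  # fires here: 2/5 = 0.4
```

The gate is deliberately simple; the point is that the check happens before the findings reach the developers, not after the trust is already burned.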
Robert Wood (28:42.712)
To gain back, yes. And the tuning, I think, is a really important part of that, and something that is so often kind of cast aside or overlooked. I wanted to ask about this blurred line that has been happening between cloud security and application security.
At least in my experience, I've observed a lot of uses of the term security engineer, or engineering in a security context. And despite that, I have not come across that many people who really, truly understand the way enterprise software is built. Or maybe not even enterprise software, but the way product
is built and designed and released and operated and monitored, all of that stuff. So we have this rise of DevSecOps, like, DevSecOps as a skill, as a job classification, as a set of tools, whatever it is. Yet I think a lot of that is
really focused on people who understand cloud orchestration and deployment and all of that. So it's almost more CloudFormation, Terraform focused, as opposed to the application tier, even though both are technically, like, as-code in this world that we're moving into. But where do you see, or do you feel like the,
do you see there being a blurry line between cloud security and application security? Or do you feel like the lines are still pretty clear, yet the fact that we've merged them from an architecture standpoint, do you think that's started to confuse things around security engineering and what that means? And I guess,
Robert Wood (31:07.81)
where I'm going with all of this is, you'd mentioned a couple of things in your last response that I really wanted to pull on a little bit. One was the tool tuning. And I think it's really easy for us to start to skim over all of that in this more deterministic world of cloud security, as opposed to application security. And then two is your point about showing up with credibility and
bolstering that and continuing to gain trust and whatnot. And I think, when we're talking to a software engineer and we don't understand code, you're skating on thin ice in that situation. And so I wanted to get your thoughts on that. I know that there is a lot of conjecture and
layers of questions in that, but I want to get your thoughts on all of it.
Gunnar (32:08.301)
Yeah, I mean, so on the first point, is there a blurred line between cloud security and app security? I think, if anything, you undersold it. These are so blurred at this point, I don't even know where and how those structural separations will re-emerge. Like, I think they're gone at this point. They're in people's titles. They're in people's brains. They're in, like, what I think my job is
Robert Wood (32:35.0)
Yeah.
Gunnar (32:38.821)
today. But as a security person who, let's say, has to defend things in the cloud, those distinctions are so arbitrary. Like, it's Terraform and it's infrastructure, or this is an app, or this is a product. They're either assets, or vulnerabilities, or threats, and it's all attack surface. You know, look at
the Capital One breach as an example. You know, that's obviously a very strong security team for a long time. There was a hole, and, you know, I don't have any inside information, just based on the public reports, there was one hole that somebody didn't find. And then there was a blast radius that was due to a handoff between
Robert Wood (33:18.285)
Yeah.
Gunnar (33:38.527)
an identity token and an SSRF. And so, like, was the SSRF an application vulnerability? Sure. Was the identity scheme behind it an infrastructure vulnerability? Sure. And, you know, you get this one-plus-one-equals-three scenario of a massive blast radius very quickly. So I think it's not something that
I see solving itself anymore. I don't think security can really solve that problem. Whether it's a good thing or a bad thing at this point doesn't matter; it's the way the world is. I think our antidote, actually, as security, is to blur the lines ourselves. And the lines that I would like to blur, or try to blur, in my own thinking and teams and tools: I think we should be thinking, 90% of the time, like purple teams. And by which I mean,
as security people, the extra sauce that we bring to the table, the extra value-add: on the blue team side, we should know our assets as well as our developers do. Like, showing up to a developer and you don't know how the code is written is bad. So no not knowing your APIs, no not knowing how your authentication scheme works. Know the blue as well as you can,
80, 90 to 100% as well as the developers, should be the goal. But that's not enough to call yourself a security person. You have to understand the threat side. You have to understand threat intelligence. You have to understand exploitability. You have to understand remote access, those sorts of things, as well as the red teamers do, and the types of tests that you can run. And then the purple team is really, of course, blending those two together. So I think we should...
I don't know what to wish for in that first part of your question, like, AppSec versus ProdSec versus CloudSec. I think that's just happening. We can spend a bunch of time worrying about that, but I think we should be blending our own worlds as fast as that world is blending on the assets-we-have-to-defend side. We should be blending the red world and the blue world, and thinking and operating as much like a purple team as much of the time as possible. And, you know, to give a concrete example,
Gunnar (36:02.293)
when I think purple team, one of the things I think about that's really important is shortening the mean time to detect and shortening the mean time to remediate. So, like, those things are never zero, obviously, but where the purple team can really, really help is understanding: what are the things that you're monitoring? What are the real threats? How do I detect things faster? How do I remediate things faster? Those are things you can really build a practice around. And those metrics specifically, you can even report them up the chain.
Right? It's budget time, people are asking for budgets. If you can show somebody a number... Like, what your boss wants to know is: if I give you 20% more this year, what do I get? If I take 20% away, what do I lose? Security teams have a really hard time, justifiably so, answering those questions. MTTD and MTTR, they're not everything, but as a metric to index the health of your organization on, it's not a bad start. And to do that, you have to think like a purple teamer.
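MTTD and MTTR, as Gunnar defines them here, reduce to simple averages over incident timestamps. The records and field names below are assumptions made for the sketch, not a standard schema; real numbers would come from your incident tracking system.

```python
from datetime import datetime

# Illustrative incident records. MTTD = mean(detected - occurred),
# MTTR = mean(remediated - detected). All timestamps are invented.
incidents = [
    {"occurred":   datetime(2024, 1, 1, 0, 0),
     "detected":   datetime(2024, 1, 1, 6, 0),
     "remediated": datetime(2024, 1, 2, 6, 0)},
    {"occurred":   datetime(2024, 2, 1, 0, 0),
     "detected":   datetime(2024, 2, 1, 2, 0),
     "remediated": datetime(2024, 2, 1, 14, 0)},
]

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

mttd = mean_hours([i["detected"] - i["occurred"] for i in incidents])
mttr = mean_hours([i["remediated"] - i["detected"] for i in incidents])
print(f"MTTD: {mttd:.1f}h, MTTR: {mttr:.1f}h")  # MTTD: 4.0h, MTTR: 18.0h
```

Tracked quarter over quarter, these two averages are exactly the kind of trend line that answers the "what do I get for 20% more budget" question.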
Robert Wood (37:03.712)
Okay. I like that. I like that answer. And what I'm getting out of that is that the security person, the engineer working in security, needs to be effectively broadening their skillset so they can work across all of these boundaries, from detection and response to proactive software or infrastructure protection, and be able to communicate across those boundaries. Because historically, the way that we've organized not only big organizations but security teams is very siloed: you've got penetration testers and the SOC and the compliance people, and you have a really hard time speaking across those silos and understanding one another.
Gunnar (38:00.257)
And it's not that those silos don't exist for good reasons. We'll still have penetration tests, obviously, we still need to do them. But the key thing is, how do you harvest the good bits out of that and make it as useful as possible as quickly as possible? So there's still optimization in those certain areas. And this isn't really a how-do-I-build-a-career-in-information-security kind of chat, but if you're starting out, you're probably doing really well if you get really good as a blue teamer or really good as a red teamer. There are lots of careers to be started in each of those areas.
Robert Wood (38:12.647)
Yeah.
Gunnar (38:29.931)
I just think as a practice, the evolution, just like DevSecOps is blending lines, Purple Team is about blending lines. That's a mindset that you can build your practice around. And you can find tools and service partners that can close the gaps wherever you have them. Maybe they're in one silo or the other; maybe your gap is crossing the silos. Trying to find tools and service partners that can help you close those gaps is perfectly good. Nobody's going to be
10 out of 10 at all of these things.
Robert Wood (39:00.642)
Yep, yep. That makes a whole lot of sense. So...
Robert Wood (39:07.842)
Double-clicking into this one last time: we've had this emergence of posture management tools. It started with Wiz, well, maybe not with Wiz, but it started heavily in the cloud security space with CSPM. Now we've got SSPM for SaaS security posture management. We've got identity posture management tools. We've got data security posture management, and I want to talk about that later. And we've got application...
Gunnar (39:36.845)
We're starting to sound like yoga.
Robert Wood (39:40.15)
I know, it's true. And so thinking about ASPM in particular, the way I've seen most of these solutions work is they're almost like wrappers around all these other things that exist in the AppSec space, all these scanners and tools and things like that. Do you think that those tools are emerging and successful because of skill gaps in and around the application security specialist? Or are things really moving so quickly and fluidly that we need this kind of middleware to facilitate doing application security at scale?
Gunnar (40:31.253)
I think... and I wrote a Substack post on this, maybe we can put it in the chat: defensiblesystems.substack.com. We put a post out on DSPM; data security posture management is another one. But I agree with your characterization. It started with CSPM, and Wiz has obviously been a success story, and there are other success stories in that space.
What I think CSPM did that was so helpful for the industry, and it ties to the purple team mindset that I talked about earlier, is it took the traditional outside-in view that InfoSec teams have had since the beginning. You know, we put up a firewall, we have a DMZ, bad stuff on the outside, good stuff on the inside, and we'll monitor this DMZ pattern.
So CSPM started with that kind of structural outside in security and it added the killer feature, which is the attacker graphs. And that really gave you the simulated red team plus the blue team view. And you really have a start of a purple team in practice, which is really a big leap forward for the industry as a whole. By contrast, data security posture management, and I wrote about this in the piece,
takes what I would call a revolutionary approach. It's really a difference in kind, not just a difference in degree: they're taking similar things to what the CSPM folks are doing and running them from the inside out, starting with the data, trying to figure out where your sensitive data is, PHI, PII, et cetera. So now we've got two really successful spaces, and all of these contenders are also starting to call themselves posture management, because marketing and product people are nothing if not observant. And so they're very
quick to label things as posture management. You and I have been in AppSec for a long time. And I remember in the early days of identity, somebody would put a million dollars into an identity tool, and then they'd put two million dollars of services into making it work. And you'd get these two-to-one and three-to-one ratios of services to licensing costs. AppSec was not dissimilar. People would buy the Fortifys and the Ounce Labs of the world and then spend
Gunnar (42:56.639)
many hundreds of thousands, or millions, of dollars to tune these systems to get them to work and get the value out of them.
Robert Wood (43:02.956)
Yep. Roll them out, tune them, figure out what to do with the results, feed that into vulnerability management, policy, procedure, all that good stuff.
Gunnar (43:11.497)
And what I would call scale, right? Scale means I can use it, point, click, and go, across the enterprise. And, and this is a 48-point-font statement, not a statement on any particular implementation, there are many good implementations of AppSec programs, but as an industry as a whole, I don't think AppSec has done a very good job with something you can peanut-butter across an entire organization without breaking
a lot of systems in the process. So I think AppSec, which has been around for a long time, is still searching for a huge winner. It doesn't have a Wiz. There isn't a winner at that level of success in terms of adoption, an everybody-runs-it type of thing. And so now they've jumped on board this posture management train with ASPMs. Is there a place where that could work in AppSec? Absolutely.
I would be wary of people just rebranding, putting old wine in new bottles and relabeling things as AppSec posture management. And the bright line I would look for is this: what are you bringing me today that I didn't get already out of your tool? Is it something like the attacker graphs in CSPM? Does that mean I get attacker context? Does that mean I get true reachability analysis,
not just SCA? Like, who can get to this library? Is it hanging out there on the internet? Is it behind authentication or not? And can you walk through some part of my control matrix and tell me if any of the 3,000 controls I put in place is going to block this or not? If you can do that, then I think it really is posture management. I don't expect anybody to be able to do that day one out of the gate
in all cases, but that's the direction of travel that I would want to see before I'd say that's going to be a viable space. It certainly could be. The fact that nobody has really put that team on the field yet makes me think it might be harder than some people think. But I think that's a direction people can go in. You're even starting to see Wiz and some of the other CSPM leaders
Gunnar (45:25.367)
put out code-level systems. So we'll see what happens. I think the cloud gives you some advantages that we didn't have in the old days. And to go old school for a second, I didn't put this in the bio, but I wrote the first API security top 10 for OWASP. And I was frustrated, this was probably 12 years ago, something like that. I was frustrated because
the API security vendors would take the OWASP Top 10 and plunk it down and say, well, we're using HTTP, so we might as well use the OWASP Top 10 to assess our API security. Well, the OWASP Top 10 had some stuff, like SQL injection, that would still apply to APIs, but it also had a ton of stuff, like cross-site scripting and JavaScript things, that probably didn't apply as much, or sometimes at all, to APIs.
You missed a very important behavioral aspect with APIs, which is that in web applications, you knew what a browser was supposed to do. Chrome and Firefox are supposed to do this, this, and this. They're supposed to handle a session cookie that way. APIs are written because you don't know what the client is. You wouldn't have an API if you knew what the client was; you would just have an application that does the job. So APIs only exist because you don't know what the client's doing. Hence, how do you actually take the things that you assume the browser is doing, and handle that burden of secure storage and secrets management and initial authentication, and what identity protocols need to be in play?
The cloud, for all the problems that you rightly pointed out about the cloud and the blurring of the lines of DevSecOps and things like that, there are certain ways that API gateways work in Azure and AWS and GCP. There are certain structural advantages in the way these servers and services are constructed, and to the extent that an ASPM can understand those, similar to how a CSPM does, and add
Gunnar (47:27.257)
an understanding of the code, the sources and the sinks and the vulnerabilities, then there is room for somebody to innovate and create real value there. But I would be skeptical of anything labeling itself out of the blue as ASPM unless I see something of real value there.
Robert Wood (47:44.428)
Yep. And the way that I've interpreted this over time is, I think the point about APIs and these cloud-native solutions and cloud platforms really helping there is right on the nose. I think one of the reasons those sorts of improvements have been able to happen is because we've taken elements of software applications, the application space, and we've pulled them into the infrastructure space. And, at least in my own layman's understanding of the dynamics of these different spaces, it seems like most things, I won't say all things, but most things in infrastructure are just way more deterministic: if X, then Y.
Yet most things in the world of software are not that simple, and I think that's where tooling has really struggled to keep up. Because if it were all just deterministic... I mean, some things are, you know, use of insecure libraries or methods, or certain insecure coding patterns that we know are going to be problematic. They may not be directly associated with a vulnerability. Like, you could use the raw SQL stuff in Python to build a SQL query, and that might not be insecure, but it's certainly more susceptible to error if you are not sanitizing the inputs that go into it, and all that good stuff.
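The injection-prone pattern Robert is describing can be shown in a few lines. A sketch using Python's built-in sqlite3 module; the table, data, and payload are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # classic injection payload

# Risky: building the query by string concatenation. The payload
# rewrites the WHERE clause so it matches every row.
injected = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterized query treats the payload as a literal value,
# so it matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(injected), len(safe))  # 2 0
```

The string-built query is not automatically a vulnerability in every context, which is exactly the non-deterministic judgment call Robert is pointing at; but the parameterized form removes the question entirely.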
And so I think that's really where that line ends up coming into play for me: are we able to pull things out of the complex space of the software and into the more deterministic space of the infrastructure, where we can apply structured tools and decision methodologies to them? That, I think, is an interesting...
Robert Wood (50:00.47)
evolution in the space, as the cloud providers and these other technologies begin to pull out more of this complexity and make it deterministic in some way or form, whether it's a Kubernetes distribution or some resource that the cloud providers are building or something that manages sessions or data storage, whatever it happens to be. That trend I think is really, really interesting, and it speaks to your comments about the need for security professionals to really be blurring their own lines and being able to move fluidly into and around all of these things.
Gunnar (50:42.785)
Yeah. And I mean, at the end of the day for AppSec, and this is where infrastructure can help too, it's: how close are you to the source? How close are you to the sink? Like, I found the vulnerability. Great. How close is it to the source? How close is it to the sink? Where are the potential mitigations in this chain, if any? And that's a challenge for a lot of teams. And I think, you know, one thing we haven't talked about too much yet, but that I think is another
Robert Wood (50:54.285)
Hmm?
Gunnar (51:12.565)
sort of dimension at play here, as we think about scale and getting it out to the rest of the organization, is: what am I trying to do as a security professional, as a purple teamer? Am I trying to raise the floor for everyone, or am I trying to raise the ceiling for a few systems? Security teams have to make so many decisions. You read these stories about security fatigue, security burnout, things like that. There are a lot of hard decisions that security people have to make, where it's like, do you want to hit yourself in the foot with a hammer or hit yourself in the hand with a hammer? Well, you're still getting hit with a hammer; neither really sounds that good. And it's like, I can limp for today, or I can work with my left hand for today, right? So, you know, it's not always the case that you have perfect answers to these things, but
one of the things I try to do from a decision standpoint is this. There are certain sets of things, like the dependency scanning we talked about a couple of times, that are, to me, classic let's-raise-the-floor-for-everybody moves. Do not pass go, do not collect $200, without making sure you can clear this check, because we feel like we know exactly what's in that list that we're blocking, and we feel like we know exactly what to do about it, and we're going to be a hundred out of a hundred on this. Those are things you can roll out in a small, medium, or large organization. There are other sorts of things that require a lot of time. They require design or investment or special-purpose tooling, because, you know, the Uber app works a certain way, who knows what it is. And in every organization that is successful enough, and you have to be in a successful organization to have a security team at all, whether you're a successful enterprise or
a government agency that serves a lot of people, you've got, even if it might not feel like it at the time, a higher class of problems, because you're in a successful organization. If you're in a successful organization, you probably have one, two, three, some number you can count on one hand, of crown jewel applications. And the answer for those crown jewel applications might be going into, what, 10 times more analysis, because it's that important.
Gunnar (53:37.957)
Because that ride, that person picking you up on the right side of the street, is that important, as opposed to you crossing six lanes of traffic in LA or something, whatever that is. And so, again, back to the blue team side of the equation: it's knowing which game I'm in. Am I in a raise-the-floor game, aiming for minimally acceptable? Am I in a raise-the-ceiling game, where I'm really going to try and turn everything up to 10 on this thing because it's an existential threat if there's a problem here?
Robert Wood (54:12.184)
Because it demands it, yeah. Well, and I think that thinking deeply about the asset, understanding the asset, understanding the organization the asset is in, the mission of the organization, what's important, how the organization makes money or delivers on its mission, depending on the organization type... that was one of the things that really jumped out at me in the Substack you mentioned on DSPM: unlike most security solutions that are outside-in, this is one of the first things that is really going direct to the asset and working from the asset out. And that is, I think, a really useful thing. One of the talks that I saw, I guess it wasn't that recent anymore...
Man, every time I think that I feel like I'm just getting old, alas.
Gunnar (55:16.875)
It beats the alternative.
Robert Wood (55:18.478)
This is true. One of the talks that I saw that has really stuck with me was this talk by Nathaniel Gleicher. I believe he's over at Meta now, but at the time he was at Illumio. And the topic of the talk was basically what cybersecurity can learn from the Secret Service. It was all about starting with your high-value asset. In the Secret Service's case, they're obviously doing
executive branch protection, so they're protecting the president, the vice president, et cetera. You start with that, and then you work in these concentric rings outward, away from that high-value asset, and you control and monitor and set choke points on the different pathways into and toward the high-value asset. That talk resurfaced in my memory as I read the Substack and thought about that asset-first approach. Do you think this is going to be a trend in security approaches and/or tooling moving forward?
Gunnar (56:37.057)
I hope so. I think it will take a while before it becomes the dominant approach, and I don't even know if it's appropriate for it to be the dominant approach to security. But I'm glad it's on the table, and I'm glad it's starting to be in the tool set. You can go back to an extreme example I always think of. Let's assume for a moment that somehow, some way, PGP and GPG email was really easy to use, and a billion people had PGP and GPG installed and knew how to use it, and it was seamless. The amount of controls, the amount of investment, that the industry, the world, spends on securing email and the networks that send email, so many things would fall by the wayside, because you've got authentication, you've got an encrypted message that's signed and hashed and everything going across the wire. You could send something from the most dangerous coffee shop in the world to somebody who's in the most dangerous coffee shop in another part of the world and you'd be fine, right? And it's kind of incredible how many hundreds of billions, trillions of dollars, probably, are spent on
Robert Wood (57:46.23)
airport and broke.
Gunnar (57:56.913)
making everything in between point A and point B secure, whereas you could have started with the asset. Now, that obviously didn't happen, for lots of usability reasons, which is why I'm not predicting that this is the start of a new age. But I do think that the focus on the asset has been lacking for a very long time, and we have as an industry almost sleepwalked into this mindset where we're always working from the DMZ in, and
there's lots of good practical reasons why that was the case, but we missed so many opportunities, and I'm glad that DSPM is starting to see the uptake in interest that it has. Any time spent over there, and by over there I mean the right-hand side of the diagram, where you're on the inside, close to the data: file systems, file structures, databases.
That's just time well spent. How are those databases locked down? What's the control stack looking like? Where's that data moving? Where is the sensitive data? Is it in the places you thought, or is it in some places you didn't think it would be? That's so opposite to how a lot of security teams spend 98% of their time, and yet that's probably where 78% of the risk is, right? So much of the risk is on that side of the equation. So I do think,
as much as securing the infrastructure, the apps, the users, the don't-click-this-phishing-link training, all these different things that security teams invest in, and rightly so, we miss the opportunity by losing track of the asset. And back to some of those earlier discussions we had, of is this a place to talk about strength or scale, raise the ceiling or raise the floor: getting really crisp and clear on where those assets are and what the issues are helps you tune those decisions to the reality on the ground, in a way that you can't really do without those scans and classifications of what the assets are in place.
Robert Wood (01:00:00.942)
Mm.
One thing that comes to mind here is
So, you mentioned the comment around the industry kind of sleepwalking its way into an approach. I don't disagree with you. Do you think there was any major cause or reason as to why that was? Was it sort of a collective experience that we all went through and it just became a cultural norm? Or do you think that maybe the overemphasis on compliance mandates and alignment for so long instilled this behavior?
Gunnar (01:00:52.609)
I think industries and technologies are a reflection of people, the people who built them and the people who run them. And I think it's as simple as this: most of the people in security came up with an infrastructure and network background, and when they thought about security, they very rightly thought about securing the infrastructure and networks that they had built. And that is where the security capability lived in most organizations for the first, I would say, decade and a half of the internet. It's only in recent years that it started to move, like, maybe it should be in, I don't know, a product security team. It's really only in the last six or seven years that it started
Robert Wood (01:01:28.792)
Yeah.
Gunnar (01:01:49.357)
to come into question, like, where should it be?
Robert Wood (01:01:53.804)
Yeah, or even hiring developers onto security teams and that sort of thing.
Gunnar (01:01:59.277)
Yeah. And I mean, security protocols are unlike programming languages, and unlike technology where you have these products and tools that didn't exist, and then two or three years later they're everywhere, and then two or three years later they're gone, replaced by the next thing. Security protocols aren't really like that. You know, splitting the key was, like, 1978. And Kerberos was, like, 1990 or something like that. And then Active Directory, like, 2000. These things are decades long. So from a security protocol perspective, we're still standing on the shoulders of primitives that came out 20, 30, 40 years ago. And those were built... you know, Kerberos goes back a really, really long time. And, you know, every three-finger
Robert Wood (01:02:49.678)
Yeah.
Gunnar (01:02:56.969)
salute in the history of the world is a Kerberos transaction, right? So these things were designed for very specific reasons, and there were good reasons why they were built that way. But I don't think that security as a discipline will fully realize some of the opportunities you talked about until we get enmeshed in some of these other teams. And I don't think we should leave the good things that were accomplished
in the past aside. I'm not brushing those aside at all. In fact, I wrote a piece some years back, I can send you the paper and we can put it in the notes, and the point of it was basically: the DMZ is dead, long live the DMZ. The point being, here are the things the DMZ did well, and if identity is the new perimeter, what can we learn from the old perimeter? And the old perimeter, the DMZ, was
great at having a separate, standalone place where the normal rules don't apply. That's fantastic security advice, right? That's almost a one-sentence description of what we're supposed to be doing. But we didn't do it for applications in the DMZ. We didn't do it for anything other than network traffic; that's what those things were designed for.
And when, you know, identity is the new perimeter, or the cloud's the new perimeter, can you take those constructs that were good from the old world and apply them? Which ones still apply? I think some of them do. And then some of them, well, we're in a new world, and there are some things we need to do better. Those would be things like, you know, your golden builds, your ability to
Robert Wood (01:04:33.729)
even if they look a little different.
Gunnar (01:04:48.877)
deploy new code in, you know, two hours, two days, two weeks, so that things don't live for very long times. Ephemerality and those sorts of things. That's not something the network DMZ people ever really had to deal with; that's something AppSec can bring to the table, and it's very, very effective. That, I would say, is something that could even be a raise-the-floor type of thing in certain large organizations. So there are times when you can get these great wins, you just have to be
open to where they're coming from. And sometimes they're not coming from a security vendor, and sometimes they're enabled because you have the right security vendor. You just have to figure out what that is. So you should be open to new design patterns and new ways of solving the problems, because we're operating in new areas.
Robert Wood (01:05:39.244)
Yes, yes. And I mean, you need look no further than this AI boom that is happening right now to even...
Gunnar (01:05:49.94)
I hadn't heard about that. What's going on there?
Robert Wood (01:05:51.598)
Oh my goodness. So it's interesting with that. I mean, one thing I find really encouraging about the way things are evolving there is that with the gen AI boom, unlike with other major advancements in the past, we are building and figuring out
industry certification. You've got ISO 42001. You've got the EU AI Act. You've got regulation and certification and all of this stuff happening alongside of, or at least much closer to, the advancement in the technology, as opposed to those things just lagging way behind. And that, I think, is...
It's really interesting. And I hope that stuff like that continues to be a trend in the industry. And that's not all security people driving that. That's regulators. That's a whole lot of people. But I see that as being a pretty encouraging trend.
Gunnar (01:07:05.805)
I'm glad that it's on the table, and I think we have gotten better, probably starting with cloud, where at least security has been at the table and considered. You know, the Cloud Security Alliance was right there, pretty close to five minutes after cloud started to get big. It didn't get everything right, but it was there. And I think that's been an encouraging improvement over the last five, six, seven years: that security gets brought to the dance very close to the beginning, as opposed to, say, the earlier days of the web, and maybe JavaScript would be a good example, where it's already widely scaled and then security is trying to get the horses back in the barn too late. Having said that, I...
Robert Wood (01:07:54.156)
Yeah, we'll be in that super awkward dance trying to come up and figure out how to groove.
Gunnar (01:08:01.261)
I do worry a bit about AI, in the sense that, thinking back to my we're-operating-in-new-territory standpoint, I'm not sure that people have fully grappled with the gaps of traditional security tools in AI. Let's just take role-based access control, you know, the old battle horse of information security from 1992 or whatever.
It assumes that you have a structure: you have a role and you have a permission that you put on that role. Well, AI is generating the structure as you go. So I would posit that if you don't even know what the structure is, then how are you putting a role and a permission on it? And the answer might be that role-based access control doesn't work. Fine. Then please tell me what the access control matrix is going to look like. You can get fancy with attribute-based access control and policy-based access control; we can talk about this for as long as anybody wants to. But the point is, AI is proceeding, and it's not clear to me that the structure any of these access control matrices are going to be based on has been thought out to any real degree.
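The assumption Gunnar is poking at, that RBAC presumes an enumerated structure of roles, resources, and permissions, shows up even in a toy implementation (all names here are illustrative):

```python
# A minimal role-based access control check. Note what it presumes:
# the set of resources and actions is enumerated up front. If a
# system is generating new structure on the fly, there is nothing
# here to attach a permission to.
ROLE_PERMISSIONS = {
    "claims_analyst": {("claims_db", "read")},
    "claims_admin": {("claims_db", "read"), ("claims_db", "write")},
}

USER_ROLES = {"alice": "claims_admin", "bob": "claims_analyst"}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Allow only if the user's role carries an explicit (resource, action) grant."""
    role = USER_ROLES.get(user)
    return role is not None and (resource, action) in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("bob", "claims_db", "write"))    # False
print(is_allowed("alice", "claims_db", "write"))  # True
# A resource invented at runtime has no entry at all, so it falls
# through to a default deny, whether or not that is the intended answer:
print(is_allowed("alice", "generated_report_17", "read"))  # False
```

Default-deny on unknown resources is safe but silent; the open question Gunnar raises is what the matrix should even be keyed on when the resources themselves are generated.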
Robert Wood (01:09:20.236)
Yep, yep, for sure. So, all right, I want to ask one final question as we round things out. We've talked about everything from DSPM to developer engagement, building AppSec programs, to big, broad, sweeping trends in the industry.
That's all fantastic and super interesting, and I can't wait to listen to this once I go in and get it all cleaned up and edited. If you were taking on a mentee, or maybe you're stepping into an organization in the future and you need to start getting your bearings, or you're recommending to somebody how they go about getting their bearings: there's the get-your-first-90-day-plan-in-place stuff, and that's all well and good, but you can't even get a 90-day plan in place if you don't get your bearings right out of the gate. What advice would you give to somebody stepping in, for their first week or two, in terms of the conversations they have, the things they try to learn, and who they try to start partnering up with?
Gunnar (01:10:41.953)
Yeah, I would start with the key use cases, the key platforms, the key products that the business runs. So not the security use cases, not the security platforms, not the security products. I'd start with the business or organization use cases, products, and platforms that you run, because...
Robert Wood (01:10:58.478)
The thing that runs the organization.
Gunnar (01:11:09.227)
Back to leverage, which we talked about a while ago. You can get leverage as a security person by knowing the attacks better than anybody else. That's pretty hard to do; there are thousands of really good attackers out there. You can get leverage by understanding your security tools better than others do. That's helpful.
If you're, I don't know, a health care insurance company, and you have a claims system, and if that claims system goes belly up you have a serious problem, then you should know that claims system better than anybody else. That's one thing that you should know better than anybody else, and it will give you the mental model to then overlay whatever level of red team skillset and blue team skillset you've developed, or your vendors have. You'll be able to target and bring more precision to what you're trying to do with this service, this security capability, and locate it and ground it in reality, because it will look different depending on...
I call it the law of strawberry jam versus the law of raspberry jam. Raspberry jam you can just spread over a piece of toast, and it's the same height all the way across. Sometimes you want that raspberry jam, and sometimes you want strawberry jam, and then you get those nice big chunks of strawberries. It still covers the toast, but it's chunky. Those chunks are your crown jewel applications. So know those crown jewel applications, at least the ones that are in your sphere of influence. You really need to know those things.
And those are the things that probably consume 80 or 90% of your developer resources, so those are things you want to have much more than a passing familiarity with. They've probably been the most heavily customized as well, so you're not going to be able to just say, "I understand GCP, so now I understand this." It's more like, how did we deploy this to build our health care insurance claims application?
Robert Wood (01:13:24.364)
Yep. I guess a hopefully brief follow-up to that is: what sort of things are you trying to learn about those assets? Is it how they're built? Who depends on them? Data flows? Is there a top thing, or a top couple of things, that you really want to understand about them? The technology is all well and good, but I don't think the technology is really where...
I'm going to make an assumption that that's not where you're going to go.
Gunnar (01:13:56.481)
Well, I like to start with identity structures and who's logging on and how. It's always important to think about the user: who are they? Do they work for you? Do they not work for you? Are they outsourced? Where are all the places they can sign in? And then don't forget your privileged users,
who actually can update the software: do they work for you, and how are they authenticated? So once you understand what the use cases and platforms are, you can take an identity view, and that's something you can get in a day or two. You can figure this stuff out in a day or two: how are people signing on (hopefully), who are they, and what are the permission structures?
And again, don't forget the privileged users. You're going to learn a lot that way as well.
Robert Wood (01:14:49.526)
I love that. All right, well, I wanted to say thank you, sir, for spending the last hour and change with me here. This has been a fantastic conversation, not so different from all of the other fascinating conversations we've had about everything from investing to security to culture and all the things in between. So I very much appreciate your time and your wisdom, and I hope we get a chance to talk again soon.
Gunnar (01:15:16.215)
Thanks again, Robert. I really appreciate your time as well.
Robert Wood (01:15:19.21)
All right, thank you for listening, everyone, and we'll be on again soon.