Agents are the Wrong Way to do Attack Surface Mapping

Robert Hansen March 24, 2021

This post is the fourth in a short series we have dubbed “Attack Surface Mapping the Wrong Way,” covering the misguided ways that people, companies, and vendors attempt attack surface mapping. Read the first in this series here. Next up: agents, and why they are the wrong way.

Agents alone are flawed

Quite often, people make strong claims that the only way to know about environmental changes is to run an agent that monitors every change in the environment. Typically, they are speaking about AWS EC2, Google Cloud, or Azure, while entirely ignoring assets that may still sit in a datacenter, that are not centralized, or that live in environments other than those three behemoths. But let us give them the benefit of the doubt for a second.

Most companies these days do not have just one AWS account; they tend to have several, or in some cases I have seen, over a hundred! Who will manage the tens or hundreds of agents that need to run across all of those accounts? There are some shortcuts that make the process slightly less onerous, but it still requires quite a bit of thinking beyond “just install an agent.” That is a substantial hidden cost in some cases.
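To make that scale concrete, here is a minimal sketch, assuming the boto3 SDK and credentials for an AWS Organizations management account with permission to list accounts, that simply counts how many accounts an agent rollout would have to cover:

```python
# Sketch: enumerate every AWS account in an organization to see how many
# environments an agent rollout would actually have to cover.
# Assumes boto3 is installed and credentials for the Organizations
# management account are available in the environment.
import boto3

org = boto3.client("organizations")

accounts = []
for page in org.get_paginator("list_accounts").paginate():
    accounts.extend(page["Accounts"])

active = [a for a in accounts if a["Status"] == "ACTIVE"]
print(f"{len(active)} active accounts, each needing its own agent rollout")
for account in active:
    print(account["Id"], account["Name"])
```

Even this optimistic sketch assumes a single, well-organized AWS Organizations tree; accounts created outside the organization never appear in that listing at all.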

But what happens when you have an account that you do not know about? What then? We have run into this situation several times: one or more of the various cloud providers are forbidden by policy, yet assets on them are found through external means. The agent failed because it could not see anything it was not installed on. That is the entire point of an up-to-date asset inventory: find the things that should not be there, as well as the things that you do know about.
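That is the gap external discovery closes. As one hedged illustration of what “external means” can look like, the sketch below queries the public crt.sh certificate transparency search for hostnames under a domain (example.com is a placeholder); certificates issued for hosts on a “forbidden” provider show up whether or not any agent exists:

```python
# Sketch: use certificate transparency logs (via the public crt.sh search)
# to find hostnames under a domain, no agent required.
# "example.com" is a placeholder; substitute your own domain.
import requests

domain = "example.com"
resp = requests.get(
    "https://crt.sh/",
    params={"q": f"%.{domain}", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

hostnames = set()
for cert in resp.json():
    # name_value may hold several newline-separated names per certificate
    for name in cert["name_value"].splitlines():
        hostnames.add(name.lstrip("*."))

for host in sorted(hostnames):
    print(host)
```

Certificate transparency is only one external signal among many; DNS records, WHOIS data, and netblock ownership all serve the same purpose of finding what no agent was ever installed on.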

One particularly frustrating conversation went something like this:

Organization: “We don’t want to look at things externally. We want agents on all of our Azure systems so that we find everything.”
Us: “What about this set of servers over here on Amazon?”
Organization: “We don’t allow anything on Amazon.”
Us: “Right, but… there they are…”
Organization: “That’s outside of policy.”
Us: “Right… but since it’s outside of policy and you didn’t know about them, they are even more important to find, right?”
Organization: “We just don’t allow that.”
Us: “Right… exactly…”

A conversation like this is a bit like talking to a wall: it is clear that an agent-only approach had no chance of finding those systems, and the client simply cannot see why that is a problem. To be perfectly clear, there is nothing wrong with supplementing an external asset inventory with agents, but relying entirely on agents means you already had to know where all of your assets were in order to install said agents.

Bit Discovery has agents that can be deployed for the aforementioned behemoth cloud providers, but we do not push them heavily, for exactly this reason. Agents alone are simply not the right way, and they cause too many downstream headaches for the client to manage.

Installing agents “everywhere” is a practically impossible task, because you cannot install an agent on a machine that you do not know exists. So, unless you happen to know all of your assets already, how could you hope to leverage agents perfectly? It is a chicken-and-egg problem. Therefore, if you or your vendors require an agent on every machine to get an asset inventory, you are likely missing many assets.

We recommend using agents when there is a clear-cut use case, such as an environment that goes up and down so rapidly that an external scan simply cannot keep up. Beyond that, agents should only be used to supplement a better strategy, or to build an internal asset inventory of network segments that a scanner simply cannot reach without a privileged position in the network.
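For that ephemeral use case, the supplemental agent itself can stay tiny. Below is a minimal sketch of one; the inventory endpoint URL is hypothetical, and a real deployment would add authentication, retries, and richer host metadata:

```python
# Sketch: a tiny check-in agent for short-lived instances that external
# scans cannot keep up with. The endpoint URL is hypothetical; a real
# deployment would add authentication and error handling.
import json
import socket
import time
import urllib.request

INVENTORY_URL = "https://inventory.example.internal/checkin"  # hypothetical

def check_in() -> None:
    payload = json.dumps({
        "hostname": socket.gethostname(),
        "seen_at": int(time.time()),
    }).encode()
    req = urllib.request.Request(
        INVENTORY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    while True:
        check_in()
        time.sleep(60)  # announce once a minute for the instance's lifetime
```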

Want to talk about the right way to do attack surface management? We’ll show you. Get in touch with us here.
