Agent vs Agentless Deployment for Vulnerability Scanners: Why Are We Still Debating This?
Disclaimer: I’ll say right up front that I am biased on this topic, for a couple of reasons. First, I’ve been in tech for over 35 years and I’ve been through this discussion many times. Second, having been part of Nodeware since before its commercial release in 2016, I know first-hand what our partners say about deploying it.
If you’re reading this, you probably already know that vulnerability management is a foundational component of an organization’s security program. With new vulnerabilities showing up at an alarming rate, the days of quarterly or monthly vulnerability scans are behind us. Continuous vulnerability management is the new norm.
Unfortunately, many organizations remain underprotected when it comes to vulnerability management, whether it is handled directly by their IT/security staff or by the MSPs who serve that role, because not ALL of their assets are being scanned. Why is this?
It usually comes down to one of three reasons: 1) The preference of the person in charge of deployment; 2) The vendor of choice offering only agents or only an agentless solution; and 3) The time it takes to deploy.
As an end customer or a service provider, let me ask you this: Why would you want to arbitrarily limit critical protection?
On the agent side of the conversation, the usual argument is that agents can be mass deployed and deliver superior results because they run with credentials on the machine where they are installed. That is true, but what about the devices on the network (e.g., IoT devices, printers, etc.) where agents cannot be deployed? On the flip side, agentless sensors on the network can capture everything that is attached and active on that network, but what about devices that are inactive or remote yet are still network assets? And what if you need the deeper scan results that credentialed scanning provides?
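To make that trade-off concrete, here is a minimal, hypothetical sketch of one common agentless discovery technique: sweep a subnet and flag any host that answers a TCP connection attempt on a few well-known ports. This is a generic illustration, not a description of how Nodeware's sensors work; the subnet and port list are assumptions you would adjust for your own network.

```python
# Minimal agentless discovery sketch: probe a subnet for active hosts
# via TCP connect attempts on a few common ports. Illustrative only;
# SUBNET and PROBE_PORTS are assumptions, not product defaults.
import ipaddress
import socket
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1.0/24"                # assumed local subnet
PROBE_PORTS = (22, 80, 443, 445, 9100)   # SSH, HTTP(S), SMB, raw print

def host_is_active(ip: str, timeout: float = 0.5) -> bool:
    """Return True if any probe port accepts a TCP connection."""
    for port in PROBE_PORTS:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            continue  # refused, timed out, or unreachable; try next port
    return False

def discover(subnet: str) -> list[str]:
    """Probe every host address in the subnet concurrently."""
    hosts = [str(ip) for ip in ipaddress.ip_network(subnet).hosts()]
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(host_is_active, hosts)
    return [ip for ip, up in zip(hosts, results) if up]

if __name__ == "__main__":
    for ip in discover(SUBNET):
        print(f"active: {ip}")
```

Notice what the sketch exposes about the limitation above: only devices that are powered on and reachable at scan time show up in the results, and nothing here runs with credentials on the target. An agent, by contrast, reports in from the device itself, which is exactly why combining both approaches closes the gap.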
As I am writing this, I am reminded of a series of commercials that ran for a famous beer some years ago. Both camps were drinking the same beer, but each touted a different quality of it as the reason it was better. I see the same thing here. Ultimately, it didn't matter, as they were all drinking the same beer. I'm going to take it a step further, though.
We built Nodeware to provide continuous, full coverage for asset discovery, vulnerability scanning, and access to vulnerability data for reporting and remediation. Over time, we refined and expanded the solution to include agentless sensors (Hyper-V, VMware OVF, Windows, Debian) as well as agents (Windows, Debian and Ubuntu, Apple macOS, Linux), so that a combination of agents and sensors always delivers full visibility and protection.
We know that organizations have different needs, so with Nodeware there is no debate: use agents and sensors in whatever combination you need to get full, continuous network coverage. Both are lightweight and easy to deploy, and all network data, regardless of deployment method, is returned to a common UI for seamless management.
At the end of your day, you can kick back and have a cold one, knowing that Nodeware has you covered. There's no need to debate it, but if you really want to, I invite Nodeware users to pick a side: "Scans great" vs. "Less overhead"!