Based on extensive work with enterprise customers, we have compiled a list of eight best practice steps. Follow them for an effective vulnerability management programme.
S4 Applications takes a ‘security-by-design’ approach for vulnerability assessment and remediation solutions.
Overview of our eight best practice steps for vulnerability management:
- Build a data repository
- Populate data on a regular basis
- Manage automatic mitigation rules
- Understand risk
- Generate tickets for remediation
- Close tickets once remediation is confirmed
- Manage risk for non-remediated items
- Report on everything
Best practice steps 1 – 8 for an effective vulnerability management programme:
1. Build a data repository
You’ll want to hold information on every asset connected to the network, for instance:
- Networking equipment
- That Samsung fridge that the intern was playing with and put on the network
- Possibly employees’ mobile devices, depending on how you manage them
- Everything on your network
In addition, you’ll want information about virtual assets too, in other words S3 buckets and similar, along with their security profiles.
Add to that containers, and the scanned machine images that you use on AWS / Azure.
For a large company, collecting over 1 million assets is not difficult to imagine, especially considering that it is typical to hold many hundreds of pieces of information per asset.
All of that data needs to interrelate, and the database needs to handle a capacity of 100M+ pieces of data. In other words, it would probably not be a relational database.
- Design a database schema that can hold all of those different types of data and allow navigation between them. If you think about an application, you will want to see everything about it in one place. DAST and SAST results, infrastructure on the underlying servers, security issues on the S3 buckets; everything, in one place.
- Implement a database structure that can store large amounts of data and provide high-speed access.
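As an illustration, a document-style record for a single asset might look like the following sketch in Python. All field names and values here are assumptions, not a prescribed schema:

```python
# Illustrative document-style asset record for a non-relational store.
# Field names and values are assumptions, not a prescribed schema.
asset = {
    "id": "S-LON-PH-35",
    "type": "server",
    "application": "Phoenix",
    "vulnerabilities": [
        {"cve": "CVE-2017-0144", "source": "Tenable", "cvss": 8.1},
    ],
    "dast_findings": [
        {"issue": "Reflected XSS", "source": "Acunetix"},
    ],
    "cloud_resources": [
        {"type": "s3_bucket", "name": "phoenix-assets", "public": False},
    ],
}

def application_view(assets, app_name):
    """Return everything known about one application in one place."""
    return [a for a in assets if a.get("application") == app_name]
```

The point of the shape is navigation: one query for an application name pulls back DAST, SAST, infrastructure and cloud-resource findings together.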
For more on how to assess your security posture, read our blog: Assess your security Posture with our Security Maturity Model.
2. Populate data on a regular basis
There are a number of different data source types that you can collect data from, including:
- Vulnerability scanners e.g. Tenable or Qualys
- Web application scanners (DAST tools) e.g. Netsparker, Acunetix, AppSpider
- Source code scanners (SAST tools) e.g. Veracode
- CMDBs e.g. ServiceNow
- Container scanners e.g. TwistLock, Aqua
- Ticket systems (ITSMs) e.g. ServiceNow, Jira, HP’s Service Manager
- General asset information including software inventories e.g. Microsoft’s AD, Microsoft’s SCCM, Red Hat Satellite
- SQL databases, CSV files and XML files
Most companies have at least one of each of the items listed above.
You’ll need to connect securely to each source and then extract the data. Make sure to regularly check and maintain these connections, as vendors revise their APIs with new releases.
Generating new data
Not all data may be directly available; some of it therefore needs to be inferred from other data.
The most common example of this is the machine name. Consider a sample machine called “S-LON-PH-35”. This could mean:
- S – it is a server, not W for workstation
- LON – it is in the London data centre
- PH – it is part of the Phoenix application
- 35 – a sequential number to make it unique
You need to extract this data from the machine name and store it. When someone says “Give me all of the machines in the London data centre”, you now have that information.
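The parsing described above can be sketched in a few lines of Python. The naming convention and returned field names are assumptions based on the example:

```python
def parse_machine_name(name):
    """Infer asset metadata from a name like 'S-LON-PH-35'.
    The convention here is illustrative; adapt it to your own standard."""
    kind, site, app, seq = name.split("-")
    return {
        "type": {"S": "server", "W": "workstation"}.get(kind, "unknown"),
        "site": site,          # e.g. LON = London data centre
        "application": app,    # e.g. PH = Phoenix
        "sequence": int(seq),  # suffix that makes the name unique
    }
```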
Another common piece of inferred information is whether or not the machine is internet facing, i.e. does it have a public address or a private RFC 1918 address? Again, this needs to be calculated and stored.
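The RFC 1918 check is straightforward with Python’s standard `ipaddress` module; a minimal sketch:

```python
import ipaddress

# The three private (RFC 1918) address ranges.
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(ip):
    """True if the address is private, i.e. the machine is unlikely
    to be directly internet facing."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918_NETS)
```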
3. Manage automatic mitigation rules
Companies often have mitigation technologies that are under-utilised.
Consider the situation where you have networking equipment that can perform packet analysis and block certain attacks.
You need to enable the network equipment to check for and block particular CVEs. Since using these features of networking devices slows them down, you only want to look for CVEs that you are actually vulnerable to.
We would suggest the following workflow, say for EternalBlue:
- Scan machines on the network – see if you have any that are susceptible to EternalBlue.
- If you do then you’ll need to create a ticket for the network guys to enable EternalBlue blocking on the network equipment. This is typically very quick and low risk to do.
- Once the mitigation is in place, you need to manage the remediation process as normal
- When you verify all devices on the network have been remediated, create a ticket for the network guys to remove the EternalBlue blocking.
In the above workflow, you are vulnerable to EternalBlue for the shortest possible time frame, maximizing your investment in the networking technology.
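The workflow above can be sketched as a simple function. The scan-result shape and the ticket strings are purely illustrative:

```python
def mitigation_workflow(scan_results, cve):
    """Generate the ticket sequence for one CVE: enable blocking,
    remediate each vulnerable host, then remove the blocking.
    scan_results maps host name -> set of CVEs found on it."""
    vulnerable = [host for host, cves in scan_results.items() if cve in cves]
    if not vulnerable:
        return []  # not susceptible, so don't slow the network down
    tickets = [f"Network team: enable blocking for {cve}"]
    tickets += [f"Remediate {cve} on {host}" for host in vulnerable]
    tickets.append(f"Network team: remove {cve} blocking once all hosts are clean")
    return tickets
```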
Where an SQL injection issue is found in an application, consider a similar workflow: create a ticket to add a WAF rule, then remove the WAF rule when the issue is resolved.
You always want to remove unnecessary rules as they only complicate matters, and complication is the enemy of security. They can often slow an operation down too.
To learn more about vulnerability management, read our blog: Staying safe with Risk Based Vulnerability Management.
4. Understanding risk with vulnerability management
Now that everything is in one place, including metadata about your applications, you can assign risk based on several inputs.
The risk of any given vulnerability is a mixture of:
- Technical properties of the vulnerability, typically given by the CVSS score and other related CVE metadata
- Ease of exploitation
- Local access required?
- Properties of the machine that the CVE exists on:
- Is it Internet facing?
- Properties of the application that it forms part of:
- Is there personally identifiable information (PII)?
- Is there credit card information (PCI-DSS)?
- Information provided by your threat intel provider:
- Is there an exploit under development?
- Is there sample code available on GitHub?
You can collate all of the above information, feed it into an algorithm, and score each vulnerability uniquely on a particular host. Avoid scoring 90% of your vulnerabilities 10 out of 10, or conversely 1 out of 10; instead, plan for a wide distribution of scores to help set your priorities.
Now that you have a scoring algorithm, you need to score each vulnerability on each asset. You can then build an asset-level risk score and, rolling everything up, an application-level risk score.
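A minimal scoring sketch follows. The weights are assumptions chosen to illustrate the idea; calibrate them against your own data to get the wide distribution described above:

```python
def risk_score(vuln):
    """Combine technical, asset and threat-intel factors into one score.
    All weights here are assumptions; tune them for your environment."""
    score = vuln.get("cvss", 0.0)            # technical base, 0-10
    if vuln.get("internet_facing"):
        score *= 1.3
    if vuln.get("pii") or vuln.get("pci"):
        score *= 1.2
    if vuln.get("exploit_available"):
        score *= 1.4
    if vuln.get("local_access_required"):
        score *= 0.7                         # harder to exploit remotely
    return min(round(score, 1), 10.0)

def asset_score(vulns):
    """Roll vulnerability scores up to an asset-level score.
    Max is used here; a sum or decay function are common alternatives."""
    return max((risk_score(v) for v in vulns), default=0.0)
```

The same roll-up applied over all the assets of an application yields the application-level score.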
5. Generating tickets for remediation
Prioritise all vulnerabilities, then remediate the most important ones first. You typically do this by creating tickets in your ITSM or ITSMs.
Organisations tend to have more than one ITSM, consequently different types of vulnerabilities go into different ITSMs.
Infrastructure issues may go into ServiceNow. Whereas application issues (say a Cross Site Scripting issue) would go into a tool used by the developers; for example Jira.
You also don’t want one ticket per vulnerability, because that is not the way that patching teams work. In a Windows environment, one patch may resolve many issues, so create tickets for Windows machines based on the resolution.
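Grouping findings by resolution can be sketched like this. The `patch` field is an assumption; scanners usually expose an equivalent “solution” or patch reference:

```python
from collections import defaultdict

def tickets_by_resolution(findings):
    """One ticket per (host, patch) pair, so a single ticket covers every
    vulnerability that the same patch will clear on that machine."""
    groups = defaultdict(list)
    for f in findings:
        groups[(f["host"], f["patch"])].append(f["cve"])
    return [
        {"host": host, "patch": patch, "cves": sorted(cves)}
        for (host, patch), cves in groups.items()
    ]
```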
The way that you create tickets, and who you assign them to will vary by platform and a number of other variables.
If you consider, say a Cisco router, running an old version of Cisco’s IOS, this will have a long list of vulnerabilities. You may want one ticket per router, as this is how that team works.
This is very different to a programmer who has three SQL injection issues within a single application. They may prefer one ticket and deal with all three issues at once.
Your Windows workstation patching guys may want tickets in a different way to the Windows server guys as they patch in different ways.
It is hard enough to get the patching teams to patch things, so we suggest that you have different ticket creation rules to fit the different ways that the teams work. If you do things their way, there is more chance things will get fixed.
6. Closing tickets once remediation is confirmed
Once the remediation teams have applied the fix, they will update their ticket. You could trust that the work is complete and close the ticket down.
The best plan, though, is to trigger a re-scan of the associated item and let the scanning technology confirm the remediation before the ticket is closed.
Note that this can be a lot harder than it sounds. Many of the scanning technologies don’t report items as remediated, they just stop reporting their existence.
This means you have to compare the outputs from different runs.
- You also have to be careful, because not all runs are the same.
- If last week you ran a full scan and this week a quick scan, then they cannot be compared.
- Vulnerability X, that was found last week, was not looked for in the quick scan.
- This doesn’t mean that vulnerability X has been fixed, only that it was not checked for.
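The comparison logic can be sketched as follows. The run shape (a set of checks performed plus a set of findings) is an assumption about how you normalise scanner output:

```python
def confirmed_remediated(previous, current):
    """Findings from the previous run that are absent from the current run,
    counted as remediated only if the current run actually performed the
    relevant check -- a quick scan cannot confirm a full-scan finding.
    Each run: {"checks": set of check ids,
               "findings": set of (host, check_id) pairs}."""
    gone = previous["findings"] - current["findings"]
    return {(host, check) for host, check in gone
            if check in current["checks"]}
```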
If a vulnerability still exists, then the ticket is pushed back to the remediation team. This is often because a reboot has yet to be completed, or similar.
This also has the benefit of closing out vulnerabilities that were remediated as a by-product of the remediation work.
So consider a Windows Patch Tuesday update. There will probably be at least one critical item that you have to patch. In the process of applying that patch you will probably fix some other, less important issues at the same time. If you close tickets based on scan results then everything gets cleared up automatically.
To learn more about the difference between Vulnerability Scanning and Pen testing read our blog.
7. Managing risk for non-remediated items
The most common example of this is Java, but the issue impacts many other technologies too.
Consider an example where the vulnerability scan reports that the version of Java in use must be upgraded because it is full of holes. The issue is that the hosted application is not supported on the newer Java version; hence it cannot be upgraded. Here you need a risk management system.
The remediation team flags an item as being a risk. There then needs to be some form of approval process where specific mitigations are discussed and any residual risk is reviewed by the business.
If the business decides to accept the risk, then it should only be for a limited amount of time, after which it needs to be reviewed and accepted again.
The suggested process is then:
- Remediation team identifies something that cannot be remediated.
- Case file is built and discussed, including:
- Current risk
- Possible mitigations and their effectiveness
- Risk and replacement time
- Action to be taken
- Business decides if residual risk is acceptable.
- Complete actions.
- The accepted risk is reviewed after a given timeframe. The review timeframe varies based upon risk, with higher risk items being reviewed more frequently.
- When a review occurs then start back at stage 2 because many assumptions will have changed.
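Scheduling the review can be sketched simply. The thresholds and intervals below are assumptions, there only to illustrate “higher risk, more frequent review”:

```python
from datetime import date, timedelta

def next_review_date(risk_score, accepted_on):
    """Higher-risk acceptances come back for review sooner.
    Thresholds and intervals are illustrative only."""
    if risk_score >= 8:
        days = 30
    elif risk_score >= 5:
        days = 90
    else:
        days = 180
    return accepted_on + timedelta(days=days)
```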
8. Report on everything
Lots and lots of data means that you should:
- Report on everything.
- Create dashboards with different needs for different stakeholders.
- Drill through the data.
- Measure tickets against agreed SLAs.
- See an application and all its issues on one screen, including:
- DAST tool results
- SAST tool results
- Infrastructure issues from the servers
- Issues from resources like S3 buckets
Also consider data security: people from the London data centre shouldn’t see the Singapore data.
Brinqa vulnerability management solutions.
Brinqa prioritises assets, vulnerabilities, and incidents based on their impact and value to your business.