From: Securosis Highlights – Endpoint Security Management Buyer’s Guide: Ongoing Controls – File Integrity Monitoring

Posted on 2012/08/16


A quick re-post from Securosis Highlights: https://securosis.com/blog/endpoint-security-management-buyers-guide-ongoing-controls-file-integrity-m

Endpoint Security Management Buyer’s Guide: Ongoing Controls – File Integrity Monitoring, by Mike Rothman

After covering the first aspect of the ongoing controls, device control, we’ll now turn our attention to file integrity monitoring (FIM). Also called change monitoring, this capability involves monitoring files to see if and when they have changed. Why is this capability important from an endpoint security management standpoint? Here are a few situations:

  • Malware detection: Malware does a lot of bad stuff to your device. It can load up software and change configurations and/or registry settings. But another common action is for the malware to change system files. For instance, a tampered IP stack could be installed on the device, directing all of your traffic to a server in Eastern Europe, and you’d be none the wiser.
  • Unauthorized changes: Something a little less malicious but no less problematic is an unauthorized change. Perhaps it’s the result of an operational failure. Or a bad patch. Or any of a million other reasons a key file gets changed, causing a potential exposure.
  • PCI compliance: Finally, Requirement 11.5 in our favorite prescriptive regulatory mandate, the PCI-DSS, requires file integrity monitoring to alert personnel to unauthorized modification of critical system files, configuration files, or content files. So there you have it: you can justify the expenditure using the compliance hammer. But remember, security is about more than checking the compliance box, so we’ll focus on getting adequate value from the investment as well.

FIM Process

Once again we’ll start the discussion with a process that can be used to implement file integrity monitoring. Remember, implementing technology controls for endpoint security management without the proper underlying processes doesn’t end well.

  • Set policy: You start by defining your policy: which files on which devices need to be monitored. Obviously there are tens of millions of files in your environment, so you’ll need to be pretty savvy to restrict monitoring to only the most sensitive files on the most sensitive devices.
  • Baseline files: Then you need to ensure the files you assess start from a known good state. This may involve evaluating version, creation and/or modification date, or any other file attribute to give you assurance that the file is legitimate (a minimal hash-and-compare sketch follows this list). Like any other approach based on detecting anomalies, if you define something as normal that turns out to be malicious, things go downhill pretty quickly. The good news is that the FIM vendors have databases of billions of files to track these attributes, and that intelligence is a key part of their product.
  • Monitor: Next you actually monitor usage of the files. This is easier said than done, because you may have hundreds of file changes on a normal day. So knowing which changes are good and which are bad determines success or failure for this function. In other words, you need a way to minimize false positives, which flag legitimate changes and waste everyone’s time.
  • Alert: When an unauthorized change is detected, you’ll want to let someone know about it.
  • Report: As we pointed out above, FIM is required for PCI compliance, and you’ll likely use that budget to buy it. So you’ll need to be able to substantiate the effective use of the control for your assessor. That means generating reports. Good times.
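
To make the baseline, monitor, and alert steps concrete, here is a minimal sketch in Python of the core idea: hash the monitored files into a baseline, then re-scan and flag anything that deviates. The file list, baseline path, and print-based alerting are illustrative assumptions, not any vendor’s implementation; a real FIM product layers vendor intelligence, tamper-proofing, and scale on top of this.

```python
import hashlib
import json
import os

BASELINE_PATH = "baseline.json"              # hypothetical baseline store
MONITORED = ["/etc/hosts", "/etc/passwd"]    # example policy: files to watch

def sha256_of(path):
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline():
    """Record a known-good hash for every monitored file."""
    snapshot = {p: sha256_of(p) for p in MONITORED if os.path.exists(p)}
    with open(BASELINE_PATH, "w") as f:
        json.dump(snapshot, f, indent=2)

def monitor():
    """Re-hash each file and alert on any deviation from the baseline."""
    with open(BASELINE_PATH) as f:
        snapshot = json.load(f)
    for path, known_hash in snapshot.items():
        if not os.path.exists(path):
            print(f"ALERT: {path} was deleted")
        elif sha256_of(path) != known_hash:
            print(f"ALERT: {path} changed since baseline")

if __name__ == "__main__":
    baseline()   # run once from a known-good state
    monitor()    # then run periodically (batch) or on change events
```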

Technology Considerations

Now that you have the process in place, you’ll need some technology to implement FIM. Here are some things to think about when looking at these tools:

  • Device/Application support: Obviously the first order of business is to make sure the vendor supports the devices and applications you need to protect. We’ll talk about this more when covering research and intelligence below.
  • Policy Granularity: You’ll also want to make sure your product can support different policies per device. For example, a POS device in a store (within PCI scope) needs to have certain files under control, while an information-only kiosk on a segmented Internet-only network in your lobby may not need the same level of oversight. You’ll also want to be able to set up those policies based on groups of users and/or device types (locking down Windows XP more tightly, as it doesn’t have the same protections as Windows 7).
  • Small footprint agent: To implement FIM, you’ll need to deploy an agent on each protected device. Of course, there are different opinions about what an “agent” is and whether it needs to be persistent, or whether it can be downloaded when you check the file system and then removed (a dissolvable agent). You’ll need sufficient platform support (as discussed above), as well as some kind of tamper-proofing of the agent. You don’t want an attacker to be able to turn off or otherwise compromise the agent’s ability to monitor the files or, even worse, have it return tampered results.
  • Frequency of monitoring: Related to the persistent vs. dissolvable agent decision, you need to determine whether you require continuous monitoring of the files or whether a batch assessment of the file system is acceptable. Before you respond “duh, we want to monitor the files at all times,” remember that in order to truly leverage continuous monitoring you need to be able to respond immediately to an alert. Do you have 24/7 ops staff ready to pounce on every change notification? No? Then perhaps a batch process would work.
  • Research & Intelligence: A big part of successful FIM is knowing what is a good change and what is potentially a bad change. That requires some kind of research and intelligence capability to do the legwork. The last thing you want your expensive and resource-constrained operations folks doing is assembling the list of file changes for a monthly patch cycle. The vendor needs to do that. But it’s more complicated than that, so here are some other thoughts on detecting bad file changes.

    • Change detection algorithm: Is a change detected based on file hash, version, create date, modify date, and/or privileges? Or all of the above? Understanding how the vendor determines that a file has changed allows you to ensure all of your potential threat models are factored in (a simple attribute-comparison sketch appears after this list).
    • Version control: Remember that even a legitimate file may not be the right file to install. Let’s say you are updating a system file, but an old, legitimate version gets installed instead. Is that a big deal? If that version is vulnerable to an attack it could be, so ensuring versions are managed by integrating with patching information is also a must.
    • Risk assessment of change: It’s also helpful if the vendor can assess different kinds of changes for potential risk. Replacing the IP stack is a higher-risk change than updating an infrequently used configuration file. Either could represent inherent badness, but you’ve got to prioritize in some way, shape, or form, and having the vendor take the first cut can be helpful.
  • Forensics: In the event of data loss, you’ll want a forensics capability, so the product should log all file activity. Knowing when different files were accessed, by what programs, and what was done can be pretty helpful for assessing the damage of an attack and pinpointing the chain of events resulting in the data loss.
  • Closed loop reconciliation: As we’ve discussed throughout this post, thousands of file adds, deletes, and changes happen every day, with most changes being authorized and legitimate. That said, from both compliance and operations perspectives, you want to be able to reconcile the changes you expect with the changes that actually happened (a set-difference sketch below illustrates the idea). During a patch cycle, a bunch of changes should have happened. Did all of them complete successfully? We mentioned the verification step as part of the patch management process, and FIM technology can provide that reconciliation as well.
  • Integration w/ Platform: As we described, there is no reason to reinvent the wheel, especially for cross-functional capabilities like discovery, reporting, agentry and agent deployment/updating/maintenance, so leveraging the endpoint security management platform can streamline the implementation and provide operational leverage.
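
To illustrate the change detection point above, here is a sketch in Python of attribute-based comparison: it collects the kinds of attributes a FIM tool might check (hash, size, modification time, permission bits) and reports which ones differ. The attribute set is an assumption for illustration; creation dates and file version metadata are platform-specific and omitted here.

```python
import hashlib
import os
import stat

def file_attributes(path):
    """Collect attributes a FIM tool might compare for one file."""
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "hash": digest,                     # content change
        "size": st.st_size,                 # quick content check
        "mtime": st.st_mtime,               # modify date
        "mode": stat.filemode(st.st_mode),  # privilege/permission change
    }

def detect_changes(baseline_attrs, current_attrs):
    """Return the attributes that differ, with old and new values.
    A mode change with an unchanged hash, for example, points to a
    permissions problem rather than new content."""
    return {key: (baseline_attrs[key], current_attrs[key])
            for key in baseline_attrs
            if baseline_attrs[key] != current_attrs[key]}
```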

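The closed loop reconciliation item also lends itself to a short illustration: treat the expected changes from a patch manifest and the changes FIM actually observed as sets, and diff them. The file paths and input format below are hypothetical.

```python
def reconcile(expected, observed):
    """Compare the changes a patch cycle should have made (expected)
    against the changes FIM saw (both are sets of file paths)."""
    completed = expected & observed      # patched as planned
    failed = expected - observed         # patch didn't land; re-run it
    unexpected = observed - expected     # unplanned change; investigate
    return completed, failed, unexpected

# Example: two files were expected to change during a patch cycle
expected = {"/usr/lib/libssl.so", "/usr/bin/openssl"}
observed = {"/usr/lib/libssl.so", "/etc/shadow"}
done, missing, suspect = reconcile(expected, observed)
print("completed:", done)      # {'/usr/lib/libssl.so'}
print("failed:", missing)      # {'/usr/bin/openssl'}
print("unexpected:", suspect)  # {'/etc/shadow'} -- alert on this one
```
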
So with this description of file integrity monitoring, we complete the controls discussion in this series. Between the periodic controls and the ongoing controls, you’ve been presented with a fairly comprehensive means of managing the security of the endpoints in your environment. We’ve alluded (rather extensively) to the concept of the Endpoint Security Management platform, and in the next post we’ll flesh out exactly what that means, as well as mention some buying considerations when looking at vendors.

– Mike Rothman
