bother with logs

Posted on September 19, 2011


At most places I have come across so far, the sea of log files produced by a usually talkative IT infrastructure is still tapped in only a rudimentary fashion (if at all) for information and intelligence that could feed InfoSec analysis and management.

The problem

The security management team is desperately looking for means to dashboard the operational security posture and to somehow measure the impact (improvements, hopefully) of security projects. A copy of ISO/IEC 27004 or NIST SP 800-55 may also have been lying around the team for a while, with the team now meditating on how to actually implement all those nice (but still rather generic) recommendations and advice given by the literature …

It is right there, in front of you – use it

Now, for a start, how about using the vast event information stored in your infrastructure’s log files? Literally every (trans)action and event made by users, applications, systems and devices is nowadays logged somehow and somewhere. The trick, though, is to get hold of this information and to make sense of it: to filter it through (automated) information intelligence that separates the background noise from the interesting stuff.
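
To give a feel for that filtering step, here is a minimal sketch in Python. It assumes plain syslog-style text files, and the file path and the patterns for “interesting” events are hypothetical examples only, not any fixed standard:

    #!/usr/bin/env python
    # Minimal sketch: scan a syslog-style file and keep only the lines that
    # match patterns for "interesting" security events. The file path and
    # the patterns below are hypothetical examples; adapt them to your logs.
    import re

    INTERESTING = [
        re.compile(r"Failed password for (invalid user )?\S+"),
        re.compile(r"authentication failure"),
        re.compile(r"account locked", re.IGNORECASE),
    ]

    def interesting_lines(path):
        with open(path) as fh:
            for line in fh:
                if any(p.search(line) for p in INTERESTING):
                    yield line.rstrip("\n")

    if __name__ == "__main__":
        for hit in interesting_lines("/var/log/auth.log"):
            print(hit)

Real tools do this at scale and with far richer correlation, of course, but the principle is exactly this: a stream of raw events in, a much smaller stream of noteworthy events out.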

There are some tools out there that can do this kind of log information intelligence. For various reasons we decided to use Splunk, extended with the add-on Splunk ESS (Enterprise Security Suite), to do the job, or more precisely, to help us with the job. Sure, it takes some planning and deployment effort before web servers, domain controllers, firewalls, the RAS system, the anti-virus system, email servers, database servers and the ERP are connected to the central log intelligence system (some of them via data forwarders, others via a pull mechanism). It also takes some learning effort to use the mighty capabilities of the system to their full extent, such as building customized dashboards tailored to visualize specific measurements of, for example, security project progress or a project’s impact on the operational security posture. The fruits of those efforts are remarkable though: automated(!) security event monitoring and detection, incident review, reporting, dashboarding, and auditing.
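
To illustrate what “connecting” a source means at the simplest level, here is a sketch of the push principle: devices send their syslog messages to a central sink, which appends them to a spool file that the indexer then picks up. This is emphatically not Splunk’s own forwarder, just the underlying idea; the port number and spool path are arbitrary choices for this example:

    #!/usr/bin/env python
    # Sketch of the push/forward principle: a tiny UDP syslog sink that
    # appends incoming messages to a spool file, which a log intelligence
    # system (e.g. a file monitor input) can then index.
    # Port 5514 and the spool path are arbitrary choices for this example.
    import socket

    HOST, PORT = "0.0.0.0", 5514
    SPOOL = "/var/spool/logsink/collected.log"

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))

    with open(SPOOL, "a") as out:
        while True:
            data, addr = sock.recvfrom(65535)
            # Tag each message with the sender's IP so the source stays traceable.
            out.write("%s %s\n" % (addr[0], data.decode("utf-8", "replace")))
            out.flush()

The pull variant is simply the mirror image: the collector logs into or queries the source system on a schedule and fetches whatever has accumulated since the last run.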

Lessons learned:

A problem we faced was self-inflicted and could have been avoided relatively easily, simply by (better) counting.
We massively underestimated the sheer volume of log data. The firewall system alone contributed up to 5 GB of data per day; after some streamlining and consolidation of the firewall logging policy, this could meanwhile be reduced to between 2 and 3 GB per day. This means: where the license model of the log server is volume-based (as it is with Splunk), it is best, when preparing the business case and the funding request, to have as exact an idea as possible of the log volume your systems create every single day (a rough sketch of that counting follows below).
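
Since that counting exercise is easy to automate, here is a rough sketch that sums up how many bytes each source wrote today, which is exactly the number a volume-based license model asks about. The directory layout (one directory per source, files modified today counted as today’s volume) is a made-up example; adapt it to your own log rotation scheme:

    #!/usr/bin/env python
    # Rough sketch for sizing a volume-based license: sum the bytes of
    # today's log files per source. The directories below are made-up
    # examples; "today" is computed against midnight UTC, which is good
    # enough for a rough estimate.
    import os
    import time

    SOURCES = {
        "firewall": "/var/log/firewall",
        "domain_controllers": "/var/log/dc",
        "mail": "/var/log/mail",
    }

    def bytes_written_today(directory):
        midnight = time.time() - (time.time() % 86400)  # midnight UTC
        total = 0
        for name in os.listdir(directory):
            path = os.path.join(directory, name)
            if os.path.isfile(path) and os.path.getmtime(path) >= midnight:
                total += os.path.getsize(path)
        return total

    for source, directory in sorted(SOURCES.items()):
        gb = bytes_written_today(directory) / (1024.0 ** 3)
        print("%-20s %.2f GB today" % (source, gb))

Run something like this for a couple of representative weeks before signing the license, and the business case rests on measured numbers instead of guesses.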

Posted in: Observation