The case for centralised logging and event log management
Collecting too much log data overwhelms systems and staff. Centralised event log management allows filtering for the most significant security information.
13 June 2018
More companies are using their security logs to detect malicious incidents. Many of them are collecting too much log data—often billions of events. They brag about the size of their log storage drive arrays. Are they measured in terabytes or petabytes?
A mountain of data does not give them as much useful information as they would like. Sometimes less is more. If you get so many alerts that you cannot adequately respond to them all, then something needs to change. A centralised log management system can help. This quick intro to logging, along with expert recommendations, helps new system admins understand what they should be doing to get the most out of security logs.
Security event logging basics
One of the best guides to security logging is the US National Institute of Standards and Technology (NIST) Special Publication 800-92, Guide to Computer Security Log Management. Although it is a bit dated, written in 2006, it still covers the basics of security log management well.
It places security log generators into three categories: operating system, application, or security-specific software (e.g., firewalls or intrusion detection systems [IDS]). Most computers have dozens of logs. Microsoft Windows computers come with three main binary event logs: system, application, and security. Unfortunately, the names can be misleading, and evidence of security events is often stored across all three logs.
Since Windows Vista, those three main log files are broken down into nearly a hundred different views for more focused digestion. At least a dozen are text or binary logs, even without any other application installed. A Unix-style system usually has a centralised syslog file along with other text-based log files for all the various applications and daemons. Many administrators redirect the individual files to the main syslog file to centralise things.
Smart phones and other computing devices usually have log files, too. Most are similar to syslog, but they can’t easily be viewed or accessed. Most require putting the device into a special debug logging mode or downloading additional software to view or configure the log files. One exception is iPhone crash logs, which are synced by iTunes to the host PC upon every connection.
Security log files on mobile devices are a lot harder to access and a lot less useful once you do. You might be able to get basic information on a security event, but with far less detail than you can get from the average personal computer. Many admins install a third-party application to collect more thorough security log data from a mobile device.
Windows enables most log files by default, although you might need to define what level of logging you want. Turn on the most detail possible only when there is a specific need, or while trying to track an active, known security event. Otherwise, the number of event messages can quickly overwhelm a system. Many systems have crashed because well-meaning system administrators turned on the most detailed logging to help diagnose something and then forgot to turn it off.
Unix-style systems usually have syslog enabled by default, and you can configure the detail level. Many other application and security log files are disabled by default, but you can enable each individually with a single command line.
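In practice, enabling a log source often amounts to picking a destination and a detail level. A minimal Python sketch of that idea, assuming a hypothetical application called "myapp" and the usual Linux syslog socket path (the handler falls back to stderr on systems without /dev/log):

```python
# Minimal sketch: route an application's events to the local syslog
# daemon, with the detail level set in one place. "myapp" and the
# /dev/log socket path are illustrative assumptions.
import logging
import logging.handlers
import os

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)        # raise to DEBUG only while investigating

if os.path.exists("/dev/log"):       # the usual local syslog socket on Linux
    handler = logging.handlers.SysLogHandler(address="/dev/log")
else:                                # fallback for systems without it
    handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("service started")       # recorded at INFO level
logger.debug("verbose detail")       # suppressed until the level is raised
```

Keeping the level in one place makes it easy to raise detail during an investigation, and just as easy to lower it again afterwards.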
Each network device and security application will generate its own logs. Altogether, a system administrator will have hundreds of log files to choose from. A typical end-user system can generate thousands to tens of thousands of events per day. A server can easily generate hundreds of thousands to millions of events a day.
Tip: Unless you know of an ongoing security event, the default log settings will provide more than enough information for most security needs. Start with the defaults and add log files and detail only as needed. If you turn on detailed logging, make a reminder to turn off the extra detail later so you don’t forget.
Centralised security event logging
Every administrator wants to collect the most common default log files from each computer in a centralised location. The value of aggregating all the data into a centralised database for alerting and analysis cannot be overstated. The question is: How do you aggregate all that data, and how much of it?
Most systems have the ability to send their main log files to a centralised location. You’ll almost always get far more value and versatility by using a third-party agent built specifically for collecting and sending event log information. Many admins use free utilities to do it, but most of the commercial options are better.
You will want a centralised log management system to collect, store, and analyse all the data. There are hundreds of options and vendors to choose among. You have software-only and appliance options, with the latter more easily providing better performance in most instances. Pick an option that allows you to efficiently and securely collect event data from most of your sources. You don’t want to send event log data over the wire in clear text. The event log management software must aggregate the data, normalise it (convert it to a common format), alert on anomalous events, and allow you to run queries.
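The normalisation step can be sketched in a few lines of Python. This is a hedged illustration, not a real product’s schema: the regular expression handles only classic BSD-style syslog lines, and the field names are assumptions.

```python
# Hypothetical normalisation sketch: parse a classic BSD-style syslog
# line into a common dictionary format a central store could index.
import re

SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]{8})\s"   # e.g. "Jun 13 09:14:02"
    r"(?P<host>\S+)\s"                            # originating host
    r"(?P<app>[\w\-/]+)(?:\[(?P<pid>\d+)\])?:\s"  # program name and optional pid
    r"(?P<message>.*)$"
)

def normalise(line: str) -> dict:
    """Convert one raw syslog line into a common event record."""
    m = SYSLOG_RE.match(line)
    if not m:
        return {"raw": line}   # keep unparseable lines rather than drop them
    return m.groupdict()

event = normalise("Jun 13 09:14:02 web01 sshd[4242]: Failed password for root")
```

Once every source is reduced to the same field names, queries and alerts can be written once instead of per log format, which is exactly what the commercial tools sell.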
Before picking any solution, try before you buy. You want to be able to type in any query and get an answer in a reasonable amount of time. If you wait 10 to 15 seconds for an answer, you will use event log analysis less and it begins to lose its value.
Most companies collect all data from the main default log files, and it can easily be overwhelming, from both a network throughput and a storage standpoint. I mean this literally, not figuratively. Most seasoned administrators have a story of how aggregating their event logs crushed their network’s performance until they optimised their event logging.
Whatever you do, don’t just collect information from servers. Today, most compromises start from end-user workstations. If you don’t collect the log files from client computers, you’ll be missing most of the valuable data.
Tip: Generate as much detail as you need on the local system, but filter and only send the most valuable and critical events to your centralised log management system. Send whatever is needed to generate alerts for all your most important security events but leave the rest on the local system. You can always retrieve the added detail when needed. Using this method, you can get the data you need to get alerts and start forensic investigations, but without overwhelming your network and storage devices. There is always the chance that a bad guy can delete the local event log data before you can retrieve it, but in practice I’ve almost never seen that happen.
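The split the tip describes (full detail generated locally, only important events forwarded) can be approximated with standard Python logging, using levels as the filter. The file path and collector address are placeholders, and plain UDP syslog is itself clear text, so a real deployment would use a TLS-capable agent for the forwarding leg.

```python
# Hedged sketch: generate full detail locally, forward only WARNING
# and above to a central collector. Paths and addresses are
# placeholders; replace "localhost" with your collector's address
# and use a TLS-capable transport in production.
import logging
import logging.handlers

logger = logging.getLogger("audit")
logger.setLevel(logging.DEBUG)                    # full detail generated

local = logging.FileHandler("local-detail.log")   # everything stays on this host
local.setLevel(logging.DEBUG)

central = logging.handlers.SysLogHandler(address=("localhost", 514))
central.setLevel(logging.WARNING)                 # only important events leave the host

logger.addHandler(local)
logger.addHandler(central)
```

Every record passes through both handlers, but each handler’s own level decides what it keeps, so the noisy detail never crosses the wire.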
Getting useful log information
The hardest part of any event log management system is getting enough information to be able to detect all needed security events, while not overwhelming your system with too much noise. Even in the most efficient management systems, most of the collected event log data will be noise. It’s just the way event log management works.
Tip: Make sure the date and time are set correctly on all your systems. When trying to correlate events, you must have accurate time.
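To see why, consider two hosts in different timezones that logged the same moment. The records only line up once both timestamps are normalised to UTC; the times below are made up for illustration.

```python
# Illustrative sketch: the same instant logged by a host in UTC+2 and
# a host in UTC-5 only correlates after normalising both to UTC.
from datetime import datetime, timezone, timedelta

def to_utc(local_time: datetime) -> datetime:
    """Normalise a timezone-aware timestamp to UTC for correlation."""
    return local_time.astimezone(timezone.utc)

event_a = datetime(2018, 6, 13, 11, 0, tzinfo=timezone(timedelta(hours=2)))
event_b = datetime(2018, 6, 13, 4, 0, tzinfo=timezone(timedelta(hours=-5)))

assert to_utc(event_a) == to_utc(event_b)   # both are 09:00 UTC
```

If either host’s clock drifts, or its records carry no timezone at all, this comparison silently fails, which is why NTP and consistent timestamping come before any correlation work.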
Where event log management systems show their real value is in how well they filter the unneeded noise and alert on useful, actionable events. Critical events should always lead to an immediate alert and a responsive investigation. An event record should be considered actionable when it indicates a strong likelihood of malicious activity, excessive (sustained) system activity, an unexpected (sustained) drop in system activity, or mission-critical application performance issues or failure.
A good event log management system comes with predefined common alerts (e.g., excessive account lockouts) and allows administrators to create their own (deviations from expected baselines that exceed a certain threshold). It might take multiple correlated events from multiple systems to generate an alert; depending on an event’s rarity, a single event (e.g., a log-on to a fake “trap” account) can be enough. A good system comes with the events most administrators want to alert on and with enough filters to slough away the junk.
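A threshold alert of the kind described, e.g. excessive account lockouts, can be sketched with a sliding window. The limit and window values below are illustrative assumptions, not recommendations.

```python
# Hypothetical threshold alert: flag an account once it accumulates
# more than `limit` lockout events inside a sliding time window.
from collections import defaultdict, deque

class LockoutAlert:
    def __init__(self, limit=5, window_seconds=300):
        self.limit = limit
        self.window = window_seconds
        self.events = defaultdict(deque)          # account -> event timestamps

    def record(self, account: str, ts: float) -> bool:
        """Record one lockout; return True if the threshold is crossed."""
        q = self.events[account]
        q.append(ts)
        while q and ts - q[0] > self.window:      # drop events outside the window
            q.popleft()
        return len(q) > self.limit

# Seven lockouts for one account, ten seconds apart: the alert fires
# from the sixth event onward, since six now sit inside the window.
alerts = LockoutAlert()
triggered = [alerts.record("jsmith", t) for t in range(0, 70, 10)]
```

The same shape (count events per key, expire old ones, compare against a baseline) covers most deviation-from-baseline alerts, with only the key, limit, and window changing.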
Tip: Start by defining all the events related to your company’s most recent and most likely future successful attacks, and then figure out which event messages and alerts you need to define to detect and stop those attacks. You have many, many security events to worry about, but the ones most important to monitor and alert on are those most likely to be used against your company in the future.
Good event logging is about pulling out the necessary critical events and alerts from an otherwise overwhelming amount of information. The problem for most admins is not getting enough information, but in getting the truly useful information out of an overwhelming tsunami of events. A good event log management system helps you do that.
IDG News Service