Suricata RPM for EL and CentOS 7

I’ve taken the Suricata package as found in Fedora and rebuilt it for CentOS 7. This should be similar to how the package would exist in EPEL (and hopefully it makes its way there).

To get the package with yum, first install the yum repository package (note: you must already have EPEL installed):

rpm -Uvh http://codemonkey.net/files/rpm/suricata/el7/suricata-release-el-7-1.el7.noarch.rpm

then install Suricata:

yum install suricata

To just get the package, head over to http://codemonkey.net/files/rpm/suricata/el7/.

Suricata + ELK in Docker

While getting familiar with the very popular Docker Linux container tool, I went against best practice and put Suricata, Logstash, Elastic Search and Kibana into a container that is looking promising for demonstration purposes. If you already run this stack on one machine, it might be suitable for real use as well.

What you get is a very simple-to-run application container that abstracts all the tools above into a single application.

Assuming you have Docker already installed, you can get a feel for Suricata + ELK with a couple commands:

git clone https://github.com/jasonish/docker-suricata-elk.git
cd docker-suricata-elk
./launcher start -i eth0

The first time ./launcher start is run, Docker will pull down the container file system layers so it may take a while. Subsequent starts will be much quicker.

Once it looks like it is up and running, point your browser at http://localhost:7777.

A few notes:

  • Docker containers are more or less stateless. Changes to the filesystem inside the container are not persisted over a restart. Instead, any data that needs to be persisted will end up in the ./data directory where you started the launcher.
  • This container uses host networking instead of the usual isolated network you find with Docker containers. This is to give the container access to your physical interfaces. This alone has me questioning Docker for network monitoring deployments.
  • As host networking is used, the container will probably fail if you have existing applications bound to port 7777 or 9200. Making these ports configurable is on the to-do list.
  • The container's log directory is available from the host system. Take a look in ./data/log.
  • Suricata is built from git master.
  • ./launcher enter will give you a shell inside the running container. This is useful to take a look around the runtime environment. Just remember that any changes you make will not be persistent.
  • ./launcher bash will start a new container with the bash shell and nothing running. This is mostly useful for development.
  • If running in a VM, allocate 2GB of memory and/or create a swap file. These are not lightweight applications.

Project links:

  • Suricata + ELK Docker Container: https://github.com/jasonish/docker-suricata-elk

EveBox – An “eve” Event Viewer for Suricata/ElasticSearch

Kibana is really good for getting a high-level overview of your Suricata events, but I didn't find it very useful for reviewing individual events, and I'm not sure Kibana is really built around that idea. So I created EveBox, a web-based event viewer for Suricata events being logged to Elastic Search in “eve” format, with a focus on keyboard navigation:

[Screenshot: EveBox]

Yes, forgive the “yet another Bootstrap app” look, but I'm not a designer, nor do I pretend to be.

If you log thousands, or even hundreds, of events per second, then EveBox is probably not for you; the “inbox” will be unmanageable. However, if you run a highly tuned ruleset, EveBox gives you full keyboard-driven review of those events.

It's still a little crude in some areas. For example, if you open an event to get further details you are just going to see the JSON as returned by Elastic Search; personally I like this, but I think something a little easier on the eyes is needed. It will also be more useful once eve logs the alert packet, but for now it pivots to Dumpy (a rather basic daemonlogger spool directory frontend) to get a packet capture of the data that triggered the alert.

I've also learned that while Elastic Search is great (well, more like awesome) for searching, it's not the best tool for mass updates of records, such as “tagging” every entry that matches a query. For such cases it might be useful to introduce a backend at some point, so the HTML5 application can hand off the grunt work to a server that handles the batch tasks. PostgreSQL 9.4, with its new JSON(b) column, could also prove to be a very capable data store for Suricata eve events (Cassandra might be another option as well).
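
To make that concrete, here is a rough sketch of what such a backend batch job could look like, using the elasticsearch Python client. The index pattern, the example query and the "tags" field are illustrative assumptions, not anything EveBox actually does today:

from elasticsearch import Elasticsearch
from elasticsearch import helpers

es = Elasticsearch(["http://localhost:9200"])

def tag_matching_events(query_string, tag):
    # Scan for every event matching the query, then apply the tag with a
    # single bulk update rather than one request per document.
    query = {"query": {"query_string": {"query": query_string}}}
    actions = []
    for hit in helpers.scan(es, index="logstash-*", query=query):
        actions.append({
            "_op_type": "update",
            "_index": hit["_index"],
            "_id": hit["_id"],
            "doc": {"tags": [tag]},
        })
    helpers.bulk(es, actions)

tag_matching_events("alert.signature_id:2010935", "reviewed")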

If you would like to try it, go get the latest release and drop it on a web server. For now it's just straight-up HTML like Kibana, so it's basically a zero-effort install. If that sounds too hard, head over to http://codemonkey.net/evebox, click on settings and enter the URL of your Elastic Search server. The “inbox” won't be there until you configure Logstash accordingly, but you can still review events under “All”. (Note: my server will not connect to your Elastic Search; the settings only tell the HTML5 application where to connect to Elastic Search.)

If you are not yet using Suricata, Snort can easily be used instead. For more info on sending Snort events to Elastic Search in “eve” format, see my post Snort, Logstash, Elastic Search and Kibana…

EveBox on GitHub

New Dumpy Release – Multiple Spools and Single Binary Install

I've made some changes to Dumpy, my simple-to-install-and-use PCAP spool web frontend, including:

  • A rewrite in Go, mostly for entertainment purposes, but the really easy-to-use concurrency and single-binary installation make Go a good choice for small applications like this.
  • Multiple spool directory support.
  • A decoder for translating a Suricata JSON style event to a pcap filter (in addition to the existing “fast” style event decoding); a rough sketch of the idea follows.
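
The decoder itself is written in Go like the rest of Dumpy, but the idea is simple enough to sketch in a few lines of Python: take the standard eve fields (src_ip, dest_ip, proto and the ports) and build a BPF expression from them. This is only an illustration of the approach, not Dumpy's actual code.

def event_to_pcap_filter(event):
    # Build a BPF filter that should match the flow described by an
    # eve-style alert record.
    parts = []
    proto = event.get("proto", "").lower()
    if proto in ("tcp", "udp", "icmp"):
        parts.append(proto)
    parts.append("host %s" % event["src_ip"])
    parts.append("host %s" % event["dest_ip"])
    if "src_port" in event and "dest_port" in event:
        parts.append("port %d" % event["src_port"])
        parts.append("port %d" % event["dest_port"])
    return " and ".join(parts)

# A hand-built event for illustration; real input would be a parsed
# line from eve.json.
event = {"proto": "TCP", "src_ip": "10.1.1.1", "src_port": 34567,
         "dest_ip": "10.1.1.2", "dest_port": 80}
print(event_to_pcap_filter(event))
# -> tcp and host 10.1.1.1 and host 10.1.1.2 and port 34567 and port 80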

Check it out over at https://github.com/jasonish/dumpy.

[Screenshot: Dumpy pcap filter view]

Snort, Logstash, Elastic Search and Kibana…

After having fun with Suricata's new eve/json logging format and the Logstash/Elastic Search/Kibana combination (see this and this), I wanted to get my Snort events into Elastic Search as well. Using my idstools Python library, I wrote u2json, a tool that will process a unified2 spool directory (much like barnyard) and convert the events to Suricata-style JSON.

Usage is relatively simple; assuming Snort is logging to /var/log/snort, the following command line should do:

idstools-u2json -c /etc/snort/snort.conf \
    --directory /var/log/snort \
    --prefix unified2.log \
    --follow --bookmark \
    --output /var/log/snort/alerts.json

As the output is in the same format as Suricata's, you can refer to this guide for the Logstash setup.

One extra step I did was use Logstash to add an “engine” field to each entry.  This can be accomplished by adapting the following Logstash configuration:

input {
  file {
    path => ["/var/log/suricata/eve.json"]
    codec => json
    type => "suricata-json"
  }
  file {
    path => ["/var/log/snort/alerts.json"]
    codec => json
    type => "snort-json"
  }
}

filter {
  if [type] == "suricata-json" {
    mutate {
      add_field => {
        "engine" => "suricata"
      }
    }
  }

  if [type] == "snort-json" {
    mutate {
      add_field => {
        "engine" => "snort"
      }
    }
  }
}

Check out the documentation for more information.

Easy Unified2 File Reading in Python

I recently consolidated my Python code bits for dealing with Snort and Suricata unified2 log files into a project called idstools. While I’ll be adding more than just unified2 reading support, that is about it for now.

While it can be installed with pip (pip install idstools), if you just want to play around with it, I suggest cloning the repo (git clone https://github.com/jasonish/idstools.py). You can then use the REPL or write test scripts from within the idstools.py directory without having to install the library (yeah, basic stuff for Python developers).

idstools does come with a few example programs that demonstrate unified2 file reading, namely u2fast.py, u2tail.py and u2spewfoo.py (a simple clone of the Snort-provided u2spewfoo).

Basic Unified2 File Reading

from idstools import unified2

reader = unified2.FileEventReader("tests/merged.log")
for event in reader:
    print("Event:n%s" % str(event))

These few lines of code iterate through each record in the specified unified2 log file, aggregate the records into events and return each event as a dict.

If straight-up record reading is more what you are after, then check out unified2.FileRecordReader, or the lower-level unified2.read_record function.

Each event is represented as a dict containing the fields of a unified2 event record, with the associated packets represented as a list in event["packets"] and extra data records represented as a list in event["extra-data"].
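
As a minimal sketch of what working with those dicts looks like (reusing FileEventReader from above; the field names are the ones described in this post):

from idstools import unified2

reader = unified2.FileEventReader("tests/merged.log")
for event in reader:
    # Print the event ID along with how many packets and extra data
    # records were attached to it.
    print("Event %d:%d with %d packet(s) and %d extra-data record(s)." % (
        event["generator-id"], event["signature-id"],
        len(event["packets"]), len(event["extra-data"])))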

Resolving Event Message and Classification Names

To make event reading just a little more useful, code to map signature and classification IDs to descriptions is provided:

from idstools import maps

# Create and populate the signature message map.
sigmap = maps.MsgMap()
sigmap.load_genmsg_file("gen-msg.map")
sigmap.load_sidmsg_file("sid-msg.map")

# Get the description for 1:498.
print("Message for 1:498: %s" % (sigmap.get(1, 498).msg))

# Create and populate the classification map.
classmap = maps.ClassificationMap()
classmap.load_classification_file("classification.config")
print("The description for classification id 9 is %s, with priority %d." % (
        classmap.get(9).description, classmap.get(9).priority))

The example program u2fast.py is a complete example of reading events from one or more files, resolving event descriptions and classification names, and printing each event in a “fast”-like style.
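
As a rough sketch of how the reader and the message map fit together (u2fast.py does this properly, including classifications and a real “fast” output format):

from idstools import maps
from idstools import unified2

# Build the message map as shown above.
sigmap = maps.MsgMap()
sigmap.load_genmsg_file("gen-msg.map")
sigmap.load_sidmsg_file("sid-msg.map")

reader = unified2.FileEventReader("tests/merged.log")
for event in reader:
    # Resolve the event description from the generator and signature IDs.
    msg = sigmap.get(event["generator-id"], event["signature-id"])
    print("%d:%d %s" % (
        event["generator-id"], event["signature-id"],
        msg.msg if msg else "<unknown>"))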

Spool Reading

idstools also contains a spool reader for processing a unified2 spool directory as commonly used by Snort and Suricata. It supports bookmarking, deleting files, and open and close hooks which can be used to implement custom archiving (a sketch of an archiving hook follows the example below).

from idstools import spool

def my_open_hook(reader, filename):
    print("File %s has been opened." % (filename))

def my_close_hook(reader, filename):
    print("File %s has been closed." % (filename))

reader = spool.Unified2EventSpoolReader(
    "/var/log/snort", "merged.log", delete_on_close=False,
    bookmark=True,
    open_hook=my_open_hook,
    close_hook=my_close_hook)

for event in reader:
    print("Read event with generator-id %d, signature-id %d." % (
            event["generator-id"], event["signature-id"]))

To see a more complete directory spool process, check out the u2tail.py example program. To learn more, check out idstools over at GitHub, PyPI, or the work-in-progress documentation on Read the Docs.
