Suricata RPM for EL and CentOS 7

[2016-07-25: This information is no longer valid, head over to for up to date links.]

I’ve taken the Suricata package as found in Fedora and rebuilt it for CentOS 7. This should be similar to how the package would exist in EPEL (and hopefully it makes its way there).

To get the package with yum, first install the yum repository package (note: you must already have EPEL installed):

rpm -Uvh

then install Suricata:

yum install suricata

To just get the package, head over to

Suricata + ELK in Docker

While getting familiar with the very popular Docker Linux container tool, I went against best practice and put Suricata, Logstash, Elastic Search and Kibana into a container that is looking promising for demonstration purposes. If you already run this stack on one machine, it might be suitable for real use as well.

What you get is a very simple to run application container that abstracts all the tools above into a single application.

Assuming you have Docker already installed, you can get a feel for Suricata + ELK with a couple commands:

git clone
cd docker-suricata-elk
./launcher start -i eth0

The first time ./launcher start is run, Docker will pull down the container file system layers so it may take a while. Subsequent starts will be much quicker.

Once it looks like it is up and running, point your browser at http://localhost:7777.

A few notes:

  • Docker containers are more or less stateless. Changes to the filesystem inside the container are not persisted over a restart. Instead any data that needs to be persisted will end up in the ./data directory where you started the launcher.
  • This container uses host networking instead of the usual isolated network you find with Docker containers. This is to give the container access to your physical interfaces. This alone has me questioning Docker for network monitoring deployments.
  • As host networking is used, the container will probably fail if you have existing applications bound to port 7777 or 9200. Making these ports configurable is on the todo.
  • The container's log directory is available from the host system. Take a look in ./data/log.
  • Suricata is built from git master.
  • ./launcher enter will give you a shell inside the running container. This is useful to take a look around the runtime environment. Just remember that any changes you make will not be persistent.
  • ./launcher bash will start a new container with the bash shell and nothing running. This is mostly useful for development.
  • If running a VM, allocate 2GB of memory and/or create a swap file. These are not lightweight applications.

Suricata + ELK Docker Container

Project links:

EveBox – An “eve” Event Viewer for Suricata/ElasticSearch

Kibana is really good for getting a high level overview of your Suricata events, but I didn't find it very useful for reviewing individual events, and I'm not really sure Kibana is built around that idea. So I created EveBox, a web based event viewer for Suricata events being logged to Elastic Search in "eve" format, with a focus on keyboard navigation:



Yes, forgive the “yet another bootstrap app” looks, but I’m not a designer nor do I pretend to be.

If you log hundreds, or even thousands, of events per second, then EveBox is probably not for you; the "inbox" will be unmanageable.  However, if you run a highly tuned ruleset, EveBox gives you full keyboard-navigation review of those events.

It's still a little crude in some areas. For example, if you open an event to get further details you are just going to see the JSON as returned by Elastic Search; personally I like this, but I think something a little easier on the eyes is needed.  It will also be more useful once eve logs the alert packet, but for now it pivots to Dumpy (a rather basic daemonlogger spool directory frontend) to get a packet capture of the data that triggered the alert.

I've also learned that while Elastic Search is great (well, more like awesome) for searching, it's not the best tool for mass updates of records, such as "tagging" every entry that matches a query.  For such cases it might be useful to introduce a backend at some point, so the HTML5 application can hand off some of the grunt work to a backend server that can handle the batch tasks.  PostgreSQL 9.4 with its new JSON(b) column could also prove to be a very capable data store for Suricata eve events (Cassandra might be another option as well).

If you would like to try it, go get the latest release and drop it on a web server.  For now it's just straight up HTML like Kibana, so it's basically a zero effort install.  If that sounds too hard, head over to, click on settings and enter the URL to your Elastic Search server.  The "inbox" won't be there until you configure Logstash accordingly, but you can still review events under "All". (NOTE: my server will not connect to your Elastic Search; the settings only tell the HTML5 application where to connect to Elastic Search.)

If you are not yet using Suricata, Snort can easily be used instead.  For more info on sending Snort events to Elastic Search in “eve” format see my post Snort, Logstash, Elastic Search and Kibana…

EveBox on Github

New Dumpy Release – Multiple Spools and Single Binary Install

I’ve made some changes to my simple to install and use PCAP spool web frontend Dumpy including:

  • A rewrite in Go, mostly for entertainment purposes, but the really easy to use concurrency and single binary installation make Go a good choice for small applications like this.
  • Multiple spool directory support.
  • A decoder for translating a Suricata JSON style event to a pcap filter (in addition to the existing "fast" style event decoding).
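To sketch the idea behind the "fast" style event decoding, here is a rough, hypothetical Python version (Dumpy itself is written in Go, and its actual parsing handles many more cases, such as ICMP events without ports):

```python
import re

# Rough sketch: turn a Snort/Suricata "fast" style event line into a pcap
# filter.  Only the protocol/address/port tail of the line is parsed.
FAST_RE = re.compile(
    r"\{(?P<proto>\w+)\}\s+"
    r"(?P<src>[\d.]+):(?P<sp>\d+)\s+->\s+(?P<dst>[\d.]+):(?P<dp>\d+)")

def fast_event_to_filter(event):
    m = FAST_RE.search(event)
    if not m:
        raise ValueError("unparseable event: %s" % event)
    return "%s and host %s and host %s and port %s and port %s" % (
        m.group("proto").lower(), m.group("src"), m.group("dst"),
        m.group("sp"), m.group("dp"))

# An invented example event in "fast" format.
example = ("03/01-10:30:00.123456 [**] [1:498:8] "
           "GPL ATTACK_RESPONSE id check returned root [**] "
           "[Priority: 2] {TCP} 10.0.0.1:1024 -> 10.0.0.2:80")
print(fast_event_to_filter(example))
```

The resulting filter can be handed straight to tcpdump to carve the matching packets out of a spool file.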

Check it out over here


Snort, Logstash, Elastic Search and Kibana…

After having fun with Suricata’s new eve/json logging format and the Logstash/Elastic Search/Kibana combination (see this and this), I wanted to get my Snort events into Elastic Search as well.  Using my idstools python library I wrote u2json, a tool that will process a unified2 spool directory (much like barnyard) and convert the events to Suricata-style JSON.

Usage is relatively simple, assuming Snort is logging to /var/log/snort, the following command line should do:

  idstools-u2json -c /etc/snort/snort.conf \
      --directory /var/log/snort \
      --prefix unified2.log \
      --follow --bookmark \
      --output /var/log/snort/alerts.json

As the output is in the same format as Suricata's, you can refer to this guide for the Logstash setup.
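For reference, here is a sketch of what a single record in that format looks like (the field names are typical of Suricata's eve output, and the values here are invented):

```python
import json

# An illustrative eve-style alert record; real output carries more fields,
# but these are representative of what ends up in Elastic Search.
record = {
    "timestamp": "2014-03-01T10:30:00.000000",
    "event_type": "alert",
    "src_ip": "192.168.1.10",
    "src_port": 1024,
    "dest_ip": "10.0.0.2",
    "dest_port": 80,
    "proto": "TCP",
    "alert": {
        "gid": 1,
        "signature_id": 498,
        "rev": 8,
        "signature": "GPL ATTACK_RESPONSE id check returned root",
        "severity": 2,
    },
}

# One JSON object per line, which is what the Logstash json codec expects.
line = json.dumps(record)
print(line)
```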

One extra step I did was use Logstash to add an “engine” field to each entry.  This can be accomplished by adapting the following Logstash configuration:

input {
  file {
    path => ["/var/log/suricata/eve.json"]
    codec => json
    type => "suricata-json"
  }
  file {
    path => ["/var/log/snort/alerts.json"]
    codec => json
    type => "snort-json"
  }
}

filter {
  if [type] == "suricata-json" {
    mutate {
      add_field => { "engine" => "suricata" }
    }
  }
  if [type] == "snort-json" {
    mutate {
      add_field => { "engine" => "snort" }
    }
  }
}
Check out the documentation for more information.

Easy Unified2 File Reading in Python

I recently consolidated my Python code bits for dealing with Snort and Suricata unified2 log files into a project called idstools. While I’ll be adding more than just unified2 reading support, that is about it for now.

While it can be installed with pip (pip install idstools), if you just want to play around with it I suggest cloning the repo with git clone. You can then use the REPL or write test scripts from within the directory without having to install the library (yeah, basic stuff for Python developers).

idstools does come with a few example programs that demonstrate unified2 file reading, including a simple clone of the Snort provided u2spewfoo.

Basic Unified2 File Reading

from idstools import unified2

reader = unified2.FileEventReader("tests/merged.log")
for event in reader:
    print("Event:\n%s" % str(event))

These few lines of code iterate through each record in the specified unified2 log file, aggregate the records into events, and return each event as a dict.

If straight up record reading is more what you are after, then check out unified2.FileRecordReader, or the lower level unified2.read_record function.

Each event is represented as a dict containing the fields of a unified2 event record, with the associated packets represented as a list in event[“packets”] and extra data records represented as a list in event[“extra-data”].
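As a sketch, a hand-built dict in the shape described above might look like the following (the key names follow the text; the values are invented, and a real event from FileEventReader carries many more unified2 fields):

```python
# A hand-built example of the event dict shape described above.
event = {
    "generator-id": 1,
    "signature-id": 498,
    "signature-revision": 8,
    "packets": [b"\x00\x01\x02\x03"],  # raw packet data from packet records
    "extra-data": [],                  # extra data records, often empty
}

print("Event %d:%d with %d packet(s) and %d extra data record(s)." % (
    event["generator-id"], event["signature-id"],
    len(event["packets"]), len(event["extra-data"])))
```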

Resolving Event Message and Classification Names

To make event reading just a little more useful, code to map signature and classification IDs to descriptions is provided:

from idstools import maps

# Create and populate the signature message map.
sigmap = maps.MsgMap()

# Get the description for 1:498.
print("Message for 1:498: %s" % (sigmap.get(1, 498).msg))

# Create and populate the classification map.
classmap = maps.ClassificationMap()
print("The description for classification id 9 is %s, with priority %d." % (
        classmap.get(9).description, classmap.get(9).priority))

The example program is a complete example of reading events from one or more files, resolving event descriptions and classification names, and printing the events in a "fast" like style.

Spool Reading

idstools also contains a spool reader for processing a unified2 spool directory as commonly used by Snort and Suricata. It supports bookmarking, deleting files, and open and close hooks which can be used to implement custom archiving:

from idstools import spool

def my_open_hook(reader, filename):
    print("File %s has been opened." % (filename))

def my_close_hook(reader, filename):
    print("File %s has been closed." % (filename))

reader = spool.Unified2EventSpoolReader(
    "/var/log/snort", "merged.log", delete_on_close=False,
    open_hook=my_open_hook, close_hook=my_close_hook)

for event in reader:
    print("Read event with generator-id %d, signature-id %d." % (
            event["generator-id"], event["signature-id"]))
To see a more complete directory spool process, check out the example program.

To learn more, check out idstools over at GitHub, PyPI, or the work-in-progress documentation on Read the Docs.

Dumpy – A Simple PCAP Spool File Frontend

Sometimes the best way to try out a new framework or language is to apply it to a domain you already know very well, even if it does happen to reinvent the wheel.  Tornado and Twitter Bootstrap are two such frameworks I’ve been meaning to play with for a while now. The result is Dumpy, a web front-end to pcap spool files as created by tcpdump, daemonlogger, or netsniff-ng with a very simple configuration and user interface:


Requirements are minimal: Python 2.6 (so it will run on CentOS 6 with little hassle), plus Tornado and py-bcrypt, which are both trivially installed with pip. It provides its own HTTP server with SSL support, and does not require a database.

Usage is also simple.  Simply enter a pcap filter, or paste in a Snort or Suricata event in "fast" format, choose start and end times (or simply offsets), and hit download.

If interested, start a pcap spool (ie: sudo tcpdump -i eth0 -C 1000 -W10 -G 3600 -w /tmp/eth0.log.%Y%m%d.) then check out Dumpy over here

SpringMVC with Embedded Jetty and Thymeleaf

As mentioned in my previous post on using embedded Jetty with SpringMVC, I was going to look at simplifying the application by using Thymeleaf instead of JSPs as a view technology.

Well, it turns out that it is much more straightforward.

First we can remove the JSPC plugin from our pom.xml. Second we can completely remove the web.xml file.

Bootstrapping Jetty is much simpler now. We do not have to hook into the Jetty startup process with a lifecycle listener, instead we can directly create the dispatcher servlet and add it to a ServletContextHandler like we would any other servlet:


@Bean
public ServletHolder dispatcherServlet() {
    AnnotationConfigWebApplicationContext ctx =
            new AnnotationConfigWebApplicationContext();
    DispatcherServlet servlet = new DispatcherServlet(ctx);
    ServletHolder holder = new ServletHolder("dispatcher-servlet", servlet);
    return holder;
}

@Bean
public ServletContextHandler servletContext() throws IOException {
    ServletContextHandler handler = new ServletContextHandler();
    handler.setResourceBase(
            new ClassPathResource("webapp").getURI().toString());
    handler.addServlet(AdminServlet.class, "/metrics/*");
    handler.addServlet(dispatcherServlet(), "/");
    return handler;
}

@Bean(initMethod = "start", destroyMethod = "stop")
public Server jettyServer() throws IOException {
    Server server = new Server();
    server.setHandler(servletContext());
    return server;
}
This is much more straightforward, everything wired up with Spring without any lifecycle callback hooks.

The setup for Spring to render the Thymeleaf views is a little more complex, but not fussy. What we do is replace:

@Bean
public InternalResourceViewResolver configureInternalResourceViewResolver() {
    InternalResourceViewResolver resolver =
            new InternalResourceViewResolver();
    return resolver;
}

with:

@Bean
public ServletContextTemplateResolver thymeleafTemplateResolver() {
    ServletContextTemplateResolver resolver =
            new ServletContextTemplateResolver();
    return resolver;
}

@Bean
public SpringTemplateEngine thymeleafTemplateEngine() {
    SpringTemplateEngine engine = new SpringTemplateEngine();
    engine.setTemplateResolver(thymeleafTemplateResolver());
    return engine;
}

@Bean
public ThymeleafViewResolver thymeleafViewResolver() {
    ThymeleafViewResolver resolver = new ThymeleafViewResolver();
    resolver.setTemplateEngine(thymeleafTemplateEngine());
    return resolver;
}


Our templates now look a lot more like plain HTML, and will render better when loaded directly in a browser:

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<body>
<h1>Hello World!</h1>

<p>Server time: <span th:text="${serverTime}"></span></p>

<p>Here are some items:</p>
<ul>
  <li th:each="item : ${someItems}" th:text="${item}"></li>
</ul>

<p>Do we have a message from the dummy service:</p>

<div th:if="${dummyService == null}">
  <p>No, dummy service is null.</p>
</div>
<div th:if="${dummyService != null}">
  <p>Yes: <span th:text="${dummyService.getMessage()}"></span></p>
</div>

<p><a href="resources/static.txt">A static file.</a></p>

<p><a href="metrics">Yammer Metrics</a></p>
</body>
</html>


With the build process being simple (no dependence on maven plugins) we can switch to a much less verbose Gradle build file:

apply plugin:'java'
apply plugin:'application'

version = '0.0.1-SNAPSHOT'

mainClassName = "ca.unx.template.Main"
applicationName = "jetty-springmvc-thymeleaf-template"

repositories {

dependencies {
compile("org.springframework:spring-webmvc:3.1.3.RELEASE") {
// Commons-logging excluded in favour of SLF4j.
exclude module: 'commons-logging'

compile "cglib:cglib:2.2.2"
compile "org.thymeleaf:thymeleaf-spring3:2.0.14"
compile "org.eclipse.jetty:jetty-webapp:8.1.8.v20121106"
compile "com.yammer.metrics:metrics-servlet:2.2.0"

/* Logging. */
def slf4jVersion = "1.7.1"
compile "ch.qos.logback:logback-classic:1.0.9"
compile "org.slf4j:slf4j-api:$slf4jVersion"
compile "org.slf4j:jcl-over-slf4j:$slf4jVersion"
compile "org.codehaus.groovy:groovy:1.8.6"

A complete template project can be found here. Note that this links to a tag, which is the state of the code at the time of this writing.

SpringMVC with Embedded Jetty

As an accidental Java developer I've never been comfortable deploying applications into a container, especially when the web interface is secondary to the primary purpose of the application. Instead I prefer to programmatically create and manage the web interface rather than have it manage me. Currently SpringMVC is my Java web framework of choice (due to company convention more than anything else) and it is built around the idea of being managed by a container such as Jetty or Tomcat.

While there is no shortage of existing posts on using SpringMVC with embedded Jetty, they fail for me due to the following reasons:

  • They embed Jetty just enough to bootstrap something that looks like a classic Java web application – not what I want!
  • They don’t address JSPs – the default view technology used by SpringMVC.

Compile Those JSPs

By compiling the JSPs prior to deployment we can save ourselves from the hassle of classpath issues, especially relating to tag libs.


This plugin will compile all the JSPs found under src/main/resources/webapp and modify the existing web.xml to direct requests for the JSPs to the pre-compiled versions. This avoids compiling them at runtime and speeds up the time it takes to respond to the first request for a JSP.

You may also notice that the JSPs are under src/main/resources/webapp instead of the more common src/main/webapp. This is to avoid extra lines in the pom.xml to pull in webapp as a resource, as we are packaging as a jar.

Bootstrapping Jetty

I use Spring annotations to configure Jetty and import its @Configuration class into my root context configuration class. In order for the SpringMVC dispatcher servlet to access beans created in the root context, this class needs to be application context aware:

@Configuration
public class JettyConfiguration implements ApplicationContextAware {

    private ApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext)
            throws BeansException {
        this.applicationContext = applicationContext;
    }
}

This is used as the parent context for the new context that will be created for the SpringMVC dispatcher servlet:

@Bean
public ServletHolder dispatcherServlet() {
    AnnotationConfigWebApplicationContext ctx =
            new AnnotationConfigWebApplicationContext();
    ctx.setParent(applicationContext);
    DispatcherServlet servlet = new DispatcherServlet(ctx);
    ServletHolder holder = new ServletHolder("dispatcher-servlet",
            servlet);
    return holder;
}

There are still a few more tricks required to get the dispatcher servlet registered without defining it in your web.xml. We must also create a Jetty web application context; this creates the JSP servlet as well as a default servlet, as would be created during the instantiation of a classic war based application:

@Bean
public WebAppContext jettyWebAppContext() throws IOException {
    WebAppContext ctx = new WebAppContext();
    ctx.setWar(new ClassPathResource("webapp").getURI().toString());

    /* We can add the Metrics servlet right away. */
    ctx.addServlet(AdminServlet.class, "/metrics/*");

    return ctx;
}

Notice that I’ve added the metrics servlet, but have yet to add the dispatcher servlet. For some reason, adding the dispatcher servlet here causes JSP views to fail. One option is to fall back to defining the dispatcher servlet in your web.xml (which may be the cleanest option), or you can register the dispatcher servlet in the Jetty lifeCycleStarted callback which we’ll do here:

@Bean
public LifeCycle.Listener lifeCycleStartedListener() {
    return new AbstractLifeCycle.AbstractLifeCycleListener() {
        @Override
        public void lifeCycleStarted(LifeCycle event) {
            try {
                ServletHolder dispatcherServlet = dispatcherServlet();
                jettyWebAppContext().getServletHandler()
                        .addServletWithMapping(dispatcherServlet, "/");
            } catch (Exception e) {
                /* Logger assumed to exist on the enclosing class. */
                logger.error(
                        "Failed to start Spring MVC dispatcher servlet", e);
            }
        }
    };
}

@Bean(initMethod = "start", destroyMethod = "stop")
public Server jettyServer() throws IOException {
    Server server = new Server();
    server.setHandler(jettyWebAppContext());

    /* Add a life cycle listener so we can register the SpringMVC dispatcher
     * servlet after the web application context has been started. */
    server.addLifeCycleListener(lifeCycleStartedListener());

    return server;
}

In summary, this is just another variation of James Ward's post on Containerless Spring MVC. The complete code for this project (which I use as a template) can be found at. The code as referenced in this post can be found in the 20121207 tag.

Next – As I'm not really a fan of JSPs, or the extra hoops required to make this work right, I'll probably look at updating the template with Thymeleaf support, which should greatly simplify things.

Some NSM type RPMs.

I've always maintained more or less up to date Snort RPMs for RHEL for personal use, and have recently added Suricata. As they may be useful to others, I have cleaned them up a little and made a YUM repository for EL6 i386 and x86_64. See the for more info.

A few things to note:

  • These RPMs use a prefix of /opt/nsm to prevent conflict with similar RPMs you may have installed. It's a little bit out of the norm for RPMs and I'm open to comments…
  • Snort and Suricata packages will never be automatically upgraded, as upgrading to a new version often requires some administration work such as updating your configuration files. To facilitate this, the packages have their version as part of the name, and "-latest" pseudo-packages are provided which will always install the latest RPM; you will then have to "snort-select" or "suricata-select" the new version to make it active. I'll probably have to add some more detailed documentation about this on the wiki.

As I'm also a regular Fedora user I'll probably add Fedora builds at some point, as it's little effort for me. If Fedora builds would be useful to you, please let me know and I may do it sooner rather than later.