
Some notes on “making Snort go fast under Linux”

These are general pointers to things you want to dig into when you need to optimize Snort. If you are one of those who believe that Snort can’t go beyond 100 Mbit/s without dropping packets, you should read on. Comments, feedback, new tips and corrections on how to tune a Snort system are very welcome.

–[ Optimize the hardware ]–
This is always a moving target… You need to keep yourself updated on the topic and pay attention when you buy your hardware. If someone in the community is maintaining an updated list of such hardware, drop me a note!

Intel Network Interface Controllers (NICs) are the off-the-shelf choice of network adapters, the 825NNXX PCI Express series, with at minimum TCP segmentation offload; TCP, UDP and IPv4 checksum offload; interrupt moderation; and maybe bypass support if you use inline mode/IPS.

If you want to pay someone who has already done the research (pure speculation on my side), then maybe Endace could be a choice. But if you go that far, why not just go straight to Sourcefire (the makers of Snort)?

(Matt Jonkman states that you can increase your Snort throughput up to 16-fold if you introduce the Endace platform’s acceleration features. Matt is the founder of Emerging Threats, and is also deeply involved in the OISF and the Suricata project.)

At one time (early 2009), a discussion on IRC (Freenode) summed up to something like this:
“ICH8 southbridge and 975G north bridge performing at 1066 MHz, 8 GB of 1333 MHz DDR2 RAM, and an Intel quad-core 3.2 GHz processor with 8 MB L2 cache running at a 1333 MHz FSB, with an Intel 825NNXX PCI Express Gigabit Ethernet Controller” – for a high-end sniffer at that time.

Your whole system would benefit greatly from fast hard drives, as I/O to hard drives generally sucks juice and locks up the system.

To sum it up:
Fast CPUs, fast RAM, fast buses, fast hard drives and a good network adapter.

–[ Optimize the Linux kernel ]–
In the file /etc/sysctl.conf – you should consider options like these:

# Just sniffing:
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 16777216
net.core.rmem_max = 33554432
net.ipv4.tcp_mem = 194688 259584 389376
net.ipv4.tcp_rmem = 1048576 4194304 33554432
net.ipv4.tcp_no_metrics_save = 1
# IF also in Inline mode:
net.core.wmem_default = 16777216
net.core.wmem_max = 33554432
net.ipv4.tcp_wmem = 1048576 4194304 16777216
# Memory handling – not that important
vm.overcommit_memory=2
vm.overcommit_ratio = 50
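
After editing /etc/sysctl.conf you can load the new values without rebooting; this is just standard sysctl usage, nothing Snort-specific:

# Apply everything in /etc/sysctl.conf
sysctl -p
# Verify a single value
sysctl net.core.rmem_max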

–[ Optimize your network interface card ]–
Change the RX and TX parameters for the interfaces. The following command will display the current settings and the maximum settings you can bump them up to.

# ethtool -g ethX

To change settings, the command is something like this:

# Just sniffing
ethtool -G ethX rx <max RX value reported by "ethtool -g ethX">
# and for inline mode, also add
ethtool -G ethX tx <max TX value reported by "ethtool -g ethX">

Adding the commands to /etc/rc.d/rc.local so that they are executed automatically when you boot would be a good idea.
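
As a minimal sketch, assuming eth1 is the sniffing interface and 4096 is the maximum ring size reported by ethtool -g (both values are placeholders for illustration), the rc.local entries could look like this:

# /etc/rc.d/rc.local – bump the RX (and, for inline mode, TX) ring buffers at boot
ethtool -G eth1 rx 4096
ethtool -G eth1 tx 4096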

–[ Optimize Snort ]–
Snort’s performance is based on several factors:
1 – YOUR network!
2 – How snort is compiled
3 – Preprocessors enabled
4 – Rules
5 – Snort in general and snort.conf

–[ 1. YOUR network! ]–
Your network is a variable that is most likely not like any other network. The number of concurrent connections, the packets and the packet sizes flowing through are most likely unique. Also, depending on the payload in your packets, Snort will perform differently. And if your $HOME_NET is a complex list of “networks” and “!networks” rather than a single host, Snort will spend more time figuring out what to do.
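
As an illustration of that difference (the addresses are made up, and ipvar assumes a Snort version that supports it; older setups use var instead), a cheap versus a more expensive $HOME_NET in snort.conf could look like this:

# Cheap to evaluate: a single host
ipvar HOME_NET 192.168.1.10/32
# More expensive: a list of networks and negations
ipvar HOME_NET [192.168.0.0/16,10.0.0.0/8,!10.13.0.0/16]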

–[ 2. How snort is compiled ]–
First, I recommend compiling Snort only with the options that you need. I used to compile Snort in two different ways, one including options such as --enable-ppm and --enable-perfprofiling and one without. But as my sensors are not suffering enough at the moment, I include them both by default, for easy access to preprocessor and rule performance data if I need it.
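
As a rough sketch (the --prefix path is just an example; check ./configure --help on your Snort version for the full option list), a build with the profiling options enabled could look like this:

./configure --prefix=/usr/local/snort --enable-ppm --enable-perfprofiling
make && make install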

Also, I have not confirmed this, because it is beyond my budget, but the rumor is that Snort performs up to 30% better if it is compiled with the Intel C compiler (and probably run on pure Intel hardware).

If you use Phil Wood’s mmap libpcap and compile Snort with that, you will get somewhat better performance in the packet capture, giving you fewer dropped packets. A nice writeup/howto is found here.

–[ 3 – Preprocessors enabled ]–
How many and which preprocessors you have enabled also plays a role in the total performance of your system. So if you can, you should reduce the number of preprocessors to a minimum. You also need to read the Snort documentation and figure out the best settings that you can live with for each preprocessor that takes configuration options. The flow_depth parameter in the http_inspect preprocessor is a good example (see the sketch right below).
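
As a minimal sketch of that kind of tuning (the ports and the flow_depth value of 300 are illustrative only, and option names vary a bit between Snort versions), a trimmed http_inspect_server entry could look like this:

preprocessor http_inspect_server: server default \
    profile all ports { 80 8080 } \
    flow_depth 300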

Here are two settings/views I switch between when profiling preprocessors:

config profile_preprocs: print 20, sort avg_ticks, filename /tmp/preprocs_20-avg_stats.log append
# And
config profile_preprocs: print all, sort total_ticks, filename /tmp/preprocs_All-total_stats.log append

You should now review the *stats.log files and make changes based on your interpretation, and profile again to see if things get better or worse.

–[ 4 – Rules ]–
The number of rules also affects the performance of Snort, so tuning your rule set to enable just the ones that you need is essential when aiming for performance.
Also, how a rule performs on your network might differ from how it performs on my network… Because of that, you need to profile your set of rules, and tweak or disable them so your system uses fewer overall “ticks”.

Here are two settings/views I switch between when profiling rules:

config profile_rules: print 20, sort avg_ticks, filename /tmp/rules_20-avg_stats.log append
# And
config profile_rules: print all, sort total_ticks, filename /tmp/rules_All-total_stats.log append

You will get a fairly good view of which rules would benefit from tuning or disabling.

–[ 5 – Snort in general and snort.conf ]–
* search-method
You should look into which search-method Snort is using. The default search method is AC-BNFA (Aho-Corasick NFA – low memory, high performance). This is probably the best overall search method, but if you have the RAM for it, AC (Aho-Corasick Full – high memory, best performance) would be a better choice. Snort 2.8.6 added a new pattern matcher named AC-SPLIT, which is optimized to use less memory while performing at AC speed. This will probably be the choice for the future? I need to test it right away 🙂
To enable it, add something like:

config detection: search-method ac-split, max-pattern-len 20, search-optimize

* Latency-Based Packet Handling
If you have a problem with dropped packets, I would say over 1% on average, I would recommend enabling Latency-Based Packet Handling. You should run some tests in your environment to find a value that works for you, but the general situation is like this:
If your Snort “Packet Performance Summary” tells you that your “avg pkt time is 10 usecs”, then Snort can inspect about 1000 packets in 10000 usecs. If one packet for some reason needs 10000 usecs to get through Snort, you may have dropped/sacrificed 1000 other packets in that time frame, just to inspect that one packet. If you configure max-pkt-time to 1000, Snort will stop inspecting packets that take more than 1000 usecs, in this basic example leaving you with 100 dropped packets instead of 1000. You choose! (The example is not technically correct, as a packet can take over 10000 usecs without Snort dropping any packets at all (imagine if there is only one packet going through Snort that day), but in my tests this is more or less the real-world outcome of enabling Latency-Based Packet Handling.)
Example:

config ppm: max-pkt-time 10000, fastpath-expensive-packets, pkt-log

Other keywords you should be aware of in the Snort config, which I don’t want to go into detail about, as I don’t have enough Snort-Fu to stand firm on them, and the documentation is rather lacking! I have a personal understanding of what they do and how they affect performance, but if anyone has a nice writeup of these topics, please point me to it!! (There is a small illustrative snippet after the list below.):
* Event Queue Configuration
* Latency-Based Rule Handling
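
For reference only, and with values that are purely illustrative rather than tuning advice, the directives look roughly like this:

config event_queue: max_queue 8 log 5 order_events content_length
config ppm: max-rule-time 200, suspend-expensive-rules, suspend-timeout 60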

–[ Additional notes ]–
Obviously, if you need to go as fast as possible, your system should not be used for lots of other stuff. So keep your running processes/services to a minimum.

Snort is also, as far as I can tell, single-threaded when it comes to packet inspection. There is a PDF here from Intel, explaining how Sensory Networks Software Acceleration Solutions boost the performance of Snort and the like, making them multi-core enabled/aware.

That said, Snort benefits from sticking to one CPU, so using schedtool properly might help Snort perform better overall. If you are running multiple instances of Snort on one multi-CPU server, you should use schedtool to stick each Snort process to its own physical CPU. Example:

$ man schedtool # and read about “AFFINITY MASK” and understand the difference between cpu-cores and hyper-threading etc.
$ schedtool <pid of snort> # Displays current settings
$ schedtool -a 0x01 <pid of snort> # Pin the snort process to one CPU (The first)
$ schedtool -M 2 -p 10 <pid of snort> # Change the policy to SCHED_RR and set the static priority to 10 (valid range 1-99, higher number means higher priority)
$ schedtool <pid of snort> # to verify your changes
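
If you prefer to set the affinity when Snort starts, instead of on an already running PID, schedtool can also launch the process directly. A small sketch, where the config path and interface are just example values:

$ schedtool -a 0x01 -e snort -c /etc/snort/snort.conf -i eth1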

When optimizing a system, you should always have some sort of measuring system. I use Munin, and I wrote some basic Munin plugins for Snort which monitor the most important stuff.

And as always,
“Measure, don’t speculate” — Unknown
“Premature optimization is the root of all evil” — Tony Hoare


7 thoughts on “Some notes on “making Snort go fast under Linux””

  1. EBF:
    I guess I skipped that part; my recommendation on your writing would be to never statically compile Snort against any libpcap.

    On this line of thought I would rather recommend using dynamic libraries (if you can); it can sometimes prevent errors in the long run (think upgrades), and it also helps other software that interfaces with libpcap to transparently benefit from performance enhancements provided by the library.


  2. Matthew,

    I’m in the process of digging into the subjects. I also have a fair understanding of how it works, but a confirmation from someone like you would boost my confidence, and maybe make me post my thoughts 🙂

    Looking forward to your reply…

    E

