
Populating Snort's host attribute table with PRADS

It has been a long journey, but after about two years, I finally have a way to populate Snort's host attribute table, automagically(tm)!

When I started this, my first option was to use nmap to scan the network to populate the information. That idea was scrapped, as my goal was to be non-intrusive and always up to date (the minute a new host popped up, I wanted to know). Scanning 65535 ports, times two, for each of the hosts I'm monitoring is not an option either… I started to look at the Open Source tools out there to see which ones I could pull the information from. As I was familiar with p0f and PADS, I saw that they could do the job, but they needed some band-aid to work together, and they were lacking active development… p0f has a DB patch/version, and I already had PADS hooked up in Sguil, so I had the info in a DB, but not in the way I wanted it. So I started out on a journey to merge the two projects, enhance them, and try to speed things up a bit.

The project is still in development, but the main parts are done. It is useful in that it prints out information about detected hosts, like this in verbose mode (and yes, it also does IPv6):

2a02:c0:1002:100:21d:72ff:fe92:728,[syn:S4:64:1:40:M1440,S,T,N,W7:Z],[Linux:2.6 (newer, 7) IPv6],[link:IPv6/IPIP],[uptime:2hrs],[distance:0]
2a02:c0:1002:10::2,[synack:5712:63:1:40:M1440,S,T,N,W7:ZAT],[Linux:2.6 (newer, 7) IPv6],[link:IPv6/IPIP],[uptime:4069hrs],[distance:1]
2a02:c0:1002:100:21d:72ff:fe92:728,[ack:45:64:1:*:N,N,T:ZAT],[Linux:2.6],[uptime:2hrs],[distance:0]
2a02:c0:1002:10::2,[service:OpenSSH 5.1p1 (Protocol 2.0):22:6],[distance:1]
2a02:c0:1002:10::2,[ack:45:63:1:*:N,N,T:ZAT],[Linux:2.6],[uptime:4069hrs],[distance:1]
2a02:c0:1002:100:21d:72ff:fe92:728,[client:OpenSSH 5.1p1 (Protocol 2.0):22:6],[distance:0]

At the moment, it also writes a file to your /tmp/ dir, /tmp/prads-asset.log, which presents the info in the following way:

2a02:c0:1002:100:21d:72ff:fe92:728,0,56268,6,SYN,[S4:64:1:40:M1440,S,T,N,W7:Z:Linux:2.6 (newer, 7) IPv6:link:IPv6/IPIP:uptime:2hrs],0,1269420770
2a02:c0:1002:10::2,0,22,6,SYNACK,[5712:63:1:40:M1440,S,T,N,W7:ZAT:Linux:2.6 (newer, 7) IPv6:link:IPv6/IPIP:uptime:4069hrs],1,1269420770
2a02:c0:1002:100:21d:72ff:fe92:728,0,56268,6,ACK,[45:64:1:*:N,N,T:ZAT:Linux:2.6:uptime:2hrs],0,1269420770
2a02:c0:1002:10::2,0,22,6,SERVER,[ssh:OpenSSH 5.1p1 (Protocol 2.0)],1,1269420770
2a02:c0:1002:10::2,0,22,6,ACK,[45:63:1:*:N,N,T:ZAT:Linux:2.6:uptime:4069hrs],1,1269420770
2a02:c0:1002:100:21d:72ff:fe92:728,0,22,6,CLIENT,[ssh:OpenSSH 5.1p1 (Protocol 2.0)],0,1269420770
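
If you want to consume the asset log from your own scripts, it is plain comma-separated text. Here is a quick sketch of pulling out the obvious fields with awk – note that my reading of the fields (IP, port, IP protocol, record type, distance and a Unix timestamp, with the bracketed fingerprint in between possibly containing commas of its own) is an assumption based on the sample above, not documentation:

$ awk -F, '{ print "ip="$1, "port="$3, "proto="$4, "type="$5, "distance="$(NF-1), "ts="$NF }' /tmp/prads-asset.log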

Input from the community on how best to present this information/output for integration with other applications is welcome.

To try it out, this is what I believe is necessary to install on an Ubuntu machine to run it:

$ sudo aptitude install build-essential git-core libpcre3-dev libpcap0.8-dev
$ git clone http://github.com/gamelinux/prads.git
$ cd prads/src/ && make
$ # then to start it
$ sudo ./prads -i ethX -v

For populating the Snort host attribute table, there is a script in the tools dir, prads2snort.pl, which reads the prads-asset.log file and turns it into a host attribute XML file.
Example:

$ perl prads2snort.pl -i prads-asset.log -o hosts_attribute.xml -v -f

The -v (verbose) mode prints out some details, which are worth checking to see if hosts and services are being detected correctly.

Snort supports reloading the attribute table if you send it signal 30 (kill -30 <snort-pid>). This means that if you discover a difference in your host attribute table (say you have a new HTTP service somewhere, or a host has changed OS), you can swap out the attribute file with an updated one and tell Snort to reload it, without restarting Snort! Darn cool if you ask me 🙂
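
To tie it together, something like this from cron could keep the table fresh. This is only a sketch – the paths, the attribute file location and the cron approach are my assumptions, so adjust them to your own setup:

# regenerate the attribute table from the latest PRADS data and tell Snort to reload it
perl /path/to/prads/tools/prads2snort.pl -i /tmp/prads-asset.log -o /etc/snort/hosts_attribute.xml.new -f
mv /etc/snort/hosts_attribute.xml.new /etc/snort/hosts_attribute.xml
kill -30 $(pidof snort)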

You can read more about Snort and its host attribute table here, and you can read about another tool called Hogger here. Also, you should read the Snort documentation section 2.7 (around page 104/105) “Host Attribute Table”.

I would once again like to thank Michal Zalewski and Matt Shelton for their work on p0f and PADS. I would also like to thank Martin Roesch & the Snort team (and all the contributors) for a great application and all the effort they have put into Snort and its surroundings. (Virtually giving you the prize for best Open Source security application 2000 – 2010!)

Example output from Snort when the attribute table is first loaded and then reloaded:

Attribute Table Loaded with 980 hosts

Attribute Table Reload Thread Starting…
Attribute Table Reload Thread Started, thread 363022672 (15333)

$ /bin/kill -30 15333

Swapping Attribute Tables.

$ /bin/kill 15333

===========================================
Attribute Table Stats:
Number Entries: 980
Table Reloaded: 1
===========================================


Some notes on “making Snort go fast under Linux”

These are general pointers to things you want to dig into when you need to optimize Snort. If you are one of those who believe that Snort can't go beyond 100 Mbit/s without dropping packets, you should read on. Comments/feedback/new tips/corrections on how to tune a Snort system are very welcome.

–[ Optimize the hardware ]–
This is always a moving target… You need to keep yourself updated on the topic and pay attention when you buy your hardware. If someone in the community is maintaining an updated list of such hardware, give me a note!

Intel Network Interface Controllers (NICs) are the off-the-shelf choice of network adapter: the 825NNXX PCI Express series, with at minimum TCP segmentation offload, TCP/UDP/IPv4 checksum offload and interrupt moderation, and maybe bypass if you run inline mode/IPS.

If you want to pay someone who has already researched this a bit (pure speculation from my side), then maybe Endace could be a choice. But if you are going down that road, why not just go straight to Sourcefire (the makers of Snort)?

(Matt Jonkman states that you can increase your Snort throughput up to 16-fold if you introduce the Endace platform's acceleration features. Matt is the founder of Emerging Threats, and is also deep into the OISF and the Suricata project.)

At one time (early 2009), a discussion on IRC (Freenode) summed it up something like this:
“ICH8 southbridge and 975G northbridge performing at 1066 MHz, 8 GB of 1333 MHz DDR2 RAM on an Intel quad-core 3.2 GHz 8 MB L2 cache processor running at 1333 MHz FSB, and an Intel 825NNXX PCI Express Gigabit Ethernet Controller.” – for a high-end sniffer at that time.

Your whole system will also benefit greatly from fast hard drives, as I/O to hard drives generally sucks up resources and locks up the system.

To sum it up:
Fast CPUs, fast RAM, fast buses, fast hard drives and a good network adapter.

–[ Optimize the Linux kernel ]–
In the file /etc/sysctl.conf – you should consider options like these:

# Just sniffing:
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 16777216
net.core.rmem_max = 33554432
net.ipv4.tcp_mem = 194688 259584 389376
net.ipv4.tcp_rmem = 1048576 4194304 33554432
net.ipv4.tcp_no_metrics_save = 1
# IF also in Inline mode:
net.core.wmem_default = 16777216
net.core.wmem_max = 33554432
net.ipv4.tcp_wmem = 1048576 4194304 16777216
# Memory handling – not that important
vm.overcommit_memory=2
vm.overcommit_ratio = 50
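
Remember that edits to /etc/sysctl.conf are not applied until you reload the file (or reboot):

$ sudo sysctl -p                 # reload /etc/sysctl.conf
$ sysctl net.core.rmem_max       # verify a single value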

–[ Optimize your network interface card ]–
Change the RX and TX ring parameters for the interfaces. The following command displays the current settings and the maximum values you can bump them up to:

# ethtool -g ethX

To change settings, the command is something like this:

# Just sniffing
ethtool -G ethX rx N    # N = the RX maximum reported by 'ethtool -g ethX'
# and for inline mode, also add
ethtool -G ethX tx N    # N = the TX maximum

Adding the commands to /etc/rc.d/rc.local, so that they are executed automatically when you boot, would be a good idea.
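
For example, with the values as placeholders – use the maximums that ethtool -g reported for your NIC:

# in /etc/rc.d/rc.local
ethtool -G eth1 rx 4096    # sniffing interface, RX ring at its reported maximum
ethtool -G eth1 tx 4096    # only needed for inline mode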

–[ Optimize Snort ]–
Snort's performance is based on several factors:
1 – YOUR network!
2 – How snort is compiled
3 – Preprocessors enabled
4 – Rules
5 – Snort in general and snort.conf

–[ 1. YOUR network! ]–
Your network is a variable that is most likely unlike any other network. The number of concurrent connections, and the packets and packet sizes flowing through it, are most likely unique. Depending on the payload in your packets, Snort will also perform differently. And the more complex your $HOME_NET is (a long list of “networks” and “!networks” rather than a single host), the more time Snort spends figuring out what to do.

–[ 2. How snort is compiled ]–
First, I recommend compiling Snort only with the options that you need. I used to compile Snort in two different ways, one including options such as --enable-ppm and --enable-perfprofiling, and one without. But as my sensors are not suffering enough at the moment, I include them both by default, for easy access to preprocessor and rule performance data if I need to.

Also, I have not confirmed this, because it's out of my budget's reach, but the rumor is that Snort performs up to 30% better if it is compiled with the Intel C compiler (and probably run on pure Intel hardware).

If you use Phil Wood's mmap libpcap and compile Snort against it, you will get somewhat better packet capture performance, giving you fewer dropped packets. A nice writeup/howto is found here.

–[ 3 – Preprocessors enabled ]–
How many and which preprocessors you have enabled also plays a role in the total performance of your system. So if you can, reduce the number of preprocessors to a minimum. You also need to read the Snort documentation and figure out the best settings you can live with for each preprocessor that takes configuration options. The flow_depth parameter in the http_inspect preprocessor is a good example (see the sketch below).
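
As an illustration, limiting how much of the HTTP payload http_inspect hands to the detection engine looks something like this in snort.conf. The numbers and ports here are just an example, not a recommendation – check the http_inspect section of the manual for your Snort version before copying anything:

preprocessor http_inspect_server: server default profile all ports { 80 8080 } flow_depth 300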

Here are two settings/views I switch between when profiling preprocessors:

config profile_preprocs: print 20, sort avg_ticks, filename /tmp/preprocs_20-avg_stats.log append
# And
config profile_preprocs: print all, sort total_ticks, filename /tmp/preprocs_All-total_stats.log append

You should now review the *stats.log files and make changes based on your interpretation, and profile again to see if things get better or worse.

–[ 4 – Rules ]–
The number of rules also affects the performance of Snort, so tuning your rule set to enable just the rules you need is essential when aiming for performance.
Also, how a rule performs on your network might differ from how it performs on mine… That said, you need to profile your set of rules, and tweak or disable them so your system uses fewer overall “ticks”.

Here are two settings/views I switch between when profiling rules:

config profile_rules: print 20, sort avg_ticks, filename /tmp/rules_20-avg_stats.log append
# And
config profile_rules: print all, sort total_ticks, filename /tmp/rules_All-total_stats.log append

You will get a fairly good view of which rules need, or would benefit from, tuning or disabling.

–[ 5 – snort in general and snort.conf ]–
* search-method
You should look into which search method Snort is using. The default search method is AC-BNFA (Aho-Corasick NFA – low memory, high performance). This is probably the best overall search method, but if you have the RAM for it, AC (Aho-Corasick Full – high memory, best performance) would be a better choice. Snort 2.8.6 added a new pattern matcher named AC-SPLIT, which is optimized to use less memory while performing at AC speed. This will probably be the choice for the future? Need to test right away 🙂
To enable it, add something like:

config detection: search-method ac-split, max-pattern-len 20, search-optimize

* Latency-Based Packet Handling
If you have a problem with dropped packets, say over 1% on average, I would recommend enabling Latency-Based Packet Handling. You should run some tests in your environment to find a value that works for you, but the general situation is like this:
If your Snort “Packet Performance Summary” tells you that your “avg pkt time is 10 usecs”, then Snort can inspect about 1000 packets in 10000 usecs. If one packet for some reason takes 10000 usecs to get through Snort, you may have dropped/sacrificed 1000 other packets in that time frame, just to inspect that one packet. So if you configure max-pkt-time to be 1000, Snort will stop inspecting packets that take more than 1000 usecs, in this basic example leaving you with 100 dropped packets instead of 1000. You choose! (The example is not technically correct, as a packet can take over 10000 usecs without Snort dropping any packets at all – imagine if there is only one packet going through Snort that day – but in my tests, this is more or less the real-world outcome of enabling Latency-Based Packet Handling.)
Example:

config ppm: max-pkt-time 10000, fastpath-expensive-packets, pkt-log

There are other keywords you should be aware of in the Snort config that I don't want to go into details about, as I don't have enough Snort-Fu to stand firm on them, and the documentation is rather lacking! I have a personal understanding of what they do and how they affect performance, but if anyone has a nice writeup on these topics, please point me to it!! Their config keywords are sketched after the list, just as a pointer:
* Event Queue Configuration
* Latency-Based Rule Handling
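
Just as a pointer, the config keywords for the two look something like this. The values are lifted from the stock snort.conf examples and the manual as I remember them, so verify them against the documentation for your Snort version before using them:

config event_queue: max_queue 8 log 3 order_events content_length
config ppm: max-rule-time 700, threshold 5, suspend-expensive-rules, suspend-timeout 300, rule-log alert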

–[ Additional notes ]–
Obviously, if you need to go as fast as possible, your system should not be used for lots of other stuff as well. So keep your running processes/services to a minimum.

Snort is also, as far as I can tell, single threaded when it comes to packet inspection. There is a PDF here from Intel, explaining how Sensory Networks Software Acceleration Solutions boost the performance of Snort and the like, making them multi-core enabled/aware.

That said, Snort benefits from sticking to one CPU, so using schedtool properly might help Snort perform better overall. If you are running multiple instances of Snort on one multi-CPU server, you should use schedtool to pin each Snort process to its own physical CPU. Example:

$ man schedtool # read about "AFFINITY MASK" and understand the difference between CPU cores and hyper-threading etc.
$ schedtool <pid of snort> # Display the current settings
$ schedtool -a 0x01 <pid of snort> # Pin the snort process to one CPU (the first)
$ schedtool -M 2 -p 10 <pid of snort> # Change the policy to SCHED_RR and set the real-time priority to 10 (1 lowest, 99 highest)
$ schedtool <pid of snort> # Verify your changes

Whenever you are optimizing a system, you should have some sort of measuring in place. I use Munin, and I wrote some basic Munin plugins for Snort which monitor the most important stuff.

And as always,
“Measure, don’t speculate” — Unknown
“Premature optimization is the root of all evil” — Tony Hoare


sidrule update (yes, so soon!)

A friend of mine at Sourcefire, Jim, made some comments yesterday on my little bash script. He wanted to be able to search through the msg field of a Snort rule, and to activate or deactivate rules based on the search.

Also, with Alex Kirk's last blog post fresh in mind, I thought about enabling rules based on one of the three default policies Sourcefire maintains – Connectivity Over Security, Balanced, and Security Over Connectivity. And since all the logic was already there, why not add support for classtype as well…

So I added three new ways to search through the rules, using the msg, classtype and metadata fields.

You can enable or disable rules in bulk, say all rules that have “RPC portmap” in the msg field, or “Security Over Connectivity” in the metadata field. And also by classtype, say “attempted-user” or “attempted-admin”.

The script also supports walking through the matched rules and enabling/disabling/skipping (doing nothing) rule by rule.
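
The underlying idea is nothing fancier than grep and sed over the rules directory. Here is a rough sketch of the same approach – this is not the actual sidrule code, and the rules path is an assumption:

# disable every alert rule with a given classtype by commenting it out
grep -rl 'classtype:attempted-admin;' /etc/snort/rules/ | while read f; do
    sed -i '/classtype:attempted-admin;/ s/^alert/#alert/' "$f"
done
# swap the sed pattern (s/^#alert/alert/) to enable them again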

# sidrule -p policy security-ips drop
Bash’ed together by edward.fjellskal@redpill-linpro.com

[*] Found 4224 rules in 39 rule files.
[*] Searchterm: metadata:”policy security-ips drop”
[*] Disable ALL rules (y/N)?
[*] Enable ALL rules (y/N)?
[*] Enable/Disable rule by rule (y/N)?

# sidrule -s RPC portmap proxy
Bash’ed together by edward.fjellskal@redpill-linpro.com

[*] Found 4 rules in 1 rule files.
[*] Searchterm: msg:”RPC portmap proxy”
[*] Disable ALL rules (y/N)?
[*] Enable ALL rules (y/N)?
[*] Enable/Disable rule by rule (y/N)?

# sidrule -c attempted-admin
Bash’ed together by edward.fjellskal@redpill-linpro.com

[*] Found 894 rules in 41 rule files.
[*] Searchterm: classtype:”attempted-admin”
[*] Disable ALL rules (y/N)?
[*] Enable ALL rules (y/N)?
[*] Enable/Disable rule by rule (y/N)? y
[*] Getting sids from 41 file(s).
[*] (1/41) Getting sids from file: /etc/snort/rules/backdoor.rules
[*] (2/41) Getting sids from file: /etc/snort/rules/bad-traffic.rules
………
[*] (40/41) Getting sids from file: /etc/snort/rules/web-misc.rules
[*] (41/41) Getting sids from file: /etc/snort/rules/web-php.rules
[*] In file: /etc/snort/rules/backdoor.rules
[*] alert tcp $EXTERNAL_NET any -> $TELNET_SERVERS 23 (msg:"BACKDOOR w00w00 attempt"; flow:to_server,established; content:"w00w00"; metadata:policy security-ips drop; reference:arachnids,510; classtype:attempted-admin; sid:209; rev:5;)
[*] Rule 1 of 894
[*] Disable/Enable/Skip rule (d/e/S)?S
[*] Not processing rule..
……….

When I started working on this yesterday, I realized that I should rather do all of this in Perl, but since I had started it in bash (+sed), I decided to just finish this version in bash. I need to practice my bash too!

Maybe one day I'll redo it in Perl or something… But not today 🙂
The code is still here.

Enjoy, Jim!


sidrule – A simple and fast way to Enable, Disable or Display a Snort/Emerging Threats/Suricata rule

On my private servers and home machines etc. (even my laptop), I run Snort.

I got tired of spawning vim to edit a rule file (disabling/enabling) or sometimes just to read a rule for joy and pleasure…

So I made a simple bash script to solve my small needs…
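
Under the hood it is just grep and sed keyed on the sid. Roughly like this – a sketch, not the real script, and the rules path is an assumption:

SID=15363
RULESDIR=/etc/snort/rules
grep -n "sid:$SID;" $RULESDIR/*.rules                      # list the rule and the file it lives in
sed -i "/sid:$SID;/ s/^alert/#alert/" $RULESDIR/*.rules    # disable it
sed -i "/sid:$SID;/ s/^#alert/alert/" $RULESDIR/*.rules    # enable it again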
Output from sidrule:

# sidrule
Bash’ed together by edward.fjellskal@redpill-linpro.com
Usage:
sidrule [list|enable|disable] sid
or
sidrule [ -l | -e | -d ] sid

# sidrule list 15363
[*] In file: /etc/snort/rules/web-client.rules
[*] alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"WEB-CLIENT Potential obfuscated javascript eval unescape attack attempt"; flow:established,to_client; content:"eval|28|"; nocase; content:"unescape|28|"; within:15; nocase; content:!"|29|"; within:250; metadata:policy balanced-ips alert, policy security-ips alert, service http; reference:url,cansecwest.com/slides07/csw07-nazario.pdf; reference:url,www.cs.ucsb.edu/~marco/blog/2008/10/dom-based-obfuscation-in-malicious-javascript.html; classtype:misc-activity; sid:15363; rev:1;)

# sidrule disable 15363

[*] Found sid:15363 in /etc/snort/rules/web-client.rules:
[*] Disabling:
[*] #alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"WEB-CLIENT Potential obfuscated javascript eval unescape attack attempt"; flow:established,to_client; content:"eval|28|"; nocase; content:"unescape|28|"; within:15; nocase; content:!"|29|"; within:250; metadata:policy balanced-ips alert, policy security-ips alert, service http; reference:url,cansecwest.com/slides07/csw07-nazario.pdf; reference:url,www.cs.ucsb.edu/~marco/blog/2008/10/dom-based-obfuscation-in-malicious-javascript.html; classtype:misc-activity; sid:15363; rev:1;)

# sidrule enable 15363

[*] Found sid:15363 in /etc/snort/rules/web-client.rules
[*] Enabling:
[*] alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"WEB-CLIENT Potential obfuscated javascript eval unescape attack attempt"; flow:established,to_client; content:"eval|28|"; nocase; content:"unescape|28|"; within:15; nocase; content:!"|29|"; within:250; metadata:policy balanced-ips alert, policy security-ips alert, service http; reference:url,cansecwest.com/slides07/csw07-nazario.pdf; reference:url,www.cs.ucsb.edu/~marco/blog/2008/10/dom-based-obfuscation-in-malicious-javascript.html; classtype:misc-activity; sid:15363; rev:1;)

The git repo is on github.com/gamelinux/sidrule
git clone http://github.com/gamelinux/sidrule.git

Hope you find it useful!


Sourcefire 3D – 4.9 (My highlights)

Sourcefire has recently released version 4.9 of their Sourcefire 3D System. I'm really happy with the changes and improvements they have included in this release. Some of the changes were ones I was looking forward to, as I had already seen some of these smart changes in Snort.

The first improvement that I was eager to try out was the support for Multi Policies/Policy by VLAN or Network/Filtered Policy (I can't seem to find a consistent name in the 3D documentation or GUI) on one Detection Engine (DE). With the previous 4.8 version, I was unable to segment my inspection sufficiently. This meant that I was generating a few more alerts than I needed, and also using up my sensor's resources handling this traffic.

For example:
One network hosting 8 web services and another network hosting 2 web services, both on the same DE. All the web rules that I enabled in the policy for that DE would, by default, potentially fire for all 10 web services.
When two of the web services are on appliances running the legacy Windows NT 4 Embedded, RNA-recommended rules suggest enabling lots of rules that will fire off too many false positives on the other Linux Apache web servers, etc. Good tuning was needed for my setup, so that web-iis.rules would not fire on Apache services and vice versa, if you get my point. Not a big problem, it just took some time.

With the new multi-policies I can now make a policy for each of the two networks, and RNA-recommended rules will give me a better default set to start with (different RNA recommendations for each network) and a lot fewer false positives, which means less tuning work. It also means fewer false negatives. It is much easier now to tune the rules, as I don't have to take other parts of the network into account when I'm tuning, or the effect on those parts if I disable or enable a rule in a policy. Suppression is also used in a better way, as I no longer need to spend time suppressing rules for one host that fires false positives while the rule is still needed for other hosts.
There is of course a limitation here: if the amount and variety of services and hosts in one policy is the same as it was in 4.8, you're back where you started. Also, there is at this point a limit of 8 policies in total on one DE. I wish there were more, so I could split things up further, but hey! Thanks for the 8 I got 🙂

The second new feature I love is Policy Layers. This basically allows you to create reusable modules of policy configuration, rules, etc., which you can share among different policies. I can now have different sets of “rule” policies that I maintain inside an Intrusion Policy. An example is my set of “strange rules I can't live without”, or “standard rules that should be enabled in all policies”, in one template that I can reuse easily!
I also like consulting Sourcefire's RNA (Real-time Network Awareness) recommended rules, but I like adding more custom rules from the VRT/SEU repo as well. Now it's easier to keep things organized: RNA-recommended rules in one rule policy, my custom “strange rules” in another, more Sourcefire VRT rules in yet another, and finally some Emerging Threats rules in yet another…

Now back to reviewing events…


Full Packet Capture GUI (FPCGUI)

I started a little project of mine that I have been thinking about since the summer of 2008 (also see this post). I saw that it was a problem finding vendors selling a cheap setup for a full packet capture solution. The recommendation was to set up a Linux server on your own, run tcpdump and spool pcaps to disk. Well, once you have all that data, you need some way to manage it. I thought about using sancp to index the connections, and tools like tcpxtract, foremost, dsniff, chaosreader and tcptrace, and combining features from xplico, to add some extra value and possibilities on top.

So I started my project back in September '09, calling it FPCGUI (Full Packet Capture Graphical User Interface). It currently supports daemonlogger/tcpdump/sancp for spooling pcaps, with a wrapper script that puts the pcaps in directories based on “year-month-date”. cxtracker/sancp can be used for connection profiling/tracking, writing session data to disk, and I have written fpc-session-loader.pl, which picks up the session data files and inserts them into a MySQL database. If I now want to see all the traffic from one host, I can do a search in my webgui and get the data. I can run rather interesting queries over all the data from cxtracker/sancp, and get interesting results.

[Screenshot: FreeBSD search]

I use cxtracker in my setup, as it collects metadata on both IPv4 and IPv6 connections. I have also managed to store IPv4 and IPv6 addresses in the MySQL database in a reasonable and usable way.

[Screenshot: IPv6 search]

I have just finished writing a PHP webgui, where I can enter a search term and get a list (or just a single session if I'm specific enough), click on the session of choice, and up pops a download dialog where I can choose to open the pcap straight away in Wireshark! The pcap of the specific session is carved out from the pcaps for the relevant period (days) when the session took place. More or less the same functionality you find in a Sguil stack setup. I wrote the PHP gui in such a way that it can take search terms via a URL, like “?srcip=10.10.10.10&srcport=80” and so on, making it easier to integrate with other applications.
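
The carving itself can be done with plain tcpdump against the spool files for the day in question. Roughly like this – the directory layout and the BPF filter are just an example of the idea, not the exact code fpcgui runs:

# carve one session out of a day's pcap spool file
tcpdump -r /nsm/pcap/2010-03-24/spool.pcap -w /tmp/session.pcap 'host 10.10.10.10 and tcp port 80'

If a session spans several spool files, mergecap from the Wireshark suite can stitch the carved pieces back together.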

[Screenshot: search results]

Example screenshot of what happens when you click on an event:
[Screenshot]
I have associated the pcap files with ‘Content-Type: application/pcap-capture’ and set Firefox to spawn Wireshark for those files automatically 🙂

So now I'm one step closer to having full packet capture with my Sourcefire 3D system! I just need to find out what part of the 3D webgui code to hack to add my custom <click here to get the pcap of the session that triggered the event> 🙂 Of course I can enter the data manually, but I'm lazy, and I like to hack stuff 🙂

The project is hosted here. Any thoughts are more than welcome.


snort-2.8.5.1 debian/ubuntu packages

Loglevel: INFO

I have packed snort 2.8.5.1 for Ubuntu Hardy and Jaunty:
http://debs.gamelinux.org/snort/hardy/
http://debs.gamelinux.org/snort/jaunty/

I have changed the way I pack snort. I no longer pack the pgsql and mysql versions. I have also dropped prelude support. If you need them, drop me a line, and I'll see what I can do. It's just my belief that one should log in unified/2 format for speed, and let barnyard/2 take care of the rest 🙂

I also compile snort with IPv6.

-*> Snort! Version 2.8.5.1 IPv6 (Build 114) <*-


Full packet capture…

I was at a seminar today where one of the key focuses was full packet capture of network traffic.

It was rather strange to me that it seemed to be presented as something new, exciting and a “must have”…

IDS/IPS without full packet capture is time-consuming when you try to investigate an incident. All analysts know that, and there is nothing new about it. Richard Bejtlich has preached this for years (read The Tao of Network Security Monitoring: Beyond Intrusion Detection).

As a happy Sguil user, I always have full packet capture of my network traffic, and can drill down into all the network data from an event. That means I save tons of time investigating events, and can also tune down my false positives better. Most commercial vendors don't integrate any “full packet capture appliances”, and don't even support 3rd-party packet capture services. In my earlier days, I brought this up with, among others, IBM and Juniper, where they just looked strangely at me and replied more or less the same: “Full packet capture is just too much data to handle… you need big disks and lots of CPU/RAM… We are not sure how to integrate this…”

Well, there is a free and open source way to implement such a device. A standard Linux host with daemonlogger is one example. (There are other tools that also do packet capture, but daemonlogger aims squarely at packet capture and nothing more, and does it in the way that I want.)
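
A minimal way to run it, with flags from memory – check daemonlogger -h on your version before trusting this:

# daemonize, sniff eth1, write pcaps to /nsm/pcap and roll the file at roughly 1 GB
daemonlogger -d -i eth1 -l /nsm/pcap -s 1000000000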

Now that you can get 67 terabytes of storage for about $7,800 USD, storing your data should not be a problem 🙂

You can split Sguil up to run different services on different hardware, so if you have a network tap that can mirror traffic to more than one device, you can run IDS on one server, pcap on another, network statistics on a third and asset detection on a fourth, for example. Basic overview of Sguil with all services running on one sensor:
[Diagram]
If you want, or need, more juice from your Snort sensor etc., you can split it up so that one sensor takes the traffic for the X most used services and the other sensor takes the rest. Or split it up even more!

Since I started using the Sourcefire 3D system, I have planned to make a way to easily integrate my packet capture server with the Defense Center. My thoughts are on using Firefox with Greasemonkey and some perl-cgi on the pcap server to carve out the right portion of the pcaps. Capture has some nice ideas, and I might reuse some code from there. If Sourcefire doesn't beat me to it, I might have something of my own in the near future…

If you don't capture packets today, you should look into a way of doing it. It saves you time, and it saves you lots of work. I would not be without mine 🙂
