Information, OpenSourceSoftware, Security

Security thoughts for 2012+

Quoting Richard Bejtlich: “Prevention will eventually fail!”

And I have always agreed. Accidents do happen; the world is not perfect. So when companies that really spend time and money on security get breached (RSA, Lockheed, Google, [place your company here?]), it follows that you will eventually get breached too.

When you realize and accept that, you may need to redefine the way you think about IT security. Prepare for the worst: identifying what “the worst” would be for you (your company), and then identifying your most critical assets, should be at the top of your list, and you should focus your effort on securing those the most.

Limit the number of users that have access to the most critical assets (and who work on sensitive projects, etc.). These users also need special attention when it comes to awareness training and follow-up. They should have good communication with the security staff, making it easy to report anything that seems suspicious and to get positive feedback no matter what. They are an important part of picking up security issues where your technology fails, so you need them!

The most critical assets need to be monitored as close to real time as it gets. The time between detecting an incident and responding to it should be kept to a minimum, even outside working hours and on weekends.

The users who have access to these critical systems should also get special attention/hardening on their operating systems. Use a modern operating system, enable the security functionality already there, and make sure that executables can’t be executed from temporary directories, etc. When you have the basic security features in place (including anti-virus), you should start looking at centralized logging and alerting on suspicious activity found in the logs.
You should also look into different ways of monitoring anomalies in user behaviour. When do they normally log on? From where do they normally log on? Are they suddenly fetching lots of documents from the file servers? And did they access the fake “secret document” that is there just to catch suspicious activity? (You need to define your own anomalies.)
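To make this a bit more concrete, here is a minimal Perl sketch of the idea. It assumes your logons end up in a simple comma-separated log of “epoch,user,source-ip” and that you keep a per-user baseline of normal hours; the log format, user names and hours here are made up for illustration:

#!/usr/bin/perl
# Sketch: flag logons outside a user's "normal" hours.
# Assumes lines like "1325376000,alice,10.0.0.5" - adapt to your own log sources.
use strict;
use warnings;
use POSIX qw(strftime);

my %normal_hours = ( alice => [ 7 .. 17 ], bob => [ 8 .. 16 ] );  # per-user baseline

while (<>) {
    chomp;
    my ($epoch, $user, $src) = split /,/;
    next unless defined $src;
    my $hour = (localtime($epoch))[2];
    my $ok   = grep { $_ == $hour } @{ $normal_hours{$user} || [] };
    printf "ANOMALY: %s logged on at %s from %s\n",
           $user, strftime("%F %T", localtime($epoch)), $src unless $ok;
}

The same pattern works for “normal” source addresses or document-fetch volumes; the point is simply to compare each event against a baseline you have defined yourself.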

When the inner core (the most valued assets and their users) is “secured”, you should strive to maintain an acceptable level of security on the rest of the corporate office network and, just as importantly, on the public-facing part. Compromises there can be used to escalate into the “inner core” or to damage your reputation and business affairs, so keeping an acceptable level of security here, as always, is good.

Since “Prevention will eventually fail!”, you need to have sufficient logging up and running, so that when you do have an incident, the analyst has enough data to work with. This also keeps the cost down, as the time it takes to handle an incident will be lower. I’m mostly into Network Security Monitoring, so for me, NetFlow-type data, IDS events, full packet capture, proxy logs and DNS query logs are some of the key network logs that will help me. On the host side of logging, the more the better: web, email, proxy, spam, anti-virus, file access, local client logs, syslogs/event logs, and so on…

And remember – if you can’t spot any badness, you are not looking hard enough 🙂
I always work on the theory that something in my networks is p0wned. That keeps me on my toes and keeps me actively finding new ways to spot badness.

With that – I wish you all a hacky new year!

Information, OpenSourceSoftware, passivedns, Security

PassiveDNS update (v0.2.4)

It has been a while since I had time to code on my C projects, but last week I got some time and used it to get PassiveDNS into a state where I’m more relaxed about it. The previous version (v0.1.1) used to spit out all the DNS data it saw. The latest version caches DNS data internally in memory and only prints a DNS record when it sees it for the first time, or, if it is an active domain, prints it again after 24 hours and so on (once a day). The previous version would give me gigabytes of DNS data daily in my test setup, while this version gives me about 2 megabytes. This version also only handles A, AAAA, PTR and CNAME records at the moment. I’m open to suggestions for more (use cases would be great too!).
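For the curious, the caching logic is roughly like this (a tiny Perl sketch of the idea only, not the actual C implementation, and the output fields in the example are invented):

#!/usr/bin/perl
# Sketch of the cache idea: print a record the first time it is seen,
# then again at most once every 24 hours while it stays active.
use strict;
use warnings;

my %last_printed;              # key = "query/type/answer" => epoch of last print
my $interval = 24 * 60 * 60;   # re-print active records once a day

sub maybe_print {
    my ($query, $type, $answer) = @_;
    my $key = join '/', $query, $type, $answer;
    my $now = time;
    if ( !exists $last_printed{$key} or $now - $last_printed{$key} >= $interval ) {
        print join( '||', $now, $query, $type, $answer ), "\n";
        $last_printed{$key} = $now;
    }
}

maybe_print( 'www.example.com', 'A', '192.0.2.10' );   # printed
maybe_print( 'www.example.com', 'A', '192.0.2.10' );   # suppressed (seen < 24h ago)
maybe_print( 'www.example.com', 'A', '192.0.2.99' );   # printed (the answer changed)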

In my tests, and in feedback from people who have tried it, PassiveDNS is very resource friendly when it comes to CPU usage (more or less idling). The current version (v0.2.4) does not implement any limitation on memory usage, so if your network sees a lot of DNS traffic, you might end up using some hundreds of megabytes of RAM for the internal cache. The most I’ve seen so far is around 100 MB. My plan is to implement some sort of “soft limit” on memory usage, so that you can specify the maximum amount of memory PassiveDNS should use. The “downside” of this, though, is that PassiveDNS would have to expire domains from its cache faster, which might result in bigger log files with duplicate entries. When I say “downside”, it’s not a real downside as I see it. From my tests with the example scripts pdns2db.pl and search-pdns.pl, it is not much of a problem keeping up with insertions into the DB (MySQL), and your last-seen timestamp will be a bit more accurate. I guess this kind of data is better suited for a NoSQL solution though, if you are collecting lots of it.
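If you want to keep first/last-seen timestamps in MySQL yourself, something along these lines should work. Note that the table and column names below are assumptions for the example, not necessarily what pdns2db.pl uses:

#!/usr/bin/perl
# Rough idea of keeping first/last seen per record in MySQL.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'DBI:mysql:database=pdns;host=localhost',
                        'pdns', 'secret', { RaiseError => 1 } );

my $sth = $dbh->prepare(q{
    INSERT INTO dns_records (query, type, answer, first_seen, last_seen)
    VALUES (?, ?, ?, FROM_UNIXTIME(?), FROM_UNIXTIME(?))
    ON DUPLICATE KEY UPDATE last_seen = VALUES(last_seen)
});

# expects lines like "1325376000 www.example.com A 192.0.2.10"
while (<>) {
    chomp;
    my ( $ts, $query, $type, $answer ) = split /\s+/;
    $sth->execute( $query, $type, $answer, $ts, $ts );
}
$dbh->disconnect;

This assumes a UNIQUE key on (query, type, answer), so duplicate entries just bump the last-seen timestamp.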

If you have read this, and you are into Network Security Monitoring, and you don’t use passive DNS in your work, I recommend you to Google it and read a bit about it.

cxtracker, daemonlogger, forensics, Information, OpenSourceSoftware, Security

cxtracker updates (0.9.7 beta)

Thanks to Ian Firns, who has implemented custom output formatting (sancp-like), pcap indexing and pcap capturing (daemonlogger-style)!

Starting from commit 6b32fb24db, cxtracker can now, in addition to writing flow data, also capture packets and output index data about where in the pcap(s) a flow starts and ends. This should potentially bring down the time needed to carve a session out of a big pcap. Right now this is all still beta, but the functionality is there, and there is also an example Perl script to carve out a session based on the index data.

Output fields of interest:
%spf pcap file containing start packet in session
%spo pcap file offset of start packet in session
%epf pcap file containing last packet in session
%epo pcap file offset of last packet in session

Example of indexed pcap output, using “%spf|%spo|%epf|%epo”:
“/tmp/test1.pcap.1321821603|10115|/tmp/test1.pcap.1321821809|62704”

So, basically, if you have a 1 GB pcap file, you would normally use tcpdump with a BPF filter to carve out the session you were looking for, reading and searching through the whole 1 GB pcap file.

With this addition to cxtracker, you can now spool right to the start byte of the session and carve from there until the end byte of the session. So if the session starts, say, 450 MB into the pcap and ends 550 MB into the pcap, you basically only have to read and carve through 100 MB of pcap data. The example Perl script (cxt2pcap.pl) opens a file handle, seeks to the right place in the pcap (_not_ reading 450 MB of data from your disk), reads the 100 MB of data from your disk while carving and filtering, and then closes the file handle.
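To illustrate the idea (and only the idea; cxt2pcap.pl in the repo is the real thing and also applies a BPF filter), a simplified seek-and-carve in Perl could look like this, assuming the start and end packets live in the same pcap file:

#!/usr/bin/perl
# Simplified seek-and-carve sketch. The real cxt2pcap.pl also BPF-filters the
# packets and handles the last packet properly; this just copies the byte range.
use strict;
use warnings;

my ( $pcap, $start_off, $end_off, $out ) = @ARGV;
die "usage: $0 <pcap> <start-offset> <end-offset> <outfile>\n" unless defined $out;

open my $in, '<:raw', $pcap or die "open $pcap: $!";
open my $o,  '>:raw', $out  or die "open $out: $!";

# Copy the 24 byte global pcap header so the carved file is readable on its own.
read $in, my $hdr, 24 or die "short read on pcap header\n";
print {$o} $hdr;

# Jump straight to the first packet of the session - nothing before it is read.
seek $in, $start_off, 0 or die "seek: $!";
read $in, my $chunk, $end_off - $start_off;
print {$o} $chunk;

close $in;
close $o;

In practice you would feed it the %spo/%epo offsets (and %spf/%epf filenames) from the index output above.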

We would love to get some feedback here, and to have people test it. Again, it is still beta, so be aware 🙂

Note: Indexing pcap files is nothing new; the sancp project added similar features, but they were never properly released.

Information, OpenSourceSoftware, passivedns, Security

Passive DNS and PassiveDNS/PRADS

For those of you not familiar with the concept of Passive DNS, there is lots of stuff about it on the intertubes…

Just some of the links:
Some use cases: http://conferences.npl.co.uk/satin/presentations/satin2011slides-Rasmussen.pdf
A public passive dns db: http://www.bfk.de/bfk_dnslogger.html?query=sans.org#result
Or just click here: http://lmgtfy.com/?q=passivedns

I have not found any good tools yet that let you build your own passive DNS DB, so I have started to walk down that path…
First off, I have coded a DNS sniffer (passivedns), and I have ported the same functionality over into PRADS. All code is in beta at the moment.

I’m announcing this release so that, if anyone is interested, I can take input on the output format 🙂
My first tests show that even the passive DNS data collected on a small network is too much… My plan is to implement an in-memory “state” so that it doesn’t print the same record more than X times over a time interval (say, if a record is the same, just print it once a day, but if it changes, print it immediately). When that is done, I’ll write a parser to feed it into a DB and a query tool to fetch passive DNS records on request.

Feedback is always welcome!

Information, Linux Distributions, OpenSourceSoftware, Security, Snort, Sourcefire

Packetcapture with Snort using the “tag” option

I did this several years ago, but when I switched to full packet capture I no longer had the need to catch pcaps of traffic that fired a rule.

You can do this with the tag option in Snort. If you want to know more, please read README.tag.

I will present you with a signature that will log the first 1000 bytes or 100 seconds (whatever comes first!) after the packet that triggered the event. I’m looking for a SYN flag in a TCP session and start my logging from there (0,packets means that there is no limit on the number of packets).

alert tcp 85.19.221.54 any <> $HOME_NET any (msg:"GL Log Packet Evil-IP 85.19.221.54 (gamelinux.org)"; flags:S; tag:session,1000,bytes,100,seconds,0,packets; classtype:trojan-activity; sid:201102011; rev:1;)

I use unified2 as the output plugin for Snort (something that Sourcefire 3D also does, IIRC), so I need to fetch the pcap from the unified log. Snort 2.9.0 and newer ships with a new tool that will help you here: u2boat. This will carve the pcaps out of the unified log:

# u2boat /var/log/snort/<unified.log.timestamp> /tmp/snort.pcap

From there, you can read /tmp/snort.pcap with tcpdump or Wireshark, etc., or just fetch the evil-IP packets:

# tcpdump -r /tmp/snort.pcap -w /tmp/Evil-85.19.221.54-traffic.pcap 'host 85.19.221.54'

If you love the console, you can read the pcap with tcpflow, etc.:

# tcpflow -c -r /tmp/Evil-85.19.221.54-traffic.pcap

I could not seem to verify that “0,packets” actually works. I also added the following line to my snort.conf:

config tagged_packet_limit: 0

But again, not sure if it works.

I wanted to do some more testing before publishing this post, but it has been sitting around for a while, so if I play more with it and find something new, I’ll write a new post 🙂

BTW, turning your Sourcefire 3D into a packet capture device is easy 🙂 After adding a rule like the one above, you can just click the “Download Packet(s)” button in the Event Information/Packet Information view 🙂 Use such a rule with care though…

Information, Linux Distributions, OpenSourceSoftware, Security

10 years of gamelinux.org….

In January 2011, gamelinux.org has its 10th birthday…

Did you know that gamelinux.org started out as the website for GamelinuX, a Linux distribution for gaming?
I never got a working release that I wanted to present to the public, and after two years of working on the GamelinuX distro the project came to a halt, as my Master’s degree and personal life took too much time away from hacking on the distro. The GamelinuX project was officially dead in September 2001 :/ And thinking of it now… do I have copies of the alpha CDs somewhere??? I should have, but I don’t know where… :/

My first security-related post was in July 2003, when Free-X released an exploit for the Xbox that would let you install Linux on it…

In March 2007, the blog entered its current form, leaving PHP-Nuke/Drupal (and clones) for WordPress.

Gamelinux.org has always been about open source and hacking (as in “finding a way to make things work”). I started playing with Linux in 1998, and it has been my OS of choice ever since. My reason for continuing to blog about security-related topics on this domain was that “Game Linux”, for me, was also associated with “gaming linux” in the sense of “hunting linux” – finding ways to break it/exploit it.

I went online for the first time with my Linux machine in 1998 and joined IRC/EFnet and the channel #Oslo. I asked if anyone was into hacking/cracking, and asked for pointers on where/how to best start reading and learning more about it. Not long after, some guy told me to look in my /root/ directory, and there was a dir with a dozen exploits in it… I realized that I had been hacked, and decided not to get back online before I knew more about how to protect myself. The sploit used, IIRC, was a buffer overflow in the wu-ftpd that shipped with the Red Hat release at the time, and wu-ftpd was enabled by default 🙂

I stayed offline with my Linux machine for about two months, using the university machines to read more about hardening Linux, firewalling, IDS, HIDS and such… For as long as I can remember, I have been interested in hacking/cracking and defending against it. So Linux+security has been an active interest for ~13 years now, with my first related job experience ~10 years ago working for a Managed Security Service Provider (MSSP).

Thinking back over the last 15 years, they have been some good years. I love what I’m doing and have no plans on quitting!

Information, OpenSourceSoftware, polman, Security

Yet another rule manager for VRT/ET/ETPRO or Suricata/Snort rules…

When I installed a new home router/firewall some months back, I set it up with an IDS (Sguil) just to have something to play with at home. I never got comfy with oinkmaster or pulledpork, as I had to dig into config files too much…

Based on my idea from sidrule on how to manage rules, and also bearing in mind cerdo, I quickly made a sidrule-like tool in Perl. I talked to some people about it, and they liked my approach to rule management. I got some very positive feedback, so I decided to rewrite it and publish the code (get polman 0.3.1 here).

Since there is no configuration file, you first start polman with the --configure option to add a RuleDB (or more) and a Sensor (or more). A RuleDB is a “database” that holds rules. The idea is that you can load Sourcefire VRT rules for Snort version X.X into one RuleDB, and also load Emerging Threats rules for Snort X.X into the same RuleDB, and have a nice set of rules to play with. As I’m currently testing Suricata, I can, with the same tool and without any extra config file or too much hassle, make a second RuleDB, this time with the VRT rules and the ET Suricata rules in it. I have one Sensor attached to the Snort RuleDB and the other one attached to the Suricata RuleDB. One tool to rule them all… (LOTR -> Lord of the Rules?).

Back to the --configure option. This enters an ASCII menu where you can add and edit RuleDBs and Sensors. For a RuleDB, you specify where to load the rules from (currently just the filesystem, but http/https is scheduled for the next release), a name, a comment, etc. For a Sensor, you specify a name, a comment, which RuleDB to use, where to write the rules (a file, for writing all rules to one file, or a dir, for writing rules out under their original filenames in that dir), where to write the sid-msg.map file, etc.

Once a RuleDB is set up, you can load rules into it either from the menu or from the command line. Once a Sensor is set up, and the specified RuleDB has rules in it, you can start playing with the Sensor rules. If you choose to write out rules from the menu straight away, it will turn on all rules that are enabled by default by the vendor (VRT/ET/ETPRO etc.). When you are back on the command line, you can start turning rules on and off. One way I start is to disable categories that I normally would not enable in my setup, for example:

polman.pl -i SensorA -m "(dos|games|icmp_info|pop3|rpc|scada|scan|snmp|sql|voip)"

This will search through all the rules based on the filename that each rule was loaded from. So the “sql” entry alone in my regexp above would typically show you all the rules from the files emerging-sql.rules, mysql.rules and sql.rules.
You will then be faced with some questions… Disable them all? Enable them all? (With this particular search, I disable all the rules.) There is also a third option: to go through the rules one by one and make a conscious decision after reading the raw rule. If you choose to enable all rules, or go rule by rule, you can also change the behaviour of the rules. Most rules are set to the action “alert”, but if you choose to enable rules, you will get the option to change the action (to alert, log, pass, drop, reject, sdrop, default or current). The options “default” and “current” may need some explanation… “Default” will set the rule to the default action set by the vendor. “Current” will leave the rule(s) as they are defined for the sensor (say you have set an alert rule to drop earlier; the rule will keep its current action, which is drop).

There are some powerful searches built into polman. You can search the classtype, metadata and msg fields at the same time. You can also search the “fields” ‘default enabled/disabled’ (the vendor state of the rule) and category (which by default is the filename the rule was loaded from).
Example:
Say you want to find all rules that VRT classifies into their most secure config:
polman.pl -i SensorA -p "policy security-ips drop"
(You can then enable them all, and set action to drop).
Say you want to limit your search, and you only want to search for Zeus/Zbot traffic…
polman.pl -i SensorA -p "policy security-ips drop" -s "(Zeus|Zbot)"
Now you can enable them all, and set action to drop 😉

If you have a sid, or a list of sids, you can enable (-e) or disable (-d) them rather easily…
polman.pl -i SensorA -e sid1,sid2,sid3,....,sidN

To load rules into a RuleDB from command line:
polman.pl -r RuleDB1 -u

To write out rules for a sensor to a file (or files if a dir is specified):
polman.pl -i SensorA -w
This also writes out the sid-msg.map file…

There is nothing wrong with reusing the same Sensor rules across multiple sensors. Indeed, that is one of the reasons I chose the name policy manager: what you define doesn’t need to be looked upon only as Sensors, but also as Policies (I have thought about the naming of Sensors/Policies a lot). In the future I hope to finish the implementation of thresholding and suppression too, so that you can edit them quickly from the command line.

Thoughts and feedback are welcome on this blog or here!
Project on github here.
An example of how to use polman here.
