Information, OpenSourceSoftware, Security

Security thoughts for 2012+

Quoting Richard Bejtlich: “Prevention will eventually fail!”

And I have always agreed. Accidents do happen; the world is not perfect. So when companies that really spend time and money on security get breached (RSA, Lockheed, Google, [place your company here?]), it follows that you will eventually get breached too.

When you realize and accept that, you may need to redefine the way you think about IT security. You should prepare for the worst: identifying what "the worst" would be for you (your company) and then identifying your most critical assets should be at the top of your list, and you should focus your effort on securing those the most.

Limit the users that have access to the most critical assets (and who work on sensitive projects etc.). These users also need special attention when it comes to awareness training and follow-up. They should have good communication with the security staff, making it easy to report anything that seems suspicious and to get positive feedback no matter what. They are an important part of picking up security issues where your technology fails! So you need them.

The most critical assets need to be monitored as close to real-time as it gets. The time from incident detection to response should be minimal, even outside working hours and on weekends.

Then the users who have access to these critical systems should also get special attention/hardening on their OSes etc. Use a modern operating system, enable the security functionality already there, and make sure that executables can't be executed from temporary directories etc. Once you have the basic security features in place (including anti-virus), you should start looking at centralized logging and alerting on suspicious activity from the logs.
You should also look into implementing different ways of monitoring anomalies in your users' usage. When do they normally log on? From where do they normally log on? Are they fetching lots of documents from the file servers? etc. And did they access the fake "secret document" that is there just for catching suspicious activity? (You need to define your own anomalies.)
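As a toy illustration of one such anomaly check – flagging logons outside a user's normal hours – something like this Python sketch (the users and baselines here are entirely invented for illustration; in practice you would build the baselines from your own logs):

```python
# Hypothetical per-user baselines of "normal" logon hours,
# e.g. learned from a few months of authentication logs.
NORMAL_HOURS = {
    "alice": range(7, 18),   # usually logs on between 07:00 and 17:59
    "bob":   range(9, 20),
}

def is_anomalous_logon(user, hour):
    """Return True if this logon hour falls outside the user's baseline."""
    baseline = NORMAL_HOURS.get(user)
    if baseline is None:
        return True  # unknown user: always worth a look
    return hour not in baseline
```

The same pattern extends to the other questions above: keep a per-user baseline (logon sources, file-server volume) and alert when an observation falls outside it.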

When the inner core (most valued assets + their users) is "secured", you should strive to maintain an acceptable level of security on the rest of the corporate office network and, just as importantly, on the public-facing part. Compromises here can be used to escalate into the "inner core" or to damage your reputation and business affairs, so keeping an acceptable level of security here "as always" is good.

As "Prevention will eventually fail!", you need to have sufficient logging up and running, so that when you do have an incident, the analyst has sufficient data to work with. This will also keep the cost down, as the time it takes to handle an incident will be lower. I'm mostly into Network Security Monitoring, so for me, NetFlow-type data, IDS events, full packet capture, proxy logs, and DNS query logs are some of the key network logs that will help me. On the host side, the more logging the better: web, email, proxy, spam, anti-virus, file access, local client logs, syslogs/eventlogs, and so on…

And remember – if you can't spot any badness, you are not looking hard enough 🙂
I always work on the theory that something in my networks is p0wned. That keeps me on my toes and keeps me actively finding new ways to spot badness.

With that – I wish you all a hacky new year!

Information, OpenSourceSoftware, passivedns, Security

PassiveDNS update (v0.2.4)

It has been a while since I had time to code on my C projects. But last week I got some time and used it to get PassiveDNS into a state where I'm more relaxed about it. The previous version (v0.1.1) used to spit out all DNS data it saw. The latest version caches DNS data internally in memory and only prints out a DNS record when it sees it for the first time or, if it is an active domain, prints it out again after 24 hours and so on (once a day). The previous version would give me gigabytes of DNS data daily in my test setup, while this version gives me about 2 megabytes. This version also only gives you A, AAAA, PTR and CNAME records at the moment. I'm open to suggestions for more (use cases would be great too!).
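The caching idea can be sketched roughly like this (a Python sketch of the logic described above, not PassiveDNS's actual C code; the class and names are mine):

```python
import time

# Re-print interval mirroring the once-a-day behavior described above.
REPRINT_INTERVAL = 24 * 60 * 60

class DnsCache:
    """Log a DNS record the first time it is seen, then at most
    once per interval for as long as the domain stays active."""

    def __init__(self, interval=REPRINT_INTERVAL):
        self.interval = interval
        self.last_logged = {}  # (query, rr_type, answer) -> timestamp

    def observe(self, query, rr_type, answer, now=None):
        """Return True if the caller should log this record now."""
        now = time.time() if now is None else now
        key = (query, rr_type, answer)
        last = self.last_logged.get(key)
        if last is None or now - last >= self.interval:
            self.last_logged[key] = now
            return True   # first sighting, or interval expired: log it
        return False      # duplicate within the interval: suppress
```

Suppressing duplicates at the sensor rather than in post-processing is what turns gigabytes of raw DNS chatter into a couple of megabytes of log.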

In my tests, and in feedback from people who have tried it, PassiveDNS is very resource-friendly when it comes to CPU usage (more or less idling). In the current version (v0.2.4) there is no limit on memory usage, so if your network sees a lot of DNS traffic, you might end up using some hundreds of megabytes of RAM for the internal cache. The most I've seen is around 100 MB at the moment. My plan is to implement some sort of "soft limit" on memory usage, so that you can specify the maximum amount of memory PassiveDNS should use. The "downside" of this, though, is that PassiveDNS would have to expire domains from its cache faster, which might result in bigger log files with duplicate entries. When I say "downside", it is not a real downside as I see it. From my tests with the example scripts, it is not much of a problem keeping up with insertions to the DB (MySQL), and your last-seen timestamp will be a bit more accurate. I guess this kind of data is better suited for a NoSQL solution, though, if you are collecting lots of it.
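A rough sketch of how such a soft limit could work – nothing like this exists in v0.2.4 yet, and the class and method names are invented: cap the number of cached entries and expire the least recently seen records first. An expired record simply gets logged again the next time it is seen, which is exactly the duplicate-entries trade-off described above.

```python
from collections import OrderedDict

class BoundedDnsCache:
    """Hypothetical 'soft limit' cache: when it grows past max_entries,
    the least recently seen records are expired first."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.entries = OrderedDict()  # key -> last-seen timestamp

    def touch(self, key, now):
        """Record a sighting, evicting the oldest entries if over the limit."""
        if key in self.entries:
            self.entries.move_to_end(key)  # most recently seen goes last
        self.entries[key] = now
        while len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)  # drop the oldest entry
```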

If you have read this, and you are into Network Security Monitoring, and you don't use passive DNS in your work, I recommend you Google it and read a bit about it.

cxtracker, daemonlogger, forensics, Information, OpenSourceSoftware, Security

cxtracker updates (0.9.7 beta)

Thanks to Ian Firns, who has implemented custom output formatting (sancp-like), pcap indexing and pcap capturing (daemonlogger-style)…!

Starting from commit 6b32fb24db, cxtracker can now, in addition to writing flow data, also capture packets and output index data about where in the pcap(s) a flow starts and ends. This should potentially bring down the time needed to carve a session out of a big pcap. Right now, all of this is in beta, but the functionality is there, and there is also an example perl script to carve out a session based on the index data.

Output fields of interest:
%spf – pcap file containing the first packet of the session
%spo – pcap file offset of the first packet of the session
%epf – pcap file containing the last packet of the session
%epo – pcap file offset of the last packet of the session

Example of indexed pcap output, using: "%spf|%spo|%epf|%epo"

So, basically, if you have a 1 GB pcap file, you would normally use tcpdump with a BPF filter to carve out the session you were looking for, reading and searching through the whole 1 GB pcap file.

With this addition to cxtracker, you can now spool right to the start byte of the session and carve from there until the end byte of the session. So if the session starts, say, 450 MB into the pcap and ends 550 MB into it, you basically only have to read and carve 100 MB of pcap data. In the example perl script, the file handle is opened, it seeks to the right place in the pcap (_not_ reading 450 MB of data from your disk), reads 100 MB of data from your disk while carving+filtering, and then closes the file handle.
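For illustration, the core of that seek-and-carve trick looks roughly like this (a Python analogue of the idea in the example perl script, not that script itself; for simplicity it assumes the whole session lives in one pcap file and reads up to the offset of the last packet – a real carver would also read that final packet's full record and apply pcap framing):

```python
def carve(index_line):
    """Parse a '%spf|%spo|%epf|%epo' index line and return the raw bytes
    between the session's start and end offsets, seeking straight to the
    start instead of reading the whole pcap from the beginning."""
    spf, spo, epf, epo = index_line.strip().split("|")
    start, end = int(spo), int(epo)
    with open(spf, "rb") as fh:
        fh.seek(start)               # skip everything before the session
        return fh.read(end - start)  # read only the session's byte range
```

The point is the seek(): the 450 MB in front of the session is never read from disk at all.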

We would love to have some feedback here, and to have people test it. Again, it is still beta, so be aware 🙂

Note: Indexing pcap files is nothing new; the sancp project added similar features, but they were never properly released.

Information, OpenSourceSoftware, passivedns, Security

Passive DNS and PassiveDNS/PRADS

For those of you not familiar with the concept of Passive DNS, there is lots of stuff on it on the intertubes…

Just some of the links:
Some use cases:
A public passive dns db:
Or just click here:

I have not found any good tools yet that let you build your own passive DNS DB, so I have started to walk down that path…
First off, I have coded a DNS sniffer (passivedns), and I have ported the same functionality over into PRADS. All code is in beta at the moment.

I am announcing this release so that, if anyone is interested, I can take input on the output format 🙂
My first tests show that the passive DNS data collected on even a small network is too much… My plan is to implement an in-memory "state" so that it doesn't print out the same record more than X times over a time interval (say, if a record is the same, just print it once a day, but if it changes, print it immediately). When that is done, I'll write a parser to feed it into a DB and a query tool to fetch passive DNS records on request.

Feedback is always welcome!

Information, Linux Distributions, OpenSourceSoftware, Security, Snort, Sourcefire

Packetcapture with Snort using the “tag” option

I did this several years ago, but when I switched to full packet capture I no longer had the need to catch pcaps of traffic firing a rule.

You can do this with the tag option in Snort. If you want to know more, please read README.tag.

I will present you with a signature that will log the first 1000 bytes or 100 seconds (whichever comes first!) after the packet that triggered the event. I'm looking for a SYN flag in a TCP session and I start my logging from there (0,packets means that there is no limit on the number of packets).

alert tcp any <> $HOME_NET any (msg:"GL Log Packet Evil-IP ("; flags:S; tag:session,1000,bytes,100,seconds,0,packets; classtype:trojan-activity; sid:201102011; rev:1;)

I use unified2 as the output plugin for Snort (something that Sourcefire 3D also does, IIRC), so I need to fetch the pcap from the unified log. Snort 2.9.0 and newer ships with a new tool that will help you here: u2boat. This will carve the pcaps out of the unified log:

# u2boat /var/log/snort/<unified.log.timestamp> /tmp/snort.pcap

From there, you can read /tmp/snort.pcap with tcpdump or wireshark etc., or just fetch the evil-IP packets:

# tcpdump -r /tmp/snort.pcap -w /tmp/Evil- 'host'

If you love the console, you can read the pcap with tcpflow etc.:

# tcpflow -c -r /tmp/Evil-

I could not seem to verify that the "0,packets" option actually works. I also added the following line to my snort.conf:

config tagged_packet_limit: 0

But again, not sure if it works.

I wanted to do some more testing before releasing this blog post, but it has been sitting around for a while, so if I play more with it and find something new, I'll post a follow-up 🙂

BTW, turning your Sourcefire 3D into a packet capture device is easy 🙂 Adding a rule like the one above, you can just click the "Download Packet(s)" button in the Event Information/Packet Information view 🙂 Use such a rule with care though…


Update on tuning snort…

I rearranged the "menu" on my blog, and put most items under "pages".

I also added my blog post on how to make Snort go faster under Linux, and made it a page…
I updated it and added, among other things, some words on DAQ and PF_RING.
If you have any tips or tricks on how to tune Snort, please mail me/add a comment, and I'll update the page.
Read the "article" here.

Information, Linux Distributions, OpenSourceSoftware, Security

10 years of….

January 2011 has its 10th birthday…

Did you know that started out as the website for GamelinuX, a Linux distribution for gaming?
I never got a working release that I wanted to present to the public, and after 2 years of working on the GamelinuX distro, the project came to a halt, as my Master's degree and personal life took too much time away from hacking on the distro. The GamelinuX project officially died in September 2001 :/ And thinking of it now… do I have copies of the alpha CDs somewhere??? I should have, but I don't know where… :/

My first security-related post was in July 2003, when Free-X released an exploit for the Xbox that would let you install Linux on it…

In March 2007, the blog entered its current form, leaving phpnuke/drupal (and clones) for wordpress. has always been about Open Source and hacking ('as in finding a way to make things work'). Since I started to play with Linux in 1998, Linux has been my OS of choice. My reason for continuing to blog security-related topics on this domain was that "Game Linux" was for me also associated with "gaming linux", meaning "hunting linux" – finding ways to break it/exploit it.

I went online for the first time with my Linux machine in 1998, and went to IRC/EFnet and the channel #Oslo. I asked if anyone was into hacking/cracking, and asked for pointers on where/how to best start reading and learning more about it. Not long after, some guy told me to look in my /root/ directory, and there was a dir with a dozen exploits… I realized that I had been hacked, and decided not to go back online before I knew more about how to protect myself. The sploit used, IIRC, was a buffer overflow in the wu-ftpd that shipped with the Red Hat release at the time, and wu-ftpd was enabled by default 🙂

I stayed offline with my Linux machine for about 2 months, using the university machines to read more about hardening Linux, firewalling, IDS, HIDS and such… For as long as I can remember, I have been interested in hacking/cracking and defending against it. So Linux+security has been an active interest for ~13 years now, with my first related job experience ~10 years ago, working for a Managed Security Service Provider (MSSP).

Thinking back over the last 15 years, they have been some good years. I love what I'm doing and I have no plans of quitting!