Suricata Logstash Kibana Utilite Pro ARM

I’m currently in the process of overhauling my personal work network; this includes deploying an inline IPS as part of the project.

Hardware List

  • Freescale i.MX6 quad-core Cortex-A9 @ 1.2GHz
  • 2GB DDR3 @ 1066 MHz
  • 32GB mSATA
  • 1 x SanDisk Ultra 64GB Class 10 MicroSD
  • 2 x 1Gb NIC (Intel Corporation I211 Gigabit Network Connection)
  • 1 x Internal WiFi 802.11b/g/n
  • 1 x USB Alfa AWUS036NHR

Complete Utilite Pro Spec

Ships with Ubuntu 12.04 LTS

You can of course change the OS on the Utilite Pro to something such as Kali or ArchAssault; the caveat being that if you want to install to the mSATA rather than run from the SD card, you’re going to need to use the serial connection.

My USB -> serial adapter has a male connector, and the connector on the Utilite is also a male DB9 … so an adapter is on order.

Topology

[LAN Router] --- [ Utilite Pro ] --- [ ISP Router ]

So as can be seen here, I’m sitting the device inline, with the intent to have it route traffic between the LAN and WAN. As an aside, I also plan to use the WiFi to provide wireless access (disabling the ISP equipment), and to allow segmented guest access for visitors via a captive portal, but that’s far from a solid plan at the moment.
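When the time comes to actually route between the two NICs in IPS mode, the rough shape is to enable forwarding and divert forwarded traffic into an NFQUEUE for Suricata to verdict. This is only a sketch; the interface roles and queue number here are my assumptions, not a finished configuration:

```shell
# Assumption: eth0 faces the ISP router (WAN), eth1 faces the LAN router
sysctl -w net.ipv4.ip_forward=1

# Push all forwarded packets into NFQUEUE 0; a Suricata built with
# --enable-nfqueue can then pick them up with: suricata -q 0
iptables -I FORWARD -j NFQUEUE --queue-num 0
```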

Suricata

The packages available from the Ubuntu ARM repos are 1.x and I want the new 2.x builds (ArchAssault, however, took my feedback and built the 2.x packages), so in the interim, until I receive the equipment required to install Arch on ARM, all the prototyping will need to use the Ubuntu install.

Building Suricata 2.x on Ubuntu 12.04 ARM

wget http://www.openinfosecfoundation.org/download/suricata-2.0.tar.gz
tar -zxvf suricata-2.0.tar.gz
cd suricata-2.0

Adapting from the instructions here.

Install core requirements

apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev \
build-essential autoconf automake libtool libpcap-dev libnet1-dev \
libyaml-0-2 libyaml-dev zlib1g zlib1g-dev libcap-ng-dev libcap-ng0 \
make libmagic-dev

Install IPS configuration requirements

apt-get -y install libnetfilter-queue-dev libnetfilter-queue1 libnfnetlink-dev libnfnetlink0

Install logstash output format (eve.json) requirements

apt-get -y install libjansson-dev libjansson4

Configure and build Suricata

I want everything to run on the SD card at this time, as I plan to replace the OS and thus everything on the mSATA ;-)

mkdir -p /sdcard/suricata/{usr,etc,var}
./configure --enable-nfqueue --prefix=/sdcard/suricata/usr --sysconfdir=/sdcard/suricata/etc --localstatedir=/sdcard/suricata/var
make && make install-full

--build-info

After completing the above, your build info should look like:

root@utilite:/sdcard/suricata/usr/bin# ./suricata --build-info
This is Suricata version 2.0 RELEASE
Features: NFQ PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_LIBJANSSON 
SIMD support: none
Atomic intrisics: 1 2 4 8 byte(s)
32-bits, Little-endian architecture
GCC version 4.6.3, C version 199901
compiled with -fstack-protector
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
compiled with LibHTP v0.5.10, linked against LibHTP v0.5.10
Suricata Configuration:
  AF_PACKET support:                       yes
  PF_RING support:                         no
  NFQueue support:                         yes
  IPFW support:                            no
  DAG enabled:                             no
  Napatech enabled:                        no
  Unix socket enabled:                     yes
  Detection enabled:                       yes

  libnss support:                          no
  libnspr support:                         no
  libjansson support:                      yes
  Prelude support:                         no
  PCRE jit:                                no
  libluajit:                               no
  libgeoip:                                no
  Non-bundled htp:                         no
  Old barnyard2 support:                   no
  CUDA enabled:                            no

  Suricatasc install:                      yes

  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Profiling enabled:                       no
  Profiling locks enabled:                 no
  Coccinelle / spatch:                     no

Generic build parameters:
  Installation prefix (--prefix):          /sdcard/suricata/usr
  Configuration directory (--sysconfdir):  /sdcard/suricata/etc/suricata/
  Log directory (--localstatedir) :        /sdcard/suricata/var/log/suricata/

  Host:                                    armv7l-unknown-linux-gnueabi
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no

You can now run Suricata in IDS mode:

LD_LIBRARY_PATH=/sdcard/suricata/usr/lib /sdcard/suricata/usr/bin/suricata -c /sdcard/suricata/etc/suricata/suricata.yaml -i ethN

NOTE: The intention is to run in IPS mode; however, IDS is suitable for completing the integration with Logstash and Kibana.

Get some event data

Configure your SSHD for key-only authentication, harden it to your preferences, and then just expose SSH to the internet for a few hours. I’m not kidding here: within ~12 hours I’d logged well over 1K attempted logins, enough for Suricata to log some ET COMPROMISED Known Compromised or Hostile Host Traffic group events.
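For reference, the key-only hardening amounts to something like the following sshd_config fragment; this is my suggested baseline rather than anything prescribed above, so adjust to taste:

```
# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```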

Setup ARM Java

apt-get install openjdk-7-jre

Logstash

wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.1.tar.gz
tar -zxvf logstash-1.4.1.tar.gz
cd logstash-1.4.1
mkdir -p etc/conf.d
cat >> etc/conf.d/suricata.conf << EOF
input {
  file { 
    path => ["/sdcard/suricata/var/log/suricata/eve.json"]
    codec =>   json 
    type => "SuricataIDPS-logs"
  }

}

filter {
  if [type] == "SuricataIDPS-logs" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }

  if [src_ip]  {
    geoip {
      source => "src_ip" 
      target => "geoip" 
      database => "/sdcard/logstash-1.4.1/vendor/geoip/GeoLiteCity.dat" 
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

output {
  elasticsearch {
    embedded => true
  }
}
EOF
bin/logstash -f etc/conf.d/suricata.conf

This will take some time to start up. Note that if you want to load in an existing log set, add start_position => "beginning" to the file {} declaration before starting Logstash. After the back-loading has completed, I recommend you remove this line: the option defaults to “end” and Logstash tracks its position in the file, but if you leave it as “beginning” it will always start at the beginning of the log and needlessly take a long time to start up.
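In other words, for the initial back-load the file {} declaration would temporarily look like this:

```
input {
  file {
    path => ["/sdcard/suricata/var/log/suricata/eve.json"]
    codec => json
    type => "SuricataIDPS-logs"
    start_position => "beginning"  # remove once the existing log is loaded
  }
}
```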

ArgumentError: cannot import class 'java.lang.reflect.Modifier' as 'Modifier'

Something screwy occurs within JRuby.

Install oracle java

Download ejre-7u55-fcs-b13-linux-arm-vfp-hflt-client_headless-17_mar_2014.tar.gz from here

“ARMv6/7 Linux - Headless - Client Compiler EABI, VFP, HardFP ABI, Little Endian”

tar -zxvf ejre-7u55-fcs-b13-linux-arm-vfp-hflt-client_headless-17_mar_2014.tar.gz
update-alternatives --install "/usr/bin/java" "java" "/path/to/ejre1.7.0_55/bin/java" 1
update-alternatives --config java
...
There are 2 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-7-openjdk-armel/jre/bin/java   1043      auto mode
  1            /path/to/ejre1.7.0_55/bin/java                    1         manual mode
  2            /usr/lib/jvm/java-7-openjdk-armel/jre/bin/java   1043      manual mode

Press enter to keep the current choice[*], or type selection number: 
 

Select 1, or whatever index the Oracle JRE is shown at.

Kibana

Kibana is really just a web interface, so download it and install your preferred webserver to run it from: nginx / Apache / lighttpd etc …

cd /path/to/kibana/apps/dashboards/
curl -o suricata2.json https://gist.githubusercontent.com/regit/8849943/raw/15f1626090d7bb0d75bca33807cfaa4199b767b4/Suricata%20dashboard

In your browser now go to http://your_device/path/to/kibana/#/dashboard/file/suricata2.json

Heartbleed - CVE-2014-0160

Heartbleed sounds like a bad B-movie title, or the title of some cheesy pop song, doesn’t it?

If only that were the case. As I have covered in my blog post on mysqlperformanceblog.com, Heartbleed affects OpenSSL versions 1.0.1 through 1.0.1f.

I’ve spent the last 2 days working 15hrs plus on this, and this is during some nasty jet lag courtesy of my return trip from Percona Live 2014. As the code keeps being pulled down for some reason, I have mirrored some effective P.o.C code at github. NOTE: This is not my own code, I am only mirroring it; use at your own risk etc etc …

I encourage you to both read my blog post and my colleague Ernie’s blog post on the matter.

the TL;DR

  1. 0.9.8 and 1.0.0 versions ARE NOT VULNERABLE
  2. 1.0.1 -> 1.0.1f ARE VULNERABLE
  3. some distros are backporting the fix into their 1.0.1e (Red Hat, Ubuntu, Debian etc …)
  4. check changelogs of packages for CVE-2014-0160 fixes
  5. you MUST rotate keys and certificates and assume ALL user credentials have been compromised
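For item 4, a quick way to check your installed version and hunt the package changelog for the backported fix might be the following; the package-manager invocations are my suggestion and will vary by distro:

```shell
# Report the installed OpenSSL version; upstream 1.0.1 through 1.0.1f are vulnerable
openssl version

# Debian/Ubuntu: search the package changelog for the CVE
apt-get changelog openssl 2>/dev/null | grep -m1 CVE-2014-0160

# Red Hat and derivatives:
rpm -q --changelog openssl 2>/dev/null | grep -m1 CVE-2014-0160
```

If the grep finds the CVE, the distro has backported the fix even though the version string may still read 1.0.1e.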

In my own testing using the POC code I found the following.

  1. I could not dump the SSL keys from memory (I may have just been unlucky here, some sources are claiming they have been able to do this).
  2. I could dump ANY content which had at some point been “in flight”, e.g. user login forms and server responses (usernames + passwords + session cookies etc).
  3. I could ONLY dump the memory of the process using TLS (some sources claimed to be able to walk the entire server memory, I found this to not be the case).

This blog post may not be my usual deep dive; however, given the work being done and the links to blog posts on MPB, this should be enough information for you, the reader, to go on in the interim.

UPDATE: this video provides a great description of the vulnerability in detail.


Kali Linux on the X1 Carbon Grub-pc Encrypted LVM FINALLY!

So finally I have Kali Linux 1.0.6 installed as the main OS on my Lenovo X1 Carbon; this was not a trivial matter, to say the least.

First of all, the installation routine fails trying to install grub-pc; this is due to the network configuration step of the routine creating a blank /etc/resolv.conf.

So right after network configuration has completed, inspect your /etc/resolv.conf; if it is blank, as mine was:

echo "nameserver 8.8.8.8" >> /etc/resolv.conf

Ensure this is done BEFORE the installer reaches the grub installation step; grub will then install as expected.

Next up, post reboot, the encrypted LVM fails to mount, citing that it was unable to find kali-root:

Loading, please wait...
    Volume group "kali" not found
    Skipping volume group kali
Unable to find LVM volume kali/root

Help is, however, at hand: boot back into the live distro’s forensics mode. What follows is my somewhat condensed and modified procedure.

blkid /dev/sda5
/dev/sda5: UUID="XXXXXX" TYPE="crypto_LUKS"
cryptsetup luksOpen /dev/sda5 sda1_crypt
vgchange -ay kali
mkdir -p /mnt/root
mount /dev/mapper/kali-root /mnt/root
cd /mnt/root
mount -t proc proc proc
mount -t sysfs sys sys
mount -o bind /dev dev
mount /dev/sda1 boot
chroot /mnt/root
echo "sda1_crypt UUID=XXXXXX none luks" > /etc/crypttab
update-initramfs -u
exit
reboot

As for UEFI / EFI? Don’t even get me started; nothing I have spent long evening hours looking into works for Kali, not even using the Fedora shim, nothing at this time. I’m very annoyed at this and will post again once I arrive at a resolution.

In the interim, CaptTofu released some interesting material on leveraging Docker to test PXC deploys; he’s even gone so far as to produce some Ansible playbooks for the deployment process. I’ve been helping to work in some respects on the Ansible side, and I can see a lot of potential in Docker as well as a lot of issues (it is a very young project; it reminds me a lot of OpenStack back in the Diablo RC days). I encourage you to check this out.

Spanking the Scan Monkies

Hello 2014, do I have your attention?

Early warning: this is a satirical blog post with colourful language, the sole intent of which is to troll automated scanners and script kiddies; those of a sensitive disposition should stop reading now.

Shortly after watching @chrisjohnriley’s Defcon 21 talk Defense by Numbers, I began thinking about how I could implement some of the methods within nginx, taking them to another level by trolling and generally pissing off anyone scanning the server.

Some background: this nginx server does nothing but bounce old domains and links to their appropriate place on this blog, so it’s out of the way and not something you’d typically see attacked en masse.

(Seriously, I see one or two hits from search engines on the instance; except recently China Telecom must LOVE my blog: 500K requests in an hour … aww shucks guys, I love you too.)

So let us start with response codes, because 4xx response codes are so last century, right? I really can’t see why the 7xx-rfc isn’t already a standard.

So I opted to respond to automated scans of my nginx instance with the 793 response code, helpfully letting the scanner know that the Zombie Apocalypse has occurred where the instance is located, and that I care not for their scans as I’m either shambling along biting everyone within reach and incoherently moaning, or too busy trying to not get my ass zombified.

Zombie apocalypse is serious business; they should appreciate my early warning!

Providing this sorely needed public service is this small nginx server block, placed after my main server block handling all valid requests.

server {
    listen 80 default_server;

    server_tokens off;
    autoindex off;
    root /var/www/rickroll;
    index index.html;
    add_header X-Never-gonna-give-you-up "";
    add_header X-Never-gonna-let-you-down "";
    add_header X-Never-gonna-run-around-and-desert-you "";
    add_header X-Never-gonna-make-you-cry "";
    add_header X-Never-gonna-say-goodbye "";
    add_header X-Never-gonna-tell-a-lie-and-hurt-you "";

    return 793; #zombie apocalypse
}

If only those scanners could fully appreciate the MIDI tones of Rick Astley’s melodic symphony soothing them to sleep in the wake of the end of all things via Zombie Apocalypse … alas, we wonder: do calculators dream?

Yup, no hostnames were being sent as part of the request, so China Telecom doesn’t love my blog after all … well, screw you guys! I thought we had something, but you were just a fake …

But wait, there’s more! Just as the sweet verses dictate, we’re never going to give you up; so if you’re making so many requests in such a short time, you must want to stay connected to me for as long as possible. It’s OK, I’ve got you covered.

iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds NN --hitcount N -j TARPIT

Forever together … into the tarpit … shhhh … only dreams now …

And for those not snuggling with us down in the tarpit, sorry but you’ll just need to prove you really want to be in there; sticky cuddles …

YMMV etc; this isn’t a fully tested configuration, and it’s not meant to do anything but troll all the automated scanners out there hammering the instance.


Two Factor SSH Authentication - Pubkey Yubikey

OpenSSH >= 6.2 supports “multi factor authentication” which is to say you can require multiple forms of identification to complete authentication for the SSH connection.

A real-world comparison would be, I suppose, providing more than one form of identification to open a bank account.

OpenSSH 6.2 introduces the AuthenticationMethods setting; this, combined with pam_yubico, can be used to require that the connection provides both the SSH public key and the Yubikey OTP (one-time password).

OpenSSH 6.2 is included in Fedora 19, and OpenSSH has supported Match Group for a while now (I covered the use of such for chrooting users easily).

So we’re going to combine these such that we attain the following:

  1. SSH Connections will require pubkey authentication
  2. SSH Connections will also require yubikey authentication
  3. The above will be applied to specified users via the Match Group clause

To be clear: if the connection does not provide a valid public key for the user, it will never reach the Yubikey prompt stage; and if the provided Yubikey OTP is invalid, authentication will also fail.

Install the pam_yubico package: sudo yum -y install pam_yubico

At the end of your /etc/ssh/sshd_config add the following:

Match Group mfagroup
    AuthenticationMethods pubkey,keyboard-interactive
    

You will also need to set ChallengeResponseAuthentication yes in your sshd_config file.

The above is the bare minimum; you can add any additions you wish. Then restart sshd.

Create the file /etc/pam.d/yubi-auth with the content:

auth sufficient pam_yubico.so id=your_yubicloud_id key=your_yubicloud_api_key authfile=/etc/ssh/yubikey_mappings url=https://api.yubico.com/wsapi/2.0/verify?id=%d&otp=%s debug

Note: I am specifying the URL because the default will use http and not https, despite what the documentation might say.

Create the file: /etc/ssh/yubikey_mappings with the content:

username:yubikey_identity

You can get your yubikey identity from demo.yubico.com
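As an aside (not from the original instructions): the public identity is simply the first 12 modhex characters of any OTP the key emits, so you can also extract it yourself:

```shell
# A Yubikey OTP is 44 modhex characters; the leading 12 are the static public ID
OTP="ccccccbcgujhingjrdejhgfnuetrgigvejhhgbkugded"  # example only, not a live OTP
echo "$OTP" | cut -c1-12   # prints: ccccccbcgujh
```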

Edit /etc/pam.d/sshd so that the first lines read:

#%PAM-1.0
auth       include     yubi-auth

And finally, create a user in your group; in this case we’re using mfagroup.

useradd -g mfagroup -s /bin/bash username, and install their public SSH key in /home/username/.ssh/authorized_keys, ensuring proper permissions.
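The “proper permissions” part is worth spelling out, since sshd (with the default StrictModes yes) will refuse keys that live under group- or world-writable paths. A sketch, demonstrated against $HOME for safety; substitute the new user’s home directory, and when running as root follow up with a chown to the new user:

```shell
D="$HOME/.ssh"
mkdir -p "$D"
chmod 700 "$D"                     # directory must be private to the user
touch "$D/authorized_keys"
chmod 600 "$D/authorized_keys"     # key file likewise
stat -c '%a' "$D" "$D/authorized_keys"   # expect: 700 then 600
```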

All being well, when you try to log in with the user you should see the following:

Authenticated with partial success.
Yubikey for `username': 

And you have successfully set up two-factor SSH authentication with public keys.