NCA Challenge 2015 Progress Writeup

NOTE: I was unable to complete the challenge ahead of the 18th of July deadline due to other commitments; what follows is a write-up of my progress in the challenge after ~6 hours total spent.

On watching the video I noted 299879 as the evidence ID on the bag; this may be relevant later.

Unzip nca_image.zip

This yields nca_image.bin; let's use binwalk to analyse the file.

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
7995373       0x79FFED        Cisco IOS microcode for "l"
95256215      0x5AD7E97       Zip archive data, at least v2.0 to extract, compressed size: 3790080,  uncompressed size: 3799842, name: "e-mail.docx"
99046429      0x5E7541D       End of Zip archive
191886470     0xB6FF486       QEMU QCOW Image

Running binwalk -e extracts everything except the identified QCOW image, so I use my helper script:

#!/bin/bash

echo -n "Can haz start offset hex?:"
read start_off
echo -n "Can haz end offset hex?:"
read end_off

# bc's ibase=16 expects uppercase hex digits
start_int=$(echo "ibase=16;${start_off}" | bc)
end_int=$(echo "ibase=16;${end_off}" | bc)
chunk_int=$(echo "${end_int} - ${start_int}" | bc)

echo "It's not safe to go alone, here take this: dd if=/path/to/space/kitteh of=/path/to/space/kitteh_part skip=${start_int} bs=1 count=${chunk_int}"

We manually carve the file out

file_carve_dd_calc 
Can haz start offset hex?:B6FF486
Can haz end offset hex?:C6ED5F0
It's not safe to go alone, here take this: dd if=/path/to/space/kitteh of=/path/to/space/kitteh_part skip=191886470 bs=1 count=16703850

Trying to analyse the QCOW file using

  1. guestfish
  2. qemu-* tools (even pulled down the latest source and compiled)

Ultimately this appears to be a false identification; opening the file in bless showed many occurrences of the QFI header associated with a QCOW image, and errors such as

... not supported by this qemu version: QCOW version 3330981897
... not supported by this qemu version: QCOW version -963985399

These varied with the version of qemu being run, so I moved on to analysing the rest of the extracted files.

e-mail.docx

Opening the file (which I did on a Tails VM to err on the side of caution, citing paranoia over the potential for macros) shows what appears to be a raw email complete with headers.

And an embedded oleObject

So I unzip the .docx file and again use binwalk to inspect it.

unzip e-mail.docx
Archive:  e-mail.docx
  inflating: [Content_Types].xml     
  inflating: _rels/.rels             
  inflating: word/_rels/document.xml.rels  
  inflating: word/document.xml       
  inflating: word/footnotes.xml      
  inflating: word/footer3.xml        
  inflating: word/footer2.xml        
  inflating: word/footer1.xml        
  inflating: word/header2.xml        
  inflating: word/header3.xml        
  inflating: word/header1.xml        
  inflating: word/endnotes.xml       
  inflating: word/embeddings/oleObject1.bin  
  inflating: word/theme/theme1.xml   
  inflating: word/media/image1.emf   
  inflating: word/settings.xml       
  inflating: word/fontTable.xml      
  inflating: word/webSettings.xml    
  inflating: docProps/app.xml        
  inflating: docProps/core.xml       
  inflating: word/styles.xml   

binwalk word/embeddings/oleObject1.bin

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
38019         0x9483          Zip encrypted archive data, compressed size: 2391816,  uncompressed size: 2960344, name: "fl46.wav"
2429884       0x2513BC        Zip encrypted archive data, compressed size: 1536,  uncompressed size: 1958, name: "my_key.asc"
2431471       0x2519EF        Zip encrypted archive data, compressed size: 1373482,  uncompressed size: 1373454, name: "usb_key.gpg"
3805313       0x3A1081        End of Zip archive

encrypted zip

binwalk has provided us with information showing this is an encrypted archive containing three files, so it's necessary to extract the zip file and break the encryption to get at the files within.

 zipinfo T0PS3RET.zip 
Archive:  T0PS3RET.zip
Zip file size: 3767679 bytes, number of entries: 3
warning [T0PS3RET.zip]:  131 extra bytes at beginning or within zipfile
  (attempting to process anyway)
-rw-a--     6.3 fat  2960344 Bx u099 15-Jun-23 11:26 fl46.wav
-rw-a--     6.3 fat     1958 Bx u099 07-Feb-06 15:21 my_key.asc
-rw-a--     6.3 fat  1373454 Bx u099 07-Feb-06 15:19 usb_key.gpg
3 files, 4335756 bytes uncompressed, 3766798 bytes compressed:  13.1%

Running strings on the file also reveals the following, which may be of use later as it indicates the user “JAMIEH”:

Z:\CSC-Final-Revision\Final ‘e-mail’\T0PS3RET.zip C:\Users\JAMIEH~1\AppData\Local\Temp\T0PS3RET.zip

Ok let’s john this bastard

JohnTheRipper/run/zip2john ./T0PS3RET.zip > T0PS3RET.hashes
JohnTheRipper/run/john ./T0PS3RET.hashes
JohnTheRipper/run/john ./T0PS3RET.hashes --show

T0PS3RET.zip:flower:::::T0PS3RET.zip

wav and gpg files

So now we have three files.

  1. fl46.wav - upon listening, this is clearly DTMF tones followed by a modem handshake
  2. my_key.asc - a private GPG key
  3. usb_key.gpg - an encrypted GPG payload

I set John up to start brute-forcing the GPG key password whilst inspecting the other files; think of it as an efficient workflow. We may not need the brute force, but there's no harm in having it run whilst we continue the investigation.

JohnTheRipper/run/gpg2john -S my_key.asc > my_key.asc.hashes

Listening to the wav file in VLC confirms the DTMF tones and modem handshake; using multimon I can extract the numbers associated with the DTMF tones.

multimon-ng -t wav fl46.wav

On this first pass there is some odd behaviour occurring: some numbers are repeated and some appear to be skipped. Opening the wav file in Audacity reveals the issue.

The wave file is stereo, meaning there are both left and right channels. Observing the pattern above, it's clear this is an 11-digit telephone number, so we “flatten” the file to mono and run it through multimon again.
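The flattening can be done in Audacity (or with sox); as a rough illustration of what the downmix actually does, here is a stdlib-only Python sketch assuming 16-bit PCM input — the function name and the fl46-mono.wav output name are my own:

```python
import struct
import wave

def flatten_to_mono(src, dst):
    """Average the left/right channels of a 16-bit PCM stereo WAV into mono."""
    with wave.open(src, 'rb') as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
    samples = struct.unpack('<%dh' % (len(frames) // 2), frames)
    # Samples are interleaved L, R, L, R, ... so average each pair
    mono = [(samples[i] + samples[i + 1]) // 2 for i in range(0, len(samples), 2)]
    with wave.open(dst, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(struct.pack('<%dh' % len(mono), *mono))

# e.g. flatten_to_mono('fl46.wav', 'fl46-mono.wav')
```

A tone present on only one channel survives the average at half amplitude, which is why multimon stops dropping digits once both channels are merged.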

multimon-ng -t wav fl46.wav
DTMF: 0
DTMF: 7
DTMF: 4
DTMF: 8
DTMF: 2
DTMF: 3
DTMF: 5
DTMF: 1
DTMF: 2
DTMF: 4
DTMF: 9
DTMF: *

Whilst it was not needed, it's worth noting that sox can be used to convert to a multimon-native format:

sox -t wav fl46-mono.wav -esigned-integer -b16 -r 22050 -t raw fl46-mono.raw

Calling the number (via an anonymised service, of course) yields a very faint voice reading numbers aloud; this is why having the call recording prior to dialling is such an advantage. Some post-processing to raise the volume and careful listening yields: 533020565

usb_key.gpg

The numbers are indeed the gpg key password:

gpg -d usb_key.gpg > usb_key.img

You need a passphrase to unlock the secret key for
user: "Black Oleander Top Secret <bl4ck0l34nd3r70p53cr37@devnull.invalid>"
2048-bit RSA key, ID C96C8291, created 2015-06-16

gpg: encrypted with 2048-bit RSA key, ID C96C8291, created 2015-06-16
      "Black Oleander Top Secret <bl4ck0l34nd3r70p53cr37@devnull.invalid>"

usb_key.img 

file -i usb_key.img
usb_key.img: application/x-tar; charset=binary

tar -xvf ./usb_key.img
Formula.docx
Ledger.xlsx
X101D4.docm
Charles.pptm

binwalk usb_key.img 

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             POSIX tar archive, owner user name: "root", owner group name: "root"

Charles.pptm

A 2-slide presentation. First slide: “It is not the strongest of the species that survives, but the more adaptable”, with a background portrait of Charles Darwin, and an oleEmbedded file “TransferCode.zip.001”; this could infer a multipart zip.

As noted before ppt/embeddings/oleObject1.bin

Slightly odd however ...

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
4247          0x1097          Zip archive data, at least v2.0 to extract, compressed size: 977930,  uncompressed size: 1070767, name: "TransferCode.pdf"

Running binwalk -e produces the .zip and the .pdf file; the .pdf file is unreadable as it is incomplete, therefore we know that this zip file is the head of a multipart archive.

Now I have TransferCode.zip.001

Formula.docx

Embedded images show a formula, plus TransferCode.zip.002; ok, yup, looking like a multipart zip. A Google image search gives “The Drake Equation” and also “The Equation of Life”, a 2014 film.

Found the following strings

C:\Users\Jamie H\AppData\Local\Microsoft\Windows\INetCache\Content.Word\TransferCode.zip.002 C:\Users\JAMIEH~1\AppData\Local\Temp\TransferCode.zip.002

Now I have TransferCode.zip.002

Ledger.xlsx

Account numbers, many 25000 transfers; the descriptions may be erroneous: “cabal”, “lord”, etc.

Binwalk extraction noted something interesting …

./_Ledger.xlsx.extracted/secret_hash/1902d4bfb197e0b7372fc0ec592edabbce0124845a270e4508f247e1faffecce

strings ./_Ledger.xlsx.extracted/xl/embeddings/oleObject1.bin

C:\Users\Jamie H\Documents\CSCUK-Challenge-1\Stage 2\TransferCode.zip.003 C:\Users\JAMIEH~1\AppData\Local\Temp\TransferCode.zip.003

Now I have TransferCode.zip.003

X101D4.docm

Noted VBA from a strings run: large binary text (101 etc …), another hash 13790e4b2ed8345dc51b15c833aa02a33171bd839c543819d19b41bd3962943c followed by “keep looking ;-)”. Used binwalk to extract the files.

strings _X101D4.docm.extracted/word/vbaProject.bin

 curl https://gist.github.com/anonymous/e13e60e1975bceb04c20 > 0wned.txt
 activate 1337 hack tool
 destroy the world
 mission complete

the gist contains the file TransferCode.zip.004 in base64 encoding: https://gist.githubusercontent.com/anonymous/e13e60e1975bceb04c20/raw/145cad938bd2c4391fc55f5b482625aa86dae776/gistfile1.txt

from base64 import b64decode

data = open('TransferCode.zip.004.raw').read()
data = data.replace("local file = TransferCode.zip.004\n'Begining of file\n", '')
data = data.replace("\n'End of File", "")
raw = b64decode(data)
out = open('TransferCode.zip.004', 'wb')
out.write(raw)
out.close()

The end …

Unfortunately this is where I must end. I originally did the above work on June 30th 2015 in my evening, and was not able to pick it up again until authoring this blog post … past the deadline. The PDF file appears to be the final stage. (Just cat the zip files together and unzip to get the PDF file.)
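The reassembly works because a zip's central directory sits at the end of the file, so byte-wise concatenation of sequentially split parts restores the original archive. A small Python sketch of the idea (the file names mirror the challenge's, but the payload here is obviously a stand-in):

```python
import io
import zipfile

# Build a sample archive in memory, standing in for the real one
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as z:
    z.writestr('TransferCode.pdf', b'%PDF-1.4 stand-in payload')
data = buf.getvalue()

# Split into sequential chunks, like TransferCode.zip.001 .. .004
size = len(data) // 4 + 1
parts = [data[i:i + size] for i in range(0, len(data), size)]

# Reassembly is just concatenation -- the shell equivalent is
#   cat TransferCode.zip.00* > TransferCode.zip && unzip TransferCode.zip
joined = b''.join(parts)
with zipfile.ZipFile(io.BytesIO(joined)) as z:
    assert z.read('TransferCode.pdf') == b'%PDF-1.4 stand-in payload'
```

(The shell glob expands in lexical order, which is why zero-padded .001/.002/… suffixes concatenate correctly.)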

Oh well, it was an interesting puzzle at least, and a welcome exercise of skills I do not get to use nearly enough.

Pi2 Cluster - Docker Swarm

I am currently working on overhauling my network and devices once again, so finally (maybe) I'll actually get around to producing a commodity cluster. This post focuses on getting docker up and running on the RaspberryPi2.

Hardware

  • 5 x RasPi2
  • 1 x Utilite Pro

Focusing on the Pi2s here, as I've not rebuilt the Utilite at this moment in time.

Installing Arch linux

Why are we using Arch and not Raspbian? Simply because of time constraints: Arch has ARM packages for docker (and openvswitch), and this will save some time going forward.

As I'll be imaging multiple SD cards I wrote a bash script to save some time.

This assumes you have already done the partitioning per the Arch installation document.

WARNING Make sure you do not blindly use my script; the device paths may be different and you do not want to wipe out the wrong device.
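The script itself is not reproduced here; as a rough, deliberately inert sketch of the workflow it wraps (device path, mount point, and tarball name are placeholder assumptions; the echo prefix keeps every destructive step a dry run until you remove it):

```shell
#!/bin/bash
# Dry-run sketch of imaging an SD card with Arch Linux ARM.
# DEV, MNT and TARBALL are placeholders -- verify them before doing this for real.
set -eu
DEV=${DEV:-/dev/sdX}
MNT=${MNT:-/mnt/sdcard}
TARBALL=${TARBALL:-ArchLinuxARM-rpi-2-latest.tar.gz}

echo mount "${DEV}2" "${MNT}"              # root partition, per the Arch doc
echo bsdtar -xpf "${TARBALL}" -C "${MNT}"  # unpack the root filesystem
echo sync
echo umount "${MNT}"
```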

Installing Docker

pacman -S docker

Caveats of docker on ARM

Most docker images are x86 or x86_64 so when you use docker pull and try to docker run you’re going to have a bad time …

docker run swarm
FATA[0001] Error response from daemon: Cannot start container caff048f6af28eca4648078ac1452b9464dcc16f5273a3b3d0912b1c00e0423f: [8] System error: exec format error
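That “exec format error” is the kernel refusing a binary built for another architecture; running `file` on the binary shows the target, or you can read the ELF header's e_machine field directly. A hypothetical stdlib Python sketch (the machine-code table is abbreviated):

```python
import struct

# A few common e_machine values (abbreviated table)
ELF_MACHINES = {0x03: 'x86', 0x28: 'ARM', 0x3e: 'x86-64', 0xb7: 'AArch64'}

def elf_machine(path):
    """Return the target architecture recorded in an ELF binary's header."""
    with open(path, 'rb') as f:
        hdr = f.read(20)
    if hdr[:4] != b'\x7fELF':
        raise ValueError('not an ELF binary')
    # e_machine is a half-word at offset 18; byte order comes from e_ident[5]
    endian = '<' if hdr[5] == 1 else '>'
    (machine,) = struct.unpack_from(endian + 'H', hdr, 18)
    return ELF_MACHINES.get(machine, hex(machine))
```

An image whose binaries report x86-64 will never start on the Pi2's ARMv7 kernel, which is why the stock swarm image fails below.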

Running swarm without running swarm

The swarm docker images will not run on ARM, so what do we do?

Simple: we build the swarm binary from source.

pacman -S golang godep

Check the github readme via the link above to get swarm to compile

Start the swarm

On one node run go/bin/swarm create and record the token

Now on every node

go/bin/swarm --addr NNN.NNN.NNN.NNN:2375 token://the_token_from_create

Now we need to start the manager; this can be on any node, or even on a separate machine such as your laptop / desktop.

go/bin/swarm manage -H tcp://NNN.NNN.NNN.NNN:2376 token://the_token_from_create

Check the swarm

Again this can be run from any docker client.

docker -H tcp://XXX.XXX.XXX.230:2376 info
Containers: 1
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 5
 alarmpi: XXX.XXX.XXX.227:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 970.7 MiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.18.14-1-ARCH, operatingsystem=Arch Linux ARM, storagedriver=aufs
 alarmpi: XXX.XXX.XXX.229:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 970.7 MiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.18.14-1-ARCH, operatingsystem=Arch Linux ARM, storagedriver=aufs
 alarmpi: XXX.XXX.XXX.226:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 970.7 MiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.18.14-1-ARCH, operatingsystem=Arch Linux ARM, storagedriver=aufs
 alarmpi: XXX.XXX.XXX.230:2375
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 970.7 MiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.18.14-1-ARCH, operatingsystem=Arch Linux ARM, storagedriver=aufs
 alarmpi: XXX.XXX.XXX.228:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 970.7 MiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.18.14-1-ARCH, operatingsystem=Arch Linux ARM, storagedriver=aufs

So there we have it: 20 available ARM cores all running in a docker swarm. Seems simple, doesn't it? Finding the correct information to make this all work, however, was a trial in itself.

TODO

  • Rebuild utilite-pro, make part of the docker swarm (bringing the core count to 24)
  • Force docker to use TLS
  • Try to get ceph compiling (throwing issues about not finding any high precision timers)
common/Cycles.h:76:2: error: #error No high-precision counter available for your OS/arch
asm volatile ("mrc p15, 0, %0, c15, c12, 1" : "=r" (cc));
  • Write up notes on getting Logstash 1.5.0 and docker on ARM to play nice together
  • Complete setup of openvswitch
  • Explore deployment of cuckoo sandbox
  • Explore Hadoop components
  • Write up notes on distccd setup (this really speeds up compilation time)
  • Write up systemd entries for swarm (allow automatic swarm cluster startup on reboot).

Photos

I’m uploading photos and screenshots of the cluster as progress is made here

Why Pi2?

We can't all get our hands on an HP Moonshot. I debated for some time what to use; the Pi2 won out due to:

  • Price
  • Form factor
  • No. cores
  • Readily available distros and packages
  • Readily available accessories (cases, etc..)
  • Low power consumption (5 pi2, 1 utilite-pro, mikrotik switch, USB thumbdrives, and USB HDs, all running at just under 33 watts)
  • ARM architecture

CVE-2015-1027 Percona-toolkit and Percona-xtrabackup

Since my move to information security architect at Percona back in November of 2014, I've been able to begin to curate and build a responsible disclosure program which I hope best reflects that of a responsible open source vendor.

There is still plenty to do here of course, and more is yet to come on this front.

The first public success story may be considered a minor one but I feel it is an important step toward responsible disclosure.

The blog post disclosure on percona.com may be found here and I’m hosting a plaintext version here

The initial research began 2014-12-16; at this time a functional PoC was created and distributed internally to allow the developers to test their fix. This means from concept to fix (2015-01-16) took one calendar month, with percona-toolkit 2.2.13 being released 2015-01-26 and percona-xtrabackup 2.2.9 being released 2015-02-17.

So why, you may ask, did the disclosure not occur until 2015-05-06? Simply put, to allow users and distros to update; and frankly this was by far the hardest part: trying to elicit a response from distros began to seem a fruitless task.

And thus I had planned to just go ahead with the disclosure on 2015-04-30; it was around this time that we were contacted by the people over at oCERT regarding an entirely separate issue, CVE-2015-3152, and you can read more about how this is looking to be addressed on Todd Farmer's blog.

Following the interaction with oCERT (namely Andrea B), we’ve since applied for membership with oCERT and work continues on curating a responsible disclosure plan.

If you have any suggestions / comments on the progression of the responsible disclosure program I’d be glad to hear them via email to:

david {dot} busby {at} percona {dot} com

You can use either my gpg pubkey at keybase.io or 0x5422aa2ab636da5a

Please remember the program is still very much in its early stages; as such, times to disclosure are typically longer than expected (as can be seen from CVE-2015-1027).

Snoopy NG on the Pi2

I’ve said it many times over in the talks I’ve given both at conferences and during meetings, the devices we carry betray a wealth of information about us without us even knowing.

Projects like Jasager leverage this to masquerade as trusted wifi networks, and yet these issues remain.

And worse still are present in other standards beyond WiFi.

Enter Snoopy-NG, this is a suite of tools mostly authored in python which orchestrate the passive collection of the data our wireless devices are constantly screaming out into the Ether.

If you've ever spoken to me at a conference or on site, chances are you've seen my “bag of toys”; the reason for this is demonstration purposes.

There's nothing I've found more powerful than giving a practical demonstration of an issue, be it process or security issues at fault. (Please consider this the next time you raise a bug on a project's tracker, and provide as much detail as you can; screencasts are very useful.)

So in this train of thought, following the announcement of the RasPi2, it was time to add another tool / toy to the arsenal.

And so went the “rehashing” of some of the older tools, cases etc…

This was not without its issues however; it seems if you try to draw more power than the Pi2 can provide, it leads to some odd behaviour.

I took to the Raspi forums, though the discussion appears to yield nothing but “you've got PSU problems”.

In the end the Atheros WiFi is now using a USB-Y adapter (hence the two USB-A cables attaching to the battery pack), as I've little time to waste on the debate of what a “non-crappy” PSU is, despite giving complete examples of all types of power supplies used in the diagnosis of the issues at hand, which appear to have been ignored.

Now running on Raspbian: a git clone of Snoopy-ng was taken, dependencies installed, and some modifications made to /etc/rc.local to have snoopy-ng run at startup, and we've got a fully functional “drone unit”, though currently reliant on the old “blinkenlights” to produce confidence that Snoopy is running (the WiFi LED will blink at approx. 30-second intervals whilst in monitor mode).

Currently this has had some 24 hours of stable data collection, with the only interruptions to uptime being the change between wall socket and battery power when moving around.

I would really be interested in seeing a USB battery pack which can also take a “trickle charge” to aid mobility; a sort of mini UPS, if you will.

What's next? Maybe the USB Armory, which looks quite promising; I'm also looking to add BadUSB and HackRF to the “bag of toys”.

I've also been looking at SDR on and off; I'm particularly interested in the POCSAG pager network, which seems to be another cleartext protocol, as well as 802.15.4 (XBee / ZigBee), which appears to be making its way into traffic control systems and is again completely open.

Why you may ask would these things make it into the “bag of toys”?

I refer back to my previous point about practical examples: unless you can demonstrably show people why something is insecure / broken, they have little interest / time / money in fixing the issue at hand. If you want results, it is far better to show someone the problem and work with them on the fix.

A.K.A. Providing a Proof of Concept

Suricata Logstash Kibana Utilite Pro ARM

I'm currently in the process of overhauling my personal work network; this includes deployment of an inline IPS as part of the project.

Hardware List

  • Freescale i.MX6 quad-core Cortex-A9 @ 1.2GHz
  • 2GB DDR3 @ 1066 Mhz
  • 32GB mSATA
  • 1 x SanDisk Ultra 64GB Class 10 MicroSD
  • 2 x 1Gb NIC (Intel Corporation I211 Gigabit Network Connection)
  • 1 x Internal Wifi 802.11b/g/n
  • 1 x USB Alfa AWUS036NHR

Complete Utilite Pro Spec

Ships with Ubuntu 12.04 LTS

You can of course change the OS on the Utilite Pro to things such as Kali and ArchAssault, the caveat being that if you want to install on the mSATA and not run from the sdcard, you're going to need to use the serial connection.

My USB -> Serial adapter has a male connector, the connector for the Utilite also provides a male DB9 connection … so an adapter is on order.

Topology

[LAN Router] --- [ Utilite Pro ] --- [ ISP Router ]

So, as can be seen here, I'm sitting the device inline with the intent to have it route traffic between the LAN and WAN. As an aside, I also plan to use the WiFi to provide wireless access, disabling the ISP equipment, and to allow segmented guest access for visitors / a captive portal, but that's far from a solid plan at the moment.

Suricata

The packages available from the ubuntu ARM repos are 1.x and I want the new 2.x builds (ArchAssault, however, took my feedback and have built the 2.x packages), so in the interim, until I receive the equipment required to install Arch on ARM, all the prototyping will need to use the ubuntu install.

Building Suricata 2.x on ubuntu 12.04 ARM

wget http://www.openinfosecfoundation.org/download/suricata-2.0.tar.gz
tar -zxvf suricata-2.0.tar.gz
cd suricata-2.0

Adapting from the instructions here.

Install core requirements

apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev \
build-essential autoconf automake libtool libpcap-dev libnet1-dev \
libyaml-0-2 libyaml-dev zlib1g zlib1g-dev libcap-ng-dev libcap-ng0 \
make libmagic-dev

Install IPS configuration requirements

apt-get -y install libnetfilter-queue-dev libnetfilter-queue1 libnfnetlink-dev libnfnetlink0i

Install logstash output format (eve.json) requirements

apt-get -y install libjansson-dev libjansson4

configure and build suricata

I want everything to run on the sdcard at this time, as I plan to replace the OS and thus everything on the mSATA ;-)

mkdir -p /sdcard/suricata/{usr,etc,var}
./configure --enable-nfqueue --prefix=/sdcard/suricata/usr --sysconfdir=/sdcard/suricata/etc --localstatedir=/sdcard/suricata/var
make && make install-full

--build-info

After completing the above, your build info should look like:

root@utilite:/sdcard/suricata/usr/bin# ./suricata --build-info
This is Suricata version 2.0 RELEASE
Features: NFQ PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_LIBJANSSON 
SIMD support: none
Atomic intrisics: 1 2 4 8 byte(s)
32-bits, Little-endian architecture
GCC version 4.6.3, C version 199901
compiled with -fstack-protector
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
compiled with LibHTP v0.5.10, linked against LibHTP v0.5.10
Suricata Configuration:
  AF_PACKET support:                       yes
  PF_RING support:                         no
  NFQueue support:                         yes
  IPFW support:                            no
  DAG enabled:                             no
  Napatech enabled:                        no
  Unix socket enabled:                     yes
  Detection enabled:                       yes

  libnss support:                          no
  libnspr support:                         no
  libjansson support:                      yes
  Prelude support:                         no
  PCRE jit:                                no
  libluajit:                               no
  libgeoip:                                no
  Non-bundled htp:                         no
  Old barnyard2 support:                   no
  CUDA enabled:                            no

  Suricatasc install:                      yes

  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Profiling enabled:                       no
  Profiling locks enabled:                 no
  Coccinelle / spatch:                     no

Generic build parameters:
  Installation prefix (--prefix):          /sdcard/suricata/usr
  Configuration directory (--sysconfdir):  /sdcard/suricata/etc/suricata/
  Log directory (--localstatedir) :        /sdcard/suricata/var/log/suricata/

  Host:                                    armv7l-unknown-linux-gnueabi
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no

You can now run Suricata in IDS mode:

LD_LIBRARY_PATH=/sdcard/suricata/usr/lib /sdcard/suricata/usr/bin/suricata -c /sdcard/suricata/etc/suricata/suricata.yaml -i ethN

NOTE The intention is to run in IPS mode; however, IDS is suitable to complete the integration with logstash and kibana.

Get some event data

Configure your SSHD for key-only authentication, harden to your preferences, and then just expose SSH to the internet for a few hours. I'm not kidding: within ~12 hours I'd logged well over 1K attempted logins, enough for suricata to log some “ET COMPROMISED Known Compromised or Hostile Host Traffic” group events.
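For reference, the key-only part of the hardening amounts to something like the following sshd_config fragment (a minimal sketch; harden further to your own preferences):

```
# /etc/ssh/sshd_config -- minimal key-only fragment
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```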

Setup ARM Java

apt-get install openjdk-7-jre

Logstash

wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.1.tar.gz
tar -zxvf logstash-1.4.1.tar.gz
cd logstash-1.4.1
mkdir -p etc/conf.d
cat >> etc/conf.d/suricata.conf << EOF
input {
  file { 
    path => ["/sdcard/suricata/var/log/suricata/eve.json"]
    codec =>   json 
    type => "SuricataIDPS-logs"
  }

}

filter {
  if [type] == "SuricataIDPS-logs" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }

  if [src_ip]  {
    geoip {
      source => "src_ip" 
      target => "geoip" 
      database => "/sdcard/logstash-1.4.1/vendor/geoip/GeoLiteCity.dat" 
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

output {
  elasticsearch {
    embedded => true
  }
}
EOF
bin/logstash -f etc/conf.d/suricata.conf

This will take some time to start up. Note that if you want to load in an existing log set, add start_position => "beginning" to the file {} declaration before starting logstash. After the back-loading has completed I recommend you remove this line: the option defaults to “end” and logstash tracks its position in the file, but if you leave it set to “beginning” it will always start at the beginning of the log and take a long time to start up needlessly.
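i.e. the temporary input stanza for back-loading an existing eve.json would look like:

```
input {
  file {
    path => ["/sdcard/suricata/var/log/suricata/eve.json"]
    codec => json
    type => "SuricataIDPS-logs"
    # back-loading only; remove once the historical log has been ingested
    start_position => "beginning"
  }
}
```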

ArgumentError: cannot import class 'java.lang.reflect.Modifier' as 'Modifier'

Something screwy occurs within JRuby.

Install oracle java

Download ejre-7u55-fcs-b13-linux-arm-vfp-hflt-client_headless-17_mar_2014.tar.gz from here

“ARMv6/7 Linux - Headless - Client Compiler EABI, VFP, SoftFP ABI, Little Endian1”

tar -zxvf ejre-7u55-fcs-b13-linux-arm-vfp-sflt-client_headless-17_mar_2014.tar.gz
update-alternatives --install "/usr/bin/java" "java" "/path/to/ejre1.7.0_55/bin/java" 1
update-alternatives --config java
...
There are 2 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-7-openjdk-armel/jre/bin/java   1043      auto mode
  1            /path/to/ejre1.7.0_55/bin/java                    1         manual mode
  2            /usr/lib/jvm/java-7-openjdk-armel/jre/bin/java   1043      manual mode

Press enter to keep the current choice[*], or type selection number: 
 

Select 1 or whatever index you are shown

Kibana

Kibana is really just a web interface, so download it and install your preferred webserver to run it from: nginx / Apache / Lighttpd, etc …
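For example, a minimal nginx server block serving kibana (the root path is an assumption; point it at wherever you unpacked the kibana tarball):

```
server {
    listen 80;
    server_name your_device;
    root /var/www/kibana;   # assumed unpack location
    index index.html;
}
```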

cd /path/to/kibana/apps/dashboards/
curl -o suricata2.json https://gist.githubusercontent.com/regit/8849943/raw/15f1626090d7bb0d75bca33807cfaa4199b767b4/Suricata%20dashboard

In your browser now go to http://your_device/path/to/kibana/#/dashboard/file/suricata2.json