One caveat was the need for a paid Slack account on either side of the shared channel, so I set up oneiroi-ltd.slack.com and upgraded it to the Plus plan (it has one user, so this wasn't a big deal).
I then proceeded to "share" a channel from the oneiroi-ltd.slack.com workspace with another on which I had a presence (also running a Plus plan). For the most part it worked as advertised: the sharing integration functioned server-side, so transport encryption etc. was fine.
The issue came once the channel was un-shared; this is where things got a little more interesting.
To test the functionality and assess security I ran the following:
In theory the last step could be the result of a falling out, termination of a client-service engagement, etc.; any number of reasons for wanting to unshare the channel.
What followed next was unexpected.
I had enabled notifications in the Chrome browser from which the oneiroi-ltd Slack was running, and to my surprise I was still receiving messages from the "other" side (no pun intended) despite the channel being un-shared.
So I searched for Slack's security contact and authored a quick report, including screenshots:
This was of course a serious issue, as it could lead to an invisible breach should sensitive information be communicated in the previously shared channel.
I filed the HackerOne issue to notify the Slack team, and just over a month later, on 12th January 2018, the Slack team reported the issue was fixed!
I moved to test this, and sure enough un-sharing the channel now worked as intended, with no observable leak as occurred previously.
And all, it seemed, was well. I would like to thank the Slack team for working on this issue through to completion despite the delays in feedback on either side.
That being said, I want to cover in this post something which has, I think, unjustly gathered some F.U.D. (Fear, Uncertainty, Doubt).
Troy Hunt (with some engineers from Cloudflare) has released Pwned Passwords version 2, with an API!
First off: NO, you do not send your passwords, and you should NEVER send your password to anything but the system you intend to log into.
Secondly, no, the API does not take the raw password in plaintext; it implements the k-Anonymity model.
First we take your plaintext password, hash it using the SHA1 algorithm, and send the first 5 characters of the hash to https://api.pwnedpasswords.com.
In this way the original password is NEVER sent to api.pwnedpasswords.com, only the first 5 characters of the SHA1 hash of your password, allowing an index lookup that returns whether your password has ever been seen in breaches made public / obtained by Troy for haveibeenpwned.com.
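As a sketch of what a client does (my own illustration; the `/range` endpoint path is an assumption from the public API docs, not something stated in this post):

```python
import hashlib

def pwned_prefix(password: str):
    """Split the SHA1 digest into the 5-char prefix that is sent to the
    API and the 35-char suffix that never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_prefix("123456")
# Only `prefix` would be sent, e.g. GET https://api.pwnedpasswords.com/range/7C4A8;
# the response lists breached suffixes, which you compare locally against `suffix`.
print(prefix)  # 7C4A8
```

The comparison against the returned suffixes happens entirely on your own machine, which is the whole point of the k-Anonymity model.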
SHA1 itself is easily computed using software (hashcat and John the Ripper most certainly); however, we are not sending the complete hash, only the first 5 hexadecimal digits for the index lookup.
To answer how safe this is, I need to go into some detail about how the SHA1 hash algorithm works, or rather, the output of the SHA1 algorithm.
Don't worry, this will not be all math, I promise; we are focusing on the output hash only.
Let's take a really common password in 2016/2017, 123456, which I used as an example in Passphrase or Complex Passwords.
So we would send the string 7C4A8 to api.pwnedpasswords.com, but not the whole digest of 7C4A8D09CA3762AF61E59520943DC26494F8941B.
So for an attacker / adversary to get back the original password (assuming they can intercept the API calls being made to api.pwnedpasswords.com), how do they go from 7C4A8 to derive the password 123456?
SHA1 in theory can return anything from 7C4A800000000000000000000000000000000000 through 7C4A8FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF within this 'index'.
That's 16^35 (16 possible values for each of the remaining 35 hexadecimal digits), i.e. roughly 1.3938 x 10^42 possible outcomes.
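A quick sanity check of that figure:

```python
# A SHA1 digest is 40 hex digits; 5 are sent as the prefix,
# so the 'index' space covers the remaining 35 positions.
remaining = 40 - 5
outcomes = 16 ** remaining  # == 2**140
print(f"{outcomes:.4e}")
```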
And this presumes you are able to iterate over every single hash in the 7C4A8 'index' space.
This is not how the SHA1 algorithm works; for example, 123456 returns a very different hash value than 1234567 or 123457.
There is, at the time of writing this post, no known method to iterate over the SHA1 space for a specific 'index'.
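You can verify the avalanche behaviour yourself; a quick hashlib sketch of my own, standing in for the original snippet:

```python
import hashlib

def sha1_prefix(s: str) -> str:
    """First 5 hex digits of the SHA1 digest, i.e. the 'index'."""
    return hashlib.sha1(s.encode()).hexdigest().upper()[:5]

# A one-character change lands the digest in a completely different 'index'.
for candidate in ("123456", "1234567", "123457"):
    print(candidate, sha1_prefix(candidate))
```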
The two examples are not even in the same ‘index’ space as the original example.
I am not saying it is entirely impossible to iterate every single value in the SHA1 algorithm space, and there are known issues with creating hash collisions; this took down Subversion repositories, for instance, where the example 'good' and 'bad' colliding files were committed (you can search for this; there are many articles to choose from).
The thing is, it is highly unlikely for an adversary to get your original password this way.
Even then, an adversary would still need to test each candidate against some source of truth that knows your password.
So what's the take-away here?
Provided you use a unique password for every one of your online accounts (PLEASE never re-use a password!), and that your end vendor / maintainer is taking basic precautions to protect accounts, the chances of an adversary getting your password because you looked up the first 5 characters of a SHA1 hash are VERY, VERY small.
And if a nation-state threat actor is in your threat model, I hope you are not using 123456 as a password!
I have made available a python script which will allow you to check your passwords (or not) against api.pwnedpasswords.com. The code is available here and is released open source, so you are free to inspect it and choose whether to use it.
So in summary, checking your passwords is unlikely to pose a significant risk, especially when weighed against the risk of your password being within a breach disclosure.
Think I have something wrong? Have I missed something?
Ping me on Twitter, but be sure to have evidence to back up your claims ;-)
So one such discussion over the last couple of days has been password complexity vs pass-phrases; not to be confused with password length, let me make that point clear.
So let me give you some examples
A typical example of a password policy with complexity requirements, and fairly typical of most standards out there, right?
So let's explore this a little and discuss some of the issues. Here's an example of a "compliant" password:
CompanyPassword2017!$
The password is compliant with policy and as such is not a problem … right ?
Well, if you've read anything on my blog before, you'll know the answer is no; there is still a problem.
It contains the word Password (still better than 123456 though ;-), given that 123456 was the most used password in 2016).
The problem is human behavior, and in the English language at least this is predictable behavior, allowing for pattern analysis or behavioral analysis attacks to be carried out.
I'm speaking in generalizations; if you do not do any of this then great! Use a word list and throw dice to decide the password ;-) …
The downfall of a complexity requirement is poor choice of passwords, and this is most prevalent where the target individual does not use a password manager and its password generation feature.
Note: this is not a bashing of people not using password managers; password managers have their own issues (just see examples from Tavis Ormandy or Dan Tentler), so please bear with me until the end. I am simply saying that human behavior is predictable, and MANY studies are available to back this up.
If you do not know what a pass-phrase is then go take a look at this, I’ll wait …
Oh, you're back? Good. Did you review the XKCD comic in full? Excellent, let's continue then.
A pass-phrase is a series of words, ideally used with a separating character (I recommend using a space instead of a dash!), for example:
Peter Piper Picked A Peck Of Pickled Peppers 2017 $!
Q: Wait?! I only see 2 special characters; your count is off! A: Actually, I counted the spaces; a space is a special character.
Precisely that: both are acceptable provided they follow the same basic guidelines. No, you cannot sacrifice complexity for a longer password; perhaps I should explain more.
peter piper picked a peck of pickled peppers
Some may argue that a longer password removes the need for complexity. That is simply not the case, as this pass-phrase has lowered the address space considerably.
WARNING here be math …
The address space is the total addressable character set for any given password, for example:
And so on. So when evaluating a brute-force attack (iterating every single possible combination of a password), the math becomes as follows.
58^27 == 4.0978 x 10^47 possible combinations (53 == 52 + space).
How about throwing in some complexity?
58^63 == 1.2472789544046017 x 10^111
Which may not seem like a huge difference until you work out that the former non-complex address space is 3.2853 x 10^-67 % of the size of the complex address space.
Yes, you're correct, but you're also wrong: brute force is not the only attack you can carry out. Let's use the example from before.
58^27 == 4.0978 x 10^47 possible combinations. A dictionary attack, however, needs only some 171,476 words, which is by FAR less than 4.0978 x 10^47: 171,476 is just 4.1845 x 10^-43 % of the full brute-forceable address space;
as such, when looking at possible combinations, factor in other human factors such as poor word choice (names, places, colours, etc.) and you reduce the address space even further.
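To put rough numbers on that comparison, a napkin-math sketch of my own (assuming a 27-symbol alphabet of lowercase letters plus space, and the 171,476-word dictionary figure from above):

```python
def address_space(charset_size: int, length: int) -> int:
    """Total combinations a brute-force attack must cover."""
    return charset_size ** length

passphrase = "peter piper picked a peck of pickled peppers"
brute = address_space(27, len(passphrase))       # 27 symbols, 44 characters
dictionary = 171_476 ** len(passphrase.split())  # 8 words from the dictionary

print(f"brute force: {brute:.3e} candidates")
print(f"dictionary:  {dictionary:.3e} candidates")
```

A dictionary attack on the word structure is many orders of magnitude cheaper than brute-forcing the characters, which is exactly the point being made above.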
The problem is choice, to throw a quick pun in here (and an obligatory Matrix reference).
Note: this is all 'napkin math', so please forgive me if I am wrong anywhere, and note it in the comments so I can fix the post ;-)
Update: corrected napkin math 2017-05-05; the password example assumed a-Z whereas the example given was a-z; corrected the math to account for a-z as intended.
Update 2: corrected napkin math AGAIN, 2017-05-24.
Password complexity is no stronger than a pass-phrase with complexity. If you manage or are authoring a policy on password security, then remember the following quote:
“Security at the expense of usability comes at the expense of security” otherwise known as AviD’s Rule of Usability.
Usability is needed in order to gain the expected result; otherwise you're going to get users who develop poor habits and choose poor passwords.
Not everyone wants to use a password manager; some people are even fearful of storing all their passwords in a single repository. There is no one solution here, only the management of how each available option can be used.
Think I got something wrong, or have strong opinions on something?
Please put your thoughts into a comment; again, I encourage debate, so please include as much information as you can in your argument.
So if you take one thing away from this post, please make sure it's this:
Simple enough, right? You would have thought so, but of course this isn't a "cure-all", and there are other vulnerabilities and mitigations to consider; but not for this post.
Remote File Upload.
So the site in question had a plugin which was out of date. This plugin had an RFU vulnerability which allowed attackers to upload arbitrary code files, then head to https://thesite.com/wp-content/pluginname/uploadedfile.php to execute the attack.
Standard, boring crap right ?
Well, this post isn't to focus on how it happened, nor why it happened.
Simply put, I found the PHP file itself very interesting.
Sure, code obfuscation is nothing new. Heck, tools like msfvenom allow you to choose from a variety of obfuscation methods, the premise of which is to avoid signatures for "known bad" files (which is why you should not rely solely on signature-based analysis).
The thing is, the overwhelming majority of webshell obfuscation is done through "packing": you'll see it use base64, gzinflate, and eval, and that's a pretty common standard.
Not this little bastard, and that's why it got my attention.
Head of the file:
Well, that’s … Interesting …
Tail of the file
So first it's a base64-encoded string; we know this due to the first line of code, which is doing some signature evasion itself.
Which of course yields 'base64_decode'.
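As an aside, this kind of function-name evasion is trivial to build. A python illustration of the general trick (reversing the name string); this is a common construction, though not necessarily the exact one used in this sample:

```python
# The literal string 'base64_decode' never appears in the file,
# defeating naive grep/signature matching.
obfuscated = "edoced_46esab"
func_name = obfuscated[::-1]
print(func_name)  # base64_decode
```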
So then the next line
Is really:
So let’s use some python …
So we have some raw intelligible data; the WTF continues …
Let's look at the tail of the file again. It's doing some additional processing; let's add some whitespace and comments to make it readable.
I suppose we could go the python route again; however, as we've discerned the function (loop-unpack payload -> create_function -> execute function), we can "disarm" it to instead echo out the unpacked code for further analysis.
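The disarming idea, expressed as a python sketch (the real sample is PHP and its unpacking loop is more involved): wherever the malware would execute its decoded payload, display it instead.

```python
import base64

# Stand-in for a packed payload; the real one is built by the unpacking loop.
packed = base64.b64encode(b"/* webshell interface */").decode()

def disarmed(payload_b64: str) -> str:
    code = base64.b64decode(payload_b64).decode()
    # create_function()/eval() would run this; we only return it for inspection
    return code

print(disarmed(packed))
```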
So the file with the required modifications …
The resulting payload starts off as
And continues to create a webshell interface.
Granted, this may be viewed as little more than a geek's curiosity; however, on a more serious note, the intriguing element of this webshell is that the password is an intrinsic part required to unpack the valid payload.
Without the password the unpack will fail; so consider: if the password check were instead moved to reside inside the packed payload, how would you possibly be able to begin to write a signature for such a file?
Fuzzy logic, sure: look for long strings of seemingly random content. Still, I can see that producing false positives en masse, given the various obfuscation options out there for PHP, such as those that require licensing …
Even with SELinux's httpd_can_network_connect set to true, it's not going to stop the shell creating a reverse connection either (check out httpd_can_network_connect_db to maintain web app functionality while making it harder for attackers).
On watching the video I noted 299879 as the evidence id on the bag; this may be relevant later.
This yields nca_image.bin; let's use binwalk to analyse the file.
Using binwalk -e, everything except the identified QCOW image is extracted, so using my helper script:
We manually carve the file out
Trying to analyse the QCOW file ultimately appears to be a false identification: opening the file in bless noted many occurrences of the QFI header associated with a qcow image, along with errors that vary with the version of qemu being run, so I moved on to analysing the rest of the extracted files.
Opening the file (which I did on a Tails VM to err on the side of caution, citing paranoia over potential macros) shows what appears to be a raw email complete with headers.
And an embedded oleObject
So I unzip the .docx file and again use binwalk to inspect it.
binwalk has provided us with information showing this is an encrypted archive containing three files, so we need to extract the zip file and break the encryption to get at the files within.
Running strings on the file also notes the following, which may be of use later as it indicates the user "JAMIEH":
Z:\CSC-Final-Revision\Final ‘e-mail’\T0PS3RET.zip C:\Users\JAMIEH~1\AppData\Local\Temp\T0PS3RET.zip
Ok let’s john this bastard
So now we have three files.
I set up John to start brute-forcing the gpg key password whilst inspecting the other files; think of it as an efficient workflow. We may not need the brute force, but there's no harm in having it run whilst we continue the investigation.
Listening to the wav file in vlc, this is clearly DTMF tones and a modem handshake; using multimon I can extract the numbers associated with the DTMF tones.
On this first pass there is some odd behaviour occurring: some numbers are being repeated and some appear to be skipped. Opening the wav file in audacity reveals the issue.
The wav file is stereo, meaning there is both a left and a right channel. Observing the pattern above, it's clear this is an 11-digit telephone number, so we "flatten" the file to mono and run it through multimon again.
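The flattening can be done in audacity as described; for completeness, a small python sketch of my own (not from the original write-up) that averages the two channels of a 16-bit stereo wav:

```python
import struct
import wave

def flatten_to_mono(src_path: str, dst_path: str) -> None:
    """Average the left/right channels of a 16-bit stereo WAV so that
    single-channel tools like multimon see each DTMF digit exactly once."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        rate = src.getframerate()
        frames = src.readframes(src.getnframes())
    # Interleaved L,R signed 16-bit little-endian samples
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    mono = [(samples[i] + samples[i + 1]) // 2
            for i in range(0, len(samples), 2)]
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(rate)
        dst.writeframes(struct.pack("<%dh" % len(mono), *mono))
```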
Whilst it was not needed, it's worth noting that sox can be used to convert to a multimon-native format.
Calling the number (via an anonymized service, of course) yields a very faint voice reading numbers aloud; this is why having the call recording prior to dialing is such an advantage. Some post-processing to raise the volume, and careful listening, yields: 533020565.
The numbers are indeed the gpg key password.
A 2-slide presentation. First slide: "It is not the strongest of the species that survives, but the most adaptable", with a background portrait of Charles Darwin; the oleEmbedded file "TransferCode.zip.001" could infer a multipart zip.
Running binwalk -e produces the .zip and the .pdf file; the .pdf is unreadable as it is incomplete, therefore we know this zip file is the head of a multipart archive.
Now I have TransferCode.zip.001
Embedded images showing a formula, and TransferCode.zip.002; ok, yup, looking like a multipart zip. Google image search: "The Drake Equation", also "The Equation of Life" (2014 film).
I found the following strings:
C:\Users\Jamie H\AppData\Local\Microsoft\Windows\INetCache\Content.Word\TransferCode.zip.002 C:\Users\JAMIEH~1\AppData\Local\Temp\TransferCode.zip.002
Now I have TransferCode.zip.002
Account numbers, many 25,000 transfers; the descriptions may be erroneous: "cabal", "lord", etc.
Binwalk extraction noted something interesting …
./_Ledger.xlsx.extracted/secret_hash/1902d4bfb197e0b7372fc0ec592edabbce0124845a270e4508f247e1faffecce
strings ./_Ledger.xlsx.extracted/xl/embeddings/oleObject1.bin
C:\Users\Jamie H\Documents\CSCUK-Challenge-1\Stage 2\TransferCode.zip.003 C:\Users\JAMIEH~1\AppData\Local\Temp\TransferCode.zip.003
Now I have TransferCode.zip.003
Noted VBA from the strings run, large binary text (101 etc …), and another hash, 13790e4b2ed8345dc51b15c833aa02a33171bd839c543819d19b41bd3962943c, followed by "keep looking ;-)". Used binwalk to extract the files.
strings _X101D4.docm.extracted/word/vbaProject.bin
The gist contains the file TransferCode.zip.004 in base64 encoding: https://gist.githubusercontent.com/anonymous/e13e60e1975bceb04c20/raw/145cad938bd2c4391fc55f5b482625aa86dae776/gistfile1.txt
Unfortunately this is where I must end. I originally did the above work on the evening of June 30th 2015, and was not able to pick it up again until authoring this blog post … past the deadline. The PDF file appears to be the final stage (just cat the zip files together and unzip to get the PDF file).
Oh well, it was an interesting puzzle at least, and a welcome exercise of skills I do not get to use nearly enough.
Focusing on the Pi2s here, as I've not rebuilt the Utilite at this moment in time.
Why are we using Arch and not Raspbian? Simply because of time constraints: Arch has ARM packages for docker (and openvswitch), and this will save some time going on.
As I'll be imaging multiple SD cards, I wrote a bash script to save some time.
This assumes you have already done the partitioning per the Arch installation document.
WARNING: make sure you do not blindly use my script; the device paths may be different, and you do not want to wipe out the wrong device.
pacman -S docker
Most docker images are x86 or x86_64, so when you use docker pull and try to docker run you're going to have a bad time …
The swarm docker images will not run on ARM, so what do we do?
Simple: we build the swarm binary from source.
pacman -S golang godep
Check the github readme via the link above to get swarm to compile
On one node, run go/bin/swarm create and record the token.
Now on every node
Now we need to start the manager; this can be on any node, or even on a separate machine such as your laptop / desktop.
Again this can be run from any docker client.
So there we have it: 20 available ARM cores all running in a docker swarm. Seems simple, doesn't it? Finding the correct information to make this all work, however, was a trial in itself.
I’m uploading photos and screenshots of the cluster as progress is made here
We can't all get our hands on an HP Moonshot; I debated for some time what to use, and the Pi2 won out due to:
There is still plenty to do here, of course, and more is yet to come on this front.
The first public success story may be considered a minor one but I feel it is an important step toward responsible disclosure.
The blog post disclosure on percona.com may be found here, and I'm hosting a plaintext version here.
The initial research began 2014-12-16, at which time a functional PoC was created and distributed internally to allow the developers to test their fix. This means from concept to fix (2015-01-16) took one calendar month, with percona-toolkit 2.2.13 being released 2015-01-26 and percona-xtrabackup 2.2.9 being released 2015-02-17.
So why, you may ask, did the disclosure not occur until 2015-05-06? Simply put, to allow users and distros to update; and frankly this was by far the hardest part, as trying to elicit a response from distros began to seem a fruitless task.
And thus I had planned to just go ahead with the disclosure on 2015-04-30; it was around this time we were contacted by the people over at oCERT regarding an entirely separate issue, CVE-2015-3152, about which you can read more (including how it is to be addressed) on Todd Farmer's blog.
Following the interaction with oCERT (namely Andrea B.), we've since applied for membership with oCERT, and work continues on curating a responsible disclosure plan.
If you have any suggestions / comments on the progression of the responsible disclosure program, I'd be glad to hear them via email to:
You can use either my gpg pubkey at keybase.io or 0x5422aa2ab636da5a
Please remember the program is still very much in its early stages; as such, times to disclosure are typically longer than expected (as can be seen from CVE-2015-1027).
Projects like Jasager leverage this to masquerade as trusted wifi networks, and yet these issues remain.
And worse still are present in other standards beyond WiFi.
Enter Snoopy-NG: a suite of tools, mostly authored in python, which orchestrates the passive collection of the data our wireless devices are constantly screaming out into the ether.
If you’ve ever spoken to me at a conference or on site, chances are you’ve seen my “bag of toys”; the reasons for this are for demonstration purposes.
There's nothing I've found more powerful than giving a practical demonstration of an issue, be it a process or a security issue at fault. (Please consider this the next time you raise a bug on a project's tracker, and provide as much detail as you can; screencasts are very useful.)
In this train of thought, following the announcement of the RasPi 2, it was time to add another tool / toy to the arsenal.
And so went the “rehashing” of some of the older tools, cases etc…
This was not without its issues, however; it seems if you try to draw more power than the Pi2 can provide, it leads to some odd behaviour.
I took to the RasPi forums, though the discussion appears to yield nothing but "you've got PSU problems".
In the end the Atheros WiFi is now using a USB Y-adapter (hence the two USB-A cables attaching to the battery pack), as I've little time to waste debating what a "non-crappy" PSU is, despite having given complete examples of all the power supplies used in diagnosing the issues at hand, which appear to have been ignored.
Now running on Raspbian: a git clone of Snoopy-NG was taken, dependencies installed, and some modifications made to /etc/rc.local to have snoopy-ng run at startup. We've got a fully functional "drone unit", though currently reliant on the old "blinkenlights" to provide confidence that Snoopy is running (the WiFi LED will blink at approximately 30-second intervals whilst in monitor mode).
Currently this has had some 24 hours of stable data collection, with the only interruptions to uptime being the change between wall socket and battery power when moving around.
I would really be interested in seeing a USB battery pack which can also take a "trickle charge" to aid in mobility; sort of a mini UPS, if you will.
What's next? Maybe the USB Armory, which looks quite promising; I'm also looking to add BadUSB and HackRF to the "bag of toys".
I've also been looking at SDR on and off; I'm particularly interested in the POCSAG pager network, which seems to be another cleartext protocol, as well as 802.15.4 (XBee / ZigBee), which appears to be making its way into traffic control systems and is again completely open.
Why you may ask would these things make it into the “bag of toys”?
I refer back to my previous point about practical examples: unless you can demonstrably show people why something is insecure / broken, they have little interest / time / money in fixing the issue at hand. If you want results, it's far better to show someone the problem and work with them on the fix.
A.K.A. Providing a Proof of Concept
It ships with Ubuntu 12.04 LTS.
You can of course change the OS on the Utilite Pro to things such as Kali and ArchAssault; the caveat being that if you want to install on the mSATA and not run from the sdcard, you're going to need to use the serial connection.
My USB -> serial adapter has a male connector, and the connector for the Utilite also provides a male DB9 connection … so an adapter is on order.
As can be seen here, I'm sitting the device inline, with the intent to have it route traffic between the LAN and WAN. As an aside, I also plan to use the WiFi to provide wireless access (disabling the ISP equipment), and to allow segmented guest access for visitors / a captive portal, but that's far from a solid plan at the moment.
The packages available from the Ubuntu ARM repos are 1.x and I want the new 2.x builds (ArchAssault, however, took my feedback and built the 2.x packages), so in the interim, until I receive the equipment required to install Arch on ARM, all the prototyping will need to use the Ubuntu install.
Adapting from the instructions here:
I want everything to run on the sdcard at this time as I plan to replace the OS and thus everything on the mSATA ;-)
After completing the above, your build info should look like:
You can now run Suricata in IDS mode:
NOTE: the intention is to run in IPS mode; however, IDS is suitable for completing the integration with logstash and kibana.
Configure your SSHD for key-only authentication, harden to your preferences, and then just expose SSH to the internet for a few hours. I'm not kidding: within ~12 hours I'd logged well over 1K attempted logins, enough for suricata to log some ET COMPROMISED Known Compromised or Hostile Host Traffic group events.
This will take some time to start up. Note that if you want to load in an existing log set, add start_position => "beginning" to the file {} declaration before starting logstash. After the back-loading has completed, I recommend you remove this line: the setting defaults to "end", and logstash tracks its position in the file, but if you leave it as "beginning" it will always start at the beginning of the log and take a long time to start up needlessly.
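For illustration, the relevant file input might look like this (the log path is an assumption; adjust to wherever your suricata output lives):

```
input {
  file {
    path => "/var/log/suricata/eve.json"
    start_position => "beginning"  # remove once back-loading completes
  }
}
```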
something screwy occurs within jruby
Download ejre-7u55-fcs-b13-linux-arm-vfp-hflt-client_headless-17_mar_2014.tar.gz from here
“ARMv6/7 Linux - Headless - Client Compiler EABI, VFP, SoftFP ABI, Little Endian1”
Select 1 or whatever index you are shown
Kibana is really just a web interface, so download it and install your preferred webserver to run it from: Nginx / Apache / Lighttpd, etc …
In your browser now go to https://your_device/path/to/kibana/#/dashboard/file/suricata2.json
If only that were the case. As I have covered in my blog post on mysqlperformanceblog.com, Heartbleed affects OpenSSL versions 1.0.1 through 1.0.1f.
I've spent the last 2 days working 15+ hours on this, during some nasty jet lag courtesy of my return trip from Percona Live 2014. As the code keeps being pulled down for some reason, I have mirrored some effective PoC code on github. NOTE: this is not my own code, I am only mirroring it; use at your own risk, etc. etc. …
I encourage you to both read my blog post and my colleague Ernie’s blog post on the matter.
the TL;DR
In my own testing using the POC code I found the following.
This blog post may not be my usual deep dive; however, given the work being done and the links to blog posts on MPB, this should be enough information for you, the reader, to go on in the interim.
UPDATE: this video provides a great description of the vulnerability in detail.
First of all, the installation routine fails trying to install grub-pc; this is due to the network configuration step of the routine creating a blank /etc/resolv.conf.
So right after network configuration has completed, inspect your /etc/resolv.conf; if it is blank as mine was, add a valid nameserver entry.
Ensure this is done BEFORE the routine reaches the grub installation step; it will then complete as expected.
Next up, post-reboot the encrypted LVM fails to mount, citing that it was unable to find kali-root.
Help is at hand, however: boot back into the live distro's forensics mode, and what follows is my somewhat condensed and modified procedure.
As for UEFI / EFI? Don't even get me started: nothing I have spent long evening hours looking into works for Kali at this time, not even using the Fedora shim; I'm very annoyed at this and will post again once I arrive at a resolution.
In the interim, CaptTofu released some interesting material on leveraging Docker to test PXC deploys; he's even gone so far as to produce some Ansible playbooks for the deployment process. I've been helping to some extent on the Ansible side, and I can see a lot of potential in docker, as well as a lot of issues (it is a very young project; it reminds me a lot of OpenStack back in the Diablo RC days). I encourage you to check this out.
Early warning: this is a satirical blog post with colourful language, the sole intent of which is to troll automated scanners and script kiddies; those of a nervous disposition should stop reading now.
Shortly after watching @chrisjohnriley's Defcon 21 talk "Defense by Numbers", I began thinking about how I could implement some of the methods within nginx, taking them to another level by trolling and generally pissing off anyone scanning the server.
Some background: this nginx server does nothing but bounce old domains and links to their appropriate place on this blog, so it's out of the way, not something you'd typically see attacked en masse.
(Seriously, I see one or two hits from search engines on the instance. Except recently China Telecom must LOVE my blog: 500K requests in an hour … aww shucks guys, I love you too.)
So let us start with response codes, because 4xx response codes are so last century, right? I really can't see why the 7xx RFC isn't already a standard.
So I opted to respond to automated scans of my nginx instance with the 793 response code, helpfully letting the scanner know that the Zombie Apocalypse has occurred where the instance is located, and that I care not for their scans as I'm either shambling along biting everyone within reach and incoherently moaning, or too busy trying not to get my ass zombified.
The Zombie Apocalypse is serious business; they should appreciate my early warning!
Providing this sorely needed public service is this small nginx server block, placed after my main server block handling all valid requests.
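A minimal sketch of such a catch-all block (names and ports are assumptions of mine; nginx will happily emit a non-standard status code via return):

```
# Catch-all for scanners that don't send a valid Host header;
# 793 == "Zombie Apocalypse" from the joke 7xx range.
server {
    listen 80 default_server;
    server_name _;
    return 793;
}
```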
if only those scanners could fully appreciate the midi tones of Rick Astley melodic symphony soothing them to sleep in the wake of the end of all things via Zombie Apocolypse … alas we wonder do calculators dream?
Yup, no hostnames were being sent as part of the request, so China Telecom doesn't love my blog after all … well screw you guys! I thought we had something, but you were just a fake …
But wait, there's more: just as the sweet verses dictate, we're never going to give you up. If you're making so many requests in such a short time, you must want to stay connected to me for as long as possible; it's OK, I've got you covered.
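As a sketch of the intent (these directive values are my guess at the two-line snippet, not the original):

```nginx
# Keep noisy clients attached as long as possible: a very generous
# keepalive so the scanner "stays with us" in the tarpit.
keepalive_timeout 3600s;
keepalive_requests 100000;
```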
Forever together … into the tarpit … shhhh … only dreams now …
And for those not snuggling with us down in the tarpit, sorry but you’ll just need to prove you really want to be in there; sticky cuddles …
YMMV etc.; this isn't a fully tested configuration, and it's not meant to do anything but troll all the automated scanners out there hammering the instance.
]]>A real-world comparison would, I suppose, be providing more than one form of identification to open a bank account.
OpenSSH 6.2 introduces the AuthenticationMethods setting; this, combined with pam_yubico, can be used to require that a connection provides both the SSH public key and the YubiKey O.T.P. (one-time password).
OpenSSH 6.2 is included in Fedora 19, and for a while now OpenSSH has supported Match Group (I covered the use of such for chrooting users easily).
So we're going to combine the two such that we attain the following:
To be clear: if the connection does not provide a valid public key for the user, it will never reach the YubiKey prompt stage; likewise, if the provided YubiKey OTP is invalid, authentication will fail.
Install the pam_yubico package: sudo yum -y install pam_yubico
At the end of your /etc/ssh/sshd_config add the following:
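Based on the surrounding description, the block would look something like this (a sketch assuming the mfagroup group used later in the post, not the verbatim original):

```
# Require BOTH a public key and a keyboard-interactive (YubiKey via PAM)
# exchange for members of mfagroup
Match Group mfagroup
    AuthenticationMethods publickey,keyboard-interactive
```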
You will also need to set ChallengeResponseAuthentication yes
in your sshd_config file.
The above is the bare minimum; you can add any additions you wish. Then restart sshd.
Create the file /etc/pam.d/yubi-auth with the content
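A sketch of the single line (the API id/key values are placeholders you would obtain from Yubico; the options shown are standard pam_yubico parameters):

```
# id/key are placeholders for your Yubico API credentials
auth required pam_yubico.so id=YOUR_API_ID key=YOUR_API_KEY authfile=/etc/ssh/yubikey_mappings url=https://api.yubico.com/wsapi/2.0/verify?id=%d&otp=%s
```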
Note: I am specifying the url parameter explicitly, as the default will use http and not https, despite what the documentation might say.
Create the file: /etc/ssh/yubikey_mappings with the content:
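The mapping file is one line per user of the form username:yubikey-public-id; for example (the id shown is a made-up placeholder):

```
# The id is the first 12 (modhex) characters of any OTP from that key
username:ccccccbcdefg
```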
You can get your yubikey identity from demo.yubico.com
Edit /etc/pam.d/sshd so that the first lines read:
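I'd sketch those first lines as follows (this assumes the Fedora-style pam_sepermit line stays first; the key change is pulling in the yubi-auth substack):

```
auth       required     pam_sepermit.so
auth       substack     yubi-auth
```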
And finally create a user in your group; in this case we're using the mfagroup group.
useradd -g mfagroup -s /bin/bash username
and install their public ssh key in /home/username/.ssh/authorized_keys, ensuring proper permissions.
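"Proper permissions" here means the directory and key file must not be group- or world-accessible, or sshd will ignore them; a quick sketch (paths assume the username account created above):

```shell
# sshd refuses keys held in files it considers too permissive
chmod 700 /home/username/.ssh
chmod 600 /home/username/.ssh/authorized_keys
chown -R username: /home/username/.ssh
```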
All being well when you try to login with the user you should see the following:
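It goes roughly like this (the exact prompt text depends on your pam_yubico version, so treat it as illustrative):

```
Authenticated with partial success.
Yubikey for `username':
```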
And you have successfully set up two-factor SSH authentication with public keys.
]]>yum -y install policycoreutils selinux-policy-targeted
Now edit /etc/grub.conf and ensure your kernel line looks something like the following:
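As an illustration, an Amazon Linux grub.conf stanza would be along these lines (the kernel version and root device are placeholders, not your exact values):

```
title Amazon Linux
    root (hd0)
    kernel /boot/vmlinuz-3.XX.XX-XX.XX.amzn1.x86_64 root=LABEL=/ console=ttyS0 selinux=1 security=selinux enforcing=1
    initrd /boot/initramfs-3.XX.XX-XX.XX.amzn1.x86_64.img
```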
Note the addition of “selinux=1 security=selinux enforcing=1”
Now: touch /.autorelabel
And: /sbin/new-kernel-pkg --package kernel --mkinitrd --make-default --dracut --depmod --install 3.XX.XX-XX.XX.amzn1.x86_64 || exit $?
Replace the XX portions with your running kernel version, or substitute in the uname -r output. This one-liner script was obtained from rpm -q --scripts kernel and is required to rebuild the initrd image so that the selinux settings can take effect.
Alternatively, if there are updates outstanding, a yum -y update will achieve the same thing (selinux settings should persist). After all of this you can now reboot and wait.
This will take a while to start back up, as an selinux relabel is running (this is what the touch /.autorelabel achieves).
All being well selinux should now be running enforcing in targeted mode; if not check your /etc/selinux/config file.
]]>I myself presented a talk on security, which it appears was very well received; I am hopeful this talk will make it into the line-up for Percona Live 2014. There was a lot of great Q&A both during and after the session … though I did run 15 minutes over. Sorry Tim, I'll have to buy you a beer by way of apology at the next conference.
Ryan H also gave a great talk on backups; I'll update this blog post with a link to the slides once they become available.
I've posted some photos of the event as well.
More to come.
]]>If I were to facepalm at this point I fear my face would be pushed out the back of my skull, so let me relay a small bit of insight.
TOR is an anonymizing proxy only so long as every node along the chain is "behaving"; let's say, for the sake of argument, someone sets up a malicious exit node. Jackin' TOR shows just such a setup being used to inject content into HTTP requests.
And if the above does not work?
2013 has been a year of change for myself. After a long consideration period spanning several months in 2012, I felt that it was time to move on from Psycle Interactive as their Systems Administrator; the new roles "on the table" were as follows:
I accepted the offer from Percona, becoming part of the Remote DBA team; the growth over the last 8 months has in my opinion been very rapid, with the team and client list more than doubling in size.
So some highlights on what I have been up to this year (well what I can talk about at least).
There's so much more which I cannot talk about, it being IP/NDA related.
Expect more security focused posts soon as I work on their content.
]]>In this case I needed to create a glance image that could be deployed to an OpenStack cluster … and that is where the fun stops.
First things first: if you can do a clean install (if you paid the extra £20 and actually received your DVD media, that is!), do so; the upgrade process from Windows 7 took the best part of 2 days to complete.
Secondly to create your glance image you’re going to have to do the installation on the same type of hypervisor that you have openstack running upon, in this case I will be covering deployment of Windows 8 onto Linux KVM with virtio drivers.
You cannot start the instance using virtio for the hard disk; it simply puts itself into a never-ending recovery mode. Instead, set the bus type to SATA or IDE.
Attach a second drive that uses the virtio bus. Why, you may ask? Windows 8 will now boot and in turn have a device attached which it cannot recognize.
Before booting you will also need to attach this ISO as a CD-ROM; at the time of writing you can use the Win7 drivers for Windows 8 (ISO version 0.1-30).
I opted to first install all the drivers by opening up the virtual cdrom, navigating to the Win7 folder and: right click -> install on all the “Setup Information” files.
My "fun" did not end here, however … because the attached virtio device was not formatted, Windows 8 decided to ignore it.
In this case the Device Manager needs to be launched to resolve the issue, a laborious task in itself.
And boot the image as normal, ensuring that the selected “flavor” has enough disk space to start the instance.
As for metadata injection, for say account setup, I have no idea at this time; please feel free to post in the comments or email me with methods for doing so.
Thanks go to this blog for noting the "dirty hack" workaround in Windows 8 R2, and to James P for having way more patience with Windows than I will ever have.
]]>
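The failure in question looks something like this (addresses and line numbers will differ; this is an illustration of the typical PEM-parse error, not the original capture):

```
unable to load certificate
139888719463312:error:0906D06C:PEM routines:PEM_read_bio:no start line:
pem_lib.c:703:Expecting: TRUSTED CERTIFICATE
```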
This happens when trying to validate a certificate using openssl because the file is in the wrong format: whilst the certificate file visually appears to be in x.509 format, you will find it contains a far longer base64 string than x.509 certificates of the same bit length.
The format in this case is p7b (PKCS #7); to use the certificate with apache you're going to have to convert it.
openssl pkcs7 -print_certs -in certificate.p7b -out certificate.cer
Within the resulting .cer file you will find your x.509 certificate bundled with the relevant CA certificates; break these out into your respective .crt and ca.crt files and load them as normal into apache.
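To sanity-check the result before touching apache, you can ask openssl to parse the converted file as plain x.509 (the filename follows the conversion command above):

```shell
# If this prints a subject and issuer, the conversion worked and the
# file is a normal x.509 certificate apache can load
openssl x509 -in certificate.cer -noout -subject -issuer
```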
]]>After an initial read the setup process seemed very simple, and as it would turn out, it was; I later moved on to some simple resilience testing of my 4-node p.o.c. cluster.
I’m still a little unsure on the circular topology I ended up using; but it appears absolutely fine so long as the following conditions are met.
This is not such a bad thing: if all nodes were to suddenly go down, I can't think of a situation where you would want everything to recover "automagically"; you would want to inspect the nodes to ensure data integrity and recover from a "known good" version of your data.
Openstack as an experimentation platform
OpenStack I've found perfect for rapid prototyping of hosting platform architectures; in non-geek terms, building virtual models of servers and services, making sure they all go together properly before committing to the build plan.
The best part being the VMs are "throw away": something goes inexplicably wrong with a VM prototype? Assuming you used snapshots at each step, it's easy enough to roll back.
For reference I used Fedora 17 and the wiki reference setup of openstack for prototyping.
Note in this case you may be better off using OpenVZ; whilst OpenStack does not at the time of writing support this directly, the OpenStack DBaaS (Database as a Service) project Red Dwarf leverages OpenVZ to provide DBaaS (something I'd like to get auto-handling clusters via XtraDB Cluster, given the time …).
XtraDB cluster p.o.c. platform
My platform consists of 4 nodes, although I am assured an odd number of nodes is preferable to reduce the risk of split-brain behaviour occurring.
]]>