Blackhat Europe 2010 Barcelona – Day 2
I got up early this morning, trying to be sharp and well prepared for day 2 of the BlackHat briefings. As some of you may know, I’m not really a morning person, so I usually need some time to wake up and wait until all components in my body start functioning again. After one day of presentations, that appeared to take somewhat longer than normal, so I ended up starting this second day of briefings in slow motion mode.
The view from my hotel room looked awesome, no rain, only a few clouds… so despite the slow boot process, life is good.
The initial plan was to start the day with (what I hoped would be) a "light" or "moderate" talk about "Oracle" attacks… but boy, was I wrong. This talk was not about Oracle (uppercase O), but about practical crypto attacks using padding oracles (lowercase "o"). *blush*
What follows below is an overview of what went right and what went wrong today :
Practical Padding Oracle Attacks
Thai Duong and Juliano Rizzo kicked off the second day at Blackhat Europe 2010.
That was the first surprise of the day. Oracle vs oracle… I should have known there was something wrong with that. I guess I was not the only one who was caught by surprise. After the first slide, I heard a lot of people mumbling "this is another presentation" and things like that. Anyways, the talk was about crypto, and feeding hardcore crypto to my brain at this time of the day is just not good for me.
I got hit by the well-known "massive concrete demolition hammer" while trying to understand the highly technical (and math-heavy) presentation about cryptography and how padding oracles can be used to break all sorts of encryption mechanisms. Ok, I have to admit : even when I’m wide awake, this stuff would kill me as well.
So I’m sorry. There’s not really much I can tell about this presentation. These guys did some awesome research and provided a really detailed presentation, that is, if you could understand what they were saying (and I’m talking about the content here). But don’t expect me to give you details on how things work, because I didn’t get it. (I will look back at the paper and slides later on and see if I can get it after a 2nd, 3rd, 4th etc reading)
Their first demo went terribly wrong (Murphy’s law)… twice… and that gave me the time to try to refocus and to understand what they were saying…
… but I failed miserably in my attempts to catch up again…
Again : excellent, in-depth, well-performed and well-documented research. Just bad timing on my part.
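For those who (like me) need a second run at the concept : the core trick is surprisingly compact once you strip away the math. The toy script below is entirely my own illustration (not the authors’ code). It wraps CBC mode around a trivial XOR "block cipher" so the script is self-contained, and then recovers a plaintext block purely by asking a padding oracle yes/no questions.

```python
# Toy CBC padding oracle attack. The "block cipher" is a plain XOR with
# a fixed key so the script is self-contained; the attack logic itself
# relies only on the CBC XOR step and the oracle's yes/no answer, so it
# works the same way for a real cipher.
import os

BLOCK = 16
KEY = os.urandom(BLOCK)  # secret, only used inside encrypt/oracle

def _cipher_block(b):  # stand-in for a real block cipher (XOR is its own inverse)
    return bytes(x ^ k for x, k in zip(b, KEY))

def encrypt(plaintext):
    pad = BLOCK - len(plaintext) % BLOCK
    padded = plaintext + bytes([pad]) * pad  # PKCS#7-style padding
    iv = os.urandom(BLOCK)
    out, prev = iv, iv
    for i in range(0, len(padded), BLOCK):
        prev = _cipher_block(bytes(p ^ c for p, c in zip(padded[i:i + BLOCK], prev)))
        out += prev
    return out

def padding_oracle(ciphertext):
    """Server side : answers only 'is the padding valid after decryption?'"""
    prev, plain = ciphertext[:BLOCK], b""
    for i in range(BLOCK, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        plain += bytes(p ^ c for p, c in zip(_cipher_block(block), prev))
        prev = block
    pad = plain[-1]
    return 1 <= pad <= BLOCK and plain.endswith(bytes([pad]) * pad)

def recover_block(prev_block, target_block):
    """Recover one plaintext block using nothing but the oracle's answers."""
    intermediate = bytearray(BLOCK)  # will hold D(target_block)
    for pad in range(1, BLOCK + 1):
        pos = BLOCK - pad
        forged = bytearray(BLOCK)
        for i in range(pos + 1, BLOCK):  # force already-known bytes to 'pad'
            forged[i] = intermediate[i] ^ pad
        for guess in range(256):
            forged[pos] = guess
            if padding_oracle(bytes(forged) + target_block):
                if pad == 1:  # rule out an accidental longer valid padding
                    forged[pos - 1] ^= 0xFF
                    ok = padding_oracle(bytes(forged) + target_block)
                    forged[pos - 1] ^= 0xFF
                    if not ok:
                        continue
                intermediate[pos] = guess ^ pad
                break
    return bytes(i ^ p for i, p in zip(intermediate, prev_block))

ct = encrypt(b"attack at dawn")
# the block before the target plays the role of the IV in the XOR step
print(recover_block(ct[-2 * BLOCK:-BLOCK], ct[-BLOCK:]))  # b'attack at dawn\x02\x02'
```

Each plaintext byte costs at most 256 oracle queries, which is exactly why a server that leaks padding validity (via an error message or even a timing difference) gives the whole game away.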
So I moved on to the next presentation.
Adobe Reader’s Custom Memory Management: a Heap of Trouble
Because of the high volume of Adobe bugs that have been discovered/published over the last year, I really wanted to attend this presentation. After all, if something is an important target for attackers, it’s obviously important for me too.
This presentation simply rocked.
The charismatic Guillaume Lovet started the session by explaining that, unlike regular applications running on the Windows OS, Adobe uses its own custom heap management system (apparently for performance reasons). He outlined the Adobe heap structures and heap allocation/free techniques really well. You can get the slides here
Even people with limited knowledge about heap management should be able to understand the way the AcroPool, AcroBlocks and AcroCache work. That’s not because it’s easy, but because Mr. Lovet did a good job explaining the concepts and mechanisms.
When Guillaume finished the theory and identified the most common issues, and we came to the point where the techniques were going to be demonstrated, Haifei Li took over… and unfortunately that’s where I had to drop out again.
To get one thing straight first : there’s no doubt about the fact that this guy knows what he is talking about.
He just was not able to pass on the message, mainly because of the language barrier (English). As a presenter, if you fail to communicate the message in a clear way, no matter how good the content is, you somehow fail to achieve your goal.
So despite the fact that Guillaume’s part of the talk was of an exceptional level (both in terms of content and presentation skills), Haifei didn’t manage to take advantage of that and ruined the talk for me. I even left the room because it didn’t make any sense anymore and I didn’t want to get a "corrupted" view about the way these vulnerabilities can get exploited.
That said, not all is lost of course. What I took away from this presentation is :
- Adobe uses their own heap management system
- The OS heap management protection mechanisms in place since XP SP2 also apply to Adobe heap
- Some specific flaws in the Adobe heap management allow for exploitation : there are some relatively static key pointers, there might be some predictability, and the heap structures can be corrupted, which can lead to exploitation as well.
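To make that last bullet a bit more concrete : here’s a deliberately naive toy allocator (my own sketch, modelling neither Adobe’s AcroBlocks nor the Windows heap) that shows the general class of problem, i.e. user data sitting right next to in-band allocator metadata :

```python
# A deliberately naive allocator illustrating why corruptible in-band
# heap metadata is dangerous : each block's size header lives directly
# in front of the user data, so an unchecked write into one allocation
# can rewrite the metadata of the next one.
import struct

class ToyHeap:
    HDR = 4  # each block : 4-byte little-endian size header, then user data

    def __init__(self, size=256):
        self.mem = bytearray(size)
        self.top = 0

    def alloc(self, size):
        off = self.top
        struct.pack_into("<I", self.mem, off, size)   # in-band metadata
        self.top += self.HDR + size
        return off + self.HDR                         # "pointer" to user data

    def write(self, ptr, data):
        # no bounds check, like a vulnerable memcpy into a heap buffer
        self.mem[ptr:ptr + len(data)] = data

    def block_size(self, ptr):
        return struct.unpack_from("<I", self.mem, ptr - self.HDR)[0]

heap = ToyHeap()
a = heap.alloc(8)
b = heap.alloc(8)
print(heap.block_size(b))          # 8 : intact header of the second block
heap.write(a, b"A" * 8 + struct.pack("<I", 0x41414141))  # 12-byte write into an 8-byte block
print(hex(heap.block_size(b)))     # 0x41414141 : attacker-controlled size
```

Once the allocator trusts that corrupted size (on free, coalesce, reallocation…), the attacker is steering heap operations, which is the point where "corruption" turns into "exploitation".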
For most people, this should be enough to understand that Adobe may not be the right choice for certain types of applications. That’s just my personal impression, and other similar tools may suffer from similar threats…
Overall : good presentation. Message to BlackHat : please screen presenters to see if they are able to pass on the message. I hate to see excellent content get burned over a language barrier. (Again, I’m not saying that this guy did not do a good job research-wise…)
Anyways : time for lunch…
After the lunch, while heading back to the hotel for the afternoon sessions, I had the pleasure to be able to have a nice chat with some really kind and friendly guys :
(left to right : Myself, FX, Atilla, Xavier and Christiaan -> I really enjoyed hanging out with you guys – thank you !)
Oracle, Interrupted: Stealing Sessions and Credentials
The first briefing I attended after lunch was the talk given by Steve Ocepek and Wendel G. Henrique (both employees at Trustwave SpiderLabs). After Steve finished the short introduction, he showed a demo of 2 tools called "vamp" and "thicknet" (written by these 2 individuals). In this case, showing the demo before looking at the theory/details worked for me.
What these tools do is basically apply a quite old technique (arpspoofing) to detect, hijack and inject Oracle sessions.
Ok, I agree (and the presenters agreed as well)… that didn’t sound like anything new…
But the Oracle protocol is not just like many other network protocols. It’s not very well documented (at least not in public), it behaves in a particular way… it basically is not just any other tcp session. That means that deploying "older" tools such as ettercap or Cain&Abel to attempt a mitm attack would not really work in this case. They might be able to hook into the tcp session itself, but the Oracle session would most likely be broken.
In fact, apart from re-assembling/re-constructing live Oracle tcp sessions, there’s also Oracle-specific behaviour that needs to be taken into account. Because of those added features, the Oracle-specific protocol (Net8) gets encapsulated into TNS (Transparent Network Substrate) first. A simple mitm can take care of the TNS layer, but won’t take Net8 into account.
There are 3 types of messages seen frequently :
- User to Server (Net8 Bundle call 0x03 0x5E)
- Piggyback call (0x11E)
- User to Server (Fetch 0x03 0x05)
Anyways, using "arp poisoning" against a server & client, in order to get (=mitm) a tcp session, is by itself not that hard. That is basically what the "vamp" tool does : it takes care of the arp stuff and allows you to work your way in between 2 hosts on the network. Next, the "thicknet" tool allows you to detect live Oracle sessions. Detecting a live Oracle session is not as simple as looking at TCP handshakes or something like that… What Steve and Wendel had to do is build in a routine that detects a new Oracle session based on a "sled" : a predictable byte sequence inside the tcp session, which identifies a certain type of action that is sent from a client to a server (such as a "select" statement). In addition to that, they made the tool very stateful, so new hosts/sessions get to dynamically join the mitm party as well.
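Since the tools were not released at the time of writing, here’s a toy sketch (entirely my own, and vastly simplified) of the "sled" idea : match a predictable byte sequence in the reassembled stream, per flow, rather than trusting port numbers. The marker bytes are the ones from the message type list above; real Net8 parsing is obviously far more involved.

```python
# Toy "sled" detector : identify a live session by pattern-matching a
# predictable byte sequence in a reassembled TCP stream, independent of
# the port number. Markers taken from the message types listed above.
SLEDS = {
    b"\x03\x5e": "user-to-server bundle call",
    b"\x03\x05": "user-to-server fetch",
}

class SessionDetector:
    """Stateful per-flow detector : new flows can join at any time."""
    def __init__(self):
        self.flows = {}  # (src, dst) -> buffered stream bytes

    def feed(self, src, dst, payload):
        buf = self.flows.get((src, dst), b"") + payload
        self.flows[(src, dst)] = buf[-64:]  # keep a small sliding window
        return [name for sled, name in SLEDS.items() if sled in buf]

det = SessionDetector()
# payload split across two segments : the detector still matches,
# because it works on the reassembled stream, not on single packets
print(det.feed("10.0.0.5", "10.0.0.1", b"\x00\x1a\x00\x00\x06\x00\x00\x00\x00\x00\x03"))
print(det.feed("10.0.0.5", "10.0.0.1", b"\x5e\x02\x80\x61\x00\x01"))
```

Note how the marker straddles a segment boundary in the example : that is exactly why a stateful, stream-level matcher is needed instead of per-packet inspection.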
Combining those 2 tools : if you have a mitm in place, and if you can correctly identify and re-assemble Oracle sessions, then you can take over the session and inject your own code (within the context of the user that started the session). So for example, if you have taken over the session of an admin, you become admin as well. When the session is taken over, the original client connection will obviously die.
This technique is obviously not limited to Oracle sessions. Steve mentioned that they tried to make the tools as modular as possible, basically allowing to modify the settings to do the same with for example MS SQL, MySQL or even SMB.
Next, they demonstrated another feature in this nifty utility, which allows for authentication downgrade attacks. It forces the server to accept an older version of the password hash, and captures that hash. The reasoning on why anyone would want to do this is simple : if you have a shorter hash, it becomes easier to bruteforce.
Even though Oracle does not always use the standard 1521/tcp port, this technique should still work, as the utility performs pattern matching on packet content in an attempt to find the session (regardless of the port). And since the tool looks at pattern matches (and not so much at ports), it would allow you to catch "disconnection" requests from clients, drop the request to the server (and forge a fake reply to the client), so you can just take over the session without having to disconnect a valid session.
Right at the moment where this "thicknet" authentication downgrade feature was going to be demonstrated, I received some text and twitter messages from people in Belgium, asking me if I already knew about what was going on in Iceland (volcanic activity, Mr. Spock, producing a huge cloud of ashes that was drifting towards and over most parts of Europe… that’s just in case you didn’t know already). They basically explained that the airspace was going to be shut down in a few hours, in large parts of Europe (including France, Belgium, etc), and wanted to let me know that I would probably end up being stuck in Barcelona.
(I kind of always kept in the back of my head that the whole "cloud" thing was eventually going to blow up in my face some day… ;-) )
A few minutes later (I obviously had to step out of the presentation for a few moments), some other people started reporting the same thing to me…. and while that happened, a round of applause resounded from the conference room… and the talk was over.
Luckily some of my friends were able to capture the last part of the presentation. So you can read more here.
The thicknet and vamp utilities will be released soon.
The take-away from this presentation is : protecting/monitoring access to your LAN is still very important. MITM attacks are old, but still real and very effective. Even somewhat "undocumented" protocols can be reverse-engineered (maybe not perfectly, but good enough to allow malicious people to do something with it).
If you combine these facts with the knowledge gained in the SAP Backdoor presentation from day 1…. I probably don’t need to say more do I ?
This was a good presentation and some display of hard work..
Unfortunately the presentation for me got a bit "overshadowed" by a little gray cloud in the air :
(Ireland is at the bottom of the image, and you can clearly see the grey stream of ashes drifting above Scotland, going towards the Scandinavian countries, and starting to spread across the rest of the continent – Image source : ESA)
Gathering all the "bummers" I’ve encountered so far today, this clearly is not my lucky day.
But I always think positive, and I tried not to let that ruin the rest of the presentations. After all, if I’m stuck already, then there’s not much I can do about it anyway. So I phoned the travel agency; they confirmed that my flight back to Brussels later that evening was indeed cancelled, and they simply rebooked my evening flight into a morning flight back to Brussels (at that time, it was still unclear how long the airports were going to be closed, so it was still a wild guess…)
Since most of us already checked out at the hotel in the morning, our plan for the rest of the day/evening looked somewhat like this :
- Continue with the presentations,
- "social networking" at the Core Security party,
- have dinner…
- Finally, Xavier (@xme) and myself will head out to the airport and spend the night there (some 0xC0FFEE, a laptop, free or cheap Wifi, a power plug…)
That’s all we need to survive, isn’t it ?
Sounds like a plan, back to the presentations.
Right before this presentation, Murphy’s law kicked in again. "If things can break, they will. If things can break twice, they will." So, 2 laptops later, Vincenzo Iozzo (from Zynamics) basically started his talk without any slides. But he really knows what he is talking about and can deliver the message well, so the lack of slides didn’t really bother me (and they fixed the laptop issue later on, so we got to see the nice pictures on his slides anyway)
In the past, sometimes a few lines of fuzzing code would allow you to get big results. While this is still possible, it will become more complex to catch big fishes (which explains the huge attention when someone does).
Most current fuzzing techniques are either
– dumb fuzzing : just feeding random data to an application. It is fast, but the number of vulnerabilities found with this technique will probably decrease (because it is partially based on luck as well)
– smart fuzzing : crafting data in a way it matches the protocol or format and see what happens if you change certain fields. This technique is slow and may be hard if the format is not documented
– evolutionary fuzzing : taking the best parts of the 2 previous techniques, this technique will get you better results, but it still has some weaknesses : special fields in the format (such as CRC checksums and other fields based on calculations) may decrease the success of this technique.
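To illustrate the first bullet : a "dumb" fuzzer really can be just a handful of lines. The sketch below (my own toy example, with a deliberately buggy parser as target) mutates random bytes of a valid sample and records anything that crashes :

```python
# A minimal "dumb" fuzzer : mutate random bytes of a valid sample and
# throw the result at the target. The target here is a deliberately
# buggy toy parser that blindly trusts its length field.
import random

def buggy_parser(data):
    if not data:
        return None
    length = data[0]            # toy format : [length byte][payload]
    payload = data[1:]
    return payload[length - 1]  # IndexError when length > len(payload)

def dumb_fuzz(sample, target, iterations=2000, seed=1234):
    rng = random.Random(seed)   # fixed seed, so a run is reproducible
    crashes = []
    for _ in range(iterations):
        mutated = bytearray(sample)
        for _ in range(rng.randint(1, 4)):  # flip a few random bytes
            mutated[rng.randrange(len(mutated))] = rng.randrange(256)
        try:
            target(bytes(mutated))
        except Exception as exc:
            crashes.append((bytes(mutated), type(exc).__name__))
    return crashes

sample = bytes([4]) + b"ABCD"   # valid input : length 4, payload "ABCD"
crashes = dumb_fuzz(sample, buggy_parser)
print(len(crashes), "crashing inputs,", {name for _, name in crashes})
```

Notice that it only finds the bug because the bug is shallow : any format with checksums or structure would swallow almost every mutated input before it reaches interesting code, which is exactly the weakness described above.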
Conclusion : fuzzing is hard. But since it means $$$, it still is important
In his talk, Vincenzo added a new approach to the evolutionary fuzzing technique.
He basically outlined 4 steps to find new vulnerabilities :
Static & dynamic binary analysis, with the goal of finding interesting functions. This technique is based on recognizing interesting functions (which can be very difficult). In an attempt to do so, he feeds the application valid input and uses a set of breakpoints (on entry and exit points of functions) to find the path a certain input value takes while it’s processed by the application. Out of this path of functions, he tries to take the ones that are complex. He explains that a big function is not necessarily a complex one. Complex functions are the ones that may introduce vulnerabilities, but finding complex functions is not easy. So he applies the concept of "McCabe metrics" to measure the complexity of a function. Furthermore, functions that contain a loop may be interesting as well (cyclomatic complexity), but it’s not always easy to find loops. Certain instructions/implicit loops (the REP instruction, casts, calls to strcpy etc) may not look like a loop inside the function, but they will result in a loop sequence.
One of the things he does is find dominating functions. If the entry point of a child function goes back to the dominating function, then you have a loop, and that loop may be interesting. The use of REIL (an intermediate language) helps in detecting these loops. For static analysis, Zynamics BinNavi can be used.
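For reference, the McCabe metric itself is simple once you have a control flow graph : M = E – N + 2P (edges, nodes, connected components). The toy sketch below (my own illustration, not BinNavi code) computes it for a small graph and also shows the naive dominator-based back-edge test described above :

```python
# McCabe / cyclomatic complexity over a toy control flow graph, plus a
# naive dominator-based back-edge check : edge (u, v) is a back edge
# (i.e. a loop) if v dominates u, meaning every path from the entry to
# u has to pass through v.

def cyclomatic_complexity(edges, components=1):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * components  # M = E - N + 2P

def back_edges(edges, entry):
    def reaches(dst, banned):
        # can we reach dst from entry while avoiding 'banned'?
        seen, stack = set(), [entry]
        while stack:
            n = stack.pop()
            if n == dst:
                return True
            if n in seen or n == banned:
                continue
            seen.add(n)
            stack.extend(v for u, v in edges if u == n)
        return False
    def dominates(v, u):
        return v == u or not reaches(u, banned=v)
    return [(u, v) for u, v in edges if dominates(v, u)]

# a function with one if/else and one loop :
cfg = [
    ("entry", "cond"),
    ("cond", "then"), ("cond", "else"),
    ("then", "join"), ("else", "join"),
    ("join", "check"),
    ("check", "body"), ("body", "check"),  # the back edge
    ("check", "exit"),
]
print(cyclomatic_complexity(cfg))   # 3 : one branch + one loop + 1
print(back_edges(cfg, "entry"))     # [('body', 'check')]
```

Real tools compute dominators far more efficiently than this brute-force check, but the principle is the same : a back edge to a dominating node is what identifies a loop.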
Once the functions are known, he performs in-memory fuzzing and captures the results, focusing only on the functions you want to fuzz.
In this technique, data is tainted/marked, and watches are set up to see what happens with the values/registers while they get processed/propagated through the function. Dytan and Pin are 2 tools that allow you to do this. Of course, this may become complex as well : if, for example, 2 values are added together and the result is put in another register, then the 2 original values have disappeared.
Anyways, the goal of this step is to get the memory locations and registers that should be watched, and to identify which parts of the input may be interesting.
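The tainting idea can be sketched in a few lines. This toy example is my own illustration (Dytan and Pin obviously work at instruction level, not on Python objects) : it shows the propagation principle, including the "2 values added together" merging problem mentioned above :

```python
# Toy taint tracking : wrap values so that any derived value remembers
# which input offsets influenced it. An addition yields one merged
# result carrying both source taints, so the two originals "disappear"
# into it, exactly the complication described above.

class Tainted:
    def __init__(self, value, sources):
        self.value = value
        self.sources = frozenset(sources)  # input offsets that influenced it

    def __add__(self, other):
        if isinstance(other, Tainted):
            # the two originals merge into one value with combined taint
            return Tainted(self.value + other.value, self.sources | other.sources)
        return Tainted(self.value + other, self.sources)

    def __repr__(self):
        return f"Tainted({self.value}, sources={sorted(self.sources)})"

# mark each input byte with its offset, then watch a computation
data = [Tainted(b, {i}) for i, b in enumerate(b"\x10\x20\x30")]
length = data[0] + data[1]   # derived from offsets 0 and 1
total = length + data[2]     # now influenced by all three input bytes
print(total)                 # Tainted(96, sources=[0, 1, 2])
```

The payoff is the final `sources` set : it tells you exactly which input offsets are worth mutating to influence a given memory location or register.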
When that information has been gathered, the in-memory fuzzing can be started. There are a few approaches to this, because it can very easily lead to memory corruption and instability. Furthermore, you may be testing a specific function only, which may not lead to realistic results : you are basically bypassing certain input validation routines, so you may be testing input that would never make it to the function in real life.
Depending on the way you perform the in-memory-fuzzing, the techniques may produce false positives (e.g. a memory corruption may trigger an AV), or false negatives (you may be fuzzing a specific function that is not vulnerable by itself, and the vulnerable function that is processed later on is never tested).
There are a few ways to try to minimize the false positives/false negatives :
- Hook image (using Pin)
- Hook function
- Hook instructions
The in-memory fuzzer implemented by Vincenzo builds upon Pin.
When testing a function, you can deploy one of the following techniques :
– write your own mutation function (one that changes the input data) and hook it into the function flow, basically have it run after the function ends and feed the input back into the function (Mutation loop insertion).
This technique works, but it continues to change memory without going back to the original data. So you can find crashes that may not happen in real life.
– snapshot restoration mutation : in this approach, a function is first called to save the original memory contents (a snapshot). Before each fuzzing iteration, this snapshot is put back, so each iteration makes use of "realistic" data.
Furthermore, there are a few approaches to changing the data : you can overwrite memory, or you can write the fuzzed data to another location (and update the pointers accordingly).
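Here’s a toy sketch (my own, using a Python function as a stand-in for a hooked native function) of the snapshot restoration approach : save the original buffer once, restore it before every iteration, and count the faults :

```python
# Snapshot-restoration in-memory fuzzing, sketched in Python. Unlike
# mutation loop insertion (which keeps mutating the same buffer until
# its state drifts away from anything realistic), every iteration here
# starts from a restored copy of the original memory contents.
import random

def target(buf):
    # stand-in for the hooked function under test : faults whenever the
    # first ("header") byte has been changed
    if buf[0] != 0x00:
        raise RuntimeError("simulated fault")

def fuzz_with_snapshot(buf, iterations=500, seed=7):
    rng = random.Random(seed)
    snapshot = bytes(buf)        # save the original memory contents once
    faults = 0
    for _ in range(iterations):
        buf[:] = snapshot        # restore : every run starts realistic
        buf[rng.randrange(len(buf))] = rng.randrange(256)  # one mutation
        try:
            target(buf)
        except RuntimeError:     # fault monitoring, in its crudest form
            faults += 1
    buf[:] = snapshot            # leave memory as we found it
    return faults

buf = bytearray(b"\x00\x01\x02\x03")
faults = fuzz_with_snapshot(buf)
print(faults > 0, buf == bytearray(b"\x00\x01\x02\x03"))
```

The last line is the whole point : faults were observed, yet the buffer ends up exactly as it started, so later iterations (and the host process) are not polluted by earlier mutations.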
Finally, the results of the fuzzing operations are captured (fault monitoring). You can do this by setting breakpoints on entry/exit points of the functions and observing/capturing the contents of memory/registers. You can compare the results (code coverage score) with the results from good input/good sample.
This talk was really interesting, and Vincenzo knows what he is talking about. Unfortunately his approach has not led to finding any bugs yet (so there’s more work and fine-tuning to do).
You can get the feb 2010 version of Vincenzo’s paper here.
In the last session of this year’s BlackHat Europe 2010 briefings, Christiaan Beek shared his experiences with some challenges and solutions for newer technologies (Citrix, virtualization, Windows 7), and how those new technologies affect the forensic processes.
In the past, forensic examiners knew exactly what to expect. They had to come in, get contents from physical disks and start their research. In virtual environments things have changed. Data is more centralized (=huge storage), it’s not always clear where the data is, who owns the data (cloud computing) or where it is (multiple copies stored in various locations of the world).
Next, capturing data/memory from virtual machines becomes harder too, because there are not a lot of tools available to do this. And if you fail to get the data the correct way, you may have contaminated it (so it loses its legal value)
In certain cases (based on the type of investigation), it may even be easier to get a copy of a 1 day old backup and do research on those contents.
The current well-known tools (FTK, EnCase) are not always prepared to deal with virtual images either (vmdk, and vhd in the case of Windows Vista/7). Christiaan mentioned that importing an image in one tool and exporting it to be used in another tool sometimes changes the contents of the image (and that is a bad thing !)
It is clear that, while technology changes fast, the people that build the forensic tools are always a few steps behind. Fortunately the files that are important in the investigation process are documented, so if there are no tools that support the technology, then a good old hardcore hex editor may be your (only) friend.
Some people are working on tools that will improve the forensic process, but it’s still work in progress.
Overall : Excellent talk !
You can download the presentation and paper here
After Blackhat Europe 2010 wrapped up, it became clear that the airspace in most parts of France, Belgium, Germany and the Netherlands was already shut down, and that our chances of being able to fly back from Barcelona to Brussels on Friday morning were… virtually non-existent.
On the other hand, we still wanted to spend some time at the Core Security party, basically identifying our travel options and setting a strategy :-) Ok, that’s not really true. My wife basically took the initiative and started looking for alternative solutions while we were at the party. We just had some drinks and talks while she got to do all the dirty work. After making a trizillion phone calls back and forth, we were finally able to get ourselves a flight booked from Barcelona to Lyon (that airport was not closed at that time). The plan was to figure out what our options were once we got there. It didn’t make any sense to start booking and making arrangements if we were not even 100% sure that we would make it to Lyon.
So, after having enjoyed the party and dinner at a great Japanese restaurant, together with Roelof Temmingh (Paterva – Maltego) and his wife Susan (I hope I got her name right), Frank (Seccubus), Christiaan Beek, Didier Stevens and his wife Veerle, Iftach Ian Amit, Xavier Mertens and Chris John Riley, Xavier and I went back to the hotel, trying to get our new airplane tickets printed, and trying to closely monitor the flight and airport information available at that time. So we set up a "mobile war room" in the lobby of the hotel, taking advantage of the fact that the "Blackhat" Wifi had not been turned off :-) Thanks Blackhat Staff for that !
(Xavier (https://twitter.com/xme) keeping friends updated on our status via twitter)
We basically figured out that after flying to Lyon, taking the train/TGV from Lyon to Belgium would be our best option… There seemed to be a few places left on the train to Paris & Brussels/Lille, so we went to the airport at 2:00am and kept monitoring the displays to make sure our flight to Lyon would not get cancelled.
This blog post was written while waiting to check in for our flight (it’s currently 4am)
At around 4:50am, we decided to book our train tickets because the flight was still not cancelled.
(source : xme’s twitter account)
By the time I’ll make it home, I will have been awake for about 31 hours … so bear with me if you don’t hear anything back from me today :-)
Update : 14:00 : We made it home ! I just arrived at Lille Flandres, wife picking me up. Home sweet home ! Time for some sleep !
© 2010 – 2015, Corelan Team (corelanc0d3r). All rights reserved.