BlackHat Europe 2011 / Day 01

Good morning Barcelona ! My day started at 6:30am, with a beautiful view from the hotel room.


About an hour later, when the sky started to clear up, I already could smell it was going to be a fine day.


After having breakfast, chatting with ping and hanging out with @kokanin, @xme and @wimremes, it was time to start attending the various talks.

So, as promised in yesterday's preview, what follows is the report of my first day at Black Hat Europe 2011.


[Core Attacks] – New Age Attacks Against Apple’s iOS (and Countermeasures) / Nitesh Dhanjani

Nitesh kicked off his talk by quoting Steve Jobs, basically using some key figures to explain the popularity of the iPad/iPhone devices.

Apparently Jobs announced that no less than 100 million iPhones and 15 million iPads have been sold so far.  In order to achieve that amount of sales, it's fair to state that not only the hardware & design have to be solid, but the OS (and the way it works) has to work well to contribute to that success.  And with over 15 billion apps downloaded, people actually seem to be using these devices.

The reality is that people actually store private (social media) information on these devices, use it to access and store potential company confidential information, or even use it as a platform to run company applications.  Users take those devices to work and expect IT departments to just accept it and support them.

Putting one and one together, we can conclude that the challenges around mobile security now include iOS platform based devices too.

Nitesh continues by explaining that not everything is as great as it sounds.

The iOS platform (just like other OSes) uses protocol handlers (called a URLScheme in iOS) to link a request to an application.  We all know that http:// will most likely result in web access and telnet:// in setting up a telnet connection.  Those are very common examples.

When accessing a protocol handler in Firefox (OS X), a dialog box will be presented, asking if the user wants to allow the application to be launched or the connection to be made.  Even if you include a handler in an iframe, Firefox will ask for permission, and Nitesh used the following hot & trendy example to demonstrate this :

Result : the user is asked whether he wants to connect to justin_bieber.image

So far so good, but it doesn't work like this all the time.  Safari on OS X, for example, won't request authorization for this connection request.

What happens on iOS ?  Based on his research, Nitesh discovered that “making a phone call” is about the only handler that asks for authorization.

Since it's unclear how many handlers are installed (or get installed by 3rd party applications), it's unclear what the impact of this would be.  As Nitesh demonstrated, it's trivial to trigger a Skype call to someone by simply convincing the user to browse to a web page that includes an iframe.  It all happens in the blink of an eye, and there is not much the iPhone user can do about it.
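To make the attack concrete, here is a minimal sketch of the kind of booby-trapped page Nitesh described. The page itself and the phone number are hypothetical (not his actual demo code), and it assumes a skype:NUMBER?call style handler:

```python
def build_attack_page(number):
    """Return HTML that silently invokes the skype: protocol handler.

    The iframe is invisible; on iOS, simply loading the page fires the
    handler and starts the call without asking the user.
    """
    return (
        "<html><body>"
        "<h1>Totally harmless page</h1>"
        f'<iframe src="skype:{number}?call" style="display:none"></iframe>'
        "</body></html>"
    )

page = build_attack_page("+3212345678")
print("skype:" in page)  # True - the handler URL is embedded in the page
```

Hook that page into a BeEF-style framework and the "one click, one call" scenario above is exactly what you get.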

BeEF (the Browser Exploitation Framework) anticipated this and added a Skype call handler to the framework.  As Nitesh showed, a simple XSS vulnerability is enough to hook the BeEF framework and take full advantage of its features on the iOS platform as well.

Next, the presenter summarized what had been discussed so far, and put out some (open) questions :

  • Should we expect the Safari browser to ask for authorization when it tries to interface with other Apps ?
  • Should we expect Apple to pull existing Apps when a vulnerability is reported (cfr the Skype issue – Apple told the researcher to talk with Skype and ask them to fix it) ?
  • What exactly is Apple’s methodology to vet whether an App is secure or not, before publishing it on the App Store ?
  • Should a list of exposed URLSchemes be made available to admins ?

Again, Skype was just one example. There might be others, and the list is not limited to 3rd party apps. Similar issues may be introduced in company software written for the iOS platform.  Imagine being able to simply view, edit or delete patient records by calling a custom URLScheme health_record://close_case/patient_id=24232

It’s clear that some visibility around the exposed URLSchemes is important in order to assess the overall security level of the device, the data stored on the device, and the data the device can access one way or another.  On top of that, discovering new URLSchemes in 3rd party apps is not easy. Apps in the App Store are encrypted. You would need a jailbroken device with custom decryption software (“Crackulous” for example) to expose the code, so you can run “strings” on it.

It’s also important to note that similar URLSchemes are used for inter-app communication. This means that it might be possible to own an entire suite of apps by abusing the handler that is exposed to the end user.

Solutions for this issue revolve around good coding practices.  Developers should be trained to build in logic that would at least throw an authorization request. In iOS versions < 4.2, this has to be implemented in the application:handleOpenURL API call :

– validate input

– ask for authorization

– perform the transaction if the end user approved it.

(note : in 4.2, this API call was replaced with application:openURL:sourceApplication:annotation, which is safer because it requires the use of a bundleID)
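Sketched in Python for readability (the real code would be Objective-C inside that callback, and the action/record names below are my own hypothetical examples), the three steps look like:

```python
ALLOWED_ACTIONS = {"view"}   # assumption: only read-only actions are whitelisted

def handle_url(action, record_id, ask_user):
    """validate / authorize / perform - the pattern described above."""
    # 1. validate input
    if action not in ALLOWED_ACTIONS or not record_id.isdigit():
        return False
    # 2. ask for authorization
    if not ask_user(f"Allow '{action}' on record {record_id}?"):
        return False
    # 3. perform the transaction only after explicit approval
    perform_transaction(action, record_id)
    return True

def perform_transaction(action, record_id):
    print(f"{action} record {record_id}")

print(handle_url("delete", "42", lambda q: True))   # False : not whitelisted
print(handle_url("view", "42", lambda q: False))    # False : user declined
print(handle_url("view", "42", lambda q: True))     # True : validated + approved
```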

Furthermore, precautions should be taken in the application to prevent direct transactions that would actually modify / delete data.

Finally, developers should be aware of intermediary apps that can be abused to invoke another app.

Abusing exposed URLSchemes is not the only issue with iOS.  We also have to deal with :

– Pranks (Upside Down Ternet is just one example)

– MITM attacks (not all apps will actually show a warning if an SSL cert is not valid)

– Decloaking identity (do a MITM & insert an iframe that calls the Facebook profile page. This would expose the profile/wall of the victim)

– UI Spoofing (you would be able to insert a fake URL bar)

– Abusing push notifications

Finally, it’s important to understand that, while file encryption is available on the iOS platform, developers should use the readily available API calls to actually enforce it, and should use the user passcode in the encryption process.  The default passcode (4 digits) can be bruteforced under the right circumstances, so this is only going to make sense if you also enforce the use of stronger/longer passcodes.
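To illustrate why a 4-digit passcode alone is weak, here is a toy brute force over the full keyspace. The bare hash is purely a stand-in for key derivation (real iOS key derivation is hardware-entangled, not a plain SHA-256):

```python
import hashlib
from itertools import product

def derive_key(passcode):
    # Stand-in for a passcode-derived encryption key (illustrative only)
    return hashlib.sha256(passcode.encode()).digest()

target = derive_key("4821")      # key derived from the victim's PIN

# A 4-digit passcode gives only 10,000 candidates: trivially brute-forceable.
recovered = next(
    "".join(d) for d in product("0123456789", repeat=4)
    if derive_key("".join(d)) == target
)
print(recovered)  # 4821
```

With a longer alphanumeric passcode the candidate space explodes, which is exactly why enforcing stronger passcodes matters.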



[Core Attacks] – Escaping from Microsoft Windows Sandboxes / Tom Keetch

This was one of the talks that immediately drew my attention when I saw the BlackHat briefing schedules. Based on recent developments in various applications, and the fact that Stephen Fewer bypassed the PMIE (Protected Mode Internet Explorer) sandbox in pwn2own a few days ago, we might start seeing some really interesting research in that area. Richard Johnson recently published some information on Adobe Reader X Sandbox : “A Castle Made Of Sand”

Tom Keetch, an application security engineer at Verizon Business, kicked off his talk by laying out the contents of his talk : assessment of the IE, Acrobat Reader and Google Chrome sandboxes.

The ultimate goal when implementing exploit mitigation techniques, Tom says, is to make exploitation as “expensive” as possible. If the amount of effort required to build an exploit that will actually allow an attacker to own the system / access confidential data outweighs the value of that access or data, then hackers will probably move on and find themselves an easier target.  The implementation of ASLR, DEP, SafeSEH and SEHOP are just a few ways the OS & compilers attempt to make applications less exploitable. Nevertheless, reliable exploitation is still possible, so a second stage payload can be used to attempt to break out of the sandbox. That is just one reason why we are seeing a lot of development around sandboxing.  The fact that these particular applications are the ones that get sandboxed is because we are seeing a clear shift towards client/browser based attacks.

As Tom explains, not all current sandboxing implementations are mature yet, and not all of them will do what the user expects them to do.

In the past, practical sandboxes would require “nasty” kernel drivers to be effective. Nowadays, sandboxes can be (and are) implemented in userland, using Windows OS features.

So, while the goal of the sandbox is to make valuable exploitation increasingly more difficult, hackers will still try to find (and use) the easiest path to breaking out of those sandboxes.

Before looking at the details of the individual sandboxing implementations, Tom explains what practical and effective sandbox implementation should look like.

Basically, (Windows based) sandboxes should implement :

  • Restricted Access Tokens
    • Deny-only SIDs (Discretionary)
    • Low Integrity (Mandatory)  (added in Windows Vista and up)
    • Privilege Stripping (Capability)
  • Job Object Restrictions
  • Window Station Isolation
  • Desktop Isolation

When looking at the actual implementations, Tom explains that Protected Mode Internet Explorer (PMIE) doesn’t use all of those available features.  It does not implement Restricted Tokens, Job Object Restrictions, or Window Station/Desktop Isolation. It basically only guarantees the integrity of the system, not its confidentiality.

Adobe Reader X only does a partial implementation of Job Object Restrictions, and does not offer Window Station or Desktop Isolation.  The sandbox makes use of the Chromium sandboxing and IPC framework, and allows read access to files.  It does not protect the clipboard or the GAT (Global Atom Table).

Google Chrome (Chromium) does a far better job, as it has implemented all of the features in the renderer engine. Tom explains that the renderer is the most complete and restrictive sandbox, but also stresses the fact that 3rd party plugins are not protected by the sandbox.  The GPU feature in Google Chrome is not sandboxed either (sandboxing it is scheduled for a future release).

So, what are the techniques that can be used to break out of a sandbox ?

BNO Namespace Squatting : Shared sections can be created with a name in the ‘Local’ namespace. By “squatting” on shared section locations, arbitrary values can be set on any shared section.  A practical example, Tom continues, would look like this :

Kill the IE broker process / predict the name of the shared section / create the shared section with custom privileges / when the IE broker process re-creates that shared section, the sandboxed process all of a sudden has full access.  This “generic” technique works for PMIE, Adobe Reader and Chromium.
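The Windows section-object details don’t translate directly to other platforms, but the squatting idea can be sketched with POSIX shared memory as an analogy (the section name is hypothetical, and this is not the actual Win32 API sequence Tom showed):

```python
from multiprocessing import shared_memory

SECTION_NAME = "broker_section_1234"   # hypothetical, predictable name

# The "sandboxed" attacker squats on the name first and seeds it...
attacker = shared_memory.SharedMemory(name=SECTION_NAME, create=True, size=64)
attacker.buf[:4] = b"EVIL"

# ...so when the broker later tries to create the section, the name is
# already taken and it ends up attached to the attacker-controlled object.
try:
    broker = shared_memory.SharedMemory(name=SECTION_NAME, create=True, size=64)
except FileExistsError:
    broker = shared_memory.SharedMemory(name=SECTION_NAME)

seen_by_broker = bytes(broker.buf[:4])
print(seen_by_broker)                  # attacker-controlled data

broker.close()
attacker.close()
attacker.unlink()
```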

Microsoft’s take on this is that the sandbox (protected mode) should not be considered a security boundary. This means that they probably won’t fix this until IE10.

NPAPI Interface Exploits : This API was originally used to interface between browser and plugin. Both sides of the communication trusted each other, because they agreed on certain calling conventions. The reality is that, if you find an exploitable bug in a plugin, you might be able to use NPAPI to escape from the sandbox.

A third technique takes advantage of handle leaks.  A handle which refers to a privileged resource/kernel object might exist inside the sandbox for various reasons (on purpose, by accident, because of incorrect policy settings, unclosed handles…). Finding the handles is easy – Process Explorer will help you detect them at run time.  When the right type of handle is leaked into the sandbox, it can be used to escape.

Clipboard Attacks : in PMIE and Acrobat Reader, the clipboard is shared between the sandbox and the rest of the user session.  Not only can an attacker read from the clipboard, he might be able to actually put in malicious data and use it as input into an application that “trusts” the clipboard.

Sandboxes already made exploitation harder, and will continue to evolve.  But a long road is ahead still.

I’m really interested in sandbox technology and the process of escaping from sandboxes.

Tom did a good job outlining & explaining this complex subject, visualizing the various components and escape techniques.

Good job Tom !   This is certainly one of the topics I would love to do some research on myself, time permitting :)

Update : Tom tweeted that Flash appears to have a limited sandbox implementation in Chrome.


[Application Dissection] – Web Application Payloads / Andrés Riancho

The last talk before heading out for lunch is the Web Application Payloads presentation from Andrés Riancho, Director of Web Security at Rapid7.  People may know Andrés from the w3af framework, and this is exactly what this talk is about.

w3af, Andrés says, is an open source web application attack & audit framework (hence the name w3af).  It’s plugin based (which makes it easily extensible), and since Rapid7 decided to sponsor the project, it now has one full time developer.  The tool offers a GUI and a console.  Andrés explains that the achievements so far include the fact that it has a low false negative rate, has good link & code coverage, is widely known, and is part of a lot of security/audit (Linux) distributions.

I have to be honest. I played with w3af a while back and found it somewhat buggy.  Andrés continues the talk by explaining that, because the project has a full time development resource, a lot of the bugs have been fixed, faster libraries are being used, documentation has been written, and a Windows installer has been released.  He acknowledges the fact that it might have been a bit buggy in the past, but also stresses the fact that most of the issues are gone now.

I have to admit, what I have seen in his demo really convinced me to take another look at it and actually use it more often.  You’re about to find out why.

Andrés continues the talk by telling a story. A classical story-line that explains the typical process of auditing a web application.

Let’s say a pentester finds a local file inclusion vulnerability during a web application security assessment.  He attempts to read files for a few hours, but can’t find anything really valuable because of file permissions, lack of info, etc.  After a few more hours, the pentester gets lucky and all of a sudden finds an application directory in the webroot where he can pull off an arbitrary file upload. This will allow him to upload a shell. Using the shell, he might even be able to access db data, or get root privileges (one way or another).

Fact : none of the currently available tools have good post exploitation techniques that could take the initial exploit (local file inclusion) and turn it into something bigger. Most exploitation frameworks (Metasploit, etc) only focus on memory corruption bugs, because those have been the most important vulnerability class.

As focus shifts towards web applications, there is a real need for tools that can assist in the process of post exploitation.  That’s where w3af comes into play.

Recent developments have introduced the concept of “payloads” into w3af. A payload will essentially help you get root from a low-privileged vulnerability in a (semi-)automated way. In order to do this, it will attempt to combine application/OS capabilities with OS/app behaviour and properties, in order to get the info that is required to further optimize the post exploitation phase of the assessment.  The payload is a script that puts all those things together, translates them into http requests that take advantage of the originally discovered vulnerability, and collects more info / provides a path to further exploitation by emulating syscalls.
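The core trick is simple: the payload’s pseudo-syscalls get translated into requests that abuse the original bug. A tiny sketch of the idea (the URL and parameter names are made up, not w3af’s actual internals):

```python
# Hypothetical vulnerable endpoint: app.php includes whatever 'page' points to.
VULN_URL = "http://victim/app.php?page="

def emulated_read(path):
    """Translate a file-read 'syscall' into an LFI request.

    URL building only - a real payload would fetch the URL and parse
    the response to recover the file contents.
    """
    return VULN_URL + path

# e.g. reading /proc/net/tcp would let a payload list network connections
print(emulated_read("/proc/net/tcp"))
```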

Andrés provided some examples on how read access could be used to list processes, list network connections, etc…  I must say, I was really impressed… and I hadn’t even seen the demos yet :)

Demo 1 : an LFI allows read access; the payload reads /etc/passwd and dumps users & home directories. A second payload then takes all home folders and bruteforces interesting files in those home folders.
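The first half of that payload boils down to parsing the classic passwd format; a minimal sketch with sample data standing in for what the LFI would return:

```python
# Abbreviated stand-in for dumped /etc/passwd content
PASSWD = """root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
alice:x:1000:1000:Alice:/home/alice:/bin/bash"""

def parse_passwd(text):
    """Map each user to its home directory (fields 0 and 5 of passwd)."""
    homes = {}
    for line in text.splitlines():
        fields = line.split(":")
        if len(fields) == 7:
            homes[fields[0]] = fields[5]
    return homes

homes = parse_passwd(PASSWD)
print(homes["alice"])   # /home/alice
```

The second payload would then iterate over those home directories and request well-known files (.ssh keys, history files, …) through the same LFI.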

Demo 2 : use the LFI to read & dump the source code of the web application onto the attacker computer.  It uses information gathered during the scan phase of the assessment (the knowledge base includes urls that were identified in the application), and the “get_source_code” payload will read those files and write them to a local folder on the attacker machine.

So – now we have the source code, Andrés continues, but what can we do with it ? We can spend hours reading it, finding bugs and exploiting them.

Or… we can use Static Code Analysis (SCA), a PoC payload (2 weeks old at the time of writing, written by Javier Andalia) which audits the downloaded source code, tracks tainted variables, looks for variables that are used in dangerous functions, analyzes those, and helps identify potential new attack vectors that might help escalate privileges on the remote system.
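To get a feel for what such a payload does, here is a deliberately naive taint check of my own making (the real SCA payload parses PHP properly rather than pattern-matching): flag variables assigned from $_GET/$_POST that later reach a dangerous sink.

```python
import re

# Toy taint analysis: user-controlled sources feeding dangerous sinks.
SOURCES = re.compile(r"\$(\w+)\s*=\s*\$_(?:GET|POST)\[")
SINKS = ("system", "exec", "include", "eval")

def naive_sca(php_source):
    tainted = set(SOURCES.findall(php_source))
    findings = []
    for sink in SINKS:
        for var in re.findall(r"%s\s*\(\s*\$(\w+)" % sink, php_source):
            if var in tainted:
                findings.append(f"{sink}(${var})")
    return findings

code = '<?php $cmd = $_GET["c"]; $safe = "ls"; system($cmd); system($safe); ?>'
print(naive_sca(code))  # only the tainted call is flagged: ['system($cmd)']
```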

One word : WOW.  (and no, this has nothing to do with World Of Warcraft)

Okay, the SCA module needs a lot of work and may not even work on real life applications yet, Andrés agrees, but I think this is definitely a positive evolution, and it might be what it takes for w3af to continue to move towards the top of the list when picking web application assessment tools.

The fun is not over yet. The presentation goes on with a demo of using exec() capabilities. Not only does w3af offer integration with Metasploit (it can create binary payloads on the fly, upload them and execute them – think : reverse Meterpreter + listener), it also offers the possibility to use a w3af_agent.

This agent will allow you to connect to other ports on the webserver – ports that used to be unreachable because of ingress filtering. If you can execute the w3af agent payload on the destination machine, you can use proxychains on the attacker machine to connect to any port on the remote machine if required.


Finally, Andrés explained some new side projects that might help with gathering information, based on syscall hooking. Using ptrace(), the idea is to hook into a real process, catch (for example) “read” calls, and dump the contents onto the attacker machine.  While this sounds very promising, Andrés stresses that it is still a work in progress and needs a lot of work.

Impressive talk.  I will definitely play more with w3af after seeing this presentation.  And if anyone out there has web app security knowledge, some great ideas, python coding skills and some spare time… this is definitely one of the projects that deserves more community support than it already has.

Hat tip.


[Application Dissection] – SAP : Session (Fixation) Attacks and Protections (in Web Applications) / Raul Siles

You know what they say about lunch. It’s a presentation/meeting killer.  Usually, the first presentation or meeting after lunch ends up being slow and less effective. Raul Siles from Taddong really understands this and found a way to overcome it.  I guess the combination of having an interesting topic, maintaining a good pace, and being able to explain things properly are the elements that made the difference in his case.
Session fixation issues finally received the attention they deserve, Raul says.  OWASP bumped this type of issue up to nr 3 on the list, so it’s clear that this type of vulnerability is now widely recognized as dangerous.

So, what is the deal about session fixation issues ?

One of the main challenges with web based applications that require some sort of authentication and implement access controls is to make sure the right person is authenticated, and that access is granted only to that person.  In order to do so, the application needs to contain a strong session management layer.

HTTP, however, is a stateless protocol, and a session is usually made up of a long sequence of http(s) requests & responses.  This means that session management has to be added on top of the transport layer (in order to deal with the fact that http is stateless).

A common way to uniquely identify a session is to use a so-called session ID.  There’s nothing wrong with the concept, but it’s prone to vulnerabilities if the application doesn’t implement it properly, or if the webserver doesn’t handle it correctly.

A session fixation issue occurs when you can use the ID of an authenticated user to impersonate that user.  The easiest way to do this is when you are able to take your own ID (an ID you received during the pre-authentication stage of accessing the web application) and enforce the use of that ID onto the victim user. You could trick the user into clicking a link that contains your ID, or you could use MITM techniques to insert the ID into the session.

As soon as the user authenticates, and if the app has a session fixation issue, you can simply become that user by using that ID in your own session.
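The whole attack fits in a few lines when simulated against a toy session store (a generic sketch, not modeled on any of the products discussed in the talk):

```python
import secrets

sessions = {}   # session_id -> {"user": ...}

def visit(session_id=None):
    """Pre-auth request: the vulnerable app accepts an attacker-chosen ID."""
    sid = session_id or secrets.token_hex(16)
    sessions.setdefault(sid, {"user": None})
    return sid

def login_vulnerable(sid, user):
    sessions[sid]["user"] = user        # BUG: the ID is not regenerated

# 1. attacker obtains a pre-auth ID and fixes it onto the victim (link/MITM)
fixed = visit()
# 2. victim authenticates using the attacker-known ID
login_vulnerable(visit(fixed), "victim")
# 3. attacker replays the same ID and is now the victim
print(sessions[fixed]["user"])  # victim
```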

Raul explained where IDs are typically stored (header, URL parameter, URL (in case of URL rewriting), GET or POST arguments, hidden form fields, etc). Unfortunately he did not go into detail on how to audit web apps for session fixation issues, which might have been added value to the talk.

In order to underline the fact that this type of vulnerability is real and important, he showcased 3 case studies (Joomla, J2EE and SAP) and explained what exactly caused each issue to exist and how the respective developers/vendors reacted to his vulnerability reporting & disclosure process.

In addition to that, he explained how the various issues were classified and fixed by those developers. Some of them were classified as “configuration issues”, others were fixed by making substantial changes to the code base and adding new features to the option sets (which, sadly enough, still remain disabled after applying the update packages).

What was really interesting is that, especially in the case of SAP, issues like this may take a while to get patched (mainly due to the 7+2 year support policy at SAP), leaving the business critical applications vulnerable.

Conclusions from this talk are :

– if you do session management yourself, make sure to enforce https and regenerate the ID after authentication

– If you rely on the web engine to do session management, make sure it works the way it should.
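The first conclusion translates into a one-line fix at login time; a minimal sketch (the store and function names are hypothetical):

```python
import secrets

# Whatever ID the attacker managed to fix before login:
sessions = {"attacker-fixed-id": {"user": None}}

def login_safe(old_sid, user):
    """Regenerate the session ID at authentication time, so any ID the
    attacker fixed pre-auth becomes worthless."""
    sessions.pop(old_sid, None)          # invalidate the pre-auth session
    new_sid = secrets.token_hex(16)      # fresh, unpredictable ID
    sessions[new_sid] = {"user": user}
    return new_sid

new_sid = login_safe("attacker-fixed-id", "victim")
print("attacker-fixed-id" in sessions)  # False : the fixed ID no longer works
print(new_sid != "attacker-fixed-id")   # True
```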


[Core Attacks] – Exploitation in the Modern Era (a.k.a. “The Blueprint”) / Chris Valasek & Ryan Smith

Last talk of the day, by Chris Valasek and Ryan Smith (both working at Accuvant Labs).

Chris & Ryan started the talk by announcing that they retitled & updated the talk shortly before the BlackHat briefings, basically rebranding it to “The Blueprint”.

In short, they explained that modern exploitation is no longer something that is carved in stone, looks generic, or can be automated. As an exploit developer, you really need to have solid knowledge of hardware, operating systems, memory management, etc.

As mitigation technology matures (ASLR, DEP, SEHOP, SAFESEH) and sandboxes are put in place, we can safely state that being able to exploit something that bypasses most of those mitigation techniques is no longer a luxury, but a requirement.

Exploitation will become increasingly harder, and they introduced the concept of defining “primitives” to assist with building exploits for current systems.  After seeing their explanation of how they built the exploits for 2 bugs, it’s clear that a structured approach, combined with proper knowledge and lots of time & dedication, is going to be key to success in current and future exploitation.

In essence, the idea is that we should no longer look at a buffer overflow payload as one component, but rather take everything apart (what caused the issue, what are my options to own eip, what are the options to put shellcode, how can I reach shellcode, how can I bypass DEP, how can I overcome ASLR, what are the options to make the exploit reliable, etc).

Each and every one of those parameters / components / elements is a primitive. Even the vulnerability itself is a primitive (because it can be used as a component, as input into a bigger picture). In order to reliably exploit vulnerabilities, one should no longer focus on the crash and jump directly into the exploiting process; we should try to document everything first, build a knowledge base (or re-use data that is already in the knowledge base), and only start putting pieces together when all info has been gathered.
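Purely as an illustration of “document first, exploit later” (the structure and field names below are my own assumption, not the presenters’ actual schema), a primitive knowledge base could start as simply as:

```python
from dataclasses import dataclass, field

@dataclass
class Primitive:
    name: str
    category: str            # e.g. "control eip", "bypass DEP", "info leak"
    targets: list = field(default_factory=list)

kb = []                      # shared knowledge base

def register(p):
    kb.append(p)

def find(category):
    """Re-use: query stored primitives instead of rediscovering them."""
    return [p for p in kb if p.category == category]

register(Primitive("vtable pointer overwrite", "control eip", ["app A"]))
register(Primitive("generic ROP gadget chain", "bypass DEP", ["msvcr71.dll"]))
print([p.name for p in find("bypass DEP")])  # ['generic ROP gadget chain']
```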

Using this approach has a couple of advantages :

1. definitions : if you gather and classify primitives, you can use the same language and definitions.

2. it allows for better teamwork. Building exploits will no longer be an individual task, but might require teamwork. You need to be able to share info, to gather and store info in a way that avoids various people having to work on the same topic at the same time

3. if you analyze before you exploit, you might even find an easier bug to exploit.

4. Re-usability : if you find & store primitives in a structured way, they might even be re-usable.  A certain ROP routine, for example, might be generic enough to use across exploits.

5. community & contributions : if you set standards (in terms of definitions & what to look for), you can effectively get input from the community, use research/documentation/techniques from others, and see if something generic can be extracted from it.

Although this talk was a bit technical, and way above my league when they explained the inner workings of some of their heap exploits, I really enjoyed it and believe their concept and proposal have a lot of value.  I’m looking forward to seeing what they have gathered so far in terms of definitions / primitive structures, and how creative & smart people can contribute to this.

Chris & Ryan finished their talk by making a couple of predictions :

– More primitives are needed for Normalization + ASLR + DEP + Sandbox bypass

– Primitives will be limited in number, but will be around forever (humans will continue to make mistakes)

– An increasing number of people will be writing quality exploits. What started with only a few people is growing into a community.

If that community can work together, everybody can benefit from this.

(Chris, me, Ryan)



I am obviously not…

… the only one who is publishing write-ups about BlackHat.

Check out xme’s blog for his write-up of day 1 :


C ya tomorrow !

© 2011, Peter Van Eeckhoutte (corelanc0d3r). All rights reserved.
