Thursday, November 1, 2012

Android Hacked in Ethiopia

Now this is a lede:

"What happens if you give a thousand Motorola Zoom tablet PCs to Ethiopian kids who have never even seen a printed word? Within five months, they'll start teaching themselves English while circumventing the security on your OS to customize settings and activate disabled hardware."

Michael Howard said something years back that stuck with me: programming is human against compiler, which is much easier than security, which is human against human.

Of course, in this case it's not a classic security failure of a malicious threat against an asset; in fact, the overall story is quite a triumph of human ingenuity:

"We left the boxes in the village. Closed. Taped shut. No instruction, no human being. I thought, the kids will play with the boxes! Within four minutes, one kid not only opened the box, but found the on/off switch. He'd never seen an on/off switch. He powered it up. Within five days, they were using 47 apps per child per day. Within two weeks, they were singing ABC songs [in English] in the village. And within five months, they had hacked Android. Some idiot in our organization or in the Media Lab had disabled the camera! And they figured out it had a camera, and they hacked Android."

What it does show from a security perspective, though, is the limit of what we can reasonably expect from any access control. Humans with time and determination will find their way around it. Whatever you base your access control scheme on (TLS, Kerberos, SAML, ...), you have to assume it will eventually fail (and, unlike in this story, probably not in a good way) and factor in how the system as a whole survives.

Tuesday, October 16, 2012

You're not counting on your app store, are you?

Today's mobile app stores, like Apple's App Store (via iTunes), review the software in their stores before the public can download it. That curation process, however, is not without its limitations, and as software developers we must never rely on it to spot security defects in our apps.

Much has been said about Apple's own App Store, both good and bad. Whatever your preference, their App Store undoubtedly has the most rigorous app review process in the mobile app store business, such as it is. Developers are required to conform to Apple's guidelines in order for their apps to be approved and become available for consumers to purchase.

But even that rigorous review is not in any way intended to be a security review of your apps. Make no mistake about it, Apple is not in the business of ensuring your app is secure.

So then, what do they do? Let's explore a bit -- with the understanding that I have no inside knowledge at Apple, and I'm basing this on my observations and readings.


  • Stability. They verify the app loads and runs as described.
  • Functionality. Does the app perform the advertised functionality?
  • Play by the rules. Does the app conform to Apple's published API standards? More to the point, is your app using any unpublished APIs? That is perhaps the biggest no-no on the App Store.
  • Policies. Does the app conform to Apple's policies (good, bad, or otherwise)?
Now, I admit that the above is probably a gross over-simplification of what they actually do. I'd expect they load the app in a controlled test environment. I'd expect they run the app using some profilers and such to look for memory leaks and that sort of implementation faux pas.

But, by and large, if your app conforms to their published APIs and their policies, it's good to go.

OK then, so what sort of things would that process miss? From a security standpoint, pretty much everything. Some of the biggest shortcomings that I would never expect Apple (or others) to find in their review process include:

  • Local storage of sensitive data. As long as your app uses published APIs for file input/output, you can store whatever you want to, however you want to. Want to put your users' credentials into a plaintext SQLite database? No problem. (See the keychain sketch after this list for a safer option.)
  • Secure communications. Again, use published APIs (e.g., NSURL) and the app will fly right on through the review process, irrespective of whether you use SSL to encrypt the network data. Want to send your users' username/password credentials in a JSON bundle to your server's RESTful interface, without any encryption at all? No problem.
  • Authentication to back-end services. Published APIs, blah blah blah... Want to authenticate your users against a locally stored username and hashed password? No problem.
  • Session management on back-end services. Want to use easily guessable, sequential numbers for your users' sessions? No problem.
  • Data input validation. Want to allow untrusted data in and out of your app, without ensuring they're safe from SQL injection, Cross-site scripting, etc? No problem.
  • Data output encoding/escaping. Want to pull some data from a database and send it straight to a UIWebView without encoding it for the output context? No problem.
That list can go on for a long time. Apart from these shortcomings, the app review process is just fine. :-)
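As an aside on that first item: iOS already offers a better place for credentials than a plaintext SQLite file, namely the keychain. Here's a minimal sketch, assuming hypothetical username and password strings and a made-up service name; a real app would also handle updates and duplicate items:

    #import <Security/Security.h>

    // "com.example.myapp" and the username/password variables are placeholders.
    NSDictionary *item = @{
        (__bridge id)kSecClass:       (__bridge id)kSecClassGenericPassword,
        (__bridge id)kSecAttrService: @"com.example.myapp",
        (__bridge id)kSecAttrAccount: username,
        (__bridge id)kSecValueData:   [password dataUsingEncoding:NSUTF8StringEncoding],
        // Readable only while the device is unlocked, and never migrated to another device.
        (__bridge id)kSecAttrAccessible: (__bridge id)kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    };
    OSStatus status = SecItemAdd((__bridge CFDictionaryRef)item, NULL);
    if (status != errSecSuccess) {
        NSLog(@"Keychain add failed: %d", (int)status);
    }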

In Apple's defense, reviewing an app for the sorts of things I've listed here takes a high level of knowledge of the app itself, the business function it provides, the sorts of data it handles, and so on. These are things that cannot be done by, and should not be expected of, a team with no knowledge of the app, such as an app store review team.

No, reviewing an app for common shortcomings like these must be done by someone with deep knowledge of the app. That should happen within the app development team, perhaps with support from an external team that performs some rigorous security testing.

No matter how it's done, the reviewers simply must understand the app and its business. Without that knowledge, no review can be adequate.

We'll discuss ideas for how to do reviews like these -- and prevent the security flaws in the first place -- at our Mobile App Sec Triathlon in San Jose, California on 5-7 November. Join us and let's discuss.

Cheers,

Ken van Wyk

Tuesday, October 9, 2012

Mobile Brings a New Dimension to the Enterprise Risk Equation

In yesterday's blog we looked at Technical Debt, and how it is infosec's habit to lag behind technology innovation. In the big picture, this approach worked reasonably well on the Web: early web security was pretty poor, but early websites were mainly proofs of concept and brochureware. As the value of the websites increased, infosec was mostly able to get just enough of the job done, playing catchup for a whole decade.

But this catchup approach does not work in Mobile. The first apps are not brochureware; they are financial transactions, medical decision-making tools, and real dollars flowing through the apps on day zero. That's 180 degrees different from how the Web evolved: with the Web we waded in the shallow end for years, while with Mobile we are diving off the high dive with version 1.0.

This risk profile should embolden infosec teams to get active far earlier in the process and to be more prescriptive. But it does not stop there; the nature of the engagement has changed as well. Case in point:

The personal data of about 760,000 people was temporarily leaked onto the Internet through an address book application service for smartphones, information security company NetAgent Co. reported.

The Tokyo Metropolitan Police Department is set to launch an investigation after being informed of the case Saturday by Tokyo-based NetAgent. The application developer said the data leaked online has been deleted.

The latest version of the application, Zenkoku Denwacho (Nationwide Address Book), has been distributed for Google Inc.'s Android operating system for free since mid-September. It enables users to search information listed in a major address book developed by Nippon Telegraph and Telephone Corp., according to NetAgent.

But the application is also designed to send personal data stored in smartphone users' address books, including names and phone numbers, to a rental server.

Such information temporarily became available through the Internet mainly to users of the application, which at least 3,300 people are estimated to have downloaded.

Here we see another dimension to the risk equation for Mobile that enterprises have little experience facing: they are not just providing a browser front end, they are shipping code (apps) to users. The enterprise security team no longer needs to care only about the site working on Firefox, IE, and Chrome. They need to care about a whole array of platform- and device-specific security considerations: ensuring the application does not introduce vulnerabilities, or inadvertently steal or leak data, location, addresses, and more. And it's all specific to each Mobile OS.

Because Mobile is a Balkanized environment, platform-specific security architecture and guidance are required to get the job done. This means more up-front work, but it's essential to avoid mistakes like apps that leak data or provide entry points for attackers to the Mobile app and data (bad) or the enterprise gateway and backend (worse).

It's time for Infosec to step up
Patch and pray is not good enough. Enterprise security teams must roll up their sleeves and do the work required to support security services for iOS and Android apps, data, and identity. Nothing is perfect, but there are absolutely better and worse ways to implement here, and Infosec *should* play a leading role, as the grown-up, in practically navigating these choices.

Take a hard look at the Use Cases your company is going Mobile with. This isn't beta brochureware; this is real data, real transactions, real identity, real risk, and real new technology. Now is the time for Infosec to get smart on iOS and Android, and build security in.

**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

Monday, October 8, 2012

Line in the Sand on Subprime Security - Mobile Apps Can't Afford to Take on Technical Debt

If there is one thing that's crystal clear in Infosec, it's that Infosec lags software innovation. It's a field where we are always playing catch up, and the important question tends to be: how fast can we catch up?

Because innovation outpaces security, Infosec has been a passive bystander, shuffling debt issuances around like someone processing subprime mortgages and rating them Triple A when the first payment cannot even be made. The industry ships apps every day with substandard access control that does not reliably authenticate or authorize users, much less deal with actively malicious actors.

Technical debt measures the necessary work that does not get shipped in a release. Taking on too much debt is like borrowing too much money: it might work, but once things begin to go against you it's hard to recover, because you are not in a position of strength. As Warren Buffett says, "You don't know who is swimming naked until the tide goes out."

It's important to note that technical debt for security is not a passive thing; there are people actively looking to find and exploit your technical debt.

As of now, the Information Security Technical Debt Clock (appropriately implemented in JavaScript) shows 17 years (or 6,517 days) since the internet's foundational security architecture of network firewalls and SSL was deployed. Since then we've been waiting for identity, authentication, authorization, and logging standards (de facto or otherwise).
The reason why playing catch up is not good enough in Mobile is one that will be familiar to my clients - the Mobile Use Cases are too important to screw up.

The security industry skated by for the whole history of the Web on a security architecture past its sell-by date, but at first it did not matter. Go way back to the mid 90s: what kind of apps were being deployed? Mostly brochureware. It took years to get to dynamic, data-driven sites, and then years to get to profitable, transactional sites (pets.com anyone?). Point being, the early Web was cool as hell, but it was a giant science project followed by a hype bubble. The fact that Infosec did not move quickly enough to deal with the security issues was too bad, but at the same time not a systemic failure, because the early Web Use Cases were low-risk brochureware.

Most companies just dipped their toe in the water, and security incrementally figured out how to deal with SQL Injection, XSS, and so on in an iterative process. But there was time to do this in most cases.

Mobile is different

The first generation Mobile Use Cases are most certainly not dipping toes in the water; they are diving in head first (and perhaps a lifeguard may not be present)! Doctors with iPads, brokerage applications, and pretty much the whole remote work force pinging your mainframe from who knows where. This has the makings of a bad cycle of events for security. Infosec is used to playing catch up, because the technology moves fast but the business takes a while to roll things out. Not in Mobile: the backend hooks are largely already there, so teams just need to find the right Web services to call, write an iOS and Android front end, and dive right into the deep end.

Wait and see is not good enough anymore; Infosec needs to act now and get in front of the Mobile security issues. Take a hard look at the Use Cases you are deploying on Mobile: this is 1.0 technology running High Risk Use Cases, and your Mobile security architecture and implementation cannot be patch and pray.

Mind the Gap: compare the risk level of what's being deployed to the robustness and assurance of your mobile security. It's time to invest in learning how to build a more resilient security foundation.
**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

Is SSL adequate for your iOS app?

How do you secure your iOS apps' network connections? Is it sufficient to simply use HTTPS in an NSURL object? The short answer is it depends. For sure, some recent attacks have eroded the trust we can place in the venerable SSL or TLS standards, but we're also not ready to just throw them out either.

It turns out you actually have numerous options available, depending on your app's needs. For starters, if you're using NSURL to make your HTTP connections, you can quickly change to an SSL-encrypted network session by simply changing the URL scheme to https://...
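Here's a minimal sketch of that change, assuming a hypothetical api.example.com endpoint and using the NSURLConnection block API available since iOS 5:

    // Hypothetical endpoint; the only change from plain HTTP is the https:// scheme.
    NSURL *url = [NSURL URLWithString:@"https://api.example.com/login"];
    NSURLRequest *request = [NSURLRequest requestWithURL:url];

    [NSURLConnection sendAsynchronousRequest:request
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
        if (error != nil) {
            // TLS handshake and certificate validation failures surface here as errors.
            NSLog(@"Request failed: %@", error);
            return;
        }
        // ... handle the response ...
    }];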

That should provide you with basic protection against eavesdropping (and a few other network nasties) quickly and easily. But is it enough?

To understand that, let's consider a bit more what actually happens in an SSL-protected network socket. When iOS (via NSURL) sees an SSL certificate from a server, it verifies two things: 1) is the certificate signed by a root certificate authority (CA) that iOS trusts, and 2) does the server's name (via DNS) match the name presented in the SSL certificate? Now, that combination might sound adequate, but there is actually a loophole or two remaining.

A sufficiently resourced adversary with access to the soft underbelly of DNS, as well as access to one or more CAs, can still trick your app into accepting a connection you wouldn't otherwise want. In an extreme case, this could result from a successful man-in-the-middle (MITM) attack.

So, for some cases, we might want to do more than just SSL. We might, for example, want to verify that a specific CA signed the server key. We might want to check that it is precisely the right server key that has been presented, using "certificate pinning".

Each of these things is achievable, but they require us to dig deeper than just using NSURL with an HTTPS URL. They also have pros and cons -- cert pinning, for example, isn't going to work with many of the network proxies that a lot of corporate LANs require.
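To make the idea concrete, here's a minimal certificate pinning sketch using the NSURLConnection delegate. It assumes a hypothetical pinned.der copy of the server certificate bundled with the app; a production implementation would also evaluate the trust chain rather than only comparing certificate bytes:

    - (void)connection:(NSURLConnection *)connection
    willSendRequestForAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
    {
        if (![challenge.protectionSpace.authenticationMethod
                isEqualToString:NSURLAuthenticationMethodServerTrust]) {
            [challenge.sender performDefaultHandlingForAuthenticationChallenge:challenge];
            return;
        }

        SecTrustRef trust = challenge.protectionSpace.serverTrust;
        SecCertificateRef serverCert = SecTrustGetCertificateAtIndex(trust, 0);
        NSData *serverCertData = (__bridge_transfer NSData *)SecCertificateCopyData(serverCert);

        // "pinned.der" is a hypothetical copy of the expected server certificate,
        // added to the app bundle at build time.
        NSString *pinnedPath = [[NSBundle mainBundle] pathForResource:@"pinned" ofType:@"der"];
        NSData *pinnedCertData = [NSData dataWithContentsOfFile:pinnedPath];

        if (pinnedCertData != nil && [serverCertData isEqualToData:pinnedCertData]) {
            [challenge.sender useCredential:[NSURLCredential credentialForTrust:trust]
                 forAuthenticationChallenge:challenge];
        } else {
            // The presented certificate is not the one we expect: refuse the connection.
            [challenge.sender cancelAuthenticationChallenge:challenge];
        }
    }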

We'll discuss these options, along with code examples, at our Mobile App Sec Triathlon in November. Hope to see you there, ready to discuss so you can best understand what course of action is best for your app and its users.

Cheers,

Ken


Thursday, October 4, 2012

What's In your Android Security Toolkit, Part 4

This is the fourth in a series of posts focused on building an Android security toolkit. So far we have looked at access control services and defensive coding, which are necessary for the Mobile app, but no Mobile app is an island.

Mobile apps can have lots of communication channels, such as SMS, NFC, and GPS. If used, each of these presents the enterprise with a new set of challenges to deal with: protocols and threat models that the enterprise security team likely has not worked with in depth before.

On top of that, the Mobile app usually needs to connect back to the enterprise or Cloud via Web Services. Many enterprise mobile projects begin by saying something like "we have web apps and we have web services, this is nothing more than sticking that little sucker (the mobile client) on as a new front end and we are done." Thinking that a mobile app is no different from supporting, say, Firefox is to miss the core of mobile. I have seen this repeatedly, and it leads you down the wrong path.

Some of the differences: mobile devices are not connected per session (like a web app); they are occasionally connected, and those connections drop. This leads to caching and other usability enhancers. You can expect that a mobile middle tier (not just another front end on existing portals) will be required to manage optimizations and to resolve sessions, caches, and routes. On top of that, the enterprise is in the position of delivering not just data but code to the device. It's no longer a case of riding the rails of Chrome, IE, or Firefox. The enterprise is now in the business of packaging, deploying, and testing client software.

The communication between the Mobile app and the Mobile Web service requires layers of protection. Even the basics here, like access control, are fraught with challenges.



To navigate the Venn diagram of Mobile Security, look outside the device. How will the device be managed? How is access controlled when calling the Web services? What identity is used? How are the Web services protected? How is the call authorized on the server side? These services are crucial to enabling the mobile app to work in a real enterprise deployment. The requirements are not all platform specific, but they all create platform-specific requirements for the Android developer to deal with. Think end to end.

**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

Friday, September 28, 2012

How do you think they'll attack your iOS app?

Write an app of any intrinsic value (either in user data, transactions, or whatever), and someone is going to attack it. It's 2012, after all, and I'm sure no one reading this will in any way be surprised to hear that there are computer miscreants out there who are going to attack apps of value.

The thing is, though, it's been my experience that many times the very people who write the apps themselves fail to sufficiently understand and internalize just how the attacks will happen. Sure, we've all read about various hacks, but to many people, those are nothing more than abstract thoughts. When it comes to understanding how someone will attack your work, however, it becomes real. So let's consider that a bit.

Which of the following do you think are likely to happen to your app:

  • Do you think your attacker will install and run your app to try to learn how it works? Sure, that's a given, right?
  • Do you think your attacker will work his/her way through all the views and data fields in your app, and perhaps try various dictionaries of the big bad boys (e.g., SQL Injection, Cross-Site Scripting (XSS)) and so on? Sure, that too is a given, right? Even if those attacks aren't in any way relevant to the technologies in your app, they're going to try them anyway.
  • Do you think your attacker will look through all the files in your app's sandbox (e.g., its ~/Documents folder), looking for potentially damning information like a user ID, password, or session token in a .plist file? Yup. Plenty of tools make that one real easy too.
  • Do you think your attacker will configure a network proxy to intercept all of your app's communications to/from its server(s), looking for login credentials, session tokens, etc.? Oh yeah, still well in the realm of feasibility here.
  • Do you think your attacker will use that same network proxy to try to get your app to connect to a server that he configures -- perhaps with a self-signed SSL certificate, or with a signed certificate where the root CA has been installed as a profile on the attacker's iOS device? Ruh roh! (Now I'm starting to hear "They can do that?!")
  • Do you think your attacker will examine your app's executable file, doing surface analysis of it to look for strings, symbols, and other telltale info in the binary executable itself? Of course. But executables in the App Store are encrypted, you say? On a jailbroken device, an attacker can use a debugger to access the unencrypted executable commands just fine. (They have to execute, after all...)
  • For that matter, do you think an attacker will load your app on a jailbroken device, put it into a debugger, and single step through it, looking for crypto keys and other sensitive data? Ruh roh, indeed!
  • Do you think an attacker will try to tamper with the Objective C runtime by intercepting messages to/from the various objects in your app? 
  • Do you think an attacker will attempt to inject messages into your app in that debugging session, and get your app to misbehave?
That list can continue on and on for quite some time. I wrote it in order of increasing attack complexity and difficulty, but every one of these things is achievable today using tools and techniques available to any attacker. This list isn't science fiction or "Hollywood" in any way.

The question you should be asking is whether an attacker would go to this level of difficulty to attack your application. Well, that depends on the potential gain and the likelihood of being caught, among other things. 

And on the point of getting caught, your attacker has all the advantages and you all the disadvantages. Every one of these attacks can be done in the safety and comfort of the attacker's "laboratory", with pretty much zero chance of being caught.

What you're left with, then, is the potential gain to the attacker, and that's not something I can answer for you.

What can you do about it? I'll address that in Part 2 of this blog entry within the next few days. And, of course, Gunnar (@OneRaindrop) and I (@KRvW) will be talking about issues like this at our Mobile App Security Triathlon in November.

Cheers,

Ken van Wyk

Thursday, September 27, 2012

OAuth 2.0 - Google Learns to Crawl

Good news - Google is shipping OAuth 2.0 tools via Google Play. I wish this had happened years ago, when the Android platform shipped, but it's good it's happening now.

OAuth 2.0 is not perfect from a security perspective, but as Tim Bray says, this is Pretty Good Security meets Pretty Good Usability. Makes sense to me - we have to stop using passwords, and we have to do so in a way that won't have developers rioting in the streets and burning cars. But why be happy about shipping something that has a 70-page threat model in its wake? This dev comment from the blog announcement says it all: "After implementing my own authentication for my app, I really would have appreciated something like this!"

Point is, "Out of the crooked timber of humanity no straight thing was ever made." This is forward progress, because custom access control implementations will definitely be worse - and yes, I have seen this many times.

So yes, it's progress. Why did it take so long? Who knows. But here we are.

It's helpful to track evolution through a Crawl - Walk - Run maturity curve.

From where I sit, Crawl has been achieved with this release - a standard way to register your app, get a token, and use it, plus many future apps that do not rely on passwords. But what about Walking and Running?

Walking should be about not just using a standard protocol as an improvement over ad hoc access control, but also using the protocol safely. It's an access control protocol, after all; its failure modes are ugly and have consequences for users and platforms. A chainsaw is great for cutting timber, and it's also an excellent way to cut off your own limb(s). Use of a safer protocol is desirable, but guidance on safe use is required to get full value. This release is not quite there yet. OAuth tokens, like anything else, have vulnerabilities large and small, but in removing crypto and signature functions the implementation increases its reliance on TLS for security. Fair enough for many apps, but there is no way to discern this from the documentation, SDK, and APIs. The OAuth 2.0 protocol, by itself without TLS, is not good enough.

"The sign above the players' entrance to the field at Notre Dame reads 'Play Like a Champion Today.' I sometimes joke that the sign at Nebraska reads 'Remember Your Helmet.'  Charlie and I are 'Remember Your Helmet' kind of guys. We like to keep it simple."- Warren Buffett

OAuth 2.0 should be shipped with a 'Remember TLS' reminder stapled to each and every release. Otherwise, numerous threats are in play. OAuth 2.0 with TLS meets the Pretty Good Security bar for many apps; without TLS, it's playing without a helmet.

Further, both the client and server side developers have some work to do to avoid shooting themselves in the foot with the protocol. For example, the client developer may not realize the sensitive nature of the token and how best to protect its storage. The server side developer deals with a myriad of concerns like session management, linking the token to access control, replay, and others that in most or all cases mirror the issues in most webapp security. Here we face two challenges, though: developers are not trained up on security protocols, and so miss a lot of the subtleties and nuance in deploying them; and infosec blithely assuming a silver bullet - "this all-singing, all-dancing protocol solves my problem" - is all too common. I am not saying Google is fomenting either of these, but I see them in the trenches every single day. I would prefer to see Google include a short and sweet Security Checklist to make sure people remember their helmets. They do not have to reinvent the whole Threat Model, but guidelines for safe use would get this a long way towards Walking, in my view.

The worst security posture is not being insecure - all systems have vulnerabilities - the worst security posture is to assume you are secure when in fact you are not. Here the current implementation is lacking, and tailored guidance and/or checklists from the client and server side developers' perspective, explaining what the protocol is doing and what it is not doing, would be very useful. I know this just shipped, but this gap should be closed soon. As a group, developers across the globe have had zero training in secure coding. When I go in to train a dev team on secure coding, even those with decades of programming experience, I am likely teaching them their first day of secure coding. You cannot expect them, even good developers, to know all the right things to do, and to pick up on the subtleties at work in implementing security protocols. I am all for finding the balance between Pretty Good Security and Pretty Good Usability - that's a worthy goal - but the dots need to be connected. There's a world of difference between https://sites.google.com and http://myappisowned.com. Google's Android team should help to close these gaps out and clearly state what can and should be done to foster safe use of OAuth 2.0.

Implementing security protocols is a new proposition for most developers; they were never trained, but back in the day it never mattered - the container or server did it for them, and the threat was not high. Neither of these is the case today. This stuff matters. We could easily do a "how to break Android" class and get the security people all fired up to attend, but what would that really solve? We need to start building better stuff, and we need developers in the game to make progress. This is Why We Train. OAuth 2.0 and TLS can improve the security in most mobile apps; implemented wrong, they can also make it worse. There are design and implementation things to consider on the way from Crawling to Walking, but developers need to know what they are to make it happen - we tackle these on Day One of the Mobile AppSec Triathlon.
**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

How do you protect your users' sensitive data? -- iOS

What would you think of someone who spent [an enormous amount of money] and installed industrial/military grade locks throughout his house, and then put a spare key under the doormat in front of the house? And what then would you say about someone who spent [another enormous amount of money] and installed an alarm system, and then put the unlock code on a post-it note on the face of the alarm system control panel? I can imagine most people wouldn't think highly of such a foolish person. There may even be expletives involved...

So then, what would you think if I told you the file system and every file on an iOS solid state disk (NAND flash storage) is encrypted using hardware-based AES-256? Your first impression might be pretty favorable. However, the disk itself (and its HFS journal) is encrypted with one key (the EMF! key), and then the vast majority of files are encrypted with another key (the Dkey) -- and these two keys are stored on the NAND in plain sight in Block 1 (PLOG). D'oh!

No worries, you say -- your device is locked using a passcode. So there! Well, it turns out that an attacker with physical access to your device can put it into DFU mode (Device Firmware Upgrade -- although I admit I thought it meant something else when I first learned about it) and boot via USB cable using a RAMdisk, easily created on a Mac using Xcode. Once booted, the attacker can access and steal all those files that are encrypted using the Dkey.

To be fair, not every file on an iOS device is encrypted using the Dkey. Notable exceptions to this are the user's email, and any file created by an app where "Complete" file protection is used. The device's keychain database also isn't quite as easy to decrypt.
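Opting a file into Complete protection is essentially a one-line change when writing it out. Here's a minimal sketch, assuming sensitiveData is an NSData object your app produced and "notes.dat" is a hypothetical file name:

    // sensitiveData is an NSData object the app produced; the file name is a placeholder.
    NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                  NSUserDomainMask, YES) lastObject];
    NSString *path = [documentsDir stringByAppendingPathComponent:@"notes.dat"];

    NSError *error = nil;
    BOOL ok = [sensitiveData writeToFile:path
                                 options:NSDataWritingFileProtectionComplete
                                   error:&error];
    if (!ok) {
        NSLog(@"Protected write failed: %@", error);
    }
    // With Complete protection, the per-file key is wrapped with a key derived from the
    // passcode, so the file is unreadable while the device is locked.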

Most of those exceptions are encrypted using a key that is derived from the device's Unique IDentifier (UID) and the user's passcode. On the vast majority of consumers' iOS devices, that passcode is a 4-digit PIN. Not to worry, the good folks at Sogeti have provided us with a set of tools that can, among other things, brute force guess all 10,000 PINs and then decrypt most of the rest of the data.

Sounds pretty grim, doesn't it? For more reading on this, see Jonathan Zdziarski's excellent book "Hacking and Securing iOS Applications: Stealing Data, Hijacking Software, and How to Prevent It".

As a consumer, there are a few things you can do. As an enterprise, you can deploy a Mobile Device Management (MDM) solution and, among other things, enforce strong passcodes and such.

But, as a developer, you're not so lucky. As a developer, you cannot assume your customers will be smart enough to use a strong passcode. No, developers must assume the lowest common denominator in order to protect their customers' data adequately. And remember, the OWASP Mobile Security Project ranks a lost or stolen device as the number one risk faced by mobile consumers.

That means that for information exceeding simple consumer-grade sensitive data, you must not rely on Apple's built-in file protections to protect your customers' data. There are a few alternatives, of course. We can use the CCCrypt function in Apple's CommonCrypto library and drive all that crypto hardware ourselves -- and stay in control of the crypto keys ourselves. (See Apple's sample CryptoExercise code for an example of how to do this -- it's a bit dated, but you'll get the basics.) We can also use third party libraries like SQLCipher to create encrypted databases where, once again, we control the crypto keys ourselves.
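As a rough illustration of the CommonCrypto route, here's a minimal AES-256 encryption sketch. It assumes plainText, key (32 bytes), and iv (16 bytes) are NSData objects the app already manages -- and managing that key well is exactly the hard part discussed next:

    #import <CommonCrypto/CommonCryptor.h>

    // plainText, key (kCCKeySizeAES256 = 32 bytes), and iv (16 bytes) are assumed NSData objects.
    size_t outLength = 0;
    NSMutableData *cipherText =
        [NSMutableData dataWithLength:plainText.length + kCCBlockSizeAES128];

    CCCryptorStatus status = CCCrypt(kCCEncrypt,
                                     kCCAlgorithmAES128,       // AES (128-bit block size)
                                     kCCOptionPKCS7Padding,
                                     key.bytes, kCCKeySizeAES256,
                                     iv.bytes,
                                     plainText.bytes, plainText.length,
                                     cipherText.mutableBytes, cipherText.length,
                                     &outLength);
    if (status == kCCSuccess) {
        cipherText.length = outLength;   // trim to the actual ciphertext length
    }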

The common denominator in both of these approaches is control of the crypto keys. It's also the toughest (by far) problem to solve in using cryptography securely.

We'll be discussing these options, of course, at our upcoming Mobile App Sec Triathlon in San Jose, California, on 5-7 November. We hope you'll come join us, and let's discuss different approaches to tackling this enormously important and difficult problem.

Cheers,

Ken van Wyk


Wednesday, September 26, 2012

What's in Your Android Security Toolkit, Part 3

In the last two posts, we explored what goes into building an Android Security Toolkit: tools that developers can apply to minimize the number of vulnerabilities in their Android app and, because no app is perfect, to lessen the impact of those that remain.

So far we have focused on access control, which helps to establish the "rules of the game": authentication and authorization control who is allowed to use the app and what they are allowed to do. If you read the Android security documentation, access control concepts dominate, but this is only part of the security story. Access control enforces the rules for customers, employees, and users who are effectively trying to get work done; however, access control does little to mitigate threats from people deliberately trying to break the system.

It pays dividends to learn and apply access control services, because a vulnerability here will cascade across the system and be available to attackers as well, but it pays to go further than just access control in your mobile security design and development. I usually describe this situation as follows: I would bet a lot of money that I can beat both Garry Kasparov and Michael Jordan in a game. The way I would do this, of course, is to play Kasparov at basketball and Jordan at chess.

This is what attackers do: they change the rules of the game, or change the game entirely. So while access control gives us the According to Hoyle security rules that the app would like to play under, the attacker makes no such assumption; the asserted rules are the beginning of the game, not the end.
All security is built on assumptions, and when those fail so does the access control model. For example, as we discussed in the last blog, the Android access control policies are enforced in the kernel, so the assumption is that the kernel hasn't been directly or indirectly subverted.

So if an app cannot be secured by access control alone, what's an Android developer to do? The requirements for access control are fairly straightforward on first pass - who is allowed to use the app and what are they allowed to do? Sure, it gets more complex from there, but the start and even endgame are fairly clear.

What's the starting point (much less endgame) in defensive coding? Threat models like STRIDE make an excellent starting point for finding requirements: identify the key threats in the system and what countermeasures can be used to deal with them. STRIDE recommends, and I concur, that data flow analysis is a practical way to begin modeling your application to discover where threats and vulnerabilities lie.

From there, refining the model with App attack surface - data, communications, and application methods, plus Mobile specific attack surface - GPS, NFC, SMS, MMS - adds more detail to both identify vulnerabilities and locate countermeasures.

The mindset of the Defensive Coder is fundamentally different than the access control mindset. The Defensive coder assumes compromise attempts and possible success at each layer in the stack. This includes standard techniques such as input validation, output encoding, audit logging, integrity checking, and hardening Service interfaces applied to local data storage, query and update interfaces, interaction with Intents and Broadcasts. Not just publishing these resources for use, but factoring in how they may be misused. How is the app resilient to attempts to crash it, an attacker impersonating a legitimate user, a malicious app with backdoors running on the device, or attempts to steal or update data?

The Threat Model cannot answer all these questions completely but it does lead the development effort in the right direction to finding ways to build margins of safety into the app.


**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

Monday, September 24, 2012

APIs behaving badly -- iOS

Did you know there are several system-level information caches where sensitive data can hemorrhage from your iOS apps? That's right, even an otherwise well written app can leak user information out and into areas that attackers can get to if they can get their grubby mitts on the device. (But remember, OWASP's Mobile Security Project considers a lost/stolen device to present the highest risk to consumers -- and rightly so!)

Examples? Here are the biggies to look out for:

  • Screenshots. Every time you (or your app's users) press the home key while they are running your app, the default behavior in iOS causes a screenshot to be made and stored in plain view on the device.
  • Spell checker. In order for that nifty, sometimes annoying, and often funny spell checker to work, the system keeps a running cache of what you type, even if it happens to be fairly sensitive. (Some data fields, like passwords, are generally protected from this.)
  • Cut-n-paste. Anything you put in the cut-n-paste buffer is readily accessible from any app. Face it, the feature wouldn't be so useful if you couldn't move data around.
The bad news is that all of these things can result in leaked information. The good news is that they're under the app developer's control; a minimal sketch of two of the countermeasures follows below. We'll of course discuss solutions to these and other issues at our Mobile App Sec Triathlon, including a coding lab where you'll implement fixes to these problems.
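By way of illustration, here's a hedged sketch of handling the screenshot and pasteboard issues in the app delegate. The cover image name and view tag are placeholders, and the right behavior for your app may well differ:

    // A minimal sketch in the app delegate; "splash" and the tag value are placeholders.
    - (void)applicationDidEnterBackground:(UIApplication *)application
    {
        // Cover the UI so the automatic snapshot doesn't capture sensitive fields.
        UIImageView *cover = [[UIImageView alloc] initWithFrame:self.window.bounds];
        cover.image = [UIImage imageNamed:@"splash"];
        cover.tag = 9999;
        [self.window addSubview:cover];

        // Clear anything the app placed on the general pasteboard.
        [UIPasteboard generalPasteboard].items = @[];
    }

    - (void)applicationWillEnterForeground:(UIApplication *)application
    {
        [[self.window viewWithTag:9999] removeFromSuperview];
    }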

But also know that sometimes APIs don't behave entirely as we might expect. If you're a developer, no doubt you're duly shocked to hear this news, right?

Well, I encountered one such API inconsistency recently while working on the OWASP iGoat tool -- which we'll also use extensively at the #MobAppSecTri. Allow me to explain.

A few of us on the iGoat project have been working on a new exercise for iGoat. The exercise is supposed to illustrate the dangers of the keystroke cache used by the spell checker, as I explained above. Only, we've encountered some inconsistent behavior in how iOS treats this data.

In the new (not yet released) exercise, we're declaring a couple of text fields (UITextField) as follows:

     @property (nonatomic, retain) IBOutlet UITextField *subjectField;
     @property (nonatomic, retain) IBOutlet UITextField *messageField;

Next, in our implementation file, we're synthesizing those fields and setting them to not be cached (for spell checking) in our viewDidLoad method as follows:

    [subjectField setAutocorrectionType: UITextAutocorrectionTypeNo];
    [messageField setAutocorrectionType: UITextAutocorrectionTypeNo];

And, when we're finished with them, we're releasing both fields. All of this is as per Apple's API for UITextFields.

Now here's the strange part. When we run the exercise with both of these fields "protected", we find the first one (subjectField) is protected just fine, but the second one (messageField) shows up in the spell checker cache (located in ~/Library/Keyboard/dynamic-text.dat in the iPhone simulator).

Huh, that seemed odd. So, like any scientifically inclined geeks, we tried dozens of experiments to figure out why things were behaving this way. Eventually, we added a third field in exactly the same way as the first two. Sure enough, the first two fields are protected, but now the last (dummy) field goes into the cache.

Our next step, which we haven't yet done, is to test this on a hardware device, but my point here is pretty straightforward.

Sometimes APIs misbehave. And, there's a security lesson to be drawn from this. If we'd done a code review of this app, we may well have concluded that all was fine (with regard to this issue). But that wouldn't have been enough. It's also vital to test these security assumptions during the testing phase. 

This type of issue is ideally suited for dynamic validation testing. Take your security assumptions and dynamically observe and verify them in a test bed. That would have shown (and did, in our case!) that there's still a problem.

Adding a third (dummy) field resolves only the symptoms of this problem, not the problem itself. The jury is still out on that one, but we won't rest until we've resolved it, one way or the other.

Cheers,

Ken

Friday, September 21, 2012

An annotated bibliography of MobAppSec -- iOS Edition

In the past few months, we've seen the publication of several highly useful texts on different topics related to mobile app security. We thought we'd start a small annotated bibliography here to point to the really useful stuff. It's not intended to be comprehensive, but these are documents that we've found to be exceptionally useful. If you've found some that are not on this list, please feel free to submit them to us; if we agree, we'll add them to the bibliography.

So, here's our list for iOS. We'll be building an Android version shortly, and quite likely a General MobAppSec version as well.

iOS

"iOS Security", May 2012, Apple, Inc. -- Say whatever you want about Apple's security practices. This guide provides a superb description of iOS's security architecture, from its boot process through all of the app-level protections provided by current iOS versions. This is a must read for anyone involved in iOS application development.

"Hacking and Security iOS Applications - Stealing Data, Hacking Software, and How to Prevent It", January 2012, Jonathan Zdziarski. -- Although it is largely focused on forensic analysis of iOS devices, this book is another absolute must read for iOS developers. In it, you'll learn how jailbreaking works, how to copy the contents of an iOS device's hard drive, how iOS encryption works in detail, among many other things. It includes several labs for the reader to work through, along with available source code for each.

"Security Configuration Recommendations for Apple iOS 5 Devices", March 2012, U.S. National Security Agency. -- Although more aimed at IT Security than MobAppSec audiences, this document provides some useful tips on how to configure iOS 5 devices and how to manage them in large enterprise environments.

"iOS Hardening Configuration Guide - For iPod Touch, iPad, and iPhone running iOS 5.1 or higher", March 2012, Australian Department of Defence. -- Conceptually similar to the NSA guide above (but written in Australian English :-), this useful document provides useful security configuration tips for iOS deployments. It also goes into good detail on how the platform's security features work, and is worthwhile reading for everyone involved in iOS application development.

"iOS Developer Cheat Sheet", July 2012, OWASP. -- This doc provides some quick pointers on how to avoid many of the major risks associated with mobile computing. The doc follows the (draft) OWASP Top Ten Mobile Risks, and points to possible solutions to consider for each. It is an open source document from OWASP, and others are encouraged to contribute and participate in expanding and improving it over time. (Full disclosure: I (@KRvW) was the principal author of the first version of this doc, so I'm somewhat biased...)


Mobile App Sec is being left behind

When it comes to application security, mobile app sec ("MobAppSec" as we like to call it) seems to be getting some pretty abysmal scores. What makes this especially risky business is that we're more and more putting real apps where real money (or other valuable information) is being put in harm's way.

Two studies were released this week which, taken together, are useful for understanding the bigger picture when it comes to MobAppSec. The first is the fourth release of the venerable Building Security In Maturity Model (BSIMM) by Gary McGraw, Brian Chess, and Sammy Migues. Next, there's the fourth annual World Quality Report from the consulting firm Capgemini.

The BSIMM study collects and analyzes observations from some 51 software development organizations across 12 industry verticals. In all, some 111 security activities are observed. It paints a rather thorough picture of what software developers around the world are doing with regard to software security. Although it's missing efficacy measurements -- to be fair, it doesn't set out to measure the efficacy of the activities observed -- it is easy to draw the conclusion that software development has come a long way in the last few years, at least in terms of security practices.

Since the launch of the BSIMM in 2008, for example, the software security groups (SSGs) in major software development organizations have flourished, rising from 1 SSG employee per 100 developers to 2 SSG employees per 100 developers. And it appears the limiting factor in staffing SSG organizations is finding qualified employees. This speaks well for the future of software security in large enterprises, to be sure.

In stark contrast to the BSIMM, however, Capgemini's World Quality Report (WQR) would indicate that MobAppSec isn't getting anywhere near the same level of security attention that other software projects get (per the BSIMM). (I should note that the BSIMM doesn't exclude mobile efforts, per se, but it doesn't directly address them either. Further, there is a note of a possible BSIMM Mobile Working Group, so perhaps we'll see some mobile-specific data in the future.) 

The WQR concludes that firms are failing at mobile application security. The MobApp communities seem to be driven by more of a gold rush mentality, focusing on functionality and time-to-market.

While focusing first and foremost on functionality is completely appropriate for a business, doing that at the expense of security can result in unforeseen security consequences. For example, while iOS 6 is brand new in the hands of consumers, there are already reports of things like Siri allowing an attacker to send Facebook postings and tweets, even on a locked device. No doubt the security research community will be taking a far deeper dive into finding all the abuse cases that can be found in the new iOS 6 user interfaces, among other things.

The majority of BSIMM participants know that developing secure software requires attention to details throughout the development process, from inception through production and maintenance. MobApp developers would be well advised to learn from these things sooner than later. There's an old adage that a smart person learns from his mistakes, but a wise person learns from others' mistakes.

We'll help bring these things together at our upcoming Mobile App Sec Triathlon, of course. We'll talk about many of the things observed in the BSIMM study, and we'll help put those concepts into actionable steps that developers can immediately put into practice. We hope to see you there.

Cheers,

Ken van Wyk

Tuesday, September 18, 2012

Building an Android Security Toolkit Part 2


In the last post, we started building out an Android Security Toolkit: things every Android developer should know about security. Access control is fundamental to application security. In my perfect world, when developers learn a new language they first learn Hello World; the next thing they learn should be how to implement "who are you" and "what can you do" in that language - authentication and authorization. The AndroidManifest.xml file describes the access control policy that forms the application boundary, but where is this boundary enforced, and what services does it provide?

The access control chain consists of

1. Defining access control policy

2. Enforcing access control policy

3. Managing access control policy

The AndroidManifest.xml defines the permissions that the application requires, such as:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

The user is able to confirm or deny installation (but not change permissions) based on the AndroidManifest.xml file; this covers step 1 above. The policy is distributed with the application, so policy management is under the control of the distribution point, such as the app market. That leaves step 2, enforcing access control policy.

Android apps run in the Dalvik VM; however, IPC is not managed in the VM. Instead it's managed further down the stack, in the Binder IPC driver, which resides in the Linux kernel. I'm not sure, but I suspect the reason is that a number of permissions require lower-level access.

The Binder maps the permission to either the caller's identity or a binder reference to verify access privileges. From a design standpoint, permission boundaries can be defined and enforced at different layers in the app, including Content Providers, Services, Activities, and Broadcast Receivers.

Access control is the beginning of thinking about security, but it's not the endgame. The next step in building an Android security toolkit is defensive coding: how to deal with cases like code injection that are designed to subvert the access control scheme.

**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

ANNOUNCING: MobAppSecTri Scholarship Program

For our upcoming three day Mobile App Sec Triathlon in San Jose, California on November 5-7, we are today announcing a student / intern scholarship program.

We will be giving away a few student / intern tickets to the event absolutely free to a small number of deserving students / interns.

Course details can be found here.

Requirements

To be considered for a student / intern free registration, you will need to submit to us by 8 October 2012 a short statement of: A) Your qualifications and experience in mobile app development and/or information security, and B) Why you deserve to be selected. Candidate submissions will be evaluated by the course instructors, Gunnar Peterson (@OneRaindrop) and me (@KRvW). Decisions will be based solely on the quality of the submissions, and all decisions will be final.

Details

All scholarship submissions are due no later than midnight Eastern Daylight Time (UTC -0400) on 8 October 2012. Submissions should be sent via email to us. Winning entrants will be notified no later than 11 October.

Student / intern ticket includes entrance to all three days of the event, along with all course refreshments and catering. Tickets do not include travel or lodging expenses.


Friday, September 14, 2012

PCI has gone mobile -- is your app ready?

The folks over at the Payment Card Industry (PCI) security standards council have just published their "PCI Mobile Payment Acceptance Security Guidelines for Developers" document. If you're doing anything in the mobile payment space, this document is a must read, of course. Even if you're not doing mobile payments, though, it's still a pretty worthwhile read overall. But be prepared, some of their security goals are quite high indeed.

For starters, they lay down three security objectives (or requirements, if you will) as follows:

  1. "Prevent account data from being intercepted when entered into a mobile device."
  2. "Prevent account data from compromise while processed or stored within the mobile device."
  3. "Prevent account data from interception upon transmission out of the mobile device." 
These seem pretty reasonable starting points. They're all motherhood and apple pie sorts of requirements that we shouldn't find too many disagreements with.

Next, they set out a series of guidelines that are "essential to the integrity of the mobile platform and associated application environment." Here's where things start to get pretty tough for a mobile app developer to achieve. For example, "Prevent unauthorized logical device access." Now, there's nothing wrong with wanting to prevent unauthorized logical device access, but app developers don't have much input on, for example, the use of strong passcodes on iOS devices.

But it's likely the case that the PCI council has taken a broader view here than simply the app itself. That's evident in the very next guideline, which speaks to server side controls.

The rest of the guidelines, too, are worth reading. Some are lofty targets, like protecting the device from malware. And, to be fair, this isn't a standards document per se -- like, say, the PCI Data Security Standard (PCI-DSS) itself is. This document lays out guidelines, after all.

To be sure, though, if you're writing apps that involve mobile payment systems, you'd better be diving into this document and taking it seriously. We'll be delving into this document and its ramifications for mobile developers at our Mobile App Sec Triathlon in San Jose this November 5-7, so bring your questions with you and let's discuss what mobile developers need to know and do.

Cheers,

Ken van Wyk

Thursday, September 13, 2012

iOS 6 and UDID deprecation

This is somewhat of a follow-up to my posting yesterday re what iOS devs should know about security-relevant changes to iOS 6.

We've all known for some time that Apple would be deprecating the use of Universal Device IDentifiers (UDIDs) in apps. We've also known more recently that attackers have been targeting those UDIDs.

And now, we need to prep our apps because, as of iOS 6, the use of UDIDs is no longer available. (Actually, reports indicate that Apple has been rejecting UDID-using apps for at least a couple months already.) But in iOS 6, Apple gives app developers an alternative in the form of a so-called "Advertising Identifier".

So, the question you might be asking yourself is this: Since this issue relates mostly to advertising, why do we care from a security perspective, and what's the big deal with UDIDs anyway? Glad you asked.

For starters, UDIDs are persistent identifiers. Many app developers have used UDIDs to identify sessions between mobile apps and servers. After all, they're unique identifiers, right? There are a couple of problems with that approach. First of all, if a consumer sells his iPhone, the UDID remains with the device, even if the iPhone gets wiped with a factory reset. Secondly, there are privacy concerns over associating users and persistent hardware identifiers.

So, in our apps, we really should avoid using persistent hardware identifiers to associate with users, sessions, etc. (Advertisers have also used these identifiers, but that's outside the scope of what I'm discussing here today.)

And besides, even if we mistakenly thought using UDIDs was a good thing, Apple has taken that option off the table.

That leaves us, at the very least, with the new advertising identifier. It isn't associated with the hardware, and can be cleaned with a factory reset, so many of the privacy concerns are reduced.

But let's step back a bit and consider this from a security perspective. If we're looking for a session tracking token, why wouldn't we generate a new one with every session, similar to how JSESSIONID works on Java web apps? If we're identifying a user, why not use a username and/or user number of some sort? Isn't then the advertising identifier simply an issue for the advertisers to deal with (as the name would imply)? I believe so.
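For the session token case, something as simple as the following sketch goes a long way (NSUUID is new in iOS 6; on earlier releases CFUUIDCreate serves the same purpose):

    // Mint a fresh, non-persistent identifier per session instead of reusing a hardware ID.
    NSString *sessionToken = [[NSUUID UUID] UUIDString];
    // Better still, have the server generate and return the session token after a
    // successful login, just as JSESSIONID is issued for Java web apps.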

But the fact remains that many apps have used UDIDs for session tokens, user identifiers, etc., for some time. Those apps will need to be re-tooled, if they haven't already been. I consider the use of something like a UDID to simply be sloppy coding, and we need to do better than that.

We'll discuss using the advertising identifier and other approaches at our Mobile App Security Triathlon in San Jose, on November 5-7.

Cheers,

Ken van Wyk


Wednesday, September 12, 2012

iPhone 5 and what every (secure) developer should know

Well, the Apple iPhone 5 big event has come and gone, and what new stuff do we need to know from a security standpoint?

For starters, the new iOS 6 Gold Master, Xcode 4.5 Gold Master, and iTunes 10.7 are available for download, as of this writing. (Mine are downloading as I type.)

While there was a lot of buzz about the "i5" getting Near Field Communications (NFC) capability, for payment systems and other short range RF comms, that didn't pan out. From what we can gather at this point, it appears the new Passbook system in iOS 6 is going to be based on barcode scanning, much like the existing Starbucks app has been doing for well over a year.

But then there is iOS 6 itself, and while the jury is still out on its under the hood security enhancements -- which are inevitable with each new major iOS release -- there aren't a lot of security changes on the surface.

Certainly, to support the bigger i5 screen, app devs are going to have to tweak their UIs, but that's all functional stuff and will no doubt happen in due time.

So, from an app security standpoint, the best thing we can be doing right now is to ensure our apps build properly in Xcode 4.5 and to start diving into what Passbook has to offer (if you're doing anything like payments, coupons, boarding passes, etc.). And, since we're now forced to support two different screen geometries, this might be a good time to build UI XIBs for all of them (including iPad) and build our apps as Universals. While those are compiling, we'll be diving into the iOS 6 docs looking for any minor or major security UI enhancements.

Either way, we'll plan on an iOS 6 changes sidebar at our upcoming Mobile App Sec Triathlon in San Jose on November 5-7. Hope to see you there. Bring your iOS 6 questions with you!

Cheers,

Ken

Tuesday, September 11, 2012

What's in your Android Security Toolkit?

Ken van Wyk asks mobile developers - what's in your bag of tricks? From a security perspective, Ken lists a number of critical things developers must do to protect their app, their data, and their users; these include protecting secrets in transit and at rest, server connections, authentication, authorization, input validation, and output encoding.

These are all fundamental to building a secure mobile app. Over the next few posts, I will address the core security issues from an Android standpoint and what security tools should be in every Android developer's toolkit.

First, with regard to security for Android I think there are three key areas:
  • Identity and Access Control - provisioning and policy for how the system is supposed to work for authorized users
  • Defensive Coding - techniques for dealing with malicious users
  • Enablement - getting the app wired up to work in a real world deployment
So, onwards to policy for Identity and Access Control. A good place to start is with AndroidManifest.xml.

There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton

AndroidManifest.xml provides the authoritative source for the application's package name and unique identifier, and it effectively bootstraps the app's activities, intents, intent filters, services, broadcast receivers, and content providers. These declarations define the external interfaces available to the application.
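As a rough illustration (a sketch, not part of Android's required setup; the class and tag names are mine), those declarations become a discoverable attack surface: anything marked exported is reachable by other apps on the device, and you can enumerate your own components through the PackageManager.

import android.content.Context;
import android.content.pm.ActivityInfo;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.util.Log;

// Sketch: log this app's activities and whether each one is exported
// (i.e., invokable by other applications).
public final class AttackSurface {
    public static void logExportedActivities(Context ctx) {
        try {
            PackageInfo info = ctx.getPackageManager().getPackageInfo(
                    ctx.getPackageName(), PackageManager.GET_ACTIVITIES);
            if (info.activities == null) {
                return;  // no activities declared in the manifest
            }
            for (ActivityInfo activity : info.activities) {
                Log.d("AttackSurface", activity.name + " exported=" + activity.exported);
            }
        } catch (PackageManager.NameNotFoundException e) {
            Log.w("AttackSurface", "Could not inspect own package", e);
        }
    }
}

In a well-locked-down app, that log should be short, and anything exported should be exported on purpose.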

The next step is assigning permissions. Android takes a bold stance by publishing the permissions that the app requests before it's installed. This has the positive effect of letting users know what they are permitting, but at the same time they cannot change or limit the app. If they want to play Angry Birds (and who doesn't?), they either install Angry Birds with the permissions set by the developer or they live an Angry Birds-free existence. So the overall effect is to inform the user but not to let the user choose granular permissions (which has the positive side effect of not turning the average user into a system administrator for a tiny Linux box).

AndroidManifest.xml contains the requests for access to system resources such as Internet, Wi-Fi, SMS, phone, storage, and others:

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

The first step for app developers here is to request only the privileges necessary for your app to get the job done. Saltzer and Schroeder first defined the principle of Least Privilege:

Every program and every user of the system should operate using the least set of privileges necessary to complete the job. Primarily, this principle limits the damage that can result from an accident or error. It also reduces the number of potential interactions among privileged programs to the minimum for correct operation, so that unintentional, unwanted, or improper uses of privilege are less likely to occur. Thus, if a question arises related to misuse of a privilege, the number of programs that must be audited is minimized. Put another way, if a mechanism can provide "firewalls," the principle of least privilege provides a rationale for where to install the firewalls. The military security rule of "need-to-know" is an example of this principle.

Notice the two facets of this principle. The first is the conservative assumption, to limit the damage from accident and error. This margin-of-safety approach should be near and dear to every engineer's heart. The second part of the principle is simplicity: if it's not needed, turn it off, or in this case, do not publish or request access to it.

From a security point of view, the AndroidManifest file helps to reduce your application's attack surface. If you don't need SMS or Internet or Wi-Fi, don't ask for it.
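As a quick audit aid (again a sketch, with hypothetical names), you can have a debug build log exactly which permissions the packaged app ends up requesting and compare that list against what the app genuinely needs:

import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.util.Log;

// Sketch: log every permission this package requests, as a least-privilege check.
public final class PermissionAudit {
    public static void logRequestedPermissions(Context ctx) {
        try {
            PackageInfo info = ctx.getPackageManager().getPackageInfo(
                    ctx.getPackageName(), PackageManager.GET_PERMISSIONS);
            if (info.requestedPermissions == null) {
                return;  // the manifest requests no permissions at all
            }
            for (String permission : info.requestedPermissions) {
                Log.d("PermissionAudit", "Requests: " + permission);
            }
        } catch (PackageManager.NameNotFoundException e) {
            Log.w("PermissionAudit", "Could not inspect own package", e);
        }
    }
}

Anything on that list you can't justify is a candidate for removal from the manifest.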

Android has a pretty interesting approach to access control, ranging from under-involvement to declarative permissions to capabilities, and we will dig deeper into it in the next post.

**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

Monday, September 10, 2012

Is your mobile app ready for legalized Wi-Fi sniffing?

Sure, we've all known about network sniffing for many years, right? We've also known that sniffing a network we don't own is illegal--or was illegal anyway. But now that a US Federal judge has ruled sniffing an open, public wireless network to be legal, it's a different game.

Let's put this into context a bit first. The Mobile Security Project over at OWASP started working a while back on a Top Ten Mobile Risks effort. They reckon the third biggest risk to mobile users is "insecure transport layer protection". (Number one on the list was insecure local storage, such as in the case of a lost or stolen device, and number two on the list was weak server side controls.)

Insecure transport layer protection is a kind way of saying that mobile developers often times don't adequately protect their apps' secrets while in transit. When we fail to encrypt things that matter -- e.g., authentication credentials, session tokens, device identifiers, user data, geolocation data -- we expose our users to what I like to refer to as a "coffee shop attack". Prior to that court ruling, the coffee shop attack was illegal. Of course, criminals weren't much deterred by that, but at least the victim might have some legal recourse if an attacker's action was itself illegal. No more. The gloves, as they say in ice hockey, are off.

Of course, those of us who understand the technologies involved wouldn't dream of using an open Wi-Fi without first encapsulating all of our network traffic inside a strong VPN tunnel (if at all).

Well, the average consumer can't even spell VPN, folks. Assuming our users will use a VPN is simply not adequate. So what does that mean for mobile app developers? How do we protect our consumers from (now legal) network sniffing?

For starters, we have to design and implement our apps under the assumption that our users will be using the apps in a hostile network environment, like an open Wi-Fi in a coffee shop. If your app can't withstand the scrutiny of running securely on an open Wi-Fi, you have no business using the word "secure" to describe it in any way.

That's all easy to say, of course, but how does it translate into Gunnar's "what do I do?" sort of action?

Here's some things to consider:

  • Make an inventory of all the sensitive data in your app, from the low level (e.g., user authentication) stuff through the high level (e.g., user data).
  • Make it a security requirement that all such sensitive data will be protected while at rest as well as while in transit, whenever possible.
  • Ensure that all network connections where sensitive data is to pass are strongly encrypted (e.g., SSL, perhaps with certificate pinning or other strong certificate verification).
  • Verify through code reviews that all sensitive data is encrypted in transit.
  • Validate through dynamic validation testing that all sensitive data is in fact (not just in theory) encrypted in transit.
I know the above list is an oversimplification in many ways, but our consumers are not likely to easily forgive us for an "oops" when it comes to exposing their sensitive data to a coffee shop attacker.
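To make the transport-protection and pinning items in that list a little more concrete, here is a minimal sketch (Android/Java; the pin value is a placeholder you would replace with the hash of your own server's public key) of checking the server's key against a value baked into the app, in addition to the platform's normal certificate validation:

import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.cert.Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLPeerUnverifiedException;

// Sketch: verify the server's public key matches a pin compiled into the app.
// PINNED_KEY_SHA256 is a placeholder; compute the real value from your server's certificate.
public final class PinCheck {
    private static final String PINNED_KEY_SHA256 =
            "replace-with-hex-sha256-of-your-servers-public-key";

    public static void verify(HttpsURLConnection conn) throws IOException {
        conn.connect();  // the certificate chain is only available after connecting
        Certificate[] chain = conn.getServerCertificates();
        String actual = sha256Hex(chain[0].getPublicKey().getEncoded());
        if (!PINNED_KEY_SHA256.equalsIgnoreCase(actual)) {
            throw new SSLPeerUnverifiedException("Server key does not match pin");
        }
    }

    private static String sha256Hex(byte[] data) throws IOException {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(data)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IOException(e);
        }
    }
}

This only sketches the idea; a production implementation would also plan for key rotation and would still rely on the platform's default chain validation.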

Securing network data is just one of many things we need to do, of course. But it's a biggie. Building security into our apps is a lot like physical fitness in that way. We don't just go for a jog the day after New Year's Eve because we feel guilty about how much we consumed over the holidays. It's a lifestyle change. It's a discipline. We need to think about it all the time and live it.

In our upcoming Mobile App Sec Triathlon, Gunnar and I will cover these topics, of course -- right down to code examples of how to implement the above list. We hope to see plenty of mobile app devs there, and to engage in meaningful dialog about different ways of approaching this and many other issues regarding secure mobile apps.

Cheers,

Ken van Wyk

Friday, September 7, 2012

Why We Train

Ken van Wyk asks what is in your Mobile App Security toolkit? I had planned to write a post responding to that, but saw the tweet below from two of my favorite people in the industry and thought I would expand on this:
[embedded tweet from Jeremiah]
The first part, mostly, makes sense. Training developers is not an instantaneous fix, to be sure. In my training for developers, we look at concrete ways for developers and security people to improve the overall security of their apps. The ways to do this vary; some are short-term design/dev fixes (improving input validation, for example) and some are longer term (swapping out access control schemes). There is some latency from the time you train developers until the time you realize all the benefits in your production builds. However, unless you roll code at a glacial pace, I do not believe it takes 18 months for training to pay off. It should happen way faster.

The second part of the tweet boils down to the old adage - "what if you train them and they leave?" The counter argument to this is simple and serious - "what if you don't train them and they stay?" Believe me I have seen plenty of the latter and lack of clue does not age well.

So while I agree with the spirit (but not timetable) of the first part of the tweet, I definitely disagree with the second part of the tweet. We need more training, better educated developers and security people, not less.

Specifically, we need hands on security engineering skills - the basic principles of security are not rocket science, the challenge is all in how do you apply it in the real world?

Despite increasing budgets, the security industry has not solved many problems in the last decade, but one thing the industry absolutely excels at is - conferences!
900 - NINE HUNDRED - infosec conferences! This is not a record to be proud of. Granted, there are a handful of very good conferences, but the security industry's conference problem is that the industry as a whole is geared toward talking, not doing. We've all seen the conference hamster wheel - oh, big problems; oh, solutions that seem hard; when is beer? You get on the plane home with the same problems you left with (or more). Repeat.

Many years ago, I was working on a project at a large company with thousands of developers, and they wanted to tackle software security. The company put its top architect on the project, a software guy, not a security guy. We met early in the project; he was very talented, one of the better architects I have worked with, and, as is the case with all such people, he was very curious and really wanted to learn. He asked me - how do I get up to speed on security matters? I told him to read Michael Howard's books, Gary McGraw's books, and Ross Anderson's books. I came back a month or two later and, to his credit, he had plowed through them; the books were piled up behind him. He looked at me seriously and asked - "I see where the problems are, but what do I do about them?"

The "what do I do" question has haunted me ever since. We got down to work on a plan for that company, but the industry as a whole glamorizes the oh-so-awful security problems at conferences and leaps over the "what do I do" part.

This is where training comes in. I am not naive enough to believe training is all we need to do, but I definitely believe that education for security people, architects, and developers has a major role to play in improving our collective situation. We need better tools and technologies; advances in vulnerability assessment tools and in identity and access management have helped a lot over the past decade. We need better processes for applying them in real-world systems; your SDL matters. But so do your people! Without basic training you won't know which tools to use and where, how to apply them, and what traps to avoid. This is why we train.

Ken and I will be in San Jose, Nov 5-7 doing three days of training on Mobile AppSec. If you or your dev teams are doing work on iOS, Android, or Mobile, there is a lot to talk about. The focus is hands on, what problems are out there in mobile today and what to do about them.

The first time I went to Black Hat, I was intrigued and impressed by the depth of FX's and others' presentations, but I was also horrified. There was simply no one in the software world (at that time) talking about this stuff, and it was clear the problems would just keep getting worse. They did. But enumerating problems a decade-plus later is not good enough; we need time, materials, resources, and people focused on what to do about them - how to fix them. Out of 900 conferences, there is no "how to fix" conference that is the equivalent of Black Hat. If you plant ice, you're gonna harvest wind.

By the way, waiting to deal with problems is a proven way to fail, and there is nothing more permanent than a temporary solution. Ken and I started on Mobile because now, during the initial mobile deployments for many enterprises, is the chance to get it right, with some forethought on security.


The last thing we need is more hand waving, blah blah, and PowerPoint at a conference on "the problem"; we need to get busy engineering better stuff, and that is where training comes in. As the USMC says, the more you sweat in training, the less you bleed in battle. You might ask - with so many problems, can we really engineer our way out? Let me ask, then - if we had 900 cons a year on how to build better stuff, would we be better off or worse?


Security always lags technology. In the early days of the Web, the security was egregiously bad. But this did not matter so much because the early websites were brochureware. The security industry had time to catch up (though it is still behind) and learned over time how to deal with SQL injection et al.

In Mobile it's much worse. The security industry is behind the technology rate of change, as always, and the developers are untrained, but the initial use cases for Mobile are not low-risk brochureware; they are high-risk mobile transactions, banking, and customer-facing functionality. Security's window to act on building better Mobile App Sec for high-risk use cases is not three years away; it's now.

**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

Creepy featurism

In yesterday's launch of the new Kindle, Amazon CEO Jeff Bezos said some interesting things about today's smart phones and tablets. In particular, his point about customers not wanting "gadgets" but wanting services that improve over time really hit home for me.

I've been an "early adopter" for many years. I had an Apple Newton (MP-120, MP-130, and MP-2000) and loved them. More recently, I've been searching for 10+ years for the right smart phone for my needs. I had an old Linux-based Motorola A-780 and really wanted to believe that someone had finally built the right device for me. Its list of features was right on target (for 2003 or so). But it failed me miserably. I also tried a Blackberry 8800, but it too was just a box of silicon features with crappy software, IMHO. Total #FAIL.

Finally, I felt I found what I was looking for when I got my first iPhone. And, by and large, I did. I'm now a few iPhones down that path (on a 4S now, but that'll change in a week or so), and I'm a pretty happy customer.

Of course, there are many lessons to be learned in all of this. How about security, and how does this all relate to mobile app developers? Excellent question.

It's 2012, and few people would disagree that smart phones have become hugely important to a vast number of consumers. We're doing things on our devices today that we would have laughed at the day before the iPhone (or Android!) was released. The mobile phone world has been flipped onto its head, thanks to these pioneers.

But it's not about a competition of feature lists. To succeed in today's market, the device has to just work, and has to just work for non tech-savvy consumers. It has to pass the Uncle Bill and Aunt Betty test.

Apple long ago learned to de-emphasize the technical specifications race, and focus on the "user experience". When they release a new product, the focus of their announcements is showing us how things work, not the CPU speed of the new multi-core processor. Although those things are important, they're not what matters to our consumers.

Because, guess what -- today's consumers don't understand the technology (by and large), and they surely don't understand security. Security, like the functionality in our devices, has to just work. And those two words, "just work", have to be something that we all live and breathe.

Force a user to install a root CA certificate into the /var/blah/blah/blah folder and you've already lost. But make it "just work" and do it securely, and you've won.

Security, too, cannot be an afterthought. We have to consider security at every possible stage of our work. It has to simply be a quality of our efforts.

Mr. Bezos is right in that regard. It can't be about building a product with all the latest buzzwords included in the ingredient list. It has to be about making our users happy. One of the things that will keep our users happy is to enable them to securely do all the cool things that today's (and next week's) devices can do. Security must simply be an intrinsic quality of our software.

Are you prepared? In our Mobile App Sec Triathlon, Gunnar (@OneRaindrop) and I (@KRvW) will give you plenty of food for thought, and discussion. Come join us in San Jose this 5-7 November and let's talk about what needs to be done.

Cheers,

Ken van Wyk