How Apple is improving mobile app security

In a much-publicized recent case, scientists at Georgia Tech managed to get a specially crafted app, aptly named Jekyll, onto the App Store, bypassing every single security measure put in place by Apple to protect its users. Once installed, the app could perform all sorts of malicious activities.
That’s no small achievement: Apple has gone to great lengths to ensure that users of its mobile operating system feel safe when they use their devices for everyday activities, from browsing the Web to checking their bank accounts. By enforcing a stringent set of rules that determine which software can and cannot run on its devices, the company has, for the most part, managed to keep its customers safe from malicious software.
Sure, the odd app containing features that violate the company’s rules does get through from time to time, but serious breaches are extremely rare. Still, hackers and security researchers continue to prod at iOS in an attempt to circumvent its security framework.
For its part, the Cupertino giant is hardly sitting still: The security behind its operating systems continues to evolve, creating additional layers of protection that affect everything from the way apps are developed to the way they run.

In the beginning, there was App Review

The first line of defense for app security is the review process, during which each app is manually tested to ensure that it doesn’t crash in any obvious way and that it conforms to all the appropriate App Store rules.
As part of this vetting exercise, Apple employees also run a special static analyzer on the app’s binary code to see whether it makes use of private functionality that’s normally off-limits to developers. This important step allows the company to determine, for example, if the code attempts to surreptitiously make phone calls, send SMS messages, or even access the contacts database without the user’s permission.
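Apple doesn’t publish details of how its analyzer works, but the basic idea can be sketched in a few lines of Swift: scan the bytes of a compiled binary for strings that match known private selector names. Everything here is illustrative; the selector names and the binary path are made-up placeholders, and the real tool operates on the Mach-O structure rather than raw strings.

```swift
import Foundation

// Hypothetical examples of selectors belonging to private frameworks.
let privateSelectors = ["dialPhoneNumber:", "sendSMSWithText:to:"]

// Naive scan: read the binary and look for private selector names among
// its raw bytes. Latin-1 decoding maps every byte to a character, so the
// binary can be searched as if it were text.
func scanBinary(at path: String) -> [String] {
    guard let data = FileManager.default.contents(atPath: path),
          let text = String(data: data, encoding: .isoLatin1) else { return [] }
    return privateSelectors.filter { text.contains($0) }
}

let hits = scanBinary(at: "/path/to/SomeApp")  // placeholder path
print(hits.isEmpty ? "no obvious private-API references" : "flagged: \(hits)")
```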
Despite having been largely successful at keeping malware out of the App Store, the review process has its limits. Faced with vetting hundreds of software titles every week, the reviewers can dedicate only a limited amount of time to each app, which means that they may miss issues that only crop up after a certain amount of use, or in response to external events. In the case of the Georgia Tech attack, for example, the Jekyll app was crafted in such a way that the malicious code would kick in only when a special message was delivered over the Internet, making it very hard for the app review process to highlight any potential flaws.

Buried treasure

And this is where iOS’s software-based defenses kick in. Each app that runs on an iPhone or iPad is allowed to read and write files only inside a virtual “sandbox” that the operating system creates for it. Any attempt to access data outside of the sandbox is rejected outright, thus effectively allowing apps to communicate with each other only through approved channels that Apple has put in place.
For all practical purposes, the sandbox prevents a malicious app that has managed to slip through the review process from siphoning data that belongs to another app (like, say, online banking software) without the user’s knowledge. Because sandboxing is implemented at the lowest levels of the operating system, it is very hard for a hacker to circumvent its security model—unless the user is operating a jailbroken device.
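From an app’s point of view, the effect is easy to picture. In the Swift sketch below, a write inside the app’s own container succeeds, while a read aimed at another app’s container is refused by the kernel regardless of file permissions; the foreign container path is fabricated for illustration.

```swift
import Foundation

// Inside the sandbox: the app's own Documents directory is always reachable.
let docs = FileManager.default.urls(for: .documentDirectory,
                                    in: .userDomainMask)[0]
try? "hello".write(to: docs.appendingPathComponent("notes.txt"),
                   atomically: true, encoding: .utf8)  // succeeds

// Outside the sandbox: a path inside another app's data container
// (fabricated ID). The sandbox policy denies the access outright.
let foreign = URL(fileURLWithPath:
    "/var/mobile/Containers/Data/Application/0000-FAKE/Documents/secrets.db")
do {
    _ = try Data(contentsOf: foreign)
    print("read foreign data (should not happen on stock iOS)")
} catch {
    print("denied by the sandbox: \(error.localizedDescription)")
}
```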
To make a hacker’s life even harder, iOS clearly separates the areas of memory that hold code from those that are supposed to contain only data, making it impossible, in theory anyway, for the latter to spill into the former. As a result, an app can’t download new code from the Internet and execute it when the user runs it, which would otherwise let it bypass the review process altogether and potentially unleash all sorts of trouble.
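That guarantee can even be probed directly. In the sketch below (a plain Swift executable, assuming nothing beyond the C library), the app asks the kernel for a single page of memory that is writable and executable at the same time; on a stock iOS device the request fails, which is exactly what stops downloaded bytes from ever running as code.

```swift
import Darwin

// Request one page that is readable, writable, AND executable. iOS denies
// such W+X mappings to third-party apps, so downloaded data can never be
// flipped into runnable code.
let pageSize = sysconf(_SC_PAGESIZE)
let page = mmap(nil, pageSize, PROT_READ | PROT_WRITE | PROT_EXEC,
                MAP_ANON | MAP_PRIVATE, -1, 0)
if page == MAP_FAILED {
    print("writable+executable page refused, errno \(errno)")
} else {
    print("got a W+X page (expected on macOS, not on stock iOS)")
    munmap(page, pageSize)
}
```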

Anatomy of a heist

Unfortunately, even all this technology is no match for the wits of a determined hacker. For one thing, while sandboxing prevents apps from accessing each other’s data, it doesn’t necessarily stop them from accessing information that, under the appropriate circumstances, would be available to third-party software, like the user’s contacts or photo albums.
Malicious access to these resources is instead flagged by Apple’s reviewers, who observe the app in action and examine its binary code. That means an app that manages to evade Apple’s analysis tools could potentially access everything from your messages to those pictures you really wanted to keep private.
Due to the dynamic nature of iOS’s underlying technologies, this is not as hard to do as it may sound. Even a moderately skilled developer could write code that, for example, takes two seemingly unrelated words, stored in encrypted form, and combines them at runtime to form the name of a private API. The forbidden name thus doesn’t come into existence until the app is run; it’s a bit like trying to smuggle a gun onboard an aircraft by breaking it down into its individual parts.
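On an Objective-C-based runtime, where methods are looked up by name at the last moment, the trick takes only a few lines. In this Swift sketch, secretAction is a harmless stand-in defined in the snippet itself, playing the role of a private API that a malicious app would actually target.

```swift
import Foundation

// Harmless stand-in for the private API a malicious app would target.
class Victim: NSObject {
    @objc func secretAction() { print("invoked dynamically") }
}

// The selector name never appears whole in the binary: it is assembled
// from innocuous-looking fragments at runtime (they could just as easily
// be decrypted), so a string scan of the binary finds nothing suspicious.
let fragments = ["sec", "ret", "Act", "ion"]
let sel = NSSelectorFromString(fragments.joined())  // "secretAction"

let target = Victim()
if target.responds(to: sel) {
    _ = target.perform(sel)  // resolved only when the app runs
}
```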
However, naïve implementations of this technique still leave telltale signs that a sufficiently sophisticated static analyzer can detect—bullets viewed by an X-ray machine still look like bullets, after all. These attempts are almost always discovered and blocked by app reviewers well before they manage to make their way onto a user’s device.
Yet, the Georgia Tech researchers were able to take the technique to a higher level: They managed to break their app into pieces that were both innocuous and necessary to the software’s “official” functionality—such as downloading information from the Internet and sending a webpage to a friend via email—but that could be recombined at runtime to perform illicit actions without the user’s consent, such as grabbing all the user’s contacts and uploading them to a website of the developer’s choosing.
As you can imagine, this kind of attack is very difficult to recognize. To take the air travel analogy further, tracking this kind of vulnerability down would be akin to recognizing a MacGyver-like terrorist who can fashion a gun out of some mints, a newspaper, and a piece of string.

That thing you (can) do

Combating this problem involves changing the way apps are allowed to access system resources, essentially extending the sandbox to encompass not just the file system but also everything from your contacts to your pictures.
With this setup, it is the operating system, rather than human reviewers, that’s responsible for stopping apps from accessing any sensitive data, making it nearly impossible for malicious software to do harm even if it gets past the app-vetting process. The only way for developers to gain access to the data is to explicitly request an “entitlement” to do so before they submit the app, thus giving the app review folks useful hints on what kinds of functionality they should specifically be examining to ensure compliance with the rules.
Entitlements are already a firmly established technology—they are widely used in OS X, for example, to regulate how signed apps can access everything from the network to the camera, and iOS apps can already take advantage of them if they want to support iCloud or push notifications. In future versions of Apple’s mobile operating system, their use will simply extend to encompass just about any kind of sensitive information or functionality that a developer may need.
The real genius of this approach is that it improves security without limiting what apps can do or placing any additional burden on end users; the onus will be entirely on developers, who will be forced to explicitly request entitlements for the resources they need to access, and on Apple’s reviewers, who will need to approve or reject those requests.
As far as we—the customers—are concerned, the apps we use every day will continue to ask us whether they can access our contacts, location data, or photo albums, just like before. Behind the scenes, however, a whole new layer of security will help prevent hackers’ increasingly sophisticated attacks from wreaking havoc with our personal information.
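As a concrete illustration (using today’s Contacts framework purely as an example; apps of the era used the older Address Book API), the first attempt to read contacts still produces the familiar system prompt, while the operating system checks the app’s declared capabilities behind the scenes:

```swift
import Contacts

// The app must declare its intent up front (a usage description in its
// Info.plist); at runtime the system shows the familiar prompt, and the
// OS, not the app, decides whether access is granted.
let store = CNContactStore()
store.requestAccess(for: .contacts) { granted, error in
    print(granted ? "contacts access granted"
                  : "contacts access denied: \(error?.localizedDescription ?? "user declined")")
}
```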

Source: Macworld
