The mother of all Android malware has arrived (androidpolice.com)
279 points by anon1385 on March 2, 2011 | 128 comments



This is an epic screwup by Google and the mobile carriers, but it's also a useful warning to Google's competitors.

1) This problem was reported to Google a week ago, through multiple channels, by one of the app vendors who got ripped off: http://www.reddit.com/r/Android/comments/fvepu/someone_just_... Apparently, Google has an unofficial policy of ignoring copyright and trademark complaints, allowing lots of skeevy software to linger on the market.

2) The phone carriers are apparently very slow to patch root exploits.

3) Users should be able to trust everything in a curated app store, or else there's not much point to those 30% fees.

But a word of warning to iOS, WebOS, and Blackberry users:

4) Although the lax behavior of Google and the carriers made this exploit easier, we'll eventually see problems like this on most mobile platforms. Apple has allowed (benign) root exploits to slip through their approval process in the past. If your phone is vulnerable enough to be rooted, it's vulnerable enough to be owned by a malicious app.


> Apple has allowed (benign) root exploits to slip through their approval process in the past.

Citation please. To my knowledge, Apple has never allowed code through the official app store that would jailbreak or root an iOS device.

There have, however, been web-based exploits that were quickly patched, but you can't really blame those on Apple's approval process. The most notable one of these was the PDF viewer exploit used by jailbreakme.com.

This incident seems to prove to me that a curated app store is the best way to keep a large monoculture safe from malware.


Citation please.

[I would like to edit my original remarks, but it appears that I no longer can. So I've upvoted your comment instead.]

When I wrote that, I had been thinking of the Handy Light exploit which slipped through the App Store's approval process: http://appshopper.com/blog/2010/07/20/handy-light-tethering-...

But upon further investigation, I can't find any evidence that Handy Light used an actual root exploit. It may have just been an easter egg which violated Apple's policies using standard APIs.

Even if it's not a root exploit, though, there's no cause for celebration. Handy Light shows that major, prohibited functionality can be snuck past the App Store process and onto users' devices in the guise of a clearly useless and trivial app.

Apple's biggest advantage may be checking for pirated apps. If Google hadn't allowed obviously pirated apps into the Market, then any attack would have required writing useful, popular applications from scratch, which would raise the bar considerably.


> When I wrote that, I had been thinking of the Handy Light exploit which slipped through the App Store's approval process:

Yes, I can see how you would make that simple mistake. The Handy Light application did allow tethering, which on an Android device might have meant rooting it. Other people in this thread mentioned that Apple runs each binary through a scanner that looks for undocumented API calls. The Handy Light application must have used only publicly available API calls, so it passed through their approval process.

I think Google could improve dramatically by keeping their store open, but running each binary through an automated scanner that looks for known exploit signatures. I'm not intimately familiar with the Android sandbox, but I am familiar with native assembly. Assuming that there is a public API or a list of "approved/sanctioned machine instructions", it would be somewhat trivial to decompile the bytecode, check each instruction against the list, and kick the application back to the developer with a list of the instructions or API calls that were not allowed.
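
A minimal sketch of the core check such a scanner might do, assuming the method references have already been pulled out of the dex bytecode by a disassembler (the forbidden entries and input format below are assumptions for illustration, not Google's actual tooling):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Hypothetical sketch: flag any method reference that hits a disallowed API.
    public class MarketApiScanner {
        private static final Set<String> FORBIDDEN = new HashSet<>(Arrays.asList(
                "Ljava/lang/Runtime;->exec",             // spawning native binaries
                "Ldalvik/system/DexClassLoader;-><init>" // loading code at runtime
        ));

        // Returns the subset of method references that match a forbidden prefix.
        public static List<String> violations(List<String> methodRefs) {
            List<String> hits = new ArrayList<>();
            for (String ref : methodRefs) {
                for (String bad : FORBIDDEN) {
                    if (ref.startsWith(bad)) {
                        hits.add(ref);
                        break;
                    }
                }
            }
            return hits;
        }
    }

In practice a prefix/signature match like this only catches the lazy cases; the replies below about reflection and downloaded payloads describe exactly the ways around it.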


Even with a static analysis tool, most of a Cocoa app is dynamic enough to easily get around it. I had an app which relied on a private API (one that was relatively stable; it had been available in Cocoa for years before iOS existed) by simply finding the selector at runtime. Function pointers would be no different.
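
For comparison, the Android/Java analogue of that evasion - resolving the API purely by name at runtime so nothing obvious shows up to a static scanner - is just as short. The class and method names here are invented for illustration:

    import java.lang.reflect.Method;

    // Hypothetical sketch: call a "hidden" API via reflection. With the
    // strings obfuscated, nothing in the binary names the API directly.
    public class ReflectiveCall {
        public static Object callHidden(Object target) throws Exception {
            Class<?> cls = Class.forName("com.example.internal.HiddenService"); // invented
            Method m = cls.getDeclaredMethod("doPrivilegedThing");
            m.setAccessible(true);
            return m.invoke(target);
        }
    }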


I guess this is the start of the malware arms race, then? I already have Lookout installed on my Android phone, which scans every app I install (though I'm not sure what the scanning criteria are).

Isn't it really moot, though, considering the web exploits? I mean: what is there that you can get with this Android exploit, that you wouldn't be able to with the iOS web exploit?


The difference is that these apps were downloaded thousands of times before being removed. Getting thousands of people to go to your malicious website is probably much harder.


Even if it's not a root exploit, though, there's no cause for celebration. Handy Light shows that major, prohibited functionality can be snuck past the App Store process and onto users' devices in the guise of a clearly useless and trivial app.

Any competent iOS developer should be able to think of a handful of ways to sneak past Apple's approval process if they felt like it and had sufficient reason to do so.


Clearly, the best way to prevent applications you don't want running on your platform is to prevent applications you don't want running on your platform.

The problem comes when the user's opinion of an application they want is different from the platform's opinion. Apple's approach doesn't prevent malware, just limits it to installs by exploit - jailbreakme.com was a very public example of a webkit exploit (I assume) - and there have been plenty of others, including currently active ones.

On the other hand, applications that will play by the rules only have the official platform to go through. This means that while malware can and will get through, user-desired applications like functional alternate browsers, Flash and alternate MP3 stores never will.


While not a root exploit, I believe this is what the OP was hinting at:

http://gizmodo.com/#!5592521/how-a-guy-tricked-apple-with-a-...

>This incident seems to prove to me that a curated app store is the best way to keep a large monoculture safe from malware.

An even better way would be to disallow third-party apps entirely. There is definitely a trade-off between encouraging the ecosystem to grow and fending off malware. If that can be snuck in, is it really far-fetched to think an actual root exploit could be too?


"3) Users should be able to trust everything in a curated app store, or else there's not much point to those 30% fees."

Isn't the appeal of Google's app store that it isn't curated? Anyone can sign up and submit anything.

And Apple has never let a "(benign) root exploit" through approval.


Isn't the appeal of Google's app store that it isn't curated?

Yes and no. As a developer, I certainly love the relative ease and freedom of Google's Market. But as a user, I don't want to fear clicking "Download" or "Buy". Apple succeeds, in part, because their App Store provides a certain measure of trust, and users are willing to open their wallets.

At a minimum, Google needs to respond quickly to malware reports, and to enforce anti-piracy rules vigorously. Allowing attackers to pirate and republish apps allows them to attack many more users than they could reach by developing malicious-but-popular applications from scratch.

And Apple has never let a "(benign) root exploit" through approval.

Yeah, I think that was an incorrect statement on my part. Apple has passed major, hidden functionality that violates their policies, but I was mistaken in thinking a root exploit was involved. For a mea culpa, see the thread at http://news.ycombinator.com/item?id=2279823


>Isn't the appeal of Google's app store that it isn't curated? Anyone can sign up and submit anything.

Android allows installing apps from sources other than the Market, so this isn't that big a deal. A free-for-all app store does have disadvantages, like app spam, rampant copyright infringement, and apparently malware.


> Isn't the appeal of Google's app store that it isn't curated? Anyone can sign up and submit anything.

If you're an "I like to let anyone install anything on my phone" kind of person, I suppose so.


Isn't the appeal of Google's app store that it isn't curated?

That's the appeal for some developers, but probably not for most users.


The root exploit appears almost incidental here - without it the application could do everything it was doing merely by requesting those permissions at install time. The vast majority of people would install it without checking.

Here are some examples of iOS apps doing similar things without requiring a root exploit:

The threat is not theoretical. Several iPhone apps have been pulled from the App Store after being found to be harvesting user data, intentionally or unintentionally. A game called Aurora Feint was uploading all the user contacts to the developer's server, and salespeople from Swiss road traffic information app MogoRoad were calling customers who downloaded the app. Game app Storm8 was sued last fall for allegedly harvesting customer phone numbers without permission, but it later stopped that practice. And users also complained that Pinch Media, an analytics framework used by developers, was collecting data about customer phones.

http://news.cnet.com/8301-27080_3-10446402-245.html


" 3) Users should be able to trust everything in a curated app store, or else there's not much point to those 30% fees."

Exactly. Google is trying to clone the Mac App store without doing the legwork to justify their cut.

• Curation should include safety

• Curation should include some sort of fair promotion

I'm skeptical of the benefits of these "curated" app stores and their high fees but at least Apple does the work to back up taking their cut.


And they've jacked up their cut of the take by 6% over Apple. Apple's 30% is quite enough but an increase of 6% added on top of 30% is just ridiculous.


A higher percentage for far less value.


> Apparently, Google has an unofficial policy of ignoring copyright and trademark complaints

I have no personal experience with this, but Google makes it very clear that a DMCA notice is the appropriate action in this situation:

http://www.google.com/support/bin/request.py?contact_type=lr...

I've read a bunch of sob stories lately from iOS and Android app developers, and they never say they've sent a DMCA takedown notice that's gone unanswered. Under US law, this is the way to respond to copyright violations. Sending a generic complaint means very little.


He said in his comment that he sent a DMCA notice.


Just to be clear, the guy on Reddit also said that Google responded to the DMCA notice, which counters the claim the GP was responding to, specifically the claim that Google ignores copyright complaints.

That said, it's true that they won't respond to anything except a valid DMCA notice. Anyone who sends anything else is provided with instructions on how to file a valid DMCA notice last I knew.


3) Users should be able to trust everything in a curated app store, or else there's not much point to those 30% fees.

Wait a second - does Google curate the Android market or charge 30% fees?


I don't know about the curation, but the fees are real, according to this:

http://market.android.com/support/bin/answer.py?answer=11262...


Does Google take that fee, or does it go to the mobile carrier?


Bear in mind that this is just something casting a wide net by using the Android Market.

There are enough bugs and 0days floating around in webkit, as well as unpatched exploit code, to take on Safari, the Android browser and Chrome at the moment.

From a mail over the weekend about a pentesting exploit kit we subscribe to:

    This release introduces two new exploits for the webkit CSS rule
    deletion vulnerability. Use safari_parentstylesheet to exploit
    all those pesky OSX machines (fully up to date and patched)
    and android_parentstylesheet for anything running android 2.2 and 
    below. Moreover, using android_hotplug you can further escalate
    your privileges to root. Being offsensive has never been so good!

    ==New Modules==

    o safari_parentstylesheet (Safari <= 5.0.3 64bit webkit css rule deletion vulnerability)

    o android_parentstylesheet (Android <= 2.2 webkit css rule deletion vulnerability)

    o android_hotplug (Android privilege escalation vulnerability)

That's just one pentesting tool we use and that's a legitimate toolkit. Malware targeting webkit in general is on the increase, with various payloads for safari, chrome, osx, ios and android. It's still the minority by far, but it is growing. Heck, even Metasploit's getting in on the game (http://blog.metasploit.com/2011/01/mobile-device-security-an...).

Incidentally if you want to see a video of the safari bug in action, you can download one from http://partners.immunityinc.com/movies/Lightning_Demo_Safari...

There's also one on owning android at http://partners.immunityinc.com/movies/Lightning_Demo_Androi...


I'm not sure after your paranoia inducing post anyone is going to download zip files from links you provide.

Are these videos somewhere on the web in viewable form?


If you watch the android video you'll see that a link will do the trick, not even a download. So it makes no difference really - either you trust the link or you don't. Scary, huh?

I'm not aware of these vids being anywhere else but they are from the guys who make the aforementioned framework.


The Chrome sandbox protects pretty well against that kind of exploit. I'd bet $20K [1] that it won't own Chrome. Also, I think Android 2.3.3 patches the webkit bug - there was a similar problem that held up the release somewhat.

(Obviously that's not to understate the danger of that exploit against Safari or the Android browser.)

[1] of Google's money: http://dvlabs.tippingpoint.com/blog/2011/02/02/pwn2own-2011


The Sandbox makes things harder, but it's not infallible and has been bypassed before. For a pretty good walkthrough of the source (with links to even better more in-depth walkthroughs) have a look here: http://www.breakingpointsystems.com/community/blog/chrome-sa...


Of course the sandbox isn't failsafe, but it's pretty decent. I don't think there have been any remotely exploitable holes found in it yet, have they?

(That's a good link, and it links to some security bugs found in the sandbox. But I don't think any of those have ever had a live remote exploit.)


Browsers on my desktop won't be running up phone charges, which I think is an important distinction. Malware can now rip me off directly.


Malware has been able to do this for some time. Back in 2004 I (and many others, all independently) identified that some mobile phones (for me, most notably my beloved Nokia 6310) had a hidden bluetooth AT profile that supported unauthenticated connections.

It was quite simple to connect to and gave you effective access to a Hayes AT-style modem interface with mobile phone extensions, allowing you to download or modify address books, connect to the internet using their phone or even make phone calls.

The key thing is the monetisation. Just as with all the startups on HN, without monetisation you're effectively dealing in experimentation. With a monetisation route it all becomes worthwhile. Presumably the people who did this have some form of perceived monetisation route for the data they stole, otherwise they wouldn't have done it on quite that scale.


I wasn't trying to say phone malware didn't exist, my point was more that comparing the latest trends in Android exploits to browser exploits didn't really match up. I could have done a better job of saying that.


No worries, it's always easy to misinterpret text. Until recently there's not been a large amount of malware seen in the wild. A lot of the growth in mobile malware seems to be down to the following factors:

1) Increase in adoption of smartphones, most notably Android and iPhone

2) The general lack of, or poor quality of, software to detect intrusions on said devices

3) The lack of regular updates, combined with reliance on slimmed-down versions of major desktop apps and OSes (most notably that you can target Android and iPhone with the same bug, as shown earlier, followed by a clear kernel bug escalation path)

4) The introduction of mobile malware into open source, commercial and blackhat exploit packs.

5) The transferability of monetisation strategies for malicious desktop compromise to the mobile market.

I'm sure I've missed some, but those are the main ones. 5 is the motivating factor; 4, 3 and 2 make it easier; and because of 1 I unfortunately believe that we're approaching an unexpected variant of Dan Geer's monopoly hypothesis (see http://cryptome.info/0001/cyberinsecurity.htm for details) - a duopoly with shared (flawed) code doesn't appear to be much better than a monopoly.

Incidentally that stylesheet bug affects browsers across all main platforms and architectures, across pretty much every class of device (phone, netbook, laptop, desktop, workstation, server on windows, linux, osx, *bsd, Solaris, iOS, Android running on x86, x86_64, ARM and SPARC). If that's not a sign of bad things to come, then I'm not quite sure what is.


"I demand a walled garden, and I will gladly pay 30% to ensure all apps are reviewed, approved, and subject to the whims of the closed-shop store providing them."

This is obviously a serious issue - as the OP notes, the double-edged sword of openness. Still, the speedy response of Google makes me feel warmer than the (not-so-common now) decidedly un-speedy application process Apple put many developers through.

Edit: As jevans points out in response, and others have noted in this discussion, Google's 5 minute response time might be better characterised as '1 week of sitting on their hands when the developers complained, and 1 rapid response when it went public in a loud way'.

I'm feeling less warm now, and looking fondly at the non-smartphone Nokia I own which is so clearly targeted at the 11-year-old-girl's-first-phone market that it came inside a pink cardboard handbag. But has no malware.


I'm not sure I would qualify Google's response as "speedy." On the Reddit thread, the developer of the real Guitar Solo Lite claims to have been attempting to contact Google about this issue through various channels for a week with no response. http://www.reddit.com/r/Android/comments/fvepu/someone_just_...


Would be nice to find some middle ground here for sure. Seems to me that this is only starting to become a serious issue. This also seems to be a much easier route than finding 0-day flaws on a PC, and while most folks would think twice about installing software from the back alleys of the internet, they may not feel that way about Android (yet).


Most HN readers might think twice on a PC, but anti-virus creators have got rich off the majority who don't think twice.


This is probably where third party 'app stores' come in. E.g., Amazon's app store, or someone else's that manually picks which apps go in, may become the 'trusted' source for apps.

It is pretty difficult given Android's structure to have any chance of keeping the automated shipping of apps while also retaining security. Yes, you can keep adding more warnings about what apps do, but in general I don't think people read them, and you can ask for an awful lot of privileges.

The great thing about Android is just how much power you have to do stuff as an app maker, but that is of course one of the main problems as well.


> Google's 5 minute response time might be better characterised as '1 week of sitting on their hands when the developers complained, and 1 rapid response when it went public in a loud way'.

To be fair, the first complaint was of copyright infringement, the second was of malware. You can hardly expect the same response time to both kinds of issues.


The Apple walled garden is way more dangerous, because it's almost as easy to slip malware into your code (remember, Apple doesn't do full source code audits), and the false sense of security makes users even more complacent. It really is a perfect example of security theatre.


They do static analysis of your executable and check what you call, which is why they know if you're using a private API and reject you because of it. They've also caught bugs in my app and sent it back.

I believe the advantage of Objective-C for them is that all messages pretty much go through one point in the runtime (as far as I know). That signature is probably very obvious, and they can probably do a lot of looking at your code with very little effort. If you have malware in your code, there's a pretty good chance they'll find it based on what it does. I imagine they can see a lot more about our apps than you'd think, since they probably run them in a debug build of iOS.

I can think of literally dozens of things they can look at, and I don't even have access to their systems to know what data is available. They have hundreds of engineers who have probably figured all of that out, and automated it for the approvers, too.

And how would we know? Is a malware author going to blog and say "I tried to slip malware into the App Store, and they denied it?" Since an attempt costs $99 (I can't imagine you'd keep your account after a failed attempt), that raises the bar to trying. I seriously doubt we're the first to think of exploiting iOS, and you haven't heard a word about it...

I think security theater is a bit of a stretch, frankly.


They do static analysis of your executable and check what you call

Doesn't Objective-C have dynamic binding? I'm sure you can determine what function to call at runtime, which means you can always get past a static analysis.

I can think of literally dozens of things they can look at

I can think of literally dozens of ways any analysis can be subverted. Even dynamic analysis wouldn't work in case the thing uses a timebomb or even something simple like downloading data from the internet (apps do that right), and sending in a special payload that instructs the application to do something evil once you reach, say, 1,000 users. Apple wouldn't test your application by installing it on a thousand phones, would it?


> Doesn't Objective-C have dynamic binding? I'm sure you can determine what function to call at runtime,

You need a message name (@selector), and the names are strongly typed -- meaning, they're pretty obvious with reflection tools. If there's a way to send a dynamic message without putting a @selector in your code, I don't know it, and I'm willing to learn.

When you start talking about the POSIX layer and stuff near the bottom (C), traditional wisdom applies there: what does your app link against? If you're linking against the dynamic libraries near the bottom of the stack and walking their contents (to avoid putting a string in your binary of what you're looking for, perhaps?), Apple's probably going to check that disassembly pretty heavily.

> Even dynamic analysis wouldn't work in case the thing uses a timebomb

You keep on writing time bomb as if it's some magical device that circumvents all security. A time bomb needs a callback in the binary, and Apple's going to wonder why your app registers a timer for a specific date. This is what you don't seem to get: Apple has a disassembly of your entire binary, and they can see when they run it that you register a callback for December 2012. Where does that callback go? Code in the binary.

Same thing with your 1,000 phones case: clearly something needs to count, and the obvious candidate is a Web server of some kind, and then the app needs to actually do something if the response comes back as 1,000. Which means that code needs to be in the binary.

Or you need to download code from the Web server to execute. Which is easily detectable by Apple, and you'd never get approved.

> something simple like downloading data from the internet (apps do that right)

You will be rejected if it's used as any part of execution, and they can (and do) check that. If you even touch APIs that try to sneak data into executable pages, I bet they'd terminate the app in record time.

Trust me, they've thought this through. The reason that I asked if you're speaking from experience is because you're making a lot of FUD claims which are easily fixable. Seriously, buy a Mac and try doing something malware-like with the iOS API. Then try submitting it to Apple. Otherwise, you and I are both bags of hot air, theorizing about hypotheticals.


"You need a message name (@selector), and the names are strongly typed -- meaning, they're pretty obvious with reflection tools. If there's a way to send a dynamic message without putting a @selector in your code, I don't know it, and I'm willing to learn."

sel_registerName() translates a C string to a SEL at runtime [1].

[1]: http://developer.apple.com/library/mac/#documentation/Cocoa/...


I find it pretty hard to believe the iOS review monkeys have the technical ability to reason about the disassembly of a binary (if sufficiently obfuscated there are maybe 1000 people tops on Earth with the ability to do this).

As long as the operating system has a dynamic linker all bets are off wrt. static analysis, and the halting problem definitely applies to automated analysis.

If the OS allows writable pages to ever be executable then you can pretty easily hide your nasty code in the low order bits of an innocent looking data file (say an image), then pull them out and execute them at runtime.
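
A rough sketch of the extraction half of that idea, in plain Java just for illustration (the file name and payload length are made up; actually executing the recovered bytes is the step the platform tries to block):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    // Hypothetical sketch: recover a payload hidden in the least-significant
    // bit of each pixel of an innocent-looking image.
    public class LsbExtractor {
        public static byte[] extract(File image, int payloadLen) throws Exception {
            BufferedImage img = ImageIO.read(image);
            byte[] payload = new byte[payloadLen];
            int bit = 0;
            for (int y = 0; y < img.getHeight() && bit < payloadLen * 8; y++) {
                for (int x = 0; x < img.getWidth() && bit < payloadLen * 8; x++) {
                    int lsb = img.getRGB(x, y) & 1;        // low bit of the blue channel
                    payload[bit / 8] |= lsb << (bit % 8);
                    bit++;
                }
            }
            return payload;
        }
    }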


iOS doesn't allow writable pages to be executed. As I recall that was one reason why Android 2.2's V8 javascript JIT was so much faster than iOS's.

Also, use of dlopen() is not allowed. A long time ago I heard of two AR apps that used it (they dynamically linked against CoreSurface, to copy the bits from the camera preview into an opengl texture) but I haven't heard of anyone sneaking an app using dlopen() into the store in over a year.


hence "to ever be executable", they don't need to be writable and executable, as long as at one point they are writable (for you to dump your code in), then at some point they're executable (writable or otherwise).

as for dlopen(), you could just compile that directly into your app rather than using it from libc/libdl, bypassing that limitation entirely.


On non-jailbroken iOS you can't mark a section of memory as executable once it has been writable (and vice-versa, apparently). Executable pages are also checked against a signature by the kernel before they're used.


That's pretty incredible; I'm genuinely surprised.

I suppose it's possible when you start from scratch; there's no way they could do that on the Mac.


There are thousands of applications in the App Store that implement a 'time bomb' without reading the clock. If you hide your payload in the code that runs level X of your executable, or that runs when a user gets the 'completed 100,000 lines in Tetris' achievement, Apple's manual testing will not find it.

For the same reason I doubt that 'insufficient code coverage' can be grounds for not accepting a binary into the store.


You need a message name (@selector), and the names are strongly typed

What do you mean by strong typing here? AFAIK ObjC has dynamic binding, which means you can send the same message to a different object based on a condition. So from what I see, you can pretend you're sending a message to an internal object, but then switch the object out for an external one later.

A time bomb needs a callback in the binary

Nope. You read the current time in at startup, to, I don't know, display to the user, then at some later point, after enough obfuscation and misdirection, innocuously check if the number you got back was past 1335830400.

and then the app needs to actually do something if the response comes back as 1,000. Which means that code needs to be in the binary.

But the response won't be 1,000. The response will have lots of data you'd send otherwise, then an innocuous-sounding string like, I don't know, "true" or something, tacked on at the end, and you'll have a check for the end of the string being "true" buried somewhere deep within your code, which is where you'll switch the object out.
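
A minimal sketch of the gate described in the last two paragraphs; the 1335830400 cutoff comes from this thread, everything else (the method name, the "true" suffix convention) is invented for illustration:

    // Hypothetical sketch: no timer callback, no obvious trigger - just an
    // ordinary clock read and an ordinary server response, combined somewhere
    // deep in the code after whatever obfuscation the author cares to add.
    public class SleeperGate {
        public static boolean shouldMisbehave(String serverResponse) {
            long nowSecs = System.currentTimeMillis() / 1000L;
            boolean pastCutoff = nowSecs > 1335830400L;               // ~May 2012
            boolean flagged = serverResponse.trim().endsWith("true"); // innocuous suffix
            return pastCutoff && flagged;
        }
    }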

Seriously, buy a Mac

Sorry, I don't care enough about the issue to devote the large sum of money it would take to buy a Mac, or my time. I don't even own a smartphone, and I know I'm never going to buy an iPhone. I'm not even a security researcher, just someone who knows a little about programming languages.


I would like to see you qualify that comment with fact.

Apple does not examine full source code, but they do watch network traffic and examine API calls. At least they do something.


You can slip the malware in as a timebomb. It's not exactly as easy as it is with Android or an open platform, but it is almost as easy.


Why aren't we seeing it happen, then?


It's much easier to spread FUD on the internet than to actually do it.


How do you know it isn't already happening?

Do you monitor the outgoing traffic from your cellphone?


> How do you know it isn't already happening?

There is no evidence that it is happening, with plenty of security researchers and interested amateurs keeping their eyes open for it. There's nothing special about iOS that prevents you from discovering this sort of app behaviour that isn't present on Android.


The threat is not theoretical. Several iPhone apps have been pulled from the App Store after being found to be harvesting user data, intentionally or unintentionally. A game called Aurora Feint was uploading all the user contacts to the developer's server, and salespeople from Swiss road traffic information app MogoRoad were calling customers who downloaded the app. Game app Storm8 was sued last fall for allegedly harvesting customer phone numbers without permission, but it later stopped that practice. And users also complained that Pinch Media, an analytics framework used by developers, was collecting data about customer phones.

http://news.cnet.com/8301-27080_3-10446402-245.html


There is no evidence that it is happening

You mean other than seeing it happen in the biggest similar ecosystem?

There's nothing special about iOS [...] that isn't present on Android

Exactly. So why should iOS be different with regard to malware then?


> You mean other than seeing it happen in the biggest similar ecosystem?

It's being noticed in the biggest similar ecosystem, too, so by that logic it should be noticed in both if it is present in both.

> Exactly. So why should iOS be different with regard to malware then?

The Apple review process is present in iOS. The process to market is markedly different.


It's being noticed in the biggest similar ecosystem, too, so by that logic it should be noticed in both if it is present in both

Sorry, but how does discovering one instance of malware in the android market imply that any instance in the iOS Store will be discovered at the same time? Is there some sort of quantum-link that I'm missing?

The Apple review process is present in iOS.

I was told the Apple review process does not involve a full code analysis. And even if it did, malware authors are known to be quite creative in hiding their payloads.

Apps you have installed might or might not already contain shell-code embedded into seemingly innocent images or assets, with very little chance of detection.

I'm not a security researcher or blackhat. But under the premise that you can (afaik) not root a phone without the user noticing, my strategy for pulling off an attack would be a sleeper-strategy. I'd first seed my payload silently, and then pull the trigger all at once, at some point in the future.

Moreover, considering there has been a one-click safari jailbreak[1], you may not even need to embed actual malware in an app. It may be enough to be able to remotely instruct the app to load a specific URL at your command - now how's that for an attack vector.

So, technically, there is no difference between doing this on Android and doing it on iOS.

If you still want to claim otherwise then you should come up with a better argument than "but apple has a review process!".

[1] http://lifehacker.com/#!316287/jailbreak-your-iphone-or-ipod...


> Sorry, but how does discovering one instance of malware in the android market imply that all instances in the iOS Store will be discovered at the same time?

Twofold: this is not the only instance of malicious software on Android, and I never made the claim that all instances should necessarily be immediately found - just that, if it's as easy to slip in as the OP claimed, SOMETHING should've been found by now.


SOMETHING should've been found by now

Well, I'm working about as hard as PG. No, actually I work much harder. I SHOULD have found the one startup-idea by now that takes off and makes me as wealthy as him!

Notice the flaw in your reasoning? There is no correlation.


Finding a great startup idea and detecting malicious software are vastly different things.

If inserting malware into iOS is simple, it would be done, and done widely. If done widely, the chances are very good that someone would've detected it in at least one such application.


Finding a great startup idea and detecting malicious software are vastly different things.

Oh, you think so? Both are a function of skill, heuristics, sweat - and a great deal of luck.

If inserting malware into iOS is simple...

I'm not sure how I could make it any clearer, perhaps look at some of the other threads on this article?

So I'll just repeat:

   iOS is not different to Android with regard to malware.
Long version: The difference is so small as to be negligible.

I'm not sure I understand why that is such a bitter pill to swallow for some people.


> iOS is not different to Android with regard to malware.

Then why is malware being identified on Android but not iOS?


Erm, actually malware is being identified on iOS as well;

http://news.cnet.com/8301-27080_3-10446402-245.html


None of those appear to fit the malware definition.


This is one thing I've been wondering about: how is it that they don't know every single API call the executable is linked to?

I know that Objective C and messaging is different from function linking in some fashion, but certainly there must be a way of determining if disallowed APIs are ever called, at all, without just using the app and hoping you trap them.

I think at the very least they should be able to examine the executable for object types used, and function signatures used, as well as determining what signatures are passed to which objects.


> This is one thing I've been wondering about, how is it that they don't know every single API call the executable is linked to?

They do. I used an old example from the Internet, and that API was now private; Apple rejected the app and included the name of the API that I wasn't supposed to use.


But you were being honest in your use of the API.


Are you speaking from experience?


No, from logic. I don't have a Mac to develop on, sorry.


I think you're both right in different ways. The positions "if it can happen it has or will" versus "yes but there is no evidence so it probably hasn't or won't" both have merit and are not explicitly in conflict. But I'm reminded of that quote:

"In theory there is no difference between theory and practice. In practice there almost always is!"


So how is that more dangerous?


It gives users a false sense of security. The average user is far more willing to trust an arbitrary iPhone application than an arbitrary Windows application, do you not agree?


No, I don't really. I don't think those with a high level of technical intelligence will be affected by wall or wall-less, while those with less than moderate TI probably don't even realize whether a marketplace has an approval process or not. So it's equally dangerous, but I don't think more so.


You have people talking in this very thread about a walled garden being better:

http://news.ycombinator.com/item?id=2279823


A walled garden is unquestionably better. It isn't fool proof but it's the same situation as security: you can't make your system completely secure but you can make it more secure than the next guy. The Android store is the next guy.


Not to mention that the Underhanded C Programming contest shows how you can have malicious code hidden in plain sight. Even doing a source code audit wouldn't be guaranteed to reveal exploits.


It's not so hard to detect exploits - usually all you have to do is monitor system calls - and it's more efficient than looking at source code.

That said, I don't think Apple does any kind of audit that targets exploits; they seem to rely only on reviews.


That's assuming the malicious syscalls actually fire while Apple is reviewing the app. An iOS malware author would likely do everything possible to prevent the payload going off during the review process; it might be as simple as checking the system time and only launching the obfuscated malicious code a month or two after submission.

Disabling the payload under circumstances that put the perpetrator at risk of being discovered is a very common malware tactic (see Conficker disabling itself if it detects it's being run in the Ukraine, for example).


That's assuming that Apple doesn't think about serving different GPS-coordinates, different IPs, different device IDs, different system time.

And this could very well be completely automated - you just need someone to design a map of gestures / time intervals for simple workflows of the app, and then let the system run the app and interact with it for every possible configuration.

Of course, the malware author could then try different tactics to detect if the app is running inside a virtual machine, and not on a real device. But that's as hard as detecting a well-behaved / modern root-kit, and that also implies certain sequences of system-calls that can be detected.

Virtual machines do this all the time, i.e. detecting illegal operations.

Not only that, but it doesn't have to be accurate - it just has to raise a red flag in some intranet bug-tracker that such-and-such apps need closer inspection. It also makes it much harder for malware authors, because instead of searching just for an iOS / Android exploit, now they also have to game this approval process.


interact with it for every possible configuration

This is practically impossible. You could randomize the input for years and people could still get around it - e.g. just make a web request to your website and, depending on the reply, do something nasty. You could never catch that with this sort of automated testing.


Hyperbolic nonsense as explained here: http://news.ycombinator.com/item?id=2283338


This was inevitable - Google takes a laissez-faire attitude towards copyright violation on Android market, which is full of ripped-off IP - games with names and artwork belonging to other companies, rip offs of Rolex logos for clocks, etc - that I've personally reported and that have always been ignored by Google.

If they don't care about the small stuff - and it seems they don't - something nastier was always going to come along.

Hell if they'd bothered to notice one of the submitted apps was 'Spider Man' perhaps this would already have been averted.


The way it is supposed to be done is with a DMCA complaint, and I don't think you can do that unless you own the IP in question. Or are you saying that they are ignoring DMCA complaints? That is a much more serious accusation.


Android Market has a 'flag' button that lets you specify, eg, that 'Wolverine' isn't anything to do with Marvel but uses Marvel characters and artwork. They ignore this and similar copyright infringement flags.

You have to own the content to file a DMCA complaint.


The fact that they're collecting IMEIs is interesting. One of the little discussed facts about smartphones is that they make it trivially easy to change the IMEI.

On the Galaxy S you can simply mount the NVRAM where it is stored as r/w and change it, and any other data you want.

For those who don't know, the IMEI number is what physically identifies an actual handset, like a MAC address, except that the networks/authorities view it as a more watertight way to identify someone, as up until recently changing it required a soldering iron.

This malware's behaviour implies what I have suspected for some time, that there is a black market for IMEIs, likely being used for organised criminals to remain anonymous, or to enable the resale of stolen handsets.

Anyone fancy taking a guess at what an IMEI is worth on IRC these days? CC#s are meant to be about $0.10 each aren't they?
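
For context on how little effort the harvesting side takes: on 2011-era Android, any app holding the READ_PHONE_STATE permission can read the device identifier in a couple of lines. A minimal sketch (not the malware's actual code):

    import android.content.Context;
    import android.telephony.TelephonyManager;

    // Minimal sketch: reading the device identifier on Android 2.x. Requires
    // the READ_PHONE_STATE permission, routinely granted at install time.
    public class DeviceIdReader {
        public static String read(Context ctx) {
            TelephonyManager tm =
                    (TelephonyManager) ctx.getSystemService(Context.TELEPHONY_SERVICE);
            return tm.getDeviceId(); // IMEI on GSM handsets, MEID/ESN on CDMA
        }
    }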


Haha, I don't know, it's always been easy to get a new IMEI. Go to one of those "cell phone support forums", advertise an unlocking service, ask for the IMEI, receive it in your email. With a little more work you could monetize it, but botnets for CC#s are probably a lot easier, and 419s provide even more pay for the effort involved. But then again I'm not a kid in some 3rd world country looking for a quick buck, so who knows, Android apps could be a really interesting proposition.


Or it could just be that they're collecting them because they can? It's a nice unique identifier they could use.


There always was a black market for IMEIs.

It's just easier now to make up an IMEI - but how soon before the network infrastructure rejects that new IMEI? That would be the key to whether the new IMEI has black market value or not. Obviously you cannot have two identical IMEIs on the MO network, and there are also several other issues...


I think that this illustrates a point that's worth making about the difference between Android devices and iOS devices. There is no 'better' - there is a tradeoff. This is the same tradeoff that Linux elsewhere offers, really.

This is the tradeoff: "if you are willing to invest your time, mental energy, and vigilance into bending your device to your will, avoiding traps such as mention in the article, and doing upkeep, a Linux-based device will give you enormous and awesome capabilities, leveraging the full power of having a general-purpose computer in your pocket."

I am glad that that's available, because for some people, that's a great tradeoff! But it's important that there be another tradeoff available, which iOS is currently the flagship of: "if you are willing to accept more limited capabilities, you can have those capabilities in a form that Just Works and does not require your vigilance, time, and mental energy."

I propose that the market shares of Android and iOS roughly reflect the number of people for whom each of these tradeoffs is the one they want. Of course we may want different things at different times - but when we buy these devices, we are voting with our dollars as to which tradeoff, overall, serves us best. I hope that both retain vigorous market share, because different people are best served by different tradeoffs.


Coincidentally, this is exactly the kind of thing Google's recent Zynamics acquisition is meant to find automatically through binary code analysis.


If I were Google, I'd have a giant server room full of VMs running Android, hosting every app in the app store, with monitors of their state. Then you can monitor not just what the code is predicted to do but what it actually does.


"as well as remotely removing them from user’s devices"

...well that is an interesting feature!


This worries me more than the malware.

I don't own an e-book reader because books that I purchased could be deleted without my consent.


This is surprising. Does anyone know if Google can actually do this? Maybe provide a reference.


They said they could and would do it, just as Apple and Microsoft have said they could do it.


Nokia can't do it, but people don't like their Symbian UI. Those are the priorities, I guess.


Completely missing from both the article and the comments: the list of permissions these malicious apps requested. I'd really like to know whether they found a way around the permissions model, or if this is a case of users clicking 'OK' to a prompt that says "do you want to let this app root you?"


The apps included root exploits. The thread on the Android subreddit has lots of details.


Apparently, the modified binaries use a local root exploit, so they probably only required enough permissions for the local root exploit to work.


I am for openness of the Android platform. I don't believe that a review process, like Apple's, would be beneficial. However, the Android market is not like the web. On the web, if you are selling a downloadable piece of software, and I come along and buy a copy, break your DRM and start selling it myself, you have a surefire way to get me to stop. What you can do is talk to my payment processor and tell them I am committing fraud. They don't like that, and will shut things down. While the law is fuzzy and I am innocent until you can prove I am ripping you off, the payment processor has a contract with me that says that I will not do anything illegal, and they will enforce it.

Now, on the Android Market, there is effectively, only one payment processor: Google. And they have a conflict of interest, since they also own the platform. They can either police all the apps (a la Apple) or they can ignore all fraud reports. The in-between gets them into a lot of grey territory about what constitutes fraud, etc.


There is one thing everyone is missing: Apple does have a process for verifying the identity of the author, so even if malware were slipped in, it would be easy to identify the author and report them to the authorities. That in itself is a good deterrent against trying. Remember, malware writers want low-hanging fruit first, and Android is extremely ripe for it.


I found this: "Update: holy cheeseballs, they've been pulled already! Took less than 5 minutes from first contact to pull!" utterly amazing. That goes against ... well, pretty much everything I've ever heard about Google's speed and ability to respond to humans saying something.

Too bad it was over a security incident, but at least it proves that Google can react.


Actually, reading the thread on the Android subreddit, one of the affected developers said he filed complaints over a week ago with no response or action taken. That is, until it reached the (Android sub?)reddit frontpage.

Google doesn't really do as much customer-service as it does PR-management, and that seems to be the way Google usually handles these things. Very few problems are addressed before they turn into high-profile cases of bad publicity. And really: That is pretty bad.

Let's not let Google-fanboyism blind us from the facts: Google is one of the worst companies on the planet when it comes to customer-service.


Very few problems are addressed before they turn into high-profile cases.

Really? How would you tell if that wasn't true?


Good question. If I should answer it 100% honestly, I would say I've reached that conclusion through very unscientific means: personal experience and a possible "positive" confirmation bias.

Examples of this would be me having issues with Google services or applications, Googling for answers and then finding forum-posts on the Google forums discussing the problem.

These posts would mainly consist of 3 things: 1. Original poster asking for advice on a problem he has. 2. Other people experiencing the same problem, stating so and in some cases offering workarounds of various quality and reliability. 3. A complete lack of response from Google's end. No confirmation that it is a known bug with or without workarounds or a confirmation that this is a bug and that it is being investigated. Nothing.

These threads, when I find them, tend to be at least a year old if not more, with a staggering number of people experiencing the same problem, and all this with the issue at hand remaining unfixed. To quote a comment I read in one of those threads: "It's pretty obvious that Google doesn't listen and Google doesn't care".

Moving back to your question: "How would you tell if that wasn't true?". I guess in these cases I actually wouldn't, so it is indeed a good question.

But I know that for all the issues I've had, I still have them and they are still entirely unaddressed by Google. Reporting problems to Google feels like shoveling data into a write-only device, and I can't think of any company I've dealt with which has had such a complete absence from its customers when it comes to the products and services it offers. None.

Things like reliable customer service are why companies like to send their money towards Microsoft: with Microsoft you may have Microsoft costs, Microsoft problems and Microsoft quirks, but there is a huge support apparatus ready 24/7/365 to help you get your problems solved. To put it bluntly: customer support is not an "I'm feeling lucky" button.


> Actually, reading the thread on the Android subreddit, one of the affected developers said he filed complaints over a week ago with no response or action taken. That is, until it reached the (Android sub?)reddit frontpage.

Well, he complained about copyright infringement of his app, not that the app was malware. I think, if anything, it speaks well of Google that they obviously have a way to immediately sort and expedite high-priority issues.


I have a Hacker News posting on this: http://news.ycombinator.com/item?id=1432193


Google needs to do something. They absolutely do not need to mimic Apple's policies or impose any radical new limitations but the current situation is starting to spiral out of control. Once this starts hitting 'normal people' it could have massive repercussions. They need to stop this before the average person fears installing software onto their device or the entire third party ecosystem is going to be stunted. When you think about the depth of personal information, location awareness, microphone, camera, etc the ramifications of this are huge. The first virus that leaks SMS messages, owns Facebook accounts, or turns on cameras to spy on people could basically end Android as a viable consumer product. Of course we've seen this all before on PCs but I think the stakes are higher these days especially with devices that are so personal and highly connected.


If someone who knows a lot about these things has a few moments to spare, I'd love if you could answer http://www.quora.com/iOS-vs-Android/What-are-the-relative-me... on Quora.


Never mind walled gardens, but Google could at least run some basic virus scanner before posting stuff on the market.


Smartest post on this entire thread.

They clearly analyze the APK before publishing it and look at the manifest, etc., so why not do a rootkit scan of some kind?


I saw that there was a rootkit for Android botnet C&C presented at Shmoocon. Is this a similar attack, or something different? http://www.grmn00bs.com/2011/01/30/smartphone-code-release-f...

PDF: http://www.grmn00bs.com/Shmoocon2011_SmartphoneBotnets_Georg...

Video: http://vimeo.com/19372118


Actually, the fact that the malware contained a root exploit is something that can be fixed sooner or later.

The next insight will be that even if the sandbox had worked, this type of attack would still be possible: using the user's trust in the brand of a well-known app to turn the permissions granted to it to malicious ends. There's no easy way to avoid that up front automatically.


So, was there any app-scanning software capable of detecting this before the warning? I have Lookout installed on my droid, but I have always wondered about its effectiveness. It has a "there are no crocodiles, so it must be a great crocodile repellent" feel to it...


I know it's probably off-topic, but would it have killed them to link to the original reddit thread?

They were more than happy to copy the whole story and quote verbatim from the thread. Strikes me that a direct link would have been the polite thing to do.


For developers who need to provide mobile access to their hosted apps, this event makes it easy to decide on a path with regard to the "native apps vs. web apps?" question.


Except that as discussed, there are plenty of code execution flaws in mobile builds of webkit that will do just as well as any flaw in the native app API.

One XSS or SQL injection vuln in your webapp and your users could be just as rooted as the victims of this malware. Exactly like in the desktop browser world.


As far as I know - I was at a security conference in January - there is no code signing and no sandbox on Android. That is not a good scenario for "security".


Wrong and wrong. Sandbox overview here: http://programming4.us/mobile/1290.aspx

Code signing is involved, to verify that particular apps have the same source, and that updates have the same source as the original app. (That's about all it's good for.)

Briefly: each signing key gets its own UID on the phone. Unix file permissions are used to restrict access to files, devices, and other facilities. In some cases, kernel hacks are required to make this possible; the Android kernel only lets you open a TCP socket if you're a member of a particular group.

If there's a local root exploit, you can still bypass all of this --- but the same is true on iOS.
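
To make the UID sandbox concrete, here's a tiny sketch of what happens when one app pokes at another app's private storage on a non-rooted device (the package name and path are just examples):

    import java.io.FileInputStream;
    import java.io.IOException;

    // Illustrative sketch: each package runs under its own Unix UID, so this
    // read fails with a permission error unless the process is running as root.
    public class CrossAppReadDemo {
        public static boolean canReadOtherAppsData() {
            try (FileInputStream in = new FileInputStream(
                    "/data/data/com.example.otherapp/files/secrets.txt")) { // example path
                return true;  // only reachable with root (or a broken sandbox)
            } catch (IOException denied) {
                return false; // the normal case: permission denied by the kernel
            }
        }
    }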


What good is code signing if you don't know whether "Myournet" is a trustworthy source of "Hot Screaming Japanese Girls"?


You are absolutely wrong. All apps in the Market are signed, and apps are sandboxed by running them with separate Unix userids.


Okay I am wrong.



