GPU Password Cracking (mytechencounters.wordpress.com)
90 points by ctingom on May 31, 2011 | hide | past | favorite | 36 comments



As usual:

http://codahale.com/how-to-safely-store-a-password/

and note that "ighashgpu" works against salted passwords; salts do not prevent brute-force attacks.


Salts do not prevent brute force attacks. However...

NOT USING SALTS MAKES RAINBOW TABLE ATTACKS TRIVIAL.

Seat belts don't prevent cancer, but they sure as hell help in a car accident.


Every secure password storage scheme is randomized. None of them need explicit salts; salting is built in. If you have to provide the salt, you are doing something wrong.
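To illustrate the "salt is built in" point: a modern password-hashing API generates the salt itself and stores it inside the output string, so the caller never supplies one. A minimal sketch using Python's stdlib `hashlib.scrypt`; the `$`-separated storage format here is invented for illustration (real libraries such as bcrypt define their own encoding):

```python
import hashlib
import os

def hash_password(password: str) -> str:
    """Hash a password; the caller never supplies a salt."""
    salt = os.urandom(16)  # salt generated internally, per password
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    # Store parameters and salt alongside the digest, bcrypt-style.
    return f"scrypt$16384$8$1${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash using the salt and parameters from storage."""
    _, n, r, p, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.scrypt(password.encode(), salt=bytes.fromhex(salt_hex),
                            n=int(n), r=int(r), p=int(p), maxmem=2**26)
    return digest.hex() == digest_hex
```

Because the salt is random per call, hashing the same password twice yields different stored strings, yet verification still works.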


If you want to advocate the usage of pre-built password libraries, go right ahead. But please find a way of promoting it without adding confusion about the use of salts.


If you are adding your own salts, you are doing it wrong.

There is no secure password storage scheme that doesn't randomize.

What you are saying is morally equivalent to "if you want to use a preexisting block cipher that's fine, but don't confuse people about the need to use nonlinear substitutions". No, if you are designing your own s-boxes you are doomed. Use AES.

This isn't worth arguing about except that you strike me as one of these people that think they're doing it right because they add salts to their hashes. No, you aren't.


Let me repeat: salt is built in.


Also, if you're trying to brute-force just any hash from a long list of hashes, an unsalted hash means that you can try every word in your dictionary against all hashes at once; for Facebook's 500 million users, a sufficiently long salt (bcrypt's 64-bit salt qualifies) really does help a lot.
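This "one guess checks every account" property is easy to sketch. A hedged example (function names are mine, MD5 stands in for the "easy hash"): each dictionary word is hashed once and looked up against the whole leaked list in constant time.

```python
import hashlib

def crack_unsalted(leaked_hashes: set[str], wordlist: list[str]) -> dict[str, str]:
    """One hash computation per guess covers every account at once,
    because without salts identical passwords hash identically."""
    found = {}
    for word in wordlist:
        h = hashlib.md5(word.encode()).hexdigest()
        if h in leaked_hashes:
            found[h] = word
    return found
```

With per-user salts, the attacker instead needs one hash computation per (guess, account) pair, multiplying the work by the number of users.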


I hope by this point you have a TextExpander shortcut for that URL. It's amazing how frequently it's useful in discussions.


Just being proactive about the otherwise inevitable "that's why I use 64 bit salts" thread, is all.


Which gives me yet another thing to add as a client-side script for my new discussion site: "Your post mentions the word "salt", have you read..."


Terrific idea, force them to answer a question too before continuing. Another good test would be for 'strong' or 'weak' typing, linking to this: http://cdsmith.wordpress.com/2011/01/09/an-old-article-i-wro...


Be careful to compare apples with apples here. He told us what GPU he has, but not what CPU. Depending on the CPU, the comparison could be skewed in either direction.

A Radeon 5770 apparently uses 108W[1] when fully loaded. Newegg's cheapest non-open-box Radeon 5770[2] costs about $120. The price is the same as a 3.0-3.2GHz Phenom 4-core, and the power usage falls right between them[3].

If he has one of these (or a similar Intel), it's roughly comparable. If he has a cheap or older CPU, the result is closer than it looks. If he has one of the high-end six-core CPUs, then it's an even larger gap.

[1]http://www.tomshardware.com/reviews/radeon-hd-5770,2446-15.h... [2]http://www.newegg.com/Product/ProductList.aspx?Submit=ENE... [3]http://www.newegg.com/Product/Product.aspx?Item=N82E16819103...


Maybe because CPUs are no longer competitive. A Bitcoin miner is essentially cracking a SHA256 hash (with zeros added for difficulty control). GPUs are 20-50 times more efficient power-wise than CPUs at this task. Interestingly, the AMD architecture adds a factor of 3-5 in the mix, compared to NVidia.

http://webcache.googleusercontent.com/search?q=cache:FSUbAiI...

For a more detailed comparison:

http://webcache.googleusercontent.com/search?q=cache:9FiOofD...

Note that a PS3 Cell processor falls somewhere between a CPU and a GPU:
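The mining task mentioned above ("cracking a SHA256 hash with zeros added for difficulty control") can be sketched as follows. This is a toy version, not real Bitcoin mining: the header and nonce encoding are simplified, but the double-SHA256 search for a hash below a target is the same shape of work.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA256(SHA256(header + nonce)) falls below
    a target, i.e. has the required number of leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(hashlib.sha256(
            header + nonce.to_bytes(8, "little")).digest()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1
```

Raising `difficulty_bits` by one doubles the expected number of hashes, which is why raw hashes-per-second (where GPUs dominate) is the only metric that matters here.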


I have a Phenom X4 940BE. While Cain isn't threaded for brute-forcing and hence does just 10 million NTLM hashes per second, a threaded Cain would do about 40 million, I suppose. Optimized software like hashcat would do even more, but nowhere near 3.3 billion.


A few months back we built a 4-Tesla box for doing GPU cracking (I think for less than 15 grand total). Initially for doing WPA, but now for more general purpose cracking.

It's ludicrous how much quicker we get results. I'm hoping to start compiling some statistics, as it's been in near-constant use since January.


Is this just for proof-of-concept stuff? How do you manage to find that many hashes that need cracking?


We don't normally crack hashes for proof-of-concept; it's not usually that hard to convince a client of the perils of insecure password storage (although I guess occasionally you might need to give them a concrete example).

Generally, we crack passwords to be able to try those credentials on more secure systems (thanks to the ubiquity of password reuse).

As to why so many: when doing network tests, there's some percentage of the time when you manage to obtain a hash (it might be a small percentage), but when you multiply that by the number of tests going on in a given week, it works out that folks are lining up to use it.


I left this comment (awaiting moderation).

While I agree that GPGPUs are ideally suited for this type of thing, I think a lot of the difference you're seeing comes down to the amount of skill and effort put into the NTLM cracking functionality by the authors for their respective products.

ighashgpu is a single-purpose tool, whereas C&A does many things. My impression is that C&A is mostly used with rainbow tables (supplied elsewhere), whereas the author of the GPU tool is set on being the best.

Last time I looked into it, my impression was that a modern CPU could probably be made to run within 5-10x the speed of a modern GPU at this type of task. Faint praise, I know :-)

As OpenCL matures, I suspect we'll see code written which can be benchmarked on both. Exciting times!


ATI GPUs are extraordinarily fast at hashing (just go look at any of the serious bitcoiners), but this is why we have things like bcrypt. Power goes up, difficulty goes up, passwords remain secure.
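The "power goes up, difficulty goes up" property is bcrypt's adjustable work factor: incrementing the cost parameter doubles the work per hash. A sketch of the same idea using stdlib PBKDF2 (the iteration count stands in for bcrypt's cost factor; this is an illustration, not bcrypt itself):

```python
import hashlib

def slow_hash(password: str, salt: bytes, cost: int) -> bytes:
    """Derive a key with 2**cost iterations. Incrementing `cost` by one
    doubles the attacker's per-guess work, tracking hardware speedups --
    the same knob bcrypt exposes as its work factor."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 2**cost)
```

Stored hashes record the cost used, so a site can raise it for new passwords as GPUs get faster without breaking old ones.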


When you get into special-purpose computing devices, this is actually an argument for scrypt over bcrypt as well.


The author of ighashgpu has a lot more on this:

http://www.golubev.com/blog/?category_name=gpuprog


Don't forget with AWS offering GPU instances, this stuff is also becoming close to "cracking-as-a-service".

While perhaps unethical (to be debated), you could even create a startup in this arena just to do this.


Moxie Marlinspike did just that: http://www.wpacracker.com :)


C&A does much more than password cracking. It has ARP poisoning capabilities, and it also comes with a trojan (Abel) that can hijack computers remotely over Windows networks. For a free program, it is very powerful. Even the NSA uses it [1].

I think this also means that one-factor authentication will soon be obsolete. Of course we can keep making passwords longer and more complex, but we all know it's really the users that are the problem. People have shown time and time again that they will choose bad passwords, and to make matters worse, they will use the same password for every one of their logins.

Furthermore, this demo was done with a single card. Any attacker with knowledge and resources could easily link up a ton of graphics cards (or even use AWS GPU instances), giving them the ability to brute-force most memorable passwords. I can only imagine the scale of the massive GPU clusters that the NSA and other SIGINT-focused intelligence agencies employ.

[1] - http://www.washingtonpost.com/wp-srv/photo/postphotos/orb/as... (look under latest tool versions)


I'd argue that it's an issue of conventional password length (probably stemming from the nomenclature: rather than password, it should be passphrase). I seem to remember an article floating around a month ago on strong passwords, and how long passwords (passphrases) like "fluffy is puffy" are stronger than "s#g789@d/".

Better user education (for want of a better word) is necessary as well; something as simple as appending the domain of the site in question to the password significantly strengthens a user's online identity.

With Moore's law (and quantum computers on the distant horizon), passwords will get ever easier to crack; the next logical step is keys and certificates.


State-of-the-art password schemes have addressed the "Moore's Law" problem for over a decade now.


Forgive my inexperience, but is it trivial to obtain the hashes to crack in the first place (windows or mac)?


It's harder to get hashes than to crack them. For best results, you need to compromise a corporate Microsoft Windows Active Directory domain controller or an OpenLDAP server. Web apps are easier to compromise, but typically those passwords protect objects of lesser value. Either way, once you have easy hashes (MD5, SHA1, NTLM) the game is over.

Edit: And remember that NTLM hashes are just MD4 digests of the UTF-16 (Unicode) encoding of the password. They are not much better than LANMAN hashes.


NTLM is not difficult to crack. It's basically MD4.

Edit: I forgot to mention that I have C++ code that will turn an ASCII string into its NTLM hash here: https://github.com/16s/NT_Hashes


Are cracking programs optimized to do all 1-character passwords, then all 2-character passwords, then 3, then 4, etc.? Otherwise, how would an attacker know to only run a search on 6-character passwords?

Not to mention that a cracker would always have to compute hashes over all possible characters (alphanumeric plus symbols), because there is no way of knowing ahead of time what character set the person used.


Speaking to easy hashes (NTLM, MD5, OpenLDAP SHA, etc.)

Brute-forcing lengths one through four is extremely trivial. A single-core CPU will do all of those in less than a minute, and five (all combinations, i.e. the entire space) in less than an hour.

Modern GPUs can enumerate the entire six char space within a day or so. Some GPUs can enumerate the seven char space in a few days. Complete enumeration begins to get infeasible when you get into the eight char space. This is basically why CPUs are still viable. Speed won't get you nearly as far as intelligence will (append stuff onto the end of words). Nothing against speed here and if it was a pure enumeration race, then no doubt, go with GPUs.

Edit: When I say "entire space" I mean the entire printable ASCII character set.

Edit2: It's rather simple to calculate too: 95 printable ASCII chars to the power of the password length. So, to enumerate the four char space, it's 95^4 (81,450,625). You can "do the math" for any length password. But you'll see that when you hit eight chars (95^8 = 6,634,204,312,890,625) even GPUs (which can do billions of hashes a second) have to be smart and not just fast.
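The shortest-first enumeration described above can be sketched with the stdlib; the character set below is the 95 printable ASCII characters (letters, digits, punctuation, space), and the function names are mine:

```python
import itertools
import string

# 52 letters + 10 digits + 32 punctuation + space = 95 printable chars
PRINTABLE = string.ascii_letters + string.digits + string.punctuation + " "

def candidates(max_len: int):
    """Yield every printable-ASCII string of length 1..max_len,
    shortest first, so short passwords fall before long ones."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(PRINTABLE, repeat=length):
            yield "".join(combo)

def keyspace(length: int) -> int:
    """Number of candidates of exactly this length: 95^length."""
    return 95 ** length
```

Since each extra character multiplies the space by 95, the cumulative cost of all shorter lengths is negligible next to the longest one, which is why "try 1, then 2, then 3..." costs essentially nothing extra.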


The cracking tool John the Ripper is highly configurable (I don't know about Cain and Abel). You can avoid running a full brute-force attack by identifying the most common password patterns and generating candidates based on them, e.g. any word from a dictionary plus the current year. I blogged about this here: http://codebazaar.blogspot.com/2011/05/why-we-need-strong-p4...
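The pattern-based generation described above (dictionary word plus a year, with capitalization variants) can be sketched like this; the rule set is a tiny invented example, not John the Ripper's actual rule syntax:

```python
def mangle(words, years=range(2008, 2012)):
    """Generate common password patterns from a wordlist:
    word, Word, word2008, Word2011, etc."""
    for word in words:
        for base in (word, word.capitalize()):
            yield base
            for year in years:
                yield f"{base}{year}"
```

A few thousand dictionary words times a handful of rules is a far smaller search space than full enumeration, which is the "intelligence over speed" point made above.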


How about Chinese passwords?

  他们不了知道
  3000 ** 6 = 729,000,000,000,000,000,000
I guess Unicode handling between the client and backend isn't consistent enough yet. Maybe in 10 years or so.


How do I type that?


  ta men bu le zhi dao
Although, you'd have to visually select characters after typing pinyin, so it would be useless if someone looked over your shoulder.


Black Hat (I think, could be DefCon) has a presentation on this stuff this summer. Guess I'll make sure to attend that preso.



