Debunking the Google Interview Myth (technologywoman.com)
126 points by BarkMore on Jan 4, 2011 | hide | past | favorite | 88 comments


From anecdotal but first-hand experience (went through a couple of interview rounds), Google interviewers indeed don't ask the questions on that list. But the questions they do ask are hard and often academic - as in, you have likely never run into them in a real-world scenario before and you likely never will.

At first I was like 'wow, Google must be populated with demi-gods'. Then I spoke to some Googlers off the record and it turns out that those interview questions have little to do with the reality within Google.

Not that Google isn't a company lousy with smart people. They are.

And it's not that Google doesn't work on hard problems. They do.

It's just that the interviews are an extremely efficient dud filter, probing you about stuff that even at Google you won't be working on more than 5% of the time (if at all).


probing you about stuff that even at Google you won't be working on more than 5% of the time (if at all).

In any highly skilled profession, 95% of the time someone with very little of your skill could do your job. The valuable part is that you can also be counted on for that 5% as well.

Most of my time at Google hasn't involved any substantial theoretical work. One time, though, I did have to come up with an algorithm for computing connected components that could run in reasonable parallel time in MapReduce. The resulting algorithm ran in log n parallel time, passing n log n messages over its lifetime. (This is probably in the literature somewhere, considering that a coworker and I had to solve it for unrelated reasons and independently came up with the same thing.)
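To make the flavor of that problem concrete, here is a toy sketch of one well-known approach, iterated min-label propagation, where each loop iteration stands in for one MapReduce round. (The commenter's actual algorithm isn't given; this simple variant converges in O(diameter) rounds, while log-n-round variants like hash-to-min take more care.)

```python
# Connected components by min-label propagation: every node starts as
# its own component, and each "round" every node adopts the smallest
# label among itself and its neighbors. When nothing changes, nodes
# in the same component share a label.

def connected_components(edges, nodes):
    nodes = list(nodes)
    label = {v: v for v in nodes}            # each node is its own component
    neighbors = {v: set() for v in nodes}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    changed = True
    while changed:                            # one simulated MapReduce round
        changed = False
        new_label = dict(label)
        for v in nodes:
            m = min([label[v]] + [label[u] for u in neighbors[v]])
            if m < new_label[v]:
                new_label[v] = m
                changed = True
        label = new_label
    return label

comp = connected_components([(1, 2), (2, 3), (5, 6)], nodes=range(1, 8))
# nodes 1-3 share a label, 5-6 share a label, 4 and 7 stay singletons
```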

I'd like to expect my coworkers to be able to do that sort of work in a domain that they're familiar with, because it takes certain problems and moves them across the boundary from impossible to possible. Working at Google scale means that you'll inevitably have to solve some problems that have never been solved before, at least until a lot of universities find it worthwhile to build a thousand node cluster for research purposes.


I'd like to believe this story (haven't found an authoritative source yet), which I once heard from a professional speaker, about the FedEx main processing facility in Memphis.

One day it completely stopped dead - usually it is controlled mayhem with the packages and machines running - the silence was deafening - thousands of dollars lost every minute. They brought in the best expert they could find. He investigated and went to a single box in the plant, opened a door, and turned one bolt with a wrench and everything restarted. The plant came back to life. He sent them a bill for $10,000. When FedEx protested the bill - "You just turned a single bolt! Anyone could have done that" - he itemized it and sent it back. They paid him.

  Turning a bolt                $    1
  Knowing which bolt to turn    $9,999



Unfortunately, this is likely just a modified version of an (almost certainly also fictional) anecdote about Picasso. I've heard this one from several different places:

A woman walks up to Picasso, later in life, as he sits at a cafe table. "Could you draw me a sketch?" she asks, thinking she'll make a quick buck. "I can pay you for it."

Picasso shrugs, says "sure", and quickly scratches out a little something on a napkin. "That'll be $10,000," he says.

"$10,000! But it only took you a few seconds!"

"Just the drawing of it took a few seconds," Picasso replies. "Learning how took my entire life."


That sounds like a paraphrase of a quote from Whistler: http://en.wikipedia.org/wiki/James_Abbott_McNeill_Whistler#R...


I'd like to believe this story

In the Russian versions of the story, the expert usually hits the object (a server, or the machine that stopped working) with a huge hammer in the carefully identified spot. The result is the same though - the system resumes working as intended.

But, well, that's just a cultural difference in how things usually get fixed.


Actually, you get the hammer version in automotive maintenance examples, and also if you are talking to an older generation, for whom hitting stuff actually worked much better.


I grew up in a steel town and the version of the story there involved whacking with a hammer.


One of the nicest variations on this urban legend that I'm aware of:

A woman walks in to the studio of Yves St. Laurent:

"Oh Yves, you must help me, I have to go to a ball and I need a hat that's both original and chic..."

Yves sets the lady down on a stool and proceeds to drape ribbons around her head.

After about 20 minutes of this he pronounces the result done.

The woman asks for the bill and is presented with a 2500 Francs invoice.

"What is this? You charge me 2500 Francs for this?"

Without a word Yves St. Laurent unwinds the ribbons and stuffs them in a small plastic bag which he hands to her.

"The ribbons my dear, you can have for free".


The fashion version may date back to Nancy Mitford's _The Pursuit of Love_ or _Love in a Cold Climate_, but a) with reference to Schiaparelli, and b) with the valuation mentioned by a disinterested bystander.



I actually know one that is true. The Air Force back in the 60's brought some old piston engine transports back into service. The mechanics could not get the engines to deliver the power to spec, they read the manuals, and did everything they could think of, no dice.

Finally, the AF hired some old mechanics out of retirement who used to work on them, and they promptly had them running to spec. Turns out there's a lot on how to tune them that was never written down.


Thanks, that's a great story. I will have to quote it (in context and if relevant) next time instead of the silly apocryphal parable above.


Trivially untrue. FedEx never would have protested a $10K bill for that act.


I think part of having the academic background is that it gives you the confidence to be casual about certain problems. When Google first demonstrated search suggestions, someone I know who is definitely a good programmer was very amazed that it was possible to make something like that run fast enough, across the internet, no less.

But, if you have learned a little about indexing and data structures, you'll think "Yeah, you can probably generate k suggestions in something like O(k log n), and you can probably predict the next few characters that will be typed in, so that a lot of data could be cached on the client. This doesn't seem impossible", even if you can't come up with the exact algorithm right then and there.
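As a rough illustration of why that back-of-the-envelope estimate is plausible (a sketch, not Google's actual implementation): with a sorted dictionary, a binary search jumps to the first word carrying the typed prefix in O(log n), and the next k suggestions sit right after it.

```python
# Prefix suggestions from a sorted word list: bisect to the first
# candidate in O(log n), then scan forward while the prefix matches.
import bisect

def suggest(sorted_words, prefix, k=5):
    i = bisect.bisect_left(sorted_words, prefix)   # first word >= prefix
    out = []
    while i < len(sorted_words) and len(out) < k:
        if not sorted_words[i].startswith(prefix):
            break                                   # past the prefix range
        out.append(sorted_words[i])
        i += 1
    return out

words = sorted(["goat", "google", "goose", "gopher", "grape", "apple"])
suggest(words, "go", k=3)   # the first three words starting with "go"
```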

That means you can design something like suggest or instant without getting stuck on whether it would run fast enough.

Or, conversely if the problem calls for solving an NP-hard problem, then you should get stuck on whether it can run fast enough and whether there is some way to approximate the answer instead. (Did anyone else check to see if you could get Google Maps to solve Traveling Salesman when they first added intermediate destinations?)


I'd call it more of a "people we are really, really, really sure are what we consider smart" filter rather than a "dud filter", considering they want to find the top M% rather than remove the bottom N%.


Those are the same if M = 100 - N.


For a first round phone interview for an engineering internship at Google, I was asked to solve a reasonably challenging question (I don't want to give it away, but it was algorithmic in nature and involved no code). When I got it right, the interviewer asked me to write an inductive proof of my solution and email it to him.

I like doing stuff like that so I wasn't upset in the least, but I can understand why many people would find that scary and I certainly think Google's interview process is more extreme than others'. I've interviewed successfully with Goldman Sachs, Amazon, and Microsoft (among others) so that's to whom I'm comparing them.


  That whole “Google cares about GPA even for people 
  years out of college” thing?  I supposed I can’t 
  speak for every hiring committee, but I never 
  remember my hiring committee discussing the GPA 
  of a professional candidate.  For that matter, 
  we were never even given a candidate’s GPA unless 
  he/she elected to put it on their resume.
This doesn't jibe with what I know of the process. Tom Galloway's response in the comments, however, does:

  GPA/SAT/GRE: Perhaps not in Engineering so much, but 
  a *lot* of non-Eng HR and HCs did do cuts based on what 
  school someone attended and/or their GPA and test scores. 
  And I know for a fact that even Eng applicants who had 
  been out of school for over a decade were asked for such 
  on a regular basis. There were a number of us, mostly 
  older types, who semi-actively campaigned against Google 
  asking for such from folk who’d been out of school for a 
  while as we felt it 1) was useless relative to their 
  actual job performance and 2) it was embarrassing *to* 
  Google for us to ask for such irrelevant info and gave 
  up a bad impression/rep.


Sales used to have a list of approved universities. It wasn't even a ranking - mostly a filter.


I got to the onsite, day-long interview stage at Google (about a year ago). It's been a while, but I do remember they asked some VERY real-world questions, ranging from list-related traversal questions to "how would you implement xyz." Some running-time questions, etc. The questions were challenging, but do-able, I suppose. My biggest issue was not so much with the questions, but rather with 1 or 2 of the interviewers who felt it was more of a pissing competition, always trying to undermine the answer. The rest of the people were just phenomenal -- super professional and a pleasure to talk with. In retrospect, it was the younger interviewers who were "problematic." I suppose it has to do with insecurity (I later found out that they weren't at Google for that long, nor were they such hotshots). Lastly, I believe that none of the interviewers asked for any particular coursework or GPA, but the internal Google recruiter asked for either a transcript or a flat-out GPA to send to the committee (who reads the interviewers' comments and decides whether to continue further with the interviewee or not).


Coding on the spot might seem surprising to those outside of the software industry, but it’s standard practice.

Sounds like wishful thinking to me. At least I wish it were true. I've been involved in a lot of interviews at my company, and I'm the only one who ever asks anyone to write code.


I was asked to write very little code for my current position. One of the questions was being presented with some code and figuring out what it did. I didn't realize how apropos that question was at the time.

I just got done doing a series of 10 one-hour interviews for contractors for an enterprise Java web application contract lasting about 6 months. Only 2-3 of them could write correct Java on a whiteboard for simple questions. We had three who weren't even close on Java syntax. Code quality seemed to be directly correlated with algorithm quality, in that those with better syntax also knew how to solve the algorithms best, and those with terrible syntax were way off on the algorithms.

Up until we asked these candidates to write code on the board, a few of them sounded like great hires. They could rattle on about design decisions and enterprise web application stacks, about configuration and organization. But when it came to the white board at least one of them just stood there and left it mostly blank.

Some of the positions here have had interviews where the candidate is expected to write code but only one, if even that, interviewer asked to see any code. And even then the managers don't seem too worried when I tell them the person can't code, because everyone else likes them so much.


Makes me wonder how much of that is due to people being over-reliant on IDEs for their day-to-day chores.

Akin to people no longer knowing how to find their way without a navigator in their cars.


Did you have a chance to rate the performance of the candidates that were hired that had trouble writing code at a whiteboard in the interview? Were any of them productive in a different setting?

I am more inclined to give people a chance to work at a terminal (with internet access) instead of a whiteboard.


It's rare that someone who can write excellent code on a whiteboard can't do so on a terminal. The goal of an interview, in my experience, isn't usually to find a way to let the candidate shine, but to find the best person for the position.

If the candidate draws a blank writing code on a whiteboard, that indicates they have poor whiteboard skills - which is a pretty important skill to have in a team environment.


Could you give an example or two of questions on the same level as those "simple questions"?


I know this attitude won't win very many friends, but I wouldn't accept a position where I wasn't asked to write code during the interview.

I'm going to be writing code for money and they don't want me to prove that I can beforehand? That's like hiring a "farmer" from the inner city.


I'd take a look at a candidate's github account or other open source work. That might preclude the need to ask for code during the interview.


Most of my github account is filled with trash that I start for one day, barely learn something completely new to me, sometimes a new language, then forget and move on.

If I find said thing promising, I'll usually come back some weekends later and do something more complex, but again, it's basically junk.

Oh, and did I mention nothing looks finished?

Now, take anything I code behind closed doors at my employer, or anything I contribute to some open source projects. It will shine like nothing you will ever see in my github or googlecode or such.


Good grief - you have a github account that you put code on. You've already put yourself ahead of 90% of the candidates applying for a position as a developer - don't sell yourself short.


I think you are not looking at it the right way: sure, a lot of stuff on github is crap, but if you have something good on it where you are a significant contributor, then it surely is a good sign.


+1. If I am not asked to write code in the interview, it is likely that other employees of the firm were not asked to in their interviews either. In the absence of other indicators (like open source code published by the firm), I will take this to mean that the codebase I have to work with, should I accept the position, is likely not to be stellar.


I met someone who complained about how he was required to write code to solve simple problems in an interview. He took it as an offense, saying that he was a websphere something-something expert and shouldn't be bothered with little things like coding. I wasted my time trying to explain to him that small coding questions are often the easy way to weed out the people who pretend to have technical expertise, and that these are more frequent than he'd think. I still think the coding questions are useful and I don't mind losing such a candidate.


Every interview I've gone through that lasted more than about fifteen minutes involved writing out code. <shrug>


I've been involved in a lot of interviews at my company, and I'm the only one who ever asks anyone to write code.

Would you consider the primary output and product of your company software? Perhaps you are just working in a position creating software in a non-software company.


If you can comment, I would be interested on the standard of coding that successful candidates exhibit once employed.

Personally, if I could only ask a candidate one question, it would be something that involved writing code. Too many times I've seen candidates with impressive resumes, technical knowledge, or conversation skills fail miserably to implement trivial (e.g. stricmp) functions.
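For reference, this is the scale of "trivial" meant here: a whiteboard-style stricmp, i.e. a case-insensitive compare returning negative, zero, or positive, sketched in Python rather than C (a hypothetical answer, not any particular library's implementation).

```python
# Case-insensitive string compare in the spirit of C's stricmp:
# returns <0, 0, or >0 depending on how a compares to b, ignoring case.
def stricmp(a, b):
    for ca, cb in zip(a, b):
        la, lb = ca.lower(), cb.lower()
        if la != lb:
            return -1 if la < lb else 1
    # shared prefix is equal: the shorter string sorts first
    return (len(a) > len(b)) - (len(a) < len(b))

stricmp("Hello", "hello")   # 0
stricmp("apple", "Banana")  # negative
```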


If you can comment, I would be interested on the standard of coding that successful candidates exhibit once employed.

Good ones write good code; the ones we shouldn't have hired write poor code or don't code at all. As for how to tell the difference, I pay a lot of attention to candidates' questions, comments, and attitude while writing code. That part carries over pretty directly to how they do their real job and how pleasant they are to work with. As for quality of code, my rough impression is that the more meaningful details they take care of in their whiteboard code, the more productive they are. Getting syntax and library function names right or wrong doesn't tell you much (except how recently they wrote code that used that syntax or those functions) but getting other details right does matter.

Which details matter is a matter of judgment. My attitude is that some errors are trivial but should be corrected if attention is called to them. For example, off-by-one errors in array indices are easy to miss on a whiteboard if they only affect one line of code or just make the output a little incorrect -- that's no big deal -- but when a candidate gets stuck or confused because of an off-by-one error he made a few lines up, he should be able to figure it out, correct the error, and correct any related errors.

Good things to watch out for (besides asking good questions to make sure the problem is well specified):

1. The candidate doesn't start writing a complete solution without having a decent idea where it's going. If he doesn't see a solution immediately, he takes some time to understand the problem, possibly working an example by hand, sketching out possible solutions, or talking through the problem verbally.

2. The candidate draws logical inferences about his own code. "Oops, this function signature is wrong, because in these two cases the same arguments are passed, but the output needs to be different. I need to pass in more information."

3. If the candidate takes the wrong path, he backs up as far as necessary to get back on the right track. He reminds himself of the problem statement and reevaluates his decisions up to that point.

Bad things to watch out for (besides having a bad attitude or failing to ask questions):

1. The candidate launches right into coding a solution without understanding how the problem can be solved. (Those candidates usually end up boxing themselves in, so they really better be good at backing up and starting fresh.)

2. The candidate spends a long time trying to write impossible code, such as trying to implement the body of a function when the arguments he has specified for it don't contain enough information.

3. If the candidate takes the wrong path, he doesn't back up far enough to recover. For example, he doesn't question anything he decided more than five minutes ago, so he starts to treat his initial steps in solving the problem as if they were part of the problem statement. (I have seen the opposite mistake also, where a candidate encounters a problem that is clearly local to a small part of his code, but he interprets it as invalidating his whole approach. That mistake is usually just a mistake; people are going to make mistakes under pressure. It doesn't show a lack of intellectual flexibility like the opposite behavior. Depending on the circumstances I might interpret it as a lack of persistence, but usually not.)


On the other hand, I don't understand how non-technical interviewers can tolerate not knowing how good a non-technical applicant is for the same reasons you mentioned. Do teams ever recruit athletes without watching them play?


http://www.kottke.org/08/05/gladwell-on-the-mismatch-problem

"One subject in Outliers is what Gladwell calls the "mismatch problem," which refers to his idea that the qualifications which people look for/companies require in candidates when hiring often do not necessarily have any correlation to successful job performance - and often leads to poor hiring decisions."

He's also written on the problems in scouting athletes & picking teachers in the New Yorker.


That's been my experience as well. I had one phone-interview with a west coast firm looking for a local-to-my-area contractor that included coding. But as for every other interview I've done in and from the midwest, the closest they came to 'code' was one group asking obnoxious syntax questions.

From the other side of the table, the firm I'm with now only has a pseudo-code step in the interviewing process if and when I run it. The director for my group doesn't even put the question forward if I can't make an interview appointment - even though he agreed that it was a great tool that filtered out a couple of duds who might well have been hires without it, and likely would have filtered out the one spectacularly bad fit that made it through before my time.


So this is a good post from Gayle, but I would add that the times when the committees I've been on cared to see grades are when a candidate is on the bubble between hire and no hire and they are within 2 or 3 years out of school.

So grades matter, but not forever...


Yeah, the Business Insider questions were certainly not real. I actually got several questions from family members and friends about it after that article ran. The reality is at once much more pedestrian and more interesting (if you consider algorithm or system design questions more interesting than brain teasers like I do).


I agree with the general idea of the article, but I actually got some of those exact Business Insider questions during my interviews with Google. Obviously, some of them are probably fake (like the manhole one), but there are a couple that are definitely real.

edit: typo


@andylei Yes, some do happen to be real. The point is that if several are definitely fake, why believe the rest?

For example, take this one: "A man pushed his car to a hotel and lost his fortune. What happened?"

Come on, no one was asked that. Even if they were, the hiring committee would take one look at that and throw out the feedback.

As for some of the other questions, like "How much would you charge to wash all the windows in Seattle?" (1) This question seems fishy, just because it's about Seattle. Yes, Google has a Seattle office (where I worked), but it has very few PMs. It's far more likely that this question was asked at Microsoft. (2) This is actually a very standard consulting question. It's a totally fair problem-solving question, as the accuracy of your final answer doesn't matter.

The Business Insider article is designed to be link bait - reprinting fake questions and building off the "OMG IT'S GOOGLE" stuff. The legitimate questions on this list are actually totally normal for tech and consulting companies.


In my experience interviewing at Google, the article is accurate. Granted this was a handful of years ago and things may be different now, but I wasn't asked brain teasers and I was asked to code on the white board.

Where my experience does slightly differ from the article's was their use of academic records. They didn't care about my GPA, mostly because I dropped out of college very quickly and don't have a GPA to care about, but they did care that I didn't have a college degree and used that to justify the offer I was given.


Found this incredibly interesting:

  Let's look at the very widely circulated “15 Google 
  Interview Questions that will make you feel stupid” 
  list. You want to believe these are real questions, 
  given that Business Insider feels like such a 
  reputable source. Except that they didn’t get this 
  list from a direct source. They borrowed their 
  questions from some blogger (I won’t link back here) 
  who was posting fake questions. Now, I don’t know 
  that said blogger was intentionally lying – he 
  probably borrowed them from someone else. Whatever 
  the original source is, these questions are fake. 
  Fake fake fake.


I'm not at all surprised that bloggers, then media, recycle garbage stories that get pageviews. "Bullshit laundering", as reddit recently coined it.


Google's online application asks for your SAT scores. That always struck me as odd.


Where is that? I see GPA but no SAT request, even if you apply as a current student/new grad:

http://www.google.com/jobs/application/


I just went through the application process (declined by committee) over the last two months and the application did have a space for GPA, but I was never asked about SAT or GRE scores as far as I can remember.


I seem to recall that being on a questionnaire they send you after you submit an application.


Several trading firms have asked for my SAT scores when I've applied for quant jobs. I too thought that was odd, since the only applicants are usually PhDs.


I recently got rejected by Google. Most interview questions were straight out of an algorithms book. Like write an interval tree, a hashtable, some dynamic programming question, etc. I probably failed because I misunderstood a trivial question as something far more complicated. As for the interviewers, half were quite nice but the others seemed bored and distracted. I won't try again.


I wonder about the technical knowledge of this person who's writing articles about hiring technical people.

"Explain the significance of ‘dead beef'" is a perfectly valid, if somewhat unimaginative, technical question.


Frankly, 0xdeadbeef is a rather bad value for pointer poison: it doesn't have enough leading zeros to land in the unmapped area used to catch null pointer accesses (it has a high chance of being a valid kernel address on Linux if you have about 700MB of RAM). And there are not enough zeros at the end to still recognize it if the access is at an offset within the array/structure (as it usually is).

Not catching the usage of a poisoned pointer on first access creates enormous amounts of fun - normally preventing any meaningful analysis of dumps.
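To make the poison-value idea concrete, here is a toy illustration: fill "freed" memory with a distinctive pattern, and any stale pointer that leaks into a dump can be spotted by eye or by a trivial scan. (A byte buffer standing in for real memory; these offsets and the scan are assumptions for the sketch, not any real allocator's scheme.)

```python
# Plant a poison word in a pretend memory region, then scan the
# region for it the way you'd eyeball a hex dump for 0xdeadbeef.
import struct

POISON = 0xDEADBEEF
dump = bytearray(64)                       # pretend memory region, all zeros
struct.pack_into("<I", dump, 24, POISON)   # a "freed" slot poisoned at offset 24

def find_poison(buf, value=POISON):
    pat = struct.pack("<I", value)         # little-endian 32-bit pattern
    return [i for i in range(len(buf) - 3) if bytes(buf[i:i+4]) == pat]

find_poison(dump)   # [24]
```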


Agreed - not only do I find "DEAD BEEF" a rather gross concept, but there are technical flaws, as you point out. I personally prefer the simple, form-matches-function 0x0000DEAD. Yup, that pointer's dead, baby!


No, it's not. It's a trivia question. Nobody uses such techniques any more because looking at raw memory dumps is not the right way to debug these days.


Using a "magic marker" like this doesn't imply looking at a raw memory dump (which, however, is still needed in several areas Google is involved in, e.g., kernel hacking, embedded devices). For example, I used it myself when debugging an issue where our processes morphed into un-killable zombies. I was dealing with crash dumps, walking through the process list in the kernel (it's a doubly linked list) in crash/gdb - looking at a kernel memory dump obtained via netdump from a production server that manifested this condition - so I could find a process that had acquired a specific lock, to test whether a theory I had was true. I wasn't (and still am not) at Google, and I don't even claim to have any expertise in actual kernel development: it was a bug a customer was experiencing and it needed to be fixed.

Lastly (and I am completely going out on a limb here, without any concrete information), this could also be used to gauge team/culture fit at Google. It's a good way to see if you'll be a fit for a team that spends a huge amount of time finding and fixing low-level bugs: it's fun to read about the infrastructure Google has built, but the actual process of building it may be very frustrating to some. Not being a fit for such a group doesn't imply rejection; it could simply mean you're brought back for a different round with another set of people.


If you want conformity that's fine. What would you think if your company instituted a very specific dress code? Requiring specific trivial knowledge is just as restrictive and constraining.

If you want to find engineers with talent then you need to do a lot more leg-work in plumbing their knowledge, experience, and skill than merely ticking off a checklist of shibboleths.

P.S. Personally I love hacker jargon and lore, I like knowing about scratch monkeys, core memory, the usenet cabal, and even 0xDEADBEEF, but I wouldn't take ignorance of those things to translate to ignorance of engineering fundamentals or of passion in software development.


> If you want conformity that's fine. What would you think if your company instituted a very specific dress code? Requiring specific trivial knowledge is just as restrictive and constraining.

It's not about hacker folklore, it's about a debugging technique.

No one implies that this question is asked of every candidate. That's why these lists are useless. It's also not the way I'd ask a question about memory markers (I'd ask them about how they go about debugging such a problem and how the tools they use e.g., gdb and valgrind work), but it's a fair question to ask those claiming experience with or interest in low-level development much like it's fair to ask candidates who claim experience with data mining what the kernel trick is (...but it would be misleading to ask a Linux kernel hacker that question as that would mean something else to him, much as dead beef would mean something else to a data mining guru).


Keep in mind, I'm responding specifically to the idea that "[explaining] the significance of ‘dead beef'" is a valid interview question. Have you done a survey of memory debugging experts to determine how common knowledge of "dead beef" is?

It's not a knowledge question, it's a trivia question. A shibboleth. It is neither a necessary nor sufficient pre-condition for knowledge of memory debugging techniques and it probably has about as much correlation with them as asking whether they have read The Lord of the Rings.


> It is neither a necessary nor sufficient pre-condition for knowledge of memory debugging techniques and it probably has about as much correlation with them as asking whether they have read The Lord of the Rings.

It's a hexadecimal number - that should be quite obvious. It's a notable and easy-to-notice one. Others may have used a different one (I used 0x12341234 for the task where others have used 0xdeadbeef), but it should be obvious what you'd use it for.

Is it one of the better questions? No. Is it merely a shibboleth or a piece of trivia? When asked to low-level hackers, no.


When shorn of hexadecimal connotation, as in the original question, what value is there to it? If I ask someone verbally "what is the significance of dead beef," should I put any value in their either having already learned about 0xDEADBEEF or in their ability to appreciate that it could be a 32-bit word-aligned hex value?

If someone were to change the question to "what is the significance of bad food?" would it be considered equally valuable?


'O' is not a hexadecimal digit - and even if it were "BAD F00D", that still wouldn't be a full 32-bit word - it's also not used by existing libraries (libgmalloc).

Cafe babe, dada dada would work just as well as dead beef.

In short, either you are familiar with magic numbers or you're not. If you're not, you're certainly not qualified to work on anything really low-level and may not be qualified to do C/C++ development in userland either (without strong evidence to the contrary).


No it's not. If you don't know about magic numbers you don't know how the BIOS loads the kernel into memory. Period.
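For the curious, the magic number that comment alludes to: a legacy BIOS treats a 512-byte boot sector as bootable only if its last two bytes are the signature 0x55 0xAA. A minimal check (a sketch; the byte layout is the standard MBR convention, not anything Google-specific):

```python
# A legacy BIOS boot-sector check: 512 bytes, ending in the
# two signature bytes 0x55 at offset 510 and 0xAA at offset 511.
def is_bootable(sector):
    return len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA

blank = bytes(512)                  # no signature: not bootable
mbr = bytes(510) + b"\x55\xaa"      # padded sector with the signature
is_bootable(blank), is_bootable(mbr)   # (False, True)
```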


+1 for a well-put argument.

However, I admit to asking a lot of trivia questions when interviewing candidates at Apple, partly as a culture fit, and partly because I could use them to segue into a chain of questions that dealt with the consequences of seemingly innocuous design decisions made by engineers long ago. This was something my group had to learn to foresee and deal with when we thought bad things would happen.

One of these questions had to do with 0xCAFEBABE.

That said, getting the trivia part right was a +1, and not knowing the info was a +0, so no harm if the job candidate didn't know it.

I'd also use them as a shortcut to validating their experience in an area. Again, a +0 if they didn't know the trivia, but if they do, I can move on to a different line of questions.


I'd also use them as a shortcut to validating their experience in an area

That seems very dangerous. I, for example, know about DEADBEEF and CAFEBABE as hacker trivia, yet have close to zero experience actually using those techniques to debug real-world problems.


The "right way" depends on the task at hand and the tools you have at your disposal. For any type of low-level work, being able to view, understand, and observe regions of memory is pretty much essential.
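As a toy illustration of what "viewing regions of memory" means in practice, here is a minimal hex viewer (the names and column layout are invented for this sketch, not taken from any particular tool):

```python
def hexdump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset, hex columns, and printable ASCII."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexed = " ".join(f"{b:02x}" for b in chunk)
        # Non-printable bytes are shown as '.', as most hex viewers do.
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexed:<{width * 3}} {text}")
    return "\n".join(lines)
```

In a dump like this, a distinctive fill pattern such as `de ad be ef` is easy to pick out of a sea of ordinary data, which is the whole point of choosing a recognizable magic value.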


[deleted]


When I was writing a malloc implementation for a class, I defined a value I used in place of NULL to terminate a structure sort of like a linked list. With debugging off, it actually was defined to NULL; with debugging on, it was defined to 0xDEADBEEF so that I could see the distinctive value when debugging.
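The scheme described above can be sketched in a few lines. This is an illustrative reconstruction of the idea, not the commenter's actual malloc code; the struct layout, names, and offsets are invented:

```python
import struct

DEBUG = True
# With debugging on, use a distinctive sentinel instead of 0/NULL so the
# end of the structure jumps out of a raw memory dump; the exact value is
# arbitrary as long as it is instantly recognizable.
LIST_END = 0xDEADBEEF if DEBUG else 0

def dump_words(buf: bytes):
    """Format a buffer as little-endian 32-bit words in hex."""
    return [f"{word:08x}" for (word,) in struct.iter_unpack("<I", buf)]

# A toy free list: two node offsets followed by the terminator word.
heap = struct.pack("<III", 0x100, 0x200, LIST_END)
```

Scanning the dumped words, `deadbeef` marks the end of the list at a glance, where a plain zero word would blend into uninitialized memory.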


If the candidate is bragging about the time he saved the day with raw memory debugging, I'd start with this question to confirm.


@strlen Again, some of the questions are absolutely 100% FAKE.

So if some questions (like the "why are manhole covers round" one) are definitely fake, and others seem very weird (like the dead beef one), isn't it more likely that that question is fake too?


> So if some questions (like the "why are manhole covers round" one) are definitely fake, and others seem very weird (like the dead beef one), isn't it more likely that that question is fake too?

It may sound fake to you, for a specific team you interviewed people for. But, it's an entirely reasonable question to ask for someone who claims low-level experience for a team that actually does low-level work. Use a product your ex-employer has built to search for '0xdeadbeef' and you'll see this technique is not just a piece of esoterica.

There are many kinds of software engineers: a low-level hacker may just be as puzzled by "support vector machines" (why do they need our support?!) as you are by 0xdeadbeef.

[Edit: last point is why lists like this are meaningless. This question likely gets asked by teams who do low-level work, other questions get asked by teams who do machine learning, etc...]


yeah, I've read this blog post before, and this bit has always stunk to me.

The pattern might be 0xDADADADA, 0xCAFEBABE, or 0xDEADBEEF. But it seems perfectly reasonable to ask what one might use this for. In fact, you'll find 0xDEADBEEF in the man page for libgmalloc.

(and all of the replies are about "raw memory" debugging... wtf)


Two questions pop to my mind when I was interviewed by Google:

1. Given n computers, each storing some quantity of numbers, find the median of all the numbers distributed among those computers. The catch is that no single machine can store all the numbers. (Hope this is clear.)

2. Given a text file of logs, write a threaded program that scans the logs and produces some statistics. (My answer was to create n threads, each of which reads every nth line.)
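For what it's worth, question 1 has a well-known shape: binary-search on the value range, asking each machine only for a count rather than its data. A hedged sketch, with machines simulated as in-memory lists; it assumes integer values (so the search converges exactly) and at least one number per machine:

```python
def distributed_median(machines):
    """Median (lower of the two middles for even totals) computed so that
    no machine ever ships its full dataset anywhere."""
    total = sum(len(m) for m in machines)
    k = (total + 1) // 2                      # rank of the lower median
    lo = min(min(m) for m in machines)
    hi = max(max(m) for m in machines)
    while lo < hi:
        mid = (lo + hi) // 2
        # Each machine answers a tiny aggregate query: "how many of your
        # numbers are <= mid?" Only these counts cross the network.
        count = sum(sum(1 for x in m if x <= mid) for m in machines)
        if count >= k:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Each round costs one small message per machine, and the loop runs O(log(range)) rounds, which is what makes it workable when no single machine can hold everything.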

None of those brain teasers were asked of me.


You can do useful mental exercises for Google interviews, or other interviews, by working through "puzzle questions" and similar problems. But in a world of limited time, this might not be the best use of your prep time.

A good source for the history of these questions and this interview style, as well as a bevy of sample questions, is "How Would You Move Mount Fuji?" http://www.amazon.com/Would-Move-Mount-Microsofts-Puzzle/dp/...


The best exercise is to work through ACM programming contest questions.

Here are some programming contest questions from 2005 from an Atlantic Canada programming contest http://projects.cs.dal.ca/apics/contest05/


This article does make Google's interview process more bearable. I always found the Google interview "horror stories" were a deterrent. Perhaps that is a good thing for Google since it already filters out certain people. But I always avoided even considering applying for Google because I didn't want to waste a whole day getting grilled with brain teasers and academic CS concepts that I learned 10 years ago.


Google interviewees are asked to sign an NDA regarding the interview questions, mostly so that no one can look smarter than they are by studying those specific questions. Some candidates do forget it and leak information, but since bullshitters sign nothing, the bs quotient is high.


I actually wasn't asked to sign an NDA (in Switzerland). Maybe someone just forgot that, or they realize that creating a legally binding NDA is quite hard in Europe.


Pretty sure the NDA covers confidential info you might learn about the company and upcoming projects/products/plans, not their interview methods. I don't have access to it right now, but I've posted the exact language from it in the recent past on another thread related to Google's interviews.


I've heard these types of brain-teaser questions, rather than coding or algorithm questions, are still common in the finance sector when hiring "quant" developers.

Anyone know the extent of this?


I don't doubt that hiring committees don't look at GPA. It's usually HR who does that sort of thing.


Spending time to build something useful > (is far more valuable than) spending time to prepare for something so complex that only a few people can crack it.


Exploration vs Exploitation tradeoff: http://en.wikipedia.org/wiki/Multi-armed_bandit


What about all the plugging for that interview site that suspiciously only has Google account login? Has nobody noticed that?

Also, Glassdoor has better content.


Are you referring to careercup.com? The author of this article is the owner of that site.



