
The problem isn't AI; the problem is that companies don't know how to properly select among candidates, and they don't apply even the basics of psychometrics [1]. Do they do item analysis of their custom coding tests? Do they analyse new hires' performance and relate it to their interview scores? I seriously doubt it.

Also, the best (albeit the most expensive) selection process is simply letting the new person do the actual work for a few weeks.

[1] https://en.wikipedia.org/wiki/Psychometrics


> Also, the best (albeit the most expensive) selection process is simply letting the new person do the actual work for a few weeks.

What kind of desperate candidate would agree to that? Also, what do you expect to see from the person in a few weeks? Usual onboarding (company + project) takes like 2-3 months before a person is efficient.


The candidate would be compensated, obviously. That's why it's expensive.

You don't need him to become efficient. Also, I don't think such a long onboarding is always necessary. I'll never understand why a new hire (at least in a senior position) can't start contributing after a week.


> The candidate would be compensated, obviously. That's why it's expensive

Ok... take me through it. I apply to your company, and after a short call you offer to have me spend 4 weeks working at your place instead of an interview.

I go back to my employer, hand in my resignation letter, work the rest of my notice period (2-3 months), do all the handovers, say my goodbyes.

Unless the idea is to compensate me for the risk (I guess at least 6 months' salary, probably more), I do not see how you'd get anyone other than desperate candidates to sign up for this.

> You don't need him to become efficient

So what will you see? Efficiency, independence, and being a good team player are the main things that are difficult to test during a regular interview.


And so that self-selects for people who are already unemployed, right? Most developers I know (including myself) look for a new job while still having one, so as not to create a financial hole in between. I'd be curious whether that ends up selecting for lower-quality candidates, i.e. the ones who were unemployed to begin with.

And, additionally, it encourages your candidates to keep interviewing elsewhere while they're in their probationary period with you, since they may be back to being unemployed after 4 weeks or whatever. Which creates even more potential issues if they get a much better offer while they're onboarding with you.

> self-selects for people who are already unemployed

You can say that about any form of hiring process. If you're unemployed, you obviously have more time: to spend on take-home assignments (which I hate, see another thread [1]), to add more stuff to your GitHub profile, to go to more interviews, etc.

[1] https://news.ycombinator.com/item?id=40200397


> You can say that about any form of hiring process

Yes, but there's a significant difference between spending a few hours on a take-home assignment and dropping your current employment to spend 4 weeks, potentially in another city, working full time.


Well, I didn't say it was a super practical approach, only that it has the best predictive validity :D

I'd argue the bigger expense is on the team having to onboard what could potentially be a revolving door of temporary hires. Getting a new engineer to the point where they understand how things work and the specific weirdness of the company and its patterns is a pretty big effort anywhere I've worked.

> can't start contributing after a week.

Because you have zero context on what the org is working on.


If you work with Boring Technology, your onboarding process has no reason to be longer than a week, unless you're trying to make the non-tech parts of the role too interesting.

> unless you're trying to make the non-tech parts of the role too interesting.

Unless your role is trivial to replace with an LLM, you need to understand the business. Maybe not in a really junior role, but for everything above that you need to solve issues. Tech is just a tool.


You don't have to understand the entire business to start being productive. Particularly when you have experience from other, similar businesses.

I am not sure I follow - when you hire, do you search for someone who has 100% coverage of the tech you're using and also already works for your direct competitor?

Let's say you're the hiring manager for a company that compares flight tickets, something similar to Google Flights or Skyscanner. You need three additional Rust engineers. You're located in Palermo, Italy.

How do you hire people who not only know Rust and are willing to move to Palermo, or at least visit occasionally, but also know the airfare business?

Even if you're willing to have people work remotely, in the same region, how many unemployed Rust developers who know that business are on the market? 0?


> when you hire, do you search for someone who has 100% coverage of the tech you're using and also already works for your direct competitor?

Ideally, yes. It's a common occurrence among large organisations. Google and Apple even used to have an anti-poaching agreement.

> How do you hire people who not only know Rust and are willing to move to Palermo, or at least visit occasionally, but also know the airfare business?

Rust isn't Boring, which is why you don't do that: you hire one of the many Java developers and do Java instead, unless the tradeoff is really worth it.


How do you control for confounders and small data?

For data size: if you're a medium-ish company, you may only hire a few engineers a year (1000-person company, 5% SWE staff, 20% turnover annually = 10 new engineers hired per year), so the numbers will be small and any correlation will be weak/noisy.

For confounders, a bad manager or atypical context may cause a great engineer to 'perform' poorly and leave early. Human factors are big.
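
To put a rough number on that noise, a quick sketch (the n and observed r are hypothetical, using the standard Fisher z-interval):

    # Rough 95% CI for a correlation observed on n = 10 hires,
    # via the Fisher z-transformation. Shows how little one
    # year's hiring data can pin down.
    import math

    n, r = 10, 0.4  # hypothetical: 10 hires, observed correlation 0.4
    z = math.atanh(r)          # Fisher transform
    se = 1 / math.sqrt(n - 3)  # standard error in z-space
    lo, hi = (math.tanh(z + k * 1.96 * se) for k in (-1, 1))
    print(f"r = {r}, 95% CI ~ ({lo:.2f}, {hi:.2f})")  # about (-0.31, 0.82)

With n = 10, an observed r of 0.4 is compatible with anything from a moderate negative correlation to a very strong positive one.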


Sure, psychological research is hard because of this, but that's not what I'm proposing - I'm talking about just having some data on the predictive validity of the hiring process. If there's some coding test: is it reliable and valid? Aren't some items redundant because they're too easy or too hard? Which items have the best discrimination parameter? How do the total scores correlate with, e.g., the length of the test takers' tenures?

Sure, the confidence intervals will be wide, but it doesn't matter; even noisy data are better than no data.

Maybe some companies already do this, but I didn't see it (though my sample is small).
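
To make this concrete, here's a rough sketch of the kind of item analysis I mean, on simulated data (real per-item scores and tenure records would go in their place):

    # Classical item analysis of a custom coding test, on made-up data.
    import numpy as np

    rng = np.random.default_rng(42)
    n_candidates, n_items = 40, 10
    scores = (rng.random((n_candidates, n_items)) > 0.4).astype(float)  # 1 = item solved
    tenure_months = rng.normal(24, 12, n_candidates).clip(min=1)

    totals = scores.sum(axis=1)

    for i in range(n_items):
        difficulty = scores[:, i].mean()  # proportion of candidates who solved the item
        # Item-rest correlation: does this item separate strong candidates from
        # weak ones? (A crude stand-in for an IRT discrimination parameter.)
        rest = totals - scores[:, i]
        discrimination = np.corrcoef(scores[:, i], rest)[0, 1]
        print(f"item {i}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")

    # Predictive validity: do total scores relate to how long hires stayed?
    r = np.corrcoef(totals, tenure_months)[0, 1]
    print(f"score vs. tenure: r={r:.2f} (small n, so expect a wide interval)")

Items where the difficulty is near 0 or 1, or the discrimination is near 0, are candidates for removal.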



The palette doesn't seem perceptually uniform, i.e. the contrast ratios of each tint vs. white/black aren't the same across all colors. Did you try analysing the palette in https://huetone.ardov.me/ ?

I've analyzed the palette with my own tool, a11y-contrast [1], and indeed the luminance is not uniform. I wrote [2] about why this might be a desired property of a color palette.

[1] https://github.com/darekkay/a11y-contrast

[2] https://darekkay.com/blog/accessible-color-palette/
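
For the curious, the core of such a check is small enough to sketch here: WCAG relative luminance per swatch, then contrast ratio against white and black (the two hex values below are placeholders, not the palette under discussion):

    # WCAG 2.x relative luminance and contrast ratios vs. white/black.
    def linearize(c8: int) -> float:
        c = c8 / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(hex_color: str) -> float:
        r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
        return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

    def contrast(l1: float, l2: float) -> float:
        lighter, darker = max(l1, l2), min(l1, l2)
        return (lighter + 0.05) / (darker + 0.05)

    # Placeholder swatches; in a uniform palette every "400" tint
    # would produce nearly identical ratios.
    for name, hex_color in {"red-400": "#f87171", "blue-400": "#60a5fa"}.items():
        lum = luminance(hex_color)
        print(f"{name}: {contrast(lum, 1.0):.2f}:1 vs white, {contrast(lum, 0.0):.2f}:1 vs black")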


Nope, I just did what felt good. Does the huetone tool support OKLCH? I didn't see a toggle for it.

You have to convert your OKLCH colors to RGB hex codes before you can import them into Huetone, which is a little annoying, but worthwhile. I wouldn't use any palette that isn't perceptually uniform anymore. Being able to switch e.g. red-400 to blue-400 and retain the same contrast is very valuable.
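
If it helps, the conversion is mechanical enough to script. A rough sketch using Björn Ottosson's published OKLab matrices (note: out-of-gamut channels are simply clamped here, with no proper gamut mapping):

    # OKLCH -> OKLab -> linear sRGB -> gamma-encoded hex.
    import math

    def oklch_to_hex(L: float, C: float, H: float) -> str:
        a = C * math.cos(math.radians(H))
        b = C * math.sin(math.radians(H))

        # OKLab -> LMS (cubed) -> linear sRGB, per Ottosson's reference matrices.
        l = (L + 0.3963377774 * a + 0.2158037573 * b) ** 3
        m = (L - 0.1055613458 * a - 0.0638541728 * b) ** 3
        s = (L - 0.0894841775 * a - 1.2914855480 * b) ** 3
        rgb = (
            +4.0767416621 * l - 3.3077115913 * m + 0.2309699292 * s,
            -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s,
            -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s,
        )

        def encode(c: float) -> int:
            c = min(max(c, 0.0), 1.0)  # clamp instead of gamut mapping
            c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
            return round(c * 255)

        return "#" + "".join(f"{encode(c):02x}" for c in rgb)

    print(oklch_to_hex(0.7, 0.15, 30))  # some mid-lightness reddish swatch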

Agree. I recently played HL2 again with the new commentary (added for the 20th anniversary), and it really shows why the level design is so good: a lot of play testing and iterative design. They really did the work. Now I'm playing the Halo franchise, and the difference in how boring and uninspired the levels are is really striking.

Mm. Halo 3 felt really open and expansive the first time I played it. The second time, I tried to explore outside the normal path and rapidly discovered that it was a very well-disguised on-rails shooter.

But I've also gone back further than that in Bungie's back catalogue, having recently been playing Marathon 2 and Marathon ∞ (winner of the MacFormat(?) magazine award for "largest version number increase between successive releases") on Steam, and… well, 2 has still-interesting levels, but Infinity's levels are a spatially confusing mess.


> Now I'm playing the Halo franchise, and the difference in how boring and uninspired the levels are is really striking.

Relevant PA: https://www.penny-arcade.com/comic/2001/11/28/the-rest-of-th...


Halo: ODST is the one for me with the best storytelling and ambiance.


> I will tell you a KPI driven org is gaslighting me

That's still just the same old issue of aggregate statistics vs. a data point. They tell you that something, on average, is this way, but you experienced it another way, so you say "the statistics must be wrong".

> you know about reality

So the issue is: is your model of "reality" the correct one?


The discussion thus far has been about the incentivized manipulation of the data points as they come in. Aggregating those into statistics will produce misleading and incorrect results. It's not that an individual data point can be different from the average; it's that the flow of individual data points into the average is being 'managed' as people try to 'improve' these KPIs.

Sure, but is the data collection truly incorrect? He argues that it is, because "his experience" doesn't match the data. So in the end it is about data points being different from the average.

I can't say for sure one way or the other. But my experience has been that people who have KPI metrics to meet will do a lot of things to game those data points in their favor.

Well, there's also Goodhart's Law [1]. Maybe that's what OP meant to reference.

[1] https://en.wikipedia.org/wiki/Goodhart%27s_law


> then human confirm is needed

"Suppress this prompt with the -y or --yes option" [1]

[1] https://docs.npmjs.com/cli/v11/commands/npx


> introduced a small patch

> introduced a patch

> the Git maintainer, suggested

> relatively simple and largely backwards compatible fix

> version two of my patch is currently in flight to additionally

And this is how interfaces become unusable: through a thousand small "patches" created without any planning or oversight.


Ah, if only the Git project had someone of your talents in charge (rather than the current band of wastrel miscreants).

Then it might enjoy some modicum of success, instead of languishing in its well-deserved obscurity!


Git has a notoriously bad CLI (as other commenters here have noted). Your snarky comment provides no value to this discussion.


On the contrary, it offers a little levity and humour, and possibly even the chance for some self-reflection as you consider why you thought it was appropriate to insult the folk who manage Git. I'm sure you can manage at least one of those?


Your comment isn't funny, just snarky. I suggest you reread the HN guidelines and do some reflection yourself.

Also, if you see it as an insult, that's your mistake. It is just a simple empirical observation. I'm not saying it's an original thought - feel free to Google more about this topic.

I won't waste any more time since you obviously aren't interested in discussion.


> I won't waste any more time since you obviously aren't interested in discussion.

Pot. Kettle. Black.


But you hear no crying or shouting during e.g. the Moon landing [1]. TBH I expect "disinterested" behavior from professionals in such situations.

[1] https://youtu.be/xc1SzgGhMKc


That's not a video of live broadcast TV coverage. It's a recording of the operational communications (which you could hear in the BO livestream, and it didn't have crying or shouting). Actual TV broadcasts at the time did show some actual emotions, including laughter and possibly even tears, despite coming from professional newscasters rather than employees: https://www.youtube.com/watch?v=oMF58ZP681A


> from professional newscasters rather than employees

Then it's not relevant.


Don't forget:

- sketch with some animals (usually dogs or cats, they're cute)

- sketch with some fluid sprayed over a cast member


and

- the punch repeated three times

- the pulled punch

- the actor's age-inappropriate punch


