Hacker News

I found this very curious - by their own admission, this also means that most code _does not get reviewed_ before it lands in production. To me, this is quite scary, and I would be very hesitant to adopt this for any large-scale project or company.

IMO, code review is a cornerstone of code quality and production stability - the dumb (and smart!) mistakes in my code that have been caught in CR are numerous, and it's a big portion of my workflow. There are times when it feels redundant (one-line changes, spelling fixes, etc.), but I wouldn't trade those slowdowns for a system where I only got review when I explicitly wanted it.

Of course, for pre-production projects and/or times when speed is of the utmost concern, dropping back to committing to master might make sense, but for an established and (I'm assuming) fairly large/complex codebase, I would think it best for maintainability and stability to review code before it's deployed.



You can commit code directly to master and still review all code before production if you want to.

We do.

Our tooling will tell us which cases have been introduced between the currently deployed version and the to-be-released version, and whether all of those cases have been reviewed, tested, and are ready to go. We deploy more or less constantly, so all cases that are ready go out asap.

The only issue is that you can get 'blocking' cases, but that's fairly rare. Big cases get a feature switch.
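For illustration, tooling like that can be as simple as scanning commit messages between two refs for case IDs. This is a hypothetical sketch, not their actual tooling - the `CASE`/`JIRA`/`#` prefixes and the helper names are made up:

```python
import re
import subprocess

# Assumed convention: commit subjects reference cases as "CASE-123", "JIRA-123", or "#123".
CASE_ID = re.compile(r"(?:\b(?:CASE|JIRA)-?|#)(\d+)", re.IGNORECASE)

def case_ids(messages):
    """Collect the distinct case numbers referenced in a list of commit subjects."""
    found = []
    for msg in messages:
        for case in CASE_ID.findall(msg):
            if case not in found:
                found.append(case)
    return found

def cases_between(deployed_ref, release_ref):
    """Ask git for the commit subjects between two refs and extract case IDs."""
    log = subprocess.run(
        ["git", "log", "--format=%s", f"{deployed_ref}..{release_ref}"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return case_ids(log)
```

Something like `cases_between("v1.42", "HEAD")` would then be cross-checked against the issue tracker for review/test status before the release is marked ready.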


What are cases? Is it short for use case?


Pretty sure he means something like an issue: a GitHub issue, Jira ticket, etc.


Yes, we use the word case for issue here


I'm guessing a scrum use case.


If we want a code review on anything risky, we may push a branch, or we may just post the commit in chat for review before we build out. Which we choose depends on how big or blocking the change may be.

We ask for code reviews all the time, we simply don't mandate them - I think that's the main difference.


> or we may just post the commit in chat for review before we build out.

Isn't that 'after the fact', considering your TeamCity polls the GitLab repo frequently, so a commit will trigger a build right after it and, if everything goes well, deploy it too?

So you have to know up front whether a thing is 'risky', but that's a subjective term.


It only deploys to our development/CI environment automatically. Deploying out to the production tier is a button press still.

So yes, it will build to dev, but we're using this in situations where we're very confident the changes are correct already. I'd argue blind pushes are the problem otherwise. If the developer is not very certain, they can open a merge/pull request or just hop on a hangout to do a review.
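As a toy sketch of that gating logic (the tier names and function signatures are invented, not their actual pipeline): green builds flow to dev/CI automatically, while production additionally requires the explicit 'button press':

```python
# Assumed tier names - purely illustrative.
AUTO_DEPLOY_TIERS = {"development", "ci"}

def should_auto_deploy(tier: str) -> bool:
    """Green builds roll out to dev/CI without intervention; anything else waits."""
    return tier.lower() in AUTO_DEPLOY_TIERS

def deploy(tier: str, build_green: bool, button_pressed: bool = False) -> bool:
    """Decide whether a build may ship to the given tier."""
    if not build_green:
        return False  # a red build never deploys anywhere
    return should_auto_deploy(tier) or button_pressed
```

The point of the split is that the automatic path can never reach production: that last hop always goes through a human decision.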


> It only deploys to our development/CI environment automatically. Deploying out to the production tier is a button press still.

Ah, I missed/overlooked that!


Cool! That clarifies things a lot - the way I read the article, it sounded like you rarely asked for reviews.


It's a false dichotomy to think you can either move fast or have good process. Etsy commits directly to master on their large, monolithic PHP webapp, and they have a pretty strong code review process where code isn't deployed until it's reviewed. They still manage to move fast with autonomy and trust - at least they did while I worked there over the past couple of years.


I just happened to listen to an old Stack Overflow podcast where they talked about code reviews. They said they do in-person code reviews before committing code.

(I find this is the most valuable way of doing code reviews vs pull requests/sending comments back and forth. In-person conversation about the code is so much higher bandwidth.)


We did in-person code reviews at my company, and for me they were mostly useless. I like to spend some time looking at the code, and doing so on someone else's screen, or while they are looking at yours, makes me (and fellow devs) want to finish quickly, so we frequently missed design problems.


Code reviews aren't just about catching problems either: the reviewer can learn new coding techniques, becomes more familiar with the team guidelines by applying them, and is exposed to parts of the codebase they might not have known about.


Shouldn't the types of mistakes you're worried about be caught by the automated tests? IMO code review is more about bigger picture design, e.g. if a piece of code would yield the correct result but with sub-optimal performance (which can also be caught by automated tests but not always).
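One way automated tests can catch 'correct but sub-optimal' code deterministically is to count operations instead of measuring wall-clock time. A toy sketch (the sort-and-count example and the budget factor are arbitrary assumptions, just to show the idea):

```python
import math

def bubble_sort(xs):
    """Correct but quadratic: a purely functional test would pass this."""
    xs = list(xs)
    comparisons = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

def within_budget(n, comparisons, factor=2.0):
    """Flag implementations that exceed roughly factor * n * log2(n) comparisons."""
    return comparisons <= factor * n * math.log2(n)
```

Counting operations keeps the test deterministic, unlike timing-based tests, which tend to be flaky on shared CI hardware - but as the parent says, not every performance problem reduces to a countable quantity, which is where review still helps.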


Prior work in CompSci on various engineering techniques showed code review to be among the most effective at any part of the lifecycle. Testing can miss lots of things, so it's best to do both.


There's a lot that fits between automated tests and code review. We have an extensive system of static analysis and code quality bots that run, but there's still a lot of design patterning and higher-level functionality that machines (or, at least, our machines) don't always catch.

Obviously, it depends widely on the codebase and the number/quality of engineers working on code, but it's been my experience that a team reviewing each other's code is still something that can't be 100% replaced with automated tests.


Most companies I've worked for, many of them on the Fortune 500 roster, don't ever do code reviews.

Every time such a process was put into place, it quickly faded out after a few months.



