> Don’t review with a “how would I write it?” filter
_Thinking_ about "how would I write it?" when reviewing code catches a lot of stuff for me - I want to be able to come up with how I'd do it, see how the PR does it, and then check if the approach matches. If not, there's something to be learned and explained here. All of that stays in my head, though! If I come to the conclusion that the presented approach is wrong or inferior in an objective way (say, doesn't handle corner cases neatly or at all), I'll comment. Otherwise, let people do their thing!
So, I agree with the point of not using this to generate a lot of comments, but I do believe in it as a useful technique to engage the brain, instead of just loosely scrolling over the diff.
Good points. The most important consideration from my experience having worked with more than 40 teams: "Code review is not the time for you to impose your personal taste on a colleague." There are always many acceptable ways to solve a problem in software development, so if someone imposes their personal preference over a colleague's, it creates tension and a time sink for the team - don't make this a blocking review! This also plays a big role when developers obsess over code but lack the vision and experience to see that not everything will be a relevant factor for technical debt in the grand scheme of the business.
Which is my fundamental problem with peer review. There should be one gatekeeper to a code base that enforces some style preference. Practically which style they choose matters very little compared to having one predictable and enforced style to follow.
It is hard to fully specify all the different dimensions of consistency. Look at the code that exists and write in that voice. Maintenance will be much better.
It is even harder to chase the personal consistency preferences of multiple different colleagues. Usually inconsistent with each other. Usually enforced only against less assertive committers and not enforced against those perceived to be strong.
That is why you should either make it a rule or let it be.
In terms of formatting etc. I rely on tooling that breaks the CI. That way those issues don't even surface in review. Equally, if something slips through, the CI needs to be updated.
A piece of code should match the general style of the code it is being integrated into. When you create an addition to a wood-framed house the addition won't be steel construction, unless you have an explicit reason (weight, speed, foundation etc). And if I'm the main maintainer of a codebase, it is something I can call out on. If we use Tokio and you use raw threads I'll ask about this choice.
In an ideal world, you (the PR author) include this in the PR description, because it is a reflection of your understanding of the codebase.
Reviewing code with 'will this work' is also debatable. Someone might propose a change that fixes an issue they are having. But maybe it's not something we want to solve in our codebase, or maybe it breaks other things the maintainers are thinking of.
> In terms of formatting etc. I rely on tooling that breaks the CI. That way those issues don't even surface in review. Equally, if something slips through, the CI needs to be updated
I also do something similar with lint rules. For Rust, this means turning on every single reasonable clippy lint (there are many), and making all warnings into errors. Some would say that some rules are overzealous or annoying, but it's a price I'm willing to pay to keep the codebase uniform.
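As a rough sketch of what that looks like (the exact lint groups are a matter of taste, and crate-level attributes are just one way to do it; a `[lints]` table in Cargo.toml or flags passed in CI work too):

```rust
// Crate-level attributes at the top of lib.rs / main.rs.
// This is an illustrative selection, not a prescription.
#![deny(warnings)]          // every warning becomes a hard error
#![deny(clippy::all)]       // the default clippy lint groups, as errors
#![warn(clippy::pedantic)]  // the stricter, occasionally opinionated set
```

In CI the same thing can be enforced with `cargo clippy -- -D warnings`, so nothing depends on contributors remembering the attributes.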
If you are leaving a comment it’s because someone didn’t learn how things should be from the tools in your codebase.
Naming style problem? Linting isn’t catching it.
Architecture violation? Your import linting (checkers to prevent route handlers importing the DB and skipping the service layer, etc.) isn’t catching it.
Duplicate method? What tooling can you create in the LLM era - remembering you can ask agents to build scripts now?
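To make the import-linting point concrete, here's the kind of small, throwaway checker an agent can write for you in minutes. The layout it assumes (route handlers under `src/routes`, a `crate::db` module they must not touch directly) is hypothetical; adjust it to whatever layering rule your codebase actually has.

```rust
// Sketch of a homegrown architecture lint: fail CI if any route handler
// imports the DB module directly instead of going through the service layer.
// The paths and module names are hypothetical.
use std::fs;
use std::path::Path;
use std::process::exit;

fn check_dir(dir: &Path, violations: &mut Vec<String>) {
    for entry in fs::read_dir(dir).expect("readable directory") {
        let path = entry.expect("readable entry").path();
        if path.is_dir() {
            check_dir(&path, violations);
        } else if path.extension().is_some_and(|ext| ext == "rs") {
            let source = fs::read_to_string(&path).expect("readable file");
            for (lineno, line) in source.lines().enumerate() {
                if line.trim_start().starts_with("use crate::db") {
                    violations.push(format!("{}:{}: {}", path.display(), lineno + 1, line.trim()));
                }
            }
        }
    }
}

fn main() {
    let mut violations = Vec::new();
    check_dir(Path::new("src/routes"), &mut violations);
    if violations.is_empty() {
        return;
    }
    eprintln!("route handlers must not import crate::db directly:");
    for v in &violations {
        eprintln!("  {v}");
    }
    exit(1); // non-zero exit makes the CI job fail
}
```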
- you review, and if to the best of your knowledge something can be done better, you comment on it and leave a suggestion on how to do it better
- then you approve the PR. Because your job is not to gatekeep the code
There is a middle ground here though. These days I try to make sure I'm very clear on which review comments are blockers for me and which - most of them - are style notes, suggestions, or questions. Unless the code has some egregious problem I immediately approve it and write a comment saying "this is good by me if you fix blockers A, B"; that way they are not then blocked by me doing a second review unless they want it.
Maybe this is because I’m working somewhere where we don’t use stacked reviews though? So it’s a major pain for someone to have a PR open for a long time going through lots of cycles of review, because it’s tricky to build more work on top of the in-review work
I guess there's also a difference in the places you work at.
The places I work at expect trunk to always be clean and ready for production (continuous delivery). If you work someplace with a slower release cycle, then getting a not-quite-perfect change in may be more acceptable.
There's also responsibility, which during incidents is traced back to the author and reviewers. I won't approve until I'm confident in the code, and that will mean the author needs to answer questions etc.
A PR can be broken, or not. If it's not broken, you approve it. You offer all the advice on how to improve it, and promise to re-approve promptly if these improvements are implemented. But you suggest it, not demand it.
If the PR is broken, you clearly denote where it is broken. I usually start comments on lines requiring changes with a red circle (U+1F534), because the GitHub code review UI lacks an explicit way to mark it. You explain what needs changing and, crucially, why it can't remain as is, and ideally suggest ways to fix it. Then you demand changes to the PR.
Because yes, your job is to gatekeep the codebase, protecting it from code that will definitely cause trouble. Hopefully such cases are few, but they do occur, even to the best engineers at the best engineering orgs.
> - then you approve the PR. Because your job is not to gatekeep the code
I can see this working when the person who wrote the code is responsible for making sure the product works for the client and that their code does not interfere with everyone else's work.
If the person reviewing, though, is responsible for the above, it makes sense to gatekeep the code. I have been in this position before, and offloading as much as possible to automated processes helps, but it has never been enough - or at least there is never enough time to build automated processes that would be enough.
> Review with a “will this work” filter, not with a “is this exactly how I would have done it” filter
I suppose there are edge cases where you could say "technically this will work, but when the system is close to OOM it will fail silently", but I would consider that to be a negative response for "will this work" rather than a case where you rubber stamp it.
I disagree with the point "don't leave too many comments". I think it's easier to read code that is a) following repo standards, b) following community standards. If a PR requires 100 comments, so be it. Granted, I also almost always give the benefit of the doubt to the author unless a change is really required, and I've only ever seen a PR with close to 100 comments once or twice. I should also say that you shouldn't leave n comments with the same content, so usually there's no need for more than a couple of comments in total.
I think a lot of this stems from the code review tools themselves, especially for the "Don't just review the diff" mistake, with GitHub defaulting to just showing a list of changed files and their diffs. I'd find it much more useful if a review tool started with a class hierarchy or similar high-level view to get a sense of:
1. What/how many classes changed?
2. What/how many methods have been added/removed/modified?
3. What method signatures have changed?
4. What changes are covered by new tests?
"Don't leave too many comments" i think can really be rethought of, don't review style and syntax. Leave that to the robots. If you're relying on other engineers to flag style problems and linting, you're just wasting everybody's time. Set up linting and style checkers and be done with it.
The thing that always bothers me is when the comments involve things that have nothing to do with the PR.
Like, they are reasonable ideas, but they open up a new issue. If every reviewer wants to tackle large topics in the PR review that have nothing to do with what is specifically happening, then the PR explodes and gets even harder for others to review, now that we are changing things unrelated to the change at hand.
I'd also add: check out the branch locally. Your colleagues do some awful things to their local environments, and the central CI isn't perfect. Your IDE might have more checks enabled than both, too
I've worked in teams where I'm the only one who actually checks out the branch to poke around
I strongly disagree on this one. The CI pipeline has to be the single source of truth. Your approach is a step back to the times of "but it works on my machine".
Yes, ideally. Then again, I've caught plenty of bugs checking it out and running it locally.
The CI pipeline may not wholly be in your team's control. I've worked at places where a 'devops' team had too much control, liked to tinker all the time and often broke stuff. Nothing we could do - devops were gods and didn't take input from anyone.
Should I not check it out locally, and instead spend months and political capital arguing for more in the CI (which the PM would argue slows down velocity - and the PM will win)?
Pragmatism and workarounds are often employed in non-bigtech enterprise companies...
edit: I also meant 'poke around' as in do a bit of informal QA. Again, your test and QA team might not be perfect...
There are times where it is reasonable, especially for bug fixes where there is a test to explicitly check that the bug is fixed. In that case, I might check out the code, revert the fix, and ensure that the test fails. There have been a few times I did that and the test still succeeded, so I can confirm that the test is not testing the right thing (in a very mature project, complexity or test settings can obscure what your test is actually doing, especially if it is an E2E test).
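A small sketch of that workflow, with a made-up `parse_config` standing in for whatever the PR actually fixes:

```rust
// Hypothetical scenario: the PR claims to fix a panic on empty input.
// `parse_config` is a stand-in so the sketch compiles; the real function
// lives in the codebase under review.
fn parse_config(input: &str) -> Result<u32, String> {
    if input.is_empty() {
        return Err("empty config".to_string()); // <- the fix under review
    }
    input.trim().parse().map_err(|e| format!("bad config: {e}"))
}

#[cfg(test)]
mod tests {
    use super::*;

    // To confirm this test really covers the bug: check out the branch,
    // revert just the fix hunk, rerun `cargo test`, and make sure this
    // test now fails. If it still passes, it isn't testing the right thing.
    #[test]
    fn empty_input_is_an_error_not_a_panic() {
        assert!(parse_config("").is_err());
    }
}
```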
There can still be value in configuring your local IDE to emit more warnings than the centralized linter does, and looking at those warnings in your IDE.
This can help you identify additional pieces of feedback that you might have missed by just reading the code in a web interface.
If the code needs to work in 100 different environments, it is likely that more bugs are found than if it is only tested in one standardised environment. The problem CI catches is when that one environment isn't even standardised and reproducible.
100%. I build web apps, and you can't automatically test every single possible interaction of a user on a page. Weird shit can very very easily sneak through without thorough manual testing.
I feel like a useful tool someone should build, now that LLMs are so capable, is some sort of automated walkthrough of a pull request: it steps a reviewer through the overview and the why of the changes first, then step by step through each change, with commentary along the way generated by the LLM (and hopefully reviewed by the pull requester for accuracy). Then the reviewer can leave comments along the way.
I’ve always found the cognitive overhead of going through a pull request challenging, seems like the paradigm can shift now with all of the new tooling available to all of us.
It seems like maybe you should just talk to your colleague who wrote the code. It would be far more efficient than having them create and review a whole review experience on top of the work that they did, and then have you try to understand a walkthrough document (that might have hallucinated explanations that the requester didn’t quite catch) plus the underlying code, plus still have to talk to them to ask clarifying questions anyway.
This sounds like you're making an argument against pull request descriptions in general. Like you just want to see code changes and have a meeting about it.
> When you receive a review with a hundred comments, it’s very hard to engage with that review on anything other than a trivial level.
Wow, that seems crazy. I can only hope I never have to work with somebody who thinks it is productive to leave that many comments on a change -- I genuinely cannot imagine any change that could ever require that.
I've had some PRs that would have required a ton of comments. There are usually two ways I handle this:
If the PR went in a completely different direction and missed the goal by a lot, I take ownership of it (with a brief explanation) and re-implement it. I then use the new PR for a pairing session, where I walk through both PRs with the original author for learning purposes.
If it’s mostly smaller issues, I schedule a half-hour pairing session with the author and review everything together, after preparing a list of issues.
Doing it any other way puts too much burden on the author to guess what the reviewer wants, and it slows down velocity significantly.
In those situations, the more productive option to course-correct is to talk to the change author in a meeting/chat instead of just volleying off a tsunami of comments about various minutiae in the change IMO.
I'm in agreement with most of the points, but this one was surprising to me:
> Many engineers seem to think it’s rude to leave a blocking review even if they see big problems, so they instead just leave comments describing the problems. Don’t do this. [...] Just leaving comments should mean “I’m happy for you to merge if someone else approves, even if you ignore my comments.”
Do people actually ignore a comment explaining a problem with the code as written just because it wasn't a blocking review?
It's like ignoring someone telling you you're stepping into a hole because they're not grabbing you by the neck. Reviews shouldn't be that adversarial, nor require that much hand-holding.
I'm also realizing that everywhere I work, a comment is basically blocking, unless it's explicitly flagged or discussed as optional. Trying to find some other person to review and ignore the comment is just a big NO.
Personally I hate it when people just leave comments on my PR without explicitly blocking or approving it. To me it comes across as indecisive. Either you are fine with the changes -> approve. If you think this code shouldn't be merged in its current state -> block. Just leaving a comment feels like you want to complain, but don't really take any responsibility for what happens next.
There are exceptions of course and it all depends on the comments and the circumstances, but I generally prefer explicit yes or no.
> Personally I hate it when people just leave comments on my PR without explicitly blocking or approving it. To me it comes across as indecisive. Either you are fine with the changes -> approve.
I point out things that aren't functional blockers all the time. If I see from the code itself that there's a gap that maybe wasn't considered (is there a ticket to deal with this? I'm not going to guess what it's named or where it is), I'll point it out. If it's not blocking, it's an approval from me, but someone else may disagree when they read it as well.
Not sorry that you think it's indecisive. This is technical feedback, which is separate from product feedback. You should still read your review comments and other people's review comments. If I was in charge of things to the point only my approval matters AND I was sure the author would respond, it would be an approval. Generally, this is not the case.
It sends a different message, in my opinion. Blocking means "I disagree, but let's figure it out and work together to get it over the finish line". "I don't approve, but someone else can" is very non-committal, which gives me the feeling of being left alone with a bunch of critique, without appreciation for the work that I have originally done. I would wish my reviewer took responsibility for their feedback.
"I don't approve, but someone else can" also means to me "Merge it, if you must. If it works out, good for you, I havent blocked it. If it doesnt work out, I get to say 'See, I told you so!'.
Having non-blocking comments leaves room for the discussion you want.
It's your job as the PR submitter to advocate for your code and shepherd it through.
Either you, indeed, work with the reviewer who made the comments to resolve them, or you have the option to seek out another if you think the feedback isn't valid enough to address.
Edit: TBH I don't get why you'd see a non-blocking comment differently, eg not meaning "let's get this done".
What about: "It would save as effort, if you would do it like this, but having it in the current change is still better then nothing in case you are not willing to do the work."
I don't want you to write an essay; I want to point out that there is a third state besides block and approve. I don't want to end up never having that code at all, in case the author disagrees with me or I am missing something.
> Do people actually ignore a comment explaining a problem with the code as written just because it wasn't a blocking review?
Approval is approval. If you complete your review and don't block, you have concretely indicated to the author that in fact nothing is wrong with the diff and it is ok to be merged as is no matter what else you said. Many authors won't even look at comments if their change is approved.
Most people I know don't use it this way. Most people sign off on a review, with the expectation that the remaining comments will be addressed before pushing.
If they just push anyways, then post-push there are gonna be some awkward conversations. If it becomes common, other people would know to just not approve CRs for that person until they are pristine. It will basically just slow development speed down in the long run.
> Most people sign off on a review, with the expectation that the remaining comments will be addressed
People should be made aware that that's a bad and dangerous practice, because by doing that they are directly requesting unreviewed changes, because it means that now nobody needs to review the follow-up to their review. People need to learn to not sign off on code that they believe needs to be changed. Saying "I believe this code is wrong and needs to be fixed but am confident that the next version you produce will be completely correct without further review" is lazy reviewing and leads to inevitable disaster.
That's a bit unprofessional. The comment may be about a non-blocking issue. There is something wrong with the diff, and you should look into it.
Ignoring the comments is a tactic of a careless coworker. The diff may get merged and they can move on, sure. But come review time, they may find something else in their work environment is being rejected and removed.
I would say that there's no such thing as a "non-blocking issue". If it's not something you want to block over, then you're saying that it's not really an issue.
On the contrary, the parent's comment is to me the professional variant if there is no other agreement. Every modern code management platform has a distinct feature to mark a review as blocking or non-blocking, so it should be understood and communicated within the team how to use this feature.
And it should be understood that if a coworker has something to say about your code, you should listen. Anything else is obstructive and arrogant. Which are both unprofessional behaviors.
> Do people actually ignore a comment explaining a problem with the code as written just because it wasn't a blocking review?
Yes, yes they do.
And some devs will go the other way - no matter how minor a comment you leave, even just those tagged nitpick/FYI, they must address them.
I'm in agreement with most of the points, but I still find that most of the PRs I review, I block and request changes. Maybe that says more about me and the devs I've worked with...
> Do people actually ignore a comment explaining a problem with the code as written just because it wasn't a blocking review?
Some people leave a flood of comments that cosplay as explanations, but they are not. Or I simply disagree.
I am just getting to the point of ignoring them, because the alternative is to write an essay in reply to each of those comments, just for it to be ignored and for the person to passive-aggressively ignore the patch forever.
I'll say this is directly proportional to the size of the changes. If a code review is a conversation between n parties, it seems reasonable to me that more code leads to more comments.
I work with a person where I know going into a review that I have to check every single file path. They have this weird ability to consistently put things in entirely the wrong folder/namespace. For example, something related to Categories might end up under Orders somehow... it's like new files just go in whatever folder they happen to have open at the time.
I frankly don't even know how they do it so consistently.
Something to keep in mind in your reviews as well I guess, lol.