I think we ought to reconsider some of our quality assurance measures.
Code Reviews
Two other core developers are supposed to review a commit before it gets pushed to upstream. Developers in charge of a specific subsystem get a free pass on their own commits to that subsystem.
Why this sucks:
Has anybody actually tried getting two core developers to review your commit? It's very difficult.
1). Developers are geographically distributed. They live in very different time zones.
2). Developers usually have a narrow time window to work on Minetest-related tasks. It's a side project, not their day job. This aggravates issue #1.
3). Waiting for feedback contributes to code rot. By the time a change actually gets merged, several merge conflicts have cropped up.
4). Code reviews are usually superficial. I've noticed that reviewers often miss big, obvious flaws in commits, and they don't test the code they review. Even when feedback does come, whether the code actually fixes or adds what it set out to takes a back seat to less pertinent details such as code style.
Coincidentally, Minetest uses the exact same code review setup as my team at my day job, but the key difference is that our upstream commits go directly into production and we're not in a constant rush to develop. Minetest has releases and generally moves faster. I pine for something better that is not a hindrance to development.
Core developers, at least, presumably aren't running around destroying the codebase, since they are considered responsible. They sure do make stupid mistakes sometimes, even in their own corner of the codebase. I've done it myself, and three developers presumably reviewed my code and missed the obvious mistake. I would be all for code review if it actually caught concerns beyond minor code style issues and exercises in coding ideology. As it stands, the reviews are more or less an acknowledgement of what somebody intends to commit before they get waved on to push to upstream - though I suspect how prevalent this issue is varies with the quality and skill of the developer performing the review.
Unit Testing
Unit testing is great to have. Runtime errors could be caught faster and with less effort by automating the verification of any changes.
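For concreteness, here is the kind of test I have in mind, as a minimal self-contained sketch. wrapDegrees() is a hypothetical utility, and plain assert() stands in for whatever macros a real harness would use:

```cpp
// Minimal sketch of an automated check. wrapDegrees() is a hypothetical
// utility that normalizes an angle to [0, 360); it is not engine code.
#include <cassert>
#include <cstdio>

// Hypothetical utility under test.
static int wrapDegrees(int angle)
{
	int a = angle % 360;
	return a < 0 ? a + 360 : a;
}

static void testWrapDegrees()
{
	assert(wrapDegrees(0) == 0);
	assert(wrapDegrees(360) == 0);
	assert(wrapDegrees(-90) == 270); // the kind of sign bug reviews miss
	assert(wrapDegrees(725) == 5);
}

int main()
{
	testWrapDegrees();
	std::printf("all tests passed\n");
	return 0;
}
```

A check like this runs in milliseconds and catches a regression the moment it is introduced, rather than relying on a reviewer happening to spot it.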
Currently, no new unit tests seem to be added, and existing ones are modified only when a functionality change makes it necessary.
At the bare minimum, we could automatically reject a push if it causes any unit test to fail. Bogus test failures would simply necessitate fixing the tests so they pass. Even better would be to encourage writing unit tests for as many things as possible, and perhaps refuse to accept code upstream if it lacks unit tests that can reasonably be implemented. The code review model might be a roadblock to this latter proposal.
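One way such a gate could work, sketched under assumptions: if the test runner exits nonzero when any test fails, a server-side hook or build bot can refuse the push. The self-registering harness below is hypothetical, not Minetest's actual test code:

```cpp
// Sketch of a test runner whose exit status a push gate could check.
// The registry here is a hypothetical illustration, not engine code.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct TestCase {
	std::string name;
	std::function<bool()> run; // returns true on pass
};

static std::vector<TestCase> &registry()
{
	static std::vector<TestCase> tests;
	return tests;
}

// Registers a test at static-initialization time, so adding a test
// means adding one object, with no central list to keep in sync.
struct TestRegistrar {
	TestRegistrar(const std::string &name, std::function<bool()> fn)
	{
		registry().push_back({name, std::move(fn)});
	}
};

// Example registration; any test added this way is picked up automatically.
static TestRegistrar reg_example("example.arithmetic", [] {
	return 2 + 2 == 4;
});

int main()
{
	int failures = 0;
	for (const TestCase &tc : registry()) {
		const bool ok = tc.run();
		std::printf("%-32s %s\n", tc.name.c_str(), ok ? "PASS" : "FAIL");
		if (!ok)
			failures++;
	}
	// Nonzero exit status is what a pre-receive hook or build bot
	// would key off to reject the push.
	return failures ? 1 : 0;
}
```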
Much of the code is currently written with interfaces that lack separation of concerns and are therefore not conducive to unit testing, even though most of the codebase does not actually need to be this way. We want to draw a fine line between reasonable interfaces and full test-driven design. The latter, for features that require a lot of state, makes testing feel forced and takes as much code as the feature itself - and then some things are impossible to test due to the nature of the application.
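To illustrate the kind of reasonable interface I mean (all names here are hypothetical illustrations, not actual engine interfaces): if logic reaches the filesystem through a small interface rather than directly, it becomes testable with an in-memory fake, no running game required:

```cpp
// Sketch of separating logic from I/O so the logic is unit-testable.
// All names are hypothetical, not engine interfaces.
#include <cassert>
#include <map>
#include <string>

// The seam: logic depends on this small interface, not on the real disk.
struct IFileStore {
	virtual ~IFileStore() = default;
	virtual void write(const std::string &path, const std::string &data) = 0;
};

// Logic under test: decides *what* to save, delegates *how* to the store.
class SettingsSaver {
public:
	explicit SettingsSaver(IFileStore &store) : m_store(store) {}

	void save(const std::map<std::string, std::string> &settings)
	{
		std::string out;
		for (const auto &kv : settings)
			out += kv.first + " = " + kv.second + "\n";
		m_store.write("minetest.conf", out);
	}

private:
	IFileStore &m_store;
};

// In-memory fake; no disk access and no running game needed.
struct FakeFileStore : IFileStore {
	std::map<std::string, std::string> files;
	void write(const std::string &path, const std::string &data) override
	{
		files[path] = data;
	}
};

int main()
{
	FakeFileStore fake;
	SettingsSaver saver(fake);
	saver.save({{"name", "singleplayer"}});
	assert(fake.files["minetest.conf"] == "name = singleplayer\n");
	return 0;
}
```

The point is only the seam: the feature code barely changes, but the stateful dependency can be swapped out, which is a far cry from rewriting everything in a test-driven style.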
I need help coming up with a reasonable policy for adding unit tests. I can't force anybody to create them, since for much of the code it's rather difficult to automate, and forcing people to do something would only result in the bare minimum. Just look at how handwavey the code review process has become for evidence.