After completing the detection of Common Hand last week and establishing the base data structures for Melds, I moved on to implementing the other winning hands.
The Hong Kong Mahjong scoring system features over ten winning hands. Most of them, excluding Thirteen Orphans and Nine Gates, are formed by combining four melds of Pongs and/or Chows. For example, four melds of Pongs make up All in Triplets. If those melds satisfy further conditions, the hand can also qualify as Great Wind, All One Suit, and so on.
Some winning hands allow Faan stacking. For example, All in Triplets, Mixed Orphans, Small Winds and Mixed One Suit can combine for a score of 3 + 1 + 3 + 3 = 10 Faan. However, Orphans cannot stack with All in Triplets. These variations make the implementation very complex.
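To make the stacking rule concrete, here is a minimal sketch of how it might be modelled. The hand names and Faan values follow the example above; the exclusion check is a simplified assumption for illustration and is not the app's actual implementation:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Minimal sketch of Faan stacking. Hand names and Faan values follow the
// example in the text; the exclusion rule (Orphans vs All in Triplets) is
// modelled as a simple pair check purely for illustration.
public class FaanStacking {
    static final Map<String, Integer> FAAN = Map.of(
        "All in Triplets", 3,
        "Mixed Orphans", 1,
        "Small Winds", 3,
        "Mixed One Suit", 3
    );

    // Pairs of hands that cannot be stacked together.
    static final Set<Set<String>> EXCLUSIONS = Set.of(
        Set.of("Orphans", "All in Triplets")
    );

    static int totalFaan(List<String> hands) {
        for (Set<String> pair : EXCLUSIONS) {
            if (hands.containsAll(pair)) {
                throw new IllegalArgumentException("Cannot stack: " + pair);
            }
        }
        return hands.stream().mapToInt(FAAN::get).sum();
    }

    public static void main(String[] args) {
        List<String> hands = List.of("All in Triplets", "Mixed Orphans",
                                     "Small Winds", "Mixed One Suit");
        System.out.println(totalFaan(hands)); // 3 + 1 + 3 + 3 = 10
    }
}
```

Even this toy version hints at where the complexity comes from: the scoring is not a simple sum, but a sum guarded by pairwise compatibility rules.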
As an Agile practitioner, I naturally turned to Test-Driven Development (TDD) as one of my techniques for building the application. I continuously added test cases (Red), implemented the smallest possible piece of code to pass the tests (Green), and did a little refactoring afterwards (Blue). The diverse winning conditions inevitably led to a situation where `if`, `for` and `while` statements were scattered everywhere. Eventually, the cognitive complexity reached 40, well above SonarQube’s recommended limit of 15.
Cognitive complexity measures how difficult a piece of code is to understand. The more conditional statements and nested loops are present, the higher the cognitive complexity score.
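As a rough illustration of how the metric accrues (the increments follow SonarQube's published rules: +1 per control-flow statement, plus +1 for each level of nesting; the method itself is a contrived example, not code from the app):

```java
// Contrived example of how SonarQube's cognitive complexity accrues:
// each control-flow statement costs +1, and nesting adds +1 per level.
public class ComplexityDemo {
    static int countWinningMelds(int[][] melds) {
        int count = 0;
        for (int[] meld : melds) {          // +1
            if (meld.length == 3) {         // +2 (1 + nesting level 1)
                if (meld[0] == meld[1]) {   // +3 (1 + nesting level 2)
                    count++;
                }
            }
        }
        return count;                       // total cognitive complexity: 6
    }

    public static void main(String[] args) {
        int[][] hand = {{1, 1, 1}, {4, 5, 6}};
        System.out.println(countWinningMelds(hand)); // 1
    }
}
```

With a dozen winning hands each needing checks like this, it is easy to see how a single method climbs to 40: the nesting penalty compounds with every added rule.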
Is a cognitive complexity of 40 a serious problem? It depends. If this had happened at my previous company, it would have violated the Definition of Done (DoD) and required immediate fixing before deployment. However, since I have full control of my own app, I decided to pause and carefully consider how to proceed.
Addressing cognitive complexity and other code smells improves code quality, but blindly fixing every issue can lead to other problems.
Developers with limited refactoring experience, or those forced to eliminate code smells, might do more harm than good. They might extract sections of code into multiple methods just to trick the quality gate by reducing the appearance of nested loops. This doesn’t improve code quality; it worsens readability, because the complexity is merely hidden and spread across the codebase.
More experienced developers might turn to design patterns. While design patterns provide well-defined structures that improve readability and maintainability, they also introduce steeper learning curves and the risk of over-engineering. Does a set of never-changing rules really require extracting patterns and making them configurable?
In my case, I initially used a single class to handle all the logic. While drafting the data structures and algorithms, I relied on inner classes instead of extracting them. I also kept all the business logic in the same class. This resulted in a bulky service class with a method that had a high cognitive complexity.
To resolve this, I:
- Extracted classes and enums to better organise the code.
- Moved most business logic to designated classes.
This reduced the cognitive complexity to 16 and transformed the previously anemic domain models into rich domain models that encapsulate both behaviour and data.
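A minimal sketch of the direction of that refactoring (class and method names here are illustrative, not the app's actual API): instead of the service interrogating raw tile data through nested conditionals, the Meld type answers questions about itself.

```java
import java.util.List;

// Sketch of a rich domain model: Meld knows whether it is a Pong or a Chow,
// so the scoring service no longer needs nested conditionals over raw tiles.
// Names and the integer-tile representation are illustrative assumptions.
record Meld(List<Integer> tiles) {
    boolean isPong() {
        return tiles.size() == 3 && tiles.stream().distinct().count() == 1;
    }

    boolean isChow() {
        if (tiles.size() != 3) return false;
        List<Integer> sorted = tiles.stream().sorted().toList();
        return sorted.get(1) == sorted.get(0) + 1
            && sorted.get(2) == sorted.get(1) + 1;
    }
}

public class RichDomainDemo {
    // The service-level check collapses into one readable expression.
    static boolean isAllInTriplets(List<Meld> melds) {
        return melds.size() == 4 && melds.stream().allMatch(Meld::isPong);
    }

    public static void main(String[] args) {
        List<Meld> hand = List.of(
            new Meld(List.of(1, 1, 1)), new Meld(List.of(5, 5, 5)),
            new Meld(List.of(9, 9, 9)), new Meld(List.of(3, 3, 3)));
        System.out.println(isAllInTriplets(hand)); // true
    }
}
```

Pushing the `isPong`/`isChow` decisions into the model is what brought the complexity down: each winning-hand check becomes a flat composition of small, well-named predicates rather than a tower of nested loops.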
Could I improve it further? Of course. Will I? Probably not – at least for now. Here’s why:
**Is the logic stable or likely to evolve?**

The winning rules in Mahjong are fixed. There’s no foreseeable need for modification or extension, which means the code is sufficient for its current purpose. If I ever decide to implement rules for Guangdong Mahjong or Japanese Mahjong, I’ll need to refactor the code to make the rules configurable – but I have no such plans at the moment.

**Will optimisation or refactoring add value?**

Refactoring would enhance readability but require a significant time investment with little functional gain.

**The risk of premature optimisation or over-engineering**
Over-focusing on minor code smells can lead to premature optimisation, where resources are spent solving problems that don’t impact the project’s goals.
While clean, readable code is always a desirable goal, it’s essential to weigh the cost of fixing issues against the project’s needs.
Of course, if you work in an organisation with a DoD that mandates zero code smells, you should either comply or open a discussion to refine the DoD if necessary. There’s usually a reason behind strict requirements like zero code smells or 100% code coverage – but that’s a topic for another day.
What’s your take on handling code smells?