The third part of our series will dive into the code to assess the quality of the software. First, we will look at the processes used to guarantee quality in the project and examine the quality of Material-UI's tests. Then we will connect the roadmap originally described in part 1 of this series to the code.
Besides looking at the code, we will also look at the importance of quality, testing and technical debt in discussions on GitHub and in the documentation. For this project, we also used various tools to assess the code quality and maintainability of the project. Finally, we will evaluate the technical debt in the system.
Software Quality Assurance Process
To find out how quality is maintained in a community project, the best approach is to look at what is required of contributors and how their contributions are checked. This is described in Material-UI's contribution guide[^1]. This guide encourages contributors to create an issue to discuss large changes with the maintainers before working on them. Here we already catch a glimpse of the critical role the paid developers play in guaranteeing the quality of the project: creating issues allows them to give input before anyone even starts working on a problem.
The guide continues with the advice to keep pull requests small, and with instructions on how to properly fork the repository and push changes. While that does not directly improve quality, small and properly pushed pull requests are easier to review. This is relevant because the core team monitors the pull requests: they review each pull request and either merge it, request changes to it, or close it with an explanation. Here we see the full scope of the core team's role: they actively review all pull requests and in this way maintain the high quality of the software project.
The section in the contribution guide finishes with a list of requirements for a merge, of which we highlight some relevant examples. If a feature is added that can already be replicated using the core components, its addition should be justified; in terms of software quality, they want to avoid unnecessary code for features that can already be composed of the existing components. When functionality is introduced or modified, additional tests to confirm the intended behaviour are required. The coding style should be followed, which includes running Prettier and the linter. Finally, the contribution should pass all automated checks, which we elaborate on in the following section.
Automated tests and continuous integration
When a change to an open source project is proposed in the form of a pull request, the impact of such a change needs to be assessed. In the case of Material-UI, several aspects of the project are especially important to check for when a contribution is made. Some of them are obvious: bugs that were previously solved should not be introduced again, and the behavior of components should usually not change in such a way that it becomes a breaking change.
There are also aspects that are not as obvious. One example is visual testing, which verifies that the appearance of components does not change; this is important for a UI library such as Material-UI. Another is the bundle size check, which determines by how much the total size of the build artifacts changes, as these artifacts are served to the users of applications built with Material-UI.
Fortunately, all of these inspections have been automated, taking the burden of checking these away from the core development team. Every pull request is checked using several services that automatically build and/or test the project at the state it would be in after the proposed patch is applied.
Many of the required checks are run on CircleCI. These include unit tests that verify, among other things, whether components respond appropriately to certain props being set. There are also specific tests that verify that the TypeScript types match the JavaScript components, and that the website documentation files are up-to-date with the latest version of the comments in the code, from which the documentation is generated. Finally, CircleCI runs regression tests and in-browser tests.
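To give an impression of such a unit test, here is a minimal sketch. It is hypothetical: the real suite is built on Material-UI's own test utilities, while this sketch uses Mocha-style `describe`/`it` with Chai and `@testing-library/react` to check that a prop is reflected in the DOM.

```jsx
import * as React from 'react';
import { expect } from 'chai';
import { render, screen } from '@testing-library/react';
import Button from '@material-ui/core/Button';

describe('<Button />', () => {
  it('renders a disabled button when the disabled prop is set', () => {
    render(<Button disabled>Save</Button>);
    // The underlying DOM node should reflect the prop.
    expect(screen.getByRole('button')).to.have.property('disabled', true);
  });
});
```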
There are still more checks. Material-UI also utilizes Argos CI, which renders the components, takes a screenshot of the components, and compares these images to previous renders. Any visual difference causes the check to fail, and a project member must approve the changes manually. Another build is executed on Azure Pipelines. This check will compile the project and the documentation, and figure out how the bundle size has changed. The output is automatically posted as a comment on the associated pull request, making it easily accessible. The last service to mention is Netlify, which automatically deploys the documentation website. It includes review domains, which means that anyone can open the documentation website that was specifically built for any given pull request, to see if it is up to standards.
Quality of testing
With all these services, it is easy to assume that Material-UI is well-tested. But to verify that, we need some test coverage tools to gather this information. Luckily, the Material-UI developers know this all too well and have set up the appropriate coverage tools that interface with the JavaScript testing frameworks in use. As the unit, integration and browser tests are run, Istanbul collects the coverage data and outputs it to an HTML report.
We have executed the tests with coverage for the most recent commit on master at the time of writing (6d62842d6f53c1726fc688131af0a16d561e3bd1).
At the end of the test run, nearly 2800 tests have been executed, of which none have failed.
The coverage report indicates the following results:
| Coverage type | Percentage |
| --- | --- |
| Statements | 96.6% |
| Branches | 87.58% |
| Functions | 97.79% |
| Lines | 96.58% |
These numbers indicate that the code coverage is extensive. One must still be careful drawing the conclusion that the project is well-tested, but this is a good sign. The files that pull down the percentages are usually either very complex files, which is not ideal, although even those are relatively well-tested, or so-called index files that merely include other files. Note that these scores only reflect the stable components and code, as components that are considered alpha/beta are not included in the coverage.
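To illustrate, such an index file typically contains nothing but re-exports; a sketch of the common pattern, not a verbatim copy from the repository:

```js
// index.js: only re-exports, no logic of its own,
// yet its lines still count toward the coverage report.
export { default } from './Slider';
export * from './Slider';
```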
Despite the high testing scores, the `bug` label is still a frequent occurrence when browsing the issues list.
Typically, such issues are quickly resolved and covered by a new regression test, which means that the test suite is continuously evolving and improving.
After all, Dijkstra’s quote applies to this software project as well: “Program testing can be used to show the presence of bugs, but never to show their absence!”[^2]
Roadmap to code
As mentioned previously in the first post, the main priority right now is to introduce more components[^3]. Generally, these new components are first introduced into the @material-ui/lab package[^4], as mentioned in last week's post. By scanning through the merged pull requests, one finds that most changes are actually made to the documentation found in docs, while most other changes, such as bug fixes and performance enhancements, are primarily made inside the @material-ui/core package.
The importance of testing, technical debt and code quality in discussions & documentation
One way Material-UI avoids technical debt, lack of tests and low quality is the lab package[^5]. A component in the lab is allowed to make frequent breaking changes; this way, new components can more easily avoid the technical debt that would be introduced by having to avoid breaking changes. Consumers of Material-UI are warned about these breaking changes before they start using the package.
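Consumers opt into the lab explicitly through the package they import from. A sketch (at the time of writing, Alert lived in the lab while Button is a core component):

```js
// Stable components come from the core package...
import Button from '@material-ui/core/Button';
// ...while components that may still introduce breaking changes
// come from the lab package.
import Alert from '@material-ui/lab/Alert';
```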
There is a strict requirement for tests and code quality before a component can be moved from the lab to the core package. This ensures the core components have a high standard of quality and good test coverage. Moreover, there is the requirement that no breaking changes are expected for the component in the short or medium term. This avoids introducing unnecessary technical debt or breaking changes in the core package.
In many pull requests and issues, one sees that the core developers focus on various code quality aspects. See, for example, the original pull request for the Stepper component or the newer Alert component. Additionally, a lot of discussion about the best design decisions takes place, with some issues focusing solely on design decisions: Stop IE11 support. New tests are often asked for when making certain bug fixes or additions (example). For changes that do not change any functionality, the only requirement is usually that the existing tests still pass on the new implementation.
Refactoring candidates based on code quality and maintainability assessment
To evaluate the code quality and maintainability, we used both SonarQube and Sigrid, provided by SIG. SonarQube was overwhelmingly positive about the project's maintainability, giving all components an A rating. SIG, on the other hand, had some concerns regarding the unit complexity and unit size of several components: unit size scored on average 1.6 out of 5 stars, while unit complexity scored on average 2.3 out of 5 stars.
According to the metrics computed by SIG, the most important risk factors for the maintainability of Material-UI are the unit size (based on LOC) as well as the unit complexity (based on the McCabe complexity).
Interestingly, it was found that if a function was a high risk factor for one of the two metrics, it would usually also be a high risk factor for the other metric.
Most of these risk factors stem from the arguments given to React.forwardRef, which takes a rendering function as its argument. This rendering function can become extremely large.
For example, in Material-UI's slider this rendering function starts at line 336 and ends at line 779. While this is the largest example, more similar cases can be found throughout Material-UI.
Note that this is not an issue specific to Material-UI, as this is how React functional components work. In fact, Material-UI has made an effort to split the large rendering functions into smaller internal functions.
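A minimal sketch of the pattern in question; the component name is hypothetical, but the structure mirrors how Material-UI's components are written:

```jsx
import * as React from 'react';

// React.forwardRef takes a single rendering function, so everything the
// component does (prop handling, event handlers, rendering) lives in one
// unit, which is what drives up the unit size and complexity metrics.
const FancySlider = React.forwardRef(function FancySlider(props, ref) {
  const { value, ...other } = props;
  // In Material-UI's real Slider, this body spans hundreds of lines.
  return (
    <span ref={ref} {...other}>
      {value}
    </span>
  );
});

export default FancySlider;
```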
A minor refactoring suggestion from SIG was related to code duplication. Material-UI is a library of components, and some components are closely related to others, such as Input and FilledInput. Currently, there is a lot of duplication in these components, both in defining the styles used and in the functional properties of the component. In the example of Input and FilledInput, other than the different names used for some variables, the main difference lies in the styles of the two components. Both components make use of InputBase, which already reduces the possible duplication by a large margin, but it could be decreased further.
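A heavily simplified sketch of that structure (the real components pass many more props and attach their own style sheets):

```jsx
import * as React from 'react';
import InputBase from '@material-ui/core/InputBase';

// Both variants delegate the heavy lifting to InputBase and differ
// mainly in the styles they attach, which is where the remaining
// duplication lives.
const Input = React.forwardRef(function Input(props, ref) {
  return <InputBase ref={ref} {...props} />; // + underline styles
});

const FilledInput = React.forwardRef(function FilledInput(props, ref) {
  return <InputBase ref={ref} {...props} />; // + filled-background styles
});
```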
Assessment of technical debt
To measure technical debt across the project, we will use SonarQube, which has support for JavaScript and React[^6]. Since Material-UI itself does not provide a SonarQube configuration, we will use the standard JavaScript and React configuration for our technical debt analysis.
Using SonarQube, we found the following technical debt for the different components of the Material-UI project on March 25th, 2020:
| Component | Technical Debt |
| --- | --- |
| core | 1d5h |
| styles | 4h |
| system | 5m |
| types | N/A |
| utils | 20m |
| icons | 0m |
| lab | 2h35m |
| docs | 0m |
| codemod | 1d6h |
| eslint-plugin | 0m |
| babel-plugin | 0m |
| Total | 2d18h |
It must be noted that it was not possible to retrieve the technical debt for the types component because the component does not contain source code. As can be seen, the two components with the most technical debt are the core and the codemod component. As noted earlier, the codemod component is not really under active development, thus we feel comfortable disregarding the amount of amassed technical debt in said component.
SonarQube-detected bugs
Aside from measuring technical debt in software projects, SonarQube is also capable of detecting possible sources of bugs. Using this feature we found two instances of code that could lead to potential bugs in the future. We decided to fix both of these instances and create pull requests for them, which were swiftly merged by the core team.
The first instance was an if-statement that could never be false, so we simply removed the entire if-statement. The second instance was a function that did not take any arguments being called with arguments from a different location; JavaScript allows this and just disregards the passed arguments. Neither of these cases actually caused any erroneous behaviour, but we deemed that they might cause confusion, and thus potential bugs, in the future.
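For illustration, hypothetical reconstructions of both patterns (not the actual Material-UI code):

```js
function doSomething() {}

// 1. A condition that can never be false: the branch is dead weight
//    and suggests a check that no longer does what it was meant to do.
const isValid = true;
if (isValid) {
  doSomething();
}

// 2. A zero-argument function called with an argument: JavaScript
//    silently discards it, which hides the caller's intent.
function reset() {}
reset(42);
```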
[^1]: https://github.com/mui-org/material-ui/blob/master/CONTRIBUTING.md
[^2]: https://en.wikiquote.org/wiki/Edsger_W._Dijkstra
[^3]: https://material-ui.com/discover-more/roadmap/
[^4]: https://github.com/mui-org/material-ui/tree/master/packages/material-ui-lab
[^5]: https://material-ui.com/components/about-the-lab/
[^6]: https://docs.sonarqube.org/latest/analysis/languages/javascript/