My minimal stack and approach for writing professional single page applications (SPAs).
Introduction
When writing a web application, one has to choose from a vast collection of options. You may deploy it as PHP enriched with some jQuery (don’t laugh, that’s probably the majority of contemporary web apps).
PHP usually means server side rendering. I opt for single page applications instead, because in my experience it is a similar effort but if done right, you end up with a superior architecture, better user experience and better maintainability. But the expertise required for getting this right is quite different and in some regards goes far beyond what’s required for server side rendering.
A major point to consider is that you’ll need a server API if you want a SPA. API design goes wrong more often than not. So if you are unsure and don’t have somebody with the required experience, go for server side rendering.
Modern SPA development usually involves pulling in hundreds of thousands or millions of lines of third party code. This is no exaggeration. Webpack is the go-to solution for building and bundling. Its latest release as of this writing is a 20 million byte download. That’s millions of lines before you even started to choose a framework.
The problem with all that third party code is: you use it, you own it. That means you will have to invest effort into keeping your third party code patched and running. A couple of years down the line this effort will likely become significant. It also means that in that future you’ll need to find developers willing to work with your dusty weird code base instead of the hot new stuff others are having fun with.
If you are unwise and unlucky enough to choose third party code that becomes abandoned upstream, you’ll really own it – you’ll not merely have to patch it, you’ll have to develop these patches. If that codebase is substantial, you’re toast.
Thus the best third party library is the one you did not include – just as your best code is the code you never wrote. However, this wisdom has its limits. Best established practice has you organize your code in a certain way, and without using any library that means you’ll write a considerable amount of repetitive boilerplate. I don’t want that, it’s bad.
Over many years I went from pure vanilla via libraries and frameworks to my current compromise. The rest of this article lays out the principles on which I build my current approach. I hope it enables you to figure out your own minimal stack.
Two Kinds of Code
Actually more than two, but for simplicity’s sake: you’ll write user facing code that runs in the browser (tier one code). You’ll also write code for building, testing, deploying and otherwise managing that code (tier two code). You don’t control the environment (the browser) in which the user facing code will run. You do control the environment in which the life-cycle automation and management code runs. Thus the former code should be held to stricter standards than the latter.
One of my guiding principles is: keep my tier one code standards conforming. It’s code that a modern browser can execute without any ado. That means the tier two code is – for a large part – exchangeable!
For tier two code I try to restrict myself to stuff that is either exchangeable or such a widespread standard that the chance of it being abandoned in the next decades is minimal. In web projects NPM is such a standard. Thus life-cycle management happens as far as reasonably possible in NPM scripts (calling shell scripts or gulp if they become too complex).
Here’s a list of the NPM tier two dependencies in a project I’m currently working on:
"devDependencies": {
"@rollup/plugin-node-resolve": "latest",
"browser-sync": "latest",
"chai": "latest",
"chrome-coverage": "latest",
"eslint": "^8.4.1",
"eslint-config-google": "^0.7.1",
"eslint-plugin-html": "^1.7.0",
"jsdoc": "latest",
"jsdoc-to-markdown": "^4.0.1",
"mocha": "latest",
"mocha-headless-chrome": "git://github.com/schrotie/mocha-headless-chrome.git",
"rollup": "latest",
"rollup-plugin-terser": "latest"
},
These do 5 things:
- provide a development server (run the code): browser-sync
- lint the code: eslint-*
- build the code: rollup*
- test the code: mocha*, chai & chrome-coverage
- document the code: jsdoc*
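To give an impression of how these tasks map onto NPM scripts, here is a sketch of a matching `scripts` section. The script names, file paths and flags are illustrative assumptions, not copied from my actual project:

```json
"scripts": {
  "start": "browser-sync start --server --files 'src/**/*'",
  "lint": "eslint src test",
  "build": "rollup --config",
  "test": "mocha-headless-chrome -f test/index.html",
  "doc": "jsdoc2md src/*.js > API.md"
}
```

Each entry wraps one exchangeable tool, so replacing, say, the development server only means editing one line.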
1-3 are exchangeable and require very little configuration. Should they become obsolete/abandoned it’s very easy to skip or replace them. 4 and 5 are the most widespread standard tools for what they do. Should Mocha become obsolete, that would be a major pain in the behind, but it is rather unlikely to happen in the foreseeable future. I’ll get into more detail about testing below. JSDoc has been around this whole century, yet it feels slightly more obscure. Still the best available doc-standard to my knowledge.
require → import
If you are still using require, that’s a hallmark that your code is soon going to be legacy. You absolutely should be using ECMAScript modules with import by now. Personally I’ve been using import for years now, and I’ve been using it without a build step, and you should, too!
It has been a minor pain to do that, because one needed to use fully qualified paths everywhere, which comes with its own set of maintainability drawbacks. It was still better than using require or a build step, as far as I’m concerned. Note: you are certainly going to use a build step for production, see below. However, now you can also use nice module names and paths in Chrome and Firefox, and very likely stick to the future standard, with import maps.
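An import map lets the browser resolve bare module names without any build step. A minimal sketch (the module names and paths here are hypothetical examples):

```html
<script type="importmap">
{
  "imports": {
    "redux": "/node_modules/redux/es/redux.mjs",
    "my-lib/": "/src/lib/"
  }
}
</script>
<script type="module">
  import { createStore } from 'redux';
  import { something } from 'my-lib/util.mjs';
</script>
```

The trailing-slash entry maps a whole path prefix, which keeps the map small for your own source tree.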
For my taste, skipping a build step during development is a major gain. It gets a huge layer of complexity out of my way and it speeds up development. With something like browser-sync and without a build step, iteration/feedback is instantaneous. I know and love stuff like vuex state persistence. But in my experience this can usually be trivially mocked in development, and the speed and simplicity of working without builds and source maps is much preferable to me.
One significant drawback, however, is that third party dependencies often do not yet play well with this approach. The situation is improving, but as of this writing, if you tried this approach, you would find yourself throwing out some dependencies because they are too difficult to get running, and considerably bloating your importmap to still get others to work.
If you are me, though, this is manageable, because you radically restrict the use of third party dependencies in tier one code anyway.
No God Framework
God frameworks are big frameworks that do everything for you (Angular, Vue, React and others). I got the term from this excellent article, which I also recommend if you need to persuade yourself that you may want to avoid them. Also consider this excellent and balanced analysis.
I’d like to add one essential point, though, that the aforementioned article misses, because it concentrates on technology: the problem that god frameworks solve is not a technical problem, but an organizational problem – they drastically limit the degrees of freedom of implementing your project. Thus they get all involved developers on track, reduce the need for communication, improve coherence and high level structure of a project.
All of these are good things! If you do not have a good organization, good communication, if you are unsure about how to structure code, by all means, choose a framework. If you don’t choose one, you’ll have to do this work yourself. It is not more work, but you had better know what you are doing. Skipping frameworks increases the number of ways in which you can fail. However, when you succeed, you’ll be rewarded by a more fulfilling job – because of a much improved feeling of self agency – and a superior result.
The platform is getting better and better, continuously reducing the need to resort to any third party or library code at all. However, there are a few things where you should resort to library code.
API microservices are mostly pretty trivial. What you need there (if you are not coding in Go – its standard library has you covered) is a request router to help you structure your code in a transparent, reproducible way. Try to find something that does routing and nothing else. I wrote prouty because it allows me to write very concise modern code, but that’s a matter of taste. Just find a router or write one yourself, it’s no magic. You also need a DB driver. That’ll mostly be determined by the DB you use.
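To illustrate that routing really is no magic, here is a sketch of the core of a router: a path matcher that extracts named parameters. This is not prouty’s actual API, just the underlying idea:

```javascript
// Minimal route matcher: turns a pattern like '/user/:id' into a
// RegExp and extracts the named parameters from a concrete path.
function compileRoute(pattern) {
  const names = [];
  const regex = new RegExp('^' + pattern.replace(
    /:([^/]+)/g,
    (_, name) => { names.push(name); return '([^/]+)'; }
  ) + '$');
  return (path) => {
    const match = regex.exec(path);
    if (!match) return null;
    const params = {};
    names.forEach((name, i) => { params[name] = match[i + 1]; });
    return params;
  };
}

// A router is then little more than a list of [matcher, handler] pairs:
const matchUser = compileRoute('/user/:id');
console.log(matchUser('/user/42'));  // → { id: '42' }
console.log(matchUser('/nope'));     // → null
```

A real router adds HTTP-method dispatch and middleware on top, but the structuring benefit comes from this mapping of paths to handlers.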
The frontend needs two things: state management and DOM manipulation. If you work on a somewhat complex SPA, just use Redux for state management. It’s minimal, completely independent from React and does its job. Redux is as good as popular libraries get.
On a side note: Redux’ value lies in nudging you to write stateless state manager code. What sounds like an ironic remark is a major gain in maintainability. If you are not familiar with functional programming and/or not used to thinking in terms of stateful/stateless code, I suggest using a framework and familiarizing yourself with these concepts before embarking on min-stacking.
If your software is not that complex, Redux will add considerable complexity overhead. I wrote xt8 as a minuscule toolkit for putting together your own state management. But again: just choose or write some state manager that suits your needs. If you know what you’re doing, writing your own alongside your project is also perfectly fine.
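To make “write your own” concrete, a hand-rolled store in the spirit of Redux can be a couple of dozen lines. This is an illustrative sketch, not Redux’ or xt8’s actual implementation:

```javascript
// Minimal Redux-style store: state only changes by dispatching
// actions through a pure reducer; subscribers are notified on change.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (fn) => listeners.push(fn),
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((fn) => fn(state));
    },
  };
}

// A pure, stateless reducer – trivially unit-testable:
function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
}

const store = createStore(counter, 0);
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState()); // → 2
```

All the application-specific logic lives in the pure reducer; the store itself is generic plumbing.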
DOM manipulation is not so much about manipulating the DOM as it is about structuring your code. This is a pattern that repeats all through this section, as you may have noticed. The DOM is organized in a tree, and from looking at the DOM it should be easy to find the place in your code that deals with that part of the DOM. That’s about it.
Nowadays this job is usually taken by the data binding facilities of your favorite god framework. But you can also do this yourself, just do it the same way anywhere in order to remain maintainable. Cut your DOM up into web components (likely without shadow DOM) and use these for structuring. I wrote bindom for my own needs and it does structuring and data binding at pretty low cost and non-intrusively.
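The core of one-way data binding is DOM-independent and fits in a few lines. The following sketch is not bindom’s API, just the underlying idea; in the browser the registered setters would typically assign to element properties (el.textContent, input.value, …):

```javascript
// Wrap a model in a Proxy and invoke registered setters whenever
// a property changes – the essence of one-way data binding.
function bind(model) {
  const bindings = {};
  const proxy = new Proxy(model, {
    set(target, key, value) {
      target[key] = value;
      (bindings[key] || []).forEach((setter) => setter(value));
      return true;
    },
  });
  proxy.to = (key, setter) => {
    (bindings[key] = bindings[key] || []).push(setter);
    setter(model[key]); // initialize the view immediately
  };
  return proxy;
}

const view = [];                        // stand-in for a DOM node
const state = bind({ title: 'Hello' });
state.to('title', (v) => view.push(v));
state.title = 'World';
console.log(view); // → ['Hello', 'World']
```

Because the binding core knows nothing about the DOM, it can be unit-tested without a browser, which matters for the testing approach below.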
With all the tools I mentioned above, my personal tier one stack clocks in below 2K lines of code for front- and backend, gives me great structure, great performance and no bloat. This makes a world of difference. With this stack I’m at least a freaking order of magnitude below the code size any off-the-shelf framework can offer – probably it’s more like two orders of magnitude and possibly even three (if you go full Angular and stuff …).
This is a huge gain and for that it is okay to compromise. 2K lines instead of 200K means you don’t have to worry at all about whether that code will be maintainable ten years from now. Sure it will be. The stack becomes abandoned? Just own it, 2K lines will be less than 10% of your whole app-code, even if your app is on the simpler side. Only real simple stuff will be on par with 2K lines of code, and in that case you don’t need to worry about structure and should absolutely go full native without any helper libraries.
Quality Assurance
Professional software comes with automatic quality assurance. If you deliver anything without automatic QA, you have not done your job and you have wasted your time and your client’s money. Simple as that. And because that is so fundamentally true, don’t let your client or anybody else tell you that you don’t need automatic QA. That is your decision, not theirs; you should be qualified to decide that, not them. Take pride in your work, deliver quality and the tools to prove it.
The most basic kind of JavaScript QA is a linter. This is an exchangeable tool, so don’t ruminate over it, just choose eslint 😉 Whatever – since you should have at most a few linter specific comments in your code, swapping it out for something else later should be doable. Just choose a linter and make good use of it.
Then there’s unit tests. Mocha is the most common test runner and it also underlies other tools. Thus Mocha is a good bet for a test runner. The runner may be your most critical third party software choice. The reason is that much of your test code will be more or less specific to your chosen runner and exchanging the runner will imply a major test rewrite.
I also use Chai for assertion in Mocha tests. Chai is pretty popular but the case for Chai is not as strong as the case for Mocha. You can go without an assertion library without excessive overhead, while going without a runner is less advisable.
Look for clear structure and good reports in your runner. Unit tests are also an essential part of your documentation because the tests articulate very precise expectations for your code’s behavior. Thus you want to be able to find the test for a given part of code.
I found that aiming for a unit test coverage of 95%+ is realistic and sensible. You really want most of your code tested, but there are cases where it’s not worth the effort (like browser specific code, some integration code and sometimes error handling).
The key for getting good coverage with reasonable effort is writing your tier one code to be testable. All code in your state manager should be trivially testable. It should be stateless and utterly DOM-independent, so just pull it in and run it. Your DOM manipulation code should ideally be mostly setting bound (as in data-binding) variables, then testing that is also trivial. This leaves little code that’s more effort to get to.
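As an illustration of “stateless and DOM-independent”: a state-manager function like the following can be pulled into a Mocha test and exercised directly, with no browser, mocks or setup. The function and data shapes are made up for the example:

```javascript
// Pure function: same input, same output, no DOM, no globals.
// It returns a new state instead of mutating its input.
function toggleTodo(state, id) {
  return {
    ...state,
    todos: state.todos.map((todo) =>
      todo.id === id ? { ...todo, done: !todo.done } : todo
    ),
  };
}

// Testing this is just: import, call, assert.
const before = { todos: [{ id: 1, done: false }] };
const after = toggleTodo(before, 1);
console.log(after.todos[0].done);   // → true
console.log(before.todos[0].done);  // → false (input untouched)
```

Code written in this style makes 95%+ coverage cheap, because every test is a plain function call followed by an assertion.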
I usually write Mocha tests that run in the browser. That way I can rely on the browser’s debugger when developing my tests. But tests should also run in automatic pipelines, where the test report should be logged and evaluated in the console. I use mocha-headless-chrome for this. I can thus use the exact same tests for debugging in my favorite browser and in the command line in pipelines anywhere.
Ensuring that the unit tests cover the code sufficiently should also be automatically enforced. I found that only browsers reliably ship the newest features I tend to use and I want my code to be unit-testable without polyfilling and/or transpiling – thus I can also enjoy minimal stack complexity when developing tests.
However, that means test coverage has to be determined in the browser on the raw source code. That is not something that istanbul – the go-to coverage solution – does. However, Chrome comes with built-in coverage reporting. I modified mocha-headless-chrome to allow me to extract Chrome’s coverage report from its runs. I also wrote a collection of simple tools to work with Chrome’s coverage report.
This setup allows me to significantly reduce the complexity of my test environment (as compared to e.g. istanbul). At the same time I get superior coverage evaluation and reporting, because Chrome’s report also catches unexecuted statements in lines that contain executed statements. My own coverage report that parses Chrome’s data is superior to Chrome’s own report inside its dev-tools, which has some problems. Mine does not account for source maps, though.
Should you need coverage reporting for node.js projects, you should look into C8 or something like it. It gives you configuration free exchangeable coverage reporting with a lot less complexity than istanbul (since you use node anyway).
In most cases you also want integration and possibly end to end tests for your frontend (the latter also testing your backend). There are several frameworks for this (here you absolutely need a framework) and I recommend Selenium. It’s standard, and it’s the only widespread solution I’m aware of that has wide cross browser support.
Apple’s Safari is the new Internet Explorer and you likely want to test for it. You may also want to test on desktops and mobiles in order to assert the responsiveness of your app. Selenium allows all of this, possibly with the help of something like browserstack.
Writing Selenium tests has also become a lot more straightforward with the advent of async/await. So these days I consider it a reasonable choice. As a runner you can still use Mocha with Selenium.
Build & Deploy
I aim for keeping my JavaScript files below a hundred lines or so, and I believe you should, too. But that means even rather small projects usually comprise dozens of files. When there are few fonts and graphics, I aim for delivering my SPA as one single HTML file. That way I can occasionally just send the file to stakeholders for review without them requiring a server. Again: minimal complexity.
In any case, production frontend code should comprise few files, thus a build/bundling step is absolutely required in the end. But since everything I do runs in modern browsers without a build, building is just bundling (and possibly including some extra polyfills) and can be done with an exchangeable toolchain. I chose rollup years ago, when there were fewer alternatives available. You should just choose a minimal standards compliant ESM bundler that suits you, and keep an eye on exchangeability.
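For such a bundling-only build, a minimal Rollup configuration can be a handful of lines. This is an illustrative sketch using the plugins from the dependency list above; the input and output paths are assumptions:

```javascript
// rollup.config.js – bundle ESM sources, resolve bare imports, minify.
import resolve from '@rollup/plugin-node-resolve';
import { terser } from 'rollup-plugin-terser';

export default {
  input: 'src/main.js',
  output: { file: 'dist/bundle.js', format: 'esm' },
  plugins: [resolve(), terser()],
};
```

Because the sources are already standard ESM, there is no transpilation stage to configure – the config stays exchangeable.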
I already wrote a bit about JSDoc. I use jsdoc-to-markdown in order to be able to deploy docs to tools like GitLab or GitHub and have them display nicely there. Up to date docs should always be deployed automatically.
Finally the act of deployment is completely dependent on where your pipeline runs and where you want to deploy. I believe everything, including deployment, should also run locally on any developer’s computer just using NPM. That means you try to restrict your use of platform specific tools on e.g. Azure or AWS.
Still, for complete lifecycle management, NPM alone may be too limited, especially if you want to share lifecycle management code between projects. I much prefer writing code for that instead of configuration and thus I gravitate towards shell or gulp for such tasks instead of things like grunt.
Conclusion
One can write professional SPAs and microservices with minimal dependencies in tier one code. The key is leveraging the standard libraries of the browsers and of node/deno/Go/… to their fullest. Tier two code is harder to keep free of dependencies, and I argue that in the case of tier two code dependencies are more acceptable.
When you embark on this journey of figuring out your very own minimal stack, you’ll be rewarded with more fulfilling work and better long term value for your customers.