The Perfect Web Application Framework

The perfect framework for building your web application is finally here. And only you can find it.


In my previous article I discussed what’s wrong with present-day web application development – among other things, it locks you into the enclosed ecosystem of the major framework you choose for your application. With one choice of framework you answer dozens of questions about what it is you want to achieve. You also limit your choice of developers to a small subset of the developers out there. And you end up with an application that shows, at best, mediocre performance.

Yet, what’s perfect for one project is certainly not for another. So here I’m going to discuss the overall problem of how to ask all these questions yourself and pick and choose your perfect framework. In that endeavor I’ll concentrate on the client side, since that’s where I’m an expert, and I’ll only consider a radically modern approach.

The benefit you’ll gain from daring a custom solution is ultimately: saving money. By tailoring your tech to your project you can improve code reusability, enhance developer engagement, increase maintainability and lower maintenance cost. You can also get such great performance that you can easily extend your app to become a progressive web app (PWA) and save the money of developing native apps, plus the significant expenses of marketing and racketeering in the major app stores.

Going all-in on a custom framework also greatly enhances your chances of utterly botching it. This is not for the faint of heart, nor for the faint of expertise. The first article of this series was written for non-tech folks; this one requires a high-level understanding of several aspects of web development, and the third and final part will drill down into low and dirty code.


Work in Chrome. As of this writing it’s the only browser with sufficient support for an all-in native approach. You’ll support the other browsers later; we’ll come to that. Chrome also provides the best built-in developer tools by a long shot. That said, using a standard polyfill you can also develop in other browsers, except for IE. Your developers should indeed check their stuff in other browsers, but my recommendation for the main development work is Chrome.

Create one HTML file. That file contains just one application tag in the body and includes your single app entry point script in the head. You should also include the standard web components polyfill there, in order to support other browsers. All development work – apart from package dependency management and build setup – is done in JavaScript from that point on.
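
A minimal version of that single HTML file could look like this. The tag name my-app, the polyfill path and the entry point name app.mjs are placeholders for whatever your project uses:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My App</title>
    <!-- standard web components polyfill, for browsers without
         native support (path is a placeholder) -->
    <script src="webcomponents-loader.js"></script>
    <!-- the single app entry point, loaded as an ES module -->
    <script type="module" src="app.mjs"></script>
  </head>
  <body>
    <my-app></my-app>
  </body>
</html>
```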

All the JavaScript you write should be ECMAScript 6 or, better yet, ECMAScript 2018. Use everything you/your team knows and finds useful of all the language features supported in the latest version of Chrome. Your components (see below) must be native ECMAScript classes; the rest is up to you. You’ll make that work for other browsers by adding polyfills and transpiling with Babel.
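
To make this concrete, here is a minimal native web component as a sketch. The tag name app-greeting is made up, and the fallback base class only exists so the snippet also loads outside a browser:

```javascript
// Fall back to a plain class so this sketch also loads outside
// a browser (where HTMLElement does not exist).
const Base = globalThis.HTMLElement ?? class {};

class AppGreeting extends Base {
  constructor() {
    super();
    // pure logic stays separate from DOM work, so it stays testable
    this.name = 'world';
  }

  greeting() {
    return `Hello, ${this.name}!`;
  }

  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.textContent = this.greeting();
  }
}

// Register the element only where the custom element registry exists.
globalThis.customElements?.define('app-greeting', AppGreeting);
```

Note that the class is a plain native class – no framework base class, no decorators – which is exactly what the browser’s custom element registry expects.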

Dependencies must be imported using ECMAScript modules. That is, all your JavaScript files are actually JavaScript modules happily juggling import and export statements (and ending in “.mjs” instead of “.js”). That’s currently the only way to get a straightforward development flow, and it will only get better over time.

You should choose a code linter. ESLint is currently the customary choice, but you know: pick your perfect. Urge your developers to configure their editors to do linting on the fly and highlight lint problems right in their source code. This is best practice and provides a significant productivity boost.

Decide on a set of linting rules and coding standards with your team. I have one suggestion on top of the usual: no line longer than 80 characters (okay, that is common now) and no function/method longer than ten lines, ideally below five. Together with descriptive function/method/parameter names, this makes for very readable, maintainable code that beats most comments, since it never goes stale.
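
If you go with ESLint, the two limits above can be enforced directly. The rule names below are real ESLint rules (max-len, max-lines-per-function); the exact numbers are just the suggestions from this article, and the rest of the config is one plausible starting point, not a prescription:

```json
{
  "parserOptions": { "ecmaVersion": 2018, "sourceType": "module" },
  "env": { "browser": true, "es6": true },
  "extends": "eslint:recommended",
  "rules": {
    "max-len": ["error", { "code": 80 }],
    "max-lines-per-function": ["error", { "max": 10 }]
  }
}
```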

Educate your developers to write code for their colleagues to read, not for a computer. Use comments for section headings, and arrange methods in the order that makes sense when reading them. Build code reviews into the deployment process.

Basic Architecture

The basic building blocks of the visuals are web components. Starting from a collection of half a dozen arcane concepts, Angular has narrowed that down to … components. React and Vue have started right there. The web platform now provides native web components.

Web development has been pointing toward web components as its holy grail for a long time now and finally they are here. Web components are a complete no-brainer – what else would you do? You can do them in a framework, but I’m here to tell you to do your own thing, so native vanilla web components it is.

If you are building a tiny application: KISS (Keep It Simple, Stupid). Just write it. You may just write a single script containing all your components. Do your data binding with simple two-way bindings. If you know that your app will become anything more than tiny, this approach dooms you to fail (unless you refactor soon), but if you’re just messing around, or if it will in all likelihood always stay tiny, you’re fine.
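
Such a simple two-way binding can be hand-rolled in a few lines. This is a sketch: the bind helper is made up, and the plain object input stands in for a real &lt;input&gt; element so the snippet also runs outside a browser:

```javascript
// Minimal two-way binding sketch: keeps a model field and a form
// field in sync, in both directions.
function bind(model, key, input) {
  // model → view, initial render
  input.value = model[key];
  // view → model: a real element would fire 'input' events;
  // here the handler is exposed directly for simplicity
  input.oninput = () => { model[key] = input.value; };
  // model → view on programmatic writes
  let value = model[key];
  Object.defineProperty(model, key, {
    get: () => value,
    set: (v) => { value = v; input.value = v; },
  });
}

const model = { title: 'untitled' };
const input = { value: '' };
bind(model, 'title', input);

model.title = 'hello'; // input.value becomes 'hello'
input.value = 'typed';
input.oninput();       // model.title becomes 'typed'
```

For a tiny app this is all the “framework” you need; the trouble starts when many such bindings interact, which is exactly what the structured approach below addresses.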

For anything bigger, though, even on the lower end of medium-sized, you should consider considerably more structure. As web development matures further, more proven architectures will emerge. Currently the most compelling, battle-proven architecture is: state engine, representation, UI logic. That’s what React does. If you are not familiar with this approach, you should do a Redux tutorial. It somewhat maps to the good old Model (state engine), View (representation), Controller (UI logic).

However, before going into details about architecture: keep in mind that you are on a quest to find your perfect. The best architecture depends on your team as well as your project. A rigid, finely grained structure may benefit a team of inexperienced developers just as it may hinder very strong developers.

For example declaring all Redux action identifiers in separate constants may be perceived as harassment by strong developers in a medium project. The same practice will become a blessing that prevents errors in a huge project.


Redux (standing on the shoulders of the giant Flux) introduced a rather strict definition of how information flows through the application, and this definition turned out to produce consistent and good results. Currently Redux is the 800-pound gorilla in application state management. You can safely choose another tool for this, but you may as well choose Redux. It has a pretty small footprint and low complexity, and it will get you started in the right direction.

I said Redux’ concepts somewhat map to model/view/controller. The emphasis here is on somewhat. For several years now, mainstream web development has revolved around DOM templates and data binding. And in basically all “Hello world” examples, that’s it. For simple applications there is no controller, just data (i.e. model/state) and data binding.

Redux further stipulates that data binding is a one-way street: it only goes from the model/state to the view. The other direction is reserved for “actions” that may change the state, which is then fed back to the view.
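
The one-way flow can be sketched with a minimal hand-rolled store. To be clear, this is not the Redux API, just a toy along the same lines: state only changes through dispatched actions, and updates flow one way, from state to view:

```javascript
// Minimal store sketch (not Redux itself): holds the state, applies
// actions through a reducer, notifies subscribed views.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // only actions change the state …
      listeners.forEach((listen) => listen(state)); // … then views re-render
    },
    subscribe(listen) { listeners.push(listen); },
  };
}

// A reducer is a pure function from (state, action) to the new state.
function counter(state, action) {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
}

const store = createStore(counter, 0);
store.subscribe((state) => console.log('render with', state));
store.dispatch({ type: 'INCREMENT' }); // logs: render with 1
```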

In the following I’ll refer to various parts of the architecture with three terms each. First the traditional MVC (model/view/controller) term, then React/Redux (state/presentation/container) and finally the terms I use when referring to native technology (state/component/connector).

Your Perfect Architecture

You should pick the architecture approach that best fits your project and your team. But whatever you do, you’ll likely end up with a structure along these (MVC) lines: you have a src directory that’ll accommodate all your sources. Inside of that you’ll have a subdirectory to accommodate your views/presentations, i.e. your web components. Depending on the complexity of your project you’ll have further subdirectories inside src/components.

Mirroring the structure of the components directory you’ll have one or rather two directories for application logic: the controller/container/connector directory. I call it connector because it connects/hooks up the view/presentation/component to the model/state.

React’s containers are actually containers in that they contain the presentational components (though they are purely virtual components that leave no direct trace in the DOM). My connectors are pure logical units that contain nothing. And their basic task, when considering simple examples (I’ll do this in the next part of this series), is to (bidirectionally) connect state and DOM, not to control anything.

Complex application logic will go into the connectors (and partly, reducers) and promote them to become controllers, but even then they still connect. So they are always connectors, sometimes controllers, and never containers.

Finally you’ll ideally have a directory for your model/state. One of Redux’ major achievements is encouraging a structured model/state that is split along very similar lines as components and connectors. And more: making the different parts of the model/state independent and self-sufficient.

Architecture Wrap-Up

If you use Redux, the state directory is further subdivided into actions and reducers. Ideally you want a structure in which a new developer finds their way without a map. Even more importantly, such a structure slightly reduces friction during the normal workflow. Therefore you may want to spend a little time figuring out your perfect structure and naming with your team. When I use Redux I find the following somewhat intuitive:

  • src/
    • dom/ (web components go here)
      • wiring/ (see below)
    • logic/
      • actions/
      • connectors/
      • reducers/
    • app.mjs

You will probably diverge more or less from this – perfection is situational. But you absolutely should spend some time figuring out your directory tree. This is the backbone of your architecture. It should be the reflection of a good basic architecture and it should make sense to anyone involved.

Special Components

The major frameworks all come with facilities to hook up the views to the model and vice versa. Since you are going native, you don’t have that out of the box. I found it to be pretty straightforward to introduce special components that take on that task. It’s very little code, but it is very significant for later understanding how the application works. Thus these components should get a special place in your heart and architecture.

I call them wiring components. The main app- entry point will usually be a wiring component. Wirings instantiate other, purely presentational components and hook up the instantiated elements to the connectors, which in turn hook them up to the state. Connectors are purely logical units, while wirings are web components and thus an integral part of the DOM. So wirings take their DOM children, of which they consist, and instantiate connectors for the relevant children in order to connect view to state.
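
A sketch may make the division of labor clearer. All names here are made up (connectCounter, counter-display, app-main), and the fallback base class only exists so the connector part also runs outside a browser:

```javascript
// A connector is a plain logical unit: it wires one element to the
// store, has no DOM of its own, and is trivially testable.
function connectCounter(element, store) {
  const render = (state) => { element.count = state.counter; };
  store.subscribe(render); // re-render on every state change
  render(store.getState()); // initial render
}

const Base = globalThis.HTMLElement ?? class {};

// The wiring component is a real DOM element: it creates its
// presentational children and hands each one to a connector.
class AppMain extends Base {
  connectedCallback() {
    const counter = document.createElement('counter-display');
    this.appendChild(counter);
    connectCounter(counter, this.store);
  }
}

globalThis.customElements?.define('app-main', AppMain);
```

Note how the presentational component never sees the store: it only receives plain property writes, which keeps it reusable outside this application.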

Due to their paramount function I’d suggest moving wiring components – apart from the top-level app- entry point, which sits right in src – into their own src/wirings tree. Use whatever name suits you, but if you encounter other such architecturally significant components, you should probably set them apart as well.

You’ll also have many components that have internal state. Every aspect of the state that is reflected in the presentation must necessarily leave some trace in the DOM. Thus you cannot not have state in the view. But that is “just” a reflection of the application state.

However, you’ll also have local state independent of the model. If you, for example, have some element that lets the user select values from a drop-down, then it is likely okay to maintain the folded/unfolded state of that component locally, without synchronizing it to the state. You could sync it with the state, but you don’t have to. A good indicator of whether you can get away with maintaining local state is whether that state has no interactions with other parts of the application.

There’s another thing to consider here: a presentational component should be reusable – even outside of your current application. Thus it must itself manage all those aspects of its state that it needs to do the thing it does. It should, however, expose these state aspects through attributes or properties, so that it can be remote-controlled by the state engine as required by the application … and by testability. Automatic tests should be able to test a presentational component programmatically without messing with its DOM.
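
Picking up the drop-down example from above, here is a sketch of such a component. The tag name drop-down and the rendered text are made up, and the fallback base class only exists so the snippet also loads outside a browser:

```javascript
const Base = globalThis.HTMLElement ?? class {};

class DropDown extends Base {
  constructor() {
    super();
    this._folded = true; // local state, not synced to the app state
  }

  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.addEventListener('click', () => {
      this.folded = !this.folded; // user interaction toggles it …
    });
    this._render();
  }

  // … but the same state is exposed as a property, so the application
  // (or an automatic test) can remote-control the component without
  // touching its shadow DOM.
  get folded() { return this._folded; }
  set folded(value) {
    this._folded = Boolean(value);
    this._render();
  }

  _render() {
    if (!this.shadowRoot) return; // not attached to a document yet
    this.shadowRoot.textContent = this._folded ? 'show' : 'hide';
  }
}

globalThis.customElements?.define('drop-down', DropDown);
```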

Finally there’ll also be purely static components that don’t interact with the state at all – for example a static footer with copyright and such. Pure layout components also fall into this category. These are the least of our worries. You may or may not want to set them apart.


When you roughly follow such a basic architecture layout, you’ll gain a major asset for your web application: (unit) testability. Web applications are notoriously difficult and expensive to test. Modern test frameworks emphasize browserless testing. Traditionally you’d use Selenium in the browser, which is slow and difficult to implement. Browserless testing emulates browser behavior in pure JavaScript, which makes testing considerably cheaper and easier.

The bad news is that the browserless frameworks do not support web components as of this writing. The great news is: you don’t need them too badly if you followed established architecture guidelines as above. The stuff that breaks most often – everything to do with application logic – is now pure JavaScript without any DOM attached.

You can test the model/state logic as is in any JavaScript test runner. As always, pick what suits you. The controllers/containers/connectors just need simple test mock-ups, so choosing something that supports jsdom or another (simple) browser emulator will help.
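
To illustrate how cheap pure-logic tests get, here is a reducer test that runs in plain Node, with no browser and no DOM. The todos reducer is a hypothetical example, and the tiny assert helper just stands in for your test runner of choice:

```javascript
// A tiny assertion helper, standing in for a real test runner.
function assertEqual(actual, expected, message) {
  if (JSON.stringify(actual) !== JSON.stringify(expected)) {
    throw new Error(`${message}: got ${JSON.stringify(actual)}`);
  }
}

// The unit under test: a hypothetical pure reducer.
function todos(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO': return [...state, action.text];
    default: return state;
  }
}

// Pure-logic tests: no DOM, no emulation, just function calls.
assertEqual(todos(undefined, { type: 'INIT' }), [], 'initial state');
assertEqual(todos(['a'], { type: 'ADD_TODO', text: 'b' }), ['a', 'b'],
  'adds a todo');
assertEqual(todos(['a'], { type: 'UNKNOWN' }), ['a'], 'ignores unknown');
console.log('all reducer tests passed');
```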

Thus your model/state and controllers/containers/connectors should be well tested. Aim for 100% line coverage; do TDD if it suits you – and if it doesn’t: reconsider. TDD is extremely valuable on the server, and with this approach it at least comes close in the browser.

Then there are the views/presentations/components. All but the wirings are now isolated logic-wise, which improves testability there, too. I settled on running the respective tests inside Chrome. The mocha test runner, syntax-sugared with chai, is very easy to get started with, has dead simple browser support and gets the job done.

As always: pick your perfect match. For coverage in the browser I use Chrome’s built-in coverage dev-tool. The disadvantage is that (I think) you cannot evaluate it automatically and e.g. block code pushes with insufficient coverage. To somewhat make up for that: it’s real cheap. Click one dev-tools button and get all the coverage information you’ll ever need.

As already indicated above: code your presentational components to be testable through their DOM API, without tests being required to access the component’s shadow DOM. Such tests are relatively simple to implement. You’ll miss much of the actual user interaction, but a line has to be drawn somewhere. I’m not certain, though, that this is the right position for that line. More for you to figure out.

Automatically testing the actual presentation is, in contrast, very hard/expensive. In order to do a thorough job of that you’d have to check all the shadow DOM and all of its CSS. This is prohibitively expensive in itself and makes any changes expensive, too. You probably do not want to go down that road; instead concentrate on testing the API of the component element, skipping its shadow DOM.

I’m still not convinced, though, that TDD (i.e. writing tests first) is the right approach for writing presentational components. Creating presentation, to me, is most of the time a discovery process that can hardly be foreseen. For everything else I think TDD it is. And if you come to the conclusion that for you TDD of views/presentations/components it is: great for you!

Last but not least there’s integration testing. For this you’re thrown back on Selenium and its expensive implementation. This is not made simpler by the components’ ubiquitous use of shadow DOM. But it’s quite possible. You’ll “just” have to put in a function for diving through all the ShadowRoot elements – this requires that all shadow roots are attached with mode: open.

Performance Optimization

Going native provides another very significant advantage over using a framework: the profiler suddenly becomes immensely useful. When you use a framework you’ll find, in most non-trivial cases, that most execution time is spent in some framework code, and since frameworks mostly use some templating approach, it’s very hard to figure out where that code is called from. When you write native code, the profiler outlines your performance traps in bright red and you know where you spend your time.

Make good use of this great gift. Optimize all application logic and rendering the best you can. Unless you happen to have to render really big chunks of DOM on changes, chances are you’ll get away with just updating everything on change. Using the native tech yields performance that’s a lot better than going through a framework.

When that is not enough, consider using a change detector to reduce the amount of re-rendering you’ll have to do. Chances are, in simple cases just going through all DOM updaters is cheaper than pre-processing changes. However, if DOM updating gets complex, that can turn around. There are several change detection libraries out there. As always, pick one that suits you.

But before you go hunting for a change detector, look into native Proxy objects in JavaScript. These make it dead simple to create your own – native – change detector, which may outperform other solutions, at least if the browser natively supports Proxy objects.
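
Such a Proxy-based change detector can be sketched in a few lines. The observe helper and its onChange callback are made-up names, not an existing library API:

```javascript
// A dead simple change detector built on a native Proxy: reads are
// forwarded untouched, writes are recorded – but only when the value
// actually changes.
function observe(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      if (obj[key] !== value) {
        obj[key] = value;
        onChange(key, value); // notify only on actual changes
      }
      return true;
    },
  });
}

const changed = [];
const state = observe({ count: 0, label: 'x' }, (key) => changed.push(key));

state.count = 1;   // recorded
state.count = 1;   // no actual change, not recorded
state.label = 'y'; // recorded
console.log(changed); // → [ 'count', 'label' ]
```

Hooked up to your DOM updaters, the onChange callback tells you precisely which parts of the view need re-rendering.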

And if you do add change detection, still consider turning it off for development, so that developers stay on the lookout for performance hogs.


The final missing piece of the puzzle is deployment. Now that you have all your development procedures, your architecture and your project layout in place, deployment is the one thing between you and your users. That doesn’t mean you should defer deployment to right before shipping. Quite the contrary: pick your favorite here, as in all other realms, right at the start.

And the very first code push should contain the basic project layout, test framework and deployment pipeline. From there all these parts grow together.

For deployment you need a build step. The whole development code should run fine in most browsers, but it will not in IE. Deployed code should also be minified and either prepared for HTTP/2 server push or bundled.

Present-day web application deployment is a rich and varied landscape. You’ll very likely use Babel to make your code agreeable to IE. Make sure to deliver native component classes to browsers that support them, and backported code to those that don’t. This is essential: otherwise browsers that could support native web components will use the polyfills and take a performance hit. Native web components have to be classes!

Again, pick your perfect match from the plethora of options out there. Personally I like my deployment pipeline transparent, like everything else, which makes WebPack less attractive to me. But that’s just me. I usually run Babel first, backporting but retaining the JavaScript module syntax (import/export). Then I have a bundling step that does away with the modules and leaves one or a few JavaScript files. Finally I insert the (entry) script into the HTML.

Pushing code to the main repository should always trigger lint, tests and build, and reject the push if anything fails. From there the code should immediately be fed into the CI pipeline to make for a great development process.


You made it this far, awesome! This concludes part two of this series on modern web application development. The next, third, and final installment of this series will be a tutorial that underpins some of the aspects raised in part two with actual code. There I’ll demonstrate that going native lacks nothing compared to using crufty frameworks.

Well, actually it does. You won’t, yet, be able to pick from huge repositories of ready-made components. But we’ll get there, hopefully with your help, and thus enter a bright new age of interoperable web development, where all projects compose their perfect framework, where all presentational components from all projects are perfectly interoperable, where all developers hold each other’s hands, dancing and singing into the sunrise.

