The 30 Year Web App

How to build to last – the PERI Web Framework

“We built our current ERP which lasted 30 years and we want you to build our next ERP to last for another 30 years.”

The current ERP is built with COBOL and the next one will be a microservice suite with web UIs. My part is mostly the user interface. I’m paraphrasing and condensing the introductory quote from memory of the original German utterance. Yet that sentence captures the primary and driving inspiration that motivates the work I do for my current customer.

Mind you: I will be writing about somewhat complex web applications here. Those are quite different beasts from your standard web pages (or even shopware). The latter may have a lot of coded bells and whistles. But the former require a level of architecture and code structure that no “mere” presentational page needs.

Eternity Squared

In case you missed the outrage in the introduction: in web development, three years used to be the timeframe in which the whole technology stack was redefined. Things are slowing down slightly these days, but for a web user interface, 30 years is still eternity squared.

So how do you even attempt that? This article describes my take on building web tech to last. Only time may tell whether my approach to this problem is worth a dime.

However, I believe that some of the considerations I will be presenting here are well worth pondering for many businesses. My approach need not be great for the time spent thinking about building to last to be well spent. I will keep this article on the PERI Web Framework high level so that non-technical folks have a chance to follow.

Own It

I’m pretty convinced there is one aspect that is hard to avoid if your aim is to build for a timeframe of decades: You own it. All of it. If you pull in megabytes of libraries now, 20 years from now you will have to maintain megabytes of libraries. And that’s not going to happen. 10 years from now you will have a very hard time finding developers willing to maintain old versions of framework X. And thus, in one fell swoop, Angular, React, Vue, and all their smaller brethren are out.

The Arvo (Angular, React, Vue, and Others; take a second to memorize “Arvo”, it’ll occur dozens of times below) approach does one thing very well: provide a technological framework and a low-level technical process that guide a web project to success. We all know tech projects fail. More often than not. Arvo reduces the failure rate.

The Best, Established Framework

Before I move on, I’d like to mention a related aspect: I evaluated frameworks to use for PERI. The clear winner is React. Still talking about long term viability here.

Angular has entered its decline. It is still very strong, but many indicators point to it losing significance in the long run. Vue is great and promising, but as of this writing it is still too young to tell whether it will keep its appeal and whether its lack of backing by strong business entities will allow it to thrive in the long run. jQuery is technologically 90% obsolete and does not address questions of architecture and code structure. Everything else is too obscure to consider.

So if you think about building to last and are not going to be convinced by what follows, I suggest having a very good look at React. It is extremely popular, still rising, has strong business support, and has more things going for it in terms of long term viability than I mention here.

Don’t Fail

So, if your business happens not to be Google, you should probably strive not to own Angular. There’s more to it than its eventually inevitable demise: whatever Arvo you use, you’ll need to update every year or so to keep up with your framework. If you don’t, you’ll have a hard time hiring developers five years from now who are willing to work with a long outdated tech stack.

You see: web developers maintain their labor value by keeping up with the fast evolving web tech stack. Working on old tech will degrade a developer’s business options over time. And it does not appeal to most.

However, in all likelihood there will be little slack in your project. Deadlines will keep approaching, and updates will be missed if you are not very persistent about them. If you keep missing updates, the cost of updating will increase and you may quickly enter a vicious circle that ends in your code becoming obsolete long before the ultimate demise of your chosen Arvo.

Arvos shine at initial success yet are mediocre at best at long term maintainability. Still, we will need to replicate that initial success in order to get to the long term part that this article is about. So what is it that raises the success rate of Arvo projects?

Structure & Productivity

It’s two things, actually: structure and productivity – though both end up being two sides of the same coin. “Structure” is part of the very definition of a framework as opposed to a library. A framework imposes structure on your code. You can still pretty much do whatever you like, but with a given Arvo the “natural” (and in many cases pre-documented) way of doing something will likely be less fatal than what your average rookie developer will do without the Arvo.

“Fatal” here means just that: fatal for your project. No line of code and no hundred lines of code are ever fatal. But if the bad-decision ratio in your whole code base exceeds a certain threshold, it’s better to start clean. Arvos improve the ratio of bad decisions developers make while implementing your project.

Developers are also initially more productive using an Arvo, and that is the reason they generally approve of employing one. However, the structural quality of a project’s code base determines the long term productivity.

If bad decisions keep piling up, changes and new features become ever harder and slower to implement – up to a point where development is deemed exceedingly expensive or slow and the project fails. Thus short term productivity is the initial driving appeal of Arvos while the structural superiority they impose drives the long term productivity and with it the project success.

In other words: we’ll need something that boosts initial productivity in order to drive adoption and keep our developers happy, but we really need to impose great structure in order to ensure long term productivity and ultimately success in the long run.

Initial Productivity Deemed Essential

With regards to initial productivity you may doubt that it’s a necessity. You are building your project and you are hiring your developers and you can just tell them to use your great framework. And they will.

However, they will also take shortcuts. If you want to keep things reasonably simple, you simply cannot technically exclude the possibility of shortcuts. And if you maintain any kind of pressure or even just expectations on development, developers will take shortcuts. Always.

Thus initial productivity is also our lever to keep the number of shortcuts down. If developers are more productive doing things the right way than doing it any other way, then they will more likely do it the right way.

This intimate connection between productivity, structure, and long term success raises the bar for replicating Arvo’s initial success quite high.

KISS

Above I scrapped Arvos for the long term because they push too complex a code base into our ownership. So obviously, whatever we do – maybe you find some obscure project that fulfills all requirements, maybe you end up shipping your own or stitching something together from various projects – we must Keep It Simple, Stupid, i.e. we must adhere to the KISS principle.

This is something else to consider about Arvos: They are built to be able to cope with everything. Angular was developed by Google for its vastly complex Google web apps, React was developed by Facebook to drive Facebook. All successful Arvos are capable of driving the most complex Apps and come with modules to address the most obscure recurring programming problems.

The structure they impose (if you go for something like Redux/Vuex as you absolutely should) errs safely on the most complex side. That’s because they need to cover Facebook. If you are not developing Facebook but just some slightly complex ERP, that structure is clearly over the top overengineering, resulting in reduced productivity.

Whether you built it or simply use it (for thirty years), you ultimately own it. Thus it is well worth a significant up-front investment to arrive at the simplest thing that could possibly work. The PERI web framework is about 2K lines including extensive documentation (API reference) comments. This is well inside the scope a medium-sized non-software-focused company can maintain.

4 Kinds of Code

Whether you go full Arvo, full custom or something in between: you’ll end up with 4 kinds of code that you should evaluate differently. When you go Arvo, 2 of these kinds of code will come from the Arvo. I’ll go over them in the order of their level of criticality, most critical first.

The web framework code is the customer facing code that will run in all the apps you will develop. It will run on a plethora of browser engines over the decades, facing countless individuals on their screens, AR glasses, retina implants …

If that code breaks because of some future incompatibility – as it likely eventually will – then all your apps break. You must hold that code (whether it comes from an Arvo or from yourself) to the absolutely highest standards of quality and maintainability. It must be small, very intelligible, well documented, thoroughly tested – altogether nothing short of great.

The next kind of code is the actual customer facing implementation of your apps on top of the framework. This will be the bulk of the code that you’ll need to worry about. This code is also the Raison d’Être for the other three kinds of code. It may thus be your primary concern most of the time.

If that code breaks – it will; the same applies as for the framework since it will also run on countless yet unknown platforms – one application breaks. If you use the same idiom in several apps all of these will break.

While you may be more lenient with regards to its quality, this will still constitute the bulk of the code you absolutely must keep running. Thus the absolute maintainability of the whole thing is essential and exceeding thresholds of maintainability equals failure.

The third kind of code is test code that automatically tests the first two kinds of code. Furthermore there potentially are several different kinds of test code, but for the sake of this discussion subsuming them as one will suffice.

The volume of the test code should be in the same order of magnitude as the volume of the former two kinds of code. However, the complexity of test code is usually much lower than that of the implementation code.

If the test code breaks, you’ll usually lose your ability to deploy new versions of your software. But you are the master of the environment where the test code runs. Thus – while tedious – you may set aside a dedicated software environment for that code to run for the next three decades. You will not have to react to unexpected breakage immediately.

The test code must still be maintained. It will become more important with time. It constitutes the ultimate documentation of the original intent of the tested code. It will be absolutely essential for developers 20 years from now maintaining any reasonable level of productivity when addressing new features or problems in the above two kinds of code.

In short you must maintain the test code along with anything else. But due to its lower complexity and due to the fact that you have absolute control over its execution environment, you may be more lenient with regards to quality here, than with the two former kinds of code.

The fourth and final kind of code to worry about is code that you will use throughout development but that is not itself part of your projects: the build scripts for your apps, the test environment, the development server for web developers and so on. This code may come from an Arvo, but you’ll still interact with it, be it by configuring your build, injecting mock data into the development server, or running your test suite.

If that code breaks, development slows down or comes to a halt. You somewhat control the environment it runs on. However, if it is less demanding with regards to its environment, productivity will profit.

If parts or the whole thing break for good, you can swap it out completely. While this will be a pain, you will likely do it eventually as new, better tech becomes available.

For these reasons, with regards to the fourth kind of code you have more freedom than with the others. You may pull in a plethora of dependencies in order to boost developer productivity. If stuff breaks, productivity may suffer intermittently but breakage here does not equal failure.

∑ Premise

Congratulations! You just made it through the introduction – almost. Let me sum up what we learned about replicating Arvo’s initial success while maintaining long term viability of our code base. The framework we are looking for must fulfill these requirements:

  1. It must provide an initial productivity boost.
  2. It must impose a good structure on the code written on top of it.
  3. It must be as simple as possible.
  4. It must adhere to different standards of maintainability depending on which kind of code we are talking about.

Sounds great, right? Well, without further ado: let’s do this!

Structure, what Structure?

I’ve been yammering on about good code structure without going into any specifics of what that might be. The first thing to note here is that virtually any structure is better than no structure. “No structure” is one endless line of code without functions, just one abominable line of spaghetti. Be aware that any program can in principle be written that way and that, by default, much too much code actually looks like that – with the minor bonus of some whitespace having been replaced by newlines.

There is a really simple remedy for this, an easy management win: lint. A linter is a piece of software that checks code against various criteria. “ESLint” does this for EcmaScript/JavaScript, for example. So put a linter into your continuous integration chain and let a code commit fail if it does not adhere to your coding standards.

You should have a good look at what your linter offers and decide with the team, what to enforce. But one thing you’ll want to enforce for sure is: structure. So have a reasonable limit on line length (e.g. 80 characters), a reasonable limit on function/method length (e.g. 10 lines) and a reasonable limit on file length (e.g. a hundred lines). I recommend liberally permitting exceptions to these rules but in general enforcing them.
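To make this concrete, here is a minimal sketch of what such a configuration could look like with ESLint. The rule names are real ESLint rules; the exact limits are the ones suggested above and are, of course, for you and your team to decide.

```javascript
// Sketch of an ESLint configuration enforcing the structural limits
// discussed above (.eslintrc.js style; adjust the limits with your team).
const lintConfig = {
  rules: {
    // line length: 80 characters
    'max-len': ['error', { code: 80 }],
    // function/method length: 10 lines (comments don't count)
    'max-lines-per-function': ['error', { max: 10, skipComments: true }],
    // file length: 100 lines (comments don't count)
    'max-lines': ['error', { max: 100, skipComments: true }],
  },
};

module.exports = lintConfig;
```

Exceptions can then be permitted explicitly and visibly, e.g. with an `eslint-disable-next-line` comment that reviewers will see.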

Your colleagues/developers may complain that these limits are too low. Trust me, they are not. These limits are 90% of your code documentation and thus extremely valuable for your long term success. You will also employ the good coding practice of using descriptive names for variables, functions, and files.

A developer worth her money will quickly find a well named function in a well named 100-line file in a well named directory, and she will need mere seconds to roughly understand what those ten reasonably short lines of code do. No outdated comments or documentation required in 99% of cases.

Divide & Conquer

The above mentioned method of enforcing any structure is not directly Arvo related. What will be discussed next goes to the very heart and soul of Arvo, though. However, in order to understand what this is about, we must discuss a coding problem very specific to user interface programming.

Your web application consists of user interface components (aka widgets) like dialog windows. You will want your dialogs to have a consistent appearance and behavior (i.e. user experience or UX) across your applications. Thus you will re-use code that delivers consistent UX with each use of a dialog regardless of its content. You will inevitably end up writing at least some custom widget code.

Widgets constitute one (reusable) kind of application code. Another kind is the actual application logic or business logic of the app. These two kinds of code are vastly different beasts. Widget code directly interacts with the huge browser APIs for working on how things look and what the user does with the UI. Widget code is usually re-usable. Business logic is pure custom JavaScript code. It is usually not reusable.

However, business logic ties together several widget APIs and other kinds of APIs (like communication, state management and so on). 

Trouble is: (Widget-)APIs change and such changes break apps. Now almost any Arvo employs the same simple principle that does not solve but significantly alleviates this problem: Data Binding.

Data binding means: when a developer changes a (state-) variable in his business logic code, that change will “automatically” be reflected in the visualization (widget). And changes a user triggers in the visualization will “automatically” trigger specific code of the business logic.

This magic trick of data binding in itself fulfills the two essential requirements we identified above: it makes the developer’s life easier, i.e. provides an initial productivity boost, and it separates business logic from UI logic, i.e. it imposes superior structure.

The latter is achieved by redefining the API that developers use for interacting with the DOM (i.e. visualization). Instead of calling various functions with more or less complex arguments, all they ever do is change variables and register callbacks. How variables influence the DOM and vice versa, and how the DOM triggers callbacks is defined where the DOM is written down.
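To illustrate the mechanics, here is a tiny, DOM-free sketch of the idea: writing a state variable “automatically” updates the bound visualization. The fake element object stands in for a real DOM node, and all names are illustrative, not the PERI API.

```javascript
// Minimal one-way binding sketch: a property setter forwards every change
// of a state variable to a bound "element" attribute.
function bind(state, key, element, attribute) {
  let value = state[key];
  Object.defineProperty(state, key, {
    get() { return value; },
    set(next) {
      value = next;
      // This is where a real framework would touch the DOM.
      element.attributes[attribute] = String(next);
    },
  });
}

const element = { attributes: {} };       // stand-in for a DOM element
const state = { userName: 'initial' };
bind(state, 'userName', element, 'title');

state.userName = 'Ada';                   // business logic just sets a variable
// element.attributes.title is now 'Ada'
```

The business logic never sees the element; it only ever assigns to `state.userName`.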

By defining a parsimonious yet reasonably comprehensive DOM API, Arvos significantly reduce the pain caused by widget-API changes. Since all you ever do is change single variables, it is usually easy to keep existing API calls working while adding new variables on your widgets, thus extending your API without breaking existing stuff.

Put another way: Arvos impose superior structure by defining a relatively small universal DOM API and thus significantly improving separation of business logic from visualization code.

Bindings

Replicating this may seem like an ambitious proposition. It is indeed not trivial to get this right and requires significant experience. Yet it is less of a feat than it might seem.

The browser’s DOM APIs show you the way. There is a grand total of six ways of interacting with the DOM:

  • change DOM attributes
  • change DOM properties
  • change DOM text
  • change DOM elements
  • react to DOM events
  • call DOM methods

For your data binding needs it is totally sufficient and indeed wise to restrict yourself to four of these: attribute, property, and text changes and reacting to DOM events.

Attributes, properties and text lend themselves to plain two-way data-binding, i.e. you’ll want your framework to keep them in sync with bound state variables.

I strongly suggest following the proven React philosophy of bindings-down/events-up. That means that you use data bindings to change the state of the visualization and prefer (DOM-) events to trigger business logic code when things happen in the visualization.

It does make sense, though, to bind atomic values of form fields to business logic variables and have those changes trigger code along with updating the bound variable. This saves quite some typing and does not ruin the code structure. However, when changes occur in more complex widgets or nested sub-modules of business logic, don’t propagate them upward through bindings – use events instead!
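Here is a minimal sketch of the events-up half of this rule: a nested module reports changes upward by emitting an event instead of writing to a parent-owned bound variable. The emitter and all names are illustrative stand-ins for DOM events and widgets, not an actual API.

```javascript
// Tiny event emitter standing in for DOM event dispatch.
class Emitter {
  constructor() { this.listeners = {}; }
  on(type, fn) {
    (this.listeners[type] = this.listeners[type] || []).push(fn);
  }
  emit(type, detail) {
    (this.listeners[type] || []).forEach(fn => fn(detail));
  }
}

// A nested business logic module: bindings keep its own state in sync
// (down), but changes are reported to the parent via an event (up).
class AddressForm extends Emitter {
  setCity(city) {
    this.city = city;                       // binding updates the widget
    this.emit('address-changed', { city }); // event propagates upward
  }
}

const parentLog = [];
const form = new AddressForm();
form.on('address-changed', detail => parentLog.push(detail.city));
form.setCity('Munich');
// parentLog is now ['Munich']
```

The parent never reaches into the form’s state; it only reacts to the event.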

As for the binding syntax, I strongly oppose the almost universal approach of whisker bindings. The reason is that it introduces magic right into the DOM itself. Very few developers understand how whisker bindings work. Using stuff that few developers understand is bad for many reasons now and will be worse decades down the line, when the syntax may be forgotten or (which may be even worse) may have become part of some standard.

It is my strong conviction that the DOM a developer writes should be plain and valid standard DOM. No strings attached. The reason: your DOM will thus be plain and valid standard DOM for decades to come, and developers not even born yet will be sure to understand what it says – without reading decades-old documentation of long dead frameworks.

That approach leaves you with precisely one (sensible) way to express your binding syntax: put it in (custom) attributes. I chose to use one universal attribute: data-bind. Therein one or more bindings can be expressed; the syntax clearly shows the direction of the binding (i.e. from:to) and, by prefixing with special characters, expresses what kind of binding we have.

data-bind=$variable:@attribute;.property:$variable;$variable:§;methodName(domEventName)

Note that this is valid HTML without quotes; with quotes you could use line breaks instead of semicolons to improve readability. The paragraph character (§) indicates binding to the element’s text.

This is really all you need. I added a very little bit on top of this, but the most important thing when building to last is KISS!
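To illustrate how little machinery this syntax requires, here is a hypothetical parser for it. The prefix classification follows the description above ($ variable, @ attribute, . property, § element text, name(event) for events); the return shape and all names are my own, not the PERI implementation.

```javascript
// Classify one side of a from:to binding by its prefix character.
function classify(token) {
  if (token === '§') return { kind: 'text' };
  if (token.startsWith('$')) return { kind: 'variable', name: token.slice(1) };
  if (token.startsWith('@')) return { kind: 'attribute', name: token.slice(1) };
  if (token.startsWith('.')) return { kind: 'property', name: token.slice(1) };
  throw new Error(`unknown binding target: ${token}`);
}

// Parse a data-bind attribute value: bindings separated by semicolons,
// each either from:to or methodName(domEventName).
function parseDataBind(value) {
  return value.split(';').map(binding => {
    const event = binding.match(/^(\w+)\((\w+)\)$/);
    if (event) return { kind: 'event', method: event[1], event: event[2] };
    const [from, to] = binding.split(':');
    return { from: classify(from), to: classify(to) };
  });
}

const bindings = parseDataBind('$user:@title;.value:$user;$user:§;save(click)');
// bindings[3] → { kind: 'event', method: 'save', event: 'click' }
```

A framework then only needs to wire each parsed binding to the element it was found on.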

Widgets

We are still missing two DOM interaction modes: changing DOM elements and calling DOM methods. You may disregard them in your data binding, but you will still need to use them occasionally. So if you need to get down to that nuts-and-bolts level of DOM interaction: write a widget.

For this you do not need any framework because the web standard defines everything you need to write widgets. They are called web components. I’ve written about those in previous articles, so I’m not going to repeat that here.

A tiny DOM manipulation helper will come in handy though (i.e. boost productivity). I use ShadowQuery about which I have also written extensively already. ShadowQuery can also provide 50% of the reusable code you need for data binding.

I learned – over many years of working with web components – one fundamental lesson about web components: don’t use shadow DOM but do use custom built-ins.

Shadow DOM is an extremely powerful feature of web components, and it makes a lot of sense, too. However, if you are writing your own suite of web applications (or even just one sizable app), steer clear of shadow DOM. It’s more trouble than it’s worth. Shadow DOM is designed for writing universal widgets that are independent of a given (suite of) applications and re-usable across companies. This is likely not your primary concern. Shadow DOM also has unresolved issues with regards to styling/theming.

Custom built-ins on the other hand allow you to add custom functionality to existing HTML elements like <input>. This is extremely valuable. In particular it allows you to add custom behavior to the HTML <template> element.

I won’t go into details here, but a customized template (along with your data binding) is all you need for conditional rendering (i.e. much of the missing “change DOM elements“ part) and rendering arrays right from your business logic. It also gives you dialogs and more.
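As an illustration, a hypothetical customized <template> built-in for conditional rendering could look roughly like this. Element name, property name, and the helper function are illustrative, not the actual PERI API; the environment guard merely allows the helper to be exercised outside a browser.

```javascript
// Decide whether a bound value should cause rendering; empty arrays count
// as "nothing to render", like false, so array bindings reuse the check.
function shouldRender(value) {
  return Array.isArray(value) ? value.length > 0 : Boolean(value);
}

if (typeof HTMLTemplateElement !== 'undefined') { // browser only
  // A customized built-in: adds behavior to the standard <template>.
  class ConditionalTemplate extends HTMLTemplateElement {
    set condition(value) {
      if (shouldRender(value) && !this._nodes) {
        // Clone the inert template content and insert it after the template.
        const fragment = this.content.cloneNode(true);
        this._nodes = Array.from(fragment.childNodes);
        this.after(fragment);
      } else if (!shouldRender(value) && this._nodes) {
        this._nodes.forEach(node => node.remove());
        this._nodes = null;
      }
    }
  }
  customElements.define('demo-if', ConditionalTemplate, { extends: 'template' });
  // Usage: <template is="demo-if">…</template>, then bind its `condition`.
}
```

Rendering an array works the same way: clone the content once per array entry instead of once.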

If you take this route of custom built-ins but no shadow DOM, you will have an easy time even supporting legacy browsers like Internet Explorer. The reason is that without shadow DOM the rest is reasonably easy to polyfill. And the resulting DOM structure can easily be debugged in IE (which shadow DOM cannot).

Scope

Your business logic should be compartmentalized in all but the most simple apps. Remember: Divide & Conquer! And one business logic module should have a definite DOM binding scope. That means if you nest business logic modules inside of each other, you absolutely do not want parent modules to change anything in the nested module’s DOM – and vice versa.

Luckily the DOM tree yields all the structure you need here. I implemented binding in special <peri-bind> widgets and attach business logic scopes to binding elements through special attributes. Thus the DOM tree automatically yields sensible and self explaining binding scopes. As a bonus the binding syntax isn’t universal but limited to where you use the special binding elements. This makes the whole concept even more self explaining to developers to come.

For business logic modules I recommend using pure EcmaScript classes without providing any direct access to the bound DOM. That way developers are strongly incentivized to follow the division into business logic and widgets.

Thus your business logic code will be much easier to unit test. The reason for this is, that you can mostly disregard complex DOM APIs and just call the methods of the business logic modules and check return values/state changes. Better, simpler and more comprehensive unit tests constitute a huge value down the decades.

This approach also yields another level of code structure improvement: the bound variables of the business logic classes constitute the module’s state. Thus you gain an implicit separation into model (state variables), view (widgets), controller (business logic) with a dead simple API.
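A sketch of such a business logic module shows why it tests so easily: bound state lives in ordinary fields, methods implement the logic, and no DOM API appears anywhere, so a unit test just calls methods and checks state. All names are illustrative.

```javascript
// A business logic module as a plain EcmaScript class. The fields are the
// module's state (the "model"); a binding layer would sync them to widgets.
class OrderModule {
  constructor() {
    this.items = [];   // e.g. bound to a list rendering
    this.total = 0;    // e.g. bound to a text node
  }
  addItem(name, price) {
    this.items.push({ name, price });
    this.total += price;
  }
  clear() {
    this.items = [];
    this.total = 0;
  }
}

// A "unit test" needs no DOM at all:
const order = new OrderModule();
order.addItem('formwork panel', 120);
order.addItem('prop', 30);
// order.total is now 150
```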

Tooling

In olden days you could write an HTML file, have it load some CSS and JavaScript from separate files and run that in your browser without any tooling. Today the common practice is to have some build tools between your code and the browser. The reason for this is that many frameworks do not work with valid code but translate your framework specific code to something valid. Obviously I strongly advise against this practice. Exclusively write valid standard conformant code. Only that has a chance to be intelligible in the long run.

EcmaScript modules (i.e. import, export and friends) are supported by modern browsers and allow you to load your raw modular code into your development browser and have it run as intended. For legacy browsers you can do whatever build steps you like, and for deployment some minification, bundling, gzipping and possibly polyfilling is certainly advisable.

∑mary

So that’s it:

  • reduce the amount of code running in your users’ browsers, even at significant cost
  • optimize structure of said code
  • provide simple data binding for your developers
  • push for clear separation of business logic from visualization code (and ideally app state)
  • leverage standard technology, then leverage it some more, and then some
  • wish me luck 🙂

I am thankful to PERI for giving me the opportunity to implement a framework built to last.
