Today’s technology is a lot about scalability. That means you have built something that works for you and a few people and now you want to scale the solution to work for thousands, millions, possibly billions of people/sensors/client-systems. Scaling technology is still tough but essentially understood. But what about scaling the people who make that technology? What about scaling me?
The Call of the Agile
The way technology is developed has changed dramatically over the past decades. The driver for that change was the desire to increase engineer productivity. We went from a top-down system to a bottom-up system.
Top-down means that some upper echelon decided it wanted something, and the process then went down through the ranks step by step: discovering what exactly is required, defining how to achieve that, planning how to execute the development, actually implementing what was defined, testing whether the goals were achieved, and finally shipping to the users.
Today’s bottom-up approach puts the prospective users and engineers together in a virtual box and lets them develop the system in a continuous feedback loop: they discuss what the users need most urgently, develop small features, make them available to the users immediately, discuss whether that is what the users actually need, and then refine and add more features.
In parallel we went from working on utterly proprietary technology stacks to utterly free ones. Just ten years ago one still had to argue for using free software in a proprietary solution. Today’s technology stack is free all the way through; even Microsoft’s Azure cloud hosts more Linux systems than Windows.
Thus technology development evolved from a military-style hierarchical organization, with a huge and powerful logistics branch providing the proprietary technology stack, to a decentralized network of engineers and users who feed each other requirements and solutions and exchange the latter for reuse in other projects.
The productivity gains come mostly from avoiding three kinds of mistakes: 1. huge developments that don’t fit the user requirements, 2. failures that take two years to be discovered (at the project’s very end), 3. reinventing the wheel over and over again.
There’s also something in it for most engineers: not wasting one’s time on misconceived features or projects, or on reinventing the wheel for the thousandth time, has its merits. Implementing your own solutions the way you know best is more fulfilling than executing someone else’s plan. Interacting with the actual users of what you are building is great for most people, were it not for the embarrassment of watching your very own bugs unfold on the esteemed customer’s screen.
Knowledge Transfer is Key to Exponential Productivity Gains
So far I have written about two qualitatively different changes in productivity: avoiding development of things that do not fit the requirements and discovering failure early yield a significant but linear gain. Not reinventing the wheel and sharing new functionality yield an exponential gain – because the latter lets other engineers develop (and share!) new things even faster. Here I want to discuss novel ways of further exponentially scaling productivity.
One aspect of the modern development process is that engineers, too, communicate more with each other. Since they don’t get a complete script of exactly what to build but instead a description of what users would like, they have to devise their own solutions, and that usually requires discussion within the team. During these discussions knowledge is transferred between team members.
This sharing of knowledge is also a process for exponential productivity scaling. But as applied today it usually is a side effect of other factors of the development process and is not at all leveraged to its fullest potential. This potential is – on a global scale – not equal to that of sharing solutions. That’s because sharing solutions – at least when software or design is concerned – has zero marginal cost, while sharing “software” from one head to another currently has a non-zero marginal cost for all but a few complete autodidacts.
Still I estimate the potential – on a company scale – to be so huge that in the long run no company that does not work hard on it can survive the competition. To understand why, you need to understand what’s holding engineers back.
What’s Keeping us Back?
The thing is: development productivity varies wildly. When you arbitrarily pick and monitor two engineers in one team, you’ll most likely find one to be twice as productive as the other. And when you look at the whole team – and it’s your lucky day – you may find one that is yet considerably more productive. There is even a modern myth about the 10x programmer, the coding hero, rockstar programmer …
There are several factors that lead to such differences. One that is relatively simple to understand is procrastination. I guess everybody who spends their work life in front of an internet-connected monitor falls prey to it, some more often than others. Other factors are much more subtle but no less significant.
Junior engineers have to spend much more time learning (from the internet) how to solve a given problem, and they tend to make worse choices that can multiply over time and lead to a bad system. Once a system has accumulated a sufficient pile of bad choices, development can get exponentially slower, because solving new problems on top of the accumulated ones gets harder and harder.
On the other hand the best engineers are slowed down, too: Even though they know the solutions to many standard problems they still have to implement them, fix bugs, test, document and so on. They also have to explore different solutions sometimes. There is only so much one person can do. They spend the vast majority of their time on tasks that – to them – are trivial.
I believe the key to a further exponential productivity gain lies in melting the former two paragraphs into one. The key is the activity that is also the prime factor for project failure: communication. Actually the old military way of technology development was an attempt at this: take the best, let them figure out the solutions and let the rest execute on the best plan – a miserable failure.
It is a failure because it stops the best from doing what they do best – implementing solutions – and forces them to concentrate on what nerds tend to suck at: communication. Thus, superficially, the solution is trivial: keep everybody actually solving problems (instead of making grand plans of problem-solving) and fix the communication problem.
Glossolalia or Speaking in Tongues
I lied to you: nerds actually aren’t bad at communication. Quite the contrary, we are great at it. Trouble is: hardly anybody speaks our language, least of all the communication pros who design our processes. That’s because we are wizards and our language is magic. It is the technology we devise.
.map(n => parseInt(n, 16))
Got it? In case you don’t: you didn’t miss out, but I’d need at least a page to explain it to you. A picture is worth a thousand words. Sometimes so is a line of code. And this is not restricted to code but extends to technology in general: an elegant solution to even a simple problem will incite an awe of beauty in many nerds that is somewhat similar to the awe inspired by a poem or music in other people.
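For the nerds among you, here is one plausible context for that line. The surrounding pipeline is my invention, not part of the original fragment: parsing a colon-separated hex string into numbers.

```javascript
// Hypothetical context for the line above: turning a colon-separated
// hex string (e.g. a MAC address fragment) into an array of byte values.
const bytes = 'de:ad:be:ef'
  .split(':')                  // ["de", "ad", "be", "ef"]
  .map(n => parseInt(n, 16));  // each hex pair parsed to a number

console.log(bytes); // [ 222, 173, 190, 239 ]
```

One line, and every engineer who reads it knows exactly what travels through that pipeline.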
You can see ample knowledge being transferred in this fashion in places like Stack Overflow and GitHub. These are platforms where nerds speak code.
One of the earlier attempts at agile actually made this kind of communication a core practice: Extreme Programming had engineers sit in pairs in front of each problem and solve it together. This tends to frighten managers, as it feels like halving your workforce.
Indeed I believe that pair programming – which is still occasionally being practiced today – is not the ultimate solution to engineer communication. But it certainly was a visionary approach when it was conceived. What I’ll try to describe in the following is certainly not the ultimate solution, either. Yet, I believe it would be a giant leap in the right direction.
Let’s recap what I’ve laid out so far: we should keep everybody at what they do best: solving problems. And we want them to communicate through the solutions they devise. Thus everybody can just continue doing their job, “just” do it a bit togetherer.
The Virtual Desktop
I’ll need your imagination here. Engineers spend the bulk of their work in front of a monitor. The monitor displays our virtual desktop. Some of us use multiple virtual desktops, for example one for coding and another for researching solutions on the net. Switching between virtual desktops is as frictionless as switching between windows or apps.
Now imagine I could switch to your virtual desktop just as frictionlessly. I have my own mouse pointer and keyboard cursor. It’s like Google Docs’ collaborative editing at the desktop level. Engineers working on the same project usually share some expertise but also have complementary capabilities. And more often than not, another pair of eyes will spot problems and potentials that the first overlooked.
So I switch to your desktop and see you working on a certain problem. I have solved that particular problem a hundred times and tell you what I know about it. I type some stubs of a solution and briefly work with you until we agree how it should be solved.
Later you see me working on another problem. You have seen another colleague devising a similar but different solution to yet another problem and tell me. The three of us get together on the third colleague’s desktop and discuss how to generalize that solution to serve me as well.
We part and each work on part of the solution, get together again and put everything together. Thus junior engineers will learn a lot quicker than they otherwise would. They will be stuck on trivial problems for a shorter total time. Senior engineers may spend less time filling in the details in solution templates they can just draw from their heads.
Note that these examples are not made up. Well, they are, in the sense that I put them together just now. But things like these happen all the time in teams of engineers. There is currently a lot of communication friction, though.
It is very important that this is not a one-way street. The roles of senior and junior engineer must not be carved in stone. And more experienced engineers must be willing to implement solutions (proposed by juniors) that they consider less than optimal but that will work, too.
If one (group of) engineer(s) keeps supervising the others, this approach will take on the feel of dystopian surveillance technology. It only works if everybody takes on each role. And even if a junior may not be able to propose an improvement to something he sees a senior doing, he’ll still profit from looking at the senior’s work and possibly take it over to finish it.
The networked multi-pointer virtual desktop is a newish technology. It has been around for some ten years, but it is pretty obscure for lack of applications. I’ll give a brief overview of what is required to make this a seamless experience.
The desktop will run in the cloud, so all participants need a good internet connection to it. Any reasonable bandwidth (say, a megabit per second) will do, but a low round-trip time (under 100 ms) is essential for an enjoyable experience.
The desktop’s operating system will likely have to be Linux. Linux’s X Window System facilitates low-bandwidth remote desktops and also supports multiple pointer and keyboard input devices. Both capabilities are essential. I don’t know whether they are available elsewhere, but I doubt it.
There have been a few experiments with the setup I propose above. These experiments are from around 2008 and will hardly work with a modern stack. X has since incorporated the multi-pointer technology (MPX) natively. But setting this up – a headless VNC multi-pointer cloud desktop – is likely quite some effort.
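The multi-pointer part can nowadays be driven through the standard xinput tool. The sketch below only assembles the commands rather than running them; the guest name and the slave device ID are assumptions for illustration (on a real X server you would look the ID up with `xinput list`):

```javascript
// Build the xinput invocations that add a second master pointer
// ("guest") on an MPX-capable X server and attach a slave device to it.
// The guest name and slave device ID are illustrative assumptions.
function mpxCommands(guestName, slavePointerId) {
  return [
    `xinput create-master "${guestName}"`,
    `xinput reattach ${slavePointerId} "${guestName} pointer"`,
  ];
}

console.log(mpxCommands('guest', 12).join('\n'));
```

Creating the master named "guest" yields a master pointer called "guest pointer", which is what the reattach command targets.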
In addition it would likely be helpful to mount the participants’ data into the desktop – or synchronize the data continuously. Finally, some communication helpers will be required. There should be a tool for drawing on top of applications, so engineers can underline, encircle, link, or cross out the things they talk about.
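As far as I know such a drawing helper does not exist off the shelf. A minimal sketch of the message it could broadcast to the other participants – the shape names and fields are my assumptions, not an existing protocol:

```javascript
// Sketch of an annotation event a drawing overlay could broadcast to
// all participants of a shared desktop. Shape names and fields are
// assumptions for illustration, not an existing protocol.
function makeAnnotation(author, kind, rect) {
  const kinds = ['underline', 'circle', 'arrow', 'strikeout'];
  if (!kinds.includes(kind)) throw new Error(`unknown kind: ${kind}`);
  return JSON.stringify({ author, kind, rect, ts: Date.now() });
}

// alice circles a region of the screen she wants to talk about:
const msg = makeAnnotation('alice', 'circle', { x: 120, y: 80, w: 200, h: 40 });
```

Each participant’s overlay would render incoming events on top of the shared desktop and fade them out after a while.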
Apropos talk: numerous services exist for talking via the internet. They are essential for this but should not run via the desktop. Good conference gear is required. For seamlessness and comfort I recommend a hands-free set instead of a headset.
Apart from that, software needs to be installed as required for the engineers’ work.
Above I made initiating communication look too trivial. Let’s assume a new member joins a team. As soon as sHe logs into the remote desktop and fires up the hands free set sHe’s just a knock away from the best and most efficient induction training sHe could get. SHe could literally be programming within 30 minutes. And then learn as sHe goes.
I wrote above that a new joiner is “just a knock away”. This is an interesting point. The communication is much more efficient and frictionless than in a “real” office. We mostly communicate about problems we have – a setup problem, a bug, an API question, a problem in a code review … So we just open an editor, a shell, or a website … and drag it onto the desktop of the person we want to ask. But first we have to knock!
I obviously cannot talk right into your ear whenever it pleases me, or drag something onto your work desktop(s) out of nowhere. So how do you knock on a remote desktop? And do we have a team channel? Probably more like dedicated chat rooms and one-to-ones. Ideally, audio comms would be coupled to who is on which desktop. But this is something that would require a considerable technology investment and thus is science fiction for now.
The best solution – again science fiction – would be coupled audio and a visual indicator of who is on the desktop I’m looking at. So I really just switch to your desktop and you see me joining. It is then on you to address me through audio – or ignore me if you’re currently deep into something. It should probably also be possible to lock your desktop (more SF) so that engineers can take time off the team in order to get into the flow of working.
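Reduced to its bare logic, such a knock-and-presence mechanism could look like the sketch below. Everything here, names included, is an assumption – it is the science-fiction part, after all:

```javascript
// Bare-bones model of desktop presence: knocking, joining, locking.
// All names and semantics are assumptions sketching the idea above.
class Desktop {
  constructor(owner) {
    this.owner = owner;
    this.locked = false;       // owner has taken time off the team
    this.visitors = new Set(); // who is currently on this desktop
  }
  lock() { this.locked = true; }
  unlock() { this.locked = false; }
  // A knock either admits the visitor (owner sees them join) or bounces.
  knock(engineer) {
    if (this.locked) return false;
    this.visitors.add(engineer);
    return true;
  }
  present() { return [this.owner, ...this.visitors]; }
}
```

A locked desktop simply rejects the knock; everyone else who switches in shows up in `present()`, which is the visual indicator mentioned above, and audio would follow that list.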
But this is more about culture than technology. Teams employing the process will need to figure out what works and what doesn’t. The technology has to follow.