The power of an immutable desktop
• An immutable desktop operating system can help businesses scale without fear of overwriting core data.
• The same immutable desktop principle can even be useful in data center scenarios.
• Transactional updates and declarative descriptions help ensure that an immutable desktop operating system can keep running smoothly.
Deploying and managing computers at scale is a challenge faced by IT functions across organizations.
In addition to the physical machines on every desktop, the need for duplicate machines (real or virtual) is increasingly common as demand for computing power grows. Clusters of less powerful computers, linked by software or hardware layers, often take on high-burden roles, an approach that's scalable (simply add or subtract resources as required), economical (it's cheaper to buy 1,000 standard servers than a supercomputer), and resilient (failing machines can be swapped out of a cluster without service interruption).
The immutable desktop and parallel changes.
A constant factor in all these situations is reproducibility: the hardware and software on a computing instance should be identical to its peers when it's commissioned and brought online, and all systems have to be upgraded, optimized, and changed in parallel.
The obvious analogy is a room full of desktop computers running, say, Windows 10. If 20 of the 100 are upgraded by their daily users to Windows 11, the local helpdesk now has to support two operating systems, each with its own foibles, bugs, and issues. Throw in users who need role-specific software or who install their own, and it's apparent that in no time at all, every deployed machine becomes its own entity, at variance with the others.
That situation is burdensome for users (who can't hot-desk from machine to machine, for example) and for support and maintenance staff, who at that point are essentially maintaining 100 discrete systems. That's why you need an immutable desktop.
In data centers or clouds, the same problems exist, often at a larger scale, although, thankfully, they rarely have much to do with Microsoft or Windows.
Fleets of servers running specific software need to be identical if they share the burden of running, for instance, a mission-critical application. For that reason, workloads are often run on virtual machines (VMs) or, increasingly, in containers, where individual resources can be created and destroyed by software as needed.
Replication in virtualized or containerized environments is relatively simple, but other issues arise around cybersecurity, the skill sets required of systems administrators, resourcing, and vendor lock-in.
There are several technologies designed specifically for infrastructure deployment. Puppet, Chef, Ansible, and Salt are declarative tools with which specialists can create and fine-tune the infrastructure that runs applications and services.
The complexity of most applications and their required infrastructure means these tools can be as involved as many programming languages, with elements like loops, conditions, branches, and subroutines an everyday part of a script or playbook. Accordingly, personnel skilled in these tools are well paid: $100k-$200k salaries for a middleweight Ansible engineer, for example, are commonplace.
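To make the declarative idea concrete, here is a minimal, hypothetical sketch in Python of what such an engine does under the hood: the operator declares what should be true, and the tool owns the loops and conditionals needed to make it so. The package names are illustrative, the sketch assumes a Debian-style host with root privileges, and real tools read their own YAML or DSL files rather than Python dictionaries.

```python
import subprocess

# Desired state: what should be true, not how to achieve it.
# Hypothetical declaration; real tools read YAML or DSL files instead.
desired = {
    "packages": ["nginx", "git"],
    "services": ["nginx"],
}

def installed(package: str) -> bool:
    """Use dpkg's exit code to check whether a package is present."""
    result = subprocess.run(["dpkg", "-s", package], capture_output=True)
    return result.returncode == 0

def converge(state: dict) -> None:
    """The loops and branches live in the engine, not the declaration."""
    for package in state["packages"]:
        if not installed(package):  # act only where reality differs
            subprocess.run(["apt-get", "install", "-y", package], check=True)
    for service in state["services"]:
        subprocess.run(["systemctl", "enable", "--now", service], check=True)

if __name__ == "__main__":
    converge(desired)  # requires root on a Debian-style host
```

The key property is idempotence: running the sketch a second time changes nothing, because the engine only acts where reality differs from the declaration.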
In general, to reduce the resources required to build, deploy, and maintain software in production, there should be as little disparity as possible between every system a project touches on its course from the developer's desktop to the data center.
The immutable desktop – iteration benefits.
Teams of developers working on a project, therefore, should be building their code against identical versions of all the software that’s part of the project.
As the project iterates and moves through testing to eventual deployment, version control and homogeneity are hugely important. That continuity significantly lowers the time taken to deliver a finished project, but any updates, improvements, or security fixes to an application in production need to be mass-deployed (after appropriate testing).
To aid the processes described above, developers, infrastructure engineers, and administrators will often talk about immutable operating systems or software, transactional updates, and declarative methods of deployment.
An immutable system is one where software, from the operating system upwards, cannot easily be changed by its users, nor, therefore, by bad actors. Security is a significant advantage of immutable systems, but with immutability come the difficulties of making necessary changes after the software is instantiated.
Immutable systems have been around for decades and can commonly be found in IoT devices, from machinery controls and environmental sensors to networking equipment. Manufacturers of such devices will often allow a specific area of code, termed firmware, to be overwritten. Upgrading these types of devices is therefore often termed flashing the firmware, which refers to electronically rewriting code that's then held semi-permanently in a device's long-term storage.
Less frequent changes or updates to running systems (and flashing firmware is one such instance) are described as transactional updates. These occur at specific times or on particular events and, critically, should be designed not to affect running systems. An ideal transactional software upgrade completes without end users seeing any noticeable change.
Transactional software updates are also said to be atomic (encapsulated or discrete) and so can be rolled back if the new version does not work as required.
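A common way to implement that atomicity is to stage each release in its own directory and switch a single symbolic link in one step. The Python sketch below (with hypothetical paths) shows the pattern: because the link is replaced atomically, running software sees either the old version or the new one, never a half-written mix, and rollback is just repointing the link.

```python
import os

RELEASES = "/srv/app/releases"  # each version staged in its own directory
CURRENT = "/srv/app/current"    # the only path running software ever uses

def activate(version: str) -> None:
    """Point 'current' at a staged release in one atomic step.

    os.replace() renames over the old link atomically on POSIX, so
    readers see either the old tree or the new one, never a mixture.
    """
    staged = os.path.join(RELEASES, version)
    temp_link = CURRENT + ".tmp"
    if os.path.lexists(temp_link):
        os.remove(temp_link)  # clear debris from an interrupted run
    os.symlink(staged, temp_link)
    os.replace(temp_link, CURRENT)

def rollback(previous_version: str) -> None:
    """Rolling back is just activating the release that worked before."""
    activate(previous_version)

# Example: activate("1.2.0"); if it misbehaves, rollback("1.1.0").
```

Production systems built on filesystem snapshots or A/B partitions are more sophisticated, but they rest on the same principle: the switch between versions is a single, reversible step.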
Immutability can also apply to a computer's software (as opposed to firmware), be it a server or a desktop machine. In that case, the software can be altered by users, but only up to a point that's set so as not to affect the smooth running of other software on the machine, or other users. On a desktop computer, for example, an immutable operating system grants its users the ability to run their own software, adjust their own settings, and change many aspects of their environment. Any changes can be rolled back atomically, and no alterations can be made to core systems, so the platform remains viable for others.
To achieve immutability, it's usually necessary for software to be installed alongside all its dependencies: libraries, graphical front ends, and so forth. That setup is ideal for software development teams, where homogeneity of the base on which software is built is essential; every developer builds against the same version of, say, Java.
The same facility is also highly valuable on servers that host multiple applications. As an example, application A can run using library B version 1.1, while application X runs using library B version 1.2. In a 'normal' system, only a single version of library B can exist: installing version 1.2 replaces 1.1, and anything depending on the specifics of version 1.1 breaks.
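The sketch below shows that conflict, and its solution, in miniature using Python's import machinery. Each application bundles the library version it was built against, and each copy is loaded under its own alias so the two coexist in a single process. The paths and module names are hypothetical.

```python
import importlib.util

def load_bundled(alias: str, path: str):
    """Load one bundled copy of a library under a unique alias.

    Registering each copy under its own name is what lets two
    versions of 'library B' coexist instead of overwriting each other.
    """
    spec = importlib.util.spec_from_file_location(alias, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Hypothetical layout: each app ships its own copy of lib_b.
lib_b_for_a = load_bundled("lib_b_v1_1", "/apps/app_a/vendor/lib_b.py")
lib_b_for_x = load_bundled("lib_b_v1_2", "/apps/app_x/vendor/lib_b.py")
```

Technologies like containers, Flatpak, and Snap apply the same idea at the level of whole applications: each one carries its dependencies with it rather than relying on a single system-wide copy.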
The need for granularity.
To establish this granularity, operating systems and software in general are often created declaratively, using roughly the same methods as the systems that create infrastructure (Ansible, Puppet, etc.). With a declarative software system, a text file or files contain details of exactly how a system should be configured: who or what its users are, what software (and which dependencies of those applications) is to run, and so on. On the invocation of a command or a transactional event (after a reboot, for example), the declarative statements are read and acted on: the operating system configures itself, pulling and installing required elements, creating users, deploying applications, and so on.
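A toy version of that read-and-act cycle might look like the following Python sketch. The declaration is deliberately tiny and entirely hypothetical (real systems use dedicated formats such as YAML or Nix expressions), and the apply step assumes a Debian-style host with root privileges; the point is that the whole desired state fits in one human-readable document.

```python
import json
import subprocess

# A tiny, hypothetical declaration. In practice this would be a text
# file shipped to every machine in the fleet, not an inline string.
DECLARATION = """
{
  "users":    [{"name": "alice", "groups": ["sudo"]}],
  "packages": ["git", "htop"]
}
"""

def apply(declaration: dict) -> None:
    """Read the declared state and make the running system match it."""
    for user in declaration["users"]:
        # check=False crudely tolerates re-runs where the user exists.
        subprocess.run(
            ["useradd", "-m", "-G", ",".join(user["groups"]), user["name"]],
            check=False,
        )
    subprocess.run(
        ["apt-get", "install", "-y", *declaration["packages"]], check=True
    )

if __name__ == "__main__":
    apply(json.loads(DECLARATION))
```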
For mass deployments of software, having a simple, easily propagated, text-based description of the full setup is invaluable. Not only are the files themselves tiny and highly portable compared to the often huge applications to which they refer, but any changes to be reproduced at scale are invoked by simply distributing a new version of the human-readable text file.
Mass deployment technologies have existed for decades but often comprised several parts. To take a historical example, a base operating system was created by hand (a so-called gold image), built on with applications and dependencies pulled from source, and pushed slowly to targets by third-party software. Updates and patches were pushed by the same third-party application, and an agent often had to be pre-installed on each target.
With immutable systems configured declaratively and changed transactionally, the maintainer of systems gains granular control. Individual users and separate applications can run concurrently, independently of one another, and the only configuration tool required is a simple text editor.