Types of software optimisation

Work in progress. Structure is incomplete, and grammar and spelling are not yet checked.

Discuss programming languages in terms of features or attributes which:

  1. Enforce rules only at compile time, such as "const" in C (useful at compile or development time, but doing nothing at run or "use" time),
  2. Defer to run time checks which could be done at compile time — languages with run-time components, such as Pascal, Python and Swift, do more checking at run time than necessary,
  3. Don't contribute to the functionality of the finished software, e.g., the "?" variable qualifier in Swift,
  4. Obfuscate code operation or hide potential bugs from discovery, e.g., "syntactic sugar",
  5. Are designed to compensate for lack of programmer/developer competence,
  6. Prevent or discourage code optimisation, such as the lack of "switch" in Python, which rules out indirect jumps (as in C) and forces serial comparisons,
  7. Increase the distance (abstraction) between the concepts being expressed in the source code and the actions performed by the CPU/microprocessor — compare C or FORTH with, say, APL, Python or Haskell. This increase in distance also decreases execution efficiency, namely how well the function is performed versus a well-coded Assembler implementation of the same.

Discuss fun.

Also discuss that C (and FORTH) was never supposed to be a slick or expressive programming language; it was supposed to be one step up from Assembler while still expressing actions in terms which a typical CPU could perform natively. This means that things like "i++" in C map directly to a single CPU instruction. Or that "if (a < 0) ..." maps to, typically, two CPU instructions if a is an integer. Or that indirection (using a pointer) also maps to, typically, one CPU instruction. Or that "x = 0" also typically maps to one instruction. This had benefits when the people doing the programming understood machine language. It meant they could write code much faster than doing it in Assembler and it'd still go fast. Abstracting further away from machine language and into programming concepts such as OO, garbage collection, references and duck typing (none of which a CPU "understands") means increasingly poorer execution efficiency as programmers' intentions are increasingly poorly converted or mapped into CPU operations.

The benefit of doing this though is that these higher-level programming languages and concepts evolve away from potentially-hard-to-understand CPU mechanics and closer towards human-friendly concepts and ways of expression which flesh-and-blood people can more readily grasp, use to express their ideas, avoid bad forms of expression and get from idea to running program in shorter time frames.

A disadvantage of this is that the programs aren't intended to run on little-grey-cell hardware. They're intended to run on CPUs and microprocessors and so while an idea might be clearly expressed in a high-level language, it all necessarily becomes fuzzy and bloated when the CPU is running it. This is a weirdness where we write code so we understand it and it makes sense to us, but there isn't a computer which can run it natively and it therefore needs to be translated less-than-optimally to something a CPU can run.

See also Software development and optimisation.

What it's not

It's important when we talk about software optimisation to know what we're NOT talking about.

  1. It isn't about changing the results that software produces. For identical input, the output from the software will be the same before and after optimisation.
  2. It isn't about fixing bugs or defects.
  3. It isn't about extending or adding functionality. When a piece of software performs one or more functions before optimisation it'll do that exact same range of functions after optimisation.

What optimisation IS about is altering the software so that its functionality remains the same AND one or more other goals are achieved. These other goals may actually be important enough that the software is not viable, even though functional, if they are not achieved. For example, a program to forecast tomorrow's weather which takes more than 24 hours to run on available hardware is clearly not viable even if its analysis and output are spot on. Here, obviously, if we were talking about optimisation we'd be talking about optimising for speed so that forecasts are produced in a timely manner.

Another way of looking at this is that we can say that, once software performs its required function or functions (i.e., it's fit for purpose), optimising the software increases its fitness for purpose.

Let's consider another example. The vast majority of the functionality of a GPS navigator comes from the software which runs on it. Many navigators are, or run on, portable devices designed to operate from batteries. This is the case for GPS software in smartphones and dedicated hand-held navigators used by trekkers and cyclists. Power consumption becomes a relevant optimisation here. If the operation of the software causes the battery life to be too short then the devices become effectively useless or even dangerous. The longer the GPS navigator's battery lasts the more successful it will be against other competing products. Note that we're not talking about changing functionality in any way, just power consumption.

Survival of the fittest

In any sort of world where there is competition between products, survival of the fittest applies. Just because software "works" doesn't mean it is going to survive, or that it's going to survive in the face of competing software which also "works". Software which is fittest for purpose is what will survive, though I need to say that we also need to define the areas or niches in which we want our software to survive, and the areas where we don't care, because characteristics which are pluses in one area can be minuses in another.

The obvious ones

Two common examples of software optimisation are:

  • Speed, or
  • Size

These two are often appropriate to consider, but as shown in the power consumption example above, they aren't the only ones and may not be the most important ones. However, let's look at them first and then consider other, less obvious optimisations.

Speed

Speed is how long it takes a piece of software to perform a task. This is usually a function of the design of the software itself, the hardware it is constrained to run on, the network it must use for communication, or other systems with which it must communicate. Subjectively, speed often has to do with the user experience of how fast the software runs, but speed might also be measured against other non-human constraints such as actual time available or, in the case of embedded systems, constraints imposed by other equipment or the world in which the software runs.

By using this phrasing we can see that software doesn't need to be as fast as possible, merely fast enough to satisfy the requirements or constraints of the environment in which it operates. If users never find themselves thinking they could go and get a coffee while they wait for the software to do something, and nothing breaks because the software is too slow, then it's probably fine.

If it's not fine, the potential for speeding up software will depend on what causes it to go slow in the first place. If the software spends most of its time waiting for input to arrive from elsewhere (such as from a user or from some server far, far away) then any local optimisations of the software may bring no change or benefit at all. But if the software spends most of its time doing calculations or shuffling data around then local optimisation may help it to calculate faster or move data more efficiently.

Size

Optimising for size, typically making the software itself smaller or writing it so that it uses less memory at run time, is a less common optimisation than speed and in most desktop and server environments is a non-starter. However, where memory or disk space is tight - such as in embedded or IoT systems - software size can be a significant factor of the overall cost and hardware complexity of a product. Optimising for size then becomes the result of data encoding, programming language, algorithm choice and - to a lesser extent - hardware design.

Speed and size aren't the only optimisations

Execution speed and size are important, of course, but if they were the only factors then everyone would write in Assembler. In the real world, there are other areas of optimisation which conflict with these. Let's look at some of them.

  • Time to develop or time to market
  • "Security"
  • Usability
    • Don't need a university degree to drive it
    • Consistent interface
  • Maintainability
  • Adaptability
  • Extendability
  • Robustness when altered, expanded or re-purposed
  • Energy usage
  • Cost of required hardware
  • Long-term viability
  • Cost to run - use different communications media (dial-up instead of satellite? Run at a different time of day when <something> is cheaper - labour, electricity, network, cooling? Easier to use and thus less-knowledgeable or less-skilled people can run it?)

Time to develop or time to market

Being able to quickly produce a workable, saleable product in response to an unexpected opportunity in a limited time frame may involve optimisation at a pre-coding stage of development. It might be, for example, that a potential customer comes to you and says, "I have a need for an app which does X, Y and Z by next Friday. Can you do it?" You look at the problem and work out that if you don't use your standard development process, delay some documentation until after delivery, acquire a library to handle large chunks of the innards, use your existing coders to write the custom bits and hire a couple of product testers short-term, then you can deliver the product on time.

If you used some other process you'd still need to produce a product with the same functionality, but here you're optimising your process maybe at a cost of more dollars up front and maybe some extra costs due to licensing the library.

There might be other choices that also help get the product ready quickly. Instead of using a compiled language such as C you might use a scripting language like Python because it gives you a shorter code-build-test-debug cycle.

You might also want to consider whether the app just needs to work as soon as possible and will never need to be changed, or whether you need to produce something which has basic functionality straight away but which will eventually need to be upgraded or enhanced.

"Security"

"Security" is a vague concept and I've used it between double quotes to highlight this. I want to mention a few things that possibly fall under this umbrella term.


Availability

Software, or a system, which isn't operational when you want it is not available. That doesn't mean it's broken, it's just not there right at the moment and please come back later when we're ready to serve you.

A well-known (security-wise) reason that a system or software might not be available is a DoS, or Denial-of-Service, attack. We see and hear about these in relation to web sites or web-based services, where some naughty and usually-anonymous person has done something to a web site they don't own to stop it working. They stop the site providing service to legitimate users and render it unavailable.

But unavailability comes in many other forms. If you're developing some software which takes a couple of minutes to start while it updates its databases, polishes its nails, scrutinises its navel or whatever, then for the period of time between the user activating it (clicking on it) and when they can use it, it is unavailable. If a business competitor of yours produces software which does exactly the same thing as yours but comes up near-as-dammit instantaneously, then you are at a disadvantage because their software has higher availability than yours.

Or if your company's software system needs regular downtime to update files, perform snapshots, run backups, reconcile accounts or meditate upon the awe-inspiring nature of the universe then you are also disadvantaged compared to another company whose system — which does exactly the same as yours — is capable of round-the-clock uninterrupted operation.

Software is also unavailable when the spinning wheel or hourglass icons are displayed. These indicate the software is "busy" — in other words, it wants some "me" time and will get back to you in due course. If you design software then the "me"-time icon can show up if you decide that updating some remote system or some time-consuming calculation needs to happen in the foreground. Usually it doesn't, and when you instead have these things happen in the background, the availability of your software is higher.


Consistency

Security is closely related to trust and trust is closely related to consistency. Ideally a computer system will, above and beyond being accurate, be consistent. If it works mostly, but sometimes says it can't answer your query because of sun spots or tremors in the Ether, then it is not a secure tool. Instead it becomes a tool for insecurity because you don't know when it will do what you want it to do. It makes things worse for you.

Or if one page on a web site shows only the first name and surname of people while other pages show surname and all given names then the web site user interface is inconsistent and creates doubt.

For people to trust software, it needs to behave consistently. This is not related to giving the right answers all the time, because that has to do with functionality, and in this particular article we're talking about optimisation — making systems work better rather than making them work correctly. If a particular system or software function is broken all the time then it's completely trustworthy, i.e., you can always be sure it won't work. Problems arise when perhaps a helpful programmer decides to present an answer when it'll only be correct some of the time.

Maintainability

Maintainability is a characteristic, or an aspect of design and implementation, which determines how well software can be repaired or upgraded after its initial roll-out. This is not an important optimisation for one-off software projects which are going to get thrown away after they've been used once or twice. Indeed, trying to optimise such software to make it easier to upgrade in the future is likely to increase the time it takes to create it in the first place which, in this case, is a bad thing.

But for software which is going to have any life at all — and particularly where the original developers are not going to be available all the time, such as if they're contractors — maintainability is an important optimisation beyond the software doing what it's supposed to.

Software which is difficult or impossible to maintain might mean cheaper cost to produce initially, but can mean high cost later. For example, if a piece of software needs to be changed months or years after it was written and the software proves intractable, it may need to be re-written (and re-tested) from scratch. If, instead, it had been written with a view to change from the start, then it likely would not need to be re-written at all.

Some characteristics which make software maintainable:

  • Substantive documentation,
  • Code which is well commented, with explanations of the principles (or "theory") of operation,
  • Logical structure — break down monolithic code into separate, independent, functionally-related modules or source files,
  • Use of simple and obvious techniques rather than obscure, non-obvious or "clever" ones. The person coming after you to work on the code may not know all the techniques you know,
  • Reduced duplication — do each thing in the software in one place only. For example, if you need to convert Fahrenheit to Celsius, write one function to do it and call it anywhere that needs that conversion.

Long-term operability

Another interesting area of optimisation is long-term operability: Extending the amount of time software will remain operable or viable.

If the software's original function continues to be required unchanged then you might think that the software could just keep chugging along unchanged also, but this tends not to be true. While the program code — the instructions which the computer follows to produce the desired functionality — doesn't deteriorate or change over time, the environment or situation in which the software runs will change.

For example, if the software talks to hardware such as scanners, networking equipment or other computers, then these other devices are likely to change or be replaced over time. Better, cheaper, smarter and — most importantly — different scanners, networks and other computers will come along to replace what your software originally communicated with. Will these newer devices be compatible? There's a chance they won't, at least not 100%.

Closer to home, will the operating system, language interpreter and other aspects of the environment in which you did the initial development continue to support the operation of the software in 5 years? 10 years? 20 years?

We often see operability problems cropping up in specialist areas such as medicine and manufacturing where computer systems are developed to control complex hardware. While the hardware — e.g., medical devices such as scanners, or machinery such as lathes — may have a useful life of ten or more years, the computers and operating systems which run them may only have a useful life of a few years. This means that when the combination of the computer plus operating system plus software stops working, the devices they controlled, which might still work fine, have to be disposed of.

We see shadows of this problem when we see the long-abandoned-by-Microsoft Windows XP (or even Windows 3.1!) still doing duty driving such dedicated hardware systems. We can ask: What has happened to all the other instances of these same dedicated systems where the computer or the operating system could not be persuaded to keep working this long?

The key part of all of these systems is the software itself and if choices were made at the outset to ensure that the software wasn't tied to a particular operating system or to particular interface hardware then the long-term viability of the software would increase because it can more readily be moved to new hardware or a new operating system.

The above example specifically refers to dedicated and specialist hardware but long-term operability also applies to software which doesn't drive dedicated hardware. Imagine that you have developed some sort of system, maybe a record-keeping system or a job-logging system, that uses a database. Now imagine that you have a web interface for the system. What choices can you make to ensure the system you develop remains operable for as many years as possible?

And, importantly, how can you recognise when you're making a choice that likely decreases the long-term operability of your software?

For the example above, here are some good choices:

  1. Choose a database implementation that provides a standard interface which is common across a range of implementations or products. For example, pick a database product which uses a standard form of SQL, for which there's a standard programming interface, and which doesn't use any vendor-specific "extensions".
  2. If you need to use a scripting or "glue" language to get the database and web interface to talk to each other, use a language that isn't experimental, that's stable, that's available across a wide variety of platforms and operating systems (to allow you to move it later on) and where the language developers have a stated policy of maintaining backwards compatibility. When you're developing or writing the code, use the most basic (or, better, standardised) aspects of the language rather than the cleverest or newest ones. Over the course of time most scripting languages evolve and this evolution can break your software. Scripting languages generally aren't constrained, feature-wise, to conform to an official standard (as C is), so over time scripting language features may come, go or change — and so may the functionality of your software if you depend on them.
  3. Use a common, well-supported web server with basic functionality and not one which is tied closely to the operating system on which it runs. It may be that when the host operating system is no longer supported, functions which it provided to the web server also become unsupported and this may break your system. Rely on basic web server functionality — GET, PUT, CGI, etc. — rather than on special or unique features of a particular web server product.
  4. When writing your software, use generic operating system functions such as OPEN, READ, WRITE, CLOSE, etc. Avoid operating system-specific functions such as pipes, process management or specific file system features. Avoid using non-standardised libraries, especially those that come from user-driven repositories, because if you rely on, say, a module or library developed by Joe XYZ, and Joe XYZ loses interest in maintaining it, then eventually your software will become unviable: the module may become incompatible with the latest version of the language you're using, or no longer be supported on your operating system, or security holes may be found in it which never get patched.
  5. In general, where possible, use standardised tools, languages and operating systems features. They're standardised precisely to ensure things developed to those standards remain viable in the long term so lean on them to help you achieve that same viability with your projects.

And here are some bad choices you can make:

  1. Select an operating system for the project which has an end-of-life just a few years away.
  2. Write the software in a language that is still under development or which is someone's research project (even when the "someone" is a large corporation).
  3. Use non-standard libraries or hardware.
  4. Make choices based on things with which you have a lot of experience but few other people know.
  5. Rely on hardware or software libraries only available from one supplier or vendor.

What I'm talking about in the above is making sure that your software continues to do what it's supposed to do even as the world evolves around it. You can't guarantee this, of course, but if you make good choices during the development stage it may be that as hardware and software technology change and evolve, your software can easily be updated to move along with them.

How to choose design, language, underlying O/S, hardware for long-term viability.

What external factors or choices can cause software to stop working even when there are no bugs.

Cost to remediate when this happens? Cheaper to throw away and start again? How to avoid this outcome?

Energy consumption

Lower heat output, batteries last longer, smaller container can be used, maybe no heatsink.

As mentioned earlier, one form of optimisation has to do with energy consumption and that can directly relate to how long a life a device will have when running on batteries.

Sometimes though, when talking about energy, it's not important to get the maximum battery life possible. The most important thing might be to get the lowest energy usage so that the battery size and weight can be reduced. This can be the case, for example, when we're talking about battery-operated drones or other aircraft. If the specification is for a mission life of, say, 30 minutes then reducing the energy consumption due to the way the software runs can allow the battery size to be reduced and the aircraft to therefore become lighter and need smaller motors — all of which means lower cost.

Also, reduced energy consumption means less heat generated. This can then mean smaller electronic components and maybe also the reduction in size or complete removal of the heatsink on the CPU. Less heat can also mean that the enclosure or case of the device can be smaller, simpler and cheaper, maybe having a plastic case instead of a metal one.

This also means fewer or no ventilation holes, and maybe that a heat-removing fan isn't required. If you're making a device which needs to be water-resistant, waterproof or which needs to operate underwater, these are all big deals.

Usability

It doesn't need to be very usable if it only gets used by engineers very occasionally, such as software which runs in geographically-isolated sites or in satellites or …

Not just user interface, i.e., GUI

How easy the device is to use, any displays it has, how easy it is to set up.

Determines how well supported it will be by the users who actually use it, whether it'll get used by choice, whether it'll get used only as a tool of last resort or whether the users will beg for it to be replaced.

Programmers who write graphical user interface code aren't always the best at implementing business logic code or at writing hardware or device driver code. Likewise, device driver writers don't always do good GUIs. Ideally, you'd have separate people doing these different coding tasks but if you're a one-person show then one of the things you have to really keep in mind is the other people — your clients, other team members, or your customers — who will be using your software.

If the user interface sucks, is difficult to use, only makes sense after a university course or causes users to develop uncontrollable facial twitches after they've been using it for a while, then your software, the product you've developed, is not going to last long. It may do whatever job it is supposed to do fabulously, but if it's not reasonably easy to use, then it won't be used, or someone else will write something better and your software will become, as they say, a footnote to history.

You can get an idea of how much attention you need to pay to the user interface by considering who is going to have to deal with your product. If the only people who are going to lay their hands on it are highly-trained engineers then you can probably skip some parts of the online help and the labels on the buttons, levers and sockets probably don't need to be blindingly obvious. However, if the engineers are going to be using the software a lot then it does need to be easy, sensible and logical to drive (like not having a button marked "Start" which you have to use to turn it off).

If, however, it is just for engineers to use but the vast majority of the time your product or device will be sitting untouched in an isolated building in the middle of a desert performing some mysterious communication functions, then it probably doesn't need to be very user friendly because, quite simply, mostly it doesn't have any users.

However, if your software is used by people who aren't well-trained or who might not be very computer savvy — such as users of information kiosks in tourist areas or shopping malls — then an awful lot of attention needs to be paid to usability so the poor dears don't get confused or frustrated.

So when we're talking about optimisation in the area of usability: you might have some software which does its job well and has the necessary buttons, dials and sound effects to be usable and perform its functions, but spending extra time increasing usability — making it more obvious to use (not just for you, but for everyone else as well) and decreasing the number of neurones your users need to have in gear to get benefit from your software — means they get more value.

Decrease likelihood of bugs

A perhaps-odd-sounding optimisation is optimising your software to decrease the chance bugs will creep in. Put another way, this is designing and coding to be bug resistant. Of course, as the general theme of this article indicates, this doesn't change the functionality of the software — what it does — one whit, but instead decreases the chances that bugs creep in during initial development and then take time to flush out, and that bugs will creep in during later updates and changes.

I'd like to discuss two non-exclusive directions you can take on this:

  1. Coding style, and
  2. Making the most of your tools

The lists here are, of course, not exhaustive.

Coding style

Reduce the number of execution and decision paths. The more paths there are in your code, the longer it takes to test the code in each path and the more likely it is that one or more paths will have bugs.

Group logically-related functions together in the same file or module. For example, if your program stores information in a database, put all of the code which reads, writes, searches and updates the database in the same file or module. Don't duplicate functionality. Instead, if you have to do the same thing in two slightly different ways at different times, write a function which handles both of these cases and use it instead. The big advantages of this are that there's only one place where you need to make changes, and that this common code gets tested a lot more than two separate code sections would be.

Making the most of your tools

Assert statements.

Many times in software you'll spend some time at the beginning of execution of the program itself or in individual functions setting up some variables whose values aren't expected to change from then on. The problem is, if there's a bug, they might change unexpectedly and usually there isn't a big, red light which comes on to tell you.

What you can do in many different languages is contrive to make a variable actually read-only from a particular point in the program, or to make it so an error gets generated when something tries to change its value.

In Tcl, you might use the trace command to assign a handler which gets called when there's an attempt to write a new value to the variable. This handler can then generate an error indication.

In an object-oriented language you might embed the variable as a private variable in a class and then use accessor methods to read and write it. You can then have a method in that same class which sets a lock variable which is tested each time the variable's write method is called and, when it finds the lock variable is set, raises an exception.

In C, you might have a function which does all the setup using variables it can write to, but which returns const pointers to them so the compiler won't let new values be assigned through them.

Strict type checking.

Handle all cases in your code, not just the expected ones because someone may come along later and change things so previously-unexpected values start appearing. Consider the following C code:

typedef enum {
        DO_FUNCTION_X,
        DO_FUNCTION_Y,
        DO_FUNCTION_Z
} FUNCTION_TYPE;

FUNCTION_TYPE   func;



switch (func) {
        case DO_FUNCTION_X:
                <some code>;
                break;
        case DO_FUNCTION_Y:
                <some other code>;
                break;
        case DO_FUNCTION_Z:
                <some different code>;
                break;
}
Today, the above code might be fine. Your code is expected to perform three different functions — X, Y and Z — and you use an enumerated type and a switch statement to neatly select the appropriate code in each case. But maybe your software is more complex than this. Maybe bits of code which handle each case are actually scattered around your software in more than one spot. This is still no problem… today, but what happens when someone comes along in the future to modify the software to handle one or more extra functions? What if they extend the enumerated type but miss modifying one or more switch statements? Answer: the code will run anyway and maybe produce results which have subtle errors.

One way you can avoid this happening is by adding a default case to each and every switch statement that outputs an error or pops up an error message box. Today they may not be needed, but tomorrow when the code is being modified your default cases can prevent unnoticed bugs creeping in. In languages which don't have switch statements, like Python, you make sure there's always an else to catch unexpected values.

And while we're on the topic of enumerated types in C, enumerated type variables are, under the hood, simply integers and there's nothing stopping you assigning arbitrary integer values to them or, indeed, assigning symbolic values from completely different enumerated types. For example you could have an enumerated type which represents animals and another which represents inanimate objects. C won't complain if you define a variable as animal and assign an inanimate object value to it.

You might like to try and make sure that none of these undesirable possibilities happen and here's a way:

typedef struct { int dummy; }   *ANIMAL_TYPE;

static const ANIMAL_TYPE        RABBIT          = (void *)(1);
static const ANIMAL_TYPE        CAT             = (void *)(2);
static const ANIMAL_TYPE        DOG             = (void *)(3);
static const ANIMAL_TYPE        HORSE           = (void *)(4);

typedef struct { int dummy; }   *OBJECT_TYPE;

static const OBJECT_TYPE        ROCK            = (void *)(1);
static const OBJECT_TYPE        TABLE           = (void *)(2);
static const OBJECT_TYPE        REFRIGERATOR    = (void *)(3);
static const OBJECT_TYPE        MOTORCAR        = (void *)(4);

int main () {

        ANIMAL_TYPE     creature;
        OBJECT_TYPE     thing;

        creature = RABBIT;

        thing = RABBIT;         /* compiler error: incompatible pointer types */

        if (thing == TABLE) {
                thing = MOTORCAR;
        } else {
                thing = ROCK;
        }

        return (0);
}
The above code otherwise works as expected, but note that the compiler will report an error where we try to make thing a RABBIT rather than one of the defined inanimate objects.

Compiler options.

Don't ignore compiler warnings. Set the highest warning level — with GCC or Clang, for example, enable -Wall and -Wextra — and aim for a clean compile. Ignoring warnings can mean bugs slip through.

Unedited stuff lives under here (to the bottom of the page)

  • How long it takes to build instead of how long it takes to run — a long build reduces the number of build/test cycles possible and thus the amount of testing of build configurations which can be performed.
  • Resistant to unauthorised changes
  • Confidence
  • Change logs
  • Non-repudiation
  • Privacy / encryption


Compromising security for the sake of features - e.g., storing personal information which isn't required for function but which makes it seem "friendlier" - "Hello, Dave. How are you today? How is your cat named Moggles, and your three children - Huey, Dewey and Louie?"

Likelihood of being bug free

Likelihood that any bugs are minor

Likelihood that any bugs which cause problems cause: a) minor problems, b) not very expensive problems, c) no reputation-harming problems.


Important for user interface software for systems where the back end is under constant development, or where the software is constantly being deployed into new environments.


Has strongly-defined, well-separated and generalised interface definitions between modular component parts, to allow easy addition of components to extend functionality — rather than tightly-interwoven, just-for-purpose components which rely on "knowledge" of data structures used by other components rather than on a well-defined interface.

Robustness when altered, expanded or re-purposed

Modular components where strictly-imposed interface definitions isolate the "damage" a change might make to one component and protect the remainder. Example: the Linux kernel's call interface where, for example, NULL pointers cause an error return rather than the kernel panicking.