The Monoculture Myth

People build models to understand the world.

We sort our world into neat categories to aid our understanding and decision making: Us vs. Them, Dark vs. Light, Good vs. Bad. These shortcuts help us make snap decisions in a hostile world. The key is understanding that each of us carries these mental models around and uses them to navigate the world; we respond to the models in our heads, not to the reality in front of us. So choosing the right model for the problem at hand is critical – bad models lead to bad decisions.

As a new discipline, Information Risk Management is still grappling with which models are appropriate. Practitioners have joined our ranks from fields such as Information Technology, Law Enforcement, Medicine, and the Military, and each brought along models that were tried and true in their old field and attempted to apply them to our emerging discipline. Just look at the many names we call ourselves: Information Security, Information Risk Management, Network Security, Systems Security, Security Engineer… In my prior articles – The word on Information Security, Adware and Spyware - are they really consensual?, and Secure at any price? – I have consistently shown the danger of misusing language to describe the new reality of the Internet Age. The newest example is the term "monoculture" as applied to computers.

Monoculture: systems with low diversity.

The paper CyberInsecurity: The Cost of Monopoly is a great example of trying to apply a tried-and-true model from biology to computer systems. The basic premise is that software in a low-diversity state carries the same vulnerability as a biological system: a computer virus will attack the dominant system precisely because it is dominant, and the impact on society is huge because the majority of computer systems get "sick" from the virus. The model has started to grow legs in articles such as Massachusetts assaults monoculture and Monocultures and Document formats: Dan’s bomb goes off.

Quick – go read those articles and see if you can spot the flaws.

Not only are these articles great examples of why misapplying models is dangerous; they also fail to apply their own model consistently to the problem they attempt to fix. If monoculture as a model applies, then standardizing on any single document format leads to monoculture and the dangers it represents. The articles seem to be championing the creation of a monoculture as the solution to a perceived monoculture…

The root issue is whether the monoculture model is appropriate for information systems. Let’s look at the monoculture model in its native discipline – biology. A monoculture is a large number of a single species, in close proximity, that we as humans rely upon. The risk is that a single vulnerability shared by all members of the species can be exploited by viruses, pests, changes in environment, and so on, leaving us without a backup and subject to famine or economic loss. Applied to computers, this idea seems to make sense: Microsoft Windows seems to get a lot of computer viruses because nearly everyone (98% the last time I looked) uses Microsoft Windows. This leads people to install Linux or buy a Mac and believe they are safe. The problem lies in believing that there is such a thing as a computer virus and that it acts in the same manner as a biological virus.

Biological viruses are bits of genetic material that take over a living cell in order to replicate; they evolved through random mutation. Computer viruses, in contrast, are simply computer programs. They may duplicate themselves or attach themselves to other programs as part of their operation, but in the end they are programs created for a specific purpose. They didn’t evolve. They do what their creators want them to do and nothing more. The name "computer virus" is dangerous because it applies the wrong model: by thinking "the virus did it" or "my computer got infected with a virus," the motive behind the action is lost.

The “virus” is evidence of a crime. The real question to ask is “What crime?”

If you believe you are "infected" with a "computer virus," you get the system cleaned and buy anti-virus software. If, however, you realize that a program was installed on your computer without your consent in order to commit identity theft, you are going to take completely different actions.

In a biological system the risk comes from depending on a single species, so monoculture is a valid risk. Misapplying the model to information systems hides the real risks. People break into computers for a reason. Sometimes the reason requires access to large numbers of systems; sometimes yours is all that matters. The system was compromised because it was vulnerable – not because it was popular.

Microsoft Windows is insecure by design – not by popularity. Microsoft chose to make Windows easy to use, and some of those design decisions and default settings leave the system vulnerable. That being said, my children use Windows at home without ever being compromised or needing anti-virus products. If basic computer education that a ten-year-old can understand is enough to protect Windows, then monoculture isn’t the problem.

KIS – Keep It Simple

No matter what industry we work in, that model applies. Complexity in design or execution leads to increased risk. Computers are vulnerable because they are complex. Linux and the Mac are just as vulnerable “out of the box” as Windows; all of them have flaws in design or implementation that can be exploited to do you harm. Complexity breeds chaos. This risk was implicitly understood by the early computer scientists and is captured in the rules from The Art of Unix Programming (a short code sketch of one of these rules follows the list):

  1. Rule of Modularity: Write simple parts connected by clean interfaces.
  2. Rule of Clarity: Clarity is better than cleverness.
  3. Rule of Composition: Design programs to be connected to other programs.
  4. Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
  5. Rule of Simplicity: Design for simplicity; add complexity only where you must.
  6. Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.
  7. Rule of Transparency: Design for visibility to make inspection and debugging easier.
  8. Rule of Robustness: Robustness is the child of transparency and simplicity.
  9. Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.
  10. Rule of Least Surprise: In interface design, always do the least surprising thing.
  11. Rule of Silence: When a program has nothing surprising to say, it should say nothing.
  12. Rule of Repair: When you must fail, fail noisily and as soon as possible.
  13. Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
  14. Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
  15. Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
  16. Rule of Diversity: Distrust all claims for “one true way”.
  17. Rule of Extensibility: Design for the future, because it will be here sooner than you think.
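
To make one of these rules concrete, here is a minimal Python sketch of the Rule of Repair (fail noisily and as soon as possible) applied to a hypothetical configuration loader. The function name, the file format, and the "listen_port" key are illustrative assumptions of mine, not anything from Raymond’s book.

    import json

    def load_config(path):
        """Load a JSON config, failing noisily and early (Rule of Repair)."""
        # A missing file or malformed JSON raises immediately, with a
        # clear traceback, instead of surfacing later as odd behavior.
        with open(path) as f:
            config = json.load(f)
        # Validate the one required key up front ("listen_port" is a
        # hypothetical example) rather than deep inside the program.
        if "listen_port" not in config:
            raise ValueError(f"{path}: required key 'listen_port' is missing")
        return config

The tempting “robust” alternative – catching every exception and returning an empty default – converts a loud, local failure into a silent, distant one, violating both the Rule of Repair and the Rule of Simplicity.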
