You can't fix stupid

What a great line.

I was watching Comedy Central the other night and heard Ron White utter those perfect words: "You can't fix stupid." In one line he summed up why I can spend the rest of my life working in Information Security.

The truth is, no matter how smart we think we are, we make mistakes. Big ones, small ones, can't-believe-you-had-the-gall ones. The problem is that the people you have trusted with your information all make mistakes:

  • the criminal who jumped the border and is working illegally for $3/hr at the Quiki-Mart,
  • the CEO who thinks skipping commercials is advertising theft,
  • or the music publisher who thinks hacking your computer is the only way to protect their property.

The problem is that the people you trust with your information are focused on protecting their assets, not yours.

I believe that it is time for this to change. This information about us that is being freely stored, traded, and sold without our permission is our property. Our intellectual property is worth billions; that is why Identity Theft happens. Criminals only steal what has value. It is time that we demand that business and government recognize that we are the rightful owners of our property and that its improper use is a crime. Businesses that fail to protect our property should be just as liable as if they had damaged our health or homes.

You can't fix stupid, but you can make them pay when we have to put up with it.

The Monoculture Myth

People build models to understand the world.

We sort our world into neat categories to aid our understanding and decision making. Us vs. Them, Dark vs. Light, Good vs. Bad: shorthand that helps us make snap decisions in a hostile world. The key is understanding that each of us carries these mental models around, and that we respond to the world based on the models we hold rather than the reality in front of us. Choosing the right model for the problem at hand is therefore critical; bad models lead to bad decisions.

As a new discipline, Information Risk Management is still grappling with which models are appropriate. Practitioners have joined our ranks from other fields such as Information Technology, Law Enforcement, Medicine, and the Military, and each brought models that were tried and true in their old fields and attempted to apply them to our emerging discipline. Just look at the many names we call ourselves: Information Security, Information Risk Management, Network Security, Systems Security, Security Engineer… In my prior articles – The word on Information Security, Adware and Spyware - are they really consensual?, and Secure at any price? – I consistently show the danger of misusing language to describe the new reality of the Internet Age. The newest example is the term Monoculture when applied to computers.

Monoculture: systems with low diversity.

The paper CyberInsecurity: The Cost of Monopoly is a great example of trying to apply a tried-and-true model from biology to computer systems. The basic premise is that software in a low-diversity state has the same vulnerability as a biological system: a computer virus will attack a dominant system because it is dominant, and the impact on society is huge because the majority of computer systems get "sick" from the virus. This model has started to grow legs with Massachusetts assaults monoculture and Monocultures and Document formats: Dan's bomb goes off.

Quick - Go read those articles and see if you can spot the flaws.

Not only are these articles great examples of why misapplying models is dangerous, they also fail to properly apply the model to the problem they attempt to fix. If the monoculture model applies, then standardizing on any single document format leads to monoculture and the dangers it represents. The articles seem to be championing the creation of a monoculture as the solution to a perceived monoculture…

The root issue is whether the monoculture model is appropriate for information systems. Let's look at the monoculture model in its native discipline: biology. A monoculture is a large number of a single species, in close proximity, that we as humans rely upon. The risk is that a single vulnerability shared by all members of the species can be exploited by viruses, pests, changes in environment, etc., leaving us without a backup and subject to famine or economic loss. Applied to computers, this idea seems to make sense; Microsoft Windows seems to get a lot of computer viruses because everyone (98% last time I looked) uses Microsoft Windows. This leads people to install Linux or buy a Mac and think they are safe. The problem lies in believing that there is such a thing as a computer virus and that it acts in the same manner as a biological virus.

Biological viruses are bits of genetic material that take over a living cell in order to replicate; they evolved through random mutation. Computer viruses, in contrast, are simply computer programs. They may, as part of their operation, duplicate or attach themselves to other programs, but in the end they are programs created for a specific purpose. They didn't evolve. They do what their creators want them to do and nothing more. The name "computer virus" is dangerous because it applies the wrong model. By thinking "the virus did it" or "my computer got infected with a virus," the motive behind the action is lost.

The “virus” is evidence of a crime. The real question to ask is “What crime?”

If you believe you are "infected" with a "computer virus," then you get the system cleaned and buy anti-virus software. If, however, you realize that a program was installed on your computer without your consent in order to commit identity theft, you are going to take completely different actions.

In a biological system, the risk comes from our dependence on a single species, so monoculture is a valid model of that risk. Misapplying the model to information systems hides the real risks. People break into computers for a reason. Sometimes the reason requires access to large numbers of systems; sometimes yours is all that matters. The system was compromised because it was vulnerable, not because it was popular.

Microsoft Windows is insecure by design, not by popularity. Microsoft chose to make Windows easy to use, and some of those design decisions and default settings leave the system vulnerable. That being said, my children use Windows at home without ever being compromised or needing anti-virus products. If basic computer education that a 10-year-old can understand is enough to protect Windows, then monoculture isn't the problem.

KIS – Keep It Simple

No matter what industry we work in, that model applies. Complexity in design and/or execution leads to increased risk. Computers are vulnerable because they are complex. Linux and Mac are just as vulnerable "out of the box" as Windows; all of them have flaws in design or implementation that can be exploited to do you harm. Complexity breeds chaos. This risk was implicitly understood by the early computer scientists and can be found in The Art of Unix Programming (a short sketch of a couple of these rules follows the list):

  1. Rule of Modularity: Write simple parts connected by clean interfaces.
  2. Rule of Clarity: Clarity is better than cleverness.
  3. Rule of Composition: Design programs to be connected to other programs.
  4. Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
  5. Rule of Simplicity: Design for simplicity; add complexity only where you must.
  6. Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.
  7. Rule of Transparency: Design for visibility to make inspection and debugging easier.
  8. Rule of Robustness: Robustness is the child of transparency and simplicity.
  9. Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.
  10. Rule of Least Surprise: In interface design, always do the least surprising thing.
  11. Rule of Silence: When a program has nothing surprising to say, it should say nothing.
  12. Rule of Repair: When you must fail, fail noisily and as soon as possible.
  13. Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
  14. Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
  15. Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
  16. Rule of Diversity: Distrust all claims for “one true way”.
  17. Rule of Extensibility: Design for the future, because it will be here sooner than you think.
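
To make a couple of these rules concrete, here is a minimal Python sketch of the Rule of Repair (fail noisily, as soon as possible) and the Rule of Silence (say nothing when there is nothing surprising to say). The config file format and names are my own illustrative assumptions, not anything from the book.

    import sys

    def load_config(path: str) -> dict:
        """Parse simple 'key=value' lines; any malformed line is a hard error."""
        config = {}
        with open(path) as handle:          # a missing file raises immediately - fail early
            for lineno, line in enumerate(handle, start=1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue                # ignore blanks and comments quietly
                if "=" not in line:
                    # Rule of Repair: fail noisily, as soon as possible, naming the exact problem.
                    sys.exit(f"{path}:{lineno}: expected 'key=value', got {line!r}")
                key, value = line.split("=", 1)
                config[key.strip()] = value.strip()
        return config

    if __name__ == "__main__":
        settings = load_config(sys.argv[1])
        # Rule of Silence: on success, print nothing and simply exit 0.

The point is not the parser; it is that a simple part with a clean interface, which either works quietly or fails loudly at the exact point of trouble, is far easier to trust than a clever one.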

Secure at any price?

I was doing my daily scan of Slashdot this morning when I came across the following article: The Failure of Information Security by Noam Eppel. Embedded within all that doom and gloom is the following premise: security professionals have failed because our computers and networks are still not "secure." Noam Eppel has shown us all exactly what is wrong with our profession: Noam believes that we can actually achieve security.

Unfortunately, life isn’t that simple.

Secure: free from danger or risk

As every parent knows, life is all about risk. No matter how hard you try, no matter what products you buy, your children will still get their share of scrapes, bruises, sniffles, and broken hearts. The wise parent knows that it is worse to be overprotective than to let the child learn the important lesson behind that bruise: be more careful in the future. Parents understand the fundamental concept that Noam keeps missing: life is about living with and managing risk.

“The man who trades freedom for security does not deserve nor will he ever receive either.” -Benjamin Franklin

So as Information Risk Managers, our guiding principle is to help our clients manage risk. Do car manufacturers make cars that are safe? No. Around 40,000 people die every year in the US in car accidents, yet as a society we have determined that the ability to travel is worth the risk. We as individuals decide, every time we get in a car, that the reward outweighs the risk. Car manufacturers attempt to design cars that are survivable in accidents; they don't promise that you won't get hurt.

When consultants sell "security," clients design applications thinking that the computer and network are "secure." Because security professionals fail to accurately assess and present the risks in basic business language that non-security professionals understand, the design decisions that would make the system resilient when exposed to those risks are never made. And then some security professionals choose to blame the business for the "failure."

Web hacks are a great example. How many millions of dollars are spent each year by companies to protect against these "attacks"? How much money was spent by these very same companies to protect their buildings against spray paint? If your web site gets defaced, do you really care? If you knew the risks up front and designed your web application to protect the customer's information in spite of the web server being hacked, then not really. A simple automated integrity check can trigger a scripted reload of the affected web server, kicking out the script kiddie and restoring service, or even redirecting customers to unaffected web servers while the reload happens. What can strike fear into the heart of the security consultant more than being replaced by a very small shell script?
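
As a rough illustration of that integrity check, here is a minimal Python sketch. The web root, manifest location, and redeploy command are hypothetical placeholders for whatever your environment actually uses, and a real deployment would run this from a scheduler and alert someone as well.

    import hashlib
    import json
    import subprocess
    from pathlib import Path

    WEB_ROOT = Path("/var/www/html")                  # assumed web root
    MANIFEST = Path("/etc/site-manifest.json")        # assumed known-good hash manifest
    RESTORE_CMD = ["/usr/local/sbin/redeploy-site"]   # assumed scripted reload

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def check_and_restore() -> None:
        known_good = json.loads(MANIFEST.read_text())   # {"relative/path": "hash", ...}
        tampered = [
            rel for rel, digest in known_good.items()
            if not (WEB_ROOT / rel).exists() or sha256(WEB_ROOT / rel) != digest
        ]
        if tampered:
            # Defacement detected: report what changed and kick off the scripted reload.
            print(f"Integrity check failed for: {', '.join(tampered)}")
            subprocess.run(RESTORE_CMD, check=True)

    if __name__ == "__main__":
        check_and_restore()

A few dozen lines, run every few minutes, and the defacement becomes a brief outage instead of a crisis.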

Stuff happens. Our job is to help our clients deal with it.