Tuesday, November 19, 2013

Risky Business: tackling the security challenges of technology

Focusing on the horizon, as futurists do, can distract us so much that we risk tripping over what is right under our feet. The truth of this statement was brought home to me recently by Steve Keller, founding partner in the Architect’s Security Group, at the Lemelson Center’s Inventing the Surveillance Society symposium. As Steve pointed out that day, the potential of the emerging technologies we discussed (facial recognition software, eye tracking, biomonitoring) can distract us from considering the attendant risks. But before we even tackle this future, we need to play catch-up with the technology we’ve already integrated into our museums. Today on the blog, Steve gives you a heads-up on the tech risks that may trip you up in the near future.

Security consultants and engineers who work almost exclusively with museums have seen many changes in security technology over the past thirty years. In "the old days"—the 1990s, that is—life began to get complicated as museums started using computers to run their alarm and access control systems. The computers in use before then generally were proprietary to companies like Honeywell and ran on proprietary operating systems. But in the '90s, museums began to use off-the-shelf PCs running DOS or Windows. This made museums vulnerable to hacking, viruses, and other cybersecurity threats because, unlike the early systems by Honeywell and Johnson Controls, virtually anyone could learn how to attack these systems from anywhere in the world.

The problem with technology is that it changes so rapidly it is difficult to keep track of the negative effect each change might have on security. As far back as Windows 95, programmers added “Easter Eggs” to their work—features visible only to programmers who saw the actual program code. Windows 95, for example, had a hidden flight simulator. In spite of backlash from the government and consumers, Easter Eggs continue to be found in most business software. Why do people question the integrity of software that has hidden features? Well, how do we know what else might be hidden in there that might make us vulnerable to a hacker? Could someone build a back door into your alarm system by adding it to Windows?

When a museum builds a computer network that may contain a hundred or more computers, each of those computers becomes a doorway into the whole network. If your alarm and access control systems use that network, then anyone with a password to your system and to the network can interfere with your security. How many times have I found a network password taped to someone's computer screen or on a Post-it in their top desk drawer?
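The risk in those taped-up passwords is easy to underestimate, so here is a minimal sketch of the kind of audit that exposes it: checking stored account passwords against a short list of common defaults. A single weak password on any of the hundred "doorway" machines opens the whole network. All account names and passwords below are invented for illustration.

```python
# Hypothetical audit sketch. The default list and accounts are
# illustrative assumptions, not real museum credentials.
COMMON_DEFAULTS = {"password", "123456", "admin", "letmein", "museum"}

def weak_accounts(accounts):
    """Return the account names whose password appears on the default list."""
    return [name for name, pw in accounts.items()
            if pw.lower() in COMMON_DEFAULTS]

accounts = {
    "frontdesk": "letmein",      # the Post-it special
    "registrar": "Vermeer!1665",
    "security1": "admin",
}
print(weak_accounts(accounts))   # ['frontdesk', 'security1']
```

Any real audit would of course work against hashed credentials and a much larger dictionary; the point is only that one hit on this list is one open doorway.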
  
This is just a small part of the problem. Each of these “doorways” is also a way for viruses to be introduced into the network. While good virus protection software can detect most threats, new viruses are being introduced weekly, and until the software "catches up," we are all vulnerable. Other threats to your security systems include denial-of-service attacks, where someone intent upon breaking into the museum without being detected can overload the network with nonsense data until you literally shut it down to stop the threat. Shutting it down is exactly what the bad guys want.
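The mechanics of that kind of flood can be sketched in a few lines. Assume, purely for illustration, that an alarm panel's network listener keeps a bounded queue of incoming events; once an attacker fills it with junk, a genuine alarm arriving afterward is simply lost. The capacity and event names here are invented, not any vendor's actual design.

```python
# Hypothetical sketch of a flooded event queue; sizes are illustrative.
QUEUE_CAPACITY = 100

def deliver(events, capacity=QUEUE_CAPACITY):
    """Queue incoming events until the queue is full; everything after is lost."""
    queue, dropped = [], []
    for event in events:
        (queue if len(queue) < capacity else dropped).append(event)
    return queue, dropped

# An attacker floods the listener with nonsense before a real alarm fires.
flood = [("junk", i) for i in range(150)]
queue, dropped = deliver(flood + [("ALARM", "gallery motion")])

print(("ALARM", "gallery motion") in queue)  # False: the real alarm never arrived
print(len(dropped))                          # 51: fifty junk events plus the alarm
```

Real systems fail in messier ways than a clean drop, but the outcome is the same: the defender either misses alarms or shuts the network down, and either result serves the attacker.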

My point is that museums no longer have the luxury of just buying an alarm and access control system. Consideration must be given to providing a dedicated, protected network for that system, isolated from the internet, so that the only way to access it is from the security control room.
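In practice, that isolation is a firewall or switch rule, but the logic can be expressed as a simple allowlist check: only addresses inside the control-room subnet may reach the security network. The subnet and addresses below are illustrative assumptions, not a real site plan.

```python
import ipaddress

# Hypothetical isolation rule: the security network accepts connections
# only from the control-room subnet. Addresses are invented examples.
CONTROL_ROOM_SUBNET = ipaddress.ip_network("10.50.1.0/24")

def may_access_security_network(source_ip):
    """Allow only control-room hosts; deny the rest of the museum LAN
    and, by extension, the entire internet."""
    return ipaddress.ip_address(source_ip) in CONTROL_ROOM_SUBNET

print(may_access_security_network("10.50.1.20"))   # control-room workstation: True
print(may_access_security_network("10.50.7.14"))   # curator's office PC: False
print(may_access_security_network("203.0.113.9"))  # internet address: False
```

A deny-by-default rule like this, enforced at the network edge rather than in software on each machine, is what turns the hundred doorways back into one.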

Another threat is the trend of transferring ownership of all servers to the IT department and moving them to one location under their care and control. I feel that control of the physical server should stay with the security department, and that the server itself should remain in the security control room. After all, who in the organization has the knowledge and access to commit the perfect billion-dollar heist? If the IT manager decided to rob a museum, not only would we not know who did it, we wouldn't have any idea whatsoever how it was even done. I no longer worry as much about a dishonest curator or registrar, because what they can haul away is pocket change compared to the damage an IT employee can do without proper controls.

Some institutions are using virtual servers and others are migrating their data to "the cloud," and this introduces other risks. Have you seen "a cloud"? It is generally a large shipping container packed with servers, each running virtual servers, located in a parking lot somewhere in the world, often India. That doesn’t provide the type of control of the security system that makes me, or your fine arts insurers, comfortable.


As we add useful new features to our security systems, we also add problems. I recently saw a system that puts help icons on the desktop of every employee’s computer. One icon is a panic button that they can use to alert security of a problem. Another customizable icon can tell Security you need paramedics because you are having, say, a low blood sugar event because you are diabetic. These systems are fantastic. But they cause headaches as well, because now the security system computer is storing previously private and protected human resource information, like the fact that you have diabetes or a heart condition.
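One way to keep the feature without the headache is data minimization: the desktop icon sends the security console only an employee ID and an alert type, while the medical details stay in a separate, access-controlled store owned by HR. This is a minimal sketch of that separation; all names, records, and the split itself are my illustrative assumptions, not how any particular vendor's product works.

```python
# Hypothetical data-minimization sketch. Records are invented.
MEDICAL_DETAILS = {  # held and access-controlled by HR, not security
    "EMP-1042": "diabetic; low blood sugar protocol",
}

def raise_alert(employee_id, alert_type):
    """What the security console actually stores: no medical data."""
    return {"employee": employee_id, "alert": alert_type}

alert = raise_alert("EMP-1042", "MEDICAL")
print("diabetic" in str(alert))            # False: the security record holds no diagnosis
print(MEDICAL_DETAILS[alert["employee"]])  # responders look this up via HR, not security
```

The security system still knows who needs help and what kind; what it never holds is the diagnosis, so a breach of the alarm server does not become a breach of employee health records.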

Today security is a high-tech and complex field that is changing quickly. Only a small percentage of it still involves security officer management. Museums need a comprehensive plan to manage the changing security environment, one that identifies and neutralizes the risks posed by technology. Security is like an iceberg. What you see above the water is the easy part. It’s what’s hidden below the surface that can cause you real problems.



