

Perspective: 20 years of security

Contents: Introduction; Information security; Access management; Biometrics; Digital forensics; Email; Network & platform security; Wireless; The future

There is a wealth of insight, reminiscence and brilliant looks into the future here. However, as in everything, time introduces both macro and micro changes. For example, instant messaging is a comparatively new concept and Web 2.0 has barely been born. Cloud computing, which arguably has been around under different monikers for decades, is so young that most people don't yet fully understand what it is, what it can do for us or what its risks are. We would be hard-pressed to have a discussion on these and a few other topics that stretch back 20 years. Twenty years in our business is an eternity. And if we couple 20 years of information assurance with 20 years of information technology, the stew thickens and becomes even tastier.

For example, even the name of what we do has changed materially – several times. What started out as computer security has evolved through network security, enterprise security and information security to get where it is today. SC Magazine has tracked this evolution for the past 20 years, and for a lot of that time I have been writing in these pages. I have had a chance to combine 30 years of consulting, 40 years of writing, and 10 years in academia into an experience that has been enriched by writing here. We have seen security products evolve. We've seen market segments, methodologies and security requirements ebb and flow.

The contributors to this special section are a mix of experts, visionaries, long-time market-watchers, CEOs, educators, researchers and CISOs. But they all have one thing in common: They have been on this rollercoaster with you and me, and their perspectives on where we came from, where we are and where we are going outstrip anything I have ever seen in a single publication.

I could not have assembled this feature without the help of Judy Traub, our intrepid editorial assistant. I had planned to give her an extra month to help pull together the pieces for our upcoming December innovators issue. Alas, the best laid plans and all that. Judy dove in and pulled resources from her vast store, and suddenly we had a first-rate feature for our product section. So, enjoy this walk – or, sometimes, run – down memory lane and perhaps we'll do this again in another 20 years. – Peter Stephenson, technology editor


INFORMATION SECURITY

The practice of information security by Tom Peltier

Thirty-two years ago, I began a frustrating, scary, exciting and rewarding career in computer security. The first conference I attended on computer security, in 1978, addressed such issues as policy development, disaster recovery planning, data center physical security and the new technology of access control systems (ACF2, RACF and Top Secret). The environment we were working with was typically a computer laboratory with a big mainframe system.

In 1981, everything changed with the introduction of the first affordable portable computer system. No longer would the business units be tethered to the whims of the information technology departments. Anyone could go to their local "Nerds r Us" store and get the hardware and software they needed to create their independent information processing environment. This move to a decentralized base forced computer security professionals to change focus and begin to stress the need to secure information wherever it was found, regardless of the format. The job title became information system security officer (ISSO).

With the emergence of the client-server infrastructure, the ISSO function began to move out of the IT departments. As the technology leaped forward, the ISSO struggled to implement basic security mechanisms. However, with this decentralization, we saw the responsibility for information protection switch from IT back to the business units. A greater emphasis was placed on creating "corporate" policy and having the business units implement their own supporting standards and procedures. Security awareness training came into its own during this period. The local business units were often charged with assigning a local security coordinator who would be responsible for implementation of the local program.

In 1991, the industry took a big step forward in obtaining legitimacy with the introduction of the certified information systems security professional (CISSP) certification exam. Industry experts established a common body of knowledge that would provide for testing to establish the competency of the individual. For almost 20 years now, the CISSP certification has provided businesses with an assurance that the holder of the certification meets an industry-accepted level of knowledge.

The key factor in the success of any information security program has been the level of acceptance by management and users. The focus on managing risk seems to have aided in this acceptance. Instead of implementing controls and countermeasures by decree, the new emphasis on risk identification and management includes all parties with a vested interest. We are beginning to see many organizations move away from information security and toward enterprise risk management. At CSI 2009 in October, the 36th edition of the Computer Security Institute's annual conference, a risk management summit discussed that very topic.

In 30 years, the industry has evolved from computer security to information security to information protection and now, perhaps, to enterprise risk management. What we are called is not as important as ensuring that the services we provide are continued and accepted.

Tom Peltier has been an information security professional for over 30 years. During this time, he has shared his experiences with fellow professionals and was awarded the Computer Security Institute's (CSI) Lifetime Achievement Award in 1993. He has had six books published on policy development and risk assessment.

Revolution or evolution? by Michael Corby

The vision from 1989
Computer systems as we know them were in their infancy in the waning moments of the 1980s and the early years of the 1990s. Systems were still largely segregated by manufacturer. IBM shops had no DEC equipment anywhere, and vice versa. HP systems were found in manufacturing plants, and the world of CAD/CAE was dominated by standalone graphical units that were the engineering versions of the memory typewriter. Security was being promulgated in the form of model architectures and the Rainbow Series, computer security standards published by the U.S. government.

Within the next 10 years, we were enveloped in the dot-com boom. Technology permeated every aspect of our lives. What were we envisioning from the information security domain? In many instances, what we were looking to do was revolutionary. We saw that virus code and other malware were gaining in popularity, and Scott McNealy of Sun Microsystems warned that there was no longer any digital privacy. Sometimes we have seen a revolution, sometimes we have seen a slower crawl forward.

Human resources and staffing
In 1989, security professionals were either writing crypto code for the military or were hanging backup tapes in a data center. Today, we are blessed with more than a dozen ways of measuring the security competence of our staff. Verdict: Revolution

Network architecture
In 1989, open systems were beginning to babble to each other. Communications ran over leased lines or internal networks. Nobody ever used dial-up public communications for sensitive data (if you knew what data was actually sensitive). Secure network architecture is now available on the shelf at the office supply store. Verdict: Revolution

Systems development
The good old five-phase approach (or six-, depending on your school of thought) to designing systems was de rigueur in 1989. Application teams went through stages of scope, design, programming, unit testing (and system testing) and implementation. Once all this was done, someone may have asked: "What about backup and recovery?" The method is the same, only some of the questions have changed. Verdict: Evolution

Monitoring and forensics
In 1989, logs were generated and maybe printed. Tracing events was rare, but also largely unnecessary. Today, compliance laws and industry regulations have tightened the need for monitoring and active event investigations. Verdict: Revolution

Summary
In these and other areas, we have made substantial progress in information security over the past 20 years. There is more to come. Here's my prediction: Over the next 20 years, security will be embedded in all information management technology. Data segregation will be the usual architecture, and trace logs and factual responses to the query, "How did that happen?" will be commonplace. Hackers and malware will exist, but will be just an annoyance. Hmmm. Didn't I say that in 1989?

Michael Corby has over 40 years of experience in IT strategy, operations, development and security. He is a founder of (ISC)², the organization that established the CISSP credential.

20 years of governance by Howard Schmidt

Governance, according to Wikipedia, relates to decisions that define expectations, grant power or verify performance. It is a word that has had much use and has been interpreted in different ways over the years. But, when it comes to information security, governance is a relatively recent, but important, addition to the modern vocabulary.

In the past, information security was primarily about technology and the tools needed to resolve IT-related problems – rather than deliver security solutions to support and enable the business. It was a backroom function and rarely was it discussed in the boardroom. But, necessarily and very appropriately, things have changed. High-profile accidental or intentional attacks against IT, combined with a series of natural disasters, helped to put IT security governance in the spotlight. It was quickly recognized as a key component in dealing with issues such as data privacy, loss prevention and protecting business and brand integrity.

Company executives in B2C and B2B businesses alike saw good governance in IT security as a way to protect customers, employees, suppliers and partners. Governments also took positive action by introducing new legislation to raise awareness and ensure a benchmark level of regulatory compliance to defined standards of governance. The best known of these initiatives include the Sarbanes-Oxley Act in the United States and the EU Data Privacy Act.

This focus on information security governance was reflected in a new breed of executives bearing titles such as chief information security officer (CISO), chief risk officer (CRO) and chief privacy officer (CPO). These senior positions provide a clear line of responsibility and corporate structure for IT security governance. Furthermore, IT security governance has also become part of the corporate culture and mindset as companies promote the real value of compliance and good governance. Regulatory compliance, combined with a strong commitment to governance, significantly enhances the IT security function and underpins the success and integrity of any business.

One thing is very clear: Strong governance has to be driven from the top down and, with growing awareness and ownership in the boardroom, the future for IT governance looks positive.

Howard Schmidt is president and CEO, Information Security Forum. Formerly, he was vice-chair, President's Critical Infrastructure Protection Board; VP-CISO, eBay; CSO, Microsoft; professor of research, Idaho State University; and adjunct professor, GA Tech, GTISC.


Information security policy development by Rebecca Herold

In 1991, as an IT internal auditor, I performed the very first enterprise-wide information security audit for my organization, a multinational financial and insurance organization with approximately 20,000 employees. After this comprehensive four-month project, I was asked to implement all the recommendations I had made within the audit – primarily, the creation of the information protection function/department within the organization.

The first thing I did in the new department was create information protection policies based on all the risks I identified from the audit. At the time, there were very few information security policies available. I created my organization's policies largely based on the results of my audit – basically a risk assessment (though that term was not used much then). I was happy to find, a year or so after BS7799 [a British standard] was first published in 1995, that I had hit on virtually all of the topics it listed, along with the topics listed in the first publication of COBIT, the best practices issued by ISACA and ITGI in 1996.

One of the earliest pioneers to provide guidance for developing information security policies was Charles Cresson Wood, who has published many books and articles covering the topic. His work stretches as far back as 1981, with Policies for Deterring Computer Abuse, through the just-released version 11 of his Information Security Policies Made Easy, which was first published in 1991 and is now used by over 50 percent of Fortune 500 companies. Until the introduction of BS7799 and COBIT in the mid-1990s, the National Institute of Standards and Technology (NIST) was largely the most referenced public source for information security policy guidance, starting as early as its June 1974 Guidelines for Automatic Data Processing Physical Security and Risk Management.

Most organizations did not really put a lot of effort into information security policy development in the 1980s, or even well into the 1990s. The person assigned the responsibility for creating information security policies back then was often an IT administrator who had some extra time on their hands. And then the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA) were enacted in the last half of the 1990s, loudly followed by the Sarbanes-Oxley Act (SOX) of 2002, putting the importance of policy development squarely in the cross-hairs of executive suites. The importance of not only information security policies, but also privacy policies, was elevated to higher levels within most organizations.

Information security policy development necessarily evolved from being a largely cookbook type of exercise (viewed as a necessity to help keep security settings consistent and to keep employees from doing bad things with computers) to now being recognized as an exercise that must be based on business risk and compliance in order to be effective for the business. The need for documented risk- and compliance-based information security policies is here to stay.

Rebecca Herold, CIPP, CISSP, CISM, FLMI, "The Privacy Professor," has over two decades of information security, privacy and compliance experience. She has been named a Computerworld "Best Privacy Adviser" multiple times and a "Top 59 Influencer in IT Security" by IT Security magazine. She is currently leading the NIST Smart Grid standards committee privacy impact assessment.

ACCESS MANAGEMENT

Access management evolves over 20 years by Tomas Olovsson

Access management has not always been what it has become today. In the early days of computers, it was more or less identical to physical access to the premises. Jobs were entered in batches into large mainframes by operators, and end-users seldom came in contact with the computers. It was not until the mid-60s, when time-sharing systems such as IBM TSS/360 and DEC TOPS-10/20 were born, that access control and separation between users became important and systems required passwords to let users in.

The next important step in the evolution was taken in the 70s with the Unix and Multics multi-user systems. Unix introduced the concept of allowing users to give away their access rights permanently to a program (the setuid, or -s, flag). This made it possible for other users to execute applications and access data which would have been inaccessible with their normal access rights. Applications could now make decisions about what data end-users could access, not just the operating systems. This provided great new functionality, but at the cost of increased complexity – and security problems would, of course, follow.

Then, not much happened in this area for 20 years. It is true that other systems borrowed or invented similar mechanisms, and access control lists (ACLs) were introduced, but nothing really new arrived on the scene. Access management was for many years just a matter of properly distributing usernames and passwords between a few, sometimes not even communicating, servers.

However, during the last 10 to 15 years, internal networks and the internet have developed extremely fast. Applications and systems have become connected in ways never seen before. Suddenly, there was a need to synchronize accounts not just on a handful of systems, but across tens, hundreds and even thousands of applications within an organization. New authentication mechanisms were also introduced – everything from token devices, smart cards and certificates to biometric identification methods. Single sign-on became important, and role-based access control (RBAC) with detailed auditing and logging was suddenly necessary, even required by law. And today, one of the latest buzzwords in this area is "security in the cloud," with people and applications spread out all over the globe. Now, we have yet another access management challenge in front of us.

Tomas Olovsson is co-founder and CTO of AppGate Network Security, and associate professor at Chalmers University of Technology in Sweden, with a research focus on network security.
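The role-based model Olovsson singles out can be illustrated with a brief sketch. This is a minimal illustration, not any product's actual API; the users, roles and permissions below are invented for the example, and a real deployment would ship the decision log to the detailed auditing he mentions.

```python
# Minimal role-based access control (RBAC) sketch.
# Role, permission and user names are illustrative only.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

ROLE_PERMISSIONS = {
    "auditor": {"read_logs"},
    "operator": {"read_logs", "restart_service"},
    "admin": {"read_logs", "restart_service", "manage_users"},
}

USER_ROLES = {
    "alice": {"admin"},
    "bob": {"auditor"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Return True if any of the user's roles grants the permission, and log the decision."""
    granted = any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
    logging.info("access %s: user=%s permission=%s",
                 "granted" if granted else "denied", user, permission)
    return granted

if __name__ == "__main__":
    print(is_allowed("bob", "restart_service"))  # False - auditors may only read logs
    print(is_allowed("alice", "manage_users"))   # True
```

The appeal of the model is that administrators manage a handful of roles rather than thousands of per-user permission lists, which is part of what made synchronizing accounts across hundreds of applications tractable.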

Encryption: 20 years ago by Bruce Schneier

These days, we live in a world of cryptographic abundance, but the 1980s were different. Encryption products were rare, obscure, eclectic, confusing, poor or – more likely – all of the above. Through a combination of export restrictions, patriotic pleas, threats and secret agreements, the National Security Agency (NSA) effectively controlled the encryption market, ensuring that it was never mainstream.

Research, on the other hand, was blossoming. The Annual International Cryptology Conference (CRYPTO) started in 1981, Eurocrypt in 1982. Mathematics conferences accepted cryptography papers and more appeared in engineering journals. Many of the results now seem basic, but back then we were only just starting to understand algorithms and protocols, public-key cryptography and cryptanalysis. There were a few books: notably by Konheim (1981), Denning (1982), Patterson (1987), Davies and Price (1989), and, of course, David Kahn's The Codebreakers (1967). When I wrote Applied Cryptography, in 1992, everything publicly written about cryptography fit onto a single shelf. I wrote the book that I wished existed – an accessible introduction to the field. Seventeen years and a couple of hundred thousand copies later, I regularly meet people whose interest in cryptography was sparked by that book.

The changes came fast in the 1990s. Cryptography export controls were relaxed and eventually repealed. The FBI tried, and failed, to force vendors to install backdoors in their products so they could eavesdrop more easily. Research continued to boom as the graduate students of the 1980s got graduate students of their own. There were more ideas, more conferences, more products. And, most of all, finally, there was demand: the World Wide Web, electronic commerce, corporate networks.

Now, we all use cryptography daily. It's in our operating systems, our web browsers, our phones and our email programs. There are so many cryptography conferences that no one can attend them all. I can't fit my current cryptography library onto three massive bookshelves, and I don't have anywhere close to everything published.

At the same time, we've learned that security needs more than cryptography. Security is a chain, and it's only as secure as the weakest link. Compared to applications, operating systems and network security, not to mention human factors, cryptography is already the strongest link in any security chain. We might have beaten the NSA in the battle for cryptography, but the war for privacy and security continues.

Bruce Schneier's new book, Cryptography Engineering, will be published in spring 2010. You can read his other writings at www.schneier.com.

An historical perspective on password management by Eugene Schultz

In the mid-1980s, no issue was bigger than password security, and for good reason – most break-ins into systems at that time involved exploiting weak, default and/or null passwords. Other kinds of attacks were almost unheard of then (social engineering attacks excepted). Policies often required strong passwords, but with the exception of minimum password length settings in operating systems, no technology for enforcement existed. System and security administrators were at the mercy of users, who were constantly urged to select strong passwords, but seldom did.

In the late 1980s, several significant password management technologies surfaced. Password filters prevented users from entering passwords that did not meet password goodness criteria. Password crackers enabled system administrators to monitor for cracked, weak or default passwords, so that account owners could then be required to change them. Thus, enforcing password policy provisions using technology became possible. Lamentably, however, few organizations used password filters, and worse yet, password crackers have been used more often and more effectively by attackers than by anyone else.

The 1990s marked the emergence of remote vulnerability scanners. Although most of the tests carried out by these tools probed for vulnerabilities in operating systems and system services, some of them also tested for weak passwords in well-known accounts. Unfortunately, these tools did not test password strength in other accounts. They therefore have made little difference in password management.

Every Windows operating system since Windows NT (July 1993) has included password filtering based on combinations of letters, numbers and special symbols. However, research studies have shown that passwords filtered according to these criteria are not significantly harder to crack than unfiltered ones. The relatively recent advent of rainbow tables – large precomputed tables that let crackers look up password hashes rather than compute each guess from scratch – has revolutionized password management. System and security administrators can now crack a large percentage of passwords in minutes and force users to select better ones. Many organizations do not employ this technology, however, leaving their systems open to traditional simple password attacks.

Exploiting weak passwords has been a major threat vector since passwords were first used. Free technology that allows enforcement of password policy is widely available. So why are there still so many weak passwords? You can lead a horse to water, but you cannot force it to drink.

Dr. Eugene Schultz is the CTO at Emagined Security, an information security consultancy based in San Carlos, Calif. He is the author/co-author of five books, and has also written over 120 published papers.
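A password filter of the kind described above is easy to sketch. The rules below – length, character classes and a small common-word list – are illustrative thresholds only, not any operating system's actual policy, and, as the essay notes, passwords that pass such checks can still be weak.

```python
# Illustrative password "goodness" filter; thresholds and word list are examples only.
import re

COMMON_WORDS = {"password", "letmein", "qwerty", "123456"}

def check_password(candidate: str, min_length: int = 12) -> list[str]:
    """Return a list of policy violations; an empty list means the password is accepted."""
    problems = []
    if len(candidate) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", candidate) or not re.search(r"[A-Z]", candidate):
        problems.append("needs both upper- and lower-case letters")
    if not re.search(r"\d", candidate):
        problems.append("needs at least one digit")
    if not re.search(r"[^A-Za-z0-9]", candidate):
        problems.append("needs at least one special symbol")
    if candidate.lower() in COMMON_WORDS:
        problems.append("appears in a common-password list")
    return problems

if __name__ == "__main__":
    print(check_password("Summer2009"))             # too short, no special symbol
    print(check_password("c0rrect-Horse-battery!")) # passes: []
```

This is why filtering is usually paired with the cracking-based audits the essay describes: the filter blocks the obviously bad choices up front, and periodic cracking catches what slips through.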

On the last 20 years of vulnerability management by Rebecca Bace

When I think of how things have changed (and remained the same) over the last 20 years in the security trenches, it's tempting to come up with a tote sheet for the decades.

On the positive side, the degree to which our modern lives are staged on IT networks is a testament to the ability of the security community to present at least a facade of acceptable risk to the masses. Even as I note that we don't have sufficient acumen to protect what we've put online, I also – without apology – assert that the transparency that vulnerability management solutions brought to the practice of system security (added to IDS's ability to spot problems in the making) had a positive influence on security and the online world.

On the negative side, there's a lot left to do in both the realms of IT and security management at large. We need to better understand how to harden systems in ways that are reliable and cost-efficient. We need to acknowledge that complex systems are inherently imperfect – and include monitoring and control mechanisms to spot when imperfections are being exploited to the detriment of information owners. We need to understand how to flex models of expected behavior to accommodate local norms, while assuring that users can safely function online.

None of these measures will come as easily as we think they should. That does not mean that they aren't critical to our modern world. Perhaps, taking a look at the Japanese auto industry of the 1970s would be useful to us – it's time to focus on improving the quality of IT systems and the management processes associated with their care and feeding. In the best of worlds, this will give us the market edge that finances the next quantum leap in both IT and security.

Rebecca Bace is a security strategist with more than 25 years spent in a variety of roles. She currently focuses on taking early stage firms to market.

Big changes in vulnerability assessment by Ron Gula

During the past 20 years, there have been very big changes in the vulnerability scanning industry. Keep in mind that 20 years ago, vulnerability scanning was really made famous when Dan Farmer released the SATAN tool. Even though he released a script named "repent" to turn SATAN into SANTA, the initial reaction to this type of tool from most managers was one of shock. Twenty years later, we have a thriving and growing vulnerability scanner industry with many competing vendors and technologies.

It started out with a focus on which scanner could enumerate the most vulnerabilities in the least amount of time. Often, these scanners were run by consultants or auditors who were not embedded in IT. As threats became more sophisticated, "enumerating badness" was not good enough. Instead, scanners had to evolve to work within an IT infrastructure and perform robust patch management and configuration auditing. This allowed security auditing tools to speak the language of an IT administrator while still simulating the "hacker threat."

As this usage of the network scanner changed, the type of testing performed by organizations evolved to capture this. I remember helping with some early magazine tests of Nessus and filling out forms that asked: "How many Windows patches do you check?" Waiting until the last day to submit this form made a user's scanner appear as if it checked off more than someone else's. The industry has come a long way since then.

Because of cheap bandwidth, as well as the increase in speed of network vulnerability scanners, the ability to offer scanning as a subscription service also evolved. For a low cost, organizations could procure a one-time, monthly or yearly scan of their perimeter. This allowed the IT administrator at an organization to obtain risk information without having to run their own auditing infrastructure. For organizations involved in web-based e-commerce, this was a very good combination.

And right now, on the verge of 2010, most scanning vendors are looking at how the world of virtualization and cloud computing will change the need for scanning. New combinations of software-as-a-service, passive network monitoring, continuous scanning, and scanning embedded within the cloud are well equipped to offer various forms of auditing of virtual sprawl and cloud-based applications.

Ron Gula is CEO of Tenable Network Security.

The evolution of penetration testing by Matt Hines

As one of the oldest IT vulnerability assessment methodologies invented, yet one of the most rapidly evolving IT security practices today, penetration testing remains a process that continues to mature in direct parallel with the systems and applications it has been, and will be, used to assess.

Over the course of the 1970s and 80s, pen testing was an internal practice, used primarily within military and academic research centers to validate security mechanisms and to corroborate the presence of hypothetical flaws, both in production and R&D computing environments. In the early 1990s, as the seeds for the forthcoming internet and IT revolutions were being sown, penetration tests began to see expanded use in gauging the overall security of many products and services, dovetailing with the arrival of the earliest purpose-built hacking tools and dedicated professional services. However, even with large and specialized consulting firms marketing pen testing audits to their customers, the process was still almost entirely manual, consisting of undocumented and unrepeatable methodologies that relied on the individual experience and skill sets of practicing experts.

Despite being nearly three decades old, penetration testing realized perhaps its most significant advancement with the turn of the century and the arrival of more sophisticated, financially motivated cyberattacks, gaining greater adoption to help manage matters of IT-driven risk. This era also saw the emergence of the earliest commercial automated penetration testing solutions, which have evolved significantly over the last decade and represent one of the most important elements of the practice's continued development.

As today's cybercrime epidemic, specifically electronic data theft, continues to proliferate at a furious pace, penetration testing has finally found widespread recognition outside the realm of specialized organizations and consultants. It is now a central element of proactive IT risk management through the use of both services and an array of rapidly maturing technologies. As concepts of IT risk management and security measurement evolve out of their own nascence, driven in part by regulatory compliance, penetration testing will only become a more pervasive, critical component of those strategies, based on its ability to isolate vulnerabilities directly exposed to real-world attacks.

Before joining Core Security Technologies as marketing manager in 2008, Matt Hines covered the IT industry for over a decade as a reporter and blogger for publications including InfoWorld, eWeek, CNET News.com and Dow Jones Newswires, with a specific focus on the security space since 2003.

BIOMETRICS

The search for the better biometric mousetrap by David Lease

It seems as though we are always looking for the better mousetrap – "nitrogen-enriched" gasoline; all-in-one laundry sheets for the washer and dryer; combination soap, shampoo and shave cream; and other products that make our lives somehow better. This constant search for the next best "solution" is true in biometric security technologies as well.

When I started working on identification technologies in the mid-1980s, we already had a pretty big database of fingerprint cards that were cataloged and organized like library cards. If you had crime scene prints and wanted to identify a suspect through an existing set of prints, we'd manually search through hundreds of fingerprint cards comparing attributes of submitted prints to prints already on file. Over the years, we worked to find ways to make fingerprint-matching faster and more reliable. Today, fingerprints are digitized and the comparison is automated, but it's still nothing like what's on TV or in the movies.

Unfortunately, most biometrics are useful only for authentication or identity verification because the biometric marker must already be known before it can be matched. Additionally, many biometrics lose accuracy under certain conditions. Fingerprint scanning can be disrupted by hand cream (as is often found in hospitals). Facial scans are unreliable if there are changes in appearance (such as shaving off or growing a beard).

So, in our continuing search for the better biometric mousetrap, we've turned to a number of biometric markers which you might not have considered. For example, we've been experimenting with spectral imaging for battlefield IFF (identify friend/foe) systems. Researchers in Japan and the U.S. have been working on body odor as a unique biometric signature. We have developed biometric systems that rely on your typing pattern, the way you walk, the way your lips move when you speak, and the shape of your ears, just to name a few of the more innovative approaches. What this means is that someday soon, I may be writing about how we retired fingerprints for a new biometric identification technology – and how we're looking for ways to improve on that technology as well.

David Lease has been involved in the design and implementation of information assurance products, policies and countermeasures for over 30 years. He is currently developing improved identification technology for clients in the United States and EU.

Evolution of fingerprint-based biometrics by Stan Jamrog

The use of fingerprints as a means of identification has been widespread for many years. In the world outside of IT, fingerprints have been used to identify and convict criminals. It is only natural that the use of fingerprints as a means of authentication for computers and networks would become a mainstay of biometric authentication.

The use of fingerprints for authentication has its own set of challenges. It quickly became apparent that authenticating users with fingerprints presents an entirely different set of challenges than those law enforcement has had to deal with. Faking fingerprints suddenly becomes a very real issue. Early fingerprint scanners could be fooled by a variety of techniques – from lifting and copying fingerprints to simply breathing on the scanner itself. As scanner technology advanced, so did the techniques to fool it, but it is clearly becoming more difficult to fool a scanner.

Other issues have developed as well. Scanners have become more sophisticated and have reduced errors. A good scanner reduces both false positives (giving access to the wrong person) and false negatives (refusing access to the right person). Security professionals must also deal with changes to a person's actual fingerprints. While fingerprints do not naturally change on their own, injuries and other accidents can change them. What happens when an employee cuts their finger, or worse, loses the finger altogether?

Cultural implications have also impacted the use of fingerprints as authentication. Some people simply do not like having their fingerprints on file, feeling it is a violation of their privacy. Yet others do not like having to place their fingers where many others have placed them. With the latest flu scare, this concern has become commonplace. Germs can spread easily from one person to another when they must all come into contact with the same surface.

In any case, fingerprint scanning has become the number one tool when it comes to biometric authentication, and fingerprint scanners are now a lot more common than they used to be. Where once you would expect them only in high security areas, they can now be found on laptops, safes and home doors. Fingerprint scanning has become one of the most accurate and affordable choices for biometric authentication.

Stan Jamrog teaches information assurance and information technologies as an adjunct professor for several colleges. He is a graduate of Norwich University's Masters of Science in information assurance program.

Face and ocular biometrics: speed, accuracy and cost by Terrance Boult

Automated identity biometrics has been around for decades and is regularly reported on in SC Magazine. Adding an important factor, "who the user is," has been gaining interest within the security community. While fingerprints dominate the market, both face and ocular systems are growing rapidly.

Semi-automated face biometrics was first deployed in 1988 and made a big step forward with the Face Recognition Technology (FERET) testing program in 1993 – a data set developed by the Department of Defense and still in use today. Government initiatives, such as DARPA HID [the Defense Advanced Research Projects Agency, the central R&D office for the U.S. Department of Defense] and the EU BITE (Biometric Identification Technology Ethics) program, continue to advance the field. For example, HID pushed both face and ocular biometrics to operate at greater distances and under less-controlled conditions, leading to improved products and further research. There are many good reviews of the technology, and it is essential to keep up with the latest news as the field continues to rapidly advance.

Increasing capabilities and decreasing costs have led to the continuing, expanding use of biometric systems. Face biometric accuracy, as measured in government tests from FERET to the National Institute of Standards and Technology's Face Recognition Vendor Tests (FRVT) and Face Recognition Grand Challenge (FRGC), has shown dramatic face-recognition improvement and increasing flexibility of use. Ocular biometrics, measuring the iris and retina, are used in significant deployments. Reported accuracy for ocular biometrics has been high, but limitations exist in usability. NIST's Iris Challenge Evaluation (ICE) has also challenged some of the accuracy claims of the vendors.

Speed, accuracy and cost have been improving significantly, but biometrics still have a commonly discussed area of concern – privacy – limiting some deployments. While biometrics had a generally negative impact on privacy in the beginning, recent advancements are improving the privacy outlook. Embedded biometric-enabled devices for protecting small hand-held portables, from USB sticks to iPhones, are growing. Personal biometrics to protect a laptop can be easier to use, and more secure, than common passwords. Because biometrics are not shared, they can improve privacy. The privacy side has also advanced in general biometrics with transform technologies that convert biometric data into some type of revocable token that cannot be converted back to the original biometric data.

Biometrics technologies, once the domain of high security and government programs, are entering the multibillion-dollar mainstream markets – from convenient PC/laptop login tools to securing mobile data to a wide array of access control and time and attendance products. It continues to be a rapidly changing field with significant growth potential.

Terrance Boult is El Pomar professor of innovation and security at the University of Colorado at Colorado Springs, and he is CEO/CTO at Securics.
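The revocable-token idea can be shown with a deliberately simplified sketch: store only a keyed hash of the enrolled features, and revoke a template by issuing a new key. Real cancelable-biometric schemes must also tolerate the measurement noise between two samples of the same finger or face, which this illustration ignores; the feature encoding and key handling here are invented for the example.

```python
# Simplified illustration of a revocable ("cancelable") biometric template.
# Real schemes must handle noisy, non-identical samples; this sketch assumes exact features.
import hmac, hashlib, secrets

def make_template(feature_bytes: bytes, user_key: bytes) -> bytes:
    """Store only a keyed hash of the features; the raw biometric data is never kept."""
    return hmac.new(user_key, feature_bytes, hashlib.sha256).digest()

def matches(feature_bytes: bytes, user_key: bytes, template: bytes) -> bool:
    return hmac.compare_digest(make_template(feature_bytes, user_key), template)

enrollment_key = secrets.token_bytes(32)            # revoking a template = issuing a new key
template = make_template(b"minutiae:17,42,88", enrollment_key)
print(matches(b"minutiae:17,42,88", enrollment_key, template))            # True
print(matches(b"minutiae:17,42,88", secrets.token_bytes(32), template))   # False after revocation
```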

Vein recognition biometric systems by Joy Kasaaian

Vein recognition systems can be used to identify individuals or to verify identity based on the vein patterns on the human finger or hand. Vein recognition technology was developed in Japan in the 1990s and was deployed on a large scale in the early 2000s. The technology has seen widespread adoption in East Asia. In 2004, the Bank of Tokyo-Mitsubishi, one of Japan's largest banks, deployed 250 vein recognition systems in its ATMs, and now has more than 5,000 systems.

In 2006, the International Biometric Group conducted the first independent test of vein recognition technology in the Western Hemisphere. Comparative Biometric Testing, Round 6, from the International Biometric Group (IBG), demonstrated that vein recognition technology is highly accurate in 1:1 applications, and also has relatively low failure-to-enroll rates. Since then, vein recognition has been deployed in a number of physical access control and time and attendance settings across the United States and Canada. Carolinas HealthCare System in North Carolina is using vein recognition technology for patient check-in, while Bates County Memorial Hospital in Missouri uses vein recognition technology to monitor the time and attendance of hospital staff. The Port of Halifax, Canada's largest port, is using vein recognition for employee access control. Additionally, Pearson VUE, a testing company, deployed vein recognition in 2008 to verify the identity of GMAT test-takers.

Hitachi and Fujitsu are considered the most established vendors in the vein biometrics market. While vein biometrics are more costly than the majority of fingerprint scanners and are not yet proven in a 1:N matching environment, deployers of vein biometrics choose the technology for its ease of use, accuracy in 1:1 matching, low failure-to-enroll rates, resistance to spoofing, and the lack of stigma attached to the technology – unlike fingerprinting. Also, vein biometric sensors may be housed in a variety of form factors and deployed in a number of environments. Overall, IBG predicts that vein biometric revenues will more than double over the next five years.

Joy Kasaaian is a consultant at International Biometric Group (IBG) and is a leading expert in the use of biometrics in public and private sector applications.

DIGITAL FORENSICS

Digital forensics: 20 years on by Mark Pollitt

Digital forensics may be older than many realize. Donn Parker wrote about electronic evidence in the 1960s, and both the IRS and the FBI trained a few agents to extract evidence from corporate and government mainframe computers. But, as with avant-garde artists, the world paid little attention to these pioneers. No one could have anticipated the revolutionary impact of the personal computer and the internet.


By the late 1980s, when this magazine was founded, a few enthusiastic law enforcement officers and a handful of computer security folks began exploring ways in which computers could be used to commit or facilitate crimes and how to extract evidence of crime from them. This was then known as computer forensics. The term was telling – the focus was on the machine, not the information or its connections. After all, there weren't many networks that criminals could access.

The 90s changed all that. The decade began with character-based DOS and dial-up modems connecting to AOL and Prodigy, culminating in the dot-com explosion and Y2K. The latter may have been a technical "non-event," but its cultural impact – the universal appreciation that computers affected all of our lives – is still under-appreciated. No one understood that better than the law enforcement forensic folks who suddenly were in great demand. Child pornography and online crime were the growth sectors for law enforcement, and computer forensics would play a pivotal role.

The new millennium brought new challenges. The collapse of Enron and other corporate giants resulted in legislation mandating corporate accountability. Information security would thrive in this new environment, and one of its new tools would be digital forensics. Sept. 11, 2001, would shock the world into the recognition that digital technology truly does bind us together – for good or ill.

As we approach the end of the first decade of the millennium, forensic examinations of audio, video, image, mobile devices, networks and computers are routine. Electronic discovery has opened a new vista, while professionalism – in the form of accreditation, certification and licensing – is establishing a new chapter in the evolution of digital evidence.

Mark Pollitt served as a special agent for the Federal Bureau of Investigation for over 20 years. At the time of his retirement, he served as chief of the Computer Analysis Response Team and director of the Regional Computer Forensic Laboratory National Program Office. He currently serves on the faculty of the National Center for Forensic Science at the University of Central Florida.

Computer forensics in digital investigation: 20 years by Christopher Brown

One of the first things that comes to mind when evaluating the advances in digital investigation over the past 20 years is the advent of PC-based networking, followed closely by the public internet explosion of the early 1990s. Of course, these two events were only a catalyst in creating a demand for change.

The first piece of information stored in digital form opened up the potential need for investigation, but until networking, digital investigations were generally single disk-focused endeavors performed by "the computer techie" or auditors using re-purposed tools, such as hex editors and other general purpose software. One of the first documented computer hacking cases, chronicled in The Cuckoo's Egg by Clifford Stoll (1990), was the network hacking of computers located at Lawrence Berkeley Laboratory in California. From that point on, computer security and digital investigations would never be the same.

Over the next 20 years, digital investigations would mean many things to many people. Computer forensics practitioners came from many source professions – such as system administrators, law enforcement and legal service providers – bringing with them a wide array of education, capabilities, methods and tools. Today's computer forensics practitioners are recruited from the best of the many source professions. These individuals possess sophisticated capabilities, including a nose for investigations offered by law enforcement, the deep understanding of digital systems and networking offered by computer incident response personnel, and a respect for legal and forensic sciences offered by a traditional forensic science education.

The formalization of education, training and the computer forensics profession in general has led to an explosion of tools available to the profession. Not only is today's software light years ahead of the original hex editors and re-purposed software, specialized hardware is now commonplace in the market. Competition within the market continues to foster research and development, as well as drive continuous improvement in standardized methodologies.

Today, computer forensics is also at the forefront of our legal and criminal justice system. In 2008, the American Academy of Forensic Sciences added a "digital & multimedia sciences" section alongside traditional fields like toxicology. Without a doubt, digital investigation, performed by computer forensics professionals, is here to stay and will continue to evolve as innovations continue to meet the demand of this expanding marketplace.

Christopher Brown is the founder and CTO of Technology Pathways, where he focuses on computer forensics and digital investigation software. He has authored numerous books on the internet and computer security. His most recent book, Computer Forensics: Collection and Preservation, is published by Charles River Media.


Digital forensics on the network by Chester Hosmer

Collecting network evidence has changed in revolutionary ways over the past decade. Obvious changes – in bandwidth; wired versus wireless; protocol enhancements, such as IPv6; the ubiquitous adoption of VoIP; and improved authentication and encryption – are just a few examples. Once you move above the network layer, however, the world is quite different today than even five years ago, especially in the way users view the network. Peer-to-peer sites and social networking have changed the way we work, exchange information, communicate and even think. Unfortunately, this also has provided new ways for criminals to communicate and attack our infrastructures. Botnets have invaded peer-to-peer and social networks in such force that our ability to even predict the attack scenarios is threatened.

As we approach 2010, our focus has turned once again to intelligent systems to help identify and analyze network evidence and connect the dots between seemingly unrelated events. We have turned our attention not only to discovering aberrant behavior from the outside looking in; we are also seriously collecting evidence within our networks to rapidly detect the accidental or malicious exfiltration of information via enterprise infrastructures. In addition, we are developing preemptive solutions that will both collect and reason about network activity in order to make critical decisions regarding what to collect, audit, monitor and/or shunt.

You might say this sounds more like network defense than network forensics, and you may be right. However, when considering the volume, diversity, connections, content and speed of network traffic today, the collection and real-time analysis of the relevant or actionable evidence from the network is a necessity. Additionally, digital forensics today is moving beyond the courtroom. Digital forensics on the network can be applied to intelligence gathering, digital incident response and network failure analysis. The use of forensic techniques has begun to evolve to a point where these applications join traditional analysis of criminal activities and vie for the attention of digital forensic scientists and analysts. Some would say that digital forensics still is in its infancy, and there is a case to be made for that, but today there is a multitude of critical activities that take place on the network that invite analysis using forensic techniques.

Chester Hosmer is senior vice president, cybersecurity division, Allen Corporation of America.


Cyber investigation: Just starting to grow up by Austin Troxel

The past 20 years have seen the birth and growth of cyber investigations as a recognized specialty. Like any young adult, it has its awkward moments and has some growing to do. In spite of its youth, though, cyber investigation, or digital forensics (we still aren't quite certain what to call it), has made a name for itself.

In the late 1980s, we practitioners were basically PC technicians who often were trying to recover data that either we or a co-worker had inadvertently deleted. We had some utilities to copy data to different media, as well as some hexadecimal viewers that allowed us to reconstruct digital files in an agonizingly tedious process. We had no concept of "forensically sound" or "repeatable methodology." We just wanted to get what we could as fast as we could.

As the years progressed, our tools got better and we became more efficient. Cyber investigation came into its own. Since 2000, the field has made essential contributions in a number of high-profile criminal cases, including those involving Chandra Levy, Martha Stewart and, perhaps most notably to date, Dennis Rader, the convicted BTK killer.

In the past few years, cyber investigation techniques and concepts have become part of our modern culture. Attorneys regularly speak of "e-discovery" and "preserving metadata." Suspicious spouses seek "keyloggers" and ask examiners to retrieve deleted emails or chat logs. Corporate executives are now concerned with what constitutes "electronically-stored information" and its retention. Popular TV shows, including CSI and NCIS, regularly feature a white-coated technician having an "a-ha!" moment as they discover a critical piece of case-solving data extracted from a seemingly destroyed digital device.

Examiners joke about the lack of a "Find Evidence" button on our workstations. Of course, we know there never will be one. Our tools may be ever-improving, but we must always bear in mind that better hammers, saws and ladders do not build houses. Skilled men and women do. Cyber investigation, in sum, is about the continuously evolving skills and acquired instincts of its practitioners. I can't wait to see what the next 20 years will bring.

Austin Troxell is a licensed private investigator whose practice is limited to digital forensics. He is the owner of Cyber Investigation Services in Woodruff, S.C.


EMAIL

Email security by William Stallings

Electronic mail (email) is a one-way transaction. Although email messages are frequently answered, each message transmission is a unique, standalone event. For email transactions, there are two main security concerns: privacy and authenticity. A user may want to ensure that a message they send can only be read by the intended recipient (privacy), and a recipient of a message may want assurance that the message came from the alleged sender and that the message has not been altered en route (authenticity).

The earliest comprehensive approach to email security was Privacy Enhanced Mail (PEM), which was first issued as an internet Request for Comments (RFC) in 1987. PEM had all the ingredients found in modern email security standards, including encryption and authentication services based on the use of public-key certificates, digital signatures and symmetric encryption of the email contents.

Widespread use of email security functions began with the introduction in 1991 of Pretty Good Privacy (PGP) by Phil Zimmermann. As with PEM, PGP combines confidentiality and digital signatures in a powerful, easy-to-use package. Zimmermann made PGP available as freeware on a wide range of platforms. Commercial versions, with product support, are now also available.

While PGP appeals to individual users, corporate and government users were interested in adopting a standardized email security package. PEM was a candidate, but the certificate authority (CA) hierarchy specification was difficult to implement. Instead, the Internet Engineering Task Force (IETF) incorporated PEM functionality into a new standard based on the Multipurpose Internet Mail Extensions (MIME), known as S/MIME. S/MIME was originally developed by RSA Data Security, and the first RFC for version 2 of S/MIME was issued in 1998. S/MIME provides the same functionality as PGP and PEM, and introduces an easily implemented CA hierarchy facility. Both PGP and S/MIME are mature specifications and widely used.

With these specifications stabilized, attention has turned in recent years to other aspects of email security, notably resistance to spam and authentication of mail servers. The most noteworthy such effort is DomainKeys Identified Mail (DKIM), a specification for cryptographically signing email messages, permitting a signing domain to claim responsibility for a message in the mail stream. Message recipients (or agents acting on their behalf) can verify the signature by querying the signer's domain directly to retrieve the appropriate public key, and thereby can confirm that the message was attested to by a party in possession of the private key for the signing domain. DKIM is a proposed internet standard and has been widely adopted by a range of email providers, including corporations, government agencies, Gmail, Yahoo! and many internet service providers (ISPs).

Bill Stallings is the author of numerous textbooks, most recently Cryptography and Network Security, Fifth Edition (Prentice Hall, 2010). He also maintains the Computer Science Student Resource Site at WilliamStallings.com/StudentSupport.html.
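The two services described at the top of this piece – privacy and authenticity – are combined in PEM, PGP and S/MIME by signing a message with the sender's private key and encrypting the contents with a symmetric key. The sketch below illustrates that general pattern using the Python cryptography package; it is a conceptual outline only, not an implementation of any of those standards, and certificate handling, key distribution and message formatting are all omitted.

```python
# Conceptual sign-then-encrypt sketch; not PGP, PEM or S/MIME themselves.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Sender's long-term signing key pair (in practice, bound to a certificate or web of trust).
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
verify_key = signing_key.public_key()

message = b"Wire the funds on Friday."

# Authenticity: sign the message with the sender's private key.
signature = signing_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Privacy: encrypt message plus signature with a fresh symmetric session key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(message + b"||" + signature)
# In PGP and S/MIME the session key itself is then encrypted to the recipient's public key.

# Recipient side: decrypt, then verify the signature (verify() raises if anything was altered).
plaintext, _, received_sig = Fernet(session_key).decrypt(ciphertext).partition(b"||")
verify_key.verify(received_sig, plaintext, padding.PKCS1v15(), hashes.SHA256())
print(plaintext.decode())
```

DKIM follows the same verify-against-a-published-public-key pattern, except that the verifier fetches the signing domain's public key from DNS rather than from a certificate.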

Looking back at email encryption by Phil Dunkelberger

I'm often asked, "Why should I bother to encrypt my email? Aren't all of the communications links over which it flows already secure?" The short answer is, "Probably, and it doesn't matter, because your email is still vulnerable." It is when email is "at rest" that it is most vulnerable. The average email sits quietly (and vulnerable) on any number of devices while it is being crafted, transmitted and stored awaiting delivery to its destination. And it sits virtually forever on the systems to which it is finally delivered, stored and archived.

When we re-started PGP Corporation in 2002, we learned three key lessons about securing enterprise email and the confidential information it contains.

First, no one policy or technology approach is right for all organizations. For some, enforced end-to-end encryption is the best way to go. For others, encrypting email based on recipient, content or some other variable at the gateway is the best approach. For some enterprises, leaving individual users in control of what does and doesn't get encrypted is most appropriate. The key lesson here, though, is that for most enterprises, the best policy is a combination of these approaches based on function, content and other unique cultural variables that all enterprises have.

Second, you cannot mess with the sender or recipient user experience – ever. For most users, this means burying the encryption technology in the enterprise infrastructure (or desktop environment) in a way that the user doesn't even know it's there. For others, though, it means providing very granular control of encryption policy to the end-user.

Third, we also learned that you cannot mess with the operational experience from an IT perspective. It has to fit into all of the existing policy management, key management and audit requirements that exist in all modern enterprises.


Beyond this, however, there is a larger lesson about the role email plays in most enterprises: It is just the tip of the iceberg when it comes to managing content security. Once email is secured, the very next question that must be addressed is, "What about shared storage, laptops, memory sticks, smart phones and all of the other devices on which confidential information now resides?" So, while email security policy may beget the first key management policy, it is only the beginning of the story when it comes to enterprise-wide data security. Additionally, with more and more enterprises looking at moving storage and processing to the cloud, securing email at rest is as important as ever. Protecting confidential data from unauthorized access, and having a policy in place to safeguard the keys that allow that access, are two challenges organizations will need to overcome.

Phil Dunkelberger, CEO and president, PGP Corporation, is a founding board member of the Cyber Security Industry Alliance (CSIA) and serves on the TechNet CEO Cybersecurity Task Force.

The evolution of instant messaging security by Herb Joiner

It was a little less than a decade ago that the internet was all about America Online (AOL), Prodigy and CompuServe, and ICQ was emerging as the first public instant messaging (IM) program. Used primarily by the geeks among us, IM wasn't something you worried about securing – it was a few characters of chat, nothing more. If you did worry about security, you simply closed a few ports on the firewall, blocked access to certain URLs and that was it.

Today, IM has not just grown up, it's bringing a whole new family to the party. Dominated by Skype, Windows Live, Twitter and GoogleTalk, these evasive tools also offer VoIP, file transfer, video and screen sharing. Combine that extra capability with the sheer number of internet users, which has skyrocketed from 360 million in 2000 to an estimated 1.6 billion today, and you see the scale of the task IM security faces. Alongside this growth of publicly available tools, the real benefits of real-time communications and collaboration tools like IM have driven deployments of enterprise-grade unified communications (UC), such as Microsoft Office Communications Server, IBM Lotus Sametime and Cisco's offerings. The convergence of these enterprise UC deployments with consumer applications brought into organizations, generally by the end-user (with or without IT's blessing), means the risk multiplies.

IM security today has moved from simple protection of content and tracking of what chat was transferred to virus checking, anti-malware checks on URLs sent within IM chats, and content checking on files transferred. FaceTime customers, who include the top 10 U.S. banks, demand the setting of ethical boundaries, lexicon-driven checking to prevent information leakage, and other such capabilities as required by the U.S. Securities and Exchange Commission, the Financial Industry Regulatory Authority (FINRA) and a host of other regulatory compliance bodies.

What next then for IM security? The days of blocking are long gone. Organizations desire the benefits of collaboration – and so IM security must be part of the enablement. With IM now an integral element of Facebook and other social networks, and the network perimeter dissolving fast, it's to this new social phenomenon that security must turn its attention, and quickly. It's a real-time world of risk out there. Be sure to stay safe.

Herb Joiner brings over 25 years of executive management and architecture engineering experience in the software industry to his role as vice president of engineering for FaceTime. Joiner has six networking patents in his name with an additional five in submission with the U.S. patent office.
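A toy illustration of the lexicon-driven checking described above: scan outbound chat text against a list of restricted terms before it leaves the network. The terms and function are invented for the example; real products use far richer dictionaries and context rules.

```python
import re

# Hypothetical lexicon of phrases a regulated firm might not allow in IM chat.
LEXICON = [r"\bguaranteed? returns?\b",
           r"\binsider? information\b",
           r"\baccount number\b"]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in LEXICON]

def flag_message(text):
    """Return the lexicon patterns that an outbound IM message matches."""
    return [p.pattern for p in PATTERNS if p.search(text)]

hits = flag_message("I can get you guaranteed returns, just send your account number")
print("blocked" if hits else "allowed", hits)
```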

How SMS security has evolved by Sean Moshir

With over one trillion text messages sent globally in 2008, there is no doubt SMS is the most popular communication channel on the handset. Its simplicity and affordability make it the obvious choice for the quick and easy transfer of information via the mobile device. However, because of a lack of security, the SMS channel has not been fully used by businesses and organizations. That lack of security has been exposed by flaws found in the last 18 months across all major mobile operating systems, namely iPhone OS, Symbian, Windows Mobile and Android. SMS spoofing, SMS phishing, SMS worms and SMS hijacking are all examples of the way the SMS protocol is being exploited to initiate cyberattacks in various areas of the globe.

Why is the SMS protocol so vulnerable? One of the primary reasons is that it is always on. Additionally, the SMS User Data Header is a point of vulnerability because it allows new functionality to be built on top of the SMS structure, such as large/multipart messages. It also allows for a new set of attacks because it sits above the SMS header layer and can easily be pushed onto the carrier network. SMS attack techniques exploit the fact that many behind-the-scenes administrative messages are sent from the carrier to the phone. These messages can be forged by attackers because there is no source checking or cryptographic protection on them. If an attacker constructs a validly formatted message, the phone will usually treat it as legitimate. Further, testing security scenarios around SMS can be expensive, and global carriers and other players along the mobile value chain have not yet been willing to expend the necessary resources to secure the channel.

What does the future of SMS security look like? We believe that as more and more hackers get their hands on SMS and its internal configuration and protocol, there will be more and more such attacks. Text message worms will become more common. SMS hijacking will increase as cybercriminals attempt to impersonate others. SMS eavesdropping will increase significantly, and the channel will become an easy target when used to transmit more and more critical and sensitive information. The reaction will be to secure the SMS channel by using SecureSMS with end-to-end encryption, whereby the SMS cannot be spoofed or tampered with and the sender is authenticated. This ensures that the handset or carrier credentials cannot be manipulated, and that text messages cannot be intercepted and altered, thus securing this high-growth and cost-effective mobile channel.

Sean Moshir is the CEO and chairman of CellTrust, as well as founder of PatchLink Corporation (now Lumension). Over the last two decades, Moshir has led several industry-changing technology initiatives, including the world's first network management language, the first network anti-virus VAPs for Central Point Software, and other sophisticated network tools and system management software programs.
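To illustrate the end-to-end model sketched above, here is a minimal example using the third-party cryptography library's Fernet construction, which both encrypts and authenticates a short text payload so it cannot be read or silently altered in transit. The key handling is deliberately simplified and this is not CellTrust's SecureSMS implementation.

```python
from cryptography.fernet import Fernet, InvalidToken

# In a real deployment the shared key would be provisioned and protected on
# both handsets; generating it inline here is purely for illustration.
key = Fernet.generate_key()
channel = Fernet(key)

# Encrypt-and-authenticate a short text payload end to end.
token = channel.encrypt(b"Your one-time passcode is 431297")

# The recipient decrypts; any tampering with the token raises InvalidToken.
try:
    print(channel.decrypt(token).decode())
except InvalidToken:
    print("message rejected: failed authentication")
```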

NETWORK & PLATFORM SECURITY

20 years of network and platform security by Greg Brown

Twenty years ago, network security was done, for the most part, by separating networks. There were failed government projects, like Blacker, but the vast internet was open at the network level. Then, in 1994, William Cheswick and Steven Bellovin published Firewalls and Internet Security. This book defined the prevailing mentality that vulnerable services, and vulnerability in general, are the source of security breaches. A classic paper, "Improving the Security of Your Site by Breaking Into It," by Dan Farmer (Sun Microsystems) and Wietse Venema (Eindhoven University of Technology), shined a further spotlight on defending against security breaches.

Vulnerability in code is caused by error. By reducing the amount of code in a network security device (i.e., a firewall), one can reduce the amount of error and, therefore, its vulnerability. This idea of "zero-defect code equals security" still persists today in patch management and proxy firewalls. But removing vulnerabilities is only part of a security solution. Firewall vendors then took a step back for speed and ease of use, creating stateless and stateful firewalls. These were less secure, but easier to deploy and maintain. The balance between speed and latency on one side and security on the other remains an ongoing decision point in the industry.

Over the years, network platforms have been moving back up the network stack. Intrusion detection systems in the mid-90s started to examine protocol content. Then came the move from network-layer to application analysis. Now, it is about the content. Devices are reviewing the information being exchanged, as opposed to protecting the services exchanging the information. In short, network platforms now are required to process and evaluate content, such as mail, files, audio and video. These examinations are not just for attacks, but for policy, misuse, data exfiltration and viruses.

Network devices can no longer make their decisions in a vacuum. They are now being integrated with other security components, including network switching and host-based security applications. As information security has matured, the network platform has become a cost-effective complement to host-based security solutions.

Greg Brown, senior director, product marketing for McAfee Network Defense, sponsored industry-leading advances in network security integration with McAfee's systems and risk management product lines.

Two decades countering the threatscape by Ken Xie

I've always believed that network security follows, and probably always will follow, the growth and adoption of internet applications. Over the past 20 years, we've seen this play out. Prior to widespread internet use, securing IT assets was much easier as networks were closed. Information was largely spread via floppy disks and, thus, threats were minimal. IT security was fairly simple and infinitely easier to manage than it is today.

The emergence of the internet in the early 1990s dramatically changed security requirements by opening up networks and making them vulnerable to new threats. With all the benefits that internet-based applications enabled, the security risks they created were also plentiful. Network security has been challenged with having to adapt to and protect against a constant barrage of new threats while keeping pace with vastly increasing network performance.

As the internet went mainstream, it created the need for connection-based security – affecting network Layers 3 and 4. As a result, software firewalls and VPNs were adopted as the primary method for securing networks. As network speeds increased, hardware-accelerated firewall/VPN security appliances began to gain market acceptance, largely due to cost: appliances with pre-installed software were less expensive than general-purpose systems that required an additional installation process. As new internet applications were adopted and introduced content-based threats (email, web, etc.), Layer-7 inspection was required, and this further taxed network performance.

The market, at this time, was flooded with point solutions aimed at addressing only individual parts of the problem (AV, IPS, antispam, firewalls, URL filtering, etc.). It became clear that single point solutions couldn't address the growing sophistication of blended attacks and that they were complex and expensive to manage and maintain. The industry required an integrated network and content security solution to keep up with new and emerging threats without crippling network performance or administrative resources. This gave rise to the unified threat management (UTM) market, which has quickly become one of the largest and fastest growing markets in the security space.

Internet applications have driven the most revolutionary advances to network security over the past 10 years, including the shift from software-based security to security appliances, integration of multiple network and content security functions into a single UTM appliance, implementation of dedicated content and network processors to accelerate performance, and security deployed as a service. The security landscape continues to evolve because applications that touch the network change every day, creating new opportunities for attack. Hackers won't cease in their inventiveness with ever-morphing attack modes. As a result, the security industry cannot let up in innovating new and better solutions that improve security, efficiency and performance.

Ken Xie, Fortinet founder, president and CEO, has more than 20 years of technical and management experience in the networking and security industries.

Unix security in a nutshell by David Land

Unix was born, if you will, in 1969 at AT&T Bell Labs. Since then, it has undergone many changes, by both hackers and developers, resulting in many truly unique Unix operating systems. In 1993, FreeBSD began its trek toward bringing a viable and free operating system to the community, an effort that would take several years as there were many stumbling blocks. Today, the many Unix and Unix-like variants differ in some key areas: how developers opted to construct the base operating system and the intent behind their implementations. Unix and Linux have grown in popularity and assert a very significant presence in today's technology security arena.

Early versions of Unix were developed with functionality in mind. As more and more threats appeared, Unix, Linux and other operating systems refocused their efforts on better security. Not stopping there, developers recognized the need to come up with security and administrative tools, allowing systems administrators to better manage the operating system. In the mid-90s, newer tools were developed with the intent of better securing the computing environment. Among these were, according to NIST, internal vulnerability scanning, patches and replacements, advanced authentication, password-enhancing tools, password-breaking tools, and access control and auditing tools. Additionally, logging tools and utilities, intrusion detection tools, system status reporting tools, and mail security tools often got their starts on a Unix box. Finally, packet filtering tools, firewall tools, real-time attack response tools, encryption tools and host configuration tools round out the list of security-related solutions that use the Unix platform as a starting point. Today, Unix often serves as the basis for purpose-built versions of these tools.

Likely the most secure Unix variant to be developed is OpenBSD, as it was designed from the beginning with security in mind. The NSA took on the task of securing Linux by developing SELinux, which was merged into the mainline kernel in 2003, and went as far as to make it open source to elicit enhancements that NSA developers might otherwise have overlooked. A true first for the super-secret government agency. As newer versions of Unix and Linux continue to evolve, more highly effective security tools are being developed. Couple these tools with robust Unix and Linux operating systems and you have the makings of truly secure computing.

David Land is a retired Army counterintelligence special agent and investigator. A graduate of the Norwich University MSIA program in 2004, David has served as an adjunct professor for Norwich University and Virginia College, and was a contributing author to the Computer Security Handbook, 5th Ed.

Network security maturing to meet today’s challenges by Marcus Ranum

Looking at information assurance from the perspective of network security, I think that we've seen adequate confirmation of the old-school security practitioner's dictum: "You can't solve host security problems at the network level and vice versa." When we started building firewalls, we knew they weren't a silver bullet. The only real solution was going to be a lot of hard work. But we've seen progress in network security, and the most interesting facet of it is how security has mined virtually every bit of useful information it can to improve our network-level defenses.

Back when I started, all we had were firewalls – and all they had to protect were about five different application protocols. Now, we have firewalls, intrusion detection sniffers, intrusion prevention systems (really, a marriage of a firewall and an intrusion detection engine), web firewalls, passive vulnerability monitors and network-level log data collectors. To add to all the fun, web applications became increasingly complex and sophisticated (read: insecure), and the firewalls were usually configured to let HTTP traffic through. Once we got past the basic firewall, the trend in network security has been consistently to move up toward Layer 7 in the stack. That is, the trend in network security has been to become more of a host security solution.

To me, the success of network layer security devices is a big red flag, on which is written: "We still stink at writing secure software," and also, "We don't understand system administration, either." So, can I say that the overall trend in network security has been for it to try to fix the obvious gaps in software and host security? Network security has been getting very good at doing something that isn't really network security anymore.

What impresses me, and even gives me a small glimmer of hope, is that the industry seems to realize that collecting and using lots of information is crucial, and supporting an effective management workflow is the name of the game. Nowadays, we see systems that meld vulnerability management, network level protection, log analysis and system configuration/policy compliance into a complete picture. That's the beginning of an effective security management paradigm.

Marcus Ranum is CSO of Tenable Network Security.

Windows security by Randy Franklin Smith

Windows security has come a long way over the past 20 years, but the changes have been as much about attitude as functionality. The strides in functionality came first, though. Released 20 years ago, Windows 3.0 is widely accepted as the real beginning of Windows, but it had no security. In fact, Windows wasn't really an operating system until Windows NT 3.51 was released about 15 years ago. NT's architecture and main components are still the core of today's Windows Server 2008 and Windows 7. Dave Cutler designed NT from the ground up with security deeply embedded into the system, instead of as an afterthought. However, careless coding practices, the ubiquity of Windows and a lack of awareness and responsiveness at Microsoft combined to give Windows NT a horrible reputation in the late 90s that endured into the early years of the new millennium.

I remember in particular two ultra-low points in Windows security over the years. Major security hole discoveries (involving the low-level code as opposed to the design) in 1996 and 1997, and shrink-wrapped applications designed to exploit them, such as RedButton, getadmin and L0phtCrack, gained huge media attention and derision from industry pundits. But, again in 2001, confidence in Windows security was at such a low point, this time largely due to Internet Information Server vulnerabilities, that Gartner recommended that companies start looking at Linux as a replacement. This got Microsoft's attention and led Bill Gates – the man who back in 1995 said, "There are no significant bugs in our released software that any significant number of users want fixed" – to reverse that stance in 2002 and launch the Trustworthy Computing Initiative.

At least Microsoft today is willing to change. In the 90s, Microsoft was so blasé about security that security gurus like Russ Cooper had to design a website that constantly polled Microsoft's FTP site for new patches so that the community could be notified by email. Today, Microsoft leads the industry with a monthly, predictable and highly choreographed patch cycle.

The destination-less journey of security continues for Windows as for all software. With the entry of organized crime and the progress Microsoft has made against protocol-based buffer overflows, exploits have evolved into new shapes, like malformed files used to target specific companies for financial gain instead of hacker notoriety. But Microsoft needs to change again. Right now, it's too focused on operating system integrity when the real focus of the bad guys has shifted to business and consumer data. Windows, you've come a long way, baby – but you've still got a long way to go. And that will never change.

Randy Franklin Smith is CEO of Monterey Technology Group and publishes UltimateWindowsSecurity.com.

WIRELESS

Wireless security evolution by Kaustubh Phanse

Finding technical solutions to security problems is easy; persuading users to embrace those fixes and to exercise self-discipline is much harder – and there is no better example of this than the story of wireless security's evolution.

Ten years ago, wireless LAN security was equated with encryption of data over Wi-Fi (802.11) links. Wired Equivalent Privacy (WEP) was the first Wi-Fi encryption standard. The goal of WEP was to provide confidentiality, access control and data integrity. Although WEP failed on all three technical fronts, its footprint continued to grow because it was easy to use. The failure of WEP quickly led to the definition of a new wireless security standard, namely IEEE 802.11i or Wi-Fi Protected Access (WPA/WPA2), which consisted of improved encryption and mutual authentication techniques. However, widespread adoption of WPA/WPA2 didn't start until 2007, when the realization that WEP could indeed be cracked in less than five minutes really hit home. In the meantime, open and WEP Wi-Fi networks saw unprecedented growth in enterprises, airports, hotels, universities, homes and retail stores.

The very drivers of Wi-Fi's popularity – ease of use and low cost – are now turning out to be the major challenge for IT security administrators. Odds are very high that most enterprises today already have many unsecure Wi-Fi devices present in their facilities – laptops, smart phones, printers and portable access points. Such devices, when connected to an enterprise network, can open backdoors into that network, compromising all wired security measures. Launching hack attacks through these backdoors is easier than ever before. As a result, Wi-Fi security today requires going beyond WPA2 to deploying a second layer of defense in the form of a wireless intrusion detection and prevention system (WIPS) for shutting down backdoor access.

After lessons learned over the last 10 years, the industry has finally arrived at a workable two-layer solution which makes security manageable and easy to use and – more importantly – provides cover for the lax security discipline common among Wi-Fi users. How many more years before most Wi-Fi deployments will be fully secure? Your guess is as good as mine.

Dr. Kaustubh Phanse, senior manager and wireless architect at AirTight Networks, is an expert in the field of wireless technologies and provides guidance to enterprises on regulatory compliance and wireless security best practices.

Evolution of mobile computing by Ozzie Diaz

Wireless is great. Wireless is flexible. Wireless gives us all the freedom we enjoy in the workplace and at home – anywhere, anytime and on practically any device. But the instinctive focus on the freedom of an "always connected" lifestyle also blinds us to its perils and inherent security risks.

Cellular services arrived in the 1980s, and they were expensive and limited to the business traveler or those who could afford the "bricks" or "bags" along with the services. From its infancy into the early 1990s, in the pre-Wi-Fi days, wireless local area networking technology was mostly proprietary and used for specialized military, industrial and niche commercial applications, such as factory and building automation. Since few information systems connected to these technologies, there wasn't a need for security, encryption or hardcore authentication protocols. Those days are gone.

Two principal dimensions contributed to changing the requirements for wireless security: scale and standards. Moore's Law progressed from the days of the first PCs, laptops and cell phones, and today more than four billion mobile phones are being used around the world. Standards have contributed equally, making these technologies commonplace across the IT and cellular equipment industries. The IEEE 802.11a/b/g/n standards are as pervasive as the power supply for laptops worldwide.

So what has changed since the early days of cellular services? First, every IT network can now be connected wirelessly. Every employee can acquire a simple Wi-Fi access point to plug into their company's IT network. And every worker carries an "always connected" mobile phone that has more computing power than the first PCs. Second, perimeter security is an ancient concept in the new era of "always connected." Access control, authentication, encryption and other security measures are chasing their tails to close the holes created by wireless networks and mobile devices.

As the wireless industry evolves, the two security best practices of physical and logical perimeters should merge. Because of the breakdown of perimeter-based security due to wireless technologies, location-based security and access control will usher in a new dimension once thought unachievable.

Ozzie Diaz, CEO, AirPatrol Corp., was previously CTO for wireless and mobility for HP. He is a seasoned technology executive with more than 20 years of experience in management, marketing, business development and leadership.

Wireless intrusion prevention systems by Amit Sinha

In 1999, the introduction of Wi-Fi certified wireless LANs (WLANs) created a new avenue for network security breaches, circumventing firewall-centric enterprise security architectures. Employees would often bring in their own wireless access points (APs) to leverage the benefits of mobility at work. These rogue APs typically lack proper security, providing attackers with unrestricted access to internal servers, as if they had connected to an internal wired port. Many enterprises instituted a "no wireless" policy to counteract this. However, they quickly found that roaming around with a laptop sniffer to detect rogues and then isolate them from innocent neighbors was tedious and expensive. This gave rise to the need for centralized wireless intrusion prevention systems (WIPS) that could detect and terminate rogue wireless devices.

As enterprises started adopting WLANs, several new attacks emerged, including denial-of-service, man-in-the-middle, identity theft and key cracking. Hackers began masquerading as legitimate wireless devices and connecting to the network. By 2003, they could crack WEP encryption, defined in the original 802.11 standard, in a few minutes. As the number of wireless APs and clients increased, managing and securing large distributed networks became a challenge. Left unmonitored, misconfigured APs and devices often created vulnerabilities. By 2006, several high-profile data breaches attributed to unsecure wireless networks had occurred, compromising the personal information gathered from tens of millions of credit cards. As a result, industry standards, such as PCI DSS, started emphasizing wireless security and compliance.

Over the years, WIPS has evolved in two flavors: infrastructure-based, part-time WIPS and dedicated 24/7 solutions. Infrastructure-based WIPS leverage APs as part-time sensors, typically offering limited scanning, detection and reporting capabilities. This approach often suffers from false positive/negative alarms. Dedicated WIPS have more robust detection and prevention capabilities and offer minute-by-minute logging, reporting and forensic capabilities. Some dedicated WIPS also provide remote troubleshooting, WEP protection and vulnerability assessment options. Gartner estimates the WIPS market will grow to $209 million by the end of 2009 and continue to be driven by emerging wireless technologies, such as 802.11n, WiMAX and 3G cellular.

Dr. Amit Sinha, fellow and chief technologist of Motorola's enterprise wireless LAN division, specializes in wireless communications and security. He is an inventor with 16 U.S. patents.
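A highly simplified sketch of the core WIPS idea of comparing what sensors observe over the air against a list of sanctioned access points. The BSSIDs below are made up for the example; real systems add signal analysis, wired-side correlation and automated termination.

```python
# Authorized access points, keyed by BSSID (values invented for the example).
AUTHORIZED = {
    "00:1a:2b:3c:4d:5e": "corp-wlan",
    "00:1a:2b:3c:4d:5f": "corp-guest",
}

# What a scanning sensor reported seeing over the air.
observed = [
    ("00:1a:2b:3c:4d:5e", "corp-wlan"),
    ("66:77:88:99:aa:bb", "corp-wlan"),   # spoofed SSID on an unknown radio
    ("de:ad:be:ef:00:01", "linksys"),     # employee-installed access point
]

for bssid, ssid in observed:
    if bssid not in AUTHORIZED:
        print(f"possible rogue AP: {bssid} advertising '{ssid}'")
    elif AUTHORIZED[bssid] != ssid:
        print(f"misconfigured AP: {bssid} should advertise '{AUTHORIZED[bssid]}'")
```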

THE FUTURE

How security looks in the future by Jonathan Gossels

Think about clocks. Only a few hundred years ago, the skill and cost involved in timekeeping required a municipal investment. The only clock in a village was located on a town hall or church, and time was communicated to the population through a pattern of ringing bells. On a sailing ship, the most protected item was its chronometer, because without accurate timekeeping the ship was literally lost and the navigator could not determine longitude. Today, every one of us has so many clocks we can't even keep track of them – dress watch, microwave, coffee maker, telephone, radio, television, bicycle odometer, cable box, computer and on and on. Timekeeping is now ubiquitous and free.

Over the next 20 to 30 years, the security industry as we know it today will evolve into something barely recognizable. Just like seatbelts and airbags in cars, it will be ordinary and expected that security features and instrumentation will be integrated into all aspects of information processing and communications. You don't have to look very hard to see the inevitability of this trend: It is not so long ago that a firewall was considered bleeding-edge technology. In a blink, we've gone from something that was expensive and required high skill to inexpensive appliances with capabilities that the original firewall creators could never have imagined. Just like the clock on a coffee maker, these security appliances are reliable and take little skill to set up and operate.

When we had to climb over and manually lock each door, many of us rationalized why it was okay to leave the car unlocked at the local shopping center. Today, when we push the lock button on the key fob as we walk away, we don't fool ourselves with those rationalizations, because security has been integrated into the vehicle and it is effortless to lock the car and activate the alarm. In every aspect of our use of information, communication or processing in the future, necessary security features will simply "be there." It won't be by accident. The market and outside regulations will have driven the requisite security integration.

How is this going to play out? The trend toward security integration will continue as an evolutionary process. The advance will be simultaneous but at varying rates, and the trailing edge (when mostly done) will move from networking infrastructure to computing and storage infrastructure and, finally, to applications and application infrastructure. Make no mistake, effectively integrating security into applications will take the longest time and will be the most difficult technical challenge.

Jonathan Gossels is president and CEO of SystemExperts Corp., a provider of IT compliance and network-security consulting services, with an active hands-on role advising clients in compliance and building effective security organizations.

Security in public clouds by Thomas Erl and Toufic Boubez

Despite the headlines cloud computing is receiving these days, much of what is promised is far from being stable or mature, let alone safe. Any remote virtualized infrastructure can introduce security risks simply because users generally have little or no control over how and to what extent security measures are being applied. This relates primarily to public cloud providers that can provision services, applications and entire chunks of infrastructure, but will usually only do so on their own terms. Especially when extending your existing enterprise architecture with a public cloud, maintaining a reasonable security architecture can become frustratingly difficult. For example, how will you integrate cloud-based services with corporate identity and access management suites? How will you implement cloud-based services across trust boundaries?

A common answer to the security issue is the service level agreement (SLA). The SLAs given to you by the cloud provider can offer a series of guarantees that include assurances for various security controls. Even with SLAs that may make you feel as though adequate guarantees of reliability and security are in place, a greater problem looms: One of the most significant obstacles being faced by the cloud computing community as a whole is the lack of industry standardization across public clouds.

How does this relate to security? Adapting the industry-standard security mechanisms used within the IT enterprise for use with remote clouds can very well lead to custom, hybrid and convoluted security architectures. These may comply with published SLAs, but they are by no means industry standard (because, again, the terms of compliance are defined by the cloud provider). This means that although you may establish an effective relationship with an external cloud provider, it can be difficult, or even downright painful, to move away at a later stage, because the next cloud provider may impose completely different architectural and technological requirements.

The potential of public cloud computing is there, but the industry certainly isn't. For the time being, the focus remains on leveraging private clouds and building on virtualization infrastructure that doesn't compromise on security.

Thomas Erl is the world's top-selling service-oriented architecture (SOA) author and president of SOASchool.com. Toufic Boubez is a SOA security and governance specialist and a certified SOACP trainer.


Virtualization: gains and risks by Hadar Freehling

From its humble beginnings in the mainframe world over 20 years ago, virtualization has grown faster than anyone predicted. Virtualization developed from the need to maximize the investment made in mainframes that were often under-utilized. This is the same driver that has led the charge into virtualization on x86 platforms. With huge savings in hardware, power and space, virtualization is continuing to grow. Today, not only are operating systems being virtualized, but so are applications and desktops. New server hardware platforms can allow more than 50 guest operating systems per server, depending, of course, on the type of OS and the load it brings. With application virtualization, users can now run two different versions of the same application on their desktops, which would be impossible otherwise.

However, with the expanded use of virtualization come new security concerns, such as the lack of insight into communications between guest operating systems hosted on the same platform. Where before you would have an IDS/IPS, firewall or other network security controls monitoring this traffic, now that communication never reaches the network. Instead, it lives on a virtual switch. This concern is heightened with desktop virtualization, since we all know that our end-users sometimes download or click on things they shouldn't.

Securing virtual systems will continue to be problematic, and not just from a technical perspective. Figuring out which team handles setup and management of virtual systems is also a big concern. If the hypervisor is Linux-based and the guest OS is Windows-based, which team should set up the server, and where is the hand-off? Without well-documented processes and procedures, you may have an unsecure hypervisor. Hopefully, security will become baked into the new virtualization software instead of floating as an afterthought. With virtualization use gaining within the enterprise, these new security threats may take down your otherwise secure environment.

Virtualization's use in the enterprise will only continue to grow. Soon, virtualized systems will move freely from the enterprise to the cloud and back. This will not only be transparent, but will be carried out in real time, allowing companies to increase system resources as needed. Lastly, I see organizations moving toward using virtual desktops, which would be located within a secure network segment with specific access to systems. In some ways, the days of the green screens may be coming back, but with more color.

Hadar Freehling is currently working as a security architect for a Fortune 500 company and holds a master's degree in information assurance.

Here comes Web 2.0 by Shreeraj Shah

All industry reports and surveys suggest that the Web 2.0 evolution has deeply penetrated our lifestyles and technology stack. This evolution is not just restricted to social networking, but is also well-rooted in enterprise technologies. We see its adoption across industry segments, be it banking, health care, trading platforms or government initiatives. Web 2.0 implies "application of applications" and, thus, the internet has now grown into a complex network of applications – one big mashup. Ignoring Web 2.0's impact with respect to security would be a costly mistake for corporate and government entities and individuals.

Web technologies have moved from a synchronous to an asynchronous framework, and several new technologies are available to developers, along with a new set of libraries. We are out of the era of name-value pairs (Web 1.0) and have entered the Web 2.0 world, where several streams are available for consumption. This Web 2.0 era brings several security concerns along with interesting challenges, which lead us to a new set of attacks. Client-side technologies are on the attacker's radar, be it JavaScript, Flash or plug-ins. This is one of the major trends developing on Web 2.0 platforms. Cross-site request forgery (CSRF) is increasingly leveraged by attackers to bypass filters and to exploit authenticated sessions. Cross-site scripting (XSS) is helping attackers exploit Web 2.0 components, like RSS feed readers, widgets, gadgets, blogs, mashups, etc. On the server end, attack vectors and payloads are being delivered through XML, SOAP, etc. This raises the risk and threat level of next-generation applications running on Web 2.0 architecture.

Discovery of Web 2.0 vulnerabilities is becoming extremely difficult due to the nature of the architecture and its hidden calls. Zero-knowledge assessment with black box testing often fails to identify vulnerabilities, and it is becoming imperative to perform some source code analysis. Web 2.0 applications consume streams coming from untrusted sources, and these streams need extensive data validation before consumption in the browser or in server-side APIs. To secure Web 2.0 applications, one needs to build a strong threat model, followed by hybrid testing encompassing both source code analysis and parameter fuzzing.

Shreeraj Shah is director at Blueinfy Solutions and the author of Web 2.0 Security (Thomson 07), Hacking Web Services (Thomson 06) and Web Hacking: Attacks and Defense (Addison-Wesley 03).
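To give a flavor of the parameter fuzzing Shah recommends, the sketch below sends a handful of classic XSS and injection probes against a single query parameter and checks for naive reflection, using the third-party requests library. The target URL and payload list are placeholders, and a real assessment needs authorization and far broader coverage.

```python
import requests

# Placeholder target and parameter; never fuzz a system you are not
# authorized to test.
TARGET = "http://testsite.example/search"
PAYLOADS = ["<script>alert(1)</script>", "'\" OR 1=1 --", '{"$gt":""}']

for payload in PAYLOADS:
    # Send the probe as the value of a single query parameter.
    resp = requests.get(TARGET, params={"q": payload}, timeout=5)
    reflected = payload in resp.text  # crude check for unescaped reflection
    print(f"{payload!r:40} status={resp.status_code} reflected={reflected}")
```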


Multifactor authentication by Paul Beverly

Twenty years ago, smart cards were not so smart. But from their genesis as simple memory-based phone cards, the sophistication and computing power of these digital security devices now add to the convenience of our everyday lives.

Smart card technology was launched into the global mainstream by two catalytic movements. First was the introduction of secure banking cards in France. This chip-based card was the first implementation of true multifactor authentication, requiring that the cardholder use both the card and a personal identification number (PIN) to complete a transaction. The success of this technology became the basis for today's EMV standard, under which there are now 826 million EMV cards and 11.5 million EMV terminals in use globally. Second was the decision to use smart cards to identify subscribers and secure mobile phones in the GSM standard. Radiolinja in Finland made the first GSM mobile phone call in 1991, and the mobile network standard has enjoyed tremendous success since. Like smart bankcards, the SIM is also multifactor, PIN-protected and authenticated by the mobile network. Today, there are more than 3.4 billion GSM network connections worldwide, representing 80 percent of the global mobile communications market, according to the GSM Association.

In the last two decades, the technology has followed the same curve as all things chip: faster, smaller, smarter and more "cost effective." Multifactor smart card technology has experienced explosive growth, entering new digital security markets, including health care cards and identity and access control.

The success of smart cards goes beyond the "technology." Smart cards have brought convenience to our everyday lives. People do not want to think about how to make their digital lifestyle work or how to make it safe. They want to shop, travel, make phone calls and surf the web with confidence. As the world grapples with identity theft, network security, fraud and cybercrime, smart card technology will continue to be the multifactor authentication technology of choice, ensuring the right balance of convenience and security and empowering our digital interactions.

Paul Beverly is president, Gemalto North America, and served as chairman of the Smart Card Alliance until 2003.

Our tally

Leaders: This list is not meant to tally all companies in the particular space, but rather to illustrate some of the top-of-mind companies that have come to exemplify what that space is all about.

Access management: AppGate Network Security Black Box (Veri-NAC) Bradford Networks Rohati (TNS) Novell Software Quest Software SafeNet CipherOptics nuBridges Passlogix BeyondTrust Ensim

Biometrics: ACTAtek BIO-key International Cogent Systems DigitalPersona Fujitsu Hitachi America, Ltd. (HSS) Identica L1 Identity Solutions LG Iris Privaris

Email security: Axway Tumbleweed Barracuda (IM) Quest Software (IM) CellTrust FaceTime Cisco (IronPort) Entrust M86 PGP Proofpoint Trend Micro WebLOQ

Forensics: i2 Inc. AccessData Cyber Security Technologies Technology Pathways Niksun Mandiant LogLogic Log Rhythm WetStone Paraben

Governance: Agiliance Archer Technologies CA MetricStream Modulo Neupart Proteus Trustwave

Network & platform security: Global DataGuard IBM-ISS (Proventia Network MFS) ArcSight Express BigFix McAfee SkyRecon Sophos Sunbelt Symantec WatchGuard

Smart Cards/Multifactor: Open Domain Sphinx Solutions Athena Smart Card Solutions ActivIdentity MXI Security HID Global Gemalto Spyrus Thales Vasco

Vulnerability and threat management: Saint Rapid7 Core Security Nitro Security Tenable Network Security eEye Digital Security Fortinet (Fortiscan) Lumension nCircle RedSeal

Wireless security: AirMagnet AirPatrol AirTight AirWave Motorola AirDefense Cisco – Wireless Security SonicWALL (TZ Wireless N) Trend Micro (Mobile Security)

Wrap-up/Future security: Acunetix Altor Networks Application Security Layer 7 Technologies VMware (vSphere) Sonoa Systems Breach Security Ping Identity Imperva Finjan


Where is security going? by Dick Mackey

t wasn’t long ago that only financial institutions considered themselves targets of online attacks and most people didn’t give much thought to virus infections. Nowadays, organizations of all kinds not only have an online presence, but also interact with partners and service providers via the internet. Computer security has become a constant concern, particularly with the rash of high profile compromises that have led to identity theft, financial penalties and costly legal action. The question is, now that security has become top of mind, where will we go from here? We can look at recent developments and get a hint. State and federal regulations, which typically trail industry practice by a few years, are beginning to require good security practice from companies entrusted with personal identifying information. The Payment Card Industry has been requiring rather stringent controls on organizations managing payment card data for a few years now. We can only assume the trend of more prescriptive controls will continue. In the product space, we have been seeing innovative controls, like data leakage prevention, more intelligent intrusion detection and prevention, and enterprise-wide vulnerability management. So, what’s left to do? A lot. The tough problems still need to be solved. As long as systems and application programmers keep writing security vulnerabilities into their code, attackers will be exploiting the vulnerabilities. Look for advances in development tools and more secure application libraries to help avoid the typical input validation problems that lead to cross-site scripting and SQL injection. Another area in which to look for improvement is the authentication space. Vulnerabilities in certificate checking implementations and the success of man-in-the-middle attacks are forcing organizations to think about whether the faith we place in SSL and cookie-based authentication are well founded. Furthermore, our blind trust in essentially unauthenticated infrastructure is likely to force the internet powers that be to work harder on establishing name services and mail protocols that provide some basis for trusting them. So, security needs work in all areas – from development to infrastructure services to preventive and detective tools to better, more disciplined practices in the enterprise. There’s enough work in this for everyone and the focus of the day will likely be driven by the popularity of the latest exploit. It may not be the way we would plan it, but it has led to improvements we have seen in the last 20 years and it’s unlikely that that trend will change.

The evolution of IM security by Nick Chapman

Instant messaging (IM) is a common internet service that has existed for decades. Internet Relay Chat (IRC) is a decentralized instant messaging protocol and one of the oldest IM systems. IRC has many security holes. Usernames are temporary and can be stolen by signing in when the original user is offline. Chat groups (channels) can be taken over when two servers are temporarily unable to communicate during a "netsplit."

The late 1990s introduced a new wave of centralized IM services, including ICQ, AIM and MSN. These next-generation IMs controlled username assignment and focused on individual communication rather than group chat, which eliminated major problems inherent in IRC. However, as IM use became more widespread, new problems were revealed. Most IM services send messages in cleartext, so eavesdroppers could read messages and, occasionally, usernames and passwords. This led to the obvious solution: encryption. IM clients now use several implementations of public-key encryption offering authentication and privacy for each message. Another approach, known as off-the-record (OTR) messaging, signs the session key, but not individual messages, providing a private, authenticated and deniable conversation. As the messages are unsigned, neither party can prove that any specific message was sent.

Early on, a common IRC social engineering prank consisted of convincing new users to press Alt+F4 (the close-window hotkey). However, IM has now become a vector for spreading malware. Currently, there are worms that automatically spam users' contacts, trying to entice them to click on malicious links. As IM became mainstream, users began enlisting the service to communicate from the workplace, giving corporate security a new set of worries. While IM can enhance efficiency, it can also lead to data leakage and can provide another infection vector for malware.

In 1999, Jabber IM was announced, and the formal RFCs published in 2004 for the open XMPP (extensible messaging and presence protocol) communication standard allowed organizations to set up their own servers. This development has a wealth of security benefits, as organizations can give their employees the productivity of instant messaging, but restrict traffic to their network, mandate encryption, restrict access (no spammers or malware), and ensure that the username matches the correct user. Additionally, it allows employees to choose from different IM clients (which all talk back to the same server), and shields organizations from being vulnerable to an attack targeting a specific IM platform.

Nick Chapman is a security researcher with SecureWorks' counter threat unit.
