Social Engineering & Cyber Security: What Military Leaders Should Take from Kevin Mitnick's Presentation

Kevin Mitnick, the infamous hacker and social engineer turned security consultant, gave a presentation at this year’s History Conference at the Naval Academy today. He gave numerous examples of extracting information from people and companies by turning their own trust and knowledge against them. His demonstrations likely startled many audience members, both with the range of methods and, more importantly, with their success rate.

Some may look at the seemingly endless list of ways attackers can obtain what they’re looking for and throw their hands up in despair. It’s important to take a step back and consider some important factors in responding to, and hopefully mitigating, attack vectors.

Technology alone won’t save you. If you fight technology with technology, you’ll lose. All the firewalls and intrusion detection systems in the world can’t guarantee that a network won’t be breached. There’s no such thing as an impenetrable system, and no such thing as bug-free software. Kevin’s demonstration of exploiting vulnerabilities in widely used commercial software made that plain. Moreover, this isn’t just software used in the private sector. Many of the exploits he demonstrated take advantage of software that has become an integral part of the way the military handles its information. As if this weren’t enough, the files used to carry out every successful exploit passed antivirus scanning without incident, and were run on fully patched, up-to-date systems.

That’s not to say technological security measures are pointless; far from it. Strong passwords, multi-factor authentication, limited access permissions, and strict data management are as important now as they’ve ever been. Placing full faith in their protection, however, is misguided.
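To make one of those measures concrete, here is a minimal sketch of how the time-based one-time passwords (TOTP) behind many multi-factor authentication apps are verified, following RFC 6238. The secret, time step, and digit count shown are standard defaults; nothing here comes from Mitnick’s talk.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period               # 30-second time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(totp(secret_b32), submitted)
```

The point of the sketch is that the second factor is cheap to implement and verify; real deployments also accept codes from adjacent time steps to tolerate clock drift.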

Legislation and policy alone won’t save you. The first instinct of most government and private agencies is to react to new threats with new rules. Congress will propose laws, companies will write new usage regulations, and in the end these measures will do little to stem attacks. Punitive action will deter the low-level players for whom the risk of fines or prison isn’t worth it, and employees will perhaps comply with increased restrictions on their behavior. But those with the determination and skill will get what they’re seeking, and many of them won’t be caught.

In fact, regulations have an unintended consequence: complacency. In an interview with USNI News Online Editor Sam LaGrone, Mitnick pointed to PCI DSS as an example: it gives corporations a checklist for protecting client financial data, allowing them to achieve legal compliance while avoiding the larger expense of comprehensive security. Companies will only spend as much money as necessary, and regulations spell out exactly what is necessary, whether or not it’s comprehensive. Companies feel secure in following the rules, and when the threat evolves beyond those rules, they lie vulnerable because they didn’t remain vigilant. Similar risks exist inside military structures, where policies restrict certain behaviors but can’t account for new and inventive attacks, and training ends up focusing on symptoms rather than root problems.

Again, this isn’t to say legislation is pointless. It is very useful in punishing those who are caught. It’s also a good incentive for organizations to protect data they might otherwise do little to protect. Yet rules are inflexible, slow to change, and expensive to enforce. The attacks they are designed to protect against are anything but.

The military needs to take security training seriously. Anyone currently serving in the military or working for the Department of Defense has probably gone through basic computer security education, often consisting of nothing more than a one-hour self-guided online course once a year. Nobody can reasonably deny that the military is effective at training its people to execute their missions - they train hard, they train continuously, and the result is a force for whom reacting to threats becomes instinct. Yet when it comes to protecting computer systems and preventing data leakage, the effort looks more like an afterthought than a real training regimen. With nearly all the information the military handles stored digitally, every servicemember should be trained continually and tested regularly on their response to threats.

Kevin Mitnick proposed this type of approach as part of his presentation (typically given to corporate managers and executives), and this component of his talk is especially germane to military operations. It doesn’t have to be complex. It could be something as simple as an email intentionally crafted with inaccurate details that should raise a red flag for trained users, with a link that, should they fall for the attack and click it, informs them of their mistake. Or a random phone call from someone posing as a superior and requesting details about a mission or personnel, verifying that the proper procedures for confirming identity or flagging a suspicious request are followed. It would cost more money, but it’s a crucial part of OPSEC and information assurance that isn’t being given due consideration.
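As a rough sketch of that email exercise (not Mitnick’s actual tooling), the following sends one simulated phishing message whose link points at an internal training page that explains the ruse. Every hostname, address, and URL below is a hypothetical placeholder for a unit’s own infrastructure.

```python
import smtplib
import uuid
from email.message import EmailMessage

# Hypothetical placeholders - substitute a unit's own mail relay and training site.
SMTP_HOST = "mail.example.mil"
TRAINING_URL = "https://training.example.mil/gotcha"

def send_simulated_phish(recipient: str) -> str:
    """Send one test message; the returned token lets any click be attributed."""
    token = uuid.uuid4().hex  # unique per recipient
    msg = EmailMessage()
    msg["From"] = "IT Helpdesk <helpdesk@example.mil>"  # plausible internal sender
    msg["To"] = recipient
    msg["Subject"] = "Mandatory password reset - action required"
    # Deliberate red flags: urgency, a generic greeting, a link to "verify" credentials.
    msg.set_content(
        "Dear user,\n\n"
        "Your account will be locked in 24 hours.\n"
        f"Verify your credentials now: {TRAINING_URL}?t={token}\n"
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
    return token
```

Anyone who clicks lands on the training page, which logs the token and walks them through the red flags they missed; the value of the exercise is that feedback loop, not punishment.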

The bottom line: all hope is not lost. There’s plenty that can be done to protect military networks and defend against data leakage from the outside and the inside alike. The weakest part of any computer security strategy is always the user, and we should put far more emphasis on strengthening that link.

I originally published this post on USNI Blog. I have republished it here for my archives.