Hardening Microsoft Solutions from Attacks

Take a minute to go over this post from Dirk-jan Mollema. Go ahead and read it. I’ll wait…

Did you realize how scary that kind of attack is? As an IT guy who specializes in Exchange Server and loves studying security, that article scared the snot out of me. Based on my experience with organizations of all sizes, I can say with a good bit of authority that almost every Exchange organization out there is probably vulnerable to this attack. Why? Because Exchange is scary to a lot of people, and they don't really know how to harden it effectively. But I also want to use the above attack to illustrate what I feel is the best strategy for hardening a Windows environment (and, really, any environment).

Take this opportunity to look at your Exchange deployment (if you haven't already moved to Exchange Online) and think about what you can do to protect your environment from this kind of attack. In this post, though, I want to focus on Exchange Server and Windows Server hardening techniques in general rather than this particular vulnerability, because with any hardening effort you want to examine the network as a whole and work downward, not chase specific vulnerabilities. If you do the opposite, you will invariably end up playing a never-ending game of whack-a-mole, trying to stay ahead of a world full of malicious attackers and never really succeeding.

The techniques recommended in the Center for Internet Security's (CIS) Critical Security Controls follow this top-down approach and represent one of the best guides for approaching information security at a technical level.

IT Hardening, a Quick Intro

Hardening is essentially all the actions you take to make an environment more secure. There are many different types of hardening: server hardening, network hardening, physical hardening, procedural hardening, and so on. But they all seek to do the same thing, just in different ways.

If you take a close look at the actions the CIS Controls recommend, you'll (hopefully) notice that control number 1 seeks to secure as much of the environment as possible, and each subsequent control has a narrower focus. Once you get to control number 5, you will probably have an environment that will stand up against all but the most determined attacks, but you don't necessarily want to stop there.

The most important best practice in Information Security is the idea of “Defense in Depth”. This technique involves building layers of protection instead of relying on a single security measure to protect your environment. Having a firewall in place is only one “layer” of defense, and is regarded as the broadest level of protection you can have. Anti-virus tools, Intrusion Detection/Prevention tools, and hardening techniques represent additional layers of defense. You want as many layers as you can justify when measuring cost against risk (a much more difficult topic to cover).

Focusing on Windows

One argument you hear regularly in the IT industry is over which OS should run your infrastructure. The common claim is that Linux is a more secure OS than Windows, and this is true, up to a point. The reality is that they are simply different approaches to crafting an OS.

Linux tends to be more modular in its approach. If you implement a Linux environment, you would start with the core OS and add features as needed. This approach is good for limiting the attack surface from the start, but it also has a number of drawbacks.

The biggest drawback with Linux is that there is no centralized source of support and maintenance. There are lots of different solutions to the same problem, and no single source of support for all of them, so you either need very capable Linux specialists on staff or have to juggle lots of different vendors. This usually increases the cost of ongoing maintenance and support of the infrastructure. It's also not uncommon for Linux-based open source projects to be abandoned for whatever reason, leaving the organizations that implemented them without support, and once the guy who knows how to use the solution effectively leaves, you have a very serious problem.

Windows, on the other hand, is a fairly complete package of capabilities for most situations. Windows Server has built-in solutions that can do most of the work you will want in an IT environment, within some limits. For instance, Windows Server doesn't handle email well right out of the box; you have to implement Exchange Server for a truly effective email system, but with that you also gain a very powerful collaboration tool that handles calendaring, contact management, task management, and other features you can pick and choose from. Microsoft also invests a lot of time and effort in training tools and educational resources, ensuring there is a large pool of talent to support its OS and other software. You don't often have to worry about finding someone who knows how to manage a Windows environment; there are boatloads of MCSAs and MCSEs looking for work almost all the time.

The major drawback with Windows is, of course, security. With all of the features built in, Windows has a very large attack surface compared to Linux. However, with careful planning and implementation, the attack surface of Windows can be decreased very effectively, such that there is virtually no difference between a standard Linux deployment and a hardened Windows environment.

Hardening Windows

Going back to the vulnerability outlined in the link at the start of this article, one pair of changes to a Windows Active Directory environment will eliminate it: LDAP signing and LDAP channel binding. Both are techniques used to prevent man-in-the-middle attacks from succeeding. I explain the theory behind LDAP signing in more depth in my article on Understanding Digital Certificates. LDAP channel binding prevents pieces of an authentication attempt made against one DC from being reused in communication with a different DC or client. Put simply, it "binds" a client to the entire authentication attempt by requiring the client to present proof that the authentication traffic it sends to the server isn't forged or copied from a different attempt.

Essentially, LDAP signing configures Active Directory Domain Controllers so that they verify they are actually talking to the party they think they are before doing anything. Implementing this fully is a little difficult, though, because channel binding only protects TLS-secured (LDAPS) connections, which require a Certificate Authority to generate and deploy digital certificates. But once digital certificates are installed on the Domain Controllers and Member Servers in a Windows domain and systems are configured to require signing, it becomes a very effective layer of security that prevents a wide swath of attacks aimed at gaining unauthorized access.
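If you want to see what enforcement looks like under the hood, both settings come down to documented registry values on the Domain Controllers; in practice you would deploy them through Group Policy ("Domain controller: LDAP server signing requirements") rather than script them by hand. Here is a minimal sketch in Python that sets Microsoft's documented value names on the DC it runs on; treat it as an illustration, not a deployment tool:

```python
# Minimal sketch: enforce LDAP signing and channel binding on the local DC.
# Run as Administrator on the Domain Controller itself. Value names and
# meanings are Microsoft's documented ones (see ADV190023).
import winreg

NTDS_PARAMS = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NTDS_PARAMS, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 2 = require signing on all incoming LDAP connections
    winreg.SetValueEx(key, "LDAPServerIntegrity", 0, winreg.REG_DWORD, 2)
    # 2 = always require channel binding tokens on LDAPS connections
    winreg.SetValueEx(key, "LdapEnforceChannelBinding", 0, winreg.REG_DWORD, 2)
```

The client side has a matching policy ("Network security: LDAP client signing requirements") that should be raised in step, or clients that don't sign will simply be refused.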

LDAP signing alone won't prevent every possible attack in a Windows environment, though, which is why it's also essential to disable the features and roles each server isn't using and to control remote access to servers carefully. Windows Remote Desktop is one of the most frequently abused tools for breaching a Windows environment, so limiting access to it is essential. As a rule of thumb, only allow system administrators to access critical Windows Servers, and never, *never* allow Remote Desktop ports through your firewall.

Check your firewalls now: if you have port 3389 open to the Internet, it's only a matter of time before you get attacked and suffer severe consequences. Remote Desktop is *not* meant for giving remote workers access over the Internet. Implement a secure VPN and enforce effective password policies if you want people to access your IT environment remotely.
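If you aren't sure whether RDP is reachable from outside, a quick probe settles it. A small sketch (the hostname is a placeholder; substitute your own public IP or DNS name, and run it from outside your network):

```python
# Quick sketch: check whether an RDP endpoint answers from this vantage point.
# "gateway.example.com" is a placeholder for your own public IP or hostname.
import socket

HOST = "gateway.example.com"  # placeholder
PORT = 3389                   # default Remote Desktop port

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"{HOST}:{PORT} answered -- RDP is exposed from here")
except OSError:
    print(f"{HOST}:{PORT} did not answer -- no RDP exposure from this vantage point")
```

If that first message ever prints from a coffee-shop connection, fix the firewall before you do anything else.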

Once all unnecessary features and roles are removed or effectively controlled in a Windows environment, build and maintain an effective patch management strategy. Microsoft regularly releases patches that close security holes, often before attackers are actively exploiting them. Any patch management plan should make allowances for testing, approving, deploying, and installing security-related patches as soon as possible.
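A useful spot check on any individual server is to list its most recently installed updates. This sketch just shells out to PowerShell's Get-HotFix cmdlet, which is present on any modern Windows Server:

```python
# Sketch: show the ten most recently installed Windows updates on this machine.
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-HotFix | Sort-Object InstalledOn | "
     "Select-Object -Property HotFixID, InstalledOn -Last 10"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```

If the newest entry is months old, your patch management strategy exists on paper only.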

Next, focus on granting only the permissions workers need to accomplish their tasks. This is a difficult practice to implement because it takes a lot of investigation to determine what permissions each user actually needs. Many environments grant administrative permissions to users on company-owned equipment, which is a horrible, lazy practice that will get your environment owned by a hacker very quickly.
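A quick place to start that investigation is the local Administrators group on each machine; any account in there beyond the ones you expect is a finding. A small sketch using the standard `net localgroup` command:

```python
# Sketch: list members of the local Administrators group for review.
import subprocess

result = subprocess.run(
    ["net", "localgroup", "Administrators"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # anything beyond the expected admin accounts needs a justification
```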

Once you have all of the above security practices in place, you will then want to start focusing on more specific vulnerabilities. For example, a simple registry change will block the attack described in the link at the start of this post, but it will not prevent future attacks against vulnerabilities that aren't yet well known.

How Does the Cloud Play Into This?

One of the major benefits of using cloud solutions like Exchange Online is that most of the work outlined above has already been done. Microsoft's cloud servers are housed in highly secure datacenters with many protections against unauthorized access (as opposed to the common tactic of putting the server in a closet in your office). Servers in cloud environments are hardened as much as possible before being put into operation. Security vulnerabilities are usually addressed across the entire cloud environment within hours of discovery, and the servers don't have to cater to backwards compatibility, so things like NTLM and SMBv1 are disabled on all systems.

That said, the cloud poses its own security challenges. You must accept the level of security the cloud provider puts in place, and you have little to no ability to modify those systems to increase security yourself. Furthermore, a hybrid cloud deployment (which is extremely common and will be for years to come) presents unique problems at the interface between two separately controlled environments. Poor security practices on the on-prem side of a hybrid deployment will make the cloud side just as insecure.

You must also accept that your data lives on publicly reachable infrastructure and that you don't control where that data is (for the most part; this is slowly changing as cloud environments mature). In addition, you do not offload the responsibility of securing access to the data you store in the cloud. I'll cover this subject in another post, but for now, understand that while cloud providers build a lot of security into their solutions, you still have a responsibility to make security a priority.

Conclusion (I never can think of a good heading here)

Security in any IT environment is a major challenge that takes careful planning and effective management. Failing to consider security challenges when deploying new solutions will almost always come back to bite you. But, with the right strategy and guidance, it *is* possible to build a secure environment that can withstand the vast majority of attacks.


Avoiding Vendor Bloat

Some IT software vendors may hate me for this blog post, but I want to write it anyway. During my decade as an IT consultant for businesses of varying sizes, I've observed a particularly annoying phenomenon, which I call "Vendor Bloat." An organization's IT decision makers identify some need and immediately go looking for a product that will meet it. That is not always a bad idea, but in many cases the organization fails to realize it already owns technology that meets the need, and it ends up with a massive pile of products from different vendors. This results in an IT environment that is constantly fighting with itself across appliances, servers, and software, and an infrastructure that ends up hurting the business instead of helping it meet its goals. The IT support team has numerous vendors to call for support, and none of those vendors will help get their product working with all the other stuff in the environment.

In one extreme example, I recall going into an organization that had three email security appliances: a spam filter, an email encryption appliance, and an email archiving appliance. They were constantly having issues with mail delivery delays and failures and just couldn't figure out what was causing the problem. I took one look and just had to shake my head in frustration. I went through the architecture of the environment with the client and showed them how a single cloud service could meet all three of their email security needs. Once they made that switch, the email delivery problems mysteriously disappeared.

IT Unitaskers

The core of the problem is a type of IT "Unitasker": a solution that meets only a single organizational need. If you haven't seen TV chef Alton Brown's tirade against kitchen unitaskers, go watch it to get a little background on the term.

Basically, IT products or appliances that only do a single thing are dumb, and are often very close to being scams. They cost lots of money, do very little, and do more to hurt your IT environment than help it. Most of the quality solutions out there can meet multiple needs without third-party additions.

Following the email security example: look for a spam filtering solution that also provides some form of email encryption and either archiving or spooling services. An email encryption solution should likewise offer Data Loss Prevention capabilities or spam filtering features, and one that can also manage Whole Disk Encryption or Endpoint Security can add great value.

Aside from the general annoyance of dealing with several support frameworks to fix one problem, you do not want multiple vendors handling your mail flow. Troubleshooting is a nightmare with two or three vendors in the mix, and issues are bound to happen when your email bounces through multiple servers or appliances before hitting a mailbox.

So how do we avoid Vendor Bloat?

Don’t Be Lazy

The first step to avoiding Vendor Bloat is getting over the desire to avoid work. There is a lot of work and careful examination involved in properly assessing the need for an IT solution. But that work must be done if you don’t want to have someone take advantage of you and sell you things you don’t need. You should never ever cede oversight of the IT environment to a vendor.

Honest Self-Assessment

One of the first bits of work you need to do is honestly and thoroughly assess your existing infrastructure as well as the need itself. If, for instance, the environment suffers a phishing attack, carefully assess the damage before looking at solutions to keep such attacks from happening again.

The process here requires you to examine existing costs, budgetary constraints, the actual need, and the cost of continuing as-is (including hidden costs like reduced efficiency). If the aforementioned phishing attack only cost you a few headaches and you've only been hit with one similar attack in the past decade, a $100k+ solution isn't likely to be a good purchase.
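To make that comparison concrete, a back-of-the-envelope annualized loss expectancy calculation (ALE = single loss expectancy × annual rate of occurrence, the standard risk formula) is usually enough. The numbers below are purely illustrative:

```python
# Illustrative only: expected annual loss vs. annual cost of the proposed fix.
incident_cost = 5_000           # assumed cleanup cost of one phishing incident
incidents_per_year = 0.1        # one similar attack per decade, per the example
solution_annual_cost = 100_000  # the $100k+ solution, annualized (assumed)

ale = incident_cost * incidents_per_year  # annualized loss expectancy
print(f"Expected annual loss: ${ale:,.0f} vs. solution cost: ${solution_annual_cost:,}")
# -> Expected annual loss: $500 vs. solution cost: $100,000 -- an easy no
```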

Technical Examination

Take a look at your existing IT infrastructure and determine the capabilities of what you already have. You've spent lots of good money building that infrastructure, so make sure you can't already meet the need before spending tons more.

Exchange Server (and Exchange Online), for instance, is already capable of providing forced email encryption with specific partners through mutually authenticated TLS (also known as domain-authenticated TLS). Setting this up usually only requires about an hour of work per partner organization, so if you have a limited set of companies you need to guarantee email encryption with, it's worth setting that relationship up in Exchange rather than spending thousands on an appliance or cloud service that only does email encryption.
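Before setting up such a relationship, it's worth confirming that the partner's inbound mail server advertises STARTTLS at all. A small sketch (the hostname is a placeholder for the partner's actual MX host; the Exchange-side TLS configuration itself happens in Exchange, not here):

```python
# Sketch: check whether a partner's SMTP server advertises STARTTLS.
# "mail.partner.example" is a placeholder; note that outbound port 25 is
# blocked on many networks, so run this from a mail-capable host.
import smtplib

with smtplib.SMTP("mail.partner.example", 25, timeout=10) as smtp:
    smtp.ehlo()
    if smtp.has_extn("starttls"):
        print("Partner advertises STARTTLS -- a forced-TLS relationship can work")
    else:
        print("No STARTTLS advertised -- forced TLS to this partner will fail")
```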

It helps to consider least-effort solutions when you're faced with an IT problem, and there are a lot of good reasons for this. Chief among them: creative solutions built on your existing environment let you keep the existing support framework without having to expand your staff or train employees to manage and use new products.

If you are a high-level decision maker, be sure that you have access to technical advisors to assist in assessing need. This is particularly true if the need is in an area that you aren’t familiar with.

Vendor Pushback

Whenever a vendor tries to tell you how to meet your company’s needs with their software or service, push back! Don’t let the vendors control the conversation. You have a need and they need to prove that they can meet more than just that need. You have to ask, “What else does this do?”

There are also a lot of hidden costs that need to get added to the equation when you add a new system to an existing IT infrastructure. You have to train your own staff to manage it, you have to adjust your processes to account for the new services, and other managerial issues will pop up once the solution is in place. A vendor’s pitch to you will not account for the hidden costs, so you need to be vigilant and serious when interacting with vendors. Don’t be distracted by the flashy lights and cool tech, and don’t be afraid to say, “I don’t need this.”

Conclusion

Vendor Bloat can become a very serious problem quickly, and it works directly against the basic goal of an IT environment where all the pieces work together properly. It is possible, however, to avoid getting stuck in the Vendor Bloat trap if you are honest, careful, and smart about assessing whether you actually need to buy a new solution.