Hardening Microsoft Solutions from Attacks

Take a minute to go over this post from Dirk-jan Mollema. Go ahead and read it. I’ll wait…

Did you realize how scary that kind of attack is? As an IT guy who specializes in Exchange server and loves studying security, that article scared the snot out of me. Based on my experience with organizations of all sizes I can say with a good bit of authority that almost every Exchange organization out there is probably vulnerable to this attack. Why? Because Exchange is scary to a lot of people and they don’t really know how to harden it effectively. But I also want to use the above attack as a way to illustrate what I feel is the best strategy for hardening a Windows environment (and, really, any environment).

Take this opportunity to look at your Exchange deployment (if you haven’t already moved to Exchange Online) and think about what you can do to protect your environment from this type of thing. In this post, though, I want to focus on Exchange Server and Windows Server hardening techniques in general rather than on this particular vulnerability, because with any hardening effort you want to examine the network as a whole and work downward, not chase specific vulnerabilities. If you do the opposite, you will invariably end up playing a never-ending game of whack-a-mole, trying to stay ahead of a world full of malicious attackers and never really succeeding.

The techniques recommended in the Center for Internet Security’s (CIS) Critical Security Controls follow the top-down approach and represent one of the best guides for approaching information security at a technical level.

IT Hardening, a Quick Intro

Hardening is, essentially, the set of actions you take to make an environment more secure. There are many different types of hardening: server hardening, network hardening, physical hardening, procedural hardening, and so on. They all seek to do the same thing, just in different ways.

If you take a close look at the actions the CIS controls recommend, you’ll (hopefully) notice that they seek to secure as much of the environment as possible starting with control number 1. Each subsequent control has a narrower focus. By the time you implement control number 5, you will probably have an environment that can stand up to all but the most determined attacks, but you don’t necessarily want to stop there.

The most important best practice in Information Security is the idea of “Defense in Depth”. This technique involves building layers of protection instead of relying on a single security measure to protect your environment. Having a firewall in place is only one “layer” of defense, and is regarded as the broadest level of protection you can have. Anti-virus tools, Intrusion Detection/Prevention tools, and hardening techniques represent additional layers of defense. You want as many layers as you can justify when measuring cost against risk (a much more difficult topic to cover).

Focusing on Windows

One argument you hear regularly in the IT industry is over which OS organizations should choose to run their IT. The common claim is that Linux is a more secure OS than Windows, and this is true, up to a point. The reality is that they are simply different approaches to crafting an OS.

Linux tends to be more modular in its approach. If you implement a Linux environment, you would start with the core OS and add features as needed. This approach is good for limiting the attack surface from the start, but it also has a number of drawbacks.

The biggest drawback of Linux is that there is no centralized source of support and maintenance. There are lots of different solutions to the same problem, and no single source of support covers them all, so you either have to employ very capable Linux specialists or deal with lots of different vendors. This usually increases the cost of ongoing maintenance and support of the infrastructure. It’s also not uncommon for Linux-based open source projects to be abandoned for whatever reason, leaving the organizations that implemented them without support; once the one person who knows how to use the tool effectively leaves, you have a very serious problem.

Windows, on the other hand, is a fairly complete package of capabilities for most situations. Windows Server has built-in solutions that can do most of the work you will want in an IT environment, within some limits. For instance, Windows Server doesn’t handle email well right out of the box. You have to implement Exchange Server to have a truly effective method of handling email, but with that solution you also gain a very powerful collaboration tool that handles calendaring, contact management, task management, and other features you can pick and choose from. Microsoft also invests a lot of time and effort in training tools and educational resources to ensure there is a large pool of talent to support its OS and other software. You don’t often have to worry about finding someone who knows how to manage a Windows environment; there are boatloads of MCSAs and MCSEs looking for work almost all the time.

The major drawback with Windows is, of course, security. With all of the features built in, Windows has a very large attack surface compared to Linux. However, with careful planning and implementation, the attack surface of Windows can be decreased very effectively, such that there is virtually no difference between a standard Linux deployment and a hardened Windows environment.

Hardening Windows

Going back to the vulnerability outlined in the link at the start of this article, a single change to a Windows Active Directory environment will eliminate the vulnerability: LDAP signing and channel binding. These are techniques used to prevent man-in-the-middle attacks from succeeding. I explain the theory behind LDAP signing in more depth in my article on Understanding Digital Certificates. LDAP channel binding prevents clients from reusing portions of an authentication attempt against one DC when communicating with a different DC or client. Put simply, it “binds” a client to the entire authentication attempt by requiring the client to present proof that the authentication traffic it’s sending to the server isn’t forged or copied from a different authentication attempt.

Essentially, LDAP signing configures Active Directory Domain Controllers so that they verify they are actually talking to the party they are supposed to be talking to before doing anything. Implementing this is a little involved, as it requires a Certificate Authority to generate and deploy digital certificates. But once certificates are installed on the Domain Controllers and Member Servers in a Windows domain, LDAP signing becomes available (once systems are configured to require it) and is a very effective form of security that prevents a wide swath of attacks used to gain unauthorized access.
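For reference, the server-side enforcement settings live in the registry on each Domain Controller. The sketch below shows the documented values; in production you would normally deploy the equivalent Group Policy settings rather than edit the registry directly:

```powershell
# Sketch: require LDAP signing and channel binding on a Domain Controller.
# Deploy via the matching Group Policy settings in production rather than
# editing the registry by hand.
$ntds = "HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

# 2 = require signing for all LDAP binds
Set-ItemProperty -Path $ntds -Name "LDAPServerIntegrity" -Type DWORD -Value 2

# 2 = always require channel binding tokens on LDAPS connections
Set-ItemProperty -Path $ntds -Name "LdapEnforceChannelBinding" -Type DWORD -Value 2
```

Test these in a lab first; older clients that can’t sign their LDAP traffic will fail to authenticate once enforcement is turned on.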

LDAP signing alone won’t prevent all possible attacks in a Windows environment, though, which is why it’s also essential to disable the features and roles each server isn’t using and to carefully control remote access to servers. Windows Remote Desktop is one of the most frequently used tools to breach security in a Windows environment, so limiting access to it is essential. As a rule of thumb, only allow system administrators to access critical Windows Servers, and never, *never* allow Remote Desktop ports through your firewall.

Check your firewalls now: if you have port 3389 open to the Internet, it’s only a matter of time before you get attacked and suffer severe consequences. Remote Desktop is *not* meant for giving remote workers access over the Internet. Implement a secure VPN and enforce effective password policies if you want people to access your IT environment remotely.
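As a quick sanity check from inside the network, you can at least confirm whether a given host is answering on the RDP port; whether your edge firewall publishes that port to the Internet still has to be checked on the firewall itself. A small sketch (the hostname is a placeholder):

```powershell
# Sketch: check whether a host answers on the default RDP port (3389).
# "server01" is a placeholder. This shows listening status only; it does
# not tell you whether the port is exposed to the Internet.
Test-NetConnection -ComputerName server01 -Port 3389 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```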

Once all unnecessary features and roles are removed or effectively controlled in a Windows environment, build and maintain an effective patch management strategy. Microsoft regularly releases patches to close security holes before attackers are widely exploiting them. Any patch management plan should make allowances for testing, approving, deploying, and installing security-related patches as soon as possible.

Next, focus on granting only the permissions workers need to accomplish their tasks. This is a difficult practice to implement because it takes a lot of investigation to determine what permissions each user actually needs. Many environments grant administrative permissions to users on company-owned equipment, which is a horrible, lazy practice that will get your environment owned by a hacker very quickly.

Once you have all of the above security practices in place, you can start focusing on more specific vulnerabilities. For example, a simple registry change will block the attack in the link at the start of this post, but it will not prevent future attacks that exploit vulnerabilities that aren’t yet well known.

How Does the Cloud Play Into This?

One of the major benefits of using cloud solutions like Exchange Online is that most of the work outlined above has already been done. Microsoft’s cloud servers sit in highly secure datacenters with many protections against unauthorized access (as opposed to the common tactic of putting the server in a closet in your office). Servers in cloud environments are hardened as much as possible before being put into operation. Security vulnerabilities are usually addressed across the entire cloud environment within hours of discovery, and the servers don’t operate with an eye to backwards compatibility, so things like NTLM and SMBv1 are disabled on all systems.

That said, the cloud poses its own security challenges. You must accept the level of security put in place by the cloud provider, with little to no ability to increase it yourself. Furthermore, a hybrid cloud deployment (which is extremely common and will be for years to come) presents unique problems at the interface between two separately controlled environments. Poor security practices on the on-prem side of a hybrid deployment make the cloud side just as insecure.

You must accept that your data lives on shared, publicly reachable infrastructure, and that you don’t control where that data is (for the most part… this is slowly changing as cloud environments mature). In addition, you do not offload the responsibility of securing access to the data you store in the cloud. I’ll cover this subject in another post, but for now, understand that while cloud environments build a lot of security into their solutions, you still have a responsibility to make security a priority.

Conclusion (I never can think of a good heading here)

Security in any IT environment is a major challenge that takes careful planning and effective management. Failing to consider security challenges when deploying new solutions will almost always come back to bite you. But, with the right strategy and guidance, it *is* possible to build a secure environment that can withstand the vast majority of attacks.




Enabling Message Encryption in Office 365

As I mentioned in an earlier post, email encryption is a sticky thing. In a perfect world, everyone would have opportunistic TLS enabled and all mail traffic would be automatically encrypted with STARTTLS, which is a fantastic method of ensuring the security of messages “in transit”. But some messages need to be encrypted “at rest” due to security policies or regulations. Unfortunately, researchers recently discovered some key vulnerabilities in S/MIME and OpenPGP, the encryption systems that have been the most common ways of protecting messages while they sit in storage. The EFAIL vulnerabilities allow HTML-formatted messages to be exposed in cleartext by attacking a few weaknesses in those systems.

Luckily, Office 365 subscribers can improve the confidentiality of their email by enabling a feature that is already included in E3 and higher subscriptions, or by purchasing Azure Information Protection licenses and assigning them to the users who plan to send confidential information. The following is a short how-to on enabling the Office 365 Message Encryption (OME) system and setting up rules to encrypt messages.

The Steps

To enable and configure OME for secure message delivery, the following steps are necessary:

  1. Subscribe to Azure Information Protection
  2. Activate OME
  3. Create Rules to Encrypt Messages

Details are below.

Subscribe to Azure Information Protection

The Azure Information Protection suite is an add-on subscription for Office 365 that allows end users to perform a number of very useful functions with their email. It also integrates with SharePoint and OneDrive to act as a Data Loss Prevention tool. With AIP, users can flag messages or files so that they cannot be copied, forwarded, deleted, or subjected to a range of other common actions. For email, any message that carries a specific classification flag or meets specific criteria is encrypted and packaged into a locked HTML file that is sent to the recipient as an attachment. When the recipient receives the message, they have to register with Azure to be assigned a key to open the email. The key is tied to their email address, and once registered, the user can open that HTML attachment, and any future ones, without having to log in to anything.

Again, if you have E3 or higher subscriptions assigned to your users, they don’t also need AIP. However, each user who will be sending messages with confidential information needs either an AIP license or an E3/E5 license. To subscribe to AIP, perform these steps:

  1. Open the Admin portal for Office 365
  2. Go to the Subscriptions list
  3. Click on “Add a Subscription” in the upper right corner
  4. Scroll down to find Azure Information Protection
  5. Click the Buy Now option and follow the prompts or select the “Start Free Trial” option to get 25 licenses for 30 days to try it out before purchasing
  6. Wait about an hour for the service to be provisioned on your O365 tenant

Once provisioned, you can then move on to the next step in the process.

Activate OME

This part changed very recently. Prior to early 2018, activating OME took a lot of PowerShell work and waiting for it to function properly. Microsoft has since streamlined the activation process to make it easier to work with. Here’s what you have to do:

  1. Open the Settings option in the Admin Portal
  2. Select Services & Add-ins
  3. Find Azure Information Protection in the list of services and click on it
  4. Click the link that says, “Manage Microsoft Azure Information Protection settings” to open a new window
  5. Click on the Activate button under “Rights Management is not activated”
  6. Click Activate in the Window that pops up

Once this is done, you will be able to use the AIP client application to tag messages for rights management in Outlook. There will also be new buttons and options in Outlook Web App that allow you to encrypt messages. However, the simplest method for encrypting messages is to use an Exchange Online transport rule to encrypt them automatically.

Create Rules to Encrypt Messages

Once OME is activated, you’ll be able to encrypt messages using just the built-in, default rights management tools, but as I mentioned, it’s much easier to use specific criteria to do the encryption automatically. Follow these steps:

  1. Open the Exchange Online Admin Portal
  2. Go to Mail Flow
  3. Select Rules
  4. Click on the + and select “Add a New Rule”
  5. In the window that appears, click “More Options” to switch to the advanced rule system
  6. The rule you use can be anything from encrypting messages flagged as Confidential to using a tag in the subject line. My personal preference is to use subject/body tags. Make your rule look like the image below to use this technique:

[Image: Encrypt Rule]

When set up properly, the end user will receive a message telling them that they have received a secure message. The email will have an HTML file attached that they can open up. They’ll need to register, but once registered they’ll be able to read the email without any other steps required and it will be protected from outside view.
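If you prefer PowerShell to the portal, the same kind of rule can be sketched with the Exchange Online cmdlets. This assumes the subject/body-tag approach; the rule name and the “[encrypt]” tag text are made up for the example:

```powershell
# Sketch: a transport rule that applies Office 365 Message Encryption to
# any message whose subject or body contains the tag "[encrypt]".
# Assumes an Exchange Online PowerShell session is already connected;
# the rule name and tag are placeholders.
New-TransportRule -Name "Encrypt tagged messages" `
    -SubjectOrBodyContainsWords "[encrypt]" `
    -ApplyOME $true
```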



Designing Infrastructure High Availability

IT people, for some reason, seem to have an affinity for designing solutions that use “cool” features, even when those features aren’t really necessary. This tendency sometimes leads to good solutions, but a lot of the time it creates solutions that fall short of requirements or leave IT infrastructure with significant shortcomings. Other times, “cool” features result in over-designed, unnecessarily expensive infrastructure.

The “cool” factor is probably most obvious in the realm of High Availability design. And yes, I do realize that with the cloud becoming more prevalent in IT there is less need to understand the key architectural decisions involved in designing HA, but there are still plenty of companies that refuse to use the cloud, and for good reason. Cloud solutions are not one-size-fits-all solutions. They are one-size-fits-most solutions.

High Availability (also called “HA”) is a complex subject with a lot of variables involved. The complexity comes from the fact that there are multiple levels of HA that can be implemented, from light-touch failover to globally replicated, multi-redundant, always-on solutions.

High Availability Defined

HA is, put simply, any solution that allows an IT resource (files, applications, etc.) to remain accessible at all times, regardless of hardware failure. In an HA-designed infrastructure, your files are always available even if the server that normally stores them breaks for any reason.

HA has also become much more common and inexpensive in recent years, so more people are demanding it. A decade ago, any level of HA involved costs that exponentially exceeded a normal, single-server solution. Today, HA is possible for as little as an additional half of the cost of a single server (though, more often, the cost is essentially double the single-server cost).

Because of the cost reduction, many companies have started demanding it, and because of the cool factor, a lot of those companies have been spending way too much. Part of why this happens is due to the history of HA in IT.

HA History Lesson

Prior to the development of virtualization (the technology that allows multiple “virtual” servers to run on a single physical server), HA was prohibitively expensive and required massive storage arrays, large numbers of servers, and a whole lot of configuration. Then VMware introduced a feature called vMotion that allowed a virtual server to be moved between physical hosts at the touch of a button (the basis of VM high availability). This signaled a kind of renaissance in High Availability, because servers could now survive a hardware failure for a fraction of the cost normally associated with HA. There was a lot more involved in this shift than just vMotion (SANs, cheaper high-speed Internet, and similar advancements played a big part), but the shift began about the time vMotion was introduced.

Once companies realized they could have servers that kept running regardless of hardware failures, an unexpected market for high-availability solutions popped up, and software developers started building better HA techniques into their products. Why would they care? Because there are a lot of situations where a server solution stops working properly for reasons unrelated to hardware failure, and vMotion only handles HA for hardware failures.

VM HA vs Software HA

The most common mistake I see people making in their HA designs is accepting the assumption that VM-level High Availability is enough. It is most definitely not. Take Exchange server as an example. There are a number of problems that can occur in Exchange that will prevent users from accessing their email. Log drives fill up, forcing database dismount. IIS can fail to function, preventing users from accessing their mailbox. Databases can become corrupted, resulting in a complete shutdown of Exchange until the database can be repaired or restored from backup. VM HA does nothing to help when these situations come up.

This is where the Exchange Database Availability Group (DAG) comes into play. A DAG continuously replicates changes to mailbox databases to additional Exchange servers (as many as you want, but two or three is most common). With a DAG in place, any issue that would cause a database to dismount on a single Exchange server instead results in a failover: the database dismounts on one server and mounts on another almost immediately (within a few seconds or less).
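For a sense of what this looks like in practice, the basic DAG setup can be sketched with the Exchange Management Shell cmdlets below. The DAG, witness, server, and database names are all placeholders for the example:

```powershell
# Sketch: build a two-member DAG and add a second copy of a database.
# All names (DAG1, FS1, EX1, EX2, DB1) are placeholders.
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAG1

# Add both mailbox servers to the DAG
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX2

# Seed a second, continuously replicated copy of DB1 onto EX2
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer EX2
```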

The DAG alone, however, doesn’t provide full HA for Exchange, because IIS failures will still cause problems, and if there is a hardware failure, you have to change DNS records to point clients at the surviving server. This is why a load balancer is a necessary part of a true HA solution.

Load Balancing

A Load Balancer is a network device that allows users to access two servers with a single IP address. Instead of having to choose which server you talk to, you just talk to the load balancer and it decides which server to direct you to automatically. The server that is chosen depends on a number of factors. Among those is, of course, how many people are already on each server, since the primary purpose of a load balancer is to balance the load between servers more or less equally.

More importantly, though, most load balancers are capable of performing health checks to make sure the servers are responding properly. If a server fails a health check for any reason (for instance, if one server’s not responding to HTTP requests), the load balancer will stop letting users talk to that server, effectively ensuring that whatever failure occurs on the first server doesn’t result in users being unable to access their data.
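The health-check idea is simple enough to sketch: probe each server over HTTP and only hand traffic to the ones that respond. Real load balancers run checks like this continuously and far more robustly; the URLs below are placeholders (the path mimics an OWA-style health endpoint):

```powershell
# Sketch: a trivial HTTP health check across two servers. A real load
# balancer runs checks on a timer and pulls failed nodes out of rotation
# automatically. The URLs are placeholders.
$servers = "https://mail1.example.com/owa/healthcheck.htm",
           "https://mail2.example.com/owa/healthcheck.htm"

$healthy = foreach ($url in $servers) {
    try {
        $r = Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec 5
        if ($r.StatusCode -eq 200) { $url }   # server passed the check
    } catch {
        Write-Warning "$url failed its health check"
    }
}
$healthy   # only these servers should receive user traffic
```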

Costs vs. Benefits

Adding a load balancer to the mix, of course, increases the cost of a solution, but that cost is generally justified by the benefit such a solution provides. Unfortunately, many IT solutions fail to take this fact into account.

If an HA solution requires any kind of manual intervention, the time required to notify IT staff and complete the switch varies heavily, anywhere from 5 minutes to several hours. From an availability perspective, even a small amount of downtime can have a huge impact, depending on how much money is assumed “lost” because of a failure. Here comes some math (and not just the trigonometry involved in this slight tangent).


The easiest way to determine whether a specific HA solution is worth implementing involves a few simple calculations. First, though, we have to make a couple of assumptions. None of them will be completely accurate, but they help determine whether an investment like HA is worth making (managers and CEOs, take note):

  1. A critical system that experiences downtime results in the company being completely unable to make money for the period of time that system is down.
  2. The amount of money lost during downtime is equal to the percentage of the year the system is down multiplied by the organization’s expected annual revenue.

For instance, if a company’s revenue is $1,000,000 annually, it makes an average of about $2 per minute (rounded up from $1.90), so you can assume that 5 minutes of downtime costs that company about $10 in gross revenue. The cheapest load balancers cost about $2,000 and last about 5 years, so you recoup the cost of the load balancer by avoiding roughly 1,000 minutes of downtime over its lifetime. That’s less than the time many organizations spend updating a single server over those five years. With software HA in place, updates don’t cause downtime if done properly, so the load balancer pays for itself just by keeping Exchange running during updates (which isn’t possible with VM HA alone). Of course, that doesn’t cover the cost of the second server (Exchange runs well on a low-end server, so about $5,000 for hardware and licenses). Now imagine a company that makes $10,000,000 in revenue, or one with revenue of several billion dollars a year. By these calculations, HA becomes a necessity very quickly.
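The back-of-the-napkin math above can be written out explicitly; the figures are the same assumptions used in the example:

```powershell
# Worked version of the downtime math above, using the same assumptions.
$annualRevenue = 1000000                    # $1M/year
$perMinute     = $annualRevenue / 525600    # minutes in a year => ~$1.90/min
$loadBalancer  = 2000                       # cheap load balancer, ~5-year life

# Minutes of avoided downtime needed to pay for the load balancer
$breakEvenMinutes = $loadBalancer / $perMinute   # ~1,050 minutes (~17.5 hours)

"{0:N2} per minute; break even at {1:N0} minutes of avoided downtime" -f
    $perMinute, $breakEvenMinutes
```

Scale `$annualRevenue` up to $10M or beyond and the break-even point shrinks to almost nothing, which is the whole point of the exercise.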

VM HA vs Software HA Cost/Benefit

Realistically, the cost difference between VM HA and software HA is extremely low for most applications. Everything Microsoft sells has HA capability baked in that can be implemented at very low cost now that the clustering features are included in Windows Server 2012 Standard, so the cost of software HA over VM HA is almost always justifiable. Thus, VM HA is rarely the correct solution, and mixing the two is not a good idea. Why? Because mixing them requires twice the storage and network traffic and provides no additional benefit, other than the fact that VM replication is kinda cool. Software HA requires two copies of the server to function, and each copy should run on a separate host (separate hosts are required for VM HA as well, so only the OS licensing is an added cost) to protect against hardware failure of one VM host.

Know When to Use VM HA

Please note that I am not saying you should never use VM HA; I am saying you shouldn’t use it when software HA is available. You just need to know when to use it and when not to. If software HA isn’t possible (there are plenty of products out there with no high-availability capabilities of their own), VM HA is necessary and provides the highest level of availability for those products. Otherwise, use the software’s own HA capabilities, and you’ll save yourself a lot of headaches.

Protect Yourself from the WannaCry(pt) Ransomware

Well, this has been an exciting weekend for IT guys around the world. Two IT security folks can say they saved the world, and a lot of people in IT had no weekend. The attack was shut down before it encrypted the world, but there’s a good chance it will simply be modified and start over. So what can you do to keep your systems and data from being compromised by this most recent malware attack? If you’ve patched everything already, or don’t know whether you’re patched or vulnerable (or you just don’t want to deal with Windows updates right now), and you want to be absolutely positive your computer won’t be affected, disable SMBv1! Like, seriously. You don’t need it. Unless you’re a Luddite.

There are some environments that may still need it (anyone still using Windows XP or Server 2003, antiquated management software, or PoS NAS devices), so if you have a Windows Server environment, run

Set-SmbServerConfiguration -AuditSmb1Access $true

in PowerShell for a bit and watch the SMBServer audit logs for failures.

To disable SMBv1 Server capabilities on your devices, do the following:

Server 2012 and Later

  1. Open Powershell (Click start and enter Powershell in the search bar to open it if you don’t know how to get to it)
  2. Type in this and hit Enter: Remove-WindowsFeature FS-SMB1
  3. Wait a bit for the uninstall process to finish.
  4. Voila! WannaCry can’t spread to this system anymore.

Windows 7, Server 2008/2008R2

  1. Open Powershell (Click start and enter Powershell in the search bar to open it if you don’t know how to get to it)
  2. Type in this (everything on one line) and hit Enter: Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" -Name SMB1 -Type DWORD -Value 0 -Force
  3. Wait a bit for the command to complete.
  4. Voila! WannaCry can’t spread to this system anymore.

Windows 8.1/10

  1. Open Powershell (Click start and enter Powershell in the search bar to open it if you don’t know how to get to it)
  2. Type in this and hit Enter: Disable-WindowsOptionalFeature -Online -FeatureName smb1protocol
  3. Wait a bit for the uninstall process to finish.
  4. Voila! WannaCry can’t spread to this system anymore.

If you’re using Windows Vista… I am so, so sorry. But the Windows 7/Server 2008 instructions should still work for you.

If you still use Windows XP…stop it. And you’re just going to have to get the patch that MS released for this vulnerability.

An additional step you may want to take is to disable SMBv1’s *client* capabilities on your systems. Running the two commands below (each on one line) will do this for you. It isn’t strictly necessary, since the client can’t connect to other systems unless they support SMBv1, so if the SMBv1 server component is disabled as above, the SMBv1 client can’t do anything. But if you want to disable the client piece as well, enter the following commands:

sc.exe config lanmanworkstation depend= bowser/mrxsmb20/nsi
sc.exe config mrxsmb10 start= disabled
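After making the changes, you can verify where SMBv1 stands on a modern system with the commands below (the first works on Windows 8/Server 2012 and later; the optional-feature check applies to client SKUs, so use Get-WindowsFeature FS-SMB1 on servers instead):

```powershell
# Verify SMBv1 status after the changes above.
# Windows 8 / Server 2012 and later:
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Client SKUs (Windows 8.1/10); on servers, use: Get-WindowsFeature FS-SMB1
Get-WindowsOptionalFeature -Online -FeatureName smb1protocol | Select-Object State
```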

Theory: Understanding Digital Certificates

One of the more annoying tasks in administering a publicly available website that uses HTTPS (Outlook Web App, for example) is certificate generation and installation. Anyone who has ordered a certificate from a major Certificate Authority (CA) like Godaddy or Network Solutions has dealt with the process. It goes something like this:

  1. Generate a Certificate Signing Request (CSR) on the web server
  2. Upload the CSR to a CA in a Certificate Request
  3. Wait for the CA to respond to your Request with a set of files
  4. Download the “Response” files
  5. Import the files on the Web Server

Once that gets done, you will (usually) have a valid certificate that allows the server to use SSL or TLS to encrypt communications with client machines.

Even after performing this process, you may be wondering *why* you have to go through this whole mess of annoyingness.

What is a Certificate

Put simply, a certificate is just a big hunk of data generated to provide clients and servers with the tools needed to properly encrypt and decrypt data. The most important of those tools is called a “key”.

Just like a door key, the key in a certificate is used both to prevent unauthorized access and to allow authorized access. The keys in a certificate are generally used to encrypt data and later decrypt it.

What Keys?

When you go through the certificate generation process above, you are generating two different, but mathematically related, keys: a public key and a private key. The public key is used to encrypt data but cannot decrypt the data it encrypts. The private key can decrypt data encrypted by the public key and must be kept as secure as possible.

If you were to look at a certificate file, you would be able to see the public key without any issues. You could even take the public key and use it to encrypt some data. However, the only way that data could be decrypted is if you have access to the private key. The private key is stored securely, and can only be accessed with specific authorization. If you gain physical access to a web server (or remote access to the GUI/Command line of the OS running the web server), you can gain access to the private key, but that usually requires a level of access unavailable to the majority of people.
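The public/private key relationship is easier to see with actual numbers. Here is a toy RSA example using deliberately tiny textbook primes; real keys use primes thousands of bits long, and nothing below is usable for actual security:

```powershell
# Toy RSA: encrypt with the public key, decrypt with the private key.
# Tiny textbook primes for illustration only -- never do this for real.
$p = 61; $q = 53
$n = $p * $q                   # 3233, the shared modulus
$e = 17                        # public exponent  (public key  = e, n)
$d = 2753                      # private exponent (private key = d, n)
                               # chosen so (e * d) mod ((p-1)*(q-1)) = 1

$message = 65
$cipher = [System.Numerics.BigInteger]::ModPow($message, $e, $n)  # anyone can encrypt
$plain  = [System.Numerics.BigInteger]::ModPow($cipher, $d, $n)   # only the private key holder can decrypt

"cipher=$cipher plain=$plain"   # plain comes back as 65
```

Anyone holding (e, n) can produce the ciphertext, but only someone holding d can turn it back into the original message, which is exactly the property the certificate system relies on.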

Now, you may be wondering, “If the data can be encrypted by anyone, how do I ensure the data getting to the client machine is actually coming from the original server?” And if you weren’t already, you probably are now. To make sure the sending server is authentic, we have to authenticate it. That’s where another part of the digital certificate comes into play.


When you generate a certificate, you have to enter a common name for the certificate. The common name should match the name that is used to access the server. If anyone attempts to access the server using something other than what is defined by the common name, a certificate error is usually displayed. For an example of a certificate error, see below:

[Image: browser warning that the site’s security certificate doesn’t match the name being used to access it]

This particular error was generated by changing the hosts file on my desktop to point www.whatever.com at a website running SSL (Facebook, if you’re wondering). Had I accessed the website using a URL that matched the common name listed on the certificate, I would not have received this error. In essence, I’ve attempted to access the site using a name that can’t be authenticated, so I can’t be sure that the data I’m getting hasn’t been intercepted, decrypted, modified, and re-encrypted.

“So, if anyone can encrypt data using the public key that anyone can get a hold of, can’t I just create a certificate that has the same common name and use that to authenticate a rogue server?” Well, no. Because there’s another part of the certificate that keeps this from happening.
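To make the common name check concrete, here’s a rough sketch of the kind of name-matching logic a client applies. Real validation (per RFC 6125) checks subjectAltName entries and handles many more edge cases; this function is just an illustration:

```python
# A sketch of the hostname check a browser performs against a
# certificate's common name. Real validation is far more careful.

def matches_common_name(hostname: str, common_name: str) -> bool:
    hostname = hostname.lower()
    common_name = common_name.lower()
    if common_name.startswith("*."):
        # A wildcard like *.example.com covers exactly one extra label
        return hostname.split(".", 1)[1:] == [common_name[2:]]
    return hostname == common_name

print(matches_common_name("www.facebook.com", "*.facebook.com"))  # True
print(matches_common_name("www.whatever.com", "*.facebook.com"))  # False
```

This is exactly the mismatch from the hosts-file experiment above: the name I typed didn’t match the name on the certificate, so the check failed and the browser complained.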

The Circle of Trust

The primary role of the Certificate Authority in the certificate generation process is to verify that the certificates that get generated are only generated for servers that actually belong to the entity that runs the server. In addition, the CA has to be trusted by the computer accessing the information encrypted by the certificate.

Most Operating Systems and Web Browsers are configured to trust a number of CAs right out of the box. Companies like GoDaddy and Network Solutions have contracts with Microsoft, Apple, and other OS developers to have their Root CA Certificates trusted by default. The Root CA Certificate is used to generate all certificates obtained from the CA defined by the Root CA Certificate.

Now, it’s possible to create your own CA that will generate certificates. But because your CA’s Root Certificate is not automatically trusted by client computers around the world, those computers will throw a certificate error any time they access a certificate generated by your CA, at least until your Root CA Certificate is installed on them as a trusted Root CA Certificate.

This trust relationship makes it extremely difficult to interject a rogue system between a client and server to read data, because the rogue system can’t have a copy of the private key that goes with the server’s certificate, and any system that tries to talk to the server with that rogue system in the mix will squawk like a parrot because there’s a server authentication problem.


So the breakdown of the whole certificate system is this:

  • Certificates hold the keys used to encrypt and decrypt data
  • Certificates are used to verify that the source of encrypted data is authentic
  • Certificates should be generated by trusted certificate authorities.

Now, it’s entirely possible to have a web server ignore some part of the system (a server can use a self-signed certificate, for instance), but such a server will be significantly less secure than one that follows the rules.

Most modern browsers and clients make it extremely inconvenient to access a server that doesn’t follow the rules, which means your users will waste a little bit of time every time they access your site if it breaks them. In the IT business, time is a very scarce commodity, and every little bit wasted can add up to giant problems. So make sure your server is following the rules!


Email Encryption for the Common Man

One of my co-workers had some questions about email encryption and how it worked, so I ended up writing him a long response that I think deserves a wider audience. Here’s most of it (leaving out the NDA covered portions).

Email Encryption and HIPAA Compliance for the Uninitiated

In IT security, when we talk about encryption, there are a couple of different “types” of encryption that we worry about, one is encryption “in transit”, and the other is encryption “at rest.”

Encryption “in transit” is how we ensure that when data is moving from one system to another that it is either impossible or difficult beyond reasonable likelihood for someone to intercept and read that data. There are pieces of many data exchanges that we have no control over, so we cannot guarantee that there isn’t someone out there with a packet sniffer reading every bit that passes between our server and someone else’s (This is a form of “passive” data inspection, possible from just about any trunk line on a switch). We can make sure it doesn’t happen on our end, but we can’t control the ISP or the other person’s side of things.

The basic email encryption system, TLS (Transport Layer Security…Don’t ask what that means), usually follows this incredibly oversimplified pattern:

1. Server 1 contacts Server 2
2. Server 2 says, “Hi. I’m Server 2. Who are you?”
3. Server 1 says, “Hi. I’m Server 1.”
4. Server 2 says, “Nice to meet you Server 1. What can I do for you?”
5. Server 1 says, “Before we really get into that, I’d like to make sure no one is eavesdropping on our conversation. Can we start talking in a language no one but us knows?” (This is basically what encryption is)
6. Server 2 says, “Sure. What language would you like to use?”
7. Server 2 hands Server 1 a certificate that serves as a kind of translator. Using it, the two servers agree on a private language that each will use to translate (encrypt and decrypt) everything said from now on.
8. Server 2 says, after translating what it wants to say into the new encryption language, “Okay, what would you like to do?”
9. Server 1 translates this message from the encrypted language and makes its first request to server 2 after translating it into the encrypted language.

From this point on, each server will communicate exclusively with the encryption “language” provided by the certificate they exchanged, and anyone who is eavesdropping (packet sniffing) will only see a bunch of gobbledygook that they can’t understand.
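The “agree on a private language” step can happen a few different ways; one classic method is a Diffie-Hellman exchange, sketched below with deliberately small numbers (real TLS uses far larger parameters, and this ignores the authentication piece entirely):

```python
# A toy Diffie-Hellman exchange: two servers agree on a secret "language"
# (key) over a channel an eavesdropper can read from start to finish.
import hashlib

p, g = 0xFFFFFFFB, 5        # public parameters; the eavesdropper sees these
a = 123456789               # Server 1's private number, never transmitted
b = 987654321               # Server 2's private number, never transmitted

A = pow(g, a, p)            # Server 1 sends A in the clear
B = pow(g, b, p)            # Server 2 sends B in the clear

key1 = pow(B, a, p)         # Server 1 computes the shared secret
key2 = pow(A, b, p)         # Server 2 computes the same secret...
assert key1 == key2         # ...without it ever crossing the wire

# Both sides derive the same symmetric key for the rest of the conversation
session_key = hashlib.sha256(str(key1).encode()).digest()
```

An eavesdropper who captures `p`, `g`, `A`, and `B` still can’t compute the shared secret without one of the private numbers, which is the whole trick: the “language” is agreed upon in public without ever being spoken in public.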

There are more complex versions of this scenario that make things more secure. For instance, in a Domain Authenticated TLS situation, both servers have to be “Authenticated,” which is to say, they must prove they are the server the message is supposed to go to. This is done by validating the name that is printed on the certificate with the name the servers use when introducing themselves to one another.

In the example above, it is possible for someone to inject themselves into the conversation and decrypt everything from Server 1, read it, encrypt it again, and send it on to Server 2 (this is called a Man-in-the-Middle attack, and is an “active” form of eavesdropping, because it requires a fairly complex setup and active manipulation of the data being inspected). Domain Authenticated TLS makes this much more difficult, because a server that acts as an intermediary in a man-in-the-middle attack cannot use the name that exists on the certificate unless it is owned by the entity that created the certificate to begin with. When you get certificate errors while browsing the web, it’s usually because either you entered a name that isn’t listed on the certificate installed on the server you’re talking to, or the server is using a name that isn’t listed on the certificate. (Certificates are a heavy subject, so I’ll just bypass that for now)

Anyway, data “at rest” is any data that is just sitting on a hard drive or disk somewhere, waiting for someone to read it. In order to read that data, you have to gain access to a server (or workstation) that has access to the data and read it. Encryption of data “at rest” requires more effort to accomplish, because it has to be decrypted every time someone tries to read it. Technologies like Bitlocker or PGP allow data to be encrypted while it’s just sitting there on a server.

We only care about encryption of data “in transit” when we work with HIPAA regulations. This is because the only way to access data that is “at rest” is to gain physical access to the data or to systems that have access to that data. HIPAA has other regulations that help reduce the likelihood that either of those things will happen, and since data “at rest” is never outside our realm of control, we can do much more to protect it. Most ePHI is sitting in a datacenter that is locked and requires specific permission to access, but that coverage doesn’t apply to the data when it’s moving between servers.

Passwords: How they Usually Work, How to Make Them Secure

One of the things in IT Security that took me a while to figure out was the subject of Password management. There were a few pieces of it that confused me for a while. I knew how to create “secure” passwords, but I didn’t really understand what made them secure or how someone could crack a password. It took a while, but I finally figured it out, and I thought I’d pass on the knowledge to keep others from having to bang their head against this subject.

Important Concepts

First, there are a few things you need to learn about before you can really understand passwords and how they work. I’ll outline them here.

Hashing

Probably the most important concept to understand with passwords is that of “Hashing.” This does not mean cooking diced potatoes. In IT Security, Hashing refers to the process of passing data through a specific mathematical algorithm that cannot be reversed to hide or obscure the data. Data that is “hashed” cannot be discovered easily without knowledge of what was originally passed through the algorithm. If you pass the word “hashbrowns” through a hashing algorithm (there are a number of these, and I’ll go over a couple later), you’d get a string of letters and numbers that represent that word, also known as a fingerprint.
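If you have Python handy, you can generate a fingerprint yourself with the built-in hashlib module (I’m using SHA-256 here, a common modern choice, rather than the older algorithms discussed later):

```python
# What a hash "fingerprint" looks like in practice. Note how changing a
# single character of the input produces a completely different fingerprint.
import hashlib

print(hashlib.sha256(b"hashbrowns").hexdigest())
print(hashlib.sha256(b"hashbrownz").hexdigest())

# The same input always produces the same fingerprint...
assert hashlib.sha256(b"hashbrowns").hexdigest() == \
       hashlib.sha256(b"hashbrowns").hexdigest()

# ...and the output length is fixed (64 hex characters for SHA-256)
# no matter how much data you feed in.
assert len(hashlib.sha256(b"hashbrowns" * 1000).hexdigest()) == 64
```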

Hashing is pretty important in IT, because we use it for a number of purposes. The most common purpose for hashing is Password Validation, but it is also used in forensic investigations to provide legal documentation and proof that the contents of a seized or otherwise legally obtained data source are not changed during the investigative process. This is done by generating a hash prior to and following the investigation. If the hashes match up, then the investigator made no changes to the data and it can be verified as authentic and valid evidence that has not been tampered with.

In perfectland, that string of numbers and letters generated by the hashing example earlier would represent *only* the word “hashbrowns.” Sadly, we don’t live in perfectland, where everything works the way it should, and some hashing algorithms are not capable of producing enough unique strings of numbers and letters to make certain that each string represents only one set of data. When you have two different sets of data that result in the same output, we refer to that as a “Collision.” Collisions are an important weakness in hashing algorithms, and have caused the death of some algorithms. Notably, the MD5 hashing algorithm was discovered to have a weakness that allows attackers to deliberately produce identical hashes from different inputs. I’ll explain why this is bad later on in this post. Aside from collisions, there is only one real way to break a hashing algorithm: Brute Force.

Brute Force Attacks

A brute force attack is a little different in IT than it would be in a real world. If you came across a locked door in the real world, you could just use brute force to break down the door. In IT, you can’t really break down the door, but you can certainly try every possible key to the door and hope you get lucky. In a brute force password cracking attack, you attempt every possible password until you get the right one. Most scripts that do this will attempt several “well known” common passwords, and then move on to trying every possible other combination.

Modern security techniques have made brute force password cracking realistically impossible. The most effective of these techniques are account lockouts and password attempt windows. Account lockouts block a user account from accessing anything either until a set amount of time passes or until the user directly contacts an administrator to have the account unlocked. Both techniques effectively combat normal brute force attacks by significantly increasing the amount of time it takes for a brute force attack to succeed. Since there can be so many different combinations of characters in a password (depending on the length of the password, there can be billions of different possible passwords to attempt), forcing systems to allow only 10 attempts every 30 minutes makes it practically impossible to crack a password using brute force.

Just as an example, if a system requires 8 character passwords using only lowercase letters, there are over 200 *billion* possible passwords that can be used. If you can only try 10 passwords every 30 minutes, it could take over a *million years* to find a valid password for a single user account using a brute force attack. Now, it’s possible to get lucky and very quickly hit on a valid password (particularly if the target uses a well known password or something like aaaaaaaa), but the likelihood is very low, and it decreases even more if you include a greater number of valid characters for your password. Using all possible characters available on a US English keyboard, there are over 6 quadrillion possible passwords that are 8 characters long. Adding one more character, for a 9 character password, increases that number to over 600 quadrillion. This is why it’s important to allow and use “complex passwords” that utilize as many characters and as much length as is feasible.
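You can check these orders of magnitude with a few lines of Python (assuming 26 lowercase letters and roughly 95 printable characters on a US keyboard):

```python
# Sizes of various password spaces, and the worst-case time for a
# rate-limited brute force at 10 guesses every 30 minutes.

lowercase_8 = 26 ** 8                  # 8 chars, lowercase only
full_8 = 95 ** 8                       # 8 chars, ~95 printable US keyboard chars
full_9 = 95 ** 9                       # one character longer

guesses_per_year = 10 * 2 * 24 * 365   # 10 guesses per half hour
worst_case_years = lowercase_8 / guesses_per_year

print(f"{lowercase_8:,}")              # over 200 billion
print(f"{full_8:,}")                   # over 6 quadrillion
print(f"{full_9:,}")                   # over 600 quadrillion
print(f"{worst_case_years:,.0f}")      # worst-case years at this rate
```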

You Are the Weakest Link

The problem, though, with using complex passwords is that the more complex they are, the more difficult they are to *remember*. This is a problem, because the first thing a person does when they can’t remember something is write it down and put it somewhere that’s easy to get to. Like on a Post-it note stuck to their monitor. This makes it much easier to get someone’s password, because all you have to do is walk by their computer and it’s sitting there in plain view. Which brings up the most important rule of IT security: the weakest link in any security system is people. People do all kinds of things to make their lives easier or quicker. Unfortunately, things that are easy and quick are also very much not secure. For instance, removing the password attempt lockout mechanism from a system will keep you from having to spend time unlocking user accounts, but it will absolutely open you up to brute force attempts. When you go from being able to enter only 10 passwords every 30 minutes to being able to enter upwards of 20 billion per second (and beyond, if the hacker has a lot of computing power at hand), it becomes a lot easier to break into the system with a brute force attack (so don’t turn off your password lockout policy!).

Since modern security policies allow us to make a brute force attack technically impossible, hackers have come up with other methods to obtain passwords and even bypass password mechanisms altogether. The most effective of these techniques is called Social Engineering.

Social Engineering

The most effective method for getting passwords is called social engineering. Social engineering techniques rely on a hacker’s ability to take advantage of the weak link in any security system: the human being. For instance, even if you aren’t an IT professional, you’ve probably heard the term “Phishing” before. This is a technique that uses weaknesses in human psychology to trick people into exposing private details, including social security numbers, physical addresses, credit card numbers, and passwords. One very common example of a phishing attempt is a mass email that purports to be from a credible source like a national bank. The email informs the user of some major problem with their bank account and prompts them to click a link to access their account. The link takes them to a website that closely or exactly mimics the official bank website, but with a slightly different URL. Many people don’t pay close attention to the URL they are being sent to, so they unwittingly enter their actual bank credentials into the login form on this fake website, and their username and password end up written to a database set up by the individual who sent out the email.

Another type of social engineering is something my step-kids love to pull off; shoulder surfing. This involves walking around looking for people who are in the process of entering a password for something (in my step-kids’ case, they can get my wife’s cell phone lock password within 10 minutes of her changing it by doing this). Most people aren’t as aware of their surroundings as they could be when they are entering a password, and shoulder surfing takes advantage of this fact. This is why many credit card machines in stores now come with “privacy shields” that help obscure which buttons are being pushed during a transaction. It’s a little more difficult to accurately enter your password on these devices, but it’s also more difficult for someone to purposefully snag your PIN.

Usually, though, social engineering is a little too involved and specialized to pull off. For a good example of a highly involved social engineering attack on a high security environment, watch the movie “Sneakers” starring Robert Redford (and Dan Aykroyd…and Ben Kingsley…and some other people who aren’t as cool). Or go old school and just watch “The Sting”. In fact, just about every technique used by con men will typically fall under the social engineering category. But I digress.

A less involved technique for getting passwords involves the use of “Rainbow Tables.”

The Rainbow Connection

The best way to describe Rainbow Tables is to refer to them as a Distributed Brute Force attack that attacks the biggest weakness in password authentication mechanisms, password validation. As I mentioned earlier, when you enter a password, that password isn’t usually sent to the computer you’re trying to access. The password you enter is instead run through a hashing algorithm and the resulting hash string is sent and compared with the string that is stored on the computer. The weakness in this technique is that, despite the actual password being obscured, the hash is usually sent as is. And, in order for the system you are accessing to verify that you have the correct password, the hash is also stored somewhere in the system. This means that it is possible to discover what your password is by running a brute force attack against the hashing algorithm in use.
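Here’s a bare-bones sketch of that validation flow in Python. Real systems also add a random per-user “salt” and use a deliberately slow algorithm, so treat this as an illustration of the hash-comparison idea only:

```python
# Hash-based password validation: the system stores only the hash and
# compares hashes at login, never the password itself.
import hashlib
import hmac

# What the system stored when the user set their password
stored_hash = hashlib.sha256(b"may the force be with you").hexdigest()

def check_password(attempt: bytes) -> bool:
    attempt_hash = hashlib.sha256(attempt).hexdigest()
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(attempt_hash, stored_hash)

assert check_password(b"may the force be with you")
assert not check_password(b"password123")
```

The weakness described above is visible right in the sketch: `stored_hash` has to live somewhere, and anyone who obtains it can attack it offline at their leisure.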

One of the biggest security problems with hashing algorithms is that the algorithms themselves are publicly available, which means that anyone can generate hash strings using them. It’s an unavoidable problem: hashing algorithms *must* be published publicly so that everyone can verify they can’t be easily broken and are genuinely one-way mathematical algorithms. At any rate, a Rainbow Table is a database or other collection of hash values and the original data used to generate each value.

Rainbow Tables are built over a long period of time by a lot of individuals running strings through the hashing algorithm and recording the resulting hash values. Each original string and its hash value are entered into the Rainbow Table and stored permanently. Hackers can then use the table by comparing hash values that they capture through various means with the ones stored in it. The way a Rainbow Table is created, through the use of numerous computers and individuals to generate the information, means that it is a “distributed” effort, and the fact that Rainbow Tables attempt to cover every possible password puts them in the Brute Force Attack classification. Thus, a Rainbow Table is the result of a Distributed Brute Force technique.

Rainbow Tables are the preferred method for cracking passwords on systems that use hash exchange techniques because hashing algorithms are designed to generate *huge* numbers of values. MD5, for example, can produce 2^128 possible hash values, which works out to roughly 3.4 x 10^38, or 34 followed by 37 zeroes. That’s far too many to ever enumerate completely; even at 30 billion hashes per second, a single computer would need on the order of 10^20 years to cover them all. Fortunately for attackers (and unfortunately for the rest of us), a Rainbow Table doesn’t have to cover the whole hash space. It only has to cover the hashes of every *likely password*, and a million people who can each generate 30 billion hashes per second can chew through the candidate passwords for shorter lengths in an entirely feasible amount of time (in the real world, the number of people generating hashes for Rainbow Tables is well below a million and the rate they generate them is significantly lower than 30 billion per second, but the principle holds). The Rainbow Table serves as a central repository where all the computers that are running hashes record their progress, which also reduces the instances of generating the same hash numerous times.

Eventually (and usually in an entirely feasible amount of time), this approach can render any hashing algorithm useless as a security mechanism for validating short or common passwords.
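To see why a precomputed table is so devastating, here’s a miniature version in Python. It’s really just a reverse-lookup dictionary built over every 4-letter lowercase password (true rainbow tables use hash chains to trade lookup time for storage, but the end result is the same: hash in, password out):

```python
# A miniature precomputed lookup table over all 4-letter lowercase
# passwords (26^4 = 456,976 entries), keyed by MD5 hash.
import hashlib
import itertools
import string

table = {}
for combo in itertools.product(string.ascii_lowercase, repeat=4):
    pw = "".join(combo)
    table[hashlib.md5(pw.encode()).hexdigest()] = pw

# An attacker who captures a hash can now recover the password
# instantly instead of brute-forcing it:
captured = hashlib.md5(b"abcd").hexdigest()
print(table[captured])   # abcd
```

Building the table takes time up front, but every lookup afterward is effectively free, and the work only ever has to be done once for a given algorithm and password length.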

Creating Secure Passwords

Right now, almost every hashing algorithm in wide use has rainbow tables covering over 90% of all possible passwords up to 8 characters long, using every character available on a US keyboard. What that means to you, dear reader, is that if you are using a password that is 8 characters long or less, there’s a 90% (or greater) chance that your password is listed in a rainbow table, and you should stop using it.

Now, it’s very likely you’ve been supremely frustrated at some point in your history of creating passwords because you were told that your password wasn’t “complex” or “strong” enough. A prevailing attitude in IT is that drawing each character of a password from a larger set of possible characters makes the password more secure, and that attitude is partially correct. However, passwords longer than 8 characters that also draw on a large character set are *much* more difficult to remember, so very few people ever make a password longer than 8 or 9 characters. As I mentioned, passwords of that length are not nearly as secure as they used to be because they are stored in Rainbow Tables, so to keep your passwords secure, you *have* to make them *longer*.

Here’s a fun fact for you…a password that is 14 characters long and only uses lower-case characters and the space bar is *much* more secure than a password that is 8 characters long and uses every character available. The number of possible passwords for the former is a 21 digit number; for the latter, 16 digits. This means it will take significantly longer to generate a Rainbow Table covering 14 character passwords than 8 character passwords.
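A quick check of that comparison, assuming 27 usable symbols (a–z plus the space bar) for the long password and roughly 95 printable US keyboard characters for the short one:

```python
# Long-but-simple vs. short-but-complex password spaces.

long_simple = 27 ** 14     # 14 characters, lowercase + space
short_complex = 95 ** 8    # 8 characters, full keyboard

# The long, simple password space is tens of thousands of times larger.
print(long_simple // short_complex)
assert long_simple > short_complex
```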

Now, I understand that a 14 character password might seem difficult to come up with, but consider this…if all you have to use in that password is lowercase letters and spaces, you can use your favorite movie quote. “may the force be with you” is 25 characters long. It will take 624 billion years for 1 million computers generating 30 billion hashes per second to complete a rainbow table that holds all the passwords possible using 25 characters using *just* lowercase letters and spaces. For reference, scientists estimate that the entire universe has only been around for 14 billion years.

However, there’s a problem with that last paragraph…the hashing algorithm. Some algorithms don’t generate enough hash values to cover the entire list of passwords you can use. The example above, using 25 character passwords, means the number of possible passwords is a 36 digit number. If you ran those passwords through an algorithm that generates a short value, CRC16, for example, you would very likely generate a hash that also matches a much *shorter* password. In fact, CRC16 can only produce 65,536 distinct values, so even the 857,375 possible 3 character passwords from the full US keyboard character set outnumber the available hashes more than 13 to 1. Luckily, CRC16 isn’t used for password validation these days. That said, it is important to point out that there *is* an upper limit to how long a password can be and remain “secure”. For the most part (as of 2015), you should consider a password that is more than 32 characters long to be increasingly likely to generate a hash value that collides with a shorter password’s hash value.

So the longer your password is, the more secure it is (up to a point). If the system you are creating a password for *requires* you to use more than just lower case letters and spaces, put 1! at the start or end and then capitalize the first letter of your sentence.

And there you have it. That’s a whole lot of explanation on the subject of passwords, and there’s still a lot I can cover. In part 2 of this subject, I’ll cover the fine art of building a secure (and not annoying) password policy. I’ll also touch a little bit on Active Directory passwords, and why it generally *isn’t* possible to crack them with a Rainbow Table (when you only have access to a client machine, at least).