Enabling Message Encryption in Office 365

As I mentioned in an earlier post, email encryption is a sticky thing. In a perfect world, everyone would have opportunistic TLS enabled and all mail traffic would be automatically encrypted with STARTTLS, which is a fantastic method of ensuring the security of messages “in transit”. But some messages need to be encrypted “at rest” due to security policies or regulations. Unfortunately, researchers have recently discovered some key vulnerabilities in S/MIME and OpenPGP, the encryption systems that have been the most common ways of protecting messages while they sit in storage. The EFAIL vulnerabilities allow HTML-formatted messages to be exposed in cleartext by attacking a few weaknesses in how mail clients handle those standards.

Luckily, Office 365 subscribers can improve the confidentiality of their email by implementing a feature that is already available to all E3 and higher subscriptions or by purchasing licenses for Azure Information Protection and assigning them to users that plan to send messages with confidential information in them. The following is a short How-To on enabling the O365 Message Encryption (OME) system and setting up rules to encrypt messages.

The Steps

To enable and configure OME for secure message delivery, the following steps are necessary:

  1. Subscribe to Azure Information Protection
  2. Activate OME
  3. Create Rules to Encrypt Messages

Details are below.

Subscribe to Azure Information Protection

The Azure Information Protection suite is an add-on subscription for Office 365 that will allow end users to perform a number of very useful functions with their email. It also integrates with SharePoint and OneDrive to act as a Data Loss Prevention tool. With AIP, users can flag messages or files so that they cannot be copied, forwarded, deleted, or a range of other common actions. For email, all messages that have specific classification flags or that meet specific requirements are encrypted and packaged into a locked HTML file that is sent to the recipient as an attachment. When the recipient receives the message, they have to register with Azure to be assigned a key to open the email. The key is tied to their email address and once registered the user can then open the HTML attachment and any future attachments without having to log in to anything.

Again, if you have E3 or higher subscriptions assigned to your users, they don’t need AIP as well. However, each user that will be sending messages with confidential information in them will need either an AIP license or an E3/E5 license to do so. To subscribe to AIP, perform these steps:

  1. Open the Admin portal for Office 365
  2. Go to the Subscriptions list
  3. Click on “Add a Subscription” in the upper right corner
  4. Scroll down to find Azure Information Protection
  5. Click the Buy Now option and follow the prompts or select the “Start Free Trial” option to get 25 licenses for 30 days to try it out before purchasing
  6. Wait about an hour for the service to be provisioned on your O365 tenant

Once provisioned, you can then move on to the next step in the process.

Activate OME

This part has changed very recently. Prior to early 2018, activating OME took a lot of PowerShell work and waiting for it to function properly. Microsoft changed the method for activating OME to streamline the process and make it easier to work with. Here’s what you have to do:

  1. Open the Settings option in the Admin Portal
  2. Select Services & Add-ins
  3. Find Azure Information Protection in the list of services and click on it
  4. Click the link that says, “Manage Microsoft Azure Information Protection settings” to open a new window
  5. Click on the Activate button under “Rights Management is not activated”
  6. Click Activate in the Window that pops up
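For admins who prefer the shell, the portal steps above can also be scripted. A hedged sketch (these are the module and cmdlet names as of 2018; the AADRM module has since been renamed AIPService, and yourdomain.com is a placeholder):

```powershell
# Connect to Azure Rights Management and activate it for the tenant.
Import-Module AADRM
Connect-AadrmService          # sign in with a Global Admin account
Enable-Aadrm                  # activate Rights Management

# Then, in an Exchange Online PowerShell session, license Exchange for it
# and sanity-check the configuration for a sending user.
Set-IRMConfiguration -AzureRMSLicensingEnabled $true
Test-IRMConfiguration -Sender someuser@yourdomain.com
```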

Once this is done, you will be able to use AIP’s client application to tag messages for rights management in Outlook. There will also be new buttons and options in Outlook Web App that allow you to encrypt messages. However, the simplest method for encrypting messages is to use an Exchange Online transport rule to encrypt messages automatically.

Create Rules to Encrypt Messages

Once OME is activated, you’ll be able to encrypt messages using just the built-in, default Rights Management tools, but as I mentioned, it’s much easier to use specific criteria to do the encryption automatically. Follow these steps:

  1. Open the Exchange Online Admin Portal
  2. Go to Mail Flow
  3. Select Rules
  4. Click on the + and select “Add a New Rule”
  5. In the window that appears, click “More Options” to switch to the advanced rule system
  6. The rule you use can be anything from encrypting messages flagged as Confidential to using a tag in the subject line. My personal preference is to use subject/body tags: set the rule’s condition to match your chosen tag and set its action to apply Office 365 Message Encryption.
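The same rule can be created from Exchange Online PowerShell. A hedged sketch (the “[encrypt]” tag and rule name are my examples; -ApplyOME was the OME-era action and has since been superseded by -ApplyRightsProtectionTemplate):

```powershell
# Encrypt any message whose subject or body contains the chosen tag.
New-TransportRule -Name "OME: encrypt tagged mail" `
    -SubjectOrBodyContainsWords "[encrypt]" `
    -ApplyOME $true
```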

When set up properly, the end user will receive a message telling them that they have received a secure message. The email will have an HTML file attached that they can open up. They’ll need to register, but once registered they’ll be able to read the email without any other steps required and it will be protected from outside view.


Avoiding Vendor Bloat

Some IT software vendors may hate me for this blog post, but I want to write it anyway. During my decade as an IT consultant for businesses of varying sizes, I’ve observed a particularly annoying phenomenon, which I call “Vendor Bloat.” An organization’s IT decision makers identify some need and immediately look for technical solutions that will meet that need. This is not always a bad idea, but in many situations the organization fails to realize that it already has technical solutions that meet the need and ends up with a massive number of technical solutions from different vendors. This results in an IT environment that is constantly fighting with appliances, servers, and software solutions. The end result is a terrible IT infrastructure that ends up hurting the business instead of helping it meet its goals. The IT support team has numerous vendors to talk to for support, and those vendors don’t help them get the solutions working with all the other stuff they have.

In one extreme example, I recall going into an organization that had three email security appliances: a spam filter, an email encryption appliance, and an email archiving appliance. They were constantly having issues with mail delivery delays and failures and just couldn’t figure out what was causing the problem. I took one look and just had to shake my head in frustration. I went through the architecture of the environment with the client and showed them how a single cloud service could provide all three of their email security needs. Once they switched to that method, the email delivery problem mysteriously disappeared.

IT Unitaskers

The core of the problem is due to a type of IT “Unitasker” solution that meets only a single organizational need. If you haven’t seen TV Chef Alton Brown’s tirade against Kitchen Unitaskers, go watch it to get a little background on the term “Unitasker.”

Basically, IT software solutions or appliances that only do a single thing are dumb, and are often very close to being scams. They cost lots of money, do very little, and do more to hurt your IT environment than help. You should know that most of the quality solutions out there have the ability to meet multiple needs without third party additions.

Following the Email Security example, you want to look for a spam filtering solution that provides some form of email encryption and either archiving or spooling services as well. An email encryption solution should also provide Data Loss Prevention capabilities or have spam filtering features as well, and even a solution for managing Whole Disk Encryption or Endpoint Security can add great value.

Aside from the general annoyance of dealing with different support frameworks to fix a problem, you do not want to have multiple vendors handling your mail flow. It’s a nightmare to troubleshoot issues with more than one vendor in the mix, and issues are bound to happen when your email bounces through multiple servers or appliances before hitting a mailbox.

So how do we avoid Vendor Bloat?

Don’t Be Lazy

The first step to avoiding Vendor Bloat is getting over the desire to avoid work. There is a lot of work and careful examination involved in properly assessing the need for an IT solution. But that work must be done if you don’t want to have someone take advantage of you and sell you things you don’t need. You should never ever cede oversight of the IT environment to a vendor.

Honest Self-Assessment

One of the first bits of work you need to do is to honestly and thoroughly assess your environment’s existing infrastructure as well as the need you have. If, for instance, there is a phishing attack on the environment, you need to carefully assess the damage before looking at solutions to keep similar attacks from happening.

The process here requires you to examine existing costs, budgetary constraints, solution need, and cost to continue as-is (including hidden costs like reduced efficiency). If the aforementioned phishing attack only cost you a few headaches and you’ve only been hit with a single similar attack in the past decade, a $100k+ solution isn’t likely to be a good purchase.

Technical Examination

Take a look at your existing IT infrastructure and determine the capabilities of what you already have. You’ve spent lots of good money building your IT infrastructure already, so you need to make sure you don’t already have the ability to meet the need you have without spending tons of money.

Exchange server (and Exchange Online), for instance, is already capable of providing partner-based forced Email encryption through the use of Mutually Authenticated TLS encryption (Also known as Domain Authenticated TLS). Setting this up usually only requires about an hour of work per partner organization, so if you have a limited set of companies that you need to ensure email encryption with, it’s worth it to set that relationship up with Exchange rather than spend thousands on an appliance or cloud solution that only does email encryption.
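For reference, the Exchange side of that partner setup is only a few shell commands. A hedged sketch (partner.com and the connector names are placeholders that will differ in your environment):

```powershell
# Require mutually authenticated TLS with a specific partner domain.
Set-TransportConfig -TLSSendDomainSecureList partner.com `
                    -TLSReceiveDomainSecureList partner.com

# Enable Domain Security on the connectors that carry the partner's mail.
Set-SendConnector "Internet" -DomainSecureEnabled $true -RequireTLS $true
Get-ReceiveConnector "EXCH1\Default Frontend EXCH1" |
    Set-ReceiveConnector -DomainSecureEnabled $true -AuthMechanism Tls
```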

It helps to consider least effort solutions when being faced with a problem in IT. There are a lot of good reasons for this. First off, creative solutions with your existing environment will allow you to maintain the existing support framework without having to expand or train employees to manage and use new solutions.

If you are a high-level decision maker, be sure that you have access to technical advisors to assist in assessing need. This is particularly true if the need is in an area that you aren’t familiar with.

Vendor Pushback

Whenever a vendor tries to tell you how to meet your company’s needs with their software or service, push back! Don’t let the vendors control the conversation. You have a need and they need to prove that they can meet more than just that need. You have to ask, “What else does this do?”

There are also a lot of hidden costs that need to get added to the equation when you add a new system to an existing IT infrastructure. You have to train your own staff to manage it, you have to adjust your processes to account for the new services, and other managerial issues will pop up once the solution is in place. A vendor’s pitch to you will not account for the hidden costs, so you need to be vigilant and serious when interacting with vendors. Don’t be distracted by the flashy lights and cool tech, and don’t be afraid to say, “I don’t need this.”

Conclusion

Vendor Bloat can become a very serious problem quickly, aside from the general need to have an IT environment where all the pieces work together properly. It is possible, however, to avoid getting yourself stuck in the vendor bloat trap if you are honest, careful, and smart about assessing the need to actually buy a new solution.

QuickPost: What do Exchange Virtual Directories Do?

This is just a quick little reference post to answer a question that isn’t well covered. Most Exchange admins are familiar with how to set the virtual directories in Exchange after a new server is added or after an initial deployment. What’s less clear to most is what those VDirs actually do as far as Exchange’s capabilities are concerned. I’ll also cover the difference between Internal/External URLs for the VDirs at the end. You may also want to visit this documentation to look at how each VDir’s IIS authentication should be set (in 2016, at least…click the other versions button to select yours).

OWA

I really hope everyone understands what this one does, but let’s just include the explanation anyway. OWA VDir is for Outlook Web Access. It’s used to host the website that users will connect to if they are attempting to access their mailbox through a web browser.

ECP

This one hosts the website used to access the Exchange Control Panel. ECP allows management of the entire Exchange server if you have the correct administrative rights assigned, or advanced configuration of your own mailbox if you don’t have admin rights.

Autodiscover

This is the endpoint that hosts the XML file used by Outlook and Activesync to determine where the correct Exchange server is for the user’s mailbox.

EWS Virtual Directory

This is one of the more important Virtual Directories to have the URLs set properly on. EWS is Exchange Web Services. EWS provides third party applications and clients with connectivity to the Exchange user’s mailbox in a way that allows those applications to communicate with the mailbox without using MAPI or RPC connections. This makes connections to Exchange more secure and app developer friendly. EWS is responsible for Calendar Sharing outside the Exchange organization, Free/Busy exchange, Out of Office messaging, and a number of other things. If this VDir isn’t set properly, those things may not work.

Microsoft-Server-Activesync

This VDir allows access to mobile devices that are compatible with Microsoft’s ActiveSync. It is used by any ActiveSync compatible application to access the user’s Mail and Calendar data. ActiveSync is *very* limited in what it can access. Things like shared calendars, delegated mailboxes, and public folders cannot be accessed through ActiveSync.

OAB

OAB stands for Offline Address Book. The OAB VDir hosts XML files that contain a downloadable copy of the Exchange Organization’s Global Address List and all other Address Lists that are configured to publish an OAB. This allows Outlook to download the address book for offline/disconnected use.

RPC

RPC stands for Remote Procedure Call, and it’s the technique the MAPI protocol uses to exchange mail between servers and clients. The RPC VDir is tied to a feature called Outlook Anywhere (or RPC over HTTPS in some versions). This VDir needs to be set correctly if you want users to be able to access Exchange 2007/2010 from outside the network. In 2013, it is used for computers inside and outside the network. In 2016, it is being replaced with MAPI over HTTPS, which functions a little differently. If this VDir isn’t set correctly, External users will not be able to use Outlook to connect to their mailbox.

MAPI

This VDir is home to the MAPI over HTTPS protocol used in Outlook 2016 and some versions of Outlook 2013. It has to be set in PowerShell because it hasn’t been added to the ECP GUI for Exchange. MAPI over HTTPS functions very similarly to RPC over HTTPS, except that the entire protocol uses HTTPS for its work instead of just tunneling RPC requests. It’s a bit more secure to do things this way, and it’s how Exchange will work in the future.
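A sketch of that PowerShell work (the server and host names are placeholders):

```powershell
# Set the MAPI/HTTP URLs; there is no ECP page for this virtual directory.
Get-MapiVirtualDirectory -Server EXCH1 |
    Set-MapiVirtualDirectory -InternalUrl https://mail.domain.com/mapi `
                             -ExternalUrl https://mail.domain.com/mapi
```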

Powershell

This VDir provides administrators with remote access to the Exchange Management Shell in PowerShell. In Exchange 2007/2010, the Exchange Management Shell was accessed directly on the server. In later service packs for 2010, this was changed to allow PowerShell to function over HTTPS, which provides a more secure interface with Exchange.

Internal URLs vs External URLs

Each of the above VDirs can be configured with an Internal and External URL setting. What’s the difference? Well, when applications like Outlook connect to Autodiscover, they are given a URL as a referral in case the application needs to know where to reach each service. The URL that gets used depends on whether the client is joined to the same Active Directory Domain/Forest as Exchange, and whether the client is connected to the same network.

All clients not connected to the same network as Exchange (that is, the IP address of the client as seen by the Exchange server is not part of a subnet assigned to an Active Directory site) will be given the External URL settings for everything. Clients on the same network will also be given the External URL if they are not members of the AD domain/forest that Exchange belongs to. Only clients on the same network that are members of the AD domain/forest Exchange is in will receive the Internal URL. In practice, it’s a good idea to make sure the Internal and External URLs are the same for all virtual directories in Exchange.
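Making the Internal and External URLs match is quick work in the Exchange Management Shell. A hedged sketch (EXCH1 and mail.domain.com are placeholders; the paths shown are the standard ones for each VDir):

```powershell
# Point every major virtual directory's Internal and External URL at the
# same published name.
$url = "https://mail.domain.com"
Get-OwaVirtualDirectory -Server EXCH1 |
    Set-OwaVirtualDirectory -InternalUrl "$url/owa" -ExternalUrl "$url/owa"
Get-EcpVirtualDirectory -Server EXCH1 |
    Set-EcpVirtualDirectory -InternalUrl "$url/ecp" -ExternalUrl "$url/ecp"
Get-WebServicesVirtualDirectory -Server EXCH1 |
    Set-WebServicesVirtualDirectory -InternalUrl "$url/EWS/Exchange.asmx" -ExternalUrl "$url/EWS/Exchange.asmx"
Get-ActiveSyncVirtualDirectory -Server EXCH1 |
    Set-ActiveSyncVirtualDirectory -InternalUrl "$url/Microsoft-Server-ActiveSync" -ExternalUrl "$url/Microsoft-Server-ActiveSync"
Get-OabVirtualDirectory -Server EXCH1 |
    Set-OabVirtualDirectory -InternalUrl "$url/OAB" -ExternalUrl "$url/OAB"
```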

VDirs Where the URL Settings Don’t Matter

There are a few VDirs that have Internal/External URL settings that are not really used for any purpose. OWA and ECP don’t generally get accessed by applications that use Autodiscover, so there’s no requirement that the URL be set. Powershell is usually not used by applications that use Autodiscover, but it can be, so whether it’s set or not depends on your applications.

What URL Do I Use?

You may be wondering which URL you should be using to configure these VDirs. The answer is simple enough. Use a URL that matches the Certificate installed on the Exchange server. If the Certificate has exchange.domain.com listed as an acceptable CN or SAN, use https://exchange.domain.com/whatever. You’ll want to make sure that any certificate used with Exchange includes autodiscover.domain.com at a minimum. Additional names are recommended. If you don’t meet that requirement, you’ll need to use SRV records for autodiscover.
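Before settling on a URL, you can list the names each installed certificate actually covers from the Exchange Management Shell:

```powershell
# Show each certificate's subject, the names it covers, and the Exchange
# services it is bound to.
Get-ExchangeCertificate |
    Format-List Thumbprint, Subject, CertificateDomains, Services
```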

Data Encryption – How it Works (Part 1)


I’ve decided to start a short series of posts on data encryption, which is becoming an increasingly important subject in IT as government regulations and privacy concerns demand ever increasing levels of privacy and security.

In this series, I’ll try to cover the more confusing concepts in encryption, including the three main types of encryption systems used today: private key encryption, public key encryption, and SSL/TLS encryption. I will cover how those types of encryption function and how they vary from one another. I will also cover one of the most confusing topics in IT security, Public Key Infrastructure. If you haven’t already read my article on Digital Certificates, I would highly recommend doing so before going on to part two of this series, since digital certificates underpin the vast majority of encryption standards today.

What is Encryption?

The goal of encryption is to make any message or information impossible to understand or read without permission. Perfect encryption is (currently) impossible. What I mean by that is there is no way to encrypt data so that it can’t *possibly* be read by someone who isn’t authorized to do so. There are an unlimited number of ways to encrypt data, but some methods are significantly more effective at preventing unauthorized disclosure of data than others.

Encryption Parts

Every encryption system, however, has a few things in common. First, there’s the data. If you don’t have something you want to keep private or secret, there’s no reason to encrypt your data, so no need for encryption. But since we live in a world where secrecy and privacy are occasionally necessary and desirable, we are going to have stuff we want to encrypt. Credit card numbers, social security numbers, birth dates, and things like that need to be encrypted to prevent people from misusing them. We call this data “clear-text” because it’s clear what the text says.

The next part is the “encryption algorithm”. Encryption is based very heavily in math, so we have to borrow some mathematical terminology here. In math, an algorithm is all the steps required to reach a conclusion. The algorithm for 1+1 is identified by the + sign, which tells us the step we need to take to get the correct answer to the problem, which is to add the values together. Encryption algorithms can be as simple as adding numbers or so complicated that they require a library of books to explain. The more complicated the algorithm, the more difficult it is (in theory) to “crack” the encryption and expose the original clear-text.

Encryption algorithms also require some value to be added along with the clear-text to generate encrypted data. The extra value is called an encryption “key”. The encryption key has two purposes. First, it allows the encryption algorithm to produce a (theoretically) unique value from the clear-text. Second, it allows people who have permission to read the encrypted data to do so, since knowing what the key is will allow us to decrypt, or reveal, the clear-text (more on this in a bit).

These three pieces put together are used to create a unique “Cipher-text” that will appear to be just gobbledygook to casual inspection. The cipher-text can be given to anyone and whatever it represents will be unknown until the data is “decrypted”. The process we go through to do this is fairly simple. We take the clear-text and the key, enter them as input in the encryption algorithm, and after the whole algorithm is completed with those values, we get a cipher-text. The below image shows this:

Encryption

 

Every encryption algorithm requires the ability to “reverse” or “decrypt” the data, so each one has a matching decryption algorithm. For instance, in order to get back to the original value of 1 after adding 1 to it to get 2, you would have to reverse that process by subtracting 1. In this case, we know what input (1) and algorithm (adding) was used to reach the value, so reversing it is easy. We just subtract whatever number we need to get back to the original value (1 in this case). In general, decryption algorithms take the key and cipher-text as input. Once everything in the algorithm is done, it should result in the original clear-text, as shown below:

Decryption

Simple Examples

Two early examples of encryption come to us from Greek and Roman history. The Skytale was a fairly ingenious encryption tool that used a wooden block of varying size and shape as its key. The clear-text was written (or burned) on a strip of leather that was wrapped around the key, with the message running along a single side of the block, which was usually hexagonal. The person who was supposed to receive the message had a key of similar shape and size. Wrapping the leather strip around the other key would allow the recipient to read the message. Using the above terminology, the clear-text is the message, the key is the block of wood, and the encryption algorithm is wrapping a strip of leather around the key and writing your message along one side with some fake gobbledygook on all the other sides. Unwrapping the leather from the block gives a cipher-text. Decryption is wrapping the strip around a similarly shaped and sized block, then looking at all the sides to see which one makes sense.

One of the more famous encryption algorithms is called the “Caesar Cipher” because it was developed by Julius Caesar during his military conquests to keep his enemies from intercepting his plans. You’ve probably used this algorithm before without knowing it if you ever enjoyed passing notes to friends in school and wanted to keep the other kids (or the teacher) from knowing what the message said if they “intercepted” it.

The Caesar Cipher is fairly simple, but works well for quick, easy encryption. All you do is pick a number between 1 and 25 (one less than the number of letters in whatever alphabet you’re using). When writing the message, you replace each letter with the letter that many places above or below it in the alphabet. For instance, “acbrownit” becomes “bdcspxmju” in a +1 Caesar Cipher. Decrypting the message is a simple matter of reversing that. For a Caesar Cipher, the key is the number you picked, the clear-text is the message you want to send, and the algorithm is shifting each of the clear-text’s letters by the key, outputting cipher-text.
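The paragraph above translates directly into a few lines of PowerShell. A minimal sketch (the function name is mine, and it only shifts lowercase letters):

```powershell
function Invoke-CaesarCipher {
    param([string]$Text, [int]$Shift)
    # Shift each lowercase letter by $Shift places, wrapping around at 'z'.
    -join ($Text.ToCharArray() | ForEach-Object {
        if ($_ -cmatch '[a-z]') {
            [char]((([int]$_ - [int][char]'a' + $Shift + 26) % 26) + [int][char]'a')
        } else { $_ }   # leave anything else (spaces, digits) untouched
    })
}

Invoke-CaesarCipher -Text "acbrownit" -Shift 1    # bdcspxmju
Invoke-CaesarCipher -Text "bdcspxmju" -Shift -1   # acbrownit (decryption)
```

Note that decryption is just encryption with the negated key, which is exactly the “reversing” described above.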

Key Exchange

For any encryption algorithm to function properly as a way to send messages, you must have a way to ensure that the recipient of the message has the correct key to decrypt it. Without a key, the recipient will be forced to “crack” the encryption to read the message. The process of ensuring that both the sender and recipient have the keys to encrypt and decrypt the message (respectively) is called a “key exchange”. This is often as simple as telling your friend what number to use with your Caesar cipher.

But what do you do if you need to exchange keys in a public place, surrounded by prying eyes (like, for instance, the Internet)? It becomes much more difficult to exchange keys when needed if there is significant distance between the sender and recipient, which means that the biggest weakness in any encryption standard is making sure that the recipient has the key they need to decrypt the message. If the key can be intercepted easily, the encryption system will fail.

The exchange method used will usually depend on the type of key required for decryption. For instance, in World War II, the German military used a mechanical encryption device called “Enigma” that was essentially a typewriter, but it changed the letters used when typing out a message with a mechanical series of rotors and wiring. If you pushed the I button on the keyboard, depending on the key used it would type a J or a P (or whatever). The keys were written down in a large notebook that was given directly to military commanders before they departed on their missions, and the index location of the key assigned to the message was set on the machine itself to encrypt and decrypt messages. The process of creating that key book and handing it to the commander was a key exchange. It was kept secure by ensuring that the only people who had the notebook of keys were people who were allowed to have them. Commanders were ordered to destroy their Enigma machines and accompanying notebooks if capture was likely. The Allies were eventually able to capture some of these machines, which gave a lot of incredibly smart people a chance to examine them and learn the algorithm used to encrypt data, and that ultimately rendered the Enigma machines useless.

Modern encryption systems utilize a number of different methods for exchanging keys. For example, there are VPN tunnels that utilize “hardware” keys. In these solutions, the networks on each side of the tunnel have a device that is connected to another over the internet through a VPN. Before a connection between each side can be established, a small electronic dongle (about the size of a flash drive) has to be plugged in on each side. The dongle contains the key used to encrypt and decrypt data. The key exchange in this scenario involves having an authorized individual take a key to each site and plug it in. This is a very low-tech kind of key exchange, but is extremely secure because, as long as the individual carrying the keys is trustworthy, we can be sure that no one else has a copy of the key.

There are many other kinds of key exchanges that can occur in an encryption system, but most people don’t realize when a key exchange is even happening on the Internet. Whenever you visit an encrypted website, there are actually two different kinds of key exchange that have to happen before the website is presented. Without the technology to perform those exchanges, entering your credit card to purchase the latest gadget online would be a much more complicated and annoying process.
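One of the exchanges that happens when you visit an encrypted website is typically a Diffie-Hellman negotiation, which lets two parties agree on a shared secret in full view of eavesdroppers. A toy sketch with tiny numbers (real deployments use values thousands of bits long):

```powershell
# Public values everyone can see, including an eavesdropper.
$p = 23; $g = 5              # modulus and generator

# Each side picks a private number it never transmits.
$a = 6                       # Alice's secret
$b = 15                      # Bob's secret

# Each side sends only g^secret mod p over the wire.
$A = [bigint]::ModPow($g, $a, $p)   # Alice sends 8
$B = [bigint]::ModPow($g, $b, $p)   # Bob sends 19

# Both sides now compute the same shared key from what they received.
$aliceSecret = [bigint]::ModPow($B, $a, $p)   # 2
$bobSecret   = [bigint]::ModPow($A, $b, $p)   # 2
```

The eavesdropper sees 8 and 19 but cannot feasibly recover the private numbers once the values are large enough, so the shared secret stays private.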

The Future of Encryption

Encryption techniques have come a long way since the early days of leather straps around wooden blocks. Encryption is also used in more ways, by more people, and for more purposes than you can imagine. Despite the improvements and technological developments that have come along, there is still no such thing as a perfect, unbreakable encryption technique. It’s always possible to decrypt data without permission. All we can do is ensure that the time it takes to “crack” the encryption is prohibitive. Cracking AES encryption by brute force, for example, could take longer than the universe has existed (based on the average computer’s processing power). The future, though, will require better, more ingenious encryption systems. Why? Because, theoretically, a sufficiently powerful quantum computer (which doesn’t exist yet) could crack even the strongest of today’s common encryption in almost no time at all. Rest assured, however, that someone (or a group of someones) will develop a better system that will be much more difficult for quantum computers to crack.

Summing it Up

Encryption is a part of our daily lives, whether we realize it or not. Understanding how it works is becoming more important as time goes on and the need to protect ourselves from prying eyes increases. Hopefully, after reading this article, you can see why encryption is important and what it really does for everyone.

 

Designing Infrastructure High Availability

IT people, for some reason, seem to have an affinity for designing solutions that use “cool” features, even when those features aren’t really necessary. This tendency sometimes leads to good solutions, but a lot of the time it creates solutions that fall short of requirements or leave IT infrastructure with significant shortcomings in any number of areas. Other times, “cool” features result in over-designed, unnecessarily expensive infrastructure designs.

The “cool” factor is probably most obvious in the realm of High Availability design. And yes, I do realize that with the cloud becoming more common and prevalent in IT there is less need to understand the key architectural decisions needed when designing HA, but there are still plenty of companies that refuse to use the cloud, and for good reason. Cloud solutions are not meant to be one size fits all solutions. They are one size fits most solutions.

High Availability (Also called “HA”) is a complex subject with a lot of variables involved. The complexity is due to the fact that there are multiple levels of HA that can be implemented, from light touch failover to globally replicated, multi-redundant, always on solutions.

High Availability Defined

HA is, put simply, any solution that allows an IT resource (Files, applications, etc) to be accessible at all times, regardless of hardware failure. In an HA designed infrastructure, your files are always available even if the server that normally stores those files breaks for any reason.

HA has also become much more common and inexpensive in recent years, so more people are demanding it. A decade ago, any level of HA involved costs that exponentially exceeded a normal, single-server solution. Today, HA is possible for as little as half again the cost of a single server (though, more often, the cost is essentially double the single-server cost).

Because of the cost reduction, many companies have started demanding it, and because of the cool factor, a lot of those companies have been spending way too much. Part of why this happens is due to the history of HA in IT.

HA History Lesson

Prior to the development of virtualization (the technology that allows multiple “virtual” servers to run on a single physical server), HA was prohibitively expensive and required massive storage arrays, large numbers of servers, and a whole lot of configuration. Then VMware introduced vMotion, which allowed a virtual server to be moved between physical hosts at the touch of a button, and paired it with VM-level High Availability, which restarts virtual machines on surviving hosts when hardware fails. This signaled a kind of renaissance in high availability because it allowed servers to survive a hardware failure for a fraction of the cost normally associated with HA. There is a lot more involved in this shift than just vMotion (SANs, cheaper high-speed internet, and similar advancements played a big part), but the shift began about the time vMotion was introduced.

Once companies started realizing they could have servers that were always running, regardless of hardware failures, an unexpected market for high-availability solutions popped up, and software developers started developing better techniques for HA in their products. Why would they care? Because there are a lot of situations where a server solution can stop working properly that aren’t related to hardware failures, and VMotion was only capable of handling HA in the event of hardware failures.

VM HA vs Software HA

The most common mistake I see people making in their HA designs is accepting the assumption that VM-level High Availability is enough. It is most definitely not. Take Exchange Server as an example. There are a number of problems that can occur in Exchange that will prevent users from accessing their email: log drives fill up, forcing databases to dismount; IIS can fail, cutting users off from their mailboxes; databases can become corrupted, shutting Exchange down completely until the database is repaired or restored from backup. VM HA does nothing to help in any of these situations.

This is where the Exchange Database Availability Group (DAG) comes into play. A DAG continuously replicates changes to mailbox databases to additional Exchange servers (as many as you want, but two or three is most common). With a DAG in place, any issue that would cause a database to dismount on a single Exchange server instead results in a failover: the database dismounts on one server and mounts on another almost immediately (within a few seconds or less).
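If you want to see what the DAG is doing under the hood, you can check replication health from the Exchange Management Shell. A minimal sketch; “DB01” is a hypothetical database name, so substitute one of your own:

```powershell
# Run from the Exchange Management Shell; "DB01" is a hypothetical
# database name. Status shows which copy is Mounted vs. Healthy (passive),
# and CopyQueueLength shows how far behind the replicas are.
Get-MailboxDatabaseCopyStatus -Identity DB01 |
    Format-Table Name, Status, CopyQueueLength, ContentIndexState
```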

The DAG solution alone, however, doesn’t provide full HA for Exchange, because IIS failures will still cause problems, and after a hardware failure you have to change DNS records to point clients at the surviving server. This is why a load balancer is a necessary part of a true HA solution.

Load Balancing

A Load Balancer is a network device that allows users to access two servers with a single IP address. Instead of having to choose which server you talk to, you just talk to the load balancer and it decides which server to direct you to automatically. The server that is chosen depends on a number of factors. Among those is, of course, how many people are already on each server, since the primary purpose of a load balancer is to balance the load between servers more or less equally.

More importantly, though, most load balancers are capable of performing health checks to make sure the servers are responding properly. If a server fails a health check for any reason (for instance, if one server’s not responding to HTTP requests), the load balancer will stop letting users talk to that server, effectively ensuring that whatever failure occurs on the first server doesn’t result in users being unable to access their data.
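The health-check behavior described above can be sketched in a few lines of Python. This is a toy model, not a real load balancer; the server names and the health-check function are made up for illustration:

```python
import itertools

def make_balancer(servers, health_check):
    """Return a picker that chooses the next healthy server, round-robin.

    `servers` is a list of server names; `health_check(server)` returns
    True when the server responds properly (in a real load balancer this
    would be an HTTP probe or a TCP connect, not a dictionary lookup).
    """
    cycle = itertools.cycle(servers)

    def pick():
        # Try each server at most once per call; skip any that fail
        # their health check, mimicking how a load balancer stops
        # sending users to a broken server.
        for _ in range(len(servers)):
            server = next(cycle)
            if health_check(server):
                return server
        raise RuntimeError("no healthy servers available")

    return pick

# Example: EXCH2 has failed its health check, so users only ever
# get sent to EXCH1.
healthy = {"EXCH1": True, "EXCH2": False}
pick = make_balancer(["EXCH1", "EXCH2"], lambda s: healthy[s])
assert pick() == "EXCH1"
assert pick() == "EXCH1"
```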

Costs vs. Benefits

Adding a load balancer to the mix, of course, increases the cost of a solution, but that cost is generally justified by the benefit such a solution provides. Unfortunately, many IT solutions are designed without weighing that cost against the benefit.

If an HA solution requires any kind of manual intervention to fix, the time required to notify IT staff and complete the switch varies heavily, anywhere from 5 minutes to several hours. From an availability perspective, even this small amount of time can have a huge impact, depending on how much money is assumed to be “lost” because of a failure. Here comes some math (and not just the trigonometry involved in this slight tangent).

Math!

The easiest way to determine whether a specific HA solution is worth implementing involves a few simple calculations. First, though, we have to make a couple of assumptions. Neither will be completely accurate, but they help determine whether an investment like HA is worth making (managers and CEOs, take note):

  1. A critical system that experiences downtime results in the company being completely unable to make money for the period of time that system is down.
  2. The amount of money lost during downtime is equal to the percentage of the year the system is down multiplied by the organization’s expected annual revenue.

For instance, if a company’s revenue is $1,000,000 annually, it makes an average of about $2 per minute (rounded up from $1.90), so you can assume that 5 minutes of downtime costs that company about $10 in gross revenue. The cheapest load balancers cost about $2,000 and last about 5 years, so the load balancer pays for itself if it saves you about 200 minutes of downtime a year (1,000 minutes over its life). That’s less than the amount of time most organizations spend updating a single server. With software HA in place, updates don’t cause downtime if done properly, so the cost of a load balancer is covered just by keeping Exchange running during updates (this isn’t possible with VM HA alone). That doesn’t cover the cost of the second server, of course (Exchange runs well on a low-end server, so about $5,000 for hardware and licenses is what it would cost). Now imagine the company makes $10,000,000 in revenue, or think about a company with revenue of several billion dollars a year. By these calculations, HA becomes a necessity very quickly.
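The break-even arithmetic above is simple enough to sketch in a few lines of Python, using the same illustrative assumptions ($1,000,000 in annual revenue, a $2,000 load balancer):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def revenue_per_minute(annual_revenue):
    """Average gross revenue earned per minute of uptime."""
    return annual_revenue / MINUTES_PER_YEAR

def break_even_downtime_minutes(solution_cost, annual_revenue):
    """Minutes of avoided downtime needed to pay for the HA investment."""
    return solution_cost / revenue_per_minute(annual_revenue)

# $1,000,000/year works out to about $1.90 per minute, so a $2,000
# load balancer pays for itself after roughly 1,050 minutes of avoided
# downtime over its life (~210 minutes per year across 5 years).
print(round(revenue_per_minute(1_000_000), 2))
print(round(break_even_downtime_minutes(2000, 1_000_000)))
```

At $10,000,000 in revenue the break-even point drops by a factor of ten, which is why the bigger the company, the faster HA justifies itself.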

VM HA vs Software HA Cost/Benefit

Realistically, the cost difference between VM HA and software HA is extremely low for most applications. Nearly everything Microsoft sells has HA capability baked in at very low cost, now that the clustering features are included in Windows Server 2012 Standard, so the cost of implementing software HA over VM HA is almost always justifiable. Thus, VM HA is rarely the correct solution. Mixing the two is also a bad idea: it requires twice the storage and network traffic to accomplish and provides absolutely no additional benefit, other than the fact that VM replication is kinda cool. Software HA requires two copies of the server to function, and each copy should run on a separate VM host to protect against hardware failure (separate hosts are required for VM HA as well, so the only added cost is OS licensing).

Know When to Use VM HA

Please note, though, that I am not saying you should never use VM HA; I am saying you shouldn’t use VM HA when software HA is available. If software HA isn’t possible (there are plenty of products out there with no built-in high-availability capabilities), VM HA is necessary and provides the highest level of availability for those products. Otherwise, use the software’s own HA capabilities, and you’ll save yourself a lot of headaches.

If You Have a Cisco Firewall, Disable this Feature NOW!!!

I don’t often have an opportunity to post a rant in an IT blog (and even less opportunity to write a click-bait headline), but here goes nothing! Cisco’s method of doing ESMTP packet inspection is INCREDIBLY STUPID and you should disable it immediately. Why do I say that? Because when Cisco ASAs (or whatever they call them these days) are configured to perform packet inspection on ESMTP traffic, the preferred configuration blocks the STARTTLS verb entirely.*

In other words, Cisco firewalls are designed to completely disable email encryption in order to inspect email traffic. This is such a stupid method of allowing packet inspection that I can barely find words to explain it. But find them I shall.

You might think that you want your Firewall to inspect your email traffic in order to block malicious email or prevent unauthorized access, or what have you. And in that context, I agree. It’s a useful thing. But knowing that the Firewall is not only inspecting the traffic but also preventing any kind of built in E-Mail encryption from running is rant food for me.

I can just imagine the people at Cisco one day sitting around coming up with ideas on how to implement ESMTP packet inspection. I can imagine some guy saying, “I know, we can design our firewall to function as a Smart Host, so it can receive encrypted emails from our customer’s email servers, decrypt them, inspect them, then communicate with the destination servers and attempt to encrypt the messages from there.” I can then imagine that guy being ignored by the rest of his coworkers once the lazy dork in the room says, “How about we just block the STARTTLS verb?”

Thank you, Cisco engineers, for using the absolute laziest possible method you could find to ensure that all email traffic gets inspected, thereby making certain that your packet inspection needs are met while preventing your clients from using TLS encryption over SMTP.

So, if you have a Cisco firewall and want the ability to, you know, encrypt email, make sure you disable ESMTP packet inspection. If that feature is turned on, all your email crosses the Internet completely unencrypted. Barracuda provides a lovely guide on disabling ESMTP inspection: https://www.barracuda.com/support/knowledgebase/50160000000IyefAAC
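For reference, turning the inspection off on the ASA itself is only a few lines in the CLI. This is a sketch that assumes the default global policy names; check your own running configuration before applying anything:

```
! Sketch only -- assumes the default policy-map and class names
! (policy-map global_policy / class inspection_default).
! Verify against "show running-config policy-map" first.
policy-map global_policy
 class inspection_default
  no inspect esmtp
```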

Cisco tells people to just disable the rule that blocks STARTTLS, but that wouldn’t really help their packet inspection much, since everything past the STARTTLS verb is encrypted. If it’s encrypted, it can’t be inspected, other than looking at the traffic and going, “Yep. That’s all gobbledygook. Must be encrypted.” So that’s a recommendation that doesn’t accomplish anything useful (it also requires a trip to the Cisco CLI, which is always great fun). This is why I say disable ESMTP packet inspection on your Cisco firewall entirely, because it’s making you less secure.

*For the uninitiated, ESMTP stands for Extended Simple Mail Transfer Protocol, and it’s what every mail server on the Internet today uses to exchange emails with each other. The STARTTLS verb is a command that initiates an encrypted email session, so blocking it prevents encrypted email exchanges entirely. This is a bad thing.
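To make the mechanics concrete: an SMTP client only attempts encryption if the server advertises STARTTLS in its EHLO response, which is exactly what the firewall strips out. A small Python sketch (the server responses here are made-up examples, and real inspection devices often mask the verb rather than delete the line):

```python
def advertised_extensions(ehlo_response):
    """Parse a multi-line EHLO response into the set of advertised
    ESMTP extensions. The first line (the server's domain) is skipped."""
    extensions = set()
    for line in ehlo_response.splitlines()[1:]:
        # Lines look like "250-STARTTLS" or "250 HELP" (the last line),
        # so the extension keyword starts at position 4.
        extensions.add(line[4:].split()[0].upper())
    return extensions

def can_encrypt(ehlo_response):
    """A client only issues STARTTLS if the server advertised it."""
    return "STARTTLS" in advertised_extensions(ehlo_response)

normal = "250-mail.example.com\r\n250-SIZE 52428800\r\n250-STARTTLS\r\n250 HELP"
# An inspecting firewall masks the verb, leaving something like "XXXXXXXX".
masked = "250-mail.example.com\r\n250-SIZE 52428800\r\n250-XXXXXXXX\r\n250 HELP"

assert can_encrypt(normal)
assert not can_encrypt(masked)  # client silently falls back to plaintext
```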


Protect Yourself from the WannaCry(pt) Ransomware

Well, this has been an exciting weekend for IT guys around the world. Two IT security folks can say that they saved the world, and a lot of people in IT had no weekend. The attack was shut down before it encrypted the world, but there’s a good chance it will simply be modified and launched again. So what can you do to keep your system and data from being compromised by this most recent malware attack? If you’ve already patched everything, if you don’t know whether you’re patched or vulnerable (or you just don’t want to deal with Windows updates right now), and you want to be absolutely positive that your computer won’t be affected, disable SMBv1! Like, seriously. You don’t need it. Unless you’re a Luddite.

There are some environments that may still need it (anyone still using Windows XP or Server 2003, antiquated management software, or PoS NAS devices), so if you have a Windows Server environment, run

Set-SmbServerConfiguration -AuditSmb1Access $true

in PowerShell for a bit and watch the SMBServer audit logs for failures.
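If you just want a quick yes/no on whether the SMBv1 server component is currently enabled, this check works on Server 2012/Windows 8 and later:

```powershell
# Shows True if the SMBv1 server component is still enabled.
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol
```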

To disable SMBv1 Server capabilities on your devices, do the following:

Server 2012 and Later

  1. Open Powershell (Click start and enter Powershell in the search bar to open it if you don’t know how to get to it)
  2. Type in this and hit Enter: Remove-WindowsFeature FS-SMB1
  3. Wait a bit for the uninstall process to finish.
  4. Voila! WannaCry can’t spread to this system anymore.

Windows 7, Server 2008/2008R2

  1. Open Powershell (Click start and enter Powershell in the search bar to open it if you don’t know how to get to it)
  2. Type in this (everything on the same line) and hit Enter: Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB1 -Type DWORD -Value 0 -Force
  3. Wait a bit for the command to complete.
  4. Voila! WannaCry can’t spread to this system anymore.

Windows 8.1/10

  1. Open Powershell (Click start and enter Powershell in the search bar to open it if you don’t know how to get to it)
  2. Type in this and hit Enter: Disable-WindowsOptionalFeature -Online -FeatureName smb1protocol
  3. Wait a bit for the uninstall process to finish.
  4. Voila! WannaCry can’t spread to this system anymore.

If you’re using Windows Vista…I am so, so sorry. But the Windows 7 instructions should still work for you.

If you still use Windows XP…stop it. And you’re just going to have to get the patch that MS released for this vulnerability.

An additional step you may want to take is to disable SMBv1’s *client* capabilities on your systems. Running the two commands below (each on its own line) will do this for you. This isn’t strictly necessary: the client can’t connect to other systems unless they support SMBv1, so if the SMBv1 server component is disabled as above, the SMBv1 client can’t do anything. But if you want to disable the client piece as well, enter the following commands:

sc.exe config lanmanworkstation depend= bowser/mrxsmb20/nsi
sc.exe config mrxsmb10 start= disabled