Designing Infrastructure High Availability

IT people, for some reason, seem to have an affinity for designing solutions that use “cool” features, even when those features aren’t really necessary. This tendency sometimes leads to good solutions, but more often it creates solutions that fall short of requirements or leave IT infrastructure with significant shortcomings. Other times, “cool” features result in over-designed, unnecessarily expensive infrastructure.

The “cool” factor is probably most obvious in the realm of High Availability design. And yes, I do realize that with the cloud becoming more prevalent in IT there is less need to understand the key architectural decisions involved in designing HA, but there are still plenty of companies that refuse to use the cloud, and for good reason: cloud solutions are not one-size-fits-all solutions. They are one-size-fits-most solutions.

High Availability (also called “HA”) is a complex subject with a lot of variables involved, because there are multiple levels of HA that can be implemented, from light-touch failover to globally replicated, multi-redundant, always-on solutions.

High Availability Defined

HA is, put simply, any solution that allows an IT resource (files, applications, etc.) to remain accessible at all times, regardless of hardware failure. In an HA-designed infrastructure, your files are still available even if the server that normally stores them breaks for any reason.

HA has also become much more common and inexpensive in recent years, so more people are demanding it. A decade ago, any level of HA involved costs that far exceeded those of a normal, single-server solution. Today, HA can add as little as half the cost of a single server (though, more often, it essentially doubles the single-server cost).

Because of the cost reduction, many companies have started demanding HA, and because of the cool factor, a lot of them have been spending far too much on it. Part of why this happens comes down to the history of HA in IT.

HA History Lesson

Prior to the development of virtualization (the technology that allows multiple “virtual” servers to run on a single physical server), HA was prohibitively expensive and required massive storage arrays, large numbers of servers, and a whole lot of configuration. Then VMware introduced vMotion, which allowed a running virtual server to be moved between physical hosts at the touch of a button, and built VM High Availability on the same foundation. This signaled a kind of renaissance in High Availability, because it allowed servers to survive a hardware failure for a fraction of the cost normally associated with HA. There is a lot more involved in this shift than just vMotion (SANs, cheaper high-speed internet, and similar advancements played a big part), but the shift began about the time vMotion was introduced.

Once companies realized they could have servers that kept running regardless of hardware failures, an unexpected market for high-availability solutions popped up, and software developers started building better HA techniques into their products. Why would they care? Because there are plenty of situations where a server solution stops working properly for reasons that have nothing to do with hardware, and VM HA only handles hardware failures.

VM HA vs Software HA

The most common mistake I see people making in their HA designs is accepting the assumption that VM-level High Availability is enough. It most definitely is not. Take Exchange Server as an example. There are a number of problems that can occur in Exchange that will prevent users from accessing their email: log drives fill up, forcing databases to dismount; IIS can fail, preventing users from reaching their mailboxes; databases can become corrupted, shutting Exchange down completely until the database is repaired or restored from backup. VM HA does nothing to help when these situations come up.

This is where the Exchange Database Availability Group (DAG) comes into play. A DAG constantly replicates changes to Mailbox Databases to additional Exchange servers (as many as you want, but two or three is most common). With a DAG in place, any issue that would cause a database to dismount on a single Exchange server instead results in a failover: the database dismounts on one server and mounts on another within a few seconds.

The DAG alone, however, doesn’t provide full HA for Exchange, because IIS failures will still cause problems, and after a hardware failure you have to change DNS records to point clients to the correct server. This is why a load balancer is a necessary part of a true HA solution.

Load Balancing

A load balancer is a network device that lets users reach two or more servers through a single IP address. Instead of choosing which server to talk to, you just talk to the load balancer, and it automatically decides which server to direct you to. Which server it picks depends on a number of factors, among them, of course, how many people are already on each server, since the primary purpose of a load balancer is to balance load between servers more or less equally.

More importantly, though, most load balancers can perform health checks to make sure the servers are responding properly. If a server fails a health check for any reason (for instance, if it stops responding to HTTP requests), the load balancer stops sending users to that server, ensuring that whatever failure occurred on the first server doesn’t leave users unable to access their data.
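To make the health-check idea concrete, here’s a minimal Python sketch of the logic a load balancer runs. The server names and the /healthcheck path are made-up placeholders, and real appliances do all of this far more robustly:

```python
import itertools
import urllib.request

# Hypothetical back-end pool sitting behind a single virtual IP.
SERVERS = ["http://mail1.internal", "http://mail2.internal"]
_rotation = itertools.cycle(SERVERS)

def is_healthy(server, timeout=2):
    """A server passes if it answers an HTTP probe within the timeout."""
    try:
        with urllib.request.urlopen(server + "/healthcheck", timeout=timeout) as r:
            return r.status == 200
    except OSError:
        return False

def pick_server():
    """Walk the rotation until a healthy server turns up (round robin)."""
    for _ in range(len(SERVERS)):
        candidate = next(_rotation)
        if is_healthy(candidate):
            return candidate
    raise RuntimeError("no healthy servers in the pool")
```

The key point is in pick_server: a server that fails its probe is simply skipped, so users never get handed to a broken back end.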

Costs vs. Benefits

Adding a load balancer to the mix, of course, increases the cost of a solution, but that cost is generally justified by the benefit the solution provides. Unfortunately, many IT designs fail to take this trade-off into account.

If an HA solution requires any kind of manual intervention, the time needed to notify IT staff and complete the switch varies heavily, anywhere from 5 minutes to several hours. From an availability perspective, even a small outage can have a huge impact, depending on how much money is assumed “lost” because of a failure. Here comes some math (and not just the trigonometry involved in this slight tangent).

Math!

The easiest way to determine whether a specific HA solution is worth implementing involves a few simple calculations. First, though, we have to make a couple of assumptions. Neither will be completely accurate, but they are meant to help determine whether an investment like HA is worth making (managers and CEOs, take note):

  1. A critical system that experiences downtime results in the company being completely unable to make money for the period of time that system is down.
  2. The amount of money lost to downtime equals the percentage of the year the system is down multiplied by the organization’s expected annual revenue.

For instance, if a company’s revenue is $1,000,000 annually, it makes an average of $2 per minute (rounded up from $1.90), so you can assume that 5 minutes of downtime costs the company about $10 in gross revenue. The cheapest load balancers cost about $2,000 and last about 5 years, so the device pays for itself by preventing roughly 1,000 minutes of downtime over its lifetime, or about 200 minutes a year. That’s less than the amount of time many organizations spend updating a single server each year. With software HA in place, updates don’t cause downtime if done properly, so the cost of a load balancer is covered just by keeping Exchange running during updates (which isn’t possible with VM HA alone). That doesn’t cover the cost of the second server, of course (Exchange runs well on a low-end server, so figure about $5,000 for the server and licenses). Now imagine the company makes $10,000,000 in revenue, or think about a company with revenue of several billion dollars a year. By these calculations, HA very quickly becomes a necessity.
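If you want to run the numbers for your own environment, a quick back-of-the-envelope script makes the break-even point easy to see. The revenue, price, and lifetime figures below are just the example values from this post:

```python
# A rough break-even sketch for an HA investment, using the assumptions above.
ANNUAL_REVENUE = 1_000_000          # dollars per year
MINUTES_PER_YEAR = 365 * 24 * 60    # 525,600

revenue_per_minute = ANNUAL_REVENUE / MINUTES_PER_YEAR   # ~$1.90
lb_cost = 2_000                     # load balancer purchase price
lb_lifetime_years = 5

# Total downtime (in minutes) the device must prevent to pay for itself.
break_even_minutes = lb_cost / revenue_per_minute        # ~1,050 minutes
per_year = break_even_minutes / lb_lifetime_years        # ~210 minutes/year

print(f"${revenue_per_minute:.2f}/min; break-even at {break_even_minutes:.0f} "
      f"minutes of prevented downtime ({per_year:.0f} per year)")
```

Swap in your own revenue and hardware quotes and the same three lines of arithmetic tell you whether the purchase defends itself.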

VM HA vs Software HA Cost/Benefit

Realistically, the cost difference between VM HA and software HA is extremely low for most applications. Everything Microsoft sells has HA capability baked in that can be implemented at very low cost now that the clustering features are included in Windows Server 2012 Standard, so the added cost of implementing software HA over VM HA is almost always justifiable. Thus, VM HA is rarely the correct solution. And mixing the two is not a good idea. Why? Because it requires twice the storage and network traffic and provides absolutely no additional benefit, other than the fact that VM replication is kinda cool. Software HA requires two copies of the server to function, and each copy should run on a separate VM host to protect against hardware failure of one host (separate hosts are required for VM HA as well, so the only increased cost is OS licensing).

Know When to Use VM HA

Please note, though, that I am not saying you should never use VM HA; I am saying you shouldn’t use it when software HA is available. You just need to know when to use it and when not to. If software HA isn’t possible (there are plenty of solutions out there with no built-in high-availability capabilities), VM HA is necessary and provides the highest level of availability those products can get. Otherwise, use the software’s own HA capabilities, and you’ll save yourself a lot of headaches.


How Does Exchange Autodiscover Work?

Autodiscover has been one of the more annoying features of Exchange since Microsoft reworked its email architecture in Exchange 2007. Every version since has implemented it, and Microsoft may eventually require its use in versions following Exchange 2016. So what is Autodiscover, and how does it work?

Some Background

Prior to Exchange 2007, Outlook clients had to be configured manually: you had to know the name of the Exchange server and use it to configure Outlook. Further, if you wanted to use some of the features introduced in Exchange 2003 SP2 and Outlook 2003 (and newer), you had to manually configure a lot of settings that didn’t really make sense. In particular, Outlook Anywhere requires configuration settings that can be confusing to the uninitiated. This got even more complicated in larger environments that had numerous Exchange servers but could not yet afford the expense of a load balancer.

The need to manually configure email clients resulted in a lot of administrative overhead, since Exchange admins and help desk staff were often required to configure Outlook for users or provide a detailed list of instructions so people could do it themselves. As most IT people are well aware, even the best set of instructions can be broken by some users, so an IT guy almost always ended up spending a lot of time configuring Outlook to talk to Exchange.

Microsoft was not deaf to the cries of the overworked IT people out there, and with Exchange 2007 and Outlook 2007 it introduced Autodiscover.

Automation Salvation!

Autodiscover greatly simplifies the process of configuring Outlook to communicate with an Exchange server: it automatically determines which Exchange server the user’s mailbox is on and configures Outlook to talk to that server. This makes setup much easier for end users, since the only things they need to know are their email address, AD user name, and password.

Not Complete Salvation, Though

Unfortunately, Autodiscover didn’t completely dispense with the need to get things configured properly. It really just shifted the configuration burden from users to the Exchange administrator, since the Exchange environment has to be set up correctly for Autodiscover to work. If it isn’t, Autodiscover will fail in remarkably annoying ways.

How it Works

To make Autodiscover work without user interaction, Microsoft developed a method for telling Outlook where to look for the configuration info it needs. They decided this was most easily accomplished with a few DNS lookups based on the one piece of information everyone has to enter regardless of their technical know-how: the email address. Since they could only rely on getting an email address from users, they knew they’d need a default pattern for the lookups; otherwise, client machines would need at least a little configuration before working right. Here’s the pattern they decided on:

  1. Look in Active Directory to see if there is information about Exchange.
  2. Look at the root domain of the user’s email address for configuration info.
  3. Look at autodiscover.emaildomain.com for configuration info.
  4. Look in the domain’s DNS for an SRV record pointing to a host that holds configuration info.

Note here that Outlook will only move from one step to the next if it doesn’t find configuration information.

For each step above, Outlook is looking for a specific file, or a URL that points to that file. The file in question is autodiscover.xml. By default, it lives at https://<exchangeservername>/autodiscover/autodiscover.xml. Each step in the check tries to find that file, and if it’s not there, Outlook moves on. If, by the end of step 4, Outlook has found nothing, you’ll get an error saying an encrypted connection was unavailable, and you’ll probably start tearing your hair out in frustration.
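Here’s a minimal Python sketch of the DNS-name part of that hunt (steps 2 and 3). Step 1, the Active Directory SCP, only applies to domain-joined machines, and step 4 needs an SRV query, so both are omitted; the email address is just this post’s example:

```python
import urllib.error
import urllib.request

def autodiscover_candidates(email):
    """Candidate Autodiscover URLs, in the order described above."""
    domain = email.split("@", 1)[1]
    return [
        f"https://{domain}/autodiscover/autodiscover.xml",
        f"https://autodiscover.{domain}/autodiscover/autodiscover.xml",
    ]

def find_autodiscover(email):
    for url in autodiscover_candidates(email):
        try:
            # A real client POSTs a request body; a plain GET is enough
            # to demonstrate the move-on-if-not-found behavior.
            urllib.request.urlopen(url, timeout=5)
            return url
        except urllib.error.HTTPError:
            return url   # server answered (e.g. 401 asking for credentials)
        except urllib.error.URLError:
            continue     # nothing at this name, try the next candidate
    return None  # all steps exhausted -> the "encrypted connection" error

print(find_autodiscover("bob@acbrownit.com"))
```

Note that an HTTP 401 counts as success here: it means the endpoint exists and is asking for credentials, which is exactly what Outlook expects.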

What’s in the File?

Autodiscover.xml is a dynamically generated XML file that contains the information Outlook needs to access the mailbox entered in the configuration wizard. When Outlook makes an Autodiscover request to Exchange, the following things happen (sketched in code after the list):

  1. Exchange requests credentials to access the mailbox.
  2. If the credentials are valid, Exchange checks the AD attributes of the mailbox that has the requested email address.
  3. Exchange determines which server the mailbox is located on. This information is usually stored in the msExchHomeServerName attribute of the associated AD account.
  4. Exchange examines its topology data to determine the best Client Access Server (CAS) to use for access to the mailbox. The best CAS is determined using the following checks:
    1. Determine the AD Site the mailbox’s server is located in.
    2. Determine whether a CAS is assigned to that AD Site.
    3. If no CAS is in the site, use the site topology to determine the next closest AD Site.
    4. Repeat step 3 until a CAS is found.
  5. Exchange returns all the necessary configuration data stored in AD for that server. The data returned includes:
    1. The Autodiscover service Uniform Resource Identifier (URI), which is assigned to each CAS and stored by Outlook for future use.
    2. The CAS server name.
    3. The Exchange Web Services (EWS) URL.
    4. Outlook Anywhere configuration data, if enabled.
  6. Outlook takes the returned information and punches it into the necessary spots in the user’s profile.
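If you’re curious what that exchange looks like on the wire, here’s a small Python sketch of the POX (“plain old XML”) Autodiscover request body and a loose parse of the response. The schema URIs are the standard ones from the published POX Autodiscover format; treat the response parsing as illustrative, not exhaustive:

```python
import xml.etree.ElementTree as ET

REQUEST_SCHEMA = ("http://schemas.microsoft.com/exchange/autodiscover/"
                  "outlook/requestschema/2006")
RESPONSE_SCHEMA = ("http://schemas.microsoft.com/exchange/autodiscover/"
                   "outlook/responseschema/2006a")

def pox_request_body(email):
    """The XML body an Outlook-style client POSTs to autodiscover.xml."""
    return f"""<Autodiscover xmlns="{REQUEST_SCHEMA}">
  <Request>
    <EMailAddress>{email}</EMailAddress>
    <AcceptableResponseSchema>{RESPONSE_SCHEMA}</AcceptableResponseSchema>
  </Request>
</Autodiscover>"""

def parse_response(xml_text):
    """Pull a couple of the returned settings (step 5 above) out of the
    response. Element names follow the 2006a response schema."""
    ns = {"o": RESPONSE_SCHEMA}
    proto = ET.fromstring(xml_text).find(".//o:Protocol", ns)
    return {
        "server": proto.findtext("o:Server", namespaces=ns),
        "ews_url": proto.findtext("o:EwsUrl", namespaces=ns),
    }
```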

Necessary Configuration

Because all of this happens automatically, it is imperative that the Exchange server be configured to return the right information. If the information returned to Autodiscover is incorrect, either the mailbox connection will fail or you’ll get a certificate error. To get Autodiscover configured correctly, parts 5.1, 5.3, and 5.4 of the above process must be set. This can be done with a script, in the Exchange Management Shell, or in the Exchange management UI (EMC for 2007 and 2010, ECP/EAC for 2013/2016).

Importance of Autodiscover

In the future, you may be forced to use Autodiscover for Outlook configuration. For now, you can avoid it by manually configuring each Outlook client. Before doing that, though, consider the cost of touching each and every computer to configure Outlook properly. Manual configuration takes 5 minutes or more per computer, and with Exchange 2013 it can take longer, since you also have to enter the Outlook Anywhere settings, which are more complex than just a server name, username, and password. Multiply that by the number of computers in your environment, add the time it takes to actually get to each computer, boot it up, and open the Outlook settings, and the time spent configuring Outlook manually adds up very quickly. At 5 minutes each, 100 systems means more than 8 hours of work.

In contrast, it usually takes only 10 to 20 minutes to configure Autodiscover. When it’s working properly, all you have to do is tell your users their email address, and Outlook does the rest. With a little more configuration or some GPO work, you don’t even have to tell them that!

When you look at the vast difference in the amount of time you spend configuring Outlook, using Autodiscover stops being a question of preference and becomes an absolutely necessary part of any efficient Exchange-based IT environment. Learning to configure it properly is, therefore, one of the most important jobs of an Exchange administrator.

Configuring Exchange Autodiscover

As of the release of Outlook 2016, Microsoft has chosen to begin requiring the use of Autodiscover for setting up Outlook clients to communicate with the server. This means that, moving forward, Autodiscover will need to be properly configured.

This page contains some information and links to other posts I’ve written on the subject of Autodiscover. It is currently under construction as I write additional posts to assist in configuring and troubleshooting Autodiscover.

Initial Configuration

The initial configuration of Autodiscover requires a digital certificate properly installed on your Exchange server. If your CAS and Mailbox roles are on separate servers (a split-role configuration, which Microsoft no longer recommends for versions after 2010), the certificate should be installed on the CAS server.

Certificate Requirements

The certificate should have a Common Name that matches the name your users will use to access Exchange. If you want users to use mail.domain.com to reach the Exchange server, make sure that is the Common Name when creating the certificate.

The optimal configuration for Exchange also requires that you include autodiscover.domain.com as a Subject Alternative Name (SAN). Make sure there is also an A or CNAME record in DNS pointing users to autodiscover.domain.com. SAN certificates can cost significantly more than a normal certificate, but there are ways to bypass the need for one (see the section on single-name certificates below).

A wildcard certificate is usable with Exchange and can serve as a less expensive way to cover a large number of names. A wildcard can also be used on other servers that use the same DNS domain as the Exchange server. However, wildcards are technically less secure than a SAN cert, since they work with any host name in the domain. In addition, they do not cover sub-domains (a certificate for *.domain.com will not match a name like host.sub.domain.com).

The certificate you install on Exchange should also be obtained from a reputable third-party Certificate Authority. The following CAs generate certificates trusted by the majority of web browsers and operating systems:

Comodo PositiveSSL
DigiCert
Entrust
GoDaddy
Network Solutions

Also note that when generating your Certificate Signing Request (CSR), you should use a sufficient key length. Currently, the recommended minimum is 2048 bits; CAs may no longer accept keys of 1024 bits or shorter.
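For illustration, here is one way to generate such a CSR programmatically, using Python’s third-party cryptography package. The host names are placeholders from the examples above; swap in your own, and keep the resulting private key safe:

```python
# pip install cryptography
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 2048-bit RSA key: the recommended minimum mentioned above.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    # Common Name: the name users will type to reach Exchange.
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "mail.domain.com"),
    ]))
    # Subject Alternative Name covering the Autodiscover host.
    .add_extension(
        x509.SubjectAlternativeName([
            x509.DNSName("mail.domain.com"),
            x509.DNSName("autodiscover.domain.com"),
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

# The CSR goes to the CA; the private key stays on the server.
with open("mail.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```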

Exchange Server Configuration

Autodiscover determines the settings to apply to client machines by reading the Exchange server configuration, which means the Exchange service URLs must be properly configured. If they are not set to a name that exists on the certificate in use, Outlook will generate a certificate error.

I will write a post on this subject in the future. For now, you can get this information easily from a Google Search.

DNS configuration

There are two different names Autodiscover will use when searching for configuration information. These are based on the user’s email domain (the portion of the email address after the @). For bob@acbrownit.com, the email domain is acbrownit.com. The names checked automatically are:

domain.com
autodiscover.domain.com

As long as one of the above names exists on the certificate and has an A or CNAME record in DNS pointing to a CAS server, Autodiscover will work properly. The exact instructions vary depending on the DNS provider you use.
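A quick way to sanity-check the DNS side is to look up both names. This sketch uses the third-party dnspython package, and the domain is just this post’s example:

```python
# pip install dnspython
import dns.resolver

def check_autodiscover_dns(email_domain):
    """Confirm at least one of the two names Autodiscover tries resolves."""
    for name in (email_domain, f"autodiscover.{email_domain}"):
        try:
            answers = dns.resolver.resolve(name, "A")  # CNAMEs are followed
            print(name, "->", ", ".join(a.to_text() for a in answers))
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print(name, "-> no A/CNAME record found")

check_autodiscover_dns("acbrownit.com")
```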

Other Configurations

There are some situations that may cause Autodiscover to fail even when the above requirements are all met. The following situations require additional setup and configuration.

Domain Joined Computers

Computers that are members of the same Active Directory domain as the Exchange server will attempt to read the Autodiscover Service Connection Point (SCP) from Active Directory before trying the normal URLs listed above. In this situation, you will typically need to configure the SCP to point to one of the names on your certificate.

Go to this post to find instructions for configuring the SCP:

Exchange Autodiscover Part 2 – The Active Directory SCP

Single Name Certificates

If you do not want to spend the additional money required for a SAN or wildcard certificate, you can use a Service Locator (SRV) record in DNS to define the location of the Autodiscover service. An SRV record lets you point Autodiscover at any host name you want, so you can use it to bypass the need for a SAN or wildcard certificate.
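As a quick check that the record is in place, the lookup below (again using dnspython) queries the well-known SRV name Outlook uses, _autodiscover._tcp.<email domain>; the domain shown is just this post’s example:

```python
# pip install dnspython
import dns.resolver

# A typical record looks like:
#   _autodiscover._tcp.domain.com.  SRV  0 0 443  mail.domain.com.
# where mail.domain.com is a name that IS on your certificate.
try:
    for rec in dns.resolver.resolve("_autodiscover._tcp.acbrownit.com", "SRV"):
        print(f"port {rec.port} -> {rec.target}")
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("No Autodiscover SRV record published")
```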

Go to this post to find instructions for configuring a SRV record:

Internal DNS and Exchange Autodiscover

 

Theory: Understanding Digital Certificates

One of the more annoying tasks in administering a publicly available website that uses HTTPS (Outlook Web App, for example) is certificate generation and installation. Anyone who has ordered a certificate from a major Certificate Authority (CA) like GoDaddy or Network Solutions has dealt with the process. It goes something like this:

  1. Generate a Certificate Signing Request (CSR) on the web server
  2. Upload the CSR to a CA in a Certificate Request
  3. Wait for the CA to respond to your Request with a set of files
  4. Download the “Response” files
  5. Import the files on the Web Server

Once that gets done, you will (usually) have a valid certificate that allows the server to use SSL or TLS to encrypt communications with client machines.

Even after performing this process, you may be wondering *why* you have to go through the whole annoying mess.

What is a Certificate?

Put simply, a certificate is just a big hunk of data generated to provide clients and servers with the tools needed to properly encrypt and decrypt data. The most important of these tools are called “keys”.

Just like a door key, a certificate’s keys are used both to prevent unauthorized access and to allow authorized access: they are what encrypt the data and then decrypt it.

What Keys?

When you go through the certificate generation process above, you are generating two different, but mathematically related, keys: a public key and a private key. The public key is used to encrypt data, but cannot decrypt the data it encrypts. The private key can decrypt data encrypted by the public key, and it must be kept as secure as possible.

If you were to look at a certificate file, you would be able to see the public key without any issues. You could even take the public key and use it to encrypt some data. However, the only way that data can be decrypted is with the private key. The private key is stored securely and can only be accessed with specific authorization. If you gain physical access to a web server (or remote access to the GUI or command line of the OS running it), you may be able to reach the private key, but that requires a level of access unavailable to the majority of people.
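Here is a small Python demonstration of that asymmetry, using the third-party cryptography package. This is the principle at work, simplified; real TLS sessions use the key pair for key exchange and signatures rather than bulk encryption:

```python
# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # this half can be handed to anyone

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt...
ciphertext = public_key.encrypt(b"secret message", oaep)

# ...but only the private key can get the plaintext back.
print(private_key.decrypt(ciphertext, oaep))   # b'secret message'

# The public key cannot decrypt what it encrypted: the public-key
# object simply has no decrypt() method at all.
```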

Now, you may be wondering, “If the data can be encrypted by anyone, how do I ensure the data getting to the client machine is actually coming from the original server?” (And if you weren’t already, you probably are now.) To make sure the sending server is authentic, we have to authenticate it. That’s where another part of the digital certificate comes into play.

Authentication

When you generate a certificate, you have to enter a Common Name for it. The Common Name should match the name that is used to access the server. If anyone attempts to access the server using a name other than the one defined by the Common Name, a certificate error is usually displayed. For an example of a certificate error, see below:

[Screenshot: browser certificate name-mismatch error for www.whatever.com]

This particular error was generated by editing my desktop’s hosts file to point http://www.whatever.com at a website running SSL (Facebook, if you’re wondering). Had I accessed the site using a URL that matched the Common Name on the certificate, I would not have received this error. In essence, I attempted to access the site using a name that can’t be authenticated, so I can’t be sure the data I’m getting hasn’t been intercepted, decrypted, modified, and re-encrypted.
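You can reproduce the same check programmatically. In this sketch, Python’s ssl module plays the role of the browser: the handshake succeeds when the name matches the certificate and fails with a verification error when it doesn’t (www.whatever.com here stands in for the mismatched name from the screenshot):

```python
import socket
import ssl

ctx = ssl.create_default_context()

def try_handshake(host, claimed_name):
    """Connect to host but validate the certificate against claimed_name."""
    with socket.create_connection((host, 443), timeout=5) as sock:
        # server_hostname is the name checked against the certificate.
        with ctx.wrap_socket(sock, server_hostname=claimed_name) as tls:
            print("OK:", tls.getpeercert()["subject"])

try_handshake("www.facebook.com", "www.facebook.com")  # matches: succeeds
try:
    try_handshake("www.facebook.com", "www.whatever.com")  # mismatch
except ssl.SSLCertVerificationError as err:
    print("Certificate error:", err)
```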

“So, if anyone can encrypt data using the public key, which anyone can get hold of, can’t I just create a certificate with the same Common Name and use it to authenticate a rogue server?” Well, no, because there’s another part of the certificate that keeps this from happening.

The Circle of Trust

The primary role of the Certificate Authority in the certificate generation process is to verify that certificates are only issued for servers that actually belong to the entity requesting them. In addition, the CA has to be trusted by the computer accessing the information encrypted with the certificate.

Most operating systems and web browsers are configured to trust a number of CAs right out of the box. Companies like GoDaddy and Network Solutions have agreements with Microsoft, Apple, and other OS developers to have their root CA certificates trusted by default, and every certificate obtained from a CA is signed (directly or through an intermediate) by that root CA certificate.

Now, it’s possible to create your own CA and generate certificates from it, but because your CA’s root certificate is not automatically trusted by client computers around the world, those computers will throw a certificate error whenever they encounter a certificate generated by your CA, at least until your root CA certificate is installed on them as a trusted root.

This trust relationship makes it extremely difficult to interject a rogue system between a client and a server to read their traffic: the rogue system can’t have a copy of the certificate (and private key) normally used to access the server, so any client that tries to talk to the server with the rogue system in the mix will squawk like a parrot about a server authentication problem.

Review

So the breakdown of the whole certificate system is this:

  • Certificates hold the keys used to encrypt and decrypt data.
  • Certificates are used to verify that the source of encrypted data is authentic.
  • Certificates should be generated by trusted Certificate Authorities.

Now, it’s entirely possible to have a web server ignore some part of the system (a server can use a self-signed certificate, for instance), but such a server will be significantly less secure than one that follows the rules.

Most modern methods of accessing web servers make it extremely inconvenient to reach a server that doesn’t follow the rules, which means your users end up wasting a little bit of time whenever they access your site. In the IT business, time is a very scarce commodity, and every little bit wasted can add up to giant problems. So make sure your server is following the rules!