Hardening Microsoft Solutions from Attacks

Take a minute to go over this post from Dirk-jan Mollema. Go ahead and read it. I’ll wait…

Did you realize how scary that kind of attack is? As an IT guy who specializes in Exchange Server and loves studying security, that article scared the snot out of me. Based on my experience with organizations of all sizes, I can say with a good bit of authority that almost every Exchange organization out there is probably vulnerable to this attack. Why? Because Exchange is scary to a lot of people, and they don't really know how to harden it effectively. I also want to use the above attack to illustrate what I feel is the best strategy for hardening a Windows environment (and, really, any environment).

Take this opportunity to look at your Exchange deployment (if you haven't already moved to Exchange Online) and think about what you can do to protect your environment from this type of thing. In this post, though, I want to focus on Exchange Server and Windows Server hardening techniques in general rather than this particular vulnerability, because with any hardening effort you want to examine the network as a whole and work downward, not fixate on specific vulnerabilities. If you do the opposite, you will invariably end up playing a never-ending game of whack-a-mole, trying to stay ahead of a world full of malicious attackers and never really succeeding.

The techniques recommended in the Center for Internet Security’s (CIS) Critical Security Controls follow the top-down approach and represent one of the best guides for approaching information security at a technical level.

IT Hardening, a Quick Intro

Hardening is essentially everything you do to make an environment more secure. There are many different types of hardening: server hardening, network hardening, physical hardening, procedural hardening, and so on. They all seek to do the same thing, just in different ways.

If you take a close look at the actions the CIS controls recommend, you'll (hopefully) notice that they seek to secure as much of the environment as possible starting at control number 1, and each subsequent control has a narrower focus. By the time you've implemented control number 5, you will probably have an environment that will stand up against all but the most determined attacks, but you don't necessarily want to stop there.

The most important best practice in Information Security is the idea of “Defense in Depth”. This technique involves building layers of protection instead of relying on a single security measure to protect your environment. Having a firewall in place is only one “layer” of defense, and is regarded as the broadest level of protection you can have. Anti-virus tools, Intrusion Detection/Prevention tools, and hardening techniques represent additional layers of defense. You want as many layers as you can justify when measuring cost against risk (a much more difficult topic to cover).

Focusing on Windows

One argument you hear regularly in the IT industry is about which OS to build your IT around. The common claim is that Linux is a more secure OS than Windows, and this is true, up to a point. The reality is that they are simply different approaches to crafting an OS.

Linux tends to be more modular in its approach. If you implement a Linux environment, you would start with the core OS and add features as needed. This approach is good for limiting the attack surface from the start, but it also has a number of drawbacks.

The biggest drawback with Linux is that there is no central point of support and maintenance. There are lots of different solutions to the same problem, and no single source of support for all of them, so you either need very capable Linux specialists or you end up juggling lots of different vendors. This usually increases the cost of ongoing maintenance and support of the infrastructure. It's also not uncommon for Linux-based open source projects to be abandoned for whatever reason, leaving the organizations that adopted them without support, and once the one person who knows how to run the solution effectively leaves, you have a very serious problem.

Windows, on the other hand, is a fairly complete package of capabilities for most situations. Windows Server has built-in solutions that can do most of the work you will want in an IT environment, within some limits. For instance, Windows Server doesn't handle email well right out of the box. You have to also implement Exchange Server to have a truly effective method of handling email, but with that solution you also gain a very powerful collaboration tool that handles calendaring, contact management, task management, and other features that you can pick and choose from. Microsoft also invests a lot of time and effort in developing training tools and educational resources to ensure there is a large pool of talent to support its OS and other software. You don't often have to worry about finding someone who knows how to manage a Windows environment; there are boatloads of MCSAs and MCSEs looking for work almost all the time.

The major drawback with Windows is, of course, security. With all of the features built in, Windows has a very large attack surface compared to Linux. However, with careful planning and implementation, the attack surface of Windows can be decreased very effectively, such that there is virtually no difference between a standard Linux deployment and a hardened Windows environment.

Hardening Windows

Going back to the vulnerability outlined in the link at the start of this article, a single change to a Windows Active Directory environment will eliminate the vulnerability: LDAP signing and channel binding. These are techniques used to prevent man-in-the-middle attacks from succeeding. I explain the theory behind LDAP signing in more depth in my article on Understanding Digital Certificates. LDAP channel binding prevents clients from reusing portions of an authentication attempt against one DC when communicating with a different DC or client. Put simply, it "binds" a client to the entire authentication attempt by requiring the client to present proof that the authentication traffic it sends to the server isn't forged or copied from a different authentication attempt.

Essentially, LDAP signing configures all Active Directory Domain Controllers so that they verify they are actually talking to the system they think they are before doing anything. Implementing this is a little difficult, though, as it requires a Certificate Authority to generate and deploy digital certificates. But once digital certificates are installed on Domain Controllers and member servers in a Windows domain, LDAP signing is available (once systems are configured to require it) and becomes a very effective control that prevents a wide swath of attacks used to gain unauthorized access.
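If you want to see where a domain controller currently stands before enforcing this through Group Policy, the two registry values below are the documented switches behind the relevant policy settings ("Domain controller: LDAP server signing requirements" and the channel binding hardening). Here's a minimal, read-only sketch in Python, assuming the standard value names; treat it as an illustration, not a deployment tool, and make the actual change through Group Policy.

```python
# Check whether a domain controller is configured to require LDAP signing
# and enforce channel binding. Run locally on the DC with admin rights.
# Assumes the documented registry value names; in practice these are
# normally managed through Group Policy rather than edited directly.
import winreg

NTDS_PARAMS = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

def read_value(name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NTDS_PARAMS) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None  # value not set; the default (less strict) behavior applies

signing = read_value("LDAPServerIntegrity")        # 2 = require signing
binding = read_value("LdapEnforceChannelBinding")  # 2 = always enforce

print(f"LDAPServerIntegrity      : {signing} (want 2)")
print(f"LdapEnforceChannelBinding: {binding} (want 2)")
```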

LDAP signing alone won't prevent every possible attack in a Windows environment, though, which is why it's also essential to disable the features and roles each server isn't using and to tightly control remote access to servers. Windows Remote Desktop is one of the most frequently abused tools in Windows breaches, so limiting access to it is essential. As a rule of thumb, only allow system administrators to access critical Windows servers, and never, *never* allow Remote Desktop ports through your firewall.

Check your firewalls now: if port 3389 is allowed in from the Internet, it's only a matter of time before you get attacked and suffer severe consequences. Remote Desktop is *not* meant to give remote workers access over the Internet. Implement a secure VPN and enforce effective password policies if you want people to access your IT environment remotely.
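If you're not sure whether 3389 is exposed, a quick probe from a machine *outside* your network will tell you. This is a minimal sketch; the address is a placeholder you'd swap for your own public IP or hostname.

```python
# Quick check, from a machine outside your network, of whether TCP 3389
# (Remote Desktop) answers on a public IP. Replace the address below with
# your own public IP or hostname (placeholder shown).
import socket

PUBLIC_IP = "203.0.113.10"  # hypothetical example address

def port_open(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if port_open(PUBLIC_IP, 3389):
    print("Port 3389 is reachable from the Internet -- close it or move RDP behind a VPN.")
else:
    print("Port 3389 does not appear to be reachable from the Internet.")
```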

Once all unnecessary features and roles are removed or effectively controlled, build and maintain an effective patch management strategy. Microsoft regularly releases patches to close security holes before attackers begin exploiting them widely. Any patch management plan should make allowances for testing, approving, deploying, and installing security-related patches as soon as possible.

Next, focus on granting only the permissions workers need to accomplish their tasks. This is a difficult practice to implement, because it takes a lot of investigation to determine what permissions each user actually needs. Many environments grant users administrative permissions on company-owned equipment, which is a horrible, lazy practice that will get your environment owned by an attacker very quickly.

Once you have all of the above practices in place, you can start focusing on more specific vulnerabilities. For example, the attack in the link at the start of this post can be blocked by changing a single registry setting, but that change does nothing against future attacks that exploit vulnerabilities nobody knows about yet.

How Does the Cloud Play Into This?

One of the major benefits of using cloud solutions like Exchange Online is that most of the work outlined above has been done already. Microsoft’s cloud servers are stored in highly secure datacenters with many protections against unauthorized access (as opposed to the common tactic of putting the server in a closet in your office). Servers in cloud environments are hardened as much as possible before being put into operation. Security vulnerabilities are usually addressed across the entire cloud environment within hours of discovery, and the servers don’t function with an eye to backwards compatibility, so things like NTLM and SMBv1 are disabled on all systems.

That said, the cloud poses its own security challenges. You must accept the level of security put in place by the cloud provider and will have little to no control over systems in a way that will let you increase security. Furthermore, utilizing a Hybrid-cloud solution (which is extremely common and will be for years to come) presents unique problems involving the interface between two separately controlled environments. Poor security practices in the on-prem side of a hybrid deployment will make the cloud side just as insecure.

You must accept that your data sits in infrastructure that is reachable from the public Internet, and that, for the most part, you don't control where that data physically lives (this is slowly changing as cloud environments mature). In addition, you do not offload the responsibility for securing access to the data you store in the cloud. I'll cover this subject in another post, but for now, understand that while cloud environments build a lot of security into their solutions, you still have a responsibility to make security a priority.

Conclusion (I never can think of a good heading here)

Security in any IT environment is a major challenge that takes careful planning and effective management. Failing to consider security challenges when deploying new solutions will almost always come back to bite you. But, with the right strategy and guidance, it *is* possible to build a secure environment that can withstand the vast majority of attacks.

 

 


Enabling Message Encryption in Office 365

As I mentioned in an earlier post, email encryption is a sticky thing. In a perfect world, everyone would have opportunistic TLS enabled and all mail traffic would be automatically encrypted with STARTTLS, which is a fantastic method of securing messages "in transit". But some messages need to be encrypted "at rest" due to security policies or regulations. Unfortunately, researchers have recently discovered some key vulnerabilities in S/MIME and OpenPGP, the encryption systems that have been the most common ways of protecting messages while they sit in storage. The EFAIL vulnerabilities allow HTML-formatted messages to be exposed in cleartext by attacking a few weaknesses in how clients handle those formats.

Luckily, Office 365 subscribers can improve the confidentiality of their email by implementing a feature that is already available to all E3 and higher subscriptions or by purchasing licenses for Azure Information Protection and assigning them to users that plan to send messages with confidential information in them. The following is a short How-To on enabling the O365 Message Encryption (OME) system and setting up rules to encrypt messages.

The Steps

To enable and configure OME for secure message delivery, the following steps are necessary:

  1. Subscribe to Azure Information Protection
  2. Activate OME
  3. Create Rules to Encrypt Messages

Details are below.

Subscribe to Azure Information Protection

The Azure Information Protection suite is an add-on subscription for Office 365 that allows end users to perform a number of very useful functions with their email. It also integrates with SharePoint and OneDrive to act as a Data Loss Prevention tool. With AIP, users can flag messages or files so that they cannot be copied, forwarded, deleted, or subjected to a range of other common actions. For email, any message that carries a specific classification flag or meets specific criteria is encrypted and packaged into a locked HTML file that is sent to the recipient as an attachment. When the recipient receives the message, they have to register with Azure to be assigned a key to open the email. The key is tied to their email address, and once registered the user can open the HTML attachment, and any future ones, without having to log in to anything.

Again, if your users already have E3 or higher subscriptions, they don't need AIP as well. However, each user who will be sending messages with confidential information needs either an AIP license or an E3/E5 license to do so. To subscribe to AIP, perform these steps:

  1. Open the Admin portal for Office 365
  2. Go to the Subscriptions list
  3. Click on “Add a Subscription” in the upper right corner
  4. Scroll down to find Azure Information Protection
  5. Click the Buy Now option and follow the prompts or select the “Start Free Trial” option to get 25 licenses for 30 days to try it out before purchasing
  6. Wait about an hour for the service to be provisioned on your O365 tenant

Once provisioned, you can then move on to the next step in the process.

Activate OME

This part has changed very recently. Prior to early 2018, activating OME took a lot of PowerShell work and waiting for it to function properly. Microsoft has since streamlined the activation process to make it easier to work with. Here's what you have to do:

  1. Open the Settings option in the Admin Portal
  2. Select Services & Add-ins
  3. Find Azure Information Protection in the list of services and click on it
  4. Click the link that says, “Manage Microsoft Azure Information Protection settings” to open a new window
  5. Click on the Activate button under “Rights Management is not activated”
  6. Click Activate in the Window that pops up

Once this is done, you will be able to use the AIP client application to tag messages for rights management in Outlook. There will also be new buttons and options in Outlook Web App that allow you to encrypt messages. However, the simplest method for encrypting messages is to use an Exchange Online transport rule to encrypt them automatically.

Create Rules to Encrypt Messages

Once OME is activated, you'll be able to encrypt messages using just the built-in, default Rights Management tools, but as I mentioned, it's much easier to use specific criteria to do the encryption automatically. Follow these steps:

  1. Open the Exchange Online Admin Portal
  2. Go to Mail Flow
  3. Select Rules
  4. Click on the + and select “Add a New Rule”
  5. In the window that appears, click “More Options” to switch to the advanced rule system
  6. The rule you use can be anything from encrypting messages flagged as Confidential to using a tag in the subject line. My personal preference is to use subject/body tags. Make your rule look like the image below to use this technique: [Image: Encrypt Rule]

When set up properly, the end user will receive a message telling them that they have received a secure message. The email will have an HTML file attached that they can open up. They’ll need to register, but once registered they’ll be able to read the email without any other steps required and it will be protected from outside view.

 

 

Designing Infrastructure High Availability

IT people, for some reason, seem to have an affinity for designing solutions that use "cool" features, even when those features aren't really necessary. This tendency sometimes leads to good solutions, but a lot of the time it creates solutions that fall short of requirements or leave IT infrastructure with significant shortcomings in any number of areas. Other times, "cool" features result in over-designed, unnecessarily expensive infrastructure.

The "cool" factor is probably most obvious in the realm of High Availability design. And yes, I do realize that with the cloud becoming more prevalent in IT there is less need to understand the key architectural decisions involved in designing HA, but there are still plenty of companies that refuse to use the cloud, and for good reason. Cloud solutions are not one-size-fits-all solutions; they are one-size-fits-most solutions.

High Availability (Also called “HA”) is a complex subject with a lot of variables involved. The complexity is due to the fact that there are multiple levels of HA that can be implemented, from light touch failover to globally replicated, multi-redundant, always on solutions.

High Availability Defined

HA is, put simply, any solution that allows an IT resource (Files, applications, etc) to be accessible at all times, regardless of hardware failure. In an HA designed infrastructure, your files are always available even if the server that normally stores those files breaks for any reason.

HA has also become much more common and inexpensive in recent years, so more people are demanding it. A decade ago, any level of HA involved costs that exponentially exceeded a normal, single server solution. Today, HA is possible for as little as half the cost of a single server (Though, more often, the cost is essentially double the single server cost).

Because of the cost reduction, many companies have started demanding it, and because of the cool factor, a lot of those companies have been spending way too much. Part of why this happens is due to the history of HA in IT.

HA History Lesson

Prior to the development of virtualization (the technology that allows multiple "virtual" servers to run on a single physical server), HA was prohibitively expensive and required massive storage arrays, large numbers of servers, and a whole lot of configuration. Then VMware introduced a solution called vMotion that allowed a virtual server to be moved between physical hosts at the touch of a button (called VM High Availability). This signaled a kind of renaissance in High Availability because it allowed servers to survive a hardware failure for a fraction of the cost normally associated with HA. There was a lot more involved in this shift than just vMotion (SANs, cheaper high-speed Internet, and similar advancements played a big part), but the shift began about the time vMotion was introduced.

Once companies started realizing they could have servers that were always running, regardless of hardware failures, an unexpected market for high-availability solutions popped up, and software developers started developing better techniques for HA in their products. Why would they care? Because there are a lot of situations where a server solution can stop working properly that aren’t related to hardware failures, and VMotion was only capable of handling HA in the event of hardware failures.

VM HA vs Software HA

The most common mistake I see in HA designs is the assumption that VM-level High Availability is enough. It most definitely is not. Take Exchange Server as an example. There are a number of problems that can occur in Exchange that will prevent users from accessing their email. Log drives fill up, forcing databases to dismount. IIS can fail, preventing users from reaching their mailboxes. Databases can become corrupted, resulting in a complete shutdown of Exchange until the database is repaired or restored from backup. VM HA does nothing to help when these situations come up.

This is where the Exchange Database Availability Group (DAG) comes into play. A DAG continuously replicates changes to mailbox databases to additional Exchange servers (as many as you want, but two or three is most common). With a DAG in place, any issue that would dismount a database on a single Exchange server instead results in a failover: the database dismounts on one server and mounts on another almost immediately (within a few seconds or less).

The DAG alone, however, doesn't provide full HA for Exchange, because IIS failures will still cause problems, and if there is a hardware failure you have to change DNS records to point clients at the surviving server. This is why a load balancer is a necessary part of a true HA solution.

Load Balancing

A load balancer is a network device that lets users reach two or more servers through a single IP address. Instead of having to choose which server to talk to, you just talk to the load balancer and it decides which server to direct you to automatically. The choice depends on a number of factors, among them, of course, how many people are already on each server, since the primary purpose of a load balancer is to spread the load between servers more or less equally.

More importantly, though, most load balancers are capable of performing health checks to make sure the servers are responding properly. If a server fails a health check for any reason (for instance, if one server’s not responding to HTTP requests), the load balancer will stop letting users talk to that server, effectively ensuring that whatever failure occurs on the first server doesn’t result in users being unable to access their data.
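To make the health-check idea concrete, here's a toy sketch of the logic a load balancer runs, assuming the /owa/healthcheck.htm endpoint commonly probed on Exchange 2013 and later; the server names are hypothetical, and a real load balancer does this continuously rather than once.

```python
# Toy illustration of load balancer health-check logic: probe each backend
# over HTTPS and only hand out servers that respond. Backend names are
# hypothetical; the certificates must be trusted by the probing machine.
import urllib.request

BACKENDS = ["https://exch01.corp.example.com/owa/healthcheck.htm",
            "https://exch02.corp.example.com/owa/healthcheck.htm"]

def healthy(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False  # connection refused, timeout, TLS failure, non-2xx, etc.

available = [b for b in BACKENDS if healthy(b)]
print("Servers eligible to receive user traffic:", available or "none -- outage!")
```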

Costs vs. Benefits

Adding a load balancer to the mix, of course, increases the cost of a solution, but that cost is generally justified by the benefit such a solution provides. Unfortunately, many IT solutions fail to take this fact into account.

If an HA solution requires any kind of manual intervention to fix, the time required for notifying IT staff and getting the switch completed varies heavily, and can be anywhere from 5 minutes to several hours. From an availability perspective, even this small amount of time can have a huge impact, depending on how much money is assumed as “lost” because of a failure. Here comes some math (And not just the Trigonometry involved in this slight tangent).

Math!

The easiest way to determine whether a specific HA solution is worth implementing involves a few simple calculations. First, though, we have to make a couple of assumptions. Neither will be completely accurate, but they help determine whether an investment like HA is worth making (managers and CEOs take note):

  1. A critical system that experiences downtime results in the company being completely unable to make money for the period of time that system is down.
  2. The amount of money lost during downtime is equal to whatever percentage of a year the system is down times the amount of annual revenue the organization expects to make in a year.

For instance, if a company's revenue is $1,000,000 annually, it makes an average of $2 per minute (rounded up from $1.90), so you can assume that 5 minutes of downtime costs that company about $10 in gross revenue. The cheapest load balancers cost about $2,000 and last about 5 years, so the device pays for itself if it saves you roughly 1,000 minutes of downtime over its life, or about 200 minutes a year. That's less than the amount of time most organizations spend updating a single server. With software HA in place, updates don't cause downtime if done properly, so the cost of a load balancer is covered just by keeping Exchange running during updates (which isn't possible with VM HA alone). That doesn't cover the cost of the second server, of course (Exchange runs well on a low-end server, so about $5,000 for the server and licenses). Now imagine the company makes $10,000,000 in revenue, or think about a company with revenue of several billion dollars a year. By these calculations, HA becomes a necessity very quickly.
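If you want to plug in your own numbers, here's the same arithmetic as a small script; the figures are the assumptions from the text, not real quotes.

```python
# Back-of-the-envelope downtime math: revenue lost per minute of downtime,
# and how many minutes of avoided downtime pay for a load balancer over its
# expected life. Adjust the assumptions to match your own organization.
annual_revenue = 1_000_000        # dollars per year (assumption)
lb_cost = 2_000                   # load balancer purchase price (assumption)
lb_lifetime_years = 5             # expected service life (assumption)

minutes_per_year = 365 * 24 * 60  # 525,600
revenue_per_minute = annual_revenue / minutes_per_year

breakeven_minutes = lb_cost / revenue_per_minute
print(f"Revenue per minute: ${revenue_per_minute:.2f}")
print(f"Downtime that equals the load balancer's cost: "
      f"{breakeven_minutes:.0f} minutes total, or about "
      f"{breakeven_minutes / lb_lifetime_years:.0f} minutes per year over its life")
```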

VM HA vs Software HA Cost/Benefit

Realistically, the cost difference between VM HA and software HA is extremely low for most applications. Nearly everything Microsoft sells has HA capability baked in that can be implemented for very low cost now that the clustering features are included in Windows Server 2012 Standard, so the cost of implementing software HA over VM HA is almost always justifiable. VM HA is therefore rarely the correct solution on its own. And mixing the two is not a good idea: it requires twice the storage and network traffic and provides no additional benefit, other than the fact that VM replication is kinda cool. Software HA requires two copies of the server to function, and each copy should run on a separate host (separate hosts are required for VM HA as well, so the only added cost is the OS licensing) to protect against hardware failure of one VM host.

Know When to Use VM HA

Please note, though, that I am not saying you should never use VM HA. I am saying you shouldn’t use VM HA if software HA is available. You just need to know when to use it and when not to. If software HA isn’t possible (There are plenty of solutions out there with no High Availability capabilities), VM HA is necessary and provides the highest level of high availability for those products. Otherwise, use the software’s own HA capabilities, and you’ll save yourself from lots of headaches.

Do I need Anonymous Relay?

Problems

If you have managed an Exchange server in the past, you've probably been asked to set things up so printers, applications, and other devices can send email through the Exchange server. Most often, the solution to this request is to configure an anonymous open relay connector. The first article I ever wrote on this blog was on that very subject: http://wp.me/pUCB5-b. If you need to know what a relay is, go read that post.

What people don’t always do, though, is consider the question of whether or not they need an anonymous relay in Exchange. I didn’t really cover that subject in my first article, so I’ll cover it here.

When you Need an Open Relay

There are three factors that determine whether an organization needs an Open Relay. Anonymous relay is only required if you meet all three of the factors. Any other combination can be worked around without using anonymous relaying. I’ll explain how later, but for now, here are the three factors you need to meet:

  1. Printers, Scanners, and Applications don’t support changes to the SMTP port used.
  2. Printers, Scanners, and Applications don’t support SMTP Authentication.
  3. Your system needs to send mail to email addresses that don’t exist in your mail environment (That is to say, your system sends mail to email addresses that you don’t manage with your own mail server).

At this point, I feel it important to point out that Anonymous relays are inherently insecure. You can make them more secure by limiting access, but using an anonymous relay will always place a technical solution in the environment that is designed specifically to circumvent normal security measures. In other words, do so at your own informed risk, and only when it’s absolutely required.

The First Factor

If the system you want to send SMTP messages from doesn't allow you to use a port other than 25, you will need an open relay if the messages it sends are addressed to email addresses *outside your environment*. That emphasized part is an important distinction. The SMTP protocol defines port 25 as the default port for mail exchange, and it's the port every email server uses to receive mail from other systems. Based on modern security practice, that means mail sent to port 25 is only accepted if the recipient exists on that mail server. So if you are using the abc.com mail server to send messages to bob@xyz.com, you will need a relay server to do it, or the mail will be rejected because relaying is (hopefully) not allowed.

The Second Factor

If your system doesn't allow you to specify a username and password in its SMTP configuration, then it has to send messages anonymously. For our purposes, an "anonymous" user is one that hasn't logged in with a username and password. SMTP servers usually talk to one another anonymously; in fact, anonymous SMTP is necessary for mail exchange between organizations to function, but by default an SMTP server will only accept anonymous messages destined for email addresses it manages. So if abc.com receives a message destined for bob@abc.com, it will accept it; however, it will reject messages to jim@xyz.com *unless* the SMTP session is authenticated. In other words, if bob@abc.com wants to send jim@xyz.com a message, he can open an SMTP session with the abc.com mail server, enter his username and password, and send the message. The abc.com server then contacts the xyz.com mail server and delivers it, and no username and password are needed for that hop, because the xyz.com server knows who jim@xyz.com is and simply accepts the message and delivers it to the correct mailbox. So if you are able to set a username and password on the system you need to send mail from, you don't need anonymous relay.

The Third Factor

Most of the time, applications and devices will only need to send messages to people who have mailboxes in your environment, but there are plenty of occasions where applications or devices that send email out need to be able to send mail to people *outside* the environment. If you don’t need to send to “external recipients” as these users are called, you can use the Direct Send method outlined in the solutions below.

Solutions

As promised, here are the solutions you can use *other* than anonymous relay to meet the needs of your application if it doesn’t meet *all three* of the deciding factors.

Authenticated Relay (Factor #3 applies)

In Exchange server, there is a default “Receive Connector” that accepts all messages sent by Authenticated users on port 587, so if your system allows you to set a username and password and change the port, you don’t need anonymous relaying. Just configure the system to use your Exchange Hub Transport server (or CAS in 2013) on port 587, and it should work fine, even if your requirements meet the last deciding factor of sending mail to external recipients.
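For illustration, this is roughly what that authenticated client submission looks like when a device or script does it, sketched with Python's smtplib; the server name, credentials, and addresses are hypothetical placeholders.

```python
# Authenticated relay on the client submission port: connect on 587, upgrade
# to TLS, authenticate, and send to an external recipient. Hostname,
# credentials, and addresses below are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "scanner@abc.com"
msg["To"] = "jim@xyz.com"            # external recipient -- relaying requires auth
msg["Subject"] = "Scanned document"
msg.set_content("See attached scan.")

with smtplib.SMTP("mail.abc.com", 587) as smtp:
    smtp.starttls()                           # encrypt the session
    smtp.login("abc\\scanner", "P@ssw0rd!")   # authenticate as a mailbox-enabled account
    smtp.send_message(msg)
```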

Direct Send (Factor #2 applies and/or #3 doesn’t apply)

If your system needs to send messages to abc.com users using the abc.com mail server, you don’t need to relay or authenticate. Just configure your system to send mail directly to the mail server. The “direct send” method uses SMTP as if it were a mail server talking to another mail server, so it works without additional work. Just note that if you have a spam filter that enforces SPF or blocks messages from addresses in your environment to addresses in your environment, it’s likely these messages will get blocked, so make allowances as needed.
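A direct-send sketch looks almost identical, except there's no authentication and the recipient must be a mailbox your own server hosts; the names and addresses are again placeholders.

```python
# "Direct send": deliver straight to your own mail server on port 25 with no
# authentication. This only works for recipients that server hosts itself.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "copier@abc.com"
msg["To"] = "bob@abc.com"            # internal recipient -- no relaying involved
msg["Subject"] = "Your scan is ready"
msg.set_content("Pick it up at the copier in the break room.")

with smtplib.SMTP("mail.abc.com", 25) as smtp:
    smtp.send_message(msg)           # anonymous session; accepted because bob@abc.com is local
```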

Authenticated Mail on Port 25 (Only factor #1 applies)

If the system doesn't allow you to change the port number it uses, but does allow you to authenticate, you can make a small change to Exchange to allow the system to work. This is done by opening the Default Receive connector (aka the Default Front End receive connector on Exchange 2013 and later) and adding Exchange Users to the permission groups on the Security tab, as shown with the red X in the screenshot below:

[Image: default-front-end-enabled]

Once this setting is changed, restart the Transport service on the server and you can then perform authenticated relaying on port 25.

Conclusion

If you do find you need to use an anonymous relay, by all means do so, with careful consideration, but always be conscious of the fact that it isn't always necessary. As always, comments and questions on this article and others are welcome, and I'll do my best to answer as soon as possible.

Configuring Exchange Autodiscover

As of the release of Outlook 2016, Microsoft has chosen to begin requiring the use of Autodiscover for setting up Outlook clients to communicate with the server. This means that, moving forward, Autodiscover will need to be properly configured.

This page contains some information and some links to other posts I’ve written on the subject of Autodiscover. This page is currently under construction as I write additional posts to assist in configuring and troubleshooting Autodiscover.

Initial Configuration

The initial configuration of Autodiscover requires that you have a digital certificate properly installed on your Exchange server. If you run the Client Access and Mailbox roles on separate servers (a split-role design Microsoft no longer recommends for versions after 2010), the certificate should be installed on the CAS server.

Certificate Requirements

The certificate should have a Common Name that matches the name your users will be using to access Exchange. If you want users to use mail.domain.com to access the Exchange server, make sure that is the Common Name when creating the certificate.

The optimal configuration for Exchange also requires that you include autodiscover.domain.com as a Subject Alternative Name (SAN). You should also make sure there is an A or CNAME record in DNS to point users to autodiscover.domain.com. SAN certificates can cost significantly more than a normal certificate, but there are ways to bypass the need for one (see the "Single Name Certificates" section below for more info).

A wildcard certificate is usable with Exchange and can serve as a less expensive way to cover a large number of names. A wildcard can also be used on other servers that share the same DNS domain as the Exchange server. However, wildcards are technically not as secure as a SAN certificate, since they can be used with any hostname in the domain, and they only cover a single level of names (*.domain.com will not match host.sub.domain.com).

The certificate you install on Exchange should also be obtained from a reputable Third Party Certificate Authority. The following Certificate Authorities can generate Certificates that are trusted by the majority of web browsers and operating systems:

Comodo PositiveSSL
DigiCert
Entrust
GoDaddy
Network Solutions

Also note that when generating your Certificate Signing Request (CSR), you should use a sufficient key length. The recommended minimum is currently 2048 bits; 1024-bit and shorter keys may not be accepted by Certificate Authorities.

Exchange Server Configuration

Autodiscover will determine the settings to apply to client machines by reading the Exchange Server configuration. This means the Exchange Service URLs must be properly configured. If they are not configured to use a name that exists on the Certificate in use, Outlook will generate a Certificate Error.

I will write a post on this subject in the future. For now, you can get this information easily from a Google Search.

DNS configuration

There are two names Autodiscover will check automatically when searching for configuration information. These are based on the user's email domain (the portion of the email address after the @). For bob@acbrownit.com, the email domain is acbrownit.com. The locations checked (under the /autodiscover/autodiscover.xml path) are:

domain.com
autodiscover.domain.com

As long as one of the above names exists on the certificate and has an A or CNAME record in DNS pointing to a CAS server, Autodiscover will work properly. The instructions for this vary depending on the DNS provider you use.
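A quick way to sanity-check both lookups for your own domain is to request the autodiscover.xml path the way a client would; Outlook actually sends an authenticated POST, so a 401 or 403 response here is still a good sign, because it proves the name resolves and the HTTPS endpoint answers. A small sketch (the domain is an example):

```python
# Reachability check of the two standard Autodiscover lookups for a domain.
# A 401/403 response still indicates the endpoint is alive and answering TLS.
import urllib.request
import urllib.error

domain = "acbrownit.com"   # example from the text; substitute your own
for host in (domain, f"autodiscover.{domain}"):
    url = f"https://{host}/autodiscover/autodiscover.xml"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(url, "->", resp.status)
    except urllib.error.HTTPError as err:
        print(url, "->", err.code)          # 401/403 means the endpoint is alive
    except OSError as err:
        print(url, "-> unreachable:", err)  # DNS, connection, or certificate failure
```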

Other Configurations

There are some situations that may cause Autodiscover to fail even when the above requirements are all met. The following situations require additional setup and configuration.

Domain Joined Computers

Computers that are joined to the same Active Directory domain as the Exchange server will attempt to read the Autodiscover Service Connection Point (SCP) from Active Directory before trying the normal URLs listed above. In this situation, you will typically need to configure the SCP to point to one of the names on your certificate.
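If you're curious what domain-joined Outlook clients actually find, you can read the SCP objects straight out of Active Directory. Here's a hedged sketch using the third-party ldap3 package; the domain controller, credentials, and search base are hypothetical placeholders, and the keyword GUID is the one Exchange stamps on Autodiscover SCP objects.

```python
# Read the Autodiscover Service Connection Point(s) that domain-joined
# Outlook clients consult first. Requires the third-party ldap3 package.
# Server name, credentials, and search base are hypothetical placeholders.
from ldap3 import Server, Connection, NTLM, SUBTREE

AUTODISCOVER_GUID = "77378F46-2C66-4aa9-A6A6-3E7A48B19596"  # Autodiscover SCP keyword

server = Server("dc01.corp.example.com")
with Connection(server, user="CORP\\admin", password="P@ssw0rd!",
                authentication=NTLM, auto_bind=True) as conn:
    conn.search(
        search_base="CN=Configuration,DC=corp,DC=example,DC=com",
        search_filter=f"(&(objectClass=serviceConnectionPoint)(keywords={AUTODISCOVER_GUID}))",
        search_scope=SUBTREE,
        attributes=["serviceBindingInformation"],
    )
    for entry in conn.entries:
        # serviceBindingInformation holds the Autodiscover URL clients will use
        print(entry.entry_dn, entry.serviceBindingInformation)
```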

Go to this post to find instructions for configuring the SCP:

Exchange Autodiscover Part 2 – The Active Directory SCP

Single Name Certificates

If you do not want to spend the additional money required for a SAN or wildcard certificate, you can use a Service Locator (SRV) record in DNS to define the location of Autodiscover. An SRV record lets you point the Autodiscover service at any hostname you want, so you can create one that bypasses the need for a SAN or wildcard certificate.
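After you create the record, you can verify it resolves the way a client would. A small sketch using the third-party dnspython package (the domain is an example):

```python
# Verify the Autodiscover SRV record the way a client would resolve it.
# Requires the third-party dnspython package; the domain is a placeholder.
import dns.resolver

domain = "acbrownit.com"
answers = dns.resolver.resolve(f"_autodiscover._tcp.{domain}", "SRV")
for rr in answers:
    # Expect something like: priority 0, weight 0, port 443, target mail.acbrownit.com.
    print(rr.priority, rr.weight, rr.port, rr.target)
```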

Go to this post to find instructions for configuring a SRV record:

Internal DNS and Exchange Autodiscover