Enabling Message Encryption in Office 365

As I mentioned in an earlier post, email encryption is a sticky thing. In a perfect world, everyone would have Opportunistic TLS enabled and all mail traffic would be automatically encrypted with STARTTLS, which is a fantastic method of ensuring security of messages “in transit”. But some messages need to be encrypted “at rest” due to security policies or regulations. Unfortunately, researchers have recently discovered some key vulnerabilities in S/MIME and OpenPGP, the encryption systems that have long been the most common ways of protecting messages while they sit in storage. The EFAIL vulnerabilities allow HTML-formatted messages to be exposed in cleartext by attacking a few weaknesses in those systems.

Luckily, Office 365 subscribers can improve the confidentiality of their email with a feature that is already included in all E3 and higher subscriptions, or by purchasing Azure Information Protection licenses and assigning them to the users who plan to send messages with confidential information in them. The following is a short how-to on enabling the Office 365 Message Encryption (OME) system and setting up rules to encrypt messages.

The Steps

To enable and configure OME for secure message delivery, the following steps are necessary:

  1. Subscribe to Azure Information Protection
  2. Activate OME
  3. Create Rules to Encrypt Messages

Details are below.

Subscribe to Azure Information Protection

The Azure Information Protection suite is an add-on subscription for Office 365 that allows end users to perform a number of very useful functions with their email. It also integrates with SharePoint and OneDrive to act as a Data Loss Prevention tool. With AIP, users can flag messages or files so that they cannot be copied, forwarded, deleted, or subjected to a range of other common actions. For email, all messages that have specific classification flags or that meet specific requirements are encrypted and packaged into a locked HTML file that is sent to the recipient as an attachment. When the recipient receives the message, they have to register with Azure to be assigned a key to open the email. The key is tied to their email address, and once registered, the user can open the HTML attachment and any future ones without having to log in to anything.

Again, if your users have E3 or higher subscriptions assigned, they don’t need AIP as well. However, each user who will be sending messages with confidential information in them will need either an AIP license or an E3/E5 license to do so. To subscribe to AIP, perform these steps:

  1. Open the Admin portal for Office 365
  2. Go to the Subscriptions list
  3. Click on “Add a Subscription” in the upper right corner
  4. Scroll down to find Azure Information Protection
  5. Click the Buy Now option and follow the prompts or select the “Start Free Trial” option to get 25 licenses for 30 days to try it out before purchasing
  6. Wait about an hour for the service to be provisioned on your O365 tenant

Once provisioned, you can then move on to the next step in the process.

Activate OME

This part has changed very recently. Prior to early 2018, activating OME took a lot of PowerShell work and some waiting before it would function properly. Microsoft has since streamlined the activation method to make it easier to work with. Here’s what you have to do:

  1. Open the Settings option in the Admin Portal
  2. Select Services & Add-ins
  3. Find Azure Information Protection in the list of services and click on it
  4. Click the link that says, “Manage Microsoft Azure Information Protection settings” to open a new window
  5. Click on the Activate button under “Rights Management is not activated”
  6. Click Activate in the Window that pops up

Once this is done, you will be able to use AIP’s client application to tag messages for rights management in Outlook. There will also be new buttons and options in Outlook Web App that allow you to encrypt messages. However, the simplest method for encrypting messages is to use an Exchange Online transport rule to encrypt them automatically.
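For reference, since I mentioned the old PowerShell method above: the legacy activation could be driven entirely from a shell. The sketch below is roughly what that looked like, assuming the AADRM PowerShell module is installed and your tenant is in North America (the key sharing URL is region specific, so check Microsoft’s documentation for yours):

# Activate Azure Rights Management for the tenant (AADRM module)
Connect-AadrmService
Enable-Aadrm

# Then, in an Exchange Online session, point IRM at Azure RMS
# (this URL is the North America endpoint; other regions use different ones)
Set-IRMConfiguration -RMSOnlineKeySharingLocation "https://sp-rms.na.aadrm.com/TenantManagement/ServicePartner.svc"
Import-RMSTrustedPublishingDomain -RMSOnline -Name "RMS Online"
Set-IRMConfiguration -InternalLicensingEnabled $true

# Verify that IRM is working for a test sender
Test-IRMConfiguration -Sender someuser@yourdomain.com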

Create Rules to Encrypt Messages

Once OME is activated, you’ll be able to encrypt messages using just the built-in, default rights management tools, but as I mentioned, it’s much easier to use specific criteria to do the encryption automatically. Follow these steps:

  1. Open the Exchange Online Admin Portal
  2. Go to Mail Flow
  3. Select Rules
  4. Click on the + and select “Add a New Rule”
  5. In the window that appears, click “More Options” to switch to the advanced rule system
  6. The rule you use can be anything from encrypting messages flagged as Confidential to using a tag in the subject line. My personal preference is to use subject/body tags. Make your rule look like the image below to use this technique (there’s also a PowerShell sketch below): [Image: Encrypt Rule]
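If you’d rather script the rule than click through the UI, here’s a minimal sketch in Exchange Online PowerShell. The rule name and the “[Encrypt]” tag are placeholders of my own choosing, not anything OME requires:

# Encrypt any message whose subject or body contains the tag
New-TransportRule -Name "Encrypt Tagged Messages" -SubjectOrBodyContainsWords "[Encrypt]" -ApplyOME $true

On tenants with the newer OME capabilities, the same effect can be had with -ApplyRightsProtectionTemplate "Encrypt" instead of -ApplyOME.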

When set up properly, the end user will receive a message telling them that they have received a secure message. The email will have an HTML file attached that they can open up. They’ll need to register, but once registered they’ll be able to read the email without any other steps required and it will be protected from outside view.



Do I need Anonymous Relay?

Problems

If you have managed an Exchange server in the past, you’ve probably been asked to set things up so that printers, applications, and other devices can send email through the Exchange server. Most often, the solution to this request is to configure an Anonymous Open Relay connector. The first article I ever wrote on this blog was on that very subject: http://wp.me/pUCB5-b . If you need to know what a relay is, go read that post.

What people don’t always do, though, is consider the question of whether or not they need an anonymous relay in Exchange. I didn’t really cover that subject in my first article, so I’ll cover it here.

When you Need an Open Relay

There are three factors that determine whether an organization needs an Open Relay. Anonymous relay is only required if you meet all three of the factors. Any other combination can be worked around without using anonymous relaying. I’ll explain how later, but for now, here are the three factors you need to meet:

  1. Printers, Scanners, and Applications don’t support changes to the SMTP port used.
  2. Printers, Scanners, and Applications don’t support SMTP Authentication.
  3. Your system needs to send mail to email addresses that don’t exist in your mail environment (That is to say, your system sends mail to email addresses that you don’t manage with your own mail server).

At this point, I feel it important to point out that Anonymous relays are inherently insecure. You can make them more secure by limiting access, but using an anonymous relay will always place a technical solution in the environment that is designed specifically to circumvent normal security measures. In other words, do so at your own informed risk, and only when it’s absolutely required.

The First Factor

If the system you want to send SMTP messages from doesn’t allow you to send email over a port other than 25, you will need an open relay if the messages the system sends are addressed to email addresses outside your environment. The bold stuff there is an important distinction. The SMTP protocol defines port 25 as the “default” port for mail exchange, and that’s the port every email server uses to receive email from other systems. Based on modern security concerns, that means mail sent to port 25 is only accepted if the recipient exists on the receiving mail server. So if you are using the abc.com mail server to send messages to bob@xyz.com, you will need to use a relay server to do it, or the mail will be rejected because relaying is (hopefully) not allowed.

The Second Factor

If your system doesn’t allow you to specify a username and password in its SMTP configuration, then you will have to send messages anonymously. For our purposes, an “anonymous” user is a user that hasn’t logged in with a username and password. SMTP servers usually talk to one another anonymously, so it’s actually common for anonymous SMTP access to be valid, and it’s necessary for mail exchange to function. But SMTP servers will, by default, only accept messages that are destined for email addresses they manage. So if abc.com receives a message destined for bob@abc.com, it will accept it. However, abc.com will reject messages to jim@xyz.com, *unless* the SMTP session is authenticated. In other words, if bob@abc.com wants to send jim@xyz.com a message, he can open an SMTP session with the abc.com mail server, enter his username and password, and send the message. If he does that, the SMTP server will accept the message, then contact the xyz.com mail server and deliver it. The abc.com mail server doesn’t need a username and password to do this, because the xyz.com mail server knows who jim@xyz.com is, so it just accepts the message and delivers it to the correct mailbox. So if you are able to set a username and password in the system you need to send mail with, you don’t need anonymous relay.

The Third Factor

Most of the time, applications and devices will only need to send messages to people who have mailboxes in your environment, but there are plenty of occasions where applications or devices that send email out need to be able to send mail to people *outside* the environment. If you don’t need to send to “external recipients” as these users are called, you can use the Direct Send method outlined in the solutions below.

Solutions

As promised, here are the solutions you can use *other* than anonymous relay to meet the needs of your application if it doesn’t meet *all three* of the deciding factors.

Authenticated Relay (Factor #3 applies)

In Exchange Server, there is a default “Receive Connector” that accepts all messages sent by authenticated users on port 587, so if your system allows you to set a username and password and change the port, you don’t need anonymous relaying. Just configure the system to use your Exchange Hub Transport server (or CAS in 2013) on port 587, and it should work fine, even if your requirements meet the last deciding factor of sending mail to external recipients.
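A quick way to prove the connector works, sketched in PowerShell (the server name and addresses are placeholders; use the same credentials your device or application will use):

# Authenticated submission on port 587
$cred = Get-Credential
Send-MailMessage -SmtpServer "exchange01.abc.com" -Port 587 -UseSsl -Credential $cred -From "scanner@abc.com" -To "jim@xyz.com" -Subject "Relay test" -Body "Authenticated relay on 587 works."

If that message arrives at the external address, your device will work on 587 too.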

Direct Send (Factor #2 applies and/or #3 doesn’t apply)

If your system needs to send messages to abc.com users using the abc.com mail server, you don’t need to relay or authenticate. Just configure your system to send mail directly to the mail server. The “direct send” method uses SMTP as if it were one mail server talking to another, so it works without additional configuration. Just note that if you have a spam filter that enforces SPF or blocks messages sent from addresses in your environment to addresses in your environment, it’s likely these messages will get blocked, so make allowances as needed.
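The same kind of test works for direct send; note that there are no credentials, and the recipient has to be an address the server manages (names are placeholders again):

# Direct send: port 25, anonymous, internal recipients only
Send-MailMessage -SmtpServer "exchange01.abc.com" -Port 25 -From "scanner@abc.com" -To "bob@abc.com" -Subject "Direct send test" -Body "No relay needed for internal recipients."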

Authenticated Mail on Port 25 (Only factor #1 applies)

If the system doesn’t allow you to change the port number it uses, but does allow you to authenticate, you can make a small change to Exchange to allow the system to work. This is done by opening the Default Receive connector (AKA the Default Front End receive connector on Exchange 2013 and later) and adding Exchange Users to the Permission settings on the Security tab, as marked with the red X below:

[Image: Default Front End receive connector security settings, with the Exchange Users permission group enabled]

Once this setting is changed, restart the Transport service on the server and you can then perform authenticated relaying on port 25.
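If you prefer the shell to the GUI for this, the equivalent change is to add the ExchangeUsers permission group to the connector. A sketch, assuming Exchange 2013 or later and a server named EX01; list your connector’s current groups first, because the set below reflects the usual defaults and yours may differ:

# Check what's currently enabled
Get-ReceiveConnector "EX01\Default Frontend EX01" | Format-List PermissionGroups

# Add ExchangeUsers while keeping the existing groups
Set-ReceiveConnector "EX01\Default Frontend EX01" -PermissionGroups AnonymousUsers, ExchangeServers, ExchangeLegacyServers, ExchangeUsers

# Restart transport so the change takes effect
Restart-Service MSExchangeTransport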

Conclusion

If you do find you need to use an anonymous relay, by all means, do so with careful consideration, but always be conscious of the fact that it isn’t always necessary. As always, comments and questions on this article and others are welcome, and I’ll do my best to answer as soon as possible.

How Does Exchange Autodiscover Work?

Autodiscover has been one of the more annoying features of Exchange since Microsoft reworked its email solution in Exchange 2007. All versions since have implemented it, and Microsoft may eventually require its use in versions following Exchange 2016. So what is Autodiscover and how does it work?

Some Background

Prior to Exchange 2007, Outlook clients had to be configured manually. In order to do that, you had to know the name of the Exchange server and use it to configure Outlook. Further, if you wanted to use some of the features introduced in Exchange 2003 SP2 and Outlook 2003 (and newer), you had to manually configure a lot of settings that didn’t really make sense. In particular, Outlook Anywhere requires configuration settings that might be a little confusing to the uninitiated. This got even more complicated in larger environments that had numerous Exchange servers but could not yet afford the expense of a load balancer.

The need to manually configure email clients resulted in a lot of administrative overhead, since Exchange admins and Help Desk staff were often required to configure Outlook for users or provide a detailed list of instructions for people to do it themselves. As most IT people are well aware, even the best set of instructions can be broken by some people, and an IT guy was almost always required to spend a lot of time configuring Outlook to talk to Exchange.

Microsoft was not deaf to the cries of the overworked IT people out there, and with Exchange 2007 and Outlook 2007 introduced Autodiscover.

Automation Salvation!

Autodiscover greatly simplifies the process of configuring Outlook to communicate with an Exchange server by automatically determining which Exchange server the user’s Mailbox is on and configuring Outlook to communicate with that server. This makes it much easier for end users to configure Outlook, since the only things they need to know are their email address, AD user name, and password.

Not Complete Salvation, Though

Unfortunately, Autodiscover didn’t completely dispense with the need to get things configured properly. It really only shifted the configuration burden from Users over to the Exchange administrator, since the Exchange environment has to be properly configured to work with Autodiscover. If things aren’t set up properly, Autodiscover will fail annoyingly.

How it Works

In order to make Autodiscover work without user interaction, Microsoft developed a method for telling Outlook where to look for the configuration info it needs. They decided this was most easily accomplished with a few DNS lookups based on the one piece of information everyone had to put in regardless of their technical know-how: the email address. Since they could only rely on getting an email address from users, they knew they’d need a default pattern for the lookups; otherwise the client machines would need at least a little configuration before working right. Here’s the pattern they decided on:

  1. Look in Active Directory to see if there is information about Exchange
  2. Look at the root domain of the user’s email address for configuration info
  3. Look at autodiscover.emaildomain.com for configuration info
  4. Look at the domain’s root DNS to see if any SRV records exist that point to a host that holds configuration info.

Note here that Outlook will only move from one step to the next if it doesn’t find configuration information.

For each step above, Outlook is looking for a specific file or a URL that points it to that file. The file in question is autodiscover.xml. By default, this is kept at https://<exchangeservername>/autodiscover/autodiscover.xml. Each step in the check process tries to find that file, and if it’s not there, Outlook moves on. If, by the end of step 4, Outlook finds nothing, you’ll get an error saying that an encrypted connection was unavailable, and you’ll probably start tearing your hair out in frustration.
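When troubleshooting, you can walk the DNS-based parts of that chain by hand. A rough PowerShell sketch (the domain is a placeholder; step 1, the Active Directory lookup, is covered in another post):

$domain = "emaildomain.com"   # the domain part of the user's email address

# Step 2: does the root domain answer for Autodiscover? (A 401 error here is
# actually a good sign; it means the endpoint exists and wants credentials.)
Invoke-WebRequest -Uri "https://$domain/autodiscover/autodiscover.xml" -UseBasicParsing

# Step 3: is there an autodiscover host record?
Resolve-DnsName "autodiscover.$domain" -Type A

# Step 4: is there an SRV record?
Resolve-DnsName "_autodiscover._tcp.$domain" -Type SRV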

What’s in the File?

Autodiscover.xml is a dynamically generated XML file that contains the information Outlook needs to access the mailbox that was entered in the configuration wizard. When Outlook makes a request to Exchange Autodiscover, the following things happen:

  1. Exchange requests credentials to access the mailbox.
  2. If the credentials are valid, Exchange checks the AD attributes on the mailbox that has the requested Email address.
    3. Exchange determines which server the mailbox is located on. This information is usually stored in the msExchHomeServerName attribute on the associated AD account.
  4. Exchange examines its Topology data to determine the best Client Access Server (CAS) to use for access to the mailbox. The Best CAS is determined using the following checks:
    1. Determine AD Site the Mailbox’s Server is located in
    2. Determine if there is a CAS assigned to that AD site
    3. If no CAS is in the site, use Site Topology to determine next closest AD Site.
    4. Step 3 is repeated until a CAS is found.
  5. Exchange returns all necessary configuration data stored in AD for the specific server. The configuration data returned is:
    1. CAS server name
    2. Exchange Web Services URL
    3. Outlook Anywhere Configuration Data, if enabled.
    4. Unified Communications Server info
    5. MAPI over HTTP proxy server address (if that is enabled)
  6. Outlook will take the returned information and punch it into the necessary spots in the user’s profile information.
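If you want to see that returned data for yourself, you can make the same request Outlook makes. A sketch with placeholder server and mailbox names; the request and response schema URIs are fixed parts of the Autodiscover protocol:

$body = @'
<Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
  <Request>
    <EMailAddress>user@domain.com</EMailAddress>
    <AcceptableResponseSchema>http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a</AcceptableResponseSchema>
  </Request>
</Autodiscover>
'@

# Exchange demands credentials before returning the file (step 1 above)
$response = Invoke-WebRequest -Uri "https://exchangeservername/autodiscover/autodiscover.xml" -Method Post -Body $body -ContentType "text/xml" -Credential (Get-Credential)
$response.Content   # the XML payload Outlook parses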

Necessary Configuration

Because all of this is done automatically, it is imperative that the Exchange server is configured to return the right information. If the information returned to Autodiscover is incorrect, either the mailbox connection will fail or you’ll get a certificate error. To get Autodiscover configured right, parts 5.1, 5.2, 5.3, and 5.5 of the above process must be set. This can be done with a script, in the Exchange Management Shell, or in the Exchange management UI (EMC for 2007 and 2010, ECP/EAC for 2013/2016).
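As a sketch of what that looks like in the Exchange Management Shell; the server name and URLs are placeholders, and the exact cmdlets vary a little by Exchange version:

$fqdn = "mail.domain.com"   # the name that's actually on your certificate

# 5.1 - the URL Autodiscover itself hands out
Set-ClientAccessServer "EXCH01" -AutoDiscoverServiceInternalUri "https://$fqdn/Autodiscover/Autodiscover.xml"

# 5.2 - Exchange Web Services URL
Set-WebServicesVirtualDirectory "EXCH01\EWS (Default Web Site)" -InternalUrl "https://$fqdn/EWS/Exchange.asmx" -ExternalUrl "https://$fqdn/EWS/Exchange.asmx"

# 5.3 - Outlook Anywhere hostnames
Set-OutlookAnywhere "EXCH01\Rpc (Default Web Site)" -InternalHostname $fqdn -ExternalHostname $fqdn -InternalClientsRequireSsl $true -ExternalClientsRequireSsl $true

# 5.5 - MAPI over HTTP (Exchange 2013 SP1 and later only)
Set-MapiVirtualDirectory "EXCH01\mapi (Default Web Site)" -InternalUrl "https://$fqdn/mapi" -ExternalUrl "https://$fqdn/mapi"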

Importance of Autodiscover

With the release of Outlook 2016, it is no longer possible to configure server settings manually in Outlook. You must use Autodiscover. Earlier versions can avoid using it by manually configuring each Outlook client. However, before doing that, consider the cost of having to touch each and every computer to properly configure Outlook. It can take 5 minutes or more to configure Outlook on one computer using the manual method, and with Exchange 2013 it can take longer, since you’re also required to input Outlook Anywhere configuration settings, which are more complex than just entering a server name, username, and password. If you multiply that by the number of computers you might have in your environment and add in the time it takes to actually get to the computers, boot them up, and get to the Outlook settings, the time spent configuring Outlook manually starts to add up very quickly. Imagine how much work you’d be stuck with configuring 100 systems!

In contrast, it usually only takes 10 to 20 minutes to configure Autodiscover. When Autodiscover is working properly, all you have to do is tell your users what their email address is and Outlook will do all the work for you. With a little more configuration or some GPO work, you don’t even have to tell them that!

When you start to look at the vast differences in the amount of time you have to spend configuring Outlook, whether or not to use Autodiscover stops being a question of preference and starts being an absolutely necessary part of any efficient Exchange-based IT environment. Learning to configure it properly is, therefore, one of the most important jobs of an Exchange administrator.

Email Encryption for the Common Man

One of my co-workers had some questions about email encryption and how it worked, so I ended up writing him a long response that I think deserves a wider audience. Here’s most of it (leaving out the NDA covered portions).

Email Encryption and HIPAA Compliance for the Uninitiated

In IT security, when we talk about encryption, there are a couple of different “types” of encryption that we worry about, one is encryption “in transit”, and the other is encryption “at rest.”

Encryption “in transit” is how we ensure that when data is moving from one system to another that it is either impossible or difficult beyond reasonable likelihood for someone to intercept and read that data. There are pieces of many data exchanges that we have no control over, so we cannot guarantee that there isn’t someone out there with a packet sniffer reading every bit that passes between our server and someone else’s (This is a form of “passive” data inspection, possible from just about any trunk line on a switch). We can make sure it doesn’t happen on our end, but we can’t control the ISP or the other person’s side of things.

The basic email encryption system, TLS (Transport Layer Security…Don’t ask what that means), usually follows this incredibly oversimplified pattern:

1. Server 1 contacts Server 2
2. Server 2 says, “Hi. I’m Server 2. Who are you?”
3. Server 1 says, “Hi. I’m Server 1.”
4. Server 2 says, “Nice to meet you Server 1. What can I do for you?”
5. Server 1 says, “Before we really get into that, I’d like to make sure no one is eavesdropping on our conversation. Can we start talking in a language no one but us knows?” (This is basically what encryption is)
6. Server 2 says, “Sure. What language would you like to use?”
7. Server 1 hands Server 2 a certificate that serves as a kind of translator, which Server 2 will use to translate (decrypt) everything that Server 1 says from now on. Server 2 will also use this certificate to send any responses or other messages back to Server 1.
8. Server 2 says, after translating what it wants to say into the new encryption language, “Okay, what would you like to do?”
9. Server 1 translates this message from the encrypted language and makes its first request to server 2 after translating it into the encrypted language.

From this point on, each server will communicate exclusively with the encryption “language” provided by the certificate they exchanged, and anyone who is eavesdropping (packet sniffing) will only see a bunch of gobbledygook that they can’t understand.
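If you’re curious what this looks like on the wire, you can drive the handshake yourself. The PowerShell sketch below connects to a mail server, asks to start TLS, and reports what encryption was negotiated. The server name is a placeholder, and real SMTP responses can span multiple lines, so treat this as illustrative rather than robust:

$server = "mail.abc.com"   # placeholder
$tcp    = New-Object System.Net.Sockets.TcpClient($server, 25)
$stream = $tcp.GetStream()
$reader = New-Object System.IO.StreamReader($stream)
$writer = New-Object System.IO.StreamWriter($stream)
$writer.AutoFlush = $true

$reader.ReadLine()                        # the 220 greeting ("Hi. I'm Server 2.")
$writer.WriteLine("EHLO client.example")  # "Hi. I'm Server 1."
do { $line = $reader.ReadLine() } while ($line -like "250-*")   # read capabilities

$writer.WriteLine("STARTTLS")             # "Can we start talking in a language no one but us knows?"
$reader.ReadLine()                        # expect a 220 "Ready to start TLS"

$ssl = New-Object System.Net.Security.SslStream($stream)
$ssl.AuthenticateAsClient($server)        # the certificate exchange happens here
"Now speaking {0} with {1}" -f $ssl.SslProtocol, $ssl.CipherAlgorithm
$tcp.Close()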

There are more complex versions of this scenario that make things more secure. For instance, in a Domain Authenticated TLS situation, both servers have to be “Authenticated,” which is to say, they must prove they are the server the message is supposed to go to. This is done by validating the name that is printed on the certificate with the name the servers use when introducing themselves to one another.

In the example above, it is possible for someone to inject themselves into the conversation and decrypt everything from Server 1, read it, encrypt it again, and send it on to Server 2 (this is called a Man-in-the-Middle attack, and is an “active” form of eavesdropping, because it requires a very complex setup and specialized hardware to accomplish, and also requires active manipulation of the data being inspected). Domain Authenticated TLS makes this much more difficult, because a server that acts as an intermediary in a Man-in-the-Middle attack cannot use the name that exists on the certificate unless it is owned by the entity that created the certificate to begin with. When you get certificate errors while browsing the web, this is usually because you entered a name that isn’t listed on the certificate installed on the server you’re talking to, or because the server is using a name that isn’t listed on the certificate. (Certificates are a heavy subject, so I’ll just bypass that for now)

Anyway, data “at rest” is any data that is just sitting on a hard drive or disk somewhere, waiting for someone to read it. In order to read that data, you have to gain access to a server (or workstation) that has access to the data and read it. Encryption of data “at rest” requires more effort to accomplish, because it has to be decrypted every time someone tries to read it. Technologies like Bitlocker or PGP allow data to be encrypted while it’s just sitting there on a server.

We only care about encryption of data “in transit” when we work with HIPAA regulations. This is because the only way to access data that is “at rest” is to gain physical access to the data or to systems that have access to that data. HIPAA has other regulations that help reduce the likelihood that either of those things will happen, and since data “at rest” is never outside our realm of control, we can do much more to protect it. Most ePHI is sitting in a datacenter that is locked and requires specific permission to access, but that coverage doesn’t apply to the data when it’s moving between servers.

How Will the Cloud Affect My Career as an IT Professional?

Well, after a year’s hiatus due to some particularly difficult personal trials, I’ve decided to come back to the blog and weigh in on one of the big hot-button subjects in the IT industry: how the cloud will affect the job market.

The Push to Cloud

In the modern world, as the Internet has developed and increased in prominence in our lives, improved infrastructure, security technology, and bandwidth are beginning to allow businesses and individuals to forgo the traditional need to pay big bucks for things like processing power and storage. Companies have been moving their critical systems into third-party datacenters for years, but with the development of entirely cloud-based solutions like Office 365, Azure, Google Apps, and AWS, we’re seeing a large industry push to reduce infrastructure costs by moving away from self-managed IT solutions. So now we’re starting to see another paradigm shift in the IT industry.

Now, this is not to say that IT hasn’t seen any kind of paradigm shift before, quite the contrary. It seems like every year we’re having to face some new technology that is permeating the industry. From the introduction of Ethernet, to wireless networking, to virtualization and VDI, most of us have dealt with the changes as they’ve come, learning new techniques and adjusting the way we work. But the push to cloud has a lot of IT personnel worried about their jobs.

What About my Job!?

Cost savings have been the primary driver behind the recent push to adopt cloud services. Executives around the world are salivating at the possibility of reducing their costs by shifting the responsibility of IT infrastructure management onto third parties. This shifting of responsibility has a lot of IT people on edge, wondering what will happen to their jobs if the stuff they do every day is outsourced to a third-party service provider. If there are no servers or networks to maintain, am I going to lose my job?

The answer here is actually pretty simple. Unless you work in one of a few specific niche jobs, you’re pretty likely to keep your job.

Working IT in the Age of the Cloud

While moving to the cloud does reduce the need for critical infrastructure and complex solutions, it doesn’t really reduce the need for administration, problem solving, end-user support, and technical know-how. After performing numerous migrations from various email systems to Office 365, the one thing I’ve discovered is that moving to the cloud doesn’t really make the IT guy’s job any easier. In many ways, it actually makes things more difficult and complex, which means that if you’re a competent IT professional your job is pretty safe.

How will the Cloud Change My Job?

Now, that isn’t to say that your job isn’t going to change. Moving to the cloud requires a good deal of adaptation and adjustment to new ways of thinking and managing resources. For instance, if you want to have any kind of Active Directory integration with Office 365, you’re going to have to use Dirsync, and using Dirsync means you can’t modify things like distribution groups, user accounts, and passwords in Office 365. These things have to be managed in Active Directory, and that means that all of those really menial tasks you’ve been handing off to department heads, like adding distribution group members, are going to land right back into your wheelhouse if you’re as nervous as I am about giving people with no IT experience access to the Active Directory Users and Computers snap-in. For things like password changes, be prepared to face a massive influx of support calls asking for help resetting passwords as well.

In addition to the technical limitations involved with moving to the cloud, you also have to deal with the fact that you lose a lot of direct control over the IT infrastructure. Since your resources are now located on a system owned and operated by someone else, if things break you have to go to the vendor to get it fixed, and that brings up any number of frustrations, depending on who you work with. If you’ve ever spent any time on the phone with Microsoft Support, you’re likely to dread any interaction with them from that point on. You won’t have to do much of the work involved in fixing the problem, but you will have to sit around twiddling your thumbs while someone else does, and that can be a little maddening at times. This does, of course, depend on how competent the person on the other end of the phone is. Sometimes you get lucky and find someone who knows their stuff. Sadly, that’s more of an exception than a rule, so you may need to brush up on your people skills a bit and learn how to light a fire under the support technician on the phone with you.

One Step Forward, Two Steps Back

As more services start moving to the cloud, we’ll probably see a reduction of administrative overhead, but for now cloud solutions will feel like a major step backward to a lot of people, and it really is a step backward. You see, in the old days, most businesses that made investments in IT infrastructure would utilize systems called Mainframes. The mainframe system would perform all of the actual processing work (and was as big as a house in some cases), and people who used computers would interface with the mainframe from a system that was directly connected to it. Sound a lot like a Virtual Desktop Environment? It’s a lot like how cloud services work, too, except that with the Cloud, we replace the centralized Mainframe system with a vast, globe spanning collection of servers. It’d be like if one company owned a single mainframe that was rented out for numerous companies to use at once. As a result, we have to rethink the way we work. Luckily, this does mean fewer trips to the data-center to reboot servers or move cables around, which is a major plus for some people (not me, though. Data-centers relax me for some reason. I’m not sure if it’s the steady humming sound or 50 degree temperatures).

Niche Workers Beware!

With all of this said, there are a handful of jobs that are going to start disappearing in the next few years. If you happen to work entirely in one of these areas, you should seriously consider branching out, or you may soon find yourself without a reason to work.

1. Backup Operations – This is one niche where the writing has been on the wall for a while now. Companies have been moving toward high-availability solutions for some time, which means that the need to spend copious funds on backup solutions and storage has been falling. High-availability solutions generally rely on having multiple copies of critical data on multiple servers, so the loss of a single server no longer puts people into panic mode. With the cloud, data is placed in systems with so much redundancy, and with such a high level of integrity, that data loss is extremely uncommon, and unrecoverable data loss is nearly impossible for some cloud solutions. So if you’re a backup operator and that’s all you’ve done for years, you might find your job in the cross-hairs. It’s time to start expanding your repertoire.

2. Hardware maintenance – To me, it should be obvious that computer hardware specialists are going to see less work with the move to cloud, since there is a definite drop in the amount of server and network hardware required when your company is running in the cloud, but I figured I should at least mention it.

3. Internal Network Administration – This particular job won’t ever go away, but with cloud solutions we may see a definite drop in the complexity and overhead required for running a LAN, and the concept of the WAN may begin to disappear as satellite offices will only require an Internet connection to access company resources located in the Cloud.

4. IT Infrastructure Design Specialists – Since the cloud consists almost entirely of prepackaged solutions, the demand for complex architectural designs will start to disappear, meaning that people who make their living designing and implementing IT infrastructure solutions are going to have a lot less work to do. This is the one that makes me sad, since I really enjoy Infrastructure Design. As the cloud push progresses, Infrastructure design will change from designing solutions to developing solutions for managing and interfacing with cloud-based services (which is not nearly as much fun).

5. SAN management – The concept of the Storage Area Network isn’t really even into adolescence yet, and here we’re moving away from it? Well, yes, pretty much. As the cloud sees greater levels of adoption, people who focus on the management, provisioning, and optimization of centralized data storage are probably going to see less work.

Now, I’m not saying that these niche jobs will disappear entirely. Nor is this a comprehensive list of jobs that the cloud will make less important. There will always be companies who avoid the cloud like the plague, and they’re going to need people who know their stuff. But what I will say is that if you focus on these areas alone, be prepared to branch out, or you’re probably going to start spending your days working for a cloud service provider like Microsoft, which might not be as much fun as working for a (much) smaller company.

Adapt or Die

In the end, the cloud will simply force IT professionals to do what most of us do best, adapt to changes in our surroundings. We’ll need to change the way we think and interact in our jobs to be successful in our careers, but we should still have careers despite the changes to the IT landscape.

Exchange Autodiscover – The Active Directory SCP

In a previous post I explained how you can use an SRV record to resolve certificate issues with Autodiscover when your internal domain isn’t the same as your email domain. This time, I’m going to explain how to fix things by making changes to Exchange and Active Directory that allow things to function normally without using an SRV record, or any DNS records at all, for that matter. The catch is that this only works if the computers that access Exchange are members of your domain and you configure Outlook using user@domain.local, because this is how Exchange hands out Autodiscover configuration URLs by default, without any DNS or SRV records. However, if you have a private domain name in your AD environment, which you should try to avoid when building new environments now, you will always get a certificate error in Outlook, because SSL certificates from third-party CA providers won’t include private domains on SAN certificates anymore. To fix this little problem, I will first give you a little information on a lesser known feature in Active Directory called the Service Connection Point (SCP).

Service Connection Points

SCPs play an important role in Active Directory. They are basically entries in the Active Directory Configuration partition that define how domain-based users and computers can connect to various services on the domain; hence the name, Service Connection Point. These typically show up in an Active Directory tool that a lot of people overlook, but that has been *extremely* important to Exchange ever since 2007 was released: Active Directory Sites and Services (ADSS). ADSS is typically used to define replication boundaries and paths for Active Directory Domain Controllers, and Exchange uses the information in ADSS to direct users to the appropriate Exchange server in large environments with multiple AD sites. But what you can also do is view and make changes to the SCPs that are set up in your AD environment. You do this with a feature that is overlooked even more than ADSS itself, the Services node in ADSS. This can be exposed by right-clicking the Active Directory Sites and Services object when you have ADSS open, selecting View, then clicking “Show Services Node”, like this:

[Image: ADSS with the Show Services Node option selected]

Once you open the services node, you can see a lot of the stuff that AD uses in the back end to make things work in the domain. Our focus here, however, is Exchange, so go into the Microsoft Exchange node. You’ll see your Exchange Organization’s name there, and you can then expand it to view all of the Service Connection Points that are related to Exchange. I wouldn’t recommend making any changes in here unless you really know what you’re doing, since this view is very similar to ADSIEdit in that it allows you to examine stuff that can very rapidly break things in Active Directory.

Changing the Exchange Autodiscover SCP

If we look into the Microsoft Exchange services tree, you first see the Organization name. Expand this, then navigate to the Administrative Group section. In any Exchange version that supports Autodiscover, this will show up as First Administrative Group (FYDIBOHF23SPDLT). If the long string of letters confuses you, don’t worry about it. That’s just a joke the developers of Exchange 2007 put into the system: it’s a +1 Caesar cipher that means EXCHANGE12ROCKS when decoded. Programmers don’t get much humor in life, so we’ll just have to forgive them for that and move on. Once you expand the administrative group node, you’ll be able to see most of the configuration options for Exchange that are stored in AD. Most of these shouldn’t be touched. For now, expand the Servers node. This is the section that defines all of your Exchange servers and how client systems can connect to them. If you dig around in here, mostly you just see folders, but if you right-click on any of them and click Properties, you should be able to view an Attributes tab (in Windows 2008+, at least; prior to that you have to use ADSIEdit to expose the attributes involved in the Services node of ADSS). There are lots of cool things you can do in here, like change the maximum size of your transaction log files, implement strict limits on the number of databases per server, change how much the database grows when there isn’t enough space in the database to commit a transaction, and other fun things. What we’re focusing on here is Autodiscover, though, so expand the Protocols tree, then go to Autodiscover, as seen below.

[Image: The Autodiscover node in the ADSS Services tree]

Now that we’re here, we see each one of the Exchange CAS servers in our environment. Mine is called Exchange2013 because I am an incredibly creative individual (except when naming servers). Again, you can right-click the server name, select Properties, then go to the Attribute Editor tab to view all the stuff you can control about Autodiscover here. It looks like a lot of stuff, right? Well, you’ll really only want to worry about two attributes here. The rest are defined and used by Exchange to do…Exchangey stuff (technical term). And you’ll really only ever want to change one of them. The two attributes you should know the purpose of are “keywords” and “serviceBindingInformation”.

  • keywords: This attribute, as you may have noticed, defines the Active Directory Site that the CAS server is located in. This is filled in automatically by the Exchange subsystem in AD based on the IP address of the server. If you haven’t created subnets in ADSS and assigned them to the appropriate site, this value will always be the Default site. If you change this attribute, it will get written over in short order, and you’ll likely break client access until the re-write occurs. The *purpose* of this value is to allow the Autodiscover Service to assign a CAS server based on AD site. So, if you have 2 Exchange Servers, one in site A and another in site B, this value will ensure that clients in site A get configured to use the CAS server in that site, rather than crossing a replication boundary to view stuff in site B.
  • serviceBindingInformation: Here’s the value we are most concerned with in this post! This is the value that defines where Active Directory Domain joined computers will go for Autodiscover Information when you enter their email address as username@domain.local if you have a private domain name in your AD environment. By default, this value will be the full FQDN of the server, as it is seen in the Active Directory Domain’s DNS forward lookup zone. So, when domain joined computers configure Outlook using user@domain.local they will look this information up automatically regardless of any other Autodiscover, SRV, or other records that exist in DNS for the internal DNS zone. Note: If your email domain is different from your AD domain, you may need to use your AD domain as the email domain when configuring Outlook for the SCP lookup to occur. If you do not want to use the AD Domain to configure users, you will want to make sure there is an Autodiscover DNS record in the DNS zone you use for your EMail Domain.

Now, since we know that the serviceBindingInformation value sets the URL that Outlook will use for Autodiscover, we can change it directly through ADSS or ADSIEdit by replacing what’s there with https://servername.domain.com/Autodiscover/Autodiscover.xml . Once you do this, internal clients on the domain that use user@domain.local to configure Outlook will be properly directed to a value that is on the certificate and can be properly configured without certificate errors.
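If you’d rather do that from PowerShell than click through ADSS or ADSIEdit, the sketch below uses the ActiveDirectory module to find the Autodiscover SCPs and rewrite the attribute (the URL is the same placeholder as above):

Import-Module ActiveDirectory

# Autodiscover SCPs live in the Configuration partition
$configNC = (Get-ADRootDSE).configurationNamingContext
$scps = Get-ADObject -SearchBase $configNC -LDAPFilter "(objectClass=serviceConnectionPoint)" -Properties serviceBindingInformation | Where-Object { $_.DistinguishedName -like "*CN=Autodiscover,CN=Protocols*" }

# See what's there now
$scps | Select-Object Name, serviceBindingInformation

# Point the SCP at a name that's actually on your certificate
$scps | Set-ADObject -Replace @{ serviceBindingInformation = "https://servername.domain.com/Autodiscover/Autodiscover.xml" }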

Now, if you’re a little nervous about making changes this way, you can actually change the value of the serviceBindingInformation attribute by using the Exchange Management Shell. You do this by running the following command:

get-clientaccessserver | set-clientaccessserver -autodiscoverserviceinternaluri "https://servername.domain.com/Autodiscover/Autodiscover.xml"

This will directly modify the Exchange AD SCP and allow your clients to use Autodiscover without getting certificate errors. Not too difficult, and you don’t have to worry about split DNS or SRV records. Note, though, that like the SRV record approach, you will be forcing your internal clients to go out of your network to the Internet to reach your Exchange server. To keep this from happening, you will have to have an internal version of your external DNS zone that has internal IPs assigned in all the A records. There’s just no way around that with private domain names any longer.
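Setting up that internal copy of the zone on a Windows DNS server looks roughly like this; the zone name and IP are placeholders, and remember that every public record your internal clients rely on has to be recreated in the internal copy:

# Create an internal copy of the public zone
Add-DnsServerPrimaryZone -Name "domain.com" -ReplicationScope "Forest"

# Recreate the records that matter, with internal IPs
Add-DnsServerResourceRecordA -ZoneName "domain.com" -Name "servername" -IPv4Address "192.168.1.20"
Add-DnsServerResourceRecordA -ZoneName "domain.com" -Name "autodiscover" -IPv4Address "192.168.1.20"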

Final Note

Depending on your Outlook version and how your client machines connect, there is some additional configuration that will need to be completed to fully resolve any certificate errors you may have. Specifically, you will need to modify some of the Exchange Virtual Directory URLs to make sure they are returning the correct information to Autodiscover.

Avoiding Issues with Certificates in Exchange 2007+

For reference, modern Active Directory best practices can help you avoid trouble with certificate errors in Exchange. Go here to see some information about modern AD domain naming best practices. If you follow that best practice when creating your AD environment, you won’t have to worry so much about certificate errors in Exchange, as long as the certificate you use has the Exchange server(s) name listed. However, if you can’t build a new environment or aren’t already planning to migrate to a new AD environment in the near future, it isn’t worth the effort just for this, since small configuration changes like the one above can fix certificate errors.

Office 365, ADFS, and SQL

Here’s an issue I’ve just run into that there doesn’t seem to be a good answer to on the Internet: when you are building a highly available ADFS farm to enable Single Sign-On for Office 365, should you use the Windows Internal Database (WID) that comes with Windows Server or store the ADFS configuration on a SQL server? Put simply, if you are using ADFS *only* for Office 365, there is no need to use SQL server for storing the configuration database. Here’s why.

What Does ADFS with SQL Get You?

I spent a good day or so trying to answer this question, because I came in after the initial configuration of a hybrid Exchange migration to Office 365 that was set up using SQL to store the ADFS configuration. When we went to test failover, we ran into a *mess* of problems with this configuration. So I’m going to try to explain things in a way that makes sense, rather than just telling you what TechNet says in its article on the subject.

Storing your ADFS configuration in a SQL database gives you 4 things:

  1. The ability to detect and block SAML and WS-Federation Token Replay attempts.
  2. The ability to use SAML artifact resolution
  3. The ability to use SQL fail-over to increase availability of the Configuration database
  4. Scalability!!!

Using WID with ADFS prevents these features from being available to you. Here’s what each of those lines of text means (Why no, I’m not going to just give you that without definitions like TechNet does. Can you tell I’m a little grumpy about TechNet’s coverage of this subject?)

SAML and WS-Federation Token Replay Attempts

This is actually a potential security vulnerability that comes about because of how Token authentication mechanisms work. In some cases, an application that accepts a federated token identity for access may allow users to “replay” a previously issued authentication Token to bypass the authentication mechanisms used to issue that Token. You can do this by ending a session with a federated application, then returning to the application with a cached version of the interaction. The easiest way to do this is to just push the Back button in a web browser. If the federated application isn’t properly designed, it will accept the cached version of the token used in the previous session and continue operating just like someone never logged out.

It should be noted that this is a serious security issue. If you have an environment where there are systems exposed to public use by multiple users (Kiosks, for example), you will want to make sure that Token Replay attempts are dropped. But here’s the thing, Token Replay is a vulnerability that affects the application you are using federated login to access. It is not a vulnerability of ADFS itself. Basically, if the applications that you are using ADFS to federate with are designed properly, there is no need to have your ADFS environment provide this kind of protection. Office 365 was designed to prevent Token Replay attempts from succeeding. So if you are only federating with Office 365, you don’t need to have this functionality in your ADFS environment. So, no need to have a SQL based Configuration Database for this reason.

SAML Artifact Resolution

I have tried for a while to find a good definition for what SAML Artifact Resolution actually is. I’m still not 100% on it, because the technical documentation for the SAML 2.0 standard is about as informative as a block of graphite. However, it seems like this is essentially a method through which both sides of a SAML token exchange can check on and authenticate one another using a single session, rather than multiple sessions. This comes in handy for situations where load balancing is in use and the application requests verification of the token provider.

The good news is that you don’t have to understand SAML Artifact Resolution. Office 365 doesn’t use it, so you don’t have to worry about having it. No SQL required for this purpose.

SQL Failover Capabilities

At first glance, having SQL server failover capabilities for your ADFS configuration database sounds like a great idea. I mean, SQL server has great failover capabilities built in. The problem is, so does ADFS. When you use WID for your ADFS Configuration database, the first server in the Farm will store a read/write enabled copy of the configuration database. Each additional ADFS server added to the farm will pull a copy of this database and store it in a read only format. So in a WID based ADFS farm, every ADFS server in the farm has a copy of the configuration database by default. ADFS Proxy servers don’t store or need access to the configuration database, so only the backend ADFS servers will benefit at all from SQL integration.
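As an aside, on ADFS versions that ship as a built-in Windows role (2012 R2 and later), you can check which WID farm member holds the writable copy with a one-liner; this cmdlet isn’t available on the older, separately installed ADFS 2.x:

# Run on any farm member; on the primary, Role reads "PrimaryComputer",
# on secondaries it names the primary and shows the last sync time
Get-AdfsSyncProperties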

But here’s the thing: for SQL integration to work properly in a way that allows full SQL failover capabilities, you need to have a valid SQL cluster. With SQL 2008 R2 and earlier, this means that you have to have at minimum 2 SQL servers installed on a version of Windows that has Cluster Services available, and Cluster Services requires shared storage before it will create a failover cluster for SQL versions prior to 2012. If you have SQL 2012, you can manage much simpler failover, but if you’re limited to SQL 2008 R2 and earlier, you’re going to end up with a significantly higher infrastructure requirement to get SQL failover working. If you can use SQL 2012 to store the ADFS database, there is some advantage to using SQL clustering for ADFS, but the advantage doesn’t actually justify the cost of doing so.

“What about SQL Mirroring?” you ask. Well, that’s a good idea in theory. You can have your ADFS config database mirrored on two SQL servers, but you will run into some major headaches using this technique, especially if you ever need to run a failover. First off, ADFS configuration with SQL server requires that you use the FSConfig command line tool to create the config database and add servers to the farm. This is a huge pain to do, but it’s necessary. Second, if you ever have to fail over to the other SQL server for any reason, you have to reconfigure every one of the servers in the farm. SQL mirroring doesn’t utilize a clustered VIP, so configuring with FSConfig requires you to point each server at the active SQL server. When you activate the SQL database on the other server, things may or may not work without reconfiguration. If they do work, there’s a chance that they might *stop* working at any time in the future without notice, until you run FSConfig on each member of the farm and use the now-active mirror’s address in the command. Then you may have to log in to each of your proxy servers and run the proxy configuration wizard to reinitialize the proxy trust between it and the ADFS farm. That means that if you want to fail over with SQL mirroring, you have to log in to 4 servers at minimum and reconfigure each one every time the active SQL mirror changes. This is a truly unwieldy and excessive failover process. The money saved by skipping a full SQL cluster isn’t worth a failover process like this for ADFS’s configuration database.

The failover abilities of SQL allow you to have constantly available access to a writable copy of the ADFS configuration database. If you use WID and the first server in the farm goes down, you cannot make changes to your ADFS configuration. No adding new servers, no modifications to the ADFS functions, etc. The good news is that the only time you need access to a writable copy of the database is when you are adding new servers to the farm or making modifications to ADFS functions. If you’re using Office 365, you almost never have to make changes to either of these once it’s set up and running. ADFS will continue operating without a writable copy of the database as long as even one server in the farm is accessible. So realistically, you don’t need SQL failover capabilities for ADFS’s configuration database.

Scalability!!!

Scalability is one of my favorite buzzwords. Scalability means “I can add more servers to handle this workload without any problems.” And SQL allows you to do that. However, WID-integrated ADFS allows you to do so as well. The limitation, though, is that you can only have up to 5 servers in a WID-based ADFS farm (NOTE: proxy servers don’t count against this maximum number). “Only five servers!?” you may ask in heated exasperation. “Yes!” I answer. “Well, I can’t stand limitations! I will use SQL!” you respond. “You will never need more than 4 ADFS servers, ever!” I retort. ADFS is one of the most lightweight roles available for Windows Server. I have deployed ADFS in numerous environments with user counts well past the 10,000 mark, and the ADFS servers almost never measure a blip on the performance monitors. This is because ADFS does three things: it provides a website for users to log in to, it queries Active Directory to authenticate users, and it generates a small token that contains less than 1KB of data that it then sends to a remote service. This entire process requires about 10 seconds per user per session, and that includes about 9 seconds of someone typing in their credentials. So there isn’t really even much need to have load balancing capabilities on ADFS. One server can handle monstrous loads. If you have an environment with 100,000 users accessing numerous federated applications, you might need to add 1 or 2 extra servers to your farm to handle the load, but I doubt it. You’re better off just adding RAM and processing power to your ADFS servers to improve performance rather than adding extra servers to the farm.

If you want multi-site high availability, you may need 5 servers, but probably not more. Let’s say you want to have site resiliency in your ADFS configuration as well as failover capability in your primary and secondary site. This configuration requires a minimum of 4 ADFS servers. Two for the primary site, two for the failover site, and 4 proxy servers that don’t count against the maximum of 5 servers. There you have it, full site-resiliency and in-site failover capability using less than 5 servers. You can even add a third site to the farm without having any issues if you want using the remaining server. But let’s be honest here, if there is an emergency that takes down two geographically dispersed datacenters at the same time, you’re going to be looking for the nearest shotgun to fight off zombies/anarchist bandits, not checking to make sure your ADFS farm is still working, so triple hot-site resiliency for ADFS is probably more than anyone needs. So you don’t need SQL for ADFS if you want scalability. It’s scalable enough without SQL.

EXPLOSIONS! I mean Conclusions

If you are planning to build an ADFS farm for Single Sign-On with Office 365, you will ask the question, “Do I need SQL?” The answer to that question, in pretty much every possible case, is “Not if you’re just going to use it for Office 365.” SQL-integrated ADFS configurations add some features, sure, but there is almost no situation where any of those features must exist for ADFS to work with Office 365. So don’t bother even trying it out. It adds an unnecessary level of complexity, cost, and management difficulty and gives no advantages whatsoever.