Do I need Anonymous Relay?

Problems

If you have managed an Exchange server in the past, you’ve probably been asked to set things up so that printers, applications, and other devices can send email through the Exchange server. Most often, the solution to this request is to configure an Anonymous Open Relay connector. The first article I ever wrote on this blog was on that very subject: http://wp.me/pUCB5-b . If you need to know what a relay is, go read that post.

What people don’t always do, though, is consider the question of whether or not they need an anonymous relay in Exchange. I didn’t really cover that subject in my first article, so I’ll cover it here.

When you Need an Open Relay

There are three factors that determine whether an organization needs an Open Relay. Anonymous relay is only required if you meet all three of the factors. Any other combination can be worked around without using anonymous relaying. I’ll explain how later, but for now, here are the three factors you need to meet:

  1. Printers, Scanners, and Applications don’t support changes to the SMTP port used.
  2. Printers, Scanners, and Applications don’t support SMTP Authentication.
  3. Your system needs to send mail to email addresses that don’t exist in your mail environment (That is to say, your system sends mail to email addresses that you don’t manage with your own mail server).

At this point, I feel it important to point out that Anonymous relays are inherently insecure. You can make them more secure by limiting access, but using an anonymous relay will always place a technical solution in the environment that is designed specifically to circumvent normal security measures. In other words, do so at your own informed risk, and only when it’s absolutely required.

The First Factor

If the system you want to send SMTP messages from doesn’t allow you to send email over a port other than 25, you will need an open relay if the messages the system sends are addressed to email addresses outside your environment. That last part is an important distinction. The SMTP protocol defines port 25 as the “default” port for mail exchange, and it’s the port every email server uses to receive email from all other systems. Because of modern security concerns, mail sent to port 25 is only accepted if the recipient of the email exists on the receiving mail server. So if you are using the abc.com mail server to send messages to bob@xyz.com, you will need a relay server to do it, or the mail will be rejected because relaying is (hopefully) not allowed.

The Second Factor

If your system doesn’t allow you to specify a username and password in its SMTP configuration, then you will have to send messages anonymously. For our purposes, an “anonymous” user is a user that hasn’t logged in with a username and password. SMTP servers usually talk to one another anonymously, so anonymous SMTP access is common and actually necessary for mail exchange to function, but by default SMTP servers will only accept messages that are destined for email addresses they manage. So if abc.com receives a message destined for bob@abc.com, it will accept it. However, abc.com will reject messages to jim@xyz.com, *unless* the SMTP session is authenticated. In other words, if bob@abc.com wants to send jim@xyz.com a message, he can open an SMTP session with the abc.com mail server, enter his username and password, and send the message. If he does that, the SMTP server will accept the message, then contact the xyz.com mail server and deliver it. The abc.com mail server doesn’t need a username and password to do this, because the xyz.com mail server knows who jim@xyz.com is, so it just accepts the message and delivers it to the correct mailbox. So if you are able to set a username and password on the system you need to send mail with, you don’t need anonymous relay.

The Third Factor

Most of the time, applications and devices will only need to send messages to people who have mailboxes in your environment, but there are plenty of occasions where applications or devices that send email out need to be able to send mail to people *outside* the environment. If you don’t need to send to “external recipients” as these users are called, you can use the Direct Send method outlined in the solutions below.

Solutions

As promised, here are the solutions you can use *other* than anonymous relay to meet the needs of your application if it doesn’t meet *all three* of the deciding factors.

Authenticated Relay (Factor #3 applies)

In Exchange server, there is a default “Receive Connector” that accepts all messages sent by Authenticated users on port 587, so if your system allows you to set a username and password and change the port, you don’t need anonymous relaying. Just configure the system to use your Exchange Hub Transport server (or CAS in 2013) on port 587, and it should work fine, even if your requirements meet the last deciding factor of sending mail to external recipients.
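If you want to prove out port 587 before touching the device itself, you can mimic it from PowerShell. This is just a test sketch; the server name, account, and addresses below are placeholders for your own values:

# Test authenticated submission on port 587, the way the device would send.
$cred = Get-Credential   # the account the device will authenticate with
Send-MailMessage -SmtpServer "exchange01.abc.com" -Port 587 -UseSsl -Credential $cred `
 -From "scanner@abc.com" -To "bob@xyz.com" `
 -Subject "Relay test" -Body "Authenticated submission test on port 587."

If that message reaches the external recipient, the device will work the same way once it’s pointed at port 587 with the same credentials.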

Direct Send (Factor #2 applies and/or #3 doesn’t apply)

If your system needs to send messages to abc.com users using the abc.com mail server, you don’t need to relay or authenticate. Just configure your system to send mail directly to the mail server. The “direct send” method uses SMTP as if it were one mail server talking to another, so it works without any additional setup. Just note that if you have a spam filter that enforces SPF or blocks messages from addresses in your environment to addresses in your environment, it’s likely these messages will get blocked, so make allowances as needed.
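To illustrate, here’s what a direct send test looks like from PowerShell. Note there is no credential, and the recipient has to be an address your server manages (the names are placeholders):

# Direct send: anonymous SMTP on port 25, internal recipients only.
Send-MailMessage -SmtpServer "exchange01.abc.com" -Port 25 `
 -From "scanner@abc.com" -To "bob@abc.com" `
 -Subject "Direct send test" -Body "No authentication; the recipient exists on this server."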

Authenticated Mail on Port 25 (Only factor #1 applies)

If the system doesn’t allow you to change the port number it uses, but does allow you to authenticate, you can make a small change to Exchange to allow the system to work. This is done by opening the Default Receive connector (aka the Default Front End receive connector on Exchange 2013 and later) and adding Exchange Users to the permission settings on the Security tab, as shown in the screenshot below:

[Screenshot: Security tab of the Default Front End receive connector with the Exchange Users permission group enabled]

Once this setting is changed, restart the Transport service on the server and you can then perform authenticated relaying on port 25.
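If you’d rather script the change than click through the GUI, the equivalent Exchange Management Shell commands look something like this. The connector identity below uses the default name on Exchange 2013 and later with a placeholder server name, and the permission group list assumes the connector still has its default groups, so check yours with Get-ReceiveConnector first:

# Add Exchange Users to the default port 25 connector, keeping the default groups.
Set-ReceiveConnector -Identity "EX2013\Default Frontend EX2013" `
 -PermissionGroups AnonymousUsers,ExchangeUsers,ExchangeServers,ExchangeLegacyServers
# Restart transport so the new permissions take effect.
Restart-Service MSExchangeTransport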

Conclusion

If you do find you need to use an anonymous relay, by all means, do so with careful consideration, but always be conscious of the fact that it isn’t always necessary. As always, comments and questions on this article and others are welcome, and I’ll do my best to answer as soon as possible.


How Does Exchange Autodiscover Work?

Autodiscover has been one of the more annoying features of Exchange since Microsoft reworked its email solution in Exchange 2007. Every version since has implemented it, and Microsoft may eventually require its use in versions following Exchange 2016. So what is Autodiscover, and how does it work?

Some Background

Prior to Exchange 2007, Outlook clients had to be configured manually. In order to do that, you had to know the name of the Exchange server and use it to configure Outlook. Further, if you wanted to use some of the features introduced in Exchange 2003 SP2 and Outlook 2003 (and newer), you had to manually configure a lot of settings that didn’t really make sense. In particular, Outlook Anywhere requires configuration settings that might be a little confusing to the uninitiated. This got even more complicated in larger environments that had numerous Exchange servers but could not yet afford the expense of a load balancer.

The need to manually configure email clients resulted in a lot of administrative overhead, since Exchange admins and Help Desk staff were often required to configure Outlook for users or provide a detailed list of instructions for people to do it themselves. As most IT people are well aware, even the best set of instructions can be broken by some people, and an IT guy was almost always required to spend a lot of time configuring Outlook to talk to Exchange.

Microsoft was not deaf to the cries of the overworked IT people out there, and with Exchange 2007 and Outlook 2007 introduced Autodiscover.

Automation Salvation!

Autodiscover greatly simplifies the process of configuring Outlook to communicate with an Exchange server by automatically determining which Exchange server the user’s Mailbox is on and configuring Outlook to communicate with that server. This makes it much easier for end users to configure Outlook, since the only things they need to know are their email address, AD user name, and password.

Not Complete Salvation, Though

Unfortunately, Autodiscover didn’t completely dispense with the need to get things configured properly. It really only shifted the configuration burden from Users over to the Exchange administrator, since the Exchange environment has to be properly configured to work with Autodiscover. If things aren’t set up properly, Autodiscover will fail annoyingly.

How it Works

In order to make Autodiscover work without user interaction, Microsoft developed a method for telling Outlook where to look for the configuration info it needed. They decided this was most easily accomplished with a few DNS lookups based on the one piece of information everyone had to put in regardless of their technical know-how: the email address. Since they could only rely on getting an email address from users, they knew they’d need a default pattern for the lookups; otherwise the client machines would need at least a little configuration before working right. Here’s the pattern they decided on:

  1. Look in Active Directory to see if there is information about Exchange
  2. Look at the root domain of the user’s email Address for configuration info
  3. Look at autodiscover.emaildomain.com for configuration info
  4. Look at the domain’s root DNS to see if any SRV records exist that point to a host that holds configuration info.

Note here that Outlook will only move from one step to the next if it doesn’t find configuration information.

For each step above, Outlook is looking for a specific file or a URL that points it to that file. The file in question is autodiscover.xml. By default, this is kept at https://<exchangeservername>/autodiscover/autodiscover.xml. Each step in the check process will try to find that file and if it’s not there, it moves on. If, by the end of step 4, Outlook finds nothing, you’ll get an error saying that an Encrypted Connection was unavailable, and you’ll probably start tearing your hair out in frustration.
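You can follow the same trail yourself to see where Outlook would land. Here’s a rough diagnostic sketch in PowerShell, using contoso.com as a stand-in for your email domain. Note that a 401 response here is actually a good sign: it means the endpoint exists and just wants credentials.

# Probe steps 2-4 of the Autodiscover lookup order for a domain.
$domain = "contoso.com"
foreach ($url in "https://$domain/autodiscover/autodiscover.xml",
         "https://autodiscover.$domain/autodiscover/autodiscover.xml") {
    try { Invoke-WebRequest -Uri $url -UseBasicParsing | Out-Null; "$url answered" }
    catch { "$url : $($_.Exception.Message)" }
}
# Step 4: look for the SRV record that points at an Autodiscover host.
Resolve-DnsName -Name "_autodiscover._tcp.$domain" -Type SRV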

What’s in the File?

Autodiscover.xml is a dynamically generated file written in XML that contains the information Outlook needs to access the mailbox that was entered in the configuration wizard. When Outlook makes a request to Exchange Autodiscover, the following things will happen:

  1. Exchange requests credentials to access the mailbox.
  2. If the credentials are valid, Exchange checks the AD attributes on the mailbox that has the requested Email address.
  3. Exchange determines which server the Mailbox is located on. This information is usually stored in the msExchHomeServerName attribute on the associated AD account.
  4. Exchange examines its Topology data to determine the best Client Access Server (CAS) to use for access to the mailbox. The Best CAS is determined using the following checks:
    1. Determine AD Site the Mailbox’s Server is located in
    2. Determine if there is a CAS assigned to that AD site
    3. If no CAS is in the site, use Site Topology to determine next closest AD Site.
    4. Step 3 is repeated until a CAS is found.
  5. Exchange returns all necessary configuration data stored in AD for the specific server. The configuration data returned is:
    1. CAS server name
    2. Exchange Web Services URL
    3. Outlook Anywhere Configuration Data, if enabled.
    4. Unified Communications Server info
    5. MAPI over HTTP proxy server address (if that is enabled)
  6. Outlook will take the returned information and punch it into the necessary spots in the user’s profile information.
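Exchange also ships a cmdlet that exercises roughly this same sequence from the shell, which is a handy way to see what step 5 hands back for a given mailbox (the address is a placeholder):

# Run Exchange's own Autodiscover/web services checks for a test mailbox.
Test-OutlookWebServices -Identity "user@contoso.com" | Format-List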

Necessary Configuration

Because all of this is done automatically, it is imperative that the Exchange server is configured to return the right information. If the information returned to Autodiscover is incorrect, either the mailbox connection will fail or you’ll get a certificate error. To get Autodiscover configured right, the values returned in steps 5.1, 5.2, 5.3, and 5.5 of the above process must be set correctly. This can be done with a script, in the Exchange Management Shell, or in the Exchange management UI (EMC for 2007 and 2010, ECP/EAC for 2013/2016).
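As a sketch of what that configuration looks like in the Exchange Management Shell, the commands below point the returned URLs at a name that actually appears on your certificate. Here mail.contoso.com and the server name EX2013 are placeholders, and the exact cmdlets and required parameters vary slightly by version:

# 5.1: the URL Autodiscover itself hands out
Set-ClientAccessServer -Identity EX2013 `
 -AutoDiscoverServiceInternalUri "https://mail.contoso.com/autodiscover/autodiscover.xml"
# 5.2: the Exchange Web Services URL
Set-WebServicesVirtualDirectory -Identity "EX2013\EWS (Default Web Site)" `
 -InternalUrl "https://mail.contoso.com/EWS/Exchange.asmx" `
 -ExternalUrl "https://mail.contoso.com/EWS/Exchange.asmx"
# 5.3: Outlook Anywhere host names
Set-OutlookAnywhere -Identity "EX2013\Rpc (Default Web Site)" `
 -InternalHostname "mail.contoso.com" -ExternalHostname "mail.contoso.com" `
 -InternalClientsRequireSsl $true -ExternalClientsRequireSsl $true
# 5.5: MAPI over HTTP URLs (Exchange 2013 SP1 and later)
Set-MapiVirtualDirectory -Identity "EX2013\mapi (Default Web Site)" `
 -InternalUrl "https://mail.contoso.com/mapi" -ExternalUrl "https://mail.contoso.com/mapi"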

Importance of Autodiscover

With the release of Outlook 2016, it is no longer possible to configure server settings manually in Outlook. You must use Autodiscover. Earlier versions can avoid it by manually configuring each Outlook client. However, before doing that, consider the cost of having to touch each and every computer to properly configure Outlook. It can take 5 minutes or more to configure Outlook on one computer using the manual method, and with Exchange 2013 it can take longer, since you’re also required to input Outlook Anywhere configuration settings, which are more complex than just entering a server name, username, and password. If you multiply that by the number of computers in your environment and add in the time it takes to actually get to the computers, boot them up, and get to the Outlook settings, the time spent configuring Outlook manually adds up very quickly. Imagine how much work you’d be stuck with configuring 100 systems!

In contrast, it usually only takes 10 to 20 minutes to configure Autodiscover. When Autodiscover is working properly, all you have to do is tell your users what their email address is and Outlook will do all the work for you. With a little more configuration or some GPO work, you don’t even have to tell them that!

When you start to look at the vast differences in the amount of time you have to spend configuring Outlook, whether or not to use Autodiscover stops being a question of preference and starts being an absolutely necessary part of any efficient Exchange-based IT environment. Learning to configure it properly is, therefore, one of the most important jobs of an Exchange administrator.

Email Encryption for the Common Man

One of my co-workers had some questions about email encryption and how it worked, so I ended up writing him a long response that I think deserves a wider audience. Here’s most of it (leaving out the NDA covered portions).

Email Encryption and HIPAA Compliance for the Uninitiated

In IT security, when we talk about encryption, there are a couple of different “types” of encryption that we worry about: one is encryption “in transit,” and the other is encryption “at rest.”

Encryption “in transit” is how we ensure that when data is moving from one system to another, it is either impossible or difficult beyond reasonable likelihood for someone to intercept and read that data. There are pieces of many data exchanges that we have no control over, so we cannot guarantee that there isn’t someone out there with a packet sniffer reading every bit that passes between our server and someone else’s (this is a form of “passive” data inspection, possible from just about any trunk line on a switch). We can make sure it doesn’t happen on our end, but we can’t control the ISP or the other person’s side of things.

The basic email encryption system, TLS (Transport Layer Security…Don’t ask what that means), usually follows this incredibly oversimplified pattern:

1. Server 1 contacts Server 2
2. Server 2 says, “Hi. I’m Server 2. Who are you?”
3. Server 1 says, “Hi. I’m Server 1.”
4. Server 2 says, “Nice to meet you Server 1. What can I do for you?”
5. Server 1 says, “Before we really get into that, I’d like to make sure no one is eavesdropping on our conversation. Can we start talking in a language no one but us knows?” (This is basically what encryption is)
6. Server 2 says, “Sure. What language would you like to use?”
7. Server 2 hands Server 1 a certificate that serves as a kind of translator. The two servers use it to agree on a secret language that only they know, and from now on each will use that language to translate (encrypt and decrypt) everything said between them.
8. Server 2 says, after translating what it wants to say into the new encryption language, “Okay, what would you like to do?”
9. Server 1 translates this message from the encrypted language, then makes its first request to Server 2 after translating it into the encrypted language.

From this point on, each server will communicate exclusively with the encryption “language” provided by the certificate they exchanged, and anyone who is eavesdropping (packet sniffing) will only see a bunch of gobbledygook that they can’t understand.
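If you’re curious what this looks like outside of the analogy, here’s a minimal PowerShell sketch that opens an SMTP session, asks to switch to the encrypted “language” (the STARTTLS command), and reads the name on the certificate the other server hands over. mail.example.com is a placeholder, and the sketch assumes single-line 220 responses for brevity:

# Open an SMTP session and upgrade it to TLS, the way one mail server would.
$server = "mail.example.com"
$tcp    = New-Object System.Net.Sockets.TcpClient($server, 25)
$stream = $tcp.GetStream()
$reader = New-Object System.IO.StreamReader($stream)
$writer = New-Object System.IO.StreamWriter($stream)
$writer.AutoFlush = $true
$reader.ReadLine() | Out-Null                 # "220" greeting from the remote server
$writer.WriteLine("EHLO client.example.com")
while ($reader.ReadLine() -match "^250-") { } # read capabilities; STARTTLS should be listed
$writer.WriteLine("STARTTLS")
$reader.ReadLine() | Out-Null                 # "220 Ready to start TLS"
$tls = New-Object System.Net.Security.SslStream($stream)
$tls.AuthenticateAsClient($server)            # the TLS handshake; validates the name by default
$tls.RemoteCertificate.Subject                # the name printed on the certificate
$tcp.Close()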

There are more complex versions of this scenario that make things more secure. For instance, in a Domain Authenticated TLS situation, both servers have to be “Authenticated,” which is to say, they must prove they are the server the message is supposed to go to. This is done by validating the name that is printed on the certificate with the name the servers use when introducing themselves to one another.

In the example above, it is possible for someone to inject themselves into the conversation and decrypt everything from Server 1, read it, encrypt it again, and send it on to Server 2 (this is called a man-in-the-middle attack, and is an “active” form of eavesdropping, because it requires a complex setup and active manipulation of the data being inspected). Domain Authenticated TLS makes this much more difficult, because a server that acts as an intermediary in a man-in-the-middle attack cannot use the name that exists on the certificate unless it is owned by the entity that created the certificate to begin with. When you get certificate errors while browsing the web, it’s usually because you entered a name that isn’t listed on the certificate installed on the server you’re talking to, or because the server is using a name that isn’t listed on the certificate. (Certificates are a heavy subject, so I’ll just bypass that for now.)

Anyway, data “at rest” is any data that is just sitting on a hard drive or disk somewhere, waiting for someone to read it. In order to read that data, you have to gain access to a server (or workstation) that has access to the data and read it. Encryption of data “at rest” requires more effort to accomplish, because it has to be decrypted every time someone tries to read it. Technologies like Bitlocker or PGP allow data to be encrypted while it’s just sitting there on a server.

We only care about encryption of data “in transit” when we work with HIPAA regulations. This is because the only way to access data that is “at rest” is to gain physical access to the data or to systems that have access to that data. HIPAA has other regulations that help reduce the likelihood that either of those things will happen, and since data “at rest” is never outside our realm of control, we can do much more to protect it. Most ePHI is sitting in a datacenter that is locked and requires specific permission to access, but that coverage doesn’t apply to the data when it’s moving between servers.

How Will the Cloud Affect My Career as an IT Professional?

Well, after a year’s hiatus due to some particularly difficult personal trials, I’ve decided to come back to the blog and weigh in on one of the big hot-button subjects in the IT industry: how the cloud will affect the job market.

The Push to Cloud

In the modern world, as the Internet has developed and grown in prominence in our lives, increased infrastructure, security technology, and bandwidth are beginning to allow businesses and individuals to forgo paying big bucks for things like processing power and storage. Companies have been moving their critical systems into third-party datacenters for years, but with the development of entirely cloud-based solutions like Office 365, Azure, Google Apps, and AWS, there is now a large industry push to reduce infrastructure costs by moving away from self-managed IT solutions. The result is another paradigm shift in the IT industry.

Now, this is not to say that IT hasn’t seen any kind of paradigm shift before, quite the contrary. It seems like every year we’re having to face some new technology that is permeating the industry. From the introduction of Ethernet, to wireless networking, to virtualization and VDI, most of us have dealt with the changes as they’ve come, learning new techniques and adjusting the way we work. But the push to cloud has a lot of IT personnel worried about their jobs.

What About my Job!?

Cost savings has been the primary driver behind the recent push to adopt cloud services. Executives around the world are salivating at the possibility of reducing their costs by shifting the responsibility of IT infrastructure management onto third parties. This shifting of responsibility has a lot of IT people on edge: if the stuff you do every day is outsourced to a third-party service provider, what happens to your job? If there are no servers or networks to maintain, are you going to lose your job?

The answer here is actually pretty simple. Unless you work in a few specific niches, you’re pretty likely to keep your job.

Working IT in the Age of the Cloud

While moving to the cloud does reduce the need for critical infrastructure and complex solutions, it doesn’t really reduce the need for administration, problem solving, end-user support, and technical know-how. After performing numerous migrations from various email systems to Office 365, the one thing I’ve discovered is that moving to the cloud doesn’t really make the IT guy’s job any easier. In many ways, it actually makes things more difficult and complex, which means that if you’re a competent IT professional your job is pretty safe.

How will the Cloud Change My Job?

Now, that isn’t to say that your job isn’t going to change. Moving to the cloud requires a good deal of adaptation and adjustment to new ways of thinking and managing resources. For instance, if you want any kind of Active Directory integration with Office 365, you’re going to have to use Dirsync, and using Dirsync means you can’t modify things like distribution groups, user accounts, and passwords in Office 365. These things have to be managed in Active Directory, and that means all of those really menial tasks you’ve been handing off to department heads, like adding distribution group members, are going to land right back in your lap if you’re as nervous as I am about giving people with no IT experience access to the Active Directory Users and Computers snap-in. Be prepared, as well, to face a massive influx of support calls asking for help with password resets.
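As a small example of what lands back in your lap: a distribution list change that a department head used to make themselves now has to happen in Active Directory and wait for the next sync. A hypothetical sketch:

# With Dirsync, group membership is managed in AD, not in Office 365.
Import-Module ActiveDirectory
Add-ADGroupMember -Identity "Sales Team DL" -Members "jsmith"
# The change appears in Office 365 after the next Dirsync cycle (every 3 hours by default).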

In addition to the technical limitations involved with moving to the cloud, you also have to deal with the fact that you lose a lot of direct control over the IT infrastructure. Since your resources are now located on a system owned and operated by someone else, if things break you have to go to the vendor to get it fixed, and that brings up any number of frustrations, depending on who you work with. If you’ve ever spent any time on the phone with Microsoft Support, you’re likely to dread any interaction with them from that point on. You won’t have to do much of the work involved in fixing the problem, but you will have to sit around twiddling your thumbs while someone else does, and that can be a little maddening at times. This does, of course, depend on how competent the person on the other end of the phone is. Sometimes you get lucky and find someone who knows their stuff. Sadly, that’s more of an exception than a rule, so you may need to brush up on your people skills a bit and learn how to light a fire under the support technician on the phone with you.

One Step Forward, Two Steps Back

As more services start moving to the cloud, we’ll probably see a reduction of administrative overhead, but for now cloud solutions will feel like a major step backward to a lot of people, and it really is a step backward. You see, in the old days, most businesses that made investments in IT infrastructure would utilize systems called Mainframes. The mainframe system would perform all of the actual processing work (and was as big as a house in some cases), and people who used computers would interface with the mainframe from a system that was directly connected to it. Sound a lot like a Virtual Desktop Environment? It’s a lot like how cloud services work, too, except that with the Cloud, we replace the centralized Mainframe system with a vast, globe spanning collection of servers. It’d be like if one company owned a single mainframe that was rented out for numerous companies to use at once. As a result, we have to rethink the way we work. Luckily, this does mean fewer trips to the data-center to reboot servers or move cables around, which is a major plus for some people (not me, though. Data-centers relax me for some reason. I’m not sure if it’s the steady humming sound or 50 degree temperatures).

Niche Workers Beware!

With all of this said, there are a handful of jobs that are going to start disappearing in the next few years. If you happen to work entirely in one of these areas, you should seriously consider branching out, or you may soon find yourself without a reason to work.

1. Backup Operations – This is one niche where the writing has been on the wall for a while now. Companies have been moving toward high-availability solutions for some time, which means that the need to spend copious funds on backup solutions and storage has been falling. High-availability solutions generally rely on having multiple copies of critical data on multiple servers, so the loss of a single server no longer puts people into panic mode. With the cloud, data is placed in systems with so much redundancy, and such a high level of integrity, that data loss is extremely uncommon, and unrecoverable data loss is nearly impossible for some cloud solutions. So if you’re a backup operator and that’s all you’ve done for years, you might find your job in the cross-hairs. It’s time to start expanding your repertoire.

2. Hardware maintenance – To me, it should be obvious that computer hardware specialists are going to see less work with the move to cloud, since there is a definite drop in the amount of server and network hardware required when your company is running in the cloud, but I figured I should at least mention it.

3. Internal Network Administration – This particular job won’t ever go away, but with cloud solutions we may see a definite drop in the complexity and overhead required for running a LAN, and the concept of the WAN may begin to disappear as satellite offices will only require an Internet connection to access company resources located in the Cloud.

4. IT Infrastructure Design Specialists – Since the cloud consists almost entirely of prepackaged solutions, the demand for complex architectural designs will start to disappear, meaning that people who make their living designing and implementing IT infrastructure solutions are going to have a lot less work to do. This is the one that makes me sad, since I really enjoy Infrastructure Design. As the cloud push progresses, Infrastructure design will change from designing solutions to developing solutions for managing and interfacing with cloud-based services (which is not nearly as much fun).

5. SAN management – The concept of the Storage Area Network isn’t really even into adolescence yet, and here we’re moving away from it? Well, yes, pretty much. As the cloud sees greater levels of adoption, people who focus on the management, provisioning, and optimization of centralized data storage are probably going to see less work.

Now, I’m not saying that these niche jobs will disappear entirely. Nor is this a comprehensive list of jobs that the cloud will be making less important. There will always be companies who avoid the Cloud like the plague, and they’re going to need people who know their stuff. But what I will say is that if you focus in these areas alone, be prepared to branch out or you’re probably going to start spending your days working for a cloud service provider like Microsoft, which might not be as much fun as working for a (much) smaller company.

Adapt or Die

In the end, the cloud will simply force IT professionals to do what most of us do best, adapt to changes in our surroundings. We’ll need to change the way we think and interact in our jobs to be successful in our careers, but we should still have careers despite the changes to the IT landscape.

Exchange Autodiscover – The Active Directory SCP

In a previous post I explained how you can use an SRV record to resolve certificate issues with Autodiscover when your internal domain isn’t the same as your email domain. This time, I’m going to explain how to fix things by making changes to Exchange and Active Directory that will allow things to function normally without an SRV record, or any DNS records at all, for that matter. This only works, though, if the computers that access Exchange are members of your domain and you configure Outlook using user@domain.local, because that is how Exchange hands out Autodiscover configuration URLs by default. However, if you have a private domain name in your AD environment (which you should try to avoid when building new environments these days), you will always get a certificate error in Outlook, because SSL certificates from third-party CA providers won’t include private domains on SAN certificates anymore. To fix this little problem, I will first give you a little information on a lesser-known feature in Active Directory called the Service Connection Point (SCP).

Service Connection Points

SCPs play an important role in Active Directory. They are basically entries in the Active Directory Configuration partition that define how domain-based users and computers can connect to various services on the domain, hence the name Service Connection Point. They typically show up in one of the Active Directory tools that a lot of people overlook but that has been *extremely* important to Exchange since 2007 was released: Active Directory Sites and Services (ADSS). ADSS is typically used to define replication boundaries and paths for Active Directory Domain Controllers, and Exchange uses the information in ADSS to direct users to the appropriate Exchange server in large environments with multiple AD sites. But you can also use it to view and make changes to the SCPs that are set up in your AD environment. You do this with a feature that is overlooked even more than ADSS itself, the Services node. This can be exposed by right-clicking the Active Directory Sites and Services object with ADSS open, selecting View, then clicking “Show Services Node,” like this:

[Screenshot: the Show Services Node option in the ADSS View menu]

Once you open the services node, you can see a lot of the stuff that AD uses in the back end to make things work in the domain. Our focus here, however, is Exchange, so go into the Microsoft Exchange node. You’ll see your Exchange Organization’s name there, and you can then expand it to view all of the Service Connection Points that are related to Exchange. I wouldn’t recommend making any changes in here unless you really know what you’re doing, since this view is very similar to ADSIEdit in that it allows you to examine stuff that can very rapidly break things in Active Directory.

Changing the Exchange Autodiscover SCP

If we look into the Microsoft Exchange services tree, we first see the Organization name. Expand this, then navigate to the Administrative Group section. In any Exchange version that supports Autodiscover, this will show up as First Administrative Group (FYDIBOHF23SPDLT). If the long string of letters confuses you, don’t worry about it. That’s just a joke the developers of Exchange 2007 put into the system. It’s a +1 Caesar cipher that means EXCHANGE12ROCKS when decoded. Programmers don’t get much humor in life, so we’ll just have to forgive them for that and move on. Once you expand the Administrative Group node, you’ll be able to see most of the configuration options for Exchange that are stored in AD. Most of these shouldn’t be touched. For now, expand the Servers node. This is the section that defines all of your Exchange servers and how client systems can connect to them. If you dig around in here, mostly you’ll just see folders, but if you right-click on any of them and click Properties, you should be able to view an Attributes tab (in Windows 2008+, at least; prior to that you have to use ADSIEdit to expose the attributes involved in the Services node). There are lots of cool things you can do in here, like change the maximum size of your transaction log files, implement strict limits on the number of databases per server, change how much the database grows when there isn’t enough space to commit a transaction, and other fun things. What we’re focusing on here is Autodiscover, though, so expand the Protocols tree, then go to Autodiscover, as seen below.

[Screenshot: the Autodiscover node under Protocols in the Services tree]

Now that we’re here, we see each one of the Exchange CAS servers in our environment. Mine is called Exchange2013 because I am an incredibly creative individual (Except when naming servers). Again, you can right click the server name and then select Properties, then go to the Attribute Editor tab to view all the stuff that you can control about Autodiscover here. It looks like a lot of stuff, right? Well, you’ll really only want to worry about two attributes here. The rest are defined and used by Exchange to do…Exchangey stuff (Technical term). And you’ll really only ever want to change one of them. The two attributes you should know the purpose of are “keywords” and “serviceBindingInformation”.

  • keywords: This attribute, as you may have noticed, defines the Active Directory Site that the CAS server is located in. This is filled in automatically by the Exchange subsystem in AD based on the IP address of the server. If you haven’t created subnets in ADSS and assigned them to the appropriate site, this value will always be the Default site. If you change this attribute, it will get written over in short order, and you’ll likely break client access until the re-write occurs. The *purpose* of this value is to allow the Autodiscover Service to assign a CAS server based on AD site. So, if you have 2 Exchange Servers, one in site A and another in site B, this value will ensure that clients in site A get configured to use the CAS server in that site, rather than crossing a replication boundary to view stuff in site B.
  • serviceBindingInformation: Here’s the value we are most concerned with in this post! This is the value that defines where Active Directory Domain joined computers will go for Autodiscover Information when you enter their email address as username@domain.local if you have a private domain name in your AD environment. By default, this value will be the full FQDN of the server, as it is seen in the Active Directory Domain’s DNS forward lookup zone. So, when domain joined computers configure Outlook using user@domain.local they will look this information up automatically regardless of any other Autodiscover, SRV, or other records that exist in DNS for the internal DNS zone. Note: If your email domain is different from your AD domain, you may need to use your AD domain as the email domain when configuring Outlook for the SCP lookup to occur. If you do not want to use the AD Domain to configure users, you will want to make sure there is an Autodiscover DNS record in the DNS zone you use for your EMail Domain.

Now, since we know that the serviceBindingInformation value sets the URL that Outlook will use for Autodiscover, we can change it directly through ADSS or ADSIEdit by replacing what’s there with https://servername.domain.com/Autodiscover/Autodiscover.xml . Once you do this, internal clients on the domain that use user@domain.local to configure Outlook will be properly directed to a value that is on the certificate and can be properly configured without certificate errors.

Now, if you’re a little nervous about making changes this way, you can actually change the value of the serviceBindingInformation attribute by using the Exchange Management Shell. You do this by running the following command:

get-clientaccessserver | set-clientaccessserver -autodiscoverserviceinternaluri "https://servername.domain.com/Autodiscover/Autodiscover.xml"

This will directly modify the Exchange AD SCP and allow your clients to use Autodiscover without getting certificate errors. Not too difficult and you don’t have to worry about split DNS or SRV records. Note, though, that like the SRV record you will be forcing your internal clients to go out of your network to the Internet to access your Exchange server. To keep this from happening, you will have to have an Internal version of your External DNS zone that has Internal IPs assigned in all the A records. There just is no way around that with private domain names any longer.
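Either way you make the change, it’s worth reading the value back afterward to confirm what the SCP is now advertising:

# Check what the Autodiscover SCP currently points to.
Get-ClientAccessServer | Format-List Name, AutoDiscoverServiceInternalUri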

Final Note

Depending on your Outlook version and how your client machines connect, there is some additional configuration that will need to be completed to fully resolve any certificate errors you may have. Specifically, you will need to modify some of the Exchange Virtual Directory URLs to make sure they are returning the correct information to Autodiscover.

Avoiding Issues with Certificates in Exchange 2007+

For reference, modern Active Directory best practices can help you avoid trouble with certificate errors in Exchange. Go here to see some information about modern AD domain naming best practices. If you follow that best practice when creating your AD environment, you won’t have to worry so much about certificate errors in Exchange, as long as the certificate you use has the Exchange server(s) name listed. However, if you can’t build a new environment or aren’t already planning to migrate to a new AD environment in the near future, it isn’t worth the effort to do so when small configuration changes like the one above can fix certificate errors.

Office 365, ADFS, and SQL

Here’s an issue I’ve just run into that there doesn’t seem to be a good answer to on the Internet. When you are building a highly available ADFS farm to enable Single Sign-On for Office 365, should you use the Windows Internal Database (WID) that comes with Windows Server or store the ADFS configuration on a SQL server? Put simply, if you are using ADFS *only* for Office 365, there is no need to use SQL Server for storing the configuration database. Here’s why.

What Does ADFS with SQL Get You?

I spent a good day or so trying to answer this question, because I came in after the initial configuration of a Hybrid Exchange migration to Office 365 that was set up using SQL to store the ADFS configuration. When we went to test failover, we ran into a *mess* of problems with this configuration. So I’m going to try to explain things in a way that makes sense, rather than just telling you what Technet says in their article on the subject.

Storing your ADFS configuration in a SQL database gives you 4 things:

  1. The ability to detect and block SAML and WS-Federation Token Replay attempts.
  2. The ability to use SAML artifact resolution
  3. The ability to use SQL fail-over to increase availability of the Configuration database
  4. Scalability!!!

Using WID with ADFS means these features are not available to you. Here’s what each of those lines of text means (Why no, I’m not going to just give you that without definitions like Technet does. Can you tell I’m a little grumpy about Technet’s coverage of this subject?):

SAML and WS-Federation Token Replay Attempts

This is actually a potential security vulnerability that comes about because of how Token authentication mechanisms work. In some cases, an application that accepts a federated token identity for access may allow users to “replay” a previously issued authentication Token to bypass the authentication mechanisms used to issue that Token. You can do this by ending a session with a federated application, then returning to the application with a cached version of the interaction. The easiest way to do this is to just push the Back button in a web browser. If the federated application isn’t properly designed, it will accept the cached version of the token used in the previous session and continue operating just like someone never logged out.

It should be noted that this is a serious security issue. If you have an environment where there are systems exposed to public use by multiple users (Kiosks, for example), you will want to make sure that Token Replay attempts are dropped. But here’s the thing, Token Replay is a vulnerability that affects the application you are using federated login to access. It is not a vulnerability of ADFS itself. Basically, if the applications that you are using ADFS to federate with are designed properly, there is no need to have your ADFS environment provide this kind of protection. Office 365 was designed to prevent Token Replay attempts from succeeding. So if you are only federating with Office 365, you don’t need to have this functionality in your ADFS environment. So, no need to have a SQL based Configuration Database for this reason.

SAML Artifact Resolution

I have tried for a while to find a good definition for what SAML Artifact Resolution actually is. I’m still not 100% on it, because the technical documentation for the SAML 2.0 standard is about as informative as a block of graphite. However, it seems like this is essentially a method through which both sides of a SAML token exchange can check on and authenticate one another using a single session, rather than multiple sessions. This comes in handy for situations where load balancing is in use and the application requests verification of the token provider.

The good news is that you don’t have to understand SAML Artifact Resolution. Office 365 doesn’t use it, so you don’t have to worry about having it. No SQL required for this purpose.

SQL Failover Capabilities

At first glance, having SQL server failover capabilities for your ADFS configuration database sounds like a great idea. I mean, SQL server has great failover capabilities built in. The problem is, so does ADFS. When you use WID for your ADFS Configuration database, the first server in the Farm will store a read/write enabled copy of the configuration database. Each additional ADFS server added to the farm will pull a copy of this database and store it in a read only format. So in a WID based ADFS farm, every ADFS server in the farm has a copy of the configuration database by default. ADFS Proxy servers don’t store or need access to the configuration database, so only the backend ADFS servers will benefit at all from SQL integration.

But here’s the thing: for SQL integration to work properly in a way that allows full SQL failover capabilities, you need to have a valid SQL cluster. With SQL 2008R2 and earlier, this means you have to have at minimum 2 SQL servers installed on a version of Windows that has Cluster Services available. Cluster Services requires shared storage before it will create a failover cluster for SQL versions prior to 2012. If you have SQL 2012, you can manage much simpler failover, but if you’re limited to SQL 2008R2 and earlier, you’re going to end up with a significantly higher infrastructure requirement to get SQL failover working. If you can use SQL 2012 to store the ADFS database, there is some advantage to using SQL clustering for ADFS, but the advantage doesn’t actually justify the costs of doing so.

“What about SQL mirroring?” you ask. Well, that’s a good idea in theory. You can have your ADFS config database mirrored on two SQL servers, but you will run into some major headaches using this technique, especially if you ever need to run a failover. First off, ADFS configuration with SQL Server requires that you use the FSConfig command-line tool to create the config database and add servers to the farm. This is a huge pain to do, but it’s necessary. Second, if you ever have to fail over to the other SQL server for any reason, you have to reconfigure every one of the servers in the farm. SQL mirroring doesn’t utilize a clustered VIP, so configuration with FSConfig requires you to point each server at the active SQL server. When you activate the SQL database on the other server, things may or may not work without reconfiguration. If they do work, there’s a chance that they might *stop* working at any time in the future without notice, until you run FSConfig on each member of the farm and use the now-active mirror’s address in the command. Then you may have to log in to each of your proxy servers and run the proxy configuration wizard to reinitialize the proxy trust between it and the ADFS farm. That means that if you want to fail over with SQL mirroring, you have to log in to at least 4 servers and reconfigure each one every time the active SQL mirror changes. This is a truly unwieldy and excessive failover process, and the money you save by not building a full SQL cluster isn’t worth it for ADFS’s configuration database.

The failover abilities of SQL allow you to have constantly available access to a writable copy of the ADFS configuration database. If you use WID and the first server in the farm goes down, you cannot make changes to your ADFS configuration. No adding new servers, no modifications to the ADFS functions, etc. The good news is that the only time you need access to a writable copy of the database is when you are adding new servers to the farm or making modifications to ADFS functions. If you’re using Office 365, you almost never have to make changes to either of these once it’s set up and running. ADFS will continue operating without a writable copy of the database as long as even one server in the farm is accessible. So realistically, you don’t need SQL failover capabilities for ADFS’s configuration database.

Scalability!!!

Scalability is one of my favorite buzzwords. Scalability means “I can add more servers to handle this workload without any problems.” And SQL allows you to do that. However, WID-based ADFS allows you to do so as well. The limitation, though, is that you can only have up to 5 servers in a WID-based ADFS farm (NOTE: proxy servers don’t count against this maximum number). “Only five servers!?” you may ask in heated exasperation. “Yes!” I answer. “Well, I can’t stand limitations! I will use SQL!” you respond. “You will never need more than 4 ADFS servers, ever!” I retort. ADFS is one of the most lightweight roles available for Windows Server. I have deployed ADFS in numerous environments with user counts well past the 10,000 mark, and the ADFS servers almost never register a blip on the performance monitors. This is because ADFS does three things: it provides a website for users to log in to, it queries Active Directory to authenticate users, and it generates a small token that contains less than 1KB of data that it then sends to a remote service. This entire process requires about 10 seconds per user per session, and that includes about 9 seconds of someone typing in their credentials. So there isn’t really even much need for load balancing capabilities on ADFS. One server can handle monstrous loads. If you have an environment with 100,000 users accessing numerous federated applications, you might need to add 1 or 2 extra servers to your farm to handle the load, but I doubt it. You’re better off adding RAM and processing power to your ADFS servers to improve performance than adding extra servers to the farm.

If you want multi-site high availability, you may need 5 servers, but probably not more. Let’s say you want site resiliency in your ADFS configuration as well as failover capability in your primary and secondary sites. This configuration requires a minimum of 4 ADFS servers: two for the primary site, two for the failover site, plus 4 proxy servers that don’t count against the maximum of 5. There you have it, full site resiliency and in-site failover capability using fewer than 5 farm servers. You can even add a third site to the farm using the remaining server without any issues, if you want. But let’s be honest here: if there is an emergency that takes down two geographically dispersed datacenters at the same time, you’re going to be looking for the nearest shotgun to fight off zombies/anarchist bandits, not checking to make sure your ADFS farm is still working, so triple hot-site resiliency for ADFS is probably more than anyone needs. So you don’t need SQL for ADFS if you want scalability. It’s scalable enough without SQL.

EXPLOSIONS! I mean Conclusions

If you are planning to build an ADFS farm for Single Sign-On with Office 365, you will ask the question, “Do I need SQL?” The answer to that question, in pretty much every possible case, is “Not if you’re just going to use it for Office 365.” SQL-integrated ADFS configurations add some features, sure, but there is almost no situation where any of those features must exist for ADFS to work with Office 365. So don’t bother even trying it out. It adds an unnecessary level of complexity, cost, and management difficulty, and gives no advantages whatsoever.

Should I Switch to Office 365? A Frank Examination

The Cloud – An Explanation

As time moves on, technology moves with it, and times they are a’changin’! There have been many drastic changes in the world of IT over the years, but the most recent, the move toward cloud computing, is probably one of the most drastic and industry-redefining changes to occur since the release of HTTP in the early 1990s.

Cloud computing is, put simply, placing your IT infrastructure into the hands of a third party, and it’s becoming big news for companies like Microsoft, Amazon, and Apple, who are working hard to push the IT world into the Cloud so they can take advantage of the recurring income model that their Cloud systems are built around.

A more complex explanation of cloud computing almost always requires a metaphor or analogy of some kind, so here’s mine: cake. I love cake. Everyone loves cake. If you don’t love cake, you’re crazy and you should give your cake to me. But there is a problem with cake. If you live alone, you can never really have cake, because in order to have cake you usually have to buy or make a whole cake, and if you have a whole cake to yourself, you will very quickly regret having purchased a whole cake for yourself as you roll yourself out of bed and out the door each morning on your gigantic rolls of fat. Then you’ll have a heart attack. Cake is meant to be shared. One cake is enough to allow 12 people (or more, or less, depending on how much they love cake) to have a comfortable amount of cake. This is good for everyone, because they all get cake and aren’t fat because of it. So, how is the cloud like cake? Well, now that computing technology has advanced to the point where almost every computer system provides more power than is necessary for most tasks in the corporate environment, buying and building a complex IT infrastructure that meets all the needs of a specific company can be extremely expensive, and it’s highly likely that much of that infrastructure is going to be wasted because it is more than is needed.

Virtualization was the first step in addressing the problem of excess computing power. It allowed IT departments to combine multiple server roles, securely segregated, on a single physical server. Prior to the advent of virtualization, secure segregation of server roles required numerous physical servers, which in turn required a lot of space, power, and resources to maintain. Virtualization started shrinking the corporate datacenters of the world, and the concept of cloud computing seeks not only to shrink the corporate datacenter further, but to centralize it.

Cloud computing, like a giant cake, allows multiple corporate environments to share a single, gigantic infrastructure system. Rather than each company having its own segregated and wholly owned infrastructure that is managed, configured, and maintained separately, cloud solutions like Office 365 seek to provide all the services of a highly integrated and functional environment without requiring a fully dedicated infrastructure for each company that needs or wants that type of functionality. This is accomplished by taking highly customized versions of products that are normally available to individual corporations for a one-time investment and packaging them into a much smaller recurring-fee system. The cloud provider builds, maintains, and supports the infrastructure, and the cloud user makes use of the system just as if it were owned by them. In the case of Office 365, every individual or company that uses the solution is effectively lumped into the same infrastructure system and uses the portion of that system that they need, rather than using a portion of their own system and wasting the portion they don’t use. This is handy, because one of the worst things in the world is stale cake. I mean unused IT resources.

Getting to it – What Does Office 365 Offer

Office 365, being Microsoft’s cloud solutions environment, provides most of the IT services that companies depend on Microsoft for. Specifically: email and calendaring (through Exchange), collaboration (through SharePoint and Exchange), instant messaging (through Lync), and centralized file management and storage through OneDrive (formerly SkyDrive, formerly something else; Microsoft changes the names of stuff every week it seems, so we’ll just call this cloud storage). If you wanted to build the infrastructure to match what Office 365 provides for its monthly per-user fee (or a la carte if you don’t want all the services together), you would need the following things in your environment:

Exchange Online:

  • At minimum 3 Exchange 2013 servers configured with DAG
  • A hardware load balancer
  • A high-speed Fiber-channel SAN with about 55GB of storage per user (For normal Mailboxes) and an additional 5GB for each resource mailbox (Rooms, equipment, shared calendars, etc.).
  • An infinitely expanding low speed SAN (For archive mailboxes, for E3 licenses and above)
  • A secure email delivery solution to provide email stubbing (for E3 licenses and above)
  • Spam filtering services or a spam appliance

Sharepoint Online:

  • At minimum 2 Sharepoint 2013 Servers
  • Additional high-speed Fiber-Channel space, up to 50GB per user (again…this is space for OneDrive/SkyDrive/whatever they call it next)
  • More Load Balancing

Lync Online:

  • At minimum 3 Lync 2013 servers

Generic Software Infrastructure

  • Several Windows 2008/2012 Licenses
  • Active Directory (2 DCs minimum)
  • Multi-Tiered SAN infrastructure (with multi-site geo-replication capabilities)
  • A Load Balancer
  • Highly secured Firewalls

Software

  • Office Professional Plus license for each user

Physical Requirements

  • Multiple Physical locations spread across numerous geographical regions.
  • Each Physical Location should have Concrete security walls, entry barriers, full time security staff, man-traps, and multi-factor authentication before admittance

And that’s just for basic service. You would also need several employees dedicated to maintaining the infrastructure and supporting the environment, since it is very difficult to find individual IT personnel who have the skill set or mental constitution necessary to manage such an environment.

In other words, if you were to build the infrastructure necessary to provide the same functionality and level of service available with Office 365, you would need to spend several hundred thousand dollars in hardware, software, and manpower. This also does not take into account Architecting and development costs for setting up the environment, which is typically done by third party companies or contractors.

Normally, individual companies would need to spend this kind of money every time they upgrade their infrastructure to keep up with new features and changes in technology. But with Office 365, upgrades, patching and new services/features are released regularly, with no need to manage a patching system. All in all, using Office 365 can represent a significant cost savings for companies that need high availability, accessibility, scalability, and security in their IT infrastructure.

Why Wouldn’t I Use Office 365

If it costs a whole lot less and provides a lot of great features, why would you not want to use Office 365? The answer is that cloud solutions are designed to meet the needs of the average environment, not every environment. As the cloud matures (it’s really just an infant right now, so don’t be surprised if it tosses Cheerios across the room every now and then) it will become much more customizable, but for now, very little customization is possible with Office 365. For instance, any Line of Business application that must be installed directly on an Exchange server cannot be used; BlackBerry Enterprise Server is a classic example of something that will not work with Office 365. Many software providers that require this kind of access are shifting their focus to cloud-based integrations. Research In Motion, for one, has teamed with Microsoft to build support for BES features into Office 365, though that support has to be activated. Even so, there are several things that simply can’t be done with Office 365. I’ll outline some of the limitations here. For a more detailed explanation of what Office 365 *can* do, check out the official Service Descriptions available from Microsoft.


  1. Microsoft limits email restorations to 14 days. If an email is deleted from a mailbox, then from the Deleted Items folder, and then purged from the hidden Recoverable Items folder, you only have 14 days to get it back. Within that 14-day window, a support request to Microsoft is required to recover the email; outside it, there is no way to recover the email at all. I should mention that this is a technical limitation of the Exchange Online service. Exchange Online utilizes a 3+ copy DAG configuration that includes a 14-day lagged copy. This configuration allows Microsoft to use circular logging on their databases to reduce resource usage while still providing an extremely high level of availability. However, the maximum amount of time any DAG member can be lagged is 14 days, so once a purge is committed to the lagged database copy, the email is gone for good.
  2. Office 365 does not allow unauthenticated email relaying. To relay mail through Office 365, you *must* authenticate with a licensed user account. If you have Line of Business applications or equipment that don’t support authentication, consider upgrading them to versions that support authenticated relaying (I’ve included a quick sketch of authenticated submission after this list). If that isn’t possible, you’ll need to run an SMTP server in your own network that accepts unauthenticated relay, such as Postfix or the IIS-based SMTP server included with Windows Server.
  3. Exchange Online has strict limits on the amount of email each mailbox can send. This is to prevent spamming from Office 365’s mail servers and to reduce resource overhead. Each mailbox can send to at most 10,000 recipients per day (a recipient is a single email address listed in the To:, Cc:, or Bcc: fields of a message, so a distribution list counts as just one recipient). In addition, each message can be addressed to at most 500 recipients, and each mailbox can send at most 30 messages per minute. Microsoft will not increase these limits, even if you ask. If you have a business need to exceed them, consider using a cloud-based mass-mailing service like MailChimp.
  4. Many of the administrative capabilities and controls available in an on-premises Exchange, SharePoint, or Lync environment are not exposed to Office 365 tenant administrators. If there is a setting you want to change and you can’t find it in the Admin Portal, you may be able to change it in the Remote PowerShell sessions Microsoft provides (there’s a sample connection sketch after this list as well). Even with PowerShell, there is only so much you can do. There are four different modules for managing Office 365 in PowerShell: Exchange Online, Lync Online, Windows Azure AD, and SharePoint Online. As a general rule, though, administrative settings that control server-level or organization-level functions will not be available to you. If you have a business need to change a high-level configuration, you must prepare and submit a support request to Microsoft through the Office 365 Admin Portal. This is a fairly simple process, but support requests can take a significant amount of time to complete, so be prepared to wait for your changes to apply.
  5. If it breaks, there’s usually nothing you can do to fix it. Unless you are using ADFS and DirSync in your environment, bringing Office 365 services back online after a failure is completely out of your hands. Even with ADFS or DirSync, the only things you can really troubleshoot and fix are Active Directory object syncing and login issues. Everything else falls under Microsoft’s SLAs and is their responsibility to fix. Microsoft guarantees a minimum service up-time of 99.9% (about 43 minutes of acceptable downtime per month). The SLA documents provide exact details on the service credits granted for falling below that level, but if your IT management has decided that a higher level of up-time is required, Office 365 may not be a good solution for your environment.
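
To make limitation #2 concrete, here is a minimal sketch of authenticated submission to Exchange Online using PowerShell’s built-in Send-MailMessage cmdlet. The smtp.office365.com endpoint and port 587 are the standard submission settings as of this writing; the addresses are placeholders you would swap for your own:

```powershell
# Authenticated SMTP submission to Exchange Online (addresses are placeholders).
# The credential must belong to a licensed Office 365 mailbox.
$cred = Get-Credential

Send-MailMessage -SmtpServer "smtp.office365.com" -Port 587 -UseSsl `
    -Credential $cred `
    -From "scanner@contoso.com" -To "jim@xyz.com" `
    -Subject "Test relay" -Body "Sent through Exchange Online with authentication."
```

A device or application that can’t do the equivalent of this (TLS on port 587 plus a username and password) is exactly the kind of system that needs a local relay like Postfix or the IIS SMTP server sitting in front of it.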

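For limitation #4, connecting to the Remote PowerShell sessions Microsoft provides looks roughly like this sketch. The connection URI shown is the standard Exchange Online endpoint as of this writing, and the Get-Mailbox line is just one example of a tenant-level cmdlet you get after importing the session:

```powershell
# Connect to the Exchange Online side of Office 365 via Remote PowerShell.
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# Tenant-scoped cmdlets are now available; server-level cmdlets are not.
Get-Mailbox -ResultSize Unlimited | Select-Object DisplayName, ProhibitSendQuota

# Clean up the session when finished.
Remove-PSSession $session
```

Note what’s missing from that session: nothing in it lets you touch the underlying servers, which is exactly the point of limitation #4. Lync Online, Windows Azure AD, and SharePoint Online each have their own modules and connection steps.
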
There are a number of other limitations to Office 365’s service; so many, in fact, that it would be difficult to outline them all in even a small book. Because of that, Microsoft (and I) typically recommend a short Proof of Concept (PoC) period before migrating to the service. A PoC will highlight anything in your system configuration that would negatively impact interoperability with Office 365, and it will verify that your environment and business needs can be completely met by the service.

I Work in IT – How Will This Impact My Job

One of the greatest sources of push-back against moving to the cloud is IT staff. Most IT people are justifiably twitchy when it comes to keeping their jobs. There are a lot of people competing for IT jobs, one of the major selling points of cloud services is decreased employment costs, and the IT department is often the first group targeted for layoffs during a recession. Add all that together, and how the cloud will impact IT workers becomes a very valid question. The truth is that while movement to the cloud will decrease the need for dedicated IT staff in most companies, it also increases the need for IT staff at datacenters and consulting firms. Skilled IT people are in short supply, so keeping up with technology trends is very important. Learning about the cloud and understanding it will keep you from being unemployed for extended periods (I speak from experience on this, I promise).

That said, large environments that maintain IT staff will still need a significant portion of their IT workers even after moving to the cloud. Microsoft and other cloud providers do not provide end-user support, so there will always be a need for help desk and on-site support staff. Companies that migrate to the cloud will also continue to need IT staff who can interface with the cloud provider’s support teams; a single support request with Microsoft should convince most people of how much work goes into managing support requests and maintaining lines of communication during outages and system failures. There will still be a need for systems and network administrators, too (in fact, with cloud services, network administrators may be in even higher demand, since Internet connection up-time becomes more important than ever). In all reality, on-site IT staff will still be very necessary, but the nature of the job will change as more cloud services become available. Instead of fighting fires and panicking about system failures and inefficiencies, IT staff can focus on developing processes and making non-cloud-based services work better. Cloud services make the typical IT employee’s job easier, not less necessary.

What is Available in the Cloud Besides Office 365

The primary principle of cloud computing is that the equipment that runs your infrastructure no longer exists in your physical locations. There are a lot of ways this is accomplished, and terms like Private Cloud are bandied about by marketing teams with wild abandon and without a really concrete definition. Ask three different salesmen what a Private Cloud is and you’ll probably get three completely different answers (or some really blank stares. Salesmen. *eyeroll* Am I right?). But essentially, you’re in the cloud if you need a public Internet connection to reach your resources. If you have a dedicated MPLS-like connection to a datacenter where someone else manages, maintains, and updates the systems for you, you are not operating in the cloud. Some people will call that a Private Cloud, but the real term for that type of relationship is Managed Services (since that’s what it was called well before the term Private Cloud was coined).

At any rate, cloud services range from full solutions like Office 365 and Google Apps for Business to things like Dropbox and Imgur. For IT purposes, some additional services that may be useful include Microsoft’s Infrastructure as a Service (IaaS) solution, Azure, which allows you to create entirely cloud-based virtual machines in Microsoft’s datacenters that can be accessed from anywhere. Amazon’s AWS provides a number of services that let web-based businesses perform necessary functions. Google’s Apps for Business provides functionality similar to Office 365, but with (in my opinion) less polish.

Other Considerations

Migrating to the cloud is a difficult decision to make. There are pros and cons, just like with any other business decision. Take some time to understand what is involved in moving to the cloud, and make sure to plan for life after the move if you decide to make it. One of the best recommendations I can make for people considering a move like this is to contact a company that specializes in cloud services (like Business and Decision North America, the company I work for. How’s that for a shameless plug?). Aside from being able to explain things in greater detail and help plan the move, a Microsoft Partner assisting you will open up options for quick escalations and better communication with Microsoft’s support teams if problems arise. Companies that don’t have a partner to assist them must deal with Microsoft’s Office 365 support teams on their own, and it can take up to 2 days to receive a first response, depending on workload and the day of the week (never open a support case at 4 PM on a Friday. Just a friendly tip). If you work with a Microsoft Partner, that response time can be decreased to the amount of time it takes for someone in Redmond to get their butt kicked (metaphorically speaking). All in all, the world is just now beginning to move into the cloud, so now is the time to begin preparing for the inevitable.