Hardening Microsoft Solutions from Attacks

Take a minute to go over this post from Dirk-jan Mollema. Go ahead and read it. I’ll wait…

Did you realize how scary that kind of attack is? As an IT guy who specializes in Exchange server and loves studying security, that article scared the snot out of me. Based on my experience with organizations of all sizes I can say with a good bit of authority that almost every Exchange organization out there is probably vulnerable to this attack. Why? Because Exchange is scary to a lot of people and they don’t really know how to harden it effectively. But I also want to use the above attack as a way to illustrate what I feel is the best strategy for hardening a Windows environment (and, really, any environment).

Take this opportunity to look at your Exchange deployment (if you haven't already moved to Exchange Online) and think about what you can do to protect your environment from this type of thing. In this post, though, I want to focus on Exchange Server and Windows Server hardening techniques in general rather than this particular vulnerability, because with any hardening effort you want to examine the network as a whole and work downward rather than chasing specific vulnerabilities. If you do the opposite, you will invariably end up playing a never-ending game of whack-a-mole, trying to stay ahead of a world full of malicious attackers and never really succeeding.

The techniques recommended in the Center for Internet Security’s (CIS) Critical Security Controls follow the top-down approach and represent one of the best guides for approaching information security at a technical level.

IT Hardening, a Quick Intro

Hardening is essentially every action you take to make an environment more secure. There are many different types of hardening: server hardening, network hardening, physical hardening, procedural hardening, and so on. They all seek to do the same thing, just in different ways.

If you take a close look at the actions the CIS controls recommend, you’ll (hopefully) notice that they seek to secure as much of the environment as possible when you start at control number 1. As you go through the controls, each subsequent control has a more narrow focus. Once you get to control number 5, you will probably have an environment that will stand up against all but the most determined attacks, but you don’t necessarily want to stop there.

The most important best practice in Information Security is the idea of “Defense in Depth”. This technique involves building layers of protection instead of relying on a single security measure to protect your environment. Having a firewall in place is only one “layer” of defense, and is regarded as the broadest level of protection you can have. Anti-virus tools, Intrusion Detection/Prevention tools, and hardening techniques represent additional layers of defense. You want as many layers as you can justify when measuring cost against risk (a much more difficult topic to cover).

Focusing on Windows

One thing you hear regularly in the IT industry is the argument over which OS organizations should choose to run their IT. The common claim is that Linux is a more secure OS than Windows, and this is true, up to a point. The reality is that they are simply different approaches to crafting an OS.

Linux tends to be more modular in its approach. If you implement a Linux environment, you would start with the core OS and add features as needed. This approach is good for limiting the attack surface from the start, but it also has a number of drawbacks.

The biggest drawback for Linux is that there is no central point of support and maintenance. There are lots of different solutions to the same problem, and there isn't really a single source of support for all of them, so you either need very capable Linux support specialists or you end up managing lots of different vendors. This usually increases the cost of ongoing maintenance and support of the infrastructure. It's also not uncommon for Linux-based open source projects to be abandoned for whatever reason, leaving organizations that implemented them without support; once the one person who knows how to use the solution effectively leaves, you're left with a very serious problem.

Windows, on the other hand, is a fairly complete package of capabilities for most situations. Windows Server has built-in solutions that can do most of the work you will want in an IT environment, within some limits. For instance, Windows Server doesn't handle email well right out of the box. You have to also implement Exchange Server to have a truly effective method of handling email, but with that solution you also gain a very powerful collaboration tool that handles calendaring, contact management, task management, and other features you can pick and choose from. Microsoft also invests a lot of time and effort in developing training tools and educational resources to ensure that there is a large pool of talent to support their OS and other software solutions. You don't often have to worry about finding someone who knows how to manage a Windows environment; there are boatloads of MCSAs and MCSEs looking for work almost all the time.

The major drawback with Windows is, of course, security. With all of the features built in, Windows has a very large attack surface compared to Linux. However, with careful planning and implementation, the attack surface of Windows can be decreased very effectively, such that there is virtually no difference between a standard Linux deployment and a hardened Windows environment.

Hardening Windows

Going back to the vulnerability outlined in the link from the start of this article, a single change to a Windows Active Directory environment will eliminate the vulnerability: LDAP signing and channel binding. LDAP signing and channel binding are techniques used to prevent Man-in-the-Middle attacks from succeeding. I explain the theory behind LDAP signing in more depth in my article on Understanding Digital Certificates. LDAP channel binding is a technique that prevents clients from reusing portions of an authentication attempt against one DC when communicating with a different DC or client. Put simply, it "binds" a client to the entire authentication attempt by requiring the client to present proof that the authentication traffic it's sending to the server isn't forged or copied from a different authentication attempt.

Essentially, LDAP signing configures Active Directory Domain Controllers so that they verify they are actually talking to the system they are supposed to be talking to before doing anything. Implementing this is a little difficult, though, as it requires a Certificate Authority to generate and deploy digital certificates, but once digital certificates are installed on Domain Controllers and Member Servers in a Windows Domain, LDAP signing is available (once systems are configured to require it) and becomes a very effective form of security that prevents a wide swath of attacks that could otherwise be used to gain unauthorized access.
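If you're curious what the "configure systems to require it" part looks like, here is a minimal sketch using the registry values commonly documented for requiring LDAP signing and channel binding on Domain Controllers. Treat it as an illustration only: verify the value names and data against current Microsoft guidance for your OS version, and roll the settings out through Group Policy rather than by hand in production.

# Minimal sketch: require LDAP signing and channel binding on a Domain Controller.
# Run in an elevated PowerShell session; deploy via GPO in a real environment.
$ntds = "HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

# 2 = require signing for incoming LDAP connections
Set-ItemProperty -Path $ntds -Name "LDAPServerIntegrity" -Value 2 -Type DWord

# 2 = always require channel binding tokens (1 = enforce only when the client supports it)
Set-ItemProperty -Path $ntds -Name "LdapEnforceChannelBinding" -Value 2 -Type DWord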

LDAP signing alone won't prevent every possible attack in a Windows environment, though, which is why it's also essential to disable features and roles that each server isn't using and to carefully control remote access to servers. Windows Remote Desktop is one of the most frequently used tools for breaching a Windows environment, so limiting access to it is essential. As a rule of thumb, only allow system administrators to access critical Windows Servers and never, *never* allow Remote Desktop ports through your firewall.
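As a quick, hedged illustration of the "remove what you aren't using" step, something like the following will show what's installed on a server and remove a role or feature you don't need (the feature name here is only an example; substitute whatever you actually want gone):

# List the roles and features currently installed on this server
Get-WindowsFeature | Where-Object Installed

# Example only: remove an unused feature (swap in the name you actually don't need)
Uninstall-WindowsFeature -Name Web-Ftp-Server -Remove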

Check your firewalls now: if you have port 3389 allowed to the Internet, it's only a matter of time before you get attacked and suffer severe consequences. Remote Desktop is *not* meant for giving remote workers access over the Internet. Implement secure VPNs and enforce effective password policies if you want people to access your IT environment remotely.
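If you want a quick sanity check, a sketch like the one below, run from a machine *outside* your network, will tell you whether RDP is answering on your public address (the IP here is just a placeholder from the documentation range):

# Placeholder public IP - substitute your own external address
Test-NetConnection -ComputerName 203.0.113.10 -Port 3389
# TcpTestSucceeded : True means RDP is exposed to the Internet and needs to be shut off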

Once all unnecessary features and roles are removed or effectively controlled in a Windows environment, build and maintain an effective patch management strategy. Microsoft regularly releases patches to close security holes before attackers are widely exploiting them. Any patch management plan should make allowances for testing, approving, deploying, and installing security-related patches as soon as possible.
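One small, hedged example of keeping an eye on this: a one-liner to see what a given server has received recently (the server name is a placeholder, and InstalledOn isn't populated for every update):

# Show the ten most recently installed updates on a server (placeholder name)
Get-HotFix -ComputerName SERVER01 | Sort-Object InstalledOn -Descending | Select-Object -First 10 HotFixID, Description, InstalledOn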

Next, focus on granting only the permissions workers need to accomplish their tasks. This is a difficult practice to implement, because it takes a lot of investigation to determine what permissions each user actually needs. Many environments grant administrative permissions to users on company-owned equipment, which is a horrible, lazy practice that will get your environment owned by a hacker very quickly.

Once you have all of the above security practices in place, you will then want to start focusing on more specific vulnerabilities. For example, changing a simple registry setting will block the attack in the link at the start of this post, but it will not prevent future attacks against vulnerabilities that aren't yet well known.

How Does the Cloud Play Into This?

One of the major benefits of using cloud solutions like Exchange Online is that most of the work outlined above has been done already. Microsoft’s cloud servers are stored in highly secure datacenters with many protections against unauthorized access (as opposed to the common tactic of putting the server in a closet in your office). Servers in cloud environments are hardened as much as possible before being put into operation. Security vulnerabilities are usually addressed across the entire cloud environment within hours of discovery, and the servers don’t function with an eye to backwards compatibility, so things like NTLM and SMBv1 are disabled on all systems.
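On the on-prem side, by the way, you can check whether your own servers have caught up with that; here's a hedged sketch for SMBv1 (test before disabling, since ancient clients and some multifunction printers still depend on it):

# Is the SMBv1 server protocol still enabled on this box?
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Turn it off once you've confirmed nothing legitimate still needs it
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force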

That said, the cloud poses its own security challenges. You must accept the level of security put in place by the cloud provider and will have little to no control over systems in a way that will let you increase security. Furthermore, utilizing a Hybrid-cloud solution (which is extremely common and will be for years to come) presents unique problems involving the interface between two separately controlled environments. Poor security practices in the on-prem side of a hybrid deployment will make the cloud side just as insecure.

You must accept public availability of your data and accept the reality that you don't control where that data is (for the most part…this issue is slowly changing as cloud environments mature). In addition, you do not offload the responsibility of securing access to the data you store in the cloud. I'll cover this subject in another post, but for now, understand that while cloud environments build a lot of security into their solutions, you still have a responsibility to make security a priority.

Conclusion (I never can think of a good heading here)

Security in any IT environment is a major challenge that takes careful planning and effective management. Failing to consider security challenges when deploying new solutions will almost always come back to bite you. But, with the right strategy and guidance, it *is* possible to build a secure environment that can withstand the vast majority of attacks.

 

 


Configuring Exchange Virtual Directories

Below is a script designed to help admins set External URLs on Exchange servers. Currently this is an initial version with no features or frills. It simply builds External URL configuration cmdlets based on the server name and root URL.

You’ll note that this script is much shorter than other versions out there. This is because I am using an array of hash tables to store and access the unique portions of the URLs. A counter lets the script cycle through each VDir to generate and run the necessary commands. Note: version 1 doesn’t include the Powershell URL, since that one uses HTTP instead of HTTPS.

One last thing to note is that this only works on Exchange 2016, due to the removal of the RPC endpoint in IIS.

 

$url = "https://mail.domain.prod/"
$server = "servername"
$vdirs = @{
cmd = @("owa","webservices","mapi","oab","activesync")
url = @("owa","ews/Exchange.asmx","mapi","oab","Microsoft-Server-ActiveSync")
}
$i = 0
while($i -lt $vdirs.cmd.Count){
# Build the "Get-*VirtualDirectory | Set-*VirtualDirectory" pipeline for this virtual directory
$newurl = "get-" + $vdirs.cmd[$i] + "virtualdirectory -server " + $server + " | set-" + $vdirs.cmd[$i] + "virtualdirectory -externalurl " + $url + $vdirs.url[$i] + ' -force:$true'
write-host "Setting URL for $($vdirs.cmd[$i])"
Invoke-Expression $newurl
$i++
}

Enabling Message Encryption in Office 365

As I mentioned in an earlier post, email encryption is a sticky thing. In a perfect world, everyone would have Opportunistic TLS enabled and all mail traffic would be automatically encrypted with STARTTLS encryption, which is a fantastic method of ensuring security of messages "in transit". But some messages need to be encrypted "at rest" due to security policies or regulations. Unfortunately, researchers have recently discovered some key vulnerabilities in S/MIME and OpenPGP, the encryption systems that have been the most common ways of ensuring message encryption for messages while they are sitting in storage. The EFAIL vulnerabilities allow HTML-formatted messages to be exposed in cleartext by attacking a few weaknesses.

Luckily, Office 365 subscribers can improve the confidentiality of their email by implementing a feature that is already available to all E3 and higher subscriptions or by purchasing licenses for Azure Information Protection and assigning them to users that plan to send messages with confidential information in them. The following is a short How-To on enabling the O365 Message Encryption (OME) system and setting up rules to encrypt messages.

The Steps

To enable and configure OME for secure message delivery, the following steps are necessary:

  1. Subscribe to Azure Information Protection
  2. Activate OME
  3. Create Rules to Encrypt Messages

Details are below.

Subscribe to Azure Information Protection

The Azure Information Protection suite is an add-on subscription for Office 365 that will allow end users to perform a number of very useful functions with their email. It also integrates with SharePoint and OneDrive to act as a Data Loss Prevention tool. With AIP, users can flag messages or files so that they cannot be copied, forwarded, deleted, or a range of other common actions. For email, all messages that have specific classification flags or that meet specific requirements are encrypted and packaged into a locked HTML file that is sent to the recipient as an attachment. When the recipient receives the message, they have to register with Azure to be assigned a key to open the email. The key is tied to their email address and once registered the user can then open the HTML attachment and any future attachments without having to log in to anything.

Again, if you have E3 or higher subscriptions assigned to your users, they don't need AIP as well. However, each user that will be sending messages with confidential information in them will need either an AIP license or an E3/E5 license to do so. To subscribe to AIP, perform these steps:

  1. Open the Admin portal for Office 365
  2. Go to the Subscriptions list
  3. Click on “Add a Subscription” in the upper right corner
  4. Scroll down to find the Azure Information Protection
  5. Click the Buy Now option and follow the prompts or select the “Start Free Trial” option to get 25 licenses for 30 days to try it out before purchasing
  6. Wait about an hour for the service to be provisioned on your O365 tenant

Once provisioned, you can then move on to the next step in the process.

Activate OME

This part has changed very recently. Prior to early 2018, Activating OME took a lot of Powershell work and waiting for it to function properly. MS changed the method for activating OME to streamline the process and make it easier to work with. Here’s what you have to do:

  1. Open the Settings option in the Admin Portal
  2. Select Services & Add-ins
  3. Find Azure Information Protection in the list of services and click on it
  4. Click the link that says, “Manage Microsoft Azure Information Protection settings” to open a new window
  5. Click on the Activate button under “Rights Management is not activated”
  6. Click Activate in the Window that pops up

Once this is done, you will be able to use AIP's client application to tag messages for rights management in Outlook. There will also be new buttons and options in Outlook Web App that will allow you to encrypt messages. However, the simplest method for encrypting messages is to use an Exchange Online transport rule to automatically encrypt messages.

Create Rules to Encrypt Messages

Once OME is activated, you'll be able to encrypt messages using just the built-in, default Rights Management tools, but as I mentioned, it's much easier to use specific criteria to do the encryption automatically. Follow these steps:

  1. Open the Exchange Online Admin Portal
  2. Go to Mail Flow
  3. Select Rules
  4. Click on the + and select “Add a New Rule”
  5. In the window that appears, click “More Options” to switch to the advanced rule system
  6. The rule you use can be anything from encrypting messages flagged as Confidential to using a tag in the subject line. My personal preference is to use subject/body tags. Make your rule look like the image below (Encrypt Rule) to use this technique:

When set up properly, the end user will receive a message telling them that they have received a secure message. The email will have an HTML file attached that they can open up. They’ll need to register, but once registered they’ll be able to read the email without any other steps required and it will be protected from outside view.
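If you'd rather script the rule than click through the portal, here's a hedged sketch of the equivalent Exchange Online PowerShell. The rule name and the "[encrypt]" subject tag are just examples, and you should check the current transport rule parameters for your tenant (newer tenants are steered toward -ApplyRightsProtectionTemplate) before relying on it:

# Connect to Exchange Online PowerShell first, then create a rule that encrypts tagged mail
# "[encrypt]" is an example keyword - use whatever tag you want your users to type
New-TransportRule -Name "Encrypt tagged messages" -SubjectOrBodyContainsWords "[encrypt]" -ApplyOME $true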

 

 

Do I need Anonymous Relay?

Problems

If you have managed an Exchange server in the past, you've probably been required to set things up to allow printers, applications, and other devices to send email through the Exchange server. Most often, the solution to this request is to configure an Anonymous Open Relay connector. The first article I ever wrote on this blog was on that very subject: http://wp.me/pUCB5-b. If you need to know what a relay is, go read that post.

What people don’t always do, though, is consider the question of whether or not they need an anonymous relay in Exchange. I didn’t really cover that subject in my first article, so I’ll cover it here.

When you Need an Open Relay

There are three factors that determine whether an organization needs an Open Relay. Anonymous relay is only required if you meet all three of the factors. Any other combination can be worked around without using anonymous relaying. I’ll explain how later, but for now, here are the three factors you need to meet:

  1. Printers, Scanners, and Applications don’t support changes to the SMTP port used.
  2. Printers, Scanners, and Applications don’t support SMTP Authentication.
  3. Your system needs to send mail to email addresses that don’t exist in your mail environment (That is to say, your system sends mail to email addresses that you don’t manage with your own mail server).

At this point, I feel it important to point out that Anonymous relays are inherently insecure. You can make them more secure by limiting access, but using an anonymous relay will always place a technical solution in the environment that is designed specifically to circumvent normal security measures. In other words, do so at your own informed risk, and only when it’s absolutely required.

The First Factor

If the system you want to send SMTP messages with doesn't allow you to send email over a port other than 25, you will need an open relay if the messages the system sends are addressed to email addresses outside your environment. The bold stuff there is an important distinction. The SMTP protocol defines port 25 as the "default" port for mail exchange, and that's the port every email server uses to receive email from other systems, which means that, based on modern security concerns, sending mail to port 25 is only allowed if the recipient of the email exists on that mail server. So if you are using the abc.com mail server to send messages to bob@xyz.com, you will need to use a relay server to do it, or the mail will be rejected because relaying is (hopefully) not allowed.

The Second Factor

If your system doesn't allow you to specify a username and password in its SMTP configuration, then you will have to send messages anonymously. For our purposes, an "anonymous" user is a user that hasn't logged in with a username and password. SMTP servers usually talk to one another anonymously, so anonymous SMTP access is actually common and necessary for mail exchange to function, but SMTP servers will, by default, only accept messages destined for email addresses that they manage. So if abc.com receives a message destined for bob@abc.com, it will accept it. However, abc.com will reject messages to jim@xyz.com, *unless* the SMTP session is authenticated. In other words, if bob@abc.com wants to send jim@xyz.com a message, he can open an SMTP session with the abc.com mail server, enter his username and password, and send the message. If he does that, the SMTP server will accept the message, then contact the xyz.com mail server and deliver it. The abc.com mail server doesn't need to have a username and password to do this, because the xyz.com mail server knows who jim@xyz.com is, so it just accepts the message and delivers it to the correct mailbox. So if you are able to set a username and password with the system you need to send mail with, you don't need anonymous relay.

The Third Factor

Most of the time, applications and devices will only need to send messages to people who have mailboxes in your environment, but there are plenty of occasions where applications or devices that send email out need to be able to send mail to people *outside* the environment. If you don’t need to send to “external recipients” as these users are called, you can use the Direct Send method outlined in the solutions below.

Solutions

As promised, here are the solutions you can use *other* than anonymous relay to meet the needs of your application if it doesn’t meet *all three* of the deciding factors.

Authenticated Relay (Factor #3 applies)

In Exchange server, there is a default “Receive Connector” that accepts all messages sent by Authenticated users on port 587, so if your system allows you to set a username and password and change the port, you don’t need anonymous relaying. Just configure the system to use your Exchange Hub Transport server (or CAS in 2013) on port 587, and it should work fine, even if your requirements meet the last deciding factor of sending mail to external recipients.
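As a hedged illustration of what that looks like from the client side, here's a quick test you can run from PowerShell once the device's service account exists (the server name, addresses, and account are all placeholders):

# Placeholder values - substitute your own server, sender, and recipient
$cred = Get-Credential   # the account the device or application will authenticate as
Send-MailMessage -SmtpServer mail.contoso.com -Port 587 -UseSsl -Credential $cred -From "scanner@contoso.com" -To "someone@outside-example.com" -Subject "Relay test" -Body "Authenticated relay over port 587"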

Direct Send (Factor #2 applies and/or #3 doesn’t apply)

If your system needs to send messages to abc.com users using the abc.com mail server, you don’t need to relay or authenticate. Just configure your system to send mail directly to the mail server. The “direct send” method uses SMTP as if it were a mail server talking to another mail server, so it works without additional work. Just note that if you have a spam filter that enforces SPF or blocks messages from addresses in your environment to addresses in your environment, it’s likely these messages will get blocked, so make allowances as needed.
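Here's a similarly hedged sketch of testing direct send; note that there's no credential and the recipient has to be a mailbox your own server hosts (names are placeholders again):

# Direct send: port 25, no authentication, recipient must exist on the receiving server
Send-MailMessage -SmtpServer mail.contoso.com -Port 25 -From "copier@contoso.com" -To "helpdesk@contoso.com" -Subject "Direct send test" -Body "No relay needed for internal recipients"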

Authenticated Mail on Port 25 (Only factor #1 applies)

If the system doesn't allow you to change the port number it uses, but does allow you to authenticate, you can make a small change to Exchange to allow the system to work. This is done by opening the Default Receive connector (AKA the Default Front End receive connector on Exchange 2013 and later) and adding Exchange Users to the Permission settings on the Security tab, as shown with the red X below:

(Image: default-front-end-enabled)

Once this setting is changed, restart the Transport service on the server and you can then perform authenticated relaying on port 25.
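For those who prefer the shell, here's a hedged PowerShell equivalent of that change (the server name is a placeholder, and you should check which permission groups your connector already has before overwriting the list):

# Placeholder server name; check what the connector currently allows
Get-ReceiveConnector "EXCH01\Default Frontend EXCH01" | Format-List PermissionGroups

# Add ExchangeUsers alongside the groups that are already there (example set shown)
Set-ReceiveConnector "EXCH01\Default Frontend EXCH01" -PermissionGroups AnonymousUsers,ExchangeServers,ExchangeLegacyServers,ExchangeUsers

# Restart transport so the change takes effect
Restart-Service MSExchangeTransport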

Conclusion

If you do find you need to use an anonymous relay, by all means, do so with careful consideration, but always be conscious of the fact that it isn’t always necessary. As always, comments questions on this article and others are always welcome and I’ll do my best to answer as soon as possible.

How Does Exchange Autodiscover Work?

Autodiscover is one of the more annoying features of Exchange since Microsoft reworked the way their Email solution worked in Exchange 2007. All versions since have implemented it and Microsoft may eventually require its use in versions following Exchange 2016. So what is Autodiscover and how does it work?

Some Background

Prior to Exchange 2007, Outlook clients had to be configured manually. In order to do that, you had to know the name of the Exchange server and use it to configure Outlook. Further, if you wanted to use some of the features introduced in Exchange 2003 SP2 and Outlook 2003 (and newer), you had to manually configure a lot of settings that didn’t really make sense. In particular, Outlook Anywhere requires configuration settings that might be a little confusing to the uninitiated. This got even more complicated in larger environments that had numerous Exchange servers but could not yet afford the expense of a load balancer.

The need to manually configure email clients resulted in a lot of administrative overhead, since Exchange admins and Help Desk staff were often required to configure Outlook for users or provide a detailed list of instructions for people to do it themselves. As most IT people are well aware, even the best set of instructions can be broken by some people, and an IT guy was almost always required to spend a lot of time configuring Outlook to talk to Exchange.

Microsoft was not deaf to the cries of the overworked IT people out there, and with Exchange 2007 and Outlook 2007 introduced Autodiscover.

Automation Salvation!

Autodiscover greatly simplifies the process of configuring Outlook to communicate with an Exchange server by automatically determining which Exchange server the user’s Mailbox is on and configuring Outlook to communicate with that server. This makes it much easier for end users to configure Outlook, since the only things they need to know are their email address, AD user name, and password.

Not Complete Salvation, Though

Unfortunately, Autodiscover didn’t completely dispense with the need to get things configured properly. It really only shifted the configuration burden from Users over to the Exchange administrator, since the Exchange environment has to be properly configured to work with Autodiscover. If things aren’t set up properly, Autodiscover will fail annoyingly.

How it Works

In order to make Autodiscover work without user interaction, Microsoft developed a method for telling Outlook where to look for the configuration info it needs. They decided this was most easily accomplished with a few DNS lookups based on the one piece of information everyone has to enter regardless of their technical know-how: the email address. Since they could only rely on getting an email address from users, they knew they'd need a default pattern for the lookups; otherwise, the client machines would need at least a little configuration before working right. Here's the pattern they decided on:

  1. Look in Active Directory to see if there is information about Exchange
  2. Look at the root domain of the user’s email Address for configuration info
  3. Look at autodiscover.emaildomain.com for configuration info
  4. Look at the domain’s root DNS to see if any SRV records exist that point to a host that holds configuration info.

Note here that Outlook will only move from one step to the next if it doesn’t find configuration information.

For each step above, Outlook is looking for a specific file or a URL that points it to that file. The file in question is autodiscover.xml. By default, this is kept at https://<exchangeservername>/autodiscover/autodiscover.xml. Each step in the check process will try to find that file, and if it's not there, Outlook moves on. If, by the end of step 4, Outlook finds nothing, you'll get an error saying that an encrypted connection was unavailable, and you'll probably start tearing your hair out in frustration.
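If you want to see what Outlook is going to find for a given domain, here's a hedged sketch of the DNS side of those checks (contoso.com is a placeholder):

# Placeholder domain - the host record Outlook tries in step 3
Resolve-DnsName autodiscover.contoso.com -Type A

# The SRV record it falls back to in step 4
Resolve-DnsName _autodiscover._tcp.contoso.com -Type SRV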

What’s in the File?

Autodiscover.xml is a dynamically generated file written in XML that contains the information Outlook needs to access the mailbox that was entered in the configuration wizard. When Outlook makes a request to Exchange Autodiscover, the following things will happen:

  1. Exchange requests credentials to access the mailbox.
  2. If the credentials are valid, Exchange checks the AD attributes on the mailbox that has the requested Email address.
  3. Exchange determines which server the Mailbox is located on. This information is usually stored in the msExchangeHomeServer attribute on the associated AD account.
  4. Exchange examines its Topology data to determine the best Client Access Server (CAS) to use for access to the mailbox. The Best CAS is determined using the following checks:
    1. Determine AD Site the Mailbox’s Server is located in
    2. Determine if there is a CAS assigned to that AD site
    3. If no CAS is in the site, use Site Topology to determine next closest AD Site.
    4. Step 3 is repeated until a CAS is found.
  5. Exchange returns all necessary configuration data stored in AD for the specific server. The configuration data returned is:
    1. CAS server name
    2. Exchange Web Services URL
    3. Outlook Anywhere Configuration Data, if enabled.
    4. Unified Communications Server info
    5. Mapi over HTTPS Proxy server address (if that is enabled)
  6. Outlook will take the returned information and punch it into the necessary spots in the user’s profile information.

Necessary Configuration

Because all of this is done automatically, it is imperative that the Exchange server is configured to return the right information. If the information returned to Autodiscover is incorrect, either the mailbox connection will fail or you'll get a certificate error. To get Autodiscover configured correctly, parts 5.1, 5.2, 5.3, and 5.5 of the above process must be set. This can be done with a script, in the Exchange Management Shell, or in the Exchange management UI (EMC for 2007 and 2010, ECP/EAC for 2013/2016).
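Here's a hedged Exchange Management Shell sketch of setting those values on a single server. The server name and namespace are placeholders, and the exact cmdlet names shift a little between versions (Get-ClientAccessServer on 2010/2013, Get-ClientAccessService on 2016), so check what your version exposes:

# Placeholder names throughout - substitute your own server and namespace
$server = "EXCH01"
$namespace = "https://mail.contoso.com"

# 5.1 - which CAS answers Autodiscover (Set-ClientAccessServer on older versions)
Set-ClientAccessService -Identity $server -AutoDiscoverServiceInternalUri "$namespace/autodiscover/autodiscover.xml"

# 5.2 - Exchange Web Services URL
Get-WebServicesVirtualDirectory -Server $server | Set-WebServicesVirtualDirectory -ExternalUrl "$namespace/ews/Exchange.asmx"

# 5.3 - Outlook Anywhere host name
Get-OutlookAnywhere -Server $server | Set-OutlookAnywhere -ExternalHostname "mail.contoso.com" -ExternalClientsRequireSsl $true -ExternalClientAuthenticationMethod Negotiate

# 5.5 - MAPI over HTTP URL
Get-MapiVirtualDirectory -Server $server | Set-MapiVirtualDirectory -ExternalUrl "$namespace/mapi"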

Importance of Autodiscover

With the release of Outlook 2016, it is no longer possible to configure server settings manually in Outlook; you must use Autodiscover. Earlier versions can avoid using it by manually configuring each Outlook client. However, before doing that, consider the cost of having to touch each and every computer to properly configure Outlook. It can take 5 minutes or more to configure Outlook on one computer using the manual method, and with Exchange 2013 it can take longer, since you are also required to input Outlook Anywhere configuration settings, which are more complex than just entering a server name, username, and password. If you multiply that by the number of computers you might have in your environment and add in the time it takes to actually get to the computers, boot them up, and get to the Outlook settings, the time spent configuring Outlook manually starts to add up very quickly. Imagine how much work you'd be stuck with configuring 100 systems!

In contrast, it usually only takes 10 to 20 minutes to configure Autodiscover. When Autodiscover is working properly, all you have to do is tell your users what their email address is and Outlook will do all the work for you. With a little more configuration or some GPO work, you don’t even have to tell them that!

When you start to look at the vast differences in the amount of time you have to spend configuring Outlook, whether or not to use Autodiscover stops being a question of preference and starts being an absolutely necessary part of any efficient Exchange-based IT environment. Learning to configure it properly is, therefore, one of the most important jobs of an Exchange administrator.

Email Encryption for the Common Man

One of my co-workers had some questions about email encryption and how it worked, so I ended up writing him a long response that I think deserves a wider audience. Here’s most of it (leaving out the NDA covered portions).

Email Encryption and HIPAA Compliance for the Uninitiated

In IT security, when we talk about encryption, there are a couple of different “types” of encryption that we worry about, one is encryption “in transit”, and the other is encryption “at rest.”

Encryption “in transit” is how we ensure that when data is moving from one system to another that it is either impossible or difficult beyond reasonable likelihood for someone to intercept and read that data. There are pieces of many data exchanges that we have no control over, so we cannot guarantee that there isn’t someone out there with a packet sniffer reading every bit that passes between our server and someone else’s (This is a form of “passive” data inspection, possible from just about any trunk line on a switch). We can make sure it doesn’t happen on our end, but we can’t control the ISP or the other person’s side of things.

The basic email encryption system, TLS (Transport Layer Security…Don’t ask what that means), usually follows this incredibly oversimplified pattern:

1. Server 1 contacts Server 2
2. Server 2 says, “Hi. I’m Server 2. Who are you?”
3. Server 1 says, “Hi. I’m Server 1.”
4. Server 2 says, “Nice to meet you Server 1. What can I do for you?”
5. Server 1 says, “Before we really get into that, I’d like to make sure no one is eavesdropping on our conversation. Can we start talking in a language no one but us knows?” (This is basically what encryption is)
6. Server 2 says, “Sure. What language would you like to use?”
7. Server 1 hands Server 2 a certificate that serves as a kind of translator, which Server 2 will use to translate (decrypt) everything that Server 1 says from now on. Server 2 will also use this certificate to send any responses or other messages back to Server 1.
8. Server 2 says, after translating what it wants to say into the new encryption language, “Okay, what would you like to do?”
9. Server 1 translates this message from the encrypted language and makes its first request to server 2 after translating it into the encrypted language.

From this point on, each server will communicate exclusively with the encryption “language” provided by the certificate they exchanged, and anyone who is eavesdropping (packet sniffing) will only see a bunch of gobbledygook that they can’t understand.

There are more complex versions of this scenario that make things more secure. For instance, in a Domain Authenticated TLS situation, both servers have to be “Authenticated,” which is to say, they must prove they are the server the message is supposed to go to. This is done by validating the name that is printed on the certificate with the name the servers use when introducing themselves to one another.

In the example above, it is possible for someone to inject themselves into the conversation and decrypt everything from Server 1, read it, encrypt it again, and send it on to Server 2 (this is called a Man-in-the-Middle attack, and is an "active" form of eavesdropping, because it requires a very complex setup and specialized hardware to accomplish, and also requires active manipulation of the data being inspected). Domain Authenticated TLS makes this much more difficult, because a server that acts as an intermediary in a Man-in-the-Middle attack cannot use the name that exists on the certificate unless it is owned by the entity that created the certificate to begin with. When you get certificate errors while browsing the web, it's usually because either you entered a name that isn't listed on the certificate installed on the server you're talking to, or the server is using a name that isn't listed on the certificate. (Certificates are a heavy subject, so I'll just bypass that for now.)

Anyway, data “at rest” is any data that is just sitting on a hard drive or disk somewhere, waiting for someone to read it. In order to read that data, you have to gain access to a server (or workstation) that has access to the data and read it. Encryption of data “at rest” requires more effort to accomplish, because it has to be decrypted every time someone tries to read it. Technologies like Bitlocker or PGP allow data to be encrypted while it’s just sitting there on a server.

We only care about encryption of data “in transit” when we work with HIPAA regulations. This is because the only way to access data that is “at rest” is to gain physical access to the data or to systems that have access to that data. HIPAA has other regulations that help reduce the likelihood that either of those things will happen, and since data “at rest” is never outside our realm of control, we can do much more to protect it. Most ePHI is sitting in a datacenter that is locked and requires specific permission to access, but that coverage doesn’t apply to the data when it’s moving between servers.

How Will the Cloud Affect My Career as an IT Professional?

Well, after a year's hiatus due to some particularly difficult personal trials, I've decided to come back to the blog and weigh in on one of the big hot-button subjects in the IT industry: how the cloud will affect the job market.

The Push to Cloud

In the modern world, as the Internet has developed and increased in prominence in our lives, improvements in infrastructure, security technology, and bandwidth are beginning to allow businesses and individuals to forgo the traditional need to pay big bucks for things like processing power and storage. Companies have been moving their critical systems into third-party data centers for years, but with the development of entirely cloud-based solutions like Office 365, Azure, Google Apps, and AWS, we're seeing a large industry push to reduce infrastructure costs by moving away from self-managed IT solutions. So now we're facing another paradigm shift in the IT industry.

Now, this is not to say that IT hasn’t seen any kind of paradigm shift before, quite the contrary. It seems like every year we’re having to face some new technology that is permeating the industry. From the introduction of Ethernet, to wireless networking, to virtualization and VDI, most of us have dealt with the changes as they’ve come, learning new techniques and adjusting the way we work. But the push to cloud has a lot of IT personnel worried about their jobs.

What About my Job!?

Cost savings has been the primary driver behind the recent push to adopt cloud services. Executives around the world are salivating at the possibility of reducing their costs by shifting the responsibility of IT infrastructure management onto third parties. This shifting of responsibility has a lot of IT people on edge: if the stuff they do every day is outsourced to a third-party service provider, what will happen to their jobs? If we have no servers or networks to maintain, am I going to lose my job?

The answer here is actually pretty simple. Unless you’re a part of some specific niche industry jobs, you’re pretty likely to keep your job.

Working IT in the Age of the Cloud

While moving to the cloud does reduce the need for critical infrastructure and complex solutions, it doesn’t really reduce the need for administration, problem solving, end-user support, and technical know-how. After performing numerous migrations from various email systems to Office 365, the one thing I’ve discovered is that moving to the cloud doesn’t really make the IT guy’s job any easier. In many ways, it actually makes things more difficult and complex, which means that if you’re a competent IT professional your job is pretty safe.

How will the Cloud Change My Job?

Now, that isn’t to say that your job isn’t going to change. Moving to the cloud requires a good deal of adaptation and adjustment to new ways of thinking and managing resources. For instance, if you want to have any kind of Active Directory integration with Office 365, you’re going to have to use Dirsync, and using Dirsync means you can’t modify things like distribution groups, user accounts, and passwords in Office 365. These things have to be managed in Active Directory, and that means that all of those really menial tasks you’ve been handing off to department heads, like adding distribution group members, are going to land right back into your wheelhouse if you’re as nervous as I am about giving people with no IT experience access to the Active Directory Users and Computers snap-in. For things like password changes, be prepared to face a massive influx of support calls asking for help resetting passwords as well.

In addition to the technical limitations involved with moving to the cloud, you also have to deal with the fact that you lose a lot of direct control over the IT infrastructure. Since your resources are now located on a system owned and operated by someone else, if things break you have to go to the vendor to get it fixed, and that brings up any number of frustrations, depending on who you work with. If you’ve ever spent any time on the phone with Microsoft Support, you’re likely to dread any interaction with them from that point on. You won’t have to do much of the work involved in fixing the problem, but you will have to sit around twiddling your thumbs while someone else does, and that can be a little maddening at times. This does, of course, depend on how competent the person on the other end of the phone is. Sometimes you get lucky and find someone who knows their stuff. Sadly, that’s more of an exception than a rule, so you may need to brush up on your people skills a bit and learn how to light a fire under the support technician on the phone with you.

One Step Forward, Two Steps Back

As more services start moving to the cloud, we’ll probably see a reduction of administrative overhead, but for now cloud solutions will feel like a major step backward to a lot of people, and it really is a step backward. You see, in the old days, most businesses that made investments in IT infrastructure would utilize systems called Mainframes. The mainframe system would perform all of the actual processing work (and was as big as a house in some cases), and people who used computers would interface with the mainframe from a system that was directly connected to it. Sound a lot like a Virtual Desktop Environment? It’s a lot like how cloud services work, too, except that with the Cloud, we replace the centralized Mainframe system with a vast, globe spanning collection of servers. It’d be like if one company owned a single mainframe that was rented out for numerous companies to use at once. As a result, we have to rethink the way we work. Luckily, this does mean fewer trips to the data-center to reboot servers or move cables around, which is a major plus for some people (not me, though. Data-centers relax me for some reason. I’m not sure if it’s the steady humming sound or 50 degree temperatures).

Niche Workers Beware!

With all of this said, there are a couple jobs that are going to start disappearing in the next few years. If you happen to work entirely in one of these areas, you should seriously consider branching out or you may soon find yourself without a reason to work.

1. Backup Operations – This is one niche where the writing has been on the wall for a while now. Companies have been moving toward high-availability solutions for some time, which means that the need to spend copious funds on backup solutions and storage has been falling. High-availability solutions generally rely on having multiple copies of critical data on multiple servers, so the loss of a single server no longer puts people into panic mode. With the cloud, data is placed in systems with so much redundancy, and with such a high level of integrity, that data loss is extremely uncommon, and unrecoverable data loss is nearly impossible for some cloud solutions. So if you're a backup operator and that's all you've done for years, you might find your job in the cross-hairs. It's time to start expanding your repertoire.

2. Hardware maintenance – To me, it should be obvious that computer hardware specialists are going to see less work with the move to the cloud, since there is a definite drop in the amount of server and network hardware required when your company is running in the cloud, but I figured I should at least mention it.

3. Internal Network Administration – This particular job won’t ever go away, but with cloud solutions we may see a definite drop in the complexity and overhead required for running a LAN, and the concept of the WAN may begin to disappear as satellite offices will only require an Internet connection to access company resources located in the Cloud.

4. IT Infrastructure Design Specialists – Since the cloud consists almost entirely of prepackaged solutions, the demand for complex architectural designs will start to disappear, meaning that people who make their living designing and implementing IT infrastructure solutions are going to have a lot less work to do. This is the one that makes me sad, since I really enjoy Infrastructure Design. As the cloud push progresses, Infrastructure design will change from designing solutions to developing solutions for managing and interfacing with cloud-based services (which is not nearly as much fun).

5. SAN management – The concept of the Storage Area Network isn't really even into adolescence yet and here we're moving away from it? Well, yes, pretty much. As the cloud sees greater levels of adoption, people who focus on the management, provisioning, and optimization of centralized data storage are probably going to see less work.

Now, I’m not saying that these niche jobs will disappear entirely. Nor is this a comprehensive list of jobs that the cloud will be making less important. There will always be companies who avoid the Cloud like the plague, and they’re going to need people who know their stuff. But what I will say is that if you focus in these areas alone, be prepared to branch out or you’re probably going to start spending your days working for a cloud service provider like Microsoft, which might not be as much fun as working for a (much) smaller company.

Adapt or Die

In the end, the cloud will simply force IT professionals to do what most of us do best, adapt to changes in our surroundings. We’ll need to change the way we think and interact in our jobs to be successful in our careers, but we should still have careers despite the changes to the IT landscape.