Exchange Server Email Routing – Accepted Domains and Send Connectors

Exchange Server (And Exchange Online) can be a little confusing at times, particularly when we’re dealing with mail routing. Internal mail routes are handled almost automatically (especially if you keep all your Exchange servers in the same AD Site, which I recommend), but how do you get it to route email to mail servers *outside* your organization? What about partner companies, business departments with their own AD forests, or between on-prem and cloud mail platforms? Most environments don’t have to mess with complicated mail routing issues, but if you’re a consultant, or if you are working with a large Exchange deployment with multiple partner organizations, you will need to understand how mail routing works in Exchange. There are three pieces to this: Exchange Organizations, Accepted Domains, and Connectors.

Exchange Organizations

This portion of Exchange mail routing is more about terminology than function. Put simply, an Exchange Organization is the set of Exchange mail servers that exist in a single Active Directory (AD) forest. Exchange is heavily integrated with Active Directory, which is Microsoft’s technology that allows central control of usernames, passwords, and computers/devices. If you’re trying to learn Exchange Server and you don’t know anything about Active Directory, stop and learn that first, or you will have a lot of problems understanding what is going on.

An Exchange Organization covers every single Exchange server that exists in the same AD forest. You can’t have two Exchange Organizations in the same forest, and this is an important concept, because email routing in each Exchange Organization is controlled with Active Directory Sites and not traditional email routing techniques. If two users in the same AD Forest want to send email to one another, it doesn’t matter what their @domain.com email address is (As long as it’s an accepted domain for that Organization), mail routing will be done automatically by Exchange based on which AD Site each user’s mailbox is located in.

The terminology matters because mail between Exchange deployments in different forests has to be routed explicitly to work right. This is especially complicated when multiple Exchange Organizations have users with the same email domain (@domain.com).

Accepted Domains

Accepted domains are the core component of Exchange email routing. Each domain represents the portion of an email address after the @ sign. So for my email address, adam@acbrown-it.com, the Exchange environment that manages my email has acbrown-it.com as an accepted domain. In on-prem Exchange Server there are three types of accepted domains; in Exchange Online, there are two. The accepted domain types are:

  1. Authoritative
  2. Internal Relay
  3. External Relay (On-prem only)

Each type of accepted domain functions differently and, depending on circumstances, can be used to route email. It’s worth noting here that the name of each type doesn’t necessarily make its function obvious. Here’s how they work…

Authoritative Domain

You might think that an Authoritative domain would be a central server for a specific email domain. For instance, if you had an environment that had multiple Exchange organizations, the name “Authoritative” would make you think that you would set the @domain.com domain as authoritative on the main Exchange server that receives email for this domain. This is not how it works. When a domain is set as authoritative, that tells Exchange that all mail routing for the domain will STOP at this organization. If you were set up with two Exchange organizations that had @domain.com email addresses in them and you set the first server that received email for that domain as authoritative for the accepted domain, no email would ever reach the second Exchange organization. In this case, Authoritative should be seen as the Exchange server saying, “The buck stops here!” for all email in that domain.
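
For reference, here’s what that looks like in the Exchange Management Shell (the domain name is a placeholder):

#Authoritative: this organization is the final stop. Mail for recipients that don't exist here gets an NDR.
Set-AcceptedDomain -Identity "domain.com" -DomainType Authoritative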

Internal Relay

Internal Relay domains are used in situations where more than one Exchange Organization contains users with the same email domain. When an organization is set up to use an Internal Relay domain, it will look for a mailbox that matches the email address in its own organization first, but if it doesn’t find that mailbox, it will send the message off to another organization. This is very important to remember, because you have to decide where the email will go next using a Send Connector (explained later). If you use Internal Relay domains, note that Email routing between organizations *must* stop at an authoritative domain. If it doesn’t, email will get NDRs referencing Loop Detection, which is a pain.
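
Setting one up might look like this (the names are placeholders); the send connector that handles the hand-off is covered later in this post:

#Internal Relay: check local recipients first, then hand unmatched mail to a matching send connector.
New-AcceptedDomain -Name "Shared Domain" -DomainName "domain.com" -DomainType InternalRelay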

External Relay

An External Relay domain only exists in on-prem Exchange. External Relay works similarly to an Internal Relay domain, except that Exchange will *not* check its own recipient list to see if the email address matches. Email addresses that match an External Relay domain will be immediately forwarded without any real processing. This type of accepted domain has very little functional use these days, except to allow for a hub and spoke architectural design, when a single entity acts as a central point for mail. With an External Relay domain, that central point can relay messages to as many other entities as it wishes without wasting CPU cycles checking the recipient lists before forwarding the messages to their ultimate destination. This type of accepted domain is not available in Exchange Online, simply because Microsoft wants you to check the recipient list before forwarding messages, and to reduce complexity (Since Office 365 is heavily marketed toward smaller businesses whose technical staff may be lacking experience or knowledge, and having this option available might confuse people).
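
The command is the same shape as the other types, just with a different DomainType (placeholder name again):

#External Relay: forward immediately, with no local recipient lookup at all. On-prem only.
Set-AcceptedDomain -Identity "domain.com" -DomainType ExternalRelay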

Send Connectors

Accepted domains aren’t enough to properly route mail through Exchange, since they just tell the system what to do with email once received. After messages are processed against the accepted domain list, Exchange has to know what to do with them. This is where send connectors come in.

Each send connector is configured to apply to a specific list of domains (Or all domains, if the connector uses the * address scope). If the transport service sees an email it needs to send outside the Exchange organization, it will process the email domain against the list of send connectors to determine how to send the message. Each send connector has an address “scope” or address space that determines when it’s supposed to be used. If the email domain of the recipient on the email that is being sent outside the organization matches the send connector, that connector’s rules will be used.

One important thing about send connectors that you need to remember is that there should always be a connector with an address space that is just an asterisk (The star symbol).

This will always be processed *last* and ensures that emails that don’t match any other send connector get routed properly (Unless you only want the Exchange server to route to domains you specify, in which case, leave off the * connector from the list…also, you’re crazy if you want to do this).
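
If you’re building that catch-all connector from scratch, a minimal sketch in the Exchange Management Shell looks something like this (the connector and server names are placeholders):

#Catch-all internet connector: handles any domain no other connector claims, using normal MX lookups.
New-SendConnector -Name "Internet (Catch-All)" -Usage Internet -AddressSpaces "*" -DNSRoutingEnabled $true -SourceTransportServers "EXCH01"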

If you want to get really, unnecessarily complicated, you can also configure Scoped Send Connectors (to ensure only Exchange servers in a specific AD site can use the connector) or implement secondary connectors for the same domain (If you want to allow Exchange to have a second location to send mail to if the first location fails). I don’t recommend doing either of these things. If you find that you do need them, you may want to re-examine your Exchange architecture first (Up to you).

The Delivery tab of the send connector properties is where the work of a send connector is defined. By default, the connector will follow the MX record settings that the server sees when determining where mail is sent. It’s important to note here that you have to pay attention to what the *server* sees, not what is available to everyone on the Internet. If your Exchange server is set up to use a Domain Controller for DNS, you can create your own MX records for any domain in the world by creating a Forward Lookup Zone for that domain, then creating MX records to route mail for that domain. Again, I don’t recommend doing this, just note that it’s a possibility, so make sure you are taking that into account when troubleshooting.
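
As a sketch of how that override works (the zone, host name, and IP here are entirely hypothetical), on a Windows DNS server you could do something like this with the DnsServer module:

#Create a local zone for the domain, then point its MX record at a host you control.
#Any server using this DNS server now routes that domain's mail to 10.0.0.50.
Add-DnsServerPrimaryZone -Name "partnerdomain.com" -ReplicationScope "Forest"
Add-DnsServerResourceRecordA -ZoneName "partnerdomain.com" -Name "mail" -IPv4Address "10.0.0.50"
Add-DnsServerResourceRecordMX -ZoneName "partnerdomain.com" -Name "." -MailExchange "mail.partnerdomain.com" -Preference 10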

More common, however, is what is called “Smarthost” delivery. A Smarthost is basically any SMTP server that is capable of determining how to properly handle the message. Almost every mail server in the world can be used as a smarthost, but you should have a specific purpose in mind when using this setting. For instance, if you want to send all email to a spam filter for processing and relay, you would set up an address space of * and set the smarthost to the spam filter’s IP/DNS address. Your Exchange deployment is probably configured to do this already (Even Exchange Online has a hidden send connector that points outgoing email to Exchange Online Protection). If, however, you want to send email for a specific domain to a specific server, you would set the address space to equal the domain, then set the smarthost to be the IP/DNS address of that server.
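
A sketch of that second case (the domain, IP, and server name are placeholders):

#Route one specific domain through a smarthost instead of following MX records.
New-SendConnector -Name "Partner Route" -Usage Custom -AddressSpaces "partnerdomain.com" -DNSRoutingEnabled $false -SmartHosts "10.0.0.25" -SourceTransportServers "EXCH01"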

Summing Up

If you understand the relationship between send connectors and accepted domains, you can do a lot of really cool stuff. For instance, you could have half your users in Office 365 and half in Google Apps (Probably not the best idea, but it’s possible). Exchange Hybrid configurations make heavy use of accepted domains and send connectors to properly route email between cloud and on-prem users. And there are plenty of other use cases. If you’re feeling brave or working in a test lab, tinker around with these settings a bit and see what nifty tricks you can pull, but take care to remember the rules as I’ve explained them. If you don’t, you may spend hours troubleshooting just to find yourself feeling really dumb when you discover that the accepted domain isn’t set right or the send connector sends to the wrong server.


Clearing Logs from All Exchange Servers

Here’s a fun script. There are plenty of scripts that clear logs from an Exchange server, but this one goes the extra mile by doing it on every Exchange server in your environment (CAS, HUB, and MBX). The short explanation for why is that I work with 16+ node Exchange deployments, so setting up a single-server script on multiple servers is a huge pain. I imagine other people are dealing with that as well.

The script will pass through a list of directories that are stored in a hash table and delete all .log files in each directory and all child directories, based on the age of the file (older than 7 days by default).

This script is *not* meant to clear Transaction Logs and should not be pointed at them, though it is certainly possible to configure it that way. You’ve been warned.

#This line checks whether the Exchange Snapin is added. If not, it adds it. For other Exchange versions, change to match your version's snapin name.
if((Get-PSSnapin | Where-Object {$_.Name -eq "Microsoft.Exchange.Management.PowerShell.SnapIn"}) -eq $null){Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn}

#Pulls a list of Exchange servers.
$servers = Get-ExchangeServer

#This foreach loop passes through the list of servers one at a time and runs Invoke-Command against each server. The script block
#uses a hash table that holds each log file folder.
foreach ($server in $servers)
{
    #This command runs a script block that cycles through a hash table of paths to pass to a delete command. Change the file paths to match your environment.
    #Current paths are Exchange and IIS defaults. Note the .Name property; Invoke-Command needs a server name, not the full server object.
    Invoke-Command -ComputerName $server.Name -ScriptBlock {
        #Number of days worth of files you want to retain when the script runs. This value should be negative because the .AddDays() method doesn't do subtraction,
        #and there is no .RemoveDays() method. So if you wanted to keep 14 days of files, you would set this value to -14. Default is -7.
        $x = -7
        #This hash table stores the paths where you would like to delete files. It starts with 3 entries; add more by appending a comma, then
        #the path in quotations. Be sure to include the file extension (*.log for log files) that you want to erase to avoid potential disaster. Also, don't do transaction logs.
        $dirs = @{dir="C:\Program Files\Microsoft\Exchange Server\V15\Logging\*.log","C:\inetpub\logs\LogFiles\W3SVC1\*.log","C:\inetpub\logs\LogFiles\W3SVC2\*.log"}
        #Simple counter to keep track of which cycle the deletion loop is on.
        $i = 0
        #This is the loop that does all the magic. As long as the cycle number ($i) is less than the number of directories in the $dirs hash table,
        #the loop keeps cycling, finding and removing files older than the cutoff set in $x. Using .Count means you never have to update a directory count by hand.
        while ($i -lt $dirs.dir.Count){
            Get-ChildItem $dirs.dir[$i] -Recurse | Where-Object {$_.LastWriteTime -lt ((Get-Date).AddDays($x))} | Remove-Item -Confirm:$false -Force -ErrorAction SilentlyContinue
            $i++
        }
    }
}

Adam’s O365 Tips and Tricks Part 1: Exchange Online Email Recovery and Retention

With most people moving to Exchange Online or other cloud-based solutions for email, I’ve decided to write up some tips and tricks that might not be well known, but will give you some useful tools for managing Office 365 (Well, I guess they’re calling it Microsoft 365 now), which is the cloud service I am most familiar with. I’ll be expanding and adding articles on the subject as I come up with ideas and remember things I’ve done through the years, so be sure to check back periodically to see what’s new. For this edition, I’ll be covering Exchange Online backups.

Exchange Online Backups Aren’t Necessary!

One thing that drives me bonkers about the third party tools market for Office 365 is the number of companies selling Office 365 Backup Services. Some of that may be helpful for things like OneDrive and SharePoint (Unless you have an E3 license), but Exchange Online provides numerous tools for recovering email and handling retention at all license levels, as long as it’s configured correctly.

Recovering Deleted Emails

The most important thing you can do with Exchange Online is to make sure that a feature called “Single Item Recovery” is enabled. This feature allows admins to recover any deleted item in any mailbox, even if the user has purged it from the Deleted Items folder (Available by right-clicking Inbox and selecting “Recover Deleted Items”). Single Item Recovery will allow items to be deleted, but will retain them for a period of time that you can configure in Exchange Online Powershell (The default is 14 days, and Exchange Online allows up to 30). Recovering emails usually requires the In-Place eDiscovery feature in the compliance tools (Those controls have moved around a lot, so just look for any compliance search features in EoL’s admin portal or the O365 Portal). For a more in-depth look at the feature, visit this Technet Blog.
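
To turn it on everywhere, a sketch like this in Exchange Online Powershell covers every mailbox (30 days being the maximum retention Exchange Online allows):

#Enable Single Item Recovery tenant-wide and stretch deleted item retention to the 30 day maximum.
Get-Mailbox -ResultSize Unlimited | Set-Mailbox -SingleItemRecoveryEnabled $true -RetainDeletedItemsFor 30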

Fun With Shared Mailboxes

One of the more entertaining features of Exchange Online is Shared Mailboxes. A Shared Mailbox is a limited functionality mailbox that (currently) has a 50GB limit, does not have a password (and so can’t be logged into directly), and is FREE. Yes, you read that correctly. You can have as many shared mailboxes in your EoL tenant as you want and don’t have to pay a license for them. This opens up a world of possibilities for creative admins. Just realize that you have to grant users permission to open these mailboxes before they can be accessed. By default, once you grant permission to a shared mailbox, it will auto-mount in Outlook after about an hour (you can keep it from mounting automatically by using PowerShell to grant the permission with the -AutoMapping parameter set to $false).
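
For example, granting access without the auto-mapping behavior looks something like this (the addresses are placeholders):

#FullAccess without auto-mapping; the user adds the mailbox to Outlook manually.
Add-MailboxPermission -Identity "shared@domain.com" -User "user@domain.com" -AccessRights FullAccess -AutoMapping $false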

Shared Mailboxes feel very much like a legal gray area in Exchange Online, because even the entry level subscriptions for EoL allow them and they can be used to mimic many of the higher cost subscription features. If you feel icky about these tips, feel free to ignore them, as the legality of these uses really isn’t documented anywhere. Microsoft’s licensing tactics are notorious for being extremely complicated and confusing (I like to joke that understanding Microsoft’s licensing requires a chicken, a sacred altar, and an obsidian dagger crafted under the light of a blood moon), so take all this under advisement.

Terminated User Retention

If you are off-boarding an employee who is leaving the company for any reason, it is always a good idea to retain a copy of that user’s email for legal or transitional purposes. Most of the time, admins will access the user’s mailbox and export it to a PST for safe-keeping. This is absolutely still a possibility in EoL, but why use your own on-prem data storage to keep the email when you can convert the mailbox to a shared mailbox and have that user’s email available in the cloud for as long as you want without having to pay for it? It’s a great trick for handling data retention after an employee leaves. The EoL admin portal even makes it easy for you. Just click on the recipient and click the “Convert to shared mailbox” button. The process may take a while to finalize. Once the process is complete, however, you can either leave the mailbox as is or grant access to people who need it.
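
The Powershell equivalent of that button is a one-liner (the address is a placeholder); just remember to remove the user’s license once the conversion finishes:

#Convert a regular mailbox to a shared mailbox so it no longer needs a license.
Set-Mailbox -Identity "departed.user@domain.com" -Type Shared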

Mailbox Extension

This one is more legally questionable than terminated user retention (which seems to be perfectly acceptable, given the ease of implementation), and is entirely theoretical from a licensing standpoint, so if someone knows whether this is allowed or not, feel free to comment and I’ll remove this section. That said, it’s possible to use shared mailboxes to give a user more storage space for their mailbox.

The current limits for Exchange mailboxes are extremely generous, with 50GB for Business and E1 subscriptions and 100GB for E3 and up. Most users won’t use more than a fraction of that storage for email (especially considering the attachment limit of 50MB), but some executives and administrative staff members may break those limits, particularly in larger environments.

To add a shared mailbox as an extended storage space for a user’s mailbox, you need only create the shared mailbox in Exchange Admin > Recipients > Shared and add the necessary user as a “Delegate” with full access permissions. Instruct the user to move or copy emails to the new mailbox once it populates in Outlook, and voila. More mailbox. You can do this as many times as you feel necessary, just understand that adding mailboxes to Outlook can cause significant slowdowns once there are more than 3-4 additional mailboxes mounted.
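
If you’d rather script the setup, a sketch with hypothetical names looks like this:

#Create the overflow mailbox, then delegate it just like any other shared mailbox.
New-Mailbox -Shared -Name "Jane Doe Overflow" -PrimarySmtpAddress "jdoe.overflow@domain.com"
Add-MailboxPermission -Identity "jdoe.overflow@domain.com" -User "jdoe@domain.com" -AccessRights FullAccess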

Additional “FROM:” Addresses

One of the inherent limitations of Exchange that MS has either not been able to solve or has chosen not to solve is that each mailbox can only have a single email address assigned as the “From:” address. If you want to send email using multiple email addresses, you have to have an additional mailbox. The solution for this conundrum in Office 365 is to create a shared mailbox that has the additional email address set as the Primary SMTP address, then grant the user’s regular mailbox Send As permission on the mailbox. You can then choose whether to set up email forwarding on the shared mailbox to redirect messages to the primary mailbox (Preferred) or grant full access to the shared mailbox and mount it as a secondary.
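
A sketch of that setup in Exchange Online Powershell (the addresses are placeholders):

#Let the user send as the shared address...
Add-RecipientPermission -Identity "newsletter@domain.com" -Trustee "user@domain.com" -AccessRights SendAs
#...and redirect anything received there back to the user's primary mailbox.
Set-Mailbox -Identity "newsletter@domain.com" -ForwardingAddress "user@domain.com" -DeliverToMailboxAndForward $false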

End of Part 1

Hopefully one of these tips proves useful for you (The list is short right now, but I expect it to expand in time), and if you happen to know of a good trick, tool, or tip for other admins, let me know and I’ll add it to the list.

DNS – An Introduction

Though you may not know it, DNS (or Domain Name System) is probably one of the most used systems on the Internet. In fact, you’re using it right now. For those who don’t know what DNS is or does, it is the system we use to translate Domain Names to IP Addresses.

The World Before DNS

Back in the early days of the Internet (And by early, I mean before it was even *called* the Internet), all of the computers that were connected to one another could only be reached by using a series of numbers. To get to the computer you wanted to access, you had to know the right number for it. It was kind of like the modern telephone network, where you have to know the phone number of the person you want to talk to. Since this was the time before anyone had a really easy way to remember all of the address numbers for the computers they had or wanted to access (aside from writing it all down on a piece of paper), a shortcut was developed very quickly: the HOSTS file.

A HOSTS file is a simple text file that was stored on the computer and allowed people to assign memorable names to the computer addresses they wanted to access. Instead of putting in a number like 123.231.123.231 to access a computer, users just had to put in the name that was assigned to that number. Keeping with the phone comparison, this was similar to having a phone number that, based on the letters assigned to each number on a phone, allows you to say “Call me at 1-555-MYPLACE”. This is both easier to remember and easier to communicate (As a side note, each computer still has a HOSTS file that you can use to assign a specific name to a specific number. In Windows, the file is located at C:\Windows\System32\Drivers\Etc\HOSTS. You can play around with that and see what happens if you want. Many IT pranks involve modifying the HOSTS file, so it’s always good to know about it). The problem, though, was that each system had to have its own HOSTS file. So each computer had a completely unique set of data about which words translated to which numbers.

The unique HOSTS file on each computer led to some issues; specifically, it meant a lot of work filling out the file for each computer you wanted to use, not to mention the problems that occur when you want to communicate the location of some internet based resource to someone. So after a little while a central “authority” created a publicly available HOSTS file that could be obtained by anyone who didn’t want to fill out their HOSTS file with all the names and IPs they wanted or needed. This was a good short term solution, but after the Internet became “The Internet” (as opposed to its original name, ARPANET), the size and update frequency of the centralized HOSTS file became too overwhelming. This is when the need for a fully automated method of handling the word to number translation became apparent. Here is where DNS comes into play.

What DNS Does

DNS was created to allow easy creation, distribution, and update of “Internet Names.” Internet Names are the words that we assign to numbers (IP Addresses). You use DNS every day without realizing it. In fact, you used it to get to this website.

DNS is, put simply, a group of servers that do nothing but maintain and distribute word to number translations (as well as number to word translations, but that piece, known as Reverse DNS, is beyond the scope of this article).

How DNS Works

DNS functions by grouping name to number translations for similar names into “Domains”. Each dot in a URL represents a level of authority. For instance, my blog’s URL, http://www.acbrownit.com, includes four levels of authority, with the authority level becoming more narrow as you move to the left in the URL.

The highest level of authority in a URL starts *before* the .com, with the Internet Assigned Numbers Authority (IANA). The IANA’s servers represent the core list of DNS records. If you would like to look at the full list of records, you can go to IANA’s website (you can click on each Zone to see the ownership records and servers that hold the database for that zone). Historically, IANA has maintained complete authority over Internet DNS records and was originally maintained by the US government. A few years ago, IANA was spun off into a separate, independent organization without any governmental oversight. About the same time, IANA opened the root DNS zones up to complete customization.

Originally, there were fewer than 200 root DNS zones, .com, .info, .org, .gov, and zones for each nation (.uk, .au, .ca for the UK, Australia, and Canada, as examples). There were a few other zones, but IANA kept a pretty strict cap on DNS root zones to ensure that each DNS server on the Internet was capable of storing the entire DNS database, if necessary. Early Internet connected DNS servers had significantly more limitations than modern servers. The average smart watch has several orders of magnitude more processing and storage capacity than the earliest DNS servers, which put significant limits on the number of URLs available. With IANA removing the strict limits on root DNS zones, thousands are now available, including .APPLE (guess who owns that one), .BANANAREPUBLIC, and others. These newer root zones are often referred to as “Vanity” domains.

The COM domain is the next highest level of authority in my URL, and is referred to as a Top Level Domain (TLD). It is owned and maintained by Verisign Global Registry Services. Verisign’s DNS servers hold a list of records called a DNS Zone that points every domain that ends in .com to the authoritative servers used to store the zones for the next level of authority.

The ACBROWNIT domain is the next level of authority. This domain is “owned” by WordPress, but administered by…well, me. I pay a certain amount of money each year to maintain my rights to do whatever I want with the acbrownit.com domain, including move it to a different registrar like Godaddy, Network Solutions, or others, if I want to. WordPress also maintains the servers that provide access to my blog, and I pay a flat rate each year to use both services.

The next level of authority is completely managed by me, and represents what is called a DNS “A record”. “A Records” consist of a name and an IP address. In this case, the name is WWW and the IP address is 192.0.78.25. The IP is tied directly to the network where my blog’s data is stored.

The DNS Lookup Process

The DNS lookup process follows a chain of referrals: your computer asks the DNS server it’s configured to use, and that server either answers from its own records and cache or works its way down the hierarchy, from the root servers to the TLD servers to the domain’s own name servers. Please note, this is a very simplified description that leaves out a number of technical details, but it should give you an idea of how things work.

Every computer that has an Internet connection is configured with a DNS Server that acts as their primary point of contact for looking up DNS records. Usually, this service is provided by the company you purchase your Internet connection from. Most Internet Service Providers only allow their own customers to use their DNS servers. There are also a lot of “public” DNS servers that are owned by various companies. Public DNS servers are available to anyone who wants to use them, and most IT guys have at least a few memorized. The most common are owned by Google (8.8.8.8 and 8.8.4.4) or Level 3 (4.2.2.2). There are a number of sites that provide lists of publicly available DNS servers.
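
You can watch a lookup happen yourself with Powershell’s Resolve-DnsName cmdlet, pointed at one of those public servers:

#Ask Google's public DNS server for the A record behind this blog's name.
Resolve-DnsName -Name "www.acbrownit.com" -Type A -Server 8.8.8.8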

The End of This Post

So that was a lot of information that should help you better understand how DNS works. Every computer uses it, and without it, the Internet would not be able to function as well as it does. Hopefully, you understand it a little better. You may never give it another thought, but it never hurts to know more about how things work. And for those who are just starting a career in tech or are budding hobbyists, this article should give you much needed information that will serve you well in the future.

Stay tuned for the next post on DNS, where I’ll cover some of the more technical parts of the protocol, including record types, how each record type functions, historical weaknesses in DNS that have been and still are exploited to spread malicious software or phishing email, and how you can use DNS to provide a little bit of failover capability to servers.


Hardening Microsoft Solutions from Attacks

Take a minute to go over this post from Dirk-jan Mollema. Go ahead and read it. I’ll wait…

Did you realize how scary that kind of attack is? As an IT guy who specializes in Exchange server and loves studying security, that article scared the snot out of me. Based on my experience with organizations of all sizes I can say with a good bit of authority that almost every Exchange organization out there is probably vulnerable to this attack. Why? Because Exchange is scary to a lot of people and they don’t really know how to harden it effectively. But I also want to use the above attack as a way to illustrate what I feel is the best strategy for hardening a Windows environment (and, really, any environment).

Take this opportunity to look at your Exchange deployment (if you haven’t already moved to Exchange Online) and think about what you can do to protect your environment from this type of thing. In this post, though, I want to focus on Exchange Server and Windows Server hardening techniques in general, rather than this particular vulnerability, because with any hardening effort you want to examine the network as a whole and work downward, without focusing on specific vulnerabilities. If you do the opposite, you will invariably end up playing a never ending game of whack-a-mole, trying to stay ahead of a world full of malicious attackers and never really being successful.

The techniques recommended in the Center for Internet Security’s (CIS) Critical Security Controls follow the top-down approach and represent one of the best guides for approaching information security at a technical level.

IT Hardening, a Quick Intro

Hardening is essentially all actions that you take to make an environment more secure. There are many different types of hardening; server hardening, network hardening, physical hardening, procedural hardening, etc. But these all seek to do the same thing, just in different ways.

If you take a close look at the actions the CIS controls recommend, you’ll (hopefully) notice that they seek to secure as much of the environment as possible when you start at control number 1. As you go through the controls, each subsequent control has a more narrow focus. Once you get to control number 5, you will probably have an environment that will stand up against all but the most determined attacks, but you don’t necessarily want to stop there.

The most important best practice in Information Security is the idea of “Defense in Depth”. This technique involves building layers of protection instead of relying on a single security measure to protect your environment. Having a firewall in place is only one “layer” of defense, and is regarded as the broadest level of protection you can have. Anti-virus tools, Intrusion Detection/Prevention tools, and hardening techniques represent additional layers of defense. You want as many layers as you can justify when measuring cost against risk (a much more difficult topic to cover).

Focusing on Windows

One thing that you hear regularly in the IT industry is the argument about what OS people choose to handle their IT. The common argument is that Linux is a more secure OS than Windows, and this is true, up to a point. The reality is that they are simply different approaches to crafting an OS.

Linux tends to be more modular in its approach. If you implement a Linux environment, you would start with the core OS and add features as needed. This approach is good for limiting the attack surface from the start, but it also has a number of drawbacks.

The biggest drawback for Linux is that there is no centralization for support and maintenance. There are lots of different solutions to the same problem, and there isn’t really a single source of support for all solutions, so you have to either have very capable Linux support specialists or handle lots of different vendors. This usually increases the cost of ongoing maintenance and support of the infrastructure. It’s also not uncommon for different Linux-based open source projects to be abandoned for whatever reason, leaving organizations that implemented that solution without support, and once the guy who knows how to use it effectively leaves, you’re left with a very serious problem.

Windows, on the other hand, is a fairly complete package of capabilities for most situations. Windows server has built in solutions that can do most of the work you will want in an IT environment, within some limits. For instance, Windows server doesn’t handle email well right out of the box. You have to also implement Exchange server to have a truly effective method of handling email, but with that solution you also gain a very powerful collaboration tool that handles calendaring, contact management, task management, and other features that you can pick and choose from. Microsoft also invests a lot of time and effort in developing training tools and educational resources to ensure that there is a large pool of talent to support their OS and other software solutions. You don’t often have to worry about finding someone who knows how to manage a Windows environment. There are boatloads of MCSAs and MCSEs looking for work almost all the time.

The major drawback with Windows is, of course, security. With all of the features built in, Windows has a very large attack surface compared to Linux. However, with careful planning and implementation, the attack surface of Windows can be decreased very effectively, such that there is virtually no difference between a standard Linux deployment and a hardened Windows environment.

Hardening Windows

Going back to the vulnerability outlined in the link from the start of this article, a single change to a Windows Active Directory environment will eliminate the vulnerability: requiring LDAP signing and channel binding. LDAP signing and channel binding are techniques that are used to prevent Man in the Middle attacks from succeeding. I explain the theory behind LDAP signing in more depth in my article on Understanding Digital Certificates. LDAP channel binding is a technique that prevents clients from reusing portions of an authentication attempt against one DC when communicating with a different DC or client. Put simply, it “binds” a client to the entire authentication attempt by requiring clients to present proof that the authentication traffic they’re sending to the server isn’t forged or copied from a different authentication attempt.

Essentially, LDAP signing configures all Active Directory Domain Controllers so that clients and servers verify they are actually talking to the server they are supposed to before doing anything. Implementing this is a little difficult, though, as it requires the use of a Certificate Authority to generate and deploy digital certificates, but once digital certificates are installed on Domain Controllers and Member Servers in a Windows Domain, LDAP signing is available (once systems are configured to require it) and becomes a very effective form of security that prevents a wide swath of attacks that can be performed to gain unauthorized access.
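
As a sketch of what enforcement looks like on a single Domain Controller (these registry values follow Microsoft’s published LDAP hardening guidance; in production you would deploy them via Group Policy and test before enforcing):

#2 = require LDAP signing; 2 = always enforce channel binding. These are per-DC settings.
$ntds = "HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters"
Set-ItemProperty -Path $ntds -Name "LDAPServerIntegrity" -Value 2 -Type DWord
Set-ItemProperty -Path $ntds -Name "LdapEnforceChannelBinding" -Value 2 -Type DWord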

LDAP signing alone won’t prevent all possible attacks in a Windows environment, though, which is why it’s essential to disable features and roles that each server is not using, and taking effective care of remote access to servers. Windows Remote Desktop is one of the most frequently used tools to breach security in a Windows environment, so limiting access to it is essential. As a rule of thumb, only allow System Administrators to access critical Windows Servers and never, *never* allow remote desktop ports through your firewall.

Check your firewalls now, if you have port 3389 allowed to the Internet, it’s only a matter of time before you get attacked and suffer severe consequences. Remote Desktop is *not* meant for allowing remote workers access over the Internet. Implement secure VPNs and practice effective password security policies if you want people to access your IT environment remotely.

Once all unnecessary features and roles are removed or effectively controlled in a Windows environment, build and maintain an effective patch management strategy. Microsoft regularly deploys patches to close security holes before attackers begin exploiting them widely. Any patch management plan should make allowances for testing, approving, deploying, and installing security-related patches as soon as possible.

Next, focus on granting only permissions necessary for workers to accomplish their tasks. This is a difficult practice to implement, because it takes a lot of investigation to determine what permissions each user needs. Many environments grant Administrative permission to users on company owned equipment, which is a horrible, lazy practice that will get your environment owned by a hacker very quickly.

Once you have all of the above security practices in place, you will then want to start focusing on more specific vulnerabilities. As an example, changing a simple registry setting will block the attack described in the link at the start of this post. But it will not prevent future attacks that exploit vulnerabilities that aren’t yet well known.

How Does the Cloud Play Into This?

One of the major benefits of using cloud solutions like Exchange Online is that most of the work outlined above has been done already. Microsoft’s cloud servers are stored in highly secure datacenters with many protections against unauthorized access (as opposed to the common tactic of putting the server in a closet in your office). Servers in cloud environments are hardened as much as possible before being put into operation. Security vulnerabilities are usually addressed across the entire cloud environment within hours of discovery, and the servers don’t function with an eye to backwards compatibility, so things like NTLM and SMBv1 are disabled on all systems.

That said, the cloud poses its own security challenges. You must accept the level of security put in place by the cloud provider and will have little to no control over systems in a way that will let you increase security. Furthermore, utilizing a Hybrid-cloud solution (which is extremely common and will be for years to come) presents unique problems involving the interface between two separately controlled environments. Poor security practices in the on-prem side of a hybrid deployment will make the cloud side just as insecure.

You must accept public availability of your data and accept the reality that you don’t control where that data is (for the most part…this issue is slowly changing as cloud environments mature). In addition, you do not offload the responsibility of securing access to the data you store in the cloud. I’ll cover this subject in another post, but for now, understand that while cloud environments build a lot of security into their solutions, you still have a responsibility to make security a priority.

Conclusion (I never can think of a good heading here)

Security in any IT environment is a major challenge that takes careful planning and effective management. Failing to consider security challenges when deploying new solutions will almost always come back to bite you. But, with the right strategy and guidance, it *is* possible to build a secure environment that can withstand the vast majority of attacks.


Configuring Exchange Virtual Directories

Below is a script designed to aid admins with setting External URLs on Exchange Server. Currently this is an initial version with no features or frills. It simply builds External URL configuration cmdlets based on server name and root URL.

You’ll note that this script is much shorter than other versions out there. This is because I am using a hash table of arrays to store and access the unique portions of the URLs. A counter lets the script cycle through each VDir to generate and run the necessary commands. Note: version 1 doesn’t include the Powershell URL, since that one uses HTTP instead of HTTPS.

One last thing to note is that this only works on Exchange 2016 due to the removal of the RPC endpoint in IIS.


$url = "https://mail.domain.prod/"
$server = "servername"
#Hash table of arrays: .cmd holds the cmdlet noun fragments, .url holds the matching virtual directory paths.
$vdirs = @{
cmd = @("owa","webservices","mapi","oab","activesync")
url = @("owa","ews/Exchange.asmx","mapi","oab","Microsoft-Server-ActiveSync")
}
$i = 0
#Loop through all five virtual directories. Using .Count instead of a hard-coded number avoids running past the end of the arrays.
while($i -lt $vdirs.cmd.Count){
#Build the pipeline as a string. Note the space before -Confirm and the single quotes, so $false isn't expanded before Invoke-Expression runs.
$newurl = "get-" + $vdirs.cmd[$i] + "virtualdirectory -server " + $server + " | set-" + $vdirs.cmd[$i] + "virtualdirectory -externalurl " + $url + $vdirs.url[$i] + ' -confirm:$false'
write-host "Setting URL for $($vdirs.cmd[$i])"
Invoke-Expression $newurl
$i++
}

Enabling Message Encryption in Office 365

As I mentioned in an earlier post, email encryption is a sticky thing. In a perfect world, everyone would have Opportunistic TLS enabled and all mail traffic would be automatically encrypted with STARTTLS encryption, which is a fantastic method of ensuring security of messages “in transit”. But some messages need to be encrypted “at rest” due to security policies or regulations. Unfortunately, researchers have recently discovered some key vulnerabilities in S/MIME and OpenPGP. These encryption systems have been the most common ways of ensuring message encryption for messages while they are sitting in storage. The EFAIL vulnerabilities allow HTML formatted messages to be exposed in cleartext by attacking a few weaknesses.

Luckily, Office 365 subscribers can improve the confidentiality of their email by implementing a feature that is already available to all E3 and higher subscriptions or by purchasing licenses for Azure Information Protection and assigning them to users that plan to send messages with confidential information in them. The following is a short How-To on enabling the O365 Message Encryption (OME) system and setting up rules to encrypt messages.

The Steps

To enable and configure OME for secure message delivery, the following steps are necessary:

  1. Subscribe to Azure Information Protection
  2. Activate OME
  3. Create Rules to Encrypt Messages

Details are below.

Subscribe to Azure Information Protection

The Azure Information Protection suite is an add-on subscription for Office 365 that will allow end users to perform a number of very useful functions with their email. It also integrates with SharePoint and OneDrive to act as a Data Loss Prevention tool. With AIP, users can flag messages or files so that they cannot be copied, forwarded, deleted, or a range of other common actions. For email, all messages that have specific classification flags or that meet specific requirements are encrypted and packaged into a locked HTML file that is sent to the recipient as an attachment. When the recipient receives the message, they have to register with Azure to be assigned a key to open the email. The key is tied to their email address and once registered the user can then open the HTML attachment and any future attachments without having to log in to anything.

Again, if you have E3 or higher subscriptions assigned to your users, they don’t need to also have AIP as well. However, each user that will be sending messages with confidential information in them will need either an AIP license or an E3/E5 license to do so. To subscribe to AIP, perform these steps:

  1. Open the Admin portal for Office 365
  2. Go to the Subscriptions list
  3. Click on “Add a Subscription” in the upper right corner
  4. Scroll down to find the Azure Information Protection
  5. Click the Buy Now option and follow the prompts or select the “Start Free Trial” option to get 25 licenses for 30 days to try it out before purchasing
  6. Wait about an hour for the service to be provisioned on your O365 tenant

Once provisioned, you can then move on to the next step in the process.

Activate OME

This part has changed very recently. Prior to early 2018, Activating OME took a lot of Powershell work and waiting for it to function properly. MS changed the method for activating OME to streamline the process and make it easier to work with. Here’s what you have to do:

  1. Open the Settings option in the Admin Portal
  2. Select Services & Add-ins
  3. Find Azure Information Protection in the list of services and click on it
  4. Click the link that says, “Manage Microsoft Azure Information Protection settings” to open a new window
  5. Click on the Activate button under “Rights Management is not activated”
  6. Click Activate in the Window that pops up

Once this is done, you will be able to use AIP’s Client application to tag messages for rights management in Outlook. There will also be new buttons and options in Outlook Web App that will allow you to encrypt messages. However, the simplest method for encrypting messages is to use an Exchange Online Transport Rule to automatically encrypt messages.

Create Rules to Encrypt Messages

Once OME is activated, you’ll be able to encrypt messages using just the built in, default Rights Management tools, but as I mentioned, it’s much easier to use specific criteria to do the encryption automatically. Follow these steps:

  1. Open the Exchange Online Admin Portal
  2. Go to Mail Flow
  3. Select Rules
  4. Click on the + and select “Add a New Rule”
  5. In the window that appears, click “More Options” to switch to the advanced rule system
  6. The rule you use can be anything from encrypting messages flagged as Confidential to using a tag in the subject line. My personal preference is to use subject/body tags: have the rule apply Office 365 Message Encryption whenever the subject or body includes a tag like “[Encrypt]”. A minimal PowerShell equivalent is shown below.
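
Here’s a sketch of that rule in Exchange Online Powershell (“[Encrypt]” is just my tag convention, pick your own; newer tenants expose the same action as -ApplyRightsProtectionTemplate “Encrypt”):

#Encrypt any message whose subject or body contains the chosen tag.
New-TransportRule -Name "Encrypt Tagged Messages" -SubjectOrBodyContainsWords "[Encrypt]" -ApplyOME $true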

When set up properly, the end user will receive a message telling them that they have received a secure message. The email will have an HTML file attached that they can open up. They’ll need to register, but once registered they’ll be able to read the email without any other steps required and it will be protected from outside view.