If You Have a Cisco Firewall, Disable this Feature NOW!!!

I don’t often have an opportunity to post a rant in an IT blog (And even less opportunity to create a click-bait headline), but here goes nothing! Cisco’s method of doing ESMTP packet inspection is INCREDIBLY STUPID and you should disable it immediately. Why do I say that? Because when Cisco ASAs/whatever they call them these days are configured to perform packet inspection on ESMTP traffic, the preferred option of doing so is to block the STARTTLS verb entirely.*

In other words, Cisco firewalls are designed to completely disable email encryption in order to inspect email traffic. This is such a stupid method of allowing packet inspection that I can barely find words to explain it. But find them I shall.

You might think that you want your Firewall to inspect your email traffic in order to block malicious email or prevent unauthorized access, or what have you. And in that context, I agree. It’s a useful thing. But knowing that the Firewall is not only inspecting the traffic but also preventing any kind of built in E-Mail encryption from running is rant food for me.

I can just imagine the people at Cisco one day sitting around coming up with ideas on how to implement ESMTP packet inspection. I can imagine some guy saying, “I know, we can design our firewall to function as a Smart Host, so it can receive encrypted emails from our customer’s email servers, decrypt them, inspect them, then communicate with the destination servers and attempt to encrypt the messages from there.” I can then imagine that guy being ignored by the rest of his coworkers once the lazy dork in the room says, “How about we just block the STARTTLS verb?”

Thank you, Cisco engineers, for using the absolute laziest possible method you could find to ensure that all email traffic gets inspected, thereby making certain that your packet inspection needs are met while preventing your clients from using TLS encryption over SMTP.

So, if you have a Cisco firewall and want to have the ability to, you know, encrypt email, make sure you disable ESMTP packet inspection. If that feature is turned on, all your email is completely unencrypted. Barracuda provides a lovely guide on disabling ESMTP inspection. https://www.barracuda.com/support/knowledgebase/50160000000IyefAAC

Cisco tells people to just disable the rule that blocks STARTTLS in email, but that wouldn’t really help their packet inspection much, since everything past the STARTTLS verb is encrypted. If it’s encrypted, it can’t be inspected, other than looking at the traffic and going, “Yep. That’s all gobbledygook. Must be encrypted.” So that’s just a dumb recommendation that doesn’t do anything useful (It also requires a trip to the Cisco CLI, which is a great fun thing). This is why I say disable ESMTP packet inspection on your Cisco Firewall, cause it’s making you less secure.

*For the uninitiated, ESMTP stands for Extended Simple Mail Transfer Protocol, and it’s what every mail server on the Internet today uses to exchange emails with each other. The STARTTLS verb is a command that initiates an encrypted email session, so blocking it prevents encrypted email exchanges entirely. This is a bad thing.



Protect Yourself from the WannaCry(pt) Ransomware

Well, this has been an exciting weekend for IT guys around the world. Two IT Security folks can say that they saved the world and a lot of people in IT had no weekend. The attack was shut down before it encrypted the world, but there’s a good chance the attack will just be changed and start over. So what can you do to keep your system and data from being compromised by this most recent cyberware attack? If you’ve patched everything up already, or don’t know if you’re patched or vulnerable to this attack (or you just don’t want to deal with Windows updates right now), and you want to be absolutely positive that your computer won’t be affected, disable SMBv1! Like, seriously. You don’t need it. Unless you’re a Luddite.

There are some environments that may still need it (Anyone still using Windows XP and server 2003, antiquated management software, or PoS NAS devices), so if you have a Windows Server environment, run

Set-SmbServerConfiguration –AuditSmb1Access $true

in PowerShell for a bit and watch the SMBServer audit logs for failures.

To disable SMBv1 Server capabilities on your devices, do the following:

Server 2012 and Later

  1. Open Powershell (Click start and enter Powershell in the search bar to open it if you don’t know how to get to it)
  2. Type in this and hit Enter: Remove-WindowsFeature FS-SMB1
  3. Wait a bit for the uninstall process to finish.
  4. Voila! WannaCry can’t spread to this system anymore.

Windows 7, Server 2008/2008R2

  1. Open Powershell (Click start and enter Powershell in the search bar to open it if you don’t know how to get to it)
  2. Type in this (everything on the same line) and hit Enter: Set-ItemProperty -Path “HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters” SMB1 -Type DWORD -Value 0 -Force
  3. Wait a bit for the command to complete.
  4. Voila! WannaCry can’t spread to this system anymore.

Windows 8.1/10

  1. Open Powershell (Click start and enter Powershell in the search bar to open it if you don’t know how to get to it)
  2. Type in this and hit Enter: Disable-WindowsOptionalFeature -Online -FeatureName smb1protocol
  3. Wait a bit for the uninstall process to finish.
  4. Voila! WannaCry can’t spread to this system anymore.

If you’re using Windows Vista…I am so so sorry…But the Windows 7/8 instructions should still work for you.

If you still use Windows XP…stop it. And you’re just going to have to get the patch that MS released for this vulnerability.

An additional step you may want to take is to disable SMBv1’s *client* capabilities on your systems. Running the two commands below (on one each line) will do this for you. This isn’t completely necessary, since the client can’t connect to other systems unless they support SMBv1, so if the SMBv1 server component is disabled above, the SMBv1 client can’t do anything. But, if you want to disable the client piece as well, enter the following commands:

sc.exe config lanmanworkstation depend= bowser/mrxsmb20/nsi
sc.exe config mrxsmb10 start= disabled

Resolving the Internal/External DNS zone Dilemma with Pinpoint DNS

Here’s an interesting trick that might help you resolve some of your DNS management woes, particularly if you have a different Public and Private DNS zone in your environment. For instance, you have a domain name of whatever.com externally, but use whatever.local internally. When your DNS is set up like that, all attempts to access systems using the whatever.com domain name will default to using the external, Public IP addresses assigned in that DNS zone. If you want to have internal, Private IP addresses assigned to those systems instead (which is common), you normally have to create an entire zone for whatever.com on your Internal DNS servers and populate it with A records for all the systems that exist in the public DNS zone. This technique, known as Split Horizon DNS or just Split DNS, results in additional administrative burden, since changes to the external DNS zone have to be replicated internally as well, and you have to spend time recreating all the DNS records that are already there. Luckily, there’s a little DNS trick you can use to get past this limitation: Pinpoint DNS.

Pinpoint DNS – What is it?

Put simply, Pinpoint DNS is a technique that utilizes some of the features of DNS to allow you to create a record for a single host name that exists in a different DNS zone than you usually use. For instance, instead of creating an entire Primary zone in your internal DNS for whatever.com, you can create a Pinpoint DNS record for really.whatever.com.

Make it So!

To implement Pinpoint DNS, all you have to do is create a new Primary DNS zone in DNS. Instead of naming the zone whatever.com, name the zone really.whatever.com. Once the zone is created, you can then assign an IP address to the root of that new zone (in Windows, this shows up as the IP being “Same as Parent”). Attempts to connect to really.whatever.com will resolve the root zone IP address, and you will be connected to whatever you set that IP to. So, instead of having an entire internal DNS zone full of DNS A records that you have to fill out, even if you only want an Internal IP on one of them, you can have a DNS zone for the single Internal IP record.


There really aren’t a lot of downsides to this, other than it could confuse people who aren’t familiar with the technique. It does look a little odd to see a lot of Forward Lookup Zones in DNS with only a single record in them, but that’s just aesthetic.

Functionally, as long as the DNS zones you create for Pinpoint records are AD integrated, there aren’t any technical downsides to this technique, but if you have a large, distributed DNS infrastructure that *isn’t* AD integrated, this technique will greatly increase administrative burden, since you have to create replication configurations for each Pinpoint record. If you run a DNS environment that isn’t part of Active Directory, Pinpoint DNS isn’t a good solution, because it increases the burden more than managing split horizon DNS.

DNS is a very light-weight protocol (having been designed in the late 70s), so replication traffic increases caused by having multiple Forward Lookup Zones is generally not an issue here.

Windows How To

To implement this, do the following:

  1. Open DNS Management (preferably a Domain Controller)
  2. Expand the DNS server that’s listed
  3. Right Click the Forward Lookup Zones entry and select New Zone to open the new zone wizard. Hit Next when the wizard opens.
  4. Make sure Primary DNS Zone is selected, and that the AD Integration option is checked. Click Next.
  5. Select the option to replication to all DCs in the Forest (particularly if you are in a multi-domain Forest. It’s not necessary for single domain forests, but it’s a good idea to set this anyway, in case that ever changes). Click Next.
  6. Enter the name of the zone. This will be the host name you’re assigning an IP to, so really.whatever.com for the previous example. Click Next.
  7. Select the option to only allow secure updates (It’s the default, anyway). Click Next, then Finish to finalize the wizard and create the zone.
  8. Expand your Forward Lookup Zones and you’ll see the zone there, like below:PinpointDNSZone
  9. Right Click the new Zone, select New Host (A or AAAA).
  10. In the wizard that appears, *leave the host name blank*. This is important, since it is the key part of Pinpoint DNS. An empty host name assigns the A record to the root domain.
  11. Enter the IP address you want to point to in the IP address field, then click Add Host. Your record should look like the one below:PinpointDNSRecord
  12. Verify the new record appears in the really.whatever.com zone, and shows as (Same as Parent).

Once that’s done, the next time you ping really.whatever.com (after running “ipconfig /flushdns” to clear your DNS cache, of course), you’ll receive the Internal IP address you assigned to the Pinpoint zone, and the rest of your external DNS records will remain managed by external DNS servers.

ADFS or Password Sync: Which one do you use?

I’ve run into a number of people who get confused about this subject when trying to determine how to get their On-Prem accounts and Office 365 synced and working properly. Most often, people are making a comment somewhere that says, “Just use Password sync, it’s just as good and doesn’t require a server,” or something similar. While I wish this were true, it most absolutely is not. While both options fulfill a similar requirement (“I want my AD usernames and Passwords to work with Office 365”), they both do so in a completely different manner that can have a major impact on security, workflow, and administration of services.

Single Sign-On vs Same Sign-On

To see the difference here, you have to understand the terminology involved. The primary goal for synchronizing user accounts between Office 365 and Active Directory is to give users the ability to use the same username and password to use O365 that they use when logging in to their computer. There are two terms used to describe this relationship. Single Sign-On refers to technology that allows users to access numerous applications while only logging in once. You’ve probably used Facebook or Google’s version of this to access applications, games, or other software. Same Sign-On, however, allows a user to access multiple applications with the same username and password. If you have two bank accounts and use the same username and password to access them, you’re using a simplified version of Same Sign-on. Most Same Sign-on solutions in IT involve an application that reads username and password data used by one system and copies it to another system.

The biggest difference between the two technologies is that Single Sign-On allows you to authenticate one time and access all the applications that are tied to that sign-on system. Same Sign-On requires you to log in to all applications regardless of which or how many applications you’ve already logged into using that username and password.

Single Sign-on and Same Sign-on have a lot of similarities as well. They both allow you to use the same username and password and both simplify account management (theoretically). Most importantly, for Office 365 at least, they allow you to manage usernames and passwords in a single environment, rather than having to change passwords in multiple locations every time something needs to change. The way changes are accomplished is where the decision to use ADFS or Password Sync faces its biggest test.

ADFS is Single Sign-On, Password Sync is Same Sign-On

For the purposes of Office 365, which is what this article focuses on, ADFS is considered a Single Sign-On solution, while Password Sync is Same Sign-On. What does this mean for you, the IT administrator, when you are deciding how to set up your environment? It means you need to consider the following realities of each solution:

ADFS Issues

  1. ADFS requires more administrative overhead to function:
    1. ADFS is not a perfect solution and it does fail sometimes.
    2. Troubleshooting ADFS can be a daunting task. The error messages provided by ADFS are really poorly worded and generic, so a lot of digging in logs is required to really figure out where a problem is coming from.
    3. ADFS requires a trust between your environment and Office 365. Maintaining the trust takes some effort. ADFS relies on Digital Certificates that have expiration dates, so you have to make sure the certificates are updated before they expire or ADFS won’t work.
  2. ADFS is tricky to configure sometimes. The Office 365 setup for it has been streamlined, but there are occasional setup issues that can be difficult to resolve or confusing.
  3. If your ADFS server goes down for any reason, Office 365 can’t be accessed. This means that a High Availability ADFS cluster is very beneficial. It’s also expensive.
  4. In short, ADFS has a significantly higher cost to use than password sync, but it is also more secure.

Password Sync

  1. Password sync copies the “hash” for the AD password to Office 365. This means that if Office 365 gets taken over by hackers (very very unlikely, but still a potential concern), they also get to take over your network because they have all your password hashes. This doesn’t happen with ADFS.
  2. The Synchronization between Office 365 and AD occurs on a scheduled basis. This occurs every 30 minutes at a minimum, so if you change someone’s password in AD, you have to wait up to 30 minutes for the password to change in Office 365. This can be very confusing for users and result in a lot of time consuming support calls, particularly if you enable account lockout in Office 365. You can force syncs to occur, but this does add a good bit of administrative time to the password change process.

Issue Mitigation

There are some ways to get around the issues involved with each solution. For instance, Microsoft is currently working on a cloud-based version of ADFS that will allow you to have ADFS level security without the added infrastructure and administrative costs of an ADFS server/cluster. They also provide an “upgraded” version of Azure AD (which is the back-end system for account management in Office 365) called Azure AD Premium. AAD Premium costs about 4 dollars a month, but allows you to provide your users with self-service password reset features and adds attribute “write-back” capabilities that allow you to manage users in the cloud when using ADConnect, which isn’t possible otherwise, meaning you can change distribution group membership, user passwords, and other attributes in Office 365 and those changes will by written to your AD environment.


In the end, the decision between ADFS and Password Sync is entirely up to you. If you have major regulatory governance requirements or are very concerned about security, ADFS is a very capable system that will greatly improve system security for Office 365. However, if you work for a small organization with little to no major security concerns, Password sync will provide you with a lot of benefit.

Update – 10/30/2017

It’s been a while since I wrote this post, but a number of changes to ADFS and the addition of Passthrough authentication using AD Connect mean that I need to update some of the conclusions here, and will definitely change the solution you may choose.

  1. Password Sync has a specific limitation for environments that use limitations to logon hours in Active Directory. Because the attributes for logon hours are not properly synced through Azure AD Connect, logon hour limitations will not function in Office 365 when using Password Sync. ADFS authenticates against AD directly, so it will not allow users to log in if AD says they are outside of their login hours window(s).
  2. Passthrough Authentication in Azure AD Connect *greatly* improves authentication in Office 365 by creating an authentication that passes credentials to AD through Azure ADConnect, rather than storing password hashes in the cloud. This significantly reduces the security risks associated with using password sync.
  3. ADFS in Server 2012 R2 and later allows a pretty awesome feature that I wasn’t aware of til just now, a self-service password reset portal tied to the ADFS portal. https://blogs.msdn.microsoft.com/samueld/2015/05/13/adfs-2012-r2-now-supports-password-change-not-reset-across-all-devices/ covers this in greater detail.


Do I need Anonymous Relay?


If you have managed an Exchange server in the past, you’ve probably been required to set things up to allow printers, applications, and other devices the ability to send email through the Exchange server. Most often, the solution to this request is to configure an Anonymous Open Relay connector. The first article I ever wrote on this blog was on that very subject: http://wp.me/pUCB5-b .  If you need to know what a Relay is, go read that blog.

What people don’t always do, though, is consider the question of whether or not they need an anonymous relay in Exchange. I didn’t really cover that subject in my first article, so I’ll cover it here.

When you Need an Open Relay

There are three factors that determine whether an organization needs an Open Relay. Anonymous relay is only required if you meet all three of the factors. Any other combination can be worked around without using anonymous relaying. I’ll explain how later, but for now, here are the three factors you need to meet:

  1. Printers, Scanners, and Applications don’t support changes to the SMTP port used.
  2. Printers, Scanners, and Applications don’t support SMTP Authentication.
  3. Your system needs to send mail to email addresses that don’t exist in your mail environment (That is to say, your system sends mail to email addresses that you don’t manage with your own mail server).

At this point, I feel it important to point out that Anonymous relays are inherently insecure. You can make them more secure by limiting access, but using an anonymous relay will always place a technical solution in the environment that is designed specifically to circumvent normal security measures. In other words, do so at your own informed risk, and only when it’s absolutely required.

The First Factor

If the system you want to send SMTP messages doesn’t allow you to send email over a port other than 25, you will need to have an open relay if the messages the system sends are addressed to email addresses outside your environment. The bold stuff there is an important distinction. The SMTP protocol defines port 25 as the “default” port for mail exchange, and that’s the port that every email server uses to receive email from all other systems, which means that, based on modern security concerns, sending mail to port 25 is only allowed if the recipient of the email you send exists on the mail server. So if you are using the abc.com mail server to send messages to bob@xyz.com, you will need to use a relay server to do it, or the mail will be rejected because relay is (hopefully) not allowed.

The Second Factor

If your system doesn’t allow you to specify a username and password in the SMTP configuration it has, then you will have to send messages Anonymously. For our purposes, an “anonymous” user is a user that hasn’t logged in with a username and password. SMTP servers usually talk to one another Anonymously, so it’s actually common for anonymous SMTP access to be valid and is actually necessary for mail exchange to function, but SMTP servers will, by default, only accept messages that are destined for email addresses that they manage. So if abc.com receives a message destined for bob@abc.com, it will accept it. However, abc.com will reject messages to jim@xyz.com, *unless* the SMTP session is Authenticated. In other words, if bob@abc.com wants to send jim @xyz.com a message, he can open an SMTP session with the abc.com mail server, enter his username and password, and send the message. If he does that, the SMTP server will accept the message, then contact the xyz.com mail server and deliver it. The abc.com mail server doesn’t need to have a username and password to do this, because the xyz.com mail server knows who jim@xyz.com is, so it just accepts the message and delivers it to the correct mailbox. So if you are able to set a username and password with the system you need to send mail with, you don’t need anonymous relay.

The Third Factor

Most of the time, applications and devices will only need to send messages to people who have mailboxes in your environment, but there are plenty of occasions where applications or devices that send email out need to be able to send mail to people *outside* the environment. If you don’t need to send to “external recipients” as these users are called, you can use the Direct Send method outlined in the solutions below.


As promised, here are the solutions you can use *other* than anonymous relay to meet the needs of your application if it doesn’t meet *all three* of the deciding factors.

Authenticated Relay (Factor #3 applies)

In Exchange server, there is a default “Receive Connector” that accepts all messages sent by Authenticated users on port 587, so if your system allows you to set a username and password and change the port, you don’t need anonymous relaying. Just configure the system to use your Exchange Hub Transport server (or CAS in 2013) on port 587, and it should work fine, even if your requirements meet the last deciding factor of sending mail to external recipients.

Direct Send (Factor #2 applies and/or #3 doesn’t apply)

If your system needs to send messages to abc.com users using the abc.com mail server, you don’t need to relay or authenticate. Just configure your system to send mail directly to the mail server. The “direct send” method uses SMTP as if it were a mail server talking to another mail server, so it works without additional work. Just note that if you have a spam filter that enforces SPF or blocks messages from addresses in your environment to addresses in your environment, it’s likely these messages will get blocked, so make allowances as needed.

Authenticated Mail on Port 25 (Only factor #1 applies)

If the system doesn’t allow you to change the port number your system uses, but does allow you to authenticate, you can make a small change to Exchange to allow the system to work. This is done by opening the Default Receive connector (AKA – the Default Front End receive connector on Exchange 2013 and later) and adding Exchange Users to the Permission settings on the Security tab as shown with the red X below:


Once this setting is changed, restart the Transport service on the server and you can then perform authenticated relaying on port 25.


If you do find you need to use an anonymous relay, by all means, do so with careful consideration, but always be conscious of the fact that it isn’t always necessary. As always, comments questions on this article and others are always welcome and I’ll do my best to answer as soon as possible.

What is a DNS SRV record?

If you’ve had to work with Active Directory or Exchange, there’s a good chance you’ve come across a feature of DNS called a SRV record. SRV records are an extremely important part of Active Directory (They are, in fact, the foundation of AD) and an optional part of Exchange Autodiscover. There are a lot of other applications that use SRV records to some degree or another (Lync/Skype for Business relies heavily on them, for instance).The question, though, is why SRV records are so important and what exactly do they do?

What does a SRV record do?

The purpose of a SRV record is found in its longer, more jargon filled name: Service Locator Record. It’s basically a DNS record that is meant to allow applications to find a Server that is providing a Service the application needs to function. They provide a centralized method of configuration and control of applications that result in less work configuring the client of a client/server based application.

For example, let’s say you’re an application designer and you are creating an application that needs to talk to a server for some reason. Prior to the existence of SRV records in DNS, you had two choices:

  1. Program the application so it only ever talked to a server if it had a specific name or IP address
  2. Include some configuration settings in the application that would let end users put in the DNS name of the server.

Both of these options are not very useful for usability. Hard-coding IP addresses or host names for the server makes setup difficult and very strict in its requirements. Making end users enter the server information usually causes a lot more work for IT staff, as they would usually be required to do this for all the users.

SRV records were first added to the DNS protocol’s specifications around the year 2000 to give programmers another option for designing Client/Server based software. With SRV records, the application can be designed to look for a SRV record and get server information without having be directly configured by end users or IT staff. This is similar to the first option above, but allows greater flexibility because the server can have any name or IP address you want and the application can still find it. Some of the advanced features of SRV records also allow failover capabilities and a lot of other cool stuff.

How do SRV Records Work?

Since Active Directory relies so heavily on SRV records, let’s use it as an example to explain how they work. First, let’s take a look at a typical AD DNS zone. Below, you can see a picture that shows the fully expanded _MSDCS zone for my test lab:srv-records-for-sysinteg

This shows the _Kerberos and _ldap SRV records created by a Domain Controller (Megaserver). Here’s basically what those records are for:

  1. Windows Login requires a Domain-Joined client to connect to a Domain Controller
  2. The login system is programmed to find a Domain Controller by looking for a SRV record at _ldap.Default-First-Site-Name._sites.DC._msdcs.sysinteg.ad
  3. The SRV record listed above has a value that returns megaserver.sysinteg.ad as the location of the server providing the _ldap service.
  4. The computer’s programming fills in a blank left for whatever value the _ldap service returns with the value that is returned (megaserver.sysinteg.ad).
  5. The computer then talks to megaserver.sysinteg.ad exclusively for all functions that require it to use LDAP (Which is the underlying Protocol used by AD for what it does).

If SRV records didn’t exist, we would be required to manually configure every computer on the domain to use megaserver.sysinteg.ad for anything related to AD. Now, that’s certainly not an unfeasible solution, but it does give us a lot more work to do.

What Makes up a SRV record?

A SRV record has a number of settings that are required for them to function. To see all the settings, look at the image below:


That shows an Exchange Autodiscover SRV record. I’ll explain what each setting here does:

Domain: This is an un-changeable value. It shows the DNS Domain the SRV record belongs to.
Service: This is the “service” the SRV record will be used to define. In the image, that service is Autodiscover. Note that all SRV records should have an Underscore at the start, so the service value is _autodiscover. The underscore prevents issues where there might be a regular A record with the same name as a SRV record.
Protocol: This is the Protocol used by the service. This can functionally be anything, since the protocol in a SRV record is usually only meant to organize SRV records, but it’s best to use the protocols allowed by RFC 2782 to ensure compatibility (_tcp and _udp are universally accepted), but the Protocol can be anything. Unless you are designing software that uses SRV records, you’ll never be in a situation where you’ll have to make a decision about what to put as the Protocol. If you’re trying to configure a SRV record for some application that you are setting up, just follow the instructions when creating a SRV record.
Priority: In a situation where multiple servers are providing the same service, the Priority value determines which server should be contacted first. The server chosen will always be the one with the lowest number value here.
Weight: In a situation where you have multiple SRV records with the same Service value and Priority value, the Weight is used to determine which server should be used. When the application is designed according to RFC 2782, the Weight value of all SRV records is added together to determine the full Weight. Whatever portion of that weight a single SRV record is assigned determines how often a server will be used by the application. For instance, if you have 2 SRV records with the same Service and Priority where Server 1 has a weight of 50 and Server 2 has a weight of 25, Server 1 will be chosen by the application as its service provider 2/3s of the time because it’s weight of 50 is 2/3s of the total weight assigned, or 75. Server 2 will be chosen the remaining 1/3 of the time. If there’s only one server to host the service, set this value to 0 to avoid confusion.
Port Number: This setting provides Port data for the application to use when contacting the server. If, for instance, your server is providing this service on port 5000, you would put 5000 in as the Port number. The setting here is defined by how the server is configured. For Autodiscover, as shown above, the value is 443, which is the default port designated by the HTTPS protocol. The Autodiscover Website in my environment is being hosted on the default HTTPS port, so I put in port 443. If I wanted to change my server to use port 5000, I could do so, but I would need to update my SRV record to match (As an aside, if I wanted to change the port Autodiscover was published on, I would be required to use a SRV record for Autodiscover to work, as opposed to any other method).
Host Offering this Service: This is, put simply, the host name of the server we want our clients to communicate with. Some DNS servers will let you enter an IP address here, but RFC 2782 requires a host name (one that resolves to an address record and is not an alias), and a host name is the better choice anyway, since IPs can and do change over time.
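Putting those fields together, a complete SRV record in standard zone-file syntax looks something like this (the domain and host names are placeholders; the Priority, Weight, and Port values match the Autodiscover example above):

```
; _service._protocol.name        TTL   class type Priority Weight Port Target
_autodiscover._tcp.example.com.  3600  IN    SRV  0        0      443  mail.example.com.
```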

Using SRV Records to Enable High Availability

If you managed to read through all the descriptions of those settings up there, you may have noticed my explanation of the Priority and Weight settings. Well, those two settings allow for one of the best features of SRV records: High Availability.

Prior to the existence of SRV records, the only way you could use DNS to enable high availability was to use a feature called Round Robin. Round Robin DNS is where you have multiple IP addresses assigned to one host name (or A record). When this is set up, the DNS server will alternate between all the IPs assigned to that A record, giving the first IP out to the first client, the second IP to the second client, the third IP to the third client, and the first IP again to the fourth client (assuming 3 IPs for one A record).

With a SRV record, though, we can configure much more advanced and capable High Availability features by having multiple SRV records that have the same Service Name, but different combinations of Priority and Weight.

When we use SRV records, we have two options for high availability: Failover and Load Balancing. We can also combine the two if we wish. To do this, we manipulate the values of Priority and Weight.

If we want failover capabilities for our application, we would have two servers hosting the service and configure one server with a lower Priority value than the second. When the application performs a SRV record lookup, it will retrieve all the SRV records and attempt to contact all servers until it gets a response, using the Priority value to determine the order. A lower Priority value will be contacted first.
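As a rough sketch of how an RFC 2782-aware client could handle that failover (the record list, server names, and the reachability check are all made up for illustration), the logic amounts to sorting by Priority and walking the list until a server answers:

```python
# Hypothetical SRV records as (priority, weight, port, target) tuples.
records = [
    (10, 0, 443, "primary.example.com"),
    (20, 0, 443, "backup.example.com"),
]

def try_connect(target, port):
    """Stand-in for a real connection attempt; pretend only the backup is up."""
    return target == "backup.example.com"

def pick_server(records):
    # Lower Priority values are tried first, per RFC 2782.
    for priority, weight, port, target in sorted(records):
        if try_connect(target, port):
            return target
    return None  # every server in every priority group was unreachable

print(pick_server(records))  # the primary "fails", so the backup is chosen
```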

If we want to have load balancing for the application (all servers can be used at any time), we create multiple SRV records with the same service name, like with the failover solution, but with the same Priority value as well. We then decide how much of the load we want each server to take. If we have two servers providing the same service and want them to share the load equally, we simply give both records the same Weight (any value from 1 to 65534 will do; 65535 is the highest possible Weight value). When a client queries the SRV record, it will receive all matching records, calculate the total weight, and then pick a random number between 1 and that total weight to determine which server to talk to.

For instance, if you had Server 1 and Server 2 both with a Weight of 50 in their SRV record, the client would assign half of the total weight value, 100, to Server 1 and half to Server 2. Let's say it assigns 1-50 to Server 1 and 51-100 to Server 2. The client would then pick a number between 1 and 100. If it picked a number between 1 and 50, the client would communicate with Server 1. Otherwise, it would talk to Server 2. Note: Because this functions using a random number, you will not always end up with results that match the calculated expectations. Also note: The scheme used to pick a server based on the Weight value is up to the application's developer. This is just a simple example of how it can work. Some developers may choose a scheme that always results in an exact load distribution.
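A minimal sketch of that weighted selection (the server names and weights are just examples; as noted above, real clients may implement the randomization differently):

```python
import random

def select_by_weight(records):
    """Pick one (weight, target) record from a same-Priority group.

    Draws a random threshold between 1 and the total weight, then walks the
    records in order until the running sum of weights reaches the threshold.
    """
    total = sum(weight for weight, _ in records)
    if total == 0:
        return random.choice(records)[1]  # all weights are 0: pick uniformly
    threshold = random.randint(1, total)
    running = 0
    for weight, target in records:
        running += weight
        if running >= threshold:
            return target

# Server 1 (weight 50) should win roughly two-thirds of the time.
records = [(50, "server1.example.com"), (25, "server2.example.com")]
picks = [select_by_weight(records) for _ in range(9000)]
print(picks.count("server1.example.com") / len(picks))  # roughly 0.67
```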

The Weight value can be used with as many servers as you want (up to 65534 servers), and with any percentage split you want for your load balancing scheme. You can have 4 servers where three share the load roughly equally and the fourth is almost never chosen by setting the Weight of three SRV records to 33 and the fourth to 0. Note that, per RFC 2782, a Weight of 0 doesn't take a server out of rotation entirely; it just gives that server a very small chance of being selected while records with nonzero Weights exist. You should avoid having multiple copies of the same SRV record with Weights of 0.

Lastly, you can combine Priority and Weight to have multiple load-balanced groups of servers. This isn't a very common solution, but it is possible to have Server 1 and 2 using priority 1 with weights of 50, and Server 3 and 4 using priority 2 with weights of 50. In this situation, Servers 1 and 2 would each handle half of the load, but if both Server 1 and Server 2 stopped working, Servers 3 and 4 would take over, splitting the load between themselves.

Tinkering with AD

If you want to see how SRV records can be used to handle high availability and get a good example of a system that uses SRV records to their fullest capabilities, try tinkering with some of your AD SRV records. By manipulating Priority and Weight, you can force clients to always use a specific DC, or configure them to use one DC more often than others.

Try modifying the Weight and Priority of the various SRV records to see what happens. For instance, if you want one specific DC in your environment to handle Kerberos authentication and another to handle LDAP lookups, change the priorities of those records so one server has a 0 for Kerberos and 100 for LDAP, while the other has 100 for Kerberos and 0 for LDAP. You can also tinker with the Weight to give a DC with more resources priority over smaller, backup DCs. Give your monster DC a weight of 90 and a tiny, possibly older DC a weight of 10. By default, clients in AD will pick a DC at random.

The easiest way to see this in action is to set one DC with a Priority of 10 and another with a Priority of 20 on all SRV records in the _msdcs zone. Then make sure the DNS data is replicated between the DCs (either wait or do a manual replication). Run ipconfig /flushdns on a client machine and log out, then back in. Run SET LOGONSERVER in CMD to see which DC the computer is using. Now, switch the priorities of the SRV records in DNS, wait for replication, run ipconfig /flushdns, then log out and back in again. Run SET LOGONSERVER again and you should see that the second DC is now chosen.

Final Thoughts

As I mentioned, much of a SRV record's configuration is determined by software developers, since they define how their application functions. To be specific, as an IT administrator or engineer, you'll never be able to decide what the Service Name and Protocol will be. Those are always determined by software developers. You'll also never be in control of whether or not an application will use SRV records. Software developers have to design their applications to make use of SRV records. But if you take some time to understand how a SRV record works, you can greatly improve functionality and security for any and all applications that support configuration using SRV records.

If you’re a Software Developer, I have to point out the incredible usefulness of SRV records and the power they give to you. Instead of having to hard-code server configurations or develop UIs that allow your end users to put in server information, you can utilize SRV records to partially automate your applications and make life easier for the IT people who make your software work. SRV records have been available for almost 2 decades now. It’s about time we started using them more and cut down the workload of the world’s IT guys.



A Treatise on Information Security

One famous misquote of American Founding Father Ben Franklin goes like this: "Anyone who would sacrifice freedom for security deserves neither." At first glance, this statement speaks to the heart of people who have spent hours waiting in line at the airport, waiting for a TSA agent to finish groping a 90-year-old lady in a wheelchair so they can take off their shoes and be guided into a glass tube to be bombarded with the emissions of a full body scanner. But the reality of any kind of security, and Information Security in particular, is that any increase in security requires sacrificing freedom. The question we all have to ask, as IT professionals tasked with improving or developing proper security controls and practices, is whether or not the cost of lost freedom is worth the amount of increased security.

The Balancing Act

If you were to dig a little, like I have, you would find that Mr. Franklin actually said, "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety." This version of the quote demonstrates very eloquently one of the principal struggles of developing security policies in IT. After all, there is a famous axiom in the industry (it's quote day here at ACBrown's IT World), "The most secure computer is unplugged." Or something like that. I'm probably misquoting.

In a humorous demonstration of that axiom, I present a short story. When I was a contractor performing DIACAP (Go look it up) audits on US military bases, we were instructed to use a tool called the "Gold Disc." The Gold Disc was developed by personnel in the military to scan through a workstation or server and check for configuration settings that violated the DISA (That's the Defense Information Systems Agency) STIG (That's Security Technical Implementation Guide. Not the guy that drives cars for that one TV show). The Gold Disc was a handy tool, but the final screen that gave you the results of the scan had a little button on it that we were expressly forbidden from ever pushing. That button said, simply, "Remediate All." Anyone who pushed that button would find that they were instantly locked out of the network, unable to communicate with anything. Pushing the button on an important server would result in mass hysteria, panic, and sudden loss of employment for the person who pushed the button. You see, the Remediate All button caused the tool to change every configuration setting to comply exactly with the DISA STIG recommendations. If you're not laughing yet, here's the punchline… Perfectly implementing the DISA STIG puts computers in a state that makes it impossible for them to communicate with one another properly. <Insert follow up joke regarding Government and the problems it causes here>.

On the other hand, computers that blatantly failed to comply with the DISA STIG recommendations would (theoretically) be removed from the network (after 6 or 7 months of bureaucratic nonsense). In the end, there was a point in the middle where we wanted the systems to be. That balancing point was the point where computers were secure enough to prevent the majority of attacks from succeeding, but not so secure that they significantly inhibited the ability of people to do their jobs effectively and in a timely manner. As IT Security professionals, we have a duty to find the right balance of security and freedom for the environments we are responsible for.

The Costs of Security

Everything in IT has a cost. The cost can’t always be easily quantified, but there is always a cost associated. For instance, something as simple as password expiration in Active Directory has a very noticeable cost. How much time do system administrators spend unlocking accounts for people who forgot their password after it just reset? Multiply the number of hours spent unlocking accounts and helping people reset their passwords by the amount of money the average system administrator makes and you get the cost of that level of security in dollars. But that is only the direct cost.

Implementing password expiration and account lockout policies also reduces the level of freedom your employees have in controlling their user accounts. That lost freedom also translates into lost revenue as employees are forced to spend their time calling tech support to get their passwords reset. Then you also have to consider lost productivity due to people wasting time trying to remember the password they set earlier that morning.

With some estimates showing that nearly 30 percent of all help-desk work hours are devoted to password resets, the cost of enabling password expiration climbs pretty high.
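As a back-of-the-envelope sketch of the calculation described above (every figure here is a made-up example, not real data):

```python
# Hypothetical monthly figures for a mid-sized help desk.
resets_per_month = 200     # password reset / unlock tickets handled
minutes_per_reset = 10     # admin time spent on each one
admin_hourly_rate = 40.0   # fully loaded cost of an admin hour, in dollars

hours_spent = resets_per_month * minutes_per_reset / 60
direct_cost = hours_spent * admin_hourly_rate
print(f"~${direct_cost:,.2f} per month in direct password-reset labor")
```

And that figure still ignores the indirect costs discussed below, like the productivity lost while employees wait for the reset.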

The Cost of Freedom

On the other hand, every day an individual goes without resetting their password increases the likelihood of that password being discovered. Furthermore, every day a discovered password is left unchanged increases the likelihood of that password being used by an unauthorized individual. If the individual who lost the password is highly privileged (a CEO, for example), the cost to the business that employs that individual can be astronomical. There are numerous cases of companies going bankrupt after major intrusions linked to exposed passwords.

So while it may cost a lot to implement a password expiration policy, it can cost infinitely more not to. In comparison, the cost of implementing a password expiration policy is almost always justified. This is particularly true when working for organizations that fall under the purview of Regulatory Compliance laws (Cue the dramatic music).

Regulatory Compliance

One of the unfortunate realities of the IT world is that some organizations have outright failed to consider the costs of *not* having a good security policy and just plain failed to have good security. Those organizations got hit hard and either lost data that cost the business huge amounts of money, or worse, data that put their customers at risk of identity theft. So, because the kids couldn’t play safe without supervision, most Governments around the world have developed laws that tell businesses in key industries things that they must do when developing their IT infrastructure.

For instance, the Healthcare industry in the US must follow the HITECH addition to HIPAA (so many acronyms), which mandates the use of IT infrastructure that prevents the unauthorized disclosure of certain types of patient information. Publicly owned corporations in the US are required to follow the rules outlined in the Sarbanes-Oxley Act, which requires companies to maintain adequate records of business dealings for a significant period of time. The aforementioned DIACAP audits are performed to verify whether military installations are complying with the long list of instructions and requirements developed by the DoD (if you ever have trouble sleeping…).

Organizations that fall under the umbrella of one or more Regulatory Compliance laws are compelled to ensure their IT infrastructure meets the defined requirements. Failing to do so is often punishable with significant fines. Failing to do so and getting attacked in a way that exploits security holes the regulations were meant to plug is a huge problem (not just for the organization itself). For organizations subject to regulatory compliance, the costs associated with violating regulations must always be considered when developing a security policy. This is mostly a good thing, though the costs of actually meeting the regulations are occasionally extremely high.

Mitigating Costs – Not Always Worth It

There are actually a lot of technical solutions in the IT industry that exist entirely to reduce the costs associated with implementing security technologies. For instance, utilizing a Self-Service Password Reset (SSPR, cause that's a lot of typing) solution can significantly reduce the number of man-hours help-desk staff spend resetting passwords and unlocking accounts. But such solutions have costs of their own. Aside from the purchase price, many of them significantly reduce security in an organization: an SSPR increases users' freedom and control over their accounts, which makes things less secure again. How much security is lost depends on how users interact with the software. An SSPR that only requires someone to enter their username and current password will reduce security far more than one that requires users to answer 3 "security questions," which will, in turn, reduce security much more than an SSPR that requires people to provide their Social Security Number, submit a urine sample, and authenticate with a retina scan while sacrificing a chicken from Uruguay with a special ceremonial dagger. But, again, the time employees spend resetting their own passwords (not to mention the cost of importing chickens from Uruguay) adds to the cost of such solutions. The key to choosing solutions and technologies is finding the right balance of freedom and security for the environment.

When Security Costs Too Much Freedom

There are times when the financial costs and the cost of freedom associated with a security measure are obviously too high (I'm looking at you, TSA). Implementing longer passwords may have many technical security advantages, but doing so includes a risk that the loss of freedom is too great for people to handle. For instance, implementing a 20 character minimum password policy that includes password complexity requirements might cause some employees with bad memories to write their password down and put it in a place that's easy for them to remember. Like on a post-it note stuck to their monitor. Suddenly, that very secure password policy is defeated by a low-tech solution. Now you have a password accessible to anyone walking around in the office (like Janitor Bob) that can be used to access critical information and sell it to the highest bidder (AKA, your competitor). This is a prime example of the unconsidered costs of security being too high. Specifically, the security requirement costs so much freedom and negatively impacts employees so much that they end up bypassing security entirely.

Balancing Act

In the end, IT security is a massive balancing act. To properly balance security and freedom in IT, it is necessary to ask questions and obtain as much knowledge about the environment as possible. The investigative part is among the most important phases in any security policy. Organizations looking to increase security need to have balance in their security implementations. Decisions on IT security must always be thoughtful ones.