Access Control on KEMP LoadMaster Virtual Service

In a previous post I talked about hardware load balancing a relay connector, and explained why this can be an issue when the clients and the Exchange servers are in the same subnet.

Transparency Modes

The option “client impersonation” is called Layer 7 Transparent Mode by Kemp Technologies. There are two modes: Non-Transparent and Transparent. To be clear, the difference is that with Non-Transparent mode the Kemp’s IP address is passed on to the Exchange server, while with Transparent mode the client’s IP address is passed on to the Exchange server.
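To make the difference concrete, here is a minimal Python sketch of which source address the Exchange server observes in each mode. The mode names and addresses are just illustrative, not Kemp's API:

```python
# Toy model of the two Layer 7 modes: which source IP the real
# server sees for an incoming relay connection.

def source_ip_seen_by_server(mode: str, client_ip: str, loadmaster_ip: str) -> str:
    """Return the source address the Exchange server observes.

    Non-Transparent: the LoadMaster terminates the client connection
    and opens its own connection, so the server sees the LoadMaster.
    Transparent: the client's address is preserved.
    """
    if mode == "non-transparent":
        return loadmaster_ip
    if mode == "transparent":
        return client_ip
    raise ValueError(f"unknown mode: {mode}")

print(source_ip_seen_by_server("non-transparent", "10.0.0.50", "10.0.0.10"))
print(source_ip_seen_by_server("transparent", "10.0.0.50", "10.0.0.10"))
```

This is exactly why Transparent mode matters for the relay scenario: the receive connector's allow list only works if it sees the real client address.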

Until firmware version 6 it was not possible to set an allow list, which could make it difficult or even impossible to use the LoadMaster to balance the relay traffic in this type of environment. I’ve known a few customers that wanted application awareness, and therefore Layer 7, but couldn’t use it because of this issue. I contacted Kemp a couple of times about it, and they offered to let me test the new version 6 beta. After signing a Non-Disclosure Agreement I updated the LoadMaster in my test environment and started testing, and I was happy to see that they’ve added this feature. By the time I’m publishing this article version 6 has been released and I have Kemp’s approval to publish.

Firmware 6.0-16

With the new firmware Kemp has released for its LoadMaster it is now possible to set Access Control on a Virtual Service. You can do this for each Virtual Service: under Standard Options there’s a new option.


Here you can set the addresses that are allowed or denied access to the relay connector.

It’s probably a good idea to deny access from every IP Address and only allow the ones you want.
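A default-deny allow list behaves like this small Python sketch. The networks are made-up examples, and the LoadMaster's actual matching rules may differ in detail:

```python
import ipaddress

def is_allowed(client_ip: str, allowed_networks: list) -> bool:
    """Default deny: a client is accepted only if its address falls
    inside one of the explicitly allowed networks."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net) for net in allowed_networks)

# Example allow list: one application subnet plus a single printer.
allowed = ["192.168.10.0/24", "10.0.0.25/32"]
print(is_allowed("192.168.10.7", allowed))  # True
print(is_allowed("172.16.0.9", allowed))    # False
```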



Now you know how to set up the relay connector and the load balancer with a Kemp in this type of configuration. This was definitely something I was hoping for in the new version. Thanks to Kemp and their research department; they listen to their customers and are happy to help or implement new features when requested.

Special thanks to Ekkehard Gümbel from Kemp Technologies for the review.

Posted in Exchange 2007, Exchange 2010, Hardware Load Balancer, Kemp

DHCP Failover Windows Server 8

I’ve installed the Windows Server Developer Preview to test some new features. One of the first things I want to talk about is the feature called DHCP Failover, something we had already hoped to see in Windows Server 2008 R2. Windows Server 2008 introduced an option to split a scope. Splitting was actually possible in previous editions too, but you had to divide the scope over the servers manually; in 2008 it became pretty easy thanks to a wizard with a nice bar to divide the scope.

Now there is a new feature to fail over a DHCP scope, which in my opinion is a very nice improvement. The introduction says: “DHCP Failover allows setup of DHCP for high availability by synchronizing IP address lease information between 2 servers. DHCP failover also provides load balancing of DHCP requests”. So it balances the load over two servers and keeps both servers up to date on the leases that have been handed out, in case of a server outage.

If you’d like more information on how to install the DHCP Server role on Windows Server 8, please read the following article by John Delizo.

Configure your scope for DHCP Failover

Select the scope for which you want to configure failover and select “Configure Failover”.


In this window you’ll see that you can also select all the scopes. By default this option is selected.


In this window we select the partner server we would like to use. If there is already an existing failover configuration, you will notice that the partner server is already in the drop-down list.


Now you’ll have to define the type of relationship with the second server. First give it a name. When it comes to the mode you have two options: Load Balance and Hot Standby.

Load Balancing Mode

The Load Balance option will load balance the DHCP requests. The difference from Split-Scope is that this option allows both DHCP servers to host the complete scope. They notify (synchronize) each other when an address is leased.

Hot standby Mode

This mode gives you an Active/Passive configuration. One server will be leasing addresses and the other will be in hot standby. You can choose how many addresses are reserved for the standby node so it can act quickly in case of downtime on the first node. Here you’ll see both options:

Failover Relationship total
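As a mental model of the two modes, here is a toy Python sketch. The real DHCP failover protocol decides ownership with its own hash algorithm; the MD5 bucket below is only a stand-in, and the server names and percentages are invented:

```python
import hashlib

def responsible_server(mac: str, split_percent: int = 50) -> str:
    """Load Balance mode: both servers hold the full scope, and a hash
    of the client MAC decides which one answers a given client (a toy
    stand-in for the protocol's real hash)."""
    bucket = int(hashlib.md5(mac.encode()).hexdigest(), 16) % 100
    return "DHCP1" if bucket < split_percent else "DHCP2"

def standby_reserve(scope_size: int, reserve_percent: int = 5) -> int:
    """Hot Standby mode: the passive node keeps a slice of the scope
    reserved so it can hand out leases immediately on failover."""
    return scope_size * reserve_percent // 100

print(responsible_server("00:15:5d:01:02:03"))
print(standby_reserve(254, 5))  # 12 addresses reserved for the standby node
```

Note that the hash is deterministic: the same client always lands on the same server, which keeps lease ownership stable.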

When you click next the configuration will be committed to both servers.


DHCP Reservations become easier

One thing that was always a problem was reservations. In previous versions of Windows Server you could split the scope, but you had to make sure each reservation was in the right part of the scope: a server can’t lease an address that isn’t part of its scope, and Windows Server 2008 even warned you when you tried. Now, because both servers can host the complete scope, it’s easy to make your reservation on one node and simply replicate it to the other. There are two options, Replicate Scope and Replicate Failover Relationship: Replicate Scope replicates only the selected scope, while Replicate Relationship replicates all scopes in the relationship.
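The two replication options can be pictured with this toy Python sketch, where each node's configuration is just a dictionary of scopes mapping MAC addresses to reserved IPs. The scope and addresses are invented:

```python
def replicate_scope(source: dict, dest: dict, scope_id: str) -> None:
    """Replicate Scope: copy one scope's reservations to the partner."""
    dest.setdefault(scope_id, {}).update(source.get(scope_id, {}))

def replicate_relationship(source: dict, dest: dict) -> None:
    """Replicate Relationship: replicate every scope in the failover
    relationship, not just the selected one."""
    for scope_id in source:
        replicate_scope(source, dest, scope_id)

node1 = {"192.168.1.0": {"00:15:5d:aa:bb:cc": "192.168.1.10"}}
node2 = {}
replicate_relationship(node1, node2)
print(node2)  # node2 now holds the same reservation as node1
```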


Now you know a bit more about the benefits of the Windows Server 8 version of the DHCP server. After seeing these new features I’m pretty sure they will be used by quite a lot of administrators. It’s something I was hoping for in the new version. I’ll check these options again in any future release and hope to inform you again. Thanks for reading.

Posted in DHCP Server, Windows Server 8, Windows Server Developer Preview

UPN logon Outlook Web App on Threat Management Gateway

At a customer of PQR I’ve implemented an Exchange 2010 environment. The customer has multiple SMTP domains. When I published the Outlook Web App (OWA) on the Forefront Threat Management Gateway (TMG) I’ve used the default publishing rule.

When the customer started testing the environment, he told me he would rather see people log in with their email address than with domain\username. By default there is no way to log on with the e-mail address, because the form always wants to know three things: the username, the password and the domain. As you might know, a User Principal Name (UPN) combines the username and the domain in an e-mail address format (user@domain), as described in RFC 822. So logging on this way only works if your e-mail address equals your User Principal Name.

In my case this would work: the users could fill out their email address instead of domain\username on the OWA form, and TMG understands the UPN and authenticates the user. If you don’t have TMG, it’s easy to change this on the Exchange Server OWA authentication properties, but this option does not exist on the Authentication tab of the Web Listener on the TMG server.

So I needed to change the TMG form so that users know what to enter. Instead of the default view of the OWA web site, I changed the following setting to reach my goal.

On the TMG server, in the “C:\Program Files\Microsoft Forefront Threat Management Gateway\Templates\CookieAuthTemplates\Exchange\HTML\nls” directory (or another location if the installation path was changed during the install), you will see a lot of directories.

These directories represent different languages. Every directory has its own “strings.txt” file, which the form uses to show text in the right language. You can edit the line L_UserName_Text=”Domain\user name:” to L_UserName_Text=”E-mail Address:”. For this change to take effect you need to restart the Microsoft Forefront TMG Firewall service.

Don’t forget to change the strings.txt files of all the languages you think your users will use. And maybe the most important part: this does not change the way you log on to the environment, it only changes the instruction telling you how to log on. If the UPN is not the same as the email address, you can’t authenticate.
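If you have many language directories to edit, a small script can do the search-and-replace for you. This is only a sketch: it assumes the default nls path mentioned above and that the files can be read and written as UTF-8, which you should verify on your own TMG server first (and make a backup of the directory before running it):

```python
import os

OLD = 'L_UserName_Text="Domain\\user name:"'
NEW = 'L_UserName_Text="E-mail Address:"'

def relabel(text: str) -> str:
    """Replace the user-name label in the body of one strings.txt."""
    return text.replace(OLD, NEW)

def relabel_all(nls_dir: str) -> None:
    """Patch strings.txt in every per-language directory under the
    TMG .../HTML/nls folder (default install path assumed)."""
    for lang in os.listdir(nls_dir):
        path = os.path.join(nls_dir, lang, "strings.txt")
        if os.path.isfile(path):
            with open(path, encoding="utf-8") as f:
                body = f.read()
            with open(path, "w", encoding="utf-8") as f:
                f.write(relabel(body))
```

Remember to restart the Microsoft Forefront TMG Firewall service afterwards, as described above.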

Posted in Exchange, Exchange 2010

Shadow Redundancy explained

In this post I would like to talk about a nice feature of Exchange 2010 called Shadow Redundancy. As you know, email is becoming more important every day, and because of that Microsoft introduced a lot of features to help you with High Availability (HA). Shadow redundancy is one of them, and it is enabled by default. Great, isn’t it?

Shadow redundancy makes sure that when a message is delivered to the next hop, it’s not deleted from the queue until it is confirmed that the next hop has received it. This feature only works between Exchange 2010 Hub Transport and/or Edge Transport servers and does not work on previous versions of Exchange. Exchange detects that the next hop supports shadow redundancy through the XSHADOW verb. You can see this verb yourself if you connect to a Hub Transport or Edge Transport server with telnet on port 25. After entering the ‘ehlo’ command you’ll see the XSHADOW verb at the bottom; it is used to advertise that the server supports shadow redundancy. You may notice that the rest of the list is exactly the same as that of an Exchange 2007 server.
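You can also check for the verb programmatically instead of eyeballing the telnet output. This Python sketch only parses an EHLO response you already captured; the hostname in the sample is made up:

```python
def advertises_shadow_redundancy(ehlo_response: str) -> bool:
    """Return True if an SMTP EHLO response advertises the XSHADOW
    verb, as an Exchange 2010 Hub/Edge Transport server does."""
    verbs = set()
    for line in ehlo_response.splitlines():
        # Multiline reply lines look like "250-VERB ..." or "250 VERB ..."
        tokens = line[4:].split()
        if line.startswith("250") and tokens:
            verbs.add(tokens[0].upper())
    return "XSHADOW" in verbs

sample = "\n".join([
    "250-hub01.example.local Hello [10.0.0.5]",  # hostname made up
    "250-SIZE 10485760",
    "250-PIPELINING",
    "250-XSHADOW",
    "250 OK",
])
print(advertises_shadow_redundancy(sample))  # True
```

An Exchange 2007 response would produce the same verb list minus XSHADOW, so this function would return False for it.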

How it works

HUB01 delivers a message intended for a user outside the organization to the EDGE01 server. HUB01 detects that EDGE01 supports shadow redundancy, so it moves the message to the shadow redundancy queue and marks EDGE01 as the primary owner. When opening the Queue Viewer you’ll notice that there is one email in the Shadow Redundancy Queue.



When EDGE01 delivers the message to the internet, it updates the discard status of the message to indicate that the delivery was successful. HUB01 checks the status of all sent messages (by default every 15 minutes) by issuing an XQDISCARD command to EDGE01. EDGE01 checks the discard status and responds with a list of all the messages that are considered successfully delivered, and HUB01 deletes the messages on that list from the Shadow Redundancy Queue.
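The queue-and-discard cycle can be summarized in a toy Python model. The class and method names are mine, not Exchange's, and `xqdiscard` here is simply a method returning the delivered message ids:

```python
class Edge:
    """Stand-in for an Edge Transport server tracking discard status."""
    def __init__(self):
        self.delivered = set()

    def deliver_to_internet(self, msg_id):
        self.delivered.add(msg_id)  # delivery succeeded: update status

    def xqdiscard(self):
        """Report which messages are considered successfully delivered."""
        return set(self.delivered)

class Hub:
    """Stand-in for a Hub Transport server with a shadow queue."""
    def __init__(self):
        self.shadow_queue = {}  # msg_id -> primary owner

    def send(self, msg_id, edge, owner="EDGE01"):
        self.shadow_queue[msg_id] = owner  # keep a shadow copy
        edge.deliver_to_internet(msg_id)

    def poll(self, edge):
        """The periodic (default 15-minute) discard check."""
        for msg_id in edge.xqdiscard():
            self.shadow_queue.pop(msg_id, None)

hub, edge = Hub(), Edge()
hub.send("msg-1", edge)
print(len(hub.shadow_queue))  # 1: shadow copy held until the poll
hub.poll(edge)
print(len(hub.shadow_queue))  # 0: discard notification received
```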


Failure (EDGE01 Outage)

If HUB01 can’t contact EDGE01 within the time-out period after it has sent the message, HUB01 resubmits the message to EDGE02, and the message in the Shadow Redundancy Queue now has EDGE02 as the primary owner. If there is no alternative route, the message won’t be resubmitted and remains in the shadow queue until the Auto Discard Interval expires.


Temporary Failure

So what happens if EDGE01 is temporarily offline? Say EDGE01 is down for 15 minutes while our time-out is 10 minutes and the retry count is 0. The message is then resubmitted to EDGE02, because HUB01 isn’t sure the message was delivered properly. In this case it’s possible that the message was already delivered and that EDGE02 is delivering the same message again. Exchange mailbox users won’t see duplicate messages in their mailbox, because Exchange has duplicate message detection; however, recipients on other email systems may receive a duplicate if their system doesn’t support duplicate message detection. For more information about duplicate message detection, check out this blog.

Shadow Redundancy settings

The shadow redundancy process is controlled by the Shadow Redundancy Manager. You can change the shadow redundancy settings using the EMS: Get-TransportConfig will show you all of the options. As you may notice, there are a lot more options than in the Exchange 2007 version. It’s a good idea to use Get-TransportConfig | FL Shadow*, which shows only the shadow redundancy options.


To change these settings, use the Set-TransportConfig command with the parameter you would like to change. Note that setting ShadowRedundancyEnabled to $false disables the whole process for the entire organization, not just one Hub Transport server. For more information about shadow redundancy, see Understanding Shadow Redundancy.

Posted in Exchange 2010

Bifurcation in Exchange (Delayed Fan-Out)

As you probably know, Exchange 2010 uses Active Directory sites by default to route email. Not so long ago I got a question about when and how bifurcation occurs, so in this article I would like to explain what bifurcation is in Exchange 2007/2010.

When an email is routed through your organization, the AD site selection algorithm decides which route it will take, in the following order. First comes the least-cost-path principle: the path with the lowest total cost from the source site to the destination site wins. If two routes have the same cost, the number of hops (segments) is compared and the route with the fewest hops is used. As you can imagine, the number of hops can also be equal on both routes; in that case the route with the alphanumerically lower AD site names is chosen.
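The selection order can be sketched as a Python tuple comparison: lowest cost first, then fewest hops, then site names. The tie-breaker below simply compares the hop names in order, which is a simplification of Exchange's actual rule, and the site names and costs are invented:

```python
def pick_route(routes):
    """Pick a route, where each route is a list of (site_name, cost)
    hops, ordered by: total cost, then hop count, then site names."""
    def key(route):
        total_cost = sum(cost for _, cost in route)
        hops = len(route)
        names = [name for name, _ in route]
        return (total_cost, hops, names)
    return min(routes, key=key)

route_a = [("Amsterdam", 10), ("Berlin", 10)]
route_b = [("Amsterdam", 10), ("Copenhagen", 10)]
# Costs (20 vs 20) and hop counts (2 vs 2) tie, so the names decide.
print(pick_route([route_a, route_b]) is route_a)  # True
```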


Exchange makes a direct connection with the hub transport server closest to the mailbox server where the recipient resides. If the email is sent to two recipients, each located in a different site, Exchange uses bifurcation: it determines the path the email is going to take and finds the last hub transport server that is a common hop on both routes. That hub transport server then bifurcates the message and sets up its own direct connections to the next hops closest to the recipients’ mailbox servers.


This process is called delayed fan-out and can save a lot of bandwidth on the internal network. In versions prior to Exchange 2007, the email was sent as many times as there were recipients. Delayed fan-out is a feature of the Exchange Hub Transport server role: it delays the splitting (bifurcation) of a message until it reaches a “fork” in the routing topology.
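Finding the fork can be sketched like this in Python (site names invented); everything up to the last shared hop travels as a single copy:

```python
def fork_point(routes):
    """Given one list of site hops per recipient, return the leading
    hops that every route shares (the path of the single copy)."""
    shared = []
    for hops in zip(*routes):
        if len(set(hops)) == 1:
            shared.append(hops[0])
        else:
            break  # routes diverge here: this is the fork
    return shared

route_to_alice = ["SiteA", "SiteB", "SiteC"]
route_to_bob = ["SiteA", "SiteB", "SiteD"]
print(fork_point([route_to_alice, route_to_bob]))
# ['SiteA', 'SiteB']: one copy until SiteB, two copies afterwards
```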

Posted in Exchange, Exchange 2007, Exchange 2010

Hardware Load Balancing a Relay Connector

Recently I was at a customer to install an Exchange 2010 SP1 environment as an upgrade of their current Exchange 2003 environment. They had two DAG servers and wanted to use two Hardware Load Balancers (HLBs) as well. In this blog I would like to go into the relay connectors in combination with the HLBs.

I created two relay connectors, one for each Hub Transport server. As I always do, I configured the connectors to only accept SMTP traffic from certain IP addresses (multifunctional printers and the like), because you don’t want to create an open relay.

Because we have HLBs, we don’t want our clients to connect directly to the Hub Transport servers; we want them to connect to the Virtual IP address of the HLB. This means that the HLB establishes the connection to the relay connector.

So far so good, but now you have a few options. If you have configured the HLB with the default load balance settings, it establishes a new connection with the relay connector for you. This means that when a client connects to, for example, relay.customer.local (the HLB relay VIP address), the HLB uses its own IP address to communicate with the relay connector. The result is that everyone can relay email through the HLBs.

The HLBs we used are from Barracuda and have a nice option called “Client Impersonation”. Instead of connecting with the HLB IP address, this option connects with the client’s IP address: it impersonates the client. That is exactly what we want, I thought. But there is a problem: when you try a telnet relay.customer.local 25 you get a blank screen. No banner, nothing.

I can explain what is really happening. The request is sent to the HLB, which then impersonates the client on its connection to one of the Exchange servers. Because of the impersonation, Exchange thinks the packets are coming from the client and sends its reply packets directly to the client. And now we come to the exciting part: the client expects the packets to come from the HLB, not from Exchange.


Think of it as asking somebody a question. If I ask person B how much 2 x 2 is, I expect him to answer me. But if person B doesn’t know the answer, he will probably ask person C. When person C comes directly to me and says it’s 4, how am I supposed to know that it is the answer to my 2 x 2 question? If person C tells the answer to person B instead, I know the answer belongs to my previously asked question.

This means that if you set the HLB as the default gateway on your Exchange servers it will work: the reply from the Exchange server goes through the load balancer, which then passes it on to the actual client. This only works if the client is in a different subnet than the Exchange servers and the Barracuda; otherwise Exchange will not use the gateway at all. That was the situation in my case: all the IP addresses belong to the same subnet, so Exchange will not use the gateway address.


So if your complete network (clients, servers, HLBs, etc.) is located on the same subnet, setting the HLB as the default gateway doesn’t do the trick. In this case you’ll have to make an allow list on the HLBs for the relay VIP address and set “Client Impersonation” to “No”.


Does this mean I don’t have to make an allow list on my Exchange relay connector? No, absolutely not, because you don’t want your clients to be able to connect to one of the Exchange servers directly. You’ll have to keep the Exchange and HLB allow lists in sync.
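Keeping the two lists in sync is easy to get wrong by hand, so a quick diff helps. This is a minimal Python sketch; the addresses are examples, and you would feed it whatever your actual Exchange receive connector ranges and HLB entries are:

```python
def allow_list_drift(exchange_ranges, hlb_ranges):
    """Return entries present on one side but not the other, so you
    can spot drift between the Exchange and HLB allow lists."""
    exchange, hlb = set(exchange_ranges), set(hlb_ranges)
    return {
        "only_on_exchange": sorted(exchange - hlb),
        "only_on_hlb": sorted(hlb - exchange),
    }

drift = allow_list_drift(
    ["10.0.0.20", "10.0.0.21", "10.0.0.30"],  # Exchange side
    ["10.0.0.20", "10.0.0.21"],               # HLB side
)
print(drift)  # {'only_on_exchange': ['10.0.0.30'], 'only_on_hlb': []}
```

An empty result on both keys means the lists match; anything else is a candidate for an open-relay hole or a blocked legitimate client.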

Special thanks to Aravind Ghosh from Barracuda support for reviewing this article.

Posted in Exchange, Exchange 2007, Exchange 2010

Exchange 2010 is in an Inconsistent State

In this example the organization contains an Exchange 2007 SP3 server with the Hub Transport, Client Access Server and Mailbox roles installed. While adding an Exchange 2010 server to the organization for a coexistence migration, a hardware failure occurs during setup. If the broken piece of hardware can be replaced you can probably continue the setup, but if you have to start over you may run into an installation problem, depending on where the setup crashed.

So you install the preferred operating system, such as Windows Server 2008 R2, on a new server. When you have taken care of all the prerequisites, such as patches and features, you run the setup again. The Exchange 2010 setup checks all the role prerequisites before actually installing them, and you are immediately presented with the following errors.

Already installed

When you try to install from the command prompt, the error looks like this.

Command Error Exchange

The problem is that the Exchange server is already registered in the Exchange organization under its hostname. This means that if you change the hostname and re-run the setup, it will not tell you that the roles are already installed. So this works, but it still doesn’t remove the previous installation from the Exchange organization: after the installation you will notice that both hostnames appear in the Exchange Management Console.

Exchange Servers

To resolve this problem we need to tell the Exchange organization that the specific host isn’t part of it anymore. Actually, it never really was, was it? You can do this by opening ADSI Edit and connecting to the Configuration naming context. Be sure to do this on the Global Catalog server; otherwise it’s possible that you don’t see the object you want to delete. Navigate to the following:

CN=Configuration, CN=Services, CN=Microsoft Exchange, CN=<Your Exchange Organization>, CN=Administrative Groups, CN= <Exchange Administrative Group>, CN=Servers, CN=<Server Name>

Delete the CN=<Server Name> object of the server that is no longer part of the organization.

So be sure to delete the object before trying to reinstall Exchange Server 2010, or, if you have chosen a new hostname, delete the CN object containing the old one. Beware that if you have a large Active Directory structure with different sites and domain controllers, it can take a while for the changes you’ve made to replicate.

Posted in Exchange, Exchange 2010

Domain Replication has exceeded the tombstone lifetime

I just found out that a test environment of mine has been running for quite a while with only one domain controller online. The result is that the domain controllers aren’t replicating anymore. When this happens you see a 2042 event in the Directory Service log, and if you try to replicate manually the following error pops up.

This can also happen when your network isn’t working properly or when replication errors have occurred for too long without anyone noticing them. In large environments it’s possible that a complete site has been disconnected due to unavailable WAN connections. Restoring backups older than the tombstone lifetime without performing a proper Active Directory restore can also result in these problems.

The reason the domain controllers will not continue replicating is that they are protected against so-called lingering objects. For example, one or more objects that were deleted from Active Directory on all other domain controllers might remain on the disconnected domain controller; such objects are called lingering objects. Because the domain controller is offline during the entire time that the tombstone is alive, it never receives replication of the tombstone and therefore doesn’t know that the object has been deleted.

So the question is: how do we fix this? First of all we need to know which domain controller has been disconnected for longer than a tombstone lifetime. Usually you’d expect somebody to already know, but if they did, the replication errors probably wouldn’t have gone unnoticed for so long. The repadmin /showrepl command will answer this question. To restore replication you basically have three options:

1. Force a demote

Since a normal dcpromo demote is not possible, the only option is to use dcpromo /forceremoval. This demotes the domain controller to a member server but does not notify the other DCs that it has been demoted, so you have to remove the metadata and objects manually. It’s easier to only remove the computer object in Active Directory and promote the server with the same name again. If you want to remove the DC entirely, it’s possible to do a normal demote from this point. Events concerning lingering objects should be history.

2. Force a replication

If you want to force a replication between the disconnected site and the rest of the DCs, it’s possible to disable the lingering objects check or to extend the tombstone lifetime. In a Windows 2003 forest, strict replication consistency is enabled by default. You can change this via the registry or with the following repadmin command: repadmin /regkey <Domain Controller> -strict (use +strict to re-enable strict consistency afterwards).

Another way to achieve this goal is to extend the tombstone lifetime with ADSI Edit. You can find the option under CN=Configuration,DC=<ForestRootDomain>, in CN=Services, CN=Windows NT. Right-click CN=Directory Service, then click Properties; in the Attribute column, click tombstoneLifetime and change the value. Check the event log for the date of the last successful replication; this is very important in deciding the correct number of days. Beware that objects that were removed may show up in Active Directory again! You have to be sure that there aren’t too many changes in AD, otherwise you can end up with a big mess.

3. Remove Lingering Objects

The last option is to remove the lingering objects. I believe this is the best way, but it is also the hardest. When you have removed all the lingering objects, Active Directory will start replicating again. The way to remove these objects is with the following repadmin command: repadmin /removelingeringobjects <Dest_DSA_LIST> <Source DSA GUID> <NC> [/Advisory_Mode]. With /Advisory_Mode the command only shows what it would delete; the results are logged in the Directory Services event log.

For more information about fixing replication and lingering object problems, or about the different Event IDs, check out this TechNet article.

Posted in Active Directory