Spoof your old dead Exchange Server

OK, so if you have, say, Citrix, or a standard image with Office pre-installed, then someone had to pick an Exchange server for the Outlook profile-creation wizard to point to.

So sometimes, in large organizations, teams don’t necessarily speak to one another before they make small decisions like which server to point to.  The person creating the Office install might pick, say, his home mail server.

So when that mail server, years later, gets decommissioned, this can suddenly cause problems.

How do you fix this?

Simple!  Glad you asked.

Two things need to be done.

1.  Establish IP connectivity to the old server name.  Easy enough: go into DNS and create a new A record for the old/missing Exchange server, with the IP of the server you'd like this traffic to point to.
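If you'd rather script this than click through the DNS console, a hedged sketch with dnscmd (the DNS server name, zone, old server name, and IP below are all hypothetical placeholders for your own values):

```bat
REM Create an A record for the decommissioned server's name,
REM pointing at the IP of the server that should answer in its place.
REM Syntax: dnscmd <DnsServer> /RecordAdd <Zone> <NodeName> A <IPAddress>
dnscmd DC01 /RecordAdd contoso.com OLDMAIL A 10.0.0.25
```

Give DNS a chance to replicate, then verify with nslookup from a client before moving on to step 2.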

2.  Go into ADSIEdit, find the computer object for the target server, right-click it, and hit Properties.  Scroll down to servicePrincipalName and edit it.  Add the following type of record:
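A sketch of the same change from the command line instead of ADSIEdit, using setspn (ships in the Windows Support Tools). The server names are hypothetical, and the exact SPN list you need depends on which services clients are hitting:

```bat
REM Register the OLD server's name as additional SPNs on the
REM computer account of the server that now answers for it.
setspn -A exchangeMDB/OLDMAIL NEWMAIL
setspn -A exchangeMDB/OLDMAIL.contoso.com NEWMAIL

REM Verify the resulting SPN list on the target account.
setspn -L NEWMAIL
```

Make sure the SPNs you add aren't already registered to another account; duplicate SPNs break Kerberos authentication for both servers.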


Give that a little time to replicate around and voila, everything goes back to normal.

Why is step 2 necessary?  Kerberos security rearing its ugly head.  The target server needs to know it's acting as the old server, or it will refuse connections.


Note that this is a possible workaround and may cause corrupt MAPI profiles on your clients.  The real fix here is to address the install image, or the clients configured to point to a server that no longer exists.

Version Store 624 events

Applies to Exchange 2000, Exchange 2003, Exchange 2007.

So with Version Store 623 errors, the version store gets 'clogged', if you will, and fails to process transactions.

624 errors, on the other hand, are caused by a lack of available virtual memory on the server.  Sometimes this has no impact and the server corrects itself, but in a memory-leak condition, this can be a sign that your Exchange server is no longer accepting client connections and is in need of some assistance.

In the particular instance where I have seen this occur, the 624 event comes after a series of errors:


First we throw an MSExchangeDSAccess 2104 event.

Event ID     : 2104
Raw Event ID : 2104
Record Nr.   : 4802384
Category     : None
Source       : MSExchangeDSAccess
Type         : Error
Generated    : 9/7/2008 12:27:27 PM
Written      : 9/7/2008 12:27:27 PM
Machine      : JAHUMBALABAH
Message      : Process STORE.EXE (PID=636). All the DS Servers in domain are not responding.

Shortly thereafter you’ll see a MSExchangeDSAccess 2102.

Event ID     : 2102
Raw Event ID : 2102
Record Nr.   : 4802387
Category     : None
Source       : MSExchangeDSAccess
Type         : Error
Generated    : 9/7/2008 12:28:15 PM
Written      : 9/7/2008 12:28:15 PM
Machine      : JAHUMBALABAH
Message      : Process MAD.EXE (PID=2588). All Domain Controller Servers in use are not responding:


Then we will see an MSExchangeSA 9152.

Event ID     : 9152
Raw Event ID : 9152
Record Nr.   : 4802391
Category     : None
Source       : MSExchangeSA
Type         : Error
Generated    : 9/7/2008 12:31:15 PM
Written      : 9/7/2008 12:31:15 PM
Machine      : JAHUMBALABAH
Message      : Microsoft Exchange System Attendant reported an error ‘0x8007000e’ in its DS Monitoring thread.

This particular error is an out-of-memory error.  Uh oh.

Then DSAccess has another problem: a 9154.

Event ID     : 9154
Raw Event ID : 9154
Record Nr.   : 4802392
Category     : None
Source       : MSExchangeSA
Type         : Error
Generated    : 9/7/2008 12:31:20 PM
Written      : 9/7/2008 12:31:20 PM
Machine      : JAHUMBALABAH
Message      : DSACCESS returned an error ‘0x80004005’ on DS notification. Microsoft Exchange System Attendant will re-set DS notification later.

This means a call failed due to lack of memory.

Then the error you’ve all been waiting for, a 624 gets thrown by ESE.

Event ID     : 624
Raw Event ID : 624
Record Nr.   : 4802473
Category     : None
Source       : ESE
Type         : Error
Generated    : 9/7/2008 12:32:58 PM
Written      : 9/7/2008 12:32:58 PM
Machine      : JAHUMBALABAH
Message      : Information Store (636) Storage Group 1 (First Storage Group): The version store for this instance (1) cannot grow because it is receiving Out-Of-Memory errors from the OS. It is likely that a long-running transaction is preventing cleanup of the version store and causing it to build up in size. Updates will be rejected until the long-running transaction has been completely committed or rolled back.
Current version store size for this instance: 1Mb
Maximum version store size for this instance: 249Mb
Global memory pre-reserved for all version stores: 1Mb
Possible long-running transaction:
   SessionId: 0xBD345AC0
   Session-context: 0x00000000
   Session-context ThreadId: 0x000015AC
   Cleanup: 1

So what can cause this?  Check your Task Manager.  Do you see any handle leaks or processes with out-of-control handle counts?  In the instance I saw, it was a mixture of stale messages stuck in the SMTP temp tables and a third-party AV scanner that had an apparent memory leak.  Both Inetinfo and Store were over 2 GB and had 32k handles each.  Once we resolved the issue, Store was around 6k handles and Inetinfo around 3k.
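A quick way to eyeball those handle counts without the GUI is Sysinternals pslist (part of the free PsTools suite); Task Manager works too if you enable the Handles column under View > Select Columns. The process names below assume the Exchange 2003-era binaries discussed above:

```bat
REM pslist's Hnd column shows the handle count per process.
REM Healthy numbers in our case were roughly 6k for Store and 3k for Inetinfo.
pslist store
pslist inetinfo
```

Sample these a few times over an hour; a handle count that only ever climbs is the signature of a leak rather than normal load.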

What is happening is that a memory leak is consuming all the virtual memory space in Store and Inetinfo, at least in our case here.  Yours may differ in what is causing the leak, but I'd bet more than likely it's going to be something that ties into Store, such as anti-virus, something gumming up IIS and then Epoxy, or something along those lines.

Because you run out of memory, DSAccess starts to fail, and then you see the string of errors above.

If you see this, what should you do first and foremost?  Give PSS a call so we can help you debug it.

Avoiding Version Store problems in the enterprise environment

Applies to Exchange 2003 

So one of the things that can go wrong with Exchange is that it can run out of something called the version store.  The version store is an in-memory list of changes made to the database.  Nagesh Mahadev has an awesome post about the version store on our msexchangeteam.com blog, posted here.  To borrow his summary: in simple terms, the version store is where transactions are held in memory until they can be written to disk.

So the version store running out of memory can be caused by one of two things.  The first is a long-running transaction.  This is pretty self-explanatory: say your anti-virus product wants to scan something via VSAPI, locks it, and then goes to lunch.  Your version store will consume more and more memory until it runs out, because it's trying to work around this long-running transaction, keeping track of all the rollbacks and whatnot.

The other problem is with I/O.  Since we're holding transactions in memory until they can be written to disk, if something prevents us from writing to disk, we can hit version store problems.  Sometimes this type of problem is preceded by 9791 entries in the application event log.  If this happens, get ready to do some ADPlus dumps of Store when version buckets allocated hits 70%.
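To watch that threshold without sitting in Performance Monitor, you can sample the ESE counter from the command line with typeperf. The exact counter path and instance name below are an assumption and can vary by Exchange version, so confirm them in Performance Monitor's counter browser first:

```bat
REM Sample version store usage of the Information Store every 15 seconds.
REM Compare the value against the maximum reported in the 624 event text.
typeperf "\Database(Information Store)\Version buckets allocated" -si 15
```

Logging this alongside handle counts over time makes it much easier to correlate version store growth with whatever process is leaking.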

What can you do to prevent or mitigate this risk?

1. Consider increasing transaction log buffers, especially if you are seeing transaction log stalls in your environment.  The logic here is that if Store can't commit transactions to the log files fast enough, the version store can back up.  By default the number of buffers is 500; you can increase this to 9000.  This will prevent a single database from needing to write a bunch of transaction logs at once and backing up the version store.  I highly recommend using ExBPA for guidance on this; details on the rule for this setting can be found here.

2. Watch your PTE resources and treat them accordingly.  I've seen customers run low on free system PTEs and run into version store problems because the server doesn't have the capacity to perform I/O operations as fast as the database would like.

3. Make sure your online maintenance is completing regularly, at least once a week on each database.  Part of online maintenance is defragmenting the database.  On a highly fragmented database, the version store has to keep track of unoptimized links and tables and deal with records that are not on the fewest number of pages possible, in essence bloating the version store with each transaction.  For in-depth information on Exchange store maintenance, go here.

4. Keep your message size limits down.  Going hand in hand with this is preventing older Outlook clients from accessing your server.  Old clients (older than Outlook 2003 SP2 in cached mode; Outlook 2003 and higher honor the limits in online mode) ignore your message size limits when submitting messages, so an older client could attach a 100 MB file and submit it, and Store would have to deal with it even though it's over the size limit.  This should give you the gist of what I'm talking about here.
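Circling back to item 1: in Exchange 2003 the log buffer count lives on the storage group object in Active Directory, so the change can be expressed as an LDIF fragment applied with ldifde or made by hand in ADSIEdit. The DN, server, and organization names below are hypothetical, and the attribute name is my assumption of the one ExBPA checks, so verify against the ExBPA rule before applying:

```bat
REM logbuffers.ldf - apply with: ldifde -i -f logbuffers.ldf
REM dn: CN=First Storage Group,CN=InformationStore,CN=SERVER01,
REM     CN=Servers,...,CN=Contoso Org,CN=Microsoft Exchange,
REM     CN=Services,CN=Configuration,DC=contoso,DC=com
REM changetype: modify
REM replace: msExchESEParamLogBuffers
REM msExchESEParamLogBuffers: 9000
REM -
```

Store reads this value at service start, so plan a restart of the Information Store service for the change to take effect.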

Hope this helps with your environment.