21 June 2013

A request to allocate an ephemeral port number from the global UDP port space has failed

Had a really weird problem with one of our clients today. They have a Windows Server 2012 machine (Hyper-V host) with 4 VMs. One VM is the DC, and the other 3 host Line of Business (LOB) apps with SQL Server instances. All was working fine, but some of the automatic background processes on one of the LOB servers were unable to access the Internet. Internally, client PCs were able to connect to that LOB's MS-SQL database without any issues.

When we checked this server via the Hyper-V console session, we noticed that the DNS IPs had gone from the network adapter. I re-added the DC's IP (it is also the DNS server), but this did not resolve the issue. I was unable to access the server over the network (RDP) or reach other PCs from within the server's console. Yet the users were happily using the LOB and its associated database!

A quick reboot would probably resolve the issue, but this was out of the question as users were seeing clients and needed their LOB application. Eventually I found this warning in the event log:

"A request to allocate an ephemeral port number from the global UDP port space has failed due to all such ports being in use."
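If you would rather pull this from the command line than scroll through Event Viewer, something along these lines lists the most recent System log warnings (the count of 20 is just an arbitrary choice):

REM List the 20 most recent warnings from the System event log (run from an elevated prompt)
wevtutil qe System /q:"*[System[(Level=3)]]" /c:20 /f:text /rd:true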
After a bit of Google-fu, we came up with some answers, thanks to all the posters! This gave us the most background info:
http://blogs.technet.com/b/askds/archive/2008/10/29/port-exhaustion-and-you-or-why-the-netstat-tool-is-your-friend.aspx
Clearly, one of the processes running on this server is not a good citizen! Running "netstat -anob" produced a 3.5MB file! This blog entry:
http://kasperk.it/windows-server/a-request-to-allocate-an-ephemeral-port-number-from-the-global-udp-port-space-has-failed-due-to-all-such-ports-being-in-use
suggested killing the offending process, but that svchost.exe instance was also hosting the LanmanWorkstation (Workstation) service, which I thought was a big risk given the system was still operational for the end users.
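For anyone chasing a similar problem, this is roughly how we narrowed it down; the output path and the PID below are just examples, not the actual values from this server:

REM Dump every endpoint with its owning PID and executable (run from an elevated prompt)
netstat -anob > C:\Temp\ports.txt

REM Rough count of UDP endpoints currently in use - several thousand is a bad sign
netstat -an -p UDP | find /c "UDP"

REM Once you have the PID of the busy svchost.exe, list the services it is hosting
tasklist /svc /fi "PID eq 1234"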

We scheduled a suitable time to rectify the issue and I then killed the "dnscache" task. Lo and behold, the network sprang back to life and the system performed normally. The LOB application, which I left running on one PC, noted a sub-second interruption but recovered transparently. For good measure we restarted the system.
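If you need to do the same, something along these lines should work; the PID is again only an example, and on some Windows versions the DNS Client service refuses a clean stop, which is why killing the hosting process ends up being the fallback:

REM Find which svchost.exe instance is hosting the DNS Client (dnscache) service
tasklist /svc /fi "SERVICES eq dnscache"

REM Try a clean restart of the service first (may be refused on newer Windows versions)
net stop dnscache
net start dnscache

REM Failing that, kill the hosting process by PID (1234 is an example)
taskkill /f /pid 1234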

What is the cause of the port exhaustion? At this stage I don't know, but I will keep an eye out. The server had been running without an issue for 5 months and was last restarted 6 days ago when I installed the June 2013 Windows Updates. There is likely an interaction between one of those updates and one of the programs on the server.
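In the meantime, a couple of commands are handy for keeping an eye on it; the range shown by the first one should be the Windows default of 49152-65535 (16384 ports) unless someone has changed it:

REM Show the dynamic (ephemeral) UDP port range configured on the box
netsh int ipv4 show dynamicport udp

REM Periodically count UDP endpoints in use - a steadily climbing number points at a leak
netstat -an -p UDP | find /c "UDP"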

20 June 2013

Quickbooks / Reckon Accounts Firewall rules

In a previous blog post, I discussed setting the correct firewall port rules for Quickbooks. In Australia, the developer of Quickbooks has parted ways with the American mothership and the product has been re-branded Reckon Accounts. Who knows what is going to happen with the program over time.

Rather than port numbers, I now use the executable names for the firewall exceptions. When it comes to the annual upgrades, you only have to change the file/folder names in the rules, rather than guessing port numbers or calling their support desk.

I have developed a script that sets the right options:

REM Add firewall rules for Reckon Accounts 2013
REM Explanations can be found at
REM http://technet.microsoft.com/en-us/library/dd734783(v=ws.10).aspx

netsh advfirewall firewall add rule name="Reckon Accounts DB Service" profile=Domain,Private dir=in action=allow program="C:\Program Files (x86)\Intuit\ReckonAccounts 2013\QBLanService.exe"
netsh advfirewall firewall add rule name="Reckon Accounts DB Manager" profile=Domain,Private dir=in action=allow program="C:\Program Files (x86)\Intuit\ReckonAccounts 2013\QBDBMgrN.exe"
netsh advfirewall firewall add rule name="Reckon Accounts DB Manager (N)" ;profile=Domain,Private dir=in action=allow program="C:\Program Files (x86)\Intuit\ReckonAccounts 2013\QBDBMgrN.exe"
pause

Cut and paste the above into Notepad and save it as a command file (.cmd), then run it from an elevated command prompt and the firewall rules for the server are set!
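To double-check that the rules actually landed, you can query them back by name (the names below are the ones used in the script above):

netsh advfirewall firewall show rule name="Reckon Accounts DB Service"
netsh advfirewall firewall show rule name="Reckon Accounts DB Manager"
netsh advfirewall firewall show rule name="Reckon Accounts DB Manager (N)"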