ServicePulse and ServiceControl


My question: which version of Particular.ServiceControl is recommended to complement Particular.ServicePulse?

We were on old versions: Particular.ServiceControl 1.40.0 and Particular.ServicePulse 1.9.1. The documentation recommended upgrading to Particular.ServiceControl 1.45 or later, so I upgraded to Particular.ServiceControl 1.48.0 and Particular.ServicePulse 1.13.4.

I backed up the database and took a snapshot of the machine.

After that, I performed the upgrade to Particular.ServiceControl 2.1.4 and Particular.ServicePulse 1.14.4.

However, some clarification is needed.

From servicecontrol.exe.config, as shown below:

<?xml version="1.0" encoding="utf-8"?>
<add key="Raven/Esent/MaxVerPages" value="2048" />
<add key="ServiceControl/DBPath" value="D:\ServiceControl\DB" />
<add key="ServiceControl/TransportType" value="NServiceBus.MsmqTransport, NServiceBus.Core" />
<add key="ServiceControl/HostName" value="localhost" />
<add key="ServiceControl/Port" value="33333" />
<add key="ServiceControl/DatabaseMaintenancePort" value="33334" />
<add key="ServiceControl/LogPath" value="D:\ServiceControl\Logs" />
<add key="ServiceControl/ForwardAuditMessages" value="False" />
<add key="ServiceControl/ForwardErrorMessages" value="False" />
<add key="ServiceControl/AuditRetentionPeriod" value="10.00:00:00" />
<add key="ServiceControl/ErrorRetentionPeriod" value="10.00:00:00" />
<add key="ServiceBus/AuditQueue" value="audit" />
<add key="ServiceBus/ErrorQueue" value="error" />
<gcServer enabled="true" />

I get the following in the logs:

  • in the RavenDB log file, this appears:

2018-09-20 09:30:29.3146|91|Warn|Raven.Client.Connection.Async.AsyncServerClient|Was unable to fetch topology from primary node http:// localhost also there is no cached topology

  • in the main log file, this appears:

2018-09-20 09:13:47.0995|111|Warn|ServiceControl.MSMQ.DLQMonitor.CheckDeadLetterQueue|36 messages in the Dead Letter Queue on MACHINE. Please submit a support ticket to Particular using if you would like help from our engineers to ensure no message loss while resolving these dead letter messages.

What would be the recommended steps to resolve the above issues?

I have also encountered situations where messages are not processed and the system appears hung.

  • MSMQ is stuck on "starting"; this resulted in having to kill the process.
  • Delete the MQInSeqs.lg1, MQInSeqs.lg2, MQTrans.lg1, MQTrans.lg2, and QMLog files in C:\Windows\System32\msmq\storage
  • In HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSMQ\Parameters, set the value of LogDataCreated to zero.
  • Reboot
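The manual recovery steps above can be sketched as a PowerShell script. This is an untested sketch that assumes Message Queuing runs as the "MSMQ" Windows service and uses the default storage path; review it carefully before running, since deleting the log files discards in-flight transactional state:

```powershell
# Stop the Message Queuing service (it may need to be killed if stuck on "starting")
Stop-Service -Name MSMQ -Force

# Remove the transactional log files from the default MSMQ storage folder
$storage = 'C:\Windows\System32\msmq\storage'
Remove-Item -Path (Join-Path $storage 'MQInSeqs.lg1'),
                  (Join-Path $storage 'MQInSeqs.lg2'),
                  (Join-Path $storage 'MQTrans.lg1'),
                  (Join-Path $storage 'MQTrans.lg2'),
                  (Join-Path $storage 'QMLog') -ErrorAction SilentlyContinue

# Tell MSMQ to recreate its log data on next start
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\MSMQ\Parameters' `
                 -Name LogDataCreated -Value 0

Restart-Computer
```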

I then performed a database compaction and clean-up according to this link, Compacting RavenDB 3.5 • ServiceControl • Particular Docs, in order to resolve the issue.

Any suggestions, tips would be greatly appreciated.


Hi Tom

The WARN entry you see from RavenDB can be ignored; it is RavenDB trying to determine whether it is operating in cluster mode.

The CheckDeadLetterQueue message could indicate an issue with MSMQ. The check verifies that the dead-letter queues on the machine are empty; if they aren't, it reports this warning.

Can you send us the full log files of ServiceControl to

We can then investigate the support case further.


Hi Daniel,

Thanks for getting back to me. What would be the best way to check the Dead Letter Queue?

Is there a simple PowerShell script to query it?


Hi Tom,

Are you looking for an automated way or can it be manual?

The dead-letter queues are specific to a machine, not global, so the failing check inspects the dead-letter queue of the machine that ServiceControl is running on. You can access the queue under Computer Management.


Or you can use the Get-MsmqQueue PowerShell cmdlet with -QueueType SystemDeadLetter; see Get-MsmqQueue (MSMQ) | Microsoft Learn.
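For example, a quick way to inspect the dead-letter queues (assuming the MSMQ PowerShell module is available, i.e. the Message Queuing Windows feature is installed):

```powershell
# List the system dead-letter queues and how many messages they hold
Get-MsmqQueue -QueueType SystemDeadLetter |
    Select-Object QueueName, MessageCount

# Peek at up to 10 messages without removing them
Get-MsmqQueue -QueueType SystemDeadLetter |
    Receive-MsmqQueue -Peek -Count 10
```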

Hope that gives you some pointers in the right direction.


Hi Daniel,

Many thanks. I am curious: what would be the impact if there were an automated process to clear out the DLQs?

Also, why would the error appear in the first place? The scenario I have is this: a centralised machine runs the particular insight server, and several environments that employ NSB send messages to it. Those environments are shut off at times, so the number of failed messages on the server grows, and then the custom check trips with the error about DLQs.
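For reference, an automated clear-out could be scripted along these lines. This is only a hedged sketch, and the impact to be aware of is that purging a dead-letter queue permanently discards those messages, so it should only run once you have confirmed the dead-lettered messages are expected (e.g. from environments that were deliberately shut off):

```powershell
# Hypothetical scheduled task: warn about and purge non-empty dead-letter queues.
# WARNING: Clear-MsmqQueue discards the messages permanently.
foreach ($q in Get-MsmqQueue -QueueType SystemDeadLetter) {
    if ($q.MessageCount -gt 0) {
        Write-Warning "$($q.QueueName) holds $($q.MessageCount) messages; purging"
        Clear-MsmqQueue -InputObject $q
    }
}
```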