ServiceControl Monitoring getting lots of visibility timeouts

servicecontrol

(Tomas Gerhardsen) #1

I’m getting lots of warnings in the ServiceControl Monitoring log.

“NServiceBus.Transport.AzureStorageQueues.MessagePump|Dispatching the message took longer than a visibility timeout. The message will reappear in the queue and will be obtained again.”

What could cause this? Can I change the timeout value for the monitoring service? The system is not very busy.
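
For context on what the warning means: Azure Storage Queues hide a dequeued message from other consumers for a visibility timeout; if processing (plus the delete) takes longer than that window, the pop receipt expires and the message becomes visible and is delivered again. A minimal sketch of the mechanic using the classic `CloudQueue` API (queue name and connection string are placeholders, not from this thread):

```csharp
// Sketch of the visibility-timeout mechanic behind the warning.
var connectionString = "UseDevelopmentStorage=true"; // placeholder
var account = CloudStorageAccount.Parse(connectionString);
var queue = account.CreateCloudQueueClient().GetQueueReference("myqueue");

// The dequeued message is hidden from other consumers for 30 seconds.
var message = queue.GetMessage(visibilityTimeout: TimeSpan.FromSeconds(30));

// If handling takes longer than those 30 seconds, the pop receipt
// expires, DeleteMessage fails, and the message reappears in the
// queue -- which is exactly what the MessagePump warning describes.
queue.DeleteMessage(message);
```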


(Tomasz) #2

Hi Tomas,

you should not need to change the visibility timeout.

Are there any other entries in the log besides the one you posted? Is this behavior permanent?

Cheers,
Tomek


(Tomas Gerhardsen) #3

Yes, the warnings are constant. I can also see that the monitoring service is issuing a huge number of requests: more than 40 million in the last 24 hours.

These are the messages right after the service was started:

ServiceControl Monitoring Version: 2.0.0.0
Selected Transport: ServiceControl.Transports.AzureStorageQueues.ServiceControlAzureStorageQueueTransport, ServiceControl.Transports.AzureStorageQueues

2019-01-29 15:36:56.9244|4|Warn|NServiceBus.TransportExtensions|A connection string named ‘NServiceBus/Transport’ was found. Using named connection strings is discouraged. Instead, load the connection string in your code and pass the value to EndpointConfiguration.UseTransport().ConnectionString(connectionString).
2019-01-29 15:42:19.7073|18|Warn|NServiceBus.Features.FeatureStartupTaskController|Exception occurred during stopping of feature startup task ‘TimeoutPollerRunner’.
System.ObjectDisposedException: Cannot access a disposed object.
Object name: ‘ExpiredTimeoutsPoller’.
at NServiceBus.ExpiredTimeoutsPoller.ThrowIfDisposed()
at NServiceBus.Features.TimeoutPollerRunner.OnStop(IMessageSession session)
at NServiceBus.Features.FeatureStartupTaskController.d__5.MoveNext()
2019-01-29 15:42:19.7183|18|Warn|NServiceBus.ExpiredTimeoutsPoller|Failed to fetch timeouts from the timeout storage
System.ObjectDisposedException: Cannot access a disposed object.
Object name: ‘ExpiredTimeoutsPoller’.
at NServiceBus.ExpiredTimeoutsPoller.ThrowIfDisposed()
at NServiceBus.ExpiredTimeoutsPoller.SpinOnce(CancellationToken cancellationToken)
at NServiceBus.ExpiredTimeoutsPoller.d__10.MoveNext()
— End of stack trace from previous location where exception was thrown —
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NServiceBus.ExpiredTimeoutsPoller.d__9.MoveNext()
2019-01-29 15:42:19.7273|18|Warn|NServiceBus.RepeatedFailuresOverTimeCircuitBreaker|The circuit breaker for TimeoutStorageConnectivity is now in the armed state
2019-01-29 15:42:26.3233|1|Warn|NServiceBus.TransportExtensions|A connection string named ‘NServiceBus/Transport’ was found. Using named connection strings is discouraged. Instead, load the connection string in your code and pass the value to EndpointConfiguration.UseTransport().ConnectionString(connectionString).
2019-01-29 15:42:27.5683|4|Warn|NServiceBus.TransportExtensions|A connection string named ‘NServiceBus/Transport’ was found. Using named connection strings is discouraged. Instead, load the connection string in your code and pass the value to EndpointConfiguration.UseTransport().ConnectionString(connectionString).
2019-01-29 16:03:58.1030|5|Warn|NServiceBus.Transport.AzureStorageQueues.MessagePump|Dispatching the message took longer than a visibility timeout. The message will reappear in the queue and will be obtained again.
NServiceBus.Transport.AzureStorageQueues.LeaseTimeoutException: The pop receipt of the cloud queue message ‘446894f3-bcc3-4f60-94e2-c5bdf7ff4175’ is invalid as it exceeded the next visible time by ‘00:00:00.0980521’.
at NServiceBus.Transport.AzureStorageQueues.MessageRetrieved.AssertVisibilityTimeout()
at NServiceBus.Transport.AzureStorageQueues.AtLeastOnceReceiveStrategy.d__1.MoveNext()
— End of stack trace from previous location where exception was thrown —
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NServiceBus.Transport.AzureStorageQueues.MessagePump.d__8.MoveNext()
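
As an aside, the “named connection string” warning in the log above can be silenced by following the advice in the warning itself: load the connection string in code and pass it to the transport. A rough sketch for a self-hosted NServiceBus endpoint (endpoint name and environment variable are illustrative, not from this thread):

```csharp
// Sketch: pass the connection string in code instead of relying on
// the legacy "NServiceBus/Transport" named connection string.
var endpointConfiguration = new EndpointConfiguration("MyEndpoint");
var transport = endpointConfiguration.UseTransport<AzureStorageQueueTransport>();

// Load the value from configuration of your choice; an environment
// variable is used here purely as an example.
var asqConnectionString = Environment.GetEnvironmentVariable("ASQ_CONNECTION_STRING");
transport.ConnectionString(asqConnectionString);
```

Note this only addresses the cosmetic warning, not the visibility-timeout issue itself.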


(Tomasz) #4

Hi Tomas,

are there any network connectivity issues between the monitoring service and the storage account?

Could you try upgrading the instance to the latest version? The excessive storage usage might be caused by a bug that was patched in the newest version of the monitoring service.

Tomek


(Tomas Gerhardsen) #5

Hi Tomek,

Our monitoring service is installed on an Azure VM and I don’t think there are any network connectivity issues. I have also noticed the same problem in our test environment, which also runs on Azure.

I already have the latest version installed: 2.1.2.

/Tomas


(Sean Feldman) #6

Tomas,

Are the VM and the storage account in the same Azure region? Specifically, I’m after things such as North Europe vs West Europe.


(Tomas Gerhardsen) #7

Yes, they are both in West EU.


(Tomasz) #8

Hi Tomas,

I was able to reproduce the problem that you see. Once I have more information I’ll let you know.

Tomek


(Tomasz) #9

Hi Tomas,

It looks like we will need to release a patch for the ASQ transport. Can you send an email to support@particular.net so that I can share the patch with you to validate that it solves the problem?

Cheers,
Tomasz


(Daniel Marbach) #10

Hi Tomas

We released ASQ 7.5.6 and 8.1.3, and updated ServiceControl and ServiceControl.Monitoring accordingly.

Regards
Daniel


(Tomasz) #11

Hi Tomas,

please have a look at ServiceControl 3.6.5 - Patch release. This patch fixes a bug in the native queue length monitoring logic that caused the large number of requests to the storage account.

Cheers,
Tomek