We have set MaximumConcurrencyLevel to 8, which equals the number of virtual processors we have. When we increase this value, overall performance degrades by around 10-20%, yet CPU usage stays at only 25%. We would like to understand what value of MaximumConcurrencyLevel is best suited to our setup, and what else we need to take into account when tuning it.
(We have not set any value for MaximumMessageThroughputPerSecond, and we are using NServiceBus 5.)
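For context, in NServiceBus 5 this limit is typically configured through the TransportConfig section in app.config, roughly like this (values shown are our current settings):

```xml
<configuration>
  <configSections>
    <section name="TransportConfig"
             type="NServiceBus.Config.TransportConfig, NServiceBus.Core" />
  </configSections>
  <!-- MaximumMessageThroughputPerSecond="0" means unthrottled -->
  <TransportConfig MaximumConcurrencyLevel="8"
                   MaximumMessageThroughputPerSecond="0" />
</configuration>
```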
The limiting factor likely isn’t your CPU. It is more likely your network, your disks, or a remote service such as your database. Only if you have CPU-heavy tasks like video encoding/transcoding, heavy math, or in-memory databases/caches is your CPU likely to become the bottleneck.
You have to set up proper infrastructure monitoring to tune your environment for best performance.
That said, my rule of thumb is that for most transports the optimal value is between 1 and 4 times the number of cores in a machine, for running a single endpoint on a single box.
If you run more endpoint instances on a single box they will compete for resources, so that rule of thumb would probably not be a great starting point.
Measure the current performance of all involved resources, increase the value in small increments of 1 or 2, then measure everything again and compare.
Please describe the kind of workload your endpoint handles, how many endpoints are running on your machine, and other characteristics if you want more precise advice.
Currently we have around 8 different endpoints (not 8 instances of the same endpoint) on a single machine, using the SQL Server transport; the handlers perform database calls through WCF services. We have also analyzed the database queries with a profiler, and the queries are not taking much time.
Check the connection limit on your database server. Monitor the active connection count and verify whether that is the bottleneck.
8 endpoints × concurrency 8 = 64; verify that your database allows for that many concurrent connections.
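Assuming the database is SQL Server (as the SQL transport implies), a couple of diagnostic queries can confirm whether you are anywhere near the limit:

```sql
-- Configured connection limit (0 means "unlimited", i.e. capped only by
-- the 32,767 hard maximum)
SELECT value_in_use
FROM sys.configurations
WHERE name = 'user connections';

-- Number of connections currently open
SELECT COUNT(*) AS open_connections
FROM sys.dm_exec_connections;
```

Run the second query under peak load; if it plateaus well below 64 while messages queue up, the bottleneck is elsewhere.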
Also check the API of your WCF service. It could be that the API does not allow concurrent invocations. Make sure its ConcurrencyMode is set to the right value.
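A minimal sketch of what to look for on the service implementation (the service and contract names here are hypothetical):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void PlaceOrder(int orderId);
}

// With InstanceContextMode.Single, the default ConcurrencyMode.Single
// serializes all calls through the one service instance, so your 64
// concurrent handlers would queue behind each other. Multiple allows
// parallel invocations, provided the implementation is thread-safe.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class OrderService : IOrderService
{
    public void PlaceOrder(int orderId)
    {
        // thread-safe implementation goes here
    }
}
```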