In the legacy version of Azure Service Bus (ASB) I could use MessageWaitTimeout in SessionHandlerOptions to control the timeout between two messages. For example, if I set the timeout to 5 seconds, then after completing the first message the queue waits 5 s before picking up the next one.
In the new Azure.Messaging.ServiceBus library, the queue waits around 1 minute before picking up the next message. I only need to process messages one by one; there is no need to process messages concurrently.
I followed this example and can't find any way to set the timeout like in the old version.
Does anyone know how to do it?
var options = new ServiceBusSessionProcessorOptions
{
    AutoCompleteMessages = false,
    MaxConcurrentSessions = 1,
    MaxConcurrentCallsPerSession = 1,
    MaxAutoLockRenewalDuration = TimeSpan.FromMinutes(2),
};
EDIT:
I found the solution: it is RetryOptions in ServiceBusClientOptions, passed to the ServiceBusClient.
var client = new ServiceBusClient("connectionString", new ServiceBusClientOptions
{
    RetryOptions = new ServiceBusRetryOptions
    {
        TryTimeout = TimeSpan.FromSeconds(5)
    }
});
With the latest stable release, 7.2.0, this can be configured with the SessionIdleTimeout property on ServiceBusSessionProcessorOptions; a sketch follows below.
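A minimal sketch, assuming Azure.Messaging.ServiceBus 7.2.0 or later and reusing the options from the question:

var options = new ServiceBusSessionProcessorOptions
{
    AutoCompleteMessages = false,
    MaxConcurrentSessions = 1,
    MaxConcurrentCallsPerSession = 1,
    // How long the processor waits for the next message in a session
    // before releasing it; fills the role MessageWaitTimeout played before.
    SessionIdleTimeout = TimeSpan.FromSeconds(5)
};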
I have a background service web app using C# and Hangfire. I notice that in just a short duration, for example 1 minute, there are around 800 calls to the database by Hangfire. Note that I have not yet created any background job or recurring job in Hangfire. Is there a way to reduce the frequency of Hangfire's calls to the database?
I have set just 1 worker and QueuePollInterval to TimeSpan.FromMinutes(120), but I'm not sure why there are 3 calls to the db every 2 seconds.
var options = new SqlServerStorageOptions
{
    QueuePollInterval = TimeSpan.FromMinutes(120)
};

builder.Services.AddHangfire(configuration => configuration
    .UseSqlServerStorage(myServicesConnStr, options)
    .UseFilter(new AutomaticRetryAttribute { Attempts = 1, DelaysInSeconds = new int[] { 1800 } })
);

builder.Services.AddHangfireServer(options => options.WorkerCount = 1);
Traces from the database would be nice, to get some visibility.
My best bet would be that those calls are coming from the Dashboard or from heartbeats.
Look into the following properties (and their current values) to reduce the number of calls; a sketch of setting them follows the references below:
HeartbeatInterval
ServerCheckInterval
SchedulePollingInterval
StatsPollingInterval
References:
https://api.hangfire.io/html/Properties_T_Hangfire_Server_BackgroundProcessingServerOptions.htm
https://docs.hangfire.io/en/latest/configuration/using-sql-server.html#configuring-the-polling-interval
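A minimal sketch, assuming Hangfire 1.7+ with the ASP.NET Core integration (HeartbeatInterval, ServerCheckInterval and SchedulePollingInterval live on BackgroundJobServerOptions, StatsPollingInterval on DashboardOptions; the interval values are illustrative only):

builder.Services.AddHangfireServer(options =>
{
    options.WorkerCount = 1;
    // How often the server reports itself alive to the storage.
    options.HeartbeatInterval = TimeSpan.FromMinutes(5);
    // How often the server checks for timed-out servers.
    options.ServerCheckInterval = TimeSpan.FromMinutes(10);
    // How often the scheduler polls for delayed and recurring jobs.
    options.SchedulePollingInterval = TimeSpan.FromMinutes(120);
});

// After building the app: the dashboard stats refresh interval, in milliseconds.
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
    StatsPollingInterval = 60000
});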
I have several questions about using MassTransit with RabbitMQ.
I have two queues: one for normal messages and one for priority messages.
Priority ones must be handled before the normal ones.
Let's say I'm configuring the bus this way:
public void ConfigureRabbitMq(IBusRegistrationContext context, IRabbitMqBusFactoryConfigurator configurator)
{
    var rabbitConfig = RabbitMqConfig.Get<RabbitMqConfiguration>();

    configurator.Host(rabbitConfig.Host, rabbitConfig.VirtualHost, hfg =>
    {
        hfg.Password(rabbitConfig.Password);
        hfg.Username(rabbitConfig.UserName);
    });

    configurator.ConcurrentMessageLimit = 8;

    configurator.ReceiveEndpoint(rabbitConfig.SendQueue, endpoint =>
    {
        endpoint.Durable = true;
        endpoint.ConcurrentMessageLimit = 5;
        endpoint.PrefetchCount = 25;
        endpoint.UseMessageRetry(r => r.Incremental(5, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1)));
        endpoint.ConfigureConsumer<Service.MessageService.Send>(context);
    });

    configurator.ReceiveEndpoint(rabbitConfig.SendPriorityQueue, endpoint =>
    {
        endpoint.Durable = true;
        endpoint.ConcurrentMessageLimit = 5;
        endpoint.PrefetchCount = 25;
        endpoint.UseMessageRetry(r => r.Incremental(5, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1)));
        endpoint.ConfigureConsumer<Service.MessageService.Send>(context);
    });
}
What will 'configurator.ConcurrentMessageLimit = 8;' do?
Will it limit the number of messages for the entire app, or set the limit for every endpoint to 8?
Can I somehow make sure that the messages from 'SendPriorityQueue' are handled before those from 'SendQueue'?
configurator.ConcurrentMessageLimit = 8;
Sets the default endpoint concurrent message limit to 8. That’s it.
Since you are specifying both ConcurrentMessageLimit and PrefetchCount on each receive endpoint, the default concurrent message limit is overridden, essentially unused in this configuration. Each receive endpoint will prefetch up to 25 messages and process up to 5 concurrently (up to 10 concurrent messages total across both receive endpoints).
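For illustration, a hypothetical sketch: if the per-endpoint settings are removed, each receive endpoint falls back to the bus-level default:

configurator.ConcurrentMessageLimit = 8; // default per receive endpoint, not app-wide

configurator.ReceiveEndpoint(rabbitConfig.SendQueue, endpoint =>
{
    // No ConcurrentMessageLimit or PrefetchCount set here, so this endpoint
    // processes up to the bus-level default of 8 concurrent messages.
    endpoint.ConfigureConsumer<Service.MessageService.Send>(context);
});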
I'm having problems with Azure Batch jobs. I'm trying to create a pool, create a CloudTask, and then execute my Application Package once the pool is online.
Do you see something not working correctly?
Here is the code used now. Main code:
await CreatePoolAsync(batchClient, currentPoolId, applicationFiles);
await CreateJobAsync(batchClient, currentJobId, currentPoolId);
await AddTasksAsync(batchClient, currentJobId, inputFiles, optimizationId, outputContainerSasUrl);
Creating the Pool:
CloudPool pool = batchClient.PoolOperations.CreatePool(
    poolId: poolId,
    targetDedicated: 1,          // 1 compute node
    virtualMachineSize: "small", // single-core, 1.75 GB memory, 225 GB disk
    cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "4")); // Windows Server 2012 R2

pool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "my_app"
    }
};
Creating the Job:
CloudJob job = batchClient.JobOperations.CreateJob();
job.Id = jobId;
job.PoolInformation = new PoolInformation { PoolId = poolId };
await job.CommitAsync();
And adding the task:
string taskId = "myAppEngineTask";
string taskCommandLine = $"cmd /c %AZ_BATCH_APP_PACKAGE_MY_APP%\\MyApp.Console.exe -a NSGA2 -r 1000 -m db -i {optimizationId}";

CloudTask task = new CloudTask(taskId, taskCommandLine);
task.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "my_app"
    }
};

await batchClient.JobOperations.AddTaskAsync(jobId, task);
When I'm done adding tasks, everything seems to be up and running, but I get error code -2146232576 and nothing is printed to any logs.
To diagnose task failures, first check whether the CloudTask's ExecutionInformation.FailureInformation (SDK 7.0.0+, or ExecutionInformation.SchedulingError in prior SDK versions) is set, and examine those fields for any information; a sketch follows below.
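A minimal sketch, assuming the Batch .NET SDK 7.0.0+ and reusing jobId and taskId from the question:

CloudTask completedTask = await batchClient.JobOperations.GetTaskAsync(jobId, taskId);
TaskFailureInformation failure = completedTask.ExecutionInformation?.FailureInformation;
if (failure != null)
{
    // Category, Code and Message describe why the service marked the task as failed.
    Console.WriteLine($"Category: {failure.Category}");
    Console.WriteLine($"Code: {failure.Code}");
    Console.WriteLine($"Message: {failure.Message}");
}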
For your particular task, it looks like it could be related to adding a task-level application package reference when you have already done that at the pool level. Try omitting task.ApplicationPackageReferences, as in the sketch below.
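A sketch of the suggested change, reusing taskId and taskCommandLine from the question:

// Rely on the pool-level package reference; create the task without its own reference.
CloudTask task = new CloudTask(taskId, taskCommandLine);
await batchClient.JobOperations.AddTaskAsync(jobId, task);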
Consult the Application Package documentation for more information on the difference between pool-level and task-level application packages and which one best suits your scenario.
We are encountering this exception very often in our production code, without any increase in the number of requests to Couchbase or any memory pressure on the server itself.
The node has been allocated 30 GB of RAM and usage is at most 3 GB, but every now and then this exception is thrown. The bucket is opened only once per application lifetime, and only get and upsert operations are performed afterwards. The connection is initialised like this:
Config = new ClientConfiguration()
{
    Servers = serverList,
    UseSsl = false,
    DefaultOperationLifespan = 2500,
    BucketConfigs = new Dictionary<string, BucketConfiguration>
    {
        { bucketName, new BucketConfiguration
            {
                BucketName = bucketName,
                UseSsl = false,
                DefaultOperationLifespan = 2500,
                PoolConfiguration = new PoolConfiguration
                {
                    MaxSize = 2000,
                    MinSize = 200,
                    SendTimeout = (int)Configuration.Config.Instance.CouchbaseConfig.Timeout
                }
            }
        }
    }
};

Cluster = new Cluster(Config);
Bucket = Cluster.OpenBucket();
Can you please let me know if this initialisation is correct and, more importantly, what to check on the Couchbase server to find the cause of this issue? I have checked all logs on the server but could not find anything special at the times when these errors are thrown.
Thank you,
Stacktrace:
System.Exception: Couchbase exception
at ###.DataLayer.Couchbase.CouchbaseUserOperations.Get()
at ###.API.Services.BaseService`1.SetUserID()
at ###.API.Services.EventsService+<GetResponse>d__0.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()
at ###.API.Services.EventsService.GetResponse()
at ###.API.Services.BaseService`1+<Any>d__28.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()
at ###.API.Services.BaseService`1.Any()
at lambda_method()
at ServiceStack.Host.ServiceRunner`1.Execute()
at ServiceStack.Host.ServiceRunner`1.Process()
at ServiceStack.Host.ServiceExec`1.Execute()
at ServiceStack.Host.ServiceRequestExec`2.Execute()
at ServiceStack.Host.ServiceController.ManagedServiceExec()
at ServiceStack.Host.ServiceController+<>c__DisplayClass11.<RegisterServiceExecutor>b__f()
at ServiceStack.Host.ServiceController.Execute()
at ServiceStack.HostContext.ExecuteService()
at ServiceStack.Host.RestHandler.ProcessRequestAsync()
at ServiceStack.Host.Handlers.HttpAsyncTaskHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest()
at System.Web.HttpApplication+CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep()
at System.Web.HttpApplication+PipelineStepManager.ResumeSteps()
at System.Web.HttpApplication.BeginProcessRequestNotification()
at System.Web.HttpRuntime.ProcessRequestNotificationPrivate()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification()
at System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion()
at System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification()
Caused by: System.Exception : Couchbase.Core.NodeUnavailableException: The node 172.31.34.105:11210 that the key was mapped to is either down or unreachable. The SDK will continue to try to connect every 1000ms. Until it can connect every operation routed to it will fail with this exception.
at ###.DataLayer.Couchbase.CouchbaseUserOperations.Get()
... (the remaining frames are identical to the outer exception above)
A NodeUnavailableException can be returned for any number of network-related issues. However, since you mentioned you are running on AWS, it's likely that the TCP keep-alive settings need to be tuned on the client.
Your MinSize connection count (200) is so large that you are likely not using them all; they sit idle until the AWS LB decides to shut them down. When that happens, the SDK temporarily puts the failed node into a down state (1000 ms) and then tries to reconnect. During this time, any keys mapped to it will fail with that exception.
This blog post describes how to set the TCP keep-alive time and interval: http://blog.couchbase.com/introducing-couchbase-.net-sdk-2.1.0-the-asynchronous-couchbase-.net-client
var config = new ClientConfiguration
{
    EnableTcpKeepAlives = true, // default is true
    TcpKeepAliveTime = 1000 * 60 * 60, // set to 60 mins
    TcpKeepAliveInterval = 5000 // keep-alive probes sent every 5 seconds after 1 hr idle
};
var cluster = new Cluster(config);
var bucket = cluster.OpenBucket();
That assumes you are using version 2.1.0 or greater of the client. If you are not, you can do it through the ServicePointManager:
// setting keep-alive time to 200 seconds
ServicePointManager.SetTcpKeepAlive(true, 200000, 1000);
You'll have to set that to a value less than what the AWS LB is set to (I believe it's 60 seconds).
You should also probably set your connection pool min and max a bit lower, like 5 and 10.
Even though the problem was not fully solved (we still encounter timeouts, though at a lower rate), we improved performance by using the ClusterHelper singleton instance as follows:
ClusterHelper.Initialize(
    new ClientConfiguration
    {
        Servers = serverList,
        UseSsl = false,
        DefaultOperationLifespan = 2500,
        EnableTcpKeepAlives = true,
        TcpKeepAliveTime = 1000 * 60 * 60,
        TcpKeepAliveInterval = 5000,
        BucketConfigs = new Dictionary<string, BucketConfiguration>
        {
            {
                "default",
                new BucketConfiguration
                {
                    BucketName = "default",
                    UseSsl = false,
                    Password = "",
                    PoolConfiguration = new PoolConfiguration
                    {
                        MaxSize = 50,
                        MinSize = 10
                    }
                }
            }
        }
    });
I am using Service Bus queues to communicate between a web role and a worker role. Sometimes messages from the web role are not accepted by the worker role, but it immediately accepts the next message I send. So I was thinking maybe it's happening because batched operations are enabled. I have been trying to set that to false but I haven't been successful. This is my code.
public static QueueClient GetServiceBusQueueClient(string queuename)
{
    string connectionString;
    if (RoleEnvironment.IsAvailable)
        connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
    else
        connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];

    var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

    QueueDescription queue = null;
    if (!namespaceManager.QueueExists(queuename))
    {
        queue = namespaceManager.CreateQueue(queuename);
        queue.EnableBatchedOperations = false;
        queue.MaxDeliveryCount = 1000;
    }
    else
    {
        queue = namespaceManager.GetQueue(queuename);
        queue.EnableBatchedOperations = false;
        queue.MaxDeliveryCount = 1000;
    }

    MessagingFactorySettings mfs = new MessagingFactorySettings();
    mfs.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.Zero;

    string issuer;
    string accessKey;
    if (RoleEnvironment.IsAvailable)
        issuer = RoleEnvironment.GetConfigurationSettingValue("AZURE_SERVICEBUS_ISSUER");
    else
        issuer = ConfigurationManager.AppSettings["AZURE_SERVICEBUS_ISSUER"];
    if (RoleEnvironment.IsAvailable)
        accessKey = RoleEnvironment.GetConfigurationSettingValue("AZURE_SERVICEBUS_ACCESS_KEY");
    else
        accessKey = ConfigurationManager.AppSettings["AZURE_SERVICEBUS_ACCESS_KEY"];

    mfs.TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(issuer, accessKey);

    MessagingFactory messagingFactory = MessagingFactory.Create(namespaceManager.Address, mfs);
    QueueClient Client = messagingFactory.CreateQueueClient(queue.Path);
    return Client;
}
But EnableBatchedOperations is always true and MaxDeliveryCount is always 10 (the defaults).
Let me know if you know what the issue is.
Thanks
If you want to set EnableBatchedOperations, you have to do that before you create the queue. You do that by creating a QueueDescription object and passing it to the CreateQueue method. For example:
QueueDescription orderQueueDescription =
    new QueueDescription(queuename)
    {
        EnableBatchedOperations = false,
        RequiresDuplicateDetection = true,
        MaxDeliveryCount = 1000,
    };

namespaceMgr.CreateQueue(orderQueueDescription);
Update:
The documentation is pretty clear on this:
Since metadata cannot be changed once a messaging entity is created, modifying the duplicate detection behavior requires deleting and recreating the queue. The same principle applies to any other metadata. [1]
QueueDescription represents the metadata description of the queue.
[1] http://msdn.microsoft.com/en-us/library/windowsazure/hh532012.aspx
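A sketch of the delete-and-recreate approach described in the quote above (namespaceManager is assumed to be an existing NamespaceManager instance; note that deleting the queue discards any messages it holds):

if (namespaceManager.QueueExists(queuename))
{
    // Deleting the queue discards all messages it contains.
    namespaceManager.DeleteQueue(queuename);
}
namespaceManager.CreateQueue(new QueueDescription(queuename)
{
    EnableBatchedOperations = false,
    MaxDeliveryCount = 1000
});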
Update (Azure SDK 2.3):
UpdateQueue method on the NamespaceManager still doesn't let you update any properties apart from suspending or resuming the queue.
If you need to change MaxDeliveryCount on an existing queue and you don't want to delete and recreate the queue, your only option is to change it in the Azure portal.