This is my code to configure an Azure Storage account:
public CloudTableClient ConfigureStorageAccount()
{
    var storageCred = new StorageCredentials(ConfigurationManager.AppSettings["SASToken"]);
    var cloudTableClient = new CloudTableClient(
        new StorageUri(new Uri(ConfigurationManager.AppSettings["StorageAccountUri"])), storageCred);

    var backgroundRequestOption = new TableRequestOptions()
    {
        // Client has a default exponential retry policy with a 4-second delta backoff and 3 retry attempts
        // Retry delays will be approximately 3 sec, 7 sec, and 15 sec
        MaximumExecutionTime = TimeSpan.FromSeconds(30),
        // PrimaryThenSecondary in case of read-access geo-redundant storage, else set this to PrimaryOnly
        LocationMode = LocationMode.PrimaryThenSecondary,
    };

    cloudTableClient.DefaultRequestOptions = backgroundRequestOption;
    return cloudTableClient;
}
When I specify backgroundRequestOption I get the error: "The Uri for the target storage location is not specified. Please consider changing the request's location mode."
When I don't specify backgroundRequestOption I don't get any error. Where do I need to specify this URI?
You need to specify both PrimaryUri and SecondaryUri if LocationMode.PrimaryThenSecondary is chosen.
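For example, a minimal sketch (reusing storageCred from the question; the "StorageAccountSecondaryUri" app setting is hypothetical, added here only for illustration):

// Pass both endpoints to StorageUri so PrimaryThenSecondary has a secondary location to read from.
// "StorageAccountSecondaryUri" is a made-up setting name; for RA-GRS the secondary endpoint is
// typically https://<account>-secondary.table.core.windows.net.
var primaryUri = new Uri(ConfigurationManager.AppSettings["StorageAccountUri"]);
var secondaryUri = new Uri(ConfigurationManager.AppSettings["StorageAccountSecondaryUri"]);

var cloudTableClient = new CloudTableClient(new StorageUri(primaryUri, secondaryUri), storageCred);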
I have several questions about using MassTransit with RabbitMQ.
I have two queues: one for normal messages and one for priority messages.
Priority messages must be handled before the normal ones.
Let's say I'm configuring the bus this way:
public void ConfigureRabbitMq(IBusRegistrationContext context, IRabbitMqBusFactoryConfigurator configurator)
{
    var rabbitConfig = RabbitMqConfig.Get<RabbitMqConfiguration>();

    configurator.Host(rabbitConfig.Host, rabbitConfig.VirtualHost, hfg =>
    {
        hfg.Password(rabbitConfig.Password);
        hfg.Username(rabbitConfig.UserName);
    });

    configurator.ConcurrentMessageLimit = 8;

    configurator.ReceiveEndpoint(rabbitConfig.SendQueue, endpoint =>
    {
        endpoint.Durable = true;
        endpoint.ConcurrentMessageLimit = 5;
        endpoint.PrefetchCount = 25;
        endpoint.UseMessageRetry(r => r.Incremental(5, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1)));
        endpoint.ConfigureConsumer<Service.MessageService.Send>(context);
    });

    configurator.ReceiveEndpoint(rabbitConfig.SendPriorityQueue, endpoint =>
    {
        endpoint.Durable = true;
        endpoint.ConcurrentMessageLimit = 5;
        endpoint.PrefetchCount = 25;
        endpoint.UseMessageRetry(r => r.Incremental(5, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1)));
        endpoint.ConfigureConsumer<Service.MessageService.Send>(context);
    });
}
What will 'configurator.ConcurrentMessageLimit = 8;' do?
Is it going to limit the number of messages for the entire app, or set the limit for every endpoint to 8?
Can I somehow make sure that messages from 'SendPriorityQueue' are handled before those from 'SendQueue'?
configurator.ConcurrentMessageLimit = 8;
Sets the default endpoint concurrent message limit to 8. That’s it.
Since you are specifying both ConcurrentMessageLimit and PrefetchCount on each receive endpoint, the default concurrent message limit is overridden and is essentially unused in this configuration. Each receive endpoint will prefetch up to 25 messages and process up to 5 concurrently (up to 10 concurrent messages total across both receive endpoints).
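To make the inheritance concrete, a minimal sketch (the "some-queue" endpoint is hypothetical): without a per-endpoint override, the bus-level value applies.

configurator.ConcurrentMessageLimit = 8;

// No ConcurrentMessageLimit is set on this endpoint, so it inherits the
// bus-level default of 8 concurrent messages.
configurator.ReceiveEndpoint("some-queue", endpoint =>
{
    endpoint.ConfigureConsumer<Service.MessageService.Send>(context);
});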
In the legacy Azure Service Bus (ASB) library I can use MessageWaitTimeout in SessionHandlerOptions to control the timeout between two messages. For example, if I set the timeout to 5 seconds, then after completing the first message the queue waits 5 seconds before picking up the next one.
In the new Azure.Messaging.ServiceBus library, the queue waits around 1 minute to pick up the next message. I only need to process messages one by one; there is no need to process messages concurrently.
I followed this example and can't find any way to set the timeout like in the old version.
Does anyone know how to do it?
var options = new ServiceBusSessionProcessorOptions
{
    AutoCompleteMessages = false,
    MaxConcurrentSessions = 1,
    MaxConcurrentCallsPerSession = 1,
    MaxAutoLockRenewalDuration = TimeSpan.FromMinutes(2),
};
EDIT:
I found the solution. It is RetryOptions on the ServiceBusClient:
var client = new ServiceBusClient("connectionString", new ServiceBusClientOptions
{
    RetryOptions = new ServiceBusRetryOptions
    {
        TryTimeout = TimeSpan.FromSeconds(5)
    }
});
With the latest stable release, 7.2.0, this can be configured with the SessionIdleTimeout property on ServiceBusSessionProcessorOptions.
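A minimal sketch, reusing the options object from the question (the 5-second value mirrors the old MessageWaitTimeout behaviour):

var options = new ServiceBusSessionProcessorOptions
{
    AutoCompleteMessages = false,
    MaxConcurrentSessions = 1,
    MaxConcurrentCallsPerSession = 1,
    // How long to wait for the next message in the current session
    // before releasing the session and moving on.
    SessionIdleTimeout = TimeSpan.FromSeconds(5),
};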
I have an application developed in C# whose first feature is a method that connects to a storage account in order to manage blobs.
My problem is that I want to block the connection after 3 failed attempts to connect.
This is the method that represents the connection to the storage account:
public bool Connect(out String strerror)
{
    strerror = "";
    try
    {
        storageAccount = new CloudStorageAccount(new StorageCredentials(AccountName, AccountConnectionString), true);
        MSAzureBlobStorageGUILogger.TraceLog(MessageType.Control, CommonMessages.ConnectionSuccessful);
        return true;
    }
    catch (Exception ex01)
    {
        Console.WriteLine(CommonMessages.ConnectionFailed + ex01.Message);
        strerror = CommonMessages.ConnectionFailed + ex01.Message;
        return false;
    }
}
At the moment you create the CloudStorageAccount instance there's still no connection made to the Storage account, which you can easily verify by passing in random credentials. Under the hood the library just fires REST calls at the Storage API, so no connection is made until you actually retrieve or send data.
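A quick sketch of that check (the account name and key below are made up; the key only needs to be valid base64 for the constructor to accept it):

// Constructing the account object with bogus credentials succeeds...
var bogusKey = Convert.ToBase64String(Encoding.UTF8.GetBytes("not-a-real-key"));
var account = new CloudStorageAccount(new StorageCredentials("bogusaccount", bogusKey), true);
var client = account.CreateCloudBlobClient();

// ...the failure only surfaces here, on the first actual request:
client.GetContainerReference("test").Exists();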
The library also has its own built-in mechanism to retry requests in case of failures, which defaults to 3 retries but can be changed manually like this:
var options = new BlobRequestOptions()
{
    // deltaBackoff and maxAttempts are placeholders; pick values that suit you,
    // e.g. TimeSpan.FromSeconds(2) and 3.
    RetryPolicy = new ExponentialRetry(deltaBackoff, maxAttempts),
};
cloudBlobClient.DefaultRequestOptions = options;
What about wrapping it in a while loop and retrying until either success or hitting the 3-attempt maximum?
string strError;
const int maxConnectionAttempts = 3;
var connectionAttempts = 0;
var connected = false;

while (!connected && connectionAttempts < maxConnectionAttempts)
{
    connected = Connect(out strError);
    connectionAttempts++;
}
We are encountering this exception very often in our production code, without any increase in the number of requests to Couchbase or any memory pressure on the server itself.
The node has been allocated 30 GB of RAM and usage peaks at about 3 GB, but every now and then this exception is thrown. The bucket is opened only once per application lifetime, and only get and upsert operations are performed afterwards. The connection is initialised like this:
Config = new ClientConfiguration()
{
    Servers = serverList,
    UseSsl = false,
    DefaultOperationLifespan = 2500,
    BucketConfigs = new Dictionary<string, BucketConfiguration>
    {
        { bucketName, new BucketConfiguration
            {
                BucketName = bucketName,
                UseSsl = false,
                DefaultOperationLifespan = 2500,
                PoolConfiguration = new PoolConfiguration
                {
                    MaxSize = 2000,
                    MinSize = 200,
                    SendTimeout = (int)Configuration.Config.Instance.CouchbaseConfig.Timeout
                }
            }
        }
    }
};

Cluster = new Cluster(Config);
Bucket = Cluster.OpenBucket();
Can you please let me know if this initialisation is correct and, more importantly, what to check on the Couchbase server to find the cause of this issue? I have checked all the logs on the server but could not find anything special at the times when these errors are thrown.
Thank you,
Stacktrace:
System.Exception.Couchbase exception
at ###.DataLayer.Couchbase.CouchbaseUserOperations.Get()
at ###.API.Services.BaseService`1.SetUserID()
at ###.API.Services.EventsService+<GetResponse>d__0.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()
at ###.API.Services.EventsService.GetResponse()
at ###.API.Services.BaseService`1+<Any>d__28.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()
at ###.API.Services.BaseService`1.Any()
at lambda_method()
at ServiceStack.Host.ServiceRunner`1.Execute()
at ServiceStack.Host.ServiceRunner`1.Process()
at ServiceStack.Host.ServiceExec`1.Execute()
at ServiceStack.Host.ServiceRequestExec`2.Execute()
at ServiceStack.Host.ServiceController.ManagedServiceExec()
at ServiceStack.Host.ServiceController+<>c__DisplayClass11.<RegisterServiceExecutor>b__f()
at ServiceStack.Host.ServiceController.Execute()
at ServiceStack.HostContext.ExecuteService()
at ServiceStack.Host.RestHandler.ProcessRequestAsync()
at ServiceStack.Host.Handlers.HttpAsyncTaskHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest()
at System.Web.HttpApplication+CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep()
at System.Web.HttpApplication+PipelineStepManager.ResumeSteps()
at System.Web.HttpApplication.BeginProcessRequestNotification()
at System.Web.HttpRuntime.ProcessRequestNotificationPrivate()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification()
at System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion()
at System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification()
Caused by: System.Exception : Couchbase.Core.NodeUnavailableException: The node 172.31.34.105:11210 that the key was mapped to is either down or unreachable. The SDK will continue to try to connect every 1000ms. Until it can connect every operation routed to it will fail with this exception.
(stack frames identical to the outer exception above)
A NodeUnavailableException could be returned for any number of network-related issues. However, since you mentioned you are running on AWS, it's likely the TCP keep-alive settings need to be tuned on the client.
Your MinSize of 200 connections is so large that you are likely not using them all; they sit idle until the AWS LB decides to shut them down. When that happens, the SDK will temporarily put the failed node into a down state (retrying every 1000 ms) and try to reconnect. During this time, any keys mapped to that node will fail with this exception.
This blog describes how to set the TCP keep-alive time and interval: http://blog.couchbase.com/introducing-couchbase-.net-sdk-2.1.0-the-asynchronous-couchbase-.net-client
var config = new ClientConfiguration
{
    EnableTcpKeepAlives = true, // default is true
    TcpKeepAliveTime = 1000*60*60, // 60 minutes of inactivity before the first keep-alive probe
    TcpKeepAliveInterval = 5000 // after that, a keep-alive probe is sent every 5 seconds
};

var cluster = new Cluster(config);
var bucket = cluster.OpenBucket();
That assumes you are using version 2.1.0 or greater of the client. If you are not, you can do it through the ServicePointManager:
// keep-alive time of 200 seconds, probe interval of 1 second
ServicePointManager.SetTcpKeepAlive(true, 200000, 1000);
You'll have to set that to a value less than what the AWS LB idle timeout is set to (I believe it's 60 seconds).
You should also probably set your connection pool min and max a bit lower, like 5 and 10.
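As a sketch of that sizing (the values come straight from the sentence above):

var poolConfiguration = new PoolConfiguration
{
    MinSize = 5,  // fewer idle connections for the AWS LB to reap
    MaxSize = 10
};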
Even though the problem was not fully solved, since we still encounter timeouts (though at a lower rate), we improved performance by using the ClusterHelper singleton instance as follows:
ClusterHelper.Initialize(
    new ClientConfiguration
    {
        Servers = serverList,
        UseSsl = false,
        DefaultOperationLifespan = 2500,
        EnableTcpKeepAlives = true,
        TcpKeepAliveTime = 1000*60*60,
        TcpKeepAliveInterval = 5000,
        BucketConfigs = new Dictionary<string, BucketConfiguration>
        {
            {
                "default",
                new BucketConfiguration
                {
                    BucketName = "default",
                    UseSsl = false,
                    Password = "",
                    PoolConfiguration = new PoolConfiguration
                    {
                        MaxSize = 50,
                        MinSize = 10
                    }
                }
            }
        }
    });
I am using Service Bus queues to communicate between a web role and a worker role. Sometimes messages from the web role are not picked up by the worker role, but it immediately accepts the next message I send. So I was thinking maybe this happens because Batched Operations is enabled. I have been trying to set it to false, but I haven't been successful. This is my code:
public static QueueClient GetServiceBusQueueClient(string queuename)
{
    string connectionString;
    if (RoleEnvironment.IsAvailable)
        connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
    else
        connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];

    var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

    QueueDescription queue = null;
    if (!namespaceManager.QueueExists(queuename))
    {
        queue = namespaceManager.CreateQueue(queuename);
        queue.EnableBatchedOperations = false;
        queue.MaxDeliveryCount = 1000;
    }
    else
    {
        queue = namespaceManager.GetQueue(queuename);
        queue.EnableBatchedOperations = false;
        queue.MaxDeliveryCount = 1000;
    }

    MessagingFactorySettings mfs = new MessagingFactorySettings();
    mfs.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.Zero;

    string issuer;
    string accessKey;
    if (RoleEnvironment.IsAvailable)
        issuer = RoleEnvironment.GetConfigurationSettingValue("AZURE_SERVICEBUS_ISSUER");
    else
        issuer = ConfigurationManager.AppSettings["AZURE_SERVICEBUS_ISSUER"];
    if (RoleEnvironment.IsAvailable)
        accessKey = RoleEnvironment.GetConfigurationSettingValue("AZURE_SERVICEBUS_ACCESS_KEY");
    else
        accessKey = ConfigurationManager.AppSettings["AZURE_SERVICEBUS_ACCESS_KEY"];

    mfs.TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(issuer, accessKey);

    MessagingFactory messagingFactory = MessagingFactory.Create(namespaceManager.Address, mfs);
    QueueClient Client = messagingFactory.CreateQueueClient(queue.Path);
    return Client;
}
But EnableBatchedOperations is always true and MaxDeliveryCount is always the default of 10.
Let me know if you know what the issue is.
Thanks
If you want to set EnableBatchedOperations, you have to do it before you create the queue. You do that by creating a QueueDescription object and passing it to the CreateQueue method. For example:
QueueDescription orderQueueDescription =
    new QueueDescription(queuename)
    {
        RequiresDuplicateDetection = true,
        MaxDeliveryCount = 1000,
        EnableBatchedOperations = false
    };

namespaceMgr.CreateQueue(orderQueueDescription);
Update:
The documentation is pretty clear on this:
Since metadata cannot be changed once a messaging entity is created, modifying the duplicate detection behavior requires deleting and recreating the queue. The same principle applies to any other metadata. [1]
QueueDescription represents the metadata description of the queue.
[1] http://msdn.microsoft.com/en-us/library/windowsazure/hh532012.aspx
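A hedged sketch of the delete-and-recreate approach the documentation describes (reusing the namespaceManager and queuename from the question):

// WARNING: deleting the queue also deletes any messages it holds.
if (namespaceManager.QueueExists(queuename))
    namespaceManager.DeleteQueue(queuename);

namespaceManager.CreateQueue(new QueueDescription(queuename)
{
    EnableBatchedOperations = false,
    MaxDeliveryCount = 1000
});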
Update (Azure SDK 2.3):
The UpdateQueue method on the NamespaceManager still doesn't let you update any properties apart from suspending or resuming the queue.
If you need to change MaxDeliveryCount on an existing queue and you don't want to delete and recreate it, your only option is to change it in the Azure portal.