string uri = "myurl";
string blobStatus = GetBlobStatus(uri);
if (blobStatus != LeaseStatus.Locked.ToString())
{
    string leaseId = AquireBlob(uri);
    // process data
    ReleaseBlob(leaseId, uri);
}
Above is my code for leasing and releasing a lock on a blob. I'm looking at using this approach in a multi-instance worker role where I want to run a specific piece of code every x interval of time, since multiple instances could execute the code at the same time.
The problem is that I get the LeaseId properly, but when the second instance checks the blob's lease status, it is always Unspecified. Why is that? Any clues?
I followed the following link to get a head start:
Leasing Windows Azure Blobs Using the Storage Client Library - blog.smarx.com
I think your approach should not rely on checking the blob's status first and then deciding, based on that, whether to acquire the lease. You should always try to acquire the lease and catch the exception thrown in the process. That way, if this code is running in a multi-instance environment, only one instance will be able to acquire the lease (and the other instances will simply get an error).
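For illustration, here is a minimal sketch of that pattern. It assumes the newer Azure Storage client library (2.x), where CloudBlockBlob exposes the lease operations directly, so treat the exact calls as an assumption if you're on the 1.x client from the linked post; the container and blob names are placeholders:

CloudBlockBlob blob = container.GetBlockBlobReference("mylock"); // container is assumed to exist
try
{
    // Lease duration must be 15-60 seconds (or null for an infinite lease)
    string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(60), null);
    try
    {
        // process data: only the instance holding the lease gets here
    }
    finally
    {
        blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
    }
}
catch (StorageException ex)
{
    if (ex.RequestInformation.HttpStatusCode != 409) // 409 Conflict = lease held elsewhere
        throw;
    // another instance holds the lease; skip this cycle
}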
Good suggestions. I solved the problem: it turned out that the LeaseStatus property is no good and never returns a result.
I had to get the status via a raw web request instead; then I got the right result.
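In case it helps others, the lease status can be read from the response headers of a plain HEAD request (Get Blob Properties). A minimal sketch, assuming the blob URI already carries a SAS token or is otherwise accessible (real code would otherwise sign the request with the account key):

var request = (HttpWebRequest)WebRequest.Create(uri);
request.Method = "HEAD";
request.Headers.Add("x-ms-version", "2012-02-12");
using (var response = (HttpWebResponse)request.GetResponse())
{
    string leaseStatus = response.Headers["x-ms-lease-status"]; // "locked" or "unlocked"
}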
I am creating an NSUrlSession for a background upload using a unique identifier.
Is there a way, say after closing and reopening the app, to retrieve that NSUrlSession and cancel the upload task in case it has not been processed yet?
I tried simply recreating the NSUrlSession using the same identifier to check whether it still contains the upload task, however it does not even allow me to create this session, throwing an exception like "A background URLSession with identifier ... already exists", which is unsurprising as documentation explicitly says that a session identifier must be unique.
I am trying to do this with Xamarin.Forms 2.3.4.270 in an iOS platform project.
Turns out I was on the right track. The error message "A background URLSession with identifier ... already exists" actually seems to be more of a warning; no exception is actually thrown (the exception I saw did not come from the duplicate session creation).
So, you can in fact reattach to an existing NSUrlSession and will find the contained tasks still present, even after restarting the app. Just create a new configuration with the same identifier, use that to create a new session, ignore the warning that's printed out, and go on from there.
I am not sure if this is recommended for production use, but it works fine for my needs.
private async Task EnqueueUploadInternal(string uploadId)
{
    NSUrlSessionConfiguration configuration = NSUrlSessionConfiguration.CreateBackgroundSessionConfiguration(uploadId);
    INSUrlSessionDelegate urlSessionDelegate = (...);
    NSUrlSession session = NSUrlSession.FromConfiguration(configuration, urlSessionDelegate, new NSOperationQueue());
    NSUrlSessionUploadTask uploadTask = await (...);
    uploadTask.Resume();
}
private async Task CancelUploadInternal(string uploadId)
{
    NSUrlSessionConfiguration configuration = NSUrlSessionConfiguration.CreateBackgroundSessionConfiguration(uploadId);
    NSUrlSession session = NSUrlSession.FromConfiguration(configuration); // this will print the "already exists" warning
    NSUrlSessionTask[] tasks = await session.GetAllTasksAsync();
    foreach (NSUrlSessionTask task in tasks)
        task.Cancel();
}
I have a C# application I recently converted into a service. As part of its normal operation, it creates PDF invoices via Crystal Reports (CR) using the following code:
foreach (string docentry in proformaDocs)
    using (ReportDocument prodoc = new ReportDocument())
    {
        string filename = outputFolder + docentry + ".pdf";
        prodoc.Load(/* .rpt file */);
        prodoc.SetParameterValue(0, docentry);
        prodoc.SetParameterValue(1, 17);
        prodoc.SetDatabaseLogon(/* login data */);
        prodoc.ExportToDisk(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat, filename);
        prodoc.Close();
        prodoc.Dispose();
    }
foreach (string docentry in invoiceDocs)
    using (ReportDocument invdoc = new ReportDocument())
    {
        string filename = differentOutputFolder + docentry + ".pdf";
        invdoc.Load(/* different .rpt file */);
        invdoc.SetParameterValue(0, docentry);
        invdoc.SetParameterValue(1, 13);
        invdoc.SetDatabaseLogon(/* login data */);
        invdoc.ExportToDisk(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat, filename);
        invdoc.Close();
        invdoc.Dispose();
    }
GC.Collect();
The problem is that after about 3-4 hours of runtime, with the above code executing at most every two minutes, the Load() operation hits the processing job limit despite my explicitly disposing the report objects. However, if I leave the service running and launch a non-service instance of the same application, that instance executes properly even while the service is still throwing the job-limit exception. With the non-service instance having taken care of the processing, the service has nothing to do for the moment - but the instant it does, it throws the error again until I manually stop and restart the service, at which point the error goes away for another 3-4 hours.
How am I hitting the job limit if I'm manually disposing every single report object as soon as I'm done with it and calling garbage collection after each round of processing and disposing? And if the job limit is reached, how can a parallel instance of the same code not be affected by it?
UPDATE: I managed to track down the problem, and as it turns out, it's not with CR. I take CR's database login credentials from a SAP Company object inside a Database wrapper class stored in a Dictionary, fetched with this:
public Company GetSAP(string name)
{
    Database db;                   // wrapper class
    SAP.TryGetValue(name, out db); // fetch from the Dictionary
    return db.SAP;                 // Company object in the wrapper class
}
For some reason, calling this freezes the thread, but the Timer driving the service's normal operation naturally doesn't wait for it to complete and launches another thread, which freezes too upon calling this. This keeps up until the number of frozen threads hits the job limit, at which point each new thread throws an exception because the still-frozen threads fill the job limit. I put in a check to not launch a new thread if one is still running, and the application then froze upon calling the above function.
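For reference, the "don't launch a new thread if one is still running" check was roughly shaped like this (a sketch; OnTimerElapsed and ProcessInvoices are stand-in names for the service's actual timer callback and work):

private int running; // 0 = idle, 1 = busy

private void OnTimerElapsed(object state)
{
    // Skip this tick if the previous cycle is still in progress
    if (Interlocked.CompareExchange(ref running, 1, 0) != 0)
        return;
    try
    {
        ProcessInvoices(); // the CR export work shown above
    }
    finally
    {
        Interlocked.Exchange(ref running, 0);
    }
}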
The getter that the return db.SAP above calls contains literally nothing other than a return statement.
Alright, the problem is more or less solved. For some reason, the getters in the COM object I was fetching the login credentials from freeze if accessed from a service, but not from a non-service application. Testing this COM-object-stuffed-into-wrapper-class-stuffed-into-Dictionary setup in an IIS application also yielded no freezes. I have no idea why, and short of SAP sharing the source code for said COM object, I'm unlikely to ever find out. So I simply declared a few string fields for storing the credentials and cut out accessing the COM object entirely, since I didn't need the object itself, only its fields.
I have integrated Pay with Amazon into my web app, but I have found that capturing funds only works when I step through the code in the debugger, and does not happen if I don't have a break-point set. To me, this indicates that a pause is necessary. I am using recurring payments. The relevant section of code is below:
...
// make checkout object
AmazonAutomaticSimpleCheckout asc = new AmazonAutomaticSimpleCheckout(billingAgreeementId);

// capture
CaptureResponse cr = asc.Capture(authId, amount, 1);

// check if capture was successful
if (cr.CaptureResult.CaptureDetails.CaptureStatus.State == PaymentStatus.COMPLETED)
{
    ...
    // give the user the things they paid for in the database
    ...
    return "success";
}
...
So, if I have a break-point at the capture line under //capture, the function returns success. If I do not have the break-point, I get a runtime exception System.NullReferenceException: Object reference not set to an instance of an object on the if statement that checks the capture status.
To me, this implies that I should be able to await the capture method.
Also note that the Capture(...) method calls the CaptureAction(...) method, just as the C# sample does.
// Invoke the Capture method
public CaptureResponse Capture(string authId, string captureAmount, int indicator)
{
    return CaptureAction(propertiesCollection, service, authId, captureAmount, billingAgreementId, indicator, null, null);
}
How can I await the capture call? Am I forgetting to pass a parameter to indicate that it should execute the operation immediately?
It seems, after some experimentation, that a function which essentially achieves the wait I was performing manually with a break-point is CheckAuthorizationStatus(), which is also in the C# sample provided with the documentation.
So the fixed code simply calls CheckAuthorizationStatus() before calling Capture(). CheckAuthorizationStatus() apparently loops until the state of the authorization changes. This seems somewhat kludgey to me, but it appears to be how the Pay with Amazon APIs are meant to be used, as best I can tell. Corrected code below:
// make checkout object
AmazonAutomaticSimpleCheckout asc = new AmazonAutomaticSimpleCheckout(billingAgreeementId);

// wait for the authorization to settle, then capture
CaptureResponse cr;
GetAuthorizationDetailsResponse gadr = asc.CheckAuthorizationStatus(authId);
cr = asc.Capture(authId, amount, 1);
//gadr = asc.CheckAuthorizationStatus(authId);

// check if capture was successful
if (cr.CaptureResult.CaptureDetails.CaptureStatus.State == PaymentStatus.COMPLETED)
{
    ...
    return "success";
}
When using asynchronous mode, you will typically rely on one of two ways of handling it. The result of AuthorizeOnBillingAgreement will return an Amazon authorization Id (e.g. P01-1234567-1234567-A000001). Once you have the authorization Id you can:
1. Poll GetAuthorizationDetails - this returns the authorization details, which contain the "State" of the authorization. When the state is "Open" you can make the Capture API call, passing in the authorization Id.
2. Wait for the Instant Payment Notification (IPN). If you have an IPN handler, you can watch for it and make the Capture API call as described in option 1. The IPN is usually sent within 60 seconds and will have the final processing status (Open or Declined).
You shouldn't add an arbitrary pause. You should always check the state of the authorization before making the capture; even if the payment status is completed, you still need to check the state.
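A rough sketch of option 1 (GetAuthorizationState is a hypothetical helper around the GetAuthorizationDetails call; the real response property path will vary by SDK version):

CaptureResponse cr = null;
for (int attempt = 0; attempt < 30; attempt++) // don't poll forever
{
    string state = GetAuthorizationState(authId); // hypothetical wrapper around GetAuthorizationDetails
    if (state == "Open")
    {
        cr = asc.Capture(authId, amount, 1); // safe to capture now
        break;
    }
    if (state == "Declined" || state == "Closed")
        break; // terminal state; no capture possible
    System.Threading.Thread.Sleep(2000); // still Pending: wait and re-check
}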
Disclaimer:
I don't implement recurring payments, only a straightforward payment - though just reading the documentation, it seems similar, or at least there is a synchronous option.
Because it meets my requirements, I opt for the synchronous process - in essence treating it like a "payment gateway": give me the result "now" and I'll deal with whatever the result is.
Additionally, AUTH and CAPTURE happen in one step - again, this depends on one's operational requirements.
The 2 related items are:
- CaptureNow=true
- TransactionTimeout=0 (a value of zero always returns a synchronous Open or Declined)
You'll get (synchronously):
- AuthorizeResult.AuthorizationDetails, which will have AmazonAuthorizationId, AuthorizationAmount, etc.
- AuthorizeResult.AuthorizationDetails.IdList
  - null on failure
  - otherwise it will contain the capture id (if the capture was successful)
- AuthorizeResult.AuthorizationDetails.IdList.member - I've only seen this contain 1 item (the CaptureId)
You can then use the CaptureId to call GetCaptureDetails and do what you need to do after parsing the GetCaptureDetailsResponse
Again, the above is based on the Payments API flow (not recurring payments / Billing Agreement), so I hope it at least helps or gives you an avenue/idea for testing the synchronous option.
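To make the flow concrete, here is a rough sketch based on the Off-Amazon Payments C# SDK's Authorize call; the model and property names follow the Authorize API parameters and may differ between SDK versions, so treat them all as assumptions:

AuthorizeRequest request = new AuthorizeRequest();
request.SellerId = sellerId;                         // assumed merchant credentials
request.AmazonOrderReferenceId = orderReferenceId;   // from the checkout widget
request.AuthorizationReferenceId = Guid.NewGuid().ToString("N"); // must be unique per attempt
request.AuthorizationAmount = new Price { Amount = "10.00", CurrencyCode = "USD" };
request.CaptureNow = true;       // AUTH and CAPTURE in one step
request.TransactionTimeout = 0;  // 0 = synchronous Open or Declined

AuthorizeResponse response = service.Authorize(request);
AuthorizationDetails details = response.AuthorizeResult.AuthorizationDetails;
if (details.IdList != null && details.IdList.member.Count > 0)
{
    string captureId = details.IdList.member[0]; // feed this to GetCaptureDetails
}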
We use the EWS Managed API to sync our CRM with the Exchange Server. As long as I used EWS Managed API 1.1, everything worked perfectly. Now I have updated to API 2.0 (DLL version 15.0.516.14) and I'm getting an ArgumentException if I bind to the same folder from different threads, and I don't understand why.
Here's sample code that raises the exception:
private void TestAsyncFolderGet()
{
    try
    {
        ExchangeService service = this.GetService();
        Parallel.For(0, 20, (i) =>
        {
            Folder fo = Folder.Bind(service, WellKnownFolderName.Inbox);
        });
    }
    catch (Exception ex)
    {
        this.State = "Failed: " + ex.Message;
    }
}
private ExchangeService GetService()
{
    ExchangeService result = new ExchangeService(ExchangeVersion.Exchange2010);
    result.AutodiscoverUrl("test@foo.com");
    return result;
}
My real scenario is that I'm getting changed items using a pull subscription and handling the changes asynchronously. While doing this, I'm binding to the parent folder to get some information.
Can anyone help me avoid the exception?
Stack trace and exception info:
System.ArgumentException: An item with the same key has already been added.
   at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
   at Microsoft.Exchange.WebServices.Data.ExchangeServiceBase.SaveHttpResponseHeaders(WebHeaderCollection headers)
   at Microsoft.Exchange.WebServices.Data.SimpleServiceRequestBase.ReadResponse(IEwsHttpWebResponse response)
   at Microsoft.Exchange.WebServices.Data.ExchangeService.InternalFindFolders(IEnumerable`1 parentFolderIds, SearchFilter searchFilter, FolderView view, ServiceErrorHandling errorHandlingMode)
   at Microsoft.Exchange.WebServices.Data.ExchangeService.FindFolders(FolderId parentFolderId, FolderView view)
I made a support call to Microsoft and got this answer...
I am from the Messaging Developer Support team and have now taken ownership of this case. I’ve taken a look at the issue as you have described it in the forums, and based on the sample code there, the simple answer is that ExchangeService is not guaranteed to be thread safe except as a public static member (see http://msdn.microsoft.com/en-us/library/microsoft.exchange.webservices.data.exchangeservice(v=exchg.80).aspx ).
There are various techniques you can use to avoid the issue. You could use an ExchangeService for each thread, though this may not be advisable if you have lots of threads running at once as you may well hit throttling limits (each service instance may result in a new session on the server). You could implement a cache for folder objects, so that if different threads request the same object, the cache object can return it if it has already been requested (this would also increase performance as it would reduce requests to the server).
An important point to note is that as EWS is a web application, you should use multi-threading carefully, and minimise the number of worker threads. If each of the worker threads is generating requests to the Exchange server, then you are unlikely to gain much in performance terms as compared to using one worker thread, as you will be waiting on the response from Exchange.
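For the "ExchangeService per thread" technique mentioned above, a minimal sketch (my assumption, not Microsoft's code) could use ThreadLocal so each worker thread lazily gets its own instance, keeping the throttling caveat in mind since each instance may open a new session on the server:

private static readonly ThreadLocal<ExchangeService> ServicePerThread =
    new ThreadLocal<ExchangeService>(() =>
    {
        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010);
        service.AutodiscoverUrl("test@foo.com");
        return service;
    });

// inside Parallel.For, each thread now binds with its own service:
Folder fo = Folder.Bind(ServicePerThread.Value, WellKnownFolderName.Inbox);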
So the solution in my case was to create a class called "SafeExecuter", which takes care that only one call to Exchange per user is made at a time. It also takes care that the throttling policy is not exceeded.
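A rough sketch of the idea, with one SemaphoreSlim per user so that at most one EWS call per user is in flight at a time (the class name comes from above; this implementation is my assumption, and a real version would also add the throttling-policy delays):

public class SafeExecuter
{
    private readonly ConcurrentDictionary<string, SemaphoreSlim> locks =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public T Execute<T>(string user, Func<T> ewsCall)
    {
        SemaphoreSlim gate = locks.GetOrAdd(user, _ => new SemaphoreSlim(1, 1));
        gate.Wait(); // serialize calls for this user
        try
        {
            // e.g. () => Folder.Bind(service, WellKnownFolderName.Inbox)
            return ewsCall();
        }
        finally
        {
            gate.Release();
        }
    }
}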
We have some basic C# logic that iterates over a directory and returns the folders and files within. When run against a network share (\\server\share\folder) that is inaccessible or invalid, the code seems to 'hang' for about 30 seconds before returning back from the call.
I'd like to end up with a method that will attempt to get folders and files from the given path, but without the timeout period. In other words, to reduce or eliminate the timeout altogether.
I've tried something as simple as validating the existence of the directory ahead of time thinking that an 'unavailable' network drive would quickly return false, but that did not work as expected.
System.IO.Directory.Exists(path);                               // hangs
System.IO.DirectoryInfo di = new System.IO.DirectoryInfo(path); // hangs
Any suggestions on what may help me achieve an efficient (and hopefully managed) solution?
You can use this code:
// Do the existence check on a worker thread and wait at most 100 ms for it
var task = new Task<bool>(() => { var fi = new FileInfo(uri.LocalPath); return fi.Exists; });
task.Start();
return task.Wait(100) && task.Result; // false if the check timed out or the path doesn't exist
Place it on its own thread; if it doesn't come back in a certain amount of time, move on.
Perhaps you could try pinging the server first, and only ask for the directory info if you get a response?
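A minimal sketch of that idea, extracting the host name from the UNC path and pinging it with a short timeout (assumes ICMP is allowed on the network; the path and timeout are placeholders):

using System.Net.NetworkInformation;

static bool HostResponds(string uncPath, int timeoutMs)
{
    // \\server\share\folder -> "server"
    string host = uncPath.TrimStart('\\').Split('\\')[0];
    try
    {
        using (var ping = new Ping())
        {
            PingReply reply = ping.Send(host, timeoutMs);
            return reply.Status == IPStatus.Success;
        }
    }
    catch (PingException)
    {
        return false; // e.g. the host name could not be resolved
    }
}

// usage: only touch the share if the server answers quickly
if (HostResponds(@"\\server\share\folder", 500) && System.IO.Directory.Exists(path))
{
    // enumerate folders and files here
}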
See "Faster DirectoryExists function?" for a way of setting the execution time for Directory.Exists.