Call webservice multiple times - c#

In a database I have 30k users, each with a phone number. For each phone number I call a webservice that pulls some information for the user. Many users are not present in the webservice, so I just receive null, but I don't know which ones in advance, and new users can appear from time to time. The webservice is updated in real time, so new results can arrive from minute to minute.
If the response is not null, and the received file is not the same as the one received last time, I create a PDF document from the received XML file.
The webservice calls are started by a scheduled task that requests an .aspx page with the following pseudo-code:
foreach (string phonenumber in phonenumbers)
{
    // one synchronous request per phone number
    HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("webservice/" + phonenumber);
    using (WebResponse webResponse = webRequest.GetResponse())
    using (StreamReader reader = new StreamReader(webResponse.GetResponseStream()))
    {
        string xml = reader.ReadToEnd();
        makePdf(xml);
    }
}
The problem is, of course, that the requests take forever. For 30k users it takes about 7 hours. I have tried looking at asynchronous webservice calls, but couldn't get anything to work. Can someone point me in the right direction, if possible, or tell me how I should go about this?
Thanks

First you should check whether the time is spent in the webservice or in the PDF creation. If it is in the webservice, check whether you are missing indexes, etc. To improve the webservice side you can think about getting multiple phone numbers at the same time (a parallel-request sketch follows below). If your problem is in the PDF creation, find a way to speed up the generation of the PDFs. It's not clear what tools you are using to create a PDF at the moment.
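If you cannot change the webservice itself, one way to cut the wall-clock time is to keep the existing one-call-per-number design but issue the calls concurrently with a bounded degree of parallelism. Below is a minimal sketch, assuming a .NET runtime with async/await and HttpClient available; the endpoint URL, the concurrency limit of 10, and the MakePdf stub are placeholders standing in for the code in the question.
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class ParallelFetcher
{
    // Limit concurrency so 30k requests do not hit the service all at once.
    static readonly SemaphoreSlim throttle = new SemaphoreSlim(10);
    static readonly HttpClient client = new HttpClient();

    public static async Task FetchAllAsync(IEnumerable<string> phonenumbers)
    {
        await Task.WhenAll(phonenumbers.Select(FetchOneAsync));
    }

    static async Task FetchOneAsync(string phonenumber)
    {
        await throttle.WaitAsync();
        try
        {
            // Placeholder base URL; substitute the real webservice address.
            string xml = await client.GetStringAsync("http://example.com/webservice/" + phonenumber);
            if (!string.IsNullOrEmpty(xml))
                MakePdf(xml);   // same PDF step as in the question
        }
        finally
        {
            throttle.Release();
        }
    }

    static void MakePdf(string xml) { /* existing PDF creation */ }
}
How much this helps depends on how many concurrent requests the webservice tolerates; raise or lower the semaphore count accordingly.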

Why can't your service just implement a method that accepts a timestamp parameter and returns a list (or array) of the users that have changed since that timestamp? That way you should be able to send requests only for those users.
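A rough sketch of what such a contract could look like, assuming you can extend the service; the names IUserLookupService, GetUsersChangedSince, and UserInfo are hypothetical and only illustrate the shape of the call.
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical delta contract: one call returns every user changed since the given timestamp.
[ServiceContract]
public interface IUserLookupService
{
    [OperationContract]
    List<UserInfo> GetUsersChangedSince(DateTime sinceUtc);
}

[DataContract]
public class UserInfo
{
    [DataMember] public string PhoneNumber { get; set; }
    [DataMember] public string Xml { get; set; }              // payload the PDF is built from
    [DataMember] public DateTime LastModifiedUtc { get; set; }
}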

Is it possible to access parameters sent via multipart/form-data without waiting for the entire form to arrive

I have a pretty big video file I upload to a web service via multipart/form-data.
It takes ~ 30 seconds to arrive and I would prefer not waiting that long simply to access parameters I send along with the file.
My question is simple, can I access parameters sent with the form without waiting for the video payload to be uploaded?
Can this be done using headers or any other methods? 
Streaming vs. Buffering
It's about how the webserver is set up. For IIS you can enable Streaming.
Otherwise, by default, IIS will use 'buffering' - the whole request is loaded into memory first (IIS's memory that you can't get to) before your app running in IIS can get it.
Not using IIS? You have to figure out how to get the webserver to do the same thing.
How to stream using IIS:
Streaming large file uploads to ASP.NET MVC
Note the way the file is read in the inner loop:
// Read the request body in chunks as it arrives, instead of waiting for the whole upload.
while ((cbRead = clientRequest.InputStream.Read(rgbBody, 0, rgbBody.Length)) > 0)
{
    // Write (or parse) each chunk as soon as it has been received.
    fileStream.Write(rgbBody, 0, cbRead);
}
Here, instead of just saving the data as that question does, you will have to parse whatever xml/json/etc. contains the file parameters you speak of, and expect the video to be sent afterwards. You can process the parameters right away if it's a quick process, then read the rest of the video, or you can hand them off to a background thread.
You probably won't be able to parse it by just dumping what you have into a JSON or XML parser: there will be a tag or } opened at the top that isn't closed until after the video data has been uploaded (however that is done). Or, if it's multipart data from a form submission, as you imply, you will have to parse that partial upload yourself instead of just asking IIS for the post data.
So this will be tricky. You can start by writing 1k at a time to a log file with a timestamp, to prove that you're getting the data as it comes; after that it's just a coding headache.
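A minimal sketch of that first proof step might look like the following, assuming streaming is enabled as described above and that clientRequest is the incoming HttpRequest from the quoted answer; the log path is a placeholder.
using System;
using System.IO;
using System.Web;

public static class UploadTrace
{
    // Prove the body arrives incrementally by logging a timestamp for every chunk read.
    public static void LogIncomingChunks(HttpRequest clientRequest)
    {
        byte[] buffer = new byte[1024];
        int bytesRead;
        using (var log = new StreamWriter(@"C:\temp\upload-trace.log", append: true))
        {
            while ((bytesRead = clientRequest.InputStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                log.WriteLine("{0:O}  received {1} bytes", DateTime.UtcNow, bytesRead);
                // later: scan each chunk for the form-data parts you care about
            }
        }
    }
}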
Getting this to work also means you'll have to have some control over the client and how it sends the data.
That's because you'll at least have to ensure it sends the file parameters FIRST!
Which concerns me, because if you have control of the client, why can't you take the simple route (as Nobody and Nkosi imply) and use 2 requests? You mention you need one. Why not write JS client code to send the parameters first in an XHR and then the file in a second request, using a correlation ID in both to tie them together? (The server could return the ID from the first request and you could send it with the second.)
Obviously, if you just have a form with some inputs and a file upload and do a plain submit, then you need one request ;-) But if you have control over the client side, you're not stuck with that.
Good luck, there is some advanced programming here, but nothing super high-tech. You will make it work!!!
If you don't have control over the server code, you are probably stuck: if the server app's webserver is buffering, the server app won't see anything until the upload completes. Of course, wanting to do something with the file parameters first really implies you have control of the server side ;-)

Interesting Issue found when using WCF Data Service Client Library to query data from a WCF Data Service

I have a simple data model with 3 tables (Account, Contact, and User) with the following relationships:
User -> Account (1 - Many)
Account -> Contact (Many - 1)
I am exposing my data via an OData (v3) WCF Data Service, which is consumed by a .NET client that uses the WCF Data Service Client Library. I used the Add Service utility to generate the client proxy code to call the data service.
All methods in the client class use the class's single DataServiceContext object for calling the web service, i.e.:
DC.WhEntities svcClient = new DC.WhEntities(new Uri(BaseUrl));
What I am having a hard time trying to figure out is why the same query request to the service starts failing after the 6th time. I have literally tried all possible ways to construct a call to the data service:
First approach:
DataServiceQuery<DC.User> users = svcClient.Users.Expand("Accounts");
QueryOperationResponse<DC.User> response = users.Execute() as QueryOperationResponse<DC.User>;
var user = response.FirstOrDefault(u => u.Id == long.Parse(key.ToString()));
Second approach:
string queryString = string.Format("Users({0}L)?$expand=Accounts", key.ToString());
var response = svcClient.Execute<DC.User>(new Uri(queryString, UriKind.Relative)); // run the hand-built query against the same context
foreach (var user in response) {...}
The last statement in both of the above approaches starts failing with the message below after it has executed successfully 6 times in a row:
The response payload is a not a valid response payload. Please make sure that the top level element is a valid Atom element or belongs to 'http://schemas.microsoft.com/ado/2007/08/dataservices' namespace.
StackTrace:
at System.Data.Services.Client.Materialization.ODataMaterializer.CreateODataMessageReader(IODataResponseMessage responseMessage, ResponseInfo responseInfo, Boolean projectionQuery, ODataPayloadKind& payloadKind)
at System.Data.Services.Client.Materialization.ODataMaterializer.CreateMaterializerForMessage(IODataResponseMessage responseMessage, ResponseInfo responseInfo, Type materializerType, QueryComponents queryComponents, ProjectionPlan plan, ODataPayloadKind payloadKind)
at System.Data.Services.Client.DataServiceRequest.Materialize(ResponseInfo responseInfo, QueryComponents queryComponents, ProjectionPlan plan, String contentType, IODataResponseMessage message, ODataPayloadKind expectedPayloadKind)
at System.Data.Services.Client.QueryResult.ProcessResult[TElement](ProjectionPlan plan)
at System.Data.Services.Client.DataServiceRequest.Execute[TElement](DataServiceContext context, QueryComponents queryComponents)
When this happens, my WCF Data Service just stops working and returns a response with
error on line 1 at column 83: Unescaped '<' not allowed in attributes values.
I am not sure whether I am missing something fundamental, whether I'm constructing the WCF Data Service client request incorrectly, or whether there is something on the WCF Data Service side that doesn't like the same client requesting the same thing more than 6 times.
I've already spent a few days, 3+ days in fact, trying to figure this out. I am new to WCF Data Services and thought I could learn from this tutorial, but so far I've had more pain than gain.
I am experiencing a similar issue: my server suddenly started to return bad responses (maybe some update caused this, but the exact cause is unknown). If I start my server it works for some time, i.e. it responds to a few requests normally, and then starts to break the XML structure of the OData feeds, resulting in a "'<', hexadecimal value 0x3C, is an invalid attribute character. Line 2, position 72." exception.
SOLUTION:
I solved the problem by following this thread.
If you have WCF tracing configured, make sure message logging at the transport level is turned off (logMessagesAtTransportLevel="false"), otherwise you will experience this issue.
I tried setting logMessagesAtTransportLevel to false and still got the error.
Then I remembered seeing this issue before when I had an assembly conflict. I went and created a brand-new service, and that solved my problem even with logMessagesAtTransportLevel set to true on my client. This confirmed to me that the problem was in the service.
Although my solution solved my problem, I still don't know the exact cause, and I've already run out of time to track it down. However, it is good to see that people are willing to help out, and I truly appreciate the effort.
Thanks everyone again for your help.
Qster123.

Access Unshipped Orders on Amazon MWS in C#

This is our first time using Amazon MWS (or any API, for that matter) and we want to pull all of the unshipped orders from our seller account. We've tried many different approaches (RequestReportRequest, following this link: http://www.amazonsellercommunity.com/forums/message.jspa?messageID=2370410, and more) but none seem to work. Is there a simple way to access our unshipped orders using C#?
Thanks for the help.
Should be the same in all supported languages.
You can request a report using the RequestReport API operation with ReportType set to _GET_FLAT_FILE_ACTIONABLE_ORDER_DATA_. In the response you get a ReportRequestId, which you store.
Next you periodically check the status of your report request by calling the GetReportRequestList operation, with the parameter ReportRequestIdList containing your ReportRequestId. The response tells you the ReportProcessingStatus of the report request. According to the Seller Central webpage, it can take up to 45 minutes to finish a report.
Once the ReportProcessingStatus is _DONE_, you need to get the ReportId. For this purpose you use the GetReportList operation with the parameter ReportRequestIdList set to your ReportRequestId. The response contains the ReportId.
Finally, you get your report by calling GetReport with the reportId you got in step 3.
For more details, have a look at the MWS API reference.
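Putting the four steps together, a rough sketch using the official MWS C# client library could look like the code below. The class and property names are taken from that SDK as I remember it and may differ slightly between SDK versions; the credentials, seller ID, app name, and file name are placeholders, and real code should add error handling and respect the MWS request throttling limits.
using System;
using System.IO;
using System.Threading;
using MarketplaceWebService;
using MarketplaceWebService.Model;

class UnshippedOrdersReport
{
    static void Main()
    {
        // Placeholders - use your own MWS credentials and seller id.
        const string accessKey = "YOUR_AWS_ACCESS_KEY";
        const string secretKey = "YOUR_SECRET_KEY";
        const string sellerId  = "YOUR_SELLER_ID";

        var config = new MarketplaceWebServiceConfig { ServiceURL = "https://mws.amazonservices.com" };
        var client = new MarketplaceWebServiceClient(accessKey, secretKey, "MyApp", "1.0", config);

        // Step 1: request the unshipped-orders report and store the ReportRequestId.
        var requestReport = new RequestReportRequest
        {
            Merchant = sellerId,
            ReportType = "_GET_FLAT_FILE_ACTIONABLE_ORDER_DATA_"
        };
        string reportRequestId =
            client.RequestReport(requestReport).RequestReportResult.ReportRequestInfo.ReportRequestId;

        // Step 2: poll GetReportRequestList until the request is done (can take up to ~45 minutes).
        string status;
        do
        {
            Thread.Sleep(TimeSpan.FromMinutes(1));
            var statusRequest = new GetReportRequestListRequest { Merchant = sellerId, ReportRequestIdList = new IdList() };
            statusRequest.ReportRequestIdList.Id.Add(reportRequestId);
            status = client.GetReportRequestList(statusRequest)
                           .GetReportRequestListResult.ReportRequestInfo[0].ReportProcessingStatus;
        } while (status != "_DONE_");

        // Step 3: look up the ReportId that belongs to the finished request.
        var listRequest = new GetReportListRequest { Merchant = sellerId, ReportRequestIdList = new IdList() };
        listRequest.ReportRequestIdList.Id.Add(reportRequestId);
        string reportId = client.GetReportList(listRequest).GetReportListResult.ReportInfo[0].ReportId;

        // Step 4: download the report contents into a local flat file.
        using (var reportFile = File.Open("unshipped-orders.txt", FileMode.Create))
        {
            var getReport = new GetReportRequest { Merchant = sellerId, ReportId = reportId, Report = reportFile };
            client.GetReport(getReport);
        }
    }
}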

Web Services. Get input data, process it at background thread

I've got several web services (ASMX and WCF). A couple of them have methods that take a long time to process, although the input data for these methods is small and doesn't take much time to transfer over the wire. I want to move to a non-synchronous model: the client passes data to the service, the service replies that the data transfer was correct, and then processes the data on a background thread without keeping the connection to the client. So after the transfer, the connection should be closed. Is this possible? Can you point me to articles, or even just suggest what to search for?
John is right: once you close an HTTP connection, it is done. You can't get back to the same process.
So if you can use another technology that allows duplex on one connection (e.g. WCF), do it!
However, if you have no choice but to use web services, here are three ways to make it work. You may get timeouts on any of them.
Option 1:
Forget the part about 'the service answers that the data transfer was correct.' Just have each client thread make its request and wait for the data.
Option 2:
Now, assuming that won't work and you must do the validation, this way requires the client to make 2 requests.
First request: returns valid/invalid.
Second request: returns the long-running results.
Variation of option 2:
If you have timeout problems, you could have the first request generate a GUID or unique database key, start another process, pass that process the key, and return the key to the client. (Whether you can start a process on the server depends on security settings/needs; if not, you may be able to start an async thread and have it keep running after the web service call ends.) The process does the long task and, when finished, updates the row in the database for that unique id with the results plus a 'done' flag. The second request from the client can always return immediately: if the processing is not done, it says so; if it is, it returns the results. The client repeats this every 5 seconds or so until done (a sketch follows below).
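A bare-bones sketch of that variation, assuming an ASMX service on .NET 4.5+; the method names StartJob and GetJobStatus are made up for illustration, and the in-memory dictionary stands in for the persistent database row you would use in practice (which must survive app-pool recycles).
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Services;

public class LongJobService : WebService
{
    // Stand-in for the database row keyed by the GUID: id -> (done, result).
    static readonly ConcurrentDictionary<Guid, Tuple<bool, string>> jobs =
        new ConcurrentDictionary<Guid, Tuple<bool, string>>();

    [WebMethod]
    public string StartJob(string inputData)
    {
        // Validate inputData here and return an error instead of an id if it is bad.
        Guid id = Guid.NewGuid();
        jobs[id] = Tuple.Create(false, (string)null);

        // Kick off the long-running work without holding the HTTP connection open.
        Task.Run(() =>
        {
            string result = DoLongRunningWork(inputData);
            jobs[id] = Tuple.Create(true, result);
        });

        return id.ToString();   // the client polls with this key
    }

    [WebMethod]
    public string GetJobStatus(string jobId)
    {
        Tuple<bool, string> state;
        if (!jobs.TryGetValue(Guid.Parse(jobId), out state)) return "UNKNOWN";
        return state.Item1 ? "DONE: " + state.Item2 : "PROCESSING";
    }

    static string DoLongRunningWork(string input)
    {
        // ... the slow processing goes here ...
        return "processed " + input;
    }
}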
Hacks, I know, but we don't always have a choice for the technology we use.
Don't do this with ASMX web services. They weren't designed for that. If you must do it with ASMX, then have the ASMX pass the data off to a Windows Service that will do the actual work, in the background.
This is more practical with WCF.
We have been writing stuff to interact with the UK gov website and the way they handle something similar is that you send your request and data to the server and it responds saying, roughly, "thanks very much - we're processing it now, please call back later using this id" - all in an XML message. You then, at some point later, send a new http request to the service saying, essentially, "I'm enquiring about the status of this particular request id" and the server returns a result that says either it has processed OK, or processed with errors, or is still processing, please try again in xx seconds.
Similar to option 2 described previously.
It's a polling solution rather than a callback or 2 way conversation but it seems to work.
The server will need to keep, or have access to, some form of persistent table or log of each request's state. It can contain, e.g., the id, the original request, the current stage in the workflow, any error messages so far, the result (if any), and so on (see the sketch below). And the web service should probably hand the bulk of the work off to a separate Windows service, as already mentioned.
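As an illustration only, the per-request state record could be shaped roughly like this; the field names are made up and would map to columns in the persistent table.
using System;

// Illustrative shape of the persistent per-request state described above.
public class RequestState
{
    public Guid RequestId { get; set; }        // the id handed back to the client
    public string OriginalRequest { get; set; }
    public string CurrentStage { get; set; }   // e.g. "Queued", "Processing", "Done", "Failed"
    public string ErrorMessages { get; set; }
    public string Result { get; set; }         // populated when processing finishes
    public DateTime LastUpdatedUtc { get; set; }
}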

Get Tweets of all users using TweetSharpAPI

I have implemented a method which manually scrapes the Twitter search page and gets the tweets from the different result pages.
But since there is a fast refresh rate, the method triggers an exception.
Therefore I have decided to use the TweetSharp API instead:
var search = FluentTwitter.CreateRequest()
    .AuthenticateAs(TWITTER_USERNAME, TWITTER_PASSWORD)
    .Users()
    .SearchFor("dumbledore");
var result = search.Request();
var users = result.AsUsers();
This code was on the site.
Does anyone know how I can avoid giving my credentials, and how to retrieve tweets from all users rather than just the ones I have as friends?
Thanks!
What you want to do is interface with the Twitter Streaming API. This API allows you to open a persistent connection with Twitter and Twitter will then stream results to you as they come in.
[Diagram of the streaming connection architecture, taken from the Twitter Streaming API page]
That said, TweetSharp doesn't currently support the Streaming API. However, it's not difficult to open a connection to Twitter in .NET and process the responses as they're received (I'd recommend using the HttpClient class to do this asynchronously, as well as a proper JSON parsing library like Json.NET).
Note the third column in the diagram "Streaming connection process", specifically the middle part:
Receives streamed Tweets, performs processing and stores result
As well as the "HTTP Server process" column:
Server pulls processed result from data store and renders view.
While not explicitly mentioned, you are best off just persisting each Tweet to a data store as you receive it and having another process handle the Tweets; the volume of Tweets you might get is high enough that doing any processing as each Tweet arrives will back up the receiving of new Tweets.
For your specific case, you'll want to access the Public Streams with a POST filter of "dumbledore".
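A rough sketch of that approach, assuming the statuses/filter streaming endpoint and Json.NET; the OAuth signing that the streaming API requires is omitted here, so a real client would add an Authorization header (or use an OAuth library) before sending the request.
using System;
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class TwitterStreamReader
{
    static async Task Main()
    {
        using (var client = new HttpClient { Timeout = Timeout.InfiniteTimeSpan })
        {
            // Public stream filtered on the keyword; the OAuth header is omitted in this sketch.
            var request = new HttpRequestMessage(HttpMethod.Post,
                "https://stream.twitter.com/1.1/statuses/filter.json")
            {
                Content = new StringContent("track=dumbledore",
                    System.Text.Encoding.UTF8, "application/x-www-form-urlencoded")
            };
            // request.Headers.Authorization = ...;  // OAuth 1.0a signature goes here

            using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead))
            using (var stream = await response.Content.ReadAsStreamAsync())
            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = await reader.ReadLineAsync()) != null)
                {
                    if (string.IsNullOrWhiteSpace(line)) continue;  // keep-alive newlines

                    // Persist first, process in another component, as recommended above.
                    JObject tweet = JObject.Parse(line);
                    Console.WriteLine(tweet["text"]);
                }
            }
        }
    }
}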
