I am currently working on a project where I think SOAP would be a good fit, but I can't figure out how to make it work in the way that I need.
I have a C# console application called ConsoleApp, and ConsoleApp will also have a PHP web interface. What I'm thinking is that the PHP web interface controls ConsoleApp in some way: I click a button on the web interface, this sends a SOAP request to a SOAP service, the SOAP service passes the information on to ConsoleApp, and the result is returned to the SOAP service and then back to PHP.
This seems like it would need two separate SOAP services, one for PHP to interface with and one within ConsoleApp, but that doesn't sound right; I think I might be misunderstanding the purpose of SOAP.
How can this be achieved? Thanks for any help you can provide.
UPDATE
As requested I thought I'd add a bit more information on what I am trying to achieve.
The console app acts as an email server: emails are handed to the program and sent on, and if one can't be sent it retries a couple of times until the email goes into a failed state.
The web interface will provide a status of what the email server is doing, i.e. how many emails are incoming, how many are yet to be processed, how many have been sent and how many have failed.
From the web page you will be able to shut down or restart the email server, or put one of the failed emails back into the queue to be processed.
The idea is that when the user adds a failed email back into the queue, the web interface sends a SOAP message that the console app receives; the app adds the information back into the queue, logs the event in its log file, and increments the counter it uses to keep track of emails that need to be processed. Once this has been done it should send a response back to the web interface saying whether the email was successfully added back into the queue or failed for some reason.
I don't really want to keep polling the database every few seconds, as there is the potential for a large number of emails to be in flight, and polling would put a heavy load on the MySQL server, which I want to avoid. That's why I thought of SOAP: the email server would only need to do something when it receives a SOAP request telling it to.
Thanks for any help.
Every web service is going to need a client (in your case PHP) and a server (ConsoleApp). Even though there are two endpoints, it is still one web service. Your PHP will send a SOAP request which ConsoleApp will receive, process and respond to with a SOAP response.
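One way to get that single web service is to let ConsoleApp self-host the SOAP endpoint itself with WCF; the PHP side can then call it with PHP's built-in SoapClient. A minimal sketch, where the contract, operation names and address are my own inventions to match your scenario:

    using System;
    using System.ServiceModel;

    // Illustrative contract; the operation name and types are assumptions.
    [ServiceContract]
    public interface IEmailControl
    {
        [OperationContract]
        bool RequeueFailedEmail(int emailId);
    }

    public class EmailControl : IEmailControl
    {
        public bool RequeueFailedEmail(int emailId)
        {
            // Re-add the email to the queue, log the event, bump the counter...
            return true;
        }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(EmailControl),
                new Uri("http://localhost:8080/EmailControl"));
            host.AddServiceEndpoint(typeof(IEmailControl), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("SOAP endpoint listening; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }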
So when someone clicks the button on the web page, you can use JavaScript to build and send the SOAP envelope in the browser. The alternative is to POST the values to a PHP page that will build and send the SOAP.
I have to admit though, your scenario sounds a little unusual. I personally haven't heard of web pages talking directly with console apps. Web pages usually talk to web servers, and the servers are usually the ones issuing atypical requests, like your request to ConsoleApp. While it is technically possible, I think it is going to be harder than you are expecting.
Personally, I would ditch SOAP in favor of a much simpler and more scalable solution. Assuming you have access to a database, I would have the PHP create a record in the database when the user clicks the button. ConsoleApp would then poll the database every X seconds to look for new records. When it finds a new record, it processes it.
This has the benefit of being simple (database access is almost always easier than SOAP) and scalable (you could easily run an arbitrary number of ConsoleApps to process all of the incoming requests if you are expecting heavy loads). Also, neither the PHP page nor the ConsoleApp have a direct dependency on the other so each individual component is less likely to cause a failure in the whole system.
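To make the polling approach concrete, here's a rough sketch of what ConsoleApp's loop could look like. The connection string and the pending_requests table/columns are assumptions; any work-queue schema would do:

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using MySql.Data.MySqlClient; // MySQL Connector/NET

    class Poller
    {
        static void Main()
        {
            const string connStr = "server=localhost;database=mailer;uid=app;pwd=secret"; // assumed
            while (true)
            {
                var ids = new List<int>();
                using (var conn = new MySqlConnection(connStr))
                {
                    conn.Open();
                    var select = new MySqlCommand(
                        "SELECT id FROM pending_requests WHERE processed = 0", conn);
                    using (var reader = select.ExecuteReader())
                        while (reader.Read())
                            ids.Add(reader.GetInt32(0));

                    foreach (var id in ids)
                    {
                        // ... process the request, then mark it handled
                        var update = new MySqlCommand(
                            "UPDATE pending_requests SET processed = 1 WHERE id = @id", conn);
                        update.Parameters.AddWithValue("@id", id);
                        update.ExecuteNonQuery();
                    }
                }
                Thread.Sleep(TimeSpan.FromSeconds(10)); // poll every X seconds
            }
        }
    }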
Related
I have a simple Web API that receives POST messages from geographically dispersed clients.
I have a swathe of WinForms Applications that act as "satellite" applications which send payloads to websites. So these applications maintain state, logins, and other functionality specific to each website they represent.
Theory is:
WebAPI receives a request, and sends it to the specific WinForms application which then forwards on the payloads as per its requirement.
So External Client->Web Api Post->?IPC MECHANISM HERE? Sends Payload->WinForms Delivers Payload->Sends Response back->Back up the pipe and back through WebAPI
Where I am "missing" at the moment is the most efficient and quickest way possible for my POST request to get the message to the respective WinForms application AND get a response.
I have been reading about SignalR. I have been able to create a Hub and send a message to one client (which is all I have for testing right now anyway). BUT, I read this morning that I cannot get "return" messages back from my client.
Some of the SO posts about this are dated, which brings me to my question: is there any way I can achieve this with SignalR?
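The closest I have found so far is the pattern of correlating a server-to-client call with a hub method the client invokes when it has a result; something like this sketch against classic SignalR 2.x (all names here are made up by me):

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class PayloadHub : Hub
    {
        // correlationId -> completion source awaited by the Web API action
        private static readonly ConcurrentDictionary<Guid, TaskCompletionSource<string>> Pending =
            new ConcurrentDictionary<Guid, TaskCompletionSource<string>>();

        // Called from the Web API action: push the payload, then await the reply.
        public static async Task<string> SendAndAwaitResult(string connectionId, string payload)
        {
            var id = Guid.NewGuid();
            var tcs = new TaskCompletionSource<string>();
            Pending[id] = tcs;

            var context = GlobalHost.ConnectionManager.GetHubContext<PayloadHub>();
            context.Clients.Client(connectionId).deliver(id, payload); // handled in WinForms

            // Time out so a dead client doesn't hang the HTTP request forever.
            var finished = await Task.WhenAny(tcs.Task, Task.Delay(TimeSpan.FromSeconds(30)));
            TaskCompletionSource<string> removed;
            Pending.TryRemove(id, out removed);
            return finished == tcs.Task ? tcs.Task.Result : null;
        }

        // The WinForms client calls this hub method with the outcome.
        public void ReportResult(Guid correlationId, string result)
        {
            TaskCompletionSource<string> tcs;
            if (Pending.TryGetValue(correlationId, out tcs))
                tcs.TrySetResult(result);
        }
    }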
As an appendix to the question:
If not, are there alternative bits of tech I should review or pursue instead?
FYI - I did have this all wired in internally to WebAPI itself which then simply responded with the return values, but as requirements have grown - "state"/"logins"/"sessions"/"cookies" need to be maintained which prompted the decision to "externalise" each payload delivery system.
First of all, my apologies for the most likely inappropriate wording of the question. Not knowing exactly how to describe the problem I need to solve has been a major roadblock in my attempts to solve it.
I currently have a web server (Laravel) that needs to communicate with a SQL server in a different network, which only permits outgoing traffic. I made it work by having a C# daemon, running inside the SQL server's network, poll it for data and send it to the server through HTTP POST requests.
However, I now need the web server to communicate with the daemon. Something as simple as:
someone looks up a username on the web server
the server requests the daemon to look it up on the database
the daemon returns whatever information it found to the web server.
What I'm struggling with is finding the best way to do this.
All I need is for the server to be able to push requests to the daemon in real time. The daemon can reply through HTTP POST requests to the server, just like it does already. The best potential solution I have found is WebSockets, but it also sounds like it might be overkill.
Am I missing something or are WebSockets indeed the way to go?
Guzzle is what I use to make HTTP requests to external APIs in Laravel.
WebSockets are mostly used when the user has no control over when responses arrive, for example a chat window. But if you need to, say, click a button and wait for the response, Guzzle is enough.
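That said, your daemon can't receive plain HTTP requests from behind an outbound-only network, so if you do end up needing real-time push, here's a sketch of what the C# daemon side of a WebSocket connection could look like (System.Net.WebSockets; the endpoint URL is an assumption):

    using System;
    using System.Net.WebSockets;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    class DaemonSocket
    {
        static async Task Main()
        {
            using (var socket = new ClientWebSocket())
            {
                await socket.ConnectAsync(new Uri("wss://example.com/daemon"), CancellationToken.None);
                var buffer = new byte[4096];
                while (socket.State == WebSocketState.Open)
                {
                    var result = await socket.ReceiveAsync(
                        new ArraySegment<byte>(buffer), CancellationToken.None);
                    var username = Encoding.UTF8.GetString(buffer, 0, result.Count);
                    Console.WriteLine("Lookup requested for: {0}", username);
                    // Query SQL Server here, then POST the result back over HTTP as before.
                }
            }
        }
    }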
I have already put together code using the System.Net.WebClient class to pull source code from a webpage, which I then run a string search on to get specific information. This in itself works fine, but my issue is that the source code changes every few seconds, and I would like the data I have received to change accordingly. I understand that I could simply set up a loop to repeat the process, but unfortunately my current code takes a full 2.7 seconds to complete, and I would like to avoid that large lag time. In addition, I want to avoid spamming the webpage with requests if possible. I was thinking about a stream read that stays open, so that multiple requests wouldn't have to be sent, but I wasn't entirely sure how to go about this...
So to sum it up, is there a way that I can pull updating information from a website using the System.Net namespace in a manner that is both fast, and avoids spamming the website with requests?
I am afraid the HTTP protocol is not well suited to your real-time data refresh requirement. Other than polling with HTTP requests at regular intervals, you cannot know whether the data has changed on the server and fetch the fresh data.
WebSocket technology, for example, is better suited to these scenarios. Of course, the data provider must implement it so that clients can subscribe to the live feed.
There's also a way to implement this feature over plain HTTP: long polling, sometimes done with a hidden iframe. The idea is that the server uses chunked transfer encoding and sends a continuous stream of data over the socket. The client subscribes to this stream and can be notified of changes occurring on the server. Once again, it's a technology that must be implemented by the server side so that you, as a client, can take advantage of it.
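If the server does stream data like that, the client side in System.Net is straightforward. A sketch, assuming a streaming endpoint exists (which the site you're scraping may not provide):

    using System;
    using System.IO;
    using System.Net;

    class StreamListener
    {
        static void Main()
        {
            // Assumed streaming endpoint; the real site must support this.
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/feed");
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string line;
                // ReadLine blocks until the server pushes another chunk.
                while ((line = reader.ReadLine()) != null)
                    Console.WriteLine("Update: {0}", line);
            }
        }
    }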
If all the server provides is data via an HTML page, you are doomed to do screen scraping by hammering the server with HTTP requests until your IP address gets blacklisted and denied access.
I'm looking to create a web service and an accompanying web app that uses an async web service call. I've seen plenty of suggestions on how to do async calls, but none seem to fit exactly what I'm trying to do, or they use really outdated tech. I'm trying to do this in ASP.NET 3.5 (VS2008).
What I need to do is:
the webpage needs to submit a request to the service
the page then needs to poll the service every 5 seconds or so to see if the task has completed
once complete, the result needs to be retrieved from the service.
Could someone give me some suggestions or point me in the right direction?
The way I have typically handled asynchronous server-side processing is by:
Have the webpage initiate a request against a webservice and have the service return an ID for the long-running transaction. In my case, I have used Ajax with jQuery on the client webpage and a webservice that returns data in JSON format. ASP.NET MVC is particularly well suited for this, but you can use ASP.NET to return a JSON string in response to a GET, or not use JSON at all.
Have the server create a record in a database that also stores the associated data to be processed. The ID of this transaction is returned to the client webpage. The service then sends a message to a third service via a message queue. In my case, the service was a WCF service hosted in a Windows Service with MSMQ as the intermediary. It should be noted that it is better not to do the actual task processing in ASP.NET, as it is not meant for requests that are long-running. In a high demand system you could exhaust available threads.
A third service receives and responds to the queued message by reading and processing necessary data from the database. It eventually marks the database record "complete".
The client webpage polls the webservice, passing the transaction record ID. The webservice queries the database based on this ID to determine whether the record is marked complete. If it is complete, it queries for the result dataset and returns it; otherwise it returns an empty set (see the sketch after this outline).
The client webpage processes the webservice response, which will either contain the resulting data or an empty set, in which case it should continue polling.
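As a concrete sketch of the polling endpoint from the outline above (ASMX-style to match ASP.NET 3.5; the table and helper names are made up):

    using System.Web.Services;

    [WebService(Namespace = "http://example.com/")]
    public class TaskStatusService : WebService
    {
        [WebMethod]
        public string CheckStatus(int transactionId)
        {
            // Look the transaction up; return the result once it is marked
            // complete, otherwise an empty string so the client keeps polling.
            return IsComplete(transactionId) ? LoadResult(transactionId) : string.Empty;
        }

        private bool IsComplete(int id)
        {
            // e.g. SELECT complete FROM transactions WHERE id = @id
            return false; // stub
        }

        private string LoadResult(int id)
        {
            // e.g. SELECT result FROM transactions WHERE id = @id
            return null; // stub
        }
    }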
This just serves as an example; you may find that you can take shortcuts, avoid doing the processing in a third service, and just use ASP.NET threads. But that presents its own problems, namely how another request (the polling request) would know whether the original request is complete. The hackish solution to that is a thread-safe collection in a static variable holding transaction ID/result pairs. But for that effort, it really is better to use a database.
EDIT: I see now that it appears to be a demonstration rather than a production system. I still stand by my above outline for "real-world" situations, but for a demo the "hackish" solution would suffice.
Which part is going to need to be async? As far as I can tell your actions are synchronous:
1) -> 2) -> 3)
A simple web service would do; IIS (like any web server) supports handling multiple requests concurrently, so you have no problem there.
Something you may need to be aware of: the JavaScript engine executes code in a single thread.
Step 0: Create the web service.
Step 1: Create the web app project (assuming it's ASP.NET).
Step 2: Add a web reference to the web service in your web app project.
Step 3: Adding the reference creates a proxy for you, through which you can make both synchronous and asynchronous calls.
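For example, with an "Add Web Reference" (ASMX) proxy, the asynchronous call uses the generated *Async method and *Completed event. A sketch; MyService, DoWork and the handler body stand in for whatever the wizard generates in your project:

    // All names below are placeholders for the wizard-generated proxy members.
    protected void Page_Load(object sender, EventArgs e)
    {
        var proxy = new MyService();
        proxy.DoWorkCompleted += (s, args) =>
        {
            if (args.Error == null)
                Label1.Text = args.Result; // placeholder result handling
        };
        proxy.DoWorkAsync(); // returns immediately; the event fires on completion
    }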
I'm having an issue sending large volumes of emails out from an ASP.Net application. I won't post the code, but instead explain what's going on. The code should send emails to 4000 recipients but seems to stall at 385/387.
The code creates the content for the email in a string.
It then selects the list of email addresses to send to.
Looping through the data via a datareader it picks out the email address and sends an email.
The email sending is done by a separate method which can handle failures and returns its outcome.
As each record is sent I produce an XML node in an XML document to log each specific attempt to send.
The loop seems to end prematurely and the XML document is saved to disk.
Now I know the code works. I have run it locally using the same SMTP machine and it worked fine with 500 records. Granted there was less content, but I can't see how that would make any difference.
I don't think the page itself times out, but even if it did, I was sure .Net would continue processing the page, even if the user saw a page time out error.
Any suggestions appreciated, because I'm pretty stumped.
You're sending lots of emails. During the span of a single request? IIS will kill a request if it takes longer than a certain (configurable) amount of time.
You need to use a separate process to do stuff like this. Whether that's a Timer you start from within global.asax, or a Thread which checks for a list of emails in a database/app_data directory, or a service you send a request to via WCF, or some combination of these.
The way I've handled this in the past is to queue the emails into a SQL Server table and then launch another thread to actually process/send the emails. Another aspx utility page can give me the status of the queue or restart the processing.
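A rough sketch of that shape: the page only INSERTs into the queue table, and a worker thread (started once, e.g. from Application_Start) drains it. The EmailQueue table and the send step are assumptions standing in for your own code:

    using System;
    using System.Threading;

    public static class MailQueueWorker
    {
        public static void Start() // call once, e.g. from Application_Start
        {
            var worker = new Thread(ProcessQueue) { IsBackground = true };
            worker.Start();
        }

        private static void ProcessQueue()
        {
            while (true)
            {
                // SELECT unsent rows from the EmailQueue table (name assumed),
                // send each one via SmtpClient, then mark it sent or failed.
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }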
I also highly recommend that you use an existing, legit, third-party mailing service for your SMTP server if you are sending mail out to the general public. Otherwise you run the risk of your ISP shutting off your mail access or (worse) your own server being blacklisted.
If the web server has a timeout setting, it will kill the page if it runs too long.
I recommend you check the value of HttpServerUtility.ScriptTimeout - if this is set then when a script has run for that length of time, it will be shut down.
Something you could do to help is go completely old-school: combine some Response.Writes with a few Response.Flush calls to send some data back to the client browser; this tends to keep the script alive (it certainly worked on an old ASP.NET 1.1 site we had).
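Something along these lines, inside your existing send loop (SendEmail stands in for your separate send method, reader for your datareader):

    int sent = 0;
    while (reader.Read()) // your existing datareader loop
    {
        SendEmail(reader["Email"].ToString()); // placeholder for your send method
        if (++sent % 50 == 0)
        {
            Response.Write(string.Format("<!-- {0} sent -->", sent));
            Response.Flush(); // push bytes to the browser to keep the request alive
        }
    }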
Also, you need to take into account when this script is being run. The server may well have been configured to perform an application reset (by default this is set to every 29 hours in IIS); if your server is set to something like 24 hours and this coincides with the time your script is run, you could be seeing that too, although the fact that the script is logging its responses probably rules that out... unless your XML document is badly formed?
All that being said, I'd go with Will's answer of using a separate process (not just a thread hosted by the site), or as Bryan said, go with a proper mailing service, which will help you with things like bounce-backs, click tracking, reporting, open counts, etc.