A little background... I have a .NET webpage that communicates one way with a service (using OnCustomCommand()). When the user presses a button, a function is called, which is all good and dandy; however, when the function is done executing I need to be able to send a message, function call, or some other communication back to the .NET webpage.
Is there a way for my service to call a function, send a message, or otherwise update my .NET webpage?
I've looked around and seen mostly .NET -> Service but nothing seems to go the other way.
EDIT: It's a Windows service, and the ASP.NET page and the Windows service reside on the same server.
Have the service write the output to a common area... such as a shared file, or a database. Then refresh the webpage and have it query that file for the response output.
To support more than one user, you should have some session ID that determines where the output is saved. For example, have the command write its output to a session-named file like this:
Echo This is a test > "c:\Some Directory\Session12345.txt"
And then have your aspx page query and refresh using a GET like this http://example.com/GetOutput.aspx?Session=12345
From there, use ASP.NET to read the file whose name matches the SessionID appended in the URL.
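As a rough sketch of that idea (the paths, the file naming, and how the session ID reaches the service are assumptions for illustration, not part of the original setup):

// In the Windows service (assumes using System.IO): write the result to a file keyed by the session ID.
string sessionId = "12345"; // e.g. passed in via OnCustomCommand or a command line argument
File.WriteAllText(@"c:\Some Directory\Session" + sessionId + ".txt", "This is a test");

// In GetOutput.aspx.cs: read back whichever file matches the ?Session= parameter.
protected void Page_Load(object sender, EventArgs e)
{
    string id = Request.QueryString["Session"];        // validate this in a real app
    string path = @"c:\Some Directory\Session" + id + ".txt";
    Response.Write(File.Exists(path) ? Server.HtmlEncode(File.ReadAllText(path)) : "Not ready yet");
}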
You can extend this concept to work with jQuery and WCF as needed. Of course, you will need to add security to this to prevent MITM attacks, but it sounds like this is a small project not connected to the internet, so the extra features may not be that important.
Communication can only be initiated from the client to the server. Use Ajax/a web service/a script method to retrieve the status of the service call.
An example would be something like this: a user clicks a button on a webpage I created with ASP.NET.
My app calls an API hosted on a third-party server. The user sees a spinning icon/"Please wait" message while my app waits for the third-party server to return the results, which may take a minute.
The solution isn't limited to any particular version of .NET.
In most cases you can't wait for code behind to finish (since the web page is stuck up on the server); ONLY after ALL your code behind is done can the page THEN travel back down to the client side, be refreshed, have JavaScript start running, and THEN have the results displayed client side. In effect, code behind NEVER interacts directly with the user, but only with the web page, and ONLY while the page has been posted up to the server. Your code behind can, in that very short window, modify values on the page, and then the whole page is sent back to the client.
So, what this means is you have to adopt ajax. That means you write JavaScript client side, and that JavaScript can call a web service you set up on your web site. In that case no post-back of the page has occurred, and there are "many" features of JavaScript that let you do this (such as promises, await, etc.). Most web service calls are by nature asynchronous (you don't know how long you have to wait). So, yes, this is very much the best practice, and how this common type of issue should be approached.
So, in effect, you don't use code behind on the web page for this, but create a web service (web method) for the given page, or a separate aspx page with your required web service methods. Such methods do NOT have use of the controls on your current page and don't have use of ViewState, but they can use Session().
So, just google making web service calls in asp.net.
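As a rough sketch of what such a page method might look like (the method name and the Session key are assumptions; the long-running work is presumed to have stored its result in Session):

// In the page's code behind (e.g. MyPage.aspx.cs); a page method has no access to the
// page's controls or ViewState, but Session is available when EnableSession is set.
[System.Web.Services.WebMethod(EnableSession = true)]
public static string GetJobStatus()
{
    object result = HttpContext.Current.Session["JobResult"];   // hypothetical key
    return result == null ? "working" : result.ToString();
}

Client side, jQuery's $.ajax() (or fetch) would POST to MyPage.aspx/GetJobStatus every second or two, keep the spinner showing while "working" comes back, and display the result once it changes.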
You can, however, also start a new thread and have it talk to that "other service", and then drop in a timer (and even an UpdatePanel). The timer could fire every second, poll, say, a database (or Session()) value to determine when the work is done, then update the web page and stop the timer.
So, the timer trick means you can often avoid writing a web service and then having to wire up the ajax calls to web methods in JavaScript. (The web service route is the "correct" way, but the timer + background thread can work quite well too, and in most cases eliminates the need for a web service and ajax calls.)
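A rough sketch of that timer approach, assuming the markup has a ScriptManager, an UpdatePanel containing a Label (ResultLabel) and an asp:Timer (Timer1, Interval="1000"), and that CallThatOtherService() stands in for the real long-running call:

protected void StartButton_Click(object sender, EventArgs e)
{
    var session = Session;                 // capture; assumes in-process session state
    session["Done"] = false;
    System.Threading.Tasks.Task.Run(() =>
    {
        session["Result"] = CallThatOtherService();   // hypothetical long-running call
        session["Done"] = true;
    });
    Timer1.Enabled = true;                 // start polling every second
}

protected void Timer1_Tick(object sender, EventArgs e)
{
    if ((bool)Session["Done"])
    {
        ResultLabel.Text = (string)Session["Result"];
        Timer1.Enabled = false;            // done; stop polling
    }
}

A database row is the more robust place to park the "done" flag if the site runs on more than one worker process.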
I'm a JS/jQuery developer who's dipping his toe for the first time in the C#/.NET world with a new web service.
There's an external SSO service that I need to communicate with. I send it a URL with some query string parameters, it replies with a URL that includes an SSO token, then I need to pop that URL into the end user's browser.
Any pointers on how to do this with C#?
Some additional info... I'm trying to modify some existing code that sort of did something similar using System.Net.HttpWebRequest/HttpWebResponse and an HTML form with hidden inputs, but I'm a bit lost trying to make sense of what the code is doing, and anyway it uses a form with POST, whereas the SSO service I'm connecting to just uses query string parameters.
Just to close the loop on this, I resolved it by... using JavaScript. Basically the C# code injects some JS into the page to perform the redirect. Probably not the best way to do this, but it worked...
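For what it's worth, the injected-JS part can be as small as this (how the SSO URL is obtained is glossed over; GetSsoRedirectUrl() is a hypothetical placeholder for the query-string call to the SSO service):

// In the page's code behind, once the SSO service has returned the URL with the token:
string ssoUrl = GetSsoRedirectUrl();   // placeholder
string script = "window.location.href = '" + HttpUtility.JavaScriptStringEncode(ssoUrl) + "';";
ClientScript.RegisterStartupScript(this.GetType(), "ssoRedirect", script, true);

If nothing needs to render first, a plain Response.Redirect(ssoUrl) does the same job server side.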
I have a program that opens a web browser control and just displays a web page from our server. The users can't navigate around or anything.
The users are not allowed to know the credentials required to login, so after some googling on how to log into a server I found this:
http://user_name:password@URL
This is 'hard coded' into the web browser control's code. It works fine.
HOWEVER: Some smart ass managed to grab the credentials by using Wireshark, which captures all the packets sent from your machine.
Is there a way I can encrypt this so the users cannot find out?
I've tried other things like using POST, but with the way the page was set up, it was proving extremely difficult to get working. (It's an SSRS Report Manager webpage.)
I forgot to include a link to this question: How to encrypt/decrypt the url in C#
^I cannot use this answer as I myself am not allowed to change any of the server setup!
Sorry if this is an awful question, I've tried searching around for the past few days but can't find anything that works.
Perhaps you could work around your issue with a layer of indirection - for example, you could create a simple MVC website that doesn't require any authentication (or indeed, requires some authentication that you fully control) and it is this site that actually makes the request to the SSRS page.
That way you can have full control over how you send authentication, and you need never worry about someone ever getting access to the actual SSRS system. Now if your solution requires the webpage to be interactive then I'm not sure this will work for you, but if it's just a static report, it might be the way to go.
i.e. your flow from the app would be
User logs into your app (or use Windows credentials, etc)
User clicks to request the SSRS page
Your app makes an HTTP request to your MVC application
Your MVC application makes the "real" HTTP request to SSRS (e.g. via HttpClient, etc.) and dumps the result back to the caller (for example, it could write the SSRS response via @Html.Raw in an MVC View). The credentials for SSRS will therefore never be sent by your app, so you don't need to worry about that problem any more...
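A minimal sketch of such a proxy action, assuming the report server URL, the report path, and the credential values shown are placeholders you'd pull from configuration:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportController : Controller
{
    // Fetches the SSRS report server side, so the browser never sees the SSRS credentials.
    public async Task<ActionResult> Ssrs()
    {
        var handler = new HttpClientHandler
        {
            Credentials = new NetworkCredential("ssrsUser", "ssrsPassword", "DOMAIN") // placeholders
        };
        using (var client = new HttpClient(handler))
        {
            string html = await client.GetStringAsync(
                "http://reportserver/ReportServer?/MyReport&rs:Format=HTML4.0");      // placeholder URL
            return Content(html, "text/html");
        }
    }
}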
Just a thought.
Incidentally, you could take a look here for the various options that SSRS allows for authentication; you may find some method that suits (e.g. Custom authentication) - I know you mentioned you can't change anything on the server, so I'm just including it for posterity.
I have an ASP.NET website and a separate C# application. The application writes data to a file, and the website populates the treeview with the data in the file. I populate the treeview in the page Load event.
The website checks if the file has changed. This happens from a code behind file. If the file did change, the website needs to be refreshed. I cannot use Response.Redirect because I get a
Response is not available in this context
I tried System.Web.HttpContext.Current.Response.Redirect, but this gives me a NullReferenceException.
How can I refresh the page from a code behind file, so that it loads the right data in the treeview? Other suggestions that work but use something else than refreshing the page are welcome. Thanks in advance!
Edit: The actual problem is dynamically updating the treeview (new data = updated treeview). I have tried to do something with data from a MySql database but failed. The idea is the same, except the data isn't coming from a file but from a database. I added this because I thought this info might help users understand my problem.
You can't send data to the client at the server's initiative.
You will have to poll (jQuery/ajax) if new data is available, then refresh from the client side.
This basically involves:
on the server
a web service, WebMethod page method, custom handler, etc. that can tell whether new data is available
on the client
a timer that queries the server to see whether the data has been refreshed and, in that case, refreshes the page, or rebuilds the DOM if you are using some JS templating
[Edit] A bit of background:
Actually, System.Web.HttpContext.Current.Response is null because of the disconnected request/response model of the HTTP protocol. The browser emits a request for "http://srv/resource", the server intercepts it on port 80 (by default), parses the request, builds a response (mostly a bunch of html content) and sends the response to the browser. Then the connection is closed. This choice allows great scalability, as it does not require keeping thousands of connections alive with nearly no data passing over them.
The impact of this is that the web server has no knowledge of the client other than what is sent in the request. The server receives text, and sends text in return.
Microsoft created the ASP.NET framework to reproduce the RAD feel of desktop applications. You think in terms of controls and events, not of producing an html flow as in classic ASP or PHP. They succeeded, in the sense that building web apps is quite similar to desktop development.
That "quite" is actually what is causing you some confusion. Even though the asp.net framework encapsulates most of the plumbing (ViewState is the key) to simulate this behavior, asp.net is still, at its core, a parser of request text that produces html text to send to the client, in one shot.
So you have to cheat. You can, as I suggested, automate the browser (using javascript) to wrap this asynchronous work into a "dynamic" application.
You can't successfully use a FileSystemWatcher from within a webpage.
The instance of the page lives just long enough to handle a single request. And after that request has been served, you can't issue a redirect. The browser will not be listening anymore.
You need to do polling from your webpage, using the date you last read that file. If the Last Modified date of that file has changed from what you remember, you will need to refresh your page.
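As a rough sketch of that polling idea, here is a tiny generic handler the client can hit on a timer (the file path is an assumption standing in for whatever file the application writes):

// CheckFile.ashx.cs - returns the file's last-write time so the client can compare it.
using System.IO;
using System.Web;

public class CheckFile : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write(File.GetLastWriteTimeUtc(@"C:\data\treeview.xml").Ticks.ToString());  // placeholder path
    }

    public bool IsReusable { get { return true; } }
}

Client side, a setInterval callback fetches CheckFile.ashx, remembers the last value it saw, and calls location.reload() (or re-requests just the treeview data) when the value changes.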
How would I go about doing the following...
I want to build a web service for my application to grab a piece of data from an external website that requires the user to log in. The website has no public API, hence the scraper.
Is there a library to perform the following functions, or what do I do?
automate fill-in form, auto click
automate submit button
check which URL the user has landed on, and redirect user to that URL
grab data from label.
EDIT: What I'm asking is: is there a web service, library, etc. to make it easier to perform screen scraping/automation functions?
Instead of filling in a form and virtually clicking buttons, you should look at the source of the form and figure out how the data is being submitted. In most cases you can simply send a POST request with the login data. If there is something special besides a simple POST request, I use this addon to figure out what requests are being done that you can't see. Using C#, I would use the HttpWebRequest class because it can handle cookies for you.
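A rough sketch of that approach (the URLs, form field names, and the parsing step are placeholders you'd replace after inspecting the real login form):

using System.IO;
using System.Net;
using System.Text;

// Post the login form fields directly, keeping the session cookie for the follow-up request.
var cookies = new CookieContainer();

var login = (HttpWebRequest)WebRequest.Create("https://example.com/login");   // placeholder URL
login.Method = "POST";
login.ContentType = "application/x-www-form-urlencoded";
login.CookieContainer = cookies;
byte[] body = Encoding.UTF8.GetBytes("username=me&password=secret");          // placeholder field names
login.ContentLength = body.Length;
using (var s = login.GetRequestStream()) s.Write(body, 0, body.Length);
login.GetResponse().Close();   // the auth cookie now lives in the container

// Request the page that holds the data, sending the same cookies.
var page = (HttpWebRequest)WebRequest.Create("https://example.com/account");  // placeholder URL
page.CookieContainer = cookies;
string html;
using (var reader = new StreamReader(page.GetResponse().GetResponseStream()))
    html = reader.ReadToEnd();
// From here, parse `html` (with an HTML parser or a regex) to pull the label's value out.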
If the website does not ban robots, you can use YQL to simulate everything you need. However, it can be a bit difficult or impossible as you basically have to implement a text-only browser within JS.