I'm sorry if this subject has already been answered, but I couldn't find what I needed (yet).
I'm working on a program that downloads files from university websites that use the same infrastructure. It's an open source project which I'm trying to support in my free time
(hosted on Google Code: http://code.google.com/p/highlearner/).
Until now we used GET and POST requests to log in to the right page and download stuff. But the universities keep changing their websites, and every little change requires tweaking in Highlearner, which requires a new version, auto-updating all users, etc. Also, every university has its own login page, requiring me to tailor a login sequence for each.
So I'm looking for a more robust solution, instead of manually redirecting and setting the HTTP parameters. Is there some kind of mini browser that supports HTML + JavaScript? No GUI is needed; I just need the engine.
This way, I will simply need to fill out the form parameters and let the browser do the work.
Thanks,
Nitay
You could try to automate the process with the WatiN library. It allows you to click buttons, submit forms, etc.
// Requires a reference to WatiN.Core; WatiN drives a real Internet Explorer instance.
using (var ie = new IE(loginUrl))
{
    // Only fill in the form if both fields exist on the login page.
    if (ie.TextField("username").Exists
        && ie.TextField("password").Exists)
    {
        ie.TextField("username").Value = "username";
        ie.TextField("password").Value = "password";
        ie.Button(Find.ByName("submit")).Click();
    }
}
I am developing a Facebook WPF application for my senior design project in college. I've never coded in C# or developed a WPF application before this. Right now I'm trying to implement logout functionality. I'm using a WebBrowser to do this, and the documentation seems to say that the method of doing this is to navigate to:
https://www.facebook.com/logout.php?next={redirectURI}&access_token={token}
in the browser, where the sections in curly braces are variables. For some reason, it brings me back to the Facebook home page (news feed) every time I do this. Is this due to a change made by Facebook in recent years or is there an error on my part? Alternative methods of logging out via a web browser, such as an alternative logout URL, would be appreciated as well.
With FB SDK v6, there are some nuances. I'll go through a few things you need to verify in your code. Your code probably would've worked previously, but today you should make the following changes, assuming you haven't already:
The redirectURI in your code needs to be changed to "http://www.facebook.com". Standard redirect URIs (including those associated with your access token generation) don't seem to work anymore.
You also need to make sure your redirectURI is an absolute URI. There is a very simple way to do this, which I will show in the code below.
Bringing it together, this code will work for the current FB C# SDK via a WebBrowser:
var fb = new FacebookClient();
// userAccessToken below is a placeholder for the user's current access token.
var logoutURL = fb.GetLogoutUrl(new { access_token = userAccessToken, next = "https://www.facebook.com/" });
WebBrowser1.Navigate(logoutURL.AbsoluteUri);
A final note: in my code, I chose to ask the SDK for the logout URL instead of hardcoding it. It looks like your hardcoded logout URL would still be correct after making these changes, but retrieving the URL helps ensure correctness. Good luck on your project.
Okay, I know C# has a vast set of very easy-to-use application development tools, but this is what I want to learn now. When a user opens his browser and enters some URL in it, is it possible to send this data (the entered URL address) to some other code, such as C# code or, for example, C++ code, located on his hard drive?
To put it simply: when a user clicks some link on a webpage, enters some URL, opens the web browser, or closes it, can we detect all the actions he performs in the web browser through C# code or any other way (I guess add-ons or plugins are how this works)? Is it possible to send his actions to C# code, process them, and give certain output back to the browser so that the browser performs it and shows the result to the user?
Something like browser --> C# code --> website. I want C# code to act between the browser and web pages.
What I have tried so far
I started googling this and learned a little about how browsers work, but I am still unable to find a solution. However, I guess plugins are the way to do such tasks, and I found FireBreath, a cross-platform way to develop plugins for browsers. So is this possible with plugins? If so, could you suggest some good tools to develop my own plugins? Thanks
There are several options depending on what you want to achieve:
Proxy
You could implement an HTTP proxy and configure the browser to use that proxy. The proxy sees all traffic and can do whatever it wants with it; this works in a fairly browser-agnostic way.
PlugIn
You could implement a plugin, although this is browser-specific; for example, IE used to have BHOs (Browser Helper Objects) for this kind of stuff (not sure whether this is still possible with IE10...).
You can use FiddlerCore for this:
// FiddlerCore runs as a local HTTP(S) proxy and raises an event for every request it sees.
Fiddler.FiddlerApplication.BeforeRequest += sess =>
{
    Console.WriteLine("REQUEST TO : " + sess.fullUrl);
    sess.bBufferResponse = true;
};

// Listen on port 8877, register as the system proxy, and decrypt HTTPS traffic.
Fiddler.FiddlerApplication.Startup(8877, true, true);
Console.ReadLine();
Fiddler.FiddlerApplication.Shutdown();
System.Threading.Thread.Sleep(750);
After running this code, open your browser and navigate to any page.
My application has some menu buttons that sends the users to my website.
I want to differentiate in the website how many users came from my app, out of all the regular users.
My app is written in C#, and currently I direct users like this:
string url = "http://mysite/somepage";
System.Diagnostics.Process.Start(url);
On the server side, I use Piwik for my web analytics.
Any suggestions?
Update
One good solution would be to add some parameter to the URL. Yet I was wondering if it's possible to play with the referrer field instead, for the sake of analytics simplicity.
Add something to the url, probably in the querystring that identifies that the user has originated from your application, like:
string url = "http://mysite/somepage?source=myApplication";
System.Diagnostics.Process.Start(url);
You can/could also use this to track the versions of your app that are in use by adding more to the url, for example ?source=myApplication&version=1.0.3 =)
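As a small sketch of that idea (the ?source and &version parameter names above are just examples), you could pull the version from the assembly so the URL stays in sync with your releases:
// Build the tracking URL from the current assembly version (parameter names are examples only).
var version = System.Reflection.Assembly.GetExecutingAssembly().GetName().Version;
string url = "http://mysite/somepage?source=myApplication&version=" + version;
System.Diagnostics.Process.Start(url);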
Just add a parameter to the URL coming from your app, other users will not have that:
string url = "http://mysite/somepage?fromApp=v1";
On your website, you can pick that up to differentiate users. Do a redirect immediately after, so they will not bookmark the page with this URL.
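A rough sketch of that server side (assuming ASP.NET Web Forms and the fromApp parameter from above; the actual analytics call is left as a comment):
// In somepage.aspx.cs
protected void Page_Load(object sender, EventArgs e)
{
    if (Request.QueryString["fromApp"] != null)
    {
        // Record the app-originated visit here (e.g. tag it for Piwik or write a log entry),
        // then redirect to the parameter-free URL so users don't bookmark it.
        Response.Redirect("http://mysite/somepage");
    }
}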
Can't you just add some parameter to the URL your application is using and use that to filter users coming from your app?
Please consider the following scenario,
There are two web applications, App1 and App2. A user would submit his information on App1 through a form. On click of a specific button/link on App1, the same data should be posted to a page on App2, and the user should also be redirected to that same page on App2.
I would like some help in finding out the best way to implement this functionality.
One of the approaches that I have already tried is creating a temporary HTML form at runtime, setting the action attribute of the form to the App2 page, and getting the form posted using a JavaScript submit. The data can then be fetched on the App2 page by using the Request.Form collection.
This approach works well, but I was still wondering if there is any other way to implement the required functionality.
I would really appreciate it if you could give some insights on using RESTful web services to implement this, or else on using some HttpModule to intercept requests at App1 and modify the redirect response to App2, or any other approach that you might find fit for the purpose.
Edit:
Using the querystring isn't an option for me.
I've had a need to do similar things with feed aggregation and building RSS feeds from web page content on different domains.
The user gets the App1 page, fills in the details, and submits; then on the server for App1 I have a method that looks like this...
// Requires references to System.Net and the mshtml COM interop assembly.
HTMLDocument FetchURL(string url)
{
    WebClient wc = new WebClient();
    string remoteContent = wc.DownloadString(url);

    // The mshtml API is very weird, but let's just say you have to do things this way...
    HTMLDocument doc = new HTMLDocument();
    IHTMLDocument2 doc2 = (IHTMLDocument2)doc;
    doc2.write(new object[] { remoteContent });
    return (HTMLDocument)doc2;
}
This function does 2 things of use ...
It gets the page of content at "url"
It parses that content into an HTMLDocument object
Once you have this function you can then call it, passing it the URL of the remote page, and get back an HTML document.
The functions in the HTMLDocument object will allow you to do javascript like dom queries such as :
docObject.GetElementById("id");
I then have different functions that do different things with this object based on the page / site I'm returning data from.
There is, however, one fatal flaw here...
This is likely to work really well with sites that don't change much in structure and are generated by code, but not so well on less dynamic sites.
With Stack Overflow, for example, it's easy to pull out a question and the accepted answer for that question, so I could use this code to pull and publish content from here on my own web site.
However ...
This is not going to help you with user / login related details, though, as this sort of information is generally not shared with everyone.
It's a bit like me trying to use this to link Facebook profiles to my own website: I would have to go through some form of API that asks the user to authenticate before making the request.
Simply pulling a web page based on a URL alone will give the other site no authentication information, unless that site accepts the user's login details in the querystring and you already have them.
You may, however, be able to chain requests by ripping apart my sample method: request the login page, parse the results, fill in the form, post it back using the same web client instance to log in, and then request the URL you actually want.
The idea being that you would have a form on your site that asks the user for their login details for the remote site, and then you go and find their profile page based on that.
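A rough sketch of that chaining idea, assuming hypothetical form field names ("user"/"pass") and placeholder URLs; in practice you would first scrape the real login form to discover its field names and action. WebClient has no cookie support on its own, so a small cookie-aware subclass is used:
using System;
using System.Collections.Specialized;
using System.Net;

// WebClient does not keep cookies by default, so derive a small cookie-aware version.
class CookieWebClient : WebClient
{
    public CookieContainer Cookies { get; } = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        if (request is HttpWebRequest http)
            http.CookieContainer = Cookies;
        return request;
    }
}

class LoginChainSketch
{
    static void Main()
    {
        // Field names and URLs below are placeholders for whatever the remote login form really uses.
        var wc = new CookieWebClient();
        var form = new NameValueCollection
        {
            { "user", "username" },
            { "pass", "password" }
        };
        wc.UploadValues("https://remote.example/login", form);             // log in; session cookies are kept
        string html = wc.DownloadString("https://remote.example/profile"); // subsequent request is authenticated
    }
}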
This would be best farmed out to a class rather than just a simple method like I have here.
In my case, though, I was only after something simple (the BBC Top 40 UK charts), for which I pulled information not only from the BBC but also from places like Amazon, Google, and YouTube, and then built a page :)
It's neat, but it serves no functional purpose other than pulling all your favourite sources of info onto one page.
If you are already committed to using JavaScript, then why not do an AJAX post and change window.location based on the response?
You can use HttpServerUtility.Transfer; this will preserve your form contents and transfer the user to the new page.
http://msdn.microsoft.com/en-us/library/system.web.httpserverutility.transfer.aspx
I have built something like what you are describing, and I found that using a <form> tag to POST to app2 is the most reliable way... basically, the way you found that worked well.
If App2 is residing on a different domain, it's usually best to create your own interface for the submission, and have that interface handle the posting from App1 to App2.
(Browser) -> submits form to App1
(App1)    -> validates input
          -> stores local info
          -> creates an HttpRequest/POST object
          -> posts to App2
(App2)    -> handles the post
          <- returns the response
(App1)    -> confirms the results of App2
          <- returns the results to the browser.
In essence, you want to control and proxy requests from your application's domain to any outside interfaces as much as possible.
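If you go down that road, the App1-side proxying step could look roughly like this (a sketch only; the App2 URL is a placeholder and error handling is omitted):
// Sketch: App1 forwards the validated form data to App2 server-side.
// (Needs System.Net, System.Collections.Specialized and System.Text.)
static string PostToApp2(NameValueCollection formData)
{
    using (var client = new WebClient())
    {
        // UploadValues sends an application/x-www-form-urlencoded POST and returns App2's response body.
        byte[] responseBytes = client.UploadValues("https://app2.example/receive", formData);
        return Encoding.UTF8.GetString(responseBytes);
    }
}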
Note: I'm answering my own question just to have a correct answer marked against it. All the suggestions provided by various members here are correct in their own way, but they were not apt for my requirements. Hence, I can't accept any of them as correct.
The way I have implemented it is by creating a custom control which has a configurable property containing the URL to post data to, and another one accepting a dictionary object as the data to be posted.
This control internally creates an HTML form with its action attribute set to the URL specified by the user and the data fields created from the dictionary object. This form is then posted on the button click event on the page hosting this control.
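A minimal sketch of that control (the names CrossPostControl, PostUrl and PostData are mine, not from the original implementation): it renders a hidden form aimed at the remote URL, with one hidden input per dictionary entry, plus a one-line script to submit it.
using System.Collections.Generic;
using System.Web;
using System.Web.UI;

// Sketch of a control that renders a self-submitting form aimed at another application.
public class CrossPostControl : Control
{
    public string PostUrl { get; set; }
    public IDictionary<string, string> PostData { get; set; } = new Dictionary<string, string>();

    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write("<form id=\"crossPost\" method=\"post\" action=\"{0}\">", HttpUtility.HtmlAttributeEncode(PostUrl));
        foreach (var field in PostData)
        {
            writer.Write("<input type=\"hidden\" name=\"{0}\" value=\"{1}\" />",
                HttpUtility.HtmlAttributeEncode(field.Key),
                HttpUtility.HtmlAttributeEncode(field.Value));
        }
        writer.Write("</form>");
        // Submit as soon as the page loads (in the real implementation this happens on a button click).
        writer.Write("<script>document.getElementById('crossPost').submit();</script>");
    }
}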
I am developing an application in which I am displaying products in a grid. In the grid there is a column which has a disable/enable icon, and on click of that icon I fire a request through AJAX to my page manageProduct.aspx to enable/disable that particular product.
In my AJAX request I am passing the productID as a parameter, so the final AJAX query is:
http://example.com/manageProduct.aspx?id=234
Now, if someone (a professional hacker or web developer) gets hold of this URL (which is easy to get from my JavaScript files), he can write a script that runs in a loop and disables all my products.
So, I want to know: is there any mechanism, technique, or method by which, if someone tries to execute that page directly, it will return an error (a proper message like "You're not authorized"), while if the page is called from the intended page, such as the one where I display the product list, it will execute properly?
Basically, I want to secure my AJAX requests so that no one can execute them directly.
In PHP:
In PHP, my colleague secures his pages by checking the referrer of the page, as below:
$back_link = $_SERVER['HTTP_REFERER'];
if ($back_link =='')
{
echo 'You are not authorized to execute this page';
}
else
{
//coding
}
Please tell me how to do the same, or suggest any other different but secure technique, in ASP.NET (C#). I am using jQuery in my app for making AJAX requests.
Thanks
Forget about using the referer - it is trivial to forge. There is no way to reliably tell if a request is being made directly or as a response to something else.
If you want to stop unauthorised people from having an effect on the system by requesting a URL, then you need something smarter than that to determine their authorisation level (probably a password system implemented with HTTP Basic Auth or cookies).
Whatever you do, don't rely on http headers like 'HTTP_REFERER', as they can be easily spoofed.
You need to check in your service that the user is logged in. Writing a good, secure login system isn't easy either, but that is what you need to do, or use the built-in forms authentication.
Also, do not use sequential product IDs; use uniqueidentifiers. You can still have an integer product ID for display, but for all other uses, like the one you describe, you will want to use the product's uniqueidentifier/GUID.
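For the AJAX endpoint itself, a minimal sketch of that check in manageProduct.aspx might look like this (forms authentication assumed; ToggleProduct is a hypothetical helper):
// Sketch for manageProduct.aspx.cs: reject callers who are not logged in.
protected void Page_Load(object sender, EventArgs e)
{
    if (!User.Identity.IsAuthenticated)
    {
        Response.StatusCode = 403;
        Response.Write("You are not authorized to execute this page");
        Response.End();
        return;
    }

    // ToggleProduct is a hypothetical method; prefer the product's GUID over a sequential id.
    Guid productId;
    if (Guid.TryParse(Request.QueryString["id"], out productId))
    {
        ToggleProduct(productId, User.Identity.Name);
    }
}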