If a Gmail user sends an email with an inline image, then the image will be stored on Google's servers, and the email's HTML will include code like this...
<img src="https://mail.google.com/mail/u/0?lots-of-stuff-here" alt="Yoda.png" width="300" height="300">
If you're logged in to Google, you can visit this link in your browser, and you'll be redirected to the image, whose URL looks like...
https://gm1.ggpht.com/lots-of-stuff-here-too
If you're not logged in, you'll be sent to Google's sign-in page.
I want to download the image from my C# code, but I have two problems: first, I need to get past the Google sign-in, and second, I need to handle the redirect.
I tried the following...
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.AllowAutoRedirect = true;
request.Credentials = new NetworkCredential("gmailaddress", "password");
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
...but the response object's ResponseUri points at the Google sign-in page, which suggests it ignored the credentials I passed in.
Does anyone have any idea how I can do this? Thanks.
Update: Following the comment by SiKing, I don't think this is a Gmail issue as such; I think the storage being used here is more general Google storage. The problem is that I can't find any Google API for accessing that storage. If anyone knows of one, please point me in the right direction.
Related
I want to get the HTML code of page B. Unfortunately, the site requires opening page A first to get a session_id; only after that can I open the page I want. What is the solution for getting the HTML code of page B? I tried it with WebClient, but the session_id is apparently not saved.
var client = new WebClient();
client.DownloadString("http://moria.umcs.lublin.pl/link/");
client.DownloadString("http://moria.umcs.lublin.pl/link/grid/1/810");
It depends on how the server tracks that you have already visited page A when you visit page B.
Most likely it uses some kind of session ID, which is probably saved in cookies. Examining HTTP request and response headers in any browser's developer tools can get you an idea of what this website does to track the user.
If you need to store the session ID in cookies, you'll want a cookie-aware web client; a sample sketch is given below.
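Here is a minimal sketch of such a cookie-aware WebClient, assuming the session really is cookie-based; the class name is my own, not part of the framework. Overriding GetWebRequest so every request shares one CookieContainer means a session cookie set by page A is sent back automatically when requesting page B.

using System;
using System.Net;

// Cookie-aware WebClient: all requests made through one instance share a
// single CookieContainer, so session cookies persist across calls.
public class CookieAwareWebClient : WebClient
{
    private readonly CookieContainer cookies = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        HttpWebRequest httpRequest = request as HttpWebRequest;
        if (httpRequest != null)
        {
            httpRequest.CookieContainer = cookies; // attach the shared cookie jar
        }
        return request;
    }
}

Used in place of the plain WebClient from the question:

var client = new CookieAwareWebClient();
client.DownloadString("http://moria.umcs.lublin.pl/link/");                         // page A sets session_id
string html = client.DownloadString("http://moria.umcs.lublin.pl/link/grid/1/810"); // cookie sent back for page B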
I would use HttpWebRequest instead of WebClient; I did not see any method in WebClient where you can get or set cookies. Take a look at this MSDN link. Your code for the initial request would be something like in the link. For the next request to another page, set its CookieContainer with the cookies from the response to the initial request before you ask for the response.
https://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.cookiecontainer(v=vs.110).aspx
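A short sketch of that approach, reusing the URLs from the question (the real site may require POST data or extra headers; this only shows the cookie handoff):

using System.IO;
using System.Net;

CookieContainer cookies = new CookieContainer();

// First request: page A stores its session cookie in our container.
HttpWebRequest first = (HttpWebRequest)WebRequest.Create("http://moria.umcs.lublin.pl/link/");
first.CookieContainer = cookies;
using (first.GetResponse()) { }

// Second request: the same container sends the session cookie back to page B.
HttpWebRequest second = (HttpWebRequest)WebRequest.Create("http://moria.umcs.lublin.pl/link/grid/1/810");
second.CookieContainer = cookies;
using (HttpWebResponse response = (HttpWebResponse)second.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string html = reader.ReadToEnd();
}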
Using the Mantis SOAP API (aka "MantisConnect") from C#, I can successfully read an issue and also get its download_url field.
When trying to download the attachment by something like this:
using (var request = new WebClient())
{
    request.Credentials = new NetworkCredential("username", "password");
    return request.DownloadData(mantisAtt.download_url);
}
it "downloads" a HTML page with the login screen instead of the binary attachment content.
So my question is:
How can I programmatically download an attachment for an issue in Mantis?
I was on the completely wrong track. Instead of following the download URL being returned, I now use the function mc_issue_attachment_get, and everything works as expected.
So, to solve this, do not download from the URL; simply use the intended SOAP API function.
(I found the solution after posting my question to the "mantisbt-soap-dev" mailing list and getting a fast reply.)
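For reference, a rough sketch of the call, assuming a client proxy generated from the MantisConnect WSDL; the proxy class name and exact parameter types depend on your generator, and mantisAtt is the attachment record already fetched from the issue:

using System.IO;

// Sketch only: MantisConnectService is a stand-in for whatever class your
// WSDL tooling generated; check its actual name and signatures.
var mantis = new MantisConnectService();

// mc_issue_attachment_get returns the raw attachment bytes directly,
// authenticated in the call itself, so no login page gets in the way.
byte[] data = mantis.mc_issue_attachment_get("username", "password", mantisAtt.id);
File.WriteAllBytes(mantisAtt.filename, data);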
I'm trying to use the YouTubeFisher library with ASP.NET. I make an HttpWebRequest to grab the HTML content, process the content to extract the video links, and display the links on the web page. I managed to make it work on localhost: I can retrieve video links and download the video there. But when I push it to the Server, it works only if I send the request from the Server itself. If the page is accessed by a client browser, the client can see the links properly, but clicking a link gets HTTP Error 403 every time, even though the link is correct.
My analysis is that when the Server makes the HttpWebRequest to grab the HTML content, it sends its own HTTP headers. The HTML content (links to the video file) sent back by the YouTube server will, I think, only answer requests whose headers match the ones the Server sent. So, when the client clicks on the link, it sends a request to the YouTube server with different HTTP headers.
So, I'm thinking of getting the HTTP headers from the client, then modifying the Server's HTTP headers to include the client's header info before making the HttpWebRequest. I'm not quite sure this will work; as far as I know, HTTP headers cannot be modified.
Below is the code from the YouTubeFisher library that makes the HttpWebRequest:
public static YouTubeService Create(string youTubeVideoUrl)
{
    YouTubeService service = new YouTubeService();
    service.videoUrl = youTubeVideoUrl;
    service.GetVideoPageHtmlSource();
    service.GetVideoTitle();
    service.GetDownloadUrl();
    return service;
}

private void GetVideoPageHtmlSource()
{
    HttpWebRequest req = HttpWebRequest.Create(videoUrl) as HttpWebRequest;
    HttpWebResponse resp = req.GetResponse() as HttpWebResponse;
    videoPageHtmlSource = new StreamReader(resp.GetResponseStream(), Encoding.UTF8).ReadToEnd();
    resp.Close();
}
When a client browses the page, the links are there but clicking them gives HTTP 403.
When the page is browsed from the Server itself, everything works as expected.
How do I make the HttpWebRequest on behalf of the client, then? Is my analysis of this problem correct?
Thank you for your input.
Use an HTTP monitor such as Charles, Fiddler or even Firebug to find out what additional headers are being sent from the browser in the success case. I suspect you'll need to duplicate one or more of Accept, User-Agent or Referer.
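A hedged sketch of what that duplication might look like inside GetVideoPageHtmlSource; the values below are placeholders, and the real ones should be copied from whatever the HTTP monitor shows the browser sending:

HttpWebRequest req = (HttpWebRequest)WebRequest.Create(videoUrl);
// Mirror the headers observed in the successful browser request.
req.UserAgent = "Mozilla/5.0 (Windows NT 6.1) ...";  // placeholder value
req.Accept = "text/html,application/xhtml+xml,*/*";  // placeholder value
req.Referer = "http://www.youtube.com/";             // placeholder value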
In the past I've just assumed that YouTube has those links encoded so that they only work for the original request IP. If that were the case, it would be far more difficult. I have no clue whether this is the case or not; try forwarding all the header elements you can before going down this route...
The only possibility that comes to mind is that you'd have to use a JavaScript request to download the page to the client's browser, then upload that to your server for processing, or do the processing in JavaScript.
Or you could have the client download the video stream via your server, so your server would pass through the data. This would obviously use a ton of your bandwidth.
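If you go the pass-through route, a rough sketch of a proxying ASP.NET handler follows; the handler name and the url query parameter are my own invention for illustration:

using System.IO;
using System.Net;
using System.Web;

// Hypothetical handler that streams the video through the server, so the
// download request originates from the same machine that fetched the page.
public class VideoProxyHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        string videoUrl = context.Request.QueryString["url"]; // hypothetical parameter
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(videoUrl);
        using (WebResponse resp = req.GetResponse())
        using (Stream source = resp.GetResponseStream())
        {
            context.Response.ContentType = resp.ContentType;
            source.CopyTo(context.Response.OutputStream); // pass the bytes straight through
        }
    }
}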
I've read a lot of posts on here and other sites, but still haven't gotten any clarification on my question. So here goes.
I have a Facebook link that requires you to be logged in. Is there a way, using .NET (C#), that I can use the Facebook API or something to "click" this link without a browser control?
Essentially, I have an application that I wrote that detects certain Farmville links. Right now I'm using a browser control to process the links. However, it's messy and crashes a lot.
Is there a way I can send the url along with maybe a token and api key to process the link?
Does anyone even understand what I'm asking? lol.
Disclaimer: I don't know what Facebook's API looks like, but I'm assuming it involves sending HTTP requests to their servers. I am also not 100% sure on your question.
You can do that with the classes in the System.Net namespace, specifically WebRequest and WebResponse:
using System.Net;
using System.IO;
...
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://apiurl.com?args=go.here");
req.UserAgent = "The name of my program";
WebResponse resp = req.GetResponse();
string responseData = null;
using (TextReader reader = new StreamReader(resp.GetResponseStream()))
{
    responseData = reader.ReadToEnd();
}
// do whatever with responseData
You could put that in a method for easy access.
Sounds like you are hacking something... but here goes.
You might want to try a solution like Selenium, which is normally used for testing websites.
It is a little trouble to set up, but you can programmatically launch the Facebook website in a browser of your choosing for login, programmatically enter the username and password, programmatically click the login button, and then navigate to your link.
It is kind of clunky, but it gets the job done no matter what, since it appears to Facebook that you are accessing their site from a browser.
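A rough sketch of that flow with Selenium WebDriver; the element ids ("email", "pass", "loginbutton") are assumptions about Facebook's login form and may well have changed:

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

string farmvilleLink = "...the link your app detected...";  // placeholder

using (IWebDriver driver = new FirefoxDriver())
{
    // Log in through the real login form, exactly as a user would.
    driver.Navigate().GoToUrl("https://www.facebook.com/login");
    driver.FindElement(By.Id("email")).SendKeys("you@example.com");
    driver.FindElement(By.Id("pass")).SendKeys("your-password");
    driver.FindElement(By.Id("loginbutton")).Click();  // id is an assumption

    // Now visit the link that requires being logged in.
    driver.Navigate().GoToUrl(farmvilleLink);
}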
I've tried similar sneaky tricks to enter my name over and over for Publisher's Clearing House, but they eventually wised up and banned my IP.
I need to log in to a website and perform an action. The website is REST-based, so I can easily log in by doing this (the login info is included as a querystring on the URL, so I don't need to set the credentials):
CookieContainer cookieJar = new CookieContainer();
HttpWebRequest firstRequest = (HttpWebRequest) WebRequest.Create(loginUrl);
firstRequest.CookieContainer = cookieJar;
firstRequest.KeepAlive = true;
firstRequest.Method = "POST";
HttpWebResponse firstResponse = (HttpWebResponse)firstRequest.GetResponse();
That works and logs me in. I get a cookie back to maintain the session and it's stored in the cookieJar shown above. Then I do a second request such as this:
HttpWebRequest secondRequest = (HttpWebRequest) WebRequest.Create(actionUrl);
secondRequest.Method = "POST";
secondRequest.KeepAlive = true;
secondRequest.CookieContainer = cookieJar;
WebResponse secondResponse = secondRequest.GetResponse();
And I ensure I assign the cookies to the new request. But for some reason this doesn't appear to work: I get back an error telling me "my session has timed out or expired", and the requests are made one right after the other, so it's not a timing issue.
I've used Fiddler to examine the HTTP headers, but I'm finding that difficult since this is HTTPS. (I know I can decrypt it, but that doesn't seem to work well.)
I can take my URLs for this REST service and paste them into Firefox and it all works fine, so it must be something I'm doing wrong and not the other end of the connection.
I'm not very familiar with HTTPS. Do I need to do something else to maintain my session? I thought the cookie would be it, but perhaps there is something else I need to maintain across the two requests?
Here are the headers returned when I send in the first request (except I changed the cookie to protect the innocent!):
X-DB-Content-length=19
Keep-Alive=timeout=15, max=50
Connection=Keep-Alive
Transfer-Encoding=chunked
Content-Type=text/html; charset=WINDOWS-1252
Date=Mon, 16 Nov 2009 15:26:34 GMT
Set-Cookie:MyCookie stuff goes here
Server=Oracle-Application-Server-10g
Any help would be appreciated, I'm running out of ideas.
I finally got it working after decrypting the HTTP traffic from my program.
The cookie I'm getting back doesn't list the Path attribute, so .NET takes the current path, including the current page, and assigns that as the path on the cookie. I.e., if the request was for http://mysite/somepath/somepage.htm, it would set the cookie path to /somepath/somepage.htm. This is a bug, as it should be assigned "/", which is what all web browsers do. (I hope they fix this.)
After noticing this, I grabbed the cookie, modified its Path property, and everything works fine now.
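A minimal sketch of that fix, reusing the cookieJar and firstResponse variables from the code above (the response's Cookies collection is populated because a CookieContainer was assigned to the request):

// Re-add each login cookie with its Path reset to "/", so it is sent on
// subsequent requests to any path on the host, not just the login page.
foreach (Cookie cookie in firstResponse.Cookies)
{
    cookieJar.Add(new Cookie(cookie.Name, cookie.Value, "/", cookie.Domain));
}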
Anyone else with a problem like this: check out Fiddler. .NET uses the Windows certificate store, so to decrypt HTTPS traffic from your program you will need to follow the instructions here: http://www.fiddler2.com/Fiddler/help/httpsdecryption.asp . You will also need to turn on decryption under the Options\HTTPS tab of Fiddler.
From MSDN:
When a user moves back and forth between secure and public areas, the ASP.NET-generated session cookie (or URL if you have enabled cookie-less session state) moves with them in plaintext, but the authentication cookie is never passed over unencrypted HTTP connections as long as the Secure cookie property is set.
So basically, the cookie can be passed over both HTTP and HTTPS if the 'Secure' property is set to 'false'.
See also: How can I share an ASP.NET session between HTTP and HTTPS?