Facebook links without a browser - C#

I've read a lot of posts on here and on other sites, but still haven't found an answer to my question. So here goes.
I have a Facebook link that requires you to be logged in. Is there a way, using .NET (C#), that I can use the Facebook API or something similar to "click" this link without a browser control?
Essentially, I have an application I wrote that detects certain Farmville links. Right now I'm using a browser control to process the links. However, it's messy and crashes a lot.
Is there a way I can send the URL along with maybe a token and an API key to process the link?
Does anyone even understand what I'm asking? lol.

Disclaimer: I don't know what Facebook's API looks like, but I'm assuming it involves sending HTTP requests to their servers. I am also not 100% sure on your question.
You can do that with the classes in the System.Net namespace, specifically WebRequest and WebResponse:
using System.Net;
using System.IO;
...
// Create returns a WebRequest, so cast to HttpWebRequest to get properties like UserAgent
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://apiurl.com?args=go.here");
req.UserAgent = "The name of my program";
WebResponse resp = req.GetResponse();
string responseData = null;
using (TextReader reader = new StreamReader(resp.GetResponseStream()))
{
    responseData = reader.ReadToEnd();
}
// do whatever with responseData
You could put that in a method for easy access.
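For example, a small helper along those lines (just a sketch; GetResponseText is a name I picked, and the URL and user-agent string are placeholders):
static string GetResponseText(string url)
{
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
    req.UserAgent = "The name of my program";
    // dispose the response and reader when done
    using (WebResponse resp = req.GetResponse())
    using (TextReader reader = new StreamReader(resp.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}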

Sounds like you are hacking something... but here goes.
You might want to try a solution like Selenium, which is normally used for testing websites.
It is a little trouble to set up, but you can programmatically launch the Facebook website in a browser of your choosing for login, programmatically enter the username and password, programmatically click the login button, then navigate to your link; a rough sketch follows.
It is kind of clunky, but it gets the job done no matter what, since it appears to Facebook that you are accessing their site from a browser.
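For example, a rough sketch using Selenium's C# WebDriver bindings - note that the element IDs ("email", "pass", "loginbutton") are my guesses at Facebook's login form and may well have changed, and the final URL is a placeholder:
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
...
IWebDriver driver = new FirefoxDriver();
driver.Navigate().GoToUrl("https://www.facebook.com/login");
// fill in the login form and submit it
driver.FindElement(By.Id("email")).SendKeys("you@example.com");
driver.FindElement(By.Id("pass")).SendKeys("yourPassword");
driver.FindElement(By.Id("loginbutton")).Click();
// the browser session is now logged in, so navigate to the protected link
driver.Navigate().GoToUrl("http://apps.facebook.com/your-farmville-link");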
I've tried similar sneaky tricks to enter my name over and over for Publisher's Clearing House, but they eventually wised up and banned my IP.

Related

How do I download an image from a Gmail message?

If a Gmail user sends an email with an inline image, then the image will be stored on Google's servers, and the email's HTML will include code like this...
<img src="https://mail.google.com/mail/u/0?lots-of-stuff-here" alt="Yoda.png" width="300" height="300">
If you're logged in to Google, you can visit this link in your browser, and you'll be redirected to the image, whose URL looks like...
https://gm1.ggpht.com/lots-of-stuff-here-too
If you're not logged in, you'll get sent to Google's sign-in page.
I want to download the image from my C# code, but I have two problems: first, I need to get past the Google sign-in, and second, I need to handle the redirect.
I tried the following...
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.AllowAutoRedirect = true;
request.Credentials = new NetworkCredential("gmailaddress", "password");
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
...but the response object has a ResponseUri for the Google sign-in page, which suggests it ignored the credentials I passed in.
Does anyone have any idea how I can do this? Thanks.
Update: Following the comment by SiKing, I don't think this is a Gmail issue as such; I think the storage being used here is more general Google storage. The problem is I can't find any Google API for accessing that storage. If anyone knows of one, please point me in the right direction.

Application to download files from URL

Email Link
So I currently work for an engineering company and we receive files via Aconex (Aconex is a web-based document management system for all consulting teams on a given project). Currently we have a system where we download the files (there is a link in the email that leads to the Aconex website) from the email and file them in a dated folder under the specific project. I've attached an image of the Aconex email link.
Now for the issue. Sometimes it can be quite overwhelming when you receive 20+ project-related emails in a day (on top of everything else), and some of these may slip through the gaps.
Basically I would like to automate this process somehow. I want the user to be able to add the email link to the application, hit 'Process' and the files are then downloaded and filed under the specific project.
I've got some basic programming experience (mainly in c#) and would like to use this as my first 'real world' programming project.
Any help that can be offered is really appreciated.
Thanks people!
So, you actually want to make an HTTP request to download the file from a publicly available web server?
You would need these two namespaces at the top of your file:
using System.IO;
using System.Net;
The IO namespace will help you with local file system paths, directories, etc. Check: https://msdn.microsoft.com/en-us/library/54a0at6s(v=vs.110).aspx
The Net namespace contains many options for creating requests or downloading files.
Then, in a method, you create a WebRequest instance, e.g. with the static Create method, which takes a URI object as input:
var httpRequest = WebRequest.Create(urlObject);
// here you set up things like authorization or the request method:
httpRequest.Method = "GET";
httpRequest.Timeout = 10000; // timeout in milliseconds
// and finally start the request (this is an async approach);
// pass the request itself as state so the callback can complete it
httpRequest.BeginGetResponse(getResponse, httpRequest);
Here you get the response
public void getResponse(IAsyncResult __result)
{
    // recover the request we passed in as state, then complete it
    var httpRequest = (WebRequest)__result.AsyncState;
    WebResponse response = httpRequest.EndGetResponse(__result);
    // here you deal with the response
}
Or you may use a simpler way:
var webClient = new WebClient();
var urlObject = new Uri("http://yourUrl");
string localPath = "local\\filesystem\\path.file";
webClient.DownloadFileAsync(urlObject, localPath, this);
Check: https://msdn.microsoft.com/en-us/library/w8bysebz(v=vs.110).aspx
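And to tie it back to your filing workflow, here is a minimal sketch that downloads one file into a dated folder under a project directory (the folder layout and the names projectRoot/projectName are my assumptions from your description):
public static void DownloadToProjectFolder(string url, string projectRoot, string projectName)
{
    // e.g. C:\Projects\SomeProject\2016-03-01
    string folder = Path.Combine(projectRoot, projectName, DateTime.Now.ToString("yyyy-MM-dd"));
    Directory.CreateDirectory(folder); // does nothing if the folder already exists
    // name the local file after the last segment of the URL
    string fileName = Path.GetFileName(new Uri(url).LocalPath);
    using (var webClient = new WebClient())
    {
        // synchronous variant, simplest to start with
        webClient.DownloadFile(url, Path.Combine(folder, fileName));
    }
}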
Have I got your issue right?

Web Search using C# Program

I am trying to do a web search from within a C# app. I am currently using this code that gets an error.
WebRequest http = WebRequest.Create(url.ToString()); // url is the StringBuilder shown below
HttpWebResponse response = (HttpWebResponse)http.GetResponse(); // error occurs here
I keep getting "The remote name could not be resolved: 'search.yahooapis.com'".
Here is the code for the url parameter:
StringBuilder url = new StringBuilder();
url.Append("http://search.yahooapis.com/WebSearchService/V1/webSearch?");
url.Append("appid=YahooDemo&results=100&query=");
url.Append(HttpUtility.UrlEncode(searchFor));
The problem, I think, is that I need an API key from Yahoo in place of 'YahooDemo' in the above code. I went to http://developer.apps.yahoo.com/projects and got an application ID, but when I enter it, it still does not work. I think the problem is that I did not know what to put in the Yahoo project for Application URL and Callback Domain - I don't really know what these even mean. I am happy to use other providers such as Google or Bing if this makes it easier. But I am new to C#, so I really need detailed but simple explanations to understand what I need to do. I am a bit lost.
In the end I basically just want to do a web search from my C# program to look for key words, so if there is an easier way to do this, I am all for it. Any suggestions?

Getting source code of redirected HTTP site via C# WebClient

I have a problem with a certain site - I am provided with a list of product ID numbers (about 2000), and my job is to pull data from the producer's site. I have already tried forming the URLs of the product pages, but there are some unknown variables that I can't fill in to get results. However, there is a search field, so I can use a URL like this: http://www.hansgrohe.de/suche.htm?searchtext=10117000&searchSubmit=Suchen - the problem is that this page displays some info (probably JavaScript) and then redirects straight to the desired page - the one I need to pull data from.
Is there any way of tracking this redirection?
I would like to post some of my code, but everything I have so far I find unhelpful, because it just downloads the source of the original page.
public static string Download(string uri)
{
    WebClient client = new WebClient();
    client.Encoding = Encoding.UTF8;
    client.Headers.Add("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");
    string s = client.DownloadString(uri);
    return s;
}
Also, the suggested answer is not helpful in this case, because the redirection doesn't come with the HTTP request - the page is redirected after a few seconds of loading the http://www.hansgrohe.de/suche.htm?searchtext=10117000&searchSubmit=Suchen URL.
I just found a solution. Since I'm new and have to wait a few hours to answer my own question, it will end up here.
I hope that other users will find it useful:
// cleaned-up version of my pseudocode, for a WinForms app
webBrowser1.Navigate(url);
// wait until the browser has been redirected away from the original URL
while (webBrowser1.Url == null || webBrowser1.Url.AbsoluteUri == url)
{
    Application.DoEvents(); // keep the UI message loop alive while waiting
}
string desiredUri = webBrowser1.Url.AbsoluteUri;
Thanks for the answers.
Welcome to the wonderful world of page scraping. The short answer is "you can't do that." Not in the general case, anyway, and certainly not with WebClient. The problem appears to be that some Javascript does the redirection. And since all WebClient does is download the page, it's not even going to download the Javascript. Much less parse and execute it.
You might be able to do this by creating a program that uses the WebBrowser class. You can have it load the page. It should do the redirect and then you can inspect the result, which should be the page you were looking for. I haven't actually done this, but it does seem possible.
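For example, something along these lines - a sketch I haven't run, which assumes a WinForms message loop and uses the search URL from the question:
var browser = new WebBrowser();
string startUrl = "http://www.hansgrohe.de/suche.htm?searchtext=10117000&searchSubmit=Suchen";
browser.DocumentCompleted += (sender, e) =>
{
    // DocumentCompleted fires again after the JavaScript redirect;
    // at that point e.Url is the page you were looking for
    if (e.Url.AbsoluteUri != startUrl)
        Console.WriteLine("Redirected to: " + e.Url.AbsoluteUri);
};
browser.Navigate(startUrl);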
Your other option is to fire up your Web browser's developer tools (like IE's F12 Developer Tools) and watch what's happening. You can then inspect the Javascript that's being executed as well as the modified DOM, and see where the redirect happens.
Yes, it's tedious work. But once you figure out the redirect for one page, you can probably generate the URL for the other pages you want automatically.

How to ping a website / IP that is deployed in IIS

I need to make a website designed to monitor / check the connectivity of our internal applications deployed on IIS. It's essentially a list of links to the internal websites we have developed. The question is: how would I be able to check if a website is up and running, and how would I check if a website is down? I will simply display the links to our systems and color them based on status: green for up, and red if a site is down or has errors. Hoping for your advice; sample code would be appreciated.
Just load anything from that server; if it loads, your site is up and running. If it doesn't load, then just generate an error or show red.
The simplest way to do this is to have a windows service or scheduled task running which performs WebRequests against the list of websites and checking the status codes.
If a status code of 200 is returned, show green. Anything else (4xx, 5xx, timeout), show red. Have the service store the results in a database and have the 'red-green' page read from that database.
That would be a generic, one-size-fits-all solution. It may not work for all sites, as some sites could have basic authentication, in which case your monitor will incorrectly record that the site is down. So you would need to store metadata against the sites and perform basic authentication (or any other business logic) to determine whether it's up or down.
If you have access to the websites you want to monitor then I would have thought the easiest way is to put a status page on the websites you want to monitor designed to be polled by your service. This way if the website is up you can get more advanced status information from it by reading the page content.
If you just want to check the http status then just access any page on the website (preferably a small one!) and check the response status code.
Something like
// prepare the web page we will be asking for
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(Url);
//if (AuthRequired())
//    request.Credentials = new NetworkCredential(Username, Password);
// execute the request
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Then you can read response.StatusCode.
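One caveat: GetResponse throws a WebException for 4xx/5xx responses and timeouts rather than returning normally, so a simple up/down check needs a try/catch. A minimal sketch (the method name IsSiteUp is mine):
static bool IsSiteUp(string url)
{
    try
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Timeout = 5000; // fail fast rather than hang the monitor
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode == HttpStatusCode.OK; // green
        }
    }
    catch (WebException)
    {
        return false; // 4xx, 5xx, DNS failure, or timeout: show red
    }
}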
