Post Not Publishing to Page via Facebook API - C#

All I am trying to do is post to a page using the API. This task was extremely simple with Twitter, but with Facebook it has been very challenging.
I am using the following code:
string url = @"https://graph.facebook.com/{page_id}/feed?message=Hello&access_token={app_id}|{app_secret}";
WebClient client = new WebClient();
Stream data = client.OpenRead(url);
StreamReader reader = new StreamReader(data);
string s = reader.ReadToEnd();
Console.WriteLine(s);
It returns data like this:
{"data":[{"story":"Page updated their cover photo.","created_time":"2017-03-13T22:49:56+0000","id":"1646548358..._164741855..."}...
But, the post is never seen on the page! How can I successfully post from my app to my page?

Your request should be a POST request; the response you're getting back shows that you're making a GET request, which reads the page's feed instead of publishing to it.
You also need the publish_pages permission to successfully post to a page.
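A minimal sketch of the same call made as a POST, assuming you have already obtained a page access token that was granted publish_pages (the {page_id} and token placeholders are illustrative, mirroring the question):

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class PagePostSketch
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            var values = new NameValueCollection
            {
                { "message", "Hello" },
                { "access_token", "{page_access_token}" } // a page token, not app_id|app_secret
            };

            // UploadValues sends an application/x-www-form-urlencoded POST body.
            byte[] responseBytes = client.UploadValues(
                "https://graph.facebook.com/{page_id}/feed", "POST", values);

            Console.WriteLine(Encoding.UTF8.GetString(responseBytes));
        }
    }
}

If the post succeeds, the response body contains the id of the new post rather than the feed listing shown in the question.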

Related

How to "create the video" after file uploaded to DailyMotion using c#

I'm following the instructions from here to publish a new video on DailyMotion, using C# and a WebClient.
I successfully got the auth token, then an upload URL, then uploaded the actual file. I'm stuck at step 4, called "create the video".
It states to POST url=<the url I got from the previous step> to https://api.dailymotion.com/me/videos (with the Authorization token in the header), but all my attempts result in "bad request" - without further explanation.
Any ideas?
using (var client = new WebClient())
{
    var createRequest = $"url={videoUpload.url}";
    client.Headers.Add("Authorization", $"Bearer {authToken.access_token}");
    client.Headers.Add("Content-Type", "application/x-www-form-urlencoded");
    var createVideo = client.UploadString("https://api.dailymotion.com/me/videos", "POST", createRequest);
}
I also tried:
var createRequest = $"url={HttpUtility.UrlEncode(videoUpload.url)}";
I tried your code and my video was created successfully. As explained in our documentation, a 400 error is related to a missing/invalid parameter.
I assume you are trying to send the upload URL (returned in step 2) instead of the URL returned by step 3 (the URL of your uploaded file).
You can find an article (with examples of returned values) which uses a simplified way to upload to Dailymotion here.
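In other words, the body of the step-4 request should carry the file URL returned by the upload in step 3, not the upload endpoint obtained in step 2. A hedged sketch of that call (the placeholder strings stand in for the real values from the earlier steps):

using System;
using System.Net;
using System.Web; // HttpUtility.UrlEncode

class CreateVideoSketch
{
    static void Main()
    {
        // Placeholders: the OAuth token from step 1 and the file URL returned by step 3.
        string accessToken = "{access_token}";
        string uploadedFileUrl = "{url_returned_by_step_3}"; // NOT the upload URL from step 2

        using (var client = new WebClient())
        {
            client.Headers.Add("Authorization", "Bearer " + accessToken);
            client.Headers.Add("Content-Type", "application/x-www-form-urlencoded");

            string body = "url=" + HttpUtility.UrlEncode(uploadedFileUrl);
            string createdVideo = client.UploadString("https://api.dailymotion.com/me/videos", "POST", body);
            Console.WriteLine(createdVideo);
        }
    }
}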

Call classic ASP function from ASPX

I am working on an old web application where pages were written in classic ASP and are wrapped in aspx pages using iframes. I am rewriting one of those pages in ASP.NET (using C#), removing the dependency on iframes altogether. The page_to_rewrite.asp calls many other functions present in other ASP pages in the same application.
I am having difficulty calling those ASP functions from aspx.cs. I tried to use the WebClient class like this:
using (WebClient wc = new WebClient())
{
    Stream _stream = wc.OpenRead("http://localhost/Employee/finance_util.asp?function=GetSalary&EmpId=12345");
    StreamReader sr = new StreamReader(_stream);
    string s = sr.ReadToEnd();
    _stream.Close();
    sr.Close();
}
Every request coming to this application is checked for a valid session cookie by an IIS HTTP module, and if it's not present the user is redirected to the login page. So when I call this ASP page URL from the aspx, I get the login page of my application as the response, because no session cookie is present.
Can anyone suggest how I can call the ASP functions successfully?
As told by @Schadensbegrenzer in the comments, I just had to pass the cookie in the request header like this:
using (WebClient wc = new WebClient())
{
    wc.Headers[HttpRequestHeader.Cookie] = "SessionID=" + Request.Cookies["SessionID"].Value;
    Stream _stream = wc.OpenRead("http://localhost/Employee/finance_util.asp?function=GetSalary&EmpId=12345");
    StreamReader sr = new StreamReader(_stream);
    string s = sr.ReadToEnd();
    _stream.Close();
    sr.Close();
}
In other similar questions on Stack Overflow, some people have also suggested including a User-Agent in the request headers if you get blank output from the ASP page, because some web servers require it. See if it helps in your case; mine worked even without it.
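If you do need it, it is a one-line addition inside the using block above, before the OpenRead call (the User-Agent string here is only an example value):

// Some servers refuse or blank out responses that arrive without a User-Agent.
wc.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0 (compatible; MyApp/1.0)";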
Also, you will have to handle the request in your ASP page, something like this:
Dim param_1
Dim param_2
Dim output

param_1 = Request.QueryString("function")
param_2 = Request.QueryString("EmpId")

If param_1 = "GetSalary" Then
    output = GetSalary(param_2)
    Response.Write output
End If
Hope it helps!

C# - loading the HTML of the page the WebBrowser is currently on

I am trying to make a small app that can log in automatically on a website, get certain text from the website, and return it to the user.
To show what I have so far, I did the following to make it log in:
System.Windows.Forms.HtmlDocument doc = logger.Document as System.Windows.Forms.HtmlDocument;
try
{
    doc.GetElementById("loginUsername").SetAttribute("value", "myusername");
    doc.GetElementById("loginPassword").SetAttribute("value", "mypassword");
    doc.GetElementById("loginSubmit").InvokeMember("click");
}
catch (NullReferenceException)
{
    // one of the login elements was not found on the page
}
And the following to load the HTML of the page:
WebClient myClient = new WebClient();
Stream response = myClient.OpenRead(webbrowser.Url);
StreamReader reader = new StreamReader(response);
string src = reader.ReadToEnd(); // finally reading html and saving in variable
Now, it successfully loads HTML, but the HTML of the page where it's not logged in. Is there a way to refer to the current HTML somehow? Or another way to achieve my goal? Thank you for reading!
Use the WebClient class so you can use sessions and cookies.
Check this Q&A: Using WebClient or WebRequest to login to a website and access data
Why don't you make REST API calls and send the data, like username and password, from your code itself?
Is there any Web API for the URL? If yes, you can simply call the service and pass the required parameters. The API will return JSON/XML, which you can parse to extract the information.
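If such an API exists, the call could look roughly like this; the endpoint and parameter names below are purely hypothetical placeholders for whatever the site actually exposes:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class ApiLoginSketch
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Hypothetical endpoint and field names; replace with the site's real API.
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["username"] = "myusername",
                ["password"] = "mypassword"
            });

            HttpResponseMessage response = await client.PostAsync("https://example.com/api/login", form);
            string json = await response.Content.ReadAsStringAsync();
            Console.WriteLine(json); // parse the JSON/XML and pull out the fields you need
        }
    }
}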

Using C# HttpClient to login on a website and scrape information from another page

I am trying to use C# and Chrome Web Inspector to login on http://www.morningstar.com and retrieve some information on the page http://financials.morningstar.com/income-statement/is.html?t=BTDPF&region=usa&culture=en-US.
I do not quite understand the mental process one must use to interpret the information from the Web Inspector in order to simulate a login, keep the session alive, and navigate to the next page to collect information.
Can someone explain or point me to a resource?
For now, I have only some code to get the content of the home page and the login page:
public class Morningstar
{
    public async static void Ru4n()
    {
        var url = "http://www.morningstar.com/";
        var httpClient = new HttpClient();
        httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept", "text/html,application/xhtml+xml,application/xml");
        httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Encoding", "gzip, deflate");
        httpClient.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0");
        httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Charset", "ISO-8859-1");

        var response = await httpClient.GetAsync(new Uri(url));
        response.EnsureSuccessStatusCode();
        using (var responseStream = await response.Content.ReadAsStreamAsync())
        using (var decompressedStream = new GZipStream(responseStream, CompressionMode.Decompress))
        using (var streamReader = new StreamReader(decompressedStream))
        {
            //Console.WriteLine(streamReader.ReadToEnd());
        }

        var loginURL = "https://members.morningstar.com/memberservice/login.aspx";
        response = await httpClient.GetAsync(new Uri(loginURL));
        response.EnsureSuccessStatusCode();
        using (var responseStream = await response.Content.ReadAsStreamAsync())
        using (var streamReader = new StreamReader(responseStream))
        {
            Console.WriteLine(streamReader.ReadToEnd());
        }
    }
}
EDIT: In the end, on the advice of Muhammed, I used the following piece of code:
ScrapingBrowser browser = new ScrapingBrowser();
//set UseDefaultCookiesParser as false if a website returns invalid cookies format
//browser.UseDefaultCookiesParser = false;
WebPage homePage = browser.NavigateToPage(new Uri("https://members.morningstar.com/memberservice/login.aspx"));
PageWebForm form = homePage.FindFormById("memberLoginForm");
form["email_textbox"] = "example#example.com";
form["pwd_textbox"] = "password";
form["go_button.x"] = "57";
form["go_button.y"] = "22";
form.Method = HttpVerb.Post;
WebPage resultsPage = form.Submit();
You should simulate the login process of the web site. The easiest way to do this is to inspect the traffic with a debugging proxy (for example Fiddler).
Here is the login request of the web site:
POST https://members.morningstar.com/memberservice/login.aspx?CustId=&CType=&CName=&RememberMe=true&CookieTime= HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Referer: https://members.morningstar.com/memberservice/login.aspx
** omitted **
Cookie: cookies=true; TestCookieExist=Exist; fp=001140581745182496; __utma=172984700.91600904.1405817457.1405817457.1405817457.1; __utmb=172984700.8.10.1405817457; __utmz=172984700.1405817457.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=172984700; ASP.NET_SessionId=b5bpepm3pftgoz55to3ql4me
email_textbox=test@email.com&pwd_textbox=password&remember=on&email_textbox2=&go_button.x=36&go_button.y=16&__LASTFOCUS=&__EVENTTARGET=&__EVENTARGUMENT=&__VIEWSTATE=omitted&__EVENTVALIDATION=omitted
When you inspect this, you'll see some cookies and form fields like "__VIEWSTATE". You'll need the actual values of those fields to log in. You can use the following steps (a rough sketch of the first two follows below):
1) Make a request and scrape fields like "__LASTFOCUS", "__EVENTTARGET", "__EVENTARGUMENT", "__VIEWSTATE", "__EVENTVALIDATION", as well as the cookies.
2) Create a new POST request to the same page, reuse the CookieContainer from the previous request, build a post string from the scraped fields plus the username and password, and post it with MIME type application/x-www-form-urlencoded.
3) If it succeeds, use the cookies for further requests to stay logged in.
Note: you can use HtmlAgilityPack or ScrapySharp to scrape the HTML. ScrapySharp provides easy-to-use tools for posting forms and browsing websites.
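A minimal sketch of that flow with HttpClient, a shared CookieContainer, and HtmlAgilityPack; the hidden-field ids and form-field names below are taken from the request dump above, but treat them as assumptions to verify in the inspector:

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using HtmlAgilityPack;

class MorningstarLoginSketch
{
    static async Task Main()
    {
        var cookies = new CookieContainer();
        var handler = new HttpClientHandler { CookieContainer = cookies, UseCookies = true };
        using (var client = new HttpClient(handler))
        {
            var loginUrl = "https://members.morningstar.com/memberservice/login.aspx";

            // Step 1: GET the login page and scrape the hidden ASP.NET fields.
            string loginHtml = await client.GetStringAsync(loginUrl);
            var doc = new HtmlDocument();
            doc.LoadHtml(loginHtml);
            string viewState = doc.GetElementbyId("__VIEWSTATE")?.GetAttributeValue("value", "");
            string eventValidation = doc.GetElementbyId("__EVENTVALIDATION")?.GetAttributeValue("value", "");

            // Step 2: POST the form back with the credentials; the handler keeps the session cookies.
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["__VIEWSTATE"] = viewState,
                ["__EVENTVALIDATION"] = eventValidation,
                ["email_textbox"] = "you@example.com",
                ["pwd_textbox"] = "password",
                ["go_button.x"] = "36",
                ["go_button.y"] = "16"
            });
            HttpResponseMessage login = await client.PostAsync(loginUrl, form);
            login.EnsureSuccessStatusCode();

            // Step 3: reuse the same client (and its cookies) for the page you want to scrape.
            string dataHtml = await client.GetStringAsync(
                "http://financials.morningstar.com/income-statement/is.html?t=BTDPF&region=usa&culture=en-US");
            Console.WriteLine(dataHtml.Length);
        }
    }
}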
The mental process is to simulate a person logging in to the website. Some logins are made using AJAX and some with a traditional POST request, so the first thing you need to do is make that request the same way the browser does. In the server response you will get cookies, headers, and other information, and you use that info to build the next, "scrapy" request.
Steps are:
1) Build a request, like the browser does, to authenticate yourself to the app.
2) Inspect the response, and save the headers, cookies, or other useful info needed to persist your session with the server.
3) Make another request to the server, using the info you gathered in step 2.
4) Inspect the response, and use a data-extraction approach (or something else) to pull out the data.
Tips:
You are not using a JavaScript engine here; some websites use JavaScript to render graphs or perform some interaction in the DOM. In those cases you may need to use a WebKit library wrapper.

Issues retrieving Facebook social plugin comments for page, C# HttpWebRequest class

I'm hoping I've done something knuckle-headed here and there is an easy answer. I'm simply trying to retrieve the list of comments for a page on my site. I use the social plug-in and then retrieve the comment id via the edge event. Server side, I send the page id back and do a simple request using an HttpWebRequest. This worked well back in October, but now I get an 'internal error' response from FB. I can take the same URL string, put it into a browser, and get the comments back as JSON.
StringBuilder url = new StringBuilder();
url.Append("https://graph.facebook.com/comments/?ids=" + comment.page);
string requestString = url.ToString();
HttpWebRequest request = WebRequest.Create(requestString) as HttpWebRequest;
HttpWebResponse response = request.GetResponse() as HttpWebResponse;
Ideas? Thanks much in advance.
Since you're using the Facebook C# SDK (per your tag), try:
var url = "{your url}";
var api = new Facebook.FacebookClient(appId, appSec);
dynamic commentsObj = api.Get("/comments/?ids=" + url);
dynamic arrayOfComments = commentsObj[url].data;
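If that call returns the comments, enumerating them is plain dynamic access over the returned JSON; the message field below follows the Graph API comment object, so adjust it to whatever your response actually contains:

// Sketch: print each comment's text from the dynamic result above.
foreach (dynamic comment in arrayOfComments)
{
    Console.WriteLine(comment.message);
}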
