Changing WebClient response time - C#

I want to use the Facebook Like button on my website. Most of my visitors are from Iran, where Facebook is filtered, so for those visitors the Like button gets blocked and the page looks very ugly. What I have thought of to prevent this is to use WebClient to try to connect to Facebook: if it succeeds, I place the Like button, otherwise I don't:
string fb_result = "failed";
WebClient webclient = new WebClient();
try
{
    fb_result = webclient.DownloadString("http://www.facebook.com");
}
catch
{
    fb_result = "failed"; // if it is being filtered, an exception is raised
}
Then in my HTML:
<%if(fb_result!="failed"){%>
<div id="fb-root"></div>
<script>(function(d, s, id) {
var js, fjs = d.getElementsByTagName(s)[0];
if (d.getElementById(id)) return;
js = d.createElement(s); js.id = id;
js.src = "//connect.facebook.net/en_US/all.js#xfbml=1";
fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));</script>
<div class="fb-like" data-href="http://www.mysite.com" data-send="true" data-width="450" data-show-faces="true"></div>
<%}%>
This works fine, but the problem is that when WebClient is unable to connect, it takes too much time to raise the error. Is there any way to make WebClient try for a shorter time and raise the error faster when it cannot connect?
By the way, any other way to check whether a connection to Facebook is possible would be appreciated, as this might not be the only way to check it.
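For the timeout part of the question, a common workaround is to subclass WebClient and shorten the timeout of the underlying request. A minimal sketch (the TimeoutWebClient name and the 3-second value are assumptions, not part of the original code):

using System;
using System.Net;

// WebClient whose underlying WebRequest uses a custom timeout
// (the default is about 100 seconds).
public class TimeoutWebClient : WebClient
{
    private readonly int _timeoutMilliseconds;

    public TimeoutWebClient(int timeoutMilliseconds)
    {
        _timeoutMilliseconds = timeoutMilliseconds;
    }

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        if (request != null)
        {
            request.Timeout = _timeoutMilliseconds;
        }
        return request;
    }
}

// usage: fails after roughly 3 seconds instead of the default timeout
// WebClient webclient = new TimeoutWebClient(3000);

Note that, as the answer below points out, this only shortens the server-side check; it still does not reflect the visitor's own connectivity.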

As I stated in my comments, the WebClient code you've written will not work because it is executed on the server, so it will attempt to connect to Facebook using your webserver's connection, which will always succeed (unless your webhost goes offline).
The best approach is to have an AJAX request made by your page when it loads in the browser. It should make a request for a known resource on Facebook (such as their homepage, or the "Like" image itself). Your AJAX response handler will then load the rest of the Facebook client scripts if, and only if, the response is as-expected. If you test it by requesting the Like image then you just need to check the response's content-type (i.e. ensure it's image/png); if the response has a non-200 status code or if the request times-out then you don't load the Facebook scripts.

Related

Can my OAuth2 callback page be the same HTML page, and how do I get the token?

First off, I'm using static HTML and JavaScript with a C# Web API.
So I have a link on my HTML file, say index.html, that calls an OAuth2 server.
Now, is it OK to set the callback page to index.html?
It seems to work, and it gets sent to index.html?code=125f0...
Is this OK to do, or do I need a separate callback page? Is code the token?
And how should I consume it? The JavaScript doesn't seem to get hit on the callback.
Edit: actually, the JavaScript does seem to get hit on the callback, but I'm only getting undefined from:
$(function () {
    var params = {},
        queryString = location.hash.substring(1),
        regex = /([^&=]+)=([^&]*)/g,
        m;
    while (m = regex.exec(queryString)) {
        params[decodeURIComponent(m[1])] = decodeURIComponent(m[2]);
    }
    if (params.error) {
        if (params.error == "access_denied") {
            sAccessToken = "access_denied";
            //alert(sAccessToken);
        }
    } else {
        sAccessToken = params.code;
        alert(sAccessToken);
    }
});
Also, can my callback page be a C# Web API call, and the token sent that way? I'm guessing no, because then you'd never know which user agent is sending it, and you couldn't communicate back unless you somehow passed an ID and used SignalR. It seems better to get it in JavaScript and send the token to the Web API. But then, can the Web API make calls to the resource if it has the token?
sorry, I'm still learning
OAuth2 has various "profiles". The "Authorization Code Grant" flow (which you are using) requires a server-side component that exchanges the code for a token.
Single Page Applications, typically use the implicit flow. See here for a quick description: https://docs.auth0.com/protocols#5 (ignore references to "Auth0", the underlying protocol is the same regardless of the implementation).
See here for a more thorough description of both flows: What is the difference between the 2 workflows? When to use Authorization Code flow?
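To illustrate the server-side exchange mentioned above, here is a rough C# sketch; the token endpoint URL, redirect URI, client ID and client secret are all placeholders, and the exact parameter names vary slightly between providers:

using System.Collections.Specialized;
using System.Net;
using System.Text;

// Exchanges the ?code=... value for an access token, on the server.
public static string ExchangeCodeForToken(string code)
{
    using (var client = new WebClient())
    {
        var form = new NameValueCollection
        {
            { "grant_type", "authorization_code" },
            { "code", code },
            { "redirect_uri", "http://localhost/index.html" },  // placeholder
            { "client_id", "my-client-id" },                    // placeholder
            { "client_secret", "my-client-secret" }             // placeholder
        };
        byte[] response = client.UploadValues(
            "https://authserver.example.com/oauth/token",       // placeholder
            "POST",
            form);
        // The response is typically JSON containing an "access_token" field;
        // parse it with whatever JSON library you already use.
        return Encoding.UTF8.GetString(response);
    }
}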
Sorry, it was sort of a strange question with bad wording. What I ended up doing is making an HTML callback page which takes in the code. I pop up the OAuth2 server page in a window, and it then calls my callback page. My callback page then closes the window and passes the code back to my parent page.

HttpWebResponse returns only one element of the page

Hello, I am making a simple HttpWebRequest and then reading the response (with a StreamReader), just wanting to get the HTML page of a website, but I only get one label (a single element of the page). In the browser everything is fine (I see the whole page), but when I set cookies to deny/disabled, the browser also shows only that one label and everything else disappears. So my conclusion is that, since I get the same page in my code as in the browser with cookies disabled, my HttpWebRequest is effectively running with cookies denied/disabled.
You can go to https://www.bbvanetcash.com/local_kyop/KYOPSolicitarCredenciales.html and disable cookies with F12, and you will see the difference, and also this page with just one label.
So this is my code; any ideas what I need to change here?
HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create("https://www.bbvanetcash.com/local_kyop/KYOPSolicitarCredenciales.html");
HttpWebResponse myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse();
Stream streamResponseLogin = myHttpWebResponse.GetResponseStream();
StreamReader streamReadLogin = new StreamReader(streamResponseLogin);
LoginInfo = streamReadLogin.ReadToEnd();
Your code is receiving the complete page content, but it cannot receive the dynamic content. This is happening because the page you are trying to access relies on cookies to maintain the session, as well as JavaScript (it is using jQuery) to load dynamic content and provide a rich user experience.
To successfully receive the whole page, your code must:
support retrieving, storing and sending cookies across the various HttpWebRequest and HttpWebResponse calls (see the sketch below for this part);
be able to execute JavaScript code to load the dynamic contents/markup of the page.
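For the cookie half of that list, a minimal sketch is to attach a CookieContainer to the request so that cookies set by the server are stored and sent back on later requests (executing the page's JavaScript is beyond what HttpWebRequest can do on its own, so the dynamic content still won't appear):

using System.IO;
using System.Net;

public static string DownloadWithCookies(string url)
{
    // One container reused across requests, so cookies set by the first
    // response are automatically sent with later requests in the session.
    CookieContainer cookies = new CookieContainer();

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.CookieContainer = cookies;

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}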
To test whether your code is receiving proper values or not, visit the Web Sniffer site and put your URL there.
As you can try on the web-sniffer site, for www.google.com the response you get is a redirect instruction... which means that even to access Google's home page, your code must understand HTTP status codes (a 302 in that case).

File download via Web API controller - handling errors

I am working on an application that provides some links for users to download files.
The page itself is served up by an MVC controller but the links are pointing to a WebAPI controller running on a separate domain.
(I would have preferred the same domain, but for various reasons it has to be a separate project and it will run on a separate domain. I don't think CORS is part of the issue anyway, as this is not using XHR, but I mention it just in case.)
So in development, the main MVC project is http://localhost:56626/Reports/
And the links on the page might look like this:
<a href="http://localhost:51288/api/ReportDownload?ReportID=12345">Report 12345</a>
where port 51288 is hosting the Web API.
The WebAPI controller uses ReportID to locate a file, and write its contents into the response stream, setting the disposition as an attachment:
// security/permission checks and scaffolding/database interaction
// left out for clarity
try
{
    string path = @"C:\ReportFiles\TestReport.csv";
    var result = new HttpResponseMessage(HttpStatusCode.OK); // presumably created in the elided scaffolding
    var stream = new FileStream(path, FileMode.Open);
    result.Content = new StreamContent(stream);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("text/csv");
    var disp = new ContentDispositionHeaderValue("attachment");
    disp.FileName = "TestReport.csv";
    result.Content.Headers.ContentDisposition = disp;
    return result;
}
catch (Exception ex)
{
    // how to return a response that won't redirect on error?
}
By doing this, the user can click on the link and, without any redirection, gets prompted to save or open the file, which is what I want; they stay on the original page with the links and just get an Open/Save dialog from the browser.
The problem arises when something goes wrong in the Web API controller - either an exception or some internal logic condition that means the file cannot be downloaded.
In this case, when clicking the link, the download doesn't happen (obviously) and they get taken to the target URL instead, i.e. http://localhost:51288/api/ReportDownload?ReportID=12345, which is not desirable for my requirements.
I would much rather be able to catch the error somehow on the client side, by returning e.g. HTTP 500 in the response, and just display a message to the user that the download failed.
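One way to do the server half of that, assuming the controller derives from ApiController, is to return an explicit error status from the catch block; a sketch (the message text is just a placeholder):

catch (Exception ex)
{
    // Returns HTTP 500 with a plain error message instead of a file.
    return Request.CreateErrorResponse(
        HttpStatusCode.InternalServerError,
        "The report could not be downloaded.");
}

The open question, as described below, is how the browser surfaces that status when the link was clicked directly.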
Now to be honest, I don't even understand how the browser can do the "in place" File/Save dialog in the first place:
I always thought that if you click a link that has no explicit target attribute, the browser would just open the new request in your current tab, i.e. it's just another GET request to the target URL, but it seems this is not the case.
The browser seems to be doing a hidden background fetch of the target URL in this case (same behaviour in FF, Chrome and IE), which I cannot even see in the F12 tools.
The F12 network log shows no activity at all except in the specific case where the response has NOT been set up with Content-Disposition: attachment, i.e. an error; only in that case do I see the (failed) HTTP GET logged in the network request list.
I suppose I could just catch any exception in the controller and send back a dummy file called "Error.csv" with contents "Ha Ha Nope!" or something similar, but that would be a last resort...any ideas welcome!
If the user clicks on the link, the browser will follow it - then depending on the response headers and browser configuration, it'll either show the file dialog or render directly - you can't really change that behavior (apart from using preventDefault when the link is clicked, which kind of defeats the purpose).
I'd suggest taking a closer look at http://jqueryfiledownload.apphb.com/ which lets you do something like this:
$.fileDownload('some/file/url')
.done(function () { alert('File download a success!'); })
.fail(function () { alert('File download failed!'); });
Then you could bind the download action using jQuery.

AsyncCallback tied to a WebRequest is never reached

I have an ASPX page which should retrieve some content (some plain text data) asynchronously, and write something before/during/after the operation.
Currently, I can reach the "during" step, but the page content doesn't change afterwards.
The big issue is that I cannot perform any kind of debugging, due to infrastructure (mis)configuration and not being allowed to run the Remote Debugging Tools; I have to rely on publishing and seeing what happens...
The code-behind looks like this (it is a .NET 3.5 project created under VS2008 and later upgraded to VS2010; changing the target framework is not an option):
void Page_Load()
{
    myLabel = "Preparing to fetch content ...";
    FetchContent();
}

void FetchContent()
{
    try
    {
        // "http://myUrl" returns text with header 'Content-disposition: inline;'
        // If called directly, the text can be seen in the browser alright.
        WebRequest request = WebRequest.Create("http://myUrl");
        myLabel = "Fetching ...";
        request.BeginGetResponse(new AsyncCallback((result) =>
        {
            // EXCEPTION HERE: 401 Unauthorized ??? url works via browser!
            WebResponse resp = request.EndGetResponse(result);
            StreamReader stream = new StreamReader(resp.GetResponseStream());
            myLabel = "Done";
        }), null);
    }
    catch { myLabel = "Request KO"; }
}
In the ASPX code, myLabel is simply shown:
<body>
<pre><%=myLabel %></pre>
</body>
The URL responds fairly quickly if called from a browser, but in this code myLabel never shows "Done"; it stays on the "Fetching ..." text, as if the callback is never fired.
Am I missing something obvious here ?
UPDATE
Closer inspection revealed that EndGetResponse returns a 401 Unauthorized status code. It works flawlessly if I invoke the exact same URL via a browser, though! Some more focused searching then got me to the solution.
After finding the 401 Unauthorized status code in the response, I managed to find other answers right here on SO which let me solve my (as it turns out) trivial issue by adding this:
request.UseDefaultCredentials = true;
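For context, a sketch of where that line goes in FetchContent above (everything else unchanged); an explicit NetworkCredential would be the alternative if the site does not accept the application's Windows identity:

WebRequest request = WebRequest.Create("http://myUrl");
// Send the current Windows identity with the request, as the browser was
// presumably negotiating automatically; set this before BeginGetResponse.
request.UseDefaultCredentials = true;
// request.Credentials = new NetworkCredential("user", "password", "DOMAIN"); // explicit alternative
myLabel = "Fetching ...";
request.BeginGetResponse(new AsyncCallback((result) =>
{
    WebResponse resp = request.EndGetResponse(result); // no longer throws 401
    // ...
}), null);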

Server side include external HTML?

In my ASP.NET MVC application I need to include a page that shows a legacy page.
The body of this page is created by calling an existing Perl script.
This Perl script is externally hosted.
Is there a way to do something like this:
<!-- #Include virtual="http://www.example.com/theScript.plx"-->
Not as a direct include, because ASP.NET server-side includes require the page to be compiled at the server.
You could use jQuery to download the HTML from that URL when the page loads, though I appreciate that's not perfect.
Alternatively (and I have no idea whether this will work), you could perform a WebRequest to the Perl page from your ASP.NET MVC controller, and put the resulting HTML in the view as text. That way you could make use of things like output caching to limit the hits to the Perl page if it doesn't change often.
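As a sketch of that second suggestion (the controller name, view name and the 10-minute cache duration are assumptions, not part of the original answer):

using System.Net;
using System.Web.Mvc;

public class LegacyController : Controller
{
    // Output caching limits how often the Perl script is actually hit;
    // the duration is an arbitrary choice.
    [OutputCache(Duration = 600, VaryByParam = "none")]
    public ActionResult LegacyPage()
    {
        string html;
        using (var client = new WebClient())
        {
            html = client.DownloadString("http://www.example.com/theScript.plx");
        }
        ViewBag.LegacyHtml = html;
        return View(); // the view renders ViewBag.LegacyHtml, e.g. with Html.Raw
    }
}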
If you wanted to do it all in one go, you could do an HTTP Request from the server and write the contents to the page?
Something like this:
Response.Write(GetHtmlPage("http://www.example.com/theScript.plx"));
Calling this method:
public String GetHtmlPage(string strURL)
{
    // the html retrieved from the page
    String strResult;
    WebResponse objResponse;
    WebRequest objRequest = System.Net.HttpWebRequest.Create(strURL);
    objResponse = objRequest.GetResponse();
    // the using keyword will automatically dispose the object
    // once complete
    using (StreamReader sr = new StreamReader(objResponse.GetResponseStream()))
    {
        strResult = sr.ReadToEnd();
        // Close and clean up the StreamReader
        sr.Close();
    }
    return strResult;
}
(Most code ripped blatantly from here and therefore not checked)
You could implement this in a low-key fashion by simply using a frame and setting the frame source to the URL that needs to be included. This is quite simple and can be done without any server- or client-side scripting, so that'd be my preferred approach, if possible.
If you want the HTML to appear to come from your server, however, you'll need to include it manually, typically by using WebRequest as Neil says. You may wish to cache the remote page for performance; though, since it's a Perl script, I'll assume the page is dynamic, so this might not be a great idea.
