I have an ASP.NET page, and when the user arrives I would like to test whether they are able to connect to another page on my server over an HTTPS connection. If TLS is not enabled in the user's settings, they are refused access.
If the test fails, then I would like to display a specific message.
I have considered using:
WebClient _client = new WebClient();
and
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://myurl....");
but these perform the request from the server side, so they connect without issue even when the client itself cannot.
I have also considered an AJAX request; however, I cannot make an HTTPS request from an HTTP page because of the Same Origin Policy.
Do you all have any ideas that would allow me to test https while on an http page?
Thanks in advance
You can create an <img> tag on the HTTP page that points to a valid image served over HTTPS, then handle its load and error events: if the error event fires, the client could not complete the HTTPS request and you can show your message.
I have a solution with two ASP.NET Core MVC projects. One project (Client) is making a request to the other (Server) using HttpClient. When the action in Server receives the request, I want to get the URL of the thing that sent it. Every article I have read presents Request.Headers["Referer"] as the solution, but in my case Headers does not contain a "referer" key (or "referrer").
When receiving the request in Server, how should I find the URL of the Client that sent it?
That is how you get the referring URL for a request, but the referrer isn't "the thing that sent the request". The Referer header is set by the browser when a person clicks a link on one website to go to another; the request the browser then makes to the new website will typically carry a Referer header containing the URL of the prior website.
The receiving server can't get the URL of the "client" making the request; remember, a typical web-browser client isn't at any URL. Typically, all the receiving server can get is the client's IP address.
Since you control the client software, you could have the client put whatever information you want into a header of the request before it is sent, and the server could then read that information back out of the header, as sketched below.
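A minimal sketch of that idea, assuming a made-up header name (X-Caller-Url) and placeholder URLs:
// Client project: attach the caller's own base URL in a custom request header.
// "X-Caller-Url" and the URLs below are placeholders for this example.
public async Task<string> CallServerAsync(HttpClient client)
{
    var request = new HttpRequestMessage(HttpMethod.Get, "https://localhost:5001/api/values");
    request.Headers.Add("X-Caller-Url", "https://localhost:5002");
    var response = await client.SendAsync(request);
    return await response.Content.ReadAsStringAsync();
}
// Server project: read the header back out in the receiving action.
[HttpGet]
public IActionResult Get()
{
    string callerUrl = Request.Headers["X-Caller-Url"].ToString();
    return Ok(callerUrl);
}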
If you're using HttpClient, it is up to the site making the request to add that header; it isn't added automatically in this case. So change the code (or request that the code is changed) to add the header and value that you expect. If you are proxying a request through, you might take the value from the current request's Referer header and add that.
Even in the general case of a browser making the request as part of a normal page cycle, you can't rely on it: the Referer header is often deliberately not sent, depending on the browser version and configuration, whether you're going between different domains, whether the connection is HTTPS or not, and rel attributes on an <a href=...> such as "noreferrer".
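If the standard header is what you need, HttpClient won't set it for you, but you can set it yourself; a sketch with placeholder URLs (note the .NET property is spelled Referrer even though the header it sends is Referer):
public async Task CallWithRefererAsync()
{
    // Set the Referer header by hand on the calling side (placeholder URLs).
    var client = new HttpClient();
    client.DefaultRequestHeaders.Referrer = new Uri("https://localhost:5002/orders");
    var response = await client.GetAsync("https://localhost:5001/api/values");
    // On the receiving side, Request.Headers["Referer"] should now contain that value.
}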
I need to consume a third-party WebSocket API in .NET Core and C#; the WebSocket server is implemented using socket.io (protocol version 0.9), and I am having a hard time understanding how socket.io works... on top of that, the API requires SSL.
I found out that the HTTP handshake must be initiated via a certain path, which is...
socket.io/1/?t=...
...whereby the value of the parameter t is a Unix timestamp (in seconds). The service replies with a session key, timeout information, and a list of supported transport protocols. For simplicity, this first request is made via HttpClient and does not involve any additional headers.
Next, another HTTP request is required, which should result in an HTTP 101 Switching Protocols response. I specified the following headers in accordance with the previous request...
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Key: ...
Sec-WebSocket-Version: 13
...whereby the value of the Sec-WebSocket-Key header is a Base64-encoded GUID that the server will use to calculate the Sec-WebSocket-Accept header value. I also precalculate the expected Sec-WebSocket-Accept value, for validation...
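For reference, the expected accept value can be precomputed like this; the magic GUID is the constant defined in RFC 6455, and the rest is just SHA-1 plus Base64:
// Sec-WebSocket-Accept = Base64( SHA-1( Sec-WebSocket-Key + magic GUID ) )
// Requires System, System.Security.Cryptography and System.Text.
static string ComputeWebSocketAccept(string secWebSocketKey)
{
    const string MagicGuid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
    using (var sha1 = SHA1.Create())
    {
        byte[] hash = sha1.ComputeHash(Encoding.ASCII.GetBytes(secWebSocketKey + MagicGuid));
        return Convert.ToBase64String(hash);
    }
}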
I tried to make that request using HttpClient as well, but that does not seem to work... I actually don't understand why, because I expect an HTTP response. I also tried to make the request using TcpClient, sending a manually prepared GET request over an SslStream, which accepts the remote certificate as expected. Sending data seems to work, but there's no response data... the Read method returns zero.
What am I missing here? Do I need to set up a listener for the WebSocket connection as well, and if so, how? I don't want to implement a feature-complete socket.io client; I'd just like to keep it as simple as possible to catch some events...
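For reference, the raw-handshake variant described above looks roughly like this; example.com and SESSION_ID are placeholders for the real host and the session key returned by the first request, and a real client still has to speak the WebSocket framing afterwards:
// Requires System, System.IO, System.Net.Security, System.Net.Sockets, System.Text.
var tcp = new TcpClient("example.com", 443);                  // placeholder host
var ssl = new SslStream(tcp.GetStream());
ssl.AuthenticateAsClient("example.com");                      // TLS handshake
string key = Convert.ToBase64String(Guid.NewGuid().ToByteArray());
string handshake =
    "GET /socket.io/1/websocket/SESSION_ID HTTP/1.1\r\n" +    // SESSION_ID from the first request
    "Host: example.com\r\n" +
    "Connection: Upgrade\r\n" +
    "Upgrade: websocket\r\n" +
    "Sec-WebSocket-Key: " + key + "\r\n" +
    "Sec-WebSocket-Version: 13\r\n" +
    "\r\n";                                                   // blank line ends the request headers
byte[] bytes = Encoding.ASCII.GetBytes(handshake);
ssl.Write(bytes, 0, bytes.Length);
ssl.Flush();
// Read the "HTTP/1.1 101 Switching Protocols" response headers line by line.
var reader = new StreamReader(ssl, Encoding.ASCII);
string line;
while ((line = reader.ReadLine()) != null && line.Length > 0)
    Console.WriteLine(line);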
The best way to debug these issues is to use a sniffer like Wireshark or Fiddler. I often connect using IE, compare the IE results with my application, and modify my app so that it works like IE does. Using WebClient instead of HttpClient may also work better, because WebClient does more automatically than HttpClient.
A web connection uses the client's headers and the headers in the server's response to negotiate a connection mode, so adding additional headers to your client will change that mode. Cookies are also used to select the connection mode: they are the result of previous connections to the same server, which shortens the negotiation and stores information from earlier connections so less data has to be downloaded from the server. The server remembers the cookies, and they have a timeout and are kept until it expires. The browser history on your client has a list of addresses, and .NET automatically sends the cookies associated with the server.
If a bad connection is made to the server, the cookie can be bad as well, so the only way to connect is to remove it. Usually I go into IE and delete the cookies manually from the history.
To check whether a response is good, look at the status the server returns: a completed response has status 200 OK, error statuses are possible, and a 100 Continue means you need to send another request to get the rest of the webpage.
HTTP has 1.0 (streamed responses) and 1.1 (which allows chunked responses). In my experience the .NET classes did not handle the chunked case, and I have not found a way in .NET to request the next chunk, so if a server responds with 1.1 you may have to tell your client to use 1.0 only, as sketched below.
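If you do want to force HTTP/1.0, HttpWebRequest exposes this directly rather than through a raw header; a sketch with a placeholder URL:
// Ask for an HTTP/1.0 exchange instead of 1.1 (placeholder URL).
var request = (HttpWebRequest)WebRequest.Create("https://example.com/");
request.ProtocolVersion = HttpVersion.Version10;
request.KeepAlive = false;                      // HTTP/1.0-style connection handling
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string body = reader.ReadToEnd();
}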
HTTP uses TCP as the transport layer, so in a sniffer you will see both TCP and HTTP. Usually you can filter the sniffer to show only HTTP and look at the headers for debugging; occasionally TCP disconnects, and then you have to look at the TCP layer to find out why the disconnect occurs.
We use Request.Url.GetLeftPart(UriPartial.Authority) to get the domain part of the site. This served our requirement on http.
We recently changed the site to HTTPS (about 3 days ago), but this still returns http://..
The URLs were all changed to https and show that way in the browser address bar.
Any idea why this happens?
The following example works fine and returns a string with "https":
var uri = new Uri("https://www.google.com/?q=102njgn24gk24ng2k");
var authority = uri.GetLeftPart(UriPartial.Authority);
// authority => "https://www.google.com"
You either have an issue with the HttpContext for that request, or all your requests are really still arriving over http.
You can check the request's HttpContext.Current.Request.IsSecureConnection property. If it is true and the GetLeftPart method still returns http for you, I don't think you will get around replacing the scheme yourself.
If all your requests really are coming in over http, you might enforce a secure connection in IIS.
You should also inspect the incoming URL and log it somewhere for debugging purposes.
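A hedged sketch of that check, with a manual fix-up as a last resort if the scheme still comes back wrong:
// Inspect what the server actually sees for this request (requires System.Web).
var request = HttpContext.Current.Request;
string authority = request.Url.GetLeftPart(UriPartial.Authority);
// If the connection is secure but the scheme still says http, rewrite it by hand.
if (request.IsSecureConnection && authority.StartsWith("http://"))
{
    authority = "https://" + authority.Substring("http://".Length);
}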
This can also happen when dealing with a load balancer. In one situation I worked on, any https requests were converted into http by the load balancer. It still says https in the browser address bar, but internally it's an http request, so the server-side call you are making to GetLeftPart() returns http.
If your request is coming through ARR with SSL offloading, Request.Url.GetLeftPart(UriPartial.Authority) will likewise only return http.
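In those offloading setups the original scheme is often passed along in a forwarded header; a sketch assuming the common X-Forwarded-Proto convention (your load balancer or ARR rules may use a different header):
// Prefer the forwarded scheme header when SSL was offloaded upstream.
// "X-Forwarded-Proto" is a common convention, not guaranteed by every balancer.
var request = HttpContext.Current.Request;
string scheme = request.Headers["X-Forwarded-Proto"] ?? request.Url.Scheme;
string authority = scheme + "://" + request.Url.Authority;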
I'm trying to use the YouTubeFisher library with ASP.NET. I make an HttpWebRequest to grab the HTML content, process the content to extract the video links, and display the links on the web page. I managed to make it work on localhost: I can retrieve the video links and download the video there. But when I push it to the server, it works only if I send the request from the server itself. If the page is accessed by a client browser, the client can see the links properly, but every time the client clicks a link they get an HTTP 403 error, even though the link is correct.
My analysis is that when the server makes the HttpWebRequest to grab the HTML content, it sends its own HTTP headers. I think the links in the HTML that the YouTube server sends back will only work for requests matching the headers sent by my server, so when the client clicks a link, it sends a request to the YouTube server with different headers.
So I'm thinking of getting the HTTP headers from the client, then modifying the server's HTTP headers to include the client's header information before making the HttpWebRequest. I'm not quite sure if this will work; as far as I know, the HTTP headers cannot be modified.
Below is the code from the YouTubeFisher library that makes the HttpWebRequest:
public static YouTubeService Create(string youTubeVideoUrl)
{
    YouTubeService service = new YouTubeService();
    service.videoUrl = youTubeVideoUrl;
    service.GetVideoPageHtmlSource();   // downloads the watch-page HTML
    service.GetVideoTitle();
    service.GetDownloadUrl();
    return service;
}
private void GetVideoPageHtmlSource()
{
    // The request is made by the server, so YouTube sees the server's headers.
    HttpWebRequest req = HttpWebRequest.Create(videoUrl) as HttpWebRequest;
    HttpWebResponse resp = req.GetResponse() as HttpWebResponse;
    videoPageHtmlSource = new StreamReader(resp.GetResponseStream(), Encoding.UTF8).ReadToEnd();
    resp.Close();
}
When a client browses the page, the links are there, but clicking them gives HTTP 403.
When I browse the page from the server itself, everything works as expected.
How do I make the HttpWebRequest on behalf of the client, then? Is my analysis of this problem correct?
Thank you for your input.
Use an HTTP monitor such as Charles, Fiddler or even Firebug to find out what additional headers are being sent from the browser in the success case. I suspect you'll need to duplicate one or more of Accept, User-Agent or Referer.
In the past I've just assumed that YouTube encodes those links so that they only work for the original request IP. If that were the case it would be far more difficult. I have no clue whether it is or not; try forwarding all the header elements you can before going down this route...
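One way to try that from the ASP.NET side is to copy the interesting headers from the incoming browser request onto the outgoing request; a sketch (which headers actually matter is something the HTTP monitor has to tell you):
// Forward a few headers from the browser's request to the outgoing YouTube request.
// Requires System.Net, System.IO, System.Text and System.Web.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(videoUrl);
HttpRequest incoming = HttpContext.Current.Request;
req.UserAgent = incoming.UserAgent;
req.Accept = incoming.Headers["Accept"];
req.Referer = incoming.UrlReferrer != null ? incoming.UrlReferrer.ToString() : null;
using (var resp = (HttpWebResponse)req.GetResponse())
using (var reader = new StreamReader(resp.GetResponseStream(), Encoding.UTF8))
{
    string html = reader.ReadToEnd();
}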
The only possibility that comes to mind is that you'd have to use a JavaScript request to download the page in the client's browser, then upload that to your server for processing, or do the processing in JavaScript.
Or you could have the client download the video stream via your server, so your server would pass through the data. This would obviously use a ton of your bandwidth.
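A rough sketch of that pass-through idea as a generic handler; the handler name and the url query-string parameter are made up, and a real version would need to validate the URL so it can't be abused as an open proxy:
// VideoProxy.ashx: stream the remote video through the server to the client.
public class VideoProxy : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string videoUrl = context.Request.QueryString["url"];     // placeholder parameter name
        var req = (HttpWebRequest)WebRequest.Create(videoUrl);
        using (var resp = (HttpWebResponse)req.GetResponse())
        using (var remote = resp.GetResponseStream())
        {
            context.Response.ContentType = resp.ContentType;
            context.Response.BufferOutput = false;                // stream instead of buffering
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = remote.Read(buffer, 0, buffer.Length)) > 0)
            {
                context.Response.OutputStream.Write(buffer, 0, read);
            }
        }
    }
    public bool IsReusable { get { return false; } }
}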
I have built a web proxy from scratch (using Socket and NetworkStream classes). I am now trying to implement SSL support for it so that it can handle HTTPS requests and responses. I have a good idea of what I need to do (using SslStream) but I don't know how to determine if the request I get from the client is SSL or not.
I have searched for hours on this subject and have been unable to find a suitable solution.
After I do this:
TcpListener pServer = new TcpListener(localIP, port);
pServer.Start(256);
Socket a_socket = pServer.AcceptSocket();
How do I know if I need to read the information using SslStream or NetworkStream?
The client will send you a CONNECT request; after that point you just need to relay the traffic.
Sample CONNECT request:
CONNECT www.google.com:443 HTTP/1.1
After seeing this, just switch to a raw data-relay mode. You cannot intercept or read the data, so you don't need to worry about SslStream anyway; you won't touch it.
However, if you want to MITM (man in the middle) the connection, then you do need to switch to SslStream; otherwise just relay whatever comes in to the target host and port, and that's it.
Obviously, the client browser will pop up an SSL certificate warning if you intercept the request.
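A rough sketch of that relay mode on top of the accepted socket; the parsing is deliberately simplistic and assumes the client waits for the 200 response before sending any TLS bytes:
// Requires System.IO, System.Net.Sockets, System.Text and System.Threading.Tasks.
var clientStream = new NetworkStream(a_socket, ownsSocket: true);
var reader = new StreamReader(clientStream, Encoding.ASCII, false, 8192, leaveOpen: true);
string requestLine = reader.ReadLine();                    // e.g. "CONNECT www.google.com:443 HTTP/1.1"
if (requestLine != null && requestLine.StartsWith("CONNECT "))
{
    while (!string.IsNullOrEmpty(reader.ReadLine())) { }   // skip the remaining request headers
    string[] hostPort = requestLine.Split(' ')[1].Split(':');
    var target = new TcpClient(hostPort[0], int.Parse(hostPort[1]));
    byte[] ok = Encoding.ASCII.GetBytes("HTTP/1.1 200 Connection Established\r\n\r\n");
    clientStream.Write(ok, 0, ok.Length);
    // Relay the encrypted bytes in both directions; the proxy never sees plaintext.
    Task toTarget = clientStream.CopyToAsync(target.GetStream());
    Task toClient = target.GetStream().CopyToAsync(clientStream);
    Task.WaitAll(toTarget, toClient);
}
else
{
    // Plain HTTP request: handle it with the existing NetworkStream logic.
}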
You need to add support for the CONNECT command.
http://www.codeproject.com/KB/IP/akashhttpproxy.aspx
This is why proxy clients use one proxy for HTTP and a different one for HTTPS; you can't know in advance what type of connection you're going to receive.