I can hit this endpoint, http://catfacts-api.appspot.com/api/facts?number=99, via Postman, and it returns JSON.
Additionally I am using create-react-app and would like to avoid setting up any server config.
In my client code I am trying to use fetch to do the same thing, but I get the error:
No 'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:3000' is therefore not allowed
access. If an opaque response serves your needs, set the request's
mode to 'no-cors' to fetch the resource with CORS disabled.
So I am trying to pass an object to my fetch which will disable CORS, like so:
fetch('http://catfacts-api.appspot.com/api/facts?number=99', { mode: 'no-cors' })
  .then(response => response.json())
  .then(data => {
    console.table(data);
    return data;
  })
  .catch(e => {
    console.log(e);
    return e;
  });
Interestingly enough, the error I get is actually a syntax error with this function. I am not sure my actual fetch is broken, because when I remove the { mode: 'no-cors' } object and supply it with a different URL, it works just fine.
I have also tried to pass in the object { mode: 'opaque' }, but this returns the original error from above.
I believe all I need to do is disable CORS. What am I missing?
mode: 'no-cors' won’t magically make things work. In fact it makes things worse, because one effect it has is to tell browsers, “Block my frontend JavaScript code from seeing contents of the response body and headers under all circumstances.” Of course you never want that.
What happens with cross-origin requests from frontend JavaScript is that browsers by default block frontend code from accessing resources cross-origin. If Access-Control-Allow-Origin is in a response, then browsers relax that blocking and allow your code to access the response.
But if a site sends no Access-Control-Allow-Origin in its responses, your frontend code can’t directly access responses from that site. In particular, you can’t fix it by specifying mode: 'no-cors' (in fact that’ll ensure your frontend code can’t access the response contents).
However, one thing that will work: if you send your request through a CORS proxy.
You can also easily deploy your own proxy to Heroku in just 2-3 minutes, with 5 commands:
git clone https://github.com/Rob--W/cors-anywhere.git
cd cors-anywhere/
npm install
heroku create
git push heroku master
After running those commands, you’ll end up with your own CORS Anywhere server running at, for example, https://cryptic-headland-94862.herokuapp.com/.
Prefix your request URL with your proxy URL; for example:
https://cryptic-headland-94862.herokuapp.com/https://example.com
Adding the proxy URL as a prefix causes the request to get made through your proxy, which:
Forwards the request to https://example.com.
Receives the response from https://example.com.
Adds the Access-Control-Allow-Origin header to the response.
Passes that response, with that added header, back to the requesting frontend code.
The browser then allows the frontend code to access the response, because that response with the Access-Control-Allow-Origin response header is what the browser sees.
This works even if the request is one that triggers browsers to do a CORS preflight OPTIONS request, because in that case, the proxy also sends back the Access-Control-Allow-Headers and Access-Control-Allow-Methods headers needed to make the preflight successful.
I can hit this endpoint, http://catfacts-api.appspot.com/api/facts?number=99, via Postman
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS explains why it is that even though you can access the response with Postman, browsers won’t let you access the response cross-origin from frontend JavaScript code running in a web app unless the response includes an Access-Control-Allow-Origin response header.
http://catfacts-api.appspot.com/api/facts?number=99 has no Access-Control-Allow-Origin response header, so there’s no way your frontend code can access the response cross-origin.
Your browser can get the response fine and you can see it in Postman and even in browser devtools—but that doesn’t mean browsers expose it to your code. They won’t, because it has no Access-Control-Allow-Origin response header. So you must instead use a proxy to get it.
The proxy makes the request to that site, gets the response, adds the Access-Control-Allow-Origin response header and any other CORS headers needed, then passes that back to your requesting code. And that response with the Access-Control-Allow-Origin header added is what the browser sees, so the browser lets your frontend code actually access the response.
So I am trying to pass an object to my fetch which will disable CORS
You don’t want to do that. To be clear, when you say you want to “disable CORS” it seems you actually mean you want to disable the same-origin policy. CORS itself is actually a way to do that — CORS is a way to loosen the same-origin policy, not a way to restrict it.
But anyway, it's true that in your local environment you can do stuff like give a browser runtime flags to disable security and run insecurely, or you can install a browser extension locally to get around the same-origin policy, but all that does is change the situation just for you locally.
No matter what you change locally, anybody else trying to use your app is still going to run into the same-origin policy, and there’s no way you can disable that for other users of your app.
You most likely never want to use mode: 'no-cors' in practice except in a few limited cases, and even then only if you know exactly what you’re doing and what the effects are. That’s because what setting mode: 'no-cors' actually says to the browser is, “Block my frontend JavaScript code from looking into the contents of the response body and headers under all circumstances.” In most cases that’s obviously really not what you want.
As far as the cases when you would want to consider using mode: 'no-cors', see the answer at What limitations apply to opaque responses? for the details. The gist of it is:
In the limited case when you’re using JavaScript to put content from another origin into a <script>, <link rel=stylesheet>, <img>, <video>, <audio>, <object>, <embed>, or <iframe> element (which works because embedding of resources cross-origin is allowed for those)—but for some reason you don’t want to/can’t do that just by having the markup of the document use the resource URL as the href or src attribute for the element.
When the only thing you want to do with a resource is to cache it. As alluded to in What limitations apply to opaque responses?, in practice the scenario that’s for is when you’re using Service Workers, in which case the API that’s relevant is the Cache Storage API.
But even in those limited cases, there are some important gotchas to be aware of; see the answer at What limitations apply to opaque responses? for the details.
I have also tried to pass in the object { mode: 'opaque' }
There is no 'opaque' request mode — opaque is instead just a property of the response, and browsers set that opaque property on responses from requests sent with no-cors mode.
But incidentally the word opaque is a pretty explicit signal about the nature of the response you end up with: “opaque” means you can’t see into any of its details; it blocks you from seeing.
If you are trying to address this issue temporarily on your localhost, you can use this Chrome extension: Allow CORS: Access-Control-Allow-Origin
https://chrome.google.com/webstore/detail/allow-cors-access-control/lhobafahddgcelffkeicbaginigeejlf
If you are using Express as your back end, you just have to install the cors package, import it, and enable it with app.use(cors());.
If that does not resolve it, try switching ports.
It will surely resolve after switching ports.
So if you're like me and developing a website on localhost, trying to fetch data from a Laravel API and use it in your Vue front end, and you see this problem, here is how I solved it:
In your Laravel project, run the command php artisan make:middleware Cors. This will create app/Http/Middleware/Cors.php for you.
Add the following code inside the handle function in Cors.php:
return $next($request)
->header('Access-Control-Allow-Origin', '*')
->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
In app/Http/kernel.php, add the following entry in $routeMiddleware array:
'cors' => \App\Http\Middleware\Cors::class
(There would be other entries in the array like auth, guest etc. Also make sure you're doing this in app/Http/kernel.php because there is another kernel.php too in Laravel)
Add this middleware on route registration for all the routes where you want to allow access, like this:
Route::group(['middleware' => 'cors'], function () {
    Route::get('getData', 'v1\MyController@getData');
    Route::get('getData2', 'v1\MyController@getData2');
});
In the Vue front end, make sure you call this API in the mounted() hook and not in data(). Also make sure you use http:// or https:// with the URL in your fetch() call.
Full credits to Pete Houston's blog article.
You can also set up a reverse proxy which adds the CORS headers using a self-hosted CORS Anywhere or Just CORS if you want a managed solution.
https://justcors.com/<id>/<your-requested-resource>
http://cors-anywhere.com/<your-requested-resource>
A very easy solution (2 minutes to configure) is to use the local-ssl-proxy package from npm.
The usage is pretty straightforward:
1. Install the package:
npm install -g local-ssl-proxy
2. While your local server is running, mask it with local-ssl-proxy --source 9001 --target 9000.
P.S.: Replace --target 9000 with the number of your port, and --source 9001 with the number of your port plus one.
The solution for me was to just do it server-side.
I used the C# WebClient library to get the data (in my case it was image data) and send it back to the client. There's probably something very similar in your chosen server-side language.
//Server side, api controller
[Route("api/ItemImage/GetItemImageFromURL")]
public IActionResult GetItemImageFromURL([FromQuery] string url)
{
    ItemImage image = new ItemImage();
    using (WebClient client = new WebClient())
    {
        image.Bytes = client.DownloadData(url);
        return Ok(image);
    }
}
You can tweak it to whatever your own use case is. The main point is that client.DownloadData() worked without any CORS errors. Typically CORS issues are only between websites, which is why it's fine to make 'cross-site' requests from your server.
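If you're on a newer .NET where WebClient is deprecated, a roughly equivalent sketch using HttpClient might look like the following (the async route name is illustrative; ItemImage and the controller setup are assumed to be the same as above):

//Server side, api controller - HttpClient variant (a sketch, not the original answer's code)
[Route("api/ItemImage/GetItemImageFromURLAsync")]
public async Task<IActionResult> GetItemImageFromURLAsync([FromQuery] string url)
{
    // In production you'd typically inject a shared HttpClient
    // (e.g. via IHttpClientFactory) instead of creating one per request.
    using (HttpClient client = new HttpClient())
    {
        ItemImage image = new ItemImage();
        image.Bytes = await client.GetByteArrayAsync(url);
        return Ok(image);
    }
}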
Then the React fetch call is as simple as:
//React component
fetch(`api/ItemImage/GetItemImageFromURL?url=${imageURL}`, {
method: 'GET',
})
.then(resp => resp.json() as Promise<ItemImage>)
.then(imgResponse => {
// Do more stuff....
});
I had a similar problem: my browser debugger said my response.body was null, but Fiddler and the developer tools showed it as populated. It turned out to be basically the same scenario as this. I was using a local Angular application hitting a Web API service running on IIS Express. I fixed it by following the steps outlined here to find the correct applicationhost.config file and adding an Access-Control-Allow-Origin header, like so:
<customHeaders>
<clear />
<add name="X-Powered-By" value="ASP.NET" />
<add name="Access-Control-Allow-Origin" value="*" />
<add name="Access-Control-Allow-Headers" value="Content-Type" />
</customHeaders>
If all the above solutions don't work, it's probably because of file permissions: sometimes even if you have fixed the CORS problem using Heroku or another way, it throws a 403 Forbidden error. Set the directory/file permissions like this:
Permissions and ownership errors
A 403 Forbidden error can also be caused by incorrect ownership or permissions on your web content files and folders.
Permissions
Rule of thumb for correct permissions:
Folders: 755
Static Content: 644
Dynamic Content: 700
I need to consume a third-party WebSocket API in .NET Core and C#; the WebSocket server is implemented using socket.io (protocol version 0.9), and I am having a hard time understanding how socket.io works... on top of that, the API requires SSL.
I found out that the HTTP handshake must be initiated via a certain path, which is...
socket.io/1/?t=...
...whereby the value of the parameter t is a Unix timestamp (in seconds). The service replies with a session key, timeout information, and a list of supported transport protocols. For simplicity, this first request is made via HttpClient and does not involve any additional headers.
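For example, that handshake call can be sketched like this in C# (the host is a placeholder for the actual service URL; only the socket.io/1/?t= path comes from the protocol description above):

// Sketch of the initial socket.io 0.9 handshake request.
// Requires: using System; using System.Net.Http;
long t = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
using (var http = new HttpClient())
{
    // The reply carries the session key, timeouts, and supported transports.
    string reply = await http.GetStringAsync($"https://example.com/socket.io/1/?t={t}");
    Console.WriteLine(reply);
}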
Next, another HTTP request is required, which should result in an HTTP 101 Switching Protocols response. I specified the following headers in accordance with the previous request...
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Key: ...
Sec-WebSocket-Version: 13
...whereby the value of the Key header is a Base64-encoded GUID value that the server will use to calculate the Sec-WebSocket-Accept header value. I also precalculate the expected Sec-WebSocket-Accept header value for validation...
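For reference, that precalculation can be sketched as follows; the magic GUID is the fixed value defined by RFC 6455, and everything else here is just an illustrative helper:

// Computes the Sec-WebSocket-Accept value the server is expected to return:
// Base64(SHA1(Sec-WebSocket-Key + magic GUID)), per RFC 6455.
// Requires: using System; using System.Security.Cryptography; using System.Text;
static string ComputeWebSocketAccept(string secWebSocketKey)
{
    const string MagicGuid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
    using (var sha1 = SHA1.Create())
    {
        byte[] hash = sha1.ComputeHash(Encoding.ASCII.GetBytes(secWebSocketKey + MagicGuid));
        return Convert.ToBase64String(hash);
    }
}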
I tried to make that request using HttpClient as well, but that does not seem to work... I actually don't understand why, because I expect an HTTP response. I also tried to make the request using TcpClient by sending a manually prepared GET request over an SslStream, which accepts the remote certificate as expected. Sending data seems to work, but there's no response data... the Read method returns zero.
What am I missing here? Do I need to set up a listener for the WebSocket connection as well, and if so, how? I don't want to implement a feature-complete socket.io client; I'd just like to keep it as simple as possible to catch some events...
The best way of debugging these issues is to use a sniffer like Wireshark or Fiddler. I often connect using IE, compare IE's results with my application's, and modify my app so it works like IE. Using WebClient instead of HttpClient will also work better, because WebClient does more automatically than HttpClient.
A web connection uses the headers of the client and the headers in the server's webpage to negotiate a connection mode. Adding additional headers to your client will change the connection mode. Cookies are also used to select the connection mode. Cookies are the result of a previous connection to the same server; they shorten the negotiation and store info from the previous connection so less data has to be downloaded from the server. The server remembers the cookies. Cookies have a timeout and are kept until the timeout expires. The IE history in your client has a list of IP addresses, and .NET automatically sends the cookies associated with the server IP.
If a bad connection is made to the server, the cookie is also bad, so the only way of connecting is to remove the cookie. Usually I go into IE and delete the cookies manually in the IE history.
To check if a response is good, the server returns a status. A completed response contains a status of 200 OK. You can get statuses which are errors. You can also get a 100 Continue, which means you need to send another request to get the rest of the webpage.
HTTP has 1.0 (stream mode) and 1.1 (chunked mode). The .NET library doesn't work with chunked transfer. Chunked transfer requires the client to send a message to get the next chunk, and I have not found a way in .NET to send the next-chunk message. So if a server responds with 1.1, you have to add headers to your client to use 1.0 only.
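If you do need to force HTTP/1.0, a minimal sketch with HttpWebRequest (the URL is a placeholder):

// Force HTTP/1.0 so the server won't reply with chunked (HTTP/1.1) transfer.
// Requires: using System.Net;
var req = (HttpWebRequest)WebRequest.Create("https://example.com/");
req.ProtocolVersion = HttpVersion.Version10;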
HTTP uses TCP as the transport layer, so in a sniffer you will see both TCP and HTTP. Usually you can filter the sniffer to show just HTTP and look at the headers for debugging. Occasionally TCP disconnects, and then you have to look at TCP to find out why the disconnect occurs.
I am currently adding a SOAP/WSDL service to my project using "Add Service Reference". It creates all the necessary classes and functions for me to call. Some of these functions don't give me a response; after a 1-minute delay I get a timeout exception. However, when I forge the request using Postman (a Chrome extension for making network requests) it gets the full response. I got suspicious and started to inspect the network using Wireshark. After inspection I saw that the problem was at the service, but I can't make them fix it, so I need to imitate the Postman request using C#.
The main difference between mine and Postman's is that Postman posts all the necessary data in a single request to get a response, while my client posts just the HTTP headers, waits for an HTTP 100 Continue, and only then continues sending the necessary data. I guess this breaks the service.
Here are Wireshark screenshots for Postman and my client
(Postman is on the left of the image and the right one is my .NET client - sorry for my perfect Paint skills)
Is there any configuration on the .NET WSDL client with basicHttpBinding that I can set to make a single request per call?
Edit: When I further investigated my C# client, I saw that the .NET HTTP layer sends an initial POST saying (Expect: 100-continue); after it receives the Continue, it continues sending the SOAP envelope.
Edit 2: I changed my code to issue the request using HttpWebRequest, but .NET still sends the headers and the POST body as separate requests (even when I disabled Expect: 100-continue), and in my theory this breaks the service.
Question 2: Is there any way to tell HttpWebRequest not to split the header and body, and to send all of them in a single request?
Edit 3: I solved the problem, but it wasn't about 100 Continue. The service was being broken by a missing User-Agent header.
For HttpWebRequest you can add the following to your .config file - we use this on our service. I'm not, however, sure how it impacts WCF channels.
<configuration>
  <system.net>
    <settings>
      <servicePointManager expect100Continue="false" />
    </settings>
  </system.net>
</configuration>
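If you'd rather set it in code than in config, the process-wide programmatic equivalent on classic .NET Framework is a one-liner:

// Disables the Expect: 100-continue handshake for all subsequent requests
// in this process; same effect as the servicePointManager config element above.
// Requires: using System.Net;
ServicePointManager.Expect100Continue = false;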
I finally solved the problem. After long inspection and many tries, I saw that the remote service stops responding in the middle of the data communication if I don't add a User-Agent HTTP header, so I added the header using an IClientMessageInspector before every request.
Here is the WCF code, if anybody needs it:
public object BeforeSendRequest(ref Message request, IClientChannel channel)
{
    HttpRequestMessageProperty realProp;
    object property;

    // Check if this property already exists
    if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out property))
    {
        realProp = (HttpRequestMessageProperty)property;
        if (string.IsNullOrEmpty(realProp.Headers["User-Agent"])) // Don't modify if it is already set
        {
            realProp.Headers["User-Agent"] = "doktorinSM/2.1";
        }
        return null;
    }

    realProp = new HttpRequestMessageProperty();
    realProp.Headers["User-Agent"] = "doktorinSM/2.1";
    request.Properties.Add(HttpRequestMessageProperty.Name, realProp);
    return null;
}
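To wire the inspector up, it has to be attached through an endpoint behavior. Here is a minimal sketch, assuming the BeforeSendRequest above lives in a class called UserAgentMessageInspector implementing IClientMessageInspector (both class names here are illustrative, not part of the original code):

// Requires: using System.ServiceModel.Channels; using System.ServiceModel.Description;
//           using System.ServiceModel.Dispatcher;
public class UserAgentEndpointBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        // On older frameworks this collection is named MessageInspectors.
        clientRuntime.ClientMessageInspectors.Add(new UserAgentMessageInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

Then attach it to the generated client before making calls, e.g. client.Endpoint.EndpointBehaviors.Add(new UserAgentEndpointBehavior());.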
I'm using HttpClient to send an asynchronous POST request to a remote web server. That remote web server responds with the Connection header set to keep-alive. Eventually, it will close the connection.
What I can't figure out how to do is keep receiving data until the connection is set to close.
For example, consider the following code:
HttpClient client = new HttpClient();
string requestUri = ...; //My GET uri
string content = ...; //url-encoded contents of the request
HttpResponseMessage response = await client.PostAsync(requestUri, new StringContent(content));
In the code above, response will have the Connection header set to keep-alive. I couldn't find any members on HttpResponseMessage that seem to give me any means of continuing to receive information after the first keep-alive is received.
I can't figure out how to continue receiving responses until the Connection closes.
I was able to find all sorts of resources that dealt with sending keep-alive requests. That's not my issue. My issue is in receiving/handling a keep-alive response from the server.
Please let me know if there is anymore information that I could provide to improve this question.
The purpose of Connection: keep-alive is not to enable the server to send multiple responses for the same single request; that would fundamentally violate how HTTP works.
Connection: keep-alive is the server's way of signaling to the client that it can submit its next request over the same, already established, TCP connection, thus helping avoid the cost of establishing a new connection (and possibly a TLS handshake as well).
Reuse of the connection is (typically) handled by the underlying library, and is abstracted away such that the user (as in the developer) of the library need not worry about handling it.
All you should be doing is keep making as many client.PostAsync/GetAsync/etc. requests as you need, and the HttpClient library will take care of managing the underlying TCP connection and its reuse.
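To make that concrete, here is a minimal sketch (the URL is a placeholder): three sequential POSTs through one shared HttpClient, with connection reuse handled entirely by the library:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    // One shared HttpClient; its handler pools and reuses TCP connections
    // (keep-alive) across requests automatically.
    private static readonly HttpClient client = new HttpClient();

    static async Task Main()
    {
        for (int i = 0; i < 3; i++)
        {
            HttpResponseMessage response = await client.PostAsync(
                "https://example.com/endpoint",          // placeholder URL
                new StringContent("key=value"));
            string body = await response.Content.ReadAsStringAsync();
            Console.WriteLine($"Request {i}: {(int)response.StatusCode}, {body.Length} bytes");
        }
    }
}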
There is one area in HTTP that may look counter to the above, and thus confusing, but is not. That case is when the server returns HTTP status 100 Continue. Here you will see another response come from the server for what is still the same original request (and by "see" I mean not from HttpClient, but rather on the wire if you were to snoop with a tool like Wireshark or a proxy). What is happening is that when a client is making a large request, the server simply signals that it is continuing to accept and read the request, and that the client should continue sending that original request. Once the full request is received, the server will process it and respond with a final response code and message. And again, HttpClient abstracts away this interim 100 Continue response, so as a developer you need not worry about it (in most if not all cases).
If all you want is to receive the response for a given request, you don't need to deal with keep-alive at all. Keep-alive is purely a performance optimization. The framework transparently manages connections.
All you do is send a request and receive a response. There is exactly one response per request.
One side will eventually close the connection which is not of any concern to you. You don't notice any effect of that.
Issue:
Consider the following working code.
System.Net.WebProxy proxy = new System.Net.WebProxy(proxyServer[i]);
System.Net.HttpWebRequest objRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(https_url);
objRequest.Method = "GET";
objRequest.Proxy = proxy;
First, notice that proxyServer is an array so each request may use a different proxy.
If I comment out the last line thereby removing the use of any proxies, I can monitor requests in Fiddler just fine, but once I reinstate the line and start using them, Fiddler stops logging outbound requests from my app.
Question:
Do I need to configure something in Fiddler to see the requests or is there a change in .Net I can make?
Notes:
.NET 4.0
Requests are sometimes HTTPS, but I don't think this is directly relevant to the issue
All requests are outbound (not localhost/127.0.0.1)
Fiddler is a proxy itself. By assigning a different proxy to your request, you're essentially taking Fiddler out of the equation.
If you're looking to capture traffic while using your own proxy, you can't use another proxy to do the capturing (by definition that makes no sense); you want a network analyzer, such as Wireshark. It captures the traffic instead of having the traffic routed through it (as a proxy does), which lets it monitor the traffic while your requests still go through your custom proxy.
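For contrast, a request only shows up in Fiddler when Fiddler is the proxy it goes through. A sketch (Fiddler listens on 127.0.0.1:8888 by default; this of course replaces your custom proxy rather than combining with it):

// Route the request through Fiddler's default listening endpoint so it
// appears in Fiddler's log; assigning any other proxy bypasses Fiddler.
// Requires: using System.Net;
var objRequest = (HttpWebRequest)WebRequest.Create(https_url);
objRequest.Method = "GET";
objRequest.Proxy = new WebProxy("127.0.0.1", 8888);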