Is there a better way to debug a SOAP request? - c#

At the moment I'm confronted with the following situation:
I use a web app in my browser that runs on an app server (appsrv).
Depending on configurations I set in the web app (XML and XSLT), appsrv sends a SOAP request to a web service (web_service) on another server (websvcsrv).
If something goes wrong and I want to check what is sent in the SOAP request, I have to do the following:
1. Change a line in web.config on appsrv from
   <add key="WebServiceUrl_{web_service name}"
        value="https://websvcsrv/path/to/web/service.asmx" />
   to
   <add key="WebServiceUrl_{web_service name}"
        value="http://{dummy: a server that exists and accepts http but otherwise has absolutely nothing to do with the web app or the web service}" />
   This only switches from https to unencrypted http; see step 5 for why this is necessary. (A sketch of how this key is presumably consumed follows after this list.)
2. Run Wireshark on appsrv and start capturing.
3. Run the web app in my browser up to the point where the SOAP request is sent from appsrv to web_service on websvcsrv, where the latter two now refer to dummy.
4. Ignore the error shown by appsrv in the browser (since only dummy is behind it now).
5. Switch to Wireshark on appsrv, filter for xml, find the SOAP request and look at its XML. (This is where http comes into play: with encrypted https nothing human-readable is visible.)
6. Correct possible errors in the web app configuration.
7. Undo 1., i.e. change the line in web.config from dummy back to websvcsrv.
8. Try the corrections against the real web_service on the real websvcsrv.
9. If something still goes wrong, restart at 1. If not, continue at 10.
10. Enhance the configuration.
11. Run the web app in my browser up to the point where the (now enhanced) SOAP request is sent from appsrv to web_service on websvcsrv.
12. If something goes wrong, start again at 1.
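For context, here is a minimal sketch of how an appSettings key like WebServiceUrl_* is typically consumed by an ASMX proxy on appsrv. The key and class names are assumptions for illustration, not the actual application code:
using System;
using System.Configuration;
using System.Web.Services.Protocols;

// Stand-in for the proxy class that "Add Web Reference" generates for the .asmx service.
public class WebServiceProxy : SoapHttpClientProtocol { }

public static class WebServiceClientFactory
{
    public static WebServiceProxy Create()
    {
        // Assumed key name; mirrors the appSettings entry that step 1 switches
        // between the real websvcsrv URL and the dummy http host.
        string url = ConfigurationManager.AppSettings["WebServiceUrl_web_service"];

        // Every SOAP call made through this proxy goes to whatever URL is configured.
        return new WebServiceProxy { Url = url };
    }
}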
Needless to say, this is among the craziest things I've ever experienced in my decades of developing.
Is there a better way to debug this?

Browser debugger says my response.body is null but fiddler and the developer tools show it as populated [duplicate]

I can hit this endpoint, http://catfacts-api.appspot.com/api/facts?number=99 via Postman and it returns JSON
Additionally I am using create-react-app and would like to avoid setting up any server config.
In my client code I am trying to use fetch to do the same thing, but I get the error:
No 'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:3000' is therefore not allowed
access. If an opaque response serves your needs, set the request's
mode to 'no-cors' to fetch the resource with CORS disabled.
So I am trying to pass in an object, to my Fetch which will disable CORS, like so:
fetch('http://catfacts-api.appspot.com/api/facts?number=99', { mode: 'no-cors' })
    .then(blob => blob.json())
    .then(data => {
        console.table(data);
        return data;
    })
    .catch(e => {
        console.log(e);
        return e;
    });
Interestingly enough, the error I get is actually a syntax error with this function. I am not sure my actual fetch is broken, because when I remove the { mode: 'no-cors' } object and supply it with a different URL, it works just fine.
I have also tried to pass in the object { mode: 'opaque'} , but this returns the original error from above.
I believe all I need to do is disable CORS. What am I missing?
mode: 'no-cors' won’t magically make things work. In fact it makes things worse, because one effect it has is to tell browsers, “Block my frontend JavaScript code from seeing contents of the response body and headers under all circumstances.” Of course you never want that.
What happens with cross-origin requests from frontend JavaScript is that browsers by default block frontend code from accessing resources cross-origin. If Access-Control-Allow-Origin is in a response, then browsers relax that blocking and allow your code to access the response.
But if a site sends no Access-Control-Allow-Origin in its responses, your frontend code can’t directly access responses from that site. In particular, you can’t fix it by specifying mode: 'no-cors' (in fact that’ll ensure your frontend code can’t access the response contents).
However, one thing that will work: if you send your request through a CORS proxy.
You can also easily deploy your own proxy to Heroku in just 2-3 minutes, with 5 commands:
git clone https://github.com/Rob--W/cors-anywhere.git
cd cors-anywhere/
npm install
heroku create
git push heroku master
After running those commands, you’ll end up with your own CORS Anywhere server running at, for example, https://cryptic-headland-94862.herokuapp.com/.
Prefix your request URL with your proxy URL; for example:
https://cryptic-headland-94862.herokuapp.com/https://example.com
Adding the proxy URL as a prefix causes the request to get made through your proxy, which:
Forwards the request to https://example.com.
Receives the response from https://example.com.
Adds the Access-Control-Allow-Origin header to the response.
Passes that response, with that added header, back to the requesting frontend code.
The browser then allows the frontend code to access the response, because that response with the Access-Control-Allow-Origin response header is what the browser sees.
This works even if the request is one that triggers browsers to do a CORS preflight OPTIONS request, because in that case, the proxy also sends back the Access-Control-Allow-Headers and Access-Control-Allow-Methods headers needed to make the preflight successful.
I can hit this endpoint, http://catfacts-api.appspot.com/api/facts?number=99 via Postman
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS explains why it is that even though you can access the response with Postman, browsers won’t let you access the response cross-origin from frontend JavaScript code running in a web app unless the response includes an Access-Control-Allow-Origin response header.
http://catfacts-api.appspot.com/api/facts?number=99 has no Access-Control-Allow-Origin response header, so there’s no way your frontend code can access the response cross-origin.
Your browser can get the response fine and you can see it in Postman and even in browser devtools—but that doesn’t mean browsers expose it to your code. They won’t, because it has no Access-Control-Allow-Origin response header. So you must instead use a proxy to get it.
The proxy makes the request to that site, gets the response, adds the Access-Control-Allow-Origin response header and any other CORS headers needed, then passes that back to your requesting code. And that response with the Access-Control-Allow-Origin header added is what the browser sees, so the browser lets your frontend code actually access the response.
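Since the rest of this page is C#-flavoured, here is a minimal sketch of what such a proxy does, written as an ASP.NET Core minimal-API endpoint. It is not CORS Anywhere itself, and the /proxy route and the wildcard origin are assumptions made for the example:
// Minimal sketch of the proxy mechanism described above (assumes .NET 6+ minimal APIs).
using System.Net.Http;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
var client = new HttpClient();

// GET /proxy?url=https://example.com
app.MapGet("/proxy", async (string url, HttpResponse response) =>
{
    // 1. Forward the request to the target site and 2. receive its response.
    using var upstream = await client.GetAsync(url);

    // 3. Add the header that lets browsers expose the response to frontend code.
    response.Headers["Access-Control-Allow-Origin"] = "*";
    response.StatusCode = (int)upstream.StatusCode;
    response.ContentType = upstream.Content.Headers.ContentType?.ToString() ?? "application/octet-stream";

    // 4. Pass the upstream body back to the requesting frontend code.
    await upstream.Content.CopyToAsync(response.Body);
});

app.Run();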
So I am trying to pass in an object, to my Fetch which will disable CORS
You don’t want to do that. To be clear, when you say you want to “disable CORS” it seems you actually mean you want to disable the same-origin policy. CORS itself is actually a way to do that — CORS is a way to loosen the same-origin policy, not a way to restrict it.
But anyway, it's true that you can, in your local environment, do stuff like giving the browser runtime flags to disable security and run insecurely, or install a browser extension locally to get around the same-origin policy; but all that does is change the situation just for you locally.
No matter what you change locally, anybody else trying to use your app is still going to run into the same-origin policy, and there’s no way you can disable that for other users of your app.
You most likely never want to use mode: 'no-cors' in practice except in a few limited cases, and even then only if you know exactly what you’re doing and what the effects are. That’s because what setting mode: 'no-cors' actually says to the browser is, “Block my frontend JavaScript code from looking into the contents of the response body and headers under all circumstances.” In most cases that’s obviously really not what you want.
As far as the cases when you would want to consider using mode: 'no-cors', see the answer at What limitations apply to opaque responses? for the details. The gist of it is:
In the limited case when you’re using JavaScript to put content from another origin into a <script>, <link rel=stylesheet>, <img>, <video>, <audio>, <object>, <embed>, or <iframe> element (which works because embedding of resources cross-origin is allowed for those)—but for some reason you don’t want to/can’t do that just by having the markup of the document use the resource URL as the href or src attribute for the element.
When the only thing you want to do with a resource is to cache it. As alluded to in What limitations apply to opaque responses?, in practice the scenario that’s for is when you’re using Service Workers, in which case the API that’s relevant is the Cache Storage API.
But even in those limited cases, there are some important gotchas to be aware of; see the answer at What limitations apply to opaque responses? for the details.
I have also tried to pass in the object { mode: 'opaque'}
There is no 'opaque' request mode — opaque is instead just a property of the response, and browsers set that opaque property on responses from requests sent with no-cors mode.
But incidentally the word opaque is a pretty explicit signal about the nature of the response you end up with: “opaque” means you can’t see into any of its details; it blocks you from seeing.
If you are trying to address this issue temporarily on your localhost, you can use this chrome extension : Allow CORS Access-Control-Allow-Origin
https://chrome.google.com/webstore/detail/allow-cors-access-control/lhobafahddgcelffkeicbaginigeejlf
If you are using Express as the back-end, you just have to install the cors package, then import and use it via app.use(cors());.
If it is still not resolved, try switching ports; that should resolve it.
So if you're like me and developing a website on localhost where you're trying to fetch data from Laravel API and use it in your Vue front-end, and you see this problem, here is how I solved it:
In your Laravel project, run the command php artisan make:middleware Cors. This will create app/Http/Middleware/Cors.php for you.
Add the following code inside the handle function in Cors.php:
return $next($request)
    ->header('Access-Control-Allow-Origin', '*')
    ->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
In app/Http/Kernel.php, add the following entry to the $routeMiddleware array:
'cors' => \App\Http\Middleware\Cors::class
(There will be other entries in the array like auth, guest, etc. Also make sure you're doing this in app/Http/Kernel.php, because there is another Kernel.php in Laravel too.)
Add this middleware to the route registration for all the routes where you want to allow access, like this:
Route::group(['middleware' => 'cors'], function () {
    Route::get('getData', 'v1\MyController@getData');
    Route::get('getData2', 'v1\MyController@getData2');
});
In the Vue front-end, make sure you call this API in the mounted() hook and not in data(). Also make sure you use http:// or https:// with the URL in your fetch() call.
Full credits to Pete Houston's blog article.
You can also set up a reverse proxy which adds the CORS headers using a self-hosted CORS Anywhere or Just CORS if you want a managed solution.
https://justcors.com/<id>/<your-requested-resource>
http://cors-anywhere.com/<your-requested-resource>
A very easy solution (2 minutes to configure) is to use the local-ssl-proxy package from npm.
The usage is pretty straightforward:
1. Install the package:
npm install -g local-ssl-proxy
2. While running your local server, mask it with local-ssl-proxy --source 9001 --target 9000
P.S.: Replace --target 9000 with --target "your port number" and --source 9001 with --source "your port number + 1".
The solution for me was to just do it server side.
I used the C# WebClient library to get the data (in my case it was image data) and send it back to the client. There's probably something very similar in your chosen server-side language.
//Server side, api controller
[Route("api/ItemImage/GetItemImageFromURL")]
public IActionResult GetItemImageFromURL([FromQuery] string url)
{
    ItemImage image = new ItemImage();
    using (WebClient client = new WebClient())
    {
        image.Bytes = client.DownloadData(url);
        return Ok(image);
    }
}
You can tweak it to whatever your own use case is. The main point is that client.DownloadData() worked without any CORS errors. Typically CORS issues only arise in the browser, hence it being fine to make 'cross-site' requests from your server.
Then the React fetch call is as simple as:
//React component
fetch(`api/ItemImage/GetItemImageFromURL?url=${imageURL}`, {
    method: 'GET',
})
    .then(resp => resp.json() as Promise<ItemImage>)
    .then(imgResponse => {
        // Do more stuff....
    });
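As an aside, WebClient is considered legacy in newer .NET versions in favor of HttpClient; a sketch of the same controller using HttpClient might look like this (the ItemImage type and route mirror the example above; this is an illustration, not the original poster's code):
// Server side, API controller (HttpClient variant of the example above).
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class ItemImage
{
    public byte[] Bytes { get; set; }
}

[ApiController]
public class ItemImageController : ControllerBase
{
    // Reuse a single HttpClient instance rather than creating one per request.
    private static readonly HttpClient Client = new HttpClient();

    [Route("api/ItemImage/GetItemImageFromURL")]
    public async Task<IActionResult> GetItemImageFromURL([FromQuery] string url)
    {
        var image = new ItemImage
        {
            Bytes = await Client.GetByteArrayAsync(url)
        };
        return Ok(image);
    }
}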
I had a similar problem (my browser debugger said my response.body was null, but Fiddler and the developer tools showed it as populated) that turned out to be basically the same scenario as this. I was using a local Angular application hitting a Web API service running on IIS Express. I fixed it by following the steps outlined here to find the correct applicationhost.config file and add an Access-Control-Allow-Origin header, like so:
<customHeaders>
  <clear />
  <add name="X-Powered-By" value="ASP.NET" />
  <add name="Access-Control-Allow-Origin" value="*" />
  <add name="Access-Control-Allow-Headers" value="Content-Type" />
</customHeaders>
If none of the above solutions work, it is probably because of file permissions: sometimes, even if you have fixed the CORS problem using Heroku or another way, the server throws a 403 Forbidden error. Set the directory/file permissions like this:
Permissions and ownership errors
A 403 Forbidden error can also be caused by incorrect ownership or permissions on your web content files and folders.
Permissions
Rule of thumb for correct permissions:
Folders: 755
Static Content: 644
Dynamic Content: 700

How to inform users that api maintenance is in progress

I am using an Azure App Service and an Azure database for my C# OData API and DB, which are the backend of my phone app.
I only have one App Service, and it hosts tens of endpoints. There are times when I need to publish new versions, and I don't want any incoming requests during the deployment.
I don't mind that users are not able to finish their requests during the maintenance.
Is there anything in Azure or the API that lets me:
1. Turn off the API/App Service manually?
2. Inform the user that maintenance is in progress?
This is my attempt:
The only thing I can come up with is this. Users always have "odata" in their request URLs: https://myserverl/odata/Users
which is set up in the webapi.config like this:
config.MapODataServiceRoute("odata", "odata", builder.GetEdmModel());
I put the routePrefix (the second "odata") in web.config.
When I need to turn off access, I change my web.config (which I can access manually even after the code is published to Azure) to be like this:
<add key="odata" value="noaccess" />
and in my webapi.config:
string odata = ConfigurationManager.AppSettings["odata"].ToString();
config.MapODataServiceRoute("odata", odata, builder.GetEdmModel());
and then save the web.config, which restarts the server, and all incoming requests that contain "odata" result in an error. I can always set it back later.
This method will stop the users from sending requests during maintenance but will not let them know what is going on.
I figured it out.
When I call the server from my client, I verify that the response status is between 200 and 299 before parsing results or doing any further processing.
So now I also check for the possible responses from the server indicating that it is down: 403 (access is denied) or 503 (service unavailable). That's where I can add code to notify the user.
In Azure, simply stopping the App Service generates one of those two error codes.
Note: you must check for both 403 and 503.
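A minimal sketch of that client-side check, assuming a C# HttpClient-based client; the URL, method name and the way the user is notified are illustrative, not the actual app code:
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class MaintenanceAwareClient
{
    public static async Task<string> GetUsersAsync(HttpClient http)
    {
        // URL is illustrative and mirrors the /odata/Users example above.
        HttpResponseMessage response = await http.GetAsync("https://myserverl/odata/Users");

        int code = (int)response.StatusCode;
        if (code >= 200 && code <= 299)
        {
            // Normal case: parse the results as before.
            return await response.Content.ReadAsStringAsync();
        }

        // A stopped Azure App Service (or the renamed route prefix) surfaces as 403 or 503.
        if (response.StatusCode == HttpStatusCode.Forbidden ||
            response.StatusCode == HttpStatusCode.ServiceUnavailable)
        {
            throw new InvalidOperationException("Maintenance is in progress, please try again later.");
        }

        throw new HttpRequestException($"Unexpected status code: {code}");
    }
}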

URL rewrite issue with WCF services

I am at the tail end of a project that rewrites our existing public endpoints to be hosted in IIS (our current implementation was written before IIS7 and is a home-grown hosting application).
I'm also at my wits' end trying to get the URL Rewrite functionality to work properly so that we can seamlessly move our existing customers over to the new endpoints. I'm having a couple of issues with running the new endpoints alongside our legacy app.
I thought this would work fine:
Legacy URL:
https://mydevserver:443/2.0.22/ServiceA
New URL:
https://mydevserver:9995/2.1.22/ServiceB.svc
Rule in web.config:
<rule name="Test">
  <match url="2.0.22/ServiceA" />
  <action type="Rewrite" url="2.1.22/ServiceB.svc" />
</rule>
So I shut down the legacy service, fired up my client and pointed it at the legacy service URL, but I got an error that there is no endpoint listening. That makes sense to me, as that URL is registered by our legacy app and it's not available:
There was no endpoint listening at
https://mydevserver:443/2.0.22/ServiceA that could accept the message.
This is often caused by an incorrect address or SOAP action. See
InnerException, if present, for more details.
So I thought that if I changed the binding port in IIS for my new endpoints to the https default, it would work:
New URL:
[same as legacy]/2.1.22/ServiceB.svc
This prompts IIS to give me a warning that the binding is already being used by a product other than IIS, and I might overwrite the existing cert for the address/port combination. So I say OK, and rebind the cert to the 443 port (for good measure), but when I point my client to the old URL, I get sort of the same error but it's worded a little differently:
The HTTP service located at https://mydevserver:443/2.0.22/ServiceA is
unavailable. This could be because the service is too busy or because
no endpoint was found listening at the specified address. Please
ensure that the address is correct and try accessing the service again
later.
I feel like I've tried every combination of everything (wildcard matches, paths, etc.) but I am clearly missing something. I would appreciate any help on this issue.
*Also, is it even possible to host the IIS endpoints on a different server and use URL rewrites?
From further research, I have determined that this is not possible:
You can only rewrite the URL in the same site and same application
pool.
http://blogs.msdn.com/b/asiatech/archive/2011/08/25/return-404-4-not-found-when-url-rewrite.aspx

IIS7 URL Rewrite returns 404 for WCF requests (reverse proxy)

I am using IIS 7.5 and .NET 4.0, and I am working locally.
I have installed Application Request Routing, Web Farm Framework, WebDeploy and UrlRewrite to set up a reverse proxy. This works fine for the most part.
I have two websites:
DefaultWebSite (port 80, app pool: Default App Pool (.net 4)) and
Target (port 8085, app pool: TargetAppPool(my identity, .net 4)).
I have a rewrite rule on DefaultWebSite (created as directed on IIS.net) which redirects all localhost (port 80) traffic to localhost:8085 just as detailed in the above link. This works fine for most document types (.aspx, .xap, .htm, .ico) but a request to MyService.svc fails. It returns a 404.
To be clear:
When I paste localhost:8085/MyService.svc into a browser I get the requested WCF page.
When I paste localhost/MyService.svc into a browser I get a 404.
When I paste localhost:8085/MyIcon.ico into a browser I get the requested resource.
When I paste localhost/MyIcon.ico into a browser I get the requested resource.
.svc is the only document type that I've found that returns a 404.
I've got two pieces of info that might be of relevance.
App pools. When I change the DefaultWebSite's app pool to TargetAppPool, the 404 becomes a 500 ("Failed to map the path '/'"). All other requests are still successful when this change is made. Not sure if this is relevant or not.
FREB (Failed Request Tracing) log. I found a page (http://blogs.msdn.com/b/asiatech/archive/2011/08/25/return-404-4-not-found-when-url-rewrite.aspx) which details the steps in a FREB log for a URL rewrite that gets further than mine (it fails later on). I've not been able to find out how to generate a FREB log for a successful rewrite (if that's possible), so I can only compare my FREB log to the one on that blog. I can see their step 21 (URL_CHANGED) in my FREB log, but not step 22 (URL_REWRITE_END). I've not got enough experience with these logs to notice anything more significant than that (suggestions welcome).
My main question is: does anyone know why just URLs requesting .svc resources are not being rewritten?
A secondary question is: does anyone know how to generate a FREB log for successful request (if it's even possible)?
Thanks
Update:
I have changed the architecture to try to get more info.
I have moved the Target website to a different PC on which I have installed Microsoft Network Monitor to capture the incoming traffic.
Before I changed the url-rewrite rule to point at this new website I got the correct response when I made a request to MyService.svc on the new PC. Fine.
As soon as I changed the rewrite rule to route the request to the new Target website, it responded as before (404). I have made both POST and GET requests. There is no sign of any of the requests in the Network Monitor log (all other calls, whether 200, 404 or otherwise, appear in this log).
This leads me to think that there is some incompatibility between URL rewrites and *.svc requests. I tried making a request to MyService.asmx (having created this file) and it correctly returned a page, so the problem is limited to *.svc. Any ideas?
The solution to this is in the config file of the Target web site.
In web.config (in the Target application) there is a section which read:
<serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>.
I changed this to read:
<serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />.
Credit must go to http://forums.iis.net/post/1956671.aspx for this (although s/he claims it is the proxy's config which needs to be changed, but I found it to be the Target app, not the proxy server).
If you still can't get it running, make sure you don't have the WCF handlers on the website which acts as the reverse proxy.
I disabled them by adding this to the web.config of the reverse proxy:
<system.webServer>
  ...
  <handlers>
    <remove name="svc-ISAPI-4.0_64bit" />
    <remove name="svc-ISAPI-4.0_32bit" />
    <remove name="svc-Integrated-4.0" />
  </handlers>
</system.webServer>
Because the rewrite appears to work for all resources except those with the .svc extension, I would say this is the area to concentrate on.
I would imagine that the rewrite rules are matching your other resources, but not your service, and because these are usually regular expressions (which are often complicated), it would be worth testing any rules you find against your URLs. Details of how to find the regular expressions for a URL Rewrite rule can be found here.
It is probably also worth looking at any outbound rules with the same mindset.
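As a quick sanity check outside IIS, candidate match patterns can be tested against sample URL paths with plain regular expressions. The pattern and paths below are illustrative, not the actual rules from this setup (URL Rewrite matches against the site-relative URL path, without a leading slash):
using System;
using System.Text.RegularExpressions;

class RewritePatternCheck
{
    static void Main()
    {
        // Hypothetical pattern, resembling the <match url="..." /> value of a rewrite rule.
        var pattern = new Regex(@"^2\.0\.22/ServiceA$", RegexOptions.IgnoreCase);

        // Sample inputs as the rule would see them (site-relative path, no leading slash).
        string[] paths = { "2.0.22/ServiceA", "2.0.22/ServiceA/extra", "MyService.svc", "MyIcon.ico" };

        foreach (string path in paths)
        {
            Console.WriteLine($"{path,-25} -> {(pattern.IsMatch(path) ? "matches" : "no match")}");
        }
    }
}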

C# SOAP with a custom URL

Okay, simple situation: I'm writing a simple console application which connects to a SOAP web service. I've imported a SOAP service reference and, as a result, my application has a default endpoint built into its app.config file.
The web service, however, can run on multiple servers, and the URL of the proper web service is passed through the command-line parameters of my application. I can read the URL, but how do I connect the web service client to this custom URL?
(It should be very simple, in my opinion. It's something I'm overlooking.)
Is this using an auto-generated class deriving from SoapHttpClientProtocol? If so, just set the Url property when you create an instance of the class.
Well, .NET can provide some very useless error messages sometimes. In IIS, the service was configured to AutoDetect cookieless mode. As a result, I had to append "?AspxAutoDetectCookieSupport=1" to the URL. Although that would fix the problem, it was just easier to go to the IIS console, open the properties of the service, go to the ASP.NET tab page, click the "Edit configuration" button, go to "State Management" in the newly popped-up screen and change "Cookieless mode" to something other than "AutoDetect"...
Excuse me. Dumb error. Am going to hit myself on the head a few times for this. ;-)
As Jon said, you set the Url, as in:
Namespace.ClassName nwe = new Namespace.ClassName();
nwe.Url = "http://localhost/MyURL/site.asmx";
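For completeness: if the reference was added as a WCF service reference (which the app.config endpoint mentioned in the question suggests) rather than a classic web reference, the generated client derives from ClientBase<T> and has no Url property; instead you can set Endpoint.Address or use the generated constructor that takes an address. A sketch, with the contract and client names as stand-ins for whatever the tooling generated:
using System.ServiceModel;

// Stand-ins for what "Add Service Reference" would generate.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Ping();
}

public class MyServiceClient : ClientBase<IMyService>, IMyService
{
    public MyServiceClient() { }

    public MyServiceClient(string endpointConfigurationName, string remoteAddress)
        : base(endpointConfigurationName, remoteAddress) { }

    public string Ping() => Channel.Ping();
}

class Program
{
    static void Main(string[] args)
    {
        // The service URL is assumed to arrive as the first command-line argument.
        string url = args.Length > 0 ? args[0] : "http://localhost/MyURL/site.asmx";

        // Option 1: construct from the app.config endpoint, then override the address.
        var client = new MyServiceClient();
        client.Endpoint.Address = new EndpointAddress(url);

        // Option 2: use the generated constructor overload that takes the address directly.
        // var client = new MyServiceClient("MyEndpointConfigName", url);
    }
}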
