We have a strange issue with a Silverlight app which seems to centre on the 'clientaccesspolicy.xml' file.
We have a website, which is the default website on IIS7. In the root of this website we have the 'clientaccesspolicy.xml' file.
We also have a web service at 'http://thewebsite/asubdirectory/service.asmx' which handles some of the Silverlight requests to the website.
What seems to happen is that when we try to load the Silverlight component, there is an HTTP request for 'http://asubdirectory/clientaccesspolicy.xml', which is clearly wrong.
What's odd is that if I set up the default website to be blank, and set up this particular website as an application/virtual directory below the default website (e.g. http://thewebsite/subdomain/), then the request for the client access policy goes to http://thewebsite/clientaccesspolicy.xml, and as long as I keep a copy of the file at the root of the default website, things work OK.
What I'd like to know is how Silverlight/IIS determines that it needs to look somewhere other than the root for the clientaccesspolicy.xml when the website is defined as the default.
Could it be the service location or the service references in Silverlight? Is there a sensible way round this?
Many thanks,
Doug
Silverlight needs to ask the target site for a cross-domain policy if the service is not on the same domain. So, based on your "http://asubdirectory", I think your code is wrong somewhere and is actually trying to use the service at http://asubdirectory/someservice instead of http://thewebsite/asubdirectory/someservice.
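One way to rule that out is to build the service address from the URL the XAP was actually loaded from, rather than from a hard-coded host. A minimal sketch, assuming a generated proxy named ServiceSoapClient (the proxy name and service path are illustrative):

// Derive the service address from the XAP's own origin so the request
// stays on http://thewebsite. Proxy name and service path are assumptions.
Uri xapSource = Application.Current.Host.Source; // e.g. http://thewebsite/ClientBin/app.xap
Uri serviceUri = new Uri(xapSource, "/asubdirectory/service.asmx");
var client = new ServiceSoapClient();
client.Endpoint.Address = new System.ServiceModel.EndpointAddress(serviceUri);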
Currently I have a web application with a web page (Redirect.aspx) that has the following code in its Page_Load function:
Response.Redirect("https://redirectURL.com/folder1/folder2");
In my local debug, I'm able to reach https://redirectURL.com/folder1/folder2 when I access Redirect.aspx.
However, when I hosted the web application (as Sample.com) on an AWS EC2 server, accessing https://Sample.com/Redirect.aspx got redirected to https://Sample.com/folder1/folder2 instead of https://redirectURL.com/folder1/folder2.
Not sure what's triggering the change of website, hence I hope to get some help.
Found the reason. It wasn't due to code; it was an IIS setting.
In ARR (Application Request Routing) there is a "Reverse rewrite host in response headers" feature, and somehow it was turned on, causing the issue. Turning this off solved my issue.
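For reference, that checkbox maps to the reverseRewriteHostInResponseHeaders attribute of the system.webServer/proxy section, so it can also be switched off from the command line on the ARR server. A sketch, assuming a default IIS install path:

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /reverseRewriteHostInResponseHeaders:"False" /commit:apphost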
I have some kind of job scheduling implemented which calls a function, ProcessJob. Inside this method I need to generate a URL to one of my pages, i.e. DownloadPage.aspx?some_params. That URL is sent to the user via email, and when the user clicks the link it takes them to the page.
The problem here is that I am not generating the URL in a web request method, and I don't have access to the Request object. The URL needs to be generated in a custom class that runs on its own thread, i.e. not in a web request.
So I can't go with these solutions:
HostingEnvironment.MapPath("test.aspx");
VirtualPathUtility.ToAbsolute("123.aspx");
HttpContext.Current.Request.Url.Authority;
None of these works, because I think they all rely on the current request or session somehow. So how do I generate URLs for my app inside my code so I can use them any way I want?
If your method cannot use HttpContext.Current.Request.Url, for example because it runs in a background scheduled task, then you can use either of the following options:
If your code is hosted in the same ASP.NET application, you can capture the site's domain name on the first request and pass it to your class. To do so, handle the Application_BeginRequest event, get the domain from HttpContext.Current.Request.Url, and then pass it to your class or store it in application-scope storage. You can find an implementation in this post or the original article. (Note: the code is available on SO, so I don't repeat it here.)
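A minimal sketch of that first option, assuming a hypothetical static holder class named SiteInfo (all names are illustrative):

// Global.asax: capture the site authority once, on the first request.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (SiteInfo.BaseUrl == null)
    {
        SiteInfo.BaseUrl = HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Authority);
    }
}

// The scheduled job can then build absolute URLs from it, e.g.:
// string link = SiteInfo.BaseUrl + "/DownloadPage.aspx?jobId=42";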
If your code is not hosted in the same ASP.NET application, or if for any reason you don't want to rely on Application_BeginRequest, then as another option you can store the site's domain name in a setting (like appSettings in app.config, or web.config if it's a web app) and use it in your code.
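A sketch of that configuration-based option (the "SiteBaseUrl" key name is an assumption):

// Read the site root from config and build the link from it.
using System;
using System.Configuration;

string baseUrl = ConfigurationManager.AppSettings["SiteBaseUrl"]; // e.g. "https://www.example.com"
Uri downloadUrl = new Uri(new Uri(baseUrl), "DownloadPage.aspx?jobId=42");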
You can do something like this: Dns.GetHostName returns the name of the computer that is hosting the site, so you can use it to check whether the site is on a development server.
string domain = "www.productionurl/123.aspx";
if (Dns.GetHostName() == "Development")
{
domain = "www.developmenturl/123.aspx";
}
Dns.GetHostName() is not the only way to check. You could also look at HostingEnvironment.ApplicationPhysicalPath and see whether the path is that of the development server.
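A sketch of that alternative check (the directory is an assumption):

using System;
using System.Web.Hosting;

// True when the app runs out of the development sites folder.
bool isDev = HostingEnvironment.ApplicationPhysicalPath.StartsWith(@"C:\DevSites\", StringComparison.OrdinalIgnoreCase);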
My answer is: don't do this. You're building a distributed system, albeit a simple one, and generally speaking it is problematic to introduce coupling between services in a distributed system. So even though it is possible to seed your domain using Application_BeginRequest, you are then tying the behavior of your batch job to your web site. With this arrangement you risk propagating errors and you make deployment of your system more complicated.
A better way to look at this problem is to realize that the core desire is to synchronize the binding of your production site with the URL that is used in your batch job. In many cases an entry in the app.config of your batch would be the best solution; there really isn't any need to introduce code unless you know that your URL will change frequently or you will need to scale to many different arbitrary URLs. If you need to support changing the URL programmatically, I recommend setting up a distributed configuration system like Consul and reading the current URLs from your deployment system for both the IIS binding and the app.config file of your batch. So even in this advanced scenario, there is no direct interaction between your batch and your web site.
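For completeness, the app.config entry could look like this (key and value are illustrative, matching the sketch earlier):

<configuration>
  <appSettings>
    <!-- single source of truth for the batch job's link generation -->
    <add key="SiteBaseUrl" value="https://www.example.com" />
  </appSettings>
</configuration>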
I have a tough question here and I would like to tap the wisdom of the masses to ensure that I am approaching this issue in the most efficient way possible.
Goal: Move 78 web applications (all configured to be an IIS application under a root website) from a Windows Server 2003 box to a 2012 box with as little coding as possible. The 2012 box has a different subdomain, "xxx2.blah.com", while the 03 server is mapped to "xxx.blah.com". In short, users' bookmarks won't work once we migrate, so we want to write a redirection utility to get users to the new xxx2.blah.com location without them noticing.
Current State: It is important to note that each application under the root website in IIS6 is configured to run under its own app pool (sometimes shared with other applications). Some of the applications have query string values appended to the end of the .NET request that we want to retain, because they affect the UI and other business logic already coded.
We were thinking of removing the files within each application to force IIS to return a 404. Once the 404 occurs, we want our custom utility to look up the equivalent URL. Since the 404 is an "error" for all intents and purposes, we were thinking that we could "handle" the error like this (ASP.NET 2.0: Best Practice for writing Error Page).
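A rough sketch of that 404-handler idea, where UrlMapper.LookupNewUrl is a hypothetical helper that maps old paths to new ones:

// Global.asax: intercept the 404 and redirect to the equivalent URL
// on the new host, preserving the path and query string.
protected void Application_Error(object sender, EventArgs e)
{
    var httpEx = Server.GetLastError() as HttpException;
    if (httpEx != null && httpEx.GetHttpCode() == 404)
    {
        Server.ClearError();
        string target = UrlMapper.LookupNewUrl(Request.Url.PathAndQuery);
        Response.Redirect(target, true);
    }
}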
Is it possible to write that code once, add the logic to the global.asax file in the root website, and then somehow instruct each web application under that root site to execute the code in the parent site? I know they each run under their own app pool, which may mean that we cannot easily pass execution off to another application, but I could be wrong. In addition, we are hoping not to have to copy/paste code 78 times. Any general best practices or advice would be greatly appreciated. Also, handling it on the network is not an option, as the old xxx.blah.com is on a completely different network than the new xxx2.blah.com.
@Carl
Thank you very much. I initially missed that those variables were available to me for this purpose! The final solution for me was to set the "Redirect To" textbox to "http://xxx2.blah.com$V?UpdateNote=true&$P". This makes the redirection preserve both the path and the query string name/value pairs intact, and also allowed me to append my own value so that the application could detect it and display a "This page has moved" message to the user.
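For anyone wiring up the same thing, a sketch of the detection side on the new site (the control and key names are assumptions):

// Page_Load on the new site: show a "this page has moved" notice when the
// marker appended by the redirect rule is present.
protected void Page_Load(object sender, EventArgs e)
{
    if (string.Equals(Request.QueryString["UpdateNote"], "true", StringComparison.OrdinalIgnoreCase))
    {
        MovedNotice.Visible = true; // e.g. a Panel or Label on the page
    }
}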
Thank you Carl! You da man.
I am running a website and a Windows service. I am able to change the log level of my website at runtime using a page I made, and I would like to do the same for my Windows service (i.e. use a page to adjust the different log levels I am using in the service).
Do you have any tips and tricks to achieve that? Or should I resign myself to uploading a new version of the log4net file every time I need to log things in a bit more detail (this upload is a bit tricky and quite annoying to do)?
Thanks for your ideas,
[EDIT]
Unfortunately, none of the answers listed here address my problem. Mine is to access the log4net configuration of a service located on machine A from a website running on machine B, so that accessing the web page on machine B lets me change the log level of the service on machine A.
If your Windows service is using ConfigureAndWatch, you should be able to edit its config file just like you do for your website with the page you made, provided you place the configuration file somewhere the web page can reach.
You will also have to change the path to the configuration file you load in your Windows service, but this should be a solution.
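A minimal sketch of the service side, assuming the config lives on a share both machines can see (the path is an assumption):

// Service startup: watch an externally editable log4net config file;
// log4net reloads it automatically whenever the web page rewrites it.
using System.IO;
using log4net.Config;

XmlConfigurator.ConfigureAndWatch(new FileInfo(@"\\webserver\configs\service.log4net.config"));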
You can modify a config file and have your application pick up the changes. The trick is that you cannot use the app.config/web.config file to do so; otherwise, it takes a restart of your application before the changes take effect. Here is an SO question with a couple of answers that might work:
.net dynamically refresh app.config
You can also make changes through code like so:
http://weblogs.asp.net/psteele/archive/2010/05/03/tweaking-log4net-settings-programmatically.aspx
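Along the lines of that article, a sketch of flipping the level in code (whether this alone is enough depends on how your loggers are configured):

// Raise the root logger to DEBUG at runtime via the repository hierarchy.
using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;

var hierarchy = (Hierarchy)LogManager.GetRepository();
hierarchy.Root.Level = Level.Debug;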
OK, the only thing I found is to have my service and web application access the same table in the database, and have the service check this table regularly to change its log level.
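A rough sketch of that polling approach (the table, column, and connection string are all assumptions):

// Every minute, read the desired level from the shared table and apply it.
using System;
using System.Data.SqlClient;
using System.Threading;
using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;

var timer = new Timer(_ =>
{
    using (var conn = new SqlConnection("Server=dbserver;Database=Ops;Integrated Security=true"))
    using (var cmd = new SqlCommand("SELECT TOP 1 LevelName FROM LogSettings", conn))
    {
        conn.Open();
        var levelName = (string)cmd.ExecuteScalar();          // e.g. "DEBUG"
        var hierarchy = (Hierarchy)LogManager.GetRepository();
        hierarchy.Root.Level = hierarchy.LevelMap[levelName]; // apply it
    }
}, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));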
If someone has a better idea, I am all ears.
I'm building a CMS using WebForms on .NET 4.0 and have the following route that allows URLs like www.mysite.com/about to be mapped to the Page.aspx page, which looks up the dynamic content.
routes.MapPageRoute("page", "{name}", "~/Page.aspx");
The problem is that I have a couple of folders in my project that are interfering with possible URLs. For example, I have a folder called "blog" where I store pages related to handling blog functionality, but if someone creates a page for their site called "blog" then navigating to www.mysite.com/blog gets the following error:
403 - Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied.
Other similar URLs route correctly, but I think that because .NET identifies /blog as a physical location on the server, it denies directory access. Is there a way to tell IIS/.NET to only look for physical files instead of files and folders?
It looks like IIS is denying you access to the actual folder.
By default, routing is supposed to honor the file system, though this can be turned off.
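The switch for that is RouteCollection.RouteExistingFiles. A sketch (note that any physical files or folders you still want served directly then need to be excluded; the Ignore pattern below is illustrative):

// Global.asax: route even when a physical file or folder matches the URL,
// so /blog hits the "page" route instead of the blog folder.
void RegisterRoutes(RouteCollection routes)
{
    routes.RouteExistingFiles = true;
    routes.Ignore("blog/assets/{*pathInfo}"); // example: keep static assets reachable
    routes.MapPageRoute("page", "{name}", "~/Page.aspx");
}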