We are currently seeing an issue with the use of HttpContext.Current.Items: the developers' local environments show no issues (everything works as expected), but in the server environment we suddenly lose items placed inside it (and get a NullReferenceException).
I have sketched the code and its usage below:
In Global.asax we initialise the factory at BeginRequest:
protected void Application_BeginRequest(object sender, EventArgs e)
{
HttpContext.Current.Items["Key"] = new Factory();
}
Inside the BaseControl we have a property to retrieve the factory easily:
public Factory Factory
{
get { return HttpContext.Current.Items["Key"] as Factory; }
}
In the UserControl we use it through the base property:
protected void Page_Load(object sender, EventArgs e)
{
Product p = Factory.CreateProduct();
}
Environment information:
Local DEVs are running on Windows 7 and 8 using IIS 7.5 and 8 and Sitecore 6.6
The server is running Windows Server 2008 R2 using IIS 7.5 and Sitecore 6.6
For all local DEVs (we're working on this project with 6 people) there's no issue with the code described above. However, once we deploy the code to the test server, the locations that use the Factory break (i.e. HttpContext.Current.Items is empty).
I can imagine only one scenario in which it behaves the way you described: the Inherits attribute in the Global.asax file on the test server points directly to Sitecore.Web.Application, so your code is never executed.
Could you double check if the Global.asax file starts with
<%@ Application Language="C#" Inherits="My.Assembly.Namespace.Global" %>
This could happen if the Global.asax was changed in your dev environment but hasn't been deployed to the test environment.
If that's not the issue, check whether Application_BeginRequest is executed on the test server. That would tell you whether the Factory is never added to HttpContext.Current.Items or whether it's removed from the Items during request handling.
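A quick way to check that (just a sketch; Trace output is an arbitrary choice, any logging you already have in place will do) is to write a line from the handler and see whether it shows up on the test server:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Temporary diagnostic: confirms the handler runs on the test server
    // and that the factory really is placed into Items for this request.
    HttpContext.Current.Items["Key"] = new Factory();
    System.Diagnostics.Trace.WriteLine(
        "BeginRequest ran for " + HttpContext.Current.Request.RawUrl);
}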
I noticed you use the same name for your property as its type:
public Factory Factory {}
Maybe this causes some unexpected behavior?
This is a two-part question, both parts of which revolve around the same problem.
Without setting anything up in Global.Asax, and just inserting this line in my .cshtml file for my basic layout, and running my site in debug mode, the MiniProfiler is automatically displayed in my frontend.
@MiniProfiler.RenderIncludes()
Trying to shut down the MiniProfiler doesn't seem to work. I have tried something as obvious as the following code, but the MiniProfiler still runs and still shows up in my frontend when running my site locally:
protected void Application_BeginRequest()
{
MiniProfiler.Stop();
if (!Request.IsLocal)
{
MiniProfiler.Start();
}
}
Furthermore, in Umbraco.Core's WebProfiler I found the following code:
if (GlobalSettings.DebugMode == false)
return false;
In my case, I would like to be able to let the Profiler run on my client's site, but that's not possible, as DebugMode is always false when I publish my code to my client's site.
Ideally I would like to do something like:
protected void Application_BeginRequest()
{
MiniProfiler.Stop();
if (Request.IsLocal || Request.UserHostAddress == "My/developers IP")
{
MiniProfiler.Start();
}
}
How can I programmatically stop the MiniProfiler, so that I can handle cases where I don't want it to run even in DebugMode? And how can I make the MiniProfiler available to developers on my customers' live sites?
I have an application which contains multiple hubs, each on its own unique path, so when calling the default:
routes.MapHubs("path", new HubConfiguration(...));
It blows up saying that the signalr.hubs is already defined (as mentioned here MapHubs not needed in SignalR 1.01?).
Now I can understand that it should only be called once, but then you only get one path, so is there any way to handle a path-per-hub scenario, like the way you specify the controller and action with MVC? Something like:
routes.MapHub<SomeHub>("path", new HubConfiguration(...));
== Edit for more info ==
It is often mentioned that you should never need to call MapHubs more than once, and in most scenarios I agree; however, I would not say that this is the case for all applications.
In this scenario it is a website which at runtime loads whatever plugins are available; each plugin is given access to the dependency injection framework to register its dependencies and to the route table to register its routes. The hubs may have nothing to do with each other (other than the fact that they are both hub objects). So the hubs are not all known up front and are only known after the plugins are loaded. Yes, I could wait until after that and try binding the hubs there, but then how would I have custom routes for each one?
This seems to be a case of SignalR trying to abstract a little too much. I don't see anything wrong with having custom routes rather than the default "/signalr", and as the routes all have different responsibilities it seems bad to have one entry route for them all.
So anyway, I think the question still stands, as I don't see this as a bad use case or bad design; I just want to be able to have a route with a hub applied to it, much like in MVC you apply a controller and action to a route.
You shouldn't need more than the signalr.hubs route. If you point your browser to that route, you will see it automatically finds all public types assignable to IHub and creates a JavaScript proxy for them. You can interact with different hubs by name from JavaScript, i.e. if you have the following Hub:
public class GameHub : Hub
You can connect to that specific hub by doing:
var gameHubProxy = $.connection.gameHub;
You can also explicitly specify a name for your hub by adding the HubNameAttribute to the class:
[HubName("AwesomeHub")]
public class GameHub : Hub
You can then retrieve the specific proxy by doing
var awesomeHubProxy = $.connection.awesomeHub;
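To illustrate how several hubs can share that single route, here is a minimal sketch (the JoinGame method and the playerJoined client callback are invented for the example):
public class GameHub : Hub
{
    // Called from the browser through the generated proxy,
    // e.g. $.connection.gameHub.server.joinGame("player1") in SignalR 1.x.
    public void JoinGame(string playerName)
    {
        // Broadcasts only to clients connected to this hub; other hubs
        // registered on the same /signalr route are unaffected.
        Clients.All.playerJoined(playerName);
    }
}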
UPDATE:
I'm not sure whether SignalR will be able to run on multiple paths in the same application. It could potentially mess things up and the default assembly locator won't be able to pick up hubs loaded at runtime anyway.
However, there is a solution where you can implement your own IAssemblyLocator that will pick up hubs from your plugin assemblies:
public class PluginAssemblyLocator : DefaultAssemblyLocator
{
private readonly IEnumerable<Assembly> _pluginAssemblies;
public PluginAssemblyLocator(IEnumerable<Assembly> pluginAssemblies)
{
_pluginAssemblies = pluginAssemblies;
}
public override IList<Assembly> GetAssemblies()
{
return base.GetAssemblies().Union(_pluginAssemblies).ToList();
}
}
After you've loaded your plugins, you should call MapHubs and register an override of SignalR's IAssemblyLocator service:
protected void Application_Start(object sender, EventArgs e)
{
// Load plugins and let them specify their own routes (but not for hubs).
var pluginAssemblies = LoadPlugins(RouteTable.Routes);
RouteTable.Routes.MapHubs();
GlobalHost.DependencyResolver.Register(typeof(IAssemblyLocator), () => new PluginAssemblyLocator(pluginAssemblies));
}
NOTE: Register the IAssemblyLocator AFTER you've called MapHubs, because MapHubs also registers an IAssemblyLocator and would otherwise override yours.
Now, there are issues with this approach. If you're using the static JavaScript proxy, it won't be re-generated every time it's accessed. This means that if your /signalr/hubs proxy is accessed before all plugins/hubs have been loaded, they won't be picked up. You can get around this by either making sure that all hubs are loaded by the time you map the route or by not using the static proxy at all.
This solution still requires you to get a reference to your plugin assemblies, I hope that's feasible...
Recently I asked this :
Get Base URL of My Web Application
This worked to an extent in debug, as I use the VS Development server.
I then produced an install, which points to IIS 7.
I had:
void Application_Start(object sender, EventArgs e)
{
_baseUrl = HttpContext.Current.Request.Url.ToString();
....
}
But this threw the following error :
Request is not available in this context
I then did some reading up and here is why this happens :
http://mvolo.com/iis7-integrated-mode-request-is-not-available-in-this-context-exception-in-applicationstart
I then moved the code out of Application_Start and into Application_BeginRequest, using the technique from that article, since I found Application_BeginRequest was being executed several times.
But the problem is that I need the base URL from IIS 7 for use in Application_Start, so I have a global string which I tried to set in:
FirstRequestInitialization.Initialize(context);
But, not surprisingly, when attempting this:
Application["BaseUrl"] = HttpContext.Current.Request.Url.ToString();
I get this error :
'Microsoft.Web.Administration.Application' is a 'type' but is used like a 'variable'
All I want is the Base URL of IIS 7.
I can't use Directory Entries as I can't support IIS 6.
How can I do this? Any workarounds? Can I execute AppCmd from VS?
Any help much appreciated. Thanks!
Short answer: you can't get it, because the websites do not have a single canonical base URI - a website (or rather, a web application) can be configured to answer to requests on any binding, any domain name, and any resource path - and a website can be reconfigured in the host webserver (IIS) without the application being made aware of this at all.
If you really want to store your "base URL" (even though such a thing doesn't really exist) then you can do it from within Application_BeginRequest like so:
private static readonly Object _arbitraryUrlLock = new Object();
private static volatile String _arbitraryUrl;
public void Application_BeginRequest() {
if( _arbitraryUrl == null )
lock( _arbitraryUrlLock )
if( _arbitraryUrl == null )
_arbitraryUrl = HttpContext.Current.Request.Url.ToString();
}
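If what you actually need is the application root rather than the full URL of whichever request happens to arrive first, a variation of the same sketch (still only valid for the binding that first request came in on) could be:
public void Application_BeginRequest() {
    if( _arbitraryUrl == null )
        lock( _arbitraryUrlLock )
            if( _arbitraryUrl == null ) {
                Uri url = HttpContext.Current.Request.Url;
                // e.g. "http://server:8080" + "/MyApp" => "http://server:8080/MyApp"
                _arbitraryUrl = url.GetLeftPart(UriPartial.Authority)
                              + HttpContext.Current.Request.ApplicationPath;
            }
}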
I have tried the PhluffyFotos example on Azure SDK 1.2 and it works perfectly. Today I installed Azure SDK 1.3 on another (clean) computer and also wanted to try PhluffyFotos on it, but it does not work. I have a problem with this part:
if (!Roles.GetAllRoles().Contains("Administrator"))
{
Roles.CreateRole("Administrator");
}
It seems it somehow does not load the custom RoleProvider (TableStorageRoleProvider). Do you have any idea what it could be?
I get the following error: "The Role Manager feature has not been enabled.", because of the following exception "'System.Web.Security.Roles.ApplicationName' threw an exception of type 'System.Configuration.Provider.ProviderException'".
Can someone test this example and see what is the problem? http://phluffyfotos.codeplex.com/
At first I had the "SetConfigurationSettingPublisher" problem with this example, but I successfully resolved it.
EDIT:
I have looked deeper into it and I am sure there is a problem with the role provider. Somehow the Roles class does not read the config file. Does anyone have any idea why?
I have the exact same problem with my own project. I verified with Fusion logs that the assembly which contains the custom providers doesn't even load, so it seems the problem is somehow related to the web.config settings being ignored.
To run the PhluffyFotos example on Azure SDK 1.3 you have to do the following:
Change the Microsoft.WindowsAzure.StorageClient reference from 1.0 to 1.1
Move "GetConfigurationSettingValue" to the Global.asax "Application_Start" event (a sketch of this follows the example below).
Move Role related initialization to the Global.asax "Application_BeginRequest" event, but you have to ensure that it executes only once. Example:
private static object gate = new object();
private static bool initialized = false;
protected void Application_BeginRequest()
{
if (initialized)
{
return;
}
lock (gate)
{
if (!initialized)
{
// We need to check if this is the first launch of the app and pre-create
// the admin role and the first user to be admin (still needs to register).
if (!Roles.GetAllRoles().Contains("Administrator"))
{
Roles.CreateRole("Administrator");
}
if (!Roles.GetUsersInRole("Administrator").Any())
{
Roles.AddUserToRole(RoleEnvironment.GetConfigurationSettingValue("DefaultAdminRoleUser"), "Administrator");
}
initialized = true;
}
}
}
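For the second point above, a minimal sketch of what the Application_Start part could look like (this assumes the sample still wires up its storage settings through CloudStorageAccount.SetConfigurationSettingPublisher, as the SDK 1.2 version did):
protected void Application_Start(object sender, EventArgs e)
{
    // Under SDK 1.3 (full IIS) the role entry point and the web application run in
    // separate processes, so a setting publisher configured in WebRole.OnStart is
    // not visible to the website; wire it up here instead.
    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));
}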
I posted a version of the code with the fixes suggested by Peter to rapidshare here:
http://rapidshare.com/files/434649379/PhluffyFotos.zip
For those who don't want to fuss around fixing the dependencies etc.
Cheers,
Daniel
I am trying to build (in C#) one web service / WCF engine that performs two actions:
One timer (thread) that runs every 10 minutes, requesting some information (connecting to another server to grab some status info) and updating it in a database. (This must be automatic; no human action will be involved.) The idea is that the web service automatically (every 10 minutes) updates the database with the most recent status information.
One service method that gets some information from the database. (This is a simple method that returns the information when someone requests it.) It is responsible for selecting the status info from the database.
The problem is step 1, because step 2 is very easy.
Can anyone help me, with ideas or some code, on how to do step 1?
Is there any pattern that should be used here?
Since it's a webapp (for instance, a "WCF Service Application" project type in VS2010), you can hook into the application events.
By default that project template type doesn't create a Global.asax, so you'll need to "add new item" and choose "Global Application Class" (it won't be available if you already have a Global.asax, FWIW).
Then you can just use the start and end events on the application to start and stop your timer, so something like:
public class Global : System.Web.HttpApplication
{
private static readonly TimeSpan UpdateEngineTimerFrequency = TimeSpan.FromMinutes(10);
// System.Threading.Timer; static so it stays reachable if Application_End fires on a different HttpApplication instance
private static Timer UpdateEngineTimer { get; set; }
private void MyTimerAction(object state)
{
// do engine work here - call other servers, bake cookies, etc.
}
protected void Application_Start(object sender, EventArgs e)
{
UpdateEngineTimer = new Timer(MyTimerAction,
null, /* or whatever state object you need to pass */
UpdateEngineTimerFrequency,
UpdateEngineTimerFrequency);
}
protected void Application_End(object sender, EventArgs e)
{
UpdateEngineTimer.Dispose();
}
}
The Single Responsibility Principle suggests that you should split these two responsibilities into two services. One (a Windows Service) would handle the Timer. The second, the WCF Service, would have the single operation to query the database and return the data.
These are independent functions, and should be implemented independently.
Additionally, I would recommend against depending on IIS or Application_Start and similar methods. That will prevent your WCF service from being hosted in WAS or some other environment. Keep in mind that WCF is much more flexible than ASMX web services. It doesn't restrict where you host your service. You should think carefully before you place such restrictions on your own service.
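For completeness, a minimal sketch of what the Windows Service half could look like (the service class name and DoStatusUpdate are placeholders):
using System;
using System.ServiceProcess;
using System.Threading;

public class StatusUpdateService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // Poll the remote server and update the database every 10 minutes.
        _timer = new Timer(_ => DoStatusUpdate(), null,
                           TimeSpan.Zero, TimeSpan.FromMinutes(10));
    }

    protected override void OnStop()
    {
        _timer.Dispose();
    }

    private void DoStatusUpdate()
    {
        // Placeholder: call the other server, then write the status to the database.
    }

    public static void Main()
    {
        ServiceBase.Run(new StatusUpdateService());
    }
}
The WCF service itself then stays a thin read-only wrapper around the database query, with no timers or background threads of its own.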