NCache shared by multiple processes on the same server - C#

I am working on a requirement to cache some database values that can be reused, but I want the cache to be accessible to all the processes on the same server.
Overview:
So basically, there will be multiple processes that get work from an API and write the records to the database. Some of these database values will be cached.
The processes will be multiple Windows services, and I want them to share the same cache.
How can this be achieved using NCache? I am pretty new to using it, so any links or directions are greatly appreciated.

The biggest value of NCache is that it can be used as an OutProc distributed in-memory cache, where the cache resides within the NCache process itself; this differs from an InProc cache, where access would be limited to a single process.
You need to configure an OutProc cache running on either a separate dedicated caching server (or cluster) or on the same server as your services.
Refer to http://www.alachisoft.com/resources/docs/ncache/admin-guide/local-cache.html for more information on OutProc and InProc caches.
Once you install the NCache server, you can create your caching configuration by modifying the config.ncconf file that by default lives at C:\Program Files\NCache\config.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration>
  <cache-config cache-name="MyOutProcCacheName">
    <cache-settings inproc="False">
      <logging enable-logs="True" trace-errors="True" trace-debug="False" log-path=""/>
      <performance-counters enable-counters="True" snmp-port="0"/>
      <cache-notifications item-remove="False" item-add="False" item-update="False"/>
      <cleanup interval="15sec"/>
      <storage type="heap" cache-size="2024mb"/>
      <eviction-policy default-priority="normal" eviction-ratio="5%"/>
      <cache-topology topology="local-cache"/>
      <client-death-detection enable="False" grace-interval="60sec"/>
    </cache-settings>
  </cache-config>
</configuration>
The above configuration will create an OutProc cache on the local server (see cache-topology). This can instead be configured as a mirrored, partitioned, or replicated cache in a clustered environment if needed (refer to http://www.alachisoft.com/resources/docs/ncache/admin-guide/cache-topologies.html).
You can then start the NCache service, connect to it from within your application, and initialize a connection to the named cache defined in the configuration above.
Cache outProcCache = NCache.InitializeCache("MyOutProcCacheName");
You can also configure the connection to the NCache server/service entirely in code, instead of using a client.ncconf file, by passing configuration parameters to the InitializeCache method above.
CacheInitParams connectionParams = new CacheInitParams();
connectionParams.ServerList = new CacheServerInfo[]{ new CacheServerInfo("ncacheIp", ncachePort) };
Cache outProcCache = NCache.InitializeCache("MyOutProcCacheName", connectionParams);
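Once the cache is initialized, every Windows service that connects to "MyOutProcCacheName" works against the same OutProc cache, so a value inserted by one process is visible to the others. A minimal sketch of the read-through pattern (the key, the CustomerRecord type, and LoadCustomerFromDatabase are illustrative placeholders, not NCache APIs):
// Check the shared cache first; fall back to the database on a miss.
var customer = outProcCache.Get("Customer:42") as CustomerRecord;
if (customer == null)
{
    customer = LoadCustomerFromDatabase(42);       // hypothetical data-access call
    outProcCache.Insert("Customer:42", customer);  // now visible to the other services
}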

Related

Share Session Across Multiple Applications [duplicate]

I am trying to share sessions between two web applications, both hosted on the same server. One is a .NET 2.0 Web Forms application, the other is a .NET 3.5 MVC2 application.
Both apps have their session set up like this:
<sessionState
mode="StateServer"
stateConnectionString="tcpip=127.0.0.1:42424"
/>
In the Web Forms application I am posting the session key to the MVC app:
protected void LinkButton1_Click(object sender, EventArgs e)
{
    Session["myvariable"] = "dan";
    string sessionKey = HttpContext.Current.Session.SessionID;
    // Followed by some code that posts sessionKey to the other application
}
I then receive it in the MVC application and try to use the same session like this:
[HttpPost]
public void Receive(string sessionKey)
{
    var manager = new SessionIDManager();
    bool redirected;
    bool isAdded;
    manager.SaveSessionID(HttpContext.ApplicationInstance.Context, sessionKey, out redirected, out isAdded);
    var myVar = Session["myvariable"];
}
The key is being posted but the session does not seem to get loaded in the MVC app, i.e. sessionKey is null. Can what I am trying to do be done?
I did it this way:
Basically, the idea is that both apps use the native .NET sessionState stored in SQL Server. By using the same machine key and making a small tweak to a stored procedure, both apps can share any session keys and/or forms authentication.
Both apps would do something like this in their web.config:
<sessionState mode="SQLServer" sqlConnectionString="Data Source=.\SQLEXPRESS;User Id=test;Password=test;Application Name=AppName" />
<machineKey
validationKey="SOMEKEY"
validation="SHA1" decryption="AES"
/>
The session state database would need to be set up on a database server that both apps can see.
Docs for doing this:
http://msdn.microsoft.com/en-us/library/ms229862(VS.80).aspx
Command that would need to be run:
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin>aspnet_regsql.exe -E -ssadd -sstype p -S .\SQLEXPRESS
Tweak the stored procedure (TempGetAppID) to:
@appId int OUTPUT
AS
  -- start change
  -- Use the application name specified in the connection for the appname if specified.
  -- This allows us to share session between sites just by making sure they have
  -- the same application name in the connection string.
  DECLARE @connStrAppName nvarchar(50)
  SET @connStrAppName = APP_NAME()
  -- .NET SQLClient Data Provider is the default application name for .NET apps
  IF (@connStrAppName <> '.NET SQLClient Data Provider')
    SET @appName = @connStrAppName
  -- end change
  SET @appName = LOWER(@appName)
The problem is that session keys are scoped to the applications, so two applications having the same session key in fact have separate sessions.
You can do one of two things:
Put both applications as virtual directories under a common IIS application. I don't think this is a good idea, but it will work.
Roll your own session data solution for the data you want to share, possibly using the backend database as the common storage, if you have one.
Based on Justin's comment, just to clarify: option 2 is not referring to the SQL state management for out-of-process sessions. I mean for you to actually manually manage the shared data for the two sessions, possibly using a database.
You can use a common machine key to generate the same session ID in both applications for a given user. Additionally, you should also plan on storing the sessions of both applications in a common store such as the ASP.NET State Service or a distributed cache.
You can use the NCache distributed cache, which provides a session-sharing feature between different applications. You specify the same Application ID tag for both apps inside the session state settings, which allows you to share the session object provided the same session ID is generated for both applications.

ASP.NET and C#: connection string in web.config vs connection string stored on Azure

I have a web form developed in ASP.NET and C#. I am storing the connection string to a database in the web.config file like this:
<connectionStrings configSource="MySecrets.config" />
This points to a local file in the same directory as the solution. Debugging locally works; however, it is not advisable to commit this file to source control, to avoid exposing these secrets.
This article mentions that it is possible to store connection strings on Azure - in the Configurations section of an App Service. The article also says that it's possible to retrieve the connection strings in the code by doing:
dbConn = System.Configuration.ConfigurationManager.AppSettings("myConnStringName")
The article also mentions that "if the application setting(s) happen to already exist in your web.config file, Windows Azure Web Sites will automatically override them at runtime using the values associated with your website. Connection strings work in a similar fashion."
(This assumes that your connection strings are explicit in the web.config file, and if committed to source control, they would be exposed.)
However, in my code, I already have a line with:
dbConn = WebConfigurationManager.ConnectionStrings["myConnStringName"].ConnectionString
Questions:
1) How am I supposed to reconcile these two lines without declaring the same variable (dbConn) twice?
2) How can I not commit MySecrets.config to source control, but at the same time use it when I debug my app locally, while using the connection string stored on Azure when working with the published app?

relying on a stateful service for configuration values?

We have approximately 100 microservices running. Each microservice has an entire set of configuration files such as applicationmanifest.xml, settings.xml, node1.xml, etc.
This is getting to be a configuration nightmare.
After exploring this, someone has suggested:
You can keep configs inside stateful service, then change parameters
through your API.
The problem I see with this is that there is now a single point of failure: the service that provides the configuration values.
Is there a centralized solution to maintaining so much configuration data for every microservice?
While a central configuration service seems like the way to go, doing so introduces a few problems that you must get right every time. When you have a central configuration service, it MUST be updated with the correct configuration before you start your code upgrade, and you must of course keep previous configurations around in case your deployment rolls back. Here's the configuration slide that I presented when I was on the Service Fabric team.
Service Fabric ships with the ability to version configuration; you should use that, but not in the manner that Service Fabric recommends. For my projects, I use Microsoft.Extensions.Configuration for configuration. Capture the configuration events:
context.CodePackageActivationContext.ConfigurationPackageAddedEvent += CodePackageActivationContext_ConfigurationPackageAddedEvent;
context.CodePackageActivationContext.ConfigurationPackageModifiedEvent += CodePackageActivationContext_ConfigurationPackageModifiedEvent;
context.CodePackageActivationContext.ConfigurationPackageRemovedEvent += Context_ConfigurationPackageRemovedEvent;
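As a minimal sketch (assuming the service class keeps an IConfigurationRoot property named Configuration, which is not part of Service Fabric itself), a handler for one of these events can simply rebuild and swap the configuration:
// Rebuild the configuration whenever the Config package is modified.
private void CodePackageActivationContext_ConfigurationPackageModifiedEvent(
    object sender, PackageModifiedEventArgs<ConfigurationPackage> e)
{
    // LoadConfiguration is shown below; Configuration is an assumed property on this class.
    Configuration = LoadConfiguration();
}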
Each of these handlers ultimately calls a method like the following to load the configuration:
protected IConfigurationRoot LoadConfiguration()
{
    ConfigurationBuilder builder = new ConfigurationBuilder();

    // Get the name of the environment this service is running within.
    EnvironmentName = Environment.GetEnvironmentVariable(EnvironmentVariableName);
    if (string.IsNullOrWhiteSpace(EnvironmentName))
    {
        var err = $"Environment is not defined using '{EnvironmentVariableName}'.";
        _logger.Fatal(err);
        throw new ArgumentException(err);
    }

    // Enumerate the configuration packages. Look for the service type name, service name or settings.
    IList<string> names = Context?.CodePackageActivationContext?.GetConfigurationPackageNames();
    if (null != names)
    {
        foreach (string name in names)
        {
            if (name.Equals(GenericStatelessService.ConfigPackageName, StringComparison.InvariantCultureIgnoreCase))
            {
                var newPackage = Context.CodePackageActivationContext.GetConfigurationPackageObject(name);

                // Set the base path to be the configuration directory, then add the JSON file for the service name and the service type name.
                builder.SetBasePath(newPackage.Path)
                       .AddJsonFile($"{ServiceInstanceName}-{EnvironmentName}.json", true, true)
                       .AddJsonFile($"{Context.ServiceTypeName}-{EnvironmentName}.json", true, true);

                // Load the settings into memory.
                builder.AddInMemoryCollection(LoadSettings(newPackage));
            }
        }
    }

    // Swap in a new configuration.
    return builder.Build();
}
You can now interact with the configuration using the standard .NET configuration APIs. The last thing to cover is the format of the configuration files. In the PackageRoot\Config directory, you simply include your configuration files; I happen to use the name of the service plus the data center.
Internally, the files look like this, with a JSON property for each Service Fabric class:
{
  "Logging": {
    "SeqUri": "http://localhost:5341",
    "MaxFileSizeMB": "100",
    "DaysToKeep": "1",
    "FlushInterval": "00:01:00",
    "SeqDefaultLogLevel": "Verbose",
    "FileDefaultLogLevel": "Verbose"
  },
  "ApplicationOperations": {
    "your values here": "<values>"
  }
}
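Once LoadConfiguration has run, values from a file like the one above are read through the standard Microsoft.Extensions.Configuration indexer with colon-separated keys (a small usage sketch; the key names simply mirror the sample file):
// Read nested values from the "Logging" section of the sample file.
IConfigurationRoot config = LoadConfiguration();
string seqUri = config["Logging:SeqUri"];                    // "http://localhost:5341"
int daysToKeep = int.Parse(config["Logging:DaysToKeep"]);    // "1"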
If you've stuck with me this long, the big advantage of this is that the configuration gets deployed at the same time as the code, and if the code rolls back, so does the configuration, leaving you in a known state.
NOTE: It seems your question is blurred between whether a single configuration service is reliable and whether to use static vs. dynamic configuration.
For the debate on static vs dynamic configuration, see my answer to the OP's other question.
A config service sounds reasonable, particularly when you consider that Service Fabric is designed to be reliable, even for stateful services.
MSDN:
Service Fabric enables you to build and manage scalable and reliable applications composed of microservices that run at high density on a shared pool of machines, which is referred to as a cluster
Develop highly reliable stateless and stateful microservices. Tell me more...
Stateful services store state in a reliable distributed dictionary, enclosed in a transaction which guarantees the data is stored if the transaction was successful.
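For example, a stateful configuration service could keep its values in a reliable dictionary along these lines (a minimal sketch using Microsoft.ServiceFabric.Data from inside a StatefulService method; the dictionary name and key are illustrative):
// Store a configuration value in a replicated, transactional dictionary.
var configStore = await StateManager.GetOrAddAsync<IReliableDictionary<string, string>>("configStore");
using (ITransaction tx = StateManager.CreateTransaction())
{
    await configStore.SetAsync(tx, "Logging:SeqUri", "http://localhost:5341");
    await tx.CommitAsync();   // The write is replicated before the commit completes.
}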
OP:
The problem I see with this, is that there is now a single point of a failure: the service that provides the configuration values.
Not necessarily. It's not really the service that is the single point of failure but a "fault domain" as defined by Service Fabric and your chosen Azure data centre deployment options.
MSDN:
A Fault Domain is any area of coordinated failure. A single machine is a Fault Domain (since it can fail on its own for various reasons, from power supply failures to drive failures to bad NIC firmware). Machines connected to the same Ethernet switch are in the same Fault Domain, as are machines sharing a single source of power or in a single location. Since it's natural for hardware faults to overlap, Fault Domains are inherently hierarchal and are represented as URIs in Service Fabric.
It is important that Fault Domains are set up correctly since Service Fabric uses this information to safely place services. Service Fabric doesn't want to place services such that the loss of a Fault Domain (caused by the failure of some component) causes a service to go down. In the Azure environment Service Fabric uses the Fault Domain information provided by the environment to correctly configure the nodes in the cluster on your behalf. For Service Fabric Standalone, Fault Domains are defined at the time that the cluster is set up
So you would probably want to have at least two configuration services running on two separate fault domains.
More
Describing a service fabric cluster

How to get and set a persistent variable server side in a web application

I am trying to save a simple int on the server side so that any user can log in and update it. I thought this would be a simple task and began trying to use the settings designer; however, I couldn't change the scope from "application" to "user", as application settings are read-only.
I know I could save and change the variable in an XML file, but I thought there must be a simpler way.
I have tried to use user profiles; however, it isn't working. Any ideas? (I have also used Context.Profile.)
<profile>
  <providers>
    <clear/>
    <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="ApplicationServices" applicationName="/"/>
  </providers>
  <properties>
    <add name="FilmNumber" type="int" allowAnonymous="true" defaultValue="2"/>
  </properties>
</profile>
Code:
//********Get poster Number*******
int LastPosterNumber=0;
LastPosterNumber = (int)HttpContext.Current.Profile.GetPropertyValue("FilmNumber");
string strFileName;
strFileName = FileField.PostedFile.FileName;
string c = (LastPosterNumber + 1).ToString();
string dirPath = System.Web.HttpContext.Current.Server.MapPath("~") + "/Images/FilmPosters/" + c + ".jpg";
FileField.PostedFile.SaveAs(dirPath);
//******Save new poster number*******
HttpContext.Current.Profile.SetPropertyValue("FilmNumber", int.Parse(c));
HttpContext.Current.Profile.Save();
Try to avoid Settings, because they require your config file to be modifiable. Also, the concept of "user settings" in ASP.NET does not exist, because the application always runs under the identity of the IIS worker process; the .NET Settings API is not aware of ASP.NET Membership or other concepts of "users".
Your best option is to use a DBMS, Application state or a static field in your application (if it doesn't need to be persisted beyond the lifespan of w3wp.exe), or a file on disk. You don't have to use XML serialization; you can write the value out manually (just be sure to lock the file first, because ASP.NET applications are multi-threaded).
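If you go the file-on-disk route, a minimal sketch of that locking (the path, file name, and method are illustrative, loosely following the FilmNumber counter from the question) could look like this:
// Requires System.IO and System.Web.
// Shared lock object; ASP.NET handles requests on multiple threads.
private static readonly object FileLock = new object();

private static int ReadAndIncrementFilmNumber()
{
    // ~/App_Data is a conventional writable folder that IIS does not serve directly.
    string path = HttpContext.Current.Server.MapPath("~/App_Data/filmNumber.txt");
    lock (FileLock)
    {
        int current = File.Exists(path) ? int.Parse(File.ReadAllText(path)) : 0;
        File.WriteAllText(path, (current + 1).ToString());
        return current + 1;
    }
}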
In my applications, I store only the connection string and the bare minimum of initialization settings in web.config; per-user settings I store in a database table behind a simple API layer (usually with application settings and inheritance, or "EffectiveSettings"). Note that this is completely different (as far as the implementation is concerned) from .NET's Settings API, which I avoid completely for various reasons, including those already promulgated in this answer.
Notes on IIS w3wp.exe lifespan:
IIS will terminate or "recycle" w3wp.exe at any time for a variety of reasons, which is why your ASP.NET application must persist to long-term storage at the nearest opportunity; any in-memory state will be lost. Reasons for this include:
Inactivity. If an application pool worker process has not handled a request in at least 45 minutes (or so) IIS will shut down the process.
Recycling. IIS takes pre-emptive measures against worker processes leaking resources by terminating and restarting them every 90 minutes or so (or it might be 29 hours, I'm not sure).
Unresponsiveness. IIS gives worker processes a strict timeout to respond to incoming requests; I think the default is 60 seconds. If no response is sent to the client then the process will be restarted.
Reaching a memory limit. Similar to the automatic time-based recycling described above, IIS will restart a worker process if it reaches a memory limit (which is why it's important to manage your resources).
Reaching a request limit. Again, similar to the automatic time-based recycling, IIS will restart a worker process after its odometer reads X many requests. This is disabled by default but your ISP might have enabled it.

Will this be global for the whole site?

I need to load an XML file into memory and have it available globally for the whole site. Does this code accomplish this?
If so, how is updating this "cached" version accomplished in the future?
XPathDocument ConvProductDoc;
ConvProductDoc = Cache["doc"] as XPathDocument;
if (ConvProductDoc == null)
{
    ConvProductDoc = new XPathDocument(HttpContext.Current.Request.MapPath(@"\data\foo\bar\my.xml"));
    Cache.Insert("doc", ConvProductDoc);
}
Yes, the ASP.NET Cache object is site wide. There are a lot of options for managing the Cache and setting expiration rules, etc.
To assign or update the value in Cache, you simply set it like you would any Dictionary or Hashtable value:
Cache["doc"] = newValue;
You can read a lot more about the Cache object in the MSDN Docs here: http://msdn.microsoft.com/en-us/library/aa478965.aspx#aspnet-cachingtechniquesbestpract_topic4
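One common pattern (an illustration of the Cache.Insert overloads, not something from the answer above) is to attach a CacheDependency to the XML file so the cached document is evicted automatically whenever the file changes:
// Editing my.xml invalidates the entry, so the next request reloads and re-caches it.
string xmlPath = HttpContext.Current.Request.MapPath(@"\data\foo\bar\my.xml");
Cache.Insert("doc", new XPathDocument(xmlPath), new CacheDependency(xmlPath));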
If your website is only on one server, then yes.
If your website is distributed over more than one server, then no.
The cache key and data will only be available on the server where they were stored.
If your server is in Amazon EC2, for example, you could use ElastiCache, which would ensure the cache is available across a distributed server environment.
