Best way to cache stored procedure results in C#

I am running a web site in ASP.NET/C#/SQL Server 2012 that needs to cache the results of some stored procedure queries. The results should have an absolute expiration. What options are there to do this?
Something like setting command.ExpirationDateTime = DateTime.Now.AddMinutes(10) would be ideal, but as far as I know nothing like that exists.
Edit:
The data will be returned from an API, so caching using pages or user controls is not possible.

Have a look at the Enterprise Library Caching Application Block; it has exactly the functionality you are looking for.
The Caching Application Block
cache.Add(listid.ToString(), list, CacheItemPriority.Normal, null,
new SlidingTime(TimeSpan.FromMinutes(60)));
Note that this example uses SlidingTime; for the absolute expiration you asked about, the block also provides AbsoluteTime, e.g. new AbsoluteTime(DateTime.Now.AddMinutes(10)).

I don't understand your restriction on where you can actually perform caching, but I assume you'll have access to HttpRuntime.Cache? If that's the case, I have written a series of utilities for caching service responses in a blog post (Caching Services - The Lazy Way).
The basic usage of this utility is:
string cacheKey = GenerateCacheKey(myParam); //most likely a derivative of myParam
if (Cache.IsInCache<MyResultType>(cacheKey))
{
return Cache.GetFromCache<MyResultType>(cacheKey);
}
var result = GetMyRequestedResult(myParam);
if (result != null) //or whatever makes sense
{
Cache.InsertIntoCacheAbsoluteExpiration(cacheKey, result, DateTime.Now.AddMinutes(10));
}
return result;
If you have any services in between, the post shows a cute class for interacting/caching with those services.

I ended up creating a hash from the SqlCommand by merging the command text with the parameter names and values, and used that hash as the cache key when putting/getting items in/from the HttpContext.Current.Cache object. It works fine. Probably not super fast, but since some of the queries are much slower, it is acceptable.
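A minimal sketch of that hashing idea, under the assumption that the command's parameters can be flattened into name/value pairs (the helper name and signature are hypothetical; the original hashed the SqlCommand object directly):

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

static class CommandCacheKey
{
    // Hypothetical helper: builds a cache key by hashing the command
    // text together with every parameter name and value.
    public static string For(string commandText, IDictionary<string, object> parameters)
    {
        var sb = new StringBuilder(commandText);
        foreach (var p in parameters)
            sb.Append('|').Append(p.Key).Append('=').Append(p.Value);

        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString()));
            return Convert.ToBase64String(hash); // compact string key for the cache
        }
    }
}
```

The Base64-encoded SHA-256 keeps the key short and deterministic, so identical queries with identical parameters always map to the same cache entry.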

Starting with .NET Framework 4 you can also use System.Runtime.Caching.ObjectCache, and not only in web applications. Here is an example:
List<EmailData> result = null;
ObjectCache cache = MemoryCache.Default;
var key = string.Concat(title, ":", language);
var item = cache.GetCacheItem(key);
if (item != null)
return item.Value as List<EmailData>;
using (var connection = _connectionFactory.OpenConnection())
{
result = connection.Query<EmailData>(sql, new { title, language }).ToList();
}
var cachingPolicy = new CacheItemPolicy
{
AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(_cacheExpirationIntervalInMinutes)
};
cache.Set(new CacheItem(key, result), cachingPolicy);
return result;
You can read more: https://msdn.microsoft.com/en-us/library/system.runtime.caching.objectcache(v=vs.110).aspx
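One hedged refinement to the pattern above: the separate GetCacheItem/Set steps can race under concurrent requests, running the query more than once. ObjectCache.AddOrGetExisting combined with Lazy&lt;T&gt; makes the store-or-fetch step atomic; the class, key, and loader below are illustrative only:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

static class AtomicCache
{
    // Wraps the value in Lazy<T> so that even if two threads race,
    // AddOrGetExisting stores exactly one Lazy and the loader runs once.
    public static List<string> GetOrLoad(string key, Func<List<string>> loader)
    {
        ObjectCache cache = MemoryCache.Default;
        var lazy = new Lazy<List<string>>(loader);
        var policy = new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(10)
        };
        // AddOrGetExisting returns null when our entry was inserted,
        // or the existing entry when another thread got there first.
        var existing = cache.AddOrGetExisting(key, lazy, policy) as Lazy<List<string>>;
        return (existing ?? lazy).Value;
    }
}
```

Only the winning thread's Lazy ever has its Value materialized, so the expensive query executes at most once per expiration window.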

Related

Improving memory management in a .NET Core Windows service

I wanted to ask about the best approach for a console application that can also be used as a Windows service in a .NET Core environment. The problem is not building such an application, but rather the executed code.
Let me explain exactly what the problem is.
When the Windows service is started, a loop is initiated which does several things:
accessing Amazon's AWS SQS
accessing a CSV file via HTTP request => that data is used and partially stored in a DB
accessing tables of an Oracle DB via EF (insert, update and delete)
So far so good; everything works as intended. I use dependency injection (scoped) in my loop to access the methods I have written to get all the work done.
The tricky part is that the application's memory usage is, well, not optimal. It does what it should, but while downloading and using the data from the CSV files the application uses too much memory and doesn't free it up properly. Do you have any suggestions or knowledge base articles on how to handle such scenarios (a loop in a Windows service)?
I tried to free up everything I can: clearing lists and setting them to null, disabling any tracking in EF while querying data (also disabling the extra insert/update change tracker), and using "using" statements (disposing elements).
Also, I am using the latest Amazon AWS SDK (Core and SQS) and EF Core 2.2.6 with the Oracle provider.
Any chance you might have a hint?
If you need more information, just tell me. I will then provide more data as needed.
Kind regards
Regarding the comment about reading the CSV file:
Reading the file from the URL.
using (var response = await request.GetResponseAsync())
{
await using (var receiveStream = response.GetResponseStream())
{
using (var readStream = new StreamReader(receiveStream, Encoding.UTF8))
{
var content = readStream.ReadToEnd();
result.Content = content.Split('\n').ToList();
result.IsSuccess = true;
}
}
}
and after reading it, I convert it to my target class
public static async Task<List<Curve>> ReturnCurveData(List<string> content)
{
var checkVar = -1;
var list = new List<Curve>();
foreach (var entry in content)
{
if (string.IsNullOrEmpty(entry)) continue;
var entrySplitted = entry.Split('|');
if (entrySplitted.Length < 3) continue;
else if(!int.TryParse(entrySplitted[0], out checkVar)) continue;
var item = new Curve();
item.Property1 = Convert.ToInt32(entrySplitted[0]);
item.Property2 = (entrySplitted.Length ==3) ? DateTime.Now : Convert.ToDateTime(entrySplitted[1]);
item.Property3 = (entrySplitted.Length ==3) ? Convert.ToDateTime(entrySplitted[1]) : Convert.ToDateTime(entrySplitted[2]);
item.Value = (entrySplitted.Length ==3) ? Convert.ToDecimal(entrySplitted[2], new CultureInfo("en-US")): Convert.ToDecimal(entrySplitted[3], new CultureInfo("en-US"));
list.Add(item);
}
return await Task.FromResult(list);
}
Regarding the definition of scope
var hostBuilder = new HostBuilder()
.UseContentRoot(Directory.GetCurrentDirectory())
.ConfigureAppConfiguration((hostingContext, config) =>
{
...
})
.ConfigureServices((hostContext, services) =>
{
services.AddScoped<Data.Queries.Database.Db>();
services.AddScoped<Data.Queries.External.Aws>();
services.AddScoped<Data.Queries.External.Web>();
services.RegisterEfCoreOracle<DbContext>(AppDomain.CurrentDomain.BaseDirectory,
"cfg_db.json");
services.AddScoped<IExecute, Execute>();
services.AddHostedService<ExecuteHost>();
})
.ConfigureLogging((hostingContext, logging) =>
{
...
});
public static void RegisterEfCoreOracle<T>(this IServiceCollection services, string configurationDirectory, string configurationFile, ServiceLifetime lifetime = ServiceLifetime.Scoped) where T : DbContext
{
//Adding configuration file
IConfiguration configuration = new ConfigurationBuilder()
.SetBasePath(configurationDirectory)
.AddJsonFile(configurationFile, optional: false)
.Build();
services.Configure<OracleConfiguration<T>>(configuration);
var oraConfig = services.ReturnServiceProvider().GetService<IOptions<OracleConfiguration<T>>>();
if (oraConfig.Value.Compatibility != null)
{
services.AddDbContext<T>(o => o
.UseOracle(oraConfig.Value.ConnectionString(), b => b
.UseOracleSQLCompatibility(oraConfig.Value.Compatibility)), lifetime);
}
else
{
services.AddDbContext<T>(o => o
.UseOracle(oraConfig.Value.ConnectionString()), lifetime);
}
}
Well since you posted your code, we can analyze the problem pretty easily:
var content = readStream.ReadToEnd();
As I said, never read the whole file into memory; process it line by line, for example using a StreamReader or StringReader, or any of the million CSV parsers on NuGet.
result.Content = content.Split('\n').ToList();
Not only are you reading the entire file into memory, you're then splitting it into values (so in addition to the entire file contents, you're allocating an array with one entry per line, and for each line allocating a string for every separated element, for a total of lines*elements strings), and on top of all that you allocate yet another list and copy the contents of the array into it.
Edit: You're splitting it into lines here, and into values in the second part. My analysis is correct, but the problem is split over multiple lines.
Needless to say, this is ludicrous at best. Stop using Split, don't ToList needlessly, and don't read all of it at once. This is a long-running service; all of this work can be repeated on every loop iteration, which can easily happen dozens of times depending on your CPU.
I won't go over the second part, but it's even worse. At a glance I see even more lists allocated and even more Splits. And the line return await Task.FromResult(list); shows you don't understand async functions at all. Not only is what you have not async at all, but if you insist on making it async for the fun of it, return the list directly, not as an awaited task.
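As a hedged sketch of the line-by-line processing suggested above (the pipe-delimited field layout is taken from the original parser; the class and method names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

static class CurveStream
{
    // Reads one line at a time instead of buffering the whole response,
    // so only a single line of the CSV is ever held in memory.
    public static IEnumerable<decimal> ReadValues(TextReader reader)
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            if (string.IsNullOrEmpty(line)) continue;
            var parts = line.Split('|');
            if (parts.Length < 3) continue;
            // The value is the last field, as in the original parser.
            yield return decimal.Parse(parts[parts.Length - 1],
                                       new CultureInfo("en-US"));
        }
    }
}
```

In the original code this would mean wrapping the response stream in a StreamReader and enumerating lazily, instead of ReadToEnd plus Split.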

Dynamics CRM Online Object caching not caching correctly

I have a requirement where we need a plugin to retrieve a session id from an external system and cache it for a certain time. I use a field on the entity to test whether the session is actually being cached. When I refresh the CRM form a couple of times, the output consistently shows four versions of the same key at any time. I have tried clearing the cache and testing again, but still get the same results.
Any help appreciated, thanks in advance.
Output on each refresh of the page:
20170511_125342:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125410:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125342:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125437:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125437:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
To accomplish this, I have implemented the following code:
public class SessionPlugin : IPlugin
{
public static readonly ObjectCache Cache = MemoryCache.Default;
private static readonly string _sessionField = "new_sessionid";
public void Execute(IServiceProvider serviceProvider)
{
var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
try
{
if (context.MessageName.ToLower() != "retrieve" || context.Stage != 40)
return;
var userId = context.InitiatingUserId.ToString();
// Use the userid as key for the cache
var sessionId = CacheSessionId(userId, GetSessionId(userId));
sessionId = $"{sessionId}:{Cache.Count(kvp => kvp.Key == userId)}:{userId}";
// Assign session id to entity
var entity = (Entity)context.OutputParameters["BusinessEntity"];
if (entity.Contains(_sessionField))
entity[_sessionField] = sessionId;
else
entity.Attributes.Add(new KeyValuePair<string, object>(_sessionField, sessionId));
}
catch (Exception e)
{
throw new InvalidPluginExecutionException(e.Message);
}
}
private string CacheSessionId(string key, string sessionId)
{
// If value is in cache, return it
if (Cache.Contains(key))
return Cache.Get(key).ToString();
var cacheItemPolicy = new CacheItemPolicy()
{
AbsoluteExpiration = ObjectCache.InfiniteAbsoluteExpiration,
Priority = CacheItemPriority.Default
};
Cache.Add(key, sessionId, cacheItemPolicy);
return sessionId;
}
private string GetSessionId(string user)
{
// this will be replaced with the actual call to the external service for the session id
return DateTime.Now.ToString("yyyyMMdd_hhmmss");
}
}
This has been greatly explained by Daryl here: https://stackoverflow.com/a/35643860/7708157
Basically you do not have one MemoryCache instance for the whole CRM system; your code simply proves that there are multiple app domains for every plugin, so even static variables stored in such a plugin can hold multiple values, which you cannot rely on. There is no documentation on MSDN that explains how the sandboxing works (especially the app domains in this case), but using static variables is certainly not a good idea. And if you are dealing with CRM Online, you cannot be sure whether there is a single front-end server or many of them (which will also produce this behaviour).
Class level variables should be limited to configuration information. Using a class level variable as you are doing is not supported. In CRM Online, because of multiple web front ends, a specific request may be executed on a different server by a different instance of the plugin class than another request. Overall, assume CRM is stateless and that unless persisted and retrieved nothing should be assumed to be continuous between plugin executions.
Per the SDK:
The plug-in's Execute method should be written to be stateless because
the constructor is not called for every invocation of the plug-in.
Also, multiple system threads could execute the plug-in at the same
time. All per invocation state information is stored in the context,
so you should not use global variables or attempt to store any data in
member variables for use during the next plug-in invocation unless
that data was obtained from the configuration parameter provided to
the constructor.
Reference: https://msdn.microsoft.com/en-us/library/gg328263.aspx

NetSuite SuiteTalk: SavedSearch for "Deleted Record" Type

How does one get the results of a "Saved Search" of type "Deleted Record" in NetSuite? Other search types are obvious (CustomerSearchAdvanced, ItemSearchAdvanced, etc...), but this one seems to have no reference online, just documentation around deleting records, not running saved searches on them.
Update 1
I should clarify a little more what I'm trying to do. In NetSuite you can run (and save) Saved Searches on the record type "Deleted Record". I believe you are able to access at least five columns (excluding user-defined ones) through this process from the web interface:
Date Deleted
Deleted By
Context
Record Type
Name
You are also able to set up search criteria as part of the Saved Search. I would like to access a series of these Saved Searches already present in my system, utilizing their existing search criteria and retrieving data from all five of their displayed columns.
The Deleted Record record type isn't supported in SuiteTalk as of version 2016_2, which means you can't run a Saved Search and pull down the results.
This is not uncommon when integrating with NetSuite. :(
What I've always done in these situations is create a RESTlet (NetSuite's wannabe RESTful API framework) SuiteScript that will run the search (or do whatever is possible with SuiteScript and not possible with SuiteTalk) and return the results.
From the documentation:
You can deploy server-side scripts that interact with NetSuite data
following RESTful principles. RESTlets extend the SuiteScript API to
allow custom integrations with NetSuite. Some benefits of using
RESTlets include the ability to:
Find opportunities to enhance usability and performance, by
implementing a RESTful integration that is more lightweight and
flexible than SOAP-based web services. Support stateless communication
between client and server. Control client and server implementation.
Use built-in authentication based on token or user credentials in the
HTTP header. Develop mobile clients on platforms such as iPhone and
Android. Integrate external Web-based applications such as Gmail or
Google Apps. Create backends for Suitelet-based user interfaces.
RESTlets offer ease of adoption for developers familiar with
SuiteScript and support more behaviors than NetSuite's SOAP-based web
services, which are limited to those defined as SuiteTalk operations.
RESTlets are also more secure than Suitelets, which are made available
to users without login. For a more detailed comparison, see RESTlets
vs. Other NetSuite Integration Options.
In your case this would be a near trivial script to create, it would gather the results and return JSON encoded (easiest) or whatever format you need.
You will likely spend more time getting the Token Based Authentication (TBA) working than you will writing the script.
[Update] Adding some code samples related to what I mentioned in the comments below:
Note that the SuiteTalk proxy object model is frustrating in that it
lacks inheritance that it could make such good use of. So you end up with
code like your SafeTypeCastName(). Reflection is one of the best tools
in my toolbox when it comes to working with SuiteTalk proxies. For
example, all *RecordRef types have common fields/props so reflection
saves you type checking all over the place to work with the object you
suspect you have.
public static TType GetProperty<TType>(object record, string propertyID)
{
PropertyInfo pi = record.GetType().GetProperty(propertyID);
return (TType)pi.GetValue(record, null);
}
public static string GetInternalID(Record record)
{
return GetProperty<string>(record, "internalId");
}
public static string GetInternalID(BaseRef recordRef)
{
PropertyInfo pi = recordRef.GetType().GetProperty("internalId");
return (string)pi.GetValue(recordRef, null);
}
public static CustomFieldRef[] GetCustomFieldList(Record record)
{
return GetProperty<CustomFieldRef[]>(record, CustomFieldPropertyName);
}
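To make the reflection helper concrete, here is a self-contained restatement exercised against an ordinary CLR type (Uri stands in for a SuiteTalk proxy object, since the real proxies aren't available outside the generated SDK):

```csharp
using System;
using System.Reflection;

static class ReflectionHelper
{
    // Same shape as the GetProperty<TType> helper above: look up a
    // property by name via reflection and cast it to the expected type.
    public static TType GetProperty<TType>(object record, string propertyID)
    {
        PropertyInfo pi = record.GetType().GetProperty(propertyID);
        return (TType)pi.GetValue(record, null);
    }
}
```

The same call works on any object that exposes the named property, which is exactly why it saves type checking across the many *RecordRef proxy types.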
Credit to @SteveK for both his revised and final answer. I think long-term I'm going to have to implement what is suggested; short-term I tried implementing his first solution ("getDeleted"), and I'd like to add some more detail on it in case anyone needs to use this method in the future:
//private NetSuiteService nsService = new DataCenterAwareNetSuiteService("login");
//private TokenPassport createTokenPassport() { ... }
private IEnumerable<DeletedRecord> DeletedRecordSearch()
{
List<DeletedRecord> results = new List<DeletedRecord>();
int totalPages = Int32.MaxValue;
int currentPage = 1;
while (currentPage <= totalPages)
{
//You may need to reauthenticate here
nsService.tokenPassport = createTokenPassport();
var queryResults = nsService.getDeleted(new GetDeletedFilter
{
//Add any filters here...
//Example
/*
deletedDate = new SearchDateField()
{
@operator = SearchDateFieldOperator.after,
operatorSpecified = true,
searchValue = DateTime.Now.AddDays(-49),
searchValueSpecified = true,
predefinedSearchValueSpecified = false,
searchValue2Specified = false
}
*/
}, currentPage);
currentPage++;
totalPages = queryResults.totalPages;
results.AddRange(queryResults.deletedRecordList);
}
return results;
}
private Tuple<string, string> SafeTypeCastName(
Dictionary<string, string> customList,
BaseRef input)
{
if (input.GetType() == typeof(RecordRef)) {
return new Tuple<string, string>(((RecordRef)input).name,
((RecordRef)input).type.ToString());
}
//Not sure why "Last Sales Activity Record" doesn't return a type...
else if (input.GetType() == typeof(CustomRecordRef)) {
return new Tuple<string, string>(((CustomRecordRef)input).name,
customList.ContainsKey(((CustomRecordRef)input).internalId) ?
customList[((CustomRecordRef)input).internalId] :
"Last Sales Activity Record");
}
else {
return new Tuple<string, string>("", "");
}
}
public Dictionary<string, string> GetListCustomTypeName()
{
//You may need to reauthenticate here
nsService.tokenPassport = createTokenPassport();
return
nsService.search(new CustomListSearch())
.recordList.Select(a => (CustomList)a)
.ToDictionary(a => a.internalId, a => a.name);
}
//Main code starts here
var results = DeletedRecordSearch();
var customList = GetListCustomTypeName();
var demoResults = results.Select(a => new
{
DeletedDate = a.deletedDate,
Type = SafeTypeCastName(customList, a.record).Item2,
Name = SafeTypeCastName(customList, a.record).Item1
}).ToList();
I have to apply all the filters API side, and this only returns three columns:
Date Deleted
Record Type (not formatted in the same way as the web UI)
Name

Reusing the Data without hitting DB - ASP.NET MVC

I am modifying a complex function that is already written, where they use the code below:
private List<string> Values()
{
if (ViewBag.Sample == null)
{
ViewBag.Sample = TestData();
}
return ViewBag.Sample;
}
// where TestData() hits the DB and returns corresponding result
Values() is called in multiple places in the same file; the first call hits the DB through TestData(), and subsequent calls return directly from the ViewBag.
Is this a good approach?
What alternative approaches does MVC offer to handle this scenario? As a DB hit is a costly call, we need to use some other technique.
Thanks
You could either keep your data in Session like this:
Session["your session key"] = TestData();
And then retrieve it like this:
var myData = Session["your session key"] as YourObject; //cast it to your object type if you need to
Or you could use caching:
System.Web.HttpRuntime.Cache[cacheKey] = TestData();
And retrieving:
var myData = System.Web.HttpRuntime.Cache[cacheKey] as YourObject;
That code should ensure that you only touch the database the first time the method is invoked.
If the same data is used on multiple pages you could also have a look at the Cache or Session class.
If the size of the data retrieved from the database is not very big, then you can use the Cache.
Otherwise you can store the data in Session as well.
You have options such as Session and Cache to keep the data.
// Note: [OutputCache] works on controller actions, not private helpers,
// so apply it to the action that uses the data:
[OutputCache(Duration = 60)] // caches the rendered output for 60 seconds
public ActionResult Index()
{
ViewBag.Sample = TestData();
return View();
}
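As a hedged alternative sketch, System.Runtime.Caching.MemoryCache keeps the result across requests rather than only within one request's ViewBag; the cache key and the loader delegate below stand in for the real TestData() call:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

static class ValueCache
{
    // Check-then-load-then-set with an absolute expiration; the key
    // "Values" and the loader delegate stand in for the real TestData().
    public static List<string> GetValues(Func<List<string>> loader)
    {
        ObjectCache cache = MemoryCache.Default;
        var cached = cache.Get("Values") as List<string>;
        if (cached != null)
            return cached;

        var result = loader(); // the expensive DB hit
        cache.Set("Values", result, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(10)
        });
        return result;
    }
}
```

Unlike ViewBag (which lives for one request) or Session (which is per user), this cache is shared by all requests in the app domain, so the DB is hit at most once per expiration window.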

MethodInfo.Invoke sometimes returns null and sometimes returns value

I'm working on an ASP.NET MVC application.
I have a class that wraps a repository that fetches data from a DB using a simple LINQ statement. I've written a decorator class to add caching logic (using the Caching Application Block).
Since I have several methods that I want to decorate, and the logic is the same for each one (check if the result exists in the cache; if not, invoke the real getter and store the result in the cache), I wrote something like this:
A helper method that does the common logic of checking whether the result exists in the cache, and so on:
public object CachedMethodCall(MethodInfo realMethod, params object[] realMethodParams)
{
object result = null;
string cacheKey = CachingHelper.GenereateCacheKey(realMethod, realMethodParams);
// check if cache contains key, if yes take data from cache, else invoke real service and cache it for future use.
if (_CacheManager.Contains(cacheKey))
{
result = _CacheManager.GetData(cacheKey);
}
else
{
result = realMethod.Invoke(_RealService, realMethodParams);
// TODO: currently cache expiration is set to 5 minutes. should be set according to the real data expiration setting.
AbsoluteTime expirationTime = new AbsoluteTime(DateTime.Now.AddMinutes(5));
_CacheManager.Add(cacheKey, result, CacheItemPriority.Normal, null, expirationTime);
}
return result;
}
This all works fine and lovely. In each decorated method I have the following code:
StackTrace currentStack = new StackTrace();
string currentMethodName = currentStack.GetFrame(0).GetMethod().Name;
var result = (GeoArea)CachedMethodCall(_RealService.GetType().GetMethod(currentMethodName), someInputParam);
return result;
The problem is that the realMethod.Invoke(...) call sometimes returns null. If I put a breakpoint right after it and then return execution to that line, the result is not null and the data is fetched from the DB. All the input variables are correct, the data exists in the DB, and the second run gets the data, so what goes wrong in the first run?
thanks :)
I think I managed to solve this problem by updating the code as follows:
public object CachedMethodCall(MethodInfo realMethod, params object[] realMethodParams)
{
string cacheKey = CachingHelper.GenereateCacheKey(realMethod, realMethodParams);
object result = _CacheManager.GetData(cacheKey);
if (result == null)
{
result = realMethod.Invoke(_RealService, BindingFlags.InvokeMethod, null, realMethodParams, CultureInfo.InvariantCulture);
// TODO: currently cache expiration is set to 5 minutes. should be set according to the real data expiration setting.
AbsoluteTime expirationTime = new AbsoluteTime(DateTime.Now.AddMinutes(5));
_CacheManager.Add(cacheKey, result, CacheItemPriority.Normal, null, expirationTime);
}
return result;
}
I noticed that the previous _CacheManager.Contains call sometimes returned true even though the cache did not contain the data. I suspect threads are causing the problem, but I'm not sure...
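If the race between the Contains check and the Add is indeed the culprit, one hedged sketch is to serialize the whole check-then-insert window under a lock (a plain dictionary stands in for _CacheManager here, purely for illustration):

```csharp
using System;
using System.Collections.Generic;

class LockedCache
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();
    private readonly object _sync = new object();

    // Check-then-insert under one lock, so a second thread can never
    // observe "contains" before the first thread has stored the value.
    public object GetOrAdd(string key, Func<object> valueFactory)
    {
        lock (_sync)
        {
            object value;
            if (!_items.TryGetValue(key, out value))
            {
                value = valueFactory();
                _items[key] = value;
            }
            return value;
        }
    }
}
```

The same principle applies to the cache manager: holding one lock across both the lookup and the insert removes the window in which another thread can see a key that has no value yet.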
