I have a Web API providing a backend to an AngularJS web application. The backend API needs to track the state of user activities (for example, it needs to note which content ID a user last retrieved from the API).
Most access to the API is authenticated via username/password. For these instances, it works fine for me to store the user state in our database.
However, we do need to allow "guest" access to the service. For guests, the state does need to be tracked, but should not be persisted long-term (e.g. session-level tracking). I'd really rather not generate "pseudo-users" in our user table just to store state for guests, since it doesn't need to be maintained for any significant period of time.
My plan is to generate a random value and store it in the client as a cookie (for guests only; we use bearer authentication for authenticated users). I would then store whatever state is necessary in an in-memory object, such as a Dictionary, using the random value as a key. I could then expire items from the dictionary periodically. It is perfectly acceptable for this data to be lost if the Web API is ever relaunched, and it would even be acceptable for the dictionary to be reset, say, every day at a certain time.
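For the cookie half of that plan, a minimal Web API sketch (illustrative only; the cookie name is an assumption, and GetCookies/AddCookies are the System.Net.Http.Formatting cookie extensions):

// Reuse the guest-session cookie if the request carries one; otherwise mint a new ID.
var cookie = Request.Headers.GetCookies("guestSession").FirstOrDefault();
string sessionId = cookie != null ? cookie["guestSession"].Value : Guid.NewGuid().ToString();

// Send (or refresh) the cookie so the guest presents the same ID on later calls.
var response = Request.CreateResponse(HttpStatusCode.OK);
response.Headers.AddCookies(new[] { new CookieHeaderValue("guestSession", sessionId) });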
What I don't know how to do in Web API is create the dictionary object so that it persists across Web API calls. I basically need a singleton dictionary object that maintains its contents for as long as the server is running the Web API (barring a scheduled clearing or programmatic flushing).
I had the idea of dumping the Dictionary to disk every time an API call is made and reading it back in when needed, but that does not allow for multiple simultaneous in-flight requests. The only other method I can think of right now is to add another database table (guest_state or something) replicating the users table, and then set up some manual process to regularly clean out the data in the guest table.
Summary: what I need is

- a way to store some data persistently in a Web API backend without having to go off to a database;
- preferably stored in a Dictionary object, so I can use randomly generated session IDs as keys and objects to hold the state;
- data that is OK to be cleared after a set period of time or on a regular basis (not too frequently; say, a minimum of six hours' persistence).
I figured out a solution using the Singleton pattern:
public static class Services
{
    private static Dictionary<string, string> cache;
    private static object cacheLock = new object();

    public static Dictionary<string, string> AppCache
    {
        get
        {
            // The lock only guards lazy initialization; it does not synchronize
            // subsequent reads and writes against the returned Dictionary.
            lock (cacheLock)
            {
                if (cache == null)
                {
                    cache = new Dictionary<string, string>();
                }
                return cache;
            }
        }
    }
}
public class testController : ApiController
{
    [HttpGet]
    public HttpResponseMessage persist()
    {
        HttpResponseMessage hrm = Request.CreateResponse();
        hrm.StatusCode = HttpStatusCode.OK;

        // Add a dummy entry, then echo the whole cache back so persistence is visible.
        Services.AppCache.Add(Guid.NewGuid().ToString(), DateTime.Now.ToString());

        string resp = "";
        foreach (string s in Services.AppCache.Keys)
        {
            resp += String.Format("{0}\t{1}\n", s, Services.AppCache[s]);
        }
        resp += String.Format("{0} records.", Services.AppCache.Keys.Count);

        hrm.Content = new StringContent(resp, System.Text.Encoding.ASCII, "text/plain");
        return hrm;
    }
}
It seems the Services.AppCache object successfully holds onto data until either the idle timeout expires or the application pool recycles. Luckily I can control all of that in IIS, so I moved my app to its own app pool and set up the idle timeout and recycling as appropriate, based on when I'm OK with the data being flushed.
Sadly, if you don't have control over IIS (or can't ask the admin to change the settings for you), this may not work if the default expirations are too soon for you. At that point, using something like a LocalDB file or even a flat JSON file might be more useful.
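Note also that the lock above only guards creation of the Dictionary; concurrent requests writing to it would need their own synchronization. A hedged alternative sketch, assuming a reference to System.Runtime.Caching: MemoryCache is thread-safe and already supports per-entry absolute expiration, so it can stand in for the singleton dictionary (names here are illustrative):

using System;
using System.Runtime.Caching;

public static class GuestStateStore
{
    // MemoryCache.Default is thread-safe, so concurrent API calls can hit it directly.
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static void Set(string sessionId, object state)
    {
        // Each entry silently expires six hours after it is written.
        Cache.Set(sessionId, state, DateTimeOffset.Now.AddHours(6));
    }

    // Returns null if the key is missing or the entry has expired.
    public static object Get(string sessionId)
    {
        return Cache.Get(sessionId);
    }
}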
Related
I have a system where, at some point, the user will be locked to a single page. In this situation his account is locked and, even after authentication, he cannot be redirected to any other page.
The verification is done using page filters that access the database. To improve performance I have used the memory cache.
However, the result wasn't as expected, because once the cache entry is set for a single user it affects all the others.
As far as I know you can separate caching per user with tag helpers, but I have no idea if this is possible in code:
public async Task<IActionResult> Iniciar(int paragemId, string paragem)
{
    var registoId = Convert.ToInt32(User.GetRegistoId());
    if (await _paragemService.IsParagemOnGoingAsync(registoId))
    {
        return new JsonResult(new { started = false, message = "Já existe uma paragem a decorrer..." });
    }
    else
    {
        await _paragemService.RegistarInicioParagemAsync(paragemId, paragem, registoId);
        _registoService.UpdateParagem(new ProducaoRegisto(registoId)
        {
            IsParado = true
        });
        await _registoService.SaveChangesAsync();

        // The cache key is a shared constant, so this entry is visible to every user.
        _cache.Set(CustomCacheEntries.RecordIsParado, true, DateTimeOffset.Now.AddHours(8));

        return new JsonResult(new { started = true, message = "Paragem Iniciada." });
    }
}
Here I only check whether the user's account is blocked in the database, without checking the cache first, and then I create the cache entry. Because the key is shared, every user gets locked by this.
So my point is: is there a way to achieve this per user, like the tag helpers do?
The CacheTagHelper is different from caching in general. It works at the request level and can therefore vary on things like headers or cookie values. Using MemoryCache or IDistributedCache directly is lower-level: you're just adding values for keys, so there's nothing there to "vary" on.
That said, you can compose your key using something like the authenticated user's id, which would then give each user a unique entry in the cache, i.e. something like:
var cacheKey = $"myawesomecachekey-{User.FindFirstValue(ClaimTypes.NameIdentifier)}";
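Applied to the action in the question, a minimal sketch (assuming the injected IMemoryCache and services shown there):

// Check the per-user entry before hitting the database; populate it on a miss.
if (!_cache.TryGetValue(cacheKey, out bool isParado))
{
    isParado = await _paragemService.IsParagemOnGoingAsync(registoId);
    _cache.Set(cacheKey, isParado, DateTimeOffset.Now.AddHours(8));
}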
Short of that, you should use session storage, which is automatically unique to the user, because it's per session.
There are several alternatives to the cache. Please see this link, which describes them in greater detail.
Session State
An alternative would be to store the value in session state. This way, the session of one user does not interfere with those of other users.
However, there are some downsides to this approach. If the session state is kept in memory, you cannot run your application in a server farm, because one server does not know about the session memory of the others. So you would need to save the session state in a cache (Redis?) or a database.
In addition, as session memory is stored on the server, users cannot tamper with it to avoid the redirection you are trying to implement. The downside is that this reduces the number of users your server can handle, because the server needs a certain amount of memory per user.
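In ASP.NET Core, the session-state option might look like this sketch (assuming AddSession/UseSession are configured in Startup; the key name is illustrative):

// Store the flag in the current user's session; other users' sessions are unaffected.
HttpContext.Session.SetString("IsParado", "true");

// A later request reads it back; GetString returns null if the key was never set.
var isParado = HttpContext.Session.GetString("IsParado") == "true";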
Cookies
You can send a cookie to the client and check for this cookie when the next request arrives at your server. The downside of this approach is that the user can delete the cookie. If the only consequence of a missing cookie is an extra request to the database, this is negligible.
You can use session cookies that are discarded by the server when the session expires.
General
Another hint is that you need to clear the state memory when a user signs out so that with the next sign in, the state is correctly set up for the new user.
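For the keyed-cache approach above, that cleanup could be as simple as this sketch (same composed key as before):

// On sign-out, drop this user's entry so the next sign-in starts from a clean state.
_cache.Remove($"myawesomecachekey-{User.FindFirstValue(ClaimTypes.NameIdentifier)}");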
I am trying to design a web API that can get data from an external server, but with limitations. I'm trying to figure out how best to design it to be efficient.
My API has an endpoint that takes an input. It is a username at a domain, like tom@domain.com. My endpoint then makes an HTTP call to the domain to get an auth token, then makes another call to that domain with the username to get some data, which is returned to the client. However, my API can accept multiple usernames (comma-delimited, like ?users=tom@domain.a.com,bill@domain.b.com). My web server knows, for each domain, the maximum number of parallel connections it can make to get the data.
Here's my thoughts:
First parse the user list and group the users by domain. Then have a static dictionary: the key is the domain, the value is a custom object holding 2 queues. Both queues hold Tasks (from async/await), but the first queue's maximum length will be the connection limit for that domain.
?users=bill@D.com,max@D.com,sarah@A.com,tom@D.com

dictionary = {
    "D.com" : [
        [],
        ["bill@D.com", "max@D.com", "tom@D.com"]
    ],
    "A.com" : [
        [],
        ["sarah@A.com"]
    ]
}
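The grouping step might look like this sketch (assuming users holds the raw comma-delimited query string value):

// Group the requested users by the domain part of each address so every
// group can be throttled by that domain's connection limit.
var usersByDomain = users.Split(',')
    .Select(u => u.Trim())
    .GroupBy(u => u.Substring(u.IndexOf('@') + 1))
    .ToDictionary(g => g.Key, g => g.ToList());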
Then I can run code every second that loops through all dictionary values and moves as many Task objects as possible from the second queue into the first (i.e. removing from the 2nd queue and adding to the 1st) while staying within the limit.
As soon as a task is in the first queue, it executes using Parallel.Invoke(); when the task completes it gets removed from the first queue (unless some request is waiting for it, as explained in the next paragraph).
I do this because if another API request is made to my endpoint with some names that were already part of the first request, I want to reuse the in-flight work. So if it's in the first queue, I call await on that Task.
Somehow, when a task finishes, I need to know that nobody else is waiting for that user's task, and in that case remove it from the first queue. Also, if a client disconnects, the users that client was watching should be released.
Does anyone know if this is a good approach?
Since it's parallel, you know right away you're probably going to need to use System.Collections.Concurrent, and since you need key/value lookup (user identifier/HTTP response) you need a ConcurrentDictionary. And since there is a common cache for all users, you will want to store it in a static variable, which is available to all threads and all HTTP requests.
Here is a simple example:
public class MyCacheClass
{
    // Store the list of users/requests
    private static ConcurrentDictionary<string, Task<HttpResponseMessage>> _cache =
        new ConcurrentDictionary<string, Task<HttpResponseMessage>>();

    // Get from the ConcurrentDictionary, or add if it's not there. Note the factory
    // overload: passing GetResponse(key) directly would fire the HTTP request even
    // on a cache hit.
    public async Task<HttpResponseMessage> GetUser(string key)
    {
        return await _cache.GetOrAdd(key, k => GetResponse(k));
    }

    // You just need to implement this method, potentially in a subclass, to get the data
    protected virtual async Task<HttpResponseMessage> GetResponse(string key)
    {
        var httpClient = new HttpClient();
        var url = string.Format(@"http://www.google.com?q={0}", key);
        return await httpClient.GetAsync(url);
    }
}
Then to get a user's information, just call:
var o = new MyCacheClass();
var userInfo = await o.GetUser(userID);
Note: If you're going to use code like this on a production system, you might consider adding some means of purging or trimming the cache after a period of time or when it reaches a certain size. Otherwise your solution may not scale the way you need it to.
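Building on that note, a crude size-based trim you might add to MyCacheClass (illustrative sketch only; a real system would more likely use an LRU policy or a caching library):

// Illustrative: evict arbitrary entries once the cache grows past a fixed bound.
private static void TrimCache(int maxEntries)
{
    foreach (var key in _cache.Keys) // Keys is a snapshot, so removing while iterating is safe
    {
        if (_cache.Count <= maxEntries) break;
        _cache.TryRemove(key, out _);
    }
}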
I've been trying the following to retrieve data:
void InitializeTrello()
{
    TrelloConfiguration.Serializer = new ManateeSerializer();
    TrelloConfiguration.Deserializer = new ManateeSerializer();
    TrelloConfiguration.JsonFactory = new ManateeFactory();
    TrelloConfiguration.RestClientProvider = new Manatee.Trello.WebApi.WebApiClientProvider();
    TrelloConfiguration.ThrowOnTrelloError = true;
}

T DownloadDataFromTrello<T>(TrelloAccount account, Func<T> func)
{
    TrelloConfiguration.Cache.Clear();
    TrelloAuthorization.Default.AppKey = account.AppKey;
    TrelloAuthorization.Default.UserToken = account.UserToken;
    T result = func();
    TrelloProcessor.Flush();
    return result;
}
The DownloadDataFromTrello method is called a few times with different AppKey and UserToken parameters, but I receive the same data on every call, despite calling TrelloConfiguration.Cache.Clear() inside the function.
I would like to use the library without resorting to dirty tricks like unloading static classes, and to retain the lazy-loading functionality. Does anyone know how to use this library with multiple user accounts properly?
All of the entity constructors take a second parameter: a TrelloAuthorization that defaults to TrelloAuthorization.Default. The entity instance uses this authorization throughout its lifetime.
var customAuth = new TrelloAuthorization
{
    AppKey = "your app key",
    UserToken = "a user's token"
};

var card = new Card("card id", customAuth);
The default cache only uses the entity ID as the key, so even if you change the default authorization you would get the same instances back (still using the old auth) whenever the system pulls them from the cache (e.g. a card downloaded as part of a List.Cards enumeration). If you explicitly create an entity through a constructor (as above), the new entity is added to the cache, but only the first one will ever be returned, since entities are matched only on ID.
To consider the auth as part of the key, I'd have to either update the default cache or expose the auth so that you can write your own cache and set the TrelloConfiguration.Cache property. I'm not sure which I prefer right now.
Using a custom auth (possibly in combination with periodically clearing the cache) is currently your best option. Please feel free to create an issue or let me know here if this is a feature you'd like.
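In the meantime, combining those two suggestions might look like this sketch (reusing the Cache.Clear call from your own snippet):

// Clear the default cache between accounts, then query with a per-account auth.
TrelloConfiguration.Cache.Clear();
var auth = new TrelloAuthorization
{
    AppKey = account.AppKey,
    UserToken = account.UserToken
};
var card = new Card("card id", auth);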
We are building an MVC application where a huge amount of static data has to be loaded when the user logs in for the first time.
Luckily, most of the data that has to be loaded during login is master data and doesn't change for any user.
But since the size of the master data is huge, we felt it best to implement caching server-side, as the browser might not be able to hold the data.
I read a CodeProject post on OutputCache by a Microsoft MVP; he clearly explains what the cache does and what to keep in mind while using caching.
So I implemented all that he suggested in my controller by just adding the line
[OutputCache(Duration = 10, VaryByParam = "none",
    Location = OutputCacheLocation.Server, NoStore = true)] // Location member assumed; the original omitted it
above my action method.
But I could not tell whether the data is being loaded from the cache or whether another server hit is happening.
So my first question is: how do I check whether OutputCache is working or not?
And then: in our previous MVC applications we used HttpContext.Current.Cache, which worked absolutely fine.
So here is my second question: why should I prefer OutputCache over HttpContext.Current.Cache, and why not vice versa? What difference do they offer for caching in an application?
EDIT 1:
This is the method in my login view controller,
public ActionResult GetRegions(string Ids)
{
    objRegionsResult = GetRegionsList();
    if (!string.IsNullOrEmpty(Ids))
        objRegionsResult = objRegionsResult.Where(x => Ids.Split(',').Contains(x.Type.ToString())).ToList();
    return Json(objRegionsResult, JsonRequestBehavior.AllowGet);
}

private List<MORegionMaster> GetRegionsList()
{
    RequestUri = "Home/GetRegions";
    HttpResponseMessage response = ConnectAPI(RequestUri);
    if (response.IsSuccessStatusCode)
    {
        objRegionsResult = response.Content.ReadAsAsync<List<MORegionMaster>>().Result;
    }
    return objRegionsResult;
}
So the above method is where I hit the API controller, which in turn hits the business logic class and subsequently the database, and returns the DataTable.
We use OutputCache when we want to cache the result of an action (not static files, but the result of the business logic) and serve that data to all users for a particular duration.
We use HttpContext.Current.Cache when we want to cache data server-side ourselves, such as a "current logged-in user" object, to avoid multiple DB hits.
Note, though, that HttpContext.Current.Cache is the application-level ASP.NET cache: although it is reached through the current HttpContext, it is shared across requests and outlives them. The store whose lifetime is limited to the current HTTP request is HttpContext.Current.Items, so use that for data that should only live within a single request.
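To make the difference concrete, a sketch of both (durations and keys are illustrative):

// OutputCache: MVC caches the rendered result of the action and serves it to all users.
[OutputCache(Duration = 60, VaryByParam = "none")]
public ActionResult GetRegions(string Ids) { /* ... */ }

// HttpContext.Current.Cache: an application-wide store you read and write yourself.
HttpContext.Current.Cache.Insert(
    "RegionsList",                                  // cache key
    GetRegionsList(),                               // value to cache
    null,                                           // no cache dependency
    DateTime.Now.AddHours(6),                       // absolute expiration
    System.Web.Caching.Cache.NoSlidingExpiration);  // no sliding expiration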
I have a webshop running which sells car parts. The prices shown next to the parts are loaded from a web service running elsewhere. This web service exposes only one web method: GetArticleInformation.
Inside this web service there is a link to another web service, WebshopServiceClient, running elsewhere, which contains the info about the cars and holds the prices.
Now, when a user selects the vehicle part he wants to buy, the first web service is called and the method GetArticleInformation is executed. In this method I want to create a session which holds the logon to the second web service (the database). This way I want to avoid requiring a new logon for every call.
[WebMethod(EnableSession = true)]
public GetBackItems GetArticleInformation(User user, Items items)
{
    // Create session if needed
    client = (WebshopServiceClient)Session["SphinxLogon"];
    if (client == null)
    {
        client = new WebshopServiceClient();
        bool done = client.Logon();
        if (done)
        {
            Session["SphinxLogon"] = client;
        }
    }

    // Get information and send it back
    ...
}
Now, when the user in the webshop selects a part, the session is created, but the next time the user selects a part the session is null again.
What am I doing wrong, or what am I missing?
I would consider 'talking' to the various web services via an internal 'proxy' procedure (fired up on app start, for example) which would handle all traffic with the services. That way the individual client sessions do not have to log on or maintain a session with the services, but can still be managed via the proxy. Individual clients would get a 'ticket' from the proxy, which could then be part of their session and be used to manage it.
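A rough sketch of the shared-client half of that idea (everything here is hypothetical except WebshopServiceClient and its Logon method, which come from the question):

// Hypothetical application-wide proxy: one logon shared by every request,
// instead of one logon per user session.
public static class WebshopProxy
{
    private static readonly Lazy<WebshopServiceClient> _client =
        new Lazy<WebshopServiceClient>(() =>
        {
            var client = new WebshopServiceClient();
            client.Logon(); // logs on once, on first use
            return client;
        });

    // Web methods use this instead of creating and logging on their own client.
    public static WebshopServiceClient Client => _client.Value;
}

Ticket issuing and per-client bookkeeping would sit on top of this, keyed by whatever identifier the proxy hands back to each client.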