I have a situation where I need to keep the state of 5 variables for approximately 10,000 users.
I only need to keep the state during the session. If a user closes the window, the data must be cleared for security and GDPR reasons.
The bot will be on Facebook. No authentication will be required of the user.
I think it will be too much to manage with the in-memory storage.
Is Table Storage a good option here? Or are there any better suggestions?
For testing and prototyping purposes, you can use the Bot Builder SDK's in-memory data storage. For production bots, you can implement your own storage adapter or use one of the Azure extensions, which let you store your bot's state data in Table Storage, CosmosDB, or SQL.
https://learn.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-state?view=azure-bot-service-3.0
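For example, with the v3 SDK that the link above covers, wiring up Table Storage is a few lines in Application_Start. This is only a sketch based on the Microsoft.Bot.Builder.Azure package; "StorageConnectionString" is a placeholder for your own connection-string name:

    // Global.asax.cs, Application_Start — register Azure Table Storage as the
    // v3 state store via Microsoft.Bot.Builder.Azure.
    Conversation.UpdateContainer(builder =>
    {
        builder.RegisterModule(new AzureModule(Assembly.GetExecutingAssembly()));

        var store = new TableBotDataStore(
            ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString);

        builder.Register(c => store)
            .Keyed<IBotDataStore<BotData>>(AzureModule.Key_DataStore)
            .AsSelf()
            .SingleInstance();
    });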
As @Kamran said, you can use any number of storage options for the backend state storage.
Regarding the issue around session lifetime: memory storage is volatile, so when the service restarts you lose your state. That is good for testing, but it won't really map to a user's session; a user could start a new 'session' while stale state is still sitting in memory storage. You will want to look into the conversation ID, and perhaps build logic around that. It is the closest thing to a session lifetime.
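To make that concrete, here is a rough sketch using the v3 SDK, where ConversationData is already keyed on the conversation ID and so is the closest built-in match to a per-session scope:

    // Inside a v3 dialog: ConversationData is scoped to the conversation ID,
    // so the five per-user variables live and die with the conversation.
    public async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument)
    {
        var message = await argument;

        // The conversation ID this state is scoped to:
        string conversationId = message.Conversation.Id;

        // Store one of the five variables against the conversation.
        context.ConversationData.SetValue("VariableOne", "some value");

        string value;
        if (context.ConversationData.TryGetValue("VariableOne", out value))
        {
            await context.PostAsync("VariableOne = " + value);
        }
        context.Wait(MessageReceivedAsync);
    }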
I made a UWP app for the Microsoft Store. However, user data automatically saved in the LocalState folder is deleted every time the app is updated. I want the data to be retained across updates. I could suggest that users save their data themselves in the Documents folder or somewhere similar so it doesn't get deleted, but I don't want to bother them. Where should I save user data?
The roaming folder will become unusable in the future, and I don't want to use Azure because of its cost.
The common approach is to store the data in some remote location, such as the cloud. You would typically use a service of some kind to request and save the data.
If you think Azure is too expensive, you'll have to find a cheaper storage provider. The principle is the same regardless of which provider you use.
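As a rough sketch of the save side of such a service call (the endpoint URL is a placeholder, and Newtonsoft.Json is assumed for serialization):

    // Sketch: persist the user's data through a remote service instead of
    // LocalState. The URL and the userId/userData values are placeholders.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Newtonsoft.Json;

    public static async Task SaveUserDataAsync(string userId, object userData)
    {
        using (var client = new HttpClient())
        {
            string json = JsonConvert.SerializeObject(userData);
            var content = new StringContent(json, Encoding.UTF8, "application/json");

            // PUT the data so a later GET from a fresh install can restore it.
            var response = await client.PutAsync(
                new Uri("https://example.com/api/userdata/" + userId), content);
            response.EnsureSuccessStatusCode();
        }
    }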
As mentioned in the docs, roaming data is (or at least will be) deprecated. The recommended replacement is Azure App Service.
My company maintains a dozen websites with isolated DBs (identical schemas).
Every customer has its own website (separate app pool) and DB.
Every website has its own configuration and several connection strings, but they all share the same configuration schema.
cust1.domain.com
cust2.domain.com
cust3.domain.com
We would like to merge all the websites into one (a single app pool) and keep the isolated DBs, for security reasons and because of the large amount of data.
What is the best practice for designing a DAL and its configuration?
What are the implications if a large number of tenants are active at the same time? Can one application pool manage this situation, or can it be managed somehow?
BTW, we are using ASP.NET Membership for user authentication.
Thanks in advance,
Eddie
Use the Application_PostAuthenticateRequest event in Global.asax to load the correct database, and then close the connection in Application_EndRequest.
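A minimal sketch of that pattern (TenantConnections.Get is a hypothetical helper that maps a tenant name to its connection string):

    // Global.asax.cs — sketch. TenantConnections.Get is a hypothetical lookup
    // against whatever configuration store holds the per-tenant connection strings.
    protected void Application_PostAuthenticateRequest(object sender, EventArgs e)
    {
        // Resolve the tenant from the host header, e.g. "cust1" from cust1.domain.com.
        string tenant = Request.Url.Host.Split('.')[0];

        // Open a connection against that tenant's database for this request only.
        var connection = new SqlConnection(TenantConnections.Get(tenant));
        connection.Open();
        HttpContext.Current.Items["TenantConnection"] = connection;
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // Close whatever Application_PostAuthenticateRequest opened.
        var connection = HttpContext.Current.Items["TenantConnection"] as SqlConnection;
        if (connection != null)
        {
            connection.Dispose();
        }
    }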
One option is to use the profile feature of Membership and store a piece of information that will let you determine which of the actual DBs the user should connect to. The downside is that you will need to keep this piece of information for the duration of the user's session, so either a cookie or a session variable is likely to be needed.
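A sketch of that lookup (the "TenantDbId" profile property is hypothetical and would be declared in web.config):

    // Sketch: in the login handler, read the tenant database id from the
    // membership profile once and keep it in Session for the rest of the visit.
    var profile = System.Web.Profile.ProfileBase.Create(User.Identity.Name);
    Session["TenantDbId"] = profile.GetPropertyValue("TenantDbId");

    // On later requests, read Session["TenantDbId"] to pick the connection string.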
The implications of one site versus many depend a lot on your environment and application. Do you currently have the multiple sites on a single box, or do you have a web farm? Do you know the number of concurrent users for each site, and the amount of traffic? Performance Monitor can help you here to see how busy each site is, but you may need more invasive logging to determine metrics such as concurrent users. I found a Server Fault question around IIS 7 performance which may be of help.
You can try 'Shared Database, Separate Schemas' from the multi-tenant data architecture guidance. In your DAL you can select the specific schema for the current user. It is simple and secure this way.
Continue reading http://msdn.microsoft.com/en-us/library/aa479086.aspx
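A sketch of what schema selection could look like with EF Code First (the Order entity and the schema value are placeholders; note that EF caches the model per context type, so in practice you would build and cache one model per schema rather than relying on OnModelCreating running for every schema):

    // Sketch (EF Code First): map entities into a per-tenant schema.
    public class TenantContext : DbContext
    {
        private readonly string _schema;

        public TenantContext(string schema)
        {
            _schema = schema;
        }

        public DbSet<Order> Orders { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // e.g. [cust1].[Orders] for tenant "cust1"
            modelBuilder.Entity<Order>().ToTable("Orders", _schema);
        }
    }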
I have an ASP.NET MVC4 website which can connect to multiple databases depending on the user's login credentials. In order to get the database access list for the user, I have to perform a few complex joins when they log in. To avoid doing this more than once, I currently encrypt the database ID and store it in a cookie. I now realize this may not be a good idea: even strong encryption can be broken, and the encrypted cookie travels with every request, increasing traffic.
I am now thinking about using HttpContext.Current.Cache to store the data instead. Can anyone comment on whether this is a good idea? I would also be interested to know if there are better options out there. My website is not deployed on a server farm right now, but what would be the implications if I were to use a cache and a server farm in the future?
Based on your requirements (i.e. keeping hold of sensitive user-specific info across a session), the correct place for this is session state. Session state can be shared across multiple web servers if you run it out of process (StateServer or SQLServer mode), so if you did move to a server farm you wouldn't need to change your code.
Session is the right container for user-sensitive data. Alternatively, you can store it in the database and fetch it using some identifier kept in session (useful if you store a large amount of data).
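A minimal sketch of that approach (ResolveDatabaseId is a hypothetical method wrapping the complex joins from the question):

    // Sketch: run the expensive joins once at login, then keep only the id.
    int databaseId = ResolveDatabaseId(User.Identity.Name);
    Session["DatabaseId"] = databaseId;

    // On subsequent requests:
    int dbId = (int)Session["DatabaseId"];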
I am creating an application to be accessed by multiple clients, but each customer will have a different database; they all access the same application in IIS. I'm using DDD, C#, MVC3, and Entity Framework 4.1 Code First. Does anyone have an example, or an idea of the best way to configure a connection string specific to each client?
First, you need to identify what the database is scoped to: a client (machine?), an authenticating user identity, or some other identifier. For example, if it's per account, then two machines may be able to authenticate as that account and get the same storage.
Once you have that identifier, you'll need a master table somewhere with a map of account to database connection string. You'll probably also want to cache that table in memory to avoid two db roundtrips on every request.
That global configuration information is typically stored in a database. You could go as simple as a file, but that would cause problems if you ever wanted to scale out your front-end servers, so common storage is best.
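A sketch of that in-memory cache (MasterDb.LoadTenantMap is a hypothetical call that queries the master table once and returns a dictionary of account to connection string):

    // Sketch: load the account -> connection string map once and cache it.
    public static class TenantConnections
    {
        private static readonly Lazy<Dictionary<string, string>> _map =
            new Lazy<Dictionary<string, string>>(
                () => new Dictionary<string, string>(MasterDb.LoadTenantMap()));

        public static string Get(string account)
        {
            string connectionString;
            if (!_map.Value.TryGetValue(account, out connectionString))
            {
                throw new InvalidOperationException("Unknown account: " + account);
            }
            return connectionString;
        }
    }

An EF Code First context can then be pointed at the right database by passing the resolved connection string to the DbContext(string) constructor.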
I'd like to write a process in a worker role to download (sync) a batch of files under a folder (directory) to a local mirrored folder (directory).
Is there a timestamp (or a way to get one) for when a folder (directory) was last updated?
Since the folder (directory) structure is unknown, simply put, I want to download whatever is there to local storage as soon as it changes. Apart from recursing through it with a timer that checks repeatedly, what other smart ideas do you have?
(edit) P.S. I found many solutions for syncing files from local storage to Azure storage, but the same principle for local files cannot be applied to Azure blobs. I am still looking for the easiest way to download (sync) files to local storage as soon as they change.
Eric, I believe the concept you're trying to implement isn't really that effective for your core requirement, if I understand it correctly.
Consider the following scenario:
Keep your views in the blob storage.
Implement Azure (AppFabric) Cache.
Store any view file in the cache, if it's not yet there, on a web request, with an unlimited (or very long) expiration time.
Enable local cache on your web role instances with a short expiration time (e.g. 5 minutes)
Create a (single, separate) worker role, outside your web roles, which scans your blobs' ETags for changes at an interval, and resets the view's cache key for any blob that changed (see the sketch after this list).
Get rid of those ugly "workers" inside of your web roles :-)
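A sketch of the scan from step 5, using the Azure storage client's blob listing (the "views" container name and the cache wrapper object are assumptions, and blobClient is assumed to be already constructed):

    // Sketch: poll blob ETags and invalidate the cache key for any blob that
    // changed since the last scan.
    var container = blobClient.GetContainerReference("views");
    var knownETags = new Dictionary<string, string>();

    while (true)
    {
        foreach (var item in container.ListBlobs(null, useFlatBlobListing: true))
        {
            var blob = item as CloudBlockBlob;
            if (blob == null) continue;

            string previous;
            if (knownETags.TryGetValue(blob.Name, out previous) &&
                previous != blob.Properties.ETag)
            {
                cache.Remove(blob.Name); // reset the view's cache key
            }
            knownETags[blob.Name] = blob.Properties.ETag;
        }

        Thread.Sleep(TimeSpan.FromMinutes(1)); // the scan interval
    }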
There are a few things to think about in this scenario:
Your updated views will get to the web role instances within "local cache expiration time + worker scan interval". The lower the values, the more distributed cache requests and blob storage transactions.
The Azure AppFabric Cache is the only Azure service preventing the whole platform from being truly scalable. You have to choose the best cache plan based on the overall size (in MB) of your views, the number of your instances, and the number of simultaneous cache requests required per instance.
Consider caching the compiled views inside your instances (not in the AppFabric cache), and reset this local cache based on the dedicated AppFabric cache key(s). This will greatly improve performance, as rendering the output HTML becomes as simple as injecting the model into the pre-compiled views.
Of course, the cache-retrieval code in your web roles must be able to retrieve the view from the primary source (storage) if it is unable to retrieve it from the cache for whatever reason.
My suggestion is to create an abstraction on top of the blob storage, so that no one is directly writing to the blob. Then submit a message to Azure's Queue service when a new file is written. Have the file receiver poll that queue for changes. No need to scan the entire blob store recursively.
As far as the abstraction goes, use an Azure web role or worker role to authenticate and authorize your clients. Have it write to the Blob store(s). You can implement the abstraction using HTTPHandlers or WCF to directly handle the IO requests.
This abstraction will allow you to overcome the blob limitation of 5000 files you mention in the comments above, and will allow you scale out and provide additional features to your customers.
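A sketch of the queue hand-off (the "file-changes" queue name and DownloadBlob are assumptions, and queueClient is assumed to be already constructed):

    // Sketch: the write abstraction enqueues the blob name after each upload,
    // and the local receiver drains the queue instead of rescanning the container.
    CloudQueue queue = queueClient.GetQueueReference("file-changes");
    queue.CreateIfNotExists();

    // Writer side, after a successful blob upload:
    queue.AddMessage(new CloudQueueMessage(blobName));

    // Receiver side, polling loop:
    CloudQueueMessage msg;
    while ((msg = queue.GetMessage()) != null)
    {
        DownloadBlob(msg.AsString); // hypothetical: fetch the changed file locally
        queue.DeleteMessage(msg);   // remove only once processed
    }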
I'd be interested in seeing your code when you have a chance. Perhaps I can give you some more tips or code fixes.