Cache server for C# Windows application

I am using System.Runtime.Caching in a Windows application (C#).
This works fine on a single machine. Now I want to share the cache between multiple instances running on different computers, so I am thinking of using Memcached or Hazelcast IMDG.
Is there a C# implementation, or any ideas/results on how this works? Please note I am doing this for a Windows application. Any other suggestion is also welcome.
Thanks

Using an in-memory cache requires a clear understanding of the following use cases:
Is it only a memory dump?
Does it need querying ability, like the IQueryable support provided by frameworks such as Entity Framework for SQL Server?
If it is only an in-memory dump, then Redis and Memcached are fine, as they just store data in memory as binary and deserialize it once it is fetched at the client. That is not good if you are querying lots of data and want processing like filtering, sorting, and pagination to be done at the source.
If IQueryable processing is required, then check out Apache Ignite.NET, Hazelcast, and ScaleOut. Of these I find Apache Ignite to be the most advanced, since it supports not only expression trees via IQueryable but also ANSI SQL, which is a big advantage over the other caches.
Another point remains: why not replace SQL Server or a similar database with a modern in-memory system like VoltDB, or a document DB like RavenDB? They are much faster, well integrated, and completely remove the need for a separate cache.

I would suggest using Redis for caching.
Here's a good C# library for Redis: ServiceStack.Redis
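A minimal sketch of what that looks like with ServiceStack.Redis, assuming a Redis server reachable on localhost:6379; the key name and value are made up for illustration:

```csharp
using System;
using ServiceStack.Redis;

public class RedisCacheSketch
{
    public static void Main()
    {
        // Assumes a Redis server running on localhost:6379.
        using (var redis = new RedisClient("localhost", 6379))
        {
            // Store a value with a 10-minute expiry; hypothetical key name.
            redis.Set("user:42:name", "Alice", TimeSpan.FromMinutes(10));

            // Any instance on any machine pointing at the same Redis server
            // can now read this entry.
            var name = redis.Get<string>("user:42:name");
            Console.WriteLine(name);
        }
    }
}
```

Because every application instance talks to the same Redis server, the cache is shared across machines, which is exactly what System.Runtime.Caching cannot do.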

Related

Which data access technology is better for DocumentDB

I'm creating a website content management system which stores a whole bunch of website articles and lets users modify these articles through the system. I'm a typical SQL Server developer; however, I'm thinking maybe this system could be done in DocumentDB. We are using C# plus Web API to do the reads and writes. I'm testing different data access technologies to see which one performs better. I have been trying LINQ, LINQ lambda syntax, SQL, and stored procedures. The thing is, all these query methods seem to run at around 600ms to 700ms when I test via Postman. For example, one of my tests is a simple GET http://localhost:xxxxxx/multilanguage/resources/1, which takes 600ms+. That is only a 1 KB document and there are only 5 documents stored in my collection so far. So I guess what I want to ask is: is there a quicker way to query DocumentDB than this? The reason I ask is because I did something similar in SQL Server before (not querying documents; it was on relational tables). A much more complex query in a stored procedure on multiple joined tables only takes around 300ms. So I guess there should be a quicker way to do this. Thanks for any suggestions!
Most probably, if you change the implementation to a stub you will get the same performance, since what you are actually testing is the connection time between your server and the client (Postman).
There's a couple things you can do, but do keep in mind that DocumentDB, and other NoSQL solutions behave very differently than standard SQL Server. For example, the more nodes and RAM available to DocumentDB the better it will perform overall. The development instance of DocumentDB on Azure is understandably going to use fewer resources than a production instance. Since Azure takes care of scaling, one way to think about it is that the more data you have the better it will perform.
That said, something you are probably not used to is sharing your connection object for your whole application. That avoids the start up penalties every time you want to get your data. Summarizing Performance Tips:
Use TCP connection instead of HTTPS when you can
Use await client.OpenAsync() to avoid pausing on start up latency for the first request
Connect to the DocumentDB in the same region (keep in mind if you host across regions)
Use a singleton to access DocumentDB (it's threadsafe)
Cache your SelfLinks for quick access
Tune your page sizes so that you get only the data you intend to use
The more advanced performance tips cover index policies, etc. DocumentDB and other NoSQL databases behave differently than SQL databases. That also means your assumptions about how the APIs work are probably wrong. Make sure you are testing similar concepts. The SQL Server database connection object needs you to create/dispose of objects for each transaction so it can return those connections back to a connection pool. Treating DocumentDB the same way is going to cause the same kind of performance problems as if you didn't use a connection pool.
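The singleton/TCP/OpenAsync tips above can be sketched roughly as follows; the endpoint URI and auth key are placeholders you would replace with your own account's values:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

// One shared, thread-safe client for the whole application,
// instead of create/dispose per request as with SQL connections.
public static class DocumentClientHolder
{
    private static readonly DocumentClient Client = new DocumentClient(
        new Uri("https://your-account.documents.azure.com:443/"), // placeholder endpoint
        "your-auth-key",                                          // placeholder key
        new ConnectionPolicy
        {
            ConnectionMode = ConnectionMode.Direct, // direct connectivity
            ConnectionProtocol = Protocol.Tcp       // TCP instead of the HTTPS gateway
        });

    public static DocumentClient Instance => Client;

    // Call once at startup so the first real request doesn't pay
    // the connection-establishment latency.
    public static Task WarmUpAsync() => Client.OpenAsync();
}
```

The design choice mirrors the restaurant analogy later in this thread: one long-lived, shared connection object buffers all traffic, rather than each request opening its own path to the store.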

MongoDB - Hosting on multiple servers

I want to use MongoDB on my Windows server, and I am using the .NET code at:
https://github.com/atheken/NoRM/wiki/
I have two web servers that I need to host MongoDB on, keeping the database on both instances in sync. What should I be looking at to accomplish this? It seems the master/slave replication option is ideal.
If I do this, can I keep my connection string as:
mongodb://localhost/MyDatabase?strict=false
Thanks for any help. This is my first attempt at using MongoDB.
MongoDB doesn't support this kind of peer-to-peer replication, only master-slave, where data is always written to a primary database and then synced out to secondary replicas. You can, however, distribute reads across the replicas by using the slaveOk option. Check out replica sets for more info. To distribute writes, take a look at sharding.
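So the connection string would change: with a replica set you list the members rather than a single localhost. A hypothetical example (host names and set name made up):

```
mongodb://server1:27017,server2:27017/MyDatabase?replicaSet=rs0&slaveOk=true
```

The driver discovers which member is primary and routes writes there; with slaveOk, reads may be served by a secondary.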
Also, it might not be ideal to host MongoDB and your web server on the same box. Mongo is greedy when it comes to memory, and if the database grows larger than available RAM then web server performance could really suffer.

creating a backend data storage for quick retrieval

I am writing software which stores all the information about a user's interactions in a global session object/class. I would like to store the collected values in persistent storage. However, I cannot use heavy databases such as SQL Server or MySQL on the target PC, as I need to keep the installer small.
I also need to retrieve values from the storage by running simple LINQ queries, etc.
My question is: what is the next best thing to a database that can be manipulated from C# code?
Probably either SQLite or SQL Server Compact Edition - these are both fairly full-featured database systems that run entirely in-process and are frequently used for these sorts of things (for example, Firefox uses SQLite to store bookmarks).
The next rung down the ladder of complexity would probably be either XML (using LINQ to XML) or just serializable objects (using LINQ to Objects). You would of course incur performance penalties over a "proper" compact database like SQLite if you started storing a lot of data; however, you would probably need to store more than you think before it became noticeable, and for small data sets the simplicity would even make this faster than SQLite (for example, you could restrict your application to storing the last 100 or so actions).
SQL Server CE and SQLite are popular for the scenario you are describing. XML is as well.
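A minimal SQLite sketch, assuming the System.Data.SQLite NuGet package is referenced; the database file name, table, and value are made up:

```csharp
using System.Data.SQLite; // NuGet: System.Data.SQLite (assumed available)

class SessionStore
{
    static void Main()
    {
        // A single file on disk; no server process, nothing extra to install.
        using (var conn = new SQLiteConnection("Data Source=session.db"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // Hypothetical schema for recorded user actions.
                cmd.CommandText =
                    "CREATE TABLE IF NOT EXISTS actions (id INTEGER PRIMARY KEY, name TEXT)";
                cmd.ExecuteNonQuery();

                cmd.CommandText = "INSERT INTO actions (name) VALUES ('clicked-save')";
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```

The whole engine ships as a library inside your installer, which is what makes it a good fit for the small-footprint requirement.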
You could connect to Access MDB files. You don't need a SQL server for this, and it uses the same syntax.
You just need to use OleDb.
Example: DataEasy: Connect to MS Access (.mdb) Files Easily using C#
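A minimal OleDb sketch; the file path and table name are hypothetical, and the Jet 4.0 provider is Windows-only:

```csharp
using System;
using System.Data.OleDb; // Windows-only; the Jet provider must be installed

class AccessExample
{
    static void Main()
    {
        // Hypothetical .mdb path; Jet 4.0 ships with older versions of Windows.
        var connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\MyDatabase.mdb";
        using (var conn = new OleDbConnection(connStr))
        {
            conn.Open();
            // TOP n is the Access dialect's row-limiting syntax.
            using (var cmd = new OleDbCommand("SELECT TOP 5 * FROM Articles", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader[0]);
            }
        }
    }
}
```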

query on MUMPS from asp.net/C#

Does anybody know how to query a MUMPS database from C# without using the KBSQL ODBC driver?
We have a requirement to query a MUMPS database (McKesson STAR Patient Care), and when we use KBSQL it is limited to 6 concurrent users. So we are trying to query MUMPS directly without using KBSQL.
I am expecting something like LINQ to MUMPS.
I think McKesson uses InterSystems' Caché as its MUMPS (M) provider. Caché has support for .NET (see the documentation here). Jesse Liberty has a pretty good article on using C#, .NET, and Windows Forms as the front end to a Caché database.
I'm not sure about LINQ (I'm no expert here), but this might give you an idea as to where to start to get your project done.
Michael
First off, I too feel your pain. I had the unfortunate experience of developing in MagicFS/Focus a couple of years back, and we had the exact same request for relational query support. Why do people always want what they can't have?
Anyway, if the version of MUMPS you're using is anything like MagicFS/Focus and you have access to the file system which holds the "database" (flat files), then one possible avenue is:
Export the flat files to XML files. To do this, you'll have to manually emit the XML from the flat files using MUMPS or your language of choice. As painful as this sounds, MUMPS may be the way to go since you may not want to determine the current record manually.
Read in the XML using LINQ to XML
Run LINQ queries.
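Steps 2 and 3 can be sketched as follows; the XML shape here is purely hypothetical, standing in for whatever your export actually produces:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public static class LinqToXmlDemo
{
    // Once the flat files are XML, ordinary LINQ can filter, project, and sort.
    public static List<string> GetWardANames()
    {
        // Hypothetical export format for illustration only.
        var xml = @"<patients>
                      <patient id='1'><name>Smith</name><ward>A</ward></patient>
                      <patient id='2'><name>Jones</name><ward>B</ward></patient>
                      <patient id='3'><name>Adams</name><ward>A</ward></patient>
                    </patients>";

        return XDocument.Parse(xml)
                        .Descendants("patient")
                        .Where(p => (string)p.Element("ward") == "A")
                        .Select(p => (string)p.Element("name"))
                        .ToList();
    }

    public static void Main()
    {
        Console.WriteLine(string.Join(",", GetWardANames())); // Smith,Adams
    }
}
```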
Granted, the first step is easier said than done, and may even be more difficult if you're trying to build the XML files on the fly. A variant to this would be to manage generation of the XML files like indexes, via a nightly server process or the like.
If you're only going to query in specific ways (i.e. "I want to join the Foo and Bar tables on ID, and that's all"), I would consider instead pulling and caching that data in server-side C# collections and skipping querying altogether (or pull the data across using WCF or the like and then run your LINQ queries).
You can avoid the 6-user limitation by separating the database connection from the application instances.
Use the KB/SQL ODBC driver through a middle tier: either a DAL in your application
or a separate service (a Windows service).
This component can talk to the MUMPS database using at most 6 separate threads (in line with the KB/SQL limitation).
The component can use ADO.NET for ODBC to communicate with the KB/SQL ODBC driver.
You can then consume the data from your application using LINQ over ADO.NET.
You may need a queuing system like MSMQ to manage queuing of the data requests if the 6 concurrent connections are insufficient for the volume of requests.
It is good design practice to queue requests and use asynchronous calls to avoid blocking user interaction.
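The ADO.NET-over-ODBC piece of that middle tier might look something like this; the DSN name and credentials are hypothetical and would be configured against the KB/SQL driver:

```csharp
using System.Data;
using System.Data.Odbc;

public static class KbSqlGateway
{
    // Hypothetical DSN; set it up in the ODBC administrator to point
    // at the KB/SQL ODBC driver.
    private const string ConnStr = "DSN=KBSQL;UID=user;PWD=secret";

    // All application instances call this gateway, so the number of
    // actual ODBC connections stays within the 6-user limit.
    public static DataTable Query(string sql)
    {
        using (var conn = new OdbcConnection(ConnStr))
        using (var adapter = new OdbcDataAdapter(sql, conn))
        {
            var table = new DataTable();
            adapter.Fill(table); // opens and closes the connection as needed
            return table;
        }
    }
}
```

Callers can then run LINQ to DataSet queries over the returned DataTable rather than pushing every query through the limited ODBC channel.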
All current MUMPS language implementations have the ability to specify MUMPS programs that respond to a TCP/IP connection. The native MUMPS database is structured as a hierarchy of ordered multi-key and value pairs, essentially a superset of the NoSQL paradigm.
KB/SQL is a group of programs that respond to SQL/ODBC queries, translate them into queries against these MUMPS "global" data structures, retrieve and consolidate the results from MUMPS, and then send back the data in the form that the SQL/ODBC protocol expects.
If you have the permissions/security authorization for your implementation that allows you to create and run MUMPS programs (called "routines"), then you can respond to any protocol you desire from the those programs. MUMPS systems can produce text or binary results on a TCP/IP port, or a host operating system file. Many vendors explicitly keep you from doing this in their contracts to provide healthcare and financial solutions.
To my knowledge the LINQ syntax is a proprietary Microsoft product, although there are certainly LINQ-like Open Source efforts out there. I have not seen any formal definition of a line protocol for LINQ, but if there is one, a MUMPS routine can be written to communicate using that protocol. It would have to do something similar to KB/SQL however, since neither the LINQ syntax nor the SQL syntax are very close to the native MUMPS syntax.
The MUMPS data structuring and storage mechanism can be mechanically translated into an XML syntax. This may still require an extensive effort, as it is highly unlikely that the vendor of your system will provide a DTD for this mechanically created XML syntax, and you will still have to deal with encoded values and references which are stored in the MUMPS-based system in their raw form.
What vendor and version of MUMPS are you using? The solution will undoubtedly be dependent on the vendor's api they have exposed.

Any ORMs that work with MS-Access (for prototyping)?

I'm in the early stages of a project, and it's not clear yet whether we'll need a "real" database (i.e. SQL Server et al). So I've been doing some prototyping using MS-Access, which is working fine so far. (developing in C#/VS2008/.Net 3.5/MS-Access 2000).
However, the object-relational impedance mismatch is already becoming annoying, and will only get worse as the project evolves.
I have not been able to find an ORM that will work with MS-Access. Any suggestions?
Edit - Follow Up
We ended up using Fluent NHibernate, mainly because it auto-maps our object model to a relational database, which has been a huge win for us. Most of the FNH code samples we found used SQLite, and this worked so well that we intend to use it for our production database. (The app is a desktop scientific data collection and analysis package.)
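For reference, the auto-mapping setup is only a few lines; `MyEntity` below is a placeholder for any class in your entity assembly, and the SQLite file name is made up:

```csharp
using FluentNHibernate.Automapping;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

public static class SessionFactoryBuilder
{
    public static ISessionFactory Build(string dbFile)
    {
        return Fluently.Configure()
            // File-based SQLite database, e.g. "app.db".
            .Database(SQLiteConfiguration.Standard.UsingFile(dbFile))
            // Auto-map every entity in the assembly containing MyEntity
            // (placeholder type) instead of writing per-class mappings.
            .Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf<MyEntity>()))
            .BuildSessionFactory();
    }
}
```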
MSAccess files can be set up as an ODBC source on Windows machines. Almost any ORM will allow you to use ODBC. Here is a quick tutorial on how to set that up, it's outlined for Win2k but the process is the same for XP+. You also need to have MDAC installed on your box.
NHibernate seems to have native support for MSAccess as well; see here. I've never used it, though. It also has an ODBC driver. Many others support ODBC as well.
And again, as others are saying, MSAccess does not scale... period. Installing a real database server is fairly easy, so I'd recommend SQL Server Express as others have, or even MySQL or PostgreSQL, whichever is easier to set up.
If this is an application that you intend to deploy to clients, with each client having their own unique database, I would recommend another solution entirely, SQLite. SQLite gives you database power on an app by app basis. If you have a central database server, one of the previously mentioned solutions would be best.
There's only one scenario where choosing the Access database engine is a good choice: when building a self-contained Access application using Access Forms (though choosing to use Access in the first place is a questionable choice ;)
The database engine that VS2008 plays nicest with is SQL Server and you will have no problem finding an ORM that plays nice with SQL Server.
Can't give you an answer to your question, but instead of Access you might want to consider one of the following options:
SQL Server Express: is free and compatible with the full SQL Server
SQL Server Compact: also free, does not require any deployment/installation, does not support all features (e.g. no stored procedures).
At this stage, if you are unsure whether you need a "real" database or not, I'd skip MS Access and go straight to SQL Server Express. It's free and still allows you to do everything you need to.
Plus, if you later decide you need to scale up, then you can without any pain.
I recommend using something like Microsoft SQL Server or PostgreSQL for prototyping. If you don't want to learn a specific SQL syntax and install special tools for designing the database schema, you can use an ORM that automatically generates the database schema from your persistent class declarations. Anyway, this approach is very effective for prototyping.
LLBLGen works with Access
Access is just a bad, bad idea. I believe MS only includes Access in Office to keep legacy users happy.
Even if you find an ORM that will work with an Access database, with few exceptions you're locking yourself into a niche tool that likely will not work out-of-the box with a real database engine. If you decide to switch to a real database engine later on, you'll not only have to deal with migrating the database, but switching to a different ORM.
See this comparison between SQL Server Express and SQL Server Compact. The comparison document also mentions some problems with other data stores, including Access.
If you are REALLY concerned about being able to install SQL Server Express, consider SQL Server Compact:
it can be linked into your redistributable app. No need to install a service (which may require admin rights during install of your application); everything is taken care of when you install your app. This makes the most sense if you need the data to reside on the user's machine instead of a server, and is most analogous to using Access.
It's less powerful than Express (it doesn't support views, triggers, or stored procedures, which I consider requirements)
Can be scaled up to Express or other SQL Server versions very easily
Suitable for small-footprint installs like tablets, mobile devices, etc.
Always keep scalability in mind when designing any application. You don't want to wind up having to write a PHP->C++ compiler if/when your app becomes successful just because you picked the wrong tool up front.
While we're at it:
The big issue with Access (or, in this case, the Jet engine, which is the part you'd really be using when integrating an Access database with a .NET app) is that there is no "server" that handles database requests. The engine, hosted in your app, must read and write directly to a file on disk that contains the database. Whenever this happens, the file must be locked to prevent concurrent writes. Dirty reads become more common as the number of users grows, as does the potential for database corruption.
Imagine having every customer at a large restaurant trying to simultaneously enter the kitchen to write down their orders or retrieve their food. Chaos would result. There'd be a lot of broken dishes, the kitchen would be a mess, you'd be lucky to get what you ordered in any sort of edible condition. With one customer, this probably works fine. With 5, eh, maybe. With 20,50,1000? Not so much.
So, the restaurant industry introduced waiters and managers that buffer IO to the kitchen. The database server application does something roughly analogous to this by restricting access to the files on disk. Everyone gets what they want, faster and in a much more reliable way, and the data store is protected.