Does anybody know how to query a MUMPS database from C# without using KBSQL-ODBC?
We have a requirement to query a MUMPS database (McKesson STAR Patient Care), and when we use KBSQL it is limited to 6 concurrent users. So we are trying to query MUMPS directly, without going through KBSQL.
I am expecting something like LINQ to MUMPS.
I think McKesson uses InterSystems Caché as its MUMPS (M) provider. Caché has support for .NET (see the documentation here). Jesse Liberty has a pretty good article on using C#, .NET, and Windows Forms as the front end to a Caché database.
I'm not sure about LINQ (I'm no expert here), but this might give you an idea of where to start to get your project done.
Michael
First off, I too feel your pain. I had the unfortunate experience of developing in MagicFS/Focus a couple of years back, and we had the exact same request for relational query support. Why do people always want what they can't have?
Anyway, if the version of MUMPS you're using is anything like MagicFS/Focus and you have access to the file system which holds the "database" (flat files), then one possible avenue is:
1. Export the flat files to XML files. To do this, you'll have to manually emit the XML from the flat files using MUMPS or your language of choice. As painful as this sounds, MUMPS may be the way to go, since you may not want to determine the current record manually.
2. Read in the XML using LINQ to XML (a sketch follows below).
3. Run LINQ queries.
Granted, the first step is easier said than done, and may even be more difficult if you're trying to build the XML files on the fly. A variant to this would be to manage generation of the XML files like indexes, via a nightly server process or the like.
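Once the XML exists, steps 2 and 3 are straightforward. A minimal sketch, assuming a hypothetical patients.xml produced by the export (the element and attribute names are invented for illustration):

using System;
using System.Linq;
using System.Xml.Linq;

class XmlQueryExample
{
    static void Main()
    {
        // Step 2: load the XML emitted from the MUMPS flat files.
        XDocument doc = XDocument.Load("patients.xml");

        // Step 3: run an ordinary LINQ query over the elements.
        var admitted = from p in doc.Descendants("Patient")
                       where (string)p.Element("Status") == "Admitted"
                       select new
                       {
                           Id = (string)p.Attribute("id"),
                           Name = (string)p.Element("Name")
                       };

        foreach (var patient in admitted)
            Console.WriteLine("{0}: {1}", patient.Id, patient.Name);
    }
}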
If you're only going to query in specific ways (i.e. I want to join the Foo and Bar tables on ID, and that's all), I would consider instead pulling and caching that data into server-side C# collections and skipping querying altogether (or pulling the data across using WCF or the like and then doing your LINQ queries).
You can avoid the 6-user limitation by separating the database connection from the application instances.
Use the KB/SQL ODBC driver through a middle tier: either a DAL in your application or a separate service (a Windows service).
This component can talk to the MUMPS database using at most 6 separate threads (in line with the KB/SQL limitation).
The component can use ADO.NET's ODBC support to communicate with the KB/SQL ODBC driver.
You can then consume the data from your application using LINQ to ADO.NET.
You may need a queuing system like MSMQ to manage queuing of the data requests if the 6 concurrent connections are insufficient for the volume of requests.
It is good design practice to queue requests and use asynchronous LINQ calls to avoid blocking user interaction.
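To make the threading constraint concrete, here is a sketch of such a gateway component, using ADO.NET's ODBC support and a semaphore to cap concurrent KB/SQL connections at 6 (the DSN name is a placeholder for your configured data source):

using System.Data;
using System.Data.Odbc;
using System.Threading;

public class KbSqlGateway
{
    // Cap concurrent ODBC connections to match the KB/SQL limit.
    private static readonly Semaphore ConnectionLimit = new Semaphore(6, 6);

    // "DSN=KBSQL" is a placeholder for your configured ODBC data source.
    private const string ConnectionString = "DSN=KBSQL";

    public DataTable Query(string sql)
    {
        ConnectionLimit.WaitOne();   // block until one of the 6 slots is free
        try
        {
            using (var connection = new OdbcConnection(ConnectionString))
            using (var adapter = new OdbcDataAdapter(sql, connection))
            {
                var table = new DataTable();
                adapter.Fill(table);  // Fill opens and closes the connection itself
                return table;
            }
        }
        finally
        {
            ConnectionLimit.Release(); // free the slot for the next queued request
        }
    }
}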
All current MUMPS language implementations have the ability to specify MUMPS programs that respond to a TCP/IP connection. The native MUMPS database is structured as a hierarchy of ordered multi-key and value pairs, essentially a superset of the NoSQL paradigm.
KB/SQL is a group of programs that respond to SQL/ODBC queries, translate them into MUMPS "global" data queries, retrieve and consolidate the results from MUMPS, and then send back the data in the form that the SQL/ODBC protocol expects.
If you have the permissions/security authorization on your implementation to create and run MUMPS programs (called "routines"), then you can respond to any protocol you desire from those programs. MUMPS systems can produce text or binary results on a TCP/IP port, or in a host operating system file. Note that many vendors explicitly keep you from doing this in their contracts to provide healthcare and financial solutions.
To my knowledge, the LINQ syntax is a proprietary Microsoft product, although there are certainly LINQ-like open-source efforts out there. I have not seen any formal definition of a line protocol for LINQ, but if there is one, a MUMPS routine could be written to communicate using that protocol. It would have to do something similar to KB/SQL, however, since neither the LINQ syntax nor the SQL syntax is very close to the native MUMPS syntax.
The MUMPS data structuring and storage mechanism can be mechanically translated into an XML syntax. This may still require extensive effort, as it is highly unlikely that the vendor of your system provides a DTD for this mechanically created XML syntax, and you will still have to deal with encoded values and references, which are stored in the MUMPS-based system in their raw form.
What vendor and version of MUMPS are you using? The solution will undoubtedly depend on the API the vendor has exposed.
Related
I am using System.Runtime.Caching in a Windows application (C#).
This works fine for a single machine. Now I want to share the cache between multiple instances running on different computers.
So I am thinking of using Memcached or Hazelcast IMDG.
Is there a C# implementation, or any ideas/results on how this works? Please note I am doing this for a Windows application. Any other suggestions are also welcome.
Thanks
Using an in-memory cache requires a clear understanding of the following use cases:
Is it only a memory dump?
Does it need querying ability, like the IQueryable support provided by frameworks such as Entity Framework for SQL Server?
If it is only an in-memory dump, then Redis and Memcached are fine: they just store data in memory as binary, deserialized once fetched at the client, which is not good if you are querying lots of data and want processing like filtering, sorting, and pagination to be done at the source.
If IQueryable processing is required, check out Apache Ignite.NET, Hazelcast, and ScaleOut. Of these I find Apache Ignite to be the most advanced, since it supports not only expression trees via IQueryable but also ANSI SQL, which is a big advantage over the other caches.
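For illustration, querying Ignite through IQueryable looks roughly like this; a sketch based on the Apache.Ignite.Linq package, where the Person type, its attributes, and the cache name are all assumptions:

using System.Linq;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;
using Apache.Ignite.Linq;

public class Person
{
    [QuerySqlField] public string Name { get; set; }
    [QuerySqlField] public int Age { get; set; }
}

public static class IgniteExample
{
    public static void Run()
    {
        using (IIgnite ignite = Ignition.Start())
        {
            // Register Person as a query type so SQL/LINQ can see its fields.
            var cache = ignite.GetOrCreateCache<int, Person>(
                new CacheConfiguration("persons", new QueryEntity(typeof(int), typeof(Person))));

            cache.Put(1, new Person { Name = "Alice", Age = 42 });

            // AsCacheQueryable exposes the cache as an IQueryable; the
            // Where/Select below are translated and executed on the grid.
            var adults = cache.AsCacheQueryable()
                              .Where(entry => entry.Value.Age >= 18)
                              .Select(entry => entry.Value.Name)
                              .ToList();
        }
    }
}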
Another point remains: why not replace SQL Server (or a similar database) with a modern in-memory system like VoltDB, or a document DB like RavenDB? They are much faster, well integrated, and completely remove the need for a separate cache.
I would suggest using Redis for caching.
Here's a good C# library for Redis: ServiceStack.Redis
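A minimal sketch with ServiceStack.Redis, assuming a Redis server on localhost that every instance of the Windows application can reach:

using ServiceStack.Redis;

public static class SharedCacheExample
{
    public static void Run()
    {
        using (var redis = new RedisClient("localhost"))
        {
            // Values are serialized into Redis, so every machine pointing
            // at the same server sees the same cache entries.
            redis.Set("user:42:name", "Alice");
            string name = redis.Get<string>("user:42:name");
        }
    }
}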
If this question seems common to you, I apologise; I did a quick search around this site and a few Google searches and could not find a satisfying answer.
My question is this:
I have only been a software developer for 3-4 years now. That may seem like long enough to answer this question myself, but in all my time I have never had to develop software where the main body of data storage was not required to be in an online database. This time, however, my latest development requires its data to be stored only to disk.
The actual data itself is lightweight. In code, the main asset will be a class with only a few string-based properties, which must be persisted. My initial thought is simple serialisation: on application close, new assets are simply serialised and stored on disk as a file. I also thought that, perhaps for backup purposes (or if it is somehow a better option than a serialised class), an XML file would be appropriate.
I cannot think of any distinct disadvantages of either of these approaches, which is why I am asking this question publicly. In my experience, there is rarely a solution to a problem which does not have its downsides.
Serialization (binary or XML) is appropriate for a small amount of data. The problem with this approach is when you get large amounts of data (that you may need to query).
If you are on a Windows platform and in need of a proper database, you can use the embedded database engine that comes with Windows: ESENT. It is the backing store of Exchange and RavenDB.
Here are the .NET wrapper libraries for it.
ManagedEsent provides managed access to ESENT, the embeddable database engine native to Windows. ManagedEsent uses the esent.dll that is part of Microsoft Windows so there are no extra unmanaged binaries to download and install.
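ManagedEsent also includes PersistentDictionary, which makes simple ESENT-backed storage almost trivial. A sketch (the directory name and keys are placeholders):

using Microsoft.Isam.Esent.Collections.Generic;

public static class EsentExample
{
    public static void Run()
    {
        // Behaves like Dictionary<TKey, TValue>, but is persisted to an
        // ESENT database in the given directory.
        using (var store = new PersistentDictionary<string, string>("AssetData"))
        {
            store["asset-1"] = "some serialised value";
            string value = store["asset-1"];
        }
    }
}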
The most lightweight solution is, of course, to use XML and serialization. The main advantage is that it is very easy, requiring little code, and the files are easily editable in a text editor. Another advantage is that you can have multiple files, which are easy to transfer from PC to PC.
Here is a nice tutorial on XML serialization.
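For example, persisting a class this way takes only a few lines. A sketch, with Asset standing in for the questioner's string-based class:

using System.IO;
using System.Xml.Serialization;

public class Asset
{
    public string Name { get; set; }
    public string Description { get; set; }
}

public static class AssetStore
{
    public static void Save(Asset asset, string path)
    {
        var serializer = new XmlSerializer(typeof(Asset));
        using (var writer = new StreamWriter(path))
            serializer.Serialize(writer, asset);
    }

    public static Asset Load(string path)
    {
        var serializer = new XmlSerializer(typeof(Asset));
        using (var reader = new StreamReader(path))
            return (Asset)serializer.Deserialize(reader);
    }
}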
However, if your application is going to be reading, writing, and changing the data a lot, and there is only one source of data, it would be better to use a light-weight database. Many people like SQLite, while I personally prefer Firebird.
See this question for using SQLite with C#, and see here for information for using Firebird with .net.
Another embedded database option is SQL Server Compact Edition. The latest version is v4, and it seems much improved over previous versions.
It's functionally equivalent to using an XML file, an Access database, or even a plain old text file, in that you don't need a SQL Server service running or anything special installed on the machine your application runs on.
I've been using SQLite in a project and it works very well and is easy to use too. One thing to keep in mind when using SQLite, though, is that it's designed for a single-user environment, so if you use it as the backend database for a website, for instance, you're likely to find that it struggles under even slight load.
Check out this link for the C# wrapper:
http://sqlite.phxsoftware.com/
I also use NHibernate and NHibernate.Linq to interact with the data, you can get a build of both which are compatible here: http://www.dennisdoomen.net/2009/07/nhibernate-210-ga-with-linq-and-fluent.html
NHibernate.Linq allows you to use that nice LINQ query syntax on your SQLite DB; the type parameter to session.Linq<T>() is your mapped entity ("Series" below is just a placeholder):
var onePiece = from s in session.Linq<Series>() where s.Name == "One Piece" select s;
I work on a C# client application (SlimTune Profiler) that uses relational (and potentially embedded) database engines as its backing store. The current version already has to deal with SQLite and SQL Server Compact, and I'd like to experiment with support for other systems like MySQL, Firebird, and so on. Worse still, I'd like it to support plugins for any other backing data store -- and not necessarily ones that are SQL based, ideally. Topping off the cake, the frontend itself supports plugins, so I have an unknown many-to-many mapping between querying code and engines handling the queries.
Right now, queries are basically handled via raw SQL code. I've already run into trouble making complex SELECTs work in a portable way. The problem can only get worse over time, and that doesn't even consider the idea of supporting non-SQL data. So then, what is the best way to query wildly disparate engines in a sane way?
I've considered something based on LINQ, possibly the DbLinq project. Another option is object persistence frameworks, Subsonic for example. But I'm not too sure what's out there, what the limitations are, or if I'm just hoping for too much.
(An aside, for the inevitable question of why I don't settle on one engine. I like giving the user a choice of the engine that works best for them. SQL Compact allows replication to a full SQL Server instance. SQLite is portable and supports in-memory databases. I can imagine a situation where a company wants to drop in a MySQL plugin so that they can easily store and collate an application's performance data over the course of time. Last and most importantly, I find the idea that I should have to be dependent on the implementation details of my underlying database engine to be absurd.)
Your best bet is to use an interface for all of your database access. Then, for each database type you want to support, implement that interface. That is what I've had to do for projects in the past.
The problem with many database systems and storage tools is that they aim to solve different problems. You might not even want to store your data in a SQL database but instead store it as files in the App_Data folder of a web application. With an interface method you could do that quite easily.
There generally isn't a solution that fits all database and storage solutions well or even a few of them well. If you find one that claims it does I still wouldn't trust it. When you have a problem with one of the databases it's going to be much easier for you to dig through your objects than it will be to go dig through theirs.
Use an object-relational mapper. This will provide a high level of abstraction away from the different database engines, and won't impose (many) limitations on the kind of queries you can run. Many ORMs also include LINQ support. There are numerous questions on SO providing recommendations and comparisons (e.g. What is your favorite ORM for .NET? appears to be the most recent and has links to several others).
I would recommend the repository pattern. You can create a class that encapsulates all the actions that you need the database for, and then create a different implementation for each database type you want to support. In many cases, for relationional data stores, you can use the ADO.NET abstractions (IDbConnection, IDataReader, IDataAdapter, etc) and create a single generic repository, and only write specific implementations for the database types that do not provide an ADO.NET driver.
public interface IExecutionResultsRepository
{
    void SaveExecutionResults(string name, ExecutionResults results);
    ExecutionResults GetExecutionResults(int id);
}
I don't actually know what you are storing, so you'd have to adapt this for your actual needs. I'm also guessing this would require some heavy refactoring, as you might have SQL statements littered throughout your code, and pulling these out and encapsulating them might not be feasible. But IMO, that's the best way to achieve what you want to do.
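As a sketch of the generic ADO.NET route: DbProviderFactories lets one implementation serve any engine that ships an ADO.NET driver. The table and column names below are invented, and the mapping of the DataTable back to the ExecutionResults type is left out:

using System.Data;
using System.Data.Common;

public class AdoNetRepository
{
    private readonly DbProviderFactory _factory;
    private readonly string _connectionString;

    // providerName is an ADO.NET invariant name, e.g. "System.Data.SQLite"
    // or "System.Data.SqlServerCe.4.0".
    public AdoNetRepository(string providerName, string connectionString)
    {
        _factory = DbProviderFactories.GetFactory(providerName);
        _connectionString = connectionString;
    }

    public DataTable GetExecutionResults(int id)
    {
        using (DbConnection connection = _factory.CreateConnection())
        {
            connection.ConnectionString = _connectionString;
            connection.Open();

            using (DbCommand command = connection.CreateCommand())
            {
                // "ExecutionResults" and "Id" are hypothetical schema names.
                command.CommandText = "SELECT * FROM ExecutionResults WHERE Id = @id";

                DbParameter parameter = command.CreateParameter();
                parameter.ParameterName = "@id";
                parameter.Value = id;
                command.Parameters.Add(parameter);

                var table = new DataTable();
                table.Load(command.ExecuteReader());
                return table;
            }
        }
    }
}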
The program I'm writing needs frequent database communication, and at the moment I'm using just XML files. Is there really a benefit to using MySQL, or SQL in general, over XML? Just note that I'm using C#, so MySQL is not very fun to deal with (from what little experience I have).
In terms of maintaining data stored in XML files vs. a relational database (MySQL, in your case), the database is far more robust than simple XML files. But this is simply an exercise in determining the needs of your application.
MySQL, like many other RDBMSs, will provide much more than just a place to park your data. The biggest advantage of using a modern DB such as MySQL is ACID support. This means you get all-or-nothing transactions, ensuring consistency throughout your data.
You also get referential integrity to ensure that related records stay intact and don't leave you with abandoned references to other data records. We could go on and on to discuss the value of locking or the power of stored procedures.
But really, you should consider the needs of your application. If you are doing significant gymnastics to keep your data in order, or you care about shared access and file locks while trying to read and write data, you need to punt on your XML files. There is no need to find ways around these issues when a basic MySQL database will solve them.
If there's truly relational data, you'll almost always benefit from using an RDBMS. Retrieving data will be faster with the backing of a query engine than by tying together XML nodes. You'll also get referential integrity when inserting data into the structure.
There is an ADO.NET provider for MySQL, so you shouldn't have any more difficulty dealing with a MySQL database than MS SQL Server.
You could even download DbLinq and give their LINQ to MySQL functionality a shot. Could make things even easier (or you could use Entity Framework with the MySQL ADO.NET provider).
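For illustration, code against the MySQL ADO.NET provider looks almost identical to SqlClient code. A sketch (the connection string and table are placeholders):

using MySql.Data.MySqlClient;

public static class MySqlExample
{
    public static void Run()
    {
        var connectionString = "Server=localhost;Database=app;Uid=user;Pwd=secret;";
        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new MySqlCommand("SELECT name FROM items", connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    System.Console.WriteLine(reader.GetString(0));
            }
        }
    }
}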
The size of XML documents can be a large factor. With XML you either produce large and complicated text files with a huge amount of additional markup, or your data is split up across several files, and managing those files can be a headache. Using a SQL database will waste less disk space.
Querying a SQL database is also generally faster than parsing and searching XML files.
Any SQL database will give you access to a whole set of permissions and role capabilities that may be difficult to enforce using XML.
If you have relational data, a database would work. As an alternative to MySQL, if you aren't looking for a centralized solution, you can use SQLite. SQLite runs in-process (meaning the program running it is its own "database server") and requires no installation other than distributing the DLL file containing it.
Robert Simpson has written System.Data.SQLite, a SQLite data provider for the .NET Framework. It's free and open source (like SQLite) and works and feels as native as System.Data.SqlClient does. It supports standard ADO.NET conventions, LINQ, and the Entity Framework.
I've used System.Data.SQLite for projects at work for applications that need to run fast and cache data locally for comparison between multiple runs (data processing and job scheduling). Firefox is a good example of an application using SQLite: Firefox 3 uses SQLite for its cookies, downloads history, form autocomplete, and, most importantly, your web browsing history.
Again, SQLite is meant for direct application use and lacks features like user authentication and schema permissions. It has issues if multiple programs try to write to the same database (these can be worked around, but nothing like what a real RDBMS can do). Its biggest advantage is that it doesn't need to be installed and set up the way MySQL does. In the C# case, all you have to do is reference System.Data.SQLite and copy the .dll file along with your program, and it'll work.
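A minimal System.Data.SQLite sketch, following the standard ADO.NET pattern (the file name and schema are placeholders):

using System.Data.SQLite;

public static class SqliteExample
{
    public static void Run()
    {
        // The database file is created automatically if it doesn't exist.
        using (var connection = new SQLiteConnection("Data Source=app.db"))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText =
                    "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)";
                command.ExecuteNonQuery();

                command.CommandText = "INSERT INTO notes (body) VALUES (@body)";
                command.Parameters.AddWithValue("@body", "hello");
                command.ExecuteNonQuery();
            }
        }
    }
}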
I am just beginning to write an application. Part of what it needs to do is to run queries on a database of nutritional information. What I have is the USDA's SR21 Datasets in the form of flat delimited ASCII files.
What I need is advice. I am looking for the best way to import this data into the app and have it easily and quickly queryable at run time. I'll be using it for all the standard things: populating controls dynamically, datagrids, calculations, etc. I will also need user-specific persistent data storage. This will not be a commercial app, so hopefully that opens up the possibilities. I am fine with .NET Framework 3.5, so LINQ is a possibility when accessing the data (I just don't know if it would be the best solution or not). So, what are some suggestions for persistent storage in this scenario? What sort of gotchas should I be watching for? Links to examples are always appreciated, of course.
It looks pretty small, so I'd work out an appropriate object model, load the whole lot into memory, and then use LINQ to Objects.
I'm not quite sure what you're asking about in terms of "persistent storage" - aren't you just reading the data? Don't you already have that in the text files? I'm not sure why you'd want to introduce anything else.
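To illustrate the in-memory approach: a sketch that parses one of the delimited files into an object model and queries it with LINQ to Objects. The file name, the ^ delimiter, the ~ text qualifier, and the column positions are assumptions about the SR21 layout, so check them against the real files:

using System;
using System.IO;
using System.Linq;

public class Food
{
    public string Id { get; set; }
    public string Description { get; set; }
}

public static class Sr21Loader
{
    public static void Run()
    {
        // Parse each ^-delimited line into a Food object.
        var foods = (from line in File.ReadAllLines("FOOD_DES.txt")
                     let fields = line.Split('^')
                     select new Food
                     {
                         Id = fields[0].Trim('~'),
                         Description = fields[2].Trim('~')
                     }).ToList();

        // Once loaded, plain LINQ to Objects handles the querying.
        var cheeses = foods.Where(f => f.Description.Contains("Cheese"))
                           .OrderBy(f => f.Description);

        foreach (var f in cheeses)
            Console.WriteLine("{0}: {1}", f.Id, f.Description);
    }
}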
I would import the flat files into SQL Server and access via standard ADO.NET functionality. Not only is DB access always better (more robust and powerful) than file I/O as far as data querying and manipulation goes, but you can also take advantage of SQL Server's caching capabilities, especially since this nutritional data won't be changing too often.
If you need to download updated flat files periodically, then look into developing a service that polls for these files and imports into SQL Server automatically.
EDIT: I refer to SQL Server, but feel free to use any DBMS.
My temptation would be to import the data into SQL Server (Express if you aren't looking to deploy the app) as it's a familiar source for me. Alternatively you can probably create an ODBC data source using the text file handler to get you a database-like connection.
I agree that you would benefit from a database, especially for rapid querying, and even more so if you are saving user changes to the data. In order to load the flat file data into a SQL Server (including Express), you can use SSIS.
Use LINQ, or a simple text-data-to-list method:
1. Create a list.
2. Read the text file line by line (or all lines at once).
3. Process each line: extract the required data and attach it to the list.
4. Process the list for any further use.
The persistent storage is the files; the list itself is volatile.
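A sketch of those four steps (the file name and the delimiter are placeholders):

using System.Collections.Generic;
using System.IO;

public static class TextToList
{
    public static List<string[]> Load(string path)
    {
        var records = new List<string[]>();               // 1. create a list
        foreach (string line in File.ReadAllLines(path))  // 2. read line by line
            records.Add(line.Split('^'));                 // 3. extract data, attach to list
        return records;                                   // 4. ready for further processing
    }
}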