Sharing a static table with one instance between applications on the same server? - C#

I have 5 websites running on the same server, and there are some SQL tables that I access frequently. Querying them over and over was time consuming, so I defined them as shared and loaded them into memory; my applications now access those static data tables from memory quite efficiently. But I realized I made a mistake and occupied memory unnecessarily, because I keep the same tables 5 times, once per web application. Now I need to find the best way to share those tables only once. My options are:
1) Using a local database - SQL CE. My original SQL database is on another server, so access is slower, but I could add a SQL CE database (to be honest, I have never used it and don't know whether it buys me anything) to hold just these tables, since their rows are static.
2) I read on forums about exposing the data over WCF with TCP binding. I'm not sure whether it would give me any advantage. Any idea?
3) A Windows service: is it faster than WCF? Development would definitely be faster for me, since I have experience with Windows services, but I'm not sure about performance.
Please let me know if you have any comments on these ideas, or any other suggestion.
Thanks a lot.

If you are running on Windows Server 2008, you could use Microsoft's AppFabric Server distributed caching. Here are a couple of articles to give you an idea:
http://www.hanselman.com/blog/InstallingConfiguringAndUsingWindowsServerAppFabricAndTheVelocityMemoryCacheIn10Minutes.aspx
http://msdn.microsoft.com/en-us/magazine/dd861287.aspx
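As a minimal sketch of what the lookup side might look like with the AppFabric caching client, assuming a cache named "default" has already been created with the AppFabric tools; the "Countries" key, the cache name, and LoadCountriesFromSql are made-up stand-ins for your own tables and loading code:

```csharp
// Requires references to the AppFabric caching client assemblies
// (Microsoft.ApplicationServer.Caching.*) and a configured cache host.
using System;
using System.Data;
using Microsoft.ApplicationServer.Caching;

public static class SharedLookup
{
    // The factory is expensive to create, so one per app domain is the usual pattern.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    private static readonly DataCache Cache = Factory.GetCache("default"); // assumed cache name

    public static DataTable GetCountries()
    {
        // All 5 sites ask the cache cluster first; cached objects must be serializable.
        var table = Cache.Get("Countries") as DataTable; // hypothetical key
        if (table == null)
        {
            table = LoadCountriesFromSql();              // your existing SQL load
            Cache.Put("Countries", table, TimeSpan.FromHours(12));
        }
        return table;
    }

    private static DataTable LoadCountriesFromSql()
    {
        // placeholder for the existing database call
        return new DataTable("Countries");
    }
}
```

Because all five sites talk to the same cache cluster, the table is held in memory once, regardless of how many web applications read it.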

Related

Concurrent database access on shared network drive

I'm part of a small team that currently uses an Access database for scheduling a larger team's availability. This has presented some issues with corruption of the Access database. Additionally, I want to implement additional functionality over time.
I've set out to create an application for the 4-5 of us to use that will solve the concurrent database issue, as well as give the team more functionality.
Since this is a shared network drive, I won't have access to SQL Server (as far as I can tell). I thought maybe a web service would be the way to go, but I don't really want to front the bill for this. Additionally, I don't want to have to maintain this after I eventually leave the team.
One idea I've come up with is an application written in C# that acts as the front-end, with SQLite embedded as the back-end. However, I've spent days trying to get Entity Framework to work with SQLite and am at the point of giving up.
I'm trying to decide what else I can do to solve this issue. Is there another technology I can use?
As was said, it sounds like you are trying to reinvent the DBMS wheel.
If you have a database that multiple clients use at the same time, "sharing an Access file on a network share" will simply not cut it. You need a proper DBMS. You have simply outgrown the scale Access was designed for, probably even the scale it was intended for.
You said cost might be an issue, but it is not really: there are dozens of DBMSs out there, a number of them free. MySQL is a shining example of a free DBMS. Convert that whole Access thing into a MySQL database, write a frontend for the MySQL database, and you are done.
If you already have a computer providing the share across the network, that same computer can provide the MySQL server. Setting up a DBMS with one or more client instances can be a bit more involved than just enabling a share, but not much more than programming a web service.
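For what it's worth, a minimal sketch of what the C# frontend's data access could look like against such a MySQL server, using the MySQL Connector/NET provider; the server name, credentials, and the availability table and columns here are made up:

```csharp
// Requires the MySql.Data package (MySQL Connector/NET).
using System;
using MySql.Data.MySqlClient;

class ScheduleRepository
{
    // Hypothetical connection details; point this at the machine that hosts the share today.
    private const string ConnectionString =
        "Server=teamserver;Database=scheduling;Uid=schedapp;Pwd=secret;";

    public void PrintAvailability(DateTime day)
    {
        using (var conn = new MySqlConnection(ConnectionString))
        using (var cmd = new MySqlCommand(
            "SELECT member_name, available FROM availability WHERE day = @day", conn))
        {
            cmd.Parameters.AddWithValue("@day", day.Date);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}",
                        reader.GetString(0), reader.GetBoolean(1));
                }
            }
        }
    }
}
```

Concurrency is then handled by the MySQL server itself rather than by file locking on the network share, which is exactly the part that keeps corrupting the Access file.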

Best way to make sql server instance available remotely? Linked Servers or replication? Other?

A co-worker and I are working on some Pharmacy software (in C#) which deals with the management of patient profiles, patient drug prescriptions, etc. All of these different sets of data are stored in a sql server database (we're using 2008 standard but future versions are fine too). Each store has its own sql server instance on a local machine.
Our Goal:
We want to have "Store A" be able to access "Store B's" databases if need be. Basically in the event that perhaps a pharmacy customer is out of town and visits one of the other pharmacy branches.
Things I've thought of:
My initial thought was to basically keep an online instance of SQL Server which could be accessed through a DNS name (or perhaps IP). I was trying to figure out the best way to keep these in sync and I came across SQL Server replication. The problem is I was going to use transactional replication with updating subscribers, but since it's deprecated it's not really a long-term option anymore. Microsoft suggests using peer-to-peer replication, but that requires Enterprise edition, and we're really trying to avoid that if we can. I wanted to use a transactional type of replication since it does a much better job of keeping records consistent (no waiting for something like a merge agent job to run every hour).
Something I've thought about more recently is having an internet-based SQL Server instance which contains nothing but linked servers back to each store's local machine. I wouldn't have to worry about sync problems if stores just worked directly off each other's local machines. But I've read a lot of people saying this is a horrible security vulnerability, so I'm not sure whether it's even a plausible idea, but maybe there's some way to make it work?
Anyways so this is the basic gist of what we're trying to do. I don't know if replication or linked servers would be the better route to take.
Edit:
What about bi-directional replication? I was reading a little bit about this but I'm a little unsure about if this is what I need or not. I don't want to have to stagger primary keys between servers or anything, since they are pretty important in identifying prescription numbers and stuff like that. But if I could do bi-directional replication, that could be good too.
Not really an answer, but I have more space...
SQL Azure is the "cloud" version of SQL Server, and a VPN is a way of creating your own private network over the internet. Do some research on these terms; many applications are going cloud nowadays. You should also seriously consider the likelihood that there will be times with no internet access.
With regards to replication, you can "roll your own" replication if you own this application and you are happy to support it.
The basic premise is:
1) Create a trigger on every table which writes the PK of every change to a log table (a rough sketch of such a trigger follows below).
2) Create a process which manages copying and merging only the changed info (based on the log table) using your own publishers and subscribers.
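As a rough illustration of step 1, here is how such a change-log trigger might be deployed from C#. The table and column names (Prescriptions, PrescriptionId, ChangeLog) are made up; substitute your real schema, and repeat per table:

```csharp
// Deploys a trigger that records the PK of every insert/update/delete into a
// change-log table; the sync process later pulls only the keys listed there.
using System.Data.SqlClient;

class ChangeLogSetup
{
    private const string TriggerSql = @"
CREATE TRIGGER trg_Prescriptions_Log ON dbo.Prescriptions
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ChangeLog (TableName, KeyValue, ChangedAt)
    SELECT DISTINCT 'Prescriptions',
           COALESCE(i.PrescriptionId, d.PrescriptionId),
           SYSUTCDATETIME()
    FROM inserted i
    FULL OUTER JOIN deleted d ON i.PrescriptionId = d.PrescriptionId;
END";

    public static void Deploy(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(TriggerSql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

The step 2 process would then read dbo.ChangeLog since its last run, fetch the matching rows from the source store, and merge them into the destination store, keeping a watermark of the last ChangedAt it processed.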

Push data from Sql Server to desktop application

I have a pretty simple .net 4 desktop application written in c# which needs to display some data inserted to a table on an SQL Server (2005). The data itself is quite simple, just one row of about 10 columns, (mostly counts of other data).
I could just poll SQL Server from the application at some interval, but my preference is to have SQL Server push the data out to this application if possible, as the timing of the "new data" is often irregular.
In short, I'd like to know if this is possible. Doing some research before posting this question, I found a few possibilities.
1) SignalR: I found this question, which seemed promising, but it seems to be in the context of a web application rather than a desktop one. Upon review of the SignalR wiki, it seemed to me that it requires some kind of web service or other HTTP connection, which I'd prefer to avoid.
2) SQL Server change tracking, from this question. Firstly, I'm not on SQL 2008, so I assume I'd have to install or configure it (which isn't a problem), but I'm also not sure whether it will provide what I need.
I will mention as well that this client application could exist on 100+ different pcs which would all need to be notified on the data change.
So, is such a thing possible? I apologize if the question is a little vague - and thanks in advance for your help!
The SqlDependency class is supposed to cater to the very scenario you are referring to.
While I do not have any personal experience using it, this article seems to be in line with your scenario.
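A rough sketch of how SqlDependency might be wired up in the desktop app, assuming Service Broker is enabled on the database; dbo.Counts and its columns are stand-ins for your real one-row table (notification queries must list columns explicitly and use two-part table names):

```csharp
using System.Data.SqlClient;

class CountsWatcher
{
    private readonly string _connectionString;

    public CountsWatcher(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString);   // one listener per application
        Subscribe();
    }

    private void Subscribe()
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "SELECT OrdersToday, OpenTickets FROM dbo.Counts", conn)) // hypothetical table
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += OnCountsChanged;

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // refresh the UI here with reader[0], reader[1], ...
                }
            }
        }
    }

    private void OnCountsChanged(object sender, SqlNotificationEventArgs e)
    {
        // Query notifications fire only once per subscription,
        // so re-subscribe and re-read after every change.
        Subscribe();
    }
}
```

One caveat worth load testing: with 100+ client PCs you would have 100+ open notification subscriptions against the server, so a single watcher service that fans changes out to the clients may scale better.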

Database on a server without installation?

Right now I have a customer who is working with several businesses. He works with their data but is not allowed to access their databases directly. We thought of using SQLite or SQL CE and storing a copy/part of the original database as a file on a network share. Now the problem is that SQL CE does not support this, and SQLite strongly recommends against it.
First of all, performance is a huge problem, since our customer is working with a lot of data (up to several GB). The second problem is that SQLite has trouble with concurrent use of the database when it is stored on a network share (the underlying OS file-locking functionality is actually the problem). I did a lot of research on this topic and many people say it is just a matter of time before the database gets corrupted.
Does anyone know a better solution to this problem, or a workaround which lets me use SQLite? It does not need to be a file-based database, as long as nothing needs to be installed or run on the server.
Thanks, David.
If you are going to store data on a network share and have concurrent users accessing it, you are going to need a database that can handle concurrent access. MS Access will quickly die under concurrent access, as will SQLite.
SQL Server Express is free and works very well. PostgreSQL, as suggested by Maxim, is an open-source, full-featured database that will do the job very well but may be overkill.
You could also look at Redis: a fast, lightweight, in-memory NoSQL database that can also persist to file.
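If Redis is an option (it does need a process running somewhere, so it may fall foul of the "nothing installed or run on the server" constraint), a minimal C# sketch with the StackExchange.Redis client might look like this; the host name and key are made up:

```csharp
// Requires the StackExchange.Redis package and a reachable Redis instance
// (configured with RDB/AOF persistence if the data must survive restarts).
using System;
using StackExchange.Redis;

class RedisExample
{
    static void Main()
    {
        // In real code the multiplexer is usually kept as a long-lived singleton.
        using (var redis = ConnectionMultiplexer.Connect("teamserver:6379")) // assumed host
        {
            IDatabase db = redis.GetDatabase();

            // Store and read back a simple value; real data would more likely be
            // serialized JSON or Redis hashes rather than plain strings.
            db.StringSet("customer:42:name", "Contoso");
            string name = db.StringGet("customer:42:name");
            Console.WriteLine(name);
        }
    }
}
```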
You can try PostgreSQL. It is very easy to configure and is rather reliable. It also supports export/import options.
Any of this only makes sense if your client is able to get his hands on an exported database somehow.

MongoDB - Hosting on multiple servers

I am wanting to use MongoDB on my Windows Server and I am using the .NET code at:
https://github.com/atheken/NoRM/wiki/
I have 2 web servers that I need to host MongoDB on and keep the database on both instances in sync. What should I be looking at to accomplish this? It seems the master/slave replication option is ideal.
If I do this, can I keep my connection string as the following?
mongodb://localhost/MyDatabase?strict=false
Thanks for any help. This is my first attempt at using MongoDB.
MongoDB doesn't support this kind of peer-to-peer replication, only master-slave where data is always written to a primary database then sync'd out to secondary replicas. You can, however, distribute reads across the replicas by using the slaveOk option. Check out replica sets for more info. To distribute writes, take a look at sharding.
Also, it might not be ideal to host MongoDB and your web server on the same box. Mongo is greedy when it comes to memory, and if the database grows larger than available RAM then web server performance could really suffer.
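On the connection-string question: pointing at localhost only ever talks to the local mongod, so once a replica set is involved the client generally needs to know about all members. Purely as an illustration (host names are made up, and whether the NoRM driver parses this exact form should be checked against its documentation; the official 10gen C# driver of that era understood multi-host URIs and the slaveOk option), a multi-host connection string usually looks something like this:

```csharp
// Illustrative only: a replica-set style connection string listing both hosts,
// with slaveOk enabled so reads may be served by a secondary.
const string connectionString =
    "mongodb://web1:27017,web2:27017/MyDatabase?slaveOk=true";
```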
