I am designing a program that will build and maintain a database, and act as a central server. This is the 'first stage' of a grander plan. Coming later will be 3-5 remote programs built around the information put into this database.
The requirements are:
The remote programs must be able to access the information in the database.
The remote programs must be able to set alerts when information in the database changes.
The remote programs must be able to request the central server to go out and fetch new / different data.
So, the question is this: how do I expose this data and events to the outside world? My two choices are:
Have them communicate directly with my 'server' application. This seems easier for doing event notifications (although I suppose I'm probably missing something in SQL). It also seems more 'upgradeable' - that is, I don't need to worry about a database change breaking all my remote programs, because the server can account for it and transform the data into a version each child program will understand.
Just go ahead and let them connect directly to the database. The nice thing about this is that it's a solved problem: I can use LINQ to SQL, and the only thing the main server application needs to do is let the remote programs know where the database is.
I'm unsure how to trigger / relay 'events' for field changes in a database across different programs that may or may not be on the same computer.
Forgive my ignorance on this question. I feel woefully unprepared to ask it, but I'm having a hard time figuring out where to get started with this. It is my first real DB project :-/
Thanks!
If the other programs are going to need to know about updates to the database, then the best solution is to manage all db updates through your server application so it can alert clients of the changes. Otherwise it will be tough for the clients to be aware of changes to the db. This also has the advantage of hiding the implementation details of your storage solution from the clients, so you are free to change databases, etc...
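For what it's worth, here is a minimal sketch of that idea in C#. Names like `UpdateRecord`, `RecordChanged`, and the `dbo.Records` table are invented for illustration; a real system would forward the event to remote clients over WCF callbacks, sockets, or similar.

```csharp
// Sketch: every write funnels through the server, which raises an event
// that the remoting layer forwards to subscribed clients.
using System;
using System.Data.SqlClient;

class CentralServer
{
    public event Action<int> RecordChanged; // fires with the changed record's id

    const string Cs = "Server=.;Database=Central;Integrated Security=true";

    public void UpdateRecord(int id, string value)
    {
        using (var conn = new SqlConnection(Cs))
        using (var cmd = new SqlCommand(
            "UPDATE dbo.Records SET Value = @v WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@v", value);
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            cmd.ExecuteNonQuery();
        }

        // Because the server owns all writes, alerting clients is trivial.
        var handler = RecordChanged;
        if (handler != null) handler(id);
    }
}
```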
My suggestion would be to go with option 1. Build out a web service that can provide the information they all need. This will be the most flexible approach and will let you avoid the duplicated backend code that direct communication with the database would entail.
I would recommend looking at some data source design patterns first. These types of patterns will help you come up with solutions for managing the state of your data. Beyond that, I would need more information about your requirements for the clients to make any further useful suggestions.
I recommend you learn about SQL Server and/or databases first. You don't appear to realize that most of what you want from your "central server" can all be done by SQL Server itself.
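For instance, SQL Server's Query Notifications can push change events to a client with no custom server in between. A minimal sketch using SqlDependency, assuming Service Broker is enabled on the database and a `dbo.Widgets` table exists (both are illustrative):

```csharp
using System;
using System.Data.SqlClient;

class ChangeWatcher
{
    const string Cs = "Server=.;Database=Central;Integrated Security=true";

    public static void Watch()
    {
        SqlDependency.Start(Cs); // starts the notification listener, once per app

        using (var conn = new SqlConnection(Cs))
        // Notification queries are restricted: two-part table names, no SELECT *.
        using (var cmd = new SqlCommand(
            "SELECT WidgetId, Name FROM dbo.Widgets", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (s, e) =>
                Console.WriteLine("Data changed: " + e.Info);

            conn.Open();
            using (cmd.ExecuteReader()) { } // executing the command registers the subscription
        }
    }
}
```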
A central database is the simplest option and the cheapest to both build and maintain.
There are however a few scenarios where a central database could cause problems:
High load on one of the systems: a high load on one system could reduce performance on the others. For example, someone running an internal report could stop you from taking orders on your eCommerce site.
With several systems writing to the same database there is a greater chance of locking.
With several systems dependent on the same database schema, how do you upgrade? All systems at the same time?
If you need to take the database down, all systems stop.
I'm part of a small team that currently uses an Access database for scheduling a larger team's availability. This has presented some issues with corruption of the Access database. Additionally, I want to implement additional functionality over time.
I've set out to create an application for the 4-5 of us to use that will solve the concurrent database issue, as well as give the team more functionality.
Since this is just a shared network drive, my guess is I won't have access to SQL Server. I thought maybe a web service would be the way to go, but I don't really want to foot the bill for one, and when I eventually leave the team I don't want to have to maintain it.
Some ideas I've come up with is an application written in C# that acts as the front-end with SQLite embedded as the back-end. However, I've spent days trying to get the Entity Framework to work with SQLite and am at the point of giving up.
I'm trying to decide what else I can do to solve this issue. Is there another technology I can use?
As was said, it sounds like you're trying to reinvent the DBMS wheel.
If you have a database that multiple clients use at the same time, "sharing an Access file on a network share" will simply not cut it. You need a proper DBMS. You have simply outgrown the scale Access was designed for - probably even the scale it was intended for.
You said cost might be an issue, but it really isn't: there are dozens of DBMSs out there, a number of them free. MySQL is a shining example of a free DBMS. Convert that whole Access thing into a MySQL database, write a frontend for the MySQL database, done.
If you already have a computer providing the share across the network, that same computer can host the MySQL server. Setting up a DBMS with one or more instances can be a bit more involved than just enabling a share, but not much more than programming a web service.
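To give a feel for the frontend side, here is a hedged sketch of C# talking to that MySQL database via the official Connector/NET (MySql.Data); the connection string, table, and columns are invented:

```csharp
using System;
using MySql.Data.MySqlClient;

class ScheduleRepository
{
    const string Cs = "Server=fileserver;Database=scheduling;Uid=team;Pwd=secret;";

    public static void AddShift(string person, DateTime day)
    {
        using (var conn = new MySqlConnection(Cs))
        using (var cmd = new MySqlCommand(
            "INSERT INTO shifts (person, day) VALUES (@person, @day)", conn))
        {
            cmd.Parameters.AddWithValue("@person", person);
            cmd.Parameters.AddWithValue("@day", day);
            conn.Open();
            cmd.ExecuteNonQuery(); // the server handles concurrent writers for you
        }
    }
}
```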
A co-worker and I are working on some pharmacy software (in C#) which deals with the management of patient profiles, patient drug prescriptions, etc. All of these different sets of data are stored in a SQL Server database (we're using 2008 Standard, but future versions are fine too). Each store has its own SQL Server instance on a local machine.
Our Goal:
We want to have "Store A" be able to access "Store B's" databases if need be. Basically in the event that perhaps a pharmacy customer is out of town and visits one of the other pharmacy branches.
Things I've thought of:
My initial thought was to keep an online instance of SQL Server which could be accessed through a DNS name (or perhaps an IP). While figuring out the best way to keep these in sync I came across SQL Server replication. The problem is that I was going to use transactional replication with updating subscribers, but since that's deprecated it's not really a long-term option anymore. Microsoft suggests using peer-to-peer replication instead, but that requires Enterprise edition and we're really trying to avoid that if we can. I wanted a transactional type of replication since it does a much better job of keeping records consistent (no waiting for something like a merge agent job to run every hour).
Something I've thought about more recently is an internet-based SQL Server instance which would contain nothing but linked servers back to each store's local machine. I wouldn't have to worry about sync problems if other stores just worked directly off each other's local machines. But I've read a lot of people saying this is a horrible security vulnerability, so I'm not sure it's even a plausible idea - though I think maybe there's some way to make it work?
Anyways, that's the basic gist of what we're trying to do. I don't know whether replication or linked servers would be the better route to take.
Edit:
What about bi-directional replication? I was reading a little about this, but I'm unsure whether it's what I need. I don't want to have to stagger primary keys between servers or anything, since they are pretty important in identifying prescription numbers and the like. But if I could do bi-directional replication, that could be good too.
Not really an answer but I have more space...
SQL Azure is the 'cloud' version of SQL Server, and a VPN is a way of creating your own private network over the internet - do some research on both terms. Many applications are going cloud nowadays, but you should seriously consider the likelihood that a store will be left without internet access at some point.
With regards to replication, you can 'roll your own' replication if you own this application and you are happy to support it.
The basic premise is:
Create a trigger on every table which writes the PK of every change to a log table.
Create a process which copies and merges only the changed rows (based on the log table) between publishers and subscribers - see the sketch just below.
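A minimal sketch of both pieces, assuming a Prescriptions table with an integer key and an invented ChangeLog table. All names are illustrative, and the actual row copy is omitted for brevity:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class HomeGrownReplication
{
    // Piece 1 (run once per table): log the PK of every change.
    const string TriggerSql = @"
        CREATE TRIGGER trg_Prescriptions_Log ON dbo.Prescriptions
        AFTER INSERT, UPDATE, DELETE AS
        INSERT INTO dbo.ChangeLog (PrescriptionId, ChangedAt)
        SELECT PrescriptionId, GETUTCDATE() FROM inserted
        UNION ALL
        SELECT PrescriptionId, GETUTCDATE() FROM deleted;";

    // Piece 2: a periodic job that ships only the logged rows to the peer.
    static void SyncOnce(string localCs, string remoteCs)
    {
        var changedIds = new List<int>();
        using (var local = new SqlConnection(localCs))
        {
            local.Open();
            var cmd = new SqlCommand(
                "SELECT DISTINCT PrescriptionId FROM dbo.ChangeLog", local);
            using (var rdr = cmd.ExecuteReader())
                while (rdr.Read())
                    changedIds.Add(rdr.GetInt32(0));
        }

        foreach (int id in changedIds)
        {
            // Read the full row from the local database and upsert it on the
            // peer (e.g. with a MERGE statement); omitted here for brevity.
        }

        // In production, mark shipped rows rather than deleting blindly.
        using (var local = new SqlConnection(localCs))
        {
            local.Open();
            new SqlCommand("DELETE FROM dbo.ChangeLog", local).ExecuteNonQuery();
        }
    }
}
```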
I'm trying to prepare to build a database driven .net application and I have hit a roadblock early on due to my lack of knowledge on this topic. Searching around didn't yield anything so here I am asking for help.
I'm receiving weekly data in xml format that will be added to a database and then reports generated using that data. I have a limited license on the xml files so only I can download them and I need to get the results to my end users as well. As far as I can see, I have 2 options:
Feed the data from the xml files into a web hosted database and then have each user connect to the database.
Upload the xml data to a server, have each user download it and keep a local copy of their own database. I'm thinking this will invalidate my license to the original data.
Things / questions of note:
The database holds weekly sports historical data for about the last 10 years.
I need to limit access to the database to only subscribed users.
I'll need to decide how the database will be built.
I need to decide what kind of hosting I'll need.
As you can see, quite an ambitious project for someone new to this. To get specific, here are my questions:
What kind of hosting solutions shall I look for?
Should I use SQL? (Complete newbie on this subject)
Should I use ClickOnce and then host the application?
Do you have any book or tutorial recommendations that would cover a project like this?
Do I need a script to feed the xml into the database if I go that route? Will that script reside on the server and run automatically even when I'm not there to instigate it? (See the sketch after this list.)
I hope the general topic isn't too vague. I've tried to ask specific questions, and I'm aware I don't have any code to show since it's all in the early stages of thinking.
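On the scripting question: below is a sketch of the weekly import using LINQ to XML plus plain ADO.NET. The element names and the Games table are guesses at the feed's shape, and a Windows Task Scheduler job could run this weekly without you there to instigate it:

```csharp
using System;
using System.Data.SqlClient;
using System.Xml.Linq;

class FeedImporter
{
    static void Main()
    {
        const string cs = "Server=.;Database=Sports;Integrated Security=true";
        var doc = XDocument.Load("weekly-feed.xml");

        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            foreach (var game in doc.Descendants("game"))
            {
                using (var cmd = new SqlCommand(
                    "INSERT INTO Games (Week, HomeTeam, AwayTeam, HomeScore, AwayScore) " +
                    "VALUES (@w, @h, @a, @hs, @as)", conn))
                {
                    cmd.Parameters.AddWithValue("@w", (int)game.Attribute("week"));
                    cmd.Parameters.AddWithValue("@h", (string)game.Element("home"));
                    cmd.Parameters.AddWithValue("@a", (string)game.Element("away"));
                    cmd.Parameters.AddWithValue("@hs", (int)game.Element("homeScore"));
                    cmd.Parameters.AddWithValue("@as", (int)game.Element("awayScore"));
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}
```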
The question is a bit vague since you are early on in the decision-making process. However, I do believe that I can offer some help in directing your thinking as you proceed. I think in the situation you are describing, one key thing you should consider is to host your data via JSON/WCF/REST. If you look into these technologies, you will see that there are different ways you can offer your data based upon your developing requirements. For example, how are you going to do authentication? Are you going to allow third-party clients?
What you really don't want to do is allow direct database access, even for authenticated users. Instead, put something in front of it. If you are working in the .NET space, look into all of the different things WCF offers and pick one based upon what fits best. Once you pick that, then you will know what you need for hosting and deployment. Even if you are going to provide the clients as well as the server, this is still a good way to protect your data and provide a way to expand your offering in the future.
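As a concrete (if hypothetical) example, a WCF REST endpoint fronting the data might look like the following. The contract, URI, and WeeklyStat shape are invented, and you would host it with WebServiceHost; authentication and subscription checks would live inside the service method:

```csharp
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IStatsService
{
    // GET /stats/weekly?season=2012 returns the weekly rows as JSON.
    [OperationContract]
    [WebGet(UriTemplate = "stats/weekly?season={season}",
            ResponseFormat = WebMessageFormat.Json)]
    List<WeeklyStat> GetWeeklyStats(int season);
}

public class WeeklyStat
{
    public int Week { get; set; }
    public string Team { get; set; }
    public int Score { get; set; }
}

public class StatsService : IStatsService
{
    public List<WeeklyStat> GetWeeklyStats(int season)
    {
        // Query the database here and map rows to WeeklyStat;
        // this is also where you verify the caller is a subscriber.
        return new List<WeeklyStat>();
    }
}
```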
Is there a standard messaging protocol / API available to keep databases in sync? Or, alternatively, an API for creating and parsing messages?
Our company is working with another company to provide two different software packages to two different kinds of users. The data sits in two separate databases but parts of it have to remain in sync.
Their system is pretty much a black box to us. And vice versa.
So what would be required is to track updates, turn these into messages, send them to a web service, map them back to the destination database's fields, and commit them.
The database schemas do not match.
I am aware that we are going to have to roll most of this ourselves, but some ideas around messaging or techniques would be good.
One solution: SQL Server Integration Services (SSIS), which appeared with SQL Server 2005. This is exactly what you need. It was called DTS (Data Transformation Services) in SQL Server 2000, and was created to import/export/transform data from one point to another. It is really easy to use from SQL Server 2005 onwards (DTS was quite horrible).
So basically, you will have to write packages that import data from their database and transform, filter, etc. it exactly how you need before inserting it into your own database. And vice versa.
Regarding the black box problem, you should generate the database's relational design (reverse-engineer the schema) to make this easier.
EDIT
Just in case you need to install it: I remember bugs in the SQL Server 2005 installer that left SSIS out entirely. I had to satisfy every warning in the installer's system requirements step to get it.
You have two problems:
track the changes that have to be synced
apply the changes to the peer
There is a solution that addresses both issues at once, and I'm sure you are aware of it: replication. Merge replication would allow both sites to update the data and would also provide merge conflict resolution. But replication only works when the table schemas are similar, and it puts a big constraint on development, as schema changes have to be carefully coordinated between the sites. In practice, when the sites are operated by independent companies, it is quite difficult to maintain over the long term.
If you want to roll your own, the change tracking part has built-in support in SQL Server:
Change Tracking
Change Data Capture
Both can be used in a sync solution as a means of detecting what changed - see the sketch below.
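The sketch mentioned above: reading what changed via Change Tracking from C#. It assumes change tracking is already enabled on the database and on a hypothetical dbo.Orders table, and that you persist the last-synced version number between runs:

```csharp
using System.Data.SqlClient;

class ChangeTrackingReader
{
    public static void ReadChanges(string connectionString, long lastVersion)
    {
        const string sql = @"
            SELECT ct.OrderId, ct.SYS_CHANGE_OPERATION
            FROM CHANGETABLE(CHANGES dbo.Orders, @lastVersion) AS ct;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@lastVersion", lastVersion);
            conn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    // 'I' = insert, 'U' = update, 'D' = delete; turn each
                    // row into a message for the peer system here.
                    int id = rdr.GetInt32(0);
                    string op = rdr.GetString(1);
                }
            }
        }
    }
}
```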
Applying the changes can be handled by a web service, but there is also a built-in solution in SQL Server that allows for far higher scalability and throughput: Service Broker. Relying on a message-defined API for sync allows the two sites to evolve at their own pace and change their schemas almost at will, as long as the communication API (the message protocol) remains unchanged.
The answers provided give me some good ideas, but I think we are going to end up doing something a bit different.
We are using MSMQ, and defining a standard messaging system which we will roll ourselves.
As to how we will know what things have changed I am not sure at the moment.
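For anyone following the same route, sending one of these messages over MSMQ from C# is a few lines with System.Messaging; the queue path, label, and body shape below are placeholders for whatever protocol you define:

```csharp
using System.Messaging;

class SyncMessageSender
{
    public static void Send(string body)
    {
        const string path = @".\Private$\SyncQueue"; // hypothetical local queue

        // Exists/Create only work against local queues, which is fine here.
        if (!MessageQueue.Exists(path))
            MessageQueue.Create(path);

        using (var queue = new MessageQueue(path))
        {
            // XmlMessageFormatter is the default; the peer parses the body
            // and maps the fields to its own schema.
            queue.Send(new Message(body), "FieldUpdate");
        }
    }
}
```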
I am working on an application at the minute that will originally just be installed on a client machine with a lightweight database (maybe SQLite).
After a while I want to add a web version of the same piece of software and with this the smart client will then be able to sync with the online version.
Has anyone done anything similar? I am looking to know:
What is the best way of syncing, are there patterns around it?
Are there any frameworks out there to handle syncing?
Are there any gotchas I should be aware of from the start (maybe security, concurrency)?
What would be the best way to architect this?
Thanks in advance...
So, Microsoft's Sync Framework will help with this; see its Introduction.
A couple of issues stand out right at the beginning.
If you are going to have the data exist on the client first, then sync to a server at some point later, you need to think about what happens when a number of clients all sync to the server, especially around conflict resolution.
There are events that get raised on the server side to identify when a conflict occurs, but you need to decide who wins (the row on the server, or the one from the incoming client). Depending on what you choose to do here, the second sync is likely to modify the client data. A sketch of wiring this up follows.
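A hedged sketch of that wiring with Sync Framework 2.x (SqlSyncProvider). It assumes both databases are already provisioned for an "Invoices" scope, which is an invented name:

```csharp
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;
using System.Data.SqlClient;

class InvoiceSync
{
    public static void Run(string clientCs, string serverCs)
    {
        using (var client = new SqlConnection(clientCs))
        using (var server = new SqlConnection(serverCs))
        {
            var serverProvider = new SqlSyncProvider("Invoices", server);

            // Decide who wins when the server reports a conflict.
            serverProvider.ApplyChangeFailed += (s, e) =>
            {
                if (e.Conflict.Type == DbConflictType.LocalUpdateRemoteUpdate)
                    e.Action = ApplyAction.RetryWithForceWrite; // client wins
            };

            var orchestrator = new SyncOrchestrator
            {
                LocalProvider  = new SqlSyncProvider("Invoices", client),
                RemoteProvider = serverProvider,
                Direction      = SyncDirectionOrder.UploadAndDownload
            };

            orchestrator.Synchronize();
        }
    }
}
```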
Think carefully about what to sync. If it's a contacts database, is it good enough to have just the client name and telephone number sync, or do you need the whole contact history as well?
Think in terms of syncing a table at a time, with rows that share the same key value - even if this means a constructed table maintained with triggers etc. This makes the framework sync a much simpler process and less prone to errors (such as tables needing to be synced in different orders).
If it's an invoicing program, maybe an upload-only table of orders is all that's needed, with all the associated invoice, history, and reporting tables being updated on the server, rather than updating them on the client and syncing multiple tables...