I am designing a Windows application in C# which, when run for the first time, creates a local SQLite database. The data (around 200 MB or even more) is fed into it as a data stream from a remote server, based on criteria specified by the user.
Currently I have planned to access the database server directly from the application.
Questions:
Is it a good idea to access the database server directly from the application, given that the server manages connections automatically and I would save the time of developing a TCP/IP interface?
What could be the second option? Providing a TCP/IP server or interface (isn't that time-consuming to write)?
As the data is large, should I use compression?
With WCF you can avoid the cost of writing TCP/IP code and have either a standalone server or a web service hosted in IIS. Moreover, compression can be added without code changes.
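As a rough illustration of how little plumbing that takes, here is a minimal WCF sketch; the contract name, operation, and criteria parameter are invented for the example, and streaming assumes a binding configured with transferMode="StreamedResponse":

```csharp
using System.IO;
using System.ServiceModel;

// Illustrative contract - the names and the criteria parameter are made up.
[ServiceContract]
public interface IDataFeedService
{
    // Returning a Stream lets WCF send the ~200 MB payload in chunks
    // (with a streamed transfer mode) instead of buffering it all in memory.
    [OperationContract]
    Stream GetData(string criteria);
}

public class DataFeedService : IDataFeedService
{
    public Stream GetData(string criteria)
    {
        // Query the server database here and write the result set to a stream.
        // A MemoryStream stands in for the real data source in this sketch.
        var stream = new MemoryStream();
        // ... fill the stream from the database, then rewind it ...
        stream.Position = 0;
        return stream;
    }
}
```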
You have to try it with and without compression. The compression ratio depends heavily on the data, and compression time can also be an issue.
Without going into great detail, I can say you can use ASP.NET with C# (you can choose any .NET language) and send and receive data using POST.
Why would you need compression? You are only sending results, aren't you? If the payload is big, you can use an existing library; I have used sevenzipsharp in the past without much issue.
Edit: there may be an option on the server to gzip output data, so you may not need to use anything yourself.
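If you do end up compressing in code, note that the Framework's own System.IO.Compression namespace has gzip support built in. A minimal sketch (the class and method names here are mine, not from any of the answers):

```csharp
using System.IO;
using System.IO.Compression;

static class GzipHelper
{
    // Compress a buffer with the gzip support built into the Framework.
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            // Closing the GZipStream flushes the gzip footer before ToArray.
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(data, 0, data.Length);
            }
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new GZipStream(new MemoryStream(compressed),
                                          CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
            return output.ToArray();
        }
    }
}
```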
Assuming that your intention is to pull down a subset of the data on the server, based on client queries, for local storage, then for reasons of security and control you probably ought to be looking at web services rather than exposing your database directly to the internet.
There are a large number of options for creating services. WCF is the principal method for new .NET applications and is straightforward to implement at both the server and client ends; in this case I'd probably also take a look at ADO.NET Data Services as a shortcut to a rich set of services.
It is usually best to use ADO.NET or LINQ to SQL (or the Entity Framework) to connect to your database directly, unless the user is going to be disconnected while using the application.
If the user will be disconnected, then continue using SQLite, or use ADO.NET, which can save the data to an XML file and access it like a table on the user's machine without the additional dependency on SQLite.
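For illustration, here is a minimal sketch of that XML-file approach; the table, column, and file names are made up for the example:

```csharp
using System;
using System.Data;

class LocalXmlStoreDemo
{
    static void Main()
    {
        // Build a DataSet in memory; "Items" and its columns are illustrative.
        var ds = new DataSet("LocalStore");
        var items = ds.Tables.Add("Items");
        items.Columns.Add("Id", typeof(int));
        items.Columns.Add("Name", typeof(string));
        items.Rows.Add(1, "example");

        // Persist to disk; WriteSchema keeps the column types intact.
        ds.WriteXml("localdata.xml", XmlWriteMode.WriteSchema);

        // Later: reload the file and query it like a table - no SQLite needed.
        var restored = new DataSet();
        restored.ReadXml("localdata.xml");
        foreach (DataRow row in restored.Tables["Items"].Select("Id = 1"))
            Console.WriteLine(row["Name"]);
    }
}
```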
I would not use compression; adding a third-party compression library would mean an additional dependency.
Try to stick to the .NET Framework without additional DLLs and you will have a more flexible application that is easier to install and maintain.
ADO/Entity Framework - http://msdn.microsoft.com/en-us/library/h43ks021.aspx
Assume that I have a third-party database application with an SDK that can be used to retrieve data out of the database as XML.
On the other side, I have developed a website using the Laravel PHP framework. The website is supposed to display data from the application's database.
With regard to the above, I have the following questions:
As far as I understand, I can either store the requested data in my website's database or just show it without storing it. Which technique do you suggest?
How do I achieve the XML data transfer from the database server to the website?
Given my experience with C# development, I assume I have to develop a web service that runs on the database server, retrieves the required data, and sends it to my website. So the web service has to receive requests from my Laravel website, retrieve data from the database server accordingly, and pass the XML response back to my website, which finally displays it. Am I on the right track? If so, could you please guide me on how to code and bind these parts?
Thank you in advance.
I have to agree with @Serge in the comments - there are many ways to do this because it is a very broad question.
My answer was mostly going to deal with how regularly the third-party database would be updated, but judging from your comments I'm assuming it will be fairly often. In that case, instead of writing a C# web service, I would likely connect directly to the third-party database from your Laravel app using the Firebird driver found here: https://github.com/jacquestvanzuydam/laravel-firebird (please note, I have never used this, so I cannot comment on its quality). I don't know much about Firebird itself, but you will likely want to connect over an SSH tunnel or VPN for security reasons.
Then I would either store the data in MySQL if it isn't likely to change very often (in that case you would use a Laravel command, run on a schedule, to pull data out of Firebird every [X] days/hours/minutes depending on the data), or, if the data is likely to change on each potential web request, use some form of caching system (Redis, Memcache, a file cache, etc.) to speed up the web requests.
Sorry if that isn't particularly helpful - if you can provide more information maybe I can help you out further :)
Good luck!
I am working out the nitty-gritty of a potential server / many-client project, and it's in a realm I haven't been in before. Disregarding the scale of the project for a moment, and assuming this ever goes ahead...
My current idea is that the server should be a fat server with a thin rich client on each workstation, built in C# .NET and probably using WinForms for the user interface, and distributed via ClickOnce for easy and compliant software updating.
Database <-> Server (business logic) <-> Rich Thin Client
Instead of a fat client:
Database <-> Client
I am looking into WCF for the server. Is this advisable for a client-server architecture with the following usage case?
Anywhere between 10 and 100 receptionists and practitioners using the client (company growth would increase this)
Windows 7 and up being primary workstation operating system
Minimal data traffic desirable
Potential for large data to be stored alongside the database in some manner (patient images, video and such)
Is it wise to have the server perform as much of the business logic as possible, sending only information and results to the client, which does just the basics (validation etc.)? It seems logical to me.
Does anyone have some good information on beginning such a big project?
I believe the current software being used is actually a fat client with a direct MSSQL connection.
Not only that, it is also non-distributed: each clinic has its own separate database, causing many problems with data integrity and collation for reporting and such.
This is a perfectly valid plan for your architecture, and WCF will do fine. If you need a tutorial on sharing authentication between the web application that authenticates users and the ClickOnce module run from within it, I wrote one once:
http://www.wiktorzychla.com/2008/02/clickonce-webservice-and-shared-forms.html
That was written years ago; it shows how to share authentication between ClickOnce and an ASMX web service. Some time later I wrote another tutorial on sharing authentication between a Silverlight module and a WCF service:
http://www.wiktorzychla.com/2010/04/aspnet-forms-authentication-sharing-for.html
Combining these two will give you ClickOnce + WCF authentication sharing.
I'm writing an application which will involve interaction with a database. The application will be used by somewhere between 100 and 1000 users, and the database should be able to store up to 100,000 rows of data.
Previously I used Microsoft Access, but my users were required to install the Microsoft Access Database Engine for my application to function as intended. I want my application to be as lightweight/portable as possible, so are there any better alternatives where users will not be required to install any third-party components to run my application?
Look at MongoDB, an open-source non-relational database that has picked up popularity. It is very fast, too.
It depends on whether the DB will be server side or client side.
If it's server side, it's up to you really; I would personally go with SQL Server, as I know it best, or even a MySQL/phpMyAdmin combo. But if it's to reside on the client machine, try SQLite (a warning though: it is exactly what the name suggests, LITE, so a lot of the more complex SQL might not be supported). SQLite may be exactly what you're looking for, depending on the complexity of your project.
ALSO: SQLite is supported on iPhone, BlackBerry and Android as well, so if you want to go mobile, no problem.
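For illustration, a minimal sketch of using SQLite from C#; this assumes the System.Data.SQLite ADO.NET provider, which is a DLL you ship alongside the application rather than something the user installs separately (the table and file names are made up):

```csharp
using System.Data.SQLite; // assumes the System.Data.SQLite provider

class SqliteDemo
{
    static void Main()
    {
        // The database file is created on first open, so there is nothing
        // for the end user to install beyond the application itself.
        using (var conn = new SQLiteConnection("Data Source=app.db"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "CREATE TABLE IF NOT EXISTS Users (Id INTEGER PRIMARY KEY, Name TEXT)";
                cmd.ExecuteNonQuery();

                cmd.CommandText = "INSERT INTO Users (Name) VALUES (@name)";
                cmd.Parameters.AddWithValue("@name", "alice");
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```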
Your application could connect to a cloud database (like SQL Azure). That will not require any third party components and it will be accessible from anywhere/any device.
Do you need only one database for all the users, or does every user have their own? If it is on the server side, I would prefer a SQL server (e.g. MSSQL or MySQL). For the client side, SQLite would do the job.
You should consider whether a SaaS model is relevant; in other words, whether you could benefit from hosting all your users' databases in the cloud and letting clients access them remotely. In your scenario I would consider a SaaS model if:
- your users need to scale their individual databases beyond the capacity of their local machines;
- there is a need to share data between the databases of different users - maybe this could enable good features you don't have in your system today; or
- users find it a drag to run the database on their local machine, because of the resources it uses up, the uptime needed, backups, etc.
If any of these are sometimes true, hosting all the databases in the cloud with some sort of multi-tenancy arrangement might be a good solution, and it will make your application as lightweight as possible because you don't need to include a database at all.
I developed a small expense-report system to replace a very old Excel file.
The setup was pretty simple:
- UI: jQuery, ASP.NET, simple CSS
- Back end: SQL Server 2005
- Some additional tiers for the business layer
It's a typical tiered application, running on IIS 6 as an intranet site. It's a lot simpler to use than the Excel sheet, but of course there is the one guy who doesn't seem to have an internet connection - so how can he submit his expense report?
I thought of an offline system that serializes objects, but that again seems a bit complex, as eventually these serialized objects need to get back to the database layer. I'm thinking more along the lines of just telling him to submit an Excel spreadsheet to an "admin" who does have a connection, and having that admin put in his/her expense report. But I fear the "admins" will hate this.
Even if I whip up a desktop client application, I'd still have to hit the network to get to the data layer.
Any other ideas I haven't thought of for handling the guy who cannot VPN or get on the network, but who once loved his Excel application?
We implemented a few solutions for users who don't have a connection. One was a WinForms app, and the others were ASP.NET apps. For the ASP.NET apps, we modified them to run under IIS on the laptop or client PC, since you can install IIS on a client OS.
The pattern we ended up with uses a DataSet locally to hold, parse and manipulate data on the client PC, then serializes it to XML using the built-in DataSet.WriteXml method. We then created an "Upload" screen (an ASP.NET page or WinForms form) that shows the pending uploads and sends the DataSet to an ASP.NET web service. Since the DataSet object is serializable, using it in a web service is the least amount of work. The web service handles actually inserting/updating the database.
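A rough sketch of that pattern, with the file path, table, service and method names all invented for the example:

```csharp
using System.Data;
using System.Web.Services;

// Client side: persist pending data locally while offline.
public static class OfflineStore
{
    public static void SaveLocally(DataSet pending)
    {
        // WriteSchema keeps the column types intact for the later ReadXml.
        pending.WriteXml("pending.xml", XmlWriteMode.WriteSchema);
    }

    public static DataSet LoadPending()
    {
        var ds = new DataSet();
        ds.ReadXml("pending.xml");
        return ds;
    }
}

// Server side: an ASMX web service. A DataSet serializes over SOAP out of
// the box, which is why this pattern is so little work.
public class ExpenseUploadService : WebService
{
    [WebMethod]
    public void Upload(DataSet pending)
    {
        // Walk the rows here and insert/update the real database.
    }
}
```

The "Upload" screen then simply calls the generated proxy's Upload method with the loaded DataSet.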
It is actually a lot less work than it sounds, and the pattern has been so successful for us that it has become our standard method for handling disconnected apps. Troubleshooting on these apps has been negligible, which to me speaks volumes for their simplicity and reliability compared to other approaches we've tried.
You can implement SQL Server merge replication. There was a bit of a learning curve, but I figured it out and implemented it in a couple of weeks, so it's definitely do-able. You can have a desktop app that updates a SQL Server Express database and then synchronizes the changes using replication at a time of your choosing (maybe when the user is connected to the network).
How Merge Replication Works
http://msdn.microsoft.com/en-us/library/ms151329.aspx
You can do a variation of emailing the xls file to an admin: instead of the admin re-entering the data, have them upload the xls file and let your system parse it and enter the data. This works quite well, especially if the xls format is unified.
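As one possible sketch, the server could read the uploaded sheet with the Jet OLE DB provider; the provider being installed on the server, the sheet name, and the header row are all assumptions of mine:

```csharp
using System.Data;
using System.Data.OleDb;

static class XlsImporter
{
    // Reads the "Expenses" worksheet of an uploaded .xls into a DataTable.
    public static DataTable ReadExpenseSheet(string path)
    {
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                         "Data Source=" + path + ";" +
                         "Extended Properties=\"Excel 8.0;HDR=Yes\"";
        using (var conn = new OleDbConnection(connStr))
        using (var adapter = new OleDbDataAdapter("SELECT * FROM [Expenses$]", conn))
        {
            var table = new DataTable();
            adapter.Fill(table); // Fill opens and closes the connection itself
            return table;        // validate rows, then insert into the database
        }
    }
}
```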
Using the Office toolkit and macros, build into the users' Excel application the functionality to export the data into an XML dataset and transmit it when the opportunity permits. On the server end, build an adapter that reads those XML files as they arrive and inserts the appropriate records as required.
I work on a Joomla web site, installed on a MySQL database and running on IIS7. It's all working fine.
I now need to add functionality that lets (Joomla-)registered users change some configuration data. Though I haven't done this yet, it looks straightforward enough to do with Joomla. The data is private so all external access will be done through HTTPS.
I also need an existing C# program, running on another machine, to read that configuration data, and this access needs to be as fast as possible. The data will be small (and filtered by the query), but latency should be kept to a minimum. A short-lived client-side cache (less than a minute, in case a user updates his configuration data) seems like a good idea.
I have done practically zero database/ASP programming so far, so what's the best way of doing that last step? Should the C# program access the database directly (using what - LINQ?), or should I set up some sort of facade service (SOAP?)? If a service should be used, should it be done through Joomla or with ASP on IIS?
Thanks
I ended up using a WCF service façade written in C# that returns the data from the database. The service exposes only a couple of functions that take the query parameters as arguments; the SQL queries are not exposed, nor is the database connection string. The WCF service uses MySQL Connector/NET 6.3.1 to talk to MySQL, is accessible only over HTTPS, and requires a username and password.
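For anyone wanting a starting point, here is a rough sketch of such a façade; the contract, the table, and the connection string are illustrative only:

```csharp
using System.ServiceModel;
using MySql.Data.MySqlClient; // MySQL Connector/NET

[ServiceContract]
public interface IConfigService
{
    [OperationContract]
    string GetUserSetting(int userId, string settingName);
}

public class ConfigService : IConfigService
{
    // Kept server-side; callers never see the SQL or the connection string.
    // The database, table and column names below are invented for the sketch.
    private const string ConnStr =
        "Server=localhost;Database=joomla;Uid=svc;Pwd=secret;";

    public string GetUserSetting(int userId, string settingName)
    {
        using (var conn = new MySqlConnection(ConnStr))
        using (var cmd = new MySqlCommand(
            "SELECT value FROM user_config WHERE user_id = @id AND name = @name",
            conn))
        {
            cmd.Parameters.AddWithValue("@id", userId);
            cmd.Parameters.AddWithValue("@name", settingName);
            conn.Open();
            return cmd.ExecuteScalar() as string; // null if not found
        }
    }
}
```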