PHP Web Service vs MySQL Connector in a Windows Application - C#

I have a Windows application in C#. This application interacts with a remote MySQL database. Should I create a PHP web service to handle these operations (insert/add/delete/update), or use the MySQL connector for C#? I'm not sure which way is better.
Thanks!

Whether the MySQL server is on the same network (physically) or not, from my point of view the fitting solution would be to create a dedicated web service that provides the CRUD functionality for your application.
This is because it gives you Separation of Concerns (SoC): you can separate the business logic tier from the data access tier.
See also: Single Responsibility Principle | "In object-oriented programming, the single responsibility principle states that every object should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility."
Every part of your application serves a specific purpose, which makes it easier to maintain over a long timespan.
Now this sounds pretty nice and cool, but do we really need to abstract everything away?
As with everything else, it depends.
Here is a little Use-Case:
The database needs to be modified: table keys are removed, new tables are added, old tables are removed.
What will you, as developer, need to do in order to keep your application working?
Using MySQLConnector:
Start "hacking" inside your source code and making sure all queries run as expected. And if something goes wrong during that process, it will be a pain to fix it because it's all kind of nested within the application logic.
Using a dedicated Web-Service:
Just make sure your service methods are updated to match the new database design; there is no need to change anything on the application side (except method arguments in some cases).
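To make the trade-off concrete, here is a minimal sketch of the web-service approach from the C# side. The endpoint URL, parameter names, and response format are all hypothetical; the point is that the WinForms app never touches MySQL directly:

    using System.Collections.Specialized;
    using System.Net;

    public class CustomerServiceClient
    {
        // hypothetical PHP endpoint exposing the CRUD operations
        private const string BaseUrl = "http://example.com/api/customers.php";

        public string GetAll()
        {
            using (var web = new WebClient())
                return web.DownloadString(BaseUrl + "?action=list"); // e.g. JSON from the PHP side
        }

        public void Add(string name, string email)
        {
            using (var web = new WebClient())
                web.UploadValues(BaseUrl, new NameValueCollection
                {
                    { "action", "insert" }, { "name", name }, { "email", email }
                });
        }
    }

If the schema changes, only the PHP side (and perhaps these method arguments) changes; the rest of the application is untouched.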
Cheers

Why use a PHP web service when the application is in C#? Just use the MySQL Connector/NET for C#.
Secondly, web service performance will be slow compared to the MySQL connector for C#, since every call adds an HTTP round trip and serialization on top of the database query itself.
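For comparison, a minimal sketch of the direct approach with MySQL Connector/NET (the MySql.Data package); the table and column names are hypothetical:

    using MySql.Data.MySqlClient;

    public static class CustomerDb
    {
        public static void Add(string connectionString, string name, string email)
        {
            using (var conn = new MySqlConnection(connectionString))
            using (var cmd = new MySqlCommand(
                "INSERT INTO customers (name, email) VALUES (@name, @email)", conn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                cmd.Parameters.AddWithValue("@email", email);
                conn.Open();
                cmd.ExecuteNonQuery(); // one round trip, no HTTP hop in between
            }
        }
    }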

Related

C# Using multiple applications to connect to a single database using Entity Framework

I am currently developing a Windows Forms application that I plan to run on a cloud setup. The application will calculate new data, update the database, and act as a sort of control panel for a live data feed RESTful API that I wish to create using ASP.NET MVC 5 Web API.
I am wondering: is it viable to connect these two separate applications to a single database? It is unlikely that I'd have database entry clash issues, as each application has a separate task of reading or writing data for certain tables.
If viable, would that mean every time I make table changes I'd have to update both Entity Framework database models? (Not a major chore.)
Is there a better solution to this? Should I scrap the idea of running a Windows Forms application to control certain elements of the backend of the public API?
What would be the future issues with designing something like this, if any?
So you have a bunch of options there, assuming you have a layered architecture:
Share your DB, DAL and also Business Layer
Extend your WEB API and utilize it in your WinForms
Reuse the DAL only (not the best approach, as business systems are not only data but also behavior, which resides in the Business Layer)
Share the DB only - this is the worst option, with numerous drawbacks
(A diagram in the original answer illustrated the first two options.)
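A minimal sketch of the first option (sharing the DAL and Business Layer), with hypothetical names; both the WinForms app and the Web API reference this one class library:

    using System;

    // Shared.dll - referenced by both front ends
    public interface IFeedRepository
    {
        void SaveCalculation(int feedId, decimal value);
    }

    public class FeedService
    {
        private readonly IFeedRepository _repo;
        public FeedService(IFeedRepository repo) { _repo = repo; }

        // the business rule lives here once, instead of being duplicated
        // in the WinForms app and the Web API
        public void RecordCalculation(int feedId, decimal value)
        {
            if (value < 0) throw new ArgumentOutOfRangeException("value");
            _repo.SaveCalculation(feedId, value);
        }
    }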
Create a data access layer as a separate component, like a DAL.dll.
Each application has a Logic layer, where "whatever you do" is handled.
Each logic layer then uses a sort of interface layer that translates objects from either application's layers to the objects of the DAL.
When you change the DB now, you merely have to update the interface layer.
(Of course, if you are adding more features, you will have to update all layers, but that isn't really any different.)
I suggest this approach, as it will make your debugging task much easier. And the slight extra code overhead won't affect performance unless you have a massive communication requirement.
If you want more specifics, I would need examples of the classes from either program, and your SQL table design.
But there is nothing wrong with your approach.
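As a rough illustration of that interface layer (all type names below are hypothetical), the translation might look like:

    // translates between an application's own objects and the objects in DAL.dll,
    // so a database change only forces an update here
    public static class CustomerTranslator
    {
        public static Dal.CustomerRecord ToDal(App.Customer c)
        {
            return new Dal.CustomerRecord
            {
                Id = c.Id,
                FullName = c.FirstName + " " + c.LastName
            };
        }

        public static App.Customer FromDal(Dal.CustomerRecord r)
        {
            var parts = r.FullName.Split(' ');
            return new App.Customer
            {
                Id = r.Id,
                FirstName = parts[0],
                LastName = parts.Length > 1 ? parts[1] : ""
            };
        }
    }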

What's the best way to reuse database access across multiple projects to ensure that when updated it does not break any projects?

I have probably written the same LINQ to SQL statement 4-5 times across multiple projects. I don't even want to have to paste it. We use DBML files combined with Repository classes. I would like to share the same Library across multiple projects, but I also want to easily update it and ensure it doesn't break any of the projects. What is a good way to do this? It is OK if I have to change my approach, I do not need to be married to LINQ to SQL and DBML.
We have both console apps and MVC web apps accessing the database with their own flavor of the DBML, and there have been times when a major DB update has broken them.
Also, currently each project accesses the DB by itself, and the DB is sometimes on another server. Would it be possible to eliminate the DB layer from each project altogether? It might help with the problem above, and be better for security and data integrity, if I could manage all the database access through a centralized application that my other applications could use directly, rather than calling the database themselves.
Any ideas?
The way I handle this is using WCF Data Services. I have my data models and services in one project and host this on IIS. My other projects (whatever they may be) simply add a service reference to the URI and then access the data they need over the wire. All my database stuff happens on the service; my individual projects don't touch the database at all - they don't even know a database exists.
It's working out pretty well but there are a few "gotchas" with WCF. You can even create "WebGet" methods to expose commonly used methods via the service.
Let me know if you want to see some example code :-)
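For a rough idea (not the poster's actual code), the service side of a WCF Data Service might look like this, assuming an Entity Framework model named MyEntities with an Orders entity set; all names are hypothetical:

    using System.Data.Services;
    using System.Linq;
    using System.ServiceModel.Web;

    public class MyDataService : DataService<MyEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);
            config.SetServiceOperationAccessRule("RecentOrders", ServiceOperationRights.AllRead);
        }

        // a "WebGet" method exposing a commonly used query via the service
        [WebGet]
        public IQueryable<Order> RecentOrders()
        {
            return CurrentDataSource.Orders
                .OrderByDescending(o => o.CreatedOn)
                .Take(50);
        }
    }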

Remote database good practice

We are creating a WinForms .NET 4 app with MS SQL Server, and we are deciding between two scenarios:
1) WinForms application directly connects to the MS SQL Server.
2) Use a 3-layer architecture and insert a web services layer in between.
Questions:
1) Is it good practice to open a SQL connection publicly to the "world"?
2) Which scenario would you recommend? The app is data oriented, quite simple, and we are not planning any other client - only the WinForms one.
Thanks in advance.
James
Definitely go with the option that has a web services layer. This allows you:
to continue using your domain model (POCO and serialization).
to avoid opening your SQL Server to the internet.
to apply advanced business logic in your web services.
to remove SQL logic from your client application; all the data access belongs on the app tier.
to apply security rules/constraints as you need - e.g. block a customer/user or IP address for various reasons.
When you say "quite simple and not planning any other client", I would take that with a grain of salt; apps always grow and morph as people realise what they can do and what else they can include. You need to rephrase that as "it is initially going to be a small, simple app".
WebServices may be overkill for you at this point in time, but if you follow a nice n-tier architecture they will be very simple to add at a later date, with minimal refactoring.
As for exposing SQL Server to the world - no, this is NOT good practice. You can secure it very well, and ensure the logins used by the app (or users, if they have their own logins) have minimal rights - just enough to run the stored procedures or execute the CRUD statements on the tables they need access to. But if you mess up the security while it is exposed to the world, then kiss your SQL Server and its data goodbye. This is a complex subject in itself, so you are better off posting individual questions when you have them.
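To illustrate the minimal-rights idea from the client side, here is a hedged sketch where the app's login only has EXECUTE permission on specific stored procedures; the procedure and parameter names are hypothetical:

    using System.Data;
    using System.Data.SqlClient;

    public static class OrderQueries
    {
        public static int GetOrderCount(string connectionString, int customerId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.usp_GetOrderCount", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure; // the login needs EXECUTE only
                cmd.Parameters.AddWithValue("@CustomerId", customerId);
                conn.Open();
                return (int)cmd.ExecuteScalar();
            }
        }
    }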

Are there existing data layers I can use for an application I'm building?

I'm writing a .NET application and the thought of implementing a data layer from scratch is icky to me. (By data layer I'm referring to the code that talks to the database, not the layer which abstracts the database access into domain objects [sometimes called the data access layer and used interchangeably with data layer].)
I'd like to find an existing generic data layer implementation which provides standard crud functionality, error handling, connection management - the works. I'll be talking to SQL Server only.
It doesn't matter to me if the library is in C# or VB.NET and I don't care if it's LINQ or ADO.NET. As long as it works.
I want to emphasize that I'm not looking for data access technologies or mechanisms (e.g. LINQ, ORM tools, etc.) but rather existing libraries.
If you are talking only to SQL Server, then Linq to SQL is your best option. It is pretty easy to get up and running. You will get both the data layer and the abstraction. All you have to do is provide a connection string to Linq to SQL and it will handle the rest.
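As a minimal sketch (the context and property names are hypothetical, generated from a DBML file):

    // assuming a DBML-generated context named ShopDataContext with a Customers table
    using (var db = new ShopDataContext(connectionString))
    {
        var customer = db.Customers.First(c => c.IsActive);
        customer.Name = "Renamed";
        db.SubmitChanges(); // Linq to SQL tracks the change and writes the UPDATE
    }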
If you are going to connect to a database other than SQL Server, you would want to go with NHibernate.
NHibernate takes a little more work than Linq to SQL to get up and running. Microsoft provides a nice tool in Visual Studio that can get you reading from a SQL database pretty quickly.
Honestly, as much of a fan as I've always been of NHibernate: with the latest release of the Enterprise Library 5 Data Access Block, which added dynamic mapping support natively, I would strongly consider not using NHibernate on a project. Instead, I would use a forward database generation tool to create my database from my domain objects (perhaps even use NHibernate solely for the schema export), or something like CodeSmith, and use EntLib.
You can use EasyObjects; it has a very small learning curve and is very extensible.
From their website:
EasyObjects.NET is a powerful data-access architecture for the .NET Framework. When used in combination with code generation, you can create, from scratch, a complete data layer for your application in minutes.
I'd like to find an existing generic data layer implementation which provides standard crud functionality, error handling, connection management - the works. I'll be talking to SQL Server only.
You might want to check out SubSonic. Though I personally find it quite limited, it's certainly not an ORM but a "query tool." It will make CRUD operations easy and straightforward, and it generates partial POCO classes for every table in your database, rather than trying to map from a database to a domain layer.
Microsoft's Entity Framework might be what you are looking for to relieve you from writing "the code that talks to the database".
The best things are that it already ships with Visual Studio and - depending on your requirements - you can use most functionality out of the box or manually adjust it to your custom business logic via T4 templates.
You can use it for forward and reverse engineering, and being a Microsoft technology it integrates well with other MS products like SQL Server.
I started using it 3 months ago in my current project at work, which is composed of several Windows services and WCF services that convert third-party data into our own database schema. Based on the experience we have had with it, we'll be using EF a lot more in future projects.
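A minimal sketch of what working with EF looks like, assuming a generated DbContext-style context named ShopEntities (entity and property names are hypothetical):

    using (var ctx = new ShopEntities())
    {
        ctx.Orders.Add(new Order { CustomerId = 42, Total = 99.95m });
        ctx.SaveChanges(); // EF generates and runs the INSERT for us

        var bigOrders = ctx.Orders.Where(o => o.Total > 50m).ToList();
    }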
What would you expect this framework to do with your exceptions? If it can't connect to your database, what should it do - crash the application, show an error message (WinForms, WPF or ASP.NET)... the questions are endless.
An ORM such as those suggested elsewhere in these answers is likely to be the closest you're going to get. Expecting a third party framework to provide all your exception handling isn't realistic - how would a third party know how your application is supposed to behave?
The direct answer to your question asking for "an existing generic data layer implementation which provides standard crud functionality, error handling, connection management - the works" is simple: use ADO.NET. The answers everyone else have provided actually go beyond that functionality, but your responses suggest that you think that there's something even further beyond - something that implements your data layer for you. My suggestion is that what you're looking for probably doesn't exist.
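For completeness, a minimal sketch of what that looks like in plain ADO.NET - connection management via using blocks, and error handling as application policy (the table name and the exception policy are assumptions):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class CustomerData
    {
        public static DataTable GetCustomers(string connectionString)
        {
            const string sql = "SELECT Id, Name FROM dbo.Customers"; // hypothetical table
            try
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                using (var adapter = new SqlDataAdapter(cmd))
                {
                    var table = new DataTable();
                    adapter.Fill(table); // the adapter opens and closes the connection itself
                    return table;
                }
            }
            catch (SqlException ex)
            {
                // what to do here is application policy - exactly why no generic
                // framework can handle your exceptions for you
                throw new ApplicationException("Customer query failed.", ex);
            }
        }
    }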

Is a database intermediary good system design?

Background: we've got a number of server processes and client apps that are used entirely internally, in a fairly controlled environment. We capture a significant amount of data every day that goes into a couple of database machines. Almost everything is C#, with a few C++ apps.
Just about every app has some basic (if not extensive) dependence on database data, whether it's for historical data, daily-calculated values, or assorted parameters. As the whole environment has gotten a bit more sprawling, I've been wondering about the sense of sticking an intermediary in between all client and server apps and the database - a sort of "database data broker". Any app that needs values from the db makes a request to the data broker, instead of calling a DLL wrapper function that calls a stored proc.
One immediate downside is that the data would make two trips across the network: from db to broker, and from broker to calling app. That seems like poor form, but the amount of data in each request would be small enough that I'm OK with it as far as performance goes.
One (seeming) upside is that it would be trivial to set up a test environment, as it would entail just setting up a test data broker, with no db connection strings to maintain locally anywhere else. Also, I've been pondering creating a mini request language so you wouldn't have to enumerate functions for each dataset you might request (instead of GetX() and GetY(), there would be Get("name = X")).
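Sketched as a contract, the broker idea might look like this (purely hypothetical):

    using System.Data;

    // one generic entry point with a mini request language,
    // instead of one method per dataset
    public interface IDataBroker
    {
        DataTable Get(string request);             // e.g. broker.Get("name = X")
        void Put(string request, DataTable data);
    }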
Am I over-engineering this, or is it possibly a worthy architecture?
Edit: thanks for all the great comments so far - great food for thought.
It depends on what you're trying to accomplish with it. According to Rocky Lhotka, you should only add a tier if you are forced to, kicking and screaming all the way.
I agree with him: don't tier unless you need to. I think there are valid reasons to add additional tiers, usually for purposes of security, scalability and maintainability. The question becomes: is yours a valid reason?
It looks like the major reason is maintainability. Does it outweigh the benefits you get by not having the tier?
only you can answer these:
what are the benefits of doing this?
what are the problems/risks of doing this?
do you need this to make testing easier or even possible?
if you make this change and when it goes live and crashes will you be fired?
if you make the changes and it goes live will you get a promotion?
etc...
As the former architect of a system that also used a database heavily as a "hub," I can say that there are several drawbacks that you should be aware of. Our system used databases:
As a transaction store (typical OLTP stuff)
As a staging queue (submitted but unprocessed transactions)
As a historical data store (results of processed transactions)
As an interoperation layer (untranslated commands or transactions issued from other systems)
One of the major drawbacks is ownership cost. When your databases become the single point of failure for so many types of operations, it becomes necessary to ensure that they are all hosted in high-availability environments. This is not only expensive from a hardware perspective, but also expensive when supporting deployments to HA environments, since developers typically have very limited visibility into the internals.
A second drawback is that you have to seriously design integrity into all of your tables. In a typical SOA environment, you have complete control over how data is modified. When you expose it through database tables, you must consider that any application with the right credentials will have the ability to modify data. Because of this, you must carefully consider utilitarian implementations of constraints. If you had a single service managing persistence, you could be much looser with constraints on the database and enforce them in code.
Third, if you ever want to expose any functionality that the database tables currently allow you to provide to outside parties, you must write service code anyway, so you might be better served doing it strategically as opposed to reacting to requests.
Fourth, UI interaction directly with the data layer creates security risks, especially if the client is a thick client.
Finally, writing code that responds to events (service calls) is much easier than polling code. Typically, organizations that rely heavily on database polling end up reinventing the wheel every time a new project requires a new "monitoring service." It can be avoided by creating a "framework," but those have their own pitfalls (primarily around prescription versus adoption).
This is just a laundry list of problems I have encountered. It's not necessarily meant to dissuade you from using databases for these functions, but it helps to know the dangers ahead of time so you can at least plan for them if they ever do become issues.
EDIT
Just thought of another scenario that caused us pains. Versioning your changes can be difficult. For example, if you need to change the shape of a table (normalize/denormalize), it has a cascading effect if multiple applications rely on it. In a SOA scenario, it is much easier, because you can keep your old API, change the internal interaction so that it works with the changed tables, and allow consumers to migrate to the new version on their own schedule.
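A hedged sketch of that versioning idea: the v1 contract stays stable while the tables underneath are normalized (all names and helper methods below are hypothetical):

    // the old flat v1 shape is re-assembled from the new normalized tables,
    // so existing consumers keep working unchanged
    public CustomerV1 GetCustomerV1(int id)
    {
        var customer = LoadCustomerRow(id);   // new normalized "customer" table
        var address = LoadPrimaryAddress(id); // address split into its own table
        return new CustomerV1
        {
            Id = customer.Id,
            Name = customer.Name,
            Street = address.Street,
            City = address.City
        };
    }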
A data broker sounds like a really good way to abstract out the multiple data sources for your apps. It would be easy to consolidate, change repositories, or otherwise move data around if needed in the future.
I may be misunderstanding something, but it seems to me like you should consider some entity framework. That is a framework you can use to "map" your interaction with the db to some domain objects. That way you work locally on domain objects that get filled from your db, and when it is time to persist the state of your objects to the database, the framework handles all the connections back and forth. In this way you can also easily mock up these domain objects for unit testing without needing a db connection.
Check out NHibernate for a good entity framework alternative.
If you already have the database-related know-how, I think it's not a bad decision.
Good things that I can think of:
if the data model is consistent you can plug in new tools easily without making any changes in the other apps.
maybe you can keep the database running more reliably than your apps, so if one of them fails, the others can still keep working.
you can make backups and rollbacks using the database tools.
you can do emergency fixes by manipulating the data directly with SQL or some visual tool.
But if you have to learn new frameworks along the way, maybe the benefits are not worth the extra initial effort.
"any app that needs values from the db makes a request to the data broker"
When database technology was being invented over 40 years ago, the people doing that inventing had ideas along the lines of "any app that needs values from the db makes a request to the dbms".
Have you ever pondered the possibility that YOU ALREADY HAVE a "data broker", and that there might be very little added value in creating a second one of your own?
