Linq To SQL, WebServices, Websites - Planning it all - C#

Several "parts" of my project (a WinForms app, for example) use a DAL that I coded based on L2SQL.
I'd like to throw several WebApps into the mix, but the issue is that the DAL "offers" much more data than the WebApps need. Way more.
Would it be OK if I wrapped the data that the websites need within a web service, so that instead of the website connecting directly to the DAL, it would go through the web service, which in turn would access the DAL?
I feel like that would add a lot of overhead, but on the other hand, I definitely don't like the feeling of knowing that the WebApps have the "capability" of accessing much more data than they actually need.
Any input would be greatly appreciated.
Thank you very much for the help.

You can either create web services, or add a repository layer that presents only the data that your applications require. A repository has the additional benefit of being a decoupling layer, making it easier to unit test your application (by providing a mock repository).
If you plan on eventually creating different frontends (say, a web UI and a WPF or Silverlight UI), then web services make a lot of sense, since they provide a common data foundation to build on, and can be accessed across tiers.
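As a minimal sketch of the repository idea (the interface and class names here are invented, and MyDataContext-style generated contexts are replaced by the base DataContext for self-containment): the app codes against the interface, production wires in the L2SQL-backed implementation, and unit tests wire in the fake.

    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    [Table(Name = "dbo.Customers")]
    public class Customer
    {
        [Column(IsPrimaryKey = true)] public int Id { get; set; }
        [Column] public bool IsActive { get; set; }
    }

    // Hypothetical contract exposing only what the web apps need.
    public interface ICustomerRepository
    {
        Customer GetById(int id);
        IList<Customer> GetActive();
    }

    // Production implementation wrapping a LINQ to SQL DataContext.
    public class L2SqlCustomerRepository : ICustomerRepository
    {
        private readonly DataContext _db;
        public L2SqlCustomerRepository(DataContext db) { _db = db; }

        public Customer GetById(int id)
        {
            return _db.GetTable<Customer>().Single(c => c.Id == id);
        }

        public IList<Customer> GetActive()
        {
            return _db.GetTable<Customer>().Where(c => c.IsActive).ToList();
        }
    }

    // In unit tests, a fake stands in for the database entirely.
    public class FakeCustomerRepository : ICustomerRepository
    {
        public List<Customer> Data = new List<Customer>();
        public Customer GetById(int id) { return Data.Single(c => c.Id == id); }
        public IList<Customer> GetActive() { return Data.Where(c => c.IsActive).ToList(); }
    }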

If your data access layer were pulling all data as IQueryable, then you would be able to query your DAL and drill down your db calls with more precision.
See the very brief blog entry I wrote on Repository and Service layers using Linq to SQL. My article is built around MVC but the concept of Repository and Service layers would work just fine with WebForms, WinForms, Web Services, etc.
Again, the key here is to have your Repository or your DAL return an object AsQueryable, whereby you wait until the last possible moment to actually commit to requesting data.
Your structure would look something like this:
Domain Layer
    Repository Layer (IQueryable)
        Service layer for Web App
            Website
        Service layer for Desktop App
            Desktop App
        Service layer for Web Services
            Web Service
Inside your Service layer is where you customize the specific calls based on the application you're developing for. This allows for greater security and configuration on a per-app basis while maintaining a complete repository that doesn't need to be modified until you swap out your ORM (if you ever decide you need to).
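A hedged sketch of that flow (IRepository<T> and the product types are illustrative): the service layer composes an IQueryable query, and nothing hits the database until the final ToArray(), so LINQ to SQL can translate the whole chain into a single SQL statement.

    using System.Linq;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool IsFeatured { get; set; }
    }

    public class ProductSummary
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Repository layer: returns IQueryable, so no SQL runs yet.
    public interface IRepository<T>
    {
        IQueryable<T> All();
    }

    // Service layer for the web app: narrows the query to what the site needs.
    public class WebProductService
    {
        private readonly IRepository<Product> _products;

        public WebProductService(IRepository<Product> products)
        {
            _products = products;
        }

        public ProductSummary[] GetFeatured(int count)
        {
            // The query is composed here but only executed at ToArray().
            return _products.All()
                            .Where(p => p.IsFeatured)
                            .OrderBy(p => p.Name)
                            .Take(count)
                            .Select(p => new ProductSummary { Id = p.Id, Name = p.Name })
                            .ToArray();
        }
    }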

There is nothing inherently wrong with having more than you need in this case. The entire .NET 4 Client Profile contains over 50MB of assemblies, classes, etc. I might use 5% of it in my entire career. That doesn't mean I don't appreciate having all of it available in case I need it.
If you plan to provide the DAL to developers that should not have access to portions of the data, write a wrapper or derive a new DAL. I would avoid the services route unless you're confident you can accommodate for the overhead.

Sounds like you are on the right track. If many applications are going to use this data, you gain a few advantages by having services with DTOs (a small sketch follows this list):
If the domain model changes, just the mapping to the DTO needs to change. You can isolate the consuming applications from these changes.
Less data over the wire.
You can isolate your applications from the implementation of the DAL.
You can expose different services (maybe different DTOs) for different applications if it is necessary to restrict what parts of the object model should be exposed.
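A minimal sketch of such a DTO and its mapping (the types and field choices are invented for illustration):

    using System.Runtime.Serialization;

    public class Employee
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        // ...plus everything else the domain model carries
    }

    // Wire-friendly DTO: only the fields the consuming app actually needs.
    [DataContract]
    public class EmployeeDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string DisplayName { get; set; }
    }

    public static class EmployeeMapper
    {
        // If the domain model changes, only this mapping changes;
        // consumers of EmployeeDto are isolated from it.
        public static EmployeeDto ToDto(Employee e)
        {
            return new EmployeeDto
            {
                Id = e.Id,
                DisplayName = e.FirstName + " " + e.LastName
            };
        }
    }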

Related

database access from multiple applications

I have a Windows Forms application (C#) and an ASP.NET web application, both of which access a SQL Server database. I want to centralize the database access. Which methodologies should I follow? What is the common approach to this issue?
Writing DAL and model libraries and using them in both applications?
Writing a WCF service (including the DAL and model) and using this service from both applications?
None of the above?
Can you give me any ideas?
Thank you.
I would go with the WCF approach. Keep in mind that when (not if, when) you have to make changes that pertain to one app, but not the other (yet), you will have to account for that in the common layer, so using interfaces may make your life a little easier.
The cleanest way is to wrap the DB with a WCF service.
If you don't write large amounts of data in one go you can use a WCF Data Service; this directly wraps an Entity Framework model and you can configure access to tables and methods in various ways.
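As a rough illustration (assuming a generated Entity Framework context named MyEntities; the service and entity-set names are made up), a WCF Data Service can whitelist exactly what is exposed:

    using System.Data.Services;
    using System.Data.Services.Common;

    // Exposes the EF model over the wire; anything not listed stays hidden.
    public class EmployeeDataService : DataService<MyEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Read-only access to just these two entity sets.
            config.SetEntitySetAccessRule("Employees", EntitySetRights.AllRead);
            config.SetEntitySetAccessRule("Offices", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion =
                DataServiceProtocolVersion.V2;
        }
    }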
What you want is to have one place where the DB is accessed, so that if there is an issue, you can fix it in one location, for instance.
Furthermore, if you want to log all calls to a particular table, for instance, the only way to make sure that will be done is by centralizing all calls to the DB this way and not allow anybody direct access to the DB.
Wrap the service, then keep the connection string secret.
I think using the SOA approach is really better (WCF or web services with a DAL layer), because this way you don't need to ship your DAL DLL with the Windows Forms exe, and all changes to your data model will automatically reach both UI clients.
Remember that this can cause its own problems:
Security concerns, so that your services cannot be accessed directly by URL, allowing someone to run your methods.
Maintenance concerns, because changes in the data layer that need to affect only one interface will be more difficult to control and need to be planned in advance (with the creation of new methods specific to a certain interface).
Decreased performance, because HTTP access is always more costly than direct communication with a DLL.
Risk of losing communication with the server, something ASP.NET naturally expects but which requires additional care in the Windows Forms client to behave properly in these cases.
Option 1 seems simpler and I would do the same.
Option 2 with WCF will add additional code to your product and hence maintenance. Also this would mean an additional layer as well.
Corporate programmers like the second option (WCF service including DAL).

Designing an API: Use the Data Layer objects or copy/duplicate?

Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com) so that customers can work with their data the same way we do: everything they can do through our provided web interface, they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
Problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning / backwards compatibility, other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services interface for the API. That way leads to potential performance issues of your main website. Never mind that as soon as you deploy a breaking API change you have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do this.
Instead, your website AND web services should both communicate directly to the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than this. In other words, it should try to simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on project complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project that you expect to exist, work well, and be updated for the next 5 years...
In my current project (which is big), I first started with shared entities across all layers; then I discovered that I needed separate entities for presentation, and now (6 months on) I'm using separate classes for each layer (persistence, service, domain, presentation). That's not because I'm paranoid or was following some rule; I simply couldn't make everything work with a single set of classes across layers. Draw your own conclusions.
P.S. There are tools that can help you convert your objects, like Automapper and Value Injecter.
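For example, with AutoMapper's classic static API (older versions; newer releases configure a MapperConfiguration instance instead) and invented types, the proxy code shrinks to one registration plus a Map call:

    using AutoMapper;

    public class Customer { public int Id { get; set; } public string Name { get; set; } }
    public class CustomerDto { public int Id { get; set; } public string Name { get; set; } }

    public static class ApiMappings
    {
        public static void Register()
        {
            // Matching property names are mapped automatically,
            // replacing hand-written property-copying proxy code.
            Mapper.CreateMap<Customer, CustomerDto>();
        }
    }

    // Usage in the web service layer:
    //   CustomerDto dto = Mapper.Map<Customer, CustomerDto>(entity);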
I would just buck up and create an API specifically aimed at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.

Shared Object Library with Persistence

I have a quick question that I am hoping is fairly simple to answer. I am attempting to develop a shared Employee object library for my company. The idea is to create a centralized database that contains information about our employees (reporting hierarchy, office locations, general info, etc.) and then create a shared object library for this database.
My question is what is the best way to create this library so it can be shared among applications.
Do I create a self-contained library that stores the database connection (I can see concurrency issues here, and it doesn't feel right)?
Client -> Server, and then deploy the "client library" for use by any application?
Or would a Web/WCF service be more ideally suited to this situation?
There are many options because the question can be interpreted broadly. I suggest taking all the answers to heart. Having said that, here's my spin on it...
I used to view software layers as vertical because of n-tier training, and have a hard time breaking away from those notions to something conceptually broader and less restrictive. I strive to view .NET assemblies as just pieces of a puzzle.
You're right to separate the connection string from code; that's easily supported by a .NET .config file or application settings.
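For instance (the connection string name "EmployeeDb" is just an example):

    using System.Configuration;

    public static class Db
    {
        // Reads a connection string kept in App.config / Web.config, e.g.:
        // <connectionStrings>
        //   <add name="EmployeeDb"
        //        connectionString="Data Source=.;Initial Catalog=Employees;Integrated Security=True"
        //        providerName="System.Data.SqlClient" />
        // </connectionStrings>
        public static string ConnectionString
        {
            get
            {
                return ConfigurationManager
                    .ConnectionStrings["EmployeeDb"].ConnectionString;
            }
        }
    }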
I often prefer a small core library holding the business logic, concepts, and flows, although each of those can be broken out. Within that concept you can still break business and data access out into different assemblies, so you can swap in a new kind of data access while sticking with the core module (a kind of "business kernel" or "engine", if you will).
You can express your "business kernel" through many presentation types, for example
textual/Console I-O
GUI: WinForms, WPF, Silverlight, ASP.NET, LED/pixelboard, etc
as cmdlets for Powershell interactions
web service expressions
kinds of mobile apps
etc.
You can accelerate development by using patterns to bend software to your will, along with related implementations like the Microsoft Enterprise Library, and by loosening the coupling with dependency injection (e.g. Ninject, one of many) or other inversion of control techniques.
I usually prefer to have a middle tier layer (so some sort of Web/WCF service between the client and the database). This way you separate the clients from the database, so that you can control the number of connections, or you can change the schema of the database in a way that will be transparent for the clients.
Depending on your situation, you can either make the clients connect to the WCF service (preferred in most cases), or create a dll that will wrap the connection to the service and perform some additional processing on the client side.
It depends how deeply you need to integrate your library into the main application. If you want to extend the application domain with custom entities, you have the following options:
Build persistence into the library. You will need to pass a connection string to the repository class, and the database must include the hardcoded schema for your library. If you use LINQ to SQL as the data access library, you can mark up your entities with mapping attributes (see http://msdn.microsoft.com/en-us/library/system.data.linq.mapping.aspx and the sketch after this list).
Provide the domain library only, but implement persistence outside it, if your data layer supports POCO mapping (EF 4 does).
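Here is a minimal sketch of the attribute mapping from option 1 (table and column names are illustrative):

    using System.Data.Linq.Mapping;

    // LINQ to SQL attribute mapping, per the MSDN link above.
    [Table(Name = "dbo.Employees")]
    public class Employee
    {
        [Column(IsPrimaryKey = true, IsDbGenerated = true)]
        public int Id { get; set; }

        [Column(Name = "FullName")]
        public string Name { get; set; }

        [Column(CanBeNull = true)]
        public int? ManagerId { get; set; }
    }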
Usually, putting the domain model into a separate assembly causes a few problems:
Integration into the application. The application itself usually provides a few services, like data access, security, logging, web services, etc. If your application has an ideal design and the layers are fully decoupled from each other, there is no problem adding new entities; but usually the data access layer requires inheritance from a base class, the logger is a singleton, security checks are hardcoded into business logic methods, and so on. Such applications must be refactored: services must be extracted into interfaces, and those interfaces must be passed to the components in the separate assembly.
Entity references. If you use a rich domain model, you will probably want to reference entities declared in another assembly. This can be partially solved with generics, but you need a special design of your data access layer that lets you get lists of generic entities, get an entity by id, and so on.
Database integration. It may be hard to maintain database changes if some entities are developed separately from the others, especially by another team.
Just be sure to keep your connection method separate from your data access layer, and then you can change the connection method later if requirements change. If you have a simple DLL that holds your real logic, then adding a communication layer on top should be simple. This will also allow you to use all three methods you mentioned and have all your actual logic in a single DLL used amongst all three.

SOA with WCF responsibilities and dependencies

I am moving onto a new team that has implemented a solution using SOA with WCF. The services are all very vertical, for example: a CustomerService, an AddressService, an AccountService, etc. To return fully populated objects, the services may call another service over a WCF endpoint.
There are a few very high level vertical areas, but underneath they can reuse a lot of the core service logic.
How valid is the following new architecture:
The web services are thin layers that handle remote calls; they are strictly for communication. The real functionality would be implemented in something let's call "business or domain services".
Domain Service responsibilities:
Reference data access / repository interfaces for working with the infrastructure
Call multiple repository methods to create fully populated objects
Process data against the complex business rules
Call other domain services (not having to call WCF)
This would give us domain services that can be tested outside of specific WCF and SQL Server implementations.
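As a hedged sketch of that shape (all type names here are invented): the domain service depends only on repository interfaces, so tests can hand it in-memory fakes and never touch WCF or SQL Server.

    public class Address { }
    public class Customer { public int Id; public Address Address; }

    // Repository contracts for the infrastructure (hypothetical names).
    public interface ICustomerRepository { Customer Get(int id); }
    public interface IAddressRepository { Address GetFor(int customerId); }

    // Domain service: composes repositories and applies business rules.
    // No WCF or SQL Server types anywhere, so it is unit-testable in isolation.
    public class CustomerDomainService
    {
        private readonly ICustomerRepository _customers;
        private readonly IAddressRepository _addresses;

        public CustomerDomainService(ICustomerRepository customers,
                                     IAddressRepository addresses)
        {
            _customers = customers;
            _addresses = addresses;
        }

        public Customer GetFullyPopulated(int id)
        {
            Customer customer = _customers.Get(id);
            customer.Address = _addresses.GetFor(id); // no WCF hop needed
            return customer;
        }
    }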
The web services reusing the different business services seems to be the biggest gain and yet the biggest potential pitfall.
On one hand the logic can be reused for multiple services, eliminating web service calling web service calling web service.
On the other hand, if someone changes one of the assemblies multiple services need to be updated, potentially breaking multiple applications.
Have people tried this and had success? Are there better approaches?
At first blush, it sounds like the design you've walked into might be an SOA antipattern identified in this article: a group of 'chatty services,' a term the authors use to describe a situation in which ...
developers realize a service by implementing a number of Web services where each communicates a tiny piece of data. Another flavor of the same antipattern is when the implementation of a service ends up in a chatty dialog communicating tiny pieces of information rather than composing the data in a comprehensive document-like form.
The authors continue:
Degradation in performance and costly development are the major consequences of this antipattern. Additionally, consumers have to expend extra effort to aggregate these too finely grained services to realize any benefit, as well as have the knowledge of how to use these services together.
That can be a valid approach. The pitfall of updating multiple services depends on how closely related the services are. Do you have a use case where, if CustomerService is updated and AddressService is not, the clients can still work? Or is it more common that all services are used by the same client and hence should be updated together? Remember that the service only changes if the WSDL changes, not the implementation. If you manage not to change the DataContracts and OperationContracts of the front-end services, there is nothing to worry about.
One approach you may investigate is using in-proc WCF services for your domain services. Alternately, the front-end web services can use domain managers/engines in separately layered assemblies, which in turn use repositories. You can have coarse-grained web service class implementations and fine-grained managers for domain entities that are mockable and unit-testable.
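To make the contract-stability point concrete, here is a minimal WCF sketch (all names invented for illustration); the implementation class can change freely as long as the contract types do not:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        CustomerDto GetCustomer(int id);
    }

    // The implementation can delegate to the shared domain assemblies and
    // change freely; as long as ICustomerService and CustomerDto are stable,
    // the WSDL (and therefore every connected client) is unaffected.
    public class CustomerService : ICustomerService
    {
        public CustomerDto GetCustomer(int id)
        {
            // ...call the shared domain service here, then map to the DTO...
            return new CustomerDto { Id = id, Name = "example" };
        }
    }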

web app data layer dis/advantages

I am wondering about the long-term advantages (if any) of layering my web app by separating my business logic and data from my web forms (i.e. form, business logic, and data not in the same file, but each in its own class, in another folder by itself or combined with other similar classes). I like to make everything as modular as possible, and to do so efficiently it seems that keeping all the code in one file - in the web form - makes organization and reuse much easier. There are certain functions that are used across the site, like managing connections, that would be in their own classes and files. I am pretty new to C#; sorry if I am messing up the terminology.
Thanks
The separation of code into layers brings benefits beyond just the C# language.
If your data access code is kept in a separate layer, it will be easy to adjust it to work with a different database. Database-specific code will be encapsulated in this layer while clients will work with database-agnostic interfaces. Therefore changes here will not affect the business layer implementation.
If your business logic is kept in one place, you will be able to offer its services to other applications, for example, to serve requests made via web services.
If your code is clean and well structured, the maintenance efforts will be kept lower. Whenever you need to change something, you'll know where to find the responsible code, what to change and how to assure the change will not affect the rest of the code.
As for ASP.NET, not following the separation of concerns has caused many projects to turn into a giant code blurb: presentation code performs business decisions, code-behind talks directly to the database whenever no suitable business method exists, the database gets written to from many places, and dataflow follows multiple paths that are difficult to trace, where changes made to one path but not the others break integrity and cause data corruption. The result? An almost unmaintainable black box where any change requires more and more effort, until work stalls and the project is declared "finished". Technical bankruptcy.
What I would always go for (first) is a layered application. We usually layer ours as follows (each layer is a separate project in the solution, and consequently a separate DLL):
Presentation Layer (just UI and databinding logic)
Interface layer to the Business Layer (defining the contracts for accessing the BL)
Business Layer implementation (the actual logic, data validation, etc.)
Interface layer to the Data Access Layer (defining the contracts for accessing the DAL)
Data Access Layer implementation
You can then use some factory for retrieving the corresponding objects; a sketch follows below. I would take a look at a dependency injection library like Spring.Net or Microsoft Unity from the MS patterns & practices team.
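A hedged sketch of the interfaces-plus-container idea using Unity (Spring.Net would look similar; all type names are invented):

    using Microsoft.Practices.Unity;

    public class Order { public int Id { get; set; } }

    // Contract to the Data Access Layer.
    public interface IOrderRepository { Order Get(int id); }

    // Contract to the Business Layer.
    public interface IOrderService { Order GetValidatedOrder(int id); }

    public class SqlOrderRepository : IOrderRepository
    {
        public Order Get(int id)
        {
            // ...talk to the database here...
            return new Order { Id = id };
        }
    }

    public class OrderService : IOrderService
    {
        private readonly IOrderRepository _repository;
        public OrderService(IOrderRepository repository) { _repository = repository; }

        public Order GetValidatedOrder(int id)
        {
            Order order = _repository.Get(id);
            // ...business validation would go here...
            return order;
        }
    }

    // Composition root: the container plays the role of the factory,
    // so swapping the DAL means changing a single registration.
    public static class Bootstrapper
    {
        public static IOrderService Create()
        {
            var container = new UnityContainer();
            container.RegisterType<IOrderRepository, SqlOrderRepository>();
            container.RegisterType<IOrderService, OrderService>();
            return container.Resolve<IOrderService>();
        }
    }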
The advantages are the following:
separation of logic into the layer where it belongs
no business logic in the UI (developers have to pay attention to this)
all of your applications look the same and consequently developers knowing this architecture will immediately know where to search for the corresponding logic
exchangeable DAL. The interfaces define the contracts for accessing the corresponding layer.
Unit testing becomes easier, just focusing on the BL logic and DAL
Your application could have many entry points (web interface, Winforms client, webservice). All of them can reference the same business logic (and DAL).
...
I just could not live without it.
