Core company data is held and managed in physically separate, third-party, line-of-business applications: Finance, Transport Management. Customers are created in the Finance app (SQL Server), delivery information is held in the Transport Management app (Oracle). Communication between the two is point-to-point.
We need to build a new application (well, upgrade the old one, but essentially from scratch) to process customer claims for damaged or short deliveries. Claims, customer and delivery data is currently entered manually into MS Access. This will be migrated to a SQL Server DB. The app development platform is VS2008 (C#).
I would like to avoid holding all of the customer and delivery data in the claims database, since we already hold it elsewhere, so I plan to produce WCF-based feeds from the LOB systems (and possibly the claims DB) which can then be used as the data sources for the customer claims app. There will be claim-specific data entry, but the core customer and delivery data would not need to be updated in the LOB apps.
So far I have in mind:
database-->ORM-->WCF \
database-->ORM-->WCF --->BLL-->UI
database-->ORM-->WCF /
but it feels wrong as I will be creating separate service feeds for Customers, Deliveries and Claims (object-oriented services?). What I also can't quite grasp is how and where I join and work across data sources within the app to produce, say, a report showing claims against deliveries per customer (i.e. where I would traditionally write a query or view to get all of this from multiple tables in one DB).
Am I on the right track, or am I missing the big picture here - should I just run regular extracts into a claims DB and work with a traditional n-tier / n-layer architecture?
I don't think your design is too far off from where it should be.
If you have apps that will access finance data via the Finance WCF service or transport data via the Transport service, those services make sense to build. They also make sense because each one supports only what it needs to know about (which ties in with the Single Responsibility Principle).
Where it might not feel right is where your UI app needs to know about and call 3 separate services to get its job done. In situations like that we've often built a wrapper service that makes the calls to the appropriate services. Your UI app would reference a single WCF service, and that service would then call the Finance service or the Transport service or the Claims service. The downside is that each call results in multiple calls... yes. But it abstracts the logic away from your UI app and gives you a place to manipulate or combine data from the other services, or to add other business logic appropriate to this app. You also keep the benefit of the Finance service supporting the finance apps without your UI app's business needs getting in the way or muddying up its code.
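For example, a minimal sketch of such a wrapper might look like this. The proxy classes (FinanceClient, TransportClient, ClaimsClient) and the DTO shapes are invented for illustration; your generated WCF proxies and contracts would differ:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

// Illustrative only: the clients below stand in for the WCF proxies generated
// from the Finance, Transport and Claims services; CustomerClaimSummary is invented.
[ServiceContract]
public interface IClaimsPortalService
{
    [OperationContract]
    List<CustomerClaimSummary> GetClaimsAgainstDeliveries(int customerId);
}

public class ClaimsPortalService : IClaimsPortalService
{
    public List<CustomerClaimSummary> GetClaimsAgainstDeliveries(int customerId)
    {
        var customer   = new FinanceClient().GetCustomer(customerId);      // Finance service (SQL Server)
        var deliveries = new TransportClient().GetDeliveries(customerId);  // Transport service (Oracle)
        var claims     = new ClaimsClient().GetClaims(customerId);         // Claims service

        // The "join" you would normally write as a SQL view happens here, in memory.
        return (from d in deliveries
                join c in claims on d.DeliveryId equals c.DeliveryId
                select new CustomerClaimSummary
                {
                    CustomerName   = customer.Name,
                    DeliveryNumber = d.DeliveryNumber,
                    ClaimValue     = c.Value
                }).ToList();
    }
}
```

The UI app then only ever references IClaimsPortalService, and the cross-source joining lives in one place.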
I'm sure there are different solution paths for this. This is just how we've handled it in a couple of applications.
EDIT (answering your follow up question took too much space to make a comment).
If the data you can get from the Transport service is enough to satisfy the question asked by "getCustomerDeliveries", then no, I wouldn't break it out into another wrapper service. If you need more data, then what other apps would also benefit from that service providing more customer information? Do those apps rely solely on the Transport service? This is one of those where the answer has to "feel" right to you, since you know the most about your systems.
Perhaps you need to break the SRP rule and have your Transport service get more customer information from the Finance db or service. Or, if apps that rely on the Transport service routinely need more customer data, then thought could be given to expanding the customer table in the Transport db.
No rule, principle or philosophy should be applied so rigidly that you can't break it if it makes more sense for your app. It's going to be a balance and there is no right or wrong answer, just what works better for this situation.
You started this post by talking about a new UI app that would support the Claims part of your business and it needed both Finance and Transport data (as well as its own). That is a perfect candidate to call a wrapper service. It needs data from 3 distinct and separate data sources. Your Transport service has limited customer information, which works well for some apps but perhaps not so well for others. If you write a wrapper service that mirrors your Transport service 100% and additionally provides a bit more customer data, what have you gained? More data for the apps that consume it, but also more maintenance for you whenever you add functionality to the Transport service. What other value could this wrapper provide?
In this case, to me, having the Transport service get more customer data from the Finance service "feels" better. Your Transport db has some data but not enough. It's almost like the Transport service needs to make up for this shortcoming by fulfilling the data need itself.
I should use an orchestration service with WF (Windows Workflow Foundation) or another orchestration tool.
I like this view:
DAL, BLL, SIL --> WCF1
DAL, BLL, SIL --> WCF2
WCF1 and WCF2 are joined by an orchestration service that sits over them. This way the services remain autonomous and decoupled, and you can reuse them in other orchestrations.
When generating reports, it's usually tolerable to deliver data that is not completely up to date, so it may be a good idea to dedicate a separate DB as the source for reporting queries. Your master DBs will receive updates from the UI (taking advantage of transactions and conflict detection) and then replicate the data to the reporting DB.
This architectural pattern is called CQS (Command Query Separation); read the great article on it by Udi Dahan.
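As a rough illustration of the read side (the connection string, server name and view are invented): the UI sends its updates through the services to the master DBs, while reports query the replicated reporting DB directly.

```csharp
using System.Data;
using System.Data.SqlClient;

// Read side only: reports tolerate slightly stale data, so they query the
// replicated reporting DB rather than the master Finance/Transport/Claims databases.
public class ClaimsReportQueries
{
    private const string ReportingDb =
        "Data Source=REPORTSRV;Initial Catalog=Reporting;Integrated Security=SSPI";

    public DataTable ClaimsPerCustomer()
    {
        using (var conn = new SqlConnection(ReportingDb))
        using (var cmd = new SqlCommand(
            "SELECT CustomerName, DeliveryNumber, ClaimValue FROM vw_ClaimsAgainstDeliveries", conn))
        using (var adapter = new SqlDataAdapter(cmd))
        {
            var table = new DataTable();
            adapter.Fill(table);   // adapter opens and closes the connection itself
            return table;
        }
    }
}
```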
Related
Our company has 7 factories located in different geographical areas, interconnected by leased lines, and all in one Windows Server domain. I have to develop a procurement system which will be used by users at each location, and I can have one centralized database. I am thinking of designing this using a 3-tier architecture (I haven't designed a 3-tier system before). The design currently in my mind is: 1. install the business layer on a server in each location; 2. install the data access layer at the head office, where the database is located; 3. install the presentation layer on each user's computer (this can be a Swing application or a web browser).
Is this approach worthwhile? My other questions are: 1. What is the advantage of installing the business layer on a server in each location, or is installing it on the head office server enough? 2. What technologies can be used to pass messages between tiers in different locations, e.g. calling a method "savePurchaseOrder(purchaseOrder)" in the data access layer at the head office from an object in the business layer at a remote location?
Aren't you conflating "tier" with "geographical location"? These really are distinct concepts. It's very typical to build 3-tier or n-tier applications with all tiers other than the browser located geographically together in the same data center. "Tiers" are about creating clean abstractions: you make a logical boundary between types of functionality, for example your "data persistence tier", "business logic tier", and "user interface tier". Geography's not really involved with that.
Re your questions:
Business logic server in locations or in head office? Likely depends on the bandwidth of your private WAN links. You may achieve very fast response times for communication between browser clients and your business tier server by locating the business tier server geographically close to the users. But if your WAN connection to your central database is slow, that won't matter too much. Conversely, if you've got great WAN links between office locations and the head office, distributing business logic servers to the local offices won't make that much difference, and they might be easier to maintain if they were central.
Method of communication between distributed servers. You could use RMI if that interests you, or simpler REST calls over TCP/IP. REST with TCP/IP would most likely be easier for you to implement.
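For example, a minimal sketch of the REST option, where the business layer at a factory posts a purchase order to the data access layer at head office. This is written in C# purely for illustration (a Java business layer would do the same thing with HttpURLConnection), and the URL and payload shape are invented:

```csharp
using System.IO;
using System.Net;
using System.Text;

// Hypothetical REST call from a factory's business-layer server to the
// data access layer at head office.
public static class PurchaseOrderGateway
{
    public static void SavePurchaseOrder(string purchaseOrderXml)
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://headoffice.example.com/dal/purchaseOrders");
        request.Method = "POST";
        request.ContentType = "application/xml";

        byte[] body = Encoding.UTF8.GetBytes(purchaseOrderXml);
        request.ContentLength = body.Length;
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }

        using (WebResponse response = request.GetResponse())
        {
            // GetResponse throws a WebException on HTTP errors, so reaching
            // this point means the head office accepted the order.
        }
    }
}
```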
Hope this helps.
--Mark
If you have enough bandwidth from the factories, you can deploy your business and data logic at your HQ and serve the application to the remote locations as a web app.
You may also want to consider an alternative approach where you deploy app servers to your factories to host the business/data logic, and web servers to serve the presentation. You will then need to sync the data in the databases (i.e. replication) back to your HQ. This has higher maintenance, but it isolates failures to one factory should the system fail and also minimizes the risk when upgrading the system.
A more advanced approach would be to explore the cloud. Here's an article I wrote about deploying on-premise systems to the cloud.
http://serena-yeoh.blogspot.com/2014/01/layered-applications-and-windows-azure.html
Why don't you create one web application and host it in one of the offices? The other offices can simply use it through a web browser...
If security is a concern, I am sure your network team will be able to help you with Intranet setup or VPN connectivity, so that the website cannot be accessed by outside individuals.
I have a task to create a desktop version of our web app that will be distributed to our customers. I decided to go with WPF. The web app is a three-tier app with DAL, BLL and PL. I could reuse a lot of it in my WPF app, but the question is: is it a good idea to allow remote connections directly to SQL Server (I could compile the connection string directly into the app)? Or should I go the WCF way and access the DB through a web service (this will require wrapping business methods with service methods - additional coding...)?
Any input is highly appreciated.
I would HIGHLY suggest using a WCF service. That way you are in control of what is running queries against your database, and it helps with keeping code updated. A great example of why remote connections are bad:
Remote Access World
You use a stored procedure GetAnObject
In the process of upgrading your software, you change GetAnObject to return something different
Now all old versions are broken =(
WCF World
You use a stored procedure GetAnObject
In the process of upgrading, you change what it returns
In your WCF you just have to write some code to convert your object into the old one and send to legacy client. New clients are using a different WCF method call.
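A rough sketch of that idea (the contract, type names and mapping are invented for illustration):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // Legacy clients keep calling this; new clients move to GetAnObjectV2.
    [OperationContract]
    LegacyThing GetAnObject(int id);

    [OperationContract]
    NewThing GetAnObjectV2(int id);
}

public class OrderService : IOrderService
{
    public NewThing GetAnObjectV2(int id)
    {
        return Database.GetAnObject(id);   // stored procedure now returns the new shape
    }

    public LegacyThing GetAnObject(int id)
    {
        // Adapt the new shape back to what legacy clients expect,
        // so old installs keep working after the upgrade.
        NewThing current = GetAnObjectV2(id);
        return new LegacyThing
        {
            Id   = current.Id,
            Name = current.DisplayName
        };
    }
}
```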
Also, I personally am not a fan of opening up my SQL server to any joe blow on a pc who can guess the login.
If you publish a version with an error in it, you at least have a chance that the error is on the WCF side, and therefore you don't have to hassle clients with a "my bad" upgrade. They never have to know there was a problem.
In a perfect world your WCF service and Web app can share a codebase, so you have less to maintain.
My suggestion is to go via a service (WCF or any other). The extra layer of indirection is decoupling which helps a great deal in maintenance and scalability. For example, if you already had a service which your web application used, it would have been much easier for you to just focus on creating WPF UI.
It all depends on what you want. Are you planning to use the DB layer and business layer across applications? If your answer is yes maybe WCF is the way to go.
We have a bunch of apps where we connect to the DB from WPF app directly because we wanted to avoid the extra layer of indirection which WCF adds.
I am on a project where I will be creating a web service that will act as a "facade" to several stand-alone systems (via APIs) and databases. The web service will be the sole method that a separate web application will use to communicate with these external resources.
I know for a fact that the communication methodology of one of the APIs that the web service must communicate with will change at some undetermined point in the future.
I expect the web service itself to abstract the details of the change in communication methodology between the Web application and the external API. My main concern is how to design the internals of the web service. What are some prescribed ways of using OO design to create an appropriate level of abstraction such that the change in communication method can be handled cleanly? Is there a recommended design pattern?
As you described, it sounds like you are already using the facade pattern here. The web service is in fact the facade to the other services. If an API between the web service and one of the external resources changes, the key is to not let this affect the API of the web service itself. Users of the web services should not need to know the internals of how the web service communicates with the external resources.
If the web service has methods doX and doY for example, none of the callers of doX and doY should care what is going on under the hood. So as long as you maintain the API between the clients of the web service and the web service, you should be set.
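In code terms it might look something like this (all names are invented); the point is that doX's parameters and return type belong to your service, not to the external API, and the changeable communication method hides behind an interface:

```csharp
// The web service owns this contract; callers of DoX never see the external API's types.
public class IntegrationFacade
{
    private readonly IExternalSystemGateway gateway;   // abstraction over the external API

    public IntegrationFacade(IExternalSystemGateway gateway)
    {
        this.gateway = gateway;
    }

    public XResult DoX(XRequest request)
    {
        // However the gateway talks to the external system today (SOAP, REST, flat files),
        // only the gateway implementation changes when that communication method changes.
        ExternalRecord raw = gateway.FetchRecord(request.Key);
        return new XResult { Key = request.Key, Value = raw.Payload };
    }
}

public interface IExternalSystemGateway
{
    ExternalRecord FetchRecord(string key);
}
```

When the API's communication methodology changes, you write a new IExternalSystemGateway implementation and the facade (and its callers) stay untouched.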
I've frequently faced a similar problem, where I would have a new facade (typically a Java class), and then some new "middleware" that would eventually communicate to services located somewhere else.
I would have to support multiple mediums of communication, including in-process, and via the net (often with encryption).
My usual solution is to define a notion of a data packet, with subtypes containing specific forms of data (e.g., specific responses, specific requests), etc. The important thing is that all the packets must be Serializable in some form (Java has a notion for this; I'm not sure about C++).
I then have an agent and a provider. The agent takes program-domain requests and creates packets. It moves them to a stub-skeleton pair that is responsible only for communicating. The remote stub takes the packet and gives it to a provider. The provider translates it back to a domain object, which it then provides to the actual services. It takes the response and sends it back to the agent via the skeleton-stub, etc.
The advantage of this approach is that I create several layers of abstraction. The agent/provider are focused on the domain level and its translation into packets and back. The skeleton-stub pair is responsible for marshalling and sending packets back and forth. By swapping my skeleton-stub pair with subtypes, I can have the same program communicate in different ways (e.g., embedded in the same JVM, via something like JMS, directly via sockets, etc.).
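A compressed sketch of that shape (translated to C# for consistency with the rest of this page and heavily simplified; all names are invented, and a real remote stub would serialize the packet over sockets or a queue rather than call in-process):

```csharp
using System;

[Serializable]
public abstract class Packet { }                       // base for all requests/responses

[Serializable]
public class GetCustomerRequest : Packet { public int CustomerId; }

[Serializable]
public class GetCustomerResponse : Packet { public string Name; }

// The stub only knows how to move packets; subtypes decide the transport.
public interface IStub
{
    Packet Send(Packet request);
}

public class InProcessStub : IStub                     // same-process "transport"
{
    private readonly Func<Packet, Packet> skeleton;
    public InProcessStub(Func<Packet, Packet> skeleton) { this.skeleton = skeleton; }
    public Packet Send(Packet request) { return skeleton(request); }
}

// The agent translates domain calls into packets and back; it never sees the transport.
public class CustomerAgent
{
    private readonly IStub stub;
    public CustomerAgent(IStub stub) { this.stub = stub; }

    public string GetCustomerName(int id)
    {
        var response = (GetCustomerResponse)stub.Send(new GetCustomerRequest { CustomerId = id });
        return response.Name;
    }
}
```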
This shouldn't affect the service you create at all (from the user's perspective). Services are about contracts - your service provides a contract to its users: they send you a specific request and you send back a specific response. You also have a contract with this other API. If they change how they want to communicate, you can handle that internally; as long as your contract with your users does not change, they won't notice a thing.
One way to accomplish this is to not simply pass through the exact object that you get from the "real" API. You can create your own object that you send back in response. You then translate their object into your object. That way if the "real" API changes things on their end you can choose how to send that back on your end.
As the middle man you should be set up so that your end users need to know nothing about the originating API.
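For example (the types and fields are invented), the object you hand back is always your own, built from whatever the "real" API returned:

```csharp
// The service's own response type; it never exposes the vendor's class directly.
public class DeliveryStatusResponse
{
    public string Reference;
    public string Status;
}

public static class DeliveryStatusMapper
{
    // If the vendor renames or restructures VendorDeliveryRecord, only this method
    // changes; every consumer of DeliveryStatusResponse is untouched.
    public static DeliveryStatusResponse FromVendor(VendorDeliveryRecord vendor)
    {
        return new DeliveryStatusResponse
        {
            Reference = vendor.ConsignmentNo,
            Status    = vendor.StateCode == 2 ? "Delivered" : "InTransit"
        };
    }
}
```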
We are developing a web application that uses external web services as its main data source. The web services have been created and are maintained by one of our close partners. Even though they are supposed to work all the time, they are not 100% reliable. From time to time, they stop being reachable or they start throwing exceptions.
What would be a good way of monitoring external web services and getting informed when something wrong happens?
Limitations:
The web services are hosted externally on our partner's servers
We don't have the source code of these web services
We have no control over the general infrastructure
I thought of creating a simple .NET application that calls the web services regularly and reports when there is a problem (by email, in a log file or in a db). But maybe you have better ideas?
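Roughly, the poller I have in mind would be little more than this (the endpoint, timeout and logging are placeholders):

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading;

// Minimal watchdog: pings each partner endpoint on an interval and records failures.
class ServiceMonitor
{
    static readonly string[] Endpoints = { "https://partner.example.com/OrderService.asmx" };

    static void Main()
    {
        while (true)
        {
            foreach (string url in Endpoints)
            {
                try
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    request.Timeout = 15000;
                    using (request.GetResponse()) { }   // throws on timeouts and HTTP errors
                }
                catch (WebException ex)
                {
                    File.AppendAllText("monitor.log",
                        string.Format("{0:u} {1} DOWN: {2}{3}",
                            DateTime.UtcNow, url, ex.Message, Environment.NewLine));
                    // send an email / raise an alert here as well
                }
            }
            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }
}
```

It would probably also need to call a real operation rather than just fetch the endpoint, so it exercises the service itself and not only the web server in front of it.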
"I thought of creating a simple .NET application that calls the web services regularly and reports when there is a problem (by email, in a log file or in a db). But maybe you have better ideas?"
As well as reporting to your company that the services are down, you might also want to inform the vendor, e.g. by emailing their tech support or placing an automated call to their hotline or something.
If these services are business critical, perhaps you can agree an SLA with the vendor as part of your contract.
I don't know of anything else you can do, except maybe implement local caching of the data if this makes sense in your scenario. This would insulate you, at least a little, from transient failures in the web services.
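If caching does fit, a simple read-through cache that falls back to the last good value is often enough. A sketch (the service interface and data are invented):

```csharp
using System;
using System.Collections.Generic;

// Serve fresh data when the partner service answers; fall back to the last good copy when it doesn't.
public class CachedRateProvider
{
    private readonly Dictionary<string, decimal> lastGood = new Dictionary<string, decimal>();
    private readonly IPartnerRateService partner;       // wrapper around the external web service

    public CachedRateProvider(IPartnerRateService partner) { this.partner = partner; }

    public decimal GetRate(string code)
    {
        try
        {
            decimal fresh = partner.GetRate(code);
            lastGood[code] = fresh;
            return fresh;
        }
        catch (Exception)                                 // service unreachable or faulted
        {
            decimal stale;
            if (lastGood.TryGetValue(code, out stale))
                return stale;                             // stale, but better than nothing
            throw;                                        // nothing cached yet
        }
    }
}
```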
G'day,
There are a few aspects you have to consider here.
Are the external web services living behind a load-balancing layer? In that case, you're pretty much limited as to the usefulness of what you can report back to the other company.
Do you have SLAs in place with the company to help ensure the provision of their web services? If you do, then you'll need to support any claims with recorded data which changes the extent of the monitoring needed.
What about asking external companies like Gomez to monitor the company's web service application for you? They have an excellent range of services. BTW I don't work for Gomez, just use their services.
Does your company have SLAs with any customers for the provision of your application? Once again, if you do, you're then going to need to mitigate the cost of any such penalties by definitely having SLAs with the other company.
Edit: I forgot to say that any probes you do should be of at least two types.
availability of the external platform, and
availability of your particular service
HTH
'Avahappy,
I'm working with an n-tier application using WinForms and WCF:
Engine Service (Windows Service) => WCF Service => Windows Form Client Application
The problem is that the WinForms client application needs to be 100% available for work even if the Engine Service is down.
So how can I create a disconnected architecture in order to make my WinForms application always available?
Thanks.
Typically you implement a queue that's internal to your application.
The queue forwards requests to the web service. In the event the web service is down, the request stays queued. The queue mechanism should check every so often to see if the web service is alive, and when it is, forward everything it has stored up.
Alternatively, you can go direct to the web service, then simply post it to the queue in the event of initial failure. However, the queue will still need to check on the web service every so often.
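A stripped-down sketch of that queue (the EngineServiceClient proxy and EngineRequest type are placeholders for whatever your WCF service actually exposes):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Holds requests locally and flushes them whenever the service is reachable again.
public class OfflineRequestQueue
{
    private readonly Queue<EngineRequest> pending = new Queue<EngineRequest>();
    private readonly Timer retryTimer;

    public OfflineRequestQueue()
    {
        // Try to flush every 30 seconds.
        retryTimer = new Timer(delegate { Flush(); }, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    public void Submit(EngineRequest request)
    {
        lock (pending) { pending.Enqueue(request); }
        Flush();                                   // optimistic: try straight away
    }

    private void Flush()
    {
        lock (pending)
        {
            while (pending.Count > 0)
            {
                try
                {
                    new EngineServiceClient().Process(pending.Peek());   // WCF proxy (placeholder)
                    pending.Dequeue();             // only remove once the call succeeded
                }
                catch (Exception)
                {
                    return;                        // service still down; keep items queued
                }
            }
        }
    }
}
```

In practice you would also persist the queued items to disk (or use something like MSMQ) so they survive an application restart.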
EDIT:
Just to clarify, yes all of the business logic would need to be available client side. Otherwise you would need to provide a "verify" mechanism when the client connects back up.
However, this isn't a bad thing, as you should be placing the business logic in its own assembly (or assemblies) anyway.
Have a look at Smart Client Factory: http://msdn.microsoft.com/en-us/library/aa480482.aspx
Just to highlight the goals (this is snipped from the above link):
They have a rich user interface that takes advantage of the power of the Microsoft Windows desktop.
They connect to multiple back-end systems to exchange data with them.
They present information coming from multiple and diverse sources through an integrated user interface, so the data looks like it came from one back-end system.
They take advantage of local storage and processing resources to enable operation during periods of no network connectivity or intermittent network connectivity.
They are easily deployed and configured.
Edit
I'm going to answer this with the usual CYA statement of "it really depends". Let me give you some examples. Take an application which will watch the filesystem for files to be generated in any number of different formats (DB2, flat file, XML). The application will then import the files, displaying to the user a unified view of the document, and allow him to place e-commerce orders.
In this app, you could choose to detect the files, zip them up and upload them to the server, and do the transforms there (applying business logic like normalization of data, etc.). But then what happens if the internet connection is down? Now the user has to wait for his connection before he can place his e-commerce order.
A better solution would be to run the business rules in the client, transforming the files there. Now let's say you had some business logic which would, based on the order, determine additional rules such as which salesman to route it to or pricing discounts... These might make sense to sit on the server.
The question you will need to ask is: what functionality do I need to make my application function when the server is not there? Anything which falls within this category will need to be client side.
I've also never used ClickOnce deployment (we had to roll our own updater, which is a tale for another thread), but you should be able to send down updates pretty easily. You could also code your business logic in an assembly that you load from a URL, so while it runs client side it can be updated easily.
You can do all your processing offline, and use something like the Microsoft Sync Framework to sync the data between the client and the server.
Assuming both server and client are .NET, you can use the same code base to do the data validation both on the server and the client. This way you will have a single code base that serves both server and client.
You can use frameworks like CSLA.NET to simplify this validation process.
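As a trivial illustration of sharing the code base (the rule itself is made up), the same class can compile into an assembly referenced by both the service and the WinForms client:

```csharp
// Lives in a shared assembly (e.g. MyApp.BusinessRules.dll) referenced by both
// the server-side service and the client application.
public static class OrderRules
{
    public static string ValidateQuantity(int quantity)
    {
        if (quantity <= 0)
            return "Quantity must be greater than zero.";
        if (quantity > 1000)
            return "Orders above 1000 units need manual approval.";
        return null;   // null means the value is valid
    }
}
```

The client uses it for immediate feedback while disconnected, and the server runs the same rule again when the synced data arrives.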