We have a service-based system. The system is tiered or layered so that a single service call from an outside entity might hit one, two or three other services, depending on the type of call and system state.
What we want to be able to do is to track the progress of a given call across these different services.
Ideally, as the external call comes in, a tracking number is generated and this follows all subsequent calls throughout our system.
Are there any specific design patterns or WCF features (implementations of the pattern) that we can use to track this progress?
This page gives an example of using session IDs, but it's not clear what the right thing to do is once there are several services involved.
This page may also have some relevance.
We are specifically interested in C# / WCF implementation, but references to any resources that are relevant are interesting (Java / PHP / whatever).
I would use aspect-oriented programming to handle this. I believe the right piece is a Service Behavior.
This will let you create an attribute that you stick on your service methods (or on the service itself), with before-action, after-action, and on-error hooks.
For the entry point service, I would have the action create your own session ID, stuff it into the WCF context, and then use the before, after, and error methods to post data to your data store or however you plan to record progress.
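Something along these lines (a rough sketch of the idea, not production code; the attribute, header name, and namespace are made up for illustration):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;

    // Attribute that installs a message inspector on every endpoint of the service.
    [AttributeUsage(AttributeTargets.Class)]
    public class CallTrackingAttribute : Attribute, IServiceBehavior
    {
        public void ApplyDispatchBehavior(ServiceDescription description, ServiceHostBase host)
        {
            foreach (ChannelDispatcher cd in host.ChannelDispatchers)
                foreach (EndpointDispatcher ed in cd.Endpoints)
                    ed.DispatchRuntime.MessageInspectors.Add(new TrackingInspector());
        }

        public void AddBindingParameters(ServiceDescription description, ServiceHostBase host,
            System.Collections.ObjectModel.Collection<ServiceEndpoint> endpoints,
            BindingParameterCollection parameters) { }

        public void Validate(ServiceDescription description, ServiceHostBase host) { }
    }

    public class TrackingInspector : IDispatchMessageInspector
    {
        const string Header = "TrackingId";       // hypothetical header name
        const string Ns = "urn:example:tracking"; // hypothetical namespace

        // "Before" hook: reuse the tracking id from the incoming header,
        // or mint a new one at the entry-point service.
        public object AfterReceiveRequest(ref Message request,
            IClientChannel channel, InstanceContext instanceContext)
        {
            int i = request.Headers.FindHeader(Header, Ns);
            string id = i >= 0 ? request.Headers.GetHeader<string>(i)
                               : Guid.NewGuid().ToString();
            // TODO: write a "call started" record with this id to your data store.
            return id; // handed back to BeforeSendReply as correlationState
        }

        // "After" hook: record completion against the same id.
        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            string id = (string)correlationState;
            // TODO: write a "call completed" record for this id.
        }
    }

To make the id follow the subsequent calls between your services, the client side of each service would add the same header to outgoing messages via an IClientMessageInspector; the pattern is symmetric.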
Chris's response is a good one, but we found this information from Microsoft very, very useful, and it's built into IIS / WCF.
Enabling the log indicated in the screen capture below really helped.
Related
I am pretty new to coding .NET web services. I find myself writing a .NET WCF application with about 20 web services. I would like to log the following events:
Request (With Payload - who and what)
Response (Did the query work, was data returned)
Errors (Was there some sort of error)
I wrote a simple function that does a SQL insert at each point. Every web service request gets at least two inserts - one at the request and another at the response. Each of the 10+ methods needs at least 4 of these logging calls. Too much maintenance in my book.
I think this approach is too hard and cumbersome - I will need to do a great deal of work to maintain it. I have used LOG4J (which I didn't configure myself) with Axis2 in the past; it was able to log all of the above in the web server. The exceptions needed to be thrown, but the request/response logging was handled automagically. I don't know .NET well enough to feel like I have a good handle on my options.
At this point I am considering Log4net or the Enterprise Library Semantic Logging Application Block. Do I have better choices? Any suggestions as to which course might be easiest for a relative newbie?
Thanks,
Matt
The solution to your problem needs to be divided in two:
How to easily capture the request/response for each operation contract (that is, without adding special code at the beginning/end of each method)?
How to effectively load the data into the data base?
For 1: You can create a message inspector and register it as a service behavior. This means that you will have a single point that intercepts all the requests and responses, allowing you to handle them in one place. This article explains how to write and deploy such an inspector. It is very easy.
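As a rough sketch of what such an inspector looks like (not the article's exact code; note that a Message can only be read once, so it is buffered before logging):

    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;
    using log4net; // or whichever logger you end up choosing

    public class LoggingInspector : IDispatchMessageInspector
    {
        static readonly ILog Log = LogManager.GetLogger(typeof(LoggingInspector));

        public object AfterReceiveRequest(ref Message request,
            IClientChannel channel, InstanceContext instanceContext)
        {
            // Work on a buffered copy and hand an unread copy back to the pipeline.
            MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
            request = buffer.CreateMessage();
            Log.Info("Request: " + buffer.CreateMessage().ToString()); // who and what
            return null;
        }

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
            reply = buffer.CreateMessage();
            Log.Info("Response: " + buffer.CreateMessage().ToString()); // did it work
        }
    }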
For 2: log4net and EntLib are definitely the primary candidates.
Some considerations for choosing:
In terms of logging capabilities, for most scenarios, they are both equally powerful.
log4net's API is friendlier, in my opinion.
EntLib is a large (but well factored) library, but it carries many more dependencies than log4net. If you don't plan to use modules other than logging, it is probably overkill.
Regardless of your choice, you need to take performance into account. You did not mention how many requests you expect, but at high volumes, logging each one can impede performance.
There are various strategies to deal with that but that's a story for after you profile your application.
In the event that you are just looking for transaction audit logging (perhaps to support the development, debugging and early implementation phases), then you may want to consider using the WCF Tracing and Message Logging capabilities. The WCF tracing functionality provides a relatively simple, built-in method to monitor communication to/from WCF services. For test and debugging environments, configure informational or verbose activity tracing and enable message logging. The combination of activity tracing and message logging should prove beneficial when initially deploying and testing new services or adding new operations and/or communication bindings to existing services. If you are concerned about maintenance, then you can set up the trace log to use a fixed size and then recycle the file space.
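As a rough illustration, enabling activity tracing plus message logging in a service's config file typically looks something like this (the listener path, levels and limits are placeholders to adjust):

    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel"
                switchValue="Information, ActivityTracing"
                propagateActivity="true">
          <listeners><add name="xml" /></listeners>
        </source>
        <source name="System.ServiceModel.MessageLogging">
          <listeners><add name="xml" /></listeners>
        </source>
      </sources>
      <sharedListeners>
        <add name="xml" type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\logs\Traces.svclog" />
      </sharedListeners>
    </system.diagnostics>

    <system.serviceModel>
      <diagnostics>
        <messageLogging logEntireMessage="true"
                        logMessagesAtServiceLevel="true"
                        logMalformedMessages="true"
                        maxMessagesToLog="3000" />
      </diagnostics>
    </system.serviceModel>

The resulting .svclog files open in the Service Trace Viewer tool (SvcTraceViewer.exe) that ships with the Windows SDK.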
The following links provide a good overview:
http://msdn.microsoft.com/en-us/library/ms733025.aspx
http://msdn.microsoft.com/en-us/library/aa702726.aspx
Basically, I have a new desktop application my team and I are working on that will run on Windows 7 desktops on our manufacturing floor. This program will be used fairly heavily as it gets introduced and will need to interact with our manufacturing database. I would estimate there will (eventually) be around 100 - 200 machines running this application at the same time.
We're lucky here, we get to do everything from scratch, so we define the database, any web services, the program design, and any interaction between the aforementioned.
As it is right now, our legacy applications just have direct access to a database, which is icky. We want to not do that with the new application.
So my question is, how do I do this? Vague, I know, but basically I have a lot at my disposal here, and I'm not entirely sure what the right direction to go is.
My initial thought, based on what I've seen others do, is to basically wall off the database by using web services, i.e. all database interactions from the floor MUST occur through the web services, providing a layer of security by doing much of the database logic behind closed doors. Web service calls are then secured to individual users via Active Directory.
As I've found though, that has some implications of its own... We have to abstract the data before it reaches the application. There's still potential for malicious abuse by calling the web services repeatedly to ruin or spam data. We've looked at Entity Framework and really like what it provides, but as best I can tell, its entities won't be available at the application level in this setup.
It just seems like I can't come to a conclusion on what is "right". So, what is right?
Web services sound like the right approach. Implementing a service-oriented layer on top of the web services gives you a lot of control over what happens to the data at the database server.
I don't quite share your doubts about repeated calls doing any damage. First, you can keep an audit log of every single call, which makes detecting possible misuse straightforward. You could also implement role-based security so that web service methods are exposed only to users in certain roles, which means that not everyone will be able to call just any method.
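For example (a minimal sketch; the role names are made up, and this assumes Windows/AD authentication so that WCF maps callers to their Windows groups):

    using System.Security.Permissions;
    using System.ServiceModel;

    [ServiceContract]
    public interface IFloorService
    {
        [OperationContract] string GetMachineStatus(int machineId);
        [OperationContract] void ScrapWorkOrder(int orderId);
    }

    public class FloorService : IFloorService
    {
        // Read-only query: any member of the (hypothetical) operators group.
        [PrincipalPermission(SecurityAction.Demand, Role = @"DOMAIN\FloorOperators")]
        public string GetMachineStatus(int machineId)
        {
            return "Running"; // placeholder
        }

        // Destructive operation: supervisors only; anyone else gets a SecurityException.
        [PrincipalPermission(SecurityAction.Demand, Role = @"DOMAIN\FloorSupervisors")]
        public void ScrapWorkOrder(int orderId)
        {
            // ...
        }
    }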
You could even secure your web services with forms authentication so that authentication is done against any data source, not only Active Directory.
And one last thing: the application itself could be published as a ClickOnce application so that it is downloaded and executed from a web page, and it automatically updates itself whenever you publish new versions.
If you need some technical guidance, I've blogged on that years ago:
http://netpl.blogspot.com/2008/02/clickonce-webservice-and-shared-forms.html
My suggestion, since you are greenfield, is to use an API wrapper approach with ServiceStack.
Check out: http://www.servicestack.net/ServiceStack.Northwind/
Doing that, you can use ServiceStack authentication, abstract away your DB layer (because you could move to a different DB provider, change its location, provide queues for work items, etc.), and in time perhaps move your whole infrastructure to an internal intranet app.
Plus ServiceStack is incredibly fast, interoperable with almost any protocol you throw at it, and runs on Mono, so you are not stuck with an MS backend that could be very expensive.
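For flavor, a minimal ServiceStack-style service might look something like this (the DTOs and route are illustrative, and exact namespaces vary between ServiceStack versions):

    using ServiceStack; // v4-style namespace; older versions split this across assemblies

    [Route("/workorders/{Id}")]
    public class GetWorkOrder : IReturn<GetWorkOrderResponse>
    {
        public int Id { get; set; }
    }

    public class GetWorkOrderResponse
    {
        public int Id { get; set; }
        public string Status { get; set; }
    }

    public class WorkOrderService : Service
    {
        public object Get(GetWorkOrder request)
        {
            // Whatever DB access sits behind this is fully hidden from the client,
            // which only ever sees the request/response DTOs above.
            return new GetWorkOrderResponse { Id = request.Id, Status = "Open" };
        }
    }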
My two cents. :)
First of all, this question is not appropriate for Stack Overflow; you might get close votes really quickly.
Second, you may want to have a look at WCF RIA Services for this.
These will allow you to create basic CRUD operations for all your entities, and stuff like that.
I never used this myself, so I'm not sure what the potential issues might be.
Otherwise, Just do what we did:
Create generic (<T>) interfaces, services, and contracts. This will allow you to adapt your CRUD functionality in your Services, DAOs, ViewModels and such to any entity type.
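A sketch of what that can look like (a repository-style generic contract; the entity type is illustrative):

    using System.Collections.Generic;

    // One generic contract reused for every entity type.
    public interface IRepository<T, TKey> where T : class
    {
        T GetById(TKey id);
        IEnumerable<T> GetAll();
        void Add(T entity);
        void Update(T entity);
        void Delete(TKey id);
    }

    public class WorkOrder { public int Id { get; set; } }

    // Closing the generic gives each entity its CRUD surface for free.
    public interface IWorkOrderRepository : IRepository<WorkOrder, int> { }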
I have a specific case and I want to know the best practice way to handle it.
I am building a specific .NET framework (a web application). This web application acts as a platform or framework for many other web applications through the following methodology:
We create our dependent web applications (classes for the project business logic, RDLC reports) in separate solutions, then build them.
After that we add references to the resulting DLLs in the framework.
And we create a set of user controls (one for each dependent web application) and put them in a folder in the framework itself.
It works fine, but after any modification to a specific user control, or to any one of the dependent web applications, we have to add the references again and publish the whole framework!
What I want to do is make those different web applications and the framework loosely coupled, so that I publish the framework once and only once, and for any modification to the user controls or the dependent web applications I publish just the updated part rather than the whole framework.
How to refactor my code so I can do this?
The most important thing is:
Never publish the whole framework when the change is in a dependent application; publish just the updated part that belongs to that application.
If loose coupling is what you are after, develop your "framework(web application)" to function as a WCF web service. Your client applications will pass requests to your web services and receive standard responses in the form of predefined objects.
If you take this route, I recommend that you implement an additional step: Do not use the objects passed to your client applications directly in your client code. Instead, create versions of these web service objects local to each client application and upon receiving your web service response objects, map them to their local counterparts. I tend to implement this with a facade project in my client solution. The facade handles all calls to my various web services, and does the mapping between client and service objects automatically with each call. It is very convenient.
The reason for this is that the day that you decide to modify the objects that your web service serves, you only have to change the mapping algorithms in your client applications... the internal code of each client solution remains unchanged. Do not underestimate how much work this can save you!
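A bare-bones sketch of that facade idea (all the types here are illustrative):

    // Wire types as served by the (hypothetical) web service.
    public interface ICustomerService { ServiceCustomer GetCustomer(int id); }
    public class ServiceCustomer { public int Id { get; set; } public string Name { get; set; } }

    // The client application's own local model.
    public class LocalCustomer { public int Id { get; set; } public string DisplayName { get; set; } }

    // The facade: the only place that knows both shapes.
    public class CustomerFacade
    {
        readonly ICustomerService _service; // e.g. a generated WCF client proxy

        public CustomerFacade(ICustomerService service) { _service = service; }

        public LocalCustomer GetCustomer(int id)
        {
            ServiceCustomer dto = _service.GetCustomer(id);
            // If the wire contract changes, only this mapping changes.
            return new LocalCustomer { Id = dto.Id, DisplayName = dto.Name };
        }
    }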
Developing WCF web services is quite a large subject. If you are interested, a book that I recommend is Programming WCF Services. It offers a pretty good introduction to WCF development for those who come from a .NET background.
I totally agree with levib, but I also have some tips:
As an alternative to WCF (with its crazy configuration needs), I would recommend ServiceStack. Like WCF it lets you receive requests and return responses in the form of predefined objects, but with NO code generation and minimal configuration. It supports all kinds of response formats, such as JSON, XML, JSV and CSV. This makes it much easier to consume from, for example, JavaScript and even mobile apps. It even has binaries for MonoTouch and Mono for Android! It is also highly testable and blazing fast!
A great tool for the mapping part of your code is AutoMapper; it lets you set up all your mappings in a single place and map from one object type to another by calling a simple method.
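For instance, with AutoMapper's classic static API (newer versions use a MapperConfiguration instance instead; ServiceCustomer/LocalCustomer are the illustrative types from the facade sketch above):

    using AutoMapper;

    public static class MappingConfig
    {
        public static void Configure()
        {
            // Declared once at startup; same-named properties map automatically,
            // anything else gets an explicit rule.
            Mapper.CreateMap<ServiceCustomer, LocalCustomer>()
                  .ForMember(d => d.DisplayName, opt => opt.MapFrom(s => s.Name));
        }
    }

    // Then, anywhere in the facade:
    // LocalCustomer local = Mapper.Map<LocalCustomer>(serviceCustomer);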
Check them out! :)
Decades of experience says: avoid the framework and you won't have a problem to solve.
Frameworks evolve like cancer. The road to hell is paved with good intentions, and a good portion of those good intentions are embodied in a colossal tumour of a framework all in the name of potential re-use that never really happens.
Get some experience and knowledge when it comes to OO and design, and you'll find endless solutions to your technical problem, such as facades, and mementos, and what have you, but they are not solutions to your real problem.
Another thing: if you are using MS technology, don't bother with anything beyond what .NET offers. Stick with what the MS gods offer, because as soon as you digress and become committed to some in-house framework, your days are numbered.
We are currently building an application that makes use of a non-simple approval process, which involves multiple levels of approval, returning, reviewing, notifications etc..
Because of this requirement, we were asked to make use of a workflow framework, partly to facilitate process transparency.
On the prototype we have successfully incorporated the workflow and it works fine. However, we cannot determine the actions that should be available to the user. For example, I have the following receive operations: create(), managerApprove(), RAApprove(), ORMApprove(). Now if I call them in order, using the correct user name, they will work. Obviously, if I don't call them in order, it will throw a FaultException because the workflow is not in the correct state. The question is, how will I know which functions are available to expose in the UI - say, if it's currently waiting for manager approval, then just show an approval button for the manager...
As a workaround, I've created another WCF service that retrieves the same data from the database and then determines the correct UI state (which actions can be performed by the user). I think this is a duplication of logic, since that logic is supposed to live in the WF already.
Also, if the WF changes then my separate WCF service may potentially break. For example, if I switch the approval order in the workflow then I need to update the logic in the WCF service as well. Otherwise, it would show an invalid page state, and clicking approve would invoke the wrong method and cause a FaultException.
Any help will be much appreciated... I'm really new to WF4.
UPDATE:
My colleague put my question this way:
What's the best design for a web app that adopts WF?
The main reasons why WF is being considered:
- The workflows involved are long running
- Workflows are human workflows - they need to coordinate actions of real people
- Process Transparency
Also, how should the workflow integrate with the UI? How will the UI know what state it should be in and which pages to show to which users?
The workflow itself doesn't expose this information directly, but it is there: each pending Receive is a named bookmark, and the bookmark name contains the SOAP action it supports as well as the service contract and namespace. The easiest way of getting at this info is by adding the SqlWorkflowInstanceStore to the WorkflowServiceHost and checking the column with the pending bookmarks. It isn't perfect, as this gives you the information as it was last persisted, which is not necessarily the current state, but it has worked for me in a number of applications. Just make sure to set the TimeToPersist to a pretty low value and add some Persist activities in strategic places.
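A hedged sketch of that setup (the view and column names are the standard SqlWorkflowInstanceStore schema as I recall it - verify them against your own database):

    using System;
    using System.Data.SqlClient;
    using System.ServiceModel.Activities;
    using System.ServiceModel.Activities.Description;

    static class WorkflowTracking
    {
        // Attach the instance store and persist soon after the workflow goes idle.
        public static void ConfigureHost(WorkflowServiceHost host, string connectionString)
        {
            var store = new SqlWorkflowInstanceStoreBehavior(connectionString)
            {
                TimeToPersist = TimeSpan.FromSeconds(5)
            };
            host.Description.Behaviors.Add(store);
        }

        // Read the pending bookmark names (i.e. the Receives the instance is
        // waiting on) for one instance, as of its last persistence point.
        public static string GetPendingBookmarks(string connectionString, Guid instanceId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT ActiveBookmarks " +
                "FROM [System.Activities.DurableInstancing].[Instances] " +
                "WHERE InstanceId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", instanceId);
                conn.Open();
                return cmd.ExecuteScalar() as string;
            }
        }
    }

The UI can then show, say, the manager's approve button only when the managerApprove bookmark shows up in that list.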
A very simple approach would be simulating the workflow by managing the status of the approvals yourself. Imagine that you have different buttons/pages for different users to approve different stages ("create", "manager approval", "RA approval", etc.) of the approval process. This is a very old-school approach.
If you use this approach, you would need to distribute your workflow (logic/process) across different places (pages). Obviously, this is a downside of this approach, especially when your workflow changes a lot or your solution needs to run different versions of a workflow.
If you want to use Workflow Foundation, the easiest way is what Maurice has suggested.
The other option is to use other tools which scale better and are more flexible than WF. I have used WF (not the latest release though), BizTalk, and SharePoint.
If your solution requires interacting with other applications, I would recommend using BizTalk.
More of a design/conceptual question.
At work the decision was made to have our data access layer be called through web services. So our website would call the web services for any/all data to and from the database. Both the website and the web services will be on the same machine (so no trip across the wire), but the database is on a separate machine (so that would require a trip across the wire regardless). This is all in-house; the website, web service, and database are all within the same company (AFAIK, the web services won't be reused by any other party).
To the best of my knowledge: the website will open a port to the webservices, and the webservices will in turn open another port and go across the wire to the database server to get/submit the data. The trip across the wire can't be avoided, but I'm concerned about the webservices standing in the middle.
I do agree there needs to be distinct layers between the functionality(such as business layer, data access layer, etc...), but this seems overly complex to me. I'm also sensing there will be some performance problems down the line.
Seems to me it would be better to have the (DAL) assemblies referenced directly within the solution, thus negating the first port-to-port connection.
Any thoughts (or links) both for and against this idea would be appreciated.
P.S. We're a .NET shop(migrating from vb to C# 3.5)
Edit/Update
Marked Dathan as answer. I'm still not completely sold (I'm still kind of on the fence, though leaning toward it not being as bad as I feared), but he provided a well-thought-out answer. I appreciated all the feedback.
Both designs (app to web service to DB; app to DB via DAL) are pretty standard. Web services are often used when interfacing with clients to standardize the semantics of data access. The web service is usually able to more accurately represent the semantics of your data model than the underlying persistence store, and thus helps the maintainability of the system by abstracting and encapsulating IO-specific concerns. Web services also serve the additional purpose of providing a public interface (though "public" may still mean internal to your company) to your data via a protocol that's commonly accessible across firewalls.
When using a DAL to connect directly to the DB, it's possible to encapsulate the data IO concerns in a similar way, but ultimately your client has to have direct access to the database. By restricting IO to well-defined semantics (usually CRUD+Query), you add an additional layer of security. This isn't such a big deal for you, though, since you're running a web app - all DB access is already done from trusted code. The web service does provide an increase in robustness against SQL injection, though.
All web service justifications aside, the real questions are:
How much will it be used? The website/web service/database format does impose slightly higher overhead on the web server - if the website gets hammered, you want to consider long and hard before putting another service on the same machine. Otherwise, the added small inefficiency is probably not a big deal. On the other hand, if the site is getting hammered, you probably want to scale horizontally anyway, and you should be able to scale the web service at the same time.
How much do you gain? One of the big reasons for having a web service is to provide data accessibility to client code - particularly when multiple possible application versions need to be supported. Since your web app is the only client to use the web service, this isn't a concern - it's probably actually less effort to version the app by itself.
Are you looking to expand? You say it probably won't ever be used by any client other than the single web app, but these things have a way of gaining in size. If there's any chance your web app might grow in scope or popularity, consider the web service. By designing around a web service, you're already targeting a modular, multi-host solution, so your app will probably scale with fewer growing pains.
In case you couldn't guess, I'm a web service fan. But the above are also my honest (if somewhat biased) opinions on the subject. If you do go the web service route, be sure to make it simple - keep application logic in the app and service logic in the service, and try to draw a bright line between them when extending the two. And do design your service for efficiency and configure the hosting to keep it running as smoothly as possible.
This is a questionable design, but your shop isn't the only one using it.
Since you're using .NET 3.5 and running on the same machine, you should use WCF with the netNamedPipeBinding, which uses binary data transfer over named pipes and works only on the same machine. That should mitigate the performance issue somewhat.
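A minimal sketch of the client side (the contract and address are illustrative):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IDataService
    {
        [OperationContract]
        string GetCustomerName(int id);
    }

    class Client
    {
        static void Main()
        {
            // Named pipes: binary serialization, same machine only, no HTTP stack.
            var factory = new ChannelFactory<IDataService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/DataService"));

            IDataService svc = factory.CreateChannel();
            Console.WriteLine(svc.GetCustomerName(42));

            ((IClientChannel)svc).Close();
            factory.Close();
        }
    }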
I like the idea because it gives you flexibility. We use a very similar approach because we can have more than 1 type of database storing our data (MSSQL or Oracle) depending on our customer install choices.
It also gives customers the ability to hook into our database if they choose not to use our front end web site. As a result we get an open API for little to no extra effort.
If speed is your most critical issue, then you have to reduce your layers. However, in most cases the time it takes for your web service to process the request from the database does not add a lot of time. (This assumes you build your web service layer correctly; you can easily make it slow if you don't watch it.)