I've been handed a project that needs some work done, and the original team that created it has since left the company. It has sat "on the shelf" for 4 years and everyone but our client had forgotten about it. They want it delivered now, and it doesn't work.
The system is a relatively simple ASP Web Forms application for submitting data to another service via 2 WSDL interfaces, logging that request in a SQL database and submitting the response to another service via OPC.
I can set up all of those interfaces for testing except the WSDL ones. I just have the software here to run. Is there any way I can easily create a service to simulate the final one so I can test my software? I only have the 2 WSDL files to go on. These aren't complicated services; I'm only using 4 methods in total.
I've been led to believe that the original creator of this system did something similar but I can't find what he used or any documentation about it. I expect it was run on his laptop and was lost when he left the company.
The WCF service client should be wrapped and exposed via an interface to your software. That way, you can mock the interface and test how your software responds to various inputs/outputs from the mocked service client. You control all aspects of what is returned, including potentially throwing exceptions as the real WCF service client would.
This is basically the reason the "D" (Dependency Inversion) in SOLID exists - when you depend on an interface, substituting the implementation is simple to do.
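For example (a rough sketch - all type names here are invented, not taken from your WSDLs):

public interface ISubmissionService
{
    string Submit(string payload); // stands in for the real WSDL operation signature
}

// Production implementation: delegates to the WSDL-generated proxy (name assumed).
public class WcfSubmissionService : ISubmissionService
{
    public string Submit(string payload)
    {
        using (var client = new SubmissionServiceClient()) // generated from the WSDL
        {
            return client.Submit(payload);
        }
    }
}

// Hand-rolled fake used in tests: returns canned data, or throws to simulate faults.
public class FakeSubmissionService : ISubmissionService
{
    public string Submit(string payload)
    {
        if (payload == null)
            throw new System.ServiceModel.FaultException("simulated service fault");
        return "OK";
    }
}

Your software only ever sees ISubmissionService, so the fake can stand in for the real service during testing.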
I have a WCF service that is locally created in C# (with a [ServiceContract] and [DataContract]) and uses webHttpBinding for binding.
I would like to invoke "GET" and "POST" operations outlined in the contract from a WF workflow, but I just can't seem to get anything to work. Here are some of the things I've tried (from Visual Studio 2012):
Adding a service reference. I hear this is supposed to create the activities I want (after compiling), but I can only find my service by running it and finding it by address. I cannot "Discover" the service, even if I include the projects for the service in my solution that has the Workflow project. No activities are created as a result.
Importing a service contract. This seems to work a little better, in that I get activities, but I only get ReceiveAndSendReply activities. It doesn't seem to matter whether I make the contract as part of my Workflow project (and import it), or create a normal reference to the service's WCF project (and then import the contract through that reference).
Filling in the fields of the Send and ReceiveReplyForSend components of a SendAndReceiveReply activity directly (this was actually what I tried first). No matter what I try, I always run into a wall with the error, "Manual addressing is enabled on this factory, so all messages sent must be pre-addressed."
This seems like something that should be fundamental to WF and a lot easier, but I'm just not getting it. Am I missing something simple?
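(For completeness: the fallback I can think of is a plain custom activity that issues the HTTP call itself, roughly like the sketch below with invented names - but I'd much rather use the built-in messaging activities if they can be made to work.)

using System.Activities;
using System.Net;

// Custom activity that performs an HTTP GET and returns the response body.
public sealed class HttpGetActivity : CodeActivity<string>
{
    public InArgument<string> Url { get; set; }

    protected override string Execute(CodeActivityContext context)
    {
        using (var client = new WebClient())
        {
            return client.DownloadString(Url.Get(context));
        }
    }
}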
We currently have an application (Client/Server) that communicates through WCF. We would like to move away from the WCF approach and use a REST approach instead.
There are a few reasons for this, such as overhead (in terms of size) and the possibility to use the same access method for both our Windows client (currently a WinForm client) and mobile devices.
We are also sometimes running the server on the Mono framework, and even though we have it up and running, we have seen some differences in how WCF behaves on the Mono stack compared to the .NET Framework (so I would not like to use the WebHttpBinding in WCF to handle REST).
The service also needs to be self-hosted (i.e. not in IIS).
The problem when shifting from WCF to other alternatives is related to contracts. I would like to make it possible to unit test the REST calls, and I would like a contract to be involved, enabling the clients to use proxy classes that they do not have to create by themselves - pretty much like WSDL.
The main idea for handing out proxy classes to developers is that the clients should be able to rely on the service provider to get the correct proxy classes and that they should not need to care about the URLs used.
Is there any way this could be done automatically, and if so - using what framework or method?
Having looked briefly at Web API, I came across an example of generating a proxy (http://www.codeproject.com/Tips/535260/Proxy-Object-Generation-for-MVC-and-WebAPI-Control). This would simplify things for the developers, but it would mean that I need to manually create the proxy for them to use.
Any suggestions would be appreciated :)
For client-side unit tests, you should create mocks for your REST service responses.
Alternatively, you can create a static mock page that serves all of your service responses.
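For example, with HttpClient you can fake the message handler so no real server is involved at all (a minimal sketch; the JSON and URL are made up):

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Handler that short-circuits the HTTP pipeline and returns a canned response.
public class CannedResponseHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("{ \"id\": 1, \"name\": \"test\" }")
        };
        return Task.FromResult(response);
    }
}

// In a test, the client under test talks to the canned handler instead of a server:
// var client = new HttpClient(new CannedResponseHandler());
// string json = client.GetStringAsync("http://example.org/api/items/1").Result;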
I'm about to design my web service API; most of the functions of my API are basically very similar to those of my web application.
Now the question is, should I create a single method and reuse it for both the web application and the web service API? (This seems to be the logical solution; however, it's very complicated. It's much easier to duplicate the method used by the web application and keep the two separate, i.e. one method for the web application and one method for the web service.)
How do you guys do it?
1) REUSE: one main method, reused by both the web application and the web service API (I like this, but it's complicated)
WebAppMethodX --uses--> COMMONFUNCTIONMETHOD_X
APIMethodX --uses--> COMMONFUNCTIONMETHOD_X
i.e. COMMONFUNCTIONMETHOD_X contains the reusable set of common features
PRO: less code, less maintenance, fewer bugs.
CON: very complicated
2) DUPLICATE: two methods, one method for the web application and one method for the web service.
WebAppMethodX
APIMethodX
PRO: simple
CON: duplication = more code, more maintenance, more bugs!
Your use case will very likely be different for your public webservice API than for your internal application API. Create a common service project / tier and use that same tier from both your web app and your public-facing webservice API. Create a separate http-invokable method for each of your web app and your webservice.
It comes down to two things:
1) different security concerns. For instance, it is nice (often required) to provide a sample client application making use of your public API so that others can easily get up to speed with what you've provided. That client API may need to pass object constructs that you provide them that have been stripped of internal, secure logic/content. (Remember that compiled C# might as well be clear text with Reflector!)
2) different needs and constraints. For instance, for an internal application call you're going to sometimes enforce different business rules vs. your public facing webservice API (often with the latter being much more constrained to scope).
If you put your business logic in your service layer and invoke those classes/methods from your web project and your webservice project respectively, you're going to get a lot of code reuse anyway, without overcomplicating things by mixing use cases.
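As a rough sketch of that shape (every name here is invented):

public class Order { public int Id; }

// Shared tier: the business rules live here exactly once.
public class OrderService
{
    public Order PlaceOrder(string customer, decimal amount)
    {
        // validation, persistence, auditing, etc. would go here
        return new Order { Id = 42 };
    }
}

// Web application entry point: full internal behaviour is available.
public class OrdersPage
{
    private readonly OrderService _orders = new OrderService();
    public void Submit() { var order = _orders.PlaceOrder("internal user", 10m); }
}

// Public webservice entry point: same tier, but a narrower, hardened surface.
public class OrdersApi
{
    private readonly OrderService _orders = new OrderService();
    public int PlaceOrder(string customer, decimal amount)
    {
        if (amount <= 0) throw new System.ArgumentOutOfRangeException("amount");
        return _orders.PlaceOrder(customer, amount).Id; // expose only what the caller needs
    }
}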
One method. Otherwise when you find a bug and fix it in one, then forget to in the other... you will cry.
One method, in the web service, and have your web application call it.
I don't understand what "one main method" for both means. Web applications don't have a main method; they're deployed to an app server.
One other point to note: you should write your service in terms of a POCO interface. Once you do that, deployment becomes a choice you make.
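For example (a minimal sketch, names invented):

public interface IQuoteService
{
    decimal GetQuote(string symbol);
}

// Plain implementation - no WCF attributes, no HTTP types, just code.
public class QuoteService : IQuoteService
{
    public decimal GetQuote(string symbol)
    {
        return 42m; // real lookup would go here
    }
}

// The web application can call this in-process; a web service host can wrap the very
// same class and expose it remotely - that hosting decision is made at deployment time.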
It depends..
Normally, I would separate them. This way you remove the interdependency between two high-level processes. Code reuse is good within a process, but sometimes you want to be able to use a different app on the same service.
If the two are highly dependent on each other, however, you will want to reuse the same functions so that changing them in one place changes them in the other, avoiding more potential issues in the development process.
I am having trouble deciding between two possible design choices. I have a web site which has a pretty extensive business layer and DAL (website, bll, and dal are all in multiple separate dlls). I need to design a windows service that can take some of my business objects, write them to a file, and store them locally within our network. The files are then imported into a 3rd party program which does further processing on them.
I can design this service one of two ways:
Wrap the service around the business layer and DAL. This would be quick and easy but the downside is every time the business layer changes, the service will have to be updated.
Add a web service to the web site and just query the web service for what I need. The windows service wouldn't have to use the business layer and as long as the web service doesn't change, I'll be good. The only downside is that I may have to create some basic business objects to parse the web service's xml into.
The windows service will have to poll the business layer/dal or web service every 10-20 minutes or so. The windows service is necessary because the web site is hosted offsite and thus doesn't have access to any of our local resources. I am leaning towards option 2 but I'm torn.
Given the two choices, which is the better option? Are there other possible options that I haven't considered? Also, how do you usually design for situations where you have one core set of libraries that are primarily used by a website but may end up being used either for data retrieval or to perform some function?
I'm not sure what the criteria are for storing certain business objects as files on the network, but if you're doing this on a regular basis then presumably you are trying to track changes of some kind, so there is another solution: build the logic directly into the business/persistence layer.
If this secondary file storage is a business requirement, then it ought to be embedded directly in that tier and triggered by some sort of event. That way, instead of having what is essentially an ad-hoc post-processing job that can get out of sync with the rest of the system, you have just one coherent system.
Invert the design - instead of wrapping a web service around the business services and using it for ad-hoc reporting, create a web service that encapsulates the data you need to receive from the export on a regular basis, and have your business tier send messages to it when new data is ready. You can send messages asynchronously so as not to tie up the business services, and depending on your reliability requirements you could set up a message queue (it's easier than it sounds, WCF already knows how to use MSMQ as the delivery mechanism, it's just a few configuration settings to change).
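As a rough sketch of the queued variant (contract, class, and queue name are all invented, and in practice the endpoint usually lives in config rather than code):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IExportSink
{
    [OperationContract(IsOneWay = true)] // queued operations must be one-way
    void DataReady(string payload);
}

public class ExportSink : IExportSink
{
    public void DataReady(string payload)
    {
        // write the file for the 3rd-party import here
    }
}

public static class QueuedHost
{
    public static void Run()
    {
        var host = new ServiceHost(typeof(ExportSink));
        host.AddServiceEndpoint(
            typeof(IExportSink),
            new NetMsmqBinding(NetMsmqSecurityMode.None),
            "net.msmq://localhost/private/exportQueue");
        host.Open();
        Console.WriteLine("Listening on the queue; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}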
I can't say with any certainty that this is better than your first two options without knowing a good deal more about the architecture, the amount and type of data, the scheduling and reporting requirements, etc., but it is something you should consider. If you think that your business services are likely to change fairly frequently, then it might work better to have them push data outward to a "warehouse" type abstraction rather than having a mining process pull it.
Otherwise, I think I would go with option 2. I don't know if you've worked with WCF services before but you should know that you never actually have to parse XML. Everything is done through data contracts and when you generate a proxy for the web service, you get strongly-typed .NET objects. If you can pass your domain objects directly through the service API then it's really very little work at all to create the web service.
The real downside to a web service is that you have to take steps to ensure that your service contract never substantially changes (otherwise it can break clients). So you might eventually end up needing to create Data Transfer Objects on the service side to use as the public API instead of passing through domain objects. But in many cases you won't need to do this for a good long while, so go ahead and try it out, you'll see that it's pretty straightforward.
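For example, the public contract can be a thin DTO layer like this (a sketch, names invented):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ExportItemDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    // Internal-only fields on the domain object simply never appear here, so the
    // public contract can stay stable while the domain model keeps evolving.
}

[ServiceContract]
public interface IExportService
{
    [OperationContract]
    ExportItemDto[] GetPendingItems();
}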
A variant of option two:
Add a WCF service to the site, exposing the information required as basic DTO DataContracts.
You could use AutoMapper or similar within the WCF service to handle the boring bit of converting your business objects to DTOs.
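Something along these lines (a sketch; type names are invented, and the exact AutoMapper API differs a little between versions):

using AutoMapper;

public class Customer    { public int Id { get; set; } public string Name { get; set; } }
public class CustomerDto { public int Id { get; set; } public string Name { get; set; } }

public static class Mapping
{
    // Configure once (ideally at startup) and reuse the mapper.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Customer, CustomerDto>()).CreateMapper();

    public static CustomerDto ToDto(Customer customer)
    {
        return Mapper.Map<CustomerDto>(customer); // copies properties by matching names
    }
}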
From your point two I understand that you would just add the web API for this extra service. Thus, you would have to update three parts for any changes (extra service, web API, DLL). With option one you would only have to update two parts (extra service, DLL), so I would go with option one.
BUT if you are targeting a general web API which you will always have to maintain, go with option two.
For more flexibility, instead of hard-wrapping your service around the business layer and DAL, and instead of relying on the web site (through an integrated web service), make use of design concepts like interfaces, dynamic type loading, and Inversion of Control, so that your service is a thin, decoupled layer that communicates with the business layer and DAL and allows them to be updated dynamically without recompiling the service. You could also put the assemblies in the machine's Global Assembly Cache so they can be shared across other projects, assemblies, and apps.
I know it seems like throwing out jargon for the sake of it but that's how I would start to think.
Edit:
Loading types dynamically is actually amazing and easy. Here is quick C# pseudo-code for one way to do it (untested, but it should be close to right).
// Get a System.Type from its string representation.
Type t = Type.GetType("type name"); // returns null if the type can't be found
if (t == null)
    throw new TypeLoadException("Could not load the requested type.");
// Create an instance of the type.
object o = Activator.CreateInstance(t);
// Cast it to the interface (or actual type) you're working with...
IMyInterface strongObject = (IMyInterface)o;
// ...and continue from there with the instance.
Instructions about how to formulate the string representation of a type name can be found in MSDN under Type.AssemblyQualifiedName, Type.GetType and similar places. In short you can see a lot of assembly qualified type names in the app.config or web.config files because they use the same format.
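For example, an assembly-qualified type name looks roughly like this (names invented):

// "Full.Type.Name, AssemblyName"
Type exporterType = Type.GetType("Acme.Exporters.CsvExporter, Acme.Exporters");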
Do you use auto-generated WCF service references in line of business applications? Or do you roll your own? And why?
EDIT
For anyone looking to roll their own, I found this article which may prove useful: Understanding WCF Services in Silverlight 2. There's another article on the site for Silverlight 3 which may be a useful addition: Understanding WCF Faults in Silverlight 3.
I typically roll my own, or tweak the ones generated by the auto-generated wizard.
I have two scenarios, most of the time:
I control both ends of the wire - in that case, I share the assembly with the service and data contracts between the service and the client, and I write my own clients from scratch, as ClientBase<T> descendants or using a ChannelFactory<T>. Unfortunately, this is not an option with a Silverlight client, as far as I know :-(
I get WSDL+XSD from a third party - in that case, I typically use svcutil.exe to generate a first version of the client proxy, and then I tweak that to suit my needs (especially the configs generated by svcutil or VS "Add Service Reference" are horrendously bad.....)
I just like to have that extra control of doing it myself and totally knowing what's going on.
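For the first scenario, the hand-rolled client is usually little more than a thin ChannelFactory<T> wrapper, something like this (contract and address invented for illustration):

using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetStatus(int orderId);
}

public static class OrderServiceClient
{
    public static string GetStatus(int orderId)
    {
        var factory = new ChannelFactory<IOrderService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost/orders.svc"));
        IOrderService channel = factory.CreateChannel();
        try
        {
            return channel.GetStatus(orderId);
        }
        finally
        {
            ((IClientChannel)channel).Close(); // or Abort() if the channel has faulted
            factory.Close();
        }
    }
}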
I haven't had to use Silverlight to access a service I didn't control, but in accessing a WCF service that I do control, yeah, I use the standard auto-generated WCF references. Rolling my own would just be too painful when the service is changing regularly.
If you control both ends of the service, you should also strongly investigate RIA Services, which implements a much more elegant way of keeping your Silverlight client in sync with your WCF service than having to manually regenerate your service references each time the interface changes.