I've just gotten started with WCF Data Services, so I apologize if I don't make sense.
I'm creating an online event logger/viewer. To do this I have created an ADO.NET Entity Data Model and a WCF Data Service. This works fine, and I'm able to add events to my service.
I'm now working on a Windows client to browse the events, and I was wondering whether there is an approach for updating the client with new events on a regular basis. As there will be a large number of events, it seems inefficient to download all of them on every refresh.
To provide more information, I can mention the following:
1. A custom TraceListener class in software A posts events to the data service.
2. Since WCF Data Services can act as a data source, I chose this approach instead of a regular web service.
3. I'm currently creating the client in WPF.
I'm looking forward to any answers to this question.
Thanks,
Stefan
WCF Data Services exposes your data using the OData protocol. This means that your client can easily query your data service using LINQ.
Keep a timestamp for each request the client sends to the server. On the next request, ask only for the events that occurred after that timestamp, using a LINQ query on your service reference-generated proxy:
var newEvents = myServiceRef.Events.Where(x => x.Timestamp >= lastTimestamp);
See also http://www.odata.org/ for more on the OData protocol, and http://msdn.microsoft.com/en-us/library/ee622463.aspx for more on using LINQ to access WCF Data Services.
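A minimal sketch of that polling pattern on the client, assuming a service reference-generated context named EventContext with an Events set whose Event type exposes a Timestamp property (all names here are illustrative):

```csharp
using System;
using System.Linq;

public class EventPoller
{
    private DateTime _lastTimestamp = DateTime.MinValue;

    // Call this from a timer (e.g. a DispatcherTimer in the WPF client).
    public void Refresh(EventContext ctx)
    {
        // Only fetch events newer than the latest one we have seen.
        var newEvents = ctx.Events
            .Where(x => x.Timestamp > _lastTimestamp)
            .ToList();

        if (newEvents.Count > 0)
        {
            _lastTimestamp = newEvents.Max(x => x.Timestamp);
            // ...merge newEvents into the WPF view model here...
        }
    }
}
```

Note that a strictly-greater-than comparison avoids re-downloading the boundary event, at the small risk of missing events that share the exact same timestamp; filtering on a monotonically increasing Id column instead avoids that ambiguity.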
Related
I have seen a lot of examples of using SignalR in combination with an API as push notifications to tell clients that updated data is available. Is it possible to use SignalR in a client application that schedules REST API calls when the server doesn't have any SignalR service? Here is what I am trying to do: I am a client that uses an API provided by a vendor to pull data and map it to our database. There is no way of knowing that there is new data until I make a REST API call to check.
This is where I am thinking the long polling feature of SignalR would come in handy. The idea is to create a local executable application that uses SignalR to call the API to check whether there is new data, and then execute a service (another .NET executable application) that will pull the data and consume it (map it to our database).
Can this work?
This may seem a little crazy, but while reading about OData I've come across some articles claiming that run-of-the-mill WCF web services can be exposed as OData endpoints (via some black magic or other).
The thing is, it may be practical for me to expose a SignalR web service with an OData endpoint. Is this possible with the currently available frameworks? The SignalR service is used to extract data from one of the connected clients, known as a "provider", from which multiple consuming clients can request data.
Edit -
I have a set of existing SignalR hubs, one for each type of entity (or resource, in OData terms), and each of these hubs exposes methods in a similar fashion to a repository, e.g.:
public class CustomersHub : Hub
{
    public IEnumerable<CustomerData> GetCustomers();
    public IEnumerable<CustomerData> GetCustomers(IEnumerable<int> ids);
    // With OData this method may not be necessary.
    public IEnumerable<CustomerData> FindCustomersByName(IEnumerable<string> names);
    // ...
}
Hopefully this shows a striking resemblance to ODataController-derived classes. In this situation the hubs are the resources.
Something very similar to what I am asking was implemented for WCF Data Services in the WCF Data Services Toolkit, which AFAIK isn't active anymore; plus, I'm trying to do this with SignalR.
If you ask why I am using SignalR for this type of service: the data does not reside on the server the web service is hosted on; certain clients connected to the web service have that data. Inside the methods there is a call to another client (besides the Caller), who is sent a request for the data.
From the official OData specification (Introduction):
The Open Data Protocol (OData) enables the creation of REST-based data services, which allow resources, identified using Uniform Resource Locators (URLs) and defined in a data model, to be published and edited by Web clients using simple HTTP messages. This specification defines the core semantics and the behavioral aspects of the protocol.
None of these characteristics map onto SignalR, which is a real-time, non-resource-based technology built on a variety of HTTP/HTML techniques and hacks for pushing information. You can of course use SignalR to implement something along the lines of what you described, but as long as it does not implement a RESTful, purely HTTP-based request/response approach (and with SignalR it would not), it is not OData by definition. Have a look at the specs and you will quickly see how they do not map onto SignalR.
UPDATE
Even after your edit, it does not make sense to me. It's like you want some magic to happen that enables you to use walkie-talkies through the post office just because you can query their catalog.
Maybe you should distinguish your clients into consumers and producers, offer OData endpoints to the former and hubs to the latter, and do the necessary "magic" yourself.
Also, with SignalR I don't think you would have the IQueryable support needed for out-of-the-box OData plumbing.
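To illustrate that consumer/producer split, here is a rough sketch (not a working OData-over-SignalR bridge; all type and member names are assumptions): producers push data into a shared cache through a plain hub, while consumers query that cache through a regular Web API OData endpoint, which is where the IQueryable support lives.

```csharp
using System.Collections.Concurrent;
using System.Linq;
using System.Web.Http.OData;
using Microsoft.AspNet.SignalR;

// Producers push their data into a shared cache via an ordinary hub...
public class ProducersHub : Hub
{
    public static readonly ConcurrentDictionary<int, CustomerData> Cache =
        new ConcurrentDictionary<int, CustomerData>();

    public void PublishCustomer(CustomerData customer)
    {
        Cache[customer.Id] = customer;
    }
}

// ...and consumers query that cache through a normal OData controller,
// which gives you the IQueryable/query-option support a hub cannot offer.
public class CustomersController : ODataController
{
    [EnableQuery] // older Web API OData versions used [Queryable] instead
    public IQueryable<CustomerData> Get()
    {
        return ProducersHub.Cache.Values.AsQueryable();
    }
}
```

The trade-off is that consumers see the last pushed snapshot rather than live data pulled from a provider on demand, but the OData side stays purely request/response as the spec requires.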
I'm trying to create a WCF service for our existing product.
The service should provide normal "web service" features (one-way), but also act independently.
Sample scenario:
The client connects to the server
The server saves the client in a collection
Now I use an admin client / database entry to tell the client to do something (for example, change the config for log4net/NHibernate)
I've read some things about callbacks (mostly chat-system examples), but I'm still not sure whether this will work.
Now my question is: will WCF be suitable for such a scenario, or should I use TcpClient/TcpListener?
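WCF duplex contracts can cover exactly this scenario. A minimal sketch (all names are illustrative) of a service that stores each client's callback channel in a collection and later pushes a config change back to one client:

```csharp
using System.Collections.Generic;
using System.ServiceModel;

// Contract the server uses to call back into a connected client.
public interface IClientCallback
{
    [OperationContract(IsOneWay = true)]
    void ApplyConfig(string configXml);
}

[ServiceContract(CallbackContract = typeof(IClientCallback))]
public interface ILoggingService
{
    [OperationContract]
    void Register(string clientId);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class LoggingService : ILoggingService
{
    // The collection of connected clients mentioned in the scenario.
    private readonly Dictionary<string, IClientCallback> _clients =
        new Dictionary<string, IClientCallback>();

    public void Register(string clientId)
    {
        // Capture the callback channel of the connecting client.
        _clients[clientId] =
            OperationContext.Current.GetCallbackChannel<IClientCallback>();
    }

    // Invoked from the admin side to push a command to one client.
    public void PushConfig(string clientId, string configXml)
    {
        IClientCallback callback;
        if (_clients.TryGetValue(clientId, out callback))
            callback.ApplyConfig(configXml);
    }
}
```

A duplex-capable binding such as NetTcpBinding or WSDualHttpBinding is required for the callback channel, and in a real service you would also have to remove clients whose channels have faulted or closed.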
Currently I'm working on a design for a Windows service application that fetches reports from an Oracle database, aggregates them into a message, and sends it to an external WCF SOAP service.
I would be grateful for some design suggestions concerning Windows services.
Should the Windows service use, for example, a dedicated WAS-hosted or self-hosted WCF service (net.pipe/net.tcp) that provides the data, to achieve better separation and reusability?
In that case I would add a WCF service (net.pipe) that provides the data (e.g. a GetReport method).
The Windows service application would call GetReport and then call the remote SOAP service to forward the aggregated message. The remote service and its client code are likely to change, as they might be adapted for different customer projects.
If I understand correctly, your Windows service will periodically fetch some data from the database and upload it to a remote web service.
This means that your Windows service is a client in terms of WCF communication, and you won't need to implement any WCF server code inside it.
All you'll have to do is connect to the remote web service and upload the data, e.g. using a client proxy generated for this remote service.
I don't think it is necessary to add another WCF service that provides the data instead of querying the database directly, as long as you don't have the requirement that another application will use the same WCF service. Until then, I wouldn't add the service, for the following reasons:
Another WCF service increases the complexity of the deployment and makes it harder to install and configure.
The connection to the new WCF service is another point that can break.
If you handle lots of data, getting it from the database directly is much more efficient than transferring it over a service protocol. As I understand your question, you aggregate the data in the Windows service, not in the database; therefore you'd also have to move the aggregation code to the new service.
As said before, this recommendation changes once you have another potential client for the new service. To prepare for that, you should of course choose a design for your Windows service that separates concerns well and is a good starting point for moving some components out later.
I have a very simple Entity Framework (.edmx) file and a .svc REST service.
Everything works fine for CRUD operations.
I have many databases that share exactly the same schema.
My next step is to let the client pass in a parameter, which could be the connection string or some other value identifying the user, so that the service serves data from the correct database.
Currently, the only parameter is the URI for the ServiceRoot.
I see in the data model that I can pass in a connection string, but how can I do this from the client without creating many service files?
I am assuming you are using WCF Data Services to expose the .edmx file. I am no expert in this toolset, but I suspect the only direct way is to create a service for each database.
This is a great question and it is a scenario that I hope will be addressed in the future WCF HTTP stack.
In the meantime, there is some positive news. I have experimented in the past with creating a large number of service hosts (around 1,000), and my experiments showed that startup was quite efficient and did not consume large amounts of RAM. The key is to create the service hosts in code rather than via the config files. Obviously, you don't want to be hand-writing an XML config file with thousands of service entries in it!
It may not be the ideal solution but I believe it would work.
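As a sketch of that idea (the EventDataService type, the addresses, and the tenant dictionary are assumptions), you could spin up one DataServiceHost per database in code, each with its own base address and therefore its own ServiceRoot:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Services;

public static class MultiTenantHosting
{
    // tenants maps a tenant name to its connection string.
    public static List<DataServiceHost> StartHosts(
        IDictionary<string, string> tenants)
    {
        var hosts = new List<DataServiceHost>();
        foreach (var tenant in tenants)
        {
            // One base address (and therefore one ServiceRoot) per database.
            var baseAddress = new Uri("http://localhost:8080/" + tenant.Key);
            var host = new DataServiceHost(typeof(EventDataService),
                                           new[] { baseAddress });
            host.Open();
            hosts.Add(host);
        }
        return hosts;
    }
}
```

Since every host shares the same service class, the service itself would still have to pick the right connection string based on the base address the request arrived on (e.g. by inspecting the request URI in its data-source creation code).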
If you're using WCF Data Services, you should be able to pass the information identifying the data source in the HTTP request, either as a custom option in the URL or as a custom HTTP header (I would probably use the custom header, as it's much easier to work with from the client).
Depending on how you host the service, you should be able to access the request headers on the server. You can do it the ASP.NET way (static variables such as HttpContext.Current), or you can hook into the WCF Data Services processing pipeline, which should give you access to those headers as well.
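For example, a sketch of the custom-header approach (the "X-Tenant" header name, the MyEntities context, and the GetConnectionStringFor helper are illustrative assumptions, not real APIs):

```csharp
using System.Data.Services;
using System.Web;

public class EventDataService : DataService<MyEntities>
{
    protected override void OnStartProcessingRequest(ProcessRequestArgs args)
    {
        base.OnStartProcessingRequest(args);
        // Read the custom header identifying the target database and stash
        // it for CreateDataSource (the "ASP.NET way" mentioned above).
        HttpContext.Current.Items["tenant"] =
            args.OperationContext.RequestHeaders["X-Tenant"];
    }

    protected override MyEntities CreateDataSource()
    {
        var tenant = (string)HttpContext.Current.Items["tenant"];
        // Resolve the connection string for this tenant (lookup omitted)
        // and build the Entity Framework context with it.
        return new MyEntities(GetConnectionStringFor(tenant));
    }
}
```

This keeps a single .svc endpoint while letting each request route to a different database; you would also want to reject requests where the header is missing or names an unknown tenant.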