I am developing a line of business application which has to, for reasons out of my control, use a client server architecture.
I.e. clients all connect to an application server, application server connects to database etc.
To do this in the past I have created a WCF service which exposes CRUD-type methods for the database. The service ends up with methods like:
Customer GetCustomer(int customerId);
List<Customer> GetAllCustomers();
etc...
However I have always found the same 2 problems with this:
1) There's a LOT of plumbing code which connects: client -> app server -> db server
2) When client applications need to grab more complex data, I end up adding server-side methods that turn into something horrible like this:
Customer GetCustomerByNameWhereCustomerHasBoughtProduct(string name, int productCode);
Or returning far more data than needed and filtering on the client side, which is slow and really bad for the database. Something like:
List<Customer> customers = _Service.GetAllCustomers();
List<Product> products = _Service.GetAllProducts();
List<Customer> customersWhoBoughtX =
    (from c in customers
     where c.OrderLog.Contains(products.Single(p => p.Code == x))
     select c).ToList();
What am I doing wrong here? This must be solvable somehow.
Is there a way to expose a database through a WCF service using conventions? Or any other idea that could help with what I'm doing?
Ideally I would let the clients connect to the database directly, however I am told this can't be changed.
I would really appreciate some pointers.
Thanks
Consider exposing your entities using OData. Then on the client you can write LINQ queries much as you would write EF LINQ queries. Here's an article with the details:
http://www.vistadb.net/tutorials/entityframework-odata-wcf.aspx
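For example, a minimal client-side sketch (the entity and property names here are illustrative assumptions, not from the question, and Any() needs an OData v3 endpoint):

// Client-side sketch against a WCF Data Services (OData) endpoint.
// MyEntities is the generated DataServiceContext; Customers, Orders
// and ProductCode are assumed names.
var context = new MyEntities(new Uri("http://appserver/CustomerData.svc"));

// The LINQ below is translated into an OData URL and filtered on the
// server, so only the matching customers cross the wire; no
// GetCustomerByNameWhereCustomerHasBoughtProduct-style method needed.
List<Customer> customersWhoBoughtX = context.Customers
    .Where(c => c.Orders.Any(o => o.ProductCode == x))
    .ToList();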
I work on a multi-tier application and I need to optimize a long-running process in three ways:
Avoiding EF update concurrency problems.
Improving speed.
Informing the user of the progress.
Currently, the client code calls a WCF service method that does all the work (evaluating the number of entities to update, querying the entities to update, updating them and, finally, saving them back to the database).
The process is very long and nothing is sent back to the user except the final result once the process is done. The user can stay in front of the wait form for up to 10 minutes, not knowing what is happening.
The number and depth of the queried entities can become really big, and I sometimes hit OutOfMemoryExceptions. I had to change the service method to process updates 100 entities at a time, so my DbContext is refreshed often and doesn't grow too big.
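Roughly, the batching now looks like this (a sketch only; MyDbContext, Entities and UpdateEntity are illustrative names, not my actual code):

// Sketch of the batching described above: a short-lived DbContext
// per batch keeps the change tracker from growing without bound.
const int batchSize = 100;
int processed = 0;
while (true)
{
    using (var db = new MyDbContext())
    {
        var batch = db.Entities
                      .OrderBy(e => e.Id)
                      .Skip(processed)
                      .Take(batchSize)
                      .ToList();
        if (batch.Count == 0)
            break;

        foreach (var entity in batch)
            UpdateEntity(entity);  // hypothetical per-entity work

        db.SaveChanges();          // one save per batch; the context
        processed += batch.Count;  // is disposed before the next one
    }
}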
My actual problem is that I cannot inform the user each time an entity is updated, because my service method does the whole process before returning its result to the user.
I read about implementing a duplex service, but since I have to return two different callbacks to the user (one callback for the number of entities to update and another for the result of each entity update), I have to use multiple interface inheritance on a generic callback interface, and it's becoming a little messy (well, for my taste).
Wouldn't it be better to have one WCF service method return the number of entities to evaluate, and another WCF method return a single entity update result, which would be hit once for every entity to update? My DbContext would live only for the duration of a single entity update, so it would not grow very much, which I think is good. However, I am concerned about hitting the WCF service that often during the process.
What are your thoughts? What can you suggest?
Have you thought about adding a WCF host to your client? That way you get full two-way comms.
Client connects to the server and gives it the connection details for the client's own WCF host
Client requests the long-running operation to begin
Server sends multiple updates to the client's WCF host as work progresses
Server sends a work-complete message to the client
This leaves your client free to do other things, consuming the messages from the server as you see fit. Maybe updating a status area as messages come in.
You could even get the server to maintain a list of clients and send updates to them all.
--------EDIT---------
When I say WCF host I mean a ServiceHost.
It can be created automatically from your XML in App.config or through code directly.
var myUri = new Uri[1]; // must have room for one address (new Uri[0] would throw)
myUri[0] = new Uri("net.tcp://localhost:4000");
var someService = new SomeService(); // implements ISomeService interface
var host = new ServiceHost(someService, myUri);
var binding = new NetTcpBinding(); // need to configure this
host.AddServiceEndpoint(typeof(ISomeService), binding, "");
host.Open();
Proxy is a term I use for what a client uses to connect to the server; it was in an early example I came across and it's stuck with me since. Again, it can be created either way.
var binding = new NetTcpBinding(); // need to configure this
var endpointAddress = new EndpointAddress("net.tcp://localhost:4000");
var factory = new ChannelFactory<ISomeService>(binding, endpointAddress);
var proxy = factory.CreateChannel();
proxy.DoSomeWork();
So in a typical client/server app you have
CLIENT APP 1          SERVER APP          CLIENT APP 2
proxy------------->ServiceHost<-----------proxy
What I am suggesting is that you can make the client be a "server" too
CLIENT APP 1          SERVER APP          CLIENT APP 2
proxy------------->ServiceHostA<----------proxy
ServiceHostB<------proxy1
                   proxy2---------------->ServiceHostB
If you do that, you can still split your large task into smaller ones if needed (you mentioned memory issues), but from the sounds of things they might still take some time, and this way progress updates can still be sent back to the client, or even to all clients if you want everyone to be aware of what's happening. No callbacks needed, though you can still use them if you want.
Avoiding EF update concurrency problems.
See this question/answer Long running Entity Framework transaction
Improving speed.
Some suggestions:
Try using SQL Profiler to see what SQL query is actually being executed, and optimize the LINQ query.
Or try improving the query itself, or calling a stored procedure (see the sketch after this list).
Can the updates be done in parallel? Different threads? Different processors?
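For the stored-procedure route, a minimal sketch (assuming an EF 4.1+ DbContext named db and a hypothetical UpdateEntitiesBatch procedure):

// Sketch: push a whole batch update into the database in one round
// trip instead of materializing the entities into the context.
int rowsAffected = db.Database.ExecuteSqlCommand(
    "EXEC UpdateEntitiesBatch @p0, @p1", batchStart, batchSize);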
Informing the user of the progress.
I would suggest changing the client to call an async method, or a method which then starts the long-running operation asynchronously. This would return control to the client immediately. It would then be up to the long-running operation to provide feedback on its progress.
See this article for updating progress from a background thread
Update
the "architecture" I would suggest would be as follows:
            .       Service
 ________   .   _________    _______    ____
|        |  .  |   WCF   |  |  EF   |  |    |
| Client |---->| Service |->| Class |->| DB |
|________|  .  |_________|  |_______|  |____|
            .
The WCF service is only responsible for accepting client requests and starting off the long-running operation in the EF class. The client should send an async request to the WCF service so it retains control and responsiveness. The EF class is responsible for updating the database, and you may choose to update all or a subset of records at a time. The EF class can then notify the client via the WCF service of any progress it has made, as required.
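To make the notification path concrete, here is a minimal sketch of one way to wire it with a WCF duplex (callback) contract; every name below is an illustrative assumption, not taken from the question:

using System.ServiceModel;

// Sketch only: the service pushes progress to the client while the
// EF class does the work in the background.
[ServiceContract(CallbackContract = typeof(IProgressCallback))]
public interface IUpdateService
{
    [OperationContract(IsOneWay = true)]
    void BeginUpdate();  // returns immediately; work runs in the background
}

public interface IProgressCallback
{
    [OperationContract(IsOneWay = true)]
    void ReportTotal(int entitiesToUpdate);             // sent once, up front

    [OperationContract(IsOneWay = true)]
    void ReportProgress(int entityId, bool succeeded);  // sent per entity
}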
I'm currently using the NEST ElasticSearch C# Library for interacting with ElasticSearch. My project is an MVC 4 WebAPI project that basically builds a RESTful webservice for accessing directory assistance information.
We've only just started working with NEST, and have been stumbling over the lack of documentation. What's there is useful, but it's got some very large holes. Currently, everything we need works, however, we're running into an issue with connections sometimes taking up to a full second. What we'd like to do is use some sort of connection pooling, similar to how you'd interact with SQL Server.
Here is the documentation on how to connect using nest: http://mpdreamz.github.com/NEST/concepts/connecting.html
Here is the relevant code snippet from our project:
public class EOCategoryProvider : IProvider
{
    public DNList ExecuteQuery(Query query)
    {
        // Configure the elastic client and its settings
        ConnectionSettings elasticSettings =
            new ConnectionSettings(Config.server, Config.port)
                .SetDefaultIndex(Config.index);
        ElasticClient client = new ElasticClient(elasticSettings);

        // Connect to Elastic
        ConnectionStatus connectionStatus;
        if (client.TryConnect(out connectionStatus))
        {
            // Elastic Search code here ...
        } // end if
    } // end ExecuteQuery
} // end EOCategoryProvider
From looking at the documentation, I can't see any provisions for a connection pool. I've been thinking about implementing my own (having, say 3 or 4 ElasticClient objects stored, and selecting them round-robin style), but I was wondering if anyone had a better solution. If not, does anyone have advice on the best way to implement a connection pool by hand? Any articles to point to?
Thanks for anything you guys come up with.
Update: This seems to have been related to calling TryConnect on every request, and the particular network setup. The problem completely disappeared when using a machine on the same network as the Elastic box; my development machine (which averages 350 ms to the Elastic box) sometimes failed to make HTTP connections, which caused the long times in TryConnect.
You don't have to call TryConnect() each time you do a call to Elasticsearch. It's basically a sanity check call for when your application starts.
NEST is the C# REST client for Elasticsearch and the default IConnection uses WebRequest.Create which already pools TCP connections.
Review the actual implementation: https://github.com/elastic/elasticsearch-net/blob/master/src/Elasticsearch.Net/Connection/HttpConnection.cs
Reusing ElasticClient won't offer any performance gains since each call already gets its own HttpWebRequest. The whole client is built stateless on purpose.
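Concretely, a minimal sketch of doing the sanity check once at startup and then reusing the client (mirroring the question's Config names):

// Sketch: build the settings and client once at application startup;
// TryConnect is a one-time sanity check, not a per-request step.
var elasticSettings =
    new ConnectionSettings(Config.server, Config.port)
        .SetDefaultIndex(Config.index);
var client = new ElasticClient(elasticSettings);

ConnectionStatus status;
if (!client.TryConnect(out status))
    throw new ApplicationException("Cannot reach Elasticsearch: " + status);

// Hand 'client' (or just the settings) to your providers; calling
// TryConnect per request only adds an extra round trip.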
I am, however, very interested in why calls are taking 1 second for you. Could you post the actual NEST code, how you are measuring the calls, and describe your data?
Disclaimer: I'm the author of NEST.
This is a follow-up to my solution to that question.
It has been a long time since my last C# coding, and it is my first time writing a web service...
Previous Question:
I need to use a DLL on an Ubuntu machine with Python. The final solution is using a web service for that purpose...
My problem is that the API is used for a kind of payment. There are three basic functions of the DLL to be used in the web service... The first is used to connect to the server, the second asks for the available payments, and the third selects one and makes the payment...
Since my system is using Python, I wish to keep the logic that selects the payment method in Python, not in the web service.
My problem is that when I make a connection, the web service must create a connection object and perform the following two steps using that same connection. Only then may it dispose of that connection object and create a new one for the next connection and payment.
So, my Python code will do something like this...
Use the web service to create a connection
Get a list of available payments from web service (these two functions can be used as a single function in the web service)
Do some calculation and select the proper payment in python...
Send payment method info to web service...
All these steps must be done with the connection object from the first step.
As I said before, I do not have much knowledge about web services and using them from Python... So I'm confused about whether I may use the same connection object for steps 2 and 4. If I create the connection object as a global in my web service during the connection step, will the following function calls use that same object? In OOP that's the way it would be, but I can't be sure it works the same way for web services.
A code snippet:
namespace paymentType
{
    public class x : System.Web.Services.WebService
    {
        ConnectionObj conn;
        ConnResult result;

        [WebMethod]
        public void ConnectToServer(String deviceId)
        {
            conn = new ConnectionObj();
            result = conn.Connect(deviceId);
        }

        [WebMethod]
        public List<int> GetCompanyList()
        {
            List<int> kurumlar = new List<int>();
            if (result.CRCStatus)
            {
                if (conn.CompanyList != null) { /* blah blah blah... */ }
Since conn is effectively a global, can I set it in the ConnectToServer call and then use that same object in the other functions?
UPDATE: Let me try to make it clearer...
When I connect to the remote server (via a function in the DLL), the remote server accepts my connection and gives me a somewhat unique id for that connection. Then I ask for the available payments for a customer. The server sends all available ones, each with a transaction id belonging to that transaction. In the final step, I use the transaction id I want to make the payment. The problem is, each transaction id is usable only within the connection in which it was created. So I must request the transaction ids and confirm the one I want within the same connection...
But as far as I can see, the best solution is using a single function call and doing the whole job in the web service, since the API provider considers that removing the connection-transaction id lock might cause security vulnerabilities...
But on the other hand, I do not want to handle it in the web service...
One more question... In the connection step, would creating the connection and using set/get functions work, or returning the connection object and passing it back to the web service for each following step?
If you're communicating using a web service, it should preferably be stateless: that is, you should always send any context information the service implementation needs in the request. While technologies that let you implement stateful web services exist, they'd likely make things more complicated, not less.
I'm not clear from your description on why you need the connection object to be created in step 1, or why you can't just create a different connection object for steps 2 and 4, which is how I'd implement this.
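To make that concrete, here is a minimal sketch of a stateless, single-call shape; GetAvailablePayments and Pay are hypothetical stand-ins for the real DLL calls, since the transaction ids only live inside one connection:

// Sketch: connect, list, select and pay inside one WebMethod, with
// the selection rule passed in as data. The connection never
// outlives the call, so nothing stateful is kept between requests.
[WebMethod]
public PaymentResult ConnectAndPay(string deviceId, int maxAmount)
{
    ConnectionObj conn = new ConnectionObj();
    ConnResult result = conn.Connect(deviceId);
    if (!result.CRCStatus)
        throw new Exception("Could not connect to the payment server.");

    // All steps that need the same connection happen here, before the
    // method returns and the connection is discarded.
    var payments = conn.GetAvailablePayments();              // hypothetical
    var chosen = payments.First(p => p.Amount <= maxAmount); // stand-in rule
    return conn.Pay(chosen.TransactionId);                   // hypothetical
}

The catch is that the Python-side selection logic has to be expressible as data (the criteria parameter); if it can't be, the connection-scoped transaction ids really do force the whole job into one call.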
I have one database with one mirror in high-safety mode (using a witness server at the moment, but planning to take it out). This database will be used to store data gathered by a C# program.
I want to know how I can check the state of all the SQL instances from my program, and how to cause/force a manual failover.
Is there any C# API to help me with this?
Info: I'm using SQL Server 2008.
Edit: I know I can query sys.database_mirroring, but for this I need the principal database up and running. I would like to contact each SQL instance and check its status.
Use SQL Server Management Objects (SMO).
SQL Server Management Objects (SMO) is a collection of objects that are designed for programming all aspects of managing Microsoft SQL Server. SQL Server Replication Management Objects (RMO) is a collection of objects that encapsulates SQL Server replication management.
I have used SMO in managed applications before - works a treat.
To find out the state of an instance, use the Server object; it has State and Status properties.
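As a minimal sketch (instance and database names are illustrative), checking state and forcing a failover with SMO might look like:

// Sketch using SMO; requires references to the SMO assemblies.
using Microsoft.SqlServer.Management.Smo;

Server server = new Server(@"MYHOST\INSTANCE1");
Console.WriteLine(server.State);   // SqlSmoState, e.g. Existing
Console.WriteLine(server.Status);  // ServerStatus, e.g. Online

Database db = server.Databases["MyMirroredDb"];
Console.WriteLine(db.MirroringStatus);  // e.g. Synchronized

// Force a manual failover (run this against the principal):
db.ChangeMirroringState(MirroringOption.Failover);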
After playing around a bit I found this solution (I'm not sure if this is a proper solution, so please leave comments):
using Microsoft.SqlServer.Management.Smo.Wmi;

ManagedComputer mc = new ManagedComputer("localhost");
foreach (Service svc in mc.Services)
{
    if (svc.Name == "MSSQL$SQLEXPRESS")
    {
        textSTW.Text = svc.ServiceState.ToString();
    }
    if (svc.Name == "MSSQL$TESTSERVER")
    {
        textST1.Text = svc.ServiceState.ToString();
    }
    if (svc.Name == "MSSQL$TESTSERVER3")
    {
        textST2.Text = svc.ServiceState.ToString();
    }
}
This way I'm just looking at the state of the services (Running/Stopped), and it's much faster. Am I missing something?
I'm trying to build a Silverlight App that accesses and presents data from a MySQL database. I'm trying to use Entity Framework to model the MySQL data and RIA Services to make the data via EF available to Silverlight.
My Silverlight app is showing the correct columns in the datagrid, but it does not show the data.
When I look at the DomainService file (used for RIA Services), I see this:
public IQueryable<saw_order> GetSaw_order(int intOrder)
{
    return this.Context.saw_order
               .Where(o => o.Wo == intOrder);
}
To test this step, I modified the LINQ to remove the where clause, so that all I had was return this.Context.saw_order;. When I did this, I was able to check the MySQL server and verify that the query was in fact sent, and that the server was "Writing to NET" and trying to send data back. The query sent from my test machine was valid.
From my test above, it seems that data is correctly being sent to the MySQL server but is lost somewhere on its return. My difficulty now is trying to figure out where in the chain (Entity Framework to RIA Services to Silverlight client) the data is getting lost and I'm not sure how to debug this at different points.
For example, what are other ways I might test Entity Framework to make sure EF is not the problem? How might I test RIA services? Should I test on the Silverlight Client?
I'm struggling with learning C# and am not sure what to do to test. How might I "catch" the return in the DomainService so I can do some basic debugging?
Any help is very much appreciated.
Change your code like this:
var qry = this.Context.saw_order.Where(o => o.Wo == intOrder);
return qry;
If you put a breakpoint in at the return, then you can try executing the query in the immediate window and see if it is executing correctly.
From my test above, it seems that data is correctly being sent to the MySQL server but is lost somewhere on its return. My difficulty now is trying to figure out where in the chain (Entity Framework to RIA Services to Silverlight client) the data is getting lost and I'm not sure how to debug this at different points.
I use tools like:
LINQPad: for testing my LINQ to SQL statements. It is pretty straightforward and easy to use.
Fiddler: Fiddler will tell you what is going on between the server and the client.
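If you want to test the EF layer in isolation without LINQPad, a small console harness works too. A sketch (the context class name is an assumption; saw_order and Wo come from the DomainService above):

// Sketch: run the same query the DomainService runs, but from a
// console app, bypassing RIA Services and Silverlight entirely.
using (var context = new SawEntities())  // hypothetical context name
{
    var orders = context.saw_order
                        .Where(o => o.Wo == 12345)  // a known order number
                        .ToList();
    Console.WriteLine("Rows returned: {0}", orders.Count);
}

If this returns rows, EF and the MySQL connector are fine and the problem is in the RIA Services/Silverlight half of the chain, which is where Fiddler earns its keep.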