I have a Windows Communication Foundation (WCF) service running. When the service is called, it sometimes has to start again and repopulate all the static values, because I guess the web server shuts the service down after a period of inactivity. If it is called constantly, it stays active and all the values stay populated. I have a static integer in that class that gets used by the service calls, set to 30 for example. I sometimes change that static value through a service call so that the new value is 20, but since the service gets torn down by the web server after a period of inactivity, the value is initialized back to 30. Is there a way to permanently set the 30 value to 20 when the WCF service gets constructed (when the static constructors get called)? Is there any better way to do this? I have 2-3 of these values that I want to change permanently, whenever I want, through a web service call.
I would suggest using the built-in Settings class:
Settings.Default["StaticValue"] = "30";
Settings.Default.Save();
http://msdn.microsoft.com/en-us/library/aa730869(v=vs.80).aspx
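A minimal sketch of how that could tie into the static constructor the question mentions. This assumes a user-scoped setting named "StaticValue" with a default of 30 defined in the project's Settings designer; the class and method names are made up:

using System;

public static class ServiceDefaults
{
    public static int StaticValue;

    // Runs again whenever the service class is reconstructed after an idle shutdown.
    static ServiceDefaults()
    {
        StaticValue = Convert.ToInt32(Settings.Default["StaticValue"]);
    }

    // Call this from the WCF operation that changes the value.
    public static void Update(int newValue)
    {
        StaticValue = newValue;
        Settings.Default["StaticValue"] = newValue;
        Settings.Default.Save();   // persisted, so it survives the next recycle
    }
}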
Even without the restart, if you are only storing this value in memory it's not stored "permanently". A database or similar would be great, but if you are not using a database for anything already, using one to store a single value seems like overkill. What about writing to a file?
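If a file is enough, something along these lines could work; the file path and member names here are only placeholders:

using System;
using System.IO;

// Write the value to a small file whenever it changes, and read it back
// when the service spins up again.
public static class PersistedValues
{
    private static readonly string FilePath =
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "App_Data", "static-value.txt");

    public static int Load(int defaultValue)
    {
        // Fall back to the compiled-in default if the file has never been written.
        if (!File.Exists(FilePath)) return defaultValue;
        return int.TryParse(File.ReadAllText(FilePath), out var value) ? value : defaultValue;
    }

    public static void Save(int value)
    {
        Directory.CreateDirectory(Path.GetDirectoryName(FilePath));
        File.WriteAllText(FilePath, value.ToString());
    }
}

The static constructor would then call PersistedValues.Load(30), and the service operation that changes the value would call PersistedValues.Save(20). The only thing to watch is that the IIS application pool identity needs write access to that folder.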
Related
I have a sample web app deployed to Azure. The app caches a value (read from the database) using MemoryCacheEntryOptions with a five-minute expiry.
However, after the 5 minutes have passed, I can still query the cache via the Chrome debugging tool; I expected the cached value to be empty, or to be whatever new value is currently stored in the database.
I even tried clearing the cache in the web browser, but the cache still seems to retain the previous value.
However, when I restart the web site and open the web app again, the cached value no longer exists.
Could any setting in Azure be affecting the cache expiry?
// Requires Microsoft.Extensions.Caching.Memory.
private readonly IMemoryCache _memoryCache;
private readonly MemoryCacheEntryOptions _cacheEntryOptions;

protected CacheService(IMemoryCache memoryCache)
{
    _memoryCache = memoryCache;
    // Entries stored with these options expire 5 minutes after they are written.
    _cacheEntryOptions = new MemoryCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(300)
    };
}
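For reference, entries only pick up these options when they are stored with them. A minimal sketch of what the read/write side typically looks like; GetValue and LoadFromDatabase are placeholder names, not the app's actual code:

public string GetValue(string key)
{
    if (!_memoryCache.TryGetValue(key, out string value))
    {
        value = LoadFromDatabase(key);                    // placeholder for the real query
        _memoryCache.Set(key, value, _cacheEntryOptions); // expiry applies from this point
    }
    return value;
}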
Debugging the behavior of a web application is notoriously hard, as all you have to control it with is the browser, and you never get exclusive access.
Even if you did not refresh the page, any number of things might have queried the server. The culprits range from any search engine's web crawler to somewhat aggressive security tools (some viruses use web servers, so security software probes them). You could try a much shorter timeout. Ideally, though, you want both the server and the client you access it with running in separate virtual machines that are only connected via the hypervisor; that way you can be certain nobody is interfering.
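One way to see exactly when an entry goes away is to test with a much shorter timeout and an eviction callback. A sketch, assuming Microsoft.Extensions.Caching.Memory and a made-up key name:

var options = new MemoryCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(30)   // short, for testing
};
options.RegisterPostEvictionCallback((key, value, reason, state) =>
{
    // Distinguishes "the entry is still alive" from "something re-populated it".
    Console.WriteLine($"Cache entry '{key}' evicted: {reason}");
});
_memoryCache.Set("some-key", "some-value", options);

Note that MemoryCache removes expired entries lazily, so the callback typically fires on the next access to the cache rather than at the exact expiration moment.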
I work on a multi-tier application and I need to optimize a long-running process in three ways :
Avoiding EF update concurrency problems.
Improving speed.
Informing the user of the progress.
Currently, the client code calls a WCF service using a method that does all the work (evaluating the number of entities to update, querying the entities to update, updating them and, finally, saving them back to the database).
The process is very long and nothing is sent back to the user except the final result once the process is done. The user can stay in front of the wait form for up to 10 minutes, not knowing what is happening.
The number, and depth of the queried entities can become really big and I sometimes hit OutOfMemoryExceptions. I had to change the service method to process entity updates 100 entities at a time, so my DbContext will be refreshed often and won't become too big.
My actual problem is that I cannot inform the user each time an entity is updated, because my service method does the whole process before returning its result to the user.
I read about implementing a duplex service, but since I have to return two different callbacks to the user (one callback to return the number of entities to update and another callback for the result of each entity update) I have to use multiple interface inheritance on a generic callback interface and it's becoming a little messy (well, to my taste).
Wouldn't it be better to have one WCF service method that returns the number of entities to evaluate, and another WCF method that returns a single entity update result, which would be hit for every entity to update? My DbContext would live only for the duration of a single entity update, so it would not grow very much, which I think is good. However, I am concerned about hitting the WCF service really often during that process.
What are your thoughts? What can you suggest?
Have you thought about adding a WCF host to your client? That way you get full two-way comms.
The client connects to the server and gives it the connection details for the client's own host.
The client requests the long-running operation to begin.
The server sends multiple updates to the client's WCF host as the work progresses.
The server sends "work complete" to the client.
This leaves your client free to do other things, consuming the messages from the server as you see fit. Maybe updating a status area as messages come in.
You could even get the server to maintain a list of clients and send updates to them all.
--------EDIT---------
When I say WCF host I mean a ServiceHost
It can be created automatically from the XML in your App.config, or directly through code.
// The base-address array needs one slot, otherwise indexing it throws.
var myUri = new Uri[1];
myUri[0] = new Uri("net.tcp://localhost:4000");

var someService = new SomeService();   // implements the ISomeService interface
var host = new ServiceHost(someService, myUri);
var binding = new NetTcpBinding();     // needs configuring for your environment
host.AddServiceEndpoint(typeof(ISomeService), binding, "");
host.Open();
"Proxy" is a term I use for what a client uses to connect to the server; it was in an early example I came across and it's stuck with me since. Again, it can be created both ways.
var binding = new NetTcpBinding();   // needs to match the host's binding configuration
var endpointAddress = new EndpointAddress("net.tcp://localhost:4000");
var factory = new ChannelFactory<ISomeService>(binding, endpointAddress);
var proxy = factory.CreateChannel();
proxy.DoSomeWork();
// When finished: ((IClientChannel)proxy).Close(); factory.Close();
So in a typical client/server app you have
CLIENT APP 1                SERVER APP              CLIENT APP 2
proxy---------------------->ServiceHost<------------proxy

What I am suggesting is that you can make the client be a "server" too

CLIENT APP 1                SERVER APP              CLIENT APP 2
proxy---------------------->ServiceHostA<-----------proxy
ServiceHostB<---------------proxy1
                            proxy2----------------->ServiceHostB
If you do that, you can still split your large task into smaller ones if needed (you mentioned memory issues), but from the sounds of things they might still take some time, and this way progress updates can still be sent back to the client, or even to all clients if you want everyone to be aware of what's happening. No callbacks needed, though you can still use them if you want.
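To make the "client hosts a service too" idea concrete, a minimal sketch of the contract the client might host; all of the names here are made up:

using System.ServiceModel;

// The CLIENT hosts this contract; the server creates a proxy to it and pushes updates.
[ServiceContract]
public interface IProgressService
{
    [OperationContract(IsOneWay = true)]
    void ReportTotal(int entitiesToUpdate);

    [OperationContract(IsOneWay = true)]
    void ReportProgress(int entitiesUpdated, string message);

    [OperationContract(IsOneWay = true)]
    void ReportComplete();
}

The client would host an implementation of this with a ServiceHost exactly like the snippet above, and hand its address to the server when it kicks off the long-running operation.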
Avoiding EF update concurrency problems.
See this question/answer Long running Entity Framework transaction
Improving speed.
Some suggestions:
Try using SQL Profiler to see what SQL query is being executed, and optimize the LINQ query.
Or try improving the query itself or calling a stored procedure.
Can the updates be done in parallel? different threads? different processors?
Informing the user of the progress.
I would suggest changing the client to call an async method, or a method which then starts the long-running operation asynchronously. This would return control back to the client immediately. Then it would be up to the long-running operation to provide feedback on its progress.
See this article for updating progress from a background thread
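As a rough illustration of the "return control to the client immediately" part, assuming a task-based proxy operation; all of the names below are placeholders:

// Client side: start the long-running call without blocking the UI thread.
private async void StartUpdateButton_Click(object sender, EventArgs e)
{
    statusLabel.Text = "Update started...";

    // BeginLongRunningUpdateAsync is a hypothetical task-based proxy method.
    var result = await _serviceProxy.BeginLongRunningUpdateAsync();

    statusLabel.Text = $"Done: {result.EntitiesUpdated} entities updated.";
}

Progress feedback would then arrive separately, for example via a callback, polling, or a client-hosted service as described in the other answer.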
Update
the "architecture" I would suggest would be as follows:
 ________            _________      _______      ____
|        |          |   WCF   |    |  EF   |    |    |
| Client |--------->| Service |--->| Class |--->| DB |
|________|          |_________|    |_______|    |____|
The WCF service is only responsible for accepting client requests and starting off the long-running operation in the EF class. The client should send an async request to the WCF service so it retains control and responsiveness. The EF class is responsible for updating the database, and you may choose to update all or a subset of records at a time. The EF class can then notify the client, via the WCF service, of any progress it has made, as required.
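A rough sketch of the batch loop inside the EF class; the entity names, the MyDbContext type, the Recalculate rule, and the progress callback are all hypothetical, and it assumes System.Linq:

// Process the update in fixed-size batches so the DbContext never grows too large,
// and report progress after each batch.
public void UpdateAll(Action<int, int> reportProgress)
{
    int total;
    using (var countContext = new MyDbContext())
    {
        total = countContext.Orders.Count(o => o.NeedsUpdate);
    }

    const int batchSize = 100;
    var processed = 0;

    while (processed < total)
    {
        using (var context = new MyDbContext())   // fresh, small context per batch
        {
            var batch = context.Orders
                .Where(o => o.NeedsUpdate)
                .OrderBy(o => o.Id)
                .Take(batchSize)
                .ToList();

            if (batch.Count == 0) break;

            foreach (var order in batch)
            {
                order.Total = Recalculate(order);   // placeholder business rule
                order.NeedsUpdate = false;
            }

            context.SaveChanges();
            processed += batch.Count;
        }

        reportProgress(processed, total);            // forwarded to the client as needed
    }
}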
This is about my solution to that question
It has been a long time since I last did any C# coding, and it is my first time writing a web service...
Previous Question:
I need to use a DLL on Ubuntu with Python. The final solution is to use a web service for that purpose...
My problem is, the API is used for a kind of payment. There are three basic functions of the DLL to be used in the web service... The first one connects to the server, the second one asks for the available payments, and the third one selects one of them and makes the payment...
Since my system is using Python, I wish to keep the logic that selects the payment method in Python, not in the web service.
And my problem is: when I make a connection, the web service must create a connection object and do the following two steps using that connection. After that, it may dispose of that connection object and create a new one for the next connection and payment.
So, my Python code will do something like that...
Use web service and create a connection
Get a list of available payments from web service (these two functions can be used as a single function in the web service)
Do some calculation and select the proper payment in python...
Send payment method info to web service...
All these steps must be done with the connection object from the first step.
As I said before, I do not have much knowledge about web services or about using them from Python... So I'm confused about whether I may use the same connection object for steps 2 and 4. If I create the connection object as a global in my web service during the connection step, will my following function calls use that object? In OOP that's the way it would be, but I cannot be sure whether it will be the same with web services.
Some code snippet :
namespace paymentType{
    public class x : System.Web.Services.WebService{
        // Fields intended to be shared across the web method calls below.
        ConnectionObj conn;
        ConnResult result;

        [WebMethod]
        public void ConnectToServer(String deviceId){
            conn = new ConnectionObj();
            result = conn.Connect(deviceId);
        }

        [WebMethod]
        public List<int> GetCompanyList(){
            List<int> kurumlar = new List<int>();
            if (result.CRCStatus){
                if (conn.CompanyList != null) { blah blah blah...}
Since conn is a global, can I set it in the ConnectToServer call and then use that conn object from the other functions?
UPDATE: Let me try to make it clearer...
When I connect to the remote server (via a function in the DLL), the remote server accepts my connection and gives me a more-or-less unique id for that connection. Then I ask for the available payments for a customer. The server sends all the available ones, each with a transaction id belonging to that transaction. And in the final step, I use the transaction id that I want in order to make the payment. The problem is, each transaction id is usable only within the connection in which it was created. So I must request the transaction ids and confirm the one I want within the same connection...
But as far as I can see, the best solution is to use a single function call and do the whole job in the web service, since the API provider considers that removing the connection-transactionId lock might cause security vulnerabilities...
But on the other hand, I do not want to handle it in the web service...
One more question... In the connection step, might it work to create the connection and use set/get functions, or to return the connection object and pass it back to the web service for each following step?
If you're communicating using a web service, it should preferably be stateless: that is, you should always send any context information the service implementation needs in the request itself. While technologies that let you implement stateful web services exist, they'd likely make things more complicated, not less.
I'm not clear from your description on why you need the connection object to be created in step 1, or why you can't just create a different connection object for steps 2 and 4, which is how I'd implement this.
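A minimal sketch of that stateless layout. ConnectionObj is the type from the question, but every other member name here is made up, the connection type is assumed to be disposable, and System.Linq is assumed; each call creates, uses, and disposes its own connection, and the client passes a stable identifier rather than the connection-scoped transaction id:

[WebMethod]
public List<PaymentOption> GetAvailablePayments(string deviceId, string customerId)
{
    using (var conn = new ConnectionObj())
    {
        conn.Connect(deviceId);
        return conn.QueryPayments(customerId);
    }
}

[WebMethod]
public PaymentResult MakePayment(string deviceId, string customerId, int companyId)
{
    using (var conn = new ConnectionObj())
    {
        conn.Connect(deviceId);
        // Re-query to obtain a fresh, connection-scoped transaction id for the
        // payment the client picked (identified by the company, not by the old id).
        var options = conn.QueryPayments(customerId);
        var chosen = options.First(o => o.CompanyId == companyId);
        return conn.Pay(chosen.TransactionId);
    }
}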
I must build an application that will use WebClient multiple times to retrieve information from a server every "t" seconds.
Here is a small plan to show you what I'm doing in my application:
Connect to the Web Client "USER_LOGIN" that returns me a GUID(user unique ID). I save it and keep it to use it in future Web Client calls.
Connect to the Web Client "USER_GETINFO" using the GUID I saved before as parameter. This Web Service returns an array of strings holding all my personal user information( my Name, Age, Email, etc...). => I save the array information this way: Textblock.Text = e.Result[2].
Starting a Dispatcher.Timer with a 2 seconds Tick to start my Loop. (Purpose of this is to retrieve information and update it every 2 seconds)
Connect to the Web Client "USER GETFRIEND", wich is in my Timer, giving him the GUID as parameter. It returns me an array filled with my friends informations(Name, email, message, etc...). I inserted this WebClient in the timer so my friend list refreshes every 2 seconds.
I am able to create all the steps without any error until step 3. When I call the "USER_GETFRIEND" Web Client I am facing two major problems:
On one side I noticed that my number of Thread increased dramatically. => I always thought that when a WebClient had finished its instructions it would shut down by itself, but apparently that does not happen in Asyncronous calls.
And on the other side I was surprised to see that using the same proxy for two Webclient calls(ie: if i declare test.MainSoapClient proxy = new test.MainSoapClient()), the data i would retrieve from "USER_GETFRIEND" e.Result, was sent directly to my "USER_GETINFO" array. And so my Name and Email adresses on the UI were replaced by the same value in the USER_GETFRIEND array. So my Name is changed to my friends email and so on...
I would like to know if it's possible to close a WebClient call(or Thread) that I am not using anymore to prevent any conflicts? Or if someone has any suggestion concerning my code and the way i should develop my application please feel free to propose.
I got the answer a few weeks ago and figured out it was important to answer my own question.
My whole problem was that I wasn't unsubscribing from my asynchronous calls and that I was reusing the same proxy class from "Add Service Reference":
So when I was using:
proxy.webservice += new EventHandler<whateverinhere>(my_method);
I never did:
proxy.webservice -= new EventHandler<whateverinhere>(my_method);
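For anyone hitting the same thing, a sketch of keeping a named handler around so it can be removed once the call completes; the operation names follow the usual generated Async/Completed pattern but are otherwise made up:

private void RequestFriends(string guid)
{
    var proxy = new test.MainSoapClient();
    proxy.USER_GETFRIENDCompleted += OnGetFriendCompleted;   // named handler, not a lambda
    proxy.USER_GETFRIENDAsync(guid);
}

private void OnGetFriendCompleted(object sender, test.USER_GETFRIENDCompletedEventArgs e)
{
    var proxy = (test.MainSoapClient)sender;
    proxy.USER_GETFRIENDCompleted -= OnGetFriendCompleted;   // unsubscribe so nothing leaks

    friendListBox.ItemsSource = e.Result;                    // hypothetical UI update
}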
Hope it will help someone.
OK, so part two of my "I have no willpower" experiment is:
Summary Question -
Is there a way to set the CanStop property on a windows service dynamically?
Whole Spiel -
I have a service that checks for and kills the processes (i.e. games) I have told it to, on days when I'm not allowed to play. Great. I set CanStop to false so that I can't just kill the service if I give in to the addiction. I also have a program with a password check (someone else enters the password) that will stop the service if the password is correct (in case I have serious withdrawals). The problem is using the ServiceController class.
As far as I can tell, ServiceController is just a decorator (yeah, a design-pattern guess), so I have no way to get at the actual service it represents. My first attempt was PropertyInfo, but I was too dumb to realize that would be pointless. The second was FieldInfo, because I thought there might be a private field that "represents" the service. As you might guess, both failed.
Any ideas?
EDIT 1
I'd like to avoid having the CanStop value somewhere I can get to it easily, like a config file or the registry. So I am attempting, though not successfully, to have this handled completely in the program.
New (Failed) Attempts:
using System.Management;   // requires a reference to System.Management.dll

// 'StopProgram' is the name of the service to stop.
ManagementObject service = new ManagementObject("Win32_Service.Name='StopProgram'");

// Ask the service control manager, via WMI, to stop the service.
ManagementBaseObject stopService = service.InvokeMethod("StopService", null, null);
Not sure this did anything. I assume it couldn't stop because of the CanStop situation.
The "CanStop" is a attribute of the services registration in the windows service control manager. You can't change it mid-stride.
And, of course, if you're smart enough to write your own service then you're smart enough to bring up task-man and simply kill the service process. CanStop will not prevent you from pulling the rug out from under the service. CanStop only tells the service control manager not to send "Stop" commands to the service.
If you want to allow something to pass then use a global event to enable/disable the checking the service does -- or just remove the games from the PC! :-)
Rather than trying to directly access and control the service, could you set a flag somewhere (like the registry or a file) that is then checked by your service before it executes the event you're trying to control?
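For example, a minimal sketch of the registry-flag idea; the key path and value name are made up:

using Microsoft.Win32;

// Checked by the service before it kills anything.
object flag = Registry.GetValue(
    @"HKEY_LOCAL_MACHINE\SOFTWARE\StopProgram",   // hypothetical key
    "BlockingEnabled",                            // hypothetical value name
    1);                                           // default: blocking stays on

bool blockingEnabled = !Equals(flag, 0);   // anything except an explicit 0 keeps blocking on
if (blockingEnabled)
{
    // check for and kill the forbidden processes here
}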