Ensuring RavenDB 3.0 FileSystem Synchronization Before Action - c#

Here is the setup:
There are two servers: one at the client site and one at a remote location.
The connection between the two is not always available.
I need code that replicates the functionality of the Server UI: FileSystem > Status > "Sync Now".
I have to be able to watch the process to ensure that it completes without conflicts before I move on to the next step.
Can anyone point me to the proper classes in the Raven Client Library to do this? Examples would be greatly appreciated.

You are looking for:
DestinationSyncResult[] syncResults = await store.AsyncFilesCommands.Synchronization.SynchronizeAsync();
This will force your server to push all changes to its destinations and return the details of every processed file, including any errors that occurred. Also investigate the other methods exposed by IAsyncFilesSynchronizationCommands:
store.AsyncFilesCommands.Synchronization.XXXXX
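For the "completes without conflicts before moving on" requirement, you can inspect the returned results before continuing. A minimal sketch; the Reports and Exception members of DestinationSyncResult and SynchronizationReport are recalled from the 3.0 client, so verify them against your client version:

DestinationSyncResult[] results = await store.AsyncFilesCommands.Synchronization.SynchronizeAsync();

foreach (var destination in results)
{
    // Destination-level failure (e.g. the link between the servers was down)
    if (destination.Exception != null)
        throw new InvalidOperationException(
            "Sync to " + destination.DestinationServer + " failed: " + destination.Exception.Message);

    // Per-file reports, including conflict and transfer errors
    foreach (var report in destination.Reports)
    {
        if (report.Exception != null)
            throw new InvalidOperationException(
                report.FileName + " failed: " + report.Exception.Message);
    }
}
// All destinations synced cleanly; safe to move on to the next step.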
You can also use the Changes API mechanism to be notified about server activity. It works the same way as for RavenDB databases. For example:
store.Changes().ForSynchronization().Where(x => x.Direction == SynchronizationDirection.Outgoing).Subscribe(x => { });


ASP.Net Server Side code, does it keep running after user logged out? [closed]

To give this question context, we have an ASP.NET MVC project which requires you to be authenticated to use the system (a typical SaaS product). The project includes an inactivity timer which logs the user out if they leave the screen alone for too long. It is an SPA-type project and Web API is used to get/post the relevant data.
I am currently developing a routine that archives a potentially huge amount of data, and the process itself is fine. What I'm not sure of is this: once the process is started, a POST is sent to Web API and the server-side code starts running; does it continue to run if the inactivity timeout occurs or the user logs out manually for some reason?
I assume it would but I don't like to rely on assumptions.
EDIT: For example
In response to the comments/answers below: the screen has a list of tickboxes for the data the user wishes to archive, so there is no fixed set of data and this project does need to perform the task itself.
The following code runs on the client side (checks etc. omitted; the data variable contains the true/false values of the ticks):
self.Running = true;
self.showProgress();
http.ajaxRequest("post", "/api/archive/runarchive", data)
    .done(function () {
        self.Running = false;
    })
    .fail(function () {
        self.Running = false;
        app.showMessage("You do not have permission to perform this action!");
    });
For reference, the showProgress function polls for progress to display on screen. It also runs when the screen is first accessed, so that a still-running archive process can be displayed:
self.showProgress = function () {
    http.ajaxRequest("get", "/api/archive/getarchiveprocess")
        .done(function (result) {
            if (result.ID == -1) {
                $("#progressBar").hide();
                $("#btnArchive").show();
                if (self.Running) setTimeout(self.showProgress, 2000);
                else app.showMessage("The Archive Process has finished.");
            }
            else {
                $("#progressBar").show();
                $("#btnArchive").hide();
                $("#progressBarInner").width(result.Progress + '%');
                $("#progressBarInner").attr("data-original-title", result.Progress + '%');
                setTimeout(self.showProgress, 2000);
            }
        });
};
Server Side:
[HttpPost]
public void RunArchive(dynamic data)
{
    // Add table row entry for the archive process for reference and progress
    // Check each tick and update tables/fields etc
    // Code omitted as very long and not needed for example
    // table row for reference edited during checks for showProgress function
}
So basically I'm asking whether the RunArchive() function on the controller will keep running until it's finished, despite the user logging off and becoming unauthenticated in some way. I'm aware that any IIS or app pool refresh etc. would stop it.
It sounds like Web API is the one doing the heavy work, and once that starts it will continue to run regardless of what happens on the UI side of things.
That said, there is a timeout for Web API requests that you can control in web.config.
You might want to consider another alternative. Whenever you're talking about heavy processing tasks, you're better off offloading them to another service.
Your API is supposed to be responsive and accessible by your users and it needs to respond fast to allow for a better experience. If you get 100 users doing heavy work, your API will basically crumble.
The API could simply send commands to a queue of stuff that needs to be run and another service can pick them up and execute them. This keeps your API lightweight while the work is still being done.
You're talking about archiving which probably involves a database and there is no reason why you can't have something else do that job.
You could keep track of jobs in the database, you could build a table which holds statuses and once a job is done, the external service changes the status in the database and your UI can then show the result.
So the API could work like this:
add message to queue
add job details to db with status of "new" for example and a unique id which allows the queue item to be linked to this record.
Service B picks up the job from the queue and updates status in db to "running".
Job finishes and Service B updates status to "complete".
the UI reflects these statuses so the users know what's going on.
Something like this would make for a better user experience, I think.
Feel free to change whatever doesn't make sense, it's a bit hard to give suggestions when you don't know the details of what needs to be done.
This Service B could be a Windows service, for example, or whatever else can do the job. User permissions come into play only at the beginning: a work item is added to the queue only if the user has permission to initiate it. This gives you the certainty that only authorized jobs are added.
After that, Service B won't care about user permissions and will run the job to the end, irrespective of whether users are logged in or not.
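To make this concrete, here is a rough sketch of the hand-off. Every type below (IJobQueue, IJobStore, ArchiveJob, ArchiveWorker) is a hypothetical placeholder; the real queue could be MSMQ, a database table, or anything else that fits:

using System;
using System.Web.Http;

// Hypothetical abstractions over whatever queue/storage you pick.
public interface IJobQueue { void Enqueue(ArchiveJob job); ArchiveJob Dequeue(); }
public interface IJobStore { void Insert(Guid id, string status); void SetStatus(Guid id, string status); }
public class ArchiveJob { public Guid Id; public string Options; }

public class ArchiveController : ApiController
{
    private readonly IJobQueue queue;
    private readonly IJobStore store;
    public ArchiveController(IJobQueue queue, IJobStore store) { this.queue = queue; this.store = store; }

    [HttpPost]
    public IHttpActionResult RunArchive([FromBody] string options)
    {
        // Permission check happens here, once, before anything is queued.
        var id = Guid.NewGuid();
        store.Insert(id, "new");                                // the row the UI polls
        queue.Enqueue(new ArchiveJob { Id = id, Options = options });
        return Ok(id);                                          // return immediately
    }
}

// Service B, e.g. the worker loop inside a Windows service:
public class ArchiveWorker
{
    private readonly IJobQueue queue;
    private readonly IJobStore store;
    public ArchiveWorker(IJobQueue queue, IJobStore store) { this.queue = queue; this.store = store; }

    public void ProcessNext()
    {
        var job = queue.Dequeue();
        store.SetStatus(job.Id, "running");
        // ... do the actual archiving, updating progress as it goes ...
        store.SetStatus(job.Id, "complete");
    }
}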
This is largely guesswork at this point, but you should be able to get an idea of how to do this.
If you have more specific requirements you should add those to the initial question.
Even if the process isn't killed by the user logging out, you also need to consider that IIS can recycle app pools; by default it does so once a day, as well as under memory contention, either of which will kill your long-running process.
I would highly recommend you check out Hangfire.io, which is designed to help with long running processes in ASP.Net sites.
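For example, with Hangfire the controller action just enqueues the work and returns immediately; Hangfire persists the job so it survives app pool recycles. In this sketch, ArchiveService and ArchiveOptions are hypothetical stand-ins for your archive code:

using Hangfire;

[HttpPost]
public IHttpActionResult RunArchive(ArchiveOptions data)
{
    // The job is persisted to Hangfire's storage and executed by a
    // background server, independent of the user's session or this request.
    string jobId = BackgroundJob.Enqueue<ArchiveService>(svc => svc.Run(data));
    return Ok(jobId);
}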

Detect Internet Connectivity on client machine

I have a web application (MVC, using C#) that serves as an advertisement in my client's office. My client opens this "advertisement page" and displays it on a big screen for their customers.
Every 30 minutes or so, the page automatically refreshes to fetch the latest data from the database. However, they use WiFi to connect to our server, and sometimes the connection is very slow (or lost completely). My client asked me to write code that prevents the page from refreshing if the connectivity is bad or there is no internet connection. (They do not want to show "No Internet Connection" on their advertisement TV.)
I know I cannot do anything from the server-side code, because it is the client's machine that needs to detect the internet connection, leaving client-side code as the only option. I am not good at this; can anyone help me out?
I'd suggest a "ping" sent via ajax:
var timeStart = new Date().getTime();
$.ajax({
    url: "url-to-ping-response-file",
    success: function () {
        var timeNow = new Date().getTime();
        var ping = timeNow - timeStart;
        // less than one second
        if (ping < 1000) {
            window.location.reload();
        }
    }
});
You can use the Circuit Breaker Pattern to gracefully handle intermittently connected environments.
Here are 2 open source JavaScript implementations. I have never used either of them, so I cannot attest to their quality.
https://github.com/yammer/circuit-breaker-js
https://github.com/mweagle/circuit-breaker
You can also make use of
if (navigator.onLine) {
    location.reload();
}
This will not detect slow internet, though. I don't know your web layout, but for the sites I work on I tend to fetch the HTML content and the data as separate calls. I do this with an MVVM/MVC pattern, which is worth learning; I use AngularJS and it is very good.
You can also use good old jQuery to replace the content; have a read of Replace HTML page with contents retrieved via AJAX. You could couple this with the navigator.onLine check.
http://www.w3schools.com/jsref/prop_nav_online.asp

Update many entities in one WCF call versus hitting the WCF service for each entity to update

I work on a multi-tier application and I need to optimize a long-running process in three ways:
Avoiding EF update concurrency problems.
Improving speed.
Informing the user of the progress.
Currently, the client code calls a WCF service using a method that does all the work (evaluating the number of entities to update, querying the entities, updating them and, finally, saving them back to the database).
The process is very long, and nothing is sent back to the user except the final result once the process is done. The user can sit in front of the wait form for up to 10 minutes, not knowing what is happening.
The number and depth of the queried entities can become really big, and I sometimes hit OutOfMemoryExceptions. I had to change the service method to process updates 100 entities at a time, so my DbContext is refreshed often and doesn't grow too big.
My actual problem is that I cannot inform the user each time an entity is updated, because my service method runs the whole process before returning its result to the user.
I read about implementing a duplex service, but since I have to return two different callbacks to the user (one to return the number of entities to update and another for the result of each entity update), I would have to use multiple interface inheritance on a generic callback interface, and it's becoming a little messy (well, for my taste).
Wouldn't it be better to have one WCF service method that returns the number of entities to evaluate, and another WCF method that returns a single entity update result and is hit once for every entity to update? My DbContext would live only for the duration of a single entity update, so it would not grow very much, which I think is good. However, I am concerned about hitting the WCF service that often during the process.
What are your thoughts? What can you suggest?
Have you thought about adding a WCF host to your client? That way you get full two-way comms.
Client connects to the server and gives it the details of the client's own WCF host.
Client requests the long-running operation to begin.
Server sends multiple updates to the client's WCF host as work progresses.
Server sends work complete to client.
This leaves your client free to do other things, consuming the messages from the server as you see fit. Maybe updating a status area as messages come in.
You could even get the server to maintain a list of clients and send updates to them all.
--------EDIT---------
When I say WCF host, I mean a ServiceHost.
It can be created automatically from the XML in your App.config, or directly in code:
var myUri = new Uri[1];                         // array must have room for the address
myUri[0] = new Uri("net.tcp://localhost:4000");
var someService = new SomeService();            // implements the ISomeService interface
var host = new ServiceHost(someService, myUri);
var binding = new NetTcpBinding();              // need to configure this
host.AddServiceEndpoint(typeof(ISomeService), binding, "");
host.Open();
Proxy is a term I use for what a client uses to connect to the server; it was in an early example I came across and it's stuck with me since. Again, it can be created both ways.
var binding = new NetTcpBinding();              // need to configure this
var endpointAddress = new EndpointAddress("net.tcp://localhost:4000");
var factory = new ChannelFactory<ISomeService>(binding, endpointAddress);
var proxy = factory.CreateChannel();
proxy.DoSomeWork();
So in a typical client/server app you have
CLIENT APP 1               SERVER APP               CLIENT APP 2
   proxy----------------->ServiceHost<-----------------proxy
What I am suggesting is that you can make the client be a "server" too
CLIENT APP 1               SERVER APP               CLIENT APP 2
   proxy----------------->ServiceHostA<----------------proxy
   ServiceHostB<----------proxy1
                          proxy2------------------->ServiceHostB
If you do that, you can still split your large task into smaller ones if needed (you mentioned memory issues), but from the sound of things they might still take some time, and this way progress updates can still be sent back to the client, or even to all clients if you want everyone to be aware of what's happening. No callbacks needed, though you can still use them if you want.
Avoiding EF update concurrency problems.
See this question/answer Long running Entity Framework transaction
Improving speed.
Some suggestions:
Try using SQL Profiler to see what SQL queries are being executed, and optimize the LINQ query.
Or try improving the query itself, or calling a stored procedure.
Can the updates be done in parallel? Different threads? Different processors?
Informing the user of the progress.
I would suggest changing the client to call an async method, or a method which starts the long-running operation asynchronously. This would return control to the client immediately. It would then be up to the long-running operation to provide feedback on its progress.
See this article for updating progress from a background thread
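As a sketch of that shape (a hypothetical contract, not the poster's actual service): a one-way operation returns to the client as soon as the message is accepted, and a second operation lets the client poll for progress:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IUpdateService
{
    // Returns immediately; the service carries on with the
    // long-running update in the background.
    [OperationContract(IsOneWay = true)]
    void StartLongRunningUpdate(Guid operationId);

    // Polled by the client (e.g. on a timer) to update the wait form.
    [OperationContract]
    int GetProgressPercent(Guid operationId);
}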
Update
the "architecture" I would suggest would be as follows:
 ________          _________      _______      ____
|        |        |   WCF   |    |  EF   |    |    |
| Client |------->| Service |--->| Class |--->| DB |
|________|        |_________|    |_______|    |____|
The WCF service is only responsible for accepting client requests and starting off the long-running operation in the EF class. The client should send an async request to the WCF service so it retains control and responsiveness. The EF class is responsible for updating the database, and you may choose to update all records at once or a subset at a time. The EF class can then notify the client, via the WCF service, of any progress it has made, as required.
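A minimal sketch of the batching idea inside the EF class (MyDbContext, MyEntities, ApplyUpdate and the progress callback are all hypothetical placeholders): a fresh context per batch keeps memory flat, and the callback is where the progress notification would hook in:

using System;
using System.Collections.Generic;
using System.Linq;

public class BatchUpdater
{
    const int BatchSize = 100;

    public void UpdateInBatches(IList<int> entityIds, Action<int, int> reportProgress)
    {
        for (int offset = 0; offset < entityIds.Count; offset += BatchSize)
        {
            var batchIds = entityIds.Skip(offset).Take(BatchSize).ToList();

            using (var context = new MyDbContext())    // fresh, small context per batch
            {
                var entities = context.MyEntities
                                      .Where(e => batchIds.Contains(e.Id))
                                      .ToList();
                foreach (var entity in entities)
                    ApplyUpdate(entity);               // the domain-specific update
                context.SaveChanges();
            }

            reportProgress(Math.Min(offset + BatchSize, entityIds.Count), entityIds.Count);
        }
    }
}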

Connection Pooling with NEST ElasticSearch Library

I'm currently using the NEST Elasticsearch C# library for interacting with Elasticsearch. My project is an MVC 4 Web API project that basically builds a RESTful web service for accessing directory assistance information.
We've only just started working with NEST and have been stumbling over the lack of documentation. What's there is useful, but it has some very large holes. Currently everything we need works; however, we're running into an issue with connections sometimes taking up to a full second. What we'd like is some sort of connection pooling, similar to how you'd interact with SQL Server.
Here is the documentation on how to connect using nest: http://mpdreamz.github.com/NEST/concepts/connecting.html
Here is the relevant code snippet from our project:
public class EOCategoryProvider : IProvider
{
    public DNList ExecuteQuery(Query query)
    {
        // Configure the elastic client and its settings
        ConnectionSettings elasticSettings = new ConnectionSettings(Config.server, Config.port)
            .SetDefaultIndex(Config.index);
        ElasticClient client = new ElasticClient(elasticSettings);

        // Connect to Elastic
        ConnectionStatus connectionStatus;
        if (client.TryConnect(out connectionStatus))
        {
            // Elastic Search Code here ...
        } // end if
    } // end ExecuteQuery
} // end EOCategoryProvider
From looking at the documentation, I can't see any provisions for a connection pool. I've been thinking about implementing my own (having, say, 3 or 4 ElasticClient objects stored and selecting them round-robin style), but I was wondering if anyone had a better solution. If not, does anyone have advice on the best way to implement a connection pool by hand? Any articles to point to?
Thanks for anything you guys come up with.
Update: This seems to have been related to calling TryConnect on every request, plus the particular network setup. The problem completely disappeared when using a machine on the same network as the Elastic box; my development machine (which averages 350 ms to the Elastic box) seemed to sometimes fail to make HTTP connections, which caused the long times in TryConnect.
You don't have to call TryConnect() each time you do a call to Elasticsearch. It's basically a sanity check call for when your application starts.
NEST is the C# REST client for Elasticsearch and the default IConnection uses WebRequest.Create which already pools TCP connections.
Review the actual implementation: https://github.com/elastic/elasticsearch-net/blob/master/src/Elasticsearch.Net/Connection/HttpConnection.cs
Reusing ElasticClient won't offer any performance gains since each call already gets its own HttpWebRequest. The whole client is built stateless on purpose.
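Putting both points together: a sketch that constructs the client once and performs the TryConnect sanity check only at startup, reusing the old NEST API calls from the question's snippet:

using System;

public static class Elastic
{
    // Built once; the client is stateless, so it is safe to share.
    public static readonly ElasticClient Client = CreateClient();

    static ElasticClient CreateClient()
    {
        var settings = new ConnectionSettings(Config.server, Config.port)
            .SetDefaultIndex(Config.index);
        var client = new ElasticClient(settings);

        ConnectionStatus status;
        if (!client.TryConnect(out status))          // sanity check, once
            throw new InvalidOperationException("Could not reach Elasticsearch at startup.");

        return client;
    }
}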
I am, however, very interested in why calls are taking 1 second for you. Could you post the actual NEST code, how you are measuring the calls, and a description of your data?
Disclaimer: I'm the author of NEST.

Function call on server by multiple clients: Isolate each client calls

My project was a standalone application; then I decided to split it into client and server, because I need powerful CPU usage and portability at the same time. Now multiple clients can connect to one server.
It was easy when one-by-one processing did the job. Now I need to call the same function and scope again and again, at the same time, via client requests.
Can anyone give me a clue how I should handle these operations? I need to know how to isolate each client's processing from the others on the server side. My communication is asynchronous: the server receives a request and starts a new thread. I think I should pass one parameter carrying the client information, and another as a job id (to route results back to the client; a client may ask for multiple jobs, and some jobs finish quicker than others).
Should I instantiate the Process class on each call? Can I use a static method, etc.? Any explanation will be of great help!
Below is the part of my code to need modification
static class Data
{
    public static readonly int[] listOfValues = ...;
}

class Process
{
    int bestValue;

    void findBestValue(int from, int to)
    {
        ...
        if (processResult > bestValue) bestValue = processResult;
        ...
    }

    ...
    for (int i = 0; i < 10; i++)
    {
        int start = i * 1000;          // copy i locally so each thread gets its own range
        startThread(() => findBestValue(start, start + 999));
    }
    ...
}
EDIT: I think I have to instantiate a new Process class and call the function for each client, and ignore a repeat request from the same client for the same job, since that job is already running.
Without getting into your application design, since you didn't say much about it, I think your problem is ideal for WCF web services. You get client isolation by design, because every request starts in its own thread. You can create the WCF host as a standalone application or a Windows service.
You can wrap your communication in a WCF service and configure it to be a PerCall service (meaning each request is processed separately from the others).
That way you keep synchronization concerns out of your business logic. This is the best approach, because managing and creating threads is not difficult to implement, but it is difficult to implement correctly and in a way that is optimized for resource consumption.
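A minimal sketch of what PerCall looks like, reusing names from the pseudocode above (a hypothetical contract, not a finished design); each call gets a brand-new service instance, so per-call state such as bestValue is isolated between clients automatically:

using System.ServiceModel;

[ServiceContract]
public interface IProcessService
{
    [OperationContract]
    int FindBestValue(int from, int to);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ProcessService : IProcessService
{
    int bestValue;                     // fresh instance (and field) for every call

    public int FindBestValue(int from, int to)
    {
        // ... the existing search logic, e.g. fanning out over Data.listOfValues ...
        return bestValue;
    }
}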
