I understand that you can run a .NET application from a network share. I have done so many times and it works. My question relates to performance. Will the app run slower with 10 concurrent users than with 2? Is there some magical number of concurrent users that provides a rule of thumb, e.g. if you have more than 20 users, don't host the app this way?
Does each client copy the app and run from the copy in a temp folder? Or are they each accessing the source file? Where does Application.ExecutablePath point in this situation?
EDIT: There is no Access database involved. All data persistence will be handled through a SQL Server database with stored procedures, etc.
The executable is copied from the Windows share to the memory of the workstation computer, and executed there. There shouldn't be any discernible difference in performance, assuming the executable itself has no sharing issues.
Of course, if your app is sharing a database, there might be concurrency issues there, but that has nothing to do with where the workstation gets the executable.
Application.ExecutablePath will point to the folder on the network share. Environment.SpecialFolder.Desktop should still point to the workstation.
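If you want to double-check this on a test workstation, a tiny sketch like the following (purely illustrative, not production code) will show where both paths resolve when the exe is launched from the share:

// Quick check of where paths resolve when the exe is launched from a network share.
using System;
using System.Windows.Forms;

static class PathCheck
{
    [STAThread]
    static void Main()
    {
        string exePath = Application.ExecutablePath;   // expected: the network share path
        string desktop = Environment.GetFolderPath(Environment.SpecialFolder.Desktop); // expected: the local workstation's desktop
        MessageBox.Show("Executable: " + exePath + Environment.NewLine + "Desktop: " + desktop);
    }
}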
I am using the managed ESENT PersistentDictionary class to get my C# application to store data on a network share. It all works great, with one caveat. When I try to access the database from multiple client computers (all running my app), I sometimes get an exception with the message "System path already used by another database instance".
From the documentation, I gather that ESENT supports concurrency, so this should not be a problem. What am I doing wrong?
Thank you.
There's a slight misunderstanding. ESENT supports multi-threaded concurrency, but not multi-process concurrency. esent.dll opens the file exclusively, so any other process will get ACCESS_DENIED (with the exception of read-only mode -- multiple processes can open a database in read-only mode).
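For illustration, the pattern you describe presumably looks roughly like this (the path is just a placeholder); the second process to open the same directory is the one that gets the error:

// Sketch: two separate processes running this against the same directory will conflict,
// because esent.dll opens the database files exclusively.
using System;
using Microsoft.Isam.Esent.Collections.Generic;

class Program
{
    static void Main()
    {
        // Placeholder path -- substitute the shared directory you are actually using.
        using (var dict = new PersistentDictionary<string, string>(@"\\server\share\MyAppData"))
        {
            dict["lastRun"] = DateTime.UtcNow.ToString("o");
        }
    }
}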
In addition, the file-locking over SMB isn't quite as rigid as with local file system access, and the caching behaviour is also different. It's not recommended that you have a database on a remote SMB share, although you'll probably not have a problem with it in real life. (And some of that guidance was based on older versions of SMB. Newer versions might have changed the implementation details enough so that it works perfectly -- I guess I just don't know enough. :)
In order to have multi-machine access, you'll have to write your own server process to handle requests from other machines. Sorry.
-martin
We run a Windows Forms application developed in C# in our company, and one problem is giving us headaches.
When we run the application from a local machine, in drive C:, for example, the application loads and runs fast. It's heavily database-based, which means it does a lot of queries to our MSSQL server, and it runs all queries in less than 1 second, while running from a local drive.
If we run the same application from a mapped network drive (not a UNC path, an M: mapped drive), it loads fast, but the queries take ages to complete, and we can hardly see the results.
ClickOnce is not an option for us (due to reasons that are not subject to discussion here), and we have several other 3rd party applications that run fast when loaded from the same mapped M: drive.
I did some research, and the closest question I could find is this one:
http://stackoverflow.duapp.com/questions/2554716/my-c-net-application-is-running-slower-when-the-exe-is-located-on-the-network
When I right-click the application there's no "unblock" option available, which tells me that there's no secondary stream attached to the file and it's "trusted" by the machine.
Also, I tried adding <loadFromRemoteSources enabled="true"/> in the .config file, but it caused no changes in the application performance so far.
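For reference, that element belongs under the runtime section of the exe's .config file, roughly like this:

<configuration>
  <runtime>
    <!-- Lets assemblies loaded from network locations run with full trust (.NET 4 and later). -->
    <loadFromRemoteSources enabled="true"/>
  </runtime>
</configuration>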
The application is not signed, and the slowness happens with both debug and release versions of the application.
What are we doing wrong?
PS: I'm still trying to pinpoint the exact command that's taking longer to work, but no luck so far.
EDIT: Adding new information. It seems that the problem wasn't the network per se, but the fact that the application was doing a background task and failing because it was running from the network. This failure wasn't wrapped in a try-catch block, and it prevented the background task from returning properly, creating a major delay in the application's response.
That means it was our development bug, not Windows' fault. Thanks for the answers, I'll vote to close this question.
I recently found one scenario where exactly this was happening in a .NET WinForms / SQL Server application.
On one machine the application was lightning fast; on the other, queries took seconds.
The second machine was configured to use a VPN dialed via PPTP. The VPN automatically reconnected whenever the computer got online, even when the machine was on the company network (where no VPN was needed). The VPN auto-redial trick always seemed very useful... until I found that, because of it, the connection to the SQL server basically always went through the VPN. Manually disconnecting the VPN helped instantly: responses were fast again.
I'm not saying this is definitely the solution in your case, but it is one of the things that can cause almost unacceptable query slowness. I observed it first hand.
I'm currently running some computationally intensive simulations, but they are taking a long time to complete. I've already split the workload across all the available physical cores in my processor. What I'm wondering is how to go about splitting the workload further and assigning it to other computers. I'm contemplating buying a couple Xeon servers and using them for the number crunching.
The one big issue I have is that I'm currently running the program within Visual Studio (Ctrl F5) as there are two methods which I'm constantly making small changes to.
Any suggestions on how/if it's possible to assign the workload to other computers / if it's possible to still run the program with VS or would I need to create an *.exe each time I wanted to run it?
It depends on the problem you're solving.
You can use map/reduce and Hadoop if it's easily parallelizable, like SETI@home.
You can use something like MPI if it's not, like linear algebra.
Isn't the crux of your problem in this statement: "The one big issue I have is that I'm currently running the program within Visual Studio (Ctrl F5) as there are two methods which I'm constantly making small changes to"?
Is it the "one big issue" because, if you distribute the work, you can't afford to modify the code on all of the nodes mid-job, so you are thinking about something that distributes it for you? If that's the case, then I assume you already know how to split the algorithm or data so that nodes can take care of small parts of the job.
If so (sorry if I misunderstood), then externalise the part you are "constantly making small changes to" into a file or a database, encoded in some simple or more elaborate form depending on what you are changing, so your nodes don't need to change constantly. Deploy the code on all nodes, connect them to the DB or file that contains the varying bit, and enjoy your new Ferrari!
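As a purely hypothetical illustration of externalising the varying bit: the nodes could read their tuning values from a small shared text file at startup, so only the file changes between runs (the file name, keys, and format below are invented):

// Sketch: nodes load run parameters from a shared file instead of being recompiled.
using System;
using System.IO;

class RunParameters
{
    public double StepSize;
    public int Iterations;

    // Expects lines such as "stepSize=0.01" and "iterations=100000".
    public static RunParameters Load(string path)
    {
        var p = new RunParameters();
        foreach (var line in File.ReadAllLines(path))
        {
            var parts = line.Split('=');
            if (parts.Length != 2) continue;
            if (parts[0].Trim() == "stepSize") p.StepSize = double.Parse(parts[1]);
            if (parts[0].Trim() == "iterations") p.Iterations = int.Parse(parts[1]);
        }
        return p;
    }
}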
You could use the WMI service to start your process on the remote computers. You would build your exe to a shared directory that is visible to the remote computer, then use WMI on the remote computer to launch the exe.
There are plenty of examples out there to do this, but to illustrate, a simple method which assumes no authentication complications is to use a .VBS script file:
' Launch a process on a remote machine via WMI (assumes current credentials are accepted there).
strComputer = "acomputer"      ' name of the remote machine
strCommandLine = "calc.exe"    ' command to run on it

' Connect to the WMI service on the remote machine and get the Win32_Process class.
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set objProcess = objWMIService.Get("Win32_Process")

' Create the process; intPID receives the new process ID on success.
intReturnValue = objProcess.Create(strCommandLine, , , intPID)
WScript.Echo "Process ID: " & intPID
You can also use PsExec from SysInternals to handle all the details of making this work.
After building the exe in Visual Studio, you could run it on your local machine to ensure it does what you want, then when you are ready to launch it on the remote systems, you can execute a batch script similar to the above VBS to launch the exe on the remote systems.
You will still need to provide some mechanism to divide up the workload so that each client knows what part of the problem it is supposed to work on. You could provide this information in the command line used to start the remote apps, in a config file in the directory with the exe, in a database table, or via a separate command-and-control type server that the clients connect back to (although with that approach you'll soon reach the stage where you would have been better off learning to use an existing solution rather than rolling your own).
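For example, one simple (hypothetical) convention is to pass each remote instance its partition index and the total number of workers on the command line, and have each instance work only on its slice:

// Sketch: each remote instance is started as "worker.exe <workerIndex> <workerCount>"
// and processes only every workerCount-th item, starting at its own index.
using System;

class Worker
{
    static void Main(string[] args)
    {
        int workerIndex = int.Parse(args[0]);   // 0-based index of this instance
        int workerCount = int.Parse(args[1]);   // total number of instances launched
        int totalItems  = 1000000;              // placeholder for the real problem size

        for (int i = workerIndex; i < totalItems; i += workerCount)
        {
            // ProcessItem(i) -- the actual simulation step goes here.
        }
    }
}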
You may also want to include a remote 'kill switch' of some sort. You could use PsKill from SysInternals, or if you want a more graceful shutdown, something simple like the existence of a particular file in the same directory as the exe can serve as a flag for the remote processes to shut themselves down.
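The flag-file variant can be as small as a periodic check like this (the file name is arbitrary):

// Sketch: shut down gracefully if a "stop.flag" file appears next to the executable.
using System.IO;

static class KillSwitch
{
    public static bool StopRequested()
    {
        string flag = Path.Combine(System.AppDomain.CurrentDomain.BaseDirectory, "stop.flag");
        return File.Exists(flag);
    }
}

// Inside the main work loop, check periodically:
//   if (KillSwitch.StopRequested()) { /* save progress and exit */ return; }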
You could also consider adding CSScript support to the client so that the remote client programs are static and load and compile a CSScript file to do the work. This might be useful if you encounter some kind of difficulty in frequently redeploying and restarting the client programs, or if you need them to all be slightly different (you might write a program to generate separate script files for each client for example).
If I deploy a C# console app, which does the following:
reads message (ActiveMQ)
processes message contents
writes result to database (SQL Server)
Would there be any issues with running this multiple times e.g. what if I created a batch file and ran 100 instances? Would there be any conflict given that each instance would be using the same shared DLLs e.g. Apache.NMS.ActiveMQ.
The other option would be to deploy the app multiple times, but I'd rather not have to manage duplicated folders. I'm also avoiding threading at the moment but that will be an option for further development in future.
Just want to clarify what happens with those DLLs, and check that there wouldn't be a threading type conflict, e.g. one instance writing the results of another instance's processing to the database...
No, there will be no problem with loading the same DLL files into multiple processes as you describe. You would only run into problems running multiple instances of the same application if the process needed exclusive access to a shared resource, like a file. With regard to writing to a database, as long as you design your application so that multiple clients can write data without overwriting data or causing some sort of inconsistency with the domain integrity of the data then again, no problem.
However, I would strongly suggest you look at making your application multi-threaded if it is concurrency you need, or at Application Domains if it is isolation you need. Running multiple processes is much more expensive in terms of resources than either of these two options.
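As a rough sketch of the multi-threaded route (the queue-consumption details are omitted since they depend on your NMS setup), you could run several consumers as tasks inside one process instead of launching 100 processes:

// Sketch: N concurrent consumers inside a single process instead of N processes.
// ReceiveAndProcessOne() is a placeholder for the read-message / process / write-to-SQL cycle.
using System.Threading.Tasks;

class ConsumerHost
{
    static void Main()
    {
        const int consumerCount = 8; // tune to taste
        var workers = new Task[consumerCount];
        for (int i = 0; i < consumerCount; i++)
        {
            workers[i] = Task.Run(() =>
            {
                while (true)
                {
                    ReceiveAndProcessOne(); // each consumer should use its own session/connection
                }
            });
        }
        Task.WaitAll(workers);
    }

    static void ReceiveAndProcessOne() { /* read from ActiveMQ, process, write to SQL Server */ }
}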
I would like to be able to do an "in-place" update with my program. Basically, I want to be able to log in remotely to where the software is deployed, install the update while other users are still using the program (in a thin-client way), and have it update their program.
Is this possible without too much of a hassle? I've looked into clickonce technology, but I don't think that's really what I'm looking for.
What about the way Firefox does its updates? It just waits for you to restart the program, and notifies you when it's been updated.
UPDATE: I'm not remoting into the users' PCs. This program is run on a server; I remote in and update it, and the users run it directly off the server through remote access.
ClickOnce won't work because it requires a webserver.
I had some example code that I can't find right now but you can do something similar to Firefox with the System.Deployment.Application namespace.
If you use the ApplicationDeployment class, you should be able to do what you want.
From MSDN, this class...
Supports updates of the current deployment programmatically, and handles on-demand downloading of files.
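A rough sketch of what that code might look like (untested, and only meaningful when the app was actually installed via ClickOnce, hence the IsNetworkDeployed guard):

// Sketch: check for a ClickOnce update, download it, and ask the user to restart.
using System.Deployment.Application;
using System.Windows.Forms;

static class Updater
{
    public static void CheckForUpdate()
    {
        if (!ApplicationDeployment.IsNetworkDeployed)
            return; // not installed via ClickOnce, nothing to do

        var deployment = ApplicationDeployment.CurrentDeployment;
        if (deployment.CheckForUpdate())
        {
            deployment.Update(); // downloads the new version in place
            if (MessageBox.Show("An update was installed. Restart now?", "Update",
                                MessageBoxButtons.YesNo) == DialogResult.Yes)
            {
                Application.Restart();
            }
        }
    }
}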
Consider the Microsoft BITS APIs, perhaps just using bitsadmin.exe in a script, or Windows Update Services.
Some questions:
1. Are the users running the software locally, with the files located on a network share on your server?
2. Are they remoting into the same server you want to remote into, and executing it there?
3. If 2, are they executing the files where they are placed on the server, or are they copying them down to a "private folder"?
If you cannot change the location of the files, and everyone is remoting in, and everyone is executing the files in-place, then you have a problem. As long as even 1 user is running the program, the files will be locked. You can only update the files once everyone is out.
If, on the other hand, the users are able to run their own private copy of the files, then I would set up a system where you have a central folder with the latest version of the files, and when a user starts his program, it checks if the central folder has newer versions than the user is about to execute. If it does, copy the new version down first.
Or, if that will take too long, and the user will get impatient (what, huh, users getting impatient?), then having the program check the versions after startup, and remind the user to exit would work instead. In this case, the program would set a flag that upon next startup would do the copying, only now the user is aware of it happening.
The copying part could easily be handled either by a separate executable that does the actual copying (and is executed instead of the main program), or by having the program copy itself temporarily to another location and run that copy with parameters that say "update the original files".
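A minimal sketch of the "check the central folder, copy newer files, then run the private copy" idea (the paths and the naive version comparison are simplified assumptions):

// Sketch: copy a newer exe from a central folder to a private folder before running it.
using System;
using System.Diagnostics;
using System.IO;

class Launcher
{
    static void Main()
    {
        string central = @"\\server\apps\MyApp\MyApp.exe";   // hypothetical central copy
        string local   = Path.Combine(Environment.GetFolderPath(
                             Environment.SpecialFolder.LocalApplicationData), "MyApp", "MyApp.exe");

        Directory.CreateDirectory(Path.GetDirectoryName(local));

        // Naive "newer" check: compare file versions (last-write time would work too).
        bool copyNeeded = !File.Exists(local) ||
            new Version(FileVersionInfo.GetVersionInfo(central).FileVersion) >
            new Version(FileVersionInfo.GetVersionInfo(local).FileVersion);

        if (copyNeeded)
            File.Copy(central, local, overwrite: true);

        Process.Start(local);
    }
}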
While you can design your code to modify itself (maybe not in C#?), this is generally a bad idea. It means that you must restart something to get the update. (On Linux you are able to replace files that are in use; however, the update does not take effect until the new code is loaded into memory, i.e. on application restart.)
The strategy used by Firefox (I never actually looked into it) is to store the updated executable in a different file, which is checked for when the program starts to load. This allows the program to overwrite itself with the update before the resource is locked by the OS. You can also design your program in a more modular way so that portions of it can be "restarted" without requiring a restart of the entire program.
How you actually do this is probably provided by the links given by others.
Edit: In light of a response given to Lasse V. Karlsen:
You can have your main program look for the latest version of the program to load (this loader itself wouldn't be able to be updated without everyone out). You can then remove older versions once people are no longer using them. Depending on how frequently people restart their program, you may end up with a number of older program versions lying around.
ClickOnce and Silverlight (out of browser) both support your scenario, if we are talking about upgrades. Remote login to your users' machines? Nope. And no, Firefox doesn't do that either, as far as I can tell.
Please double-check both methods and add them to your question, explaining why they might not do what you need. Otherwise it's hard to move on and suggest better alternatives.
Edit: This "I just updated, please restart" thing you seem to like is one method call for Silverlight applications running outside of the browser. At this point I'm fairly certain that this might be the way to go for you.
ClickOnce doesn't require a web server; it will let you publish updates while users are running the software. You can code your app to check for a new update every few minutes and prompt the user to restart the app if a new version is found, which will then take them through the upgrade process.
Another option is a Silverlight OOB application, but this would be more work if your app is already built as a WinForms/WPF client app.
Various deployment/update scenarios (for .NET applications) are discussed, with their pros and cons, in Microsoft's Smart Client Architecture and Design Guide. Though a little bit old, I find that most of it still holds today, as it describes basic architectural principles rather than technical details. There is a PDF version, but you can find it online as well:
Deploying and Updating Smart Client Applications
Is this possible without too much of a hassle?
Considering the concurrency issues with thin clients and the complexity of Windows installations, yes, hot updates will be a hassle unless you do them the way the system demands.