Lately I've been reading up on HDFS (Hadoop) and GFS (Google) and find myself wondering whether there are any similar native implementations for Windows and/or .NET. A lot of the applications I develop include features to support user-generated content, and currently that means relying on a storage service such as Mosso or S3, or resorting to some type of NAS in my server farm. I'm interested in a setup that would let me mimic the Mosso or S3 style of storage locally, so that my files are automatically stored on multiple machines and remain highly available.
Is there anything that serves this need for C# besides Windows' built-in DFS (which requires Active Directory, which isn't running on my server farm)?
OpenAFS, SQL Data Services, or just run Hadoop on Windows?
Related
I have a Windows network (not joined to a domain) and I need to provide some automation on each PC at a certain time of day. There are several tasks: launching executables, managing the file system, transferring files. All these actions must be implemented via RDP, using C#. What is the common approach to achieve this? I don't have experience using RDP from software, so are there .NET classes or free libraries I can use to get RDP functionality into my application? Thank you!
All the tasks you have listed depend much more on the security settings of the machines within your network and the logged-in user's privileges than on the use of RDP.
Within a Windows domain, tasks like yours are usually delegated to Active Directory administration and Group Policy.
On a network without a Windows domain, you will need a mechanism with the following configuration:
a client installed on each machine, under appropriate permissions; the client should implement a subscriber pattern.
a server installed on a "commander" machine; the server should implement a publisher pattern.
There are plenty of ready-made solutions that implement this concept of content distribution and remote script execution. I think that investing in researching and evaluating such tools will be much more time- and cost-effective than writing an app that "uses RDP functionality".
But if there is a reason that prevents the use of third-party tools, I would go for an implementation of a WCF service that is installed on all clients. This service should be "trained" to do all your stuff on the client. On the server side you will need an application or a service that publishes events to the clients or triggers known client methods.
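For illustration, a minimal sketch of such a duplex WCF contract; all names here (ITaskPublisher, ITaskCallback, RunExecutable, TransferFile) are hypothetical:

    using System.ServiceModel;

    // Hypothetical duplex contract: the "commander" hosts this service and
    // pushes work to subscribed clients through the callback channel.
    [ServiceContract(CallbackContract = typeof(ITaskCallback))]
    public interface ITaskPublisher
    {
        [OperationContract]
        void Subscribe(string machineName);   // client registers itself on startup
    }

    public interface ITaskCallback
    {
        [OperationContract(IsOneWay = true)]
        void RunExecutable(string path, string arguments);    // launch a process on the client

        [OperationContract(IsOneWay = true)]
        void TransferFile(string targetPath, byte[] content); // push a file to the client
    }

    // Client-side callback implementation, hosted by the agent on each PC.
    public class TaskCallback : ITaskCallback
    {
        public void RunExecutable(string path, string arguments)
        {
            System.Diagnostics.Process.Start(path, arguments);
        }

        public void TransferFile(string targetPath, byte[] content)
        {
            System.IO.File.WriteAllBytes(targetPath, content);
        }
    }

With a duplex-capable binding (e.g. NetTcpBinding), the server can invoke RunExecutable/TransferFile on every subscribed client at the scheduled time.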
I am at a dead end and I could really use some help.
I am an intern at a huge company. My project involves creating an application to automate/simplify the work of a retiring employee.
The problem here lies in the strict company policies. I am a developer stuck at the business end of the company. Therefore IT gives me nothing:
I don't have a server (neither web nor database)
I can't set up a server, because no PC will be left running and we can't keep them logged in due to single sign-on with company cards.
I can't install anything on the PCs in the network.
I can access a shared file server, which is backed up every day.
The libraries involved have to be free
A central database has to be accessed by a dozen users (at once)
The database will receive new data every day and will grow accordingly
The users will both read and write from/to the database
Preferably a C#/.NET or WPF solution
The application needs to open files stored on the shared drive. (Only once: the important information will be extracted and stored in the database, and the file will then be removed.)
My initial idea was to use Silverlight (which can run standalone) in combination with SQLite. I ran a test, and Silverlight files stored on the shared drive work. (Silverlight is installed on every PC on the network.) This is my preferred front end. However (correct me if I'm wrong), when I tried sqlite-net I needed to add sqlite3.dll to my Windows/System32 folder, and on the network PCs I don't have access to the Windows folder, so that cannot be done.
I have also read that SQLite, or files in general, can become corrupt when accessed by multiple users at once, so I thought locking might be an option.
What solutions are there to my problem?
I worked for a company for several years writing software for police departments to manage traffic collision reports. Police stations usually have little-to-no IT support, so we faced many similar limitations. The company actually did pretty well using Microsoft Access databases, with the setup looking something like this:
The shared drive had an Access database file (.mdb or .accdb) which was the actual "database".
Client computers (at the officers' desks) had Access applications with local "utility" tables for temporary storage, UI defined in forms, and logic defined in modules. Each client machine was connected to the repository on the shared drive using linked tables. Local client configuration was stored either in a config table in the Access application or in a text file on the machine.
It's not the cleanest solution, but it would allow you to create and maintain a unified solution using files that don't need to be installed and don't require any funny permissions, as long as everyone has read/write access to the shared drive.
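If you end up driving the shared Access file from C# rather than from Access forms, the connection itself is plain OLE DB. A minimal sketch, assuming a hypothetical share path and table, and that the ACE provider is present on the clients:

    using System.Data.OleDb;

    class AccessRepository
    {
        // Hypothetical UNC path to the shared database file.
        const string ConnectionString =
            @"Provider=Microsoft.ACE.OLEDB.12.0;" +
            @"Data Source=\\fileserver\share\teamdata.accdb;";

        public static int CountReports()
        {
            using (OleDbConnection conn = new OleDbConnection(ConnectionString))
            using (OleDbCommand cmd = new OleDbCommand(
                "SELECT COUNT(*) FROM Reports", conn))
            {
                conn.Open();
                return (int)cmd.ExecuteScalar(); // Access returns COUNT(*) as Int32
            }
        }
    }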
Create a website. Today you can host ASP.NET web apps in a standalone .exe. By doing so you can make sure that the shared files are only accessed by one process, and you can also restrict SQLite access to that single process.
It also means that you do not have to distribute anything. Simply start your application and tell your users which URL and port they have to browse to.
As for permissions, only the account running your web host requires access to the shared files, etc.
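A minimal self-hosting sketch using HttpListener, which has been in the base class library since .NET 2.0; the port and page content here are illustrative:

    using System;
    using System.Net;
    using System.Text;

    class SelfHostedApp
    {
        static void Main()
        {
            HttpListener listener = new HttpListener();
            // Illustrative port; "+" may require a urlacl reservation or admin rights.
            listener.Prefixes.Add("http://+:8080/");
            listener.Start();
            Console.WriteLine("Listening on http://localhost:8080/ ...");

            while (true)
            {
                // Blocks until a request arrives; serve one response per request.
                HttpListenerContext context = listener.GetContext();
                byte[] body = Encoding.UTF8.GetBytes(
                    "<html><body>Hello from the self-hosted app</body></html>");
                context.Response.ContentType = "text/html";
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.Close();
            }
        }
    }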
You should take a look at ScimoreDB. It's an embedded database that supports multi-process read/write access. If needed it can also act as a client/server database; even as a distributed database with multiple nodes.
It's free to use and deploy, and it has support for C++ and .NET. The only disadvantage is that it works on Windows only.
I have developed a desktop application that will be used without my presence. Therefore, my program may record various events: input, executed operations, working time. Is it possible to modify the app to send its logs to some cloud service for remote viewing of reports?
I have accounts on Wuala and Google Drive, but it's no problem to register somewhere else (Dropbox? SkyDrive? I don't know which cloud would be easiest to work with in this situation).
Or you can suggest another, more elegant solution.
I am using C# and .NET 2.0. The logs are simple .txt files.
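Since the logs are plain .txt files, one low-tech approach that works on .NET 2.0 is to POST each file over HTTP to whichever endpoint the chosen cloud (or a small receiver of your own) exposes. A sketch; the URL and file path are placeholders:

    using System;
    using System.Net;
    using System.Text;

    class LogUploader
    {
        static void Main()
        {
            // Placeholder endpoint; each provider (Dropbox, SkyDrive, ...)
            // has its own real upload API with authentication.
            const string uploadUrl = "https://example.com/logs/upload";

            WebClient client = new WebClient();
            try
            {
                byte[] response = client.UploadFile(
                    uploadUrl, "POST", @"C:\MyApp\session.txt");
                Console.WriteLine("Server replied: " +
                    Encoding.UTF8.GetString(response));
            }
            catch (WebException ex)
            {
                // Keep the local file and retry later on failure.
                Console.WriteLine("Upload failed: " + ex.Message);
            }
        }
    }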
I need to implement a Windows virtual disk that is visible as a separate disk device in Windows Explorer, with all files/directories transferred back and forth to a remote web service, something like Dropbox.
Do I have to implement/use a kernel driver SDK, or is it possible to use only shell extensions? What I need is to intercept all file/directory operations on that disk and map them to corresponding web service calls (file creation/deletion/move/edit and data transfer).
Thanks
You will need a combination of kernel-mode driver and Windows Service/Application for that:
http://dokan-dev.net/en/ (free)
http://www.eldos.com/cbfs/ (commercial)
Windows 7 and later allow you to mount a VHD as a disk. The API is described in the MSDN article The Virtual Disk API in Windows 7.
I don't know if it's possible with just shell extensions, but scanning the article I see the AttachVirtualDisk API, and you should be able to P/Invoke that.
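A rough P/Invoke sketch of that route; the VHD path is a placeholder, constants are abbreviated, the full signatures live in virtdisk.h, and attaching typically requires elevation:

    using System;
    using System.Runtime.InteropServices;

    class VhdMounter
    {
        [StructLayout(LayoutKind.Sequential)]
        struct VIRTUAL_STORAGE_TYPE
        {
            public uint DeviceId;  // 2 = VIRTUAL_STORAGE_TYPE_DEVICE_VHD
            public Guid VendorId;  // Microsoft vendor GUID
        }

        [DllImport("virtdisk.dll", CharSet = CharSet.Unicode)]
        static extern uint OpenVirtualDisk(
            ref VIRTUAL_STORAGE_TYPE VirtualStorageType, string Path,
            uint VirtualDiskAccessMask, uint Flags,
            IntPtr Parameters, out IntPtr Handle);

        [DllImport("virtdisk.dll")]
        static extern uint AttachVirtualDisk(
            IntPtr VirtualDiskHandle, IntPtr SecurityDescriptor,
            uint Flags, uint ProviderSpecificFlags,
            IntPtr Parameters, IntPtr Overlapped);

        const uint VIRTUAL_DISK_ACCESS_ALL = 0x003F0000;
        static readonly Guid MicrosoftVendor =
            new Guid("EC984AEB-A0F9-47E9-901F-71415A66345B");

        static void Main()
        {
            VIRTUAL_STORAGE_TYPE storageType = new VIRTUAL_STORAGE_TYPE();
            storageType.DeviceId = 2;
            storageType.VendorId = MicrosoftVendor;

            IntPtr handle;
            uint result = OpenVirtualDisk(ref storageType, @"C:\disks\store.vhd",
                VIRTUAL_DISK_ACCESS_ALL, 0, IntPtr.Zero, out handle);
            if (result != 0)
            {
                Console.WriteLine("OpenVirtualDisk failed: " + result);
                return;
            }

            // Flags 0 = ATTACH_VIRTUAL_DISK_FLAG_NONE; null parameters for defaults.
            // The disk detaches when the handle is closed unless the
            // PERMANENT_LIFETIME flag is used.
            result = AttachVirtualDisk(handle, IntPtr.Zero, 0, 0,
                IntPtr.Zero, IntPtr.Zero);
            Console.WriteLine(result == 0
                ? "Mounted." : "AttachVirtualDisk failed: " + result);
        }
    }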
Have you considered WebDAV? It wouldn't require you to install anything on the client, since the functionality is integrated into Windows (since XP, I believe). It's an HTTP-based protocol, so you could even implement it yourself or look for a solution on CodePlex. If it is just about remote storage, there is an IIS add-on you can use for that (built for IIS 7, I believe).
You can go with namespace extensions; this is what Dropbox does. Wuala and some other cloud storage providers, on the other hand, use our Callback File System to create a virtual disk.
I'm architecting a WPF application using the PnP Composite Application Guidance. The application will be run locally, within our intranet.
Modules will be loaded dynamically based on user roles. The modules must therefore be accessible to the application through a network share, thus accessible from the client machines.
What I'd like to do is keep all the module .dlls in a location not accessible to staff, but still be able to provide them to the composite application when demanded and when the current user is authenticated to use that module.
My thought is to load the .dlls by streaming them down from a WCF service, where the WCF service (on the server) can access the .dll repository, but none of the client machines can access it. Authentication would also be handled by the service.
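Roughly what I am picturing, with hypothetical names (IModuleProvider, GetModule) standing in for the real contract:

    using System.Reflection;
    using System.ServiceModel;

    // Hypothetical contract: the server authenticates the caller, reads the
    // requested .dll from the protected repository, and returns its raw bytes.
    [ServiceContract]
    public interface IModuleProvider
    {
        [OperationContract]
        byte[] GetModule(string moduleName);
    }

    // Client side: materialize the module without it ever touching a
    // share the user could browse.
    public static class ModuleLoader
    {
        public static Assembly Load(IModuleProvider provider, string moduleName)
        {
            byte[] raw = provider.GetModule(moduleName);
            return Assembly.Load(raw); // byte[] overload loads with no load context
        }
    }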
I suspect that I might be overcomplicating things somehow.
Is this something that can be done with a simple filesystem configuration and programmatically passing credentials when accessing the shared folder? If I do this, would access only be granted to the calling application, or would the logged-on user now be able to navigate to the shared folder?
Is this, in any way, a solved problem with MEF or any other project of which you're aware? (I hope this isn't LMGTFY-worthy -- I haven't been able to come up with anything.)
At Argonne National Laboratory we keep all shareable DLLs and other objects (.INI files, PowerBuilder PBD libraries, application software, etc.) on a simple, internally public file server, and objects are downloaded over the network on a per-need basis as defined by each client/server application. We thereby minimize middleware maintenance (Oracle Client, PowerBuilder, Java, Microsoft, ODBC, etc.) to a single file server location, with essentially no software installed on the end-user PC. Typically we physically download less than a few KB of registry keys to each end-user PC. This includes the full Oracle Client, which, if installed on the PC itself, would take up 650+ MB of disk space and several thousand registry keys and would be costly to maintain across the enterprise; our Oracle Client footprint on the PC is instead about 17 KB.
The only "software" on the client side consists of registry keys containing variables that point to server locations (e.g. ORACLE_HOME: \\<server name>\ORACLE\v10\Ora10g).
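As an illustration (the key path below is hypothetical and only mirrors the ORACLE_HOME example above), the client-side lookup is nothing more than a registry read:

    using Microsoft.Win32;

    class MiddlewareLocator
    {
        // Hypothetical key layout; everything the value points to
        // lives on the central file server.
        public static string GetOracleHome()
        {
            using (RegistryKey key =
                Registry.LocalMachine.OpenSubKey(@"SOFTWARE\ORACLE\v10"))
            {
                return key == null
                    ? null
                    : (string)key.GetValue("ORACLE_HOME");
            }
        }
    }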
This has been a very cost-effective solution that we have used for 10+ years, making all middleware and application software upgrades totally transparent to more than 2,000 users Lab-wide. Over the years we have done thousands of object upgrades on the central file server without ever having to install a single upgrade on an end-user desktop. Although this has some risks ("thou shalt not copy DLLs over the network", etc.) and is a heavily customized solution, it has worked flawlessly for us across a large number of applications and middleware.
This is a surprisingly simple solution given today's advanced technology, but it has been totally efficient and cost-effective for us. Several vendors (Citrix and others) have looked at our setup somewhat perplexed, but every vendor of deployment tools who has seen it has come to the same conclusion, basically: "you do not need us".
When loading modules you need to keep in mind that:
Once loaded, an assembly can't be unloaded (unless you unload the entire application domain), so if users can log in and out using the same instance, you may have a problem; loading modules into a separate AppDomain, as sketched after this list, is the usual way out.
"The load context" matters (see http://blogs.msdn.com/suzcook/archive/2003/05/29/57143.aspx); this may cause problems if you have dependencies between modules, or dependencies on assemblies that are not in the load context.
If the restricted access to the DLLs is due to a licensing issue, maybe you need to refine the licensing mechanism somehow (not tying it to access to the actual code, but to some other checks)?