I'm writing a Windows service that will run under the 'LocalService' account. I have a file that stores what it has to do.
I also have a Windows Forms GUI where that file is also accessed to add/remove instances of the action for the service to perform. (Don't know if it's relevant, but the service downloads tables from a web service and exports them to any database the user has access to. These downloads are scheduled to happen regularly.)
The service will only be installed on a user account.
I was planning on storing the file in the user AppData folder; however, while debugging the service I got the error "Access to path [path] is denied".
Where would you recommend storing this file so it is accessible from both programs?
Thanks
EDIT: Looking a bit more, I've realised that
Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
resolves to a different path for the service and for the Windows Forms app.
And that the app can't access the service's AppData, just as the service can't seem to access the user's AppData. So the same question stands!
ANOTHER EDIT:
So it turns out
Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)
is accessible from a local service and a user program - doh
...but some places seem to be read-only...
Three options as I see it:
Run the service under the user's login id
Upside - both processes will have identical access to the various parts of the file system, so this should remove your immediate problem.
Downside - if the user changes their password the two will get out of sync.
Write to some "neutral" part of the file system (or perhaps the registry) where shared access won't be a problem. The trouble with AppData is that, as you've found, Windows sets up all kinds of protection around it in order to ring-fence different users from each other.
Upside - no problems writing
Downside - you're effectively inventing your own standard. 15 years ago this would have been a no-brainer (the registry), but these days I get the impression that the registry is frowned upon (even though MS still relies on it!). If you do go down the registry route, make sure you're aiming at HKLM not HKCU, else you'll have the same problem!
During your setup, do some tricks to set up access to the relevant folders. But this is basically tearing down the protection that Windows sets up. Doesn't sound too sensible to me.
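For example, a rough sketch of what option 3 could look like, combined with the CommonApplicationData folder you found (the folder name is just a placeholder, and this would need to run elevated, e.g. from a custom action in your installer):

    using System;
    using System.IO;
    using System.Security.AccessControl;
    using System.Security.Principal;

    // Run once, elevated, during setup. "MyDownloadService" is a placeholder name.
    string folder = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
        "MyDownloadService");
    Directory.CreateDirectory(folder);

    // Grant the built-in Users group modify rights; that should cover both the
    // interactive user and the LocalService account (whose token normally
    // includes the Users group). If not, add a rule for "LOCAL SERVICE" explicitly.
    DirectorySecurity security = Directory.GetAccessControl(folder);
    security.AddAccessRule(new FileSystemAccessRule(
        new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null),
        FileSystemRights.Modify,
        InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
        PropagationFlags.None,
        AccessControlType.Allow));
    Directory.SetAccessControl(folder, security);

After that, both the service and the Windows Forms app resolve the same path via SpecialFolder.CommonApplicationData and can share the file without bumping into the per-user AppData protection.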
Related
I need to find a way to block user access to my database that will be installed on their PC.
So, here at the company we have a problem. We need to block user access to our database that will be installed on their PC. What I mean by this is...
We have two pieces of software: a web app ERP and an installable finances app.
We reached the conclusion that it was unnecessary to have two standalone apps, and that we should put the finances app inside our ERP.
But this comes with a problem: a big part of our users don't trust the web and web apps; they think that what is on their PC is what is safe, and that is where it should be.
We don't want to maintain the two standalone pieces of software needlessly.
We asked our users if they'd be happy with a progressive web app, but their answer was the same.
Then we tried to find a way to run our ERP on their PC while offline, as an executable, but that comes with a lot of trouble: we need to install IIS, PostgreSQL, the .NET Framework, pgAdmin, our metadata database (which shouldn't be accessible to the user in any way, shape or form!), etc. ... everything that lets our app run on the user's PC.
Of course we don't want to do that, but we have no choice left. We need to at least block our metadata database from being accessed, since the whole structure of the web app is there and we don't want to share it with the competition.
Our solution was installing all that was needed inside a virtual drive and running the app from there, but all the files and databases are available for the user to mess with.
How can we restrict access to that virtual drive as much as possible, and protect our intellectual property? Is it even feasible? I've run out of ideas and don't know what else to do, so any help is welcome.
Should I take another route or is it a lost cause?
Whoever has control of the database machine has control of the database. So if the database is running on the client's machine, there is no way to keep an administrative user out of the database.
So if the users don't trust a web application, they will have to trust their system administrators (or themselves, if they have administrator rights to their machines).
I am at a dead end and I could really use some help.
I intern for a huge company. My project involves creating an application to automate/simplify the work of a retiring employee.
The problem here lies in the strict company policies. I am a developer stuck at the business end of the company. Therefore IT gives me nothing:
I don't have a server (neither web nor database)
I can't create a server, because no PC will be left running and we can't keep them logged in due to single sign-on with company cards.
I can't install anything on the PCs in the network.
I can access a shared file server that is backed up every day.
The libraries involved have to be free
A central database has to be accessed by a dozen users (at once)
The database will receive new data every day and will grow accordingly
The users will both read and write from/to the database
Preferably a C#.NET or WPF solution
Application needs to open files stored on the shared drive. (Only once; the important information will be extracted and stored in the database, and the file will then be removed.)
My initial idea was to use Silverlight (which runs standalone) in combination with SQLite. I ran a test, and Silverlight files stored on the shared drive work. (Silverlight is installed on every PC on the network.) This is my preferred front end. However (correct me if I'm wrong), I tried SQLite-net and I needed to add sqlite3.dll to my Windows\System32 folder, but on the network PCs I don't have access to the Windows folder, so this cannot be done.
Also, I read that SQLite, or files in general, can become corrupt when accessed by multiple users at once, so I thought maybe locking was an option.
Which solutions are there to my problem?
I worked for a company for several years writing software for police departments to manage traffic collision reports. Police stations usually have little-to-no IT support, so we faced many similar limitations. The company actually did pretty well using Microsoft Access databases, with the setup looking something like this:
The shared drive had an Access database file (.mdb or .accdb) which was the actual "database".
Client computers (at the officers' desks) had Access applications with local "utility" tables for temporary storage, UI defined in Forms, and logic defined in Modules. Each of the client machines was connected to the repository on the shared drive by using linked tables. Local client configuration was stored either in the Access application in a config table, or in a text file on the machine.
It's not the cleanest solution, but it would allow you to create and maintain a unified solution using files that don't need to be installed and don't require any funny permissions, as long as everyone has read/write access to the shared drive.
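If you go that route but want the UI in C#/WPF instead of Access forms, the shared .accdb can still serve as the back end via OLE DB. A minimal sketch, assuming the Access Database Engine (ACE) provider is available on the clients, and using made-up share/table names:

    using System;
    using System.Data.OleDb;

    // \\fileserver\share\Reports.accdb and the Reports table are placeholders.
    string connectionString =
        @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\fileserver\share\Reports.accdb;";

    using (var connection = new OleDbConnection(connectionString))
    {
        connection.Open();
        using (var command = new OleDbCommand(
            "INSERT INTO Reports (CreatedBy, Machine) VALUES (?, ?)", connection))
        {
            // OLE DB parameters are positional; the names are ignored.
            command.Parameters.AddWithValue("@createdBy", Environment.UserName);
            command.Parameters.AddWithValue("@machine", Environment.MachineName);
            command.ExecuteNonQuery();
        }
    }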
Create a website. Today you can host ASP.NET web apps in a standalone .exe. By doing so you can make sure that the shared files are only accessed by one process. You can also limit the access to SQLite.
It also means that you do not have to distribute anything. Simply start your application and tell your users which URL and port they have to browse to.
As for permissions, only the account running your webhost requires access to shared files etc.
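A bare-bones sketch of the idea using HttpListener (the port and prefix are arbitrary; a framework such as OWIN/ASP.NET self-hosting would be more comfortable, but the principle is the same). Note that a non-localhost prefix normally needs a one-time "netsh http add urlacl" reservation, or the process has to run elevated:

    using System;
    using System.Net;
    using System.Text;

    class SelfHost
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:8080/");   // example port
            listener.Start();
            Console.WriteLine("Browse to http://<this-machine>:8080/");

            while (true)
            {
                HttpListenerContext context = listener.GetContext();

                // All access to the shared drive / SQLite happens here,
                // inside this single process, never directly by the clients.
                byte[] body = Encoding.UTF8.GetBytes("<html><body>Hello</body></html>");
                context.Response.ContentType = "text/html";
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.Close();
            }
        }
    }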
You should take a look at ScimoreDB. It's an embedded database that supports multi-process read/write access. If needed it can also act as a client/server database; even as a distributed database with multiple nodes.
It's free to use and deploy. It has support for C++ and .NET. The only disadvantage is that it only works on Windows.
Microsoft's preferred way to handle application configuration and runtime data seems obvious at first glance: App.config, which will be stored in the application execution directory (C:\Program Files\ProductLocation in most cases), where only privileged users have write access. (Makes sense to me, because a casual user shouldn't be able to alter essential application configuration.)
For normal user configuration, there's a user.config, which will be copied into each user's personal application data directory (%APPDATA%).
But this leads to a few questions:
How can I alter configurations for every user without executing the process as administrator?
Where should I store application data that doesn't get deployed with the application, but should instead be generated when the application is started for the first time?
How is it possible to have e.g. dynamic connection strings, like for a database health monitor application?
I checked out the program data folder (%PROGRAMDATA% -> C:\ProgramData), but it seems this place is read-only for the standard user. (Windows Installer does create folders in here if needed, but they're all read-only.) -> What happened to %ALLUSERS%?
An example where the Microsoft way may fail, in my eyes:
A financial application where every user should store his information in the same database (a SqlCE file DB), whereas the application has to run with user privileges (I don't want to be an administrator to manage my wallet). The application needs a connection to a database that isn't available yet and may be generated during the first run using Entity Framework. So it could be possible that even the connection string has to be dynamic, and not configured in app.settings where such information is fixed.
This is stupid! Users could read sensitive information from other users by directly accessing the file database!
-> Security is not only a file permission thing; there could also be database users, certificates, cryptography, etc.
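To make the scenario concrete, this is roughly the kind of first-run code I mean (the folder, file name and the use of SqlCeEngine are purely illustrative):

    using System;
    using System.Data.SqlServerCe;
    using System.IO;

    // The folder under %PROGRAMDATA% would have to be made writable for normal
    // users at install time, which is exactly the part that feels wrong.
    string dataFolder = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
        "WalletApp");
    string dbPath = Path.Combine(dataFolder, "wallet.sdf");

    // The connection string has to be built at runtime; it can't be a fixed
    // entry in app.config because the file doesn't exist until the first run.
    string connectionString = "Data Source=" + dbPath;

    if (!File.Exists(dbPath))
    {
        Directory.CreateDirectory(dataFolder);
        using (var engine = new SqlCeEngine(connectionString))
        {
            engine.CreateDatabase();   // generated on first run, not deployed
        }
    }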
Do I have to develop my own settings handler as a workaround to the Microsoft-intended way?
I guess this question has been asked numerous times on SO, but every answer I found showed workarounds and different solutions. Questions regarding "best practice" are closed immediately, so I tried to provide a practical example here.
I'm architecting a WPF application using the PnP Composite Application Guidance. The application will be run locally, within our intranet.
Modules will be loaded dynamically based on user roles. The modules must therefore be accessible to the application through a network share, thus accessible from the client machines.
What I'd like to do is keep all the module .dlls in a location not accessible to staff, but still be able to provide them to the composite application when demanded and when the current user is authenticated to use that module.
My thought is to load the .dlls by streaming them down from a WCF service, where the WCF service (on the server) can access the .dll repository, but none of the client machines can access it. Authentication would also be handled by the service.
I suspect that I might be overcomplicating things somehow.
Is this something that can be done with a simple filesystem configuration and programmatically passing credentials when accessing the shared folder? If I do this, would access only be granted to the calling application, or would the logged-on user now be able to navigate to the shared folder?
Is this, in any way, a solved problem with MEF or any other project of which you're aware? (I hope this isn't LMGTFY-worthy -- I haven't been able to come up with anything.)
At Argonne National Laboratory we keep all sharable DLLs and other objects (.INI files, PowerBuilder PBD libraries, application software, etc.) on a simple and internally public file server, and objects are downloaded over the network on a per-need basis as defined by each client/server application. Thus we minimize the maintenance of middleware (Oracle Client, PowerBuilder, Java, Microsoft, ODBC, etc.) to a single file server location, with basically no software installed on the end user PC. Typically we physically download less than a few KB of registry keys to the individual end user PC; this includes the full Oracle Client, which if installed on the PC alone would take up 650+ MB of disk space and several thousand registry keys, and would be costly to maintain across the enterprise. Instead our Oracle Client footprint on the PC is about 17 KB.
The only "software" on the client side consists of registry keys containing variables pointing to server locations (e.g. ORACLE_HOME: \\<server name>\ORACLE\v10\Ora10g).
This has been a very cost effective solution we have been using for 10+ years, making all middleware and application software upgrades totally transparent to more than 2000 users Lab wide. Over the years we have done thousands of object upgrades on the central file server without ever having to install a single upgrade on the end user Desktop. Although this has some risks (“thou shall not copy DLLs over the network”, etc.) and is a heavily customized solution, it has worked flawlessly for us throughout for a large number of applications and middleware.
This is a perhaps surprisingly simple solution given today's advanced technology, but it has been totally efficient and cost-effective for us. Several vendors (Citrix and others) have looked at our solution somewhat perplexed, but every vendor of deployment techniques who has seen our deployment has come to the same conclusion, basically: "you do not need us".
When loading modules you need to keep in mind that:
Once loaded, an assembly can't be unloaded (unless you unload the entire application domain) - so if users can log in and out using the same instance, you may have a problem.
"the load context" matters (see http://blogs.msdn.com/suzcook/archive/2003/05/29/57143.aspx) - this may cause problems if you have dependencies between modules or dependencies on assemblies that are not in the "load context"
If the restricted access to dlls is due to a licensing issue, maybe you need to refine the licensing mechanism somehow (not tie it to access to the actual code, but to some other checks)?
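To illustrate the load-context point for the streaming approach, a rough sketch of the client side (the WCF operation that returns the raw bytes is up to you and not shown here; the example is only a sketch):

    using System;
    using System.Reflection;

    static class ModuleLoader
    {
        // 'assemblyBytes' would be whatever your WCF service streamed down.
        public static Assembly LoadModule(byte[] assemblyBytes)
        {
            // Loading from a byte array puts the assembly outside the normal
            // load contexts, so its dependencies are NOT resolved automatically.
            AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
            {
                Console.WriteLine("Unresolved dependency: " + args.Name);
                return null;   // fetch and Assembly.Load(...) the dependency here
            };

            // Once loaded, the assembly stays loaded until the whole
            // AppDomain is unloaded.
            return Assembly.Load(assemblyBytes);
        }
    }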
I'm currently in the process of creating a Windows service application which will monitor changes made to certain keys in the HKEY_USERS registry. The way I do this is by collecting the SID of the current user. The issue I'm having is that it's returning the administrator's SID, because the service is currently running as Local System.
What I need the system to do is collect and return the SID of the currently logged-in user (by this I don't mean Local Service, Local System or Network Service, but the person who's logged into Windows via the GINA), so what I need the service to do is run as that user. This will also allow the service to write back to the user's network drive, which is the intention of this program.
The issue I'm having is that when I try to install a user service using installutil.exe, it asks for a username and password. I've tried my own credentials (I have an admin and a non-admin account) but it isn't having any of it; plus, I want the user to change depending on the person logging on, and not to be fixed. Is there any way to do this?
The "The current user" assumption is a desktop Windows concept, and with Fast User Switching even that is not true anymore. The Windows services layer is rather common across desktop and server variants, and doesn't really deal well with this. It sits below the interactive sessions layer. One of the ways this manifests itself is in the ability to run services even if there are zero users logged in.
This all seems a bit confused. There can be any number of people logged on, via remote desktops etc. If you, as a service, want to see their registry, you definitely won't get there via HKCU. If you want something like this, you should be using an autorun exe rather than a service. Anything like inspecting sessions and injecting stuff into them to access the loaded registry hive in the session is way overkill and not likely to be clean in any way.
You can find a process that runs for every logged-on user, like explorer.exe, and then get the SID of the user that owns that process (you can use WMI, as in the function here).
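A rough sketch of that WMI approach (requires a reference to System.Management; explorer.exe is just used as the marker process, and there may be zero or several logged-on users):

    using System;
    using System.Collections.Generic;
    using System.Management;

    static class LoggedOnUsers
    {
        public static IEnumerable<string> GetLoggedOnUserSids()
        {
            var searcher = new ManagementObjectSearcher(
                "SELECT * FROM Win32_Process WHERE Name = 'explorer.exe'");

            foreach (ManagementObject process in searcher.Get())
            {
                // GetOwnerSid has a single out parameter: the owner's SID string.
                object[] args = new object[1];
                process.InvokeMethod("GetOwnerSid", args);
                yield return (string)args[0];
            }
        }
    }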