I'm refactoring a WPF application and cleaning up how it stores settings. I've rewritten most of it to use the application's settings (Properties.Settings.Default), and while this is technically working, it seems to generate rather ugly paths in the %appdata% folder, such as:
C:\Users\me\AppData\Local\Company_Name_With_Underscores\program.exe_Url_xs3vufrvyvfeg4xv01dvlt54k5g2xzfr\3.0.1.0\
These also result in a new version-number folder for each version, which apparently never gets cleaned up unless I do so manually via file I/O.
Other programs, including Microsoft's, don't seem to follow this format, so I'm under the impression this is one of those 'technically the best way, but not practical, so nobody uses it' solutions. Is that the case, or am I missing something that would make this more practical?
I ask mainly because I can foresee issues if we ever have to direct a customer to one of these folders to check or send us a file from there.
I'm using .NET 4.0 right now.
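For reference, the version-to-version carry-over that those per-version folders cause can be smoothed with ApplicationSettingsBase.Upgrade(); a minimal sketch, assuming a user-scoped boolean setting named UpgradeRequired (defaulting to true) has been added in the Settings designer:

    if (Properties.Settings.Default.UpgradeRequired)   // hypothetical user-scoped bool setting, default true
    {
        // Copies values from the most recent previous version's folder into the
        // current version's folder, so an update doesn't appear to wipe settings.
        Properties.Settings.Default.Upgrade();
        Properties.Settings.Default.UpgradeRequired = false;
        Properties.Settings.Default.Save();
    }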
I have a large bespoke container file (~3 TB in size), and another application needs to read from it.
However, the application doesn't understand the structure of the container, so I have to convert it first, which means creating another ~3TB file; I'm hoping to streamline this process.
What I'd like to do is create a file/pipe/something on the file system, so that when the other application reads from it, my application simply returns the correct data from within the container.
I'm not sure whether this can be done in C#, and I don't really want to hook any OS components. I was thinking a named pipe might work, but I'm not sure. If anyone has any suggestions or ideas, I'd appreciate it.
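For what it's worth, a named pipe only helps if the consuming application can be pointed at a pipe path (\\.\pipe\...) instead of a regular file. A minimal sketch of that idea, with placeholder container path, offset, and length:

    using System;
    using System.IO;
    using System.IO.Pipes;

    class ContainerPipeServer
    {
        static void Main()
        {
            // Placeholder container path, and offset/length of the payload the client
            // needs; in a real tool these would come from the container's own index.
            const string containerPath = @"C:\data\container.bin";
            const long offset = 0;
            const long length = 1024L * 1024L;

            using (var server = new NamedPipeServerStream("container-data", PipeDirection.Out))
            using (var container = File.OpenRead(containerPath))
            {
                // The consuming application would open \\.\pipe\container-data for reading.
                server.WaitForConnection();

                container.Seek(offset, SeekOrigin.Begin);
                var buffer = new byte[81920];
                long remaining = length;
                while (remaining > 0)
                {
                    int read = container.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                    if (read == 0) break;
                    server.Write(buffer, 0, read);
                    remaining -= read;
                }
            }
        }
    }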
If you don't control the consuming application and it expects to be reading from the file system, there may be a way of doing this, but it's a fair bit of work.
Recent releases of Windows 10 include the Windows Projected File System (ProjFS). Windows takes care of all of the file-system interception, and you just have to answer questions like "what files are meant to be in this directory?" and so on. I believe it's what VFS for Git uses, and that's the kind of scenario it's intended for: cases where the actual files normally reside somewhere else (cloud or remote storage) rather than locally.
You do have to make file content available as Windows demands it. One caveat: it's not an easy job directly from C#. If you're going to try binding to this API, it really helps to understand a bit of C or C++ too.
Earlier this year I was looking at creating a managed binding for this API, to make it easier to consume from .NET languages. It's not currently in a releasable state, but the basics worked, which proves this is a viable approach.
Once .NET Core 3 is fully released I'll probably dust it off and finish it, but for now it's a work in progress.
There are a number of possible ways to build a file-backup application. I need to know which method is the rock-solid, professional way to copy data files, even when a file is in use or very large.
There is a well-known mechanism, Volume Shadow Copy (VSS); however, I've read that it is overkill for a simple copy operation and that the P/Invoke BackupRead can be used instead.
The .NET Framework provides its own methods:
File.Copy was (and possibly still is) problematic with large files and with sharing of resources.
FileStream seems suitable for backup purposes, but I haven't found comprehensive documentation and am not sure I'm right.
Could you please enlighten me as to which method should be used (maybe I have overlooked some options) and why? If the VSS or P/Invoke methods are preferred, could you also provide an example of how to use them, or a reference to comprehensive documentation? I'm particularly interested in the correct settings for creating a file handle that allows sharing while the file is in use.
Thanks in advance.
Everything you try against a live volume (i.e. one with a currently running OS) will suffer from not being able to open some files. The reason is that applications, and the OS itself, open some files exclusively - that is, with ShareMode=0. You won't be able to read those files.
VSS negotiates with VSS-aware applications to release their open files for the duration, but relatively few applications outside Microsoft are VSS aware.
An alternative approach is to boot into another OS (on a USB stick or another on-disk volume) and do your work from there. For example, you could use the Microsoft Preinstallation Environment (WinPE). You can, with some effort, run a .NET 4.x application from there. From such an environment you can get to pretty much any file on the target volume without sharing violations.
WinPE runs as local administrator. As such, you need to assert privileges such as SE_BACKUP_NAME, SE_RESTORE_NAME, SE_SECURITY_NAME, and SE_TAKE_OWNERSHIP_NAME, and you need to open files with the FILE_FLAG_BACKUP_SEMANTICS flag...as described here.
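A minimal sketch of opening a handle with that flag from C# (P/Invoke declarations reduced to what's needed; enabling the privileges themselves via AdjustTokenPrivileges is not shown):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class BackupOpen
    {
        const uint GENERIC_READ = 0x80000000;
        const uint FILE_SHARE_READ = 0x00000001;
        const uint FILE_SHARE_WRITE = 0x00000002;
        const uint OPEN_EXISTING = 3;
        const uint FILE_FLAG_BACKUP_SEMANTICS = 0x02000000;

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern SafeFileHandle CreateFile(
            string lpFileName, uint dwDesiredAccess, uint dwShareMode,
            IntPtr lpSecurityAttributes, uint dwCreationDisposition,
            uint dwFlagsAndAttributes, IntPtr hTemplateFile);

        // Opens a file (or directory) for reading with backup semantics. For the
        // ACL-bypassing behaviour, SE_BACKUP_NAME must already be enabled on the
        // process token (AdjustTokenPrivileges, not shown here).
        public static FileStream OpenForBackup(string path)
        {
            SafeFileHandle handle = CreateFile(
                path, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE,
                IntPtr.Zero, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, IntPtr.Zero);

            if (handle.IsInvalid)
                throw new IOException("CreateFile failed for " + path,
                                      Marshal.GetLastWin32Error());

            return new FileStream(handle, FileAccess.Read);
        }
    }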
The BackupRead/BackupWrite APIs are effective, if awkward. You can't use asynchronous file handles with them...or at least Microsoft claims you're in for "subtle errors" if you do. If those APIs are overkill, you can just use FileStreams.
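If plain FileStreams are enough, the key detail is the FileShare value you pass when opening the source. A sketch, with the caveat that a file being actively written may still yield an inconsistent copy:

    using System.IO;

    static class BackupCopy
    {
        // Copies a file while tolerating other readers/writers. FileShare.ReadWrite |
        // FileShare.Delete only means *we* don't object to others having the file open;
        // files opened elsewhere with ShareMode=0 will still throw a sharing violation.
        public static void Copy(string source, string destination)
        {
            using (var src = new FileStream(source, FileMode.Open, FileAccess.Read,
                                            FileShare.ReadWrite | FileShare.Delete, 81920))
            using (var dst = new FileStream(destination, FileMode.Create, FileAccess.Write,
                                            FileShare.None, 81920))
            {
                src.CopyTo(dst);
            }
        }
    }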
There are bunches of little gotchas either way. For example, you should know when there are hardlinks in play or you'll be backing up redundant data...and when you restore, you don't want to break those links. There are APIs for getting all the hard links for a given file...NtQueryInformationFile, for example.
ReparsePoints (Junctions and SymLinks) require special handling, too...as they are low-level redirects to other file locations. You can run in circles following these reparse points if you're not careful, and even find yourself inadvertently backing up off-volume data.
Not easy to deal with all this stuff, but if thoroughness is an issue, you'll encounter them all before you're done.
My team is developing a new application (C#, .Net 4) that involves a repository for shared users content. We need to decide where to store it. The requirements are as follows:
Share files among users.
Support versions.
Enable search by tags and support further queries such as "all the files created by people from group X"
Different views for different people (team X sees its own content and nobody else can see theirs).
I'm not sure what's best, so:
Can I search over SVN using tags (not SVN tags, of course - more like Stack Overflow's tags)?
Does it make any sense to duplicate the content in both SVN and SQL?
Any other suggestions?
Edit
The application enables users to write validation tests that they later execute. Those tests are shared among many groups on different sites. We need versioning for the usual reasons - undoing changes, accidental deletions, etc. This calls for SVN.
The thing is, we also want the option to find all the tests that are tagged "urgent" and have already been executed, for tracking purposes.
I hope I made myself more clear now :)
Edit II
I ran into SvnQuery and it looks good, but does it have an API I can use? I'd rather use their mechanism with my own GUI.
EDIT III
My colleague strongly supports using only a database and forgetting file-based storage altogether. He claims it is better for persistence (which is needed - a test is more than a list of commands to execute). I'd appreciate input on this, as I think it should be possible either way.
Thanks!
Firstly, consider using Git rather than SVN. It's much faster, and I suspect it's more appropriate for your use case: it's designed to be distributed, meaning your users will be able to use it without internet access, and you won't have any overhead from communicating with a server when saving documents.
Other than that, I can't quite make full sense of your question, but it seems the gist of it might be better rephrased as: "Can I do tag-based searches/access restriction on top of my version control system, or do I need to create a layer on top to do so?"
If so, the answer is that you need a layer on top. Some exist already, both web-based (e.g., Trac) and desktop-based (e.g. GitX). They won't necessarily implement exactly what you need but they can be a good starting point to do what you're seeking.
You could use SVN.
Shared files: obvious and easy. It also supports the centralised locking that you might need for binary files.
Versions. Obviously.
Search... Now we're getting into difficult territory. There are Lucene-based add-ons that allow web searching of your repo - OpenGrok, SvnQuery, or svn-search. These would be your best starting points for that.
There is no way to stop people seeing what's present in an SVN repo, but you can stop them from accessing it. I don't know whether the access control could easily be extended to provide hidden folders; you could ask the SVN developers.
There are some great APIs for working with SVN; probably the most accessible is SharpSvn, which gives you a .NET assembly, but there are Python and C bindings and all sorts available.
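For example, a minimal SharpSvn sketch (the repository URL and local paths are placeholders, and the SharpSvn assemblies are assumed to be referenced):

    using SharpSvn;

    class SvnExample
    {
        static void Main()
        {
            using (var client = new SvnClient())
            {
                // Check out a working copy (placeholder repository URL and local path).
                client.CheckOut(new SvnUriTarget("http://svn.example.com/repo/tests/trunk"),
                                @"C:\work\tests");

                // ...edit or add test files under C:\work\tests...

                // Commit the changes back with a log message.
                client.Commit(@"C:\work\tests",
                              new SvnCommitArgs { LogMessage = "Update validation tests" });
            }
        }
    }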
As mentioned, there are web tools that sit on top of SVN to provide a view into it - Trac, Redmine, and several repo viewers like WebSVN - so there's plenty of sample code you could use to cook up your own.
Would you use a DVCS like Git or Mercurial? I wouldn't. Though these have good mechanisms in themselves, it doesn't sound like they're what you're after. They allow people to work on their own and share with others on a peer-to-peer basis (though you can set up a 'central' repo and have everyone work against it as a peer). They do not work in a centralised, shared way. For example, if you and I both edit a test case locally and then push to the central repo, we might have issues merging - and we definitely will if the file is binary or otherwise non-mergeable. In that case you have the problem of losing one person's changes. That's one main reason not to use a DVCS in your case.
If you're trying to get shared tests together, have you looked at apps that already do this? I noticed TestRail recently, which sounds like what you're trying to do. It's not free (alas), but it's cheap.
So, my application depends on a huge number of small files - around 90,000 of them. I use a component that needs access to these files, but the only way it accepts them is via a URI.
So far I have simply added a directory containing all the files to my debug folder while developing the application. However, now I have to consider deployment. What are my options for including all these files in my deployment?
I have come up with a couple of different solutions, none of which I've managed to make work completely. The first was simply to add all the files to the installer, which would then copy them into place. This would work, in theory at least, but it would make maintaining the installer (a standard MSI installer generated with VS) an absolute hell.
The next option I came up with was to zip them into a single file, add that to the installer, and unzip them with a custom action. The standard libraries, however, do not seem to support complex zip files, which makes this a rather hard option.
Finally, I realized that I could create a separate project and add all the files as resources in that project. What I don't know is how URIs pointing to resources stored in other assemblies work. That is, is it "standard" for everything to support the "pack://application:,,,/Assembly;component/..." format?
So, are these the only options I have, or are there some other ones as well? And what would be the best option to go about this?
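On the pack-URI question specifically: resources compiled into a referenced assembly (Build Action = Resource) are addressed with the pack://application:,,,/Assembly;component/... syntax. A minimal sketch, with a placeholder assembly name:

    using System;
    using System.IO;
    using System.Windows;
    using System.Windows.Resources;

    static class ResourceLoader
    {
        // Reads a file that was added with Build Action = Resource in a referenced
        // assembly named "MyDataAssembly" (placeholder). Works from inside a running
        // WPF application, where the pack:// scheme is registered.
        public static Stream OpenDataFile(string relativePath)
        {
            var uri = new Uri(
                "pack://application:,,,/MyDataAssembly;component/" + relativePath,
                UriKind.Absolute);

            StreamResourceInfo info = Application.GetResourceStream(uri);
            if (info == null)
                throw new FileNotFoundException("Resource not found: " + relativePath);

            return info.Stream;
        }
    }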
I would use a single zip-like archive file, and not unzip it onto the hard disk - just leave it as is. This is also the approach used by several well-known applications that depend on lots of smaller files.
Windows supports using zip files as virtual folders (as of XP), users can see and edit their content with standard tools like Windows Explorer.
C# also has excellent support for zip files, if you're not happy with the built in tools I recommend one of the major Zip libraries out there - they're very easy to use.
In case you're worried about performance, caching files in memory is a simple exercise. If your use case actually requires the files to exist on disk, that's not an issue either - just unzip them on first use; it's only a few lines of code.
In short, just use a zip archive and a good library and you won't run into any trouble.
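As a sketch of that approach, reading an entry straight out of the archive without touching the disk (note that ZipArchive is .NET 4.5+; on .NET 4.0 a library such as DotNetZip or SharpZipLib offers equivalent calls):

    using System.IO;
    using System.IO.Compression; // ZipFile lives in System.IO.Compression.FileSystem.dll

    static class ArchiveReader
    {
        // Reads a single entry straight out of the archive without extracting to disk.
        public static byte[] ReadEntry(string archivePath, string entryName)
        {
            using (ZipArchive archive = ZipFile.OpenRead(archivePath))
            {
                ZipArchiveEntry entry = archive.GetEntry(entryName);
                if (entry == null)
                    throw new FileNotFoundException(entryName);

                using (Stream stream = entry.Open())
                using (var buffer = new MemoryStream())
                {
                    stream.CopyTo(buffer);
                    return buffer.ToArray();
                }
            }
        }
    }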
In any case, I would not embed this huge number of files in your application directly. Data files should be kept separate.
You could include the files in a zip archive, and have the application itself unzip them on first launch as part of a final configuration, if it's not practical to do that from the installer. This isn't entirely atypical (e.g. it seems like most Microsoft apps do a post-install config on first run).
Depending on how the resources are used, you could have a service that provides them on demand from a store of some kind and caches them, rather than dumping them all somewhere. This may or may not make sense depending on what the resources are for; e.g. if they're UI elements, a delay on first access might not be acceptable.
You could even serve them over HTTP from a local or remote server, or from a SQL server if you're already using one, again with caching. That would be great for maintainability, but may not work in your environment.
I wouldn't do anything that involves an embedded resource for each file individually, that would be hell to maintain.
Another option would be to create a self-extracting zip/rar archive and extract it from the installer.
One option is to keep them in compound storage and access them directly within that storage. The article on our site describes various types of storage and their advantages/specifics.
I'm reviewing a .NET project, and I came across some pretty heavy usage of .ini files for configuration. I would much prefer to use app.config files instead, but before I jump in and make an issue out of this with the devs, I wonder if there are any valid reasons to favor .ini files over app.config?
Well, on average, .INI files are probably more compact and, in a way, more readable to humans. XML is a bit of a pain to read, and it's quite verbose.
However, app.config is of course the standard .NET configuration mechanism; it is supported throughout .NET and has lots of hooks and ways to do things. If you go with .INI files, you're basically "rolling your own all the way" - a classic case of "reinventing the wheel".
Then again: is there any chance this project started its life before .NET? Or that it's a port of an existing pre-.NET Windows app where .INI files were the way to go?
There's nothing inherently wrong with .INI files, I think - they're just not really supported in .NET anymore, and you're on your own for reading, extending, and handling them. It's also a "stumper" if you ever need to bring outside help on board: hardly any .NET developer will have been exposed to .INI files, while the .NET config system is fairly widely known and understood.
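For comparison, reading a value the "standard" way is a one-liner against app.config; a minimal sketch (the key name is a placeholder, and a reference to System.Configuration.dll is required):

    using System;
    using System.Configuration; // add a reference to System.Configuration.dll

    class ConfigExample
    {
        static void Main()
        {
            // app.config:
            //   <configuration>
            //     <appSettings>
            //       <add key="ReportServerUrl" value="http://reports.example.com" />
            //     </appSettings>
            //   </configuration>
            string url = ConfigurationManager.AppSettings["ReportServerUrl"];
            Console.WriteLine(url ?? "(not configured)");
        }
    }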
INI files are quite okay in my book. The problem is GetPrivateProfileString() and its cousins. Appcompat has turned that into one ugly mutt of an API function: retrieving a single INI value takes about 50 milliseconds, which is a mountain of time on a modern PC.
But the biggest problem is that you can't control the encoding of the INI file. Windows will always use the system code page to interpret strings, which is only okay as long as your program doesn't travel far from your desk. If it does, you run a serious risk of producing gibberish unless you restrict the character set used in your INI file to ASCII.
XML doesn't have this problem, and it is well supported by the .NET Framework, whether you use Settings or manage your config yourself.
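For context, this is roughly what the raw API looks like when P/Invoked from C# (section, key, and path are placeholders):

    using System.Runtime.InteropServices;
    using System.Text;

    static class IniFile
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern int GetPrivateProfileString(
            string section, string key, string defaultValue,
            StringBuilder result, int size, string filePath);

        // Reads a single value - the call described above as slow under appcompat
        // and code-page dependent for non-ASCII content.
        public static string Read(string path, string section, string key, string fallback)
        {
            var buffer = new StringBuilder(1024);
            GetPrivateProfileString(section, key, fallback, buffer, buffer.Capacity, path);
            return buffer.ToString();
        }
    }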
Personally, I never use .ini/.xml config files for anything more than loading all the values into a singleton (or similar) at startup and then using them at runtime, like this...
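(A minimal sketch of that pattern; the setting names and defaults are placeholders:)

    using System.Configuration;

    // One shape of the "load once into a singleton" pattern mentioned above.
    public sealed class AppConfig
    {
        private static readonly AppConfig instance = new AppConfig();
        public static AppConfig Instance { get { return instance; } }

        public string DataDirectory { get; private set; }
        public int MaxRetries { get; private set; }

        private AppConfig()
        {
            // Read everything once at startup; the rest of the code only sees properties.
            DataDirectory = ConfigurationManager.AppSettings["DataDirectory"] ?? @"C:\Data";
            int retries;
            MaxRetries = int.TryParse(ConfigurationManager.AppSettings["MaxRetries"], out retries)
                         ? retries
                         : 3;
        }
    }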
That being said, I firmly believe you should look at the kind of data and how it is used. If the data concerns the application itself - settings and configuration - then I believe the app.config file is the right place to hold it.
If, on the other hand, the data is about loading projects, images, or other resources related to the content of the application, then I'd go with a data file - does anyone use .ini files anymore? I'm thinking of an .xml file for storing this information. In short: segment the data being stored according to its domain and context.
INI files are preferable for multi-platform applications (e.g., Linux & Windows), where the customer may occasionally edit the configuration parameters directly, and where you want a more user-friendly/-recognizable file name without the extra effort.