Is it possible to have a single key in a resx file reference a List, or some other collection?
I noticed this question asked elsewhere, but those who answered didn't seem to understand the question.
In other cases, I see discussion of how to iterate through keys in a resx file.
I suspect it isn't possible to have a single key which has a collection as a value. I suspect that iterating the resx file is indeed the only substitute - but I want to know what others have found.
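For what it's worth, the two workarounds I have seen are storing a delimited string under a single key and splitting it, or enumerating the ResourceSet and filtering by a key prefix. A rough sketch (MyResources, ColorList, and the Colors_ prefix are all hypothetical names):

using System.Collections;
using System.Collections.Generic;
using System.Globalization;
using System.Resources;

// Workaround 1: store "Red;Green;Blue" under a single key and split it.
List<string> colors = new List<string>(MyResources.ColorList.Split(';'));

// Workaround 2: enumerate the resource set, collecting keys that share a prefix.
ResourceSet set = MyResources.ResourceManager.GetResourceSet(
    CultureInfo.CurrentUICulture, true, true);
var values = new List<string>();
foreach (DictionaryEntry entry in set)
    if (((string)entry.Key).StartsWith("Colors_"))
        values.Add((string)entry.Value);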
I googled my query a lot and found many references, but they only created more confusion for me, so I am putting my question here. I have my resource file under the Resources folder, with the path Resources\RR.resx, where keys and values are stored. I want to know a way to access those key-value pairs in back-end code.
(Note: please make sure it's a resource file, not a resource dictionary.)
Considering your example, you can access a key-value pair in your code-behind file like this:
"Resource.RR.[KeyName]". Here, 'Resource' would be your folder name/namespace, 'RR' would be your class name, and [KeyName] would be your key.
However, you should also consider that as soon as you add any resource file, the default access modifier is 'Internal'; you may have to change it to Public or use it accordingly.
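For example (a sketch; WelcomeMessage is a hypothetical key, and it assumes the file's access modifier has been switched to Public):

// Strongly typed access through the generated class:
string message = Resource.RR.WelcomeMessage;

// Or look a key up by name at runtime through the generated ResourceManager:
string byName = Resource.RR.ResourceManager.GetString("WelcomeMessage");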
Properties.Resources.ResourceKey;
Please read the MSDN documentation on localization for more information: http://msdn.microsoft.com/en-us/library/vstudio/aa992030(v=vs.100).aspx
I have been reading through all that I could find to understand what Serialization is. People often say that we need serialization so that we can convert the in-memory object into a form that is easy to transport over the network and persist on the disk.
My question is - What is so wrong with the in-memory object's structure that makes it difficult to be transported or stored?
Some people also say that the object is binary in form and needs to be serialized. AFAIK, everything in the computer storage or memory is binary. What is it that they want to convey?
Some technical details would be appreciated.
EDIT 1:
I have been looking into C# code samples but all of them use the "Serialization" available in the framework. Maybe I need to work with C++ to see the details and to experience the pain.
A simple and obvious example: a file object is a reference to a file in the file system, but in itself it is not very useful. Serializing it will at best pass a filename to the receiver, which may not have the file system to read the file from.
Instead, when you pass it over the network, you can serialize it so that it contains the contents of the file, which is usable by the receiver.
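In code, the difference looks roughly like this (C#, since the question mentions C# samples; the path is hypothetical):

// The serialized form carries the file's bytes rather than its name, so the
// receiver needs no access to our disk.
string path = @"C:\data\report.txt";
byte[] asReference = System.Text.Encoding.UTF8.GetBytes(path); // just a name
byte[] asContents = System.IO.File.ReadAllBytes(path);         // the usable data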
Serializing is basically just taking an object which in memory may not have very usable content (pointers/references to data stored elsewhere/...), and converting it to something that is actually usable by the receiver.
For instance, if you have an object of this class
public class MyClass {
    private String myString = null;
    // getters and setters
}
The in-memory representation will be just a pointer to another object (the String). You cannot restore the original state of the object just by storing that binary form.
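To make that concrete, here is a hand-rolled sketch in C# (the question's edit mentions C# samples) of what a serializer actually does: it writes out the value the reference leads to, then rebuilds an equivalent object elsewhere.

using System.IO;
using System.Text;

public class MyClass
{
    public string MyString { get; set; }

    // Writes the *value* the reference points to: length-prefixed UTF-8 bytes,
    // not a memory address.
    public void Serialize(Stream stream)
    {
        using (var writer = new BinaryWriter(stream, Encoding.UTF8, true))
            writer.Write(MyString ?? "");
    }

    // Rebuilds an equivalent object from those bytes. The new String lives at
    // a completely different address, which is why storing pointers is useless.
    public static MyClass Deserialize(Stream stream)
    {
        using (var reader = new BinaryReader(stream, Encoding.UTF8, true))
            return new MyClass { MyString = reader.ReadString() };
    }
}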
I would like to design my own database engine for educational purposes, for the time being. Designing a binary file format is neither hard nor the question (I've done it in the past), but while designing a database file format, I have come across a very important question:
How to handle the deletion of an item?
So far, I've thought of the following two options:
Each item will have a "deleted" bit which is set to 1 upon deletion (see the header sketch after this list).
Pro: relatively fast.
Con: potentially sensitive data will remain in the file.
0x00 out the whole item upon deletion.
Pro: potentially sensitive data will be removed from the file.
Con: relatively slow.
Recreating the whole database.
Pro: no empty blocks which makes the follow-up question void.
Con: it's a really good idea to overwrite the whole 4 GB database file because a user corrected a typo. I will sell this method to Twitter ASAP!
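For option 1, a record header might look something like this (a hypothetical layout, not how any particular engine does it):

// Each record starts with a small fixed header; deletion just flips a bit.
struct RecordHeader
{
    public byte Flags;       // bit 0 = deleted
    public int PayloadSize;  // number of payload bytes following the header

    public bool IsDeleted { get { return (Flags & 1) != 0; } }
}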
Now let's say you already have a few empty blocks in your database (deleted items). The follow-up question is how to handle the insertion of a new item?
Append the item to the end of the file.
Pro: fastest possible.
Con: file will get huge because of all the empty blocks that remain because deleted items aren't actually deleted.
Search for an empty block exactly the size of the one you're inserting.
Pro: may get rid of some blocks.
Con: you may end up scanning the whole file at each insert only to find out it's very unlikely to come across a perfectly fitting empty block.
Find the first empty block which is equal to or larger than the item you're inserting (see the sketch after this list).
Pro: you probably won't end up scanning the whole file, as you will find an empty block somewhere midway; this keeps the file size relatively low.
Con: there will still be plenty of leftover 0x00 bytes at the end of items that were inserted into empty blocks bigger than themselves.
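A sketch of that first-fit scan, reusing the hypothetical header layout from the sketch above:

using System.IO;

// Walks record headers; returns the offset of the first deleted block that is
// large enough, or the end of the file if the item must be appended.
static long FindFirstFit(Stream db, int needed)
{
    var reader = new BinaryReader(db);
    db.Seek(0, SeekOrigin.Begin);
    while (db.Position < db.Length)
    {
        long pos = db.Position;
        byte flags = reader.ReadByte();
        int size = reader.ReadInt32();
        if ((flags & 1) != 0 && size >= needed)
            return pos;                    // reuse this deleted block
        db.Seek(size, SeekOrigin.Current); // skip the payload
    }
    return db.Length;                      // no fit found: append
}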
Right now, I think the first deletion method and the last insertion method are probably the "best" mix, but they would still have their own small issues. Alternatively, one could combine the first insertion method with scheduled full database recreation. (Probably not a good idea when working with really large databases. Also, each small update in that method will clone the whole item to the end of the file, thus accelerating file growth at a potentially insane rate.)
Unless there is a way of deleting/inserting blocks from/to the middle of the file in a file-system approved way, what's the best way to do this? More importantly, how do databases currently used in production usually handle this?
The engines you name are very different, and your engine doesn't seem to have much in common with them; it sounds similar to the good old dBase format.
For deletion, the idea with the bit is good; make the overwriting of deleted items with 0x00 a configurable option.
For insertion, you should keep a list of free blocks with their respective sizes. This list gets updated when you delete an item, when you grow the file, and when you shrink the file; this way you can determine very quickly how to handle an insertion.
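A sketch of such a free list; a sorted map from block size to offsets makes the lookup fast without touching the file (all names here are mine):

using System.Collections.Generic;

// Maps block size -> offsets of deleted blocks of that size. SortedDictionary
// iterates sizes in ascending order, so the first hit is the best fit.
class FreeList
{
    private readonly SortedDictionary<int, Queue<long>> _bySize =
        new SortedDictionary<int, Queue<long>>();

    public void Add(int size, long offset)
    {
        Queue<long> offsets;
        if (!_bySize.TryGetValue(size, out offsets))
            _bySize[size] = offsets = new Queue<long>();
        offsets.Enqueue(offset);
    }

    // Returns the offset of the smallest free block >= needed, or -1 to append.
    public long Take(int needed)
    {
        foreach (var pair in _bySize)
            if (pair.Key >= needed && pair.Value.Count > 0)
                return pair.Value.Dequeue();
        return -1;
    }
}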
Why not start by looking at how existing systems work? If this is for your own education that will benefit you more in the long run.
Look at the tried and true B-Tree/B+Tree for starters. Then look at some others like Fractal Tree indexes, SSTables, Hash Tables, Merge Tables, etc.
Start by understanding how a 'database' stores and indexes data. There are great open source and documented examples of this both in the NoSQL space as well as the more traditional RDBMS world. Take apart something that exists, understand it, modify it, improve it.
I've been down this road, though not for educational purposes. The .NET space lacked any thread-safe B+Tree that was disk-based, so I wrote one. You can read some about it on my blog at http://csharptest.net/projects/bplustree/ or go download the source and take it apart: http://code.google.com/p/csharptest-net/downloads/list
There are open-source databases; why don't you look at them first? The MySQL source code can be a good start. You can download the source and dig into it.
Also, you can start investigating the data structures being used by databases, then look at persistence strategies and so forth.
I want to recurse several directories and find duplicate files between the n number of directories.
My knee-jerk idea at this is to have a global hashtable or some other data structure to hold each file I find; then check each subsequent file to determine if it's in the "master" list of files. Obviously, I don't think this would be very efficient and the "there's got to be a better way!" keeps ringing in my brain.
Any advice on a better way to handle this situation would be appreciated.
You could avoid hashing by first comparing file sizes. If you never find files with the same size, you don't have to hash them at all. Only once you find another file with the same size do you hash them both.
That should be significantly faster than blindly hashing every single file, although it'd be more complicated to implement that two-tiered check.
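A sketch of that two-tiered check using LINQ (the helper names are mine; file paths go in, groups of duplicates come out):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

// Group by size first (cheap), then hash only groups with more than one file.
static IEnumerable<IGrouping<string, string>> FindDuplicates(IEnumerable<string> paths)
{
    return paths
        .GroupBy(p => new FileInfo(p).Length)
        .Where(g => g.Count() > 1)   // files with a unique size can't be duplicates
        .SelectMany(g => g)
        .GroupBy(Md5Of)              // expensive pass, only for the candidates
        .Where(g => g.Count() > 1);
}

static string Md5Of(string path)
{
    using (var md5 = MD5.Create())
    using (var stream = File.OpenRead(path))
        return BitConverter.ToString(md5.ComputeHash(stream));
}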
I'd suggest keeping multiple in-memory indexes of files.
Create one that indexes all files by file length:
Dictionary<long, List<FileInfo>> IndexBySize; // FileInfo.Length is a long
When you're processing new file Fu, it's a quick lookup to find all other files that are the same size.
Create another that indexes all files by modification timestamp:
Dictionary<DateTime, List<FileInfo>> IndexByModification;
Given file Fu, you can find all files modified at the same time.
Repeat for each significant file characteristic. You can then use the Intersect() extension method to compare multiple criteria efficiently.
For example:
var matchingFiles
= IndexBySize[fu.Size].Intersect(IndexByModification[fu.Modified]);
This would allow you to avoid the byte-by-byte scan until you need to. Then, for files that have been hashed, create another index:
Dictionary<string, List<FileInfo>> IndexByHash; // key: hex-encoded MD5 hash
You might want to calculate multiple hashes at the same time to reduce collisions.
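A sketch of computing two hashes in a single pass over the file, so the second hash costs no extra I/O:

using System.IO;
using System.Security.Cryptography;

// Feeds each chunk of the file to both hash algorithms before reading the next,
// so the file is only read once.
static (byte[] Md5, byte[] Sha1) HashBoth(string path)
{
    using (var md5 = MD5.Create())
    using (var sha1 = SHA1.Create())
    using (var stream = File.OpenRead(path))
    {
        var buffer = new byte[81920];
        int read;
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            md5.TransformBlock(buffer, 0, read, null, 0);
            sha1.TransformBlock(buffer, 0, read, null, 0);
        }
        md5.TransformFinalBlock(buffer, 0, 0);
        sha1.TransformFinalBlock(buffer, 0, 0);
        return (md5.Hash, sha1.Hash);
    }
}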
Your approach sounds sane to me. Unless you have very good reason to assume that it will not suffice your performance requirements, I'd simply implement it this way and optimize it later if necessary. Remember that "premature optimization is the root of evil".
The best practice, as John Kugelman said, is to first compare two files of the same size; if they have different sizes, it's obvious that they are not duplicates.
If you find two files with the same size, for better performance you can compare the first 500 KB of the two files; if the first 500 KB are the same, you can compare the rest of the bytes. This way you don't have to read all the bytes of a (for example) 500 MB file to compute its hash, so you save time and boost performance.
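A sketch of that early-exit comparison; it covers the 500 KB idea naturally, because it stops at the first differing chunk:

using System.IO;

// Streams both files in chunks and bails out at the first mismatch, so
// non-duplicates of equal size are usually rejected after one short read.
static bool FilesEqual(string pathA, string pathB)
{
    using (var a = File.OpenRead(pathA))
    using (var b = File.OpenRead(pathB))
    {
        if (a.Length != b.Length) return false;   // cheap size check first
        var bufA = new byte[81920];
        var bufB = new byte[81920];
        while (true)
        {
            int readA = a.Read(bufA, 0, bufA.Length);
            int readB = b.Read(bufB, 0, bufB.Length);
            if (readA != readB) return false;
            if (readA == 0) return true;          // both exhausted: identical
            for (int i = 0; i < readA; i++)
                if (bufA[i] != bufB[i]) return false;
        }
    }
}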
For a byte comparison where you're expecting many duplicates, you're likely best off with the method you're already looking at.
If you're really concerned about efficiency and know that duplicates will always have the same filename, then you could start by comparing filenames alone and only hash bytes when you find a duplicate name. That way you'd save the time of hashing files that have no duplicate in the tree.
I'm learning C#, and I've reached the LinkedList<T> type. I still need to know more: when should I use it, how do I create one, and how do I use it? I just want information.
If anyone knows a good article about this subject, or can show me some examples with explanations, such as how to create one, how to add and remove items, and how to deal with nodes and elements, that would be great.
Thanks in advance. I really enjoy asking questions around here with all the pros answering and helping.
[EDIT] Changed reference to LinkedList<T> instead of "array linkedlist." I think this was what was meant based on the context.
You can find more information on LinkedList<T> at MSDN, including an example of how to create and use one. Wikipedia has a reasonable article on linked lists, including their history, usage, and some implementation details.
A linked list is a collection for storing sequences of values of the same type, which is a frequent task, e.g. representing a queue of cars waiting at a stoplight.
There are many different collection types, like linked list, array, map, set, etc. Which one to use when depends on their properties, e.g.:
do you need some type of ordering?
is the container associative, storing key-value pairs like a dictionary?
can you store the same element twice?
performance measures - how fast is e.g. inserting, removing, finding an element? This is usually given in Big-O notation, telling you how the time required scales with the number of elements in the collection.
Memory footprint and layout. This also can affect performance, due to good/bad locality.
This collection class implements a doubly linked list. It allows you to quickly find the immediate siblings of a specified item in the collection. Removing an item does not leave a gap: the neighbouring nodes are simply relinked, with no resizing or copying involved.
For more info on LinkedList class, check out LinkedList at MSDN.
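A minimal usage sketch (creating a list, adding, removing, and walking the nodes):

using System;
using System.Collections.Generic;

var list = new LinkedList<string>();
LinkedListNode<string> first = list.AddFirst("head");
list.AddLast("tail");
list.AddAfter(first, "middle");    // O(1) insertion given a node reference

list.Remove("middle");             // relinks the neighbours; nothing is copied

for (var node = list.First; node != null; node = node.Next)
    Console.WriteLine(node.Value); // prints: head, tail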
Do you know what a standard linked list is? It's like one of those (doubly linked), but it uses .NET generics so you can easily store any type inside it.
Honestly, I don't use it; I prefer the more basic List<T> or Dictionary<TKey, TValue>.
For more info on linked lists, check out Wikipedia. As for generics, there are tons of articles here and at MSDN.
Linked lists are collections. They can be used as replacements for arrays: they can grow dynamically in size, and they have special helper methods that can make development or problem solving faster. Try looking through the type's methods and properties to learn more.
LinkedList<T> is a generic collection, meaning it supports type-safe declarations.