Apologies if I'm doing something wrong; this is my first post.
I'm currently working with C# and want to save a bunch of data out to a JSON file and load it back, but I'm having trouble figuring out how to get it in the following format.
// Primary ID
001
{
    // Secondary ID
    01
    {
        // Tertiary ID
        01
        {
            string: "this is some information.",
            int: 9371
        }
    }
    // Secondary ID
    02
    {
        // Tertiary ID
        01
        {
            string: "blah blah blah.",
            int: 2241
        }
    }
}
I'd essentially like to be able to call up information with a particular set of IDs for example 001-02-01 which would return a string ("blah blah blah.") and an int (2241).
The reason I want to go about it like this instead of just having one longer ID is so that when the JSON file becomes very large, I'm hoping to be able to speed up the search for information by passing each ID in turn.
If that makes no sense and it would be just as fast to pass in one longer ID and not be bothered with this whole nested-ID-segments concept, then please let me know!
If, however, what I'm thinking is correct and structuring it like this would help the speed of finding particular data, how would I go about doing that? With nested C# classes in arrays?
The simplest and most efficient way is to keep all data items as the same type. Currently, you seem to be keying each object by its ID:
{
    "01": {},
    "02": {}
}
This will not go well if you try to use a serializable class, because every key becomes a member name.
I would recommend the following:
{
    "items": [
        { "id": "01" }, { "id": "02" }, ...
    ]
}
Then you can serialize/deserialize easily with
[Serializable]
public class Item
{
    public string id = null;
}

[Serializable]
public class RootObject
{
    public List<Item> items = null;
}
and then in Unity:
void Start()
{
    string str = GetJson(); // However you get it
    RootObject ro = JsonUtility.FromJson<RootObject>(str);
}
If you want to speed up the fetching and your collection is large, convert it to a dictionary:
Dictionary<string, Item> dict = null;

void Start()
{
    string str = GetJson(); // However you get it
    RootObject ro = JsonUtility.FromJson<RootObject>(str);
    this.dict = new Dictionary<string, Item>();
    foreach (Item item in ro.items)
    {
        this.dict.Add(item.id, item);
    }
    ro = null;
}
Now you can access items really fast:
Item GetItem(string id)
{
    if (string.IsNullOrEmpty(id)) { return null; }
    Item item = null;
    this.dict.TryGetValue(id, out item);
    return item;
}
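If you do still want the three-level IDs from the question, one option is to flatten them into a single composite key ("001-02-01") when the dictionary is built; lookups stay O(1) and you don't need nested searches. A minimal sketch, assuming made-up Entry/EntryRoot/EntryLookup names and Unity's JsonUtility:

[Serializable]
public class Entry
{
    public string id = null;    // e.g. "001-02-01"
    public string text = null;
    public int number = 0;
}

[Serializable]
public class EntryRoot
{
    public List<Entry> items = null;
}

public class EntryLookup
{
    private Dictionary<string, Entry> dict;

    public void Load(string json)
    {
        EntryRoot root = JsonUtility.FromJson<EntryRoot>(json);
        dict = new Dictionary<string, Entry>();
        foreach (Entry e in root.items)
        {
            dict.Add(e.id, e);
        }
    }

    // Get("001", "02", "01") -> the entry holding "blah blah blah." and 2241
    public Entry Get(string primary, string secondary, string tertiary)
    {
        Entry entry;
        dict.TryGetValue(primary + "-" + secondary + "-" + tertiary, out entry);
        return entry;
    }
}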
If you end up storing millions of records in your file and want to start doing something more performant it would be easier to switch to a decent document database like MongoDB rather than trying to reinvent the wheel.
Worry about writing good standard code before worrying about performance problems that don't yet exist.
The following example is not in your language of choice, but it does show that an array of 1,000,000 objects parsed from JSON can be searched very quickly:
const getIncidentId = () => {
let id = Math.random().toString(36).substr(2, 6).toUpperCase().replace("O", "0")
return `${id.slice(0, 3)}-${id.slice(3)}`
}
console.log("Building array of 1,000,000 objects")
const littleData = Array.from({ length: 1000000 }, (v, k) => k + 1).map(x => ({ cells: { Number: x, Id: getIncidentId() } }))
console.log("Getting list of random Ids for array members [49, 60, 70000, 700000, 999999]")
const randomIds = ([49, 60, 70000, 700000, 999999]).map(i => littleData[i].cells.Id)
console.log(randomIds)
console.log("Finding each array item that contains a nested Id property in the randomIds list.")
const foundItems = littleData.filter(i => randomIds.includes(i.cells.Id))
console.log(foundItems)
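For comparison, here is a rough C# version of the same experiment (a sketch; the class and method names are illustrative): even a plain linear scan over a million in-memory objects finishes quickly, and a dictionary lookup would be faster still.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class LinearSearchDemo
{
    class Cell { public int Number; public string Id; }

    static void Main()
    {
        Console.WriteLine("Building list of 1,000,000 objects");
        var littleData = Enumerable.Range(1, 1000000)
            .Select(x => new Cell { Number = x, Id = Guid.NewGuid().ToString("N").Substring(0, 6) })
            .ToList();

        // Pick the Ids of a few known members, as in the JavaScript example
        var randomIds = new HashSet<string>(
            new[] { 49, 60, 70000, 700000, 999999 }.Select(i => littleData[i].Id));

        var sw = Stopwatch.StartNew();
        var foundItems = littleData.Where(c => randomIds.Contains(c.Id)).ToList();
        sw.Stop();

        Console.WriteLine("Found " + foundItems.Count + " items in " + sw.ElapsedMilliseconds + " ms");
    }
}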
Related
I'm creating a game in Unity3D + C#.
What I've got at the moment: an SQL datatable, consisting of 8 columns holding a total of 3 entries and a list "_WeapList" that holds every entry (as shown below).
public struct data
{
    public string Name;
    public int ID, dmg, range, magazin, startammo;
    public float tbtwb, rltimer;
}

List<data> _WeapList;
public Dictionary<int, data> _WeapoList; //probable change
[...]
//reading the SQL Table + parse it into a new List-entry
while (rdr.Read())
{
    data itm = new data();
    itm.Name = rdr["Name"].ToString();
    itm.ID = int.Parse(rdr["ID"].ToString());
    itm.dmg = int.Parse(rdr["dmg"].ToString());
    itm.range = int.Parse(rdr["range"].ToString());
    itm.magazin = int.Parse(rdr["magazin"].ToString());
    itm.startammo = int.Parse(rdr["startammo"].ToString());
    itm.tbtwb = float.Parse(rdr["tbtwb"].ToString());
    itm.rltimer = float.Parse(rdr["rltimer"].ToString());
    _WeapList.Add(itm);
    _WeapoList.Add(itm.ID, itm); //probable change
}
Now I want to create a "Weapon"-Class that will have the same 8 fields, feeding them via a given ID
How do I extract the values of a specific item (determined by the int ID, which is always unique) in the list/struct?
public class Weapons : MonoBehaviour
{
    public string _Name;
    public int _ID, _dmg, _range, _magazin, _startammo;
    public float _tbtwb, _rltimer;

    void Start()
    {
        // Here's the main problem
        _Name = _WeapoList...?
        _dmg = _WeapoList...?
    }
}
If your collection of weapons may become quite large or you need to frequently look up weapons in it, I would suggest using a Dictionary instead of a List for this (using the weapon ID as the key). A lookup will be much quicker using a Dictionary key than searching through a List using a loop or LINQ.
You can do this by modifying your code as follows:
public Dictionary<int, data> _WeapList;
[...]
//reading the SQL Table + parse it into a new List-entry
while (rdr.Read())
{
    data itm = new data();
    itm.Name = rdr["Name"].ToString();
    itm.ID = int.Parse(rdr["ID"].ToString());
    itm.dmg = int.Parse(rdr["dmg"].ToString());
    itm.range = int.Parse(rdr["range"].ToString());
    itm.magazin = int.Parse(rdr["magazin"].ToString());
    itm.startammo = int.Parse(rdr["startammo"].ToString());
    itm.tbtwb = float.Parse(rdr["tbtwb"].ToString());
    itm.rltimer = float.Parse(rdr["rltimer"].ToString());
    _WeapList.Add(itm.ID, itm); //probable change
}
Then, to access elements in the dictionary, just use this syntax:
_WeapList[weaponID].dmg; // To access the damage of the weapon with the given weaponID
Guarding against invalid IDs:
If there's a risk of the weaponID supplied not existing, you can use the .ContainsKey() method to check for it first before trying to access its members:
if (_WeapList.ContainsKey(weaponID))
{
    // Retrieve the weapon and access its members
}
else
{
    // Weapon doesn't exist, default behaviour
}
Alternatively, if you're comfortable using out arguments, you can use .TryGetValue() instead for validation - this is even quicker than calling .ContainsKey() separately:
data weaponData;
if (_WeapList.TryGetValue(weaponID, out weaponData))
{
    // weaponData is now populated with the weapon and you can access members on it
}
else
{
    // Weapon doesn't exist, default behaviour
}
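Applied to the Weapons class in the question, Start() could then look roughly like this (a sketch; it assumes the dictionary is reachable from the component, for example through a hypothetical static WeaponManager holder):

void Start()
{
    data weaponData;
    if (WeaponManager._WeapList.TryGetValue(_ID, out weaponData)) // WeaponManager is an assumed static holder
    {
        _Name = weaponData.Name;
        _dmg = weaponData.dmg;
        _range = weaponData.range;
        _magazin = weaponData.magazin;
        _startammo = weaponData.startammo;
        _tbtwb = weaponData.tbtwb;
        _rltimer = weaponData.rltimer;
    }
    else
    {
        Debug.LogWarning("No weapon found for ID " + _ID);
    }
}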
Hope this helps! Let me know if you have any questions.
Let specificWeapon be a weapon to be searched for in the list; then you can use the following code to select that item from the list of weapons. Because data is a struct, FirstOrDefault returns default(data) rather than null when nothing matches, so cast to a nullable type to keep the null check. Hope this is what you are looking for:
data? selectedWeapon = WeapList.Select(x => (data?)x).FirstOrDefault(x => x.Value.ID == specificWeapon.ID);
if (selectedWeapon != null)
{
    // this is your weapon, proceed
}
else
{
    // weapon not found
}
You can use LINQ to search for a specific object by weaponId. Note that since data is a struct, a non-match returns default(data) rather than null:
var weapon = _WeapList.FirstOrDefault(w => w.ID == weaponId);
I have a User class that accumulates lots of DateTime entries in a List<DateTime> Entries field.
Occasionally, I need to get the last 12 entries (or fewer, if there aren't 12 yet). The list can grow very large.
I can add new Entry object to dedicated collection, but then I have to add ObjectId User field to refer the related user.
It seems like a big overhead, for each entry that holds only a DateTime, to add another field of ObjectId. It may double the collection size.
As I occasionally need to quickly get only last 12 entries of 100,000 for instance, I cannot place these entries in a per-user collection like:
class PerUserEntries {
    public ObjectId TheUser;
    public List<DateTime> Entries;
}
Because it's not possible to fetch only N entries from an embedded array in a mongo query, AFAIK (if I'm wrong, it would be very gladdening!).
So am I doomed to double my collection size or is there a way around it?
Update, according to #profesor79's answer:
If your answer works, that will be perfect! But unfortunately it fails...
Since I needed to filter on the user entity as well, here is what I did:
With this data:
class EndUserRecordEx {
    public ObjectId Id { get; set; }
    public string UserName;
    public List<EncounterData> Encounters;
}
I am trying this:
var query = EuBatch.Find(u => u.UserName == endUser.UserName)
    .Project<BsonDocument>(
        Builders<EndUserRecordEx>.Projection.Slice(
            u => u.Encounters, 0, 12));

var queryString = query.ToString();
var requests = await query.ToListAsync(); // MongoCommandException
This is the query I get in queryString:
find({ "UserName" : "qXyF2uxkcESCTk0zD93Sc+U5fdvUMPow" }, { "Encounters" : { "$slice" : [0, 15] } })
Here is the error (the MongoCommandException.Result):
{
    {
        "_t" : "OKMongoResponse",
        "ok" : 0,
        "code" : 9,
        "errmsg" : "Syntax error, incorrect syntax near '17'.",
        "$err" : "Syntax error, incorrect syntax near '17'."
    }
}
Update: problem identified...
Recently, Microsoft announced DocumentDB protocol support for MongoDB. Apparently, it doesn't yet support all projection operators. I tried the same code against mLab.com, and it works.
You can keep PerUserEntries, as it is a perfectly workable document structure.
To get part of that array we need to add a projection to the query, so that only x elements are returned, and this is done server-side.
Please see snippet below:
static void Main(string[] args)
{
    // To directly connect to a single MongoDB server
    // or use a connection string
    var client = new MongoClient("mongodb://localhost:27017");
    var database = client.GetDatabase("test");
    var collection = database.GetCollection<PerUserEntries>("tar");

    var newData = new PerUserEntries();
    newData.Entries = new List<DateTime>();
    for (var i = 0; i < 1000; i++)
    {
        newData.Entries.Add(DateTime.Now.AddSeconds(i));
    }
    collection.InsertOne(newData);

    var list =
        collection.Find(new BsonDocument())
            .Project<BsonDocument>(
                Builders<PerUserEntries>.Projection.Slice(x => x.Entries, 0, 3))
            .ToList();

    Console.ReadLine();
}

public class PerUserEntries
{
    public List<DateTime> Entries;
    public ObjectId TheUser;
    public ObjectId Id { get; set; }
}
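If you need the projected documents back as typed objects rather than raw BsonDocuments, one option (a sketch; it assumes the driver's MongoDB.Bson.Serialization.BsonSerializer, and fields left out of the projection simply keep their default values) is to map them afterwards:

using System.Linq;
using MongoDB.Bson.Serialization;

// Map each projected BsonDocument back onto PerUserEntries
var typedList = list
    .Select(doc => BsonSerializer.Deserialize<PerUserEntries>(doc))
    .ToList();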
At the moment I'm adding functionality to our service that will take in an object that is about to be logged to trace and mask any sensitive fields that are included in the object.
The issue is that we can get objects with different layers. The code I have written so far only handles a parent field and a single child field and uses a nasty embedded for loop implementation to do it.
In the event that we have a third embedded layer of fields in an object we want to log, this wouldn't be able to handle it at all. There has to be a more efficient way of handling generic parsing of a dynamic object, but so far it's managed to avoid me.
The actual code that deserializes and then masks fields in the object looks like this:
private string MaskSensitiveData(string message)
{
    var maskedMessage = JsonConvert.DeserializeObject<dynamic>(message);
    LoggingProperties.GetSensitiveFields();

    for (int i = 0; i < LoggingProperties.Fields.Count(); i++)
    {
        for (int j = 0; j < LoggingProperties.SubFields.Count(); j++)
        {
            if (maskedMessage[LoggingProperties.Fields[i]] != null)
            {
                if (maskedMessage[LoggingProperties.Fields[i]][LoggingProperties.SubFields[j]] != null)
                {
                    maskedMessage[LoggingProperties.Fields[i]][LoggingProperties.SubFields[j]] = MaskField(LoggingProperties.SubFieldLengths[j]);
                }
            }
        }
    }

    return maskedMessage.ToString(Formatting.None);
}
And it works off of a LoggingProperties class that looks like this:
public static class LoggingProperties
{
    // Constants indicating the number of fields we need to mask at present
    private const int ParentFieldCount = 2;
    private const int SubFieldCount = 4;

    // Constant representing the character we are using for masking
    public const char MaskCharacter = '*';

    // Parent fields array
    public static string[] Fields = new string[ParentFieldCount];

    // Subfields array
    public static string[] SubFields = new string[SubFieldCount];

    // Array of field lengths, each index matching the subfield array elements
    public static int[] SubFieldLengths = new int[SubFieldCount];

    public static void GetSensitiveFields()
    {
        // Sensitive parent fields
        Fields[0] = "Parent1";
        Fields[1] = "Parent2";

        // Sensitive subfields
        SubFields[0] = "Child1";
        SubFields[1] = "Child2";
        SubFields[2] = "Child3";
        SubFields[3] = "Child4";

        // Lengths of sensitive subfields
        SubFieldLengths[0] = 16;
        SubFieldLengths[1] = 16;
        SubFieldLengths[2] = 20;
        SubFieldLengths[3] = 3;
    }
}
The aim was to have a specific list of fields for the masking method to look out for that could be expanded or contracted along with our systems needs.
The nested loop method though just seems a bit roundabout to me. Any help is appreciated.
Thanks!
UPDATE:
Here's a small example of a parent and child record that would be in the message prior to the deserialize call. For this example, say I'm attempting to mask the currency ID (so in the properties the fields could be set like this: Parent1 = "Amounts" and Child1 = "CurrencyId"):
{
    "Amounts":
    {
        "Amount": 20.0,
        "CurrencyId": 826
    }
}
An example of a problem would then be if the Amount was divided into pounds and pence:
{
    "Amounts":
    {
        "Amount":
        {
            "Pounds": 20,
            "Pence": 0
        },
        "CurrencyId": 826
    }
}
This would add another layer and yet another embedded for loop, but with that I would be making it overly complex and difficult if the next record in a message had only two layers.
Hope this clarifies a few things =]
Okay, I've really tried but I couldn't figure out an elegant way. Here's what I did:
The first try was using reflection but since all the objects are of type JObject / JToken, I found no way of deciding whether a property is an object or a value.
The second try was (and still is, if you can figure out a good way) more promising: parsing the JSON string into a JObject with var data = JObject.Parse(message) and enumerating its properties in a recursive method like this:
void Mask(data)
{
    foreach (JToken token in data)
    {
        if (token.Type == JTokenType.Object)
        {
            // It's an object, mask its children
            Mask(token.Children());
        }
        else
        {
            // Somehow mask it, but I couldn't figure out how to do it with JToken
            // Pseudocode, it doesn't actually work:
            if (keysToMask.Contains(token.Name))
                token.Value = "***";
        }
    }
}
Since it doesn't work with JTokens, I've tried the same with JProperties and it works for the root object, but there's a problem: although you can see whether a given JProperty is an object, you cannot select its child objects; JProperty.Children() gives JToken again and I found no way to convert it back to a JProperty. If anyone knows how to achieve it, please post it.
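For reference, the recursive idea can be made to work by staying at the JProperty level and recursing into values that are objects. A sketch using Json.NET (keysToMask stands in for your collection of sensitive field names):

using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

static void Mask(JObject obj, HashSet<string> keysToMask)
{
    // ToList() so values can be replaced safely while iterating
    foreach (JProperty prop in obj.Properties().ToList())
    {
        if (prop.Value.Type == JTokenType.Object)
        {
            // Recurse into nested objects, however deep they go
            Mask((JObject)prop.Value, keysToMask);
        }
        else if (keysToMask.Contains(prop.Name))
        {
            prop.Value = "***";
        }
    }
}

// Usage:
// JObject data = JObject.Parse(message);
// Mask(data, new HashSet<string> { "mask1", "mask2" });
// string masked = data.ToString(Formatting.None);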
So the only way I found is a very dirty one: using regular expressions. It's anything but elegant, but it works.
// Make sure the JSON is well formatted
string formattedJson = JObject.Parse(message).ToString();

// Define the keys of the values to be masked
string[] maskedKeys = { "mask1", "mask2" };

// Loop through each key
foreach (var key in maskedKeys)
{
    string originalPattern = string.Format("(\"{0}\": )(\"?[^,\\r\\n]+\"?)", key);
    string maskedPattern = "$1\"censored\"";
    Regex pattern = new Regex(originalPattern);
    formattedJson = pattern.Replace(formattedJson, maskedPattern);
}

// Parse the masked string
var maskedMessage = JsonConvert.DeserializeObject<dynamic>(formattedJson);
Assuming this is your input:
{
    "val1" : "value1",
    "val2" : "value2",
    "mask1" : "to be masked",
    "prop1" : {
        "val3" : "value3",
        "val1" : "value1",
        "mask2" : "to be masked too",
        "prop2" : {
            "val1" : "value 1 again",
            "mask1" : "this will also get masked"
        }
    }
}
This is what you get:
{
    "val1": "value1",
    "val2": "value2",
    "mask1": "censored",
    "prop1": {
        "val3": "value3",
        "val1": "value1",
        "mask2": "censored",
        "prop2": {
            "val1": "value 1 again",
            "mask1": "censored"
        }
    }
}
I have a form where I collect data from users. When this data is collected, I pass it to various partners, however each partner has their own rules for each piece of data, so this has to be converted. I can make this happen, but my worries are about the robustness. Here's some code:
First, I have an enum. This is mapped to a dropdown list: the description is the text value, and the int is mapped to the value.
public enum EmploymentStatusType
{
    [Description("INVALID!")]
    None = 0,
    [Description("Permanent full-time")]
    FullTime = 1,
    [Description("Permanent part-time")]
    PartTime = 2,
    [Description("Self employed")]
    SelfEmployed = 3
}
When the form is submitted, the selected value is converted to its proper type and stored in another class - the property looks like this:
protected virtual EmploymentStatusType EmploymentStatus
{
    get { return _application.EmploymentStatus; }
}
For the final bit of the jigsaw, I convert the value to the partners required string value:
Dictionary<EmploymentStatusType, string> _employmentStatusTypes;

Dictionary<EmploymentStatusType, string> EmploymentStatusTypes
{
    get
    {
        if (_employmentStatusTypes.IsNull())
        {
            _employmentStatusTypes = new Dictionary<EmploymentStatusType, string>()
            {
                { EmploymentStatusType.FullTime, "Full Time" },
                { EmploymentStatusType.PartTime, "Part Time" },
                { EmploymentStatusType.SelfEmployed, "Self Employed" }
            };
        }
        return _employmentStatusTypes;
    }
}

string PartnerEmploymentStatus
{
    get { return _employmentStatusTypes.GetValue(EmploymentStatus); }
}
I call PartnerEmploymentStatus, which then returns the final output string.
Any ideas how this can be made more robust?
Then you need to refactor it into one translation area. It could be something like a visitor-pattern implementation. Your choices are to distribute the code (as you are doing now) or to use a visitor, which would centralize it. You also want to build in a degree of fragility so your covering tests show problems when you extend, forcing you to maintain the code properly. You are in a fairly common quandary, which is really a code-organisation one.
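As a sketch of what "one translation area" might look like (the interface and class names below are made up, not from your code), each partner gets a single translator that owns every enum-to-string mapping, so an unmapped value fails loudly in exactly one place:

using System;
using System.Collections.Generic;

public interface IPartnerTranslator
{
    string Translate(EmploymentStatusType status);
}

public class AcmePartnerTranslator : IPartnerTranslator
{
    private static readonly Dictionary<EmploymentStatusType, string> Map =
        new Dictionary<EmploymentStatusType, string>
        {
            { EmploymentStatusType.FullTime, "Full Time" },
            { EmploymentStatusType.PartTime, "Part Time" },
            { EmploymentStatusType.SelfEmployed, "Self Employed" }
        };

    public string Translate(EmploymentStatusType status)
    {
        string result;
        if (!Map.TryGetValue(status, out result))
        {
            // Fail fast so tests catch a new enum value without a partner mapping
            throw new ArgumentOutOfRangeException("status", status, "No partner mapping defined");
        }
        return result;
    }
}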
I did encounter such a problem in one of my projects and I solved it by using a helper function and conventions for resource names.
The function is this one:
public static Dictionary<T, string> GetEnumNamesFromResources<T>(ResourceManager resourceManager, params T[] excludedItems)
{
    Contract.Requires(resourceManager != null, "resourceManager is null.");

    var dictionary =
        resourceManager.GetResourceSet(culture: CultureInfo.CurrentUICulture, createIfNotExists: true, tryParents: true)
            .Cast<DictionaryEntry>()
            .Join(Enum.GetValues(typeof(T)).Cast<T>().Except(excludedItems),
                de => de.Key.ToString(),
                v => v.ToString(),
                (de, v) => new
                {
                    DictionaryEntry = de,
                    EnumValue = v
                })
            .OrderBy(x => x.EnumValue)
            .ToDictionary(x => x.EnumValue, x => x.DictionaryEntry.Value.ToString());

    return dictionary;
}
The convention is that my resource file has entries with the same names as the enum values (in your case None, PartTime, etc.). This is needed to perform the Join in the helper function, which you can adjust to match your needs.
So, whenever I want a (localized) string description of an enum value I just call:
var dictionary = EnumUtils.GetEnumNamesFromResources<EmploymentStatusType>(ResourceFile.ResourceManager);
var value = dictionary[EmploymentStatusType.FullTime];
I've tried looking for an existing question but wasn't sure how to phrase this and this retrieved no results anywhere :(
Anyway, I have a class of "Order Items" that has different properties. These order items are for clothing, so they will have a size (string).
Because I am OCD about these sorts of things, I would like to have the elements sorted not by the sizes as alphanumeric values, but by the sizes in a custom order.
I would also like to not have this custom order hard-coded if possible.
To break it down, if I have a list of these order items with a size in each one, like so:
2XL
S
5XL
M
With alphanumeric sorting it would be in this order:
2XL
5XL
M
S
But I would like to sort this list into this order (from smallest size to largest):
S
M
2XL
5XL
The only way I can think of to do this is to have a hard-coded array of the sizes and to sort by their index, then when I need to grab the size value I can grab the size order array[i] value. But, as I said, I would prefer this order not to be hard-coded.
The reason I would like the order to be dynamic is the order items are loaded from files on the hard disk at runtime, and also added/edited/deleted by the user at run-time, and they may contain a size that I haven't hard-coded, for example I could hard code all the way from 10XS to 10XL but if someone adds the size "110cm" (aka a Medium), it will turn up somewhere in the order that I don't want it to, assuming the program doesn't crash and burn.
I can't quite wrap my head around how to do this.
Also, you could create a Dictionary<int, string> and use the key as the ordering index, as below. Leave some gaps between keys to accommodate new sizes in the future. For example, if you want to add L (Large), you could add a new item as { 15, "L" } without breaking the current order.
Dictionary<int, string> mySizes = new Dictionary<int, string> {
    { 20, "2XL" }, { 1, "S" },
    { 30, "5XL" }, { 10, "M" }
};

var sizes = mySizes.OrderBy(s => s.Key)
    .Select(s => new { Size = s.Value })
    .ToList();
You can use OrderByDescending + ThenByDescending directly:
sizes.OrderByDescending(s => s == "S")
     .ThenByDescending(s => s == "M")
     .ThenByDescending(s => s == "2XL")
     .ThenByDescending(s => s == "5XL")
     .ThenBy(s => s);
I use the ...Descending variants since true sorts like 1 whereas false sorts like 0.
I would implement IComparer<string> in your own TShirtSizeComparer. You might have to do some regular expressions to get at the values you need.
IComparer<T> is a great interface for any sorting mechanism. A lot of built-in stuff in the .NET framework uses it. It makes the sorting reusable.
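A rough sketch of what that regex-based comparer could look like. Assumptions (not from the question): letter sizes are S/M/L optionally preceded by X's or a count of X's (XS, 2XL, ...), and numeric sizes like "110cm" sort by their number after the letter sizes.

using System.Collections.Generic;
using System.Text.RegularExpressions;

public class TShirtSizeComparer : IComparer<string>
{
    private static readonly Regex LetterSize = new Regex(@"^(?:(\d+)X|(X*))([SML])$", RegexOptions.IgnoreCase);
    private static readonly Regex NumericSize = new Regex(@"^(\d+)\s*cm$", RegexOptions.IgnoreCase);

    public int Compare(string x, string y)
    {
        return Rank(x).CompareTo(Rank(y));
    }

    // Maps a size string onto a single sortable number
    private static int Rank(string size)
    {
        Match m = LetterSize.Match(size ?? "");
        if (m.Success)
        {
            int xCount = m.Groups[1].Success ? int.Parse(m.Groups[1].Value) : m.Groups[2].Length;
            char baseLetter = char.ToUpperInvariant(m.Groups[3].Value[0]);
            if (baseLetter == 'S') return 100 - xCount;   // more X's = smaller
            if (baseLetter == 'M') return 200;
            return 300 + xCount;                          // 'L': more X's = larger
        }

        m = NumericSize.Match(size ?? "");
        if (m.Success)
        {
            return 1000 + int.Parse(m.Groups[1].Value);   // cm sizes go after the letter sizes, in numeric order
        }

        return int.MaxValue;                              // completely unrecognised sizes sort last
    }
}

// Usage (needs System.Linq):
// var sorted = orderItems.OrderBy(i => i.Size, new TShirtSizeComparer()).ToList();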
I would really suggest parsing the size string into a separate object that holds the numeric part and the size unit, and then sorting with that.
You need to implement the IComparer interface on your class. You can google how to do that as there are many examples out there
You'll have to make a simple parser for this. You can search inside the string for elements like XS, XL and cm; once you filter those out you have your unit, and then you can obtain the integer value. With that you can indeed use an IComparer object, but on its own it doesn't have that much of an advantage.
I would make a class out of Size; it is likely that you will need to add more functionality to it in the future. I added the full name of the size, but you could also add variables like width and length, and converters for inches or cm.
private void LoadSizes()
{
    List<Size> sizes = new List<Size>();
    sizes.Add(new Size("2X-Large", "2XL", 3));
    sizes.Add(new Size("Small", "S", 1));
    sizes.Add(new Size("5X-Large", "5XL", 4));
    sizes.Add(new Size("Medium", "M", 2));

    List<string> sizesShortNameOrder = sizes.OrderBy(s => s.Order).Select(s => s.ShortName).ToList();

    //If you want to use the size class:
    //List<Size> sizesOrder = sizes.OrderBy(s => s.Order).ToList();
}
public class Size
{
    private string _name;
    private string _shortName;
    private int _order;

    public string Name
    {
        get { return _name; }
    }

    public string ShortName
    {
        get { return _shortName; }
    }

    public int Order
    {
        get { return _order; }
    }

    public Size(string name, string shortName, int order)
    {
        _name = name;
        _shortName = shortName;
        _order = order;
    }
}
I implemented TShirtSizeComparer with base class Comparer<object>. Of course you have to adjust it to the sizes and objects you have available:
public class TShirtSizeComparer : Comparer<object>
{
    // The known sizes, in the order they should sort
    private static readonly List<string> _sizesInOrder = new List<string> { "None", "XS", "S", "M", "L", "XL", "XXL", "XXXL", "110 cl", "120 cl", "130 cl", "140 cl", "150 cl" };

    // Compares TShirtSizes and orders them by size
    public override int Compare(object x, object y)
    {
        var indexX = -9999;
        var indexY = -9999;

        if (x is TShirt)
        {
            indexX = _sizesInOrder.IndexOf(((TShirt)x).Size);
            indexY = _sizesInOrder.IndexOf(((TShirt)y).Size);
        }
        else if (x is TShirtListViewModel)
        {
            indexX = _sizesInOrder.IndexOf(((TShirtListViewModel)x).Size);
            indexY = _sizesInOrder.IndexOf(((TShirtListViewModel)y).Size);
        }
        else if (x is MySelectItem)
        {
            indexX = _sizesInOrder.IndexOf(((MySelectItem)x).Value);
            indexY = _sizesInOrder.IndexOf(((MySelectItem)y).Value);
        }

        if (indexX > -1 && indexY > -1)
        {
            return indexX.CompareTo(indexY);
        }
        else if (indexX > -1)
        {
            return -1;
        }
        else if (indexY > -1)
        {
            return 1;
        }
        else
        {
            return 0;
        }
    }
}
To use it you just have a List or whatever your object is and do:
tshirtList.Sort(new TShirtSizeComparer());
The order you have "hard-coded" is prioritized and the rest is put to the back.
I'm sure it can be done a bit smarter and more generalized to avoid hard-coding it all. You could e.g. look for sizes ending with an "S" and then check how many X's (e.g. XXS) or the number before X (e.g. 2XS) and sort by that, and then repeat for "L" and perhaps other "main sizes".