How to store list of static data in C#?

I am working on a website and I want a drop-down to display the list of cities. Where should I store the list of cities for faster access? I do not want to store this data in the DB.
Is it a good idea to store it in an XML file?

I would store it in Cache, possibly with a Sql Dependency or File Dependency.
public DataTable GetCities(bool bypassCache)
{
    string cacheKey = "CitiesDataTable";
    DataTable cacheItem = Cache[cacheKey] as DataTable;
    if (bypassCache || cacheItem == null)
    {
        cacheItem = GetCitiesFromDataSource();
        Cache.Insert(cacheKey, cacheItem, null,
            DateTime.Now.AddSeconds(GetCacheSecondsFromConfig(cacheKey)),
            TimeSpan.Zero);
    }
    return cacheItem;
}
If you want to store it in XML, that's fine too but you'll have to publish the file to all servers in the farm each time there is a change.
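For illustration, here's a rough sketch of the file-dependency variant: the cache entry is evicted automatically whenever the XML file changes, so each server just re-reads its local copy after a publish. This assumes an ASP.NET app and a hypothetical GetCitiesFromXmlFile helper that parses the file.
using System.Data;
using System.Web;
using System.Web.Caching;

public static DataTable GetCitiesWithFileDependency()
{
    var context = HttpContext.Current;
    string path = context.Server.MapPath("~/App_Data/Cities.xml");
    var cities = context.Cache["CitiesDataTable"] as DataTable;
    if (cities == null)
    {
        cities = GetCitiesFromXmlFile(path);   // hypothetical helper that parses the XML file
        // The cache entry is evicted automatically whenever Cities.xml changes on disk.
        context.Cache.Insert("CitiesDataTable", cities, new CacheDependency(path));
    }
    return cities;
}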

Store it in a text file. This avoids the overhead of XML parsing. Load using File.ReadAllLines().
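A rough sketch of that approach, assuming a Cities.txt file (one city per line) in App_Data and a DropDownList named cityDropDown on the page:
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // One read per request; cache in a static field if that's too much I/O.
        string[] cities = File.ReadAllLines(Server.MapPath("~/App_Data/Cities.txt"));
        cityDropDown.DataSource = cities;
        cityDropDown.DataBind();
    }
}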

You can store the list in an XML file or other flat file format, but I guess it depends on what your reasons are for not wanting to store it in the database.
You mentioned faster access, but you might want to expound on that. If you mean you don't want the overhead of accessing the database on every request, then have you thought about storing it in the database and caching the list on application start-up instead? This way, you get the benefits of a database, yet only pay the overhead once.
For small applications, however, an XML file would be just fine.

If the list will never change, then just declare it as a static readonly array of strings in your code (C# doesn't allow arrays to be const).
If it may change occasionally, then put it in an XML file or a database table, but cache it after the first read so it only needs to be read once in any session.
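For example, a sketch of the read-once idea using Lazy<T>, assuming a Cities.xml file shaped like <Cities><City>Sydney</City>...</Cities>:
using System;
using System.Linq;
using System.Xml.Linq;

public static class CityList
{
    // The XML file is parsed the first time Cities is accessed, then kept in memory.
    private static readonly Lazy<string[]> _cities = new Lazy<string[]>(() =>
        XDocument.Load("Cities.xml")
                 .Descendants("City")
                 .Select(c => (string)c)
                 .ToArray());

    public static string[] Cities
    {
        get { return _cities.Value; }
    }
}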

I believe XML is the best solution, and it would be better to use a DOM parser rather than SAX.
You can also load the list into the session when it isn't already there, to cut down on repeated reads of the XML. This uses more RAM on the server, though: the data is duplicated for every session, so be careful not to load anything unnecessary. You could load it only for logged-in users if that makes sense.
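A sketch of the per-session variant, assuming ASP.NET session state and a hypothetical LoadCitiesFromXml helper:
// Per-session caching: each session pays the XML read once,
// at the cost of one copy of the list per session in server RAM.
var cities = Session["Cities"] as List<string>;
if (cities == null)
{
    cities = LoadCitiesFromXml();   // hypothetical helper that parses the XML file
    Session["Cities"] = cities;
}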

What I'm about to suggest is HORRID style, but I think it's the quickest you can get, with the smallest "footprint":
public static readonly string[] CityList = new string[]
{
    "Sydney",
    "New York",
    "London"
};
Now, I hope you don't like the solution, and can give us all a little more context so that we might be able to offer an elegant and maintainable solution to your problem; but if all you're after is speed...

Related

Regarding WPF, need advice on data sources and data binding

I am not new to WPF, but I am still a rookie. Let's say I want to build an application which stores data about a person in a unique, separate file, not in a database, somewhat like Notepad. My application should do the following things.
It should be able to save a person's info in a unique file.
It should be able to open a user-specified file and auto-fill the properties/form.
How do I achieve this? Is XML binding the only way to achieve this, or is there any other alternative? What I mean is, if I use XML binding I can write code which will enable the user to open and save different XML files, but I have also read that binding to XML should be avoided from an architecture perspective. So, is there an alternative solution for my problem?
I think if you try reading and writing the data to a CSV (comma-separated values) file (if you're not planning to use a database) then you can achieve what you want.
Also, if you are planning to have a separate file for each user, that's not a good idea at all.
It's not possible to explain everything here, so please have a look at the link posted below, which explains in detail how to read from and write to a CSV file.
For full details please see the following link: Reading and writing to a csv file
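For a rough idea of the shape of it, here's a minimal sketch; the Person class and file layout are made up for illustration, and there's no escaping of embedded commas:
using System.IO;

public class Person
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Phone { get; set; }
}

public static class PersonCsv
{
    // Write the person's fields as one comma-separated line.
    public static void Save(string path, Person p)
    {
        File.WriteAllText(path, string.Join(",", p.Name, p.Address, p.Phone));
    }

    // Read the file back and rebuild the Person.
    public static Person Load(string path)
    {
        var fields = File.ReadAllText(path).Split(',');
        return new Person { Name = fields[0], Address = fields[1], Phone = fields[2] };
    }
}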
Apparently your requirement is to save person details in a unique file. If you really want to use that approach, one option is XML serialization.
You can create your normal Person object for data binding.
When you want to save data for a specific person, you can serialize the object and save the file with a proper name (the person's id or similar).
When you want to get the Person data back from the file, you can deserialize it directly to a Person object.
// Serialize and write to file
Person person = myPerson;
var serializer = new XmlSerializer(person.GetType());
using (var writer = XmlWriter.Create("person1.xml"))
{
    serializer.Serialize(writer, person);
}

// Deserialize back to an instance
var serializer = new XmlSerializer(typeof(Person));
using (var reader = XmlReader.Create("person1.xml"))
{
    var person = (Person)serializer.Deserialize(reader);
}
For saving user data, such as sessions and settings, there are plenty of ways you can do this.
Saving data to txt files. See here.
Saving data to a database. See here.
My personal favourite: saving to the Settings file. See here.
These are only some of the ways you can save data locally.
Note that I mentioned saving data to a database because it is something that you shouldn't completely knock, especially if you will be saving lots of data.
To answer your question more directly, I would suggest that you go with option 3. For relatively small sets of data, like user info and user settings, your best bet is to save them to the built-in Settings file. It's dead easy.
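As a rough illustration of option 3 (this assumes you've added a setting named LastUserName in the project's Settings designer):
// Write a value and persist it to the user's config file.
Properties.Settings.Default.LastUserName = "John Doe";
Properties.Settings.Default.Save();

// Read it back on the next run.
string name = Properties.Settings.Default.LastUserName;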
Good luck!

'Streaming' data into SQL Server

I'm working on a project where we're receiving data from multiple sources, that needs to be saved into various tables in our database.
Fast.
I've played with various methods, and the fastest I've found so far is using a collection of TableValue parameters, filling them up and periodically sending them to the database via a corresponding collection of stored procedures.
The results are quite satisfying. However, looking at disk usage (% Idle Time in Perfmon), I can see that the disk is getting periodically 'thrashed' (a 'spike' down to 0% every 13-18 seconds), whilst in between the %Idle time is around 90%. I've tried varying the 'batch' size, but it doesn't have an enormous influence.
Should I be able to get better throughput by (somehow) avoiding the spikes while decreasing the overall idle time?
What are some things I should be looking at to work out where the spiking is happening? (The database is in Simple recovery mode, and pre-sized to 'big', so it's not the log file growing.)
Bonus: I've seen other questions referring to 'streaming' data into the database, but this seems to involve having a Stream from another database (last section here). Is there any way I could shoe-horn 'pushed' data into that?
A very easy way of inserting loads of data into SQL Server is, as mentioned, the 'bulk insert' method. ADO.NET offers a very easy way of doing this without the need for external files. Here's the code:
using (var bulkCopy = new SqlBulkCopy(myConnection))
{
    bulkCopy.DestinationTableName = "MyTable";
    bulkCopy.WriteToServer(myDataTable);
}
That's easy.
But: myDataTable needs to have exactly the same structure as MyTable, i.e. names, field types and order of fields must be exactly the same. If not, well, there's a solution to that: column mapping. And this is even easier to do:
bulkCopy.ColumnMappings.Add("ColumnNameOfDataSet", "ColumnNameOfTable");
That's still easy.
But: myDataTable needs to fit into memory. If not, things become a bit more tricky, as we then need an IDataReader derivative which allows us to instantiate it with an IEnumerable.
You might get all the information you need in this article.
Building on the code referred to in alzaimar's answer, I've got a proof of concept working with IObservable (just to see if I can). It seems to work ok. I just need to put together some tidier code to see if this is actually any faster than what I already have.
(The following code only really makes sense in the context of the test program in code download in the aforementioned article.)
Warning: NSFW, copy/paste at your peril!
private static void InsertDataUsingObservableBulkCopy(IEnumerable<Person> people,
                                                      SqlConnection connection)
{
    var sub = new Subject<Person>();

    var bulkCopy = new SqlBulkCopy(connection);
    bulkCopy.DestinationTableName = "Person";
    bulkCopy.ColumnMappings.Add("Name", "Name");
    bulkCopy.ColumnMappings.Add("DateOfBirth", "DateOfBirth");

    using (var dataReader = new ObjectDataReader<Person>(people))
    {
        // Run the bulk copy on a background task; it pulls rows from the data reader.
        var task = Task.Factory.StartNew(() =>
        {
            bulkCopy.WriteToServer(dataReader);
        });

        var stopwatch = Stopwatch.StartNew();

        // Push each person through the subject while the bulk copy runs.
        foreach (var person in people) sub.OnNext(person);
        sub.OnCompleted();

        task.Wait();

        Console.WriteLine("Observable Bulk copy: {0}ms",
                          stopwatch.ElapsedMilliseconds);
    }
}
It's difficult to comment without knowing the specifics, but one of the fastest ways to get data into SQL Server is Bulk Insert from a file.
You could write the incoming data to a temp file and periodically bulk insert it.
Streaming data into a SQL Server table-valued parameter also looks like a good solution for fast inserts, as they are held in memory. In answer to your question, yes, you could use this; you just need to turn your data into an IDataReader. There are various ways to do this, from a DataTable for example, see here.
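For example, a sketch of passing a DataTable as a table-valued parameter; it assumes a user-defined table type dbo.PersonType and a stored procedure dbo.InsertPeople already exist on the server:
using System.Data;
using System.Data.SqlClient;

public static void InsertPeople(SqlConnection connection, DataTable people)
{
    using (var command = new SqlCommand("dbo.InsertPeople", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        // A DataTable (or a DbDataReader) can be passed directly as a structured parameter.
        SqlParameter parameter = command.Parameters.AddWithValue("@People", people);
        parameter.SqlDbType = SqlDbType.Structured;
        parameter.TypeName = "dbo.PersonType";
        command.ExecuteNonQuery();
    }
}
The column order and types of the DataTable need to match the table type's definition on the server.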
If your disk is the bottleneck you could always optimise your infrastructure; put the database on a RAM disk or an SSD, for example.

Making a Simple Database Program in C#

I'm currently writing a simple text analysis program in C#. Currently it takes simple statistics from the text and prints them out. However, I need to get it to the point where in input mode you input sample text, specifying an author, and it writes the statistics to a database entry of that specific author. Then in a different mode the program will take text, and see if it can accurately identify the author by pulling averages from the DB files and comparing the text's statistics to sample statistics. What I need help with is figuring out the best way to make a database out of text statistics. Is there some library I could use for this? Or should I simply do simple reading and writing from text files that I'll store the information in? Any and all ideas are welcome, as I'm struggling to come up with a solution to this problem.
Thanks,
PardonMyRhetoric
You can use an XmlSerializer to persist your data to file really easily. There are numerous tutorials you can find on Google that will teach you how in just a few minutes. However, most of them want to show you how to add attributes to your properties to customize the way it serializes, so I'll just point out that those aren't really necessary. So long as you have the [Serializable] attribute on your class, all you need is something that looks like this to save:
void Save()
{
    using (var sw = new StreamWriter("somefile.xml"))
        (new XmlSerializer(typeof(MyClass))).Serialize(sw, this);
}
and something like this in a function to read it:
MyClass Load()
{
    XmlSerializer xSer = new XmlSerializer(typeof(MyClass));
    using (var sr = new StreamReader("somefile.xml"))
        return (MyClass)xSer.Deserialize(sr);
}
I don't think you'll need a database at this stage. Try to select appropriate data structures from the .NET Framework itself: use a dictionary or lists rather than arrays, and the methods you write will become simpler. Try to learn LINQ; it lets you query regular in-memory data structures much as you would query a database. Once you've got this working and the project grows, consider adding a database.
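For example, a sketch of the kind of in-memory structure and LINQ query meant here (the TextStats class and property names are made up for illustration):
using System.Collections.Generic;
using System.Linq;

// Hypothetical per-sample statistics.
public class TextStats
{
    public double AverageSentenceLength { get; set; }
    public double AverageWordLength { get; set; }
}

// One list of sample statistics per author.
var samplesByAuthor = new Dictionary<string, List<TextStats>>();

// "Query" the in-memory data much as you would a database table.
var authorAverages = samplesByAuthor
    .Select(kvp => new
    {
        Author = kvp.Key,
        AvgSentenceLength = kvp.Value.Average(s => s.AverageSentenceLength),
        AvgWordLength = kvp.Value.Average(s => s.AverageWordLength)
    })
    .ToList();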

MongoDB key-value DB alternative with compression

I am currently using MongoDB to store lots of real-time signals of some sensors. The stored information includes a timestamp, a numeric value and a flag that indicates the quality of the signal.
Queries are very fast, but the amount of disk space used is exorbitant, and I would like to try another non-relational database more suitable for my purposes.
I've been looking at http://nosql-database.org/ but I don't know which database is the best for my needs.
Thank you very much :)
http://www.mongodb.org/display/DOCS/Excessive+Disk+Space
MongoDB stores field names inside every document, which is great because it allows all documents to have different fields, but creates a storage overhead when fields are always the same.
To reduce the disk consumption, try shortening the field names, so instead of:
{
    _id: "47cc67093475061e3d95369d",
    timestamp: "2011-06-09T17:46:21",
    value: 314159,
    quality: 3
}
Try this:
{
    _id: "47cc67093475061e3d95369d",
    t: "2011-06-09T17:46:21",
    v: 314159,
    q: 3
}
Then you can map these field names to something more meaningful inside your application.
Also, if you're storing separate _id and timestamp fields then you might be doubling up.
The ObjectId type has a timestamp embedded in it, so depending on how you query and use your data, it might mean you can do without a separate timestamp field all together.
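If you're using the official C# driver, one way to keep the short names on disk but readable names in code is attribute mapping. A rough sketch, assuming the MongoDB.Bson package:
using System;
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

public class SensorReading
{
    [BsonId]
    public ObjectId Id { get; set; }   // the embedded timestamp may make a separate field unnecessary

    [BsonElement("t")]
    public DateTime Timestamp { get; set; }

    [BsonElement("v")]
    public double Value { get; set; }

    [BsonElement("q")]
    public int Quality { get; set; }
}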
Disk space is cheap; don't worry about it too much, since development with a new database will cost much more. If you're on Windows you can try RavenDB.
Also, maybe take a look at this answer about reducing the size of a Mongo database:
You can do this "compression" by running mongod --repair or by
connecting directly and running db.repairDatabase().

Receiving XML files every day (12 types), need to search them every day

Asp.NET - C#.NET
I need advice regarding the design problem below:
I'll receive XML files every day. The quantity varies, e.g. yesterday 10 XML files were received, today 56, and maybe tomorrow 161, etc.
There are 12 types (12 XSDs), and at the top of each there is an attribute called FormType, e.g. FormType="1", FormType="2", and so on up to FormType="12".
All of them have common fields like Name, Address, Phone.
But, for example, FormType=1 is for Construction, FormType=2 is for IT, FormType=3 is for Hospital, FormType=4 is for Advertisement, etc.
As I said, all of them have common attributes.
Requirements:
I need a search screen so the user can search the XML contents, but I don't have a clue how to approach this, e.g. search for text in some attributes across the XMLs received between Date_From and Date_To.
Problem:
I've heard about putting the XMLs in a binary field and doing XPath queries or something similar, but I don't know what terms to search for on Google.
I was thinking of creating a big database table, reading all the XMLs and putting them in that table. The issue is that some XML attributes are very large, like 2-3 pages, while the same attributes in other XML files are empty.
So that means creating an NVARCHAR(MAX) column for every XML attribute and putting them all in one table... After some time my database will become a big, big monster...
Can someone advise what is the best approach to handle this issue?
I'm not 100% sure I understand your problem. I'm guessing that the query's supposed to return individual XML documents that meet some kind of user-specified criteria.
In that event, my starting point would probably be to implement a method for querying a single XML document, i.e. one that returns true if the document's a hit and false otherwise. In all likelihood, I'd make the query parameter an XPath query, but who knows? Here's a simple example:
public bool TestXml(XDocument d, string query)
{
    return d.XPathSelectElements(query).Any();
}
Next, I need a store of XML documents to query. Where does that store live, and what form does it take? At a certain level, those are implementation details that my application doesn't care about. They could live in a database, or the file system. They could be cached in memory. I'd start by keeping it simple, something like:
public IEnumerable<XDocument> XmlDocuments()
{
    DirectoryInfo di = new DirectoryInfo(XmlDirectoryPath);
    foreach (FileInfo fi in di.GetFiles())
    {
        yield return XDocument.Load(fi.FullName);
    }
}
Now I can get all of the documents that fulfill a request like this:
public IEnumerable<XDocument> GetDocuments(string query)
{
    return XmlDocuments().Where(x => TestXml(x, query));
}
The thing that jumps out at me when I look at this problem: I have to parse my documents into XDocument objects to query them. That's going to happen whether they live in a database or the file system. (If I stick them in a database and write a stored procedure that does XPath queries, as someone suggested, I'm still parsing all of the XML every time I execute a query; I've just moved all that work to the database server.)
That's a lot of I/O and CPU time that gets spent doing the exact same thing over and over again. If the volume of queries is anything other than tiny, I'd consider building a List<XDocument> the first time GetDocuments() is called and come up with a scheme of keeping that list in memory until new XML documents are received (or possibly updating it when new XML documents are received).
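A sketch of that caching idea, as a variation on the GetDocuments method above (invalidating or updating the cache when new files arrive is left out):
private List<XDocument> _cachedDocuments;

public IEnumerable<XDocument> GetDocuments(string query)
{
    // Parse the files once, then reuse the in-memory XDocuments for every query.
    if (_cachedDocuments == null)
        _cachedDocuments = XmlDocuments().ToList();

    return _cachedDocuments.Where(x => TestXml(x, query));
}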
