Just a quick question that I am sure someone on here can give me a detailed answer about.
Basically, I am using a DataGridView to display records from a database, which in my case is simply a text file that is being parsed. This feels simple: if I want to select records based on certain parameters, I iterate through the list searching for matches. However, I wonder if this is inefficient compared to using a full-blown DB such as Mongo or SQL.
Can I get away with this if my software is relatively simple? I really prefer to steer away from complicating things when they don't need to be complicated.
By the way, I am expecting to have a DB (sometimes) larger than 100k entries, so take that into consideration.
@DavidStampher
Even though you may be using just one table or file, I would strongly suggest using a database system for this. Database engines are optimized for speed, so running a query is far less frustrating and time-consuming than searching through and updating a single text file yourself.
I'm only suggesting MySQL as an option because it's the one I am most familiar with. Other users may have different or better suggestions.
You can easily download and install one using the MySQL Installer. The setup is relatively simple and should take less than 10 minutes. You could create a new schema, add a table, then query it to do what you need.
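For instance, here is a minimal sketch of querying MySQL from C# and binding the results to a DataGridView. It assumes the MySql.Data connector is installed; the connection string, table name (records), column, and control name (dataGridView1) are all placeholders you would replace with your own:

using System.Data;
using MySql.Data.MySqlClient; // assumes the MySql.Data connector is installed

var connectionString = "Server=localhost;Database=mydb;Uid=appuser;Pwd=secret;"; // placeholder
using (var connection = new MySqlConnection(connectionString))
{
    connection.Open();
    // Let the database do the filtering instead of scanning a text file in memory.
    var command = new MySqlCommand("SELECT * FROM records WHERE category = @category", connection);
    command.Parameters.AddWithValue("@category", "books");
    var table = new DataTable();
    new MySqlDataAdapter(command).Fill(table);
    dataGridView1.DataSource = table; // bind straight to the DataGridView
}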
I would suggest creating a new user other than root, just in case someone manages to hack into your account.
If you would like the easiest way to manage the database rather than going through the old fashioned phpMyAdmin, download MySQL Workbench. It's pretty cool and relatively easy to use.
Let me know if you have questions. :-)
My program has 3 text fields: Title, Website, and PictureURL. When I click the 'save' button I want it to add the 3 entries into a log of some sort (LINQ to SQL or XML seems like the best choice). Only 1 user will be accessing the program at a time. The log will be local on the machine, not on an external server. After the 3 fields have been saved as a single entry to the log, I want to be able to load each group of entries from the log back into the textboxes. Would either one be a simpler solution or a more appropriate choice for this type of project? I am new to both, hence my uncertainty about which would be better.
With the given set of requirements, it would indeed be better to stick with XML storage, since you have neither a large amount of data, nor complex search and grouping conditions, nor any need for remote or distributed access. LINQ to XML would suit such a simple desktop application perfectly. Keep it simple.
Why not LINQ to XML? Assuming local storage is going to be, as you stated, an XML file:
http://msdn.microsoft.com/en-us/library/bb387098.aspx
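As a rough sketch of what that could look like (the file name entries.xml and the textbox names are just placeholders):

using System.Linq;
using System.Xml.Linq;

// Append one entry (Title, Website, PictureURL) to a local XML log file.
var path = "entries.xml"; // assumed file name
var doc = System.IO.File.Exists(path)
    ? XDocument.Load(path)
    : new XDocument(new XElement("entries"));
doc.Root.Add(new XElement("entry",
    new XElement("Title", titleBox.Text),
    new XElement("Website", websiteBox.Text),
    new XElement("PictureURL", pictureUrlBox.Text)));
doc.Save(path);

// Load the saved entries back, e.g. to repopulate the textboxes.
var entries = XDocument.Load(path).Root.Elements("entry")
    .Select(e => new
    {
        Title = (string)e.Element("Title"),
        Website = (string)e.Element("Website"),
        PictureURL = (string)e.Element("PictureURL")
    })
    .ToList();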
It's hard to give a good answer without knowing more about your situation.
If you are just running this locally on one machine, and do not anticipate the log growing overly large, I'd say XML would be the better choice, as it requires less setup and overhead than a database.
However, if it needs to scale for size or users, you'll want to use a database. But that will add additional complexity, despite the fact that LINQ to SQL makes it simpler to use.
I want to save a whole MS SQL 2008 Database into XML files... using asp.net.
Now I am a bit lost here: what would be the best method to achieve this? DataSets?
And I need to be able to restore the database again later using these XML files. I am thinking about using DataSets to read the tables and write them to XML, and using the SqlBulkCopy class to restore the database again. But I am not sure whether this would be the right approach.
Any clues and tips for me?
If you will need to restore it on the same server type (I mean SQL Server 2008 or higher) and don't care about being able to see the actual data inside the XML, do the following:
Programmatically back up the DB using the "BACKUP DATABASE" T-SQL command
Compress the backup
Convert the backup to Base64
Place the backup as the content of the XML file (like: <database name="..." compressionmethod="..." compressionlevel="...">the Base64 content here</database>)
On the server where you need to restore it, download the XML, extract the Base64 content, use the attributes to know what compression was used. Decompress and restore using T-SQL "RESTORE" command.
Would that approach work?
For sure, if you need to see the content of the database, you would need to develop an XML schema, go through each table, etc. But then you won't have SPs/views and other items backed up.
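A rough C# sketch of the steps above; the connection string, database name, paths, and the choice of gzip compression are all just assumptions for illustration:

using System;
using System.Data.SqlClient;
using System.IO;
using System.IO.Compression;
using System.Xml.Linq;

// 1. Back up the database to a file.
using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
{
    conn.Open();
    new SqlCommand("BACKUP DATABASE [MyDb] TO DISK = 'C:\\Temp\\MyDb.bak'", conn).ExecuteNonQuery();
}

// 2-4. Compress the backup, Base64 it, and wrap it in an XML envelope.
byte[] raw = File.ReadAllBytes(@"C:\Temp\MyDb.bak");
using (var ms = new MemoryStream())
{
    using (var gz = new GZipStream(ms, CompressionMode.Compress, true))
        gz.Write(raw, 0, raw.Length);
    new XElement("database",
        new XAttribute("name", "MyDb"),
        new XAttribute("compressionmethod", "gzip"),
        Convert.ToBase64String(ms.ToArray()))
        .Save(@"C:\Temp\MyDb.xml");
}

Restoring would be the reverse: read the XML, decode the Base64, decompress, write the .bak back to disk, and run a T-SQL "RESTORE" command.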
Because you are talking about a CMS, I'm going to assume you are deploying into hosted environments where you might not have command line access.
Now, before I give you the link I want to state that this is a BAD idea. XML is way too verbose to transfer large amounts of data. Further, although it is relatively easy to pull data out, putting it back in will be difficult and a very time consuming development project in itself.
Next alert: as Denis suggested, you are going to miss all of your stored procedures, functions, etc. Your best bet is to use the normal sql server backup / restore process. (Incidentally, I upvoted his answer).
Finally, the last time I dealt with XML and SQL Server we noticed interesting issues that cropped up when data exceeded a 64KB boundary. Basically, at 63.5KB, the queries ran very quickly (200ms). At 64KB, the query times jumped to over a minute and sometimes quite a bit longer. We didn't bother testing anything over 100KB as that was taking 5 minutes on a fast/dedicated server with zero load.
http://msdn.microsoft.com/en-us/library/ms188273.aspx
See this for putting it back in:
How to insert FOR AUTO XML result into table?
For kicks, here is a link talking about pulling the data out as json objects: http://weblogs.asp.net/thiagosantos/archive/2008/11/17/get-json-from-sql-server.aspx
You should also read (not for the faint of heart): http://www.simple-talk.com/sql/t-sql-programming/consuming-json-strings-in-sql-server/
Of course, the commenters all recommend building something using a CLR approach, but that's probably not available to you in a shared database hosting environment.
At the end of the day, if you are truly insistent on this madness, you might be better served by simply iterating through your table list and exporting all the data to standard CSV files, then iterating over the CSV files to load the data back in, a la C# - is there a way to stream a csv file into database?
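For what it's worth, a naive sketch of the export side (the connection string, output folder, and comma "escaping" are placeholders; a real CSV writer should handle quoting properly):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
{
    conn.Open();

    // Collect the user table names first.
    var tables = new List<string>();
    using (var cmd = new SqlCommand(
        "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'", conn))
    using (var reader = cmd.ExecuteReader())
        while (reader.Read()) tables.Add(reader.GetString(0));

    // Dump each table to its own CSV file.
    foreach (var table in tables)
    {
        using (var cmd = new SqlCommand("SELECT * FROM [" + table + "]", conn))
        using (var reader = cmd.ExecuteReader())
        using (var writer = new StreamWriter(Path.Combine(@"C:\Temp", table + ".csv")))
        {
            while (reader.Read())
                writer.WriteLine(string.Join(",",
                    Enumerable.Range(0, reader.FieldCount)
                              .Select(i => reader[i].ToString().Replace(",", " ")))); // naive escaping
        }
    }
}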
Bear in mind that ALL of the above methods suffer from
long processing times due to the data overhead; which leads to
a high potential for failure due to the various timeouts (page processing, command, connection, etc.); and,
if your data model changes between the time it was exported and reimported, then you're back to writing custom translation code and are ultimately screwed anyway.
So, only do this if you really really have to and are at least somewhat of a masochist at heart. If the purpose is simply to transfer some data from one installation to another, you might consider using one of the tools like SQL Compare and SQL Data Compare from RedGate to handle the transfer.
I don't care how much (or little) you make, the $1500 investment in their developer bundle is much cheaper than the months of time you are going to spend doing this, fixing it, redoing it, fixing it again, etc. (for the record I do NOT work for them. Their products are just top notch.)
Red Gate's SQL Packager lets you package a database into an exe or to a VS project, so you might want to take a look at that. You can specify which tables you want to consider for data.
Is there any specific reason you want to do this using xml?
Which is faster in C#: reading tiny XML files or reading tiny SQL tables with a small amount of data?
I wonder if it is really necessary to create a table in SQL and then establish a connection just to read 10 or 11 parameters.
What would you recommend?
It really depends on what you need. Nothing stops you from combining the two worlds, as XML can easily be stored in SQL Server.
If you actually want SQL Server's authentication, backups, versioning, and so on, you can easily design a mixed XML/SQL table solution. If you really just need a property-bag persistence area, files are OK, but they still require care, i.e. access control, handling the case when the file is not present, etc. (reading a file can still throw a lot of exceptions, and it does so for good reason).
Ask yourself questions like:
do I need restricted access?
how will I report changes (if any)?
do I need version history?
do I read all the parameters or only part of them?
what do I need to do if someone changes an entry?
what should I do when there is no entry?
does it need to be extensible (new parameters added/removed)?
should it be encrypted?
does the database layer need to know about it?
Just some thoughts off the top of my head.
Luke
If you just have a handful of 'settings' that you want to read, I would definitely go with a small XML file. I can't say definitively that it would be faster, but given that you would eliminate the overhead of establishing the connection, authenticating, etc., it would definitely be simpler.
And if you can use LINQ to XML, it's really easy to do.
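For example, a minimal sketch for reading a few parameters with LINQ to XML (the file name and element names here are purely illustrative):

using System.Xml.Linq;

// settings.xml layout is assumed; adjust names to your own file.
var settings = XDocument.Load("settings.xml").Root;
int timeout = (int)settings.Element("TimeoutSeconds");
string server = (string)settings.Element("ServerName");
bool verbose = (bool?)settings.Element("VerboseLogging") ?? false;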
Speed is not the only consideration. You don't have as much admin overhead with XML files as you would with SQL Server.
If the file is local, it will certainly be faster to read using direct file than networked SQL access. Far less between you and the data. No impact on your process from other SQL usage.
Reading a lot of files is slow, so if you have tons of XML files I would vote for SQL, especially considering that you also have to parse the XML files, which is more complicated and more time-consuming than making a connection to a DB, especially if the DB is on the localhost. :)
SQL-based method: pros
Easy to migrate and configure.
SQL-based method: cons
The connection can be down, the connection takes time to establish, the DB admin will wonder why there is a tiny table that has no meaning, and the codebase becomes unnecessarily complex.
File-based method: pros
Fast, no overhead on the DB.
File-based method: cons
Migration is an issue. Configuration is an issue. The file can easily get corrupted.
I'm having a bit of a problem deciding how to store some data. To see it from a simple perspective, it will be a simple table of data but there will be many tables. There will be about 7 columns in each table, but again there will be a lot of tables (and they will be created at runtime, whenever the customer wants a clean grid)
The data has to be stored locally in a file (and there will not be multiple instances of the software running).
I'm using C# 4.0 and I have been looking at using XML files (one file per table, or storing multiple tables in a file), SQLite, SQL Server CE, Access, etc. I would be happy if someone here has some comments or suggestions on what to do or not to do. Stability and reliability (e.g. no trashed databases because of unstable third-party software) is probably my biggest concern.
If you are looking to store the data locally in a file, I would recommend the sqlite option since it seems your data is created in the form of a database table already. Sqlite is already built to handle multiple tables and columns so it means less mental overhead for you, the developer.
http://web.archive.org/web/20100208133236/http://www.mikeduncan.com/sqlite-on-dotnet-in-3-mins/ is a decent tutorial to give a quick overview on how to set it up and get going.
As for what NOT to do: don't try to make your own scheme to save the data to a file, it's a well understood problem that has been solved many times over, why re-invent the wheel?
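As a rough illustration, assuming the System.Data.SQLite provider, creating one of those runtime tables in a single local file could look like this (the file, table, and column names are placeholders):

using System.Data.SQLite; // assumes the System.Data.SQLite provider

// One local file holds all the grids; a new table is created whenever the customer wants a clean grid.
using (var conn = new SQLiteConnection("Data Source=grids.db"))
{
    conn.Open();
    var create = new SQLiteCommand(
        "CREATE TABLE IF NOT EXISTS grid_42 (" +
        "id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT, col3 TEXT, " +
        "col4 TEXT, col5 TEXT, col6 TEXT, col7 TEXT)", conn);
    create.ExecuteNonQuery();
}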
XML won't be a good choice if you are planning to make several queries, since loading text files can be painful when they grow (talking about files over 1 MB). If you plan to keep the amount of data low, XML would be a good way to keep things simple. I still wouldn't use it, but if you already have a background in it, then the benefits will outweigh the learning curve.
If you have no expertise in any of them and the data is light, my suggestion is SQLite. I believe it is the best lightweight DB for .NET and the provider is very good; you can find it easily on Google.
I would tell you that Access is not recommendable, but this is a personal opinion. Many people use it, and I suppose that is for a reason, so you should check it out and try it.
Again, my final recommendation is SQLite, unless you know another one very well, in which case you'll have to think about how much your data is going to grow. If you plan to have a DB around 100 MB, any of them except XML would do; if you think it'll grow bigger than that, lean heavily towards SQLite.
I need to analyze tens of thousands of lines of data. The data is imported from a text file. Each line of data has eight variables. Currently, I use a class to define the data structure. As I read through the text file, I store each line object in a generic list, List<T>.
I am wondering if I should switch to using a relational database (SQL), as I will need to analyze the data in each line of text, trying to relate it to definition terms which I also currently store in generic lists (List<T>).
The goal is to translate a large amount of data using definitions. I want the defined data to be filterable, searchable, etc. Using a database makes more sense the more I think about it, but I would like to confirm with more experienced developers before I make the changes, yet again (I was using structs and arraylists at first).
The only drawback I can think of, is that the data does not need to be retained after it has been translated and viewed by the user. There is no need for permanent storage of data, therefore using a database might be a little overkill.
It is not absolutely necessary to go to a database. It depends on the actual size of the data and the processing you need to do. If you are loading the data into a List of a custom class, why not use LINQ to do your querying and filtering? Something like:
// assuming fooList is the List<Foo> already loaded from the text file
var query = from foo in fooList
            where foo.Prop == criteriaVar
            select foo;
The real question is whether the data is so large that it cannot be loaded into memory comfortably. If that is the case, then yes, a database would be much simpler.
This is not a large amount of data. I don't see any reason to involve a database in your analysis.
There IS a query language built into C# -- LINQ. The original poster currently uses a list of objects, so there is really nothing left to do. It seems to me that a database in this situation would add far more heat than light.
It sounds like what you want is a database. Sqlite supports in-memory databases (use ":memory:" as the filename). I suspect others may have an in-memory mode as well.
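A minimal sketch of that idea with the System.Data.SQLite provider (the table layout here is illustrative; the question's lines would have eight columns):

using System.Data.SQLite; // assumes the System.Data.SQLite provider

// An in-memory database lives only as long as the connection stays open.
using (var conn = new SQLiteConnection("Data Source=:memory:"))
{
    conn.Open();
    new SQLiteCommand("CREATE TABLE lines (a TEXT, b TEXT, c TEXT)", conn).ExecuteNonQuery();
    // ... bulk-insert the parsed lines here, then filter and join with SQL ...
    object count = new SQLiteCommand("SELECT COUNT(*) FROM lines", conn).ExecuteScalar();
}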
I was facing the same problem that you face now while I was working at my previous company. I was looking for a solid solution for a lot of barcode-generated files. The barcode reader generates a text file with thousands of records in a single file. Manipulating and presenting the data was very difficult for me at first. Based on the records, what I programmed was a class that reads the file, loads the data into a DataTable, and saves it to a database. The database I used was SQL Server 2005. Then I was able to manage the saved data easily and present it whichever way I liked. The main point is to read the data from the file and save it to the database. If you do so, you will have a lot of options to manipulate and present it the way you like.
If you do not mind using access, here is what you can do
Attach a blank Access db as a resource
When needed, write the db out to file.
Run a CREATE TABLE statement that handles the columns of your data
Import the data into the new table
Use sql to run your calculations
On close, delete that Access db.
You can use a program like Resourcer to load the db into a resx file
// requires: using System.Resources;
ResourceManager res = new ResourceManager("MyProject.blank_db", this.GetType().Assembly);
byte[] b = (byte[])res.GetObject("access.blank");
The code above pulls the resource out of the project. Take the byte array and save it to a temp location with a temp filename.
"MyProject.blank_db" is the location and name of the resource file
"access.blank" is the tab given to the resource to save
If the only thing you need to do is search and replace, you might consider using sed and awk, and you can do searches using grep, on a Unix platform of course.
From your description, I think Linux command line tools can handle your data very well. Using a database may unnecessarily complicate your work. If you are using Windows, these tools are also available in various ways; I would recommend Cygwin. The following tools may cover your task: sort, grep, cut, awk, sed, join, paste.
These Unix/Linux command line tools may look scary to a Windows person, but there are reasons people love them. The following are my reasons for loving them:
They allow your skill to accumulate: your knowledge of a particular tool can be helpful in different future tasks.
They allow your efforts to accumulate - the command line (or scripts) you used to finish the task can be repeated as many times as needed with different data, without human interaction.
They usually outperform the same tool you could write yourself. If you don't believe it, try to beat sort with your own version on terabyte files.