I have an ASP.NET 3.5 C# site with a SQL Server 2008 back end.
There is one table I use the most; it has around 100 rows and doesn't change often. My code (a web service with a cache) is called from jQuery to look up a record by ID and return a JSON response to the client side.
Recently the server that hosts my site had a major problem, I had to migrate to a new server, and my site was down for 3 days. That got me thinking about saving my data to an XML or JSON file and not touching the database at all.
I need your input. I know how to work with an XML file (I would use LINQ), but I don't know how to read a JSON file from the client side with jQuery. Maybe I should read it on the server side with a StreamReader?
Which method do you like better (XML or JSON)? Any help would be greatly appreciated.
You can actually point an AJAX query right at a JSON file and it'll work from jQuery. This is known as a RESTful API.
For instance:
http://www.myserver.com/api/customers/12
This could be a file named 12 in the folder api/customers, or it could be a script returning a JSON response. The idea is that the URL represents the resource you're looking for.
However, I highly suggest you don't take this approach. Even if you download the 100 rows and search them in JavaScript, it's a bad idea: it puts load on the client and defeats the purpose of AJAX (retrieving only the relevant information).
If you're determined to do away with your database, I would suggest using an XML file, as you keep the processing on the server side and can guarantee its behaviour. Any code that runs on the client is subject to:
Tampering
Version Issues
Taint from plugins
It's also harder to debug.
I suggest exposing a method on your web service that looks up the row by ID and then uses a simple LINQ to XML query to retrieve the data you want.
An ID search should be very simple, and you could even cache the results if you're getting hit hard (to reduce disk reads and XML parses), but that may be overkill :)
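A minimal sketch of what that web method might look like in ASP.NET 3.5 (the file name, XML layout, and Customer type here are illustrative assumptions, not your actual schema):

    using System;
    using System.Linq;
    using System.Web.Script.Services;
    using System.Web.Services;
    using System.Xml.Linq;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    [WebService(Namespace = "http://tempuri.org/")]
    [ScriptService]
    public class CustomerService : WebService
    {
        [WebMethod]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public Customer GetCustomerById(int id)
        {
            // Keep the parsed document in the ASP.NET cache so each hit
            // doesn't re-read and re-parse the file from disk.
            XDocument doc = Context.Cache["customers"] as XDocument;
            if (doc == null)
            {
                doc = XDocument.Load(Server.MapPath("~/App_Data/Customers.xml"));
                Context.Cache["customers"] = doc;
            }

            // Assumes XML shaped like <customers><customer id="12" name="..."/></customers>
            return (from c in doc.Root.Elements("customer")
                    where (int)c.Attribute("id") == id
                    select new Customer
                    {
                        Id = id,
                        Name = (string)c.Attribute("name")
                    }).FirstOrDefault();
        }
    }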
Related
I am fairly new to ASP.NET MVC, which is why I could use some direction.
I am building a site for a client that is not using a Database.
I have several (~20) YouTube videos I would like to embed. The client is no longer producing these videos, and the list will not be updated often. I have created a template view for the video and its information. I would like to set up a model that can query a YouTube video from the data set.
My initial thought is to create a JSON file and a model class to query the information. Is that the best way to accomplish this?
JSON seems like a great idea to me. With only about 20 records total, you're near the point where it doesn't even make sense to be data-driven: just have 20 static pages with shared CSS and a Google Custom Search Engine for queries. However, I still tend to prefer relying on a data source whenever I can, and I like JSON for this.
JSON will work well here because you can use a *.js file that will be cached by most browsers, and you can execute your searches on the data without even needing to refresh the page. Especially if you're using a templating system like Knockout or Ember, this can be entirely a client application: no server code. Such an application would be very fast from the user's perspective, especially if you use a CDN for the template engine, so that many users will already have it cached on first load.
You can use an XML document to store structured data, load it, and use XPath to query it (be mindful of XPath injection vulnerabilities). Or deserialize the same XML into a data model and use LINQ to query it.
(Btw, this is far from the only option, just the one and a half that come immediately to mind.)
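For instance, a quick XPath lookup might look like this (the file name and element layout are invented):

    using System;
    using System.Xml;

    class XPathLookupDemo
    {
        static void Main()
        {
            XmlDocument doc = new XmlDocument();
            doc.Load("videos.xml");

            // Look up one video by its id attribute. Build the XPath only
            // from validated input, to avoid the injection issue noted above.
            XmlNode node = doc.SelectSingleNode("/videos/video[@id='12']");
            if (node != null)
            {
                Console.WriteLine(node.Attributes["title"].Value);
            }
        }
    }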
I would put the data in a flat text file of my preferred format (personally, JSON as well) and then deserialize it into a list of objects and run LINQ queries on it. Given the small amount of data in question, I would use a flat file over a database even if I had the option.
You could also use a resx file as part of the project, or the built-in settings as suggested in the comments. Regardless of how you do it, the amount of data is small enough that you may as well just read it into a collection in memory and then query that collection.
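A rough sketch of that approach, assuming the Json.NET library for deserialization; the Video type and file layout are just illustrative:

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using Newtonsoft.Json;

    public class Video
    {
        public string Id { get; set; }
        public string Title { get; set; }
        public string YouTubeUrl { get; set; }
    }

    public static class VideoRepository
    {
        private static List<Video> _videos;

        public static Video FindById(string path, string id)
        {
            // Read and deserialize once; after that every query is in memory.
            if (_videos == null)
            {
                _videos = JsonConvert.DeserializeObject<List<Video>>(File.ReadAllText(path));
            }
            return _videos.FirstOrDefault(v => v.Id == id);
        }
    }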
Since it doesn't need to be updated very often, an easy approach would be to just create a hard-coded list in code that the links are generated from. If you want to be able to update the links in the future without modifying code, then XML or JSON are likely your best bets.
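For ~20 entries, the hard-coded version is about as simple as it gets (IDs, titles, and URLs below are placeholders, reusing the illustrative Video type from above):

    using System.Collections.Generic;
    using System.Linq;

    public static class VideoCatalog
    {
        private static readonly List<Video> Videos = new List<Video>
        {
            new Video { Id = "v01", Title = "Welcome", YouTubeUrl = "http://www.youtube.com/watch?v=XXXXXXXXXXX" },
            new Video { Id = "v02", Title = "Factory tour", YouTubeUrl = "http://www.youtube.com/watch?v=YYYYYYYYYYY" },
            // ...the remaining ~18 entries
        };

        public static Video FindById(string id)
        {
            return Videos.FirstOrDefault(v => v.Id == id);
        }
    }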
The company I work for is running a C# project that crawls data from around 100 websites, saves it to the DB, and runs some procedures and calculations on that data.
Each of those 100 websites has around 10,000 events, and each event is saved to the DB.
After that, the saved data is aggregated and generated into XML, so each of those 10,000 saved events is now represented as an XML file in the DB.
The design looks like this:
1) Crawl the 100 websites to collect the data and save it to the DB.
2) Collect the data that was saved to the DB and generate an XML file for each event.
3) Save the XML files to the DB.
The main issue for this post is the selection (reading back) of the saved XML files.
Each XML is about 1MB, and considering the fact that there are around 10,000 events, I am not sure SQL Server 2008 R2 is the right option.
I tried using Redis, and saving works very well (and fast!), but the queries to get those XMLs back are very slow (even locally, so network traffic isn't the issue).
I was wondering what your thoughts are. Please take into consideration that this is a real-time system, so caching is not an option here.
Any idea will be welcome.
Thanks.
Instead of using the DB, you could try a cloud-based system (Azure Blobs or Amazon S3); it seems like a perfect fit. See this post: azure blob storage effectiveness, the same situation, except you have XML files instead of images. You can use the DB for storing the metadata, i.e. the source and event type of each XML and its path in the cloud, but not the data itself.
You may also zip the files. I don't know the exact method offhand, but it can surely be handled on the client side; static data is often sent to clients in zipped form by default.
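A rough sketch combining both ideas, assuming the classic WindowsAzure.Storage client library (the container name, key scheme, and method name are illustrative, not a definitive implementation):

    using System.IO;
    using System.IO.Compression;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public static class EventXmlStore
    {
        public static string Save(string connectionString, string eventId, string xml)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            CloudBlobContainer container = account.CreateCloudBlobClient()
                                                  .GetContainerReference("event-xml");
            container.CreateIfNotExists();

            CloudBlockBlob blob = container.GetBlockBlobReference(eventId + ".xml.gz");
            using (MemoryStream ms = new MemoryStream())
            {
                // GZip the ~1 MB XML before upload to cut storage and transfer.
                using (GZipStream gz = new GZipStream(ms, CompressionMode.Compress, true))
                using (StreamWriter writer = new StreamWriter(gz))
                {
                    writer.Write(xml);
                }
                ms.Position = 0;
                blob.UploadFromStream(ms);
            }

            // Store only this path plus the source/event-type metadata in SQL Server.
            return blob.Uri.ToString();
        }
    }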
Your question is missing some details, such as how long your data needs to remain in the database and so on…
I'd avoid storing XML in the database if you already have the raw data. Why not have an application that queries the database and generates XML reports on demand? That would save you a lot of space.
10 GB of data per day is something SQL Server 2008 R2 can handle with the right hardware and good structural optimization. You'll need to investigate whether Standard Edition will be enough or whether you'll need Enterprise or Datacenter licenses.
In any case, the answer is yes: SQL Server is capable of handling this amount of data, but I'd check other solutions as well to see if it's possible to reduce the costs in any way.
Your basic architecture doesn't seem to be at fault; it's the way you're using Redis. Basically, if you design your key => value mapping right, there is no way retrieval from Redis should be slow.
For example, say I have to store 1 million objects in Redis, keyed by an ID that is nothing but a GUID. The save will be really quick, but when it comes to retrieval, do I know the key? If I KNOW the key, it'll be fast; but if I don't know it, or I'm trying to retrieve my data not by its key but by some value inside my objects, then of course it'll be slow.
The point is that for retrieval you should work against the key and nothing else, so design your key as a pre-computable value in itself; then when I need to get some data from Redis/memcached, I can construct the KEY and do a single hit to get the data.
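In C#, for example with the StackExchange.Redis client, that idea might look like this (the key scheme is just an illustration):

    using StackExchange.Redis;

    public static class EventCache
    {
        private static readonly ConnectionMultiplexer Redis =
            ConnectionMultiplexer.Connect("localhost");

        // The key is fully derivable from what you know at read time
        // (source site + event id), so every lookup is a single GET.
        private static string KeyFor(int siteId, int eventId)
        {
            return "event:" + siteId + ":" + eventId;
        }

        public static void Save(int siteId, int eventId, string xml)
        {
            Redis.GetDatabase().StringSet(KeyFor(siteId, eventId), xml);
        }

        public static string Load(int siteId, int eventId)
        {
            return Redis.GetDatabase().StringGet(KeyFor(siteId, eventId));
        }
    }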
If you could put more details, we'll be able to help you better.
In my situation, I have a C# DLL I wrote myself that has been registered in a SQL Server database containing sales/customer data. As of now, I'm a bit stuck.
The DLL makes a call to a remote server to obtain a token. The token is then added to the database. Ideally, the next step is to retrieve data from SQL Server into the DLL, then build and post a JSON file to a remote server using the token the DLL retrieved.
Where I'm stuck is that there are 134 elements, with different data types, in the receipt section of my JSON file alone. I will need to handle all of that data in my C# DLL, and in the future I may need to pull a lot more data into this JSON file. I've done some research, and a user-defined type (UDT) wouldn't quite work and, from what I can tell, is an option I should stay away from. My other two options would be to either export to XML and parse it in my DLL, or to create and read in 134+ variables.
My question is: is there a simpler way to do this besides XML/hard-coding? It would be ideal if there were a way to use an array or an object, but neither seems to be supported, according to what I've read here.
Thank you.
Important note: because of the database and the JSON library I'm using, I'm working in .NET Framework 2.0.
I would recommend using XML serialization on the C# side: you create an object model that mirrors your database schema.
As you are using .NET 2.0, you already have a good set of base classes for modelling your database schema in an object-oriented way. Even nullable columns can be mapped to nullable types to save memory and network space.
On the SQL side, you use the FOR XML clause, which changes the output of your query from tabular to XML. You just have to write one good stored procedure that produces XML in the exact hierarchy of your C# objects.
This XML has to match the names and casing of your C# class(es) and their properties.
Then you deserialize that XML on the C# side in no more than 10 lines of code, no matter how big or complex the data hierarchy is, and you instantly have in-memory objects that you can immediately serialize into JSON again.
Let me know if you need more complete examples of how to achieve this. And please clarify whether you are running inside the SQL Server CLR execution context, as you might need special permissions to serialize/deserialize data.
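As a starting point, here is a hedged sketch under invented names: suppose the stored procedure ends with a query using FOR XML PATH('Receipt'), ROOT('Receipts'); the C# side (kept .NET 2.0-compatible) could then be:

    using System.Data;
    using System.Data.SqlClient;
    using System.Xml;
    using System.Xml.Serialization;

    // Shapes are invented; match them to your SP's actual FOR XML output.
    public class Receipt
    {
        public int Id;
        public decimal Total;
        // ...remaining elements, names and casing matching the XML
    }

    [XmlRoot("Receipts")]
    public class ReceiptBatch
    {
        [XmlElement("Receipt")]
        public Receipt[] Items;
    }

    public static class ReceiptLoader
    {
        public static ReceiptBatch Load(string connectionString)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand("dbo.GetReceiptsAsXml", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();
                // ExecuteXmlReader streams the FOR XML result straight
                // into the serializer; no intermediate string needed.
                using (XmlReader reader = cmd.ExecuteXmlReader())
                {
                    XmlSerializer serializer = new XmlSerializer(typeof(ReceiptBatch));
                    return (ReceiptBatch)serializer.Deserialize(reader);
                }
            }
        }
    }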
I guess it's a very primitive way of achieving what Entity Framework does, but it works.
You should probably stick with XML, since your data is semi-structured, especially if you know your schema will change over time. SQL Server is not yet an OODBMS.
I am basically new to this kind of work. I am programming my application in C# in VS2010. I have a Crystal Report that is working fine, and it is populated with some XML data. That XML data comes from another application, written in Python, running on another machine.
The Python script generates some data and puts it on a memory stream. I basically have to read that memory stream and write the XML that is used to populate my Crystal Report. My supervisor wants me to use a remote procedure call.
I have never done any remote procedure calls, but from what I have researched and understood, I mainly have to develop a web or WCF service. I don't know how I should do it. We are planning to use the HTTP protocol.
So, this is how it is supposed to work: I give them the URL of my service, they call that service, and my service reads the data they put on the memory stream. After reading the data, I use part of it to write my XML, and this XML is used to populate my Crystal Report.
The other part of the data (other than the data used to write the XML) should be sent to a database on the SQL Server. This is my complete problem definition. I need ideas and links that will help me solve this problem.
As John wrote, you're quite late if it's urgent, and your description is quite vague. There are 1001 RPC techniques, and the choice depends on the details. But given that you seem to just be exchanging some XML data, you probably don't need a full RPC implementation. You can write an HTTP server in Python with just a few lines of code. If it needs to be a bit more stable and long-running, have a look at Twisted. Then just use plain HTTP and the WebClient class. Not a perfect solution, but it has worked out quite well for me more than once. And you said it's urgent! ;-)
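On the C# side, fetching the XML can be as small as this (the URL is a placeholder for whatever endpoint the Python server exposes):

    using System;
    using System.Net;

    class FetchReportXml
    {
        static void Main()
        {
            using (WebClient client = new WebClient())
            {
                // Pull the XML the Python HTTP server serves up.
                string xml = client.DownloadString("http://python-box:8000/report.xml");
                // ...feed part of this into the Crystal Report XML,
                // and push the rest into SQL Server.
                Console.WriteLine(xml.Length);
            }
        }
    }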
I have a web service that returns quite a large set of data: it could be 600 rows by 20 columns.
What is the fastest, most efficient way to load this data into an HTML table in jQuery code?
I tried creating the table HTML by looping through the returned data and building the table DOM inside a string, but the looping part is very slow. I have heard of jQuery Templates, but I am not sure that technology is fast enough for large sets of data…
Thanks
Is it possible for you to alter the web service, or have another service call it, parse the data server-side, and return HTML? Processing the JSON on the client side is going to be your bottleneck. If you can have the service return the required HTML, then it's a simple element.html(data) on the client side.
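A sketch of the server side, building the table HTML in a single StringBuilder pass (the Row type and two-column layout are invented):

    using System.Collections.Generic;
    using System.Text;
    using System.Web;

    public class Row
    {
        public string Name;
        public string Value;
    }

    public static class TableHtml
    {
        public static string Build(IEnumerable<Row> rows)
        {
            // One StringBuilder pass server-side beats building DOM
            // nodes row by row in the browser.
            StringBuilder sb = new StringBuilder("<table><tbody>");
            foreach (Row row in rows)
            {
                sb.Append("<tr><td>")
                  .Append(HttpUtility.HtmlEncode(row.Name))
                  .Append("</td><td>")
                  .Append(HttpUtility.HtmlEncode(row.Value))
                  .Append("</td></tr>");
            }
            return sb.Append("</tbody></table>").ToString();
        }
    }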
Edit: The question of returning JSON or HTML and the pros and cons of each have been discussed here quite a bit:
1, 2, 3, 4, 5
It seems this is a matter of design: loading 600 x 20 data items at once is not a good idea. Clients with low system resources, like Pocket PCs or thin clients, would suffer when visiting such a page.
You need to cache the web service data and load it into the client browser in chunks, based on user actions. You can use some AJAX controls to do so.
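A hedged sketch of such a chunked (paged) web method over cached data, reusing the illustrative Row type from above; LoadAllRows is a hypothetical data-access call:

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Services;

    public class PagedDataService : WebService
    {
        [WebMethod]
        public List<Row> GetPage(int pageIndex, int pageSize)
        {
            // Cache the full result set once, then hand it out in chunks.
            List<Row> all = Context.Cache["rows"] as List<Row>;
            if (all == null)
            {
                all = LoadAllRows();   // hypothetical call to the original service/DB
                Context.Cache["rows"] = all;
            }
            return all.Skip(pageIndex * pageSize).Take(pageSize).ToList();
        }

        private static List<Row> LoadAllRows()
        {
            // ...fetch the full 600 x 20 result set here
            return new List<Row>();
        }
    }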
If your goal is to have the user interact with the data as fast as possible, maybe you want to consider something like the infinite scroll (also called continuous scroll) pattern, so you build the grid as the user scrolls rather than spending the whole time rendering it up front.
Some links:
http://www.infinite-scroll.com/
http://net.tutsplus.com/tutorials/javascript-ajax/how-to-create-an-infinite-scroll-web-gallery/
I think this is where JSON DB might be most useful... You could write a server-side page that responds with JSON DB-formatted data for a few rows, then write your own AJAX code to load the rows and render them in your choice of display model, like your own <table> with "overflow:auto;", adding rows to that table in chunks, or use something like the 'infinite scroll' already suggested.