One part of my application requires a bunch of images (representing scale) on screen. Because of the wide variety of possibilities, I'd rather generate the images programmatically than pre-create and store all possible images (some of which may never be used). This seems doable using the method described in this question and answer.
However, the two pages which will use these images will have plenty of them (potentially a couple hundred on one of the pages). My question, then, is will this negatively impact the performance of the application, and if so, how drastically? The pages could potentially be reloaded several times as values change.
Would it be best to generate the images when the page is loaded? To pre-create them and store several hundred, possibly only using a few? Or to programmatically create them the first time they are loaded, and then store them on the assumption that, having been used once, they will likely be used again (provided they are still valid - it is quite possible for them to become invalid and need to be replaced)?
EDIT: Each of these images represents a number, which is an application-wide variable. It is expected that most of these numbers will be different, although there may be a few that are equal.
Why not do both: programmatically generate images as needed, but cache them (i.e. save them as files on the server) so they can be reused?
Further to your edit: if the images are simple image representations of numbers, then just pre-generate the digits 0 to 9 and programmatically glue them together at runtime, as in the sketch below.
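For example, a minimal sketch of the gluing step using System.Drawing, assuming pre-rendered digit images named 0.png through 9.png of equal height (both assumptions, not from the original answer):

    using System;
    using System.Drawing;
    using System.IO;

    static class NumberImages
    {
        // Compose the image for a number by gluing pre-rendered digit
        // bitmaps together left to right. Assumes a non-negative value
        // and digit images "0.png".."9.png" in digitDir.
        public static Bitmap RenderNumber(int value, string digitDir)
        {
            string text = value.ToString();
            int x = 0, totalWidth = 0, height = 0;
            var digits = new Bitmap[text.Length];

            for (int i = 0; i < text.Length; i++)
            {
                digits[i] = new Bitmap(Path.Combine(digitDir, text[i] + ".png"));
                totalWidth += digits[i].Width;
                height = Math.Max(height, digits[i].Height);
            }

            var result = new Bitmap(totalWidth, height);
            using (var g = Graphics.FromImage(result))
            {
                foreach (var digit in digits)
                {
                    g.DrawImage(digit, x, 0, digit.Width, digit.Height);
                    x += digit.Width;
                    digit.Dispose();
                }
            }
            return result;
        }
    }

The resulting Bitmap can then be saved to the server-side cache, so each distinct number is only ever composed once.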
Related
I am making a state-saving replay system in Unity3D and I want the replay to be written to a file. What file format is best to use when saving a replay? XML, maybe? For now I'm storing the transform data, and I've implemented the option to add additional frame data.
It highly depends on what you are trying to do. There are tradeoffs for every solution in this case.
Based on the extra information you have given in the comments, the best solution I can think of in this case is marshalling your individual "recording sessions" to a file. However, there is a little overhead involved in achieving this.
Create a class called Frame, and another class called Record which has a List<Frame> frames. That way, you can place any information that you would like captured in each frame as attributes of the Frame class.
Since you can't marshal a generic type, you will have to marshal each frame individually. I suggest implementing a method in the Record class called MarshalRecording() that handles that for you.
However, in doing that, you will find it difficult to unmarshal your records, because the frames may have different sizes in binary form and there is no separator indicating where one frame ends and the next begins. I suggest prepending the size to each marshalled frame; that way you will be able to unmarshal all of the frames even if they have different sizes.
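A rough sketch of that layout; the position fields are placeholders for whatever you actually capture per frame:

    using System.Collections.Generic;
    using System.IO;

    // One captured frame; the fields here stand in for your real data.
    class Frame
    {
        public float PosX, PosY, PosZ;

        public byte[] Marshal()
        {
            using (var ms = new MemoryStream())
            using (var w = new BinaryWriter(ms))
            {
                w.Write(PosX); w.Write(PosY); w.Write(PosZ);
                return ms.ToArray();
            }
        }

        public static Frame Unmarshal(byte[] data)
        {
            using (var r = new BinaryReader(new MemoryStream(data)))
                return new Frame { PosX = r.ReadSingle(), PosY = r.ReadSingle(), PosZ = r.ReadSingle() };
        }
    }

    class Record
    {
        public List<Frame> Frames = new List<Frame>();

        // Each frame is written with a 4-byte size prefix so frames of
        // different binary sizes can be read back unambiguously.
        public void MarshalRecording(Stream output)
        {
            var w = new BinaryWriter(output);
            foreach (var frame in Frames)
            {
                byte[] bytes = frame.Marshal();
                w.Write(bytes.Length);
                w.Write(bytes);
            }
            w.Flush();
        }

        // Assumes a seekable stream (e.g. a FileStream).
        public static Record UnmarshalRecording(Stream input)
        {
            var record = new Record();
            var r = new BinaryReader(input);
            while (input.Position < input.Length)
            {
                int size = r.ReadInt32();
                record.Frames.Add(Frame.Unmarshal(r.ReadBytes(size)));
            }
            return record;
        }
    }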
As #PiotrK pointed out in his answer, you could use protobuf. However, I don't recommend it for your specific use; personally, I think it is overkill (too much work for too little result - protobufs can be a PITA sometimes).
If you are worried about storage size, you could LZ4-compress the whole thing (if you are concatenating the binary information in memory), or LZ4 each frame (if you are processing each frame individually and then appending it to a file). I recommend the latter because, depending on the number of frames, you may run out of memory while marshalling your record.
P.S.: Never use XML, it's cancerous!
The best way I know is to use some kind of format generator (like Google Protobuf). This has several advantages over using the default C# serializer (a sketch follows the list):
Versioning of the format is as easy as possible; you can add new features to the format without breaking replays already existing in the field
The result is stored as either text or binary, with binary preferable as it has a very small footprint (for example, if some of your data has default values, it won't be present in the output at all - the loader will handle this gracefully)
It's Google technology! :-)
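For completeness, a hedged sketch using protobuf-net, one common Protobuf binding for C#; the class and field names here are illustrative, not from the answer:

    using System.Collections.Generic;
    using System.IO;
    using ProtoBuf; // protobuf-net NuGet package

    [ProtoContract]
    class ReplayFrame
    {
        [ProtoMember(1)] public float PosX;
        [ProtoMember(2)] public float PosY;
        [ProtoMember(3)] public float PosZ;
        // New fields can be added later with new tag numbers
        // without breaking replays already in the field.
    }

    [ProtoContract]
    class Replay
    {
        [ProtoMember(1)] public List<ReplayFrame> Frames = new List<ReplayFrame>();
    }

    static class ReplayIO
    {
        public static void Save(Replay replay, string path)
        {
            using (var file = File.Create(path))
                Serializer.Serialize(file, replay);
        }

        public static Replay Load(string path)
        {
            using (var file = File.OpenRead(path))
                return Serializer.Deserialize<Replay>(file);
        }
    }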
I work at a shop that has sales both weekly and monthly. To advertise these sales, somebody in the company spends a shift making small and large signs in MS Word from a standardized template. This is really time-consuming and prone to mistakes.
I want to design a program to pull the necessary product information from our database and put it into these signs.
I want to use WordArt or a WordArt substitute to create many of the objects, as this will ensure a standard size (to fit on the signs) and style. I don't care so much about the effects; I am just concerned with the height and width of the words as a whole.
I have created a small program that does this using the Interop library, and while it creates a near perfect replica of the original sign, I fear it might be too slow to pull off doing 30-50 signs in one sitting.
Is there an alternative to MS wordart that would allow me to create either an image or other text object that can be scaled to fit within a certain size?
If you are trying to replace (or relieve) an employee from making signs by writing code, and your code is near perfect but just a bit slow, then you should profile your code to see why it is slow. I can't imagine that your code is slower than the employee :D. So you shouldn't discard Word interop just because of its speed, if it does exactly what you want it to do.
Also, since WordArt is a Word thing, doing it without Word is a huge amount of work. If you have the correct fonts, you might be able to do this from .NET using GDI+ (the standard image-drawing interface). However, this will require some tutorial reading and trial and error. There is a praised GDI+ FAQ with lots of information on the subject.
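As a starting point, a minimal sketch of fitting text to a target size with GDI+; the font name and the uniform-scaling choice are assumptions:

    using System;
    using System.Drawing;

    static class SignText
    {
        // Draw text scaled to fill a target rectangle, mimicking WordArt's
        // fit-to-size behaviour: measure at a known size, then scale.
        public static void DrawFitted(Graphics g, string text, RectangleF bounds)
        {
            const float probeSize = 100f;
            using (var probe = new Font("Arial", probeSize))
            {
                SizeF measured = g.MeasureString(text, probe);
                float scale = Math.Min(bounds.Width / measured.Width,
                                       bounds.Height / measured.Height);
                using (var font = new Font("Arial", probeSize * scale))
                    g.DrawString(text, font, Brushes.Black, bounds.Location);
            }
        }
    }

This scales uniformly; to stretch width and height independently the way WordArt can, you could apply Graphics.ScaleTransform before drawing instead of changing the font size.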
A possible cause for slow interop is the creation (and destruction) of Word instances.
Use .docx and change the XML directly.
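To expand on that one-liner: a .docx is a zip archive whose main content lives in word/document.xml, so a hedged sketch of the approach looks like this. The {{PRODUCT}} placeholder convention is an assumption, and note that Word may split placeholder text across runs, which this naive string replace will not handle:

    using System.IO;
    using System.IO.Compression; // System.IO.Compression.FileSystem assembly

    static class DocxTemplate
    {
        // Copy the template, then rewrite word/document.xml inside the
        // copy with the placeholders replaced.
        public static void Fill(string templatePath, string outputPath, string product)
        {
            File.Copy(templatePath, outputPath, overwrite: true);
            using (var archive = ZipFile.Open(outputPath, ZipArchiveMode.Update))
            {
                var entry = archive.GetEntry("word/document.xml");
                string xml;
                using (var reader = new StreamReader(entry.Open()))
                    xml = reader.ReadToEnd();

                xml = xml.Replace("{{PRODUCT}}", product);

                entry.Delete();
                entry = archive.CreateEntry("word/document.xml");
                using (var writer = new StreamWriter(entry.Open()))
                    writer.Write(xml);
            }
        }
    }

Because this never starts a Word instance, generating 30-50 signs this way should take seconds rather than a whole sitting.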
I need to generate etags for image files on the web. One of the possible solutions I thought of would be to calculate CRCs for the image files, and then use those as the etag.
This would require CRCs to be calculated every time someone requests an image on the server, so it's very important that it can be done fast.
So, how fast are algorithms to generate CRCs? Or is this a stupid idea?
Use a more robust hashing algorithm such as SHA-1 instead.
Speed depends on the size of the image: most of the time will be spent loading data from disk rather than on CPU processing. You can cache your generated hashes.
But I also advise creating the ETag based on the file's last update date, which is much quicker and does not require loading the whole file.
Remember, an ETag only has to be unique for a particular resource, so if two different images have the same last update time, that is fine.
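A minimal sketch of the last-update-date approach, assuming the images are files on disk:

    using System;
    using System.IO;

    static class ETags
    {
        // Build the ETag from the file's last-write time: no need to
        // read, let alone hash, the image bytes at all.
        public static string ForFile(string imagePath)
        {
            DateTime lastWrite = File.GetLastWriteTimeUtc(imagePath);
            return "\"" + lastWrite.Ticks.ToString("x") + "\"";
        }
    }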
Most implementations, including Microsoft's own, use the last modified date or other file headers as the ETag, and I suggest you use that method.
Depends on the method used, and the length. Generally pretty fast, but why not cache them?
If there won't be changes to the files more often than the resolution of the system used to store them (that is, file modification times for the filesystem, or SQL Server's datetime if stored in a database), then why not just use the modification date at the relevant resolution?
I know RFC 2616 advises against the use of timestamps, but that is only because HTTP timestamps have 1-second resolution and there can be changes more frequent than that. However:
That's still fine if you don't change images more than once a second.
It's also fine to base your ETag on the time, as long as the precision is great enough that two versions of the same resource won't end up with the same value.
With this approach you are guaranteed a unique ETag, which is what you want (with a CRC, collisions are unlikely but certainly possible).
Of course, if you never change the image at a given URI, it's even easier, as you can just use a fixed string (I prefer the string "immutable").
I would suggest calculating the hash once, when adding an image to the database, and then just returning it via SELECT along with the image itself.
If you are using SQL Server and the images are not very large (max 8000 bytes), you can leverage the HASHBYTES() function, which is able to generate SHA-1, MD5, and so on.
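For larger images, a hedged sketch of the same idea done in C# at insert time; the table and column names here are assumptions:

    using System.Data.SqlClient;
    using System.Security.Cryptography;

    static class ImageStore
    {
        // Compute the hash once at insert time and store it next to the
        // image, so serving the ETag later is a plain SELECT.
        public static void Insert(string connectionString, byte[] imageBytes)
        {
            byte[] hash;
            using (var sha1 = SHA1.Create())
                hash = sha1.ComputeHash(imageBytes);

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO Images (Data, ETagHash) VALUES (@data, @hash)", conn))
            {
                cmd.Parameters.AddWithValue("@data", imageBytes);
                cmd.Parameters.AddWithValue("@hash", hash);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }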
I am developing my first multilingual C# site and everything is going OK except for one crucial aspect. I'm not 100% sure what the best option is for storing strings (typically single words) that will be translated by code from my code-behind pages.
On the front end of the site I am going to use ASP.NET resource files for the wording on the pages. This part is fine. However, this site will make XML calls and the XML responses are only ever in English. I have been given an Excel sheet with all the words that will be returned by the XML, broken down into the different languages, but I'm not sure how best to store/access this information. There are roughly 80 words x 7 languages.
I am thinking about creating a dictionary object for each language, created by my global.asax file at application start-up and kept in memory. The plus side of doing this is that each dictionary object only has to be created once (until IIS restarts) and can be accessed by any user without needing to be rebuilt; the downside is that I have 7 dictionary objects constantly stored in memory. The server is a Win 2008 64-bit with 4GB of RAM, so should I even be concerned about the memory taken up by this method?
What do you guys think would be the best way to store/retrieve different language words that would be used by all users?
Thanks for your input.
Rich
From what you say, you are looking at 560 words (80 x 7) which need to differ based on locale. This is a drop in the ocean. The resource file method which you have contemplated is fit for purpose, and I would recommend using it. Resource files integrate with controls, so you will get the most from them.
If it did trouble you, you could put them in a sliding cache (of 20 minutes, for example), but I do not see anything wrong with your choice of solution.
OMO
Cheers,
Andrew
P.s. have a read through this to see how you can find and bind values in different resource files to controls and literals, and use them programmatically.
http://msdn.microsoft.com/en-us/magazine/cc163566.aspx
As long as you are aware of the impact of doing so, then yes, storing this data in memory would be fine (as long as you have enough memory to do so). Once you know what is appropriate for the current user, tossing it into memory is fine.

You might look at something like MemCached Win32 or Velocity, though, to offload the storage to another app server. Use this even in your local application for the time being; that way, when it is time to push this to another server or grow your app, you have a clear separation of concerns defined at your caching layer. Keep in mind that the more languages you support, the more you are storing in memory, so keep an eye on the amount of data held on your lone app server, as this could become overwhelming in time.

Also, make sure that the keys you are using are specific to the language. Otherwise you might find that you are storing a menu in German for an English user.
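A minimal sketch of the language-specific-keys idea; the language codes, sample words, and fallback behaviour are all assumptions:

    using System.Collections.Generic;

    static class Translator
    {
        // One dictionary per language, built once at Application_Start
        // and kept in memory; 80 words x 7 languages is only a few
        // kilobytes of strings.
        static readonly Dictionary<string, Dictionary<string, string>> Tables =
            new Dictionary<string, Dictionary<string, string>>();

        public static void Load()
        {
            // Populated inline here for brevity; in practice, read the
            // Excel export (e.g. saved as CSV) at start-up.
            Tables["en"] = new Dictionary<string, string> { { "open", "Open" } };
            Tables["de"] = new Dictionary<string, string> { { "open", "Offen" } };
        }

        // Keys are language-specific, so a German user can never be
        // handed the English entry by accident.
        public static string Translate(string language, string englishWord)
        {
            Dictionary<string, string> table;
            string result;
            if (Tables.TryGetValue(language, out table) &&
                table.TryGetValue(englishWord, out result))
                return result;
            return englishWord; // fall back to the English word from the XML
        }
    }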
In our desktop application, we have implemented a simple search engine using an inverted index.
Unfortunately, some of our users' datasets can get very large, e.g. taking up ~1GB of memory before the inverted index has been created. The inverted index itself takes up a lot of memory, almost as much as the data being indexed (another 1GB of RAM).
Obviously this creates problems with out-of-memory errors, as the 32-bit Windows limit of 2GB of memory per application is hit, or users with lower-spec computers struggle to cope with the memory demand.
Our inverted index is stored as a:
Dictionary<string, List<ApplicationObject>>
And this is created during the data load, when each object is processed, such that the ApplicationObject's key string and description words are stored in the inverted index - roughly as sketched below.
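A sketch of that structure as described; the whitespace tokenisation is a guess at what the application actually does:

    using System;
    using System.Collections.Generic;

    class ApplicationObject
    {
        public string Key;
        public string Description;
    }

    class InvertedIndex
    {
        readonly Dictionary<string, List<ApplicationObject>> index =
            new Dictionary<string, List<ApplicationObject>>(StringComparer.OrdinalIgnoreCase);

        // Index every key/description word of the object as it is loaded.
        public void Add(ApplicationObject obj)
        {
            string[] words = (obj.Key + " " + obj.Description)
                .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
            foreach (string word in words)
            {
                List<ApplicationObject> postings;
                if (!index.TryGetValue(word, out postings))
                    index[word] = postings = new List<ApplicationObject>();
                postings.Add(obj);
            }
        }
    }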
So, my question is: is it possible to store the search index more efficiently space-wise? Perhaps a different structure or strategy needs to be used? Alternatively is it possible to create a kind of CompressedDictionary? As it is storing lots of strings I would expect it to be highly compressible.
If it's going to be 1GB... put it on disk. Use something like Berkeley DB. It will still be very fast.
Here is a project that provides a .net interface to it:
http://sourceforge.net/projects/libdb-dotnet
I see a few solutions:
If you have the ApplicationObjects in an array, store just the index - it might be smaller.
You could use a bit of C++/CLI to store the dictionary, using UTF-8.
Don't bother storing all the different strings; use a Trie.
I suspect you may find you've got a lot of very small lists.
I suggest you find out roughly what the frequency distribution is like - how many of your dictionary entries have single-element lists, how many have two-element lists, and so on. You could potentially store several separate dictionaries - one for "I've only got one element" (a direct mapping), then "I've got two elements" (mapping to a Pair struct with the two references in), and so on until it becomes silly - quite possibly at about 3 entries - at which point you go back to normal lists. Encapsulate the whole lot behind a simple interface (add entry / retrieve entries). That way you'll have a lot less wasted space (mostly empty buffers, counts, etc.).
If none of this makes much sense, let me know and I'll try to come up with some code.
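In that spirit, here is a rough sketch of the size-specialised storage; all the names are illustrative, and promoting at three entries is just the example threshold from above:

    using System.Collections.Generic;

    // Single entries and pairs avoid the overhead of a List (backing
    // array, count, etc.); only keys with three or more hits pay for one.
    class CompactIndex<T> where T : class
    {
        struct Pair { public T First, Second; }

        readonly Dictionary<string, T> singles = new Dictionary<string, T>();
        readonly Dictionary<string, Pair> pairs = new Dictionary<string, Pair>();
        readonly Dictionary<string, List<T>> lists = new Dictionary<string, List<T>>();

        public void Add(string key, T value)
        {
            List<T> list;
            Pair pair;
            T single;
            if (lists.TryGetValue(key, out list))
            {
                list.Add(value);
            }
            else if (pairs.TryGetValue(key, out pair))
            {
                // Third entry: promote the pair to a real list.
                pairs.Remove(key);
                lists[key] = new List<T> { pair.First, pair.Second, value };
            }
            else if (singles.TryGetValue(key, out single))
            {
                singles.Remove(key);
                pairs[key] = new Pair { First = single, Second = value };
            }
            else
            {
                singles[key] = value;
            }
        }

        public IEnumerable<T> Retrieve(string key)
        {
            List<T> list;
            Pair pair;
            T single;
            if (lists.TryGetValue(key, out list)) return list;
            if (pairs.TryGetValue(key, out pair)) return new[] { pair.First, pair.Second };
            if (singles.TryGetValue(key, out single)) return new[] { single };
            return new T[0];
        }
    }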
I agree with bobwienholt, but if you are indexing datasets, I assume these came from a database somewhere. Would it make sense to just search that with a search engine like DTSearch or Lucene.net?
You could take the approach Lucene did. First, you create a random-access in-memory stream (System.IO.MemoryStream); this stream mirrors an on-disk one, but only a portion of it (if you have the wrong portion, load another one from disk). This does cause one headache: you need a file-mappable format for your dictionary. Wikipedia has a description of the paging technique.
On the file-mappable scenario: if you open up Reflector and inspect the Dictionary class, you will see that it comprises buckets. You could probably use each of these buckets as a page in a physical file (that way inserts are faster). You can then also loosely delete values by simply writing an "item x deleted" record to the file, and every so often clean the file up.
By the way, buckets hold values with identical hashes. It is very important that the values you store override the GetHashCode() method (the compiler will then warn you about Equals(), so override that as well). You will get a significant speed increase in lookups if you do this.
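A minimal sketch of such an override, assuming the object's identity is defined by a string key (an assumption about your class):

    class ApplicationObject
    {
        public string Key;

        // Hash and compare by key content rather than by reference,
        // keeping GetHashCode() consistent with Equals().
        public override int GetHashCode()
        {
            return Key == null ? 0 : Key.GetHashCode();
        }

        public override bool Equals(object obj)
        {
            var other = obj as ApplicationObject;
            return other != null && other.Key == Key;
        }
    }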
How about using the Memory Mapped File Win32 API to transparently back your memory structure?
http://www.eggheadcafe.com/articles/20050116.asp has the PInvokes necessary to enable it.
Is the index only added to or do you remove keys from it as well?