Xamarin - Storing data in smartphone memory - C#

I am working with Xamarin and I need to store data in the memory of my Android device, so that the data is still there once the game is reopened. How can I do this? Where can I find example code?

Depending on the data type, structure, and your specific needs, the approach may vary. Since we are talking about a game, you most probably need a database. Luckily, the official documentation covers this topic nicely.
Besides that, if you are using .NET Standard, take a look at Entity Framework.
P.S.: Generally I would recommend doing some research (the options above are not the only ones), comparing the existing solutions, and then deciding which way to go.
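As a minimal sketch of the database route, here is roughly what local persistence looks like with the sqlite-net-pcl package (the library the Xamarin docs use); the `GameSave` class and the file name are illustrative, not from the question:

```csharp
using System;
using System.IO;
using SQLite;

public class GameSave
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public string PlayerName { get; set; }
    public int Score { get; set; }
}

public static class SaveStore
{
    // LocalApplicationData maps to the app's private storage on Android.
    static string DbPath =>
        Path.Combine(Environment.GetFolderPath(
            Environment.SpecialFolder.LocalApplicationData), "game.db3");

    public static void Save(GameSave save)
    {
        using (var db = new SQLiteConnection(DbPath))
        {
            db.CreateTable<GameSave>(); // no-op if the table already exists
            db.Insert(save);
        }
    }

    public static GameSave Load(int id)
    {
        using (var db = new SQLiteConnection(DbPath))
        {
            db.CreateTable<GameSave>();
            return db.Find<GameSave>(id); // null if not found
        }
    }
}
```

The data survives app restarts because it lives in a file inside the app's private storage directory.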

Akavache could be a good solution. It's fairly simple and flexible too.
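For reference, Akavache's key/value API is roughly this (a hedged sketch; the application name and key are illustrative):

```csharp
using System.Collections.Generic;
using System.Reactive.Linq;
using Akavache;

public static class ScoreCache
{
    public static async System.Threading.Tasks.Task Demo()
    {
        BlobCache.ApplicationName = "MyGame";

        // Persist any serializable object under a string key.
        await BlobCache.LocalMachine.InsertObject("highScores",
            new List<int> { 100, 250, 999 });

        // Read it back after the app restarts.
        var scores = await BlobCache.LocalMachine
            .GetObject<List<int>>("highScores");

        // Flush pending writes before the app exits.
        await BlobCache.Shutdown();
    }
}
```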

Related

Can NoSQL be used on the Xbox 360 with XNA and C# or VB.NET?

I was wondering: is there a DLL library for NoSQL, written entirely in C#, that can create databases for use on the Xbox 360?
BerkeleyDB is a key-value embedded database, essentially the NoSQL equivalent of SQLite. Here is a C# tutorial.
However, I strongly suggest finding an alternative solution to this. Unless you have a ton of data, you're better off holding all your objects in RAM and persisting them on an as-needed basis with a JSON, XML, or whatever serializer you prefer. LINQ makes it incredibly easy to query in-memory objects the same way you'd query a database.
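A small sketch of that approach: query the objects in memory with LINQ and serialize the whole collection when you need to persist. The `Player` class is illustrative, and you can swap in an XML or other serializer if your target platform lacks System.Text.Json:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.Json;

public class Player
{
    public string Name { get; set; }
    public int Score { get; set; }
}

public static class PlayerStore
{
    // Query in memory with LINQ, exactly as you would query a database.
    public static IEnumerable<Player> TopPlayers(List<Player> players) =>
        players.Where(p => p.Score > 100)
               .OrderByDescending(p => p.Score);

    // Persist the whole collection on demand.
    public static void Save(List<Player> players, string path) =>
        File.WriteAllText(path, JsonSerializer.Serialize(players));

    public static List<Player> Load(string path) =>
        JsonSerializer.Deserialize<List<Player>>(File.ReadAllText(path));
}
```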
I found another possible solution that's open source:
http://ostrivdb.codeplex.com/
The reason for this is that coding XML on the Xbox by itself is painful and gets too complicated in a massive game project.
I will still accept Dharun's answer because he updated it with a C# tutorial, but I wanted to show what I was talking about.

Reuse C#-Project on iPad

I am trying to port an existing C#-WPF-Project as an iPad-App.
As far as I found out by now, the best way to go would be to use MonoTouch and reuse as much C#-Logic as possible.
As the original project is written with WPF for an actual TabletPC, my question is whether there is any way to reuse the WPF sources, or at least minimize the part I have to write again.
If there are any good alternatives to MonoTouch, I would appreciate tips too :)
UPDATE: Your comments were helpful, but are not 100% what I was looking for. MonoCross looks nice, but as far as I understand, it just "hides" the iOS-specific part. What I would really love would be a way to reuse the handwritten "special" WPF controls (or at least to minimize the work/time needed to transfer them). This would be awesome.
UPDATE 2: Maybe I should add that I would also accept some "complicated" three-step technique. For example, is there a way to translate the XAML/WPF files to HTML5 (or something equally powerful) and then use Titanium or PhoneGap? The languages and frameworks shouldn't be the big problem; I'm just trying to find a way to reuse as much as possible :)
Please see this previous question which is related and may be of interest on creating cross-platform iOS, Android and WP7 applications.
In response to your question: no, it is not possible to reuse WPF GUIs on iPad, iPhone, or Android. Only Windows Phone supports Silverlight views. To work around this you must use a Model-View-Controller architecture on all three (as iOS and Android won't support data binding via MVVM) and create separate views for each platform.
While this may sound laborious, note that if you correctly architect your application so that key business logic and presentation logic live in the controller (or service) layers, you can reuse a large proportion of your code. This is the same limitation you face when creating cross-platform code to dual-deploy to Silverlight and WPF on Windows: the XAML files often have to be specific to each framework, but the *.cs user controls, viewmodels, and code logic can often be shared.
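The sharing pattern described above can be sketched like this (all names are hypothetical; the point is that the logic lives in a shared library while each platform supplies only its view layer):

```csharp
using System.Collections.Generic;
using System.Linq;

// Shared class library: compiled for both WPF and MonoTouch targets.
public class LineItem
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

public class InvoiceController
{
    // Business logic written once, reused everywhere.
    public decimal Total(IEnumerable<LineItem> items) =>
        items.Sum(i => i.Price * i.Quantity);
}

// WPF project: a XAML view binds to the controller's output.
// MonoTouch project: a UIViewController calls the same Total() and
// assigns the result to a UILabel by hand. The views differ; the
// logic is written once.
```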
UPDATE: Following your Update(2) in question.
Yes, you can use a third party server to translate XAML-WPF-files to HTML5 - the ComponentArt Dashboard Server. This claims to translate WPF/Silverlight applications written using strict MVVM to HTML5/JS for portability across multiple devices. I can't vouch for how effective this is and I do know it is expensive, however if you are seriously stuck and want to port WPF -> HTML5 then it is worth investigating this.
Best regards,

Distributed object, large data

I have a general question about large in-memory objects and distributed computing.
I have a large object, say Class.Object, that stores a lot of data: upwards of 200,000 objects and counting. As it stands, it is a simple object created and running in memory, and the clients call into the data it holds. Because speed is also important, I'm serializing this monster to disk with the C# BinaryFormatter, then loading and running it from there. This is a WCF project, so the object stays in memory.
My question is how I should scale this across multiple servers, distributed-computing style. Is there a tool in C# for something like "database sharding"? Is there a database I can save this information to? This object isn't just a simple database table: it has references up and down the classes, everything is referenced, there are hashtables, etc. Google seems to handle this kind of monster index using "shards", splitting the data across different servers. Is there a tool and mechanism to do this in .NET, and what is the approach here? I'm using Windows Server AppFabric to keep it in memory and load it, but it seems like I need to split this monster object into pieces?
Any pointers and help is appreciated.
I personally haven't heard of any ready-to-run database sharding solutions in .NET; it would be interesting to read other answers to this question.
For general knowledge, this link may also be pretty useful:
CodeProject: distributed computing with Silverlight
An excellent article, in my opinion.
Good luck.
I guess I'll get no upvotes for this answer, but the solution to your problem is not some technical sharding trick; it's better design. If you need to keep so many objects in memory all the time, you need a really good incentive for it. Isn't it possible to load only a portion at a time? When a client "calls the data in it", the client doesn't have to get the whole monster back, does he? If not, try breaking the thing down into manageable parts that a client really needs.
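If you do end up splitting the monster, the core of hand-rolled sharding is just a stable key-to-server mapping. A minimal, purely illustrative sketch (the server names are hypothetical):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class ShardRouter
{
    static readonly string[] Shards =
        { "server-a", "server-b", "server-c" };

    public static string ShardFor(string key)
    {
        // Use a stable hash (MD5 here) rather than GetHashCode(),
        // which is not guaranteed to be identical across processes.
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(key));
            int index = BitConverter.ToInt32(hash, 0) & int.MaxValue;
            return Shards[index % Shards.Length];
        }
    }
}
```

The hard part, as the answer above implies, is not the routing but redesigning the object graph so each piece can live on one shard without chasing references across servers.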

Most efficient internal database for a media player

I'm currently learning C# and .NET (coming from a UNIX background), and have just started writing a media player. I was hoping for some suggestions on the best way to store the internal database of songs. SQL? Some kind of text file? I don't really have any experience in this area so all points will be really appreciated.
Cheers!
You should probably use SQLite, and you can use LINQ on top of it to take full advantage of .NET 3.5.
http://www.codeproject.com/KB/linq/linqToSql_7.aspx
There is also SQL Server Compact. LINQ to SQL works with this as well.
There is a whole spectrum of requirements involved here, to name a few:
multi user?
expected size(s)
do you want to store the multi media binaries as well?
for complex structured data text files won't do very well.
for storing binaries I wouldn't use XML
So it's probably going to come down to: which SQL database to use? You can search for discussions about SQLite, SQL Server Express, SQL Server CE, etc.
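Whichever engine you pick, the library side looks much the same. A minimal sketch with the System.Data.SQLite ADO.NET provider; the table layout is illustrative, not prescriptive:

```csharp
using System.Data.SQLite;

public static class Library
{
    const string ConnString = "Data Source=library.db";

    public static void Init()
    {
        using (var conn = new SQLiteConnection(ConnString))
        {
            conn.Open();
            var cmd = conn.CreateCommand();
            cmd.CommandText = @"
                CREATE TABLE IF NOT EXISTS tracks (
                    id     INTEGER PRIMARY KEY,
                    path   TEXT NOT NULL,  -- media file stays on disk
                    title  TEXT,
                    artist TEXT,
                    album  TEXT
                )";
            cmd.ExecuteNonQuery();
        }
    }
}
```

Note that only metadata goes in the database; the media files themselves stay in the filesystem.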
A more fundamental question should probably be asked before we move along toward recommending one technology over another...
That of architecture. From the brief description above, it seems like what you are building is Windows Media Player Library-like functionality. If that's the case, the suggestion of a SQL database might seem appropriate, but keeping it synchronized with the filesystem is a complication you'll need to plan for (you weren't planning on turning the media files themselves into a monolithic datastore, were you?).
If you are instead only worried about persisting playlists.... a text-based format seems appropriate.
Playlists might want to be text-based (which, to me, includes XML representations of an object graph), but library information would seem to want to be in a more robust, more queryable datastore.
An object database could also be appropriate, as it lets you work with a much more transparent view of persistence compared to other suggestions. Isolating the number of new topics you're dealing with while you learn can be an important way to manage your learning curve. db4o has a .Net variation that I haven't looked at recently.

Serializing vs Database

I believe that the best way to save your application state is to a traditional relational database, whose table structure, most of the time, pretty much represents the data model of our system plus metadata.
However other guys in my team think that today it's best to simply serialize the entire object graph to a binary or XML file.
No need to say (but I'll say it anyway) that World War 3 is raging between us, and I would like to hear your opinion on this issue.
Personally I hate serialization because:
The saved data is tied to your development platform (C# in my case). No other platform, like Java or C++, can use this data.
The entire object graph (including the whole inheritance chain) is saved, not only the data we need.
Changing the data model might cause severe backward compatibility issues when trying to load old states.
Sharing parts of the data between applications is problematic.
I would like to hear your opinion about that.
You didn't say what kind of data it is -- much depends on your performance, simultaneity, installation, security, and availability/centralization requirements.
If this data is very large (e.g. many instances of the objects in question), a database can help performance via its indexing capabilities. Otherwise it probably hurts performance, or is indistinguishable.
If your app is being run by multiple users simultaneously, and they may want to write this data, a database helps because you can rely on transactions to ensure data integrity. With file-based persistence you have to handle that yourself. If the data is single-user or single-instance, a database is very likely overkill.
If your app has its own soup-to-nuts installation, using a database places an additional burden on the user, who must set up and maintain (apply patches etc.) the database server. If the database can be guaranteed to be available and is handled by someone else, this is less of an issue.
What are the security requirements for the data? If the data is centralized, with multiple users (either simultaneous or sequential), you may need to manage security and permissions on the data. Without seeing the data it's hard to say whether it would be easier to manage with file-based persistence or a database.
If the data is local-only, many of the above questions about the data have answers pointing toward file-based persistence. If you need centralized access, the answers generally point toward a database.
My guess is that you probably don't need a database, based solely on the fact that you're asking about it mainly from a programming-convenience perspective and not a data-requirements perspective. Serialization, especially in .NET, is highly customizable and can be easily tailored to persist only the essential pieces you need. There are well-known best practices for versioning this data as well, so I'm not sure there's an advantage on the database side from that perspective.
About cross-platform concerns: If you do not know for certain that cross-platform functionality will be required in the future, do not build for it now. It's almost certainly easier overall to solve that problem when the time comes (migration etc.) than to constrain your development now. More often than not, YAGNI.
About sharing data between parts of the application: That should be architected into the application itself, e.g. into the classes that access the data. Don't overload the persistence mechanism to also be a data conduit between parts of the application; if you overload it that way, you're turning the persisted state into a cross-object contract instead of properly treating it as an extension of the private state of the object.
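The "persist only the essential pieces" point above can be sketched with .NET's opt-in `[DataContract]` serialization (the class and field names here are hypothetical):

```csharp
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class PlayerState
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Level { get; set; }

    // Not marked [DataMember]: derived/transient state is rebuilt
    // on load instead of being persisted.
    public int CachedScoreRank { get; set; }
}

public static class StateFile
{
    public static void Save(PlayerState s, string path)
    {
        var ser = new DataContractSerializer(typeof(PlayerState));
        using (var f = File.Create(path))
            ser.WriteObject(f, s);
    }
}
```

Only the attributed members reach the file, which directly addresses the "entire object graph is saved" complaint.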
It depends on what you want to serialize, of course. In some cases serialization is ridiculously easy.
(I once wrote a kind of timeline program in Java,
where you could draw, drag around, and resize objects. When you were done you could save it to a file (like myTimeline.til). At that moment hundreds of objects were saved: their positions on the canvas, their sizes, their colors, their inner texts, their special effects, ...
You could then, of course, open myTimeline.til and keep working.
All this took only a few lines of code (I just made all the classes and their dependencies
serializable), and my coding time was less than five minutes. I was astonished myself! (It was the first time I had ever used serialization.)
Working on a timeline, you could also 'Save As' different versions, and the .til files were very easy to back up and mail.
I think in my particular case it would have been a bit silly to use a database. But that's of course only for document-like structures, like Word files, to name one.)
My first point, then: there are certainly several scenarios in which databases wouldn't be the best solution. Serialization wasn't invented by developers just because they were bored.
Not true if you use XML serialization or SOAP.
Not quite relevant anymore.
Only if you are not careful; there are plenty of 'best practices' for that.
Only if you want it to be problematic; see 1.
Of course, besides speed of implementation, serialization has other important advantages, like not needing a database at all in some cases!
See this Stackoverflow posting for a commentary on the applicability of XML vs. the applicability of a database management system. It discusses an issue that's quite similar to the subject of the debate in your team.
You have some good points. I pretty much agree with you, but I'll play the devil's advocate.
Well, you could always write a converter in C# to extract the data later if needed.
That's a weak point, because disk space is cheap and the amount of extra bytes we'll use costs far less than the time we'll waste trying to get this all to work your way.
That's the way of the world. Burn the bridges and require upgrades. Convert the data, or make a tool to do that, and then no longer support the old version's way of doing it.
Not if the C# program hands off the data to the other applications. Other applications should not be accessing the data that belongs to this application directly, should they?
For transfer and offline storage, serialization is fine; but for active use, some kind of database is far preferable.
Typically (as you say), without a database, you need to deserialize the entire stream to perform any query, which makes it hard to scale. Add the inherent issues with threading etc, and you're asking for pain.
Some of your other pain points about serialization aren't all true - as long as you pick wisely. Obviously, BinaryFormatter is a bad choice for portability and versioning, but "protocol buffers" (Google's serialization format) has versions for Java, C++, C#, and a lot of others, and is designed to be version tolerant.
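As a hedged sketch of the protocol buffers route in C#, using the protobuf-net package: explicit field numbers are what give version tolerance, since readers skip fields they don't recognize. The `Document` type is illustrative:

```csharp
using System.IO;
using ProtoBuf;

[ProtoContract]
public class Document
{
    [ProtoMember(1)] public string Title { get; set; }
    [ProtoMember(2)] public string Body { get; set; }
    // A later version can add [ProtoMember(3)] without breaking
    // data written by this version.
}

public static class DocStore
{
    public static void Save(Document d, string path)
    {
        using (var f = File.Create(path))
            Serializer.Serialize(f, d);
    }

    public static Document Load(string path)
    {
        using (var f = File.OpenRead(path))
            return Serializer.Deserialize<Document>(f);
    }
}
```

The same `.proto`-compatible wire format can be read by the Java and C++ protobuf libraries, which addresses the cross-platform complaint about BinaryFormatter.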
Just make sure you have a component that handles saving/loading state with a clean interface to the rest of your application. Then whatever choice you make for persistence can easily be revisited later.
Serializing an object graph to a file might be a good quick and dirty initial solution that is very quick to implement.
But if you start to run into issues that make a database a better choice you can plug in a new version with little or no impact on the rest of the application.
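The "clean interface" idea above amounts to something like this sketch (all names are illustrative): the application depends only on an abstraction, so a file-based implementation can later be swapped for a database-backed one without touching callers.

```csharp
public interface IStateStore<T>
{
    void Save(T state);
    T Load();
}

public class FileStateStore<T> : IStateStore<T>
{
    readonly string _path;

    public FileStateStore(string path)
    {
        _path = path;
    }

    public void Save(T state)
    {
        // Serialize state to _path (JSON, binary, ...).
    }

    public T Load()
    {
        // Deserialize from _path.
        return default(T);
    }
}

// Later: class DbStateStore<T> : IStateStore<T> { ... } plugs in
// with no change to the rest of the application.
```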
Yes, probably true. The downside is that you must retrieve the whole object, which is like retrieving all rows from a table; if it's big, that is a real drawback. But if it isn't so big (and my hobby projects are not), maybe they're a perfect match?
