I currently have a C# WinForms application in which you enter data that is ultimately relational. The amount of data being stored isn't huge. The original version used SQL CE to store the information; however, I found it to be quite slow. Also, I wanted to be able to save application files using my own extension.
I changed my approach to basically keep my data loaded in memory using class objects. To save, I simply serialize everything using ProtoBuf and deserialize when opening a file. This approach is lightning fast, and changes are never persisted until a user clicks Save. However, I find it a little cumbersome to query my hierarchical data. I query data using Linq-to-Objects. I'll have ClassA with a GUID key, and I can reference ClassA in ClassB via the GUID. However, I can't really do an easy SQL-join-type query to get ClassB properties along with ClassA properties. I get around it by creating a navigation property on ClassB that simply returns ClassA via a LINQ query on the GUID. However, this results in a lot of collection scanning.
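For illustration, the navigation property pattern looks roughly like this (Repository.ClassAs is a hypothetical stand-in for however the in-memory collections are held):

public class ClassA
{
    public Guid Id { get; set; }
}

public class ClassB
{
    public Guid ClassAId { get; set; }

    // Navigation property: resolves the GUID with a linear scan on every access.
    public ClassA ClassA
    {
        get { return Repository.ClassAs.FirstOrDefault(a => a.Id == ClassAId); }
    }
}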
What options are out there that give me fast, single-user, relational file storage? I would still like to work in-memory where changes aren't persisted until a user uses File|Save. I would also like to be able to continue querying the data using LINQ. I'm looking at SQLite as an option. Are there better options or approaches out there for me?
UPDATE
I was unaware of the AsReference option in the ProtoMember attribute: [ProtoMember(5, AsReference = true)]. If I abandon foreign keys in my classes and simply reference the related objects, then it looks like I'm able to serialize and deserialize using ProtoBuf while keeping my object references. Thus, I can easily use Linq-to-Objects to query my objects. I need to stop thinking from the database side of things.
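A minimal sketch of what that looks like with protobuf-net (the member numbers and class names here are just for illustration):

[ProtoContract]
public class ClassA
{
    [ProtoMember(1)]
    public string Name { get; set; }
}

[ProtoContract]
public class ClassB
{
    // AsReference = true serializes a reference to the ClassA instance rather than
    // an inline copy, so object identity survives the round trip.
    [ProtoMember(1, AsReference = true)]
    public ClassA Related { get; set; }
}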
If you have all your objects in some sort of hierarchical structure, you can also store the exact same objects in other structures at an overhead of 4 bytes per object (on 32-bit machines).
Assuming you have a base object like:
public class HierarchyElement
{
    public List<HierarchyElement> Children { get; set; }
    public HierarchyElement Parent { get; set; }
}
So you have the root element in a local variable, and via its Children property (and the Children property of those first children, and so on) it stores an unknown number of objects in a hierarchy.
However, while you are building that object, or after deserialising it, you can add a reference to each HierarchyElement to a List (or another flat structure of your choice).
You can then use this flat list to do your Linq queries against.
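A minimal sketch of that idea (the Register helper and root variable are illustrative):

// Flat index maintained alongside the tree.
var allElements = new List<HierarchyElement>();

void Register(HierarchyElement element)
{
    allElements.Add(element);
    if (element.Children != null)
    {
        foreach (var child in element.Children)
            Register(child);
    }
}

// After deserialising, walk the tree once to fill the flat list:
Register(root);

// Then query the flat list directly instead of walking the hierarchy:
var roots = allElements.Where(e => e.Parent == null).ToList();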
I'm new to GAE and I'm still trying to figure things out. We're developing an Android app which uses Cloud Datastore to store images, videos, text, audio, etc. So we now have over 15 types of content objects.
I've been modelling each type of object as a distinct ndb Model class, but I'm wondering if this kind of design could affect performance.
Specifically, wouldn't it be better to write a simple class (e.g. ContentObject) which simply had a content_type and a few generic fields such as string, number, and blob?
I guess I'd go for the latter if I had to worry about creating/maintaining tables (or simply knowing that there are regular db tables behind the scenes).
I really like the first option, but I had to ask, just in case.
There are no performance differences to worry about between the two approaches.
With dedicated models you'll have to write a bit more code, since each model needs to be handled separately. But it's simpler code, especially if you eventually have properties that only exist for some entities or are handled differently, which would require conditional logic with a generic model.
Building queries is also simpler with dedicated models if there are property differences. Using a single model may require filling in unused properties (maybe with default values) if they are used for sorting or filtering query results, since entities with missing properties aren't indexed by the respective properties and so won't show up in the results.
On the other hand, you'll need separate queries for each model; you can't obtain results for different kinds in the same query. And you'll need to maintain separate composite indexes for each kind (with a total limit of 200 such indexes per application).
If you're worrying about code duplication, which could also be a reason for which you'd consider a shared model, it's also possible to combine the common properties in a single ndb model class, with a single/common implementation for handling those common properties, and inherit that class in dedicated subclasses handling the differences. Something like this:
class Content(ndb.Model):
    type = ndb.StringProperty()  # not really needed, cls._get_kind() can be used instead
    blob = ndb.StringProperty()
    # other generic/common content properties and related methods

class Video(Content):
    has_cc = ndb.BooleanProperty()
    # other video-specific content properties and related methods
But this is just an implementation approach, from the datastore perspective you're still using dedicated models - in the above example a video entity will have a Video kind, not a Content kind.
There are no tables with the datastore; the only thing shared between entities of the same kind is their ndb model (which is specific to the more performant ndb client library; other client libraries don't have one) and the index definitions.
I am working on a game that is intended to be played offline. In the past I had done games which required an online connection. We would store all relevant data online in a database, and I would fetch the database data, parse it into objects via JSON, and access them as needed.
For my offline environment, I'm not sure how I can best replicate storing all the types of game data. I really preferred the organization of the database, and I don't want to create a whole bunch of game files and variables to manage everything. I'd prefer to mimic having a database, only it compiles with the game.
I considered using an Excel file, but I feel that would be too easy to hack.
Do you have a suggestion on how I can mimic an offline database?
My specific engine is with Unity3D and I'm using C#.
Thanks
You can use a local database technology; these are used just like an online database, but the data is stored in a file on the local machine. If you're in .NET, two come to mind:
SQLite - http://www.sqlite.org/
This is the de facto local database technology; it's open source and has very widespread use. There is even a library to connect Unity to it: https://github.com/Busta117/SQLiteUnityKit
SQL CE
This is Microsoft's single-file database. It is very similar in features to SQL Server; it's not open source, but it has built-in drivers already in the .NET Framework, so it may be simpler to use. The issue with this could be if you want your game to run across platforms; take a look here:
http://answers.unity3d.com/questions/26118/can-you-use-sql-compact-35-with-unity.html
I would recommend going with SQLite, as it seems there is more support for it in Unity.
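For a rough idea of what basic use looks like from C# via Mono.Data.Sqlite (one common way to talk to SQLite from Unity; the table and file name here are made up):

using Mono.Data.Sqlite;
using UnityEngine;

// Open (or create) a database file in Unity's writable data folder.
var connectionString = "URI=file:" + Application.persistentDataPath + "/game.db";
using (var connection = new SqliteConnection(connectionString))
{
    connection.Open();
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "CREATE TABLE IF NOT EXISTS scores (name TEXT, value INTEGER)";
        command.ExecuteNonQuery();
    }
}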
Unity can do this for you in the form of PlayerPrefs. According to the Unity documentation, it:
Stores and accesses player preferences between game sessions.
However, it can be and often is used to store things such as highscores, tutorial information, and various other offline data.
Simple example:
// this gets the value with the key "score"
// defaults to 0 if it doesn't exist
PlayerPrefs.GetInt("score");
// this sets the value with the key "score"
// previous values will be overwritten
PlayerPrefs.SetInt("score", 9001);
Be aware that the values are not encrypted in any form automatically; however, you can wrap the functions as follows:
public static int GetIntData(string key, int defaultValue)
{
    if (PlayerPrefs.HasKey(key))
    {
        return YourDecryptionMethod(PlayerPrefs.GetInt(key));
    }
    else
    {
        return defaultValue;
    }
}

public static void SetIntData(string key, int value)
{
    PlayerPrefs.SetInt(key, YourEncryptionMethod(value));
}
You could take this even further and encrypt the keys if you wanted to. However, for very large data sets, you should consider using a database instead (see this answer).
Additional
If you like storing data as JSON, XML or some other format, there's nothing to stop you using one of the many parsers available for popular data formats with the same encrypt/decrypt wrapper solution.
For example if you're using C# and want to store data as XML, you can use System.Xml.
Take a look here for information on using external DLLs in your project, should you choose not to go with a Mono/.NET class.
Avoid using PlayerPrefs for game state that you are concerned about being easily hacked. PlayerPrefs are stored in plain text; they really should only be used for storing configuration-related values.
If you are looking for a native way to handle it (i.e. not SQLite), you can use regular .NET file I/O. Simply have your data structure in a serializable POCO object model. When you save, you can use something like:
BinaryFormatter bf = new BinaryFormatter();
FileStream file = File.Open(Application.persistentDataPath + "/savegame.dat", FileMode.Create);
bf.Serialize(file, mySerializablePoco);
file.Close();
That will write a binary file to disk that can be deserialized during a load process. You can add encryption if you want as well.
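A matching load sketch (MySerializablePoco stands in for your own [Serializable] type):

BinaryFormatter bf = new BinaryFormatter();
FileStream file = File.Open(Application.persistentDataPath + "/savegame.dat", FileMode.Open);
MySerializablePoco data = (MySerializablePoco)bf.Deserialize(file);
file.Close();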
If you don't care about modifying the data, but just want the structure of setting up predefined objects, you can use ScriptableObject. I did one project where we did not want to use SQLite, so we made something like:
public class MyRepository : ScriptableObject {
    public List<Car> Cars;
    public List<Driver> Drivers;
}

[Serializable]
public class Car {
    public string Manufacturer;
}

// etc.
A ScriptableObject is created and designed at edit time. You actually create it with some editor code, and it will write out an *.asset file into your project. At that point you can Resources.Load it at runtime or wire it up to a public variable on a MonoBehaviour script.
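The editor code can be as small as this (the menu path and asset path are made up; it needs to live in an Editor folder):

using UnityEditor;
using UnityEngine;

public static class MyRepositoryCreator {
    [MenuItem("Assets/Create/My Repository")]
    public static void CreateAsset() {
        // Create an instance of the ScriptableObject and save it as an *.asset file.
        var repo = ScriptableObject.CreateInstance<MyRepository>();
        AssetDatabase.CreateAsset(repo, "Assets/MyRepository.asset");
        AssetDatabase.SaveAssets();
    }
}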
Check out: http://unity3d.com/learn/tutorials/modules/beginner/live-training-archive/scriptable-objects
I have started to model some city-transport data (bus lines and bus stops) for a community project. The data arrived as JSON files, and I'd like to create some classes from it, considering just the already-available data at first.
There is a BusLine object, whose JSON doesn't contain information about which BusStops are related to it.
And there is a large collection of BusStop objects, of which one property is BusLines, a collection of (references to) bus lines which pass by that stop.
So far I have modelled this (C# style, but intended just for visualization at first):
public class BusLine
{
    public String code;
    public String name;
    public List<DirectPosition> route;
}

public class BusStop
{
    public String code;
    public DirectPosition location;
    public List<BusLine> busLines;
}
My doubt, from here, is this: most probably, I'll want to know the BusStops associated with a given BusLine. I imagine some possible ways of doing it, but am not at all sure how this rather trivial situation should be addressed. My naive thoughts:
Create a getStops() method that would look somewhere to check which stops exist along that route, and create such a list on the fly;
Create an explicit List<BusStop> stops property in BusLine class (that sounds very wrong);
Eliminate containment altogether and create a third, "Relation" kind of class that would manage (somehow) the relations between those classes. That would mean the knowledge about those relations, extracted from the JSON files, wouldn't be stored "inside" the entities, but somewhere else (a rough sketch of this follows below).
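For what it's worth, here is what that third option might look like (all names hypothetical):

public class BusNetwork
{
    private readonly Dictionary<BusLine, List<BusStop>> stopsByLine =
        new Dictionary<BusLine, List<BusStop>>();

    // Record one line/stop relation extracted from the JSON files.
    public void Link(BusLine line, BusStop stop)
    {
        List<BusStop> stops;
        if (!stopsByLine.TryGetValue(line, out stops))
        {
            stops = new List<BusStop>();
            stopsByLine[line] = stops;
        }
        stops.Add(stop);
    }

    public IEnumerable<BusStop> GetStops(BusLine line)
    {
        List<BusStop> stops;
        return stopsByLine.TryGetValue(line, out stops)
            ? (IEnumerable<BusStop>)stops
            : Enumerable.Empty<BusStop>();
    }
}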
I am pretty sure this is a common pattern (I'd bet there's at least one design pattern for that), but my current level of knowledge gives me no clue...
Thanks for any help!
If you use an object database, what happens when you need to change the structure of your object model?
For instance, I'm playing around with the Google App Engine. While I'm developing my app, I've realized that in some cases, I mis-named a class, and I want to change the name. And I have two classes that I think I need to consolidate.
However, I don't think I can, because the name of the class is intimately tied to the datastore, and there is actual data stored under those class names.
I suppose the good thing about the "old way" of abstracting the object model from the data storage is that the data storage doesn't know anything about the object model; it's just data. So, you can change your object model and just load the data out of the datastore differently.
So, in general, when using a datastore which is intimate with your data model...how do you change things around?
If it's just class naming you're concerned about, you can change the class name without changing the kind (the identifier that is used in the datastore):
class Foo(db.Model):
    @classmethod
    def kind(cls):
        return 'Bar'
If you want to rename your class, just implement the kind() method as above, and have it return the old kind name.
If you need to make changes to the actual representation of data in the datastore, you'll have to run a mapreduce to update the old data.
The same way you do it in relational databases, except without a nice simple SQL script: http://code.google.com/appengine/articles/update_schema.html
Also, just like in the old days, objects without properties don't automatically get defaults, and properties that don't exist in the schema still hang around as phantoms in the objects.
To rename a property, I expect you can remove the old property (the phantom hangs around), add the new name, and populate it with a copy of the data from the old (phantom) property. The rewritten object will then only have the new property.
You may be able to do it the way we are doing it in our project:
Before we update the object model (schema), we export our data to a file or blob in JSON format using a custom export function, with a version tag on top. After the schema has been updated, we import the JSON with another custom function which creates new entities and populates them with the old data. Of course, the import function needs to know the JSON format associated with each version number.
I've got a Silverlight application that requires quite a bit of data to operate and it requires it all up-front. It's using RIA Services (and the Entity Framework) to get all that information. It takes 10-15 seconds to get all the data, but the data only changes about once a month.
What I'd like to do is toss that data into Isolated Storage so that the next time they load up the app, I can just grab it, see if its updated, and if not use that data they've already got and save a ton of time sending things over the wire.
The structure of the graph I need to store is (more-or-less) a typical tree structure. A model has components, a component has features, a feature has options. The issue that I'm coming up against is that when I ask to have this root entity (the model) serialized, it's only serializing the top-level object and ignoring all of the "child" objects.
Does anyone know of a convenient way to get it to serialize/deserialize the whole graph?
If RIA Services is the problem, then I might have a hint.
To transfer collections of objects through RIA you need to do a little tweaking of the domain model.
Let's say you have a receipt with a list of ReceiptEntries. Then you'd do this:
public class Receipt
{
    public Guid Id;
    public List<ReceiptEntry> Entries;
}

public class ReceiptEntry
{
    public Guid ReceiptId;
}
You have to tell RIA how to associate these objects:
public class Receipt
{
    public Guid Id;

    [Include()]
    [Composition()]
    [Association("ReceiptEntries", "Id", "ReceiptId")]
    public List<ReceiptEntry> Entries;
}
Then it will serialize the list of objects.
I might write weird syntax because I'm used to VB.NET, or have some minor faults in the sample code; I just threw it up. But if the problem is that RIA doesn't send over the objects the way it should, then you should investigate this scenario, if you didn't already.