I want to make models for my framework, written in Go, and I'm not sure how to compose them in a way that shares the common database interaction methods: save, update, delete.
I would normally do this by creating an abstract Model parent class for all concrete models, but Go doesn't have inheritance. You're supposed to use embedding and composition instead, and I don't see how I can embed a Model type and have it save the data of the type holding it.
I see the other option of creating a model type that embeds a concrete model type within it, but I don't really see an interface that would apply to all the models unless it were empty, and that brings with it the insecurity that anything can be considered a model.
What should I do?
In my projects I do something like this:
type Storable interface {
    // Init is called after unmarshalling from the database.
    Init() error

    // Destroy is called when an object is being deleted.
    // This is useful if the object needs to delete other objects,
    // change state on a remote server, etc.
    Destroy() error

    // Validate is called after Init; it helps separate initialization from
    // sanity checks (useful to detect errors before using a potentially
    // invalid object).
    Validate() error

    // Type returns the type of this object; it is stored in the database
    // by Save and Update so it can be read back out in Get.
    Type() string
}
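For illustration, a concrete model could satisfy the interface like this (a hypothetical User type, not part of the original design):

type User struct { // needs: import "errors"
    Name  string
    Email string
}

func (u *User) Init() error    { return nil } // nothing to set up after unmarshalling
func (u *User) Destroy() error { return nil } // no dependent objects to clean up
func (u *User) Type() string   { return "user" }

// Validate catches obviously broken objects before they are used.
func (u *User) Validate() error {
    if u.Email == "" {
        return errors.New("user has no email")
    }
    return nil
}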
If you're working with an SQL database, you could do something like this:
// Schema maps field names to their Go types.
type Schema map[string]reflect.Type

type SQLStorable interface {
    Storable
    Schema() Schema
}
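Continuing the hypothetical User from above, its Schema() might look like this:

// Schema describes the fields the SQL layer should read and write.
func (u *User) Schema() Schema {
    return Schema{
        "name":  reflect.TypeOf(""),
        "email": reflect.TypeOf(""),
    }
}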
Then, in the database layer, I have functions like this:
func Get(id string) (Storable, error)
func Save(s Storable) error
func Update(id string, s Storable) error
func Delete(id string) error

// Register ties a type name (the value returned by Type() in Storable)
// to a concrete type, so the database can reconstruct objects of that type.
func Register(typ string, t reflect.Type)
I keep a cache of the objects in the database, a map[string]Storable. This lets me implement caching logic to reduce lookup times (no need to reconstruct an object each time it is read from the database).
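Sketched roughly (fetchRow and populate are hypothetical helpers standing in for the SQL and reflection plumbing), Register and a cache-backed Get could fit together like this:

var (
    registry = map[string]reflect.Type{} // Type() string -> concrete type
    cache    = map[string]Storable{}     // id -> already-constructed object
)

func Register(typ string, t reflect.Type) {
    registry[typ] = t
}

func Get(id string) (Storable, error) { // needs: import "fmt" and "reflect"
    if s, ok := cache[id]; ok {
        return s, nil // cache hit: no reconstruction needed
    }
    typ, raw, err := fetchRow(id) // hypothetical: reads the type tag and raw data
    if err != nil {
        return nil, err
    }
    t, ok := registry[typ]
    if !ok {
        return nil, fmt.Errorf("unregistered type %q", typ)
    }
    s := reflect.New(t).Interface().(Storable) // t is the struct type; New yields a pointer
    if err := populate(s, raw); err != nil {   // hypothetical: reflection-based field fill
        return nil, err
    }
    if err := s.Init(); err != nil {
        return nil, err
    }
    if err := s.Validate(); err != nil {
        return nil, err
    }
    cache[id] = s
    return s, nil
}

A real version would also guard the two maps with a mutex and give the cache an eviction policy.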
In my project, I have lots of packages that need to talk with objects from other packages. Since managing dependency chains would be a nightmare, I've set up a messaging system that uses the database:
type Message map[string]interface{}

func Send(id string, m Message)
And I've added a Receive method to Storable that takes a Message and returns an error. This has saved many headaches so far and has led to a more pluggable design.
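Send then reduces to a lookup and a dispatch; a sketch, assuming Receive(Message) error has been added to the Storable interface:

func Send(id string, m Message) { // needs: import "log"
    s, err := Get(id) // reuses the cache-backed lookup sketched above
    if err != nil {
        log.Printf("send to %s failed: %v", id, err)
        return
    }
    if err := s.Receive(m); err != nil {
        // errors could be logged, or routed back to the sender as another Message
        log.Printf("object %s rejected message: %v", id, err)
    }
}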
I'm not sure if this is the "Go way", but it avoids the idea of inheritance and solves the problem. In the database logic, I use tons of reflection to grab the data from the database and populate an object with it. It leads to some unfortunate type assertions, but I guess that can't really be helped when trying to keep things abstract.
Related
I am building a simple rate limiter to train my OOP skills, and I am having some doubts regarding composition and ORM.
I have the following code:
interface RateLimiterService {
    void hit(String userid, long timestamp) throws UserDoesNotExistEx, TooManyHitsEx; // Option A
    SingleRateLimiter getUser(String userid) throws UserDoesNotExistEx;               // Option B
    Optional<SingleRateLimiter> getUser(String userid);                               // Option C
}
class LocalRateLimiterService implements RateLimiterService {
    // Uses a hash table userid -> SingleRateLimiter
}
interface SingleRateLimiter {
    void hit(long timestamp) throws TooManyHitsEx;
}
class TimestampListSRL implements SingleRateLimiter {
    // Uses a list to store the timestamps and purges the expired ones on each call
}

class TokenBucketSRL implements SingleRateLimiter {
    // Uses the token bucket approach
}
My doubts are:
1. Which option should I use for the RateLimiterService interface?
Option A is usually called "method forwarding" or "delegation" (it is also what the Law of Demeter asks for). It protects the composed object by exposing only the intended methods, and possibly by adding some extra validation logic before forwarding the call. It therefore seems like a good solution when that protection is needed. When it is not (as in my example), this option creates a lot of redundant repetition that adds nothing useful.
Option B breaks encapsulation in a way, but it avoids the repeated methods (the DRY principle). By picking A or B you always end up breaking some well-known principle or good practice. Is there another option?
Option C is the same as B but returns an Optional instead of throwing an exception. Which approach is considered better? (See the sketch after these questions.)
2. If the classes that implement RateLimiterService had a single composed SingleRateLimiter instead of a collection of them (that doesn't make much sense in this case, but I'm trying to be generic for situations where the composed object is not a collection), would the best option change from the one in 1.?
3. If I wanted to add a database to this system, what would be the best approach to talk to the database?
Creating a DBRateLimiterService class that implements RateLimiterService and holds a private connection to the database (basically a DAO)? In that case, the class knows nothing about the inner SingleRateLimiters besides the userid, since multiple implementations are available/possible. How can I take this approach without changing the current OOP architecture?
In addition, I would need to create a DAO for each SingleRateLimiter implementation too, right? The SingleRateLimiter is not a simple model object with only getters and setters, so it should also be a DAO, right? Its hit method must be implemented as a transaction in most (if not all) cases. If this is the right approach, how can the two DAOs operate together and map to the same database table?
What other options could serve here?
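To make the trade-off in question 1 concrete, here is a rough sketch of options A and C in Go (hypothetical names; error values and the comma-ok idiom stand in for the checked exceptions and the Optional):

package ratelimit

import "errors"

var ErrUserDoesNotExist = errors.New("user does not exist")

type SingleRateLimiter interface {
    // Hit records a request at the given timestamp and returns an error
    // when the user has made too many hits.
    Hit(timestamp int64) error
}

type LocalRateLimiterService struct {
    limiters map[string]SingleRateLimiter // userid -> limiter
}

// Option A: method forwarding. Only the intended operation is exposed.
func (s *LocalRateLimiterService) Hit(userID string, timestamp int64) error {
    rl, ok := s.limiters[userID]
    if !ok {
        return ErrUserDoesNotExist
    }
    return rl.Hit(timestamp)
}

// Option C: expose the composed object; the caller handles absence via
// the second return value instead of an exception.
func (s *LocalRateLimiterService) GetUser(userID string) (SingleRateLimiter, bool) {
    rl, ok := s.limiters[userID]
    return rl, ok
}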
I currently have a C# WinForms application in which you enter data that is ultimately relational. The amount of data being stored isn't huge. The original version used SQL CE to store the information; however, I found it to be quite slow. Also, I wanted to be able to save application files using my own extension.
I changed my approach to basically keep my data loaded in memory using class objects. To save, I simply serialize everything using ProtoBuf and deserialize when opening a file. This approach is lightning fast, and changes are never persisted until the user clicks Save. However, I find it a little cumbersome to query my hierarchical data. I query it using LINQ to Objects. ClassA has a GUID key, and I can reference ClassA in ClassB via that GUID. However, I can't really do an easy SQL-join-type query to get ClassB properties along with ClassA properties. I get around it by creating a navigation property on ClassB that simply returns ClassA via a LINQ query on the GUID, but this results in a lot of collection scanning.
What options are out there that give me fast, single-user, relational file storage? I would still like to work in-memory where changes aren't persisted until a user uses File|Save. I would also like to be able to continue querying the data using LINQ. I'm looking at SQLite as an option. Are there better options or approaches out there for me?
UPDATE
I was unaware of the AsReference option in the ProtoMember attribute [ProtoMember(5, AsReference = true)]. If I abandon foreign keys in my classes and simply reference the related objects, then it looks like I'm able to serialize and deserialize using ProtoBuf while keeping my object references. Thus, I can easily use Linq-To-Objects to query my objects. I need to stop thinking from the database side of things.
If you have all your objects in some sort of hierarchical structure, you can also store the exact same objects in other structures, at an overhead of 4 bytes per object (on 32-bit machines).
Assuming you have a base object like:
public class HierarchyElement
{
    public List<HierarchyElement> Children { get; set; }
    public HierarchyElement Parent { get; set; }
}
So you have the root element in a local variable, and via its Children property, the Children of those first children, and so on, it stores an unknown number of objects in a hierarchy.
However, while you are building that structure, or after deserialising it, you can add a reference to each HierarchyElement to a List (or another flat structure of your choice).
You can then run your LINQ queries against this flat list.
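The same idea sketched in Go (hypothetical names): the tree stays the primary structure, and a flat map built in one walk serves the queries.

package main

// Element mirrors the HierarchyElement above: a tree linked in both directions.
type Element struct {
    ID       string
    Parent   *Element
    Children []*Element
}

// Index walks the tree once and returns a flat view of every element,
// so lookups and scans don't have to re-walk the hierarchy each time.
func Index(root *Element) map[string]*Element {
    flat := make(map[string]*Element)
    var walk func(*Element)
    walk = func(e *Element) {
        flat[e.ID] = e
        for _, c := range e.Children {
            walk(c)
        }
    }
    walk(root)
    return flat
}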
I have started to model some city-transport data (bus lines and bus stops) for a community project. The data came to me as JSON files, and I'd like to create some classes from it, starting from the data that's already available.
There is a BusLine object, whose JSON doesn't contain information about which BusStops are related to it.
And there is a large collection of BusStop objects, one property of which is busLines, a collection of (references to) the bus lines that pass through that stop.
So far I have modelled this (C# style, but intended just for visualization at first):
public class BusLine
{
    public String code;
    public String name;
    public List<DirectPosition> route;
}

public class BusStop
{
    public String code;
    public DirectPosition location;
    public List<BusLine> busLines;
}
My doubt now is this: most probably, I'll want to know the BusStops associated with a given BusLine. I can imagine some possible ways of doing it, but I'm not at all sure how this rather trivial situation should be addressed. My naive thoughts:
Create a getStops() method that would look somewhere to check which stops exist along that route, and build such a list on the fly;
Create an explicit List<BusStop> stops property in the BusLine class (that sounds very wrong);
Eliminate containment altogether and create a third, "relation" kind of class that manages (somehow) the relations between those classes. The knowledge about those relations, extracted from the JSON files, wouldn't be stored "inside" the entities but somewhere else (sketched just below).
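A minimal sketch of that third option, in Go with made-up names: the relation lives in one place and can answer in both directions.

package transport

type BusLine struct {
    Code string
    Name string
}

type BusStop struct {
    Code string
}

// Network owns the line<->stop relation instead of either entity owning it.
type Network struct {
    stopsByLine map[string][]*BusStop // line code -> stops along it
    linesByStop map[string][]*BusLine // stop code -> lines passing through it
}

func NewNetwork() *Network {
    return &Network{
        stopsByLine: map[string][]*BusStop{},
        linesByStop: map[string][]*BusLine{},
    }
}

// Link records that a line passes through a stop, keeping both directions in sync.
func (n *Network) Link(line *BusLine, stop *BusStop) {
    n.stopsByLine[line.Code] = append(n.stopsByLine[line.Code], stop)
    n.linesByStop[stop.Code] = append(n.linesByStop[stop.Code], line)
}

func (n *Network) StopsFor(lineCode string) []*BusStop { return n.stopsByLine[lineCode] }
func (n *Network) LinesAt(stopCode string) []*BusLine  { return n.linesByStop[stopCode] }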
I am pretty sure this is a common pattern (I'd bet there's at least one design pattern for that), but my current level of knowledge gives me no clue...
Thanks for any help!
If you use an object database, what happens when you need to change the structure of your object model?
For instance, I'm playing around with the Google App Engine. While I'm developing my app, I've realized that in some cases, I mis-named a class, and I want to change the name. And I have two classes that I think I need to consolidate.
However, I don't think I can, because the name of the class is intimately tied to the datastore, and there is actual data stored under those class names.
I suppose the good thing about the "old way" of abstracting the object model from the data storage is that the storage doesn't know anything about the object model; it's just data. You can change your object model and simply load the data out of the datastore differently.
So, in general, when using a datastore that is intimate with your data model, how do you change things around?
If it's just class naming you're concerned about, you can change the class name without changing the kind (the identifier that is used in the datastore):
class Foo(db.Model):
    @classmethod
    def kind(cls):
        return 'Bar'
If you want to rename your class, just implement the kind() method as above, and have it return the old kind name.
If you need to make changes to the actual representation of data in the datastore, you'll have to run a mapreduce to update the old data.
The same way you do it in relational databases, except without a nice simple SQL script: http://code.google.com/appengine/articles/update_schema.html
Also, just like in the old days, objects missing a property don't automatically get defaults, and properties that no longer exist in the schema still hang around as phantoms in the objects.
To rename a property, I expect you can remove the old property (its phantom hangs around), add the new name, and populate the new property with a copy of the old (phantom) property's data. The rewritten object will then only have the new property.
You may be able to do it the way we are doing it in our project:
Before we update the object model (schema), we export our data to a file or blob in JSON format, using a custom export function, with a version tag on top. After the schema has been updated, we import the JSON with another custom function, which creates new entities and populates them with the old data. Of course, the import function needs to know the JSON format associated with each version number.
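A rough sketch of such a version-tagged dump, in Go with made-up names (importV1 and importV2 stand in for the per-version import functions):

package migrate

import (
    "encoding/json"
    "fmt"
)

// Export is the dump written before a schema change: a version tag on top,
// then the entities in whatever shape that version used.
type Export struct {
    Version  int               `json:"version"`
    Entities []json.RawMessage `json:"entities"`
}

// Import picks the decoder matching the version the dump was written with.
func Import(data []byte) error {
    var dump Export
    if err := json.Unmarshal(data, &dump); err != nil {
        return err
    }
    switch dump.Version {
    case 1:
        return importV1(dump.Entities)
    case 2:
        return importV2(dump.Entities)
    default:
        return fmt.Errorf("unknown export version %d", dump.Version)
    }
}

// importV1 and importV2 would create new entities and populate them with
// the old data, as described above.
func importV1(entities []json.RawMessage) error { return nil }
func importV2(entities []json.RawMessage) error { return nil }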
I am parsing XML files in my application, and I am struggling a bit with how to design this.
I allow uploading of "incomplete" trees of our XML schemas, meaning that as long as the tree is well formed, it can be any sub-tree of the root node. Most of the nodes have child nodes that contain only some text (properties), but I have not included any of that in my small example structure.
<root>
  <childFoo>
    <childBar>
    </childBar>
  </childFoo>
</root>
Any one of those nodes is allowed.
Now, I have designed an XmlInputService that has methods to parse the various nodes, and I just detect in the controller what kind of node it is and hand it over to the corresponding service method.
To keep my code DRY and clean, I reuse those methods from the higher levels. If I pass a document of type Root to the service, it parses whatever fields belong directly in the root and passes the child nodes (which represent children in my domain class structure) off to the appropriate parsing method in the service.
Now, if a user uploads XML that contains constraint violations, e.g. an element with a non-unique name, I obviously want to roll this back.
Let's say I call parseRoot() and work downwards, calling parseChildFoo().
In there I call parseChildBar() for every Bar child. If one of the Bar children cannot validate because of constraints or whatever, I obviously want to cascade the rollback of the transaction all the way up to parseRoot().
How would I achieve this?
If you have a Grails service with a method that takes care of the parsing, you should throw an exception that extends java.lang.RuntimeException from your service, so that the user can be informed that they need to modify their XML. Your controller then catches that exception and provides the user with a meaningful error message.
The rollback of any database modifications is done automatically by Grails/Spring whenever a RuntimeException is thrown from a service method.
The advantage of this approach over Victor's answer is that you don't have to write any code to roll the transaction back on failure; Grails does it for you. IMO, using the withTransaction closure inside a service method makes no sense.
More info here
Make those validity rules validation constraints on the domain objects.
When save() violates the constraints, throw an exception, catch it at the top parse level, and roll back the entire transaction.
Like:
meServiceMethod() {
    ...
    FooDomainClass.withTransaction { status ->
        try {
            parseRoot(xml)
        }
        catch (FooBarParentException e) {
            status.setRollbackOnly()
            // whatever error diagnostics
        }
    }
    ...
}
Or you can simply let the exception fly out of the service method to the controller; service methods are transactional by default.
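The same shape outside Grails, sketched in Go with database/sql and hypothetical parse functions: only the top level owns the transaction, every parser below just returns errors, and one rollback undoes the whole upload.

package parser

import "database/sql"

// ParseAndStore owns the single transaction. The parse functions below it
// just return errors, which bubble up and trigger one rollback for everything.
func ParseAndStore(db *sql.DB, doc []byte) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    if err := parseRoot(tx, doc); err != nil {
        tx.Rollback() // undoes every insert made anywhere below
        return err
    }
    return tx.Commit()
}

func parseRoot(tx *sql.Tx, doc []byte) error {
    // parse root-level fields, insert a row, then descend:
    return parseChildFoo(tx, doc)
}

func parseChildFoo(tx *sql.Tx, doc []byte) error {
    // insert foo's row, then parse every Bar child:
    return parseChildBar(tx, doc)
}

func parseChildBar(tx *sql.Tx, doc []byte) error {
    // a constraint violation here just returns an error;
    // the whole upload is then rolled back at the top.
    return nil
}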