Is it possible to set the prefetch cache manually when saving a model object? - django-models

I want to minimize SQL SELECTs.
Let's say I save an object called "Asimov" of a model Author. Another model called Publication has a foreign key relationship to Author. I then retrieve the publications for that author using
author.publication_set.all()
and get an empty queryset. Altogether this results in an INSERT (from the save) and a SELECT (from the publication lookup).
At the moment I created the author, however, I already knew there were no existing Publication instances for this author. So I would have liked to somehow set the prefetch cache of the author's publication_set to an empty queryset so that the SELECT would never happen.
I suspect it is possible to set the prefetch cache manually, but I haven't tried because I don't know how.
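For what it's worth, this is possible, but only through private API. Below is a minimal sketch, assuming the Author/Publication models from the question; note that _prefetched_objects_cache is an undocumented Django internal, and the dictionary key it expects has varied between Django versions (the related query name, e.g. "publication", in some releases; the accessor name "publication_set" in others), so verify against your version before relying on it.

    author = Author(name="Asimov")
    author.save()  # the unavoidable INSERT

    # Prime the (private) prefetch cache with an empty queryset so that
    # publication_set.all() is answered from the cache with no SELECT.
    # ASSUMPTION: "publication" is the right cache key for your Django
    # version; on some versions it is "publication_set" instead.
    author._prefetched_objects_cache = {"publication": Publication.objects.none()}

    publications = list(author.publication_set.all())  # [] -- no query issued

A less fragile alternative is to bypass the manager entirely and use Publication.objects.none() wherever you would have called author.publication_set.all(), since none() never touches the database.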

Related

Suitable mechanism to relate JSON data to database?

So I'm working on a side project involving the Steam API, and I am not sure whether this would be a suitable and effective way to do what I want.
Basically I have a User model with an inventory field relating to a certain game. I fetch the user's game inventory when they log in and store the JSON result in this single field. I have another model which contains the item schema for every item in the game.
Basically I want to relate the information stored in the database for a particular item (the item model) to that item's information in the JSON result (the inventory items).
I previously tried saving the inventory in a separate table and created a relation between the user, the user inventory, and the items in this table. The problem with this was ensuring that the inventory stored in my database was exactly identical to the one fetched. Under this scenario I would basically need to delete every inventory item (multiple copies of the same item may exist) and then re-insert the inventory items retrieved at login.
My question is this: is this an appropriate solution, bad practice, or is there a better way to do what I need?
Thanks
Are you sure that you have to delete all of the inventory items and re-add the new ones? Could you iterate through each of the inventory entries and add/update/delete as needed?
I think that the JSON solution would work. It's probably a matter of preference at this point. I always feel like it's nicer to stay with a traditional relational data structure until you really need to optimize. But I don't see why the json wouldn't work. You can create a utility function on your user model that extracts the item ids and fetches all of the related records.
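As a rough sketch of that utility function (the model and field names here, such as inventory_json and defindex, are invented for the example, and the inventory JSON is assumed to be a list of dicts keyed by the item's schema id):

    import json

    from django.db import models

    class Item(models.Model):
        # assumed: the game's schema id, unique per item type
        defindex = models.IntegerField(unique=True)
        name = models.CharField(max_length=200)

    class User(models.Model):
        # raw JSON inventory stored at login, as described in the question
        inventory_json = models.TextField()

        def inventory_items(self):
            """Pair each inventory entry with its Item schema record,
            using a single SELECT for all items."""
            entries = json.loads(self.inventory_json)
            ids = {entry["defindex"] for entry in entries}
            items = {i.defindex: i for i in Item.objects.filter(defindex__in=ids)}
            return [(entry, items.get(entry["defindex"])) for entry in entries]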

Looking for Denormalization Advice for Google App Engine

I am working on a system, which will run on GAE, that will have several related entities, and I am not sure of the best way to store the data. This post is a request for advice from others who may have similar experience.
The system will have users, with profile data and an image. Those users will be able to create "events" and add journal entries to them. For the purposes of the system, an "event" will likely have 1 or 2 journal entries, and anything over 10 would likely never happen. Other users will be able to add comments to users' entries as well, and popular entries may have hundreds or even thousands of comments. When a random visitor uses the system, they should be able to see the latest events ("latest" being defined as those with the most recent journal entries), search by tag, and perform a very basic text search. Upon selecting an event to view, it should be displayed with all journal entries and all user comments, with user images alongside the comments. A user should also have a kind of self-admin page, to view/modify/delete their events and to view/modify/delete comments they have made on other events.
Doing all this on a normal RDBMS would just be queries with some big joins across several tables. On GAE it would obviously need to work differently. Here are my initial thoughts on the design of the entities:
Event entity - id, name, timestamp, list property of tags, view count, creator's username, creator's profile image id, number of journal entries it contains, number of total comments it contains, timestamp of last update to contained journal entries, list property of index words for search (built/updated from the text of contained journal entries)
JournalEntry entity - timestamp, journal text, name of event, creator's username, creator's profile image id, list property of comments (containing commenter username and image id)
User entity - username, password hash, email, list property of subscribed events, timestamp of create date, image id, number of comments posted, number of events created, number of journal entries created, timestamp of last journal activity
UserComment entity - username, id of event commented on, title of event commented on
TagData entity - tag name, count of events with tag on them
So, I'd like to hear what people here think about the design and what changes should be made to help it scale well. Thanks!
Rather than store Event.id as a property, use the id automatically embedded in each entity's key, or set unique key names on entities as you create them.
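For example, with the Python datastore API (a sketch; the Event model and its property are illustrative):

    from google.appengine.ext import db

    class Event(db.Model):
        name = db.StringProperty()

    # Option 1: let the datastore assign a numeric id, embedded in the key.
    event = Event(name="Launch party")
    event.put()
    event_id = event.key().id()

    # Option 2: choose a unique key name yourself at creation time.
    event = Event(key_name="launch-party", name="Launch party")
    event.put()
    same_event = Event.get_by_key_name("launch-party")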
You have lots of options for modeling the relationship between Event and JournalEntry: you could use a ReferenceProperty, you could parent JournalEntries to Events and retrieve them with ancestor queries, or you could store a list of JournalEntry key ids or names on Event and retrieve them in batch with a key query. Try some things out with realistically-distributed dummy data, and use appstats to see what works best.
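To make the second option concrete, a parent/ancestor-query sketch (property names assumed):

    from google.appengine.ext import db

    class Event(db.Model):
        name = db.StringProperty()

    class JournalEntry(db.Model):
        text = db.TextProperty()
        created = db.DateTimeProperty(auto_now_add=True)

    event = Event(key_name="launch-party", name="Launch party")
    event.put()

    # Parent each JournalEntry to its Event...
    JournalEntry(parent=event, text="First entry").put()

    # ...and fetch an Event's entries with an ancestor query.
    entries = JournalEntry.all().ancestor(event).order("created").fetch(10)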
UserComment references an Event, while JournalEntry references a list of UserComments, which is a little confusing. Is there a relationship between UserComment and JournalEntry, or just between UserComment and Event?
Persisting so many counts is expensive. When I post a comment, you're going to write a new UserComment entity and also update my User entity and a JournalEntry entity and an Event entity. The number of UserComments you expect per Event makes it unwise to include everything in the same entity group, which means you can't do these writes transactionally, so you'll do them serially, and the entities might be stored across different network nodes, making the whole operation slow; and you'll also be open to consistency problems. Can you do without some of these counts and consider storing others in memcache?
When you fetch an Event from the datastore, you don't actually care about its list of search index words, and retrieving and deserializing them from protocol buffers has a cost. You can get around this by splitting each Event's search index words into a separate child EventIndex entity. Then you can query EventIndex on your search term, fetch just the EventIndex keys for EventIndexes that match your search, derive the corresponding Events' keys with key.parent(), and fetch the Events by key, never paying for the retrieval or deserialization of your search index word lists. Brett Slatkin explains this strategy here at 14:35.
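A sketch of that relation-index pattern (entity and property names assumed):

    from google.appengine.ext import db

    class Event(db.Model):
        name = db.StringProperty()

    class EventIndex(db.Model):
        # Child entity of an Event: holds only the search words, so a fetch
        # of the Event itself never pays to deserialize this list.
        index_words = db.StringListProperty()

    def search_events(term, limit=20):
        # Keys-only query: no EventIndex payload is retrieved at all.
        keys = EventIndex.all(keys_only=True).filter(
            "index_words =", term).fetch(limit)
        # Each EventIndex key's parent is the Event it indexes.
        return db.get([key.parent() for key in keys])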
Updating Event.viewCount will fail if you have a lot of views for any Event in rapid succession, so you should try out counter sharding.
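The idea of counter sharding is to spread increments across N entities so no single entity exceeds its sustainable write rate; here is a minimal sketch (the model name and shard count are illustrative):

    import random

    from google.appengine.ext import db

    NUM_SHARDS = 20  # assumption: tune this to your observed write rate

    class ViewCountShard(db.Model):
        event_key_name = db.StringProperty(required=True)
        count = db.IntegerProperty(default=0)

    def increment_view_count(event_key_name):
        """Update one randomly chosen shard, so concurrent views contend
        on 1/NUM_SHARDS of the entities instead of a single counter."""
        shard_key = "%s-%d" % (event_key_name,
                               random.randint(0, NUM_SHARDS - 1))
        def txn():
            shard = ViewCountShard.get_by_key_name(shard_key)
            if shard is None:
                shard = ViewCountShard(key_name=shard_key,
                                       event_key_name=event_key_name)
            shard.count += 1
            shard.put()
        db.run_in_transaction(txn)

    def view_count(event_key_name):
        """Sum the shards (a value you could additionally cache in memcache)."""
        query = ViewCountShard.all().filter("event_key_name =", event_key_name)
        return sum(shard.count for shard in query)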
Good luck, and tell us what you learn by trying stuff out.

Where should I break up my user records to keep track of revisions

I am putting together a staff database and I need to be able to revise the staff member information, but also keep track of all the revisions. How should I structure the database so that I can have multiple revisions of the same user data but still be able to query against the most recent revision? I am looking at information that changes rarely, like Last Name, but for which I will need to be able to query out-of-date values. So if Jenny Smith changes her name to Jenny James, I need to be able to find the user's current information when I search against her old name.
I assume that I will need at least 2 tables, one that contains the uid and another that contains the revisions. Then I would join them and query against the most recent revision. But should I break it out even further, depending on how often the data changes or the type of data? I am looking at about 40 fields per record and only one or two fields will probably change per update. Also I cannot remove any data from the database, I need to be able to look back on all previous records.
A simple way of doing this is to add a deleted flag and instead of updating records you set the deleted flag on the existing record and insert a new record.
You can of course also write the existing record to an archive table, if you prefer. But if changes are infrequent and the table is not big I would not bother.
To get the active record, query with 'where deleted = 0'; the speed impact will be minimal when there is an index on this field.
Typically this is augmented with some other fields like a revision number, when the record was last updated, and who updated it. The revision number is very useful to get the previous versions and also to do optimistic locking. The 'who updated this last and when' questions usually come once the system is running instead of during requirements gathering, and are useful fields to put in any table containing 'master' data.
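A minimal sketch of this scheme, shown through Python's sqlite3 so it runs as-is (table and column names are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE staff (
        record_id  INTEGER PRIMARY KEY AUTOINCREMENT,
        person_id  INTEGER NOT NULL,
        last_name  TEXT NOT NULL,
        revision   INTEGER NOT NULL,
        deleted    INTEGER NOT NULL DEFAULT 0,
        updated_by TEXT,
        updated_at TEXT DEFAULT CURRENT_TIMESTAMP
    );
    CREATE INDEX idx_staff_deleted ON staff (deleted);
    """)

    def change_name(conn, person_id, new_last_name, user):
        """An 'update': retire the active row, insert the next revision."""
        record_id, revision = conn.execute(
            "SELECT record_id, revision FROM staff "
            "WHERE person_id = ? AND deleted = 0", (person_id,)).fetchone()
        conn.execute("UPDATE staff SET deleted = 1 WHERE record_id = ?",
                     (record_id,))
        conn.execute(
            "INSERT INTO staff (person_id, last_name, revision, updated_by) "
            "VALUES (?, ?, ?, ?)",
            (person_id, new_last_name, revision + 1, user))
        conn.commit()

    conn.execute("INSERT INTO staff (person_id, last_name, revision, updated_by) "
                 "VALUES (1, 'Smith', 1, 'import')")
    change_name(conn, 1, "James", "hr_admin")
    # active record: WHERE deleted = 0; full history: all rows for person_id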
I would use the separate table, because then you can have a unique identifier that points to all the other child records and that is also the PK of the table, which I think makes data integrity issues less likely. For instance, you have Mary Jones, who has records in the address table, the email table, the performance evaluation table, etc. If you add a change record to the main table, how are you going to relink all the existing information? With a separate history table, it isn't a problem.
With a deleted field in one table, you then have to have a non-autogenerated person id and an autogenerated record id.
You also have the possibility of people forgetting to use the where deleted = 0 clause that is needed for almost every query. (If you do use the deleted flag field, do yourself a favor and create a view with the where deleted = 0 built in, and require developers to use the view in queries, not the original table.)
With the deleted flag field you will also need a trigger to ensure one and only one record is marked as active.
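In sqlite3 syntax, for illustration, the view and the guard trigger might look like this (names invented; trigger syntax varies by RDBMS):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE staff (
        record_id INTEGER PRIMARY KEY AUTOINCREMENT,
        person_id INTEGER NOT NULL,
        last_name TEXT NOT NULL,
        deleted   INTEGER NOT NULL DEFAULT 0
    );

    -- Developers query the view, never the base table.
    CREATE VIEW staff_current AS
        SELECT * FROM staff WHERE deleted = 0;

    -- Guard: refuse a second active record for the same person.
    CREATE TRIGGER staff_one_active
    BEFORE INSERT ON staff
    WHEN NEW.deleted = 0
     AND EXISTS (SELECT 1 FROM staff
                 WHERE person_id = NEW.person_id AND deleted = 0)
    BEGIN
        SELECT RAISE(ABORT, 'person already has an active record');
    END;
    """)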
@Peter Tillemans' suggestion is a common way to accomplish what you're asking for. But I don't like it.
The structure of a database should reflect the real-world facts that are being modeled.
I would create a separate table for obsolete_employee, and just store the historical information that would need to be searched in the future. This way you can keep your real employee data table clean and keep only the old data that is necessary. This approach will also simplify reporting and other features of the application that are not related to searching historical data.
Just think of that warm feeling you'll get when you type select * from employee and nothing but current, correct goodness comes flowing back!

Entity Deletion Strategy

Say you have a ServiceCall database table that records all the service calls made to you. Each of these records has a many-to-one relationship to a Customer record, storing which customer made the service call.
Now suppose the customer has stopped doing business with you and you no longer need the customer's record in your database, or the customer's name appearing in the dropdown list when you create a new ServiceCall record.
What do you do?
Do you allow the user to delete the Customer's record from the database?
Do you set a special column IsDeleted to true for that customer's record, then make sure dropdown lists do not load records that have IsDeleted set to true? Although this keeps old records from breaking inner joins, doesn't it also prevent users from adding a new record with the same name as the old customer?
Do you disallow deletion altogether and only allow records to be 'disabled'?
Any other strategies you have used? I am guessing everyone has their own way; I just want to see your opinions.
Of course the above is quite simplified; usually a ServiceCall record will link to many other entity tables, all of which face the same problem when their records need to be deleted.
I prefer to set an IsDeleted flag; one of the benefits is that you can still report on historical information (all the data is still there).
As for the issue of not being able to insert another customer with the same name: this isn't a problem if you use an ID column (e.g. CustomerId), which is generally auto-populated.
I agree with @Tetraneutron's answer.
Additionally, you can create a VIEW that lists only the active customers, to make it more convenient to populate drop-down lists and such.
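A compact sketch of the flag-plus-view combination (sqlite3 syntax for a runnable example; all names are illustrative):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY AUTOINCREMENT,
        name        TEXT NOT NULL,
        is_deleted  INTEGER NOT NULL DEFAULT 0
    );

    CREATE TABLE service_call (
        call_id     INTEGER PRIMARY KEY AUTOINCREMENT,
        customer_id INTEGER NOT NULL REFERENCES customer (customer_id),
        notes       TEXT
    );

    -- Dropdowns and new-record forms read from here; historical reports
    -- still join service_call against the full customer table.
    CREATE VIEW active_customer AS
        SELECT customer_id, name FROM customer WHERE is_deleted = 0;
    """)

    # "Deleting" a customer keeps every historical service_call join intact.
    conn.execute("UPDATE customer SET is_deleted = 1 WHERE customer_id = ?",
                 (42,))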

How to separate automatically populated tables from manually populated tables, properly, in SQL Server?

Let's say I have the following two tables in a database:
[Movies] (Scheme: Automatic)
----------------------------
MovieID
Name
[Comments] (Scheme: Manual)
----------------------------
CommentID
MovieID
Text
The "Movies" table gets updated by a service every 10 minutes and the "Comments" table gets updated manually by the users of the database.
Normally you'd just create a simple foreign-key relationship between the two tables with cascading updates and deletes, but in this case I want to be able to keep the manually entered data even if the movie it refers to gets deleted (the update service isn't that reliable). This should only be a problem in one-to-many relationships from an automatic table to a manual table. How would you separate the manual and the automatically populated parts of the database?
I was planning to add a foreign key that doesn't maintain referential integrity and only cascades updates, not deletions. But are there any pitfalls I should be aware of with this approach, other than the fact that I might end up with some manual data that doesn't actually reference anything?
Edit / Clarification:
Just to clarify: the example tables are totally made up. In reality the DB will contain objects like servers, applications, application notes, version numbers, etc. Server-related information will be populated automatically, but some application details will be filled in manually, such as special configurations. Even if the server record gets deleted, the application notes on that server are still valuable and shouldn't be deleted.
I'd suggest you use an import table that gets updated by the service, and then populate the movies table from that. That way you get to keep movies in the movies table even after they disappear from the import, possibly tagging them as deleted or obsolete, so they're still available for historical purposes.
I think you should use a soft delete for that scenario. I don't think you want to have comments you don't know which movie they belong to.
Agreed; an example route would be to copy the movies table and add a status field indicating each record's present state (live/checking/deleted). The autoimport would then go into a temporary table; set the status of all movies to 'checking', use the temporary table to update the real movies table, setting a movie's status back to 'live' when it is found in the temporary table, and finally set any movie still marked 'checking' to 'deleted', since it wasn't found in the autoimport. At the application end, select any movie whose status is not 'deleted'.
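A sketch of that import flow, using sqlite3 (3.24+ for ON CONFLICT upserts) so it runs as-is; the question is about SQL Server, so adapt the types and the upsert (e.g. MERGE) accordingly:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE movies (
        movie_id INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        status   TEXT NOT NULL DEFAULT 'live'  -- 'live', 'checking', 'deleted'
    );
    CREATE TABLE import_movies (
        movie_id INTEGER PRIMARY KEY,
        name     TEXT NOT NULL
    );
    """)

    def run_import(conn, rows):
        """rows: (movie_id, name) pairs from the 10-minute service feed."""
        conn.executemany("INSERT INTO import_movies VALUES (?, ?)", rows)
        conn.executescript("""
            UPDATE movies SET status = 'checking';
            INSERT INTO movies (movie_id, name)
                SELECT movie_id, name FROM import_movies WHERE true
                ON CONFLICT (movie_id) DO UPDATE
                    SET name = excluded.name, status = 'live';
            UPDATE movies SET status = 'deleted' WHERE status = 'checking';
            DELETE FROM import_movies;
        """)
        conn.commit()

    # The application then selects: ... FROM movies WHERE status <> 'deleted'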
"I was planning to add a foreign-key
that isn't maintaining referencial
integrity and only cascades updates,
not deletions."
Since you appear to be using surrogate keys, the key values will never change, so cascading updates gains you nothing. Additionally, since you do not mind orphaning data, why use the referential constraint at all? You use constraints to ensure that something exists, which you do not appear to require in this situation.
