Porting from SQLite to Redis - database

I was using SQLite for an app in which I stored 8-10 columns, and I retrieved the data based on a combination of any number of those attributes. Now I want to port to Redis, so I was building a test app for it.
But I am unable to figure out how to design my Redis system in such a way that I can retrieve the data based on any of those attributes. Does any of you have any advice/experience?

I think the best advice is to avoid sticking to the relational model when porting something from an RDBMS to Redis. Beyond the model itself, an important difference is that you need to focus on the data access paths as well as on the data structures.
Redis does not include a query language (only commands, a la memcached), and therefore cannot reply to arbitrary queries. If an access path to the data is not part of the data structure, then the data cannot be efficiently retrieved.
Redis is not the best NoSQL store when it comes to supporting arbitrary queries. For instance, you would be better served by something like MongoDB.
Now, if you really want to implement your stuff with Redis, you could try to use a strategy similar to tagging engines. Your records can be stored in hash objects. For each column that can appear in your arbitrary queries, you build reverse indexes using sets.
For instance:
# Set up the records: one hash object per record
hmset user:1 name Bilbo type Hobbit job None
hmset user:2 name Frodo type Hobbit job None
hmset user:3 name Gandalf type Maiar job Wizard
hmset user:4 name Aragorn type Human job King
hmset user:5 name Boromir type Human job Warrior
# Set up the indexes: one set per value per field
sadd name:Bilbo 1
sadd name:Frodo 2
sadd name:Gandalf 3
sadd name:Aragorn 4
sadd name:Boromir 5
sadd type:Hobbit 1 2
sadd type:Maiar 3
sadd type:Human 4 5
sadd job:None 1 2
sadd job:Wizard 3
sadd job:King 4
sadd job:Warrior 5
# Perform a query: we want the humans who happen to be a king
# We just have to calculate the intersection of the corresponding sets
sinterstore tmp type:Human job:King
sort tmp by nosort get user:*->name get user:*->job get user:*->type
1) "Aragorn"
2) "King"
3) "Human"
By combining union, intersection, and difference operations, more complex queries can be implemented. For non-discrete values, or for range-based queries, sorted sets (zset) have to be used (and can be combined with plain sets).
This method is usually quite fast if the values are discriminant enough. Please note you do not have the flexibility of an RDBMS though (no regular expressions, no prefix search, range queries are a pain to deal with, etc.).
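For those who want to try this from code rather than redis-cli, here is a minimal sketch of the same pattern using Python and the redis-py client (the client choice, key names, and helper functions are my assumptions, not part of the answer above):

import redis

r = redis.Redis(decode_responses=True)  # assumes a local Redis server

def add_user(user_id, **fields):
    # One hash per record, plus one reverse-index set per field/value pair
    r.hset("user:%s" % user_id, mapping=fields)
    for field, value in fields.items():
        r.sadd("%s:%s" % (field, value), user_id)

def find_users(**criteria):
    # Arbitrary equality query = intersection of the per-value index sets
    keys = ["%s:%s" % (field, value) for field, value in criteria.items()]
    return [r.hgetall("user:%s" % uid) for uid in r.sinter(keys)]

add_user(4, name="Aragorn", type="Human", job="King")
add_user(5, name="Boromir", type="Human", job="Warrior")
print(find_users(type="Human", job="King"))  # [{'name': 'Aragorn', ...}]

Note that the indexes are entirely your responsibility: deleting or updating a record means removing its id from the stale index sets yourself.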

Related

Avoiding IN Filter in AppEngine Datastore

I have two Entities with me on Google DataStore:
A:
id,
name,
.... a lot of other columns
B:
id,
Key(A, <id_of_A_record>) --> Reference Property A,
URL,
size
... and more columns
I have a bunch of A ids with me and I want to query with those ids.
Now I am able to achieve this using
A_ids = [1, 2, 3, 4, 5, 6]
B.all().filter('A IN', A_ids).fetch(None)
However, if there are 6 A_ids, there are 6 db calls, therefore defeating the purpose of IN. Is there any way in which I can achieve this while avoiding the IN filter (too many DB calls)?
Thank you!
The in filter is actually just a convenience function provided by your ORM. Behind the scenes, it's doing a series of ORs to make it happen.
The filter name in (value1, ..., valueN) is converted into (name = value1) OR ... OR (name = valueN) (also a DisjunctionNode)
https://googleapis.dev/python/python-ndb/latest/query.html#google.cloud.ndb.query.FilterNode
Even OR isn't supported by the datastore itself; it's just emulated by the client library:
The nature of the index query mechanism imposes certain restrictions on what a query can do. Datastore mode queries do not support substring matches, case-insensitive matches, or so-called full-text search. The NOT, OR, and != operators are not natively supported, but some client libraries may add support on top of Datastore mode.
https://cloud.google.com/datastore/docs/concepts/queries#restrictions_on_queries
I think the client library performs ORs by launching separate queries, which is why you are seeing those 6 db calls. I don't think you'll be able to escape it without restructuring your data.
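To illustrate one such restructuring (a sketch using the google-cloud-ndb client; the b_keys back-reference property is hypothetical, not something from the question): if A keeps the keys of its B children, you can batch-fetch everything with ndb.get_multi, which issues batched RPCs instead of one query per id.

from google.cloud import ndb

class A(ndb.Model):
    name = ndb.StringProperty()
    b_keys = ndb.KeyProperty(kind='B', repeated=True)  # hypothetical back-references

class B(ndb.Model):
    url = ndb.StringProperty()
    size = ndb.IntegerProperty()

def bs_for_a_ids(a_ids):
    a_entities = ndb.get_multi([ndb.Key(A, i) for i in a_ids])  # one batched get
    b_keys = [k for a in a_entities if a for k in a.b_keys]
    return ndb.get_multi(b_keys)                                # one more batched get

The trade-off is that A must be updated whenever a B is added or removed.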

Google Datastore - Search Optimization Technique

I am dealing with a real-estate app. A Home will have typical properties like Price, Bed Rooms, Bath Rooms, SqFt, Lot size etc. Users will search for Homes, and such a query will require multiple inequality filters, like: Price between x and y, rooms greater than z, bathrooms more than p... etc.
I know that multiple inequality filters are not allowed. I also do not want to perform any filtering in my code, because I want to be able to use Cursors.
So I have come up with two solutions. I am not sure if these are right, so I wonder if the gurus can shed some light.
Solution 1: I will discretize the values of each attribute and save them in a list-field, then use IN. For example: if a home has 3 bedrooms, instead of storing beds=3, I will store beds = [1,2,3]. Now if a user searches for homes with, say, at least two bedrooms, then instead of writing the filter as beds>2, I will write it as "beds IN [2]" - and my home above [1,2,3] will qualify, and so will any home with 2 beds [1,2], 4 beds [1,2,3,4], and so on.
Solution 2: It is similar to the first one, but instead of creating a list-property, I will actually add attributes (columns) to the home. So a home with 3 bedrooms will have the following attributes/columns/properties: col-bed-1:true, col-bed-2:true, col-bed-3:true. Now if a user searches for homes with, say, at least two bedrooms, then instead of writing the filter as beds>2, I will write it as "col-bed-2 = true" - and my home will qualify, and so will any home with 2 beds, 3 beds, 4 beds, and so on.
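(For concreteness, here is a minimal sketch of Solution 1 with an ndb repeated property; the property names and threshold values are only illustrations:)

from google.cloud import ndb

class Home(ndb.Model):
    price = ndb.IntegerProperty()
    beds = ndb.IntegerProperty(repeated=True)  # 3 bedrooms stored as [1, 2, 3]

def put_home(price, bed_count):
    Home(price=price, beds=list(range(1, bed_count + 1))).put()

# "at least 2 bedrooms" becomes the equality filter beds == 2, which can be
# combined with a single real inequality (here on price) and still use cursors
query = Home.query(Home.beds == 2, Home.price >= 100000).order(Home.price)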
I know both solutions will work, but I want to know:
1. Which one is better both from a performance and google pricing perspective
2. Is there a better solution to do this?
I do almost exactly your use case with a python gae app that lists posts with housing advertisements (similar to craigslist). I wrote it in python, and searching with a filter works and is straightforward.
You should choose a language: Python, Java or Go, and then use the Google Search API (which has built-in filtering for equalities and inequalities) and build datastore indexes that you can query using the Search API.
For instance, you can use a python class like the following to populate the datastore and then use the Search API.
class Home(db.Model):
    address = db.StringProperty(verbose_name='address')
    number_of_rooms = db.IntegerProperty()
    size = db.FloatProperty()
    added = db.DateTimeProperty(verbose_name='added', auto_now_add=True)  # read-only
    last_modified = db.DateTimeProperty(required=True, auto_now=True)
    timestamp = db.DateTimeProperty(auto_now=True)
    image_url = db.URLProperty()
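Here is a hedged sketch of the Search API side (using the legacy google.appengine.api.search module; the index name and query values are assumptions, and the field names mirror the model above):

from google.appengine.api import search

index = search.Index(name='homes')

# Index one home as a search document
index.put(search.Document(doc_id='home-1', fields=[
    search.NumberField(name='price', value=450000),
    search.NumberField(name='number_of_rooms', value=3),
    search.NumberField(name='size', value=1800.0),
]))

# Multiple inequalities in one query -- something datastore queries alone cannot do
results = index.search('price >= 300000 AND price <= 500000 AND number_of_rooms >= 2')
matching_ids = [doc.doc_id for doc in results]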
I definitely think that you should avoid storing permutations, for several reasons: permutations can explode in size and make the code difficult to read. Instead, you should do like I did and find examples where someone else has already solved an equal or similar problem.
This appengine demo might help you.

Is it possible to re-order query results in memory?

and thanks in advance for any and all help!!
I'm running a query on the datastore that looks like this:
forks = Thing.query(ancestor=user.subscriber_key).filter(
    Thing.status == True,
    Thing.fork_of == thing_key,
    Thing.start_date <= user.day_threshold(),
    Thing.level.IN([1, 2, 3, 4, 5])).order(
    Thing.level)
This query works and returns the results I expect. However, I would like to sort on one additional field (Thing.last_touched). If I add this to the sort, it won't work, because Thing.last_touched is not the property to which the inequality filter is applied. I can't add an additional inequality filter, since we're only allowed one; plus it's not needed (actually, that's why Thing.level.IN is there: not needed as a filter, but required for the sort).
So, what I'm wondering is, could I run the query with the filters that I want, and then run code to sort the query results myself? I know I could pull all the parameters I want to sort and store them in dictionaries and sort them that way, but it seems to me there ought to be a way to handle this with the query.
I've searched for days for this but have had no luck.
Just in case you need it, here's the class definition of Thing:
class Thing(ndb.Model):
    title = ndb.StringProperty()
    level = ndb.IntegerProperty()
    fork = ndb.BooleanProperty()
    recursion_level = ndb.IntegerProperty()
    fork_of = ndb.KeyProperty()
    creation_date = ndb.DateTimeProperty(auto_now_add=True)
    last_touched = ndb.DateTimeProperty(auto_now=True)
    status = ndb.BooleanProperty()
    description = ndb.StringProperty()
    owner_id = ndb.StringProperty()
    frequency = ndb.IntegerProperty()
    start_date = ndb.DateTimeProperty(auto_now_add=True)
    due_date = ndb.DateTimeProperty()
One of the main reasons that Google AppEngine is so fast even when dealing with insane amounts of data is its very limited query options. All standard queries are "scans" over an index, i.e. there is some table (index) that keeps references to your actual data entries, sorted by ONE of the data's properties. So, let's say you add the following entries:
Thing A: start-date = Wednesday (I'm just going to use weekdays for simplicity)
Thing B: start-date = Friday
Thing C: start-date = Monday
Thing D: start-date = Thursday
Then, AppEngine will create an index that looks like this:
1 - Monday -> Thing C
2 - Wednesday -> Thing A
3 - Thursday -> Thing D
4 - Friday -> Thing B
Now, any query will correspond to a continuous block in this (or another) index. If you, for example, say "All Things with start-date >= Tuesday", it will return entries in row 2 through 4 (i.e. Thing A, Thing D, and Thing B in that exact order!). If you query for "< Thursday", you get 1-2. If you say "> Tuesday and <= Thursday" you get 2-3.
And if you are doing inequality filters on a different property, AppEngine will use a different index.
This is why you can only do one inequality filter, and why the sort order must also be specified by the property that you apply the inequality filter on. AppEngine is simply not designed to return items 1, 2, 4 (with a gap*) out of an index, or items 4, 2, 3 (no gap, but out of order).
So, if you need to sort your entries on a property other than the one you use for inequality filtering, you basically have 3 choices:
1. Perform your query with the inequality filter, read all results into memory, and sort them in your code afterwards (I think this is what you mean by storing them in a dictionary)
2. Perform your query WITHOUT the inequality filter, but sorted on the right property. Then, as you loop over the returned entries, simply check the inequality yourself and drop the ones that don't match
3. Perform your query with the inequality filter and just return the items in the wrong order, and let the client application worry about sorting them! ;)
Generally I would assume that you have much more unused resources available client-side to do the sorting, so I would probably go for option 3 in most cases. But if you need to sort the entries server-side (e.g. for a mobile app targeted at older smartphones), whether option 1 or option 2 is better will depend on the size of your database and the fraction of entries that usually match your inequality filter. If your inequality filter only removes a small fraction of the entries, option 2 might be much faster (as it doesn't require any O(>n) sorting), but if you have a huge database of entries and only a very small number of them will match the inequality, definitely go for option 1.
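For illustration, option 1 applied to the query from the question might look like this (a sketch; once the sort happens in memory, the IN filter that existed only to enable the ordering can be dropped):

# Option 1: run the query with the inequality filter, then sort in memory
forks = Thing.query(ancestor=user.subscriber_key).filter(
    Thing.status == True,
    Thing.fork_of == thing_key,
    Thing.start_date <= user.day_threshold()).fetch()

# Sort on level first, then last_touched (assumes the result set fits in memory)
forks.sort(key=lambda t: (t.level, t.last_touched))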
BTW: The talk "App Engine Datastore Under the Covers" from Google I/O 2008 might be a very helpful resource. It's a bit technical, but it gives a great overview of this topic, and I consider it must-know information if you want to do anything in AppEngine. Note, though, that this talk is a bit outdated. There are a bunch more things that you can do with queries nowadays. But ALL of these extra things (if I understand correctly) are API functions that in the end just generate a set of several simple queries (exactly like the ones described in the talk) and then combine their results in memory in your application (just like you would if you did your own sorting).
*There are some exceptions where AppEngine can generate the intersection of two (or more?) index-scans to drop items from the results, but I don't think that you could use that to change the order of the returned entries.

Best databasing practice for tracing non-homogeneous events

Suppose I have the following event data scheme:
event_record_unique_id: long
event_timestamp: long
session_id: long
event_id: int
event_data: data # concrete type depends on event_id
... so, the contents of the data may depend on, let's say, 500 event_ids, leading to 200 different concrete data types for "data". For example:
{
  event_record_unique_id: 17126721
  event_timestamp: 1234
  session_id: 3452
  event_id: 50
  event_data: {
    user_id: 123
    page_id: 789
  }
}
{
  event_record_unique_id: 17126723
  event_timestamp: 1234
  session_id: 3454
  event_id: 51
  event_data: {
    user_id: 124
    button_id: 789
  }
}
{
  event_timestamp: 1234
  session_id: 3454
  event_id: 51
  event_data: {
    crash_report: "text"
    device_id: "12312"
  }
}
Also:
many of the event_data attributes appear in many of the concrete event_data objects
I need to perform indexed searches on some of the event_data attributes (e.g. find me all the records where user_id=X )
there's a continuing need to keep on adding event types and new attributes
the above data structure is always trivially flattened, so that a single record can be represented equivalently as a row with N columns (attribute name/type collisions are resolved by renaming attributes)
The naive RDBMS approach would involve making ~500 tables (one per concrete type of "data"). I've discounted this approach (= excessive waste of human effort in modelling). Plus, I cannot easily search all records over user_id (since user_id appears in very many tables).
Flattening the structure in an RDBMS is also quite costly (N-8 of the elements are NULL and contain no information).
MongoDB-type document database solutions appear to be a good fit; however, space costs seem quite high if attribute names are held with each record - not much better than an RDBMS. On the other hand, this does allow me to index by fields in the data object.
For me, an ideal representation would be a table optimized to allow rows with many null elements (e.g. by keeping an active-column bitmask per row), or a document DB in which a document collection maintains a library of the document schemas used, enabling compaction of the data (with each document holding a reference to its schema).
What kind of database would people recommend for the above example case?
MS SQL Server 2008 and up have Sparse Columns. Up to 30,000 can be added to a table, and they can be indexed (filtered indexes are recommended). Or so says BOL; I have not used them myself. This would result in a single very large table that might support what you need.
With that said, I don't know that it would be particularly efficient. Some math (see the quick check after this list):
Assume 10 rows a second
becomes 10*60*60*24 = 864,000 rows a day
or 315,360,000 rows a year
with a very rough over-estimate of 50 bytes a row
is about 14GB a year
for how many years do you have to keep the data?
and double that if it's more like 20 rows per second
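(The same arithmetic as a quick Python check; note the 14GB figure is gibibytes:)

rows_per_day = 10 * 60 * 60 * 24        # 864,000
rows_per_year = rows_per_day * 365      # 315,360,000
bytes_per_year = rows_per_year * 50     # 15,768,000,000
print(bytes_per_year / 2.0**30)         # ~14.7 GiB a year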
So storage doesn't seem too far out of line... but I don't know; you'll want to work up some serious size-projection factors. And that's just storage - what do you want or need to do with the data? Is retrieval time for specified rows important? What about analysis and data mining? I'm a SQL guy through and through, and I think it could be done, but this is pretty much the kind of problem that Hadoop and NoSQL solutions were devised for, so it could well be worth your time to thoroughly investigate those options.

Multi-valued Database (UniVerse) -- SM (MV) vs SM (VS) and ASSOC()

I have a question taken from pg 16 of IBM's Nested Relational Database White Paper. I'm confused why, in the below CREATE command, they use MV/MS/MS rather than MV/MV/MS, when both ORDER_# and PART_# are one-to-many relationships. I don't understand what value vs. sub-value means in non-1NF database design. I'd also like to know more about the ASSOC() clause.
Pg 16 of IBM's Nested Relational Database White Paper (slight whitespace modifications)
CREATE TABLE NESTED_TABLE (
  CUST# CHAR (9) DISP ("Customer #"),
  CUST_NAME CHAR (40) DISP ("Customer Name"),
  ORDER_# NUMBER (6) DISP ("Order #") SM ("MV") ASSOC ("ORDERS"),
  PART_# NUMBER (6) DISP ("Part #") SM ("MS") ASSOC ("ORDERS"),
  QTY NUMBER (3) DISP ("Qty.") SM ("MS") ASSOC ("ORDERS")
);
The IBM nested relational databases implement nested tables as repeating attributes and
repeating groups of attributes that are associated. The SM clauses specify that the attribute is either repeating (multivalued--"MV") or a repeating group (multi-subvalued--"MS"). The ASSOC clause associates the attributes within a nested table. If desired, the IBM nested relational databases can support several nested tables within a base table. The following standard SQL statement would be required to process the 1NF tables of Figure 5 to produce the report shown in Figure 6:
SELECT CUSTOMER_TABLE.CUST#, CUST_NAME, ORDER_TABLE.ORDER_#, PART_#, QTY
FROM CUSTOMER_TABLE, ORDER_TABLE, ORDER_CUST
WHERE CUSTOMER_TABLE.CUST_# = ORDER_CUST.CUST_#
  AND ORDER_CUST.ORDER_# = ORDER_TABLE.ORDER_#;
Nested Table
Customer #   Customer Name   Order #   Part #   Qty.
AA2340987    Zedco, Inc.     93-1123   037617   81
                                       053135   36
                             93-1154   063364   32
                                       087905   39
GV1203948    Alphabravo      93-2321   006776   72
                                       055622   81
                                       067587   29
MT1238979    Trisoar         93-2342   005449   33
                                       036893   52
                                       06525    29
                             93-4596   090643   33
I'll go ahead and answer my own question. While perusing IBM's UniVerse SQL Administration for DBAs, I came across code for CREATE TABLE on pg 55.
ACT_NO INTEGER FORMAT '5R' PRIMARY KEY
BADGE_NO INTEGER FORMAT '5R' PRIMARY KEY
ANIMAL_ID INTEGER FORMAT '5L' PRIMARY KEY
(see distracting side note below) This amused me at first, but essentially I believe this to be a column directive, the same as a table directive like PRIMARY ( ACT_NO, BADGE_NO, ANIMAL_ID ).
Later on page 5-19, I saw this
ALTER TABLE LIVESTOCK.T ADD ASSOC VAC_ASSOC (
VAC_TYPE KEY, VAC_DATE, VAC_NEXT, VAC_CERT
);
Which leads me to believe that tacking ASSOC ("VAC_ASSOC") onto a column would be the same... like this:
CREATE TABLE LIVESTOCK.T (
VAC_TYPE ... ASSOC ("VAC_ASSOC")
VAC_DATE ... ASSOC ("VAC_ASSOC")
VAC_NEXT ... ASSOC ("VAC_ASSOC")
VAC_CERT ... ASSOC ("VAC_ASSOC")
);
Anyway, I'm not 100% sure I'm right, but I'm guessing the order doesn't matter, and that rather than these being an intransitive association, they're just an order-insensitive grouping.
Onward! As for the second part of the question pertaining to MS and MV, I for the life of me cannot figure out where the hell IBM got this syntax from. I believe it to be imaginary. I don't have access to a dev machine I can play on to test this out, but I can't find the term MV in the old 10.1 or the new UniVerse 10.3 SQL Reference.
A side note for those not used to UniVerse: the 5R and 5L mean 5 characters right- or left-justified. That's right, a display feature built into the table metadata... Google for UniVerse FORMAT (or FMT) for more info.
Just so you know, Attribute, Multivalue and Sub-Multivalue come from the way they structure their data.
Essentially, all data is stored in a tree of sorts.
UniVerse is a MultiValue database. Generally, it does not work the same way as relational SQL databases do.
Each record can have multiple attributes.
Each attribute can have multiple multivalues.
Each multivalue can have multiple sub-multivalues.
So, if I have a record called FRED
Then, FRED<1,2,3> refers to the 1st attribute, 2nd multivalue position, and 3rd subvalue position.
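A rough Python analogy of that addressing scheme (the nested-list model is my illustration, not how UniVerse stores data internally):

# record = list of attributes; attribute = list of multivalues;
# multivalue = list of sub-multivalues; all positions are 1-based
FRED = [
    [                     # attribute 1
        ["a", "b"],       #   multivalue 1
        ["c", "d", "e"],  #   multivalue 2 (subvalues c, d, e)
    ],
]

def extract(record, a, m, s):
    return record[a - 1][m - 1][s - 1]

print(extract(FRED, 1, 2, 3))  # "e" -- the analogue of FRED<1,2,3>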
To read more about it, you need to learn more about how UniVerse works. The SQL section is just a side part of it. I suggest you read the other manuals to understand what you are working with.
EDIT
Essentially, the code above is telling you that:
There may be multiple orders per client. These are stored at an MV level in the 'table'
There may be multiple parts per order. These are stored at the MS level in the 'table'
There may be multiple qtys per order. These are stored at the MS level in the 'table'. Since they are at the same level, although they are 1-n for orders, they are 1-1 with regard to parts (see the sketch below).
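To make that concrete, here is the first customer from the nested table above in list form (just an illustration of the nesting, not UniVerse's storage format):

# CUST# and CUST_NAME are single-valued; ORDER_# is multivalued (one entry per
# order); PART_# and QTY are multi-subvalued, grouped in parallel by order
record = [
    "AA2340987",                                   # CUST#
    "Zedco, Inc.",                                 # CUST_NAME
    ["93-1123", "93-1154"],                        # ORDER_#  (MV)
    [["037617", "053135"], ["063364", "087905"]],  # PART_#   (MS)
    [[81, 36], [32, 39]],                          # QTY      (MS)
]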
