This question may be a little vague, but let me try to explain it clearly. I have been reading a database-related tutorial, and it mentioned that tables are serialized to bytes to be persisted on disk. When we deserialize them, we can locate each column based on the size of its type.
For example, we have a table:
---------------------------------------------------
| id (unsigned int 8) | timestamp (signed int 32) |
---------------------------------------------------
| Some Id | Some time |
---------------------------------------------------
When we are deserializing a byte array loaded from a file, we know the first 8 bits are the id, and the following 32 bits are the timestamp.
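To illustrate my understanding, here is a rough sketch (my own code, not from the tutorial) of that fixed-offset layout using Python's struct module, assuming a 1-byte unsigned id and a 4-byte signed timestamp:

import struct

# Fixed-width row: a 1-byte unsigned id followed by a 4-byte signed timestamp,
# so every column can be located purely by its byte offset.
ROW_FORMAT = "<Bi"                      # little-endian: uint8, int32
ROW_SIZE = struct.calcsize(ROW_FORMAT)  # 5 bytes per row

def serialize_row(row_id, timestamp):
    return struct.pack(ROW_FORMAT, row_id, timestamp)

def deserialize_row(buf, offset=0):
    return struct.unpack_from(ROW_FORMAT, buf, offset)

data = serialize_row(42, 1700000000)
print(deserialize_row(data))  # (42, 1700000000)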
But the tutorial never mentioned how strings are handled in databases. They are not limited to a fixed size like 32 bits, and their size is not predictable (there can always be a very long string, who knows). So how exactly do databases handle strings?
I know that in an RDBMS you need to specify the size of a string column, as VARCHAR(45) for example, and then it becomes easier. But what about databases like MongoDB or Redis, which do not require you to specify string sizes? Do they just assume a specific length and increase the size once a longer value comes in?
That is basically my vague, non-specific question; I hope someone can give me some ideas on this. Thank you very much.
In MongoDB, documents are serialized as BSON (a binary, JSON-like format). See the BSON spec for more details on each datatype.
For string type, it is stored as:
<unsigned32 strsizewithnull><cstring>
From this line in the MongoDB source.
So a string field is stored with its length (including the null terminator) in the BSON object. The string itself is UTF-8 encoded, as per the BSON spec, so it can use a variable number of bytes per symbol. Together with the other fields that make up the document, it is compressed using Snappy by default, and this compressed representation is what gets persisted to disk.
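To illustrate just the string part of that layout, here is a hand-rolled sketch (my own code, not MongoDB's) showing how a length-prefixed string avoids the need for a fixed column width:

import struct

# Hand-rolled sketch of the layout described above:
# <unsigned32 strsizewithnull><cstring>
def bson_encode_string(s):
    data = s.encode("utf-8") + b"\x00"          # UTF-8 bytes plus null terminator
    return struct.pack("<I", len(data)) + data  # 32-bit length prefix (includes the null)

def bson_decode_string(buf, offset=0):
    (size,) = struct.unpack_from("<I", buf, offset)
    start = offset + 4
    return buf[start:start + size - 1].decode("utf-8")  # drop the trailing null

encoded = bson_encode_string("hello")
print(encoded)                      # b'\x06\x00\x00\x00hello\x00'
print(bson_decode_string(encoded))  # hello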
WiredTiger is a no-overwrite storage engine. If that document is updated, WiredTiger creates a new version of the document, updates the internal pointer to the new one, and marks the old document as "space available for reuse".
I have to store some attributes in DynamoDB and I am confused about whether some of the JSON attributes should be stored as String or Binary. I understand that storing them as Binary will reduce the size of the attribute.
I have taken DynamoDB's limits into account: one read capacity unit covers 4 KB (writes are metered in 1 KB units).
My total item size is less than 4 KB even if I store everything as String.
What should I consider when choosing between Binary and String?
Thanks.
Given that your item sizes are less than 4 KB uncompressed, whether to encode an attribute as Binary or String depends on whether the attribute will be a partition/sort key of the table and on your typical read patterns.
A partition key has a maximum size of 2048 bytes (~2 KB).
A sort key (if you specify one on the table) has a maximum size of 1024 bytes (~1 KB).
If you foresee your string attribute exceeding the above maximums on any item, it would make sense to compress it to binary first to keep your attribute sizes within DynamoDB's limits.
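As an example, here is a rough sketch of doing that with boto3 and zlib (the table and attribute names here are hypothetical):

import json
import zlib
import boto3

# Hypothetical table/attribute names: compress a long JSON string with zlib
# and store it as a DynamoDB Binary attribute instead of a String.
dynamodb = boto3.client("dynamodb")

def put_compressed(table, key, payload_dict):
    raw = json.dumps(payload_dict).encode("utf-8")
    dynamodb.put_item(
        TableName=table,
        Item={
            "pk": {"S": key},
            "payload": {"B": zlib.compress(raw)},  # Binary attribute type
        },
    )

def get_decompressed(table, key):
    item = dynamodb.get_item(TableName=table, Key={"pk": {"S": key}})["Item"]
    blob = item["payload"]["B"]  # the low-level client returns this as bytes
    return json.loads(zlib.decompress(blob).decode("utf-8"))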
Depending on how many items are in your typical query and your tolerance for throttled queries, your RCUs may not satisfy a Query/Scan where you perform the read in a single request.
For instance,
If you have 1 KB items and want to query 100 items in a single request, your RCU requirement will be as follows:
(100 * 1024 bytes = 100 KB) / 4 KB = 25 read capacity units
Converting some attributes to binary could reduce your RCU requirement in this case. Again it largely depends on your typical usage pattern.
See http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ProvisionedThroughput.html#HowItWorks.ProvisionedThroughput.Reads
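As a quick sanity check of that arithmetic, here is a tiny sketch (assuming strongly consistent reads, as in the example above; eventually consistent reads need half as many RCUs):

import math

def read_capacity_units(item_size_bytes, items_per_request):
    # 1 RCU covers up to 4 KB per strongly consistent read per second.
    total_kb = items_per_request * item_size_bytes / 1024.0
    return int(math.ceil(total_kb / 4.0))

print(read_capacity_units(1024, 100))  # 25, matching the example above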
I have two blobs each of size 0.9 MB.
Is it fine to store both in a single entity by calling
anEntity.setProperty("blob1", blob1);
anEntity.setProperty("blob2", blob2); // will this hit the 1 MB limit?
My confusion is whether the 1 MB limit is per property or for the entity as a whole.
Thanks.
Just for the sake of having the answers as answers:
From Tim Hoffman's comment: It's for both; a single entity cannot be larger than 1 MB, and given that rule no property can be bigger than 1 MB. Also remember there is overhead for storing the key and the property name as well as the blob, so in practice a property's maximum size will be slightly less than 1 MB.
From Gilberto Torrezan's comment: You should use Google Cloud Storage for that case. The Datastore is not meant to store large blobs.
I have a static database of ~60,000 rows. There is a certain column for which there are ~30,000 unique entries. Given that ratio (60,000 rows/30,000 unique entries in a certain column), is it worth creating a new table with those entries in it, and linking to it from the main table? Or is that going to be more trouble than it's worth?
To put the question in a more concrete way: will I gain a lot more efficiency by separating this field out into its own table?
** UPDATE **
We're talking about a VARCHAR(100) field, but in reality, I doubt any of the entries use that much space -- I could most likely trim it down to VARCHAR(50). Example entries: "The Gas Patch and Little Canada" and "Kora Temple Masonic Bldg. George Coombs"
If the field is a VARCHAR(255) that normally contains about 30 characters, and the alternative is to store a 4-byte integer in the main table and use a second table with a 4-byte integer and the VARCHAR(255), then you're looking at some space saving.
Old scheme:
T1: 30 bytes * 60 K entries = 1800 KB.
New scheme:
T1: 4 bytes * 60 K entries = 240 KB
T2: (4 + 30) bytes * 30 K entries = 1020 KB
So, that's crudely 1800 - 1260 = 540 KB of space saved. If, as would be necessary, you build an index on the integer column in T2, you lose some of that again. If the average length of the data is larger than 30 bytes, the space saving increases. If the ratio of repeated rows ever increases, the saving increases.
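The same arithmetic as a quick script, in case you want to plug in your own column width and row counts (a sketch, assuming 4-byte integer keys):

def single_table_bytes(avg_len, rows):
    # One wide table: the string is stored on every row.
    return avg_len * rows

def two_table_bytes(avg_len, rows, distinct, key_size=4):
    # Main table stores an integer key; the lookup table stores key + string.
    return key_size * rows + (key_size + avg_len) * distinct

old = single_table_bytes(30, 60000)       # 1,800,000 bytes
new = two_table_bytes(30, 60000, 30000)   # 240,000 + 1,020,000 bytes
print(old - new)                          # 540,000 bytes saved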
Whether the space saving is significant depends on your context. If you need half a megabyte more memory, you just got it, and you could squeeze out more if you're sure you won't need more than 65535 distinct entries, by using 2-byte integers instead of 4-byte integers (120 + 960 KB = 1080 KB; saving 720 KB). On the other hand, if you really won't notice half a megabyte in the multi-gigabyte storage that's available, then it becomes a more pragmatic problem. Maintaining two tables is more work, but guarantees that the name is the same each time it is used. Maintaining one table means you have to make sure that the repeated names are handled consistently; or, more likely, you ignore the possibility and end up without pairs where you should have pairs, or with triplets where you should have doubletons.
Clearly, if the type that's repeated is a 4-byte integer, using two tables will save nothing; it will cost you space.
A lot, therefore, depends on what you've not told us. The type is one key issue. The other is the semantics behind the repetition.
Are very large TextProperties a burden? Should they be compressed?
Say I have information stored in 2 attributes of type TextProperty in my datastore entities.
The strings are always the same length of 65,000 characters and have lots of repeating integers, a sample appearing as follows:
entity.pixel_idx = 0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,5,5,5,5,5,5,5,5,5,5,5,5....etc.
entity.pixel_color = 2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,...etc.
So the above could also be represented using much less storage by compressing it, say by storing only each integer and the length of its run ('0,8' for '0,0,0,0,0,0,0,0'), but then it takes time and CPU to compress and decompress?
Any general ideas?
Are there some tricks for testing different attempts to the problem?
If all of your integers are single-digit numbers (as in your example), then you can reduce your storage space in half by simply omitting the commas.
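And if you want to exploit the long runs explicitly, here is a minimal sketch of the run-length idea from the question (the helper names are mine, and I use value:count pairs so the encoding stays unambiguous), with zlib shown for comparison:

import zlib
from itertools import groupby

def rle_encode(csv_values):
    # Collapse each run of identical values into a single value:count pair.
    values = csv_values.split(",")
    return ",".join("%s:%d" % (v, len(list(run))) for v, run in groupby(values))

def rle_decode(encoded):
    pairs = (item.split(":") for item in encoded.split(","))
    return ",".join(",".join([v] * int(n)) for v, n in pairs)

pixel_idx = ",".join(["0"] * 9 + ["1"] * 11 + ["5"] * 12)
packed = rle_encode(pixel_idx)
print(packed)  # 0:9,1:11,5:12
assert rle_decode(packed) == pixel_idx
print(len(pixel_idx), len(packed), len(zlib.compress(pixel_idx.encode())))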
The Short Answer
If you expect to have a lot of repetition, then compressing your data makes sense - your data is not so small (65K) and is highly repetitive => it will compress well. This will save you storage space and will reduce how long it takes to transfer the data back from the datastore when you query for it.
The Long Answer
I did a little testing starting with the short example string you provided and that same string repeated to 65000 characters (perhaps more repetitive than your actual data). This string compressed from 65K to a few hundred bytes; you may want to do some additional testing based on how well your data actually compresses.
Anyway, the test shows significant savings when using compressed data versus uncompressed data (for just the above test, where compression works really well!). In particular, for compressed data:
API time is about 10x less for a single entity (41 ms versus 387 ms on average)
Storage used is significantly less (so it doesn't look like GAE is doing any compression on your data).
Unexpectedly, CPU time is about 50% less (130 ms versus 180 ms when fetching 100 entities). I expected CPU time to be a little worse, since the compressed data has to be decompressed. There must be some other CPU work (like decoding the protocol buffer) which is even greater for the much larger uncompressed data.
These differences mean wall clock time is also significantly faster for the compressed version (<100ms versus 426ms when fetching 100 entities).
To make it easier to take advantage of compression, I wrote a custom CompressedDataProperty which handles all of the compressing/decompressing business so you don't have to worry about it (I used it in the above tests too). You can get the source from the above link, but I've also included it here since I wrote it for this answer:
from google.appengine.ext import db
import zlib

class CompressedDataProperty(db.Property):
    """A property for storing compressed data or text.

    Example usage:

    >>> class CompressedDataModel(db.Model):
    ...   ct = CompressedDataProperty()

    You create a compressed data property, simply specifying the data or text:

    >>> model = CompressedDataModel(ct='example uses text too short to compress well')
    >>> model.ct
    'example uses text too short to compress well'
    >>> model.ct = 'green'
    >>> model.ct
    'green'
    >>> model.put() # doctest: +ELLIPSIS
    datastore_types.Key.from_path(u'CompressedDataModel', ...)
    >>> model2 = CompressedDataModel.all().get()
    >>> model2.ct
    'green'

    Compressed data is not indexed and therefore cannot be filtered on:

    >>> CompressedDataModel.gql("WHERE ct = :1", 'green').count()
    0
    """
    data_type = db.Blob

    def __init__(self, level=6, *args, **kwargs):
        """Constructor.

        Args:
          level: Controls the level of zlib's compression (between 1 and 9).
        """
        super(CompressedDataProperty, self).__init__(*args, **kwargs)
        self.level = level

    def get_value_for_datastore(self, model_instance):
        # Compress the value just before it is written to the datastore.
        value = self.__get__(model_instance, model_instance.__class__)
        if value is not None:
            return db.Blob(zlib.compress(value, self.level))

    def make_value_from_datastore(self, value):
        # Decompress the stored blob when the entity is loaded.
        if value is not None:
            return zlib.decompress(value)
I think this should be pretty easy to test. Just create 2 handlers, one that compresses the data and one that doesn't, and record how much CPU each one uses (using the appstats package for whichever language you are developing with). You should also create 2 entity types, one for the compressed data and one for the uncompressed.
Load in a few hundred thousand or a million entities (using the task queue, perhaps). Then you can check the disk space usage in the administrator's console and see how much each entity type uses. If the data is compressed internally by App Engine, you shouldn't see much difference in the space used (unless their compression is significantly better than yours); if it is not compressed, there should be a stark difference.
Of course, you may want to hold off on this type of testing until you know for sure that these entities will account for a significant portion of your quota usage and/or your page load time.
Alternatively, you could wait for Nick or Alex to pop in and they could probably tell you whether the data is compressed in the datastore or not.