Difference between Google App Engine and actual Bigtable

I know that App Engine is implemented on Bigtable. Can anyone describe the difference between the actual implementation of Bigtable and Google's implementation of it in App Engine?

Bigtable provides a basic key/value store, described in the paper here. Values are stored in rows and columns. Row and column keys are arbitrary byte strings. For more details see the paper. The basic operations Bigtable provides are lookups on individual row and column keys, and ranges of rows.
On top of Bigtable, there's an abstraction layer called Megastore. Megastore uses the Bigtable primitives to construct a more versatile database platform. It adds indexing - using separate Bigtable tables as indexes - and queries using those indexes. It also adds replication support. It's Megastore that provides most of what we think of as the App Engine datastore, such as composite indexes and the variety of queries the datastore provides.
Finally, App Engine implements a few things of its own on top of Megastore, such as the format of App Engine entity keys, giving each app its own datastore, and implementing certain operations like 'IN' and '!=' in an abstraction layer in each language's SDK.
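For example, the SDK can satisfy an 'IN' filter by issuing one underlying equality query per value and merging the results client-side. A minimal sketch of that idea in Python (the helper below is hypothetical, not the SDK's actual code):

    def emulate_in_filter(run_equality_query, prop, values):
        """Emulate `WHERE prop IN (values)` on a store that only supports
        equality filters: run one query per value and merge the results,
        de-duplicating by entity key. Hypothetical helper, not SDK code."""
        seen = set()
        merged = []
        for value in values:
            for entity in run_equality_query(prop, value):
                if entity.key not in seen:
                    seen.add(entity.key)
                    merged.append(entity)
        return merged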

Modeling Forward and Reverse Query Questions in Bigtable

Let's say that we have the following three entities:
Organization
- id
Role
- id
Member
- id
A Role can be granted to a Member within an Organization, thus giving that Member certain access control rights to that Organization. I'd like to be able to answer the following two queries:
List the IDs of all the members who have a given Role within a given Organization (e.g. given a Role ID and Org ID give me the list of Members).
List all of the IDs of the Roles that a member has been granted within a given Organization (e.g. given a Member ID and Org ID give me the list of Roles).
I'm trying to find recommendations on how to model this in Bigtable (ideally with a single row for atomic mutations)... I'm also open to other technology recommendations (I'm trying to design within the constraints my company has given me).
If we model the relationship described above using the Bigtable row key org#{orgID}#role#{roleID}#member#{memberID}, I can easily answer the first question. However, it doesn't allow me to easily answer the second question. If I duplicate data and store another row key org#{orgID}#member#{memberID}#role#{roleID} then I can easily answer the second question, but now I have two rows to manage and atomic updates cannot be guaranteed between the two, so that may lead to consistency issues.
Has anyone in the community run into a similar problem, and if so, how did you solve it?
Cloud Bigtable doesn't natively support secondary indexes, which is what you would need in order to keep a single row and still run both of those queries efficiently without a full table scan. The alternative, which you've already identified, is to write two rows via a process that ensures eventual consistency. This might be sufficient for your needs, depending on the underlying requirements of your system.
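A minimal sketch of that dual-row write with the google-cloud-bigtable Python client; the project, instance, table, and column-family names are placeholders:

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")          # placeholder
    table = client.instance("my-instance").table("grants")  # placeholder

    def grant_role(org_id, member_id, role_id):
        """Write both the forward and the reverse row for one grant.
        mutate_rows is NOT atomic across rows, so failed mutations must
        be retried (or reconciled) to reach eventual consistency."""
        forward = table.direct_row(
            "org#%s#role#%s#member#%s" % (org_id, role_id, member_id))
        reverse = table.direct_row(
            "org#%s#member#%s#role#%s" % (org_id, member_id, role_id))
        for row in (forward, reverse):
            row.set_cell("cf", "granted", b"1")
        statuses = table.mutate_rows([forward, reverse])
        failed = [i for i, s in enumerate(statuses) if s.code != 0]
        if failed:
            raise RuntimeError("retry needed for rows %s" % failed)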
Depending on your constraints (cloud provider, data scale, atomicity, multi-region replication, etc.), you might be better served with a standard relational database (e.g. Postgres, MySQL), or Google Cloud Spanner.
Possible approaches with Spanner to accomplish this:
Have a single table that represents a Member <-> Role relationship. Make RoleID the primary key of the row, then add a secondary index on MemberID, and you'd be able to run queries against either.
Go the traditional relational database route of having Member, Role, and a MemberRole join table. With Spanner you get atomic updates via a transaction. When querying, you could potentially have issues with reads going across multiple splits, but you'd have to do some real-world testing to see what your performance would be like.
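A sketch of the first approach as Spanner DDL, applied through the google-cloud-spanner Python client; every name below is illustrative:

    from google.cloud import spanner

    client = spanner.Client(project="my-project")                # placeholder
    database = client.instance("my-instance").database("my-db")  # placeholder

    # One row per grant, keyed for the forward query (Org + Role -> Members);
    # the secondary index serves the reverse query (Org + Member -> Roles).
    operation = database.update_ddl([
        """CREATE TABLE MemberRoles (
             OrgId    STRING(64) NOT NULL,
             RoleId   STRING(64) NOT NULL,
             MemberId STRING(64) NOT NULL
           ) PRIMARY KEY (OrgId, RoleId, MemberId)""",
        "CREATE INDEX MemberRolesByMember"
        " ON MemberRoles (OrgId, MemberId, RoleId)",
    ])
    operation.result()  # block until the schema change completes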
Disclosures:
I lead product management for Cloud Bigtable.
I co-founded the JanusGraph project.
Reading through your problem statement, it sounds like you want to use either a relational database or a graph database. Each one will have its own pros/cons.
Relational DBMS approach
As Dan mentioned in his answer, you can use a managed MySQL or PostgreSQL via Google Cloud SQL, or Google Cloud Spanner, depending on your needs for scale, replication, consistency, compatibility with existing code/frameworks, etc.
Graph database approach
Alternatively, you can use a graph database which can help you model this information easily and query it efficiently.
For example, you can deploy JanusGraph on GKE with Bigtable and Elasticsearch and query the data using the Gremlin language, which is a standard graph traversal/query language supported by many graph databases (see the sketch after the list below).
Note that JanusGraph + Bigtable inherits the transactionality of Bigtable (which as you noted, is row-level atomic). Since JanusGraph stores each vertex in a separate row in Bigtable, only single-vertex updates will be atomic. If you want transactional updates via JanusGraph, you may need to use a different storage backend, e.g.,
BerkeleyDB (local, non-distributed storage backend)
FoundationDB (recent contribution by the JanusGraph community)
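For illustration, the two access-control queries might look like this in Gremlin via the gremlinpython driver, assuming member and role vertices connected by a grantedIn edge that carries the orgId; the endpoint, labels, and property names are all invented for this sketch:

    from gremlin_python.process.anonymous_traversal import traversal
    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

    # Connect to a JanusGraph (or any TinkerPop-enabled) Gremlin Server.
    g = traversal().withRemote(
        DriverRemoteConnection("ws://localhost:8182/gremlin", "g"))

    # Q1: members holding a given role within a given org.
    member_ids = (g.V().has("role", "roleId", "r1")
                   .inE("grantedIn").has("orgId", "o1")
                   .outV().values("memberId").toList())

    # Q2: roles a given member holds within a given org.
    role_ids = (g.V().has("member", "memberId", "m1")
                 .outE("grantedIn").has("orgId", "o1")
                 .inV().values("roleId").toList())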
There are many other graph databases you can consider, some of which also support Gremlin or other graph query languages. For example, you can deploy Neo4j on GCP if you prefer, which supports Gremlin as well as Cypher.

Meaning of words used to describe the BigTable data model

I am learning how to use the Google App Engine. My question is on the meaning of words used to describe a BigTable database.
I have read Google's paper on Bigtable. The paper describes the Bigtable data model in terms of rows, cells, column families, columns and column keys. I felt the examples given in the paper gave me a good idea of how to start designing a Bigtable database.
I then looked at the Google App Engine datastore documentation and API. This uses the terms entity, properties, key, entity groups and index to describe the data model.
My question is:
What is the relationship between the terms used in the paper and the terms used in the API?
The Datastore is built upon Megastore, which is built upon an implementation of Bigtable. Roughly speaking, each datastore entity is stored as a row in a shared Bigtable, with the entity key as the row key, while the datastore's indexes live in separate Bigtable tables.
When developing for App Engine you don't really need to concern yourself with the Megastore and Bigtable concepts, as it's all out of your control under the hood, and the datastore builds much more on top of them anyway. It would be like trying to model your MySQL data after learning how BerkeleyDB is implemented... interesting, but ultimately not that useful for your application.
Read through the Megastore paper, which will probably give you most of what you are looking for, and also check out some of the Google I/O talks that bring up the High Replication datastore, as these touch a little on what goes on inside.

How to create database table in Google App Engine

You don't. You create Entities of different kinds. Datastore is not a relational database[*].
If you want to imagine that GAE creates one "table" for each kind, the "columns" of that "table" being the properties of the entities, then you're welcome to do so. But I don't think it helps.
[*] I don't know whether it meets some technical definition, but it certainly doesn't behave like SQL-based databases.
According to http://code.google.com/appengine/docs/python/datastore/
App Engine Datastore is a schemaless object datastore providing robust, scalable storage for your web application, with the following features:
No planned downtime
Atomic transactions
High availability of reads and writes
Strong consistency for reads and ancestor queries
Eventual consistency for all other queries
The Python Datastore interface includes a rich data modeling API and a SQL-like query language called GQL.
In simple words: just create your model class, create an object of this class, and after the first call of the put() method for this object the "table" (the term here is kind) will be created on the fly. But you definitely have to read the documentation and check some examples; they will help you understand the specifics of the Google Datastore and how it differs from a common RDBMS.
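A minimal sketch with the (legacy) Python db API; the Greeting kind and its properties are made up for illustration:

    from google.appengine.ext import db

    class Greeting(db.Model):              # a model class defines a kind
        author = db.StringProperty()
        content = db.StringProperty(required=True)
        created = db.DateTimeProperty(auto_now_add=True)

    # The "Greeting" kind comes into existence with the first put();
    # there is no separate CREATE TABLE step.
    greeting = Greeting(author="alice", content="hello")
    greeting.put()

    # Query with GQL, the SQL-like language mentioned above
    # (this particular query needs a composite index).
    recent = db.GqlQuery(
        "SELECT * FROM Greeting WHERE author = :1 ORDER BY created DESC",
        "alice").fetch(10)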
In simple words, I would say that with Google Bigtable you don't need to create your tables, because there are already six big tables ready to store whatever you want.

Google's Bigtable vs. A Relational Database [duplicate]

This question is a duplicate of:
Why should I use document based database instead of relational database?
Pros/Cons of document based database vs relational database
I don't know much about Google's Bigtable but am wondering what the difference between Google's Bigtable and relational databases like MySQL is. What are the limitations of both?
Bigtable is Google's invention to deal with the massive amounts of information that the company regularly deals in. A Bigtable dataset can grow to immense size (many petabytes) with storage distributed across a large number of servers. The systems using Bigtable include projects like Google's web index and Google Earth.
According to Google's whitepaper on the subject:
A Bigtable is a sparse, distributed, persistent multidimensional sorted map. The map is indexed by a row key, column key, and a timestamp; each value in the map is an uninterpreted array of bytes.
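As a mental model only (ignoring distribution and persistence, which are the hard parts), that map can be sketched in a few lines of Python; the cells are invented:

    # Toy model of the Bigtable data model: a map from
    # (row_key, column_key, timestamp) to an uninterpreted byte string.
    table = {
        (b"com.cnn.www", b"contents:", 6): b"<html>...",
        (b"com.cnn.www", b"anchor:cnnsi.com", 9): b"CNN",
    }

    def latest(row, column):
        """Return the newest version of a cell, or None if absent."""
        versions = [t for (r, c, t) in table if r == row and c == column]
        return table[(row, column, max(versions))] if versions else None

    print(latest(b"com.cnn.www", b"contents:"))  # b'<html>...'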
The internal mechanics of Bigtable versus, say, MySQL are so dissimilar as to make comparison difficult, and the intended goals don't overlap much either. But you can think of Bigtable a bit like a single-table database. Imagine, for example, the difficulties you would run into if you tried to implement Google's entire web search system with a MySQL database -- Bigtable was built around solving those problems.
Bigtable datasets can be queried from services like App Engine using a language called GQL ("gee-kwal"), which is based on a subset of SQL. Conspicuously missing from GQL is any sort of JOIN command. Because of the distributed nature of a Bigtable database, performing a join between two tables would be terribly inefficient. Instead, the programmer has to implement such logic in the application, or design the application so as not to need it.
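In (Python) App Engine terms, what a JOIN would have done is instead expressed as two round trips or as an ancestor relationship; a hedged sketch with invented kinds:

    from google.appengine.ext import db

    # No JOIN in GQL: fetch the "parent" row first, then issue a second
    # query for the related rows, and stitch the results in application code.
    # (Assumes Order entities were created with parent=customer.)
    customer = db.GqlQuery(
        "SELECT * FROM Customer WHERE email = :1", "a@example.com").get()
    orders = db.GqlQuery(
        "SELECT * FROM Order WHERE ANCESTOR IS :1", customer.key()).fetch(100)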
Google's BigTable and other similar projects (e.g. CouchDB, HBase) are database systems oriented so that data is mostly denormalized (i.e., duplicated and grouped).
The main advantages are:
- Join operations are cheaper, or avoided entirely, because of the denormalization
- Replication/distribution of data is less costly because of data independence (i.e., if you want to distribute data across two nodes, you probably won't have the problem of having an entity on one node and another related entity on another node, because similar data is grouped)
These kinds of systems are suited to applications that need to scale optimally (i.e., you add more nodes to the system and performance increases proportionally). In an RDBMS like MySQL or Oracle, when you start adding more nodes, joining two tables that are not on the same node is costly. This becomes important when you are dealing with high volumes.
RDBMSs are nice because of the richness of the storage model (tables, joins, FKs). Distributed databases are nice because of the ease of scaling.

What database does Google use?

Is it Oracle or MySQL or something they have built themselves?
Bigtable
A Distributed Storage System for Structured Data
Bigtable is a distributed storage system (built by Google) for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products.
Some features
fast and extremely large-scale DBMS
a sparse, distributed multi-dimensional sorted map, sharing characteristics of both row-oriented and column-oriented databases.
designed to scale into the petabyte range
it works across hundreds or thousands of machines
it is easy to add more machines to the system and automatically start taking advantage of those resources without any reconfiguration
each table has multiple dimensions (one of which is a field for time, allowing versioning)
tables are optimized for GFS (Google File System) by being split into multiple tablets - segments of the table, split along a row boundary chosen so that each tablet will be ~200 megabytes in size.
Architecture
BigTable is not a relational database. It does not support joins nor does it support rich SQL-like queries. Each table is a multidimensional sparse map. Tables consist of rows and columns, and each cell has a time stamp. There can be multiple versions of a cell with different time stamps. The time stamp allows for operations such as "select 'n' versions of this Web page" or "delete cells that are older than a specific date/time."
In order to manage the huge tables, Bigtable splits tables at row boundaries and saves them as tablets. A tablet is around 200 MB, and each machine saves about 100 tablets. This setup allows tablets from a single table to be spread among many servers. It also allows for fine-grained load balancing: if one tablet is receiving many queries, its machine can shed other tablets or move the busy tablet to another machine that is not so busy. Also, if a machine goes down, its tablets may be spread across many other servers so that the performance impact on any given machine is minimal.
Tables are stored as immutable SSTables plus a tail of logs (one log per machine). When a machine runs out of system memory, it compresses some tablets using Google's proprietary compression techniques (BMDiff and Zippy). Minor compactions involve only a few tablets, while major compactions involve the whole table system and recover hard-disk space.
The locations of Bigtable tablets are stored in cells. The lookup of any particular tablet is handled by a three-tiered system. Clients get a pointer to the META0 table, of which there is only one. The META0 table keeps track of many META1 tablets that contain the locations of the tablets being looked up. Both META0 and META1 make heavy use of prefetching and caching to minimize bottlenecks in the system.
Implementation
BigTable is built on Google File System (GFS), which is used as a backing store for log and data files. GFS provides reliable storage for SSTables, a Google-proprietary file format used to persist table data.
Another service that BigTable makes heavy use of is Chubby, a highly available, reliable distributed lock service. Chubby allows clients to take a lock, possibly associating it with some metadata, which they can renew by sending keep-alive messages back to Chubby. The locks are stored in a filesystem-like hierarchical naming structure.
There are three primary server types of interest in the Bigtable system:
Master servers: assign tablets to tablet servers, keep track of where tablets are located, and redistribute tasks as needed.
Tablet servers: handle read/write requests for tablets and split tablets when they exceed size limits (usually 100 MB - 200 MB). If a tablet server fails, then 100 tablet servers each pick up one new tablet and the system recovers.
Lock servers: instances of the Chubby distributed lock service. Lots of actions within BigTable require the acquisition of locks, including opening tablets for writing, ensuring that there is no more than one active master at a time, and access control checking.
Example from Google's research paper:
A slice of an example table that stores Web pages. The row name is a reversed URL. The contents column family contains the page contents, and the anchor column family contains the text of any anchors that reference the page. CNN's home page is referenced by both the Sports Illustrated and the MY-look home pages, so the row contains columns named anchor:cnnsi.com and anchor:my.look.ca. Each anchor cell has one version; the contents column has three versions, at timestamps t3, t5, and t6.
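Rendered as nested maps (row key -> column -> timestamp -> value), that slice looks roughly like this, with the page contents elided:

    webtable_row = {
        "com.cnn.www": {
            "contents:": {6: "<html>...", 5: "<html>...", 3: "<html>..."},
            "anchor:cnnsi.com": {9: "CNN"},
            "anchor:my.look.ca": {8: "CNN.com"},
        }
    }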
API
Typical operations in BigTable are the creation and deletion of tables and column families, writing data, and deleting columns from a row. BigTable provides these functions to application developers in an API. Transactions are supported at the row level, but not across several row keys.
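Row-level atomicity is visible in the Cloud Bigtable client's check-and-mutate primitive; a sketch with placeholder names (note that both the check and the write apply to one row only):

    from google.cloud import bigtable
    from google.cloud.bigtable.row_filters import (
        ColumnQualifierRegexFilter, RowFilterChain, ValueRegexFilter)

    client = bigtable.Client(project="my-project")             # placeholder
    table = client.instance("my-instance").table("my-table")   # placeholder

    # Atomically: if cf:state == "pending" on this row, set it to "done".
    row = table.conditional_row(
        b"row-1",
        filter_=RowFilterChain(filters=[
            ColumnQualifierRegexFilter(b"state"),
            ValueRegexFilter(b"pending"),
        ]))
    row.set_cell("cf", "state", b"done", state=True)  # only if the filter matches
    matched = row.commit()  # there is no way to span multiple row keys here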
Here is the link to the PDF of the research paper.
And here you can find a video showing Google's Jeff Dean in a lecture at the University of Washington, discussing the Bigtable content storage system used in Google's backend.
It's something they've built themselves - it's called Bigtable.
http://en.wikipedia.org/wiki/BigTable
There is a paper by Google on the database:
http://research.google.com/archive/bigtable.html
Spanner is Google's globally distributed relational database management system (RDBMS), the successor to BigTable. Google claims it is not a purely relational system because each table must have a primary key.
Here is the link to the paper.
Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions. This paper describes how Spanner is structured, its feature set, the rationale underlying various design decisions, and a novel time API that exposes clock uncertainty. This API and its implementation are critical to supporting external consistency and a variety of powerful features: non-blocking reads in the past, lock-free read-only transactions, and atomic schema changes, across all of Spanner.
Another database invented by Google is Megastore. Here is the abstract:
Megastore is a storage system developed to meet the requirements of today's interactive online services. Megastore blends the scalability of a NoSQL datastore with the convenience of a traditional RDBMS in a novel way, and provides both strong consistency guarantees and high availability. We provide fully serializable ACID semantics within fine-grained partitions of data. This partitioning allows us to synchronously replicate each write across a wide area network with reasonable latency and support seamless failover between datacenters. This paper describes Megastore's semantics and replication algorithm. It also describes our experience supporting a wide range of Google production services built with Megastore.
As others have mentioned, Google uses a homegrown solution called BigTable and they've released a few papers describing it out into the real world.
The Apache folks have an implementation of the ideas presented in these papers called HBase. HBase is part of the larger Hadoop project which according to their site "is a software platform that lets one easily write and run applications that process vast amounts of data." Some of the benchmarks are quite impressive. Their site is at http://hadoop.apache.org.
Although Google uses BigTable for all their main applications, they also use MySQL for other (perhaps minor) apps.
And it's maybe also handy to know that BigTable is not a relational database (like MySQL) but a huge (distributed) hash table, which has very different characteristics. You can play around with a limited version of BigTable yourself on the Google App Engine platform.
Besides Hadoop, mentioned above, there are many other implementations that try to solve the same problems as Bigtable (scalability, availability). I saw a nice blog post yesterday listing most of them here.
Google primarily uses Bigtable.
Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size.
For more information, download the document from here.
Google also uses Oracle and MySQL databases for some of their applications.
Any more information you can add is highly appreciated.
Google services have a polyglot persistence architecture. BigTable is leveraged by most of its services, like YouTube, Google Search, and Google Analytics. The search service initially used MapReduce for its indexing infrastructure but later transitioned to BigTable during the Caffeine release.
Google Cloud Datastore backs over 100 applications in production at Google, serving both internal and external users. Applications like Gmail, Picasa, Google Calendar, Android Market, and App Engine use Cloud Datastore and Megastore.
Google Trends uses MillWheel for stream processing. Google Ads initially used MySQL but later migrated to F1 DB, a custom-written distributed relational database. YouTube uses MySQL with Vitess. Google stores exabytes of data across commodity servers with the help of the Google File System.
Source: Google Databases: How Do Google Services Store Petabyte-Exabyte Scale Data?
YouTube Database – How Does It Store So Many Videos Without Running Out Of Storage Space?
