Data maturity monitors - database

Does anyone know what data maturity monitors are and how to create data maturity monitors for a table in GCP?
I am new to GCP data engineering. I have searched on Google but couldn't find what data maturity monitors are.

Related

How to design a NoSQL-based database for a sensor-based IoT device service?

We are designing an IoT-based service which collects information such as current temperature, location, status, and voltage of the device at regular intervals. We are planning to use a NoSQL database (MongoDB) for storage.
We might have to deploy thousands of such devices, which might scale to millions in no time. Traditional data storage might not be the best approach in this case.
How should we design the database so that it is horizontally scalable?
Take a look at MongoDB's design patterns for various applications:
https://www.mongodb.com/blog/post/building-with-patterns-a-summary
The bucket pattern is commonly used for IoT applications that collect large amounts of streaming sensor data.
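Here is a minimal sketch of the bucket pattern with pymongo; the collection and field names (sensor_readings, device_id, bucket_hour, and so on) and the 200-reading bucket size are illustrative assumptions, not anything MongoDB prescribes.

```python
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
readings = client.iot.sensor_readings  # hypothetical database/collection


def record_reading(device_id: str, temperature: float, voltage: float) -> None:
    """Append a measurement to the current hourly bucket for this device.

    Instead of one document per reading, each document holds up to 200
    measurements for one device and hour, which keeps the index small and
    the document sizes predictable as the fleet grows.
    """
    now = datetime.now(timezone.utc)
    readings.update_one(
        {
            "device_id": device_id,
            "bucket_hour": now.replace(minute=0, second=0, microsecond=0),
            "count": {"$lt": 200},  # a full bucket no longer matches,
        },                          # so the upsert starts a fresh one
        {
            "$push": {"measurements": {
                "ts": now, "temperature": temperature, "voltage": voltage,
            }},
            "$inc": {"count": 1},
        },
        upsert=True,
    )


record_reading("device-42", temperature=21.7, voltage=3.3)
```

One common way to get the horizontal scaling asked about above is to then shard the collection on device_id (possibly compounded with the time bucket).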

Should we plan the purchase module or inventory first for data warehouse design?

I have been developing a data warehouse for analytical needs. As I am a new learner, I started with my own work area of manufacturing-related tables and built reports for those. I have been following the Kimball design approach, adding new processes step by step.
Now, I would like to start working on inventory or purchase transactions.
I would like to know the recommended order of processes for a good data warehouse design: for example, should I work on purchase transactions or inventory transactions first, and so on for all other processes? For now, I am considering a manufacturing organization. Is such a practice documented anywhere?
Thank you in advance for the help.

Data sets for Data warehouse research

I am working on a data warehouse project for my final-year degree. My intention is to create a data warehouse within the Business Intelligence spectrum that incorporates customer information from a company's point-of-sale data for, say, a given year or location. However, my problem is finding the right raw data for this purpose, as I have been in contact with tens of companies who refused to share their data!
Is there any source for a large free data set I can use to fit the above scenario, i.e. for Business Intelligence purposes?
Any help would be greatly appreciated!

Which DB for storing geolocation (coordinates) for many users and checking if they are close to each other in real time?

An iOS app will send coordinates to the server from time to time through a REST API. On the server, registered users are stored in a DB with their coordinates. Each user may have many friends whom he wants to track to see if they are close to him. So after each coordinate update (from a user), the server has to check whether that user's friends are close to him (within a radius that is also stored in the DB per user). It has to handle MANY users.
So there are many writes to the DB all the time, and after each write there is a computation to find the friends that are nearby.
As the server technology I would like to use node.js, but I am wondering which DB (or what type of DB) to use. Are there any databases which will help with all the geolocation calculations, or which are specially "tweaked" for such computations?
What would you recommend?
MongoDB has geospatial indexes and is pretty common in the node.js world. It has a few query types that are useful when working with geo data.
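As a hedged sketch of how that looks (shown here with pymongo rather than a node.js driver; collection and field names such as users, location, friend_ids, and radius_m are my own placeholders):

```python
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")
users = client.app.users  # hypothetical database/collection

# A 2dsphere index lets MongoDB answer "who is near this point?" queries.
users.create_index([("location", GEOSPHERE)])


def update_location(user_id, lon: float, lat: float) -> list:
    """Store the user's new position, then return friends within their radius."""
    user = users.find_one_and_update(
        {"_id": user_id},
        {"$set": {"location": {"type": "Point", "coordinates": [lon, lat]}}},
    )
    # $nearSphere + $maxDistance (in meters) pushes the distance math into the DB.
    return list(users.find({
        "_id": {"$in": user.get("friend_ids", [])},
        "location": {
            "$nearSphere": {
                "$geometry": {"type": "Point", "coordinates": [lon, lat]},
                "$maxDistance": user.get("radius_m", 1000),
            }
        },
    }))
```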

What database does Google use?

Is it Oracle or MySQL or something they have built themselves?
Bigtable: A Distributed Storage System for Structured Data

Bigtable is a distributed storage system (built by Google) for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products.
Some features
fast and extremely large-scale DBMS
a sparse, distributed multi-dimensional sorted map, sharing characteristics of both row-oriented and column-oriented databases.
designed to scale into the petabyte range
it works across hundreds or thousands of machines
it is easy to add more machines to the system and automatically start taking advantage of those resources without any reconfiguration
each table has multiple dimensions (one of which is a field for time, allowing versioning)
tables are optimized for GFS (Google File System) by being split into multiple tablets - segments of the table split along a row boundary chosen so that each tablet will be ~200 megabytes in size.
Architecture
BigTable is not a relational database. It does not support joins nor does it support rich SQL-like queries. Each table is a multidimensional sparse map. Tables consist of rows and columns, and each cell has a time stamp. There can be multiple versions of a cell with different time stamps. The time stamp allows for operations such as "select 'n' versions of this Web page" or "delete cells that are older than a specific date/time."
In order to manage the huge tables, Bigtable splits tables at row boundaries and saves them as tablets. A tablet is around 200 MB, and each machine saves about 100 tablets. This setup allows tablets from a single table to be spread among many servers. It also allows for fine-grained load balancing: if one tablet is receiving many queries, its machine can shed other tablets or move the busy tablet to another machine that is not so busy. Also, if a machine goes down, its tablets may be picked up by many other servers so that the performance impact on any given machine is minimal.
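The row-range sharding this describes can be illustrated in a few lines; the split keys and server names below are invented, and a real Bigtable client goes through the metadata tablets described next rather than a local list:

```python
import bisect

# Tablet i covers the row range [split_keys[i-1], split_keys[i]); a sorted
# list of split points is enough to route any row key with a binary search.
split_keys = ["f", "m", "t"]                          # 4 tablets over the key space
tablet_servers = ["srv-a", "srv-b", "srv-c", "srv-d"]


def server_for_row(row_key: str) -> str:
    """Find which tablet server owns a row in O(log n)."""
    return tablet_servers[bisect.bisect_right(split_keys, row_key)]


print(server_for_row("com.cnn.www"))  # srv-a: rows below "f"
print(server_for_row("org.apache"))   # srv-c: rows in ["m", "t")
```

Because the split points are themselves row keys, rebalancing a hot tablet just means handing its range to another server and updating this routing information, which is the fine-grained load balancing mentioned above.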
Tables are stored as immutable SSTables and a tail of logs (one log per machine). When a machine runs out of system memory, it compresses some tablets using Google proprietary compression techniques (BMDiff and Zippy). Minor compactions involve only a few tablets, while major compactions involve the whole table system and recover hard-disk space.
The locations of Bigtable tablets are stored in cells. The lookup of any particular tablet is handled by a three-tiered system. The clients get a pointer to the META0 table, of which there is only one. The META0 table keeps track of many META1 tablets that contain the locations of the tablets being looked up. Both META0 and META1 make heavy use of pre-fetching and caching to minimize bottlenecks in the system.
Implementation
BigTable is built on Google File System (GFS), which is used as a backing store for log and data files. GFS provides reliable storage for SSTables, a Google-proprietary file format used to persist table data.
Another service that BigTable makes heavy use of is Chubby, a highly-available, reliable distributed lock service. Chubby allows clients to take a lock, possibly associating it with some metadata, which they can renew by sending keep-alive messages back to Chubby. The locks are stored in a filesystem-like hierarchical naming structure.
There are three primary server types of interest in the Bigtable system:
Master servers: assign tablets to tablet servers, keep track of where tablets are located, and redistribute tasks as needed.
Tablet servers: handle read/write requests for tablets and split tablets when they exceed size limits (usually 100 MB - 200 MB). If a tablet server fails, then 100 tablet servers each pick up one new tablet and the system recovers.
Lock servers: instances of the Chubby distributed lock service. Many actions within BigTable require the acquisition of locks, including opening tablets for writing, ensuring that there is no more than one active Master at a time, and access control checking.
Example from Google's research paper:
A slice of an example table that stores Web pages. The row name is a reversed URL. The contents column family contains the page contents, and the anchor column family contains the text of any anchors that reference the page. CNN's home page is referenced by both the Sports Illustrated and the MY-look home pages, so the row contains columns named anchor:cnnsi.com and anchor:my.look.ca. Each anchor cell has one version; the contents column has three versions, at timestamps t3, t5, and t6.
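To make the shape of that example concrete, here is the same slice written out as a plain (row key -> column -> timestamp -> value) mapping in Python; this nested dict is only an illustration of the sparse, multidimensional data model, not how Bigtable stores anything:

```python
webtable = {
    "com.cnn.www": {                       # row key: the page URL, reversed
        "contents:": {                     # "contents" column family
            "t6": "<html>...",             # three versions of the page,
            "t5": "<html>...",             # keyed by timestamp
            "t3": "<html>...",
        },
        "anchor:cnnsi.com": {"t9": "CNN"},       # one version per anchor cell
        "anchor:my.look.ca": {"t8": "CNN.com"},
    },
}

# "Select the newest version of this Web page" (string comparison works here
# only because these toy timestamps are single digits):
newest = max(webtable["com.cnn.www"]["contents:"])
print(webtable["com.cnn.www"]["contents:"][newest])  # the t6 contents
```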
API
Typical operations in BigTable are the creation and deletion of tables and column families, writing data, and deleting columns from a row. BigTable provides these functions to application developers in an API. Transactions are supported at the row level, but not across several row keys.
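For a feel of what those operations look like today, here is a hedged sketch against the google-cloud-bigtable Python client (which postdates this answer; the project, instance, and table IDs are placeholders):

```python
import datetime

from google.cloud import bigtable
from google.cloud.bigtable import column_family

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Create a table with one column family that keeps at most 3 cell versions,
# mirroring the three timestamped versions in the webtable example above.
table = instance.table("webtable")
table.create(column_families={"contents": column_family.MaxVersionsGCRule(3)})

# Write one cell: row key, column family, qualifier, value, timestamp.
row = table.direct_row(b"com.cnn.www")
row.set_cell("contents", b"", b"<html>...",
             timestamp=datetime.datetime.now(datetime.timezone.utc))
row.commit()  # all mutations to a single row commit atomically

# Read the row back; cell versions are returned newest first.
result = table.read_row(b"com.cnn.www")
print(result.cells["contents"][b""][0].value)
```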
Here is the link to the PDF of the research paper.
And here you can find a video showing Google's Jeff Dean in a lecture at the University of Washington, discussing the Bigtable content storage system used in Google's backend.
It's something they've built themselves - it's called Bigtable.
http://en.wikipedia.org/wiki/BigTable
There is a paper by Google on the database:
http://research.google.com/archive/bigtable.html
Spanner is Google's globally distributed relational database management system (RDBMS), the successor to BigTable. Google claims it is not a pure relational system because each table must have a primary key.
Here is the link to the paper.
Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions. This paper describes how Spanner is structured, its feature set, the rationale underlying various design decisions, and a novel time API that exposes clock uncertainty. This API and its implementation are critical to supporting external consistency and a variety of powerful features: non-blocking reads in the past, lock-free read-only transactions, and atomic schema changes, across all of Spanner.
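The "novel time API" in that abstract is TrueTime. Here is a toy Python sketch of the idea as the paper describes it; the epsilon value and the busy-wait loop are stand-ins for Spanner's GPS and atomic-clock infrastructure:

```python
import time
from dataclasses import dataclass


@dataclass
class TTInterval:
    earliest: float  # true time is guaranteed to be >= earliest
    latest: float    # ... and <= latest


def tt_now(epsilon: float = 0.007) -> TTInterval:
    """TT.now(): an interval bounding the true time, not a single instant."""
    t = time.time()
    return TTInterval(t - epsilon, t + epsilon)


def commit(apply_mutations) -> float:
    """Commit-wait: assign a timestamp s at least TT.now().latest, then hold
    the result until TT.now().earliest > s, so any transaction that starts
    afterwards, on any machine, is guaranteed a strictly larger timestamp."""
    s = tt_now().latest
    apply_mutations()
    while tt_now().earliest <= s:  # wait out the clock uncertainty (~2 epsilon)
        time.sleep(0.001)
    return s


print(commit(lambda: None))
```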
Another database invented by Google is Megastore. Here is the abstract:
Megastore is a storage system developed to meet the requirements of today's interactive online services. Megastore blends the scalability of a NoSQL datastore with the convenience of a traditional RDBMS in a novel way, and provides both strong consistency guarantees and high availability. We provide fully serializable ACID semantics within fine-grained partitions of data. This partitioning allows us to synchronously replicate each write across a wide area network with reasonable latency and support seamless failover between datacenters. This paper describes Megastore's semantics and replication algorithm. It also describes our experience supporting a wide range of Google production services built with Megastore.
As others have mentioned, Google uses a homegrown solution called BigTable and they've released a few papers describing it out into the real world.
The Apache folks have an implementation of the ideas presented in these papers called HBase. HBase is part of the larger Hadoop project which according to their site "is a software platform that lets one easily write and run applications that process vast amounts of data." Some of the benchmarks are quite impressive. Their site is at http://hadoop.apache.org.
Although Google uses BigTable for all their main applications, they also use MySQL for other (perhaps minor) apps.
And it may also be handy to know that BigTable is not a relational database (like MySQL) but a huge, distributed, sorted map, which has very different characteristics. You can play around with a limited version of BigTable yourself on the Google App Engine platform.
Besides the Hadoop project mentioned above, there are many other implementations that try to solve the same problems as BigTable (scalability, availability). I saw a nice blog post yesterday listing most of them here.
Google primarily uses Bigtable.
Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size.
For more information, download the document from here.
Google also uses Oracle and MySQL databases for some of their applications.
Any more information you can add is highly appreciated.
Google services have a polyglot persistence architecture. BigTable is leveraged by most of its services, like YouTube, Google Search, and Google Analytics. The search service initially used MapReduce for its indexing infrastructure but later transitioned to BigTable during the Caffeine release.
Google Cloud Datastore has over 100 applications in production at Google, facing both internal and external users. Applications like Gmail, Picasa, Google Calendar, Android Market & App Engine use Cloud Datastore & Megastore.
Google Trends uses MillWheel for stream processing. Google Ads initially used MySQL and later migrated to F1 DB - a custom-written distributed relational database. YouTube uses MySQL with Vitess. Google stores exabytes of data across commodity servers with the help of the Google File System.
Source: Google Databases: How Do Google Services Store Petabyte-Exabyte Scale Data?
YouTube Database – How Does It Store So Many Videos Without Running Out Of Storage Space?
