What Datastore/Database runs on top of S3?

What Datastore/Database runs on top of Amazon S3 or S3-compatible storage?
I understand that S3 is object storage and thus not a database, but a database must store its data somewhere. So my question is: is there a database or datastore that saves its data on Amazon S3 or S3-compatible storage instead of on a local file system?

Here are some databases and database-like products that use S3 (or can use S3).
Amazon Athena
S3 Select
Apache HBase
Redshift
Also, if you want some theory, here's the paper Building a Database on S3 (Brantner et al., SIGMOD 2008).
This is by no means exhaustive, but it’s probably a good place to start.
Update
Here are some more that aren't owned by AWS.
Cassandra
Hadoop—this isn't a database, but S3 already provides you with key-value storage, and Hadoop can provide you with querying.
s3-db
Ultimately, you need to consider what sort of query functionality you need and what sort of consistency you can tolerate.
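
For a flavor of what querying data in place on S3 looks like, here is a minimal sketch using S3 Select through boto3. The bucket, key, and column names are made up for illustration, and it assumes a CSV object with a header row:

import boto3

# Query a single CSV object in place with S3 Select (hypothetical bucket/key).
s3 = boto3.client("s3")
resp = s3.select_object_content(
    Bucket="my-bucket",
    Key="data/users.csv",
    ExpressionType="SQL",
    Expression="SELECT s.name, s.age FROM s3object s WHERE CAST(s.age AS INT) > 30",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},  # first row = column names
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; matching records arrive as raw bytes.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode(), end="")

Athena takes the same idea further: it runs full SQL across many objects at once and writes its results back to S3.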

Related

S3 Metadata storage, SQL or NoSQL?

I see lots of questions, articles, and answers on using a DynamoDB (NoSQL) database to store metadata for S3. I actually have more experience with relational databases than with NoSQL. Wouldn't a relational database be the best choice for the metadata, given all the different parameters (metadata) and relationships you might want to search on for an image stored in S3? That is what I would think. Also, when I look at this link, DynamoDB seems a bit problematic:
Is DynamoDB suitable as an S3 Metadata index?
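
To make the trade-off concrete, here is a minimal sketch (all table and column names hypothetical) of the kind of relational metadata index the question has in mind, using Python's built-in sqlite3 purely for illustration:

import sqlite3

# Hypothetical relational schema for indexing S3 object metadata.
conn = sqlite3.connect("s3_metadata.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS images (
    s3_key      TEXT PRIMARY KEY,   -- object key in the bucket
    bucket      TEXT NOT NULL,
    width_px    INTEGER,
    height_px   INTEGER,
    captured_at TEXT                -- ISO-8601 timestamp
);
CREATE TABLE IF NOT EXISTS image_tags (
    s3_key TEXT REFERENCES images(s3_key),
    tag    TEXT NOT NULL,
    PRIMARY KEY (s3_key, tag)       -- one image, many tags
);
""")
conn.execute("INSERT OR REPLACE INTO images VALUES (?, ?, ?, ?, ?)",
             ("photos/cat.jpg", "my-bucket", 1024, 768, "2023-05-01T12:00:00Z"))
conn.execute("INSERT OR REPLACE INTO image_tags VALUES (?, ?)",
             ("photos/cat.jpg", "cat"))

# A multi-attribute search across relationships: natural in SQL,
# awkward to model efficiently in a pure key-value store.
for (key,) in conn.execute("""
    SELECT i.s3_key FROM images i
    JOIN image_tags t ON t.s3_key = i.s3_key
    WHERE t.tag = 'cat' AND i.width_px > 800"""):
    print(key)

Whether this beats DynamoDB depends largely on whether your access patterns are known in advance (DynamoDB's strength) or ad hoc and multi-attribute (SQL's strength).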

Which database to choose in order to store data coming from flat files (CSV, HTML)

I need to design a scalable database architecture to store all the data coming from flat files (CSV, HTML, etc.). These files come from Elasticsearch; most of the scripts are written in Python. This architecture should automate most of the daily manual processing currently done in Excel, CSV, and HTML, and all the data will be retrieved from this database instead of being populated into CSV and HTML files.
Database requirements:
The database must perform well for day-to-day data retrieval, and it will be queried by multiple teams.
An ER model and schema will be developed for the data, with logical relationships.
The database can be within cloud storage.
The database must be highly available and able to retrieve data quickly.
This database will be utilized to create multiple dashboards.
The ETL jobs will be responsible for storing data in the database.
There will be many reads from the database and multiple writes each day, with lots of data coming from Elasticsearch and some cloud tools.
I am considering RDS, Azure SQL, DynamoDB, Postgres, or Google Cloud. I would like to know which database engine would be the better solution given these requirements. I also want to know how the ETL process should be designed: Lambda or Kappa architecture?
To store relational data like CSV and Excel files, you can use a relational database. For flat files like HTML, which don't need to be queried, you can simply use a storage account with any cloud service provider, for example Azure.
Azure SQL Database is a fully managed platform-as-a-service (PaaS) database engine that handles most database management functions, such as upgrading, patching, backups, and monitoring, without user involvement. It always runs on the latest stable version of the SQL Server database engine and a patched OS, with 99.99% availability, and you can restore the database to any point in time. This should be the best choice for storing the relational data and running SQL queries.
Azure Blob Storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Your HTML files can be stored here.
The ETL jobs can be built with Azure Data Factory (ADF). It lets you connect to almost any data source (including sources outside Azure), transform the stored dataset, and write it to the desired destination. The Data Flow transformation in ADF can handle all the ETL-related tasks.
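
Whichever orchestrator runs the jobs (ADF or a plain scheduler), the core of each ETL step looks roughly like this Python sketch; the connection string, file path, and table name are placeholders, and it assumes pandas and SQLAlchemy are available:

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string -- substitute your own database.
engine = create_engine("postgresql://etl_user:secret@db-host:5432/reporting")

def load_csv(path: str, table: str) -> None:
    """One ETL step: extract a flat file, apply light cleanup, load into SQL."""
    df = pd.read_csv(path)
    # Normalize column names so downstream dashboards can rely on them.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.dropna(how="all")  # drop fully empty rows
    df.to_sql(table, engine, if_exists="append", index=False)

load_csv("exports/daily_metrics.csv", "daily_metrics")  # hypothetical file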

What "cloud storage " means in snowflake database storage layer?

I am confused by the explanations given on multiple forums about database storage in Snowflake. When they say that data is stored as columnar, optimized files in cloud storage, does that mean an S3 bucket or Azure Blob Storage? Does Snowflake store the data itself, or does it use the cloud host's storage?
According to the paper The Snowflake Elastic Data Warehouse (2016) - see paragraph 3.1 Data Storage:
Snowflake initially chose Amazon S3 to store table data, query results, and temp data generated by query operators (e.g. massive joins) once local disk space is exhausted, as well as for large query results. Metadata such as catalog objects, which table consists of which S3 files, statistics, locks, transaction logs, etc. is stored in a scalable, transactional key-value store, which is part of the Cloud Services layer.
Since then, and as of today, Snowflake has also been made available to run on Azure and Google Cloud.
Therefore, when setting up a Snowflake account, the user is presented with the option of a cloud storage provider to use: for AWS Snowflake will use Simple Storage Service (S3), for Azure it will use Azure Blob Storage, and for Google Cloud it will use Google Cloud Storage (GCS).
The database storage is files in S3 on AWS, Azure Blob Storage on Azure, and Google Cloud Storage buckets on GCP. Compute and storage are completely separate, unlike server-based RDBMSs such as Redshift, where the servers provide both compute and storage. See the Snowflake documentation for more detail.
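
To see that separation in practice, here is a hedged sketch using the snowflake-connector-python package; the account, credentials, and object names are all placeholders. Compute (a virtual warehouse) is created and sized independently, while the table data stays in the Snowflake-managed cloud storage described above:

import snowflake.connector

# All credentials and names below are hypothetical.
conn = snowflake.connector.connect(
    user="analyst",
    password="***",
    account="xy12345.us-east-1",
)
cur = conn.cursor()

# Compute is provisioned on demand, independently of where the data lives
# (columnar files in S3, Azure Blob Storage, or GCS, managed by Snowflake).
cur.execute("CREATE WAREHOUSE IF NOT EXISTS adhoc_wh WITH WAREHOUSE_SIZE = 'XSMALL'")
cur.execute("USE WAREHOUSE adhoc_wh")

# Running the same query on a larger warehouse later requires no data
# movement -- only a change of compute.
cur.execute("SELECT COUNT(*) FROM my_db.public.events")  # hypothetical table
print(cur.fetchone())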

Where are snowflake tables stored?

I'm considering Snowflake for a customer, but I can't tell from the documentation where they store the data. It seems to be S3, but then why such expensive storage costs? Is the data in the user's S3 or in Snowflake's S3?
Snowflake is a cloud-based analytical data warehouse provided as SaaS. It is not built on an existing database or "big data" software platform such as Hadoop, and it is available on the following cloud environments:
AWS
Azure
GCP
Your storage and computation region is determined by your choice of cloud environment. If you select Snowflake on AWS, your data will be stored in a Snowflake-managed S3 bucket (by default, Snowflake compresses your data before storing it in the final target table). Which cloud your data is stored on is up to you and your business requirements.

Presto integration with MSSQL

I'm looking for a tutorial or something that will let me learn Presto step by step.
The idea is to start by integrating files and MSSQL, which is my area of expertise.
Unfortunately, since this is a relatively new area, I haven't found much beyond the Facebook page or the Presto.io page, and that is not enough for someone who wants to get to know the big data world from scratch.
I would appreciate your help and/or orientation in this area.
Presto has two primary use cases:
querying data stored in a cluster (on Hadoop's HDFS) or in a cloud (e.g. Amazon S3)
data federation, i.e. querying (and joining) data from multiple data sources (e.g. HDFS, S3, traditional RDBMS like PostgreSQL or SQL Server)
As far as SQL Server support is concerned, Presto has supported connecting to SQL Server since https://github.com/prestosql/presto/commit/072440cbb2c8df2a689c4c903dd325013eae41a0.
When it comes to querying files, Presto uses the Hive Metastore to keep track of metadata (everything besides actually reading the data), so the files must reside on HDFS or S3 to be accessible (other cloud data stores like Azure Blob Storage are, AFAIK, not supported yet).
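
To tie the two use cases together, here is a hedged sketch using the presto-python-client package (pip install presto-python-client). It assumes a Presto coordinator at presto-host:8080 with two catalogs already configured, hive (files on HDFS/S3) and sqlserver (the SQL Server connector); all host, schema, and table names are placeholders:

import prestodb

# Connect to a hypothetical Presto coordinator.
conn = prestodb.dbapi.connect(
    host="presto-host",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()

# Federation in one statement: join a table backed by files on S3 (via the
# Hive connector) with a table living in SQL Server.
cur.execute("""
    SELECT o.order_id, c.customer_name
    FROM hive.sales.orders o
    JOIN sqlserver.dbo.customers c ON o.customer_id = c.id
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)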
