The goal I want to achieve is to generate a file export of the table, so that it can afterwards be checked for data (monthly calculations). What I have done so far is to create a backup using the Data Pipeline option from DynamoDB to an S3 bucket, but:
It is taking too long: the pipeline has been running for more than 24 hours, since the table I am exporting is 7 GB in DynamoDB size (which is compressed, so the backup will take even longer to finish);
I will need to do this monthly, which means I only need the data between the first and last day of the month; while the pipeline can create a full backup, I could not find an option to export only the changes made to the table within a specific time window;
The files that the pipeline exports are around 10 MB each, which means hundreds of files instead of a few larger ones (for example, 100 MB or 1 GB files).
In this case I am interested in whether there is a different way to make a full backup of the current data and afterwards export only the changes performed month to month (something like a monthly incremental), rather than ending up with a huge number of 10 MB files.
Any comments, clarifications, code samples, corrections are appreciated.
Thanks for your time.
You have, basically, two options:
Implement your own logic with DynamoDB Streams and process the data yourself (a minimal sketch of this approach follows below).
Use a combination of AWS Glue for ETL processing and, possibly, AWS Athena to query your data from S3. Be careful to use the Apache Parquet format for better query performance, and cache your results somewhere else.
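For the first option, a minimal sketch (in Python, using boto3) of a Lambda handler that consumes a DynamoDB Streams batch and appends the changed items to S3 could look like the following. The bucket name, key prefix, and the grouping scheme are assumptions, and batching/error handling is omitted:

```python
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "my-dynamodb-change-log"   # assumption: your target bucket
PREFIX = "table-changes"            # assumption: key prefix for the change log

def handler(event, context):
    """Triggered by a DynamoDB Stream; writes each batch of changes to S3."""
    now = datetime.now(timezone.utc)
    records = []
    for record in event.get("Records", []):
        records.append({
            "event": record["eventName"],                 # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "new_image": record["dynamodb"].get("NewImage"),
            "approx_time": record["dynamodb"].get("ApproximateCreationDateTime"),
        })
    if records:
        # One object per invocation, grouped by year/month, so a monthly
        # check only has to read everything under that prefix.
        key = f"{PREFIX}/{now:%Y/%m}/{context.aws_request_id}.json"
        s3.put_object(Bucket=BUCKET, Key=key,
                      Body=json.dumps(records).encode("utf-8"))
    return {"written": len(records)}
```

With a large batch size on the event source mapping you get fewer, larger objects, which also addresses the complaint about hundreds of small export files.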
We have some files between 500 KB and 20 MB in size in a SharePoint portal. We would like to convert those files to CSV and then stage them to Snowflake. There is no real need for real-time ingestion. I am thinking of two options. Which option would be better?
Load the files (CSV) into the cloud provider's object storage, create an external stage, then have a Python program scheduled every hour to ingest the data from the stage into a Snowflake table
Use SNOWPIPE
I am more inclined towards #1, primarily because I will have control over the warehouse. Also, it will allow me to batch up the files and then load them into Snowflake.
If you don't need to load your source data in real time, option 1 makes more sense, but you will need to manage and maintain it.
Option 2 is set up once and loads the files automatically, but it will be more costly because you don't have control over warehouse usage.
I have a similar situation and am using an option-1 style load.
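As a rough illustration of an option-1 style load, a Python job scheduled hourly that runs COPY INTO from the external stage could look like the sketch below. It uses the snowflake-connector-python package; the account, warehouse, stage, and table names are placeholders:

```python
import snowflake.connector

# All identifiers below are placeholders for your own account objects.
def hourly_load():
    conn = snowflake.connector.connect(
        account="my_account",        # assumption
        user="loader_user",          # assumption
        password="***",              # use a secrets manager in practice
        warehouse="LOAD_WH",         # option 1: you control which warehouse runs this
        database="RAW",
        schema="LANDING",
    )
    try:
        cur = conn.cursor()
        # COPY INTO only loads files not already loaded (Snowflake keeps
        # per-table load metadata), so hourly re-runs are safe.
        cur.execute("""
            COPY INTO LANDING.SHAREPOINT_FILES
            FROM @LANDING.SHAREPOINT_STAGE
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
            ON_ERROR = 'CONTINUE'
        """)
        print(cur.fetchall())        # per-file load results
    finally:
        conn.close()

if __name__ == "__main__":
    hourly_load()    # schedule with cron, Airflow, etc.
```

Because the load metadata tracks which files were already copied, batching files in the stage between runs is safe and lets you control when the warehouse spins up.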
I'm expecting to stream 10,000 (small, ~10 KB) files per day into Snowflake via S3, distributed evenly throughout the day. I plan on using the S3 event notifications as outlined in the Snowpipe documentation to automate this. I also want to persist these files on S3 independently of Snowflake. I have two choices on how to ingest from S3:
s3://data-lake/2020-06-02/objects
/2020-06-03/objects
.
.
/2020-06-24/objects
or
s3://snowpipe specific bucket/objects
From a best practices / billing perspective, should I ingest directly from my data lake - meaning my 'CREATE or replace STORAGE INTEGRATION' and 'CREATE or replace STAGE' statements reference the top-level 's3://data-lake' above? Or should I create a dedicated S3 bucket for the Snowpipe ingestion and expire the objects in that bucket after a day or two?
Does Snowpipe have to do more work (and hence bill me more) to ingest if I give it a top-level folder that has thousands and thousands of objects in it, than if I give it a small, tight, controlled, dedicated folder with only a few objects in it? Does the S3 notification service tell Snowpipe what is new when the notification goes out, or does Snowpipe have to do a LIST and compare it to the list of objects already ingested?
Documentation at https://docs.snowflake.com/en/user-guide/data-load-snowpipe-auto-s3.html doesn't offer up any specific guidance in this case.
The INTEGRATION receives a message from AWS whenever a new file is added. If that file matches the file format, file path, etc. of your STAGE, then the COPY INTO statement from your pipe is run on that file.
There is minimal overhead for the integration to receive extra messages that do not match your STAGE filters, and no overhead that I know of for other files in that source.
So I am fairly certain that this will work fine either way as long as your STAGE is set up correctly.
We have been using a similar setup for the last 6 months, with ~5,000 permanent files per day going into a single Azure storage account, divided into directories that correspond to different Snowflake STAGEs, with no noticeable extra lag in the copying.
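To make that concrete, a pipe scoped to the relevant part of the data lake might be created roughly as below (shown through the Python connector to stay consistent with the rest of this thread; the integration, stage, pipe, and table names and the PATTERN are placeholders, not exact syntax for your layout):

```python
import snowflake.connector

# Placeholder names throughout; adjust to your own account objects.
DDL = [
    # Stage restricted to the data-lake location Snowpipe should care about.
    """CREATE OR REPLACE STAGE RAW.LANDING.LAKE_STAGE
         URL = 's3://data-lake/'
         STORAGE_INTEGRATION = LAKE_INT
         FILE_FORMAT = (TYPE = CSV)""",
    # Auto-ingest pipe: new-file notifications matching this stage's
    # location and pattern trigger the COPY below.
    """CREATE OR REPLACE PIPE RAW.LANDING.LAKE_PIPE
         AUTO_INGEST = TRUE
         AS COPY INTO RAW.LANDING.EVENTS
            FROM @RAW.LANDING.LAKE_STAGE
            PATTERN = '.*objects.*'""",
]

conn = snowflake.connector.connect(
    account="my_account", user="admin_user", password="***",
    role="SYSADMIN", warehouse="ADMIN_WH",
)
try:
    cur = conn.cursor()
    for stmt in DDL:
        cur.execute(stmt)
    # SHOW PIPES exposes the notification_channel (an SQS ARN) that the
    # S3 bucket's event notifications need to target.
    cur.execute("SHOW PIPES IN SCHEMA RAW.LANDING")
    print(cur.fetchall())
finally:
    conn.close()
```

Only files that land under the stage location and match the pattern are copied, so other objects in the lake add essentially no work for the pipe.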
I have a "database choice" and arhitecture question.
Use-case:
Clients will upload large .json files (or another format like .tsv, it is irrelevant) where each line is data about one of their customers (e.g. name, address, etc.)
We need to stream this data later on in order to process it, and store the results, which will also be a large file where each line is data about one customer (approximately the same as the uploaded file).
My requirements:
Streaming should be as fast as possible (e.g. > 1000 rps), and we may have multiple processes running in parallel (for multiple clients)
The database should be scalable and fault tolerant. Because many GB of data could easily be uploaded, it should be easy for me to automatically add new commodity instances (on AWS) if storage runs low.
The database should have some kind of replication, because we don't want to lose data.
No index is required since we are just streaming data.
What database would you suggest for this problem? We tried uploading the data to Amazon S3 and letting it take care of scaling etc., but reads/streaming from it are slow.
Thanks,
Ivan
Initially uploading the files to S3 is fine, but then pick them up and push each line to Kinesis (or MSK, or even Kafka on EC2s if you prefer); this step is sketched after the list below. From there, you can hook up the stream-processing framework of your choice (Flink, Spark Streaming, Samza, Kafka Streams, Kinesis KCL) to do transformations and enrichment, and finally you'll want to pipe the results into a storage stack that allows streaming appends. A few obvious candidates:
HBase
Druid
Keyspaces for Cassandra
Hudi (or maybe LakeFS?) on top of S3
Which one you choose is kind of up to your needs downstream in terms of query flexibility, latency, integration options/standards, etc.
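For the "push each line to Kinesis" step mentioned above, a hedged sketch with boto3 might look like this (the stream and bucket names are placeholders, and real code should handle retries and the PutRecords limits of 500 records / 5 MB per call more carefully):

```python
import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")

STREAM = "customer-lines"          # assumption: your Kinesis stream name
BATCH = 500                        # PutRecords accepts at most 500 records per call

def stream_file(bucket: str, key: str) -> None:
    """Read an uploaded file line by line from S3 and push each line to Kinesis."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    batch = []
    for raw_line in body.iter_lines():
        if not raw_line:
            continue
        batch.append({
            "Data": raw_line,          # one customer record per line
            "PartitionKey": key,       # simple choice: shard by source file
        })
        if len(batch) == BATCH:
            kinesis.put_records(StreamName=STREAM, Records=batch)
            batch = []
    if batch:
        kinesis.put_records(StreamName=STREAM, Records=batch)

# Example: stream_file("uploads-bucket", "client-x/customers.json")
```

Partitioning by file name keeps a file's lines ordered on a single shard; switch to a per-customer partition key if you need more parallelism downstream.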
I have a large dataset (>40 GB) which I want to store in S3 and then query using Athena.
As suggested by this blog post, I could store my data in the following hierarchical directory structure to enable using MSCK REPAIR to automatically add partitions while creating a table from my dataset.
s3://yourBucket/pathToTable/<PARTITION_COLUMN_NAME>=<VALUE>/<PARTITION_COLUMN_NAME>=<VALUE>/
However, this requires me to split my dataset into many smaller data files, each stored under a nested folder depending on the partition keys.
Although using partitions could reduce the amount of data scanned by Athena and therefore speed up a query, would managing a large number of small files cause a performance issue for S3? Is there a trade-off here I need to consider?
Yes, you may experience a significant decrease in efficiency with small files and lots of partitions.
There is a good explanation and suggestion here on file sizes and the number of partitions: files should be larger than 128 MB to compensate for the overhead.
Also, I performed some experiments on a very small dataset (1 GB), partitioning my data by minute, hour and day. The scanned data decreases when you make the partitions smaller, but the time spent on the query increases a lot (40 times slower in some experiments).
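A common way to get both benefits is to compact the small raw files into a few larger Parquet files per partition before pointing Athena at them. A rough sketch with pandas/pyarrow follows; the paths, the 'dt' partition column, and the daily granularity are assumptions:

```python
import glob

import pandas as pd

def compact_day(day: str) -> None:
    # Read all the small CSVs for one day (placeholder local path).
    frames = [pd.read_csv(path) for path in glob.glob(f"raw/{day}/*.csv")]
    df = pd.concat(frames, ignore_index=True)
    df["dt"] = day                                   # Hive-style partition column
    # Write one (or a few) larger Parquet files under dt=<day>/ instead of
    # many small CSVs: Athena scans less and the file count stays manageable.
    df.to_parquet("s3://yourBucket/pathToTable/",    # needs s3fs installed
                  engine="pyarrow", partition_cols=["dt"])

compact_day("2020-06-01")
```

Aim for output files above the ~128 MB mark mentioned above; after writing, a single MSCK REPAIR TABLE (or ALTER TABLE ... ADD PARTITION) registers the new dt=... folder.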
I will try to get into it without veering too much into the realm of opinion.
For the use cases in which I have used Athena, 40 GB is actually a very small dataset by the standards of what the underlying technology (Presto) is designed to handle. According to the Presto web page, Facebook uses the underlying technology to query their 300 PB data warehouse. I routinely use it on datasets between 500 GB and 1 TB in size.
Considering the underlying S3 technology, S3 was used to host Dropbox and Netflix, so I doubt most enterprises could come anywhere near taxing the storage infrastructure. Where you may have heard about performance issues with S3, it relates to websites storing many small pieces of static content in files scattered across S3. In that case, a delay in retrieving one of these small pieces of content might affect the user experience on the larger site.
Related Reading:
Presto
I have the following scenario:
Measurements are uploaded through a web service in form of files
Those files are later copied to HDFS
Each measurement contains a number of features (values), for one or more parameters
Measurements might have a different number of values
Measurements are processed using machine learning algorithms on Hadoop
Not all measurements are processed at once, only those for a certain user and a certain time period (e.g. perform processing on files from user X uploaded during period Y-Z)
Intermediate results are stored on HDFS, as well as the final result
My question is related to the second point - those files are later copied to HDFS - and I'm worried that the large number of small files (e.g. 1 MB) could be a problem.
My idea is to store those files in a database instead, so I would avoid the small-files problem and also be able to query the data (select data for a user for a period). Is that a better approach?
If the answer is positive, which databases can I use? I need the database to be:
Compatible with Hadoop (big data)
Able to store rows with a varying number of values (as in time series)
Able to retrieve measurements for a certain user for a certain period
Able to provide records as input to a MapReduce job
I think that HBase is a perfect fit for your needs.
I also had the "small files problem", and I solved it using HBase.
Storing small files directly in HDFS is bad practice and can be a problem.
From the HBase project site:
Apache HBase is the Hadoop database. Use it when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
HBase is made for Hadoop
Rows can store different columns within a column family, and updated values carry a timestamp, so you can go back through the history of a cell
HBase and Hadoop are made for MapReduce jobs (rows can be the input/output of a job)
In my case I had a lot of small files (200 KB - 1 MB), and now I store these files in a table with some columns for header/information, a column for the binary content of the file, and the file name as the row key (the file name is a UUID); a sketch of this layout follows below.
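A minimal sketch of that table layout using the happybase client (the Thrift host, table name, and column families are assumptions) could look like:

```python
import uuid

import happybase

# Assumed: an HBase Thrift server is reachable and the table below exists with
# column families 'info' (metadata) and 'file' (binary content).
connection = happybase.Connection("hbase-thrift-host")   # placeholder host
table = connection.table("measurements")

def store_measurement(user: str, uploaded_at: str, payload: bytes) -> str:
    """Store one small uploaded file as a single HBase row."""
    row_key = str(uuid.uuid4())                # file name / row key is a UUID
    table.put(row_key.encode(), {
        b"info:user": user.encode(),
        b"info:uploaded_at": uploaded_at.encode(),
        b"file:content": payload,              # whole small file in one cell
    })
    return row_key

# Example usage:
# key = store_measurement("userX", "2020-06-02T10:00:00Z", open("m.bin", "rb").read())
```

Note that a pure UUID row key will not support the "user X in period Y-Z" scan directly; a composite key such as user#timestamp#uuid lets a simple prefix scan cover that query.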