We have 900+ columns coming into a stage. Before processing, I want to check that the position of the columns has not changed in the inbound files. What is the best way to do this?
Snowflake supports using standard SQL to query data files located in an internal (i.e. Snowflake) stage or named external (Amazon S3, Google Cloud Storage, or Microsoft Azure) stage. This can be useful for inspecting/viewing the contents of the staged files, particularly before loading or after unloading data.
Details & Syntax: https://docs.snowflake.com/en/user-guide/querying-stage.html#querying-data-in-staged-files
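For example, if the file format does not skip the header row, a query against the stage can pull back just the header so it can be compared with the expected column order. A minimal sketch, assuming a named stage @my_stage and a CSV file format my_csv_format with SKIP_HEADER = 0 (the stage, file, and format names are illustrative):

    -- Read only the header row of a staged file; assumes my_csv_format
    -- does not skip the header (SKIP_HEADER = 0).
    SELECT metadata$filename,
           t.$1, t.$2, t.$3        -- ...continue up to the 900+th position
    FROM @my_stage/inbound_file.csv (FILE_FORMAT => 'my_csv_format') t
    WHERE metadata$file_row_number = 1;

The returned header values can then be compared per position against a reference list (for example via a stored procedure or a simple EXCEPT against a spec table) before running COPY INTO.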
Related
Every time my Azure pipeline runs, a new file gets created in Azure Data Lake Storage. I want the external table I already created to point to the latest file created in the data lake.
I have multiple Parquet files for the same table in blob storage, and we want to read only the latest Parquet file through an external table in Snowflake.
Have you checked out this section in the Snowflake documentation? It covers the steps required to configure automatic refresh of external tables using Azure Event Grid. If this is not suitable for your use case, can you provide more detail on your issue and why it does not fit?
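As a rough sketch of what that setup looks like, assuming an external stage @azure_stage over the data lake container and a notification integration MY_AZURE_NOTIF_INT created per the Event Grid instructions (all names here are illustrative):

    -- External table whose file list refreshes automatically via Event Grid
    CREATE OR REPLACE EXTERNAL TABLE my_ext_table (
        id NUMBER AS (value:id::NUMBER)      -- one example column from the Parquet data
    )
    INTEGRATION = 'MY_AZURE_NOTIF_INT'       -- Azure notification integration
    WITH LOCATION = @azure_stage/sales/
    AUTO_REFRESH = TRUE
    FILE_FORMAT = (TYPE = PARQUET);

    -- If only the most recent file should be read and file names sort by time,
    -- the METADATA$FILENAME pseudocolumn can filter down to the latest one:
    SELECT *
    FROM my_ext_table
    WHERE metadata$filename = (SELECT MAX(metadata$filename) FROM my_ext_table);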
I want to store images in a SQL database. The size of each image is between 50 KB and 1 MB. I was reading about FILESTREAM and FileTable but I don't know which to choose. Each row will have 2 images and some other fields.
The images will never be updated/deleted and about 3000 rows will be inserted a day.
Which is recommended in this situation?
Originally it was always considered a bad idea to store files (i.e. binary data) in a database. The usual workaround is to store the file path in the database and ensure that a file actually exists at that path. It was possible to store files in the database, though, with the varbinary(MAX) data type.
FILESTREAM was introduced in SQL Server 2008; it handles the varbinary column by storing only a pointer in the database files and keeping the actual data in separate files on the filesystem, dramatically improving performance.
FileTable was introduced with SQL Server 2012 and is an enhancement over FILESTREAM, because it provides metadata directly to SQL and allows access to the files from outside SQL (you can browse to the files).
Advice: Definitely leverage FileStream, and it might not be a bad idea to use FileTable as well.
More reading (short): http://www.databasejournal.com/features/mssql/filestream-and-filetable-in-sql-server-2012.html
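A minimal sketch of what a FILESTREAM-backed table for this scenario could look like, assuming the database already has a FILESTREAM filegroup configured (table and column names are illustrative):

    -- FILESTREAM requires a ROWGUIDCOL column with a UNIQUE constraint.
    CREATE TABLE dbo.Documents (
        DocumentId  UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
        FrontImage  VARBINARY(MAX) FILESTREAM NULL,   -- stored on the filesystem
        BackImage   VARBINARY(MAX) FILESTREAM NULL,
        CreatedAt   DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );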
In SQL Server, BLOBs can be standard varbinary(max) data that stores the data in tables, or FILESTREAM varbinary(max) objects that store the data in the file system. The size and use of the data determines whether you should use database storage or file system storage.
If the following conditions are true, you should consider using FILESTREAM:
Objects that are being stored are, on average, larger than 1 MB.
Fast read access is important.
You are developing applications that use a middle tier for application logic.
For smaller objects, storing varbinary(max) BLOBs in the database often provides better streaming performance.
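Since the images here are 50 KB to 1 MB, the in-database alternative mentioned above is simply a plain varbinary(max) column with no FILESTREAM attribute; a hedged sketch (names are illustrative):

    -- Plain in-database BLOB storage, no FILESTREAM filegroup needed.
    CREATE TABLE dbo.DocumentsInRow (
        DocumentId  INT IDENTITY(1,1) PRIMARY KEY,
        FrontImage  VARBINARY(MAX) NULL,
        BackImage   VARBINARY(MAX) NULL,
        CreatedAt   DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );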
Benefits of the FILETABLE:
Windows API compatibility for file data stored within a SQL Server database. Windows API compatibility includes the following:
Non-transactional streaming access and in-place updates to FILESTREAM data.
A hierarchical namespace of directories and files.
Storage of file attributes, such as created date and modified date.
Support for Windows file and directory management APIs.
Compatibility with other SQL Server features including management tools, services, and relational query capabilities over FILESTREAM and file attribute data.
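If FileTable is chosen, creating one is mostly declarative; a minimal sketch, assuming FILESTREAM and non-transactional access are already enabled at the instance and database level (the table and directory names are illustrative):

    -- A FileTable has a fixed, system-defined schema; you only choose names/options.
    CREATE TABLE dbo.ClientImages AS FILETABLE
    WITH (
        FILETABLE_DIRECTORY = 'ClientImages',
        FILETABLE_COLLATE_FILENAME = database_default
    );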
It depends. I would personally prefer storing a link to the image in the table. It is simpler, and the files in the directory can be backed up separately.
You have to take into account several things:
How you will process the images. Having only a link allows you to easily incorporate images into web pages (with proper configuration of the web server).
How many images there are: if they are stored in the DB and there are a lot of them, this will increase the size of the DB and its backups.
Whether the images change often: in that case it may be better to keep them inside the DB, so the backup reflects their actual state.
I frequently need to validate CSVs submitted from clients to make sure that the headers and values in the file meet our specifications. Typically I do this by using the Import/Export Wizard and having the wizard create the table based on the CSV (the file name becomes the table name, and the headers become the column names). Then we run a set of stored procedures that checks the information_schema for said table(s) and matches that up with our specs, etc.
Most of the time, this involves loading multiple files at a time for a client, which becomes very time consuming and laborious when using the Import/Export Wizard. I tried using an xp_cmdshell SQL script to load everything from a path at once to get the same result, but xp_cmdshell is not supported by Azure SQL DB.
https://learn.microsoft.com/en-us/azure/azure-sql/load-from-csv-with-bcp
The above says that one can load using bcp, but it also requires the table to exist before the import... I need the table structure to mimic the CSV. Any ideas here?
Thanks
If you want to load the data into your target SQL DB, you can use Azure Data Factory (ADF) to upload your CSV files to Azure Blob Storage, and then use a Copy Data activity to load the data from those CSV files into Azure SQL DB tables, without creating the tables upfront.
ADF supports 'auto create' of sink tables. See this and this.
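Once ADF has auto-created the staging tables, the information_schema checks described in the question can run against them unchanged. A hedged sketch of such a check, where dbo.ColumnSpec is a hypothetical spec table holding the expected column name for each ordinal position:

    -- Flag any position where the actual column name differs from the spec.
    SELECT c.TABLE_NAME,
           c.ORDINAL_POSITION,
           c.COLUMN_NAME AS actual_name,
           s.expected_name
    FROM INFORMATION_SCHEMA.COLUMNS AS c
    LEFT JOIN dbo.ColumnSpec AS s
           ON s.table_name       = c.TABLE_NAME
          AND s.ordinal_position = c.ORDINAL_POSITION
    WHERE c.TABLE_NAME = 'ClientFile1'          -- hypothetical auto-created table
      AND (s.expected_name IS NULL OR s.expected_name <> c.COLUMN_NAME)
    ORDER BY c.ORDINAL_POSITION;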
Snowflake allows you to put files of different structures into a single stage using different paths.
On the other hand, we can put files of the same structure into separate stages.
Is a stage a store for several tables of a schema, or is a stage a means to store data for a partitioned table?
What is the usual practice?
There are a few different types of stages in Snowflake:
Internal Stages (Named, User and Table): With these types of stages, you upload the files directly to Snowflake. If you wanted to load data into multiple tables from a single stage you can either use a "Named" or "User" stage. A "Table" stage is automatically created when you create a table and it's for loading data into a single table only. With all internal stages, you typically upload data into Snowflake using SnowSQL from your local machine or a server and then run a copy command into a table.
External Stages: External stages are the most common in my experience. You create a stage inside Snowflake that points to a cloud provider's blob storage service (S3, GCS, Azure Blob). The files are not stored in Snowflake like they are with an internal stage; they stay in S3 (or whatever) and you can run copy commands to load them into any table.
There is no single right answer; you can use either internal (Named or User) or external stages to load into multiple tables. My preference is to use an external stage, so that the data resides outside of Snowflake and can be loaded into other tools too if necessary.
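As an example of the one-stage, many-tables pattern with an external stage (assuming a storage integration named my_s3_int already exists; bucket, path, and table names are illustrative):

    -- One named external stage over a landing area in blob storage
    CREATE OR REPLACE STAGE my_ext_stage
      URL = 's3://my-bucket/landing/'
      STORAGE_INTEGRATION = my_s3_int
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

    -- Different paths under the same stage feed different tables
    COPY INTO orders    FROM @my_ext_stage/orders/;
    COPY INTO customers FROM @my_ext_stage/customers/;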
We have to read data from CSV files and map two files with respect to one column and push data to Cloud SQL using Google Cloud Dataflow.
We are able to read data from CSV files but stuck with the next steps. Please provide me information or links regarding the following:
Merging/joining two flat files based on one column, or on a condition involving multiple columns
Copying the merged PCollection into a Cloud SQL database
Here are some pointers that may be helpful:
https://cloud.google.com/dataflow/model/joins describes the ways to join PCollections in Dataflow.
There is currently no built-in sink for writing to Cloud SQL. However, you can simply process the results of your join using a ParDo that writes each individual record, or writes in batches (flushing periodically or in finishBundle()). If your needs are more complex than that, consider writing a Cloud SQL sink; see https://cloud.google.com/dataflow/model/sources-and-sinks
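On the Cloud SQL side, the statements such a ParDo would issue are ordinary SQL; a sketch of a possible target table and the per-record parameterized insert (all names and columns are hypothetical):

    -- Hypothetical target table for the joined records
    CREATE TABLE merged_records (
        join_key    VARCHAR(64) NOT NULL,
        left_value  VARCHAR(255),
        right_value VARCHAR(255)
    );

    -- Parameterized statement executed per record (or per batch) from the ParDo
    INSERT INTO merged_records (join_key, left_value, right_value)
    VALUES (?, ?, ?);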