In the Snowflake Web UI, you have the option to rename and/or save worksheet "code". Where is this code stored? Is it local to the machine, in a table in Snowflake, or out in the ether of the web?
Example below: Tab named "DEV Acct Perf CE" contains a series of SQL statements. Where are those statements stored?
They are stored in S3, Azure Blob Storage, or Google Cloud Storage, depending on where you're running Snowflake. The worksheets live in Snowflake-managed storage, so the only place you can access them is through the web UI. The newer UI (currently in preview) allows sharing worksheets between users; the current UI is single-user, so you'd need to copy and paste any statements.
Edit: You can see where they're stored, but I think the body of the worksheet is encrypted.
You can see where they're stored by doing this:
ls @~/worksheet_data/;
I downloaded mine and tried gunzip on the body, but that didn't work. I also tried selecting it in Snowflake using the JSON file format, but that didn't work either. I think the body field may be encrypted in addition to being compressed.
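If you want to experiment with those files outside the UI, a rough sketch using the Snowflake Python connector is below. The connection parameters are placeholders, and whether the downloaded body can actually be decompressed or decrypted is exactly the open question above.

import snowflake.connector

# Placeholder credentials - substitute your own account/user/auth method.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")
cur = conn.cursor()

# List the worksheet files sitting in the user stage.
for name, size, md5, last_modified in cur.execute("LIST @~/worksheet_data/"):
    print(name, size, last_modified)

# Pull them down locally for inspection (GET can also be run from SnowSQL
# if your connector version doesn't support stage downloads).
cur.execute("GET @~/worksheet_data/ file:///tmp/worksheets/")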
Last I checked they were stored in your personal internal stage.
Try this:
list @~;
There should be a folder there called worksheets, if I recall correctly. I never tried to open the files to see what they look like, but I did successfully move them from one user to another when I had to recreate one of my users.
https://docs.snowflake.com/en/user-guide/data-load-local-file-system-create-stage.html#user-stages
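For the user-to-user move described above, a minimal sketch (again with the Python connector, placeholder credentials, and the assumption that GET/PUT are allowed for your users) is to download the files as the old user and re-upload them as the new one:

import snowflake.connector

# Download the old user's worksheet files to a local folder.
old = snowflake.connector.connect(account="my_account", user="OLD_USER", password="...")
old.cursor().execute("GET @~/worksheet_data/ file:///tmp/worksheet_backup/")
old.close()

# Re-upload them into the new user's stage; the files are already gzipped,
# so skip the automatic compression.
new = snowflake.connector.connect(account="my_account", user="NEW_USER", password="...")
new.cursor().execute("PUT file:///tmp/worksheet_backup/* @~/worksheet_data/ AUTO_COMPRESS=FALSE")
new.close()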
I have an email folder in Outlook that contains hundreds of emails recording my discussions with the developer of some bespoke software. I want to import these into SQL to create a knowledge base that can be searched to extract all the decisions we have made during the course of the two-year project.
Having searched the net, I found that it is very easy to dump the contents of an email folder into Access using the import data functionality. In fact, I have linked the table, and so believe (I've never used Access before!) that I now have an Access table that is connected in 'real time' to the Outlook folder. This is exactly what I want, BUT in SQL, as that is something I am very familiar with.
So I have tried to import the Access database into SQL (which also appears to be relatively easy) but keep getting the message 'The source database ...contains no visible tables or views'. Checking SQL permissions, I am the owner of this new database.
Two questions, please. First, I can't believe that going through Access is the simplest way to do this, and I presume that I will lose the 'real-time' link - am I right? Second, given that I can see my Access database has a visible table, why am I getting the error?
The easiest and quickest way is to create a VBA macro that populates your SQL database from the Outlook emails. You can build the table structure according to your needs and extract the required information from Outlook using VBA. I'd suggest processing emails in chunks using the Find/FindNext or Restrict methods of the Items class, so you will not hit the reference counter limit. The MailItem properties are described on MSDN.
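If VBA isn't a hard requirement, the same Outlook object model (including the Restrict filter mentioned above) can also be driven from Python with pywin32, with pyodbc doing the inserts. This is only a sketch under made-up assumptions: a subfolder called "Project X" under the Inbox and a pre-created dbo.ProjectEmails table.

import pyodbc
import win32com.client

# Connect to the running Outlook profile and grab the folder of interest.
# "Project X" is a made-up folder name; 6 is the olFolderInbox constant.
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.GetDefaultFolder(6).Folders("Project X")

# Restrict keeps the item count manageable, as suggested above.
items = folder.Items.Restrict("[ReceivedTime] >= '01/01/2022'")

# dbo.ProjectEmails (Subject, Sender, ReceivedTime, Body) is assumed to exist.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;DATABASE=KnowledgeBase;Trusted_Connection=yes;"
)
cur = conn.cursor()

for item in items:
    if item.Class != 43:          # 43 = olMail; skip meeting requests etc.
        continue
    cur.execute(
        "INSERT INTO dbo.ProjectEmails (Subject, Sender, ReceivedTime, Body) VALUES (?, ?, ?, ?)",
        item.Subject,
        item.SenderName,
        item.ReceivedTime.strftime("%Y-%m-%d %H:%M:%S"),  # COM dates may need conversion
        item.Body,
    )

conn.commit()
conn.close()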
BTW, the internal store in Outlook (if you use cached mode) acts like a database. So why do you need to introduce yet another database?
My basic workflow is this: I check an FTP server for a specific file. If the file exists, I pick it up and send it to Blob Storage. My problem is this: I want to filter the file content, e.g. remove the first and last rows, since they don't contain any real data, before I send it to the blob. The first row consists of a time stamp and the last row contains a "row count". The file contains comma-separated fields. How do I accomplish this? Is it even possible?
Thanks
Ausgar
There is no simple solution for this problem. You can try converting the CSV to JSON, deleting the unnecessary data from it, and creating a blob based on that JSON, but that sounds harder than it should be.
Consider using Azure functions:
Azure Functions allows you to run small pieces of code (called "functions") without worrying about application infrastructure. With Azure Functions, the cloud infrastructure provides all the up-to-date servers you need to keep your application running at scale.
It will be much easier to do such file manipulation there.
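As a rough idea of what the function body could look like in Python (the connection string, container and blob names are placeholders; how you receive the CSV text depends on the trigger/binding you choose):

from azure.storage.blob import BlobServiceClient

def strip_header_and_trailer(csv_text: str) -> str:
    # Drop the first line (timestamp) and the last line (row count).
    lines = csv_text.splitlines()
    return "\n".join(lines[1:-1])

def upload_filtered_csv(csv_text: str, connection_string: str,
                        container: str, blob_name: str) -> None:
    filtered = strip_header_and_trailer(csv_text)
    service = BlobServiceClient.from_connection_string(connection_string)
    blob = service.get_blob_client(container=container, blob=blob_name)
    blob.upload_blob(filtered, overwrite=True)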
To start with, I'm not sure if this is possible with the existing features of Snowpipe.
I have an S3 bucket with years of data, and occasionally some of those files get updated (the contents change, but the file name stays the same). I was hoping to use Snowpipe to import these files into Snowflake, as the "we won't reimport files that have been modified" aspect is appealing to me.
However, I discovered that ALTER PIPE ... REFRESH can only be used to import files staged no earlier than seven days ago, and the only other recommendation Snowflake's documentation has for importing historical data is to use COPY INTO .... However, if I use that and one of those old files later gets modified, Snowpipe will import it again, because the load metadata that prevents COPY INTO ... from re-importing the S3 files and the load metadata for Snowpipe are tracked separately, so I can end up with the same file imported twice.
Is there any approach, short of "modify all those files in S3 so they have a recent modified-at timestamp", that would let me use Snowpipe with this?
If you're not opposed to a scripting solution, one option would be to write a script that pulls the set of in-scope object names from AWS S3 and feeds them to the Snowpipe REST API. The code you'd use for this is very similar to what is required when an AWS Lambda calls the Snowpipe REST API after being triggered by an S3 event notification. You can either use the AWS SDK to get the set of objects from S3, or just use Snowflake's LIST command against the stage to pull them.
I've used this approach multiple times to backfill historical data from an AWS S3 location where we enabled Snowpipe ingestion after data had already been written there. Even in the scenario where you don't have to worry about a file being updated in place, this can still be an advantage over falling back to a direct COPY INTO, because you don't have to worry about any overlap between the files the pipe has already loaded and the set of files you push to the Snowpipe REST API; the pipe's load history takes care of that for you.
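A minimal sketch of such a backfill script in Python, using boto3 to list the objects and the snowflake-ingest package for the REST API calls. The account, bucket, prefix and pipe names are all placeholders, and the file paths handed to the API must be relative to the location the pipe's stage points at.

import boto3
from snowflake.ingest import SimpleIngestManager, StagedFile

BUCKET = "my-historical-bucket"     # placeholder
PREFIX = "events/2018/"             # placeholder: the in-scope objects
PIPE = "MYDB.MYSCHEMA.MY_PIPE"      # placeholder fully-qualified pipe name

# 1. Pull the in-scope object keys from S3 (the AWS SDK route mentioned above;
#    LIST on the stage in Snowflake would work just as well).
s3 = boto3.client("s3")
keys = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    keys.extend(obj["Key"] for obj in page.get("Contents", []))

# 2. Feed them to the Snowpipe REST API. Key-pair authentication is required.
ingest = SimpleIngestManager(
    account="myaccount",
    host="myaccount.snowflakecomputing.com",
    user="INGEST_USER",
    pipe=PIPE,
    private_key=open("rsa_key.p8").read(),
)

BATCH = 500  # stay well under the per-request file limit
for i in range(0, len(keys), BATCH):
    batch = [StagedFile(k, None) for k in keys[i:i + BATCH]]
    resp = ingest.ingest_files(batch)
    print(resp)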
I'm trying to fix and add some functionality to an Access database that a group I work with uses. They have a FileName.accdb file which holds the queries and forms. The data seems to be stored in one or both of two other database files, FileName_be.accdb and FileName_bp.accdb, both kept in a 'Back End' folder beside the FileName.accdb file.
I was hoping someone might be able to explain how all this links together; there is no documentation on how it was organized.
The other thing that seems odd to me is that 3 files are similar in size:
FileName.accdb = 11MB
FileName_be.accdb = 10.1MB
FileName_bp.accdb = 7.5MB
The _bp and _be files both only have the database tables, but the _bp file seems to be more up to date.
Your database is split into a frontend and a backend. See e.g. Microsoft Access Split Database Architecture.
FileName.accdb should have linked tables, queries, forms and code.
FileName_be.accdb sounds like a backend, having only tables.
FileName_bp.accdb - if it's newer, maybe "bp" is "backend production", but that's just a guess.
Open FileName.accdb and open a linked table in Design View. In the property sheet, the Description will tell you where the table is linked from. The tooltip in the Navigation Pane will show this too.
Alternatively, you can use External Data -> Linked Table Manager to re-map these file locations.
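If you'd rather list all the linked-table targets in one go than click through them, a small sketch using pyodbc and the Access ODBC driver is below. It assumes the driver is installed and that read access to the hidden MSysObjects system table has been granted (Access blocks it by default); the path is a placeholder.

import pyodbc

# Placeholder path to the frontend file.
conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\path\to\FileName.accdb;"
)
cur = conn.cursor()

# In MSysObjects, Type 6 = linked Access tables and Type 4 = linked ODBC tables;
# the Database column holds the backend each linked table points at.
cur.execute(
    "SELECT [Name], [Database] FROM MSysObjects "
    "WHERE Type IN (4, 6) AND [Database] IS NOT NULL"
)
for name, backend in cur.fetchall():
    print(f"{name} -> {backend}")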
I want to export all my queries as individual files so I can put them into Mercurial source control, but I don't know how to export the individual queries as individual files without having to open each one, save it to the folder, and add it to the project, or some equally convoluted process.
I wouldn't mind having to add each one individually, but how do I get them out of the database as individual files without opening every one and doing a Save As on each? Ideally I would like them named with the name they have in the database right now.
I could easily dump the whole lot into one long file using database tasks, but that's not really super helpful is it?
I have SSMS 2k5 and 2k8 (and VS 2k5, 2k8, 2010 to boot) to work with, any thoughts?
Right-click on the database and select Tasks -> Generate Scripts. On the last page, under Script to File, you can choose a single file or one file per object.
When you script a database in SSMS you have the option of one file per object.
SMO is useful if you want to write a small app to iterate through the objects yourself.
Third-party tools like Red Gate SQL Compare (there are other, free tools too) can script objects as well.
I would write a small C# program which extracts your database objects via SMO and stores them in your file system the way you want.
Alternatively, it is rather easy to write a stored procedure which fetches each definition into the result set as text; sp_helptext could be used as a starting point.
Then you can use PowerShell to write the output to the file system.
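A rough Python take on the same idea (pyodbc standing in for SMO/PowerShell; sys.sql_modules is what sp_helptext reads from). The driver, server, database and output path are placeholders:

import os
import pyodbc

OUT_DIR = r"C:\repo\sql"   # working copy of the Mercurial repo; placeholder
os.makedirs(OUT_DIR, exist_ok=True)

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;DATABASE=MyDatabase;Trusted_Connection=yes;"
)
cur = conn.cursor()

# sys.sql_modules holds the full definition of views, procedures and functions.
cur.execute("""
    SELECT s.name, o.name, m.definition
    FROM sys.sql_modules AS m
    JOIN sys.objects AS o ON o.object_id = m.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.type IN ('V', 'P', 'FN', 'IF', 'TF')   -- views, procs, functions
""")

# One .sql file per object, named schema.object so names stay unique.
for schema, name, definition in cur.fetchall():
    path = os.path.join(OUT_DIR, f"{schema}.{name}.sql")
    with open(path, "w", encoding="utf-8") as f:
        f.write(definition)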
It sounds as if this would fit rather well into the Really Simple Data Dictionary project on CodePlex.