Is there a way to query a table's records with an extra clause that excludes rows based on which user last updated them, without adding a new column? For example, I want to query for records that were not last touched by my current user. Does Snowflake store something like this behind the scenes and allow it to be used?
Thanks in advance!
Problem: I have accidentally overwritten a view in Snowflake using CREATE OR REPLACE VIEW.
Question: Is there any way to retrieve the old view, i.e. its SQL code?
You can use QUERY_HISTORY to find the previous DDL used to create the view.
You can filter results using QUERY_TYPE, which will help you quickly find the right statement.
If you can't find it in the query history tab by filtering QUERY_TYPE to CREATE, you can search for it over the previous 365 days of query history. This previous post has the SQL to run:
View DDL history of CREATE VIEW statement in Snowflake
Note that this is a big query if your account has run lots of queries over the last year. You can modify it to reduce the scan if you know more, such as the month of creation.
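For reference, a minimal sketch of that kind of search against ACCOUNT_USAGE (the view name pattern and the three-month window are placeholders to adjust):

SELECT query_text, start_time, user_name
FROM snowflake.account_usage.query_history
WHERE query_type LIKE 'CREATE%'           -- narrows the scan to DDL statements
  AND query_text ILIKE '%my_view%'        -- placeholder: your view's name
  AND start_time >= DATEADD(month, -3, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;

The START_TIME filter is what keeps the scan down; widen it up to the 365-day retention limit if you have to.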
If you want a totally informal and lightweight way to version your objects, I wrote one for my internal use that I decided to share. It's a table and a stored procedure. Call the stored procedure with the object type and three-part name, and it adds a version row to the table. If it finds a previous version, it marks the old one obsolete as of current_timestamp and increments the version number of the new one by 1.
https://github.com/GregPavlik/SimpleVersioning/blob/main/install.sql
I have a 290-million-record source data set and I get a daily download of 12 million records, which contains data from the previous days' downloads. I am having trouble inserting the daily records into the source while excluding the records I already have. Some of the new records may not be from the previous day; they could be several days back, so a date restriction won't work. Please help.
I just had this exact same issue. Basically, in the data flow of your SSIS package you need to add a Lookup. Have it match the data you're inserting to the existing data based on the PK, then you can separate the data from there: choose Redirect Rows to the no match output. The green arrow will then contain all data that is not already present.
Use a Lookup component keyed on the PK field, and with the no match output do an insert (with the match output you could also do an update, though 290 million rows IS going to take A WHILE)...
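If the daily file is staged into a table first, here is a set-based sketch of the same "insert only what's missing" step in T-SQL (table and column names are placeholders, assuming RecordId is the PK):

INSERT INTO dbo.SourceData (RecordId, Payload)
SELECT s.RecordId, s.Payload
FROM dbo.DailyStage AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.SourceData AS t
    WHERE t.RecordId = s.RecordId   -- skip rows already in the source table
);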
I've got a table in SQLite, and it already has many rows stored in it. I now realise I need another column in the table. Up to now I've just deleted the database and started again, because the data was just test data. But now the data in the database can't be deleted.
I know the query to add a column to the table; my question is what a good way to do this is so that it works for both existing users and new users. (I have updated the CREATE query I run when the table is not found, because it's a new user or an existing user has cleared the database.) It seems wrong to ship software with an ALTER query that is checked every time. Is there some way of telling SQLite to automatically add the column if it doesn't exist during the UPDATE query I now need?
If I discover I need more columns in the future, is having a bunch of ALTER statements on startup (or somewhere?) really the best way to do it?
(If relevant, this is for a Node.js app.)
I'd just throw a table somewhere that marks what version your database is, and check that to determine whether an update is needed. Alternatively, if you already have a table that will only ever contain a single record, add a new field 'DatabaseVersion' to it.
So, for example, if you check the version number and find it's a version 1 database when the newest version should be version 3, you know which updates to perform on it.
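A rough sketch of that one-row version table in SQLite (names are placeholders; the comparison against the version the code expects lives in your app):

CREATE TABLE IF NOT EXISTS db_info (schema_version INTEGER NOT NULL);
INSERT INTO db_info (schema_version)
SELECT 1 WHERE NOT EXISTS (SELECT 1 FROM db_info);  -- seed new databases at version 1

SELECT schema_version FROM db_info;  -- app decides which migrations still need to run
-- after applying, say, the v2 and v3 ALTER statements:
UPDATE db_info SET schema_version = 3;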
You can use PRAGMA user_version to store the version number of the database and check if the database needs to be updated.
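A sketch of how that might look on startup (the ALTER statement and the table/column names are placeholders; the version check itself happens in your Node.js code):

PRAGMA user_version;                 -- returns 0 on a freshly created database
-- if the reported version is below 1, apply migration 1:
ALTER TABLE my_table ADD COLUMN new_column TEXT;
PRAGMA user_version = 1;             -- record that migration 1 has run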
I need to provide a date for an app to check for updates.
For this I need to get the last date on which some of the tables in my database were modified.
I was checking for the last updated record in the table like this:
MyTable.find(:first, :order => "updated_at DESC")
but then I noticed that if I delete a record, I will get the previously updated record, which will be "obsolete". I need to get the date when the last record was deleted or modified.
Is there a way to obtain this without having some sort of global variable keeping up all of the changes that are being made?
Thanks
Maybe you need something like the Audited gem.
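If you'd rather not add a gem, one trigger-based alternative sketch (SQLite syntax for concreteness; table and trigger names are placeholders) is to let the database itself record when each table last changed, deletes included:

CREATE TABLE table_changes (
  table_name TEXT PRIMARY KEY,
  changed_at TIMESTAMP NOT NULL
);

-- one trigger each for INSERT, UPDATE, and DELETE; DELETE shown here
CREATE TRIGGER my_table_deleted AFTER DELETE ON my_table
BEGIN
  INSERT OR REPLACE INTO table_changes (table_name, changed_at)
  VALUES ('my_table', CURRENT_TIMESTAMP);
END;

The app's update check then becomes a single lookup against table_changes rather than a scan of updated_at.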
I receive new data files every day. Right now, I'm building the database with all the required tables to import the data and perform the required calculations.
Should I just append each new day's data to my current tables? Each file contains a date column, which would allow for a "WHERE" query in the future if I need to analyze data for one particular day. Or should I be creating a new set of tables for every day?
I'm new to database design (coming from Excel). I will be using SQL Server for this.
Assuming that the structure of the data being received is the same, you should only need one set of tables rather than creating new tables each day.
I'd recommend storing the value of the date column from your incoming data in your database, and also having a 'CreateDate' column in your tables, with a default value of 'GetDate()' so that it automatically gets populated with the current date when the row is inserted.
You may also want to have another column to store the data filename that the row was imported from, but if you're already storing the value of the date column and the date that the row was inserted, this shouldn't really be necessary.
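As a sketch, the table might look like this in T-SQL (everything apart from CreateDate and its default is a placeholder):

CREATE TABLE dbo.DailyData (
    Id         INT IDENTITY(1, 1) PRIMARY KEY,
    DataDate   DATE         NOT NULL,                     -- date column from the incoming file
    Payload    VARCHAR(200) NULL,                         -- whatever the file actually contains
    CreateDate DATETIME     NOT NULL DEFAULT (GETDATE())  -- populated automatically on insert
);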
In the past, when doing this type of activity using a custom data loader application, I've also found it useful to create log files for success/error/warning messages, including some kind of unique key linking the source data and the target database: e.g. if coming from an Excel file into a database table, you could store the row index from Excel alongside the primary key of the inserted row. This helps in tracking down any problems later on.
You might want to consider having a look at SSIS (SQL Server Integration Services). It's the SQL Server tool for ETL activities.
Yes, append each day's data to the tables; one set of tables for all data.
Yes, use a date column to identify the day the data was loaded.
Maybe have another table with a date column and a CLOB column: the date to hold the load date, and the CLOB to hold the file you imported.
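A sketch of that load-log table (T-SQL, where a CLOB maps to VARCHAR(MAX); names are placeholders):

CREATE TABLE dbo.LoadLog (
    LoadDate DATETIME     NOT NULL DEFAULT (GETDATE()),  -- when the file was loaded
    FileBody VARCHAR(MAX) NULL                           -- raw contents of the imported file
);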
Good question. You most definitely should have a single set of tables and append the data daily. Consider this: if you create a new set of tables each day, what would, say, a monthly report query look like? A quarterly report query? It would be a mess, with UNIONs and JOINs all over the place.
A single set of tables with a WHERE clause makes the querying and reporting manageable.
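For example, a monthly report against a single table is just a filtered aggregate (table and column names are placeholders):

SELECT DataDate, COUNT(*) AS RowsLoaded
FROM dbo.DailyData
WHERE DataDate >= '2024-01-01'   -- placeholder month boundaries
  AND DataDate <  '2024-02-01'
GROUP BY DataDate
ORDER BY DataDate;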
You might do a little reading on relational database theory. Wikipedia is a good place to start. The basics are pretty straightforward if you have the knack for it.
I would load the data into a stage table first regardless, and append to the main tables afterwards. Once a week I would then refresh all data in the main table to ensure that it remains correct as per the source.
Marcus