I intend to write a Python script that will upload CSV files to a table in Snowflake. I'll be using the Python connector.
But before uploading the data, I want to remove all previous records from the table.
I'm having trouble finding a way to truncate the table every time I run the script.
I assume you are loading the data by running a COPY INTO command. There is no parameter like OVERWRITE=TRUE for loading - that option only exists for unloading data to a stage (i.e. COPY INTO <location>), not for loading from your stage into a Snowflake table.
Consequence: you have to run a TRUNCATE statement before your COPY INTO statement:
TRUNCATE TABLE IF EXISTS myTable;
COPY INTO ...
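A fuller sketch of the load sequence, assuming the CSV sits at /tmp/clients.csv and the target table is my_table (both names are placeholders). Each statement can be sent through the Python connector's cursor.execute(); note that PUT only runs from a client such as the connector or SnowSQL:
TRUNCATE TABLE IF EXISTS my_table;
-- upload the local file to the table's internal stage
PUT file:///tmp/clients.csv @%my_table;
-- load from the table stage, skipping the header row
COPY INTO my_table
  FROM @%my_table
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);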
I tried to create a tablespace in Microsoft SQL Server and it didn't work - I get this error:
Unknown object type 'TABLESPACE' used in a CREATE, DROP, or ALTER statement.
This is my code:
CREATE TABLESPACE ruqaiya
DATAFILE 'c:\ruqaiya.dbf'
SIZE 20m;
Microsoft SQL Server does not have tablespaces. The closest equivalent is a filegroup, which groups one or more data files (a primary .mdf file plus optional secondary .ndf files).
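For reference, a rough T-SQL equivalent of the Oracle-style statement above, assuming a database named MyDatabase (the database name, logical file name, and path are placeholders):
ALTER DATABASE MyDatabase ADD FILEGROUP ruqaiya;
ALTER DATABASE MyDatabase
ADD FILE (
    NAME = ruqaiya_data,          -- logical name of the file
    FILENAME = 'c:\ruqaiya.ndf',  -- physical path of the secondary data file
    SIZE = 20MB
) TO FILEGROUP ruqaiya;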
I have a file that contains client information (client number, client name, client type, etc.) and I imported it into a SQL table.
Client information can change, and when it does, the file changes with it.
Now, what I want to do is create an SSIS package that will read the file, check for any differences between the file and the SQL table, and, if any changes are picked up, update the table according to the file (the file will always contain the latest information).
How would I achieve this? Is this possible from an SSIS perspective?
There are different options to achieve this:
Load the file into a staging table first and merge it into the production table: the merge inserts rows that don't match and updates the production table for rows that do (a minimal sketch follows below). More info: https://www.mssqltips.com/sqlservertip/1704/using-merge-in-sql-server-to-insert-update-and-delete-at-the-same-time/
Load the data into a staging table, then use a Lookup transformation in SSIS. See this guide to Lookup transformations: https://www.red-gate.com/simple-talk/sql/ssis/implementing-lookup-logic-in-sql-server-integration-services/
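For the first option, a minimal MERGE sketch, assuming the file lands in a staging table dbo.Client_Stage and the production table is dbo.Client, keyed on ClientNumber (all names and columns are illustrative):
MERGE dbo.Client AS target
USING dbo.Client_Stage AS source
    ON target.ClientNumber = source.ClientNumber
WHEN MATCHED AND (target.ClientName <> source.ClientName
               OR target.ClientType <> source.ClientType) THEN
    -- the file is the source of truth, so overwrite changed attributes
    UPDATE SET ClientName = source.ClientName,
               ClientType = source.ClientType
WHEN NOT MATCHED BY TARGET THEN
    -- brand-new clients in the file are inserted
    INSERT (ClientNumber, ClientName, ClientType)
    VALUES (source.ClientNumber, source.ClientName, source.ClientType);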
My project is a Data Historian System, which reads data (about 100,000 records) every 5 seconds from sources and inserts it into a database for reports and analysis. The format of the data is simple (INT, INT, Float, DateTime).
Should I use an OLAP database approach?
Is SQL Server suitable for this case?
Thanks...
That sounds crazy inefficient; there are several alternative approaches you might want to consider:
Use an update trigger to write table inserts/changes to a history table. You should add the change date to the history table so that the "effective" record for any particular datetime can be determined (see the sketch after this list).
In SQL Server, a timestamp column can be used to drive record version identification, and you can use the same kind of polling approach you suggested, but saving only new/changed records.
SQL Server has Change Data Capture to identify changed rows: details here.
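A minimal sketch of the trigger approach, assuming a readings table that matches the (INT, INT, Float, DateTime) format described in the question; all table and column names are illustrative:
CREATE TRIGGER trg_Readings_History
ON dbo.Readings
AFTER INSERT, UPDATE
AS
BEGIN
    -- stamp each new/changed row with the change date and copy it to the history table
    INSERT INTO dbo.Readings_History (SourceId, TagId, ReadingValue, ReadingTime, ChangeDate)
    SELECT SourceId, TagId, ReadingValue, ReadingTime, SYSDATETIME()
    FROM inserted;
END;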
I'm working on an SSIS package to get data from a couple of DB2 tables and insert it into preliminary tables. From there I need to merge that data into the destination tables.
I'm having trouble as I don't understand why the staging/preliminary tables are necessary. Is there something that happens when merging from the staging tables to the destination table? Is it just a way to get rid of duplicate data?
We have a table with millions of records, and we need to archive records older than 3 years. What's the best way of archiving old data in SQL Server database tables?
I guess it depends on the structure of your database and what you need to do with these archived records. Do they need to be accessible from your application? Do you just need them somewhere so that you can run ad-hoc queries against them in the future?
Options may include: creating an "archive database" where you move the older table records and everything linked to them (foreign-key tables); creating an archive table (a minimal sketch follows below); or something more complex like creating partitioned tables (SQL Server 2005+).
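For the archive-table option, a minimal sketch that moves rows in one atomic statement, assuming an Orders table with an OrderDate column and a matching dbo.Orders_Archive table (all names and the 3-year cutoff are illustrative; the OUTPUT target must have no triggers or foreign keys):
-- delete old rows and capture them into the archive table in the same statement
DELETE FROM dbo.Orders
OUTPUT DELETED.* INTO dbo.Orders_Archive
WHERE OrderDate < DATEADD(YEAR, -3, GETDATE());
With millions of rows, it is usually better to run this in a loop with DELETE TOP (10000) so that each batch commits separately and the transaction log stays manageable.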
More Info:
Partitioning & Archiving tables in SQL Server (Part 1: The basics)
Partitioning & Archiving tables in SQL Server (Part 2: Split, Merge and Switch partitions)