SSIS package to update SQL table [closed] - sql-server

I have a file that contains Client information (client number, client name, client type, etc.) and I imported that into a SQL table.
Client information can change, and when it does, the change is made in the file.
What I want to do is create an SSIS package that reads the file, compares it against the SQL table, and, if any differences are found, updates the table to match the file (the file will always contain the latest information).
How would I achieve this? Is this possible from an SSIS perspective?

There are different options to achieve this.
Load the file into a staging table first, then MERGE that into the production table: it will insert rows that do not match and update the production table for rows that do (see the MERGE sketch after this list). Get more info on - https://www.mssqltips.com/sqlservertip/1704/using-merge-in-sql-server-to-insert-update-and-delete-at-the-same-time/
Load the data into a staging table, then use a Lookup transformation in SSIS to achieve it. Find the link for the Lookup transformation - https://www.red-gate.com/simple-talk/sql/ssis/implementing-lookup-logic-in-sql-server-integration-services/
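
For the first option, a minimal MERGE sketch (the staging and production table names and columns are assumptions based on the question, not your actual schema; NULL handling is omitted for brevity):

-- dbo.Client_Stage is loaded from the file by the SSIS data flow;
-- dbo.Client is the production table, keyed on ClientNumber (illustrative names).
MERGE dbo.Client AS tgt
USING dbo.Client_Stage AS src
    ON tgt.ClientNumber = src.ClientNumber
WHEN MATCHED AND (tgt.ClientName <> src.ClientName
               OR tgt.ClientType <> src.ClientType) THEN
    UPDATE SET tgt.ClientName = src.ClientName,
               tgt.ClientType = src.ClientType
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ClientNumber, ClientName, ClientType)
    VALUES (src.ClientNumber, src.ClientName, src.ClientType);

In the SSIS package this would typically be an Execute SQL Task that runs after the data flow that loads the staging table.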

Related

How to execute TRUNCATE TABLE with the Snowflake Python connector? [closed]

I intend to write a Python script that will upload CSV files to a table in Snowflake. I'll be using the Python connector.
But before uploading the data, I want to remove all previous records from the table.
I'm having trouble finding a way to truncate the table every time I run the script.
I assume you are loading the data by running a COPY INTO command. There is no parameter like OVERWRITE=TRUE for loading; that parameter only exists for unloading data to a stage (i.e. COPY INTO <location>), not for loading from your stage into Snowflake.
Consequently, you have to run a TRUNCATE statement before your COPY INTO statement:
TRUNCATE TABLE IF EXISTS myTable;
COPY INTO ...
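
A fuller sketch of that sequence (the table, stage, and file names are hypothetical):

-- Remove all previous records, then load the new file
TRUNCATE TABLE IF EXISTS my_table;
COPY INTO my_table
FROM @my_stage/data.csv
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

From the Python connector, these would simply be two consecutive cursor.execute() calls.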

Which Database Software Is Suitable For Data Historian? [closed]

My project is a data historian system which reads data (10,0000 records) from sources every 5 seconds and inserts it into a database for reports and analyses. The format of the data is simple (INT, INT, FLOAT, DATETIME).
Should I use an OLAP database approach?
Is SQL Server suitable for this case?
Thanks.
That sounds crazy inefficient; there are several alternative approaches you might want to consider:
Use an update trigger to write table inserts / changes to a history table (as sketched below). You should add the change date to the history table so that the "effective" record for any particular datetime can be determined.
In SQL Server, a timestamp column can be used to drive record version identification, and you can use the same kind of polling approach you suggested, but saving only new / changed records.
SQL Server has a Change Data Capture feature to identify changed rows.
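
A minimal sketch of the trigger approach, assuming a source table dbo.Reading with the (INT, INT, FLOAT, DATETIME) shape from the question (all names are illustrative):

-- History table with the extra change-date column
CREATE TABLE dbo.ReadingHistory (
    SourceId    INT,
    TagId       INT,
    Value       FLOAT,
    ReadingTime DATETIME,
    ChangeDate  DATETIME NOT NULL DEFAULT (GETDATE())
);
GO
-- Copy every inserted/updated row version into the history table
CREATE TRIGGER trg_Reading_History
ON dbo.Reading
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ReadingHistory (SourceId, TagId, Value, ReadingTime)
    SELECT SourceId, TagId, Value, ReadingTime
    FROM inserted;  -- the "inserted" pseudo-table holds the new row versions
END;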

Copy tables containing BLOB columns between Oracle Databases [closed]

On an ad hoc basis, we want to copy the contents of 4 of our Oracle production tables to QA/UAT environments.
This is not a direct copy; we need to copy data based on some input criteria for filtering.
Earlier we were using a Sybase database, so the BCP utility worked like a charm there. However, we have recently migrated to Oracle and have a similar data copy requirement.
Based on my analysis so far, I have considered the options below:
RMAN (Recovery Manager) - Cannot use, as it does not allow us to copy selected tables or filter the data.
SQL*Loader (SQLLDR) - Cannot use, as we have BLOB columns and are not sure how to create a CSV file for those BLOBs. Any suggestions?
Oracle Data Pump (expdp/impdp) - Cannot use: even though it allows copying selected tables, it does not allow us to filter data using a query with joins (I know it allows a QUERY clause, but that works only on a single table). A workaround is to create temp tables with the desired dataset and dump them using expdp and impdp. Any suggestions if I have missed anything in this approach?
Database link - This is the approach that seems most workable in this use case, but we need to check whether the DBA will allow us to create links to/from the PRD database.
SQL*Plus COPY - Cannot use, as it does not work with BLOB fields.
Can someone please advise on which would be the best approach with respect to performance?
I would probably use a DATAPUMP format external table. So it would be something like:
-- Unload the query's result set into a Data Pump file in the UNLOAD directory
create table my_ext_tab
organization external
(
  type oracle_datapump
  default directory UNLOAD
  location( 'my_ext_tab.dmp' )
)
as
<my query>;
You can then copy the file across to your other database, create the external table there, and then load your new table via an insert, something like:
insert /*+ APPEND */ into my_tab
select * from my_ext_tab;
You can also use parallelism to read and write the files.
Taking all your constraints into account, it looks like a database link is the best option. You can create views for your queries (with the joins and filters) in the PROD environment and select from those views through the db link. That way, the filtering is done before the transfer over the network, not afterwards on the target side.
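
A small sketch of that pattern (link, view, table, and column names are hypothetical):

-- On PROD: a view that applies the joins and the filter criteria
create or replace view filtered_orders_v as
select o.order_id, o.created_date, o.document_blob, c.customer_name
from   orders o
join   customers c on c.customer_id = o.customer_id
where  o.created_date >= date '2019-01-01';

-- On QA/UAT: pull the pre-filtered rows across the link
insert /*+ APPEND */ into orders_copy
select * from filtered_orders_v@prod_link;

One caveat worth checking for your Oracle version: querying remote LOBs directly is restricted on older releases (ORA-22992), but INSERT ... SELECT from a remote table or view into a local table is a supported way to copy LOB data.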

Archiving SQL Server data table data? [closed]

We have a table with millions of records, and we need to archive records older than 3. What's the best way of archiving old data in SQL Server database tables?
I guess it depends on the structure of your database and what you need to do with these archived records. Do they need to be accessible from your application? Do you just need them somewhere so that you can query against them in the future using ad-hoc queries?
Options may include: creating an "archive database" where you move the older table records and everything linked to them (foreign-key tables), creating an archive table (as sketched below), or something more complex like creating partitioned tables (SQL Server 2005+).
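For the archive-table option, a minimal sketch using DELETE ... OUTPUT to move rows in one atomic statement (the table names and cutoff are assumptions; dbo.Orders_Archive must already exist with a compatible column list):

DELETE FROM dbo.Orders
OUTPUT deleted.* INTO dbo.Orders_Archive
WHERE OrderDate < DATEADD(YEAR, -3, GETDATE());

For millions of rows you would typically run this in batches (DELETE TOP (n) ... in a loop) to keep the transaction log manageable.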
More Info:
Partitioning & Archiving tables in SQL Server (Part 1: The basics)
Partitioning & Archiving tables in SQL Server (Part 2: Split, Merge and Switch partitions)

How to insert data from one table to another table in SQL Server [closed]

I have two SQL Server databases, both having the same database and table structure. Now I want to insert data from one specific table into the corresponding table in the other database. Both tables have the same structure, but the user for each database is different.
I tried this query, but it does not work:
insert into Database1.dbo.Audit_Assessment1
select * from Database2.dbo.Audit_Assessment1
Please help me
SQL Server Management Studio's "Import Data" task (right-click on the DB name, then tasks) will do most of this for you. Run it from the database you want to copy the data into.
If the tables don't exist it will create them for you, but you'll probably have to recreate any indexes and such. If the tables do exist, it will append the new data by default but you can adjust that (edit mappings) so it will delete all existing data.
Pulled from https://stackoverflow.com/a/187852/435559
1. You can use a linked server: set it up under the View menu (top left) via Registered Servers, then open a new query window and write your query (a sketch follows the links below).
2. You can use replication:
Snapshot replication if it is a one-time or occasional copy.
Transactional replication if your insert happens repeatedly.
Read more about replication:
http://msdn.microsoft.com/en-us/library/gg163302.aspx
Read more about linked servers:
http://msdn.microsoft.com/en-us/library/aa560998.aspx
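
A minimal sketch of the linked-server route (the name RemoteSrv is hypothetical; if both databases live on the same instance, the original three-part-name query is already correct and the problem is permissions, as discussed below):

-- Register the remote SQL Server instance (run once)
EXEC sp_addlinkedserver @server = N'RemoteSrv';
-- Copy the rows using a four-part name
INSERT INTO Database1.dbo.Audit_Assessment1
SELECT * FROM RemoteSrv.Database2.dbo.Audit_Assessment1;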
Try approaching this differently: why not script out the table you need and manipulate it that way?
From the scripted-out insert statements you should easily be able to modify them to target your new database.
Sounds like your login doesn't have insert permissions on Database2.dbo.Audit_Assessment1. The error about it being an invalid object name is probably because you don't currently have view definition permissions either.
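If that is the case, a sketch of the grants that would fix it (SomeUser stands for the database user mapped to your login; all names are hypothetical):

-- In the source database (being selected from)
USE Database2;
GRANT SELECT ON dbo.Audit_Assessment1 TO SomeUser;
-- In the target database (being inserted into)
USE Database1;
GRANT INSERT ON dbo.Audit_Assessment1 TO SomeUser;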
