Importing data into Oracle via Web Enterprise Manager with unique constraints - database

I am not at all familiar with Oracle so bear with me!
I am using Oracle 10g with the web front end called Enterprise Manager. I have been given some CSV files to import. When I use the Load Data from User Files option I can set everything up, but when the job runs it complains about unique constraint violations, I guess because duplicate data is being inserted.
How can I get the insert to create a new primary key, similar to an MSSQL auto-increment number?

Oracle does not have an analog to the MSSQL auto incrementing field. The feature has to be simulated via triggers and Oracle sequences. Some options here are to:
create a trigger to populate the columns you want auto incremented from a sequence
delete the offending duplicate keys in the table
change the values in your CSV file.
You might look at this related SO question.
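A minimal sketch of the sequence-plus-trigger approach (the table name MY_TABLE and key column ID are hypothetical placeholders; note that in Oracle 10g the sequence value must be fetched with SELECT ... FROM dual inside the trigger):

CREATE SEQUENCE my_table_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER my_table_bi
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
  -- populate the primary key from the sequence when none is supplied
  IF :NEW.id IS NULL THEN
    SELECT my_table_seq.NEXTVAL INTO :NEW.id FROM dual;
  END IF;
END;
/

If the CSV rows should keep their own keys and only new rows should get generated ones, start the sequence above the current maximum key value in the table.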

There is no autoinc type in Oracle. You have to use a sequence.
By using a before insert trigger, you could get something similar to what you get by using an autoinc in SQL Server.
You can see here how to do it.

Related

Import CSV into SQL Server database, keeping ID column values

I am working to migrate a SQLite database to SQL Server, and I need to use IntelliJ IDEA to import all the data from the SQLite tables into the MSSQL database.
I have exported the data to CSV format, but when I import into SQL Server, I need to maintain the existing ID columns (as foreign keys refer to it).
Normally, I can do this by executing SET IDENTITY_INSERT xxx ON; prior to my INSERT statements.
However, I do not know how to do this when importing CSV using IntelliJ.
The only other option I see is to export the data as a series of SQL INSERT statements, but that is very time consuming as the schemas between the two databases are slightly different (not to mention the SQL syntax).
Is there another way to import this data?
I don't know how to perform an Identity Insert ON in an IntelliJ query, but I do know how to work around this problem. Import your data into a temporary staging table, then execute a query within SQL Server that:
Sets Identity Insert ON
Inserts the data from the temporary table into the final destination
Sets Identity Insert OFF
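A minimal sketch of that query, assuming hypothetical staging and destination tables dbo.Staging_Orders and dbo.Orders with an identity column Id (IDENTITY_INSERT requires an explicit column list in the INSERT):

SET IDENTITY_INSERT dbo.Orders ON;

INSERT INTO dbo.Orders (Id, CustomerName)
SELECT Id, CustomerName
FROM dbo.Staging_Orders;

SET IDENTITY_INSERT dbo.Orders OFF;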
What this really does is save you from spending (potentially) hours figuring out how to implement an Identity Insert ON in IntelliJ when you may never need to do this again. It is also straightforward and simple to code.
However, if you want to find out whether there is a way to do this in IntelliJ, go for it. That would be the cleaner method.

SSIS Package Creation via MVS 2015 ODBC IDENTITY_INSERT

Today I am trying to figure out how to get rows with identity columns inserted into a Microsoft SQL 2016 database via an SSIS package that I am constructing using MVS 2015 (with SSDT 14.0.61709.290). I am pulling data from another data source (which works without issue) and I am inserting rows into a destination table that has been previously defined on the destination SQL server like so:
create table [DB_NAME].[dbo].[TableName]
(
key_value INT IDENTITY(1,1) primary key,
...other values...
)
GO
When I move values from the old data source to the new data source I get the error:
[MSQL Deal [70]] Error: Open Database Connectivity (ODBC) error occurred. state: '23000'. Native Error Code: 544. [Microsoft][SQL Server Native Client 11.0][SQL Server]Cannot insert explicit value for identity column in table 'TableName' when IDENTITY_INSERT is set to OFF.
A tremendous number of forum posts and search results indicate that there should be a checkbox that permits identity inserts on the destination's column mappings page. This option does not exist, and the "Advanced Editor" interface in MVS/SSDT 2015/2017 has column mappings only, with no options for handling inserts into identity columns.
Also I have tried to add a step to my control flow that turns identity insert on, but for some reason enabling IDENTITY_INSERT at this level does not work and my package still fails on all insert attempts.
Now, to be completely honest, I am fully aware that I have alternative options to get this to work. But keep in mind that I am building dev, test, and production databases that I am trying to keep scripted, automated, and idiot-proof for when they get further down the line toward deployment. I don't want to introduce an intermediate step that forces one of our DBAs to wait for the first SSIS package to finish, run a SQL query to enable identity inserts for a specific table, run the next package, then run a query to disable identity inserts. I would have to do this many times.
Did SSIS 2015 (and I tried this using MVS/SSDT 2017) completely drop support for identity inserts? Do I have to use a different interface with my DSN to get this to work (ODBC?)?
Is this still an option but it is hidden somewhere really really really well?
The ODBC Destination has no option for identity inserts. You can use an OLE DB Destination instead; it contains a Keep Identity check box, which can be used for this.
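As for why the separate control flow step did not work: SET IDENTITY_INSERT is scoped to the session that issues it, so turning it on from an Execute SQL Task has no effect on the data flow's own connection unless the connection manager is set to retain the same connection. A rough illustration, using the question's table with a hypothetical insert:

-- session A (e.g. an Execute SQL Task on its own connection)
SET IDENTITY_INSERT dbo.TableName ON;  -- applies only to session A

-- session B (the data flow's connection) still fails with error 544,
-- because IDENTITY_INSERT is OFF in this session:
INSERT INTO dbo.TableName (key_value) VALUES (1);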

SQL Server : best way to check values before insert to table

I am working at a company that has software that can connect to a database and push values to a table.
The problem is that some properties are not inserted into the database.
When I check with a regular insert query in SQL Server Management Studio, the insert works fine there.
I want to inspect the values coming from the software before they are inserted into the table.
Friends, please help me.
Thanks
You can use Extended Events (a lightweight version of Profiler). Choose filters as per your requirements; on the set session filters screen you can scope the session to a single database, a table, or even specific statement text using LIKE syntax.
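A minimal T-SQL sketch of such a session (the database name VendorDB, the LIKE pattern, and the file name are hypothetical placeholders; the GUI wizard generates an equivalent script):

CREATE EVENT SESSION capture_vendor_statements ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.client_app_name)
    -- scope to one database and to statements containing INSERT
    WHERE sqlserver.database_name = N'VendorDB'
      AND sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%INSERT%')
)
ADD TARGET package0.event_file (SET filename = N'capture_vendor_statements.xel');
GO

ALTER EVENT SESSION capture_vendor_statements ON SERVER STATE = START;

Once the session is started, you can inspect the captured statements and values under Management > Extended Events in Management Studio (Watch Live Data) or by reading the .xel file.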

Talend Truncate Table does not empty table

I use TOS to transfer a SQL Server table to another SQL Server, and that works more or less. But I have one issue with truncating the table. In the properties for the output table I define "Truncate Table" for the table action and "Insert" for the data action. On the second run I get a lot of duplicate key errors. If I run the TRUNCATE TABLE manually in SQL Server Management Studio, the job works fine.
Are there any known issues with truncate table? The Talend version is 5.3.2.
Thanks in advance
I mimicked the scenario and it works fine in Talend Platform for Data Management version 5.6.1. I cannot test it on TOS, but perhaps you can upgrade to the newest TOS version and try again. To be thorough, I tried it using both separate connection components and built-in connections. The only difference is that using a separate connection object requires a commit object.
The workaround I recommend is this:
create a proc to truncate your table and call it from a tMSSqlSP component (a sketch of such a proc follows below)
connect this to your original subjob which transfers the data between the two tables using an OnSubJobOK flow.
In your tMSSqlOutput component (which performed the truncate/insert), for Action on table use Default (so it will not truncate the table)
for Action on data use Insert
I tried this method and it works. This workaround will save you the time and frustration of dealing with the TOS issue.
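A minimal sketch of the stored procedure from the first step, assuming a hypothetical target table dbo.TargetTable (note that TRUNCATE TABLE requires at least ALTER permission on the table):

CREATE PROCEDURE dbo.usp_truncate_target_table
AS
BEGIN
    -- runs in its own call, so the tMSSqlSP component can execute it
    -- before the insert subjob starts
    TRUNCATE TABLE dbo.TargetTable;
END
GO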

SSMA timestamp. What's it for, how is it used?

I recently used the SQL Server Migration Assistant to import a database into SQL Server 2005. I noticed that a number of the imported tables have been amended with a new column called SSMA_timestamp.
Can anyone tell me what this is for and how it would be used?
The added SSMA_timestamp columns are not only used during migration. They actually help avoid errors when Access updates records in tables linked to SQL Server. So if you are still using an Access front end linked to the migrated SQL Server database, it would be best to not drop the SSMA_timestamp columns.
From the MSDN article Optimizing Microsoft Office Access Applications Linked to SQL Server:
Supporting Concurrency Checks
Probably the leading cause of updatability problems in Office Access–linked tables is that Office Access is unable to verify whether data on the server matches what was last retrieved by the dynaset being updated. If Office Access cannot perform this verification, it assumes that the server row has been modified or deleted by another user and it aborts the update.
There are several types of data that Office Access is unable to check reliably for matching values. These include large object types, such as text, ntext, image, and the varchar(max), nvarchar(max), and varbinary(max) types introduced in SQL Server 2005. In addition, floating-point numeric types, such as real and float, are subject to rounding issues that can make comparisons imprecise, resulting in cancelled updates when the values haven't really changed. Office Access also has trouble updating tables containing bit columns that do not have a default value and that contain null values.
A quick and easy way to remedy these problems is to add a timestamp column to the table on SQL Server. The data in a timestamp column is completely unrelated to the date or time. Instead, it is a binary value that is guaranteed to be unique across the database and to increase automatically every time a new value is assigned to any column in the table. The ANSI standard term for this type of column is rowversion. This term is supported in SQL Server.
Office Access automatically detects when a table contains this type of column and uses it in the WHERE clause of all UPDATE and DELETE statements affecting that table. This is more efficient than verifying that all the other columns still have the same values they had when the dynaset was last refreshed.
The SQL Server Migration Assistant for Office Access automatically adds a column named SSMA_TimeStamp to any tables containing data types that could affect updatability.
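For reference, a minimal sketch of what such a column looks like, using a hypothetical table name:

-- rowversion (the newer name for the timestamp type) is maintained
-- automatically by SQL Server on every insert and update; Access uses
-- it in UPDATE/DELETE WHERE clauses instead of comparing every column
ALTER TABLE dbo.MyLinkedTable ADD SSMA_TimeStamp rowversion;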
I think this is generated so that the Migration Assistant can detect changes to the data during the migration.
Unless you are continuing to use Access as a front end to this specific database you have migrated to SQL Server (in which case see Simon's answer), I don't think they will be used for anything after migration is complete, so it should be safe to drop these new columns once you are sure everything is done.
<!-- Set project preference.
Preference path/name/value can be found in preferences.prefs file stored in SSMA project directory.
Preference path is the node name path starting from root to leaf node separating by "/". -->
<set-project-preference preference-path="prefs/ssma-for-access/a2ss/conversion"
preference-name="timestamp-columns-opt"
preference-value="never" />
From the SSMA GUI you can also go to Tools --> Default Project Setting --> Conversion --> Tables --> Add timestamp columns --> set to Never.
