I am looking for the right tool to export specific rows (WHERE condition) of some Oracle database tables. There is one column with CLOB data which can be larger than 4000 characters, so exporting the rows as "INSERT INTO" statements does not work.
Using exp works, but it also exports the DDL, which causes errors on imp because the table already exists.
Use the IGNORE=Y parameter when importing the dump file. This tells imp to ignore the object-creation errors and load the rows into the existing table.
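For example (just a sketch; the scott/tiger login, table name, WHERE clause and file names are placeholders), export only the matching rows with exp's QUERY parameter and load them into the existing table with IGNORE=Y. Putting the export parameters in a PARFILE avoids shell-escaping problems with the quotes in QUERY:

exp scott/tiger PARFILE=exp_rows.par

where exp_rows.par contains:

TABLES=mytable
QUERY="WHERE status = 'A'"
FILE=mytable.dmp

then:

imp scott/tiger TABLES=mytable FILE=mytable.dmp IGNORE=Y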
I have multiple Excel files that have the same format. I need to import them into SQL Server.
The issue I currently have is that there are two text columns that I need to ignore completely, as they are free text and the character length of some rows exceeds what the server allows me to import, which results in a truncation error.
Because I don't need these columns for my analysis, the table I'm importing into doesn't include them, but for some reason the SSIS package still picks up those columns and cuts the import job off halfway through.
I tried using the maximum character length for those columns, which still results in the truncation error.
I need to create an SSIS package that ignores the two columns completely without deleting the columns from Excel.
You can specify which columns you need to ignore from the Edit Mappings dialog.
I have added an image of the dialog for reference.
If you just create the SSIS package in SSDT, the Excel file can be queried to return only the required columns. In the package, create an Excel Connection Manager using the Excel file. Then on the Control Flow of the package add a Data Flow Task that has an Excel Source component in it. On this source, change the data access mode to SQL command and the file can then be queried similar to SQL. In the following example TabName is the name of the Excel tab containing the data that will be returned. If either the tab or any column names contain spaces they will need to be enclosed in square brackets, e.g. Tab Name would be [Tab Name].
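For example (Column1 and Column2 stand in for whatever columns you actually need; the worksheet is referenced with a trailing $, as in the wizard example further down):

SELECT [Column1], [Column2]
FROM [TabName$]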
Import/Export Wizard
Since you mentioned in the comments that you are using the SQL Server Import/Export Wizard, you can solve this if there is a fixed column range that you are looking to import (example: the first 10 columns).
In the Import/Export Wizard, after selecting the destination options, you will be asked whether you want to copy data from tables or write a query.
Select the query option, then use a simple SELECT query and specify the column range after the sheet name. For example:
SELECT * FROM [Sheet1$A:C]
The query above will read the first 3 columns of Sheet1, since A:C represents the range from the first column (A) to the third column (C).
Now you can check the columns from the Edit Mappings dialog.
SSIS
You can use the same logic within an SSIS package: just write the same SQL command in the Excel Source after changing the data access mode to SQL command.
The solution is simple: I needed to write a query that excludes the columns. So instead of selecting "Copy data from one or more tables" you select "Write a query" and leave out the columns you don't need. This worked 100%.
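For instance (hypothetical column names), list only the columns you want to keep and simply omit the two free-text ones:

SELECT [ID], [Amount], [OrderDate]
FROM [Sheet1$]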
The process is one where I get 28 fixed-width files and combine them into one table. In the past, this was done via FoxPro. As I learned today, there were duplicates, which FoxPro did not reject or have any issue with. I have discovered that I need to write a merge statement in order to import the 28 files and not get tripped up by duplicate primary key errors, which happen when I try to import each one separately using the Import Wizard.
I use Management Studio as a front end to SQL Server Express and therefore can't create SSIS packages.
I am going to break this up into two questions so as to not make this too convoluted. First, I have since converted the fixed width files into tab-delimited text files by using Excel.
First question: can one construct a merge statement that brings the (tab-delimited) files into SQL Server from the C drive? I could import each using the Import Wizard, but that is cumbersome. I know how to write a merge statement, but it demands that the data already exist in SQL Server. Below is an example. The question is how I would bring it in from outside.
MERGE Industry AS TARGET
USING Table1 AS SOURCE
ON (TARGET.<primary key columns 1-9> = SOURCE.<primary key columns 1-9>)
No, you can't import data during or as part of a MERGE statement. The MERGE operation is purely for the 'upsert' situation: it constructs logic for combining two result sets, with criteria for matches and mismatches.
To get data into SQL Server you can either work via the UI (which is pretty boring and error prone when you have 28 files), or you can use some of the built-in commands such as BULK INSERT.
Perhaps you could BULK INSERT the files one by one, and merge after each import.
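A rough sketch of that approach, assuming a staging table dbo.Staging_Industry with the same layout as Industry, a made-up file path, and only three columns shown (extend the ON clause and column lists to your real columns and all nine keys):

BULK INSERT dbo.Staging_Industry
FROM 'C:\Imports\file01.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 2);  -- tab-delimited, skip the header row

MERGE dbo.Industry AS TARGET
USING dbo.Staging_Industry AS SOURCE
    ON (TARGET.Key1 = SOURCE.Key1)  -- repeat for key columns 2-9
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Key1, Col2, Col3)
    VALUES (SOURCE.Key1, SOURCE.Col2, SOURCE.Col3);

TRUNCATE TABLE dbo.Staging_Industry;  -- empty the staging table before loading the next file

If a single file can itself contain duplicate keys, de-duplicate the staging table before the MERGE.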
If you wanted to continue using FoxPro but eliminate the duplicate records, the first piece of advice would be to quit using the Import Wizard.
Wizards may be convenient to use, but they come with their own set of 'baggage' which can be problematic.
Aside from saying that they are in fixed field length format, you don't indicate which format(s) the 28 import files are in (CSV, SDF, TXT, etc.). Regardless, you can fairly easily write FoxPro code to handle all of the importing without the use of a 'Wizard'.
Then once all of the records have been imported you can readily eliminate the duplicates with something like the following:
SELECT ImportDBF && Assuming it is used EXCLUSIVELY
DELETE ALL && Mark ALL of the records for deletion
INDEX ON <primary key> UNIQUE TAG Uniq && Create an Index on only UNIQUE instances of your Primary key field
RECALL ALL && Recall only those UNIQUE records
DELETE TAG Uniq && Eliminate the temporary Index
PACK && PACK out the duplicate records
Now your Foxpro data table should be ready to go.
Good Luck
I have one database with an image table that contains just over 37,000 records. Each record contains an image in the form of binary data. I need to get all of those 37,000 records into another database containing the same table and schema that has about 12,500 records. I need to insert these images into the database with an IF NOT EXISTS approach to make sure that there are no duplicates when I am done.
I tried exporting the data into Excel and formatting it into a script. (I have done this before with other tables.) The thing is, Excel does not support binary data.
I also tried the "generate scripts" wizard in SSMS, which did not work because the .sql file was well over 18GB and my PC could not handle it.
Is there some other SQL tool to be able to do this? I have Googled for hours but to no avail. Thanks for your help!
I have used SQL Workbench/J for this.
You can either use WbExport and WbImport through text files (the binary data will be written as separate files and the text file contains the filename).
Or you can use WbCopy to copy the data directly without intermediate files.
To achieve your "if not exists" approach you could use the update/insert mode, although that would change existing rows.
I don't think there is an "insert only if it does not exist" mode, but you should be able to achieve this by defining a unique index and ignoring errors (that wouldn't be really fast, but should be OK for that small number of rows).
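If the target is SQL Server, one way to get that behaviour (my suggestion, not a SQL Workbench/J feature; dbo.images and image_id are hypothetical names) is a unique index created with IGNORE_DUP_KEY, which discards duplicate keys with a warning instead of raising an error:

CREATE UNIQUE INDEX UX_images_image_id
    ON dbo.images (image_id)
    WITH (IGNORE_DUP_KEY = ON);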
If the "exists" check is more complicated, you could copy the data into a staging table in the target database, and then use SQL to merge that into the real table.
Why don't you try the 'Export data' feature? This should work.
Right click on the source database, select 'Tasks' and then 'Export data'. Then follow the instructions. You can also save the settings and execute the task on a regular basis.
Also, the bcp.exe utility could work to read data from one database and insert into another.
However, I would recommend using the first method.
Update: In order to avoid duplicates you have to be able to compare images. Unfortunately, you cannot compare images directly. But you could cast them to varbinary(max) for comparison.
So here's my advice:
1. Copy the table to the new database under the name tmp_images
2. Use a merge-style query to insert only the new images:
INSERT INTO DB1.dbo.table_name
SELECT * FROM DB2.dbo.table_name
WHERE column_name NOT IN
(
SELECT column_name FROM DB1.dbo.table_name
)
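If the rows can only be matched on the image data itself, a variant of the above (a sketch; tmp_images and image_data are hypothetical names) is to compare the values after casting them, since image columns cannot be compared directly:

INSERT INTO DB1.dbo.table_name
SELECT t.*
FROM DB1.dbo.tmp_images AS t
WHERE NOT EXISTS
(
    SELECT 1
    FROM DB1.dbo.table_name AS x
    WHERE CAST(x.image_data AS varbinary(max)) = CAST(t.image_data AS varbinary(max))
)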
I'm trying to export some tables from SQL Server 2005 and then create those tables and populate them in Oracle.
I have about 10 tables, varying from 4 columns up to 25. I'm not using any constraints/keys, so this should be reasonably straightforward.
Firstly I generated scripts to get the table structure, then modified them to conform to Oracle syntax standards (i.e. changed nvarchar to varchar2).
Next I exported the data using SQL Server's export wizard, which created a CSV flat file. However, my main issue is that I can't find a way to force SQL Server to double-quote column names. One of my columns contains commas, so unless I can find a method for SQL Server to quote column names, I will have trouble when it comes to importing this.
Also, am I going the difficult route, or is there an easier way to do this?
Thanks
EDIT: By quoting I'm referring to quoting the column values in the CSV. For example, I have a column which contains addresses like
101 High Street, Sometown, Some
county, PO5TC053
Without changing it to the following, it would cause issues when loading the CSV:
"101 High Street, Sometown, Some
county, PO5TC053"
After looking at some options with SQLDeveloper, and at manually trying to export/import, I found a utility in SQL Server Management Studio that gets the desired results and is easy to use. Do the following:
Go to the source schema on SQL Server
Right click > Export data
Select source as current schema
Select destination as "Oracle OLE provider"
Select properties, then add the service name into the first box, then username and password, be sure to click "remember password"
Enter query to get desired results to be migrated
Enter table name, then click the "Edit" button
Alter mappings, change nvarchars to varchar2, and INTEGER to NUMBER
Run
Repeat process for remaining tables, save as jobs if you need to do this again in the future
Use the SQLDeveloper migration tools
I think quoted column names in Oracle are something you should avoid. They cause all sorts of problems.
As Robert has said, I'd strongly advise against quoting column names. The result is that you'd have to quote them not only when importing the data, but also whenever you want to reference that column in a SQL statement - and yes, that probably means in your program code as well. Building SQL statements becomes a total hassle!
From what you're writing, I'm not sure if you are referring to the column names or the data in these columns. (Can SQL Server really have a comma in a column name? I'd be really surprised if there was a good reason for that!) Quoting the column content should be done for any string-like columns (although I found that other characters usually work better, as the need to "escape" quotes becomes another issue). If you're exporting in CSV that should be an option... but then I'm not familiar with the export wizard.
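If the export wizard won't add the quotes for you, one workaround (just a sketch, with hypothetical table and column names) is to add them in the query you export from, doubling any embedded quotes so the CSV stays parseable:

SELECT
    CustomerId,
    '"' + REPLACE(Address, '"', '""') + '"' AS Address  -- wrap the value in quotes, double any embedded quotes
FROM dbo.Customers

NULL addresses would come through as NULL rather than as empty quotes, so wrap the column in ISNULL() if that matters.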
Another idea for moving the data (depending on the scale of your project) would be to use an ETL/EAI tool. I've been playing around a bit with the Pentaho suite and their Kettle component. It offered a good range of options to move data from one place to another. It may be a bit oversized for a simple transfer, but if it's a big "migration" with the corresponding volume, it may be a good option.
I know that I can import .csv file into a pre-existing table in a sqlite database through:
.import filename.csv tablename
However, is there a method/library that can automatically create the table (and its schema), so that I don't have to manually define column1 = string, column2 = int, etc.?
Or maybe we can import everything as strings. To my limited understanding, sqlite3 seems to treat all fields as strings anyway?
Edit:
The names of the columns are not so important here (assume we can get them from the first row in the CSV file, or they could be arbitrary names). The key is to identify the value types of each column.
This seems to work just fine for me (in sqlite3 version 3.8.4):
$ echo '.mode csv
> .import data_with_header.csv some_table' | sqlite3 db
It creates the table some_table with field names taken from the first row of the data_with_header.csv file. All fields are of type TEXT.
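For a header row of, say, id,name,price (hypothetical), the generated schema looks something like this (you can verify it with .schema some_table):

CREATE TABLE some_table(
  "id" TEXT,
  "name" TEXT,
  "price" TEXT
);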
You said yourself in the comment that it's a nontrivial problem to determine the types of columns. (Imagine a million rows that all look like numbers, but one of those rows has a Z in it. Now that column has to be typed "string".)
Though non-trivial, it's also pretty easy to get the 90% scenario working. I would just write a little Python script to do this. Python has a very nice library for parsing CSV files and its interface to sqlite is simple enough.
Just load the CSV, then guess and check the column types. Devise a CREATE TABLE that encapsulates this information, then emit your INSERT INTO statements. I can't imagine this taking more than 20 lines of Python.
This is a little off-topic, but it might help to use a tool that gives you all the SQL functionality on an individual CSV file without actually using SQLite directly.
Take a look at TextQL, a utility that allows querying CSV files directly using the SQLite engine in memory:
https://github.com/dinedal/textql
textql -header -sql "select * from tbl" -source some_file.csv