Netezza External table trims decimals after 0 - netezza

I am trying to create an external table file from a Netezza database. There is one NUMERIC column in the table with the value 0.00. The external table file that is generated stores only 0 and truncates the decimal part. However, when I use nzsql from the command prompt to pull the same data, it returns the entire field exactly as it is stored in the database. I have tried the IGNOREZERO option when creating the external table, but that flag only works if the field is char or varchar.
table field in database:
+-------------+
| NUMERIC_COL |
+-------------+
|        0.00 |
+-------------+
The command used to create external table:
nzsql -u -pw -db -c "CREATE EXTERNAL TABLE 'file' USING (IGNOREZERO false) AS SELECT numeric_col FROM table;"
output: 0
Now, if I use nzsql to select the same field
nzsql -u -pw -db -c "SELECT numeric_col FROM table;"
output: 0.00
Is there a flag/command I could use to preserve the decimals in the external table? Thanks!
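Not part of the original thread, just a hedged sketch of one possible workaround: since IGNOREZERO only applies to character fields, the numeric column could be turned into a character field inside the SELECT that feeds the external table, for example with a cast (the connection arguments, DELIMITER option, and VARCHAR length below are placeholders/assumptions):
nzsql -u user -pw password -db db -c "CREATE EXTERNAL TABLE 'file' USING (DELIMITER '|') AS SELECT CAST(numeric_col AS VARCHAR(20)) AS numeric_col FROM table;"
Whether the cast keeps the trailing zeros depends on the column's declared scale, so treat this as a starting point rather than a confirmed fix.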

Related

How to use bcp for columns that are Identity?

I want to restore my table with BCP using the code below.
BCP framework.att.attendance in "D:\test\mhd.txt" -T -c
But the column (id) is an identity column in this table.
When the data is restored with BCP, I want the id column to be unchanged.
In other words, if the id of the first row is '7' before BCP, I want to import the data and have the id of the first row still be '7'.
What should I do?
For BCP import, use the -E flag:
-E specifies that the identity value or values in the imported data file are to be used for the identity column.
If -E is not given, the identity values for this column in the data file being imported are ignored.
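For illustration, adding -E to the command from the question would look like this:
bcp framework.att.attendance in "D:\test\mhd.txt" -T -c -E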

Why does SQL Server import my CSV file badly with bcp?

I'm using bcp to insert a CSV file into my SQL table, but I'm getting weird results.
The whole environment is on Linux; I create the CSV file with IFS=\n in Bash, and bcp is run from Bash too, like this:
bcp Table1 in "./file.csv" -S server_name -U login_id -P password -d database_name -c -t"," -r"0x0A"
The file.csv is about 50k rows. bcp tells me it copied 25k rows, with no warnings and no errors. I have tried a smaller sample with only 2 rows, and here is the result:
file.csv data:
A.csv, A, B
C.csv, C, D
My table Table1:
CREATE TABLE Table1 (
ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
a VARCHAR(8),
b VARCHAR(8),
c VARCHAR(8),
FOREIGN KEY (a) REFERENCES Table2(a)
);
Table2 is a very basic table just like Table1
What I see in Microsoft SQL Server Management Studio when I do SELECT * FROM Table1:
ID | a | b | c |
1 A BC C.csv E
I export with bcp like this:
bcp Table1 out "./file_export.csv" -S server_name -U login_id -P password -d database_name -c -t"," -r"0x0A"
What I get in file_export.csv
1, A, B
C, C.csv, D, E
I've checked in Notepad++ for any strange end-of-line characters; all 50k rows end the same way, and each value is separated by commas.
UPDATE:
Changing the -t and -r options does not change anything, so my CSV is well formatted. I think the issue is that I'm trying to import x columns into a table with x+1 fields (the auto-incremented ID).
So I'm using the simplest bcp command:
bcp Table1 in "./file.csv" -S server_name -U login_id -P password -d database_name -c
Here is what I get from SELECT * FROM Table1;
ID a b c
1 A BC.csv C,D
Here is what I expected:
1,A.csv,A,B
2,C.csv,C,D
Why is my first field "eaten" and replaced by 1 (the ID), which shifts everything to the right?
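No answer appears in this excerpt, but here is a hedged sketch of the usual way to handle the x-vs-x+1 mismatch described in the update: a bcp format file can map the three fields in the data file to columns a, b, and c while leaving the identity column ID unmapped, so SQL Server assigns the ID values itself. The version line, field lengths, and collation below are assumptions and would need to match the actual server:
12.0
3
1   SQLCHAR   0   8   ","    2   a   SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   8   ","    3   b   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   8   "\n"   4   c   SQL_Latin1_General_CP1_CI_AS
Saved as table1.fmt (a hypothetical name), the import would then reference the format file instead of -c/-t/-r:
bcp Table1 in "./file.csv" -S server_name -U login_id -P password -d database_name -f table1.fmt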

Importing all MySQL databases at once

MySQL allows you to export the complete database at once, but I found it really very tough to import the complete database at once.
I used mysqldump -u root -p --all-databases > alldb.sql, and when I try to import the complete database with the mysql -u root -p < alldb.sql command it gives me a very weird error.
Error
SQL query:
--
-- Database: `fadudeal_blog`
--
-- --------------------------------------------------------
--
-- Table structure for table `wp_commentmeta`
--
CREATE TABLE IF NOT EXISTS `wp_commentmeta` (
`meta_id` BIGINT( 20 ) UNSIGNED NOT NULL AUTO_INCREMENT ,
`comment_id` BIGINT( 20 ) UNSIGNED NOT NULL DEFAULT '0',
`meta_key` VARCHAR( 255 ) COLLATE utf8mb4_unicode_ci DEFAULT NULL ,
`meta_value` LONGTEXT COLLATE utf8mb4_unicode_ci,
PRIMARY KEY ( `meta_id` ) ,
KEY `comment_id` ( `comment_id` ) ,
KEY `meta_key` ( `meta_key` ( 191 ) )
) ENGINE = INNODB DEFAULT CHARSET = utf8mb4 COLLATE = utf8mb4_unicode_ci AUTO_INCREMENT =1;
MySQL said:
#1046 - No database selected
It's saying #1046 - No database selected. My question is: since MySQL knows that I exported the complete database at once, how can I specify just one database name?
I don't know if I am right or wrong, but I tried it multiple times and found the same problem. Please let me know how we can upload or import the complete database at once.
Rohit, I think you will have to create the database, then issue a USE, and then run the SQL. I looked at the manual here https://dev.mysql.com/doc/refman/5.7/en/mysql-batch-commands.html and it also provides an option for you to mention the db_name when you connect itself,
something like mysql -u root -p < sql.sql (assuming the db is already created). It may be worth a try doing it that way.
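A hedged sketch of that suggestion, using the database name from the error message above (fadudeal_blog) and assuming it does not yet exist on the target server:
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS fadudeal_blog"
mysql -u root -p fadudeal_blog < alldb.sql
The second command selects fadudeal_blog on connection, so statements in the dump that lack a USE clause no longer fail with #1046.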
shell> mysqldump --databases db1 db2 db3 > dump.sql
The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names.
With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database so database contents are loaded into the same database from which they came. If you want to cause the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each CREATE DATABASE statement.
From: https://dev.mysql.com/doc/refman/5.7/en/mysqldump-sql-format.html
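A hedged illustration of the round trip the quoted documentation describes (db1 and db2 are placeholder database names); the dump file then carries its own CREATE DATABASE and USE statements, so no database needs to be selected on reload:
mysqldump -u root -p --databases db1 db2 --add-drop-database > dump.sql
mysql -u root -p < dump.sql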

Is it possible to bulk load a text file into a table instead of an external table in Netezza?

I am bulk loading data into Netezza from a text file using an EXTERNAL TABLE, and after loading the data I need to update some columns. Since you cannot update an external table, I have to stage all the data from the external table into a temp table before I am able to do the updates. Is there any other way to bulk load the text file data directly into the table instead of an external table in Netezza?
If you are using straight ODBC, I would consider a "transient external table":
INSERT INTO target_table
SELECT * FROM EXTERNAL 'C:\FileName.txt'
USING (delim '|' datestyle 'MDY' datedelim '/' REMOTESOURCE 'ODBC' MAXERRORS 50 LOGDIR 'C:\');
Look at nzload http://www.enzeecommunity.com/message/12759
Example:
To load the database dev as user admin with the password production, specifying the table name areacode, using tab delimiters, and specifying the input file as phone-prefix.dat, enter:
nzload -u admin -pw production -db dev -t areacode -delim '\t' -df phone-prefix.dat
Try nzload, as @cairnz said. Also, if you are connecting over ODBC, you can use the REMOTESOURCE 'ODBC' option to load from a text file into a table, bypassing the creation of a separate external table. Take a look at the Netezza Data Loading Guide PDF provided by IBM.

Mysql dump of single table

I am trying to make a normal (that is, restorable) backup of a MySQL database. My problem is that I only need to back up a single table, the one that was last created or edited. Is it possible to set up mysqldump to do that? MySQL can find the last inserted table, but how can I include it in the mysqldump command? I need to do that without locking the table, and the DB has partitioning enabled. Thanks for the help.
You can use this SQL to get the last inserted/updated table:
select table_schema, table_name
from information_schema.tables
where table_schema not in ("mysql", "information_schema", "performance_schema")
order by greatest(create_time, update_time) desc limit 1;
Once you have the results from this query, you can incorporate it into any other language (for example, bash) to produce the exact table dump; see the bash sketch below.
./mysqldump -uroot -proot mysql user > mysql_user.sql
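A hedged bash sketch of that combination (the credentials are placeholders; it also assumes the query above returns exactly one schema/table pair):
#!/bin/bash
# Find the most recently created/updated table and dump just that table.
read -r schema table < <(mysql -u root -p'secret' -N -B -e "
  select table_schema, table_name
  from information_schema.tables
  where table_schema not in ('mysql', 'information_schema', 'performance_schema')
  order by greatest(create_time, update_time) desc limit 1;")
# --single-transaction avoids locking InnoDB tables during the dump.
mysqldump -u root -p'secret' --single-transaction "$schema" "$table" > "${schema}_${table}.sql"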
For dumping a single table, use the command below.
Open the command prompt and change to the MySQL bin directory, e.g. C:\Program Files\MySQL\bin.
Now type the command:
mysqldump -u username -ppassword databasename tablename > C:\backup\filename.sql
Here username - your mysql username
password - your mysql password (note there is no space after -p)
databasename - your database name
tablename - your table name
C:\backup\filename.sql - the path where the file should be saved and the filename.
If you want to add the backed-up table to any other database, you can do it with the following steps:
log in to MySQL
type the command below:
mysql -u username -ppassword databasename < C:\backup\filename.sql
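A hedged concrete example of the dump-and-restore pair above, with made-up names (shopdb, orders, reporting_db) standing in for real ones:
mysqldump -u root -psecret shopdb orders > C:\backup\orders.sql
mysql -u root -psecret reporting_db < C:\backup\orders.sql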
