db2exfmt unable to open output file - database

I am trying to create query explain tables using db2exfmt.
I am using the DB2 CLP and following the steps below:
Connect to sample
set current explain mode explain
Run my query: select * from staff where JOB = 'Sales'
db2 set current explain mode no
db2exfmt -d sample -# 0 -w -1 -g -TIC -n % -s % -o output.txt
After the last step, I am getting this output:
Connecting to the Database.
Connect to Database Successful.
Unable to open output file.
I am not sure why it is not able to open the output file. How should I resolve this issue?

It appears that you don't have write access to the C:\Program Files\IBM\SQLLIB\BIN directory, so db2exfmt can't open the output file for writing.
Change to a directory you do have write permission for, or specify a file name with a full path for the -o option.
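For example, either run db2exfmt from a directory you can write to, or give -o a full path. A minimal sketch, assuming C:\temp exists and is writable:
cd /d C:\temp
db2exfmt -d sample -# 0 -w -1 -g TIC -n % -s % -o C:\temp\output.txt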

Related

DB2 relocate with .dbf file on DB2 v. 10.5 LUW

Hi I have a DB2 database at
/db2/ins/data/ins/dbtest
but its origin is
/db2/oldins/data/oldins/dbtest1
I copied the files to the folders as needed.
My relocate.cfg look like:
DB_NAME=dbtest1,dbtest
DB_PATH=/db2/oldins/data/dbtest1/metalog/,/db2/ins/data/ins/dbtest/metalog
INSTANCE=oldins,ins
STORAGE_PATH=/db2/oldins/data/dbtest1/data/,/db2/ins/data/ins/dbtest/data/
LOG_DIR=/db2/oldins/data/dbtest1/metalog/oldins/NODE0000/SQL00001/LOGSTREAM0000/,/db2/ins/data/ins/dbtest/metalog/NODE0000/SQL00001/
LOGARCHMETH1=DISK:/db2/backup/ins/dbtest/archivlogfiles/
I get this error:
DBT1006N The "/db2/oldins/data/dbtest1/data/dbtest1_TS.dbf/SQLTAG.NAM" file or device could not be opened.
The system is DB2 v. 10.5 LUW.
The file does exist and the privileges are correct.
How do I add this to the relocate.cfg file or what do I need to do?
Thank you for any help.
Here is a simple test case showing how to use db2relocatedb.
[Db2] Simple test case shell script for db2relocatedb command
https://www.ibm.com/support/pages/node/1099185
It has a topic about:
- db2relocatedb for changing container path
It explains that we need to move the path with the 'mv' command before running the db2relocatedb command, as below:
# mv storage path manually and run db2relocatedb with relocate.cfg file
mv /home/db2inst1/db/stor1 /home/db2inst1/db/new1
mv /home/db2inst1/db/stor2 /home/db2inst1/db/new2
db2relocatedb -f relocate.cfg
It is recommended to review it.
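As a rough sketch of that pattern applied to the paths in your relocate.cfg (the exact mv targets are assumptions; verify them against your actual directory layout before running anything):
# move the storage and metalog paths manually, then run db2relocatedb
mv /db2/oldins/data/dbtest1/data /db2/ins/data/ins/dbtest/data
mv /db2/oldins/data/dbtest1/metalog /db2/ins/data/ins/dbtest/metalog
db2relocatedb -f relocate.cfg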
Hope this helps.

sfdx force:data:bulk:upsert request contains invalid data

Having some trouble using the bulk:upsert command to update Account objects via a csv file. Hopefully someone can help me with this. Below is what I'm doing:
My csv file name is account.csv and it contains the following data:
Id,Name
0012F00000QjhC7QAJ,LimTest 1
0012F00000QjhkSQAR,LimTest 2
Below is the command that I'm running:
sfdx force:data:bulk:upsert -s Account -f account.csv -i Id -u dev
The above command gets submitted successfully, but the job failed.
The batch status is as below:
When I view the request, it looks like the following:
It worked after I manually created an empty file and copied and pasted the data into this new file. The original file, account.csv, was created using this command:
sfdx force:data:soql:query -q "select Id, Name from Account" -r csv -u dev > account.csv
I guess the above command must have created the file in an encoding that bulk:upsert does not know how to handle.
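If that is the case, re-encoding the exported file to UTF-8 before the upsert should also work. A sketch, assuming the redirect produced UTF-16 output and that iconv (not part of sfdx) is available:
# re-encode the query output, then upsert the UTF-8 copy
iconv -f UTF-16 -t UTF-8 account.csv > account-utf8.csv
sfdx force:data:bulk:upsert -s Account -f account-utf8.csv -i Id -u dev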

Execute dynamic query and print to file

I have a script with a dynamic query. I want to execute the query and output its result to a file. I can't seem to figure out how to output the result of an "execute" statement.
Sample code below.
declare @sql_text varchar(300)
select @sql_text = 'select 1'
exec (@sql_text) > output.txt
To give more context. My actual script would be looping through the dynamic query and output to different files (dynamic filename as well).
You set the output file via the -o parameter to the isql client used to execute the SQL. This will send the output to a file for any SQL, be it normal or dynamic SQL.
So put the SQL in an input file and then run
isql -U user -P password -S servername -i input_filename -o output.txt
You can't write directly to an operating system file from within ASE itself without enabling xp_cmdshell, which is a potential security issue (as it allows O/S commands to be run as the user running the Sybase dataserver) and is therefore prohibited at most sites.
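For the looping part of your script, you can drive it from the shell rather than from inside ASE, with one input file and one output file per query. A sketch; the file names, server name, and credentials are placeholders:
# run each query file through isql and write a matching output file
for f in query1.sql query2.sql query3.sql
do
    isql -U user -P password -S servername -i "$f" -o "${f%.sql}.out"
done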

Postgres pg_dump issue

Good Day,
I've been trying to restore a dump file using the psql client and I'm getting this error:
psql.bin:/home/user/Desktop/dump/dumpfile.sql:907:
ERROR: more than one function named "pg_catalog.avg"
CONTEXT: COPY pg_aggregate, line 1, column aggfnoid: "pg_catalog.avg"
I created the dump file from a different Postgres DB (version: 9.4.5) using the command:
pg_dump --username=pgroot ${tables} --no-owner --no-acl --no-security
--no-tablespaces --no-unlogged-table-data --data-only dbname > dumpfile.sql
Where ${tables} is a variable in the form:
-T table1 -T table2 -T table3 ...
This is because I'm dumping specific tables listed in a newline-delimited file. Hence it's not the entire database but specific tables I want to dump.
I tried loading the dump file into another Postgres DB (9.6) using the following command:
psql -d dbname -U superuser -v "ON_ERROR_STOP=1" -f
${DUMP_DIR}dumpfile.sql -1 -a > ${LOG_ERR_DIR}dumpfile.log
2>${LOG_ERR_DIR}dumpfile.err
This gave the error mentioned above. It seems this error is occurring because the dump file tries to add the function "pg_catalog.avg" to the database and it gives an error because it already exists.
The SQL file generated by pg_dump does not contain anything that creates the pg_catalog.avg function, so I don't know why this is occurring.
So I tried dropping the database and creating it from template0, and still I got the error. It seems to me that it's a bug, based on the following post:
Re: BUG #6176: pg_dump dumps pg_catalog tables
I'm stuck trying to resolve this issue. If anyone can help me resolve it, please respond.
Thank you in advance,
j3rg
I found out what was causing this issue. It seems that there was an extra newline in the file containing the table listing. This was causing an extra table argument with no table specified, and in turn pg_dump exported the system tables into the file. The file I was searching in for the avg function was the wrong file, too.
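In case it helps anyone else, stripping blank lines when building the -T list avoids the empty argument. A sketch, with tables.txt standing in for my table-list file:
# build "-T table1 -T table2 ..." while skipping blank lines
tables=$(grep -v '^[[:space:]]*$' tables.txt | sed 's/^/-T /' | tr '\n' ' ')
pg_dump --username=pgroot ${tables} --no-owner --no-acl --no-security-labels \
  --no-tablespaces --no-unlogged-table-data --data-only dbname > dumpfile.sql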

csv output from windows batch + sqlcmd only returns first column

I have looked all over the internet and can't seem to find a solution to this problem.
I am trying to output query results as a CSV using a combination of sqlcmd and Windows batch. Here is what I have so far:
sqlcmd.exe -S %DBSERVER% -U %DBUSER% -P %DBPASS% -d %USERPREFIX% -Q "SELECT Username, UserDOB, UserGender FROM TABLE" -o %USERDATA%\%USERPREFIX%\FACT_BP.CSV -h-1 -s","
Is there something I'm missing here? Some setting that only looks at the first column of the query results?
Any advice at all would be a huge help; I'm lost.
Here is the reference page from MSDN on SQLCMD.
http://technet.microsoft.com/en-us/library/ms162773.aspx
I placed this command in a batch file in C:\temp as go.bat.
sqlcmd -S(local) -E -dmaster
-Q"select cast(name as varchar(16)), str(database_id,1,0), create_date from sys.databases"
-oc:\temp\sys.databases.csv -h-1 -s,
Notice I hard coded the file name and removed the "" around the field delimiter.
I get the expected output below.
Either the command does not like the system variables or something else is wrong. Please try my code as a baseline test. It works for SQL 2012.
Also, the rows-affected count is always dumped to the file. You must clear this out of the file. That is why I do not use SQLCMD for ETL.
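If you do stay with SQLCMD, putting SET NOCOUNT ON in front of the query is one way to suppress that rows-affected line (my suggestion, not part of the original batch file):
sqlcmd -S(local) -E -dmaster
-Q"set nocount on; select cast(name as varchar(16)), str(database_id,1,0), create_date from sys.databases"
-oc:\temp\sys.databases.csv -h-1 -s,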
Why not use BCP instead?
I have written several articles on my website.
http://craftydba.com/?p=1584
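A BCP version of the original export would look roughly like this (a sketch; adjust the query, table name, and output path to your schema):
bcp "SELECT Username, UserDOB, UserGender FROM %USERPREFIX%.dbo.TABLE" queryout "%USERDATA%\%USERPREFIX%\FACT_BP.CSV" -c -t, -S %DBSERVER% -U %DBUSER% -P %DBPASS%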
