Managing error messages generated by the Postgres \copy command

I am working on a tool to import large sets of files into a Postgres database. Currently I have a working prototype - a bash script going over the list of files, and using psql with the \copy command to import each file.
I would like to add some error handling; I'm thinking of parsing error messages to generate feedback for users, but I can't find a specification, or a list of error messages that are generated by the \copy command in particular.
Is there a tool, or a library, or even a reference list that I could use? I am constrained to use either Shell or Node with the Postgres module.

That should be fairly simple; just check the return code:
psql -c "\copy ${atable} FROM '${afile}' (FORMAT 'csv')"
if [ $? -ne 0 ]; then
    echo "copy failed!"
fi
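If you also want the actual error text so you can give feedback to users (as the question asks), capture psql's stderr as well; a minimal sketch along the same lines, assuming the same ${atable} and ${afile} variables (the temporary error file is just an illustration):
errfile=$(mktemp)
# psql reports \copy failures on stderr as "ERROR: ..." lines, often with a CONTEXT line
if ! psql -c "\copy ${atable} FROM '${afile}' (FORMAT 'csv')" 2>"$errfile"; then
    echo "copy of ${afile} failed:"
    cat "$errfile"
fi
rm -f "$errfile"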

Related

Postgres pg_dump issue

Good Day,
I've been trying to restore a dump file using the psql client and I'm getting this error:
psql.bin:/home/user/Desktop/dump/dumpfile.sql:907:
ERROR: more than one function named "pg_catalog.avg"
CONTEXT: COPY pg_aggregate, line 1, column aggfnoid: "pg_catalog.avg"
I created the dump file from a different Postgres DB (version: 9.4.5) using the command:
pg_dump --username=pgroot ${tables} --no-owner --no-acl --no-security \
    --no-tablespaces --no-unlogged-table-data --data-only dbname > dumpfile.sql
Where ${tables} is a variable in the form:
-T table1 -T table2 -T table3 ...
This is because I'm dumping specific tables listed in a newline-delimited file. Hence it's not the entire database but specific tables that I want to dump.
I tried loading the dump file into another Postgres DB (9.6) using the following command:
psql -d dbname -U superuser -v "ON_ERROR_STOP=1" -f ${DUMP_DIR}dumpfile.sql \
    -1 -a > ${LOG_ERR_DIR}dumpfile.log 2> ${LOG_ERR_DIR}dumpfile.err
This gave the error mentioned above. It seems this error occurs because the dump file tries to add the function "pg_catalog.avg" to the database, and it fails because that function already exists.
The SQL file generated by pg_dump does not create the pg_catalog.avg function anywhere, so I don't know why this is occurring.
So I tried dropping the database and creating it from template0, and I still got the error. It seems to me that it's a bug, based on the following post:
Re: BUG #6176: pg_dump dumps pg_catalog tables
I'm stuck trying to resolve this issue. If anyone can help me resolve it, please respond.
Thank you in advance,
j3rg
I found out what was causing this issue. It seems there was an extra newline in the file containing the table listing. This was producing an extra table argument with no table specified, and in turn pg_dump exported the system tables into the file. The file I was searching in for the avg function was the wrong file, too.
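In case it helps anyone hitting the same thing, the stray newline can be guarded against by skipping blank lines when the table arguments are built; a minimal sketch, assuming a newline-delimited tables.txt like the file described in the question (the file name itself is just an illustration):
tables=""
# build the table arguments, skipping blank lines so pg_dump never
# receives a -T flag with an empty or missing table name
while IFS= read -r t; do
    [ -n "$t" ] && tables="$tables -T $t"
done < tables.txt

pg_dump --username=pgroot $tables --no-owner --no-acl --no-security \
    --no-tablespaces --no-unlogged-table-data --data-only dbname > dumpfile.sql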

How to query Maya in script for supported file translator plugins?

I'm trying to specify an FBX file in MEL using the command
file -f -pmt 0 -options "v=0;" -typ "FBX" -o
On one computer this works great. On another, it fails but DOES work if I use
-typ "Fbx"
I think I'd like to query for the supported translators in my script, then either select the correct one or report an error. Is this possible? Am I mis-diagnosing the problem?
MEL has a command called pluginInfo. You could write a simple function that returns the proper spelling based on that: pluginInfo -v -query "fbxmaya"; will give you the version of the FBX plugin. I haven't used MEL in a while, so I'm not going to try to make this perfect, but maybe something like:
string $fbxType;
if (`pluginInfo -v -query "fbxmaya"` != "")
    $fbxType = "FBX";
else
    $fbxType = "Fbx";
Then just plug that variable into file -f -pmt 0 -options "v=0;" -typ $fbxType -o.
It might be a different version of fbx. You'd have to provide another line which determines the version of fbx on that particular machine and pipes in the correct spelling.

perl script without using DBI

I have to make a Perl script populate a PostgreSQL database without using DBI or any sort of database interface module. I am a beginner to scripting, so naturally I've been stuck on this for quite a while. I only have this much so far.
open my $pipe, '|-', "psql -d postgres -U postgres", @options or die;
# NOT SURE WHAT TO DO AFTER THIS
close $pipe;
Edit 1: Now I'm trying to do this.
for ($count = $iters; $count >= 1; $count--) {
    $randdecimal = rand();
    $pipe "INSERT INTO random_table (runid, random_number) VALUES ($runid, $randdecimal)";
}
but it gives me a syntax error
Like the others say, DBI is much better than printing to a pipe.
However, there is a halfway house. Just print all your SQL to STDOUT and then do something like:
myscript.pl | psql -v ON_ERROR_STOP=1 --single-transaction -f -
This lets you easily check your script output / send it to a file. The psql options stop on the first error, wrap everything in a transaction and read from STDIN. You might want the usual -h/-U options too.
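For completeness, here is what that halfway house looks like end to end with an exit-status check; a minimal sketch, assuming the generator script is called myscript.pl as above (the host, user, and database names are placeholders):
# myscript.pl prints plain INSERT statements to STDOUT; psql loads them in a single
# transaction and stops at the first error, so a non-zero exit means nothing was committed
if ! myscript.pl | psql -h dbhost -U dbuser -d dbname \
        -v ON_ERROR_STOP=1 --single-transaction -f -; then
    echo "load failed" >&2
    exit 1
fi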
Personally, I tend to have two terminals open and just write to a .sql file then \i from a psql prompt. I like having a record of what command I ran.

Has anyone faced this error "Error: No valid counters" using typeperf?

Has anyone faced this error, "Error: No valid counters", using the typeperf utility while writing to a SQL database? I have tried a variety of different things, but every time I try to write to the SQL database using counters listed in a file, it fails with the "No valid counters" error.
The command was executed in the following fashion:
C:\>typeperf -cf "E:\DBA\CounterCollector\counters_eg.txt" -si 15 -sc 10 -f SQL -o SQL:SQLServerDS!log5
The counters_eg.txt file contains:
"\\<computername>\PhysicalDisk(* *)\Avg. Disk Queue Length"
I am able to write to the SQL database by specifying the counters individually at the command prompt.
example:
C:\Windows\system32>typeperf -f SQL -o SQL:SQLServerDS!log4 "\\<computername>\PhysicalDisk(* *)\Avg. Disk Queue Length"
Note: I have replaced the server name by <computername>.
Include a double '%%', i.e.
typeperf "\\<remote-IP>\Process(*)\%% Processor Time" -sc 1
Figured it out:
After following the example from
https://www.simple-talk.com/sql/performance/collecting-performance-data-into-a-sql-server-table/
I kept on getting the same error message, "Error: No valid counters". The counters file is exactly the same as the example provided by Feodor, but when I put the counter names on the command line individually, they get processed successfully. The problem only appeared when I tried to run the entire command.
Instead of using what Feodor used:
"TYPEPERF -f SQL -s ALF -cf "C:\CounterCollect\Counters.txt" -si 15 -o SQL:SQLServerDS!log1 -sc 4",
I tweaked it a little bit (after looking at the second example from http://technet.microsoft.com/en-us/library/cc753182.aspx) and finally it WORKED! It is a matter of switching the parameters.
After following the demo by Feodor, I used the syntax below and it worked for me. I am using SQL Server 2012, and here is the command:
TYPEPERF -cf "C:\PerfMonCollect\Counters.txt" -si 5 -sc 4 -f SQL -o SQL:SQLdatasource!log1
Your counters list may be damaged. Run the perfmon GUI utility and make sure that you are able to see the counters in there.
Make sure your file name is correct: counters.txt, NOT counters.txt.txt. Show file extensions and then check the file name. Also, you can try the Run command and paste the path to the text file to see if it opens.
I had the same issue and it drove me crazy.
I had this error and solved it by adding the user running typeperf to the local Administrators group on the servers that threw the error.
I was getting this error on a server (Windows Server 2012 R2) that I had admin rights on; I had to manually rebuild the performance counters and it was sorted. Here's the link: https://support.microsoft.com/en-us/help/2554336/how-to-manually-rebuild-performance-counters-for-windows-server-2008-6
The problem is that the file should contain only the counter names, without " quote marks.
Removing all " characters from the counter list resolved the issue for me.
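Putting the reported fixes together into one concrete shape: a counters file with one bare counter path per line (no surrounding quotes), fed to typeperf with the switch order from the working command above. This is only a sketch reusing the names already shown in this thread, so adjust the paths and DSN to your setup.
Counters.txt (one counter path per line, no quotes):
\\<computername>\PhysicalDisk(* *)\Avg. Disk Queue Length
Command:
TYPEPERF -cf "C:\PerfMonCollect\Counters.txt" -si 5 -sc 4 -f SQL -o SQL:SQLdatasource!log1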

Is there a way to get the currently executing file from sqlcmd?

If I call sqlcmd with the -i command line switch, I'd like to be able to get the name of the file. So, I call
sqlcmd -S <servername> -E -i filename.sql
I'd like the contents of the script to somehow be able to print the filename without having to hard-code it in the file. Looking at the variables and commands that are documented in BOL, I don't see anything like this, but I just wanted to make sure. Thanks in advance.
Among the list of sqlcmd Scripting Variables, I don't see anything that holds the name of the input file.
But you can send the file name as a parameter when you call sqlcmd.
Input file (filename.sql)
PRINT '$(p1)'
Sqlcmd:
sqlcmd -S .\Server -i filename.sql -v p1="filename.sql"
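Since the value is only passed on the command line, a wrapper can keep the variable and the file name in sync automatically; for example, a hypothetical batch-file loop over a folder of scripts, reusing the p1 variable from the example above (use a single % instead of %% at an interactive prompt):
for %%f in (*.sql) do sqlcmd -S <servername> -E -i "%%f" -v p1="%%f"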
Maybe you should explore PowerShell for this.
