Sybase BCP out with null values

Is there a way to get BCP out to return NULL values rather than just blanks?
Current Output:
'apse0420', '', '2', '2'
What I want:
'apse0420', 'NULL', '2', '2'
BCP out code:
bcp <db>..<table> out test.txt -S <server> -U <username> -P <password> -c -t ", " -r "\n"

Create a view over the table that converts NULL columns into the string 'NULL', then BCP out the view.
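A minimal sketch of that approach, with hypothetical table and column names; Sybase's isnull() substitutes the literal string 'NULL' when the column is NULL (the column must be, or be converted to, a character type):

```sql
-- hypothetical names; convert() is only needed for non-character columns
create view mytable_bcp as
select col1,
       isnull(convert(varchar(20), col2), 'NULL') as col2,
       col3
from mytable
```

Then run the same bcp command against the view name instead of the table name.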

How to speed up bash random name generation?

I have a problem with my code's performance: it runs very slowly. I need to generate a million-plus random persons for my Postgres DB and insert them into it. Each person has the parameters name, birthdate, gender, and age. I created lists of first names and last names from which I randomly select a name. Can someone help me?
Here is my code:
#docker params
name="`docker ps | rev | cut -d " " -f1 | rev | grep -v NAMES`"
dbs_name="DBS_projekt"
#load names from files
firstName=(`cat generatorSource/firstNames.txt`)
firstNameCount="`wc -l generatorSource/firstNames.txt | tr -s ' ' | cut -d ' ' -f2`"
secondName=(`cat generatorSource/lastNames.txt`)
secondNameCount="`wc -l generatorSource/lastNames.txt| tr -s ' ' | cut -d ' ' -f2`"
#gender array
gender=("Male" "Female" "Other")
#actual date
now=$(date | rev | cut -d " " -f1 | rev)
array=()
for ((x = 1; x <= 1000;x++))
do
array+="INSERT INTO persons(name,birthdate,gender,age) VALUES"
for (( n=1; n<=1000; n++ ))
do
secondrand=$(( ( RANDOM % $secondNameCount ) ))
firstrand=$(( ( RANDOM % $firstNameCount ) ))
genderand=$(( ( RANDOM % 3 ) ))
year=$(( ( RANDOM % 118 ) + 1900 ))
month=$(((RANDOM % 12) + 1))
day=$(((RANDOM % 28) + 1))
age=$(expr $now - $year)
if [ $n -eq 1000 ]; then
array+="('${firstName[$firstrand]}
${secondName[$secondrand]}','$year-$month-$day',
'${gender[$genderand]}','$age');"
else
array+="('${firstName[$firstrand]}
${secondName[$secondrand]}','$year-$month-$day',
'${gender[$genderand]}','$age'),"
fi
done
done
#run psql in docker and run insert commands
docker exec -i $name psql -U postgres << EOF
\c $dbs_name
$array
EOF
Note that you declare "array" as an array, but you use it as a string.
array=()
...
array+="INSERT INTO persons(name,birthdate,gender,age) VALUES"
This is what's happening:
$ array=()
$ declare -p array
declare -a array='()'
$ array+="first"
$ array+="second"
$ declare -p array
declare -a array='([0]="firstsecond")'
To insert an element into an array, you must use parentheses:
$ array=()
$ array+=("first")
$ array+=("second")
$ declare -p array
declare -a array='([0]="first" [1]="second")'
I suspect this may be one source of slowness: you're constructing one gigantic string. Add the parentheses as shown, and then change the docker call to
IFS=$'\n'
docker exec -i $name psql -U postgres << EOF
\c $dbs_name
${array[*]}
EOF
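The difference between the two forms of += is easy to verify in isolation; a minimal sketch (variable names are illustrative):

```shell
#!/usr/bin/env bash
# += without parentheses concatenates onto element 0: one big string.
as_string=()
as_string+="INSERT ..."
as_string+="(...),"
echo "${#as_string[@]}"    # 1

# += with parentheses appends a new element each time.
as_array=()
as_array+=("INSERT ...")
as_array+=("(...),")
echo "${#as_array[@]}"     # 2
```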

Issue with record length when creating CSV in SQLCMD

I am experiencing an issue when generating a CSV file from SQLCMD. I have a test table with 2 numeric columns and 2 nvarchar columns (one 64 characters long and the other 2000 characters long).
If I issue the command select * from test, what I get is this:
number1  number2  varcharacter1  varcharacter2
1        2        test           longertext
2        3        test2          longertext2
3        4        test3          longertext3
If I want to output this as csv I use this sql:
select cast(number1 as nvarchar) + ',' +
varcharacter2 + ',' +
varcharacter1 + ',' +
cast(number2 as nvarchar)
from test
This gives me the following, which looks OK:
(No column name)
1,longertext,test,2
2,longertext2,test2,3
3,longertext3,test3,4
The issue is when I enter it via SQLCMD. If I issue the following command
Scripts>sqlcmd -S localhost\STUDIO -d studiodb -U sa -P Studio2016! -h-1 -i test.sql -o test.csv
What I get is a CSV file which contains the data, but when I open it in TextPad the last column is padded out to 2000 characters long.
1,longertext,test,2
2,longertext2,test2,3
3,longertext3,test3,4
What is causing this and how can I fix it? What am I doing wrong?
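A likely cause: sqlcmd renders every column at its declared display width, so an nvarchar(2000) expression is blank-padded to 2000 characters. The usual fix is the -W switch, which removes trailing spaces from the output; a hedged sketch of the same invocation with -W added:

```
sqlcmd -S localhost\STUDIO -d studiodb -U sa -P Studio2016! -h-1 -W -i test.sql -o test.csv
```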

SED Match/Replace URL and Update Serialized Array Count

Below is an example snippet from a SQL dump file. This specific row contains a meta_value holding a WordPress PHP serialized array. During database restores in the dev., test., and qc. environments I'm using sed to replace URLs with the respective environment sub-domain.
INSERT INTO `wp_postmeta`
(`meta_id`,
`post_id`,
`meta_key`,
`meta_value`)
VALUES
(527,
1951,
'ut_parallax_image',
'a:4:{
s:17:\"background-image\";
s:33:\"http://example.com/background.jpg\";
s:23:\"mobile-background-image\";
s:37:\"www.example.com/mobile-background.jpg\";
}')
;
However, I need to extend this to correct the string lengths in the serialized arrays after the replacement.
sed -r -e "s/:\/\/(www\.)?${domain}/:\/\/\1${1}\.${domain}/g" "/vagrant/repositories/apache/$domain/_sql/$(basename "$file")" > "/vagrant/repositories/apache/$domain/_sql/$1.$(basename "$file")"
The result should look like this for dev.:
INSERT INTO `wp_postmeta`
(`meta_id`,
`post_id`,
`meta_key`,
`meta_value`)
VALUES
(527,
1951,
'ut_parallax_image',
'a:4:{
s:17:\"background-image\";
s:37:\"http://dev.example.com/background.jpg\";
s:23:\"mobile-background-image\";
s:41:\"www.dev.example.com/mobile-background.jpg\";
}')
;
I'd prefer not to introduce any dependencies other than sed.
Thanks @John1024. @Fabio and @Seth, I'm not sure about performance, but this code works, and without wp-cli:
localdomain=mylittlewordpress.local
maindomain=strongwordpress.site.ru
cat dump.sql | sed 's/;s:/;\ns:/g' | awk -F'"' '/s:.+'$maindomain'/ {sub("'$maindomain'", "'$localdomain'"); n=length($2)-1; sub(/:[[:digit:]]+:/, ":" n ":")} 1' | sed ':a;N;$!ba;s/;\ns:/;s:/g' | sed "s/$maindomain/$localdomain/g" | mysql -u$USER -p$PASS $DBNAME
The PHP serialized string is exploded on ';s:' into a multiline string, and awk then processes all lines with @John1024's solution.
cat dump.sql | sed 's/;s:/;\ns:/g'
Pipe the output to awk:
awk -F'"' '/^s:.+'$maindomain'/ {sub("'$maindomain'", "'$localdomain'"); n=length($2)-1; sub(/:[[:digit:]]+:/, ":" n ":")} 1'
After all lines are processed, the multiline string is imploded back into one line (as it exists in the original dump.sql). Thanks @Zsolt https://stackoverflow.com/a/1252191
sed ':a;N;$!ba;s/;\ns:/;s:/g'
An additional sed replacement is needed for any other strings in the WordPress database.
sed "s/$maindomain/$localdomain/g"
And load it into the main server DB:
... | mysql -u$USER -p$PASS $DBNAME
Your algorithm involves arithmetic. That makes sed a poor choice. Consider awk instead.
Consider this input file:
$ cat inputfile
something...
s:33:\"http://example.com/background.jpg\";
s:37:\"www.example.com/mobile-background.jpg\";
s:33:\"http://www.example.com/background.jpg\";
more lines...
I believe that this does what you want:
$ awk -F'"' '/:\/\/(www[.])?example.com/ {sub("example.com", "dev.example.com"); n=length($2)-1; sub(/:[[:digit:]]+:/, ":" n ":")} 1' inputfile
something...
s:37:\"http://dev.example.com/background.jpg\";
s:37:\"www.example.com/mobile-background.jpg\";
s:41:\"http://www.dev.example.com/background.jpg\";
more lines...
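The length arithmetic can be checked on a single line; this sketch (assuming any POSIX awk) runs the same program over one serialized string:

```shell
#!/usr/bin/env bash
# Recompute the s:<len>: prefix after substituting the longer domain.
line='s:33:\"http://example.com/background.jpg\";'
fixed="$(printf '%s\n' "$line" | awk -F'"' '{
  sub("example.com", "dev.example.com")   # modifying $0 re-splits the fields
  n = length($2) - 1                      # $2 still ends with the escaping backslash
  sub(/:[[:digit:]]+:/, ":" n ":")
  print
}')"
echo "$fixed"    # s:37:\"http://dev.example.com/background.jpg\";
```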
WP-CLI handles serialized PHP arrays during a search-replace http://wp-cli.org/commands/search-replace/. I wanted to try a native shell solution, but having WP-CLI was worth the extra overhead in the end.
Here is the sample text file you asked for (it's a database export).
Original (https://www.example.com) :
LOCK TABLES `wp_options` WRITE;
INSERT INTO `wp_options` VALUES (1,'siteurl','https://www.example.com','yes'),(18508,'optionsframework','a:48:{s:4:\"logo\";s:75:\"https://www.example.com/wp-content/uploads/2014/04/logo_imbrique_small3.png\";s:7:\"favicon\";s:62:\"https://www.example.com/wp-content/uploads/2017/04/favicon.ico\";}','yes')
/*!40000 ALTER TABLE `wp_options` ENABLE KEYS */;
UNLOCK TABLES;
Result needed (http://example.localhost) :
LOCK TABLES `wp_options` WRITE;
INSERT INTO `wp_options` VALUES (1,'siteurl','http://example.localhost','yes'),(18508,'optionsframework','a:48:{s:4:\"logo\";s:76:\"http://example.localhost/wp-content/uploads/2014/04/logo_imbrique_small3.png\";s:7:\"favicon\";s:64:\"https://example.localhost/wp-content/uploads/2017/04/favicon.ico\";}','yes');
/*!40000 ALTER TABLE `wp_options` ENABLE KEYS */;
UNLOCK TABLES;
As you can see :
there are multiple occurrences on the same line
escape characters aren't counted in the length number (e.g. "/")
some occurrences aren't preceded by an "s:" length number (no need to recount those; they can be handled after awk with a simple sed)
Thanks in advance !
@Alexander Demidov's answer is great; here's our implementation for reference.
public static function replaceInFile(string $replace, string $replacement, string $absoluteFilePath): void
{
    ColorCode::colorCode("Attempting to replace ::\n($replace)\nwith replacement ::\n($replacement)\n in file ::\n(file://$absoluteFilePath)", iColorCode::BACKGROUND_MAGENTA);

    $replaceDelimited = preg_quote($replace, '/');
    $replacementDelimited = preg_quote($replacement, '/');

    $replaceExecutable = CarbonPHP::CARBON_ROOT . 'extras/replaceInFileSerializeSafe.sh';

    // @link https://stackoverflow.com/questions/29902647/sed-match-replace-url-and-update-serialized-array-count
    $replaceBashCmd = "chmod +x $replaceExecutable && $replaceExecutable '$absoluteFilePath' '$replaceDelimited' '$replace' '$replacementDelimited' '$replacement'";

    Background::executeAndCheckStatus($replaceBashCmd);
}

public static function executeAndCheckStatus(string $command, bool $exitOnFailure = true): int
{
    $output = [];
    $return_var = null;

    ColorCode::colorCode('Running CMD >> ' . $command, iColorCode::BACKGROUND_BLUE);

    exec($command, $output, $return_var);

    if ($return_var !== 0 && $return_var !== '0') {
        ColorCode::colorCode("The command >> $command \n\t returned with a status code (" . $return_var . '). Expecting 0 for success.', iColorCode::RED);

        $output = implode(PHP_EOL, $output);

        ColorCode::colorCode("Command output::\t $output ", iColorCode::RED);

        if ($exitOnFailure) {
            exit($return_var);
        }
    }

    return (int) $return_var;
}
#!/usr/bin/env bash
set -e
SQL_FILE="$1"
replaceDelimited="$2"
replace="$3"
replacementDelimited="$4"
replacement="$5"
if ! grep --quiet "$replace" "$SQL_FILE" ;
then
exit 0;
fi
cp "$SQL_FILE" "$SQL_FILE.old.sql"
# @link https://stackoverflow.com/questions/29902647/sed-match-replace-url-and-update-serialized-array-count
# @link https://serverfault.com/questions/1114188/php-serialize-awk-command-speed-up/1114191#1114191
sed 's/;s:/;\ns:/g' "$SQL_FILE" | \
awk -F'"' '/s:.+'$replaceDelimited'/ {sub("'$replace'", "'$replacement'"); n=length($2)-1; sub(/:[[:digit:]]+:/, ":" n ":")} 1' 2>/dev/null | \
sed -e ':a' -e 'N' -e '$!ba' -e 's/;\ns:/;s:/g' | \
sed "s/$replaceDelimited/$replacementDelimited/g" > "$SQL_FILE.replaced.sql"
cp "$SQL_FILE.replaced.sql" "$SQL_FILE"

Bash - read Postgresql zero separated fields into array

I want to read the output of a psql query produced with --field-separator-zero into an array inside my bash script. The best I have tried was the following:
psql -w -t --quiet --no-align --field-separator-zero -c $'select nickname,first_name,last_name,email from users' | while IFS= read -d '' -a USERS; do
echo ${USERS[0]} ${USERS[1]} ${USERS[2]} ${USERS[3]};
done;
The above would return each field of a row as a new array. Changing the delimiter to anything else would make the process work, but the problem is the nickname field might contain any character, so I'm forced to use the safe NUL char as a delimiter. Is there any way to do this ?
I'm assuming here that nickname is a unique key; make the appropriate modifications if a different field should be used in that role.
The below code reads the data into a series of associative arrays, and emits each row in turn.
Note that associative arrays are a Bash 4 feature; if you're on Mac OS, which ships 3.2, use MacPorts or a similar tool to install a modern release.
declare -A first_names=( ) last_names=( ) emails=( )
while IFS= read -r -d '' nickname && \
IFS= read -r -d '' first_name && \
IFS= read -r -d '' last_name && \
IFS= read -r -d '' email; do
first_names[$nickname]=$first_name
last_names[$nickname]=$last_name
emails[$nickname]=$email
done < <(psql ...)
echo "Found users: "
for nickname in "${!emails[@]}"; do
printf 'nickname - %q\n' "$nickname"
printf 'email - %q\n' "${emails[$nickname]}"
printf 'first name - %q\n' "${first_names[$nickname]}"
printf 'last name - %q\n' "${last_names[$nickname]}"
echo
done
This technique is described in BashFAQ #1 -- search for -print0 to find its mention.
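The read loop itself can be exercised without psql; a minimal sketch in which a hypothetical printf '%s\0' stands in for the NUL-separated query output:

```shell
#!/usr/bin/env bash
# Each field arrives NUL-terminated; read -r -d '' consumes exactly one field.
declare -A emails=( )
while IFS= read -r -d '' nickname &&
      IFS= read -r -d '' email; do
    emails[$nickname]=$email
done < <(printf '%s\0' jdoe jdoe@example.com asmith asmith@example.com)
echo "${emails[jdoe]}"    # jdoe@example.com
```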

How to download Postgres bytea column as file

Currently, I have a number of files stored in Postgres 8.4 as bytea. The file types are .doc, .odt, .pdf, .txt, etc.
May I know how to download all the files stored in Postgres, because I need to do a backup?
I need them in their original file types instead of the bytea format.
Thanks!
One simple option is to use the COPY command with encode to hex format, and then apply the xxd shell command (with the -p continuous hexdump style switch). For example, let's say I have a jpg image in a bytea column of the samples table:
\copy (SELECT encode(file, 'hex') FROM samples LIMIT 1) TO '/home/grzegorz/Desktop/image.hex'
$ xxd -p -r image.hex > image.jpg
As I checked it works in practice.
Try this:
COPY (SELECT yourbyteacolumn FROM yourtable WHERE <add your clauses here> ...) TO 'youroutputfile' (FORMAT binary)
Here's the simplest thing I could come up with:
psql -qAt -c "select encode(file,'base64') from files limit 1" | base64 -d
The -qAt is important as it strips off any formatting of the output. These options are available inside the psql shell, too.
base64
psql -Aqt -c "SELECT encode(content, 'base64') FROM ..." | base64 -d > file
xxd
psql -Aqt -c "SELECT encode(content, 'hex') FROM ..." | xxd -p -r > file
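The decode side of either pipeline can be verified without a database; a sketch that encodes a sample string locally as a stand-in for the psql output, then decodes it with the same commands (assuming base64 and xxd are installed):

```shell
#!/usr/bin/env bash
# Round-trip stand-ins for the psql output: encode locally, decode as above.
via_base64="$(printf 'hello bytea' | base64 | base64 -d)"
via_hex="$(printf 'hello bytea' | xxd -p | xxd -p -r)"
echo "$via_base64"    # hello bytea
echo "$via_hex"       # hello bytea
```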
If you have a lot of data to download then you can get the lines first and then iterate through each one writing the bytea field to file.
$resource = pg_connect('host=localhost port=5432 dbname=website user=super password=************');

// grab all the user IDs
$userResponse = pg_query('select distinct(r.id) from resource r
    join connection c on r.id = c.resource_id_from
    join resource rfile on c.resource_id_to = rfile.id and rfile.resource_type_id = 10
    join file f on rfile.id = f.resource_id
    join file_type ft on f.file_type_id = ft.id
    where r.resource_type_id = 38');

// need to work through one by one to handle data
while ($user = pg_fetch_array($userResponse)) {
    $user_id = $user['id'];

    $query = 'select r.id, f.data, rfile.resource_type_id, ft.extension from resource r
        join connection c on r.id = c.resource_id_from
        join resource rfile on c.resource_id_to = rfile.id and rfile.resource_type_id = 10
        join file f on rfile.id = f.resource_id
        join file_type ft on f.file_type_id = ft.id
        where r.resource_type_id = 38 and r.id = ' . $user_id;

    $fileResponse = pg_query($query);
    $fileData = pg_fetch_array($fileResponse);

    $data = pg_unescape_bytea($fileData['data']);
    $extension = $fileData['extension'];
    $fileId = $fileData['id'];
    $filename = $fileId . '.' . $extension;

    $fileHandle = fopen($filename, 'w');
    fwrite($fileHandle, $data);
    fclose($fileHandle);
}
DO $$
DECLARE
    l_lob_id OID;
    r record;
BEGIN
    for r in
        select data, filename from bytea_table
    LOOP
        l_lob_id := lo_from_bytea(0, r.data);
        PERFORM lo_export(l_lob_id, '/home/...' || r.filename);
        PERFORM lo_unlink(l_lob_id);
    END LOOP;
END; $$
Best I'm aware, bytea to file needs to be done at the app level.
(9.1 might change this with the filesystem data wrapper contrib. There's also a lo_export function, but it is not applicable here.)
If you want to do this from a local Windows machine rather than from the server, you will have to run every statement individually, and you need pgAdmin and certutil:
Have pgAdmin installed.
Open cmd from the runtime folder, or cd "C:\Program Files\pgAdmin 4\v6\runtime"
Run this query in pgAdmin to get every statement that you will have to paste into cmd:
SELECT 'set PGPASSWORD={PASSWORD} && psql -h {host} -U {user} -d {db name} -Aqt -c "SELECT encode({bytea_column}, ''base64'') FROM {table} WHERE id='||id||'" > %a% && CERTUTIL -decode %a% "C:\temp{name_of_the_folder}\FileName - '||{file_name}||' ('||TO_CHAR(current_timestamp(),'DD.MM.YYYY,HH24 MI SS')||').'||{file_extension}||'"'
FROM table WHERE ....;
Replace {...}
It will generate something like:
set PGPASSWORD=123 && psql -h 192.1.1.1 -U postgres -d my_test_db -Aqt -c "SELECT encode(file_bytea, 'base64') FROM test_table_bytea WHERE id=33" > %a% && CERTUTIL -decode %a% "C:\temp\DB_FILE\FileName - test1 - (06.04.2022,15 42 26).docx"
set PGPASSWORD=123 && psql -h 192.1.1.1 -U postgres -d my_test_db -Aqt -c "SELECT encode(file_bytea, 'base64') FROM test_table_bytea WHERE id=44" > %a% && CERTUTIL -decode %a% "C:\temp\DB_FILE\FileName - test2 - (06.04.2022,15 42 26).pdf"
Copy and paste all the generated statements into CMD. The files will be saved to your local machine.
