sfdx force:data:bulk:upsert request contains invalid data

Having some trouble using the bulk:upsert command to update Account objects via a CSV file; hopefully someone can help me with this. Below is what I'm doing:
My CSV file is named account.csv and contains the following data:
Id,Name
0012F00000QjhC7QAJ,LimTest 1
0012F00000QjhkSQAR,LimTest 2
Below is the command that I'm running:
sfdx force:data:bulk:upsert -s Account -f account.csv -i Id -u dev
The above command gets submitted successfully, but the job fails; the batch status reports that the request contains invalid data. (The batch status and the request were shown as screenshots in the original post.)

It worked after I manually created an empty file and copied and pasted the data into it. The original file, account.csv, was created using this command:
sfdx force:data:soql:query -q "select Id, Name from Account" -r csv -u dev > account.csv
I guess the above command must have created the file in an encoding that bulk:upsert does not know how to handle.
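If that's the case, one way to confirm it is to re-encode the query output to UTF-8 before the upsert (a sketch, assuming the redirect produced UTF-16, which PowerShell's > does by default; iconv is available in Git Bash or any Unix-like shell):
iconv -f UTF-16LE -t UTF-8 account.csv > account-utf8.csv
sfdx force:data:bulk:upsert -s Account -f account-utf8.csv -i Id -u dev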

Related

Complete novice question: I want to delete a table via command prompt (postgis)

I am a regular FME user and I know how to create a PostGIS database for use in FME (using pgAdmin). I now want to use FME to back up and then delete a generated table. I use the 'SystemCaller' transformer for this; the SystemCaller basically opens a command prompt.
I was able to make a backup file using cd C:\Program Files\PostgreSQL\118\bin\ && set PGPASSWORD=password&& pg_dump.exe -U postgres -p 5433 -d bag -t tablename -F c -f C:/Users/username/Downloads/tablename.backup, but I am having a hard time getting the table deleted.
I tried the 'drop table' command, but it is not recognized. dropdb is recognized, but I obviously do not want to drop the whole database.
What command line should I use to drop the table 'testtable' in the database 'testdatabase'?
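One approach (a sketch, reusing the bin directory, port, and credentials from the pg_dump call above; adjust as needed) is to run the DROP TABLE statement through psql.exe with -c, since dropping a single table is a SQL statement rather than a standalone utility like dropdb:
cd C:\Program Files\PostgreSQL\118\bin\ && set PGPASSWORD=password&& psql.exe -U postgres -p 5433 -d testdatabase -c "DROP TABLE testtable;"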

When executing a batch file from python, the output is only the first line

I'm using the 'qwinsta' cmd command to get the session ID of a remote computer and output it to a text file. I create a new batch file, write the command into it, and then run the batch file through Python, but it only returns the first line of the output. When I run the batch file by simply double-clicking it, it works properly.
Using python 2.7:
import os

def run_qwinsta(self, computerName):
    qwinsta_check = open("q.bat", "w")
    qwinsta_check.write('PsExec -u <username> -p <password> \\\\' + computerName + ' qwinsta' + ' > "q.txt" ')
    qwinsta_check.close()
    os.system("q.bat")
Expected results:
 SESSIONNAME       USERNAME                 ID  STATE   TYPE        DEVICE
>services                                    0  Disc
 console           <username>                1  Active
 rdp-tcp                                 65536  Listen
Actual results:
 SESSIONNAME       USERNAME                 ID  STATE   TYPE        DEVICE
I would recommend avoiding writing the batch file if you can; you can execute your batch command from os.system() directly. You can also try using subprocess (see its documentation) and redirecting stdout and stderr to a file.
EDIT:
PsExec is a good choice, but if you want another way, you can also use ssh.
You can run PsExec from os.system() and write the response to a text file on the remote machine. The default PsExec working directory is System32; that is where you will find your text file.
Tested code:
import os
os.system('Psexec \\\\SERVER cmd.exe /c "tasklist > process_list.txt"')
I used tasklist, because I don't have qwinsta on my remote machine.
If you want to store the PsExec response on your machine you can use subprocess and then redirect the stdout to text file.
Tested code:
import subprocess
process_list = open("process_list.txt", "w")
subprocess.call(['Psexec', '\\\\SERVER', 'tasklist'], stdout=process_list)
process_list.close()
Actually I used Python 3.x, because I don't have Python 2.x, but it should work on both.
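Adapting the same pattern to the original qwinsta case might look like this (an untested sketch; computerName, <username> and <password> are the placeholders from the question):
import subprocess

computerName = "SERVER"  # name of the remote machine
# run qwinsta on the remote machine via PsExec, storing its output locally
with open("q.txt", "w") as q_file:
    subprocess.call(['PsExec', '-u', '<username>', '-p', '<password>',
                     '\\\\' + computerName, 'qwinsta'], stdout=q_file)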
If this still didn't solve your problem, please provide more details to your question!

Postgres pg_dump issue

Good Day,
I've been trying to restore a dump file using the psql client and I'm getting this error:
psql.bin:/home/user/Desktop/dump/dumpfile.sql:907:
ERROR: more than one function named "pg_catalog.avg"
CONTEXT: COPY pg_aggregate, line 1, column aggfnoid: "pg_catalog.avg"
I created the dump file from a different Postgres DB (version: 9.4.5) using the command:
pg_dump --username=pgroot ${tables} --no-owner --no-acl --no-security-labels --no-tablespaces --no-unlogged-table-data --data-only dbname > dumpfile.sql
Where ${tables} is a variable of the form:
-T table1 -T table2 -T table3 ...
This is because I'm dumping specific tables listed in a newline-delimited file; it's not the entire database, just the specific tables I want to dump.
I tried loading the dump file into another Postgres DB (9.6) using the following command:
psql -d dbname -U superuser -v "ON_ERROR_STOP=1" -f ${DUMP_DIR}dumpfile.sql -1 -a > ${LOG_ERR_DIR}dumpfile.log 2>${LOG_ERR_DIR}dumpfile.err
This gave the error mentioned above. It seems this error is occurring because the dump file tries to add the function "pg_catalog.avg" to the database and it gives an error because it already exists.
The SQL file generated by pg_dump does not contain any statement that creates the pg_catalog.avg function, so I don't know why this is occurring.
So I tried dropping the database and creating it from template0, and I still got the error. It seems to me that it's a bug, based on the following post:
Re: BUG #6176: pg_dump dumps pg_catalog tables
I'm stuck trying to resolve this issue. If anyone can help me resolve it, please respond.
Thank you in advance,
j3rg
I found out what was causing this issue. There was an extra newline in the file containing the table listing. This produced an extra -T argument with no table name, and in turn pg_dump exported the system tables into the file. The file I was searching in for the avg function was also the wrong file.
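If you build ${tables} from such a list file, stripping empty lines before prepending the -T flags guards against the stray argument (a sketch; tables.txt stands in for the actual list file):
tables=$(grep -v '^$' tables.txt | sed 's/^/-T /' | tr '\n' ' ')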

db2exfmt unable to open output file

I am trying to create query explain tables using db2exfmt.
I am using the DB2 CLP and following the steps below:
Connect to sample
set current explain mode explain
Run my query: select * from staff where JOB = 'Sales'
db2 set current explain mode no
db2exfmt -d sample -# 0 -w -1 -g -TIC -n % -s % -o output.txt
After the last step, I am getting this output:
Connecting to the Database.
Connect to Database Successful.
Unable to open output file.
I am not sure why it is not able to open output file. How should I resolve this issue?
It appears that you don't have write access to the C:\Program Files\IBM\SQLLIB\BIN directory, so db2exfmt can't open the output file for writing.
Change to a directory where you do have write permission, or specify a file name with a path for the -o option.
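For example (C:\temp is just an assumed writable directory; the other options stay as before):
db2exfmt -d sample -# 0 -w -1 -g -TIC -n % -s % -o C:\temp\output.txt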

backup of database using a shell script via crontab fails?

My shell script for taking a backup of the database works fine when run normally, but when I try to run it through crontab there is no backup.
This is my crontab:
* * * * * /home/mohan/sohan/backuptest.sh
content of backuptest.sh are
#!/bin/bash
name=`date +%Y%m%d`.sql
#echo $name
mysqldump -u abc --password=abc my_db > $name
The script works fine when run normally, but fails to generate a backup when run through crontab.
A couple of possibilities: first, that your programs/commands cannot be found when run from cron; second, that your database cannot be found when run from cron.
So, first the programs. You are using date and mysqldump, so at your terminal prompt you need to find where they are located, like this:
which date
which mysqldump
Then you can either put the full paths that you get as output above into your script, or add a PATH= statement at the second line that incorporates both paths.
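For example, if which reports /bin/date and /usr/local/bin/mysqldump, the second line of the script would be:
PATH=/usr/local/bin:/usr/bin:/bin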
Secondly, your database. Where is it located? If it is in /home/mohan/sohan/ for example, you will need to change your script like this:
#!/bin/bash
name=`/bin/date +%Y%m%d`.sql
cd /home/mohan/sohan
/usr/local/bin/mysqldump -u abc --password=abc my_db > $name
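If it still fails, redirecting the script's output in the crontab entry usually reveals the reason (the log path here is arbitrary):
* * * * * /home/mohan/sohan/backuptest.sh >> /tmp/backuptest.log 2>&1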
