Unable to export PostgreSQL table to CSV file in pgAdmin4: environment can only contain strings

When right clicking on a PostgreSQL table within pgAdmin 4 version 5.3, I set up these parameters to export it to a CSV file:
But when I click OK, this error appears: "Import/Export job failed: environment can only contain strings". I have no idea what to do about it, or even where it comes from:
How to solve this error?
It only appears on Windows 10; I never faced this issue on Ubuntu, for example.

The error message shown by pgAdmin is misleading here and doesn't help.
pgAdmin uses pg_dump and other tools to perform certain actions like Import/Export or Backup. Before starting these external processes, pgAdmin sets environment variables, for example:
'PGSSLMODE': 'disable' # Correct env variable
'PGSSLCERT': None # Malformed env variable, missing ''
Your issue is that one of these variables is set incorrectly by pgAdmin and is therefore malformed. Malformed means, as shown above, that the value is None instead of a quoted string. An environment variable's value must be a string.
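The underlying failure can be reproduced in plain Python (a minimal sketch, not pgAdmin's actual code): subprocess refuses an environment dict whose values are not strings, and on Windows the resulting TypeError carries exactly the message the pgAdmin dialog shows.

```python
import subprocess
import sys

# Environment pgAdmin might hand to pg_dump (simplified assumption,
# not pgAdmin's actual code). One value is None instead of a string.
env = {
    'PGSSLMODE': 'disable',  # valid: the value is a string
    'PGSSLCERT': None,       # invalid: the value is None
}

try:
    # sys.executable stands in for pg_dump so the sketch runs anywhere.
    subprocess.run([sys.executable, '--version'], env=env)
except TypeError as exc:
    # On Windows the message is "environment can only contain strings";
    # on POSIX the TypeError is worded differently but raised all the same.
    print('failed:', exc)
```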
Possible error cause:
If you configured "Client certificate", "Client certificate key" and "Root certificate" inside the "SSL" tab of your connection, and at least one of those files no longer exists, this error will be shown.
I reported this bug/bad error handling here (requires a free forum account to view).

Related

Getting "ORACLE_HOME must be set and %ORACLE_HOME%\database must be writeable / createdb terminated unsucessfully" while creating a DB in Postgres

I am getting the following error, and the process terminates automatically while creating a database in PostgreSQL:
ORACLE_HOME must be set and %ORACLE_HOME%\database must be writeable
createdb terminated unsucessfully.
How to create a Database in Postgres using createdb command?
Running the same command in Git Bash doesn't work either; the process doesn't get terminated, and it shows as in the picture below.
I wanted to leave this as a comment but I can't. Several issues may be happening here. The first that comes to mind is that your command prompt is not running as administrator. It can also be a PATH issue: sometimes cmd will not recognize a program's path except when you run it from the folder where the software is installed. Another common issue on Windows is that you really want the GUI software rather than cmd; and if you absolutely need a command line, try PowerShell with admin rights.
The error it gives makes me believe it's either a path error or a user-rights error; I also agree with the comments. If you keep having issues like this that depend on your Windows PATH or your system setup in general, my suggestion is to try Anaconda (conda), which gives you a clean environment in which to see what went wrong.
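One quick way to check the PATH hypothesis (an assumption about the cause, suggested by the Oracle-flavoured error text) is to ask the shell which createdb it resolves first; if it lives under ORACLE_HOME rather than PostgreSQL's bin directory, call PostgreSQL's binary by its full path:

```shell
# Show which createdb the shell resolves first (Git Bash / Linux syntax;
# on plain cmd.exe use: where createdb)
command -v createdb || echo "createdb not found on PATH"

# If the result points under ORACLE_HOME, an Oracle tool is shadowing
# PostgreSQL's createdb; call PostgreSQL's binary explicitly instead.
# The install path below is only an example; adjust it to your version:
# "/c/Program Files/PostgreSQL/13/bin/createdb.exe" -U postgres mydb
```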

INS-30131 Initial setup required for the execution of installer validations failed

I just uninstalled Oracle 12c manually on Windows 10, and now, reinstalling it, these errors show:
Cause - Failed to access the temporary location.
Action - Ensure that the current user has required permissions to access the temporary location.
Additional Information:
- Framework setup check failed on all the nodes
  - Cause: Cause Of Problem Not Available
  - Action: User Action Not Available
Summary of the failed nodes: desktop-81i87ss
- Version of exectask could not be retrieved from node "desktop-81i87ss"
  - Cause: Cause Of Problem Not Available
  - Action: User Action Not Available
In some cases your computer name contains a special character; the Oracle Universal Installer throws the same error in that case. Try removing any special characters by renaming your system, then run the installation again.
For example: my system was named "LAPTOP-GENCONIAN". The problem was solved for me after renaming it to "GENCONIAN".
I came across a similar issue and was able to resolve it with this method:
1. Open CMD.
2. cd into the folder where the setup file exists.
3. Run the command: setup.exe -ignorePrereq -J"-Doracle.install.client.validate.clientSupportedOSCheck=false"
Now the installation window will open without the validation check and you can continue.

"Directory Name Invalid" error from Customized TFS Build Definition

I am trying to run an (empty) batch file from a customized TFS Build Definition, but every time the process hits the "Run Script" build activity, I get a "Directory Name is invalid" error.
We are using TFS 2013 Update 4 on Windows Server 2008 R2 Standard, and I am running Visual Studio 2013 from a Win 8.1 Pro on my dev machine.
The batch file in question is at "C:\Builds\SP_Base" on the TFS Server (as shown in the test condition in my customized build template). Here's the template itself (based on GitTemplate.12.xaml, since we are using Git as our source control):
This is the definition for our "Run Script" action:
From the log file, we can see that the test for the directory with the batch file passes without an issue. The same log file then shows the error:
Does anybody know how to resolve this, please?
I've seen other threads discussing the "directory name invalid" issue in other contexts, and the closest match was the one referring to the fact that cmd.exe gets invoked without sufficient privileges.
If we are looking at a symptom of a similar issue here, then what should I do to invoke cmd.exe from a TFS build process without errors?
Currently this is what I have if I look at cmd.exe's properties:
In answer to my own question about how to invoke cmd.exe from a TFS build process...
I found I can use InvokeProcess activity instead of RunScript in my customized build template. This article helped.
This is my new custom template xaml (including error handling for InvokeProcess):
Also, having added the variables ExitCode (Int32) and ErrorMessage (String) as per the article, the properties of the InvokeProcess activity now look as follows:
Please note the leading "/c" term in the Arguments property for InvokeProcess. Without it, the activity runs and returns no error, but the script does not actually get executed.
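For reference, the essential InvokeProcess properties might look like this (a sketch only; the property names follow the standard Microsoft.TeamFoundation.Build.Workflow activity, and ScriptPath is a hypothetical variable holding the batch file's full path):

```
FileName  : cmd.exe
Arguments : "/c """ & ScriptPath & """"
Result    : ExitCode
```

The "/c" switch tells cmd.exe to execute the string that follows and then terminate, which is why the script never runs when it is omitted.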
Hope this helps somebody with a similar issue.

pnp4nagios not logging performance data for new host

We've just updated Nagios from 3.5.x to the current version (4.0.7) and subsequently added a new host for monitoring.
The new host shows as 'Down' in Nagios, and this seems to be related to the fact that pnp4nagios is not logging performance data (the individual checks for users, http etc. are all fine).
Initially there was an error that the directory /usr/local/pnp4nagios/var/perfdata/newhost.com (which contains the XML setup and RRD files for the new host) was missing, so I manually created this directory, but now it complains that the files are missing.
Does anyone know the appropriate steps to overcome this issue?
Thanks,
Toby
PS I'd tag this 'pnp4nagios', but that tag doesn't exist and I can't create it.
UPDATE
It's possible that pnp4nagios is a red herring/symptom. Looking more closely I realise that Nagios actually believes the host is down, even though all services are up. The host status information is '(Host check timed out after 30.01 seconds)'...does this make any more sense?
It's indeed very unlikely that pnp4nagios has anything to do with your host being down. pnp merely exports output and performance data to feed the RRD database and XML files (via the npcd module or an event-handler command).
The fact that Nagios reports the host check timed out after 30 seconds means that:
- you have a problem with your host check command, so double-check its syntax;
- the check command times out after a set period (most likely host_check_timeout, defined in nagios.cfg) because the plugin was still running.
I'd recommend running this command from the server's prompt. You want to do something like :
/path/to/libexec/check_command -H ipaddress -args
For example:
/usr/local/libexec/nagios/check_ping -H 192.168.1.1 -w 200,40% -c 500,80% -t 120
See if something might be hanging. Having the output would be helpful.
Once your host check returns correct output and performance data to Nagios, pnp will hopefully do the rest.
In the unlikely event it helps anyone: pnp4nagios was indeed a red herring. The problem was that ping wasn't enabled for the host being checked, and ping is the test Nagios uses to decide whether a host is up. Hence the host check was failing despite the individual services being reported as working.
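For reference, a host whose up/down state is tested by ping typically has a definition along these lines (a sketch assuming the stock check-host-alive command from Nagios' sample commands.cfg; the names and address here are examples, not values from the question):

```
define host {
    use            generic-host       ; template, assumed to exist
    host_name      newhost.com
    alias          New Host
    address        192.168.1.100      ; example address
    check_command  check-host-alive   ; ping-based host check
}
```

If ICMP is blocked for the host, check-host-alive will fail even while the service checks pass, which matches the behaviour described above.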

How do I turn SQL logging on in Postgres 8.2?

I've got the following settings in my postgres conf:
log_destination = 'stderr'
redirect_stderr = on
log_directory = '/tmp/psqlog'
log_statement = 'all'
And yet no logs are logged. What am I missing here? There is reference on the internet to a variable called "logging_collector", but when I try and set that, postgres dies on startup with a FATAL: unknown variable.
This is on MacOS 10.4.
Ta.
I believe you need to change log_destination to "syslog" or a specific directory; output that goes to stderr will just get tossed out. Here's the link to the docs page, but I'll see if I can find an example postgresql.conf somewhere: http://www.postgresql.org/docs/8.2/static/runtime-config-logging.html
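For example, a minimal 8.2-era postgresql.conf fragment for syslog output might look like this (a sketch; the facility and ident values are assumptions to adjust for your syslog setup, and note that logging_collector only exists from PostgreSQL 8.3 on, which is why setting it on 8.2 fails at startup):

```
log_destination = 'syslog'      # send log output to syslog
syslog_facility = 'LOCAL0'      # must match a facility your syslog daemon routes
syslog_ident    = 'postgres'    # tag prepended to each message
log_statement   = 'all'         # log every SQL statement
```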
This mailing list entry provides some info on setting up logging with syslog http://archives.postgresql.org/pgsql-admin/2004-03/msg00381.php
Also, if you're building Postgres from source, you might have better luck using an OS X package from Fink or MacPorts. Doing all of the configuration yourself can be tricky for beginners, but the packages normally give you a good base to work from.
