C:\Users\raxz>snowsql -a dr61159.ap-southeast-1.aws -u raxz
Password:
250001 (n/a): Could not connect to Snowflake backend after 0 attempt(s).Aborting
If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help.
Goodbye!
Can anyone help me?
Thanks,
raxz
The issue is with the account value being used. It should not have .aws. Here is the correct account value for your case:
C:\Users\raxz>snowsql -a dr61159.ap-southeast-1 -u raxz
The account identifier details are described in the following documentation:
https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#non-vps-account-locator-formats-by-cloud-platform-and-region
I am in the process of doing an export and import with a Postgres database.
I used the following command to take a backup of the Postgres database:
C:\dirs> pg_dump -U postgres -p 15432 -W -F t cgate-next-demo > .\dbexport_10th_February_2022.tar
Password:*****
I unpacked the dbexport_10th_February_2022.tar file and proceeded with the database import. As an initial step, I dropped the database:
#drop database if exists "cgate-next-demo";
Then I recreated the empty database:
#create database "cgate-next-demo";
To do this, I logged in to psql once:
C:\dirs> psql -U postgres -p 15432
Password for user postgres:*****
postgres=#
For the database import, I used the following command:
C:\dirs> psql -U postgres -p 15432 -d cgate-next-demo <restore.sql
When I did that, I got the following error. I took this excerpt from the console logs:
ERROR: could not open file "$$PATH$$/6052.dat" for reading: No such file or directory
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
Can someone advise on what could have caused this issue?
You are doing this in the wrong fashion. Rather than unpacking the archive, pass it as an argument to pg_restore. That will do everything for you.
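For example, using the connection details and archive from the question (and assuming the empty cgate-next-demo database has already been recreated as above), something like this should work:
pg_restore -U postgres -p 15432 -d cgate-next-demo .\dbexport_10th_February_2022.tar
pg_restore detects the tar format itself and reads the 6052.dat member straight out of the archive, so the $$PATH$$ placeholders in restore.sql never come into play.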
I encountered this error when trying to connect to my Snowflake account with SnowSQL. Any suggestions on what might be the issue and how to resolve it?
% snowsql -a https://*****.us-east-2.aws.snowflakecomputing.com/ -u *****
Password:
250003 (n/a): Failed to execute request: HTTPSConnectionPool(host='https', port=443): Max retries exceeded with url: //*****.us-east-2.aws.snowflakecomputing.com/.snowflakecomputing.com:443/session/v1/login-request?request_id=6585191e-6947-487e-acae-c2cfc777bd1c (Caused by NewConnectionError('<snowflake.connector.vendored.urllib3.connection.HTTPSConnection object at 0x7f8dc80205f8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help.
You may try the below syntax:
snowsql -a [accountname].us-east-2.aws -u [username]
Details: https://docs.snowflake.com/en/user-guide/snowsql-start.html#connection-syntax
One thing I always try is to make sure I can log in from the console/UI with the username and password before I tackle SnowSQL connectivity issues. You might have already tried that; let me know.
Also, it appears you left your account name off the URL... was that on purpose (for confidentiality), or could it be a problem with the URL?
The correct account for use with the snowsql command appears to be account.region.cloud_provider. For example: XXXXXXX.eu-west-2.aws.
The whole command with an example account and username:
snowsql -a ocXXXXX.eu-west-2.aws -u myusername
I am running this command on Webfaction:
ionice -c2 -n6 pg_dump --blobs -U mhjohnson_flavma -f dump.sql
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: canceling statement due to statement timeout
Any ideas on how to change the timeout?
Your server probably has statement timeouts configured in one way or another.
As a quick solution, you could use PGOPTIONS="-c statement_timeout=0" pg_dump [...] to temporarily override this setting for the dumping process.
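Combined with the command from the question, that would look roughly like this (the environment variable prefix set in front of ionice is inherited by pg_dump):
PGOPTIONS="-c statement_timeout=0" ionice -c2 -n6 pg_dump --blobs -U mhjohnson_flavma -f dump.sql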
We are switching hosts and the old one provided a SQL dump of the PostgreSQL database of our site.
Now, I'm trying to set this up on a local WAMP server to test this.
The only problem is that I have no idea how to import this database into the PostgreSQL 9 instance that I have set up.
I tried pgAdmin III but I can't seem to find an 'import' function. So I just opened the SQL editor, pasted the contents of the dump there, and executed it. It creates the tables, but it keeps giving me errors when it tries to put the data in them.
ERROR: syntax error at or near "t"
LINE 474: t 2011-05-24 16:45:01.768633 2011-05-24 16:45:01.768633 view...
The lines:
COPY tb_abilities (active, creation, modtime, id, lang, title, description) FROM stdin;
t 2011-05-24 16:45:01.768633 2011-05-24 16:45:01.768633 view nl ...
I've also tried to do this with the command prompt but I can't find the command that I need.
If I do
psql mydatabase < C:/database/db-backup.sql;
I get the error
ERROR: syntax error at or near "psql"
LINE 1: psql mydatabase < C:/database/db-backu...
^
What's the best way to import the database?
psql databasename < data_base_dump
That's the command you are looking for.
Beware: databasename must be created before importing.
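If it does not exist yet, you can create it first, for example with createdb (the postgres user here is just an assumption; use whichever superuser applies to your setup), and then import:
createdb -U postgres databasename
psql -U postgres databasename < data_base_dump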
Have a look at the PostgreSQL Docs Chapter 23. Backup and Restore.
Here is the command you are looking for.
psql -h hostname -d databasename -U username -f file.sql
I believe that you want to run in psql:
\i C:/database/db-backup.sql
That worked for me:
sudo -u postgres psql db_name < 'file_path'
I'm not sure if this works for the OP's situation, but I found that running the following command in the interactive console was the most flexible solution for me:
\i 'path/to/file.sql'
Just make sure you're already connected to the correct database. This command executes all of the SQL commands in the specified file.
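As a rough sketch of the full interactive flow, using the database name and dump path from the question (the postgres user is again just an assumption):
psql -U postgres
postgres=# \c mydatabase
mydatabase=# \i C:/database/db-backup.sql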
This works pretty well from the command line; all arguments are required, and -W prompts for the password:
psql -h localhost -U user -W -d database_name -f path/to/file.sql
Just for funsies, if your dump is compressed, you can do something like
gunzip -c filename.gz | psql dbname
As Jacob mentioned, the PostgreSQL docs describe all this quite well.
Make sure the database you want to import into has been created; then you can import the dump with
sudo -u postgres -i psql testdatabase < db-structure.sql
If you want to overwrite the whole database, first drop the database
# be sure you drop the right database !!!
#sudo -u postgres -i psql -c "drop database testdatabase;"
and then recreate it with
sudo -u postgres -i psql -c "create database testdatabase;"
Follow these steps:
Go to the psql shell.
\c db_name
\i path_of_dump (e.g. C:/db_name.pgsql)
I tried many different solutions for restoring my Postgres backup. I ran into permission-denied problems on macOS, and none of the solutions seemed to work.
Here's how I got it to work:
Postgres comes with pgAdmin 4. If you use macOS, you can press CMD+SPACE and type pgadmin4 to run it. This will open up a browser tab in Chrome.
If you run into errors getting pgAdmin 4 to work, try killall pgAdmin4 in your terminal, then try again.
Steps for backup/restore with pgAdmin 4
1. Create the backup
Do this by right-clicking the database -> "Backup"
2. Give the file a name.
Like test12345. Click Backup. This creates a binary file dump; it's not in .sql format.
3. See where it downloaded
There should be a popup at the bottom right of your screen. Click the "more details" page to see where your backup was downloaded to.
4. Find the location of downloaded file
In this case, it's /users/vincenttang
5. Restore the backup from pgAdmin
Assuming you did steps 1 to 4 correctly, you'll have a binary restore file. There might come a time when a coworker wants to use your restore file on their local machine. Have that person go to pgAdmin and restore.
Do this by right-clicking the database -> "Restore"
6. Select file finder
Make sure to select the file location manually; DO NOT drag and drop a file onto the uploader fields in pgAdmin, because you will run into permission errors. Instead, find the file you just created:
7. Find said file
You might have to change the filter at the bottom right to "All files". Then find the file from step 4 and hit the "Select" button at the bottom right to confirm.
8. Restore said file
You'll see this page again, with the location of the file selected. Go ahead and restore it
9. Success
If all is good, an indicator showing a successful restore should pop up at the bottom right. You can navigate over to your tables to see whether the data has been restored properly in each table.
10. If it wasn't successful:
Should step 9 fail, try deleting your old public schema on your database. Go to "Query Tool"
Execute this code block:
DROP SCHEMA public CASCADE; CREATE SCHEMA public;
Now try steps 5 to 9 again; it should work.
Summary
This is how I had to back up and restore my database on Postgres when I had permission issues and could not log in as a superuser or set read/write credentials on folders using chmod. This workflow works for the default "Custom" binary file dump from pgAdmin. I assume .sql works the same way, but I have not tested that yet.
I use:
cat /home/path/to/dump/file | psql -h localhost -U <user_name> -d <db_name>
Hope this will help someone.
If you are using a file with .dump extension use:
pg_restore -h hostname -d dbname -U username filename.dump
I noticed that many examples are overcomplicated for localhost, where in many cases the postgres user exists without a password:
psql -d db_name -f dump.sql
You can do it in pgadmin3. Drop the schema(s) that your dump contains. Then right-click on the database and choose Restore. Then you can browse for the dump file.
I used this
psql -d dbName -U username -f /home/sample.sql
PostgreSQL 12
From an SQL file:
psql -d database -f file.sql
From a custom-format file:
pg_restore -Fc -d database file.dump
I had more than 100 MB of data, so I could not restore the database using pgAdmin 4.
I simply used the Postgres client and ran the command below.
postgres#khan:/$ pg_restore -d database_name /home/khan/Downloads/dump.sql
It worked fine and took a few seconds. You can see the link below for more information:
https://www.postgresql.org/docs/8.1/app-pgrestore.html