C:\Users\raxz>snowsql -a dr61159.ap-southeast-1.aws -u raxz
Password:
250001 (n/a): Could not connect to Snowflake backend after 0 attempt(s).Aborting
If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help.
Goodbye!
Can anyone help me?
Thanks,
raxz
The issue is with the account value being used: it should not include .aws. Here is the correct account value for your case:
C:\Users\raxz>snowsql -a dr61159.ap-southeast-1 -u raxz
The account identifier formats are described in the following documentation:
https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#non-vps-account-locator-formats-by-cloud-platform-and-region
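If you are connecting from Python rather than SnowSQL, the same account format applies. Below is a minimal sketch using the snowflake-connector-python package; the password value is a placeholder, and the account and user simply mirror the corrected snowsql call above:

import snowflake.connector

# The account identifier is locator.region only: no ".aws" suffix and
# no ".snowflakecomputing.com" domain.
conn = snowflake.connector.connect(
    account="dr61159.ap-southeast-1",
    user="raxz",
    password="<your-password>",  # placeholder
)
cur = conn.cursor()
cur.execute("SELECT CURRENT_VERSION()")
print(cur.fetchone())
conn.close()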
Related
I use Snowflake and Snowsight on Windows. I ran the command below, but got the following error.
Does anyone know what causes this error 250001 (n/a)?
C:\Docs>snowsql -a lvXXXXX.ap-northeast-1.aws -u username -o log_level=DEBUG
Password:
250001 (n/a): Could not connect to Snowflake backend after 0 attempt(s).Aborting
If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help.
Goodbye!
The underlying error appears to be "Failed to check OCSP response cache file."
You can try to connect using insecure mode to bypass the OCSP checks.
Use this:
snowsql -a lvXXXXX.ap-northeast-1.aws -u username -o insecure_mode=True
If the connection is successful, it means there may be a firewall/proxy that is blocking the connection to the OCSP server. Note: OCSP connects via port 80.
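If you need to reproduce this from Python, the connector exposes the same bypass, which can help confirm whether a firewall/proxy is blocking the OCSP traffic. A minimal sketch with placeholder credentials; insecure_mode disables certificate revocation checking, so treat it as a diagnostic step only, not a permanent setting:

import snowflake.connector

conn = snowflake.connector.connect(
    account="lvXXXXX.ap-northeast-1.aws",  # placeholder account from the question
    user="username",                       # placeholder
    password="<your-password>",            # placeholder
    insecure_mode=True,                    # skip OCSP revocation checks (diagnostic only)
)
print(conn.cursor().execute("SELECT 1").fetchone())
conn.close()

If this connects while the normal mode does not, the OCSP traffic on port 80 is being blocked somewhere on the path.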
I encountered this error when trying to connect to my Snowflake account via SnowSQL. Any suggestions on what the issue might be and how to resolve it?
% snowsql -a https://*****.us-east-2.aws.snowflakecomputing.com/ -u *****
Password:
250003 (n/a): Failed to execute request: HTTPSConnectionPool(host='https', port=443): Max retries exceeded with url: //*****.us-east-2.aws.snowflakecomputing.com/.snowflakecomputing.com:443/session/v1/login-request?request_id=6585191e-6947-487e-acae-c2cfc777bd1c (Caused by NewConnectionError('<snowflake.connector.vendored.urllib3.connection.HTTPSConnection object at 0x7f8dc80205f8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help.
You can try the following syntax instead:
snowsql -a [accountname].us-east-2.aws -u [username]
Details: https://docs.snowflake.com/en/user-guide/snowsql-start.html#connection-syntax
One thing I always try first is to make sure I can log in to the console/UI with the same username and password before tackling snowsql connectivity issues. You might have already tried that; let me know.
Also, it appears you left your account name out of the URL... was that on purpose (for confidentiality), or could that be the problem with the URL?
The correct account format for the snowsql command appears to be account.region.cloud_provider. For example: XXXXXXX.eu-west-2.aws.
The whole command with an example account and username:
snowsql -a ocXXXXX.eu-west-2.aws -u myusername
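Since several of these errors come from pasting the full connection URL into -a, here is a small, hypothetical Python helper (the function name and example URL are illustrative only) that reduces a Snowflake URL to the account identifier snowsql expects:

def account_from_url(url: str) -> str:
    """Strip the scheme, any path, and the .snowflakecomputing.com suffix."""
    host = url.split("://", 1)[-1].split("/", 1)[0]
    suffix = ".snowflakecomputing.com"
    return host[: -len(suffix)] if host.endswith(suffix) else host

# e.g. "https://ocXXXXX.eu-west-2.aws.snowflakecomputing.com/" -> "ocXXXXX.eu-west-2.aws"
print(account_from_url("https://ocXXXXX.eu-west-2.aws.snowflakecomputing.com/"))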
I am encountering an error while trying to connect using CMD (Admin):
250003 (n/a): Failed to execute request: HTTPSConnectionPool(host='https Failed to establish a new connection.
Snowflake account is on Azure.
Thanks in Advance
Based on your comment, the problem is the value passed to the account option:
>snowsql -a XXXX.west-us-2.azure.snowflakecomputing.com -u username
The account should not include the .snowflakecomputing.com part, so your call should be:
>snowsql -a XXXX.west-us-2.azure -u username
I also faced the same issue when I gave the hostname as https://xxxxxxxx.us-east-1.snowflakecomputing.com; after I removed the https:// prefix, it connected successfully.
How can I use pg_dumpall with Heroku? The default "database backup" feature from Heroku runs pg_dump at the click of a button, which doesn't include roles, so I want to do pg_dumpall ... I'm trying pg_dumpall -h myherokuurl.compute-1.amazonaws.com -l mypassword -U myUser > dump.sql
I'm getting this error:
pg_dumpall: error: query failed: ERROR: permission denied for table pg_authid
pg_dumpall: error: query was: SELECT oid, rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolconnlimit, rolpassword, rolvaliduntil, rolreplication, rolbypassrls, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment, rolname = current_user AS is_current_user FROM pg_authid WHERE rolname !~ '^pg_' ORDER BY 2
My first thought was to create a new user with the correct privileges. So I logged in using heroku pg:psql DATABASE -a my-app-name and then tried create user myUser with password 'mypassword', but got the error ERROR: permission denied to create role
I'm honestly not sure what's going on; I'm kind of just guessing. Any troubleshooting ideas would be appreciated! (In the meantime I'm just trying to learn more about Postgres.)
If your problem is just about the pg_authid catalog, you should be able to use recent versions of pg_dumpall with the --no-role-passwords option.
This commonly works in hosted environments where pg_authid is inaccessible, e.g. on AWS. The only downside is that the passwords of Postgres users will be missing from the dump.
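A minimal sketch of what that could look like from Python, assuming pg_dumpall 10 or later (where --no-role-passwords is available) and that the password is supplied via the PGPASSWORD environment variable or a .pgpass file; the host, user, and output file are placeholders taken from the question:

import subprocess

cmd = [
    "pg_dumpall",
    "-h", "myherokuurl.compute-1.amazonaws.com",  # placeholder host from the question
    "-U", "myUser",                               # placeholder user
    "--no-role-passwords",                        # skip reading role passwords from pg_authid
]
with open("dump.sql", "w") as out:
    subprocess.run(cmd, stdout=out, check=True)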
However, you appear to have a more limited, perhaps shared environment, where you can't even create new Postgres users. I am not certain if there is any chance to get pg_dumpall working there.
I was working with Cassandra 1.2.4 (probably). After restoring some keyspaces, when I tried to query a keyspace it gave me Request did not complete within rpc_timeout
So I checked system.log and output.log under /var/log/cassandra
and found only this exception:
Exception in thread Thread[ReadStage:42,5,main]
java.lang.RuntimeException: org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
What is the reason, and how can I get rid of the rpc_timeout?
Thanks in advance,
It seems your SSTables are somehow corrupted. You can try rebuilding them using nodetool's scrub [keyspace] operation.
If you can't access a specific keyspace,
> ./nodetool -u <username> -pw <password> -h <cassandra_ip> scrub <keyspace>
or if you can't access any keyspace,
> ./nodetool -u <username> -pw <password> -h <cassandra_ip> scrub
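Once the scrub completes, one way to re-test the query outside cqlsh is with the DataStax Python driver. This is only a sketch under assumptions not in the original post: the contact point, keyspace, and table names are placeholders, the node needs the native transport enabled (start_native_transport: true in cassandra.yaml, which may be disabled by default on 1.2), and you need a driver version that still speaks protocol version 1:

from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1"], protocol_version=1)  # placeholder contact point
session = cluster.connect("my_keyspace")             # placeholder keyspace

# Explicit client-side timeout (in seconds) so a slow or broken node fails fast
rows = session.execute("SELECT * FROM my_table LIMIT 10", timeout=30)  # placeholder table
for row in rows:
    print(row)
cluster.shutdown()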
cqlsh returns rpc_timeout when any error occurs on the server (the remote procedure call to the server timed out).
I think your problem arose after the backup/restore: the restore step may not have completed correctly, leaving your SSTables corrupted. This may be helpful.