I can't log into SnowSQL via the Windows Command Prompt in order to GET files when connected over my company's VPN. I believe this is a whitelist issue. I've already run SELECT SYSTEM$WHITELIST(); and then SnowCD, and my results are listed below.
What is the IP range for Snowflake?
Thanks!
Performing 33 checks for 13 hosts
Check for 11 hosts failed, display as follow:
==============================================
Host: <redacted>.snowflakecomputing.com
Port: 443
Type: SNOWFLAKE_DEPLOYMENT
Failed Check: Certificate Check
Error: certificate checker timeout
Suggestion: Check your connection to <redacted>.snowflakecomputing.com
==============================================
Host: sfc-ds1-customer-stage.s3.us-west-2.amazonaws.com
Port: 443
Type: STAGE
Failed Check: Certificate Check
Error: certificate checker timeout
Suggestion: Check your connection to sfc-ds1-customer-stage.s3.us-west-2.amazonaws.com
==============================================
Host: sfc-ds1-customer-stage.s3-us-west-2.amazonaws.com
Port: 443
Type: STAGE
Failed Check: Certificate Check
Error: certificate checker timeout
Suggestion: Check your connection to sfc-ds1-customer-stage.s3-us-west-2.amazonaws.com
==============================================
Host: sfc-ds1-customer-stage.s3.amazonaws.com
Port: 443
Type: STAGE
Failed Check: Certificate Check
Error: certificate checker timeout
Suggestion: Check your connection to sfc-ds1-customer-stage.s3.amazonaws.com
==============================================
Host: sfc-snowsql-updates.s3.us-west-2.amazonaws.com
Port: 443
Type: SNOWSQL_REPO
Failed Check: Certificate Check
Error: certificate checker timeout
Suggestion: Check your connection to sfc-snowsql-updates.s3.us-west-2.amazonaws.com
==============================================
Host: ocsp.snowflakecomputing.com
Port: 80
Type: OCSP_CACHE
Failed Check: HTTP checker
Error: http check timeout
Suggestion: Check the connection to your http host or transparentProxy
==============================================
Host: ocsp.sca1b.amazontrust.com
Port: 80
Type: OCSP_RESPONDER
Failed Check: HTTP checker
Error: http check timeout
Suggestion: Check the connection to your http host or transparentProxy
==============================================
Host: ocsp.rootca1.amazontrust.com
Port: 80
Type: OCSP_RESPONDER
Failed Check: HTTP checker
Error: http check timeout
Suggestion: Check the connection to your http host or transparentProxy
==============================================
Host: ocsp.rootg2.amazontrust.com
Port: 80
Type: OCSP_RESPONDER
Failed Check: HTTP checker
Error: http check timeout
Suggestion: Check the connection to your http host or transparentProxy
==============================================
Host: o.ss2.us
Port: 80
Type: OCSP_RESPONDER
Failed Check: HTTP checker
Error: http check timeout
Suggestion: Check the connection to your http host or transparentProxy
==============================================
Host: ocsp.digicert.com
Port: 80
Type: OCSP_RESPONDER
Failed Check: HTTP checker
Error: http check timeout
Suggestion: Check the connection to your http host or transparentProxy
The design of Snowflake’s platform takes full advantage of the radical elasticity offered by AWS. The underlying systems you communicate with are built on AWS services that do not use any stable elements, and that includes IP addresses. Since many customers wish to restrict their network communications with Snowflake, several methods that do not rely on IP addresses have been offered.
So generally, we do not recommend any IP whitelisting. If possible, we recommend using hostnames instead. https://support.snowflake.net/s/article/faq-what-ip-address-range-does-snowflake-use
When using client applications, you will have to ensure the endpoints (the output of the SYSTEM$WHITELIST function) are whitelisted on the specified ports for seamless communication. You will need to open both port 443 and port 80 for specific endpoints.
https://docs.snowflake.com/en/user-guide/hostname-whitelist.html#hostname-whitelisting
If IP-only controls are required, I did come across this blog, where AWS provides its region-wide IP ranges as a JSON file:
https://aws.amazon.com/de/blogs/aws/aws-ip-ranges-json/ . This is really the only thing I can offer: the range from which all of Snowflake's dynamic IPs may come, which amounts to the IPs for an entire AWS region.
Note that these ranges are subject to change, and any method using them will need to account for those changes. As I mentioned, Snowflake does not recommend this approach; whenever possible, it is best to whitelist the hostnames/endpoints.
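A minimal sketch of consuming that published JSON file and filtering it down to one region's CIDR blocks (Python used for illustration; SAMPLE below is a tiny hand-made stand-in with the same keys the real file uses, not live AWS data):

```python
import json

# SAMPLE mimics the structure of AWS's ip-ranges.json: a "prefixes" list
# whose entries carry "ip_prefix", "region", and "service". Hand-made data.
SAMPLE = json.dumps({
    "prefixes": [
        {"ip_prefix": "3.5.76.0/22",   "region": "us-west-2", "service": "AMAZON"},
        {"ip_prefix": "52.94.76.0/22", "region": "us-east-1", "service": "AMAZON"},
    ]
})

def region_prefixes(ip_ranges_json, region):
    """Return every CIDR block the document publishes for the given region."""
    doc = json.loads(ip_ranges_json)
    return [p["ip_prefix"] for p in doc["prefixes"] if p["region"] == region]

print(region_prefixes(SAMPLE, "us-west-2"))  # -> ['3.5.76.0/22']
```

Whitelisting the result of such a filter means whitelisting an entire AWS region's address space, which is exactly why hostnames are the preferred control.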
There is some missing information that I think may help us answer your question a little more easily. You mentioned that you are running SnowCD from the Windows Command Prompt. I am assuming you are doing this from a computer within your company's network, or from home. Is this correct?
Assuming the above is correct, I would guess that you, or someone responsible for your Snowflake account at the ACCOUNTADMIN level, has defined a network policy within Snowflake that has a whitelist and blacklist of IPs. If your IP is not one of those IPs, or doesn't fall within a range defined in the whitelist, you are going to get blocked and see the messages SnowCD reports. I would figure out what your IP is and then, using the ACCOUNTADMIN role, add that IP and/or range to your network policy's whitelist.
If you are running a Windows box via an AWS EC2 instance, that changes a little bit in that you are likely going to need to whitelist your AWS VPC outgoing address(es).
I hope this helps. If my assumptions are incorrect, please give me some more detail and I will hopefully be able to help. Thanks!
I am trying to browse ntds.dit database file using LDAP in DSRM mode using dsamain command line utility tool as shown below:
dsamain /dbpath C:\snapshot\Windows\NTDS\ntds.dit /ldapport 5000
But it's giving me the error
The directory service failed to open a TCP port for exclusive use in DSRM mode.
One thing to note here is that the same command with the same file works perfectly in normal mode.
What I have tried so far:
Tried hosting it on multiple ports
Tried adding all the ports to the inbound rules to allow the connection, in case the port is blocked in DSRM mode
Tried creating a new snapshot and mounting it
Checked whether any other process was using the same port (it was not), and tried many different random free ports
But everything failed.
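The "is another process using the port" check in the list above can be approximated with a quick bind test (a hedged sketch, Python used for illustration; not part of the original diagnosis): a port dsamain can open for exclusive use must at least be bindable.

```python
import socket

def port_bindable(port, host="127.0.0.1"):
    """Return True if the TCP port can be bound, i.e. is free for
    exclusive use -- the precondition dsamain's /ldapport needs."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

# Example: port_bindable(5000) on the DSRM box before running dsamain.
```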
I am attaching the event trace below:
EVENTLOG (Warning): NTDS General / Security : 3051
The directory has been configured to not enforce per-attribute authorization during LDAP add operations. Warning events will
be logged, but no requests will be blocked.
This setting is not secure and should only be used as a temporary troubleshooting step. Please review the suggested mitigations in the link below.
For more information, please see https://go.microsoft.com/fwlink/?linkid=2174032.
EVENTLOG (Warning): NTDS General / Security : 3054
The directory has been configured to allow implicit owner privileges when initially setting or modifying the nTSecurityDescriptor
attribute during LDAP add and modify operations. Warning events will be logged, but no requests will be blocked.
This setting is not secure and should only be used as a temporary troubleshooting step. Please review the suggested mitigations in the link below.
For more information, please see https://go.microsoft.com/fwlink/?linkid=2174032.
EVENTLOG (Informational): NTDS General / Service Control : 1000
Microsoft Active Directory Domain Services startup complete
EVENTLOG (Warning): NTDS LDAP / LDAP Interface : 2509
The Directory Service failed to open a TCP port for exclusive use.
Additional Data:
Port number:
5002
Error Value:
0 The operation completed successfully.
EVENTLOG (Warning): NTDS LDAP / LDAP Interface : 2509
The Directory Service failed to open a TCP port for exclusive use.
Additional Data:
Port number:
5003
Error Value:
0 The operation completed successfully.
EVENTLOG (Warning): NTDS LDAP / LDAP Interface : 2509
The Directory Service failed to open a TCP port for exclusive use.
Additional Data:
Port number:
5002
Error Value:
0 The operation completed successfully.
EVENTLOG (Warning): NTDS LDAP / LDAP Interface : 2509
The Directory Service failed to open a TCP port for exclusive use.
Additional Data:
Port number:
5003
Error Value:
0 The operation completed successfully.
I have two domain controllers: one is the PDC, which also hosts the Root CA (not best practice) and DNS; the other is just a domain controller.
I have done all the needed configuration for LDAPS on the second domain controller and tested with LDP, which works fine from both the workstation and the DC itself. However, I could not connect to it from a Linux server.
When using openssl s_client -connect dc02.domainname:636 -showcerts, it always returns "no peer certificate available":
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 289 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1610484233
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
The firewall and everything else are OK, since the connectivity tests all pass. I do not know where to continue troubleshooting this problem, as I can see the certs are in the computer store and the service keystore.
Could someone give me a hint as to where I can continue this troubleshooting and investigation?
Thanks.
This may depend on the permissions of the cert that you are attempting to grab from the server. You may want to add an Everyone group and allow it read-only access, or join the Linux server to the domain via SSSD and then give the server access in the same menu. Also, ensure the cert you are attempting to push is publicly visible.
REF (How to add a Linux machine to AD): https://www.datasunrise.com/blog/professional-info/integrating-a-linux-machine-into-windows-active-directory-domain/
CA Properties
It turns out that port 88 (Kerberos) is also needed when doing LDAPS authentication. After opening port 88 for both TCP and UDP, the test and everything else work fine.
Thanks for the suggestions.
I'm trying to configure a Spring Boot datasource for a remote IBM DB2 database. I have added the following configuration to my application.properties file:
spring.jpa.hibernate.ddl-auto=none
spring.datasource.url=jdbc:db2://<dbhost>:<dbport>/<db>
spring.datasource.username=<username>
spring.datasource.password=<password>
I even added the same properties in application.yml:
spring:
  datasource:
    url: jdbc:db2://dashdb-txn-sbox.services.eu-gb.bluemix.net:3000/BLUDB:sslConnection=true;
    username: <username>
    password: <password>
    driverClassName: com.ibm.db2.jcc.DB2Driver
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.DB2Dialect
However, I'm still getting this error:
A communication error occurred during operations on the connection's underlying socket, socket input stream, or socket output stream. Error location: Reply.fill() - socketInputStream.read (-1). Message: Read timed out. ERRORCODE=-4499, SQLSTATE=08001
This question is more about configuration than programming.
See this FAQ for JDBC ERRORCODE -4499
which mentions:
(A.5) Message: Read timed out
This message is returned when client is waiting for reply from the
server and the server did not reply in time. Could be caused by client
timeout. Ensure no timeouts set in JDBC driver properties:
blockingReadConnectionTimeout=0 (default)
commandTimeout=0 (default)
loginTimeout = 0 (default)
Could also be caused by server or network issues.
If the issue is persistent, ensure you are using the latest Db2 JDBC driver (at the present date, that would be version 4.26.14 or higher).
You can use the JDBC trace (follow the instructions in the IBM Db2 documentation to enable it) to look under the covers and see exactly what is happening.
Ensure the remote Db2-server has sufficient compute resources to respond in time. You may need to open a ticket with your cloud vendor (IBM) if the jdbc trace suggests a server side issue that is not under your direct control.
50001 is the usual (default) port number for SSL connections, not 3000 as you have in your question.
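Since a -4499 read timeout can also be plain network trouble, a quick reachability probe of the Db2 host and port helps separate firewall/routing problems from driver configuration. A sketch (Python used for illustration; the host and port are whatever your JDBC URL points at):

```python
import socket

def tcp_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within the
    timeout -- a quick way to tell network/firewall failures apart from
    JDBC driver configuration problems."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage; 50001 is the usual Db2 SSL port:
# print(tcp_reachable("dashdb-txn-sbox.services.eu-gb.bluemix.net", 50001))
```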
I researched this question and could not find any pointers to this error. I am essentially trying to connect to a server using a libcurl program.
https://curl.haxx.se/libcurl/c/ftpsget.html
The program compiles fine but gives a run time error as follows:
Trying (Some IP)...
* Connected to (Some server name) (same ip as above) port 21 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 697 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* gnutls_handshake() failed: An unexpected TLS packet was received.
* Closing connection 0
curl told us 35
I have access to this server through a username and password.
Kinda old question, but in case it helps others...
You might need to use --insecure --ftp-ssl --tlsv1, and maybe ftps://, but sometimes ftp://
Hope it helps!
EDIT:
URL attached
FTPS connection error: gnutls_handshake failed
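For reference, the explicit-FTPS mode that --ftp-ssl requests (plain FTP greeting on port 21, then the control channel is upgraded with AUTH TLS) can also be driven from Python's standard ftplib. All connection parameters below are placeholders, not values from the question:

```python
from ftplib import FTP_TLS

def fetch_over_explicit_ftps(host, user, password, remote_path, local_path):
    """Download a file over explicit FTPS -- the mode curl's --ftp-ssl
    flag requests. host/user/password/paths are placeholders."""
    ftps = FTP_TLS(host)         # plain FTP connection on port 21
    ftps.login(user, password)   # login() issues AUTH TLS before USER/PASS
    ftps.prot_p()                # encrypt the data channel as well
    with open(local_path, "wb") as f:
        ftps.retrbinary("RETR " + remote_path, f.write)
    ftps.quit()
```

If the server instead expects implicit TLS (a TLS handshake immediately on connect, traditionally port 990), this explicit mode fails in exactly the "unexpected TLS packet" way the question shows, which is why trying both ftps:// and ftp:// with --ftp-ssl is worth it.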
I’m trying to connect to an LDAP directory over SSL using the Windows LDAP C-API. This fails with error code 0x51 = LDAP_SERVER_DOWN while the event log on the client computer has this:
"The certificate received from the remote server does not contain the expected name. It is therefore not possible to determine whether we are connecting to the correct server. The server name we were expecting is eim-tsi2.sam.develop.beta.ads. The SSL connection request has failed. The attached data contains the server certificate."
This can't be true, since "Ldap Admin" is able to connect over SSL on port 636.
The LDAP directory is an Oracle DSEE which has the CA and the server certificate in the appropriate cert store.
The client has the CA installed in the "Trusted Root Certification Authorities" store, in the "Local Computer" physical store. I assumed this to be the right place for the CA, since my little client program uses the Windows LDAP C-API; LDAP Admin indeed expects the CA there.
Here is an excerpt of my program omitting the error handling and other obvious source code:
ld = ldap_sslinit(host, LDAP_SSL_PORT, 1);
// Set options: LDAP version, timeout ...
rc = ldap_set_option(ld, LDAP_OPT_SSL, LDAP_OPT_ON);
// Now connect:
rc = ldap_connect(ld, NULL);
Result:
0x51 = LDAP_SERVER_DOWN
Connecting without SSL succeeds so the LDAP server is generally accessible.
Since Ldap Admin is able to connect over SSL, I assume the certificates are valid and in the right place. But obviously the LDAP API expects them somewhere else and cannot get the server certificate from the server. I configured the certs as described here: https://msdn.microsoft.com/en-us/library/aa366105%28v=vs.85%29.aspx
What am I doing wrong?
Sometimes it helps to read error messages more carefully. The entry in the event viewer caused by an unsuccessful bind over SSL was "The server name we were expecting is eim-tsi2.sam.develop.beta.ads."
I should have noticed that the name should have been eim-tsi2.cgn.de.(etc.) instead; the domain name part was wrong.
This is a bug in Schannel which can be solved by an entry in the registry as described here: https://support.microsoft.com/en-us/kb/2275950.
I still do not know why Ldap Admin was able to connect without that additional registry key, although it also uses the WinLDAP API and therefore should have run into the same error. But that doesn’t matter any more.
Thanks, Andrew, for your help.
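Independent of the WinLDAP API, a verified TLS handshake from any machine will reveal whether the server's certificate actually carries the expected name. A hedged sketch (Python used for illustration): it reports "ok", a certificate rejection such as the hostname mismatch Schannel logged, or a lower-level handshake failure.

```python
import socket
import ssl

def check_tls_name(host, port=636):
    """Attempt a fully verified TLS handshake against an LDAPS port and
    report the outcome as a short string."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return "ok"
    except ssl.SSLCertVerificationError as e:
        # e.g. a name mismatch between `host` and the cert's CN/SAN
        return "certificate rejected: " + e.verify_message
    except (ssl.SSLError, OSError) as e:
        return "handshake failed: " + str(e)

# Placeholder host, not the server from the question:
# print(check_tls_name("ldap.example.com"))
```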