I have Service Broker configured in two different databases on two different servers. The problem is that I can't receive messages because I get this error:
Connection handshake failed. The login 'public' does not have CONNECT permission on the endpoint. State 84.
I have endpoints with certificates, and I granted CONNECT permission to a specific user who owns the certificate (I did this on both servers, because each is always in an availability group). While looking for the problem, I noticed that the certificate on the initiating server is different from the certificate on the target server:
- initiator - signature algorithm: sha1RSA, key length: 1024 (SQL ver. 11.0.7 ...)
- target - signature algorithm: sha256RSA, key length: 2048 (SQL ver. 15.0.4 ...)
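For reference, the endpoint-to-certificate binding on each instance can be checked with the standard catalog views (a sketch; run it on both servers):

select e.name as endpoint_name, e.state_desc,
       c.name as certificate_name, c.subject, c.expiry_date
from sys.service_broker_endpoints e
join master.sys.certificates c on c.certificate_id = e.certificate_id;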
When I grant permission:
grant connect on endpoint::BrokerEndPoint to PUBLIC
the servers communicate, but this does not solve the problem.
Can different types of certificates be a problem?
grant connect on endpoint::BrokerEndPoint to PUBLIC
Doing this basically bypasses authorization, as everybody is authorized to connect. I think you should try to fix the user/role permissions instead.
I noticed that the certificate from the initiating server is different from the certificate from the target server:
This shouldn't make any difference.
It looks like the problem is that you somehow misconfigured the user/login/cert chain. It is so darn complicated that it is easy to break... Here is a redux of the proper setup:
There are two layers of security: conversation security (between services in DBs) and transport security (between endpoints in instances). You are now talking about transport security (endpoint to endpoint).
Endpoint security is always between the SQL instances involved. If you have AGs, then each node needs to be configured separately, as endpoints are instance-level concepts and do not follow AG failovers.
An endpoint will use the certificate configured (CREATE ENDPOINT ... FOR SERVICE_BROKER (AUTHENTICATION = CERTIFICATE <certname>)). The certificate must have an accessible private key to be usable (i.e., encrypted with a key that can be opened from the service master key, usually via the master database master key).
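For illustration, a minimal endpoint definition with certificate authentication might look like this sketch (the names, path, and port are placeholders):

-- in master on S1; names and port are placeholders
CREATE CERTIFICATE C1
    WITH SUBJECT = 'S1 Service Broker endpoint';

CREATE ENDPOINT BrokerEndPoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (AUTHENTICATION = CERTIFICATE C1);

-- export the public key so the other instance can import it
BACKUP CERTIFICATE C1 TO FILE = 'C:\certs\C1.cer';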
During the handshake, authentication and authorization are mutual. Say two instances, S1 and S2, need to connect; then:
- S1 will use certificate C1; S2 uses certificate C2.
- S1 needs to have C2 (public key only) in its master database, and S2 needs to have C1 (public key only) in its master.
- The C2 certificate in S1's master is owned by a database user (a database principal), say US2. This user has a login (a server principal, say UL2). The login UL2 must be granted CONNECT permission on the S1 endpoint.
- Vice versa: the C1 certificate in S2's master is owned by a user (US1) that has a login (UL1). This login UL1 needs to be granted CONNECT permission on the S2 endpoint. (See the sketch after this list.)
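In T-SQL, the S1 side of that exchange might look like the following sketch (placeholder names and path; the mirror image runs on S2 with C1/US1/UL1):

-- on S1, in master: import S2's public key and authorize its login
CREATE LOGIN UL2 WITH PASSWORD = '<strong random password>';  -- never used to log in directly
CREATE USER US2 FOR LOGIN UL2;
CREATE CERTIFICATE C2 AUTHORIZATION US2
    FROM FILE = 'C:\certs\C2.cer';  -- public key only, exported on S2
GRANT CONNECT ON ENDPOINT::BrokerEndPoint TO UL2;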
For troubleshooting, enable the "Audit Broker Login" event in Profiler (in the Security Audit group). This event fires with details of why a handshake fails, when it fails.
Thank you for your time. I checked the connections and data again and found no problem anywhere. Since I was still bothered by the certificate difference I wrote about, as a test I created another connection, this time on the server with the SHA-256 certificate, and, as I suspected, that is where the problem lies. To confirm the theory, I replaced the certificate on the initiator server I wrote about earlier with a SHA-256 one (I deleted and re-created the endpoint with this certificate), replaced that certificate on the target server as well, and the problem was solved. So, as I thought, the certificates must use the same type of algorithm.
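(For anyone repeating this, the swap amounted to something like the following sketch; names, paths, and the port are placeholders:)

-- on the initiator, in master
DROP ENDPOINT BrokerEndPoint;
CREATE CERTIFICATE BrokerCertSha256
    FROM FILE = 'C:\certs\initiator_sha256.cer'
    WITH PRIVATE KEY (FILE = 'C:\certs\initiator_sha256.pvk',
                      DECRYPTION BY PASSWORD = '<pvk password>');
CREATE ENDPOINT BrokerEndPoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (AUTHENTICATION = CERTIFICATE BrokerCertSha256);

-- on the target, in master: replace the old public key with the new one
DROP CERTIFICATE C1;
CREATE CERTIFICATE C1 AUTHORIZATION US1
    FROM FILE = 'C:\certs\initiator_sha256.cer';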
One of my setups is the following:
*An offline root CA.
*An online SubCA.
*A server with the CRL.
*A domain controller.
*Domain-joined Windows client machines that are able to receive certificates and renew them only if their current certificate is valid (not expired and not revoked); if the cert is revoked, the renewal fails.
*A server configured with the NDES role.
*Realm-joined Linux client machines that are able to receive and automatically renew their certificates via the NDES server.
The first certificate enrollment by the Linux client machine requires the one-time password retrieved at http:///certsrv/mscep_admin using the NDES service account credentials. After that, certificates are renewed automatically as long as the currently installed certificate is valid (no new password retrieval needed and no manager approval needed).
My problem: if from the SubCA I revoke the Linux client certificate and manually publish the CRL to make sure that the revocation appears in it immediately, the automatic renewal still succeeds, and the renewed certificate is not in the CRL, so the client machine ends up having a truly valid cert.
I have tried clearing the CRL cache at the NDES server and re-downloading the CRL: the serial number of the revoked certificate is in it, but the certificate still gets renewed.
I have tried playing around with the values at Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\MSCEP with the registry editor, hoping to enforce CRL checking via a CRLFlags value but did not succeed.
I've tried many things, read through all the content that I could find on NDES but I am running out of ideas of things to try.
In another setup, with a RADIUS server, we noticed something similar: if a client certificate is valid in terms of expiration date but has been revoked manually, it still gets accepted by the RADIUS server as if it had never been revoked. Likewise, the CRL cache does not seem to be the issue.
Any help would be greatly appreciated.
My Snowflake database is SSO-enabled, and SSO connectivity works perfectly fine when I connect through my Chrome browser.
When I try to connect to the Snowflake database using DBeaver (external browser), I get the below error.
NOTE: I can confirm that I am able to see the identity verification page (in the external browser) and that the identity has been verified. I feel the issue happens when the external browser confirms the identity verification back to DBeaver.
Can anyone please help?
The above error is a generic message and can appear due to misconfiguration at either the identity provider end or the service provider end.
It is recommended to verify the configuration of your identity provider and make sure all the steps were performed correctly.
Below are the common reasons for this error; however, other improper configurations could lead to a similar error message as well.
a. Mismatch in user configuration details at the IdP (identity provider) and Snowflake.
b. SSO certificates are incorrect.
Solution
a. The username configured at the identity provider end should match the login_name at the Snowflake end for that user. For instance, if the SAML response shows the NameID as abc@xyz.com, then the login_name configured at the Snowflake end should be abc@xyz.com.
SAML response snippet:
<NameID>abc@xyz.com</NameID>
Set the login_name same as the NameID configured at the identity provider side.
alter user <username> set login_name = 'abc@xyz.com';
b. The SSO certificate configured at the Snowflake end should match the certificate configured at the identity provider end.
Note:
The certificate value contains a number of new lines. Remove the new lines, forming a certificate value with a single line.
If the above suggestions did not help, check the error codes for the failed login attempts in the Snowflake Information Schema using the query below, and look up the reason for that error code here. The query retrieves up to 100 login events of every user your current role is allowed to monitor within the last hour; you can modify it as appropriate.
select *
from table(information_schema.login_history(dateadd('hours', -1, current_timestamp()), current_timestamp()))
order by event_timestamp;
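If you only want the failures, a variant like this (same table function, filtered on the IS_SUCCESS column) narrows it down:

select event_timestamp, user_name, error_code, error_message
from table(information_schema.login_history(dateadd('hours', -1, current_timestamp()), current_timestamp()))
where is_success = 'NO'
order by event_timestamp;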
I understand that the principle of Kerberos is to allow authentication between users and services on an unsecured network. Tickets generated by the authentication and ticket-granting service support secure communications and don't require a password to be transmitted across the network.
The flow relies on the auth server in the KDC (s) having a shared secret with the client (c).
However, at some point the user itself must have been created, and generally users are created from client machines (you don't usually log onto the domain controller to create users).
So how do the user and secret key (Kac) get created in the first place and stored in the KDC database if the password/secret is never sent across the network?
The administration of principals in a KDC's database is outside the scope of the normal Kerberos protocol. Usually it's done using some auxiliary protocol, and each KDC can implement it in any way it wants.
For example, MIT Kerberos has the (SunRPC-based) kadmin protocol, and the kadmin client indeed sends the actual administrator-specified password to the kadmind service running on the KDC. (The RPC message is encrypted using the Kerberos session key, of course.) Heimdal has its own kadmin protocol, mostly incompatible with MIT's but working the same way.
(Both also have "local" versions of the kadmin tool, which directly accesses the KDC database backend – this is how the initial admin accounts are created, typically by running kadmin.local on the server console or through SSH.)
Microsoft Active Directory has several user administration protocols, some of them dating to pre-AD days, but the primary mechanism is LDAP (usually over a GSSAPI/Kerberos-encrypted session, but occasionally TLS-encrypted).
To create a new account in MS AD, the administrator creates an LDAP 'User' or 'Computer' entry with the plain-text 'userPassword' attribute, and the domain controller automatically transforms this attribute into Kerberos keys (instead of storing it raw). The commonly used "AD Users & Computers" applet (dsa.msc) is really an interface to the LDAP directory.
All of the above implementations also support a second administration protocol, the kpasswd protocol whose sole purpose is to allow an existing user to change their password. As you'd expect, it also works by transmitting the user's new password over the network, again making use of Kerberos authentication and encryption. (Password changes can also be done via AD's LDAP or MIT/Heimdal's kadmin, but kpasswd has the advantage of being supported by all three.)
As a final side note, the PKINIT extension uses X.509 certificates to authenticate the AS-REQ – in which case the client doesn't know their own shared secret, so the KDC in fact sends the whole Kc to the client over the network (encrypted using a session key negotiated via DH, somewhat like TLS would). This is mostly used in Active Directory environments with "smart card" authentication.
I am new to Service Broker and I am setting up a distributed topology: two physical servers in the same domain, to replace a hacked-up two-way replication setup that keeps causing data corruption.
I am using the same domain account for both ends of the conversation, and I am getting "The certificate's private key cannot be found" in Profiler.
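For what it's worth, whether each certificate in master actually has an accessible private key can be checked with the standard catalog view (NO_PRIVATE_KEY means the certificate holds a public key only):

select name, pvt_key_encryption_type_desc, expiry_date
from master.sys.certificates;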
I had the same setup working with two separate local logins previously, so I am thinking that my issue is the fact that I am now attempting to use the same user account on both ends.
So my question is: does Service Broker require separate user accounts on each end of the conversation if I am using a domain account with Windows authentication? And if so, is there any real advantage to using domain accounts over the way I had it previously, with two local logins?
We have one domain with a (non-transitive) trust to two other domains. Users from the base domain can log in without any problems, but users from the other domains cannot.
We get an exception from ADFS like this:
The Federation Service encountered an error during an attempt to connect to a LDAP server at {trusted domain}.
Additional Data
Domain Name: {trusted domain}
LDAP server hostname: {trusted domain dc}
Error from LDAP server: Exception Details: A local error occurred.
User Action
Check the network connectivity to the LDAP server. Also, check whether the LDAP server is configured properly.
After researching, we found out it's a one-way trust problem. The problem is, we don't have any possibility to change the trust configuration or to set up another ADFS in the trusted domains.
Is there some possibility to get this to work? Maybe some workaround?
Is it possible to change the FormSignin page, search for the user manually with DirectoryServices, and create the token manually?
Thanks All!
Not sure there's a way to do it if you keep your ADFS service account in the trusting domain (in a one-way trust scenario). You would need to allow that account to query LDAP in the trusted domain, which would usually mean a two-way trust.
Absent that, you may try to use an ADFS service account from the trusted domain. Of course, this would only work for one of your domains (unless the two other domains have trusts between themselves).