I'd like to use CockroachDB Serverless for my Ecto application. How do I specify the connection string?
I get an error like this when trying to connect.
[error] GenServer #PID<0.295.0> terminating
** (Postgrex.Error) FATAL 08004 (sqlserver_rejected_establishment_of_sqlconnection) codeParamsRoutingFailed: missing cluster name in connection string
(db_connection 2.4.1) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
CockroachDB Serverless says to connect by including the cluster name in the connection string, like this:
postgresql://username:<ENTER-PASSWORD>@free-tier.gcp-us-central1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=$HOME/.postgresql/root.crt&options=--cluster%3Dcluster-name-1234
but I'm not sure how to get Ecto to create this connection string via its configuration.
The problem is that Postgrex is not able to parse all of the information from the connection URL, notably the SSL configuration. The solution is to specify the connection parameters explicitly, including the cacertfile SSL option. Assuming you have downloaded your cluster's CA certificate to priv/certs/ca-cert.crt, you can use the following config as a template:
config :my_app, MyApp.Repo,
  username: "my_user",
  password: "my_password",
  database: "defaultdb",
  hostname: "free-tier.gcp-us-central1.cockroachlabs.cloud",
  port: 26257,
  ssl: true,
  ssl_opts: [
    cacertfile: Path.expand("priv/certs/ca-cert.crt")
  ],
  parameters: [options: "--cluster=my-cluster-123"]
Possible Other Issues
Table Locking
CockroachDB also does not support the locking that Ecto/Postgrex attempts on the migration table, so the :migration_lock config needs to be disabled as well:
config :my_app, MyApp.Repo,
  # ...
  migration_lock: false
Auth generator
Finally, the new phx.gen.auth generator defaults to using the citext extension for storing a user's email address in a case-insensitive manner. The line in the generated migration that executes CREATE EXTENSION IF NOT EXISTS citext should be removed, and the column type for the :email field should be changed from :citext to :string.
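As a sketch, a migration generated by phx.gen.auth could be adjusted like this (module, table, and column names below are the generator's defaults; adapt them to your app):

```elixir
defmodule MyApp.Repo.Migrations.CreateUsersAuthTables do
  use Ecto.Migration

  def change do
    # Removed: execute "CREATE EXTENSION IF NOT EXISTS citext", ""

    create table(:users) do
      # was: add :email, :citext, null: false
      add :email, :string, null: false
      add :hashed_password, :string, null: false
      add :confirmed_at, :naive_datetime

      timestamps()
    end

    create unique_index(:users, [:email])
  end
end
```

Note that without citext, email uniqueness is case-sensitive, so you may also want to downcase the address before writing it.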
This configuration allows Ecto to connect to CockroachDB Serverless correctly:
config :myapp, MyApp.Repo,
  username: "username",
  password: "xxxx",
  database: "defaultdb",
  hostname: "free-tier.gcp-us-central1.cockroachlabs.cloud",
  port: 26257,
  ssl: true,
  ssl_opts: [
    cert_pem: "foo.pem",
    key_pem: "bar.pem"
  ],
  show_sensitive_data_on_connection_error: true,
  pool_size: 10,
  parameters: [
    options: "--cluster=cluster-name-1234"
  ]
I have created a Neptune instance in my AWS account, along with a Load Balancer so I can access it from my local machine to play around.
I'm basically forwarding all connections on port 80 at my LB to port 8182 on the Neptune instance.
So I can easily query it through the browser. In fact, this is the output from /status:
// 20191211170323
// http://my-lb/status
{
"status": "healthy",
"startTime": "Mon Dec 09 20:06:21 UTC 2019",
"dbEngineVersion": "1.0.2.1.R2",
"role": "writer",
"gremlin": {
"version": "tinkerpop-3.4.1"
},
"sparql": {
"version": "sparql-1.1"
},
"labMode": {
"ObjectIndex": "disabled",
"Streams": "disabled",
"ReadWriteConflictDetection": "enabled"
}
}
Problem is when I try to connect with it through Gremlin Console or Java code I'm getting the following errors:
gremlin> :remote connect tinkerpop.server conf/remote-neptune.yaml
ERROR org.apache.tinkerpop.gremlin.driver.Handler$GremlinResponseHandler - Could not process the response
io.netty.handler.codec.http.websocketx.WebSocketHandshakeException: Invalid handshake response getStatus: 403 Forbidden
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.verify(WebSocketClientHandshaker13.java:226)
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker.finishHandshake(WebSocketClientHandshaker.java:276)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketClientHandler.channelRead0(WebSocketClientHandler.java:69)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:748)
And my remote-neptune.yaml is as simple as:
hosts: [my-lb]
port: 80
connectionPool: { enableSsl: false}
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
I have updated my AWS credentials although I don't think that's related since I'm accessing it through the LB.
And the weirdest part is that this same scenario was working like a week ago :/
Any ideas?
Thanks!
Looks like the problem resolved itself, but I'm sharing a few things to watch out for in case this happens again in the future. If you see connection issues, your first step should be to check whether it's a network connectivity issue. (You mentioned that you were going to check whether something changed with regards to security groups, so do update if that was indeed the case.) To check whether it is an SG issue, log into your client instance and make a simple telnet call to the DB endpoint:
telnet <endpoint> <port>
If it responds with "Connected", then you can be sure that your SGs are correct, and you are dealing with an application-layer problem.
As called out in comments, some of the possible culprits could be:
You previously had a setup without IAM Auth in Neptune (not on ALB) and now you enabled IAM Auth. (Emphasis - I'm referring to IAM Auth on the database, and not some other component in between).
Gremlin client-server mismatches.
Some explicit settings on the ALB that could hinder the requests.
And a few others. To summarize, first classify whether it is an L2/L3 (network) issue or an L7 (application) issue, and start investigating based on that.
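For reference, that triage can be run from the client machine with standard tools (the hostname below is the placeholder from the question):

```
# L2/L3 check: can we open a TCP connection through the LB at all?
telnet my-lb 80

# L7 check: does the HTTP endpoint answer?
curl -s http://my-lb/status

# L7 WebSocket check: request an upgrade on the Gremlin endpoint.
# A healthy server answers "101 Switching Protocols"; a 403 here points
# at auth or ALB configuration rather than networking.
curl -i -N \
  -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" -H "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==" \
  http://my-lb/gremlin
```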
I want to set up a Service Fabric cluster with AD integration using an ARM template. I am following the instructions given at
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-create-template
I get the following error:
"message": "Common names and thumbprints should not be both defined for a particular certificate."
{
  "apiVersion": "2018-02-01",
  "type": "Microsoft.ServiceFabric/clusters",
  "name": "[parameters('clusterName')]",
  "location": "[parameters('clusterLocation')]",
  "dependsOn": [
    "[concat('Microsoft.Storage/storageAccounts/', parameters('supportLogStorageAccountName'))]"
  ],
  "properties": {
    "addonFeatures": [
      "DnsService",
      "RepairManager"
    ],
    "certificate": {
      "thumbprint": "[parameters('certificateThumbprint')]",
      "x509StoreName": "[parameters('certificateStoreValue')]"
    },
    "certificateCommonNames": {
      "commonNames": [
        {
          "certificateCommonName": "[parameters('certificateCommonName')]",
          "certificateIssuerThumbprint": ""
        }
      ],
      "x509StoreName": "[parameters('certificateStoreValue')]"
    },
    "azureActiveDirectory": {
      "tenantId": "[parameters('aadTenantId')]",
      "clusterApplication": "[parameters('aadClusterApplicationId')]",
      "clientApplication": "[parameters('aadClientApplicationId')]"
    },
    "clientCertificateCommonNames": [],
    "clientCertificateThumbprints": [],
    "clusterState": "Default",
    "diagnosticsStorageAccountConfig": {
      "blobEndpoint": "[reference(concat('Microsoft.Storage/storageAccounts/', parameters('supportLogStorageAccountName')), variables('storageApiVersion')).primaryEndpoints.blob]",
      "protectedAccountKeyName": "StorageAccountKey1",
      "queueEndpoint": "[reference(concat('Microsoft.Storage/storageAccounts/', parameters('supportLogStorageAccountName')), variables('storageApiVersion')).primaryEndpoints.queue]",
      "storageAccountName": "[parameters('supportLogStorageAccountName')]",
      "tableEndpoint": "[reference(concat('Microsoft.Storage/storageAccounts/', parameters('supportLogStorageAccountName')), variables('storageApiVersion')).primaryEndpoints.table]"
    },
    "fabricSettings": [
      {
        "parameters": [
          {
            "name": "ClusterProtectionLevel",
            "value": "[parameters('clusterProtectionLevel')]"
          }
        ],
        "name": "Security"
      }
    ],
    "managementEndpoint": "[concat('https://',reference(concat(parameters('lbIPName'),'-','0')).dnsSettings.fqdn,':',parameters('nt0fabricHttpGatewayPort'))]",
    "nodeTypes": [
      {
        "name": "[parameters('vmNodeType0Name')]",
        "applicationPorts": {
          "endPort": "[parameters('nt0applicationEndPort')]",
          "startPort": "[parameters('nt0applicationStartPort')]"
        },
        "clientConnectionEndpointPort": "[parameters('nt0fabricTcpGatewayPort')]",
        "durabilityLevel": "Bronze",
        "ephemeralPorts": {
          "endPort": "[parameters('nt0ephemeralEndPort')]",
          "startPort": "[parameters('nt0ephemeralStartPort')]"
        },
        "httpGatewayEndpointPort": "[parameters('nt0fabricHttpGatewayPort')]",
        "isPrimary": true,
        "reverseProxyEndpointPort": "[parameters('nt0reverseProxyEndpointPort')]",
        "vmInstanceCount": "[parameters('nt0InstanceCount')]"
      }
    ],
    "provisioningState": "Default",
    "reliabilityLevel": "Silver",
    "upgradeMode": "Automatic",
    "vmImage": "Windows"
  },
  "tags": {
    "resourceType": "Service Fabric",
    "clusterName": "[parameters('clusterName')]"
  }
}
The error says it all: remove the certificate section of your template:
"certificate": {
  "thumbprint": "[parameters('certificateThumbprint')]",
  "x509StoreName": "[parameters('certificateStoreValue')]"
},
The error message is clear: Common names and thumbprints should not be both defined for a particular certificate. The docs clearly say that if you want to find the certificate by common name, you have to remove the certificate thumbprint setting.
It is mentioned in step 1:
In the parameters section, add a certificateCommonName parameter: ...
Also consider removing the certificateThumbprint, it may no longer be
needed.
step 2
add "commonNames": ["[parameters('certificateCommonName')]"], and
remove "thumbprint": "[parameters('certificateThumbprint')]",.
and 3
add a certificateCommonNames setting with a commonNames property and
remove the certificate setting (with the thumbprint property) as in
the following example:
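Putting those steps together, the cluster resource in the template above would keep only the common-name form, roughly like this (parameter names as in the question):

```json
"certificateCommonNames": {
  "commonNames": [
    {
      "certificateCommonName": "[parameters('certificateCommonName')]",
      "certificateIssuerThumbprint": ""
    }
  ],
  "x509StoreName": "[parameters('certificateStoreValue')]"
}
```

with the sibling "certificate" object (the thumbprint form) deleted entirely.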
We are currently running RabbitMQ 3.5.6, and though it is able to successfully bind to the LDAP server, logins to the management UI via LDAP credentials are failing. I've been unable to track down the cause of this.
Our end goal is to have users be able to log into the RabbitMQ management UI with their LDAP credentials, and have RabbitMQ assign them permissions based on the groups that they are a member of in LDAP.
Upon login, both with a local account that I created for testing purposes and with my LDAP credentials, I am presented with an internal server error:
Got response code 500 with body {"error":"Internal Server Error","reason":"{error,\n {try_clause,\n [{\"CN=rabbit,OU=System,OU=People,DC=domain,DC=tld\",\n \"LDAP_PASSWORD\"}]},\n [{rabbit_auth_backend_ldap,with_ldap,3,\n [{file,\"rabbitmq-auth-backend-ldap/src/rabbit_auth_backend_ldap.erl\"},\n {line,271}]},\n {rabbit_auth_backend_ldap,user_login_authentication,2,\n [{file,\"rabbitmq-auth-backend-ldap/src/rabbit_auth_backend_ldap.erl\"},\n {line,59}]},\n {rabbit_access_control,try_authenticate,3,\n [{file,\"src/rabbit_access_control.erl\"},{line,91}]},\n {rabbit_access_control,'-check_user_login/2-fun-0-',4,\n [{file,\"src/rabbit_access_control.erl\"},{line,77}]},\n {lists,foldl,3,[{file,\"lists.erl\"},{line,1262}]},\n {rabbit_mgmt_util,is_authorized,6,\n [{file,\"rabbitmq-management/src/rabbit_mgmt_util.erl\"},{line,121}]},\n {webmachine_resource,resource_call,3,\n [{file,\n \"webmachine-wrapper/webmachine-git/src/webmachine_resource.erl\"},\n {line,186}]},\n {webmachine_resource,do,3,\n [{file,\n \"webmachine-wrapper/webmachine-git/src/webmachine_resource.erl\"},\n {line,142}]}]}\n"}
The rabbitmq.config that I am currently using is below, followed by the log entries generated by RabbitMQ.
%% -*- mode: erlang -*-
[
{rabbit,
[{tcp_listeners, []},
{ssl_listeners, [{"10.7.232.1", 5672}]},
{log_levels, [{connection, info}, {channel, info}]},
{reverse_dns_lookups, true},
{ssl_options, [{certfile, "/usr/local/etc/rabbitmq/rmqs01.cer"},
{keyfile, "/usr/local/etc/rabbitmq/rmqs01.key"}]},
{auth_backends, [rabbit_auth_backend_ldap, rabbit_auth_backend_internal]},
{auth_mechanisms, ['PLAIN']}
]},
{rabbitmq_auth_backend_ldap,
[{servers, ["dc01.domain.tld", "dc02.domain.tld", "dc03.domain.tld"]},
%%{user_dn_pattern, "cn=${username},ou=People,dc=domain,dc=tld"},
{user_dn_pattern, []},
{use_starttls, true},
%% necessary for our ldap setup
{dn_lookup_attribute, "sAMAccountName"},
{dn_lookup_base, "OU=People,DC=domain,DC=tld"},
{dn_lookup_bind, [{"CN=rabbit,OU=System,OU=People,DC=domain,DC=tld", "rmqpassword"}]},
{port, 389},
{timeout, 30000},
{other_bind, [{"CN=rabbit,OU=System,OU=People,DC=domain,DC=tld", "rmqpassword"}]},
{log, network},
%% ACL testing
{resource_access_query,
{for, [{resource, exchange,
{for, [{permission, configure,
{ in_group, "OU=Systems,OU=People,DC=domain,DC=tld" } },
{permission, write, {constant, true}},
{permission, read, {constant, true}}
]}},
{resource, queue, {constant, true}} ]}}
]},
{rabbitmq_management,
%%{http_log_dir, "/var/log/rabbitmq/access.log"},
[{listener,
[{port, 15672},
{ip, "10.7.232.1"},
{ssl, true},
{ssl_opts,
[{certfile, "/usr/local/etc/rabbitmq/rmqs01.cer"},
{keyfile, "/usr/local/etc/rabbitmq/rmqs01.key"}
]}
]}
]}
].
=INFO REPORT==== 10-May-2016::10:17:47 ===
LDAP CHECK: login for username
=INFO REPORT==== 10-May-2016::10:17:47 ===
LDAP connecting to servers: ["dc01.domain.tld",
"dc02.domain.tld",
"dc03.domain.tld"]
=ERROR REPORT==== 10-May-2016::10:17:47 ===
webmachine error: path="/api/whoami"
{error,
{try_clause,
[{"CN=rabbit,OU=System,OU=People,DC=domain,DC=tld",
"rmqpassword"}]},
[{rabbit_auth_backend_ldap,with_ldap,3,
[{file,"rabbitmq-auth-backend-ldap/src/rabbit_auth_backend_ldap.erl"},
{line,271}]},
{rabbit_auth_backend_ldap,user_login_authentication,2,
[{file,"rabbitmq-auth-backend-ldap/src/rabbit_auth_backend_ldap.erl"},
{line,59}]},
{rabbit_access_control,try_authenticate,3,
[{file,"src/rabbit_access_control.erl"},{line,91}]},
{rabbit_access_control,'-check_user_login/2-fun-0-',4,
[{file,"src/rabbit_access_control.erl"},{line,77}]},
{lists,foldl,3,[{file,"lists.erl"},{line,1262}]},
{rabbit_mgmt_util,is_authorized,6,
[{file,"rabbitmq-management/src/rabbit_mgmt_util.erl"},{line,121}]},
{webmachine_resource,resource_call,3,
[{file,
"webmachine-wrapper/webmachine-git/src/webmachine_resource.erl"},
{line,186}]},
{webmachine_resource,do,3,
[{file,
"webmachine-wrapper/webmachine-git/src/webmachine_resource.erl"},
{line,142}]}]}
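Not an authoritative diagnosis, but note what the try_clause term in the trace contains: the bind credentials wrapped in a one-element list, [{"CN=rabbit,...", "rmqpassword"}]. The dn_lookup_bind and other_bind settings in the LDAP plugin expect either an atom (as_user, anonymous) or a bare {UserDN, Password} tuple, not a list of tuples, so one thing to try is dropping the surrounding brackets in the config above:

```erlang
%% Sketch: bind settings as bare {UserDN, Password} tuples rather than
%% one-element lists of tuples (which triggers the try_clause error).
{dn_lookup_bind, {"CN=rabbit,OU=System,OU=People,DC=domain,DC=tld", "rmqpassword"}},
%% ...
{other_bind, {"CN=rabbit,OU=System,OU=People,DC=domain,DC=tld", "rmqpassword"}},
```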