I am new to MongoDB replica sets.
Spring Boot 2.2.2
MongoDB driver 3.12.1
spring.data.mongodb.uri=mongodb://user:pass@server1:27017,server2:27017,server3:27017/DB?replicaSet=mongodb_rs
I always get the following error while fetching data from the DB:
19-02-2020 12:05:27.753 [http-nio-2053-exec-1] INFO org.mongodb.driver.cluster.info - No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@594215be from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[]}. Waiting for 30000 ms before timing out
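A minimal standalone check against the same URI (a sketch, assuming the @ separator between credentials and hosts and the same hosts/replica set name as in the properties file) would be:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class ReplicaSetCheck {
    public static void main(String[] args) {
        // Same URI as in application.properties; '@' separates the credentials from the hosts.
        String uri = "mongodb://user:pass@server1:27017,server2:27017,server3:27017/DB?replicaSet=mongodb_rs";
        try (MongoClient client = MongoClients.create(uri)) {
            // Forces server selection: this times out with the same "No server chosen"
            // message if the members are unreachable or the replicaSet name does not match.
            for (String name : client.listDatabaseNames()) {
                System.out.println(name);
            }
        }
    }
}

If this also times out, the problem is reachability or the replica set configuration rather than Spring.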
I am trying to set up a MSSQL Debezium connector with AWS MSK Connect, but I keep getting the following error messages:
Connector error log:
[Worker-0a949760f6b805d4f] [2023-02-15 19:57:56,122] WARN [src-connector-014|task-0] [Consumer clientId=dlp.compcare.ccdemo-schemahistory, groupId=dlp.compcare.ccdemo-schemahistory] Bootstrap broker b-3.stuff.morestuff.c7.kafka.us-east-1.amazonaws.com:9098 (id: -2 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1079)
This error repeats continuously for a while, and then I see this error:
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
In the cluster logs I see a corresponding error when I get the disconnect error:
[2023-02-15 20:08:21,627] INFO [SocketServer listenerType=ZK_BROKER, nodeId=3] Failed authentication with /172.32.34.126 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I have an EC2 client that I've set up to connect to my cluster, and I am able to connect and run commands against the cluster using IAM auth. I have set up a topic and produced to and consumed from it using the console producer/consumer. I've also verified that when the connector starts up it creates the __amazon_msk_connect_status_* and __amazon_msk_connect_offsets_* topics.
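For reference, a minimal sketch of the kind of IAM-authenticated check run from that EC2 client (Java AdminClient; the bootstrap endpoint below is a placeholder, and the SASL settings mirror the ones in the connector config further down) would be:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;

public class MskIamCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap endpoint; use the cluster's IAM listener (port 9098).
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "b-1.stuff.morestuff.c7.kafka.us-east-1.amazonaws.com:9098");
        // Same IAM auth settings the connector uses for its history producer/consumer.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "AWS_MSK_IAM");
        props.put("sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;");
        props.put("sasl.client.callback.handler.class",
                "software.amazon.msk.auth.iam.IAMClientCallbackHandler");

        try (AdminClient admin = AdminClient.create(props)) {
            // Listing topics fails with an SSL/SASL error if the IAM handshake is rejected.
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }
}

This assumes the aws-msk-iam-auth library is on the classpath.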
I've verified that the IP in the logs is the one assigned to my connector by checking the Elastic Network Interface it is attached to.
Also, for testing purposes, I've opened up all traffic from 0.0.0.0/0 on the security group they are running in, and I've made sure the IAM role has msk*, msk-connect*, kafka*, and s3*.
I've also verified that CDC is enabled on the RDS and that it is working properly. I see changes being picked up and added to the CDC tables.
I still believe the issue is related to IAM auth, but I am not certain.
Cluster Config:
auto.create.topics.enable=true
delete.topic.enable=true
Worker config:
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
config.providers.secretManager.class=com.github.jcustenborder.kafka.config.aws.SecretsManagerConfigProvider
config.providers=secretManager
config.providers.secretManager.param.aws.region=us-east-1
request.timeout.ms=90000
errors.log.enable=true
errors.log.include.messages=true
Connector Config:
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
tasks.max=1
database.history.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
schema.include.list=dbo
database.history.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.security.protocol=SASL_SSL
database.instance=MSSQLSERVER
topic.prefix=dlp.compcare.ccdemo
schema.history.internal.kafka.topic=dlp.compcare.ccdemo.history
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.storage.StringConverter
database.history.sasl.mechanism=AWS_MSK_IAM
database.encrypt=false
database.history.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.producer.sasl.mechanism=AWS_MSK_IAM
database.history.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.user=debezium
database.names=Intermodal_CCDEMO
database.history.producer.security.protocol=SASL_SSL
database.server.name=ccdemo_1
schema.history.internal.kafka.bootstrap.servers=b-1:9098
database.port=1433
database.hostname=my-mssql-rds.rds.amazonaws.com
database.history.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.password=${secretManager:dlp-compcare:dbpassword}
table.include.list=dbo.EquipmentSetup
database.history.security.protocol=SASL_SSL
database.history.consumer.sasl.mechanism=AWS_MSK_IAM
I was able to follow this same process with a Postgres RDS with no issues.
I've tried everything I can think of, so any and all help would be greatly appreciated!
I also referenced the following when setting up the cluster/connector:
https://catalog.workshops.aws/msk-labs/en-US/mskconnect/source-connector-setup
https://thedataguy.in/debezium-with-aws-msk-iam-authentication/
https://debezium.io/documentation/reference/stable/connectors/sqlserver.html#sqlserver-connector-properties
Streaming MSSQL CDC to AWS MSK with Debezium
https://docs.aws.amazon.com/msk/latest/developerguide/mkc-debeziumsource-connector-example.html
I am trying to set up compression for the messages transferred between a MongoDB Atlas cluster and a Spring application. I followed this link: https://mongodb.github.io/mongo-java-driver/3.6/driver/tutorials/compression/. My connection string in Spring is:
data:
  mongodb:
    uri: mongodb+srv://userName:password@xxxxxxx.mongodb.net/test?compressors=zlib&retryWrites=true&w=majority
    database: mydb
It does not make the server compress the results it transfers. It looks like I missed some server-side configuration for zlib compression. What should I do to enable this? I searched through the MongoDB Atlas management pages but could not find anywhere to set it up.
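For what it's worth, the client-side half of this can also be set programmatically instead of in the URI. A minimal sketch with the Java driver settings API (3.7+), using placeholder credentials and host:

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.MongoCompressor;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

import java.util.Collections;

public class CompressionCheck {
    public static void main(String[] args) {
        // Placeholder credentials/host; compressors can be requested here instead of via the URI.
        ConnectionString cs = new ConnectionString(
                "mongodb+srv://userName:password@xxxxxxx.mongodb.net/test?retryWrites=true&w=majority");
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(cs)
                .compressorList(Collections.singletonList(MongoCompressor.createZlibCompressor()))
                .build();
        try (MongoClient client = MongoClients.create(settings)) {
            for (String name : client.listDatabaseNames()) {
                System.out.println(name);
            }
        }
    }
}

As far as I can tell, compression is negotiated during the connection handshake, so the client has to request it and the server has to support it.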
I created a collection (say test_collection) on a SolrCloud instance using the Solr admin tool with a custom configset.
Write operations on this collection are carried out via the following endpoint:
https://servername:port/solr/test_collection/update?commit=true
This endpoint is used by a middleware (Mulesoft) to write records to Solr in batches of 50.
However, every time a write is performed, all records get loaded into Solr except for one batch of 50 records, which fails with the following exception:
"errortype": "HTTP:CONNECTIVITY",
"errormessage": "HTTP POST on resource 'https://servername:port/solr/test_collection/update' failed:
Remotely closed."
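To isolate whether this is Mulesoft or Solr, the same batched POST can be reproduced with a small standalone client. A minimal sketch (Java 11 HttpClient, hypothetical host/port, and a two-document payload standing in for the 50-record batch):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class SolrBatchPost {
    public static void main(String[] args) throws Exception {
        // Two placeholder documents standing in for the real 50-record batch.
        String batch = "[{\"id\":\"1\",\"name\":\"doc one\"},{\"id\":\"2\",\"name\":\"doc two\"}]";

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                // Hypothetical host/port; same update handler the middleware calls.
                .uri(URI.create("https://servername:8983/solr/test_collection/update?commit=true"))
                .timeout(Duration.ofSeconds(60))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(batch))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}

If the same payload reproduces the dropped connection here, the "Remotely closed" error points at the Solr side rather than at Mulesoft.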
I am a bit new to PostgreSQL. I have set up a PostgreSQL DB on Azure Cloud.
It's an Ubuntu 18.04 LTS machine (4 vCPU, 8 GB RAM) running PostgreSQL 9.6.
The problem is that when the connection to the PostgreSQL DB stays idle for some time, say 2 to 10 minutes, the connection stops responding: it doesn't fulfill the request and the query just keeps processing.
The same goes for my Java Spring Boot application: the connection doesn't respond and the query keeps processing.
This happens randomly, so the timing is not traceable; sometimes it happens after 2 minutes, sometimes after 10 minutes, and sometimes not at all.
I have tried the PostgreSQL configuration file parameters:
tcp_keepalives_idle, tcp_keepalives_interval, tcp_keepalives_count.
I also tried the statement_timeout & session_timeout parameters, but nothing changed.
Any suggestions or help would be appreciated.
Thank you.
If you are setting up a PostgreSQL DB connection on an Azure VM, you have to be aware that there are inbound and outbound connection timeouts. According to
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#idletimeout, outbound connections have a 4-minute idle timeout, and this timeout is not adjustable. For the inbound timeout there is an option to change it in the Azure Portal.
We ran into a similar issue and were able to resolve it on the client side. We changed the Spring Boot default Hikari configuration as follows, keeping connection lifetimes well below the 4-minute Azure idle timeout:
hikari:
  connection-timeout: 20000
  validation-timeout: 20000
  idle-timeout: 30000
  max-lifetime: 40000
  minimum-idle: 1
  maximum-pool-size: 3
  connection-test-query: SELECT 1
  connection-init-sql: SELECT 1
However, there have been inconsistencies in the ci_session table in the installed MySQL server. Some IP addresses are inserted repeatedly, yet when we log in from our own systems, those IP addresses are not logged/inserted into the ci_session table. Does this mean that only certain IP addresses are inserted into the ci_session table?
Can you provide guidance on this?
PHP version: 7.2.15
MySQL version: 5.0.12
CodeIgniter version: 3.1.3
Session library: default system session library
Server: AWS