Mongodump not working well with Atlas cluster

When I run mongodump on one of my clusters in Atlas it throws the following error:
root@anuj-exportify-EXP02:/home/anuj/Exportify/DataMigration# mongodump --uri="mongodb+srv://<USERNAME>:<PASSWORD>@<CLUSTER-NAME>.mongodb.net/exportifydb"
2023-01-17T13:51:44.172+0530 writing exportifydb.schedules to
2023-01-17T13:51:44.173+0530 writing exportifydb.automated_logs to
2023-01-17T13:51:44.173+0530 writing exportifydb.automated_logs_bkp to
2023-01-17T13:51:44.173+0530 writing exportifydb.logs to
2023-01-17T13:51:44.190+0530 Failed: error writing data for collection `exportifydb.automated_logs` to disk: error reading collection: Failed to parse: { find: "automated_logs", skip: 0, snapshot: true, $db: "exportifydb" }. Unrecognized field 'snapshot'.
Can anyone guide me on where I am going wrong?
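One likely culprit, offered as a hedged guess rather than a confirmed diagnosis: the snapshot option that older mongodump builds send with their find commands was removed in newer server versions (4.0+), so a 3.x-era mongodump pointed at a current Atlas cluster fails with exactly this "Unrecognized field 'snapshot'" error. Checking and upgrading the tool looks roughly like this (placeholders as in the question):

# Show which mongodump build is installed; an old 3.x-era tool still sends
# the removed `snapshot` option to newer servers.
mongodump --version

# After installing the current MongoDB Database Tools, re-run the same dump:
mongodump --uri="mongodb+srv://<USERNAME>:<PASSWORD>@<CLUSTER-NAME>.mongodb.net/exportifydb"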

Related

Unable to Mount /opt/flink/conf to flink job manager

I tried to run an Apache Flink application on a read-only filesystem. Since Flink needs to update the conf files, I mounted /opt/flink/conf, but it gives the error below:
Failure executing: POST at:
/apis/apps/v1/namespaces/flink/deployments. Message: Deployment.apps
"" is invalid:
spec.template.spec.containers[0].volumeMounts[3].mountPath: Invalid
value: "/opt/flink/conf": must be unique. Received status:
Status(apiVersion=v1, code=422,
details=StatusDetails(causes=[StatusCause(field=spec.template.spec.containers[0].volumeMounts[3].mountPath,
message=Invalid value: "/opt/flink/conf": must be unique,
reason=FieldValueInvalid, additionalProperties={})], group=apps,
kind=Deployment, name=, retryAfterSeconds=null, uid=null,
additionalProperties={}), kind=Status, message=Deployment.apps "" is
invalid: spec.template.spec.containers[0].volumeMounts[3].mountPath:
Invalid value: "/opt/flink/conf": must be unique,
metadata=ListMeta(_continue=null, remainingItemCount=null,
resourceVersion=null, selfLink=null, additionalProperties={}),
reason=Invalid, status=Failure, additionalProperties={}).
How do I run Apache Flink as a read-only file system?
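For what it is worth, the 422 itself is a plain Kubernetes rule rather than anything Flink-specific: all volumeMounts within a single container must have unique mountPath values, and the generated Flink deployment already mounts its configuration at /opt/flink/conf, so adding your own mount at the same path collides. An illustrative (hand-written, not Flink-generated) fragment of the rejected shape:

# Pod spec fragment; the volume names here are made up for the example.
containers:
  - name: flink-main-container
    volumeMounts:
      - name: flink-config-volume   # mount already created for the Flink configuration
        mountPath: /opt/flink/conf
      - name: my-writable-conf      # second mount at the same path -> "must be unique"
        mountPath: /opt/flink/conf

A common workaround, assuming only the conf directory needs to be writable, is to mount the writable volume at a different path rather than layering it directly over /opt/flink/conf.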

kafka ConnectException: Failed to find any class that implements Connector and which name matches io.debezium.connector.sqlserver.SqlServerConnector,

Running Kafka on Windows. Getting the error below while trying to start Kafka Connect using the command:
.\bin\windows\connect-standalone.bat .\config\worker.properties .\config\connector.properties
I am using plugin.path=C:\Kafka\kafka_2.12-2.7.0\plugins\debezium-connector-sqlserver\ in the connect-standalone.properties file.
Any idea why the plugin is not recognized by Kafka Connect?
Error:
[2021-02-18 08:21:16,384] ERROR Failed to create job for .\config\connector.properties (org.apache.kafka.connect.cli.ConnectStandalone)
[2021-02-18 08:21:16,384] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.debezium.connector.sqlserver.SqlServerConnector, available connectors are: PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSinkConnector, name='org.apache.kafka.connect.file.FileStreamSinkConnector', version='2.7.0', encodedVersion=2.7.0, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSourceConnector, name='org.apache.kafka.connect.file.FileStreamSourceConnector', version='2.7.0', encodedVersion=2.7.0, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.c
You should set the plugin path as
plugin.path=C:\Kafka\kafka_2.12-2.7.0\plugins
Don't set it to the deepest file path.
On Windows, it works after modifying the plugin path in connect-standalone.properties to \\{computername}\c$\kakfa.
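In other words, plugin.path should point at the directory that contains the plugin folders; Connect then scans each subdirectory for JARs. A sketch of the layout this expects, reusing the path from the question (file names are illustrative):

# connect-standalone.properties
plugin.path=C:\Kafka\kafka_2.12-2.7.0\plugins

# Expected layout under that directory:
# C:\Kafka\kafka_2.12-2.7.0\plugins\debezium-connector-sqlserver\debezium-connector-sqlserver-<version>.jar
# C:\Kafka\kafka_2.12-2.7.0\plugins\debezium-connector-sqlserver\<other Debezium dependency JARs>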

Error in createIndexes: Failed to send "createIndexes" command with database "mydb": Failed to read 4 bytes: socket error or timeout

I have recently been migrating Python code to C using libmongoc-1.0 1.15. I am having trouble creating indexes. I am following the example here. I think it has something to do with my using MongoDB 4.2, since it changed all indexes to be built in the background by default, but I thought version 1.15.3 of libmongoc supports everything new in 4.2.
{ "createIndexes" : "mycol", "indexes" : [ { "key" : { "x" : 1, "y" : 1 }, "name" : "x_1_y_1" } ] }
{ }
Error in createIndexes: Failed to send "createIndexes" command with database "mydb": Failed to read 4 bytes: socket error or timeout
Any thoughts?
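For context, the command shown above corresponds to a libmongoc program roughly along these lines. This is a sketch rather than the poster's code; it assumes a placeholder connection URI and follows the documented pattern of issuing createIndexes as a database write command:

#include <mongoc/mongoc.h>
#include <stdio.h>

int
main (void)
{
   mongoc_init ();

   /* Placeholder URI; the deployment in the question is a replica set. */
   mongoc_client_t *client = mongoc_client_new ("mongodb://localhost:27017");
   mongoc_database_t *db = mongoc_client_get_database (client, "mydb");

   /* Build { x: 1, y: 1 } and derive the conventional index name "x_1_y_1". */
   bson_t keys;
   bson_init (&keys);
   BSON_APPEND_INT32 (&keys, "x", 1);
   BSON_APPEND_INT32 (&keys, "y", 1);
   char *index_name = mongoc_collection_keys_to_index_string (&keys);

   bson_t *cmd = BCON_NEW ("createIndexes", BCON_UTF8 ("mycol"),
                           "indexes", "[", "{",
                              "key", BCON_DOCUMENT (&keys),
                              "name", BCON_UTF8 (index_name),
                           "}", "]");

   bson_t reply;
   bson_error_t error;
   if (!mongoc_database_write_command_with_opts (db, cmd, NULL, &reply, &error)) {
      fprintf (stderr, "Error in createIndexes: %s\n", error.message);
   }

   bson_destroy (&reply);
   bson_destroy (cmd);
   bson_destroy (&keys);
   bson_free (index_name);
   mongoc_database_destroy (db);
   mongoc_client_destroy (client);
   mongoc_cleanup ();
   return 0;
}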
"failed to send "createIndexes" command with database "testdb" mongodb error"
I was having a similar issue. In our case, one of the replica sets was causing it; after fixing the affected replica set and restarting the cluster, the issue was solved.

I can not insert a row into the table using syslog-ng

destination d_pgsql {
    sql(type(pgsql)
        host("ip.of.you.host") username("logwriter")
        password("logwriterpassword") port("5432")
        database("syslog")
        table("logs_${HOST}_${R_YEAR}${R_MONTH}${R_DAY}")
        columns("datetime varchar(16)", "host varchar(32)", "program varchar(20)", "pid varchar(10)", "message varchar(800)")
        values("$R_DATE", "$HOST", "$PROGRAM", "$PID", "$MSG")
        indexes("datetime", "host", "program", "pid", "message"));
};
log { source(src); destination(d_pgsql); };
When I try to restart, syslog-ng gets the error:
[2018-11-14T15:38:57.863699] Unable to initialize database access (DBI); rc='-1', error='No such file or directory (2)'
[2018-11-14T15:38:57.863877] Error initializing message pipeline; plugin_name='sql', location='/usr/local/etc/syslog-ng.conf:49:5'
/usr/local/etc/rc.d/syslog-ng: WARNING: failed to start syslog_ng
I have already read other posts on the internet; everyone suggested checking whether libdbi is installed, and I have it. So what could it be? I am out of ideas. I am using FreeBSD.
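A hedged guess, since the message comes from the DBI layer rather than from PostgreSQL itself: the sql() destination needs both libdbi and the matching libdbi PostgreSQL driver, and "No such file or directory" is the usual symptom of the driver module not being found. On FreeBSD, something like the following shows what is actually installed (package names can differ between releases):

# List anything DBI-related that is already installed
pkg info | grep -i dbi

# Search the package repository for the libdbi driver packages
pkg search libdbi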

Solr times out when I try to rebuild index for django

I am trying to build my Solr index for Django on Ubuntu for the first time with ./manage.py rebuild_index, and I get the following error:
Removing all documents from your index because you said so.
Failed to clear Solr index: Connection to server 'http://localhost:8983/solr/update/?commit=true' timed out: HTTPConnectionPool(host='localhost', port=8983): Request timed out. (timeout=10)
All documents removed.
Indexing 4 dishess
Failed to add documents to Solr: Connection to server 'http://localhost:8983/solr/update/?commit=true' timed out: HTTPConnectionPool(host='localhost', port=8983): Request timed out. (timeout=10)
I have access to localhost:8983/solr/ and localhost:8983/solr/admin via my web browser
You can bump up the TIMEOUT in settings.py.
For example
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
        'URL': 'http://127.0.0.1:8080/solr/default',
        'INCLUDE_SPELLING': True,
        'TIMEOUT': 60 * 5,
    },
}
The important thing here is that you shouldn't increase the default timeout, because it could block all your workers, since Haystack works synchronously.
The best way to avoid this is to define multiple connections for reads and writes with different timeouts:
http://django-haystack.readthedocs.org/en/latest/settings.html#haystack-connections
and to use routers for read/write separation: http://django-haystack.readthedocs.org/en/v2.4.0/multiple_index.html#automatic-routing
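A minimal sketch of that split, where the 'write' alias, the myapp.routers module, and the timeout values are placeholders:

# settings.py: one connection for reads, one with a longer timeout for (re)indexing
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
        'URL': 'http://127.0.0.1:8983/solr/default',
        'TIMEOUT': 10,
    },
    'write': {
        'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
        'URL': 'http://127.0.0.1:8983/solr/default',
        'TIMEOUT': 60 * 5,
    },
}
HAYSTACK_ROUTERS = ['myapp.routers.ReadWriteRouter']

# myapp/routers.py: send reads to 'default' and writes to the 'write' connection
from haystack.routers import BaseRouter

class ReadWriteRouter(BaseRouter):
    def for_read(self, **hints):
        return 'default'

    def for_write(self, **hints):
        return 'write'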
