While testing a MongoDB server up/down procedure against Node/Mongoose, we found that Mongoose can sometimes open hundreds of TCP sockets (which is unnecessary and potentially blocking for a user limited to a certain number of sockets). This occurs in the following case and environment:
Node supervised with PM2 and MongoDB supervised with daemontools.
At a normal, clean startup:
$ netstat -alpet | grep mongo
tcp 0 0 *:27017 *:* LISTEN mongo 65910844 22930/mongod
tcp 0 0 localhost.localdomain:27017 localhost.localdomain:54595 ESTABLISHED mongo 65911104 22930/mongod
The last "ESTABLISHED" line repeated 5 times since the option (poolSize: 5) is specified in Mongoose ("mongo" is the user running mongod under daemontools)
We have the following Node procedure:
mongoose.connection.on('disconnected', function () {
  var options = {
    server: {
      auto_reconnect: true,
      poolSize: 5,
      socketOptions: { connectTimeoutMS: 5000 }
    }
  };
  console.log('Mongoose default connection disconnected ' + mongoose.connection.readyState);
  mongoose.connect(dbURI, options);
});
When we bring down MongoDB via daemontools (mongodbdaemon is a simple mongod run script):
svc -d /service/mongodbdaemon
there is of course no mongod running on the system (verified with netstat), and the web server pages that use Mongoose report, as expected:
{"name":"MongoError","message":"topology was destroyed"}
The problem occurs at this stage. From the moment we bring MongoDB down, Mongoose keeps accumulating the connect() calls made in the 'disconnected' event handler. The longer we wait before bringing MongoDB back up, the more TCP connections are opened.
So bringing MongoDB back up with
svc -u /service/mongodbdaemon
gives the following:
$ netstat -alpet | grep mongo | wc -l
850 'ESTABLISHED' TCP connections to mongod !
If we bring mongod down again, the hundreds of connections remain in the TIME_WAIT state until Linux cleans up the socket pool.
Questions
Can we check whether a MongoDB instance is available before connecting to it?
Can we configure Mongoose not to accumulate reconnection attempts every millisecond or so?
Is there a buffer of pending connection operations (as there is for mongoose.insert[...]) that we can access or clear manually?
Problem reproducible on CentOS 6.7 / MongoDB 3.0.6 / mongoose 4.1.8 / node 4.0.0.
Edit:
On the official mongoose site, where I posted this question after posting it here, I received an answer: with auto_reconnect: true on the initial connect() operation (which is set by default), there is no reason to call connect() again in a 'disconnected' event callback.
This is true and it works just fine, but the question now is why this happens and how to avoid it (it is serious enough at the Linux system level to be an issue in Mongoose).
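For completeness, a minimal sketch of the corrected setup (dbURI and the options are the same as above and assumed to be defined by the application): connect once and let the driver's auto_reconnect handle recovery, and only log in the 'disconnected' handler instead of calling connect() again.

var mongoose = require('mongoose');

// Connect once at startup; auto_reconnect (on by default) re-establishes
// the pool when mongod comes back, so no extra sockets accumulate.
var options = {
  server: {
    auto_reconnect: true,
    poolSize: 5,
    socketOptions: { connectTimeoutMS: 5000 }
  }
};
mongoose.connect(dbURI, options);

mongoose.connection.on('disconnected', function () {
  // Only log here -- no mongoose.connect() call, so attempts do not pile up.
  console.log('Mongoose default connection disconnected ' + mongoose.connection.readyState);
});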
Thanks!
The story behind it: I'm trying to control an Android box (with a proprietary launcher) that also manages TV channels. To enter the channel section it is not enough to type a number; a specific key must be pressed. I want to find a way to replicate that key press and then use the command from Home Assistant. I could try a Bluetooth sniffer, but only if my current attempt fails.
I ran this adb command after pressing the specific key for TV channels:
adb.exe shell dumpsys activity broadcasts history
And the last broadcast in history is this (timvision is the name of the box):
Historical Broadcast foreground #0:
BroadcastRecord{560a0f4 u0 android.intent.action.GLOBAL_BUTTON} to user 0
Intent { act=android.intent.action.GLOBAL_BUTTON flg=0x10000010 (has extras) }
targetComp: {timvision.launcher/timvision.launcher.TimVisionKeyReceiver}
extras: Bundle[{android.intent.extra.KEY_EVENT=KeyEvent { action=ACTION_UP, keyCode=KEYCODE_LAST_CHANNEL, scanCode=377, metaState=0, flags=0x8, repeatCount=0, eventTime=10092564, downTime=10092515, deviceId=27, source=0x301, displayId=-1 }}]
caller=android 3514:system/1000 pid=3514 uid=1000
enqueueClockTime=2022-04-16 14:43:54.577 dispatchClockTime=2022-04-16 14:43:54.578
dispatchTime=-3s691ms (+1ms since enq) finishTime=-3s502ms (+189ms since disp)
resultTo=null resultCode=0 resultData=null
nextReceiver=1 receiver=null
Deliver +189ms #0: (manifest)
priority=0 preferredOrder=0 match=0x0 specificIndex=-1 isDefault=false
ActivityInfo:
name=timvision.launcher.TimVisionKeyReceiver
packageName=timvision.launcher
enabled=true exported=true directBootAware=false
resizeMode=RESIZE_MODE_RESIZEABLE
Is it possible to replicate this broadcast? I tried this (with the extra key values too), but it seems it is not allowed:
adb.exe shell am broadcast -a android.intent.action.GLOBAL_BUTTON -n timvision.launcher/timvision.launcher.TimVisionKeyReceiver
Error:
java.lang.SecurityException: Permission Denial: not allowed to send broadcast android.intent.action.GLOBAL_BUTTON from pid=32487, uid=2000
Alternatives or ideas are welcome.
Thanks
This is not the answer to the question as asked, but it is the solution to my problem, so I am posting it.
The key that enters the TV channels section is listed in the official Android documentation: it is KEYCODE_LAST_CHANNEL, with code 229.
On Home Assistant the service to call is this:
service: androidtv.adb_command
data:
command: input keyevent 229
target:
entity_id: media_player.your_android_tv_entity
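If you want to confirm the keycode before going through Home Assistant, the same key event can be injected directly over adb (the device address below is a placeholder):

adb connect 192.168.1.100:5555   # placeholder IP of the box
adb shell input keyevent 229     # KEYCODE_LAST_CHANNEL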
While setting up JanusGraph, I noticed the following in the console:
09:04:12,175 INFO ReflectiveConfigOptionLoader:173 - Loaded and initialized config classes: 10 OK out of 12 attempts in PT0.023S
09:04:12,230 INFO Reflections:224 - Reflections took 28 ms to scan 1 urls, producing 2 keys and 2 values
09:04:12,291 WARN GraphDatabaseConfiguration:1445 - Local setting index.search.index-name=entity (Type: GLOBAL_OFFLINE) is overridden by globally managed value (janusgraph). Use the ManagementSystem interface instead of the local configuration to control this setting.
09:04:12,294 WARN GraphDatabaseConfiguration:1445 - Local setting index.search.backend=solr (Type: GLOBAL_OFFLINE) is overridden by globally managed value (elasticsearch). Use the ManagementSystem interface instead of the local configuration to control this setting.
09:04:12,300 INFO CassandraThriftStoreManager:628 - Closed Thrift connection pooler.
and then I see the following:
Exception in thread "main" java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
How do I stop using Elasticsearch and switch to Solr?
My properties file is as follows:
index.search.backend=solr
index.search.directory=/path/to/directory/for/solr/index/something
index.search.index-name=something
index.search.solr.mode=http
index.search.solr.http-urls=http://127.0.0.1:8983/solr
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
The answer to this is basically the same as this one for Titan. JanusGraph was forked from Titan.
You are probably trying to connect to an existing graph that was previously configured to use Elasticsearch. By default, the keyspace is named janusgraph.
1) You could connect to a different keyspace by updating conf/janusgraph-cassandra.properties
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
storage.cassandra.keyspace=mygraph
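With the keyspace pointed somewhere fresh, opening the graph from the Gremlin Console with the local properties file should then pick up the Solr settings (a sketch; the path is the stock one shipped with the distribution):

graph = JanusGraphFactory.open('conf/janusgraph-cassandra.properties')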
2) You could drop the existing keyspace. If you used bin/janusgraph.sh start from the quick start directions (which starts a single-node Cassandra and a single-node Elasticsearch), run:
bin/janusgraph.sh clean
Or if you have a standalone Cassandra installation:
$CASSANDRA_HOME/bin/cqlsh -e 'drop keyspace if exists janusgraph'
Then you would be able to connect with the default conf/janusgraph-cassandra.properties.
I've tried following the guide at http://gatling.io/docs/2.2.3/realtime_monitoring/index.html to log my test results to InfluxDB and display the data in a Grafana instance that I have previously set up. However, I can't see any of the data that Gatling is supposed to log anywhere in InfluxDB.
I've edited my influxdb.conf file so that it contains the following fields:
[[graphite]]
enabled = true
database = "gatlingdb"
bind-address = ":2003"
protocol = "tcp"
consistency-level = "one"
name-separator = "."
templates = [
"gatling.*.*.*.count measurement.simulation.request.status.field",
"gatling.*.*.*.min measurement.simulation.request.status.field",
"gatling.*.*.*.max measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles50 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles75 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles95 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles99 measurement.simulation.request.status.field"
]
and my gatling.conf file contains the following fields:
data {
writers = [console, file, graphite] # The list of DataWriters to which Gatling write simulation data (currently supported : console, file, graphite, jdbc)
console {
#light = false # When set to true, displays a light version without detailed request stats
}
graphite {
#light = false # only send the all* stats
host = "127.0.0.1" # The host where the Carbon server is located
port = 2003 # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
#bufferSize = 8192 # GraphiteDataWriter's internal data buffer size, in bytes
#writeInterval = 1 # GraphiteDataWriter's write interval, in seconds
}
Whenever I run my Gatling tests I see no error messages or anything else that indicates something is wrong, but I cannot see anything in the influxd logs that indicates data has been written to InfluxDB, nor can I see any data in the gatlingdb database. I am using InfluxDB v0.10 and Gatling v2.2.3 on Ubuntu.
Can anyone help me figure out what I am doing wrong?
I updated to InfluxDB v1.1 and the problem seemed to resolve itself.
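If it helps anyone debugging a similar setup, a quick way to confirm that data is actually flowing (assuming the influx CLI that ships with InfluxDB is on the PATH):

# Is the Graphite listener bound?
netstat -an | grep 2003

# Did any Gatling measurements land in the database?
influx -database gatlingdb -execute 'SHOW MEASUREMENTS'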
Hi everyone,
I am new to Qpid and have run into a problem: the exchange I created can't route messages to the queue, as follows.
First I create a durable queue "test-queue-1" in qpid using the qpid-config command:
qpid-config add queue test-queue-1 --durable
Next I create a durable direct exchange "test-exchange-1", also using the qpid-config command:
qpid-config add exchange direct test-exchange-1 --durable
Finally, I bind them with the following command:
qpid-config bind test-exchange-1 test-queue-1 test-queue-1
Everything looks OK in qpid-tool:
Object Summary:
ID Created Destroyed Index
========================================================================================
128 12:28:28 - org.apache.qpid.broker:queue:qmfc-v2-hb-iZ23c6sri0pZ.12680.1
129 12:28:28 - org.apache.qpid.broker:queue:qmfc-v2-iZ23c6sri0pZ.12680.1
130 12:28:28 - org.apache.qpid.broker:queue:qmfc-v2-ui-iZ23c6sri0pZ.12680.1
131 12:28:28 - org.apache.qpid.broker:queue:reply-iZ23c6sri0pZ.12680.1
132 12:24:17 - org.apache.qpid.broker:queue:test-queue-1
133 12:28:28 - org.apache.qpid.broker:queue:topic-iZ23c6sri0pZ.12680.1
116 12:27:20 -
and
org.apache.qpid.broker:binding:org.apache.qpid.broker:exchange:test-exchange-1,org.apache.qpid.broker:queue:test-queue-1,test-queue-1
Now I am ready to test them, so I start the recv/send demo programs:
[devel#iZ23c6sri0pZ build]$ ./recv amqp://127.0.0.1/test-queue-1
Send the message:
[devel#iZ23c6sri0pZ build]$ ./send -a amqp://127.0.0.1/test-exchange-1 hi,everyone
but the "recv program” can’t recv any message.
if i send message like this :
[devel#iZ23c6sri0pZ build]$ ./send -a amqp://127.0.0.1/test-queue-1 hi,everyone
the recv program can receive the message:
Address: amqp://127.0.0.1/test-queue-1
Subject: Hello Subject
Content: "hi,everyone"
Can someone tell me why? I have read the AMQP protocol; maybe the routing key in the message doesn't match the binding key, but if so, how can I set the routing key?
My recv/send programs are written with proton-c, version 0.8. qpidd is version 0.32.
When you send a message to a Qpid direct exchange, it gets routed to a bound queue based on the routing key of the message. In proton-c you can set the routing key by setting the message subject, using the function:
PN_EXTERN int pn_message_set_subject(pn_message_t *msg, const char *subject)
Unfortunately this is not used in the example send.c that ships with proton-c v0.8. You can insert the following line where the message is built, and rebuild your send executable:
pn_message_set_subject(message, "my-routing-key");
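In context it would look roughly like this (a sketch, not the verbatim example file; the routing key matches the binding key test-queue-1 created above):

/* where send.c builds the message */
pn_message_t *message = pn_message();
pn_message_set_address(message, "amqp://127.0.0.1/test-exchange-1");
/* the subject is used by the direct exchange as the routing key,
   so it must match the queue's binding key */
pn_message_set_subject(message, "test-queue-1");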
With some effort, you can also add a new command-line option to ./send that accepts and uses a routing key.
The Java example implements a -s option to set the message subject.
I too think it's a binding issue.
Try binding with the following:
qpid-config bind test-exchange-1 test-queue-1 test-exchange-1
@Feng Fang: "test-exchange-1" here is the routing key you would then use while sending a message. If that does not work, give "test-exchange-1/test-exchange-1" a try.
Keep the rest as-is and give it a try.
I hope this helps!
I am trying to build my Solr index for Django on Ubuntu for the first time with ./manage.py rebuild_index, and I get the following error:
Removing all documents from your index because you said so.
Failed to clear Solr index: Connection to server 'http://localhost:8983/solr/update/?commit=true' timed out: HTTPConnectionPool(host='localhost', port=8983): Request timed out. (timeout=10)
All documents removed.
Indexing 4 dishess
Failed to add documents to Solr: Connection to server 'http://localhost:8983/solr/update/?commit=true' timed out: HTTPConnectionPool(host='localhost', port=8983): Request timed out. (timeout=10)
I have access to localhost:8983/solr/ and localhost:8983/solr/admin via my web browser.
You can bump up the TIMEOUT in settings.py.
For example:
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
'URL': 'http://127.0.0.1:8080/solr/default',
'INCLUDE_SPELLING': True,
'TIMEOUT': 60 * 5,
},
}
The important thing here is that you shouldn't just increase the default timeout, because it could block all your workers, since Haystack works synchronously.
The best way to avoid this is to define multiple connections for reads and writes with different timeouts, and define routers to separate them (see the sketch after the links below).
http://django-haystack.readthedocs.org/en/latest/settings.html#haystack-connections
And use routers for read and write separation http://django-haystack.readthedocs.org/en/v2.4.0/multiple_index.html#automatic-routing
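A minimal sketch of that read/write split (the connection aliases, URLs, and router path are placeholders; adjust them to your project):

# settings.py -- two connections with different timeouts
HAYSTACK_CONNECTIONS = {
    'default': {  # used for reads (searches), short timeout
        'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
        'URL': 'http://127.0.0.1:8983/solr/default',
        'TIMEOUT': 10,
    },
    'slow_writes': {  # used for index writes, long timeout
        'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
        'URL': 'http://127.0.0.1:8983/solr/default',
        'TIMEOUT': 60 * 5,
    },
}
HAYSTACK_ROUTERS = ['myapp.routers.ReadWriteRouter']

# myapp/routers.py -- send reads and writes to different connections
from haystack import routers

class ReadWriteRouter(routers.BaseRouter):
    def for_read(self, **hints):
        return 'default'

    def for_write(self, **hints):
        return 'slow_writes'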