MongoDB hidden node still receiving connections

I'm not sure if this question has been asked before, or whether the following behavior of MongoDB is normal; searching online turned up no results for this scenario.
Initially, we had a 3-node deployment: 1 Primary, 1 Secondary, and 1 Arbiter.
We wanted to add a ReadOnly replica to the cluster and remove the Arbiter node in the process. We added the following settings to the new node:
priority: 0
hidden: true
votes: 1
We removed the Arbiter in the same reconfiguration so that we always have 3 voting members, which leaves us with 1 Primary, 1 Secondary, and 1 ReadOnly node (roughly as sketched below).
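For reference, this is roughly how the reconfiguration was done (a pymongo sketch; the host addresses and the member _id are placeholders for our real values):

from pymongo import MongoClient

# Connect to the current primary (placeholder address).
client = MongoClient("mongodb://192.168.1.50:27017")

# Fetch the current replica set configuration.
cfg = client.admin.command("replSetGetConfig")["config"]

# Drop the arbiter from the members list.
cfg["members"] = [m for m in cfg["members"] if not m.get("arbiterOnly")]

# Add the new hidden, priority-0, but still voting member.
cfg["members"].append({
    "_id": 3,  # placeholder _id, must be unique within the set
    "host": "192.168.1.52:27017",
    "priority": 0,
    "hidden": True,
    "votes": 1,
})

# Bump the config version and apply both changes in a single reconfiguration.
cfg["version"] += 1
client.admin.command("replSetReconfig", cfg)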
The whole process went through smoothly; however, we still see connections being made to the ReadOnly replica.
But when checking via db.currentOp(), no queries show up.
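For completeness, this is roughly how I checked, connecting straight to the hidden member (a pymongo sketch; the address is a placeholder, and I'm assuming a MongoDB 3.6-style currentOp command, which is what db.currentOp() wraps):

from bson.son import SON
from pymongo import MongoClient

# Connect directly to the hidden member only (placeholder address, no replicaSet option).
client = MongoClient("mongodb://192.168.1.52:27017")

# Equivalent of db.currentOp(true): include idle connections, not just active operations.
ops = client.admin.command(SON([("currentOp", 1), ("$all", True)]))
for op in ops["inprog"]:
    print(op.get("client"), op.get("appName"), op.get("op"))

# Connection counters as the hidden member itself sees them.
print(client.admin.command("serverStatus")["connections"])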
Based on the documentation on the MongoDB website:
Hidden members are part of a replica set but cannot become primary and are invisible to client applications.
Is there a way to investigate why these connections are coming in, and is this normal behavior?
EDIT: (for further clarification)
Assuming the following:
MongoDB A (Primary): 192.168.1.50
MongoDB B (Secondary): 192.168.1.51
MongoDB C (Hidden): 192.168.1.52
Client A: 192.168.1.60
Client B: 192.168.1.61
In the logs, we see the following:
2018-03-12T07:19:11.607+0000 I ACCESS [conn119719] Successfully authenticated as principal SOMEUSER on SOMEDB
2018-03-12T07:19:11.607+0000 I NETWORK [conn119719] end connection 192.168.1.60 (2 connections now open)
2018-03-12T07:19:17.087+0000 I NETWORK [listener] connection accepted from 192.168.1.60:47806 #119720 (3 connections now open)
2018-03-12T07:19:17.371+0000 I ACCESS [conn119720] Successfully authenticated as principal SOMEUSER on SOMEDB
So if the other MongoDB instances were connecting, that would be fine; my question is why the clients are still able to connect even though the hidden option is true, and whether that behavior is normal.
Thank You

Related

MongoDB Primary shard cluster down

I just wanted to understand a scenario where the primary shard's cluster goes down.
I have a MongoDB setup where I have 4 shards, each running as a replica set:
shard-1 == Server 1 (Primary), shard-1 - Server 2 (Secondary), shard-1 - Server 3 (Secondary)
shard-2 == Server 4 (Primary), shard-2 - Server 5 (Secondary), shard-2 - Server 6 (Secondary)
shard-3 == Server 7 (Primary), shard-3 - Server 8 (Secondary), shard-3 - Server 9 (Secondary)
I have a single database and a single collection, so I assume it is distributed across all 3 shards as chunks and the balancer is doing its job, right?
In such a case, if shard-1 (the whole cluster) goes down, will traffic be served normally or will it be hampered?
I guess you are mixing up sharding and replication.
What do you mean by "if shard-1 (cluster) goes down"? That means you lose 3 servers at once! Is this a probable situation you would like to investigate? If so, I would say the design of your cluster is poor.
Sharding needs to be enabled at the database level and at the collection level (see the sketch below). Based on the information you provided, no answer can be given.
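For example, enabling it at both levels usually looks like the sketch below (pymongo against a mongos; the database, collection, and shard key names are placeholders, not taken from your setup):

from pymongo import MongoClient

# Connect to a mongos router (placeholder address).
client = MongoClient("mongodb://mongos-host:27017")

# Enable sharding at the database level ...
client.admin.command("enableSharding", "MCA")

# ... and shard the collection itself on a (placeholder) hashed key.
client.admin.command("shardCollection", "MCA.mycollection", key={"myfield": "hashed"})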
Well, I was just describing a scenario where a complete cluster might go down. Let's say I have a cluster in a single DC and that DC goes down, so I might be in a situation where the complete cluster is unavailable, while the other 2 clusters are in different DCs and so are up and running fine.
OK, let's try another way: if the primary replica set member of the primary shard goes down, an election will be held, a new primary will be announced, and everything will be back to normal, right?
But that only works if the cluster has other available nodes; what if the complete cluster goes down?
What is the use of the other 2 sharded clusters in that case?
Or do I have the wrong understanding of sharding?
Also, yes, I have enabled sharding on both my DB and the collection.
OK, to explain why I came at it this way, let me tell you my actual problem here: I have 4 shards, all running as replica sets.
When I check sh.status(), I see the output below:
autosplit:
        Currently enabled: yes
balancer:
        Currently enabled: yes
        Currently running: yes
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                7641 : Failed with error 'aborted', from MCA2 to MCA4
databases:
        { "_id" : "MCA", "primary" : "MCA2", "partitioned" : true, "version" : { "uuid" : UUID("xxxxxxxxxxxx"), "lastMod" : 1 } }
                xxxxxxx
                        shard key: { "xxxxx" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                MCA     1658
                                MCA2    1692
                                MCA3    1675
                                MCA4    1670
So my simple question is: if this primary shard MCA2 goes down, what will happen? Will the collection (xxxxxxx) become inaccessible to the application, or what?
Also, as per the terminology, each shard is a 3-node replica set, so any one node can go down and another becomes primary to serve traffic; so as long as any node in my primary shard is alive, it can serve traffic to the application, right?
If yes, then let's say the complete replica set of the primary shard MCA2 is down; what now?
If no, then what will happen?
(I changed the actual values of the collection and shard key to xxxxxxx for security reasons.)
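For reference, this is roughly how I double-check the chunk distribution behind sh.status() (a pymongo sketch run against a mongos; the connection string is a placeholder, and I'm assuming the config.databases/config.chunks layout of my MongoDB version):

from pymongo import MongoClient

# Connect to a mongos router (placeholder address).
client = MongoClient("mongodb://mongos-host:27017")

# Which shard is the primary shard for the database (matches "primary" : "MCA2" above).
print(client.config.databases.find_one({"_id": "MCA"}))

# Chunk counts per shard (matches the chunks section of sh.status() above).
pipeline = [{"$group": {"_id": "$shard", "chunks": {"$sum": 1}}}]
for row in client.config.chunks.aggregate(pipeline):
    print(row["_id"], row["chunks"])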

Operational Error: An Existing connection was forcibly closed by the remote host. (10054)

I am getting this OperationalError periodically, probably when the application has been inactive or idle for long hours. On refreshing the page it vanishes. I am using an MSSQL pyodbc connection string ("mssql+pyodbc:///?odbc_connect=...") in the FormHandlers and DbAuth of Gramex.
How can I keep the connection alive in Gramex?
Add pool_pre_ping and pool_recycle parameters.
pool_pre_ping will normally emit SQL equivalent to “SELECT 1” each time a connection is checked out from the pool; if an error is raised that is detected as a “disconnect” situation, the connection will be immediately recycled. Read more
pool_recycle prevents the pool from using a particular connection that has passed a certain age. Read more
e.g.:
engine = create_engine(connection_string, encoding='utf-8', pool_pre_ping=True, pool_recycle=3600)
Alternatively, you can add these parameters for FormHandler in gramex.yaml. This is required only for the first FormHandler with the connection string.
kwargs:
  url: ...
  table: ...
  pool_pre_ping: True
  pool_recycle: 60

WSO2 Message Broker Error while adding Queue - Invalid Object Name

I have just set up a WSO2 Message Broker 3.0.0 connecting to a SQL Server DB.
The DB for the Carbon MB component has been created successfully as well.
The DB for the Message Broker Data store is created and contains the table MB_QUEUE_MAPPING.
However when adding a Queue via the MB UI I see the following error in the stack trace:
[2015-12-16 15:00:41,472] ERROR {org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl} - Error occurred while retrieving destination queue id for destination queue TestQ
java.sql.SQLException: Invalid object name 'MB_QUEUE_MAPPING'.
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:671)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:505)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:1029)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.getQueueID(RDBMSMessageStoreImpl.java:1324)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.getCachedQueueID(RDBMSMessageStoreImpl.java:1298)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.addQueue(RDBMSMessageStoreImpl.java:1634)
at org.wso2.andes.store.FailureObservingMessageStore.addQueue(FailureObservingMessageStore.java:445)
at org.wso2.andes.kernel.AMQPConstructStore.addQueue(AMQPConstructStore.java:116)
at org.wso2.andes.kernel.AndesContextInformationManager.createQueue(AndesContextInformationManager.java:154)
at org.wso2.andes.kernel.disruptor.inbound.InboundQueueEvent.updateState(InboundQueueEvent.java:151)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:167)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:67)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:41)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
The "Add Queue" screen does not go away however the Queue does get added to the MB_QUEUE table just fine in the DB. Both tables MB_QUEUE_MAPPING & MB_QUEUE_COUNTER are blank.
The "List Queues" screen does blank despite a number of Queues in the MB_QUEUE table. Stack trace also shows errors but is not included as its not relevant to the error above.
I can create a Topic just fine however.
I want to know why MB would say the table MB_QUEUE_MAPPING is an Invalid object name when the table clearly exists ?
I suspect the way you have configured the MySQL database is incorrect, so you could try one of the two scenarios below to confirm this:
1) Start the server for the first time with the -Dsetup parameter, or
2) Refer to the documentation "Configuring MySQL" (https://docs.wso2.com/display/MB300/Configuring+MySQL) and follow the step-by-step instructions in order.
I tried out the second scenario and did not get any exception when adding a queue. The document I mentioned will have to be updated as below;
you can see this command in step 3:
mysql -u <db_user_name> -p -D<database_name> < '<WSO2MB_HOME>/dbscripts/mb-store/mysql-mb.sql ';
db_user_name - the username of the database user.
database_name - the name of the database you created in step 1.
WSO2MB_HOME - the home directory path of MB.
Hope this helps you resolve the issue.
It seems the user connecting to the MSSQL database does not have the correct permissions, most probably the SELECT permission. The reason I say this is that when you add a queue, it does get added, which means the user has INSERT permission. Once the queue is added, the page redirects to the Queue List page, and the user must have SELECT permission to retrieve the queue list. Topics are not added to the database; they are kept in the registry. You can verify which user connects to MSSQL from the configuration below in wso2mb-3.0.0/repository/conf/datasources/master-datasources.xml.
<datasource>
   <name>WSO2_MB_STORE_DB</name>
   <jndiConfig>
       <name>WSO2MBStoreDB</name>
   </jndiConfig>
   <definition type="RDBMS">
         <configuration>
                    <url>jdbc:jtds:sqlserver://localhost:1433/wso2_mb</url>
                    <username>sa</username>
                    <password>sa</password>
                    <driverClassName>net.sourceforge.jtds.jdbc.Driver</driverClassName>
                    <maxActive>200</maxActive>
                    <maxWait>60000</maxWait>
                    <minIdle>5</minIdle>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
         </configuration>
     </definition>
</datasource>
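As a quick way to test that theory outside WSO2, you could connect with the same credentials as in the datasource above and run a SELECT against the table yourself; a rough Python/pyodbc sketch (the driver name and the server/database/credentials are placeholders mirroring the config):

import pyodbc

# Same server, database and credentials as in master-datasources.xml (placeholders).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost,1433;DATABASE=wso2_mb;UID=sa;PWD=sa"
)

cursor = conn.cursor()
# If this raises "Invalid object name", the user's default schema/database or its
# SELECT permission on the table is the problem, not the Message Broker itself.
cursor.execute("SELECT COUNT(*) FROM MB_QUEUE_MAPPING")
print(cursor.fetchone()[0])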

TeamCity LDAP Synchronization does not create new users

I've configured LDAP for TeamCity. The first synchronization trial failed. According to teamcity-ldap.log, all users were found but none were created:
[2015-01-30 08:04:53,077] INFO - jetbrains.buildServer.LDAP - User ... (remote ID: 'CN=...,OU=Users,OU=...,DC=...,DC=...') should be created, but automatic user creation is disabled.
I set teamcity.options.createUsers to true but no users were created.
[2015-01-30 08:13:26,375] INFO - jetbrains.buildServer.LDAP - Found 224 search results for search base='OU=Users,OU=....', filter='(objectClass=User)', scope=2, attributes=[mail, sAMAccountName, displayName]
[2015-01-30 08:13:26,375] INFO - jetbrains.buildServer.LDAP - Last synchronization statistics: created users=0, updated users=0, deleted users=0, remote users=224, matched users=2, created groups=0, updated groups=0, deleted groups=0, remote groups=0, matched groups=0, duration=250ms, errors=[]
What do I have to change so that the users are created?
Thanks
The option:
teamcity.options.users.synchronize.createUsers=true
was not set.
From JetBrains:
Note: it is not recommended to use the teamcity.options.users.synchronize.createUsers=true option, because it may be removed in future versions of TeamCity.
As of now, TeamCity can automatically create users if they are found in one of the mapped LDAP groups and group synchronization is turned on via the teamcity.options.groups.synchronize option. So please configure group synchronization.

RID master role Active Directory

When setting up a dev environment, we copied an Active Directory DC.
The original setup was 3 AD servers in a forest, but I seized all the master roles to the copied machine.
When running dcdiag it tells me this about the RID master role:
Starting test: RidManager
* Available RID Pool for the Domain is 15103 to 1073741823
* SRVDS004.xxx.local is the RID Master
* DsBind with RID Master was successful
* rIDAllocationPool is 14603 to 15102
* rIDPreviousAllocationPool is 14603 to 15102
* rIDNextRID: 15102
* Warning :Next rid pool not allocated
* Warning :There is less than 0% available RIDs in the current pool
......................... SRVDS004 passed test RidManager
It seems that the next RID pool is never allocated. I checked for DNS problems but they all seemed OK. I also removed all references to the other 2 DCs.
It has never worked since we copied it, but we only noticed now because we ran out of SIDs: there are no RIDs available in the pool and our batch of 500 is empty.
Is there a way to check whether that master role is really functioning, rather than this output, which says it is running but doesn't explain why the problem occurs?
These links might help you:
AD Internals: Display RID Allocation Pools
AD Internals: Reset RID Allocation Pool
The first link is a blog post I wrote that explains how the RID mechanism works in Active Directory. The second link is another blog post of mine that explains how to reset the RID Allocation Pool and includes a PowerShell script to do so.
The RID master is responsible for distributing non-overlapping blocks of 500 RIDs among the DCs of the domain. This is done to eliminate RID conflicts between different objects created on different DCs at the same time.
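To make the numbers in the dcdiag output above concrete, here is a tiny sketch of the arithmetic (the "remaining" figure is my reading of how dcdiag derives its percentage warning):

# Values taken from the dcdiag output in the question.
pool_start, pool_end = 14603, 15102
next_rid = 15102

pool_size = pool_end - pool_start + 1   # 500, one standard RID block
remaining = pool_end - next_rid         # 0, the current pool is exhausted

print(pool_size, remaining)             # and no next pool has been allocated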
