Sometimes when we drop a database from MongoDB, not all the data is removed from the local database if replication is enabled. I wanted to know if it is safe to drop the local database.
By dropping the local database you "de-initialize" the Replica Set, i.e. afterwards you need to run rs.initiate() to get a running Replica Set.
However, you may drop the local database only when your node is running in Maintenance Mode!
The local database on replica set or sharded cluster members contains metadata for the replication process, but it is not replicated itself. If you check the local database content you will see that the main consumer is the oplog.rs collection, which by default is sized at 5% of free disk space, so on a big partition the oplog capped collection will occupy more space. The good news is that since version 3.6 you can resize the oplog manually with the command:
db.adminCommand({replSetResizeOplog: 1, size: 990})
which limits the oplog collection to 990 MB (990 MB is the minimum allowed oplog size).
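For example, you can check the current oplog size from the mongo shell before deciding whether to resize it (a quick sketch; the numbers will of course depend on your deployment):
use local
db.oplog.rs.stats().maxSize   // configured maximum oplog size, in bytes
rs.printReplicationInfo()     // configured size, used space and the time window the oplog covers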
Dropping the local database is not generally recommended.
In your case it looks like you have a 400 GB partition and MongoDB automatically capped oplog.rs at 20 GB.
If you try to drop the database while replica set mode is active, you will get an error:
rs1:PRIMARY> use local
switched to db local
rs1:PRIMARY> db.runCommand( { dropDatabase: 1 } )
{
"operationTime" : Timestamp(1643481374, 1),
"ok" : 0,
"errmsg" : "Cannot drop 'local' database while replication is active",
"code" : 20,
"codeName" : "IllegalOperation",
"$clusterTime" : {
"clusterTime" : Timestamp(1643481374, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs1:PRIMARY>
Trying to drop only the oplog.rs collection is also not possible while replication is active:
rs1:PRIMARY> db.oplog.rs.drop()
uncaught exception: Error: drop failed: {
"ok" : 0,
"errmsg" : "can't drop live oplog while replicating",
"$clusterTime" : {
"clusterTime" : Timestamp(1643482576, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1643482576, 1)
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DBCollection.prototype.drop@src/mongo/shell/collection.js:713:15
@(shell):1:1
rs1:PRIMARY>
So if you still want to drop it, you will need to restart the member as standalone (without replication active) to be able to do so; a rough outline of the procedure follows.
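This is only a sketch, assuming the member is started from the command line; the dbpath, ports and replica set name are placeholders for your own values:
# 1. shut the member down cleanly
mongod --shutdown --dbpath /var/lib/mongodb
# 2. restart it WITHOUT the --replSet option (standalone), ideally on a different port
mongod --dbpath /var/lib/mongodb --port 37017
# 3. connect and drop the local database
mongo --port 37017 --eval 'db.getSiblingDB("local").dropDatabase()'
# 4. shut it down again and restart with replication enabled
mongod --shutdown --dbpath /var/lib/mongodb
mongod --dbpath /var/lib/mongodb --replSet rs1 --port 27017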
Following is the content of a typical local database (v4.4 in this example):
> use local
switched to db local
> show collections
oplog.rs
replset.election
replset.initialSyncId
replset.minvalid
replset.oplogTruncateAfterPoint
startup_log
system.replset
system.rollback.id
>
and this is how you can drop it once the member is running standalone:
> use local
switched to db local
> db.runCommand( { dropDatabase: 1 } )
{ "dropped" : "local", "ok" : 1 }
>
Bear in mind that after dropping the database all local replication info will be lost. If the member was a SECONDARY before restarting in standalone mode there will be no issues, since after restarting in replication mode the member will get its configuration from the PRIMARY and the local database will be recreated with all its collections.
If the member was the PRIMARY and no other seeding members are available, the replication info is gone and you will need to rs.initiate() the replica set once again.
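A minimal sketch of re-initiating in that case (the replica set name matches the prompts above, but the host name and port are illustrative):
rs.initiate(
  {
    _id: "rs1",
    members: [
      { _id: 0, host: "mongo-node1.example.com:27017" }
    ]
  }
)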
Related
Has anyone played with the Confluent MSSQL CDC Connector (https://docs.confluent.io/current/connect/kafka-connect-cdc-mssql/index.html)?
I tried setting up this connector, downloading the jar and setting up the config files as mentioned in the docs. Running it does not actually throw any error, but it is NOT able to fetch any changes from the SQL Server. Below is my config:
{
"name" : "mssql_cdc_test",
"connector.class" : "io.confluent.connect.cdc.mssql.MsSqlSourceConnector",
"tasks.max" : "1",
"initial.database" : "DBASandbox",
"username" : "xxx",
"password" : "xxx",
"server.name" : "rptdevdb01111.homeaway.live",
"server.port" : "1433",
"change.tracking.tables" : "dbo.emp"
}
This is the message I am getting in the logs (at INFO level):
INFO Source task WorkerSourceTask{id=mssql_cdc_test-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:143)
Strangely, even if I change server.name to some junk value, it doesn't complain and there are no errors. So it is probably NOT even trying to hit my SQL Server.
I did also enable change tracking on the database as well as on the specified table:
ALTER DATABASE DBASandbox
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
ALTER DATABASE DBASandbox
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER TABLE dbo.emp
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)
Not sure what's wrong or how to debug it further. Any clue or insight would be helpful.
I am setting up an Apache Cassandra cluster and I want to segregate certain data to only certain datacenters. I know I can limit where the data is stored via replication factor, but that is not enough.
I have the keyspaces DC1DATA, DC2DATA, ALL, and I want my DC1 data to be
A) stored in DC1 - solved via replication factor
B) inaccessible from DC2 (like you cannot run a select statement even as admin user)
And I want both datacenters to have access to the "ALL" keyspace.
Can I do that somehow?
This is what I am doing to set up the keyspaces (the example has 1 node per datacenter, 2 nodes in total):
CREATE KEYSPACE dc1data
WITH REPLICATION = {
'class' : 'NetworkTopologyStrategy',
'dc1' : 1
} ;
CREATE KEYSPACE dc2data
WITH REPLICATION = {
'class' : 'NetworkTopologyStrategy',
'dc2' : 1
} ;
CREATE KEYSPACE all
WITH REPLICATION = {
'class' : 'NetworkTopologyStrategy',
'dc1' : 1,
'dc2' : 1
} ;
but I can still connect to any node in DC1 and do
cqlsh> use dc2data;
cqlsh:dc2data> create table if not exists test (
name text,
lastname text,
primary key ((lastname),name)
);
cqlsh:dc2data> insert into test (name, lastname) values ('Homer','Simpson');
cqlsh:dc2data> select * from test;
lastname | name
----------+----------
Simpson | Homer
That is what I want to avoid: seeing the dc2data keyspace from dc1, at all. Is that possible? Even to admin users?
A really simple question; I guess it is a bug, or something I got wrong.
I have a database in Azure on the Standard:S0 tier, currently 178 MB, and I want to make a copy (in a procedure run on master) but with the resulting database on the Basic pricing tier.
I thought of:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( EDITION = 'Basic')
With an unhappy result:
the database is created on the Standard:S0 pricing tier.
Then tried:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( SERVICE_OBJECTIVE = 'Basic' )
or
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic' )
With an unhappier result:
ERROR:: Msg 40808, Level 16, State 1, The edition 'Standard' does not support the service objective 'Basic'.
tried also:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( MAXSIZE = 500 MB, EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic' )
with the unhappiest result:
ERROR:: Msg 102, Level 15, State 1, Incorrect syntax near 'MAXSIZE'.
Am I trying to do something that is not allowed?
But if you copy your database via the portal, you'll notice that the Basic tier is not available, with the message 'A database can only be copied within the same tier as the original database.' The behavior is documented here: 'You can select the same server or a different server, its service tier and performance level, a different performance level within the same service tier (edition). After the copy is complete, the copy becomes a fully functional, independent database. At this point, you can upgrade or downgrade it to any edition. The logins, users, and permissions can be managed independently.'
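In other words, a workaround sketch (using the database names from the question) is to copy within the same tier and then downgrade the copy afterwards:
-- the copy stays on the source tier (Standard:S0 here)
CREATE DATABASE MyDB_2 AS COPY OF MyDB;
-- once the copy has completed, downgrade it independently
ALTER DATABASE MyDB_2 MODIFY (EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic', MAXSIZE = 500 MB);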
I am trying to install OIF - Oracle Identity Federation as per the OBE http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/oif/11g/r1/oif_install/oif_install.htm
I have installed Oracle 11gR2 (11.2.0.3) with charset = AL32UTF8, a db_block_size of 8K and nls_length_semantics=CHAR, and created the database and listener needed.
Installed WebLogic 10.3.6.
Started the installation of OIM - Oracle Identity Management, choosing the install-and-configure option and the schema creation options.
The installation goes fine, but it fails during configuration. Below is the relevant part of the logs.
I have tried multiple times, only to fail again and again. If someone could kindly shed some light on what is going wrong here, I'd appreciate it. Please let me know if you need more info on the setup...
_File : ...//oraInventory/logs/install2013-05-30_01-18-31AM.out_
ORA-01450: maximum key length (6398) exceeded
Percent Complete: 62
Repository Creation Utility: Create - Completion Summary
Database details:
Host Name : vccg-rh1.earth.com
Port : 1521
Service Name : OIAMDB
Connected As : sys
Prefix for (non-prefixable) Schema Owners : DEFAULT_PREFIX
RCU Logfile : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/rcu.log
RCU Checkpoint Object : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/RCUCheckpointObj
Component schemas created:
Component Status Logfile
Oracle Internet Directory Failed /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/oid.log
Repository Creation Utility - Create : Operation Completed
Repository Creation Utility - Dropping and Cleanup of the failed components
Repository Dropping and Cleanup of the failed components in progress.
Percent Complete: 93
Percent Complete: -117
Percent Complete: 100
RCUUtil createOIDRepository status = 2
-------------------------------------------------
java.lang.Exception: RCU OID Schema Creation Failed
at oracle.as.idm.install.config.IdMDirectoryServicesManager.doExecute(IdMDirectoryServicesManager.java:792)
at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:375)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:96)
at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:186)
at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
at java.lang.Thread.run(Thread.java:662)
_File : ...///fmw/Oracle_IDM1_IDP33/rcu/log/oid.log_
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
*
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded
Update:
I looked at the logs again and tracked which SQL statements were leading to the above error…
CREATE BIGFILE TABLESPACE "OLTS_CT_STORE" EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO DATAFILE '/data/OIAM/installed_apps/db/oradata/OIAMDB/gcats1_oid.dbf' SIZE 32M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED;
CREATE TABLE ct_dn (
EntryID NUMBER NOT NULL,
RDN varchar2(1024) NOT NULL,
ParentDN varchar2(1024) NOT NULL)
ENABLE ROW MOVEMENT
TABLESPACE OLTS_CT_STORE MONITORING;
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
TABLESPACE OLTS_CT_STORE
PARALLEL COMPUTE STATISTICS;
I ran these statements from SQL*Plus and was able to create the index without issues, and as per the tablespace creation statement, autoextend is on. But when RCU (the Repository Creation Utility) runs to create the needed schemas, it fails with the same error as before. Any pointers?
Setting NLS_LENGTH_SEMANTICS=BYTE worked. With CHAR semantics and the AL32UTF8 character set, each VARCHAR2(1024) column can occupy up to 4096 bytes, so the two-column index key can exceed the 6398-byte limit of an 8K block; with BYTE semantics the key fits.
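For reference, a sketch of switching the parameter at the instance level before re-running RCU (requires the appropriate privileges; new sessions pick up the change):
ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = BYTE SCOPE = BOTH;
-- verify the current value (SQL*Plus)
SHOW PARAMETER NLS_LENGTH_SEMANTICS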
I created a database and a dbml in Visual Studio 2010 using its wizards. Everything was working fine until I checked the tables' data (also in the Visual Studio Server Explorer) and none of my updates were there.
using (var context = new CenasDataContext())
{
context.Log = Console.Out;
context.Cenas.InsertOnSubmit(new Cena() { id = 1});
context.SubmitChanges();
}
This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID.
INSERT INTO [dbo].Cenas VALUES (@p0)
-- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1
This is the log from the execution (the context log printed to the console).
The problem I'm having is that these updates are not persisted in the database. I mean that when I query my database (Visual Studio Server Explorer -> New Query) I see the table is empty, every time.
I am using a SQL Server database file (.mdf).
EDIT (1): Immediate Window result
context.GetChangeSet()
{Inserts: 1, Deletes: 0, Updates: 0}
Deletes: Count = 0
Inserts: Count = 1
Updates: Count = 0
context.GetChangeSet().Inserts
Count = 1
[0]: {DBTest.Cena}
If you construct a DataContext without arguments, it will retrieve its connection string from your App.Config or Web.Config file. Open the one that applies, and verify that it points to the same database.
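If it does not, you can also pass a connection string to the DataContext explicitly to rule out config resolution; a minimal sketch, assuming a local SQL Server Express instance attaching the .mdf (the instance name and file name are illustrative, not taken from your project):
// connection string is only an example; adjust the instance name and .mdf path
string connectionString =
    @"Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Cenas.mdf;Integrated Security=True;User Instance=True";

using (var context = new CenasDataContext(connectionString))
{
    context.Log = Console.Out;                            // keep logging the generated SQL
    context.Cenas.InsertOnSubmit(new Cena() { id = 2 });  // hypothetical second row
    context.SubmitChanges();                              // the row should now be visible in that exact .mdf
}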
Put a breakpoint on context.SubmitChanges(); and in your immediate window in VS, do:
context.GetChangeSet();
There is an Inserts property and it should have one record. That will help you tell whether it is queuing up the insert.
HTH.