Confluent MSSQL CDC Connector not fetching changes - sql-server

Has anyone played with the Confluent MSSQL CDC Connector (https://docs.confluent.io/current/connect/kafka-connect-cdc-mssql/index.html)?
I tried setting up this connector, downloading the jar and setting up the config files as described in the docs. Running it does not throw any error, but it is NOT able to fetch any changes from SQL Server. Below is my config:
{
"name" : "mssql_cdc_test",
"connector.class" : "io.confluent.connect.cdc.mssql.MsSqlSourceConnector",
"tasks.max" : "1",
"initial.database" : "DBASandbox",
"username" : "xxx",
"password" : "xxx",
"server.name" : "rptdevdb01111.homeaway.live",
"server.port" : "1433",
"change.tracking.tables" : "dbo.emp"
}
This is the message I am getting in the logs (at INFO level):
INFO Source task WorkerSourceTask{id=mssql_cdc_test-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:143)
What's strange is that even if I change server.name to some junk value, it doesn't complain and throws no errors. So it's probably NOT even trying to hit my SQL Server.
I also enabled change tracking on the database as well as on the specified table:
ALTER DATABASE DBASandbox
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
ALTER DATABASE DBASandbox
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER TABLE dbo.emp
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)
Not sure what's wrong or how to debug it further. Any clue or insight would be helpful.
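One thing worth verifying on the SQL Server side (a sketch of diagnostic queries, to be run inside DBASandbox) is whether change tracking is actually reported as active and whether any change data exists for dbo.emp:
-- Is change tracking enabled at the database level?
SELECT DB_NAME(database_id) AS db_name, retention_period, retention_period_units_desc
FROM sys.change_tracking_databases;
-- Is the table enrolled in change tracking?
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id) AS table_name,
       is_track_columns_updated_on
FROM sys.change_tracking_tables;
-- Are any changes recorded for dbo.emp since version 0?
SELECT COUNT(*) AS pending_changes
FROM CHANGETABLE(CHANGES dbo.emp, 0) AS ct;
If these return the expected rows, the problem is more likely on the connector side than in the database configuration.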

Related

Snowflake Query returns Http status: Unprocessable Entity

I'm able to successfully connect to the Snowflake database through my .NET app, but I'm unable to run a SQL command due to the following error from Snowflake:
Message: Http status: UnprocessableEntity
ResponseContent:
"code" : "391920",
"message" : "Unable to run the command. You must specify the warehouse to use by either setting the warehouse field in the body of the request or by setting the DEFAULT_NAMESPACE property for the current user.",
"sqlState" : "57P03",
"statementHandle" : "01a8
Here is the code I'm using:
public async Task<QueryResult> QuerySnowflake(string statement, string database, string schema)
{
    var content = new
    {
        statement,
        database,
        schema
    };
    return await _httpClient.SnowflakePost<QueryResult>($"https://{_accountId}.snowflakecomputing.com/api/v2/statements", content, await GetHeaders(), _cancellationToken);
}
statement = SELECT * FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER
database = SNOWFLAKE_SAMPLE_DATA
schema = TPCH_SF1
I have already tried the following:
ALTER USER my_username SET DEFAULT_NAMESPACE = SNOWFLAKE_SAMPLE_DATA.TPCH_SF1
GRANT SELECT ON ALL TABLES IN SCHEMA "TPCH_SF1" TO ROLE sysadmin
ALTER USER my_username SET DEFAULT_ROLE = sysadmin
None of these changed the error response.
I don't think it needs a code change, as the same code works with other Snowflake accounts (I'm using a new trial account). I believe something is wrong with my account (e.g. a missing role, missing warehouse, missing permission, etc.).
Any help would be very much appreciated.
The user does not have a default warehouse, and none is specified in the connection request or via a USE command in the session. You can try sending this command before running your SELECT:
use warehouse MY_WAREHOUSE;
You can also specify it in the connection, or specify a default for the user:
ALTER USER MY_USER SET DEFAULT_WAREHOUSE = MY_WAREHOUSE;
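If the trial account does not have any warehouse yet, one option (a sketch; MY_WAREHOUSE is a placeholder name) is to create one and grant the user's role access to it before setting it as the default:
-- Create a small warehouse if none exists and allow the role used by the app to run on it
CREATE WAREHOUSE IF NOT EXISTS MY_WAREHOUSE
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;
GRANT USAGE ON WAREHOUSE MY_WAREHOUSE TO ROLE SYSADMIN;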

Is it safe to drop the local database in mongodb?

Sometimes when we drop a database from MongoDB, not all the data is removed from the local database if replication is enabled. I wanted to know if it is safe to drop the local database.
By dropping the local database you "de-initialize" the Replica Set, i.e. afterwards you need to run rs.initiate() to get a running Replica Set.
However, you may drop the local database only when your node is running in Maintenance Mode!
The local database in replica set or sharded cluster members contains metadata for the replication process, but it is not replicated itself. If you check the local database content, you will see the main consumer is the oplog.rs collection, which by default occupies 5% of your partition, so if you have a big partition the oplog capped collection will occupy more space. The good news is that since version 3.6 you can resize the oplog manually with the command:
db.adminCommand({replSetResizeOplog: 1, size: 990})
which limits the oplog collection to 990MB (990MB is the minimum allowed size of oplog.rs).
Dropping the local database is not generally recommended.
In your case it looks like you have a 400GB partition and MongoDB automatically capped oplog.rs at 20GB.
If you try to drop the database while replica set mode is active, you will get an error:
rs1:PRIMARY> use local
switched to db local
rs1:PRIMARY> db.runCommand( { dropDatabase: 1 } )
{
"operationTime" : Timestamp(1643481374, 1),
"ok" : 0,
"errmsg" : "Cannot drop 'local' database while replication is active",
"code" : 20,
"codeName" : "IllegalOperation",
"$clusterTime" : {
"clusterTime" : Timestamp(1643481374, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs1:PRIMARY>
If you try dropping only the oplog.rs collection, that is also not possible while replication is active:
rs1:PRIMARY> db.oplog.rs.drop()
uncaught exception: Error: drop failed: {
"ok" : 0,
"errmsg" : "can't drop live oplog while replicating",
"$clusterTime" : {
"clusterTime" : Timestamp(1643482576, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1643482576, 1)
} :
_getErrorWithCode#src/mongo/shell/utils.js:25:13
DBCollection.prototype.drop#src/mongo/shell/collection.js:713:15
#(shell):1:1
rs1:PRIMARY>
So if you still want to drop it, you will need to restart the member as a standalone (without replication active) to be able to do so.
Following is the content of a typical local database (v4.4 in this example):
> use local
switched to db local
> show collections
oplog.rs
replset.election
replset.initialSyncId
replset.minvalid
replset.oplogTruncateAfterPoint
startup_log
system.replset
system.rollback.id
>
and this is how you can drop it:
> use local
switched to db local
> db.runCommand( { dropDatabase: 1 } )
{ "dropped" : "local", "ok" : 1 }
>
Bear in mind that after dropping the local database all local replication info will be lost. If the member was a SECONDARY before being restarted in standalone mode, there will be no issues: after restarting in replication mode, the member will get its configuration from the PRIMARY, so the local database will be recreated with all its collections.
If the member was the PRIMARY and no other seeding members are available, the replication info will be lost and you will need to rs.initiate() the replica set once again.

Cannot connect Polybase to PostgreSQL

Trying to set up a connection to a PostgreSQL server with MSSQL Polybase. Today we use a Linked Server to pull data from the Postgres databases into MSSQL and it works fine. But there is some functionality in Polybase that would solve some program issues regarding joining etc, and therefore Polybase is the solution. As long as it works! ;-)
But I don't get it to work. And I can't find any real help with Google.
This is the code:
> CREATE DATABASE SCOPED CREDENTIAL PG_EXAMPLE WITH IDENTITY = 'pgUSER', Secret = 'verylongpassword';
> CREATE EXTERNAL DATA SOURCE PG_EXAMPLE_DATA
> WITH ( LOCATION = 'odbc://PG_SERVERNAME:5432',
> CONNECTION_OPTIONS = 'Driver={PostgreSQL Unicode(x64)}',
> PUSHDOWN = ON,
> CREDENTIAL = PG_EXAMPLE);
Trying to create an external table:
> CREATE EXTERNAL TABLE databas(
> namn [nvarchar](255) NULL,
> datorid [nvarchar](255) NULL
> ) WITH (
> LOCATION='exampel_databas_on_PGserver',
> DATA_SOURCE=PG_EXAMPLE_DATA
> );
ERROR MESSAGE
> Msg 105082, Level 16, State 1, Line 10
> 105082;Generic ODBC error: Error while executing the query .
Can anybody shed some light here on what I'm doing wrong? Has somebody perhaps tried this and got it to work?
Any help and suggestion is very much appreciated.
Thanks!!!
Check that LOCATION is the name of the table; it may be case sensitive, and you can't specify the database or the schema. Try connecting with the "postgres" user if you can. Also try specifying the IP instead of the hostname.
I've used the ANSI driver, try with it as well: {PostgreSQL ANSI(x64)}.
As a last resort, check the driver version; I tested with psqlodbc_12_02_0000-x64.zip.
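Putting those suggestions together, a sketch of a revised definition might look like the following (the IP address, the ANSI driver, and the table name pg_table_name are placeholder assumptions, not values from the original post):
CREATE EXTERNAL DATA SOURCE PG_EXAMPLE_DATA
WITH (
    LOCATION = 'odbc://192.0.2.10:5432',               -- IP of the PostgreSQL server instead of the hostname
    CONNECTION_OPTIONS = 'Driver={PostgreSQL ANSI(x64)}',
    PUSHDOWN = ON,
    CREDENTIAL = PG_EXAMPLE
);
CREATE EXTERNAL TABLE databas (
    namn    nvarchar(255) NULL,
    datorid nvarchar(255) NULL
) WITH (
    LOCATION = 'pg_table_name',    -- exact, case-sensitive PostgreSQL table name; no database or schema prefix
    DATA_SOURCE = PG_EXAMPLE_DATA
);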

Kafka Connect CDC to MS SQL sourceOffset exception

We are using the Confluent MS SQL CDC connector, and the connector configuration is:
curl -X POST -H \
"Content-Type: application/json" --data '{
"name" : "yury-mssql-cdc1",
"config" : {
"connector.class" : "io.confluent.connect.cdc.mssql.MsSqlSourceConnector",
"tasks.max" : "1",
"initial.database" : "test2",
"username" : "user",
"password" : "pass",
"server.name" : "some-server.eu-west-1.rds.amazonaws.com",
"server.port" : "1433",
"change.tracking.tables" : "dbo.foobar"
}
}' \
http://ip-10-0-0-24.eu-west-1.compute.internal:8083/connectors
The whole infrastructure is deployed on AWS... and the exception is:
ERROR Exception thrown while querying for ChangeKey
{databaseName=test2, schemaName=dbo, tableName=foobar}
(io.confluent.connect.cdc.mssql.QueryService:94)
java.lang.NullPointerException: sourceOffset cannot be null.
Any help would be greatly appreciated.
I found the answer. I think the problem is the way SQL Server CDC is configured. We should not use the old way of enabling CDC (EXEC sys.sp_cdc_enable_db and EXEC sys.sp_cdc_enable_table).
Instead, use the following commands to configure SQL Server change tracking:
ALTER DATABASE [db name] SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
GO
ALTER DATABASE [db name] SET ALLOW_SNAPSHOT_ISOLATION ON
GO
ALTER TABLE [table name] ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON)
GO
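If the database had previously been configured with the older CDC procedures, a sketch of removing that configuration before enabling change tracking (schema and table names taken from the question) would be:
-- Remove the old-style CDC configuration before switching to change tracking
EXEC sys.sp_cdc_disable_table
    @source_schema = N'dbo',
    @source_name = N'foobar',
    @capture_instance = N'all';
GO
EXEC sys.sp_cdc_disable_db;
GO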

Oracle Identity Federation - RCU OID Schema Creation Failure

I am trying to install OIF (Oracle Identity Federation) as per the OBE http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/oif/11g/r1/oif_install/oif_install.htm
I have installed Oracle 11gR2 (11.2.0.3) with charset = AL32UTF8, a db_block_size of 8K and nls_length_semantics=CHAR, and created the database and listener needed.
Installed WebLogic 10.3.6.
Started installation of OIM (Oracle Identity Management), choosing the install-and-configure option and the schema creation options.
Installation goes fine, but during configuration it fails. Below is the relevant part of the logs.
I have tried multiple times, only to fail again and again. If someone can kindly shed some light on what is going wrong here, I would appreciate it. Please let me know if you need more info on the setup...
_File : ...//oraInventory/logs/install2013-05-30_01-18-31AM.out_
ORA-01450: maximum key length (6398) exceeded
Percent Complete: 62
Repository Creation Utility: Create - Completion Summary
Database details:
Host Name : vccg-rh1.earth.com
Port : 1521
Service Name : OIAMDB
Connected As : sys
Prefix for (non-prefixable) Schema Owners : DEFAULT_PREFIX
RCU Logfile : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/rcu.log
RCU Checkpoint Object : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/RCUCheckpointObj
Component schemas created:
Component Status Logfile
Oracle Internet Directory Failed /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/oid.log
Repository Creation Utility - Create : Operation Completed
Repository Creation Utility - Dropping and Cleanup of the failed components
Repository Dropping and Cleanup of the failed components in progress.
Percent Complete: 93
Percent Complete: -117
Percent Complete: 100
RCUUtil createOIDRepository status = 2------------------------------------------------- java.lang.Exception: RCU OID Schema Creation Failed
at oracle.as.idm.install.config.IdMDirectoryServicesManager.doExecute(IdMDirectoryServicesManager.java:792)
at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:375)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:96)
at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:186)
at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
at java.lang.Thread.run(Thread.java:662)
_File : ...///fmw/Oracle_IDM1_IDP33/rcu/log/oid.log_
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
*
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded
Update:
I looked at the logs again and tracked which SQL statements were leading to the above error…
CREATE BIGFILE TABLESPACE "OLTS_CT_STORE" EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO DATAFILE '/data/OIAM/installed_apps/db/oradata/OIAMDB/gcats1_oid.dbf' SIZE 32M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED;
CREATE TABLE ct_dn (
EntryID NUMBER NOT NULL,
RDN varchar2(1024) NOT NULL,
ParentDN varchar2(1024) NOT NULL)
ENABLE ROW MOVEMENT
TABLESPACE OLTS_CT_STORE MONITORING;
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
TABLESPACE OLTS_CT_STORE
PARALLEL COMPUTE STATISTICS;
I ran these statements from SQL*Plus and was able to create the index without issues, and as per the tablespace creation statement, autoextend is on. But when RCU (the Repository Creation Utility) runs to create the needed schemas, it fails with the same error as earlier. Any pointers?
Setting NLS_LENGTH_SEMANTICS=BYTE worked
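That fits the error: with nls_length_semantics=CHAR and the AL32UTF8 character set, each character in the two varchar2(1024) columns can occupy up to 4 bytes, so the composite index key on (parentdn, rdn) can reach 2 × 1024 × 4 = 8192 bytes, exceeding the roughly 6398-byte key limit for an 8K block; with BYTE semantics the key is at most 2048 bytes. A sketch of applying the change before re-running RCU (assuming the instance uses an spfile and can be restarted):
-- From SQL*Plus as SYSDBA: set BYTE semantics, then restart so RCU sees the new value
ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = 'BYTE' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP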
