I am trying to set up an MSSQL Debezium connector with AWS MSK Connect but keep getting the following error messages:
Connector error log:
[Worker-0a949760f6b805d4f] [2023-02-15 19:57:56,122] WARN [src-connector-014|task-0] [Consumer clientId=dlp.compcare.ccdemo-schemahistory, groupId=dlp.compcare.ccdemo-schemahistory] Bootstrap broker b-3.stuff.morestuff.c7.kafka.us-east-1.amazonaws.com:9098 (id: -2 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1079)
This warning repeats continuously for a while, and then I see this error:
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
In the cluster logs I see a corresponding error when I get the disconnect error:
[2023-02-15 20:08:21,627] INFO [SocketServer listenerType=ZK_BROKER, nodeId=3] Failed authentication with /172.32.34.126 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I have an EC2 client that I've set up to connect to my cluster, and I am able to connect and run commands against the cluster using IAM auth. I have created a topic and produced to and consumed from it using the console producer/consumer. I've also verified that when the connector starts up it creates the __amazon_msk_connect_status_* and __amazon_msk_connect_offsets_* topics.
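For reference, the client configuration I used on the EC2 instance for that test is the standard MSK IAM client setup (nothing custom; broker address copied from the log above):
# client.properties
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
# produce to a test topic
bin/kafka-console-producer.sh --bootstrap-server b-3.stuff.morestuff.c7.kafka.us-east-1.amazonaws.com:9098 --producer.config client.properties --topic test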
I've verified that ip in the logs is the ip assigned to my connector by checking the Elastic Network Interface it was attached to.
Also, for testing purposes, I've opened up all traffic from 0.0.0.0/0 on the security group they are running in, and made sure the IAM role allows msk*, msk-connect*, kafka*, and s3* actions.
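For what it's worth, the data-plane actions that MSK IAM auth checks are in the kafka-cluster namespace, so a trimmed sketch of the kind of statement I mean (resource ARNs left as a wildcard here) would be:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:*Topic*",
        "kafka-cluster:WriteData",
        "kafka-cluster:ReadData",
        "kafka-cluster:AlterGroup",
        "kafka-cluster:DescribeGroup"
      ],
      "Resource": "*"
    }
  ]
}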
I've also verified CDC is enabled on the RDS instance and that it is working properly. I can see changes being picked up and added to the CDC tables.
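These are the checks I ran to verify that (a sketch; database and table names are the ones from my config below):
-- Confirm CDC is enabled at the database level
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'Intermodal_CCDEMO';
-- Confirm a capture instance exists for the table
USE Intermodal_CCDEMO;
SELECT * FROM cdc.change_tables; -- expect a row for dbo_EquipmentSetup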
I still believe the issue is related to IAM auth, but I'm not certain.
Cluster Config:
auto.create.topics.enable=true
delete.topic.enable=true
worker config:
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
config.providers.secretManager.class=com.github.jcustenborder.kafka.config.aws.SecretsManagerConfigProvider
config.providers=secretManager
config.providers.secretManager.param.aws.region=us-east-1
request.timeout.ms=90000
errors.log.enable=true
errors.log.include.messages=true
Connector Config:
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
tasks.max=1
database.history.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
schema.include.list=dbo
database.history.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.security.protocol=SASL_SSL
database.instance=MSSQLSERVER
topic.prefix=dlp.compcare.ccdemo
schema.history.internal.kafka.topic=dlp.compcare.ccdemo.history
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.storage.StringConverter
database.history.sasl.mechanism=AWS_MSK_IAM
database.encrypt=false
database.history.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.producer.sasl.mechanism=AWS_MSK_IAM
database.history.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.user=debezium
database.names=Intermodal_CCDEMO
database.history.producer.security.protocol=SASL_SSL
database.server.name=ccdemo_1
schema.history.internal.kafka.bootstrap.servers=b-1:9098
database.port=1433
database.hostname=my-mssql-rds.rds.amazonaws.com
database.history.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.password=${secretManager:dlp-compcare:dbpassword}
table.include.list=dbo.EquipmentSetup
database.history.security.protocol=SASL_SSL
database.history.consumer.sasl.mechanism=AWS_MSK_IAM
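One thing I'm unsure about: the config above mixes the Debezium 2.x schema.history.internal.* names (for the topic and bootstrap servers) with the older database.history.* names (for the SASL/IAM settings). If the 2.x pass-through prefixes apply, I assume the history client's IAM settings would instead look like this (untested sketch):
schema.history.internal.producer.security.protocol=SASL_SSL
schema.history.internal.producer.sasl.mechanism=AWS_MSK_IAM
schema.history.internal.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
schema.history.internal.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
schema.history.internal.consumer.security.protocol=SASL_SSL
schema.history.internal.consumer.sasl.mechanism=AWS_MSK_IAM
schema.history.internal.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
schema.history.internal.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler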
I was able to follow this same process with a Postgres RDS instance with no issues.
I've tried everything I can think of, so any and all help would be greatly appreciated!
I also referenced the following when setting up the cluster/connector:
https://catalog.workshops.aws/msk-labs/en-US/mskconnect/source-connector-setup
https://thedataguy.in/debezium-with-aws-msk-iam-authentication/
https://debezium.io/documentation/reference/stable/connectors/sqlserver.html#sqlserver-connector-properties
Streaming MSSQL CDC to AWS MSK with Debezium
https://docs.aws.amazon.com/msk/latest/developerguide/mkc-debeziumsource-connector-example.html
I have deployed a .NET Core Azure Function App running on the consumption pricing plan which connects, through EF Core, to an MS SQL database hosted by my website provider.
I see the following error reported by App Insights when the database connection is attempted:
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related
or instance-specific error occurred while establishing a connection to
SQL Server. The server was not found or was not accessible. Verify
that the instance name is correct and that SQL Server is configured to
allow remote connections. (provider: TCP Provider, error: 0 - A
connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed
because connected host has failed to respond.)
System.ComponentModel.Win32Exception (10060): A connection attempt
failed because the connected party did not properly respond after a
period of time, or established connection failed because connected
host has failed to respond.
...
Error Number:10060,State:0,Class:20
I followed the instructions here to obtain the function app's outboundIpAddresses (using Azure Resource Explorer which I also double checked with the Azure CLI).
I passed the list of IPs to the support team at my hosting provider, yet I still receive the same error.
I know it's not code related as when I run my function app locally, I can connect fine (my local IP is on the SQL Server allow list).
Why can the Azure function not connect to the database?
This is a small home project, so I can't afford the virtual network / NAT gateway route.
running on the consumption pricing plan
Outbound IP addresses reported by functions running on the Consumption plan are not reliable.
As per Azure documentation:
When a function app that runs on the Consumption plan or the Premium plan is scaled, a new range of outbound IP addresses may be assigned. When running on either of these plans, you can't rely on the reported outbound IP addresses to create a definitive allowlist. To be able to include all potential outbound addresses used during dynamic scaling, you'll need to add the entire data center to your allowlist.
Instead, provide your hosting provider with the outbound IP addresses for the Azure region (data center) where your Azure function is hosted, to cover all possible IPs that your Azure function may be assigned.
The official Azure IP ranges for all regions are in a JSON file available for download here.
Firstly, download this file.
Secondly, search for AzureCloud.[region name] e.g. AzureCloud.uknorth or AzureCloud.centralfrance which will bring up the IP addresses for Azure cloud in your specific region.
{
"name": "AzureCloud.uknorth",
"id": "AzureCloud.uknorth",
"properties": {
"changeNumber": 11,
"region": "uknorth",
"regionId": 29,
"platform": "Azure",
"systemService": "",
"addressPrefixes": [
"13.87.120.0/22",
...
"2603:1027:1:1c0::/59"
],
"networkFeatures": []
}
}
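If you'd rather extract the list programmatically than search by hand, a quick sketch with jq (assuming the downloaded file is named ServiceTags_Public.json; substitute your region's tag):
jq '.values[] | select(.name == "AzureCloud.uknorth") | .properties.addressPrefixes[]' ServiceTags_Public.json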
Finally, provide your hosting provider with all the IP addresses listed in the fragment.
Your Azure function should then be able to consistently connect to your database.
N.B. The list can and will change over time, albeit more by addition than alteration - currently, the last modified date is 26th April 2022, as stated in the details section on the download page.
If anything breaks, be sure to check the page for updates; to guarantee no possible outages, upgrade your pricing plan.
Extra thoughts...
As you mention this is for a small project, I'm not sure what Azure pricing is like but I'd host the same project on AWS.
For the function itself, AWS Lambda's free tier includes 1M free requests per month (like Azure) & 400,000 GB-seconds of compute time per month which should be plenty.
For the connectivity, you'd need a VPC (free), an Internet Gateway (free + negligible data transfer costs), an Elastic IP address (free) and a managed NAT gateway (roughly $1 a day depending on region).
Oh - and you'd get the added benefit of just having 1 Elastic IP address to provide to your hosting provider, which would always stay the same regardless of what 'pricing plan' you're on.
I'd also take a look at AWS as a potential option for future projects, but that's out of scope :)
I'm a newbie with Kafka, currently learning to stream data changes from MSSQL to Amazon MSK using the Debezium connector.
I already have an MS SQL Server with CDC enabled, and an MSK cluster that I can connect to, create topics on, and produce and consume data with manually through an EC2 client. Now I'm setting up MSK Connect with the Debezium SQL Server connector as a custom plugin. Here is my MSK connector configuration:
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
tasks.max=1
database.hostname=xxx
database.port=xxx
database.user=xxx
database.password=xxx
database.dbname=dbName
database.server.name=serverName
table.include.list=dbo.tableName
database.history.kafka.bootstrap.servers=xxx
database.history.kafka.topic=xxx
But my MSK connector keeps returning the status Failed. I have searched Google, but it seems there is no instruction or guide related to my setup.
That makes me wonder whether my solution is even possible. Could someone please shed some light and point me in the right direction?
Edit: some logs I got from CloudWatch:
ERROR [AdminClient clientId=adminclient-1] Connection to node -2 () failed authentication due to: []: Access denied (org.apache.kafka.clients.NetworkClient:771)
INFO App info kafka.admin.client for adminclient-1 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager:235)
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: [4f91d358-fb7b-4f3b-8930-1b4aefce6d0b]: Access denied
[Worker-08134a52fe88cdc49] MSK Connect encountered errors and failed.
Many thanks,
If you are using IAM role-based auth for your MSK cluster, your bootstrap server port will be 9098.
Along with all the other properties, you also have to send these properties in your MSK Connect config:
database.history.consumer.security.protocol=SASL_SSL
database.history.consumer.sasl.mechanism=AWS_MSK_IAM
database.history.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.producer.security.protocol=SASL_SSL
database.history.producer.sasl.mechanism=AWS_MSK_IAM
database.history.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
Refer: https://aws.amazon.com/blogs/aws/introducing-amazon-msk-connect-stream-data-to-and-from-your-apache-kafka-clusters-using-managed-connectors/
I have a BizTalk server and a SQL server which BizTalk sends messages to via WCF-SQL. The BizTalk server has been calling this server for over a year with no problems. I came in this morning and suddenly it can't (it was working on Friday).
The full error I'm getting when calling the WCF-SQL endpoint is:
A message sent to adapter "WCF-SQL" on send port "MyPort" with URI "mssql://mySQLServer" is suspended.
Error details: System.Transactions.TransactionManagerCommunicationException: Communication with the underlying transaction manager has failed. ---> System.Runtime.InteropServices.COMException:
The MSDTC transaction manager was unable to push the transaction to the destination transaction manager due to communication problems.
Possible causes are: a firewall is present and it doesn't have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers. (Exception from HRESULT: 0x8004D02A)
at System.Transactions.Oletx.ITransactionShim.Export(UInt32 whereaboutsSize, Byte[] whereabouts, Int32& cookieIndex, UInt32& cookieSize, CoTaskMemHandle& cookieBuffer)
at System.Transactions.TransactionInterop.GetExportCookie(Transaction transaction, Byte[] whereabouts)
I've followed instructions from the following thread:
MSDTC on server 'server' is unavailable
I've run msdtc -uninstall then msdtc -install and restarted the service several times.
I've rebooted the server several times.
I can connect to the database using SQL Server Management Studio.
Running DTCPing from the SQL server to the BizTalk server results in (while DTCPing is running on the BizTalk side):
Problem:fail to invoke remote RPC method
Error(0x6BA) at dtcping.cpp #303
-->RPC pinging exception
-->1722(The RPC server is unavailable.)
RPC test failed
When going from BizTalk to SQL I get this (even though DTCPing is running on the other end):
Please refer to following log file for details:
C:\Temp\DTCPing\myserv.log
Invoking RPC method on dbaditest
RPC test is successful
++++++++++++RPC test completed+++++++++++++++
Please start PING from dbaditest to complete the test
Neither server is running a firewall at all.
I'm all out of things to try.
Edit: I can confirm that other servers/computers can connect to the SQL server. So I have to assume that it's the BizTalk server that is the problem.
Edit 2: I tried connecting from the BizTalk server to another SQL server on the network and got the same error. I'm moments away from throwing my hands up and rebuilding my dev environment -- ugh :(
Edit 3: I can telnet to port 135 from BizTalk to SQL Server, so there's nothing blocking it.
Edit 4: DTCTester results in:
tablename= #dtc24449
Creating Temp Table for Testing: #dtc24449
Warning: No Columns in Result Set From Executing: 'create table #dtc24449 (ival int)'
Initializing DTC
Beginning DTC Transaction
Enlisting Connection in Transaction
Error:
SQLSTATE=25S12,Native error=-2147168242,msg='[Microsoft][ODBC SQL Server Driver]Distributed transaction error'
Error:
SQLSTATE=24000,Native error=0,msg=[Microsoft][ODBC SQL Server Driver]Invalid cursor state
Typical Errors in DTC Output When
a. Firewall Has Ports Closed
-OR-
b. Bad WINS/DNS entries
-OR-
c. Misconfigured network
-OR-
d. Misconfigured SQL Server machine that has multiple netcards.
Aborting DTC Transaction
Releasing DTC Interface Pointers
Successfully Released pTransaction Pointer.
You've already taken some steps here, but carefully go through the MSDN Article on Troubleshooting MSDTC.
I'd be concerned that someone imaged another server off of yours, but uninstalling and reinstalling MSDTC should have fixed that. It might be worth checking on these registry values as well (from the above link):
Windows enhances security by requiring authenticated calls to the RPC interface. This functionality is configurable through the EnableAuthEpResolution and RestrictRemoteClients registry keys. To ensure that remote computers are able to access the RPC interface, follow these steps:
Click Start, click Run, type regedit.exe, and then click OK to start Registry Editor.
Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT
Under the RPC key, create the following DWORD entries with the indicated values. If the RPC key does not exist then it must be created.
DWORD entry             Default value   Recommended value
EnableAuthEpResolution  0 (disabled)    1
RestrictRemoteClients   1 (enabled)     0
Close Registry Editor.
Restart the MSDTC Service.
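If it's easier to script, a rough equivalent from an elevated command prompt would be (a sketch of the same steps, using the values from the table above):
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\RPC" /v EnableAuthEpResolution /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\RPC" /v RestrictRemoteClients /t REG_DWORD /d 0 /f
net stop msdtc && net start msdtc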
Is your BizTalk/SQL computer name unique? (no conflicts with other machines)
Can you make a DTC connection to another SQL server from your BizTalk server? I would suggest using DTCTester to test the DTC connection instead of DTCPing; a usage sketch follows.
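Usage is roughly as follows (the DSN, user, and password are placeholders; DTCTester needs an ODBC DSN pointing at the target SQL server):
rem Run from a command prompt on the BizTalk server
dtctester.exe MyTestDsn myUser myPassword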
Not sure if this will help but thought I'd mention it.
From BOTH servers:
Start -> Admin Tools -> Component Services
Expand Component Services -> Computers -> My Computer -> Distributed Transaction Coordinator and right-click Local DTC. Go to the Security tab and check over the settings there.
Enable Network DTC Access
Allow Remote Clients
Allow Inbound/Outbound as required
Select correct authentication
Enable XA Transactions as required
The MSDTC service should restart automatically. Could these settings have changed since Friday? I have had this happen before for reasons unknown.
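To compare the two machines quickly, the same settings can also be read straight from the registry (a sketch; this is the standard MSDTC security key, with values such as NetworkDtcAccess, NetworkDtcAccessInbound, NetworkDtcAccessOutbound, and XaTransactions):
reg query HKLM\SOFTWARE\Microsoft\MSDTC\Security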
Wow, I finally figured it out. As most people said, it MUST be some kind of network issue (and I didn't disagree). The kicker was that DTC was allowed from my PC to SQL, but not from the VM running on my PC. What it ended up being was that we were pushed to install Symantec Endpoint Protection just last week (right before I left for the weekend).
I uninstalled it and it's all working now.
I followed the example on this website to implement a GCM server using CCS. However, the code throws an exception when it tries to connect to the GCM server (the last line in the code below):
import javax.net.ssl.SSLSocketFactory;
import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.ConnectionConfiguration.SecurityMode;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;

// GCM_SERVER is "gcm.googleapis.com", GCM_PORT is 5235 (see the error below)
ConnectionConfiguration config = new ConnectionConfiguration(GCM_SERVER, GCM_PORT);
config.setSecurityMode(SecurityMode.enabled); // CCS requires TLS
config.setReconnectionAllowed(true);
config.setRosterLoadedAtLogin(false); // CCS has no roster
config.setSendPresence(false); // CCS does not use presence
config.setSocketFactory(SSLSocketFactory.getDefault());
connection = new XMPPTCPConnection(config);
connection.connect(); // the exception is thrown here
I looked online and someone said I needed to enable billing for my app on App Engine in order to use the GCM server. I did so, but it still does not work. I keep seeing the following error:
gcm.googleapis.com:5235 Exception: Permission denied: Attempt to
access a blocked recipient without permission. (mapped-IPv4)
Am I missing something?
I've recently installed the BizTalk 2013 HL7 adapter on my development machine. During setup, it asks for the logging account, which I provided; it was successfully added and the installation finished without a hitch.
However, when I try to submit a message to the receive port configured to use the HL7 pipeline, I always receive the same errors.
First, there is an "Information" event log entry stating:
Login failed for user 'my-BizTalk-HOST-account'. Reason: Failed to open
the explicitly specified database. [CLIENT: 1.2.3.4]
Then immediately after there is:
There was a failure executing the receive pipeline: "BTAHL72XPipelines.BTAHL72XReceivePipeline, BTAHL72XPipelines, Version=1.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Source: "BTAHL7 2.X Disassembler" Receive Port: "my-receive-port-name" URI: "0.0.0.0:11001" Reason: Unable to configure the logstore.
If I look at the Details tab of the event, the Binary Data (in bytes) shows the name of my server, followed by master.
A few points to consider:
we do not have SQL Server logging enabled in the HL7 configuration tool (just the event log)
my-BizTalk-HOST-account is not the account that is configured for HL7 logging anyway, so why is it being used?
I'm not sure why it's trying to access the master database (if that is indeed what the event log is telling me)
SQL logins/users for my-BizTalk-HOST-account are set up in the BizTalk databases with proper permissions (see the check sketched after this list)
sending to any other receive location behaves fine; it's just those using the BTAHL72XReceivePipeline
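Since the first event mentions an explicitly specified database that failed to open, here is the kind of check I ran on the login's default database (a sketch; the LIKE pattern is a placeholder for the actual host account name):
-- Does the host account's login point at a valid default database?
SELECT name, default_database_name
FROM sys.server_principals
WHERE name LIKE '%BizTalk-HOST%';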
Can anyone explain this or have a fix?