Which logging pattern is better in a TIBCO BWCE / OpenShift ecosystem - ELK?

We are planning to migrate to TIBCO BWCE on Red Hat OpenShift; can you suggest which of the following logging patterns is most suitable?
TIBCO CLE server: send all BWCE app logs to CLE via EMS and then to a database (BWCE --> EMS --> DB)
Logging in ELK via CLE client/EMS: send all app logs to Logstash using EMS (BWCE --> EMS --> Logstash (ELK))
Logging in ELK via CLE client/Kafka: send all app logs to Logstash using Kafka (BWCE --> Kafka --> Logstash (ELK))
Logging in ELK via file logging: send all app logs to Logstash using file logging (BWCE --> file (log4j) --> Logstash (ELK)); see the sketch after this list
NOTE: We may use EMS for normal application/services requirements.
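For the file-logging option, a common container-friendly shape is to write one JSON object per line and let a shipper (Filebeat or a Logstash file input) tail the file into Elasticsearch. A minimal sketch of that line format in Python follows; BWCE produces its file logs through its own logging configuration (log4j in the question above), so this snippet only illustrates the JSON-lines shape a shipper would consume, and the file path and logger name are placeholders:

# Minimal sketch of JSON-lines file logging that Filebeat or a Logstash file
# input could tail. The log path and logger name are placeholders; BWCE would
# produce equivalent output via its own logging configuration instead.
import json
import logging

class JsonLineFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.FileHandler("/var/log/app/bwce-app.log")  # placeholder path
handler.setFormatter(JsonLineFormatter())
logger = logging.getLogger("bwce-app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order processed")  # written as a single JSON line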


Related

AWS MSK Connect w/ MSSQL Debezium connector fails with disconnect

I am trying to set up a MSSQL Debezium connector with AWS MSK Connect but keep getting the following error messages:
Connector error log:
[Worker-0a949760f6b805d4f] [2023-02-15 19:57:56,122] WARN [src-connector-014|task-0] [Consumer clientId=dlp.compcare.ccdemo-schemahistory, groupId=dlp.compcare.ccdemo-schemahistory] Bootstrap broker b-3.stuff.morestuff.c7.kafka.us-east-1.amazonaws.com:9098 (id: -2 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1079)
This error repeats continuously for a while, and then I see this error:
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
In the cluster logs I see a corresponding error when I get the disconnect error:
[2023-02-15 20:08:21,627] INFO [SocketServer listenerType=ZK_BROKER, nodeId=3] Failed authentication with /172.32.34.126 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I have an EC2 client that I've set up to connect to my cluster, and I am able to connect and run commands against the cluster using IAM auth. I have set up a topic and produced to and consumed from it using the console producer/consumer. I've also verified that when the connector starts up it creates the __amazon_msk_connect_status_* and __amazon_msk_connect_offsets_* topics.
I've verified that the IP in the logs is the IP assigned to my connector by checking the Elastic Network Interface it was attached to.
Also, for testing purposes, I've opened up all traffic from 0.0.0.0/0 for the security group they are running in, and made sure the IAM role allows msk*, msk-connect*, kafka*, and s3*.
I've also verified CDC is enabled on the RDS instance and that it is working properly; I see changes being picked up and added to the CDC tables.
I believe the issue is still related to IAM auth, but I'm not certain.
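One low-level sanity check for the broker-side "SSL handshake failed" entry is to confirm that a host in the connector's subnet and security group can complete a plain TLS handshake against the IAM listener on port 9098 at all. A rough sketch in Python (the broker hostname is the placeholder from the logs; this only proves network reachability and TLS, not the SASL/IAM layer):

# Rough TLS reachability check against the MSK IAM listener (port 9098).
# It confirms the TCP path and TLS handshake only; SASL/IAM auth is not exercised.
import socket
import ssl

broker = "b-3.stuff.morestuff.c7.kafka.us-east-1.amazonaws.com"  # placeholder from the logs
context = ssl.create_default_context()
with socket.create_connection((broker, 9098), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=broker) as tls:
        print("connected:", tls.version())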
Cluster Config:
auto.create.topics.enable=true
delete.topic.enable=true
Worker Config:
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
config.providers.secretManager.class=com.github.jcustenborder.kafka.config.aws.SecretsManagerConfigProvider
config.providers=secretManager
config.providers.secretManager.param.aws.region=us-east-1
request.timeout.ms=90000
errors.log.enable=true
errors.log.include.messages=true
Connector Config:
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
tasks.max=1
database.history.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
schema.include.list=dbo
database.history.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.security.protocol=SASL_SSL
database.instance=MSSQLSERVER
topic.prefix=dlp.compcare.ccdemo
schema.history.internal.kafka.topic=dlp.compcare.ccdemo.history
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.storage.StringConverter
database.history.sasl.mechanism=AWS_MSK_IAM
database.encrypt=false
database.history.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.producer.sasl.mechanism=AWS_MSK_IAM
database.history.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.user=debezium
database.names=Intermodal_CCDEMO
database.history.producer.security.protocol=SASL_SSL
database.server.name=ccdemo_1
schema.history.internal.kafka.bootstrap.servers=b-1:9098
database.port=1433
database.hostname=my-mssql-rds.rds.amazonaws.com
database.history.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.password=${secretManager:dlp-compcare:dbpassword}
table.include.list=dbo.EquipmentSetup
database.history.security.protocol=SASL_SSL
database.history.consumer.sasl.mechanism=AWS_MSK_IAM
I was able to do this same process with a Postgres RDS instance with no issues.
I've tried everything I can think of, so any and all help would be greatly appreciated!
I also referenced the following when setting up the cluster/connector:
https://catalog.workshops.aws/msk-labs/en-US/mskconnect/source-connector-setup
https://thedataguy.in/debezium-with-aws-msk-iam-authentication/
https://debezium.io/documentation/reference/stable/connectors/sqlserver.html#sqlserver-connector-properties
Streaming MSSQL CDC to AWS MSK with Debezium
https://docs.aws.amazon.com/msk/latest/developerguide/mkc-debeziumsource-connector-example.html

How to get the Snowflake host and port number to create a connection in SAP Analytics Cloud?

SAP Analytics Cloud's Snowflake Connector needs these details (the server host and port) for setting up a Snowflake connection.
How can I get these details from Snowflake?
I'm trying to follow this guide
It appears that you're attempting to configure SAP Analytics Cloud's Snowflake Connector.
The host and port of your Snowflake account (also known as its deployment URL) can be taken from the URL you use to connect to Snowflake's Web UI. Here's an example:
https://mzf0194.us-west-2.snowflakecomputing.com/
For the above URL, the input in the Server field of the form will be mzf0194.us-west-2.snowflakecomputing.com:443 (443 is the default HTTPS port that Snowflake serves on).
Alternatively, if you have access to any other Snowflake-connected application (such as SnowSQL) that lets you run a SQL query, run the following to extract it:
select t.value:host || ':443' snowflake
from table(flatten(parse_json(system$whitelist()))) t
where t.value:type = 'SNOWFLAKE_DEPLOYMENT';
An example output that carries the host/port:
+---------------------------------------------+
| SNOWFLAKE                                   |
|---------------------------------------------|
| p7b41m.eu-west-1.snowflakecomputing.com:443 |
+---------------------------------------------+
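The same lookup can also be scripted; here is a minimal sketch using the snowflake-connector-python package (the account identifier, user, and password are placeholders):

# Minimal sketch: run the SYSTEM$WHITELIST() lookup through snowflake-connector-python
# and print the deployment host:port. Account, user, and password are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="mzf0194.us-west-2",   # placeholder account identifier
    user="MY_USER",
    password="MY_PASSWORD",
)
cur = conn.cursor()
cur.execute("""
    select t.value:host || ':443' as snowflake
    from table(flatten(parse_json(system$whitelist()))) t
    where t.value:type = 'SNOWFLAKE_DEPLOYMENT'
""")
print(cur.fetchone()[0])   # e.g. p7b41m.eu-west-1.snowflakecomputing.com:443
conn.close()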
If you're uncertain about what these all mean, you'll need to speak to other, current Snowflake users or administrators in your organization.

Connecting to multiple databases on Kubernetes

I have a cluster with multiple databases. Applications on my cluster can access these databases using ClusterIP services. For security reasons, I do not want to expose the databases publicly using a NodePort or a LoadBalancer.
What I would like to do is deploy a web-based database client to Kubernetes and expose it as a service, so that the databases can be accessed through it.
Is something like this possible?
Personal opinion on the 'web based database client' and the security concern aside:
What you are trying to achieve seems to be proxying your databases through a web app.
This would go like this:
NodePort/LB --> [WebApp] --> (DB1 ClusterIP:Port)--[DB1]
                         \--> (DB2 ClusterIP:Port)--[DB2]
                         \--> (DB3 ClusterIP:Port)--[DB3]
You just have to define a NodePort/LB Service to expose your WebApp publicly, and ClusterIP Services for each Database you want to be able to reach. As long as the WebApp is running in the same cluster, it will be able to connect to your internal databases, while they wouldn't be directly reachable from outside the Kubernetes cluster.
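As a rough illustration of the ClusterIP part, here is a sketch using the official kubernetes Python client to create an internal-only Service for one database; the names, namespace, labels, and port are placeholders:

# Hedged sketch with the kubernetes Python client: create a ClusterIP Service so the
# database is reachable only inside the cluster. Names, namespace, and ports are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="db1"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",                        # internal-only, no external exposure
        selector={"app": "db1"},                 # must match the database pod labels
        ports=[client.V1ServicePort(port=5432, target_port=5432)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)

The web app would then reach that database at db1.default.svc.cluster.local:5432 from inside the cluster.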
You would need to check whether a Docker image for the web-based client you want exists in a registry. If it does, you would deploy it as a pod and expose that pod so it can be accessed from your browser.

Can GCM cloud server access my app server database table?

I am using Google Cloud Messaging (GCM) in my web-based Android application. I want to send a message to all of my Android apps through GCM (one by one, not simultaneously). Normally, my web server sends a request to GCM with the data, and GCM then sends that data to a particular app. So if my database contains records for 10 apps, my web server has to request GCM 10 times. Is there a way for my web server to give GCM access to the database table, so that GCM can use it to send messages to the apps one by one and my web server does not need to call the GCM server 10 times? Is this possible?
Thanks in advance for your kind reply!
There is no way Google can access your database, but you can send multicast messages to up to 1000 recipients by using the registration_ids parameter instead of to in your HTTP request.
See also https://developers.google.com/cloud-messaging/server-ref#downstream
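A rough sketch of such a multicast request from a Python web server, using the legacy GCM HTTP endpoint (the API key and registration IDs are placeholders):

# Rough sketch of one multicast downstream message to several devices via the
# registration_ids field (up to 1000 per request). API key and tokens are placeholders.
import requests

GCM_URL = "https://gcm-http.googleapis.com/gcm/send"
API_KEY = "YOUR_SERVER_API_KEY"

payload = {
    "registration_ids": ["regId1", "regId2", "regId3"],   # up to 1000 per request
    "data": {"message": "hello from the web server"},
}
resp = requests.post(GCM_URL, json=payload, headers={"Authorization": "key=" + API_KEY})
print(resp.status_code, resp.json())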
Update: you can also subscribe all your clients to a single topic and then send to that topic.
https://developers.google.com/cloud-messaging/topic-messaging

SQL Server Event Notifications & Service Broker - minimum req'd for multiple servers?

I'm trying to figure out the easiest way to send SQL Server Event Notifications to a separate server using Service Broker. I've built an endpoint on each server and a queue on each server, and I'm working on dialogs, contracts, and activation... but do I need any of that?
CREATE EVENT NOTIFICATION says it can send the notification XML to a "target service" - so could I just create a contract on the "sending" server that points to a queue on a "receiving server", and use activation there?
Or do I need to have it send to a local queue and then forward on to the receiving server's queue? Thanks!
You can target the remote service, but you have to have the ROUTEs defined for bidirectional communication so that you get the acknowledgement message back. I once had a script for creating a centralized processing server for all Event Notifications, and the other servers targeted its service. If I can find it I'll post it on my blog and update this with a link.
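As an illustration of the centralized-processing idea (not the original script), a small receive loop against the central server's notification queue might look like the following; the connection string, database, and queue name are hypothetical:

# Hypothetical sketch: poll a central event-notification queue with pyodbc and print
# each notification's XML body. Server, database, and queue names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=central-monitoring-server;DATABASE=EventCollector;Trusted_Connection=yes",
    autocommit=True,
)
cursor = conn.cursor()
while True:
    # WAITFOR (RECEIVE ...) blocks until a message arrives or the timeout elapses
    cursor.execute("""
        WAITFOR (
            RECEIVE TOP (1)
                message_type_name,
                CAST(message_body AS XML) AS event_xml
            FROM dbo.EventNotificationQueue
        ), TIMEOUT 5000;
    """)
    row = cursor.fetchone()
    if row:
        print(row.message_type_name, row.event_xml)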
