The official MongoDB Kafka source connector does not publish clean extended JSON

I've set up a pretty simple MongoDB Kafka source connector to stream Mongo's oplog to Kafka. However, I see that in the messages published by the connector, the serialized oplog events do not respect the extended JSON spec; for instance, a datetime field is represented as:
{"$date": 1597841586927}
Whereas the spec says it should be formatted as:
{"$date": {"$numberLong": "1597841586927"}}
Why am I not getting clean extended JSON?
Note: my connector config file looks like this:
{
  "name": "mongosource",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "tasks.max": 1,
    "connection.uri": "...",
    "topic.prefix": "mongosource",
    "database": "mydb",
    "copy.existing": true,
    "change.stream.full.document": "updateLookup"
  }
}

The default JSON formatter of the source connector is a legacy one (see this issue on the connector's JIRA project).
From version 1.3.0 of this connector, there's a new config option that you can add to ask the connector to output proper extended JSON:
"output.json.formatter": "com.mongodb.kafka.connect.source.json.formatter.ExtendedJson"

Related

Formats not available for custom connector in Ververica

I created a custom connector based on the filesystem connector (I changed the factoryIdentifier method to return a different identifier and also changed the package name to avoid a collision with the original filesystem connector).
Then I deployed my connector to the Ververica platform using the UI.
The custom connector works fine for format=raw; however, changing the format to something else (csv, json) yields an error message:
csv format not found - it seems only the raw format is available for my connector.
Is it possible to enable other formats for my connector, or is there any workaround for it?

Link a Jira issue to another Jira issue while exporting an NUnit report with XrayImportBuilder

I'm trying to export NUnit results to Jira Xray using XrayImportBuilder. I need to link a Jira issue to another issue, and I got the error below. Am I missing something? XrayImportBuilder uses the v2 create endpoints "rest/api/2/issue".
ERROR: Unable to confirm Result of the upload.....
Upload Failed! Status:400 Response:{"error":"Error creating Test Execution -
Issue create failed! - issuelinks: Field 'issuelinks' cannot be set.
It is not on the appropriate screen, or unknown."
The code I used in the Jenkins pipeline:
step([$class: 'XrayImportBuilder',
    endpointName: '/nunit/multipart',
    importFilePath: 'nunit3.xml',
    importToSameExecution: 'true',
    projectKey: 'AT',
    serverInstance: jiraServerId,
    importInParallel: 'true',
    inputInfoSwitcher: 'fileContent',
    importInfo: """{
        "fields": {
            "project": {
                "key": "AT"
            },
            "summary": "${summary}",
            "issuetype": {
                "name": "Test Execution"
            }
        },
        "update": {
            "issuelinks": [
                {
                    "add": {
                        "values": [
                            {
                                "type": {
                                    "id": "10102"
                                },
                                "outwardIssues": [
                                    {
                                        "key": "AT-23"
                                    }
                                ]
                            }
                        ]
                    }
                }
            ]
        }
    }"""
])
I tried to run it without the update field and it worked, but when I ran it with the update/issuelinks field it failed.
In the documentation it says: "importInfo and testImportInfo must comply with the same format as the Jira issue create/update REST API format."
But it doesn't work as expected per the API doc.
First, to clarify, the Jenkins plugin is like a "wrapper" for invoking Xray REST API endpoints; it doesn't use Jira REST API endpoints.
Xray itself may use internally Jira REST API endpoints (or similar APIs).
Even though it seems possible to create an issue and link it to other issues using Jira's REST API (as long as you have the "Linked Issues" field on the create screen), doing that while importing results to Xray through the Xray API is not supported for the time being, at least for Xray server/DC.
For Xray on Jira Cloud, if you have the "Linked Issues" field on the mentioned screen, it should work.
For Xray server/DC
You should only consider setting fields on the destination issue (i.e. the Test Execution that will be created) using the "fields" object in that JSON.
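For example, a version of the importInfo from the question trimmed down to just the "fields" object would look like this:
{
  "fields": {
    "project": {
      "key": "AT"
    },
    "summary": "${summary}",
    "issuetype": {
      "name": "Test Execution"
    }
  }
}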
If you wish to upload results and then link the Test Execution issue to some other issues, you'll have to make a Jira REST API request yourself after uploading the results (example); a rough sketch is shown below. However, there's currently a limitation that inhibits the Jenkins plugin from setting a build variable with the Test Execution issue key that was created.
Therefore, this approach may not work; we could think of a workaround like getting the last Test Execution issue that was created, but that would be unreliable.
You may also reach out to the Xray support team and ask for an improvement to support this in the future.
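For reference, if you do get hold of the created Test Execution issue key, that Jira REST API request is a POST to /rest/api/2/issueLink. A minimal sketch of the request body (the link type name and the Test Execution key here are just placeholders; use the ones that apply to your instance) could look like:
{
  "type": {
    "name": "Relates"
  },
  "inwardIssue": {
    "key": "TEST-EXEC-KEY"
  },
  "outwardIssue": {
    "key": "AT-23"
  }
}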

Kafka Salesforce sink connector - The Kafka record is missing the key _ObjectType which is used to identify the SObject

I am using Confluent Cloud to sink some data from Kafka to Salesforce via the SObject Sink connector. The connection is made, but I am running into an error: "The Kafka record is missing the key _ObjectType which is used to identify the SObject. Please include the _ObjectType field in the record". Does anyone have an idea of what is going wrong?
Google searches aren't that useful, as they lead me to the Kafka key/value concept, which is primarily about maintaining the order of messages.
Are you trying to sink random records that you've created externally to the other connectors?
From the overview docs:
The Salesforce SObjects sink connector requires the Kafka records to have the same structure and format as the records output by the PushTopic source connector
So, if you aren't also using the "PushTopic source connector" or have records that deviate from the "structure and format" of those events (SObjects), then you should expect to see some error.
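For reference, a minimal sketch of a record value that includes that field (the field values and the object type here are just placeholders; the full structure expected is whatever the PushTopic source connector emits):
{
  "Id": "0035500000BdFmBAAV",
  "FirstName": "name",
  "_ObjectType": "Contact"
}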
If I understand it right, you have to pass in the "Id" attribute along with its value in order to successfully write to Salesforce, because Id is the column whose value identifies whether the published record already exists or should be treated as an insert. Please try to pass in Id along with its value; for example, below I passed a JSON payload, as I have the JSON converter in my configuration file for the connector.
"payload": {
"Id": "124421",
"FirstName": "name",
}

Can you edit a Logic App custom connector? And how does one deploy, then maintain (update), it?

I created a Logic App custom connector successfully (via the portal, not ARM) and it's in use (demo), working fine. It's a wrapper for an Azure Function, intended to provide better usability up front to less tech-savvy users, i.e. exposing properties vs. providing JSON.
Anyhow, once created, my query is a simple one. Can it be edited 1. in the portal? 2. via ARM (if it was created by ARM)? E.g. I want to add a better icon.
When I view the Logic Apps custom connector in the portal and click Edit, though, all it does is populate the connector name and no more. All the original configuration, parameters, etc. are missing.
So, my queries:
1. Is this the norm?
2. On export of the custom connector (Azure portal menu item), the template really has nothing in it - no content for the connector details either?
3. Is there an ARM template to deploy this?
4. If yes to 3, how do you go about modifying it in the scenario where you have to?
5. I also understand that using it in a logic app creates an API Connection reference. Does this stand alone, almost derived from the custom connector? And would further uses of a modified connector create different API connections?
I feel I'm just missing some of the basic knowledge on how these are implemented, which in turn would explain the deploying and maintenance.
Anyone :) ?
EDIT:
I think I've come to learn that the portal is very buggy. The swagger editor loaded no content either and broke the screen. I've since tried a simpler connector, i.e. no sample markup with escaped regex patterns, and it seems to be happy going back into it to edit :) (Maybe one to report as a bug after all this.)
That said then - yes, editing should be possible, but the other queries regarding ARM, export, redeploy and current connections still stand :)
You can deploy the Logic Apps custom connector really easily. You need to do the following steps:
1. Configure your custom connector with proper settings and update it.
2. Once updated, click on the download link available at the top of the connector.
3. Download the ARM template skeleton using the Export Template.
4. In the properties section, just add a new property called swagger and paste the swagger you downloaded in step 2.
5. Parameterise your ARM template.
6. Deploy using your choice of deployment tooling - Azure DevOps, PowerShell, etc. (a sample Azure CLI command is shown after the template below).
Please refer to the following ARM template for your perusal.
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "customApis_tempconnector_name": {
      "defaultValue": "tempconnector",
      "type": "String"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Web/customApis",
      "apiVersion": "2016-06-01",
      "name": "[parameters('customApis_tempconnector_name')]",
      "location": "australiaeast",
      "properties": {
        "connectionParameters": {
          "api_key": {
            "type": "securestring",
            "uiDefinition": {
              "displayName": "API Key",
              "description": "The API Key for this api",
              "tooltip": "Provide your API Key",
              "constraints": {
                "tabIndex": 2,
                "clearText": false,
                "required": "true"
              }
            }
          }
        },
        "backendService": {
          "serviceUrl": "http://petstore.swagger.io/v2"
        },
        "description": "This is a sample server Petstore server. You can find out more about Swagger at [http://swagger.io](http://swagger.io) or on [irc.freenode.net, #swagger](http://swagger.io/irc/). For this sample, you can use the api key `special-key` to test the authorization filters.",
        "displayName": "[parameters('customApis_tempconnector_name')]",
        "iconUri": "/Content/retail/assets/default-connection-icon.e6bb72160664a5e37b9923c3d9f50ca5.2.svg",
        "swagger": { "Enter Swagger Downloaded from Step 2 here" }
      }
    }
  ]
}
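For step 6, assuming the template above is saved as template.json and you deploy it with the Azure CLI (the resource group and connector name below are placeholders), a minimal sketch could be:
az deployment group create --resource-group my-resource-group --template-file template.json --parameters customApis_tempconnector_name=myconnector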

Spring Cloud Config serialize as array

In my cloud config I have this my-app.yml:
spring:
  profiles: dev
roles:
  - nerds
  - staff
but it appears to be serializing like so:
"source": {
"roles[0]": "nerds",
"roles[1]": "staff"
}
instead of
"source": {
"roles": [
"nerds",
"staff"
]
}
If I'm consuming my config from a node app, I now have to find all the props that match a regex like /^roles/ and parse out the array, instead of just getting an array back natively.
Is there some way to configure cloud config to just return native arrays instead of decomposing them into indexed keys of an object?
As far as I know, there is no configuration to make the config server serve native arrays, because a yml file is just an alternative representation of a properties file in Spring Boot.
Instead, you can access your config server from your node application via the different endpoints that the config server supports, like the ones below.
/{application}-{profile}.yml
/{label}/{application}-{profile}.yml
{profile} can be multiple values separated by a comma (,). If you access your config server via one of the above, the config will be served in native YAML format with exactly the same contents - the already merged and overridden properties from multiple files - and it will have array values as a YAML list, like you want. You can easily parse YAML to JSON in node.js, as you know (a minimal sketch follows below). I think that could be an alternative solution for you.
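A minimal sketch of that in a Node app, written as TypeScript (it assumes the js-yaml package, Node 18+ for the global fetch, and a config server running at a placeholder URL):
// Fetch the merged YAML document from the config server and parse it into a plain object.
import yaml from "js-yaml";

const res = await fetch("http://localhost:8888/my-app-dev.yml");
const config = yaml.load(await res.text()) as { roles?: string[] };

console.log(config.roles); // ["nerds", "staff"] - a native array instead of roles[0]/roles[1]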
You can find the other endpoints that the config server supports here - quick start section.
