How do we get autoGeneratedKeys back in Mule db:bulk-insert?
We can do this in db:insert using autoGeneratedKeys="true", for example as in the sketch below.
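A minimal sketch of the working db:insert case (assuming the Mule 4 Database connector; the config name, table, and column are illustrative):

<db:insert config-ref="Database_Config" autoGeneratedKeys="true">
    <db:sql>INSERT INTO CUSTOMERS (NAME) VALUES (:name)</db:sql>
    <db:input-parameters><![CDATA[#[{ name: payload.name }]]]></db:input-parameters>
</db:insert>

With that attribute set, the generated keys come back as part of the operation result, but I can't find an equivalent for db:bulk-insert.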
Any clues?
Thanks
I am referring to this document to perform MySQL (installed on the local machine) to Pub/Sub data streaming using the Debezium connector.
My properties file looks like this:
databaseName=testdb
databaseUsername=root
databaseAddress=localhost
databasePort=3306
gcpProject=GCP_project_name
databasePassword=password
whitelistedTables=instance-name.testdb.testtab
singleTopicMode=true
gcpPubsubTopicPrefix=debeziumTest
databaseManagementSystem=mysql
I have already created a topic in Pub/Sub with the name "debeziumTest".
But the issue is that when I run
sudo mvn exec:java -pl cdc-embedded-connector -Dexec.args="/path/to/properties-file"
it runs without any error, but there is no data uploaded to Pub/Sub.
Based on the documentation, table updates are pushed to a topic that matches this format: ${PREFIX}${DB_INSTANCE}.${DATABASE}.${TABLE}
In your case I believe you should create a topic with the name "debeziumTestinstance-name.testdb.testtab".
This may not be the only problem, based on the warnings I see in the logs you shared.
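For example (a sketch, assuming the gcloud CLI is authenticated against the GCP_project_name project), that topic could be created with:

gcloud pubsub topics create debeziumTestinstance-name.testdb.testtab --project=GCP_project_name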
The problem seems to be with your whitelistedTables.
According to the documentation, you should use ${instance}.${database}.${table}; for your given example it should be whitelistedTables=testdb.databaseName.testTab (if testTab is your table name).
I'm trying to access data from an API that has 10 different IDs (1 to 10), with each ID containing bits of mock information for a client. The base URL is
https://europe-west2-mpx-tools-internal.cloudfunctions.net/frontend-mock-api/clients
Adding the {clientId} to the end of the URL returns the last 30 days of data for a specific client; for instance, https://europe-west2-mpx-tools-internal.cloudfunctions.net/frontend-mock-api/clients/1 returns data like so:
But when I try to console.log the data (date, cost, impressions, clicks, conversions), nothing appears, and I'm not getting an error.
Here's my code from both files:
I'm essentially wondering how best to ensure I'm able to access the specific bits of data for each day and for each client from 1 to 10, something like the sketch below.
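A rough sketch of what I mean (assuming fetch is available and the API returns a JSON array of daily records; the field names date, cost, impressions, clicks, and conversions are the ones mentioned above, and the real response shape may differ):

const BASE_URL = 'https://europe-west2-mpx-tools-internal.cloudfunctions.net/frontend-mock-api/clients';

async function logAllClients() {
  // Loop over the ten client IDs described above
  for (let clientId = 1; clientId <= 10; clientId++) {
    const response = await fetch(`${BASE_URL}/${clientId}`);
    const days = await response.json(); // assumed shape: one record per day
    days.forEach((day) => {
      console.log(clientId, day.date, day.cost, day.impressions, day.clicks, day.conversions);
    });
  }
}

logAllClients().catch(console.error);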
I'm not getting an error, but I believe I'm going wrong somewhere, if anyone can help.
Thanks in advance, and sorry if I've missed key info that's needed, as I'm still learning!
I have a project using Debezium, mostly based on this example, which is then connected to Apache Pulsar.
I have changed a few configurations. The file now looks like this:
database.history=io.debezium.relational.history.MemoryDatabaseHistory
connector.class=io.debezium.connector.mysql.MySqlConnector
offset.storage=org.apache.kafka.connect.storage.FileOffsetBackingStore
offset.storage.file.filename=offset.dat
offset.flush.interval.ms=5000
name=mysql-dbz-connector
database.hostname={ip}
database.port=3308
database.user={user}
database.password={pass}
database.dbname=database
database.server.name=test
table.whitelist=database.history_table,database.project_table
snapshot.mode=schema_only
schemas.enable=false
include.schema.changes=false
pulsar.topic=persistent://public/default/{0}
pulsar.broker.address=pulsar://{ip}:6650
As you may understand, what I'm trying to do is monitor the history_table and project_table modifications in the database and then write payloads onto Apache Pulsar.
My problem is as follows: whichever snapshot mode I use, once an offset has been written, I can't restart Debezium without getting an error on the next database update:
Encountered change event for table database.history_table whose schema isn't known to this connector
It only happens with an existing offset.dat file. I think this is because the schema is null within the offset.dat file. Take this one for example:
¨Ìsrjava.util.HashMap⁄¡√`—F
loadFactorI thresholdxp?#wur[B¨Û¯T‡xpG{"schema":null,"payload":["mysql-dbz-connector",{"server":"test"}]}uq~U{"ts_sec":1563802215,"file":"database-bin.000005","pos":79574,"server_id":1,"event":1}x
I first suspected the schemas.enable=false or the include.schema.changes=false parameters that I used to make the JSON more concise, but their values don't change anything in the offset.dat file.
The problem lies in the line database.history=io.debezium.relational.history.MemoryDatabaseHistory. The in-memory history will not survive a restart, so you should use FileDatabaseHistory instead.
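A sketch of the change (the file name is illustrative; database.history.file.filename tells the connector where to persist the history):

database.history=io.debezium.relational.history.FileDatabaseHistory
database.history.file.filename=dbhistory.dat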
I have a Salesforce query that extracts users' time reports:
SELECT ID,Logged_Date__c ,CreatedBy.Email, CreatedBy.id, CreatedBy.Name, Time_Spent_Hours__c, Activity__c, CaseId__r.CaseNumber, CaseId__r.Account.id, CaseId__r.Account.Name , Utilized__c
FROM Time_and_Placement_Tracking__c
The Activity__c field returns the activity text.
I was trying to use Activity__c.Id, Activity__r, etc., but they all return an error.
Is there a way to get the Activity Id?
Verify these:
You need to get to the object definition and see the field info. You can use Workbench or any other API tool you are familiar with to get the object and field definitions.
Check the data type of the Activity__c field. It should be a lookup/master-detail relationship. If it is not, find the field that ties to the Activity object.
Open the field to get the API name and use that in the query with a '__r' extension, as in the example below.
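For instance (purely illustrative, assuming the field turns out to be a lookup whose relationship name is Activity__r):

SELECT Id, Logged_Date__c, Activity__r.Id, Activity__r.Name, Time_Spent_Hours__c
FROM Time_and_Placement_Tracking__c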
I am a newbie to Solr, so please bear with me.
Use cases
A user can share his photos with friends, publicly, or keep them private.
A user can search for people or photos (he should see only public photos and photos shared with him).
I have denormalized my relational data into a Solr schema: I have merged the user object and the photo object into the Solr schema.
So,
If user jack (user 3) searches for "picnic", he shouldn't see photo_1 but should see photo_2.
If user venu searches for "picnic", he should see both photo_1 and photo_2.
How can I force Solr to look into the friends_ids and share_level fields? Can I do it with facet.field? Do dynamic fields work for this case? I have read some tutorials but I am not getting a clear picture.
Hope you guys can shed some light on this so that I can take it forward. I hope this is possible.
Thanks in advance!
You should have a string field for share_level, with possible values of private, public, and friends.
You should also have a multiValued field for friends_ids,
and it should store each user id individually instead of the current CSV format.
NOTE: you should revise this column type; NEVER store CSV in MySQL, use a proper entity relationship instead.
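For example, in schema.xml the two fields could look like this (a sketch; the field type names assume the stock string type):

<field name="share_level" type="string" indexed="true" stored="true"/>
<field name="friends_ids" type="string" indexed="true" stored="true" multiValued="true"/>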
So, once you have the fields ready and have completed the reindex,
searching for a photo will be just:
+name:$search +(share_level:public (+share_level:friends +friends_ids:$uid))
$search = picnic
$uid = 3 (jack)
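With those values substituted in, the actual query for jack would be:

+name:picnic +(share_level:public (+share_level:friends +friends_ids:3))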