Mule Salesforce query

I am trying to populate a joiner table in Salesforce from data in a database. The joiner table has two lookups (of course to two different objects).
My flow starts by querying a database. The query fetches two fields, NT_ACCOUNT and CXM_ID. These exist as separate objects in Salesforce, so I have to perform a lookup against Salesforce for each one to get its corresponding Salesforce Id before creating a record in my joiner table. I am not sure of the ideal way to do this.
Below is the flow I have, which takes NT_ACCOUNT, queries Salesforce for its Id, and creates the joiner record. Now my question: what is the ideal way to populate the other lookup in my joiner (a lookup in Salesforce for the CXM_ID from the database)? Would it be a good option to query Salesforce for the Id corresponding to CXM_ID and combine both payloads before passing them on to the create (if yes, any pointers on how to do that would help)?
<batch:job name="Batch2">
    <batch:threading-profile poolExhaustedAction="WAIT"/>
    <batch:input>
        <poll doc:name="Poll">
            <fixed-frequency-scheduler frequency="1000"/>
            <db:select config-ref="Generic_Database_Configuration" doc:name="Database">
                <db:parameterized-query><![CDATA[SELECT NT_ACCOUNT, CXM_ID FROM Table WHERE lastmodifieddate = 'within last 24 hours';]]></db:parameterized-query>
            </db:select>
        </poll>
    </batch:input>
    <batch:process-records>
        <batch:step name="Batch_Step_1">
            <sfdc:query-single config-ref="Salesforce__Basic_authentication1" query="dsql:SELECT Id FROM FM_Account__c WHERE account_number__c = '#[payload.NT_ACCOUNT]' LIMIT 1" doc:name="Salesforce"/>
            <batch:commit size="200" doc:name="Batch Commit">
                <data-mapper:transform config-ref="Map_To_Map_1" doc:name="Map To Map"/>
                <sfdc:create config-ref="Salesforce__Basic_authentication1" type="Joiner_Table__c" doc:name="Salesforce">
                    <sfdc:objects ref="#[payload]"/>
                </sfdc:create>
                <logger message="#[message.payload]" level="INFO" doc:name="Logger"/>
            </batch:commit>
        </batch:step>
    </batch:process-records>
</batch:job>
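One pattern that fits here (a sketch, untested; the enricher/record-variable approach is standard Mule 3 batch practice, but the CXM object name, its lookup key field, and the two joiner lookup field names below are placeholders for whatever your org actually uses): run each Salesforce lookup inside a message enricher that stores its result in a record variable, so the original database record survives both queries, then build the joiner sObject from the two variables.
<batch:step name="Batch_Step_1">
    <!-- First lookup: resolve NT_ACCOUNT to its Salesforce Id -->
    <enricher target="#[recordVars['account']]" doc:name="Lookup FM Account">
        <sfdc:query-single config-ref="Salesforce__Basic_authentication1" query="dsql:SELECT Id FROM FM_Account__c WHERE account_number__c = '#[payload.NT_ACCOUNT]' LIMIT 1" doc:name="Salesforce"/>
    </enricher>
    <!-- Second lookup: resolve CXM_ID the same way (CXM_Object__c and cxm_number__c are placeholder names) -->
    <enricher target="#[recordVars['cxm']]" doc:name="Lookup CXM">
        <sfdc:query-single config-ref="Salesforce__Basic_authentication1" query="dsql:SELECT Id FROM CXM_Object__c WHERE cxm_number__c = '#[payload.CXM_ID]' LIMIT 1" doc:name="Salesforce"/>
    </enricher>
    <!-- Combine both lookups into one joiner record (the two lookup field names are placeholders) -->
    <set-payload value="#[['Account_Lookup__c': recordVars['account'].Id, 'CXM_Lookup__c': recordVars['cxm'].Id]]" doc:name="Build joiner record"/>
    <batch:commit size="200" doc:name="Batch Commit">
        <sfdc:create config-ref="Salesforce__Basic_authentication1" type="Joiner_Table__c" doc:name="Salesforce">
            <sfdc:objects ref="#[[payload]]"/>
        </sfdc:create>
    </batch:commit>
</batch:step>
Because each record carries its own record variables through the step, the two lookup results stay paired with the right database row without any manual correlation.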

Related

Converting a Array into individual rows in Snowflake

I have a Kafka topic which receives an array with multiple objects in it, as shown below.
[{"Id":2318805,"Booster Station":"Comanche County #1","TimeStamp":"2021-09-30T23:53:43.019","Total Throughput":2167.52856445125},{"Id":2318805,"Booster Station":"Comanche County #2","TimeStamp":"2020-09-30T23:53:43.019","Total Throughput":217.52856445125}]
When I load this into Snowflake, it becomes one huge row with all the objects. I would like to store each object as an individual row in Snowflake. How can I achieve this? I am open to tweaking things at the Kafka level or in the connector.
My Kafka is AWS MSK, and I am using the Snowflake connector plugin for loading data into Snowflake.
You can use Snowflake's flatten table function to flatten the arrays to individual rows:
create or replace temp table T1 as
select parse_json($$[{"Id":2318805,"Booster Station":"Comanche County #1","TimeStamp":"2021-09-30T23:53:43.019","Total Throughput":2167.52856445125},
{"Id":2318805,"Booster Station":"Comanche County #2","TimeStamp":"2020-09-30T23:53:43.019","Total Throughput":217.52856445125}]$$) as JSON;

select VALUE from T1, table(flatten(JSON));
This assumes the Kafka messages are stored as variant type. If they are strings, you can use the parse_json function to convert them to variant.
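For example, a sketch assuming the raw strings landed in a single-column staging table RAW_EVENTS(PAYLOAD_STRING) (both names hypothetical):
-- Parse the string column to variant, then flatten the resulting array
select f.VALUE
from RAW_EVENTS,
     table(flatten(input => parse_json(PAYLOAD_STRING))) f;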
From there, you can convert the individual objects to columns if you want:
select VALUE:"Booster Station"::string as BOOSTER_STATION
,VALUE:Id::int as ID
,VALUE:TimeStamp::timestamp as TIME_STAMP
,VALUE:"Total Throughput"::float as TOTAL_THROUGHPUT
from T1, table(flatten(JSON));
BOOSTER_STATION     ID       TIME_STAMP                     TOTAL_THROUGHPUT
Comanche County #1  2318805  2021-09-30 23:53:43.019000000  2167.528564451
Comanche County #2  2318805  2020-09-30 23:53:43.019000000  217.528564451

How to link DOCUVALUE table to related business metadata

I am trying to pull a report of all the documents referenced in AX, and I'm having a heck of a time figuring out the AX database structure. Ideally I want to pull a list of documents and the Journal / Batch # each is associated with.
In our AX environment, all documents are stored on a share (i.e. they're not actually stored as BLOBs in the AX database).
It looks like the DOCUVALUE table is the principal table that references the documents, having the ORIGINALFILENAME and other columns that seem to "point" to the files on the AX share. But DOCUVALUE doesn't contain any useful business metadata.
After a bit of exploring, it looks like the DOCUREF table relates to DOCUVALUE (DOCUVALUE.RECID = DOCUREF.VALUERECID), which helps a little: it gives you the Company #, but that's about it.
After a bit more exploring, it looked like it would be possible to join across to LEDGERJOURNALTABLE as shown below:
select ljt.journalnum, filename + '.' + filetype filename, ljt.name journal_name,
dr.refcompanyid, convert(varchar(10), ljt.posteddatetime,111) posted_date,
ljt.createdby, convert(numeric, ljt.journaltotalcredit) journalamount
from LEDGERJOURNALTABLE ljt, DOCUREF dr, DOCUVALUE dv
where dv.RECID = dr.VALUERECID and dr.refrecid = ljt.recid
order by 1,2
This looked promising, so I pulled out a data listing and asked one of our key business users to review the results. She indicated that it was accurate to some extent, but there were other areas where the document referenced just didn't have any relation to the JournalNum in the listing.
So - I'm at a bit of a dead end - I've spent further time generating SQL statements to harvest data using specific RECID values, trying other joins, but each time I just disappear down a rabbit hole.
Any ideas? Any help gratefully received!!
The AX document management framework is designed so that a document can be attached to any data row in any table. What you're trying to do is far easier in AX, but we'll stick with SQL for the question.
The problem you're having is you don't know the reference objects because you're ignoring REFTABLEID.
The key fields that connect a denormalized "document" to the associated business data are REFTABLEID, REFCOMPANYID, and REFRECID (you already have the last one).
So start with this query below:
SELECT sd.NAME
,sd.SQLNAME
,dr.*
,dv.*
FROM DOCUREF dr
,DOCUVALUE dv
,SQLDICTIONARY sd
WHERE dv.RECID = dr.VALUERECID
AND sd.TABLEID = dr.REFTABLEID
AND sd.FIELDID = 0 -- Indicates it is a table and not a table field
AND sd.NAME = 'LEDGERJOURNALTABLE' -- Instead of hardcoding, join & query
You'll have to get creative depending on your use case. In SQL, you'll want to remove the hardcoded 'LEDGERJOURNALTABLE' and use sd.SQLNAME to join to the actual SQL table. Then, if that SQL table has a DataAreaId column, you'd likely want to join it to dr.REFCOMPANYID.
Or you can hardcode the tables, or whatever you want to do. Be aware that you can attach documents to journal headers OR lines... or many other rows, for that matter.
Just start exploring the data and you should be able to figure out what you want with that query above.
For your sample query, you can see I added 2 lines (marked below). Your query will only work when joined to LedgerJournalTable; you'll have to do some dynamic SQL or use a cursor if you want to report on every attachment.
SELECT ljt.journalnum
,filename + '.' + filetype filename
,ljt.name journal_name
,dr.refcompanyid
,convert(VARCHAR(10), ljt.posteddatetime, 111) posted_date
,ljt.createdby
,convert(NUMERIC, ljt.journaltotalcredit) journalamount
FROM LEDGERJOURNALTABLE ljt
,DOCUREF dr
,DOCUVALUE dv
WHERE dv.RECID = dr.VALUERECID
AND dr.REFRECID = ljt.RECID
AND dr.REFCOMPANYID = ljt.DATAAREAID -- ADDED
AND dr.REFTABLEID = 211 -- ADDED TableId for LedgerJournalTable
ORDER BY 1
,2
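Before writing anything dynamic, a quick survey query like this (a sketch using the same three tables as above) shows which business tables actually have attachments in your environment:
SELECT sd.SQLNAME AS ref_table
    ,COUNT(*) AS attachment_count
FROM DOCUREF dr
INNER JOIN DOCUVALUE dv ON dv.RECID = dr.VALUERECID
INNER JOIN SQLDICTIONARY sd ON sd.TABLEID = dr.REFTABLEID
    AND sd.FIELDID = 0 -- Table rows only, not fields
GROUP BY sd.SQLNAME
ORDER BY attachment_count DESC;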

SF KAFKA CONNECTOR Detail: Table doesn't have a compatible schema

I have set up the Snowflake Kafka connector, and I set up a sample table (kafka_connector_test) in Snowflake with 2 fields, both of VARCHAR type.
Fields are CUSTOMER_ID and PURCHASE_ID.
Here is the configuration that I created for the connector:
curl -X POST \
  -H "Content-Type: application/json" \
  --data '{
    "name": "kafka_connector_test",
    "config": {
      "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
      "tasks.max": "2",
      "topics": "kafka-connector-test",
      "snowflake.topic2table.map": "kafka-connector-test:kafka_connector_test",
      "buffer.count.records": "10000",
      "buffer.flush.time": "60",
      "buffer.size.bytes": "5000000",
      "snowflake.url.name": "XXXXXXXX.snowflakecomputing.com:443",
      "snowflake.user.name": "XXXXXXXX",
      "snowflake.private.key": "XXXXXXXX",
      "snowflake.database.name": "XXXXXXXX",
      "snowflake.schema.name": "XXXXXXXX",
      "key.converter": "org.apache.kafka.connect.storage.StringConverter",
      "value.converter": "com.snowflake.kafka.connector.records.SnowflakeJsonConverter"
    }
  }' \
I send data to the topic that I have configured in the connector configuration.
{"CUSTOMER_ID" : "test_id", "PURCHASE_ID" : "purchase_id_test"}
then when I check the kafka-connect server I get the below error:
[SF KAFKA CONNECTOR] Detail: Table doesn't have a compatible schema
Is there something I need to set up in either Kafka Connect or Snowflake that says which parts of the JSON go into which columns of the table? I'm not sure how to specify how it parses the JSON.
I set up a different topic as well and didn't create a table in Snowflake. For that one the connector was able to populate the table, but it creates the table with 2 columns, RECORD_METADATA and RECORD_CONTENT. I don't want to write a scheduled job to parse this; I want to insert directly into a queryable table.
The Snowflake Kafka connector writes data as JSON by design. The default columns RECORD_METADATA and RECORD_CONTENT are of variant type. If you'd like to query them, you can create a view on top of the table to achieve your goal, and you don't need a scheduled job.
So the table created by the connector would look something like:
RECORD_METADATA              RECORD_CONTENT
{metadata fields in json}    {"CUSTOMER_ID" : "test_id", "PURCHASE_ID" : "purchase_id_test"}
You can create a view to display your data:
create view v1 as
select RECORD_CONTENT:CUSTOMER_ID::text CUSTOMER_ID,
       RECORD_CONTENT:PURCHASE_ID::text PURCHASE_ID
from kafka_connector_test;
Your query will be
select CUSTOMER_ID , PURCHASE_ID from v1
P.S. If you'd like to create your own tables, you need to use variant as your data type instead of varchar.
Also, it looks like this isn't supported at this time, per a GitHub issue on the connector.
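For reference, pre-creating a landing table that the connector will accept looks like this (the two column names and their variant type are what the connector expects):
create or replace table kafka_connector_test (
    RECORD_METADATA variant,
    RECORD_CONTENT variant
);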

SonarQube - SQL Server : get users and groups associated with Project in SonarQube from backend database

I need to find the list of users and groups associated with a project in SonarQube.
I found the tables user_roles and group_roles, which have a resource_id column. This can be used to get the corresponding kee value in the resource_index table, but that kee value is not the same as the kee value in the projects table.
Select *
into #TempTblSnprjusrs
From
(Select
users.login "lanid", users.name "Name", resource_index.kee "Kee"
from
user_roles, resource_index, users
Where
resource_index.resource_id = user_roles.resource_id
and users.id = user_roles.user_id) as x;
But we cannot get the corresponding kee values in the projects table:
Select Distinct
#TempTblSnprjusrs.lanid, #TempTblSnprjusrs.Name,
#TempTblSnprjusrs.kee, projects.Name
from
#TempTblSnprjusrs
join
projects on projects.kee = #TempTblSnprjusrs.Kee;
The database is not an API.
To get the users and groups associated with a project permission-wise, use the Administrative Security interface. Otherwise, you'll want the permissions web services.
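For example (a sketch: the endpoint paths below are from recent SonarQube versions, so verify them against your instance's web-API documentation; host, credentials, and project key are placeholders):
curl -u admin:admin "https://sonarqube.example.com/api/permissions/users?projectKey=my_project"
curl -u admin:admin "https://sonarqube.example.com/api/permissions/groups?projectKey=my_project"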

INSERT statement not working when using it through a variable in Mule

My database component has the following configuration
<db:insert config-ref="Oracle_Configuration" bulkMode="true" doc:name="Database">
<db:dynamic-query><![CDATA[#[flowVars.dbquery]]]></db:dynamic-query>
</db:insert>
I have declared the "dbquery" variable as follows
<set-variable variableName="dbquery" value="INSERT INTO WBUSER.EMP VALUES('#[payload.FullName]','#[payload.SerialNumber]')" doc:name="Variable"/>
On running the application, the values inserted into the DB are the literal strings "#[payload.FullName]" and "#[payload.SerialNumber]".
But when my database component has the following configuration, the actual values of FullName and SerialNumber get inserted into the database.
<db:insert config-ref="Oracle_Configuration" bulkMode="true" doc:name="Database">
<db:dynamic-query><![CDATA[INSERT INTO WBUSER.EMP VALUES('#[payload.FullName]','#[payload.SerialNumber]')]]></db:dynamic-query>
</db:insert>
Here FullName and SerialNumber are not variables; they are keys of the maps in the list payload, e.g. [{FullName=yo, SerialNumber=129329}, {FullName=he, SerialNumber=129329}].
Can someone tell me the difference here? And is there a way I can achieve the database insertion using just the variable, as in the earlier case?
It is caused by a different approach to inserting the data. The second configuration works correctly because the payload is in the form of a List and the Bulk Mode option is selected.
To make it work for the first configuration (SQL query declared in a variable), you have to do the following:
Iterate over each payload value by using a collection-splitter.
Deselect Bulk Mode on the database connector.
The configuration should be:
<collection-splitter doc:name="Collection Splitter"/>
<set-variable variableName="dbquery" value="INSERT INTO WBUSER.EMP VALUES('#[payload.FullName]','#[payload.SerialNumber]')" doc:name="Variable"/>
<db:insert config-ref="MySQL_Configuration" doc:name="Database">
<db:dynamic-query><![CDATA[#[flowVars.dbquery]]]></db:dynamic-query>
</db:insert>
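As a side note, a parameterized query (a sketch following the question's Oracle table; untested) avoids building SQL strings by hand entirely, since the expressions are evaluated per record at insert time and the values are bound as parameters:
<collection-splitter doc:name="Collection Splitter"/>
<db:insert config-ref="Oracle_Configuration" doc:name="Database">
    <db:parameterized-query><![CDATA[INSERT INTO WBUSER.EMP VALUES (#[payload.FullName], #[payload.SerialNumber])]]></db:parameterized-query>
</db:insert>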
