Silverpop API - Automated message report - export

I need to export all data from a Silverpop automated message with the Silverpop API.
Apparently there is not much information on the net apart from the official guide, "XML API Developer's Guide ENGAGE".
I need to know how to:
retrieve a list of automated messages
extract / download the data of a selected report (for all days, not a single one)
finally (again, not documented in the official guide): how to programmatically export the final report having set MOVE_TO_FTP=true
(the guide says "Use the MOVE_TO_FTP parameter to retrieve the output file programmatically")
Thank you very much in advance for any help with this.

You can use the RawRecipientDataExport XML API export to get the following:
• One or more mailings
• One or more Mailing/Report ID combinations (for Autoresponders)
• A specific Database (optional: include related queries)
• A specific Group of Automated Messages
• An Event Date Range
• A Mailing Date Range
For automated messages, you can use the XML tags <CAMPAIGN_ACTIVE/>, <CAMPAIGN_COMPLETED/>, and <CAMPAIGN_CANCELLED/> to retrieve active, completed, and canceled Groups of Automated Messages, respectively.
To get data for all days and not just one, set a date range for send dates and event dates by putting your desired ranges within the <SEND_DATE_START> and <SEND_DATE_END> tags and the <EVENT_DATE_START> and <EVENT_DATE_END> tags. The date format looks like this: 12/02/2011 23:59:00
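For illustration, a minimal request sketch along those lines; the tag names come from the points above, while the <Envelope>/<Body> wrapper and the empty-element form of the flags are assumptions to verify against the Engage guide:

<Envelope>
  <Body>
    <RawRecipientDataExport>
      <!-- all days in this window, not a single one -->
      <EVENT_DATE_START>12/01/2011 00:00:00</EVENT_DATE_START>
      <EVENT_DATE_END>12/02/2011 23:59:00</EVENT_DATE_END>
      <!-- restrict to active Groups of Automated Messages -->
      <CAMPAIGN_ACTIVE/>
      <!-- move the output file to the account's FTP site so it can be
           downloaded programmatically, as asked in the question -->
      <MOVE_TO_FTP/>
    </RawRecipientDataExport>
  </Body>
</Envelope>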
Hope this helps.

Related

QBO SDK PHP: getting all deposits of a customer

I would like to know how to get all deposits of a target customer with the following SDK (https://github.com/intuit/QuickBooks-V3-PHP-SDK).
Is there an easier way than getting all deposits and then looping to extract the target customer's deposits?
Official QBO answer:
You can get the complete deposit list by querying the "Query a deposit"
API endpoint. If we use a filter in the query, it may return incorrect
results. It is better to iterate the response and add filter logic in
your code to get the specified customer's deposits.
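A minimal PHP sketch of that approach, assuming OAuth2 credentials are already in place and that each deposit line's "Received From" customer is exposed as DepositLineDetail->Entity->value (both are assumptions to verify against your own data):

<?php
require 'vendor/autoload.php';

use QuickBooksOnline\API\DataService\DataService;

// Placeholder credentials; fill in your own app and realm values.
$dataService = DataService::Configure(array(
    'auth_mode'       => 'oauth2',
    'ClientID'        => 'your-client-id',
    'ClientSecret'    => 'your-client-secret',
    'accessTokenKey'  => 'your-access-token',
    'refreshTokenKey' => 'your-refresh-token',
    'QBORealmID'      => 'your-realm-id',
    'baseUrl'         => 'Production',
));

$targetCustomerId = '42'; // hypothetical target customer Id
$matching = array();

// Get the complete deposit list, then filter in code, as the official
// answer suggests. For large result sets, page the query with
// STARTPOSITION / MAXRESULTS.
$deposits = $dataService->Query("SELECT * FROM Deposit");
foreach ((array) $deposits as $deposit) {
    foreach ((array) $deposit->Line as $line) {
        if (isset($line->DepositLineDetail->Entity->value)
                && $line->DepositLineDetail->Entity->value == $targetCustomerId) {
            $matching[] = $deposit;
            break; // one matching line is enough to keep this deposit
        }
    }
}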

BigQuery Data Expiration

I have been collecting data in BigQuery for analysis purposes. However, the size of the data is growing and I only need the most recent 2 weeks of data, so I want to erase the data that is no longer used. I did some research and found out that there is an expiration option for partitioned data.
Current setup:
My table is a partitioned table
I use a Lambda function with code similar to this in order to put data into the table (I have tried adding the timePartitioning option, but it didn't work, which is why I am asking on Stack Overflow if anyone knows):
await bq
  .dataset("dataset name")
  .table('tablename' + '$' + partitionTime)
  .load(filename, {
    sourceFormat: 'CSV',
    schema,
    skipLeadingRows: 1,
    timePartitioning: {
      expirationMs: "300000"
    }
  });
where partitionTime is in the format YYYYMMDD (this places the inserted data into that partition).
Thank you for your comments and for taking the time to read about my problem :)
Have a nice day.
As you can see here, the load function accepts three parameters:
source (needed)
metadata (optional)
callback (optional)
The options that you need can be set in the metadata parameter. In the link provided above you can also see that the BigQuery SDK uses API calls to perform the given operations.
In this link you can see how to build a correct API call for BigQuery: in the timePartitioning field you can set DAY as your type of partitioning and your expiration time in milliseconds.
In the end, your code would need only a small change:
await bq
  .dataset("dataset name")
  .table('tablename')
  .load(filename, {
    sourceFormat: 'CSV',
    schema,
    skipLeadingRows: 1,
    timePartitioning: {
      type: "DAY",
      expirationMs: "300000"
    }
  });
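One thing to double-check: expirationMs: "300000" keeps each partition for only 5 minutes. For the two weeks of recent data mentioned in the question, that would be 14 * 24 * 60 * 60 * 1000 = 1209600000 milliseconds.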
I hope it helps

How to Nest Date Functions in SOQL Advanced Filter in Einstein Analytics Data Flow

How do I nest date functions in SOQL Advanced Filters -> sfdcDigest node -> Data Flows -> Einstein Analytics to filter which records to pull?
I tried using CALENDAR_YEAR(ClosedDateTime) = THIS_YEAR, but received an error that ClosedDateTime has to be an integer. Reading through the SF KB, I realized that CALENDAR_YEAR accepts the Date format but not DateTime. To convert, I can use DAY_ONLY(ClosedDateTime). Now, how do I put all this together? The advanced filter accepts the WHERE portion of a SOQL query.
I tried CALENDAR_YEAR(DAY_ONLY(ClosedDateTime)) = THIS_YEAR but got an error about nested functions.
I expect the filter to pull only opportunities closed in the current year.
Well, if you search the internet a little longer...
CompletedDateTime = THIS_YEAR
This worked for the filter, but I'm still not sure whether it is possible to nest functions :)
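The reason this works: SOQL date literals such as THIS_YEAR can be compared directly against both Date and DateTime fields, so no CALENDAR_YEAR/DAY_ONLY conversion is needed in the filter. With the field name from the question (assuming it exists on the digested object), the equivalent filter would be:

ClosedDateTime = THIS_YEAR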

How to get salesforce Activity id

I have a Salesforce query that extracts users' time reports:
SELECT Id, Logged_Date__c, CreatedBy.Email, CreatedBy.Id, CreatedBy.Name, Time_Spent_Hours__c, Activity__c, CaseId__r.CaseNumber, CaseId__r.Account.Id, CaseId__r.Account.Name, Utilized__c
FROM Time_and_Placement_Tracking__c
Activity__c returns the activity text.
I tried using Activity__c.Id, Activity__r, etc., but they all return errors.
Is there a way to get the Activity id?
Verify these:
You need to get to the object definition and see the field info. You can use Workbench or any other API tool you are familiar with to get the object and field definitions.
Check the data type of the Activity__c field. It should be a lookup/master-detail relationship. If it is not, find the field that ties to the Activity object.
Open the field to get its API name and use that in the query with the '__r' extension, as in the sketch below.
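For example, if the field tying the record to the Activity object turns out to be a lookup named Activity_Lookup__c (a hypothetical API name; use whatever the object definition shows), the relationship query would look like:

SELECT Id, Activity_Lookup__r.Id, Activity_Lookup__r.Name, Time_Spent_Hours__c
FROM Time_and_Placement_Tracking__c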

How to aggregate files in Mule ESB CE

I need to aggregate a number of CSV inbound files in memory, resequencing them if necessary, on Mule ESB CE 3.2.1.
How could I implement this kind of logic?
I tried with message-chunking-aggregator-router, but it fails on startup because the XSD schema does not admit such a configuration:
<message-chunking-aggregator-router timeout="20000" failOnTimeout="false">
    <expression-message-info-mapping correlationIdExpression="#[header:correlation]"/>
</message-chunking-aggregator-router>
I've also tried attaching my own correlation ids to the inbound messages and then processing them with a custom-aggregator, but I've found that Mule internally uses a key made up of:
Serializable key = event.getId() + event.getMessage().getCorrelationSequence(); // EventGroup:264
The internal id is different every time (even when the correlation sequence is correct), so Mule does not use only the correlation sequence as I expected, and the same message is processed many times.
Finally, I could rewrite a custom aggregator, but I would like to use a more consolidated technique.
Thanks in advance,
Gabriele
UPDATE
I've tried with message-chunk-aggregator, but it doesn't fit my requirement, as it admits duplicates.
Let me detail the scenario I need to cover:
1. Mule polls (on an SFTP location).
2. File 1 "FIXEDPREFIX_1_of_2.zip" is detected and kept in memory somewhere (as an open SFTPStream; that's OK). Some correlation info is maintained for grouping: group, sequence, group size.
3. File 1 "FIXEDPREFIX_1_of_2.zip" is detected again, but cannot be inserted because it would be a duplicate.
4. File 2 "FIXEDPREFIX_2_of_2.zip" is detected and correctly added.
5. Once the group size has been reached, Mule routes a MessageCollection with the correct set of messages.
About point 2, I'm lucky enough to get the info from the file name and put it into the MuleMessage::correlation* properties, so that subsequent components can use them.
I did that, but duplicates are processed all the same.
Thanks again
Gabriele
Here is the right router to use with Mule 3: http://www.mulesoft.org/documentation/display/MULE3USER/Routing+Message+Processors#RoutingMessageProcessors-MessageChunkAggregator
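For reference, a minimal Mule 3 flow sketch built around that aggregator; the SFTP address is a placeholder, and setting MULE_CORRELATION_ID via a message-properties-transformer mirrors the question's file-name approach, so treat this as an untested outline rather than a verified configuration:

<flow name="aggregateCsvFiles">
    <inbound-endpoint address="sftp://user:password@host/inbox"/>
    <message-properties-transformer>
        <!-- derive the correlation properties from the file name,
             as described in the question -->
        <add-message-property key="MULE_CORRELATION_ID" value="#[header:correlation]"/>
        <add-message-property key="MULE_CORRELATION_GROUP_SIZE" value="2"/>
    </message-properties-transformer>
    <message-chunk-aggregator timeout="20000" failOnTimeout="false"/>
    <outbound-endpoint address="vm://aggregated"/>
</flow>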
