Is there a way to rename a file attached to a Salesforce object using Mule?
(In my case, a file is uploaded to an Opportunity object. I need to download the VersionData, process the file, and update the fields in Salesforce. And then rename the file in Salesforce.)
After I process the file, should I use the Salesforce Update Connector to update the 'Title' or 'PathOnClient' of the specific object, or is there another way to do this?
Let's say I have imported a large number of audio files into S3. I need to map my audio files' metadata (including artist, track name, duration, release date, ...) to a DynamoDB table in order to query them using a GraphQL API in a React app. However, I can't yet figure out how to extract this metadata so it can be mapped into DynamoDB.
In the DynamoDB developer guide, it is mentioned (p.914) that the S3 object identifier can be stored in the DynamoDB item.
It is also mentioned that S3 object metadata support can provide a link back to the parent item in DynamoDB (by storing the primary key value of the table item as the S3 metadata).
However, the process is not really detailed; the closest approach I found is from J. Beswick, who uses a Lambda function to load a large amount of data from a JSON file stored in an S3 bucket.
(https://www.youtube.com/watch?v=f0sE_dNrimU&feature=emb_logo).
S3 object metadata is something different from audio metadata.
Think of it this way: everything that you put in S3 is an object. This object has a key (name), some metadata attached to it by S3 by default, and further metadata that you can attach to it yourself. All of this is explained here.
Audio file metadata is a different thing. It lives inside the file (let's suppose it is an MP3 file). To access this data you need to read the file using an API that knows the file format and how to extract the data.
When you upload a file to S3, it does not extract any kind of data from inside the file and attach it to your object metadata (artist, track number, etc. from MP3 files). You need to do that yourself.
A suggested solution: for every file that you upload to S3, have the upload trigger a Lambda function that knows how to extract the audio metadata from the file. It extracts the metadata and saves it in DynamoDB together with the key of the object in S3. After that you can run the queries you planned for against the table and, having found a record, point back to the correct object in S3.
You can also run the same function over all objects already in the S3 bucket, so they don't need to be uploaded again.
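The suggested pipeline can be sketched as follows. This is a rough illustration under stated assumptions, not a definitive implementation: the table name `AudioTracks`, the attribute names, and the use of the third-party `mutagen` library to read audio tags are all assumptions; `boto3` is imported inside the handler so the item-mapping helper stays usable without AWS dependencies.

```python
def build_item(s3_key, metadata):
    """Map extracted audio tags plus the S3 object key into a DynamoDB item."""
    return {
        "trackId": {"S": s3_key},  # the S3 object key links the item back to S3
        "artist": {"S": metadata.get("artist", "unknown")},
        "title": {"S": metadata.get("title", "unknown")},
        "durationSeconds": {"N": str(metadata.get("duration", 0))},
    }

def handler(event, context):
    """S3 'ObjectCreated' trigger: extract audio metadata, store it in DynamoDB."""
    import boto3                            # available by default in the Lambda runtime
    from mutagen import File as read_tags   # third-party tag reader (assumption)

    s3 = boto3.client("s3")
    dynamodb = boto3.client("dynamodb")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.download_file(bucket, key, "/tmp/audio")
        tags = read_tags("/tmp/audio", easy=True)
        metadata = {
            "artist": (tags.get("artist") or ["unknown"])[0],
            "title": (tags.get("title") or ["unknown"])[0],
            "duration": tags.info.length,
        }
        dynamodb.put_item(TableName="AudioTracks", Item=build_item(key, metadata))
```

Storing the S3 key as the DynamoDB attribute is what lets a query result point back to the object, as the answer describes.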
There is a folder on an FTP server which contains multiple XML files. How do I read the XML files and get the tags with their corresponding values using Azure Logic Apps only? (The Logic App may contain an Azure Function as a step.)
I created some XML files in my FTP folder, in the following format:
<id>1</id>
<name>hury</name>
Below is a screenshot of my Logic App for reference:
According to the screenshot, first create an "Initialize variable" action to initialize a string variable named "xmlstring". Then use "List files in folder" to access the XML files in your FTP folder.
After that, add a "For each" action to loop over the XML files from your FTP folder, and inside it use a "Get file content" action with the file's path in the File input box.
Then create a "Set variable" action to set the XML content, converted to JSON, into the variable (xmlstring) you created before; the expression json(xml(body('Get_file_content'))) does the conversion.
Next, create a "Parse JSON" action to parse the xmlstring; you can use "Use sample payload to generate schema" to generate the JSON schema.
Now we can use the values from the XML in our Logic App.
For this solution, the prerequisite is that all of your XML files have the same structure. Hope this is helpful to you.
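Per file, the parsing step amounts to turning tag/value pairs into a lookup structure. A minimal Python illustration of that step (not part of the Logic App itself), assuming the fragments shown above are wrapped in a single root element such as `<person>`:

```python
import xml.etree.ElementTree as ET

def xml_to_dict(xml_text):
    """Return {tag: text} for every child element of the document root."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

# Sample fields from the question, wrapped in an assumed root element:
sample = "<person><id>1</id><name>hury</name></person>"
```

This mirrors why the files must share a structure: a single schema (or a single `{tag: text}` shape) has to fit every file in the folder.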
When you ask Kloudless to retrieve the files from an account, using GET /v0/accounts/{account_id}/folders/{id}/contents/, it only lists the actual files; there are no thumbnail files.
So you cannot use the get file contents endpoint, GET /v0/accounts/{account_id}/files/{id}/contents/,
because it needs a specific file ID for the thumbnail file, and you don't get one because none are listed in the preview call.
So how do you retrieve thumbnails for the files?
2016-09 Update: A thumbnails endpoint (docs) is now available for select services. The prior SO answer has been preserved below, as it describes the File Download endpoint, which is valuable for obtaining file contents from services that do not yet support thumbnails.
At the current time the Kloudless API does not support returning thumbnails for
files stored in users' cloud storage accounts.
The request that you are making:
GET /v0/accounts/{account_id}/files/{id}/contents/
is a download request which fetches the full contents of the file.
The file ID can be obtained from the objects listed in the
children request which you referenced before:
GET /v0/accounts/{account_id}/folders/{id}/contents/
This will return a list of file/folder objects which have the ID of the
resource as well as other metadata. The ID in the returned file objects can be
used in the download request to fetch the contents of the file.
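The two-step flow described above (list children to get an ID, then download by ID) can be sketched as a small client. The paths match those quoted in the answer; the API host and the Bearer authorization scheme are assumptions, and the IDs are placeholders:

```python
import json
import urllib.request

BASE = "https://api.kloudless.com/v0"  # assumed host for the v0 paths above

def contents_url(account_id, folder_id):
    """Children listing: returns file/folder objects, each carrying its ID."""
    return f"{BASE}/accounts/{account_id}/folders/{folder_id}/contents/"

def download_url(account_id, file_id):
    """File download: fetches the full contents of one file."""
    return f"{BASE}/accounts/{account_id}/files/{file_id}/contents/"

def list_children(account_id, folder_id, token):
    # Bearer auth is an assumption; substitute whatever scheme your account uses
    req = urllib.request.Request(
        contents_url(account_id, folder_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Each object returned by `list_children` carries the ID you would pass to `download_url` to fetch its contents.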
I need to bulk upload data from a CSV file to Datastore. The data in the CSV file also has a field which should be a URL to a file.
Each row (person) is mapped to an associated file, which I can upload to Google Cloud Storage. But at runtime, how can I upload the file, get its URL, and update the CSV file, then use the CSV file to do the bulk upload?
I need a solution for this.
Thanks for the help.
There are two ways of doing this.
One is to write the logic in your request handler and perform the task there; raw data can be uploaded to GAE as project resources, though there are obviously some size limits.
The better way is to enable the remote API, then use a remote API Python script to batch-upload the data, or write some Python code that points at your remote data source.
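The CSV-rewriting step the question asks about can be sketched as follows: for each row, upload the row's file and record the resulting URL in a new column before running the bulk upload. The column names (`file_path`, `file_url`) and the `upload_file` callable are assumptions for illustration; in practice `upload_file` would wrap a Cloud Storage client upload and return the object's URL.

```python
import csv
import io

def add_urls(csv_text, upload_file):
    """Return new CSV text with a 'file_url' column filled in for every row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = list(reader)
    for row in rows:
        # assumed column name holding the local path of the row's file
        row["file_url"] = upload_file(row["file_path"])
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames + ["file_url"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

The rewritten CSV, now carrying a URL per row, is what you would then feed to the batch upload.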
I have to make a mailing list to which users can subscribe in WordPress. I've found a WP plugin that provides a form with two fields, name and email. These are saved into a CSV file, which I can export with the press of a button, literally. I want to automatically export this CSV file into a database or a simple text file which keeps updating as new subscribers are added.
The plugin I'm using now is called "Mail Subscribe List".
I'm using Wordpress version 4.0
You need to create a scheduled job to do this automatically at intervals. On Linux, use cron (edit the schedule with the 'crontab' command); on Windows, use Task Scheduler. Google either for details on how to set it up.
Basically, you will need to create a PHP file that does the export, then set up the scheduled job to run that PHP script at whatever interval you require.
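The export script's job is just to merge the plugin's CSV into your running list without duplicating subscribers. The answer proposes PHP; a Python sketch of the same logic, assuming the plugin's CSV export has "name,email" columns (an assumption, check your plugin's actual export format):

```python
import csv

def merge_subscribers(export_csv_path, list_path):
    """Append subscribers from the CSV export that are not yet in list_path."""
    try:
        with open(list_path, encoding="utf-8") as f:
            # the list file stores one "name,email" line per subscriber
            known = {line.split(",")[1].strip() for line in f if "," in line}
    except FileNotFoundError:
        known = set()  # first run: no list file yet
    with open(export_csv_path, newline="", encoding="utf-8") as f, \
         open(list_path, "a", encoding="utf-8") as out:
        for row in csv.DictReader(f):
            if row["email"] not in known:
                out.write(f"{row['name']},{row['email']}\n")
                known.add(row["email"])
```

Deduplicating by email means the job is safe to run on any schedule: rerunning it against the same export adds nothing.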