Salesforce Bulk API upload always fails

I'm uploading a simple CSV file using the following steps:
1- Creating a Job
POST {{instance_api_name}}/services/data/v{{version}}/jobs/ingest/
{
"columnDelimiter" : "COMMA",
"object" : "portfolio__c",
"externalIdFieldName" : "portfolio_external_id__c",
"contentType" : "CSV",
"operation" : "upsert",
"lineEnding" : "CRLF"
}
2- Adding the CSV file to the Job
{{instance_api_name}}/services/data/v43.0/jobs/ingest/{{job_id}}/batches
attached with the following simple CSV file
Name,allow_trigger__c,portfolio_external_id__c
Airline,FALSE,blabla
3- Closing the created Job
{{instance_api_name}}/services/data/v{{version}}/jobs/ingest/{{job_id}}
On Salesforce, the job appears with a Failed status, and the failure reason is the following:
null:InvalidBatch : InvalidBatch : Field name not found : ----------------------------355072916529311982669462. Batch will not be retried.
I don't know why the field name is a run of dashes followed by a long number.
Based on my research, the CSV file should be saved as Comma Separated Values (.csv); I did that, and there was no progress.
Is there any solution for this, or any clear reason why the job is failing?
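A likely cause of a "field name" that is all dashes followed by a long number is that the CSV was uploaded as multipart/form-data: the dashed string is a multipart boundary, which Salesforce then parses as a CSV header. The batch upload should send the raw CSV body with Content-Type: text/csv. The three requests can be sketched in Python; this only builds the requests (the instance URL, version, and job id are placeholders, and nothing here performs a real call):

```python
import json

# Hypothetical placeholders -- substitute your own instance URL and API version.
INSTANCE = "https://example.my.salesforce.com"
VERSION = "43.0"

def create_job_request():
    """Step 1: POST the job definition as JSON."""
    url = f"{INSTANCE}/services/data/v{VERSION}/jobs/ingest"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({
        "columnDelimiter": "COMMA",
        "object": "portfolio__c",
        "externalIdFieldName": "portfolio_external_id__c",
        "contentType": "CSV",
        "operation": "upsert",
        "lineEnding": "CRLF",
    })
    return "POST", url, headers, body

def upload_batch_request(job_id, csv_text):
    """Step 2: PUT the raw CSV text as the request body.
    If a client sends it as multipart/form-data instead, the multipart
    boundary (a long dashed string) gets read as a CSV field name."""
    url = f"{INSTANCE}/services/data/v{VERSION}/jobs/ingest/{job_id}/batches"
    headers = {"Content-Type": "text/csv"}
    return "PUT", url, headers, csv_text

def close_job_request(job_id):
    """Step 3: PATCH the job state to UploadComplete so processing starts."""
    url = f"{INSTANCE}/services/data/v{VERSION}/jobs/ingest/{job_id}"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"state": "UploadComplete"})
    return "PATCH", url, headers, body
```

Checking how your HTTP client attaches the CSV (raw body vs. form attachment) is the first thing to verify.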

Related

Check whether file exists in server with partial name

I want to search for a file whose name contains a date-time stamp (DDMMYYYYhhmmss), e.g. 14122017143339. On the server, however, the filename I'm expecting could be any of 14122017143337, 14122017143338, 14122017143339, or 14122017143340, since the seconds may differ slightly.
So I'm trying to search for the file using only a portion of its name (DDMMYYYYhhmm), up to the minute. That is, the file I'm expecting should contain the string 141220171433 in its name.
Can someone help with how to achieve this using Java?
Note - I'm using Selenium for my coding purposes.
The code below is in Java and lists all files in a folder; you can then match each name against the required string. Note that listFiles() can return null, and inside the loop the variable is f, not file:
File[] allFiles = new File("Folder path").listFiles();
if (allFiles != null) {
    for (File f : allFiles) {
        if (f.isFile() && f.getName().contains("141220171433")) {
            System.out.println("true and file found");
            // do something here
        }
    }
}
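The key idea, independent of language, is to truncate the stamp to the minute before matching. A minimal sketch of that step in Python (the folder and file names are hypothetical):

```python
import os
import tempfile

def find_files_with_minute_prefix(folder, stamp):
    """Match files whose names contain the stamp truncated to the minute
    (DDMMYYYYhhmmss -> DDMMYYYYhhmm), so second-level drift doesn't matter."""
    prefix = stamp[:12]  # drop the trailing seconds
    return sorted(f for f in os.listdir(folder) if prefix in f)
```

With a full stamp such as "14122017143339", the prefix becomes "141220171433", which matches any of the candidate files that differ only in seconds.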

Google Apps Script Duplicate A File

I am simply trying to duplicate an open Sheets file. I have tried many variations without success; the below is an example. I would value any assistance. Thank you.
function copyDocs() {
var file = DriveApp.getFilesByName('My Income Statement');
file.makeCopy();
}
DriveApp.getFilesByName() returns a FileIterator, so the file itself has to be retrieved with next().
There are two patterns for the sample script.
Sample script 1
Pattern 1 :
If only one file in your Drive has that name, you can use the following sample script, which copies the file by its filename.
function copyDocs() {
var file = DriveApp.getFilesByName('My Income Statement').next();
file.makeCopy();
}
Pattern 2 :
If several files share the same filename and you want to copy one of them, you can use the following sample script, which copies the file by its file ID. The file ID can be retrieved as follows.
For document,
https://docs.google.com/document/d/### File ID ###/edit
For spreadsheet,
https://docs.google.com/spreadsheets/d/### File ID ###/edit
Sample script :
function copyDocs() {
var file = DriveApp.getFileById("### File ID ###");
file.makeCopy();
}
Sample script 2
This sample opens the copied file from a dialog on the spreadsheet. Since the copied file is opened in a new window, please allow pop-up windows.
Copy and paste the following script into the container-bound script of the spreadsheet.
Run dialog().
Push the copy button in the dialog box on the spreadsheet.
With the flow above, the file with fileId is copied and opened in a new window.
function dialog() {
var data = '<input type="button" value="copy" onclick="google.script.run.withSuccessHandler(openfile).filecopy();"><script>function openfile(url) {window.open(url);}</script>';
var html = HtmlService.createHtmlOutput(data);
SpreadsheetApp.getUi().showModalDialog(html, 'Sample dialog');
}
function filecopy(){
var fileId = "### File ID ###";
return DriveApp.getFileById(fileId).makeCopy().getUrl();
}

Compare file in Hadoop - Custom PIG loader

I want to write a custom Pig loader to load records from a multiline format into a single-line format. Later I want to compare each sub-record.
How can I write something like this?
Here is the input file format:
REC|**Record_1**|ABC|DEF|GEH|1234
SUB_REC1|111|222|333|444|5555
SUB_REC1|AAA|BBB|CCC|DDD
SUB_REC2|EEE|FFF|GGG|HHH
SUB_REC2|III|JJJ
REC|**Record_2**|XYZ|MNO|PQR|1234
SUB_REC1|111|222|333|444|5555
SUB_REC1|AAA|BBB|CCC|DDD
SUB_REC2|EEE|FFF|GGG|HHH
SUB_REC2|III|JJJ
Expected output :
**Record_1**:REC|**Record_1**|ABC|DEF|GEH|1234~SUB_REC1|111|222|333|444|5555~SUB_REC1|AAA|BBB|CCC|DDD~SUB_REC2|EEE|FFF|GGG|HHH~SUB_REC2|III|JJJ
**Record_2**:REC|**Record_2**|XYZ|MNO|PQR|1234~SUB_REC1|111|222|333|444|5555~SUB_REC1|AAA|BBB|CCC|DDD~SUB_REC2|EEE|FFF|GGG|HHH~SUB_REC2|III|JJJ
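This is not a Pig LoadFunc, but the grouping logic such a loader's getNext() would need can be sketched in plain Python: collect each REC line together with its following SUB_REC lines, join them with "~", and key the result on the second field of the REC line (the record keys come from the sample above):

```python
def collapse_records(lines):
    """Join each REC line with its following SUB_REC lines using '~',
    prefixed by the second field of the REC line as the record key."""
    out, current, key = [], [], None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("REC|"):
            if current:  # flush the previous record
                out.append(key + ":" + "~".join(current))
            key = line.split("|")[1]
            current = [line]
        else:
            current.append(line)
    if current:  # flush the last record
        out.append(key + ":" + "~".join(current))
    return out
```

Running this over the sample input produces exactly the two expected single-line records, one per REC group.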

Apache Camel - JsonPath & unique file names

Camel 2.13.0
I am attempting to consume a json string containing multiple records and produce an output file for each record.
public void configure() {
    from("file:data/input")
        // my bean splits the input string and returns a JSON string via Gson
        .bean(new com.camel.Tokenizer(), "getSentences")
        .split(new JsonPathExpression("$[*]"))
        .convertBodyTo(String.class)
        .to("file:data/output");
}
Sample input:
"This is string #1. This is string #2."
If I run the code above as-is, I get a single output file containing "This is string #2.". If I remove the .split() call, I get a single output file containing the JSON I expect:
[
{
"token": "This is string #1."
},
{
"token": "This is string #2."
}
]
How can I achieve two output files representing both lines of data?
It occurred to me that perhaps the split was working correctly and the second output file was overwriting the first. According to the documentation, the default behavior when CamelFileName is not set is to create a unique generated ID, but I do not experience this. In my case the output file name always matches the input file name.
How can I get unique file name within each folder?
Thanks!
I finally stumbled upon the proper search terms and came across the following helpful post: Camel: Splitting a collection and writing to files.
The trick is to use a bit of logic in the .to() endpoint to achieve unique output file names:
.to("file:data/sentence_q?fileName=${header.CamelSplitIndex}.txt");
The JsonPathExpression works like a charm, and there is no need for a Processor() or unmarshal(), as I had tried previously.
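The mechanism is easy to see outside Camel: each element of the split gets written to its own file named by its split index, so files never overwrite each other. A small Python sketch of the same idea (the output directory name echoes the sentence_q endpoint above and is otherwise arbitrary):

```python
import json
import os
import tempfile

def write_split_files(json_text, out_dir):
    """Write each top-level JSON element to its own file, named by its
    split index -- the same role played by ${header.CamelSplitIndex}."""
    os.makedirs(out_dir, exist_ok=True)
    names = []
    for i, element in enumerate(json.loads(json_text)):
        name = f"{i}.txt"
        with open(os.path.join(out_dir, name), "w") as fh:
            json.dump(element, fh)
        names.append(name)
    return names
```

For the two-token sample input this yields 0.txt and 1.txt, one per record, which is exactly the behavior the fileName expression restores in the Camel route.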

Error while generating nodes with neo4j via neo4j-console

I'm trying to put data into my graph DB using Neo4j. I'm new to the field, and I don't find the batch import tool that Michael Hunger wrote easy to use.
My goal is to generate at least 10,000 nodes with just one property set. So I wrote a Python script that generates 10,000 lines of Cypher queries like "CREATE (:label{ number : '3796142470'})".
I put them in the console and execute them, but I get this exception:
StackTrace:
scala.collection.immutable.List.take(List.scala:84)
org.neo4j.cypher.internal.compiler.v2_0.ast.SingleQuery.checkOrder(Query.scala:33)
Am I doing something wrong? If the only way to generate those nodes is to use a batch/REST API, could you suggest an easier way to do it?
Change:
CREATE (:label{ number : '3796142470'})
to look like:
CREATE (n1:Label { number : '3796142470'})
So you are following the convention:
CREATE (n:Person { name : 'Andres', title : 'Developer' })
Put them into a file (say, import.txt) and then run:
bin/neo4j-shell -file import.txt
See http://blog.neo4j.org/2014/01/importing-data-to-neo4j-spreadsheet-way.html for more details.
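A minimal sketch of the Python generator the asker describes, emitting statements in the corrected form (the Label name and numbers are placeholders); write its output to import.txt and feed that to neo4j-shell:

```python
def generate_cypher(numbers):
    """Emit one CREATE statement per value, each with a labelled,
    uniquely named node variable, matching the convention above."""
    return [
        f"CREATE (n{i}:Label {{ number : '{num}' }})"
        for i, num in enumerate(numbers, start=1)
    ]
```

Joining the returned lines with newlines and saving them as import.txt gives a file that bin/neo4j-shell -file import.txt can execute in one pass.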
