Where can I define the encoding of my input file so that it is read correctly by Bindy?
My input file is ISO-8859-1, CRLF.
My locale is UTF-8 (I don't want to change it...).
So when I read my file, some characters are wrong...
camel:
.process(debugProcessor)
.unmarshal().bindy(BindyType.Csv, "mypackage.com")
Bindy:
@CsvRecord(separator = "\u0009", skipFirstLine = true)
public class elModel extends elModelGeneric {
/** General */
@DataField(pos = 1)
/* ID no. */ String id;
...
Set the encoding when reading your source. If your source is a file, this is defined as follows:
from("file:inbox?charset=ISO-8859-1")
.process(debugProcessor)
.unmarshal().bindy(BindyType.Csv, "mypackage.com")
...
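As a side note, the garbling described above can be reproduced in plain Java (a sketch of the charset mismatch, not of Bindy itself): the byte 0xE9, which is "é" in ISO-8859-1, is not a valid standalone UTF-8 sequence, so decoding it as UTF-8 yields the replacement character.

```java
import java.nio.charset.StandardCharsets;

public class CharsetMismatch {
    public static void main(String[] args) {
        // 'é' is a single byte (0xE9) in ISO-8859-1
        byte[] isoBytes = "é".getBytes(StandardCharsets.ISO_8859_1);

        // Decoding those bytes as UTF-8 fails: 0xE9 is not a valid
        // standalone UTF-8 byte, so it becomes U+FFFD (replacement char)
        String wrong = new String(isoBytes, StandardCharsets.UTF_8);
        System.out.println(wrong.equals("é"));     // false

        // Decoding with the correct charset recovers the original text
        String right = new String(isoBytes, StandardCharsets.ISO_8859_1);
        System.out.println(right.equals("é"));     // true
    }
}
```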
Thanks, this works fine and the file is now read correctly.
I'm still having some trouble with Bindy, though.
This test sample doesn't work:
from("file:inbox?charset=ISO-8859-1")
.process(debugProcessor)
.unmarshal().bindy(BindyType.Csv, "mypackage.com")
.marshal().bindy(BindyType.Csv, "mypackage.com")
.to("file:output/test.csv?charset=UTF-8");
The output encoding is wrong (it contains "?" characters where the input had "é").
Could it be related to the Locale that Bindy needs to have set?
I made several tests but couldn't get the right output.
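A "?" in the output usually means a String was encoded with a charset that cannot represent the character (or was decoded with the wrong charset earlier in the route), rather than a Locale issue. A plain-Java sketch of the correct round trip, assuming the input really is ISO-8859-1:

```java
import java.nio.charset.StandardCharsets;

public class Reencode {
    public static void main(String[] args) {
        // Bytes as they appear in the ISO-8859-1 input file: 'é' is 0xE9
        byte[] isoInput = "é".getBytes(StandardCharsets.ISO_8859_1);

        // Step 1: decode with the charset the file was actually written in
        String text = new String(isoInput, StandardCharsets.ISO_8859_1);

        // Step 2: encode with the desired output charset
        byte[] utf8Output = text.getBytes(StandardCharsets.UTF_8);

        // 'é' becomes the two-byte UTF-8 sequence 0xC3 0xA9, unchanged in meaning
        System.out.println(new String(utf8Output, StandardCharsets.UTF_8)); // é
    }
}
```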
Related
I have a problem reading different fileNames with the Camel file component.
from("file:/in?fileName={{property.name}}")
.to("file:/out")
I used fileName={{property.name}} from application.yml, but I need to set it from a String.
Is there any way to use it like:
String name = "blabla.xml";
from("file:/in?fileName=${name}")
.to("file:/out")
Camel doesn't support that. String concatenation can solve your problem:
from("file:/in?fileName="+name)
or you can set a property and then read it:
String name = "name";
from("direct:start")
.setProperty("name", constant(name))
.to("file:/in?fileName=${exchangeProperty.name}");
I have a source of JSON objects, stored as strings, that I'd like to render as a JSON array.
I'm doing this:
source.intersperse(",\n").concat(Source.single("]").prepend(Source.single("[")))
It does not seem to work; I never see the [ and ] characters in the output.
Also, how can I tell Akka Streams that the end of the stream has been reached (I know the ending message), so that it can add the closing character? (I can tell it's done by reading a specific message from Kafka.)
Thanks
This works:
source.takeWhile(_.value != "EOF").intersperse("[", ",\n", "]")
Note: of course, you need to have an EOF string at the end of your source to make this example work.
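For comparison (a plain-Java analogy, not Akka Streams), the three-argument intersperse(start, inject, end) behaves like Collectors.joining(separator, prefix, suffix): the closing element can only be emitted once the stream actually completes, which is why bounding the stream with takeWhile matters.

```java
import java.util.List;
import java.util.stream.Collectors;

public class JsonArrayJoin {
    public static void main(String[] args) {
        List<String> jsonObjects = List.of("{\"a\":1}", "{\"b\":2}");

        // joining(separator, prefix, suffix) ~ intersperse("[", ",\n", "]")
        String array = jsonObjects.stream()
                .collect(Collectors.joining(",\n", "[", "]"));

        // The two objects joined by ",\n", wrapped in [ and ]
        System.out.println(array);
    }
}
```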
I have been trying to use the built-in dataUtils.downloadFile function from JHipster on the Angular side. It accepts a content string and a content type, and lets the user download the content as a file.
I noticed that it can easily process content containing ASCII characters. However, it fails on UTF-8 characters.
This is the error I get:
Failed to execute 'atob' on 'Window'
Am I missing something, or is there a way around this?
Currently I would have to go through my file and replace all UTF-8-only characters with ASCII equivalents, which would be tedious.
Thanks for reading.
EDIT:
Below is the field definition.
{
"fieldName": "troubleshooting",
"fieldType": "byte[]",
"fieldTypeBlobContent": "text"
}
Here is the Angular code, which converts the string to base64 and then downloads it.
The problem is not with the base64 encoding; that part is fine. The problem is the content format: if the content contains UTF-8-only characters it fails, otherwise the file downloads successfully.
download(appliance: Appliance) {
const applianceObj = JSON.parse(appliance.troubleShooting);
const prettyPrinted = JSON.stringify(applianceObj, null, 2);
const data = this.base64Utils.encode(prettyPrinted);
this.dataUtils.downloadFile('application/json', data, appliance.applianceType);
}
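The underlying issue is that the browser's btoa/atob only handle code points up to 0xFF (Latin-1), so UTF-8-only characters must first be turned into bytes (e.g. with TextEncoder on the client) before base64 encoding. The same round trip on the server side, sketched in plain Java:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Utf8Base64 {
    public static void main(String[] args) {
        String json = "{\"note\":\"café\"}"; // contains a non-ASCII character

        // Base64-encode the UTF-8 *bytes*, not the raw string
        String b64 = Base64.getEncoder()
                .encodeToString(json.getBytes(StandardCharsets.UTF_8));

        // Decoding restores the original content unchanged
        String back = new String(Base64.getDecoder().decode(b64),
                StandardCharsets.UTF_8);
        System.out.println(back.equals(json)); // true
    }
}
```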
Camel 2.13.0
I am attempting to consume a json string containing multiple records and produce an output file for each record.
public void configure() {
    from("file:data/input")
        // my bean splits the input string and returns a JSON string via Gson
        .bean(new com.camel.Tokenizer(), "getSentences")
        .split(new JsonPathExpression("$[*]"))
        .convertBodyTo(String.class)
        .to("file:data/output");
}
Sample input:
"This is string #1. This is string #2."
If I run the code above as-is, I get a single output file containing "This is string #2.". If I remove the .split() call, I get a single output file containing JSON as expected:
[
{
"token": "This is string #1."
},
{
"token": "This is string #2."
}
]
How can I get two output files, one for each record?
It occurred to me that perhaps the split was working correctly and the second output file was overwriting the first. According to the documentation, the default behavior when CamelFileName is not set is to create a unique generated ID, but that is not what I observe: in my case the output file name always matches the input file name.
How can I get a unique file name for each file within the output folder?
Thanks!
I finally stumbled upon the proper search terms and came across the following helpful post: Camel: Splitting a collection and writing to files.
The trick is to use a bit of logic in the .to() endpoint to achieve unique output file names:
.to("file:data/sentence_q?fileName=${header.CamelSplitIndex}.txt");
The JsonPathExpression works like a charm, and there is no need for a Processor or unmarshal(), as I'd tried previously.
I need to read a file from the file system and load the entire contents into a string in a groovy controller, what's the easiest way to do that?
String fileContents = new File('/path/to/file').text
If you need to specify the character encoding, use the following instead:
String fileContents = new File('/path/to/file').getText('UTF-8')
The shortest way is indeed just
String fileContents = new File('/path/to/file').text
but in this case you have no control over how the bytes in the file are interpreted as characters. AFAIK, Groovy tries to guess the encoding here by looking at the file content.
If you want a specific character encoding you can specify a charset name with
String fileContents = new File('/path/to/file').getText('UTF-8')
See API docs on File.getText(String) for further reference.
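For reference, the plain-Java equivalent (since Java 11) is Files.readString, which likewise lets you pin the charset explicitly. A self-contained sketch using a temporary file so it can run anywhere:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadWholeFile {
    public static void main(String[] args) throws IOException {
        // Temporary file for illustration; in practice this is your real path
        Path path = Files.createTempFile("demo", ".txt");
        Files.writeString(path, "hello é", StandardCharsets.UTF_8);

        // Equivalent of new File(...).getText('UTF-8') in Groovy
        String contents = Files.readString(path, StandardCharsets.UTF_8);
        System.out.println(contents); // hello é

        Files.delete(path);
    }
}
```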
A slight variation...
new File('/path/to/file').eachLine { line ->
println line
}
In my case, new File() doesn't work; it causes a FileNotFoundException when run in a Jenkins pipeline job. The following code solved this, and is even easier in my opinion:
def fileContents = readFile "path/to/file"
I still don't understand the difference completely, but maybe it'll help someone else with the same trouble. Possibly the exception was caused because new File() resolves the path on the system that executes the Groovy code, which was a different system from the one containing the file I wanted to read.
The easiest way would be
new File(filename).getText()
which means you could just do:
new File(filename).text
Here you can find some other ways to do the same.
Read the file into a list of lines.
File file1 = new File("C:\\Build\\myfolder\\myTestfile.txt");
List<String> yourData = file1.readLines();
Read the full file into a String.
File file1 = new File("C:\\Build\\myfolder\\myfile.txt");
String yourData = file1.getText();
Read the file line by line.
File file1 = new File("C:\\Build\\myfolder\\myTestfile.txt");
def lines = file1.readLines()
for (int i = 0; i < 30 && i < lines.size(); i++) { // read at most 30 lines
    log.info lines.get(i)
}
Create a new file.
new File("C:\\Temp\\FileName.txt").createNewFile();