I'm creating a file and writing a String to it with the encoding set to LATIN1. However, the finished file ends up with a different encoding (us-ascii or utf-8, as reported by "file -bi" on Linux, depending on the method I use to get the String).
Here is the code that creates the file:
new File("/home/username/dart_test/file.xml").create(recursive: true).then((file) {
file.writeAsString(_methodReturnsAString(), mode: FileMode.WRITE, encoding: LATIN1);
});
Any ideas on what could be wrong?
EDIT (RELATED TO ANSWER):
There's no problem with the method described above. The problem was the data being passed to "writeAsString": it comes from an HttpRequest that was not being processed properly (in fact, setting the encoding to ISO-8859-1 there was what caused the problem).
There's no problem with the method described in the question. The problem actually lies in an HTTP request body handler that was not set up properly.
So I'm answering my own question in order to help others with the same problem.
Here is my request handler (from the http_server package):
HttpBodyHandler.processRequest(request/*, defaultEncoding: Encoding.getByName("ISO-8859-1")*/).then((body) {
// Do something with body.
}, onError: _printError);
Take a look at the commented-out "defaultEncoding": that was the cause. I don't think you should set it if you are not processing any files (blobs) in the request. I don't know whether there is any situation where you should set it when processing just a String (I would appreciate it if someone could complete this answer with that information).
I have an IdP and an SP set up using the ITfoxtec SAML2 libraries, and everything works great when not using artifact binding, or when not validating signatures. When using artifact binding and validating signatures, I get a "Signature is invalid." exception in the ACS when trying to retrieve and bind the actual response/assertion.
It seems to unbind the artifact response fine; then, when it goes to retrieve and unbind the artifact from the ArtifactResolutionService, it fails, specifically on the last line of this block:
var soapEnvelope = new Saml2SoapEnvelope();
saml2AuthnResponse = new Saml2AuthnResponse(config);
await soapEnvelope.ResolveAsync(httpClient, saml2ArtifactResolve, saml2AuthnResponse);
I've checked that my signature validation certificate is correct and I've dug through the source code, but I'm scratching my head. I've tried to validate the "saml2p:ArtifactResponse" myself, but there isn't much out there.
If I put this line before the block above, everything works as expected, since the signature is no longer validated:
config.SignatureValidationCertificates.Clear();
One thing I noticed is that in the 'saml2p:ArtifactResponse' there is a signature inside that node, but not inside the contained 'saml2p:Response' node. Is it possible that the saml2p:Response is being isolated and then a signature check is performed on it? I tried to see whether the response/assertion was supposed to be signed in the artifact cache on the IdP side (artifactSaml2AuthnResponseCache), but it doesn't sign the response at all. I'm doing this before putting it in the cache, just like in the example and just like I do when using POST binding:
var token = saml2AuthnResponse.CreateSecurityToken(relyingParty.Issuer, subjectConfirmationLifetime: 5, issuedTokenLifetime: 60);
artifactSaml2AuthnResponseCache[saml2ArtifactResolve.Artifact] = saml2AuthnResponse;
EDIT: I have determined that the ArtifactResponse just isn't signed properly. Another tool claims the digest in the XML doesn't match the computed value. This is after stepping through the source and grabbing the XML that the code is trying to validate directly. I can see that the ArtifactResolve is being signed and validated properly (and I checked with the external tool) but the ArtifactResponse isn't. Even in the code it fails at the final validation of the signature (and not at any checks before it).
EDIT 2: Found the problem in the source. The .ToXmlDocument() extension is breaking the signed XML. The final test was done by replacing it in place with a new method that just returns the string directly via "envelope.ToString(SaveOptions.DisableFormatting)":
protected virtual XmlDocument ToSoapXml()
{
    var envelope = new XElement(Saml2Constants.SoapEnvironmentNamespaceX + Saml2Constants.Message.Envelope);
    envelope.Add(GetXContent());
    return envelope.ToXmlDocument();
}

protected string ToSoapXmlString()
{
    var envelope = new XElement(Saml2Constants.SoapEnvironmentNamespaceX + Saml2Constants.Message.Envelope);
    envelope.Add(GetXContent());
    return envelope.ToString(SaveOptions.DisableFormatting);//.ToXmlDocument();
}
And then save that directly to the SoapResponseXml of the Saml2SoapEnvelope:
protected override Saml2SoapEnvelope BindInternal(Saml2Request saml2Request, string messageName)
{
    if (!(saml2Request is Saml2ArtifactResponse))
        throw new ArgumentException("Only Saml2ArtifactResponse is supported");

    BindInternal(saml2Request);
    SoapResponseXml = ToSoapXmlString();// ToSoapXml().OuterXml;
    return this;
}
I would initiate a pull request for this change but honestly I'm not that up to speed with Git. I'm also not sure if this is the best way to fix the issue.
Thank you for your question and the code to solve the problem. I'll look into it.
EDIT: I'm trying to reproduce the error, but with no luck. The sample is both an IdP and an RP; what have you changed to get the error?
I'm trying to upload a file using multipart/form-data to a Camel route.
All is good; however, I can't get the original file name.
Camel version is: 3.14.1
Update
With the following modification to the route, I managed to process binary files (getting the file name and storing them). However, with text files, the boundary footer is appended to the stored file:
------WebKitFormBoundary7BH9nQ2RqDXvTRAJ--
The route definition:
rest("/v1/file-upload-form")
.post()
.consumes(MediaType.MULTIPART_FORM_DATA_VALUE)
.route()
.process((exchange) -> {
InputStream is = exchange.getIn().getBody(InputStream.class);
MimeBodyPart mimeMessage = new MimeBodyPart(is);
DataHandler dh = mimeMessage.getDataHandler();
exchange.getIn().setBody(dh.getInputStream());
exchange.getIn().setHeader(Exchange.FILE_NAME, dh.getName());
})
.to("file://" + incomingFolder);
Thank you in advance
Edwardo
Edit: Since you have everything else already working, I'd recommend the Stream Caching option.
As Nicolas suggested, check out Camel's MIME Multipart data format.
Also, the reason you're getting "Missing start boundary" is that your processor is consuming the InputStream. You can try to reset() it, but it might be better to consume the InputStream only once, or to enable Stream Caching.
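For example, here is a minimal sketch of enabling stream caching for the whole context from inside a RouteBuilder, so a streamed multipart body can be read more than once; the class name is just a placeholder:

import org.apache.camel.builder.RouteBuilder;

public class FileUploadRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Cache streamed message bodies so they can be re-read, e.g. once by
        // a logging step and once by the processor that extracts the file.
        getContext().setStreamCaching(true);

        // ... the rest("/v1/file-upload-form") route from the question goes here ...
    }
}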
Instead of stream caching, you could also just convert the stream to a string. Before your processor, add:
.convertBodyTo(String.class)
The string can be read over and over. If you still get the missing start boundary error, try logging the body before the unmarshal operation. Make sure the message is intact and that it indeed contains the start boundary.
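If the leftover boundary footer is the remaining problem, another option (instead of wrapping the raw stream in a single MimeBodyPart) is to let JavaMail parse the whole multipart payload, so the boundary lines are stripped for you. This is only a sketch under a few assumptions: the Content-Type header (with its boundary parameter) is present on the exchange, the upload is a single file in the first part, MediaType is Spring's class as in the question, and incomingFolder is the same variable as in the question:

import javax.mail.BodyPart;
import javax.mail.internet.MimeMultipart;
import javax.mail.util.ByteArrayDataSource;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.http.MediaType;

public class FileUploadFormRoute extends RouteBuilder {

    private final String incomingFolder = "/tmp/uploads"; // placeholder

    @Override
    public void configure() throws Exception {
        rest("/v1/file-upload-form")
            .post()
            .consumes(MediaType.MULTIPART_FORM_DATA_VALUE)
            .route()
            // Read the stream once into memory so it can be parsed safely.
            .convertBodyTo(byte[].class)
            .process(exchange -> {
                String contentType = exchange.getIn().getHeader(Exchange.CONTENT_TYPE, String.class);
                byte[] body = exchange.getIn().getBody(byte[].class);

                // Parse the full multipart body; JavaMail handles the boundaries,
                // so the trailing "------WebKitFormBoundary...--" never reaches the file.
                MimeMultipart multipart = new MimeMultipart(new ByteArrayDataSource(body, contentType));
                BodyPart part = multipart.getBodyPart(0); // assuming one uploaded file

                exchange.getIn().setHeader(Exchange.FILE_NAME, part.getFileName());
                exchange.getIn().setBody(part.getInputStream());
            })
            .to("file://" + incomingFolder);
    }
}

Depending on the browser, getFileName() may come back URL-encoded or null, so it's worth logging it while testing.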
Problem: The file is not consumed from the server
I am using
from("test")
.routeId("test")
.pollEnrich()
.simple("smb://myUrl?password=test&fileName=${in.headers.test}")
.aggregationStrategy((Exchange oldExchange, Exchange newExchange) -> {
//do things
return newExchange;
})
I get no error, and I am sure that the URL is OK, because when I use the same URL in the from(), the file gets consumed.
I don't understand what is happening here. I am using Camel 2.24.0 and camel-extra:camel-jcifs:2.23.1. I have tried SMB2 using the library from github.jborza.camel-smbj, with the same outcome.
I tried to debug: I can see in the GenericFileComponent class, in the createEndpoint method, that the endpoint is correctly created. Then I tried (in debug mode) to get the exchanges from my endpoint, and I can get them successfully; this is an SmbEndpoint, and when I fetch the exchanges from it, it returns exactly the needed file from the server. Further on, an EventDrivenPollingConsumer is created for this endpoint; I had a look at it, and it is started (it seems OK). But when it hits consumer.receive() in the PollEnricher, it blocks and no file is consumed. If I use a timeout, it returns null, so somehow it cannot find the file, or the consumer is wrong; I honestly have no clue at this point.
I had a look here too: https://github.com/apache/camel/blob/b9a3117f19dd19abd2ea8b789c42c3e86fe4c488/core/camel-core/src/test/java/org/apache/camel/component/file/FileConsumePollEnrichFileTest.java
and I have played with delays:
&consumer.initialDelay=100&consumer.delay=100&consumer.bridgeErrorHandler=true
Then I tried to implement it with a processor, like here:
https://github.com/apache/camel/blob/b9a3117f19dd19abd2ea8b789c42c3e86fe4c488/core/camel-core/src/test/java/org/apache/camel/component/file/FileConsumePollEnrichFileUsingProcessorTest.java
The same result :(
At some point the file was suddenly consumed, but this happened only once; I cannot understand this behavior.
Sounds like you have a read-lock problem. Can you find any files in .done with the same name as the file you are trying to consume?
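For what it's worth, here is a rough sketch of the kind of thing to try: an explicit read-lock strategy on the SMB URI plus a poll timeout so receive() cannot block forever. The URI, password and header name are the placeholders from the question, I've used direct:test as the trigger endpoint, and I'm assuming camel-jcifs honours the generic file readLock options, so treat this as something to verify rather than a known fix:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class SmbPollEnrichRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:test")
            .routeId("test")
            .pollEnrich()
                // readLock=changed waits until the remote file stops changing;
                // readLockTimeout (ms) bounds that wait.
                .simple("smb://myUrl?password=test&fileName=${in.headers.test}"
                        + "&readLock=changed&readLockTimeout=20000")
                // Give up after 10 seconds and return null instead of blocking forever.
                .timeout(10000)
                .aggregationStrategy((Exchange oldExchange, Exchange newExchange) -> {
                    // do things
                    return newExchange;
                });
    }
}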
I am using Talend ESB and want to parse an EDI message to XML using Smooks, but I am getting null in the body. The code looks as below.
from("file://D:/cimt/InvoiceEDI_Mapping/" + "?noop=true"
        + "&autoCreate=true" + "&flatten=false"
        + "&fileName=InDev_EDI_Msg.txt" + "&bufferSize=128")
    .routeId("TestSmooksConfig_cFile_1")
    .log(org.apache.camel.LoggingLevel.WARN,
            "TestSmooksConfig.cLog_1", "${body}")
    .id("TestSmooksConfig_cLog_1")
    .to("smooks://EDI_Config.xml")
    .to("log:TestSmooksConfig.cLog_2" + "?level=WARN")
    .id("TestSmooksConfig_cLog_2");
My Talend route looks as below.
I used the following set of external dependencies:
milyn-commons-1.7.0.jar
milyn-smooks-camel-1.7.0.jar
milyn-smooks-edi-1.7.0.jar
milyn-smooks-core-1.7.0.jar
jaxen-1.1.6.jar
milyn-edisax-parser-1.4.jar
Also, I see a strange behavior: upon execution, I still see "starting" prior to cJavaDSLProcessor, which initially made me wonder whether it gets executed at all. But later, when I intentionally made a mistake in the EDI mapping, the route threw errors, which convinced me that it does parse the EDI message.
I also searched before posting this question here, and found a similar problem in this link.
I tried to lower the revision of the org.milyn.* jars to 1.4.0 and got an exception that the route could not register the Smooks component, so I continued using the 1.7.0 version of the org.milyn.* jars.
For the benefit of others who might bump into a similar issue: I 'assume' that the output of Smooks gets written into an object of type StringResult. However, in my initial implementation there was no such option set, and hence the output body was null.
Later, I tried an alternative approach from http://smooks.org/guide, where they used a processor endpoint. They even state that the data can be retrieved through the exports element. The code snippet below helped fix the issue.
Smooks smooks = new Smooks("edi-to-xml-smooks-config.xml");
ExecutionContext context = smooks.createExecutionContext();
smooks.setExports(new Exports(StringResult.class));
SmooksProcessor processor = new SmooksProcessor(smooks, context);
from("file://input?noop=true")
.process(processor)
.to("mock:result");
Now I have this code in JavaScript:
var file_object = $('#PHOTO').get(0).files[0];
the_form = new FormData();
the_form.append("AWSAccessKeyId", "TESTING");
the_form.append("acl", "authenticated-read");
the_form.append("policy", policy);
the_form.append("signature", signature);
the_form.append("Content-Type", "image/jpeg");
the_form.append("key", "test.jpg");
the_form.append("file", file_object);
$.ajax({
    url: "http://S3BUCKET.s3.amazonaws.com",
    type: "POST",
    data: the_form,
    processData: false,
    contentType: false
});
It works sweetly in Chrome and Firefox, but not in IE 6, 7, 8, or 9.
The reason is that the File object is not supported until IE10!
https://developer.mozilla.org/en-US/docs/Web/API/File
Is there any work-around solution for browsers before IE10?
PS: A code example would be nice!!
Without Flash, many things are definitely a no-go. I believe the lib you reference has some Flash fallbacks, but I'm unclear as to whether they can handle all the issues involved. This is something I'm currently dealing with myself, and here are the issues in brief:
Content-Type header in the response. IE (without a Flash intermediary) will try to download a JSON content type; there is no way around this that I know of without a proxy middleman to fudge headers.
Hostname mapping. If you don't map to the origin hostname, the IE iframe (which is the non-Flash fallback) will not allow you to read its contents from the containing window. Fire-and-forget may be possible, but consuming the response / detecting errors from S3 may not be.
I will update this answer as I uncover more in the coming days. This is a large project so we have some pretty significant requirements and I imagine I'll learn a lot in the next week or so.
This is covered in a lot more detail here (not my company/project/post): http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/