convert FilePart to byteArray

In my Spring application, I am getting a FilePart object and need to convert it to a ByteArray.
Below is my code:
val byteArray: ByteArray = file.content().map { it -> it.asInputStream().readBytes() }.blockLast()!!
But it gives me the following error:
java.lang.IllegalStateException: block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-kqueue-3

Take a look at DataBufferUtils: the join(...) method offers a safe and efficient way to aggregate a data buffer stream into a single data buffer.
Mono<byte[]> getByteArray(FilePart filePart) {
    return DataBufferUtils.join(filePart.content())
            .map(dataBuffer -> dataBuffer.asByteBuffer().array());
}
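Note that asByteBuffer().array() assumes the joined buffer is array-backed and that the backing array holds only the readable bytes; with pooled or direct buffers this can fail or return extra bytes. A more defensive variant (a sketch, not part of the original answer) copies the readable bytes out explicitly and releases the buffer:
Mono<byte[]> getByteArray(FilePart filePart) {
    return DataBufferUtils.join(filePart.content())
            .map(dataBuffer -> {
                // copy exactly the readable bytes out of the aggregated buffer
                byte[] bytes = new byte[dataBuffer.readableByteCount()];
                dataBuffer.read(bytes);
                // release the (possibly pooled) buffer to avoid leaks
                DataBufferUtils.release(dataBuffer);
                return bytes;
            });
}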

Related

jose4j: how to set full serialization input?

Is there a way to set a JWE full serialization input with jose4j? For example, what goes in the TODO below?
public String decryptJWE(PrivateKey privateKey, String payload, boolean compact) throws JoseException {
    JsonWebEncryption jwe = new JsonWebEncryption();
    if (compact) {
        jwe.setCompactSerialization(payload);
    } else {
        // TODO: what goes here? expecting something like jwe.setFullSerialization(payload)
    }
    jwe.setKey(privateKey);
    return jwe.getPayload();
}
No, only the JWE compact serialization is supported. The general and flattened JWE JSON serializations aren't directly supported.

copy NSData to UnsafeMutablePointer<Void>

Hi there stackoverflowers. I'm implementing a wrapper for Secure Transport and I'm stuck on some of the C -> Swift syntax.
func sslReadCallback(connection: SSLConnectionRef,
                     data: UnsafeMutablePointer<Void>,
                     var dataLength: UnsafeMutablePointer<Int>) -> OSStatus
{
    //let bytesRequested = dataLength.memory
    let transportWrapper: SecureTransportWrapper = UnsafePointer(connection).memory
    let bytesRead: NSData = transportWrapper.readFromConnectionFunc(transportWrapper.connection)
    dataLength = UnsafeMutablePointer<Int>.alloc(1)
    dataLength.initialize(bytesRead.length)
    if (bytesRead.length == 0)
    {
        return OSStatus(errSSLClosedGraceful)
    }
    else
    {
        data.alloc(sizeof(bytesRead.length)) //<----compile error here
        return noErr
    }
}
I've marked the location of the compile error. I don't blame it for erring, I was kind of guessing here :P. I'm trying to copy the NSData to the data: UnsafeMutablePointer. How do I do that?
Compile error:
/Users/*/SecureTransportWrapper.swift:108:9: Static member 'alloc' cannot be used on instance of type 'UnsafeMutablePointer' (aka 'UnsafeMutablePointer<()>')
Thanks a ton!
================
Update: here is the api doc for what the sslReadCallback is supposed to do:
connection: A connection reference.
data: On return, your callback should overwrite the memory at this location with the data read from the connection.
dataLength: On input, a pointer to an integer representing the length of the data in bytes. On return, your callback should overwrite that integer with the number of bytes actually transferred.
Excerpt from here
OK, let's go through your code:
dataLength = UnsafeMutablePointer<Int>.alloc(1)
dataLength.initialize(bytesRead.length)
dataLength is a pointer you get passed in, it is where the caller of the function both gives you the size of the buffer and wants you to put the number of bytes you read. You don't need to alloc this, it is already allocated.
(Irrelevant for this example, but in alloc(N) and initialize(N) the N should be the same; it is the amount of memory being allocated and then initialized.)
I think what you want (Swift 3 uses pointee instead of memory) is this:
dataLength.memory = bytesRead.length
The C API says that you also get the size of the data buffer from this variable. data will be pre-allocated for this size.
Make sure the data you read fits (bytesRead.length <= dataLength.memory), then just do a
memcpy(data, bytesRead.bytes, bytesRead.length)
That's all.

Google cloud storage using stream instead of bytebuffer

I'm using the following code:
GcsService gcsService = GcsServiceFactory.createGcsService();
GcsFilename filename = new GcsFilename(BUCKETNAME, fileName);
GcsFileOptions options = new GcsFileOptions.Builder()
        .mimeType(contentType)
        .acl("public-read")
        .addUserMetadata("myfield1", "my field value")
        .build();
@SuppressWarnings("resource")
GcsOutputChannel outputChannel =
        gcsService.createOrReplace(filename, options);
outputChannel.write(ByteBuffer.wrap(byteArray));
outputChannel.close();
The problem is that when I try to store video files, I have to hold the whole file in the byteArray, which could cause memory issues.
But I cannot find any interface that does the same with a stream.
Questions:
1. Should I worry about memory issues in the App Engine server, or is it capable of keeping a 1-minute video in memory?
2. Is it possible to use a stream instead of a byte array? How?
3. I'm reading the bytes as byte[] byteArray = IOUtils.toByteArray(stream); should I use the byte array as a real buffer and just read chunks and upload them to GCS? How do I do that?
The amount of memory available depends on the appengine instance type you've configured. Streaming this data seems like a good idea if you can.
Not sure about the GcsService API, but it looks like you can do this using the gcloud Storage API:
https://github.com/GoogleCloudPlatform/gcloud-java/blob/master/gcloud-java-storage/src/main/java/com/google/cloud/storage/Storage.java
This code might work (untested)...
final BlobInfo info = BlobInfo.builder(bucket.getBucketName(), "name").contentType("image/png").build();
final ReadableByteChannel src = Channels.newChannel(stream);
final WriteChannel dst = gcsStorage.writer(info);
fastChannelCopy(src, dst);
dst.close(); // the upload is finalized when the write channel is closed

private void fastChannelCopy(final ReadableByteChannel src, final WritableByteChannel dest) throws IOException {
    final ByteBuffer buffer = ByteBuffer.allocateDirect(16 * 1024);
    while (src.read(buffer) != -1) {
        buffer.flip();      // prepare the buffer to be drained
        dest.write(buffer); // write to the channel, may block
        // If partial transfer, shift remainder down;
        // if buffer is empty, same as doing clear()
        buffer.compact();
    }
    // EOF will leave buffer in fill state
    buffer.flip();
    // make sure the buffer is fully drained
    while (buffer.hasRemaining()) {
        dest.write(buffer);
    }
}
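If you want to stay with the GcsService API instead, GcsOutputChannel is a WritableByteChannel, so you can wrap it with java.nio.channels.Channels.newOutputStream and copy the upload in small chunks. A minimal sketch (untested, assuming the gcsService, filename, options, and stream variables from the question):
GcsOutputChannel outputChannel = gcsService.createOrReplace(filename, options);
try (OutputStream out = Channels.newOutputStream(outputChannel)) {
    byte[] buffer = new byte[16 * 1024];
    int bytesRead;
    while ((bytesRead = stream.read(buffer)) != -1) {
        out.write(buffer, 0, bytesRead); // only one chunk is held in memory at a time
    }
} // closing the stream closes the channel, which finalizes the object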

read cloud storage content with "gzip" encoding for "application/octet-stream" type content

We're using "Google Cloud Storage Client Library" for app engine, with simply "GcsFileOptions.Builder.contentEncoding("gzip")" at file creation time, we got the following problem when reading the file:
com.google.appengine.tools.cloudstorage.NonRetriableException: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1#1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:87)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:129)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:123)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl.read(SimpleGcsInputChannelImpl.java:81)
...
Caused by: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1#1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:101)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:81)
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:75)
... 56 more
Caused by: java.lang.IllegalStateException: com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2#1d8c25d: got 46483 > wanted 19823
at com.google.common.base.Preconditions.checkState(Preconditions.java:177)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:418)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:398)
at com.google.appengine.api.utils.FutureWrapper.wrapAndCache(FutureWrapper.java:53)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:90)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:86)
... 58 more
What else should be added to read files stored with "gzip" content encoding so the content can be read in App Engine? (curl on the Cloud Storage URL from the client side works fine for both compressed and uncompressed files.)
This is the code that works for an uncompressed object:
byte[] blobContent = new byte[0];
try
{
    GcsFileMetadata metaData = gcsService.getMetadata(fileName);
    int fileSize = (int) metaData.getLength();
    final int chunkSize = BlobstoreService.MAX_BLOB_FETCH_SIZE;
    LOG.info("content encoding: " + metaData.getOptions().getContentEncoding()); // "gzip" here
    LOG.info("input size " + fileSize); // the size is obviously the compressed size!
    for (long offset = 0; offset < fileSize;)
    {
        if (offset != 0)
        {
            LOG.info("Handling extra size for " + filePath + " at " + offset);
        }
        final int size = Math.min(chunkSize, fileSize);
        ByteBuffer result = ByteBuffer.allocate(size);
        GcsInputChannel readChannel = gcsService.openReadChannel(fileName, offset);
        try
        {
            readChannel.read(result); // <<<< here the exception was thrown
        }
        finally
        {
            ......
It is now compressed by:
GcsFilename filename = new GcsFilename(bucketName, filePath);
GcsFileOptions.Builder builder = new GcsFileOptions.Builder().mimeType(image_type);
builder = builder.contentEncoding("gzip");
GcsOutputChannel writeChannel = gcsService.createOrReplace(filename, builder.build());

ByteArrayOutputStream byteStream = new ByteArrayOutputStream(blob_content.length);
try
{
    GZIPOutputStream zipStream = new GZIPOutputStream(byteStream);
    try
    {
        zipStream.write(blob_content);
    }
    finally
    {
        zipStream.close();
    }
}
finally
{
    byteStream.close();
}
byte[] compressedData = byteStream.toByteArray();
writeChannel.write(ByteBuffer.wrap(compressedData));
The blob_content is compressed from 46483 bytes to 19823 bytes.
I think it is a bug in Google's code:
https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java, L418:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
The HTTPResponse has already decoded the blob, so the precondition is wrong here.
If I understand correctly, you have to set the mimeType:
GcsFileOptions options = new GcsFileOptions.Builder().mimeType("text/html")
Google Cloud Storage does not compress or decompress objects:
https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding
I hope that's what you want to do.
Looking at your code it seems like there is a mismatch between what is stored and what is read. The documentation specifies that compression is not done for you (https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding). You will need to do the actual compression manually.
Also if you look at the implementation of the class that throws the exception (https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java?r=81&spec=svn134) you will notice that you get the original contents back but you're actually expecting compressed content. Check the method readObjectAsync in the above mentioned class.
It looks like the content persisted might not be gzipped or the content-length is not set properly. What you should do is verify length of the compressed stream just before writing it into the channel. You should also verify that the content length is set correctly when doing the http request. It would be useful to see the actual http request headers and make sure that content length header matches the actual content length in the http response.
Also, it looks like contentEncoding could be set incorrectly. Try using: .contentEncoding("Content-Encoding: gzip") as used in this TCK test. Still, the best thing to do is inspect the HTTP request and response; you can use Wireshark to do that easily.
Also you need to make sure that GCSOutputChannel is closed as that's when the file is finalized.
Hope this puts you on the right track. To gzip your contents you can use Java's GZIPOutputStream (and GZIPInputStream to read them back).
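For the read side, here is a minimal sketch of manual decompression with GZIPInputStream (assuming the compressed bytes have already been read from GCS into a compressedData array, and that the object was stored without a gzip content encoding, so the bytes come back exactly as written):
byte[] decompress(byte[] compressedData) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressedData))) {
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n); // copy decompressed bytes chunk by chunk
        }
    }
    return out.toByteArray();
}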
I'm seeing the same issue, easily reproducible by uploading a file with "gsutil cp -Z" and then trying to open it with the following:
ByteArrayOutputStream output = new ByteArrayOutputStream();
try (GcsInputChannel readChannel = svc.openReadChannel(filename, 0)) {
    try (InputStream input = Channels.newInputStream(readChannel))
    {
        IOUtils.copy(input, output);
    }
}
This causes an exception like this:
java.lang.IllegalStateException:
....oauth.OauthRawGcsService$2#1883798: got 64303 > wanted 4096
at ....Preconditions.checkState(Preconditions.java:199)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:519)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:499)
The only workaround I've found is to read the entire file into memory using readChannel.read:
int fileSize = 64303;
ByteBuffer result = ByteBuffer.allocate(fileSize);
try (GcsInputChannel readChannel = gcs.openReadChannel(new GcsFilename("mybucket", "mygzippedfile.xml"), 0)) {
    readChannel.read(result);
}
Unfortunately, this only works if the size of the ByteBuffer is greater than or equal to the uncompressed size of the file, which is not possible to get via the API.
I've also posted my comment to an issue registered with google: https://code.google.com/p/googleappengine/issues/detail?id=10445
This is my function for reading compressed gzip files
public byte[] getUpdate(String fileName) throws IOException
{
    GcsFilename fileNameObj = new GcsFilename(defaultBucketName, fileName);
    try (GcsInputChannel readChannel = gcsService.openReadChannel(fileNameObj, 0))
    {
        maxSizeBuffer.clear();
        readChannel.read(maxSizeBuffer);
    }
    byte[] result = maxSizeBuffer.array();
    return result;
}
The core issue is that you cannot use the size of the saved file, because Google Storage gives the content back at its original (uncompressed) size; the client then checks the size it expected against the size it actually got, and they differ:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
So I solved it by allocating the largest possible buffer for these files, using BlobstoreService.MAX_BLOB_FETCH_SIZE. maxSizeBuffer is allocated only once, outside of the function:
ByteBuffer maxSizeBuffer = ByteBuffer.allocate(BlobstoreService.MAX_BLOB_FETCH_SIZE);
With maxSizeBuffer.clear(), the buffer is reset before each read.
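Note that maxSizeBuffer.array() returns the whole backing array, so the result can contain stale bytes beyond what was actually read. A hedged variant (same assumed gcsService and defaultBucketName fields) that trims the result to the bytes the channel actually returned:
public byte[] getUpdateTrimmed(String fileName) throws IOException
{
    GcsFilename fileNameObj = new GcsFilename(defaultBucketName, fileName);
    ByteBuffer buffer = ByteBuffer.allocate(BlobstoreService.MAX_BLOB_FETCH_SIZE);
    try (GcsInputChannel readChannel = gcsService.openReadChannel(fileNameObj, 0))
    {
        readChannel.read(buffer);
    }
    buffer.flip(); // limit = number of bytes read, position = 0
    byte[] result = new byte[buffer.remaining()];
    buffer.get(result);
    return result;
}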

How to read byte by byte from appengine datastore Entity Object

In a nutshell, since GAE cannot write to the filesystem, I have decided to persist my data in the datastore (using JDO). Now I would like to retrieve the data byte by byte and pass it to the client as an input stream. There's code from the gwtupload library (http://code.google.com/p/gwtupload/) (see below) which breaks on GAE because it writes to the filesystem. I'd like to be able to provide a GAE-ported solution.
public static void copyFromInputStreamToOutputStream(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[100000];
    while (true) {
        synchronized (buffer) {
            int amountRead = in.read(buffer);
            if (amountRead == -1) {
                break;
            }
            out.write(buffer, 0, amountRead);
        }
    }
    in.close();
    out.flush();
    out.close();
}
One workaround I have tried (it didn't work) is to retrieve the data from the datastore as a resource, like this:
InputStream resourceAsStream = null;
PersistenceManager pm = PMF.get().getPersistenceManager();
try {
    Query q = pm.newQuery(ImageFile.class);
    lf = q.execute();
    resourceAsStream = getServletContext().getResourceAsStream((String) pm.getObjectById(lf));
} finally {
    pm.close();
}
if (lf != null) {
    response.setContentType(receivedContentTypes.get(fieldName));
    copyFromInputStreamToOutputStream(resourceAsStream, response.getOutputStream());
}
I welcome your suggestions.
Regards
Store data in a byte array, and use a ByteArrayInputStream or ByteArrayOutputStream to pass it to libraries that expect streams.
If by 'client' you mean 'HTTP client' or browser, though, there's no reason to do this - just deal with regular byte arrays on your end and send them to/from the user as you would any other data. The only reason to mess around with streams like this is if you have some library that expects them.
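For example, a minimal sketch of serving the stored bytes from a servlet (the ImageFile entity with content and contentType fields is hypothetical, standing in for however you persisted the upload):
ImageFile imageFile = pm.getObjectById(ImageFile.class, key); // hypothetical JDO entity
response.setContentType(imageFile.getContentType());
InputStream in = new ByteArrayInputStream(imageFile.getContent());
copyFromInputStreamToOutputStream(in, response.getOutputStream());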
