I am trying to decode MP3 from an Icecast server, but I am getting a "hip: bitstream problem, resyncing skipping 164 bytes..." error. I am decoding using the LAME (hip) decoder.
hip_t hipVal;
hipVal = hip_decode_init();
while (1)
{
    Readdatafromnet(ringBuf, (void**)&buffer);
    sampleno = hip_decode(hipVal, buffer, irhandle->size, pcmrBuffer, pcmlBuffer);
    player_write(handle, pcmrBuffer, sampleno);
}
hip_decode_exit(hipVal);
In my Spring application, I am getting a FilePart object and need to convert it to a byte array.
Below is my code.
val byteArray: ByteArray = file.content().map { it -> it.asInputStream().readBytes() }.blockLast()!!
But it is giving me the following error:
java.lang.IllegalStateException: block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-kqueue-3
Take a look at DataBufferUtils; the join(...) method offers a safe and efficient way to aggregate a data buffer stream into a single data buffer:
Mono<byte[]> getByteArray(FilePart filePart) {
    return DataBufferUtils.join(filePart.content())
            .map(dataBuffer -> dataBuffer.asByteBuffer().array());
}
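One caveat with asByteBuffer().array(): it can fail for direct (off-heap) buffers, and the joined buffer is never released back to the pool. A slightly more defensive variant (just a sketch of the same idea, using the same classes as above) copies the readable bytes out and releases the buffer explicitly:
Mono<byte[]> getByteArray(FilePart filePart) {
    return DataBufferUtils.join(filePart.content())
            .map(dataBuffer -> {
                // copy the readable bytes into a plain array
                byte[] bytes = new byte[dataBuffer.readableByteCount()];
                dataBuffer.read(bytes);
                // release the pooled buffer to avoid leaking it
                DataBufferUtils.release(dataBuffer);
                return bytes;
            });
}
Since join(...) aggregates the whole part in memory, this is best kept for reasonably small uploads.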
We are streaming a file from S3 and processing it; after processing completes, we upload the file back to S3 as an Error / Archive file. While streaming the file from S3, processing stops partway through with the error "akka.http.scaladsl.model.EntityStreamException: Entity stream truncation". I am not sure whether this depends on the size of the file streamed from S3 or on a corrupt file.
val source = s3Client.download(baseConfig.bucketName.get, content.key)._1
  .via(Gzip.decoderFlow)
  .via(Framing.delimiter(ByteString("\n"), 256, byeFormatterFlag).map(_.utf8String))

val flow = flowDefintion(list)
val resp = source.via(flow).runWith(Sink.seq)
akka {
  loglevel = "INFO"
  stdout-loglevel = "INFO"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  http {
    routing {
      decode-max-size = 25m
    }
    parsing {
      max-to-strict-bytes = 20m
      max-content-length = 20m
      max-chunk-size = 10m
    }
  }
}
I want to read a WAV file's sample rate and size from its header in Angular.
I am trying to use the AudioContext to decode the data, but I am unable to make it work.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
audioCtx.decodeAudioData(uint8Array, function (buffer) {
    // read data
});
I have an array of bytes, but I get an error: "parameter 1 is not of type ArrayBuffer". Then I convert it to an ArrayBuffer but still get an error.
Any ideas?
Using the FFmpeg C API, I'm trying to convert an input video into a video that looks like an animated GIF - meaning no audio stream and a video stream at 4 fps.
I have the decode/encode part working. I can drop the audio stream from the output file, but I'm having trouble reducing the fps. I can change the output video stream's time_base to match 4 fps, but that just increases the video's duration - basically playing it in slow motion.
I think I need to drop the extra frames before I write them to the output container.
Below is the loop where I read the input frames and then write them to the output container.
Is this where I'd drop the extra frames? How do I determine which frames to drop (I, P, or B frames)?
while (av_read_frame(input_container, &decoded_packet) >= 0) {
    if (decoded_packet.stream_index == video_stream_index) {
        len = avcodec_decode_video2(input_stream->codec, decoded_frame, &got_frame, &decoded_packet);
        if (len < 0) {
            exit(1);
        }
        if (got_frame) {
            av_init_packet(&encoded_packet);
            encoded_packet.data = NULL;
            encoded_packet.size = 0;
            if (avcodec_encode_video2(output_stream->codec, &encoded_packet, decoded_frame, &got_frame) < 0) {
                exit(1);
            }
            if (got_frame) {
                if (output_stream->codec->coded_frame->key_frame) {
                    encoded_packet.flags |= AV_PKT_FLAG_KEY;
                }
                encoded_packet.stream_index = output_stream->index;
                encoded_packet.pts = av_rescale_q(current_frame_num, output_stream->codec->time_base, output_stream->time_base);
                encoded_packet.dts = av_rescale_q(current_frame_num, output_stream->codec->time_base, output_stream->time_base);
                if (av_interleaved_write_frame(output_container, &encoded_packet) < 0) {
                    exit(1);
                }
                else {
                    current_frame_num += 1;
                }
            }
            frame_count += 1;
            av_free_packet(&encoded_packet);
        }
    }
}
It looks like you are decoding and then re-encoding the video. In the decoded state there is no such thing as I/B/P frames; every decoded frame is a complete picture. This is also where you should drop frames: you must decode every frame, but once decoded, drop the ones you no longer want by simply not sending them to the encoder (for example, to go from a 24 fps input to a 4 fps output, keep only one of every six decoded frames). And finally, don't touch the timebase at all.
We're using the "Google Cloud Storage Client Library" for App Engine, simply setting GcsFileOptions.Builder.contentEncoding("gzip") at file creation time, and we get the following problem when reading the file:
com.google.appengine.tools.cloudstorage.NonRetriableException: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1#1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:87)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:129)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:123)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl.read(SimpleGcsInputChannelImpl.java:81)
...
Caused by: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1#1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:101)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:81)
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:75)
... 56 more
Caused by: java.lang.IllegalStateException: com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2#1d8c25d: got 46483 > wanted 19823
at com.google.common.base.Preconditions.checkState(Preconditions.java:177)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:418)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:398)
at com.google.appengine.api.utils.FutureWrapper.wrapAndCache(FutureWrapper.java:53)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:90)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:86)
... 58 more
What else needs to be added to read files with "gzip" content encoding in App Engine? (Fetching the Cloud Storage URL with curl from the client side works fine for both compressed and uncompressed files.)
This is the code that works for an uncompressed object:
byte[] blobContent = new byte[0];
try
{
    GcsFileMetadata metaData = gcsService.getMetadata(fileName);
    int fileSize = (int) metaData.getLength();
    final int chunkSize = BlobstoreService.MAX_BLOB_FETCH_SIZE;
    LOG.info("content encoding: " + metaData.getOptions().getContentEncoding()); // "gzip" here
    LOG.info("input size " + fileSize); // the size is obviously the compressed size!
    for (long offset = 0; offset < fileSize;)
    {
        if (offset != 0)
        {
            LOG.info("Handling extra size for " + filePath + " at " + offset);
        }
        final int size = Math.min(chunkSize, fileSize);
        ByteBuffer result = ByteBuffer.allocate(size);
        GcsInputChannel readChannel = gcsService.openReadChannel(fileName, offset);
        try
        {
            readChannel.read(result); // <<<< here the exception was thrown
        }
        finally
        {
            ......
It is now compressed by:
GcsFilename filename = new GcsFilename(bucketName, filePath);
GcsFileOptions.Builder builder = new GcsFileOptions.Builder().mimeType(image_type);
builder = builder.contentEncoding("gzip");
GcsOutputChannel writeChannel = gcsService.createOrReplace(filename, builder.build());

ByteArrayOutputStream byteStream = new ByteArrayOutputStream(blob_content.length);
try
{
    GZIPOutputStream zipStream = new GZIPOutputStream(byteStream);
    try
    {
        zipStream.write(blob_content);
    }
    finally
    {
        zipStream.close();
    }
}
finally
{
    byteStream.close();
}
byte[] compressedData = byteStream.toByteArray();
writeChannel.write(ByteBuffer.wrap(compressedData));
The blob_content is compressed from 46483 bytes to 19823 bytes.
I think this is a bug in the Google code:
https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java, L418:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
The HTTPResponse has already decoded the blob, so the precondition is wrong here.
If I understand correctly, you have to set the mimeType:
GcsFileOptions options = new GcsFileOptions.Builder().mimeType("text/html").build();
Google Cloud Storage does not compress or decompress objects:
https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding
I hope that's what you want to do.
Looking at your code, it seems like there is a mismatch between what is stored and what is read. The documentation specifies that compression is not done for you (https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding), so you need to do the actual compression manually.
Also, if you look at the implementation of the class that throws the exception (https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java?r=81&spec=svn134), you will notice that you get the original contents back, but you're actually expecting compressed content. Check the method readObjectAsync in the above-mentioned class.
It looks like the content persisted might not be gzipped, or the content length is not set properly. What you should do is verify the length of the compressed stream just before writing it into the channel. You should also verify that the content length is set correctly when making the HTTP request. It would be useful to see the actual HTTP request headers and make sure that the Content-Length header matches the actual content length in the HTTP response.
Also, it looks like contentEncoding could be set incorrectly. Try using .contentEncoding("Content-Encoding: gzip") as used in this TCK test. Still, the best thing to do is inspect the HTTP request and response; you can use Wireshark to do that easily.
You also need to make sure that the GcsOutputChannel is closed, as that's when the file is finalized.
Hope this puts you on the right track. To gzip your contents you can use Java's GZIPOutputStream (and GZIPInputStream to un-gzip them when reading back).
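To make the manual approach concrete, here is a rough sketch (the helper name and buffer size are made up; gcsService and the GcsFilename come from the question's code). It assumes the object was stored as plain gzipped bytes, i.e. without relying on the contentEncoding("gzip") option, and un-gzips them while reading:
public byte[] readGzippedObject(GcsService gcsService, GcsFilename fileName) throws IOException
{
    // Hypothetical helper: the object holds manually gzipped bytes,
    // so we decompress them ourselves while streaming the channel.
    try (GcsInputChannel readChannel = gcsService.openReadChannel(fileName, 0);
         InputStream gzipped = Channels.newInputStream(readChannel);
         GZIPInputStream ungzipped = new GZIPInputStream(gzipped);
         ByteArrayOutputStream out = new ByteArrayOutputStream())
    {
        byte[] chunk = new byte[8192];
        int n;
        while ((n = ungzipped.read(chunk)) != -1)
        {
            out.write(chunk, 0, n);
        }
        return out.toByteArray(); // decompressed contents
    }
}
If the object was written with contentEncoding("gzip"), the size-mismatch precondition discussed in this thread may still trip the underlying channel read, so the wrapping alone is not necessarily enough.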
I'm seeing the same issue. It is easily reproducible by uploading a file with "gsutil cp -Z" and then trying to open it with the following:
ByteArrayOutputStream output = new ByteArrayOutputStream();
try (GcsInputChannel readChannel = svc.openReadChannel(filename, 0)) {
    try (InputStream input = Channels.newInputStream(readChannel)) {
        IOUtils.copy(input, output);
    }
}
This causes an exception like this:
java.lang.IllegalStateException:
....oauth.OauthRawGcsService$2#1883798: got 64303 > wanted 4096
at ....Preconditions.checkState(Preconditions.java:199)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:519)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:499)
The only workaround I've found is to read the entire file into memory using readChannel.read:
int fileSize = 64303;
ByteBuffer result = ByteBuffer.allocate(fileSize);
try (GcsInputChannel readChannel = gcs.openReadChannel(new GcsFilename("mybucket", "mygzippedfile.xml"), 0)) {
    readChannel.read(result);
}
Unfortunately, this only works if the size of the ByteBuffer is greater than or equal to the uncompressed size of the file, which is not possible to get via the API.
I've also posted my comment to an issue registered with google: https://code.google.com/p/googleappengine/issues/detail?id=10445
This is my function for reading gzip-compressed files:
public byte[] getUpdate(String fileName) throws IOException
{
    GcsFilename fileNameObj = new GcsFilename(defaultBucketName, fileName);
    try (GcsInputChannel readChannel = gcsService.openReadChannel(fileNameObj, 0))
    {
        maxSizeBuffer.clear();
        readChannel.read(maxSizeBuffer);
    }
    byte[] result = maxSizeBuffer.array();
    return result;
}
The core issue is that you cannot rely on the size of the saved file, because Google Storage serves it back at its original (uncompressed) size; the client then compares the size it expected with the real size, and they differ:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
So I solved it by allocating the biggest amount possible for these files, BlobstoreService.MAX_BLOB_FETCH_SIZE. Note that maxSizeBuffer is allocated only once, outside the function:
ByteBuffer maxSizeBuffer = ByteBuffer.allocate(BlobstoreService.MAX_BLOB_FETCH_SIZE);
And maxSizeBuffer.clear() resets the buffer so it can be reused for the next read.
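One caveat with maxSizeBuffer.array(): it returns the entire MAX_BLOB_FETCH_SIZE backing array, so for objects smaller than that the result carries trailing zero bytes. A small sketch (keeping the same names, and assuming the single read() pulls in the whole object as in the code above) that trims the result to the bytes actually read:
maxSizeBuffer.clear();
try (GcsInputChannel readChannel = gcsService.openReadChannel(fileNameObj, 0))
{
    readChannel.read(maxSizeBuffer);
}
maxSizeBuffer.flip();                                // limit = bytes actually read
byte[] result = new byte[maxSizeBuffer.remaining()]; // trimmed copy instead of the full backing array
maxSizeBuffer.get(result);
return result;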