Wrapping a java.nio.ByteBuffer in a new instance, with offset, loses bytes - nio

I'm trying to write the contents of a byte buffer to a file, starting from the buffer's offset (position). It works when I convert to an input stream, but not when I wrap the array in a new ByteBuffer.
This works:
new ByteArrayInputStream(byteBuffer.array(), byteBuffer.position(), byteBuffer.array().length - byteBuffer.position())
This doesn't:
ByteBuffer.wrap(byteBuffer.array(), byteBuffer.position(), byteBuffer.array().length - byteBuffer.position())
More specifically, when I say it doesn't work, I mean that writing the contents of the buffer to a file:
Files.write(path, ByteBuffer.wrap(byteBuffer.array(), byteBuffer.position(), byteBuffer.array().length - byteBuffer.position()).array())
results in bytes being written to the file, but the output is incomplete, so the JPEG cannot be viewed. But if I write the same buffer wrapped in a ByteArrayInputStream, it does work:
val in = new ByteArrayInputStream(byteBuffer.array(), byteBuffer.position(), byteBuffer.array().length - byteBuffer.position())
Iterator.continually (in.read).takeWhile (-1 != _).foreach (fileOutputStream.write)
So I must be doing something silly, and perhaps I don't understand how ByteBuffer works.

ByteBuffer.wrap(byteBuffer.array(), <ANYTHING>, <ANYTHING>).array() means just byteBuffer.array(): the offset and length arguments set the new buffer's position and limit, but array() ignores them and always returns the whole backing array.
Also, the whole
ByteBuffer.wrap(byteBuffer.array(), byteBuffer.position(), byteBuffer.array().length - byteBuffer.position())
is just a cumbersome way to create a shallow copy of byteBuffer. Why build it instead of just using byteBuffer itself?
Looks like what you want is something like:
try (FileChannel outCh = new FileOutputStream(filename).getChannel()) {
    outCh.write(byteBuffer);
}
FileChannel.write(ByteBuffer) writes from the buffer's current position up to its limit, which is exactly what you were trying to reproduce by hand.
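To see why the array()-based version writes too many bytes, here is a minimal, self-contained sketch (the class name and sample data are mine):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class WrapDemo {
    public static void main(String[] args) {
        byte[] data = {1, 2, 3, 4, 5};
        ByteBuffer buf = ByteBuffer.wrap(data, 2, 3); // position = 2, limit = 5

        // array() returns the whole backing array; offset and length are ignored
        System.out.println(Arrays.equals(buf.array(), data)); // true: all 5 bytes

        // to get only the remaining bytes, copy the slice explicitly
        byte[] slice = Arrays.copyOfRange(data, buf.position(), buf.limit());
        System.out.println(Arrays.toString(slice)); // [3, 4, 5]
    }
}
```

Copying the slice works, but handing the ByteBuffer itself to a FileChannel avoids the copy entirely.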


supercollider - access buffer information inside a `Pbind` that uses a buffer array

In brief
I have an array of buffers; those are passed to a synth at random using a Pbind. I need to access info on the current buffer from within the Pbind, but I need help doing that!
Explanation of the problem
I have loaded an array of buffers containing samples. Those samples must be played in a random order (and at random intervals, but that's for later). To do so, I pass those buffers to a synth inside a Pbind. I want to set the \dur key to be the length of the current buffer being played. The thing is that I can't find a way to access info on the current buffer from within the Pbind. I have tried using Pkey, Pfset and Plambda, but with no success.
Does somebody know how to do this?
The sounds are played using:
SynthDef(\player, {
	// play a file from a buffer
	// out: the output channel
	// bufnum: the buffer to play
	arg out=0, bufnum;
	Out.ar(out, PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), doneAction: Done.freeSelf) ! 2);
}).add;
The buffers are loaded in an array:
path = PathName.new("/path/to/files");
bufferArray = Array.new(100);
path.files.do({
	arg file;
	bufferArray.add( Buffer.read(s, file.fullPath) );
});
My Pbind pattern works like this:
I define a \buffer value which is a single buffer from the array.
I pass this \buffer to my synth.
I then try to calculate its duration (\dur) by dividing the number of frames of the buffer by its sample rate. This is what I can't seem to get right.
p = Pbind(
	\buffer, Prand(bufferArray, inf),
	\instrument, \player,
	\bufnum, Pkey(\buffer),
	\dur, (Pkey(\buffer.numFrames) / Pkey(\buffer.sampleRate)) // this is the line that doesn't work
);
Thanks in advance for your help!
Solution to the problem: how to access buffer information inside a Pbind pattern
After hours of searching, I've found a solution to this problem on the SuperCollider forum, and I'm posting my own solution here in case others are looking, like I was!
Define a global array of buffers
This isn't compulsory, but it allows you to create the buffer array only once. The array is filled asynchronously using the action parameter of Buffer.read(), which triggers a function once each buffer is loaded:
var path;
Buffer.freeAll; // avoid using up all the buffers on the server
path = PathName.new("/path/to/sound/files");
~bufferArray = Array.new(100);
path.files.do({
	// add the buffer to `~bufferArray` asynchronously
	arg file;
	b = Buffer.read(s, file.fullPath, action: {
		arg buffer;
		~bufferArray.add( buffer );
	});
});
Play the synth and use Pfunc to access buffer information inside of the Pbind
This is the solution per se:
Define a Pbind pattern which activates a synth to play the buffer.
Inside it, define a \buffer key to hold the current buffer.
Then access data on that buffer inside a Pfunc: the function receives the event currently being built by the Pbind, and through this event the buffer data can be accessed.
p = Pbind(
	\buffer, Prand(~bufferArray, inf), // randomly pick one buffer from the array
	\instrument, \player,
	\bufnum, Pfunc { arg event; event[\buffer] }, // a `Pfunc` function reads the current event, which contains the `\buffer` entry
	\dur, Pfunc { arg event; event[\buffer].numFrames / event[\buffer].sampleRate } // duration in seconds
).play;
See the original answer on the SuperCollider forum for more details!

Extracting data from an unknown encoding file

We use testing equipment (manufactured in 1995) running MS-DOS. An analog-to-digital converter records information into a file.
[picture1] shows the structure of that file.
[picture2] shows the oscillogram constructed from the data in the file (by the program that opens the file on MS-DOS).
Below I placed a link to this file (Google Drive).
This file contains the data I need: the array of points of the oscillogram. I want to be able to keep, analyze and print this chart on Windows or Linux (not MS-DOS), so I need to extract the data from the file.
But I can't manage it, and no program known to me can open this file. I analyzed the first few bytes and they point to the program 'TRAS v4.99', which runs on MS-DOS.
But I really hope that it is possible to get the data without this program.
P.S. If anyone says it is impossible, that will be useful too, because I haven't found a point of view yet :)
Thank you for your time! Best regards!
Here is an idea of how you can tackle this problem. Since the format is relatively well specified in the handbook, you can use, for example, the Java programming language with something like java.io.RandomAccessFile to read arrays of bytes. These byte arrays can then be converted to Java primitive types or to strings, according to the data type. After this conversion you can print out the data in a human-readable format.
Below you can find some sample code to give you an idea of what you could do with this approach (I have not tested the code, it is not complete, it is just to give you an idea):
public static void readBinaryfile() throws IOException {
    java.io.RandomAccessFile randomAccessFile = new java.io.RandomAccessFile("test.bin", "r");

    byte[] addKenStrBytes = new byte[12];
    randomAccessFile.readFully(addKenStrBytes);
    String addKenStr = new String(addKenStrBytes, "UTF-8");
    // TODO: Do something with addKenStr.

    byte[] kopfSizeBytes = new byte[2];
    randomAccessFile.readFully(kopfSizeBytes);
    // TODO: Do something with kopfSizeBytes

    byte[] addRufNrCounterBytes = new byte[6];
    randomAccessFile.readFully(addRufNrCounterBytes);
    long addRufNrCounter = convertToLong(addRufNrCounterBytes);
    // TODO: Do something with addRufNrCounter

    byte[] endAdrBytes = new byte[4];
    randomAccessFile.readFully(endAdrBytes);
    // TODO: Do something with endAdrBytes

    // Continue here, and after you have reached the end of the record, repeat until you reach the end of the file.
}

private static int convertToInt(byte[] bytes) {
    if (bytes.length > 4) {
        throw new IllegalArgumentException();
    }
    int buffer = 0;
    for (byte b : bytes) {
        buffer = (buffer << 8) | (b & 0xFF); // shift first, then OR in the unsigned byte value
    }
    return buffer;
}

private static long convertToLong(byte[] bytes) {
    if (bytes.length > 8) {
        throw new IllegalArgumentException();
    }
    long buffer = 0L;
    for (byte b : bytes) {
        buffer = (buffer << 8) | (b & 0xFFL);
    }
    return buffer;
}
Note that fields with more than 8 bytes most probably need to be converted to strings. This is not complete code, just an example to give you an idea of how you can tackle this problem.
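As a sanity check on the byte-order handling, the big-endian conversion can be exercised in a self-contained sketch (class and method names are mine; note the `& 0xFF` mask, which prevents sign extension of negative bytes):

```java
public class ByteConvertDemo {
    // Interpret a byte array as an unsigned big-endian integer.
    static long toLongBigEndian(byte[] bytes) {
        long value = 0L;
        for (byte b : bytes) {
            value = (value << 8) | (b & 0xFFL); // shift first, then OR in the byte
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(toLongBigEndian(new byte[] {0x01, 0x00}));  // 256
        System.out.println(toLongBigEndian(new byte[] {(byte) 0xFF})); // 255, not -1
    }
}
```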

"Error -- memory violation : Exception ACCESS_VIOLATION received" on LR controller

Scenario: we are trying to download 2500 PDFs from our website, and we need to find the response time of this scenario when it runs alongside other business flows of the application. The custom code I had written for selecting and downloading PDFs dynamically worked fine for 200-300 PDFs, both in VuGen and on the Controller. But when we ran the same script with 2500 PDFs loaded into the DB, the script worked fine in VuGen but failed with an out-of-memory error on the Controller. I tried running this script alone on the Controller with 20 concurrent users, and even then it failed with the same out-of-memory error. I started getting this error as soon as the concurrent users started running on the server. I tried the following things; my observations:
1. I checked the load generator we are using; it had no high CPU or memory usage at the time I got this memory error.
2. I tried turning off logging completely and also turned off "Generate snapshot on error".
3. I increased the network buffer size from the default 12 KB to a higher value, around 2 MB, as the server was responding with a PDF of that size.
4. I also increased the JavaScript runtime memory value, but I suspect the problem is something in my code.
5. I have set web_set_max_html_param_len("100000");
Here is my code:
int download_size,i,m;
lr_save_string(lr_eval_string("{r_buf}"), "dpAllRecords");
I am not able to find the issue with my code, as it runs fine in VuGen. One thing I noticed: it creates a huge mdrv.log file to accommodate all 2500 members in the format shown above.
I need help on this.
Okay, since that did not work and I could not find the root cause, I tried modifying the code to use a string buffer to hold the value instead of the parameter. This time my code did not work properly and I could not get a properly formatted value, resulting in my web_custom_request failing.
So, here is the code with sprintf:
char *r_buf=(char *) malloc(55000);
int download_size,i,m;
sprintf(r_buf,"%sselectedNotice=%s&",r_buf,lr_paramarr_idx ("DownloadableRecords_FundingNotices",i));
lr_save_string(r_buf, "dpAllRecords");
I also tried using this:
lr_save_string(lr_eval_string("{r_buf}"), "dpAllRecords");
though that is meant for embedded parameters, but in vain.
You could try something like the code below. It frees the allocated memory, something you do not do in your examples.
I changed:
the way r_buf is allocated
how r_buf is populated (doing a sprintf() into a buffer and from the same buffer at once might not work as expected)
it uses lr_paramarr_len()
it checks in the loop that the allocated buffer is big enough
Action() code:
char *r_buf;
char buf[2048];
int download_size, i, m;

// Allocate memory
if ( (r_buf = (char *)calloc(65536, sizeof(char))) == NULL ) {
    lr_error_message("Insufficient memory available");
    return -1;
}

memset( buf, 0, sizeof(buf) );

m = lr_paramarr_len("DownloadableRecords_FundingNotices");
for (i = 1; i <= m; i++) {
    sprintf( buf, "selectedNotice=%s&", lr_paramarr_idx("DownloadableRecords_FundingNotices", i) );

    // Check the buffer is big enough to hold the new data
    if ( strlen(r_buf) + strlen(buf) >= 65536 ) {
        lr_error_message("Buffer exceeded");
        break;
    }

    // Concatenate to the final buffer
    strcat( r_buf, buf ); // Bugfix: this was "strcat( r_buf, "%s", buf );"
}

// Save buffer to a parameter
lr_save_string(r_buf, "dpAllRecords");

// Free memory
free( r_buf );

How to replace a data in a file with another data from another file?

I'm trying to open this file (final.txt) and read the contents:
181100202900027Part No
from which I am reading only [PRTNUM], [QUANTY], [PONUMB], [SERIAL], [UNITS].
I've written the following C program:
char* cStart = strchr(cString, '[');
if (cStart)
{
    // open bracket found
    *cStart++ = '\0'; // split the string at [
    char* cEnd = strchr(cStart, ']');
    // you could check here for the close bracket being found
    // and report an error if not
    *cEnd = '\0'; // terminate the keyword
    printf("Key: %s, Value: %s", cString, cStart);
    // continue the loop
}
but now I want to replace these placeholders with data from the second file:
I want to replace [PRTNUM] (from the first file) with 132424235, and so on. In the end my file should be updated with all this data. Can you tell me what function I should use in the above program?
If you don't mind an alternative approach, here's an algorithm to do the work in an elegant way:
1. Create one (large enough) temporary buffer. Also, create (open) one output file which will hold the modified version.
2. Read a line from the input file into the buffer using fgets().
3. Search for the particular "keyword" using strstr().
4. If a match is found:
4.1. Open the other input file.
4.2. Read the corresponding data (line) using fgets().
4.3. Replace the actual data in the temporary buffer with the newly read value.
4.4. Write the modified data to the output file.
5. If a match is not found, write the original data to the output file. Then go to step 2.
6. Continue until fgets() returns NULL (indicating the file content has been exhausted).
Finally, the output file will have the data from the first file with those particular "placeholders" substituted with the values read from the second file.
Obviously, you need to polish the algorithm a little bit to make it work with multiple "placeholder" strings.
Keep an extra string (call it copy) large enough to hold file 1 plus some extra, to manage the replacement of [PRTNUM] with 132424235.
Start reading the first string, which holds file 1, and keep copying it into the second string (copy); as soon as you encounter [PRTNUM], instead of copying [PRTNUM] into string 2 you append 132424235, and so on for all the others.
Finally, replace file1.txt with this second (copy) string.

libjpeg: process all scanlines once

I use the jpeg library v8d from the Independent JPEG Group and I want to change the way JPEG decompression reads and processes data.
In the djpeg main(), only one scanline/row at a time is read and processed in each jpeg_read_scanlines() call. So, to read the entire image, this function is called until all lines are read and processed:
while (cinfo.output_scanline < cinfo.output_height) {
    num_scanlines = jpeg_read_scanlines(&cinfo, dest_mgr->buffer,
                                        dest_mgr->buffer_height); // read and process
    (*dest_mgr->put_pixel_rows) (&cinfo, dest_mgr, num_scanlines); // write to file
}
But I would like to read the entire image at once, store it in memory, and then process the whole image from memory. Reading libjpeg.txt, I found out this is possible: "You can process an entire image in one call if you have it all in memory, but usually it's simplest to process one scanline at a time."
Even though I made some progress, I couldn't make it work completely. I can now read a couple of rows at once by increasing the pub.buffer_height value and the pub.buffer size, but no matter how large pub.buffer_height and pub.buffer are, only a couple of lines are read in each jpeg_read_scanlines() call. Any thoughts on this?
only a couple of lines are read in each jpeg_read_scanlines()
Yes, jpeg_read_scanlines() may return fewer lines than you ask for, so you call it in a loop. Here's a loop that grabs one scanline at a time into a preallocated image buffer:
unsigned char *rowp[1], *pixdata = ...;
unsigned rowbytes = ..., height = ...;
while (cinfo.output_scanline < height) {
    rowp[0] = pixdata + cinfo.output_scanline * rowbytes;
    jpeg_read_scanlines(&cinfo, rowp, 1);
}
Once the loop exits, you have the entire image in memory.