Kotlin Int to Byte Conversion - arrays

What is the Kotlin 1.5 way to convert a 16-bit integer to a ByteArray of length 2? A secondary problem is that the OutputStream needs a String at the end so it can convert with toByteArray()
# Original Python Code
...
i = int((2**16-1)*ratio) # 16 bit int
i.to_bytes(2, byteorder='big')
output = (i).to_bytes(2, byteorder='big')
# Kotlin Code so far
var i = ((2.0.pow(16) - 1) * ratio).toInt() // Convert to 16 bit Integer
print("16 bit Int: " + i)
output = .....
....
...
val outputStream: OutputStream = socket.getOutputStream()
outputStream.write(output.toByteArray()) // write requires ByteArray for some reason

It is simple math, so it is probably best to calculate it manually and define an extension function:
fun Int.to2ByteArray() : ByteArray = byteArrayOf(toByte(), shr(8).toByte())
Then you can use it:
output = i.to2ByteArray()
outputStream.write(output)
Note, this function writes the integer in little-endian order. If you need big-endian, then just reverse the order of items in the array. You can also add some min/max checks if you need them.
Also, if you only need 16-bit values, you can consider using Short or UShort instead of Int. It doesn't change much regarding memory usage, but it could be a cleaner approach: we could name our extension just toByteArray() and we would not need min/max checks.
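For completeness, a minimal sketch of the big-endian variant described above (the name to2ByteArrayBE is mine, not from the answer); it mirrors the Python to_bytes(2, byteorder='big') call in the question:
fun Int.to2ByteArrayBE(): ByteArray = byteArrayOf(shr(8).toByte(), toByte())

val output = i.to2ByteArrayBE() // e.g. 0x1234 becomes [0x12, 0x34]
outputStream.write(output)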

Related

Converting From a list of bits to a byte array

I am really struggling here as a new programmer with a process using the snap7 library connected to a Siemens PLC using Python 3 on a Raspberry Pi. Basically, I am reading in data as a byte array, modifying it, and sending it back to the PLC. I am able to read it in, convert it to a list, and modify the data.
So my data is a list that looks like [0,0,0,0,0,0,1,0]. It will always be exactly 1 byte (8 bits), so I can modify these bits. However, I am struggling with getting them back into a byte array. I need to convert that list into a byte array response that should look like bytearray(b'\x02')
A couple of examples of what I am expecting:
Input [0,0,0,0,0,0,0,1]
Output bytearray(b'\x01')
Input [0,0,0,0,0,0,1,0]
Output bytearray(b'\x02')
Input [0,0,0,0,0,0,1,1]
Output bytearray(b'\x03')
It is a bit odd that it is a byte array for only 1 byte but that is how the library works for writing to the datablock in the PLC.
Please let me know if there is any additional data I can share
Kevin
First convert the list to a decimal. This can be done in one line using:
sum(val*(2**idx) for idx, val in enumerate(reversed(binary_list)))
but to make the code a little more readable:
binary_list = [0,0,0,0,0,0,1,0]
number = 0
for b in binary_list:
    number = (2 * number) + b
Then simply use bytearray and pass the number as input:
output = bytearray([number])
Changing this into a function
def create_bytearray(binary_list):
    number = 0
    for b in binary_list:
        number = (2 * number) + b
    return bytearray([number])
Now you just have to call
output = create_bytearray([0,0,0,0,0,0,1,0])
print(output)
And you will get
bytearray(b'\x02')
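For reference, the same conversion also fits in one line by treating the bits as a binary string; a minimal sketch, assuming Python 3 and a list of 0/1 integers:
binary_list = [0, 0, 0, 0, 0, 0, 1, 0]
# join the bits into "00000010", parse base 2, wrap in a bytearray
output = bytearray([int("".join(map(str, binary_list)), 2)])
print(output)  # bytearray(b'\x02')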

Scala way for converting Long to ArrayByte

I'm trying to convert a Long to an Array[Byte]. This code block works, but it is a Java solution. I'm looking for a good solution in Scala. How can I convert a Long to an Array[Byte] the Scala way?
import java.nio.ByteBuffer
val arrayByteFromLong: Array[Byte] = ByteBuffer.allocate(8).putLong(myLong).array()
You can leverage scala.math.BigInt:
import scala.math.BigInt
val arrayByteFromLong: Array[Byte] = BigInt(myLong).toByteArray
If you also want to pad the array to 8 bytes, you can do (quick-and-dirty, not-so-efficient version):
arrayByteFromLong.reverse.padTo(8, 0.toByte).reverse
(note the 0.toByte: padding with a plain 0 would widen the element type and you would no longer have an Array[Byte])
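As a quick sanity check (my own sketch, not part of the answer): BigInt.toByteArray produces a minimal two's-complement encoding, so the padded result matches the ByteBuffer version for non-negative longs; negative values would need 0xFF padding instead of zeros:
import java.nio.ByteBuffer
import scala.math.BigInt

val myLong = 0x00011D4AL
val viaBuffer = ByteBuffer.allocate(8).putLong(myLong).array()
val viaBigInt = BigInt(myLong).toByteArray.reverse.padTo(8, 0.toByte).reverse
assert(viaBuffer.sameElements(viaBigInt)) // holds for non-negative values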

Inserting integer array with postgresql in C (libpq)

I'm trying to insert an integer array into my PostgreSQL database. I'm aware that I could format everything as a string and then send that string as one SQL command. However, I believe the PQexecParams function should also be of some help here, but I'm kind of lost as to how to use it.
//we need to convert the number into network byte order
int val1 = 131;
int val2 = 2342;
int val3[5] = { 0, 7, 15, 31, 63 };
//set the values to use
const char *values[3] = { (char *) &val1, (char *) &val2, (char *) val3 };
//calculate the lengths of each of the values
int lengths[3] = { sizeof(val1), sizeof(val2), sizeof(val3) * 5 };
//state which parameters are binary
int binary[3] = { 1, 1, 1 };
PGresult *res = PQexecParams(conn, "INSERT INTO family VALUES($1::int4, $2::int4, $3::INTEGER[])",
    3,       //number of parameters
    NULL,    //ignore the Oid field
    values,  //values to substitute $1 and $2
    lengths, //the lengths, in bytes, of each of the parameter values
    binary,  //whether the values are binary or not
    0);      //we want the result in text format
Yes, this is copied from some tutorial.
However, this returns:
ERROR: invalid array flags
Using a conventional method does work:
PQexec(conn, "INSERT INTO family VALUES (2432, 31, '{0,1,2,3,4,5}')");
Inserts data just fine, and I can read it out fine as well.
Any help would be greatly appreciated! :)
libpq's PQexecParams can accept values in text or binary form.
For text values, you must sprintf the integer into a buffer that you put in your char** values array. This is usually how it's done. You can use text format with query parameters; there is no particular reason to fall back to interpolating the parameters into the SQL string yourself.
If you want to use binary mode transfers, you must instead ensure the integer is the correct size for the target field, is in network byte order, and that you have specified the type OID. Use htonl (for uint32_t) or htons (for uint16_t) for that. It's fine to cast away signedness since you're just re-ordering the bytes.
So:
- You cannot ignore the OID field if you're planning to use binary transfer.
- Use htonl; don't brew your own byte-order conversion.
- Your values array construction is wrong. You're putting char**s into an array of char* and casting away the wrong type. You want &val1[0] or (equivalent in most/all real-world C implementations, but not technically the same per the spec) just val1, instead of (char*)&val1.
- You cannot assume that the on-wire format of integer[] is the same as C's int32_t[]. You must pass the type OID INT4ARRAYOID (see include/catalog/pg_type.h or select oid from pg_type where typname = '_int4' - the internal type name of an array is _ in front of its base type) and must construct a PostgreSQL array value compatible with the typreceive function in pg_type for that type (which is array_recv) if you intend to send in binary mode. In particular, binary-format arrays have a header; you cannot just leave it out (a sketch of that header appears just below).
In other words, the code is broken in multiple exciting ways and cannot possibly work as written.
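To make the header requirement concrete, here is a rough sketch (mine, not from this answer) of the layout array_recv expects for a one-dimensional int4[] with no NULLs; every field is a 4-byte big-endian integer, and the parameter itself must be declared with type OID INT4ARRAYOID (1007) and format 1:
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define PG_INT4OID 23 /* OID of the element type int4 */

/* Fill buf with a binary-format int4[] value; returns the byte length
 * to report in the lengths array. buf must hold 20 + 8*n bytes. */
static size_t build_int4_array_binary(const int32_t *vals, int n, char *buf)
{
    uint32_t hdr[5] = {
        htonl(1),            /* ndim: one dimension */
        htonl(0),            /* flags: must be 0 or 1, else "invalid array flags" */
        htonl(PG_INT4OID),   /* element type OID */
        htonl((uint32_t) n), /* length of dimension 1 */
        htonl(1)             /* lower bound of dimension 1 */
    };
    char *p = buf;
    memcpy(p, hdr, sizeof hdr);
    p += sizeof hdr;
    for (int i = 0; i < n; i++) {
        uint32_t len = htonl(4);                  /* per-element byte length */
        uint32_t val = htonl((uint32_t) vals[i]); /* element in network order */
        memcpy(p, &len, sizeof len); p += sizeof len;
        memcpy(p, &val, sizeof val); p += sizeof val;
    }
    return (size_t) (p - buf);
}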
Really, there is rarely any benefit in sending integers in binary mode. Sending in text mode is often actually faster because it's more compact on the wire for small values. If you're going to use binary mode, you will need to understand how C represents integers, how network vs host byte order works, etc.
Especially when working with arrays, text format is easier.
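For example, a minimal text-mode rewrite of the question's insert might look like this (a sketch using the question's table layout, with error handling abbreviated):
#include <stdio.h>
#include <libpq-fe.h>

void insert_family(PGconn *conn)
{
    char buf1[12], buf2[12];
    snprintf(buf1, sizeof buf1, "%d", 131);
    snprintf(buf2, sizeof buf2, "%d", 2342);

    /* in text mode, arrays are sent in their text literal form */
    const char *values[3] = { buf1, buf2, "{0,7,15,31,63}" };

    PGresult *res = PQexecParams(conn,
        "INSERT INTO family VALUES($1::int4, $2::int4, $3::integer[])",
        3,     /* number of parameters */
        NULL,  /* let the server infer the parameter types */
        values,
        NULL,  /* lengths are ignored for text-format parameters */
        NULL,  /* NULL format array: all parameters are text */
        0);    /* request the result in text format */

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));
    PQclear(res);
}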
libpq could make this a lot easier than it presently does by offering good array construct / deconstruct functions for both text and binary arrays. Patches are, as always, welcome. Right now, 3rd party libraries like libpqtypes largely fill this role.

In Swift, how do I read an existing binary file into an array?

As part of my projects, I have a binary data file consisting of a large series of 32-bit integers that one of my classes reads in on initialization. In my C++ library, I read it in with the following initializer:
Evaluator::Evaluator() {
    m_HandNumbers.resize(32487834);
    ifstream inputReader;
    inputReader.open("/path/to/file/7CHands.dat", ios::binary);
    int inputValue;
    for (int x = 0; x < 32487834; ++x) {
        inputReader.read((char *) &inputValue, sizeof (inputValue));
        m_HandNumbers[x] = inputValue;
    }
    inputReader.close();
};
and in porting to Swift, I decided to read the entire file into one buffer (it's only about 130 MB) and then copy the bytes out of the buffer.
So, I've done the following:
public init() {
    var inputStream = NSInputStream(fileAtPath: "/path/to/file/7CHands.dat")!
    var inputBuffer = [UInt8](count: 32487834 * 4, repeatedValue: 0)
    inputStream.open()
    inputStream.read(&inputBuffer, maxLength: inputBuffer.count)
    inputStream.close()
}
and it works fine, in that when I debug it, I can see that inputBuffer contains the same array of bytes that my hex editor says it should. Now, I'd like to get that data out of there effectively. I know it's stored in little-endian order, with the least significant bytes first (i.e. the number 0x00011D4A is represented as '4A1D 0100' in the file). I'm tempted to just iterate through it manually and calculate the byte values by hand, but I'm wondering if there's a quick way I can pass an array of [Int32] and have it read those bytes in. I tried using NSData, such as with:
let data = NSData(bytes: handNumbers, length: handNumbers.count * sizeof(Int32))
data.getBytes(&inputBuffer, length: inputBuffer.count)
but that didn't seem to load the values (all the values were still zero). Can anyone please help me convert this byte array into some Int32 values? Better yet would be to convert them to Int (i.e. 64-bit integers) just to keep my variable sizes the same across the project.
Not sure about your endian-ness, but I use the following function. The difference from your code is using NSRanges of the actual required type, rather than lengths of bytes. This routine reads one value at a time (it's for ESRI files whose contents vary field by field), but should be easily adaptable.
func getBigIntFromData(data: NSData, offset: Int) -> Int {
    let rng = NSRange(location: offset, length: 4)
    var i = [UInt32](count: 1, repeatedValue: 0)
    data.getBytes(&i, range: rng)
    return Int(i[0].bigEndian) // return Int(i[0]) for littleEndian
}
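A quick usage sketch under the question's assumptions (same hypothetical file path, Swift 1.x-era APIs as in the rest of this thread):
let data = NSData(contentsOfFile: "/path/to/file/7CHands.dat")!
let first = getBigIntFromData(data, offset: 0) // reads bytes 0...3 as big-endian
// for the little-endian file described in the question, change the helper
// to return Int(i[0]) as noted in its comment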
Grimxn provided the backbone of the solution to my problem, showing me how to read sections of the buffer into an array; he then showed me a way to read the entire buffer in, all at once. Rather than needlessly convert all of the items of the array to Int, I simply read the array into the buffer as UInt32 and do the casting to Int in the function that accesses that array.
For now, since I don't have my utility class defined yet, I integrated Grimxn's code directly into my initializer. The class initializer now looks like this:
public class Evaluator {
    let HandNumberArraySize = 32487834
    var handNumbers: [Int32]

    public init() {
        let data = NSData(contentsOfFile: "/path/to/file/7CHands.dat")!
        let dataRange = NSRange(location: 0, length: HandNumberArraySize * 4)
        handNumbers = [Int32](count: HandNumberArraySize, repeatedValue: 0)
        data.getBytes(&handNumbers, range: dataRange)
        println("Evaluator loaded successfully")
    }
    ...
}
... and the function that references them is now:
public func cardVectorToHandNumber(#cards: [Int], numberToUse: Int) -> Int {
    var output = Int(handNumbers[53 + cards[0] + 1])
    for i in 1 ..< numberToUse {
        output = Int(handNumbers[output + cards[i] + 1])
    }
    return Int(handNumbers[output])
}
Thanks to Grimxn and thanks once again to StackOverflow for helping me in a very real way!

How can I efficiently convert a large decimal array into a binary array in MATLAB?

Here's the code I am using now, where decimal1 is an array of decimal values, and B is the number of bits in binary for each value:
for (i = 0:1:length(decimal1)-1)
    out = dec2binvec(decimal1(i+1),B);
    for (j = 0:B-1)
        bit_stream(B*i+j+1) = out(B-j);
    end
end
The code works, but it takes a long time if the length of the decimal array is large. Is there a more efficient way to do this?
nelem = length(decimal1); %# number of values to convert
bitstream = zeros(nelem * B, 1); %# preallocate
for i = 1:nelem
    bitstream((i-1)*B+1:i*B) = fliplr(dec2binvec(decimal1(i), B));
end
I think that should be correct and a lot faster (hope so :) ).
edit:
I think your main problem is that you probably don't preallocate the bit_stream matrix.
I tested both codes for speed, and yours is faster than mine (though not by much) if we both preallocate bitstream, even though I (kinda) vectorized my code.
If we DON'T preallocate the bitstream, my code is A LOT faster. That happens because your code reallocates the matrix more often than mine.
So, if you know B upfront, use your code; otherwise use mine (of course, both have to be modified a little to determine the length at runtime, which is no problem since dec2binvec can be called without the B parameter).
The function DEC2BINVEC from the Data Acquisition Toolbox is very similar to the built-in function DEC2BIN, so some of the alternatives discussed in this question may be of use to you. Here's one option to try, using the function BITGET:
decimal1 = ...; %# Your array of decimal values
B = ...; %# The number of bits to get for each value
nValues = numel(decimal1); %# Number of values in decimal1
bit_stream = zeros(1,nValues*B); %# Initialize bit stream
for iBit = 1:B %# Loop over the bits
    bit_stream(iBit:B:end) = bitget(decimal1,B-iBit+1); %# Get the bit values
end
This should give the same results as your sample code, but should be significantly faster.
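If you want to avoid the loop over bits entirely, the value/bit grid can be built up front. A fully vectorized sketch (mine, not from the answer), assuming decimal1 is a vector of non-negative integers:
[D, bits] = meshgrid(decimal1, B:-1:1); %# D(j,i) = decimal1(i), bits(j,i) = B-j+1
bit_stream = reshape(bitget(D, bits), 1, []); %# column-major reshape keeps bit order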
