I am really struggling here as a new programmer with a process that uses the snap7 library to connect to a Siemens PLC, using Python 3 on a Raspberry Pi. Basically I am reading in data as a byte array, modifying it, and sending it back to the PLC. I am able to read it in, convert it to a list, and modify the data.
So my data is a list that looks like [0,0,0,0,0,0,1,0]. It will always be exactly 1 byte (8 bits), so I can modify these bits. However, I am struggling to get them back into a byte array. I need to convert from that list into a byte array response that should look like bytearray(b'\x02')
A couple of examples of what I am expecting:
Input [0,0,0,0,0,0,0,1]
Output bytearray(b'\x01')
Input [0,0,0,0,0,0,1,0]
Output bytearray(b'\x02')
Input [0,0,0,0,0,0,1,1]
Output bytearray(b'\x03')
It is a bit odd that it is a byte array for only 1 byte but that is how the library works for writing to the datablock in the PLC.
Please let me know if there is any additional data I can share
Kevin
First convert the list to a decimal number. This can be done in one line using:
sum(val*(2**idx) for idx, val in enumerate(reversed(binary)))
but to make the code a little more readable:
binary_list = [0,0,0,0,0,0,1,0]
number = 0
for b in binary_list:
    number = (2 * number) + b
Then simply use bytearray and pass the number as input:
output = bytearray([number])
Turning this into a function:
def create_bytearray(binary_list):
    number = 0
    for b in binary_list:
        number = (2 * number) + b
    return bytearray([number])
Now you just have to call
output = create_bytearray([0,0,0,0,0,0,1,0])
print(output)
And you will get
bytearray(b'\x02')
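As a cross-check against the three examples from the question, Python's int accepts a base argument, so the bit list can also be parsed as a base-2 string. This is just an alternative sketch, independent of snap7:

```python
def bits_to_bytearray(bits):
    # Join the bits into a string like "00000010" and parse it as base 2
    return bytearray([int("".join(map(str, bits)), 2)])

assert bits_to_bytearray([0, 0, 0, 0, 0, 0, 0, 1]) == bytearray(b'\x01')
assert bits_to_bytearray([0, 0, 0, 0, 0, 0, 1, 0]) == bytearray(b'\x02')
assert bits_to_bytearray([0, 0, 0, 0, 0, 0, 1, 1]) == bytearray(b'\x03')
```

Both approaches give the same result; the loop version avoids the string round-trip, which may matter if you call it in a tight polling loop against the PLC.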
Related
What is the Kotlin 1.5 way to convert a 16-bit integer to a ByteArray of length 2? A secondary problem is that the output stream needs a string at the end so it can convert with toByteArray()
# Original Python Code
...
i = int((2**16-1)*ratio) # 16 bit int
i.to_bytes(2, byteorder='big')
output = (i).to_bytes(2, byteorder='big')
# Kotlin Code so far
var i = ((2.0.pow(16) - 1) * ratio).toInt() // Convert to 16 bit Integer
print("16 bit Int: " + i)
output = .....
....
...
val outputStream: OutputStream = socket.getOutputStream()
outputStream.write(output.toByteArray()) // write requires ByteArray for some reason
It is simple math, so it is probably best to calculate it manually and define it as an extension function:
fun Int.to2ByteArray() : ByteArray = byteArrayOf(toByte(), shr(8).toByte())
Then you can use it:
output = i.to2ByteArray()
outputStream.write(output)
Note, this function writes the integer in little-endian order. If you need big-endian, just reverse the order of the items in the array. You can also add some min/max checks if you need them.
Also, if you only need 16-bit values then you can consider using Short or UShort instead of Int. It doesn't change much regarding memory usage, but it could be a cleaner approach - we could name our extension just toByteArray() and we would not need min/max checks.
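The little- vs big-endian distinction above can be sanity-checked in Python, since struct exposes both byte orders explicitly; this is a comparison sketch, not Kotlin:

```python
import struct

i = 0x1234  # a 16-bit value

little = struct.pack("<H", i)  # least significant byte first
big = struct.pack(">H", i)     # most significant byte first

print(little)  # b'\x34\x12'
print(big)     # b'\x12\x34'

# Reversing the little-endian bytes yields the big-endian encoding,
# mirroring the "just reverse the array" advice above
assert bytes(reversed(little)) == big
```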
I am attempting to write an app in Swift that uses this library for playing MOD music files: https://github.com/martincameron/micromod/tree/master/ibxm-ac
The library has a method called replay_get_audio that takes in, alongside the actual data to be processed (replay and mute), an int pointer to a buffer where audio data should be written to (mix_buf):
int replay_get_audio( struct replay *replay, int *mix_buf, int mute )
The problem is that no matter what I do, I can't seem to access the data that gets written to this buffer in Swift. It's showing up as an array of zeroes in the variable inspector when I convert the bufferPointer to the data to an array:
let dataSize = Int(calculate_mix_buf_len(44100)) // 14040 is the output for my mod file - this is the necessary memory allocation amount for mix_buf according to the library
let mixBufDataPointer = UnsafeMutablePointer<Int32>.allocate(capacity: dataSize)
let resultingNumSamplesWrittenToMixBuf = replay_get_audio(replay, mixBufDataPointer, 0) // The function outputs the number of samples written to mixBuf - it is returning 882 for my mod file, so I assume it is writing the data successfully
let mixBufBufferPointer = UnsafeMutableBufferPointer(start: mixBufDataPointer, count: dataSize)
let arrayDataForMixBuf = Array(mixBufBufferPointer) // An array of 14040 Int32s - all with the value of 0...as if the data is not being written
If I manually write a value to, say, mixBufDataPointer[1], I can see it shows up in the arrayDataForMixBuf, so I know the reading using mixBufBufferPointer and the associated array conversion are working correctly. The issue appears to be where the library writes into the mixBufDataPointer.
Am I missing something? I'm completely new to pointers in Swift so this is all new to me.
My ultimate goal is to be able to send an array of floats over a UDP socket, but for now I'm just trying to get a few things working in Python 3.
The code below works just fine:
import struct
fake_data = struct.pack('f', 5.38976)
print(fake_data)
data1 = struct.unpack('f', fake_data)
print(data1)
Output:
b'\xeax\xac#'
(5.3897600173950195,)
But when I try this I get:
electrode_data = [1.22, -2.33, 3.44]
for i in range(3):
    data = struct.pack('!d', electrode_data[i])  # float -> bytes
    print(data[i])
    x = struct.unpack('!d', data[i])  # bytes -> float
    print(x[i])
Output:
63
Traceback (most recent call last):
File "cbutton.py", line 18, in <module>
x = struct.unpack('!d', data[i]) # bytes -> float
TypeError: a bytes-like object is required, not 'int'
How can I turn a float array into a byte array and vice versa? The reason I'm trying to accomplish this is that the first code allows me to send float data from a client to a server (one by one) using a UDP socket. My ultimate goal is to do this with an array so I can plot the values using matplotlib.
You're only packing a single float here. But then you're trying to pass the first byte of the resulting buffer (which was implicitly converted to int) to unpack. You need to give it the entire buffer. Also, to do this in a more general way, you want to first encode the number of items in your array as an integer.
import struct
electrode_data = [1.22, -2.33, 3.44]
# First encode the number of data items, then the actual items
data = struct.pack("!I" + "d" * len(electrode_data), len(electrode_data), *electrode_data)
print(data)
# Pull the number of encoded items (Note a tuple is returned!)
elen = struct.unpack_from("!I", data)[0]
# Now pull the array of items
e2 = struct.unpack_from("!" + "d" * elen, data, 4)
print(e2)
(The *electrode_data means to flatten the list: it's the same as electrode_data[0], electrode_data[1]...)
If you really only want to do one at a time:
for elem in electrode_data:
    data = struct.pack("!d", elem)
    print(data)
    # Again note that unpack *always* returns a tuple (even if only one member)
    d2 = struct.unpack("!d", data)[0]
    print(d2)
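To connect this back to the UDP goal, the length-prefixed encoding above can be wrapped in a pack/unpack pair and the resulting bytes sent as a single datagram. This is a minimal sketch; the host and port are placeholders:

```python
import socket
import struct

def pack_floats(values):
    # Length-prefixed encoding: item count as "!I", then one "!d" per float
    return struct.pack("!I" + "d" * len(values), len(values), *values)

def unpack_floats(data):
    (n,) = struct.unpack_from("!I", data)
    return list(struct.unpack_from("!" + "d" * n, data, 4))

payload = pack_floats([1.22, -2.33, 3.44])
assert unpack_floats(payload) == [1.22, -2.33, 3.44]

# Sending the whole array in one datagram (address is hypothetical):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(payload, ("127.0.0.1", 9999))
```

On the receiving side, a single recvfrom yields the whole payload, and unpack_floats recovers the list ready for matplotlib.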
The title is probably not accurate but I hope that reading this post you can understand what I want to do.
I'm kind of stuck here. I'm new to Java ME, which unfortunately has, as you know, fewer methods than Java SE.
What I want to accomplish is this: I have a txt, with numbers in it separated by space.
I want to put them in an array that can "behave like a usual array in C++" when one reads numbers separated by spaces into an array.
Now in J2ME, what I've done (using NetBeans) is: I read the txt into a stream, then sent the stream to a byte array, and finally converted it to a char array.
Let's say the original txt was: 98 2 12 13
Part of the code is:
InputStream is = getClass().getResourceAsStream("models.txt");
try {
    int st_pk1_len = is.available();
    byte st_pk1[] = new byte[st_pk1_len];
    is.read(st_pk1);
    char st_pk1_char[] = new String(st_pk1).toCharArray();
    System.out.println(st_pk1_char);
} catch (IOException e) {
    e.printStackTrace();
}
What I get printed is: 98 2 12 13
My problem, though, is that when I access index 0 I get only the character 9 and not 98. If I try to reach the number 12, I point the index to 3, but what I get is an empty space, and so on.
I've searched and tried different methods that I've found, without luck, to convert that back into the original numbers.
It could be a stupid mistake on my side or something I haven't thought of.
Isn't there a simple solution to this problem?
Update: it's working now! The array behaves like a "regular" C++ char array. In case somebody else needs it or has the same problem, here is how it looks:
InputStream is = getClass().getResourceAsStream("st_pk1.txt");
int st_pk1_len = is.available();
byte st_pk1[] = new byte[st_pk1_len];
is.read(st_pk1);
char st_pk1_char[] = new String(st_pk1).toCharArray();
String PreSplitted = new String(st_pk1);
String AftSplit[] = Split(PreSplitted, " ");
If you want to check: System.out.println(AftSplit[n]);
For the split method I used the second link in Gnat's post.
Split text in J2ME
You can treat the char array as String, containing tokens separated by space: new String(st_pk1) does that for you.
After that, you need to split it, like as described in couple other Stack Overflow questions:
How do I split strings in J2ME?
Split text in J2ME
Here's the code I am using now, where decimal1 is an array of decimal values, and B is the number of bits in binary for each value:
for i = 0:1:length(decimal1)-1
    out = dec2binvec(decimal1(i+1),B);
    for j = 0:B-1
        bit_stream(B*i+j+1) = out(B-j);
    end
end
The code works, but it takes a long time if the length of the decimal array is large. Is there a more efficient way to do this?
bitstream = zeros(nelem * B,1);
for i = 1:nelem
    bitstream((i-1)*B+1:i*B) = fliplr(dec2binvec(decimal1(i),B));
end
I think that should be correct and a lot faster (hope so :) ).
edit:
I think your main problem is that you probably don't preallocate the bit_stream matrix.
I tested both codes for speed and I see that yours is faster than mine (not very much tho), if we both preallocate bitstream, even though I (kinda) vectorized my code.
If we DON'T preallocate the bitstream, my code is A LOT faster. That happens because your code reallocates the matrix more often than mine.
So, if you know the B upfront, use your code, else use mine (of course both have to be modified a little bit to determine the length at runtime, which is no problem since dec2binvec can be called without the B parameter).
The function DEC2BINVEC from the Data Acquisition Toolbox is very similar to the built-in function DEC2BIN, so some of the alternatives discussed in this question may be of use to you. Here's one option to try, using the function BITGET:
decimal1 = ...; %# Your array of decimal values
B = ...; %# The number of bits to get for each value
nValues = numel(decimal1); %# Number of values in decimal1
bit_stream = zeros(1,nValues*B); %# Initialize bit stream
for iBit = 1:B                                        %# Loop over the bits
    bit_stream(iBit:B:end) = bitget(decimal1,B-iBit+1); %# Get the bit values
end
This should give the same results as your sample code, but should be significantly faster.
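The same MSB-first bit extraction can be mimicked with shifts in Python, which may help verify the MATLAB output on small inputs; this is a sketch where right-shifts stand in for BITGET:

```python
def dec_to_bitstream(values, B):
    # For each value, emit its B bits most-significant first, concatenated,
    # mirroring the dec2binvec + fliplr combination in the MATLAB code
    stream = []
    for v in values:
        stream.extend((v >> (B - k - 1)) & 1 for k in range(B))
    return stream

print(dec_to_bitstream([2, 3], 4))  # [0, 0, 1, 0, 0, 0, 1, 1]
```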