Convert 32-bit network byte order to host order in C

I'm trying to convert a uint32_t from network byte order to host byte order. I'm reading 4 bytes from a TCP connection and storing them in a buffer like this:
ssize_t read = 0;
char *file_buf;
size_t fb_size = 4 * sizeof(char);
file_buf = malloc(fb_size);
read = recv(socket_file_descriptor,file_buf,fb_size,0);
So the raw bytes end up in file_buf, but what I want is the number itself. How can I do this?

This looks straightforward:
ssize_t read = 0;
uint32_t myInteger; // Declare a 32-bit uint.
// Pass a pointer to the integer, and the size of the integer.
read = recv(socket_file_descriptor,&myInteger,sizeof(myInteger),0);
myInteger = ntohl(myInteger); // Change from Network order to Host order.
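One caveat worth adding (not from the original answer): on a stream socket, recv() may return fewer than 4 bytes, so a robust version loops until the whole integer has arrived - a minimal sketch:
uint32_t myInteger;
size_t got = 0;
while (got < sizeof(myInteger)) {
    ssize_t n = recv(socket_file_descriptor, (char *)&myInteger + got, sizeof(myInteger) - got, 0);
    if (n <= 0) { /* handle error or closed connection */ break; }
    got += (size_t)n;
}
myInteger = ntohl(myInteger); // safe only if all 4 bytes arrived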

Here's how I would do it. Note the use of ntohl() to convert the data from network-endian to host-endian form:
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <sys/socket.h>
[...]
char file_buf[4];
if (recv(socket_file_descriptor,file_buf,sizeof(file_buf),0) == sizeof(file_buf))
{
uint32_t * p = (uint32_t *) file_buf;
uint32_t num = ntohl(*p);
printf("The number is %u\n", num);
}
else printf("Short read or network error?\n");

Some OSes (Linux with glibc, BSDs) have size-specific endianness conversion functions too, to supplement ntohl() and ntohs().
#include <endian.h> // Might need <sys/endian.h> instead on some BSDs
void your_function(uint32_t bigend_int) {
uint32_t host_int = be32toh(bigend_int); // big-endian (network order) to host order
// ... use host_int ...
}
Edit:
But since you seem to have easy access to the individual bytes, there's always Rob Pike's preferred approach (cast each byte to an unsigned type first, so a signed char can't sign-extend into the result):
uint32_t host_int = ((uint32_t)(uint8_t)file_buf[3]<<0) | ((uint32_t)(uint8_t)file_buf[2]<<8) | ((uint32_t)(uint8_t)file_buf[1]<<16) | ((uint32_t)(uint8_t)file_buf[0]<<24);

Related

OpenSSL EVP AES Encryption and Base64 Encoding producing unusable results

So I'm trying to reproduce an encryption and encoding operation in C that I've managed to make work in C#, JScript, Python and Java. Now, it's mostly just for obfuscating data - not actual encryption - so it's basically for aesthetic purposes only.
First things first, the data string being encrypted looks like this:
"[3671,3401,736,1081,0,32558], [3692,3401,748,1105,0,32558], [3704,3401,774,1162,0,32558], [3722,3401,774,1162,0,32558], [3733,3401,769,1172,0,32558]"
The first big issue for C is that this can vary in length. Each [x,y,z,a,b,c] represents some data point, and the actual string that will be encrypted can have anywhere from one data point to 100. So I'm sure my memory management might be broken somewhere as well. The second issue is that I don't seem to be getting the correct expected result after encoding. After encrypting, the byte result of the C cipher is the same as the Python cipher. But when I encode to base64 in C, it does not get the expected result at all.
#include <X11/Xlib.h>
#include <assert.h>
#include <unistd.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <malloc.h>
#include <time.h>
#include <errno.h>
#include <linux/input.h>
#include <fcntl.h>
#include <string.h>
#include <openssl/sha.h>
#include <openssl/hmac.h>
#include <openssl/evp.h>
#include <openssl/kdf.h>
#include <openssl/params.h>
#include <openssl/bio.h>
#include <openssl/buffer.h>
void PBKDF2_HMAC_SHA_1(const char* pass, int passlen, const unsigned char* salt, int saltlen, int32_t iterations, uint32_t outputBytes, char* hexResult, uint8_t* binResult)
{
unsigned int i;
unsigned char digest[outputBytes];
PKCS5_PBKDF2_HMAC(pass, passlen, salt, saltlen, iterations, EVP_sha1(), outputBytes, digest);
for (i = 0; i < sizeof(digest); i++)
{
sprintf(hexResult + (i * 2), "%02x", 255 & digest[i]);
binResult[i] = digest[i];
}
}
int main(void){
char intext[] = "[3671,3401,736,1081,0,32558], [3692,3401,748,1105,0,32558], [3704,3401,774,1162,0,32558], [3722,3401,774,1162,0,32558], [3733,3401,769,1172,0,32558]";
int outlen, final_length;
EVP_CIPHER_CTX *ctx;
ctx = EVP_CIPHER_CTX_new();
size_t i;
char sid[] = "u9SXNMeTkvyBr3n81SJ7Lj216w04gJ99";
char pk[] = "jeIHjod1cZeM1U04cy8z7488AeY1Sl25";
uint32_t outputBytes = 48;
uint32_t iterations = 128;
unsigned char byteresult[2*outputBytes+1];
char hexresult[2*outputBytes+1];
memset(byteresult,0,sizeof(byteresult));
uint8_t binResult[outputBytes+1];
memset(binResult,0,sizeof(binResult));
char *finResult = NULL;
char key[65];
memset(key,0,sizeof(key));
char * keystart = hexresult +32;
char iv[33];
memset(iv,0,sizeof(iv));
PBKDF2_HMAC_SHA_1(sid,strlen(sid),pk,strlen(pk),iterations,outputBytes,hexresult,binResult);
memcpy(key, keystart,64);
memcpy(iv, hexresult,32);
EVP_CipherInit_ex(ctx, EVP_aes_256_cbc(), NULL,(unsigned char *)key, (unsigned char *)iv, 1);
unsigned char *outbuf;
int outbuflen = sizeof(intext) + EVP_MAX_BLOCK_LENGTH - (sizeof(intext) % 16);
outbuf = (unsigned char *)malloc(outbuflen);
EVP_CipherUpdate(ctx, outbuf, &outbuflen,(unsigned char *)intext, strlen(intext));
EVP_CipherFinal_ex(ctx, outbuf + outbuflen, &final_length);
outlen += final_length;
EVP_CIPHER_CTX_free(ctx);
char bytesout[strlen(outbuf) + outbuflen];
int buflen = 0;
for (i=0;i< outbuflen + final_length;i++)
{
buflen += 1;
sprintf(bytesout + (i * 2),"%02x", outbuf[i]);
}
printf("bytesout: %s\n", bytesout);
char outtext[sizeof(bytesout)];
memset(outtext,0, sizeof(outtext));
int outtext_len = sizeof(outtext);
EVP_ENCODE_CTX *ectx = EVP_ENCODE_CTX_new();
EVP_EncodeInit(ectx);
EVP_EncodeBlock(outtext, bytesout, sizeof(bytesout));
EVP_EncodeFinal(ectx, (unsigned char*)outtext, &outtext_len);
EVP_ENCODE_CTX_free(ectx);
printf("b64Encoded String %s \n", outtext);}
Makefile:
gcc simplecipher.c -o simplecipher -lX11 -lncurses -lssl -lcrypto
Result:
bytesout: eafafcde5c00eb6e649d61a09f9b52d13dd8c783d73afcbc03dfb5cea0cd3ab627528ec1b2997105871d570c0b972349943800aacd063093d97f7f39554775aa4256bd26599dde66bb76b925d9f021f6b657d1a91eb08e1900b6ad91f7f65b97e1a7e17b8d959a65d6893af458e26761536b3ffdf470f89f1aac24ca02782fb8a691c25b368549387890dc73143bb213e0ce616264e5b30add3b480c24f5edc6
b64Encoded String ZWFmYWZjZGU1YzAwZWI2ZTY0OWQ2MWEwOWY5YjUyZDEzZGQ4Yzc4M2Q3M2FmY2JjMDNkZmI1Y2VhMGNkM2FiNjI3NTI4ZWMxYjI5OTcxMDU4NzFkNTcwYzBiOTcyMzQ5OTQzODAwYWFjZDA2MzA5M2Q5N2Y3ZjM5NTU0Nzc1YWE0MjU2YmQyNjU5OWRkZTY2YmI3NmI=
When I run a similar script in Python:
import base64
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes
from Cryptodome.Protocol.KDF import PBKDF2
from Crypto.Util.Padding import pad
import binascii
symmetric_key = "u9SXNMeTkvyBr3n81SJ7Lj216w04gJ99"
salt = "jeIHjod1cZeM1U04cy8z7488AeY1Sl25"
pbbytes = PBKDF2(symmetric_key.encode("utf-8"), salt.encode("utf-8"), 48, 128)
iv = pbbytes[0:16]
key = pbbytes[16:48]
half_iv=iv[0:8]
half_key=key[0:16]
cipher = AES.new(key, AES.MODE_CBC, iv)
cipher = AES.new(binascii.hexlify(bytes(half_key)), AES.MODE_CBC, binascii.hexlify(bytes(half_iv)))
print("test encoding:")
intext = b"[3671,3401,736,1081,0,32558], [3692,3401,748,1105,0,32558], [3704,3401,774,1162,0,32558], [3722,3401,774,1162,0,32558], [3733,3401,769,1172,0,32558]"
print("intext pre padding: ", intext)
paddedtext = pad(intext,16)
print("intext post padding: ", paddedtext)
en_bytes = cipher.encrypt(paddedtext)
print("encrypted bytes: ", binascii.hexlify(bytearray(en_bytes)))
en_data = base64.b64encode(en_bytes)
en_bytes_string = ''.join(map(chr, en_bytes))
print("encoded bytes: ", en_data)
Result:
encrypted bytes: b'eafafcde5c00eb6e649d61a09f9b52d13dd8c783d73afcbc03dfb5cea0cd3ab627528ec1b2997105871d570c0b972349943800aacd063093d97f7f39554775aa4256bd26599dde66bb76b925d9f021f6b657d1a91eb08e1900b6ad91f7f65b97e1a7e17b8d959a65d6893af458e26761536b3ffdf470f89f1aac24ca02782fb8a691c25b368549387890dc73143bb213e0ce616264e5b30add3b480c24f5edc6'
encoded bytes: b'6vr83lwA625knWGgn5tS0T3Yx4PXOvy8A9+1zqDNOrYnUo7BsplxBYcdVwwLlyNJlDgAqs0GMJPZf385VUd1qkJWvSZZnd5mu3a5JdnwIfa2V9GpHrCOGQC2rZH39luX4afhe42VmmXWiTr0WOJnYVNrP/30cPifGqwkygJ4L7imkcJbNoVJOHiQ3HMUO7IT4M5hYmTlswrdO0gMJPXtxg=='
So as you can see, the encoded portion comes out completely differently in the C application. In Jscript, C#, and Java it comes out exactly as in the python script. The encrypted portion, however, is the same between the two. Just encoding seems to break it. Now this could be 100% because I've absolutely butchered something when passing the bytes/char arrays around. I just can't seem to find out where in the chain I've broken down here. Any suggestions?
The C code base64-encodes the wrong buffer, namely bytesout, which is already ASCII text:
for (i=0;i< outbuflen + final_length;i++)
{
buflen += 1;
sprintf(bytesout + (i * 2),"%02x", outbuf[i]);
}
You need to encode outbuf instead.
PS: the code cries for a serious cleanup.
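For what it's worth, a minimal sketch of that fix, reusing the question's variable names (the output buffer is sized for the base64 form plus a terminating NUL):
int cipher_len = outbuflen + final_length;            /* total ciphertext bytes */
unsigned char b64out[4 * ((cipher_len + 2) / 3) + 1]; /* base64 output + NUL */
EVP_EncodeBlock(b64out, outbuf, cipher_len);          /* NUL-terminates the result */
printf("b64Encoded String %s\n", b64out);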
Alright,
Just wanted to say thanks to everyone who commented and answered, but I did figure it out this morning. Basically, using
EVP_EncodeBlock(outtext, outbuf, buflen);
is what solved it. Before, I'd pass in either sizeof(outtext) or sizeof(outbuf), and that would only encode what looked like part of the first data point (likely up to the first ',' or something). But this fixes it. I can now encrypt a string of data points regardless of their starting size and decrypt it in Python. I had buflen in there just to debug the number of bytes being written to the bytesout char array, but it seemed to do the trick.
Cheers, everyone!
I was trying to do the same thing, and just finished doing so. I believe your question is misleading. You are not actually encoding a digest in base64. Rather, you are encoding the hexadecimal representation of a digest in base64 (as user58697 already stated in his own response). Also, as specified in Ian Abbott's comment, you're using EVP_ENCODE_CTX wrong.
I believe most people would actually want to encode the digest itself in base64. If you're trying to implement stuff like xmlenc (and I assume most specifications that use these base64 encoded digests), it can be done in the following fashion, using libcrypto~3.0:
void base64_digest(const char* input, int input_length)
{
// Generating a digest
EVP_MD_CTX* context = EVP_MD_CTX_new();
const EVP_MD* md = EVP_sha512();
unsigned char md_value[EVP_MAX_MD_SIZE];
unsigned int md_len;
EVP_DigestInit_ex2(context, md, NULL);
EVP_DigestUpdate(context, input, input_length);
EVP_DigestFinal_ex(context, md_value, &md_len);
// Encoding digest to base64
char output[4 * ((EVP_MAX_MD_SIZE + 2) / 3) + 1]; // base64 needs 4 output chars per 3 input bytes,
// plus a trailing NUL (EVP_MAX_MD_SIZE alone is too small for a SHA-512 digest)
EVP_EncodeBlock((unsigned char*)output, md_value, md_len);
// cleanup
EVP_MD_CTX_free(context);
printf("Base64-encoded digest: %s\n", output);
}
Incidentally, the result will be much shorter (with padding, 88 characters is the expected length, while I believe you'll get 172 characters by encoding the hex digest instead).
You also don't need to use EVP_ENCODE_CTX, EVP_EncodeInit nor EVP_EncodeFinal, as EVP_EncodeBlock doesn't need any of these.
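For completeness, a minimal way to call the function above might look like this (string.h is assumed for strlen; the input string is arbitrary):
#include <string.h>
int main(void)
{
    const char *msg = "some input";       /* arbitrary example input */
    base64_digest(msg, (int)strlen(msg)); /* prints the base64-encoded SHA-512 digest */
    return 0;
}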
For C++ developers, I also have an implementation at https://github.com/crails-framework/libcrails-encrypt (check out the MessageDigest class).

How to read and display file data using int86 function with REGS struct for 8086 in C-Language

I have a text file with some content. I have to move the file pointer relative to the BOF (beginning of file) and display the file's content on the screen using int 21h/42h.
Here is the code I am working on. I am using Windows 98 (16-bit DOS) in a VM, and it's part of my system programming assignment, so I have to use it. I tried Turbo C++ with DOSBox, but it has some issues.
On printing buff, it displays random values.
Code
#include <stdio.h>
#include <conio.h>
#include <fcntl.h>
#include <bios.h>
#include <dos.h>
unsigned int handle;
char buff[50];
void main(){
union REGS regs; // set pointer
union REGS regs_r; // read file
handle = open("text.txt", O_RDONLY);
// set pointer to BOF (Begenning of File)
regs.x.bx = handle;
regs.h.ah = 0x42; // LSEEK
regs.h.al = 0x00; // Mode (0) = offset from BOF
regs.x.cx = 0;
regs.x.dx = 0;
int86(0x21, &regs, &regs);
// read the file
regs_r.x.bx = handle;
regs_r.x.cx = 0x07; // bytes to read
regs_r.h.ah = 0x3f; // INT 21h/3Fh: read from file handle
regs_r.x.dx = (unsigned int) buff; // buffer for data
int86(0x21, &regs_r, &regs_r);
printf("DATA : %c", buff);
getch();
clrscr();
}
Here are some reference links:
8086 int 21h/42h
asm move File Pointer
Any help will be appreciated.
To get the data segment, you could try it this way:
Declare a far pointer, assign it the address of your buffer, and cast it.
char buffer[100];
char far *ptr = (char far *)&buffer;
Now take the 16 most significant bits of the pointer, taking care that no sign extension creeps in:
unsigned long addr_value = (unsigned long)ptr;
unsigned int data_segment = (unsigned int)(addr_value >> 16);
It should work.
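For what it's worth, here is a sketch of how the read call could then pass that segment, using int86x() so DS can be loaded explicitly (FP_SEG, FP_OFF, segread and struct SREGS come from Turbo C's dos.h; this is an illustration, not tested on the asker's setup):
#include <dos.h> /* int86x, segread, FP_SEG, FP_OFF, struct SREGS */
void dos_read(unsigned int handle, char far *buff, unsigned int count)
{
    union REGS regs;
    struct SREGS sregs;
    segread(&sregs);             /* start from the current segment registers */
    regs.h.ah = 0x3f;            /* INT 21h/3Fh: read from file handle */
    regs.x.bx = handle;
    regs.x.cx = count;           /* bytes to read */
    regs.x.dx = FP_OFF(buff);    /* DS:DX must point at the buffer */
    sregs.ds  = FP_SEG(buff);
    int86x(0x21, &regs, &regs, &sregs);
}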

How to send data in little endian order C

I am communicating with a board that requires me to send it 2 signed bytes.
explanation of data type
what I need to send
Do I need bitwise manipulation, or can I just send a 16-bit integer as follows?
int16_t rc_min_angle = -90;
int16_t rc_max_angle = 120;
write(fd, &rc_min_angle, 2);
write(fd, &rc_max_angle, 2);
int16_t has the correct size, but it may or may not have the correct endianness. To ensure little-endian order, use macros such as the ones from endian.h:
#define _BSD_SOURCE
#include <endian.h>
...
uint16_t rc_min_angle_le = htole16(rc_min_angle);
uint16_t rc_max_angle_le = htole16(rc_max_angle);
write(fd, &rc_min_angle_le, 2);
write(fd, &rc_max_angle_le, 2);
Here htole16 stands for "host to little endian 16-bit". It converts from the host machine's native endianness to little endian: if the machine is big endian it swaps the bytes; if it's little endian it's a no-op.
Also note that you have to pass the address of the values to write(), not the values themselves. Sadly, we cannot inline the calls and write write(fd, htole16(rc_min_angle), 2).
If endian functions are not available, simply write the bytes in little-endian order yourself, perhaps with a compound literal. Converting to uint16_t first keeps a negative angle's sign bit from corrupting the high byte:
// v----------------------- compound literal ------------------------v
write(fd, &(uint8_t[2]){ (uint16_t)rc_min_angle % 256, (uint16_t)rc_min_angle / 256 }, 2);
write(fd, &(uint8_t[2]){ (uint16_t)rc_max_angle % 256, (uint16_t)rc_max_angle / 256 }, 2);
//                       ^-------- LS byte --------^   ^-------- MS byte --------^
I added the & assuming the write() here is like write(2) on Linux.
If you don't need to have it type-generic, you can simply do:
#include <stdint.h>
#include <unistd.h>
/*most optimizers will turn this into `return 1;`*/
_Bool little_endian_eh() { uint16_t x = 1; return *(char *)&x; }
void swap2bytes(void *X) { char *x=X,t; t=x[0]; x[0]=x[1]; x[1]=t; }
int main()
{
int16_t rc_min_angle = -90;
int16_t rc_max_angle = 120;
//this'll very likely be a noop since most machines
//are little-endian
if(!little_endian_eh()){
swap2bytes(&rc_min_angle);
swap2bytes(&rc_max_angle);
}
//TODO error checking on write calls
int fd =1;
write(fd, &rc_min_angle, 2);
write(fd, &rc_max_angle, 2);
}
To send little-endian data, you can just generate the bytes manually:
int write_le(int fd, int16_t val) {
unsigned char val_le[2] = {
val & 0xff, (uint16_t) val >> 8
};
int nwritten = 0, total = 2;
while (nwritten < total) {
int n = write(fd, val_le + nwritten, total - nwritten);
if (n == -1)
return nwritten > 0 ? nwritten : -1;
nwritten += n;
}
return nwritten;
}
On a little-endian platform, a good compiler will recognize that the bit manipulation is a no-op and optimize it away. (See e.g. gcc generating the same code for the variants with and without the bit-twiddling.)
Note also that you shouldn't ignore the return value of write() - not only can it encounter an error, it can also write less than you asked it to, in which case you must repeat the write.
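For instance, with the question's values it might be used like this (fd is assumed to be the already-open descriptor, and the short-write handling above still applies):
int16_t rc_min_angle = -90;
int16_t rc_max_angle = 120;
if (write_le(fd, rc_min_angle) != 2 || write_le(fd, rc_max_angle) != 2) {
    /* handle error or short write */
}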

Ruby, ioctl, and complex structures

I have a piece of hardware that I'm trying to control via my computer's built-in SPI driver. The SPI driver is controlled via ioctl.
I can successfully drive the hardware from a small C program; but when I try to duplicate the C program in Ruby I run into problems.
Using IO#ioctl to set basic registers (with u32 and u8 ints) works fine (I know because I can also use ioctl to read back the values I set); but as soon as I try to set a complex struct, the program fails with
small.rb:51:in 'ioctl': Connection timed out # rb_ioctl - /dev/spidev32766.0 (Errno::ETIMEDOUT)
I might be running into trouble because the spi_ioc_transfer struct has two pointers to byte buffers but the pointers are typed as unsigned 64-bit ints even on 32-bit platforms -- necessitating a cast to (unsigned long) in C. I'm trying to replicate that in Ruby but am quite unsure of myself.
Below are the C program which works and the Ruby port which doesn't work. The do_latch functions are necessary so I can see the result in my hardware; but are probably not germane to this problem.
C (which works):
#include <stdint.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>
int do_latch() {
int fd = open("/sys/class/gpio/gpio1014/value", O_RDWR);
write(fd, "1", 1);
write(fd, "0", 1);
close(fd);
}
int do_transfer(int fd, uint8_t *bytes, size_t len) {
uint8_t *rx_bytes = malloc(sizeof(uint8_t) * len);
struct spi_ioc_transfer transfer = {
.tx_buf = (unsigned long)bytes,
.rx_buf = (unsigned long)rx_bytes,
.len = len,
.speed_hz = 100000,
.delay_usecs = 0,
.bits_per_word = 8,
.cs_change = 0,
.tx_nbits = 0,
.rx_nbits = 0,
.pad = 0
};
if(ioctl(fd, SPI_IOC_MESSAGE(1), &transfer) < 1) {
perror("Could not send SPI message");
exit(1);
}
free(rx_bytes);
}
int main() {
int fd = open("/dev/spidev32766.0", O_RDWR);
uint8_t mode = 0;
ioctl(fd, SPI_IOC_WR_MODE, &mode);
uint8_t lsb_first = 0;
ioctl(fd, SPI_IOC_WR_LSB_FIRST, lsb_first);
uint32_t speed_hz = 100000;
ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, speed_hz);
size_t data_len = 36;
uint8_t *tx_data = malloc(sizeof(uint8_t) * data_len);
memset(tx_data, 0xFF, data_len);
do_transfer(fd, tx_data, data_len);
do_latch();
sleep(2);
memset(tx_data, 0x00, data_len);
do_transfer(fd, tx_data, data_len);
do_latch();
free(tx_data);
close(fd);
return 0;
}
Ruby (which fails on the ioctl line in do_transfer):
SPI_IOC_WR_MODE = 0x40016b01
SPI_IOC_WR_LSB_FIRST = 0x40016b02
SPI_IOC_WR_BITS_PER_WORD = 0x40016b03
SPI_IOC_WR_MAX_SPEED_HZ = 0x40046b04
SPI_IOC_WR_MODE32 = 0x40046b05
SPI_IOC_MESSAGE_1 = 0x40206b00
def do_latch()
File.open("/sys/class/gpio/gpio1014/value", File::RDWR) do |file|
file.write("1")
file.write("0")
end
end
def do_transfer(file, bytes)
##########################################################################################
#begin spi_ioc_transfer struct (cat /usr/include/linux/spi/spidev.h)
#pack bytes into a buffer; create a new buffer (filled with zeroes) for the rx
tx_buff = bytes.pack("C*")
rx_buff = (Array.new(bytes.size) { 0 }).pack("C*")
#on 32-bit, the struct uses a zero-extended pointer for the buffers (so it's the same
#byte layout on 64-bit as well) -- so do some trickery to get the buffer addresses
#as 64-bit strings even though this is running on a 32-bit computer
tx_buff_pointer = [tx_buff].pack("P").unpack("L!")[0] #u64 (zero-extended pointer)
rx_buff_pointer = [rx_buff].pack("P").unpack("L!")[0] #u64 (zero-extended pointer)
buff_len = bytes.size #u32
speed_hz = 100000 #u32
delay_usecs = 0 #u16
bits_per_word = 8 #u8
cs_change = 0 #u8
tx_nbits = 0 #u8
rx_nbits = 0 #u8
pad = 0 #u16
struct_array = [tx_buff_pointer, rx_buff_pointer, buff_len, speed_hz, delay_usecs, bits_per_word, cs_change, tx_nbits, rx_nbits, pad]
struct_packed = struct_array.pack("QQLLSCCCCS")
#in C, I pass a pointer to the structure; so mimic that here
struct_pointer_packed = [struct_packed].pack("P")
#end spi_ioc_transfer struct
##########################################################################################
file.ioctl(SPI_IOC_MESSAGE_1, struct_pointer_packed)
end
File.open("/dev/spidev32766.0", File::RDWR) do |file|
file.ioctl(SPI_IOC_WR_MODE, [0].pack("C"));
file.ioctl(SPI_IOC_WR_LSB_FIRST, [0].pack("C"));
file.ioctl(SPI_IOC_WR_MAX_SPEED_HZ, [0].pack("L"));
data_bytes = Array.new(36) { 0x00 }
do_transfer(file, data_bytes)
do_latch()
sleep(2)
data_bytes = []
data_bytes = Array.new(36) { 0xFF }
do_transfer(file, data_bytes)
do_latch()
end
I pulled the magic number constants out by having C print them (they're macros in C). I can validate that most of them work; I'm a little unsure about the ioctl message that fails (SPI_IOC_MESSAGE_1) since that doesn't work and it's a complicated macro. Still, I have no reason to think that it's incorrect and it's always the same when I look at it from C.
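(For reference, the check described above can be reproduced with a tiny C helper along these lines - purely illustrative, it just prints the same request numbers the Ruby script hard-codes:)
#include <stdio.h>
#include <linux/spi/spidev.h>
int main(void)
{
    printf("SPI_IOC_WR_MODE         = 0x%08lx\n", (unsigned long)SPI_IOC_WR_MODE);
    printf("SPI_IOC_WR_MAX_SPEED_HZ = 0x%08lx\n", (unsigned long)SPI_IOC_WR_MAX_SPEED_HZ);
    printf("SPI_IOC_MESSAGE(1)      = 0x%08lx\n", (unsigned long)SPI_IOC_MESSAGE(1));
    return 0;
}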
When I print out the structure in C and then print it out in Ruby, the only differences are in the buffer addresses, so if something's going wrong, that feels like the right place to look. But I've run out of things to try.
I can also print out the addresses in both versions and they look like what I would expect, 32 bits extended to 64 bits, and match the values in the structure (although the structure is little-endian -- this is an ARM).
Structure in C (that works):
60200200 00000000 a8200200 00000000 24000000 40420f00 00000800 00000000
Structure in Ruby (that fails):
a85da27f 00000000 08399b7f 00000000 24000000 40420f00 00000800 00000000
Is there an obvious mistake that I'm making when I lay out the struct in Ruby? Is there something else that I'm missing?
My next step is to write a library in C and use FFI to access it from Ruby. But that seems like giving up; and using the native ioctl function feels like the better approach if I can ever make it work.
Update
Above, I'm doing
struct_array = [tx_buff_pointer, rx_buff_pointer, buff_len, speed_hz, delay_usecs, bits_per_word, cs_change, tx_nbits, rx_nbits, pad]
struct_packed = struct_array.pack("QQLLSCCCCS")
#in C, I pass a pointer to the structure; so mimic that here
struct_pointer_packed = [struct_packed].pack("P")
file.ioctl(SPI_IOC_MESSAGE_1, struct_pointer_packed)
because I have to pass a pointer to the struct in C. But that's what's causing the error!
Instead, it needs to be
struct_array = [tx_buff_pointer, rx_buff_pointer, buff_len, speed_hz, delay_usecs, bits_per_word, cs_change, tx_nbits, rx_nbits, pad]
struct_packed = struct_array.pack("QQLLSCCCCS")
file.ioctl(SPI_IOC_MESSAGE_1, struct_packed)
I guess Ruby is automatically making it an array when it marshals it over?
Unfortunately, now it only intermittently works. The second call never works and the first call doesn't work if I pass in all zeros. It's very mysterious.
It is a common issue not to flush the buffer; you could check that and give it a try.
Flush:
Flushes any buffered data within ios to the underlying operating system (note that this is Ruby internal buffering only; the OS may buffer the data as well).
rb_io_flush(VALUE io)
{
return rb_io_flush_raw(io, 1);
}

Compare 2 different hexadecimal

I have a hexadecimal in unsigned char *hex_1 that contains:
hex_1[0] = 0x5b
hex_1[1] = 0x83
hex_1[2] = 0xb6
hex_1[3] = 0xe9
and I want to compare it with a hex value: 1ca0aaf9.
What should I do? Should I create a new character array, split 1ca0aaf9 into 1c a0 aa f9, then do memcpy()?
EDIT: I actually want the comparison to tell me either "THEY ARE THE SAME!" or "THEY ARE NOT THE SAME!".
EDIT 2: I want it to work like hex_1[0] being compared with 1c, and so on.
You probably want this:
uint32_t val = *(uint32_t*)(hex_1); // uint32_t is available by #include <stdint.h>
if (val == 0x1ca0aaf9)
{
}
On a big-endian architecture, you are done. On Intel and other little-endian architectures, you need to decide whether that byte array is logically meant to be interpreted in network byte order as 0x5b83b6e9 (decimal 1535358697), or in host byte order as 0xe9b6835b (decimal 3921052507). If the byte array is in network byte order, then you'll need to swap the bytes. That's what the ntohl function does.
uint32_t val = *(uint32_t*)(hex_1);
val = ntohl(val); // <arpa/inet.h> or <winsock2.h>
if (val == 0x1ca0aaf9)
{
}
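If you'd rather avoid the pointer cast (which assumes the buffer is suitably aligned), comparing against an expected byte array with memcmp is another option. A minimal sketch, assuming the value is meant to be in network byte order:
#include <stdio.h>
#include <string.h>
/* hypothetical helper: returns nonzero when the 4 bytes match 1c a0 aa f9 */
static int matches_expected(const unsigned char *hex_1)
{
    static const unsigned char expected[4] = { 0x1c, 0xa0, 0xaa, 0xf9 };
    return memcmp(hex_1, expected, sizeof expected) == 0;
}
You can then print "THEY ARE THE SAME!" or "THEY ARE NOT THE SAME!" depending on the return value.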
