My knowledge of pointers is a bit rusty, and I think that's where I'm getting tripped up. I am trying to write a function that grabs n bytes at a specified offset in a file and writes those values to an array.
Example of the file I'm reading from:
0 1 2 3 4 5 6 7 8 9 A B C D E F
0 F6 EA 9D DE D8 40 1C 44 19 24 59 D2 6A 2C 48 1D
1 FC 96 DE 94 AF 95 FC 42 9B 6D DA 15 D4 CE 88 BB
2 B8 24 99 8F 65 B5 D3 7E D9 5D 51 44 89 97 61 85
3 2D 40 1A DC D5 16 1F 70 84 F9 85 58 C8 0E 13 80
4 32 AC 10 97 61 B3 16 3B 40 67 7A CA FE E1 4F 2B
5 21 A9 07 F6 80 26 66 04 20 EC 5C E8 FA 70 68 2C
6 1C 78 C4 7E 5C DA B9 9C 41 38 66 3F 19 B6 6A 3A
Here's the function I've written thus far. The parameters are:
aDest points to an array of size nBytes + 1
bAddr points to the first byte of the memory region
OffsetAmt is an offset relative to bAddr
nBytes is the number of bytes I want to copy
And here's the function:
void getHexBytesAt(uint8_t* const aDest, const uint8_t* const bAddr,
                   uint16_t OffsetAmt, uint8_t nBytes)
{
    const uint8_t *point1 = bAddr; //set the address of point1 to the base address value

    //shift point1 forward until we get to the specified offset value
    for (int i = 0; i < Offset; i++)
    {
        point1 = (point1 + 1);
    }

    //set the value of *aDest to the value of point1;
    //increment point1
    for (int k = 0; k < nBytes; k++)
    {
        *aDest = point1;
        point1 = (point1 + 1);
    }
}
The problem I'm having is that I'm not even getting the first byte copied into the array correctly. My output looks like this when getting 9 bytes starting at offset 2C:

MY OUTPUT: 84 CA CA CA CA CA CA CA CA
FILE:      89 97 61 85 2D 40 1A DC D5
If you want to read the data from the memory at bAddr, then you must:

dereference the pointer when reading
increment the destination pointer

This would be implemented like this:
void getHexBytesAt(uint8_t* const aDest, const uint8_t* const bAddr,
                   uint16_t OffsetAmt, uint8_t nBytes)
{
    const uint8_t *point1 = bAddr; //set the address of point1 to the base address value

    //shift point1 forward until we get to the specified offset value
    for (int i = 0; i < OffsetAmt; i++) // fixed typo: Offset -> OffsetAmt
    {
        point1 = (point1 + 1);
    }

    //copy the value point1 points to into *aDest;
    //increment both pointers
    for (int k = 0; k < nBytes; k++)
    {
        *aDest = *point1;   // copy data from the address point1 points to
        aDest = aDest + 1;  // increment the destination pointer
        point1 = (point1 + 1);
    }
}
But this can be done much more simply:
void getHexBytesAt(uint8_t* const aDest, const uint8_t* const bAddr,
                   uint16_t OffsetAmt, uint8_t nBytes)
{
    memcpy(aDest, bAddr + OffsetAmt, nBytes); // needs #include <string.h>
}
You should consider replacing the body of your function with this one-liner.
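For what it's worth, here is a minimal, self-contained usage sketch of the memcpy version. The byte values are just the first row of your dump, and the buffer name, offsets and main() are only for illustration:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

void getHexBytesAt(uint8_t* const aDest, const uint8_t* const bAddr,
                   uint16_t OffsetAmt, uint8_t nBytes)
{
    memcpy(aDest, bAddr + OffsetAmt, nBytes);
}

int main(void)
{
    /* stand-in for the memory region that was read from the file */
    const uint8_t region[] = { 0xF6, 0xEA, 0x9D, 0xDE, 0xD8, 0x40, 0x1C, 0x44,
                               0x19, 0x24, 0x59, 0xD2, 0x6A, 0x2C, 0x48, 0x1D };
    uint8_t dest[4 + 1] = {0};

    getHexBytesAt(dest, region, 8, 4);  /* copy 4 bytes starting at offset 8 */

    for (int i = 0; i < 4; i++)
        printf("%02X ", dest[i]);       /* prints: 19 24 59 D2 */
    putchar('\n');
    return 0;
}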
BTW: There is no file used in the code. You should review your question.
I'm trying to understand the OpenSSL library in more detail. So rather than using the set of higher-level EVP functions, I've been trying to use the AES_* functions. Following the general set of calls in this question (though I'm using CBC instead of counter mode), I've come up with this code:
void ctr(log_t* log)
{
unsigned char ivec[16];
/* Out buffer for ciphertext */
unsigned char outBuf[16];
blockReader_t* br = blockReaderInit(log, "./input.txt", 128);
int outFD;
if ((outFD = open("out.bin", O_WRONLY)) == -1)
{
logPrint(br->log, LOG_ARGS, LOG_ERR, "open: %s", strerror(errno));
logExit(br->log, LOG_ARGS, EXIT_FAILURE);
}
memset(ivec, 0, 16);
unsigned char* ivec2 = ivec + 8;
unsigned long* ivec3 = (unsigned long*) ivec2;
*ivec3 = (unsigned long) 0xfd0;
AES_KEY aesKey;
char* myKey = "Pampers baby-dry";
int res;
if (!(res = AES_set_encrypt_key((unsigned char*) myKey, 16, &aesKey)))
{
logPrint(log, LOG_ARGS, LOG_ERR, "AES_set_encrypt_key: returned %d", res);
logExit(log, LOG_ARGS, EXIT_FAILURE);
}
unsigned char* buf;
while ((buf = blockReaderGet(br)) != NULL)
{
logPrint(log, LOG_ARGS, LOG_INFO, "ivec =");
logHexdump(log, LOG_ARGS, LOG_INFO, (char*) ivec, 16);
logPrint(log, LOG_ARGS, LOG_INFO, "buf =");
logHexdump(log, LOG_ARGS, LOG_INFO, (char*) buf, 16);
AES_cbc_encrypt(buf, outBuf, 16, &aesKey, ivec, 1);
logPrint(log, LOG_ARGS, LOG_INFO, "outBuf =");
logHexdump(log, LOG_ARGS, LOG_INFO, (char*) outBuf, 16);
int res = write(outFD, outBuf, 16);
if (res == -1)
{
logPrint(log, LOG_ARGS, LOG_ERR, "write: %s", strerror(errno));
logExit(log, LOG_ARGS, EXIT_FAILURE);
}
else if (res < 16)
{
logPrint(log, LOG_ARGS, LOG_WARN, "Unexpectedly wrote < 16 bytes");
}
}
if ((close(outFD)) == -1)
{
logPrint(log, LOG_ARGS, LOG_ERR, "close: %s", strerror(errno));
logExit(log, LOG_ARGS, EXIT_FAILURE);
}
}
The log_t struct and calls to log*() are my own logging framework which I am using to help debug this code. blockReader_t is another framework for reading files in sets of bytes. blockReaderGet() simply fills the destination buffer with the predetermined number of bytes of data (in this case 128 bits/16 bytes).
Contents of input.txt:
$ hexdump -C input.txt
00000000 4d 69 64 6e 69 67 68 74 5f 4d 61 72 6c 69 6e 05 |Midnight_Marlin.|
00000010 52 69 63 68 61 72 64 52 69 63 68 61 72 64 06 07 |RichardRichard..|
00000020
Output (ran in GDB):
(gdb) run
Starting program: /home/adam/crypto/openssl/aes/aes_128
[ 0.000020] <aes_128.c:83> "main" INFO: Log library started (v1.9.0)
...
[ 0.000054] <aes_128.c:50> "ctr" INFO: ivec =
[ 0.000057] <aes_128.c:51> "ctr" INFO: HEX (16 bytes)
---BEGIN_HEX---
00000000 00 00 00 00 00 00 00 00 d0 0f 00 00 00 00 00 00 |................|
00000010
---END_HEX---
[ 0.000069] <aes_128.c:53> "ctr" INFO: buf =
[ 0.000071] <aes_128.c:54> "ctr" INFO: HEX (16 bytes)
---BEGIN_HEX---
00000000 4d 69 64 6e 69 67 68 74 5f 4d 61 72 6c 69 6e 05 |Midnight_Marlin.|
00000010
---END_HEX---
Program received signal SIGSEGV, Segmentation fault.
_x86_64_AES_encrypt_compact () at aes-x86_64.s:170
170 xorl 0(%r15),%eax
I'm using an OpenSSL from GitHub that I've built myself and linked against locally; specifically the OpenSSL_1_0_2e tag, which I gather is the latest stable version.
The Perl script that generates this assembly file uses the $key variable to name what r15 represents. But given that AES_set_encrypt_key() returns success, I'm not sure what's wrong.
Could anyone please offer any pointers to what might be wrong here?
EDIT:
Despite compiling OpenSSL with -g3 instead of -O3, the backtrace isn't useful:
(gdb) bt
#0 _x86_64_AES_encrypt_compact () at aes-x86_64.s:170
#1 0x0000000000402b6b in AES_cbc_encrypt () at aes-x86_64.s:1614
#2 0x00007fffffffe0a0 in ?? ()
#3 0x000080007dfc19a0 in ?? ()
#4 0x00007fffffffe050 in ?? ()
#5 0x0000000000635080 in ?? ()
#6 0x00007fffffffe1a0 in ?? ()
#7 0x0000000000000010 in ?? ()
#8 0x00007ffff7bdf9a0 in ?? ()
#9 0x00007fffffffe1b0 in ?? ()
#10 0x00007fff00000001 in ?? ()
#11 0x00007ffff7bdf4c8 in ?? ()
#12 0x00007fffffffda40 in ?? ()
#13 0x0000000000000000 in ?? ()
EDIT 2:
CFLAG has been changed:
CFLAG= -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -Wa,--noexecstack -m64 -DL_ENDIAN -O0 -ggdb -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DECP_NISTZ256_ASM
Note the -O0 -ggdb. Backtrace is the same:
(gdb) bt
#0 _x86_64_AES_encrypt_compact () at aes-x86_64.s:170
#1 0x0000000000402b6b in AES_cbc_encrypt () at aes-x86_64.s:1614
#2 0x00007fffffffe0a0 in ?? ()
#3 0x000080007dfc19a0 in ?? ()
#4 0x00007fffffffe050 in ?? ()
#5 0x0000000000635080 in ?? ()
#6 0x00007fffffffe1a0 in ?? ()
#7 0x0000000000000010 in ?? ()
#8 0x00007ffff7bdf9a0 in ?? ()
#9 0x00007fffffffe1b0 in ?? ()
#10 0x00007fff00000001 in ?? ()
#11 0x00007ffff7bdf4c8 in ?? ()
#12 0x00007fffffffda40 in ?? ()
#13 0x0000000000000000 in ?? ()
EDIT: MCVE example
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <openssl/aes.h>
unsigned char input[] = {0x4du, 0x69u, 0x64u, 0x6eu, 0x69u, 0x67u, 0x68u, 0x74u,
0x5fu, 0x4du, 0x61u, 0x72u, 0x6cu, 0x69u, 0x6eu, 0x05u,
0x52u, 0x69u, 0x63u, 0x68u, 0x61u, 0x72u, 0x64u, 0x52u,
0x69u, 0x63u, 0x68u, 0x61u, 0x72u, 0x64u, 0x06u, 0x07u};
int main()
{
unsigned char ivec[16];
/* ivec[0..7] is the IV, ivec[8..15] is the big endian counter. */
unsigned char outBuf[16];
int outFD;
if ((outFD = open("out.bin", O_WRONLY)) == -1)
{
perror("open");
return EXIT_FAILURE;
}
memset(ivec, 0, 16);
unsigned char* ivec2 = ivec + 8;
unsigned long* ivec3 = (unsigned long*) ivec2;
*ivec3 = (unsigned long) 0xfd0;
AES_KEY aesKey;
char* myKey = "Pampers baby-dry";
int res;
if (!(res = AES_set_encrypt_key((unsigned char*) myKey, 16, &aesKey)))
{
fprintf(stderr, "AES_set_encrypt_key: returned %d", res);
return EXIT_FAILURE;
}
for (int i = 0; i < 32; i += 16)
{
printf("ivec = ");
for (int j = 0; j < 16; j++)
printf("%.02hhx ", ivec[j]);
putchar('\n');
printf("input = ");
for (int j = i; j < (i + 16); j++)
printf("%.02hhx ", input[j]);
putchar('\n');
AES_cbc_encrypt(&input[i], outBuf, 16, &aesKey, ivec, 1);
printf("outBuf = ");
for (int j = 0; j < 16; j++)
printf("%.02hhx ", outBuf[j]);
putchar('\n');
int res = write(outFD, outBuf, 16);
if (res == -1)
{
perror("write");
return EXIT_FAILURE;
}
else if (res < 16)
{
printf("Warning: unexpectedly wrote < 16 bytes");
}
}
if ((close(outFD)) == -1)
{
perror("close");
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
So there are several major bugs here. I'll go through all the ones I caught, but there may be more, as I didn't do a thorough code review.
You are using magic numbers everywhere (i.e. the 16 integer literals). Swap these out for a preprocessor macro or, even better, a const int.
The output buffer needs to be at least as big as your input buffer, and should be rounded up to the nearest multiple of the block size, plus one more block.
You are looping over the input data and feeding it to AES_cbc_encrypt() one 16-byte chunk at a time. Unless you are implementing some obscure layer on top of AES, this is unnecessary: AES_cbc_encrypt() can process the whole buffer in a single call, so the loop can go.
Your input data buffer is bigger than your output data buffer. With your current implementation I think the last 16 bytes will be truncated/lost, since the input buffer holds 32 bytes of data but the output buffer is only 16 bytes. In your specific example, the input should be 32 bytes and the output should be 32+1.
In addition to being unnecessary, the loop, with some modifications, would run (incorrectly, corrupting data) and could eventually access invalid memory (i.e. pointing near the end of the input buffer and telling the encrypt function to read 16 bytes past that point).
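One more point, offered as a side note since it is not in the list above: AES_set_encrypt_key() takes the key length in bits (128/192/256), not bytes, and it returns 0 on success and a negative value on failure. The original call passes 16 and tests if (!(res = ...)), so the negative error return slips through the check and aesKey (including its round count) is left uninitialized when AES_cbc_encrypt() uses it, which is consistent with the crash inside _x86_64_AES_encrypt_compact. The modified listing below addresses this by passing BLOCK_SIZE (128) and checking for a negative return; a minimal sketch of just the key setup:

#include <stdio.h>
#include <openssl/aes.h>

/* Sketch of the corrected key setup: the second argument is the key
 * length in BITS, and 0 means success (negative means failure). */
static int setupKey(AES_KEY *aesKey)
{
    static const unsigned char myKey[] = "Pampers baby-dry"; /* 16 bytes = 128 bits */

    int res = AES_set_encrypt_key(myKey, 128, aesKey);
    if (res != 0)
    {
        fprintf(stderr, "AES_set_encrypt_key failed: %d\n", res);
        return -1;
    }
    return 0;
}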
I've provided an updated code listing and sample output that should get you on the right track. Here's a working example that should also guide you along.
Good luck!
Modified Code Listing
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <openssl/aes.h>
#define BLOCK_SIZE (128)
unsigned char input[BLOCK_SIZE] = {
0x4du, 0x69u, 0x64u, 0x6eu, 0x69u, 0x67u, 0x68u, 0x74u,
0x5fu, 0x4du, 0x61u, 0x72u, 0x6cu, 0x69u, 0x6eu, 0x05u,
0x52u, 0x69u, 0x63u, 0x68u, 0x61u, 0x72u, 0x64u, 0x52u,
0x69u, 0x63u, 0x68u, 0x61u, 0x72u, 0x64u, 0x06u, 0x07u};
int main()
{
unsigned char ivec[BLOCK_SIZE];
/* ivec[0..7] is the IV, ivec[8..15] is the big endian counter. */
unsigned char outBuf[BLOCK_SIZE+1];
int outFD;
if ((outFD = open("out.bin", O_CREAT | O_RDWR, 0644)) == -1) /* a mode is required with O_CREAT */
{
perror("open");
return EXIT_FAILURE;
}
memset(ivec, 0, BLOCK_SIZE);
unsigned char* ivec2 = ivec + 8;
unsigned long* ivec3 = (unsigned long*) ivec2;
*ivec3 = (unsigned long) 0xfd0;
AES_KEY aesKey;
char* myKey = "Pampers baby-dry";
int res;
if ((res = AES_set_encrypt_key((unsigned char*) myKey, BLOCK_SIZE, &aesKey)) < 0)
{
fprintf(stderr, "AES_set_encrypt_key: returned %d", res);
return EXIT_FAILURE;
}
int i = 0;
//for (int i = 0; i < 32; i += BLOCK_SIZE)
{
printf("ivec = ");
for (int j = 0; j < BLOCK_SIZE; j++)
printf("%.02hhx ", ivec[j]);
putchar('\n');
printf("input = ");
for (int j = i; j < (i + BLOCK_SIZE); j++)
printf("%.02hhx ", input[j]);
putchar('\n');
putchar('\n');
putchar('\n');
putchar('\n');
AES_cbc_encrypt(input, outBuf, BLOCK_SIZE, &aesKey, ivec, AES_ENCRYPT);
printf("outBuf = ");
for (int j = 0; j < BLOCK_SIZE; j++)
printf("%.02hhx ", outBuf[j]);
putchar('\n');
int res = write(outFD, outBuf, BLOCK_SIZE);
if (res == -1)
{
perror("write");
return EXIT_FAILURE;
}
else if (res < BLOCK_SIZE)
{
printf("Warning: unexpectedly wrote < %d bytes.\n", BLOCK_SIZE);
}
}
if ((close(outFD)) == -1)
{
perror("close");
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
Build Command
gcc -O0 -ggdb test.c --std=c99 -lssl -lcrypto && ./a.out
Sample Output
ivec = 00 00 00 00 00 00 00 00 d0 0f 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
input = 4d 69 64 6e 69 67 68 74 5f 4d 61 72 6c 69 6e 05 52 69 63 68 61 72 64 52 69 63 68 61 72 64 06 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
outBuf = 81 ee 91 c0 9f f6 40 db 3c 6d 32 dd 5e 86 6f f8 4e 7b aa 15 38 36 b8 20 bc 04 bd 4f 6c 53 0e 02 72 c2 b7 e8 79 35 f2 b2 e1 c1 6e 1e 3b 1e 75 81 6a 56 43 d8 9d 9c 4c 1e 04 bd 99 29 3a 55 c9 a4 90 48 20 13 5e 51 4a 0c 4b 35 bc db da 54 f1 2b 66 f6 1b 1a 42 25 33 30 0e 35 87 9d 4b 1f d5 3a 5d 3a 8e 8c c8 48 c0 52 72 c0 4e b3 b8 f5 37 03 1c 87 15 61 3b 64 2b 06 5e 12 8f c7 b5 21 98 06
I'm writing a program that encrypts based on bit/byte shifts. This program currently takes in a file, displays the contents, then displays the hex values.
To encrypt the file I am supposed to perform bit/byte shifts. However, I loaded the file into an array and performed the shifts through temp swaps. I'm pretty sure the outcome is the same, but I wanted to implement this with proper bit shifts.
To produce the same results, do I just say load[x>>3] to shift the bits? I'm getting terrible results attempting to implement this.
Below is the entire block of code with a sample output at the end. Thanks in advance to everyone for your time.
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[]) {
FILE *fp;
int ch; /* int, not char, so EOF from fgetc() is detected reliably */
char load[100];
int count1 = 0;
int count2 = 0;
int i = 0;
fp = fopen("load.txt", "r");
if (fp == NULL)
{
perror("Exiting, an error occurred while opening\n");
exit(EXIT_FAILURE);
}
while((ch = fgetc(fp)) != EOF)
{
printf("%c", ch);
++count1;
}
fclose(fp);
fp = fopen("load.txt", "r");
if (fp == NULL)
{
perror("Exiting, an error occurred while opening\n");
exit(EXIT_FAILURE);
}
printf("\n\n");
while((ch = fgetc(fp)) != EOF)
{
printf("%x ", ch);
++count2;
}
fclose(fp);
printf("\n\n");
printf("Count1: %d, Count2: %d \n", count1, count2);
printf("\n\n");
fp = fopen("load.txt", "r");
if (fp == NULL)
{
perror("Exiting, an error occurred while opening\n");
exit(EXIT_FAILURE);
}
i = 0;
while((ch = fgetc(fp)) != EOF)
{
load[i++] = ch;
}
load[i] = 0;
fclose(fp);
// for(i = 0; i < 100; ++i)
// {
// printf("%s", load);
// }
printf("\n\n");
for(i = 0; i < 50; ++i)
{
printf("%x ", load[i]);
}
// printf("\n index i: %c\n", load[0]);
//shift bytes
char temp, temp2, temp3, temp4;
temp = load[7];
load[7] = load[11];
load[11] = temp;
temp2 = load[8];
load[8] = load[12];
load[12] = temp2;
temp3 = load[9];
load[9] = load[13];
load[13] = temp3;
temp4 = load[10];
load[10] = load[14];
load[14] = temp4;
printf("\n\n");
for(i = 0; i < 50; ++i)
{
printf("%x ", load[i]);
}
printf("\nAfter byte shift: \n %s", load);
//shift bits (these are still byte swaps)
temp = load[6];
load[6] = load[4];
load[4] = temp;
temp2 = load[5];
load[5] = load[3];
load[3] = temp2;
printf("\nAfter bit shift: \n %s", load);
printf("\n\n");
for(i = 0; i < 50; ++i)
{
printf("%x ", load[i]);
}
printf("\n\n");
return 0;
}
The quick brown fox jumped over the lazy dog.
54 68 65 20 71 75 69 63 6b 20 62 72 6f 77 6e 20 66 6f 78 20 6a 75 6d 70 65 64 20 6f 76 65 72 20 74 68 65 20 6c 61 7a 79 20 64 6f 67 2e a
Count1: 46, Count2: 46
54 68 65 20 71 75 69 63 6b 20 62 72 6f 77 6e 20 66 6f 78 20 6a 75 6d 70 65 64 20 6f 76 65 72 20 74 68 65 20 6c 61 7a 79 20 64 6f 67 2e a 0 0 0 0
54 68 65 20 71 75 69 72 6f 77 6e 63 6b 20 62 20 66 6f 78 20 6a 75 6d 70 65 64 20 6f 76 65 72 20 74 68 65 20 6c 61 7a 79 20 64 6f 67 2e a 0 0 0 0
After byte shift:
The quirownck b fox jumped over the lazy dog.
After bit shift:
Theui qrownck b fox jumped over the lazy dog.
54 68 65 75 69 20 71 72 6f 77 6e 63 6b 20 62 20 66 6f 78 20 6a 75 6d 70 65 64 20 6f 76 65 72 20 74 68 65 20 6c 61 7a 79 20 64 6f 67 2e a 0 0 0 0
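Note on terminology: the swaps above move whole bytes around in the array, while C's >> and << operators shift the bits inside a single value, so load[x>>3] merely indexes the array at position x/8. Below is a minimal sketch of an actual bit shift/rotation on one byte (the rotl8 helper is mine, just for illustration, and is not part of the swap scheme above):

#include <stdio.h>

/* x >> 3 shifts the bits of the value x three places to the right;
 * it does not pick a different array element.  A left rotation keeps
 * the bits that would otherwise fall off the end. */
static unsigned char rotl8(unsigned char x, unsigned n)
{
    n &= 7;                 /* keep the shift amount in 0..7 */
    if (n == 0)
        return x;
    return (unsigned char)((x << n) | (x >> (8 - n)));
}

int main(void)
{
    unsigned char c = 'T';  /* 0x54 = 0101 0100 */
    printf("%02x >> 3     = %02x\n", (unsigned)c, (unsigned)(c >> 3));      /* 54 >> 3 = 0a */
    printf("rotl8(%02x, 3) = %02x\n", (unsigned)c, (unsigned)rotl8(c, 3));  /* a2 */
    return 0;
}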
I am developing a client/server application based on UDP. I want to send different messages from the server to the client, and there is a different C structure defined for each message.
I would like to understand what is wrong in the way I am serializing the data.
struct Task
{
int mType;
int tType;
int cCnt;
int* cId;
char data[128];
};
Serialization/Deserialization functions
unsigned char * serialize_int(unsigned char *buffer, int value)
{
buffer[0] = value >> 24;
buffer[1] = value >> 16;
buffer[2] = value >> 8;
buffer[3] = value;
return buffer + 4;
}
unsigned char * serialize_char(unsigned char *buffer, char value)
{
buffer[0] = value;
return buffer + 1;
}
int deserialize_int(unsigned char *buffer)
{
int value = 0;
value |= buffer[0] << 24;
value |= buffer[1] << 16;
value |= buffer[2] << 8;
value |= buffer[3];
return value;
}
char deserialize_char(unsigned char *buffer)
{
return buffer[0];
}
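As a quick sanity check of these helpers (a sketch that reuses the functions above; the value 0x01020304 is arbitrary), the int is written most-significant byte first and reads back unchanged:

#include <assert.h>
#include <stdio.h>

int main(void)
{
    unsigned char buf[4];

    serialize_int(buf, 0x01020304);
    printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]); /* 01 02 03 04 */

    assert(deserialize_int(buf) == 0x01020304); /* round-trips back to the original value */
    return 0;
}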
Sender side code to serialize the structure
unsigned char* serializeTask(unsigned char* msg, const Task* t)
{
msg = serialize_int(msg,t->mType);
msg = serialize_int(msg,t->tkType);
msg = serialize_int(msg,t->cCnt);
for(int i=0; i<t->cCnt; i++)
msg = serialize_int(msg,t->cId[i*4]);
for(int i=0; i<strlen(data); i++)
msg = serialize_char(msg,t->data[i]);
return msg;
}
Receiver side code to de-serialize data
printf("Msg type:%d\n", deserialize_int(message) );
printf("Task Type:%d\n", deserialize_int(message+4) );
printf("Task Count:%d\n", deserialize_int(message+8));
Output
Msg type:50364598 //Expected value is 3
Task Type:-2013036362 //Expected value is 1
Task Count:1745191094 //Expected value is 3
Question 1:
Why is the de-serialized value not same as expected?
Question 2:
How is serialization/de-serialization method different from memcpy?
Task t;
memcpy(&t, msg, sizeof(t)); //msg is unsigned char* holding the struct data
EDIT
Code which invokes serializeTask
void addToDatabase(unsigned char* message, int msgSize, Task* task)
{
message = new unsigned char[2*msgSize+1];
unsigned char* msg = message; //To preserve start address of message
message = serializeTask(message, task); //Now message points to end of the array
//Insert serialized data to DB
//msg is inserted to DB
}
Serialized data stored in DB
Message:
00
03 70 B6 88 03 70 B6 68 05 70 B6 68 05 70 B6 00
00 00 00 00 00 00 00 A8 05 70 B6 AC 05 70 B6 B4
05 70 B6 C9 05 70 B6 DE 05 70 B6 E6 05 70 B6 EE
05 70 B6 FB 05 70 B6 64 65 66 00 63 6F 68 6F 72
74 73 00 70 65 6E 64 69 6E 67 5F 61 73 73 69 67
6E 5F 74 61 73 6B 73 00 70 65 6E 64 69 6E 67 5F
61 73 73 69 67 6E 5F 74 61 73 6B 73 00 6D 65 73
73 61 67 65 00 6D 65 73 73 61 67 65 00 3F 00 FF
FF 00 00 FC 90 00 00 00 00 00 00 00 C9 2D B7 00
00 00 00 10 06 70 B6 00 00 00 00 00 00 00 00 30
06 70 B6 34 06 70 B6 3C 06 70 B6
OP has two problems in serializeTask():
for(int i=0; i<t->cCnt; i++)
    msg = serialize_int(msg,t->cId[i*4]);    // problem 1: [i*4]
...
for(int i=0; i<strlen(data); i++)            // problem 2: strlen(data)
    msg = serialize_char(msg,t->data[i]);
It should be (assuming i<strlen(data) was meant to be i<strlen(t->data)):
for(int i=0; i<t->cCnt; i++)
    msg = serialize_int(msg,t->cId[i]);            // [i], not [i*4]
...
for(int i=0; i<strlen(t->data) + 1; i++)           // strlen(t->data) + 1 to include the NUL
    msg = serialize_char(msg,t->data[i]);
The first for loop serializes every 4th cId[]. OP certainly wanted to serialize consecutive cId[].
Only strlen(data) characters of the data string were serialized, without the terminating NUL. OP certainly wanted to serialize all of them plus a NUL terminating byte.
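For completeness, here is a sketch of what a matching receiver-side deserializer could look like. deserializeTask() is a hypothetical helper, not OP's code; it assumes the layout produced by the corrected serializer above (header, cCnt ints, then a NUL-terminated string), uses the field name tType as declared in the struct, and needs <stdlib.h> for malloc:

/* Hypothetical counterpart to serializeTask(); error handling omitted. */
const unsigned char* deserializeTask(const unsigned char* msg, Task* t)
{
    t->mType = deserialize_int(msg);  msg += 4;
    t->tType = deserialize_int(msg);  msg += 4;
    t->cCnt  = deserialize_int(msg);  msg += 4;

    t->cId = (int*) malloc(t->cCnt * sizeof *t->cId);  /* freed by the caller */
    for (int i = 0; i < t->cCnt; i++)
    {
        t->cId[i] = deserialize_int(msg);
        msg += 4;
    }

    int j = 0;
    do
    {
        t->data[j] = deserialize_char(msg);
        msg += 1;
    } while (t->data[j++] != '\0' && j < (int) sizeof t->data);

    return msg;
}

This also touches Question 2: memcpy(&t, msg, sizeof(t)) would copy the raw struct bytes, including the value of the cId pointer itself rather than the ints it points to, and its layout depends on the sender's padding and endianness, which is why field-by-field serialization is used instead.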
The data in the posted buffer more likely decodes as shown below, which does not match the serialization code. This implies the higher-level code populating Task* t is wrong. I am confident that the values seen in the mType and tkType fields are either pointers or floats; again, Task* t is likely amiss before the serialization.
0xb6700300 or -3.576453e-06
0xb6700388 or -3.576484e-06
0xb6700568 or -3.576593e-06
0xb6700568 or -3.576593e-06
0x00000000 or 0.000000e+00
0x00000000 or 0.000000e+00
0xb67005a8 or -3.576608e-06
0xb67005ac or -3.576609e-06
0xb67005b4 or -3.576611e-06
0xb67005c9 or -3.576615e-06
0xb67005de or -3.576620e-06
0xb67005e6 or -3.576622e-06
0xb67005ee or -3.576624e-06
0xb67005fb or -3.576627e-06
def\0cohorts\0pending_assign_tasks\0pending_assign_tasks\0message\0message\0?\0
...