Decoding date storage in a legacy database, aka "fun" with numbers

I'm writing a utility to rip records out of a legacy DB (one we can't query directly), and I'm having trouble interpreting how a date field is stored.
All dates are in MM/DD/YYYY format. Hex values are shown as bytes (2 hex digits) separated by spaces.
What we know:
Hours and minutes are stored in a different location; adding an hour or minute to
the datetime does not affect the 4 bytes in question.
The field that corresponds to the month, day, and year is 4 bytes:
01/01/1800 == 70 8E 00 00
01/15/1800 == 7E 8E 00 00
01/16/1800 == 7F 8E 00 00
01/31/1800 == 8E 8E 00 00
02/01/1800 == 8F 8E 00 00
02/02/1800 == 90 8E 00 00
02/15/1800 == 9D 8E 00 00
02/16/1800 == 9E 8E 00 00
02/28/1800 == AA 8E 00 00
02/29/1800 == AB 8E 00 00 #PLACEHOLDER FOR LEAP YEAR
03/01/1800 == AC 8E 00 00
12/01/1800 == BF 8F 00 00
12/02/1800 == C0 8F 00 00
12/03/1800 == C1 8F 00 00
12/15/1800 == CD 8F 00 00
12/16/1800 == CE 8F 00 00
12/30/1800 == DC 8F 00 00
12/31/1800 == DD 8F 00 00
01/01/1801 == DE 8F 00 00
12/31/1801 == 4A 91 00 00
Anyone have any ideas? And yes, I'm familiar with epoch time.

There are 4 bytes, and each new day increments the leftmost byte; once that byte reaches FF, it rolls over and carries 1 into the byte to its right. In other words, the field is a little-endian day count. Try this (written in Ruby):
require 'date'

# hex is the raw field as an 8-character hex string, e.g. "708E0000".
# The bytes are little-endian, so the leftmost byte is the least significant.
def parse_date(hex)
  # A date whose stored value is known, used as a reference point.
  actual_known_date = Date.strptime("1/1/2050", "%m/%d/%Y")
  known_date = "21F30100"

  first_byte  = hex[0, 2]
  second_byte = hex[2, 2]
  third_byte  = hex[4, 2]
  fourth_byte = hex[6, 2]

  known_first_byte  = known_date[0, 2]
  known_second_byte = known_date[2, 2]
  known_third_byte  = known_date[4, 2]
  known_fourth_byte = known_date[6, 2]

  # Subtract byte by byte, borrowing from the next more significant byte
  # whenever the known byte is smaller than the corresponding field byte.
  byte_4_days = known_fourth_byte.hex - fourth_byte.hex
  byte_3_days = 0
  byte_2_days = 0
  byte_1_days = 0

  if known_third_byte.hex >= third_byte.hex
    byte_3_days = known_third_byte.hex - third_byte.hex
  else
    byte_4_days -= 1
    byte_3_days = known_third_byte.hex + 256 - third_byte.hex
  end

  if known_second_byte.hex >= second_byte.hex
    byte_2_days = known_second_byte.hex - second_byte.hex
  else
    byte_3_days -= 1
    byte_2_days = known_second_byte.hex + 256 - second_byte.hex
  end

  if known_first_byte.hex >= first_byte.hex
    byte_1_days = known_first_byte.hex - first_byte.hex
  else
    byte_2_days -= 1
    byte_1_days = known_first_byte.hex + 256 - first_byte.hex
  end

  total_days_since_known_date = byte_1_days +
                                (byte_2_days * 256) +
                                (byte_3_days * 256 * 256) +
                                (byte_4_days * 256 * 256 * 256)

  # Walk back from the known date by the difference in days.
  date_we_want = actual_known_date - total_days_since_known_date
  return date_we_want
end

Related

C program in Docker: fwrite(3) and write(2) fail to modify files on Windows but not on MacOS

I am writing a guest OS on top of Linux (Ubuntu distribution) within a Docker container. The filesystem is implemented as a single file resting inside the host OS, so anytime a file is changed in the guest OS filesystem, the file on the host OS must be opened, the correct block(s) must be overwritten, and the file must be closed.
My partner and I have developed the following recursive helper function to take in a block number and offset to abstract away all details at the block-level for higher level functions:
/**
 * Recursive procedure to write n bytes from buf to the
 * block specified by block_num. Also updates FAT to
 * reflect changes.
 *
 * @param block_num identifier for block to begin writing
 * @param buf       buffer to write from
 * @param n         number of bytes to write
 * @param offset    number of bytes to start writing from as
 *                  measured from start of file
 *
 * @returns number of bytes written
 */
int write_bytes(int block_num, const char *buf, int n, int offset) {
    BlockTuple red_tup = reduce_block_offset(block_num, offset);
    block_num = red_tup.block;
    offset = red_tup.offset;

    FILE *fp = fopen(fat->fname, "r+");
    int bytes_to_write = min(n, fat->block_size - offset);
    int write_n = max(bytes_to_write, 0);
    fseek(fp, get_block_start(block_num) + offset, SEEK_SET);
    fwrite(buf, 1, write_n, fp); // This line is returning 48 bytes written
    fclose(fp);

    // Check if there are bytes remaining
    int bytes_left = n - write_n;
    if (bytes_left > 0) {
        // Recursively write on next block
        int next_block = get_free_block();
        set_fat_entry(block_num, next_block); // point block to next block
        set_fat_entry(next_block, 0xFFFF);
        return write_bytes(next_block, buf + write_n, bytes_left, max(0, offset - fat->block_size)) + write_n;
    } else {
        set_fat_entry(block_num, 0xFFFF); // mark file as terminated
        return write_n;
    }
}
The issue is that fwrite(3) is reporting 48 bytes written (when n is passed as 48) but hexdumping the file on the host OS reveals no bytes have been changed:
00000000 00 01 ff ff ff ff 00 00 00 00 00 00 00 00 00 00 |................|
00000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00008000
This is particularly wacky because when my partner runs the code on the exact same commit (with no uncommitted changes), her write goes through and the file on the host OS hexdumps to:
00000000 00 01 ff ff ff ff 00 00 00 00 00 00 00 00 00 00 |................|
00000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000100 66 31 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |f1..............|
00000110 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000120 00 01 00 00 02 00 01 06 e7 36 75 63 00 00 00 00 |.........6uc....|
00000130 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200 e0 53 f8 88 c0 0d 37 ca 84 1f 19 b0 6c a8 68 7b |.S....7.....l.h{|
00000210 57 83 cf 13 f0 42 21 d3 21 e1 da de d4 8a f1 e6 |W....B!.!.......|
00000220 f0 12 98 fb 1c 30 4c 04 b3 16 1d 96 17 ba d7 5a |.....0L........Z|
00000230 7e f3 8a f5 6a 42 6b ef 58 f6 bc 01 db 0c 02 53 |~...jBk.X......S|
00000240 e5 10 7e f3 4a d5 3f ac 8e 38 82 c3 95 f8 11 8e |..~.J.?..8......|
00000250 a6 82 eb 3b 24 56 9a 75 44 36 8b 25 60 83 4c 04 |...;$V.uD6.%`.L.|
00000260 07 9e 14 99 9c 9f 87 3c 8a d4 c3 e8 17 60 81 0e |.......<.....`..|
00000270 bc eb 1d 35 68 fc d5 be 4f 1c 9d 5e 72 57 65 01 |...5h...O..^rWe.|
00000280 b7 43 54 26 d6 6d ba 51 bf 12 8c a1 03 d5 66 b3 |.CT&.m.Q......f.|
00000290 90 0d 60 b8 95 8d 15 bd 53 9a 70 77 4f 7a 04 1e |..`.....S.pwOz..|
000002a0 9e b2 4c 9a 79 dd de 48 cd fe 1e dc 57 7d d1 7f |..L.y..H....W}..|
000002b0 3f f5 77 96 fa e7 d7 33 33 48 ce 0a 4d 61 ab 96 |?.w....33H..Ma..|
000002c0 5f c4 88 bf c6 3a 09 37 76 c4 b8 db bc 6a 7d c0 |_....:.7v....j}.|
000002d0 c4 89 68 e7 b4 70 f8 a6 a8 00 9d c4 63 da fb 66 |..h..p......c..f|
000002e0 be d2 cd 68 1c d2 ff bf 00 e9 37 ab 6b 1a 3c f2 |...h......7.k.<.|
000002f0 7b c1 a2 c4 46 ae db 93 b4 4f 64 79 14 2a 1a d4 |{...F....Ody.*..|
00000300 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00008000
The 48 bytes I'm referring to that don't get written are the bytes written to the directory block running from address 00000100-0000012E (the bytes below that represent the actual file being written; the code segfaults on my end before reaching that write). It's worth noting that my container can still format the filesystem file, so not all writes are broken. This snippet just represents the first write that did not work.
We are both running the code in an identical Docker container. The only difference I can imagine is that my computer is a Windows machine and hers is a Mac. What could possibly be the issue?
The very first thing I suspected was that there was some conflict with the host OS blocking my write, but assigning and printing the return value of fwrite(3) showed that 48 bytes were indeed written on both machines.
I also suspected that my buffer was simply all zeros (it is initially allocated using calloc(3)), but printing out the first 48 bytes of the buffer proved that theory false.
I finally considered that this was some issue with the higher-level interface in <stdio.h> rather than the lower-level one in <unistd.h>. I replaced fopen(3), fwrite(3), fseek(3), and fclose(3) with their lower-level equivalents (open(2), write(2), etc.) and it still reported 48 bytes written with no actual change to the file.
EDIT:
The guest OS filesystem can be formatted with respect to user parameters. All testing above was performed with a block size of 256 bytes and 128 blocks total. I've attempted the exact same write sequence again with a block size of 1024 bytes and 16384 blocks total, and there was no error. It's still unclear why the code works on my partner's machine for both format configs and not mine, but this may narrow it down.
Running strace reveals the following excerpt around the write:
openat(AT_FDCWD, "minfs", O_RDWR) = 4
newfstatat(4, "", {st_mode=S_IFREG|0777, st_size=32768, ...}, AT_EMPTY_PATH) = 0
lseek(4, 0, SEEK_SET) = 0
read(4, "\0\1\377\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 256) = 256
write(4, "f1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 48) = 48
close(4)
It again appears the bytes get written, but running hd on the file after the program finishes reveals the same output as above. My thought was that perhaps the bytes written in the excerpt are overwritten later on, but the only write after the excerpt above in the strace is:
lseek(4, 0, SEEK_SET) = 0
read(4, "\0\1\377\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 320) = 320
write(4, "\0", 1) = 1
close(4)
which should be at address 320, squarely after the write at address 256 above.
It turns out the mismatch was due to undefined behavior concerning when changes made through mmap(2) are synchronized with the underlying file. There was a section of code where a region of memory mapped via mmap(2) was modified and then immediately followed by reads/writes to the file on the host OS containing the mapped region. It seems the Mac would write through the changes before the following section, while the Windows machine wouldn't synchronize until after the fact, producing the inconsistent results.
The problem was fixed by calling msync(2) with the MS_SYNC flag immediately after modifying the region mapped by mmap(2), forcing the write-through behavior.
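A minimal sketch of that pattern, not the project's actual code (the file name and sizes below are placeholders): modify the mapped region, msync(2) it with MS_SYNC, and only then touch the same file through ordinary file I/O.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("minfs", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t map_len = 4096;
    unsigned char *map = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Modify the mapped region (e.g. update a FAT entry). */
    map[0] = 0x00;
    map[1] = 0x01;

    /* Force the dirty pages out before mixing in ordinary file I/O. */
    if (msync(map, map_len, MS_SYNC) < 0) { perror("msync"); }

    /* Subsequent read(2)/write(2) on the same file now see the update. */
    unsigned char buf[2];
    if (pread(fd, buf, sizeof buf, 0) == (ssize_t)sizeof buf)
        printf("%02x %02x\n", buf[0], buf[1]);

    munmap(map, map_len);
    close(fd);
    return 0;
}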
Links to documentation here: mmap(2), msync(2).

Zlib uncompress method not parsing an array of bytes from an image

Currently I'm trying to read the bytes from the IDAT chunk of a PNG image, in C. I am able to get all the other info, including said array of bytes.
The problem arises whenever I try to decompress said array with zlib's uncompress() method.
[ ... ]
int decompress(Chunk *_chunk, Image *_image)
{
    uLongf compressedSize = _chunk->length;
    byte *uncompressedData = NULL;
    uLongf uncompressedSize = 0;

    int ret = uncompress(uncompressedData, &uncompressedSize, _chunk->data, compressedSize);
    if (ret != Z_OK)
    {
        fprintf(stderr, "Error: failed to uncompress IDAT chunk data. ERR CODE: %d\n", ret);
        return -1;
    }
    [ ... ]
}
The chunk struct is defined as such:
typedef struct chunk
{
    uint32_t length;
    byte chunkType[4];
    byte *data;
} Chunk;
The byte type is just an unsigned char, and the image struct is defined as follows:
typedef struct image
{
    uint32_t width;
    uint32_t height;
    byte bitDepth;
    byte colorType;
    byte compression;
    byte filter;
    byte interlace;
} Image;
The test image's HEX representation is:
89 50 4E 47 0D 0A 1A 0A 00 00 00 0D 49 48 44 52
00 00 00 11 00 00 00 12 04 03 00 00 00 4F D7 28
67 00 00 00 30 50 4C 54 45 00 00 00 80 00 00 00
80 00 80 80 00 00 00 80 80 00 80 00 80 80 80 80
80 C0 C0 C0 FF 00 00 00 FF 00 FF FF 00 00 00 FF
FF 00 FF 00 FF FF FF FF FF 7B 1F B1 C4 00 00 00
09 70 48 59 73 00 00 0E C4 00 00 0E C4 01 95 2B
0E 1B 00 00 00 28 49 44 41 54 08 D7 63 D8 0D 05
1B 18 36 30 00 01 FF FF FF 24 B1 FE FF FF C0 C0
40 0E 6B FF FF FF 20 73 48 60 C1 5D 0A 00 BB 1A
49 27 39 98 BC 6E 00 00 00 00 49 45 4E 44 AE 42
60 82
And the bytes of the IDAT chunk are:
08 D7 63 D8 0D 05 1B 18 36 30 00 01 FF FF FF 24 B1 FE FF FF C0 C0 40 0E 6B FF FF FF 20 73 48 60 C1 5D 0A 00 BB 1A 49 27
It should be noted that I'm not including the chunk's CRC; from my understanding that shouldn't be a problem.
Any idea as to why the uncompress() method is returning Z_DATA_ERROR?
You're not giving uncompress() anywhere to put the uncompressed data! uncompressedData cannot be NULL.
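For reference, a sketch of a working call (the function name and buffer size are made up for illustration): uncompress() needs a pre-allocated destination buffer, and the destination-length argument must be initialized to that buffer's capacity before the call; zlib then updates it to the number of bytes actually produced. For a real PNG you would size the buffer from the image dimensions, bit depth, and the per-row filter byte.

#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

int decompress_idat(const unsigned char *data, uLong compressedSize)
{
    /* Destination buffer must exist before the call; 64 KiB is an
     * arbitrary upper bound for this small test image. */
    uLongf uncompressedSize = 64 * 1024;
    unsigned char *uncompressedData = malloc(uncompressedSize);
    if (uncompressedData == NULL)
        return -1;

    int ret = uncompress(uncompressedData, &uncompressedSize, data, compressedSize);
    if (ret != Z_OK) {
        fprintf(stderr, "uncompress failed: %d\n", ret);
        free(uncompressedData);
        return -1;
    }

    /* uncompressedSize now holds the number of bytes actually produced. */
    printf("inflated %lu bytes\n", (unsigned long)uncompressedSize);
    free(uncompressedData);
    return 0;
}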

Convert hex numbers from proprietary database into decimal

I’m trying to pull numbers out of a proprietary database (Lacerte tax software).
They are all whole numbers, stored in 4 bytes. I have put numbers into the program, and then checked out the file with a hex editor. This has allowed me to see how they are stored.
Here are some examples:
-100 = 00 00 59 C0
-4 = 00 00 10 C0
-3 = 00 00 08 C0
-2 = 00 00 00 C0
-1 = 00 00 F0 BF
0 = 00 00 00 00
1 = 00 00 F0 3F
2 = 00 00 00 40
3 = 00 00 08 40
4 = 00 00 10 40
5 = 00 00 14 40
6 = 00 00 18 40
7 = 00 00 1C 40
8 = 00 00 20 40
9 = 00 00 22 40
10 = 00 00 24 40
100 = 00 00 59 40
1,000,000 = 80 84 2E 41
Does anybody have any idea how to convert these hex numbers from the database into decimals?
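The samples are consistent with the upper four bytes of a little-endian IEEE 754 double, with the (all-zero for whole numbers of this size) lower four bytes simply not stored: 00 00 F0 3F is the top half of 1.0 (3FF0000000000000), 00 00 59 C0 the top half of -100.0 (C059000000000000), and 80 84 2E 41 the top half of 1,000,000.0 (412E848000000000). A sketch of decoding under that assumption (not documented Lacerte behavior):

#include <stdio.h>
#include <string.h>

/* Interpret the stored 4 bytes as the high half of a little-endian
 * IEEE 754 double whose low half is zero. */
static double decode_field(const unsigned char bytes[4])
{
    unsigned char full[8] = {0};        /* low 4 bytes: zero            */
    memcpy(full + 4, bytes, 4);         /* high 4 bytes: stored value   */

    double value;
    memcpy(&value, full, sizeof value); /* assumes a little-endian host */
    return value;
}

int main(void)
{
    const unsigned char million[4]   = {0x80, 0x84, 0x2E, 0x41};
    const unsigned char minus_one[4] = {0x00, 0x00, 0xF0, 0xBF};

    printf("%g\n", decode_field(million));   /* 1e+06 */
    printf("%g\n", decode_field(minus_one)); /* -1    */
    return 0;
}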

Pulling individual integer value from hexadecimal value

Here is my hex data:
42 4D C6 00 00 00 00 00 00 00 76 00 00 00 28 00
00 00 0A 00 00 00 0A 00 00 00 01 00 04 00 00 00
00 00 50 00 00 00 12 0B 00 00 12 0B 00 00 10 00
00 00 10 00 00 00 FF 00 00 00 00 FF 00 00 00 00
42 00 5A 5A 84 00 00 00 FF 00 FF 00 FF 00 00 FF
FF 00 08 FF FF 00 5A FF FF 00 FF FF FF 00 FF FF
FF 00 FF FF FF 00 FF FF FF 00 FF FF FF 00 FF FF
FF 00 FF FF FF 00 92 59 00 16 47 00 00 00 25 90
01 64 61 00 00 00 59 90 11 64 61 00 00 00 99 00
16 48 11 00 00 00 90 01 64 61 11 00 00 00 00 16
64 61 00 00 00 00 01 16 46 10 09 00 00 00 11 64
41 00 99 00 00 00 16 64 11 09 95 00 00 00 66 48
10 09 53 00 00 00
I know that the pixel "assignment" starts here, with the first line (the image is 10 pixels wide) being:
92 59 00 16 47 00 00 00
I need to count how many times each colour appears in the image, but I am unable to pull out the individual integer values (i.e. just the 9, then just the 2, then just the 5, and so on). The only values I am able to pull are "92", then "59", then "00"...
This is my code for that segment (the pixel data offset is 118 and there are 80 hex values remaining):
int nbr_each[NBRCOLOURS] = {0};  /* zero the counts before tallying */
int pixel, count;
fseek(fptr, 118, SEEK_SET);
for (count = 0; count < 80; count++)
{
    pixel = fgetc(fptr);
    nbr_each[pixel] = nbr_each[pixel] + 1;
}
fgetc will get you the individual characters.
first = fgetc(fptr); // '9'
second = fgetc(fptr); // '2'
space = fgetc(fptr); // ' '
Then convert each digit to a number 0..9 by subtracting off '0':
first -= '0';
second -= '0';
Then to count each digit, something like this:
nbr_each[first]++;
nbr_each[second]++;
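If fptr is actually the binary bitmap rather than a text dump of the hex, the approach changes slightly: the header above declares 4 bits per pixel, so each byte returned by fgetc() packs two palette indices, and the two values come out with shifts and masks instead of character arithmetic. A sketch under that assumption, reusing fptr and nbr_each from the question:

/* Each byte holds two 4-bit pixels, high nibble first. */
int byte = fgetc(fptr);
if (byte != EOF) {
    nbr_each[(byte >> 4) & 0x0F]++;  /* e.g. 0x92 -> 9 */
    nbr_each[byte & 0x0F]++;         /*      0x92 -> 2 */
}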

XBee sending wrong ZDO responses

I'm playing around with two XBees, one configured as a coordinator and the other as a router. I want to read information about the network in an interoperable way, so I decided to use ZDO messages.
I send a message like this (profile ID 0x0000, cluster ID 0x0031) and receive, for example, the following response from the router:
7E 00 2D 91 00 13 A2 00 40 E5 F0 B4 FB CE 00 00 80 31 00 00 01 2C 00 01 00 01 58 CE C1 8D 7A 3F 2D 40 AB F0 E5 40 00 A2 13 00 00 00 04 02 00 FF 33
Correct answer cluster ID: 0x8031.
Focusing on the RF data, I have the following:
2C 00 01 00 01 58 CE C1 8D 7A 3F 2D 40 AB F0 E5 40 00 A2 13 00 00 00 04 02 00 FF
I now try to decode this hex string and face some problems.
From my point of view, this string should be encoded as defined in the 2012 ZigBee specification, Tables 2.126 and 2.127.
Unfortunately this doesn't work for me. If I ignore that the first byte should be the status and take the first two bytes instead, I can read out NeighborTableEntries, StartIndex, and NeighborTableListCount. But when it comes to the NeighborTableList, I can only read out the extended PAN ID, the extended address, and the network address; the rest of the string does not fit the standard. Am I doing something wrong here, or do the XBees not stick to the standard?
2C = Sequence Number
00 = Status (Success)
01 = 1 entry (total)
00 = starting at index 0
01 = 1 entry (in packet)
58 CE C1 8D 7A 3F 2D 40 = Extended Pan ID
AB F0 E5 40 00 A2 13 00 = IEEE address
00 00 = NodeId
04 = (Coordinator, RxOnWhenIdle)
02 = (Unknown Permit Join)
00 = (Coordinator)
FF = (LQI)
The values after the NodeId are bitmasks, not bytes.
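A sketch of pulling those fields apart, assuming the LSB-first bit layout of the neighbor table list record (device type in bits 0-1, RxOnWhenIdle in bits 2-3, relationship in bits 4-6 of the first byte; permit joining in bits 0-1 of the second byte; then one byte each for depth and LQI):

#include <stdio.h>

int main(void)
{
    /* The four bytes after "00 00" (NodeId) in the response above. */
    unsigned char flags1 = 0x04;  /* device type / RxOnWhenIdle / relationship */
    unsigned char flags2 = 0x02;  /* permit joining */
    unsigned char depth  = 0x00;
    unsigned char lqi    = 0xFF;

    unsigned device_type     =  flags1       & 0x03; /* 0 = coordinator */
    unsigned rx_on_when_idle = (flags1 >> 2) & 0x03; /* 1 = receiver on */
    unsigned relationship    = (flags1 >> 4) & 0x07; /* 0 = parent      */
    unsigned permit_joining  =  flags2       & 0x03; /* 2 = unknown     */

    printf("type=%u rxOnWhenIdle=%u relationship=%u permitJoin=%u depth=%u lqi=%u\n",
           device_type, rx_on_when_idle, relationship, permit_joining, depth, lqi);
    return 0;
}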
