EBCDIC to ASCII conversion not working properly in C

I have to process a file that comes from a mainframe. There is some non-Latin text in the file, and I have to check those non-Latin characters for invalid characters. Since the mainframe encodes the data in EBCDIC, I have to convert it to ASCII to do the validation.
I used the code below to convert from EBCDIC to ASCII, but when I run the program on the sample input, I get Hello there] instead of Hello there!.
I also checked the sample input against the EBCDIC table.
I also generated the lookup table using this, but got the same result.
Am I doing anything wrong? Or is the lookup table wrong?
Is there any other way to validate for invalid characters without converting to ASCII?
Sample code is below...
#include <stdio.h>
static const unsigned char e2a[256] = {
0, 1, 2, 3,156, 9,134,127,151,141,142, 11, 12, 13, 14, 15,
16, 17, 18, 19,157,133, 8,135, 24, 25,146,143, 28, 29, 30, 31,
128,129,130,131,132, 10, 23, 27,136,137,138,139,140, 5, 6, 7,
144,145, 22,147,148,149,150, 4,152,153,154,155, 20, 21,158, 26,
32,160,161,162,163,164,165,166,167,168, 91, 46, 60, 40, 43, 33,
38,169,170,171,172,173,174,175,176,177, 93, 36, 42, 41, 59, 94,
45, 47,178,179,180,181,182,183,184,185,124, 44, 37, 95, 62, 63,
186,187,188,189,190,191,192,193,194, 96, 58, 35, 64, 39, 61, 34,
195, 97, 98, 99,100,101,102,103,104,105,196,197,198,199,200,201,
202,106,107,108,109,110,111,112,113,114,203,204,205,206,207,208,
209,126,115,116,117,118,119,120,121,122,210,211,212,213,214,215,
216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,
123, 65, 66, 67, 68, 69, 70, 71, 72, 73,232,233,234,235,236,237,
125, 74, 75, 76, 77, 78, 79, 80, 81, 82,238,239,240,241,242,243,
92,159, 83, 84, 85, 86, 87, 88, 89, 90,244,245,246,247,248,249,
48, 49, 50, 51, 52, 53, 54, 55, 56, 57,250,251,252,253,254,255
};
void ebcdicToAscii (unsigned char *s)
{
    while (*s)
    {
        *s = e2a[(int) (*s)];
        s++;
    }
}

int main (void)
{
    unsigned char str[] = "\xc8\x85\x93\x93\x96\x40\xa3\x88\x85\x99\x85\x5a";
    ebcdicToAscii (str);
    printf ("%s\n", str);
    return 0;
}
Thanks in advance.

Your lookup table is wrong. It converts EBCDIC value 0x5A ('!') to decimal 93, and ASCII 93 is ']'. So your application works fine; it outputs the ']' character. You indicate that you generated the lookup table from a Python sample that uses cp500, which is IBM code page 500. That code page indeed maps EBCDIC value 0x5A to ']'. If you use the character set listed here for your lookup table instead, things will be OK.
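You can confirm this directly by indexing the table at 0x5A; a minimal check, assuming the e2a[] table from the question is defined in the same file:
#include <stdio.h>

/* In code page 1047, EBCDIC 0x5A is '!' (ASCII 0x21, decimal 33);
 * the table from the question yields 93 instead, which is ']'. */
int main (void)
{
    printf ("e2a[0x5A] = %d ('%c')\n", e2a[0x5A], e2a[0x5A]); /* prints: 93 (']') */
    return 0;
}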

There is a utility called iconv in USS on z/OS that will do the conversion for you; a full reference of the code pages it supports can be found here.
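For example (the exact code set names vary by platform, and ebcdic.txt / ascii.txt here are placeholder file names):
$ iconv -f IBM-1047 -t ISO8859-1 ebcdic.txt > ascii.txt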
That said, should you choose to roll your own, here are a few suggestions. First, EBCDIC to ASCII has a few subtle things to consider. There are a variety of EBCDIC code pages in use on z/OS systems; here is a link that provides more detail. In general, EBCDIC means Code Page 1047, which works for most North American users. However, there are other code pages that would impact what you are doing; for instance, in the UK, Code Page 37 is commonly used. This means there are subtle differences in character translation.
So you would need one translation table for each code page you're translating from to properly convert the special characters that are unique to each code page (a sketch of this follows the table below).
Also, this is pure preference, but I prefer seeing this kind of translation data in hex rather than decimal; it's often how people refer to characters on Z.
static const unsigned char e2a[256] = {
0x00, 0x01, 0x02, 0x03, 0x9c, 0x09, 0x86, 0x7f, 0x97, 0x8d, 0x8e, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x9d, 0x85, 0x08, 0x87, 0x18, 0x19, 0x92, 0x8f, 0x1c, 0x1d, 0x1e, 0x1f,
0x80, 0x81, 0x82, 0x83, 0x84, 0x0a, 0x17, 0x1b, 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x05, 0x06, 0x07,
0x90, 0x91, 0x16, 0x93, 0x94, 0x95, 0x96, 0x04, 0x98, 0x99, 0x9a, 0x9b, 0x14, 0x15, 0x9e, 0x1a,
0x20, 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa7, 0xa8, 0x5b, 0x2e, 0x3c, 0x28, 0x2b, 0x21,
0x26, 0xa9, 0xaa, 0xab, 0xac, 0xad, 0xae, 0xaf, 0xb0, 0xb1, 0x5d, 0x24, 0x2a, 0x29, 0x3b, 0x5e,
0x2d, 0x2f, 0xb2, 0xb3, 0xb4, 0xb5, 0xb6, 0xb7, 0xb8, 0xb9, 0x7c, 0x2c, 0x25, 0x5f, 0x3e, 0x3f,
0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf, 0xc0, 0xc1, 0xc2, 0x60, 0x3a, 0x23, 0x40, 0x27, 0x3d, 0x22,
0xc3, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0xc4, 0xc5, 0xc6, 0xc7, 0xc8, 0xc9,
0xca, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, 0x70, 0x71, 0x72, 0xcb, 0xcc, 0xcd, 0xce, 0xcf, 0xd0,
0xd1, 0x7e, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79, 0x7a, 0xd2, 0xd3, 0xd4, 0xd5, 0xd6, 0xd7,
0xd8, 0xd9, 0xda, 0xdb, 0xdc, 0xdd, 0xde, 0xdf, 0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe5, 0xe6, 0xe7,
0x7b, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0xe8, 0xe9, 0xea, 0xeb, 0xec, 0xed,
0x7d, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, 0x50, 0x51, 0x52, 0xee, 0xef, 0xf0, 0xf1, 0xf2, 0xf3,
0x5c, 0x9f, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, 0xf9,
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff
};
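A minimal sketch of that per-code-page idea, assuming you define one 256-byte table per source code page (e2a_1047 and e2a_37 are placeholder names):
typedef enum { CODEPAGE_1047, CODEPAGE_37 } ebcdic_codepage;

static const unsigned char *select_table (ebcdic_codepage cp)
{
    switch (cp)
    {
        case CODEPAGE_37: return e2a_37;   /* e.g. UK systems */
        default:          return e2a_1047; /* most North American systems */
    }
}

static void ebcdicToAsciiCp (unsigned char *s, ebcdic_codepage cp)
{
    const unsigned char *tbl = select_table (cp);
    for (; *s; s++)
        *s = tbl[*s];
}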

Related

API for setting ECC Key mbedTLS

I am trying to set the ECC private key explicitly with mbedTLS for ECDSA signing. The key has been generated externally from mbedTLS and consists of the following arrays for the private key and the public key on the NIST secp256r1 curve (below). In all of the mbedTLS ECDSA examples that I have seen, the key is generated with a random number generator via mbedtls_ecp_gen_key(), but this doesn't work for me since I need to generate the key pair outside of the code and then set it explicitly in the code.
const uint8_t Private_Key[] =
{
0x0a, 0x75, 0xde, 0x36, 0x78, 0x73, 0x50, 0x8b, 0x25, 0x1e, 0x19, 0xbe, 0xf4, 0x7b, 0x74,
0xfc, 0xd6, 0x97, 0x44, 0x12, 0x5f, 0x1c, 0x49, 0x89, 0x98, 0x0b, 0x65, 0x6c, 0x48, 0xa7, 0x8c, 0x5c
};
const uint8_t Public_Key[] =
{
0x3b, 0x08, 0xd7, 0x1a, 0x1b, 0x5a, 0xd0, 0x3e, 0x41, 0x5d, 0x8f, 0x68, 0xe9, 0x78,0x47, 0x6b,
0x35, 0x5c, 0xe2, 0x90, 0x8d, 0xb9, 0xc1, 0x46, 0xb1, 0x44, 0x77, 0x1f, 0x92, 0x57, 0xbf, 0x8e,
0x7c, 0xed, 0xdf, 0x3b, 0xea, 0xed, 0x5d, 0xea, 0x1d, 0x77, 0x39, 0xdb, 0xb7, 0x42, 0xe3, 0x6a,
0x07, 0x74, 0xca, 0x50, 0x8b, 0x19, 0xf5, 0x37, 0xd5, 0x2d, 0x57, 0x71, 0x70, 0x7e, 0xc7, 0x16
};
You can have a look at mbedtls_ecp_read_key for importing the secret key and mbedtls_ecp_point_read_binary for importing the public key from key data generated outside mbedTLS. Note that mbedtls_ecp_point_read_binary expects binary data in the uncompressed public key format, i.e. 0x04 followed by X followed by Y, which means you should prepend 0x04 to the Public_Key data in your code.
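A rough sketch of that import, assuming the mbedTLS 2.x API (where the keypair's grp and Q members are directly accessible) and the 32-byte Private_Key / 64-byte Public_Key arrays from the question:
#include <string.h>
#include "mbedtls/ecp.h"

/* Sketch only: import an externally generated secp256r1 key pair. */
int import_keypair(mbedtls_ecp_keypair *kp,
                   const unsigned char priv[32],   /* Private_Key         */
                   const unsigned char pub_xy[64]) /* Public_Key = X || Y */
{
    unsigned char pub[65];
    int ret;

    mbedtls_ecp_keypair_init(kp);

    /* Load the group and the private scalar d. */
    ret = mbedtls_ecp_read_key(MBEDTLS_ECP_DP_SECP256R1, kp, priv, 32);
    if (ret != 0)
        return ret;

    /* Prepend 0x04 to form the uncompressed point 0x04 || X || Y. */
    pub[0] = 0x04;
    memcpy(pub + 1, pub_xy, 64);

    ret = mbedtls_ecp_point_read_binary(&kp->grp, &kp->Q, pub, sizeof(pub));
    if (ret != 0)
        return ret;

    /* Sanity check that Q is a valid point on the curve. */
    return mbedtls_ecp_check_pubkey(&kp->grp, &kp->Q);
}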

Verifying signature created using OpenSSL with BearSSL

I am trying to verify an ECDSA signature created with OpenSSL on an embedded device using BearSSL.
First I created a private key using OpenSSL and extracted the public key:
$ openssl ecparam -name secp256r1 -genkey -noout -out private.pem
$ openssl ec -in private.pem -pubout -out public.pem
Then I extracted the raw public key:
$ openssl ec -noout -text -inform PEM -in public.pem -pubin
read EC key
Public-Key: (256 bit)
pub:
04:28:4b:54:a4:d4:92:6c:82:2d:da:8a:e1:be:4b:
49:61:5d:91:2b:2d:f5:f2:66:f8:9b:d1:be:cb:fb:
db:fc:4f:68:cf:52:68:55:36:53:0f:8e:8d:69:3f:
40:3a:06:62:ad:5b:5a:66:e6:1d:31:c6:13:08:f3:
4f:94:7b:59:7a
I have a text.txt that contains nothing but hello world, and I sign it using the following command:
$ cat text.txt
hello world
$ openssl dgst -sha256 -sign private.pem text.txt > signature
This now contains the signature in ASN.1 format and is 72 bytes long, as expected:
$ hexdump signature
0000000 4530 2002 ac54 51af 8ac0 cee8 dc74 4120
0000010 105c b65d a085 06c5 8e9f 1527 12f5 8e50
0000020 9d19 9b30 2102 a900 d2a5 343e 3a10 0bdd
0000030 e0a8 82f8 de2a 4f2d 51bf a775 bc42 2d2e
0000040 19c0 874f d85e 004b
Now to the embedded part. I include the data, signature and the public key first:
uint8_t text[] = "hello world\n";
size_t textlen = 12;
uint8_t signature[] = {
0x45, 0x30, 0x20, 0x02, 0xac, 0x54, 0x51, 0xaf, 0x8a, 0xc0, 0xce, 0xe8, 0xdc, 0x74, 0x41, 0x20,
0x10, 0x5c, 0xb6, 0x5d, 0xa0, 0x85, 0x06, 0xc5, 0x8e, 0x9f, 0x15, 0x27, 0x12, 0xf5, 0x8e, 0x50,
0x9d, 0x19, 0x9b, 0x30, 0x21, 0x02, 0xa9, 0x00, 0xd2, 0xa5, 0x34, 0x3e, 0x3a, 0x10, 0x0b, 0xdd,
0xe0, 0xa8, 0x82, 0xf8, 0xde, 0x2a, 0x4f, 0x2d, 0x51, 0xbf, 0xa7, 0x75, 0xbc, 0x42, 0x2d, 0x2e,
0x19, 0xc0, 0x87, 0x4f, 0xd8, 0x5e, 0x00, 0x4b };
static const uint8_t public_bytes[] = {
0x04, 0x28, 0x4b, 0x54, 0xa4, 0xd4, 0x92, 0x6c, 0x82, 0x2d, 0xda, 0x8a, 0xe1, 0xbe, 0x4b,
0x49, 0x61, 0x5d, 0x91, 0x2b, 0x2d, 0xf5, 0xf2, 0x66, 0xf8, 0x9b, 0xd1, 0xbe, 0xcb, 0xfb,
0xdb, 0xfc, 0x4f, 0x68, 0xcf, 0x52, 0x68, 0x55, 0x36, 0x53, 0x0f, 0x8e, 0x8d, 0x69, 0x3f,
0x40, 0x3a, 0x06, 0x62, 0xad, 0x5b, 0x5a, 0x66, 0xe6, 0x1d, 0x31, 0xc6, 0x13, 0x08, 0xf3,
0x4f, 0x94, 0x7b, 0x59, 0x7a };
static const br_ec_public_key public_key = {
.curve = BR_EC_secp256r1,
.q = (void *)public_bytes,
.qlen = sizeof(public_bytes)
};
Also, out of paranoia, I compared the MD5 sums of text.txt and my string as a quick check before going any further:
$ md5sum text.txt
6f5902ac237024bdd0c176cb93063dc4 text.txt
uint8_t sum[br_md5_SIZE];
br_md5_context md5ctx;
br_md5_init(&md5ctx);
br_md5_update(&md5ctx, text, textlen);
br_md5_out(&md5ctx, sum); // sum is the same as md5sum command output
I previously told OpenSSL to use SHA-256 to hash the payload for the ECDSA signing process, so I am doing the same in BearSSL now and trying to verify the signature using secp256r1:
br_sha256_context sha256ctx;
br_sha256_init(&sha256ctx);
br_sha256_update(&sha256ctx, text, textlen);
br_sha256_out(&sha256ctx, hash);
uint32_t result = br_ecdsa_i15_vrfy_asn1(&br_ec_prime_i15, hash, sizeof(hash), &public_key, signature, sizeof(signature));
My expectation would be that the verify works, as I give it the same hash, hash function, curve, the corresponding public key and the signature. However, this doesn't work. I am probably missing something obvious but can't work it out.
The signature file contents shown as
$ hexdump signature
0000000 4530 2002 ac54 51af 8ac0 cee8 dc74 4120
...
are displayed as 16-bit values.
The signature in the C program is defined as an array of 8-bit values
uint8_t signature[] = {
0x45, 0x30, 0x20, 0x02, 0xac, 0x54, 0x51, 0xaf, 0x8a, 0xc0, 0xce, 0xe8, 0xdc, 0x74, 0x41, 0x20,
...
};
Depending on the byte order this may or may not be correct. Does 4530 correspond to 45, 30 or 30, 45?
With little-endian byte-order, the hex dump
4530 2002 ...
would correspond to (*)
uint8_t signature[] = {
0x30, 0x45, 0x02, 0x20, ...
};
I suggest displaying the hex dump as 8-bit values, e.g. by using
od -t x1 signature
and, if necessary, fix the array initialization in the C code.
According to dave_thompson_085's comment, the correct byte order is 0x30, 0x45, so the proposed fix (*) is the solution.
An ECDSA signature on a 256-bit group definitely starts with the tag SEQUENCE+constructed (always 0x30), followed by the body length, usually 68 to 70 (0x44 to 0x46).
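Those two facts also make for a cheap sanity check before handing the buffer to the verifier, which would have caught the swapped bytes; a sketch (the length range is the typical one quoted above, not a hard rule):
#include <stddef.h>
#include <stdint.h>

/* Plausibility check for a DER-encoded ECDSA signature on a 256-bit curve:
 * byte 0 is the SEQUENCE tag 0x30, byte 1 the body length (usually 0x44-0x46). */
static int looks_like_der_ecdsa_sig(const uint8_t *sig, size_t len)
{
    if (len < 2 || sig[0] != 0x30)
        return 0; /* wrong tag - bytes may be in the wrong order */
    if (sig[1] < 0x44 || sig[1] > 0x46)
        return 0; /* implausible length for a P-256 signature */
    return (size_t)sig[1] + 2 == len;
}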

How are raw (hex) bitmap data created and is it possible to view them without a different program?

Recently I started working with the M5Stack Core2 and wondered how it displays images on the screen. Using an example given for Arduino, I see that the project contains many '.c' files, each with a const unsigned char array holding a series of hex values. Here is an example of one of those files:
const unsigned char image_DigNumber_0000_0[504] = {
0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x16, 0xcc, 0xcc, 0xcc, 0xcc, 0xcc, 0xcc, 0xc7, 0x11, 0x11, 0x11, 0x11, 0x1e, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0x21, 0x11, 0x11, 0x12, 0x34, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf7, 0x42, 0x11, 0x11, 0x1a, 0xf5, 0x4f, 0xff, 0xff, 0xff, 0xff, 0xff, 0x76, 0xf7, 0x11, 0x11, 0x1c, 0xff, 0x74,
0xff, 0xff, 0xff, 0xff, 0xf7, 0x7f, 0xf7, 0x11, 0x11, 0x1f, 0xff, 0xf7, 0x4c, 0xcc, 0xcc, 0xcc, 0x77, 0xff, 0xf7, 0x11, 0x11, 0x1f, 0xff, 0xfc, 0x11, 0x11, 0x11, 0x11, 0x6f, 0xff, 0xf7, 0x11,
0x11, 0x1f, 0xff, 0xfc, 0x11, 0x11, 0x11, 0x11, 0x7f, 0xff, 0xf7, 0x11, 0x11, 0x1f, 0xff, 0xfc, 0x11, 0x11, 0x11, 0x11, 0x7f, 0xff, 0xf7, 0x11, 0x11, 0x1f, 0xff, 0xfc, 0x11, 0x11, 0x11, 0x11,
0x7f, 0xff, 0xf7, 0x11, 0x11, 0x5f, 0xff, 0xfa, 0x11, 0x11, 0x11, 0x11, 0x7f, 0xff, 0xf5, 0x11, 0x11, 0x5f, 0xff, 0xf7, 0x11, 0x11, 0x11, 0x11, 0x9f, 0xff, 0xf5, 0x11, 0x11, 0x5f, 0xff, 0xf7,
0x11, 0x11, 0x11, 0x11, 0xcf, 0xff, 0xf5, 0x11, 0x11, 0x5f, 0xff, 0xf7, 0x11, 0x11, 0x11, 0x11, 0xcf, 0xff, 0xf5, 0x11, 0x11, 0x6f, 0xff, 0xf7, 0x11, 0x11, 0x11, 0x11, 0xcf, 0xff, 0xf5, 0x11,
0x11, 0x7f, 0xff, 0xf7, 0x11, 0x11, 0x11, 0x11, 0xcf, 0xff, 0xf5, 0x11, 0x11, 0x7f, 0xff, 0xf3, 0x11, 0x11, 0x11, 0x11, 0x7f, 0xff, 0xf2, 0x11, 0x11, 0x7f, 0xfd, 0x41, 0x11, 0x11, 0x11, 0x11,
0x17, 0xff, 0xf1, 0x11, 0x11, 0x2f, 0xd2, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x6f, 0xc1, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x13, 0x31, 0x11, 0x11, 0x56, 0x11, 0x11,
0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0xef, 0x91, 0x11, 0x11, 0x11, 0x11, 0x11, 0x13, 0xdf, 0x31, 0x11, 0x11, 0xff, 0xfa, 0x11, 0x11, 0x11, 0x11, 0x11, 0x4f, 0xff, 0x71, 0x11,
0x13, 0xff, 0xff, 0x91, 0x11, 0x11, 0x11, 0x13, 0xff, 0xff, 0x71, 0x11, 0x15, 0xff, 0xff, 0xc1, 0x11, 0x11, 0x11, 0x17, 0xff, 0xff, 0x71, 0x11, 0x15, 0xff, 0xff, 0xc1, 0x11, 0x11, 0x11, 0x17,
0xff, 0xff, 0x71, 0x11, 0x15, 0xff, 0xff, 0xc1, 0x11, 0x11, 0x11, 0x17, 0xff, 0xff, 0x51, 0x11, 0x15, 0xff, 0xff, 0xc1, 0x11, 0x11, 0x11, 0x17, 0xff, 0xff, 0x51, 0x11, 0x15, 0xff, 0xff, 0xb1,
0x11, 0x11, 0x11, 0x17, 0xff, 0xff, 0x51, 0x11, 0x17, 0xff, 0xff, 0x71, 0x11, 0x11, 0x11, 0x1c, 0xff, 0xff, 0x51, 0x11, 0x17, 0xff, 0xff, 0x71, 0x11, 0x11, 0x11, 0x1c, 0xff, 0xff, 0x41, 0x11,
0x17, 0xff, 0xff, 0x71, 0x11, 0x11, 0x11, 0x1c, 0xff, 0xff, 0x11, 0x11, 0x17, 0xff, 0xff, 0x71, 0x11, 0x11, 0x11, 0x1c, 0xff, 0xff, 0x11, 0x11, 0x17, 0xff, 0xff, 0x61, 0x11, 0x11, 0x11, 0x1d,
0xff, 0xff, 0x11, 0x11, 0x17, 0xff, 0xf7, 0x7c, 0xcc, 0xcc, 0xcc, 0x47, 0xff, 0xff, 0x11, 0x11, 0x17, 0xff, 0x77, 0xff, 0xff, 0xff, 0xff, 0xf4, 0x6f, 0xfc, 0x11, 0x11, 0x17, 0xfa, 0x7f, 0xff,
0xff, 0xff, 0xff, 0xff, 0x44, 0xf7, 0x11, 0x11, 0x14, 0x97, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf4, 0x31, 0x11, 0x11, 0x11, 0x2f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x11, 0x11, 0x11,
0x11, 0x19, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xd7, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11,
};
It is called in the main code like this:
DisClockbuff.drawColorBitmap(posx[1],26,24,42,
DigNumber[ sytState.Rtctime.Hours % 10 ],
0xff9c00,
0x1a1100);
My question is: how did they create these bitmap images and gather the hex values? Additionally, how would I be able to view these individual bitmap images using the hex values? Is it even possible?
I would suggest watching Ben Eater's series on the "Worst Video Card" (https://www.youtube.com/watch?v=uqY3FMuMuRo); it will give you some insight into how an image is represented as binary data. Watch from minute 15:00 of that video.
It is possible to convert hex to BMP by following this description: https://www.youtube.com/watch?v=Oj1I55Amrsk
If you are patient you could do it manually here: https://hexed.it based on those rules; however, the task is better suited to a nice C++ or Matlab algorithm, as has been previously discussed here.
How did they create these bitmap images and gather the hex values?
How the image is created depends on the hardware of the display, more specifically on the display driver chip used by the display module. You would have to know the display driver (i.e., read the data sheet of the display driver) and create a library and fonts for using the chip.
I don't know what display the M5Stack is using, but I have a blog post about creating a display library for the LCD commonly known as the Nokia 5110 display, which uses a display chip called the PCD8544; it should give you some insight into how a bitmap image can be created for that particular display chip.
How would I be able to view these individual bitmap images using the hex values?
The answer is maybe. If you know the columns and the rows, and align your data accordingly, you might be able to roughly make out the image to be displayed. For example, below is the bitmap image of Adafruit's logo for its SSD1306 OLED (a 128x64-pixel display). When the data is aligned this way, you can vaguely see the star-fruit logo directly in the data structure, because each data bit (0 or 1) maps directly to a pixel on the screen. This might not be possible, however, depending on how the data is structured (e.g., some displays are divided into pages, and within each page each data byte represents a vertical column on the display).
static const unsigned char PROGMEM logo_bmp[] =
{ 0b00000000, 0b11000000,
0b00000001, 0b11000000,
0b00000001, 0b11000000,
0b00000011, 0b11100000,
0b11110011, 0b11100000,
0b11111110, 0b11111000,
0b01111110, 0b11111111,
0b00110011, 0b10011111,
0b00011111, 0b11111100,
0b00001101, 0b01110000,
0b00011011, 0b10100000,
0b00111111, 0b11100000,
0b00111111, 0b11110000,
0b01111100, 0b11110000,
0b01110000, 0b01110000,
0b00000000, 0b00110000 };
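If you want to view such a 1-bit-per-pixel bitmap without a separate image program, one low-tech option is to print it as ASCII art. Below is a minimal sketch assuming a row-major, MSB-first layout like the logo above (on a PC, drop the PROGMEM qualifier); it will not work unchanged for the digit image in the question, which packs its pixels differently:
#include <stdio.h>
#include <stdint.h>

/* Sketch: print a 1-bpp, row-major, MSB-first bitmap as ASCII art.
 * width is assumed to be a multiple of 8 (the logo above is 16x16). */
static void dump_bitmap(const uint8_t *bmp, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int byte = bmp[y * (width / 8) + x / 8];
            putchar((byte & (0x80 >> (x % 8))) ? '#' : ' ');
        }
        putchar('\n');
    }
}

/* Usage, with logo_bmp pasted into the same file:
 *     dump_bitmap(logo_bmp, 16, 16);
 * prints a rough star-fruit shape made of '#' characters. */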

Access violation when using OpenSSL's HMAC

I'm trying to do an HMAC-SHA512 on some data using OpenSSL. I get an "Exception thrown at 0x... (libcrypto-1_1-x64.dll) in Program.exe: 0xC0000005: Access violation writing location 0x..." error when executing the following code:
int main(int argc, char** argv)
{
uint8_t* data[] = { 0x14, 0xf7, 0xbd, 0x95, 0x57, 0x9a, 0x7e, 0xa1, 0x5c, 0xf7, 0x27, 0x91, 0x0d, 0x61, 0x58, 0x01, 0xa3, 0x12, 0x17, 0x54, 0x0b, 0x2e, 0xb4, 0xc5, 0xb1, 0xeb, 0xab, 0xe0, 0x43, 0x9b, 0x8e, 0x1f, 0x39, 0x7d, 0x85, 0x1a, 0x3a, 0x4b, 0x9c, 0xf4, 0xbf, 0x31, 0x55, 0x72, 0x41, 0xf5, 0xdb, 0xcb, 0xb3, 0xa6, 0xb5, 0xb8, 0x82, 0xe5, 0xef, 0x18, 0x72, 0xa0, 0x59, 0x08, 0x9b, 0xfa, 0x17, 0xa3 };
uint8_t* key = "some_rand_pw";
uint8_t* result = malloc(64);
memset(result, 0, 64);
HMAC(EVP_sha512(), key, 12, data, 64, result, (unsigned int)64); //ERROR
}
I would use uint8_t* result = HMAC(EVP_sha512(), key, 12, data, 64, NULL, NULL), but it isn't thread safe, and this will be a multithreaded program. Anyone have any idea what I did wrong here?
I'm using Visual Studio 2017 with 64-bit OpenSSL pre-built for Windows.
Your code is wrong. data must be an array of uint8_t, but you declared it as an array of pointers to uint8_t.
Furthermore, the last parameter of HMAC must be a pointer to unsigned int, but you provided an unsigned int; that is the reason for the crash.
Your compiler should have warned you. Compile with -Wall.
Corrected (untested) code:
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

int main(int argc, char** argv)
{
    uint8_t data[] = { 0x14, 0xf7, 0xbd, 0x95, 0x57, 0x9a, 0x7e, 0xa1, 0x5c, 0xf7, 0x27, 0x91, 0x0d, 0x61, 0x58, 0x01, 0xa3, 0x12, 0x17, 0x54, 0x0b, 0x2e, 0xb4, 0xc5, 0xb1, 0xeb, 0xab, 0xe0, 0x43, 0x9b, 0x8e, 0x1f, 0x39, 0x7d, 0x85, 0x1a, 0x3a, 0x4b, 0x9c, 0xf4, 0xbf, 0x31, 0x55, 0x72, 0x41, 0xf5, 0xdb, 0xcb, 0xb3, 0xa6, 0xb5, 0xb8, 0x82, 0xe5, 0xef, 0x18, 0x72, 0xa0, 0x59, 0x08, 0x9b, 0xfa, 0x17, 0xa3 };
    const uint8_t key[] = "some_rand_pw"; /* 12 bytes, matching the key length below */
    uint8_t* result = malloc(64);
    unsigned int len;
    memset(result, 0, 64);
    HMAC(EVP_sha512(), key, 12, data, 64, result, &len);
    free(result);
    return 0;
}
There is still room for improvement though.
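For example, since the one-shot HMAC() call with a NULL output buffer returns a pointer to a static buffer, one thread-safe alternative is a caller-owned context; a sketch against the OpenSSL 1.1 HMAC_CTX API:
#include <stdint.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Sketch: thread-safe HMAC-SHA512; out must hold 64 bytes.
 * Returns 1 on success, 0 on failure. */
static int hmac_sha512(const void *key, int keylen,
                       const uint8_t *data, size_t datalen,
                       uint8_t out[64])
{
    unsigned int len = 0;
    HMAC_CTX *ctx = HMAC_CTX_new(); /* each thread uses its own context */
    if (ctx == NULL)
        return 0;
    int ok = HMAC_Init_ex(ctx, key, keylen, EVP_sha512(), NULL)
          && HMAC_Update(ctx, data, datalen)
          && HMAC_Final(ctx, out, &len);
    HMAC_CTX_free(ctx);
    return ok && len == 64;
}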

extreme difference in time between AES-CBC + HMAC and AES-GCM

So I've been searching far and wide for different AES implementations for CBC and GCM. I do not want to implement this myself in case I make mistakes, so I found the following AES-CBC implementations and tested their speed on my RX63N (Renesas test board).
                   Encrypt              Decrypt
                   bytes  speed (μs)    bytes  speed (μs)
Tiny AES           64     1500          64     8900
                   128    2880          128    17820
aes-byte-29-08-08  64     1250          64     4900
                   128    1220          128    9740
Cyclone            64     230           64     237
                   128    375           128    387
I was surprised by how much faster Cyclone was; to clarify, I took only the AES, CBC and Endian files from CycloneSSL.
Then I tried GCM from CycloneSSL, and this was the output:
             Encrypt              Decrypt
             bytes  speed (μs)    bytes  speed (μs)
Cyclone GCM  64     9340          64     9340
             128    14900         128    14900
I have examined the HMAC times (from CycloneSSL) to see how much that would take:
HMAC        bytes  speed (μs)
Sha1        64     746
            128    857
Sha224      64     918
            128    1066
Sha256      64     918
            128    1066
Sha384      64     2395
            128    2840
Sha512      64     2400
            128    2840
Sha512_224  64     2390
            128    2835
Sha512_256  64     2390
            128    2835
MD5         64     308
            128    345
Whirlpool   64     5630
            128    6420
Tiger       64     832
            128    952
The slowest of these is Whirlpool.
If you add the CBC encryption time for 128 bytes to the HMAC-Whirlpool time for 128 bytes, you get 6795 μs, which is about half the time GCM takes.
Now, I can understand that GHASH takes a bit longer than HMAC because of the Galois-field arithmetic and such, but being two times slower than the slowest hash algorithm I know is insane.
So I've started to wonder whether I did anything wrong, or whether the CycloneSSL GCM implementation is just really slow. Unfortunately, I have not found another easy-to-use GCM implementation in C to compare it with.
All the code I used can be found on pastebin; the different files are separated by --------------------
This is the code I use to encrypt with GCM:
static void test_encrypt(void)
{
    uint8_t key[] = { 0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6, 0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c };
    uint8_t iv[]  = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
    uint8_t in[]  = { 0x48, 0x61, 0x6c, 0x6c, 0x6f, 0x20, 0x68, 0x6f, 0x65, 0x20, 0x67, 0x61, 0x61, 0x74, 0x20, 0x68,
                      0x65, 0x74, 0x20, 0x6d, 0x65, 0x74, 0x20, 0x6a, 0x6f, 0x75, 0x20, 0x76, 0x61, 0x6e, 0x64, 0x61,
                      0x61, 0x67, 0x2c, 0x20, 0x6d, 0x65, 0x74, 0x20, 0x6d, 0x69, 0x6a, 0x20, 0x67, 0x61, 0x61, 0x74,
                      0x20, 0x68, 0x65, 0x74, 0x20, 0x67, 0x6f, 0x65, 0x64, 0x20, 0x68, 0x6f, 0x6f, 0x72, 0x2e, 0x21,
                      0x48, 0x61, 0x6c, 0x6c, 0x6f, 0x20, 0x68, 0x6f, 0x65, 0x20, 0x67, 0x61, 0x61, 0x74, 0x20, 0x68,
                      0x65, 0x74, 0x20, 0x6d, 0x65, 0x74, 0x20, 0x6a, 0x6f, 0x75, 0x20, 0x76, 0x61, 0x6e, 0x64, 0x61,
                      0x61, 0x67, 0x2c, 0x20, 0x6d, 0x65, 0x74, 0x20, 0x6d, 0x69, 0x6a, 0x20, 0x67, 0x61, 0x61, 0x74,
                      0x20, 0x68, 0x65, 0x74, 0x20, 0x67, 0x6f, 0x65, 0x64, 0x20, 0x68, 0x6f, 0x6f, 0x72, 0x2e, 0x21 };

    AesContext context;
    aesInit(&context, key, 16); // 16 bytes = 128 bits
    error_crypto_t error = gcmEncrypt(AES_CIPHER_ALGO, &context, iv, 16, 0, 0, in, in, 128, key, 16);
}

static void test_decrypt(void)
{
    uint8_t key[] = { 0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6, 0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c };
    uint8_t tag[] = { 0x56, 0x56, 0x5C, 0xCD, 0x5C, 0x57, 0x36, 0x66, 0x73, 0xF7, 0xFF, 0x2A, 0x17, 0x49, 0x0E, 0xC4 };
    uint8_t iv[]  = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
    uint8_t out[] = { 0x05, 0x7C, 0x51, 0xFF, 0xE4, 0x9F, 0x8C, 0x90, 0xF1, 0x7D, 0x56, 0xFB, 0x87, 0xB9, 0x44, 0x79,
                      0xB1, 0x04, 0x32, 0x39, 0x78, 0xFF, 0x51, 0x60, 0x48, 0x0B, 0x21, 0x77, 0xF2, 0x26, 0x0B, 0x94,
                      0x7B, 0xA7, 0x26, 0x74, 0x87, 0xA8, 0x2C, 0x5A, 0xA1, 0x19, 0x03, 0x17, 0x66, 0x3A, 0x46, 0x9F,
                      0xE6, 0x1D, 0x3B, 0x65, 0xFD, 0xC0, 0xBA, 0xC0, 0xD9, 0x45, 0xE7, 0x17, 0x74, 0x0F, 0xB7, 0x4B,
                      0x0F, 0xF0, 0x16, 0xF6, 0xE8, 0x4F, 0xFD, 0x96, 0x64, 0x5E, 0xDB, 0x9E, 0x3A, 0x0B, 0x93, 0x8F,
                      0x87, 0x83, 0x90, 0xF8, 0xF9, 0xE6, 0xA3, 0xE7, 0x5E, 0x72, 0x3C, 0xB5, 0x98, 0x54, 0x11, 0xD7,
                      0xB4, 0x7C, 0xFF, 0xA3, 0x51, 0x1A, 0xB0, 0x69, 0x4F, 0x57, 0xBB, 0x83, 0x40, 0x2A, 0xE6, 0x75,
                      0x8B, 0xB5, 0xCA, 0xA4, 0x84, 0x82, 0x1D, 0xA8, 0x94, 0x03, 0x77, 0x9C, 0x3B, 0xF8, 0xA0, 0x60 };

    AesContext context;
    aesInit(&context, key, 16); // 16 bytes = 128 bits
    error_crypto_t error = gcmDecrypt(AES_CIPHER_ALGO, &context, iv, 16, 0, 0, out, out, 128, tag, 16);
}
The data in out[] is the GCM-encrypted data from in[], and it all works properly (decrypts correctly and passes authentication).
Question
Are all GCM implementations this slow?
Are there other (better) GCM implementations?
Should I just use HMAC if I want fast encryption + verification?
EDIT
I have been able to get the GCM implementation from mbedTLS (PolarSSL) to work, which is about 11 times faster than Cyclone (it takes 880 μs to encrypt/decrypt 128 bytes), and it produces the same output as the Cyclone GCM, so I'm confident it works properly.
gcm_context gcm_ctx;
gcm_init(&gcm_ctx, POLARSSL_CIPHER_ID_AES, key, 128);
int error = gcm_auth_decrypt(&gcm_ctx, 128, iv, 16, NULL, 0, tag, 16, out, buffer);
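For reference, a sketch of the matching encrypt side, assuming the same PolarSSL 1.3-era gcm_crypt_and_tag and gcm_free calls (key, iv, in and out as in the earlier tests; tag receives the authentication tag):
unsigned char tag[16];
gcm_context gcm_ctx;
gcm_init(&gcm_ctx, POLARSSL_CIPHER_ID_AES, key, 128);
int error = gcm_crypt_and_tag(&gcm_ctx, GCM_ENCRYPT, 128, iv, 16,
                              NULL, 0, in, out, 16, tag);
gcm_free(&gcm_ctx); /* release the context when finished */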
Your numbers seem odd: 128 bytes for aes-byte-29-08-08 takes less time than 64 bytes for encryption?
Assuming RX63N is comparable to Cortex-M (they both are 32 bit, no vector unit, and it's difficult to find information on RX63N):
The claimed benchmark for SharkSSL puts CBC at a bit more than twice as fast as GCM, 2.6x if optimized for speed. 9340 is way, way larger than 340.
Cifra's benchmark shows a 10x difference between their AES and AES-GCM, although the GCM test also included auth-data. Still nowhere close to your differential between straight AES and GCM.
So in relative terms, to answer 1, I don't think all GCM implementations are that slow, relative to plain AES.
As for other GCM implementations, there's the aforementioned Cifra (although I hadn't heard of it until just now, and it only has 3 stars on GitHub (if that means anything), so the level of vetting is likely to be rather low), and maybe you can rip out the AES-GCM implementation from FreeBSD. I can't speak to performance on your platform in absolute terms, though.
HMAC is likely to be faster on platforms without hardware support (AES-NI, or CLMUL for GHASH), regardless of the implementation. How performance-critical is this? Do you have to use AES or a block cipher? Perhaps ChaCha20+Poly1305 suits your performance needs better (see the performance numbers from Cifra). That's now being used in OpenSSH - chacha.* and poly1305.*
Be aware of side-channel attacks. Software implementations of AES can be sensitive to cache-timing attacks, although I don't think this is applicable to microcontrollers where everything is in SRAM anyway.
*Salsa20 is ChaCha20's predecessor.
