USB HID keycodes (keyboard) - c

I'm trying to implement a laptop keyboard on a microcontroller, and I can't find USB usage codes for some Fn buttons. The USB HID usage tables define usages for changing display brightness, volume, and media keys, but I need more than that. Can someone tell me where to find usage codes for "disable touchpad" or "enable airplane mode"?
Here is my current descriptor:
0x05, 0x01, // USAGE_PAGE (Generic Desktop)
0x09, 0x06, // USAGE (Keyboard)
0xa1, 0x01, // COLLECTION (Application)
0x85, 0x01, // Report ID (1)
0x05, 0x07, // USAGE_PAGE (Keyboard)
0x19, 0xe0, // USAGE_MINIMUM (Keyboard LeftControl)
0x29, 0xe7, // USAGE_MAXIMUM (Keyboard Right GUI)
0x15, 0x00, // LOGICAL_MINIMUM (0)
0x25, 0x01, // LOGICAL_MAXIMUM (1)
0x75, 0x01, // REPORT_SIZE (1)
0x95, 0x08, // REPORT_COUNT (8)
0x81, 0x02, // INPUT (Data,Var,Abs)
0x95, 0x01, // REPORT_COUNT (1)
0x75, 0x08, // REPORT_SIZE (8)
0x81, 0x03, // INPUT (Cnst,Var,Abs)
0x95, 0x05, // REPORT_COUNT (5)
0x75, 0x01, // REPORT_SIZE (1)
0x05, 0x08, // USAGE_PAGE (LEDs)
0x19, 0x01, // USAGE_MINIMUM (Num Lock)
0x29, 0x05, // USAGE_MAXIMUM (Kana)
0x91, 0x02, // OUTPUT (Data,Var,Abs)
0x95, 0x01, // REPORT_COUNT (1)
0x75, 0x03, // REPORT_SIZE (3)
0x91, 0x03, // OUTPUT (Cnst,Var,Abs)
0x95, 0x06, // REPORT_COUNT (6)
0x75, 0x08, // REPORT_SIZE (8)
0x15, 0x00, // LOGICAL_MINIMUM (0)
0x25, 0x65, // LOGICAL_MAXIMUM (101)
0x05, 0x07, // USAGE_PAGE (Keyboard)
0x19, 0x00, // USAGE_MINIMUM (Reserved (no event indicated))
0x29, 0x65, // USAGE_MAXIMUM (Keyboard Application)
0x81, 0x00, // INPUT (Data,Ary,Abs)
0xc0, // END_COLLECTION
0x05, 0x0C, // Usage Page (Consumer)
0x09, 0x01, // Usage (Consumer Control)
0xA1, 0x01, // Collection (Application)
0x85, 0x02, // Report ID (2)
0x05, 0x0C, // Usage Page (Consumer)
0x15, 0x00, // Logical Minimum (0)
0x25, 0x01, // Logical Maximum (1)
0x75, 0x01, // Report Size (1)
0x95, 0x08, // Report Count (8)
0x09, 0x6F, // Usage (Brightness Increment)
0x09, 0x70, // Usage (Brightness Decrement)
0x09, 0xB8, // Usage (Eject)
0x09, 0xCD, // Usage (Play/Pause)
0x09, 0xE2, // Usage (Mute)
0x09, 0xE9, // Usage (Volume Increment)
0x09, 0xEA, // Usage (Volume Decrement)
0x81, 0x02, // Input (Data,Var,Abs,No Wrap,Linear,Preferred State,No Null Position)
0xC0, // End Collection

I found a way to toggle flight mode in the Wireless Radio Controls section; here is the descriptor for it. I still can't find a way to disable the touchpad.
0x05, 0x01, // USAGE_PAGE (Generic Desktop)
0x09, 0x0C, // USAGE (Wireless Radio Controls)
0xA1, 0x01, // COLLECTION (Application)
0x85, 0x03, // Report ID (3)
0x15, 0x00, // LOGICAL_MINIMUM (0)
0x25, 0x01, // LOGICAL_MAXIMUM (1)
0x09, 0xC6, // USAGE (Wireless Radio Button)
0x95, 0x01, // REPORT_COUNT (1)
0x75, 0x01, // REPORT_SIZE (1)
0x81, 0x06, // INPUT (Data,Var,Rel)
0x75, 0x07, // REPORT_SIZE (7)
0x81, 0x03, // INPUT (Cnst,Var,Abs)
0xC0, // END_COLLECTION
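For reference, the reports these descriptors imply are small byte arrays. For variable Input items, usages map to bits in declaration order, so in report 2 the first declared usage (Brightness Increment) lands in bit 0 and Volume Decrement in bit 6. A sketch of building the payloads (the helper names are my own, not from any HID stack):

```c
#include <stdint.h>

/* Bit positions in report ID 2 follow the usage declaration order
   in the Consumer descriptor above; bit 7 is unused padding. */
enum {
    CONSUMER_BRIGHTNESS_UP   = 1u << 0,
    CONSUMER_BRIGHTNESS_DOWN = 1u << 1,
    CONSUMER_EJECT           = 1u << 2,
    CONSUMER_PLAY_PAUSE      = 1u << 3,
    CONSUMER_MUTE            = 1u << 4,
    CONSUMER_VOLUME_UP       = 1u << 5,
    CONSUMER_VOLUME_DOWN     = 1u << 6
};

/* Report 2: [report ID][bitmask]. Send the mask for a press,
   then an all-zero mask for the release. */
static void make_consumer_report(uint8_t buf[2], uint8_t mask) {
    buf[0] = 0x02; /* report ID 2 */
    buf[1] = mask;
}

/* Report 3: the Wireless Radio Button is declared Rel(ative),
   so a 0 -> 1 transition signals one button press. */
static void make_radio_report(uint8_t buf[2], int pressed) {
    buf[0] = 0x03; /* report ID 3 */
    buf[1] = pressed ? 0x01 : 0x00;
}
```

Sending `0x02 0x10` followed by `0x02 0x00`, for example, produces one Mute keypress.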

API for setting ECC Key mbedTLS

I am trying to set the ECC private key explicitly with mbedTLS for ECDSA signing. The key has been generated externally to mbedTLS and consists of the following arrays for the private key and the public key on the NIST secp256r1 curve (below). In all of the mbedTLS ECDSA examples that I have seen, the key is generated with a random number generator via mbedtls_ecp_gen_key(), but this doesn't work for me since I need to generate the key pair outside of the code and then set it explicitly in the code.
const uint8_t Private_Key[] =
{
0x0a, 0x75, 0xde, 0x36, 0x78, 0x73, 0x50, 0x8b, 0x25, 0x1e, 0x19, 0xbe, 0xf4, 0x7b, 0x74,
0xfc, 0xd6, 0x97, 0x44, 0x12, 0x5f, 0x1c, 0x49, 0x89, 0x98, 0x0b, 0x65, 0x6c, 0x48, 0xa7, 0x8c, 0x5c
};
const uint8_t Public_Key[] =
{
0x3b, 0x08, 0xd7, 0x1a, 0x1b, 0x5a, 0xd0, 0x3e, 0x41, 0x5d, 0x8f, 0x68, 0xe9, 0x78,0x47, 0x6b,
0x35, 0x5c, 0xe2, 0x90, 0x8d, 0xb9, 0xc1, 0x46, 0xb1, 0x44, 0x77, 0x1f, 0x92, 0x57, 0xbf, 0x8e,
0x7c, 0xed, 0xdf, 0x3b, 0xea, 0xed, 0x5d, 0xea, 0x1d, 0x77, 0x39, 0xdb, 0xb7, 0x42, 0xe3, 0x6a,
0x07, 0x74, 0xca, 0x50, 0x8b, 0x19, 0xf5, 0x37, 0xd5, 0x2d, 0x57, 0x71, 0x70, 0x7e, 0xc7, 0x16
};
You can have a look at mbedtls_ecp_read_key for importing the secret key and mbedtls_ecp_point_read_binary for importing the public key from key data generated externally. Note that mbedtls_ecp_point_read_binary expects binary data in the uncompressed public key format, i.e. 0x04 followed by X followed by Y, which means you should prepend a 0x04 to the Public_Key data in your code.

Sending LLDP multicast packet from raw socket

I am sending an LLDP packet to a mock switch because I am testing some DCB settings. I can see the packet going out in tcpdump, but I can't see it arriving at the link partner.
My code:
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <net/ethernet.h>
#include <string.h>
#include <net/if.h>
#include <errno.h>
#include <sys/ioctl.h>
int main() {
char buf[414] = {0x01, 0x80, 0xc2, 0x00, 0x00, 0x0e, 0xb4, 0x96, 0x91, 0x94, 0xaa, 0x25, 0x88, 0xcc, 0x02, 0x07,
0x04, 0x4c, 0x76, 0x25, 0xee, 0xd3, 0x40, 0x04, 0x11, 0x05, 0x68, 0x75, 0x6e, 0x64, 0x72, 0x65,
0x64, 0x47, 0x69, 0x67, 0x45, 0x20, 0x31, 0x2f, 0x32, 0x34, 0x06, 0x02, 0x00, 0x78, 0x0a, 0x0a,
0x5a, 0x39, 0x31, 0x30, 0x30, 0x2d, 0x4f, 0x4e, 0x36, 0x31, 0x0c, 0xd6, 0x44, 0x65, 0x6c, 0x6c,
0x20, 0x52, 0x65, 0x61, 0x6c, 0x20, 0x54, 0x69, 0x6d, 0x65, 0x20, 0x4f, 0x70, 0x65, 0x72, 0x61,
0x74, 0x69, 0x6e, 0x67, 0x20, 0x53, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x20, 0x53, 0x6f, 0x66, 0x74,
0x77, 0x61, 0x72, 0x65, 0x2e, 0x20, 0x44, 0x65, 0x6c, 0x6c, 0x20, 0x4f, 0x70, 0x65, 0x72, 0x61,
0x74, 0x69, 0x6e, 0x67, 0x20, 0x53, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x20, 0x56, 0x65, 0x72, 0x73,
0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x32, 0x2e, 0x30, 0x2e, 0x20, 0x44, 0x65, 0x6c, 0x6c, 0x20, 0x41,
0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x53, 0x6f, 0x66, 0x74, 0x77,
0x61, 0x72, 0x65, 0x20, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x39, 0x2e, 0x31,
0x31, 0x28, 0x30, 0x2e, 0x30, 0x50, 0x36, 0x29, 0x20, 0x43, 0x6f, 0x70, 0x79, 0x72, 0x69, 0x67,
0x68, 0x74, 0x20, 0x28, 0x63, 0x29, 0x20, 0x31, 0x39, 0x39, 0x39, 0x2d, 0x32, 0x30, 0x31, 0x37,
0x44, 0x65, 0x6c, 0x6c, 0x20, 0x49, 0x6e, 0x63, 0x2e, 0x20, 0x41, 0x6c, 0x6c, 0x20, 0x52, 0x69,
0x67, 0x68, 0x74, 0x73, 0x20, 0x52, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x2e, 0x42, 0x75,
0x69, 0x6c, 0x64, 0x20, 0x54, 0x69, 0x6d, 0x65, 0x3a, 0x20, 0x4d, 0x6f, 0x6e, 0x20, 0x46, 0x65,
0x62, 0x20, 0x32, 0x37, 0x20, 0x31, 0x36, 0x3a, 0x35, 0x37, 0x3a, 0x32, 0x30, 0x20, 0x32, 0x30,
0x31, 0x37, 0x0e, 0x04, 0x00, 0x16, 0x00, 0x16, 0x10, 0x0c, 0x05, 0x01, 0x0a, 0xed, 0x5f, 0x37,
0x02, 0x00, 0x90, 0x00, 0x01, 0x00, 0xfe, 0x06, 0x00, 0x80, 0xc2, 0x0b, 0x08, 0x18, 0xfe, 0x19,
0x00, 0x80, 0xc2, 0x09, 0x07, 0x12, 0x34, 0x56, 0x70, 0x0d, 0x0d, 0x0d, 0x0d, 0x0c, 0x0c, 0x0c,
0x0c, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0xfe, 0x19, 0x00, 0x80, 0xc2, 0x0a, 0x00,
0x01, 0x23, 0x45, 0x67, 0x0d, 0x0d, 0x0d, 0x0d, 0x0c, 0x0c, 0x0c, 0x0c, 0x02, 0x02, 0x02, 0x02,
0x02, 0x02, 0x02, 0x02, 0xfe, 0x0e, 0x00, 0x80, 0xc2, 0x0c, 0x00, 0x61, 0x89, 0x06, 0x61, 0x89,
0x14, 0x82, 0x0c, 0xbc, 0x00, 0x00};
struct ifreq ifr;
strncpy((char *)ifr.ifr_name, "eth1", IFNAMSIZ);
int sock_r = socket(AF_PACKET,SOCK_RAW,htons(ETH_P_ALL));
if (sock_r < 0)
printf("socket errno: %s\n", strerror(errno));
if (ioctl(sock_r, SIOCGIFINDEX, &ifr))
printf("ioctl errno: %s\n", strerror(errno));
struct sockaddr_ll sll = {.sll_family = AF_PACKET, .sll_ifindex = ifr.ifr_ifindex};
if (sendto(sock_r, buf, sizeof(buf), 0, (struct sockaddr *) &sll, sizeof(sll)) < 0)
printf("sendto errno: %s\n", strerror(errno));
close(sock_r);
return 0;
}
in tcpdump I see:
13:40:33.472198 LLDP, length 400
Chassis ID TLV (1), length 7
Subtype MAC address (4): 4c:76:25:ee:d3:40
0x0000: 044c 7625 eed3 40
Port ID TLV (2), length 17
Subtype Interface Name (5): hundredGigE 1/24
0x0000: 0568 756e 6472 6564 4769 6745 2031 2f32
0x0010: 34
Time to Live TLV (3), length 2: TTL 120s
0x0000: 0078
System Name TLV (5), length 10: Z9100-ON61
0x0000: 5a39 3130 302d 4f4e 3631
System Description TLV (6), length 214
Dell Real Time Operating System Software. Dell Operating System Version: 2.0. Dell Application Software Version: 9.11(0.0P6) Copyright (c) 1999-2017Dell Inc. All Rights Reserved.Build Time: Mon Feb 27 16:57:20 2017
0x0000: 4465 6c6c 2052 6561 6c20 5469 6d65 204f
0x0010: 7065 7261 7469 6e67 2053 7973 7465 6d20
0x0020: 536f 6674 7761 7265 2e20 4465 6c6c 204f
0x0030: 7065 7261 7469 6e67 2053 7973 7465 6d20
0x0040: 5665 7273 696f 6e3a 2032 2e30 2e20 4465
0x0050: 6c6c 2041 7070 6c69 6361 7469 6f6e 2053
0x0060: 6f66 7477 6172 6520 5665 7273 696f 6e3a
0x0070: 2039 2e31 3128 302e 3050 3629 2043 6f70
0x0080: 7972 6967 6874 2028 6329 2031 3939 392d
0x0090: 3230 3137 4465 6c6c 2049 6e63 2e20 416c
0x00a0: 6c20 5269 6768 7473 2052 6573 6572 7665
0x00b0: 642e 4275 696c 6420 5469 6d65 3a20 4d6f
0x00c0: 6e20 4665 6220 3237 2031 363a 3537 3a32
0x00d0: 3020 3230 3137
System Capabilities TLV (7), length 4
System Capabilities [Repeater, Bridge, Router] (0x0016)
Enabled Capabilities [Repeater, Bridge, Router] (0x0016)
0x0000: 0016 0016
Management Address TLV (8), length 12
Management Address length 5, AFI IPv4 (1): 10.237.95.55
Interface Index Interface Numbering (2): 9437185
0x0000: 0501 0aed 5f37 0200 9000 0100
Organization specific TLV (127), length 6: OUI Ethernet bridged (0x0080c2)
Priority Flow Control Configuration Subtype (11)
Willing: 0, MBC: 0, RES: 0, PFC cap:8
PFC Enable
Priority : 0 1 2 3 4 5 6 7
Value : 0 0 0 1 1 0 0 0
0x0000: 0080 c20b 0818
Organization specific TLV (127), length 25: OUI Ethernet bridged (0x0080c2)
ETS Configuration Subtype (9)
Willing:0, CBS:0, RES:0, Max TCs:7
Priority Assignment Table
Priority : 0 1 2 3 4 5 6 7
Value : 1 2 3 4 5 6 7 0
TC Bandwidth Table
TC% : 0 1 2 3 4 5 6 7
Value : 13 13 13 13 12 12 12 12
TSA Assignment Table
Traffic Class: 0 1 2 3 4 5 6 7
Value : 2 2 2 2 2 2 2 2
0x0000: 0080 c209 0712 3456 700d 0d0d 0d0c 0c0c
0x0010: 0c02 0202 0202 0202 02
Organization specific TLV (127), length 25: OUI Ethernet bridged (0x0080c2)
ETS Recommendation Subtype (10)
RES: 0
Priority Assignment Table
Priority : 0 1 2 3 4 5 6 7
Value : 0 1 2 3 4 5 6 7
TC Bandwidth Table
TC% : 0 1 2 3 4 5 6 7
Value : 13 13 13 13 12 12 12 12
TSA Assignment Table
Traffic Class: 0 1 2 3 4 5 6 7
Value : 2 2 2 2 2 2 2 2
0x0000: 0080 c20a 0001 2345 670d 0d0d 0d0c 0c0c
0x0010: 0c02 0202 0202 0202 02
Organization specific TLV (127), length 14: OUI Ethernet bridged (0x0080c2)
Application Priority Subtype (12)
RES: 0
Application Priority Table
Priority: 3, RES: 0, Sel: 1, Protocol ID: 24969
Priority: 3, RES: 0, Sel: 1, Protocol ID: 24969
Priority: 4, RES: 0, Sel: 2, Protocol ID: 33292
0x0000: 0080 c20c 0061 8906 6189 1482 0cbc
End TLV (0), length 0
Should I set something in the raw socket options, or enable something in the driver, or is it vendor-specific and I'm in a pickle?
It turned out to be a hardware-specific problem. Apparently some devices do not support transmitting spoofed LLDP frames, for security reasons. Changing the adapter to a Niantic one worked.
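For anyone hitting this on hardware that does allow it: it can also help to populate the sockaddr_ll more fully than the two fields above. A sketch (0x88cc is the LLDP EtherType, and the destination MAC is the nearest-bridge multicast address already present in the frame):

```c
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>     /* htons */
#include <linux/if_packet.h>
#include <net/ethernet.h>   /* ETH_ALEN */

/* Build a fully populated destination sockaddr_ll for an LLDP send.
   sendto() only strictly needs sll_family and sll_ifindex, but filling
   in the protocol and destination MAC costs nothing. */
static struct sockaddr_ll lldp_dest(int ifindex) {
    static const unsigned char dst[ETH_ALEN] =
        {0x01, 0x80, 0xc2, 0x00, 0x00, 0x0e}; /* nearest-bridge multicast */
    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family   = AF_PACKET;
    sll.sll_ifindex  = ifindex;
    sll.sll_protocol = htons(0x88cc); /* LLDP EtherType */
    sll.sll_halen    = ETH_ALEN;
    memcpy(sll.sll_addr, dst, ETH_ALEN);
    return sll;
}
```

Note also that sendto() returns the number of bytes sent on success, so error checks must compare against < 0 rather than nonzero.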

USR-TCP232-T2: Basic config and port config commands returns 0xBE 0x45 (Error)

This is my first time using the USR-TCP232-T2 module (TTL-Ethernet converter).
My project is to configure, transmit, and receive with the module over the serial port (not the LAN port).
When I send basic parameters config command or port parameters config command, the result is 0xBE 0x45 (Error).
Explanation:
During initialization I send read configuration command.
The module returns 137 bytes as follows:
0x55 0xb - ducSequenceNum[2];
0x00 - ucCRC;
0x00 - ucVersion;
0x00 - UnknownParameter;
0x00 - ucFlags_1;
0x00 0x00 - usLocationURLPort[2];
0x50 0x00 - usHTTPServerPort[2];
0x00 - ucUserFlag;
0x07 0x00 0xa8 0xc0 - ulStaticIP[4];
0x01 0x00 0xa8 0xc0 - ulGatewayIP[4];
0x00 0xff 0xff 0xff - ulSubnetMask[4];
0x55, 0x53, 0x52, 0x2d, 0x54, 0x43, 0x50, 0x32
0x33, 0x32, 0x2d, 0x54, 0x32, 0x00 - ucModName[14];
0x00, 0x00 - ProtocolReserved[2];
0x61, 0x64, 0x6D, 0x69, 0x6E, 0x00 - username[6];
0x61, 0x64, 0x6D, 0x69, 0x6E, 0x00 - password[6];
0x00 - ucNetSendTime;
0x01, 0x00 - uiId[2];
0x80 - ucIdType;
0xd8, 0xb0, 0x4c, 0xf9, 0xb4, 0x8d - mac_addrs[6];
0xde, 0xde, 0x43, 0xd0 - DNS_Gateway_IP[4];
0x03, 0x00, 0x00, 0x00 - ucReserved_1[4];
0x00, 0xC2, 0x01, 0x00 - ulBaudRate[4];
0x08 - ucDataSize;
0x01 - ucParity;
0x01 - ucStopBits;
0x00 - ucFlowControl;
0x00, 0x00, 0x00, 0x00 - ulTelnetTimeout[4];
0x8C, 0x4E - usTelnetLocalPort[2];
0x2a, 0x20 - usTelnetRemotePort[2];
0x31, 0x39, 0x32, 0x2e, 0x31, 0x36,
0x38, 0x2e, 0x30, 0x2e, 0x32, 0x00,
0x31, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00 - uiTelnetURL[30];
0xc9, 0x00, 0xa8, 0xc0 - ulTelnetIPAddr[4];
0x20 - ucFlags_2;
0x01 - ucWorkMode;
0x00 - HTPucFlags;
0x04 - tc_number;
0x10, 0x0e - uiPackLen[2];
0x00 - ucPackTime;
0x00 - ucTimeCount;
0x00, 0x00, 0x00, 0x00, 0x00 - ucReserved_2[5];
0xac 0x13 0x01 0x57 - Current_IP[4];
0xb1 - Version;
Now, when I send the Basic Parameters config command, the same as in the received configuration:
0x55 - Start byte
0xBE - Basic parameters command code
0x00 - ucSequenceNum
0x00 - ucCRC
0x00, - ucVersion
0x00, - ucFlags - DHCP
0x00, 0x00, - usLocationURLPort[2]
0x50, 0x00, - usHTTPServerPort[2]
0x00, - ucUserFlag
0x07, 0x00, 0xA8, 0xC0, - ulStaticIP[4]
0x01, 0x00, 0xA8, 0xC0, - ulGatewayIP[4]
0x00, 0xFF, 0xFF, 0xFF, - ulSubnetMask[4]
0x55, 0x53, 0x52, 0x2d, 0x54,
0x43, 0x50, 0x32, 0x33, 0x32,
0x2d, 0x54, 0x32, 0x00, - ucModName[14]
0x00, 0x00, - ProtocolReserved[2]
0x61, 0x64, 0x6D, 0x69, 0x6E, 0x00, - username[6]
0x61, 0x64, 0x6D, 0x69, 0x6E, 0x00, - password[6]
0x00, - ucNetSendTime
0x01, 0x00, - uiId[2]
0x80, - ucIdType
0xd8, 0xb0, 0x4c, 0xf9, 0xb4, 0x8d, - mac_addrs[6]
0xde, 0xde, 0x43, 0xd0, - DNSGatewayIP[4]
0x03, 0x00, 0x00, 0x00 - ucReserved[4]
0xFF - CheckSum
The USR-TCP232-T2 module returns 0xBE 0x45 which indicates some error.
Or, when I send the Port Parameters config command, the same as in the received configuration:
0x55 - Start byte
0xBF - Port parameters command code
0x00, 0xC2, 0x01, 0x00, - ulBaudRate[4] - 115200 bps
0x08, - ucDataSize
0x01, - ucParity
0x01, - ucStopBits
0x00, - ucFlowControl
0x00, 0x00, 0x00, 0x00, - ulTelnetTimeout[4]
0x8C, 0x4E, - usTelnetLocalPort[2]
0x2a, 0x20, - usTelnetRemotePort[2]
0x31, 0x39, 0x32, 0x2e, 0x31, 0x36,
0x38, 0x2e, 0x30, 0x2e, 0x32, 0x00,
0x31, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - uiTelnetURL[30]
0xc9, 0x00, 0xa8, 0xc0, - ulTelnetIPAddr[4]
0x20, - ucFlags
0x01, - ucWorkMode
0x00, - HTPucFlags
0x04, - tc_number
0x10, 0x0e, - uiPackLen[2]
0x00, - ucPackTime
0x00, - ucTimeCount
0x00, 0x00, 0x00, 0x00, 0x00 - ucReserved[5]
0xBD - CheckSum
The USR-TCP232-T2 module returns 0xBE 0x45 which indicates some error.
Any help will be appreciated.
Thanks in advance.
I've found the problem.
The code to send the command was:
u8 BasicSettingCom[] = {0x55, 0xBE, &basicParamsStruct, checkSum};
.
.
SendDataToClient((u8*)BasicSettingCom, Length);
So a single (truncated) byte of the pointer to basicParamsStruct was sent, not its content.
Now the code to send the command is:
const u8 BasicSettingCmd[] = {0x55, 0xBE};
.
.
SendDataToClient((u8*)BasicSettingCmd, cmdLen);
SendDataToClient((u8*)basicParamsStruct , dataLen);
SendByteToClient(checkSum);
and it works fine.
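The failure mode is easy to reproduce: a u8 initializer cannot hold a struct, so {0x55, 0xBE, &basicParamsStruct, checkSum} stores one truncated byte of the pointer value. A safer pattern is to serialize everything into one frame buffer. The additive checksum below is only an assumption for illustration (use whatever the USR-TCP232 protocol actually specifies), as are the function and parameter names:

```c
#include <stdint.h>
#include <string.h>

/* Serialize a command frame: start byte, command code, parameter
   block, checksum. The checksum here (byte sum over command and
   payload, truncated to 8 bits) is an assumed scheme for illustration. */
static size_t build_frame(uint8_t *out, uint8_t cmd,
                          const void *params, size_t len)
{
    size_t n = 0;
    uint8_t sum = 0;

    out[n++] = 0x55;  /* start byte */
    out[n++] = cmd;   /* e.g. 0xBE for basic parameters */
    memcpy(&out[n], params, len);
    n += len;

    for (size_t i = 1; i < n; i++)  /* sum over cmd + payload (assumed) */
        sum += out[i];
    out[n++] = sum;
    return n;
}
```

The whole frame can then go out in one SendDataToClient() call instead of three.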

extreme difference in time between AES-CBC + HMAC and AES-GCM

So I've been searching far and wide for different AES implementations for CBC and GCM. I do not want to implement this myself in case I make mistakes, so I have found the following AES-CBC implementations and tested their speed on my RX63NB (Renesas test board).
                    Encrypt             Decrypt
                    bytes  time (µs)    bytes  time (µs)
Tiny AES             64      1500        64      8900
                    128      2880       128     17820
aes-byte-29-08-08    64      1250        64      4900
                    128      1220       128      9740
Cyclone              64       230        64       237
                    128       375       128       387
I was surprised by how much faster Cyclone was; to clarify, I took the AES, CBC, and Endian files from CycloneSSL and used only those.
Then I tried GCM from CycloneSSL, and this was the output:
                    Encrypt             Decrypt
                    bytes  time (µs)    bytes  time (µs)
Cyclone GCM          64      9340        64      9340
                    128     14900       128     14900
I have examined the HMAC time (from CycloneSSL) to see how much that would add:
HMAC          bytes  time (µs)
Sha1            64      746
               128      857
Sha224          64      918
               128     1066
Sha256          64      918
               128     1066
Sha384          64     2395
               128     2840
Sha512          64     2400
               128     2840
Sha512_224      64     2390
               128     2835
Sha512_256      64     2390
               128     2835
MD5             64      308
               128      345
Whirlpool       64     5630
               128     6420
Tiger           64      832
               128      952
The slowest of these is Whirlpool.
If you add the CBC encryption time for 128 bytes to the Whirlpool HMAC time for 128 bytes, you get 6795 µs, which is about half the time GCM takes.
Now, I can understand that GHASH takes a bit longer than HMAC because of the Galois-field arithmetic and such, but being two times slower than the slowest hash algorithm I know is insane.
So I've started to wonder whether I did anything wrong, or whether the CycloneSSL GCM implementation is just really slow. Unfortunately, I have not found another easy-to-use GCM implementation in C to compare it with.
All the code I used can be found on Pastebin; the different files are separated by --------------------
This is the code I use to encrypt with GCM:
static void test_encrypt(void)
{
uint8_t key[] = { 0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6, 0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c };
uint8_t iv[] = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
uint8_t in[] = { 0x48, 0x61, 0x6c, 0x6c, 0x6f, 0x20, 0x68, 0x6f, 0x65, 0x20, 0x67, 0x61, 0x61, 0x74, 0x20, 0x68,
0x65, 0x74, 0x20, 0x6d, 0x65, 0x74, 0x20, 0x6a, 0x6f, 0x75, 0x20, 0x76, 0x61, 0x6e, 0x64, 0x61,
0x61, 0x67, 0x2c, 0x20, 0x6d, 0x65, 0x74, 0x20, 0x6d, 0x69, 0x6a, 0x20, 0x67, 0x61, 0x61, 0x74,
0x20, 0x68, 0x65, 0x74, 0x20, 0x67, 0x6f, 0x65, 0x64, 0x20, 0x68, 0x6f, 0x6f, 0x72, 0x2e, 0x21,
0x48, 0x61, 0x6c, 0x6c, 0x6f, 0x20, 0x68, 0x6f, 0x65, 0x20, 0x67, 0x61, 0x61, 0x74, 0x20, 0x68,
0x65, 0x74, 0x20, 0x6d, 0x65, 0x74, 0x20, 0x6a, 0x6f, 0x75, 0x20, 0x76, 0x61, 0x6e, 0x64, 0x61,
0x61, 0x67, 0x2c, 0x20, 0x6d, 0x65, 0x74, 0x20, 0x6d, 0x69, 0x6a, 0x20, 0x67, 0x61, 0x61, 0x74,
0x20, 0x68, 0x65, 0x74, 0x20, 0x67, 0x6f, 0x65, 0x64, 0x20, 0x68, 0x6f, 0x6f, 0x72, 0x2e, 0x21};
AesContext context;
aesInit(&context, key, 16 ); // 16 byte = 128 bit
error_crypto_t error = gcmEncrypt(AES_CIPHER_ALGO, &context, iv, 16, 0, 0, in, in, 128, key, 16);
}
static void test_decrypt(void)
{
uint8_t key[] = { 0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6, 0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c };
uint8_t tag[] = { 0x56, 0x56, 0x5C, 0xCD, 0x5C, 0x57, 0x36, 0x66, 0x73, 0xF7, 0xFF, 0x2A, 0x17, 0x49, 0x0E, 0xC4};
uint8_t iv[] = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
uint8_t out[] = { 0x05, 0x7C, 0x51, 0xFF, 0xE4, 0x9F, 0x8C, 0x90, 0xF1, 0x7D, 0x56, 0xFB, 0x87, 0xB9, 0x44, 0x79,
0xB1, 0x04, 0x32, 0x39, 0x78, 0xFF, 0x51, 0x60, 0x48, 0x0B, 0x21, 0x77, 0xF2, 0x26, 0x0B, 0x94,
0x7B, 0xA7, 0x26, 0x74, 0x87, 0xA8, 0x2C, 0x5A, 0xA1, 0x19, 0x03, 0x17, 0x66, 0x3A, 0x46, 0x9F,
0xE6, 0x1D, 0x3B, 0x65, 0xFD, 0xC0, 0xBA, 0xC0, 0xD9, 0x45, 0xE7, 0x17, 0x74, 0x0F, 0xB7, 0x4B,
0x0F, 0xF0, 0x16, 0xF6, 0xE8, 0x4F, 0xFD, 0x96, 0x64, 0x5E, 0xDB, 0x9E, 0x3A, 0x0B, 0x93, 0x8F,
0x87, 0x83, 0x90, 0xF8, 0xF9, 0xE6, 0xA3, 0xE7, 0x5E, 0x72, 0x3C, 0xB5, 0x98, 0x54, 0x11, 0xD7,
0xB4, 0x7C, 0xFF, 0xA3, 0x51, 0x1A, 0xB0, 0x69, 0x4F, 0x57, 0xBB, 0x83, 0x40, 0x2A, 0xE6, 0x75,
0x8B, 0xB5, 0xCA, 0xA4, 0x84, 0x82, 0x1D, 0xA8, 0x94, 0x03, 0x77, 0x9C, 0x3B, 0xF8, 0xA0, 0x60};
AesContext context;
aesInit(&context, key, 16 ); // 16 byte = 128 bit
error_crypto_t error = gcmDecrypt(AES_CIPHER_ALGO, &context, iv, 16, 0, 0, out, out, 128, tag, 16);
}
The data in out[] is the GCM-encrypted version of in[], and it all works properly (decrypts correctly and passes authentication).
Question
Are all GCM implementations this slow?
Are there other (better) GCM implementations?
Should I just use HMAC if I want fast encryption + verification?
EDIT
I have been able to get the GCM implementation from mbedTLS (PolarSSL) to work; it is about 11 times faster than Cyclone (it takes 880 µs to encrypt/decrypt 128 bytes), and it produces the same output as the Cyclone GCM, so I'm confident it works properly.
gcm_context gcm_ctx;
gcm_init(&gcm_ctx, POLARSSL_CIPHER_ID_AES,key, 128);
int error = gcm_auth_decrypt(&gcm_ctx, 128, iv, 16, NULL, 0, tag, 16, out, buffer);
Your numbers seem odd: aes-byte-29-08-08 takes less time to encrypt 128 bytes than 64 bytes?
Assuming RX63N is comparable to Cortex-M (they both are 32 bit, no vector unit, and it's difficult to find information on RX63N):
The claimed benchmarks for SharkSSL put CBC at a bit more than twice as fast as GCM, about 2.6x when optimized for speed. Your 9340 µs is far outside that ratio.
Cifra's benchmark shows a 10x difference between their AES and AES-GCM, although the GCM test also included auth data. Still nowhere close to your differential between straight AES and GCM.
So in relative terms, to answer 1: I don't think all GCM implementations are that slow relative to plain AES.
As for other GCM implementations, there's the aforementioned Cifra (although I hadn't heard of it until just now, and it only has 3 stars on GitHub (if that means anything), so the level of vetting is likely to be rather low), and maybe you can rip out the AES-GCM implementation from FreeBSD. I can't speak about performance in absolute terms on your platform, though.
HMAC is likely to be faster than GHASH on platforms without hardware support (AES-NI for AES, CLMUL for GHASH), regardless of the implementation. How performance-critical is this? Do you have to use AES or a block cipher? Perhaps ChaCha20+Poly1305 suits your performance needs better (see the performance numbers from Cifra). That's now being used in OpenSSH - chacha.* and poly1305.*
Be aware of side channel attacks. Software implementations of AES can be sensitive to cache timing attacks, although I don't think this is applicable to microcontrollers where everything is in SRAM anyway.
*Salsa20 is ChaCha20's predecessor
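When comparing implementations like this, it is also worth double-checking the measurement itself (note the odd 1220 µs < 1250 µs row in the question's table). A host-side harness sketch of the kind I would use, with a dummy workload standing in for the cipher call (clock()-based, so the resolution is coarse; on the RX63NB you would use a hardware timer instead):

```c
#include <time.h>

typedef void (*bench_fn)(void);

/* Average microseconds per call of `fn` over `iters` iterations.
   Coarse: clock() measures CPU time with limited resolution, so use
   enough iterations to get a stable number. */
static double bench_us(bench_fn fn, int iters) {
    clock_t t0 = clock();
    for (int i = 0; i < iters; i++)
        fn();
    clock_t t1 = clock();
    return (double)(t1 - t0) * 1e6 / CLOCKS_PER_SEC / iters;
}

/* Dummy workload standing in for an encrypt call; volatile sink
   keeps the compiler from optimizing the loop away. */
static volatile unsigned sink;
static void dummy_work(void) {
    unsigned x = 0;
    for (int i = 0; i < 1000; i++)
        x += (unsigned)i * 2654435761u;
    sink = x;
}
```

Timing each implementation with the same harness, buffer sizes, and compiler flags removes one source of odd ratios before blaming the library.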

char* array breaks with more than 6 values

I have been at this all day and can't understand where the problem is.
I'm using a Nokia LCD screen to print a number (2 digits) that I draw myself.
Each number (0 to 9) is composed of an array, like so:
char Number_0[] = {0x04, 0x04, 0x04, 0xC4, 0xE4, 0xE4, 0xE4, 0xE4, 0xE4, 0xE4, 0xE4, 0xE4, 0xE4, 0xE4, 0xC4, 0x04, 0x04, 0x04, 0xFE, 0xFF, 0xFF, 0xFE, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0xFE, 0xFF, 0xFF, 0xFE, 0x87, 0xCF, 0xCF, 0x87, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x87, 0xCF, 0xCF, 0x87, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0x81, 0x83, 0x83, 0x8D, 0x9E, 0x9E, 0x9E, 0x9E, 0x9E, 0x9E, 0x9E, 0x9E, 0x9E, 0x9E, 0x8D, 0x83, 0x83, 0x81};
To be able to point to each Number Array, I'm using another array, with the List of the Array Numbers, like so:
char* NumberList[] = {Number_0, Number_1, Number_2, Number_3, Number_4, Number_5, Number_6, Number_7, Number_8, Number_9};
To print the number onto the screen I use the function:
int Number = 30;
void PrintNumber(){
int Unit = Number / 10;
LCDBitmap(NumberList[Unit]); // should print 3
Unit = Number - Unit * 10;
LCDBitmap(NumberList[Unit]); // should print 0
}
LCDBitmap is another function that is irrelevant to the problem at hand.
For some reason, this does not work unless I remove 4 values from NumberList[].
As long as I have only 6 values it works, no matter which one I remove, but as soon as I add a 7th one the code breaks.
Any idea?
Congratulations, you're out of SRAM. Since this LUT isn't going to change at runtime, it should be placed in flash instead.
#include <avr/pgmspace.h>
...
const char Number_0[] PROGMEM = {0x04, ...};
...
const char* const NumberList[] PROGMEM = {Number_0, ...};
(Recent avr-gcc versions require data placed in PROGMEM to be const.)
You then use pgm_read_word() to read the pointer out of NumberList, and pgm_read_byte() to read the individual bytes of Number_x:
const char* digit = (const char*)pgm_read_word(&NumberList[Unit]);
char b = pgm_read_byte(&digit[i]);
