What do they refer to with GKI here? What does it abbreviate?
It seems to stand for General Kernel Interface, though the expansion by itself is not very illuminating.
For example:
The libnfc-nci implementation uses a reliable mechanism of queues and message passing named General Kernel Interface (GKI) to easily communicate between layers and modules: Each task is isolated, owning a buffer (or inbox) where messages are queued and processed on arrival. This mechanism is used to send messages from the DH to the NFCC chip, and vice versa.
(Radio Frequency Identification: 11th International Workshop, RFIDsec 2015)
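Conceptually, each GKI task just owns a message queue (its "inbox") that other tasks post into. Below is a rough sketch of that pattern with invented names and a fixed-size ring buffer; the real GKI implementation in libnfc-nci is more elaborate (buffer pools, task events, timers, and so on).

    /* Illustrative sketch only: a per-task mailbox with post/read operations.
     * Names and sizes are invented; the real libnfc-nci GKI API differs. */
    #include <stdint.h>
    #include <string.h>

    #define MBOX_DEPTH 8

    struct gki_msg {
        uint16_t event;             /* what happened / what is requested */
        uint16_t len;               /* payload length */
        uint8_t  data[32];          /* payload (e.g. an NCI packet) */
    };

    struct gki_mbox {               /* one "inbox" per task */
        struct gki_msg slot[MBOX_DEPTH];
        volatile unsigned head, tail;
    };

    /* Another task (e.g. the DH side) posts a message into this inbox. */
    static int mbox_post(struct gki_mbox *mb, uint16_t event,
                         const void *data, uint16_t len)
    {
        unsigned next = (mb->head + 1) % MBOX_DEPTH;
        if (next == mb->tail || len > sizeof mb->slot[0].data)
            return -1;              /* inbox full or payload too large */
        mb->slot[mb->head].event = event;
        mb->slot[mb->head].len = len;
        memcpy(mb->slot[mb->head].data, data, len);
        mb->head = next;            /* the owning task is then woken up */
        return 0;
    }

    /* The owning task drains its inbox and processes messages on arrival. */
    static int mbox_read(struct gki_mbox *mb, struct gki_msg *out)
    {
        if (mb->tail == mb->head)
            return -1;              /* inbox empty */
        *out = mb->slot[mb->tail];
        mb->tail = (mb->tail + 1) % MBOX_DEPTH;
        return 0;
    }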
Related
I'm trying to implement the same protocol that is defined and described in this wiki:
https://wiki.trezor.io/Developers_guide-Message_Workflows
My toolset is Protobuf for embedded systems: Nanopb.
The target is an STM32F7 using a serial port.
Right now I'm communicating between a PC and the STM32F7; eventually the communication will be between two STM32F7 boards.
My questions:
What kind of protocol is sufficient for a request/response exchange like the one in Trezor?
I googled and found suggestions to use something like HDLC. Is it necessary for this purpose, or is it just overhead?
Coding and design issue: I will have a serial interrupt that continuously receives the data exchanged between the two boards, and then a very big state machine to decode each message type and dispatch the corresponding event. Is there an alternative design?
Firstly, AFAIK, nanopb doesn't support the full range of possibilities in the Protobuf schema language. So you'll need a schema that works for nanopb, and hopefully that'll be good enough for your needs. However, it can be very annoying, as (so far as I know) the very useful oneof doesn't work.
Secondly, the protobuf wire format is not self-delimiting. So you'll be squirting data down the serial cable, but it's not possible to reliably (if at all) tell where one message ends and the next begins. So you'll need to transmit some sort of inter-message sync pattern of bytes, chosen to be unlikely to be encountered in a message. You'll have to read in the bytes in between sync patterns, place them in a buffer, and parse from that.
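For illustration, here is a minimal sketch of such sync-pattern framing. The 0xA5 0x5A marker, the buffer size, and the function names are arbitrary choices, and a payload that happens to contain the marker will still be split (the "unlikely to be encountered" caveat above).

    /* Sketch: collect the bytes between sync patterns into a buffer.
     * The marker and sizes are illustrative, not part of any real protocol. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define SYNC0 0xA5
    #define SYNC1 0x5A
    #define FRAME_MAX 256

    static uint8_t frame_buf[FRAME_MAX];
    static size_t  frame_len;
    static bool    in_frame;
    static uint8_t prev_byte;

    /* Feed one received byte (e.g. from the UART RX interrupt).
     * Returns the length of a completed message now sitting in frame_buf,
     * or 0 if no message is complete yet. The caller must consume frame_buf
     * before feeding further bytes. */
    static size_t frame_feed(uint8_t byte)
    {
        size_t done = 0;

        if (prev_byte == SYNC0 && byte == SYNC1) {
            /* Sync pattern seen: everything gathered so far, minus the SYNC0
             * byte already appended on the previous call, is one message. */
            if (in_frame && frame_len > 0)
                done = frame_len - 1;
            in_frame = true;
            frame_len = 0;
        } else if (in_frame && frame_len < FRAME_MAX) {
            frame_buf[frame_len++] = byte;
        }
        prev_byte = byte;
        return done;
    }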
Thirdly, if you're sending a variety of different messages and you can't use oneof, then you'll need some other way of identifying what type of message has arrived so that you can parse it into the right type of object. That "way" could simply be a fixed sequence of message types, a byte whose value identifies the message type, or a field that does the same thing in all the messages. oneof is attractive (though not available to you) because it can be used as a carrier for a variety of different message types; you simply parse the received data using the oneof's parser.
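A rough sketch of the type-byte approach with nanopb, assuming hypothetical messages StatusRequest and PinRequest generated from your .proto (hence the hypothetical messages.pb.h header and handler functions):

    /* Sketch: the first byte of each frame selects the message type, the rest
     * is the protobuf payload decoded with nanopb. Message and handler names
     * are assumptions, not from the question. */
    #include <pb_decode.h>
    #include "messages.pb.h"        /* hypothetical nanopb-generated header */

    enum msg_type { MSG_STATUS_REQUEST = 0x01, MSG_PIN_REQUEST = 0x02 };

    extern void handle_status_request(const StatusRequest *req); /* hypothetical */
    extern void handle_pin_request(const PinRequest *req);       /* hypothetical */

    /* buf[0] is the type byte; buf[1..len-1] is the protobuf payload. */
    static bool dispatch_frame(const uint8_t *buf, size_t len)
    {
        if (len < 1)
            return false;

        pb_istream_t stream = pb_istream_from_buffer(buf + 1, len - 1);

        switch (buf[0]) {
        case MSG_STATUS_REQUEST: {
            StatusRequest req = StatusRequest_init_zero;
            if (!pb_decode(&stream, StatusRequest_fields, &req))
                return false;
            handle_status_request(&req);
            return true;
        }
        case MSG_PIN_REQUEST: {
            PinRequest req = PinRequest_init_zero;
            if (!pb_decode(&stream, PinRequest_fields, &req))
                return false;
            handle_pin_request(&req);
            return true;
        }
        default:
            return false;           /* unknown type byte */
        }
    }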
I am looking to implement some kind of transmission protocol in C, to use on custom hardware. I have the ability to send and receive through RF, but I need to rely on some protocol that validates the integrity of the packets sent/received, so I thought it would be a good idea to implement some kind of UDP library.
Of course, if there is any way I can modify the existing implementations of UDP or TCP so that they work over my RF device, it would be of great help. The only thing I think needs to be changed is the way a single bit is sent; if I could change that in the UDP library (sys/socket.h) it would save me a lot of time.
UDP does not exist in standard C99 or C11.
It is generally part of some Internet Protocol layer. These are very complex pieces of software (as soon as you want some performance).
I would suggest using some existing operating system kernel (e.g. Linux) and writing a network driver (e.g. for the Linux kernel) for your device. Life is too short to write a competitive UDP-like layer (that could take you dozens of years).
addenda
Apparently, the mention of UDP in the question is confusing. Per your comments (which should go inside the question), you just want some serial protocol on a small 8-bit PIC 18F4550 microcontroller (32 KB ROM + 2 KB RAM). Without knowing additional constraints, I would suggest a tiny "textual" protocol (e.g. ASCII lines, no more than 128 bytes per line, \n-terminated, ...) and I would put some simple hex checksum inside it. In the 1980s, Hayes modems had such things.
What you should then do is define and document the protocol first (e.g. as a BNF syntax of the message lines), then implement it (probably with buffering and finite-state-automaton techniques). You might invent some message format like e.g. DOFOO?123,456%BE53 followed by a newline, meaning: do the command DOFOO with arguments 123 then 456, with hex checksum BE53.
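As an illustration only, here is a small sketch of building and checking such a line. The checksum algorithm below (16-bit sum of the bytes before '%') is an assumption; your protocol document should fix the real one.

    /* Sketch: build and validate lines like "DOFOO?123,456%XXXX\n".
     * The checksum rule is illustrative, not taken from the question. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static uint16_t line_checksum(const char *s, size_t n)
    {
        uint16_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += (uint8_t)s[i];
        return sum;
    }

    /* Build "CMD?arg1,arg2%XXXX\n" into out; returns length or -1 on error. */
    static int build_line(char *out, size_t outsz, const char *cmd, int a, int b)
    {
        int n = snprintf(out, outsz, "%s?%d,%d", cmd, a, b);
        if (n < 0 || (size_t)n >= outsz)
            return -1;
        uint16_t ck = line_checksum(out, (size_t)n);
        int m = snprintf(out + n, outsz - (size_t)n, "%%%04X\n", ck);
        if (m < 0 || (size_t)(n + m) >= outsz)
            return -1;
        return n + m;
    }

    /* Verify the checksum of a received line (trailing newline is ignored). */
    static int line_is_valid(const char *line)
    {
        const char *pct = strrchr(line, '%');
        if (!pct)
            return 0;
        uint16_t expected = (uint16_t)strtoul(pct + 1, NULL, 16);
        return line_checksum(line, (size_t)(pct - line)) == expected;
    }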
I am writing a simple char driver which accesses a PCI card. It is registered to sysfs with the help of a new class and accessible under /dev/foodev. Using standard file operations I can perform simple read and write operations to the device.
My problem: I have multiple parameters stored at different addresses on the card (version, status, control, ...) which I would like to access independently. Since I currently have only one read and one write function, I have to change the address in the driver code every time.
Obviously there is a more convenient way to implement this. I stumbled across the two following approaches and was wondering which one is better in terms of maintainability and user accessibility:
1. Using ioctl commands to set the address/parameter before an access.
2. Having the device already nicely set up in udev using multiple attributes (device_create_file()), from which the user can then just read/write as different "files" (see the sketch after the listing below):
/dev/foodev
../version
../status
../control
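For what it's worth, a rough sketch of what the attribute files could look like in the driver, assuming a hypothetical foodev_read_reg() helper for the PCI access; note that in practice these attributes appear under /sys (e.g. /sys/class/foodev/foodev0/version), while /dev/foodev stays a plain char device:

    /* Sketch: per-parameter sysfs attributes created from probe().
     * foodev_read_reg() and the register offsets are assumptions. */
    #include <linux/device.h>
    #include <linux/sysfs.h>

    #define FOODEV_REG_VERSION 0x00     /* illustrative offsets */
    #define FOODEV_REG_STATUS  0x04

    extern u32 foodev_read_reg(struct device *dev, u32 offset); /* hypothetical */

    static ssize_t version_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
        return sysfs_emit(buf, "%u\n", foodev_read_reg(dev, FOODEV_REG_VERSION));
    }
    static DEVICE_ATTR_RO(version);

    static ssize_t status_show(struct device *dev,
                               struct device_attribute *attr, char *buf)
    {
        return sysfs_emit(buf, "0x%08x\n", foodev_read_reg(dev, FOODEV_REG_STATUS));
    }
    static DEVICE_ATTR_RO(status);

    /* Called after device_create() in probe(). */
    static int foodev_create_attrs(struct device *dev)
    {
        int ret = device_create_file(dev, &dev_attr_version);
        if (ret)
            return ret;
        return device_create_file(dev, &dev_attr_status);
    }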
I think you should take a look at the PCI framework to implement your driver.
Don't (mis)use ioctls; you'll have race conditions. Use the attributes as files. That scheme is already used in sysfs, e.g. look at GPIO LEDs and keys. – sawdust
I'm having a small architecture argument with a coworker at the moment. I was hoping some of you could help settle it by strongly suggesting one approach over another.
We have a DSP and Cortex-M3 coupled together with shared memory. The DSP receives requests from the external world and some of these requests are to execute certain wireless test functionality which can only be done on the CM3. The DSP writes to shared memory, then signals the CM3 via an interrupt. The shared memory indicates what the request is along with any necessary data required to perform the request (channel to tune to, register of RF chip to read, etc).
My preference is to generate, in the interrupt, a unique event ID for each request that can occur. Then, before leaving the interrupt, pass that event on to the state machine's event queue, which is handled in the thread devoted to RF activity.
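(For concreteness, here is a sketch of what I mean; the command codes, the shared-memory layout, and rf_queue_post() standing in for posting into the RF active object's queue are all made up:)

    /* Sketch of the ISR mapping shared-memory requests to unique events.
     * All names, addresses and codes here are invented for illustration. */
    #include <stdint.h>
    #include <stdbool.h>

    enum rf_event {                     /* one event ID per request type */
        EVT_RF_TUNE_CHANNEL,
        EVT_RF_READ_REGISTER,
        EVT_RF_TX_TEST,
    };

    struct shm_request {                /* layout written by the DSP */
        volatile uint32_t cmd;
        volatile uint32_t arg0;
        volatile uint32_t arg1;
    };

    #define SHM_REQ ((struct shm_request *)0x20030000u)   /* assumed address */

    extern bool rf_queue_post(enum rf_event evt);         /* hypothetical */

    /* Interrupt raised by the DSP: translate the request into a specific
     * event before leaving the ISR. */
    void DSP_IRQHandler(void)
    {
        switch (SHM_REQ->cmd) {
        case 1: rf_queue_post(EVT_RF_TUNE_CHANNEL);  break;
        case 2: rf_queue_post(EVT_RF_READ_REGISTER); break;
        case 3: rf_queue_post(EVT_RF_TX_TEST);       break;
        default: /* unknown request: ignore or post an error event */ break;
        }
        /* clear the interrupt source here */
    }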
My coworker would instead like to pass a single event ID (generic RF command) to the state machine and have the parsing of the shared memory area occur after receiving this event ID in the state machine. After parsing, you would then know the specific command you need to act on.
I dislike this approach because you will be doing the parsing of shared memory in whatever state you happen to be in. You can make this a function, but it's still processing that should be state-independent. She doesn't like the idea of parsing shared memory in the interrupt.
Any comments on the better approach? If it helps, we're using the QP framework from Miro Samek for state machine implementation.
EDIT: moved statechart to ftp://hiddenoaks.asuscomm.com/Statechart.bmp
Here's a compromise:
pass a single event ID (generic RF command) to the state machine from the interrupt
create an action_function that "parses" the shared memory and returns a specific command
guard RF_EVENT transitions in the statechart with [parser_action_func() == RF_CMD_1] etc.
The statechart code generator should be smart enough to execute parser_action_func() only once per RF_EVENT. (I don't know whether the QP framework is that smart.)
This has the same statechart semantics as your "unique event ID for each request," and it avoids parsing the shared memory in the interrupt handler.
ADDENDUM
The difference in the statechart is N transitions labeled
----RF_EVT_CMD_1---->
----RF_EVT_CMD_2---->
...
----RF_EVT_CMD_N---->
versus
----RF_EVT[cmd()==CMD_1]---->
----RF_EVT[cmd()==CMD_2]---->
...
----RF_EVT[cmd()==CMD_N]---->
where cmd() is the parsing action function.
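(For clarity, this is roughly what the guarded dispatch boils down to if written by hand; the names are hypothetical and this is not generated QP code:)

    /* Sketch: one RF_EVT transition guarded by the result of the parsing
     * action function, evaluated once per dispatched event. */
    enum rf_cmd { RF_CMD_1, RF_CMD_2, RF_CMD_N };

    extern enum rf_cmd parser_action_func(void);   /* parses shared memory */

    static void some_state_on_rf_evt(void)
    {
        enum rf_cmd cmd = parser_action_func();    /* evaluated once */

        if (cmd == RF_CMD_1) {
            /* take the transition for CMD_1 */
        } else if (cmd == RF_CMD_2) {
            /* take the transition for CMD_2 */
        } else if (cmd == RF_CMD_N) {
            /* take the transition for CMD_N */
        }
    }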
I wrote a kernel module and used dev_add_pack to get all the incoming packets.
According to the given filter rules, if a packet matches, I forward it to user space.
When I load this kernel module and send UDP traffic using SIPp, the ksoftirqd process appears and starts consuming CPU (I am checking this with the top command).
Is there any way to save CPU?
I guess you use the ETH_P_ALL type to register your packet_type structure with the protocol stack. I think your packet_type->func is the bottleneck: either it consumes a lot of CPU itself, or it breaks the existing protocol-stack model and triggers other registered packet_type functions, which consume CPU. So the only way to save CPU is to optimize your packet_type->func. If the function is too complicated, you should consider splitting it into several parts: keep the simple part as the packet_type->func, which runs in ksoftirqd context, and move the complicated parts to another kernel-thread context (you can create a new thread in your kernel module if needed).
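A minimal sketch of that split, assuming the filtering/forwarding logic lives in a hypothetical heavy_process_skb(); only the cheap enqueue runs in ksoftirqd context, and the heavy part runs in a dedicated kernel thread:

    /* Sketch: lightweight packet_type handler that defers work to a kthread.
     * heavy_process_skb() is a placeholder for the filter/forward logic. */
    #include <linux/module.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/kthread.h>
    #include <linux/wait.h>
    #include <linux/if_ether.h>
    #include <linux/err.h>

    extern void heavy_process_skb(struct sk_buff *skb);   /* hypothetical */

    static struct sk_buff_head rx_queue;
    static wait_queue_head_t rx_wait;
    static struct task_struct *rx_thread;

    /* packet_type->func: keep this as small as possible. */
    static int sniff_rcv(struct sk_buff *skb, struct net_device *dev,
                         struct packet_type *pt, struct net_device *orig_dev)
    {
        if (skb_queue_len(&rx_queue) > 1000) {  /* crude overload protection */
            kfree_skb(skb);
            return 0;
        }
        skb_queue_tail(&rx_queue, skb);
        wake_up_interruptible(&rx_wait);
        return 0;
    }

    static int rx_thread_fn(void *data)
    {
        struct sk_buff *skb;

        while (!kthread_should_stop()) {
            wait_event_interruptible(rx_wait,
                    !skb_queue_empty(&rx_queue) || kthread_should_stop());
            while ((skb = skb_dequeue(&rx_queue)) != NULL) {
                heavy_process_skb(skb);
                kfree_skb(skb);
            }
        }
        return 0;
    }

    static struct packet_type sniff_pt = {
        .type = cpu_to_be16(ETH_P_ALL),
        .func = sniff_rcv,
    };

    static int __init sniff_init(void)
    {
        skb_queue_head_init(&rx_queue);
        init_waitqueue_head(&rx_wait);
        rx_thread = kthread_run(rx_thread_fn, NULL, "sniff_rx");
        if (IS_ERR(rx_thread))
            return PTR_ERR(rx_thread);
        dev_add_pack(&sniff_pt);
        return 0;
    }

    static void __exit sniff_exit(void)
    {
        dev_remove_pack(&sniff_pt);
        kthread_stop(rx_thread);
        skb_queue_purge(&rx_queue);
    }

    module_init(sniff_init);
    module_exit(sniff_exit);
    MODULE_LICENSE("GPL");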