Linux block device module hangs when unloading with rmmod (del_gendisk)

I'm writing a small example block device driver for Linux. The example is not complete; I'm making progress step by step. I registered a block device with register_blkdev and allocated the gendisk structure with alloc_disk. Everything works fine when inserting the module: it shows up in /proc/devices. But when I try to unload it with rmmod, it hangs.
I figured out that, in the module unload function, the call to del_gendisk causes the hang. I know that the gendisk structure has an embedded kobject that takes care of reference counting, and that this mechanism prevents you from unloading the module while it is in use. But since I never call add_disk, there should be no references to that structure.
#include <linux/module.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/genhd.h>
#include <linux/blkdev.h>

#define BLKDEVNAME "blkdev_test"
#define MINORS 16

struct block_device_operations bdops = {
    .owner = THIS_MODULE
};

struct blkdev {
    int major;
    struct gendisk *disk;
    struct request_queue *queue;
} dev;

static int __init blkdev_init(void)
{
    dev.major = register_blkdev(0, BLKDEVNAME);
    if (dev.major < 0)
        return -EIO;

    dev.disk = alloc_disk(MINORS);
    if (dev.disk == NULL)
        goto DISK_ERR;

    dev.disk->major = dev.major;
    dev.disk->first_minor = 0;
    snprintf(dev.disk->disk_name, DISK_NAME_LEN, "bd0");
    dev.disk->fops = &bdops;
    // dev.disk->queue = dev.queue;
    // add_disk(dev.disk);

    return 0;

DISK_ERR:
    unregister_blkdev(dev.major, BLKDEVNAME);
    return -EIO;
}

static void __exit blkdev_exit(void)
{
    del_gendisk(dev.disk);
    unregister_blkdev(dev.major, BLKDEVNAME);
}

module_init(blkdev_init);
module_exit(blkdev_exit);
MODULE_LICENSE("GPL");
If I issue the command sudo rmmod mod.ko, the command gets killed by the system.

Quoting:
Manipulating gendisks
Once you have your gendisk structure set up, you have to add it to the list of active disks; that is done with:
void add_disk(struct gendisk *disk);
After this call, your device is active. There are a few things worth keeping in mind about add_disk():
- add_disk() can create I/O to the device (to read partition tables and such). You should not call add_disk() until your driver is sufficiently initialized to handle requests.
- If you are calling add_disk() in your driver initialization routine, you should not fail the initialization process after the first call.
- The call to add_disk() increments the disk's reference count; if the disk structure is ever to be released, the driver is responsible for decrementing that count (with put_disk()).
Should you need to remove a disk from the system, that is accomplished with:
void del_gendisk(struct gendisk *disk);
This function cleans up all of the information associated with the given disk, and generally removes it from the system. After a call to del_gendisk(), no more operations will be sent to the given device. Your driver's reference to the gendisk object remains, though; you must explicitly release it with:
void put_disk(struct gendisk *disk);
That call will cause the gendisk structure to be freed, as long as no other part of the kernel retains a reference to it.


Synchronizing a usermode application to a kernel driver

We have a driver that schedules work using a timer (via add_timer). Whenever the work results in a change, an application should be notified of this change.
Currently this is done by providing a sysfs entry from the driver that blocks a read operation until there is a change in the data.
(Please see the relevant code from the driver and the application in the code block below.)
I have inspected the source of most functions related to this in the Linux kernel (4.14.98), and I did not notice any obvious problems in dev_attr_show, sysfs_kf_seq_show, __vfs_read and vfs_read.
However, in seq_read the mutex file->private_data->lock is held for the duration of the read:
https://elixir.bootlin.com/linux/v4.14.98/source/fs/seq_file.c#L165
Will this pose a problem?
Are there any other (potential) problems that I should be aware of?
Please note that:
The data will change at least once per second, usually much more often
The application should respond as soon as possible to a change in the data
This runs in a controlled (embedded) environment
// driver.c
static DECLARE_WAIT_QUEUE_HEAD(wq);
static int xxx_block_c = 0;

// The driver calls this to notify the application that something changed
// (from a timer, `timer_list`).
void xxx_Persist(void)
{
    xxx_block_c = 1;
    wake_up_interruptible(&wq);
}

// Sysfs entry that blocks until there is a change in the data.
static ssize_t xxx_show(struct device *dev,
                        struct device_attribute *attr,
                        char *buf)
{
    wait_event_interruptible(wq, xxx_block_c != 0);
    xxx_block_c = 0;
    /* Remainder of the implementation */
}

// application.cpp
std::ifstream xxx;
xxx.open("xxx", std::ios::binary | std::ios::in);
while (true)
{
    xxx.clear();
    xxx.seekg(0, std::ios_base::beg);
    xxx.read(data, size);
    /* Do something with data */
}
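One detail worth noting in the show routine above: wait_event_interruptible() returns non-zero when the sleep is interrupted by a signal, in which case the data has not necessarily changed. A minimal sketch of handling that case (the -ERESTARTSYS return is an illustration, not taken from the original driver):
static ssize_t xxx_show(struct device *dev,
                        struct device_attribute *attr,
                        char *buf)
{
    /* If a signal interrupts the wait, report it instead of pretending the data changed. */
    if (wait_event_interruptible(wq, xxx_block_c != 0))
        return -ERESTARTSYS;
    xxx_block_c = 0;
    /* Remainder of the implementation */
}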

mbed uvisor and EthernetInterface overflowed

I am quite new to mbed and uvisor, so maybe my problem is about understanding how things work. I have an NXP FRDM-K64F board with which I am trying to learn about mbed and uvisor. I have successfully compiled and run some basic examples of tasks running in different boxes. Now I am trying to connect one of the uvisor boxes to the network, but something is not working correctly.
This is the main file code:
#include "uvisor-lib/uvisor-lib.h"
#include "mbed.h"
#include "main-hw.h"
/* Create ACLs for main box. */
MAIN_ACL(g_main_acl);
/* Enable uVisor. */
UVISOR_SET_MODE_ACL(UVISOR_ENABLED, g_main_acl);
UVISOR_SET_PAGE_HEAP(8 * 1024, 5);
int main(void)
{
printf("----Eup---------\r\n");
DigitalOut led(MAIN_LED);
while (1) {
printf("taka\r\n");
led = !led;
/* Blink once per second. */
Thread::wait(1000);
}
return 0;
}
This is the code in the box file:
#include "uvisor-lib/uvisor-lib.h"
#include "mbed.h"
#include "main-hw.h"
#include "EthernetInterface.h"
// Network interface
EthernetInterface net;
struct box_context {
Thread * thread;
uint32_t heartbeat;
};
static const UvisorBoxAclItem acl[] = {
};
static void my_box_main(const void *);
/* Box configuration
* We need 1kB of stack both in the main and interrupt threads as both of them
* use printf. */
UVISOR_BOX_NAMESPACE(NULL);
UVISOR_BOX_HEAPSIZE(3072);
UVISOR_BOX_MAIN(my_box_main, osPriorityNormal, 1024);
UVISOR_BOX_CONFIG(my_box, acl, 1024, box_context);
static void my_box_main(const void *)
{
while (1) {
printf("tan tan\r\n");
Thread::wait(2000);
}
}
I have not yet added the actual connection code, just the definition of the EthernetInterface object, and I am getting the following error when building:
../../../../arm-none-eabi/bin/ld.exe: Region m_data_2 overflowed with stack and heap
collect2.exe: error: ld returned 1 exit status
I have tried changing the values of the heap size but I have not found a way of making it work. What am I missing?
In your main box, change the value for UVISOR_SET_PAGE_HEAP.
With UVISOR_SET_PAGE_HEAP(8 * 1024, 3) in the main box, an 8K heap in the secure box, and UVISOR_BOX_STACK_SIZE as the stack size in the secure box, it compiles and links for me (mbed OS 5.3, GCC ARM on K64F).
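For reference, this is roughly what that change looks like; both macros already appear in the question's code, so only the values are new here (treat them as a sketch based on the answer above, not verified configuration):
/* main box (main.cpp): shrink the page heap from 5 pages to 3 */
UVISOR_SET_PAGE_HEAP(8 * 1024, 3);

/* secure box (box file): give the box an 8K heap instead of 3072 bytes */
UVISOR_BOX_HEAPSIZE(8 * 1024);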

Does Linux kernel have main function?

I am learning device driver and kernel programming. According to Jonathan Corbet's book, we do not have a main() function in device drivers.
#include <linux/init.h>
#include <linux/module.h>

static int my_init(void)
{
    return 0;
}

static void my_exit(void)
{
    return;
}

module_init(my_init);
module_exit(my_exit);
Here I have two questions:
Why do we not need a main() function in device drivers?
Does the kernel have a main() function?
start_kernel
As of 4.2, start_kernel from init/main.c does a considerable amount of initialization and could be compared to a main function.
It is the first arch-independent code to run, and it sets up a large part of the kernel. Much like main in userland is preceded by some lower-level setup code (done in the crt* objects), start_kernel is preceded by some lower-level setup code, after which the "main" generic C code runs.
How start_kernel gets called in x86_64
arch/x86/kernel/vmlinux.lds.S, a linker script, sets:
ENTRY(phys_startup_64)
and
phys_startup_64 = startup_64 - LOAD_OFFSET;
and:
#define LOAD_OFFSET __START_KERNEL_map
arch/x86/include/asm/page_64_types.h defines __START_KERNEL_map as:
#define __START_KERNEL_map _AC(0xffffffff80000000, UL)
which is the kernel entry address. TODO how is that address reached exactly? I have to understand the interface Linux exposes to bootloaders.
arch/x86/kernel/vmlinux.lds.S sets the very first bootloader section as:
.text : AT(ADDR(.text) - LOAD_OFFSET) {
_text = .;
/* bootstrapping code */
HEAD_TEXT
include/asm-generic/vmlinux.lds.h defines HEAD_TEXT:
#define HEAD_TEXT *(.head.text)
arch/x86/kernel/head_64.S defines startup_64. That is the very first x86 kernel code that runs. It does a lot of low level setup, including segmentation and paging.
That is then the first thing that runs because the file starts with:
.text
__HEAD
.code64
.globl startup_64
and include/linux/init.h defines __HEAD as:
#define __HEAD .section ".head.text","ax"
so the same as the very first thing of the linker script.
At the end it calls x86_64_start_kernel a bit awkwardly with an lretq:
movq initial_code(%rip),%rax
pushq $0 # fake return address to stop unwinder
pushq $__KERNEL_CS # set correct cs
pushq %rax # target address in negative space
lretq
and:
.balign 8
GLOBAL(initial_code)
.quad x86_64_start_kernel
arch/x86/kernel/head64.c defines x86_64_start_kernel which calls x86_64_start_reservations which calls start_kernel.
arm64 entry point
The very first arm64 code that runs on a v5.7 uncompressed kernel is defined at https://github.com/cirosantilli/linux/blob/v5.7/arch/arm64/kernel/head.S#L72, so it is either the add x13, x18, #0x16 or the b stext, depending on CONFIG_EFI:
__HEAD
_head:
/*
* DO NOT MODIFY. Image header expected by Linux boot-loaders.
*/
#ifdef CONFIG_EFI
/*
* This add instruction has no meaningful effect except that
* its opcode forms the magic "MZ" signature required by UEFI.
*/
add x13, x18, #0x16
b stext
#else
b stext // branch to kernel start, magic
.long 0 // reserved
#endif
le64sym _kernel_offset_le // Image load offset from start of RAM, little-endian
le64sym _kernel_size_le // Effective size of kernel image, little-endian
le64sym _kernel_flags_le // Informative flags, little-endian
.quad 0 // reserved
.quad 0 // reserved
.quad 0 // reserved
.ascii ARM64_IMAGE_MAGIC // Magic number
#ifdef CONFIG_EFI
.long pe_header - _head // Offset to the PE header.
This is also the very first byte of an uncompressed kernel image.
Both of those cases jump to stext which starts the "real" action.
As mentioned in the comment, these two instructions are the first 64 bytes of a documented header described at: https://github.com/cirosantilli/linux/blob/v5.7/Documentation/arm64/booting.rst#4-call-the-kernel-image
arm64 first MMU enabled instruction: __primary_switched
I think it is __primary_switched in head.S:
/*
* The following fragment of code is executed with the MMU enabled.
*
* x0 = __PHYS_OFFSET
*/
__primary_switched:
At this point, the kernel appears to create page tables + maybe relocate itself such that the PC addresses match the symbols of the vmlinux ELF file. Therefore at this point you should be able to see meaningful function names in GDB without extra magic.
arm64 secondary CPU entry point
secondary_holding_pen defined at: https://github.com/cirosantilli/linux/blob/v5.7/arch/arm64/kernel/head.S#L691
Entry procedure further described at: https://github.com/cirosantilli/linux/blob/v5.7/arch/arm64/kernel/head.S#L691
Fundamentally, there is nothing special about a routine being named main(). As alluded to above, main() serves as the entry point for an executable load module. However, you can define different entry points for a load module. In fact, you can define more than one entry point, for example, refer to your favorite dll.
From the operating system's (OS) point of view, all it really needs is the address of the entry point of the code that will function as a device driver. The OS will pass control to that entry point when the device driver is required to perform I/O to the device.
A system programmer defines (each OS has its own method) the connection between a device, a load module that functions as the device's driver, and the name of the entry point in the load module.
Each OS has its own kernel (obviously) and some might start with main(), but I would be surprised to find a kernel that uses main() other than a simple one, such as UNIX! By the time you are writing kernel code, you have long moved past the requirement to name every module you write main().
Hope this helps?
I found this code snippet from the kernel for Unix Version 6. As you can see, main() is just another program trying to get started!
main()
{
    extern schar;
    register i, *p;

    /*
     * zero and free all of core
     */
    updlock = 0;
    i = *ka6 + USIZE;
    UISD->r[0] = 077406;
    for(;;) {
        if(fuibyte(0) < 0) break;
        clearsig(i);
        maxmem++;
        mfree(coremap, 1, i);
        i++;
    }
    if(cputype == 70)
    for(i=0; i<62; i=+2) {
        UBMAP->r[i] = i<<12;
        UBMAP->r[i+1] = 0;
    }
    // etc. etc. etc.
Several ways to look at it:
Device drivers are not programs. They are modules that are loaded into another program (the kernel). As such, they do not have a main() function.
The fact that all programs must have a main() function is only true for userspace applications. It does not apply to the kernel, nor to device drivers.
With main() you probably mean what main() is to a program, namely its "entry point".
For a module, that is init_module().
From Linux Device Drivers, 2nd Edition:
Whereas an application performs a single task from beginning to end, a module registers itself in order to serve future requests, and its "main" function terminates immediately. In other words, the task of the function init_module (the module's entry point) is to prepare for later invocation of the module's functions; it's as though the module were saying, "Here I am, and this is what I can do." The second entry point of a module, cleanup_module, gets invoked just before the module is unloaded. It should tell the kernel, "I'm not there anymore; don't ask me to do anything else."
Yes, the Linux kernel has a main function; it is located in the arch/x86/boot/main.c file. But kernel execution starts from the arch/x86/boot/header.S assembly file, and the main() function is called from there by a "calll main" instruction.
Here is that main function:
void main(void)
{
    /* First, copy the boot header into the "zeropage" */
    copy_boot_params();

    /* Initialize the early-boot console */
    console_init();
    if (cmdline_find_option_bool("debug"))
        puts("early console in setup code.\n");

    /* End of heap check */
    init_heap();

    /* Make sure we have all the proper CPU support */
    if (validate_cpu()) {
        puts("Unable to boot - please use a kernel appropriate "
             "for your CPU.\n");
        die();
    }

    /* Tell the BIOS what CPU mode we intend to run in. */
    set_bios_mode();

    /* Detect memory layout */
    detect_memory();

    /* Set keyboard repeat rate (why?) and query the lock flags */
    keyboard_init();

    /* Query Intel SpeedStep (IST) information */
    query_ist();

    /* Query APM information */
#if defined(CONFIG_APM) || defined(CONFIG_APM_MODULE)
    query_apm_bios();
#endif

    /* Query EDD information */
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
    query_edd();
#endif

    /* Set the video mode */
    set_video();

    /* Do the last things and invoke protected mode */
    go_to_protected_mode();
}
While the function name main() is just a common convention (there is no real reason to use it in kernel mode), the Linux kernel does have a main() function for many architectures, and of course user-mode Linux has a main function.
Note that the OS runtime calls the main() function to start an app; when an operating system boots, there is no such runtime: the kernel is simply loaded at an address by the boot loader, which is loaded by the MBR, which is loaded by the hardware. So while a kernel may contain a function called main, it need not be the entry point.
See Also:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms633559%28v=vs.85%29.aspx
Linux kernel source:
x86: linux-3.10-rc6/arch/x86/boot/main.c
arm64: linux-3.10-rc6/arch/arm64/kernel/asm-offsets.c

Newly compiled kernel based on version 2.4.20 doesn't start after initializing a global structure in sched.c

Hi there. I'm trying to build a new kernel based on version 2.4.20. I have a header file that contains the definitions of two structures (one for the node used by a linked list and one for the list itself) and two function prototypes, which are defined in my new system call file sample.c. However, when I define a list globally and try to make an allocation in sched_init() in sched.c, my new kernel doesn't boot: it gets stuck before starting. Here you can see my header file and system call file.
/* project_header.h */
#ifndef __LINUX_PROJECT_HEADER_H
#define __LINUX_PROJECT_HEADER_H

#include <linux/linkage.h>
#include <linux/vmalloc.h>

typedef struct node {
    struct node *next;
    struct node *prev;
    long project_pid;
    long project_ticket_number;
} PROJECT_NODE;

typedef struct {
    PROJECT_NODE *head;
    PROJECT_NODE *tail;
    int list_size;
} PROJECT_LIST;

PROJECT_LIST *project_init_list(void);
void project_add_node(PROJECT_LIST *, long);

#endif /* __LINUX_PROJECT_HEADER_H */
This is the system call file sample.c that I implemented. As you can see, the functions are defined here, and the prototypes are in project_header.h, which is included by the two files that use them, fork.c and sched.c.
/* sample.c */
#include <linux/sample.h>
#include <linux/project_header.h>

long int maximum_ticket_number = 0;
extern PROJECT_LIST *project_list;

PROJECT_LIST *project_init_list(void)
{
    PROJECT_LIST *list = vmalloc(sizeof(*list));
    list->list_size = 0;
    list->head = NULL;
    list->tail = NULL;
    return list;
}

void project_add_node(PROJECT_LIST *list, long id)
{
    PROJECT_NODE *pnew;

    pnew = vmalloc(sizeof(*pnew));
    pnew->project_pid = id;
    maximum_ticket_number++;
    pnew->project_ticket_number = maximum_ticket_number;
    if (list->list_size == 0) {     /* list is empty */
        list->head = pnew;
        list->tail = pnew;
        list->list_size++;
    } else {
        list->tail->next = pnew;
        pnew->prev = list->tail;
        list->tail = pnew;
        list->list_size++;
    }
}

asmlinkage void sys_sample(void)    /* system call that prints the current list size */
{
    printk("LIST->SIZE = %d\n", project_list->list_size);
    return;
}
and this is the part which added into sched.c
/* sched.c */
.
.
#include <linux/project_header.h>
#include <linux/sample.h>
PROJECT_LIST* project_list; // Create a list globally
extern PROJECT_LIST* project_init_list(void); // Provide to call project_init_list function which returns a list properly
.
.
void __init sched_init(void){
.
.
project_list = vmalloc(sizeof(*project_list)); //Allocate space and initialize the variables of main list
.
.
And this is a snapshot of my current situation before the kernel gets started.
I'm sure that the problem is in the sched_init() function, but I can't find it. I would appreciate any help; thanks anyway.
This isn't REALLY an answer, because this isn't a question that can be answered with the information given. But it is guidance on how to find out why the kernel doesn't start, and a bit about how the kernel starts.
printk() is the kernel's counterpart to printf() outside of the kernel. Add printk() calls at the points you think you are reaching, and at the places where you think things can fail; e.g. if you have myptr = vmalloc(...);, then print whether the allocation succeeded right after it.
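A minimal sketch of that kind of check (myptr and the message text are placeholders, not taken from the question's code):
myptr = vmalloc(sizeof(*myptr));
printk("vmalloc for myptr returned %p\n", myptr);    /* NULL means the allocation failed */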
Obviously, if you are so early in boot that the kernel proper hasn't started and printk may not be available, then writing bytes directly to the serial port with out instructions is the way to debug:
movw $0x3f8, %dx    # COM1 data (transmit) register
movb $65, %al       # ASCII 'A'
outb %al, %dx
will print an 'A' (ASCII code 65) on the serial port. Don't hammer more than about 16 characters out at once; they will only come out at 9600 bps or some such.
By the way, the only part of the kernel where you aren't in protected mode is for a few dozen instructions or so.
However, the virtual memory handling doesn't start immediately, and in fact you may find that it doesn't work until AFTER the scheduler and the "kswapd" process have started - meaning that you can't use vmalloc in the scheduler. I'm not certain, as this is not a part of the Linux kernel I know that well, but it may be worth looking at kmalloc instead, which is lower-level kernel functionality. Just beware that they aren't exact replacements, so you need to look at the parameters, what they mean, and how to translate between them. I don't think I've ever used vmalloc() in the kernel - are you sure that's the right function to use?
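If the list really must exist by the time sched_init() runs, one way to sidestep the question of whether vmalloc()/kmalloc() are usable that early is to not allocate dynamically at all. A minimal sketch (names follow the question's code; this static-allocation approach is offered only as an illustration, it is not taken from the answer above):
/* sched.c - a statically allocated list needs no allocator during early boot */
static PROJECT_LIST project_list_storage;
PROJECT_LIST *project_list = &project_list_storage;

void __init sched_init(void)
{
    /* ... */
    /* no vmalloc()/kmalloc() here; just initialize the fields */
    project_list->head = NULL;
    project_list->tail = NULL;
    project_list->list_size = 0;
    /* ... */
}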

alternative to container_of()

I am a newbie trying to write a serial driver (PCI based), and I don't want to use container_of() because of backward compatibility concerns. The kernel versions I may compile the module for would be older than 2.6.x, so I want to make it compatible with most of the old and new versions.
I want to access the members of a serial card driver structure. The structure is a custom one containing, for example, an atomic variable use_count and the related operations on it, such as atomic_inc(&serial_card->use_count). I don't want to access them by using container_of(), which would give me the containing structure.
Is there any alternative to container_of()? If I am not wrong, the text Linux Device Drivers by Alessandro Rubini describes a way on page 174, Chapter 6: Advanced Char Driver Operations.
But I am still unsure about how to assign something like struct scull_dev *dev = &scull_s_device.
If the structure itself contains a variable of type struct pci_dev *dev, the above statement populates a similar variable and that is assigned to dev.
In my case I have declared a structure and a related function as below:
struct serial_card
{
    unsigned int id;                 // identifies each card
    //atomic_t use_count;            // used to check whether the device is already opened
    wait_queue_head_t rx_queue[64];  // queues in which processes wait
    unsigned int data_ready[64];     // per-queue "data ready" flags
    unsigned int rx_chan;            // used by the interrupt handler
    unsigned int base, len;          // physical base address and total area (for each card)
    unsigned int *vbase;             // holds the virtual address
    /*struct cdev cdev;                kernel uses this structure to represent EACH char device;
                                       not using the new method to represent char devices in the
                                       kernel, instead using the old method of register_chrdev() */
    struct pci_dev *device;          // pci_dev structure for EACH device
    //struct semaphore sem;          // semaphore for process coordination, in case the need arises
};

static struct serial_card *serial_cards;    // pointer to an array of structures, one per card; NO_OF_CARDS is #defined in a header file

static int serialcard_open(struct inode *inode, struct file *filep)
{
    // Getting the per-card structure via inode->i_cdev and the embedded cdev field would look like:
    //struct serial_card *serial_cards = container_of(inode->i_cdev, struct serial_card, cdev);

    static int Device_Open = 0;    // counts how many times the device has been opened

    if (Device_Open) {
        printk("cPCIserial: Open attempt rejected\n");
        return -EBUSY;
    }
    Device_Open++;

    // using the card, so increment use_count
    //atomic_inc(&serial_cards->use_count);
    //filep->private_data = serial_cards;
    return 0;
}
The complete description on pages 174-175 is as follows:
Single-Open Devices
The brute-force way to provide access control is to permit a device to be opened by only one process at a time (single openness). This technique is best avoided because it inhibits user ingenuity. A user might want to run different processes on the same device, one reading status information while the other is writing data. In some cases, users can get a lot done by running a few simple programs through a shell script, as long as they can access the device concurrently. In other words, implementing a single-open behavior amounts to creating policy, which may get in the way of what your users want to do. Allowing only a single process to open a device has undesirable properties, but it is also the easiest access control to implement for a device driver, so it's shown here.
The source code is extracted from a device called scullsingle.
The scullsingle device maintains an atomic_t variable called scull_s_available; that variable is initialized to a value of one, indicating that the device is indeed available. The open call decrements and tests scull_s_available and refuses access if somebody else already has the device open:
static atomic_t scull_s_available = ATOMIC_INIT(1);

static int scull_s_open(struct inode *inode, struct file *filp)
{
    struct scull_dev *dev = &scull_s_device; /* device information */

    if (! atomic_dec_and_test (&scull_s_available)) {
        atomic_inc(&scull_s_available);
        return -EBUSY; /* already open */
    }

    /* then, everything else is copied from the bare scull device */

    if ( (filp->f_flags & O_ACCMODE) == O_WRONLY)
        scull_trim(dev);
    filp->private_data = dev;
    return 0; /* success */
}
The release call, on the other hand, marks the device as no longer busy:
static int scull_s_release(struct inode *inode, struct file *filp)
{
    atomic_inc(&scull_s_available); /* release the device */
    return 0;
}
Normally, we recommend that you put the open flag scull_s_available within the
device structure (Scull_Dev here) because, conceptually, it belongs to the device. The
scull driver, however, uses standalone variables to hold the flag so it can use the same
device structure and methods as the bare scull device and minimize code duplication.
Please let me know of any alternative to this.
Thanks and regards.
Perhaps I am missing the point, but container_of is not a function, it is a macro. If you have porting concerns, you can safely define it yourself if the system headers don't implement it. Here is a basic implementation:
#ifndef container_of
#define container_of(ptr, type, member) \
((type *) \
( ((char *)(ptr)) \
- ((char *)(&((type*)0)->member)) ))
#endif
Or here is the more accurate implementation from recent Linux headers:
#define container_of(ptr, type, member) ({                      \
    const typeof( ((type *)0)->member ) *__mptr = (ptr);        \
    (type *)( (char *)__mptr - offsetof(type,member) );})
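For illustration, a minimal usage sketch of the macro (struct outer and struct inner are made-up names, not from the question): given a pointer to a member that is embedded by value inside a larger structure, container_of() walks back to the enclosing structure.
#include <stddef.h>    /* offsetof, used by the second implementation */

struct inner { int x; };

struct outer {
    int id;
    struct inner member;    /* embedded by value, not as a pointer */
};

void example(void)
{
    static struct outer o;
    struct inner *ip = &o.member;

    /* recover the enclosing struct outer from the pointer to its member */
    struct outer *op = container_of(ip, struct outer, member);
    op->id = 42;    /* op and &o refer to the same object */
}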
In the question's code, container_of() is used to store and retrieve the correct dev structure easily. The correct structure can also be retrieved inside the read and write functions without going through private_data.
I do not know why you do not want to use private_data and container_of(), but you can always retrieve your minor number from the struct file pointer:
int minor = MINOR(filp->f_dentry->d_inode->i_rdev);
and then access your array of devices using something like:
struct scull_dev *dev = &scull_devices[minor];
and use it.
You will need to use filp->private_data to store "per-open" information that you also use in read/write. You will need to decide what to store to ensure that the right information is available.
Probably you want two structures: one "device structure" and one "open structure". The open structure can be dynamically allocated in open and stored in private_data. In release, it gets freed. It should have the members you need in read/write to get access to the data.
The device structure is going to be per "card". In your driver init, you probably want to loop over the number of cards and create a new device structure (serial_card) for each. You can make them a static array or allocate them dynamically; it doesn't matter. I would store the minor number in the structure as well. The minor number is chosen by you, so start with 1 and go through the number of cards. You can reserve 0 for a system-level interface if you want, or just start at 0 for the cards.
In open, you will get the minor number that the user opened. Go through your serial_card list looking for a match. If you don't find it, return an error from open. Otherwise, you have your info and can use it to allocate an "open structure", populate it, and store it in filp->private_data. A sketch of this scheme follows.
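A minimal sketch of that scheme under the old (pre-2.6 style) char driver model the question targets; NR_CARDS, struct serial_open and the individual fields are illustrative assumptions, not taken from the original driver:
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/kdev_t.h>
#include <linux/errno.h>

#define NR_CARDS 4

struct serial_card {
    int minor;                   /* minor number assigned to this card */
    unsigned int base;           /* ... per-card state ... */
};

struct serial_open {
    struct serial_card *card;    /* which card this open refers to */
    /* ... per-open state (read position, flags, ...) ... */
};

static struct serial_card cards[NR_CARDS];

static int serialcard_open(struct inode *inode, struct file *filp)
{
    int minor = MINOR(inode->i_rdev);    /* on 2.6+ kernels iminor(inode) is preferred */
    struct serial_open *op;
    int i;

    /* find the card that owns this minor number */
    for (i = 0; i < NR_CARDS; i++)
        if (cards[i].minor == minor)
            break;
    if (i == NR_CARDS)
        return -ENODEV;

    /* per-open bookkeeping lives in its own structure */
    op = kmalloc(sizeof(*op), GFP_KERNEL);
    if (!op)
        return -ENOMEM;
    op->card = &cards[i];
    filp->private_data = op;
    return 0;
}

static int serialcard_release(struct inode *inode, struct file *filp)
{
    kfree(filp->private_data);    /* free the per-open structure */
    return 0;
}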
@Giuseppe Guerrini Some compilers just cannot handle the Linux implementation of container_of.
The basic define is fine, but I wonder if there is any safety or compatibility risk in this implementation:
#ifndef container_of
#define container_of(ptr, type, member) \
((type *) \
( ((char *)(ptr)) \
- ((char *)(&((type*)0)->member)) ))
#endif
