I'm writing a few kernel modules (as part of porting Linux to a new ARM board).
Most of what I want to expose to userspace fits nicely into the /sys/class/gpio and /sys/class/leds subsystems.
I have a couple of 12-bit ADCs which I want to expose to userspace.
Which subsystem should I use for this?
(Worst-case is exposing them as character devices which stream the values as newline-separated decimal values. The two ADCs are part of a multi-function device, accessed over an I2C bus by reading two 16-bit registers.)
I tried returning 42 from a gpio module's gpio_chip.get function pointer, but it was changed to 1 by the time it reached userspace.
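For what it's worth, a minimal sketch of how one of those reads might look inside a driver, assuming i2c_smbus_read_word_data works for the device; the register offsets and the 12-bit masking are hypothetical, not from a real datasheet:

#include <linux/i2c.h>

#define ADC0_REG 0x10 /* hypothetical register addresses */
#define ADC1_REG 0x12

/* Read one converter; the client comes from the MFD parent device. */
static int adc_read_raw(struct i2c_client *client, u8 reg, u16 *val)
{
    s32 ret = i2c_smbus_read_word_data(client, reg);
    if (ret < 0)
        return ret;
    *val = ret & 0x0fff; /* 12-bit result in a 16-bit register */
    return 0;
}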
I'm working on implementing a simple PCI device in QEMU and a kernel driver for it, and I have some trouble with handling pci_read/write_config_* function calls from the device side.
Unlike simple read/write operations on a memory-mapped BAR, where the MemoryRegionOps callbacks receive the exact offset used by the driver, the config_read/write callbacks, implemented as members of the PCIDevice struct, receive an address that has gone through manipulations/mappings which I have a hard time understanding.
Following the code path up to pci_config_host_read/write in the QEMU sources, and likewise on the kernel side for the pci_read/write_config_* functions, didn't provide any clear answers.
Can anyone help me understand how to extract the config offset used by the driver when it calls the PCI config read/write functions?
If you set your PCI device model up to implement the QEMU PCIDevice config_read and config_write methods, the addresses passed to them should be the offsets into the PCI config space (i.e. starting with the standard 0 == PCI_VENDOR_ID, 2 == PCI_DEVICE_ID, 4 == PCI_COMMAND and so on, with any device-specific registers after the 64 bytes of standardized config space).
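For illustration, a minimal sketch of such hooks (the device names and the 0x40 offset are hypothetical; pci_default_read_config handles the standard fields):

/* 'addr' arrives as the plain offset into config space. */
static uint32_t mydev_config_read(PCIDevice *d, uint32_t addr, int len)
{
    if (addr == 0x40) {
        /* first byte after the 64-byte standard header:
           handle a device-specific register here */
    }
    return pci_default_read_config(d, addr, len); /* fall back for the rest */
}

static void mydev_class_init(ObjectClass *klass, void *data)
{
    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
    k->config_read = mydev_config_read;         /* hook config accesses */
    k->config_write = pci_default_write_config; /* keep default writes */
}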
Can anybody explain the usage of the EN4B command of Micron SPI flash chips?
I want to know the difference between 3-byte and 4-byte address mode in SPI.
I was going through the SPI drivers when I found these commands.
Thanks in advance!
From a legacy point of view, SPI flash commands have always used 3 bytes for the address involved in an operation.
This was fine, as 24 bits can address up to 16MiB.
When flashes grew larger, it became necessary to switch from 3-byte to 4-byte addressing.
Whenever you have doubts about the hardware you can find the answers in the proper datasheet; I don't know which specific chip you are referring to, however.
I looked at the Micron N25Q512A NOR flash, which at 512Mib (64MiB) exceeds that 16MiB limit, so it needs some form of 4-byte addressing; from its datasheet you can learn that:
1. There are 3-byte legacy commands and new 4-byte commands. For example, 03h and 13h for the single read.
2. You can supply a default fourth address byte with a specific register. The Extended Address Register lets you choose which region of the flash the legacy commands operate on.
3. You can enable 4-byte addressing for the legacy commands. Either write the appropriate bit in the Nonvolatile Configuration Register or use the ENTER / EXIT 4-BYTE ADDRESS MODE commands (opcodes B7h and E9h respectively).
This Linux patch also has some insights, basically noting that some chips support only some of the three methods above.
Macronix seems to have first opted for method 3 only and Spansion for method 1.
Checking some of their datasheets suggests that both now support all three methods.
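For reference, a user-space sketch of entering 4-byte mode through Linux spidev (the device path is an assumption, and on most boards the kernel's spi-nor layer would handle this for you):

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/spi/spidev.h>

/* Send WRITE ENABLE, then ENTER 4-BYTE ADDRESS MODE (EN4B, opcode B7h). */
int enter_4byte_mode(const char *dev) /* e.g. "/dev/spidev0.0" */
{
    uint8_t wren = 0x06; /* WRITE ENABLE is required first on these parts */
    uint8_t en4b = 0xB7; /* ENTER 4-BYTE ADDRESS MODE */
    struct spi_ioc_transfer tr[2];
    int fd, ret;

    fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;
    memset(tr, 0, sizeof(tr));
    tr[0].tx_buf = (unsigned long)&wren;
    tr[0].len = 1;
    tr[0].cs_change = 1; /* deassert CS between the two commands */
    tr[1].tx_buf = (unsigned long)&en4b;
    tr[1].len = 1;
    ret = ioctl(fd, SPI_IOC_MESSAGE(2), tr);
    close(fd);
    return ret < 0 ? -1 : 0;
}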
I am somewhat of a beginner in this space.
I am using an Arduino Mega 2560 and interfacing it with a coin machine from a vending machine.
The coin machine runs on a protocol called MDB (Multi-Drop Bus), which is 9-bit serial.
I would normally use the Arduino IDE, but it does not cater for 9-bit serial. I have therefore decided to code in C under Ubuntu 12.04. I have come across a USART setup function which can be put into 9-bit mode.
I have installed avr-gcc avr-libc avrdude.
The coin machine acts depending on the serial data it receives, e.g. to reset it needs to read 100101010 on its Rx (this is a random 9-bit number; I am not sure what the true number is at this moment).
Another example would be if it receives 10101111 on Rx, it would dispense a coin of the desired type, etc.
There are also various other commands like ACK, POLL, etc.
So what I want to do is send the appropriate binary numbers out of the Arduino's Tx into the coin machine's Rx and try to get communication going with the coin machine.
That was just for context, but my main question is more general (let's assume we are working in 8-bit mode):
a) How can I type an 8-bit binary number (e.g. 10111010) in the terminal and have that number put on the Tx line of the Arduino?
b) Since the Mega 2560 has multiple Tx/Rx modules, can I Tx from one module and Rx on another module for testing, so that the 8-bit binary number I type in the terminal appears on the terminal too?
Note: the reason I want the numbers represented in binary is that I want to see each bit; it will make more sense to me that way.
I am trying to do something similar to Bouni's MateDealer (see the GitHub repository), but he implements the Arduino as a slave and I want to implement it as a master.
More on his project here.
Thank you kindly!
A) There are two solutions:
you send the binary representation of the number, like '00001111', through the serial line, and then on the µC you use the function strtoul, which takes as parameters an array of char (in this case '00001111') and the base (here 2) and returns the corresponding value (here 15, i.e. 0x0F);
you create your own terminal, which converts the input binary representation into a decimal representation (here, converts '00001111' into '15') and sends that to the µC, which uses the function atoi to get the corresponding value (here, 15, i.e. 0x0F).
I think the former will be easier, but a little slower, while the latter would offload the microcontroller.
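A sketch of the first approach on the µC side, assuming a hypothetical blocking uart_getc() receive routine:

#include <stdint.h>
#include <stdlib.h>

extern char uart_getc(void); /* hypothetical: blocks until a byte arrives */

/* Collect eight '0'/'1' characters and convert them with base 2. */
uint8_t read_binary_byte(void)
{
    char buf[9]; /* 8 digits + terminator */
    for (uint8_t i = 0; i < 8; i++)
        buf[i] = uart_getc();
    buf[8] = '\0';
    return (uint8_t)strtoul(buf, NULL, 2); /* "10111010" -> 0xBA */
}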
I have developed an embedded solution which communicates over a Multi Drop Bus and now I would like to develop a PC based application which monitors traffic on the bus.
MDB supports true 9 data bits (plus start/stop/parity, and no fudging by using the parity bit as a 9th data bit), whereas standard Windows and Linux libraries offer a maximum of 8 data bits.
I have a StarTech PCI2S950 PC serial port card which supports 9 data bits, but am not sure how to code my monitoring app, and have googled a lot to no great avail.
I would prefer to code in C (or Delphi, or C++). I have a slight preference for Cygwin, but am willing to use straightforward Windows or Linux.
Just anything to read/write 9 data bits over that PC serial port card.
Can anyone help?
The document at http://www.semiconductorstore.com/pdf/newsite/oxford/ox16c950b.pdf describes the differences between various UARTs.
While your StarTech board includes the 16C950, which is capable of RS-485 (and 9-bit data), the board uses it in RS-232-compatible (550) mode, similar to the 16550/8250 from IBM-PC days, and so supports at most 8 data bits.
You need a board with the same chip (16C950) that exposes the RS-485-compatible 950 mode, which supports 9-bit data as per the spec. And any board claiming such support would have to come with custom drivers for Windows, since Microsoft's driver is 8-bit only.
There are several other chips that can do 9-bit RS-485, mentioned here, but again, finding Windows driver support will be tricky. And, of course, many boards use the 16C950 but only in 8-bit and/or RS-232 mode, and without appropriate drivers.
In answer to your related question on Superuser, sawdust suggested the Sealevel 7205e, which looks like a good choice, with Windows driver support. It is pricey but they specifically mention 9-bit, RS-485 support, and Windows drivers. It may well be your best option.
The card you selected is not suitable for this application. It has plain RS-232 ports, which are not suitable for a multi-drop bus. You'll need to shop elsewhere for an EIA-485-style bus interface; you'll only find those at industrial electronics suppliers. By far the best way is to go through the National Automatic Merchandising Association, the industry group that owns the MDB specification.
The 9-bit data format is just a trick, used in the MDB protocol to mode-switch between address bytes and data bytes. All ports on the bus listen to address bytes; only the addressed port listens to data bytes.
The 9th bit is simply the parity bit that any UART can generate. The fundamental data size is still 8 bits. A UART auto-generates the parity bit from the way it was initialized; you can choose between mark, space, odd and even parity.
Now, this is easy to do on a microcontroller that has a UART, the kind of processor used on a bus like this. You simply re-program the UART on the fly, telling it to generate mark parity when you send the address bytes, and re-program it again to space parity when you send the data bytes. Waiting for the FIFO to empty will typically be necessary, although that depends on the actual UART chip.
That is a lot harder to do on a regular Windows or Linux machine; there's a driver between the user-mode program and the UART. The driver generates a "transmit buffer empty" status bit, like WaitCommEvent() for EV_TXEMPTY on Windows, but this doesn't include the FIFO empty status; it only indicates that the buffer is empty. A workaround is to wait for the buffer-empty status and then sleep() long enough to ensure that the FIFO has emptied. A FIFO is typically 16 bytes deep, so sleep for 16 character times. You'll need the datasheet for the UART on the card you selected to know these details for sure.
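A rough Win32 sketch of that parity trick (the handle is assumed to be open already, and the Sleep() value assumes a 16-byte FIFO at MDB's 9600 baud):

#include <windows.h>

/* Switch the port between MARKPARITY (address bytes) and SPACEPARITY (data). */
static void set_parity(HANDLE h, BYTE parity)
{
    DCB dcb = {0};
    dcb.DCBlength = sizeof(dcb);
    GetCommState(h, &dcb);
    dcb.ByteSize = 8;
    dcb.Parity = parity;
    dcb.fParity = TRUE;
    SetCommState(h, &dcb);
}

static void send_mdb_frame(HANDLE h, BYTE addr, const BYTE *data, DWORD n)
{
    DWORD written;
    set_parity(h, MARKPARITY); /* 9th bit = 1 marks the address byte */
    WriteFile(h, &addr, 1, &written, NULL);
    FlushFileBuffers(h); /* wait for the driver's buffer to empty... */
    Sleep(20); /* ...then allow the UART's own FIFO to drain */
    set_parity(h, SPACEPARITY); /* 9th bit = 0 for data bytes */
    WriteFile(h, data, n, &written, NULL);
}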
Under Win32, serial ports are just files, so you create a handle for one with CreateFile and then use a DCB structure to set up the configuration options (the members are documented here and include the number of data bits as ByteSize).
There's a good walk through here:
http://www.codeproject.com/Articles/3061/Creating-a-Serial-communication-on-Win32
The link provided shows the card supports 9 data bits and Windows 8, so I would presume all the card's features are available to an application through the standard Windows API.
Apart from setting the correct data format in a DCB and opening the port, I would have thought the standard ReadFile would work. I wonder if the data read would actually arrive as two 8-bit bytes representing the 9 data bits, rather than as a continuous stream of 9-bit frames (which you would need to decode later).
Is the 9th bit used for some purpose other than data?
I'm having a hard time trying to understand how interrupts work.
The code below initializes the Programmable Interrupt Controller:
#define PIC0_CTRL 0x20 /* Master PIC control register address. */
#define PIC0_DATA 0x21 /* Master PIC data register address. */
/* Mask all interrupts. */
outb (PIC0_DATA, 0xff);
/* Initialize master. */
outb (PIC0_CTRL, 0x11); /* ICW1: edge triggered, cascade mode (ICW3 follows), expect ICW4. */
outb (PIC0_DATA, 0x20); /* ICW2: line IR0...7 -> irq 0x20...0x27. */
outb (PIC0_DATA, 0x04); /* ICW3: slave PIC on line IR2. */
outb (PIC0_DATA, 0x01); /* ICW4: 8086 mode, normal EOI, non-buffered. */
/* Unmask all interrupts. */
outb (PIC0_DATA, 0x00);
Can someone explain to me how it works:
- the role of outb (I didn't understand the Linux man page)
- the addresses and their meaning
Another, unrelated question: I read that outb and inb are for port-mapped I/O; can we use memory-mapped I/O for doing input/output communication instead?
Thanks.
outb() writes the byte specified by its second argument to the I/O port specified by its first argument. In this context, a "port" is a means for the CPU to communicate with another chip.
The specific C code that you present relates to the 8259A Programmable Interrupt Controller (PIC).
You can read about the PIC here and here.
If that doesn't provide enough details to understand the commands and the bit masks, you could always refer to the chip's datasheet.
Device specific code is best read in conjunction with the corresponding datasheet. For example, the "8259A Programmable Interrupt Controller" datasheet (http://pdos.csail.mit.edu/6.828/2005/readings/hardware/8259A.pdf) clearly (but concisely) explains almost everything.
However, this datasheet will only explain how the chip can be used (in any system), and won't explain how the chip is used in a specific system (e.g. in "PC compatible" 80x86 systems). For that you need to rely on "implied de facto standards" (as a lot of the features of the PIC chips aren't used on "PC compatible" 80x86 systems, and may not be supported on modern/integrated chipsets).
Normally (for historical reasons) the PIC chip's IRQs are mapped to interrupts in a strange/bad way. For example, IRQ 0 is mapped to interrupt 8, which conflicts with the CPU's double-fault exception. The specific code you've posted remaps the PIC chips so that IRQ 0 is mapped to interrupt 0x20 (IRQ 1 to interrupt 0x21, ..., IRQ 15 to interrupt 0x2F). It's something an OS typically does to avoid the conflicts (e.g. so that each interrupt is used for an IRQ or an exception, and never both).
To understand outb(), look at the OUT instruction in Intel's manuals. It's as if there are two address spaces: one for normal physical addresses and a completely separate one for I/O ports. Normal instructions (indirectly) access normal physical memory; the I/O port instructions (IN, OUT, INSB/W/D, OUTSB/W/D) access the separate "I/O address space".
The traditional 8088/86 had a memory control signal that is essentially another address bit tied directly to the instruction. That control signal separates accesses into I/O and memory, creating two separate address spaces, not unlike CS, DS, etc. creating separate memory spaces inside the chip (before hitting the external memory space). Other processor families use what is called memory-mapped I/O.
These days the memory controller/system is chopped up inside and outside the chip in all sorts of ways, sometimes with many control signals that indicate instruction vs. data, cache-line fills, write-through vs. write-back, etc. To save on external circuitry, the memory mapping happens inside the chip, and, for example, dedicated ROM interfaces, separate from RAM, are found on the edge, far more complicated and separate than the I/O space vs. memory space of the old 8088/86.
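To answer the memory-mapped I/O part of the question: on such systems a device register simply sits at a physical address and is accessed like memory, with no special instructions needed. A minimal sketch, with a purely hypothetical register address:

#include <stdint.h>

#define DEVICE_REG ((volatile uint32_t *)0xFEC00000) /* hypothetical MMIO address */

/* volatile stops the compiler from caching or eliminating the accesses. */
static inline void mmio_write(uint32_t value)
{
    *DEVICE_REG = value;
}

static inline uint32_t mmio_read(void)
{
    return *DEVICE_REG;
}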
The OUT and IN instructions (and a few family members) select whether you are doing an I/O access or a memory access, and traditionally the interrupt controller was a chip that decoded the bus looking for I/O accesses to the addresses allocated to that device. Decades of backward compatibility later, and you have the code you are looking at.
If you really want to understand it, you need to find the datasheets for the device that contains the interrupt controller, which is likely to be combined with a bunch of other logic in a big support chip. Other datasheets may be required as well.