I have a Cincoze DE-1000 industrial PC that features a Fintek F81866A chipset. I have to manage the DIO pins to read the input from a physical button and to switch an LED on and off. I have experience in C++ programming, but not at the low/hardware level.
In the documentation accompanying the PC, there is the following C code:
#define AddrPort 0x4E
#define DataPort 0x4F
//<Enter the Extended Function Mode>
WriteByte(AddrPort, 0x87)
WriteByte(AddrPort, 0x87) //Must write twice to entering Extended mode
//<Select Logic Device>
WriteByte(AddrPort, 0x07)
WriteByte(DataPort, 0x06)
//Select logic device 06h
//<Input Mode Selection> //Set GP74 to GP77 input mode
WriteByte(AddrPort, 0x80) //Select configuration register 80h
WriteByte(DataPort, 0x0X)
//Set (bit 4~7) = 0 to select GP 74~77 as Input mode.
//<input Value>
WriteByte(AddrPort, 0x82) // Select configuration register 82h
ReadByte(DataPort, Value) // Read bit 4~7(0xFx)= GP74 ~77 as High.
//<Leave the Extended Function Mode>
WriteByte(AddrPort, 0xAA)
As far as I understand, the above code should read the value of the four input pins (so it should read 1 for each pin), but I am really struggling to understand how it actually works. I understand the logic (selecting an address and reading/writing a hex value to it), but I cannot figure out what kind of C functions WriteByte() and ReadByte() are. Also, I do not understand where Value in the line ReadByte(DataPort, Value) comes from. It should read the 4 pins all together, so it should be some kind of "byte" type containing 1 in its bits 4-7, but again I cannot really grasp the meaning of that line.
I have found an answer for a similar chip, but it did not help me understand.
Please advise me or point me to some relevant documentation.
That chip looks like a fairly typical Super I/O controller, which is basically the hub where all of the "slow" peripherals are combined into a single chipset.
Coreboot has a wiki page that talks about how to access the super I/O.
On the PC architecture, Port I/O is accomplished using special CPU instructions, namely in and out. These are privileged instructions, which can only be used from a kernel-mode driver (Ring 0), or a userspace process which has been given I/O privileges.
Luckily, this is easy in Linux. Check out the man page for outb and friends.
You use ioperm(2) or alternatively iopl(2) to tell the kernel to allow the user space application to access the I/O ports in question. Failure to do this will cause the application to receive a segmentation fault.
So we could adapt your function into a Linux environment like this:
/* Untested: Use at your own risk! */
#include <sys/io.h>
#include <stdio.h>
#define ReadByte(port) inb(port)
#define WriteByte(port, val) outb(val, port)
int main(void)
{
    if (iopl(3) < 0) {
        fprintf(stderr, "Failed to get I/O privileges (are you root?)\n");
        return 2;
    }

    /* Your code using ReadByte / WriteByte here */
}
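To make the placeholder concrete, here is an untested sketch that drops the vendor's register sequence from your question into that skeleton (the macros are repeated so it stands on its own). The read-modify-write for register 80h is my interpretation of the "0x0X" line; verify it against the F81866A datasheet before relying on it:

/* Untested sketch: read GP74~GP77 on the F81866A via port I/O. */
#include <sys/io.h>
#include <stdio.h>
#include <stdint.h>

#define AddrPort 0x4E
#define DataPort 0x4F
#define ReadByte(port) inb(port)
#define WriteByte(port, val) outb(val, port)

int main(void)
{
    uint8_t value;

    if (iopl(3) < 0) {
        fprintf(stderr, "Failed to get I/O privileges (are you root?)\n");
        return 2;
    }

    WriteByte(AddrPort, 0x87);                       /* enter Extended Function Mode...  */
    WriteByte(AddrPort, 0x87);                       /* ...the key must be written twice */

    WriteByte(AddrPort, 0x07);                       /* point at the logic-device select */
    WriteByte(DataPort, 0x06);                       /* select logic device 06h (GPIO)   */

    WriteByte(AddrPort, 0x80);                       /* configuration register 80h       */
    WriteByte(DataPort, ReadByte(DataPort) & 0x0F);  /* clear bits 4~7: GP74~77 = input  */

    WriteByte(AddrPort, 0x82);                       /* configuration register 82h       */
    value = ReadByte(DataPort);                      /* bits 4~7 hold GP74~GP77          */

    WriteByte(AddrPort, 0xAA);                       /* leave Extended Function Mode     */

    printf("GP74..GP77 = 0x%X\n", (value >> 4) & 0x0F);
    return 0;
}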
Warning
You should be very careful when using this method to talk directly to the Super IO, because your operating system almost certainly has device drivers that are also talking to the chip.
The right way to accomplish this is to write a device driver that properly coordinates with other kernel code to avoid concurrent access to the device.
The Linux kernel provides GPIO access to at least some Super I/O devices; it should be straightforward to port one of these to your platform. See this pull request for the IT87xx chipset.
WriteByte() and ReadByte() are not part of the C language. By the looks of things, they are placeholders for some form of OS-level port I/O call (not macros doing memory-mapped I/O directly, as a previous version of this answer suggested).
The prototypes for the functions would be something along the lines of:
#include <stdint.h>
void WriteByte(unsigned port, uint8_t value);
void ReadByte(unsigned port, uint8_t *value);
Thus the Value variable would be a pointer to an 8-bit unsigned integer (unsigned char could also be used), something like:
uint8_t realValue;
uint8_t *Value = &realValue;
Of course, it would make much more sense to have Value just be a uint8_t and call ReadByte(DataPort, &Value). But then the example code also doesn't have any semicolons, so it was probably never anything that actually ran. Either way, this is how Value would end up containing the data you are looking for.
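For illustration only (this fragment still relies on whatever ReadByte/WriteByte implementation your platform provides; the names are the ones from the prototypes above):

#include <stdint.h>

uint8_t Value = 0;            /* a plain byte, no separate pointer variable needed */
ReadByte(DataPort, &Value);   /* ReadByte stores the byte it read through the pointer */

if ((Value & 0xF0) == 0xF0) {
    /* bits 4~7 are all 1: GP74~GP77 all read high */
}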
I also found some more documentation of the registers here - https://www.electronicsdatasheets.com/download/534cf560e34e2406135f469d.pdf?format=pdf
Hope this helps.
Consider this code here:
// Stupid I/O delay routine necessitated by historical PC design flaws
static void
delay(void)
{
inb(0x84);
inb(0x84);
inb(0x84);
inb(0x84);
}
What is port 0x84? Why is it a design flaw? delay() is used in the serial_putc() function:
static void
serial_putc(int c)
{
int i;
for (i = 0;
!(inb(COM1 + COM_LSR) & COM_LSR_TXRDY) && i < 12800;
i++)
delay();
outb(COM1 + COM_TX, c);
}
The file is from lab1 of the course Operating System Engineering from OCW.
The serial port is a piece of hardware with some semantics you have to accept. It usually has a shift register that makes the conversion from parallel to serial data. It can have a holding register for the next byte to send, or even a FIFO for more than one byte. That's why you have to poll the line status register (LSR).
There are some hardware revisions out there that don't behave correctly. Your code looks like a workaround for a bug in old hardware. It shouldn't be necessary to read port 0x84 here.
But the delay implementation can't be optimized out when you increase the compiler optimization level, since it accesses the I/O range. Running this code on up-to-date hardware might be problematic if the run-time performance results in too little delay. You will have to verify that the maximum time that can be waited in the loop is sufficient for the UART to shift out one byte. Keep in mind that this is baud-rate dependent, while your code example isn't.
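As a back-of-the-envelope check (assuming 8N1 framing and a 115200 baud line, both of which are assumptions, not values from the lab code):

/* Rough estimate of how long the UART needs to shift out one byte. */
#include <stdio.h>

int main(void)
{
    const double baud = 115200.0;       /* assumed line speed          */
    const double bits_per_byte = 10.0;  /* 8N1: start + 8 data + stop  */
    double usec = bits_per_byte / baud * 1e6;

    printf("one byte takes about %.1f us at %.0f baud\n", usec, baud);
    /* the polling loop (up to 12800 iterations of delay()) must cover at
       least this long, or serial_putc() may give up before the
       transmitter is ready */
    return 0;
}

At 9600 baud the same byte takes just over a millisecond, so the available margin depends heavily on the configured speed.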
Port 0x84 is used to access the "extra page register" (Overview). But reading this register should be a no-op; only the read operation itself matters, because it consumes CPU cycles.
Upon reset of the 8051 microcontroller, all the port-pin latches are set to values of '1'. Now I am reading this book "Embedded C", and it states that the problem with the code below is that it can lull the developer into a false sense of security:
// Assume nothing written to port since reset
// – DANGEROUS!!!
Port_data = P1;
If, at a later date, someone modifies the program to include a routine for writing to all or part of the same port, this code will not generally work as required:
unsigned char Port_data;
P1 = 0x00;
. . .
// Assumes nothing written to port since reset
// – WON’T WORK BECAUSE SOMETHING WAS WRITTEN TO PORT ON RESET
Port_data = P1;
Can anyone with knowledge of embedded C explain to me why this code won't work? All it does is assign 0 to a char variable.
Potential issues.
1) The data direction register (DDR) associated with the port may not be set as expected; thus on power-up, the DDR may be set to "input". So writing 0 to the port may unexpectedly not read back as 0.
2) The data direction register associated with the port may have been set to "output" and "reading" the data may not have a clear meaning. Depending on architecture, phantom bits may be needed to shadow the output bits for read-back.
3) Power-up code may get entered via a reset command that is nothing more than a jump to "reset vector". So any hardware specific action associated with a "cold" power-up did not occur as this is a "warm" power-up.
Solution:
On power-up code, explicitly set the DDR and output values (and shadow bits as needed).
This may not all apply to the 8051 - I am speaking to embedded processors in general. (A short sketch of the shadow-register idea from point 2) follows below.)
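To make the "shadow bits" suggestion concrete, here is a minimal sketch; PORT_OUT and PORT_DDR are placeholders for whatever memory-mapped registers your part actually has:

#include <stdint.h>

/* Placeholders: point these at the real output and direction registers. */
extern volatile uint8_t PORT_OUT;   /* output latch register   */
extern volatile uint8_t PORT_DDR;   /* data direction register */

static uint8_t port_shadow;         /* RAM copy of what we last drove onto the port */

void port_init(void)
{
    port_shadow = 0x00;
    PORT_DDR = 0xFF;            /* explicitly make all pins outputs on power-up */
    PORT_OUT = port_shadow;     /* and drive a known value                      */
}

void port_write_bit(uint8_t bit, uint8_t value)
{
    if (value)
        port_shadow |= (uint8_t)(1u << bit);
    else
        port_shadow &= (uint8_t)~(1u << bit);
    PORT_OUT = port_shadow;     /* the hardware always matches the shadow copy,
                                   so no read-back of the port is ever needed */
}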
I was reading the same book and had the same confusion a few months ago. Later, working on projects with the PIC18 and M0+, I kind of figured out what it is really about.
Actually, this is not a software/programming issue but rather a hardware/electronics one. If your 805X code wants to be able to read both 1 and 0 from an outside input on a pin, the code has to write 1 to the pin in advance. If your code writes 0 to the pin in advance, the outside peripheral won't be able to pull the pin high and allow the code to read 1. Why? Electronics! Imagine that if you want to enjoy the wind outside the window, you have to open the window first.
If you are really interested, google "pin value vs latch value" yourself. I think it's okay for a programmer to leave that to the hardware engineer. I believe 805Xs don't have a DDR like more advanced parts do. Switching a pin between input and output mode may be easy but confusing.
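For completeness, a minimal sketch of the write-1-before-read pattern, assuming a Keil C51-style toolchain where the P1 special-function register comes from reg51.h:

#include <reg51.h>   /* Keil-style SFR declarations; adjust for your toolchain */

unsigned char read_switches(void)
{
    P1 = 0xFF;       /* write 1s first so the quasi-bidirectional pins can be
                        pulled low (or left high) by the outside circuitry */
    return P1;       /* the read now reflects the actual pin levels */
}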
I am implementing an emulated EEPROM in flash memory on an STM32 microcontroller, mostly based on the application note by ST (AN2594 - EEPROM emulation in STM32F10x microcontrollers).
The basics outlined there and in the respective datasheet and programming manual (PM0075) are quite clear. However, I am unsure about the implications of a power outage/system reset on flash programming and page erasure operations. The AppNote considers this case too, but does not clarify what exactly happens when a programming (write) operation is interrupted:
Does the address have an arbitrary (random) value? OR
Are only some of the bits written? OR
Does it have the default erase value 0xFF?
Thanks for hints or pointers to the relevant documentation.
Arne
This is not really a software question (much less C++). It belongs on electronics.se, but there does not seem to be an option to migrate questions there… only to sites such as superuser or webmasters.se.
The short answer is that hardware is inherently unreliable. Something can always in theory go wrong that interrupts the write process or causes the wrong bit to be written.
The long answer is that Flash circuits are usually designed for maximum reliability. A sudden power loss on write will probably not cause corruption because the driver circuit may have enough capacitance or the capability to operate under a low-voltage condition long enough to finish draining the charge as necessary. A power loss on erasure might be trickier. You really need to consult the manufacturer.
For a "soft" system reset with no power interruption, it would be pretty surprising if the hardware didn't always completely erase whatever bytes it was immediately working on. Usually the bytes are erased in a predefined order, so you can use the first or last ones to indicate whether a page is full or empty.
#include "stm32f10x.h"
#define FLASH_KEY1 ((uint32_t)0x45670123)
#define FLASH_KEY2 ((uint32_t)0xCDEF89AB)
#define Page_127 0x0801FC00
uint16_t i;
int main()
{
//FLASH_Unlock
FLASH->KEYR = FLASH_KEY1;
FLASH->KEYR = FLASH_KEY2;
//FLASH_Erase Page
while((FLASH->SR&FLASH_SR_BSY));
FLASH->CR |= FLASH_CR_PER; //Page Erase Set
FLASH->AR = Page_127; //Page Address
FLASH->CR |= FLASH_CR_STRT; //Start Page Erase
while((FLASH->SR&FLASH_SR_BSY));
FLASH->CR &= ~FLASH_CR_PER; //Page Erase Clear
//FLASH_Program HalfWord
FLASH->CR |= FLASH_CR_PG;
for(i=0; i<1024; i+=2)
{
while((FLASH->SR&FLASH_SR_BSY));
*(__IO uint16_t*)(Page_127 + i) = i;
}
FLASH->CR &= ~FLASH_CR_PG;
FLASH->CR |= FLASH_CR_LOCK;
while(1);
}
If you are using the EEPROM emulation driver, you shouldn't worry too much about flash corruption issues, as the driver always keeps a shadow copy in another page. Worst case, you will lose the most recent values that were being written into the flash. If you look closely at the emulation driver, you will notice that it is essentially nothing but a wrapper around stm32fxx_flash.c in the standard peripheral library.
If you look at the application note, you will see the times that the emulation library takes for the flash operations. Erasing a page typically takes the longest (tens of milliseconds on an M0 core - this depends on the clock frequency).
If you are using the EEPROM emulation driver, you had better add a function that checks the data after the write has finished.
For example, if you have 10 bytes of data to save, you need to write 11 bytes to flash; the last byte is a checksum. Then check the data against the checksum after reading it back from flash.
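A minimal sketch of that idea (the XOR checksum and the helper names are just illustrative; use whatever integrity check fits your data):

#include <stdint.h>
#include <string.h>

#define RECORD_LEN 10

static uint8_t xor_checksum(const uint8_t *data, int len)
{
    uint8_t sum = 0;
    for (int i = 0; i < len; i++)
        sum ^= data[i];
    return sum;
}

/* before writing: buf[0..9] = data, buf[10] = checksum (11 bytes total) */
void prepare_record(const uint8_t *data, uint8_t *buf)
{
    memcpy(buf, data, RECORD_LEN);
    buf[RECORD_LEN] = xor_checksum(data, RECORD_LEN);
}

/* after reading back from flash: returns 1 if the record is intact */
int record_ok(const uint8_t *buf)
{
    return xor_checksum(buf, RECORD_LEN) == buf[RECORD_LEN];
}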
I'm trying to write to my LPT register with the function outb(0x378,val);
Well... I tried to debug with the call int ret=inb(0x378); and I always get ret=255, no matter what value I write with outb beforehand.
*I'm writing in kernel mode since my program is a driver, therefore I didn't use ioperm() etc.
Thank you in advance.
You have the parameters of the outb function in the wrong order; the correct order is:
outb(value, port)
so you have to change your code to do:
outb(val, 0x378)
For more details, please read the Linux I/O Programming HOWTO.
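Since you are in kernel mode, a minimal, untested module sketch might look like this (the lpt_demo name and the 3-byte region are illustrative; claim the region before touching the port):

#include <linux/module.h>
#include <linux/ioport.h>
#include <asm/io.h>

#define LPT_BASE 0x378

static int __init lpt_demo_init(void)
{
    if (!request_region(LPT_BASE, 3, "lpt_demo"))  /* data, status, control ports */
        return -EBUSY;

    outb(0x55, LPT_BASE);                          /* value first, port second    */
    pr_info("lpt_demo: data port reads back 0x%02x\n", inb(LPT_BASE));
    return 0;
}

static void __exit lpt_demo_exit(void)
{
    release_region(LPT_BASE, 3);
}

module_init(lpt_demo_init);
module_exit(lpt_demo_exit);
MODULE_LICENSE("GPL");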
Do you know for a fact that you have a parallel port installed at that address?
Get yourself a small low-current LED. Stick the long end in one of pin 2 (LSB) to pin 9 (MSB) and the short end in pin 25 (ground).
Try writing various values and see if you can get the LED to change by the bit value of what you write.
This should work (unless, as previously mentioned, you've gotten it programmed in an input mode). Being able to read back the port value is less certain, depending on the type of parallel port and implementation details (for example, you probably couldn't with the buffer chip that implemented it in the original PC).
Also note that most USB "printer" adapters don't give you bitwise register access. Something hanging off of PCI or PCMCIA, etc. may also have problems with direct register access at the legacy port address. There are nice USB parallel-interface chips such as the FT245 and its successors which you can use if you don't have a "true" parallel port hanging off the chipset available.
Have you set the direction register? If it is set as input, then you will read what is on the port.
inb(0x378) is not required to return what was written; at least, I've seen chips behave that way. Well, since you called outb at some point, you know what is on the port anyway.
I knew there is a similar post:
Steps to make a LED blink from a C/C++ program?
But now I am working on an ARM-based development board, and it seems to have two serial ports that I could use to turn an LED on or off.
Basically I think the flow is: set one serial pin to "1" (on) and the LED will be turned on, and set it to "0" to turn it off.
Is there some reference code in C that I could refer to?
Generally speaking, the board should come with some Board Support Package (BSP) which lets you control the built-in I/O. Look for a serial library if you really want to use the hardware flow-control signals.
I'd recommend looking for some GPIO (General-Purpose I/O, or digital I/O) on the board, which typically lets you configure each pin as an input or an output. You should be able to connect the LED via a current-limiting resistor between a digital I/O line and a ground pin. Make sure you have the LED oriented correctly; if you connect it backwards it will block the current instead of lighting. And, as always, check it out with a digital voltage meter before connecting it.
Even if you don't have a BSP for digital I/O, the configuration is usually pretty simple.
Set a bit in a register to enable it, and set a bit in another register to select input or output; the pins will normally be arranged in 8-bit "ports." Some systems allow you to configure individual I/O pins, others only allow you to configure the whole port for input or output. Then you just write a 1 or 0 to the bit you want to control in a write/output register.
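As a sketch of what that register-level access typically looks like (all register addresses, names, and bit positions below are made up; take the real ones from your chip's reference manual):

#include <stdint.h>

/* Hypothetical memory-mapped GPIO registers for illustration only. */
#define GPIO_ENABLE_REG  (*(volatile uint32_t *)0x40020000u)
#define GPIO_DIR_REG     (*(volatile uint32_t *)0x40020004u)  /* 1 = output */
#define GPIO_OUT_REG     (*(volatile uint32_t *)0x40020008u)

#define LED_PIN 3u

void led_gpio_init(void)
{
    GPIO_ENABLE_REG |= (1u << LED_PIN);    /* enable the pin/port       */
    GPIO_DIR_REG    |= (1u << LED_PIN);    /* configure it as an output */
}

void led_gpio_set(int on)
{
    if (on)
        GPIO_OUT_REG |=  (1u << LED_PIN);  /* drive the pin high */
    else
        GPIO_OUT_REG &= ~(1u << LED_PIN);  /* drive the pin low  */
}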
ARM chips typically have a considerable number of built-in peripherals today, so most boards just bring the I/O out to physical connectors, and you may need to read the chip vendor's documentation to find the register memory map. Better board vendors will supply documentation, a library (BSP) and examples. Luminary Micro even supplies chips with built-in Ethernet MACs and PHYs; just add a connector and magnetics and you have a one-chip web server.
This will, I'm afraid, be heavily dependent on the specifications of the particular arm-based development board you are using.
You need to find documentation specific to that board.
I used to do this kind of programming before.
You need to study the serial port connections:
http://www.lammertbies.nl/comm/cable/RS-232.html
http://www.beyondlogic.org/serial/serial.htm
It has +5 V and -5 V on the output, though I can't remember clearly now. Not every pin is needed.
I have never used ARM before, but I used an 8-bit PIC controller to do it. I guess you can find a lot of examples online.
The preferred alternative for controlling a GPIO is via a BSP, because the BSP (board support package) does all the work for you in setting all peripherals to good defaults and letting you call a function. Possibly your BSP of choice will have a function to write a byte to an 8-bit GPIO port; your LED will only use one bit. In this case your C code could look like the following (at least, it will work like this on Luminary Micro kits). (Example code; it requires a bit of extra work to make it compile, especially on your kit.)
/* each LED is addressed by an address (byte) and a bit-within-this-byte */
typedef struct {
    uint32_t address; // address of IO register for LED port
    uint8_t  bit;     // bit of LED
} LEDConfigPair;

LEDConfigPair LEDConfig[NUMBER_OF_LEDS] = {
    {GPIO_PORTB_BASE, 0}, // LED_0 is at port B0
    {GPIO_PORTB_BASE, 1}  // LED_1 is at port B1
};

/* function led_init configures the GPIOs where LEDs are connected as outputs */
void led_init(void)
{
    uint32_t i;
    for(i = 0; i < NUMBER_OF_LEDS; i++)
    {
        GPIODirModeSet(LEDConfig[i].address, (1 << LEDConfig[i].bit), GPIO_DIR_MODE_OUT);
    }
}

/* my LED function
   set_led_state makes use of the BSP of Luminary Micro to access a GPIO function
   Implementation: the BSP addresses 8-bit-wide ports, so the function calculates a
   mask that selects only the LED's pin */
void set_led_state(uint8_t led, bool state)
{
    uint8_t pinmask = (1 << LEDConfig[led].bit); // bitmask with only the LED's bit set
    uint8_t setmask;

    if (true == state)
    {
        setmask = pinmask; // drive the LED pin high
    }
    else
    {
        setmask = 0;       // drive the LED pin low
    }
    GPIOPinWrite(LEDConfig[led].address, pinmask, setmask);
}
Of course this is all spelled out; it can be done in a single line like this:
#define SETLEDSTATE(led, state) GPIOPinWrite(LEDConfig[led].address, (1 << LEDConfig[led].bit), ((state) << LEDConfig[led].bit))
This will do the same, but it only makes sense when you can dream in bit masks, and you only want to toggle some LEDs to debug the real program...
The alternative: bare metal.
In this case you need to set up everything for yourself. For an embedded system, you need to be aware of pin multiplexing and power management (assuming the memory controller and CPU clocks are already set up!)
initialization: set the pin multiplexing in such a way that the function you want to control is actually mapped onto the package.
initialization of the peripheral (in this case either a UART, or a GPIO function on the same pin)
You can't do it using the Rx or Tx pins of the serial port. For that, you just need to control the RTS or CTS pins of the serial port.
Just google for "access COM port in VC++ code" and then control the RTS and CTS status pins to turn ON and OFF any external device.