Upon reset of the 8051 microcontroller, all the port-pin latches are set to '1'. Now I am reading the book "Embedded C" and it states that the problem with the code below is that it can lull the developer into a false sense of security:
// Assume nothing written to port since reset
// – DANGEROUS!!!
Port_data = P1;
If, at a later date, someone modifies the program to include a routine for writing to all or part of the same port, this code will not generally work as required:
unsigned char Port_data;
P1 = 0x00;
. . .
// Assumes nothing written to port since reset
// – WON’T WORK BECAUSE SOMETHING WAS WRITTEN TO PORT ON RESET
Port_data = P1;
Can anyone with knowledge of embedded C explain to me why this code won't work? All it does is assign 0 to a char variable.
Potential issues:
1) The data direction register (DDR) associated with the port may not be set as expected; on power-up the DDR may default to "input", so writing 0 to the port may unexpectedly not read back as 0.
2) The data direction register associated with the port may have been set to "output" and "reading" the data may not have a clear meaning. Depending on architecture, phantom bits may be needed to shadow the output bits for read-back.
3) Power-up code may get entered via a reset command that is nothing more than a jump to "reset vector". So any hardware specific action associated with a "cold" power-up did not occur as this is a "warm" power-up.
Solution:
In the power-up code, explicitly set the DDR and the output values (and shadow bits as needed).
This may not apply to the 8051 - I am speaking about embedded processors in general.
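As a rough illustration of that solution on a hypothetical part (the DDRA/PORTA addresses and the shadow byte are made up for the example, not taken from any specific device):
#include <stdint.h>

/* Hypothetical memory-mapped registers for a port A */
#define DDRA   (*(volatile uint8_t *)0x1000)   /* data direction: 1 = output */
#define PORTA  (*(volatile uint8_t *)0x1001)   /* output latch */

static uint8_t porta_shadow;                   /* shadow of the last value written */

void port_init(void)
{
    porta_shadow = 0x00;
    PORTA = porta_shadow;   /* drive known output values first */
    DDRA  = 0xFF;           /* then explicitly make the pins outputs */
}

void port_write(uint8_t value)
{
    porta_shadow = value;   /* keep a copy, in case reading PORTA back
                               does not return what was written */
    PORTA = value;
}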
I was reading the same book and had the same confusion a few months ago. Later, working on projects with a PIC18 and an M0+, I more or less figured out what it is really about.
Actually, this is not a software/programming issue but rather a hardware/electronics one. If your 805X code wants to be able to read both 1 and 0 from an outside input on a pin, the code has to write 1 to the pin in advance. If your code writes 0 to the pin in advance, the outside peripheral won't be able to pull the pin high, so the code can never read 1. Why? Electronics! Imagine that if you want to enjoy the wind outside the window, you have to open the window first.
If you are really interested, google "pin value vs latch value" yourself. I think it's okay for a programmer to leave that to the hardware engineer. I believe the 805X doesn't have a DDR like more advanced parts do. Switching a pin between input and output mode may be easy, but it is confusing.
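To make the "write 1 before reading" rule concrete, here is a minimal Keil C51-style sketch (reg51.h is the standard header; the choice of P1.3 as the input pin is purely hypothetical):
#include <reg51.h>              /* Keil C51 SFR declarations for P1 etc. */

sbit SWITCH_PIN = P1^3;         /* hypothetical input: a switch on P1.3 */

unsigned char read_switch(void)
{
    SWITCH_PIN = 1;             /* write 1 to the latch so the quasi-bidirectional
                                   pin can be pulled low or left high externally */
    return SWITCH_PIN;          /* now the pin level, not just the latch, is read */
}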
Related
I have a Cincoze DE-1000 industrial PC that features a Fintek F81866A chipset. I have to manage the DIO pins to read the input from a physical button and to switch an LED on/off. I have experience in C++ programming, but not at low/hardware level.
On the documentation accompanying the PC, there is the following C code:
#define AddrPort 0x4E
#define DataPort 0x4F
//<Enter the Extended Function Mode>
WriteByte(AddrPort, 0x87)
WriteByte(AddrPort, 0x87) //Must write twice to entering Extended mode
//<Select Logic Device>
WriteByte(AddrPort, 0x07)
WriteByte(DataPort, 0x06)
//Select logic device 06h
//<Input Mode Selection> //Set GP74 to GP77 input mode
WriteByte(AddrPort, 0x80) //Select configuration register 80h
WriteByte(DataPort, 0x0X)
//Set (bit 4~7) = 0 to select GP 74~77 as Input mode.
//<input Value>
WriteByte(AddrPort, 0x82) // Select configuration register 82h
ReadByte(DataPort, Value) // Read bit 4~7(0xFx)= GP74 ~77 as High.
//<Leave the Extended Function Mode>
WriteByte(AddrPort, 0xAA)
As far as I understood, the above code should read the value of the four input pins (so it should read 1 for each pin), but I am really struggling to understand how it actually works. I have understood the logic (selecting an address and reading/writing a hex value to it), but I cannot figure out what kind of C instructions WriteByte() and ReadByte() are. Also, I do not understand where Value in the line ReadByte(DataPort, Value) comes from. It should read the 4 pins all together, so it should be some kind of "byte" type and it should contain 1 in its bits 4-7, but again I cannot really grasp the meaning of that line.
I have found an answer for a similar chip, but it did not help me in understanding.
Please advise me or point me to some relevant documentation.
That chip looks like a fairly typical Super I/O controller, which is basically the hub where all of the "slow" peripherals are combined into a single chipset.
Coreboot has a wiki page that talks about how to access the super I/O.
On the PC architecture, Port I/O is accomplished using special CPU instructions, namely in and out. These are privileged instructions, which can only be used from a kernel-mode driver (Ring 0), or a userspace process which has been given I/O privileges.
Luckily, this is easy in Linux. Check out the man page for outb and friends.
You use ioperm(2) or alternatively iopl(2) to tell the kernel to allow the user space application to access the I/O ports in question. Failure to do this will cause the application to receive a segmentation fault.
So we could adapt your function into a Linux environment like this:
/* Untested: Use at your own risk! */
#include <sys/io.h>
#include <stdio.h>
#define ReadByte(port, value)  ((value) = inb(port))
#define WriteByte(port, val)   outb(val, port)
int main(void)
{
    if (iopl(3) < 0) {
        fprintf(stderr, "Failed to get I/O privileges (are you root?)\n");
        return 2;
    }
    /* Your code using ReadByte / WriteByte here */
    return 0;
}
Warning
You should be very careful when using this method to talk directly to the Super IO, because your operating system almost certainly has device drivers that are also talking to the chip.
The right way to accomplish this is to write a device driver that properly coordinates with other kernel code to avoid concurrent access to the device.
The Linux kernel provides GPIO access to at least some Super I/O devices; it should be straightforward to port one of these to your platform. See this pull request for the IT87xx chipset.
WriteByte() and ReadByte() are not part of the C language. By the looks of things, they are functions intended to be placeholders for some form of system call for the OS kernel port IO (not macros doing memory mapped IO directly as per a previous version of this answer).
The prototypes for the functions would be something along the lines of:
#include <stdint.h>
void WriteByte(unsigned port, uint8_t value);
void ReadByte(unsigned port, uint8_t *value);
Thus the Value variable would be a pointer to an 8 bit unsigned integer (unsigned char could also be used), something like:
uint8_t realValue;
uint8_t *Value = &realValue;
Of course it would make much more sense to have Value just be a uint8_t and have ReadByte(DataPort, &Value). But then the example code also doesn't have any semicolons, so was probably never anything that actually ran. Either way, this is how Value would contain the data you are looking for.
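For instance, assuming hypothetical WriteByte()/ReadByte() implementations with the prototypes above (on Linux they could simply wrap the outb()/inb() calls from the other answer), the read step of the vendor snippet might look like:
#include <stdint.h>

void WriteByte(unsigned port, uint8_t value);   /* provided elsewhere */
void ReadByte(unsigned port, uint8_t *value);   /* provided elsewhere */

#define AddrPort 0x4E
#define DataPort 0x4F

uint8_t read_gp74_77(void)
{
    uint8_t Value = 0;

    WriteByte(AddrPort, 0x82);    /* select configuration register 82h */
    ReadByte(DataPort, &Value);   /* Value now holds the GP7x pin levels */

    return (Value >> 4) & 0x0F;   /* GP74..GP77 sit in bits 4..7 */
}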
I also found some more documentation of the registers here - https://www.electronicsdatasheets.com/download/534cf560e34e2406135f469d.pdf?format=pdf
Hope this helps.
I am programming a microcontroller of the PIC24H family and using xc16 compiler.
I am relaying U1RX-data to U2TX within main(), but when I try that in an ISR it does not work.
I am sending commands to U1RX, and the ISR is down below. At U2RX, there are data bytes coming in constantly and I want to relay 500 of them out through U1TX. The result is that U1TX relays the first 4 data bytes from U2RX but then re-sends the 4th byte over and over again.
When I copy the for loop below into my main() it all works properly. In the ISR, it is as if U2RX's FIFO buffer is not being cleared when read, so the buffer overflows and stops reading further incoming data on U2RX. I would really appreciate it if someone could show me how to approach the problem here. The variables tmp and command are globally declared.
void __attribute__((__interrupt__, auto_psv, shadow)) _U1RXInterrupt(void)
{
    command = U1RXREG;
    if (command == 'd') {
        for (i = 0; i < 500; i++) {
            while (U2STAbits.URXDA == 0);
            tmp = U2RXREG;
            while (U1STAbits.UTXBF == 1);
            U1TXREG = tmp;
        }
    }
}
Edit: I added the first line in the ISR().
Trying to draw an answer from the various comments.
If the main() has nothing else to do, and there are no other interrupts, you might be able to "get away with" patching all 500 chars from one UART to another under interrupt, once the first interrupt has occurred, and perhaps it would be a useful exercise to get that working.
But that's not how you should use an interrupt. If you have other tasks in main(), and equal or lower priority interrupts, the relatively huge time that this interrupt will take (500 chars at 9600 baud = half a second) will make the processor what is known as "interrupt-bound", that is, the other processes are frozen out.
As your project gains complexity, you won't want to restrict main() to this task, and there is no need for it to be involved at all after setting up the UARTs and IRQs. After that it can calculate π ad infinitum if you want.
I am a bit perplexed as to your sequence of operations. A command 'd' is received from U1 which tells you to patch 500 chars from U2 to U1.
One way I suggest tackling this (and there are many), seeing as you really want to use interrupts, is to wait until the command is received from U1 - in main(). You then configure, and enable, the RXD interrupt on U2.
Then the job of the ISR will be to receive data from U2 and transmit it thru U1. If both UARTS have the same clock and the same baud rate, there should not be a synchronisation problem, since a UART is typically buffered internally: once it begins to transmit, the TXD register is available to hold another character, so any stagnation in the ISR should be minimal.
I can't write the actual code for you, since it would be expected to work, and I don't have a PIC handy (or the wish to research its operational details), but here is some very rough pseudocode.
ISR
    has been invoked because U2 has a char RXD
    you *might* need to check RXD status as a required sequence to clear the interrupt
    read the RXD register, which also might clear the interrupt status
    if not, specifically clear the interrupt status
    while (U1 TXD busy);
    write char to U1
    if (chars received == 500)
        disable U2 RXD interrupt
    return from interrupt
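Fleshing that out into XC16-flavoured C, purely as a sketch (the interrupt flag/enable names IFS1bits.U2RXIF and IEC1bits.U2RXIE are from memory and should be checked against your device header; the UART status bits are the ones from your own code):
volatile unsigned int relay_count = 0;     /* bytes still to relay */

void __attribute__((__interrupt__, auto_psv)) _U2RXInterrupt(void)
{
    IFS1bits.U2RXIF = 0;                   /* clear the U2 RX interrupt flag */

    while (U2STAbits.URXDA) {              /* drain whatever is in the RX FIFO */
        char c = U2RXREG;
        while (U1STAbits.UTXBF);           /* wait for room in the U1 TX FIFO */
        U1TXREG = c;

        if (relay_count && --relay_count == 0) {
            IEC1bits.U2RXIE = 0;           /* done: disable the U2 RX interrupt */
            break;
        }
    }
}
main() (or the U1 ISR) then only has to detect the 'd' command, set relay_count = 500, and enable the U2 RX interrupt.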
ISRs must be kept lean and mean, and the code made hyper-efficient, if there is any hope of keeping up with the buffer on a UART. Experiment with the baud rate just to find the point at which your code can keep up, to help discover the right heuristic and see how far away you are from achieving your goal.
Success could depend on how fast your micro controller is, as well, and how many tasks it is running. If the microcontroller has a built in UART theoretically you should be able to manage keeping the FIFO from overflowing. On the other hand, if you paired up a UART with an insufficiently-powered micro controller, you might not be able to optimize your way out of the problem.
Besides the suggestion to offload the lower-priority work to the main thread and keep the ISR fast (that someone made in the comments), you will want to carefully look at the timing of all of the lines of code and try every trick in the book to get them to run faster. One expensive instruction can ruin your whole day, so get real creative in finding ways to save time.
EDIT: Another thing to consider - look at the assembly language your C compiler creates. A good compiler should let you inline assembly language instructions to allow you to hyper-optimize for your particular case. Generally in an ISR it would just be a small number of instructions that you have to find and implement.
EDIT 2: A PIC 24 series should be fast enough if you code it right and select a fast oscillator or crystal and run the chip at a good clock rate. Also consider the divisor the UART might be using to achieve its rate vs. the PIC clock rate. It is conceivable (to me) that an even division that could be accomplished internally via shifting would be better than one where math was required.
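For a concrete feel for that divisor arithmetic (the 40 MHz instruction clock here is just an assumption, and the formula is the standard-speed one with BRGH = 0): UxBRG = FCY/(16*baud) - 1 = 40000000/(16*9600) - 1 = 259.4, so UxBRG = 259 gives an actual rate of 40000000/(16*260) = 9615 baud, roughly 0.16% fast, which is well within normal UART tolerance.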
I am trying to fix a bug found in a mature program for the Fujitsu MB90F543. The program has worked for nearly 10 years, but it was discovered that under some special circumstances it fails to do two things at its very beginning. One of them is crucial.
After low- and high-level initialization (ports, pins, peripherals, IRQ handlers), configuration data is read over SPI from an EEPROM and status LEDs are turned on for a moment (to turn them on, data is sent over SPI to an LED driver).
When those special circumstances occur, the first (and only the first) function invoking just a few EEPROM reads fails, and additionally a few of the LEDs that should turn on don't.
The program is written in C and compiled using Softune v30L32.
Surprisingly, it is sufficient to add a single __asm(" NOP ") in the low-level hardware init to make the program work as expected under the mentioned circumstances. It is also sufficient to turn off 'Control optimization of pointer aliasing' in the Optimization settings. Adding just a few lines of code in various places helps too.
I have compared (diffed) the ASM listings of the compiled program for versions with and without __asm(" NOP ") and with both of the aforementioned optimizer settings, and they all look just fine.
The only warning the Softune compiler has been printing for years during compilation is as follows:
*** W1372L: The section is placed outside the RAM area or the I/O area (IOXTND)
I do realize it's a rather general question, but maybe someone who has a bigger picture will be able to point out a possible cause.
Have you got any idea what may cause such weird behaviour? How can I locate the bug and fix it?
During the initialization a few long (about 20 ms) delay loops are used. They don't help, even though they were increased from about 2 ms; yet a single NOP in any line of the hardware initialization function, or even before or after the function, helps.
Both wait loops work; I have checked this with an oscilloscope (I added an LED turn-on before and turn-off after).
I have checked the timing hypothesis by slowing the SPI clock down from 1 MHz to 500 kHz. It does not change anything. Slowing down to 250 kHz causes watchdog resets, as some parts of the code execute too long (>25 ms).
One more thing: I have observed that adding local variables in any source file sometimes makes the problem disappear or reappear. The same goes for initializing uninitialized local variables. Adding a few extra lines of code in any of the files helps or reveals the problem.
void main(void)
{
    watchdog_init();
    // waiting for power supply to stabilize
    wait; // about 45ms
    hardware_init();
    clear_watchdog();
    application_init();
    clear_watchdog();
    wait; // about 20ms
    test_LED();
    {...}
}
void hardware_init (void)
{
    __asm("NOP");        // how come it helps? - it may be in any line of the function
    io_init();           // ports initialization
    clk_init();
    timer_init();
    adc_init();
    spi_init();
    LED_init();
    spi_start();
    key_driver_init();
    can_init();
    irq_init();          // set IRQ priorities and global IRQ enable
}
Could be one of many things but two spring to mind.
Timing.
Maybe the wait is not long enough for power to stabilize and not everything is synced to the clock. The NOP gets everything back in sync.
Alignment.
Perhaps the NOP gets your instructions aligned on a 32- or 64-bit boundary expected by the hardware. (We used to do this a lot in mainframe assemblers, as I/O operations often expected things to be on double-word boundaries.)
The problem was solved. It was caused by a trivial bug.
The EEPROM's nHOLD and nCS signals were not initialized immediately after the MCU's reset, but only just before the first use of the EEPROM. As a result they were 0, i.e. active.
This means the EEPROM was selected, but waiting on hold. Meanwhile another transfer using SPI started. After 6 out of 8 CLK pulses, the EEPROM's nHOLD pin was initialized and brought high, so the EEPROM was no longer on hold and clocked in the last two bits of data intended for another peripheral. Every subsequent operation on the EEPROM found its CLK and MOSI out of sync.
When I added the NOP (or anything else), the moment of the nHOLD 0->1 edge shifted to after the last CLK pulse, so CLK and MOSI stayed in sync.
All I had to do was initialize all the EEPROM's SPI lines, in particular nHOLD and nCS, right after the MCU reset.
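In code, the fix boiled down to something like this at the very top of io_init() (the port/bit assignments are made up for the example; PDRx/DDRx are the usual Softune names for the F2MC-16LX port data and direction registers):
void io_init(void)
{
    /* Drive the EEPROM's nCS and nHOLD high (inactive) first,
       before any other SPI traffic can start.
       Assume, purely for illustration, nCS = P1.0 and nHOLD = P1.1. */
    PDR1 |= 0x03;   /* latch both lines high ...            */
    DDR1 |= 0x03;   /* ... then switch the pins to outputs  */

    /* ... rest of the port initialization ... */
}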
I am implementing an emulated EEPROM in flash memory on an STM32 microcontroller, mostly based on the application note by ST (AN2594 - EEPROM emulation in STM32F10x microcontrollers).
The basics outlined there and in the respective datasheet and programming manual (PM0075) are quite clear. However, I am unsure about the implications of a power-out/system reset for flash programming and page-erase operations. The app note considers this case too, but does not clarify what exactly happens when a programming (write) operation is interrupted:
Does the address have an arbitrary (random) value? OR
Are only part of the bits written? OR
Does it have the default erase value 0xFF?
Thanks for hints or pointers to the relevant documentation.
Arne
This is not really a software question (much less C++). It belongs on electronics.se, but there does not seem to be an option to migrate questions there… only to sites such as superuser or webmasters.se.
The short answer is that hardware is inherently unreliable. Something can always in theory go wrong that interrupts the write process or causes the wrong bit to be written.
The long answer is that Flash circuits are usually designed for maximum reliability. A sudden power loss on write will probably not cause corruption because the driver circuit may have enough capacitance or the capability to operate under a low-voltage condition long enough to finish draining the charge as necessary. A power loss on erasure might be trickier. You really need to consult the manufacturer.
For a "soft" system reset with no power interruption, it would be pretty surprising if the hardware didn't always completely erase whatever bytes it was immediately working on. Usually the bytes are erased in a predefined order, so you can use the first or last ones to indicate whether a page is full or empty.
#include "stm32f10x.h"
#define FLASH_KEY1 ((uint32_t)0x45670123)
#define FLASH_KEY2 ((uint32_t)0xCDEF89AB)
#define Page_127 0x0801FC00
uint16_t i;
int main()
{
    //FLASH_Unlock
    FLASH->KEYR = FLASH_KEY1;
    FLASH->KEYR = FLASH_KEY2;

    //FLASH_Erase Page
    while (FLASH->SR & FLASH_SR_BSY);
    FLASH->CR |= FLASH_CR_PER;          //Page Erase Set
    FLASH->AR = Page_127;               //Page Address
    FLASH->CR |= FLASH_CR_STRT;         //Start Page Erase
    while (FLASH->SR & FLASH_SR_BSY);
    FLASH->CR &= ~FLASH_CR_PER;         //Page Erase Clear

    //FLASH_Program HalfWord
    FLASH->CR |= FLASH_CR_PG;
    for (i = 0; i < 1024; i += 2)
    {
        while (FLASH->SR & FLASH_SR_BSY);
        *(__IO uint16_t*)(Page_127 + i) = i;
    }
    while (FLASH->SR & FLASH_SR_BSY);   //wait for the last write to finish
    FLASH->CR &= ~FLASH_CR_PG;
    FLASH->CR |= FLASH_CR_LOCK;

    while (1);
}
If you are using the EEProm emulation driver, you shouldn't worry too much about flash corruption issues, as the driver always keeps a shadow copy in another page. Worst case, you will lose the most recent values that were being written into the flash. If you look closely at the emulation driver, you will notice that it is essentially nothing but a wrapper around stm32fxx_flash.c in the standard peripheral library.
If you look at the application note, you will see the times the emulation library takes for the flash operations. Erasing a page typically takes the longest (tens of milliseconds on an M0 core - this depends on the clock frequency).
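For reference, the AN2594-style driver decides which page is live from a status half-word at the start of each page, roughly like this (the addresses and the exact status values are as I recall them from the ST example source, so treat this as a sketch):
#include <stdint.h>

#define PAGE0_BASE    ((uint32_t)0x0801F000)   /* example addresses only */
#define PAGE1_BASE    ((uint32_t)0x0801F800)

#define ERASED        ((uint16_t)0xFFFF)   /* page is blank                           */
#define RECEIVE_DATA  ((uint16_t)0xEEEE)   /* page is being filled (transfer running) */
#define VALID_PAGE    ((uint16_t)0x0000)   /* page holds the current data             */

/* Pick the page to read from; an interrupted transfer leaves one page
   still VALID and the other marked RECEIVE_DATA, so the old data survives. */
static uint32_t find_valid_page(void)
{
    uint16_t s0 = *(volatile uint16_t *)PAGE0_BASE;
    uint16_t s1 = *(volatile uint16_t *)PAGE1_BASE;

    if (s0 == VALID_PAGE) return PAGE0_BASE;
    if (s1 == VALID_PAGE) return PAGE1_BASE;
    return 0;   /* no valid page: first boot or a corrupted state */
}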
If you are using the EEProm emulation driver, you had better add a function that checks the data after the write has finished.
For example, if you have 10 data bytes to save, write 11 bytes to flash, with the last byte being a checksum, and verify the data after reading it back from flash.
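A minimal sketch of that idea (the record layout and the XOR checksum are my own choices, not something the emulation driver provides):
#include <stdint.h>
#include <string.h>

#define PAYLOAD_LEN 10            /* 10 data bytes + 1 checksum byte = 11 */

static uint8_t checksum(const uint8_t *data, unsigned len)
{
    uint8_t sum = 0;
    while (len--)
        sum ^= *data++;           /* simple XOR checksum */
    return sum;
}

/* Build the 11-byte record to hand to the flash/EEPROM write routine. */
void build_record(const uint8_t *data, uint8_t record[PAYLOAD_LEN + 1])
{
    memcpy(record, data, PAYLOAD_LEN);
    record[PAYLOAD_LEN] = checksum(data, PAYLOAD_LEN);
}

/* After reading back, accept the data only if the checksum still matches. */
int record_is_valid(const uint8_t record[PAYLOAD_LEN + 1])
{
    return checksum(record, PAYLOAD_LEN) == record[PAYLOAD_LEN];
}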
I'm trying to write to my lpt register with the function outb(0x378,val);
Well, I tried to debug with the call int ret = inb(0x378); I always get ret = 255, no matter what value I write with outb beforehand.
I'm writing in kernel mode since my program is a driver; therefore I didn't use ioperm() etc.
thank you in advance.
You have the parameters of the outb function in the wrong order; the correct order is:
outb(value, port)
so you have to change your code to do:
outb(val, 0x378)
For more details, please read the Linux I/O Programming HOWTO.
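Since you are in kernel mode, a minimal module-style sketch with the correct argument order might look like this (request_region/outb/inb are the standard kernel helpers; 0x378 is assumed to really be a legacy parallel port on your machine):
#include <linux/module.h>
#include <linux/ioport.h>
#include <asm/io.h>

#define LPT_BASE 0x378

static int __init lpt_demo_init(void)
{
    if (!request_region(LPT_BASE, 3, "lpt_demo"))   /* claim data/status/control */
        return -EBUSY;

    outb(0x55, LPT_BASE);   /* value first, port second */
    pr_info("lpt_demo: wrote 0x55, read back 0x%02x\n", inb(LPT_BASE));
    return 0;
}

static void __exit lpt_demo_exit(void)
{
    release_region(LPT_BASE, 3);
}

module_init(lpt_demo_init);
module_exit(lpt_demo_exit);
MODULE_LICENSE("GPL");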
Do you know for a fact that you have a parallel port installed at that address?
Get yourself a small low-current LED. Stick the long end in one of pin 2 (LSB) to pin 9 (MSB) and the short end in pin 25 (ground).
Try writing various values and see if you can get the LED to change by the bit value of what you write.
This should work (unless, as previously mentioned, you've got it programmed in an input mode). Being able to read back the port value is less certain, depending on the type of parallel port and implementation details (for example, you probably couldn't with the buffer chip that implemented it in the original PC).
Also note that most USB "printer" adapters don't give you bitwise register access. Something hanging off of the PCI or PCMCIA, etc may also have problems with direct register access at the legacy port address. There are nice USB parallel interface chips such as the FT245 and successors which you can use if you don't have a "true" parallel port hanging off the chipset available.
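If you want a quick user-space test of that LED trick on Linux (this assumes a legacy port at 0x378 and root privileges for ioperm; in your kernel-mode driver you would just call outb directly):
#include <sys/io.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    if (ioperm(0x378, 1, 1) < 0) {         /* need root (or CAP_SYS_RAWIO) */
        perror("ioperm");
        return 1;
    }
    for (int bit = 0; bit < 8; bit++) {    /* walk a single 1 across D0..D7 */
        outb(1 << bit, 0x378);
        sleep(1);                          /* watch which data pin's LED lights up */
    }
    outb(0, 0x378);
    return 0;
}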
Have you set the direction register? If it is set as input then you will read what is on the port.
inb(0x378) is not required to return what was written; at least I've seen chips behave that way. Well, since you, at some point, have called outb, you know what's on the port anyway.
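In other words, keep a shadow copy yourself, something like (sketch only):
static unsigned char lpt_shadow;       /* last value driven onto the port */

static void lpt_write(unsigned char val)
{
    lpt_shadow = val;                  /* remember it ...                   */
    outb(val, 0x378);                  /* ... because inb() may not echo it */
}

static unsigned char lpt_read_back(void)
{
    return lpt_shadow;                 /* trust the shadow, not inb(0x378)  */
}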