I want to write a program that sets a breakpoint on the ARM1176JZ-S. According to the manual, the breakpoint value/control registers are accessible only if DSCR[15:14] is equal to b10, meaning monitor debug mode.
And according to the manual, DSCR can be written regardless of the value of DSCR[15:14].
But in practice I find that bit 15 of DSCR cannot be set to one with an MCR instruction. Could anyone help?
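Roughly what I am doing (a sketch using GCC inline assembly; the CP14 access encoding MRC/MCR p14, 0, <Rd>, c0, c1, 0 is what I took from the TRM, so treat the details as assumptions rather than verified code):

/* Read and write DSCR through the CP14 interface (assumes a privileged mode). */
static inline unsigned int read_dscr(void)
{
    unsigned int val;
    __asm__ volatile("mrc p14, 0, %0, c0, c1, 0" : "=r"(val));
    return val;
}

static inline void write_dscr(unsigned int val)
{
    __asm__ volatile("mcr p14, 0, %0, c0, c1, 0" : : "r"(val));
}

/* Attempt to select monitor debug mode: DSCR[15:14] = b10. */
void try_enable_monitor_mode(void)
{
    unsigned int dscr = read_dscr();
    dscr |=  (1u << 15);   /* set bit 15 (monitor debug-mode enable)   */
    dscr &= ~(1u << 14);   /* clear bit 14 (halting debug-mode enable) */
    write_dscr(dscr);
    /* Reading DSCR back shows bit 15 still clear, which is the problem. */
}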
Thanks
I'm programming an STM8S microcontroller using STVD IDE. It uses the COSMIC compiler.
I found that there is a variable that increases unexpectedly. While debugging, I found that there is a line in the assembly code that causes this variable to increase; the line is inside a function named c_lgadc. Sometimes this assembly line is executed even though no ADC-related function is shown in the call stack.
I don't understand where this code comes from and what c_lgadc is. There is no function named c_lgadc in my C code.
Here is a screenshot of the disassembly.
UPDATE:
I don't know what C code I should examine, as the call stack is different every time this disassembly line is hit.
I've noticed that when I step over or step into in the debugger, it comes to the last line of a specific timer ISR.
I've also noticed that the line with the second breakpoint is the one that causes the addition to my variable.
The line with the first breakpoint is always executed five times, then the line with the second breakpoint is executed once, and so on.
I'd like to know how I should debug this further to prevent the unexpected addition to my variable.
UPDATE2:
I found the following in the map file:
c_lgadc 0000f39c defined in (C:\Users\xxxxxxxx\CXSTM8\Lib\libm0.sm8)lgadc.o section .text
used in Debug\stm8s_it.o
I'm not sure whether this helps clarify the problem.
I've noticed that when I step over or step into in the debugger, it comes to the last line of a specific timer ISR.
So, this timer ISR increments a 4-byte integer variable, and this variable overlaps with your variable. How such overlapping occurs might be revealed by inspecting that ISR or the link map, or it may be that the index register X is not correctly set in the ISR.
The function c_lgadc looks like part of the runtime library. Judging by context, it is probably an add-with-carry helper, because it sits between the compare and unsigned-right-shift helpers.
The c_l and c_lg prefixes of these functions are probably part of a scheme to indicate the types of the operands or of the result.
As to your question, adc (add with carry) occurs in the instruction sets of several CPU architectures, for example the Intel x86 and the Motorola 680x families. It means:
If the carry flag (set by unsigned arithmetic overflow or by a shift through the carry flag) is zero, return the operand as the result.
If the carry flag is set, return the operand plus one as the result.
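To see why a plain C statement can land in such a helper: on an 8-bit core like the STM8, a 32-bit (long) addition has to be carried out byte by byte with the carry propagated between bytes, which is exactly the kind of job a routine like c_lgadc would do. A rough sketch of the idea in portable C (illustrative only, not COSMIC's actual implementation):

#include <stdint.h>

/* Add two 32-bit values the way an 8-bit CPU has to: one byte at a time,
   carrying into the next byte, as an adc-style helper would. */
uint32_t add32_bytewise(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    unsigned carry = 0;

    for (int i = 0; i < 4; i++) {
        unsigned sum = ((a >> (8 * i)) & 0xFFu)
                     + ((b >> (8 * i)) & 0xFFu)
                     + carry;
        result |= (uint32_t)(sum & 0xFFu) << (8 * i);  /* low byte of the partial sum  */
        carry = sum >> 8;                              /* this is the hardware carry flag */
    }
    return result;
}

So a simple counter++ on a 4-byte variable inside a timer ISR can legitimately end up inside an add-with-carry library routine.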
I've created a listing file of my asm code using the commands
cd c:\masm32\bin\
ml.exe /c /Fl"c:\path\file.lst" /Sc "c:\path\file.asm"
The .lst file contains three columns: the first is the hex address of the line and the third is the opcode, but I don't understand the meaning of the values in the second column. I think it's called "timing", and the values are something like 2, 10m, or even 7m,3. What is the meaning of these numbers; what do they represent?
With the /Sc command-line switch, which generates instruction timings, each line has this syntax:
offset [[timing]] [[code]]
The offset is the offset from the beginning of the current code segment. The timing shows the number of cycles the processor needs to execute the instruction. The value of timing reflects the CPU type; for example, specifying the .386 directive produces instruction timings for the 80386 processor. If the statement generates code or data, code shows the numeric value in hexadecimal notation if the value is known at assembly time. If the value is calculated at run time, the assembler indicates what action is necessary to compute the value.
When assembling under the default .8086 directive, timing includes an effective address value if the instruction accesses memory. The 80186–80486 processors do not use effective address values. For more information on effective address timing, see the "Processor" section in the Reference book.
(source)
I'm not sure how much I'd trust those timing values unless you're actually going to execute the code on an 80486 or earlier processor.
I would like to ask how to determine in which ISA (ARM/Thumb/Thumb-2) an instruction is encoded.
First of all, I tried to do it following the instructions here (section 4.5.5).
However, when I use readelf -s ./arm_binary on a binary built in release mode, it appears that there is no .symtab in the binary. And in any case, I don't understand how to use this command to find the ISA of the instructions.
Secondly, I know another way to differentiate is to look at the address used to reach the ARM/Thumb code: if bit 0 is set (odd), the target is Thumb; if it is clear, the target is ARM. But how can I do this without loading the file into memory? When I parse the sections of the file and find the executable section, all I have is the start (offset) location in the file, and that file offset is always even; it will always be even because instructions are 2 or 4 bytes in size...
Finally, the last way to check is to detect BX Rm, extract the value from Rm, and then check whether the address in Rm is even or odd. But this may be difficult, because for that I would need to emulate the whole program.
So what is the correct way to identify the ISA for disassembly?
Thank you for your attention and I hope you will help me.
I don't believe it's possible to tell, in a mixed mode binary, without inspecting the instructions as you describe.
If the whole file is one ISA or the other, then you can determine the ISA of the entry point by running this:
readelf -h ./arm_binary
And checking whether the entry point address is even (ARM) or odd (Thumb).
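If you would rather do that check programmatically than read it off the readelf output, here is a minimal sketch in C (it assumes a 32-bit ELF whose byte order matches the host, and does only basic error handling):

#include <elf.h>
#include <stdio.h>

/* Report whether the ELF entry point looks like ARM or Thumb code:
   bit 0 of e_entry set means the entry point is Thumb. */
int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    Elf32_Ehdr ehdr;
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1) {
        perror("fread");
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("entry point 0x%08x -> %s\n", (unsigned)ehdr.e_entry,
           (ehdr.e_entry & 1u) ? "Thumb" : "ARM");
    return 0;
}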
However, what I would do is simply disassemble it both ways, and see what looks right. As long as you start the disassembly at the start of a function (or any 4-byte boundary), then this will work fine. Most code will produce nonsense when disassembled in the wrong ISA.
I am new to Arduino (and in fact to programming). I am thinking of using the Arduino Due for my academic projects. While going through its datasheet (the SAM3X8E datasheet from Atmel) I came across the timers, and it is said that they are all 32-bit counters, yet they count up to 0xFFFF before going back to 0x0000 again. I am a bit confused. Shouldn't they count up to 0xFFFFFFFF (before going to zero), since they are 32-bit counters? I think 16-bit counters are the ones that count up to 0xFFFF.
Maybe what I ask is silly, but please shed some light on it.
Thanks in advance.
(See 37.6 Functional Description, 37.6.2 32-bit Counter, page 873 of the datasheet.)
Perhaps my library can help you: https://github.com/ivanseidel/DueTimer
Read this help file also: https://github.com/ivanseidel/DueTimer/blob/master/TimerCounter.md
I know it's not exactly what you asked for, but it might be what you want as a final result.
I can find nothing in the datasheet or in Atmel's application notes that refutes your observation. This leads me to believe one of two things:
The description in the datasheet is incomplete. The behavior described is only applicable to the lower word, and the full 32-bit timer is incremented from 0x00000000 to 0xffffffff in order, with overflow only registering for the bottom 16 bits.
The behavior is exactly as described in the datasheet, and the software can set the timer counter to a value between 0x00010000 and 0xffffffff inclusive in order to allow for a longer one-shot period before the timer overflows at 0x0000ffff.
Testing will tell which behavior is the actual one.
You found a bug in their document, but they fixed it.
In the current version of the datasheet, this is now in section 36.6.2, page 860, and it makes more sense:
"When the counter has reached the value 2^32-1 and passes to zero, an overflow occurs..."
I am working with the registers of an ARM Cortex-M3. In the documentation, some of the bits may be "reserved". It is unclear to me how I should deal with these reserved bits when writing to the registers.
Are these reserved bits even writeable? Should I be cautious to not touch them? Will something bad happen if I touch them?
This is a classic embedded-world problem: what to do with reserved bits! First, you should NOT write random values into them, lest your code become unportable. What happens when the architecture assigns a new meaning to the reserved bits in the future? Your code will break. So the best mantra when dealing with registers that have reserved bits is read-modify-write: read the register contents, modify only the bits you want, and then write the value back so that the reserved bits are untouched (untouched does not mean we don't write to them, but that we write back whatever was already there).
For example, say there is a register in which only the least significant bit has meaning and all other bits are reserved. I would do this:
ldr  r0, =memoryAddress   ; address of the register
ldr  r1, [r0]             ; read the current contents
orr  r1, r1, #1           ; set only the LSB
str  r1, [r0]             ; write the value back, reserved bits unchanged
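The same read-modify-write pattern in C, assuming a memory-mapped 32-bit register; the address used here is a placeholder, not a real peripheral:

#include <stdint.h>

#define REG_ADDR  0x40000000u                       /* placeholder address             */
#define REG       (*(volatile uint32_t *)REG_ADDR)  /* treat it as a volatile register */

void set_lsb_only(void)
{
    uint32_t val = REG;   /* read the current contents           */
    val |= 1u;            /* modify only the bit we care about   */
    REG = val;            /* write back, reserved bits untouched */
}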
If there is no other clue in the documentation, write a zero. You cannot avoid writing to a few reserved bits spread around in a 32-bit register.
Read-modify-write should work most of the time; however, there are cases where reserved bits are undefined on read but must be written with a specific value. See this post from the LPC2000 group (the whole thread is quite interesting too). So always check the docs carefully, and also any errata that is available. When in doubt, or when the docs are unclear, don't hesitate to write to the manufacturer.
Ideally you should read-modify-write, though there is no guarantee of success; when you change to a newer chip with different bits, you are changing your code anyway. I have seen vendors where writing zeros to the reserved bits failed when they revved the chip, and the code had to be touched. So there are no guarantees. The biggest clue is whether, in the vendor's example code, you see a register or set of registers that are clearly read-modify-written, or clearly just written. That could simply be different developers writing different sections of the example, or there may be a register in that peripheral that is sensitive, has an undocumented bit, and needs the read-modify-write.
On the chips that I work on, I make sure that bits which are undocumented (to the customer) but not unused are marked in some way that makes them stand out from the truly unused bits. We normally mark unused/reserved bits as zero, while these other bits get a name and a "must write this value" marking. Not all vendors do this.
The bottom line is that there is no guarantee: assume all documentation and example programs have bugs, and that you have to hack your way through to figure out what is right and what is wrong. No matter what path you take (read-modify-write, write zeros, etc.) you will be wrong from time to time and will have to redo the code to match a hardware change. I strongly suggest that, if the vendor provides a chip ID of some sort, your software read that ID, and if it is an ID you have not tested your code against, declare a failure and do not program that part. Then, in production testing, long before a customer sees the product, the part change will be detected and software will be involved in understanding the reason for it; the resolution is either that the alternate part is incompatible and gets rejected, or that the software changes, and so on.
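A sketch of that chip-ID guard (the register address, the ID values, and fail_and_halt() are all hypothetical placeholders; a real part has its own ID register and values):

#include <stdint.h>

#define CHIP_ID_REG  (*(volatile uint32_t *)0x40000FF0u)  /* hypothetical ID register */

/* Silicon IDs this firmware has actually been tested against (made-up values). */
static const uint32_t tested_ids[] = { 0x411FC231u, 0x412FC231u };

extern void fail_and_halt(void);  /* project-specific: report the ID and refuse to run */

void check_chip_id(void)
{
    uint32_t id = CHIP_ID_REG;

    for (unsigned i = 0; i < sizeof tested_ids / sizeof tested_ids[0]; i++) {
        if (tested_ids[i] == id)
            return;               /* known, tested part: carry on */
    }
    fail_and_halt();              /* unknown revision: flag it long before a customer sees it */
}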
Reserved most of the time means that the bits aren't used in this chip, but they might be used on future devices (other product lines). (Most chip manufacturers produce one peripheral design and use it for all their chips; that way it's mostly copy-paste work and there is less chance of errors.) Most of the time it doesn't matter if you write to reserved bits in peripheral registers, because there isn't any logic attached to them.
It is also possible that if you write something to them, it won't be stored, and the next time you read the register the bits appear unchanged.