Timers in Arduino Due - arm

I am new to Arduino (as a matter of fact, to programming). I am thinking of using the Arduino Due for my academic projects. While going through its datasheet (the SAM3X8E datasheet from Atmel) I came across the timers, and it says they are all 32-bit counters, yet that they count up to 0xFFFF before rolling over to 0x0000. I am a bit confused. Shouldn't they count up to 0xFFFFFFFF (before going to zero) since they are 32-bit counters? I thought 16-bit counters are the ones that count up to 0xFFFF.
Maybe what I am asking is silly, but please throw some light on it.
Thanks in advance.
See 37.6 Functional Description, 37.6.2 32-bit Counter, page 873 of the datasheet.

Perhaps my library can help you: https://github.com/ivanseidel/DueTimer
Read this help file also: https://github.com/ivanseidel/DueTimer/blob/master/TimerCounter.md
I know it's not exactly what you asked, but it might be what you want as a final result.
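For reference, a minimal sketch along the lines of the examples in that README (assuming the DueTimer library is installed; the choice of Timer3 and the period are just placeholders):
#include <DueTimer.h>
volatile unsigned long ticks = 0;     // updated from the timer interrupt
void myHandler() {
  ticks++;                            // runs once per timer period
}
void setup() {
  Timer3.attachInterrupt(myHandler);  // hook the handler to one of the Due's timers
  Timer3.start(50000);                // period in microseconds: 50 ms here
}
void loop() {
  // ticks now increments 20 times per second in the background
}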

I can find nothing in the datasheet or in Atmel's application notes that refutes your observation. This leads me to believe one of two things:
The description in the datasheet is incomplete. The behavior described is only applicable to the lower word, and the full 32-bit timer is incremented from 0x00000000 to 0xffffffff in order, with overflow only registering for the bottom 16 bits.
The behavior is exactly as described in the datasheet, and the software can set the timer counter to a value between 0x00010000 and 0xffffffff inclusive in order to allow for a one-shot longer period before the timer overflows at 0x0000ffff.
Testing will tell which behavior is the actual one.

You found a bug in their document, but they fixed it.
In the current version of the datasheet, this is now in section 36.6.2, page 860, and it makes more sense:
"When the counter has reached the value 2^32-1 and passes to zero, an overflow occurs..."

Related

What is "DN" in the specification of ARMv6M ADD(register) T2 Encoding?

I'm working through the ARMv6-M specification, and the instruction's T2 encoding is as below.
0 1 0 0 0 1 | 0 0 | DN | Rm | Rdn
DN is always used as a prefix to the Rdn register field, and I couldn't understand why it isn't simply part of Rdn.
Many of the Thumb instructions, especially the ALU operations, encode their register specifiers in the lower six bits, three bits each, selecting r0-r7. This specific ADD instruction allows operations on both low and high registers (r0-r15), so the two extra bits needed a home: they put one in bit 6, which goes with bits 5:3, and the other went above that.
So perhaps they were thinking of saving a few gates, or of readability, or some other reason that can't be answered here on SO. Instead of using bits 7:4 and 3:0 as they would in a full-sized ARM instruction, for this one special one-off instruction they put the upper bits in bits 7:6. An even better question is why they didn't split the two registers the same way: why isn't it [7,5,4,3] and [6,2,1,0] instead of [6,5,4,3] and [7,2,1,0]? IMO that would have helped readability, especially if you started off on the ARM Thumb docs that were originally only in print (paper), where H1/H2 seemed swapped.
In the pseudocode they show (DN:Rdn) and talk about it as a four-bit number, and then Rm as a four-bit number, so that expresses what the older docs did, just in a different way.
I suspect they used the lowercase n at the end for the lower bits and the capital N for the upper bit, and yes, it would have read better as RdN instead of DN, or Rd3 would have been even better than that.
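To make the bit layout concrete, here is a small C sketch (my own illustration, not ARM's pseudocode) that pulls the fields out of a 16-bit ADD (register) T2 halfword and forms the 4-bit destination number the way the (DN:Rdn) notation describes:
#include <stdint.h>
#include <stdio.h>
int main(void) {
    uint16_t insn = 0x4488;             /* example halfword: ADD r8, r1 */
    unsigned dn  = (insn >> 7) & 0x1;   /* DN, bit 7                    */
    unsigned rm  = (insn >> 3) & 0xF;   /* Rm, bits 6:3 (4-bit field)   */
    unsigned rdn =  insn       & 0x7;   /* Rdn, bits 2:0                */
    unsigned d   = (dn << 3) | rdn;     /* DN:Rdn -> 4-bit register number */
    printf("ADD r%u, r%u\n", d, rm);    /* prints: ADD r8, r1 */
    return 0;
}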
Instruction sets can, to some extent, do whatever they want; ARM is no different here. The designers, way back when Thumb started, chose what they chose. It was an arbitrary decision, and you will be lucky to find anyone here who was in the room at the time. I have seen these things be mistakes too: in the meeting one thing is decided, someone's implementation comes out backward, but by the time that gets back to the room too much investment has been made in testing, etc., to just change it. It's possible that those engineers are already retired, or got golden parachutes a while ago; maybe you will get lucky.
Also understand that it is not uncommon for the documentation folks to be a separate department, so this could have been a game-time decision (or typo) by an individual technical writer that was later deemed not worth changing in the docs down the road.
Don't read anything magical into this or assume it is some industry nomenclature; that is a bad habit to have anyway. What matters is that you understand what the bits do, not how they are labelled.

Reverse engineering a firmware - what's up with every fourth byte?

So I decided to grab my tools and analyze a router firmware image. It went pretty okay up to the point where I had to find the segments manually. I won't bore you with that, and I really don't want to ask about hacking anything or ask anyone to do it for me. But there is a pattern here that I'm sure someone could explain to me.
Looking at the hexdump, all I see is this:
There are strings that break the pattern but it goes all the way down almost to the end of the file.
What on earth can cause this pattern?
(if anyone's willing to help but needs more info: VxWorks 5.5.1 / probably ARM-9E CPU)
It is an ARM; go look at the ARM documentation and you will see that for the 32-bit (non-Thumb) ARM instructions the first four bits are the condition code. The code 0b1110 means "always", and most of the time you don't use conditional execution, so most ARM instructions start with 0xE. That makes it very easy to pick out an ARM binary. The 16-bit Thumb instructions also have a similar pattern, but for different reasons, and if you add in Thumb2 that changes things somewhat...
That's just due to how ARM's opcodes are mapped, and it actually helps me "eyeball" a dump to see if it's ARM code.
I would suggest you go through part of the ARM Architecture Reference Manual to see how the opcodes are laid out, particularly the conditionals. The 0xE appears whenever you want something to happen unconditionally.
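If you want to check this programmatically rather than by eyeballing, a quick-and-dirty C sketch like the one below (the file name is a placeholder) counts how many 32-bit little-endian words in a dump have 0xE as their top nibble; on a typical ARM code segment the hit rate is very high:
#include <stdio.h>
#include <stdint.h>
int main(void) {
    FILE *f = fopen("firmware.bin", "rb");   /* placeholder file name */
    if (!f) { perror("fopen"); return 1; }
    uint8_t word[4];
    unsigned long total = 0, always = 0;
    /* little-endian ARM: the condition nibble is the top 4 bits of byte 3 of each word */
    while (fread(word, 1, 4, f) == 4) {
        total++;
        if ((word[3] >> 4) == 0xE)
            always++;
    }
    fclose(f);
    printf("%lu of %lu words start with condition 0xE (AL)\n", always, total);
    return 0;
}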

Dealing with reserved register bits of an ARM chip

I am working with the registers of an ARM Cortex-M3. In the documentation, some of the bits may be "reserved". It is unclear to me how I should deal with these reserved bits when writing to the registers.
Are these reserved bits even writeable? Should I be cautious to not touch them? Will something bad happen if I touch them?
This is a classic embedded-world problem: what to do with reserved bits! First, you should NOT write random values into them, lest your code become unportable. What happens when the architecture assigns a new meaning to the reserved bits in the future? Your code will break. So the best mantra when dealing with registers that have reserved bits is read-modify-write: read the register contents, modify only the bits you want, and then write the value back so that the reserved bits are untouched (untouched does not mean we don't write to them, but that we write back whatever was already there).
For example, say there is a register in which only the LSBit has meaning and all the others are reserved. I would do this:
ldr r0,=memoryAddress   @ address of the register
ldr r1,[r0]             @ read the current contents
orr r1,r1,#1            @ set only the LSB; reserved bits keep the value just read
str r1,[r0]             @ write the whole value back
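The same read-modify-write in C might look like the sketch below; REG_ADDR and the bit mask are placeholders, not a real register on any particular part:
#include <stdint.h>
#define REG_ADDR  0x40001000u                       /* placeholder register address */
#define REG       (*(volatile uint32_t *)REG_ADDR)
#define MODE_BIT  (1u << 0)                         /* the one documented bit */
static void enable_mode(void) {
    uint32_t val = REG;      /* read:   capture the current value, reserved bits included */
    val |= MODE_BIT;         /* modify: touch only the documented bit                     */
    REG = val;               /* write:  reserved bits go back exactly as read             */
}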
If there is no other clue in the documentation, write a zero. You cannot avoid writing to a few reserved bits spread around in a 32-bit register.
Read-modify-write should work most of the time; however, there are cases where reserved bits are undefined on read but must be written with a specific value. See this post from the LPC2000 group (the whole thread is quite interesting too). So always check the docs carefully, and also any errata that is available. When in doubt, or when the docs are unclear, don't hesitate to write to the manufacturer.
Ideally you should read-modify-write, but there is no guarantee of success; when you change to a newer chip with different bits, you are changing your code anyway. I have seen vendors where writing zeros to the reserved bits failed when they revved the chip, and the code had to be touched. So there are no guarantees. The biggest clue is whether, in the vendor's example code, you see a register (or a set of registers) that is clearly read-modify-written or clearly just written. That could be different developers writing different sections of the example, or there could be a register in that peripheral that is sensitive, has an undocumented bit, and needs the read-modify-write.
On the chips that I work on, I make sure that bits which are undocumented (to the customer) but not unused are marked in some way to stand out from the other unused bits. We normally mark unused/reserved bits as zero, and these other bits get a name and a "must write this value" marking. Not all vendors do this.
The bottom line is that there is no guarantee; assume all documentation and example programs have bugs, and that you have to hack your way through to figure out what is right and what is wrong. No matter what path you take (read-modify-write, write zeros, etc.), you will be wrong from time to time and have to redo the code to match a hardware change. I strongly suggest that if the vendor provides a chip ID of some sort, your software read that ID and, if it is an ID you have not tested your code against, declare a failure and refuse to program that part (see the sketch below). In production testing, long before a customer sees the product, the part change will then get detected, and software will be involved in understanding the reason for it; the resolution is either that the alternate part is rejected as incompatible, or that the software changes, etc.
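A sketch of that chip-ID guard in C; the register address and the expected ID values are entirely hypothetical and would come from your vendor's datasheet:
#include <stdint.h>
#define CHIP_ID_REG   (*(volatile uint32_t *)0x400E0940u)  /* hypothetical ID register */
static const uint32_t tested_ids[] = { 0x285E0A60u, 0x285E0A61u };  /* revisions validated against */
/* Returns 1 if this silicon revision has been tested with this firmware. */
static int chip_supported(void) {
    uint32_t id = CHIP_ID_REG;
    for (unsigned i = 0; i < sizeof tested_ids / sizeof tested_ids[0]; i++) {
        if (id == tested_ids[i])
            return 1;
    }
    return 0;   /* unknown revision: refuse to configure the part */
}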
Reserved most of the time means that the bits aren't used on this chip, but they might be used on future devices (another product line). Most chip manufacturers produce one peripheral design and reuse it across all their chips; that way it's mostly copy-paste work and there is less chance of errors. Most of the time it doesn't matter if you write to reserved bits in peripheral registers, because there isn't any logic attached to them.
It is also possible that whatever you write to them won't be stored, and the next time you read the register/bits they will seem unchanged.

Changing type of 32-bit variable to 64-bit variable?

My application runs on a pSOS operating system. The code is compiled with Diab C compiler.
The application defines a number of counters which have been declared as
unsigned int call_count;
As there is a chance of some of these overflowing within a small time frame, I have decided to declare the counters as
unsigned long long int call_count;
This I believe would not overflow at least during my lifetime.
My question is: is this conversion harmless? Is there any overhead I need to be concerned about? When the application is under stress, call_count would be incremented incessantly. Can performance take a hit? An SNMP manager would be querying these counters every 15 seconds as well.
Is your code assuming that incrementing a 32-bit variable is an atomic operation? Incrementing a 64-bit variable on a 32-bit CPU probably won't be atomic unless you go out of your way to make it so.
Example:
call_count equals 0x00000005FFFFFFFF when a call comes in.
The lower half of call_count is incremented: it wraps to zero, so call_count becomes 0x0000000500000000 and the CPU's carry bit is set to 1.
The upper half of call_count is incremented by the carry bit: call_count gets set to 0x0000000600000000.
If another thread or an interrupt handler reads the value of call_count between steps 2 and 3, it will get the wrong result (0x0000000500000000 instead of 0x0000000600000000). The solution is to synchronize access to call_count. A few possibilities (a sketch of the first option follows the list):
Disable interrupts (if appropriate)
Serialize access using a lock
Read and write using atomic/interlocked functions (example: InterlockedIncrement() on Windows)
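A minimal sketch of the interrupt-masking approach, assuming the RTOS or BSP provides some way to mask and restore interrupts; irq_lock/irq_unlock here are placeholders for whatever pSOS actually offers, not real pSOS calls:
#include <stdint.h>
static volatile uint64_t call_count;
/* Placeholders for the platform's real primitives. */
extern unsigned int irq_lock(void);            /* mask interrupts, return previous state */
extern void         irq_unlock(unsigned int);  /* restore the previous interrupt state   */
void count_call(void) {
    unsigned int key = irq_lock();   /* nothing can observe a half-updated value */
    call_count++;                    /* 64-bit increment: two 32-bit adds on this CPU */
    irq_unlock(key);
}
uint64_t read_call_count(void) {
    unsigned int key = irq_lock();   /* reads need the same protection */
    uint64_t snapshot = call_count;
    irq_unlock(key);
    return snapshot;
}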
I doubt there is a performance issue, at least if you use a 64-bit processor, since the variable is almost always in the cache.
Within broad limits, the change is harmless. You will need to be sure that any code accessing the value is prepared to handle a 64-bit quantity, and any code that formats its value will need to be changed, but otherwise, it should be safe enough -- in the absence of any information about other code that would be broken by the change.
You should be fine.
I assume (from the pSOS) that you're coding for a Motorola 68000, which is a 32-bit processor; working with 64-bit numbers is slightly slower there because it needs a few more instructions (e.g. add, check carry, branch or add to the high word), but I doubt you're worried much about a four-cycle cost. If you are on a 64-bit processor, then 64-bit ops are exactly as fast as 32-bit ones.
Doing this will increase your memory overhead, of course, but again that's only a concern if you have a great many structures containing these counters.

Is one's complement a real-world issue, or just a historical one?

Another question asked about determining odd/evenness in C, and the idiomatic (x & 1) approach was correctly flagged as broken for one's complement-based systems, which the C standard allows for.
Do such systems really exist in the 'real world' outside of computer museums? I've been coding since the 1970s and I'm pretty sure I've never met such a beast.
Is anyone actually developing or testing code for such a system? And, if not, should we worry about such things or should we put them into Room 101 along with paper tape and punch cards...?
I work in the telemetry field, and some of our customers have old analog-to-digital converters that still use 1's complement. I just had to write code the other day to convert from 1's complement to 2's complement in order to compensate.
So yes, it's still out there (but you're not going to run into it very often).
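The conversion itself is tiny; a sketch for a 16-bit sample might look like this (assuming the sign is in bit 15 and that the converter can produce a negative zero, 0xFFFF, which you usually want to fold into 0):
#include <stdint.h>
/* Convert a 16-bit one's complement sample to a native (two's complement) value. */
static int16_t ones_to_twos(uint16_t raw) {
    if (raw & 0x8000u) {               /* sign bit set: value is negative      */
        uint16_t mag = (uint16_t)~raw; /* magnitude is the bitwise complement  */
        return (int16_t)-(int16_t)mag; /* negative zero (0xFFFF) maps to 0     */
    }
    return (int16_t)raw;               /* non-negative values are unchanged    */
}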
This all comes down to knowing your roots.
Yes, this is technically an old technique and I would probably do what other people suggested in that question and use the modulo (%) operator to determine odd or even.
But understanding what 1's complement (or 2's complement) is, is always a good thing to know. Whether or not you ever use them, your CPU is dealing with those things all of the time, so it can never hurt to understand the concept. Modern systems make it so you generally never have to worry about things like this, so it has become a topic for Programming 101 courses in a way. But you have to remember that some people actually do still use this in the "real world"... for example, contrary to popular belief there are people who still use assembly! Not many, but until CPUs can understand raw C# and Java, someone is still going to have to understand this stuff.
And heck, you never know when you might find yourself doing something where you actually need to do some binary math, and that 1's complement could come in handy.
The CDC Cyber 18 I used back in the '80s was a 1's complement machine, but that was nearly 30 years ago, and I haven't seen one since (however, that was also the last time I worked on a non-PC).
RFC 791 p.14 defines the IP header checksum as:
The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero.
So one's complement is still heavily used in the real world, in every single IP packet that is sent. :)
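For illustration, a straightforward C version of that computation (the classic end-around-carry sum over 16-bit words, along the lines of RFC 1071):
#include <stdint.h>
#include <stddef.h>
/* One's complement checksum over a buffer of 16-bit big-endian words
   (length must be even, and the checksum field must be zeroed first). */
uint16_t ip_checksum(const uint8_t *data, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += ((uint32_t)data[i] << 8) | data[i + 1];
    while (sum >> 16)                 /* fold the carries back in (end-around carry) */
        sum = (sum & 0xFFFFu) + (sum >> 16);
    return (uint16_t)~sum;            /* one's complement of the one's complement sum */
}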
I decided to find one. The Unisys ClearPath systems have an ANSI C compiler (yes, they call it "American National Standard C"), for which even the PDF documentation was last updated in 2013. The documentation is available online.
There, the signed types all use one's complement representation, with the following properties:
Type                 | Bits | Range
---------------------+------+------------------
signed char          |    9 | -2⁸+1 ... 2⁸-1
signed short         |   18 | -2¹⁷+1 ... 2¹⁷-1
signed int           |   36 | -2³⁵+1 ... 2³⁵-1
signed long int      |   36 | -2³⁵+1 ... 2³⁵-1
signed long long int |   72 | -2⁷¹+1 ... 2⁷¹-1
Remarkably, it also by default supports non-conforming unsigned int and unsigned long, which range from 0 ... 2³⁶ - 2, but can be changed to 0 ... 2³⁶ - 1 with a pragma.
I've never encountered a one's complement system, and I've been coding as long as you have.
But I did encounter a 9's complement system -- the machine language of a HP-41c calculator. I'll admit that this can be considered obsolete, and I don't think they ever had a C compiler for those.
We got off our last 1960s Honeyboxen sometime last year, which made it our oldest machine on site. It was two's complement. This isn't to say that knowing or being aware of one's complement is a bad thing; it's just that you will probably never run into one's complement issues today, no matter how much computer archeology they have you do at work.
The issues you are more likely to run into on the integer side are endianness issues (I'm looking at you, PDP). Also, you'll run into more "real world" (i.e. present-day) issues with floating-point formats than you will with integer formats.
Funny thing: people asked that same question on comp.std.c in 1993, and nobody could point to a one's complement machine still in use back then.
So yes, I think we can confidently say that one's complement belongs to a dark corner of our history, practically dead, and is not a concern anymore.
Is one's complement a real-world issue, or just a historical one?
Yes, it is still used. It's even used in modern Intel processors. From the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 2A, page 3-8:
3.1.1.8 Description Section
Each instruction is then described by number of information sections. The “Description” section describes the purpose of the instructions and required operands in more detail.
Summary of terms that may be used in the description section:
* Legacy SSE: Refers to SSE, SSE2, SSE3, SSSE3, SSE4, AESNI, PCLMULQDQ and any future instruction sets referencing XMM registers and encoded without a VEX prefix.
* VEX.vvvv. The VEX bitfield specifying a source or destination register (in 1’s complement form).
* rm_field: shorthand for the ModR/M r/m field and any REX.B
* reg_field: shorthand for the ModR/M reg field and any REX.R
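In practice, "1's complement form" there just means the register number is stored inverted in the vvvv field; a small C illustration (my own, not Intel's code):
#include <stdio.h>
int main(void) {
    unsigned reg = 3;                    /* e.g. xmm3 as a source operand      */
    unsigned vvvv = (~reg) & 0xF;        /* VEX.vvvv stores the inverted value */
    unsigned decoded = (~vvvv) & 0xF;    /* decoders invert it back            */
    printf("reg=%u  vvvv=0x%X  decoded=%u\n", reg, vvvv, decoded);
    /* prints: reg=3  vvvv=0xC  decoded=3 */
    return 0;
}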
