I do apologize if this is a duplicate; I did search around here for a similar question and only found one.
So my programming team in my Engineering class currently uses a 32-bit 72MHz ARM Cortex-M3 microprocessor. We're all seniors in high school, and we're struggling to use the libraries and whatnot, mostly due to poor docs from the manufacturer of the Bioloid Premium we're using. However, we are about to purchase an 8-bit 16MHz AVR microcontroller because it has a wider range of support online, an easier-to-use library, and more documentation. My question here is, would the decreased bit-count as well as the lower processor speed really matter to us? We're not going to be doing a lot of process-intensive programming; it's more like a basic robotics class.
So, what are the main differences between an 8-bit 16MHz AVR microprocessor and a 32-bit 72MHz ARM Cortex-M3 microprocessor?
Also (if it holds any relevance):
We're using a Bioloid Premium by Robotis w/ CM530 (ARM), about to switch to CM510 (AVR).
We'll be using Embedded C instead of Robotis' RoboPlus IDE.
I have googled around and found out what a bit-count is, and more about its impact on processor speed, but not a lot of documents give a clear and concise answer, and that's why I came here, because this is the place for clear and concise answers. (So please don't tell me to Google it when I've spent the past twenty minutes doing so.)
We're using a Bioloid Premium by Robotis w/ CM530 (ARM), about to switch to CM510 (AVR). We'll be using Embedded C instead of Robotis' RoboPlus IDE.
I looked around at the products you refer to, and your question seems to be missing the issues you should really be concerned with.
The Bioloid Premium kit looks pretty sweet, with all the parts put together and configured for you already. Much of a robotics course is usually concerned with designing the hardware. You are not going to be doing any of that, so your tasks really come down to programming the hardware you are given.
That said, there is a world of difference between the RoboPlus IDE, which seems similar to the Lego Mindstorms drag-and-drop interface, and writing code in C using AVR Studio!
I have used AVR Studio before, but there was a major change in versions recently. You might need to modify the example programs to work in the latest version, and you will probably need some help with that.
It looks like they supply you with enough example code to use the peripherals, but I don't see right away how to write a main() function to do something like follow a plan. Perhaps there are some examples online.
But to answer your question, you are probably not going to run into any limitations in terms of processor capacity. They switched to a cheaper and more powerful processor for the newer version of their control software, but the old hardware will be great, too. Working in C, you will become familiar with how to actually use an MCU, and that knowledge will transfer to other chips. The AVR family is a great one to start with. It has lots of features and is pretty sensible in how it works, with lots of documentation and third-party support. Definitely download the datasheet from Atmel for the chip you are using, although it is a dense and difficult read; you will only need to read parts of it. Also, check out the AVR Freaks forums.
This sounds like a fantastic high school course. Have fun with it!
My question here is, would the decreased bit-count as well as the lower processor speed really matter to us? [...] So, what are the main differences between an 8-bit 16MHz AVR microprocessor and a 32-bit 72MHz ARM Cortex-M3 microprocessor?
What a cool project! This is a great opportunity to learn a bit about how processors work and what bit-width and clock speed mean.
Clock speed is conceptually the easiest to understand. Microcontrollers like the AVR and ARM use a clock crystal that sets the speed the circuitry operates at. With a faster clock, the processor can execute more instructions in the same amount of time. The 72MHz clock is 4.5x the 16MHz one, so the ARM processor is going to be able to run about 4.5x faster than the AVR. But what does "run faster" really mean? Processors execute instructions. At the basic level, these are instructions like "add two numbers" and "make the voltage on this pin high". The ARM processor is going to be a lot faster here, but consider what hardware it's going to be talking to: servos. Servo motors listen to a fairly low-speed PWM signal (a pulse repeating roughly every 20ms), so at that speed the difference between 72MHz and 16MHz isn't going to be that relevant.
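To make the servo point concrete, here is a minimal sketch (my own illustration, not Robotis code) of generating that 50Hz signal in hardware on an ATmega-class AVR at 16MHz. Register names come from avr-libc's <avr/io.h>; which pin carries the OC1A output varies by device, so check your datasheet.

    #include <avr/io.h>

    int main(void) {
        DDRB |= (1 << PB5);   /* OC1A as output; PB5 on many megaAVRs, check yours */

        /* Timer1, fast PWM mode 14 (TOP = ICR1), non-inverting output on OC1A */
        TCCR1A = (1 << COM1A1) | (1 << WGM11);
        TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS11); /* clk/8 = 2 MHz tick */

        ICR1  = 39999;        /* 40,000 ticks at 2 MHz -> 50 Hz servo frame        */
        OCR1A = 3000;         /* 1500 us pulse -> centre position on most servos   */

        for (;;) { }          /* the timer keeps generating the signal on its own  */
    }

The point of the sketch: once the timer is set up, the 16MHz CPU does no work at all to keep the signal going, which is why the clock-speed gap barely matters for driving servos.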
But what about bit-width? This one is a bit more tricky. It doesn't really affect the speed at which your processor runs, but it affects the complexity of the instructions it executes. Let's say that you want to add two really big numbers together. Numbers like 100,000 and 200,000. When we add those together on paper, it's just one step. But an 8-bit processor like the AVR works on only 8 bits at a time, values up to 255, in a single instruction. So in order to operate on numbers that large, it'll need to break the addition up into several smaller byte-sized steps, carrying from one byte into the next. The 32-bit ARM, on the other hand, works on values up to about 4 billion in a single register, so it does the addition in one step. I hope that makes sense.
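If it helps to see that spelled out, here is a small illustrative C function (not something you'd actually write; the compiler and the AVR's add/adc instructions do this for you) that performs a 32-bit addition the way an 8-bit CPU has to, one byte at a time with a carry:

    #include <stdint.h>

    /* Add two 32-bit numbers the 8-bit way: four byte-sized additions,
       propagating the carry, instead of one 32-bit add. */
    uint32_t add32_8bit_style(uint32_t a, uint32_t b) {
        uint32_t result = 0;
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {
            unsigned byte_a = (a >> (8 * i)) & 0xFF;  /* byte i of a */
            unsigned byte_b = (b >> (8 * i)) & 0xFF;  /* byte i of b */
            unsigned sum = byte_a + byte_b + carry;   /* 8-bit add with carry in */
            carry = sum >> 8;                         /* carry out to next byte  */
            result |= (uint32_t)(sum & 0xFF) << (8 * i);
        }
        return result;  /* add32_8bit_style(100000, 200000) == 300000 */
    }

On the 32-bit ARM, that same addition is a single ADD instruction.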
Anyway, I've done a lot of work with servos on even slower processors than your 16MHz AVR. It'll most likely be just fine for what you want to do, and as you found, it has a much more active hobbyist community. And if you're looking for quick examples of code, the Cornell 4760 page has some great projects that you could learn from.
I have recently bought an STM32 NUCLEO dev kit and wondered: is this what an actual embedded systems engineer would use in industry when developing a product?
I'm using Keil uVision 5, STM32CubeMX, and the STM32 ST-LINK Utility to develop certain projects. As I am used to PIC, working directly with registers like PORTA, OSCCON, TIMER0, etc., I see that Keil uVision 5 uses ready-made functions like HAL_GPIO_TogglePin(.........). Is this the usual way they do it in industry, or do they work more directly with the registers?
This is heavily opinion-based, and I wouldn't be surprised if the question got closed for that reason. This answer only touches on a few aspects of what you're asking about. It's a very broad topic, and it's going to be hard - if not impossible - to cover everything in one post that wouldn't end up several pages long. However, to give you my perspective on the topic while trying to remain unbiased, the short answer is: it depends.
If you're asking about what is used in most common cases, it's likely going to be the HAL (previously StdPeriph) functions you've mentioned. The reason is simple: they get the job done in most common cases. After all, it always comes down to what the cost of creating a product is going to be. If HAL functions are "good enough" for the purpose, they're going to be used simply because they're faster to develop with. The higher the development cost, the more you'll want to cut it (or move it elsewhere), and using abstractions is one way of doing so.
However, even though I think it's safe to assume that HAL / Std Periph / any other (including proprietary) abstraction layer is generally used, it's not always the case for at least two reasons I can think of:
Existing functions may not be suitable for your purpose. Taking HAL as an example, it works pretty well for most common cases, but sometimes your needs are so specific that you'll have to go and mess "under the hood", often ending up either writing your own variation of the functions or building something new on top of HAL. Personally, I can think of at least a few examples where HAL functions weren't exactly what I needed. That doesn't necessarily mean the library is bad; it's just that sometimes the requirements are very specific.
Messing with registers directly may sometimes be required for performance reasons. HAL and similar layers are abstractions, and like any abstraction they take more time to execute than using the registers directly. If you're trying to squeeze the absolute maximum out of a given peripheral, you'll sometimes have to go down to register level.
Now to a more biased portion of my answer... I can see why you ask this question. Coming from the PIC world, where flash and CPU clocks were more precious, it does make sense to use registers directly there. In the case of the STM32, it's not as critical anymore. Having said that, you'll sometimes stumble upon opinions that "using registers is the only true way", but personally I find such discussions end up being purely academic. I see registers, and any abstraction built on top of them, as tools, and you should use the right tools for the right job. Two examples of NOT using the right tools:
You use only registers as "the only right way", either because you believe it yourself or because you've been told so. Your products take twice as long (if not more) to develop, and your code takes less space in flash (so now you use 46% of 1MB flash instead of 48%). Code that is performance-critical meets its goals. Code that has relaxed execution-time constraints is also super efficient, but that doesn't affect the end customer much, if at all. Your code is also less reusable - you find yourself rewriting the same portions of code over and over every time you release a new product for a new MCU family.
You only use HAL / any other similar abstraction, because "you didn't pick such a powerful MCU just to have to go down to register level", or because you've been told you should never ever touch registers. You develop much faster and are able to release two products in the time it would otherwise take to release one. However, when there are execution-time constraints or transmission speeds you have to hit, you find yourself picking MCUs more powerful than should theoretically be needed. Sometimes you find yourself writing wrappers around HAL because it doesn't give you exactly the functionality you need - it feels like making things more complicated than they should be.
So after all, if there is one thing to take away from what I'm trying to say, it is that you should use what is suitable for the job on a case-by-case basis. In the case of the STM32, you nowadays have 3 options: HAL (the top abstraction level), HAL LL (Low-Level abstraction - often simple wrapper functions around register accesses), or using registers directly (all three are sketched below). Which one you choose should come from what your requirements are.
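For illustration, here is what toggling one GPIO pin looks like at each of the three levels. This is a sketch assuming an STM32F4 project where PA5 is already configured as an output; HAL_GPIO_TogglePin and LL_GPIO_TogglePin are ST's actual HAL/LL names, but header names vary per family, and in a real project you would normally stick to one layer rather than mixing them:

    #include "stm32f4xx_hal.h"      /* HAL layer */
    #include "stm32f4xx_ll_gpio.h"  /* LL layer  */

    void toggle_three_ways(void) {
        HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5);    /* 1. HAL: highest abstraction   */
        LL_GPIO_TogglePin(GPIOA, LL_GPIO_PIN_5);  /* 2. LL: thin register wrappers */
        GPIOA->ODR ^= (1u << 5);                  /* 3. registers: XOR the ODR bit */
    }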
I use Nucleo boards all the time. They allow me to start writing software before I have the actual hardware ready.
HAL versus registers is more the programmer's choice than an "industry standard". I personally use HAL drivers when programming more complicated peripherals like USB and Ethernet, to avoid writing the full stacks from scratch.
I have just finished my degree and we heavily used the STM32 platform with CMSIS and HAL.
It is a matter of preference. The HAL libraries offer higher abstraction but come with some quirks that are not very intuitive. The HAL was (and sometimes still is) buggy. We encountered SPI transfer bugs which made the higher SPI transfer rates unusable because of a delay in the per-byte transfer.
CMSIS offers lower-level access but still abstracts away from simple bit manipulation. I don't think that direct register access is a great way to program anymore; at the very least, CMSIS should be used. But it is still a matter of opinion, preference, and what is right for the job at hand. If you need something quick: HAL. If you need really fine control: CMSIS.
(Side note: I believe that CMSIS has been phased out in favor of HAL, but it is still usable at this time.)
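To show what the CMSIS-level style looks like next to HAL: strictly speaking, the peripheral structs come from the vendor's CMSIS device header, so this is a sketch assuming an STM32F4 part and its stm32f4xx.h header, and the register layout shown is STM32-specific:

    #include "stm32f4xx.h"

    void led_on_pa5(void) {
        RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN;  /* enable the GPIOA clock       */
        GPIOA->MODER &= ~(3u << (5 * 2));     /* clear mode field for pin 5   */
        GPIOA->MODER |=  (1u << (5 * 2));     /* 01 = general-purpose output  */
        GPIOA->BSRR   =  (1u << 5);           /* atomic set: drive pin 5 high */
        /* GPIOA->BSRR = (1u << (5 + 16));       atomic reset: drive pin low  */
    }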
All of the other answers are valid.
Just wanted to chime in and say I have used STM32 regularly at my job (at two different companies). We used the HAL drivers at both. There are obviously some issues with them, but they are used widely enough that you can easily find support online. Additionally, CubeMX does a decent job with them, at least enough to get you started on using the peripherals, so the zero-to-something step feels smaller. But to get really optimized code and design, you may want to dive deeper to understand what each of the HAL drivers is doing, and decide which method works for your project, goals, and requirements.
I am trying to select a particular company's line of ARM Cortex-M microprocessors to work with for a project I want to do. Since all the companies license the architecture from ARM Holdings, I am wondering how much difference there can be in the hardware between brands. I am thinking the documentation, software productivity, and overall experience may be the deciding factors?
I have tried one company and their documentation was lacking! Thousands of pages of fluff about what wonderful stuff they make but very little info on how to use the things.
Mostly I am looking for good documentation. I don't need endless obsolete sample programs that don't compile and use mystery macros and functions! A line that includes a microprocessor with 2 ADCs, not just multiple channels multiplexed into 1 ADC, would be good, but I may end up using an external ADC.
Would anybody care to say what they recommend and why?
BTW: My history is programming C++, C# in MS Visual Studio for machine tool HMI as well as motion controllers.
Thanks In Advance
Chris
Most of your application has nothing to do with ARM and the Cortex-M. Each chip vendor adds its own (or sometimes purchased) peripherals around the ARM core, and most of your code is talking to peripherals. The processor core doesn't make the chip; the things other than the processor core make the chip.
You need to go do your research; Stack Overflow is not about doing it for you. You should be able to find a list of parts with the number of peripherals you want, independent of the processor core used. While a particular chip vendor may have different UARTs across their product line, or different GPIOs, ADCs, etc., you can still get a feel for a vendor without having to look at every part on that list as you narrow in.
No vendor has great docs, and some have bad docs; that is part of the exercise. All of them provide libraries, and same deal: nobody has great libraries, some have bad ones, but the point of the libraries is to hide the details. You need to do your homework and look at the docs, look at that code: can you live with it, can you replace or repair it, or is it better to move on to another vendor, or the same vendor and an alternate library?
1% of the job is writing the application; the other 99% is reading docs and doing experiments to make the peripherals do what you want them to do.
The same brand, or different brands with the same name of processor core, doesn't mean anything with respect to portability. If you read the ARM docs, as you should read the processor core docs for whatever parts you are evaluating or choosing, you will see that even if 7 vendors have products with the Cortex-M0, that core has compile-time and runtime options that each vendor can choose from, making either the code or the performance incompatible with other chips using a Cortex-M0. But the amount of code that would port anyway is a very, very small percentage of your project. Most of your project is the not-processor-core stuff.
Note that ARM makes a number of Cortex-M cores that are not 100% compatible with each other. If you feel the need to go with an ARM core, then narrow in on the one you want; that will narrow your choices as far as available chips go.
Built-in ADCs are there to save on chips. Depending on the specs you want, accuracy or performance, you may very well end up with an external ADC, which makes the specific microcontroller less important if the ADC and its specs are your primary requirement.
Software productivity also has little to nothing to do with the processor core. The vendors are going to cobble together an IDE with a compiler and libraries because folks expect that; that doesn't make any of them good or productive. The text editor alone goes the way of religion and politics with developers: there is no single editor or environment that is perfect for every developer. Developers have their ways of doing things, and some are compatible and some are not; some developers can bend, some can't. Very rarely do you have to use the tools they provide.
It is not possible for us to choose your part for you, nor is it possible for us to choose your development environment. That is not the purpose of Stack Overflow.
In the time it took to write your question and wait for an answer thus far, you could have looked at all the major vendors' docs several times over. I hope you didn't stop after the first one.
Hey, I'm fairly new to coding. I joined a coding club because I have an interest in the growing world of technology. The topics lately have been about computer architecture, and in the group we have been talking about the ARM processor.
Our club captain notified us of the following:
"Develop a software product for an ARM processor to do a task of your choice. However, before you start the task, you need to send me a paragraph to let me know what you are trying to solve."
Does anyone know of any references or code examples I could look up? Or any other helpful links?
The question does not make a lot of sense. The ARM architecture spans a wide range: fairly small microcontrollers aimed at low-power or fast-response tasks, mid-range devices which are good for set-top-box, camera, or router embedded applications, smartphones, right up to server-class products.
There are also variants which are designed for safety critical systems, or high reliability.
Each specific application will require different features, but at the level that you are writing code, you are unlikely to appreciate the factors which would help you to choose a device. Is it something portable, or needing a long battery life, or providing secure communication? If you had a specific device which you needed to use, the application would be more obvious (but still very wide).
I am going to give an answer to a generic question: how to code for an ARM processor (or any processor).
So, the first step is to understand what an ISA is! For a beginner, the ISA is a manual that the company releases which acts as an information source for those who wish to code for their specific processor. (In the architecture world, an ISA has a lot more implications; what I have given is somewhat of an oversimplification.) Now, I would recommend that you start looking at the ARM ISA. Go through the instructions that ARM supports; understand all the features in ARM, the state of the machine, the registers, the addressing modes, etc. Get a good understanding of what your version of ARM specifically supports (NEON, Thumb, vectors?).
Once you have the hang of the ISA, the next step is to decide what you want to build, since now you are aware of what your processor is capable of doing. You know your limitations as well, so now will be a good time to come up with an idea.
Now, the next step is setting up your environment. Read the specs of the board you will be using, set up the toolchain, and get to know the compiler/assembler and what flags they support.
Finally, write a simple, basic assembly program. Write code to add two numbers. I usually follow this convention: for assembly, I write the addition of two numbers as my first program; for high-level languages (C, C++, Java), I write HelloWorld. Execute your code and observe the output (a minimal add-two-numbers example is sketched below). I recommend that for your first program, you step through it in your debugger and observe the state of the machine after every instruction. (Keep the ISA handy.)
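As a sketch of that first add-two-numbers exercise, here is the single ARM ADD instruction wrapped in C inline assembly, so it can be built with an ordinary ARM GCC toolchain and stepped through in gdb. It assumes a 32-bit ARM target (ARM or Thumb-2 state, where ADD takes three registers):

    #include <stdio.h>

    int main(void) {
        int a = 2, b = 3, sum;
        __asm__("add %0, %1, %2"     /* sum = a + b, one ARM instruction      */
                : "=r"(sum)          /* output: any general-purpose register  */
                : "r"(a), "r"(b));   /* inputs: any general-purpose registers */
        printf("%d + %d = %d\n", a, b, sum);
        return 0;
    }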
Slowly up your game. Write programs with functions, strings, etc. Finally, get yourself some fun stuff and learn to interface with it (LEDs, LCDs, sensors, a camera).
Cheers!
Can code written for a Cortex-A5 built by one company be ported to a Cortex-A9 made by another company without too much difficulty?
I want to write some bare metal C code that runs on Atmel's SAMA5D4 (Cortex A5) that takes video from a CMOS camera with a parallel interface and encodes it to H.264. That chip can hardware encode at 720p.
Later, I may want to build a similar setup that can encode at 1080p, so I would want to upgrade to a more expensive chip, NXP i.MX 6Solo (Cortex A9).
So I want to know if I would encounter major headaches or if it would be rather easy to port later. My gut tells me it should be easy but I thought I'd better ask the experts first. If it's a huge headache though I may start with the more expensive chip first.
I'm new to this and not at all experienced with ARM chips or even much C but am willing to learn :-)
As captured in the comments, this task can be made easier if the code is initially written to clearly abstract the platform-specific detail from the application code. This is not as simple as just replacing the boot.s, and it isn't something that you can really claim to have done until you've tested the port.
Much of the architectural behaviour between the two processors will be unchanged, and the C compiler ought to be able to take advantage of micro-architectural optimisations, although this may not match the best you could achieve with some manual effort.
Where you are likely to see hard problems is at any points in your code that are sensitive to memory ordering, or to potential interactions between code and exceptions. The Cortex-A9 is significantly more out-of-order than the Cortex-A5, and the migration may expose bugs in your code. Libraries ought to be stable now, but there is still a risk to be aware of. Anticipating this sort of problem is quite hard, and if you are writing the majority of the code yourself, you probably need to build in some contingency for the porting task. Once the code is stable on the A9, issues of this sort are less likely to show up on either the A5 (to give a lower-cost production option) or more recent high-performance cores.
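As a sketch of the abstraction mentioned at the start of this answer (all names here are hypothetical, not Atmel or NXP APIs): the application calls only board_* functions, and each chip gets its own implementation selected at build time.

    /* board.h -- the only interface the application sees */
    void board_init(void);
    void board_led_set(int on);

    /* app.c -- identical source for both chips */
    int main(void) {
        board_init();
        board_led_set(1);
        for (;;) { }
    }

    /* One implementation per target, chosen by a compile flag. */
    #if defined(BOARD_SAMA5D4)
    void board_init(void)      { /* Atmel clock/PIO setup would go here  */ }
    void board_led_set(int on) { (void)on; /* write the SAMA5D4 PIO regs */ }
    #elif defined(BOARD_IMX6)
    void board_init(void)      { /* NXP clock/GPIO setup would go here   */ }
    void board_led_set(int on) { (void)on; /* write the i.MX6 GPIO regs  */ }
    #endif

The camera and H.264 paths would sit behind the same kind of boundary, so that only the board_*.c files need to change when you move from the A5 part to the A9 part.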
If I cut and paste a chapter of my math textbook into my biology textbook, will that make sense? They are both written in the English language.
No, it makes no sense. Even assuming you are sticking to common ARM instructions for the code (the English), the code isn't going to work from one chip (the math book) to another (the biology book). The majority of the difference is in the vendors' logic outside the ARM core; there is no reason whatsoever to assume that two vendors have the same peripherals at the same addresses that work exactly the same, bit for bit, gate for gate.
So in general, bare metal will NOT work and does NOT work like this. A very high-level printf-this-or-that C program, sure, because you have many layers of abstraction (including the target); it doesn't even have to be ARM to ARM. That said, it is certainly possible for you to make, or if very lucky find, a hardware abstraction layer that hides the differences between the chips; above that layer you can ideally write that portion of the project and port it. As far as ARM vs ARM goes, the differences should be handled by the compiler, and again it doesn't even have to be ARM to ARM; it could be ARM to MIPS. Any assembly language you may have, or any core-specific accesses/instructions, would need to be checked against the two technical reference manuals to ensure they are compatible. Probably not at the Cortex-A level, but for Cortex-Ms there are some core-specific items in the address space that can affect high-level language code; for something like this to work you would have to hide that in the abstraction layer.
Generally NO. ARM is the underlying core, and the chip differences have nothing to do with ARM, so it's like cutting and pasting a chapter from a mystery novel you are writing in English into a biography you are also writing in English and hoping that chapter makes sense in the latter book.
I am looking for some information on programming ARM devices, in a particular non-particular way [1]. Assume that I am writing code for an ARM processor that is used in a machine similar to an Apple II/Atari "**" XL/Commodore 64/DOS PC, or even something that runs a multitasking OS like VMS or SunOS. Assume further that any peripheral/OS-specific stuff has already been abstracted into subroutines. Examples of this type of programming might be: a text/curses-based game like rogue or moria; a curses-based word processor (or rather something based on a curses-like library); or a modem/terminal program.
I'm looking for two things. The first is materials to help learn ARM programming; though the ARM System Developer's Guide may be enough, other resources would be helpful. I'm looking in particular for something which explains the software (and related hardware, i.e. registers) differences between the various generations of the processor.
The other thing I'm looking for is a development environment which includes emulation, a decent macro assembler, and a debugger, along with anything else that will help me see what is going on inside my programs.
[1] OK. Sorry I just couldn't resist that particular pun.
You have the choice of using the ARM Cortex-M or Cortex-A series. If you are going to develop high-end applications such as those which run on smartphones/tablets, then learning about the Cortex-A is your choice. If you are going for an emphasis on hardware/low-level stuff such as controllers, then you should go for the ARM Cortex-M. If you are into real-time applications (which I doubt is your case), then use the R series.
Most of these new ARM generations are based on the ARMv7 architecture and ISA, so reading the manuals on that could get you started. Most recently, a new ARMv8 architecture and ISA have been announced; it supports 64-bit processing.
Download the reference and technical manuals from the ARM site to learn about the HW/peripherals.
I would go with auslen's suggestion of buying a board. You could go with TI's Stellaris LaunchPad, which has an ARM Cortex-M4F processor (supports floating point and SIMD) and sells for $12.99:
http://www.ti.com/ww/en/launchpad/stellaris_head.html?DCMP=stellaris-launchpad&HQS=stellaris-launchpad-b
or you could go with ST's Discovery board (based on the same processor as above), which has audio, an accelerometer, and USB on board. It sells for $14.99: http://www.st.com/internet/evalboard/product/252419.jsp
or the STM F3 board ($10.99):
http://www.st.com/internet/evalboard/product/254044.jsp
In any case, you need to check the examples which come with the board, without which you won't get anywhere easily. The board comes with its own drivers, and everything is abstracted in a way that lets you get started from there!
As for an OS, if your interest is an RTOS, ARM provides the CMSIS RTOS for its M-series processors:
http://www.arm.com/products/processors/cortex-m/cortex-microcontroller-software-interface-standard.php
This book offers an introduction to the generations of ARM processors, then focuses on the Cortex-M3. It covers its ISA with lots of assembly code, and it also addresses the built-in peripherals and how to start up with C:
http://www.amazon.ca/Definitive-Guide-ARM-Cortex-M3/dp/185617963X/ref=sr_1_1?ie=UTF8&qid=1352506616&sr=8-1
good luck
Go to infocenter.arm.com and look at the various ARM ARMs (Architecture Reference Manuals) and TRMs (Technical Reference Manuals) for the various architectures and cores. These manuals are better than most other companies' documentation. Except for the new 64-bit stuff, the difference from one architecture to the next is somewhat subtle as far as the instruction set goes. The major differences have to do with the peripherals: the MMU is a slow-changing thing, the interrupt manager has taken big steps, and the FPU has been replaced wholesale at least once if not twice (if you even have an FPU; having one is the exception, not the rule, as it consumes huge real estate for such little return).
I am confused by your question. I think it is important to draw the line between learning the architecture/instruction set and learning the operating system calls; these are two separate things. For operating system stuff you rarely need to look beyond the source code (C/C++); the limited asm is for hand-tuned C libraries, bootstrap code, and interrupt wrappers. Likewise, I would separate the architecture (registers, instructions, etc.) from the peripherals (the cores from ARM generally have very few peripherals; the bulk are in the vendor-specific stuff) as a separate learning curve. The peripherals have little to do with asm and the instruction set, so they are no different from learning a peripheral on any other platform: just some addresses you read and write.
If you are looking for non-operating-system bare metal, the STM32F0 Discovery is $10; I highly recommend it. It looks like TI has a Stellaris LaunchPad for just a little more (I am waiting for mine to arrive so I can't say much about them, and shipping is free from TI, so the cost is basically the same as the STM32 boards). The STM32F4 Discovery is about $20, and with all the stuff the Cortex-M4 has, I would barely call it a microcontroller.
Moving up to Linux-capable (or designed-for-Linux) systems, there are the Raspberry Pi, BeagleBone, and OpenRD, and on up (PandaBoard). Again, though, there you are writing just another Linux C/C++ program, so there isn't much excitement specific to the platform (the entertainment is the same for all platforms) and very little ARM knowledge is required, if any. It is very easy to use any of these platforms for bare-metal programming, giving you race-car-like performance compared to the ARM-based microcontrollers.
I have a Thumb simulator, which you are probably not interested in. gdb has the ARMulator, which was the cornerstone of the company back in the day. SkyEye (or something like that) has an ARM instruction set simulator, as does QEMU; none of them will give you great visibility beyond what gdb can provide. OpenCores has the Amber project, an ARMv2 clone in which you can see the close relationship to the ARMv4 and newer, for which you will not find RTL without a box full of cash. With my ARM and chip experience (no, I do not work for ARM), I do find the Amber project worth looking at, but many folks won't know what to do with it and really are not interested in that level of visibility. (It is instruction-compatible and a good design, but don't think you are looking at an ARM design; no secrets there.) You can learn the basic ARM architecture from it and then move on to hardware, for example...
With the current microcontrollers being Cortex-M based, you might find the older microcontrollers a better stepping stone to the upper-end ARM cores: ARM7TDMI-based stuff like the SAM7S and others from NXP, ST, Atmel, etc., which you can still find at SparkFun, Micro Controller Pros, and other places for Arduino-like prices.