Hey, I'm fairly new to coding. I joined a coding club because I have an interest in the growing world of technology. The topics lately have been about computer architecture, and in the group we have been talking about ARM processors.
Our club captain notified us of the following:
"Develop a software product for an ARM processor to do a task of your choice. However, before you start the task, you need to send me a paragraph to let me know what you are trying to solve."
Does anyone know of any references or code examples I could look up? Or any other helpful links?
The question does not make a lot of sense as asked. The Arm architecture spans a wide range: fairly small microcontrollers aimed at low-power or fast-response tasks, mid-range devices which are good for set-top-box, camera or router embedded applications, smartphones, right up to server-class products.
There are also variants which are designed for safety critical systems, or high reliability.
Each specific application will require different features, but at the level at which you are writing code, you are unlikely to appreciate the factors which would help you to choose a device. Is it something portable, or needing a long battery life, or providing secure communication? If you had a specific device which you needed to use, the application would be more obvious (but still very wide).
I am going to answer a more generic question: how do you code for an ARM processor (or any processor)?
So, the first step is to understand what an ISA is. For a beginner, think of the ISA as the manual which the company releases and which acts as an information source for those who wish to code for their specific processor. (In the architecture world, an ISA has a lot more implications; what I have given is somewhat of an oversimplification.) Now, I would recommend that you start looking at the ARM ISA. Go through the instructions that ARM supports, understand all the features in ARM, understand the state of the machine, registers, addressing modes, etc. Get a good understanding of what your version of ARM specifically supports (NEON, Thumb, vector extensions, and so on).
Once you have the hang of the ISA, the next step is to decide what you want to build, since you are now aware of what your processor is capable of doing. You know your limitations as well. So, now is a good time to come up with an idea.
Now, the next step is setting up your environment. Read the specs of the board which you will be using. Set up the environment. Get to know the compiler/assembler and what flags they support.
Finally, write some simple basic assembly code: a program to add two numbers. I usually follow this convention: for assembly, I write the addition of two numbers as my first program; for a high-level language (C, C++, Java), I write Hello World. Execute your code and observe the output. I recommend that for the first program you step through it in your debugger and observe the state of the machine after every instruction (keep the ISA handy).
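If it helps to see what that first program might look like, here is a minimal C sketch of the same "add two numbers" exercise (the assembly version is just a couple of MOV/ADD instructions); the only goal is to have something tiny to single-step while you watch the registers change:

#include <stdint.h>

int main(void)
{
    volatile int32_t a = 3;    /* volatile keeps the compiler from   */
    volatile int32_t b = 4;    /* optimising the whole exercise away */
    volatile int32_t sum;

    sum = a + b;               /* the "add two numbers" exercise     */

    while (1) { }              /* bare-metal programs never return   */
}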
Slowly up your game. Write programs with functions, strings, etc. Finally, get yourself some fun stuff and learn to interface with it (LEDs, LCDs, sensors, a camera).
Cheers!
Related
I have recently bought an STM32 NUCLEO dev kit and wondered if this is what an actual embedded systems engineer would use in industry when developing a product?
I'm using Keil uVision 5, STM32CubeMX and the STM32 ST-LINK Utility to develop certain projects. As I am used to PIC, working with registers like PORTA, OSCCON, TIMER0 etc., I see that Keil uVision 5 uses ready-made functions like HAL_GPIO_TogglePin(.........) etc. Is this the usual way it's done in industry, or do people work more directly with the registers?
It's heavily opinion-based and I wouldn't be surprised if this question got closed for that reason. This answer only touches on a few aspects of what you're asking about. It's a very broad topic and it's going to be hard - if not impossible - to include everything in one post which wouldn't end up being several pages long. However, to give you my perspective on the topic, while trying to remain unbiased, the short answer is... it depends.
If you're asking about what is used in most common cases, it's likely going to be the HAL (previously StdPeriph) functions you've mentioned. The reason is - they get the job done in most common cases. After all it always comes down to what the cost of creating a product is going to be. If HAL functions are "good enough" for the purpose, they're going to be used simply because they're faster to develop with. The higher the development cost, the more you'll want to cut it (or move it elsewhere) and using abstractions is one way of doing so.
However, even though I think it's safe to assume that HAL / Std Periph / any other (including proprietary) abstraction layer is generally used, it's not always the case for at least two reasons I can think of:
Existing functions may not be suitable for your purpose. Taking HAL as an example, it works pretty well for most common cases, but sometimes your needs may be so specific that you'll have to go and mess "under the hood", often ending up either writing your own variation of the functions or building something new on top of HAL. Personally I can think of at least a few examples where HAL functions weren't exactly what I needed. It doesn't necessarily mean that the library is bad; it's just that sometimes the requirements are very specific.
Messing with registers directly may sometimes be required for performance reasons. HAL and similar libraries are an abstraction layer, and like any abstraction, they take more time to execute than using the registers directly. If you're trying to squeeze the absolute maximum out of a given peripheral, you'll sometimes have to go down to the register level.
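To make that trade-off concrete, here is a minimal sketch of both approaches doing the same job, toggling the user LED (PA5 on many Nucleo-64 boards). The clock and pin configuration, which CubeMX-generated code normally does for you, is assumed to have been done already:

#include "stm32f4xx_hal.h"   /* pick the HAL header for your STM32 family */

void toggle_led_hal(void)
{
    HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5);   /* portable, self-documenting */
}

void toggle_led_register(void)
{
    GPIOA->ODR ^= GPIO_PIN_5;   /* one read-modify-write, no call overhead,
                                   but tied to this specific device */
}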
Now to a more biased portion of my answer... I can see why you ask this question. Coming from the PIC world, where flash or CPU clocks were more precious, it does make sense to use registers directly there. In the case of the STM32, it's not as critical anymore. Having said that, you'll sometimes stumble upon opinions that "using registers is the only true way", but personally I find such discussions end up being purely academic. I see registers, and any abstraction built on top of them, as tools, and you should use the right tools for the right job. Two examples of NOT using the right tools:
You use only registers as "the only right way", either because you believe it yourself or because you've been told so. Your products take twice as long (if not more) to develop, and your code takes less space in flash (so now you use 46% of 1MB flash instead of 48%). Code that is performance-critical meets its goals. Code that has relaxed execution-time constraints is also super efficient, but it doesn't affect the end customer much, if at all. Your code is also less reusable - you find yourself rewriting the same portions of code over and over every time you release a new product for a new MCU family.
You only use HAL / any other similar abstraction because "you didn't pick such a powerful MCU to have to go down to register level", or because you're told you should never ever touch registers. You develop much faster and you're able to release two products in the time it would have taken to release one using registers. However, when there are execution-time constraints / transmission speeds you have to hit, you find yourself picking MCUs more powerful than should theoretically be needed. Sometimes you find yourself writing wrappers around HAL because it doesn't give you exactly the functionality you need - it feels like making things more complicated than they should be.
So after all, if there is anything to take away from what I'm trying to say, it is that you should use what is suitable for the job on a case-by-case basis. In the case of the STM32, you nowadays have 3 options: HAL (top abstraction level), HAL LL (low-level abstraction - often simple wrapper functions around register accesses) or using registers directly. Which one you choose should come from what your requirements are.
I use Nucleo boards all the time. They allow me to start writing software before I have the actual hardware ready.
HAL vs. registers is more the programmer's choice than an "industry standard". I personally use the HAL drivers when programming more complicated peripherals like USB and Ethernet, to avoid writing the full stacks from scratch.
I have just finished my degree and we heavily used the STM32 platform with CMSIS and HAL.
It is a matter of preference. The HAL libraries offer higher abstraction but come with some quirks that are not very intuitive. The HAL was (and sometimes still is) buggy. We encountered SPI transfer bugs which made the higher SPI transfer rates unusable because of a delay in the per-byte transfer.
CMSIS offers lower-level access but still abstracts away from simple bit manipulation. I don't think that direct register access is a great way to program anymore, and at least CMSIS should be used. But it is still a matter of opinion, preference and what is right for the job at hand. If you need something quick: HAL. If you need really fine control: CMSIS.
(Side note: I believe that CMSIS has been phased out in favor of HAL, but it is still usable at this time.)
All of the other answers are valid.
Just wanted to chime in and say I used the STM32 regularly at my job (and at two different companies). We used the HAL drivers at both companies. There are obviously some issues with them, but they are used regularly enough by others that you can easily find support online. Additionally, CubeMX does a decent job with them, at least enough to get you started on using the peripherals, so the 0-to-something step feels smaller. But to get really optimized code and design, you may want to dive deeper to really understand what each of the HAL drivers is doing and decide which method works for your project, goals, and requirements.
I am trying to select a particular company's line of ARM Cortex-M microcontrollers to work with for a project I want to do. Since all the companies license the architecture from ARM Holdings, I am wondering how much difference there can be in the hardware between brands? I am thinking the documentation, software productivity and overall experience may be the deciding factors.
I have tried one company and their documentation was lacking! Thousands of pages of fluff about what wonderful stuff they make but very little info on how to use the things.
Mostly I am looking for good documentation. I don't need endless obsolete sample programs that don't compile and use mystery macros and functions! A line that includes a microcontroller with 2 ADCs, not just multiplexed channels into 1 ADC, would be good, but I may end up using an external ADC.
Would anybody care to say what they recommend and why?
BTW: My history is programming C++, C# in MS Visual Studio for machine tool HMI as well as motion controllers.
Thanks In Advance
Chris
Most of your application has nothing to do with ARM and the Cortex-M. Each chip vendor adds its own peripherals (or sometimes purchased ones) around the ARM core. Most of your code is talking to peripherals. The processor core doesn't make the chip; the things other than the processor core make the chip.
You need to go do your research; Stack Overflow is not about doing it for you. You should be able to find a list of parts with the number of peripherals you want, independent of the processor core used. While a particular chip vendor may have different UARTs across their product line, or different GPIOs, ADCs, etc., you can still get a feel for a vendor without having to look at every part on that list as you narrow in.
No vendor has great docs, some do have bad docs; that is part of the exercise. All of them provide libraries, and it's the same deal: nobody has great libraries, some have bad ones, but the point of the libraries is to hide the details. You need to do your homework and look at the docs, look at that code. Can you live with it, can you replace it or repair it, or is it better to move on to another vendor, or the same vendor with an alternate library?
1% of the job is writing the application; the other 99% is reading docs and doing experiments to make the peripherals do what you want them to do.
The same brand, or different brands with the same name of processor core, doesn't mean anything with respect to portability. If you read the ARM docs, as you should read the processor core docs for whatever parts you are evaluating or choosing, you will see that even if 7 vendors have products with the Cortex-M0, that core has compile-time and run-time options that each vendor could choose from, making either the code or the performance incompatible with other chips using a Cortex-M0. But the amount of code that would port anyway is a very, very small percentage of your project. Most of your project is the not-processor-core stuff.
Note that ARM makes a number of Cortex-M cores that are not 100% compatible with each other. If you feel the need to go with an ARM core, then narrow in on the one you want; that will narrow your choices as far as available chips go.
Built-in ADCs are there to save on chips. Depending on the specs you want, accuracy or performance, you may very well end up with an external ADC, which makes the specific microcontroller less important if the ADC and its specs are your primary requirement.
Software productivity also has little to nothing to do with the processor core. The vendors are going to cobble together an IDE with a compiler and libraries because folks expect that; that doesn't make any of them good or productive. The text editor alone goes the way of religion and politics with developers: there is no single editor or environment that is perfect for every developer. Developers have their ways of doing things, and some are compatible and some are not; some developers can bend, some can't. Very rarely do you have to use the tools they provide.
It is not possible for us to choose your part for you, nor is it possible for us to choose your development environment. That is not the purpose of Stack Overflow.
In the time it took to write your question and wait for an answer thus far, you could have looked at all the major vendors' docs several times over. I hope you didn't stop after the first one.
Can code written for a Cortex-A5 built by one company be ported to a Cortex-A9 made by another company without too much difficulty?
I want to write some bare metal C code that runs on Atmel's SAMA5D4 (Cortex A5) that takes video from a CMOS camera with a parallel interface and encodes it to H.264. That chip can hardware encode at 720p.
Later, I may want to build a similar setup that can encode at 1080p, so I would want to upgrade to a more expensive chip, NXP i.MX 6Solo (Cortex A9).
So I want to know if I would encounter major headaches or if it would be rather easy to port later. My gut tells me it should be easy but I thought I'd better ask the experts first. If it's a huge headache though I may start with the more expensive chip first.
I'm new to this and not at all experienced with ARM chips or even much C but am willing to learn :-)
As captured in the comments, this task can be made easier if the code is initially written to clearly abstract the platform-specific detail from the application code. This is not as simple as just replacing boot.s, and it isn't something that you can really claim to have done until you've tested the port.
Much of the architectural behaviour between the two processors will be unchanged, and the C compiler ought to be able to take advantage of micro-architectural optimisations, although this may not be as good as what you could achieve with some manual effort.
Where you are likely to see hard problems is any points in your code that are sensitive to memory ordering or potentially interactions between code and exceptions. The Cortex-A9 is significantly more out-of-order than the Cortex-A5, and the migration may expose bugs in your code. Libraries ought to be stable now, but there is still a risk to be aware of. Anticipating this sort of problem is quite hard and if you are writing the majority of the code yourself you probably need to build in some contingency for the porting task. Once the code is stable on A9, issues of this sort are less likely to show up on either A5 (to give a lower cost production option), or more recent high performance cores.
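To illustrate the kind of memory-ordering assumption that can bite here (the buffer, the flag and the scenario below are made up; only the DMB instruction itself is real): if your code publishes a buffer to another observer (a DMA engine, another core, an interrupt handler) and then sets a "ready" flag, it needs an explicit barrier, and code that happened to work on the A5 may expose a missing barrier on the A9.

#include <stdint.h>

volatile uint32_t frame_buf[64];     /* hypothetical shared buffer  */
volatile uint32_t frame_ready = 0;   /* flag polled by the consumer */

void publish_frame(const uint32_t *src)
{
    for (int i = 0; i < 64; i++)
        frame_buf[i] = src[i];

    /* Ensure every buffer write is observable before the flag is set. */
    __asm volatile ("dmb" ::: "memory");

    frame_ready = 1;
}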
If I cut and paste a chapter of my math textbook into my biology textbook, will that make sense? They are both written using the English language.
No, that makes no sense. Assuming you are sticking to common ARM instructions for the code (the English), the code isn't going to work from one chip (the math book) to another (the biology book). The majority of the difference is in the vendor's logic, which is outside the ARM core; there is no reason whatsoever to assume that two vendors have the same peripherals at the same addresses that work exactly the same bit for bit, gate for gate.
So in general bare metal will NOT work and does NOT work like this. A very high-level printf-this-or-that C program, sure, because you have many layers of abstraction, including over the target; it doesn't even have to be ARM to ARM. Now, having said that, it is certainly possible for you to make (or, if very lucky, find) a hardware abstraction layer that hides the differences between the chips; at that layer you can then ideally write that portion of the project and port it. As far as ARM vs. ARM goes, the differences should be handled by the compiler, and again it doesn't even have to be ARM to ARM, it could be ARM to MIPS. Any assembly language you may have, or any core-specific accesses/instructions, would need to be checked against the two technical reference manuals to ensure they are compatible. Probably not at the Cortex-A level, but for Cortex-Ms there are some core-specific items in the address space that can affect high-level language code; for something like this to work you would have to hide that in the abstraction layer.
Generally NO. ARM is the underlying core, and the chip differences have nothing to do with ARM, so it's like cutting and pasting a chapter from a mystery novel you are writing in English into a biography you are also writing in English and hoping that chapter makes sense in the latter book.
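If you do go the route of a thin abstraction layer of your own, a common shape for it is a small header that the portable application code includes, with one implementation file per chip or board. This is only a sketch; the register addresses and status bit below are placeholders, and the real ones come from each vendor's reference manual:

/* board.h -- the only interface the portable application code sees */
void board_uart_init(void);
void board_uart_putc(char c);

/* board_vendor_a.c -- one file like this per chip/board you support */
#define UART_STAT  (*(volatile unsigned int *)0x40001000u)  /* placeholder */
#define UART_DATA  (*(volatile unsigned int *)0x40001004u)  /* placeholder */
#define TX_READY   (1u << 5)                                /* placeholder */

void board_uart_init(void)
{
    /* clock enable and baud-rate setup go here, per the datasheet */
}

void board_uart_putc(char c)
{
    while ((UART_STAT & TX_READY) == 0) { }   /* wait for space in the FIFO */
    UART_DATA = (unsigned int)c;
}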
I do apologize if this is a duplicate; even though I did search around here for a similar question, I only found one.
So my programming team in my engineering class currently uses a 32-bit 72MHz ARM Cortex-M3 microcontroller. We're all seniors in high school, and we're struggling to use the libraries and whatnot, mostly due to poor docs from the manufacturer of the Bioloid Premium we're using. However, we are about to purchase an 8-bit 16MHz AVR microcontroller because it has a wider range of support online and an easier-to-use library, plus more documentation. My question here is, would the decreased bit count as well as the lower processor speed really matter to us? We're not going to be doing a lot of processor-intensive programming; it's more like a basic robotics class.
So, what are the main differences between an 8-bit 16MHz AVR microcontroller and a 32-bit 72MHz ARM Cortex-M3 microcontroller?
Also, (if it holds any relevancy):
We're using a Bioloid Premium by Robotis w/ CM530 (ARM), about to switch to CM510 (AVR).
We'll be using Embedded C instead of Robotis' RoboPlus IDE as our programming environment.
I have googled around, found out what a bit count is and more about its impact on processor speed, but not a lot of documents about it give a clear and concise answer, and that's why I came here, because this is a place for clear and concise answers. (So please don't tell me to Google it when I've spent the past twenty minutes doing so.)
We're using a Bioloid Premium by Robotis w/ CM530 (ARM), about to switch to CM510 (AVR). We'll be using Embedded C instead of Robotis' RoboPlus IDE as our programming environment.
I looked around at the products you refer to, and your question seems to be missing the issues you should really be concerned with.
The Bioloid Premium kit looks pretty sweet, with all the parts put together and configured for you already. Much of a robotics course is usually concerned with designing the hardware. You are not going to be doing any of that, so your tasks really come down to programming the hardware you are given.
That said, there is a world of difference between the RoboPlus IDE, which seems similar to the Lego Mindstorms drag and drop interface, and writing code in C using AVR Studio!
I have used AVR Studio before, but there was a major change in versions recently. You might need to modify the example programs to work in the latest version, and you will probably need some help with that.
It looks like they supply you with enough example code to use the peripherals, but I don't see right away how to write a main() function to do something like follow a plan. Perhaps there are some examples online.
But to answer your question, you are probably not going to run into any limitations in terms of processor capacity. They switched to a cheaper and more powerful processor to write the newer version of their control software, but the old hardware will be great, too. Working in C, you will become familiar with how to actually use an MCU, and that knowledge will transfer to other chips. The AVR family is a great one to start with. It has lots of features and is pretty sensible in how it works, with lots of documentation and third-party support. Definitely download the datasheet from Atmel for the chip you are using, although it is a dense and difficult read. You will only need to read parts of it. Also, check out the AVR Freaks forums.
This sounds like a fantastic high school course. Have fun with it!
My question here is, would the decreased bit-count as well as the lower processor speed really matter to us? [...] So, main differences between an 8-bit 16MHz AVR microprocessor and a 32-bit 72MHz ARM Cortex-M3 microprocessor?
What a cool project! This is a great opportunity to learn a bit about how processors work and what bit-width and clock speed mean.
Clock speed is conceptually the easiest to understand. Microcontrollers like the AVR and ARM use a clock crystal that sets the speed the circuitry operates at. With a faster clock, the processor can execute more instructions in the same amount of time. The 72MHz clock is more than 4x the 16MHz one, so the ARM processor is going to be able to run more than 4x faster than the AVR. But what does "run faster" really mean? Processors execute instructions. At the basic level, these are instructions like "add two numbers" and "make the voltage on this pin high". The ARM processor is going to be a lot faster here, but consider what hardware it's going to be talking to: servos. Servo motors listen to a fairly low-speed PWM signal, so at that speed the difference between 72MHz and 16MHz isn't going to be that relevant.
But what about bit width? This one is a bit more tricky. It doesn't really affect the speed at which your processor runs, but it affects the complexity of the instructions it executes. Let's say that you want to add two really big numbers together, numbers like 100,000 and 200,000. When we add those together on paper, it's just one step. But an 8-bit processor like the AVR can only operate on 8-bit values (0 to 255) in a single instruction, so in order to operate on numbers that large it'll need to break the addition up into several smaller steps. The 32-bit ARM, on the other hand, can work on numbers that large directly, so it does the addition in one step. I hope that makes sense.
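For example, here is a small C sketch of that idea; the function is deliberately trivial, because the interesting part is what the compiler turns it into on each chip:

#include <stdint.h>

/* On a 32-bit Cortex-M3 this compiles to a single ADD instruction.
   On an 8-bit AVR the compiler has to chain four 8-bit additions
   (ADD, then ADC three times) to build the same 32-bit result. */
uint32_t add_big(uint32_t a, uint32_t b)
{
    return a + b;   /* e.g. 100,000 + 200,000 = 300,000 */
}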
Anyway, I've done a lot of work with servos on even slower processors than your 16MHz AVR. It'll most likely be just fine for what you want to do, and like you found it has a much more active hobbyist community. And if you're looking for quick examples of code, the Cornell 4760 page has some great projects that you could learn from.
Even before I learned programming I was fascinated by how robots could work. Now I know how the underlying programming instructions would be written, but what I don't understand is how those instructions are followed by the robot.
For example, if I wrote this code:
object = Robot.ScanSurroundings(300, 400);
if (Objects.isEatable(object))
{
    Robot.moveLeftArm(300, 400);
    Robot.pickObject(object);
}
How would this program be followed by the CPU in a way that would make the robot do the physical actions of looking to the left, moving its arm, and so on? Is it done primarily in binary language/ASM?
Lastly, where would I go if I wanted to learn how to create a robot?
In the end, something has to break down the high level commands into very low level commands. Something has to translate "Pick up the cup" to how to move the arm (what angles the joints should be at) to the hardware commands which actually turn the motors.
There are frameworks which try to provide some amount of this translation, including (but not limited to):
Player/Stage
Microsoft Robotics Studio
Carmen
CLARAty
Lego Mindstorms
However, since robotics research is interested in every layer of the system, there aren't many systems which provide the entire translation stack. If you're looking into getting into robotics, there are several systems which attempt to make this easier (again, a random sample):
Lego Mindstorms
TeRK
VEX Robotics
Failing that, sites such as Make even provide guides to building robot projects to start from. The challenge is to find a project which you are excited about, and go to town!
You should check out Microsoft Robotics Studio (MRS). They have many videos/screencasts and written tutorials. Additionally, Channel9 has many videos, interviews, etc., on the robotics subject, including demonstrations and interviews with developers of MRS.
In most modern robots you would have an Inverse Kinematic model of the mechanism, in this case the arm, that converts the spatial coordinates into positions for the joints of the arm. These joints are usually moved by servo motors. To smoothly move the arm, you need a series of intermediate joint positions defining the path you want the arm to follow. You also have to worry about the velocities of the joints, which together control the speed of the "hand" at the end of the arm.
While the arm is moving your servo system will be getting feedback about its actual position. Simple servo systems may use a basic PID feedback loop to adjust the motors. More complex systems will include feed-forward parameters which compensate for inertia, gravity, friction, and so on. These can become very sophisticated.
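As a rough sketch of the simple end of that in C (the gains and the read_joint_angle()/set_motor_pwm() helpers are hypothetical stand-ins for whatever your servo hardware actually provides):

typedef struct {
    float kp, ki, kd;        /* tuning gains                 */
    float integral;          /* accumulated error            */
    float prev_error;        /* error from the previous step */
} pid_state;

extern float read_joint_angle(void);    /* hypothetical sensor read   */
extern void  set_motor_pwm(float out);  /* hypothetical motor command */

float pid_step(pid_state *p, float setpoint, float measured, float dt)
{
    float error = setpoint - measured;
    p->integral += error * dt;
    float derivative = (error - p->prev_error) / dt;
    p->prev_error = error;
    return p->kp * error + p->ki * p->integral + p->kd * derivative;
}

/* Called at a fixed rate (say 1 kHz) for one joint of the arm. */
void joint_control_tick(pid_state *p, float target_angle)
{
    float effort = pid_step(p, target_angle, read_joint_angle(), 0.001f);
    set_motor_pwm(effort);
}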
The real fun starts when you have to allow for obstacles in the space around the robot. You have to sense the obstacle and figure out how to avoid it and still reach the destination.
I just have to add something about Arduino projects to this because I don't see it mentioned above.
There is a very low bar for entry into Arduino-based robotics projects. The "sketch" programs that you write for the hardware are very easy to pick up and similar to C syntax. If you don't know your transistors from your resistors, these boards still allow you to do a lot with plug-in hardware and additional "shields" that extend the base computer board.
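For a sense of how low that bar is, the classic first sketch just blinks the on-board LED; pinMode, digitalWrite, delay and LED_BUILTIN are all part of the standard Arduino core:

void setup() {
    pinMode(LED_BUILTIN, OUTPUT);      /* on-board LED as an output */
}

void loop() {
    digitalWrite(LED_BUILTIN, HIGH);   /* LED on  */
    delay(500);
    digitalWrite(LED_BUILTIN, LOW);    /* LED off */
    delay(500);
}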
It's very fun, very flexible and a way to get your code interacting with the real world. Plus it's "open hardware", very much along the lines of open-source software.
Robots work by interacting with hardware. The bridge from your code is often done through different types of I/O ports. It could simply be an RS232 cable, for example (you know, those old COM1 ports). The hardware will be composed of motors (such as servo motors) and sensors (such as ultrasound to sense obstacles, lasers to measure distance, or switches).
You don't need to use assembler to do that; there are lots of languages (if not most) that can do it, but it requires knowledge of how to interact with hardware, like writing a driver. It also requires at least basic electronics if you want to build the robot yourself.
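As a rough sketch of the PC side of that RS232 idea (the device path, baud rate and command byte are all made up; a real controller defines its own protocol):

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_robot_link(const char *dev)            /* e.g. "/dev/ttyUSB0" */
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                             /* raw 8-bit bytes */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

void send_command(int fd, unsigned char opcode)
{
    write(fd, &opcode, 1);                       /* one made-up command byte */
}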
If you're interested, I suggest you have a look at this book which is a good primer.
Also, you could try out programming a BASIC Stamp; it's pretty easy following the tutorials and it will give you a good start on how to build robots. It's not too expensive and you'll be interacting with hardware in no time.
Good luck and have fun!
If you get good enough at programming, you may discover that you don't even actually need a robot to test much of the hardest code you'll need to write... (IE, making a robot see and recognize a scene always fascinated me... But at some point, I realized that the physical robot required for this problem is the easy part... The software is the hard part!)...
It is probably easier to use a more high-level language to describe the robot's behaviors and intelligence, and leave the low-level language to the actions (move arm, walk, stop). There is a lot of research into what is called the BDI architecture for intelligent agents; google it.
You can find out more at this site; it's a DSL for describing agent behavior, made in Java. It's called the Jason interpreter and the language is AgentSpeak(L).
Find a local FIRST robotics team and volunteer to be a mentor. FIRST is a robotics competition for middle and high school kids. The goal is that the kids do all of the work to build, program, test, and run the robot, but you will still have lots of opportunities to dig in and really learn the software. They are using LabVIEW by National Instruments and, as of Feb 8, have just begun regional competition for this year. LabVIEW is a graphical programming environment that interfaces with NI hardware to let you program motors, actuators, and sensors. The NI stuff is pretty slick and is pretty easy to use, plus it's provided free to each team, so you don't have to buy the hardware and software yourself (at least to get started). Plus, you get the added bonus of helping a new generation of engineers get their start.
You would have to have a driver that interfaced with the hardware (most likely a STAMP or FPGA with motors etc...). You would then call the function me.moveLeftArm(x,y); and the driver would know that moveLeftArm() means to move an actuator for X seconds/milliseconds/degrees.
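As a hedged sketch of what such a driver call might boil down to (write_pwm_us() is a made-up stand-in for whatever the STAMP/FPGA/MCU actually does with the value, and the 1-2 ms pulse range is the usual convention for hobby servos):

extern void write_pwm_us(int channel, int microseconds);   /* hypothetical */

void moveLeftArm(int angle_degrees)
{
    /* Map 0..180 degrees onto the 1000..2000 us pulse width that
       standard hobby servos expect. */
    int pulse_us = 1000 + (angle_degrees * 1000) / 180;

    write_pwm_us(0, pulse_us);   /* channel 0 = left arm, by convention */
}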
I'm sure that you could find a kit that does robot programming.
If you want a Java alternative, I can recommend the book Linux Robotics. It has a lot of good information about where to get kits, parts, and sensors, as well as complete source code listings in Java.
I share the same itch... I'm about to buy my first BeagleBoard and some sensors/servos that can use the I2C bus. I'm going to be using an event-driven design and a crude implementation of fibers (fibrils, if you will), which are userspace threads.
Basically, my design calls for one process, which launches one thread per group of servos. Each group manager thread will launch some number of fibrils, likely 2 per servo. One fibril is used to control the servo, the other fibril handles events from that servo (e.g., an object is just too heavy to pick up, an object was dropped, etc.).
The main process has the task of listening for events from everything else and making sure the 'right hand knows what the left hand is doing' while moving forward and negotiating obstacles.
It's going to take me the better part of two years to get something working to the point that I'm proud of it, but I anticipate many enjoyable evenings getting it to that point.
I will very likely be using a Microkernel, not Linux.
I'm doing this as much to sharpen myself with event driven methods as well as my desire to make my own R2 :)
Start with Phidgets if you are familiar with .NET. You can check out TrossenRobotics.com for parts.
The Phidgets interface kit is a good place to start. From there you can get a servo controller and start building things that move.
The Trossen forums are also a good place to review other people's projects. They have a new Data Center with code/project samples too. I don't work for them...just a happy customer.
Lots of good answers here. Your piece of fantasy code is not far from how you'd do it in a higher-level language such as C# on MS Robotics Studio. Just keep in mind that even simple things (like "move arm left") carry a lot of hidden complexity.
Down to the metal, a robotic arm is a set of links and (possibly) motorized joints. Therefore "move arm left" (or moving to any point in space) is already a very complex task to compute (look up D&H (Denavit-Hartenberg) tables, and forward and inverse kinematics for manipulators).
There's also the fact that "move arm left" assumes there's nothing in that space and a collision won't occur. If the environment is unconstrained, then you need to implement a collision detection system, often based on some sort of sensor (camera) and machine vision algorithms.
In summary, the language and the hardware interfacing are often trivial compared to modeling the system to achieve the desired behavior.
Regarding the last question, "how to create a robot": I find it helps to start by looking for a related project in online communities like Adafruit, Hackster.io, or even Glitch, or looking for blog posts by someone who has built a robot from scratch (e.g., https://burningservos.com), or a product that provides documentation and tutorials for both hardware and software (e.g., http://emanual.robotis.com/docs/en/platform/openmanipulator_x/overview/).