The C Standard Library is independent of any operating system.
So, why use the input/output functions from the standard library?
Unix-specific POSIX system calls exist. Windows-specific input/output system calls exist.
Don't standard library functions eventually call system calls internally? Is this just for portability?
The API presented by the C standard library is uniform across operating systems, well, uniform provided all the "unspecified" parts of C roughly align (like the size of int).
The implementation of the C standard library is not independent of the operating system. Basically, the implementation consists of compiled source code that provides the API, and that compiled code matches the CPU's machine instruction set, and possibly other items specific to the hardware: bus width, supporting chip sets, and other actual hardware details.
So, programming against the C API helps your program be "more portable", but that doesn't mean that any specific implementation of the C API is portable. Also, there are lots of small details that either aren't specified or are allowed to vary between platforms (like byte order, the size of int, and so on). So even a program written against the standard C API might not work correctly on another machine, unless you write code that accommodates and reacts to the parts of the C API that may differ between platforms.
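For instance, here is a minimal sketch (standard C only; nothing in it comes from the question above) of how a program can detect and accommodate two of those varying details, integer width and byte order, at run time:

    #include <stdio.h>
    #include <limits.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* sizeof(int) is implementation-defined; query it, never assume it. */
        printf("int: %zu bytes, INT_MAX = %d\n", sizeof(int), INT_MAX);

        /* Byte order: inspect the first byte of a known multi-byte value. */
        uint32_t probe = 1;
        if (*(const unsigned char *)&probe == 1)
            puts("little-endian");
        else
            puts("big-endian");

        /* When an exact width matters, use fixed-width types and macros. */
        uint32_t exact = UINT32_C(0xDEADBEEF);
        printf("0x%08" PRIX32 "\n", exact);
        return 0;
    }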
POSIX is basically a standard that eventually became incorporated into most C development environments. It is designed to provide a single API to program against for multiple UNIX platforms for items that lie outside of the core C language. There are POSIX implementations for Windows too, but Microsoft's historical offerings are notorious for not actually working correctly.
Yes, these APIs (where available) are implemented with code that eventually performs operating-system-specific calls, and they are shipped as machine code that is very specific to the CPU instruction set. There are dozens of CPUs out there, and each major platform has its own matching compiler and matching C API libraries, if the C language is available at all.
The C language and API help with portability, but portability isn't their primary reason for existing (and there are lots of small corner cases where the same code isn't portable across all platforms unless it is written a certain way). The primary reason is this: if the language features weren't consistently available across all platforms, you wouldn't have "one C language" that could be used on multiple machines; you would have many C-like languages, where every supported feature would have to be checked, meaning you might know C on your development platform but not know C on another platform.
As for the libraries, many of them might be absent on a typical machine, and when developing you generally have to use a dependency checker to ensure the library is present (and sometimes that the correct functions are available in it) before the machine can be used for development. Autoconf, for example, has m4 macros that can be configured to check whether a library is present before compiling the programs.
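As a sketch of the usual pattern: Autoconf writes HAVE_* macros (per its AC_CHECK_FUNCS convention) into a generated config.h, and the C code compiles a fallback when the platform lacks a function. The choice of strlcpy here is just an example:

    #include "config.h"   /* generated by the configure script */
    #include <string.h>
    #include <stddef.h>

    #ifndef HAVE_STRLCPY
    /* Portable fallback for platforms whose libc lacks strlcpy(). */
    size_t strlcpy(char *dst, const char *src, size_t size)
    {
        size_t len = strlen(src);
        if (size > 0) {
            size_t n = (len < size - 1) ? len : size - 1;
            memcpy(dst, src, n);
            dst[n] = '\0';
        }
        return len;        /* length of the source, per the usual contract */
    }
    #endif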
I would like to build an OS some time in the future, and I am now sketching out roughly how it would look. I have pretty much been coding in C compiled for the Windows environment (plus a little Java). I would have to recompile any of my C programs should I want to run them under Linux, so the binaries, the product of compilation, must be different for each operating system. If I design a totally new OS from scratch, for both hobby and academic purposes, without using the Linux kernel or any known base code of an OS, what I understand will happen is that I cannot compile my C programs with GCC, since my OS will not be among its target systems. Hence the question in the title. Thanks in advance for any hints.
It depends. You could easily choose to re-use an existing compiler, such as the venerable GCC, and thus reap the benefits of an existing compiler. But there are some big provisos that must be cleared up.
Regardless of whether you choose to build a new compiler, the challenge will remain in porting a C library. You technically can use C without a standard library (which is what the Linux kernel, or any freestanding code for that matter, has to do), but this is a ridiculous proposition for programs intended to run under an operating system: most systems impose memory restrictions and the like, meaning you cannot just have carte blanche in terms of using memory. Thus, a C library call such as malloc is required.
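To make that concrete, here is a deliberately naive sketch of the kind of allocator a new libc port might start from. The names are illustrative only; a real port would obtain memory from the kernel (sbrk/mmap-style calls) and implement free():

    #include <stddef.h>

    /* Fixed arena plus bump pointer; no free(), purely a starting point. */
    static unsigned char arena[64 * 1024];
    static size_t next_free = 0;

    void *toy_malloc(size_t n)
    {
        size_t align = _Alignof(max_align_t);            /* C11 */
        size_t start = (next_free + align - 1) & ~(align - 1);
        if (start > sizeof(arena) || n > sizeof(arena) - start)
            return NULL;                                 /* arena exhausted */
        next_free = start + n;
        return &arena[start];
    }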
Since any programs under your kernel (99% of your OS, in all likelihood) will need a set of functions to link against, porting a C library is your biggest task. The C library is a huge monolith, and writing your own would be rather silly, especially with many implementations already available, the most well known being the GNU C Library (glibc). So, the question you really should be asking is: do I want to write my own version of libc? (The answer is almost always no, and most alternative implementations are for niche use cases.) Plus, if you want to make your OS POSIX-compliant, you'll have to implement even more functions, adding to the hassle.
Whether you write your own compiler for your OS is a minor detail compared to which C library will be included with it. You can always use your own compiler with an already-written implementation of the C library.
My advice on your rather opinion-based question: no. Port an existing compiler such as GCC or Clang, and then use that. This has several advantages:
Compatibility with existing tools and toolchains
A familiar program (no need for your users to learn how to use a new compiler)
They're open source, and even with the source available you'd be insane to go it alone. Heck, even Apple integrated two existing compilers, GCC and Clang, into their toolchains rather than write their own, and they're a billion-dollar company.
Take a look at this page. It demonstrates how to port GCC to your OS using Newlib as your C library.
No, you can just port an existing compiler. You can even adopt an existing executable format, such as ELF, and use the standard GCC + GNU Binutils toolchain. You will need to port the standard library and C runtime, and you will need to write an ELF loader for your operating system.
I suspect the majority of the work will be in porting the C library.
A search turned up this page: Porting GCC to your OS
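As a taste of the loader work, here is a first-step sketch that validates an ELF header. The structure and constants follow the standard <elf.h>; a hobby OS would carry its own copy of these definitions rather than the host's header:

    #include <elf.h>
    #include <string.h>

    /* Returns nonzero if 'image' starts with a loadable 64-bit ELF header. */
    int elf_header_ok(const unsigned char *image, unsigned long len)
    {
        const Elf64_Ehdr *eh = (const Elf64_Ehdr *)image;
        if (len < sizeof *eh)
            return 0;
        if (memcmp(eh->e_ident, ELFMAG, SELFMAG) != 0)   /* "\177ELF" */
            return 0;
        if (eh->e_ident[EI_CLASS] != ELFCLASS64)
            return 0;
        return eh->e_type == ET_EXEC || eh->e_type == ET_DYN;
    }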
(1) No, you usually don't have to write your own compiler. Writing a good optimizing compiler can actually be a big task, which I would rather avoid.
But in order to enable writing applications for your OS in some higher level language you will either need to provide
some (2.1) API emulation layer (so that code written and compiled for another OS can run on your OS)
or you'll have to (2.2) port some existing compiler to your OS
or at least make your OS a new available (2.3) target platform in an existing compiler
or some other option I don't know about
There are multiple choices, each with its own pros and cons.
Some examples (other than the obvious GCC already mentioned by #dietrich-epp and #sevenbits) to help you decide which way you want to go:
(3.1) The Free Pascal compiler (see http://www.freepascal.org) can be extended with another target platform
Free Pascal is a 32,64 and 16 bit professional Pascal compiler. It can target multiple processor architectures: Intel x86, AMD64/x86-64, PowerPC, PowerPC64, SPARC, and ARM. Supported operating systems include Linux, FreeBSD, Haiku, Mac OS X/iOS/Darwin, DOS, Win32, Win64, WinCE, OS/2, MorphOS, Nintendo GBA, Nintendo DS, and Nintendo Wii. Additionally, JVM, MIPS (big and little endian variants), i8086 and Motorola 68k architecture targets are available in the development versions
...
Source: http://www.freepascal.org
(3.2) Inferno Operating System (see http://www.vitanuova.com/inferno) has its own application language (see Limbo) with OS-specific words, its own compiler, etc. Applications run in a virtual machine (see Dis)
Inferno® is a compact operating system designed for building distributed and networked systems on a wide variety of devices and platforms. With many advanced and unique features, Inferno puts an unrivalled set of tools into your hands...Inferno can run as a user application on top of an existing operating system or as a stand alone operating system...
Source: http://www.vitanuova.com/inferno
(3.3) Squeak (see http://en.wikipedia.org/wiki/Squeak) is a self-contained OS with graphics and everything. It uses Smalltalk-80 as the language, with a compiler included; applications run in a virtual machine (see Cog VM). The VM can be emitted as portable C code and then ported to bare hardware.
Squeak is a modern, open source, full-featured implementation of the powerful Smalltalk programming language and environment. Squeak is highly-portable, running on almost any platform you could name and you can really truly write once run anywhere. Squeak is the vehicle for a wide range of projects from multimedia applications and educational platforms to commercial web application development...
Source: http://www.squeak.org
(3.4) MenuetOS (see http://www.menuetos.net/) is a 64-bit OS written in assembly language. The Flat Assembler (see FASM), a compiler that can emit native binaries, was ported to the OS (including the OS API) and is included in the basic installation. Later on, a C library was also ported
MenuetOS is an Operating System in development for the PC written entirely in 32/64 bit assembly language...supports 32/64 bit x86 assembly programming for smaller, faster and less resource hungry applications...Menuet isn't based on other operating system nor has it roots within UNIX or the POSIX standards. The design goal, since the first release in year 2000, has been to remove the extra layers between different parts of an OS, which normally complicate programming and create bugs...
Source: http://www.menuetos.net
(3.5) Google's Android OS (see Wikipedia: Android (operating system)) ported a Java virtual machine (see Dalvik, later replaced by the Android Runtime) and provided OS APIs for the Java programming language, reusing existing compilers and IDEs and just consuming the produced binaries
Android Runtime (ART) is an application runtime environment used by the Android mobile operating system. ART replaces Dalvik, which is the process virtual machine originally used by Android, and performs transformation of the application's bytecode into native instructions that are later executed by the device's runtime environment...
Source: http://en.wikipedia.org/wiki/Android_Runtime
There are many more useful examples available. Whether you have to or don't have to basically depends on the programming paradigm your new OS will introduce, why you want to build it, and how it will differ from existing ones.
Examples for no are: (3.1), (3.4), (3.5)
Examples for yes are: (3.2), (3.3)
What exactly do we mean when we say that a program is OS-independent? do we mean that it can run on any OS as long as the processor is same?
For example, OpenGL is a library which is OS-independent. The functions it contains must assume a specific processor. But aren't programs/applications OS-specific?
What I learned is that:
OS is processor-specific.
Applications (programs/codes/routines/functions/libraries) are OS specific.
Source code is plain text.
Compiler (a program) is OS specific, but it can compile source code for a different processor assuming the same OS.
OpenGL is a library.
Therefore, OpenGL has to be OS/processor-specific. How can it be OS-independent?
What can be OS independent is the source code. Is this correct?
How does it help to know if a source code is OS-independent or not?
What exactly do we mean when we say that a program is OS-independent? do we mean that it can run on any OS as long as the processor is same?
When a program uses only defined behaviour (no undefined, unspecified, or implementation-defined behaviours), the program is guaranteed by the language standard (in your case, the C language standard) to compile (using a standards-compliant compiler) and run uniformly on all operating systems.
Basically, you have to understand that a language standard like C, or a library standard like OpenGL, gives a set of minimum guarantees that a programmer can assume and build upon. These won't change as long as the compiler is compliant with the standard (in the case of a library, as long as the implementation is standards-compliant) and the program is not treading in undefined-behaviour land.
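A small sketch of the distinction, in standard C only:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* Defined by the standard: unsigned arithmetic wraps modulo 2^N. */
        unsigned int u = UINT_MAX;
        u = u + 1;                      /* u is 0 on every conforming platform */
        printf("%u\n", u);

        /* Implementation-defined: the width of int (only a minimum is set). */
        printf("int here has %zu bits\n", sizeof(int) * CHAR_BIT);

        /* Undefined: signed overflow; a portable program simply avoids it.
           int i = INT_MAX; i = i + 1;   <- no guarantees at all */
        return 0;
    }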
openGL has to be OS/processor specific. How can it be OS-independent?
No. OpenGL is platform-independent. An OpenGL implementation (the driver which implements the calls) is definitely platform- and GPU-specific. Similarly, the C standard is implemented by GCC, MSVC++, etc., which are all different compiler implementations that can compile C code.
what can be OS independent is the source code. Is this correct?
Source code (if written with portability in mind) is just one amongst many such platform-independent entities. Libraries (OpenGL, etc.) and frameworks (.NET, etc.) can be platform-independent too. For that matter, even hardware can be specified by one party and implemented by another: ARM processors are standards/specifications charted out by ARM and implemented by OEMs like Qualcomm, TI, etc.
do we mean that it can run on any OS as long as the processor is same?
Neither the processor nor the platform (OS) matters, as long as you use only cross-platform components for building your program. Say you use C, a portable language; SDL, a cross-platform library for creating windows, handling events, framebuffers, etc.; and OpenGL, a cross-platform graphics library. Now your program will run on multiple platforms, but even then it depends on the weakest link. If SDL doesn't run on some J2ME-only phone, then there will be no library distribution for that platform and your application won't run there; so in a sense nothing is fully independent. It's wise to survey the various libraries available for different architectures, platforms, compilers, etc., and then pick the required ones based on the platforms you're targeting.
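A minimal sketch assuming SDL2 (the calls below follow SDL's documented API): the same source compiles on any platform for which an SDL2 distribution exists.

    #include <SDL.h>

    int main(int argc, char *argv[])
    {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            SDL_Log("SDL_Init failed: %s", SDL_GetError());
            return 1;
        }
        SDL_Window *win = SDL_CreateWindow("portable window",
                                           SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED,
                                           640, 480, SDL_WINDOW_SHOWN);
        if (win != NULL) {
            SDL_Delay(2000);            /* keep the window up briefly */
            SDL_DestroyWindow(win);
        }
        SDL_Quit();
        return 0;
    }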
What exactly do we mean when we say that a program is OS-independent?
It means that it has been written in a way that it can be compiled (if compilation is necessary for the language used) or run with little or no modification on several operating systems and/or processor architectures.
For example, openGL is a library which is OS independent.
OpenGL is not a library. OpenGL is an API specification, i.e. a lengthy volume of text that describes a set of tokens (= named numeric values) and entry points (= callable functions) and the effects they have on the system level.
What I learned is that:
OS is processor-specific.
Wrong!
Just like a program can be written in a way that it can be targeted to several operating systems (and processor architectures), operating systems can be written in a way that they can be compiled for, and run on, several processor architectures.
Linux, for example, supports so many architectures that it's jokingly said to run on everything that is capable of processing zeroes and ones and has a memory management unit.
Applications (programs/codes/routines/functions/libraries) are OS specific.
Wrong!
Program logic is independent from the OS. A calculation like x_square = x * x doesn't depend on the OS at all. Only a very small portion of a program, namely those parts that make use of operating system services actually depend on the OS. Such services are things like opening, reading and writing to files, creating windows, stuff like that. But you normally don't use those OS specific APIs directly.
Most OS low-level APIs have certain specifics which are easy to trip over and arcane to address. So you don't use them directly, but go through some standard, OS-independent library that hides the OS-specific stuff.
For example, the C language (which is already pretty low level) defines a standard set of functions for file access, the stdio functions: fopen, fread, fwrite, fclose, and so on. C++ does similarly with its iostreams. But these just wrap the OS-specific APIs.
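A sketch of that layering: each stdio call below sits on top of open/read/close on POSIX systems and CreateFile/ReadFile/CloseHandle on Windows, yet this source compiles unchanged on both.

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("example.txt", "rb");    /* wraps open() or CreateFile() */
        if (!f) {
            perror("fopen");
            return 1;
        }
        char buf[256];
        size_t n = fread(buf, 1, sizeof buf, f); /* wraps read() or ReadFile() */
        printf("read %zu bytes\n", n);
        fclose(f);                               /* wraps close() or CloseHandle() */
        return 0;
    }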
source code is plain text.
Usually it is, but not necessarily. There are also graphical, data-flow programming environments, like LabVIEW, which can create native code as well. The source code they use is not plain text but a diagram, which is stored in a custom binary format.
Compiler ( a program ) is OS specific, but it can compile a source code for a different processor assuming the same OS.
Wrong! and Wrong!
A compiler is language- and target-specific. But it's perfectly possible to have a compiler on your system that generates executables targeted at a different processor architecture and operating system than the system you're running it on (cross compilation). After all, a compiler is "just" a (mathematical) function mapping source code to a target binary.
In fact, the compiler itself doesn't target an operating system at all; it only targets a processor architecture. The operating-system specifics are introduced by the ABI (application binary interface) of the OS, which is addressed by the linked runtime environment and the target linker (yes, the linker must be able to target a specific OS).
openGL is a library.
Wrong!
OpenGL is a API specification.
Therefore, openGL has to be OS/processor specific.
Wrong!
And even if OpenGL were a library: libraries can be written to be portable as well.
How can it be OS-independent?
Because OpenGL itself is just a lengthy document of text describing the API. Then each operating system with OpenGL support will implement that API conforming to the specification, so that a program written or compiled to run on said OS can use OpenGL as specified.
what can be OS independent is the source code.
Wrong!
It's perfectly possible to write program source code in a way that it will only compile and run on a specific operating system and/or a specific processor architecture. The pinnacle of OS/architecture dependence: writing things in assembler and using OS-specific low-level APIs directly, as in the sketch below.
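A sketch of that pinnacle, assuming x86-64 Linux: the syscall instruction is CPU-specific, and the syscall number and register convention are OS-specific, so this compiles and runs nowhere else.

    /* x86-64 Linux only: rax = syscall number (1 = write), args in rdi/rsi/rdx;
       the syscall instruction clobbers rcx and r11. */
    static long raw_write(int fd, const void *buf, unsigned long len)
    {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(1L), "D"((long)fd), "S"(buf), "d"(len)
                          : "rcx", "r11", "memory");
        return ret;
    }

    int main(void)
    {
        raw_write(1, "hello\n", 6);     /* fd 1 is stdout */
        return 0;
    }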
How does it help to know if a source code is OS/window independent or not?
It gives you a ballpark figure of how hard it will be to target the program to a different operating system.
A very important thing to understand:
OS independence does not mean a program will run on all operating systems or architectures. It means that it is not tethered to a specific OS/CPU combination, and that porting it to a different OS/CPU requires only a little effort.
There are a couple of concepts here. First, a program can be OS-independent, that is, it can run/compile without changes on a range of OSes. Second, libraries can be built on a range of OSes which can then be used by a platform-independent program.
Strictly speaking, OpenGL doesn't have to be OS-independent. An OpenGL implementation may have different source code on different OSes which interfaces with drivers in a platform-specific way. What matters is that OpenGL's interface is OS-independent. Because the interface is OS-independent, code written against it can itself be OS-independent and can run/compile without modification.
Libraries abstracting out OS-specific things is a wonderful way to allow your code to interface with the OS which normally would require OS-specific code.
One of those:
It compiles on any OS supported by the program's framework without changes to the source code (languages like C++ that compile directly into machine code).
The program is written in an interpreted language, or in a language that compiles into platform-independent bytecode, and can run on whatever platform its interpreter supports without modifications (languages like Java or Python).
The application relies on a cross-platform framework of some kind that abstracts operating-system-specific calls away. It will run without modifications on any OS supported by the framework.
Because you haven't added any language tag, it is either #1, #2 or #3, depending on your language.
--edit--
OS is processor-specific.
No. See Linux: same code base, can be compiled for different architectures. Normally (well, it is reasonable to expect that) an OS kernel is written in a portable language (like C) that can be rebuilt for a different CPU. On a distribution like Gentoo, you can rebuild the entire OS from source as well.
Applications (programs/codes/routines/functions/libraries) are OS specific.
No. Applications like Java .jar files can be made more or less OS-independent: as long as there is an interpreter, they'll run anywhere. There will be some OS-specific part (like the Java runtime environment in the case of Java), but your program will run anywhere that part is present.
Source code is plain text.
Not necessarily, although it is true in most cases.
Compiler (a program) is OS specific, but it can compile source code for a different processor assuming the same OS.
Not quite. It is reasonable for a compiler to be written using (somewhat) portable code, so that the compiler itself can be rebuilt for a different OS.
And while running on OS A, it is possible (in some cases) to compile code for OS B: on Linux you can compile code for the Windows platform.
OpenGL is a library.
It is not. It is a specification (an API) that describes a set of programming functions for working with 3D graphics. There are libraries that implement this specification, but the specification itself is not a library.
Therefore, OpenGL has to be OS/processor-specific.
Incorrect conclusion.
How can it be OS-independent?
As long as the underlying platform has a standards-compliant OpenGL implementation, the rendering part of your program will work the same way as on any other platform with a standards-compliant OpenGL implementation. That's portability. Of course, this is the ideal situation; in reality you might run into a driver bug or something.
The more I look at this PDF (Application Binary Interface for the ARM Architecture: The Base Standard) the less I understand what it means. Also I'd like some comments on Procedure Call Standard for the ARM Architecture and ELF for the ARM Architecture.
An ABI (Application Binary Interface) is a standard that defines a mapping between low-level concepts in high-level languages and the abilities of a specific hardware/OS platform's machine code. That includes things like:
how C/C++/Fortran/... data types are laid out in memory (data sizes / alignments)
how nested function calls work (where and how the information on how to return to a function's caller is stored, where in the CPU registers and/or in memory function arguments are passed)
how program startup / initialization works (what data format an "executable" has, how the code / data is loaded from there, how DLLs work ...)
The answers to these are:
language-specific (hence you've got a C ABI, C++ ABI, Fortran ABI, Pascal ABI, ... even the Java bytecode spec, although targeting a "virtual" processor instead of real hardware, is an ABI),
operating-system specific (MS Windows and Linux on the same hardware use a different ABI),
hardware/CPU-specific (the ARM and x86 ABIs are different).
evolving over (long) time (existing ABIs have often been updated/revised so that new CPU features could be exploited; for example, specifying how the x86 SSE registers are to be used by applications was of course only possible once CPUs had these registers, so existing ABIs needed to be extended).
Without some kind of standardization like this, (machine) code created by different compilers couldn't use the same libraries (how would you know in which way the library code expects function arguments or data structures to be passed?).
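A small sketch of one such ABI-governed detail, data layout: the struct's size and padding below are dictated by the platform ABI's alignment rules, not by the C source, which is exactly why two compilers for the same platform must agree on them.

    #include <stdio.h>
    #include <stddef.h>

    struct message {
        char  tag;      /* 1 byte, followed by ABI-mandated padding */
        int   value;    /* the alignment of int comes from the ABI */
        short extra;
    };

    int main(void)
    {
        printf("sizeof(struct message) = %zu\n", sizeof(struct message));
        printf("offsetof(value)        = %zu\n", offsetof(struct message, value));
        /* On a typical ABI with 4-byte int alignment this prints 12 and 4,
           but only the platform's ABI document makes those numbers binding. */
        return 0;
    }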
Every platform (a combination of specific hardware, operating system software and code written in specific programming languages / compiled with specific compilers) defines a whole set of ABIs to make things interoperable. The terminology in this area isn't clear, sometimes people just talk about "the ABI", other times it's called the "platform supplement", or one mentions the programming language and says e.g. "the C++ ABI". Keep in mind, there is not one such thing.
The documents that you linked to in your question are all specific examples of this (language- / operating-system / hardware-specific ABIs).
Even on a specific platform, there's no necessity to have one and only one ABI (set), because different conventions may have different advantages (and therefore provide better performance / smaller code / better memory usage, depending on the program), and system designers usually try to be flexible/permissive.
On 32bit Microsoft Windows, for example, there's a multitude of ABIs (fastcall, stdcall, pascal, ...) for the function calling convention parts.
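A sketch of how those show up in source; the __cdecl/__stdcall/__fastcall keywords are MSVC/MinGW extensions for 32-bit x86, not standard C:

    /* 32-bit x86 Windows only; identical C bodies, different ABI contracts. */
    #if defined(_M_IX86) || defined(__i386__)
    int __cdecl    add_cdecl(int a, int b)    { return a + b; } /* caller cleans the stack */
    int __stdcall  add_stdcall(int a, int b)  { return a + b; } /* callee cleans the stack */
    int __fastcall add_fastcall(int a, int b) { return a + b; } /* first two args in ECX/EDX */
    #endif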
Anyway, a generic Stack Overflow search for "ABI" (including the links under the "Related" sidebar) gives so many leads for researching this question that I'll close my answer at this point.
The ARM ABI is the document to consult when an OS kernel ported to ARM is in use.
The EABI applies when the processor boots straight into an application with no intervening kernel (something like the ROM BASIC that existed when DOS came about), i.e. the firmware itself is the free-standing application, with no board-specific monitor or anything.
The first link is a detailed sub-part of the ARM ABI covering procedure calls. As the programmer's model advances with each version of the ARM CPU, such topics are important and are covered by the ABI.
The second link is the specification of ELF, the binary format for object files generated by the compiler, as it applies to ARM. ELF itself is specified by the OS vendor SCO (The Santa Cruz Operation), which made its own flavors of Unix, but that story deviates from the question. You should be interested in this if you intend to implement a linker supporting ELF targeting ARM.
Unless you are directly concerned with the implementation details of a build toolchain for ARM, the EABI should be of little concern; and unless you are accounting for OS-specific aspects of such a toolchain, the ARM ABI should also be of little concern.
An ABI basically defines how functions/procedures pass information to each other through registers (in compiled form) and where the return value is stored (which registers). On x86 or x86-64 it is simply called the ABI (application binary interface).
In the ARM architecture, it is known as the EABI (embedded application binary interface). The ABI from before 2000 is known as the OABI (old application binary interface), which is now obsolete.
In the (soft-float) EABI, floating-point parameters are passed in integer registers.
ARM processors that support hardware floating point can pass parameters in floating-point registers; this variant is called the EABIHF (embedded application binary interface, hard float).
I'm getting into microcontroller programming and have been hearing contrasting views. What language is most used in the industry for microcontroller programming? Is this what you use in your own work? If not, why not?
P.S.: I'm hoping the answer is not assembly language.
In my experience, you absolutely must know C, and assembly language helps too.
Unless you are dealing with very bare-bones microcontrollers (like the RS08 series), C is by far the language of choice. Get to know C and understand functionality like volatile and const (a sketch follows below). Also understand the architecture: what is efficient, what isn't, what the CPU can do. These will differ wildly from a "desktop" environment. Learn to love stdint.h.
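For instance, a sketch of why volatile matters on a micro; the register address here is hypothetical, the kind of value a real part's datasheet would supply:

    #include <stdint.h>

    #define STATUS_REG (*(volatile uint32_t *)0x40021000u) /* hypothetical MMIO address */
    #define READY_BIT  (1u << 3)

    void wait_until_ready(void)
    {
        /* Without volatile, the compiler may hoist the load out of the loop
           and spin forever on a stale value; volatile forces a re-read. */
        while ((STATUS_REG & READY_BIT) == 0) {
            /* busy-wait */
        }
    }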
You will encounter C++ (or a restricted subset) as projects scale up.
However, you need to understand the CPU and how to read basic assembly as a debugging tool. You can't become an excellent embedded developer without this skillset.
What 'contrasting' views have you heard? To some extent it will depend on the microcontroller and the application. However C is available for almost all architectures (I hesitate to say all, but probably all that you will ever encounter); so on that point alone, learning C would give you the greatest coverage.
For all architectures, the availability of an assembler and a C compiler are pretty much a given. For 32-bit and most 16-bit architectures C++ will also be available. Notable exceptions I have encountered are Microchip's PIC24/dsPIC parts for which C++ is not supported by Microchip's own GNU based compiler (although 3rd party compilers may do so).
While there are C++ compilers for 8-bit microcontrollers, C++ is not ubiquitous on such platforms, and the compilers are often subsets of the full language. For the types (or more specifically the sizes) of application for which 8-bit parts are usually employed, C++ may be useful, but not to the extent that it is in much larger applications, so C is generally adequate.
There are a lot of myths about C++ in embedded systems; while the language is larger than C and has constructs that may compromise the performance or capacity of your system, with C++ you only pay for what you use. And of course, if what you use is just the C subset, then C would be adequate in any case.
The point about C (and C++) is that it is a systems-level language; it will run on your microprocessor with no additional support save a very simple runtime start-up to initialise the processor (and possibly external SDRAM), initialise static data, establish a stack, and, in the case of C++, invoke static constructors (see the sketch below). This is why, along with target-specific assembler, it is used to build operating systems and kernels: it needs no operating system or kernel itself to run.
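A sketch of such a start-up, written in C. The linker-script symbol names (_sidata, _sdata, etc.) follow a common bare-metal ARM convention but are assumptions, not a standard; on a Cortex-M, the initial stack pointer is loaded from the vector table before this code runs.

    #include <stdint.h>

    extern uint32_t _sidata;           /* .data initial values, in flash */
    extern uint32_t _sdata, _edata;    /* .data region, in RAM */
    extern uint32_t _sbss,  _ebss;     /* .bss region, in RAM */

    extern int main(void);

    void Reset_Handler(void)
    {
        uint32_t *src = &_sidata;
        for (uint32_t *dst = &_sdata; dst < &_edata; )
            *dst++ = *src++;           /* copy initialised data to RAM */
        for (uint32_t *dst = &_sbss; dst < &_ebss; )
            *dst++ = 0;                /* zero the .bss section */
        /* a C++ runtime would walk the static-constructor table here */
        (void)main();
        for (;;) { }                   /* bare-metal main should not return */
    }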
One of the reasons I suggested that it may depend on the microcontroller is that if, for example, it is an ARM9 with a few MB of external SDRAM and at least, say, 4 MB of flash (also usually external; memory takes up a lot of die space), then you could run a "heavyweight" OS on it such as Linux, WinCE, or Symbian, or even a large RTOS such as QNX or VxWorks. Then your choice of language (once you got the OS working) would be influenced by the OS, though for real-time applications C and C++ would still dominate (or often Ada in military, avionics, and some transport applications).
For mid-size applications (a few hundred kilobytes of code and data space), C# running on the .NET Micro platform is possible. However, I sat in a presentation on it at the Embedded Systems Show in the UK a few years ago, just after it was launched; when I asked "but is it real-time?" and was told "no, you need WinCE for that", there was a gasp and a groan from much of the audience, and some stopped wasting their time and left the presentation there and then (including me).
So I am still interested in the "contrasting" opinions you have heard, because although it is possible to use other languages, the answer to your question:
What language is most used in the industry for microcontroller programming?
is definitively C, for the reasons I have given. For anyone who might choose to contest this assertion, here are the statistics (note the different survey method after 2004, explained in the text). However, just to add to the collection of alternatives: I once spent two years programming in Forth on embedded systems, and I know of people still using it, but it is a bit of a niche.
I've successfully used both C and C++ but in almost any microcontroller project you will need to be familiar with the assembly language of the target micro. If only for debugging low level hardware issues assembly will be indispensable, even if it is a cursory familiarity.
I think the hardest thing for me when moving from a desktop environment to a micro was that almost everything needs to be allocated statically. You won't often use malloc/new on a micro unless perhaps it has external RAM; a typical pattern is sketched below.
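A sketch of the resulting static-allocation style (the names and sizes are illustrative only):

    #include <stdint.h>

    #define MAX_SAMPLES 128

    static uint16_t sample_buf[MAX_SAMPLES]; /* lives in .bss, budgeted at link time */
    static uint8_t  sample_count = 0;

    int log_sample(uint16_t s)
    {
        if (sample_count >= MAX_SAMPLES)
            return -1;                       /* full: caller must cope, no realloc */
        sample_buf[sample_count++] = s;
        return 0;
    }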
I notice that you also tagged your question with FPGA and Verilog, take a look at Altium, they have a C to Hardware compiler that works really well with their integrated environment.
Regarding assembler:
Prefer C/C++ over assembler as much as possible. You'll get better productivity by writing as much as possible in C or C++. That includes being able to run some of your code on a PC, which can help developing the higher-level code (application-layer functions).
On many embedded platforms, it's good to have someone on the project who is comfortable with a little assembler. Mostly to get start-up code and interrupts going nicely, and perhaps functions for interrupt enable/disable. That's not the same as knowing it really thoroughly--just a basic working knowledge will be sufficient.
If you're porting an RTOS (e.g. µC/OS-II) to a new platform, then you'll have to know your assembler more. But hopefully your RTOS supports your platform well already.
If you're pushing up against CPU performance limits, you probably need to know assembler more thoroughly. But hopefully you're not pushing performance limits much, because that can be a drag on a project's viability.
If you're writing for a DSP, you probably need to know the DSP's assembler fairly thoroughly.
Microcontrollers were originally programmed only in assembly language, but various high-level programming languages are now also in common use to target microcontrollers. These languages are either designed specially for the purpose, or versions of general purpose languages such as the C programming language. Compilers for general purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.
Many microcontrollers are so quirky that they effectively require their own non-standard dialects of C, such as SDCC for the 8051, which prevents using standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters are often used to hide such low-level quirks.
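For a flavour of such a dialect, a sketch in SDCC's 8051 C; __sfr, __sbit and __at are SDCC extensions, and the addresses given are the classic 8051 locations for port 1:

    /* SDCC-specific declarations, not standard C: */
    __sfr  __at (0x90) P1;     /* port-1 special-function register */
    __sbit __at (0x97) P1_7;   /* bit 7 of P1 (bit-addressable) */

    void main(void)            /* SDCC uses void main for the MCS-51 */
    {
        P1 = 0x55;             /* write the whole port */
        P1_7 = 1;              /* set a single pin via bit addressing */
        while (1)
            ;                  /* embedded main never returns */
    }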
Interpreter firmware is also available for some microcontrollers. For example, BASIC on the early microcontrollers Intel 8052[4]; BASIC and FORTH on the Zilog Z8[5] as well as some modern devices. Typically these interpreters support interactive programming.
Simulators are available for some microcontrollers, such as in Microchip's MPLAB environment. These allow a developer to analyze what the behavior of the microcontroller and their program would be if they were using the actual part. A simulator will show the internal processor state and also that of the outputs, as well as allowing input signals to be generated. While most simulators are limited in that they cannot simulate much of the other hardware in a system, they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyze problems.
You need to know assembly language programming. You also need good knowledge of C, and of C++ too. So work hard on these things to build better expertise in microcontroller programming.
And don't forget about VHDL.
For microcontrollers, assembler came before C. Before the ARMs started pushing into this market, the compilers were horrible and the memory and ROM really tiny. There were not enough resources or commonality to port your code, so writing in C for portability made no sense.
Some microcontrollers' assembler is less than desirable, and ARM is taking over that market. For less money, less power, and a smaller footprint you can have a 32-bit processor with more resources. It just makes sense. Much of your code will still not port, but you can probably get by with C.
Bottom line, assembler and C. If they advertise BASIC or Java or something like that, mark that company on your blacklist and move on. Been there, done that, have the scars to prove it.
First assembly, then C. I think that those who know both assembly and C are better off than those who know only C.