I have been reading the C++ Primer, due to all the claims of how usable the language has become because of C++11. The book is probably fine, but it still leaves me wondering every few pages what the code really does. As a result I end up googling lots and lots of stuff, only to eventually land on something along the lines of "that bit is machine specific" or "implementation defined", etc.
I realize what that means, and why portable code is important.
Yet I do wonder: what are the actual values of these specifics for run-of-the-mill 64-bit x86 PCs? Since GCC, Visual Studio, etc. don't actually ask what to do in all these cases, but just compile the code (and it works!), there seems to be some sane set of defaults for targeting desktops.
Is there a document that covers these details in a way understandable to non-compiler-writers, like the pages that I linked to?
On most Unix or Linux systems you can log in and issue the command
    locate limits.h
and it will find a number of include files that list the "limits" for values used by the compiler. Many of the limits files in the Linux kernel code are architecture specific, which is your particular interest.
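If you just want to see the concrete values your own compiler picked, a short C program is enough. A minimal sketch (the values noted in the comment assume a typical x86-64 Linux/GCC target; other platforms will differ):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* These are exactly the "implementation defined" details in question;
           a typical x86-64 Linux/GCC target gives CHAR_BIT 8, int 4 bytes, long 8. */
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        printf("sizeof(int)  = %zu\n", sizeof(int));
        printf("sizeof(long) = %zu\n", sizeof(long));
        printf("INT_MAX      = %d\n", INT_MAX);
        printf("LONG_MAX     = %ld\n", LONG_MAX);
        return 0;
    }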
Frankly, 100% portability is difficult to achieve. I've been programming for 30 years and have never seen anything but simple programs that are 100% portable. Given that the PC is ubiquitous, I don't think you should prioritize portability over functionality. Hence all the references you find to "implementation defined".
In a perfect world, programs would be portable. In the real world, OS makers add features to compete with other OS makers and even themselves (Windows 95, 98, 2000, XP, Vista, 7), and Linux distros have their differences. As a result, being portable -- IN MY EXPERIENCE -- means a trade-off you're not willing to make: too slow, too bulky, too much development time, too much testing, etc. If you seek portability, you need to ask why, and whether it is worth it. Even if you decide it is, you will find yourself adding compile-time options based on your environment and may end up with entire files that are specific and non-portable.
When I write code for an Atmel Mega16, I don't consider whether I'm going to port that code. In that case you don't have the luxury of infinite CPU cycles and boundless memory to consider a portable solution -- we're trying to squeeze all the juice out of a little micro.
Likewise, it's often the case that you need to optimize routines in assembler to gain back CPU cycles for more features. (Like a DSP running a DFT: it's OK in C when you first ship it, but eventually you need to rewrite it in ASM to get back a pile of CPU cycles for the 23 more features your boss wants you to add by tomorrow morning. Portability be damned.)
So, yes, much is implementation specific. In the PC world you have a little more luxury, but if you're writing code that interfaces with hardware you're often forced to create non-portable code. I could go on and on about this, but I have a loop that needs optimizing...
I have written a program in C which I need to protect from illegal copying. The system will be connected to the internet. How can I make this program run only on one computer, or on a unique computer? Can we use HTTP POST to fetch some encrypted codes from an external server? Any ideas will be useful. I don't know if this has already been answered; I searched but could not find results.
How to secure my Linux C program against piracy
You probably can't.
If I am expert enough and motivated enough, I can decompile your binary executable (or study it with binsec), study its dynamic behavior (with e.g. strace or gdb, etc.), or detect your tricks and then patch, build and install my own Linux kernel (its source code is free software) to circumvent your protections.
In other words, if your adversary is as powerful as the NSA, you have lost that game.
Conceptually, the "protection" of a C program is related to the halting problem and to Rice's theorem. The gory and difficult details are left as an exercise to the reader. And you'll find tons of academic papers about software obfuscation techniques (one that is quite effective in practice: compiling and linking with gcc -flto -O3, then stripping the resulting executable).
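For the curious, that recipe is just two commands (the file and program names here are hypothetical):

    gcc -flto -O3 -o myprog main.c util.c   # link-time optimization inlines code across files
    strip myprog                            # discard the symbol table and debug info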
How to make this program to run only in this computer
Read more about DMZs and iptables. Protect that computer by legal means and by physical means (up to and including round-the-clock armed guards to keep it from being stolen or damaged; they would cost you much more than the computer itself). Invest years of your time in learning more about cybersecurity (you could do a PhD on that at my workplace).
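For illustration only, here is a naive sketch of binding a binary to one machine on a systemd-based Linux by checking /etc/machine-id. As explained above it is trivially bypassed (patch out the comparison), and the expected ID below is a hypothetical placeholder:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical placeholder; a real scheme would not embed it in cleartext. */
    #define EXPECTED_ID "0123456789abcdef0123456789abcdef"

    /* Returns 1 only on the machine whose /etc/machine-id matches. */
    static int machine_matches(void)
    {
        char id[64] = {0};
        FILE *f = fopen("/etc/machine-id", "r");
        if (!f)
            return 0;
        if (fgets(id, sizeof id, f))
            id[strcspn(id, "\n")] = '\0';   /* drop the trailing newline */
        fclose(f);
        return strcmp(id, EXPECTED_ID) == 0;
    }

    int main(void)
    {
        if (!machine_matches()) {
            fputs("not licensed for this machine\n", stderr);
            return 1;
        }
        puts("licensed");
        return 0;
    }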
The socially and economically effective protection is a good license (EULA) written by some costly, expert lawyer. If your clients are corporations, they won't risk breaking that license, even if technically they could (think of what could happen if they did). Observe that proprietary programs on Linux had fewer protections against piracy in 2019 than in 1999 (and even Oracle and SAP make most of their profit on Linux proprietary software from related services, not from software licenses). Study the business model of Red Hat and its profits. Read papers or books on the economics of open source (e.g. this one, the most cited).
According to rumors, Oracle's costly binaries don't have protections. But I use free-software RDBMSs.
And if you add protections to your software that are too complex to deploy, you are just losing potential clients.
The most difficult step is finding actual clients for your software, not inventing or deploying difficult technical tricks to avoid piracy. You could use some existing, but imperfect, license manager. My guess is that you won't find many clients. You could give your source code to each of them, with a suitable license (perhaps a restricted license written by your lawyer) without harming your business: most persons on Earth don't even have the skills necessary to compile your source code, and those who do won't risk going against the laws and the contracts, written by your lawyer and signed by both of you, without a very strong incentive. And I wouldn't accept or trust your binary without having glanced at your source code first.
Don't spend a lot of effort protecting your software. Do spend months of effort documenting it properly, debugging it, and commercializing it (and, once you have a client who has paid you, training and helping that client to use your software).
PS. My personal feeling is that even if you gave me your binary Linux executable for free (as in beer), I wouldn't even bother trying it, because I probably don't need it, and certainly because I don't trust you enough.
PPS. For me, the most important aspect of a Linux distribution is that it is made of free software (a.k.a. libre software) or open source. It is certainly not the "gratis" (free as in beer) aspect of Linux. I value freedom above not having to pay Debian. I write free software professionally, and I am paid for that.
NB. Look also into this draft report and its bibliography; it is borderline relevant to your question. And consider subcontracting the protection work to my workplace (send me an email at basile.starynkevitch#cea.fr if you are really serious). The lab I work at is collectively capable of adding good protection to your code. Allocate a budget of several hundred thousand euros for that service, and at least 100k€ (for a few person-months of work). My boss would be delighted if such a contract became reality (but I would find the task very boring).
The only secure way is to use USB dongles, issuing the license over a USB dongle.
So C obviously has a pretty dominant stronghold on low-level programming... but is anything coming out that challenges or wants to replace it?
Python/C#/etc. all seem to be aimed at the very high level, but when it comes down to nitty-gritty low-level stuff, C seems to be king, and I haven't seen much that even tries to replace it.
Is there anything out there, or is learning C for low-level stuff simply the standard?
If you mean systems level, then perhaps the D language.
Whatever happened to Google's Go?
Well, to be honest, it depends on how "low level"/"system level" you need to be and what the system is.
As Neera rightly points out, there is an increasing trend towards managed languages.
So, if you're writing application code, unless you're actually writing the algorithms and optimisations yourself, the idea is that you use managed code and higher-level abstractions. The need to do low-level stuff all the time is, on common platforms, vastly reduced. Anywhere you have access to a reasonably good API, you're probably going to have nicer abstraction layers around.
However, if you're implementing on a new architecture, you can either use assembly to produce a compiler for that platform, or write a compiler that outputs machine code for that platform while running on another platform (cross-compilation). Then you need to compile a compiler for the new platform.
As you can imagine, C++ is harder to deal with than C in this regard. Even C is actually quite an effort to do well. I've heard people say they like stack-based languages like FORTH because for basic work they can get them up and running with very little assembly (compared to a C compiler or a full-blown cross-compilation effort).
Edit (because I like it): here's a link to the JonesForth git repository. Take a look. JonesForth is an implementation of FORTH in i386 assembly, complete with code comments walking you through the whole process.
LLVM
C for low-level stuff is the standard. C works, and it's known. C is fast because it is low level and makes the programmer do lots of things that Python and C# do for you. You could write another language aiming to replace C, but I don't think it would get you anywhere except a slightly different syntax (if you wanted to keep the speed of C).
Why is C so fast? Because it's shiny assembler. For the things you need to do even faster, you use YASM or inline assembler.
There are actually quite a few things that can be used for low-level programming. Here are some used in the past, with advantages over C.
Pascal variants (used in GEMSOS)
Oberon (used in the Oberon System and A2 Bluebottle)
Ada (used in safety-critical projects and at least three OSes on limited hardware)
PL/I (used in Multics)
Modula (used in CVSup and some academic projects for correct system software)
Smalltalk and Haskell were used for prototype OS's or OS replacement layers.
Cyclone, Popcorn, C0, and Typed Assembly Language do better than C while keeping much of it.
Additionally, languages with a runtime can be used if the lowest-level parts are implemented in another language. Microsoft's Verve and the JX Operating System are examples. For an old-school one, look up the Genera LISP machine and its "advantages". You still can't do much of that in modern systems development with C/C++ toolchains. ;)
So, even if C isn't totally replaceable, it's mostly replaceable in most situations without much performance loss. Have fun with these.
The recent trend is a move towards object-oriented and managed languages. For example, Symbian as an OS is entirely written in C++, and Microsoft Research has come up with the Singularity OS, which uses a managed programming model. The idea is that managed languages protect users from easy-to-make mistakes in C (like resource leaks and pointer corruption) by abstracting those details away. The object-oriented paradigm also helps in writing easy-to-maintain code. For now C still rules the embedded world, but we may see that change in the coming decade, with more and more of the embedded world embracing C++ as the language of choice.
I don't think so.
I like to use my old assembly routines, but C is safe.
I don't think C is low level enough. I would suggest assembly language; as far as I know, it's the lowest level a programmer can go. But you still have to deal with the assembler, linker and loader, and there are still many details tied to the target platform.
There are platform-specific low-level languages, such as assembly languages and machine code. Compared with these, C is a rather high-level language.
What exactly do you mean by low level?
C is also used for high-level stuff like user interfaces (the whole GNOME Desktop and its library GTK are written in C).
I'd put C in the low-level category because it lets you play with the actual machine (e.g. raw memory addresses) while adding only a really tiny abstraction layer.
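To make that concrete, a tiny self-contained sketch of what "playing with raw memory addresses" looks like:

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        int buf[4] = {10, 20, 30, 40};

        /* A pointer is just a machine address; you can look at it directly... */
        printf("buf lives at 0x%" PRIxPTR "\n", (uintptr_t)buf);

        /* ...and walk through memory with plain pointer arithmetic. */
        const int *p = buf;
        printf("*(p + 2) = %d\n", *(p + 2));   /* same element as buf[2] */
        return 0;
    }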
Other programming languages also offer a clear view of the underlying machine:
Many are derived from C and are compatible with it (C++, Objective-C). These supply tools that ease your life by abstracting some things away. They could replace C, but if you used them you'd lose compatibility: Objective-C and C++ interfaces cannot be used from C.
Others belong to completely different families and, on top of the issue above, cannot even use C stuff directly.
Thus, in my opinion, the main reason why C isn't dropped is commercial (it would cost too much to rewrite everything while keeping it compatible with other languages), pretty much the same reason why COBOL still exists.
There are other reasons, like the fact that C is bare-bones, simple, and fast to parse and compile, but in my opinion these are secondary.
Some big companies that can afford rewriting anything are, however, trying to push C out (Apple is extensively using Objective-C, for example, while others are using C++).
I think C will continue to exist, since there is no effort to choose a specific standard language to be used everywhere in place of C (C code will work on C, C++ and Objective-C systems, while the opposite is not true), and since there is too vast a code base of C out there.
I've never used the Message Passing Interface (MPI), but I've heard its name thrown about, most recently in connection with Windows HPC Server. I had a quick look on Amazon to see if there were any books on it, but they're all dated around seven or more years ago. Is MPI still a valid technology choice for new applications, or has it been largely superseded by other distributed programming alternatives (e.g. DataSynapse GridServer)?
As it's not really an implementation, but rather a standard, what is the likelihood (assuming it's not dead) that learning it will result in better design of distributed programming systems? Is there something else I should be looking at instead?
For what MPI is good for, it's still a good choice. It's quite possible that there are no recent books on the topic because the existing ones are good enough and most of us using MPI don't need anything more.
I wouldn't characterise MPI as a distributed programming standard, but rather as a standard for parallel programming on distributed-memory computers -- which covers most of the largest computers in the world right now.
If I were betting on it being replaced I'd be looking at Chapel, X10, or, most likely, Fortran 2008.
What you should be looking at depends on your requirements, but if they include high-performance number-crunching for scientific and engineering codes, Fortran or C/C++ with MPI should be in your sights. I've never heard of DataSynapse GridServer, a quick Google suggests to me that it's aimed at a completely different class of computational problems.
EDIT: I just checked Amazon for books on MPI. While the Gropp et al. books are a bit old now, there are still plenty of other books being published which cover (the use of) MPI. This is, in part, a reflection of how MPI is used. It's not terribly interesting to computer scientists, so there aren't many books on 'MPI for MPI's sake', but it is of interest to many computational scientists, so there's a steady stream of 'physics with MPI' and 'engineering with MPI' books. If these are outside your sphere of interest, MPI probably is too.
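For readers who have never seen it, the classic MPI rank/size "hello" in C looks like this (a minimal sketch; compile with mpicc and launch with mpirun):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        printf("hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }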
2016 update: MPI is still king for distributed-memory programming on low-latency networks of reliable compute nodes. I think the question poser is correct in that MPI is probably not the protocol layer where fault tolerance should take place. Circa 2006 we ran MPI over Sun Grid Engine; lately MPI on Mesos has been becoming popular.
The MPI standard is in active development:
http://meetings.mpi-forum.org/MPI_3.0_main_page.php
The main issue is that we now have some machines with over 10,000 processors, and MPI itself is having a hard time scaling. There are lots of open research problems. http://www.springerlink.com/content/q11r042317g88230/
Why do you need a book? The API is well documented.
On distributed systems you don't really have any other option besides MPI.
Some Fortran compilers, like the ones from Cray and G95, support coarrays. Then you have UPC, but I haven't seen anyone using it.
Well, probably because there's not enough to it (or the user base is still too small, or they're too smart) for just the API description and a few examples to support a separate book. Lots of books on parallel programming do cover it as one of several parallel methods, though. One recent one (Feb 2010) is Parallel Programming: For Multicore and Cluster Systems by Thomas Rauber and Gudula Rünger. I haven't read it; I mention it because it's recent and by real experts in the field (both of which suggest MPI isn't dead). As for the best book to help you wrap your head around how to use MPI, I can only refer you to people's reviews on Amazon. But look for 'parallel' in the title.
I'm getting into microcontroller programming and have been hearing contrasting views. What language is most used in the industry for microcontroller programming? Is this what you use in your own work? If not, why not?
P.S.: I'm hoping the answer is not assembly language.
In my experience, you absolutely must know C, and assembly language helps too.
Unless you are dealing with very bare-bones microcontrollers (like the RS08 series), C is by far the language of choice. Get to know C; understand keywords like volatile and const. Also understand the architecture: what is efficient, what isn't, what the CPU can do. These will differ wildly from a "desktop" environment. Learn to love stdint.h.
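A hedged sketch of those keywords in a typical embedded idiom (the register address and names below are hypothetical, not from any real part):

    #include <stdint.h>

    /* Hypothetical memory-mapped UART status register.
       volatile: every read really happens; const: this code never writes it. */
    #define UART_STATUS   (*(volatile const uint32_t *)0x40001000u)
    #define UART_RX_READY 0x01u

    /* Fixed-width types from stdint.h keep sizes explicit across targets. */
    static inline int uart_rx_ready(void)
    {
        return (UART_STATUS & UART_RX_READY) != 0;
    }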
You will encounter C++ (or a restricted subset) as projects scale up.
However, you need to understand the CPU and how to read basic assembly as a debugging tool. You can't become an excellent embedded developer without this skillset.
What 'contrasting' views have you heard? To some extent it will depend on the microcontroller and the application. However, C is available for almost all architectures (I hesitate to say all, but probably all that you will ever encounter), so on that point alone, learning C would give you the greatest coverage.
For all architectures, the availability of an assembler and a C compiler is pretty much a given. For 32-bit and most 16-bit architectures C++ will also be available. Notable exceptions I have encountered are Microchip's PIC24/dsPIC parts, for which C++ is not supported by Microchip's own GNU-based compiler (although third-party compilers may support it).
While there are C++ compilers for 8-bit microcontrollers, C++ is not ubiquitous on such platforms, and the compilers are often subsets of the full language. For the types (or more specifically the sizes) of application for which 8-bit parts are usually employed, C++ may be useful, but not to the extent that it is in much larger applications, so C is generally adequate.
There are lots of myths about C++ in embedded systems. While the language is larger than C and has constructs that may compromise the performance or capacity of your system, with C++ you only pay for what you use. And of course, if what you use is just the C subset, then C would be adequate in any case.
The point about C (and C++) is that it is a systems-level language; it will run on your microprocessor with no additional support save a very simple runtime start-up to initialise the processor (and possibly external SDRAM), initialise static data, establish a stack and, in the case of C++, invoke static constructors. This is why, along with target-specific assembler, it is used to build operating systems and kernels: it needs no operating system or kernel itself to run.
One of the reasons I suggested that it may depend on the microcontroller is that if, for example, it is an ARM9 with a few MB of external SDRAM and at least, say, 4 MB of flash (also usually external; memory takes up a lot of die space), then you could run a 'heavyweight' OS on it such as Linux, WinCE or Symbian, or even a large RTOS such as QNX or VxWorks. Then your choice of language (once you got the OS working) would be influenced by the OS, though for real-time applications C and C++ would still dominate (or often Ada in military, avionics and some transport applications).
For mid-size applications (a few hundred kilobytes of code and data space) C# running on the .NET Micro platform is possible. However, I sat in a presentation of this at the Embedded Systems Show in the UK a few years ago, just after it was launched; when I asked "but is it real-time?" and was told "no, you need WinCE for that", there was a gasp and a groan from much of the audience, and some stopped wasting their time and left the presentation there and then (including me).
So I am still interested in the 'contrasting' opinions you have heard, because although it is possible to use other languages, the answer to your question
"What language is most used in the industry for microcontroller programming?"
is definitively C, for the reasons I have given. For anyone who might choose to contest this assertion, here are the statistics (note the different survey method after 2004, explained in the text). However, just to add to the collection of alternatives: I once spent two years programming embedded systems in Forth, and I know of people still using it, but it is a bit of a niche.
I've successfully used both C and C++, but in almost any microcontroller project you will need to be familiar with the assembly language of the target micro. If only for debugging low-level hardware issues, assembly will be indispensable, even if it is only a cursory familiarity.
I think the hardest thing for me when moving from a desktop environment to a micro was that almost everything needs to be allocated statically. You won't often use malloc/new on a micro unless perhaps it has external RAM.
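A small sketch of what "allocate statically" means in practice (the sizes and names are hypothetical):

    #include <stdint.h>

    #define MAX_SAMPLES 64   /* fixed at compile time; no heap needed */

    static uint16_t sample_buf[MAX_SAMPLES];   /* lives in .bss, not on a heap */
    static uint8_t  sample_count;

    /* Returns 0 on success, -1 when full: fail loudly instead of growing. */
    int add_sample(uint16_t s)
    {
        if (sample_count >= MAX_SAMPLES)
            return -1;
        sample_buf[sample_count++] = s;
        return 0;
    }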
I notice that you also tagged your question with FPGA and Verilog. Take a look at Altium: they have a C-to-hardware compiler that works really well with their integrated environment.
Regarding assembler:
Prefer C/C++ over assembler as much as possible. You'll get better productivity by writing as much as possible in C or C++. That includes being able to run some of your code on a PC, which can help in developing the higher-level code (application-layer functions).
On many embedded platforms, it's good to have someone on the project who is comfortable with a little assembler, mostly to get start-up code and interrupts going nicely, and perhaps to write functions for interrupt enable/disable. That's not the same as knowing it really thoroughly; just a basic working knowledge will be sufficient.
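For a concrete idea of how small those enable/disable functions typically are, here is a sketch assuming an ARM Cortex-M target and a GCC-style toolchain:

    /* Assumes ARM Cortex-M and GCC-syntax inline assembly. */
    static inline void irq_disable(void)
    {
        __asm volatile ("cpsid i" ::: "memory");   /* set PRIMASK: mask interrupts */
    }

    static inline void irq_enable(void)
    {
        __asm volatile ("cpsie i" ::: "memory");   /* clear PRIMASK: unmask */
    }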
If you're porting an RTOS (e.g. µC/OS-II) to a new platform, then you'll have to know your assembler more. But hopefully your RTOS supports your platform well already.
If you're pushing up against CPU performance limits, you probably need to know assembler more thoroughly. But hopefully you're not pushing performance limits much, because that can be a drag on a project's viability.
If you're writing for a DSP, you probably need to know the DSP's assembler fairly thoroughly.
Microcontrollers were originally programmed only in assembly language, but various high-level programming languages are now also in common use to target microcontrollers. These languages are either designed specially for the purpose, or versions of general purpose languages such as the C programming language. Compilers for general purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.
Many microcontrollers are so quirky that they effectively require their own non-standard dialects of C, such as SDCC for the 8051, which prevent using standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters are often used to hide such low level quirks.
Interpreter firmware is also available for some microcontrollers. For example, BASIC on the early Intel 8052; BASIC and FORTH on the Zilog Z8, as well as on some modern devices. Typically these interpreters support interactive programming.
Simulators are available for some microcontrollers, such as in Microchip's MPLAB environment. These allow a developer to analyze what the behavior of the microcontroller and their program would be on the actual part. A simulator will show the internal processor state as well as the outputs, and allows input signals to be generated. While most simulators are limited in that they cannot simulate much of the other hardware in a system, they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyze problems.
You need to know assembly language programming. You also need good knowledge of C, and of C++ too. So work hard on those things to build better expertise in microcontroller programming.
And don't forget about VHDL.
For microcontrollers, assembler came before C. Before the ARMs started pushing into this market, the compilers were horrible and the memory and ROM really tiny. There are not enough resources or enough commonality to port your code, so writing in C for portability makes no sense.
Some microcontrollers' assemblers are less than desirable, and ARM is taking over that market. For less money, less power and a smaller footprint you can have a 32-bit processor with more resources. It just makes sense. Much of your code will still not port, but you can probably get by with C.
Bottom line: assembler and C. If they advertise BASIC or Java or something like that, put that company on your blacklist and move on. Been there, done that, have the scars to prove it.
First assembly, then C.
I think those who know both assembly and C are better off than those who know only C.
In this age of many languages, there seems to be a great language for just about every task, and I find myself professionally struggling against a mantra of "nothing but C is fast", where fast really means "fast enough". I work with very rational, open-minded people who like to compare numbers, and all I have are thoughts and opinions. Could you help me find my way past subjective opinions and into the "real world"?
Would you help me find research on what other languages, if any, could be used for embedded and (Linux) systems programming? I may well be pushing a false hypothesis and would greatly appreciate research showing me this. Could you please link to or include good numbers, to help keep the "that's just his/her opinion" comments to a minimum.
So these are my particular requirements:
memory is not a serious constraint
portability is not a serious concern
this is not a real time system
In my experience, using C for embedded and systems programming isn't necessarily a performance issue; it's often a portability issue. C tends to be the most portable, well-supported language on just about every platform, especially on embedded systems platforms.
If you wish to use something else in an embedded system, it's often a matter of figuring out what options are available, then determining whether the performance, memory consumption, library support, etc. are "good enough" for your situation.
"Nothing but C is fast [enough]" is an early optimisation and wrong for all the reasons that early optimisations are wrong. If your system has enough complexity that something other than C is desirable, then there will be parts of the system that must be "fast enough" and parts with lighter constraints. If writing your code, for example, in Python will get the project finished faster, with fewer bugs, then you can follow up with some C or assembly code to speed up the time-critical parts.
Even if it turns out that the entire code must be written in C or assembly to meet the performance requirements, prototyping in a language like Python can have real benefits. You can take your working Python prototype and gradually replace parts with C code until you reach the necessary performance.
So, use the tools that let you get the development work done most correctly and most quickly, then use real data to determine where you need to optimize. It could be that C is the most appropriate tool to start with sometimes, but certainly not always, even in embedded systems.
There are some very good reasons to use C for embedded systems, of which "performance" is only one of the minor ones. Embedded is very close to the hardware; you need manual memory addressing to communicate with it. And the APIs and SDKs are mostly available for C.
There are only a few platforms that can run a VM for Java or Mono, which is partially due to the performance implications but also due to the cost of implementation.
Apart from performance, there is another consideration: you'll most likely be dealing with low-level APIs that were designed to be used in C or C++.
If you cannot use some SDK, you'll only get yourself into trouble instead of saving time by developing in a higher-level language. At the very least, you'll end up redoing a bunch of function declarations and constant definitions.
For C:
C is often the only language that is supported by compilers for a processor.
Most of the libraries and example code are probably also in C.
Most embedded developers have years of C experience but very little experience in anything else.
Allows direct hardware interfacing and manual memory management.
Easy integration with assembly language.
C is going to be around for many years to come. In embedded development it is a monopoly that smothers any attempt at change. A language that needs a VM, like Java or Lua, is never going to go mainstream in the embedded environment. A compiled language might stand a chance if it provided compelling new features over C.
There are several benchmarks on the web comparing different languages. In most of them you will find a C or C++ implementation at the top, as they give you more control to really optimize things.
Example: The Computer Language Benchmarks Game.
It's hard to argue against C (or other procedural languages like Pascal, Modula-2 or Ada) and assembly for embedded. There is a long history of success with those languages. Generally, you want to remove the risk of the unknown, and trying to use anything other than C or assembly is, in my opinion, an unknown. Having said that, there's nothing wrong with a mixed model where you use one of the Schemes that compile to C, or Python or Lua or JavaScript as a scripting language.
What you want is the ability to quickly and easily go to C when you have to.
If you convince the team to go with something that is unproven to them, the project is your cookie. If it crumbles, it'll likely be seen as your fault.
This article (by Michael Barr) talks about the use of C, C++, assembler and other languages in embedded systems, and includes a graph showing the relative usage of each.
And here's another article, fittingly entitled, Poor reasons for rejecting C++.
Ada is a high-level programming language that was designed for embedded systems and mission critical systems.
It is a fast, secure language with data checking built in everywhere. It is what airplane autopilots are programmed in.
At this link you have a comparison between Ada and C.
There are situations where you need real-time performance, especially in embedded systems. You also have severe memory constraints. A language like C gives you greater control over execution time and execution space.
So, depending on what you are doing, C may very well be "better" or more appropriate.
Check out the following articles
http://theunixgeek.blogspot.com/2008/09/c-vs-python-speed.html
http://wiki.python.org/moin/PythonSpeed/PerformanceTips (especially see Python is not C section)
http://scienceblogs.com/goodmath/2006/11/the_c_is_efficient_language_fa.php
C is ubiquitous, available for almost any architecture, usually from day one of a processor's availability. C++ is a close second. If your system can support C++ and you have the necessary expertise, use it in preference to C: it is all that C is, and more, so there are few reasons for not using it.
C++ is a larger language, and there are constructs and techniques it supports that may consume resources or behave in unacceptable ways in an embedded system, but that is not a reason to avoid the language, rather a reason to learn how to use it appropriately.
Java and C# (on .NET Micro or WinCE) may be viable alternatives for non-real-time work.
You may want to look at the D programming language. It could use some performance tuning, as there are some areas where Python can outperform it. I can't really point you to benchmarking comparisons, since I haven't been keeping a list, but as pointed out by Peter Olsson, Benchmarks & Language Implementations has D Digital Mars.
You will probably also want to look at these lovely questions:
Getting Embedded with D (the programming language)
How would you approach using D in an embedded real-time environment?
I'm not really a systems/embedded programmer, but it seems to me that embedded programs generally need deterministic performance. That immediately rules out many garbage-collected languages, because they are not deterministic in general. However, there has been work on deterministic garbage collection (for example, Metronome for Java: http://www.ibm.com/developerworks/java/library/j-rtj4/index.html)
The issue is one of constraints: do the languages/runtimes meet the determinism, memory usage, etc. requirements?
C really is your best choice.
There is a difference between writing portable C code and getting too deep into the gee-whiz features of a specific compiler or the corner cases of the language (all of which should be avoided). Portable C buys you portability across compilers and compiler versions, and a larger pool of employees capable of developing or maintaining the code. The compilers are also going to have an easier time with it and produce better, cleaner, more reliable output.
C is not going anywhere. With all the new languages being designed to fix the flaws of prior languages, C, with all the flaws those new languages are trying to fix, still stands strong.
Here are a couple of articles that compare C# to C++:
http://systematicgaming.wordpress.com/2009/01/03/performance-c-vs-c/
http://journal.stuffwithstuff.com/2009/01/03/debunking-c-vs-c-performance/
Not exactly what you asked for, as it doesn't focus on embedded C programming, but it's interesting nonetheless. The first one demonstrates the performance of C++ and the benefits of using "unsafe" code for processor-intensive tasks. The second one somewhat debunks the first and shows that if you write the C# code a little differently, the performance is almost the same.
So I will say that C or C++ can be the clear winner in terms of performance in many cases, but often the margin is slim. Whether to use C or not is another topic altogether; in my opinion it really should depend on the task at hand. But in embedded systems you often don't have much of a choice.
A couple of people have mentioned Lua. People I know who have worked with embedded systems have said Lua is useful, but it's less a standalone platform than a library that can be embedded in a C program. It is targeted towards use in embedded systems, and generally you'll want to call Lua code from C. But pure C makes for simpler (though not necessarily easier) maintenance, since everyone knows it.
Depending on the embedded platform, if memory constraints are an issue, you'll most likely need to use a non-garbage collected programming language.
C in this respect is likely the language most well known by the team and the most widely supported, with available libraries and tools.
The truth is - not always.
It seems the .NET runtime (though any other runtime could serve as the example) imposes several MB of overhead. If that is all the RAM you have, then you are out of luck. Java ME seems to be more compact, but it still all depends on the resources you have at your disposal.
C compilers are much faster even on desktop systems, because of how few language features there are compared to C++, so I'd imagine the difference is non-trivial on embedded systems. This translates to faster iteration times, although OTOH you don't have the conveniences of C++ (such as collections), which may slow you down in the long run.