I am trying to understand exactly what it means that low-level languages are machine-dependent.
Let's take C, for example: if it is machine-dependent, does that mean that a program compiled on one computer might not be able to run on another?
In the end, processors execute machine code, which is basically a collection of binary numbers. The processor decodes each binary number to figure out what it is supposed to do. One binary number could mean "Add register X to register Y and store the result in register Z". Another could mean "Store the content of register X into the memory address held by register Y". And so on...
The complete description of these decoding rules (i.e. binary number into operation) represents the processor's instruction set architecture (ISA).
A low-level language is a language where the code you write maps very closely to a specific processor's instruction set. Assembly is one obvious example. Since different processors may have different instruction sets, it's clear that an assembly program written for one processor's ISA can't be used on a processor with a different ISA.
Let's take C, for example: if it is machine-dependent, does that mean that a program compiled on one computer might not be able to run on another?
Correct. A program compiled for one processor (family) can't run on another processor with a (completely) different ISA. The program needs to be recompiled.
Also notice that the target OS plays a role. If you use the same processor but a different OS, you'll also need to recompile.
There are at least three different kinds of languages.
A language that is so close to the target system's ISA that the source code can only be used on that specific target. Example: Assembly
A language that allows you to write code that can be used on many different targets via a target-specific compilation. Example: C
A language that allows you to write code that can be used on many different targets without a target-specific compilation. These still require some kind of target-specific runtime environment to be installed. Example: Java
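To make the difference between the second and third kinds concrete, here is a minimal sketch (file names are just illustrative; gcc and arm-linux-gnueabihf-gcc are real toolchains you might have installed): the same conforming C source is compiled once per target, and each resulting binary runs only on its own ISA.

/* hello.c - the source is portable; the binaries it produces are not.
 *
 *   gcc hello.c -o hello_x86                      (runs only on x86)
 *   arm-linux-gnueabihf-gcc hello.c -o hello_arm  (runs only on ARM Linux)
 */
#include <stdio.h>

int main(void)
{
    printf("Same source, target-specific binary.\n");
    return 0;
}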
High-level languages are portable, meaning every architecture can run high-level programs, but, compared to low-level programs (such as those written in assembly or even machine code), they are less efficient and consume more memory.
Low-level programs are "closer to the hardware" and thus optimized for a certain type of hardware architecture/processor; they are faster, but machine-dependent and not very portable.
So a program compiled for one type of processor is not valid for other types; it needs to be recompiled.
In the beginning
When the first processors came out, there was no programming language whatsoever; you had a very long and very complicated document with a list of "opcodes": the codes you had to put into memory for a given operation to be executed by your processor. To create a program, you had to put a long string of numbers in memory and hope everything worked as documented.
Later came assembly languages. The point wasn't really to make algorithms easier to implement or to make the program readable to any human without experience of the specific processor model you were working with; it was created to save you from spending days and days looking things up in the documentation. For this reason, there isn't "an assembly language" but thousands of them, one per instruction set (which, at the time, basically meant one per CPU model).
At this point in time, all languages were platform-dependent. If you decided to switch CPUs, you'd have to rewrite a significant portion (if not all) of your code. Recognizing that as a bit of a problem, someone created the first platform-independent language (according to this SE question it was FORTRAN, in 1954) that could be compiled to run on any CPU architecture, as long as someone made a compiler for it.
Fast forward a bit, and C was invented. C is a platform-independent programming language, in the sense that any C program (as long as it conforms to the standard) can be compiled to run on any CPU (as long as that CPU has a C compiler). Once a C program has been compiled, the resulting file is a platform-dependent binary and will only be able to run on the architecture it was compiled for.
C is platform-dependent
There's an issue though: a processor is more than just a list of opcodes. Most processors have hardware control devices like watchdogs or timers that can be completely different from one architecture to another; even the way to talk to other devices can change completely. As such, if you want to actually run a program on a CPU, you have to include things that make it platform-dependent.
A real-life example of this is the Linux kernel. The majority of the kernel is written in C, but there's still around 1% written in different kinds of assembly. This assembly is required to do things such as initialize the CPU or use timers. Thanks to this approach, Linux can run on your desktop x86_64 CPU, your ARM Android phone, or a RISC-V SoC, but adding a new architecture isn't as simple as just "compile it with your architecture's compiler".
So... Did I just say the only way to run a platform-independent language on an actual processor is to use platform-dependent code? Yes, for most architectures, you have to.
Or is it?
But there's a catch! That's only true if you want to run your code on bare metal (meaning: without an OS). One of the great things about using an OS is how abstracted everything is: you don't need to know how the kernel initializes the CPU, nor do you need to know how it gets its clock; you just need to know how to access those abstracted resources.
But if the way of accessing resources depends on the OS, aren't we back to square one? We would be, if not for the standard library! This library is used to access functions like printf in a well-defined way. It doesn't matter if you're working on Linux running on PowerPC or on an ARM Windows: printf will always print things to the standard output the same way.
If you write standard C using only the standard library (and intend for your program to run on an OS), C is completely platform-independent!
EDIT: As said in the comments below, even that is not enough. It doesn't really have anything to do with specific CPUs, but some things, such as the system function or the size of some types, are documented as implementation-defined. To make C really platform-independent you need to make sure to use only well-defined functions of the standard library and learn some best practices (never rely on sizeof(int)==4, for instance).
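To see the sizeof(int) pitfall in code, here is a small sketch (assuming a C99 compiler, so that <stdint.h> is available):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Implementation-defined: commonly 4, but 2 on many 16-bit targets. */
    printf("sizeof(int) = %zu\n", sizeof(int));

    /* int32_t is exactly 32 bits on every platform that provides it. */
    int32_t counter = 42;
    printf("counter = %ld\n", (long)counter);
    return 0;
}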
Thinking about 'what's a program' might help you understand your question. Is a program a collection of text (that you've typed in or otherwise manufactured) or is it something you run? Is it both?
In the case of a 'low-level' language like C I'd say that the text is the program source, and that this is turned into a program (aka executable) by a compiler. A program is something you can run. You need a C compiler for a system to be able to turn the program source into a program for that system. Once built, the program can only be run on systems close to the one it was compiled for. However, there is a more interesting, if more difficult, question: can you at least keep the program source the same, so that all you need to do is recompile? The answer to this is 'sort-of no', I sort-of think. For example you can't, in pure C, read the state of the shift key. Of course operating systems provide such facilities and you can interface with them in C, but then such code depends on the OS. There might be libraries (e.g. the curses library) that provide such facilities for many OSes and that can help to reduce the dependency, but no library can claim to portably cover all OSes.
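One common way to contain such OS dependencies, sketched below, is to fence the platform-specific parts off behind the preprocessor (_WIN32 and __linux__ really are predefined by the respective compilers; everything else here is illustrative):

#include <stdio.h>

int main(void)
{
#if defined(_WIN32)
    puts("Built for Windows");   /* Windows-only code would go here */
#elif defined(__linux__)
    puts("Built for Linux");     /* Linux-only code would go here */
#else
    puts("Built for some other platform");
#endif
    return 0;
}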
In the case of a 'higher-level' language like Python I'd say the text is both the program and the program source. There is no separate compilation stage with such languages, but you do need an interpreter on a system to be able to run your Python program on that system. The fact that this is happening may not be clear to the user, though, as you may well seem to be able to run your Python 'program' just by naming it, like you run your C programs. But this most likely comes down to the shell (the part of the OS that deals with commands) knowing about Python programs and invoking the interpreter for you. It can appear, then, that you can run your Python program anywhere, but in fact what you can do is pass the program to any Python interpreter.
In the zoo of programming there are not only many, very varied beasts, but new kinds of beasts arise all the time, and old beasts metamorphose. Terms like 'program', 'script' and even 'executable' are often used loosely.
Related
I apologize if this sounds trivial and unsubtle, but I couldn't figure out an intuitive way to google it. Why are some kernel actions, like saving the current state of the registers and the stack (just to mention a few), written in assembly? Why can't they be written in C, because, after all, when compilation is done all we get is object code? Besides, when you use OllyDbg, you notice that before a function call (in C) the current state of the registers is pushed to the stack.
When writing an OS, the main goal is to maintain the highest level of abstraction so the code is reusable across different architectures, but in the end, inevitably, there is the architecture.
Each machine performs the very low-level functions in such a specialized way that no general-purpose programming language can express them.
Task switching, bus control, and device interrupt handling, just to name a few, cannot be coded efficiently in a high-level language (consider the instruction sequences, the registers involved, and potential critical CPU timings and priority levels).
On the other hand, it is not even convenient to use mixed programming, i.e. inline assembler, because the resulting module would no longer be abstract, containing architecture-specific code that can't be reused.
The common solution is to write all code at the highest abstraction level possible, confining the specialized code to a few modules. These routines, written entirely in assembly, are normally well defined in terms of supplied input and expected output, so the programmer can produce the same results on different architectures.
Compiling for a different CPU is then done simply by switching the set of assembly routines.
C does not assure you that it modifies the registers you need to modify.
C just implements the logic you write in your code; the language's interpretation of it will be as you expect, completely hiding the details behind that interpretation.
If you want logic like "set register X to a given value" or "move data from register X to register Y", as is sometimes necessary in a kernel, that kind of logic is simply not defined by the C language.
C is a generic high-level language, not specific to one target. But at the kernel level there are things you need to do that are target-specific and that the C language simply cannot express: enabling an interrupt, configuring an MMU, or configuring something to do with the protection mechanism. On some targets these items (and others) are configured using registers in the address space, but on other targets specific assembly-language instructions are required, so C cannot be used; it has to be assembly. There is usually at least one thing per target you have to use assembly for, if not many.
Sometimes it is simply a case of wanting a particular instruction to be used, for example when a 32-bit store must be used for some operation; to ensure that, rather than hoping the compiler gets it right, use asm.
There is no C equivalent for "return from exception". The C compiler can't translate everything that assembly can do. For instance, if you write an operating system you will need a special return function in the interrupt service routine that goes back to where the interrupt was initiated, and the C compiler can't express such functionality; it can only be written in assembly.
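As a concrete sketch (assuming GCC-style inline assembly and an ARM Cortex-M target; the helper names are invented), this is the kind of one-line wrapper a kernel uses for instructions that have no C equivalent:

/* cpsie/cpsid are real ARM instructions; no C construct maps to them. */
static inline void irq_enable(void)
{
    __asm volatile ("cpsie i" ::: "memory");  /* enable interrupts */
}

static inline void irq_disable(void)
{
    __asm volatile ("cpsid i" ::: "memory");  /* disable interrupts */
}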
See also Is assembly strictly required to make the "lowest" part of an operating system?
Context switching is critical and needs to be really, really fast, which is why it should not be written in a high-level language.
I have searched but couldn't find a clear answer. If we compile the code on a (powerful) computer, then we are only sending machine instructions to the memory of the embedded device. To my understanding, it should make no difference what sort of language we use because, in the end, we are sending only machine code to the embedded device; the compilation, which is the expensive phase, is already done by a powerful machine!
Why use a language like C? Why not Java? We are sending machine code in the end.
The answer partly lies in the runtime requirements and platform-provided expectations of a language: The size of the runtime for C is minimal - it needs a stack and that is about it to be able to start running code. For a compliant implementation static data initialisation is required, but you can run code without it - the initialisation itself could even be written in C, and even heap and standard library initialisation are optional, as is the presence of a library at all. It need have no OS dependencies, no interpreter and no virtual machine.
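As a sketch of that last point (the section symbols would come from a hypothetical linker script), the C runtime initialisation can itself be a few lines of C, entered with nothing but a valid stack pointer:

extern unsigned int _data_load[], _data_start[], _data_end[];
extern unsigned int _bss_start[], _bss_end[];
int main(void);

void reset_handler(void)
{
    unsigned int *src = _data_load, *dst = _data_start;
    while (dst < _data_end)  *dst++ = *src++;   /* copy .data from ROM */
    for (dst = _bss_start; dst < _bss_end; )
        *dst++ = 0;                             /* zero .bss */
    main();
    for (;;) { }                                /* trap if main returns */
}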
Most other languages require a great deal more runtime support and this is usually provided by an OS, runtime-library, or virtual machine. To operate "stand-alone" these languages would require that support to be "built-in" and would consequently be much larger - so much so that you may as well in many cases deploy a system with an OS and/or JVM for example in any case.
There are of course other reasons why particular languages are suited to embedded systems, such as hardware level access, performance and deterministic behaviour.
While the issue of a runtime environment and/or OS is a primary reason you do not often see higher-level languages in small embedded systems, it is by no means unheard of. The .NET Micro Framework, for example, allows C# to be used in embedded systems, there are a number of embedded JVM implementations, and of course Linux distributions are widely embedded, making language choice virtually unlimited. .NET Micro runs on a limited number of processor architectures and requires a reasonably large memory (>256 KB), and JVM implementations probably have similar requirements. Linux will not boot with less than about 16 MB of ROM / 4 MB of RAM. Neither is particularly suited to hard real-time applications with deadlines in the microsecond domain.
C is more-or-less ubiquitous across 8, 16, 32 and 64 bit platforms and normally available for any architecture from day one, while support for other languages (other than perhaps C++ on 32 bit platforms at least) may be variable and patchy, and perhaps only available on more mature or widely used platforms.
From a developer point of view, one important consideration is also the availability of cross-compilation tools for the target platform and language. It is therefore a virtuous circle where developers choose C (or increasingly also C++) because that is the most widely available tool, and tool/chip vendors provide C and C++ tool-chains because that is what developers demand. Add to that the third-party support in the form of libraries, open-source code, debuggers, RTOS etc., and it would be a brave (or foolish) developer to select a language with barely any support. It is not just high level languages that suffer in this way. I once worked on a project programmed in Forth - a language even lower-level than C - it was a lonely experience, and while there were the enthusiastic advocates of the language, they were frankly a bit nuts favouring language evangelism over commercial success. C has in short reached critical mass acceptance and is hard to dislodge. C++ benefits from broad interoperability with C and similarly minimal runtime requirements, and by tool-chains that normally support both languages. So the only barrier to adoption of C++ is largely developer inertia, and to some extent availability on 8 and 16 bit platforms.
You're misunderstanding things a bit. Let's start by explaining the foundation of how computers work internally. I'll use simple and practical concepts here. For the underlying theories, read about Turing machines. So, what's your machine made up of? All computers have two basic components: a processor and a memory.
The memory is a sequential group of "cells" that works sort of like a table. If you "write" a value into the Nth cell, you can then retrieve that same value by "reading" from the Nth cell. This allows computers to "remember" things. If a computer is to perform a calculation, it needs to retrieve input data for it from somewhere, and to output data from it into somewhere. That place is the memory. In practice, the memory is what we call RAM, short for random access memory.
Then we have the processor. Its job is to perform the actual calculations on memory. The actual operations that are to be performed are mandated by a program, that is, a series of instructions that the processor is able to understand and execute. The processor decodes and executes an instruction, then the next one, and so on until the program halts (stops) the machine. If the program is add cell #1 and cell #2 and store result in cell #3, the processor will grab the values at cells 1 and 2, add their values together, and store the result into cell 3.
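That fetch-decode-execute cycle is simple enough to sketch in a few lines of C (the instruction format here is a toy I made up, not any real ISA):

#include <stdio.h>

int main(void)
{
    int mem[8] = {0};
    mem[1] = 2; mem[2] = 3;                /* input data in cells 1 and 2 */
    int program[][4] = {
        {1, 3, 1, 2},                      /* ADD: cell3 = cell1 + cell2 */
        {0, 0, 0, 0}                       /* HALT */
    };
    for (int pc = 0; ; pc++) {             /* fetch-decode-execute loop */
        int *ins = program[pc];
        if (ins[0] == 0) break;            /* opcode 0: halt */
        if (ins[0] == 1)                   /* opcode 1: add */
            mem[ins[1]] = mem[ins[2]] + mem[ins[3]];
    }
    printf("cell 3 = %d\n", mem[3]);       /* prints 5 */
    return 0;
}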
Now, there's an intrinsic question: where is the program stored, if at all? First of all, a program can't be hardcoded into the wires; otherwise, the system is no more a computer than your microwave. There are two distinct approaches to this problem: the Harvard architecture and the Von Neumann architecture.
Basically, in the Harvard architecture, the data (as always has been) is stored in the memory. The code (or program) is stored somewhere else, usually in read-only memory. In the Von Neumann architecture, code is stored in memory, and is just another form of data. As a result, code is data, and data is code. It's worth noting that most modern systems use the Von Neumann architecture for several reasons, including the fact that this is the only way to implement just-in-time compilation, an essential part of runtime systems for modern bytecode-based programming languages, such as Java.
We now know what the machine does, and how it does it. However, how are both data and code stored? What's the "underlying format", and how should it be interpreted? You've probably heard of this thing called the binary numeral system. In our usual decimal numeral system we have ten digits, zero through nine. But why exactly ten digits? Couldn't there be eight, or sixteen, or sixty, or even two? Be aware, though, that a unary-based computational system is not really feasible.
Have you heard that computers are "logical" and "cold"? Both are true... unless your machine has an AMD processor or a special kind of Pentium. Logic theory states that every logical predicate can be reduced to either "true" or "false"; that is to say, "true" and "false" are the basis of logic. Plus, computers are made up of electrical cruft, no? A light switch is either on or off, no? So, at the electrical level, we can easily distinguish two voltage levels, right? And we want to handle logical stuff, such as numbers, in computers, right? So zero and one it is, as the only feasible solution.
Now, taking all the theory into account, let's talk about programming languages and assembly languages. Assembly languages are a way to express binary instructions in a (supposedly) readable way to human programmers. For instance, something like this...
ADD 0, 1 # Add cells 0 and 1 together and store the result in cell 0
Could be translated by an assembler into something like...
110101110000000000000001
Both are equivalent, but humans will only understand the former, and processors will only understand the latter.
A compiler is a program that translates input data that is expected to conform to the rules of a given programming language into another, usually lower-level form. For instance, a C compiler may take this code...
x = some_function(y + z);
And translate it into assembly code such as (of course this is not real assembly, BTW!)...
# Assume x is at cell 1, y at cell 2, and z at cell 3.
# Assume that, when calling a function, the first argument
# is at cell 16, and the result is stored in cell 0.
MOVE 16, 2
ADD 16, 3
CALL some_function
MOVE 1, 0
And the assembler will spit out (this is not random)...
11101001000100000000001001101110000100000000001110111011101101111010101111101111110110100111010010000000100000000
Now, let's talk about another language, namely Java. Java's compiler does not give you assembly/raw binary code, but bytecode. Bytecode is... like a generic, higher-level form of assembly language that the CPU can't understand (there are exceptions), but that another program running directly on the CPU does. This means that the claim, spread around by some badly educated people, that "both interpreted and compiled programs ultimately boil down to machine code" is false. If, for example, the interpreter is written in C, and has this line of code...
Bytecode some_bytecode;
/* ... */
execute_bytecode(&some_bytecode);
(Note: I won't translate that into assembly/binary again!) The processor executes the interpreter, and the interpreter's code executes the bytecode by performing the actions the bytecode specifies. If not implemented carefully this can severely degrade performance, but the interpretation itself is not the problem per se; the real cost is that things such as reflection, garbage collection, and exceptions add quite some overhead. For embedded systems, whose memories are small and whose processors are slow, this is not something you want: you'd be wasting precious system resources on things you don't need. If C programs are slow on your Arduino, imagine a full-blown Java/Python program with all sorts of bells and whistles! Even if you translated the bytecode into machine code before loading it onto the system, support must still be there for all that extra stuff, resulting in basically the same unwanted overhead/waste. You would still need support for reflection, exceptions, garbage collection, etc. It's basically the same thing.
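For concreteness, here is a minimal sketch of what an execute_bytecode loop could look like (the one-byte instruction format is made up, not any real VM's):

typedef unsigned char Bytecode;

/* Toy VM: one accumulator, three opcodes. Returns the accumulator. */
int execute_bytecode(const Bytecode *bc)
{
    int acc = 0;
    for (;;) {
        switch (*bc++) {
        case 0x00: return acc;              /* HALT */
        case 0x01: acc += *bc++; break;     /* ADD immediate */
        case 0x02: acc -= *bc++; break;     /* SUB immediate */
        }
    }
}

/* Usage: {ADD 5, ADD 3, SUB 1, HALT} evaluates to 7. */
/* Bytecode prog[] = {0x01,5, 0x01,3, 0x02,1, 0x00}; */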
On most other environments, this is not a big deal, as memory is cheap and abundant, and processors are fast and powerful. Embedded systems have special needs, they're special by themselves, and things are not free in that land.
Why use a language like C? Why not Java? We are sending machine code in the end.
No, Java code does not compile to machine code; it needs a virtual machine (the JVM) on the target system.
You're partly right about the compilation, but "higher-level" languages can still result in less efficient machine code. For instance, the language can include garbage collection and run-time correctness checks, or can't use all the "native" numeric types, etc.
In general it depends on the target. On small targets (i.e. microcontrollers like AVR) you don't have such complex programs running. Additionally, you need to access the hardware directly (e.g. a UART). High-level languages like Java don't support accessing the hardware directly, so you usually end up with C.
In the case of C versus Java there's a major difference:
With C, you compile the code and get a binary that runs directly on the target.
Java instead creates Java Bytecode. The target CPU cannot process that. Instead it requires running another program: the Java runtime environment. That translates the Java Bytecode to actual machine code. Obviously this is more work and thus requires more processing power. While this isn't much of a concern for standard PCs it is for small embedded devices. (Note: some CPUs do actually have support for running Java bytecode directly. Those are exceptions though.)
Generally speaking, the compile step isn't the issue -- the limited resources and special requirements of the target device are.
You misunderstand something: 'compiling' Java gives a different output than compiling a low-level language. It is true that both are machine codes of a sort, but in C's case the machine code is directly executable by the processor, whereas with Java the output is at an intermediate stage, a bytecode, which can't be executed by the processor; it needs some extra work, a translation to machine code, the only directly executable format. Since that translation takes extra time, C is an attractive choice because of its speed. With a low-level language you write your code, then you compile it for a target machine (you need to specify the target to the compiler, since each processor has its own machine code), and then your code is understandable by the processor.
On the other hand, C allows direct hardware access, which is not allowed in Java-like languages, even via an API.
It's an industry thing.
There are three kinds of high-level languages: interpreted (Lua, Python, JavaScript), compiled to bytecode (Java, C#), and compiled to machine code (C, C++, Fortran, COBOL, Pascal).
Yes, C is a high-level language, and closer to Java than to assembly.
High-level languages are popular for two reasons: memory management and a wide standard library.
Managed memory comes with a cost: somebody must manage it. That's an issue not only for Java and C#, where somebody must implement a VM, but also for bare-metal C/C++, where someone must implement the memory allocation functions.
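For the bare-metal case, "implementing the memory allocation functions" can start as small as this sketch (a bump allocator with no free(); all names are illustrative):

static unsigned char heap[4096];           /* the entire heap */
static unsigned int  heap_top = 0;

void *simple_malloc(unsigned int n)
{
    n = (n + 7u) & ~7u;                    /* round up to 8-byte alignment */
    if (heap_top + n > sizeof heap)
        return 0;                          /* out of memory */
    void *p = &heap[heap_top];
    heap_top += n;
    return p;                              /* note: nothing is ever freed */
}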
A wide standard library can't be supported by all targets because there aren't enough resources; e.g., an AVR Arduino doesn't support the full C++ standard library.
C gained popularity because it can easily be converted to equivalent assembly code. Most statements can be converted, without optimization, to a bunch of fixed assembly instructions, so compilers are easy to write. And its standard is compact and easy to implement. C prevailed because it became the de facto standard for the lowest high-level language of any arch.
So in the end, special snowflakes like Cython, Go, Rust, Haskell etc. aside, the industry decided that machine code is compiled from C and C++, and most optimization efforts went that way.
Languages like Java decided to hide memory from the programmer, so good luck trying to interface with low-level stuff there. As they do that by design, almost nobody bothers trying to bring them to compete with C. Realistically, Java without GC would be C++ with a different syntax.
Finally, if all the industry money goes to one language, the cheapest/easiest thing to do is to choose that language.
You are right in that you can use any language that generates machine code. But Java is not one of them. Java, Python, and even some languages that do compile to machine code may have heavy system requirements. You could (and some folks do) use Pascal, but C won the C-vs-Pascal war many years ago. There are some other languages that fell by the wayside which you could use if you had a compiler for them, and there are some new languages you can use, but the tools are not as mature and there aren't as many targets as one would like. It is very unlikely that they will unseat C. C is just the right amount of power/freedom, low enough and high enough.
Java is an interpreted language and (like all interpreted languages) produces an intermediate code that is not directly executable by the processor. So what you would send to the embedded device is the bytecode, and you would need a JVM running on it, interpreting your code. Clearly not feasible. As for compiled languages (C, C++...), you are right to say that in the end you send machine code to the device. However, consider that using high-level features of a language will produce much more machine code than you would expect. If you use polymorphism, for example, you have just a function call, but when you compile it the machine code explodes. Consider also that very often the use of dynamic memory (malloc, new...) is not feasible on an embedded device.
I'm a mid-level (abstraction) programmer, and some months ago I started to think about whether I should reduce or increase abstraction (I've chosen to reduce).
Now, I think I've done most of the "research" about what I need, but a few questions remain.
Right now, while I'm "doing effectively nothing", I'm just reinforcing my C skills (I bought "The C Programming Language" by K&R), and I'm thinking of starting to study operating systems (like Minix) for learning purposes once I feel comfortable, but I have an idea stuck in my mind and I don't really know if I should care.
In theory (I think, not sure), higher-level languages cannot refer to the hardware directly (registers, memory locations, etc.), so the "perfect language" for the base would be assembly.
I already studied some assembly (a while ago), just to see what it was like (I stopped in the middle of the book due to the outdated debugger it used; the book was "Assembly Language Step By Step, for Linux!"), but from what I read, I didn't like the language much.
So the question is simple: can an operating system (bootloader/kernel) be programmed without touching a single line of assembly, and still be effective?
Even if it can, it won't be "cross-architecture" (i386/ARM/MIPS etc.), will it?
Thanks for your support
You can do a significant amount of the work without assembly. Linux or NetBSD doesn't have to be completely rewritten or patched for each of the many targets it runs on. Most of the code is portable; then there are abstraction layers, and below an abstraction layer you find a target-specific layer. Even within the target-specific layers, most of the code is not asm. I want to dispel the mistaken idea that in order to program registers or memory (for a device driver, for example) you need asm; you do not use asm for such things. You use asm for 1) instructions that a processor has that you cannot produce using a high-level language, or 2) where the code generated from a high-level language is too slow.
For example, on ARM, to enable or disable interrupts there is a specific instruction for accessing the processor state registers that you must use, so asm is required; but programming the interrupt controller is all done in the high-level language. An example of the second point is that you often find that memcpy and other heavily used C library functions are hand-coded asm because it is dramatically faster.
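To illustrate the claim that programming registers needs no asm, here is a sketch of writing a memory-mapped device register from plain C (the address and the names are made up):

#define UART_DATA (*(volatile unsigned int *)0x10000000u)  /* hypothetical */

static void uart_putc(char c)
{
    UART_DATA = (unsigned int)c;   /* a plain C store to a device register */
}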
Although you certainly CAN write and do anything you want in ASM, you typically find that a high-level language is used to access the "hardware directly (like registers, memory locations, etc...)". You should continue to reinforce your C skills, not just with the K&R book but also by wandering through the various C standards; you might find it disturbing how many "implementation defined" items there are, like bitfields, or how variable sizes are promoted, etc. Just because a program you wrote 10 years ago keeps compiling and working using one specific brand of compiler (MSVC, GCC, etc.) doesn't mean the code is clean and portable and will keep working. Unfortunately, gcc has taught many very bad programming habits that shock users when they find out, a decade or so down the road, that they didn't really know the language and now have to redo how they solve problems using it.
You have answered your question yourself in "the higher level languages cannot refer to the hardware directly".
Whether you want it or not, at some point you will have to deal with assembly/machine code if you want to make an OS.
Interrupt and exception handlers will have to have some assembly code in them. So will the scheduler (if not directly, then indirectly). And the system call mechanism. And the bootloader.
What I've learned in the past reading websites and books is that:
a) many programmers dislike assembly language, for the reasons we all know.
b) the main programming language for OSes seems to be C, and even C++.
c) assembly language can be used to 'speed up code' after profiling your source code in C or C++ (in fact, the language doesn't matter).
So, the combination of a mid-level language and a low-level language is in some cases inevitable. For example, there is no point speeding up code that waits for user input.
If it matters to build the shortest and fastest code for one specific range of processors (AMD, Intel, ARM, DEC Alpha, ...), then you should use assembler. My opinion...
I want to decompile a DLL that I believe was written in C. How can I do this?
Short answer: you can't.
Long answer: The compilation process for C/C++ is very lossy. The compiler makes a whole lot of high- and low-level optimizations to your code, and the resulting assembly code more often than not resembles nothing of your original code. Furthermore, there are different compilers on the market (each with several active versions), and each generates its output a little differently. Without knowing which compiler was used, the task of decompiling becomes even more hopeless. At best, I've heard of some tools that can give you a partial decompilation, with bits of C code recognized here and there, but you're still going to have to read through a lot of assembly code to make sense of it.
That is, by the way, one of the reasons why copy protections on software are difficult to crack and require special assembly skills.
It is possible, but extremely difficult and will take a ginormous amount of time, even if you're pretty well versed in C, assembly, and the intricacies of the operating system where this code is supposed to work.
The problem is, optimization makes compiled code hardly recognizable/understandable for humans.
Further, there will be ambiguities if the disassembler loses information (e.g., the same instruction can be encoded in different ways, and if the rest of the code depends on a particular encoding, which many disassemblers (or their users) fail to take into account, the resulting disassembly becomes incomplete or incorrect).
Self-modifying code complicates matters as well.
See this question for more on the topic and available tools.
You can, but only up to a certain extent:
Optimizations could change the code
Symbols might have been stripped (a DLL allows its functions to be referred to by index instead of by symbol)
Some instruction combinations might not be convertible to C
and some other things I might have forgotten...
I googled and I see a surprising amount of flippant responses basically laughing at the asker for asking such a question.
Microchip provides some source code for free (I don't want to post it here in case that's a no-no; basically, google AN937, click the first link, and there's a link for "source code"; it's a zipped file). It's in ASM, and when I look at it I start to go cross-eyed. I'd like to convert it to something resembling a C-type language so that I can follow along, because lines such as:
GLOBAL _24_bit_sub
movf BARGB2,w
subwf AARGB2,f
are probably very simple but they mean nothing to me.
There may be some automated ASM-to-C translator out there, but all I can find are people saying it's impossible. Frankly, it's impossible for it to be impossible. Both languages have structure, and that structure surely can be translated.
You can absolutely make a C program from assembler. The problem is that it may not look like what you are thinking, or maybe it will. My PIC is rusty, but using another assembler, say you had:
add r1,r2
In C, let's say that becomes:
r1 = r1 + r2;
Possibly more readable. You lose any sense of variable names, perhaps, as values jump from memory to registers and back and the registers are reused. If you are talking about the older PICs that had, what, two registers, an accumulator and one other, it might actually be easier, because variables were in memory for the most part; you look at the addresses, something like:
q = mem[0x12];
e = q;
q = mem[0x13];
e = e + q;
mem[0x12] = e;
Long and drawn out, but it is clear that mem[0x12] = mem[0x12] + mem[0x13];
These memory locations are likely variables that will not jump around the way compiled C code does on a processor with a bunch of registers. The PIC might make it easier to figure out the variables, after which you can do a search and replace to name them across the file.
What you are looking for is called static binary translation; not necessarily a translation from one binary to another (one processor to another), but in this case a translation from PIC binary to C. Ideally you would want to take the assembler given in the app note, assemble it to a binary using the Microchip tools, then do the translation. You can do dynamic binary translation as well, but you are even less likely to find one of those, and it doesn't normally result in C but goes from one binary to another. Ever wonder how those $15 joysticks at Wal-Mart with Pac-Man and Galaga work? The ROM from the arcade was converted using static binary translation, optimized and cleaned up, and the C (or whatever the intermediate language was) compiled for the new target processor in the handheld box. I imagine not all of them were done this way, but I'm pretty sure some were.
The million-dollar question: can you find a static binary translator for a PIC? Who knows; you probably have to write one yourself. And guess what that means: you write a disassembler, and instead of disassembling to an instruction in the native assembler syntax like add r0,r1, you have your disassembler print out r0=r0+r1;. By the time you finish this disassembler, though, you will know the PIC assembly language so well that you won't need the ASM-to-C translator. You have a chicken-and-egg problem.
Getting the exact same source code back from a compiled program is basically impossible. But decompilers have been an area of research in computer science (e.g. the dcc decompiler, which was a PhD project).
There are various algorithms that can be used to do pattern matching on assembly code and generate equivalent C code, but it is very hard to do this in a general way that works well for all inputs.
You might want to check out Boomerang for a semi-recent open source effort at a generalized decompiler.
I once worked on a project where a significant part of the intellectual property was some serious algorithms coded in x86 assembly code. To port the code to an embedded system, the developer of that code (not me) used a tool from an outfit called MicroAPL (if I recall correctly):
http://www.microapl.co.uk/asm2c/index.html
I was very, very surprised at how well the tool did.
On the other hand, I think it's one of those "if you have to ask, you can't afford it" type of things (their price ranges for a one-off conversion of a project work out to around 4 lines of assembly processed for a dollar).
But, often the assembly routines you get from a vendor are packaged as functions that can be called from C - so as long as the routines do what you want (on the processor you want to use), you might just need to assemble them and more or less forget about them - they're just library functions you call from C.
You can't deterministically convert assembly code to C. Interrupts, self-modifying code, and other low-level things have no representation in C other than inline assembly. There is only some extent to which an assembly-to-C process can work. Not to mention that the resulting C code will probably be harder to understand than actually reading the assembly code... unless you are using this as a basis to begin reimplementing the assembly code in C, in which case it is somewhat useful. Check out the Hex-Rays plugin for IDA.
Yes, it's very possible to reverse-engineer assembler code to good quality C.
I work for MicroAPL, a company which produces a tool called Relogix to convert assembler code to C. It was mentioned in one of the other posts.
Please take a look at the examples on our web site:
http://www.microapl.co.uk/asm2c/index.html
There may be some automated ASM-to-C translator out there, but all I can find are people saying it's impossible. Frankly, it's impossible for it to be impossible.
No, it's not. Compilation loses information: there is less information in the final object code than in the C source code. A decompiler cannot magically create that information from nothing, and so true decompilation is impossible.
It isn't impossible, just very hard. A skilled assembly and C programmer could probably do it, or you could look at using a Decompiler. Some of these do quite a good job of converting the asm to C, although you will probably have to rename some variables and methods.
Check out this site for a list of decompilers available for the x86 architecture.
Check out this: decompiler
A decompiler is the name given to a computer program that performs the reverse operation to that of a compiler. That is, it translates a file containing information at a relatively low level of abstraction (usually designed to be computer readable rather than human readable) into a form having a higher level of abstraction (usually designed to be human readable).
Not easily possible.
One of the great advantages of C over ASM, apart from readability, was that it prevented "clever" programming tricks.
There are numerous things you can do in assembler that have no direct C equivalent, or involve tortuous syntax in C.
The other problem is datatypes: most assemblers essentially have only two interchangeable datatypes, bytes and words. There may be some language constructs to define ints and floats etc., but there is no attempt to check that the memory is used as defined. So it's very difficult to map ASM storage to C data types.
In addition, all assembler storage is essentially a "struct": storage is laid out in the order it is defined (unlike C, where storage is ordered at the whim of the compiler). Many ASM programs depend on the exact storage layout; to achieve the same effect in C you would need to define all storage as part of a single struct.
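A sketch of that single-struct technique (field names invented; note that C may still insert padding between members unless a compiler-specific "packed" attribute is used, which is why only byte-sized members appear here):

/* All of the ASM program's storage gathered into one struct so the
   layout order matches the original DB/DW definitions. */
struct asm_storage {
    unsigned char flags;        /* was: FLAGS DB ?         */
    unsigned char count_lo;     /* was: COUNT DW ? (low)   */
    unsigned char count_hi;     /*                 (high)  */
    unsigned char buffer[16];   /* was: BUF   DB 16 DUP(?) */
};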
Also, there are a lot of abused instructions (on old IBM mainframes, the LA (load address) instruction was regularly used to perform simple arithmetic because it was faster and didn't need an overflow register).
While it may be technically possible to translate to C, the resulting C code would be less readable than the ASM code that was translated.
I can say with 99% certainty that there is no ready converter for this assembly language, so you'd need to write one. You can simply implement it by replacing each ASM command with a C function:
movf BARGB2,w -> c_movf(BARGB2,w);
subwf AARGB2,f -> c_subwf(AARGB2,f);
This part is easy :)
Then you need to implement each function. You can declare the registers as globals to make things easy. Also, instead of functions you can use #defines, calling functions where needed; this will help with argument/result processing.
#define c_subwf(x,y) /* I don't know this ASM, but some kind of subtraction must happen here */
A special case is ASM directives/labels; I think they can be converted with #defines only.
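Putting those pieces together, a sketch of what the macros might look like (the semantics are my reading of the mid-range PIC data sheet; treat them as illustrative, and note that STATUS-flag updates are omitted):

unsigned char W;               /* the PIC working register as a global */

/* movf f,w : copy file register f into W */
#define c_movf_w(f)   (W = (f))

/* subwf f,f : f = f - W, result stays in the file register */
#define c_subwf_f(f)  ((f) = (unsigned char)((f) - W))

/* so:  movf BARGB2,w   ->  c_movf_w(BARGB2);
        subwf AARGB2,f  ->  c_subwf_f(AARGB2);  */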
The fun starts when you reach CPU-specific features. These can be simple function calls with stack operations, or specific IO/memory operations. More fun are operations with the program counter register, used for calculations, or code that uses/counts ticks/latencies.
But there is another way, if things get that hardcore. It's hardcore too :)
There is a technique called dynamic recompilation. It's used in many emulators.
You don't need to recompile your ASM, but the idea is almost the same. You can use all your #defines from the first step, but add support for the needed functionality to them (incrementing the PC/ticks). You also need to add some virtual environment for your code, such as memory/IO managers, etc.
Good luck :)
I think it is easier to pick up a book on PIC assembly and learn to read it. Assembler is generally quite simple to learn, as it is so low level.
Check out asm2c, a Swift tool to transform DOS/PMODEW 386 TASM assembly code to C code.
It is difficult to convert a function from asm to C, but doable by hand. Converting an entire program with a decompiler will give you code that can be impossible to understand, since too much of the structure was lost during compilation. Without meaningful variable and function names, the resulting C code is still very difficult to understand.
The output of a C compiler (especially unoptimised output) for a basic program could be translatable back to C because of repeated patterns and structures.