Developing a GUI and loading libraries for a bootloader - C

1) I learned to write bootloaders and tested them using Bochs. Now I want to add a GUI to my bootloader. I have googled but didn't hit on relevant sources, and I even tried searching GitHub for existing projects. I checked out this question. But is there any way to install some graphics libraries or include X Window System APIs in my code to give it a GUI environment (e.g. Chameleon, GAG) instead of including just a splash image?
2) Is there any possibility of adding a Python execution environment during the bootloader stage, so that once I step into protected mode I could run some Python scripts?
3) How do I add standard C/C++ library support?
Thanks in advance.

It might help if you told us what kind of bootloader you're talking about. I work with this daily and most of what you're saying seems like crazy talk. LOL.
Usually bootloaders are very small startup programs that decide what to load next. Some bootloaders contain features for "flashing" the main application. (The main app often cannot flash itself.) Being the first piece of code the processor meets makes the bootloader very critical: you do not want it to fail, nor do you want to have to upgrade it. Hence bootloaders tend to strive for "smaller is better".
As for your questions.
3) I've never seen a bootloader not written in C, so that one is covered. (They do exist, though.) For C++ you would have to compile the whole project as C++. (Not a problem.)
1) and 2) I'm not sure what "protected mode" is, but to include e.g. Python or X you would have to pull their source into your project. Here's the thing: usually bootloaders are bare-metal code, and code like X isn't. You'd have to include a whole lot of other code as well, and you'd end up with a 500 MB bootloader. Usually you strive for something like 5 KB.
The splash image is usually done through low-level interaction with the graphics card.
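To give a feel for what that low-level interaction looks like, here is a minimal sketch. It assumes a real-mode stub has already switched the card to VGA mode 13h (320x200, 256 colours) via BIOS INT 10h and that the code runs with a flat address space; the helper names are made up for illustration.

/* Illustrative splash drawing for a hobby x86 bootloader (assumptions above). */
#include <stdint.h>

#define VGA_MODE13_FB ((volatile uint8_t *)0xA0000)
#define VGA_WIDTH  320
#define VGA_HEIGHT 200

static void put_pixel(int x, int y, uint8_t colour)
{
    VGA_MODE13_FB[(long)y * VGA_WIDTH + x] = colour;
}

/* Fill the screen with a single colour as a stand-in for a real splash image. */
void draw_splash(uint8_t colour)
{
    for (int y = 0; y < VGA_HEIGHT; y++)
        for (int x = 0; x < VGA_WIDTH; x++)
            put_pixel(x, y, colour);
}

Anything fancier than this (widgets, fonts, windows) means pulling in and initialising a lot more code, which is exactly the size problem described above.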
If anyone else is interested in bootloader development, I can recommend looking at U-Boot. It's a (mostly) Linux loader and is rather easy to find your way through. And it already supports a lot.

Related

Create a shared library for multiple applications for ARM Cortex-M4

I'm trying to create a project which contains a drivers library and two separate applications (bootloader + app). Now I want to share the drivers library between the two apps in order to save space on the flash...
I saw this tutorial for IAR, but I must use Keil uVision5 and I didn't find anything helpful online.
Can anyone guide me through this?
thanks!
Splitting the code into three parts (bootloader, library, application) is most likely too much. I think it is better to combine the bootloader and the drivers in a single binary. When calling the application, the bootloader can provide the information necessary to use the drivers.
A word of caution, though: a solution like this is far more tricky than just compiling the drivers into the application. Depending on the driver functions, there may be no real benefit in flash usage; in particular, if many drivers are not needed, they will just occupy flash instead of getting optimized out.
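One common way to do that "bootloader provides the drivers" hand-over on Cortex-M parts is a table of function pointers kept at a fixed flash address that both images agree on. A minimal sketch of the idea; the address, the struct layout and the driver functions are assumptions for illustration, not a Keil-specific API.

/* Shared driver table between bootloader and application (sketch). */
#include <stdint.h>

typedef struct {
    uint32_t version;                      /* lets the app check compatibility */
    void     (*uart_init)(uint32_t baud);
    void     (*uart_write)(const uint8_t *buf, uint32_t len);
    uint32_t (*flash_erase)(uint32_t addr, uint32_t len);
} driver_api_t;

/* ---- bootloader side: the real driver implementations live here ---- */
static void bl_uart_init(uint32_t baud)                     { (void)baud; /* hardware setup */ }
static void bl_uart_write(const uint8_t *buf, uint32_t len) { (void)buf; (void)len; }
static uint32_t bl_flash_erase(uint32_t addr, uint32_t len) { (void)addr; (void)len; return 0; }

/* In Keil this table would be pinned to a known address with a scatter file
 * or a section attribute; here it is just an ordinary const object. */
const driver_api_t g_driver_api = {
    1, bl_uart_init, bl_uart_write, bl_flash_erase
};

/* ---- application side: look the table up at the agreed address ---- */
#define DRIVER_API_ADDR 0x08003F00u   /* assumed; must match the bootloader's scatter file */

void app_log(const char *s, uint32_t len)
{
    const driver_api_t *drv = (const driver_api_t *)DRIVER_API_ADDR;
    drv->uart_write((const uint8_t *)s, len);
}

The version field is what makes this less fragile: if the bootloader's driver table ever changes layout, the application can detect the mismatch instead of calling through garbage pointers.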

How does one load external code with a custom bootloader?

I'm writing my own operating system, and so far I'm only really able to write it in assembly, because I don't really understand how I would set it up with multiple files/languages. I've written bootloaders with executable code in them before, but what I don't understand is how to make the bootloader aware of other files outside of itself. How would I be able to write a bootloader in assembly and then tell it to load, say, a kernel written in C in a different file? Do I have to bundle the .o files from the compilation of the kernel into the fdd image and tell the bootloader to load/execute them or is it more complicated than that?
Since it looks like you're trying to get the hang of system bring-up, it might be worthwhile to take a look at some "smaller" embedded systems to get a feel for what goes on once power is applied and the chip comes out of reset. Take a look at U-Boot here: http://www.denx.de/wiki/U-Boot
It is a very popular bootloader, especially for embedded systems, and can launch a variety of OSes. The mainline supports a ton of different configurations as well. I think it is relatively straightforward to follow what happens during power-up if you are comfortable with C.
To answer your question more specifically: with U-Boot, for instance, you can build parameters into the U-Boot image saying where you are going to load your code, it can read where your image file is stored from a configuration file at power-up, it can load a configuration automatically from your network somewhere, and you can even tell U-Boot where and what to load from its command-line interface. Take a look and see if you have any further questions.
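For the simpler case in the question (an assembly boot sector plus a kernel written in C, both packed into the floppy image), the usual pattern is: compile the kernel freestanding, link it as a flat binary at a fixed load address with a linker script, write it to the image right after the boot sector, and have the boot sector read those sectors into memory (e.g. with BIOS INT 13h), switch to protected mode, and jump to the entry point. A minimal sketch of the C side; the load address, build flags and the assumption that protected mode is already set up are all illustrative.

/* kernel.c - minimal flat-binary kernel entry point (sketch).
 * Assumed build: gcc -m32 -ffreestanding -fno-pie -c kernel.c, linked with a
 * linker script that places kmain first at an agreed load address (e.g.
 * 0x8000), which is where the boot sector jumps after loading the sectors
 * and entering 32-bit protected mode. */

void kmain(void)
{
    /* Write a message straight into VGA text memory so we can see that the
     * hand-off from the assembly boot sector worked. */
    volatile unsigned short *vga = (unsigned short *)0xB8000;
    const char *msg = "Hello from the C kernel";

    for (int i = 0; msg[i] != '\0'; i++)
        vga[i] = (unsigned short)(0x0F00 | (unsigned char)msg[i]);  /* white on black */

    for (;;) { /* hang forever */ }
}

So it isn't the .o files you put on the image, but the linked, flattened binary; the bootloader only needs to know which sectors it occupies and which address to jump to.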

Porting Autodesk Animator Pro to be cross platform

A previous relevant question from me is here: Reverse Engineering old paint programs
I have set up my base of operations here: http://animatorpro.org
wiki coming soon.
Okay, so now I have a 300,000 line legacy MSDOS codebase. It's sort of a "be careful what you wish for" situation. I am not an experienced C programmer. I'm not entirely inexperienced either, but for all intents and purposes I'm a noob to the language and in particular the intricacies of its libraries. I am especially ignorant of the vagaries of the differences between C programs written specifically for MSDOS and programs that are cross platform. However I have been studying this code base for over a year now, and this is what I know about Animator Pro:
Compilers and tools used:
Watcom C compiler
tcmake (make program from Turbo C)
386asm, a specialised assembler for the Phar Lap DOS extender
and of course, the Phar Lap DOS extender itself.
a selection of obscure DOS utilities
Much of the compilation seems to be driven by batch files. Though I have obtained copies of all these tools, I have not yet succeeded at compiling it. (Though I have compiled its older brother, the original Autodesk Animator.)
It's got a plugin system that replicated DLLs before DLLs were available, based on REX. The plugin system handles:
Video Drivers (with a plethora of included VESA drivers)
Input drivers (including Wacom tablets and keyboards)
Drawing Tools
Inks (like Photoshop's filters, or blending modes)
Scripting Addons (essentially compiled scripts)
File formats
It's got its own script interpreter named POCO, based on the C language. The scripting language has enough power to do virtually all the things the plugin system can do, just slower.
Given this information, this is my development plan. Please criticise it. The source code is available at the link above, so you can easily, if you are so inclined, assess the situation yourself.
1. Compile with its original tools.
2. Switch to using DJGPP, and make the necessary changes to get it to compile with that, plus the original assembler.
3. Include the Allegro.cc "Game" library, and switch over as much functionality to that library as possible, perhaps by simply writing new video and input drivers that use the Allegro API. I'm thinking Allegro rather than SDL because there is a DOS version of Allegro and, fascinatingly, one of its core functions is the ability to play Animator Pro's native FLIC format.
4. Hopefully after 3, I will have eliminated most or all of the assembler in the project. I say hopefully because it's in an obscure dialect that doesn't assemble in any modern free assembler without significant modification; I have tried them all. Whatever is left gets converted to assemble in NASM, or to C code if I can work out what the assembler actually does.
5. Switch the DOS extender from Phar Lap to HX DOS (http://www.japheth.de/HX.html), which promises to replicate as much of the Win32 API as possible. Then make all the necessary code changes for that to work.
6. Switch to the Win32 version of Allegro.cc, assuming that it can run on top of HX DOS. Make any further necessary changes.
7. Modify the plugin system to use some kind of standard cross-platform plugin library. What this would be, I have no idea; maybe you can offer some suggestions? I talked to the developer who originally wrote the plugin system, and he said some of the things it does aren't possible on modern OSes because of segmentation restrictions. I'm not sure what this means, but I'm guessing all the plugins will need to be rewritten almost from scratch.
8. Magically, I get all the above done, and we can try to make it run on Windows, OS X, and Linux, whilst dealing with other cross-platform niggles like long file names, and things I haven't thought of.
Does anyone have a problem with any of this? Is Allegro a good choice? If not, why? What would you do about the plugin system? What would you do differently? Is this whole thing foolish, and should I just rewrite it from scratch, using the original as inspiration? (It would apparently take the original developer "about a month" to do that.)
One thing I haven't covered above is the text/font system. I'm not sure what to do about that: Animator Pro has its own custom font format, but it can also use PostScript Type 1 fonts and some other formats.
My biggest concern with your plan, in a nutshell: your approach seems to be to attempt to keep the whole enormous thing working at all times, tweaking the environment ever further away from DOS. With each tweak to the environment you will have approximately a billion subtle assumptions that might have broken at once, none of which you necessarily understand yet. Untangling them all at once will be incredibly painful.
If I were doing the port, my approach would be to disable as much code as possible to get SOMETHING running in a modern environment, and bring the parts back online, one piece at a time. Write a simple test harness program that loads a display driver and draws some stuff, and compile it for DOS to make sure you understand the interface. Then write some C code that implements the same interface, but with Allegro (or SDL or SFML), and make that program work under Windows or Linux. When the output differs, you have a simple test case to work from.
Your entire job on this port is swapping out implementations of various interfaces and functions with completely new ones. This is a job that unit testing excels at. Don't write any new code without a test of some kind that runs on the old code under DOS! Make your potential problems as small and simple as you possibly can. Port assembly code instead of rewriting it only if you're reasonably confident that it will actually make your job easier (ie, algorithmic stuff that compiles fine with few tweaks under NASM). Don't bite off a bigger piece than you can comfortably fit in your brain at once.
I, for one, look forward to seeing your progress! I think what you're attempting to do is great. Thanks for doing it.
Hmmm - I might approach it by writing an OpenGL video "driver" for it. Today's machines are fast enough, with tons of RAM, that you could run all the pixel-specific algorithms on the main CPU against a back buffer and it would work. As the "generic" VGA driver just mapped the video buffer to a pointer, this would be a place to start. There was a zoom mode in the UI, so you can still look at the pixels on a high-res display.
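To make the "video buffer behind a pointer" idea concrete, here is a rough sketch of the shape such a replacement driver could take. Animator Pro's real REX driver interface is different, and every name below is made up for illustration.

/* Illustrative only: swap a hardware VGA driver for a main-memory back
 * buffer that a modern library (Allegro, SDL, OpenGL) blits to a window. */
#include <stdint.h>
#include <string.h>

#define SCREEN_W 640
#define SCREEN_H 480

/* Hypothetical driver vtable the rest of the program calls through. */
typedef struct video_driver {
    uint8_t *framebuffer;                     /* 8-bit indexed pixels      */
    void (*set_palette)(const uint8_t *pal);  /* 256 RGB triples           */
    void (*show)(void);                       /* push the buffer on screen */
} video_driver;

static uint8_t backbuf[SCREEN_W * SCREEN_H];

static void mem_set_palette(const uint8_t *pal)
{
    /* A real port would hand the palette to the graphics library here. */
    (void)pal;
}

static void mem_show(void)
{
    /* A real port would blit backbuf to a window, e.g. as a texture. */
}

video_driver *open_memory_driver(void)
{
    static video_driver drv = { backbuf, mem_set_palette, mem_show };
    memset(backbuf, 0, sizeof backbuf);
    return &drv;
}

The existing drawing code keeps writing through framebuffer exactly as it did to VGA memory; only show() and set_palette() know anything about the host platform, which is what makes the driver a good seam for the port.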
It is often very difficult to take an existing non-trivial code base that wasn't written with portability in mind - you mention a few such dependencies - and then try to make it portable. There will be a lot of problems along the way. It is probably a better idea to start from scratch and rewrite the code, using the existing code only as a reference. If you start from scratch you can leverage an existing portable UI solution, like Qt, in your new project.

Invensense IMU3000 with microcontroller PIC

Has anybody used the Invensense IMU-3000 with a microcontroller?
I am trying to build the IMU library for a PIC but I am stuck with the dependencies... any experience with other microcontrollers would be welcome as well!
Basically, I don't get whether it is better to take the Visual Studio 2005 project and make the changes there, adding the PIC dependencies (where I get stuck...), or to compile the whole library in the PIC environment.
Any hint, even with other platforms, would help!
Thank you all!
PC and PIC programming are so very different... Also, there are so many PIC variants, and they are hugely different from each other, that it's hard to answer such an open-ended question. However, basically you're writing mathematical algorithms. So write these as ANSI C functions, hosted with a load of PC things (dialogs etc.), and once they're working you can move just the math functions over to the PIC - having already got a framework running on the PIC, ready to receive the algorithms. BUT - take care with memory. You have bags of it on a PC; you have to be mean with memory once you work on a PIC. Good luck, enjoy!
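To make the "portable math core plus PC host" split concrete, here's a rough sketch. The complementary-filter math, the Q16.16 fixed-point format and all the names are made up for illustration and have nothing to do with the actual Invensense library.

/* imu_math.c - ANSI C only, no PC- or PIC-specific headers, so the same
 * file compiles under Visual Studio and under a PIC cross compiler. */
#include <stdint.h>

/* Simple complementary filter combining a gyro rate with an accel-derived
 * angle. Fixed point (Q16.16) so the PIC doesn't need an FPU. */
typedef struct {
    int32_t angle_q16;      /* current angle estimate, Q16.16 degrees */
} imu_filter;

void imu_filter_init(imu_filter *f) { f->angle_q16 = 0; }

void imu_filter_update(imu_filter *f,
                       int32_t gyro_dps_q16,   /* gyro rate, deg/s, Q16.16   */
                       int32_t accel_deg_q16,  /* accel angle, deg, Q16.16   */
                       int32_t dt_ms)          /* timestep in milliseconds   */
{
    /* Integrate the gyro, then nudge toward the accelerometer reading. */
    int64_t integrated = f->angle_q16 + (int64_t)gyro_dps_q16 * dt_ms / 1000;
    /* 98% gyro, 2% accel - coefficients chosen only for illustration. */
    f->angle_q16 = (int32_t)((integrated * 98 + (int64_t)accel_deg_q16 * 2) / 100);
}

#ifdef PC_TEST_BUILD
/* PC-only test harness: compile with -DPC_TEST_BUILD on the desktop and
 * leave it out when building with the PIC toolchain. */
#include <stdio.h>
int main(void)
{
    imu_filter f;
    imu_filter_init(&f);
    for (int i = 0; i < 100; i++)
        imu_filter_update(&f, 10 << 16, 9 << 16, 10);
    printf("angle ~ %.2f deg\n", f.angle_q16 / 65536.0);
    return 0;
}
#endif

The point is the split, not the filter: everything inside the #ifdef is throwaway PC scaffolding, and the part above it is the only code that ever gets cross-compiled for the PIC.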
While it can be helpful to write code on the PC that will eventually move to the PIC, you will need to make sure that all code that will move has been written with portability in mind. That is, you cannot assume that code that compiles and works perfectly under Visual Studio will work without modification on any other platform.
To run in the PIC, all of the code must be compiled with cross development tools that are designed to target the PIC.
That said, I often develop algorithms and detailed processing code on the PC, where a test suite can easily be used to verify their operation, and then recompile them for my target platform.
Incidentally, Google tells me that the IMU-3000 is a MEMS Gyro. It would probably be helpful to include at least the link to its data sheet in the question.

Why do you obfuscate your code?

Have you ever obfuscated your code before? Are there ever legitimate reasons to do so?
I have obfuscated my JavaScript. It made it smaller, thus reducing download times. In addition, since the code is handed to the client, my company didn't want them to be able to read it.
Yes, to make it harder to reverse engineer.
To ensure a job for life, of course (kidding).
This is pretty hilarious and educational: How to Write Unmaintainable Code.
It's called "Job Security". This is also the reason to use Perl -- no need to do obfuscation as separate task, hence higher productivity, without loss of job security.
Call it "security through obsfuscability" if you will.
I don't believe making reverse engineering harder is a valid reason.
A good reason to obfuscate your code is to reduce the compiled footprint. For instance, J2ME applications need to be as small as possible. If you run your app through an obfuscator (and optimiser) then you can reduce the jar from a couple of MB to a few hundred KB.
The other point, nestled above, is that most obfuscators are also optimisers which can improve your application's performance.
Isn't this also used as security through obscurity? When your source code is publicly available (JavaScript etc.) you might want to at least make it somewhat harder to understand what is actually occurring on the client side.
Security is always full of compromises, but I think that security by obscurity is one of the least effective methods.
I believe all TV cable boxes will have their Java code obfuscated. This does make things harder to hack, and since the cable boxes will be in your home, they are theoretically hackable.
I'm not sure how much it will matter, since the cable card will still control signal encryption and gets its authorization straight from the video source rather than from the Java guide code or Java apps, but they are pretty dedicated to the concept.
By the way, it is not easy to trace exceptions thrown from an obfuscated stack! I actually memorized at one point that aH meant "Null Pointer Exception" for a particular build.
I remember creating a Windows service for an online backup application that was built in .NET. I could easily use either Visual Studio or tools like .NET Reflector to see the classes and the source code inside it.
I created a new Visual Studio test application and added a reference to the Windows service. I double-clicked on the reference and could see all the classes, namespaces, everything (not the source code, though). Anybody can figure out the internal workings of your modules just by looking at the class names. In my case, one such class was FTPHandler, which clearly tells where the backups are going.
.NET Reflector goes beyond that by showing the actual code. It even has an option to export the whole project, so you get a VS project with all the classes and source code, similar to what the developer had.
I think it makes sense to obfuscate, to make it at least harder, if not impossible, for someone to disassemble. I also think it makes sense for products with a large customer base where you do not want your competitors to know much about your products.
Looking at some of the code I wrote for my disk driver project makes me question what it means to be obfuscated.
((int8_t (*)( int32_t, void * )) hdd->_ctrl)( DISK_CMD_REQUEST, (void *) dr );
Or is that just system programming in C? Or should that line be written differently? Questions...
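For what it's worth, that call reads a lot less cryptically with a typedef. The types and names below are inferred only from the cast itself, since the driver's real declarations aren't shown.

/* Hypothetical rewrite of the line above using a typedef for the control
 * callback; argument meanings are assumed, not taken from real headers. */
typedef int8_t (*disk_ctrl_fn)(int32_t cmd, void *arg);

disk_ctrl_fn ctrl   = (disk_ctrl_fn)hdd->_ctrl;
int8_t       status = ctrl(DISK_CMD_REQUEST, (void *)dr);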
Yes and no. I haven't delivered apps with a tool that was easily decompilable.
I did run something like obfuscators for old BASIC and UCSD Pascal interpreters, but that was for a different reason: optimizing run time.
If I am delivering Java Swing apps to clients, I always obfuscate the class files before distribution.
You can never be too careful - I once pointed a decent Java decompiler (I used the JD Java Decompiler - http://www.djjavadecompiler.com/ ) at my class files and was rewarded with an almost perfect reproduction of the original code. That was rather unnerving, so I have obfuscated my production code ever since. I use Klassmaster myself (http://www.zelix.com/klassmaster/).
I have mostly obfuscated the code of my Android applications, using the ProGuard tool.
When I worked on a C# project, our team used ArmDot. It's a licensing and obfuscation system.
Modern obfuscators are used not only to make the hacking process difficult; they can also protect programs and games from cheating, check licenses/keys, and even optimize code.
But I don't think it is necessary to use an obfuscator in every project.
It's most commonly done when you need to provide something in source form (usually due to the environment it's being built in, such as systems without shared libraries, especially if you as the seller don't have the exact system it's being built for), but you don't want the person you're giving it to to be able to modify or extend it significantly (or at all).
This used to be far more common than it is today. It also led to the (defunct?) Obfuscated C Code Contest.
A legal (though arguably not "legitimate") use might be to release "source" for an app you're linking with GPL code in obfuscated fashion. It's source, it can be modified, it's just very hard. That would be a more extreme version of releasing it without comments, or releasing with all whitespace trimmed, or (and this would be pushing the legal grounds probably) releasing assembler source generated from C (and perhaps hand-tweaked so you can say it's not just intermediate code).

Resources