How do `Depends on` and `Selected by` work in Kconfig when they conflict - u-boot

To understand how SPL (the secondary boot loader) works, I tried (in U-Boot v2021.10):
make ARCH=arm CROSS_COMPILE=aarch64-none-elf- vexpress_ca9x4_defconfig
and then
make ARCH=arm CROSS_COMPILE=aarch64-none-elf- menuconfig
I searched for SPL_OS_BOOT, which I need in order to test SPL Falcon mode, but it appears it is not enabled by default for this board.
So first I need to set CONFIG_SPL=y, but when I search for SPL, it shows this.
I can't clearly understand it.
Does Depends on: ARM [=y] && ARCH_STM32MP [=n] mean I should set ARCH_STM32MP=y?
And if I add a Selected by condition, does it still have to meet the Depends on condition above?
I ask because SPL should be available for many boards, but having ARCH_STM32MP, a very specific architecture condition, in the Depends on list looks weird.

Kconfig in general can be difficult to follow (and a few things in how we use it in U-Boot need to be cleaned up, as they make things harder still to follow). It's often best to look at the Kconfig files directly to better understand things. In this case, as you've noted, SPL_OS_BOOT depends on SPL, and if we look in common/spl/Kconfig we see:
config SPL
	bool
	depends on SUPPORT_SPL
	prompt "Enable SPL"
	help
	  If you want to build SPL as well as the normal image, say Y.
which hints at the real problem you're facing: vexpress_ca9x4 does not support SPL. That's what the long list of things you were trying to figure out was showing: the places where SUPPORT_SPL is set.
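More generally (and this is a minimal sketch with made-up symbol names, not lines from the U-Boot tree), `depends on` only controls whether you are allowed to turn a symbol on, while `select` in another symbol forces it on without visiting its dependencies, which is why the Kconfig documentation says select should be used with care:

config SUPPORT_SPL
	bool

config TARGET_MYBOARD
	bool "My hypothetical board"
	select SUPPORT_SPL	# appears under SUPPORT_SPL as "Selected by"

config SPL
	bool "Enable SPL"
	depends on SUPPORT_SPL	# appears under SPL as "Depends on"

So in the menuconfig search output, "Depends on" tells you what must already be true before you can set the symbol yourself, and "Selected by" lists the other symbols that would force it on. You normally satisfy one of the selectors (here, by picking a board whose Kconfig selects SUPPORT_SPL) rather than flipping an architecture symbol such as ARCH_STM32MP by hand.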

Related

Readlink not finding C files (MSYS)

A while back I asked a question about this subject and "solved" it by using Cygwin instead with its XWin utility, but I've come back to this issue again since the Xwin utility does not use my GPU and creates a severe bottleneck in simulations as a result. MinGW/MSYS on the other hand DOES use my GPU for rendering, which is a huge help, but there are some rough areas that need smoothing over, specifically with readlink.
Basically, the src/makefile for rebound (https://github.com/hannorein/rebound) says this:
PREDEF+= -D$(shell basename `readlink gravity.c` '.c' | tr '[a-z]' '[A-Z]')
PREDEF+= -D$(shell basename `readlink boundaries.c` '.c' | tr '[a-z]' '[A-Z]')
PREDEF+= -D$(shell basename `readlink collisions.c` '.c' | tr '[a-z]' '[A-Z]')
If my understanding is correct, this is supposed to find which version of gravity, boundaries and collisions I specified and add that to PREDEF so the compiler uses the right versions of gravity, boundaries and collisions. However, it does not seem to work in MSYS. What it ends up spitting out for the predefs is this:
-DOPENGL -D.C -D.C -D.C
Obviously it did not get anything back from the code above. This results in a "macro names must be identifiers" error, of course. I can work around this by adding any of the special options between readlink and the filename, -f for instance, but then it only spits out
-DOPENGL -DGRAVITY -DBOUNDARIES -DCOLLISIONS
which is not right, because it should have the extra bits, like so:
-DOPENGL -DGRAVITY_DIRECT -DBOUNDARIES_OPEN -DCOLLISIONS_NONE
Now, if I don't want any special gravity, boundaries or collisions, the workaround is okay, but only because (I'm guessing) it defaults to those if nothing special is specified after each macro name. But if I DO want something special, like the more efficient gravity tree code, or actual collisions, the shortened name resulting from the workaround will not help it find anything, and so it causes errors in compiling, as certain functions it needed from the special files are obviously missing.
And so I'm pretty stuck at the moment. I would very much like to be able to use codes other than the defaults, but MSYS is acting funny with readlink and not finding the right stuff. As I said, it worked fine in an X-windows-style environment. I feel like there must be some library I'm missing, or some hidden syntax disconnect between XWin and non-XWin compiling that I'm overlooking, but I can't find anything.
Here's an example of the links it should be reading (at least I think this is what is being read; I'm still learning makefiles):
ln -fs gravity_tree.c ../../src/gravity.c
ln -fs boundaries_open.c ../../src/boundaries.c
ln -fs collisions_none.c ../../src/collisions.c
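For reference, with real symbolic links in place I'd expect the pipeline to evaluate roughly like this (my own trace, not output from MSYS):

readlink gravity.c                                        # -> gravity_tree.c
basename `readlink gravity.c` '.c'                        # -> gravity_tree
basename `readlink gravity.c` '.c' | tr '[a-z]' '[A-Z]'   # -> GRAVITY_TREE

which is where entries like -DGRAVITY_TREE (or -DGRAVITY_DIRECT with the default links) are supposed to come from.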
If anyone can tell me why this would work on an Xwin command line but not MSYS, I'd greatly appreciate it.
Why on earth do you expect readlink to work in MSYS? Where did you even get whatever readlink.exe is being invoked (if that is what is being executed)? There is no readlink command in a standard MSYS installation. Perhaps you discovered it in MinGW.org's msys-coreutils-ext package? If so, you should note the comment within the description of that package (as seen via MinGW.org's mingw-get installer):
The msys-coreutils-bin subpackage contains those applications that were historically part of the standard MSYS installation. The associated msys-coreutils-ext subpackage contains the rest of the coreutils applications that have been (nominally) ported to MSYS -- usually these are less often used, and are not guaranteed to work: e.g. 'su.exe', 'chroot.exe' and 'mkfifo.exe' are known to be broken.
and it seems that we may add readlink.exe to that list of "known to be broken" applications.
It may also be worth noting that readlink is not among the list of supporting tools which a GNU Coding Standards conforming application is permitted to invoke from either its configure script or its makefile. Thus, there is little incentive for the MinGW.org developers (who maintain MSYS) to address the issue of making readlink.exe work (although patches from an independent developer, with such an incentive, would be welcomed).
As a final qualification, and as one comment on the question notes, ln -s creates copies of files; it does not create symbolic links. How could it? MSYS itself dates from an era when Windows didn't support symbolic links; indeed, even today its support for them is flaky. At the time when MSYS was published, either copying the files or creating NTFS hard links was the best compromise MSYS could offer when a script invoked ln -s. Consequently, it would become incumbent upon any developer submitting patches to make readlink.exe work to also address the issue of updating ln.exe, so that it could create the symbolic links (in an OS-version-dependent fashion) which readlink.exe would then read.
I'm sorry if this isn't the answer you hoped for, but unless someone devotes some effort to updating MSYS so that it can make use of the (unreliable) symbolic link feature in more recent Windows versions, you need to find a different approach; current MSYS does not support symbolic links, even if the underlying OS now does.
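If the immediate goal is just to get rebound building under MSYS despite that, one pragmatic sketch (assuming the makefile quoted in the question keeps appending to PREDEF) is to drop the readlink pipeline and name the chosen modules by hand:

PREDEF+= -DGRAVITY_TREE
PREDEF+= -DBOUNDARIES_OPEN
PREDEF+= -DCOLLISIONS_NONE

This mirrors what the pipeline would have produced for the links shown in the question, at the cost of editing the makefile whenever the module selection changes.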

What files need to be modified to compile for a custom architecture of an existing cpu with gcc?

I've been looking at examples of C code that is compiled for some lesser known processors (like ZPU) using the gcc cross compiler.
Most of the working examples I see assume a certain architecture (memory map and set of peripherals) and simply give you a recipe to compile for it, and they work.
However, I can find very little information on what needs to be modified if you use the same CPU with a different memory map and set of peripherals.
From what I've read, there are two main files that I need to make sure are done "right": the linker script that is used, and crt0.o (which, if I need to modify it, means recompiling crt0.S, which is assembler). On that last one especially, I find very little information on what it is actually supposed to do (other than setting up the reset there is no clear info, and I'm talking conceptually, not about a specific processor, although something for a specific one would also be useful).
Can anyone tell me what the relationship is between the C files for the program code (bare-metal development), crt0.S (especially why it is needed), and a working linker script?
PS: Answers of the form "read this book" are welcome, and I would love them.
PS: I realize this kind of question is usually vague and closed quickly, but I don't know where else to turn, so I ask for a bit of leniency.
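For what it is worth, here is a rough sketch of the job crt0 conceptually performs, written in C for readability (real ones are usually assembler because the stack is not set up yet; the _etext/_sdata/_edata/_sbss/_ebss symbols are hypothetical names that a matching linker script would have to define):

/* Section boundary symbols provided by a hypothetical linker script. */
extern unsigned int _etext;  /* load address of initialised data (e.g. in flash) */
extern unsigned int _sdata;  /* start of .data in RAM */
extern unsigned int _edata;  /* end of .data in RAM   */
extern unsigned int _sbss;   /* start of .bss         */
extern unsigned int _ebss;   /* end of .bss           */

extern int main(void);

/* Entry point named in the reset vector / linker script. */
void _start(void)
{
    unsigned int *src = &_etext;
    unsigned int *dst = &_sdata;

    /* Copy initialised data from its load address into RAM. */
    while (dst < &_edata)
        *dst++ = *src++;

    /* Zero .bss so uninitialised globals start at 0, as C requires. */
    for (dst = &_sbss; dst < &_ebss; dst++)
        *dst = 0;

    main();

    for (;;)
        ;  /* bare metal: nothing to return to */
}

The linker script is the other half of the contract: it places .text, .data and .bss at the addresses your memory map dictates and defines exactly these boundary symbols, which is why the two files have to be written together.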

How to methodically trace the location of source code

I often spend lots of time trying to find out where the exact implementation is located. It gets very frustrating when dealing with some low-level code that might end up somewhere in the kernel.
I usually just google or try to guess the location and/or method names, but it is not always very effective.
Is there some methodical way to trace the flow up to the implementation? How do you guys usually do it?
Load the whole code base with its relevant dependencies into a graphical IDE (NetBeans can do it, for instance) that can do call graphs, declaration-definition jumps, etc., or use LibClang and its wrapper for the text editor of your choice; it is also very good at indexing. Lastly, you can consider the classic ctags, which can link definition and declaration points.
There used to be a ctrace program that did just that but I don't think it is actively maintained.
Ultimately, it depends on what exactly you are trying to achieve. It actually looks like you want to look up a specific function rather than trace it. If that is indeed the case, consider using some kind of source browser: from etags to cscope to OpenGrok.
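For the command-line classics, the workflow is roughly this, run from the top of the source tree:

ctags -R .      # build a tags file; editors then jump to definitions (e.g. Ctrl-] in vim)
cscope -Rbq     # build a cross-reference database for the whole tree
cscope -d       # browse definitions, callers and callees against that database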

Value optimized out in GDB: Can gdb handle decoding it automatically?

1) First, I want to know how to decode such variables.
I know the solutions to this problem (remove the optimization flag, make the variable volatile), but I don't want to do all that. Is there any solution which can be done without compiling the source again? The problem is that whenever I make any changes it takes ages to compile, so I don't want to compile with different optimization flags; also, I once tried changing the optimization flag, but it crashed just because of the change in compilation flags, for reasons I can't fathom.
Also, I am not able to find documentation about understanding the various registers when I do "info reg". I was expecting some variable (whose value I knew) but info reg is showing me all different values. I am missing something here. The architecture I am working on is x86_64.
2) I want to know what restrictions gdb faces in decoding such register variables, or whether this problem has already been tackled by someone. I have read in many places that by going through the assembly code you can find out which variable is in which register. If that's true, why can't it be built into gdb? Please point me to relevant pages if there are solutions to this problem.
If you can't compile the source with debugging enabled and optimizations disabled (e.g. third-party code), the best you can do is disassemble the code and try to determine how the variables are stored.
In gdb, the disassemble command will dump the assembly for the given function:
disassemble <function name>
Or if symbols have been stripped
disassemble <address>
where <address> is the entry point to the function.
You may also have to inspect where the function is called to determine the calling conventions used.
Once you've figured out the structure of the functions and variable layout (stack variables or registers), when debugging you can step through each instruction with nexti and stepi and watch how the values in the variables change by dumping the contents of the registers or memory locations.
I don't know of any good primers or tutorials myself, but this question and its answers may be of use to you. Personally, I find myself referencing the Intel manuals the most. They can be downloaded as PDFs from Intel's website; I don't have a link handy at the moment. If someone else does, perhaps they can update my answer.
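As a rough illustration of that workflow in a gdb session (the function name and address are made up; on x86_64 the first integer arguments normally arrive in rdi, rsi, rdx, rcx, r8 and r9 under the SysV ABI):

(gdb) disassemble compute_checksum     # dump the function's assembly
(gdb) break *0x4005d0                  # hypothetical address of an interesting instruction
(gdb) run
(gdb) info registers rdi rsi rdx       # likely argument registers at that point
(gdb) stepi                            # execute exactly one instruction
(gdb) x/4gx $rsp                       # look at stack slots where spilled locals may live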
Have you looked at compiling your code un-optimized?
Try one of these in your gcc options:
-Og
Optimize debugging experience. -Og enables optimizations that do not interfere with debugging. It should be the optimization level of choice for the standard edit-compile-debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience.
-O0
Reduce compilation time and make debugging produce the expected results. This is the default.
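For example, assuming a build that honours CFLAGS, something along these lines rebuilds with debug-friendly settings (the file name is a placeholder, and -Og needs gcc 4.8 or newer):

gcc -Og -g -c slow_module.c -o slow_module.o   # rebuild just the unit being debugged
make CFLAGS="-Og -g"                           # or let the build system rebuild everything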

How to write your own code generator backend for gcc?

I have created my very own (very simple) byte code language, and a virtual machine to execute it. It works fine, but now I'd like to use gcc (or any other freely available compiler) to generate byte code for this machine from a normal C program. So the question is: how do I modify or extend gcc so that it can output my own byte code? Note that I do NOT want to compile my byte code to machine code; I want to "compile" C code to (my own) byte code.
I realize that this is a potentially large question, and it is possible that the best answer is "go look at the gcc source code". I just need some help with how to get started with this. I figure that there must be some articles or books on this subject that could describe the process to add a custom generator to gcc, but I haven't found anything by googling.
I am busy porting gcc to an 8-bit processor we designed earlier. It is kind of a difficult task for our machine because it is 8-bit and we have only one accumulator, but if you have more resources it can become easy. This is how we are trying to manage it with gcc 4.9, using Cygwin:
Download gcc 4.9 source
Add your architecture name to config.sub: around line 250, look for # Decode aliases for certain CPU-COMPANY combinations. and in that list add | my_processor \
In that same file, look for # Recognize the basic CPU types with company name. and add yourself to the list: | my_processor-* \
Open the file gcc/config.gcc and look for case ${target} (it is around line 880); add yourself in the following way:
;;
my_processor*-*-*)
	c_target_objs="my_processor-c.o"
	cxx_target_objs="my_processor-c.o"
	target_has_targetm_common=no
	tmake_file="${tmake_file} my_processor/t-my_processor"
	;;
Create a folder gcc-4.9.0\gcc\config\my_processor
Copy the files from an existing port and just edit them, or create your own from scratch. In our project we copied all the files from the msp430 port and edited them all.
You should have the following files (not all files are mandatory):
my_processor.c
my_processor.h
my_processor.md
my_processor.opt
my_processor-c.c
my_processor.def
my_processor-protos.h
constraints.md
predicates.md
README.txt
t-my_processor
Create the directory gcc-4.9.0/build/object
Run ../../configure --target=my_processor --prefix=<path for my compiler> --enable-languages="c"
make
make install
Do a lot of research and debugging.
Have fun.
It is hard work.
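Putting the last few steps together, the build itself looks roughly like this (the install prefix is a placeholder; point it wherever you want the cross compiler to live):

cd gcc-4.9.0
mkdir -p build/object
cd build/object
../../configure --target=my_processor \
                --prefix=/opt/my_processor-toolchain \
                --enable-languages=c
make
make install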
For example, I also designed my own "architecture" with my own byte code and wanted GCC to generate code for it from C/C++. This is how I did it:
First, you should read everything about porting in the GCC manual.
Also, don't forget to read the GCC Internals documentation.
Read a lot about compilers.
Also look at this question and the answers here.
Google for more information.
Ask yourself if you are really ready.
Be sure to have a very good coffee machine... you will need it.
Start adding the machine-dependent files to gcc.
Build gcc as a cross compiler (host != target).
Check the generated code in a hex editor.
Do more tests.
Now have fun with your own architecture :D
When you are finished you can use C or C++ only without OS-dependent libraries (you currently have no running OS on your architecture), and you should then (if you need it) compile many other libraries with your cross compiler to have a good framework.
PS: LLVM (Clang) is easier to port... maybe you want to start there?
It's not as hard as all that. If your target machine is reasonably like another, take its RTL definitions as a starting point and amend them, then compile and test through the bootstrap stages; rinse and repeat until it works. You probably don't have to write any actual code, just machine definition templates.
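For a flavour of what such machine definition templates look like, here is a generic, purely illustrative define_insn for a three-operand 32-bit add (not taken from any real port; the output template at the end would be whatever your target's assembler syntax requires):

(define_insn "addsi3"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (plus:SI (match_operand:SI 1 "register_operand" "r")
                 (match_operand:SI 2 "register_operand" "r")))]
  ""
  "add %0, %1, %2")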

Resources