I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite its declared support for Make-based C projects, I cannot create a fully functional Netbeans project. This is despite having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include:
files are wrongly excluded: Some files are incorrectly greyed out in the project, which means Netbeans does not believe they should be included in the project, when in fact they are compiled into the kernel. The main problem is that Netbeans will miss any definitions that exist in these files, such as data structures and functions, as well as macro definitions.
cannot find definitions: Pretty self-explanatory - oftentimes, Netbeans cannot find the definition of something. This is partly a result of the above problem.
can't find header files: self-explanatory
I'm wondering if anyone has had success with setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to be able to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), and automatically determine which files are actually compiled and which macros are actually defined. Then, based on this, I would like to be able to find the definitions of any data structure, variable, function, etc. and have complete auto-completion.
Let me preface this question with some points:
I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them.
As the title suggests, I would also be happy to know how to set up Eclipse to do what I need
While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine
SO's useful "Related Questions" feature has informed me that the following question is related: https://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, the question is more of a comparison between IDE's, whereas I'm looking for how to set-up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers.
Cheers
Eclipse seems to be pretty popular for Linux kernel development:
http://cdtdoug.blogspot.com/2008/12/linux-kernel-debugging-with-cdt.html
http://jakob.engbloms.se/archives/338
http://revver.com/video/606464/debugging-the-linux-kernel-using-eclipsecdt-and-qemu/
I previously wrote up an answer. Now I have worked out all the details of the solution and would like to share it. Unfortunately Stack Overflow does not allow me to edit the previous answer, so I am writing it up in this new answer.
It involves a few steps.
[1] The first step is to modify the Linux scripts to leave the dep files in place. By default, those dep files are removed after they are used in the build. They contain exact dependency information about which other files each C file depends on, and we need them to create a list of all the files involved in a build. So modify the files under linux-x.y.z/scripts so that they do not remove the dep files, e.g. by prefixing the rm commands with an echo, as shown here:
linux-3.1.2/scripts
Kbuild.include: echo do_not_rm1 rm -f $(depfile);
Makefile.build: echo do_not_rm2 rm -f $(depfile);
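If you prefer not to edit the scripts by hand, a sed one-liner per file does the same thing (a minimal sketch, assuming the scripts contain the literal command rm -f $(depfile); that is being neutralized here):

$ sed -i 's/rm -f \$(depfile);/echo do_not_rm1 rm -f \$(depfile);/' scripts/Kbuild.include
$ sed -i 's/rm -f \$(depfile);/echo do_not_rm2 rm -f \$(depfile);/' scripts/Makefile.build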
The other steps are detailed in my github code project file https://github.com/minghuascode/Nbk/blob/master/note-nbkparse. Roughly you do:
[2] Configure with your method of configuration, but be sure to use the "O=" option to build the obj files into a separate directory.
[3] Then use the same "O=" option plus the "V=1" option to build Linux, and save the make output into a file (see the command sketch after these steps).
[4] Run my nbkparse script from the above github project. It does:
[4.1] Read in the make log file, and the dep files. Generate a mirroring command.
[4.2] Run the mirroring command to hard-link the relevant source files into a separate tree, and generate a make-log file for NetBeans to use.
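For [2] and [3], the commands look roughly like this (a sketch; the output directory and log file names are just examples):

$ make O=../kbuild menuconfig
$ make O=../kbuild V=1 2>&1 | tee ../kbuild/build.log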
Now create a NetBeans C project using the mirrored source tree and the generated log file. NetBeans should be able to resolve all the kernel symbols. And you will only see the files involved in the build.
The Eclipse wiki has a page about this: HowTo use the CDT to navigate Linux kernel source
I have been doing some embedded Linux development, including kernel module development, and have imported the entire Linux kernel source code into Eclipse as a separate project. I have been building the kernel itself outside of Eclipse (so far), but I don't see any reason why I shouldn't be able to set up the build environment within Eclipse to build the kernel. For my projects, as long as I set up the path properties to point to the appropriate Linux source include directories, it seems to be pretty good about name completion for struct fields, etc.
I can't really comment on whether it is picking up the correct defines and not greying out the corresponding sections, as I haven't paid too much attention to the files within the kernel itself (so far).
I was also wondering about using Netbeans as a Linux 'C' IDE, as I do prefer Netbeans for Java GUI development.
I think this would work (done each step for various projects):
[1] Modify kernel build scripts to leave .d files. By default they are removed.
[2] Log the build process to a file.
[3] Write a script to parse the build log.
[3.1] From the build log, you know every .c file.
[3.2] From each .c file, you know the corresponding .d file.
[3.3] Look into .d files to find out all the included .h files.
[3.4] Form a complete .c and .h file list.
[4] Now create a new dir, and use "ln -s" or "ln" to pick files of interest.
Now create a Netbeans project for existing source code, pointing it at the directory from [4]. Configure code assistance to use the make-log file. You should see exactly the effective source code as when you built it at [2].
Some explanations to the above steps:
At [2], do a real build so the log file contains the exact files and flags of interest; later, Netbeans will be able to use those exact flags when parsing.
At [4], pick only the files you want to see. Incorporating the whole kernel tree into Netbeans would be impractical.
There is a trick to parsing the .d files: many of the dependency entries are not real paths to .h files; they are modified entries representing parts of the Linux config from the auto-generated config header. You may need to reverse the modification to figure out which entries are real header files.
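A rough sketch of [3] and [4] (assuming the preserved dep files keep the kernel's usual .<object>.o.d naming under the build tree, and that the verbose build log from [2] was saved as build.log; the last grep drops the fake include/config/ entries just described):

# every .c file that was actually compiled, according to the build log
grep -oE '[^ ]+\.c\b' build.log | sort -u > cfiles.txt
# every header pulled in, according to the preserved .d files
find . -name '.*.o.d' -exec cat {} + | tr ' \t' '\n\n' | grep -E '\.h$' | grep -v '^include/config/' | sort -u > hfiles.txt
# [4] mirror just those files into a separate tree using hard links
mkdir -p mirror
cat cfiles.txt hfiles.txt | while read -r f; do
    mkdir -p "mirror/$(dirname "$f")"
    ln "$f" "mirror/$f"
done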
Actually, there is a topic about this on the NetBeans site. The discussion is at http://forums.netbeans.org/ntopic3075.html, and there is a wiki page linked from the discussion: wiki.netbeans.org/CNDLinuxKernel. Basically it asks you to prefix make with CFLAGS="-g3 -gdwarf-2".
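In other words, the kernel build is invoked as:

$ CFLAGS="-g3 -gdwarf-2" make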
I found this link very helpful in setting up proper indexing in Eclipse. It requires running a script to alter the Eclipse environment to match your kernel options; in my case:
$ autoconf-to-eclipse.py ./include/generated/autoconf.h .
An illustrated guide to indexing the linux kernel in eclipse
Related
I'm learning how to use the Data Display Debugger (DDD) for my C/C++ programs. The Help reference for DDD shows some sample outputs, including the following graphing / charting example. I'm trying to reproduce the exercise, but I'm having difficulty. The way it should work is that I would compile cxxtest.c with debugger options, and the DDD tool would actually graph the variable array of interest during a step-debugging session, in both 2D and 3D. Wow, if it works.
The cxxtest.c program is included in the DDD repository, ddd-3.3.12.tar.gz. I'm trying to compile and run that program, but I keep getting stuck. I can't figure out how to generate a config.h file, so I can pull in the necessary support files (e.g. bool.h) to compile cxxtest.c.
Files I see in the DDD repository, relating to config include:
config-info
config.h.in
config.texi
configinfo.C
configinfo.h
configure
configure.in
None of them seem to offer much help on how to generate a config.h file.
Anybody know how to generate a config.h file ?
Update: As I continue to work this one, the whole thing seems odd. The program, cxxtest.C, has a .C suffix, but there are distinctly C++ elements in there, e.g. #include <iostream>. If I block out the config.h include, change the suffix to .cpp, and compile, I get a whole bunch of different errors. Not sure what the intent was here.
As for README content, I do see some instructions on how to compile the entire DDD tool, and it's quite lengthy. It's not clear whether preparing / configuring and compiling the DDD tool will also compile this particular test file. I guess I can wade through the makefiles and scripts and see if this file ever gets mentioned. (sigh!)
Actually I'm considering converting the entire file over to pure .c via rewrite. Note, the original file is visible here...
Note: I'm working in Virtualbox Ubuntu desktop for now... Ultimately I'd like to use the DDD tool to analyze key arrays in some digital signal processing (DSP) programs I'm working on.
Update #2:
I tried two different things here. First I built a C version of a file with the plot routines copied from the original cxxtest.c program. I converted all the calls to pure C. I could easily see the data in the DDD data window in text format. When I select the data set and then choose plot, I get a popup "DDD: Starting Plot... Starting gnuplot..." The system just hangs there.
Second, I did a complete clean install of the ddd tool. I had to install a few dependencies, and correct a few known bugs (e.g. #include <cstdio> ) but was successful at both $ ./configure && make and $ make check . The make check command does correctly build and compile cxxtest.c . When I run the file and do the steps to plot the dr and ir array variables, I get the same failure as above.
System hang. A search of the failure indicates this has been reported for years, apparently without resolution. Not quite sure how to proceed. This appears to be a total fail. I cannot reproduce the DDD test to plot graphical output. Anybody else made progress on this one?
Note: with this edit, I'm also removing the How do I generate config.h? from the title. That's not really the key issue here.
Anybody know how to generate a config.h file ?
Yes: just run the configure script provided. A typical sequence for building open source software is:
./configure && make
We use Subversion to help us manage our C files (and TortoiseSVN as a front end).
When I want to know the changes in a C module, I (of course) only get the changes in the "body" of the program, not the changes in the include files.
So I wrote a small, simple program that finds all the include files of a C module, checks the last Subversion change date for each include file, and writes the result to an output file.
This way I get a full impression of what has changed recently in the whole module.
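In outline, the idea is something like this (a sketch only, not the actual program; using gcc -MM to list the includes and svn info for the dates is just one way to do it):

gcc -MM module.c | tr ' \t\\' '\n\n\n' | grep -E '\.h$' | sort -u |
while read -r h; do
    echo "$h: $(svn info "$h" | grep '^Last Changed Date')"
done > module_includes.txt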
But the program is very simple, and I would like to know if there is a solution out there that handles this "full view" of a C module in a good way.
As I work on multiple independent change requests at a time in one Subversion working folder, it does not help to just look at the result of "check for modifications".
Thanks a lot in advance.
Some one-time handwork is required, but it can work.
Using svn:externals (file-level externals for all the files in every "project"; this requires SVN 1.6+), create virtual (or real) folders which include all the files for each project. After that, svn log inside such a folder in the working copy will show only the changes related to that project's files.
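For example, with file externals (a sketch; the repository paths are made up):

$ svn mkdir module_foo
$ svn propset svn:externals "^/trunk/src/foo.c foo.c
^/trunk/src/foo_util.c foo_util.c
^/trunk/include/foo.h foo.h" module_foo
$ svn commit -m "virtual folder for the foo module" module_foo
$ svn update module_foo      # pulls the external files into the folder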
I have to create a one-file (.exe) program.
In this program the user can choose a directory on his computer.
I created the program following http://www.tarnyko.net/en/?q=node/31 and it runs well.
But when I invoke the FileChooser (click on a button), I get this error:
GLib-GIO-ERROR No GSettings schemas are installed on the system
Tarnyko's response to this issue is in a comment on that webpage - this is a known "bug" with static compiling.
How can I work around this?
On the one hand, I have to have a one-file .exe.
On the other hand, I really do not want to create a "sophisticated" FileChooser on my own... is there any option to deal with this?
My ideas:
1 - Call the native file chooser of the OS (Windows)
2 - Create a file chooser on my own - if it is not too hard in GTK
I do not know how to do either of these.
Sorry for duplicating - probably a working solution is in the answer from "ebassi" here: GLib-GIO-ERROR**: No GSettings schemas are installed on the system (not tested yet)
Settings schemas (which are used in GTK in more places than just the file selector widget) cannot be statically linked into a binary: they have to be installed in a well known location (controllable via the $XDG_DATA_DIRS environment variable) and they have to be compiled into a cache.
GTK's dependencies like Pango and GDK-Pixbuf also use ancillary files and loadable modules that are not strictly compatible (unless you're willing to spend time on it) with static linking.
The usual recommendation for only providing a single executable for your application is to have a self-extracting installer that contains all the installed files necessary to running a GTK application, and avoid static linking.
I don't think it's possible to create just one .exe file (without any other files) with GTK+. Maybe only if you recode GTK and its dependencies - which is not an easy task to do.
The best solution I found is to put all schemas (and also icons for your GTK+ app) in the same location where your .exe file is placed:
EXE_LOCATION\program.exe
// For icons:
EXE_LOCATION\share\icons\hicolor...
// For schemas
EXE_LOCATION\share\glib-2.0\schemas
Then you deliver these files together with your .exe file and with all needed .dll files.
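Note that GSettings actually reads the compiled cache (gschemas.compiled), not the .xml schema files, so compile the schemas directory as part of your packaging step, e.g.:

glib-compile-schemas EXE_LOCATION\share\glib-2.0\schemas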
About the native file chooser in GTK+: It also needs the schemas - at least on Windows OS.
I have mostly used Kate, Vim, etc. to code, and just a plain console and gdb (rarely) to test. I want to start using Eclipse, mainly for ease of looking things up, and hopefully (while not super important) to run the GUI debugger.
However, I don't want Eclipse to touch my real project folders (it should change the code, obviously, but I don't want it to create any configuration folders etc.). Is that possible? I was thinking of creating a workspace in a different folder and adding the sources from my project path. But this seems complicated, without any experience with Eclipse, when it comes to handling Makefiles etc.!
Has anyone done something similar? Any guidelines?
Yes, this is fairly straightforward. Instead of creating a standard C Project that creates and manages makefiles for you, use the "Makefile Project with Existing Code" instead.
If you don't want the .cproject, .project, etc files intermixed, create the CDT project in an empty directory and use Linked Files and Folders to pull in what you do want in the project.
If you do try to do a Build within Eclipse it will do "make all", but if you don't have a Makefile you get this (same for clean):
make all
make: *** No rule to make target `all'. Stop.
What I have done for projects that don't have a make equivalent (like a CPython extension) is to write a trivial Makefile that delegates the all and clean targets to my real tool, as sketched below.
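A sketch of such a Makefile (build.sh stands in for whatever really builds your project; recipe lines must be indented with a tab):

# forward the targets Eclipse invokes to the real build tool
all:
	./build.sh
clean:
	./build.sh clean
.PHONY: all clean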
Once you have the project created, you will need to configure it to get all the goodness of CDT. The CDT Indexer and Scanner need to know about your compiler settings (includes and defines, really). There are two ways of delivering that information:
1. Run a verbose build (i.e. with the gcc command lines echoed) from within Eclipse (e.g. using the trivial Makefile described above). CDT will parse that output and automatically pick up the compiler options used. To configure how they are picked up, head to project properties -> C/C++ General -> Preprocessor Include Paths, Macros etc. and adjust the sources in the Providers tab.
2. In the project properties, edit the C/C++ General -> Paths and Symbols settings by hand. You may have to do this if CDT cannot determine all your settings in step 1.
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and wrote a little on 'why' at my blog.
In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object files and executables), so I don't keep them in the version control repository; instead I keep a zipfile for each release/tag that I create, with all the important (and irritatingly slow to reproduce) ones in it.
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it.
What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random.
The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true:
You have to manually edit the HDL files produced by Coregen.
The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an output file).
You have to copy-and-paste the output from Coregen into your top-level design.
This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I suggest CM tools that support version labeling and binary files. Most Software CM applications are fine with ASCII text files. They may just store a "difference" file rather than the entire file for updates.
My recommendations: PVCS, ClearCase and Subversion. DO NOT USE Microsoft SourceSafe. I don't like it because it only supports one label per revision.
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
We use Perforce, and it's great. You can have your code that lives in Linux-land checked in side-by-side with your specs and docs that live in Windows-land. And you get branching, labels, etc.
I've seen everything from Clearcase to RCS used, and it is really all okay for this kind of thing. The important thing is to get a good set of check-in policies established for your group, and make sure they stick to it.
And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
I have personally used Perforce, Subversion, git and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to also capture the other project and constraint files and any libraries you use.
Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
Previously I used Subversion but switched to git two years ago. Git handles FPGA design files just as well as it handles every other text or binary file. Git is all you need for version-controlling your files and artifacts.
For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) FPGA open-source project on github for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate. The only things I save are the Verilog files and some ISIM waveform config files. In other projects that use coregen I save the coregen.cgp project file and all of the *.xco scripts for regenerating cores. Then I use a Makefile for actually running coregen on the *.xco files. There are a few other Xilinx-specific files you should version control too: *.ucf, *.coe, *.xcf, etc.
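The Makefile that drives coregen can be quite small; roughly something like this (a sketch only - the -b/-p batch-mode options and the assumption that coregen.cgp sits next to the .xco scripts are from memory, so check them against your ISE version):

# regenerate every core whose .xco script is under version control
XCOS  := $(wildcard *.xco)
CORES := $(XCOS:.xco=.generated)

cores: $(CORES)

%.generated: %.xco coregen.cgp
	coregen -b $< -p .
	touch $@

.PHONY: cores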
I experimented with using Makefiles and the Xilinx command-line tools, but found that ISE did a much better job tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files or you will go mad. Xilinx has something like 300 different file types which change every release. If you want to save a file, you can try the ISE project file itself, with a .xise extension. For anything that is hard to recreate, like the golden bitfile that you know works and took 6 hours to build, you might want to copy it out and configuration-manage it explicitly.