I'm currently trying to change how the SQLite virtual machine executes its code. To do that, I edit the vdbe.c file in the SQLite source.
The issue is that compiling SQLite consists of generating two huge implementation and header files (sqlite3.c and sqlite3.h) by amalgamating several smaller ones, after parsing some of them to generate code and documentation.
Unfortunately, the amalgamation process takes a relatively long time (about 15 seconds). I was wondering whether there is a reasonably easy way to avoid recompiling everything every time, as is currently the case, and possibly save a lot of compile time.
The main difficulty stems from the fact that the source files are not valid by themselves (they only compile once they have been amalgamated, because they rely on types defined earlier in the amalgamated file). After several attempts with a simple hand-written Python script (which would simply extract the virtual machine execution code from the amalgamation and keep the rest together), I came to the conclusion that there are too many edge cases to do it this way. I don't really know how to proceed.
Any suggestions are welcome.
I'd say: check the amalgamation into your source code repository, treat it as your own artifact, and work on it. Whenever you wish to update the amalgamation, use git to help you.
Create two branches sqlite-upstream and sqlite-local.
Check in upstream amalgamation "v1" to sqlite-upstream, then merge that into sqlite-local and make whatever local changes you need in that branch.
When upstream releases "v2", commit that to sqlite-upstream.
Merge or rebase - you may have some conflicts to resolve, but those will be much easier to deal with than manual change tracking. Either:
Merge sqlite-upstream into sqlite-local, or
Duplicate sqlite-local into sqlite-local-v2, then rebase it onto sqlite-upstream, and use SQLite from that branch in dependent code.
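A minimal sketch of that workflow (branch names as above; paths and commit messages are just examples):

git checkout -b sqlite-upstream
cp /path/to/upstream-v1/sqlite3.c /path/to/upstream-v1/sqlite3.h .
git add sqlite3.c sqlite3.h && git commit -m "sqlite amalgamation v1"
git checkout -b sqlite-local
# ...commit your local changes to sqlite3.c here...

# when upstream releases v2:
git checkout sqlite-upstream
cp /path/to/upstream-v2/sqlite3.c /path/to/upstream-v2/sqlite3.h .
git commit -am "sqlite amalgamation v2"
git checkout sqlite-local
git merge sqlite-upstream    # or rebase; resolve any conflicts in your local edits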
So in case anyone needs the answer in the future, here is how I did it. You can find the whole discussion on the SQLite Forums.
After getting the source:
I do a first make pass where I don't change any file (this compiles lemon, possibly among other tools):
make -f Makefile.linux-gcc
Then, I edit Makefile.linux-gcc and replace all occurrences of gcc with gcc -x none.
Next, I edit main.mk and modify the executable target by adding -lstdc++ at the end:
sqlite3$(EXE): shell.c libsqlite3.a sqlite3.h
$(TCCX) $(READLINE_FLAGS) -o sqlite3$(EXE) $(SHELL_OPT) \
shell.c libsqlite3.a $(LIBREADLINE) $(TLIBS) $(THREADLIB) -lstdc++
I run make again using make -f Makefile.linux-gcc and get my sqlite3 executable as expected.
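Put together, the sequence looks roughly like this (the sed line is just one way to script the manual Makefile edit described above; the main.mk change is still done by hand):

# first pass: builds lemon and the other generators and produces the amalgamation
make -f Makefile.linux-gcc

# replace every occurrence of gcc with "gcc -x none" (run once)
sed -i 's/\bgcc\b/gcc -x none/g' Makefile.linux-gcc

# edit main.mk to append -lstdc++ to the sqlite3$(EXE) link line, then rebuild
make -f Makefile.linux-gcc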
I'm learning how to use the Data Display Debugger (DDD) for my C/C++ programs. The Help reference for DDD shows some sample outputs, including the following graph/charting example. I'm trying to reproduce the exercise, but I'm having difficulty. The way it should work is that I compile cxxtest.c with debugging options, and the DDD tool then graphs the variable array of interest during a step-debugging session, in both 2D and 3D. Wow, if it works.
The cxxtest.c program is included in the DDD repository, ddd-3.3.12.tar.gz. I'm trying to compile and run that program but I keep getting stuck. I can't figure out how to generate a config.h file so I can pull in the necessary support files (e.g. bool.h) to compile cxxtest.c.
Files I see in the DDD repository, relating to config include:
config-info
config.h.in
config.texi
configinfo.C
configinfo.h
configure
configure.in
None of them seem to offer much help on how to generate a config.h file.
Anybody know how to generate a config.h file?
Update: As I continue to work on this one, the whole thing seems odd. The program, cxxtest.C, has a .C suffix, but there are distinctly C++ elements in there, e.g. #include <iostream>. If I comment out the config.h include, change the suffix to .cpp, and compile, I get a whole bunch of different errors. Not sure what the intent was here.
As for the README content, I do see some instructions on how to compile the entire DDD tool, and it's quite lengthy. It's not clear whether preparing/configuring and compiling the DDD tool will also compile this particular test file. I guess I can wade through the makefiles and scripts and see if this file ever gets mentioned. (sigh!)
Actually, I'm considering converting the entire file over to pure C via a rewrite. Note, the original file is visible here...
Note: I'm working in Virtualbox Ubuntu desktop for now... Ultimately I'd like to use the DDD tool to analyze key arrays in some digital signal processing (DSP) programs I'm working on.
Update #2:
I tried two different things here. First, I built a C version of the file, with the plot routines copied from the original cxxtest.c program, and converted all the calls to pure C. I could easily see the data in the DDD data window in text format. When I select the data set and then choose plot, I get a popup "DDD: Starting Plot... Starting gnuplot..." and the system just hangs there.
Second, I did a complete clean install of the DDD tool. I had to install a few dependencies and correct a few known bugs (e.g. #include <cstdio>), but was successful with both $ ./configure && make and $ make check. The make check command does correctly build and compile cxxtest.c. When I run the file and do the steps to plot the dr and ir array variables, I get the same failure as above.
The system hangs. A search of the failure indicates this has been reported for years, apparently without resolution. Not quite sure how to proceed. This appears to be a total fail. I cannot reproduce the DDD test to plot graphical output. Anybody else make progress on this one?
Note: with this edit, I'm also removing the "How do I generate config.h?" part from the title. That's not really the key issue here.
Anybody know how to generate a config.h file?
Yes: just run the configure script provided. A typical sequence for building open source software is:
./configure && make
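On an autotools-based project like DDD, configure is what produces config.h from the config.h.in listed above, so something along these lines (run from the top of the extracted ddd-3.3.12 tree; a sketch, not the full DDD build instructions) should be enough to get the header:

./configure        # runs the platform checks and writes config.h and the Makefiles
ls config.h        # should now exist
make               # builds using the generated config.h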
I am writing a program in C in a Linux environment (Debian Lenny) and would like the program to be updated when an update is available (the program gets notified when a new update is available). I am looking for a way for the program to update itself.
What I am thinking is that the main program invokes a new program to handle the update. The updater program will have (access to) the source code and will receive update information describing the changes to the source code, something like this:
edit1: line 20, remove column 5 to 20;
edit2: line 25, remove columns 4-7, then add "if(x>3){" starting at column 4
edit3: line 26, enter a new line and insert "x++;"
then kill the main process, recompile the source code, and replace the old binary with the new one.
Or is there a better (easier) and more standard way to give a program the ability to update itself?
I use the program to control a system with an embedded Linux board. Therefore, I don't want the source code to be accessible to anyone else (in case the system is hacked or something).
If the best way to update a program is by shipping its source code, how do you suggest I secure the source code? If you suggest encrypting it, what functions (Linux, C) can the program use to encrypt and decrypt the source file?
If your target system is Debian, then you should just take advantage of the Debian packaging system to provide updates. Package your compiled application in a .deb package, distribute it on an APT archive which is included in your system's sources.list, and just use cron to schedule a regular update check with apt. The .deb package can include a post-installation script that restarts your application.
You could run an apt-proxy caching proxy on your "gateway" nodes that have internet access, and have the other nodes use that as their apt source.
Distributing source code in this case is probably not appropriate, because then you would need to include a full compiler toolchain on your target system.
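As a rough sketch of the update check (the package name myapp is just an example; the restart itself would live in the .deb's post-installation script, as mentioned above):

# run periodically from cron, e.g. from a script referenced in /etc/cron.d
apt-get update -qq
apt-get install -y --only-upgrade myapp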
What you're describing is very similar to the 80s style of delivering Unix source code, popularized by the development of Perl. You use diff to get a record of changes between different versions of the source code, distribute this "patch" file, and use patch to perform the necessary modifications at the client end. This doesn't address the network-communication or version-control issues.
A possible downside is that a first-time download may need to apply many patches to bring the version up. This is often the case when investigating old source from nntp:comp.sources.unix.
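A minimal sketch of that flow (file and version names are just examples):

# on the distribution side: capture the source changes between releases
diff -ruN myprog-1.0/ myprog-1.1/ > myprog-1.0-to-1.1.patch

# on the client, from inside the source tree: apply, rebuild, restart
patch -p1 < ../myprog-1.0-to-1.1.patch
make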
When I modify the hello.c included with G-WAN to include a simple header with #define TEST_VALUE 50 and output that value from hello.c, I noticed that a change to the header file does not trigger G-WAN to recompile the servlet. So if I change the header's test value to 51, no change is noted in the output. If I make any change to the hello.c file itself, G-WAN recompiles the servlet including its dependencies, and the change in the header is compiled in. Is this the expected behavior? I'm curious because that would mean during development with many dependencies, you would need to update just one character in the main servlet file to trigger a re-compile if all the changes being made are in dependency files.
This behavior was noted by Tim Bolton so I decided to also test it, and pose it as a separate question from a previous thread.
Thanks for any input.
G-WAN 3.3.28 64-bit (Mar 28 2012 11:24:16) - the latest version I saw in the download as of Oct 19th, 2012
... running on Ubuntu Server 10.04.4 LTS - 64 bit
Is this the expected behavior?
Yes.
that would mean during development with many dependencies, you would need to update just one character in the main servlet file to trigger a re-compile if all the changes being made are in dependency files.
No. There's a better way used by programmers for (at least) the past 30 years.
The Unix touch command updates the timestamp of a file without changing its contents.
Just touch the hello.c servlet when you change its headers.
Also note that C headers are supposed to be more 'stable' than C files: what is stored in headers is there to be shared by many C files, so you should consider using C files for defines that change often.
At least you know how to proceed in both cases now.
I was also having this issue, so I created a servlet to help me solve it. Using it, I don't need to touch every file in my CSP folder by hand. I posted the code on my blog.
Update servlet_dependencies
The script just runs the touch command on all files under the CSP folder.
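Something equivalent can be done straight from the shell (the csp/ path is an assumption; point it at your own CSP folder):

# bump the timestamp of every servlet so G-WAN recompiles them
find csp/ -type f -exec touch {} +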
My test program works fine. I can create a client and a server and run them against each other. I can set my KRB5_CONFIG environment variable and use a local configuration for testing.
For some reason, when I place the code in our production software it fails. Even if I strip our main() function down to just calling gss_import_name() with a hard-coded name, I end up with the message "Cannot open configuration file".
If I run truss, I see a lot of Oracle activity going on. It tries to open lots of different Oracle trace files. It also tries to open
/krb5/krb.conf
instead of the file I specify.
It's as if Oracle is giving us the wrong GSS, or maybe it's some other option in our huge and complex build system. I note -L/usr/lib/sparcv9, though this comes after my -lgss, if that matters (it's been too long since I worked in C on a regular basis!). The libgss.so.1 in that directory is larger than the one in /usr/lib, though putting that option into my test program's link command does not break it.
Any help?
Thanks
- Richard
This fixed what appeared to be a similar problem for us:
export KRB5_CONFIG=/etc/krb5.conf
It does appear likely that Oracle sets this env var incorrectly if it's not already set.
$ grep -r KRB5_CONFIG $ORACLE_HOME
Binary file /usr/lib/oracle/11.1.0.1/client64/lib/libclntsh.so matches
Binary file /usr/lib/oracle/11.1.0.1/client64/lib/libclntsh.so.11.1 matches
$ grep -r '/krb5/krb.conf' $ORACLE_HOME
Binary file /usr/lib/oracle/11.1.0.1/client64/lib/libclntsh.so matches
Binary file /usr/lib/oracle/11.1.0.1/client64/lib/libclntsh.so.11.1 matches
I found that the Oracle libraries contained an implementation of GSS. To make my code work I ensured I linked "-lgss" before linking any of the Oracle libraries.
I've not tested to see if this upsets Oracle in single sign-on, because we use Oracle with user name and password. That works fine.
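For reference, a sketch of what the fixed link command looks like (library names and paths are illustrative; the point is just that -lgss comes before the Oracle client library):

cc -o myapp myapp.o -lgss -L$ORACLE_HOME/lib -lclntsh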
I ran into the very same issue with Oracle 11.2.0.4.0 on HP-UX 11.31 and wasted almost an entire day on it. Indeed, the crappy Oracle lib performs a putenv with /opt/krb5/krb.conf, and the tip from Richard Corfield even makes the app crash. The only workaround is to create a symbolic link. I have created a service request with Oracle for that issue.
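A sketch of that workaround (the target path is the one the Oracle lib sets via putenv, per the above; adjust for your system):

mkdir -p /opt/krb5
ln -s /etc/krb5.conf /opt/krb5/krb.conf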
Update (2014-06-02): I have received an update from Oracle. They confirmed the bug. It seems like there is a private GSS-API which is redefining symbols.
Bug 10184681 - ORACLE NEEDS TO USE VERSIONED SYMBOLS TO AVOID EXTERNAL SYMBOL CONFLICTS
This issue has been open since 2010-10. Terrible.
I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite declared support for Make-based C projects, I cannot create a fully functional Netbeans project. This is even after having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include:
files are wrongly excluded: Some files are incorrectly greyed out in the project, which means Netbeans does not believe they should be included in the project, when in fact they are compiled into the kernel. The main problem is that Netbeans will miss any definitions that exist in these files, such as data structures and functions, as well as macro definitions.
cannot find definitions: Pretty self-explanatory - often times, Netbeans cannot find the definition of something. This is partly a result of the above problem.
can't find header files: self-explanatory
I'm wondering if anyone has had success with setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to be able to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), and automatically determine which files are actually compiled and which macros are actually defined. Then, based on this, I would like to be able to find the definitions of any data structure, variable, function, etc. and have complete auto-completion.
Let me preface this question with some points:
I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them.
As the title suggests, I would also be happy to know how to set up Eclipse to do what I need
While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine
SO's useful "Related Questions" feature has informed me that the following question is related: https://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, the question is more of a comparison between IDE's, whereas I'm looking for how to set-up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers.
Cheers
Eclipse seems to be pretty popular for Linux kernel development:
http://cdtdoug.blogspot.com/2008/12/linux-kernel-debugging-with-cdt.html
http://jakob.engbloms.se/archives/338
http://revver.com/video/606464/debugging-the-linux-kernel-using-eclipsecdt-and-qemu/
I previously wrote up an answer. Now that I have worked out all the details of the solution, I would like to share it. Unfortunately, Stack Overflow does not allow me to edit the previous answer, so I'm writing it up in this new answer.
It involves a few steps.
[1] The first step is to modify the Linux build scripts to leave the dep files in place. By default, those dep files are removed after being used in the build. They contain exact dependency information about which other files each C file depends on. We need them to create a list of all the files involved in a build. So, modify the files under linux-x.y.z/scripts so they do not remove the dep files; in the listing below, the rm -f $(depfile) commands have been neutralized by prefixing them with echo:
linux-3.1.2/scripts
Kbuild.include: echo do_not_rm1 rm -f $(depfile);
Makefile.build: echo do_not_rm2 rm -f $(depfile);
The other steps are detailed in my github project file https://github.com/minghuascode/Nbk/blob/master/note-nbkparse. Roughly, you do the following:
[2] Configure with your usual method of configuration, but be sure to use the "O=" option to build the object files into a separate directory.
[3] Then use the same "O=" option, plus the "V=1" option, to build Linux, and save the make output into a file.
[4] Run my nbkparse script from the above github project. It does:
[4.1] Read in the make log file, and the dep files. Generate a mirroring command.
[4.2] Run the mirroring command to hard-link the relevant source files into a separate tree, and generate a make-log file for NetBeans to use.
Now create a NetBeans C project using the mirrored source tree and the generated log file. NetBeans should be able to resolve all the kernel symbols. And you will only see the files involved in the build.
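As a rough illustration of steps [2] and [3] (the output directory name is just an example; the nbkparse invocation itself is described in the note-nbkparse file linked above):

make O=../kbuild defconfig                             # or your own configuration method, with O=
make O=../kbuild V=1 2>&1 | tee ../kbuild/make.log     # verbose build, log saved for parsing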
The Eclipse wiki has a page about this: HowTo use the CDT to navigate Linux kernel source
I have been doing some embedded Linux development, including kernel module development, and have imported the entire Linux kernel source code into Eclipse as a separate project. I have been building the kernel itself outside of Eclipse (so far), but I don't see any reason why I shouldn't be able to set up the build environment within Eclipse to build the kernel. For my projects, as long as I set up the path properties to point to the appropriate Linux source include directories, it seems to be pretty good about name completion for struct fields, etc.
I can't really comment on whether it is picking up the correct defines and not greying out the corresponding sections, as I haven't really paid too much attention to the files within the kernel itself (so far).
I was also wondering about using Netbeans as a Linux C IDE, as I do prefer Netbeans for Java GUI development.
I think this would work (I've done each step for various projects):
[1] Modify kernel build scripts to leave .d files. By default they are removed.
[2] Log the build process to a file.
[3] Write a script to parse the build log.
[3.1] From the build log, you know every .c file involved.
[3.2] From the .c file, you know which is the corresponding .d file.
[3.3] Look into .d files to find out all the included .h files.
[3.4] Form a complete .c and .h file list.
[4] Now create a new dir, and use "ln -s" or "ln" to pick files of interest.
Now, create a Netbeans project for existing source code using the tree from [4]. Configure code assistance to use the make-log file. You should see exactly the effective source code as when you built it at [2].
Some explanations to the above steps:
At [2], do a real build so the log file contains the exact files and flags of interest. Later, Netbeans will be able to use the exact flags when parsing.
At [4], pick only the files you want to see. Incorporating the whole kernel tree into Netbeans would be impractical.
There is a trick to parsing .d files: many of the dependency entries are not real paths to a .h file; they are modified entries standing in for Linux config options from the auto-generated config header. You may need to reverse the modification to figure out what each entry really refers to.
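For example, if I remember the 3.x-era kbuild encoding correctly (an assumption worth checking against your own .d files), an entry like include/config/pm/sleep.h really stands for CONFIG_PM_SLEEP, and can be mapped back like this:

# map a fake include/config/... dependency back to its CONFIG_ symbol (encoding is an assumption)
dep="include/config/pm/sleep.h"
echo "CONFIG_$(echo "${dep#include/config/}" | sed 's/\.h$//; s,/,_,g' | tr a-z A-Z)"
# prints CONFIG_PM_SLEEP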
Actually, there is a topic on the Netbeans site. This is the discussion URL: http://forums.netbeans.org/ntopic3075.html. And there is a wiki page linked from the discussion: wiki.netbeans.org/CNDLinuxKernel. Basically it asks you to prefix make with CFLAGS="-g3 -gdwarf-2".
I found this link very helpful in setting up proper indexing in Eclipse. It requires running a script to alter the Eclipse environment to match your kernel options; in my case:
$ autoconf-to-eclipse.py ./include/generated/autoconf.h .
An illustrated guide to indexing the linux kernel in eclipse