Is there any way I can use expect without installing? - c

I want to automate passwd with expect, but I don't have permission to install anything.
Could I instead copy the expect source code, compile the .c files with cc/gcc,
and generate the expect executable myself?
or
Can I copy the expect executable from Linux and just use it anywhere else, like on Solaris, AIX, etc.?
This is the expect at /usr/bin/expect on my Linux box:
[root#test]# file /usr/bin/expect
/usr/bin/expect: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped

The prebuilt executables for Solaris and Linux at kbskit include Expect (along with many other Tcl extensions) in the *bi "Batteries Included" versions. Each is just one big file; no unpacking or installing is needed, apart from e.g. chmod a+x SunOS_kbsvq8.5-bi to make the file executable. You use this executable to run your script, and at the start of the script you need to add package require Expect to set up the Expect commands.
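For example, a minimal password-change script run with that single-file binary could look like the sketch below. The prompt strings and the new password are placeholders; adjust them to whatever passwd actually prints on your system.
chmod a+x SunOS_kbsvq8.5-bi
cat > chpass.tcl <<'EOF'
package require Expect
# prompts and password below are illustrative only
spawn passwd
expect "assword:"
send "newpassword\r"
expect "assword:"
send "newpassword\r"
expect eof
EOF
./SunOS_kbsvq8.5-bi chpass.tcl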

If you have a C compiler, you can build expect (but you have to build Tcl first).
The executable from your Linux box can't be used on most other systems (Solaris, AIX, etc.); there is some chance it would work on FreeBSD.
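A rough sketch of that build, assuming you have already downloaded the Tcl and Expect source tarballs and want everything under $HOME/local (the version numbers and the prefix are illustrative):
tar xzf tcl8.6*.tar.gz
cd tcl8.6*/unix
./configure --prefix=$HOME/local
make && make install
cd ../..
tar xzf expect5.45*.tar.gz
cd expect5.45*
./configure --prefix=$HOME/local --with-tcl=$HOME/local/lib
make && make install
$HOME/local/bin/expect -v   # quick sanity check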

If you are on Solaris 11 then installing expect is as easy as:
pkg install //solaris/shell/expect
The package comes from the official Oracle Solaris repository, so there is nothing to worry about and no need to get sources, build, etc.
I understand that you may not have permission to do this, but since it comes from an official source (Oracle) I'm pretty sure your sysadmin will do it in a sec.

Related

GNU configure options for binutils, gcc & glib

I am trying to build an alternative compilation suite on my debian-testing machine (sorry, the real question is actually at the bottom).
Technically it is a "cross-compilation" because I need to use this toolchain on another machine, but the hardware is compatible (x86_64-unknown-linux-gnu) so I don't need to bother about build/host/target differences.
On the other hand I do need to worry about prefix/sysroot because I cannot install in any standard location (to be more precise: I could install anywhere, since I have root access there, but I shouldn't); this leaves me with my $HOME, some completely non-standard place (e.g. /usr/local/my/toolchain) or some semi-standard place (e.g. /opt). In any case I will need to do something to enable compilation to find includes and libs in such places, and the runtime linker to find the needed .so files.
My requirements are:
I have a running Linux that shouldn't be messed with.
This system does not have a "C" compiler.
Said Linux is BusyBox-based, so I will need a substantial number of utilities to do any serious compiling there, including make, sed, awk, ..., besides the compiler proper.
I would be happy to stuff my augmented toolchain in /opt, but that is not a requirement; any place is OK as long as it's accessible by more than a single user. I would like to avoid installing in $HOME.
I am aware of "optware", I installed it and it does work... up to a point. Unfortunately:
It's really old software
It's only 32-bit (my system is Linux syno0 3.2.40 #5004 SMP Thu Nov 6 15:26:44 CST 2014 x86_64 GNU/Linux).
Some programs won't compile because the provided libs have a 32/64-bit mismatch.
The real motivation for this whole exercise is that I need to install some Perl modules needed by an application that will have to run there, and to install them from CPAN I need a native compiler (and other stuff, of course).
Similar arguments apply to a Ruby on Rails application I should port there.
If at all possible I should try to use the "native" libs in /lib:/lib64:/usr/lib:/usr/lib64:/usr/lib32 ("static" .a libs are not available).
I had a limited success preparing a custom tarball from an available toolchain for my processor, relocating it to /opt, stuffing needed apps in its sysroot and compiling with: CPPFLAGS="-I/opt/include" and LDFLAGS="-L/opt/lib -Wl,-rpath -Wl,/opt/lib".
This enables me to build almost everything "LFS-style", but it's rather error-prone and 64-bit-only.
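For reference, a typical per-package invocation in that scheme looks roughly like this (the package being configured is whatever I happen to be building at the time):
CPPFLAGS="-I/opt/include" \
LDFLAGS="-L/opt/lib -Wl,-rpath -Wl,/opt/lib" \
./configure --prefix=/opt
make && make install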
I seem to understand it should be possible to automate all this with a careful mix of --prefix, --with-sysroot, --with-native-system-header-dir, --enable-multilib and their friends.
I tried to understand exactly how they should be used and failed, for one reason or another. I didn't find any exhaustive documentation, and the information in the GCC installation docs is confusing me.
Can someone, please, give me a recipe to build this toolchain?
Any pointer to in-depth documentation welcome, but I suspect some tutoring will be necessary.
I assume recompilation of binutils and GCC is mandatory; glibc is probably not needed. Anything else can be recompiled "natively" on the target.
TiA
ZioByte
After installing your toolchain in a nonstandard place you need to set the environment (maybe system-wide) correctly for GCC, using LIBRARY_PATH and C_INCLUDE_PATH or CPLUS_INCLUDE_PATH.
Environment Variables Affecting GCC
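For example, if the toolchain ends up under /opt/toolchain (the path is just an illustration), a one-off manual setup would be:
export PATH=/opt/toolchain/bin:$PATH
export LIBRARY_PATH=/opt/toolchain/lib
export C_INCLUDE_PATH=/opt/toolchain/include
export CPLUS_INCLUDE_PATH=/opt/toolchain/include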
I see three ways to automate setting path variables for your relocatable toolchain:
On every relocation, add your GCC path to your PATH environment variable and create an alias in your BusyBox profile (usually /etc/profile).
Alias example:
# derive the toolchain prefix by stripping the last two components (bin/gcc) from the resolved path
alias gcc='TOOLCHAIN_PREFIX=$(which gcc | rev | cut -d"/" -f3-10 | rev); \
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ gcc'
Create a launcher script for your toolchain that calculates the paths itself. You will have to launch it by its direct path, set that path when you launch the build process, or of course add its location to the PATH environment variable.
Script example (a usage example follows the script):
#!/bin/sh
# derive the toolchain prefix from the script's own location (strip the last two path components)
TOOLCHAIN_PREFIX=$(echo "$0" | rev | cut -d"/" -f3-10 | rev)
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ \
$TOOLCHAIN_PREFIX/bin/gcc-4.* "$@"
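Usage example, assuming the script above is saved as /opt/toolchain/bin/gcc-wrap (the name and location are made up) so that the toolchain really lives two directories above it:
chmod +x /opt/toolchain/bin/gcc-wrap
/opt/toolchain/bin/gcc-wrap -o hello hello.c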
The most reliable and ergonomic way: create an install/uninstall script that unpacks the toolchain and sets the environment correctly; to relocate the toolchain you uninstall it from one prefix and install it to another. If you have dpkg on your debian-testing system, a .deb package is the best choice.
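A minimal sketch of such an install script, assuming the toolchain is shipped as toolchain.tar.gz and that the target system reads /etc/profile.d/ at login (both are assumptions; adjust to your BusyBox setup):
#!/bin/sh
# hypothetical installer: unpack the toolchain and publish its paths system-wide
PREFIX=${1:-/opt/toolchain}
mkdir -p "$PREFIX"
tar xzf toolchain.tar.gz -C "$PREFIX"
cat > /etc/profile.d/toolchain.sh <<EOF
export PATH=$PREFIX/bin:\$PATH
export LIBRARY_PATH=$PREFIX/lib
export C_INCLUDE_PATH=$PREFIX/include
EOF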
I can see no way to set the environment fully automatically, but we can reduce it to setting just one path: the path of the toolchain.
Hint: for better stability you should isolate your toolchain and also install the Linux kernel headers and glibc into your prefix.

How to work with external libraries when cross compiling?

I am writing some code for a Raspberry Pi ARM target on an x86 Ubuntu machine. I am using the gcc-linaro-armhf toolchain. I am able to cross compile and run some independent programs on the Pi. Now I want to link my code with an external library such as ncurses. How can I achieve this?
Should I just link my program with the existing ncurses lib on host machine and then run on ARM? (I don't think this will work)
Do I need to get source or prebuilt version of lib for arm, put it in my lib path and then compile?
What is the best practice in this kind of situation?
I also want to know how it works for the C stdlib. In my program I used the stdio functions and it worked after cross compiling without doing anything special; I just provided the path to my ARM gcc in the makefile. So I want to know how it got the correct standard headers and libs.
Regarding your general questions:
Why the C library works:
The C library is part of your cross toolchain. That's why the headers are found and the program correctly links and runs. This is also true for some other very basic system libraries like libm and libstdc++ (not in every case, depends on the toolchain configuration).
In general, when dealing with cross-development you need some way to get your desired libraries cross-compiled. Using prebuilt binaries is rare in this case, especially with ARM hardware, because there are so many different configurations and things are often stripped down in different ways. That's why binaries are rarely compatible between different devices and Linux configurations.
If you're running Ubuntu on the Raspberry Pi then there is a chance that you may find a suitable ncurses library on the internet or even in some Ubuntu apt repository. The typical way, however, will be to cross compile the library with the specific toolchain you have got.
In cases when a lot and complex libraries need to be cross-compiled there are solutions that make life a bit easier like buildroot or ptxdist. These programs build complete Linux kernels and root file systems for embedded devices.
In your case, however, as long as you only want ncurses you can compile the source code yourself. You just need to download the sources and run configure while specifying your toolchain with the --host option; the --prefix option chooses the installation directory. After running make and make install, assuming everything went fine, you will have a set of headers and the ARM-compiled library for your application to link against.
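A sketch of that procedure, assuming the gcc-linaro-armhf tools are on your PATH as arm-linux-gnueabihf-* and using $HOME/arm-libs as an illustrative install prefix:
tar xzf ncurses-*.tar.gz
cd ncurses-*/
./configure --host=arm-linux-gnueabihf --prefix=$HOME/arm-libs
make && make install
# then build your own program against it:
arm-linux-gnueabihf-gcc myapp.c -I$HOME/arm-libs/include -L$HOME/arm-libs/lib -lncurses -o myapp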
Regarding cross compilation you will surely find loads of information on the internet and maybe ncurses has got some pointers in its shipped documentation, too.
Regarding the query about how the C library works with cross-tools:
When the cross-toolchain is compiled and built, a sysroot is provided at configuration time,
like --with-sysroot=${CLFS_CROSS_TOOLS}
--with-sysroot
--with-sysroot=dir
Tells GCC to consider dir as the root of a tree that contains (a subset of) the root filesystem of the target operating system. Target system headers, libraries and run-time object files will be searched for in there. More specifically, this acts as if --sysroot=dir was added to the default options of the built compiler. The specified directory is not copied into the install tree, unlike the options --with-headers and --with-libs that this option obsoletes. The default value, in case --with-sysroot is not given an argument, is ${gcc_tooldir}/sys-root. If the specified directory is a subdirectory of ${exec_prefix}, then it will be found relative to the GCC binaries if the installation tree is moved.
So instead of looking in /lib and /usr/include, it will look in the toolchain's sysroot for libc and the include files when compiling.
You can check with
arm-linux-gnueabihf-gcc -print-sysroot
which shows where it looks for libc. Also,
arm-linux-gnueabihf-gcc -print-search-dirs
gives you a clearer picture.
Clearly, you will need an ncurses compiled for the ARM that you are targeting - the one on the host will do you absolutely no good at all [unless your host has an ARM processor - but you said x86, so clearly not the case].
There MAY be some prebuilt libraries available, but I suspect it's more work to find one (that works and matches your specific conditions) than to build the library yourself from sources - it shouldn't be that hard, and I expect ncurses doesn't take that many minutes to build.
As to your first question: if you intend to use the ncurses library with your cross-compiler toolchain, you'll need to have its ARM-built binaries prepared.
Your second question is how it works with the standard libs: well, it's really NOT the system libc/libm that the toolchain uses to compile/link your program. You can see this with the --print-file-name= option of your compiler:
arm-none-linux-gnueabi-gcc --print-file-name=libm.a
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnueabi/libc/usr/lib/libm.a
arm-none-linux-gnueabi-gcc --print-file-name=libpthread.so
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnueabi/libc/usr/lib/libpthread.so
I think your Raspberry toolchain might be the same. You can try this out.
Vinay's answer is pretty solid. Just a correction: when compiling the ncurses library for the Raspberry Pi, the option to set your rootfs is --sysroot=<dir> and not --with-sysroot. That's what I found when I was using the following compiler:
arm-linux-gnueabihf-gcc --version
arm-linux-gnueabihf-gcc (crosstool-NG linaro-1.13.1+bzr2650 - Linaro GCC 2014.03) 4.8.3 20140303 (prerelease)
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
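For example, with that toolchain the flag is passed to the compiler itself rather than to ncurses' configure; something along these lines (the rootfs path $HOME/rpi/rootfs and the prefix are purely illustrative):
./configure --host=arm-linux-gnueabihf --prefix=$HOME/arm-libs \
    CC="arm-linux-gnueabihf-gcc --sysroot=$HOME/rpi/rootfs"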

Dealing with static libraries when porting C code from one operating system to another

I have been working on some C code on a Windows machine and now I am in the process of transferring it to a Linux computer where I do not have full privileges. In my code I link to several static libraries.
Is it correct that these libraries need to be re-made for a Linux computer?
The library in question is GSL-1.13 scientific library
Side question, does anyone have a pre-compiled version of the above for Linux?
I have tried using automake to compile the source on the Linux machine, but no makefile seems to be created and no error is output.
Thanks
Yes, you do need to compile any library again when you switch from Windows to GNU/Linux.
As for how to do that, you don't need automake to build GSL. You should read the file INSTALL that comes inside the tarball (the file gsl-1.16.tar.gz) very carefully. In a nutshell, you run the commands
$ ./configure
$ make
inside the directory that you unpacked from the tarball.
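Since you don't have full privileges on that machine, you can install into your home directory instead of the system default; a sketch (the prefix $HOME/gsl is just an example):
tar xzf gsl-1.16.tar.gz
cd gsl-1.16
./configure --prefix=$HOME/gsl
make
make install
# and later, when building your own code:
gcc myprog.c -I$HOME/gsl/include -L$HOME/gsl/lib -lgsl -lgslcblas -lm -o myprog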

How can I build a 32bit (i386) .deb on a 64bit box?

I have applications which successfully compile with the -m32 switch (in DMD and/or GCC) to produce:
appname: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked
(uses shared libs), for GNU/Linux 2.6.15, not stripped
The source packages I have created work fine, on both 32 bit and 64 bit Ubuntu to build the appropriate binary .debs.
I would like to produce the i386 .deb on the same 64-bit machine I use to produce the 64-bit .deb.
Is this possible, and where should I look for instructions?
The answer depends on the complexity of your build. When the normal 64 bit userland tools suffice for a build, simply specify the architecture via -a
debuild -ai386
However, there are often cases where this doesn't work, and in these cases you'll need pbuilder. pbuilder builds a clean Debian/Ubuntu system in a chroot-ed environment. man pbuilder is a good introduction. To get started, you'll need:
sudo pbuilder --create --architecture i386
sudo pbuilder --build mypackage.dsc
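In practice it helps to keep a separate base tarball for the i386 chroot so it doesn't clash with your amd64 one; something along these lines (the tarball path is just a convention, not a requirement):
sudo pbuilder --create --architecture i386 --basetgz /var/cache/pbuilder/base-i386.tgz
sudo pbuilder --build --basetgz /var/cache/pbuilder/base-i386.tgz mypackage.dsc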
Start by calling debuild with the -ai386 option, which will
change the architecture that the package is built for.
Of course, you have to ensure that the package contains the i386 build of the application.
You don't do anything different from building a 64-bit .deb, except that you include a 32-bit build of your application.
The control file specifies the architecture, so be sure you select the correct one.
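For illustration, the architecture field in debian/control would look something like this (appname is taken from the question's file output; "any" also works if you always pass -ai386 when building):
Package: appname
Architecture: i386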

Location of C standard library

The gcc manual says that "The C standard library itself is stored in ‘/usr/lib/libc.a’". I have gcc installed, but I could not find libc.a at that location. I'm curious to know where it is located.
I also find many .so files in /usr/lib. What are those?
If you are looking for libc.a:
$ gcc --print-file-name=libc.a
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libc.a
A few things:
gcc and glibc are two different things. gcc is the compiler; glibc is the C runtime library. Pretty much everything needs glibc to run.
.a files are static libraries; .so means shared object and is the Linux equivalent of a DLL.
Most things DON'T link against libc.a; they link against libc.so.
Hope that clears it up for you. As for the location, it's almost certainly going to be /usr/lib/libc.a and/or /usr/lib/libc.so. Like I said, the .so one is the more common.
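You can see the difference yourself with a trivial test (this assumes the static libc, i.e. libc.a, is actually installed on your system):
gcc hello.c -o hello-dynamic           # default: links against libc.so at runtime
gcc -static hello.c -o hello-static    # pulls everything it needs from libc.a into the binary
file hello-dynamic hello-static        # reports "dynamically linked" vs "statically linked"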
If you are on an RPM-based Linux (Red Hat/CentOS/Fedora/SUSE), you can get the location of the installed glibc files with
rpm -ql glibc and rpm -ql glibc-devel.
locate libc.a would also get you the location. And to see which package it comes from, do:
rpm -qf /usr/lib/libc.a
Here is what rpm -qi has to say about these packages:
glibc-devel:
The glibc-devel package contains the object files necessary
for developing programs which use the standard C libraries (which are
used by nearly all programs). If you are developing programs which
will use the standard C libraries, your system needs to have these
standard object files available in order to create the
executables.
Install glibc-devel if you are going to develop programs which will
use the standard C libraries
glibc:
The glibc package contains standard libraries which are used by
multiple programs on the system. In order to save disk space and
memory, as well as to make upgrading easier, common system code is
kept in one place and shared between programs. This particular package
contains the most important sets of shared libraries: the standard C
library and the standard math library. Without these two libraries, a
Linux system will not function.
You need to install the package with the static libraries separately:
glibc-static.i686
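On yum-based systems that is (assuming you have root, or can ask your admin):
yum install glibc-static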
On CentOS 5.8:
$ ls -l /usr/lib/libc.a
-rw-r--r-- 1 root root 2442786 Apr 8 2010 /usr/lib/libc.a
$ rpm -qf /usr/lib/libc.a
glibc-devel-2.3.4-2.43.el4_8.3
You also have to have the glibc-devel package installed under Red Hat distributions.
