I'm doing a series of benchmarks and found the httperf tool.
But the version in my Ubuntu 12.04 has too small a file descriptor limit, because it warns me with this message:
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
There used to be a guide on compiling httperf with a bigger limit at http://gom-jabbar.org/articles/2009/02/04/httperf-and-file-descriptors, but the site is down now.
Does anyone know the steps to compile the tool with the proper settings?
I've always followed the instructions here, which should set the global values properly. You can check by issuing ulimit -n. (N.B. I had to include ulimit -n 65535 in my .profile; for some reason named users don't require this but root does.)
Don't forget to recompile httperf. Before doing make install, issue ./httperf -v | grep maximum; you should see 65535. If not, something went wrong.
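For what it's worth, the gist of the old gom-jabbar guide, as I remember it, was to raise the file descriptor limit and then rebuild httperf so its compiled-in FD_SETSIZE matches. This is a rough sketch only; the header path and the 65535 value are assumptions, so adapt them to your system:
# Raise the per-process limit (as root, and/or persistently via /etc/security/limits.conf)
ulimit -n 65535
# On glibc the FD_SETSIZE ceiling lives in a system header; the old guide had you
# back it up and bump __FD_SETSIZE there (e.g. /usr/include/bits/typesizes.h).
# Then rebuild httperf from source with the new limit in effect:
./configure
make
./httperf -v | grep maximum   # sanity check before installing
sudo make install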
I am working on a similar project (httperf 0.9.0 on Ubuntu 12.04), but I am having some difficulty getting httperf to actually compile properly. I'm sure I've forgotten something basic, but let me know how you fare. EDIT: I realized my problem was a library version incompatibility. I imported a binary built on a different server and it works fine.
Sure, I've "chosen the wrong OS," Fedora instead of RHEL or CentOS, but I am where I am, and there's no rtmp module in the standard distribution of nginx for Fedora, whereas both RHEL and CentOS DO have the rtmp module available as a standard package.
So, I downloaded the source and did a build. While the call to make install does the build (and I didn't see any errors in the 817 lines of output), it DOES NOT do the installation?!
At first I went down the wrong garden path, which is not totally wrong (see "Part II" below), but while writing "Part II" for this posting I realized that I can't even find ANY evidence that it compiled ANY of the source for the rtmp module?! I followed the directions in the module's GitHub readme file.
Of course, I kept a log of the run - too long to post here.
Having decades of coding experience, I knew to check for a Makefile for that code and didn't find one?! STRANGE, right?
If anyone asks for info from that log, I have it and will provide it, but IDK what you may want to see from it.
Part II
I figured the installation didn't happen because the source code is written generically and doesn't pay any attention to the OS it's being installed on, and that's what "packaging" is all about and what package maintainers have to deal with...
I don't really have time to learn ALL the ins and outs of these packages, but I do know that the standard nginx packages provide these modules:
/usr/lib64/nginx/modules/ngx_http_perl_module.so
/usr/lib64/nginx/modules/ngx_http_image_filter_module.so
/usr/lib64/nginx/modules/ngx_mail_module.so
/usr/lib64/nginx/modules/ngx_http_naxsi_module.so
/usr/lib64/nginx/modules/ngx_stream_module.so
/usr/lib64/nginx/modules/ngx_http_xslt_filter_module.so
/usr/lib64/nginx/modules/ngx_http_vhost_traffic_status_module.so
However, I don't see the compilation creating ANY .so files, much less moving them to where they go on Fedora (the default is apparently /etc/nginx/modules). Further, the log output directed me to look in /usr/local/nginx, and there are no .o or .so files there at all, but rather a single binary. That's fine, but it doesn't help me, I presume, unless I want to screw around with moving files from where they're "expected" from the OS's vs. nginx's point of view, and that sounds to me like a time-sink of massive proportions.
However, this IS a one-off installation at the moment, and I'd rather not have a lot of pain whenever this box (and likely others to follow if this works) needs an upgrade. So, I found this gem of a blog posting. It touches on this problem but also seems rather involved, as I don't fully grok it yet.
If I could simply learn how to build the correct file, which I presume is intended to be (once installed):
/usr/lib64/nginx/modules/ngx_rtmp_module.so
...from the .c source files, then I'm pretty sure I could "figure it out from there."
(Another possibility might be to find a way to prove from some sort of analysis that the GetPageSpeed people didn't alter the source when providing their package. Or, perhaps I could convince the package maintainer to include the rtmp package in with the standard packages available for Fedora, but, well, at best that's a long wait.)
It turns out that the build from source skips the .o and .so file stages and just builds an executable.
It's not set up for running in the normal, modern Fedora environment, however, as already noted above.
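(For reference: nginx's dynamic-module build can produce exactly that kind of .so, separate from the single binary the default build makes. A rough sketch only; the version number is illustrative and needs to match whatever nginx -v reports on the box, and --with-compat is an assumption that keeps the module loadable by the packaged nginx:)
nginx -v                                   # note the packaged version, e.g. 1.24.0
wget http://nginx.org/download/nginx-1.24.0.tar.gz
tar xzf nginx-1.24.0.tar.gz
git clone https://github.com/arut/nginx-rtmp-module.git
cd nginx-1.24.0
./configure --with-compat --add-dynamic-module=../nginx-rtmp-module
make modules                               # builds objs/ngx_rtmp_module.so only
cp objs/ngx_rtmp_module.so /usr/lib64/nginx/modules/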
Not finding another answer and wanting to move on to other things, I simply got this from-source version working and it wasn't that hard. Note that this presumes you've installed the standard nginx package(s), which in this case hook in your man pages, systemd interfaces and so on so you can manage it as usual. In your favorite shell, as root:
# First, for my own sanity:
#
cd /etc
mv nginx nginx.from_FC_distro
ln -s /usr/local/nginx
cd /usr/local/nginx/logs
mv error.log error.log.orig
ln -s /var/log/nginx/error.log
#
# Now, get it to run and STAY running:
#
cd /usr/sbin
mv nginx nginx.from_FC_Distro
# Then EITHER this:
cp -p /usr/local/nginx/sbin/nginx /usr/sbin/nginx.from_src
ln -s nginx.from_src nginx
# OR this:
ln -s /usr/local/nginx/sbin/nginx
# Either vi or the echo works:
# vi /etc/nginx/conf/nginx.conf
echo "pid /run/nginx.pid;" >> /etc/nginx/conf/nginx.conf
#
# Finally:
systemctl enable nginx.service
systemctl start nginx.service
And now you have a running installation of the nginx server, with whatever config you set up in the config file, WITH the rtmp service! AND, you can manage it as usual. Upgrades aren't so hard, either; just don't bother upgrading the nginx package the usual way. I'm sure the script-kiddies can figure out how to script it based on this article.
I have been using the "file" command in terminal (Mac) for a while.
Now encountering this error:
file: File 5.31 supports only version 14 magic files. `/usr/share/file/magic.mgc' is version 13
It seems like updating the magic file should be a fairly simple solution, but I can't find any instructions for doing this. Can someone advise?
Any help is much appreciated here.
Perhaps you have more than one file executable in your $PATH. It can get confused if it finds a different database (magic) than it expected. File 5.29 (tagged October 2016) bumped the format to version 14. File 5.31 appears to be the current version on MacOS (works for me).
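A couple of quick checks (illustrative; the database path is whatever your build actually uses):
type -a file                     # every file(1) on your $PATH, in lookup order
file --version                   # prints the version and the magic file it reads
ls -l /usr/share/file/magic.mgc  # the database the error is complaining about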
The "magic" file is built up from many smaller files (see git repository (mirror)). If you needed a specific version that's not prepackaged, you could download the source and compile it, starting with the project page, which points to an ftp site.
However, replacing that magic.mgc file runs into Apple's "system protection" (limited permissions). It's possible to turn that off (perhaps not a Good Idea®), but it's doable.
While you could replace the data file, it might be simpler to just install MacPorts and use the file package from that. It's currently at 5.32 (a step ahead of Apple's package), and if you did not like that, it's simpler to remove/alter.
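If you go that route, something like this should do it (assuming MacPorts itself is already installed; it puts everything under /opt/local by default):
sudo port install file
/opt/local/bin/file --version   # the MacPorts copy, with its own magic database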
I am new to VyOS development. I have written code that fetches info from a VyOS kernel module and writes it to a netlink socket. The problem is I am not sure whether:
I can edit the kernel module code directly to call my function, or whether I have to write a patch.
If I have to make a patch file, where do I place it in the kernel source code? I have already made a patch file using the diff command.
I have searched a lot for this problem but couldn't find a satisfactory solution.
Thanks.
After a long search I solved the problem I was facing. Here are my conclusions in case any of you get stuck on the same problem.
Yes, you can edit the kernel module code directly in VyOS development, but this method is not much appreciated.
Yes, you can write a patch for kernel modules too, and it should be in Git format as described in How to write VyOS Patch (a quick sketch of producing one is just below). I will update this soon with where to place the .patch file in the VyOS kernel code.
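A minimal example of turning a committed change into a Git-format patch (the file name and commit message are made-up placeholders):
cd linux-image/                     # wherever the build checked out the kernel source
git add drivers/net/my_module.c     # hypothetical file you changed
git commit -m "Add netlink export hook"
git format-patch -1 HEAD            # writes 0001-Add-netlink-export-hook.patch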
To check the debugging output using dmesg, use the KERN_DEBUG log level (I am not sure about the others).
printk(KERN_DEBUG "%s: Debugging info\n", __FUNCTION__);
Moreover, to check a modification in the VyOS kernel you don't need to build a complete ISO file every time. You just need to run the following commands.
Note: each path is given relative to the main ISO build directory, to avoid path problems.
cd build-iso/
sudo make clean-linux-image
sudo make linux-image
Then
cd build-iso/pkgs/
Here you will find these Debian packages:
build-iso/pkgs/linux-image-3.13.11-1-amd64-vyos_999.dev_amd64.deb
build-iso/pkgs/linux-libc-dev_999.dev_amd64.deb
build-iso/pkgs/linux-vyatta-kbuild_999.dev_amd64.deb
Copy these files to an already installed VyOS system and install them there.
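The copy can be done with scp, for example (the address and user are illustrative):
scp build-iso/pkgs/*.deb vyos@192.0.2.10:/tmp/
Then, on the VyOS box: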
dpkg -i linux-image-3.13.11-1-amd64-vyos_999.dev_amd64.deb
dpkg -i linux-libc-dev_999.dev_amd64.deb
dpkg -i linux-vyatta-kbuild_999.dev_amd64.deb
Reboot the system and check your modifications using dmesg.
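For example, to see the printk output added above (the grep pattern is simply whatever string you logged):
dmesg | grep "Debugging info"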
I have downloaded and run the CUDA 5.0 installer on my Mint 15 64-bit distro. After hours of agony adjusting / removing / installing packages, it was able to finish the installation - at least that's what it said.
I can run the CUDA samples, so I thought, hey, it's working. However, I just made a new .cu file and wanted to compile it, but I got "nvcc: command not found".
I have looked at a similar topic here, and they are talking about the /opt/bin/ directory; however, on my machine there is no such directory. Does that mean it actually did not install? It tells me to install the nvidia cuda toolkit with apt-get, but I am not sure if I should do that.
Also, I did say I ran the CUDA samples fine, but I have to run ldconfig /usr/local/cuda/lib64 before I can get them working. Is there a way to automate that?
Thanks
You need to add the bin directory of the nvcc compiler driver to your PATH (environment variable), and you need to add the appropriate lib directories to your LD_LIBRARY_PATH environment variable.
For an immediate test, this should be as simple as:
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/lib
These instructions should be presented to you at the completion of a successful cuda toolkit install, but it seems your install method may have been roundabout.
To make this "automatic", you may want to investigate one of the methods for adding these statements to a script run at login. For example, if you have a .bashrc file in your user's home directory, it should probably be sufficient to put the above commands at its very end.
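Something like this should do it (a sketch, assuming the default /usr/local/cuda install prefix):
echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/lib' >> ~/.bashrc
source ~/.bashrc   # pick up the changes in the current shell
nvcc --version     # sanity check that nvcc is now found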
Note that Linux Mint is not one of the officially supported CUDA distros, so your mileage may vary.
I am trying to compile a small utility called tcpslice. It's the typical GNU C application. When I run ./configure, here is the output:
checking build system type... Invalid configuration `x86_64-pc-linux-gnuoldld': machine `x86_64-pc' not recognized
configure: error: /bin/sh ./config.sub x86_64-pc-linux-gnuoldld failed
It appears to not support compilation as a 64-bit Linux application. So I have a few questions:
Is it possible to set some flags to compile the application as 32-bit AND be able to run it on my 64-bit operating system?
Is it possible to update the configure script to support 64-bit Linux? If so, will I be making some serious code changes in the .c files as well?
I noticed a 64-bit RHEL6 machine on my network has this utility installed and running, with an identical version number (1.2a3). Could I somehow download the source that was used to build it? I can get access to RHN if necessary.
Is it possible to set some flags to compile the application as 32-bit AND be able to run it on my 64-bit operating system?
Yes. -m32 is the option.
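For an autoconf-style package like this, one common way to apply it is through the compiler variable (a sketch; it assumes the 32-bit glibc development packages are installed on the 64-bit host):
./configure CC="gcc -m32"
make
file tcpslice   # should report a 32-bit ELF executable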
Is it possible to update the configure script to support 64-bit Linux? If so, will I be making some serious code changes in the .c files as well?
You will have to make some code changes to make a purely 32-bit application work on 64-bit. Here's a link that talks about porting code from 32-bit to 64-bit.
I am sorry, I do not know the answer to your third question.
I hope the little information I have provided helps in some way.
You've misinterpreted what the configure script is telling you. The solution has nothing to do with CPU bitness.
The error comes down to a too-old version of the config.guess/config.sub scripts, which the package creator generated with libtoolize. To fix it, you will need to have libtool installed, then say:
$ libtoolize --force
You'll find that configure now runs, because libtoolize overwrote the tarball version of config.guess with one appropriate to your system.
You may run into another problem, a "missing" bpf.h file. Just edit tcpslice.c and change this line:
#include <net/bpf.h>
to:
#include <pcap-bpf.h>
With those two changes, I got tcpslice to build on my 64-bit CentOS 5 box.
Install the following packages:
$ apt-get install ia32-libs
For RHEL it's different; look at the answer to this question:
CentOS 64 bit bad ELF interpreter