LLDB ioctl problems - c

I have a program in which I use ioctl(0, TIOCGWINSZ, (struct winsize *)) to find the size of the terminal window the program is running in. When I run it in the terminal, it works fine, but when I use LLDB, ioctl gives a window size of 0 x 0.
Example:
#include <unistd.h>
#include <sys/ioctl.h>
#include <stdio.h>
int main() {
    struct winsize tty_window_size;

    ioctl(STDOUT_FILENO, TIOCGWINSZ, &tty_window_size);
    printf("Rows: %i, Cols: %i\n", tty_window_size.ws_row, tty_window_size.ws_col);
    return 0;
}
Terminal transcript:
$ clang test.c
$ ./a.out
Rows: 24, Cols: 80
$ lldb ./a.out
(lldb) target create "./a.out"
Current executable set to './a.out' (x86_64).
(lldb) r
Process 32763 launched: './a.out' (x86_64)
Rows: 0, Cols: 0
Process 32763 exited with status = 0 (0x00000000)
Does anybody know why this happens, or know of a way to fix it?
Thanks in advance.

lldb uses ptys to handle program input & output, but it seems like a bug that they aren't set up to track lldb's terminal size. Please file that with the lldb.llvm.org bug tracker.
If you are on OS X, you can run your app in a separate Terminal window (which is probably what you want if you're doing anything fancy with the terminal anyway) by launching it like:
(lldb) process launch -tty
I don't know if this has been implemented on Linux yet or not.
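Another workaround while debugging is to fall back to the COLUMNS and LINES environment variables when ioctl() reports a 0 x 0 window. This is only a rough sketch I'm adding here, not part of the original answers: many shells keep COLUMNS and LINES as unexported shell variables, so you may need to export them before launching lldb for this to work.
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct winsize ws = {0};
    int rows = 0, cols = 0;

    /* Try the terminal first */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0) {
        rows = ws.ws_row;
        cols = ws.ws_col;
    }
    /* Fall back to the environment, e.g. when running under lldb's pty
       (assumes COLUMNS/LINES were exported by the shell) */
    if (rows == 0 || cols == 0) {
        const char *r = getenv("LINES");
        const char *c = getenv("COLUMNS");
        if (r) rows = atoi(r);
        if (c) cols = atoi(c);
    }
    printf("Rows: %d, Cols: %d\n", rows, cols);
    return 0;
}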

Not sure whether this is still useful since it is an old post. Anyway... I faced the same problem and found a workaround: if ioctl on stdout fails (or reports a 0 x 0 size), try again on /dev/tty.
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
void getTerminalSize(int *row, int *col) {
    struct winsize ws;

    *row = *col = 0; /* default value (indicates an error) */
    if (!isatty(STDOUT_FILENO)) {
        return;
    }
    ws.ws_row = ws.ws_col = 0;
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1 || ws.ws_row == 0 || ws.ws_col == 0) {
        int fd = open("/dev/tty", O_RDONLY);
        if (fd != -1) {
            ioctl(fd, TIOCGWINSZ, &ws);
            close(fd);
        }
    }
    *row = ws.ws_row;
    *col = ws.ws_col;
}

int main() {
    int row, col;

    getTerminalSize(&row, &col);
    printf("Row: %i, Col: %i\n", row, col);
    return 0;
}

Related

SIGXFSZ is not sent by the kernel unless something is printed to stdout?

I am learning "Advanced Programming in Unix Environment", and have a problem with exercise no.11 in chapter 10.
In my program, I set RLIMIT_FSIZE to 1024.
So the kernel should send SIGXFSZ to my program when write trying to exceed that limit.
But I found that SIGXFSZ is not send unless something is printed to stdout.
Here is my code:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <signal.h>

#define BUFFSIZE 100

void xfsz_handler(int signo)
{
    fprintf(stderr, "%d, %s\n", signo, strsignal(signo));
}

int main(int argc, char* argv[])
{
    int n;
    char buf[BUFFSIZE];
    struct rlimit fsizeLimit;

    fsizeLimit.rlim_cur = 1024;
    fsizeLimit.rlim_max = 1024;
    if (setrlimit(RLIMIT_FSIZE, &fsizeLimit) < 0)
    {
        perror("setrlimit error");
        exit(-1);
    }
    if (signal(SIGXFSZ, xfsz_handler) == SIG_ERR)
    {
        fprintf(stderr, "set signal handler error for %d\n", SIGXFSZ);
        exit(-1);
    }
    printf("what ever\n"); /* we need this to get SIGXFSZ sent */
    while ((n = read(STDIN_FILENO, buf, BUFFSIZE)) > 0)
    {
        int byteWrite = 0;
        if ((byteWrite = write(STDOUT_FILENO, buf, n)) < 0)
        {
            perror("write error");
            exit(-1);
        }
        if (byteWrite != n)
        {
            fprintf(stderr, "byteWrite=%d, n=%d\n", byteWrite, n);
            exit(-1);
        }
    }
    if (n < 0)
    {
        perror("read error");
        exit(-1);
    }
    return 0;
}
If I comment out the following line in the code, the kernel does not deliver SIGXFSZ:
printf("What ever . . . \n");
Why does this happen? Thanks in advance.
[root@luaDevelopment ex11]# ./myCopy < /root/workspace/AdvanceProgrammingInTheUnixEnvironment.20140627.tar.bz2 >aa.tar.bz2
byteWrite=24, n=100
[root@luaDevelopment ex11]# make
gcc -o myCopy myCopy.c -std=gnu99 -I../../lib/ -L../../lib/ -lch10
[root@luaDevelopment ex11]# ./myCopy < /root/workspace/AdvanceProgrammingInTheUnixEnvironment.20140627.tar.bz2 >aa.tar.bz2
byteWrite=24, n=100
25, File size limit exceeded
[root@luaDevelopment ex11]#
user3693690 found the answer in Appendix C of the book:
10.11 Under Linux 3.2.0, Mac OS X 10.6.8, and Solaris 10, the signal handler for SIGXFSZ is never called because the loop exits the program on a short write, but write returns a count of 24 as soon as the file’s size reaches 1,024 bytes. When the file’s size has reached 1,000 bytes under FreeBSD 8.0, the signal handler is called on the next attempt to write 100 bytes, and the write call returns −1 with errno set to EFBIG("File too big"). On all four platforms, if we attempt an additional write at the current file offset (the end of the file), we will receive SIGXFSZ and write will fail, returning −1 with errno set to EFBIG.
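To make that behaviour concrete, here is a minimal, self-contained sketch (the out.dat filename, the fixed 1024-byte limit, and the loop structure are my own choices, not the book's): on Linux the write() that crosses the limit returns a short count without raising the signal, and only the next write() at the end of the file delivers SIGXFSZ and fails with EFBIG.
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

static void xfsz_handler(int signo)
{
    /* write() is async-signal-safe, unlike fprintf() */
    (void)signo;
    const char msg[] = "caught SIGXFSZ\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
}

int main(void)
{
    struct rlimit lim = { 1024, 1024 };
    char buf[100];
    int fd;

    memset(buf, 'x', sizeof(buf));
    if (setrlimit(RLIMIT_FSIZE, &lim) < 0) { perror("setrlimit"); exit(1); }
    if (signal(SIGXFSZ, xfsz_handler) == SIG_ERR) { perror("signal"); exit(1); }

    fd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    for (;;) {
        ssize_t n = write(fd, buf, sizeof(buf));
        if (n < 0) {                     /* write at the limit: SIGXFSZ, then EFBIG */
            perror("write");
            break;
        }
        if ((size_t)n < sizeof(buf))     /* crossing the limit: short write, no signal yet */
            fprintf(stderr, "short write: %zd bytes\n", n);
    }
    close(fd);
    return 0;
}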

system("echo text > t.log") reports error

The following code is intended to get and set RLIMIT_NOFILE, then call exec and system:
#include <sys/resource.h>
#include <sys/time.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

void show_nofile()
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl)) {
        perror("getrlimit");
    }
    printf("rlim_cur = %ld, rlim_max =%ld\n", rl.rlim_cur, rl.rlim_max);
}

int set_nofile(long cur, long max)
{
    struct rlimit rl;
    rl.rlim_cur = cur;
    rl.rlim_max = max;
    if (setrlimit(RLIMIT_NOFILE, &rl)) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}

int main(int argc, char* argv[])
{
    show_nofile();
    if (argc < 2) {
        set_nofile(9, 10);
        show_nofile();
        printf("now execlp...\n");
        execlp("./a.out", "a.out", "-n", (char*)NULL);
    } else {
        if (-1 == system("/bin/echo abc > log")) {
            perror("system");
        }
    }
    return 0;
}
This is how I compile and run it: gcc -Wall limit.c && ./a.out. The result is:
rlim_cur = 1024, rlim_max =4096
rlim_cur = 9, rlim_max =10
now execlp...
rlim_cur = 9, rlim_max =10
sh: 1: 1: Invalid argument
What could be wrong here?
P.S.:
I run it in my home dir, so there are no permission problems.
Besides, the error message does not seem to be related to file permissions.
The problem is that the limit you set for RLIMIT_NOFILE is too low.
strace your program and you will find lines like:
10512 open("def", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
10512 fcntl(1, F_DUPFD, 10) = -1 EINVAL (Invalid argument)
10512 close(1) = 0
which means sh cannot redirect I/O successfully.
Please replace
set_nofile(9, 10);
with
set_nofile(12, 20);
and try again.
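The EINVAL comes from the shell trying to save the original stdout on a descriptor at or above 10 while it sets up the redirection; with RLIMIT_NOFILE at 10, F_DUPFD cannot return such a descriptor. A minimal sketch reproducing just that failure (the limit values mirror the question and are otherwise arbitrary):
#include <fcntl.h>
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl = { 10, 10 };      /* same low limit as in the question */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");

    /* What sh does when saving stdout before a redirection:
       duplicate fd 1 onto the lowest free descriptor >= 10.
       With RLIMIT_NOFILE == 10, no such descriptor is allowed. */
    int saved = fcntl(1, F_DUPFD, 10);
    if (saved == -1)
        perror("fcntl(F_DUPFD, 10)");   /* EINVAL, matching the strace output above */
    else
        printf("saved stdout as fd %d\n", saved);
    return 0;
}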
Is the "text" a variable? If so, you may need to try this:
system("echo $text > t.log");
There seems to be nothing wrong with the code as such.
Try to check the write permissions on the directory you are trying to execute the program in.
On successful execution, your file "t.log" will only contain the word "text".
@Yunxuan Tuanmu: '$text' would be expanded in the shell's context, and it will be empty since it is not set in that context.

libusb - Segmentation fault: 11

I've got a very basic, bare minimum libusb example going, which compiles, but the output produced by the following application:
#include <stdio.h>
#include <stdlib.h>
#include <libusb-1.0/libusb.h>

int main(void) {
    puts("Looking for USB devices...");

    libusb_device **devices;
    libusb_context *context = NULL;
    ssize_t device_count = 0;

    device_count = libusb_get_device_list(context, &devices);
    if (device_count < 0) {
        puts("Unable to retrieve USB device list!");
    }

    printf("%lu devices found\n", device_count);

    return EXIT_SUCCESS;
}
is as follows:
Looking for USB devices...
Segmentation fault: 11
The failure occurs on this line:
device_count = libusb_get_device_list(context, &devices);
I'm running the above on Mac OS X 10.9, and have libusb version 1.0.9 installed via Homebrew.
Any idea what could be the problem?
The code fails to initialise context.
Call libusb_init() prior to any other libusb operation.
Add lines like these before issuing any other call into libusb:
int result = libusb_init(&context);
if (result < 0)
{
    fprintf(stderr, "libusb_init() failed with %d.\n", result);
    exit(EXIT_FAILURE);
}
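Putting it together, a minimal corrected version of the program might look like this (a sketch only; it also frees the device list, uses %zd for the ssize_t count, and calls libusb_exit() on the way out):
#include <stdio.h>
#include <stdlib.h>
#include <libusb-1.0/libusb.h>

int main(void) {
    libusb_context *context = NULL;
    libusb_device **devices = NULL;
    ssize_t device_count;

    /* Initialise libusb before any other call */
    if (libusb_init(&context) < 0) {
        fputs("libusb_init() failed\n", stderr);
        return EXIT_FAILURE;
    }

    puts("Looking for USB devices...");
    device_count = libusb_get_device_list(context, &devices);
    if (device_count < 0) {
        puts("Unable to retrieve USB device list!");
        libusb_exit(context);
        return EXIT_FAILURE;
    }

    printf("%zd devices found\n", device_count);

    /* Release the list (unreferencing the devices) and the context */
    libusb_free_device_list(devices, 1);
    libusb_exit(context);
    return EXIT_SUCCESS;
}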

How can I get the screen resolution in C? (Operating system: QNX or Linux)

I am doing GUI development with QNX (a design that depends on the screen resolution).
How can I get the screen resolution in C?
I am using the QNX operating system.
Is it possible?
Is there an OS function for this?
Thanks
Assuming you are using a device with a framebuffer (and have root access):
(taken from this answer: Paint Pixels to Screen via Linux FrameBuffer)
Also, as mentioned above, which graphics library you are using makes a lot of difference, as this code only tells you what the framebuffer is set to, not what the GUI code is using, so it might not be useful at all. If you are not using X or any other graphics library, then you will probably need to use the framebuffer directly; see the rest of the linked answer for how to do that. (I strongly suggest you use DirectFB; it will save you implementing a LOT of code.)
You could also use the GL drivers that turn up on most devices (including embedded ones), so this will also affect how you do what you require.
Are you using a SoC? Does the manufacturer have their own driver layer? That may work completely differently and would probably come with its own API to handle this.
But anyway, I hope this helps.
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/mman.h>
#include <sys/ioctl.h>

int main()
{
    int fbfd = 0;
    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;

    // Open the framebuffer device for reading and writing
    fbfd = open("/dev/fb0", O_RDWR);
    if (fbfd == -1) {
        perror("Error: cannot open framebuffer device");
        exit(1);
    }
    printf("The framebuffer device was opened successfully.\n");

    // Get fixed screen information
    if (ioctl(fbfd, FBIOGET_FSCREENINFO, &finfo) == -1) {
        perror("Error reading fixed information");
        exit(2);
    }

    // Get variable screen information
    if (ioctl(fbfd, FBIOGET_VSCREENINFO, &vinfo) == -1) {
        perror("Error reading variable information");
        exit(3);
    }

    printf("%dx%d, %dbpp\n", vinfo.xres, vinfo.yres, vinfo.bits_per_pixel);

    // (the rest of the linked answer maps the framebuffer and draws to it; omitted here)
    close(fbfd);
    return 0;
}
For a Unix-like OS you may use the X11 library, but if you need a cross-platform solution, try GTK+.
Full code:
// The C standard library
#include <stdio.h>
#include <stdlib.h>
// GTK+
#include <gtk/gtk.h>
#include <glib.h>
#include <glib/gprintf.h>
// X11
#include <X11/Xlib.h>

/*
  Print the current screen resolution using GTK+ 3
  https://en.wikipedia.org/wiki/GTK%2B
*/
int
print_screen_resolution_by_GTK(int argc, char *argv[])
{
    GdkScreen *screen;
    gint width, height;

    gtk_init(&argc, &argv);
    if ((screen = gdk_screen_get_default()) != NULL) {
        width = gdk_screen_get_width(screen);
        height = gdk_screen_get_height(screen);
        g_printf("Current screen resolution: %dx%d (using GTK+)\n", width, height);
    }
    return 0;
}

/*
  Print the current screen resolution using libX11 (works only on Unix-like OSes)
  https://en.wikipedia.org/wiki/X_Window_System
  Based on:
  https://www.x.org/releases/X11R7.6/doc/libX11/specs/libX11/libX11.html
  http://surfingtroves.blogspot.com/2011/01/how-to-get-screen-resolution-in-linux-c.html
*/
int
print_display_resolution_by_X11()
{
    Display *display;
    Window window;
    XWindowAttributes xw_attrs;

    if ((display = XOpenDisplay(NULL)) == NULL) {
        fprintf(stderr, "Failed to open default display\n");
        return -1;
    }
    window = DefaultRootWindow(display);
    XGetWindowAttributes(display, window, &xw_attrs);
    printf("Current window resolution: %dx%d (using X11)\n", xw_attrs.width, xw_attrs.height);
    XCloseDisplay(display);
    return 0;
}

int main(int argc, char *argv[])
{
    print_screen_resolution_by_GTK(argc, argv);
    print_display_resolution_by_X11();
    return EXIT_SUCCESS;
}
Compilation:
gcc -o main main.c `pkg-config --libs --cflags gtk+-3.0 x11`
Result (on my computer):
Current screen resolution: 1366x768 (using GTK+)
Current window resolution: 1366x768 (using X11)
You can simply use this function I created; it gets the screen size by running xdpyinfo, splits the output, and returns the two values (the x and y resolution).
I tried it on Ubuntu 20.04 and it works perfectly!
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

unsigned short *get_screen_size(void)
{
    static unsigned short size[2];
    char *array[8];
    char screen_size[64];
    char *token = NULL;

    /* Ask the X server for the display dimensions, e.g. "1366x768" */
    FILE *cmd = popen("xdpyinfo | awk '/dimensions/ {print $2}'", "r");
    if (!cmd)
        return NULL;
    while (fgets(screen_size, sizeof(screen_size), cmd) != NULL)
        ;
    pclose(cmd);

    /* Split "1366x768" on the 'x' */
    token = strtok(screen_size, "x\n");
    if (!token)
        return NULL;
    for (unsigned short i = 0; token != NULL && i < 2; ++i) {
        array[i] = token;
        token = strtok(NULL, "x\n");
    }
    size[0] = atoi(array[0]);
    size[1] = atoi(array[1]);
    return size;
}

int main(void)
{
    unsigned short *size = get_screen_size();
    if (!size) {
        fprintf(stderr, "Could not determine screen size\n");
        return 1;
    }
    printf("Screen resolution = %dx%d\n", size[0], size[1]);
    return 0;
}
If you have any questions, do not hesitate to ask! :)

Bus error in C Program on Unix machine

I'm fairly inexperienced with C and am running into a "Bus error" whose cause I cannot understand. I had never heard of gdb, but I came across it on this forum, tried using it on my problem program, and got the following output:
% gdb Proc1
GNU gdb 5.0
...
This GDB was configured as "sparc-sun-solaris2.8"...
(no debugging symbols found)...
(gdb) run
Starting program: /home/0/vlcek/CSE660/Lab3/Proc1
(no debugging symbols found)...
(no debugging symbols found)...
(no debugging symbols found)...

Program received signal SIGSEGV, Segmentation fault.
0x10a64 in main ()
I have no idea what this means. Is it saying there's an error on line 10 of my code? If so, line 10 of my code is merely "int main()", so I'm not sure what the issue is there... When I try running the program, all it says is "Bus error", so I'm not sure where to go from here. I even tried putting a printf right after main and it doesn't print the string, only gives me a Bus error.
Below is my code:
// Compilation Command: gcc -o Proc1 Proc1.c ssem.o sshm.o
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "ssem.h"
#include "sshm.h"

// Code of Proc1
int main()
{
    int i, internal_reg;
    int key1 = 111111, key2 = 222222, key3 = 333333, key4 = 444444;

    /* here create and initialize all semaphores */
    int sem1 = sem_create(key1, 1);
    if (sem1 < 0) {
        perror("sem failed");
    }
    int sem2 = sem_create(key2, 1);
    if (sem2 < 0) {
        perror("sem failed");
    }
    int sem3 = sem_create(key3, 1);
    if (sem3 < 0) {
        perror("sem failed");
    }
    int sem4 = sem_create(key4, 1);
    if (sem4 < 0) {
        perror("sem failed");
    }

    /* here created: shared memory array Account of size 3 */
    int *Account;
    int shmid = shm_get(123456, (void**) &Account, 3*sizeof(int));
    if (shmid < 0) {
        perror("shm failed");
    }
    Account[0] = 10000;
    Account[1] = 10000;
    Account[2] = 10000;

    /* synchronize with Proc2, Proc3 and Proc4 (4 process 4 way synchronization) */
    for (i = 0; i < 1000; i++)
    {
        sem_signal(sem1);
        sem_signal(sem1);
        sem_signal(sem1);

        internal_reg = Account[0];
        internal_reg = internal_reg - 200;
        Account[0] = internal_reg;

        /* same thing, except we're adding $100 to Account1 now... */
        internal_reg = Account[1];
        internal_reg = internal_reg + 200;
        Account[1] = internal_reg;

        if (i % 100 == 0 && i != 0) {
            printf("Account 0: $%i\n", Account[0]);
            printf("Account 1: $%i\n", Account[1]);
        }
        if (i == 300 || i == 600) {
            sleep(1);
        }

        sem_wait(sem2);
        sem_wait(sem3);
        sem_wait(sem4);
    }
    /* Here add a code that prints contents of each account
       and their sum after 100th, 200th, 300th, ...., and 1000th iterations */
}
/* in the code above include some wait and signal operations on semaphores.
   Do not over-synchronize. */
Here is the documentation for ssem and sshm:
/*
 * ssem.c
 *
 * Version 1.0.0
 * Date : 10 Jan 2002
 *
 */
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/types.h>
#include "ssem.h"

#define PERMS 0600

static struct sembuf op_lock[1] = {
    { 0, -1, 0 }
};

static struct sembuf op_unlock[1] = {
    { 0, 1, IPC_NOWAIT }
};

int sem_create(int key, int initval)
{
    int semid, i;
    semid = semget((key_t)key, 1, IPC_CREAT | PERMS);
    for (i = 0; i < initval; i++)
        semop(semid, &op_unlock[0], 1);
    return semid;
}

int sem_open(int key)
{
    int semid;
    semid = semget(key, 0, 0);
    return semid;
}

int sem_wait(int semid)
{
    return semop(semid, &op_lock[0], 1);
}

int sem_signal(int semid)
{
    return semop(semid, &op_unlock[0], 1);
}

int sem_rm(int semid)
{
    return semctl(semid, 0, IPC_RMID, 0);
}
/*
 * sshm.c
 *
 * Routines for simpler shared memory operations
 * Version : 1.0.0
 * Date : 10 Jan 2002
 *
 */
#include <sys/shm.h>
#include <sys/ipc.h>
#include <sys/types.h>
#include "sshm.h"

#define PERMS 0600

int shm_get(int key, void **start_ptr, int size)
{
    int shmid;
    shmid = shmget((key_t) key, size, PERMS | IPC_CREAT);
    (*start_ptr) = (void *) shmat(shmid, (char *) 0, 0);
    return shmid;
}

int shm_rm(int shmid)
{
    return shmctl(shmid, IPC_RMID, (struct shmid_ds *) 0);
}
After compiling Proc1.c with the -ggdb flag and running gdb I got the following:
Program received signal SIGSEGV, Segmentation fault.
0x10a64 in main () at Proc1.c:36
36          Account[0]=10000
Why would this cause a segmentation fault?
After changing the declaration of Account to
int *Account = 0;
and adding
printf("Account == %p\n", Account);
before Account[0] = 10000;
I get the following upon running Proc1:
Account == ffffffff
Bus error
In order to get more sensible results from gdb you should compile your program with the -ggdb option. This will then include debugging information (like line numbers) into your program.
What you are currently seeing is the memory address (0x10a64) of the program counter. This will not help you very much unless you can correlate the assembly instructions you find there with a part of your C program yourself.
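For example, matching the compile command in the header comment of Proc1.c, you might rebuild with:
gcc -ggdb -o Proc1 Proc1.c ssem.o sshm.o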
It looks like you are using shm_get properly. I think the library designer has made a terrible mistake in naming the function so similarly to shmget.
It's just as I thought. The Account pointer is ending up with an invalid value (0xffffffff, i.e. (void *)(-1)). The value (void *)(-1) generally indicates some sort of error, and it is explicitly mentioned in the manpage for shmat. This indicates that the shmat call inside the library failed. Here is how you can tell if it failed:
if (Account == (void *)(-1)) {
    perror("shmat failed");
}
Account[0] = 10000;
// ...
Now, why it failed is an interesting mystery. Apparently the shmget call succeeded.
Personally, I think System V IPC is basically deprecated at this point and you should avoid using it if you can.
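For reference, POSIX shared memory (shm_open plus mmap) is the usual modern replacement. A minimal sketch, with an arbitrary segment name of "/account_demo" (on older glibc you may need to link with -lrt):
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/account_demo";       /* placeholder name */
    size_t size = 3 * sizeof(int);

    /* Create (or open) the shared memory object and size it */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); exit(1); }
    if (ftruncate(fd, size) == -1) { perror("ftruncate"); exit(1); }

    /* Map it into the process; failure is reported as MAP_FAILED */
    int *Account = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (Account == MAP_FAILED) { perror("mmap"); exit(1); }

    Account[0] = Account[1] = Account[2] = 10000;
    printf("Account[0] = %d\n", Account[0]);

    munmap(Account, size);
    close(fd);
    shm_unlink(name);                          /* remove when no longer needed */
    return 0;
}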
Depending on your compiler and compiler options, you might encounter an aliasing problem because you are casting the address of your Account pointer. These oldish interfaces are not in phase with modern aliasing rules, meaning that the optimizer may assume that the value of Account doesn't change.
Also, you should make the argument for shm_get match the expected type as closely as possible. Try perhaps something like the following:
void volatile *shmRet;
int shmid = shm_get(123456, (void **) &shmRet, 3 * sizeof(int));
int *Account = (int *) shmRet;
I don't have the same architecture, so I don't know the exact prototype of your shm_get, but it is usually a bad idea to use fixed keys for this type of function. There should be some function that returns a key for your application to use.
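ftok() is the conventional way to derive such a System V IPC key from a pathname; a brief sketch (the path and project id are placeholders):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>

int main(void)
{
    /* Derive a key from an existing file and a one-character project id */
    key_t key = ftok("/tmp", 'A');    /* placeholder path and project id */
    if (key == (key_t)-1) {
        perror("ftok");
        exit(1);
    }
    printf("key = 0x%lx\n", (unsigned long)key);
    return 0;
}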
