I have an OpenGL application that renders directly to the framebuffer.
If I start the application from the terminal, at times I see a glimpse of the cursor flashing behind my application. Likewise, if I start it from inside a terminal emulator in X, I get glimpses of the mouse moving behind it when I move the mouse around.
My application currently renders at 45 fps, so a low frame rate shouldn't be the issue.
I notice that when X starts it seems to clear the shell before it begins rendering, but when you close the X server later the diagnostic output that had been sent to stdout comes back, so I doubt it is issuing a clear command.
How is what I want to do accomplished? Can you simply render to fb1, tell the video output to display from fb1 so you're not fighting over fb0, and then return the display to fb0 when your application dies?
EDIT:
For clarification, the app is being developed for an embedded system on an ARM SoC (Freescale i.MX6) with the Vivante GPU and running on ArchLinux ARM.
I have an OpenGL application that renders directly to the framebuffer.
Just for clarification: you're doing this using KMS + DRI/DRM + GBM, right?
If I start the application from the shell, at times I will see a glimpse of the cursor flashing behind my application.
You're mixing up a few terms here. A shell is the program that provides you with a command line, job control, stdio redirection, scripting support and so on. What you are probably referring to is the Linux kernel virtual terminal console (Linux VT).
When starting a program that directly uses a framebuffer device, you have to put the virtual terminal your process uses into graphics mode (via the KDSETMODE ioctl).
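A minimal sketch of that, assuming the process is attached to the VT it should own (error handling mostly omitted):

    #include <fcntl.h>
    #include <linux/kd.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        /* /dev/tty is the controlling terminal; when launched from a
           VT, that is the VT itself. */
        int tty = open("/dev/tty", O_RDWR);
        if (tty < 0 || ioctl(tty, KDSETMODE, KD_GRAPHICS) < 0) {
            perror("KDSETMODE");
            return 1;
        }

        /* ... draw to the framebuffer here; the kernel console no
           longer paints its cursor or text over it ... */
        sleep(5);

        /* Always restore text mode, or the VT is left unusable. */
        ioctl(tty, KDSETMODE, KD_TEXT);
        close(tty);
        return 0;
    }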
Likewise, if I start it from inside a terminal emulator in X, I get glimpses of the mouse moving behind if I move the mouse around.
When started from an X11 environment, the X11 server is the exclusive owner of the VT and its graphics mode. All graphics operations must go through the X11 server. As far as systems design is concerned, any program trying to touch a fbdev it doesn't own should be shot in the face (immediately be sent a SIGSEGV). Don't do it. Period, no discussion. The X11 server owns the VT, and while that VT is active, the fbdev as well.
What you can do instead is allocate a separate VT for your program and let it use that. However, you will then get graphical output only when the X11 server is not running and the console is switched to your program's VT.
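A hedged sketch of allocating and switching to a fresh VT with the VT_OPENQRY/VT_ACTIVATE ioctls; device paths and required permissions vary by system:

    #include <fcntl.h>
    #include <linux/vt.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Ask the kernel for the first free VT (opening /dev/tty0
           typically requires root or the right group). */
        int ctl = open("/dev/tty0", O_RDWR);
        int vtno = 0;
        if (ctl < 0 || ioctl(ctl, VT_OPENQRY, &vtno) < 0 || vtno <= 0) {
            perror("VT_OPENQRY");
            return 1;
        }

        char path[32];
        snprintf(path, sizeof path, "/dev/tty%d", vtno);
        int vt = open(path, O_RDWR);

        /* Switch to our VT and wait until the switch has completed. */
        ioctl(ctl, VT_ACTIVATE, vtno);
        ioctl(ctl, VT_WAITACTIVE, vtno);

        /* ... then KDSETMODE KD_GRAPHICS on vt and draw, as above ... */

        close(vt);
        close(ctl);
        return 0;
    }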
Related
There are cases where you need to see the full log of a Linux kernel panic,
but this often can't be done through the regular terminal on the display.
I thought it should be done through the COM port, but I can't figure out:
How can I do that?
Why does it work well through the COM port but not through the terminal on my display?
UPD: I use a Debian-based custom Linux (4.9 kernel) with an HDMI display.
Related: How to get a Linux panic output to a USB serial console when system has also a display adapter
A simple directly connected console/VT uses the graphics card to convert character bytes (0-255) into display character cells; 80x25 is a common format. This is very simple and not hard for a crashing kernel to manage: just copy some memory around.
The graphical console is more complicated because the kernel now has to locate the font bitmap for a character and copy the bitmap onto the display. It also needs to handle scrolling with more memory copies, or IOCTL calls into the graphics driver, etc.
A console running in a Gnome or KDE GUI session is very complicated. The kernel isn't involved in drawing it at all and doesn't know how.
The more complex the output process is, the less likely that a kernel that is already crashing can manage it successfully.
Serial port output is once again very simple. A buffered UART makes it a bit more complicated, but if a crashing kernel wants to ignore that and simply push bytes out at a 9,600 baud line rate, any serial port will cope without needing buffers or interrupt management.
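In practice that means pointing the kernel console at the serial port on the kernel command line. A typical (illustrative) setting that keeps the local display as well:

    # Kernel command line: panic/oops output goes to the first serial
    # port at 9,600 baud (8N1) in addition to the local display.
    console=ttyS0,9600n8 console=tty0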
Using the lisp implementation of the X11 protocol, get-overlay-window freezes when no compositor is running. If I kill the lisp process, the xid is printed out.
This also freezes my lisp window manager running in another lisp thread, though in the same process. Basically X acts like it's been grabbed, so thank god for Ctrl-Alt-F1.
Some previous questions about composite show others running into similar problems when no compositor is running.
I'm guessing that maybe the server is waiting for some sort of out-of-protocol authorization or something? Or some particular sequence of events has to be completed?
Having access to the overlay window when another compositor is active isn't helpful for writing a compositor!
Apparently I had a reading comprehension fail with the protocol description, or they had a writing fail.
Asking Composite to redirect windows automatically ensures the windows' contents get drawn. It does not ensure they get drawn to the overlay! Nor does the overlay appear to be transparent. So even with all windows set to update automatically, once the overlay window gets mapped by the call that fetches its XID, it blocks you from seeing any further updates to the screen and blocks all input.
That makes the overlay, in a sense, not very useful - or makes the request for automatic updates of redirected windows not very useful. Either way, it seems we will have to paint every single pixel ourselves, even for the windows we're not interested in.
Maybe it's just a driver thing?
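For comparison, here is roughly what the equivalent calls look like from C with Xlib, including the XFixes trick commonly used to stop the overlay from swallowing input. This is a sketch, not tested against CLX:

    /* Build: cc overlay.c -lX11 -lXcomposite -lXfixes */
    #include <X11/Xlib.h>
    #include <X11/extensions/Xcomposite.h>
    #include <X11/extensions/Xfixes.h>
    #include <X11/extensions/shape.h>
    #include <unistd.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        Window root = DefaultRootWindow(dpy);

        /* Redirect children of the root; Automatic means the server
           still paints them itself. */
        XCompositeRedirectSubwindows(dpy, root, CompositeRedirectAutomatic);

        /* Map the overlay. This is the request that appears to hang
           if it never gets flushed to the server. */
        Window overlay = XCompositeGetOverlayWindow(dpy, root);

        /* Make the overlay invisible to input: default bounding
           region, empty input region. Without this it eats every
           click and keypress. */
        XserverRegion empty = XFixesCreateRegion(dpy, NULL, 0);
        XFixesSetWindowShapeRegion(dpy, overlay, ShapeBounding, 0, 0, 0);
        XFixesSetWindowShapeRegion(dpy, overlay, ShapeInput, 0, 0, empty);
        XFixesDestroyRegion(dpy, empty);

        XFlush(dpy);
        pause();  /* keep the connection alive while testing */
        return 0;
    }

The fact that the XID only appears when the lisp process dies suggests the request may be sitting in an unflushed output buffer; it might be worth checking whether your CLX calls force output before waiting for the reply.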
I'm working on a small C videogames library for the Raspberry Pi. I'm coding the input system from scratch and after reading and seeing some examples about raw input reading, I got some doubts.
For mouse reading, I just use /dev/input/event1, I open() it as O_NONBLOCK, I read() input_event(s) and I also put the mouse reading in a separate pthread. Easy.
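The reading loop is essentially this (event1 being the mouse is just what my board exposes; error handling trimmed):

    #include <fcntl.h>
    #include <linux/input.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/input/event1", O_RDONLY | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }

        struct input_event ev;
        for (;;) {
            /* read() returns -1/EAGAIN when no event is pending. */
            if (read(fd, &ev, sizeof ev) == (ssize_t) sizeof ev) {
                if (ev.type == EV_REL)  /* relative mouse motion */
                    printf("rel axis %u: %d\n", ev.code, ev.value);
            }
            usleep(1000);  /* don't spin a full core while polling */
        }
    }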
For keyboard reading, I saw some examples that reconfigure stdin to O_NONBLOCK (using fcntl()), then save and reconfigure the keyboard termios attributes (ICANON, ECHO), and some examples also save and reconfigure the keyboard mode with ioctl(). What's the point of doing all that stuff instead of just reading /dev/input/event0 input_event(s) (the same way as the mouse)?
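The pattern in those examples boils down to something like this (my reconstruction; the names are mine):

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    static struct termios saved;

    /* Put the terminal on stdin into non-canonical, no-echo,
       non-blocking mode, remembering the old state for restore. */
    static void keyboard_setup(void)
    {
        tcgetattr(STDIN_FILENO, &saved);
        struct termios raw = saved;
        raw.c_lflag &= ~(ICANON | ECHO);  /* byte-at-a-time, silent */
        raw.c_cc[VMIN]  = 0;
        raw.c_cc[VTIME] = 0;
        tcsetattr(STDIN_FILENO, TCSANOW, &raw);
        fcntl(STDIN_FILENO, F_SETFL,
              fcntl(STDIN_FILENO, F_GETFL) | O_NONBLOCK);
    }

    static void keyboard_restore(void)
    {
        tcsetattr(STDIN_FILENO, TCSANOW, &saved);
    }

    int main(void)
    {
        keyboard_setup();
        for (;;) {
            char c;
            if (read(STDIN_FILENO, &c, 1) == 1) {
                if (c == 'q') break;
                printf("key byte: %d\n", c);
            }
        }
        keyboard_restore();  /* or the shell is left in raw mode */
        return 0;
    }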
Note that I understand what those functions do; I just don't understand why it should be better to do all that stuff instead of just reading input_event(s).
Reading stdin is not limited to reading locally connected keyboards (but it has other limitations making it mostly unsuitable for games). When reading stdin you could be reading keystrokes from a user logged in remotely, using a locally connected serial terminal, or using a terminal emulator (possibly operated from a remote X server).
For terminal-based games it might make sense to use stdin. But then it would probably be better to use GPM instead of reading /dev/input/event1 for the mouse. And possibly even better to use ncurses for everything.
Otherwise you might want to look at SDL, if not for using it directly, then at least for learning about different ways to read input. For example, SDL has full support for network transparency using X, which means a game can execute on one computer but be played on another.
Expanding on Fabel's answer - standard input and event1 are entirely different things. To start with, event1 does not have to be the mouse but can be another device depending on udev, the kernel version, the phase of the moon, etc. - on my (non Raspberry Pi) system it's the input device Sleep Button - which has a keyboard interface. In short, you cannot and should not assume it's the keyboard - or the only keyboard for that matter (for example, a YubiKey emulates a keyboard for its function but is rather useless as a gaming device, and while two keyboards are rarely connected to the same Raspberry Pi, I don't think it's a good idea to assume such a setup never happens).
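If you do go the evdev route anyway, at least ask each device what it is instead of hard-coding event1; something along these lines (a sketch):

    #include <fcntl.h>
    #include <linux/input.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        char path[32], name[256];
        /* Walk the event devices and print their self-reported names;
           pick yours by name/capabilities, never by index. */
        for (int i = 0; i < 32; i++) {
            snprintf(path, sizeof path, "/dev/input/event%d", i);
            int fd = open(path, O_RDONLY);
            if (fd < 0) continue;
            if (ioctl(fd, EVIOCGNAME(sizeof name), name) >= 0)
                printf("%s: %s\n", path, name);
            close(fd);
        }
        return 0;
    }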
Furthermore, input devices can typically be read only by a privileged user (on older/current systems) or by the user holding a 'seat' (on rootless X systems) - assuming you're not using a bleeding-edge system, this means your game can only be used by the root user, which is usually considered a bad idea.
Finally, it only allows the user to play via the evdev subsystem, which should probably be considered a Linux/X11 implementation detail. If you try to connect via the network (X11, VNC or whatever), or try to use an on-screen keyboard or any accessibility input program, you might run into problems.
On the other hand, what you can do is either use standard input directly, use a more portable library like readline (which should be sufficient for a roguelike), or use a graphics server (probably indirectly via a library like Qt or SDL, as X11 is not the nicest protocol to program against).
Reading from /dev/input/eventN works only on GNU/Linux.
Configuring and using stdin works on any system implementing POSIX.1-2008, specifically chapter 11: General Terminal Interface. So if you want portable code, like say to work on Linux and OS X without rewriting the input system, you'd do it this way.
I am developing an application for a Linux-based embedded system which writes directly to the framebuffer device of the Linux kernel. The writing works perfectly. But the problem happens when some other event demands the display (like plugging in a flash drive, or a kernel message). Every time that happens, the screen gets interrupted and unwanted text appears, erasing the previous graphics from the overlapped portion (everything else remains unchanged).
How can I get rid of this problem?
Add console=0 to the kernel command line. It disables both the kernel outputting anything to the console, and the console login. (For development purposes, I recommend having a separate boot option, so you can boot to a console.)
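For example, with an extlinux-style boot configuration you might keep two entries; the labels and paths here are illustrative, not from the question:

    # /boot/extlinux/extlinux.conf (illustrative)
    label app        # silent: nothing scribbles on the framebuffer
        kernel /boot/zImage
        append root=/dev/mmcblk0p2 console=0

    label devel      # normal console for development
        kernel /boot/zImage
        append root=/dev/mmcblk0p2 console=tty0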
Alternatively, have your application create a new virtual terminal for the framebuffer, like X does. This avoids the kernel (kernel console, really) scribbling text all over your framebuffer.
I have a WPF program that communicates with a specialized USB stick that is collecting data (in fact an ANT USB dongle). I noticed that the data collection simply stopped after a few hours. The reason was evident in the windows logs (system) where at the exact time the program stopped getting data, I see:
The system is entering sleep
Sleep Reason: System Idle
Questions
1. How do I programmatically prevent Windows from going to sleep so that I can continue to gather data?
2. Stepping backwards for the big picture view... What's going on? Why does the computer going to sleep affect my program? Or is it just affecting the USB stick? Is it necessary to prevent sleep, or should I do something else instead?
Einstein's answer is tantalizingly close. I just can't seem to get SetThreadExecutionState working in my C#/WPF program and can't find many examples or discussions of it. Does it need to be called more than once? If so, how? Is there an event I receive that tells me to call it, or should I call it every so often (every 5 minutes?) as suggested in: http://stackoverflow.com/questions/5870280/setthreadexecutionstate-is-not-working-when-called-from-windows-service
For now, I'm just going into ctrl panel -> power options and preventing sleep but it would sure be nice to have an elegant solution. Even on my own computer, I don't want to mess with the sleep settings. It's too hard to remember to set them back again!
You can prevent the computer from entering low power modes (sleep/suspend) by using the SetThreadExecutionState function.
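The call itself is a one-liner; here is a C sketch (the same flags apply to a C# P/Invoke). The detail that trips people up: with ES_CONTINUOUS the setting stays in effect until your thread clears it or exits, so you do not need a timer that re-calls it.

    /* Windows only; links against kernel32. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Keep the system (not the display) awake while collecting
           data. ES_CONTINUOUS makes it persist until cleared below
           or until this thread terminates. */
        if (!SetThreadExecutionState(ES_CONTINUOUS | ES_SYSTEM_REQUIRED)) {
            fprintf(stderr, "SetThreadExecutionState failed\n");
            return 1;
        }

        /* ... collect data from the USB stick ... */
        Sleep(10000);

        /* Let the machine sleep normally again. */
        SetThreadExecutionState(ES_CONTINUOUS);
        return 0;
    }

If you also need the screen to stay on, add ES_DISPLAY_REQUIRED to the first call.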
As far as why going into low power mode is interrupting your data collection - well Windows suspends all processes in these modes and USB ports enter low power mode which likely means your USB device will not have power either. It's by design. After all, the whole reason we want our computers to go to sleep is so that the battery is not drained.
You can apply the above responses, or simply go into Control Panel -> Power Options and modify the settings so your system never goes to sleep.
To 1): You can use SystemParametersInfo with one of the power-related values on Windows versions before Vista to turn power-saving settings on/off. Starting with Vista, you should register for one of the power events instead, and the OS will notify you when it needs to for the event you request.
To 2): If the OS shuts down, the hardware it's managing shuts down. What else would you expect to happen? If the OS runs the USB device driver, which runs the USB device, what would you think would happen if the OS goes to sleep? The USB device begins running itself instead without the driver?