Saving session or process state in Linux [closed] - c

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I have to add functionality to my project for saving a session and later resuming it from the same position. So I need to know how to save the state of a process, then read it back from disk and resume it afterwards.
PS. The application is an interactive shell for a language, and I need to extend it to include a save-session and a resume-session command. But we want the code to be generic enough to be used in other places too.

This is quite a challenging task. The basic idea is to interrupt the process with a signal, at which point the OS puts the state of all registers (including the instruction pointer) in memory, where you can access them if your shell has spawned the process you want to interrupt.
For more detail, you can look at how checkpointing utilities handle that problem:
DMTCP
BLCR
CRIU

That is quite hard to answer in general other than "save the program's entire state to a file and load it from there on resume". That can be very tricky, because you might need to restore things like file handles and sockets, which may not even be possible if things have changed while the program was suspended. But it may suffice for you to support something less than that and only save the information necessary to approximate the previous state (e.g., save the list of program files to load, or save the user's command history and replay it, etc.).

Related

Pass from HC-05 Data Mode to HC-05 AT Command mode by writing some bunch of code [closed]

Closed 1 year ago.
My main problem is reducing the power consumption of the HC-05 Bluetooth module. Such a module draws far less current in AT command mode (between 1.5 and 3 mA). Since my project requires sending real-time data that changes every 15 seconds, I want to keep the module in AT command mode during the 15 seconds in which the HC-05 doesn't receive any data. I believe this kind of solution will dramatically reduce the module's energy use. In other words, instead of keeping the module in data mode permanently, it would spend 15 seconds in data mode, then 15 seconds in AT command mode, then return to data mode for another 15 seconds, and so on. I want to know: is there any solution for that? For example, writing some C code (since my HC-05 is directly connected to an STM32 board) to switch to AT command mode every 15 seconds.
Thanks in advance.
In the absence of any sample code, the answer to your only explicit question
I want to know is there any solution for that ?
is, "Yes."

How can I keep a file in memory between runs of the rust binary? [closed]

Closed 2 years ago.
When developing and testing I have a very large file that needs to be loaded into memory. This takes about 20 seconds each time I run the program.
Is there a way to keep the file in memory so that it doesn't need to be loaded each time?
It depends on what you mean by "loaded".
If you're referring to transferring the data from storage to RAM, that's more or less what your operating system's I/O cache should already be doing, assuming you have enough spare memory and you're not using methods that bypass that cache.
On Linux it's called the page cache, and you can check whether a file is in the cache via fincore. Or you can simulate the cache being cold via echo 3 > /proc/sys/vm/drop_caches, which drops its contents (requires root).
If you mean moving the bytes from the OS's cache into your application, then that shouldn't take much time as long as you use sufficiently large block sizes for the read calls, or use mmap. The latter is a double-edged sword: used incorrectly, it can actually cause slowdowns.
If you mean decoding the bytes into application-specific data structures, then that's not I/O but deserialization.

XOpenDisplay fails when run from daemon (C language) [closed]

Closed 6 years ago.
I'm working on a simple project on my Raspberry Pi which flashes some LEDs in different ways on certain system events (like disk reads, Ethernet traffic, processor overload), and these LEDs need to be shut off some time after the system goes idle (they vary their intensity when no system activity is detected).
To achieve idle detection I'm using XScreenSaver, and up to here everything works flawlessly.
But since my project needs to run as a daemon (/etc/init.d) with root privileges (because of the pigpio library), the communication with the X server (via XOpenDisplay) returns NULL every time, even when the system is ready and in the graphical interface. Running it manually from a terminal, everything works perfectly.
As far as my research goes, I've understood that it isn't possible to access the X server when there is no console available at boot time, and that there is no way around this for security reasons.
So I ask: how could I achieve idle-time detection in the simplest way possible? (I tried restarting the daemon and setting the DISPLAY variable in the start script; nothing seems to work.) I'm new to Linux development and can't figure out how to solve this properly.
Just answering my own question, in case anyone has the same issue.
Detecting system inactivity (idle) outside the X graphical interface is just a matter of watching USB keyboard/mouse activity: either by monitoring their IRQs (usually IRQ 1 and IRQ 12) in /proc/interrupts, or more easily (and supporting other USB input devices, even joysticks) by monitoring the "softirq" line of /proc/stat, whose counters increase whenever these devices produce input (mouse moved, key pressed or released).
This is easily achieved in C by periodically doing fopen/fread on these files and comparing the values with the old ones.
Thanks a lot to my intensive research on Linux internals, and to user Olaf, who has a huge knack for discovering the obvious.

Concurrency without threads [closed]

Closed 7 years ago.
I have a single monolithic daemon which performs multiple operations like interfacing with north-bound APIs, interfacing with south-bound APIs, executing state machines, and building internal databases.
I have now run into scalability issues, and I want to redesign the daemon so that all the actions inside it run concurrently. But using threads will complicate the logic, since I end up having to:
Add locks for synchronization.
Take proper care for future extensions.
Debug the timing issues.
So my question is: please suggest a design approach where I can still make the actions concurrent while avoiding the complexity of threads.
My application is currently in C. Any sample open-source (FOSS) project as an example would also help me understand the design approach.
Your only remaining options are:
Multi-process approach (i.e., spawn(), fork(), exec(), etc.). You still need to synchronize data, set up shared memory, etc.; threads would likely be easier.
Bite the bullet and live with no concurrency.
Become proficient in "lock free / lockless programming" approaches, which will still likely require atomic operations at the least.
Software sync, protection, and future-proofing/scalability are common problems in any production code that does non-trivial operations. Trying to avoid them outright usually indicates you have bigger concerns than avoiding threaded models.
This sounds like a perfect case for Go, which provides a concurrency model based on Hoare's Communicating Sequential Processes (CSP)*. Fortunately you don't have to use Go to get CSP. Martin Sustrik of ZeroMQ fame has given us libmill, which provides the Go concurrency primitives in C. Still, you might consider Go for its other features.
* Rather than try to describe CSP directly, I'd suggest you watch some of Rob Pike's excellent videos, like this one: Go Concurrency Patterns.
One way you can achieve asynchronous execution without running multiple threads is using the command pattern and a command queue. You can implement it in any programming language. Of course things will not really execute in parallel, but this is the way to do asynchronous programming in environments where resources are very limited. Robert C. Martin describes this really well in his video.
Example scenario:
You add an initial command to the queue (for the sake of the example, a single simple command).
You start an infinite loop which does only one thing:
Take the next command from the queue
Execute the taken command on the current thread
Our command (let's call it CheckButtonPressed) can do some simple check (for example, whether a button was clicked or some web service responded with some value):
If the check is negative, the command adds itself back to the queue (the queue is never empty, and we are checking all the time whether the button was pressed).
If the check is positive, we add to the queue a HandleButtonClick command that contains whatever code we want to run in response to the event.
When the HandleButtonClick command is processed, it executes whatever code is required, and at the end it adds CheckButtonPressed back to the queue so the button can be pressed again and the queue is never empty.
As you can see, except for the initial commands (the ones added to the queue before starting the processing loop), all other commands are added to the queue by other commands. Commands can be stateful, but there is no need for thread synchronization because there is only one thread.

A way for saving data after closing a C program [closed]

Closed 7 years ago.
I want to use the C programming language to build a small database for students. Only the administrator should enter, delete, or modify the data. I have developed this program in C, but when I close the program, the data is lost. At the beginning I thought of storing the data in files (like XML), but now I'm thinking of storing it on hardware (a hard disk or SD card). Is that possible? Any suggestions?
You could write a separate program which acts as a "server" - that is, it runs continuously and communicates only through some sort of network interface - named pipes or TCP/IP or whatever. When your "client" program starts up it attempts to establish a connection with the server - if it does not find the server it starts it up and then establishes communications with it. Once the "server" is found the "client" requests any saved data from the "server", which the "server" then returns if it has any. When the "client" decides to shut down it first communicates with the "server", passing any data it wishes to save to the "server" which then stores it (perhaps in a file, perhaps in memory - the implementation is up to you).
Best of luck.
