How to stop showing input items in C

I take some elements for a matrix and just want to show the output matrix, hiding the input elements as they are typed. Could you help please? I have tried as below:
#include <stdio.h>

int main() {
    int mat[100][100];
    int row, column, i, j;

    printf("enter how many rows and columns you want:\n\n");
    scanf("%d", &row);
    scanf("%d", &column);
    printf("enter the matrix:");
    for (i = 0; i < row; i++) {
        for (j = 0; j < column; j++) {
            scanf("%d", &mat[i][j]);
        }
        printf("\n");
    }
    for (i = 0; i < row; i++) {
        for (j = 0; j < column; j++) {
            printf("%d \t", mat[i][j]);
        }
        printf("\n");
    }
}

Actually it's not compiler-dependent, but platform-dependent.
The thing you are looking for is called termcap, for "terminal capability".
It basically allows you to configure your terminal, but it's not necessarily simple, as you need to understand a bit about how a terminal works.
This link should interest you if you are working on Linux:
http://man7.org/linux/man-pages/man3/termios.3.html
I'm not sure on this point, but I think there's a library that allows you to use the same code on Linux and Windows.
Sorry not to be more precise; it's been a long time since I last played with this.

Use the tcgetattr() function on the serial/usb port used by your terminal to get the current setup of the terminal driver.
Use the tcsetattr() function to update the serial/usb port used by your terminal to turn off the echo feature of the terminal driver.
Use escape sequences to move the cursor around on the terminal, change colors, etc.
Be sure to save the original settings returned from the call to tcgetattr() so you can easily restore the terminal driver to its original settings.
Read the man pages for tcgetattr(3) and tcsetattr(3) for all the details of those functions. The details can be found online at: http://man7.org/linux/man-pages/man3/termios.3.html
Read the man page for console_codes(4). The details can be found online at: http://man7.org/linux/man-pages/man4/console_codes.4.html

Related

How to display a moving array in the output console?

Is there a better way than what I have done below?
system("cls") does the job, but clearing the screen will later mess with whatever else I want to display; the other negative side effect is the annoying blinking.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>    // time()
#include <unistd.h>  // sleep(); on Windows use <windows.h> and Sleep()

int a[5] = {1, 0, 1, 1, 0};
int b[5] = {0, 0, 1, 1, 1};

int main()
{
    srand(time(NULL));
    while (1) {
        // display the arrays
        for (int i = 0; i <= 4; i++) {
            printf("%d ", a[i]);
        }
        printf("\n");
        for (int i = 0; i <= 4; i++) {
            printf("%d ", b[i]);
        }
        // shift every cell by 1 (stop at 1 so a[-1] is never read)
        for (int i = 4; i >= 1; i--) {
            a[i] = a[i - 1];
            b[i] = b[i - 1];
        }
        sleep(1);
        system("cls");
        // keep generating 0s and 1s for the 1st cell of the arrays
        a[0] = rand() % 2;
        b[0] = rand() % 2;
    }
}
Without platform/terminal-specific code that is not possible, as there is no platform-independent way to place the cursor or clear the screen.
The best you can do that is largely platform-independent is to return to the start of the current line, or backspace on the current line. That is to say, you can only move the cursor backward on the current cursor line.
What you can do is:
Replace printf("\n"); with printf("\t");
Replace system("cls"); with:
printf("\r");
fflush(stdout);
The output will be on one line with a TAB separation, and the line will be overwritten on each iteration.
1 1 1 0 1 1 0 1 0 0
Failing that, you can:
Use a platform-independent console library such as ncurses,
On Windows, use the native Windows Console API,
Where supported, use ANSI escape sequences.
The last option is simplest, and while for a long time it was not supported on Windows, Windows 10 now supports ANSI ESC sequences, so there are few reasons not to use it for this simple screen handling.
For example ESC[0;0H moves the cursor to the top-left. In this case you would simply replace the "cls" with:
printf( "\x1b[0;0H" ) ;
Note that in this case you also need either a newline or fflush(stdout) before the sleep() to ensure the second line is output before the clear screen:
printf("\n"); // Force output flush
sleep(1);
printf( "\x1b[0;0H" ) ; // Home cursor
If you have other content on the screen before this and you don't want to redraw everything, you could move the cursor by:
printf("\n"); // Force output flush
sleep(1);
printf( "\x1b[2A" ) ; // Cursor Up two lines

Out of memory kill

I have a problem with the creation of a very large matrix on a Slurm cluster (out-of-memory killed). How can I fix the problem?
The following code is the part about allocating the matrix:
double **matrix;
int rows = 30000;
int cols = 39996;

matrix = (double**)malloc(sizeof(double*) * rows);
for (int i = 0; i < rows; i++)
    matrix[i] = (double*)malloc(sizeof(double) * cols);

for (int i = 0; i < rows; i++)
    for (int j = 0; j < cols; j++)
        matrix[i][j] = 1;
These values (rows, cols) are an example, because I can also have larger values.
The following code is the part about deallocation:
for (int i = 0; i < 30000; i++)
    free(matrix[i]);
free(matrix);
This is my output:
Slurmstepd: error: Detected 1 oom-kill event(s) in step 98584.0 cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
srun: error: lab13p1: task 1: Out Of Memory
Change the declaration of matrix to a pointer-to-pointer (maybe it's a typo):
double **matrix;
You should verify the return value of the malloc function, especially with such a big matrix.
Do not cast the result of malloc (see "Do I cast the result of malloc?"):
matrix = malloc(sizeof(double*) * rows);
if (!matrix) {
    // handle the error
}
for (int i = 0; i < rows; i++) {
    matrix[i] = malloc(sizeof(double) * cols);
    if (!matrix[i]) {
        // handle the error
    }
}
Looks like you're running on Slurm, so probably on a cluster with shared access.
It's possible that the cluster management limits the amount of memory per job and per CPU.
Check the memory limits in the docs for your cluster. You can also see some limits in the config with scontrol show config; look for settings like MaxMemPerCPU, MaxMemPerNode, DefMemPerCPU.
Maybe it just uses the last one to set a default per job, and you can change it in your launch commands (srun, sbatch) with --mem-per-cpu=8G.
You can also see what settings your jobs are running with using the squeue command. Look at the man page for the -o option; there you can add output fields (-o "%i %m" shows the job ID and the amount of memory it can use).

FreeRTOS freezes

I have a simple FreeRTOS program, and basically I need to calculate the time it takes to run for different numbers of iterations.
The problem is that it just freezes and doesn't execute anymore, even though the iterations are not complete yet, and I need it to successfully reach vTaskEndScheduler() so I can calculate the time correctly. What could be the reason?
Freeze screenshot
void Task1(void *pvParameters) {
    for (int i = 0; i < 100; i++)
    {
        printf("This is task 1 ");
        printf("Iteration number ");
        printf("%d", i);
        printf("\n");
        vTaskDelay(100);
    }
    vTaskEndScheduler();
}

void Task2(void *pvParameters) {
    for (int i = 0; i < 100; i++) {
        printf("This is task 2 ");
        printf("Iteration number ");
        printf("%d", i);
        printf("\n");
        vTaskDelay(100);
    }
    vTaskEndScheduler();
}

void main_blinky(void)
{
    enableFlushAfterPrintf();
    xTaskCreate(Task1, "t1", 100, NULL, 1, NULL);
    xTaskCreate(Task2, "t2", 100, NULL, 1, NULL);
    vTaskStartScheduler();
}
Just at a glance, without knowing anything about your system, I would GUESS printf() is causing the problem. How is it implemented? Is it thread safe? Do your tasks have enough stack space for its stack requirements, which can be substantial depending on the library you are using: https://freertos.org/Stacks-and-stack-overflow-checking.html
See notes in the (free but somewhat out of date) book (https://freertos.org/Documentation/RTOS_book.html) ref printf.
You must first choose an appropriate stack size for each task and be sure it fits in the heap available at run time. After that, I think the problem may be in printf() and the way it's implemented.

Is it possible to make a loading animation in a Console Application using C?

I would like to know if it is possible to make a loading animation in a Console Application that would always appear in the same line, like a flashing dot or a more complex ASCII animation.
Perhaps like this
#include <stdio.h>
#include <time.h>

#define INTERVAL (0.1 * CLOCKS_PER_SEC) // tenth of a second

int main(void) {
    int i = 0;
    clock_t target;
    char spin[] = "\\|/-"; // '\' needs an escape sequence

    printf(" ");
    while (1) {
        printf("\b%c", spin[i]);
        fflush(stdout);
        i = (i + 1) % 4;
        target = clock() + (clock_t)INTERVAL;
        while (clock() < target)
            ; // busy-wait until the interval has elapsed
    }
    return 0;
}
The more portable way would be to use termcap/terminfo or (n)curses.
If you send ANSI escape sequences, you assume the terminal is capable of interpreting them (and if it isn't, the result is a big mess).
Termcap is essentially a system that describes the capabilities of the terminal (if there's one connected at all).
These days one tends to forget, but the original tty didn't have a way to remove ink from the paper it typed its output on...
Termcap tutorials are easy enough to find on Google. Here's one in the GNU flavor: https://www.gnu.org/software/termutils/manual/termcap-1.3/html_mono/termcap.html (old, but should still be good)
(n)curses is a library that will allow you to control and build entire text-based user interfaces if you want to.
Yes, it is.
One line
First, if you want to animate only on one line, you can use putchar('\b') to remove the last character, or putchar('\r') to return to the beginning of the line and then rewrite it.
Example:
#include <stdio.h>
#include <unistd.h> // sleep()

int main() {
    int num;
    while (1) {
        for (num = 1; num <= 3; num++) {
            putchar('.');
            fflush(stdout);
            sleep(1);
        }
        printf("\r   \r"); // or printf("\b\b\b");
    }
    return 0;
}
But if you want to place it at a specific line, you can clear and re-draw every frame, or use libraries.
Clearing method
You can do this with system("clear") or with printf("\e[1;1H\e[2J").
After that you'll need to re-draw your frame.
But this is really unportable, and I don't recommend this method.
Other libraries
You can use ncurses.h or conio.h, depending on the system type.
Ncurses example:
#include <stdio.h>
#include <unistd.h>  // sleep()
#include <ncurses.h>

int main() {
    int row, col;
    char loading[] = "-\\|/";

    initscr();
    getmaxyx(stdscr, row, col);
    while (1) {
        for (int i = 0; i < 8; i++) {
            mvaddch(row/2, col/2, loading[i % 4]);
            refresh();
            sleep(1);
        }
    }
    endwin(); // unreachable here, but needed if the loop ever exits
    return 0;
}

Parallelized execution in nested for using Cilk

I'm trying to implement a 2D-stencil algorithm that manipulates a matrix. For each field in the matrix, the fields above, below, left and right of it are to be added and divided by 4 in order to calculate the new value. This process may be iterated multiple times for a given matrix.
The program is written in C and compiles with the cilkplus gcc binary.
**Edit: I figured you might be interested in the compiler flags:
~/cilkplus/bin/gcc -fcilkplus -lcilkrts -pedantic-errors -g -Wall -std=gnu11 -O3 `pkg-config --cflags glib-2.0 gsl` -c -o sal_cilk_tst.o sal_cilk_tst.c
Please note that the real code involves some pointer arithmetic to keep everything consistent. The sequential implementation works. I'm omitting these steps here to enhance understandability.
A pseudocode would look something like this (no edge-case handling):
for (int i = 0; i < iterations; i++) {
    for (int j = 0; j < matrix.width; j++) {
        for (int k = 0; k < matrix.height; k++) {
            result_matrix[j][k] = (matrix[j-1][k] +
                                   matrix[j+1][k] +
                                   matrix[j][k+1] +
                                   matrix[j][k-1]) / 4;
        }
    }
    matrix = result_matrix;
}
The stencil calculation itself is then moved to the function apply_stencil(...)
for (int i = 0; i < iterations; i++) {
    for (int j = 0; j < matrix.width; j++) {
        for (int k = 0; k < matrix.height; k++) {
            apply_stencil(matrix, result_matrix, j, k);
        }
    }
    matrix = result_matrix;
}
and parallelization is attempted:
for (int i = 0; i < iterations; i++) {
    for (int j = 0; j < matrix.width; j++) {
        cilk_for (int k = 0; k < matrix.height; k++) { /* <--- */
            apply_stencil(matrix, result_matrix, j, k);
        }
    }
    matrix = result_matrix;
}
This version compiles without errors/warnings, but just straight out produces a floating-point exception when executed. In case you are wondering: it does not matter which of the for loops is made into a cilk_for loop. All configurations (except no cilk_for) produce the same error.
The other possible method:
for (int i = 0; i < iterations; i++) {
    for (int j = 0; j < matrix.width; j++) {
        for (int k = 0; k < matrix.height; k++) {
            cilk_spawn apply_stencil(matrix, result_matrix, j, k); /* <--- */
        }
    }
    cilk_sync; /* <--- */
    matrix = result_matrix;
}
This produces 3 warnings when compiled: i, j and k appear to be uninitialized.
When I try to execute it, the function which executes the matrix = result_matrix; step appears to be undefined.
Now for the actual question: why and how does Cilk break my sequential code, and how can I prevent it from doing so?
The actual code is of course available too, should you be interested. However, this project is for a university class and therefore subject to plagiarism by other students who find this thread, which is why I would prefer not to share it publicly.
**UPDATE:
As suggested, I attempted to run the algorithm with only 1 worker thread, effectively making the Cilk implementation sequential. This did, surprisingly enough, work fine. However, as soon as I change the number of workers to two, the familiar errors return.
I don't think this behavior is caused by race conditions, though. Since the working matrix is swapped after each iteration and cilk_sync is called, there is effectively no critical section; no thread depends on data written by other threads in the same iteration.
The next step I will attempt is to try out other versions of the cilkplus compiler, to see if it's maybe an error on their side.
With regards to the floating point exception in a cilk_for, there are some issues that have been fixed in some versions of the Cilk Plus runtime. Is it possible that you are using an outdated version?
https://software.intel.com/en-us/forums/intel-cilk-plus/topic/558825
Also, what were the specific warning messages that were produced? There are some "uninitialized variable" warnings that occur with older versions of Cilk Plus GCC, which I believe were spurious.
The Cilk runtime uses a recursive divide and conquer algorithm to parallelize your loop. Essentially, it breaks the range in half, and recursively calls itself twice, spawning half and calling half.
As part of the initialization, it calculates a "grain size" which is the size of the minimum size it will break your range into. By default, that's loopRange/8P, where P is the number of cores.
One interesting experiment would be to set the number of Cilk workers to 1. When you do this, all of the cilk_for machinery is exercised, but because there's only 1 worker, nothing gets stolen.
Another possibility is to try running your code under Cilkscreen, the Cilk race detector. Unfortunately only the cilkplus branch of GCC generates the annotations that Cilkscreen needs. Your choices are to use the Intel compiler, or to try the cilkplus branch of GCC 4.9. Directions on how to pull down the code and build it are at the cilkplus.org website.