How can I turn libavformat error messages off? (C)

By default, libavformat writes error messages to stderr, like:
Estimating duration from bitrate, this may be inaccurate
How can I turn it off? Or, better yet, pipe it to my own logging function?
Edit: Redirecting stderr somewhere else is not acceptable, since I need it for other logging purposes; I just want libavformat to stop writing to it.

Looking through the code, it appears you can change the behavior by writing your own callback function for the av_log function.
From the description of this function in libavutil/log.h:
Send the specified message to the log if the level is less than or equal
to the current av_log_level. By default, all logging messages are sent to
stderr. This behavior can be altered by setting a different av_vlog callback
function.
The API provides a function that will allow you to define your own callback:
void av_log_set_callback(void (*)(void*, int, const char*, va_list));
In your case, you could write a simple callback function that discards the messages altogether (or redirects them to a dedicated log, etc.) without tainting your stderr stream.
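For instance, a minimal sketch of such a callback; the function name, the AV_LOG_ERROR threshold and the stdout sink are placeholders for whatever your own logger needs:
extern "C" {
#include <libavutil/log.h>
}
#include <cstdarg>
#include <cstdio>

// Placeholder callback: discard anything less severe than an error and
// hand the rest to your own logging sink instead of stderr.
static void my_av_log_callback(void *avcl, int level, const char *fmt, va_list args)
{
    if (level > AV_LOG_ERROR)
        return; // drop info/warning chatter entirely
    char line[1024];
    std::vsnprintf(line, sizeof(line), fmt, args);
    std::fputs(line, stdout); // stand-in for your own logging function
}

int main()
{
    av_log_set_callback(my_av_log_callback);
    // ... use libavformat as usual; nothing is written to stderr ...
    av_log_set_callback(av_log_default_callback); // optional: restore the default handler
    return 0;
}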

Give av_log_set_level(level) a try!

Include this header file:
#include <libavutil/log.h>
Adding this line will disable the log:
av_log_set_level(AV_LOG_QUIET);
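For context, a minimal sketch of where the call typically goes (the input file name is a placeholder); AV_LOG_QUIET suppresses everything, while a level such as AV_LOG_ERROR would keep only errors:
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/log.h>
}

int main()
{
    av_log_set_level(AV_LOG_QUIET); // call this before the libav* calls you want silenced

    AVFormatContext *ctx = nullptr;
    if (avformat_open_input(&ctx, "input.mp4", nullptr, nullptr) == 0) {
        // ... demux as usual; no "Estimating duration..." messages appear on stderr ...
        avformat_close_input(&ctx);
    }
    return 0;
}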

You can redirect them to a custom file; note that this redirects the entire cerr stream:
#include <iostream>
#include <fstream>
using namespace std;
int main()
{
    ofstream file("file.txt");
    streambuf *old_cerr = cerr.rdbuf();
    cerr.rdbuf(file.rdbuf());
    cerr << "test test test" << endl; // writes to file.txt
    // ...
    cerr.rdbuf(old_cerr); // restore original cerr
    return 0;
}
Edit: Given the edit to the question, be warned that the code above redirects the entire cerr stream to file.txt.
I'm not familiar with libavformat, but if its code cannot be changed, you can temporarily redirect cerr to a file before calling the library's API and restore the original cerr afterwards. (Admittedly an ugly workaround; a sketch follows below.)
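A rough sketch of that temporary redirection, wrapped in a small guard class so the original buffer is always restored (the class name and file name are made up; note that this only affects the C++ cerr stream, not output written directly to the C stderr stream):
#include <fstream>
#include <iostream>

// Redirects std::cerr into a file for the lifetime of the object,
// then restores the original stream buffer in the destructor.
class CerrRedirect {
public:
    explicit CerrRedirect(const char *path)
        : file_(path), old_buf_(std::cerr.rdbuf(file_.rdbuf())) {}
    ~CerrRedirect() { std::cerr.rdbuf(old_buf_); }
private:
    std::ofstream file_;
    std::streambuf *old_buf_;
};

int main()
{
    {
        CerrRedirect guard("libav_log.txt");
        // call_the_library_here(); // anything it writes to std::cerr lands in the file
    }
    std::cerr << "back on the original cerr" << std::endl;
    return 0;
}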

Related

How to initialize stdout/stderr in a subsystem=windows program WITHOUT calling AllocConsole()?

So when trying to use the stdin/stdout/stderr streams in a Windows GUI app, one typically has to call AllocConsole (or AttachConsole) in order to initialize those streams for use. There are lots of posts on here on what you need to do AFTER calling AllocConsole (i.e. use freopen_s on the respective streams, etc).
I have a program where I want to redirect stdout and stderr to an anonymous pipe. I have a working example where I call:
AllocConsole();
FILE* fout;
FILE* ferr;
freopen_s(&fout, "CONOUT$", "r+", stdout);
freopen_s(&ferr, "CONOUT$", "r+", stderr);
HANDLE hreadout;
HANDLE hwriteout;
HANDLE hreaderr;
HANDLE hwriteerr;
SECURITY_ATTRIBUTES sao = { sizeof(sao),NULL,TRUE };
SECURITY_ATTRIBUTES sae = { sizeof(sae),NULL,TRUE };
CreatePipe(&hreadout, &hwriteout, &sao, 0);
CreatePipe(&hreaderr, &hwriteerr, &sae, 0);
SetStdHandle(STD_OUTPUT_HANDLE, hwriteout);
SetStdHandle(STD_ERROR_HANDLE, hwriteerr);
This snippet successfully sets stdout and stderr to the write ends of the anonymous pipes and I can capture the data.
However, calling AllocConsole will spawn a conhost.exe - this is the actual black console window that pops up on the screen. I don't have a use for this and, most importantly, I would like to avoid the creation of a child conhost.exe process under my program.
So the question is, how can I fool Windows into thinking it has a console attached/manually setup the initial stdout and stderr file streams so that I can then redirect them as I have done already? I have looked at the AllocConsole call in a debugger as well as GetStdHandle and SetStdHandle to try and get a sense of what is going on, but my RE skills are lacking.
Without AllocConsole, the freopen_s calls fail with error 6, Invalid Handle. GetStdHandle also returns a NULL handle. Calling SetStdHandle succeeds (based on its return code and GetLastError), but this doesn't appear to actually get things set up where I need them, as I don't receive any output in my pipe.
Any ideas?
Use the SetStdHandle function to assign your pipe HANDLE values to STD_OUTPUT_HANDLE and STD_ERROR_HANDLE, as shown in the sketch below.
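For illustration, an untested sketch (assuming the MSVC CRT; error handling omitted) of how SetStdHandle can be combined with the CRT side without AllocConsole: reopen the stream on NUL so _fileno() becomes valid, then point its descriptor at the pipe via _open_osfhandle and _dup2:
#include <windows.h>
#include <fcntl.h>
#include <io.h>
#include <stdio.h>
#include <stdint.h>

int main()
{
    HANDLE hRead = NULL, hWrite = NULL;
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
    CreatePipe(&hRead, &hWrite, &sa, 0);

    // Win32 side: make the pipe's write end the process's standard output handle.
    SetStdHandle(STD_OUTPUT_HANDLE, hWrite);

    // CRT side: give stdout a valid FILE first, then swap its descriptor to the pipe.
    FILE *dummy = NULL;
    freopen_s(&dummy, "NUL", "w", stdout);
    int fd = _open_osfhandle((intptr_t)hWrite, _O_WRONLY | _O_TEXT);
    _dup2(fd, _fileno(stdout));
    setvbuf(stdout, NULL, _IONBF, 0); // unbuffered, so the pipe sees data immediately

    printf("captured through the pipe\n");
    // ... read from hRead on another thread; repeat the same steps with
    // STD_ERROR_HANDLE / stderr for the error stream ...
    return 0;
}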

check if fclose() fails and return specific error

So, I tried to find an answer to this question, without success.
I know what to do and how to manage such a case afterwards (flushing, setting the pointer to NULL, etc.), but checking for it is the tricky part for me.
So, basically:
I open some file (successfully); let's call the pointer file.
After some code runs...
fclose(file);
Now, how can I check after closing the file (checking before is also an option) that it really happened?
What is the condition? I'm required to handle this case by printing specific errors.
You can use the following snippet:
#include <errno.h>
#include <stdio.h>
#include <string.h>

if (fclose(file) != 0)
{
    fprintf(stderr, "Error closing file: %s\n", strerror(errno));
}
From the man pages, we see that an error in closing a file using fclose() sets the global variable errno to a value indicating what error occurred. The function strerror() takes this value of errno and outputs a string to help indicate what the error actually was.

Suppress messages from underlying C-function in R

In a script I often call the function Rcplex(), which prints "CPLEX environment opened" and "Closed CPLEX environment" to the console. Since the function is called rather frequently, it prints this very often, which is quite annoying. Is there a way to suppress this? I tried sink(), suppressWarnings()/suppressMessages() and invisible(capture.output()), but none of these did the trick. I proceeded to check the code of Rcplex() and found where the printing to the console happens: Rcplex() calls an underlying C function (Rcplex.c). In the code of Rcplex.c I located the commands which cause the printing:
REprintf("CPLEX environment opened\n");
REprintf("Closed CPLEX environment\n");
Is there a way to capture the output from REprintf() so that it does not get printed to the R-console? One way would obviously be to mess around with the Rcplex.c file and delete the corresponding lines. However, this would not be a very clean solution, which is why I'm asking for another way to capture the output from C-functions.
You had problems using sink() and capture.output() normally because sink() does not redirect output from REprintf, as we see in comments from the source code for REprintf:
/* =========
* Printing:
* =========
*
* All printing in R is done via the functions Rprintf and REprintf
* or their (v) versions Rvprintf and REvprintf.
* These routines work exactly like (v)printf(3). Rprintf writes to
* ``standard output''. It is redirected by the sink() function,
* and is suitable for ordinary output. REprintf writes to
* ``standard error'' and is useful for error messages and warnings.
* It is not redirected by sink().
However, we can use type = "message" to deal with this; from help("capture.output"):
Messages sent to stderr() (including those from message, warning and
stop) are captured by type = "message". Note that this can be "unsafe" and should only be used with care.
First I make a C++ function with the same behavior you're dealing with:
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericVector example_function(NumericVector x) {
    REprintf("CPLEX environment opened\n");
    REprintf("Closed CPLEX environment\n");
    // As mentioned by Dirk Eddelbuettel in the comments,
    // Rcpp::Rcerr goes into the same REprintf() stream:
    Rcerr << "Some more stuff\n";
    return x;
}
If I call it from R normally, I get:
example_function(42)
CPLEX environment opened
Closed CPLEX environment
Some more stuff
[1] 42
However, I could instead do this:
invisible(capture.output(example_function(42), type = "message"))
[1] 42
And while the output is printed to the console, the messages are not.
Warning
I would be remiss if I didn't mention the warning from the help file I quoted above:
Note that this can be "unsafe" and should only be used with care.
The reason is that this will eliminate all output from actual errors as well. Consider the following:
> log("A")
Error in log("A") : non-numeric argument to mathematical function
> invisible(capture.output(log("A"), type = "message"))
>
You may therefore want to send the captured output to a text file instead, in case you have to diagnose something that went wrong. For example:
invisible(capture.output(log("A"), type = "message", file = "example.txt"))
Then I don't have to see the message in the console, but if I need to check example.txt afterward, the message is there:
Error in log("A") : non-numeric argument to mathematical function

embedded perl in C, perlapio - interoperability with STDIO

I just realized that the PerlIO layer seems to do something more than just (more or less) wrap the stdio.h functions.
If I try to use a file descriptor resolved via PerlIO_stdout() and PerlIO_fileno() with functions from stdio.h, it fails.
For example:
PerlIO* perlStdErr = PerlIO_stderr();
fdStdErrOriginal = PerlIO_fileno(perlStdErr);
relocatedStdErr = dup(fdStdErrOriginal);
_write(relocatedStdErr, "something", 8); //<-- this fails
I've tried this with VC10. The embedded Perl program is executed from a different context, so it's not possible to use PerlIO from the context where the write to relocatedStdErr is performed.
For the curious: I need to execute a Perl script and forward the output of the script's stdout/stderr to a log, whilst keeping the ability to write on stdout myself. Moreover, this should work platform-independently (Linux, Windows console application, Win32 desktop application). Simply forwarding stdout/stderr doesn't work in Win32 desktop applications since there is none ;) - you need to use Perl's stdout/stderr.
Needed solution: be able to write to a file handle (or descriptor) derived from PerlIO without using the PerlIO stack.
EDIT - my solution:
As Story Teller pointed out, PerlIO_findFILE did the trick.
So here is an excerpt of the code; see the comments inside for descriptions:
FILE* stdErrFILE = PerlIO_findFILE(PerlIO_stderr()); // convert Perl's stderr to a stdio FILE handle
fdStdErrOriginal = _fileno(stdErrFILE); // get descriptor using MSVC
if (fdStdErrOriginal >= 0)
{
    relocatedStdErr = _dup(fdStdErrOriginal); // relocate stdErr for external writing using MSVC
    if (relocatedStdErr >= 0)
    {
        if (pipe(fdPipeStdErr) == 0) // create pipe for forwarding stdErr - USE PERL's IO since the win32 subsystem (non-console) "_pipe" doesn't work
        {
            if (dup2(fdPipeStdErr[1], fdStdErrOriginal) >= 0) // hang pipe on stdErr - USE PERL's IO (since it's created by perl)
            {
                close(fdPipeStdErr[1]); // close the now duplicated writer on stdErr for further usage - USE PERL's IO (since it's created by perl)
                // "StreamForwarder" creates a thread that catches/reads the pipe's input and forwards it to the processStdErrOutput function (using the PerlIO)
                stdErrForwarder = new StreamForwarder(fdPipeStdErr[0], &processStdErrOutput, PerlIO_stderr());
                return relocatedStdErr; // return the relocated stdErr to be able to '_write' onto it
            }
        }
    }
}
...
...
_write(relocatedStdErr, "Hello Stackoverflow!", 20); //that works :)
One interesting thing that I actually don't understand is that the Perl documentation says it's necessary to #define PERLIO_NOT_STDIO 0 to be able to use PerlIO_findFILE(). But for me it works fine without it, and I want to use PerlIO and stdio together anyway. That's a point where I haven't figured out what is going on.

Does the libcurl library give any way to determine which response header came from which command?

Background:
I'm working on my first C program with the library and I need to gather responses from each command sent to a SMTP server.
I've gotten as far as sending commands to the SMTP server and printing the response headers using curl_easy_setopt(curl_handle, CURLOPT_HEADERFUNCTION, parse_head), but I'm using the multi-threaded options, and it is not at all clear, when I get a response, which command caused it. I am assuming that responses will not necessarily be received in the same order the commands were sent. Is that correct?
Making it more difficult, since the library handles some calls (like setting up the initial connection) without my explicit request, I would need to handle more headers than explicit requests. That would be predictable and repeatable, but definitely adds an extra level of complexity.
Question:
Is there a "good" way to determine exactly which command resulted in which response header using multi thread?
Also, moderately related, does the library support returning the numeric return code or do I have to manually parse that out? Looking through the library, it seems that it doesn't. I just want to be sure.
I am assuming that they will not necessarily be received in the same order sent. Is that correct?
Yes, it is. That's how multithreading works.
Is there a "good" way to determine exactly which command resulted in which response header using multi thread?
Yes. You can set user data (context info, whatever you call it) using the CURLOPT_HEADERDATA option - this will be passed in as the 4th argument of your header function. So you can write code like this:
CURL *hndl = curl_easy_init();
// ...
curl_easy_setopt(hndl, CURLOPT_HEADERFUNCTION, parse_head);
curl_easy_setopt(hndl, CURLOPT_HEADERDATA, some_pointer_that_identifies_the_thread);
// ...
size_t parse_head(char *buf, size_t size, size_t nmemb, void *context)
{
    // context will be the pointer identifying the thread
    return size * nmemb; // tell libcurl the whole block was handled
}
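For illustration, a self-contained sketch in which each easy handle carries its own context struct; the struct, its field and the URL are made up for this example:
#include <curl/curl.h>
#include <cstdio>

// Hypothetical per-handle context: remembers which logical command or
// connection a given response line belongs to.
struct transfer_ctx {
    int transfer_id;
};

static size_t parse_head(char *buffer, size_t size, size_t nitems, void *userdata)
{
    transfer_ctx *ctx = static_cast<transfer_ctx *>(userdata);
    std::printf("[transfer %d] %.*s", ctx->transfer_id, (int)(size * nitems), buffer);
    return size * nitems; // tell libcurl the data was consumed
}

int main()
{
    transfer_ctx ctx = { 1 };
    CURL *hndl = curl_easy_init();
    curl_easy_setopt(hndl, CURLOPT_URL, "smtp://mail.example.com"); // placeholder URL
    curl_easy_setopt(hndl, CURLOPT_HEADERFUNCTION, parse_head);
    curl_easy_setopt(hndl, CURLOPT_HEADERDATA, &ctx);
    curl_easy_perform(hndl);
    curl_easy_cleanup(hndl);
    return 0;
}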
does the library support returning the numeric return code or do I have to manually parse that out?
Yes, it does:
long httpStatus;
curl_easy_getinfo(hndl, CURLINFO_RESPONSE_CODE, &httpStatus);
if (200 <= httpStatus && httpStatus < 300) {
    // HTTP 2XX OK
} else {
    // Error (4XX, 5XX) or redirect (3XX)
}
