Simple C program labelled as virus - c

I was writing a program to read a string and print its alternate characters. I could compile it once, but building it made my antivirus report it as a virus (Gen:Variant.Graftor.74557).
Does anything in my code do something malicious that would cause it to be flagged as a virus?
#include <stdio.h>
#include <string.h>

void altchar()
{
    char a[50];
    printf("Enter a string");
    gets(a);
    int i = 0;
    for (i = 0; i < strlen(a); i += 2)
        printf("%c", *(a + i));
}

int main()
{
    altchar();
    return 0;
}
Other C programs compile smoothly, with no clashes with my AV.
Update:
My AV has no problem with the gets() function; other programs that use gets work smoothly.
Update 2:
By the way, I can run the program exactly once; then it is moved to quarantine.
The output is nothing, and the IDE tells me
Process returned 1971248979 (0x757EDF53) execution time: -0.000 s
For curious minds, I use Bitdefender Antivirus!

The "virus" detected is actually a placeholder name for the F-Secure generic trojan detector. It looks into programs for suspect behaviour. Unfortunately that kind of analysis is bound to sometimes producing false positives.
Maybe your harmless code matches some known malware behaviour on a byte code level? Try making a small change to your code and see if the problem goes away. Otherwise you can submit your program (info on the paged linked above) as a false positive to help them improve their database.

The classic methodology of antivirus software is to take a few well-chosen bytes from an infected file and use them as an identifier string. The scanner searches executables, and if those bytes match bytes in an executable (this check is usually done when an executable is opened or run, or during a full system scan), it marks it as a virus.
Fix your code (as per the comments) and recompile ... see what happens :)

Related

How can I debug the cause of a 0xc0000417 exit code

I get an exit error code 0xc0000417 (which translates to STATUS_INVALID_CRUNTIME_PARAMETER) in my executable (mixed Fortran/C) and try to find out what's causing it. It seems to occur when trying to write to a file which I infer because the file is created but there's nothing in it. Yet I have the suspicion that's not the /real/ cause. When I disable writing of that file, which is done from C code, it crashes when writing a different file, this time from Fortran code.
The unfortunate thing is: this only happens after the program (a CPU heavy calculation) has finished after having run for ~2-3 days. When I tried to shorten the calculation time by various means to facilitate debugging, the problem did not occur anymore. It almost seemed like the long runtime was crucial for triggering the problem.
I tried running it in Visual Studio 2015, but VS does not break/stop (like it would if e.g. a segfault had happened), despite my having turned on breaking at all C++ Exceptions (as was suggested in another thread) and all Common Language Runtime Exceptions.
What I would like VS to do is to either break whenever that error code is 'produced' and examine the values of variables or at least get a stack trace.
I searched intensively but I could not find a satisfactory solution to my problem. In essence, my question is similar to how to debug "Invalid parameter passed to C runtime function"? but the problem does not occur with the linux version of my program, so I'm looking for directions on how to debug it on Windows, either with Visual Studio or some other tool.
Edit:
Sadly, I was not able to find any convenient means of breaking automatically when the error occurs. So I went with the manual way of setting a breakpoint (in VS) near the supposed crash and step through the code.
It turned out that I got a NULL pointer from fopen:
myfile = fopen("somedir\\somefile.xml", "w");
despite the file being created. But when trying to write to that file (via the NULL handle!), a segfault occurred. Strangely, it seems I only get a NULL pointer from fopen when the process has a long lifetime. But that's off-topic for this question.
Edit 2:
Checking the global errno variable gave error code 22, which again translates to an invalid argument. However, the argument to fopen is not invalid, as I verified with the debugger and by the fact that the file is actually created correctly (with 0 bytes length). Now I think that error code 22 is simply misleading, because when I check $err, hr (via a watch in VS) I get:
0x000005aa ERROR_NO_SYSTEM_RESOURCES : Insufficient system resources exist to complete the requested service.
Just like mentioned here, I have plenty of HD space (1.4 GB) and plenty of free RAM (3.2 GB), and I fear it is something not directly caused by my program but rather due to Windows' file-handling design (it does not happen under Linux).
Edit 3: OK, it seems it is not Windows itself that's the culprit but rather the Intel Fortran compiler I'm using. Every time I'm doing formatted write statements in my program, a Mutant (Windows speak for mutex) handle is leaked. Using WinDbg and !htrace -enable, then running a bit further, break and issue !htrace -diff gives loads of these backtraces:
0x00000000777ca25a: ntdll!NtCreateMutant+0x000000000000000a
0x000007fefd54a1b7: KERNELBASE!CreateMutexExW+0x0000000000000057
0x000007fefd551d60: KERNELBASE!CreateMutexExA+0x0000000000000050
0x000007fedfab24db: libifcoremd!for_lge_ssll+0x0000000000001dcb
0x000007fedfb03ed6: libifcoremd!for_write_int_fmt+0x0000000000000056
0x000000014085aa21: myprog!MY_ROUTINE+0x0000000000000121
During the program runtime these mutant handles seem to accumulate until they exhaust all handle resources (16711680 handles) so that there's nothing left for file handles.
Edit 4: It's a bug in the Intel Fortran runtime libraries that has been fixed in a later version (see here). Using the patched version of libifcoremd.dll fixes the problem, i.e. the handle count no longer increases during formatted writes.
It could be too many open files or leaked (not closed) handles. You can check that with e.g. Process Explorer (it shows the number of handles held by each process).

C: fprintf does not work

I have a long C code. At the beginning I open two files and write something on them:
ffitness_data = fopen("fitness_data.txt","w");
if (ffitness_data == NULL) {
    printf("Impossible to open the fitness data file\n");
    exit(1);
} else {
    fprintf(ffitness_data,"#This file contains all the data that are function of fitness.\n");
    fprintf(ffitness_data,"#Columns: f,<p>(f),<l>(f).\n\n");
}
fmeme_data = fopen("meme_data.txt","w");
if (fmeme_data == NULL) {
    printf("Impossible to open the meme data file\n");
    exit(1);
} else {
    fprintf(fmeme_data,"#This file contains all the data relative to memes.\n");
    fprintf(fmeme_data,"#Columns: fitness, popularity, lifetime.\n\n");
}
Everything is fine at this step: files are open and two lines are written on them.
Then I have a long simulation of a stochastic process, whose code is not interesting for the question's purposes: the files and their pointers are never used. At the end of the process I have:
for (i = 0; i < data; i++) {
    fprintf(fmeme_data,"%f\t%d\t%f\n",meme[i].fitness,meme[i].popularity,meme[i].lifetime);
}
for (i = 0; i < 40; i++) {
    fprintf(ffitness_data,"%f\t%f\t%f\n",(1.0/40)*(i+0.5),popularity_histo[i],lifetime_histo[i]);
}
Then I DO fflush() and fclose() of both files.
If I run the code on my laptop, both files are filled. If the code runs on a remote server, the file fitness_data.txt contains only the first print, i.e. the lines starting with #, but not the data. Note that:
The other file never gives me problems.
I'm used to this server. Something similar never happened.
Given all these information, the question is:
Why does a command, always used in the same way and in the same code, always work on one server, while on a different server it sometimes works and sometimes doesn't?
Admins: I don't think this question is a duplicate. All similar questions were solved by adjusting the code (here) or adding fflush() (here) and similar things. Here the problem is not in the code (in my modest opinion), because it works on my laptop. I bet it works on most machines.
We can't say for certain what's going on here, because we don't have your full program nor do we have access to the server where the problem happens. But, we can give you some debugging advice.
When a C program behaves differently on one computer than another, the very first thing you should suspect is memory corruption. The best available tool for finding memory corruption is valgrind. Fix the first invalid operation it reports and repeat until it reports no more invalid operations. There are excellent odds that the problem will have then gone away.
Turn up the warning levels as high as they can go and fix all of the complaints, even the ones that look silly.
You say you are calling fflush and fclose, but are you checking whether they failed? Check thoroughly, like this:
if (ferror(ffitness_data) || fflush(ffitness_data) || fclose(ffitness_data)) {
    perror("write error on fitness_data.txt");
    exit(1);
}
Does the problem go away if you change the optimization level you are compiling with? If so, you may have a bug that causes "undefined behavior". Unfortunately there are a lot of possible ways to do that and I can't easily explain how to look for them.
Use a tool like C-Reduce to cut your program down to a smaller program that still doesn't work correctly but is short enough to post here in its entirety.
Read and follow the instructions in the article "How to Debug Small Programs".

C: Line repeated twice [duplicate]

I'm debugging the goldfish android kernel (version 3.4), with kernel sources.
Now I've found that gdb sometimes jumps back and forth between lines; e.g. consider C source code like the following:
char *XXX;
int a;
...
if (...)
{
}
When I reach the if clause and type n, it jumps back to the int a part. Why is that?
If I execute the command again, it enters the body of the if.
If possible, I want to avoid that jump and step into the if directly (if, of course, the condition matches).
When I reach the if clause and type n, it jumps back to the int a part. Why is that?
Because your code is compiled with optimization on, and the compiler can (and often does) re-arrange instructions of your program in such a way that instructions "belonging" to different source lines are interleaved (code motion optimizations attempt (among other things) to move load instructions to long before their results are needed; this helps to hide memory latency).
If you are using gcc-4.8 or later, build your sources with -Og. Otherwise, see this answer.

scanf Cppcheck warning

Cppcheck shows the following warning for scanf:
Message: scanf without field width limits can crash with huge input data. To fix this error message add a field width specifier:
%s => %20s
%i => %3i
Sample program that can crash:
#include <stdio.h>
int main()
{
    int a;
    scanf("%i", &a);
    return 0;
}
To make it crash:
perl -e 'print "5"x2100000' | ./a.out
I cannot crash this program by typing "huge input data". What exactly should I type to get this crash? I also don't understand the meaning of the last line of the warning:
perl -e ...
The last line is an example command that demonstrates the crash with the sample program. It makes perl print "5" 2,100,000 times and pipe the result to the stdin of the program a.out (which is meant to be the compiled sample program).
First of all, scanf() should be used for testing only, not in real-world programs, due to several issues it won't handle gracefully. For example, if you ask for "%i" but the user inputs "12345abc", the "abc" stays in stdin and may fill subsequent inputs without giving the user a chance to change them.
Regarding this issue: scanf() knows it should read an integer value, but it doesn't know how big the target is. The pointer could point to a 16-bit, 32-bit, or 64-bit integer, or something even bigger, and scanf isn't aware of that. Functions with a variable number of arguments (declared with ...) don't know the exact data types of the elements passed, so they have to rely on the format string (which is why the format tags are not optional, unlike in C#, where you just number them, e.g. "{0} {1} {2}"). Without a given length, it has to assume one, which might be platform-dependent as well (making the function even more unsafe to use).
In general, consider it possibly harmful and a starting point for buffer-overflow attacks. If you'd like to secure and optimize your program, start by replacing it with alternatives.
I tried running the perl expression against the C program and it did crash here on Linux (segmentation fault).
Using scanf (or fscanf and sscanf) in real-world applications is usually not recommended at all, because it's not safe and is typically a hole for buffer overruns when incorrect input data is supplied.
There are much more secure ways to read numbers in many commonly used libraries for C++ (Qt, the runtime libraries for Microsoft Visual C++, etc.). You can probably find secure alternatives for "pure" C, too.

Stack overflow error in C, before any step

When I try to debug my C program, and even before the compiler starts executing any line I get:
"Unhandled exception at 0x00468867 in HistsToFields.exe: 0xC00000FD: Stack overflow."
I have no clue how to spot the problem, since the program hasn't even started executing any line (or at least this is what I can see from the compiler debugging window). How can I tell what is causing the overflow if no line of my program has executed yet?
"The when the debugger breaks it points to a line in chkstk.asm"
I'm using Microsoft Visual Studio 2008 on a win7.
I set the Stack Reserve Size to 300000000
PS: the program used to execute fine before but on another machine.
I have a database (120000 x 60) in CSV format that I need to change to space-delimited. The program (which I didn't write myself) defines a structure for the output file:
struct OutputFileContents {
    char Filename[LINE_LEN];
    char Title[LINE_LEN];
    int NVar;
    char VarName[MAX_NVAR][LINE_LEN];
    char ZoneTitle[LINE_LEN];
    int NI;
    int NJ;
    int NK;
    double Datums[MAX_NVAR];
    double Data[MAX_NVAR][MAX_NPOINT];
};
This last array Data[][] is what contains all the output; hence the huge size.
The array size MAX_NPOINT is set in a header file in the project, and this header is used by several programs in the project.
Thank you very much in advance.
Ahmad.
First, IDE != compiler != debugger.
Second, and no matter why the debugger fails to debug the application: a dataset that huge on the stack is a serious design error. Fix that design error, and your debugger problem will go away.
As for why the debugger fails... no idea. Too little RAM installed? 32bit vs 64bit platform? Infinite recursion in constructing static variables? Can't really say without looking at things you haven't showed us, like source, specs of environment, etc.
Edit: In case the hint is missed: Global / static data objects are constructed before main() starts executing. An infinite (or just much-too-deep) recursion in those constructors can trigger a stack overflow. (I am assuming C++ instead of C as the error message you gave says "unhandled exception".)
Edit 2: You added that you have a "database" that you need to convert to space-delimited. Without seeing the rest of your code: Trying to do the whole conversion in one go in memory isn't a good idea. Read a record, convert it, write it. Repeat. If you need stuff like "longest record" to determine the output format, iterate over the input once read-only for finding the output sizes, then iterate again doing the actual conversion.
