Program received signal: “EXC_BAD_ACCESS”? - c

I'm making this little program that just reads contents of a file, but when I run it I get this error: Program received signal: “EXC_BAD_ACCESS”.
I also get a warning from Xcode: "Assignment makes pointer from integer without a cast" at line 10 of my code (main.c):
#include <stdio.h>
#include <stdlib.h>
#define FILE_LOCATION "../../Data"
int main (int argc, const char * argv[]) {
    FILE *dataFile;
    char c;

    if ( dataFile = fopen(FILE_LOCATION, "r") == NULL ) {
        printf("FAILURE!");
        exit(1);
    }
    while ( (c = fgetc(dataFile)) != EOF ) {
        printf("%c", c);
    }
    fclose(dataFile);
    return 0;
}
This is the debugger output:
[Session started at 2012-06-27 10:28:13 +0200.]
GNU gdb 6.3.50-20050815 (Apple version gdb-1515) (Sat Jan 15 08:33:48 UTC 2011)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "x86_64-apple-darwin".tty /dev/ttys000
Loading program into debugger…
Program loaded.
run
[Switching to process 34331]
Running…
Program received signal: “EXC_BAD_ACCESS”.
sharedlibrary apply-load-rules all
(gdb)
Is there a problem with the pointer, or am I using the wrong function? I also found a memory-debugging tool called NSZombie; what is that, and can I use it here?

== binds more tightly than =, so your condition assigns the result of the comparison (an int) to dataFile. That is what the compiler warning is about, and the bogus pointer then makes fgetc() crash. Parenthesize the assignment:
if ( (dataFile = fopen(FILE_LOCATION, "r")) == NULL ) {

Here's another error:
char c;
while ( (c = fgetc(dataFile)) != EOF ) {
You should really study the documentation for important functions like fgetc() before using them.
Specifically, fgetc() returns int, which is required in order to fit the EOF value. If it returned char, there would have to be one character whose numerical value collided with EOF and that therefore couldn't appear in a binary file. That would suck, so that's not how it works. :) Declare c as an int and the comparison against EOF behaves as intended.
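Putting both fixes together, a corrected version of the program might look like the sketch below. cat_file is a made-up helper name; the point is the parenthesis-free assignment on its own line and the int type for the character.

```c
#include <stdio.h>

/* Hypothetical helper: reads the file at `path` and echoes it to
   stdout. Returns the number of characters read, or -1 if the
   file could not be opened. */
long cat_file(const char *path) {
    FILE *dataFile = fopen(path, "r");  /* assignment on its own line:
                                           no precedence trap */
    if (dataFile == NULL) {
        return -1;
    }

    long count = 0;
    int c;                              /* int, not char, so EOF fits */
    while ((c = fgetc(dataFile)) != EOF) {
        putchar(c);
        count++;
    }
    fclose(dataFile);
    return count;
}
```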


gdb debugging segmentation fault, arguments count showing false

I am trying to debug a segmentation fault in a menu program written in C; its main function is shown below.
int main( int ac, char **av ) {
    /* TDT,II - 02 May 2006 - Added this check to see if there is a Debug level passed in */
    if ( ac > 0 ) {
        iDebug = atoi( av[1] );
        sprintf( cLogText, "Setting Debug Level to ~%d~", iDebug );
        WriteTrace( cLogText );
    };
    initscr();
    clear();
    t1 = time(NULL);
    local = localtime(&t1);
    Svc_Login();
    for ( ; ; ) {
        cases_on_pc = FALSE;
        if ( !Process_security() ) break;
        menu1();
    };
    wrap_up(0);
    endwin( );
    exit(0);
}
When I run it under gdb without any arguments, it halts at 0x00007ffff34c323a in ____strtoll_l_internal () from /lib64/libc.so.6, as shown below. I assumed if (ac > 0) becomes true only when I pass arguments, but even though I haven't passed any runtime arguments, that if block is executed, atoi(av[1]) is called, and the result is a segmentation fault. I am unable to figure it out. How can I identify and correct the issue so that the menu program runs successfully? Could somebody give any suggestions?
-rw-rw-r--. 1 MaheshRedhat MaheshRedhat 2275270 Jan 10 03:09 caomenu.c
-rw-rw-r--. 1 MaheshRedhat MaheshRedhat 0 Jan 10 03:09 caomenu.lis
-rwxr-xr-x. 1 root root 796104 Jan 10 03:10 scrmenu
[MaheshRedhat@azureRHEL MenuPrograms]$
[MaheshRedhat@azureRHEL MenuPrograms]$ gdb ./scrmenu
(gdb) run
Starting program: /home/MaheshRedhat/MenuPrograms/scrmenu
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff34c323a in ____strtoll_l_internal () from /lib64/libc.so.6
Missing separate debuginfos, use: yum debuginfo-install glibc-2.28-189.5.0.1.el8_6.x86_64 libnsl-2.28-189.5.0.1.el8_6.x86_64 ncurses-libs-6.1-9.20180224.el8.x86_64
(gdb)
(gdb) bt
#0 0x00007ffff34c323a in ____strtoll_l_internal () from /lib64/libc.so.6
#1 0x00007ffff34bfce4 in atoi () from /lib64/libc.so.6
#2 0x0000000000401674 in main (ac=1, av=0x7fffffffe228) at caomenu.pc:541
(gdb)
Update
The above issue has been resolved. Here is another encounter with a segmentation fault.
This backtrace shows that the call to WriteTrace (cEntryText=0x6f4d20 <cLogText>) in my main function leads to a call to fputs() from /lib64/libc.so.6:
Starting program: /home/MaheshRedhat/MenuPrograms/scrmenu 1
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff34f7f5c in fputs () from /lib64/libc.so.6
(gdb)
(gdb) bt
#0 0x00007ffff34f7f5c in fputs () from /lib64/libc.so.6
#1 0x0000000000472efc in WriteTrace (cEntryText=0x6f4d20 <cLogText> "Setting Debug Level to ~1~") at caomenu.pc:18394
#2 0x00000000004016a0 in main (ac=2, av=0x7fffffffe208) at caomenu.pc:543
(gdb)
The below is the declaration of cLogText
char cLogText[250];
The below is code for WriteTrace:
/**************************************************************************
routine to write an entry in the trace file
**************************************************************************/
void WriteTrace( char *cEntryText ) {
    char cTimeStamp[40];                /* time stamp variable */
    char cTimeFormat[] = "%H:%M:%S: ";  /* time stamp format */

    GetTimeStamp( &cTimeStamp[0], sizeof(cTimeStamp), &cTimeFormat[0] );
    TrcFile = fopen( cTraceFile, cTrcOpenFlag ); /* open the file */
    cTrcOpenFlag[0] = 'a';              /* after first, always append */
    fprintf(TrcFile, "%s", cTimeStamp); /* write the time stamp */
    fprintf(TrcFile, "%s\n", cEntryText); /* write the entry */
    fclose(TrcFile);                    /* close the trace file */
    return;                             /* return to caller */
}
From C11:
The value of argc shall be nonnegative. argv[argc] shall be a null
pointer. If the value of argc is greater than zero, the array members
argv[0] through argv[argc-1] inclusive shall contain pointers to
strings, which are given implementation-defined values by the host
environment prior to program startup. The intent is to supply to the
program information determined prior to program startup from elsewhere
in the hosted environment. If the host environment is not capable of
supplying strings with letters in both uppercase and lowercase, the
implementation shall ensure that the strings are received in
lowercase.
If the value of argc is greater than zero, the string
pointed to by argv[0] represents the program name; argv[0][0] shall be
the null character if the program name is not available from the host
environment. If the value of argc is greater than one, the strings
pointed to by argv[1] through argv[argc-1] represent the program
parameters.
The condition
( ac > 0 )
would be true even if you provided 0 program arguments, with argv[0] pointing to the program name (if it was available).
This statement:
atoi( av[1] );
tries to access av[1] which the Standard defines to be NULL when no program arguments were provided. Hence the segmentation violation signal.
fopen returns NULL to indicate failure.
TrcFile = fopen( cTraceFile, cTrcOpenFlag );
You do not check its return value here, before passing it to fprintf.
Perhaps:
TrcFile = fopen( cTraceFile, cTrcOpenFlag );
if (!TrcFile) {
    /* deal with the error here, e.g.: */
    perror("fopen");
    return;
}
if (ac > 0) {
iDebug = atoi(av[1]); // out of bounds here, when you run `./scrmenu`
}
when you run ./scrmenu, you will have ac=1 and av[0]=./scrmenu
You may be misunderstanding what ac and av mean in the main function.
how to fix:
if (ac == 2) {
iDebug = atoi(av[1]); // parse the second argument
}
Then you can run either ./scrmenu or ./scrmenu 1.
You can check this post about command-line arguments.
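The guard above can be pulled into a small helper to make the contract explicit. This is a hedged sketch; parse_debug_level is a made-up name, not something in the OP's program.

```c
#include <stdlib.h>

/* Returns the debug level from av[1] when exactly one argument was
   passed, and 0 otherwise. The C standard guarantees av[ac] is NULL,
   so with no arguments av[1] is NULL and must not be passed to atoi(). */
int parse_debug_level(int ac, char **av) {
    if (ac == 2)
        return atoi(av[1]);
    return 0;
}
```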

Why segfaults occur with string.h functions?

With the same command on my coworker's PC, my program runs without a problem, but on my PC it crashes with a segfault.
GDB backtrace at core reads as follows:
#0 strrchr () at ../sysdeps/x86_64/strrchr.S:32
32 ../sysdeps/x86_64/strrchr.S: no such file or directory
(gdb) bt
#0 strrchr () at ../sysdeps/x86_64/strrchr.S:32
#1 0x00007f10961236d7 in dirname (path=0x324a47a0 <error: Cannot access memory at address 0x324a47a0>) at dirname.c:31
I'm already compiling the executable with the -g -ggdb options.
The odd thing is that with valgrind the program runs without error on my PC as well.
How can I solve this? I've observed that the errors occur only with string.h functions: strrchr, strcmp, strlen, ...
+Edit: the gdb backtrace indicates that the program crashes here:
char* base_dir = dirname(get_abs_name(test_dir));
where get_abs_name is defined as
char* get_abs_name(char* dir) {
    char abs_path[PATH_MAX];
    char* c = malloc(PATH_MAX*sizeof(char));
    realpath(dir, abs_path);
    strcpy(c, abs_path);
    return c;
}
+Edit2: 'dir' is the path of a certain file, like '../program/blabla.jpg'.
Using valgrind,
printf("%s\n", dir)
normally prints '/home/frozenca/path_to_program'.
I can't guess why the program crashes without valgrind..
We cannot know for sure without a Minimal, Complete, and Verifiable example. Your code looks mostly correct (albeit convoluted), except you do not check for errors.
char* get_abs_name(char* dir) {
    char abs_path[PATH_MAX];
    char* c = malloc(PATH_MAX*sizeof(char)); /* this may return NULL */
    realpath(dir, abs_path);                 /* this may return NULL */
    strcpy(c, abs_path);
    return c;
}
Now, how could this lead to an error like you see? Well, if malloc returns NULL, you'll get a crash right away in strcpy. But if realpath fails:
The content of abs_path remains undefined.
So strcpy(c, abs_path) will copy undefined content. That could mean copying just one byte, if abs_path[0] happens to be \0, but it could also mean massive heap corruption. Which of these happens depends on unrelated conditions, such as how the program is compiled and whether a debugging tool such as valgrind is attached.
TL;DR: get into the habit of checking every function that may fail.
char* get_abs_name(char* dir) {
    char abs_path[PATH_MAX];
    char* c = malloc(PATH_MAX*sizeof(char));
    if (!c) { return NULL; }
    if (!realpath(dir, abs_path)) {
        free(c);
        return NULL;
    }
    strcpy(c, abs_path);
    return c;
}
Or, here, you can simplify it a lot, assuming a GNU or POSIX.1-2008 system:
char * get_abs_name(const char * dir) {
    return realpath(dir, NULL);
}
Note however that either way, in your main program, you also must check that get_abs_name() did not return NULL, otherwise dirname() will crash.
Drop your function entirely and use the return value of realpath(dir, NULL) instead.

execvp call in git's source for external shell cmd returns EFAULT (Bad address) errno, seemingly only in 64 bit. Googling reveals nothing

UPDATE: running git diff with valgrind results in
Syscall param execve(argv) points to uninitialised byte(s)
And the output from strace is not fully decoded--i.e., there are hex numbers among the array of strings that is argv.
...
This started out like a superuser problem but it's definitely moved into SO's domain now.
But anyway, here is my original SU post detailing the problem before I looked at the source very much: https://superuser.com/questions/795751/various-methods-of-trying-to-set-up-a-git-diff-tool-lead-to-fatal-cannot-exec
Essentially, following the standard procedure to set up vimdiff as a diff tool, by setting the external directive under [diff] in .gitconfig, leads to errors like this:
fatal: cannot exec 'git_diff_wrapper': Bad address
external diff died, stopping at HEAD:switch-monitor.sh.
It happens on my Linux Mint 17 64 bit OS, as well as on an Ubuntu 14.04 64 bit OS on a virtualbox VM, but not in an Ubuntu 14.04 32 bit VM...
Googling reveals no similar problems. I've spent a lot of time looking at git's source to figure this out. Bad address is the description returned by strerror for an EFAULT error. Here is a short description of EFAULT from execve manpage:
EFAULT filename points outside your accessible address space
I've tracked down how the error message is pieced together by git, and have used that to narrow down the source of the problem quite a bit. Let's start here:
static int execv_shell_cmd(const char **argv)
{
    const char **nargv = prepare_shell_cmd(argv);
    trace_argv_printf(nargv, "trace: exec:");
    sane_execvp(nargv[0], (char **)nargv);
    free(nargv);
    return -1;
}
This function should not return control, but it does due to the error. The actual execvp call is in sane_execvp, but perhaps prepare_shell_cmd is of interest, though I don't spot any problems:
static const char **prepare_shell_cmd(const char **argv)
{
    int argc, nargc = 0;
    const char **nargv;

    for (argc = 0; argv[argc]; argc++)
        ; /* just counting */
    /* +1 for NULL, +3 for "sh -c" plus extra $0 */
    nargv = xmalloc(sizeof(*nargv) * (argc + 1 + 3));

    if (argc < 1)
        die("BUG: shell command is empty");

    if (strcspn(argv[0], "|&;<>()$`\\\"' \t\n*?[#~=%") != strlen(argv[0])) {
#ifndef GIT_WINDOWS_NATIVE
        nargv[nargc++] = SHELL_PATH;
#else
        nargv[nargc++] = "sh";
#endif
        nargv[nargc++] = "-c";

        if (argc < 2)
            nargv[nargc++] = argv[0];
        else {
            struct strbuf arg0 = STRBUF_INIT;
            strbuf_addf(&arg0, "%s \"$@\"", argv[0]);
            nargv[nargc++] = strbuf_detach(&arg0, NULL);
        }
    }

    for (argc = 0; argv[argc]; argc++)
        nargv[nargc++] = argv[argc];
    nargv[nargc] = NULL;

    return nargv;
}
It doesn't look like they messed up the terminating NULL pointer (the absence of which is known to cause EFAULT).
sane_execvp is pretty straightforward. It's a call to execvp and returns -1 if it fails.
I haven't quite figured out what trace_argv_printf does, though it looks like it might affect nargv and perhaps clobber the terminating NULL pointer. If you'd like me to include it in this post, let me know.
I have been unable to reproduce an EFAULT with execvp in my own C code thus far.
This is git 1.9.1, and the source code is available here: https://www.kernel.org/pub/software/scm/git/git-1.9.1.tar.gz
Any thoughts on how to move forward?
Thanks
Answer (copied from comments): it seems to be a bug in git 1.9.1. The (old) diff.c code, around line 2910-2930 or so, fills in an array of size 10 with arguments, before calling the run-command code. But in one case it puts in ten actual arguments and then an 11th NULL. Depending on the whims of the compiler, the NULL may get overwritten with some other local variable (or the NULL might overwrite something important).
Changing the array to size 11 should fix the problem. Or just update to a newer git (v2.0.0 or later); Jeff King replaced the hard-coded array with a dynamic one, in commits 82fbf269b9994d172719b2d456db5ef8453b323d and ae049c955c8858899467f6c5c0259c48a5294385.
Note: another possible cause for bad address is the use of run-command.c#exists_in_PATH() by run-command.c#sane_execvp().
This is fixed with Git 2.25.1 (Feb. 2020).
See commit 63ab08f (07 Jan 2020) by brian m. carlson (bk2204).
(Merged by Junio C Hamano -- gitster -- in commit 42096c7, 22 Jan 2020)
run-command: avoid undefined behavior in exists_in_PATH
Noticed-by: Miriam R.
Signed-off-by: brian m. carlson
In this function, we free the pointer we get from locate_in_PATH and then check whether it's NULL.
However, this is undefined behavior if the pointer is non-NULL, since the C standard no longer permits us to use a valid pointer after freeing it.
The only case in which the C standard would permit this to be defined behavior is if r were NULL, since it states that in such a case "no action occurs" as a result of calling free.
It's easy to suggest that this is not likely to be a problem, but we know that GCC does aggressively exploit the fact that undefined behavior can never occur to optimize and rewrite code, even when that's contrary to the expectations of the programmer.
It is, in fact, very common for it to omit NULL pointer checks, just as we have here.
Since it's easy to fix, let's do so, and avoid a potential headache in the future.
So instead of:
static int exists_in_PATH(const char *file)
{
    char *r = locate_in_PATH(file);
    free(r);
    return r != NULL;
}
You now have:
static int exists_in_PATH(const char *file)
{
    char *r = locate_in_PATH(file);
    int found = r != NULL;
    free(r);
    return found;
}

C code runs in Eclipse-Kepler but fails to run in Codeblocks IDE

I am a newbie at C programming and new on Stackoverflow as well.
I have some c code that compiles and runs in Eclipse Kepler (Java EE IDE); I installed the C/C++ plugin and Cygwin Gcc compiler for c.
Everything runs ok in Eclipse IDE; however, when my friend tries to run the same code on his Codeblocks IDE, he doesn't get any output. At some point, he got some segmentation error, which we later learned was due to our program accessing memory space that didn't belong to our program.
Codeblocks IDE is using Gcc compiler not cygwin gcc, but I don't think they're that different to cause this sort of problem.
I am aware that C is extremely primitive and non-standardized, but why would my code run in eclipse with cygwin-gcc compiler but not run in Codeblocks IDE with gcc compiler?
Please help, it's important for our class project.
Thanks to all.
[EDIT] Our code is a little large to paste in here but here's a sample code of what would RUN SUCCESSFULLY in eclipse but FAIL in codeblocks, try it yourself if you have codeblocks please:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
int main(void) {
    char *traceEntry1;
    FILE *ifp;

    traceEntry1 = malloc(200*sizeof(char));
    ifp = fopen("./program.txt", "r");
    while (fgets(traceEntry1, 75, ifp))
        printf("String input is %s \n", traceEntry1);
    fclose(ifp);
}
It simply doesn't give any outputs in codeblocks, sometimes just results in a segmentation fault error.
I have no idea what the problem is.
We need your help please, thanks in advance.
Always and ever test the results of all (relevant) calls. "Relevant" calls are at least those whose results are unusable if the call failed.
In the case of the OP's code they are:
malloc()
fopen()
fclose()
A safe version of the OP's code could look like this:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int result = EXIT_SUCCESS; /* Be optimistic. */
    char * traceEntry1 = NULL; /* Always initialise your variables; you might remove this during the optimisation phase later (if ever). */
    FILE * ifp = NULL;         /* Always initialise your variables; you might remove this during the optimisation phase later (if ever). */

    traceEntry1 = malloc(200 * sizeof(*traceEntry1)); /* sizeof(char) is always 1.
                       Using the dereferenced target (traceEntry1) on the other hand makes this
                       line of code robust against modifications of the target's type declaration. */
    if (NULL == traceEntry1)
    {
        perror("malloc() failed"); /* Log error. Never ignore useful and even free information. */
        result = EXIT_FAILURE;     /* Flag error and ... */
        goto lblExit;              /* ... leave via the one and only exit point. */
    }

    ifp = fopen("./program.txt", "r");
    if (NULL == ifp)
    {
        perror("fopen() failed"); /* Log error. Never ignore useful and even free information. */
        result = EXIT_FAILURE;    /* Flag error ... */
        goto lblExit;             /* ... and leave via the one and only exit point. */
    }

    while (fgets(traceEntry1, 75, ifp)) /* Why 75? Why not 200 * sizeof(*traceEntry1),
                       as that's what was allocated to traceEntry1? */
    {
        printf("String input is %s \n", traceEntry1);
    }

    if (EOF == fclose(ifp))
    {
        perror("fclose() failed");
        /* Be tolerant, as no poisoned results are returned. So do not flag an error; it's logged, however. */
    }

lblExit: /* Only have one exit point, so there is no need to code the clean-up twice. */
    free(traceEntry1); /* Always clean up; free what you allocated. */
    return result;     /* Return the outcome of this exercise. */
}

On Linux, \n being treated as two characters in gdb using gcc compiler

My code
char[] fileContents = "hi\n whats up\n";
char *output = malloc(sizeof(char)*1024);
int i = 0; int j = 0;
char *bPtr = fileContents;

for(i=j=0; bPtr[i]!='\0'; i++)
{
    if('\n'==bPtr[i])
        outputPtr[j++]='\r';
    outputPtr[j++]=bPtr[i];
}
On NetBeans this code works, but using Linux gcc the \ and the \n are treated as separate characters, whereas in NetBeans \n is a single char. Please help.
Upon debugging in Linux with GDB, it completely skips the if statement, while in NetBeans it enters it and gets the job done.
First off, your C code isn't C code. It's close, but as is, it won't compile at all. Second, after cleaning up the code to get it to a compilable state:
#include <stdio.h>
#include <stdlib.h>

char fileContents[] = "hi\n whats up\n";

int main(void)
{
    char *output;
    int i;
    int j;
    char *bPtr;

    output = malloc(1024);
    bPtr = fileContents;

    for (i = j = 0 ; bPtr[i] != '\0' ; i++)
    {
        if ('\n' == bPtr[i])
            output[j++] = '\r';
        output[j++] = bPtr[i];
    }
    output[j] = '\0';

    fputs(output,stdout);
    return EXIT_SUCCESS;
}
And compiling with "gcc -g a.c" and using gdb:
GNU gdb Red Hat Linux (6.3.0.0-1.132.EL4rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"...Using host libthread_db
library "/lib/tls/libthread_db.so.1".
(gdb) break 17
Breakpoint 1 at 0x80483fa: file a.c, line 17.
(gdb) run
Starting program: /tmp/a.out
Breakpoint 1, main () at a.c:17
17 for (i = j = 0 ; bPtr[i] != '\0' ; i++)
(gdb) n
19 if ('\n' == bPtr[i])
(gdb) n
21 output[j++] = bPtr[i];
(gdb) n
17 for (i = j = 0 ; bPtr[i] != '\0' ; i++)
(gdb) n
19 if ('\n' == bPtr[i])
(gdb) n
21 output[j++] = bPtr[i];
(gdb) n
17 for (i = j = 0 ; bPtr[i] != '\0' ; i++)
(gdb) n
19 if ('\n' == bPtr[i])
(gdb) n
20 output[j++] = '\r';
(gdb) n
21 output[j++] = bPtr[i];
The first two times through the loop, we skip over the condition, since it's false. On the third time through, the condition is met, and the "\r" is included in the output.
But from reading some of your other comments, it seems you are confused by line endings. On Unix (and because Linux is a type of Unix, this is true for Linux as well), lines end with one character, LF (ASCII code 10). Windows (and MS-DOS, the precursor to Windows, and CP/M, the precursor to MS-DOS) uses the character sequence CR LF (ASCII code 13, ASCII code 10) to mark the end of line.
Why the two differing standards? Because of the wording of the ASCII standard, when it was created and why. Back when it was created, output was mostly on teletypes---think typewriter. CR was defined as moving the print carriage (or print head) back to the beginning of the line, and LF was defined as advancing to the next line. The action of bringing the print carriage to the beginning of the next line was unspecified. CP/M (and descendants) standardized on using both to mark the end of a line due to a rather literal translation of the standards document. The creators of Unix decided on a more liberal interpretation where LF, a Line Feed, meant to advance to the next line for output, bringing the print carriage back to the start (whereas the first computer I used used CR for the same thing, bringing the carriage back to the start and advancing to the next line).
Now, if a teletype device is hooked up to a Unix system and requires both CR and LF, then it's up to the Unix device driver, when it sees a LF, to add the required CR. In other words, the system handles the details in behalf of your program, and you only need the LF to end a line.
To further confound the mess, the C standard weighs in. When you open a file,
FILE *fp = fopen("sometextfile.txt","r");
you open it in "text" mode. Under Unix, this does nothing, but under Windows, the C library will discard "\r" on input so the program only needs to concern itself with looking for "\n" (and for files opened for writing, it will add the CR when a LF is seen). But this is under Windows (there may be other systems out there that do this, but I am unfamiliar with any).
If you really want to see the file, as is, you need to open it in binary mode:
FILE *fp = fopen("sometextfile.txt","rb");
Now, if there are any CRs in the file, your program will see them. Normally, one doesn't need to concern themselves with line endings---it's only when you move a text file from one system to another that uses a different line-ending convention where it becomes an issue, and even then, the transport mechanism might take care of the issue for you (such as FTP). But it doesn't hurt to check.
Remember when I said that Unix does not make a distinction between "text" and "binary" modes? It doesn't. So when a text file from the Windows world is processed by a Unix program, that program will see the CRs. What happens then is really up to the program in question. Programs like grep don't seem to care, but the editor I use will show any CR that exists.
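If stray CRs do turn out to be the problem, one way a program can deal with them is a small in-place filter. This is a hedged sketch; strip_cr is a name I made up:

```c
#include <stddef.h>

/* Removes every '\r' from s in place and returns the new length. */
size_t strip_cr(char *s) {
    size_t i, j;
    for (i = j = 0; s[i] != '\0'; i++) {
        if (s[i] != '\r')
            s[j++] = s[i];
    }
    s[j] = '\0';
    return j;
}
```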
So I guess now, my question is---what are you trying to do?
Your code runs perfectly on my system with gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3.
The code I ran :
#include <stdio.h>

int main()
{
    char Contents[] = "hi\n whats up\n";
    int i = 0; int j = 0;
    char outputPtr[20];

    for(i=j=0; Contents[i]!='\0'; i++)
    {
        if('\n'==Contents[i]) outputPtr[j++]='\r';
        outputPtr[j++]=Contents[i];
    }
    outputPtr[j]='\0';

    printf("%s %d %d \n", outputPtr,j,i);
    i = 0;
    while(outputPtr[i]!='\0') printf(" %d ", outputPtr[i++]);
    return 0;
}
Output :
hi
whats up
15 13 //Length of the edited string and the original string
104 105 13 10 32 119 104 97 116 115 32 117 112 13 10 //Ascii values of the characters of the string
13 is the carriage return character.
