Issue
On Red Hat Enterprise Linux Server release 6.5 (Santiago) with gcc version 4.4.7, I can write the following:
//file1.C
#include <stdio.h>
int main( int argc, char *argv[] )
{
    char *A = 0;
    *A = '0'; // segfaults at run time
}
The question is: how can I write a test for the 'file1.C' source file that shows this problem?
Solution
You are writing through a pointer that does not point at memory you own. In your snippet the pointer is initialised to 0, i.e. a null pointer; an uninitialised pointer instead has an undefined value, meaning it points at some arbitrary location in memory. Writing to such a location does not by definition cause a segfault, because a segfault means that you write (or read) a location that is not owned by your process. Since this write happens at the very start of your program, there is a chance that a leftover value on the stack (see below) still points at some variable of your program, put there by some hidden initialisation code. But as mentioned, it is undefined behaviour: it may cause a segfault, it may just mess up your program somewhere else... nobody knows.
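As for the test you asked about: the answer above does not spell one out, so the following is only a sketch of one possible approach, not something from the original answer. Assuming a POSIX system (which your RHEL setup implies), you can run the faulty code in a forked child process and check that the child is killed by SIGSEGV:

#include <stdio.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Illustrative test harness (an assumption, not part of the original answer):
   run the suspect code in a child process and report whether it segfaulted. */
int main(void)
{
    int status;
    pid_t pid = fork();
    if (pid < 0)
        return 2;                  /* fork failed, cannot run the test */
    if (pid == 0) {
        /* child: the code from file1.C */
        char *A = 0;
        *A = '0';                  /* write through a null pointer */
        _exit(0);                  /* only reached if no crash happened */
    }
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGSEGV) {
        printf("test passed: the program crashed with SIGSEGV\n");
        return 0;
    }
    printf("test failed: no segfault observed\n");
    return 1;
}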
What I meant by 'still on the stack' is this: when you create a local variable, say an int, it is placed on the stack. The stack grows a little, and your variable sits at the top. Now if you create a variable (a pointer is also just a variable, only one that holds an address), then it goes out of scope, and then you create another variable, the second variable can start out with the same value the first one had, because it reuses the same stack slot. (This is also undefined behaviour, so don't rely on it.) Say we have the code:
#include <stdio.h>

char a = 'A';

void one()
{
    char *p1 = &a;   /* p1 points at the global a */
    printf("char is '%c', addr = %p\n", *p1, (void *)p1);
}

void two()
{
    char *p2;        /* uninitialised: reading it is undefined behaviour */
    printf("char is '%c', addr = %p\n", *p2, (void *)p2);
}

int main(int argc, char **argv)
{
    one();
    two();
}
There is a fair chance that this will print char is 'A' twice, because the value p2 happens to contain is the same value p1 had, i.e. the address of a. In that case a is valid memory (it is yours), but it is accessed by accident, so it does not cause a segfault.
Answered By - David van rijn