I am learning OpenGL and right now I am stuck on loading shaders. 90% of the time, this code works. The other 10% of the time, I get the following error for the vertex shader's compilation (I removed the error logging from the code below for easier readability.):
Vertex shader failed to compile with the following errors:
ERROR: 0:16: error(#132) Syntax error: "<" parse error
ERROR: error(#273) 1 compilation errors. No code generated
Shader loading code:
unsigned int LoadShader(const char *path_vert, const char *path_frag) { // Returns shader program ID.
    unsigned int shader_program;
    FILE *file;
    char *source_vert, *source_frag;
    unsigned int file_size;

    // Read vertex shader.
    file = fopen(path_vert, "rb");
    fseek(file, 0, SEEK_END);
    file_size = ftell(file);
    fseek(file, 0, SEEK_SET);
    source_vert = (char*)malloc(file_size + 1);
    fread(source_vert, 1, file_size, file);

    // Read fragment shader.
    file = fopen(path_frag, "rb");
    fseek(file, 0, SEEK_END);
    file_size = ftell(file);
    fseek(file, 0, SEEK_SET);
    source_frag = (char*)malloc(file_size + 1);
    fread(source_frag, 1, file_size, file);
    fclose(file);

    // Make sure the shader sources aren't garbage.
    printf("%s\n\n %s\n", source_vert, source_frag);

    // Create vertex shader.
    unsigned int vert_shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vert_shader, 1, &source_vert, NULL);
    glCompileShader(vert_shader);

    // Create fragment shader.
    unsigned int frag_shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(frag_shader, 1, &source_frag, NULL);
    glCompileShader(frag_shader);

    // Create shader program.
    shader_program = glCreateProgram();
    glAttachShader(shader_program, vert_shader);
    glAttachShader(shader_program, frag_shader);
    glLinkProgram(shader_program);

    // Clean up the extra bits.
    glDeleteShader(vert_shader);
    glDeleteShader(frag_shader);
    free(source_vert);
    free(source_frag);
    return shader_program;
}
Vertex shader:
#version 460 core
layout (location = 0) in vec3 a_pos;
layout (location = 1) in vec2 a_tex_coord;
out vec2 tex_coord;
void main() {
    gl_Position = vec4(a_pos, 1.0f);
    tex_coord = a_tex_coord;
}
Fragment shader:
#version 460 core
uniform sampler2D tex0;
in vec2 tex_coord;
out vec4 frag_color;
void main() {
    frag_color = texture(tex0, tex_coord);
}
I am compiling for C99 with GCC using VS Code. Thanks for reading!
I suspect that the problem is that the source_vert and source_frag buffers are not null-terminated. You allocate file_size + 1 bytes for each, but then fill only file_size bytes by reading from the file, leaving the last byte filled with garbage.
user7860670's answer solved my issue: all that was needed was a source_vert[file_size] = '\0';. I also added source_frag[file_size] = '\0'; just to be safe. Make sure to null-terminate your strings!! :)
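For reference, a minimal sketch of the corrected vertex-shader read (the fragment path is identical); closing the first file handle before reuse is also added here, since the original reused `file` without an fclose:

// Corrected read: the extra byte reserved by malloc(file_size + 1) must
// actually be written, or glShaderSource (called with length == NULL)
// reads past the end of the buffer into garbage.
file = fopen(path_vert, "rb");
fseek(file, 0, SEEK_END);
file_size = ftell(file);
fseek(file, 0, SEEK_SET);
source_vert = (char*)malloc(file_size + 1);
fread(source_vert, 1, file_size, file);
source_vert[file_size] = '\0'; // the fix: terminate the string
fclose(file);                  // also close before reusing `file`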
I am currently working on loading a bitmap file in C. I am kind of new to C, and I ran into a problem: I have a file pointer that reads unsigned chars into a struct of rgb pixels (it's called rgb, but it reads in the order b, g, r plus padding, which is the default layout of the bitmap file format). My file is 12x12 pixels, and when it reaches row 9 it puts only the value 204 in each component, while the image is white (i.e. all components = 255). All the components before that equal 255.
EDIT: I changed the enum to three defined values returned for the image state (didn't load, blue, not blue).
EDIT2: I edited the code, but now im equals 0xffffffff and cannot be accessed.
here's the code:
int CalaculateBlueness()
{
    bih bih;
    bfh bfh;
    int counterblue = 0;
    hsv hsv;

    FILE *filePtr = fopen("C:\\Users\\mishe\\Desktop\\white.bmp", "rb");
    //if the file doesn't exist in memory
    if (filePtr == NULL)
    {
        return IMAGE_NOT_LOADED;
    }

    //read the bitmap file header
    fread(&bfh, sizeof(bfh), 1, filePtr);
    //verify that this is a bmp file by check bitmap id
    if (bfh.bitmap_type[0] != 'B' || bfh.bitmap_type[1] != 'M')
    {
        fclose(filePtr);
        return IMAGE_NOT_LOADED;
    }
    fclose(filePtr);

    //ensure that the filePtr will point at the start of the image info header
    filePtr = fopen("C:\\Users\\mishe\\Desktop\\white.bmp", "rb");
    fseek(filePtr, 14, SEEK_CUR);

    //read the bitmap info header
    fread(&bih, sizeof(bih), 1, filePtr);
    if (bih.bit_count != 24)
    {
        return ERROR_BPP;
    }

    //point the pointer file to the start of the raw data
    //fseek(filePtr, bfh.file_size, SEEK_SET);
    int size = bih.height * WIDTHBYTES(bih.width * 32);
    unsigned char *im = calloc(1, size);

    //put the raw bitmap pixel data in strcut rgb array
    fread(&im, sizeof(size), 1, filePtr);

    //convert each pixel to it's hue value and check if in the range of blue
    for (size_t i = 0; i < bih.height; i++)
    {
        for (size_t j = 0; j < bih.width; j++)
        {
            hsv = rgbpixel_hue(im);
            if (hsv.h > 190 && hsv.h < 250)
            {
                counterblue++;
            }
            fseek(im, 3, SEEK_CUR);
        }
    }

    //check if more than 80% of the image is blue and return the defined state according to the result
    if (counterblue > BLUE_THRESHOLD * (bih.height * bih.width))
    {
        return BLUE;
    }
    return NOT_BLUE;
}
Reading in bitmaps is always a difficult thing; there are quite a few points to consider.
fread(&im[i][j].res, sizeof(unsigned char), 1, filePtr);
With this line you read the reserved byte of an RGBQUAD from the file. However, this member is not in the file. The image data in the file consists of scanlines. See below.
After reading the Bitmap File Header, you open the file again. However, you didn't close it. This might succeed, or it might fail because the file was already open, but you don't check the return value of fopen. Anyway, there is no need to do this: after having read the BFH, the file pointer is positioned at the BITMAPINFOHEADER; you can just read it in. And you need to read it in, otherwise you won't know the dimensions of the bitmap.
From MSDN documentation:
The established bitmap file format consists of a BITMAPFILEHEADER structure followed by a BITMAPINFOHEADER [...] structure. An array of RGBQUAD structures (also called a color table) follows the bitmap information header structure. The color table is followed by a second array of indexes into the color table (the actual bitmap data).
For 24 bits-per-pixel bitmaps, there is no color table.
so the sequence is now:
//read the bitmap file header
fread(&bfh, sizeof(bfh), 1, filePtr);
//read the bitmap info header
fread(&bih, sizeof(bih), 1, filePtr);
int bpp= bih.biBitCount;
if (bpp != 24) return 0; // error: must be 24 bpp/ 3 bytes per pixel
Now we must calculate the amount of memory needed to store the image. An image consists of height rows of width pixels. The rows are called scanlines.
For some reason, a scanline is aligned on a 4-byte boundary. This means that the last bytes of a scanline may not be in use. So the amount of memory needed for the bitmap data in the file is the height of the image times the size of a scanline:
// WIDTHBYTES takes # of bits in a scanline and rounds up to nearest word.
#define WIDTHBYTES(bits) (((bits) + 31) / 32 * 4)
int size = bih.biHeight * WIDTHBYTES(bih.biWidth * bih.biBitCount);
unsigned char *im = malloc(size);
Finally you can read the data. If there were a color table, you would have to skip it with a seek, but since there is none, you can read the data directly:
fread(im, size, 1, filePtr);
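As a worked example of the padding formula: for the question's 12-pixel-wide 24 bpp image, 12 * 24 = 288 bits, and (288 + 31) / 32 * 4 = 36 bytes, which is exactly 12 * 3, so that width happens to need no padding. A 13-pixel-wide image would give (13 * 24 + 31) / 32 * 4 = 40 bytes, i.e. one padding byte after the 39 bytes of pixel data.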
Now, to address the pixels, you cannot use a simple two-dimensional notation governed by the width and height of the image, due to the scanlines. You can use the following:
int scanlineSize = WIDTHBYTES(bih.biWidth * bih.biBitCount);
unsigned char *p, *scanline = im;
for (int i = 0; i < bih.biHeight; i++)
{
    p = scanline;
    for (int j = 0; j < bih.biWidth; j++)
    {
        b = *p++; // pixels are stored in B, G, R order
        g = *p++;
        r = *p++;
    }
    scanline += scanlineSize;
}
So to address a 3-byte pixel (x,y), use: im[y*scanlineSize + x*3]
.. except that scanlines are stored bottom-up (for the usual positive biHeight), so the pixel would be at im[(bih.biHeight-1-y)*scanlineSize + x*3].
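To make that addressing concrete, a small helper along those lines (hypothetical, assuming the bottom-up 24 bpp layout described above):

/* Hypothetical helper: returns a pointer to the B, G, R bytes of pixel (x, y)
   in a bottom-up 24 bpp DIB, where the first scanline in memory is the
   bottom row of the image. */
unsigned char *pixelAt(unsigned char *im, int scanlineSize, int height, int x, int y)
{
    return &im[(height - 1 - y) * scanlineSize + x * 3];
}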
UPDATE: complete function
#include <winGDI.h>

// WIDTHBYTES takes # of bits in a scanline and rounds up to nearest word.
#define WIDTHBYTES(bits) (((bits) + 31) / 32 * 4)

unsigned char *readBitmap(char *szFilename)
{
    BITMAPFILEHEADER bfh;
    BITMAPINFOHEADER bih;
    int i, j, size, scanlineSize;
    unsigned char r, g, b, *p, *img, *scanline;
    FILE *filePtr;

    if ((filePtr = fopen(szFilename, "rb")) == 0) return 0;

    //read the bitmap file header
    if (fread(&bfh, sizeof(bfh), 1, filePtr) != 1
        || bfh.bfType != 'MB') {fclose(filePtr); return 0;}

    //read the bitmap info header
    if (fread(&bih, sizeof(bih), 1, filePtr) != 1
        || bih.biSize != sizeof(bih)) {fclose(filePtr); return 0;}
    if (bih.biBitCount != 24) {fclose(filePtr); return 0;} // error: must be 24 bpp/ 3 bytes per pixel

    // allocate memory and read the image
    scanlineSize = WIDTHBYTES(bih.biWidth * bih.biBitCount);
    size = bih.biHeight * scanlineSize;
    if ((img = malloc(size)) == 0) {fclose(filePtr); return 0;}
    if (fread(img, size, 1, filePtr) != 1) {free(img); fclose(filePtr); return 0;}
    fclose(filePtr);

    scanline = img;
    for (i = 0; i < bih.biHeight; i++)
    {
        p = scanline;
        for (j = 0; j < bih.biWidth; j++)
        {
            b = *p++; // B, G, R order
            g = *p++;
            r = *p++;
        }
        scanline += scanlineSize;
    }
    return img;
}
Close filePtr before opening the file again:
fclose(filePtr);
filePtr = fopen("C:\\Users\\mishe\\Desktop\\white.bmp", "rb");
and the offset to the raw data is bfh.offset:
//point the pointer file to the start of the raw data
fseek(filePtr, bfh.offset, SEEK_SET);
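Putting both points together, a minimal sketch (assuming, as in the question's custom struct, that bfh has an offset member holding the pixel-data offset): there is actually no need to reopen at all; the same open handle can seek straight to the raw data:

// After fread(&bfh, ...) and fread(&bih, ...) on the same open filePtr:
fseek(filePtr, bfh.offset, SEEK_SET); // jump to the raw pixel data
fread(im, size, 1, filePtr);          // note: im, not &im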
As the title states, I'm trying to read a JPEG file using libjpeg-turbo. I tried this code on a Mac at home and it worked, but now I'm on Windows and it's giving me an Empty input file error when calling jpeg_read_header. I have verified that the file is not empty by doing an fseek/ftell, and the size I get corresponds to what I expect it to be.
My initial thoughts were that I might not have been opening the file in binary mode, so I tried that as well using _setmode, but that didn't seem to help. Here is my code for reference.
int decodeJpegFile(char* filename)
{
    FILE *file = fopen(filename, "rb");
    if (file == NULL)
    {
        return NULL;
    }
    _setmode(_fileno(file), _O_BINARY);

    fseek(file, 0L, SEEK_END);
    int sz = ftell(file);
    fseek(file, 0L, SEEK_SET);

    struct jpeg_decompress_struct info; //for our jpeg info
    struct jpeg_error_mgr err;          //the error handler

    info.err = jpeg_std_error(&err);
    jpeg_create_decompress(&info); //fills info structure
    jpeg_stdio_src(&info, file);
    jpeg_read_header(&info, true); // ****This is where it fails*****
    jpeg_start_decompress(&info);

    int w = info.output_width;
    int h = info.output_height;
    int numChannels = info.num_components; // 3 = RGB, 4 = RGBA
    unsigned long dataSize = w * h * numChannels;

    unsigned char *data = (unsigned char *)malloc(dataSize);
    unsigned char *rowptr;
    while (info.output_scanline < h)
    {
        rowptr = data + info.output_scanline * w * numChannels;
        jpeg_read_scanlines(&info, &rowptr, 1);
    }
    jpeg_finish_decompress(&info);
    fclose(file);

    FILE *outfile = fopen("outFile.raw", "wb");
    size_t data_out = fwrite(data, dataSize, sizeof(unsigned char), outfile);
}
Any help is much appreciated!
The core of the issue is a DLL mismatch. libjpeg is built against msvcrt.dll, whereas the app is built against whatever runtime MSVS2015 provides. They are incompatible, and file pointers opened in one runtime make no sense to the other.
The solution, as per this discussion, is to avoid the jpeg_stdio_src API.
You are passing the C++ true value to jpeg_read_header -- that could also be the reason for the failure. You should pass the TRUE constant instead.
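Concretely, one way to avoid jpeg_stdio_src is to read the file into memory yourself and hand libjpeg a buffer instead; a minimal sketch, assuming jpeg_mem_src is available (it is in libjpeg-turbo and in libjpeg 8 and later):

// Read the whole file into memory, then decode from the buffer, so that
// no FILE* ever crosses the runtime/DLL boundary.
fseek(file, 0L, SEEK_END);
long sz = ftell(file);
fseek(file, 0L, SEEK_SET);

unsigned char *buf = malloc(sz);
fread(buf, 1, sz, file);
fclose(file);

jpeg_mem_src(&info, buf, (unsigned long)sz);
jpeg_read_header(&info, TRUE); // TRUE, not the C++ true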
I'm trying to load an MD2 model, but I can't seem to get the vertices to draw correctly. I'm not loading UVs or normals at the moment; I just want to see the model appear correctly in a single frame, then take it from there.
Here are my md2 structures (mostly taken from here):
struct v3
{
    union
    {
        struct
        {
            union { float x; float r; };
            union { float y; float g; };
            union { float z; float b; };
        };
        float At[3];
    };
};

struct md2_header
{
    unsigned int Magic;
    unsigned int Version;
    unsigned int TextureWidth;
    unsigned int TextureHeight;
    unsigned int FrameSize;
    unsigned int NumTextures;
    unsigned int NumVertices;
    unsigned int NumUVs;
    unsigned int NumTrigs;
    unsigned int NumGLCommands;
    unsigned int NumFrames;
    unsigned int OffsetTextures;
    unsigned int OffsetUVs;
    unsigned int OffsetTrigs;
    unsigned int OffsetFrames;
    unsigned int OffsetGLCommands;
    unsigned int OffsetEnd;
};

struct md2_vertex
{
    unsigned char At[3];
    unsigned char NormalIndex;
};

struct md2_frame
{
    float Scale[3];
    float Translate[3];
    char Name[16];
    md2_vertex *Vertices;
};

struct md2_skin
{
    char Name[64];
};

struct md2_uv
{
    unsigned short u;
    unsigned short v;
};

struct md2_triangle
{
    unsigned short Vertices[3];
    unsigned short UVs[3];
};

struct md2_model
{
    md2_header Header;
    md2_uv *UVs;
    md2_triangle *Triangles;
    md2_frame *Frames;
    md2_skin *Skins;
    int *GLCommands;
    unsigned int Texture;
    unsigned int VAO, VBO;
};
And here's my simple loading function:
void MD2LoadModel (char *FilePath, md2_model *Model)
{
    FILE *File = fopen (FilePath, "rb");
    if (!File)
    {
        fprintf (stderr, "Error: couldn't open \"%s\"!\n", FilePath);
        return;
    }

#define FREAD(Dest, Type, Count)\
    fread(Dest, sizeof(Type), Count, File)
#define FSEEK(Offset)\
    fseek(File, Offset, SEEK_SET)
#define ALLOC(Type, Count)\
    (Type *)malloc(sizeof(Type) * Count)

    /* Read Header */
    FREAD(&Model->Header, md2_header, 1);
    if ((Model->Header.Magic != 844121161) ||
        (Model->Header.Version != 8))
    {
        fprintf (stderr, "Error: bad md2 Version or identifier\n");
        fclose (File);
        return;
    }

    /* Memory allocations */
    Model->Skins = ALLOC(md2_skin, Model->Header.NumTextures);
    Model->UVs = ALLOC(md2_uv, Model->Header.NumUVs);
    Model->Triangles = ALLOC(md2_triangle, Model->Header.NumTrigs);
    Model->Frames = ALLOC(md2_frame, Model->Header.NumFrames);
    Model->GLCommands = ALLOC(int, Model->Header.NumGLCommands);

    /* Read model data */
    FSEEK(Model->Header.OffsetTextures);
    FREAD(Model->Skins, md2_skin, Model->Header.NumTextures);
    FSEEK(Model->Header.OffsetUVs);
    FREAD(Model->UVs, md2_uv, Model->Header.NumUVs);
    FSEEK(Model->Header.OffsetTrigs);
    FREAD(Model->Triangles, md2_triangle, Model->Header.NumTrigs);
    FSEEK(Model->Header.OffsetGLCommands);
    FREAD(Model->GLCommands, int, Model->Header.NumGLCommands);

    /* Read frames */
    FSEEK(Model->Header.OffsetFrames);
    for (int i = 0; i < Model->Header.NumFrames; i++)
    {
        /* Memory allocation for vertices of this frame */
        Model->Frames[i].Vertices = (md2_vertex *)
            malloc(sizeof(md2_vertex) * Model->Header.NumVertices);

        /* Read frame data */
        FREAD(&Model->Frames[i].Scale, v3, 1);
        FREAD(&Model->Frames[i].Translate, v3, 1);
        FREAD(Model->Frames[i].Name, char, 16);
        FREAD(Model->Frames[i].Vertices, md2_vertex, Model->Header.NumVertices);
    }

    v3 *Vertices = ALLOC(v3, Model->Header.NumVertices);
    md2_frame *Frame = &Model->Frames[0];
    For(u32, i, Model->Header.NumVertices)
    {
        Vertices[i] = V3(
            (Frame->Vertices[i].At[0] * Frame->Scale[0]) + Frame->Translate[0],
            (Frame->Vertices[i].At[1] * Frame->Scale[1]) + Frame->Translate[1],
            (Frame->Vertices[i].At[2] * Frame->Scale[2]) + Frame->Translate[2]);
    }

    glGenBuffers(1, &Model->VBO);
    glBindBuffer(GL_ARRAY_BUFFER, Model->VBO);
    glBufferData(GL_ARRAY_BUFFER, Model->Header.NumVertices * sizeof(v3), Vertices, GL_STATIC_DRAW);

    glGenVertexArrays(1, &Model->VAO);
    glBindVertexArray(Model->VAO);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glBindVertexArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    fclose (File);
    free(Vertices);

#undef FSEEK
#undef FREAD
#undef ALLOC
}
Only passing the vertex data. From my understanding, Header->NumVertices is the number of vertices in each frame. So I'm taking an arbitrary frame (frame 0 in this case) and reading its uncompressed vertex data into Vertices.
Now, I read in a book that Quake had its y and z axes flipped, but that still didn't change much.
Here's how I'm drawing the model:
GLuint Shader = Data->Shaders.Md2Test;
ShaderUse(Shader);
ShaderSetM4(Shader, "view", &WorldToView);
ShaderSetM4(Shader, "projection", &ViewToProjection);

glBindVertexArray(DrFreak.VAO);
{
    ModelToWorld = m4_Identity;
    ShaderSetM4(Shader, "model", &ModelToWorld);
    glDrawArrays(GL_TRIANGLES, 0, DrFreak.Header.NumVertices);
}
glBindVertexArray(0);
The matrices are calculated in a CameraUpdate function, which I can verify is working correctly because everything else in the scene renders properly except the MD2 model. See:
Everything in yellow is supposed to be the MD2 model.
Here are my shaders (pretty much the same shaders as for the crates and planes, except there's only one 'in' variable, the position, and no UVs):
#version 330 core
layout (location = 0) in vec3 position;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(position, 1.0f);
}

#version 330 core
out vec4 color;

void main()
{
    color = vec4(1, 1, 0, 1);
}
I've been stuck here for a couple of days. I stepped into the loading code and I seem to be getting valid values. I'm not sure what the issue is. What am I doing wrong/missing?
Any help is appreciated.
I fixed the problem by duplicating the vertices/UVs, getting them from the triangles data. I didn't have to flip the 't' UV coordinate like many tutorials do. I switched the y and z coordinates because they're flipped.
u32 NumVerts = Model->Header.NumTrigs * 3;
u32 NumUVs = NumVerts;
v3 *Vertices = ALLOC(v3, NumVerts);
v2 *UVs = ALLOC(v2, NumUVs);

md2_frame *Frame = &Model->Frames[0]; // render first frame for testing
For(u32, i, Model->Header.NumTrigs)
{
    For(u32, j, 3)
    {
        u32 VertIndex = Model->Triangles[i].Vertices[j];
        Vertices[i * 3 + j] = V3(
            (Frame->Vertices[VertIndex].At[0] * Frame->Scale[0]) + Frame->Translate[0],
            (Frame->Vertices[VertIndex].At[2] * Frame->Scale[2]) + Frame->Translate[2],
            (Frame->Vertices[VertIndex].At[1] * Frame->Scale[1]) + Frame->Translate[1]);

        u32 UVIndex = Model->Triangles[i].UVs[j];
        UVs[i * 3 + j] = V2(
            Model->UVs[UVIndex].u / (r32)Model->Header.TextureWidth,
            Model->UVs[UVIndex].v / (r32)Model->Header.TextureHeight);
    }
}

glGenVertexArrays(1, &Model->VAO);
glBindVertexArray(Model->VAO);

glGenBuffers(1, &Model->VBO);
glBindBuffer(GL_ARRAY_BUFFER, Model->VBO);
glBufferData(GL_ARRAY_BUFFER, NumVerts * sizeof(v3), Vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

u32 UVBuffer;
glGenBuffers(1, &UVBuffer);
glBindBuffer(GL_ARRAY_BUFFER, UVBuffer);
glBufferData(GL_ARRAY_BUFFER, NumUVs * sizeof(v2), UVs, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);

glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
I will probably use indexed arrays and glDrawElements, but for my testing purposes glDrawArrays is good enough. If anyone knows of a better way to do all this, feel free to leave a comment.
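One detail worth noting with this fix (an observation, not from the original post): the draw call from the question also needs its count updated, since Header.NumVertices no longer matches the de-indexed buffer:

// count must be NumTrigs * 3 (the de-indexed vertex count), not Header.NumVertices
glDrawArrays(GL_TRIANGLES, 0, DrFreak.Header.NumTrigs * 3);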
And there's Dr Freak chillin'
I have a 1280 x 720 pixel .bmp file that I want to load into image2, which is declared as follows:
uint8_t *imageByte = NULL;
image2 is a file named home1.bmp.
I want to read the bmp file and convert the .bmp image to the byte array imageByte, which I will then use to compare with home.bmp.
I'm relatively new to C programming, so can anyone tell me how I should go about this? Thanks!
This is part of my bmp image comparison code, which compares image1 with image2:
#define BYTES_PER_PIXEL 4
#define BITMAP_HEADER 54

int temp_width = 1280;
int temp_height = 720;
int temp_x = 0;
int temp_y = 0;
uint8_t k = 0;
char* image1 = "E:\\home.bmp";

fp = fopen(image1, "rb");
fseek(fp, 0, SEEK_SET);
fseek(fp, BITMAP_HEADER, SEEK_SET);

for (temp_y = 0; temp_y < temp_height; temp_y++)
{
    temp_x = 0;
    for (temp_x = 0; temp_x < (temp_width * BYTES_PER_PIXEL); temp_x++)
    {
        int read_bytes = fread(&k, 1, 1, fp);
        if (read_bytes != 0)
        {
            if (k != imageByte[temp_x])
            {
                printf("CompareImage :: failed \n");
                fclose(fp);
            }
        }
        else
        {
            printf("CompareImage :: read failed \n");
        }
    }
}
printf("CompareImage :: passed \n");
As mentioned above, a BMP file starts with:
1] A file header that gives information about the file, like its size, etc.
2] A BMP information header that gives more information about the BMP properties.
These structures are of fixed size.
The following link seems to be a perfect reference for you:
http://paulbourke.net/dataformats/bmp/
You should be able to get rid of the byte-by-byte comparison, which is costly in most cases, just by reading the two header structures from the beginning and comparing them.
Do the byte-by-byte comparison only if the headers match.
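If the headers do match and you still need the pixel bytes, here is a minimal sketch of filling imageByte (the helper name readBmpBytes is hypothetical, and the sizes come from the question's defines; it assumes home1.bmp really is 1280 x 720 at 4 bytes per pixel with a 54-byte header):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BYTES_PER_PIXEL 4
#define BITMAP_HEADER 54

/* Hypothetical helper: reads the pixel data of a fixed-size BMP into a
   freshly allocated byte array. Returns NULL on any failure. */
uint8_t *readBmpBytes(const char *path)
{
    size_t size = 1280 * 720 * BYTES_PER_PIXEL; /* pixel data only */
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return NULL;

    uint8_t *bytes = malloc(size);
    if (bytes == NULL) { fclose(fp); return NULL; }

    fseek(fp, BITMAP_HEADER, SEEK_SET); /* skip the 54-byte header */
    size_t got = fread(bytes, 1, size, fp);
    fclose(fp);

    if (got != size) { free(bytes); return NULL; }
    return bytes;
}

/* Usage: imageByte = readBmpBytes("E:\\home1.bmp"); */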
Hope it helps
I am trying to learn graphics programming and I have written a simple OpenGL program that draws a triangle and should shade it red. However, when I call the function glShaderSource for the fragment shader, it causes a segfault.
I don't know why it causes a segfault: the spec page doesn't say anything about the function segfaulting, and the shaders being loaded into memory wrong can't be it either, as the vertex shader is loaded in the same way, and when I call glGetShaderInfoLog and print the log it says the vertex shader compiles fine.
Anyway, here's my code that loads the shaders and links the shading program...
int LoadShader(char* Filename, GLchar* ShaderSource) //dont call this function by itself as it doesnt free its own memory
{
    FILE* z;
    z = fopen(Filename, "rb");
    if(z == NULL) {printf("Error: file \"%s\" does not exist...\n", Filename); return -1;}

    unsigned long len = 0;
    //get file length
    fseek(z, 0, SEEK_END);
    len = ftell(z);
    rewind(z);
    if(len == 0) {printf("Error reading file \"%s\"\n", Filename); return -1;}

    ShaderSource = (char*)malloc((sizeof(char)) * len + 1); //allocate enough bytes for the file
    if(ShaderSource == NULL) {puts("Memory Error"); return -1;}

    size_t result = fread(ShaderSource, 1, len, z);
    if( result != len)
    {
        puts("Reading Error");
        free(ShaderSource);
        ShaderSource = NULL;
        return -1;
    }
    ShaderSource[len] = 0; //make it null terminated
    puts(ShaderSource); //debugging

    fclose(z);
    return 1;
}
//----------------------------------------------------------------------
GLuint MakeProgram(char* VSpath, char* FSpath){
    GLuint VertexShaderID = glCreateShader(GL_VERTEX_SHADER);
    GLuint FragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);

    GLchar* VSsource;
    GLchar* FSsource;
    if(!LoadShader(VSpath, VSsource))
        return -1;
    if(!LoadShader(FSpath, FSsource))
        return -1;

    GLint Result = GL_FALSE;
    int InfoLogLength;

    //compile shaders
    const char* VS = VSsource; // glShaderSource needs a const char
    glShaderSource(VertexShaderID, 1, &VS, NULL); //we use NULL for length becuase the source is null-terminated
    glCompileShader(VertexShaderID);
    //check
    glGetShaderiv(VertexShaderID, GL_COMPILE_STATUS, &Result);
    glGetShaderiv(VertexShaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);
    char* VSerr;
    VSerr = (char*)malloc(sizeof(char) * InfoLogLength);
    glGetShaderInfoLog(VertexShaderID, InfoLogLength, NULL, &VSerr[0]);
    printf("%s\n", VSerr);
    free(VSerr);
    VSerr = NULL;

    //fragment shader
    const char* FS = FSsource;
    glShaderSource(FragmentShaderID, 1, &FS, NULL);
    glCompileShader(FragmentShaderID);
    //check
    glGetShaderiv(FragmentShaderID, GL_COMPILE_STATUS, &Result);
    glGetShaderiv(FragmentShaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);
    char* FSerr;
    FSerr = (char*)malloc(sizeof(char) * InfoLogLength);
    glGetShaderInfoLog(FragmentShaderID, InfoLogLength, NULL, &FSerr[0]);
    printf("%s\n", FSerr);
    free(FSerr);
    FSerr = NULL;

    //link program
    GLuint ProgramID = glCreateProgram();
    glAttachShader(ProgramID, VertexShaderID);
    glAttachShader(ProgramID, FragmentShaderID);
    glLinkProgram(ProgramID);
    //check program
    glGetProgramiv(ProgramID, GL_LINK_STATUS, &Result);
    glGetProgramiv(ProgramID, GL_INFO_LOG_LENGTH, &InfoLogLength);
    char* err;
    err = (char*)malloc(sizeof(char) * InfoLogLength);
    glGetProgramInfoLog(ProgramID, InfoLogLength, NULL, &err[0]);
    printf("%s\n", err);
    free(err);

    //free the shaders
    free(VSsource);
    VSsource = NULL;
    free(FSsource);
    FSsource = NULL;
    glDeleteShader(VertexShaderID);
    glDeleteShader(FragmentShaderID);

    return ProgramID;
}
Take a closer look at your actual declarations of VSsource (uninitialized), FSsource (uninitialized) and the implementation of LoadShader (...). Because this is C and you do not pass things by reference, any changes made to the ShaderSource pointer inside the LoadShader (...) function as you originally wrote it will not propagate outside the function.
In short, you implemented LoadShader (...) incorrectly. You need to actually change the address stored in the pointer you pass it (since you are allocating this memory inside the function), but you cannot do that since you currently pass it a GLchar*.
As for why GL accepts an uninitialized pointer for your first call to glShaderSource (...), I cannot say. Perhaps you are just extremely lucky? Regardless, you can correct your issue by altering LoadShader to take a GLchar** instead. I will illustrate the necessary changes below:
/* Originally, you made a copy of an uninitialized pointer and then proceeded to
   re-assign this copy a value when you called malloc (...) - you actually need
   to pass a pointer to your pointer so you can update the address outside of
   this function!
*/
int LoadShader(char* Filename, GLchar** pShaderSource) //dont call this function by itself as it doesnt free its own memory
{
    [...]
    *pShaderSource = (GLchar *)malloc((sizeof(GLchar)) * len + 1); //allocate enough bytes for the file
    GLchar* ShaderSource = *pShaderSource;
    [...]
}
GLuint MakeProgram(char* VSpath, char* FSpath){
    [...]
    GLchar* VSsource; /* Uninitialized */
    GLchar* FSsource; /* Uninitialized */
    if(!LoadShader(VSpath, &VSsource)) /* Pass the address of your pointer */
        return -1;
    if(!LoadShader(FSpath, &FSsource)) /* Pass the address of your pointer */
        return -1;
    /*
     * Now, since you did not pass copies of your pointers, you actually have
     * *VALID* initialized memory addresses !
     */
    [...]
}
Alternatively, you could simply modify your function to return the address of the string you allocated. Instead of returning -1 on failure like you do now, you could return NULL. Your function interface would be as simple as this if you chose to go that route: GLchar* LoadShader (char* Filename).
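For completeness, a sketch of that alternative, reusing the question's own reading logic (error messages trimmed for brevity):

GLchar* LoadShader(char* Filename) /* returns NULL on failure */
{
    FILE* z = fopen(Filename, "rb");
    if(z == NULL) return NULL;

    fseek(z, 0, SEEK_END);
    long len = ftell(z);
    rewind(z);
    if(len <= 0) { fclose(z); return NULL; }

    GLchar* src = (GLchar*)malloc(len + 1);
    if(src == NULL) { fclose(z); return NULL; }

    if(fread(src, 1, len, z) != (size_t)len) { free(src); fclose(z); return NULL; }
    src[len] = 0; /* null-terminate for glShaderSource */
    fclose(z);
    return src;
}

/* Usage: GLchar* VSsource = LoadShader(VSpath); if(!VSsource) return -1; */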