I'm studying some very basic 3D rendering using OpenGL, but my teacher uses a PC, where you can call the Sleep() function to delay between the animation's frames.
However, when I try to do the same thing on my Mac, I only get the last frame. If I use the sleep() function, my Mac literally goes into sleep mode.
I've read something about an NSTimer, but I don't know how to implement it.
Here's my code. It works just fine on a PC; I only want to replace the commented-out Sleep() with a Mac equivalent that lets me see each frame during that delay.
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>
#include <GLUT/glut.h>
#include <stdlib.h>
void init (void)
{
glClearColor(0.5,0.5,0.5,0);
glMatrixMode(GL_MODELVIEW);
gluLookAt(0,4,4,0,0,0,0,1,0);
glMatrixMode(GL_PROJECTION);
gluPerspective(90,1,1,12);
}
void Cubes(void)
{
int i=0;
glMatrixMode(GL_MODELVIEW);
glutInitDisplayMode(GL_DEPTH);
for (i=0; i<360;i++)
{
glRotated(1,1,0,0);
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1,0,0);
glutSolidSphere(1,20,20);
glColor3f(0.8,0.8,0);
glutWireSphere(1,20,20);
glColor3f(0,0,1);
glutSolidTorus(1,2,40,40);
glFlush();
glDisable(GL_DEPTH_TEST);
//Sleep(20);
}
}
int main (int argc, char** argv)
{
glutInit(&argc, argv);
glutInitWindowSize(600,600);
glutCreateWindow("Depth Buffer");
init();
glutDisplayFunc(Cubes);
glutMainLoop();
}
Please tell your teacher that he is doing it wrong! To do an animation with GLUT, don't put an animation loop inside the display function; instead register an idle function that re-issues a display (glutPostRedisplay). Also, this code is missing a buffer swap (which is mandatory on Mac OS X when not running in full screen mode, to tell the compositor you're done with rendering).
Also, you shouldn't sleep to pace an animation (the buffer swap will throttle you anyway); measure the time between frames instead and advance the animation by that amount.
Update: Fixed code
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>
#include <GLUT/glut.h>
#include <stdlib.h>
#include <sys/time.h>
#include <math.h>
#include <stdio.h>
double getftime(void)
{
struct timeval tv;
gettimeofday(&tv, NULL);
return tv.tv_sec + tv.tv_usec*1e-6;
}
static double lasttime;
void display(void)
{
int width, height;
double finishtime, delta_t;
static float angle = 0;
width = glutGet(GLUT_WINDOW_WIDTH);
height = glutGet(GLUT_WINDOW_HEIGHT);
glClearColor( 0.5, 0.5, 0.5, 1.0 );
/* combine the clearing flags for more efficient operation */
glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );
glViewport(0, 0, width, height);
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluPerspective( 90, (float)width/(float)height, 1, 12 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
gluLookAt( 0, 4, 4, 0, 0, 0, 0, 1, 0 );
glEnable( GL_DEPTH_TEST );
glRotated( angle, 1, 0, 0 );
glColor3f( 1, 0, 0 );
glutSolidSphere( 1, 20, 20 );
glColor3f( 0.8, 0.8, 0 );
glutWireSphere( 1, 20, 20 );
glColor3f( 0, 0, 1 );
glutSolidTorus( 1, 2, 40, 40 );
glDisable( GL_DEPTH_TEST );
glutSwapBuffers();
finishtime = getftime();
delta_t = finishtime - lasttime;
angle = fmodf(angle + 10*delta_t, 360);
lasttime = finishtime;
}
int main(int argc, char **argv)
{
glutInit( &argc, argv );
glutInitWindowSize( 600, 600 );
/* glutInitDisplayMode must be called before glutCreateWindow, flags are prefixed with GLUT_ not GL */
glutInitDisplayMode( GLUT_RGBA | GLUT_DEPTH | GLUT_DOUBLE );
glutCreateWindow( "Depth Buffer" );
glutDisplayFunc( display );
/* register glutPostRedisplay for continuous animation */
glutIdleFunc(glutPostRedisplay);
lasttime = getftime();
glutMainLoop( );
}
I concur with the above posts that your teacher is wrong. However, to answer your question: use usleep(useconds_t useconds) to sleep on Mac OS X. Here's the man page:
https://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man3/usleep.3.html
Note that the sleep time is in microseconds, not milliseconds as it is on Windows. Also note that, as on Windows, it will sleep at least the time specified; it might sleep a whole lot longer, which is one of the many reasons you shouldn't use it, and should instead measure the time between frames and update accordingly.
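In code, the direct translation of the commented-out Sleep(20) would be something like this sketch (again, not recommended for pacing an animation):
#include <unistd.h> /* declares usleep() */
/* Windows Sleep(20) waits 20 milliseconds; usleep() takes microseconds: */
usleep(20 * 1000);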
Cheers!
Related
I'm creating an OpenGL texture using the default function glGenTextures. When the OpenGL version is set to 3.0 everything works fine, but when I override it with 4.2, glGenTextures starts to throw error #1282 (invalid operation). What am I doing wrong?
Here's the code segment I've tested:
#include "GL/freeglut.h"
#include "GL/gl.h"
#define MAJOR_GL_VERSION 3
#define MINOR_GL_VERSION 0
int w = 200;
int h = 200;
const char* title = "title";
int main(int argc, char **argv) /* glutInit expects non-const char** */
{
puts("Overriding default OpenGL version...");
glutInitContextVersion(MAJOR_GL_VERSION, MINOR_GL_VERSION);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA);
glutInitWindowSize(w, h);
glutCreateWindow(title);
printf("Using OpenGL Version: %s\n=========\n", (char*)glGetString(GL_VERSION));
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, w, h, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);
glShadeModel(GL_SMOOTH);
glEnable( GL_ALPHA_TEST );
glEnable( GL_BLEND );
GLenum error;
GLuint id = 0;
glGenTextures(1, &id);
if((error = glGetError()) != GL_NO_ERROR || id == 0)
{
printf("Gl error: %s (errno %i)\n", gluErrorString(error), error);
return 0;
}
while (1) { }
return 0;
}
The error probably does not happen on the line you expect it to. Chances are high that one of the calls before glGenTextures is the problem. None of these lines
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, w, h, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glShadeModel(GL_SMOOTH);
is allowed in an OpenGL core profile. Profiles were introduced in OpenGL 3.2, so the core-profile request has no effect when you request a 3.0 context. But with 3.2+, you get a core profile, which removed a lot of the old fixed-function functionality.
You can either remove the lines mentioned above and replace them with core-profile-compatible code, or explicitly request a compatibility profile (glutInitContextProfile(GLUT_COMPATIBILITY_PROFILE)) if you want to stick to the fixed-function pipeline.
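A minimal sketch of the latter option, assuming your driver exposes a compatibility profile (only the context-init calls change):
glutInitContextVersion(MAJOR_GL_VERSION, MINOR_GL_VERSION); /* e.g. 4, 2 */
glutInitContextProfile(GLUT_COMPATIBILITY_PROFILE); /* fixed-function calls stay valid */
glutInit(&argc, argv);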
I keep calling glutMainLoopEvent to process the graphics. However, after someone closes the window, I would like to exit the loop and show "Code reached here.". It seems that when the window is closed, an exit function is called and the entire application stops, while I need the application to continue. How should I fix the code?
#include <stdio.h>
#include <GL/freeglut.h>
//display function - draws a triangle rotating about the origin
void cback_render()
{
//keeps track of rotations
static float rotations = 0;
//OpenGL stuff for triangle
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glRotatef(rotations, 0, 0, 1);
glBegin(GL_TRIANGLES);
glVertex3f(0,0,0);
glVertex3f(1,0,0);
glVertex3f(0,1,0);
glEnd();
//display on screen
glutSwapBuffers();
//rotate triangle a little bit, wrapping around at 360°
if (++rotations > 360) rotations -= 360;
}
void timer(int value )
{
glutPostRedisplay();
glutMainLoopEvent();
glutTimerFunc(30, timer, 1);
}
int main(int argc, char **argv)
{
//initialisations
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_DEPTH);
glutInitWindowPosition(100, 100);
glutInitWindowSize(512, 512);
//create window and register display callback
glutCreateWindow("freegluttest");
glutDisplayFunc (cback_render);
glutTimerFunc(30, timer, 1);
//loop forever
long i=0;
while(1)
{
printf("[%ld]\n",i);
i++;
glutMainLoopEvent();
}
printf("Code reached here.");
return 0;
}
Use GLUT_ACTION_ON_WINDOW_CLOSE to allow your program to continue when a window is closed:
glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS);
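A sketch of how that fits into your main(): set the option after glutInit, and use glutGetWindow() (which returns 0 once the current window has been destroyed) as the loop's exit condition:
glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS);
glutCreateWindow("freegluttest");
/* ... register callbacks as before ... */
while (glutGetWindow() != 0) /* 0 means the window was closed */
{
    glutMainLoopEvent();
}
printf("Code reached here.");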
Sources:
http://www.lighthouse3d.com/cg-topics/glut-and-freeglut/
http://freeglut.sourceforge.net/docs/api.php
I want to get started with modern OpenGL while reading the SuperBible, by coding a cellular automaton using a 2D array texture of size 1280x1280x2 and computing the next state into the other layer in a compute shader. The idea is shamelessly stolen from the glumpy examples.
However, with that ambition in mind, I got confused even trying to display the texture, before ever passing the samplers into shaders.
Below I've included both the generator, which works fine, and the piece of code that contains the problem.
gen
#!/usr/bin/env perl
use strict;
use warnings;
sub get_arg {
return (scalar @ARGV == 0) ? shift : shift @ARGV;
}
my $size = get_arg 1280;
my $rate = get_arg ($size >> 1);
my $symbol = (sub { ((shift) < $rate) ? '*' : '_' } );
print "$size\n";
for (0..$size) {
print $symbol->(int(rand() * $size)) for (0..$size);
print "\n";
}
code
#include <stdio.h>
#include <stdbool.h>
#include <assert.h>
// including opengl libraries on linux/osx
//include glew
#include <GL/glew.h>
//include opengl
#if defined (__APPLE_CC__)
#include <OpenGL/gl3.h>
#else
#include <GL/gl3.h> /* assert OpenGL 3.2 core profile available. */
#endif
//include glfw3
#define GLFW_INCLUDE_GL3 /* don't drag in legacy GL headers. */
#define GLFW_NO_GLU /* don't drag in the old GLU lib - unless you must. */
#include <GLFW/glfw3.h>
// ----------- the program itself
GLFWwindow *g_window;
#define SIZE 1280
#define WIDTH SIZE
#define HEIGHT SIZE
#define DEPTH 2
void init_glfw(const char *name) {
// start GL context and O/S window using the GLFW helper library
assert(glfwInit());
#if defined(__APPLE_CC__)
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
#endif
g_window = glfwCreateWindow(WIDTH, HEIGHT, name, NULL, NULL);
assert(g_window != NULL);
glfwMakeContextCurrent(g_window);
// start GLEW extension handler
glewExperimental = GL_TRUE;
glewInit();
// tell GL to only draw onto a pixel if the shape is closer to the viewer
glEnable(GL_DEPTH_TEST); // enable depth-testing
glDepthFunc(GL_LESS); // depth-testing interprets a smaller value as "closer"
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
}
typedef enum { FILLED = '*', EMPTY = '_' } SYMBOL;
void load_array(GLubyte array[SIZE * SIZE * DEPTH], FILE *stream) {
int c; /* int, not char, so the EOF check below is reliable */
for(int i = 0; i < SIZE; ++i) {
for(int j = 0; j < SIZE; ++j) {
bool approved = false;
GLubyte *it = &array[SIZE * i + j];
while(!approved) {
approved = true;
c = getc(stream);
assert(c != EOF);
switch(c) {
case FILLED:
*it = 0x00;
break;
case EMPTY:
*it = 0xff;
break;
default:
approved = false;
break;
}
}
assert(*it == 0x00 || *it == 0xff);
it[SIZE * SIZE] = it[0];
}
}
}
GLuint create_2d_texture() {
static GLuint texture = 0;
assert(texture == 0);
static GLubyte field[SIZE * SIZE << 1];
load_array(field, stdin);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D_ARRAY, texture);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_ALPHA8, SIZE, SIZE, DEPTH);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, SIZE, SIZE, DEPTH, GL_ALPHA, GL_UNSIGNED_BYTE, field);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
return texture;
}
void display() {
GLuint texture = create_2d_texture();
assert(texture != 0);
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT_AND_BACK);
while(!glfwWindowShouldClose(g_window)) {
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, texture);
glPushMatrix();
glEnable(GL_TEXTURE_2D_ARRAY);
glBegin(GL_QUADS);
glTexCoord3s(0, SIZE, 0); glVertex3f( 0.0f, 0.0f, 0.0 );
glTexCoord3s(SIZE, SIZE, 0); glVertex3f( SIZE, 0.0f, 0.0 );
glTexCoord3s(0, SIZE, 0); glVertex3f( SIZE, SIZE, 0.0 );
glTexCoord3s(0, 0, 0); glVertex3f( 0.0f, SIZE, 0.0 );
glEnd();
glPopMatrix();
glfwSwapBuffers(g_window);
glfwPollEvents();
if(glfwGetKey(g_window, GLFW_KEY_ESCAPE)) {
glfwSetWindowShouldClose(g_window, 1);
}
}
}
int main() {
init_glfw("I want to display a texture");
display();
glfwDestroyWindow(g_window);
glfwTerminate();
}
Could you help me analyse the issues with displaying the 2D array texture on screen, please? What I am trying to achieve is to fill the whole window with random black and white, but so far I have only ended up more confused, piling on layers from googled solutions and man pages.
I am not asking for working code, just a comprehensible explanation that would help me get through this problem.
glEnable(GL_TEXTURE_2D_ARRAY);
That gave you an OpenGL error. Even in compatibility profiles, you cannot enable an array texture of any kind. Why?
Because you cannot use fixed-function processing with array textures at all. You cannot use glTexEnv to fetch from an array texture; they're wholly and completely shader-based constructs.
So if you want to use an array texture, you must use a shader.
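For illustration, here is a minimal fragment-shader sketch (written as a C string; the identifiers field, tex_coord and frag_color are made up for this example) showing how an array texture is sampled in GLSL:
static const char *frag_src =
    "#version 150 core\n"
    "uniform sampler2DArray field;\n" /* the 2D array texture */
    "in vec3 tex_coord;\n" /* the .z component selects the layer */
    "out vec4 frag_color;\n"
    "void main() { frag_color = texture(field, tex_coord); }\n";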
#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>
#include <GL/glut.h>
void changeSize(int w, int h)
{
if(h == 0)
h = 1;
float ratio = (float)w / (float)h; /* cast to avoid integer division */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glViewport(0, 0, w, h);
gluPerspective(40,ratio,1.5,20);
glMatrixMode(GL_MODELVIEW);
}
void renderScene(void)
{
glClear(GL_COLOR_BUFFER_BIT );
glLoadIdentity();
glTranslatef(0.0,0.0,-5.0);
glDrawArrays(GL_TRIANGLES,0,3);
glutSwapBuffers();
}
void init()
{
GLfloat verts[] = {
0.0, 1.0,
-1.0, -1.0,
1.0, -1.0
};
GLuint bufferid;
glGenBuffers(1,&bufferid);
glBindBuffer(GL_ARRAY_BUFFER,bufferid);
glBufferData(GL_ARRAY_BUFFER,sizeof(verts),verts,GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0,2,GL_FLOAT,GL_FALSE,0,0);
if(glGetError()==GL_NO_ERROR)
printf("no error");
}
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(100,100);
glutInitWindowSize(500,500);
glutCreateWindow("MM 2004-05");
glewInit();
init();
glutDisplayFunc(renderScene);
glutReshapeFunc(changeSize);
if (GLEW_ARB_vertex_program && GLEW_ARB_fragment_program)
printf("Ready for GLSL\n");
else {
printf("No GLSL support\n");
//exit(1);
}
glutMainLoop();
return 0;
}
When using glGenBuffers, my screen turns out black and shows no error. If I draw some other shape without using buffer objects it is displayed, but not with buffer objects.
OpenGL version: 3.0
Operating system: Ubuntu
IDE: Eclipse
When using glGenBuffers together with generic vertex attributes, you're on the programmable OpenGL 3.0 path. To draw anything that way you need to supply shaders, hence why the screen is black: your triangle isn't being shaded.
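A minimal sketch of what supplying shaders could look like here (GLSL 1.20 written as C strings; make_program and pos are made-up names for this example), to be called once in init() after glewInit():
static GLuint make_program(void)
{
    const char *vs_src =
        "#version 120\n"
        "attribute vec2 pos;\n"
        "void main() { gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 0.0, 1.0); }\n";
    const char *fs_src =
        "#version 120\n"
        "void main() { gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0); }\n";
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glBindAttribLocation(prog, 0, "pos"); /* matches glVertexAttribPointer(0, ...) */
    glLinkProgram(prog);
    return prog;
}
/* in init(): glUseProgram(make_program()); */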
You are using calls for generic vertex attributes here:
glEnableVertexAttribArray(0);
glVertexAttribPointer(0,2,GL_FLOAT,GL_FALSE,0,0);
Generic vertex attributes can only be used in combination with shaders. As long as you're using the fixed function pipeline, you also have to use fixed function vertex attributes.
The corresponding calls using fixed function attributes are:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);
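Putting it together, init() from the question would then look like this sketch (only the last two calls change):
void init()
{
    GLfloat verts[] = {
        0.0, 1.0,
        -1.0, -1.0,
        1.0, -1.0
    };
    GLuint bufferid;
    glGenBuffers(1, &bufferid);
    glBindBuffer(GL_ARRAY_BUFFER, bufferid);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, 0); /* reads from the bound GL_ARRAY_BUFFER */
}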
I am trying to draw a line using OpenGL whose two endpoint coordinates are set in the idle function, but the line is not getting drawn while I am sending the endpoint coordinates over the network using sockets.
Below is a snippet of the code:
int main(int argc, char **argv)
{
glutInit(&argc,argv);
glutInitWindowSize( 1024,1024); /* A x A pixel screen window */
glutInitDisplayMode( GLUT_RGB | GLUT_SINGLE);
glutCreateWindow("Skeleton Tracker"); /* window title */
glutDisplayFunc(display); /* tell OpenGL main loop what */
glutIdleFunc(idle);
//first create the connection then we wil talk about the data transfer...
/*****Code for server connection *****/
processRequest();
return 0;
}
void processRequest()
{
byte_sent = send(ClientSocket,(char*)&msg_pkt,sizeof(MSG_PACKET),0);
ofile<<"\nByte sent for start generating "<<byte_sent<<endl;
Sleep(1000);
memset(buf,0,sizeof(buf));
glutMainLoop();
}
void display(void)
{
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT); /* clear the window */
glColor3f(0.0, 1.0, 0.0); /* draw in green */
glBegin(GL_LINES);
glVertex2f(x[0] , y[0]);
glVertex2f(x[1] , y[1]);
glEnd();
glFlush();
}
void idle(void)
{
printf("\nIn Idle function\n");
nRetVal = recv(ClientSocket , (char*)mainbuf , 192,0);
printf("\nAmount of data received : %d\n" , nRetVal);
memcpy(buf , mainbuf , sizeof(buf)); //buf is of 8 bytes to hold 2 floating nos.
memcpy( &x[p] ,buf , 4); // upto 3
x[p] = x[p]/10.0;
memcpy( &y[p] ,buf+4 , 4); //upto 7
y[p] = y[p]/10.0;
glutPostRedisplay();
}
The design of your program is questionable: you have a blocking recv() call in your idle function, which is not good; the idle function should return as fast as possible so it does not stall your rendering.
Consider creating one thread for rendering and a second thread for the network communication, or at least check in a non-blocking way whether any data is available on the socket before reading (recv'ing) from it in the idle function.
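One possible sketch of such a check, using select() with a zero timeout (this works for both Winsock and POSIX sockets; ClientSocket and the buffers are the ones from your code):
fd_set readset;
struct timeval tv = { 0, 0 }; /* zero timeout: poll, don't block */
FD_ZERO(&readset);
FD_SET(ClientSocket, &readset);
if (select(ClientSocket + 1, &readset, NULL, NULL, &tv) > 0)
{
    nRetVal = recv(ClientSocket, (char*)mainbuf, 192, 0);
    /* ...unpack into x[p], y[p] as before... */
    glutPostRedisplay();
}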
Thanks buddies for your time... actually I forgot to define the orthographic projection matrix before calling glutMainLoop:
gluOrtho2D(-250, 250, -250, 250);
It's working now.
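For reference, a sketch of where that call can go in main(), before the event loop is entered (selecting the projection matrix explicitly is my addition, since gluOrtho2D multiplies onto whichever matrix is current):
glutDisplayFunc(display);
glutIdleFunc(idle);
glMatrixMode(GL_PROJECTION);
gluOrtho2D(-250, 250, -250, 250);
glMatrixMode(GL_MODELVIEW);
processRequest(); /* which eventually calls glutMainLoop() */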