How can I take a programmatic screenshot of an OpenGL ES 2.0 scene using GLKit (in iOS 6)?

I've found numerous posts regarding this, but I haven't been able to work out a solution, largely due to the fact that I don't have a very thorough understanding of OpenGL or GLKit.
I added the method described here to my project.
They specifically mention:
Important: You must call glReadPixels before calling
EAGLContext/-presentRenderbuffer: to get defined results unless you're
using a retained back buffer.
I tried, unsuccessfully, to set up a retained back buffer, and given that doing so has 'adverse performance implications', I would rather avoid it.
The problem is, according to a comment in another post:
In GLKit, the GLKView will automatically present itself and discard unneeded renderbuffers at the end of each rendering cycle.
That being the case, how can I call the 'Snapshot' method at the appropriate time when using GLKit?
To date, in iOS 5 I get a weirdly yellow coloured version of the scene (as though there were no other colours) and in iOS 6 I get a pure white image (I imagine because I am using white as the clear colour).
Further, I have no idea what they (Apple) are talking about in this comment:
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
so I have commented out the call in my app. If it matters, my objects are using VBOs with position, texture coords, and colour.
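For context, the read-back at the heart of that snapshot method boils down to something like the following. This is a stripped-down sketch, assuming an RGBA colour renderbuffer and ES 2.0-style calls rather than the OES-suffixed ones; it would need to run after drawing but before the buffer is presented:
// Query the size of the currently bound colour renderbuffer.
GLint width = 0, height = 0;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);

// Read the pixels back before the renderbuffer is presented/discarded.
GLubyte *pixels = (GLubyte *)malloc((size_t)width * (size_t)height * 4);
if (pixels) {
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // ...build an image from 'pixels' here; note the rows come back bottom-up...
    free(pixels);
}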

Related

Rendering DOM elements onto WebGL

I would like to render DOM elements (which I'm otherwise using React for) onto, for example, a plane in a 3D environment (WebGL -> Three.js -> Three Fiber, which so far seems like the best way to go about it, as far as I can figure).
So basically I'd have my app component containing my entire site rendered, as is traditional, into the "root" element. Except I want to inject a layer that utilizes Three Fiber to render the app, and everything I might want my site (or even just a particular component) to contain, onto this plane. That would allow one to make a React site as normal, and behind the scenes (probably in index.js or an intermediate file), instead of the site being rendered "flat" against the screen, I could "tilt" the entire site, give it opacity, maybe have multiple webpages all able to be shuffled like cards, sides of a die, whatever.
If anybody has any suggestions, I'm struggling with how one could go about this.
I'm worried I might have to get kind of low-level (which I'm not totally against), but I was hopeful there was some support for something of this nature. So far I've tried even putting simple elements, like an image element, inside a material element (via Three Fiber) or inside the mesh element (where the material would go), to no avail. I'm just beginning to get my hands dirty, but Google hasn't availed me much for this project, so I turned to good ol' Stack Overflow for my first personal request! 🤯 😲 👹
Thanks in advance so much for your time and consideration - cheers,
John Thummel
This is possible using drei/Html. Some impressions:
https://twitter.com/0xca0a/status/1398633764931178498
https://twitter.com/0xca0a/status/1407758860203573251

Failing to improve the pose of a point cloud with the ADF origin

I save the point clouds of a scene, together with their quaternions, in a PCD file.
At first, I only used the pose w.r.t. the device start (see second image) to get the quaternion. I discovered a drifting problem, which I mentioned here.
Therefore, I learned the scene with area learning (see first image) by walking around the table.
After that, I'm loading the area description file (ADF) to overcome the drifting. I wait for the first loop closure/localization in the onPoseAvailable callback.
Then, in the onXyzIjAvailable callback, I use its timestamp to get a valid pose w.r.t. the ADF origin (baseFrame = COORDINATE_FRAME_AREA_DESCRIPTION and targetFrame = COORDINATE_FRAME_DEVICE).
And I save the poseAtTime(xyzIj.timestamp) and the xyzIj in a *.PCD file.
But the drifting seems to get even worse (see third image). The clouds are better oriented towards the origin, but they aren't as close together as in the image without the ADF.
Am I doing something wrong?
Is there any way to improve this?
You should set up the pose callback such that only poses with respect to the ADF base frame are returned; poses with respect to start of service should not be returned. The drift will not go away, but it will become minimal and will autocorrect. The ADF needs to be well trained to keep up the pose return rate.

How can I exclude an element from an Angular scope?

My premise was wrong. While AngularJS was certainly slowing things down, it was not due to the problem I describe below. However, it was flim's answer to my question (how to exclude an element from an Angular scope) that allowed me to prove this.
I'm building a site that generates graphs using d3+Raphael from AJAX-fetched data. This results in a LOT of SVG or VML elements in the DOM, depending on what type of chart the user chooses to render (pie has few; line and stacked bar have many, for example).
I'm running into a problem where entering text into text fields controlled by AngularJS brings Firefox to a crawl. I type a few characters, then wait 2-3 seconds for them to suddenly appear, then type a few more, etc. (Chrome seems to handle this a bit better.)
When there is no graph on the page (the user has not provided enough data for one to be generated), editing the contents of these text fields is fine. I assume AngularJS is having trouble when it tries to update the DOM and there are hundreds of SVG or VML elements it has to look through.
The graph, however, contains nothing that AngularJS needs to worry itself with. (There are, however, UI elements both before and after the graph that it DOES need to pay attention to.)
I can think of two solutions:
put the graph's DIV outside the AngularJS controller, and use CSS to position it where it's actually wanted
tell AngularJS, somehow, to ignore the graph's DIV: to skip it over when keeping the view and model in sync
The second option seems preferable to me, since it keeps the document layout sane/semantic. Is there any way to do this? (Or some even better solution I have not thought of?)
Have you tried ng-non-bindable? http://docs.angularjs.org/api/ng.directive:ngNonBindable
<ANY ng-non-bindable>
...
</ANY>

GLUT doesn't properly detect more than 2 keys pressed?

I'm trying to make a small game using (free)GLUT. I know that it's old and there are better alternatives, but currently I prefer to stick with it and use it as much as possible. I program with C.
I'm currently trying to make GLUT detect properly all the keys I press.
I use glutKeyboardFunc, glutKeyboardUpFunc, glutSpecialFunc and glutSpecialUpFunc to detect pressed keys and I store their state in a short array I created (I currently have only 5 usable keys, so I just created a specific array for them).
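For reference, the bookkeeping is essentially this (a trimmed-down sketch; the 256-entry arrays and the function names are placeholders, and my real code only tracks the 5 keys I mentioned):
#include <GL/glut.h>

static unsigned char keyState[256];      // regular keys, indexed by ASCII value
static unsigned char specialState[256];  // special keys, indexed by GLUT_KEY_* constant

static void keyDown(unsigned char key, int x, int y)  { keyState[key] = 1; }
static void keyUp(unsigned char key, int x, int y)    { keyState[key] = 0; }
static void specialDown(int key, int x, int y)        { specialState[key] = 1; }
static void specialUp(int key, int x, int y)          { specialState[key] = 0; }

// Registered once during setup, after glutCreateWindow:
static void registerKeyCallbacks(void)
{
    glutKeyboardFunc(keyDown);
    glutKeyboardUpFunc(keyUp);
    glutSpecialFunc(specialDown);
    glutSpecialUpFunc(specialUp);
}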
However, while everything works fine for 2 keys or fewer, the game doesn't properly detect 3 keys or more. While for some keys it detects the combination properly (that actually happens for only 1 specific combination), for others the functions simply don't detect the third key that I press.
I checked my code a few times, and there is nothing special about the combination that does work.
I also made glutKeyboardFunc and glutSpecialFunc directly print every key press that they receive, and it seems they simply stop working after I press more than 2 keys.
Is it a known issue with GLUT or something? I googled a lot and didn't find anyone with a similar issue.
I am not very familiar with GLUT, but as far as I know you should make sure that your keyboard supports more than 2 input keys at once. This feature is called n-key rollover. This page says that 2-key rollover may be a common value for some keyboards, but you don't need to trust this source.
I'll clarify a point: glutKeyboardFunc registers a callback, i.e., it is invoked for every key press and re-executed over and over again, and all the if-else (or switch-case) statements for various key combinations are executed each time. What this means is that if you were to press 'A', '->' (right arrow) and 'D' all at once, the callback will be executed according to which key-press event was received first, sometimes with a delay, and sometimes the on-screen animation may stop momentarily.
GLUT is purely for educational/learning purposes and is not good for full-blown applications, since that's not what it was designed for. You end up using OS-specific libraries or other toolkits (e.g., Qt) to embed an OpenGL "window" within them and handle the keyboard events, etc. The event handling in those (and/or OS-specific frameworks) is radically different from (and better than) GLUT's.
You may want to keep your simultaneous key presses to a minimum. You may augment it with the mouse to get rid of the jerky response/processing...

Using Silverlight 2 for short audio caching

I'm attempting to use a large number of short sound samples in a game I'm creating in Silverlight 2. The samples are less than 2 seconds long.
I would prefer to load all the audio samples onto the canvas during initialization. I have been adding the media elements to the canvas, with a generic list to manage them. So far, it appears to work.
When I play the sample the first time, it plays perfectly. If it has finished playing and I want to re-use the same element, it cuts off the first part of the sound. To play the sample again, I stop and play the media element.
Is there another method I should use to play the samples so that the audio is not clipped and good performance is obtained?
Also, it's probably a good idea to make sure that all of your audio samples are brought down to the client side initially. Depending on how you set it up, it's possible that the MediaElements are using their progressive download functionality to get the media files from the server. While there's nothing wrong with this per se (browser caching should be helping you out after the initial download), it does mean that you have to deal with the browser cache, and there are some potential issues there.
Possible steps to try:
Mark your audio files as "Content". This will get them balled up in the .xap.
Load your audio files into MemoryStreams (see Application.GetResourceStream method) and call MediaElement.SetSource().
HTH,
Erik
Some comments:
From MSDN:
Try to limit the number of MediaElement objects you have in your application at once. If you have over one hundred MediaElement objects in your application tree, regardless of whether they are playing concurrently or not, MediaFailed events may be raised. The way to work around this is to add MediaElement objects to the tree as they are needed and remove them when they are not.
You could try seeking to the start of the sample, to reset the point currently being played before re-using it, with:
mediaelement.Position = new TimeSpan();
See also MSDN's MediaElement.Position.
One technique you can use, although I'm not sure how well it will work in Silverlight, is to create one large file with all of your samples joined together (probably with a half-second or so of silence between each). Figure out the timecode for each sample, then seek the media element to that position and play. You'll only need as many media elements as simultaneous sounds you want to play.

Resources