Overlapped CCSprites don't respond to touch events even though their z-order is higher (iOS 6)

The CCBigSprite and CCSmallSprite classes inherit from CCSprite; spr1 and spr2 are instances of them.
Both classes override the ccTouchBegan/ccTouchMoved/ccTouchEnded methods from the CCTargetedTouchDelegate protocol.
The problem: normally the touch events work as expected for any sprite instance, until the sprites overlap.
When the sprites overlap in the exact position shown in the image, touching (CCSmallSprite *)spr2 delivers the touch events to (CCBigSprite *)spr1 instead.
When I press spr2, its touch events should fire, not spr1's.
How can I fix this?
Both classes override this method identically:
-(void)onEnter {
[super onEnter];
[[[CCDirector sharedDirector]touchDispatcher]addTargetedDelegate:self
priority:self.touchPriority swallowsTouches:YES];
}
Does that make sense?
Also: setPriority didn't change anything. I'm setting it manually:
[spr2 setTouchPriority:1]; [spr1 setTouchPriority:2]; // a lower value is supposed to mean higher priority
I also tried it the other way around; it didn't help.
Could this be a bug in the cocos2d-iphone 2.0 stable release?

The draw order does not influence the order of touch events.
If you tap on the small sprite in the above image, only the order in which the sprites registered themselves with the CCTouchDispatcher determines whether the small or the big sprite receives the touch events first.
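If you want the small sprite to win regardless of registration order, give it a numerically lower priority when registering: the targeted dispatcher asks delegates with lower priority values first, and a swallowed touch never reaches the sprite underneath. Also note that changing the sprite's own touchPriority property after registration does nothing by itself; in cocos2d-iphone 2.0 you have to go through the dispatcher's setPriority:forDelegate: so it can re-sort its delegate list. A sketch:

```objc
// Register with an explicit priority: a lower value is asked first.
// Give the small, topmost sprite the lower number and swallow the touch
// so it never falls through to the big sprite underneath.
- (void)onEnter {
    [super onEnter];
    [[[CCDirector sharedDirector] touchDispatcher]
        addTargetedDelegate:self priority:self.touchPriority swallowsTouches:YES];
}

// Changing priorities after registration: tell the dispatcher,
// not just the sprite's own property.
CCTouchDispatcher *dispatcher = [[CCDirector sharedDirector] touchDispatcher];
[dispatcher setPriority:1 forDelegate:spr2]; // small sprite: asked first
[dispatcher setPriority:2 forDelegate:spr1];
```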


What is the proper way to animate an array of objects with react-native-reanimated, considering a sub-component is not an option?

I am trying to use the react-native-reanimated v2 to animate an array of Animated.Views.
For my case, there are two constraints:
The objects will eventually animate each other. Because of this I am not able to generate a new sub-component for each draggable object. (If I try to do so, signaling between sub-components probably will be a hell.)
The Rules of Hooks forbids the usage of the hooks within loops or other functions. (But that is what I need as far as I see.)
I made a snack to give the idea about what I try to achieve here (please note, one of the boxes is able to move another one here):
https://snack.expo.dev/#mehmetkaplan/react-native-reanimated-array-animation
But I guess because of item 2 above, the snack does not behave consistently; sometimes the drag does not respond. (The single-box version of the snack responds much more smoothly.)
Coming back to the main question. What is the proper way to animate an array of objects with react-native-reanimated, considering a sub-component is not an option?
Any suggestion is appreciated.
Using hooks in loop
Definitely not something to be proud of but it'll be all right as long as the order of use... calls remains unaffected and all logic happens after the calls. This means during the lifecycle of the component, the number of iterations in the loop should not change.
Basically the order in which all hooks are called should always be the same.
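The point about call order can be made concrete with a toy model of hook storage (an illustration only, not React's actual implementation): state slots are matched to use...() calls purely by the order in which they run during a render, so changing the loop count between renders shifts every later hook onto the wrong slot.

```javascript
// Toy stand-in for React's hook storage: slots are matched to
// useToyState() calls purely by call order within a render.
const slots = [];
let cursor = 0;

function useToyState(initial) {
  if (slots.length <= cursor) slots.push(initial); // first render: allocate a slot
  return slots[cursor++];                          // later renders: match by index
}

function render(component) {
  cursor = 0; // every render replays the use...() calls from the top
  return component();
}

let boxes = ["a", "b", "c"];
const Component = () => boxes.map((name) => `${name} -> ${useToyState(name)}`);

const first = render(Component);
// ["a -> a", "b -> b", "c -> c"] — every box lines up with its own slot

boxes = ["b", "c"]; // the loop count changes between renders...
const second = render(Component);
// ["b -> a", "c -> b"] — ...and every box now reads some other box's state
```

This is exactly why the number of iterations, and hence the number of hook calls, must stay fixed for the lifetime of the component.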
Inconsistent behaviour
Make sure your PanGestureHandler wraps the Animated.View component, otherwise it will not work on mobile devices.
Apart from the odd drag handle (it only works from the Text) and the breakage on mobile devices, I couldn't spot any inconsistency in the behaviour. Nesting the whole Animated.View inside the PanGestureHandler fixes both issues:
<PanGestureHandler onGestureEvent={gestureHandlers[i]}>
<Animated.View style={[styles.box, animatedStyles[i]]} key={`View-${i}`}>
<Text style={{color: "#FFFF00", fontWeight:'bold'}}>{`I am the box #${i}\nDrag Me${i === 1 ? "(I move also Box 0)" : ""}`}</Text>
</Animated.View>
</PanGestureHandler>
Here is a working example,
https://snack.expo.dev/A9IF7ngoC

Simulate a mouse click with IOKit

Backstory:
I want to write a C program to automate clicks in a program running on OS X (in a desktop setting).
I first tried using Quartz Event Services to simulate input events, but then I hit this problem: Simulating mouse clicks on Mac OS X does not work for some applications, and the answers didn't help in my case.
CGEventRef click1_down = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseDown, CGPointMake(posx, posy), kCGMouseButtonLeft);
CGEventSetIntegerValueField(click1_down, kCGMouseEventClickState, 0);
// This down click works about 5% of the time.
CGEventPost(kCGHIDEventTap, click1_down);
usleep(30000);
CGEventRef click1_up = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseUp, CGPointMake(posx, posy), kCGMouseButtonLeft);
CGEventSetIntegerValueField(click1_up, kCGMouseEventClickState, 1);
CGEventPost(kCGHIDEventTap, click1_up);
// I've tried every combination of CGEventSetIntegerValueField, usleep and CFRelease; nothing seems to help.
// The only thing that helps is repeating the line "CGEventPost(kCGHIDEventTap, click1_down);" hundreds of times;
// then the down click works about 80% of the time, which is still not acceptable.
I'm now turning to solution #3 suggested here: How can Mac OS X games receive low-level keyboard input events?
(this might also help How can I simulate the touch events by IOHIDEvent?)
I tried with Karabiner by sending a mouse click on key press:
<item>
<name>Right Mousebutton</name>
<identifier>rightMouseButton</identifier>
<autogen>__KeyToKey__ KeyCode::H, PointingButton::LEFT</autogen>
</item>
And this sends the click 100% of the time, but I want to send the click by writing C code (to have greater control). Though I'm not sure, Karabiner seems to use IOKit to send events, so I think this approach should work in my case if I can send mouse events with IOKit.
So my question is basically: how do I write a C program to simulate a left mouse click with IOKit? The documentation is very sparse and I haven't managed to do it.
I tried getting inspiration from some projects:
https://github.com/tekezo/Karabiner
https://github.com/NoobsArePeople2/manymouse
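For what it's worth, here is a rough C sketch of the lower-level route: posting the click through the HID system with IOHIDPostEvent, the same IOKit path older versions of Karabiner used. This API is barely documented, so treat the field usage as an assumption; it must be linked against the IOKit framework, is macOS-only, and may require elevated privileges.

```c
#include <IOKit/IOKitLib.h>
#include <IOKit/hidsystem/IOHIDLib.h>    /* IOHIDPostEvent */
#include <IOKit/hidsystem/IOHIDShared.h> /* kIOHIDSystemClass, NXEventData */
#include <IOKit/hidsystem/IOLLEvent.h>   /* NX_LMOUSEDOWN, NX_LMOUSEUP */
#include <string.h>
#include <unistd.h>

/* Open a connection to the HID system driver. */
static io_connect_t open_hid_system(void) {
    io_service_t service = IOServiceGetMatchingService(
        kIOMasterPortDefault, IOServiceMatching(kIOHIDSystemClass));
    io_connect_t connect = 0;
    IOServiceOpen(service, mach_task_self(), kIOHIDParamConnectType, &connect);
    IOObjectRelease(service);
    return connect;
}

/* Post a left-button down/up pair at screen position (x, y). */
static void post_left_click(io_connect_t connect, SInt16 x, SInt16 y) {
    IOGPoint pos = { x, y };
    NXEventData event;
    memset(&event, 0, sizeof(event));
    event.mouse.click = 1;        /* single-click count */
    event.mouse.buttonNumber = 0; /* left button */
    IOHIDPostEvent(connect, NX_LMOUSEDOWN, pos, &event, kNXEventDataVersion, 0, 0);
    usleep(30000);
    IOHIDPostEvent(connect, NX_LMOUSEUP, pos, &event, kNXEventDataVersion, 0, 0);
}
```

Whether a given target application honours events injected at this level still has to be verified case by case, just as with the Quartz approach.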

How can I take a programmatic screenshot of an Open GL ES 2.0 scene using GLKit (in iOS 6)?

I've found numerous posts regarding this, but I haven't been able to work out a solution, largely due to the fact that I don't have a very thorough understanding of OpenGL or GLKit.
I added the method described here to my project.
They specifically mention:
Important: You must call glReadPixels before calling
EAGLContext/-presentRenderbuffer: to get defined results unless you're
using a retained back buffer.
I tried unsuccessfully to set up a retained back buffer and given that doing so has 'adverse performance implications' I would rather avoid it.
The problem is, according to a comment in another post:
In GLKit, the GLKView will automatically present itself and discard unneeded renderbuffers at the end of each rendering cycle.
That being the case, how can I call the 'Snapshot' method at the appropriate time when using GLKit?
To date, in iOS 5 I get a weirdly yellow coloured version of the scene (as though there were no other colours) and in iOS 6 I get a pure white image (I imagine because I am using white as the clear colour).
Further, I have no idea what they (Apple) are talking about in this comment:
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
so I have commented out the call in my app. If it matters my objects are using VBOs with position, texture coords and colour.
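One thing worth noting, since the target is GLKit on iOS 5+ anyway: GLKView has a built-in snapshot method that returns a UIImage of the current framebuffer contents and handles the renderbuffer binding and timing itself, which sidesteps the glReadPixels bookkeeping entirely. A sketch, assuming the view controller's view is the GLKView being drawn into:

```objc
// GLKView's -snapshot (iOS 5+) reads the framebuffer back for you.
GLKView *glkView = (GLKView *)self.view;
UIImage *screenshot = [glkView snapshot];
// Then use the image however you like, e.g. save it:
UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);
```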

Identifying that a resolution is virtual on an X11 screen by its API (or extensions)

I'm working on an embedded Linux application that can be used with different PC hardware (displays, specifically).
The application should configure the environment for the highest allowed resolution (obtained via the XRRSizes function from libXrandr).
The problem: with some hardware, choosing the highest option creates a virtual desktop, i.e. a desktop where the real resolution is smaller and you have to scroll with the mouse at the edges of the screen to reach all of it.
Is there a way to detect, within Xlib (or one of its siblings), that I am working with a virtual resolution (in other words, that the resize didn't go as expected)?
Hints for a work around for this situation would also be appreciated...
Thanks
Read this: http://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt
You need to learn the difference between "screen", "output" and "crtc". You need to check the modes available for each of the outputs you want to use, and then properly set the modes you want on the CRTCs, associate the CRTCs with the outputs, and then make the screen size fit the values you set on each output.
Take a look at the xrandr source code for examples: http://cgit.freedesktop.org/xorg/app/xrandr/tree/xrandr.c
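Building on that, here is a rough C sketch of one way to detect the situation: compare the screen size against the union of the areas the active CRTCs actually scan out; if the screen extends past what the CRTCs cover, the remainder is reachable only by scrolling, i.e. a virtual/panning desktop. (A sketch only; it needs a running X server and compiles with -lX11 -lXrandr, and error handling is omitted.)

```c
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    Window root = DefaultRootWindow(dpy);
    int screen_w = DisplayWidth(dpy, DefaultScreen(dpy));
    int screen_h = DisplayHeight(dpy, DefaultScreen(dpy));

    /* Bounding box of the areas actually scanned out by active CRTCs. */
    int max_x = 0, max_y = 0;
    XRRScreenResources *res = XRRGetScreenResources(dpy, root);
    for (int i = 0; i < res->ncrtc; i++) {
        XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, res->crtcs[i]);
        if (crtc->mode != None) {
            if (crtc->x + (int)crtc->width  > max_x) max_x = crtc->x + (int)crtc->width;
            if (crtc->y + (int)crtc->height > max_y) max_y = crtc->y + (int)crtc->height;
        }
        XRRFreeCrtcInfo(crtc);
    }
    XRRFreeScreenResources(res);

    /* Screen bigger than anything displayed => virtual/panning desktop. */
    if (screen_w > max_x || screen_h > max_y)
        printf("virtual resolution: screen %dx%d, displayed %dx%d\n",
               screen_w, screen_h, max_x, max_y);
    else
        printf("resolution is fully displayed (%dx%d)\n", screen_w, screen_h);

    XCloseDisplay(dpy);
    return 0;
}
```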

Using Silverlight 2 for short audio caching

I'm attempting to use a large number of short sound samples in a game I'm creating in Silverlight 2. The samples are less than 2 seconds long.
I would prefer to load all the audio samples onto the canvas during initialization. I have been adding the media elements to the canvas and using a generic list to manage them. So far, it appears to work.
When I play a sample the first time, it plays perfectly. If it has finished playing and I want to re-use the same element, it cuts off the first part of the sound. To play the sample again, I stop and then play the media element.
Is there a better way to play the samples so that the audio is not clipped and good performance is maintained?
Also, it's probably a good idea to make sure that all of your audio samples are brought down to the client side initially. Depending on how you set it up, it's possible that the MediaElements are using their progressive download functionality to get the media files from the server. While there's nothing wrong with this per se (browser caching should be helping you out after the initial download), it does mean that you have to deal with the browser cache, and there are some potential issues there.
Possible steps to try:
Mark your audio files as "Content". This will get them balled up in the .xap.
Load your audio files into MemoryStreams (see Application.GetResourceStream method) and call MediaElement.SetSource().
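The second step might look roughly like this (a sketch; PreloadSample is a made-up helper name, and it assumes the audio files were added to the project with Build Action "Content" so they ship inside the .xap):

```csharp
// Hypothetical helper: pull an audio file out of the .xap and feed it
// straight to a MediaElement, so no progressive download is involved.
private void PreloadSample(MediaElement element, string relativePath)
{
    StreamResourceInfo info = Application.GetResourceStream(
        new Uri(relativePath, UriKind.Relative));
    element.SetSource(info.Stream);
}
```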
HTH,
Erik
Some comments:
From MSDN:
Try to limit the number of MediaElement objects you have in your application at once. If you have over one hundred MediaElement objects in your application tree, regardless of whether they are playing concurrently or not, MediaFailed events may be raised. The way to work around this is to add MediaElement objects to the tree as they are needed and remove them when they are not.
You could try to seek to the start of the sample to reset the point currently being played before re-using it with:
mediaelement.Position = new TimeSpan();
See also MSDNs MediaElement.Position.
One technique you can use, although I'm not sure how well it works in Silverlight, is to create one large file with all of your samples joined together (probably with a half-second or so of silence between each). Figure out the timecode for each sample, seek the media element to that position, and play. You'll only need as many media elements as the number of simultaneous sounds you want to play.
