How to add a background color or border to a body to track it in Box2D? - box2d-iphone

I'm using the Box2D physics engine. Is there any way to track bodies, for example to see where they collide or where forces are applied to them?

Box2D has a feature called debug draw that you could use.
Debug draw renders all the bodies in the physics world on screen for debugging purposes. It is helpful because sometimes you can't tell whether it's your image or your physics body that is in the wrong location.
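As a rough sketch of how this is usually wired up in a box2d-iphone / cocos2d project (assuming the Box2D 2.2-era C++ API plus the GLESDebugDraw helper and PTM_RATIO constant that ship with the cocos2d Box2D template; adjust the names to your own setup):

```cpp
// Setup, typically in the scene's init (an Objective-C++ .mm file in box2d-iphone projects):
GLESDebugDraw *m_debugDraw = new GLESDebugDraw(PTM_RATIO); // PTM_RATIO: the template's points-to-metres ratio
world->SetDebugDraw(m_debugDraw);

uint32 flags = 0;
flags |= b2Draw::e_shapeBit;   // outlines of every fixture
flags |= b2Draw::e_jointBit;   // joints
// flags |= b2Draw::e_aabbBit; // broad-phase AABBs, if you want them too
m_debugDraw->SetFlags(flags);

// In the draw/render callback, after stepping the world:
world->DrawDebugData();
```

Note that debug draw only shows you where the bodies and joints are; to react to the collisions themselves you would normally also register a b2ContactListener on the world.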

Related

Is it possible to draw a trapezoid with a border radius in React Native?

Here is the actual shape I want to draw, ideally using View style properties; or are there any alternatives? Thanks.
Depending on your needs, I would choose something like react-native-canvas; it has pretty much everything you are going to need for drawing.
Another option would be GCanvas.
With those options you will be able to draw free-form shapes as you wish.
But there's always the option to use SVG as well.
In my past experience, SVG shapes solved most cases, but sometimes I had to use these canvas libraries to draw more complex or dynamic shapes.
Hope this helps.

Textured resizable buttons with Core Image filter and appearance proxy iOS

The app I'm writing involves buttons that have a slight noise filter texture, which can be any size. For a standard button I'd simply use resizableImageWithCapInsets: but due to the texture, this causes unusual artefacts to appear on the resulting button.
A solution I have in mind is to use the Core Image monochrome filter combined with the random-noise filter to add the noise texture to a plain image. In theory this works, and in practice it has been shown to work (one example here), but these are all cases where the button size is known at the point of invoking the Core Image code.
What I'm looking to do is use the appearance proxies, so that across the app I can simply set the style of UIBarButtonItems, for instance.
Is there a way I can apply these CI filters to the buttons through the appearance proxies or isn't this possible? Would something like a category on UIImage to add noise work? I'm not entirely sure at which point the appearance proxy would actually invoke that code.
Any help is appreciated
OK, so I finally solved it, but found out some stuff along the way.
It seems you can create a category on UIImage and use it in the appearance proxy. I created a category to add noise, and it seemed to partly work, but I couldn't get it looking how I wanted because it wasn't quite rendering properly. In the process of coding this, though, I discovered another method:
resizableImageWithCapInsets:resizingMode:
Because the texture I was dealing with was simply noise, it could be tiled. So rather than the image being stretched, the centre of the image is instead tiled, which gives me the appearance I needed :)

Can I create a motion colorizing pixel shader in WPF?

I have a video playing of lines being drawn on the screen. Is it possible to create a pixel shader (for WPF) that turns newly colored pixels a certain color for N milliseconds?
That way, there can be some indication to the user of movement on the screen when the lines don't move often and the user isn't always looking at the screen.
You can use DirectShow. It's written in unmanaged code, so you need to use the DirectShow.NET wrapper in order to use it from your C# application, which runs in a managed environment (samples are included, even with EVR, the Enhanced Video Renderer, which gives MUCH better video quality). When you pass a control handle to the wrapper method that sets the video output, you need a WinForms control, because only WinForms controls give you the handle you need. You can then host that WinForms control in your WPF application using the WindowsFormsHost control, which exists precisely for situations where you need to use WinForms controls in a WPF application. This is just theory, so I don't know if it's the ultimate solution for you.
BTW: The whole idea is based on the fact that DirectShow is just a graph constructed from separate filters. The renderer is a filter (EVR, VMR-7, VMR-9). The sound player is a filter. And they are connected through their pins. It's like a diagram, an electronic schematic or something like that. You can put, for example, a greyscale filter in there, and voilà, the video output will be greyscale. There are plenty of tutorials for that, and completed simple filters as well. Unfortunately, custom filters must be written in C++ :(
PS: I never said it's going to be easy :D
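To make the filter-graph idea concrete, here is roughly what the classic play-a-file pattern looks like on the native C++ side (DirectShow.NET exposes the same interfaces from C#; the file path below is just a placeholder):

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

int main()
{
    // Initialize COM on this thread.
    if (FAILED(CoInitialize(NULL))) return -1;

    IGraphBuilder *pGraph   = NULL;
    IMediaControl *pControl = NULL;
    IMediaEvent   *pEvent   = NULL;

    // Create the filter graph manager.
    HRESULT hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IGraphBuilder, (void **)&pGraph);
    if (FAILED(hr)) { CoUninitialize(); return -1; }

    pGraph->QueryInterface(IID_IMediaControl, (void **)&pControl);
    pGraph->QueryInterface(IID_IMediaEvent,   (void **)&pEvent);

    // Build the graph for the file: source filter -> decoder -> renderer,
    // connected automatically through their pins.
    hr = pGraph->RenderFile(L"C:\\example\\clip.avi", NULL); // placeholder path
    if (SUCCEEDED(hr))
    {
        pControl->Run();                               // start playback
        long evCode;
        pEvent->WaitForCompletion(INFINITE, &evCode);  // block until playback ends
    }

    pEvent->Release();
    pControl->Release();
    pGraph->Release();
    CoUninitialize();
    return 0;
}
```

A custom effect (such as the motion-colourizing idea above) would be one more filter inserted into that chain between the decoder and the renderer.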

OpenGL mouse "lock"

How would one "lock" the mouse to a certain OpenGL window, sort of like how it is done in Minecraft?
Is GameDev a better place to ask?
As Robert said in the comment, OpenGL doesn't actually handle user input.
However, there are libraries that abstract the platform-dependent part away, such as LibSDL. You can use it to grab the mouse to your window.
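For example, with SDL2 (an assumption on my part; SDL 1.2 used SDL_WM_GrabInput instead) you can enable relative mouse mode, which hides the cursor, confines it to the window, and reports motion deltas, which is how Minecraft-style mouse look is usually done. A minimal sketch:

```cpp
#include <SDL.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

    SDL_Window *window = SDL_CreateWindow("Mouse lock demo",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        800, 600, SDL_WINDOW_OPENGL);

    // Hide the cursor, confine it to the window, and report relative motion.
    SDL_SetRelativeMouseMode(SDL_TRUE);

    bool running = true;
    while (running)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e))
        {
            if (e.type == SDL_QUIT) running = false;
            if (e.type == SDL_MOUSEMOTION)
            {
                // e.motion.xrel / e.motion.yrel are the deltas since the last
                // event; feed them into your camera yaw/pitch.
            }
        }
        // ... render with OpenGL here ...
    }

    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```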
A similar question has been asked here, where a programmer used a class called Robot to change the mouse position.
Code: robot.mouseMove(x, y)
That code was written in Java (java.awt.Robot); however, there are similar classes in other languages that can do the trick!
One option is to constantly move the mouse to the center of the screen or wherever you want it.

Simulate a handwritten signature in Silverlight (e.g. a pen progressively draws the lines)

I'd like to have Silverlight draw the blue "L" and "C" in the image below, preferably in a way that maintains the thickness of the line and speeds up/slows down at the correct locations to simulate a handwritten signature.
(source: lamontconsulting.com)
Can anyone point me to the right way to do this? Thanks!
The first thing to do is take this raster graphic and turn it into a vector. There are different ways to do that, but I find Adobe Illustrator to be the best. It has a feature called LiveTrace that will do this for you (PDF file). After you've got that, you can import it into Expression Blend and turn it into XAML (a PathGeometry). Once in Blend, you can animate it any way you like. What you'd want for this is a key-frame animation.
