In computer vision, what would be a good approach to tracking a human in the same black-and-white scene at different times of day (i.e. different levels of illumination)? The scene will never be dark, so I don't need to worry about infra-red or heat sensing. I need to identify the people and then also track them, so there are two parts.
Any advice would be great.
Thanks.
SIFT feature matching works well for this purpose. It is implemented in OpenCV.
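For illustration, here is a minimal Python sketch of SIFT matching between two frames, assuming OpenCV 4.4+ (where SIFT lives in the main module); the file names are placeholders:

```python
import cv2

# Two frames of the same scene under different illumination
# (file names here are placeholders).
img1 = cv2.imread("scene_morning.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_evening.png", cv2.IMREAD_GRAYSCALE)

# SIFT descriptors are built from normalized local gradients rather
# than raw intensities, which is what makes them fairly robust to
# illumination changes.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and apply Lowe's ratio test to drop ambiguous
# matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

print(f"{len(good)} confident matches")
```

Note this only covers the matching half. For the identification half you would still need a person detector (OpenCV ships a HOG-based people detector, for example), and you could then associate detections frame to frame for tracking.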
I made an app that detects objects in the smartphone camera feed, and now I want to draw bounding boxes around them.
I've seen more than one way to do it. So far I'm planning to use the react-native-canvas library, or to render a button shaped like a bounding box at the corresponding coordinates, but I'm wondering what the least resource-intensive solution would be.
Object detection already takes up a lot of resources, and now I'm adding a function that draws bounding boxes several times per second, so I will surely have to lower the detections per second; ideally I'd lower them as little as possible. This is one of those situations where a few fractions of a second matter for performance.
I'm pretty new to React Native, so I need some help finding the optimal solution.
For example, would rendering plain positioned buttons, without installing an external library, be faster? I'm not sure if that even makes sense.
Hopefully somebody can point me in the right direction.
Thanks.
I'm trying to write my own software for security camera motion detection, but in the area of interest outside my house, there is a lot of vegetation motion that will obviously trigger recording if I use some of the more simple algorithms that rely just on the difference between images. Does anyone have any recommendations? I'm struggling to find motion detection information online. I'm guessing that I'll have to employ some edge detection, or maybe a filtering process.
Cheers,
Zan
Without having seen any of your recordings, I would suspect that motion from the vegetation looks quite noisy and random, with only a few local edges; in contrast, I would expect much stronger connected edges for people moving through the scene. Also, edges from objects moving on the ground will mostly stay oriented in specific directions for a longer period of time.
My first attempt would be (a rough sketch follows the list):
- median filter on the input image to reduce noise
- difference image against the previous (or maybe second-previous) image
- some edge detector
- build edge lists from the stronger edges
- filter out weak/short edges
- match edges from objects in the last frame against the newly found ones
- apply some tracking of positions and other features
- classify object behaviour based on these features, e.g.
  - consistent movement in one direction
  - consistently strong edges on the same object
  - object size
- ...and use that to trigger your recording
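A minimal sketch of the first few steps in Python with OpenCV; it is purely illustrative, and the thresholds, kernel size, and video source are assumptions you would have to tune:

```python
import cv2

cap = cv2.VideoCapture("driveway.mp4")  # placeholder source
prev = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1. median filter to suppress sensor noise
    gray = cv2.medianBlur(gray, 5)

    if prev is not None:
        # 2. difference image against the previous frame
        diff = cv2.absdiff(gray, prev)

        # 3. edge detector on the difference image
        edges = cv2.Canny(diff, 50, 150)

        # 4./5. connected edge chains; drop short/weak ones
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        strong = [c for c in contours if cv2.arcLength(c, False) > 100]

        if strong:
            print("candidate motion:", len(strong), "long edge chains")
    prev = gray

cap.release()
```

The matching, tracking, and classification steps would sit on top of this loop.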
Alternatively, you can jump on the recent hype around deep neural networks (a training sketch follows):
- look up online information and tools (and maybe embedded hardware) to train and run a CNN
- split your current videos into videos where you do not want to be warned and videos where you do want to be warned
- let the magic happen.
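If you go that route, here is a minimal fine-tuning sketch in Python with PyTorch/torchvision (0.13+ for the weights enum); the directory layout and hyperparameters are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Frames extracted from your two groups of videos, laid out as
# frames/alert/... and frames/ignore/... (hypothetical layout).
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("frames", transform=tf)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Fine-tune a small pretrained CNN for the two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```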
I'm creating a side-scroller video game for my final project in my grade 12 programming class. Right now I have a nice delta timer my partner made for me, a flying ship, asteroids, and a scrolling background. I've added a few basic things such as collision detection between asteroids and the ship, and ship movement. Now, my next steps are implementing random enemy spawns and projectiles (laser beams :D) from both ships. Implementing random enemy spawns should be fun and relatively easy; however, I'm struggling to figure out how to manage all the bullets that can be on screen at once. I need bullets from the enemy and from the ship (player controlled).
How can I achieve this? I know there are probably many answers to this question, but I would really like to see the types of approaches people have to this problem.
So far I have thought that:
a) I could cap the game at (say) 200 bullets max on the screen
b) I could make a dynamic array (I believe this term means the array grows or shrinks), so I don't limit the number of possible projectiles
...and then I'm afraid that all these bullets will cause lag from all the collision processing that will happen.
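To make option (a) concrete, here is roughly what I picture: a fixed-size bullet pool, sketched in Python (class and method names are just made up):

```python
class Bullet:
    def __init__(self):
        self.active = False
        self.x = self.y = 0.0
        self.vx = self.vy = 0.0

class BulletPool:
    """Option (a): a fixed-size pool, so no allocation during gameplay."""
    def __init__(self, size=200):
        self.bullets = [Bullet() for _ in range(size)]

    def spawn(self, x, y, vx, vy):
        for b in self.bullets:
            if not b.active:            # reuse the first free slot
                b.active = True
                b.x, b.y, b.vx, b.vy = x, y, vx, vy
                return b
        return None                     # pool exhausted: skip this shot

    def update(self, dt, screen_w, screen_h):
        for b in self.bullets:
            if b.active:
                b.x += b.vx * dt
                b.y += b.vy * dt
                # deactivate once off-screen instead of deleting
                if not (0 <= b.x <= screen_w and 0 <= b.y <= screen_h):
                    b.active = False
```

My hope is that reusing slots avoids allocations mid-game, and collision checks only have to loop over the active bullets.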
Please shed some light on this, and help guide me along the path to an efficient, well-executed side-scroller game.
Thanks,
Guest dude
I would like to use D3 to build simple charts with literally hundreds of millions of data points.
Obviously, I won't be attempting to plot millions of points at a time. Only a very, very tiny fraction of those points (<1000) would be in view at any given time. I'll download pre-processed data "on-demand" from the server depending on the current view and zoom level, and would like to use D3's built-in zoom and pan behaviors.
Basically, imagine an infinitely wide bar chart that pans back and forth, and alters itself to show the appropriate level of detail depending on the current zoom level (e.g. semantic zoom).
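To illustrate the kind of pre-processed, on-demand data I have in mind, here is a rough Python sketch of the server-side aggregation (the mean-per-bucket scheme is just an assumption):

```python
def downsample(points, x_min, x_max, max_bars=1000):
    """Aggregate raw (x, y) points into at most `max_bars` buckets for
    the current view; the client only ever receives this reduced form."""
    width = (x_max - x_min) / max_bars
    sums = [0.0] * max_bars
    counts = [0] * max_bars
    for x, y in points:
        if x_min <= x < x_max:
            i = int((x - x_min) / width)
            sums[i] += y
            counts[i] += 1
    # mean per bucket; empty buckets stay at zero
    return [(x_min + (i + 0.5) * width,
             sums[i] / counts[i] if counts[i] else 0.0)
            for i in range(max_bars)]
```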
What techniques are available in D3 to achieve this, yet still have it feel responsive and smooth? What should I avoid doing? Are there any examples of this out there?
Examples: Have a look at Fabian Fischer's BankSafe, an award-winning entry to this year's VAST Challenge. I'm not sure if the code is available, but the report summarising the techniques he used certainly is. The dataset was also on the order of "hundreds of millions" and - if I remember correctly - it used a zoom technique similar to the one you describe.
I would highly recommend you look into using canvas over svg. From what I've seen, having thousands of SVG elements doesn't scale particularly well. Microsoft has a pretty good writeup for how to know which to choose: http://msdn.microsoft.com/en-us/library/ie/gg193983(v=vs.85).aspx#Using_Canvas_AndOr_SVG
I'm trying to write a CAD-like application in WPF (.NET 4.0) that needs to be able to display a lot of 2D points/lines. It will be used to display CAD plans of entire cities with zoom, pan, rotate, and point snapping on mouseover.
Right now I use pure WPF: I read the objects from the CAD file, draw them into a StreamGeometry, use it as the Data of a new Path, and add it to a Canvas with several transforms.
My problem is that this solution doesn't scale well enough. It works fine with small CAD files, but when I want to display something like half a city (with houses and land boundaries), it becomes very, very slow.
I also tried to convert my CAD file to an image, but
- a resolution of 32000x32000 is sometimes not enough
- when zooming out, the lines are too thin.
In the end I need to be able to place this on a Canvas (2D/3D) as a background.
What are my best options here?
Thanks,
Niklas
WPF is not good for large 3D models; I'm afraid it is too slow. Your best bet is Direct3D or OpenGL.
However, even with the speed of Direct3D/OpenGL, you will still need to work out how to cull as many polygons/vertices as possible before rendering the scene if you are trying to show an entire city.
There is a large amount of information on this (generally filed under game development).
There are a few techniques, including frustum culling and near/far plane culling.
Also, since you probably have a static scene, you may be able to use binary space partitioning; a simple 2D variant is sketched below.
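To make the culling idea concrete for a mostly static 2D plan, here is a Python sketch of viewport culling with a uniform grid, which is a simpler cousin of BSP for this case (the cell size is an assumption you would tune):

```python
from collections import defaultdict

class GridIndex:
    """Bucket static geometry into grid cells once; each frame, draw
    only the shapes in cells that overlap the current viewport."""
    def __init__(self, cell=100.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def insert(self, shape_id, x0, y0, x1, y1):
        # register the shape's bounding box in every cell it touches
        c = self.cell
        for i in range(int(x0 // c), int(x1 // c) + 1):
            for j in range(int(y0 // c), int(y1 // c) + 1):
                self.cells[(i, j)].append(shape_id)

    def query(self, vx0, vy0, vx1, vy1):
        # collect shapes in all cells overlapping the viewport
        c = self.cell
        visible = set()
        for i in range(int(vx0 // c), int(vx1 // c) + 1):
            for j in range(int(vy0 // c), int(vy1 // c) + 1):
                visible.update(self.cells.get((i, j), ()))
        return visible
```

The point is that per-frame cost depends on what is visible, not on the total size of the city.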
As I understand it, the subject is a 2D CAD system in WPF.
Great! I use it myself...
OpenGL and DirectX sit in an infinite OnDraw loop, always redrawing, so the CPU works all the time.
The WPF/Silverlight 2D retained-mode model is smarter: it only redraws when something changes.
Yes, the total number of elements (for example, primitives inherited from Shape) must not be too high. But how many?
I tested my own app (in Silverlight); WPF should be a bit faster, I hope...
Here are my 2D CAD results: performance is still great, and each beam consists of multiple primitives.
Use a VirtualCanvas like this one from Chris Lovett.