I'm a beginner with SimpleCV, so can somebody please guide me with the following application? I'm working on a stereo project and have two images, one from the left eye and one from the right.
First: I must display them side by side. (After drawing features and keypoints SimpleCV can show two images side by side, but how can I do this manually?)
Second: I will track any mouse click event on either of these images, then extract the clicked point and mark its location on the other image after a SIFT detection. (Since the left and right views overlap, the clicked pixel will most likely appear in the other image with a small offset/shift.) I may use SIFT features or any other similar method offered in SimpleCV, but by default features are detected with the SURF algorithm. How can I switch to the SIFT algorithm and use it? Should I create a features object somewhere?
Thanks in advance.
To show two images side by side you can use
img1.sideBySide(img2)
For more information about it, start the SimpleCV shell,
$ simplecv
SimpleCV:1> help(Image.sideBySide)
This will show you the complete docs of the sideBySide function.
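A minimal sketch of that call (the file names here are placeholders):

from SimpleCV import Image

left = Image("left.jpg")     # placeholder file names
right = Image("right.jpg")

# Combine the two views into one image and display it
combined = left.sideBySide(right)
combined.show()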
KeyPoints:
You can use any of the following algorithms for keyPoints.
SURF
STAR
FAST
MSER
ORB
SIFT
img.findKeypoints(flavor="SIFT")
Again, for more info just use help(Image.findKeypoints) in the SimpleCV shell.
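As a minimal sketch (assuming the findKeypoints/flavor spelling of your SimpleCV version):

from SimpleCV import Image

img = Image("left.jpg")                 # placeholder file name

# Ask for SIFT keypoints instead of the default SURF flavor
kp = img.findKeypoints(flavor="SIFT")
if kp is not None:
    kp.draw()                           # draw the detected keypoints on the image
    img.show()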
I've started my journey of making an interactive gallery using react-three. I already have the first-person mechanics figured out, and I made a small room to walk around and interact in. I want to scale up the environment, but since I can't make 3D assets, I bought this: https://sketchfab.com/3d-models/vr-art-gallery-el5-e3bda3a7086c49f0a84948fd6808bcf4 .
So... what's the logic for adding colliders to the walls so I can't pass through them, and to the floor so I can walk on it, using react-three/cannon?
Welcome to Stack Overflow, Mickael. Your idea can be achieved by using a nav mesh. There's a popular library for this called three-pathfinding.
The idea with three-pathfinding is that you create a navmesh matching the floor of your scene in a 3D editing tool like Blender (which supports glTF, by the way), then load that into your scene and pass it to three-pathfinding.
You then find out where on the navmesh you've clicked and pass that information to the lib, and it will do the rest. The demo code has a method you can use to find where in the scene you are clicking. Good luck on your adventure, hope this helped.
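A rough, untested sketch of the three-pathfinding calls involved; navMeshGeometry, playerPosition and clickPoint are placeholders you would obtain from your glTF loader and a raycast:

import { Pathfinding } from 'three-pathfinding';

const ZONE = 'gallery';
const pathfinding = new Pathfinding();

// navMeshGeometry: the floor navmesh geometry exported from Blender as glTF
// and pulled out of the loaded scene (placeholder here)
pathfinding.setZoneData(ZONE, Pathfinding.createZone(navMeshGeometry));

// On click: raycast against the navmesh to get clickPoint, then ask for a path
// from the player's current position to that point
const groupID = pathfinding.getGroup(ZONE, playerPosition);
const path = pathfinding.findPath(playerPosition, clickPoint, ZONE, groupID);
// path is an array of waypoints you can move the player/camera along; since they
// all lie on the navmesh, you never cross a wall or leave the floor.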
I would like to render DOM elements (which I'm otherwise using React for) onto, for example, a plane in a 3D environment (WebGL -> Three.js -> Three Fiber, which so far seems like the best way to go about it as far as I can figure).
So basically I'd have my <App /> containing my entire app rendered, as is traditional, into the "root" element. Except I want to inject a layer that utilizes Three Fiber to render everything I might want my site (or even just a particular component) to contain onto this plane. That would allow one to make a React site as normal, and behind the scenes (probably in index.js or an intermediate file), instead of the site being rendered "flat" against the screen, I could "tilt" the entire site, give it opacity, maybe have multiple webpages all able to be shuffled like cards, sides of a die, whatever.
If anybody has any suggestions, I'm struggling with how one could go about this.
I'm worried I might have to get kind of low-level (which I'm not totally against), but I was hopeful there was some support for something of this nature. So far I've tried having simple elements like an image element inside a material (via a Three Fiber element) or inside the mesh element (where the material would go), to no avail. I'm just beginning to get my hands dirty, but Google hasn't availed me much for this project, so I turned to good ol' Stack Overflow for my first personal request! 🤯 😲 👹 !
Thanks in advance so much for your time and consideration - cheers,
John Thummel
This is possible using drei/Html. Some impressions:
https://twitter.com/0xca0a/status/1398633764931178498
https://twitter.com/0xca0a/status/1407758860203573251
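A rough sketch of how that is typically wired up in a React Three Fiber scene (component names from @react-three/fiber and @react-three/drei; the markup inside is just ordinary DOM):

import { Canvas } from '@react-three/fiber'
import { Html } from '@react-three/drei'

function TiltedPage() {
  // "transform" makes drei's Html content follow the 3D transform of its parent,
  // so the DOM block can be tilted and scaled like any other object in the scene.
  return (
    <mesh rotation={[0, -0.5, 0]}>
      <planeGeometry args={[3, 2]} />
      <meshBasicMaterial color="white" />
      <Html transform occlude>
        <div style={{ width: 300 }}>
          <h1>My page</h1>
          <p>Any regular DOM/React markup can go here.</p>
        </div>
      </Html>
    </mesh>
  )
}

export default function App() {
  return (
    <Canvas>
      <TiltedPage />
    </Canvas>
  )
}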
I am trying to implement alpha blending of two images for image stitching.
My first image is this ->
Here is my second image ->
Here is my result image ->
As you can see, the result is not correct. I think I first have to find the overlapping region between them and then apply alpha blending over the overlapping part.
First of all, have you seen the new "stitching" module introduced in OpenCV 2.3?
It provides a set of building blocks for a stitching pipeline, including the blending and "finding an overlap" (i.e. registration) steps. Here is the documentation: http://docs.opencv.org/modules/stitching/doc/stitching.html and an example stitching application: stitching_detailed.cpp
I recommend you study the code of this sample for a better understanding of the details.
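In more recent OpenCV releases these building blocks are also wrapped in a high-level Stitcher class. A minimal Python sketch (the factory name differs between versions, e.g. cv2.createStitcher in 3.x vs cv2.Stitcher_create in 4.x):

import cv2

left = cv2.imread("left.jpg")      # placeholder file names
right = cv2.imread("right.jpg")

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch([left, right])

if status == cv2.Stitcher_OK:
    cv2.imwrite("pano.jpg", pano)   # registration and blending handled internally
else:
    print("Stitching failed, status code:", status)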
Regarding finding the overlap, there are several common approaches in computer vision:
optical flow
template matching
feature matching
For your case I recommend the last one - it works very well on photos. This approach is already implemented in OpenCV - explore the OpenCV source and see how cv::detail::BestOf2NearestMatcher works.
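For example, a rough sketch of the feature-matching route with the modern cv2 Python API (SIFT keypoints, Lowe's ratio test, then a RANSAC homography; the old 2.x C++ API differs in naming):

import cv2
import numpy as np

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)     # placeholder file names
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints in both images
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(left, None)
kp2, des2 = sift.detectAndCompute(right, None)

# Keep only confident matches (ratio test)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate the homography mapping the left image onto the right one;
# warping with it tells you where the two images overlap, which is where
# the alpha blending should happen.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)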
I think the most common approach is SIFT: find a few keypoints in both images, then warp them to get your result. See this
Here are explanations about SIFT and panorama stitching.
I am building a map application with the Silverlight Bing Maps control.
In the map control I want to show all of the subscribed customers.
The number of customers is somewhere between 5000 and 7000, which means I can't show them all at once; I guess that would result in a crash.
How would you solve this issue?
I've read about events on zoom levels, about tile layers, and about spatial SQL, but I have no idea what the right solution is in this situation or where to begin.
This seems like a pretty basic problem when working with maps, but there is little to no information on how to handle lots of data with Bing Maps.
Can anyone explain or point me to a good tutorial?
You can use a space-filling curve or a spatial index to nest the points by the zoom level of your map application and achieve a clustering effect: http://blog.notdot.net/2009/11/Damn-Cool-Algorithms-Spatial-indexing-with-Quadtrees-and-Hilbert-Curves. There are many implementations of space-filling curves and Hilbert curves; I've uploaded my own at phpclasses.org (hilbert-curve, BSD licence) together with a quadkey function for clustering, and I've successfully implemented it for some customers. The idea is to search for a quadkey prefix from left to right so you fetch only a portion of the POIs. www.maptiler.org uses a quadkey with a Z-curve. You will probably get better answers at gis.stackexchange.com. Note that a space-filling curve usually has a power-of-two constraint.
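To make the quadkey idea concrete, here is an illustrative Python sketch of the standard Web Mercator tile/quadkey math (the same formulas port directly to C#/Silverlight). Points whose quadkeys share a prefix fall into the same tile, which is what lets you cluster and fetch only a portion of the POIs:

import math

def latlon_to_quadkey(lat, lon, zoom):
    """Convert a WGS84 coordinate to a Bing-style quadkey at the given zoom level."""
    lat = max(min(lat, 85.05112878), -85.05112878)   # Web Mercator latitude limits
    n = 2 ** zoom
    tx = min(int((lon + 180.0) / 360.0 * n), n - 1)
    s = math.sin(math.radians(lat))
    ty = min(int((0.5 - math.log((1 + s) / (1 - s)) / (4 * math.pi)) * n), n - 1)

    # Interleave the tile X/Y bits into a base-4 quadkey string
    quadkey = ""
    for i in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if tx & mask:
            digit += 1
        if ty & mask:
            digit += 2
        quadkey += str(digit)
    return quadkey

# Group customers by a quadkey prefix (e.g. the first 10 digits) to cluster them.
print(latlon_to_quadkey(52.37, 4.89, 12))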
I am thinking about creating a database system for images where they are stored with compact signatures and then matched against a "query image" that could be a resized, cropped, brightened, rotated or a flipped version of the stored one. Note that I am not talking about image similarity algorithms but rather strictly about duplicate detection. This would make things a lot simpler. The system wouldn't care if two images have an elephant on them, it would only be important to detect if the two images are in fact the same image.
Histogram comparisons simply won't work for cropped query images. The only viable way I see is shape/edge detection. Images would first be discretized somehow, for example with every pixel converted to an 8-level grayscale. The discretized image would contain vast regions of the same colour, which would help indicate shapes. These shapes could then be described with coefficients, and their relative positions could be remembered. Compact signatures would be produced from that. This process would be carried out for each image being stored and for each query image when a comparison has to be performed. Does that sound like an efficient and realisable algorithm?
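As a rough sketch, the 8-level discretization step I'm describing could be as simple as:

import cv2
import numpy as np

gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Quantize the 0..255 intensities down to 8 levels (0..7), producing the large
# flat regions of equal colour that the shape description would then work on
levels = (gray.astype(np.uint16) * 8 // 256).astype(np.uint8)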
I know this is an immature research area; I have read the Wikipedia article on the subject, and I would ask you to propose your ideas about such an algorithm.
SURF should do the job.
http://en.wikipedia.org/wiki/SURF
It is fast and robust, invariant to rotation and scaling, and also fairly tolerant of blur and contrast/lighting changes (though less strongly so).
There is an example of automatic panorama stitching.
Check the article on SIFT first:
http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
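As a rough illustration (not a complete system), here is a Python/OpenCV sketch that scores two images by how many keypoint descriptors match confidently. SURF itself lives in opencv-contrib, so the sketch uses SIFT, which serves the same purpose here:

import cv2

def duplicate_score(path_a, path_b, ratio=0.75):
    """Fraction of keypoints in image A that have a confident match in image B."""
    a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(a, None)
    kp_b, des_b = sift.detectAndCompute(b, None)
    if des_a is None or des_b is None:
        return 0.0

    # Ratio test: keep matches whose best candidate is clearly better than the runner-up
    pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(kp_a), 1)

# A resized, cropped or rotated copy of the same photo should score much higher than
# an unrelated photo; the exact threshold for calling it a duplicate is a tuning choice.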
If you want a feature-detection-driven model, you could perhaps take the singular value decomposition of the images (you'd probably have to do an SVD for each color channel) and use the first few columns of the U and V matrices, along with the corresponding singular values, to judge how similar the images are.
Very similar to the SVD method is one called principal component analysis, which I think will be easier to use to compare images. The PCA method is pretty close to taking the SVD and getting rid of the singular values by factoring them into the U and V matrices. If you follow the PCA path, you might also want to look into correspondence analysis. By the way, PCA was a common method used in the Netflix Prize for extracting features.
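A simplified numpy sketch of the SVD idea, comparing only the leading singular values (the suggestion above would also keep the corresponding columns of U and V):

import numpy as np

def svd_signature(gray, k=10):
    """Compact signature: the top-k singular values of a grayscale image, normalised."""
    s = np.linalg.svd(np.asarray(gray, dtype=float), compute_uv=False)[:k]
    return s / np.linalg.norm(s)

def svd_similarity(img_a, img_b, k=10):
    """Cosine similarity of the two signatures; values near 1.0 suggest similar images."""
    return float(np.dot(svd_signature(img_a, k), svd_signature(img_b, k)))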
How about converting this Python code back to C?
Check out tineye.com. They have a good system that's always improving. I'm sure you can find research papers from them on the subject.
The Wikipedia article you might be referring to is the one on feature detection.
If you are running on an Intel/AMD processor, you could use the Intel Integrated Performance Primitives to get access to a library of image-processing functions. Beyond that, there is the OpenCV project, another library of image-processing functions. The advantage of using a library is that you can try various algorithms, already implemented, to see what will work for your situation.