SVG/XAML: Defining a path which looks like an arrow - wpf

I'm trying to use SVG (really XAML) to define a path which looks like a downwards pointing arrow.
   |
   |
\  |  /
 \ | /
  \ /
   v
It is super-important that the edge of the arrow is sharp. I have tried with various combinations of M, L and z with no success.

You should take a look at the Marker element:
http://www.w3.org/TR/SVG11/painting.html#Markers

I don't know about SVG, but this will produce a sharp edge in XAML (tested in XAML Cruncher):
<Path Data="M 10 0 L 10 20 0 10 0 15 12.5 27.5 25 15 25 10 15 20 15 0 z" Stroke="Black"/>
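Since the XAML figure syntax used above matches SVG path syntax, the same data should translate directly to plain SVG; a minimal, untested sketch (the stroke-linejoin is spelled out here, although miter is the default):

    <svg xmlns="http://www.w3.org/2000/svg" width="30" height="30">
      <path d="M 10 0 L 10 20 0 10 0 15 12.5 27.5 25 15 25 10 15 20 15 0 z"
            stroke="black" fill="none" stroke-linejoin="miter"/>
    </svg>

The tip at (12.5, 27.5) stays sharp because it is a real vertex of the outline, not a stroke cap.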

Have a look at this example from the SVG spec. You may want to tweak the 'stroke-miterlimit' property depending on the sharpness of the corner.

I'd recommend using something like Inkscape to design the arrow (just draw a line, select it, go to Object > Fill and Stroke > Stroke style, and set an end marker),
then save as PDF, rename the *.pdf to *.ai, open Blend and import the .ai file as Adobe Illustrator.
It's a bit roundabout, but I still prefer it to drawing the shape in Blend by hand.
(I translated this from German, so some menu items might be named slightly differently.)

After having done some research, I don't think such a thing is easy to do with a single path. I did manage to solve it by finding a library of XAML arrows and then doing some trickery to rotate the arrow I wanted the way I wanted it.

I agree with the above answer.
You (and your readers) should note the advent of Raphaël (if you have not already) and also my offering for the SVG world.
I have been working for a year with Raphaël and SVG. This link to my homepage might interest those who are still listening and/or switched on to the power of Inkscape and cross-browser SVG:
http://www.irunmywebsite.com/
The home page is a hybrid of the W3C SVG recommendations and the JavaScript library, and beneath it are all the resources to get up and running quickly with SVG and Raphaël.
Regards, Chasbeen.

Related

maya makePaintable attribute

Stupid question, but I'm having an issue with this.
Can you make more than one attribute paintable on the same shape?
I am adding 3 double-array attributes for testing purposes, and I make them paintable through a loop while adding them.
When I do that, I can't paint them through my test. To verify, I tried painting via right click on the mesh -> paint -> mesh, and I only see the usual paintable attributes plus the first one I defined...
Is there anything specific to do to declare more than one attribute paintable?
Thanks !
I found out why it wasn't working. The documentation for makePaintable is wrong ...
cmds.makePaintable('node', 'attribute') as shown in the documentation doesn't work. Instead, you have to do cmds.makePaintable('nodeType', 'attribute'), and your mesh has to be selected (I suppose because you don't specify it in the command).
It is now working.
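A minimal sketch of that working recipe in Maya Python (the attribute names here are made up for illustration, and a mesh is assumed to be selected):

    import maya.cmds as cmds

    # grab the shape of the selected mesh
    shape = cmds.ls(selection=True, dagObjects=True, type='mesh')[0]

    for name in ('testWeightsA', 'testWeightsB', 'testWeightsC'):
        # add a double-array attribute to the shape
        if not cmds.attributeQuery(name, node=shape, exists=True):
            cmds.addAttr(shape, longName=name, dataType='doubleArray')
        # note: the first argument is the node *type*, not the node name
        cmds.makePaintable('mesh', name, attrType='doubleArray')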

Why does my Touch Develop script keep crashing?

The question isn't exactly about Touch Develop, rather just basic programming structure or syntax.
What I am trying to do is create a simple compass using the phone's heading capability. The heading capability just spits out degree readings to several (like 12) decimal places.
Anyway, even just letting the phone spit out the heading, eventually the phone will crash. Why is that? Running out of memory?
The reason I came here is because of this:
I want to update the page with a photo at an associated rotation based on the degree readout. I can't figure out how to do something like "if 0 < x < 1, post this picture", since the heading readout varies, like 321.18364947363 and 321.10243635471.
So currently I am testing this: several if / else-if statements saying if the heading output is 1, post the picture with 1 degree of rotation; if it's 2, post the picture with 2 degrees of rotation. This is definitely guaranteed to crash the phone. Why? Memory?
If you are a Touch Develop user: would it be easier and more sane to simply take a round object, center it in relation to a square image, and use it as a sprite or object whose angular velocity and position you can then dictate, without using 360 individual images?
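The noisy readout is really a quantization problem; a minimal sketch of the fix in plain Python (not Touch Develop, just to show the logic):

    # collapse a 12-decimal heading to a whole degree, so a single
    # image (or sprite) rotated by that angle covers every reading
    def heading_to_rotation(heading_degrees):
        return round(heading_degrees) % 360

    # both noisy samples from above map to the same rotation
    for h in (321.18364947363, 321.10243635471):
        print(heading_to_rotation(h))   # -> 321, 321

One rounding step and one rotation replace the 360 if / else-if branches.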
GAH! Damn character limits / thread format.
This is what follows what I last wrote above, for anyone that cares:
The concept seems simple enough, but I am basically a programming noob; I was all over the place trying to learn Python, Java and C/C#/C++. (I wrote this on my Windows Phone 8 but I was unable to copy the text.) I am happy to have come across Touch Develop because it is better for me as a visual learner. (Thanks for the life story, right? Haha.)
The idea would have been to use this dumb pink-against-black giant compass with three headings / points of interest: a fixed relative north, the heading, and a position given by the lat/long coordinates of the person to be found, relative to the finder's phone's current location. In my mind this app would be used for party scenarios. I would have benefited from it had the circumstances been right: I was once lost at a party and had to take a cab home for $110.00 because I didn't drive there.

About finding pupil in a video

I am now working on an eye tracking project. In this project I am tracking eyes in a webcam video (resolution of 640x480).
I can locate and track the eye in every frame, but I need to locate the pupil. I read a lot of papers, and most of them refer to Alan Yuille's deformable template method to extract and track the eye features. Can anyone help me with code for this method in any language (MATLAB/OpenCV)?
I have tried different thresholds, but due to the low resolution in the eye regions it does not work very well. I will really appreciate any kind of help with finding the pupil, or even the iris, in the video.
What you need to do is convert your webcam to a near-infrared (NIR) cam. There are plenty of tutorials online for that.
In an image taken with an NIR cam, the pupil stands out clearly from the rest of the eye.
You can then use OpenCV to threshold it.
Then use the erode function to remove small specks.
After this, flood-fill the image with some colour, taking a corner as the seed point.
Eliminate the holes and invert the image.
Use the distance transform (the distance from each pixel to the nearest zero pixel).
Find the coordinate of the maximum value and draw a circle there.
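A minimal OpenCV (Python) sketch of that pipeline; the file name and threshold value are placeholders to tune for your own camera:

    import cv2
    import numpy as np

    gray = cv2.imread('eye.png', cv2.IMREAD_GRAYSCALE)

    # threshold: the pupil is a dark blob, so invert while thresholding
    _, binary = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)

    # erode to remove small specks
    binary = cv2.erode(binary, np.ones((3, 3), np.uint8))

    # flood-fill from a corner, then combine with the original
    # to eliminate holes inside the blob
    filled = binary.copy()
    mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
    cv2.floodFill(filled, mask, (0, 0), 255)
    closed = binary | cv2.bitwise_not(filled)

    # distance transform: the maximum lies deepest inside the blob,
    # giving a pupil centre, and its value approximates the radius
    dist = cv2.distanceTransform(closed, cv2.DIST_L2, 5)
    _, radius, _, center = cv2.minMaxLoc(dist)

    out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.circle(out, (int(center[0]), int(center[1])), int(radius), (0, 0, 255), 2)
    cv2.imwrite('pupil.png', out)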
If you're still working on this, check out my OptimEyes project: https://github.com/LukeAllen/optimeyes
It uses Python with OpenCV, and works fairly well with images from a 640x480 webcam. You can check out the "Theory Paper" and demo video on that page also. (It was a class project at Stanford earlier this year; it's not very polished but we made some attempts to comment the code.)
Depending on the application, for tracking the pupil I would find a bounding box for the eyes and then find the darkest pixel within that box.
Some pseudocode:
box left_location = findlefteye()
box right_location = findrighteye()
image_matrix left = image[left_location]
image_matrix right = image[right_location]
image_matrix average = (left + right) / 2
pixel min_location = argmin(average)
pixel left_pupil = left_location.corner + min_location
pixel right_pupil = right_location.corner + min_location
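A runnable version of the same idea in Python/NumPy, assuming find_left_eye / find_right_eye stand in for your existing eye tracker and return equal-sized (y, x, h, w) boxes:

    import numpy as np

    def pupils(gray, left_box, right_box):
        ly, lx, h, w = left_box
        ry, rx, _, _ = right_box
        left = gray[ly:ly + h, lx:lx + w].astype(np.float32)
        right = gray[ry:ry + h, rx:rx + w].astype(np.float32)
        average = (left + right) / 2
        # position of the darkest pixel in the averaged crops
        dy, dx = np.unravel_index(np.argmin(average), average.shape)
        return (ly + dy, lx + dx), (ry + dy, rx + dx)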
In the first answer, suggested by Anirudth:
Just apply the HoughCircles function after the thresholding function (the 2nd step).
Then you can directly draw circles around the pupil, and using the radius (r) and center of the eye (x, y) you can easily find the center of the eye.
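Continuing the Python sketch above, that step might look like the following (the parameter values are guesses to tune; binary and out come from the earlier snippet):

    # fit circles to the thresholded image; the strongest hit should be
    # the pupil, giving its centre (x, y) and radius r directly
    circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=2, minDist=100,
                               param1=30, param2=10, minRadius=5, maxRadius=60)
    if circles is not None:
        x, y, r = circles[0][0]
        cv2.circle(out, (int(x), int(y)), int(r), (0, 255, 0), 2)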

Understanding parsing SVG file format

First off, gist here
Map.svg in the gist is the original Map I'm working with, got it off wikimedia commons.
Now, there is a land mass off the eastern coast of Texas in that original SVG. I removed it using Inkscape, and it re-wrote the path in a strange new way. The diff is included in the gist.
This new way of writing the path blows up my parser logic, and I'm trying to understand what happened. I'm hoping someone here knows more about the SVG file format than I do. I will admit I have not read through the entire SVG spec; however, the parts of it I did read didn't mention anything about missing commands or relative coordinates. Then again, I may have been looking at the incorrect spec; I'm not sure.
The way I understood it, SVG path data was very straight forward, something like this:
(M,L,C)[point{n}] .... [Z], then repeat ad nauseam.
The part I'm trying to understand is that Inkscape has now written out what seem like relative coordinates, without commands like L, or with L somehow being implied. My gut is telling me that what has happened here is obvious to someone. For what it's worth, I'm doing my parsing in C.
If you're parsing SVG, why not look at the SVG specification?
Start a new sub-path at the given (x,y) coordinate. M (uppercase) indicates that absolute coordinates will follow; m (lowercase) indicates that relative coordinates will follow. If a moveto is followed by multiple pairs of coordinates, the subsequent pairs are treated as implicit lineto commands.
From: http://www.w3.org/TR/2011/REC-SVG11-20110816/paths.html#PathDataMovetoCommands
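For example, at the start of a path these two strings draw the same triangle:

    M 100 100 L 120 100 L 120 120 Z    (absolute, explicit linetos)
    m 100 100 20 0 0 20 z              (relative moveto, implicit relative linetos)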
You said,
The way I understood it, SVG path data was very straight forward, something like this: (M,L,C)[point{n}] .... [Z]
I don't know where you got that information. Stop getting your information from that source.
I will admit I have not read through the entire SVG standard spec...
Nobody reads the entire spec. Just focus on the part you're implementing at the moment. You could also start with SVG Tiny, and work with that subset for now.
Path Grammar is where you should start when writing a parser. If you can't read it, then buy a book on compilers.
Path grammar: http://www.w3.org/TR/2011/REC-SVG11-20110816/paths.html#PathDataBNF
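As a starting point, here is a rough sketch in Python (your parser is in C, but the tokenizing idea carries over) covering just the M/L/C/Z subset:

    import re

    # a command letter OR a signed decimal number; commas and
    # whitespace are only separators in path data
    TOKEN = re.compile(r'([MmLlCcZz])|(-?\d*\.?\d+)')

    def tokenize(d):
        for letter, number in TOKEN.findall(d):
            yield letter if letter else float(number)

    # numbers arriving without a fresh command letter repeat the last
    # command; after an initial m/M the repeats are implicit linetos
    print(list(tokenize('m 100,100 20,0 0,20 z')))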

RGB value detection and implementation

I'm writing an application that displays different color swatches to help people with color coordination. How can I find the RGB values of real world objects?
For example, one of the colors is Red Apple but obviously a red apple isn't just red. It has hints of other colors in it.
Well, it's not an easy task to be honest, but a good place to start would be with a digital camera and/or a flatbed scanner.
Once you have an image on the computer, the task is somewhat easier, because all you need to do is use a picture/photo editing package such as Photoshop or GIMP to sample a selection of colours before using them in your application.
Once you have a few different samples, you need to average them, and that's quite easy to do. Let's say you took 5 samples of RGB values:
255,50,10
250,40,11
253,51,15
248,60,13
254,45,20
You simply need to add up each component and divide by how many samples you took so:
Red = (255 + 250 + 253 + 248 + 254) / 5
Green = (50 + 40 + 51 + 60 + 45) / 5
Blue = (10 + 11 + 15 + 13 + 20) / 5
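The same averaging as a short Python sketch:

    samples = [(255, 50, 10), (250, 40, 11), (253, 51, 15),
               (248, 60, 13), (254, 45, 20)]
    r, g, b = (sum(channel) // len(samples) for channel in zip(*samples))
    print(r, g, b)   # -> 252 49 13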
Now, if what you're asking is "how do I do this automatically in program code?", that's a whole different kettle of fish. First you'll need something like a webcam, then you'll need to write code to capture images from it, and once you have your image you'll need not just the ability to pick a colour, but to actually figure out where in the image the object you want to pick the colour from actually is.
For now, I'd look at using the first method; it's a bit manual, I agree, but far easier, and it will get you started.
The image processing required to do the second automatically has given software engineers and computer scientists headaches for years, and it is still not a perfect science... and that's before we even start thinking about the maths.
For each object, I would do it this way:
1. Use Google Images to search for pictures of the object you want.
2. Select the one that has the most accurate colour for your idea of, say, a "red apple".
(You can skip 1 and 2 if you already have a digital picture of the object.)
3. Open that image in Paint; you can do this by pressing the "Print Screen" key on your keyboard, opening Paint, and then pressing Ctrl+V to paste the screenshot into Paint.
4. Select the colour-picker tool in Paint (the one that looks like a dropper) and click on the image, right on the spot with the colour you want.
5. From the menu, select "Colors -> Edit colors", and in the colour palette that opens, click "Define Custom Colors".
That's it; the RGB values are there on the right.
There must be an easier way, but this will work.
If you're looking for a programmatic solution, then you would look into bitwise operations. The general idea is that you read the image at the binary level and then logically convert the bits into RGB values. There are several methods for doing this, depending on the programming language. Here is a method for ActionScript 3:
http://www.flashandmath.com/intermediate/rgbs/explanations.html
Also, if you're looking for the average colour, look here (for AS3):
http://blog.soulwire.co.uk/code/actionscript-3/extract-average-colours-from-bitmapdata
A related method and explanation for Java:
Bitwise version of finding RGB in java
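The bitwise idea those links describe, shown in Python on a packed 0xRRGGBB integer:

    pixel = 0xFC310D            # e.g. the averaged colour from above
    r = (pixel >> 16) & 0xFF    # top byte is red
    g = (pixel >> 8) & 0xFF     # middle byte is green
    b = pixel & 0xFF            # bottom byte is blue
    print(r, g, b)              # -> 252 49 13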
