This question isn't really about Touch Develop specifically; it's more about basic programming structure and syntax.
What I am trying to do is create a simple compass driven by the phone's heading capability. The heading capability just spits out degree readings to several (around 12) decimal places.
Anyway, even just letting the phone spit out the heading, the phone eventually crashes. Why is that? Is it running out of memory?
The reason I came here is because of this:
I want to update the page with a photo at an associated rotation based on the degree readout. I can't figure out how to write something like "if 0 < x < 1, show this picture", since the heading readout varies, e.g. 321.18364947363 and then 321.10243635471.
So currently I am testing this: several if / else if statements saying if the heading output is 1, show the picture with a 1-degree rotation; if it is 2, show the picture with a 2-degree rotation; and so on. This reliably crashes the phone. Why? Memory?
If you are a Touch Develop developer: would it be easier and more sane to simply take a round image, center it within a square image, and use it as a sprite or object whose angular position and velocity you can set directly, instead of using 360 individual images?
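Just to illustrate the rounding idea in plain C# (not Touch Develop syntax; the method name here is made up): collapse the noisy heading into one of 360 whole-degree buckets, then rotate a single compass image by that angle instead of keeping 360 pre-rotated pictures.

using System;

class HeadingDemo
{
    // Collapse a many-decimal heading such as 321.18364947363 into one of
    // 360 whole-degree buckets; rotate one image by this angle at draw time.
    static int DegreeBucket(double heading)
    {
        int degrees = (int)Math.Round(heading) % 360;
        return degrees < 0 ? degrees + 360 : degrees; // keep it in 0..359
    }

    static void Main()
    {
        Console.WriteLine(DegreeBucket(321.18364947363)); // 321
        Console.WriteLine(DegreeBucket(321.10243635471)); // 321
    }
}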
The concept seems simple enough, but I am basically a programming noob; I was all over the place trying to learn Python, Java, and C/C#/C++. (I wrote this on my Windows Phone 8 but was unable to copy the text.) I am happy to have come across Touch Develop because it suits me better as a visual learner. (Thanks for the life story, right? Haha.)
The idea would have been to use this dumb pink-on-black giant compass with three headings / points of interest: a fixed relative north, the current heading, and a bearing computed from the lat/long coordinates of the person to be found, relative to the finder's phone's current location (lat and long). In my mind this app would be used for party scenarios. I would have benefited from it had the circumstances been right: I was once lost at a party and had to take a $110.00 cab home because I didn't drive there.
I was initially going to write what looked like a four-paragraph essay to explain what I'm working on, but it wasn't necessary. In summary, I'm a rookie at Unity and know very little about how to build a platform through which I can store (send to and retrieve from other clients) data in "the cloud". Yet I need to be able to do so for my project.
using UnityEngine;

public class MultiplayerMoveMe : MonoBehaviour
{
    // Note: I am making a retro-style game where all sprites face the camera,
    // so the rotation of the players is not a factor that needs considering
    // here; just the position.
    // I have a fleshed-out idea of how I will do all of this, but I am
    // completely foreign to all things server-related on this scale, so I
    // need some assistance (not the most prideful circumstances).

    public GameObject p2Obj;

    void Start()
    {
        // Anything that the serverGet() function might require goes here.
    }

    string serverGet(int playerNum, string reqType)
    {
        // On their side, every frame, the second player's X, Y, and Z
        // positions should be packed into a string with separator char '|'
        // and filed on the server under playerdata/2/pos/.
        // Then this script (on player 1's side) would, every frame (displaced
        // by +1 frame initially), take the player number and the reqType,
        // find that directory online, and return the value with THIS function.
        // Funny thing is: I have no idea what I'm doing.
        // And no, I haven't connected to a server yet. I also want to stay
        // away from any third-party tools for this, since this is small-scale
        // and I only wish to learn the ins and outs of all of this.
        return "0|0|0"; // stub value so the script compiles
    }

    void Update()
    {
        string p2Position = serverGet(2, "pos"); // value has the form "x|y|z"
        string[] sl = p2Position.Split('|');
        float xPos = float.Parse(sl[0]);
        float yPos = float.Parse(sl[1]);
        float zPos = float.Parse(sl[2]);

        // Now that all values are floats, feed them into the transform to
        // move the other player on our side.
        p2Obj.transform.position = new Vector3(xPos, yPos, zPos);
    }
}
Above I have a script which, if the serverGet() function were actually implemented (and functional, of course), would set the position of the second player according to the data online, which the instance on their side submits in the first place (every frame as well, with a -1 frame initial displacement so that everything lines up). This way, if I move on one computer as "player 2", the computer on which I am playing as "player 1" will show player 2's movement as it progresses every frame. In other words, basically all calculation is client-side, but the actual communication is (unavoidably) server-side. That server side is what I'm clueless about, and I would appreciate it if anyone here could point me a step in the right direction.
As for the script that actually submits the values to the server, that will come with my understanding of all of this, which, again, I don't have right now.
There seem to have been a number of questions lately along the lines of "So I'm going to write a multiplayer game engine from scratch!" (Example.)
To get some basic grounding in multiplayer engineering, first spend a few days working with Unity's networking system: https://docs-multiplayer.unity3d.com
Do try to understand the scale of your problem. You're about to embark on a PhD-level enterprise that will take months of full-time work at a minimum. It would be insanity not to first spend a few days with current systems to pick up some basic principles.
Similarly, Photon is very popular for multiplayer systems: https://www.raywenderlich.com/1142814-introduction-to-multiplayer-games-with-unity-and-photon Next, spend a few days making toy Photon/Unity systems to learn more.
Then there is Mirror Networking, which is the one that is "like Unity's old networking": https://assetstore.unity.com/packages/tools/network/mirror-129321 so you should really try that a little too.
Finally, to literally answer your question as asked: click over to AWS, spin up some Ubuntu EC2 instances, and start work on a "simple" game server. You would almost certainly use Node/Express/SQL and likely WebSockets (just to get started; you'd have to move to raw UDP communication eventually). It's just not realistic to start doing that, though, until you have familiarized yourself with some basic existing systems.
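To give a flavour of the WebSocket route, here is a minimal C# sketch using the built-in ClientWebSocket. The ws://localhost:8080 address and the "x|y|z" message format are assumptions made to match the question; a server would already have to be listening there.

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class PositionClient
{
    static async Task Main()
    {
        using var ws = new ClientWebSocket();
        // Assumes a WebSocket server is already listening at this address.
        await ws.ConnectAsync(new Uri("ws://localhost:8080"), CancellationToken.None);

        // Send this client's position as "x|y|z", mirroring the question's format.
        byte[] outgoing = Encoding.UTF8.GetBytes("1.0|0.5|3.2");
        await ws.SendAsync(new ArraySegment<byte>(outgoing),
                           WebSocketMessageType.Text, true, CancellationToken.None);

        // Receive the other player's position string back.
        byte[] buffer = new byte[1024];
        WebSocketReceiveResult result =
            await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
    }
}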
I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached to this post), in which the fiber edges are white and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in my mind for how to approach the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT 1: Instead of OpenCV I started using MATLAB, since I found it much easier. I applied the Hough transform and then the houghpeaks function with the maximum number of peaks set to 100, so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT 2: I found a research article on how to calculate the length using the Hough transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is a Hough transform to estimate the line parameters; there is a good implementation of the algorithm in OpenCV. Once you have them, you can estimate each line's length any way you want, based on whatever other constraints you have.
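For the length step: if you use the probabilistic variant (HoughLinesP in OpenCV), each detected line comes back as a pair of endpoints, so its length is just the Euclidean distance between them. A minimal sketch in C#, with made-up endpoint values standing in for the Hough output:

using System;

class FiberLengthDemo
{
    // Length of one detected segment given its two endpoints, as returned
    // by a probabilistic Hough transform (the coordinates here are made up).
    static double SegmentLength(double x1, double y1, double x2, double y2)
    {
        double dx = x2 - x1, dy = y2 - y1;
        return Math.Sqrt(dx * dx + dy * dy); // length in pixels
    }

    static void Main()
    {
        Console.WriteLine(SegmentLength(10, 12, 130, 57)); // one fiber's length
    }
}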
The problem is two-fold as I see it:
1) locate the start and end points from your starting position;
2) measure the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a value in 0..1 on each pixel representing its "whiteness".
To find the end points I would use some kind of walker/AI that tries to walk in different directions, knowing the original position and the last traversed direction, and continues along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you have the start and end points, you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high one. The shortest path between the start and end points is then the length of the fiber.
It's hard to give more detail since I don't know which techniques you're going to use, and I have no example input data.
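As a sketch of the pathfinding step: because every step through a black pixel costs the same, plain breadth-first search already returns the shortest path, so the following C# is a BFS stand-in for the A* idea (assuming a grid where 0 is black/walkable and 1 is white/blocked):

using System.Collections.Generic;

class FiberPath
{
    // Shortest path through black (0) pixels from start to end; every step
    // costs 1, so BFS gives the same answer A* would.
    static int FiberLength(int[,] grid, (int x, int y) start, (int x, int y) end)
    {
        int h = grid.GetLength(0), w = grid.GetLength(1);
        var dist = new int[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dist[y, x] = -1; // unvisited

        var queue = new Queue<(int x, int y)>();
        dist[start.y, start.x] = 0;
        queue.Enqueue(start);

        int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
        while (queue.Count > 0)
        {
            var (x, y) = queue.Dequeue();
            if (x == end.x && y == end.y) return dist[y, x]; // length in pixels

            for (int i = 0; i < 4; i++)
            {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h
                    && grid[ny, nx] == 0 && dist[ny, nx] == -1)
                {
                    dist[ny, nx] = dist[y, x] + 1;
                    queue.Enqueue((nx, ny));
                }
            }
        }
        return -1; // no black path between the two points
    }
}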
Assumptions:
- This image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight, and their start and end points lie on the image borders.
- We can come up with a limit for the thickness of a fiber (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start wherever you want, in whichever direction you want; just be consistent) until you encounter the first white pixel. At this point your program knows it has definitely found a starting point. From there, gather white pixels until you reach a certain limit (or threshold). The idea is that, if there is a fiber, you will get the angle between the fiber and the border that the starting point lies on; of course, the more pixels you gather (the further in you go), the surer you will be in the end. This is the trickiest part.

After somehow ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or its cos/sin), you will have the exact coordinates of the end point. Be advised: "exact" is not quite the right word, because we may (in fact, will) have numerical errors in the cos/sin values. So you need to hold the threshold as long as possible, and your end point will not really be a point but rather an area indicating that the end point is probably somewhere inside it. The rest is just simple maths.
Obviously you can add more detail to this method, like checking both white lines that make up the fiber and deciding which one is longer, or allowing some margin for error since those lines will not be perfectly straight; this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff for this and is easy to use. I'll put some code here:
Bitmap newBitmap = new Bitmap(openFileDialog1.FileName);

for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value
        // things go here...
    }
}
You get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop, this code scans the image left to right, one column at a time; you can change this, of course.
Since you know C++ and C, I would recommend OpenCV. It is open source, so if you don't trust anyone (like me), you won't have a problem ;). Also, if you want to use C#, as @VictorS mentioned, I would use EmguCV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for EmguCV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer, load your data, and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with two different labels: the black region, which is your target, and the white edges.
Then choose the whole 2D image as your ROI and click Segment.
Here is the result; I have labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.
I am now working on an eye-tracking project, in which I am tracking eyes in a webcam video (the resolution is 640x480).
I can locate and track the eye in every frame, but I need to locate the pupil. I have read a lot of papers, and most of them refer to Alan Yuille's deformable template method for extracting and tracking eye features. Can anyone help me with code for this method in any language (MATLAB/OpenCV)?
I have tried different thresholds, but due to the low resolution of the eye regions it does not work very well. I would really appreciate any kind of help with finding the pupil, or even the iris, in the video.
What you need to do is convert your webcam into a near-infrared (NIR) cam. There are plenty of tutorials online for that. Try this.
An image taken with an NIR cam will look something like this:
You can then use OpenCV to threshold the image.
Then use the erode function.
After this, flood-fill the image with some colour, taking a corner as the seed point.
Eliminate the holes and invert the image.
Apply the distance transform (for each pixel, the distance to the nearest zero pixel).
Find the coordinate of the maximum value and draw a circle there.
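As a rough plain-C# sketch of the last two steps (assuming the earlier steps already produced a binary mask with the pupil blob set to 1): a multi-source BFS from the background acts as a simple distance transform, and the blob pixel farthest from the background is taken as the pupil centre.

using System.Collections.Generic;

class PupilFinder
{
    // mask[y, x] == 1 for pupil-blob pixels, 0 for background.
    // A multi-source BFS from all background pixels is a simple distance
    // transform; the blob pixel reached last is farthest from the background.
    static (int x, int y) PupilCenter(byte[,] mask)
    {
        int h = mask.GetLength(0), w = mask.GetLength(1);
        var dist = new int[h, w];
        var queue = new Queue<(int x, int y)>();

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (mask[y, x] == 0) { dist[y, x] = 0; queue.Enqueue((x, y)); }
                else dist[y, x] = -1; // blob pixel, not yet reached

        int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
        (int x, int y) center = (0, 0);
        int radius = 0; // max distance doubles as a pupil-radius estimate

        while (queue.Count > 0)
        {
            var (x, y) = queue.Dequeue();
            for (int i = 0; i < 4; i++)
            {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h && dist[ny, nx] == -1)
                {
                    dist[ny, nx] = dist[y, x] + 1;
                    if (dist[ny, nx] > radius) { radius = dist[ny, nx]; center = (nx, ny); }
                    queue.Enqueue((nx, ny));
                }
            }
        }
        return center; // draw your circle here, with radius ~ the max distance
    }
}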
If you're still working on this, check out my OptimEyes project: https://github.com/LukeAllen/optimeyes
It uses Python with OpenCV and works fairly well with images from a 640x480 webcam. You can also check out the "Theory Paper" and demo video on that page. (It was a class project at Stanford earlier this year; it's not very polished, but we made some attempts to comment the code.)
Depending on the application, for tracking the pupil I would find a bounding box for each eye and then find the darkest pixel within that box.
Some pseudocode:
box left_location = findlefteye()
box right_location = findrighteye()
image_matrix left = image[left_location]
image_matrix right = image[right_location]
image_matrix average = (left + right) / 2      // blend the two eye patches
pixel darkest = argmin(average)                // offset of the darkest pixel
pixel left_pupil = left_location.corner + darkest
pixel right_pupil = right_location.corner + darkest
Following on from the first answer, suggested by Anirudth: just apply the HoughCircles function after the thresholding step (step 2). You can then draw circles directly around the pupil; HoughCircles gives you the radius (r) and centre (x, y), so you easily have the centre of the eye.
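In C# that might look roughly like the following with EmguCV (the wrapper mentioned earlier in the thread). Treat it as a sketch only: the file name, the threshold of 30, and the Hough parameters are placeholder guesses to tune, and you should double-check the HoughCircles overload against the EmguCV docs.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

class PupilHough
{
    static void Main()
    {
        // Placeholder file name; use a frame from your own video.
        Mat eye = CvInvoke.Imread("eye.png", ImreadModes.Grayscale);
        Mat dark = new Mat();
        CvInvoke.Threshold(eye, dark, 30, 255, ThresholdType.BinaryInv); // pupil -> white

        // Arguments: dp = 2, minDist = 100, canny = 100, accumulator = 20,
        // radius 5..60; all guesses to tune for 640x480 eye regions.
        CircleF[] circles = CvInvoke.HoughCircles(
            dark, HoughModes.Gradient, 2, 100, 100, 20, 5, 60);

        foreach (CircleF c in circles)
            CvInvoke.Circle(eye, Point.Round(c.Center), (int)c.Radius,
                            new MCvScalar(255), 2); // ring around the pupil
    }
}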
I'm writing an application that displays different color swatches to help people with color coordination. How can I find the RGB values of real-world objects?
For example, one of the colors is Red Apple, but obviously a red apple isn't just red; it has hints of other colors in it.
Well, to be honest it's not an easy task, but a good place to start would be with a digital camera and/or a flatbed scanner.
Once you have an image on the computer, the task is somewhat easier, because all you need to do is use a picture/photo editing package such as Photoshop or the GIMP to sample a selection of colours before using them in your application.
Once you have a few different samples, you need to average them, and that's quite easy to do. Let's say you took 5 samples of RGB values:
255,50,10
250,40,11
253,51,15
248,60,13
254,45,20
You simply add up each component and divide by how many samples you took, so:
Red = (255 + 250 + 253 + 248 + 254) / 5 = 252
Green = (50 + 40 + 51 + 60 + 45) / 5 = 49.2
Blue = (10 + 11 + 15 + 13 + 20) / 5 = 13.8
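The same averaging as a quick C# sketch, using the five samples above:

using System;
using System.Linq;

class SwatchAverage
{
    static void Main()
    {
        int[][] samples =
        {
            new[] { 255, 50, 10 },
            new[] { 250, 40, 11 },
            new[] { 253, 51, 15 },
            new[] { 248, 60, 13 },
            new[] { 254, 45, 20 },
        };

        // Average each channel independently across all samples.
        double red   = samples.Average(s => s[0]); // 252.0
        double green = samples.Average(s => s[1]); // 49.2
        double blue  = samples.Average(s => s[2]); // 13.8
        Console.WriteLine($"{red}, {green}, {blue}");
    }
}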
Now, if what you're asking is how to do this automatically in program code, that's a whole different kettle of fish. First you'll need something like a webcam, then you'll need to write code to capture images from the webcam, and once you have your image you'll need not just the ability to pick a colour, but to actually figure out where in the image the object you want to sample the colour from actually is.
For now, I'd look at using the first method; it's a bit manual, I agree, but it's far easier and will get you started.
The image processing required to do the second method automatically has given software engineers and computer scientists headaches for years and is still not a perfect science... and that's before we even start thinking about the maths.
For each object, I would do it this way:
1) Use Google Images to search for pictures of the object you want.
2) Select the one that has the most accurate color for, say, your idea of a "red apple". (You can skip steps 1 and 2 if you already have a digital picture of the object.)
3) Open that image in Paint; you can do this by pressing the Print Screen key on your keyboard, opening Paint, and pasting the screenshot with Ctrl+V.
4) Select the color-picker tool in Paint (the one that looks like a dropper) and click on the image, right on the spot with the color you want.
5) From the menu, select "Colors -> Edit Colors", and in the color palette that opens, click "Define Custom Colors".
6) You've got it; the RGB values are there on your right.
There must be an easier way, but this will work.
If you're looking for a programmatic solution, then look into bitwise operations. The general idea is that you read the image in its binary form and then logically convert the bits into RGB values; there are several methods for doing this, depending on the programming language (see the small C# sketch after these links). Here is a method for ActionScript 3:
http://www.flashandmath.com/intermediate/rgbs/explanations.html
Also, if you're looking for the average color, look here (for AS3):
http://blog.soulwire.co.uk/code/actionscript-3/extract-average-colours-from-bitmapdata
And here is a related method and explanation for Java:
Bitwise version of finding RGB in java
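And, as promised, the same bitwise idea as a small C# sketch: a 32-bit ARGB pixel packs one byte per channel, so shifting and masking pulls each channel back out (the file name is just a placeholder):

using System;
using System.Drawing;

class PixelBits
{
    static void Main()
    {
        Bitmap bmp = new Bitmap("apple.png");     // placeholder input image
        int argb = bmp.GetPixel(10, 10).ToArgb(); // packed as AA RR GG BB

        int a = (argb >> 24) & 0xFF;
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;

        Console.WriteLine($"R={r} G={g} B={b} (alpha {a})");
    }
}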
I have to make an application that recognizes, inside a black-and-white image, a Tetris piece given by the user. I read the image to be analyzed into an array.
How can I do something like this using C?
Assuming that you have already loaded the images into arrays, what about using regular expressions?
You don't need exact shape matching, only approximate, so why not give it a try!
Edit: I downloaded your doc file. You must identify a random pattern among random figures in a 2D array, so regex isn't suitable for this problem; let's say that's the bad news. The good news is that your homework is not exactly image processing, and it's much easier.
It's your homework, so I won't write the code for you, but I can give you directions.
You need a routine that can create a new piece by rotating the original pattern/piece. (Note: by "piece" I mean the whole 4x4 square, all of its cells.)
You need a routine that checks whether a piece matches an area of the 2D image at position (x, y); the matching area would have corners (x-2, y-2) and (x+1, y+1).
You search by checking every image position (x, y) for a match.
Since you must use parallelism, you can create 4 threads and assign each thread a different rotation to search for (a small sketch of the two routines follows).
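To make those directions concrete without doing the homework itself, here is a small sketch of the two routines (in C#; the same logic ports directly to C), where 1 marks a filled cell:

class PieceMatch
{
    // New 4x4 piece: the original rotated 90 degrees clockwise.
    static int[,] Rotate90(int[,] piece)
    {
        var rot = new int[4, 4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                rot[r, c] = piece[3 - c, r]; // read column r bottom-up into row r
        return rot;
    }

    // True if the 4x4 piece matches the image cells whose top-left corner
    // is at (x, y); scan every (x, y) to search the whole image.
    static bool MatchesAt(int[,] image, int[,] piece, int x, int y)
    {
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                if (image[y + r, x + c] != piece[r, c]) return false;
        return true;
    }
}

Each of the 4 threads would then run MatchesAt over every position using its own rotation of the piece.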
You might not want to implement this from scratch (unless required, of course); I'd recommend looking for a suitable library. I've heard that OpenCV is good, but I have never done any machine-vision work myself, so I haven't tested it.
Search for connected components (e.g. using depth-first search; you might want to avoid recursion if efficiency is an issue and use your own stack instead). The largest connected component should be your Tetris piece. You can then analyze it further (using its shape, its size, or some kind of border description).
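A sketch of that stack-based search in C#: it collects one component of 1-cells, and you would keep the largest component found over all unvisited seeds:

using System.Collections.Generic;

class Components
{
    // Iterative depth-first flood fill: collects the connected component of
    // 1-cells containing (sx, sy), using an explicit stack, not recursion.
    static List<(int x, int y)> Component(int[,] img, bool[,] seen, int sx, int sy)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        var cells = new List<(int x, int y)>();
        var stack = new Stack<(int x, int y)>();
        seen[sy, sx] = true;
        stack.Push((sx, sy));

        int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
        while (stack.Count > 0)
        {
            var (x, y) = stack.Pop();
            cells.Add((x, y));
            for (int i = 0; i < 4; i++)
            {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h
                    && img[ny, nx] == 1 && !seen[ny, nx])
                {
                    seen[ny, nx] = true;
                    stack.Push((nx, ny));
                }
            }
        }
        return cells; // the largest such list over all seeds is your piece
    }
}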
Looking at the shapes given for Tetris pieces on Wikipedia, called I, J, L, O, S, T, and Z, it seems that the ratios of the sides of the bounding box (easy to find given a binary image and C) reveal whether you have an I (4:1) or an O (1:1); the other shapes are 2:3.
To detect which of the remaining shapes you have (J, L, S, T, or Z), it looks like you could collect the lengths and positions of the shape's edges that fall on the bounding box's edges. Thus, T would show 3 and 1 along the 3-sides, and 1 and 1 along the 2-sides. Keeping track of the positions helps distinguish J from L and S from Z.
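Given such a component, the bounding-box ratio test might look like this sketch (assuming cells are unit squares, so side lengths are just cell counts):

using System;
using System.Collections.Generic;

class ShapeRatio
{
    // 4:1 box -> I, 1:1 -> O; a 2:3 box leaves J, L, S, T, or Z, which the
    // edge-position bookkeeping described above can then tell apart.
    static string RoughShape(List<(int x, int y)> cells)
    {
        int minX = int.MaxValue, maxX = int.MinValue;
        int minY = int.MaxValue, maxY = int.MinValue;
        foreach (var (x, y) in cells)
        {
            minX = Math.Min(minX, x); maxX = Math.Max(maxX, x);
            minY = Math.Min(minY, y); maxY = Math.Max(maxY, y);
        }
        int shortSide = Math.Min(maxX - minX + 1, maxY - minY + 1);
        int longSide  = Math.Max(maxX - minX + 1, maxY - minY + 1);

        if (longSide == 4 * shortSide) return "I";
        if (longSide == shortSide)     return "O";
        return "J/L/S/T/Z"; // the 2:3 group
    }
}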