Image Analysis Program Based on Hashcode Method Resulting in Errors

I am trying to write a program that will recognize an image on the screen, compare it against a resource library, and then perform a calculation based on which resource image matches.
The first thing that I did was to create the capture screen function which looks like this:
private Bitmap Screenshot()
{
    // Capture an 88x40 region of the screen starting at (1047, 44)
    System.Drawing.Bitmap Table = new System.Drawing.Bitmap(88, 40, PixelFormat.Format32bppArgb);
    using (System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(Table))
    {
        g.CopyFromScreen(1047, 44, 0, 0, Table.Size);
    }
    return Table;
}
Then I analyze this picture. The first method I used was two nested for loops comparing the bitmaps pixel by pixel. The problem with this method was time: it took far too long to run the comparison 37 times. I looked around and found the convert-to-byte-array and compute-hash approach. This is the result:
public enum CompareResult
{
    ciCompareOk,
    ciPixelMismatch,
    ciSizeMismatch
};
public CompareResult Compare(Bitmap bmp1, Bitmap bmp2)
{
    CompareResult cr = CompareResult.ciCompareOk;

    // Test to see if we have the same size of image
    if (bmp1.Size != bmp2.Size)
    {
        cr = CompareResult.ciSizeMismatch;
    }
    else
    {
        // Convert each image to a byte array
        System.Drawing.ImageConverter ic = new System.Drawing.ImageConverter();
        byte[] btImage1 = (byte[])ic.ConvertTo(bmp1, typeof(byte[]));
        byte[] btImage2 = (byte[])ic.ConvertTo(bmp2, typeof(byte[]));

        // Compute a hash for each image
        SHA256Managed shaM = new SHA256Managed();
        byte[] hash1 = shaM.ComputeHash(btImage1);
        byte[] hash2 = shaM.ComputeHash(btImage2);

        // Compare the hashes byte by byte; stop at the first difference
        for (int i = 0; i < hash1.Length && i < hash2.Length && cr == CompareResult.ciCompareOk; i++)
        {
            if (hash1[i] != hash2[i])
                cr = CompareResult.ciPixelMismatch;
        }
    }
    return cr;
}
To analyze the two bitmaps with this function, I call it in my main form with the following:
Bitmap Table = Screenshot();
CompareResult success0 = Compare(Properties.Resources.Result0, Table);
if (success0 == CompareResult.ciCompareOk)
{
    double result = 0;
    Num.Text = result.ToString();
    goto end;
}
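The other 36 checks look just like this one. A compact sketch of the same checks as a loop, instead of chaining goto (assuming the resources are named Result0 through Result36):
// Sketch: check all 37 result images in one loop (assumes names Result0..Result36)
Bitmap Table = Screenshot();
for (int i = 0; i <= 36; i++)
{
    var candidate = (Bitmap)Properties.Resources.ResourceManager.GetObject("Result" + i);
    if (Compare(candidate, Table) == CompareResult.ciCompareOk)
    {
        Num.Text = i.ToString();
        break;
    }
}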
The problem is that once all of this has run, I always end up with a cr value of ciPixelMismatch. I cannot get the images to match, even though the images are identical.
To give you a bit more background on the two bitmaps: they are approximately 88 by 40 pixels and located at (1047, 44) on the screen. I wrote a part of the program to automatically take a picture of that area so I did not have to worry about capturing the wrong location or size:
Table.Save("table.bmp");
After I took the picture and saved it, I moved it from the project's bin folder directly into the resource folder and ran the program again. Despite all of this, the result is still ciPixelMismatch. I believe the problem lies in the format the pictures are saved in: despite being the same image, they seem to be serialized in different formats, and one of them may contain a bit more information than the other, which would cause the mismatch. Can somebody please help me solve this problem? I am just beginning with my C# programming, five days into the learning process, and I am really at a loss with this.
Yours sincerely,
Samuel
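EDIT: a minimal sketch of an alternative worth testing (untested; it assumes both bitmaps really are Format32bppArgb): hash the raw pixel bytes from LockBits instead of the ImageConverter output, since ImageConverter re-encodes the bitmap and two pixel-identical images can still produce different byte streams.
// Untested sketch: hash raw 32bpp ARGB pixel data, bypassing any encoder.
private static byte[] HashPixels(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        byte[] pixels = new byte[Math.Abs(data.Stride) * bmp.Height];
        System.Runtime.InteropServices.Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
        using (var sha = new SHA256Managed())
            return sha.ComputeHash(pixels);
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}
Two bitmaps of equal size would then match exactly when the two HashPixels results are byte-for-byte equal (HashPixels is a name made up for this sketch).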

Related

How to create a world map using C language?
Here, I want to generate a 1000x1000 two-dimensional array representing a world map. Within the array, the land part is marked with the value 1 and the sea part with the value 0. Is there any simple way to create this?
Obviously, there's no algorithm which describes the shape of the coastline of the continents ;)
But you could use e.g. Smrender (http://www.abenteuerland.at/smrender/), feed it with coastline data from e.g. OpenStreetMap or Naturalearthdata and a single rule, and let it create a PNG image with 1000x1000 pixels.
EDIT:
With ImageMagick (convert) you can directly convert a PNG into a C header file as an array.
Go to Openstreetmap.org (or Google Maps), zoom out until you see the whole world, take a screenshot, open it in your favorite image manipulation program, crop and resize it to 1000x1000, then run convert input.png output.h.
Bernhard
I wrote an answer in game development some hours ago that can help you; here's the topic. It's written in JavaScript, but it won't be hard to translate to C.
In the example you fill the whole map with 0s and then trace a path randomly. If you want to smooth the map afterwards, you can add a snippet that loops over all the water tiles and converts a tile to land if it has three or more adjacent land tiles; run it 40-60 times and you will get a smoother shore and no "holes" in the continent.
EDIT
Updating the answer with a demo in C.
I've not tested it, but if you include the headers and check the code, this is the answer.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAPSIZE 100

int main(void)
{
    // Create and populate the map (all sea to start)
    static int map[MAPSIZE * MAPSIZE] = {0};
    srand((unsigned)time(NULL));

    // Make the path: a random walk that marks land tiles
    int currPos[2] = {0, 50};
    map[currPos[0] + currPos[1] * MAPSIZE] = 1;
    int landTiles = 20000;
    for (int l = 0; l < landTiles; l++) {
        int dir[2] = {rand() % 3 - 1, rand() % 3 - 1}; // each is -1, 0 or 1
        int next[2] = {currPos[0] + dir[0], currPos[1] + dir[1]};
        // Keep the walk inside the map
        if (next[0] < 0 || next[0] >= MAPSIZE || next[1] < 0 || next[1] >= MAPSIZE)
            continue;
        map[next[0] + next[1] * MAPSIZE] = 1;
        currPos[0] = next[0];
        currPos[1] = next[1];
    }

    // Draw the map
    for (int row = 0; row < MAPSIZE; row++) {
        for (int col = 0; col < MAPSIZE; col++)
            printf("%d", map[col + row * MAPSIZE]);
        printf("\n");
    }
    return 0;
}
You could have some program or script (taking as input some representation of some image of the Earth) which generates a long C file like
const char map[1000][1000] = {
{'0', '1', //.... etc for the first row
},
{'0', '0', '0', '1', //... etc
}
/// etc for other rows
};
See also XBM for an example.

DirectShow Custom Source Pin

I am new to DirectShow. I, like many others, am trying to create a socket-based P2P streaming solution for a WPF-based card game. I want the players to be able to see each other via small video windows.
My questions are two-fold. The first is: how do I lower the frame rate and resolution? I believe 320x200 at 15 to 20 fps should be fine. I am using the SampleGrabber callback to grab frame data and send it over the socket, which actually works with no compression at 640x480 resolution.
My second question concerns the receiving end. Since each uncompressed frame contains 921,600 bytes, things really bog down and I get very slow rendering even across my local WiFi-connected LAN. I added simple MJPEG compression (wanting to switch to H.264 later) and the frame size dropped to around 330-360K. Not a bad improvement.
So: do I need to create a custom DirectShow source pin to serve up the bytes received from the socket, so that I can attach a decoder and render the frames in a window?
I just wanted to ask this first since it seems like a lot of work to create a new COM object (haven't done that in about 15 years!), register it, and use/debug it.
Is there perhaps another way?
Also, if that is the way to go, should I use a SampleGrabber on the receiving end and create a BitmapSource from the decompressed bytes, or should I let DirectShow create a child window? The thing is, I want to have more than one other player, and I set an extra byte in the socket message to indicate each player's table position. How do I render each position in turn?
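On the BitmapSource idea, this is the direction I have in mind (a sketch only, untested; ToBitmapSource and playerImages are placeholder names, and it assumes the decoder hands back top-down 640x480 RGB24 bytes):
// Untested sketch: wrap one decompressed RGB24 frame in a WPF BitmapSource.
// frameBytes must hold width * height * 3 bytes.
BitmapSource ToBitmapSource(byte[] frameBytes, int width, int height)
{
    int stride = width * 3; // bytes per scan line for 24-bit pixels
    return BitmapSource.Create(width, height, 96, 96,
        System.Windows.Media.PixelFormats.Bgr24, null, frameBytes, stride);
}
// One Image control per table position, so each player renders independently:
// playerImages[tablePosition].Source = ToBitmapSource(frameBytes, 640, 480);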
For those that are interested, here is how you set the resolution and add an encoder/compressor:
// Create a graph builder
int hr = captureGraphBuilder.SetFiltergraph(graphBuilder);

// Find a capture device (webcam) and attach it to the graph
sourceFilter = FindCaptureDevice();
hr = graphBuilder.AddFilter(sourceFilter, "Video Capture");

// Get the source output pin
IPin sourcePin = DsFindPin.ByDirection((IBaseFilter)sourceFilter, PinDirection.Output, 0);
IAMStreamConfig sc = (IAMStreamConfig)sourcePin;

int count;
int size;
sc.GetNumberOfCapabilities(out count, out size);

VideoInfoHeader v = null;
AMMediaType media2 = null;
IntPtr memPtr = Marshal.AllocCoTaskMem(size);
for (int i = 0; i < count; ++i)
{
    sc.GetStreamCaps(i, out media2, memPtr);
    v = (VideoInfoHeader)Marshal.PtrToStructure(media2.formatPtr, typeof(VideoInfoHeader));

    // Stop at the first capability whose width is 160
    if (v.BmiHeader.Width == 160)
        break;

    // Free capability media types we are not going to use
    DsUtils.FreeAMMediaType(media2);
}

// Set the new media format to 160 x 120
hr = sc.SetFormat(media2);
Marshal.FreeCoTaskMem(memPtr);
DsUtils.FreeAMMediaType(media2);

// Create a SampleGrabber
IBaseFilter grabberF = (IBaseFilter)new SampleGrabber();
ISampleGrabber grabber = (ISampleGrabber)grabberF;

// Set the media type; the media subtype will be MJPG
var media = new AMMediaType
{
    majorType = MediaType.Video,
    subType = MediaSubType.MJPG
    //subType = MediaSubType.RGB24
};
hr = grabber.SetMediaType(media);
DsUtils.FreeAMMediaType(media);
hr = grabber.SetCallback(this, 1);
hr = graphBuilder.AddFilter(grabberF, "Sample Grabber");
IPin grabberPin = DsFindPin.ByDirection(grabberF, PinDirection.Input, 0);

// Find and add the MJPEG compressor
Guid iid = typeof(IBaseFilter).GUID;
object compressor = null;
foreach (DsDevice device in DsDevice.GetDevicesOfCat(FilterCategory.VideoCompressorCategory))
{
    if (device.Name == "MJPEG Compressor")
    {
        device.Mon.BindToObject(null, null, ref iid, out compressor);
        hr = graphBuilder.AddFilter((IBaseFilter)compressor, "Compressor");
        break;
    }
}

// This also works:
//IBaseFilter enc = (IBaseFilter)new MJPGEnc();
//graphBuilder.AddFilter(enc, "MJPEG Encoder");

// Get the input and output pins of the compressor
IBaseFilter enc = (IBaseFilter)compressor;
IPin encPinIn = DsFindPin.ByDirection(enc, PinDirection.Input, 0);
IPin encPinOut = DsFindPin.ByDirection(enc, PinDirection.Output, 0);

// Connect the pins: source to compressor input, compressor output to grabber
hr = graphBuilder.Connect(sourcePin, encPinIn);
hr = graphBuilder.Connect(encPinOut, grabberPin);

// Release the pin resources
Marshal.ReleaseComObject(sourcePin);
Marshal.ReleaseComObject(enc);
Marshal.ReleaseComObject(encPinIn);
Marshal.ReleaseComObject(encPinOut);
Marshal.ReleaseComObject(grabberPin);

// Create a render stream
hr = captureGraphBuilder.RenderStream(PinCategory.Preview, MediaType.Video, sourceFilter, null, grabberF);
Marshal.ReleaseComObject(sourceFilter);
Configure(grabber);
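(Sketch, untested: the frame-rate half of the first question can probably be handled the same way, since VideoInfoHeader.AvgTimePerFrame is expressed in 100-nanosecond units; lower it on the chosen capability and write the header back before calling SetFormat.)
// Untested sketch: request roughly 15 fps by editing the chosen format.
// v and media2 are the variables from the capability loop above.
v.AvgTimePerFrame = 10000000 / 15;                  // 100 ns units: 15 frames per second
Marshal.StructureToPtr(v, media2.formatPtr, false); // write the modified header back
hr = sc.SetFormat(media2);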

put live audio input data into array

Hey, I am using getUserMedia() to capture audio input from the user's microphone. I want to put the captured values into an array so I can manipulate them. I am using the following code, but the problem is that my array gets filled with the value 128 all the time (I print the results to the console for now), and I can't find my mistake. Can someone help me find it?
//create a new context for audio input
context = new webkitAudioContext();
var analyser = null;
var dataarray = [];

getLiveInput = function() {
    navigator.webkitGetUserMedia({audio: true}, onStream, onStreamError);
};

function onStream(stream) {
    var input = context.createMediaStreamSource(stream);
    analyser = context.createAnalyser();

    var str = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteTimeDomainData(str);
    for (var i = 0; i < str.length; i++) {
        var value = str[i];
        dataarray.push(value);
        console.log(dataarray);
    } //end for loop
} //end function

function onStreamError(e) {
    console.error('Streaming failed: ', e);
}
The values returned from getByteTimeDomainData are 8-bit integers, from 0 to 255. 128, which is halfway, basically means "no signal"; it is the equivalent of 0 in PCM audio data that runs from -1 to 1.
But ANYWAY - there are a couple of problems:
First, you're never connecting the input to the analyser. You need input.connect(analyser) before you call analyser.getByteTimeDomainData().
The second problem isn't with your code so much as it's just an implementation issue.
Basically, the onStream function only gets called once, and getByteTimeDomainData only returns data for 1024 samples' worth of audio (a tiny fraction of a second). The problem is, this all happens so quickly after the stream gets created that there's no real input yet. Try wrapping the analyser.getByteTimeDomainData() call and the loop that follows it in a 1000 ms setTimeout, and then whistle into your microphone as soon as you give the browser permission to record. You should see some values other than 128.
Here's an example: http://jsbin.com/avasav/5/edit

Creating an If statement for an entire array

So here is my situation. I'm new to programming and I've just started making a very, very basic platform game. And I mean literally a game with platforms.
I've got my character in and jumping about, and I've created my platforms as an array so that I could put them all side by side at the bottom. There are other ways I could get around the problem, but I wanted to find out how to do it with an array.
So I've got my character falling with this:
kirby.yVelocity += 1.0f;
which is all fine, but I want his yVelocity to go to 0.0f when he hits any of the platforms in the array.
So I tried this piece of code
if (plat[i].drawRect.Intersects(kirby.drawRect))
{
    kirby.yVelocity = 0.0f;
}
which I thought would work, but it gives me an error on the [i] saying that it isn't applicable in this context.
A few notes:
kirby is my character's name, drawRect is its Rectangle, and plat is my Platform array, which consists of 13 platforms.
Thanks to anyone who can help
Update
The problem is that any variation of plat.drawRect or plat[i].drawRect doesn't work. Here is all my code relating to the platform array.
struct Platform
{
    public Texture2D txr;
    public Rectangle drawRect;
}

Platform[] plat;
plat = new Platform[13];
for (int i = 0; i < plat.Length; i++)
{
    plat[i].txr = Content.Load<Texture2D>("platform");
    plat[i].drawRect = new Rectangle(i * plat[i].txr.Width, 460, plat[i].txr.Width, plat[i].txr.Height);
}

for (int i = 0; i < plat.Length; i++)
{
    spriteBatch.Draw(plat[i].txr, plat[i].drawRect, Color.White);
}
spriteBatch.End();
Seems like you have to add a loop over the platforms. Maybe like this:
for (Platform platform : plat) {
    if (platform.drawRect.Intersects(kirby.drawRect)) {
        kirby.yVelocity = 0.0f;
    }
}
Here I'm assuming you're using Java, where the enhanced for loop works over your plat array (a Platform[]) just as it would over a List<Platform>; a C# version follows below.
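Since the snippets in the question are C# (XNA), the same idea in C# is a foreach over the array; a direct translation of the loop above, using the question's own plat and kirby:
// C#: zero the vertical velocity when kirby overlaps any platform
foreach (Platform platform in plat)
{
    if (platform.drawRect.Intersects(kirby.drawRect))
    {
        kirby.yVelocity = 0.0f;
    }
}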

raw sound byteArray to float Array

I'm trying to convert the byteArray of a Sound object to an array of floats. The Sound object plays back fine and at full length, but the float array I get from it is cut off (though it sounds correct), so I must be doing something wrong in the conversion:
var s:Sound = mySound;
s.play(); // plays fine
var bytes:ByteArray = new ByteArray();
bytes.endian = Endian.LITTLE_ENDIAN;
s.extract(bytes, s.bytesTotal, 0);
var leftChannel:Array = new Array();
var rightChannel:Array = new Array();
bytes.position = 0;
while (bytes.bytesAvailable)
{
    leftChannel.push(bytes.readFloat());
    rightChannel.push(bytes.readFloat());
}
and this is what I get:
The top two channels are the original Sound object.
The lower two are the float array data. I aligned them so you can see that the beginning is cut off and that the length is obviously incorrect.
Thanks for any answers...
OK, there were two problems:
1. The MP3 file I was importing was somehow corrupt; that caused the beginning to be cut off.
2. The length I told extract to read was not correct. Sound.length is in milliseconds, so at a 44.1 kHz sample rate the total number of samples is the length times 44.1 samples per millisecond:
var numTotalSamples:Number = int(s.length * 44.1); // assuming a 44.1 kHz sample rate
then:
s.extract(bytes, numTotalSamples, 0);
