First of all, I'm totally new to Kivy, so I'm struggling a bit.
I'm trying to display a numpy array in a kivy window.
So far I have figured out that this should work using the Texture class (http://kivy.org/docs/api-kivy.graphics.texture.html).
As my numpy array changes from time to time, I'm trying to adjust the following code to my application.
# create a 64x64 texture, defaults to rgb / ubyte
texture = Texture.create(size=(64, 64))
# create 64x64 rgb tab, and fill with values from 0 to 255
# we'll have a gradient from black to white
size = 64 * 64 * 3
buf = [int(x * 255 / size) for x in range(size)]
# then, convert the array to a ubyte string
buf = b''.join(map(chr, buf))
# then blit the buffer
texture.blit_buffer(buf, colorfmt='rgb', bufferfmt='ubyte')
# that's all ! you can use it in your graphics now :)
# if self is a widget, you can do this
with self.canvas:
    Rectangle(texture=texture, pos=self.pos, size=(64, 64))
It seems that creating the texture and changing it works as it should, but I don't get how to display the texture.
Can anybody explain to me how to use the
with self.canvas:
    Rectangle(texture=texture, pos=self.pos, size=(64, 64))
in such a way that I get to see my picture/numpy array?
Thanks a lot in advance!
Holzroller
Edit:
I figured out that using Kivy 1.8.0 and the Texture class is a bit messy. So I upgraded to Kivy 1.9.0 via GitHub (installing Kivy via apt-get on Ubuntu 14.04 LTS gives you the 1.8.0 version) and I get to see the texture using the following code. I hope that helps people who are having the same problem as me.
from kivy.graphics.texture import Texture
from kivy.graphics import Rectangle
from kivy.uix.widget import Widget
from kivy.base import runTouchApp
from array import array
from kivy.core.window import Window
# create a 1280x1024 texture in rgb / ubyte format
texture = Texture.create(size=(1280, 1024), colorfmt='rgb')
# create a 1280x1024 rgb buffer, and fill it with values from 0 to 255
# we'll have a gradient from black to white
size = 1280 * 1024 * 3
buf = [int(x * 255 / size) for x in range(size)]
# then, convert the list to an array of unsigned bytes
arr = array('B', buf)
# buf = b''.join(map(chr, buf))
# then blit the buffer
texture.blit_buffer(arr, colorfmt='rgb', bufferfmt='ubyte')
# that's all! you can use it in your graphics now :)
# display it on a root widget like this
root = Widget()
with root.canvas:
    Rectangle(texture=texture, pos=(0, 0), size=(1280*3, 1024*3))
runTouchApp(root)
Edit2:
Basically I'm back to the original problem:
I have a numpy array (type 'numpy.ndarray'; dtype 'uint8') and I'm trying to convert it into a format the texture can display as an image. I tried to break it down the same way it is done in the example code I posted above, but sadly it doesn't work. I really do not know what I'm doing wrong here.
(my numpy array is called im2 in the following code)
list1 = numpy.array(im2).reshape(-1,).tolist()
arr = array('B', list1)
texture.blit_buffer(arr, colorfmt='rgb', bufferfmt='ubyte')
NumPy arrays have a tostring() method (tobytes() in newer NumPy versions) that you can use directly if the source array is of uint8 type. You don't even need to reshape:
texture = Texture.create(size=(16, 16), colorfmt="rgb")
arr = numpy.ndarray(shape=[16, 16, 3], dtype=numpy.uint8)
# fill your numpy array here
data = arr.tostring()
texture.blit_buffer(data, bufferfmt="ubyte", colorfmt="rgb")
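Applied to the im2 array from the question, a minimal sketch of the whole path could look like this (assuming im2 is a uint8 array of shape (height, width, 3) and that self is a Widget, as in the first example):

import numpy as np
from kivy.graphics import Rectangle
from kivy.graphics.texture import Texture

h, w = im2.shape[:2]
# Texture.create takes (width, height), the reverse of numpy's (rows, cols) order
texture = Texture.create(size=(w, h), colorfmt='rgb')
# flip the rows because Kivy textures have their origin at the bottom-left
texture.blit_buffer(np.flipud(im2).tobytes(), colorfmt='rgb', bufferfmt='ubyte')
with self.canvas:
    Rectangle(texture=texture, pos=self.pos, size=(w, h))

If the image comes out garbled, the first thing to check is that the buffer length really is width * height * 3 bytes.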
About the issue you mention in the comment, I see two points:
Ensure the ROS callback is called in the main thread; otherwise the update may simply be ignored.
When you change the texture in place, the objects that use it are not notified; you need to do that yourself. Add a self.canvas.ask_update() to make sure the canvas is redrawn on the next frame.
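A rough sketch of that idea, assuming the widget holds the texture created above and that get_new_frame() is a hypothetical stand-in for however the new uint8 array arrives (for example from the ROS callback):

from kivy.clock import Clock
from kivy.uix.widget import Widget

class FrameWidget(Widget):

    def start(self):
        # Clock callbacks run on Kivy's main thread, which is where texture updates belong
        Clock.schedule_interval(self.update, 1 / 30.)

    def update(self, dt):
        frame = self.get_new_frame()  # hypothetical: returns a uint8 (h, w, 3) array
        self.texture.blit_buffer(frame.tobytes(), colorfmt='rgb', bufferfmt='ubyte')
        self.canvas.ask_update()      # blit_buffer alone does not trigger a redraw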
I haven't posted to a forum like this before, so I've tried to explain the problem I'm stuck on as clearly and in as much detail as possible. I should add that I've dug for ages for a suitable function in C, but short of looping over the arrays it seems no function exists in C to get the job done more elegantly?
Anyway, I have a script which, when complete, will allow me to send specific integer value data, via a serial signal, to a dot matrix display.
I'm able to send the raw hex data from an array/buffer to a dot matrix display using the script as it is, and the dot matrix script itself works well too, displaying the whole array of hex data. However, I want to refine the data by not only converting the values in the array or buffer (named tempBuffer) from hex to int (or whatever is appropriate) but also by specifically selecting three items of data from their respective index positions in that eight-item array/buffer (tempBuffer).
The problem looks like this:-
tempBuffer {0x4 0x41 0xC 0xB 0x90 0x0 0x0 0x0} // Raw hex data in array (buffer) - index positions 0 through 7.
I would like to select index positions [2], [3] and [4] in the above tempBuffer array and then place them consecutively into another array pending further processing (i.e. eventually adding item [3] and [4] together to produce one INT value representing a temperature, for example).
For example:
tempBuffer item [2] would then be placed as item [0] in a new array, newArray. tempBuffer items [3] & [4] would obviously become items [1] & [2] in newArray.
I plan to use newArray item [0] in a condition check, for example an if or case statement. The outcome of that check would determine what happens next to the subsequent items [1] & [2]. It won't always be necessary to use item [2] if the condition on item [0] is not met.
I could use a messy C loop to extract the values I need from tempBuffer and place them in newArray. However, I wondered whether there is a more elegant, existing function in C that would let me get the job done more efficiently?
My query also applies, I guess, to possible C functions for adding the values of items [1] and [2] together in newArray, but if anyone can point me to a pre-existing function or an efficient means of iterating over the arrays above, I would be grateful for any guidance, advice or pointers to further reading. Else, (excuse the pun) it's back to unwieldy looping and iteration in C statements to achieve the same outcome.
In the unlikely event anyone is following my original posting . . .
Thank you to those who tried to help.
Please excuse my syntax below, I'm 20 years out of practice with any coding. However, I don't care - it works!
I've managed to resolve my query and now have an adapted script (see section relating to my original stackoverflow query) running the way I want it to.
My adapted and integrated script now achieves the following:-
Read and *display (static, scroll, reverse & invert) Ford **ECU PID data (500 kbps / 11-bit) relating to vehicle speed and RPM on a 10 x (8x8) LED matrix display strip.
***Parse NMEA sentences to display altitude and time-of-day data on said matrix.
• I plan to change to Mikal Hart's TinyGPS++ libraries soon and adapt the script further to make it more efficient.
• I also plan to present a YouTube tutorial about my project.
* Thank you to Marco Colli (MajicDesigns) for the Parola libraries.
** Thank you to Seeed Studio's example script, OBDII_PIDs, as a base starting point.
*** Thank you to "ludektalian" for further inspiration.
Thank you,
# Define this_that_and_the_other;
String = "Only fraction of script";
blah - - -
blah - - -
blah . . .
unsigned char newArray[8] = {0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00};
unsigned char serialOutArray[3] = {0,0,0};
if (CAN_MSGAVAIL == CAN.checkReceive())        // check data acquisition
{
    CAN.readMsgBuf(&len, buf);                 // read data, len: data length, buf: data buf
    for (int i = 0; i < len; i++)              // copy the received data into newArray
    {
        newArray[i] = buf[i];
    }
    serialOutArray[0] = newArray[2];           // create 3-item array (serialOutArray)
    serialOutArray[1] = newArray[3];           // from items [2], [3] and [4] of the
    serialOutArray[2] = newArray[4];           // original 8-item array (buf)
    delay(100);
    for (int y = 0; y < 13; y++)
    {
        if (int(serialOutArray[0]) == 5)       // Coolant in Celsius
        {
            delay(100);
            colSub = int(serialOutArray[1]);
            colTot = (colSub - 40);
            P.print(colTot + "C");
            delay(500);
            colSub = 0;
            colTot = 0;
            break;
        }
        if (int(serialOutArray[0]) == 12)      // RPM
        {
            rpmA = int(serialOutArray[1]) * 256;
            rpmB = int(serialOutArray[2]);
            subRPM = (rpmA + rpmB);
            rpmTot = subRPM / 4;
            P.print(rpmTot + "RPM");
            break;
        }
        if (int(serialOutArray[0]) == 13)      // MPH
        {
            mphA = serialOutArray[1];
            mphTot = mphA * 0.621371192;
            P.print(mphTot + "MPH");
            break;
        }
I was experimenting with GameplayKit's GKAgent3D class to move an SCNNode within a scene. I was able to update the SCNNode with the agent's position, but not its rotation. The issue is that the agent's rotation is stored as a matrix_float3x3, which doesn't match any of the data types SceneKit uses for storing rotational information.
So what I'd like to know is whether there's a simple function or method that could convert a rotation stored as a matrix_float3x3 to any of SceneKit's data types?
To expand on @rickster's answer, here's a nice way to take the top-left 3x3 of a 4x4 matrix in Swift, taking advantage of the expanded SIMD support in the iOS 11 / tvOS 11 / High Sierra version of SceneKit:
extension float4 {
    var xyz: float3 {
        return float3(x, y, z)
    }
    init(_ vec3: float3, _ w: Float) {
        self = float4(vec3.x, vec3.y, vec3.z, w)
    }
}

extension float4x4 {
    var upperLeft3x3: float3x3 {
        let (a, b, c, _) = columns
        return float3x3(a.xyz, b.xyz, c.xyz)
    }
    init(rotation: float3x3, position: float3) {
        let (a, b, c) = rotation.columns
        self = float4x4(float4(a, 0),
                        float4(b, 0),
                        float4(c, 0),
                        float4(position, 1))
    }
}
Then, to update your agent to match your node's orientation, you'd write:
agent.rotation = node.simdTransform.upperLeft3x3
Or, if the node in question is not at the "root" level (as in, a direct child of the rootNode), you might want to use the node's worldTransform:
agent.rotation = node.simdWorldTransform.upperLeft3x3
EDIT: If the node in question has a dynamic physics body attached, or is being animated with an SCNTransaction block, the node's presentation node will more accurately reflect its current position on screen:
agent.position = node.presentation.simdWorldPosition
agent.rotation = node.presentation.simdWorldTransform.upperLeft3x3
EDIT: added the code below for going in the other direction, moving the node to match the agent.
node.simdTransform = float4x4(rotation: agent3d.rotation, position: agent3d.position)
Note that if you have a physics body attached to the node, it should be kinematic rather than dynamic if you're going to be directly modifying the node's transform in this way.
SceneKit takes transform matrices as SCNMatrix4, and provides utilities for converting from SIMD matrix_float4x4: init(_ m: float4x4) for Swift and SCNMatrix4FromMat4 for ObjC/C++.
Sadly, I don't see a built-in way to convert between SIMD 3x3 and 4x4 matrices using the assumption that the 3x3 is the upper left of the 4x4. (Seems like you'd expect that in the SIMD library, so it's worth filing a bug to Apple about.)
But it's not too hard to provide one yourself: just construct a 4x4 from column vectors, using the three column vectors of the 3x3 (padded out to float4 vectors with zero for the w component) and identity for the fourth column (0,0,0,1). (Implementation code left for the reader, partly because I don't want to write it for three languages.) After converting float3x3 to float4x4 you can convert to SCNMatrix4.
Edit: In iOS 11 / tvOS 11 / macOS 10.13 (why didn't they just call this year's macOS version 11, too?), SceneKit has a whole parallel set of APIs for using SIMD types like float4x4 directly; e.g. simdTransform. However, you still need to convert a 3x3 to a 4x4 matrix.
I'm trying to implement a "LoG blob detector" using Python and OpenCV.
The idea is to create 10-15 levels of LoG filters, apply each of them to my original grayscale image, save the results in an array of size height x width x numOfLevels, and then find the local maxima in that 3D array.
The problem is I'm not sure how to save these in an array.
I tried to do the following:
import cv2
import numpy as np
import matplotlib.pyplot as plt

myImage = cv2.imread('butterfly.jpg')
gray_image = cv2.cvtColor(myImage, cv2.COLOR_BGR2GRAY)
sigma = 2
k = 2**(0.25)
std2 = float(sigma**2)
arr = []
for i in range(10):
    filt_size = 2*np.ceil(3*sigma)+1
    H = log_filt(filt_size, sigma)  # log_filt builds the LoG kernel (defined elsewhere)
    H *= sigma**2
    dst = cv2.filter2D(gray_image, -1, H)
    arr.append(dst)
    cv2.imshow('Gray', dst)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    sigma = sigma * k
    std2 = float(sigma**2)
plt.imshow(H, interpolation='nearest')
But then if I try to take one of the images and use cv2.imshow(arr[0]), I get the following error:
TypeError: Required argument 'mat' (pos 2) not found
What am I doing wrong here?
Is there a better way to save these in an array?
Maybe using np.array somehow?
Your error:
cv2.imshow(arr[0]) I get the following error: TypeError: Required argument 'mat' (pos 2) not found
is because you pass the image (arr[0]) as the first parameter, but you should pass it as the second:
cv2.imshow('WindowName', arr[0])
See the OpenCV 2.4 and 3.0 documentation of imshow:
cv2.imshow(winname, mat)
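On the second part of the question (saving the levels "using np.array somehow"): once the loop has finished, you can stack the list of filtered images into a single 3D array, which makes the local-maxima search over scales easier. A small sketch, assuming arr is the list built in the loop above:

import numpy as np

scale_space = np.dstack(arr)   # shape: (height, width, numOfLevels)
print(scale_space.shape)
# each level is still an ordinary 2D image, e.g. for display:
# cv2.imshow('level 0', scale_space[:, :, 0])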
I'm just trying to adjust the contrast/brightness of a grayscale image to highlight the whites in that image, using OpenCV in C. How can I do that? Is there any function that performs this task in OpenCV?
Original image:
Modified image:
Thanks in advance!
I think you can adjust contrast here in two ways:
1) Histogram Equalization:
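For reference, a minimal sketch of that attempt (assuming a single-channel grayscale input; the file names are illustrative):

import cv2

img = cv2.imread('input.jpg', 0)   # load as grayscale
equ = cv2.equalizeHist(img)        # global histogram equalization
cv2.imwrite('equalized.jpg', equ)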
But when I tried this with your image, the result was not as you expected. Check it below:
2) Thresholding:
Here, I compared each pixel value of the input with an arbitrary value (I took 127). Below is the logic, for which OpenCV has a built-in function. But remember, the output is a binary image, not grayscale as in your case.
If (input pixel value >= 127):
    output pixel value = 255
else:
    output pixel value = 0
And below is the result I got:
For this, you can use the threshold function or the compare function.
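A minimal sketch of the threshold route (OpenCV-Python; 127 and 255 are the cut-off and output values from the logic above, and the file names are illustrative):

import cv2

img = cv2.imread('input.jpg', 0)                              # grayscale input
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)   # bright pixels become 255, the rest 0
cv2.imwrite('binary.jpg', binary)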
3) If you must get a grayscale image as output, do as follows:
(the code is in OpenCV-Python, but for every function, corresponding C functions are available at opencv.itseez.com)
for each pixel in image:
    if pixel value >= 127: add 'x' to pixel value
    else: subtract 'x' from pixel value
('x' is an arbitrary value.) Thus the difference between light and dark pixels increases.
img = cv2.imread('brain.jpg',0)
bigmask = cv2.compare(img,np.uint8([127]),cv2.CMP_GE)
smallmask = cv2.bitwise_not(bigmask)
x = np.uint8([90])
big = cv2.add(img,x,mask = bigmask)
small = cv2.subtract(img,x,mask = smallmask)
res = cv2.add(big,small)
And below is the result obtained:
You could also check out the OpenCV CLAHE algorithm. Instead of equalizing the histogram globally, it splits up the image into tiles and equalizes those locally, then stitches them together. This can give a much better result.
With your image in OpenCV 3.0.0:
import cv2
inp = cv2.imread('inp.jpg',0)
clahe = cv2.createCLAHE(clipLimit=4.0, tileGridSize=(8,8))
res = clahe.apply(inp)
cv2.imwrite('res.jpg', res)
Gives something pretty nice
Read more about it here, though it's not super helpful:
http://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html#gsc.tab=0
Although this post is a bit aged:
What about using "cvAddWeighted()"?
What it does is:
dst = src1*alpha + src2*beta + gamma
What I understand from applying brightness and contrast is that one wants to do:
dst = src*contrast + brightness;
so if
src1 = input image
src2 = any image of same type as src1
alpha = contrast value
beta = 0.0
gamma = brightness value
dst = resulting Image (must be of same type as src1)
One should be pretty much done with the task, no?
This approach works for me using CvMat* images.
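As a rough OpenCV-Python sketch of the same idea (cvAddWeighted is the C-interface name; the alpha and gamma values here are purely illustrative):

import cv2
import numpy as np

src1 = cv2.imread('input.jpg', 0)   # input image, grayscale
src2 = np.zeros_like(src1)          # dummy second image of the same type
alpha = 1.5                         # contrast value
beta = 0.0
gamma = 40.0                        # brightness value
dst = cv2.addWeighted(src1, alpha, src2, beta, gamma)   # dst = src1*alpha + src2*beta + gamma
cv2.imwrite('adjusted.jpg', dst)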
I have successfully rendered a canvas to a bitmap using the WriteableBitmap class and am now trying to use the bitmap to save an image on the client system through a SaveFileDialog. I am using the FluxJpegCore image-encoding method, where raster arrays are used to generate the image pixel by pixel.
Below is the part of the code that does the job.
byte[][,] raster = new byte[bands][,];
for (int i = 0; i < bands; i++)
{
    raster[i] = new byte[width, height];
}
for (int row = 0; row < height; row++)
{
    for (int column = 0; column < width; column++)
    {
        int pixel = bitmap.Pixels[width * row + column];
        raster[0][column, row] = (byte)(pixel >> 16);
        raster[1][column, row] = (byte)(pixel >> 8);
        raster[2][column, row] = (byte)pixel;
    }
}
All goes fine with image saving; however, when I zoom the image and then print it, the code fails at the line "raster[i] = new byte[width, height];" and a System.OutOfMemoryException is raised. Can anyone help me find a solution to this?
I'm not sure there is a solution to be had. You have 3 arrays that each need a contiguous 163 MB block of memory, and the problem will be that the process does not have 3 free address blocks of that size.
Bear in mind also that bitmap.Pixels will be an array about 653 MB big (each band array stores one byte per pixel, while Pixels stores a four-byte int per pixel, hence roughly four times the size).
Your only real hope(s) would be to
Run the app out-of-browser (OOB); hopefully virtual-memory fragmentation would be limited enough to allow such very large arrays to be allocated.
Check whether FluxJpegCore can use a Stream instead of a byte array and does so efficiently (still a lot of work for you to do there).
Move up to Silverlight 5 and host your app in a 64-bit browser instance.
Going with @AnthonyWJones, I'm pretty sure width or height is something like double.NaN. Make sure you place a check to see whether width and height are real numbers. Also check that your array does not allocate more than is possible within Silverlight.