I have picked up 'Learning OpenCV' and have been trying some of the code examples/exercises. In this code snippet, I want the slider to update its position with each video frame, but for some reason it slows down the video playback.
The slider position is updated during playback with cvSetTrackbarPos(), and that call is what makes the playback very slow.
#include <stdio.h>
#include <stdlib.h>
#include <cv.h>
#include <highgui.h>

int g_slider_position = 0;
CvCapture *g_capture = NULL;

void onTrackbarSlide(int pos)
{
    cvSetCaptureProperty(g_capture, CV_CAP_PROP_POS_FRAMES, pos);
}

int main(int argc, char *argv[])
{
    if (argc < 2)
    {
        printf("Usage: main <video-file-name>\n\7");
        exit(0);
    }
    // create a window
    cvNamedWindow("Playing Video With Slider", CV_WINDOW_AUTOSIZE);
    g_capture = cvCreateFileCapture(argv[1]);
    int frames = (int) cvGetCaptureProperty(g_capture, CV_CAP_PROP_FRAME_COUNT);
    if (frames != 0)
    {
        cvCreateTrackbar("Slider", "Playing Video With Slider",
                         &g_slider_position, frames, onTrackbarSlide);
    }
    IplImage *frame = 0;
    while (1)
    {
        frame = cvQueryFrame(g_capture);
        if (!frame)
        {
            break;
        }
        cvShowImage("Playing Video With Slider", frame);
        cvSetTrackbarPos("Slider", "Playing Video With Slider",
                         g_slider_position + 1); // Slowing down playback
        char c = cvWaitKey(33);
        if (c == 27)
        {
            break;
        }
    }
    // release the capture (frames returned by cvQueryFrame must not be released)
    cvReleaseCapture(&g_capture);
    // destroy the window
    cvDestroyWindow("Playing Video With Slider");
    return 0;
}
This is an inefficiency in how OpenCV displays trackbars: the same slowdown occurs even if you don't update the slider, and even when the trackbar refers to a variable that never changes inside the processing loop.
A workaround might be to display the trackbar in a separate window.
The line char c = cvWaitKey(33); is part of the problem. It is inside while(1), and each iteration waits 33 milliseconds for a key press. Make this number smaller.
EDITED LATER:
Make the change shown below:
void onTrackbarSlide(int pos)
{
    pos = g_slider_position;
    cvSetCaptureProperty(g_capture, CV_CAP_PROP_POS_FRAMES, pos);
}
The problem is that every time you call cvSetTrackbarPos("Slider", "Playing Video With Slider", g_slider_position+1);, the onTrackbarSlide callback fires and sets the video position a second time, slowing down the program flow.
The way I found to avoid that is with a flag. It tells the callback whether the change to the trackbar was produced by the normal update flow or by you.
int g_slider_position = 0;
int g_update_slider = 0;   // flag
CvCapture *g_capture = NULL;

void onTrackbarSlide(int pos)
{
    if (!g_update_slider)  // if not changed by the video flow
    {
        cvSetCaptureProperty(g_capture, CV_CAP_PROP_POS_FRAMES, pos);
    }
}

void updateSlider(int pos)
{
    g_update_slider = 1;   // changed by the video flow
    cvSetTrackbarPos("Position", "Example3", pos);
    g_update_slider = 0;   // reset the flag once the change is performed
}
In main, I call updateSlider instead of cvSetTrackbarPos.
I'm in a pickle regarding timer concepts. How can I run a "delay" inside a timer callback? That's the best way I can frame the question, knowing full well that what I'm trying to do sounds like nonsense. The objective: I want to test the pinState condition twice (once initially and again 4 seconds later), and this all needs to happen periodically (hence a timer).
The platform is NodeMCU running a WiFi (ESP8266 chip) and coding done inside Arduino IDE.
#define BLYNK_PRINT Serial
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>

BlynkTimer timer;

char auth[] = "x"; //Auth code sent via Email
char ssid[] = "x"; //Wifi name
char pass[] = "x"; //Wifi Password

int flag = 0;

void notifyOnFire()
{
    int pinState = digitalRead(D1);
    if (pinState == 0 && flag == 0) {
        delay(4000);
        int pinStateAgain = digitalRead(D1);
        if (pinStateAgain == 0) {
            Serial.println("Alarm has gone off");
            Blynk.notify("House Alarm!!!");
            flag = 1;
        }
    }
    else if (pinState == 1)
    {
        flag = 0;
    }
}

void setup()
{
    Serial.begin(9600);
    Blynk.begin(auth, ssid, pass);
    pinMode(D1, INPUT_PULLUP);
    timer.setInterval(1000L, notifyOnFire);
}

void loop()
{
    //Serial.println(WiFi.localIP());
    Blynk.run();
    timer.run();
}
An easy fix would be to set the periodicity of the timer to 4000 ms, i.e. timer.setInterval(4000L, notifyOnFire);, and in notifyOnFire use a static variable, toggling its value each time notifyOnFire is called:
void notifyOnFire()
{
    static char state = 0;
    if (state == 0)
    {
        /* Write here the code you need to be executed before the 4 sec delay */
        state = 1;
    }
    else
    {
        /* Write here the code you need to be executed after the 4 sec delay */
        state = 0;
    }
}
The nice thing about static variables is that they are initialized only once, and they retain their value after the enclosing scope exits (in this case, between calls to notifyOnFire).
I'm working on code that simulates Knight Rider LEDs. I want to control the LEDs via Bluetooth so I can switch them off.
I have tried several things but nothing works.
Any help is appreciated.
code:
#include "BluetoothSerial.h"
#include <Adafruit_NeoPixel.h>

#define N_LEDS 8
#define PIN 23

Adafruit_NeoPixel strip = Adafruit_NeoPixel(N_LEDS, PIN, NEO_GRB + NEO_KHZ800);
BluetoothSerial ESP_BT;

int incoming;
int r;
int pos = 0, dir = 1;

void setup() {
    strip.begin();
    Serial.begin(9600);
    ESP_BT.begin("ESP32_LED_Control");
    Serial.println("Bluetooth Device is Ready to Pair");
}
void loop() {
    strip.setPixelColor(pos - 2, 0x100000); // Dark red
    strip.setPixelColor(pos - 1, 0x800000); // Medium red
    strip.setPixelColor(pos,     0xFF3000); // Center pixel is brightest
    strip.setPixelColor(pos + 1, 0x800000); // Medium red
    strip.setPixelColor(pos + 2, 0x100000); // Dark red
    strip.show();
    delay(85); // control speed
    for (r = -2; r <= 2; r++) strip.setPixelColor(pos + r, 0);
    pos += dir;
    if (pos < 0) {
        pos = 1;
        dir = -dir;
    } else if (pos >= strip.numPixels()) {
        pos = strip.numPixels() - 2;
        dir = -dir;
    }
    if (ESP_BT.available()) // Check if we receive anything from Bluetooth
    {
        incoming = ESP_BT.read(); // Read what we receive
        Serial.print("Received:"); Serial.println(incoming);
    }
    if (incoming == 48) // ASCII '0'
    {
        // Should put here code that turns the LED strip off //
        ESP_BT.println("LED turned OFF");
    }
}
To turn off all LEDs I would do something like this (note that Adafruit_NeoPixel's method is setPixelColor, with a lowercase s):
/**
 * Turn off all LEDs
 */
void allBlack()
{
    for (uint16_t indexPixel = 0; indexPixel < N_LEDS; indexPixel++)
    {
        strip.setPixelColor(indexPixel, 0x000000);
    }
    strip.show();
}
I know this is maybe outside the scope of the question, but I recommend trying this library:
https://github.com/Makuna/NeoPixelBus
It has a lot of good examples, and in my opinion it is much better not to use delay() but to schedule the animation against elapsed time, as shown in the Makuna examples, so you keep more control over the animation.
I also have other animation examples in a small project of mine that is triggered by UDP commands.
I've written this code by looking at various examples: Python pulseaudio monitor, Pavumeter source, async playback example, and Pacat source.
I have successfully connected to a sink and am able to record it, but I'm stuck on getting the volume value out. If I print the value from the read callback, I just get a bunch of seemingly random numbers once a second.
Now I'm not asking for someone to finish writing the code for me, I'd just like some tips, help so that I could head towards the right direction. How do I retrieve the volume value?
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <pulse/pulseaudio.h>

static int latency = 20000; // start latency in microseconds
static int sampleoffs = 0;
static short sampledata[300000];
static pa_buffer_attr bufattr;
static int underflows = 0;
static pa_sample_spec ss;

// This callback gets called when our context changes state. We really only
// care about when it's ready or if it has failed.
void pa_state_cb(pa_context *c, void *userdata) {
    pa_context_state_t state;
    int *pa_ready = userdata;
    state = pa_context_get_state(c);
    switch (state) {
        // These are just here for reference
        case PA_CONTEXT_UNCONNECTED:
        case PA_CONTEXT_CONNECTING:
        case PA_CONTEXT_AUTHORIZING:
        case PA_CONTEXT_SETTING_NAME:
        default:
            break;
        case PA_CONTEXT_FAILED:
        case PA_CONTEXT_TERMINATED:
            *pa_ready = 2;
            break;
        case PA_CONTEXT_READY:
            *pa_ready = 1;
            break;
    }
}

static void stream_read_cb(pa_stream *s, size_t length, void *userdata) {
    const void *data;
    pa_stream_peek(s, &data, &length);
    data = (const unsigned char *) data;
    printf("%u", data);
    pa_stream_drop(s);
}
int main(int argc, char *argv[]) {
    pa_mainloop *pa_ml;
    pa_mainloop_api *pa_mlapi;
    pa_context *pa_ctx;
    pa_stream *recordstream;
    int r;
    int pa_ready = 0;
    int retval = 0;
    unsigned int a;
    double amp;
    int test = 0;

    // Create a mainloop API and connection to the default server
    pa_ml = pa_mainloop_new();
    pa_mlapi = pa_mainloop_get_api(pa_ml);
    pa_ctx = pa_context_new(pa_mlapi, "Simple PA test application");
    pa_context_connect(pa_ctx, NULL, 0, NULL);

    // This function defines a callback so the server will tell us its state.
    // Our callback will wait for the state to be ready. The callback will
    // modify the variable to 1 so we know when we have a connection and it's
    // ready. If there's an error, the callback will set pa_ready to 2.
    pa_context_set_state_callback(pa_ctx, pa_state_cb, &pa_ready);

    // We can't do anything until PA is ready, so just iterate the mainloop
    // and continue
    while (pa_ready == 0) {
        pa_mainloop_iterate(pa_ml, 1, NULL);
    }
    if (pa_ready == 2) {
        retval = -1;
        goto exit;
    }

    ss.rate = 44100;
    ss.channels = 2;
    ss.format = PA_SAMPLE_U8;
    recordstream = pa_stream_new(pa_ctx, "Record", &ss, NULL);
    if (!recordstream) {
        printf("pa_stream_new failed\n");
    }
    pa_stream_set_read_callback(recordstream, stream_read_cb, NULL);
    r = pa_stream_connect_record(recordstream, NULL, NULL, PA_STREAM_PEAK_DETECT);
    if (r < 0) {
        printf("pa_stream_connect_record failed\n");
        retval = -1;
        goto exit;
    }

    // Run the mainloop until pa_mainloop_quit() is called
    // (this example never calls it, so the mainloop runs forever).
    // printf("%s", "Running Loop");
    pa_mainloop_run(pa_ml, NULL);

exit:
    // clean up and disconnect
    pa_context_disconnect(pa_ctx);
    pa_context_unref(pa_ctx);
    pa_mainloop_free(pa_ml);
    return retval;
}
Looking at the original question from UNIX.StackExchange, it looks like you're trying to create a VU meter. This can be done with an envelope detector: read the input samples and average their rectified values. A simple envelope detector can be implemented as an exponential moving average filter.
float level = 0; // Init time
const float alpha = COEFFICIENT; // See below
...
// Inside sample loop
float input_signal = fabsf(get_current_sample());
level = level + alpha * (input_signal - level);
Here, alpha is the filter coefficient, which can be calculated as:
const float alpha = 1.0 - expf( (-2.0 * M_PI) / (TC * SAMPLE_RATE) );
Where TC is known as the "time constant" parameter, measured in seconds, which defines how fast you want to "follow" the signal. Setting it too short makes the VU meter very "bumpy"; setting it too long will miss transients in the signal. 10 ms is a good value to start from.
I am trying to manipulate pixels using SDL and have managed to read them; below is my sample code. When I print with printf("\npixelvalue is is : %d",MyPixel); I get values like
11275780
11275776
etc.
I know these are not in hex form, but how do I manipulate them, say, to filter just the blue color out? Secondly, after the manipulation, how do I generate the new image?
#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Surface *screen, *image;
    SDL_Event event;
    Uint8 *keys;
    int done = 0;

    if (SDL_Init(SDL_INIT_VIDEO) == -1)
    {
        printf("Can't init SDL: %s\n", SDL_GetError());
        exit(1);
    }
    atexit(SDL_Quit);
    SDL_WM_SetCaption("sample1", "app.ico");

    /* obtain the SDL surface of the video card */
    screen = SDL_SetVideoMode(640, 480, 24, SDL_HWSURFACE);
    if (screen == NULL)
    {
        printf("Can't set video mode: %s\n", SDL_GetError());
        exit(1);
    }

    printf("Loading here");
    /* load BMP file */
    image = SDL_LoadBMP("testa.bmp");
    Uint32 *pixels = (Uint32 *)image->pixels;
    int width = image->w;
    int height = image->h;
    printf("Width is : %d", image->w);
    for (int iH = 1; iH <= height; iH++)
        for (int iW = 1; iW <= width; iW++)
        {
            printf("\nIh is : %d", iH);
            printf("\nIw is : %d", iW);
            Uint32 *MyPixel = pixels + ((iH - 1) + image->w) + iW;
            printf("\npixelvalue is is : %d", MyPixel);
        }
    if (image == NULL) {
        printf("Can't load image of tux: %s\n", SDL_GetError());
        exit(1);
    }

    /* Blit image to the video surface */
    SDL_BlitSurface(image, NULL, screen, NULL);
    SDL_UpdateRect(screen, 0, 0, screen->w, screen->h);

    /* free the image if it is no longer needed */
    SDL_FreeSurface(image);

    /* process the keyboard events */
    while (!done)
    {
        // Poll input queue, run keyboard loop
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT)
            {
                done = 1;
                break;
            }
        }
        keys = SDL_GetKeyState(NULL);
        if (keys[SDLK_q])
        {
            done = 1;
        }
        // Release CPU for others
        SDL_Delay(100);
    }

    // Release memory and quit SDL
    SDL_FreeSurface(screen);
    SDL_Quit();
    return 0;
}
Use SDL_GetRGB to take colors apart (and SDL_MapRGB / SDL_MapRGBA to put them back together). SDL sorts out the surface's pixel format for you.
Just like this:
Uint32 rawpixel = getpixel(surface, x, y);
Uint8 red, green, blue;
SDL_GetRGB(rawpixel, surface->format, &red, &green, &blue);
You are printing the value of the pointer MyPixel. To get the value you have to dereference the pointer to the pixel value like this: *MyPixel
Then the printf would look like this:
printf("\npixelvalue is : %d and the address of that pixel is: %p\n",*MyPixel , MyPixel);
Other errors:
Your for loops are incorrect. You should loop from 0 to less than width (or height), otherwise you will read out of bounds.
You didn't lock the surface. Since you are only reading the pixels nothing should go wrong, but it is still not correct.
You test whether the image pointer is valid only after you have already used it. Put the NULL check right after loading the image.
If I recall correctly, I used sdl_gfx for pixel manipulation.
It also contains functions for drawing circles, ovals, etc.
I have implemented capture code that runs on the OpenCV libraries. The code captures from 2 cameras in turn, but after a while it causes a memory allocation error.
I have to release the capture stream of camera 1 to open a capture stream of camera 2. I could not get two captures simultaneously, so I capture from them one after the other.
Why does it cause a memory allocation error in this scenario?
My code is below:
#include <cv.h>
#include <highgui.h>
#include <cxcore.h>
#include <stdio.h>

CvCapture *camera; // Use the default camera
IplImage *frame;

int main(int argc, char *argv[])
{
    while (1)
    {
        camera = cvCreateCameraCapture(0); // Use the default camera
        //camera2 = cvCreateCameraCapture(1);
        frame = 0;
        //frame2 = 0;
        cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_WIDTH, 1024);
        cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_HEIGHT, 768);
        frame = cvQueryFrame(camera); // need to capture at least one extra frame
        if (frame != NULL) {
            printf("Frame extracted from CAM1\n\r");
            cvSaveImage("/dev/shm/webcam1.jpg", frame, 0);
            printf("Frame from CAM1 saved\n\r");
        } else {
            printf("Null frame 1\n\r");
        }
        cvReleaseImage(&frame);
        cvReleaseCapture(&camera);

        camera = cvCreateCameraCapture(1); // Use the second camera
        cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_WIDTH, 1024);
        cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_HEIGHT, 768);
        frame = cvQueryFrame(camera); // need to capture at least one extra frame
        if (frame != NULL) {
            printf("Frame extracted from CAM2\n\r");
            cvSaveImage("/dev/shm/webcam2.jpg", frame, 0);
            printf("Frame from CAM2 saved\n\r");
        } else {
            printf("Null frame 2\n\r");
        }
        cvReleaseImage(&frame);
        cvReleaseCapture(&camera);
}
First of all, you can move the camera capture creation out of the while() loop:
CvCapture *camera0 = cvCreateCameraCapture(0);
CvCapture *camera1 = cvCreateCameraCapture(1);
IplImage *frame0 = 0;
IplImage *frame1 = 0;

cvSetCaptureProperty(camera0, CV_CAP_PROP_FRAME_WIDTH, 1024);
cvSetCaptureProperty(camera0, CV_CAP_PROP_FRAME_HEIGHT, 768);
cvSetCaptureProperty(camera1, CV_CAP_PROP_FRAME_WIDTH, 1024);
cvSetCaptureProperty(camera1, CV_CAP_PROP_FRAME_HEIGHT, 768);

while (1) {
    /* your operation */
    frame0 = cvQueryFrame(camera0);
    frame1 = cvQueryFrame(camera1);
    /* your operation */
}

cvReleaseImage(&frame0);
cvReleaseImage(&frame1);
cvReleaseCapture(&camera0);
cvReleaseCapture(&camera1);
Edit
If you want to take the stream first from cam A and then from cam B, you'll have code like the following. I don't understand exactly what you want to do, so this code simply shows each stream in a window.
Remember to use cvWaitKey(...) in order to give highgui time to process the draw requests from cvShowImage().
Also, take a look at the documentation of cvSetCaptureProperty: it notes that the following position properties are currently supported only for video files:
CV_CAP_PROP_POS_MSEC;
CV_CAP_PROP_POS_FRAMES;
CV_CAP_PROP_POS_AVI_RATIO.
By the way, this is my suggestion:
const char *wndName = "window";
cvNamedWindow(wndName, CV_WINDOW_NORMAL);

IplImage *frame;
CvCapture *capture;

while (true) {
    capture = cvCaptureFromCAM(0);
    /* Show the capture in the window for a while */
    while (/* your condition */) {
        frame = cvQueryFrame(capture);
        /* operation with A cam */
        cvShowImage(wndName, frame);
        cvWaitKey(30);
    }
    cvReleaseCapture(&capture);
    cvWaitKey(100);

    /* switch the source cam */
    capture = cvCaptureFromCAM(1);
    /* Show the capture in the same window for a while */
    while (/* your condition */) {
        frame = cvQueryFrame(capture);
        /* operation with B cam */
        cvShowImage(wndName, frame);
        cvWaitKey(30);
    }
}

cvReleaseImage(&frame);
cvReleaseCapture(&capture);
This works for me, give me any feedback.