I am trying to jump to a specific frame by setting the CV_CAP_PROP_POS_FRAMES property and then reading the frame like this:
cvSetCaptureProperty( input_video, CV_CAP_PROP_POS_FRAMES, current_frame );
frame = cvQueryFrame( input_video );
The problem I am facing is that OpenCV 2.1 returns the same frame for 12 consecutive values of current_frame, whereas I want to read each individual frame, not just the key frames. Can anyone tell me what's wrong?
I did some research and found out that the problem is caused by the decompression algorithm.
MPEG-like algorithms (including the HD variants and the like) do not compress each frame separately; they save a keyframe from time to time and then store only the differences between the previous frame and the following ones.
The problem you reported is caused by the fact that, when you select a frame, the decoder (ffmpeg, likely) automatically advances to the next keyframe.
So, is there a way around this? I don't want only key frames but each individual frame.
I don't know whether or not this would be precise enough for your purpose, but I've had success getting to a particular point in an MPEG video by grabbing the frame rate, converting the frame number to a time, then advancing to the time. Like so:
cv::VideoCapture sourceVideo("/some/file/name.mpg");
double frameRate = sourceVideo.get(CV_CAP_PROP_FPS);
double frameTime = 1000.0 * frameNumber / frameRate;
sourceVideo.set(CV_CAP_PROP_POS_MSEC, frameTime);
Due to this limitation in OpenCV, it may be wise to use FFmpeg instead. MoviePy is a nice wrapper library.
# Get nth frame from a video
from moviepy.video.io.ffmpeg_reader import FFMPEG_VideoReader
cap = FFMPEG_VideoReader("movie.mov",True)
cap.initialize()
cap.get_frame(n/FPS)
Performance is great too. Seeking to the nth frame with get_frame is O(1), and a speed-up is used if (nearly) consecutive frames are requested. I've gotten better-than-realtime results loading three 720p videos simultaneously.
CV_CAP_PROP_POS_FRAMES jumps to a key frame. I had the same issue and worked around it with this Python code. It's probably not totally efficient, but it gets the job done:
def seekTo(cap, position):
    positiontoset = position
    pos = -1
    cap.set(cv.CV_CAP_PROP_POS_FRAMES, position)
    while pos < position:
        ret, image = cap.read()
        pos = cap.get(cv.CV_CAP_PROP_POS_FRAMES)
        if pos == position:
            return image
        elif pos > position:
            positiontoset -= 1
            cap.set(cv.CV_CAP_PROP_POS_FRAMES, positiontoset)
            pos = -1
I've successfully used the following on OpenCV 3 / Python 3:
# Skip to frame 150, then read the 151st frame
cap.set(cv2.CAP_PROP_POS_FRAMES, 150)
ret, frame = cap.read()
After some years of assuming this was an unsolvable bug, I think I've figured out a way to use it with a good balance between speed and correctness.
A previous solution suggested using the CV_CAP_PROP_POS_MSEC property before reading the frame:
cv::VideoCapture sourceVideo("/some/file/name.mpg");
const auto frameRate = sourceVideo.get(CV_CAP_PROP_FPS);

void readFrame(int frameNumber, cv::Mat& image) {
    const double frameTime = 1000.0 * frameNumber / frameRate;
    sourceVideo.set(CV_CAP_PROP_POS_MSEC, frameTime);
    sourceVideo.read(image);
}
It does return the expected frame, but the problem is that using CV_CAP_PROP_POS_MSEC may be very slow, for example during a video conversion.
Note: using global variables for simplicity.
On the other hand, if you just want to read the video sequentially, it is enough to read frame after frame without seeking at all.
for (int frameNumber = 0; frameNumber < nFrames; ++frameNumber) {
    sourceVideo.read(image);
}
The solution comes from combining both: use a variable, lastFrameNumber, to remember the last queried frame, and only seek when the requested frame is not the next one. This keeps sequential reading fast while still allowing a random seek when necessary.
cv::VideoCapture sourceVideo("/some/file/name.mpg");
const auto frameRate = sourceVideo.get(CV_CAP_PROP_FPS);
int lastFrameNumber = -2; // guarantee seeking the first time

void readFrame(int frameNumber, cv::Mat& image) {
    if (lastFrameNumber + 1 != frameNumber) { // not the next frame? seek
        const double frameTime = 1000.0 * frameNumber / frameRate;
        sourceVideo.set(CV_CAP_PROP_POS_MSEC, frameTime);
    }
    sourceVideo.read(image);
    lastFrameNumber = frameNumber;
}
I am trying to crop an image captured by an ESP32-CAM. The image is in JPG format and I would like to crop it. Since the image is stored as a single-dimensional array, I tried to rearrange the elements in the array, but no changes occurred.
I have cropped the image in RGB565, but I am struggling to understand the single-dimensional array (image buffer):
camera_config_t config;
config.ledc_channel = LEDC_CHANNEL_0;
config.ledc_timer = LEDC_TIMER_0;
config.pin_d0 = Y2_GPIO_NUM;
config.pin_d1 = Y3_GPIO_NUM;
config.pin_d2 = Y4_GPIO_NUM;
config.pin_d3 = Y5_GPIO_NUM;
config.pin_d4 = Y6_GPIO_NUM;
config.pin_d5 = Y7_GPIO_NUM;
config.pin_d6 = Y8_GPIO_NUM;
config.pin_d7 = Y9_GPIO_NUM;
config.pin_xclk = XCLK_GPIO_NUM;
config.pin_pclk = PCLK_GPIO_NUM;
config.pin_vsync = VSYNC_GPIO_NUM;
config.pin_href = HREF_GPIO_NUM;
config.pin_sscb_sda = SIOD_GPIO_NUM;
config.pin_sscb_scl = SIOC_GPIO_NUM;
config.pin_pwdn = PWDN_GPIO_NUM;
config.pin_reset = RESET_GPIO_NUM;
config.xclk_freq_hz = 20000000;
config.pixel_format = PIXFORMAT_RGB565;
config.frame_size = FRAMESIZE_SVGA;
// config.jpeg_quality = 10;
config.fb_count = 2;
esp_err_t result = esp_camera_init(&config);
if (result != ESP_OK) {
  return false;
}
camera_fb_t * fb = NULL;
fb = esp_camera_fb_get();
if (!fb)
{
  Serial.println("Camera capture failed");
}
The fb buffer is a single-dimensional array; I want to extract each individual RGB value.
JPG is a compressed format, meaning that the rows and columns in the buffer do not correspond to what you would see by displaying a 1:1 grid on the screen. You need to convert it to a plain RGB (or equivalent) format first and then copy from that.
JPG achieves compression by splitting the image into YCbCr components, applying a mathematical transformation and then filtering. For additional information I refer to this page.
Luckily you can follow this tutorial to do the inverse JPEG transformation on an Arduino (tip: forget about doing this in real time, unless your time constraints are very relaxed).
The idea is to use a library that converts the JPEG image into an array of data:
Using the library is fairly simple: we give it the JPEG file, and the library will start generating arrays of pixels – so called Minimum Coded Units, or MCUs for short. The MCU is a block of 16 by 8 pixels. The functions in the library will return the color value for each pixel as 16-bit color value. The upper 5 bits are the red value, the middle 6 are green and the lower 5 are blue. Now we can send these values by any sort of communication channel we like.
For your use case you won't send the data through the communication channel, but rather store it in a local array by pushing the blocks into adjacent tiles, then do the crop.
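To make the bit layout concrete, here is a minimal sketch of unpacking one of those 16-bit RGB565 values into separate R, G and B components (mcuBuffer and the index i are made-up names for this example, not part of the library):

uint16_t pixel = mcuBuffer[i];      // one 16-bit RGB565 value from the decoded MCU
uint8_t r5 = (pixel >> 11) & 0x1F;  // upper 5 bits: red
uint8_t g6 = (pixel >> 5)  & 0x3F;  // middle 6 bits: green
uint8_t b5 = pixel         & 0x1F;  // lower 5 bits: blue
// scale up to 8 bits per channel if needed
uint8_t r = (r5 << 3) | (r5 >> 2);
uint8_t g = (g6 << 2) | (g6 >> 4);
uint8_t b = (b5 << 3) | (b5 >> 2);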
That depends on what kind of hardware (camera and board) you are using.
I'm basing this on the OV2640 camera module because it's the one I've been working with. It delivers the image to the frame buffer already encoded, so I'm guessing this might be what you are facing.
Trying to crop the image after it has been encoded can be tricky, but you might be able to instruct the camera chip to only deliver a certain part of the sensor output in the first place using a window function.
The easiest way to use this setting is to wrap it in a small helper function:
void setWindow(int resolution, int xOffset, int yOffset, int xLength, int yLength) {
  sensor_t * s = esp_camera_sensor_get();
  resolution = 0;
  s->set_res_raw(s, resolution, 0, 0, 0, xOffset, yOffset, xLength, yLength, xLength, yLength, true, true);
}
/*
 * resolution = 0 // 1600 x 1200
 * resolution = 1 // 800 x 600
 * resolution = 2 // 400 x 296
 */
where (xOffset, yOffset) is the origin of the window in pixels and (xLength, yLength) is the size of the window in pixels. Be aware that changing the resolution will effectively overwrite these settings. Otherwise this works great for me, although for some reason only if a 4:3 aspect ratio is preserved in the window size.
Looking at the output format table for the ESP32 Camera Driver one can see that most output formats are non-JPEG. If you can handle a RAW format instead (it will be slower to save/transfer, and be MUCH larger) then that would allow you to crop the image more easily by making a copy with a couple of loops. JPEG is compressed and not easily cropped. The page linked also mentions this:
Using YUV or RGB puts a lot of strain on the chip because writing to PSRAM is not particularly fast. The result is that image data might be missing. This is particularly true if WiFi is enabled. If you need RGB data, it is recommended that JPEG is captured and then turned into RGB using fmt2rgb888 or fmt2bmp/frame2bmp
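As a rough sketch of that suggestion (assuming the fmt2rgb888() converter from the esp32-camera library's img_converters.h and that the allocation succeeds), the JPEG-to-RGB conversion could look roughly like this:

camera_fb_t * fb = esp_camera_fb_get();
// 3 bytes per pixel for RGB888; width, height and length come from the frame buffer
uint8_t * rgb = (uint8_t *) malloc(fb->width * fb->height * 3);
if (rgb && fmt2rgb888(fb->buf, fb->len, fb->format, rgb)) {
  // rgb now holds width * height pixels, 3 bytes each (R, G, B), row by row
}
esp_camera_fb_return(fb);
// free(rgb) when done with the converted image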
If you are using PIXFORMAT_RGB565 (which means each pixel value is kept in TWO bytes and the image is not JPEG-compressed) and FRAMESIZE_SVGA (800x600 pixels), you should be able to access the framebuffer as a two-dimensional array if you want:
uint16_t *buffer = (uint16_t *) fb->buf;
uint16_t pxl = buffer[row * 800 + column]; // 800 is the SVGA width
// pxl now contains 5 R-bits, 6 G-bits, 5 B-bits
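Building on that, cropping then just means copying the desired window row by row into a new buffer. A rough sketch under the same RGB565/SVGA assumptions (cropX, cropY, cropW and cropH are made-up values for the window you want):

int cropX = 100, cropY = 80;   // top-left corner of the crop window (example values)
int cropW = 320, cropH = 240;  // size of the crop window
const int srcWidth = 800;      // SVGA width
uint16_t *src = (uint16_t *) fb->buf;
uint16_t *dst = (uint16_t *) malloc(cropW * cropH * sizeof(uint16_t));
if (dst) {
  for (int y = 0; y < cropH; y++) {
    for (int x = 0; x < cropW; x++) {
      dst[y * cropW + x] = src[(cropY + y) * srcWidth + (cropX + x)];
    }
  }
}
// dst now holds the cropped cropW x cropH RGB565 image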
I'm writing a video editor, and I need to seek to an exact frame, knowing the frame number.
Other posts on Stack Overflow told me that ffmpeg may give me a few broken frames after seeking, which is not a problem for playback but a big problem for a video editor.
And I need to seek by frame number, not by time, which becomes inaccurate when converted to a frame number.
I've read dranger's tutorials (which are outdated now), and ended up with:
av_seek_frame(fmt_ctx, video_stream_id, frame, AVSEEK_FLAG_ANY);
It always seeks to frame 0, and always returns 0, which means success.
Then I tried to read Blender's source code and found it really complex (maybe I should implement an image buffer?).
So, is there any simple way to seek to a frame with just a simple call like seek(context, frame_number) (while getting a full frame, not a broken one)? Or is there any lightweight library that simplifies this?
EDIT:
Thanks to praks411, I found the solution:
void AV_seek(AV * av, size_t frame)
{
    int frame_delta = frame - av->frame_id;
    if (frame_delta < 0 || frame_delta > 5)
        av_seek_frame(av->fmt_ctx, av->video_stream_id,
                      frame, AVSEEK_FLAG_BACKWARD);
    while (av->frame_id != frame)
        AV_read_frame(av);
}

void AV_read_frame(AV * av)
{
    AVPacket packet;
    int frame_done;

    while (av_read_frame(av->fmt_ctx, &packet) >= 0) {
        if (packet.stream_index == av->video_stream_id) {
            avcodec_decode_video2(av->codec_ctx, av->frame, &frame_done, &packet);
            if (frame_done) {
                ...
                av->frame_id = packet.dts;
                av_free_packet(&packet);
                return;
            }
        }
        av_free_packet(&packet);
    }
}
EDIT2:
Turns out there is a library for this: FFMS2.
It is "an FFmpeg based source library [...] for easy frame accurate access", and is portable (at least across Windows and Linux).
av_seek_frame will only seek, based on the timestamp, to a key frame. Since it seeks to the keyframe, you may not get exactly what you want. Hence it is recommended to seek to the nearest preceding keyframe and then read frame by frame until you reach the desired frame.
However, if you are dealing with a fixed FPS value, then you can easily map a timestamp to a frame index.
Before seeking you will need to convert your time to AVStream.time_base units if you have specified a stream. Read the ffmpeg documentation of av_seek_frame in avformat.h.
For example, if you want to seek to 1.23 seconds into the clip:
double m_out_start_time = 1.23;
int flgs = AVSEEK_FLAG_ANY;
int64_t seek_ts = (m_out_start_time * m_in_vid_strm->time_base.den) / m_in_vid_strm->time_base.num;
if (av_seek_frame(m_informat, m_in_vid_strm_idx, seek_ts, flgs) < 0)
{
    PRINT_MSG("Failed to seek Video ")
}
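If the frame rate really is constant, the same conversion can start from a frame number instead of a time. A small sketch of that mapping (the helper name and the example values are illustrative, not part of FFmpeg):

// Hypothetical helper: convert a frame index to a seek timestamp in the
// stream's time_base, assuming a constant frame rate.
int64_t frame_to_seek_ts(AVStream *stream, int64_t frame_number, double fps)
{
    double seconds = frame_number / fps;
    return (int64_t)(seconds * stream->time_base.den / stream->time_base.num);
}

// usage: seek to the keyframe at or before frame 250, then decode forward
int64_t ts = frame_to_seek_ts(m_in_vid_strm, 250, 25.0);
av_seek_frame(m_informat, m_in_vid_strm_idx, ts, AVSEEK_FLAG_BACKWARD);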
For FPS calculation, I use some code I found on the web and it's working well. However, I don't really understand it. Here's the function I use:
void computeFPS()
{
    numberOfFramesSinceLastComputation++;
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    if (currentTime - timeSinceLastFPSComputation > 1000)
    {
        char fps[256];
        sprintf(fps, "FPS: %.2f", numberOfFramesSinceLastComputation * 1000.0 / (currentTime - timeSinceLastFPSComputation));
        glutSetWindowTitle(fps);
        timeSinceLastFPSComputation = currentTime;
        numberOfFramesSinceLastComputation = 0;
    }
}
My question is: how is the value that is calculated in the sprintf call stored in the fps array, since I never actually assign it?
This is not a question about OpenGL, but about the C standard library. Reading the reference documentation of s(n)printf helps:
man s(n)printf: http://linux.die.net/man/3/sprintf
In short, snprintf takes a pointer to a user-supplied buffer and a format string, and fills the buffer according to the format string and the values given in the additional parameters.
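For example, a minimal sketch (independent of the OpenGL code above):

char buffer[64];
// snprintf formats the values into the caller-supplied buffer,
// writing at most sizeof(buffer) bytes including the terminating '\0'
snprintf(buffer, sizeof(buffer), "FPS: %.2f", 59.94);
// buffer now contains the string "FPS: 59.94"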
Here's my suggestion: If you have to ask about things like that, don't tackle OpenGL yet. You need to be fluent in the use of pointers and buffers when it comes to supplying buffer object data and shader sources. If you plan on using C for this, get a book on C and thoroughly learn that first. And unlike C++ you can actually learn C to some good degree over the course of a few months.
This function is presumably called at every redraw of your main loop, i.e. for every frame. What it does is increment a frame counter and get the time at which the current frame is being displayed. Once per second (every 1000 ms), it reads that counter, displays its value as the window title, and resets it to 0.
/**
* This function has to be called at every frame redraw.
* It will update the window title once per second (or more) with the fps value.
*/
void computeFPS()
{
    // increase the number of frames
    numberOfFramesSinceLastComputation++;
    // get the current time in order to check if it has been one second
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    // the code in this if will be executed just once per second (1000 ms)
    if (currentTime - timeSinceLastFPSComputation > 1000)
    {
        // format the computed frames-per-second value into a char string and store it in fps
        char fps[256];
        sprintf(fps, "FPS: %.2f", numberOfFramesSinceLastComputation * 1000.0 / (currentTime - timeSinceLastFPSComputation));
        // use fps to set the window title
        glutSetWindowTitle(fps);
        // save the current time in order to know when the next second will occur
        timeSinceLastFPSComputation = currentTime;
        // reset the frame counter
        numberOfFramesSinceLastComputation = 0;
    }
}
I'm in the process of writing a little game to teach myself OpenGL rendering, as it's one of the things I haven't tackled yet. I used SDL before, and this same function, while still performing badly, didn't go as over the top as it does now.
Basically, there is not much going on in my game yet, just some basic movement and background drawing. When I switched to OpenGL, it appears to run way too fast. My frame rate exceeds 2000 FPS and this function uses up most of the processing power.
What is interesting is that the program in its SDL version used 100% CPU but ran smoothly, while the OpenGL version uses only about 40%-60% CPU but seems to tax my graphics card in such a way that my whole desktop becomes unresponsive. Bad.
It's not a very complex function: it renders a 1024x1024 background tile according to the player's X and Y coordinates to give the impression of movement, while the player graphic itself stays locked in the center. Because it's a small tile for a bigger screen, I have to render it multiple times to stitch the tiles together into a full background. The two for loops in the code below iterate 12 times, combined, so I can see why this is inefficient when called 2000 times per second.
So to get to the point, this is the evil-doer:
void render_background(game_t *game)
{
    int bgw;
    int bgh;
    int x, y;

    glBindTexture(GL_TEXTURE_2D, game->art_background);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &bgw);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &bgh);

    glBegin(GL_QUADS);

    /*
     * Start one background tile too early and end one too late
     * so the player can not outrun the background
     */
    for (x = -bgw; x < root->w + bgw; x += bgw)
    {
        for (y = -bgh; y < root->h + bgh; y += bgh)
        {
            /* Offsets */
            int ox = x + (int)game->player->x % bgw;
            int oy = y + (int)game->player->y % bgh;

            /* Top Left */
            glTexCoord2f(0, 0);
            glVertex3f(ox, oy, 0);

            /* Top Right */
            glTexCoord2f(1, 0);
            glVertex3f(ox + bgw, oy, 0);

            /* Bottom Right */
            glTexCoord2f(1, 1);
            glVertex3f(ox + bgw, oy + bgh, 0);

            /* Bottom Left */
            glTexCoord2f(0, 1);
            glVertex3f(ox, oy + bgh, 0);
        }
    }

    glEnd();
}
If I artificially limit the speed by calling SDL_Delay(1) in the game loop, the FPS drops to ~660 ± 20 and I get no "performance overkill". But I doubt that is the correct way to go about this.
For the sake of completion, these are my general rendering and game loop functions:
void game_main()
{
    long current_ticks = 0;
    long elapsed_ticks;
    long last_ticks = SDL_GetTicks();
    game_t game;
    object_t player;

    if (init_game(&game) != 0)
        return;

    init_player(&player);
    game.player = &player;

    /* game_init() */
    while (!game.quit)
    {
        /* Update number of ticks since last loop */
        current_ticks = SDL_GetTicks();
        elapsed_ticks = current_ticks - last_ticks;
        last_ticks = current_ticks;

        game_handle_inputs(elapsed_ticks, &game);
        game_update(elapsed_ticks, &game);
        game_render(elapsed_ticks, &game);

        /* Lagging stops if I enable this */
        /* SDL_Delay(1); */
    }

    cleanup_game(&game);
    return;
}

void game_render(long elapsed_ticks, game_t *game)
{
    game->tick_counter += elapsed_ticks;

    if (game->tick_counter >= 1000)
    {
        game->fps = game->frame_counter;
        game->tick_counter = 0;
        game->frame_counter = 0;
        printf("FPS: %d\n", game->fps);
    }

    render_background(game);
    render_objects(game);
    SDL_GL_SwapBuffers();
    game->frame_counter++;

    return;
}
According to gprof profiling, even when I limit the execution with SDL_Delay(), it still spends about 50% of the time rendering my background.
Turn on VSYNC. That way you'll calculate graphics data exactly as fast as the display can present it to the user, and you won't waste CPU or GPU cycles calculating extra frames in between that will just be discarded because the monitor is still busy displaying a previous frame.
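Exactly how you enable vsync depends on your windowing setup; with SDL it could be as simple as one of these calls (a sketch, pick whichever matches your SDL version):

/* SDL 1.2: request vsync before calling SDL_SetVideoMode() */
SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1);

/* SDL 2: after creating the GL context */
SDL_GL_SetSwapInterval(1);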
First of all, you don't need to render the tile x*y times - you can render it once for the entire area it should cover and use GL_REPEAT to have OpenGL cover the entire area with it. All you need to do is to compute the proper texture coordinates once, so that the tile doesn't get distorted (stretched). To make it appear to be moving, increase the texture coordinates by a small margin every frame.
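A rough sketch of that idea, in the same immediate-mode style as the question (root, game, bgw and bgh come from the question's code; scrollX and scrollY are made-up names for the per-frame texture-coordinate offset):

/* once, when loading the texture: let coordinates outside 0..1 repeat the tile */
glBindTexture(GL_TEXTURE_2D, game->art_background);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

/* every frame: draw ONE quad covering the whole screen; texture coordinates
   larger than 1.0 repeat the tile, and offsetting them scrolls the background */
float scrollX = (float)game->player->x / bgw;
float scrollY = (float)game->player->y / bgh;
float u = (float)root->w / bgw;   /* tiles that fit horizontally */
float v = (float)root->h / bgh;   /* tiles that fit vertically   */

glBegin(GL_QUADS);
glTexCoord2f(scrollX,     scrollY);     glVertex3f(0,       0,       0);
glTexCoord2f(scrollX + u, scrollY);     glVertex3f(root->w, 0,       0);
glTexCoord2f(scrollX + u, scrollY + v); glVertex3f(root->w, root->h, 0);
glTexCoord2f(scrollX,     scrollY + v); glVertex3f(0,       root->h, 0);
glEnd();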
Now down to limiting the speed. What you want to do is not to just plug a sleep() call in there, but measure the time it takes to render one complete frame:
void FrameCap (Uint32 desiredFrameTime, Uint32 actualFrameTime)
{
    if (desiredFrameTime > actualFrameTime)
        SDL_Delay (desiredFrameTime - actualFrameTime); // there is a small imprecision here
}

Uint32 desiredFrameTime = 1000 / 60;  // e.g. target 60 FPS -> roughly 16 ms per frame
Uint32 startTime = SDL_GetTicks ();
// render frame
FrameCap (desiredFrameTime, SDL_GetTicks () - startTime);
There are ways to make this more precise (e.g. by using the performance counter functions on Windows 7, or using microsecond resolution on Linux), but I think you get the general idea. This approach also has the advantage of being driver independent and - unlike coupling to V-Sync - allowing an arbitrary frame rate.
At 2000 FPS it only takes 0.5 ms to render the entire frame. If you want to get 60 FPS then each frame should take about 16 ms. To do this, first render your frame (about 0.5 ms), then use SDL_Delay() to use up the rest of the 16 ms.
Also, if you are interested in profiling your code (which isn't needed if you are getting 2000 FPS!) then you may want to use High Resolution Timers. That way you could tell exactly how long any block of code takes, not just how much time your program spends in it.
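One possibility is standard C++ <chrono> rather than any platform-specific timer; a minimal sketch for timing a single call:

#include <chrono>

auto t0 = std::chrono::steady_clock::now();
render_background(game);                       // the block of code being measured
auto t1 = std::chrono::steady_clock::now();

// elapsed time in microseconds for this one call
auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
printf("render_background took %lld us\n", (long long)us);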
I am trying to write a simple Silverlight media player, but I need the timestamp to be hh:mm:ss:ff, where ff is the frame count.
I used a timer in order to get ticks and calculate the frame I am in, but it seems very inaccurate.
How can I reliably work out which frame I am on?
Does anyone know of a free Silverlight player that will do that?
Silverlight is designed to update at irregular intervals and to catch up any animation or media playing to the current elapsed time when the next frame is rendered.
To calculate the current frame (a frame is just a specific fraction of a second), simply multiply the total elapsed time since playback started by the number of frames per second encoded in the video, then take the remainder of frames within that second to get the current frame.
e.g.
Current frame = (Elapsed-Time-in-seconds * FramesPerSecond) % FramesPerSecond;
So if 20.12 seconds has elapsed, on a video that has 24 frames per second, you are on Frame 482 (actually 482.88 but only whole frames matter).
Take the Modulus of that by the Frames-per-second and you get the remaining number of frames (e.g. 2) so you are on frame number 2 in second number 20 (or 00:00:20:02).
You need to do the multiplication using doubles (or floats) and take the final modulus on an integer value, so it will look like this in C# code:
int framesPerSecond = 24; // for example
double elapsedTimeInSeconds = ...; /// Get the elapsed time...
int currentFrame = ((int)(elapsedTimeInSeconds * (double)framesPerSecond)) % framesPerSecond;
As the question has changed (in a comment) to a fractional frame rate, the maths will be as per this console app:
using System;

namespace TimeTest
{
    class Program
    {
        static void Main(string[] args)
        {
            double framesPerSecond = 29.97;
            for (double elapsedTime = 0; elapsedTime < 5; elapsedTime += 0.01)
            {
                int currentFrame = (int)((elapsedTime * framesPerSecond) % framesPerSecond);
                Console.WriteLine("Time: {0} = Frame: {1}", elapsedTime, currentFrame);
            }
        }
    }
}
Note: You are not guaranteed to display every frame number, as the framerate is not perfect, but you can only see the rendered frames so it does not matter. The frames you see will have the correct frame numbers.