Silverlight media player with frame counter

I am trying to write a simple Silverlight media player, but I need the timestamp to be hh:mm:ss:ff, where ff is the frame count.
I used a timer to get ticks and calculate the frame I am on, but it seems very inaccurate.
How can I reliably count which frame I am on?
Does anyone know of a free Silverlight player that does this?

Silverlight is designed to update at irregular intervals and catch up any animation or media playback to the current elapsed time when the next frame is rendered.
To calculate the current frame (a frame is just a specific fraction of a second), multiply the total elapsed time since playback started by the number of frames per second encoded in the video, then take the remainder of frames within the current second to get the current frame.
e.g.
Current frame = (Elapsed-Time-in-seconds * FramesPerSecond) % FramesPerSecond;
So if 20.12 seconds have elapsed on a video that has 24 frames per second, you are on frame 482 (actually 482.88, but only whole frames matter).
Take the modulus of that by the frames per second and you get the remaining number of frames (e.g. 2), so you are on frame number 2 in second number 20 (or 00:00:20:02).
You need to do the multiply using doubles (or floats) and the final modulus on an integer value, so it looks like this in C#:
int framesPerSecond = 24; // for example
double elapsedTimeInSeconds = ...; // Get the elapsed time...
int currentFrame = ((int)(elapsedTimeInSeconds * (double)framesPerSecond)) % framesPerSecond;
As the question has changed (in a comment) to a fractional frame rate, the maths works out as in this console app:
using System;

namespace TimeTest
{
    class Program
    {
        static void Main(string[] args)
        {
            double framesPerSecond = 29.97;
            for (double elapsedTime = 0; elapsedTime < 5; elapsedTime += 0.01)
            {
                int currentFrame = (int)((elapsedTime * framesPerSecond) % framesPerSecond);
                Console.WriteLine("Time: {0} = Frame: {1}", elapsedTime, currentFrame);
            }
        }
    }
}
Note: you are not guaranteed to display every frame number, as the frame rate is not perfect, but you can only see the rendered frames, so it does not matter. The frames you do see will have the correct frame numbers.
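To display the full hh:mm:ss:ff string the question asks for, the rest is plain integer division and modulo. Here is a minimal sketch of that arithmetic (shown in C because it is language-neutral math; the function name and parameters are mine, and in Silverlight you would feed it the elapsed playback position in seconds):
#include <stdio.h>

/* Format elapsed playback time as hh:mm:ss:ff for a given frame rate. */
void format_timecode(double elapsedSeconds, int framesPerSecond,
                     char *out, size_t outSize)
{
    int totalSeconds = (int)elapsedSeconds;
    int hours   = totalSeconds / 3600;
    int minutes = (totalSeconds / 60) % 60;
    int seconds = totalSeconds % 60;
    int frame   = (int)(elapsedSeconds * framesPerSecond) % framesPerSecond;

    snprintf(out, outSize, "%02d:%02d:%02d:%02d", hours, minutes, seconds, frame);
}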

Related

HARD PROBLEM: How to get ADC Sampling to Sync with POSIX timer?

I'm working on a side project that involves comparing ADC samples from a Waveform generator to calculations from my embedded device (in C).
The device takes continuous samples from a waveform generator, with the following settings:
Sine Wave (60 Hz)
2.1 V peak-to-peak
1.25 Vdc offset
On my device, I have a sine function that calculates the exact same data:
double sine_wave(double amplitude, int freq, double time, int phase_number,
                 double offset)
{
    double voltage;
    double rad = 2 * M_PI * freq * time;

    voltage = amplitude * sin(rad) + offset;
    return voltage;
}
where freq = 60, amplitude = 1.05, and offset = 1.25.
time is the parameter to the abstracted sine function: stop_timer - start_timer, i.e. the time elapsed between the start timestamp and the moment a single ADC sample has been polled.
In my device handler, I am running (pseudo code):
while (1) {
    Find_zero_crossing_point(); /* 1.25 is the midpoint */
    if (ADC_get_sample() == zero_crossing && (Adc_sample is increasing)) { /* start timer at zero-crossing point */
        start_timer(); /* get timestamp of initial point so it can be compared throughout */
        while (1) {
            stop_timer();
            measured_data = ADC_get_sample();
            expected_data = abstracted_sine_wave(stop_timer - start_timer);
            compare_both_value(measured_data, expected_data);
        }
    }
}
To get stop_timer and start_timer, I'm using the timespec struct and calling clock_gettime(CLOCK_MONOTONIC, &stop_timer);
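For reference, the elapsed time between two such timespec readings is normally computed like this (a minimal sketch; the helper name is mine):
#include <time.h>

/* Elapsed seconds between two CLOCK_MONOTONIC timestamps. */
static double elapsed_seconds(const struct timespec *start,
                              const struct timespec *stop)
{
    return (double)(stop->tv_sec - start->tv_sec)
         + (double)(stop->tv_nsec - start->tv_nsec) / 1e9;
}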
Now the problem isn't the implementation; I'm able to get the values to match up for the most part. The issue is that the sampling rate is not uniform, and I believe this is due to the nature of running it inside a while loop.
After running this for about 30 seconds, or approximately 810,000 samples (30 x 27,000 samples), the little deviations add up and the values drift too far apart for it to serve the purpose of this project.
I was wondering if you have a solution where I can keep the timing in sync throughout the entirety of the device running this infinite loop?
Thanks! (I've spent days trying to crack this but nothing is coming to mind.)

RMS calculation DC offset

I need to implement an RMS calculation of a sine wave on an MCU (microcontroller, resource constrained). The MCU lacks an FPU (floating point unit), so I would prefer to stay in the integer realm. Captures are discrete via a 10-bit ADC.
Looking for a solution, I've found this great solution here by Edgar Bonet: https://stackoverflow.com/a/28812301/8264292
Seems like it completely suits my needs. But I have some questions.
The input is 230 VAC mains at 50 Hz. It's transformed and offset by hardware to become a 0-1 V (peak-to-peak) sine wave, which I can capture with the ADC as 0-1023 readings. The hardware is calibrated so that a 260 VRMS input (i.e. about -368 to +368 V peak) becomes the 0-1 V peak-to-peak output. How can I "restore" the original wave's RMS value, given that I want to stay in the integer realm too? Units can vary; mV will do fine as well.
My first guess was to subtract 512 from the input sample (the DC offset) and later do the "magic" shift as in Edgar Bonet's answer. But I've realized that's wrong, because the DC offset isn't fixed; instead the signal is biased to start from 0 V. I.e. a 130 VAC input would produce a 0-500 mV peak-to-peak output (not 250-750 mV, which would have worked).
For a true RMS with the DC offset removed, I need to subtract the square of the mean of the samples from the mean of the squares, as in this formula:
rms = sqrt( sum_of_squares/N - (sum/N)² )
So I've ended up with the following function:
#define INITIAL 512
#define SAMPLES 1024
#define MAX_V 368UL // Maximum input peak in V ( 260*sqrt(2) )
/* K is defined based on equation, where 64 = 2^6,
 * i.e. 6 bits to add to 10-bit ADC to make it 16-bit
 * and double it for whole range in -peak to +peak
 */
#define K (MAX_V*64*2)

uint16_t rms_filter(uint16_t sample)
{
    static int16_t rms = INITIAL;
    static uint32_t sum_squares = 1UL * SAMPLES * INITIAL * INITIAL;
    static uint32_t sum = 1UL * SAMPLES * INITIAL;

    sum_squares -= sum_squares / SAMPLES;
    sum_squares += (uint32_t) sample * sample;
    sum -= sum / SAMPLES;
    sum += sample;

    if (rms == 0) rms = 1; /* do not divide by zero */
    rms = (rms + (((sum_squares / SAMPLES) - (sum/SAMPLES)*(sum/SAMPLES)) / rms)) / 2;
    return rms;
}

...

// Somewhere in a loop
getSample(&sample);
rms = rms_filter(sample);
...

// After getting at least N samples (SAMPLES * X?)
uint16_t vrms = (uint32_t)(rms * K) >> 16;
printf("Converted Vrms = %d V\r\n", vrms);
Does it look fine? Or am I doing something wrong here?
How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC capture rate (samples per second)? I.e. how many real samples do I need to feed to rms_filter() before I can get a real RMS value, given my capture speed is X sps? At the very least, how do I evaluate the required minimum N of samples?
I did not test your code, but it looks to me like it should work fine.
Personally, I would not have implemented the function this way. I would instead have removed the DC part of the signal before trying to compute the RMS value. The DC part can be estimated by sending the raw signal through a low pass filter. In pseudo-code this would be
rms = sqrt(low_pass(square(x - low_pass(x))))
whereas what you wrote is basically
rms = sqrt(low_pass(square(x)) - square(low_pass(x)))
It shouldn't really make much of a difference though. The first formula, however, spares you a multiplication. Also, by removing the DC component before computing the square, you end up multiplying smaller numbers, which may help in allocating bits for the fixed-point implementation.
In any case, I recommend you test the filter on your computer with synthetic data before committing it to the MCU.
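For illustration only, a fixed-point version of the first formula could look like the sketch below. It reuses the same single-pole filter structure and SAMPLES window as your code; the names are mine and I have not run it on real hardware:
#include <stdint.h>

#define SAMPLES 1024

/* RMS of the AC component only: estimate the DC level with a low-pass
 * filter, subtract it, then low-pass the squared residual and take the
 * square root (one damped Newton step per sample). */
uint16_t rms_filter_ac(uint16_t sample)
{
    static uint32_t dc_acc = 512UL * SAMPLES; /* low-pass of x, scaled by SAMPLES */
    static uint32_t sq_acc = 0;               /* low-pass of (x - dc)^2, scaled by SAMPLES */
    static uint16_t rms = 1;

    /* DC estimate (same filter as sum/SAMPLES in the original code) */
    dc_acc -= dc_acc / SAMPLES;
    dc_acc += sample;

    int32_t ac = (int32_t)sample - (int32_t)(dc_acc / SAMPLES);

    /* Low-pass of the squared AC component */
    sq_acc -= sq_acc / SAMPLES;
    sq_acc += (uint32_t)(ac * ac);

    /* rms converges toward sqrt(sq_acc / SAMPLES) */
    if (rms == 0) rms = 1; /* do not divide by zero */
    rms = (rms + (sq_acc / SAMPLES) / rms) / 2;
    return rms;
}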
How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC capture rate (samples per second)?
The constant SAMPLES controls the cut-off frequency of the low pass filters. This cut-off should be small enough to almost completely remove the 50 Hz part of the signal. On the other hand, if the mains supply is not completely stable, the quantity you are measuring will slowly vary with time, and you may want your cut-off to be high enough to capture those variations.
The transfer function of these single-pole low-pass filters is
H(z) = z / (SAMPLES * z + 1 − SAMPLES)
where z = exp(i 2 π f / f₀), i is the imaginary unit, f is the signal frequency and f₀ is the sampling frequency.
If f₀ ≫ f (which is desirable for good sampling), you can approximate this by the analog filter
H(s) = 1 / (1 + SAMPLES * s / f₀)
where s = i 2 π f and the cut-off frequency is f₀/(2π·SAMPLES). The gain at f = 50 Hz is then
1 / sqrt(1 + (2π * SAMPLES * f/f₀)²)
The relevant parameter here is SAMPLES * f/f₀, which is the number of periods of the 50 Hz signal that fit inside your sampling window. If you fit one period, you are letting about 15% of the signal through the filter. Half as much if you fit two periods, etc.
You could get perfect rejection of the 50 Hz signal if you design a filter with a notch at that particular frequency. If you don't want to dig into digital filter design theory, the simplest such filter may be a plain moving average that averages over a period of exactly 20 ms. This has a non-trivial cost in RAM though, as you have to keep a full 20 ms worth of samples in a circular buffer.
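For illustration, such a moving average might look like the sketch below. The buffer length is an assumption: it must equal one 20 ms mains period at your actual sample rate (here I assume 5000 samples per second, i.e. 100 samples per period):
#include <stdint.h>

#define MA_LEN 100 /* samples per 20 ms at an assumed 5 kHz sample rate */

/* Moving average over exactly one 50 Hz period: nulls the 50 Hz component
 * and returns the DC estimate. */
uint16_t moving_average_20ms(uint16_t sample)
{
    static uint16_t buf[MA_LEN];
    static uint16_t idx = 0;
    static uint32_t acc = 0;

    acc -= buf[idx];  /* drop the oldest sample */
    acc += sample;    /* add the newest one */
    buf[idx] = sample;
    idx = (idx + 1) % MA_LEN;

    return (uint16_t)(acc / MA_LEN);
}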

Getting individual frames using CV_CAP_PROP_POS_FRAMES in cvSetCaptureProperty

I am trying to jump to a specific frame by setting the CV_CAP_PROP_POS_FRAMES property and then reading the frame like this:
cvSetCaptureProperty( input_video, CV_CAP_PROP_POS_FRAMES, current_frame );
frame = cvQueryFrame( input_video );
The problem I am facing is that OpenCV 2.1 returns the same frame for 12 consecutive values of current_frame, whereas I want to read each individual frame, not just the key frames. Can anyone please tell me what's wrong?
I did some research and found out that the problem is caused by the decompression algorithm.
MPEG-like algorithms (including HD, et al.) do not compress each frame separately, but save a keyframe from time to time and then only the differences between the last frame and subsequent frames.
The problem you reported is caused by the fact that, when you select a frame, the decoder (likely ffmpeg) automatically advances to the next keyframe.
So, is there a way around this? I don't want only key frames but each individual frame.
I don't know whether or not this would be precise enough for your purpose, but I've had success getting to a particular point in an MPEG video by grabbing the frame rate, converting the frame number to a time, then advancing to the time. Like so:
cv::VideoCapture sourceVideo("/some/file/name.mpg");
double frameRate = sourceVideo.get(CV_CAP_PROP_FPS);
double frameTime = 1000.0 * frameNumber / frameRate;
sourceVideo.set(CV_CAP_PROP_POS_MSEC, frameTime);
Due to this limitation in OpenCV, it may be wise to use FFmpeg instead. Moviepy is a nice wrapper library.
# Get nth frame from a video
from moviepy.video.io.ffmpeg_reader import FFMPEG_VideoReader
cap = FFMPEG_VideoReader("movie.mov",True)
cap.initialize()
cap.get_frame(n/FPS)
Performance is great too. Seeking to the nth frame with get_frame is O(1), and a speed-up is used if (nearly) consecutive frames are requested. I've gotten better-than-realtime results loading three 720p videos simultaneously.
CV_CAP_PROP_POS_FRAMES jumps to a key frame. I had the same issue and worked around it using this (Python) code. It's probably not totally efficient, but it gets the job done:
def seekTo(cap, position):
    positiontoset = position
    pos = -1
    cap.set(cv.CV_CAP_PROP_POS_FRAMES, position)
    while pos < position:
        ret, image = cap.read()
        pos = cap.get(cv.CV_CAP_PROP_POS_FRAMES)
        if pos == position:
            return image
        elif pos > position:
            positiontoset -= 1
            cap.set(cv.CV_CAP_PROP_POS_FRAMES, positiontoset)
            pos = -1
I've successfully used the following on OpenCV 3 / Python 3:
# Skip to frame 150, then read the 151st frame
cap.set(cv2.CAP_PROP_POS_FRAMES, 150)
ret, frame = cap.read()
After some years of assuming this was an unsolvable bug, I think I've figured out a way to use it with a good balance between speed and correctness.
A previous solution suggested to use the CV_CAP_PROP_POS_MSEC property before reading the frame:
cv::VideoCapture sourceVideo("/some/file/name.mpg");
const auto frameRate = sourceVideo.get(CV_CAP_PROP_FPS);
void readFrame(int frameNumber, cv::Mat& image) {
    const double frameTime = 1000.0 * frameNumber / frameRate;
    sourceVideo.set(CV_CAP_PROP_POS_MSEC, frameTime);
    sourceVideo.read(image);
}
It does return the expected frame, but the problem is that using CV_CAP_PROP_POS_MSEC may be very slow, for example for a video conversion.
Note: using global variables for simplicity.
On the other hand, if you just want to read the video sequentially, it is enough to read frame without seeking at all.
for (int frameNumber = 0; frameNumber < nFrames; ++frameNumber) {
    sourceVideo.read(image);
}
The solution comes from combining both: using a variable, lastFrameNumber, to remember the last queried frame, and only seeking when the requested frame is not the next one. In this way it is possible to speed up sequential reading while still allowing random seeks when necessary.
cv::VideoCapture sourceVideo("/some/file/name.mpg");
const auto frameRate = sourceVideo.get(CV_CAP_PROP_FPS);
int lastFrameNumber = -2; // guarantee seeking the first time

void readFrame(int frameNumber, cv::Mat& image) {
    if (lastFrameNumber + 1 != frameNumber) { // not the next frame? seek
        const double frameTime = 1000.0 * frameNumber / frameRate;
        sourceVideo.set(CV_CAP_PROP_POS_MSEC, frameTime);
    }
    sourceVideo.read(image);
    lastFrameNumber = frameNumber;
}

How to set the fps of a camera?

I am using an Inspecta-5 frame grabber with 1 GB memory, and a high-speed camera ("EoSens Extended Mode, 640x480 1869fps, 10x8 taps"). I am new to coding for grabbers and also to controlling the camera. The Inspecta-5 grabber gives me different options, like changing the requested number of frames from the camera to the grabber and also from the grabber to main memory. I can also use the camera link to send signals to the camera and set different exposure times.
But I'm not really sure what I should use to obtain a rate of 1000 frames per second, and how I can test it.
According to the software manual, if I set the following options in the camera profile:
ReqFrame=1000
GReqFrame=1000
it means transfer 1000 frames from the camera to the grabber and transfer 1000 frames from the grabber to main memory, respectively.
But does that mean I have 1000 fps?
What would be the option for setting the fps to 1000? And how can I test that I really grabbed 1000 frames in one second?
Here is a link to the grabber software manual: mikrotron.de/index.php?de_downloadfiles. You can find the software manual under the "Inspecta Level1 API for WinNT/2000/XP" section. The file name is "i5-level1-sw_manual_e.pdf", just in case anybody needs it.
Thank you!
At 1,000 fps you don't have much time to snap a frame, let alone save it. Use the following example and plug in your estimated FPS plus your capture and save latencies. At 1,000 fps you can afford a total of about 0.8 ms of latency per frame (and why not 0.99999 ms? I don't know; something to do with an unattainable theoretical maximum, or my old PC).
public static void main(String args[]) throws Exception {
    int fps = 1000;
    float simulationCaptureNowMS = .40f;
    float simulationSaveNowNowMS = .40f;
    final long simulationCaptureNowNS = (long) (simulationCaptureNowMS * 1000000.0f);
    final long simulationSaveNowNowNS = (long) (simulationSaveNowNowMS * 1000000.0f);
    final long windowNS = (1000 * 1000000) / fps;
    final long movieDurationSEC = 2;
    long dropDeadTimeMS = System.currentTimeMillis() + (1000 * movieDurationSEC);

    while (System.currentTimeMillis() < dropDeadTimeMS) {
        long startNS = System.nanoTime();
        actionSimulator(simulationCaptureNowNS);
        actionSimulator(simulationSaveNowNowNS);
        long endNS = System.nanoTime();
        long sleepNS = windowNS - (endNS - startNS);
        if (sleepNS < 0) {
            System.out.println("Data loss. Try again.");
            System.exit(0);
        }
        actionSimulator(sleepNS);
    }
    System.out.println("No data loss at " + fps + "fps with interframe latency of "
            + (simulationCaptureNowMS + simulationSaveNowNowMS) + "ms");
}

private static void actionSimulator(long ns) throws Exception {
    long d = System.nanoTime() + ns;
    while (System.nanoTime() < d) Thread.yield();
}

How can this function be optimized? (Uses almost all of the processing power)

I'm in the process of writing a little game to teach myself OpenGL rendering, as it's one of the things I haven't tackled yet. I used SDL before, and this same function, while still performing badly, didn't go as over the top as it does now.
Basically, there is not much going on in my game yet, just some basic movement and background drawing. When I switched to OpenGL, it appears as if it's way too fast. My frames per second exceed 2000 and this function uses up most of the processing power.
What is interesting is that the program in its SDL version used 100% CPU but ran smoothly, while the OpenGL version uses only about 40-60% CPU but seems to tax my graphics card in such a way that my whole desktop becomes unresponsive. Bad.
It's not a too complex function: it renders a 1024x1024 background tile according to the player's X and Y coordinates to give the impression of movement while the player graphic itself stays locked in the center. Because it's a small tile for a bigger screen, I have to render it multiple times to stitch the tiles together into a full background. The two for loops in the code below iterate 12 times combined, so I can see why this is inefficient when called 2000 times per second.
So to get to the point, this is the evil-doer:
void render_background(game_t *game)
{
    int bgw;
    int bgh;
    int x, y;

    glBindTexture(GL_TEXTURE_2D, game->art_background);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &bgw);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &bgh);

    glBegin(GL_QUADS);

    /*
     * Start one background tile too early and end one too late
     * so the player can not outrun the background
     */
    for (x = -bgw; x < root->w + bgw; x += bgw)
    {
        for (y = -bgh; y < root->h + bgh; y += bgh)
        {
            /* Offsets */
            int ox = x + (int)game->player->x % bgw;
            int oy = y + (int)game->player->y % bgh;

            /* Top Left */
            glTexCoord2f(0, 0);
            glVertex3f(ox, oy, 0);

            /* Top Right */
            glTexCoord2f(1, 0);
            glVertex3f(ox + bgw, oy, 0);

            /* Bottom Right */
            glTexCoord2f(1, 1);
            glVertex3f(ox + bgw, oy + bgh, 0);

            /* Bottom Left */
            glTexCoord2f(0, 1);
            glVertex3f(ox, oy + bgh, 0);
        }
    }

    glEnd();
}
If I artificially limit the speed by calling SDL_Delay(1) in the game loop, cutting the FPS down to ~660 ± 20, I get no "performance overkill". But I doubt that is the correct way to go about this.
For the sake of completion, these are my general rendering and game loop functions:
void game_main()
{
    long current_ticks = 0;
    long elapsed_ticks;
    long last_ticks = SDL_GetTicks();
    game_t game;
    object_t player;

    if (init_game(&game) != 0)
        return;

    init_player(&player);
    game.player = &player;

    /* game_init() */
    while (!game.quit)
    {
        /* Update number of ticks since last loop */
        current_ticks = SDL_GetTicks();
        elapsed_ticks = current_ticks - last_ticks;
        last_ticks = current_ticks;

        game_handle_inputs(elapsed_ticks, &game);
        game_update(elapsed_ticks, &game);
        game_render(elapsed_ticks, &game);

        /* Lagging stops if I enable this */
        /* SDL_Delay(1); */
    }

    cleanup_game(&game);
    return;
}

void game_render(long elapsed_ticks, game_t *game)
{
    game->tick_counter += elapsed_ticks;

    if (game->tick_counter >= 1000)
    {
        game->fps = game->frame_counter;
        game->tick_counter = 0;
        game->frame_counter = 0;
        printf("FPS: %d\n", game->fps);
    }

    render_background(game);
    render_objects(game);
    SDL_GL_SwapBuffers();
    game->frame_counter++;

    return;
}
According to gprof profiling, even when I limit the execution with SDL_Delay(), it still spends about 50% of the time rendering my background.
Turn on VSYNC. That way you'll calculate graphics data exactly as fast as the display can present it to the user, and you won't waste CPU or GPU cycles calculating extra frames in between that will just be discarded because the monitor is still busy displaying a previous frame.
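With SDL 1.2 (which SDL_GL_SwapBuffers suggests you are using), vsync can usually be requested through the swap-control attribute before creating the video surface. A minimal sketch (the window size and bit depth are placeholders, and the attribute is only honored on SDL 1.2.10+ with a cooperating driver):
/* Request vsync before creating the OpenGL window. */
SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1);
SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);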
First of all, you don't need to render the tile x*y times - you can render it once for the entire area it should cover and use GL_REPEAT to have OpenGL cover the entire area with it. All you need to do is to compute the proper texture coordinates once, so that the tile doesn't get distorted (stretched). To make it appear to be moving, increase the texture coordinates by a small margin every frame.
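As a sketch of that single-quad approach, reusing the names from the question's render_background() (illustrative only; the sign of the scroll offsets may need flipping to match the original scrolling direction):
void render_background_repeat(game_t *game)
{
    int bgw, bgh;

    glBindTexture(GL_TEXTURE_2D, game->art_background);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &bgw);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &bgh);

    /* Let the texture wrap so a single quad can cover the whole screen. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    /* Texture coordinates measured in tiles; scroll them with the player. */
    float u  = (float)game->player->x / bgw;
    float v  = (float)game->player->y / bgh;
    float us = (float)root->w / bgw;   /* tiles that fit across the screen */
    float vs = (float)root->h / bgh;

    glBegin(GL_QUADS);
    glTexCoord2f(u,      v);      glVertex3f(0,       0,       0);
    glTexCoord2f(u + us, v);      glVertex3f(root->w, 0,       0);
    glTexCoord2f(u + us, v + vs); glVertex3f(root->w, root->h, 0);
    glTexCoord2f(u,      v + vs); glVertex3f(0,       root->h, 0);
    glEnd();
}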
Now down to limiting the speed. What you want to do is not to just plug a sleep() call in there, but measure the time it takes to render one complete frame:
function FrameCap (time_t desiredFPS, time_t actualFrameTime)
{
    time_t desiredFrameTime = 1000 / desiredFPS;
    if (desiredFrameTime > actualFrameTime)
        sleep (desiredFrameTime - actualFrameTime); // there is a small imprecision here
}

time_t startTime = (time_t) SDL_GetTicks ();
// render frame
FrameCap (60, (time_t) SDL_GetTicks () - startTime);
There are ways to make this more precise (e.g. by using the performance counter functions on Windows 7, or using microsecond resolution on Linux), but I think you get the general idea. This approach also has the advantage of being driver independent and - unlike coupling to V-Sync - allowing an arbitrary frame rate.
At 2000 FPS it only takes 0.5 ms to render the entire frame. If you want to get 60 FPS then each frame should take about 16 ms. To do this, first render your frame (about 0.5 ms), then use SDL_Delay() to use up the rest of the 16 ms.
Also, if you are interested in profiling your code (which isn't needed if you are getting 2000 FPS!) then you may want to use High Resolution Timers. That way you could tell exactly how long any block of code takes, not just how much time your program spends in it.
