How to set the FPS of a camera? - C

I am using an Inspecta-5 frame grabber with 1 GB of memory, together with a high-speed camera ("EoSens Extended Mode, 640x480 @ 1869 fps, 10x8 taps"). I am new to coding for grabbers and to controlling cameras. The Inspecta-5 grabber gives me different options, such as changing the requested number of frames from the camera to the grabber and from the grabber to main memory. I can also use the Camera Link interface to send signals to the camera and set different exposure times.
But I'm not really sure which of these options I should use to obtain a rate of 1000 frames per second, or how I can test it.
According to the software manual, if I set the following options in the camera profile:
ReqFrame=1000
GReqFrame=1000
it means "transfer 1000 frames from the camera to the grabber" and "transfer 1000 frames from the grabber to main memory", respectively.
But does that mean I get 1000 fps?
Which option sets the frame rate to 1000 fps, and how can I verify that I really grabbed 1000 frames in one second?
Here is a link to the grabber software manual: mikrotron.de/index.php?de_downloadfiles. You can find the software manual under the "Inspecta Level1 API for WinNT/2000/XP" section; the file name is "i5-level1-sw_manual_e.pdf", in case anybody needs it.
Thank you!

At 1,000 fps you don't have much time to snap a frame, let alone save one. Use the following example and plug in your estimated FPS and your capture and save latencies. At 1,000 fps you can afford a total of about 0.8 ms of latency per frame (and why not 0.99999 ms? I don't know; something to do with an unattainable theoretical maximum, or with my old PC).
public static void main(String args[]) throws Exception {
    int fps = 1000;
    // Estimated per-frame latencies, in milliseconds, for capturing and saving.
    float simulationCaptureNowMS = .40f;
    float simulationSaveNowNowMS = .40f;
    final long simulationCaptureNowNS = (long) (simulationCaptureNowMS * 1000000.0f);
    final long simulationSaveNowNowNS = (long) (simulationSaveNowNowMS * 1000000.0f);
    // Time budget for a single frame: one second divided by the target frame rate.
    final long windowNS = (1000 * 1000000) / fps;
    final long movieDurationSEC = 2;
    long dropDeadTimeMS = System.currentTimeMillis() + (1000 * movieDurationSEC);
    while (System.currentTimeMillis() < dropDeadTimeMS) {
        long startNS = System.nanoTime();
        actionSimulator(simulationCaptureNowNS);  // simulate capturing the frame
        actionSimulator(simulationSaveNowNowNS);  // simulate saving the frame
        long endNS = System.nanoTime();
        long sleepNS = windowNS - (endNS - startNS);
        if (sleepNS < 0) {
            // Capture + save took longer than the per-frame budget.
            System.out.println("Data loss. Try again.");
            System.exit(0);
        }
        actionSimulator(sleepNS);  // idle out the rest of the frame window
    }
    System.out.println("No data loss at " + fps + "fps with interframe latency of "
            + (simulationCaptureNowMS + simulationSaveNowNowMS) + "ms");
}

private static void actionSimulator(long ns) throws Exception {
    // Busy-wait for the requested number of nanoseconds.
    long d = System.nanoTime() + ns;
    while (System.nanoTime() < d) Thread.yield();
}
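As for the "how can I test it" part: the usual check is to timestamp the grab loop itself and count how many frames complete in each one-second window. Below is a minimal, grabber-agnostic sketch of that idea in C++; grab_frame() is a hypothetical stub standing in for whatever call the Inspecta-5 Level1 API actually provides to fetch a frame (here it just busy-waits ~0.8 ms to simulate capture + save latency).
#include <chrono>
#include <cstdio>

// Hypothetical stub: swap this for the real Inspecta-5 Level1 grab call.
// Here it just busy-waits ~0.8 ms to stand in for capture + save latency.
static bool grab_frame() {
    const auto done = std::chrono::steady_clock::now() + std::chrono::microseconds(800);
    while (std::chrono::steady_clock::now() < done) { /* spin */ }
    return true;
}

int main() {
    using clock = std::chrono::steady_clock;
    const auto runUntil = clock::now() + std::chrono::seconds(3);  // measure for 3 s
    auto windowStart = clock::now();
    long framesInWindow = 0;

    while (clock::now() < runUntil) {
        if (!grab_frame())
            break;
        ++framesInWindow;

        const auto now = clock::now();
        if (now - windowStart >= std::chrono::seconds(1)) {
            // Frames completed in the last second = the frame rate actually achieved.
            std::printf("achieved fps: %ld\n", framesInWindow);
            framesInWindow = 0;
            windowStart = now;
        }
    }
    return 0;
}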

Related

RTC Static Memory in Deep Sleep on ESP32 with ESP-IDF

I am using the 8 KB of static RAM on the RTC inside the ESP32 to save a small amount of sensor data, so that I can reduce power consumption by transmitting less frequently. But I am having no luck with even this simple example code:
RTC_DATA_ATTR uint32_t testValue = 0;
{
    ESP_LOGE(TAG, "testValue = %d", testValue++);
    ...
}
In the monitor, I can see the value as 0 first time round, but then it's anyone's guess.
E (109) app_main: testValue = 0
...
...
E (109) app_main: testValue = -175962388
EDIT
Also tried the attribute:
RTC_NOINIT_ATTR uint32_t testValue = 0;
What am I doing wrong?
I received an answer through other channels that I'd like to share here. The solution was to set:
esp_sleep_pd_config(ESP_PD_DOMAIN_RTC_SLOW_MEM, ESP_PD_OPTION_ON);
esp_sleep_pd_config(ESP_PD_DOMAIN_RTC_FAST_MEM, ESP_PD_OPTION_ON);
so that the RTC memory regions stay powered during deep sleep. In my case, I had specifically disabled them in another part of the code (the deep-sleep power management code). This solution does not significantly affect deep-sleep power consumption, which stays around 10 µA.
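For context, a minimal sketch of the whole pattern as I understand it, assuming ESP-IDF with a timer wake-up (the 10-second interval and the app_main structure are only illustrative):
#include <inttypes.h>
#include "esp_attr.h"
#include "esp_log.h"
#include "esp_sleep.h"

static const char *TAG = "app_main";

// Survives deep sleep as long as the RTC slow memory domain stays powered.
RTC_DATA_ATTR uint32_t testValue = 0;

void app_main(void)
{
    ESP_LOGE(TAG, "testValue = %" PRIu32, testValue++);

    // Keep the RTC memory domains powered during deep sleep, in case another
    // part of the code (e.g. power-management tuning) has turned them off.
    esp_sleep_pd_config(ESP_PD_DOMAIN_RTC_SLOW_MEM, ESP_PD_OPTION_ON);
    esp_sleep_pd_config(ESP_PD_DOMAIN_RTC_FAST_MEM, ESP_PD_OPTION_ON);

    esp_sleep_enable_timer_wakeup(10 * 1000000ULL); // wake again after 10 s (arbitrary)
    esp_deep_sleep_start();                         // counter is preserved across sleeps
}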

Big latency in Bluetooth communication

I have tried to write a wireless servo control using two Arduino Nano v3 boards and two Bluetooth 4.0 modules. The first sketch is the transmitter. It's very simple: it reads the PPM signal and transforms it into separate PWM values for each channel. I use the hardware serial port.
#include <PPMReader.h>
#include <InterruptHandler.h>

int ppmInputPin = 3;
int channelAmount = 2;
PPMReader ppm(ppmInputPin, channelAmount);

void setup()
{
    Serial.begin(9600);
    Serial.write("AT\r\n");
    delay(10);
    Serial.write("AT\r\n");
    Serial.write("AT+INQ\r\n");
    delay(5000);
    Serial.write("AT+CONN1\r\n");
}

void loop()
{
    unsigned long value1 = ppm.latestValidChannelValue(1, 0);
    Serial.println(value1);
}
The receiver is simple too. It reads values from Bluetooth, parses them into an integer, and sends that to a servo on pin 7. Again I have used the hardware serial port.
#include <Servo.h>

int PWM_OUTPUT = 7;
Servo servo;

void setup() {
    servo.attach(PWM_OUTPUT);
    Serial.begin(9600);
}

void loop() {
    int pwmValue = Serial.parseInt();
    if (Serial.available()) {
        if (pwmValue > 900 && pwmValue < 2001) {
            servo.writeMicroseconds(pwmValue);
        }
    }
}
It all works, but there is a delay of around 2-3 seconds. Could the problem be that I am "spamming" the serial port?
The first thing you need to ask yourself when implementing device-to-device communication is: how fast should I be sending? And if I send at that rate, is the receiver going to be able to keep pace (reading, doing whatever processing it needs to do, and answering back)?
This is obviously not about the baud rate but about what your loops are doing. You are using two different libraries: PPMReader and Servo. Now, pay attention to what each device is doing in their respective loops:
// Sending
void loop() {
    unsigned long value1 = ppm.latestValidChannelValue(1, 0);
    Serial.println(value1);
}

// Receiving
void loop() {
    int pwmValue = Serial.parseInt();
    if (pwmValue > 900 && pwmValue < 2001) {
        servo.writeMicroseconds(pwmValue);
    }
}
I don't really know how long it takes to execute each line of code (take a look here for some comments on that), but you cannot seriously expect both loops to magically synchronize themselves. Considering they are doing very different things (leaving the serial part aside) and dealing with different hardware, I would expect one of them to take significantly longer than the other. Think about what happens if that's the case.
As I said, I have no idea how long a call to ppm.latestValidChannelValue(1, 0) takes, but for the sake of the argument let's say it takes 0.1 milliseconds. To estimate the time for one iteration of the loop, add the time it takes to print a few bytes to the port with Serial.println(value1); that is easier to estimate, and maybe 20-100 microseconds is a good ballpark figure. With these estimates, you end up sending about 5000 readings per second. If you are not happy with my estimates or don't trust them, do your own tests with a counter or a timer. If you do the same exercise for the other side of the link and, let's say, you find it's twice as fast (it runs 10000 times per second), what do you think will happen to the communication? Yes, that's right: it will get clogged and run at a snail's pace.
Here you should carefully consider whether you really need that many readings (you did not elaborate on what you're actually doing, so I have no idea, but I lean toward thinking you don't). If you don't, just add a delay on the sender's side to slow it down to a reasonable speed (maybe 10-20 iterations per second), as in the sketch below.
There are other things to improve in your code: you should check that you have received data in the buffer before reading it (not after), and you need to be careful with Serial.parseInt(), which sometimes leads to unexpected results. But this answer is already too long and I don't want to extend it even more.
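A rough sketch of both suggestions (throttling the sender, and checking for data before parsing on the receiver), keeping the same pins, baud rate and libraries as the question; one loop() per board, and treat it as an illustration rather than a drop-in fix:
// Sending: same PPM setup as above, throttled to roughly 20 updates per second.
void loop()
{
    unsigned long value1 = ppm.latestValidChannelValue(1, 0);
    Serial.println(value1);
    delay(50);  // ~20 iterations per second is plenty for a servo
}

// Receiving: only parse once something has actually arrived.
void loop()
{
    if (Serial.available() > 0) {
        int pwmValue = Serial.parseInt();
        if (pwmValue > 900 && pwmValue < 2001) {
            servo.writeMicroseconds(pwmValue);
        }
    }
}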
I found the problem. It was indeed the serial-port spamming. I added a check that the current value is different from the previous value, and it started to work. The next small issue was in the receiver: I was reading the value before it was available.
#include <PPMReader.h>
#include <InterruptHandler.h>

int ppmInputPin = 3;
int channelAmount = 2;
PPMReader ppm(ppmInputPin, channelAmount);
volatile unsigned long previousValue1 = 0;

void setup()
{
    Serial.begin(9600);
    Serial.write("AT\r\n");
    delay(10);
    Serial.write("AT\r\n");
    Serial.write("AT+INQ\r\n");
    delay(5000);
    Serial.write("AT+CONN1\r\n");
    Serial.println("Transmitter started");
}

void loop()
{
    unsigned long value1 = ppm.latestValidChannelValue(1, 0);
    if (previousValue1 != value1) {
        previousValue1 = value1;
        Serial.println(value1);
    }
}

Determine the number of samples in audio buffer

I am writing a small program to perform real-time ambient noise removal using PortAudio. To do some of the necessary calculations (like Fourier transforms), I need to supply the sample data, but I also need to know exactly how many samples I am working with at a given time.
How can I determine the number of audio samples in a buffer?
When attempting to solve this myself, two variables seemed particularly relevant and useful, namely the sampling rate and the frames per buffer. When I attempted to calculate the number of samples using the sampling rate, I ran into the issue of miscalculating the time between callback invocations.
int ambienceCallback(const void * inputBuffer,
                     void * outputBuffer,
                     unsigned long framesPerBuffer,
                     const PaStreamCallbackTimeInfo * timeInfo,
                     PaStreamCallbackFlags statusFlags,
                     void * userData)
{
    const SAMPLE * in = (const SAMPLE *) inputBuffer;
    PaStreamParameters * inputParameters = (PaStreamParameters *) userData;

    PaTime time = timeInfo->inputBufferAdcTime;
    int sampleCount = (time - callbackTime) * Pa_GetDeviceInfo(inputParameters->device)->defaultSampleRate;
    callbackTime = time;

    // extraneous ...
}
where callbackTime is a variable declared in the header file, and initialized upon starting the audio input stream.
// extraneous ...
error = Pa_StartStream(stream);
callbackTime = Pa_GetStreamTime(stream);
// extraneous ...
However, the calculated time would always be zero, so I could not make my idea of simply multiplying the sampling rate by the elapsed time work. The other variable, framesPerBuffer, seemed like it could be useful for calculating the sample count if I could find out how many samples are in a frame, but I flat out could not manage to do that.
Again, how can I determine how many samples are in the buffer? As a disclaimer, I am new to audio programming. I am probably mixing up some terms or concepts, causing the more experienced to scratch their heads. (I apologize!)
Get the number of samples from the callback parameters! :)
framesPerBuffer gives you the number of frames.
A frame is a set of samples that occur simultaneously. For a stereo stream, a frame is two samples.
Timestamps are not useful for your purpose; for example, Pa_GetStreamTime() returns the stream's current time in seconds, and that resolution won't let you calculate the number of samples.
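A minimal sketch of that, reusing the callback from the question and assuming (as there) that userData points to the PaStreamParameters used to open the input stream; the SAMPLE typedef is whatever format the stream was opened with:
#include "portaudio.h"

typedef float SAMPLE;  // assuming the stream was opened with paFloat32

int ambienceCallback(const void * inputBuffer,
                     void * outputBuffer,
                     unsigned long framesPerBuffer,
                     const PaStreamCallbackTimeInfo * timeInfo,
                     PaStreamCallbackFlags statusFlags,
                     void * userData)
{
    const SAMPLE * in = (const SAMPLE *) inputBuffer;
    const PaStreamParameters * inputParameters = (const PaStreamParameters *) userData;

    // One frame holds one sample per channel, so the interleaved buffer contains:
    unsigned long sampleCount = framesPerBuffer * inputParameters->channelCount;

    // ... hand in[0 .. sampleCount - 1] to the FFT / noise-removal code ...
    (void) in; (void) outputBuffer; (void) timeInfo; (void) statusFlags;

    return paContinue;
}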

Getting individual frames using CV_CAP_PROP_POS_FRAMES in cvSetCaptureProperty

I am trying to jump to a specific frame by setting the CV_CAP_PROP_POS_FRAMES property and then reading the frame like this:
cvSetCaptureProperty( input_video, CV_CAP_PROP_POS_FRAMES, current_frame );
frame = cvQueryFrame( input_video );
The problem I am facing is that OpenCV 2.1 returns the same frame for 12 consecutive values of current_frame, whereas I want to read each individual frame, not just the key frames. Can anyone please tell me what's wrong?
I did some research and found out that the problem is caused by the decompression algorithm.
MPEG-like codecs (including HD and similar formats) do not compress each frame separately; they save a keyframe from time to time and then only the differences between that frame and the subsequent ones.
The problem you reported is caused by the fact that, when you select a frame, the decoder (likely ffmpeg) automatically advances to the next keyframe.
So, is there a way around this? I don't want only key frames, but each individual frame.
I don't know whether or not this would be precise enough for your purpose, but I've had success getting to a particular point in an MPEG video by grabbing the frame rate, converting the frame number to a time, then advancing to the time. Like so:
cv::VideoCapture sourceVideo("/some/file/name.mpg");
double frameRate = sourceVideo.get(CV_CAP_PROP_FPS);
double frameTime = 1000.0 * frameNumber / frameRate;
sourceVideo.set(CV_CAP_PROP_POS_MSEC, frameTime);
Due to this limitation in OpenCV, it may be wise to use FFmpeg instead. MoviePy is a nice wrapper library.
# Get the nth frame from a video
from moviepy.video.io.ffmpeg_reader import FFMPEG_VideoReader

cap = FFMPEG_VideoReader("movie.mov", True)
cap.initialize()
# n is the desired frame index, FPS is the video's frame rate
cap.get_frame(n / FPS)
Performance is great too. Seeking to the nth frame with get_frame is O(1), and a speed-up is used if (nearly) consecutive frames are requested. I've gotten better-than-realtime results loading three 720p videos simultaneously.
CV_CAP_PROP_POS_FRAMES jumps to a key frame. I had the same issue and worked around it with this (Python) code. It's probably not totally efficient, but it gets the job done:
def seekTo(cap, position):
    positiontoset = position
    pos = -1
    cap.set(cv.CV_CAP_PROP_POS_FRAMES, position)
    while pos < position:
        ret, image = cap.read()
        pos = cap.get(cv.CV_CAP_PROP_POS_FRAMES)
        if pos == position:
            return image
        elif pos > position:
            positiontoset -= 1
            cap.set(cv.CV_CAP_PROP_POS_FRAMES, positiontoset)
            pos = -1
I've successfully used the following on OpenCV 3 / Python 3:
# Skip to frame 150, then read the 151st frame
cap.set(cv2.CAP_PROP_POS_FRAMES, 150)
ret, frame = cap.read()
After years of assuming this was an unsolvable bug, I think I've figured out a way to use it with a good balance between speed and correctness.
A previous answer suggested using the CV_CAP_PROP_POS_MSEC property before reading the frame:
cv::VideoCapture sourceVideo("/some/file/name.mpg");
const auto frameRate = sourceVideo.get(CV_CAP_PROP_FPS);

void readFrame(int frameNumber, cv::Mat& image) {
    const double frameTime = 1000.0 * frameNumber / frameRate;
    sourceVideo.set(CV_CAP_PROP_POS_MSEC, frameTime);
    sourceVideo.read(image);
}
It does return the expected frame, but the problem is that using CV_CAP_PROP_POS_MSEC may be very slow, for example when converting a whole video.
Note: using global variables for simplicity.
On the other hand, if you just want to read the video sequentially, it is enough to read the frames without seeking at all.
for (int frameNumber = 0; frameNumber < nFrames; ++frameNumber) {
    sourceVideo.read(image);
}
The solution comes from combining both: use a variable, lastFrameNumber, to remember the last queried frame, and only seek when the requested frame is not the next one. This keeps sequential reading fast while still allowing random seeks when necessary.
cv::VideoCapture sourceVideo("/some/file/name.mpg");
const auto frameRate = sourceVideo.get(CV_CAP_PROP_FPS);
int lastFrameNumber = -2; // guarantee seeking the first time

void readFrame(int frameNumber, cv::Mat& image) {
    if (lastFrameNumber + 1 != frameNumber) { // not the next frame? seek
        const double frameTime = 1000.0 * frameNumber / frameRate;
        sourceVideo.set(CV_CAP_PROP_POS_MSEC, frameTime);
    }
    sourceVideo.read(image);
    lastFrameNumber = frameNumber;
}
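A hypothetical usage sketch, given the globals above (the frame count of 1000 is arbitrary):
cv::Mat image;
const int nFrames = 1000;  // however many frames the clip has

// Sequential decoding: after the first call, no seeking happens at all.
for (int frameNumber = 0; frameNumber < nFrames; ++frameNumber) {
    readFrame(frameNumber, image);
    // ... process image ...
}

// Random access: jumping backwards (or far ahead) triggers the MSEC-based seek again.
readFrame(42, image);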

Silverlight media player with frame counter

I am trying to write a simple Silverlight media player, but I need the timestamp to be hh:mm:ss:ff, where ff is the frame count.
I used a timer in order to get ticks and calculate the frame I am in, but it seems very inaccurate.
How can I reliably determine which frame I am on?
Does anyone know of a free Silverlight player that will do that?
Silverlight is designed to update at irregular intervals and to catch up any animation or media playback to the current elapsed time when the next frame is rendered.
To calculate the current frame (a frame is just a specific fraction of a second), multiply the total elapsed time since playback started by the number of frames per second encoded in the video, then take the remainder of frames within the current second to get the current frame number.
e.g.
Current frame = (Elapsed-Time-in-seconds * FramesPerSecond) % FramesPerSecond;
So if 20.12 seconds have elapsed in a video that has 24 frames per second, you are on frame 482 overall (actually 482.88, but only whole frames matter).
Take the modulus of that by the frames per second and you get the frame within the current second (482 % 24 = 2), so you are on frame number 2 in second number 20 (or 00:00:20:02).
You need to do the multiplication using doubles (or floats) and the final modulus on an integer value, so in C# it will look like this:
int framesPerSecond = 24; // for example
double elapsedTimeInSeconds = ...; /// Get the elapsed time...
int currentFrame = ((int)(elapsedTimeInSeconds * (double)framesPerSecond)) % framesPerSecond;
As the question has changed (in a comment) to a fractional frame rate, the maths will be as per this console app:
using System;

namespace TimeTest
{
    class Program
    {
        static void Main(string[] args)
        {
            double framesPerSecond = 29.97;
            for (double elapsedTime = 0; elapsedTime < 5; elapsedTime += 0.01)
            {
                int currentFrame = (int)((elapsedTime * framesPerSecond) % framesPerSecond);
                Console.WriteLine("Time: {0} = Frame: {1}", elapsedTime, currentFrame);
            }
        }
    }
}
Note: you are not guaranteed to display every frame number, as the frame rate is not perfect, but you can only see the rendered frames, so it does not matter. The frames you see will have the correct frame numbers.
