Real-time display of a .jpeg in a byte array using C

I am working on a project where a server sends an encrypted jpeg to a client, which then decrypts it. Currently the image is stored in a byte array on the receiving end before it is written to the current directory.
void decryptionFunction(void) {
    uint8 *plaintext;     /* Pointer to buffer that holds the decrypted jpeg data */
    uint32 plaintext_len;
    int k = 0;

    while (1) {
        /* Decryption happens here... malloc() a buffer and point plaintext at it */
        char filename[32];
        snprintf(filename, sizeof(filename), "file%i.jpeg", k);

        FILE *fp = fopen(filename, "wb");
        if (fp != NULL) {
            fwrite(plaintext, 1, plaintext_len, fp);
            fclose(fp);
        }
        k++;
        free(plaintext);
    }
}
Each time through the while loop a new image is placed in the buffer and then written to the current directory. This all works fine; however, I would like to display the image instead of writing it to disk. Is there a way to do this in C? I have thought about streaming it from the receiver to VLC over some protocol, but that seems more complicated than what I am after.
Ideally, I would like to display the image straight from the buffer and refresh the display each time through the while loop.
Thanks for any input.

There are many different libraries for displaying an image. One light/simple option is NxV. You can also use Qt.

I found a solution that will work for now. I decided to use the OpenCV library, whose deprecated C functions can handle image display.
void decryptionFunction(void) {
    uint8 *plaintext;     /* Pointer to buffer that holds the decrypted jpeg data */
    uint32 plaintext_len;

    cvNamedWindow("test", CV_WINDOW_AUTOSIZE); /* creating the window once is enough */
    while (1) {
        /* Decryption happens here... malloc() a buffer and point plaintext at it */

        /* Wrap the compressed jpeg bytes in a matrix header and decode them */
        CvMat matbox = cvMat(240, 320, CV_8UC1, plaintext);
        IplImage *img = cvDecodeImage(&matbox, 1);
        cvShowImage("test", img);
        cvWaitKey(100);

        cvReleaseImage(&img);
        free(plaintext);
    }
}
While these functions are deprecated and could be removed at any time, for now they do exactly what I want. The buffer contains compressed jpeg data, which has to be decoded into a raw format before it can be shown in the window. I am still looking for a non-deprecated C implementation that does the same thing.
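If you want to avoid the deprecated C API, one alternative is to decode and display the frame with SDL2 and SDL2_image instead. This is only a sketch, assuming those libraries are acceptable dependencies; window/renderer creation and error handling are trimmed:

#include <SDL2/SDL.h>
#include <SDL2/SDL_image.h>

/* Decode the compressed jpeg bytes in plaintext and refresh the window.
   The renderer is created once, outside the decryption loop. */
void showFrame(SDL_Renderer *ren, const void *plaintext, int plaintext_len)
{
    SDL_RWops *rw = SDL_RWFromConstMem(plaintext, plaintext_len);
    SDL_Surface *surf = IMG_Load_RW(rw, 1);  /* 1 = free rw for us */
    if (surf == NULL)
        return;

    SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, surf);
    SDL_FreeSurface(surf);
    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, tex, NULL, NULL);
    SDL_RenderPresent(ren);
    SDL_DestroyTexture(tex);
}

Before the loop you would call SDL_Init(SDL_INIT_VIDEO) and IMG_Init(IMG_INIT_JPG), create the window and renderer once with SDL_CreateWindow()/SDL_CreateRenderer(), and then call showFrame() where cvShowImage() appears above (an SDL_PollEvent() loop keeps the window responsive).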

You can maybe use the display program, part of the ImageMagick suite, which is installed on most Linux distros and is also available for OSX and Windows.
Basically, at the command line, you do:
display image.png
and it shows the image. It can display PNG, JPEG, TIFF and around 150 other formats. If your image changes, or you want to display a different one, you can run the following; it will look up the first instance of display and tell it to refresh with the new image:
display -remote image2.jpg
It can also take the image from its stdin as long as you tell it what type of file is coming, so it will work just as well like this:
cat image.jpg | display jpg:-
So you could effectively use popen() to start display and send your image buffer to its stdin; then, when your image changes, run it again with the added -remote parameter so it looks up the first displayed image and refreshes it rather than opening another window (see the sketch below).
There are many options to resize the image before display, overlay it on a solid background or texture and so on if you run man display.
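A minimal C sketch of that popen() idea (untested; in particular, check whether your build of display accepts -remote together with stdin):

#include <stdio.h>

/* Pipe a jpeg buffer from memory into ImageMagick's display.
   The first frame opens a window; later frames ask the existing
   window to refresh via -remote. */
void showBuffer(const unsigned char *buf, size_t len, int first)
{
    const char *cmd = first ? "display jpg:-" : "display -remote jpg:-";
    FILE *p = popen(cmd, "w");
    if (p == NULL)
        return;
    fwrite(buf, 1, len, p);
    pclose(p);
}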

Related

How to put H264-encoded data inside a container like .mp4 using the libAV libraries (ffmpeg)?

I have been playing around with the libAV* family of libraries that comes with FFmpeg, and I am learning how to implement things like encoding, decoding, muxing, etc.
I came across this code: https://libav.org/documentation/doxygen/master/encode_video_8c-example.html , which encodes YUV frames to a file using an encoder. In my implementation I will change the encoder to H264 using avcodec_find_encoder(AV_CODEC_ID_H264), but I do not know how to put the encoded data into a proper container (.mp4) using the libav libraries so that the output file can be played by any media player like VLC, QuickTime, etc.
Can anyone please help me on that? Every help will be greatly appreciated!
You should be able to write the frames to a file if you initialize the output something like this:
// Have the library guess the format
AVOutputFormat *fmt = av_guess_format(NULL, "my_video.mp4", NULL);
// Set the AVOutputFormat in your output context (init not shown)
pOutFormatContext->oformat = fmt;
// Open the output, error handling not shown
avio_open2(&pOutFormatContext->pb, "my_video.mp4", AVIO_FLAG_WRITE, NULL, NULL);
// Create output stream
AVStream * outStream;
outStream = avformat_new_stream(pOutFormatContext, NULL);
// Set any output stream options you want; example of setting the aspect ratio from the input codec context 'inCodecContext' (init not shown)
outStream->sample_aspect_ratio.num = inCodecContext->sample_aspect_ratio.num;
outStream->sample_aspect_ratio.den = inCodecContext->sample_aspect_ratio.den;
// There may be other things you want to set like the time base, but will depend on what you're doing
// Write the header to the context, error handling not shown
avformat_write_header(pOutFormatContext, NULL);
// Dump out the format and check
av_dump_format(pOutFormatContext, 0, pOutFormatContext->filename, 1);
You then have to read the packets and encode them, which it sounds like you're already doing; when you write them to the output context, they should land in the file in the container you specified (see the sketch below).
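For completeness, the per-packet writing step might look like the following fragment, a sketch in the same spirit as the code above (pkt is the encoded AVPacket; error handling omitted):

// Route the packet to the output stream and rescale its timestamps
// from the codec time base to the stream time base
pkt.stream_index = outStream->index;
av_packet_rescale_ts(&pkt, inCodecContext->time_base, outStream->time_base);
av_interleaved_write_frame(pOutFormatContext, &pkt);

// After the last packet, finalize the container and close the output
av_write_trailer(pOutFormatContext);
avio_closep(&pOutFormatContext->pb);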

WPF (.Net Core) Record sound to file using OpenAL

I am trying to record audio from a microphone within a WPF (.Net Core) application.
From what I have found online, it looks like the best candidate for recording audio in .Net Core is OpenAL. You can get OpenAL from the NuGet package OpenTK.NetStandard.
I was able to use an example to implement some code that makes the program play the audio as it is capturing it (a bit like an echo effect, which is why I believe the example is called Parrot).
What I am missing, and cannot find any help online for, is how to save the audio to a file instead of just playing it back right away.
I think the part of the code that should save the audio to a file would go somewhere in here:
if (available_samples > 0)
{
    audio_capture.ReadSamples(buffer, available_samples);
    int buf = AL.GenBuffer();
    AL.BufferData(buf,
                  ALFormat.Mono16,
                  buffer,
                  available_samples * BlittableValueType.StrideOf(buffer),
                  audio_capture.SampleFrequency);
    AL.SourceQueueBuffer(src, buf);
    StatusBarService.WriteToStatusBar("Samples consumed: " + available_samples);
    if (AL.GetSourceState(src) != ALSourceState.Playing)
        AL.SourcePlay(src);
}
I think the data I need to save is in one of the following variables:
buffer
buf
AL.GetSourceState(src)
I could only find a single code example that saves to a file, but it is in C++, 8 years old, and does a few things I wouldn't know how to port to C#:
it creates a WAVEHEADER sWaveHeader object which holds info about the audio file
it creates a vector<ALbyte> bufferVector object which is saved to the file (I don't have the ALbyte type in the OpenTK.NetStandard NuGet package)
it writes to the file using fwrite( &bufferVector[0], sizeof( ALbyte ), 1, pFile ); (where pFile is a FILE *)
Has anyone been able to save audio from a microphone to a file in .Net Core? I really cannot find an example!
Any help would be really appreciated! :)
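For reference, the file that old C++ example produces is just a canonical 44-byte PCM WAV header followed by the raw samples. Here is a minimal sketch of that layout in C (names assumed; a C# port would write the same bytes with a BinaryWriter):

#include <stdio.h>
#include <stdint.h>

/* Write a 44-byte PCM WAV header followed by the Mono16 samples.
   Multi-byte fields are little-endian; this sketch assumes a
   little-endian host. */
static void write_wav(FILE *f, const int16_t *samples,
                      uint32_t num_samples, uint32_t sample_rate)
{
    uint16_t channels = 1, bits = 16, fmt_pcm = 1;
    uint16_t block_align = channels * (bits / 8);
    uint32_t byte_rate = sample_rate * block_align;
    uint32_t data_size = num_samples * block_align;
    uint32_t riff_size = 36 + data_size;
    uint32_t fmt_size = 16;

    fwrite("RIFF", 1, 4, f); fwrite(&riff_size, 4, 1, f);
    fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f); fwrite(&fmt_size, 4, 1, f);
    fwrite(&fmt_pcm, 2, 1, f); fwrite(&channels, 2, 1, f);
    fwrite(&sample_rate, 4, 1, f); fwrite(&byte_rate, 4, 1, f);
    fwrite(&block_align, 2, 1, f); fwrite(&bits, 2, 1, f);
    fwrite("data", 1, 4, f); fwrite(&data_size, 4, 1, f);
    fwrite(samples, sizeof(int16_t), num_samples, f);
}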

arDetectMarker + pixel format + segmentation fault

I am trying to use the arDetectMarker function from arToolKit to detect markers in an image. I read the image from disk in the following way:
cv::Mat image;
cv::Mat temp;
image = cv::imread(path, CV_LOAD_IMAGE_COLOR);
cv::cvtColor(image, temp, CV_RGB2BGR);
and converted to ARUint8* format using:
dataPtr = (ARUint8 *) ((IplImage) temp).imageData;
I am sure that the data is correctly converted to dataPtr, since I saved the image to check. Unfortunately, when I call arDetectMarker a segmentation fault happens and I don't know why (I suspect the pixel format). I've read the documentation:
http://artoolkit.sourceforge.net/apidoc/ar_8h.html#b2868d9587c68fb7255d4f270bcf878f
It says that the format is in general ABGR. I am using Ubuntu 14.04 and I think I have v4l drivers, though I am not sure since I am not working with video. I tried to convert the loaded image to ABGR or BGRA, but I am not sure I did it correctly, or whether this is really a requirement.
Also, I did the calibration procedure before.
Can anybody help me?
Thanks!
Marcelo.
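As a sanity check on the conversion step, here is a minimal C sketch (layout assumed from the ABGR note in the docs above) that repacks a tightly-packed 3-channel BGR buffer, such as the one OpenCV's imread produces, into a 4-channel ABGR buffer:

#include <stdlib.h>

/* Repack 3-channel BGR pixels into ABGR order, filling alpha
   with 255. The caller frees the returned buffer. */
unsigned char *bgr_to_abgr(const unsigned char *bgr, int w, int h)
{
    unsigned char *out = malloc((size_t)w * h * 4);
    if (out == NULL)
        return NULL;
    for (int i = 0; i < w * h; i++) {
        out[4*i + 0] = 255;           /* A */
        out[4*i + 1] = bgr[3*i + 0];  /* B */
        out[4*i + 2] = bgr[3*i + 1];  /* G */
        out[4*i + 3] = bgr[3*i + 2];  /* R */
    }
    return out;
}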

Reading output of a USB webcam in Linux

I was experimenting a little with fread and fwrite in C, so I wrote this little program to get data from a webcam and dump it into a file. The following is the source:
#include <stdio.h>
#include <stdlib.h>

#define SIZE 307200 // number of pixels (640x480 for my webcam)

int main() {
    FILE *camera, *grab;
    camera = fopen("/dev/video0", "rb");
    grab = fopen("grab.raw", "wb");
    float data[SIZE];
    fread(data, sizeof(data[0]), SIZE, camera);
    fwrite(data, sizeof(data[0]), SIZE, grab);
    fclose(camera);
    fclose(grab);
    return 0;
}
The program works when compiled (gcc -o snap camera.c). What took me by surprise was that the output file was not a raw data dump but a JPEG file. Running the file command on the program's output reported JPEG image data: JFIF standard 1.01. The file was viewable in an image viewer, although a little saturated.
How or why does this happen? I did not use any JPEG encoding libraries in the source or the program. Does the camera output JPEG natively? The webcam is a Sony PlayStation 2 EyeToy, which was manufactured by Logitech. The system is Debian Linux.
The Sony EyeToy has an OV7648 sensor with the quite popular OV519 bridge. The OV519 outputs frames in JPEG format - and if I remember correctly from my own cameras that's the only format that it supports.
Cameras like this require either application support, or a special driver that will decompress the frames before delivery to userspace. Apparently in your case the driver delivers the JPEG frames in their original form, which is why you are getting JPEG data in the output.
BTW, you should really have a look at the Video4Linux2 API for the proper way to access video devices on Linux - a simple open()/read()/close() is generally not enough (see the sketch below).
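For illustration, a minimal V4L2 sketch (device path assumed) that asks the driver for its capabilities and current pixel format instead of read()ing blindly:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_capability cap;
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
        printf("device: %s\n", cap.card);

    /* Query the current capture format; the fourcc shows whether the
       driver is handing out MJPG, YUYV, etc. */
    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0) {
        uint32_t p = fmt.fmt.pix.pixelformat;
        printf("%ux%u, fourcc %c%c%c%c\n",
               fmt.fmt.pix.width, fmt.fmt.pix.height,
               (int)(p & 0xff), (int)((p >> 8) & 0xff),
               (int)((p >> 16) & 0xff), (int)((p >> 24) & 0xff));
    }

    close(fd);
    return 0;
}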

Blackfin 2D DCT/IDCT (image compression) with the BF537 EZ-KIT: how to write the reconstructed image (raw pixel data) from BF537 memory to a file on disk?

I tried this experiment with Digital Image Processing - 2D DCT/IDCT (image compression) with the BF537 EZ-KIT, implemented by Analog Devices.
To summarize:
Build the project;
Load a black & white image (*.bmp) from disk into Blackfin memory at 0x8000 with Image Viewer;
Run the project;
Push a button (SW 10 to 13) on the Blackfin board (BF537) to select a level of compression;
After the quantization table is calculated and DCT -> Quantization -> Dequantization -> Inverse DCT runs, a reconstructed image results at some address in BF memory (0x80000);
With Image Viewer (from VisualDSP) I load that reconstructed grayscale image from BF memory; everything is OK and the differences are visible.
Note that whether I load the image into BF memory from disk, or read it back from BF memory, Image Viewer's source format is Raw Pixel Data.
BUT what I want to do in addition to this project, and DON'T KNOW HOW, is:
to create/write [in C] that reconstructed image from Blackfin memory to disk (in code; NOT with the Image Viewer's Save image as... feature).
I tried to fwrite the reconstructed buffer located in memory at 0x80000 into a *.bmp file, but when I open it I get errors like: "can't read file header; unknown file format, or file not found...";
/* my code for saving/creating/writing the reconstructed image
   (raw pixel data) from Blackfin memory */
unsigned char *jpeg_buff = (unsigned char *)0x80000;
int jpeg_buff_size = 307200;    /* 640*480*1 bytes of raw grayscale */

FILE *jpegfp = fopen("myimg_reconstr80000.bmp", "wb");
fwrite(jpeg_buff, 1, jpeg_buff_size, jpegfp);
fclose(jpegfp);
Does anyone know how to create/write/save a *.bmp image from the raw pixel data located in Blackfin memory, in C?
Thanks in advance; any solutions or suggestions will be appreciated!
Below is the link with archive of the entire Visual Dsp project. (i'm using VisualDsp++ 5.0)
https://docs.google.com/open?id=0B4IUN70RC09nMjRjNzlhNTctMTI3OS00ZmI4LWI4NzAtNWRkM2MyMDgyMjZm
*excuse me for my English writing errors
Before all the pixel data, add information for the bitmap header.
http://en.wikipedia.org/wiki/BMP_file_format#Bitmap_file_header
If you write this header data before your image data, it should be a valid bitmap file.
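A minimal C sketch of that (sizes assumed: 640x480, 8-bit grayscale, which needs a 256-entry palette; 640 is already a multiple of 4, so no row padding is required):

#include <stdio.h>
#include <stdint.h>

static void put_u16(FILE *f, uint16_t v)
{
    fputc(v & 0xff, f); fputc(v >> 8, f);
}

static void put_u32(FILE *f, uint32_t v)
{
    fputc(v & 0xff, f); fputc((v >> 8) & 0xff, f);
    fputc((v >> 16) & 0xff, f); fputc((v >> 24) & 0xff, f);
}

/* Write BITMAPFILEHEADER + BITMAPINFOHEADER + grayscale palette,
   then the raw pixels bottom-up, as the BMP format requires. */
static void write_gray_bmp(FILE *f, const unsigned char *pix, int w, int h)
{
    uint32_t offset = 14 + 40 + 256 * 4;   /* headers + palette */
    uint32_t image_size = (uint32_t)w * h;

    fputc('B', f); fputc('M', f);          /* file header magic */
    put_u32(f, offset + image_size);       /* total file size */
    put_u16(f, 0); put_u16(f, 0);          /* reserved */
    put_u32(f, offset);                    /* offset to pixel data */

    put_u32(f, 40);                        /* info header size */
    put_u32(f, (uint32_t)w); put_u32(f, (uint32_t)h);
    put_u16(f, 1); put_u16(f, 8);          /* planes, bits per pixel */
    put_u32(f, 0);                         /* BI_RGB, no compression */
    put_u32(f, image_size);
    put_u32(f, 2835); put_u32(f, 2835);    /* ~72 dpi */
    put_u32(f, 256); put_u32(f, 0);        /* palette entries used */

    for (int i = 0; i < 256; i++) {        /* gray palette, BGRA quads */
        fputc(i, f); fputc(i, f); fputc(i, f); fputc(0, f);
    }

    for (int y = h - 1; y >= 0; y--)       /* rows stored bottom-up */
        fwrite(pix + (size_t)y * w, 1, w, f);
}

With this, the fwrite in the question becomes write_gray_bmp(jpegfp, jpeg_buff, 640, 480).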
