HVX Camera Streaming (Hexagon SDK) is not working on Intrinsyc Qualcomm Snapdragon 845 uSOM Dev. Kit - qualcomm

I have been working through the sample code in the Hexagon DSP SDK on the Intrinsyc Open-Q™ 845 uSOM Hardware Development Kit. When I tried the camera streaming example, I was able to build the code and run it on the DSP, but I am not getting valid output. The intended output is a pixel-processing effect on the camera stream; for the hvx_add_constant example, the bright areas should periodically be boosted to a pink color. Instead, when the camera opened I got no preview image at all, just a black window.
Thanks in advance.

Related

Some applications render black frames with WPFMediaKit

I have a very weird bug that I just can't understand.
I am currently using the WPFMediaKit library in one of my programs.
When using the library in a near-empty new "demo" WPF solution, the image from the USB camera source is rendered perfectly by the MediaCaptureElement class.
However, when the same library (without changes) is used in a more complex solution, the frames are rendered black. I can see there is a connection to the camera and everything goes well: the DirectShow graph is built normally and everything looks fine, apart from the black frames. I can even open the camera properties and see it pushing out frames as usual.
Also, this only happens on a Dell E6540 (with an AMD and an Intel HD GPU) running Windows 7. Windows 8/10 work fine. I have tried a lot of drivers; nothing seems to change the output.
I have no clue what to do or what to try.
After many hours and just after deciding to post this question, I found the answer.
In the AMD Settings application, the more complex solution was set to use "High Performance" mode; for some reason this mode was automatically enabled on Windows 7. Disabling that and setting it to "none" or "lower power" fixed the issue.
This doesn't change the fact that there is an issue on laptops with dedicated AMD GPUs, probably something to do with DirectX/Direct3D, which is used to render the frames.

SL_E_LICENSE_FILE_NOT_INSTALLED (0xC004F011) with the MPEG-2 Decoder

Per the Microsoft documentation, there is an MPEG-1 and MPEG-2 video decoder bundled with Windows 8 which is compatible with Media Foundation. I have written a Source Reader for DVD, MPEG-2 and MPEG-1 and started testing.
In my pipeline, right around the MFTEnumEx call, I get a reference to an instance of a decoder found on the system. As soon as I try to activate the object I receive SL_E_LICENSE_FILE_NOT_INSTALLED. I encountered this message once before, when I was writing a wrapper for MPEG-4: the MPEG-4 encoder gave me the same thing.
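For reference, the failing step looks roughly like this (a minimal C sketch, not my actual Source Reader code; the decoder category, subtype and flags are my best guess at what applies here):
#define COBJMACROS
#include <windows.h>
#include <mfapi.h>
#include <mftransform.h>
/* Link with mfplat.lib, mfuuid.lib and ole32.lib. */

int main(void)
{
    MFStartup(MF_VERSION, MFSTARTUP_FULL);

    MFT_REGISTER_TYPE_INFO input = { MFMediaType_Video, MFVideoFormat_MPEG2 };
    IMFActivate **activates = NULL;
    UINT32 count = 0;

    /* Enumerate the registered MPEG-2 video decoders. */
    HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_DECODER,
                           MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_SORTANDFILTER,
                           &input, NULL, &activates, &count);

    if (SUCCEEDED(hr) && count > 0) {
        IMFTransform *decoder = NULL;
        /* This is the call that comes back with
           SL_E_LICENSE_FILE_NOT_INSTALLED (0xC004F011). */
        hr = IMFActivate_ActivateObject(activates[0], &IID_IMFTransform,
                                        (void **)&decoder);
        if (SUCCEEDED(hr))
            IMFTransform_Release(decoder);
        for (UINT32 i = 0; i < count; i++)
            IMFActivate_Release(activates[i]);
        CoTaskMemFree(activates);
    }

    MFShutdown();
    return 0;
}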
Based on Google searches, I ended up at a link that directed users to run the command: DISM /Online /Cleanup-Image /RestoreHealth
It seems this fixed the MPEG-4 encoder issue back then; I am not sure what is happening here. I ran TopoEdit.exe and tried adding the Microsoft MPEG Video Decoder, and the node fails with the same error.
I am wondering if anyone has encountered the same issue. Any resolutions? I really don't want to write a decoder at this point.
I tested this on another PC at work and I am getting the same result. Both machines run Windows 8.1 64-bit.
Well, I found the issue.
http://www.infoworld.com/article/2616896/microsoft-windows/update--windows-8-won-t-be-able-to-play-dvds.html
Per the link above, starting with Windows 8 the DVD (MPEG-2) decoder is not included by default; it is a purchasable feature. This means no free out-of-the-box DVD playback support. We purchased a copy for the Surface Pro that we have at the company and it cost ten bucks, not a big deal, but I wish they had mentioned this in their documentation. It also appears that the feature is not purchasable/addable in 8.x Enterprise: Windows Media Center, which contains the decoder, was dropped from both Enterprise and Server 2012.

Is it a good idea to use a Screensaver on a raspberry pi as digital signage?

I asked this question in the Raspberry Pi section, so please forgive me for posting it here again; that section just doesn't seem to be as active as this one. So, on to my question...
I have an idea and I'm working on it right now. I just wanted to see what the community's thoughts were on using a screensaver as digital signage. Every tutorial I've read shows someone using Chromium in kiosk mode, and while that's fine and works well for some uses, it doesn't work for what I need. I have successfully set up a Chromium kiosk, and it was cool, but the signage I need to create now has to work without internet. I've thought about installing LAMP locally on the Pi and still using Chromium, and I may still have to if this idea doesn't pan out.
All I need from the signage is a title message in the top center and a message body underneath it, with roughly a 300-400 character limit. My idea is to write a screensaver module, in C, that will work with a screensaver such as xscreensaver. The module would need to be able to load messages from a directory on the Pi, roughly like the sketch below. Then, for my clients to update their signage text, I would write a simple client that sends commands and the text via SSH to the Pi.
I want to know what other people think about this. Is it a good idea? Bad idea? Should I "waste" my time doing something like this?
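Something like this rough sketch is what I have in mind for the message-loading part (the path and the one-file-per-message layout are just placeholders, not working code from my module):
/* Read every .txt file in the message directory; the first line is the
   title, the rest is the body (capped at ~400 characters). */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

#define MSG_DIR "/home/pi/signage/messages"

static void load_messages(void)
{
    DIR *dir = opendir(MSG_DIR);
    if (dir == NULL)
        return;

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strstr(entry->d_name, ".txt") == NULL)
            continue;                       /* skip non-message files */

        char path[512];
        snprintf(path, sizeof path, "%s/%s", MSG_DIR, entry->d_name);

        FILE *f = fopen(path, "r");
        if (f == NULL)
            continue;

        char title[128];
        char body[401];
        if (fgets(title, sizeof title, f) != NULL) {
            size_t n = fread(body, 1, sizeof body - 1, f);
            body[n] = '\0';
            /* ...hand title/body to the xscreensaver drawing code... */
        }
        fclose(f);
    }
    closedir(dir);
}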
Thanks in advance.
I have already been using an rPi as digital signage for just over a year, with two different setups:
Version 1 uses Raspbian, loading xdesktop and the qiv image viewer to cycle images stored on the Pi itself, synchronized with a remote server. The problem I found was power and SD stability: the power will fail no matter what, the only question is when, and the SD card can become corrupted by all the writing that Raspbian does all the time. Signage certainly does not really need to write to the SD card.
Version 2 uses a read-only filesystem and a command-line image tool. It uses the same process to show locally stored images and sync with the server, but a power failure causes no ill effects.
I am not using a screensaver to display the images; that seemed redundant to me, and there is no need to wait for the screensaver to kick in just to display them.
Some of the images are created using ImageMagick, which is nicely dynamic where needed.

ActionScript 3 AIR — Video makes short blinking jumps

I am making an application for iOS and Android using ActionScript 3 and Adobe AIR (3.7) to build the IPA and APK. In this application, I load a video from an FLV file and add it to the scene.
The problem: in the emulator or the Flash view, everything is OK, but on the iPad (tested on iPad 1, 2 and 3 with the same results) the video makes short jumps (like a sudden freeze followed by a small jump forward in the timeline) approximately every 2 seconds.
Of course, I made sure that the video wasn't under other elements or above moving clips. I tried loading the video without the rest of the interface: same result. Changing the renderMode to "direct" or "gpu": no. Exporting the video in different qualities and making sure there is no resizing (even with dimensions that are multiples of 8): no again.
I use something close to this code to load my video (it's the test code I used to be sure that the problem wasn't elsewhere in my code):
import flash.media.Video;
import flash.net.NetConnection;
import flash.net.NetStream;
import flash.events.NetStatusEvent;

var myVideo:Video = new Video();
this.addChild(myVideo);

var nc:NetConnection = new NetConnection();
nc.connect(null); // null = progressive/local playback, no server

var ns:NetStream = new NetStream(nc);

// Handlers must exist before they are referenced by the client object.
function ns_onMetaData(item:Object):void { }
function ns_onPlayStatus(event:NetStatusEvent):void { }

// The client object only takes data callbacks such as onMetaData;
// NetStatusEvent is delivered through addEventListener instead.
ns.client = { onMetaData: ns_onMetaData };
ns.addEventListener(NetStatusEvent.NET_STATUS, ns_onPlayStatus);

myVideo.attachNetStream(ns);
ns.play("myLink.flv");
Thanks in advance, and sorry for my bad English.
You should not use FLV on iOS devices; that is my personal guess as to why you are seeing the "jump". FLV is software decoded, so it is relatively slow, and you are likely experiencing dropped frames while the video is being decoded.
On iOS (and all mobile devices, really), you want to use h.264 video with an mp4 extension (m4v will work on iOS, but not on Android, I believe). For playback, you want to use either StageVideo or StageWebView rather than an AS3-based video-player. StageVideo will render using the actual media framework of the device. StageWebView will only work on iOS and certain Android devices, and will render using the actual media player of the device.
The difference between this and Video or FLVPlayback (or the Flex- or OSMF-based video players) is that the video will be hardware accelerated/decoded. This means that your app's render time (and thus the video render time) will not be dictated by how fast the video is decoded, because a separate chip will be handling it.
Additionally, hardware accelerated video will be much, much better on battery life. I ran a test last year on an iPad 3 and the difference between battery life consumed by software/CPU decoded FLV and hardware decoded h.264 was somewhere in the neighborhood of 30%.
Keep in mind that both of these options do not render in the display list. StageWebView renders above the display list and StageVideo renders below.
I suggest viewing my previous answers on video rendering in AIR for Mobile apps as well; I have gone into more detail about video in AIR for Mobile in the past. I have built three video-on-demand apps using AIR for Mobile now, and it is definitely a delicate task.
NetStream http Video not playing on IOS device
Optimizing video playback in AIR for iOS
How to implement stagevideo in adobe air for android?
Hopefully this is of some help to you.

Produce video from OpenGL C program

I have a C program that runs a scientific simulation and displays a visualisation in an OpenGL window. I want to make this visualisation into a video, which will eventually go on YouTube.
Question: What's the best way to make a video from a C / OpenGL program?
The way I've done it in the past is to use a screen capture program, but this is very labour-intensive (I have to start/stop the screen capture program, save the video file, etc.). It seems like there should be a way to automate the process of making a video from within the C program. Then I could leave it running overnight, have 20 videos to look through in the morning, and choose the best one to put on YouTube.
YouTube recommend "MPEG4 (Divx, Xvid) format at 640x480 resolution".
I'm using GLUT 3.7.6_3, if that makes a difference. I can change windowing system if there's a good reason.
I'm running Windows (XP), so would prefer answers that work on Windows, but Linux answers are ok too. I can use Linux if it's not possible to do the video stuff easily on Windows. I have a friend who makes a .png image for each frame of the video and then stitches them together using "mencoder" on Linux.
You can use the glReadPixels function to read each rendered frame back from the framebuffer; see the sketch below.
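A minimal sketch, assuming a fixed 640x480 GLUT window (matching YouTube's recommended resolution); call it once per frame, after drawing and before swapping buffers:
#include <GL/glut.h>
#include <stdio.h>

#define WIDTH  640
#define HEIGHT 480

/* Append one RGB frame, top row first, to an already-open output file. */
static void capture_frame(FILE *out)
{
    static unsigned char pixels[WIDTH * HEIGHT * 3];
    int y;

    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* rows are tightly packed */
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    /* glReadPixels returns rows bottom-up; write them top-down so the
       resulting video is not upside down. */
    for (y = HEIGHT - 1; y >= 0; y--)
        fwrite(pixels + y * WIDTH * 3, 3, WIDTH, out);
}
The raw frames can then be stitched into a video offline, e.g. with the mencoder approach your friend uses, or with something like ffmpeg -f rawvideo -pix_fmt rgb24 -s 640x480 -r 30 -i frames.rgb out.avi (an illustrative command line; adjust the frame rate to match your simulation).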
But if the stuff you are trying to display is made of simple objects (i.e. spheres, rods, etc.), I would "export" each frame to a POV-Ray file, render those, and then make a video out of the resulting pictures. You will reach a much higher quality that way.
Use a 3rd party application like FRAPS to do the job for you.
"Fraps can capture audio and video up to 2560x1600 with custom frame rates from 1 to 120 frames per second! All movies are recorded in outstanding quality."
They have video samples on the site. They seem good.
EDIT:
You could execute a tool to record the screen from your C application by calling it with something like system("C:\\screen_recorder_app.exe -params") (note the doubled backslash needed in a C string literal). Check CamStudio; it has a command-line version.
