PDFjet says it supports App Engine, which by extension means it will run on Wave. The question is: how can I get it to work on Google Wave?
The goal is to have a PDF button in the wave that can output the whole wave as a PDF.
Any assistance would be greatly appreciated.
You'll likely need to do the following:
Write a wave robot that simply joins a wave so it's capable of copying its contents.
Write a wave gadget that adds an 'As PDF' button to a wave. When clicked, it should make a call to your bot directly, returning the generated PDF.
Write an extension installer that installs the robot and gadget in a wave.
Implement code to render a wave to a PDF using PDFJet. Since PDFJet doesn't render HTML to PDFs directly, you'll either need to implement your own renderer, or use another library.
I have a simple project that converts a color image to a black & white image using CUDA C. But I have a problem with importing/loading a bitmap image into the program; I don't know how to do it.
So...
Does CUDA C have a specific function for importing/loading a bitmap image?
If yes, what is it and how do I use it?
If no, how do you import/load a bitmap image?
Thank you.
There's really nothing that is CUDA-specific about loading a bitmap image into an application.
If you have a preferred method for loading a bitmap image into an application, you should be able to use it with a CUDA app. You will obviously be loading the image into the host application space first. After that, if you want to transfer it to the device, you can use any of the standard methods for transferring data to the device to accomplish this.
CUDA (i.e. the runtime API) doesn't have any specific functions for importing/loading a bitmap image.
There are many ways to load an image. If you are already using OpenGL or DirectX, then you will want to use a method associated with one of those APIs, and then use the appropriate interop API within CUDA to manipulate the object.
If you want to import a bitmap image directly into a CUDA program without using a graphics API, take a look at the CUDA samples, as a number of them do this and provide helper functions that you may want to re-use.
For example, the dct8x8 sample provides a file called BmpUtil.cpp which contains a number of useful bitmap import/handling routines, and the dct8x8 app (dct8x8.cu) shows how these may be used directly in a CUDA app.
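For illustration, here is a minimal sketch of that flow: load the bitmap into host memory, then move it to the device with the standard runtime calls. The loadBMP helper is a hypothetical name of my own, and it only handles uncompressed 24-bit BMP files:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative loader for uncompressed 24-bit BMP files (no palette, no RLE).
unsigned char* loadBMP(const char* path, int* width, int* height)
{
    FILE* f = fopen(path, "rb");
    if (!f) return NULL;
    unsigned char header[54];
    if (fread(header, 1, 54, f) != 54) { fclose(f); return NULL; }
    *width  = *(int*)&header[18];              // image width,  header offset 18
    *height = *(int*)&header[22];              // image height, header offset 22
    int rowSize = ((*width * 3 + 3) / 4) * 4;  // BMP rows are 4-byte aligned
    unsigned char* pixels = (unsigned char*)malloc(rowSize * *height);
    fseek(f, *(int*)&header[10], SEEK_SET);    // offset to pixel data
    fread(pixels, 1, rowSize * *height, f);
    fclose(f);
    return pixels;
}

int main()
{
    int w = 0, h = 0;
    unsigned char* hostImg = loadBMP("input.bmp", &w, &h);
    if (!hostImg) return 1;
    size_t bytes = (size_t)(((w * 3 + 3) / 4) * 4) * h;

    // Standard CUDA transfer: allocate device memory, copy the host image over.
    unsigned char* devImg = NULL;
    cudaMalloc((void**)&devImg, bytes);
    cudaMemcpy(devImg, hostImg, bytes, cudaMemcpyHostToDevice);

    // ... launch your color-to-grayscale kernel on devImg here ...

    cudaFree(devImg);
    free(hostImg);
    return 0;
}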
I'd like to implement the live view function using the EDSDK. I have used EdsGetPointer to get a pointer to the memory stream. Now I want to display the streaming image on the PC.
I have read that some people use Visual C++ APIs such as ATL's CImage, which can display the streaming image just by being passed a pointer to the memory stream; the function retrieves the streaming images by itself. I am thinking of using OpenCV to display the streaming images, as I don't have Visual C++ installed on my computer. Is there any function in OpenCV that I can use to display streaming images? Or is there any other alternative I can use to deal with streaming images from the EDSDK?
You can pack the data into an IplImage and show it using cvShowImage in a loop: http://opencv.willowgarage.com/documentation/user_interface.html The down side is you're tied into the OpenCV event loop.
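A rough sketch of what that loop might look like, assuming each EVF frame arrives as a JPEG in an EDSDK memory stream (call names follow Canon's EDSDK headers, but check the exact signatures in your SDK version; error handling omitted, and live view must already be started with the EVF output device set to the PC):

EdsStreamRef stream = NULL;
EdsEvfImageRef evfImage = NULL;
EdsCreateMemoryStream(0, &stream);
EdsCreateEvfImageRef(stream, &evfImage);
cvNamedWindow("live view", CV_WINDOW_AUTOSIZE);
for (;;) {
    if (EdsDownloadEvfImage(camera, evfImage) == EDS_ERR_OK) {
        unsigned char* data = NULL;
        EdsUInt64 size = 0;
        EdsGetPointer(stream, (EdsVoid**)&data);
        EdsGetLength(stream, &size);
        // Decode the in-memory JPEG into an IplImage with OpenCV's C API.
        CvMat buf = cvMat(1, (int)size, CV_8UC1, data);
        IplImage* frame = cvDecodeImage(&buf, CV_LOAD_IMAGE_COLOR);
        if (frame) {
            cvShowImage("live view", frame);
            cvReleaseImage(&frame);
        }
    }
    if (cvWaitKey(30) == 27) break;  // Esc exits the loop
}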
There are alternatives. In the past I've used OpenGL to paint an image as a texture so that I could manage the viewport, draw on top of it, etc. You can get a simple and flexible working GUI pretty quickly using GLUT. A benefit to that is that whatever OpenGL code you write will be portable to any other UI library you use as long as that library has an OpenGL canvas widget. What I always do is Camera->IplImage->OpenGL Texture->wxWidgets glCanvas. I still use OpenCV for the actual image processing, etc. It's totally cross-platform and doesn't require the pay version of VC++.
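For the IplImage -> OpenGL texture step of that pipeline, the upload is a single glTexImage2D call. A sketch, assuming a tightly packed 3-channel IplImage (OpenCV stores pixels as BGR; on Windows headers the matching format enum is GL_BGR_EXT):

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // tight rows; adjust if widthStep is padded
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img->width, img->height, 0,
             GL_BGR_EXT, GL_UNSIGNED_BYTE, img->imageData);

For subsequent frames, update the existing texture with glTexSubImage2D and redraw a textured quad in your canvas's paint handler.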
Do you want it for live view? If not, you can save your streaming image on the host using
Error = EdsCreateFileStream(dirItemInfo.szFileName, EDSDK.EdsFileCreateDisposition.CreateAlways, EDSDK.EdsAccess.ReadWrite, out stream);
Then you can load it:
IplImage *inImg = cvLoadImage("photo2.jpg");
and then process the image in OpenCV.
I currently send a live video mix to an output screen (a form on a particular screen). Consider it like a really advanced version of PowerPoint; I call it a video control room for the PC. I want to take 30 frames a second from a screen (of my choice, I allow multiple screens) plus the computer's stereo audio, and save it to hard disk. How do I do that?
I know I can draw the image of the interface using the RenderTargetBitmap class, but how do I put those images (as frames) into an AVI file or push them to a video server? An SDK or a code example to point me in the right direction would be nice! I also want to capture the sound of the current stereo mix, or the microphone (as determined by the user).
I don't want to use a third party, and I'd prefer doing it in the program to take maximum control over it. I'm OK with using a second program to do compression and just saving a raw AVI file (with an audio stream). Disk is cheap, as any programmer would say. If I have to, I'll save the video and audio streams separately, but I'd prefer not to.
Let me know.
Check out the RenderTargetBitmap class. It allows you to turn any visual into pixels that you can then pass along to an encoder/network stack, like the ones described here.
Also check out Windows Media Foundation for turning your pixels into an AVI or stream.
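As a sketch of the Media Foundation route (Windows 7+): the sink writer takes uncompressed frames and encodes them into a container. This condenses the pattern from the MSDN sink writer sample; all error handling is omitted (every call returns an HRESULT you should check), it assumes top-down RGB32 frames, and it targets H.264 in .mp4 rather than AVI:

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <cstring>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

const UINT32 W = 640, H = 480, FPS = 30;

void WriteFrames(BYTE* rgb32Frames /* W*H*4 bytes per frame */, int frameCount)
{
    MFStartup(MF_VERSION);

    IMFSinkWriter* writer = NULL;
    MFCreateSinkWriterFromURL(L"capture.mp4", NULL, NULL, &writer);

    // Output type: H.264.
    IMFMediaType* outType = NULL;
    MFCreateMediaType(&outType);
    outType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    outType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
    outType->SetUINT32(MF_MT_AVG_BITRATE, 800000);
    outType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
    MFSetAttributeSize(outType, MF_MT_FRAME_SIZE, W, H);
    MFSetAttributeRatio(outType, MF_MT_FRAME_RATE, FPS, 1);
    DWORD streamIndex = 0;
    writer->AddStream(outType, &streamIndex);

    // Input type: uncompressed RGB32, which is what screen captures give you.
    IMFMediaType* inType = NULL;
    MFCreateMediaType(&inType);
    inType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    inType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    inType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
    MFSetAttributeSize(inType, MF_MT_FRAME_SIZE, W, H);
    MFSetAttributeRatio(inType, MF_MT_FRAME_RATE, FPS, 1);
    writer->SetInputMediaType(streamIndex, inType, NULL);

    writer->BeginWriting();
    LONGLONG duration = 10 * 1000 * 1000 / FPS;   // frame duration in 100-ns units
    for (int i = 0; i < frameCount; ++i) {
        IMFMediaBuffer* buf = NULL;
        MFCreateMemoryBuffer(W * H * 4, &buf);
        BYTE* dst = NULL;
        buf->Lock(&dst, NULL, NULL);
        memcpy(dst, rgb32Frames + (size_t)i * W * H * 4, W * H * 4);
        buf->Unlock();
        buf->SetCurrentLength(W * H * 4);

        IMFSample* sample = NULL;
        MFCreateSample(&sample);
        sample->AddBuffer(buf);
        sample->SetSampleTime(i * duration);
        sample->SetSampleDuration(duration);
        writer->WriteSample(streamIndex, sample);
        sample->Release();
        buf->Release();
    }
    writer->Finalize();
    writer->Release();
    MFShutdown();
}

Audio would go in as a second stream on the same sink writer, with its own input and output media types.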
I have a C program that runs a scientific simulation and displays a visualisation in an OpenGL window. I want to make this visualisation into a video, which will eventually go on YouTube.
Question: What's the best way to make a video from a C / OpenGL program?
The way I've done it in the past is to use a screen capture program, but this is very labour-intensive (have to start/stop the screen capture program, save the video file, etc...). It seems like there should be a way to automate the process of making a video from within the C program. Then I can leave it running overnight and have 20 videos to look through in the morning, and choose the best one to put on YouTube.
YouTube recommend "MPEG4 (Divx, Xvid) format at 640x480 resolution".
I'm using GLUT 3.7.6_3, if that makes a difference. I can change windowing system if there's a good reason.
I'm running Windows (XP), so would prefer answers that work on Windows, but Linux answers are ok too. I can use Linux if it's not possible to do the video stuff easily on Windows. I have a friend who makes a .png image for each frame of the video and then stitches them together using "mencoder" on Linux.
You can use the glReadPixels function to grab each rendered frame (see the sketch below).
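A sketch of the grab: at the end of your display callback, read the framebuffer back and dump a numbered image file, then stitch the files together afterwards (the image-per-frame/mencoder workflow your friend uses). Here each frame goes out as a PPM, which needs no image library:

#include <GL/glut.h>
#include <cstdio>
#include <vector>

// Call at the end of the display callback, before glutSwapBuffers().
void savePPMFrame(int width, int height, int frameNum)
{
    std::vector<unsigned char> pixels(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

    char name[64];
    sprintf(name, "frame%05d.ppm", frameNum);
    FILE* f = fopen(name, "wb");
    fprintf(f, "P6\n%d %d\n255\n", width, height);
    // glReadPixels returns rows bottom-up; PPM wants them top-down.
    for (int y = height - 1; y >= 0; --y)
        fwrite(&pixels[y * width * 3], 1, width * 3, f);
    fclose(f);
}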
But if the stuff you are trying to display is made of simple objects (i.e. spheres, rods, etc.), I would "export" each frame to a POV-Ray file, render those, and then make a video out of the resulting pictures. You will reach a much higher quality that way.
Use a 3rd party application like FRAPS to do the job for you.
Fraps can capture audio and video up to 2560x1600 with custom frame rates from 1 to 120 frames per second! All movies are recorded in outstanding quality.
They have video samples on the site. They seem good.
EDIT:
You could execute a tool to record the screen from your C application by calling it like system("C:\\screen_recorder_app.exe -params"). Check out CamStudio; it has a command-line version.
I'm asking a general question since I am a developer with no advanced experience in video processing. I have to prepare a web application that allows video files to be uploaded to our company server and then processed by the server on the user's command. The purpose of the web application is to let the user run various processing jobs on a video, launched from the web app:
(the server has to) convert the video to different formats (mp4, flv...)
extract keyframes from the video and save them in JPEG format
possibility to extract audio from video
automatic quality control of audio & video (black-frame and silence detection)
scene-change detection and keyframe extraction
.....
This is what my bosses want from the web-based application (with server support, obviously). I only understand the first 3 points of this list; the rest was Greek to me...
My question is: what is the best and fastest server-side application for this work that can support multiple batch video conversions from the command line (command line for PHP-SOAP-socket interaction or something else)?
Is Adobe Media Server suitable for batch video conversion?
Which Adobe products can be used for this purpose?
Note: I have experience with InDesign Server scripting (sending XML with PHP and SOAP calls...), and I am looking for something similar for video processing.
I will appreciate any answers.
Thanks, all.
I suggest you start with the open source project FFmpeg. You can call the program from the command line and via a series of arguments specify the desired output types, thumbnails, etc.
As an aside, when you start looking around at video-related projects (MediaShare, for example) you will find they are all using FFmpeg for their video processing.
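For example, the first three items on your list map directly onto single FFmpeg invocations (file names are placeholders):

ffmpeg -i input.avi output.flv                         (convert to another format)
ffmpeg -i input.avi -ss 00:00:10 -vframes 1 thumb.jpg  (grab one frame as a JPEG)
ffmpeg -i input.avi -vn audio.wav                      (extract the audio track)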
As Nathan suggested, FFmpeg is the first choice. You can also check MEncoder.
Just to elaborate:
1) (the server has to) convert the video to different formats (mp4, flv...)
Both FFmpeg and MEncoder do this well.
2) extract keyframes from the video and save them in JPEG format
As far as I know this is impossible using FFmpeg's command-line interface; I'm not sure about MEncoder. However, they can save all frames as separate images.
3) possibility to extract audio from video
Both FFmpeg and MEncoder do this well.
4) automatic quality control of audio & video (black-frame and silence detection)
You need to code this yourself, using the FFmpeg libraries or MEncoder.
5) scene-change detection and keyframe extraction
It's not clear what your boss requires here.
I have done a lot of server-side video conversion using the Xuggler API libraries.
Xuggler is a free open-source library for Java developers which can be used to uncompress, manipulate, and compress recorded or live video in real time. Xuggler uses the very powerful FFmpeg media-handling libraries under the hood, essentially playing the role of a Java wrapper around them. It is an easy way to uncompress, modify, and re-compress any media file (or stream) from Java.
Web links:
1) http://www.xuggle.com/ - official website
2) http://www.javacodegeeks.com/2011/02/introduction-xuggler-video-manipulation.html - example