For a web project I need to be able to generate JPEG and animated GIF images very quickly. As the server platform I will use Linux and the NekoVM (behind an Apache server via mod_tora). Since there is no image-generation library for Haxe and Neko, I am about to write my own.
Neko itself is written in C, and you can easily extend the VM by writing shared libraries in C. At the moment we are playing around with libGD, which offers all the features we need (resizing, sampling, copying images, adding text, saving as JPEG or animated GIF) and, of course, a lot of stuff we don't need.
At the moment this works well, but it seems to be a little slow. Is there another popular open library that I could try for my purposes (and that is perhaps faster)?
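For reference, here is a minimal sketch of the libGD path we are using (load, resample, save as JPEG); the file names and sizes are placeholders:

    /* minimal libGD sketch: load a JPEG, resample to half size, save */
    #include <stdio.h>
    #include <gd.h>

    int main(void)
    {
        FILE *in = fopen("input.jpg", "rb");   /* placeholder file name */
        if (!in) return 1;
        gdImagePtr src = gdImageCreateFromJpeg(in);
        fclose(in);
        if (!src) return 1;

        int w = gdImageSX(src) / 2, h = gdImageSY(src) / 2;
        gdImagePtr dst = gdImageCreateTrueColor(w, h);

        /* smooth but comparatively slow resampling copy */
        gdImageCopyResampled(dst, src, 0, 0, 0, 0, w, h,
                             gdImageSX(src), gdImageSY(src));

        FILE *out = fopen("thumb.jpg", "wb");
        if (out) {
            gdImageJpeg(dst, out, 85);  /* quality 85 */
            fclose(out);
        }

        gdImageDestroy(src);
        gdImageDestroy(dst);
        return 0;
    }

Note that gdImageCopyResized (nearest-neighbour) should be faster than gdImageCopyResampled, if lower quality is acceptable.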
Have you tried Magick++ and/or MagickCore?
Your next best bet is to run NekoVM under pprof to figure out which function(s) are the most costly in libGD, and try to avoid or optimize your use of those by changing your calling code.
There is imlib2, but I doubt that it supports animated GIFs.
I'm encoding images as video with FFmpeg, using custom C code rather than Linux commands, because I am developing the code for an embedded system.
I am currently working through the first dranger tutorial and the code provided in the following question.
How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?
I have found some "less abstract" code at the following GitHub location.
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/encode_video.c
And I plan to use it as well.
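To make that concrete, here is a condensed sketch of the pattern encode_video.c uses, assuming a reasonably recent libavcodec (the send/receive API); the codec, dimensions, and file name below are placeholders rather than my final choices, and error handling is trimmed:

    #include <libavcodec/avcodec.h>
    #include <stdio.h>

    static void encode(AVCodecContext *ctx, AVFrame *frame,
                       AVPacket *pkt, FILE *out)
    {
        /* passing frame == NULL flushes the encoder at end of stream */
        avcodec_send_frame(ctx, frame);
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            fwrite(pkt->data, 1, pkt->size, out);
            av_packet_unref(pkt);
        }
    }

    int main(void)
    {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG1VIDEO);
        AVCodecContext *ctx  = avcodec_alloc_context3(codec);
        AVPacket *pkt        = av_packet_alloc();
        FILE *out            = fopen("out.mpg", "wb");  /* placeholder name */

        ctx->width     = 352;                  /* example dimensions */
        ctx->height    = 288;
        ctx->time_base = (AVRational){1, 25};  /* 25 fps */
        ctx->framerate = (AVRational){25, 1};
        ctx->pix_fmt   = AV_PIX_FMT_YUV420P;

        if (!out || avcodec_open2(ctx, codec, NULL) < 0)
            return 1;

        /* ... fill an AVFrame per image and call encode() for each,
           then call encode(ctx, NULL, pkt, out) once to flush ... */

        fclose(out);
        return 0;
    }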
My end goal is simply to save video on an embedded system using embedded C source code, and I am coming up the curve too slowly. So, in summary, my question is: does it seem like I am following the correct path here? I know that my system does not come with hardware for video codec conversion, which means I need to do it in software, but I am unsure whether FFmpeg is even a feasible option for embedded work, because I have yet to compile it.
The biggest red flag for me thus far is that FFmpeg uses dynamic memory allocation. I am unfamiliar with how to assess the amount of dynamic memory that it uses. This is very important information to me, and if anyone is familiar with the amount of memory used or how to assess it before compiling, I would greatly appreciate the input.
After further research, it seems to me that encoding video is often a hardware-intensive task that can use multiple processors and gigabytes of RAM. To avoid this, I am performing a minimal amount of compression by using the AVI format.
I have found that FFmpeg can't readily be used for bare-metal embedded systems, because the library's initial "make" sets up configuration settings specific to the compiling machine, which conflicts with the need to cross-compile. I can see that there are cross-compilation flags available, but I have not found any documentation describing how to use them. Either way, I want to avoid big heaps and multi-threading, so I moved on.
I decided to look for more basic source code elsewhere. mikekohn.net/file_formats/libkohn_avi.php is a great resource for very basic encoding without any complicated library dependencies or multi-threading. I have yet to implement it, so no guarantees, but best of luck. This is actually one of the only understandable encoding source codes I have found for image-to-video applications, other than https://www.jonolick.com/home/mpeg-video-writer. However, Jon Olick's source code uses lossy encoding and a minimum framerate (inherent to MPEG), both of which I am trying to avoid.
I have a WPF application that acquires images from a camera, processes these images, and displays them. The processing part has become burdensome for the CPU, so I've looked at moving this processing to the GPU and running custom CUDA kernels against them. The basic process is as follows:
1) acquire image from camera
2) load image onto GPU
3) call CUDA kernel to process image
4) display processed image
What I'm trying to figure out is a WPF-to-CUDA-to-display-control strategy.
It seems natural that once the image is loaded onto the GPU that it would not have to be unloaded in order to be displayed. I've read that this can be done with OpenGL, but do I really need to learn OpenGL and include it in my project in order to do a fast display of a CUDA-processed image?
I understand (I think) the issues of calling CUDA kernels from C#. My plan is to either build an unmanaged library around my CUDA calls, which I later wrap for C# -- OR -- try to decide on which one of the managed wrappers (managedCUDA, Cudafy, etc.) to try. I worry about using one of the prebuilt wrappers because they all appear to be lightly supported...but maybe I have the wrong impression.
Anyway, I'm feeling a bit overwhelmed after days of researching the possible options. Any advice would be greatly appreciated.
The process of taking the result of a CUDA computation and using it directly on the device for a graphics activity is called "interop". There is OpenGL interop and there is DirectX interop. There are plenty of CUDA samples demonstrating how to interact with computed images.
To go directly from computed data on the device to the display, without a round trip to the host, you will need to use one of these two APIs (OpenGL or DirectX).
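As a rough illustration, here is a hedged sketch of the OpenGL interop path in CUDA C; it assumes pbo is a pixel buffer object you have already created with ordinary OpenGL calls:

    #include <GL/gl.h>
    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    static cudaGraphicsResource_t res;

    /* call once, after creating the GL pixel buffer object */
    void register_pbo(GLuint pbo)
    {
        cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsMapFlagsWriteDiscard);
    }

    /* call per frame: map, let a kernel write into the buffer, unmap */
    void render_frame(void)
    {
        unsigned char *dev_ptr;
        size_t size;

        cudaGraphicsMapResources(1, &res, 0);
        cudaGraphicsResourceGetMappedPointer((void **)&dev_ptr, &size, res);

        /* ... launch your processing kernel writing into dev_ptr ... */

        cudaGraphicsUnmapResources(1, &res, 0);
        /* then draw the PBO's contents with ordinary OpenGL calls */
    }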
You mentioned two of the managed interfaces I've heard of, so it seems like you're aware of the options there.
If the processing time is significant compared to (much larger than) the time taken to transfer the image from host to device, you might consider starting out by just transferring the image from host to device, processing it, and then transferring it back, where you can then use the same plumbing you have been using to display it. You can then decide if the additional effort for interop is worth it.
If you can profile your code to figure out how long the image processing takes on the host, and then prototype something on the device to find out how much faster it is, that will be instructive.
You may find that the processing time is so long you can even benefit from the double-copy arrangement. Or you may find the processing time is so short on the host (compared to just the cost to transfer to the device) that the CUDA acceleration would not be useful.
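If you do start with the simple route, a minimal sketch of that double-copy arrangement in CUDA C looks like this; the invert kernel is just a placeholder for whatever your processing actually is:

    #include <cuda_runtime.h>

    /* placeholder kernel: invert each byte of the image */
    __global__ void invert(unsigned char *pix, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            pix[i] = 255 - pix[i];
    }

    /* upload, process, download: the double-copy arrangement */
    void process_on_gpu(unsigned char *host, int n)
    {
        unsigned char *dev;
        cudaMalloc((void **)&dev, n);

        cudaMemcpy(dev, host, n, cudaMemcpyHostToDevice);   /* upload   */
        invert<<<(n + 255) / 256, 256>>>(dev, n);           /* process  */
        cudaMemcpy(host, dev, n, cudaMemcpyDeviceToHost);   /* download */

        cudaFree(dev);
    }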
WPF has a control named D3DImage that shows DirectX content directly on screen, and in the managedCuda samples package you can find a version of the original fluids sample from the CUDA Toolkit using it (together with SlimDX). You don't have to use managedCuda to use CUDA from C#, but you can look at it to see how things can be done: managedCuda samples
A previous relevant question of mine is here: Reverse Engineering old paint programs
I have set up my base of operations here: http://animatorpro.org
A wiki is coming soon.
Okay, so now I have a 300,000-line legacy MS-DOS codebase. It's sort of a "be careful what you wish for" situation. I am not an experienced C programmer. I'm not entirely inexperienced either, but for all intents and purposes I'm a noob to the language, and in particular to the intricacies of its libraries. I am especially ignorant of the vagaries of the differences between C programs written specifically for MS-DOS and programs that are cross-platform. However, I have been studying this code base for over a year now, and this is what I know about Animator Pro:
Compilers and tools used:
Watcom C compiler
tcmake (make program from Turbo C)
386asm, a specialised assembler for the Phar Lap DOS extender
and of course, the Phar Lap DOS extender itself
a selection of obscure DOS utilities
Much of the compilation seems to be driven by batch files. Though I have obtained copies of all these tools, I have not yet succeeded at compiling it (though I have compiled its older brother, Autodesk Animator Original).
It's got a plugin system, based on REX, that replicated DLLs before DLLs were available. The plugin system handles:
Video Drivers (with a plethora of included VESA drivers)
Input drivers (including Wacom tablets and keyboards)
Drawing Tools
Inks (like Photoshop's filters, or blending modes)
Scripting Addons (essentially compiled scripts)
File formats
It's got its own script interpreter named POCO, based on the C language. The scripting language has enough power to do virtually everything the plugin system can do, just slower.
Given this information, this is my development plan. Please criticise this. The source code is available in the link above, so you can easily, if you are so inclined, assess the situation yourself.
1) Compile with its original tools.
2) Switch to using DJGPP, and make the necessary changes to get it to compile with that, plus the original assembler.
3) Include the Allegro.cc "game" library, and switch over as much functionality to that library as possible, perhaps by simply writing new video and input drivers that use the Allegro API (see the sketch after this list). I'm thinking Allegro rather than SDL because there is a DOS version of Allegro and, fascinatingly, one of its core functions is the ability to play Animator Pro's native FLIC format.
4) Hopefully after 3, I will have eliminated most or all of the assembler in the project. I say hopefully because it's in an obscure dialect that doesn't assemble in any modern free assembler without significant modification; I have tried them all. Whatever is left gets converted to assemble in NASM, or to C code if I can determine the assembler code's actual function.
5) Switch the DOS extender from Phar Lap to HX DOS (http://www.japheth.de/HX.html), which promises to replicate as much of the Win32 API as possible. Then make all the necessary code changes for that to work.
6) Switch to the Win32 version of Allegro.cc, assuming that the Win32 version can run on top of HX DOS. Make any further necessary changes.
7) Modify the plugin system to use some kind of standard cross-platform plugin library. What this would be, I have no idea; maybe you can offer some suggestions? I talked to the developer who originally wrote the plugin system, and he said some of the things it does aren't possible on modern OSes because of segmentation restrictions. I'm not sure what this means, but I'm guessing all the plugins will need to be rewritten almost from scratch.
8) Magically, with all the above done, we can try to make it run on Windows, OS X, and Linux, whilst dealing with other cross-platform niggles like long file names and things I haven't thought of.
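As a sanity check on step 3, here is a minimal sketch of the FLIC playback I mentioned, assuming Allegro 4.x; the file name is a placeholder, and I haven't tried this against the Animator Pro sources, it just shows that Allegro speaks the format natively:

    /* minimal Allegro 4.x sketch: play an Animator Pro FLIC file */
    #include <allegro.h>

    int main(void)
    {
        if (allegro_init() != 0)
            return 1;
        install_keyboard();

        set_color_depth(8);  /* FLICs are 256-colour */
        if (set_gfx_mode(GFX_AUTODETECT_WINDOWED, 320, 200, 0, 0) != 0)
            return 1;

        /* play_fli() blocks until the animation ends (or the callback
           returns non-zero); "test.flc" is a placeholder file name */
        play_fli("test.flc", screen, 0 /* no loop */, NULL);

        readkey();
        return 0;
    }
    END_OF_MAIN()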
Does anyone have a problem with any of this? Is Allegro a good choice? If not, why not? What would you do about the plugin system? What would you do differently? Is this whole thing foolish: should I just rewrite it from scratch, using the original as inspiration? (It would apparently take the original developer "about a month" to do that.)
One thing I haven't covered above is the text/font system. I'm not sure what to do about that; Animator Pro has its own custom font format, but it can also use PostScript Type 1 fonts and some other formats.
My biggest concern with your plan, in a nutshell: Your approach seems to be to attempt to keep the whole enormous thing working at all times, tweaking the environment ever-further away from DOS. During each tweak to the environment, that means you will have approximately a billion subtle assumptions that might have broken at once, none of which you necessarily understand yet. Untangling them all at once will be incredibly painful.
If I were doing the port, my approach would be to disable as much code as possible to get SOMETHING running in a modern environment, and bring the parts back online, one piece at a time. Write a simple test harness program that loads a display driver and draws some stuff, and compile it for DOS to make sure you understand the interface. Then write some C code that implements the same interface, but with Allegro (or SDL or SFML), and make that program work under Windows or Linux. When the output differs, you have a simple test case to work from.
Your entire job on this port is swapping out implementations of various interfaces and functions with completely new ones. This is a job that unit testing excels at. Don't write any new code without a test of some kind that runs on the old code under DOS! Make your potential problems as small and simple as you possibly can. Port assembly code instead of rewriting it only if you're reasonably confident that it will actually make your job easier (ie, algorithmic stuff that compiles fine with few tweaks under NASM). Don't bite off a bigger piece than you can comfortably fit in your brain at once.
I, for one, look forward to seeing your progress! I think what you're attempting to do is great. Thanks for doing it.
Hmmm, I might approach it by writing an OpenGL video "driver" for it. Today's machines are fast enough, with tons of RAM, that you could run all the pixel-specific algorithms on the main CPU into a back buffer, and it would work. As the "generic" VGA driver just mapped the video buffer to a pointer, this would be a place to start. There was a zoom mode in the UI, so you could look at the pixels on a high-res display.
It is often very difficult to take an existing non-trivial code base that wasn't written with portability in mind (you mention a few issues) and then try to make it portable. There will be a lot of problems along the way. It is probably a better idea to start from scratch and rewrite the code, using the existing code as a reference only. If you start from scratch, you can leverage an existing portable UI solution in your new project, like Qt.
I'm working on a project that's supposed to work on both Windows and Linux (with an unofficial Mac port as well) and emulates a true-colour system console.
My problem is that recently a request appeared for text-field support (yes, console-based), and it would be cool to add the ability to copy text to the clipboard and paste from it. Is there a way of achieving this that will:
be done in C (not C++),
work in both Windows and in Linux (preprocessor macros are an option if there's no platform-independent code),
require no extra libraries to link to?
Thanks in advance for your help.
If you're not using a cross platform UI library (like wx or something), then it sounds like you're just going to have to write native clipboard code for each platform you want to support.
Remember, on Macintoshes, you copy with Command-C, not Ctrl+C :)
The clipboard is inherently an operating system defined concept. The C language itself has no knowledge of what a clipboard is or how to operate on it. You must either interface directly with the OS, or use a portability library that does this on your behalf. There is no way around this.
Personally, I would define your own function
char *getClipboardText(void);
that is declared in two different header files (linux_clipboard.h, windows_clipboard.h, etc.) and then use preprocessor directives to load the appropriate one. I don't really code in C/C++, so I'm sorry if that doesn't make sense or is bad practice, but that's how I'd go about doing this.
#ifdef _WIN32
#include "windows_clipboard.h"
#else
#include "linux_clipboard.h"
#endif
That sort of thing
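For the Windows side, here is a minimal sketch of what the implementation behind windows_clipboard.h might look like; getClipboardText is the hypothetical function named above, and it uses only the Win32 API, so there is nothing extra to link beyond the standard Windows import libraries:

    #include <windows.h>
    #include <stdlib.h>
    #include <string.h>

    /* returns a heap-allocated copy of the clipboard text,
       or NULL on failure; the caller frees the result */
    char *getClipboardText(void)
    {
        char *result = NULL;

        if (!OpenClipboard(NULL))
            return NULL;

        HANDLE data = GetClipboardData(CF_TEXT);
        if (data != NULL) {
            const char *text = (const char *)GlobalLock(data);
            if (text != NULL) {
                result = (char *)malloc(strlen(text) + 1);
                if (result != NULL)
                    strcpy(result, text);
                GlobalUnlock(data);
            }
        }

        CloseClipboard();
        return result;
    }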
Remember:
On Linux you have to deal with different desktop environments (GNOME, KDE), each with its own way of managing the clipboard. Keep this in mind when designing your app.
You may be able to talk to the clipboard by using xclip. There is also a Python script that does this job by communicating with 'dcop' and 'klipper'; that is for KDE, and I do not know how it would be done under GNOME... You may also be able to do this independently of GNOME/KDE by using D-Bus, although I cannot say that with 100% confidence either...
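If shelling out is acceptable, here is a hedged sketch in C that reads the X clipboard through the xclip binary via popen(); it trades the link-time dependency for a runtime dependency on xclip being installed:

    #include <stdio.h>

    /* reads up to size - 1 bytes of clipboard text into buf;
       returns 0 on success, -1 on failure */
    int readClipboardViaXclip(char *buf, size_t size)
    {
        if (size == 0)
            return -1;

        FILE *p = popen("xclip -selection clipboard -o", "r");
        if (p == NULL)
            return -1;

        size_t n = fread(buf, 1, size - 1, p);
        buf[n] = '\0';

        return (pclose(p) == 0) ? 0 : -1;
    }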
Just be aware that, for a truly cross-platform job, you have to take into account the different GUIs: under Linux, X is the underlying windowing system and GNOME/KDE sit on top of it (not to single out other GUIs such as Fluxbox and Window Maker, to name but a few). There will be a lot of platform-dependent code, and in conjunction you will be dealing with the Windows clipboard as well... all in all, a big integration job...
Have you considered looking at the raw X programming API for clipboard support? That might be better, as I would imagine GNOME/KDE etc. use X's API to do the clipboard work; if that is confirmed, the work would be reduced and would be independent of the major GUI interfaces... (I hope that is the case, as it would make life easier for your project!)
Perhaps use compile-time switches for each platform (WIN, KDE, GNOME, MAC), or use ones that are already predefined...
Hope this helps,
Best regards,
Tom.
Is there a way to write a BitmapEffect in 100% managed code? I know that it would run a lot slower than unmanaged code, but I'd like to write a BitmapEffect, and it's been a long time since I've done any C++ programming; plus, the application might have to run in partial trust (so unmanaged code won't be permissible). The effect is going to be run very rarely, on static content. Simply getting the bitmap of the rendered content and handing back a bitmap of the altered content would suffice.
Before you go that route, have you seen this:
GPU based effects
It's a series of articles about writing effects (supported in .NET 3.5 SP1) as fragment shaders that run on your GPU... Pretty neat stuff!
You can use the RGBFilter as your starting point; it's a custom bitmap effect sample, written in C++ and C#.
I am not sure you can implement a custom bitmap effect in C# only, as it requires implementing some MIL interfaces, which might not be doable from C#. Though I might be wrong about that.