OpenCV on ARM11 architecture with Ubuntu Support - c

I am developing a project based on OpenCV. Currently I am working on an Ubuntu 10.04 system with an AMD Turion processor, but the project needs to run as an embedded system, so I am using a ready-made ARM11 board built around the Samsung S3C6410 processor. It supports Linux 2.6.28 and also supports Ubuntu. How can I port my code from the host system to my embedded system?
Thanks
This is the link for the board:
http://www.minidevs.com/

I think the best way to start is to take a look at Angstrom/OpenEmbedded.
It's a framework for building OS images for various embedded platforms. You could take the precompiled images, but I've found that after a while it's not worth the hassle.
Just build the target image yourself, with OpenCV for the target platform. It definitely builds for the S3C2440 (I tested it myself a year or so ago) and for all OMAP3 platforms (BeagleBoard, EVM and the like).
Then use OpenEmbedded to build the cross-compiler (there is a package for that), install it on your host machine, and you should be ready to go.
If there is no support for the S3C6410, just use any other ARM11 platform out there and install the packages. It is likely your vendor-supplied OS was built using OpenEmbedded; it is quickly becoming the de facto standard.
http://www.angstrom-distribution.org/
http://www.openembedded.org
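Once the image and cross-compiler are installed, a tiny test program is useful for verifying that the cross-compiled OpenCV libraries actually work on the board. Below is a minimal sketch using the old C API; the toolchain prefix in the comment is only an example, since the actual prefix depends on the toolchain OpenEmbedded generates for you.

    /* opencv_smoke_test.c - minimal check that OpenCV runs on the target.
     * Cross-compile with your OpenEmbedded toolchain, for example
     * (the compiler prefix here is just an illustration):
     *   arm-angstrom-linux-gnueabi-gcc opencv_smoke_test.c -o opencv_smoke_test \
     *       `pkg-config --cflags --libs opencv`
     */
    #include <stdio.h>
    #include <opencv/highgui.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <image>\n", argv[0]);
            return 1;
        }

        /* Load an image from disk; a NULL result means the OpenCV build is
         * broken or the file is missing. */
        IplImage *img = cvLoadImage(argv[1], CV_LOAD_IMAGE_COLOR);
        if (!img) {
            fprintf(stderr, "failed to load %s\n", argv[1]);
            return 1;
        }

        printf("loaded %dx%d image with %d channels\n",
               img->width, img->height, img->nChannels);
        cvReleaseImage(&img);
        return 0;
    }

If this prints the image size when run on the board, the OpenCV port and the toolchain are wired up correctly and you can move on to your real code.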

Check out Linaro if the processor you are targeting is ARM. It looks promising. http://www.linaro.org/

Related

Portable Linux environment for applications

Is there any application (maybe a VM) that runs Linux-compatible compiled programs (like a web service) on other platforms (like Windows) as if they were native applications?
For example, executing a C-coded web service under that application like a native Linux program, but with an extra layer.
I think it would have to be an x86 VM like QEMU, but that is heavy and complicated.
My problem is that I wrote an application in C for Linux, but now I want to run it on other platforms without rebuilding it or using Cygwin.
You could look into Docker. As far as I know, it is possible to run Docker images that were built on Linux on Windows.
I found an incomplete solution that is the best match for my problem:
flinix
I hope this awesome project gets more contributions.
Software like Wine has been made for Linux to run Windows applications. You can try http://www.andlinux.org/ to run Linux apps on Windows.
You can use VirtualBox or VMware and run a Linux VM, or use Cygwin or MinGW (similar to Cygwin). Maybe try Babun, which is really easy, but I'm not sure it would run a Linux app...

In-built Linux application with the kernel image or boot.img

How do I make an application built in (like top, vi, etc.) so that it is placed inside /system/bin automatically on flashing the kernel and can be accessed from the command prompt?
I tried modifying the Makefile for my application by looking at the top utility as an example, but could not find my application under /system/bin.
I am not sure whether I have included the source files in the Makefile correctly.
You need to start with something that the manufacturer provided. Presumably it's a dev kit or something. Most modern dev kits ship with a manufacturer-provided development environment, kernel, sources, etc. Many are based on Yocto Linux.
You can't just compile a binary locally on your PC with whatever version of GCC you have and expect it to work in an embedded environment. Chances are it's a different architecture (ARM or Freescale or something). There are ways to cross-compile, but there is some setup involved. Read about cross compiling here: http://en.wikipedia.org/wiki/Cross_compiler
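To make the cross-compiling point concrete, here is a minimal sketch. The compiler prefix in the comment is only an illustration; the real prefix depends on the toolchain that ships with your dev kit.

    /* hello_target.c - trivial tool for testing the cross toolchain.
     * Built with the host gcc it only runs on your PC; to run on the ARM
     * target it must be built with the kit's cross compiler, for example
     * (the "arm-linux-gnueabi-" prefix is just an illustration):
     *   arm-linux-gnueabi-gcc -static hello_target.c -o hello_target
     */
    #include <stdio.h>

    int main(void)
    {
        printf("hello from the target\n");
        return 0;
    }

Running the file command on the resulting binary on your host shows whether it was produced for ARM or for your PC's architecture, which is a quick way to catch an accidental native build.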
There are development and packaging environments developed by the community, but they are not for the faint of heart. In short, start reading: https://www.yoctoproject.org/

Cross compiling with existing rootfs and external toolchain (buildroot? qemu?)

I am working on an ARM embedded platform based on the Cortex-A9, very similar to the HummingBoard (http://www.solid-run.com/products/hummingboard/).
I am porting over some of our software that was previously running on a BeagleBone. Our software is Python-based but uses ctypes, an internal C library, several Python modules, and a RabbitMQ server. On the BeagleBone, setting this up was easy because there is a lot of support and there are Ubuntu-based distros that make it simple to install packages.
I have a Linaro cross compiler and a U-Boot and rootfs image given to us by the manufacturer of the platform. Manually cross compiling and building all of our necessary dependencies is turning into quite a headache, as everything has its own little quirks. I do not have a native development toolchain that can run on the ARM device.
I am looking for a simpler way to do some of these tasks. Buildroot sounds like exactly what I need, but I am not sure how I would make it work with an already existing rootfs and toolchain. Unfortunately, I don't know all the details of the rootfs and how the hardware is brought up, so I don't think I can replicate the settings exactly using buildroot.
Another option I was looking into was somehow using the rootfs with QEMU and building a native toolchain to run on it, which would allow me to manually build the dependencies without needing to deal with the headaches of cross compiling.
Any help is very much appreciated. Thanks.
Buildroot is designed to generate an entire rootfs, not to "complement" an existing one. So if you were to use Buildroot, you should get rid of the existing root filesystem, and use the new one generated by Buildroot.
Also, note that if you were happy with the Debian distro running on your BeagleBone, you can also run Debian on your Hummingboard.

Learning Linux Kernel programming on a virtual machine on Ubuntu?

I am just learning Linux kernel programming with the Linux Kernel Development book (I am a beginner at Linux kernel programming but not at Linux programming). Is it possible to test programs in a VMware virtual machine on Ubuntu without damaging my system?
Yes, you can safely test kernel modules on a virtual machine!
I'll give you some links that may help.
Check out this site:
http://free-electrons.com/
In particular, this book:
http://free-electrons.com/doc/books/ldd3.pdf
Also this guide:
http://www.tldp.org/HOWTO/Module-HOWTO/
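If you want something concrete to try inside the VM, here is a minimal hello-world module along the lines of what LDD3 opens with; the module name and the build commands in the comment are just the usual conventions, not anything specific to your setup.

    /* hello.c - minimal loadable kernel module to experiment with in a VM.
     * Build it against the guest's kernel headers with the usual two-line
     * kbuild Makefile:
     *   obj-m += hello.o
     *   all:
     *       make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
     * Then: sudo insmod hello.ko ; dmesg | tail ; sudo rmmod hello
     */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Hello-world module for testing inside a virtual machine");

    static int __init hello_init(void)
    {
        /* Runs when the module is loaded with insmod. */
        printk(KERN_INFO "hello: module loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        /* Runs when the module is removed with rmmod. */
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

If the module misbehaves or the kernel oopses, only the guest is affected; a snapshot of the VM makes recovery a one-click operation.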
An embedded distro is even better
An Ubuntu guest is fine, but I prefer to keep things minimal and use an embedded distro, as this will make things:
simpler and easier to understand and control
faster
In particular, I recommend using:
Buildroot, which is highly configurable, documented and maintained; it also builds a host QEMU, so it is easy to patch (e.g. to add your own devices, since out-of-tree devices are not possible yet?)
QEMU emulator: small comprehensible source, ARM support, official Android emulator, kernel GDB support
Embedded distros can generate rootfs images smaller than 10MiB, and it becomes possible to understand the entire userland setup, which will make it easier to focus on the kernel.
I have made a setup to make everything as automated as possible: https://github.com/cirosantilli/linux-kernel-module-cheat
I've been using a VM for Linux kernel programming for a long time and I've never had any problem. Actually, if you manage to violate the protections of a VM then you will probably be hired by Oracle or VMware :D
However, I recommend you to read this post: https://security.stackexchange.com/questions/23452/is-it-safe-to-use-virtual-machines-when-examining-malware

OpenCL: which SDK is best?

I am a beginner in OpenCL programming. My PC runs Windows 8.1 with both Intel graphics and an AMD Radeon 7670. When I searched for an OpenCL SDK and sample hello-world programs to download, I found that there are separate SDKs and programs in entirely different formats. I have to use C, not C++. Can anyone suggest which SDK I should install? Please help.
At the lowest level, the various OpenCL SDKs are the same; they all include cl.h from the Khronos website. Once you've included that header you can write to the OpenCL API, and then you need to link to OpenCL.lib, which is also supplied in the SDK. At runtime, your application will load the OpenCL.dll that your GPU vendor has installed in /Windows/System32.
Alternatively, you can include cl.hpp and use the C++ wrapper, but since you said you're a C programmer, and because most of the books use the C API, stick with cl.h. I think this might account for the "programs in entirely different formats" observation you made, which is why I bring it up here.
The benefit of one SDK over another typically is for profiling and debugging. The AMD SDK, for example, includes APP Profiler (or now CodeXL) which will help you figure out how to make your kernels faster. NVIDIA supplies Parallel Nsight for the same purpose, and Intel also has performance tools.
So you might choose your SDK based on the hardware in your machine, but understand that once you've coded to the OpenCL API, your application can run on other GPUs from other vendors -- that is the benefit of OpenCL. You should even be able to get samples from one vendor to execute on hardware from another.
One thing to be careful of is versions: If you code to an OpenCL 1.2 SDK you might not run on OpenCL 1.1 hardware.
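As a concrete starting point, here is a minimal sketch in C that should build against any of the SDKs, since they all ship the same cl.h; link it against OpenCL.lib on Windows (or -lOpenCL on Linux).

    /* list_platforms.c - enumerate OpenCL platforms and devices using only cl.h. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_uint num_platforms = 0;

        /* First call asks how many platforms (vendor implementations) exist. */
        if (clGetPlatformIDs(0, NULL, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
            fprintf(stderr, "no OpenCL platforms found\n");
            return 1;
        }

        cl_platform_id platforms[8];
        if (num_platforms > 8)
            num_platforms = 8;
        clGetPlatformIDs(num_platforms, platforms, NULL);

        for (cl_uint i = 0; i < num_platforms; ++i) {
            char name[256];
            cl_uint num_devices = 0;

            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
            clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);

            printf("platform %u: %s (%u devices)\n", i, name, num_devices);
        }
        return 0;
    }

On a machine with both Intel and AMD drivers installed, the same binary should list both platforms, which illustrates the point above: the code is written once against cl.h and the vendor implementations are picked up at runtime.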
For me, the best thing about OpenCL is that you do not need an SDK at all, because it abstracts the different vendor implementations behind a common interface (see the answer in this thread: Do I really need an OpenCL SDK?).

Resources