Does it support cards with a plain image, i.e. non-embossed cards?

Trying to scan a card with non-embossed numbers (a plain-image card) fails; the card is not scanned. I wanted to know whether non-embossed cards are supported as well.

There is an issue reported on GitHub which says that non-embossed cards are not supported.
For now, the only workaround suggested there is a separate "library", a kind of extension that helps it read non-embossed cards.
Here is the issue:

No, only embossed cards are supported.
The project is not under active development.


How to embed the camera screen in part of the main form?

I'm having some trouble with the camera feature in Codename One.
I can use the camera full screen as follows:
This opens the full camera screen; however, I want it embedded in a specific part (e.g. a square) of the main form.
I've heard a native interface is needed for this, but I couldn't find detailed source code.
Is there any way to do this, or any sample source code?
(PS: I'm looking for a solution for iOS/Android.)
This is now possible with a new cn1lib:
Original answer below:
Codename One didn't support z-ordering of peer components until recently, so this wasn't reasonably possible until a couple of months ago, when we introduced that feature.
It is now available on all supported Codename One platforms, so it should be possible to create a cn1lib that lets you do just that.
We hope to build such a cn1lib ourselves, but with our current workload I'm not sure when we'll get around to it.


Support for Non-embossed or Printed cards

I am trying to integrate the SDK in my Android app. I was wondering whether it supports non-embossed cards as well, since this is very important for the kind of app I am working on.
I went through some old posts where it was mentioned that this is not supported yet. I just wanted to know:
With the latest release, 5.2.0, is there any support for printed cards?
Are there any plans to support non-embossed cards in the near future?
Unfortunately, printed/non-embossed cards are not supported as of Android SDK 5.2 (nor as of iOS SDK 5.3).
The feature requires a non-trivial amount of work, but if you're interested in adding it, you can look into the source repositories (Android, common dmz) and try to get a few people on GitHub to help contribute.

Ionic Framework different image for device

I am developing an iOS and Android app with Ionic. My problem is that different devices have different densities. How do you handle displaying images? Do you just create one image file that is used on all device screens?
The short answer is that yes, you use one high-quality image and let the browser on the device do the downscaling.
The slightly longer answer is that you can do some optimizations, because you're locked into a fairly tight set of standards-compliant browsers, so you'll want to follow best practices for those. By that I mean: if you have graphics in vector format, great, leave them as SVGs if you can. If you have raster images, supply them at a resolution that looks great on your biggest retina display, and use media queries to adjust where necessary.
You can also use CSS tricks to replace images, by hiding and showing the appropriate image at the appropriate time.
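As a sketch of the media-query approach: a query keyed on device-pixel-ratio can serve a denser asset to high-DPI screens while keeping the same CSS-pixel size (the file names, sizes, and breakpoint here are illustrative assumptions, not from any particular app):

```css
/* Default: standard-density asset */
.hero {
  background-image: url("img/hero.png");      /* illustrative path */
  background-size: 320px 180px;               /* CSS pixels, not device pixels */
}

/* High-DPI (retina) screens get a 2x asset scaled to the same CSS size */
@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
  .hero {
    background-image: url("img/hero@2x.png"); /* illustrative path */
  }
}
```

The same hide/show idea works for the "CSS tricks" mentioned above: declare both images in the markup and toggle `display: none` on the one that doesn't match the current media query.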

Difference between Flash and Microsoft Silverlight

Can anyone explain the difference between Flash and Silverlight to me?
I understand Flash is a program which can export SWF or FLV formats for users to play on the web using Flash Player. How does Silverlight compare to this? Is it a player or a development tool? Also, does it export video file types such as those Netflix uses?
There are too many differences to list. I have worked with both, and Flash is my favorite; the Wikipedia comparison covers the differences. If you have a specific question about a particular feature you'd like compared, that would be easier to answer.
The differences are many, but the main one is that they are competing products from competing companies (Microsoft Silverlight vs. Adobe Flash). Both are falling out of favor, but they continue to be used for functions that have not yet been fully replaced by open standards, such as video codec support and DRM.

What technology is used to build multitouch applications?

Can anyone provide any details, code snippets, examples, etc. of how to go about building something as cool as this "Rock Wall" that Obscura Digital built?
Let's just pretend we have access to whatever technology is required. Where do I start?
My understanding of the question is: what kind of libraries are available for multitouch (specifically in .NET)?
If that's the case, see here and here. Neither actually requires a Microsoft Surface table, but both do require multi-touch-compatible hardware and Windows 7.
Have a look here:
TouchKit: the open source, multi-touch screen developer's kit
You need a large piece of plexiglass, an infrared webcam, a projector, several infrared LEDs, and some sort of translucent material like butcher paper.
The infrared LEDs are placed along the outer edge of the glass and illuminate it with infrared light. The projector takes the computer image and projects it onto the butcher paper, which is mounted behind the glass. When you touch the glass with your finger, it produces an image in the infrared camera that can be processed by the computer.
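The processing step above amounts to finding bright blobs in the IR camera frame: each fingertip lights up as a cluster of bright pixels whose centroid is the touch point. Here is a minimal, dependency-free sketch of that step, assuming the frame arrives as a 2-D array of 8-bit grayscale values; the threshold value and frame layout are illustrative assumptions, not part of TouchKit itself.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class BlobDetector {
    /** A detected touch point: the centroid of one connected bright region. */
    public static final class Blob {
        public final double cx, cy; // centroid in pixel coordinates
        public final int size;      // number of pixels in the region
        Blob(double cx, double cy, int size) { this.cx = cx; this.cy = cy; this.size = size; }
    }

    /** Finds 4-connected regions of pixels at least as bright as `threshold` (0-255). */
    public static List<Blob> detect(int[][] frame, int threshold) {
        int h = frame.length, w = frame[0].length;
        boolean[][] seen = new boolean[h][w];
        List<Blob> blobs = new ArrayList<>();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (seen[y][x] || frame[y][x] < threshold) continue;
                // Breadth-first flood fill over bright neighbours of this seed pixel
                long sumX = 0, sumY = 0;
                int count = 0;
                ArrayDeque<int[]> queue = new ArrayDeque<>();
                queue.add(new int[]{x, y});
                seen[y][x] = true;
                while (!queue.isEmpty()) {
                    int[] p = queue.poll();
                    sumX += p[0]; sumY += p[1]; count++;
                    int[][] nbrs = {{p[0]+1, p[1]}, {p[0]-1, p[1]}, {p[0], p[1]+1}, {p[0], p[1]-1}};
                    for (int[] n : nbrs) {
                        if (n[0] >= 0 && n[0] < w && n[1] >= 0 && n[1] < h
                                && !seen[n[1]][n[0]] && frame[n[1]][n[0]] >= threshold) {
                            seen[n[1]][n[0]] = true;
                            queue.add(n);
                        }
                    }
                }
                blobs.add(new Blob((double) sumX / count, (double) sumY / count, count));
            }
        }
        return blobs;
    }
}
```

A real setup would also subtract a background frame (ambient IR) before thresholding and track blob identities across frames so that each finger keeps a stable ID while dragging.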
The whole thing can be built for less than $1,000. TouchKit offers everything but the projector in pre-assembled form for $1,580, including shipping.