What is a low power consumption Java Card for mobile?

Hi, I am new to Java Card development. I want to build an application that encrypts data coming from a mobile device (mobile phone) over NFC and sends it back to the device. I couldn't find any documentation about Java Card power consumption, which matters because mobile phones have limited power. So I want to know how Java Card power consumption works, and what the best low-power contactless Java Card is.
Thanks

Java Card is not a chip by itself; it is only an on-card runtime with a standardized API, so the power consumption is completely dependent on the implementation.
To comply with ISO 14443, the chip plus antenna combination has to conform to certain characteristics. These include the frequency range but also power consumption. Normally the card should be readable up to about 7 cm, but with weak readers the distance is usually around 5 cm. Strangely enough, some chip/antenna combinations show "holes" at certain distances where they stop operating.
Note that the amount of energy that can be used depends a lot on the distance; in general, the closer the card, the more power is available. Most smart cards work at higher speeds when they are able to receive more power.
Personally I like NXP because they seem to have few implementation mistakes. I also like ISO 14443 type A slightly better than type B. But this is personal preference and the landscape is changing fast. Furthermore, it is only possible to measure the performance of the combination of chip, packaging and antenna together.

Related

Differences in volume of audio content

We are having a Skill built for us which plays podcasts and audio snippets of videos. We currently serve all of this content through other traditional platforms like native mobile apps and a website.
One problem we have is that the volume of all this audio content is much lower than the rest of the sound output from the Alexa device. We don't notice any similar discrepancies on the other aforementioned platforms, and the developers building the Skill say that there's no API which allows you to boost or manipulate the output volume (as opposed to the system volume).
Has anyone else had experiences with this sort of issue? We are reluctant to pump up the volume of our source files as it will affect all the other places they are listened to.
Short answer - yes it's tricky and you're not alone.
As detailed in this BBC report* on designing a voice application - "All of our experts have experienced problems in audio levels when mixing and mastering audio for smart speakers."
Official guidance from Amazon is:
Program loudness for Alexa should average -14 dB LUFS/LKFS
The true-peak value should not exceed -2 dBFS
It then goes so far as to say that,
Your skill may be rejected if program loudness:
Is lower than -19 dB LUFS
Is higher than -9 dB LUFS
However, this mainly seems to apply to audio played either as part of a Flash Briefing or using the AudioPlayer Interface, and you may get away with deviating from this if using it as part of SSML output.
That said, in practice developers tend to bump the levels up on audio until it sounds good. Sticking strictly to the recommendations usually renders the volume way too low.
So, if you don't want to increase the loudness of the clips because it would affect other platforms, your best option might be to create an Alexa-specific version of the audio, mixed slightly louder. Though I recognize that this would be tedious.
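If you do go down that route, the re-mix can be scripted rather than done by hand. Here's a minimal sketch using the pyloudnorm and soundfile Python packages to bring a clip to the recommended -14 LUFS; the file names are just placeholders, and you'd still want to verify the true peak stays under -2 dBFS afterwards (e.g. with a limiter in your mastering chain):

import soundfile as sf
import pyloudnorm as pyln

# Load the source clip (placeholder file name).
data, rate = sf.read("podcast_clip.wav")

# Measure integrated loudness with an ITU-R BS.1770 meter.
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)

# Apply the gain needed to reach Amazon's recommended -14 LUFS.
normalized = pyln.normalize.loudness(data, loudness, -14.0)

sf.write("podcast_clip_alexa.wav", normalized, rate)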
*In the interest of transparency, I wanted to declare that I fed into this report.

How computationally expensive is running data through a neural network? Can smartphones or Raspberry Pis do it?

I have only a little background knowledge about neural networks (NNs).
However, so far I have learnt that training the network is the really expensive part, and that processing data with an already trained network is much cheaper and faster.
Still, I'm not entirely sure what the expensive parts are within the processing chain. As far as I know, it's mostly matrix multiplication for standard layers. That is not the cheapest operation, but it is definitely doable. On top of that, there are other layers, like max pooling, or activation functions at each node, which might have higher complexities. Are those the bottlenecks?
Now, I wonder whether the "simple" hardware found in smartphones, or even cheap stand-alone hardware like a Raspberry Pi, is capable of running a (convolutional) neural network to do, for example, image processing tasks like object detection. Of course, I mean doing the calculations on the device itself, not transmitting the data to a second, powerful machine or a cloud service that does the calculations before sending the results back to the smartphone.
If so, roughly how large can such a network be (e.g. how many layers and how many neurons per layer)? And finally, are there any good projects or libraries that use NNs on such reduced, simpler hardware?
Current neural networks use convolutional layers, which perform a convolution on the input image. The large number of parameters and dimensions is also a real problem for low-budget hardware. Nevertheless, there are approaches that work on newer Android smartphones, such as SqueezeNet. Much of the work is done on GPUs nowadays, so I am not sure whether it works on a Raspberry Pi.
A better description than I could ever write on the topic can be found here: https://hackernoon.com/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3?gi=adb83ae18a85, where they actually built a neural network for a mobile phone. You can download the app and try it on your phone if you have Android or iOS.
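To give a concrete feel for what on-device inference looks like, here is a minimal sketch using the TensorFlow Lite interpreter, which runs on a Raspberry Pi (and has Java/Kotlin bindings for Android). The model file name and the random input are assumptions for illustration; you would feed whatever converted .tflite model and real image data you have:

import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

# Load a converted model (placeholder file name).
interpreter = Interpreter(model_path="mobilenet_v1.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake input with the shape the model expects (e.g. 1x224x224x3).
image = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class:", int(np.argmax(scores)))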
There is a lot of research going on in this area. Roughly two lines of work deal with it:
Efficient network architectures
Some post-processing for an already trained model
You can have a look at ICNet Figure 1, where some architectures for fast inference for semantic segmentation are shown. Many of these models can be tweaked to do classification or other image processing tasks in real time. These models all have a low number of parameters compared to other networks and can be evaluated on embedded platforms.
For "post-hoc" optimizations you can look at TensorFlows Graph Transform Tool that do many of such for you: Graph Transform Tool
or maybe look into the paper by Song Han Deep Compression where many of these ideas are described. Song Han has also given many great lectures in this area and you can e.g. find one in the CS231n class at Stanford.
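A much simpler cousin of the quantization step in Deep Compression is TensorFlow's built-in post-training quantization, which shrinks a trained model in a few lines. A rough sketch, assuming you have a SavedModel on disk (the paths are placeholders):

import tensorflow as tf

# Convert a trained SavedModel to TensorFlow Lite with default
# post-training quantization (weights stored as 8-bit integers).
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)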
The speed of the inference phase depends on a lot of things other than the number of parameters or neurons, so I don't think there is a rule of thumb for how many neurons is the maximum.
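To illustrate why parameter count alone is a poor proxy, here is a back-of-the-envelope multiply-accumulate (MAC) count comparing a convolutional layer with a fully connected layer; the layer sizes below are purely illustrative:

def conv2d_macs(h_out, w_out, c_in, c_out, k):
    # Each output element needs k*k*c_in multiply-accumulates.
    return h_out * w_out * c_out * k * k * c_in

def dense_macs(n_in, n_out):
    return n_in * n_out

# A 3x3 conv with 64 filters on a 224x224 RGB image: few parameters, many MACs.
print(conv2d_macs(224, 224, 3, 64, 3))   # ~86.7 million MACs, ~1.7k weights
# A 4096x4096 fully connected layer: many parameters, fewer MACs.
print(dense_macs(4096, 4096))            # ~16.8 million MACs, ~16.8M weights

The conv layer here does roughly five times the arithmetic of the dense layer with about a ten-thousandth of the weights, which is why memory footprint and runtime have to be judged separately on embedded hardware.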

How to know if HW/SW codesign will be useful for a specific application?

I will be in my final year (Electrical and Computer Engineering) next semester and I am searching for a graduation project in embedded systems or hardware design. My professor advised me to pick an existing system and try to improve it using hardware/software codesign, and he gave me the example of an automated license plate recognition system, where I could use dedicated hardware written in VHDL or Verilog to make the system perform better.
I have searched a bit and found some YouTube videos showing such systems working fine.
So I don't know if there is any room for improvement. How do I know whether certain algorithms or systems are slow and could benefit from codesign?
How do I know whether certain algorithms or systems are slow and could benefit from codesign?
In many cases, this is an architectural question that is only answered with large amounts of experience or even larger amounts of system modeling and analysis. In other cases, 5 minutes on the back of an envelope could show you that a specialized co-processor adds weeks of work but no performance improvement.
An example of a hard case is any modern mobile phone processor. Take a look at the TI OMAP5430. Notice it has at least 10 processors of varying types (the PowerVR block alone has multiple execution units) and dozens of full-custom peripherals. Any time you wish to offload something from the 'main' CPUs, there is a potential bus bandwidth/silicon area/time-to-market cost that has to be considered.
An easy case would be something like what your professor mentioned. A DSP/GPU/FPGA will perform image processing tasks, like 2D convolution, orders of magnitude faster than a CPU. But 'housekeeping' tasks like file management are not something one would tackle with an FPGA.
In your case, I don't think that your professor expects you to do something 'real'. I think what he's looking for is your understanding of what CPUs/GPUs/DSPs are good at, and what custom hardware is good at. You may wish to look for an interesting niche problem, such as those in bioinformatics.
I don't know what codesign is, but I have done some Verilog before; I think simple image (or signal) processing tasks are good candidates for such embedded systems, because they often involve real-time processing of massive amounts of data (preferably SIMD operations).
Image processing tasks often look easy, because our brain does mind-bogglingly complex processing for us, but they are actually very challenging. I think this challenge is what's important, not whether such a system has been implemented before. I would go with implementing the Hough transform (first for lines and circles, then the generalized one; it's considered a slow algorithm in image processing) and doing some real-time segmentation. I'm sure it will become a challenging task as it evolves.
First thing to do when partitioning is to look at the dataflows. Draw a block diagram of where each of the "subalgorithms" fits, along with the data going in and out. Anytime you have to move large amounts of data from one domain to another, start looking to move part of the problem to the other side of the split.
For example, consider an image processing pipeline which does an edge-detect followed by a compare with threshold, then some more processing. The output of the edge-detect will be (say) 16-bit signed values, one for each pixel. The final output is a binary image (a bit set indicates where the "significant" edges are).
One (obviously naive, but it makes the point) implementation might be to do the edge detect in hardware, ship the edge image to software and then threshold it. That involves shipping a whole image of 16-bit values "across the divide".
Better, do the threshold in hardware also. Then you can ship 8 "1-bit pixels" per byte (or even run-length encode it).
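To put rough numbers on that split (the frame size and rate below are just illustrative assumptions):

# Bandwidth across the HW/SW boundary for the edge-detect example above,
# assuming a hypothetical 640x480 stream at 30 frames per second.
width, height, fps = 640, 480, 30

edge_bytes_per_frame = width * height * 2       # 16-bit signed edge values
binary_bytes_per_frame = width * height // 8    # 1 bit per pixel after threshold

print(edge_bytes_per_frame * fps / 1e6)    # ~18.4 MB/s if thresholding is in software
print(binary_bytes_per_frame * fps / 1e6)  # ~1.2 MB/s if thresholding stays in hardware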
Once you have a sensible bandwidth partition, you have to find out if the blocks that fit in each domain are a good fit for that domain, or maybe consider a different partition.
I would add that in general, HW/SW codesign is useful when it reduces cost.
There are 2 major cost factors in embedded systems:
development cost
production cost
The higher your production volume, the more important production cost becomes, and the less important development cost becomes.
Today it is harder to develop hardware than software, so the development cost of a codesign solution will be higher, which means codesign is useful mostly for high-volume production. However, you need FPGAs (or similar) to do codesign today, and they cost a lot.
That means codesign is useful when the cost of the necessary FPGA is lower than that of an existing solution for your type of problem (CPU, GPU, DSP, etc.), assuming both solutions meet your other requirements. That will mostly be the case for high-performance systems, because FPGAs are costly today.
So, basically, you will want to codesign your system if it will be produced in high volumes and it is a high-performance device.
This is a bit simplified and might become false in a decade or so. There is ongoing research on HW/SW synthesis from high-level specifications, and FPGA prices are falling. That means that in a decade or so codesign might become useful for most embedded systems.
Whatever project you end up doing, my suggestion would be to make both a software version and a hardware version of the algorithm so you can do a performance comparison. You can also compare development time, etc. This will make your project a lot more scientific and helpful for everyone else, should you choose to publish anything. Blindly assuming hardware is faster than software is not a good idea, so profiling is important.

Creating a local CDMA or GSM network?

I am developing a number of different mobile applications for a number of different mobile devices and I want to quickly test in a local development environment. I was wondering if it is possible (with some sort of hardware) to set up a local desktop CDMA/GSM base station for testing devices over a local personal cellular network. The range does not have to be very far. The alternative is purchasing SIM cards and plans for various carriers, but not all carriers/network types are available in our area.
I'm sure I had seen some sort of desktop device that would let you setup local networks for development/testing purposes but can't seem to find it.
Thanks.
About the least expensive such device I know is the Agilent 8960. It's difficult to give an accurate price, since it depends on the options you choose, but you are likely to need to drop about $30,000 or more for a new one.
GSM, GPRS, EDGE, WCDMA, HSPA, CDMA 2000, 1x, EV-DO are all supported, although a box with all of those options in it will be well over the figure I quoted above!
The device has been around for a while though, so you may find something on eBay or via surplus sales and the like.
The upside is that it gives you an enormous amount of control over the cellular environment, and will let you do repeatable throughput tests (something that is really impossible on 'live' networks unless you use statistical techniques and many, many test runs) but the obvious downside is the price!
In a standard network setup you would need an antenna, a BTS, a BSC, an MSC, a GGSN and an SGSN to do data traffic, all horribly expensive and requiring expert knowledge to get up and running.
If you are interested in experiments, then try OpenBSC, although it might be difficult to find BTS hardware.
If you want to buy actual products then have a look at IPaccess. They offer picocell hardware. I am not sure though if their BSC can work without an MSC and SGSN. But still expect a 5-digit price. Tecore also might be worth a visit.
Test and measurement equipment manufacturers might be an alternative as well. There you should check if you actually can branch out the data traffic into the internet or some test server if you need that.
If you want to do this for a living and not for fun, I would assume that simply buying SIMs plus data plans is the cheapest alternative.
You can roll your own cell network with http://openbts.org/, but you still need a development kit (hardware), which is a little expensive. Or you can try to hack your own phone to use it as a radio, which is really difficult but cheap.
The answer to this question may be of interest to you also, depending on what your application does:
Testing Mobile Sites
Essentially there are companies that offer a sort of virtual testing service, allowing you test phones with different location and operator combinations.

Which resolution to target for a Mobile App?

When designing the UI for mobile apps in general, which resolution can be considered safe as a general rule of thumb? My interest lies specifically in web-based apps. The iPhone has a pretty high resolution for a handheld, and the Nokia E Series seems to be oriented differently. Is 240×320 still considered safe?
Not enough information...
You say you're targeting a "Mobile App" but the reality is that mobile could mean anything from a cell phone with 128x128 resolution to a MID with 800x600 resolution.
There is no "safe" resolution for such a wide range, and if you're truly targeting all of them you need to design a custom interface for each major resolution. Add some scaling factors in and you might be able to cut it down to 5-8 different interface designs.
Further, the UI means "User Interface" and includes a lot more than just the resolution - you can't count on a touchscreen, full keyboard, or even software keys.
You need to either better define your target, or explain your target here so we can better help you.
Keep in mind that there are millions of phone users that don't have PDA resolutions, and you can really only count on 128x128 or better to cover the majority of technically inclined cell phone users (those that know there's a web browser in their phone, nevermind those that use it).
But if you're prepared to accept these losses, go ahead and hit for 320x240 and 240x320. That will give you most current PDA phones and up (older blackberries and palm devices had smaller square orientations). Plan on spending time later supporting lower resolution devices and above all...
Do not tie your app to a particular resolution.
Make sure your app is flexible enough that you can deploy new UIs without changing internal application logic - in other words, separate the presentation from the core logic. You will find this very useful later - the mobile world changes daily. Once you gauge how your app is being used you can, for instance, easily deploy an iPhone-specific version that is pixel perfect (and prettier than an upscaled 320x240) in order to engage more users. Being able to do this in a few hours (because you don't have to change the internals) is going to put you miles ahead of the competition if someone else makes a swipe at your market.
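As a minimal sketch of that separation, assuming a Python/Flask web app (the template names and the width cut-off are made up for illustration), the core logic stays identical and only the presentation layer is chosen per device class:

from flask import Flask, render_template, request

app = Flask(__name__)

def load_headlines():
    # Core application logic: identical for every device class.
    return ["Headline one", "Headline two"]

@app.route("/")
def home():
    # Pick a presentation layer from a (hypothetical) width hint; a real app
    # might use User-Agent sniffing or a device capability database instead.
    width = int(request.args.get("w", 240))
    template = "home_pda.html" if width >= 320 else "home_basic.html"
    return render_template(template, headlines=load_headlines())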
-Adam
Right now I believe it would make sense for me to target about two resolutions and later learn my customers' needs through feedback?
It's a chicken and egg problem.
Ideally before you develop the product you already know what your customers use/need.
Often not even the customers know what they need until they use something (and more often than not you find out what they don't need rather than what they need).
So in this case, yes, spend a little bit of time developing a prototype app that you can send out there to a few people and get feedback. They will have better feedback because they can try it out, and you will have a springboard to start from. The ability to quickly release UI updates without changing core logic will allow you test several interfaces quickly without a huge time investment.
Further, to customers you will seem really responsive to their needs, which will be a big benefit to people whose jobs depend on reaction time.
-Adam
You mentioned web-based apps. Do you have any particular framework in mind?
In many cases, WALL seems to help to a large extent.
Here's one article, Adapting to User Devices Using Mobile Web Technology, that exploits WALL.
