Card.io not scanning - card.io

I followed all of the instructions for card.io.
The camera comes up and looks like it's scanning, but it never returns through the delegate. Am I missing something?
Even when I try the example and put in my app token, it doesn't scan. Do I just have to wait a long time for the scan?
Here is my code
.h file
@interface MasterCheckoutCreditCardViewController : UIViewController <CardIOViewDelegate>
.m file
- (void)viewDidLoad {
    [super viewDidLoad];
    if (![CardIOView canReadCardWithCamera]) {
        NSLog(@"troubles in paradise");
    }
    CardIOView *cioView = [[CardIOView alloc] initWithFrame:CGRectMake(0, 100, self.view.frame.size.width, 200)];
    cioView.appToken = @"app_token";
    cioView.delegate = self;
    cioView.guideColor = [UIColor whiteColor];
    [self.view addSubview:cioView];
}

- (void)cardIOView:(CardIOView *)cardIOView didScanCard:(CardIOCreditCardInfo *)cardInfo {
    if (cardInfo) {
        // The full card number is available as cardInfo.cardNumber, but don't log that!
        NSLog(@"Received card info. Number: %@, expiry: %02i/%i, cvv: %@.", cardInfo.redactedCardNumber, cardInfo.expiryMonth, cardInfo.expiryYear, cardInfo.cvv);
        // Use the card info...
    }
}

Dave from card.io here.
Have you tried a variety of credit cards? There are many cards that card.io will not successfully scan. These include the newer style cards, which lack the traditional large, embossed card numbers. Even among traditional cards, there are some whose background color or pattern makes scanning difficult.
I would suggest trying some straightforward, traditional VISA or MasterCard cards to start.

How to store all possible word derivations in a database efficiently?

I have this code which takes short words (4-letter words in this case), breaks them into two (or optionally three) pieces, and then generates words from those parts.
const words = load()
const derivations = words.map(w => analyze(words, w))

console.log(derivations
  .filter(({ list }) => list.length)
  .map(({ word, list }) => `${word} => ${list.map(x => `\n ${x.join(':')}`).join('')}`)
  .join('\n')
)

function analyze(words, word, listSize = 2) {
  const derivations = []
  const permutations = permute([], [], word)
  permutations.forEach(list => {
    if (list.length !== listSize) return
    resolve(words, list[0], list.slice(1)).forEach(derivation => {
      if (derivation.length) {
        derivations.push(derivation)
      }
    })
  })
  return { word, list: derivations }
}

function resolve(words, chunk, remainingList, isFirst = true, result = []) {
  words.forEach(word => {
    if (!word.match(/^[aeiou]/) && word.length === 4 && word.startsWith(chunk) && word !== chunk && word !== [chunk].concat(remainingList).join('')) {
      if (remainingList.length) {
        resolve(words, remainingList[0], remainingList.slice(1), false).forEach(array => {
          if (word !== array[0]) {
            result.push([word].concat(array))
          }
        })
      } else {
        result.push([ word ])
      }
    }
  })
  return result
}

// https://stackoverflow.com/questions/34468474/make-all-possible-combos-in-a-string-of-numbers-with-split-in-javascript
function permute(result, left, right) {
  result.push(left.concat(right));
  if (right.length > 1) {
    for (var i = 1; i < right.length; i++) {
      permute(result, left.concat(right.substring(0, i)), right.substring(i));
    }
  }
  return result;
}
function load() {
  return `arch
back
bait
bake
ball
band
bank
bark
base
bash
bead
bear
beat
bell
belt
bend
best
bike
bill
bind
bird
bite
blob
blot
blow
blue
blur
boat
bolt
bond
book
boom
boot
boss
bowl
brew
brim
buck
buff
bulb
bulk
bull
bunk
burn
bush
bust
buzz
cake
call
calm
cane
card
care
cart
case
cash
cast
cave
char
chat
chew
chow
cite
clam
claw
clue
coal
coat
code
coin
comb
cone
cook
cool
cord
cork
cost
crab
crew
crib
crow
cube
cuff
cull
cure
curl
dare
dart
dash
date
daub
dawn
daze
deck
deed
dent
dice
diff
dish
disk
dive
dock
dole
doll
doom
dose
down
draw
drum
duck
duct
dude
duel
duke
dunk
dusk
dust
ease
etch
face
fade
fake
fare
farm
fast
fate
fawn
fear
feed
feel
file
fill
film
find
fine
fish
fizz
flat
flaw
flex
flow
flux
foam
fold
fool
fork
form
foul
free
fret
fuel
fume
fuse
fuss
fuzz
hack
haft
halt
hand
hare
harm
hash
haul
have
hawk
haze
head
heat
herd
hide
hike
hill
hint
hiss
hive
hold
hole
home
honk
hood
hoof
hook
hoot
horn
hose
host
howl
huff
hunt
hurt
hush
inch
keel
kern
kick
kill
kiln
kink
kiss
kite
knee
knit
knot
lace
lack
lake
land
lash
laze
lead
leaf
leak
lick
like
line
link
lint
list
load
loaf
loan
lock
look
loom
loot
lord
love
lull
lure
make
mark
mash
mask
mass
mate
maze
mean
meet
melt
mesh
mess
milk
mill
mime
mind
mine
mint
miss
mist
moan
mock
mold
molt
moon
moss
move
muse
mush
must
name
need
nerd
nest
nick
nose
note
null
oink
ooze
race
rack
raft
rake
rank
rant
rate
rave
read
reek
reel
rent
rest
ride
riff
rift
rise
risk
roar
robe
rock
role
roll
room
rose
rule
rush
rust
sack
salt
save
scam
scan
scar
seam
seat
seed
seek
self
sell
shed
shim
shoe
show
side
sift
silt
sink
site
size
skew
skid
skim
skin
slab
slam
sled
slot
snow
soak
sole
soot
sort
star
stem
stew
stir
stub
suck
suit
surf
swan
swim
take
talk
task
team
tear
tell
term
test
text
thaw
thud
tick
tide
tilt
time
toke
toll
tone
tool
toot
toss
tote
tour
tree
trek
trim
trot
tube
tuck
tune
turn
twin
vibe
view
void
vote
waft
wait
wake
walk
wall
want
wash
wave
wear
weed
weld
well
will
wind
wink
wish
wolf
word
work
worm
zone
zoom`.split(/\n+/)
}
Its output (for debugging purposes) looks like this:
bake =>
back:keel
back:kern
bait:keel
bait:kern
ball:keel
ball:kern
band:keel
band:kern
bank:keel
bank:kern
bark:keel
bark:kern
base:keel
base:kern
bash:keel
bash:kern
band =>
bank:dare
bank:dart
bank:dash
bank:date
bank:daub
bank:dawn
bank:daze
bank:deck
bank:deed
You'll notice that sometimes over 20 words appear on the left (nested, like bank appearing 20+ times). There are only some 400 words, but the combinations number over 8,000. If I have a list of 10,000 words, with longer words and a variable number of breakdowns instead of just 2 parts, it grows to enormous proportions.
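The blow-up is inherent in the split step: a word of length n has 2^(n-1) ordered splits (permute pushes one list per way of cutting between letters), and every 2-part split of every word is then matched against the whole word list in resolve. A quick sanity check, using a copy of the permute function from the code above:

```javascript
// Copy of the question's permute helper, for illustration only.
function permute(result, left, right) {
  result.push(left.concat(right));
  if (right.length > 1) {
    for (let i = 1; i < right.length; i++) {
      permute(result, left.concat(right.substring(0, i)), right.substring(i));
    }
  }
  return result;
}

const splits = permute([], [], 'bake');
console.log(splits.length); // 8 splits in total: 2^(4-1)
console.log(splits.filter(list => list.length === 2));
// -> [ [ 'b', 'ake' ], [ 'ba', 'ke' ], [ 'bak', 'e' ] ]
```

So the split count itself is modest for short words (n-1 two-part splits each); the expensive part is that resolve rescans the full word list for every chunk of every split, which is roughly O(words × splits × words) work.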
What if I wanted to store every one of those derivations in a relational database? First, would a relational database be effective in this scenario? Second, what is a standard data structure that could represent this efficiently, both in memory (sometimes my solution runs out of memory) and in disk storage size if it were a DB? I can break these conceptually into models, such as a "Word" model and a "WordDerivation" model, etc. But if each of those were a table, there would potentially be tens of millions of rows just for this. How does a company like Google implement something like this robustly?
Currently I am working on the data-model side of an app side project. I want to show a word and have it list the derivations for that word. A naive solution would be to search a 100,000-word list for potential matches on every request, but that is basically what I am precomputing here, and it takes some time. So I am wondering how you might break it down for a database.
I am used to Rails models, but they seem quite heavy for this scenario; I am not sure. It seems like I would need gigabytes to store this.
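For the storage question, one standard approach is to normalize: store each word once with an integer id, and store each derivation as a row of integer foreign keys rather than repeated strings. The table and column names below (words, derivations, word_id, part1_id, part2_id) are illustrative assumptions, not a prescribed schema; this sketch uses plain JavaScript structures to mirror what the two tables would hold:

```javascript
// Sketch: normalize words into an id table and store derivations as
// integer id tuples, mirroring a two-table relational schema:
//   words(id, text)
//   derivations(word_id, part1_id, part2_id)
// Names and sample data here are illustrative assumptions.

const words = ['bake', 'back', 'keel', 'kern'];

// "words table": text -> id (the array index doubles as id -> text)
const wordId = new Map(words.map((w, i) => [w, i]));

// "derivations table": rows of [word_id, part1_id, part2_id]
const derivations = [
  [wordId.get('bake'), wordId.get('back'), wordId.get('keel')],
  [wordId.get('bake'), wordId.get('back'), wordId.get('kern')],
];

// "All derivations of bake" becomes an indexed lookup on word_id:
const ofBake = derivations
  .filter(([w]) => w === wordId.get('bake'))
  .map(([, a, b]) => `${words[a]}:${words[b]}`);

console.log(ofBake); // -> [ 'back:keel', 'back:kern' ]
```

With integer ids, each derivation row costs a few bytes instead of several repeated strings, and the per-word query is an index lookup on word_id. Tens of millions of such rows are well within what a relational database handles comfortably, so the row count alone is not a reason to avoid one.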

ARKit - ARReferenceImage tracking

I'm playing around with ARReferenceImages in ARKit. I'm trying to add an SCNNode when a reference image is recognised, and then leave that node in place regardless of whether the same reference image is later recognised elsewhere.
I can add my SCNNode correctly, but if I move my marker, it picks the marker up again and moves my placed node to the marker's new position.
My code to add is as follows:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    print("MAPNODE IS NIL = \(self.mapNode == nil)")
    updateQueue.async {
        if self.mapNode == nil {
            // Create a plane to visualize the initial position of the detected image.
            let plane = SCNPlane(width: 1.2912, height: 1.2912)
            let planeNode = SCNNode(geometry: plane)
            planeNode.opacity = 1
            /*
             `SCNPlane` is vertically oriented in its local coordinate space, but
             `ARImageAnchor` assumes the image is horizontal in its local space, so
             rotate the plane to match.
             */
            planeNode.eulerAngles.x = -.pi / 2
            self.mapNode = planeNode
            /*
             Image anchors are not tracked after initial detection, so create an
             animation that limits the duration for which the plane visualization appears.
             */
            // Add the plane visualization to the scene.
            node.addChildNode(planeNode)
        }
    }
}
Reading the docs here https://developer.apple.com/documentation/arkit/recognizing_images_in_an_ar_experience#2958517, it states:
Apply Best Practices
This example app simply visualizes where ARKit detects each reference
image in the user’s environment, but your app can do much more. Follow
the tips below to design AR experiences that use image detection well.
Use detected images to set a frame of reference for the AR scene.
Instead of requiring the user to choose a place for virtual content,
or arbitrarily placing content in the user’s environment, use detected
images to anchor the virtual scene. You can even use multiple detected
images. For example, an app for a retail store could make a virtual
character appear to emerge from a store’s front door by recognizing
posters placed on either side of the door and then calculating a
position for the character directly between the posters.
Note
Use the ARSession setWorldOrigin(relativeTransform:) method to
redefine the world coordinate system so that you can place all anchors
and other content relative to the reference point you choose.
Design your AR experience to use detected images as a starting point
for virtual content. ARKit doesn’t track changes to the position or
orientation of each detected image. If you try to place virtual
content that stays attached to a detected image, that content may not
appear to stay in place correctly. Instead, use detected images as a
frame of reference for starting a dynamic scene. For example, your app
might recognize theater posters for a sci-fi film and then have
virtual spaceships appear to emerge from the posters and fly around
the environment.
So I tried setting my world origin to the transform of my image anchor:
self.session.setWorldOrigin(relativeTransform: imageAnchor.transform)
However, my mapNode follows the imageAnchor wherever it moves. I haven't implemented the renderer update method, so I'm not sure why this keeps moving.
I'm assuming that the setWorldOrigin method is constantly updating to the imageAnchor.transform, rather than using just that moment in time, which is weird, as that code is only called once. Any ideas?
If you want to add the mapNode at the position of the ARImageAnchor, you could set your mapNode's position to the transform of the ARImageAnchor and add it to the scene, but not linked to the reference image, if that makes sense.
This could be done like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
    //2. An ImageAnchor Is Only Added Once For Each Identified Target
    print("Anchor ID = \(currentImageAnchor.identifier)")
    //3. Add An SCNNode At The Position Of The Identified ImageTarget
    let nodeHolder = SCNNode()
    let nodeGeometry = SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    nodeHolder.geometry = nodeGeometry
    nodeHolder.position = SCNVector3(currentImageAnchor.transform.columns.3.x,
                                     currentImageAnchor.transform.columns.3.y,
                                     currentImageAnchor.transform.columns.3.z)
    augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)
}
In another part of your question you seem to imply that you want to detect multiple occurrences of the same image. I could be wrong but I think the only way to do this is to remove the corresponding ARImageAnchor for the reference image, which can be done like so (by adding it at the end of the last code snippet):
augmentedRealitySession.remove(anchor: currentImageAnchor)
The issue here is that once the ARImageAnchor is removed, you have to decide, each time it is detected again, whether content should be added. That is tricky because the ARImageAnchor.identifier is always the same for a given referenceImage, regardless of whether the anchor is removed and then re-added, which makes it difficult to store in a dictionary, etc. Depending on your needs, you would then have to find a way to determine whether content already exists at that location and whether to re-add it.
The last part of your question about setWorldOrigin seems a bit odd like you said, but maybe you could add a Bool to prevent it from potentially changing e.g:
var hasSetWorldOrigin = false
Then based on this you could ensure that it is only set once e.g:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
    //2. If We Haven't Set The World Origin, Set It Based On The ImageAnchor Transform
    if !hasSetWorldOrigin {
        self.augmentedRealitySession.setWorldOrigin(relativeTransform: currentImageAnchor.transform)
        hasSetWorldOrigin = true
        //3. Create Two Nodes To Add To The Scene And Distribute Them
        let nodeHolderA = SCNNode()
        let nodeGeometryA = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryA.firstMaterial?.diffuse.contents = UIColor.green
        nodeHolderA.geometry = nodeGeometryA
        let nodeHolderB = SCNNode()
        let nodeGeometryB = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryB.firstMaterial?.diffuse.contents = UIColor.red
        nodeHolderB.geometry = nodeGeometryB
        if let cameraTransform = augmentedRealitySession.currentFrame?.camera.transform {
            nodeHolderA.simdPosition = float3(cameraTransform.columns.3.x,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
            nodeHolderB.simdPosition = float3(cameraTransform.columns.3.x + 0.2,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
        }
        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderA)
        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderB)
    }
}
Hopefully my answer provides a useful starting point, assuming of course that I have interpreted your question correctly.

Some items in my iPod library have a NULL assetURL property

When I try to pick an item from my iPod library, some items play and some don't.
Looking at my log, some items have NULL for assetURL.
Why would that be?
All items that are DRM protected return a null assetURL. You can't access such items.
I want to add something to this. There are three reasons that I know of why assetURL will be null:
The item is in the cloud (in which case isCloudItem will return true).
As stated in another answer, the track is DRM protected.
This is the kicker, though: in some cases, an item downloaded on the device, which is not DRM protected, will play in Music (the built-in app), but will still have assetURL returning NULL.
This means that any non-Apple app that uses the MediaPlayer framework may encounter some media items that play in Music but cannot be played in the app. Your end user can usually "fix" this problem by deleting the offending track in Music and downloading it again.
I find that if I download a complete album and see this issue, then downloading the album again (after deleting it) will lead to some different tracks having the problem, so that is not a good way to go.
I have entered an Apple bug report for this (21477730). I also used a DTS incident to ask for a workaround: there is none. If you are encountering this too, please add a "me too" to the bug report. That may increase the chances of a fix.
If you want to try this out for yourself, below is the code that I sent in with the bug report.
MPMediaQuery *allAlbumsQuery = [MPMediaQuery albumsQuery];
NSArray *allAlbumsArray = [allAlbumsQuery collections];
for (MPMediaItemCollection *collection in allAlbumsArray)
{
    NSArray *items = collection.items;
    MPMediaItem *rep = collection.representativeItem;
    NSString *name = rep.albumTitle;
    for (MPMediaItem *item in items)
    {
        NSURL *url = item.assetURL;
        BOOL isCloudItem = item.isCloudItem;
        if (!isCloudItem && (url == nil))
        {
            NSString *albumTitle = item.albumTitle;
            NSString *trackTitle = item.title;
            NSLog(@"****Nil: %@ %@", albumTitle, trackTitle);
        }
    }
}

Synchronously play videos from arrays using actionscript 3

I am developing an AIR application using ActionScript 3.0. The application loads two native windows on two separate monitors. Monitor 1 will play a video, and monitor 2 will synchronously play an overlaid version of the same video (i.e. infrared). Presently, I am hung up on figuring out the best way to load the videos. I'm thinking about using arrays, but am wondering, if I do, how I can link the arrays so that a primary scene is tied to its secondary overlay videos.
Here is my code so far. The first part of it creates the 2 native full-screen windows, and then you'll see where I started coding the arrays at the bottom:
package
{
    import flash.display.NativeWindow;
    import flash.display.NativeWindowInitOptions;
    import flash.display.NativeWindowSystemChrome;
    import flash.display.Screen;
    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageDisplayState;
    import flash.display.StageScaleMode;

    public class InTheAirNet_MultiviewPlayer extends Sprite {
        public var secondWindow:NativeWindow;

        public function InTheAirNet_MultiviewPlayer() {
            // Output screen sizes and positions (for debugging)
            for each (var s:Screen in Screen.screens) trace(s.bounds);

            // Make primary (default) window's stage go fullscreen
            stage.align = StageAlign.TOP_LEFT;
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.displayState = StageDisplayState.FULL_SCREEN_INTERACTIVE;
            stage.color = 0xC02A2A; // red

            // Create fullscreen window on second monitor (check if available first)
            if (Screen.screens[1]) {
                // Second window
                var nwio:NativeWindowInitOptions = new NativeWindowInitOptions();
                nwio.systemChrome = NativeWindowSystemChrome.NONE;
                secondWindow = new NativeWindow(nwio);
                secondWindow.bounds = (Screen.screens[1] as Screen).bounds;
                secondWindow.activate();

                // Second window's stage
                secondWindow.stage.align = StageAlign.TOP_LEFT;
                secondWindow.stage.scaleMode = StageScaleMode.NO_SCALE;
                secondWindow.stage.displayState = StageDisplayState.FULL_SCREEN_INTERACTIVE;
                secondWindow.stage.color = 0x387D19; // green
            }

            // Create array of PRIMARY scenes
            var primary:Array = ["scene1.f4v", "scene2.f4v"];

            // Create array of SECONDARY scenes for scene1
            var secondary1:Array = ["scene1A.f4v", "scene1B.f4v"];

            // Create array of SECONDARY scenes for scene2
            var secondary2:Array = ["scene2A.f4v", "scene2B.f4v"];
        }
    }
}
EDIT: Users will cycle through overlays using LEFT and RIGHT on the keyboard, and will cycle through the scenes using UP and DOWN.
Use a generic Object for each video, store each in an array, and then store the secondary videos in an array within each object:
var videos:Array = [
    {
        primary:'video1.mp4',
        secondary:[
            'overlay101.mp4',
            'overlay102.mp4'
        ]
    },
    {
        primary:'video2.mp4',
        secondary:[
            'overlay201.mp4',
            'overlay202.mp4'
        ]
    },
    {
        primary:'video3.mp4',
        secondary:[
            'overlay301.mp4',
            'overlay302.mp4'
        ]
    }
];
Then when you want to play a video, you can loop through the videos array. Each object has a primary video and its associated secondary videos. You could take it a step further as well, and use the object to store metadata, or store more Objects within the secondary array rather than strings.
EDIT: Quick response to the first comment
In Object-Oriented Programming (OOP) languages, everything is an object. Every single class, every single function, every single loop, every single variable is an object. Classes generally extend a base class, which is called Object in AS3.
The Object is the most basic, most primitive type of object available — it is simply a list of name:value pairs, sometimes referred to as a dictionary. So you can do the following:
var obj:Object = {hello:'world'};
trace(obj.hello); // output 'world'
trace(obj['hello']); // output 'world'
trace(obj.hasOwnProperty('hello')); //output true (obj has a property named 'hello')
What I did was create one of these objects for each video and then save them to an array. So say you wanted to trace out primary and find out how many videos were in secondary, you would do this:
for (var i:int = 0; i < videos.length; i++) {
    var obj:Object = videos[i];
    trace('Primary: ' + obj.primary); // for video1, output 'video1.mp4'
    trace('Secondary Length: ' + obj.secondary.length); // for video1, output 2
}
That should show you how to access the values. A dictionary/Object is the correct way to associate data, and the structure I supplied is very simple but exactly what you need.
What we're talking about is the most basic fundamentals of OOP programming (which is what AS3 is) and using dot-syntax, used by many languages (everything from AS3 to JS to Java to C++ to Python) as a way of accessing objects stored within other objects. You really need to read up on the basics of OOP before moving forward. You seem to be missing the fundamentals, which is key to writing any application.
I can't perfectly understand what you're trying to do, but have you thought about trying a for each loop? It would look something like:
prim1();

function prim1():void
{
    for each (var vid:Video in primary)
    {
        // code to play the primary video here
    }
    for each (var vid:Video in secondary1)
    {
        // code to play the secondary video here
    }
}
Sorry if it's not what you wanted, but what I understood from the question is that you're trying to play both the primary and secondary videos at the same time so that they're in sync. Good luck with your program ^^

iOS 6: Porting iPhone 4 application to iPhone 5 [duplicate]

The new iPhone 5 display has a new aspect ratio and a new resolution (640 x 1136 pixels).
What is required to develop new or transition already existing applications to the new screen size?
What should we keep in mind to make applications "universal" for both the older displays and the new widescreen aspect ratio?
Download and install latest version of Xcode.
Set a Launch Screen File for your app (in the general tab of your target settings). This is how you get to use the full size of any screen, including iPad split view sizes in iOS 9.
Test your app, and hopefully do nothing else, since everything should work magically if you had set auto resizing masks properly, or used Auto Layout.
If you didn't, adjust your view layouts, preferably with Auto Layout.
If there is something you have to do specifically for the larger screens, then it looks like you have to check the height of [[UIScreen mainScreen] bounds], as there seems to be no specific API for that. As of iOS 8 there are also size classes, which abstract screen sizes into regular or compact (vertically and horizontally) and are the recommended way to adapt your UI.
If you have an app built for iPhone 4S or earlier, it'll run letterboxed on iPhone 5.
To adapt your app to the new taller screen, the first thing you do is change the launch image to Default-568h@2x.png. Its size should be 1136x640 (HxW). Yes, having the default image in the new screen size is the key to letting your app use the whole of the new iPhone 5's screen.
(Note that the naming convention works only for the default image. Naming another image "Image-568h@2x.png" will not cause it to be loaded in place of "Image@2x.png". If you need to load different images for different screen sizes, you'll have to do it programmatically.)
If you're very very lucky, that might be it... but in all likelihood, you'll have to take a few more steps.
Make sure your XIBs/views use Auto Layout to resize themselves.
Use springs and struts to resize views.
If this is not good enough for your app, design your XIB/storyboard for one specific screen size and reposition programmatically for the other.
In the extreme case (when none of the above suffices), design two XIBs and load the appropriate one in the view controller.
To detect screen size:
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone)
{
    CGSize result = [[UIScreen mainScreen] bounds].size;
    if (result.height == 480)
    {
        // iPhone Classic
    }
    if (result.height == 568)
    {
        // iPhone 5
    }
}
The only really required thing to do is to add a launch image named "Default-568h@2x.png" to the app resources, and in the general case (if you're lucky enough) the app will work correctly.
In case the app does not handle touch events, then make sure that the key window has the proper size. The workaround is to set the proper frame:
[window setFrame:[[UIScreen mainScreen] bounds]]
There are other issues not related to screen size when migrating to iOS 6. Read iOS 6.0 Release Notes for details.
Sometimes (for pre-storyboard apps), if the layout is going to be sufficiently different, it's worth specifying a different xib according to device (see this question - you'll need to modify the code to deal with iPhone 5) in the viewController init, as no amount of twiddling with autoresizing masks will work if you need different graphics.
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    NSString *myNibName;
    if ([MyDeviceInfoUtility isiPhone5]) myNibName = @"MyNibIP5";
    else myNibName = @"MyNib";
    if ((self = [super initWithNibName:myNibName bundle:nibBundleOrNil])) {
    ...
This is useful for apps which are targeting older iOS versions.
Here you can find a nice tutorial (for MonoTouch, but you can use the information for non-MonoTouch projects, too):
http://redth.info/get-your-monotouch-apps-ready-for-iphone-5-ios-6-today/
Create a new image for your splash/default screen (640 x 1136 pixels) with the name "Default-568h@2x.png"
In the iOS Simulator, go to the Hardware -> Device menu, and select "iPhone (Retina 4-inch)"
Create other images, e.g. background images
Detect iPhone 5 to load your new images:
public static bool IsTall
{
    get
    {
        return UIDevice.CurrentDevice.UserInterfaceIdiom == UIUserInterfaceIdiom.Phone
            && UIScreen.MainScreen.Bounds.Size.Height * UIScreen.MainScreen.Scale >= 1136;
    }
}

private static string tallMagic = "-568h@2x";

public static UIImage FromBundle16x9(string path)
{
    // adopt the -568h@2x naming convention
    if (IsTall)
    {
        var imagePath = Path.GetDirectoryName(path.ToString());
        var imageFile = Path.GetFileNameWithoutExtension(path.ToString());
        var imageExt = Path.GetExtension(path.ToString());
        imageFile = imageFile + tallMagic + imageExt;
        return UIImage.FromFile(Path.Combine(imagePath, imageFile));
    }
    else
    {
        return UIImage.FromBundle(path.ToString());
    }
}
It's easy to migrate between iPhone 5 and iPhone 4 through XIBs:
UIViewController *viewController3;
if ([[UIScreen mainScreen] bounds].size.height == 568)
{
    viewController3 = [[[mainscreenview alloc] initWithNibName:@"iphone5screen" bundle:nil] autorelease];
}
else
{
    viewController3 = [[[mainscreenview alloc] initWithNibName:@"iphone4screen" bundle:nil] autorelease];
}
I solved this problem here. Just add the ~568h@2x suffix to images and ~568h to XIBs. No more runtime checks or code changes are needed.
I had added the new default launch image and (after checking out several other SE answers) made sure my storyboards all auto-sized themselves and their subviews, but the Retina 4-inch display was still letterboxed.
Then I noticed that my Info.plist had a line item for "Launch image" set to "Default.png", which I removed, and magically the letterboxing no longer appeared. Hopefully that saves someone else the same craziness I endured.
I guess it is not going to work in all cases, but in my particular project it saved me from duplicating NIB files:
Somewhere in common.h you can make these defines based off of screen height:
#define HEIGHT_IPHONE_5 568
#define IS_IPHONE ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone)
#define IS_IPHONE_5 ([[UIScreen mainScreen] bounds].size.height == HEIGHT_IPHONE_5)
In your base controller:
- (void)viewDidLoad
{
    [super viewDidLoad];
    if (IS_IPHONE_5) {
        CGRect r = self.view.frame;
        r.size.height = HEIGHT_IPHONE_5 - 20;
        self.view.frame = r;
    }
    // now the view is stretched properly and not pushed to the bottom
    // it is pushed to the top instead...
    // other code goes here...
}
In a constants.h file you can add these define statements:
#define IS_IPAD UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad
#define IS_IPHONE UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone
#define IS_WIDESCREEN (fabs((double)[[UIScreen mainScreen] bounds].size.height - (double)568) < DBL_EPSILON)
#define IS_IPHONE_5 (!IS_IPAD && IS_WIDESCREEN)
To determine if your app can support iPhone 5 Retina use this:
(This could be made more robust to return the type of display, 4S Retina, etc., but as written below, it just returns whether the iPhone supports the iPhone 5 Retina display as a YES or NO.)
In a common ".h" file add:
BOOL IS_IPHONE5_RETINA(void);
In a common ".m" file add:
BOOL IS_IPHONE5_RETINA(void) {
    BOOL isiPhone5Retina = NO;
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone) {
        if ([UIScreen mainScreen].scale == 2.0f) {
            CGSize result = [[UIScreen mainScreen] bounds].size;
            CGFloat scale = [UIScreen mainScreen].scale;
            result = CGSizeMake(result.width * scale, result.height * scale);
            if (result.height == 960) {
                //NSLog(@"iPhone 4, 4s Retina Resolution");
            }
            if (result.height == 1136) {
                //NSLog(@"iPhone 5 Resolution");
                isiPhone5Retina = YES;
            }
        } else {
            //NSLog(@"iPhone Standard Resolution");
        }
    }
    return isiPhone5Retina;
}
First of all, create two XIBs and attach all delegates and the main class to each XIB. Then you can put the condition mentioned below in your AppDelegate.m file, in:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
if ([[UIScreen mainScreen] bounds].size.height == 568)
{
    self.ViewController = [[ViewController alloc] initWithNibName:@"ViewControlleriphone5" bundle:nil];
}
else
{
    self.ViewController = [[ViewController alloc] initWithNibName:@"ViewControlleriphone4" bundle:nil];
}
You can use it anywhere in the program, depending upon your requirements, even in your ViewController classes. What matters most is that you have created two separate XIB files, one for iPhone 4 (320x480) and one for iPhone 5 (320x568).
Try the below method in a singleton class:
- (NSString *)typeOfDevice
{
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone)
    {
        CGSize result = [[UIScreen mainScreen] bounds].size;
        if (result.height == 480)
        {
            return @"Iphone";
        }
        if (result.height == 568)
        {
            return @"Iphone 5";
        }
    }
    else
    {
        return @"Ipad";
    }
    return @"Iphone";
}
You can use the Auto Layout feature and create the design using the iPhone 5 screen resolution, and it will work for both the 4" and 3.5" devices, but in this case you should have enough knowledge of the layout system.
Checking bounds against 568 will fail in landscape mode. The iPhone 5 launches only in portrait mode, but if you want to support rotations then the iPhone 5 "check" will need to handle this scenario as well.
Here's a macro which handles orientation state:
#define IS_IPHONE_5 (CGSizeEqualToSize([[UIScreen mainScreen] preferredMode].size, CGSizeMake(640, 1136)))
The use of the 'preferredMode' call is from another posting I read a few hours ago so I did not come up with this idea.
Xcode shows a warning for Retina 4 support; click on this warning and click Add, and your Retina 4 splash screen will automatically be added to your project.
After that, use this code:
if ([[UIScreen mainScreen] bounds].size.height == 568)
{
    // For iPhone 5
}
else
{
    // For iPhone 4 or less
}
I never faced such an issue with any device as I've had one codebase for all, without any hardcoded values. What I do is to have the maximum sized image as resource instead of one for each device. For example, I would have one for retina display and show it as aspect fit so it will be views as is on every device.
Coming to deciding the frame of button, for instance, at run time. For this I use the % value of the patent view, example , if I want the width to be half of parent view take 50 % of parent and same applies for height and center.
With this I don't even need the xibs.
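A minimal sketch of that percentage-of-parent approach (the specific percentages and the button are illustrative, not from the original answer):

```objc
// Size a button to 50% of the parent's width and 10% of its height,
// centered horizontally -- no hardcoded device dimensions needed.
CGSize parentSize = self.view.bounds.size;
CGFloat buttonWidth  = parentSize.width  * 0.5f;
CGFloat buttonHeight = parentSize.height * 0.1f;

UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
button.frame = CGRectMake((parentSize.width - buttonWidth) / 2.0f, // center horizontally
                          parentSize.height * 0.8f,                // 80% down the parent
                          buttonWidth,
                          buttonHeight);
[self.view addSubview:button];
```

Because every value is derived from the parent's bounds, the same code lays out correctly on 3.5" and 4" screens.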
You can use this define to calculate if you are using the iPhone 5 based on screen size:
#define IS_IPHONE_5 ( fabs( ( double )[ [ UIScreen mainScreen ] bounds ].size.height - ( double )568 ) < DBL_EPSILON )
then use a simple if statement :
if (IS_IPHONE_5) {
// What ever changes
}
Peter, you should really take a look at Canappi; it does all of that for you. All you have to do is specify the layout like this:
button mySubmitButton 'Submit' (100,100,100,30 + 0,88,0,0) { ... }
From there Canappi will generate the correct objective-c code that detects the device the app is running on and will use:
(100,100,100,30) for iPhone4
(100,188,100,30) for iPhone 5
Canappi works like Interface Builder and Story Board combined, except that it is in a textual form. If you already have XIB files, you can convert them so you don't have to recreate the entire UI from scratch.
You can manually check the screen size to determine which device you're on:
#define DEVICE_IS_IPHONE5 ([[UIScreen mainScreen] bounds].size.height == 568)

if (DEVICE_IS_IPHONE5) {
    // 4" screen
} else {
    // 3.5" screen
}
You could add this code:
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone) {
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        CGSize result = [[UIScreen mainScreen] bounds].size;
        CGFloat scale = [UIScreen mainScreen].scale;
        result = CGSizeMake(result.width * scale, result.height * scale);
        if (result.height == 960) {
            NSLog(@"iPhone 4 Resolution");
        }
        if (result.height == 1136) {
            NSLog(@"iPhone 5 Resolution");
        }
    }
    else {
        NSLog(@"Standard Resolution");
    }
}
This is truly universal code; you can create three different storyboards: one for the iPhone 4/4s (the same works for the 3GS), one for the iPhone 5, and one for the iPad. Set your project to Universal mode, set the main iPhone storyboard to the iPhone 5 storyboard and the main iPad storyboard to the iPad-targeted one, then add a new storyboard target for the iPhone 4s or earlier. Now in AppDelegate.m, inside application:didFinishLaunchingWithOptions:, add this code:
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone) {
    UIStoryboard *storyBoard;
    CGSize result = [[UIScreen mainScreen] bounds].size;
    CGFloat scale = [UIScreen mainScreen].scale;
    result = CGSizeMake(result.width * scale, result.height * scale);
    //---------------- iPhone 4/4s/retina iPod touch ----------------
    if (result.height == 960) {
        storyBoard = [UIStoryboard storyboardWithName:@"iPhone4_Storyboard" bundle:nil];
        UIViewController *initViewController = [storyBoard instantiateInitialViewController];
        [self.window setRootViewController:initViewController];
    }
    //---------------- iPhone 3GS/non-retina iPod touch (same 3.5" layout) ----------------
    if (result.height == 480) {
        storyBoard = [UIStoryboard storyboardWithName:@"iPhone4_Storyboard" bundle:nil];
        UIViewController *initViewController = [storyBoard instantiateInitialViewController];
        [self.window setRootViewController:initViewController];
    }
}
return YES;
}
So you have created a universal app for the iPhone 3GS/4/4s/5, all generations of iPod touch, and all iPads.
Remember to include every image as both myImage.png and myImage@2x.png.
In my opinion, the best way to deal with such problems, and to avoid a pile of conditionals checking the device height, is to use relative frames for any view or UI element you add. For example, if you are adding an element that should sit at the bottom of the view, or just above a tab bar, compute its y origin relative to the view's height (or relative to the tab bar, if present); autoresizing also helps here. I hope this works for you.
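A minimal sketch of that relative-frame idea — the 44-point bar height and the bare UIView are illustrative assumptions:

```objc
// Anchor a bar-like element to the bottom of the view, whatever the
// device height, and keep it pinned on rotation via an autoresizing mask.
CGFloat barHeight = 44.0f; // illustrative height
CGRect bounds = self.view.bounds;
UIView *bottomBar = [[UIView alloc] initWithFrame:
    CGRectMake(0, bounds.size.height - barHeight, bounds.size.width, barHeight)];
bottomBar.autoresizingMask = UIViewAutoresizingFlexibleTopMargin |
                             UIViewAutoresizingFlexibleWidth;
[self.view addSubview:bottomBar];
```

Since the y origin is derived from the view's own height, the same code works on 480-point and 568-point screens without any device check.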
Rather than using a set of conditionals you can resize your view automatically using the screen size.
int h = [[UIScreen mainScreen] bounds].size.height;
int w = [[UIScreen mainScreen] bounds].size.width;
self.imageView.frame = CGRectMake(20, 80, (h-200), (w-100));
In my case I want a view that fills the space between some input fields at the top and some buttons at the bottom, so fixed top left corner and variable bottom right based on screen size. My app fills the image view with the photo taken by the camera so I want all the space I can get.
If you need to convert an already existing app to universal, select the corresponding xib file, then Show Utilities -> Show Size Inspector.
In the Size Inspector you will find Autosizing; using this tool you can adapt your existing iOS app.
Using Xcode 5, select "Migrate to Asset Catalog" under Project > General.
Then use "Show in Finder" to locate your launch image; you can edit it to be 640x1136 and drag it into the asset catalog.
Make sure that both the iOS 7 and iOS 6 "R4" sections have a 640x1136 image. The next time you launch the app, the black bars will disappear and the app will use the full 4-inch screen.
Worth noting: in newer Xcode versions you have to add this Default-568h@2x.png image file to the asset catalog.
Use the Auto Layout feature for views. It will adjust automatically to all resolutions.
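As a rough sketch of what "adjust automatically" looks like with programmatic constraints — the banner view and 50-point height are illustrative assumptions:

```objc
UIView *banner = [[UIView alloc] init];
banner.translatesAutoresizingMaskIntoConstraints = NO;
[self.view addSubview:banner];

// Pin the banner to the left/right edges and the bottom, with a fixed
// height -- it then adapts to any screen size without device checks.
[self.view addConstraints:@[
    [NSLayoutConstraint constraintWithItem:banner attribute:NSLayoutAttributeLeft
        relatedBy:NSLayoutRelationEqual toItem:self.view attribute:NSLayoutAttributeLeft
        multiplier:1 constant:0],
    [NSLayoutConstraint constraintWithItem:banner attribute:NSLayoutAttributeRight
        relatedBy:NSLayoutRelationEqual toItem:self.view attribute:NSLayoutAttributeRight
        multiplier:1 constant:0],
    [NSLayoutConstraint constraintWithItem:banner attribute:NSLayoutAttributeBottom
        relatedBy:NSLayoutRelationEqual toItem:self.view attribute:NSLayoutAttributeBottom
        multiplier:1 constant:0],
    [NSLayoutConstraint constraintWithItem:banner attribute:NSLayoutAttributeHeight
        relatedBy:NSLayoutRelationEqual toItem:nil attribute:NSLayoutAttributeNotAnAttribute
        multiplier:1 constant:50]
]];
```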
Create two xibs for a controller, naming them with the suffix ~iphone or ~ipad. At run time, UIKit will load the right xib based on the device.
Use size classes, if you want to create a single xib for both iPhone and iPad, if the view is simple enough to port to iPhone and iPad.
There is a slight problem when testing on both an iOS device and the iOS Simulator. The Simulator (Xcode 6.0.1) appears to swap the width and height values in [[UIScreen mainScreen] bounds].size depending on the device orientation.
So this can be a problem when determining the actual physical screen size. This code also helps distinguish all of the 2014 iPhone model generations:
iPhone4s
iPhone5 (and iPhone5s)
iPhone6 (and iPhone6+)
It can also be easily changed to make the distinction between e.g. iPhone6 from iPhone6+.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    CGSize iOSDeviceScreenSize = [[UIScreen mainScreen] bounds].size;
    if ([UIDevice currentDevice].userInterfaceIdiom == UIUserInterfaceIdiomPhone)
    {
        if (iOSDeviceScreenSize.width > 568 ||  // for iOS devices
            iOSDeviceScreenSize.height > 568)   // for iOS Simulator
        {   // iPhone 6 and iPhone 6+
            // Instantiate a new storyboard object using the storyboard file named MainStoryboard_iPhone6
            storyboard = [UIStoryboard storyboardWithName:@"MainStoryboard_iPhone6" bundle:nil];
            NSLog(@"loaded iPhone6 Storyboard");
        }
        else if (iOSDeviceScreenSize.width == 568 ||  // for iOS devices
                 iOSDeviceScreenSize.height == 568)   // for iOS Simulator
        {   // iPhone 5 and iPod touch 5th generation: 4-inch screen (measured diagonally)
            // Instantiate a new storyboard object using the storyboard file named MainStoryboard_iPhone5
            storyboard = [UIStoryboard storyboardWithName:@"MainStoryboard_iPhone5" bundle:nil];
            NSLog(@"loaded iPhone5 Storyboard");
        }
        else
        {   // iPhone 3GS, 4, and 4S and iPod touch 3rd/4th generation: 3.5-inch screen (measured diagonally)
            storyboard = [UIStoryboard storyboardWithName:@"MainStoryboard_iPhone" bundle:nil];
            NSLog(@"loaded iPhone4 Storyboard");
        }
    }
    else if ([UIDevice currentDevice].userInterfaceIdiom == UIUserInterfaceIdiomPad)
    {   // the iOS device is an iPad
        storyboard = [UIStoryboard storyboardWithName:@"MainStoryboard_iPadnew" bundle:nil];
        NSLog(@"loaded iPad Storyboard");
    }
    // rest of my code
}
I would suggest using the autoresizing mask in your applications, matched to your UI; it saves a lot of trouble and beats building different UIs for the iPhone 4 and 5 screens.