Corona SDK Cross Device Screen Resolution - mobile

This is going to be one of those awkward questions looking for an answer that probably doesn't exist, but here goes.
I've been developing some simple games using Corona, and whilst the functionality seems to work pretty well across most of the physical devices I have tested on, the one main issue is the layout. I know you can't really build for every single device perfectly, but I'm wondering if there is a common method to make an app look good across as many screens as possible. I have access to these devices:
iPad 1 & 2: 4:3 (1.33)
iPhone 960x640 3:2 (1.5)
iPhone 480x320 3:2 (1.5)
Galaxy Nexus 16:9 (1.77)
From what I have seen, people aim to use 320x480 as a scaled resolution and then let Corona upscale to the correct device resolution (with any #2x images as required), but this leads to letterboxing or cropping depending on the config.lua scale setting. Whilst it does scale correctly, having a letterbox isn't great.
So would I be best not to specify a width and height in the config file, but instead to run some sort of screen check at startup to look for 1.33 / 1.5 / 1.77 aspect ratios? Surely, given that cross-device support is the whole point of Corona SDK, there is some sort of 'typical' setup that developers use at the start of any new project?
Thank you

It seems that I have found a pretty good solution based on this forum post on the Ansca website: http://developer.anscamobile.com/forum/2012/03/12/understanding-letterbox-scalling
In summary, the config.lua should look like this:
application = {
    content = {
        width = 320,
        height = 480,
        scale = "letterbox",
        xAlign = "center",
        yAlign = "center",
        imageSuffix = {
            ["#2x"] = 2,
        },
    },
}
Create background images at 360x570 for older (non-retina) devices: 320x480 screens will crop the image slightly, and it will scale nicely on older Android devices.
Create the corresponding #2x background images at 720x1140 for iPad and retina iPhones; again, these will scale on Android and be slightly cropped on iOS.
As an example, where you would normally create a 320x480 image and display it with:
local bg = display.newImageRect("bg.png",320x480)
bg.x = display.contentWidth/2
bg.y = display.contentHeight/2
... instead create a 360x570 background and use the following code:
local bg = display.newImageRect("bg.png", 360, 570)
bg.x = display.contentWidth/2
bg.y = display.contentHeight/2
This is just a summary, so check the link for more detailed instructions.

Well, you CAN use numbers slightly off from 2 for the scaling if you want pixel-correct images on the different devices. For example:
application =
{
    content =
    {
        width = 640,
        height = 960,
        scale = "zoomEven",
        imageSuffix =
        {
            ["-iphone3"] = 0.5,
            ["-ipad2"] = 1.066,
            ["-ipad3"] = 2.133,
        },
    },
}
In which "background.png" would be a 640x960 image for the iphone4, while "background-iphone3.png" would be 320x480 (you don´t need this, but it will reduce memory requirement for iphone3 applications). "background-ipad3.png" would need to be 1536x2048 (and half that for -ipad2).
Of course it doesn´t solve the aspect ratio for screen positioning, but it solves it for all other gfx related problems. Remember to use display.newImageRect, not display.newImage or you won´t see any difference.

Related

Codename One - Zoom, center and crop a video to force it to occupy all the screen

I have an intro video in the center of a BorderLayout with the option BorderLayout.CENTER_BEHAVIOR_TOTAL_BELOW. At the top I have a logo; in the south I have login buttons.
I wrote code similar to the following, which chooses the right video according to the rotation of the device. I verified that the video is automatically zoomed to the available space: on the smartphones that I tested (Android and iPhone models), the video covers the whole screen area (because the video width and height are proportional to the screen width and height). That's good; it's exactly what I want.
However, there are probably smartphones with a different screen aspect ratio from the ones that I tested. Moreover, all tablets have a different screen aspect ratio from smartphones.
I need the video to ALWAYS occupy the whole screen area. If necessary, it should be automatically zoomed, centered, and cropped to occupy the full screen.
I couldn't find what I need to implement this use case in the Codename One API. What code can I use? My target devices are Android and iOS devices (smartphones and tablets).
Example of code:
Form hi = new Form("Hi World", new BorderLayout(BorderLayout.CENTER_BEHAVIOR_TOTAL_BELOW));
disableToolbar(hi);
introVideoMP4 = "/intro-landscape.mp4";
if (Display.getInstance().isPortrait()) {
    introVideoMP4 = "/intro-portrait.mp4";
}
MediaPlayer introVideo = new MediaPlayer();
try {
    InputStream videoInputStream = Display.getInstance().getResourceAsStream(getClass(), introVideoMP4);
    introVideo.setDataSource(videoInputStream, "video/mp4", () -> {
        introVideo.getMedia().setTime(0);
        introVideo.getMedia().play();
    });
    introVideo.setHideNativeVideoControls(true);
    introVideo.hideControls();
    introVideo.setAutoplay(true);
    hi.add(BorderLayout.CENTER, introVideo);
} catch (Exception err) {
    Log.e(err);
}
hi.addOrientationListener(l -> {
    introVideoMP4 = "/intro-landscape.mp4";
    if (Display.getInstance().isPortrait()) {
        introVideoMP4 = "/intro-portrait.mp4";
    }
    try {
        InputStream videoInputStream = Display.getInstance().getResourceAsStream(getClass(), introVideoMP4);
        introVideo.setDataSource(videoInputStream, "video/mp4", () -> {
            introVideo.getMedia().setTime(0);
            introVideo.getMedia().play();
        });
    } catch (Exception err) {
        Log.e(err);
    }
});
hi.add(BorderLayout.NORTH, new Label("My App"));
Button myButton = new Button("Tap me!");
myButton.addActionListener(l -> {
    Log.p("myButton tapped");
});
hi.add(BorderLayout.SOUTH, myButton);
hi.show();
Usually sites generate video downloads/streams based on the device proportions on the server (using tools such as ffmpeg), then deliver a video with the right aspect ratio, bitrate, and format.
There is no built-in functionality in Codename One for cropping the video, similar to a "fit" scale.
However, if you are OK with the sides or top being cropped, then you can probably create your own layout manager for the video component and position/size it based on your knowledge of the video dimensions and screen dimensions (see the sketch below). Creating a layout manager is really easy; it's mostly just the work of implementing the layoutContainer method, where you set the X/Y/width/height of the component. See https://www.codenameone.com/blog/map-layout-update.html
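To make that concrete, here is a minimal, untested sketch of such a layout manager. The CoverLayout name and the aspect-ratio parameter are hypothetical; you would pass your video's real width/height ratio. It sizes its single child so the child covers the whole container, preserving the aspect ratio and cropping the overflow evenly:
import com.codename1.ui.Component;
import com.codename1.ui.Container;
import com.codename1.ui.geom.Dimension;
import com.codename1.ui.layouts.Layout;

// Sizes its single child so it fully covers the container, preserving the
// child's aspect ratio and cropping the overflow equally on both edges.
public class CoverLayout extends Layout {
    private final double aspect; // video width / height, e.g. 16.0 / 9.0

    public CoverLayout(double aspect) {
        this.aspect = aspect;
    }

    @Override
    public void layoutContainer(Container parent) {
        if (parent.getComponentCount() == 0) {
            return;
        }
        Component video = parent.getComponentAt(0);
        int cw = parent.getWidth();
        int ch = parent.getHeight();
        // Start by filling the width, then grow if the height still falls short.
        int w = cw;
        int h = (int) (cw / aspect);
        if (h < ch) {
            h = ch;
            w = (int) (ch * aspect);
        }
        video.setWidth(w);
        video.setHeight(h);
        // Center the child; the cropped overflow hangs off both edges equally.
        video.setX((cw - w) / 2);
        video.setY((ch - h) / 2);
    }

    @Override
    public Dimension getPreferredSize(Container parent) {
        // Sketch only: just ask to be as big as the parent currently is.
        return new Dimension(parent.getWidth(), parent.getHeight());
    }
}
You would then wrap the MediaPlayer in a Container that uses this layout, e.g. new Container(new CoverLayout(16.0 / 9.0)).add(introVideo), and place that container in BorderLayout.CENTER instead of the raw MediaPlayer.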

What is the most performant way to render unmanaged video frames in WPF?

I'm using the FFmpeg library to receive and decode H.264/MPEG-TS over UDP with minimal latency (something MediaElement can't handle).
On a dedicated FFmpeg thread, I'm pulling PixelFormats.Bgr32 video frames for display. I've already tried InteropBitmap:
_section = CreateFileMapping(INVALID_HANDLE_VALUE, IntPtr.Zero, PAGE_READWRITE, 0, size, null);
_buffer = MapViewOfFile(_section, FILE_MAP_ALL_ACCESS, 0, 0, size);
Dispatcher.Invoke((Action)delegate()
{
    _interopBitmap = (InteropBitmap)Imaging.CreateBitmapSourceFromMemorySection(_section, width, height, PixelFormats.Bgr32, (int)size / height, 0);
    this.Source = _interopBitmap;
});
And then per frame update:
Dispatcher.Invoke((Action)delegate()
{
    _interopBitmap.Invalidate();
});
But performance is quite bad (skipped frames, high CPU usage, etc.).
I've also tried WriteableBitmap: FFmpeg places frames in _writeableBitmap.BackBuffer, and per frame update:
Dispatcher.Invoke((Action)delegate()
{
    _writeableBitmap.Lock();
});
try
{
    ret = FFmpegInvoke.sws_scale(...);
}
finally
{
    Dispatcher.Invoke((Action)delegate()
    {
        _writeableBitmap.AddDirtyRect(_rect);
        _writeableBitmap.Unlock();
    });
}
I'm experiencing almost the same performance issues (tested with various DispatcherPriority values).
Any help will be greatly appreciated.
I know it's late, but I'm writing this answer for those who are still struggling with this problem.
Recently, I did a rendering project using InteropBitmap in which I was able to run about 16 media player components simultaneously in a WPF window at 25 fps, on a laptop with a 1.6 GHz Core i7 CPU and 8 GB of RAM.
Here are the steps I took for performance tuning:
First of all, I did not let the GC handle my video packets. I allocated memory using Marshal.AllocHGlobal wherever I needed to instantiate a video frame, and freed it with Marshal.FreeHGlobal as soon as I was done rendering it.
Secondly, I created a dispatcher thread for each individual media player. For more information, read "https://blogs.msdn.microsoft.com/dwayneneed/2007/04/26/multithreaded-ui-hostvisual/".
Thirdly, for aspect ratio and resizing in general, I used the native EmguCV library. It helped my performance a lot compared to working with bitmaps, overlays, and so on.
I think these steps will help anyone who needs to render frames manually with InteropBitmap and the like.

Pango and FreeType -- not rendering text, just weird pixels

I'm trying to get Pango to control FreeType. I've successfully got FreeType to render into a bitmap, but Pango doesn't seem to know what's going on; I'm obviously not doing something correctly.
This is the code that I'm using at the moment:
font_map = pango_ft2_font_map_new();
pango_ft2_font_map_set_resolution(PANGO_FT2_FONT_MAP(font_map), 72, 72);
cr = pango_font_map_create_context(PANGO_FONT_MAP(font_map));
font_description = pango_font_description_new ();
pango_font_description_set_family (font_description, "Courier New");
pango_font_description_set_weight (font_description, PANGO_WEIGHT_BOLD);
pango_font_description_set_absolute_size (font_description, 32 * PANGO_SCALE);
layout = pango_layout_new(cr);
pango_layout_set_font_description(layout, font_description);
pango_layout_set_text(layout, "Some sample text!", -1);
pango_context_set_font_description(cr, font_description);
FT_Bitmap bitmap = { 0 };
bitmap.width = drawBitmap.get()->getWidth();
bitmap.rows = drawBitmap.get()->getHeight();
bitmap.pitch = bitmap.width * 4;
bitmap.buffer = (unsigned char*)drawBitmap.get()->getDataPtr();
bitmap.num_grays = 256;
bitmap.pixel_mode = FT_PIXEL_MODE_GRAY;
pango_ft2_render_layout(&bitmap, layout, 100, 100);
drawBitmap is just my helper class, I know this works because I can fill it with random colours and they show up.
This is what gets rendered: just weird pixels, not text.
I want to get that text to show up properly.
EDIT: The problem has been brought into sharper relief after fixing the bit depth of the image and switching from bare Pango FreeType to Pango Cairo with the FreeType engine.
Using this line:
font_map = pango_cairo_font_map_new_for_font_type(CAIRO_FONT_TYPE_WIN32);
produces correctly rendered text, and when I run through pango_font_map_list_families, I get a long list of the fonts installed on my system.
However if I change it to this, to use FreeType:
font_map = pango_cairo_font_map_new_for_font_type(CAIRO_FONT_TYPE_FT);
it produces the weird pixels again, and then there are suddenly only 3 fonts on my system: Sans, Serif, and Monospace.
I just hit exactly the same issue as you, but after a lot of digging I managed to find a solution that works for me. I've been using the pre-built Pango/FreeType/dependencies libraries from http://www.gtk.org/download/win32.php and copying the runtime DLLs to my exe folder. In the end I also had to copy the '/etc/fonts' folder from the all-in-one version of GTK+ as a subfolder of my exe. Then suddenly I had fonts via Pango/FreeType! I'm not sure what the origin of the 'fonts' folder is yet, though.
You probably need to tell it where your font files are located manually; FreeType alone has no notion of default font locations, as it is a bare-bones renderer. (It is also really insufficient for displaying Unicode text, which is why the other Pango engines add more libraries to the mix.)
pango-cairo is the most complete Pango backend IIRC, and unless I'm wrong it will pull in fontconfig. fontconfig's sole purpose is to manage font file locations for apps; that's why it works out of the box for you.
If your chosen backend does use fontconfig, try to locate its master conf file and make sure the default directories are appropriate for your system. Then run fc-cache.

OpenCV Capture from external camera

I'm currently writing a real-time application using OpenCV, with the following problem:
I'm trying to capture an image from an HDV camera plugged into FireWire 800.
I have tried looping over the index passed to cvCaptureFromCAM,
but no camera can be found (except the webcam).
Here is my code sample; it loops over the index (skipping 0, since that's the webcam's index):
CvCapture* camera;
int index;
for (index = 1; index < 100; ++index) {
    camera = cvCaptureFromCAM(index);
    if (camera)
        break;
}
if (!camera)
    abort();
Every time, it ends up at the abort.
I'm compiling on OS X 10.7, and I have tested:
OpenCV 1.2 private framework
OpenCV 2.0 private framework (found here: OpenCV2.0.dmg)
OpenCV compiled by myself (ver. 2)
I know that the problem is well known and there is a lot of discussion about it,
but I'm not able to find any solution.
Has anyone been in the same situation?
Regards.
To explicitly select FireWire, perhaps you can try adding 300 to your index (see the sketch below). At least in OpenCV 2.4, each type of camera is given a specific domain. For example, Video4Linux cameras are given domain 200, so 200 is the first V4L camera, 201 is the second, etc. For FireWire, the domain is 300. If you specify an index less than 100, OpenCV just iterates through each of its domains in order, which may not be the order you expect; for example, it might find your webcam first and never find the FireWire camera. If this is not the issue, please accept my apologies.
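For illustration, here is a minimal sketch of the same domain arithmetic using the modern (3.x+) OpenCV Java bindings; the class and constant names below come from those bindings, while the old C call cvCaptureFromCAM(300 + i) accepts the same domain + index value:
import org.opencv.core.Core;
import org.opencv.videoio.VideoCapture;
import org.opencv.videoio.Videoio;

// Probes the FireWire domain explicitly instead of relying on OpenCV's
// domain iteration order.
public class FirewireProbe {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        for (int i = 0; i < 8; i++) {
            // Videoio.CAP_FIREWIRE is the FireWire domain offset (300), so
            // 300 is the first FireWire camera, 301 the second, and so on.
            VideoCapture cap = new VideoCapture(Videoio.CAP_FIREWIRE + i);
            boolean opened = cap.isOpened();
            cap.release();
            if (opened) {
                System.out.println("FireWire camera found at FireWire index " + i);
                return;
            }
        }
        System.out.println("No FireWire camera found");
    }
}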
index should start at 0 instead of 1.
If that doesn't work, maybe your camera is not supported by OpenCV. I suggest you check if it is in the compatibility list.

Silverlight MP3 Playing Library

I'm trying to play MP3 files in SilverSprite, and it's super buggy. Is there an alternative library I can use to play MP3s in Silverlight?
Edit: Now that there's a bounty, I'm specifically looking for something that:
Works with SL 3-4
Is a separate project/DLL
Will work in SilverSprite projects (I'm using a layer on top of SS) -- no GUI, just methods I can call to play sounds
Works with content that has the build action set to Content. I cannot use embedded resources due to a bug in SilverSprite; with them, my app will not run.
Plays MP3s.
Can play multiple audio files at the same time
I hope it's clear what I'm trying to find. I would like something I can embed in my own game engine, which sits on top of SilverSprite. I will supply all the audio files in the XAP. (The SilverSprite audio is quite broken and doesn't work.)
Update: The specific direction I would probably like to go in is to instantiate a new MediaElement, set the source, and play it. I have some code below, but a) NaturalDuration.TimeSpan.TotalMilliseconds reports 0, and b) the MediaOpened event never fires.
MediaElement m = new MediaElement();
m.Source = new Uri("Content/Audio/chimes.mp3", UriKind.Relative);
m.Stop(); // useless?
//m.SetSource(new FileStream("Content/Audio/chimes.mp3", FileMode.Open)); // "Permission denied" exception, is it even finding the file?
m.Volume = 1; // Max
m.Position = TimeSpan.FromMilliseconds(0);
while (m.CurrentState != System.Windows.Media.MediaElementState.Closed)
{
    Thread.Sleep(10);
}
m.MediaOpened += (sender, e) =>
{
    m.Play();
};
m.Play();
For some working code rather similar to your updated approach, see http://www.wiredprairie.us/blog/index.php/archives/577. Beware that the MediaElement needs to be added to the control/component tree; see http://www.michaelsnow.com/2010/12/17/playing-sound-effects-on-windows-phone-7/.
Two very interesting options for your requirements are this library and this one.
For this kind of stuff you could also implement/use a custom MediaStreamSource like this one... see here and here.
EDIT - some other options:
For playing multiple sounds in parallel via XNA, see the source code at http://create.msdn.com/en-US/education/catalog/sample/silverlightsound
Or use the MediaPlayer class from XNA 4, for example:
MediaPlayer.Stop();
MediaPlayer.Volume = 1;
MediaPlayer.Play(Song.FromUri("TestSound", new Uri("/Content/Audio/chimes.mp3", UriKind.Relative)));
As for playing multiple sound files at the same time:
IIRC, this is something that could cause your app to fail validation.
