How does responsive design work? - responsive-design

Why do we have breakpoints at 400px or 600px, for example?
These breakpoints check the resolution of the device, but smartphones now have resolutions such as Full HD (1920 × 1080) or 2K (2560 × 1440). That's why I'm confused. Why do I have to use breakpoints at 400px - 600px for smartphones when most of them now have 1080px - 1440px?
I have a 1080 x 2340 pixel device, but my webpage looks like it's 500px wide

The pixels we define in breakpoints refer to the viewport size of the device, while the pixels in a device's screen resolution refer to the number of physical pixels that device has. Viewport sizes exist to compensate for the differences in screen size between devices.
This unit of measurement is sometimes referred to as “device independent pixels” or “CSS pixels”.
Take, for instance, a laptop and a smartphone that both have a screen resolution of 1920 × 1080. If the two screens were treated the same way just because their resolutions match, the same content (say, a 1200 × 1200 image) would look fairly large on the laptop but very small on the smartphone, because even though the resolutions are identical, the physical screen sizes are very different. With viewport sizes, the same content can instead be scaled according to the device's screen size.
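To make this concrete, here is a minimal sketch (the 600px breakpoint and the .content class are illustrative, not taken from the question). The viewport meta tag tells the browser to lay the page out at the device's CSS-pixel width, and the media query matches that width rather than the physical resolution; a 1080 × 2340 phone with a device pixel ratio of around 2.7 reports a viewport of roughly 400 CSS pixels, so the rule below still applies:
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  /* Matches the viewport's CSS-pixel width, not the physical resolution */
  @media (max-width: 600px) {
    .content { font-size: 1.2rem; }
  }
</style>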
You can read more about this topic in Difference Between Viewport, Screen Resolution, DPR, and PPI for Responsive Web Development.
Hope this information answers your question.

Related

iOS controls on iPad specific screen sizes

I have a fundamental question that I would like to get addressed. I'm almost done with my universal app, and I was told that I need to specifically customize the UI controls (e.g., labels, buttons) for iPad screens. So, for example, I have the following code in the viewDidLoad method for one of my xibs.
if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone)
{
    // iPhone: smaller display font
    [_lblRandomDisplay setFont:[UIFont fontWithName:@"Helvetica Neue" size:100]];
}
else
{
    // iPad: larger display font
    [_lblRandomDisplay setFont:[UIFont fontWithName:@"Helvetica Neue" size:250]];
}
Here _lblRandomDisplay is a UILabel IBOutlet property, and I increase the font for iPad and reduce it for iPhone. Is this the way to approach iPad screens, or does iOS automatically scale the screen when viewed on iPad? As a side note, I have named the xib files filename~iphone.xib and filename~ipad.xib, and they load correctly based on the device selected in the simulator.
Along those lines, I have a Settings screen (not using the Settings bundle) built with UITableViews that I designed for iPhone, loading data programmatically from an NSArray. When loaded on iPad using settings~ipad.xib (with the controls copied from settings~iphone.xib), I haven't adjusted the row height specifically for iPad to make it look bigger. The screen shows up OK on the iPad, but it looks smaller. Is this the right approach, or what is the best way to handle this?
Please advise.
The advice that you have been given, to customise UIControls for screen size, seems wrong. UIControl sizes, and text sizes in general, should remain the same on all devices. These items are designed in sizes appropriate for readability and touchability, neither of which has any relationship to screen size.
iOS is designed with this in mind. If you make a universal project with iPad and iPhone xibs or storyboards, you will find that user interface widgets are the same pixel/point size regardless of the device (I am ignoring Retina/non-retina distinctions here for simplicity). For example, the default size of a standard button is 73 x 43 with a font size of 15pt in both cases. A Navigation Bar is 44 px/points high on both devices. This is as it should be. If we assume that reading distance on an iPhone, or an iPad mini, or an iPad is approximately the same, then there is no reason to adjust text size. The idea of redesigning for the iPad is really that you can get more information on one screen, not a bigger version of the same information.
There are two circumstances in which iOS "scales the screen":
1. If you create an iPhone-only app and run it on an iPad. Then there is a user option to run it at double size. In this case everything, including font sizes, is scaled up, but this is done by pixel-doubling, so the effect is a big blur. Take a look at it and you will understand precisely why you should not design this way.
2. If you use a single xib/storyboard for both iPhone and iPad, and rely on autolayout (iOS 6+) or autoresizing masks (iOS 5 and earlier) to auto-adjust the layout for different screen sizes. This method can - depending on your settings - proportionally resize the image content of views, but it will not dynamically resize text content; if you wanted that, you would have to adjust it in code. This is not a good way of designing an app anyway; it is better to make a dedicated design for iPhone and for iPad in separate xibs/storyboards, as you have done.
I expect when you say the iPad "looks smaller" you mean, the UI appears smaller as it gets lost on the larger screen... but the answer is not to just enlarge the size of your data, it is to reconsider your layout to fit more data on each screen. That is why with the iPad Apple provided the SplitViewController and introduced the pattern of the Container ViewController.
I wonder if you are also raising a related but separate issue: proportional sizing of views for graphic-design purposes (you mention font sizes of 100 and 250pt, not usual sizes for UI control labels). You may want the look of your app to scale with the screen, a bit like the so-called fluid web design approach to variable window sizes. So, for example, you may have a graphic device based on a huge letter 'A' that fills your iPhone screen, and want that letter to similarly fill your iPad screen. In this case you may need to set font sizes as in your code example, as sketched below.
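A rough sketch of that proportional approach, reusing the question's _lblRandomDisplay outlet (the 320pt reference width and the 100pt base size are illustrative assumptions, not values from the question):
// Scale the font from the screen's width in points; 320pt is the classic iPhone width
CGFloat screenWidth = [UIScreen mainScreen].bounds.size.width;
CGFloat scale = screenWidth / 320.0f;
[_lblRandomDisplay setFont:[UIFont fontWithName:@"Helvetica Neue" size:100.0f * scale]];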
You are certainly doing the right thing by not altering the row height of your table cells for the different devices, but on the larger screen you can of course make your table height larger and accommodate more table cells in your view.
All of these comments are a bit general, as you haven't posted enough detail of your problem. It often helps to post a picture or two...

How can I make a 320 pixel wide mobile site zoom to full width when rotated to landscape?

I've been trying numerous viewport combinations, but can't seem to find the right one.
I have a mobile site that is built to be 320 pixels wide. I'd like these 320 pixels to stretch the full width of the screen in both portrait and landscape mode, regardless of which orientation the screen starts out with, or whether the viewer rotates the device.
Thanks
What is the point of using fixed pixel values when you want the site to stretch?
Just use
width: 100%
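If the layout genuinely has to stay 320 CSS pixels wide, another common approach (a sketch, untested against the asker's page) is to declare that layout width in the viewport meta tag and let the browser scale it to fill the screen in either orientation:
<meta name="viewport" content="width=320">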

What unit of measurement is Titanium Mobile's "font-size"?

I'm trying to find out what unit of measurement Titanium uses for defining font size in mobile applications. I want to match it up to Photoshop for mockup purposes.
On iOS, font sizes are in typographical points (1/72 of an inch), so font size 12 should be the same visual size on both devices. (Of course, it will be larger in the Retina simulator, because it's twice as many pixels.)
Note that other iOS sizes are in Apple "points," which don't correspond to typographic points. An Apple "point" is 1px on a pre-Retina device, and 2px on a Retina device.
On Android, you can specify units. The default is pixels (for example, 12 and '12px' both specify 12 pixels). You can also specify sizes in Android's density-independent pixels, points, millimeters or inches. So:
'12dp' == 12 DIP (roughly equivalent to Apple's "points")
'12pt' == 12 points (typographical points)
'12mm' == 12 millimeters
'12in' is a REALLY big font
On a medium-density device like the G1, 12px == 12dp. On a high-density device (most of the newer Android phones with 800x480, 854x480, or 960x540 screens), 12dp renders twice as big as 12px--just like the Apple "point" system.
Why aren't DIP the default unit on Android? That I can't answer. I guess Androids just like pixels.
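As an illustration, a label specifying a density-independent size might look like this (a sketch using Titanium's classic Ti.UI API; the text and font family are placeholders):
var label = Ti.UI.createLabel({
    text: 'Density-independent text',
    font: {
        fontFamily: 'Helvetica Neue',
        fontSize: '12dp'   // scales with screen density, much like Apple's points
    }
});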
It's in pixels, but don't forget your Photoshop mockups need to be double the size for the Retina display.
So your mockup would be font-size 24px and in Titanium you would specify 12px.
According to this, it's pixels converted to points. According to the actual Apple UIKit documentation it's points as well, but I'm not sure whether Appcelerator changed it. It could also differ based on the OS (name/type)?
actualFontSize
On input, a pointer to a floating-point value. On return, this value contains the actual font size that was used to render the string.

What measurement units do Silverlight and WPF use?

Does anyone know what measurement units are used by Silverlight/WPF? For example, if I create a new button and set its height to 150, is that 150 pixels? Points? Millimeters?
I design all of my applications in Adobe Illustrator before proceeding to code, and although I try and set everything to the dimensions in my Illustrator file, the Silverlight application is usually larger.
Although in theory, 1 unit in WPF is 1/96th of an inch, that's frequently not the case in practice.
It's usually true when printing. But it's rarely true on screen. The reason for this is that Windows almost always knows the true resolution of a printer, but almost never knows the true resolution of a screen.
For example, I have three screens attached to my computer. Windows thinks that they all have a resolution of 96 pixels per inch. Actually they don't. Two of them have a resolution of 101 pixels per inch, and one has a resolution of 94 pixels per inch. (Why? Because Windows has no way of working the true resolutions out for itself, and I haven't told it. The fiction that they all have the same pixel size is close to the truth, and turns out to be a convenient fiction.)
So when I create, say, a Rectangle in WPF with Width and Height both set to 96, the size of the Rectangle actually depends on which screen it appears on. Windows thinks that all 3 screens have a resolution of 96 pixels per inch, and so it'll render the rectangle as being 96 pixels tall and wide no matter which screen it appears on. That'll make it appear 0.95 inches tall on two of the screens, and 1.02 inches tall on the third.
So in practice, that means that units in WPF on my computer here are either 1/100th of an inch or 1/94th of an inch. (I.e., the size of 1 unit in WPF is exactly the size of 1 pixel on my particular setup, no matter how big the pixels happen to be.)
I could change that. I could reconfigure Windows - I could tell it the actual resolution of all 3 screens, in which case the nominal and actual WPF unit sizes would coincide. Or I could lie - I could claim that I have 200 pixel per inch screens, in which case everything would be massive...
The basic problem here is that there is no standard way for the computer to discover the true size of the physical pixels on the screen, and very few people bother to set it up by hand. (And in fact you can cause problems by configuring it 'correctly', because a lot of software doesn't behave correctly when you do.) So the majority of Windows computers don't report physical pixel sizes correctly to WPF - they can't because they don't know.
Consequently, there's no reliable answer to the question - 1 unit in WPF could be pretty much anything on screen. (In practice, most of the time, it turns out to be 1 pixel, simply because if you don't tell Windows anything else, it defaults to assuming that your screens have pixels that are 1/96th of an inch tall, which is the same as 1 WPF unit. And for most desktop screens, that's actually quite likely to be a good guess. But this isn't universal. On systems configured with what used to be called 'large fonts' for example, you'll find a different nominal screen resolution, and 1 WPF unit will correspond to slightly more than 1 physical pixel - about 1.2 in fact.)
With printers, it's all much more predictable. Printers are invariably able to report their resolutions correctly. So if you print something that's 96 WPF units high, you can be confident that it will be 1 inch high.
MSDN's documentation states that the FrameworkElement.Height property (for Silverlight) refers to:
The height, in pixels, of the object
However, for WPF it refers to:
a device-independent unit (1/96th inch) measurement
So, to answer your question... pixels for Silverlight, device-independent units for WPF.
The documentation refers to pixels, but these are pixels at a nominal 96 per inch. A line of Width 96, when displayed on a 120 DPI display, will be 120 actual device pixels. Similarly, the same line drawn on printer output at 600 DPI will be 600 dots long.
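In other words: device pixels = WPF units × (system DPI ÷ 96). A minimal XAML illustration of the same point (the rectangle itself is hypothetical):
<!-- 96 device-independent units: 96 physical pixels at 96 DPI,
     120 physical pixels at 120 DPI, nominally one inch either way -->
<Rectangle Width="96" Height="96" Fill="Black" />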
They are Device Independent Units.
You can find more detailed explanations here.

What's the advantage of device-independent pixels?

I'm learning about WPF. WPF uses device-independent pixels. But I can't really understand them. Why are they better than device-dependent pixels, if most other apps are device-dependent and WPF apps aren't? Would they stick out?
The advantage of device-independent pixels is that, when specifying a UI, you can determine the size at which UI components will appear on the user's device, regardless of the user's screen resolution. Unfortunately, it's not quite that simple: it requires the user to have various settings configured 'correctly', and it can be overridden by a user who wants to change the resolution of their device (e.g., a partially sighted user who runs at a low resolution to make text easier to read).
In addition to the other link posted, you can also check out this one:
Is WPF Really Resolution Independent?
Note that you can snap a control to device pixels by setting SnapsToDevicePixels to true, to avoid the blurriness that occurs when a horizontal or vertical line is drawn on the boundary between two device pixels.
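A minimal sketch (the line's coordinates are arbitrary):
<!-- Snap this 1-unit line to whole device pixels so it renders crisply -->
<Line X1="0" Y1="10" X2="200" Y2="10"
      Stroke="Black" StrokeThickness="1"
      SnapsToDevicePixels="True" />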
Before understanding device-independent units, it helps to understand what DPI is. DPI is dots per inch: there are a certain number of pixels (usually 96) in an inch. What is important to understand is that in the Win32 environment this "inch" is not fixed at the size of a physical inch. So when the number of dots increases or decreases as you change the resolution, there are more or fewer dots in an "inch", and as a result the "inch" grows or shrinks.
In WPF, however, the inch is as good as a physical inch: every time the DPI changes, the system adjusts itself accordingly.
It's about UI and font scaling depending on the system's DPI setting:
Not all applications are DPI-aware: some use hardware pixels as the primary unit of measurement; changing the system DPI has no effect on these applications. Many other applications use DPI-aware units to describe font sizes, but use pixels to describe everything else. Making the DPI too small or too large can cause layout problems for these applications, because the applications' text scales with the system's DPI setting, but the applications' UI does not. This problem has been eliminated for applications developed using WPF.
WPF supports automatic scaling by using the device independent pixel as its primary unit of measurement, instead of hardware pixels; graphics and text scale properly without any extra work from the application developer.
This is taken from the link Kishore provided. (http://msdn.microsoft.com/en-us/library/ms748373.aspx)
