I am working on a responsive layout for a page with two main elements, and I need to hide one of them on standard phone screens (let's say up to about 5 inches).
The additional conditions are landscape orientation and a screen width of up to 800px.
Usually smartphones can easily be distinguished from common monitors by their higher pixel ratio (> 1) and/or resolution (> 130dpi). Even so, testing through Chrome DevTools, I came across some phones with pixel-ratio = 1 and resolution = 96dpi, specs that make them indistinguishable from a common monitor to my media query:
@media only screen and (orientation: landscape) and (max-width: 800px) and (min-resolution: 1dppx) { … }
As stated, this media query also matches common monitors. Is there any other feature I can use to separate the two cases?
To throw in some example devices:
Motorola Droid 3/4/Razr/Atrix (540x960px; pixel-ratio:1; resolution 96dpi)
Motorola Droid Razr HD (720x1280px; pixel-ratio:1; resolution 96dpi)
Sony Xperia Sola (480x854px; pixel-ratio:1; resolution 96dpi)
Thank you.
I don't know why my LCP element would be a p tag, and I have no idea what I could do to reduce its size. Sometimes it gets up to 2.6s and gives a yellow rating (instead of green).
This is the p tag. All of those classes are Bootstrap classes.
<p className="text-center mb-md-5 mt-0 mb-5">{aboutText}</p>
This is the variable aboutText
const aboutText = `Suddenly Magazine highlights the uniqueness of Saskatchewan, and its sudden rise in popularity and growth mentioned in publications such as USA Today and the New York Times.
Advertorials and Articles focus on its rare & particular tourism, its passionate sports, its character, and the prosperous opportunity for businesses and artists influenced by a Saskatchewan setting.
It is centred in Saskatoon, but contributors range from Lac La Ronge in the North, to provincial boundaries east and west, to the Outlaw Caves near the US border.`
The domain is https://suddenlysask.com
So why is your LCP a p tag?
It's only a p tag on mobile; here, take a look at the mobile view.
It's easy to see that the p tag takes up the most space there.
You could try to make the image bigger on mobile devices, so Lighthouse will count the image as the LCP element.
Another solution is to split your p tag up into 2 smaller p tags.
Another solution could be (which is not recommended) to let your p tag extend slightly out of the viewport, because...
The size of the element reported for Largest Contentful Paint is typically the size that's visible to the user within the viewport. If the element extends outside of the viewport, or if any of the element is clipped or has non-visible overflow, those portions do not count toward the element's size.
I guess your bad result comes from this line here:
<link data-react-helmet="true" rel="preload" href="https://fonts.googleapis.com/css?family=Montserrat|Helvetica+Neue|Helvetica|Arial&display=swap">
Why does it take up to 2.6 sec?
Here is what I guess:
Loading the Google font can take a while, and it's not guaranteed to always load in exactly the same amount of time. When the font is loaded the browser swaps your fonts, which means the p tag is re-rendered, and the p tag with the new font is then treated as a new LCP.
For testing purposes you could remove the link and see whether it affects your LCP and your speed score.
In the end, I would split the paragraph up into 2 smaller paragraphs so that the image becomes the LCP. I think that's the easiest solution.
People seem to completely misunderstand the purpose of the Largest Contentful Paint metric. It is designed to show you when the majority of the above-the-fold content is ready.
Which item is the Largest Contentful Paint is not as important as when it occurs; knowing which item it is only helps you determine what could be slowing your page down.
It is the main metric in determining when 'above the fold' content is painted sufficiently that an end user would see the page as "complete" (this is perceived completeness, there can still be stuff loading lower down the page / in the background).
The suggestions of splitting the paragraph, wrapping it in a div, making it taller etc. serve no purpose; they just shift the LCP onto something else, possibly making your score look better but not actually fixing the problem.
What you want to do is optimise the initial content on the page.
For this you want to serve just the 'above the fold' HTML along with the CSS and JS required for above the fold content.
Then you serve everything else.
This article is a good introduction to critical JS and CSS https://www.smashingmagazine.com/2015/08/understanding-critical-css/
However, in a nutshell, inlining critical CSS and JS means that the CSS and JS required to render the initial content of the page should be inline within the HTML. I am guessing that with something like Gatsby you would inline the critical JS that renders the above-the-fold content, the above-the-fold CSS, etc., but the principle is the same.
The key is that the above-the-fold content should all be served within the HTML (except for non-vector images), so that there is no round-trip time spent waiting for CSS files, JS files etc.
So, for clarity, instead of:
HTML requested, (200ms round trip to server)
HTML loaded and parsed, links found to the CSS and JS required to render the initial page content
CSS and JS requested. (200ms round trip to server)
CSS and JS loaded
Enough to render the page.
you have:
HTML requested, (200ms round trip to server)
HTML loaded, all required CSS and JS inlined in HTML
Enough to render the page
This may not seem like a lot but that 200ms can make a huge difference to perceived speed.
Also, this is a simplified example; often a page requires 20 requests or more to render the above-the-fold content. Due to the usual limit of around 8 requests at a time, this means there could be up to 3 round trips of 200ms spent waiting for server responses.
Looking at your site, you will be getting a false reading for "critical request chains", as there is no meaningful HTML served in the initial page; it is all rendered via JS. This could be why you do not think there is a problem.
If you do the above you will get low FCP and LCP times, assuming your images are optimised.
Some Gatsby users have recently been complaining about a huge drop in their Lighthouse scores, and everyone agrees on the same cause: the Lighthouse score has decreased a lot due to a high LCP (Largest Contentful Paint) time.
This is the result of the changes in the new Lighthouse version (v6), which introduces LCP as a new concept and metric. As you can see, the changelog was written in May, but depending on the user and the site, the changes arrive on different dates (I guess that depends on Google's servers and the time it takes this change to replicate through them).
According to the documentation:
Largest Contentful Paint (LCP) is a measurement of perceived loading experience. It marks the point during page load when the primary–or "largest"–content has loaded and is visible to the user. LCP is an important complement to First Contentful Paint (FCP), which only captures the very beginning of the loading experience. LCP provides a signal to developers about how quickly a user is actually able to see the content of a page. An LCP score below 2.5 seconds is considered 'Good.'
As you said, this metric is closely related to FCP and is a complement to it: improving FCP will definitely improve the LCP score. According to the changelog:
FCP's weight has been reduced from 23% to 15%. Measuring only when the first pixel is painted (FCP) didn't give us a complete picture. Combining it with measuring when users are able to see what they most likely care about (LCP) better reflects the loading experience.
You can follow this Gatsby GitHub thread to check how the users bypass this issue in other cases.
In your case, I would suggest:
Delete your <p> and check the score again to see the changes (just to be sure).
Wrap your <p> inside a <div>.
Split your <p> into 2 or 3 smaller pieces so that they count toward the LCP as well as the FCP.
If none of the above works, try playing with the <p>'s height to see if it improves the score.
I guess that Gatsby (and also Google) are working on adjusting this new feature and fixing these bad-score issues.
In all of today's electronic devices, such as mobile phones, we see a visual battery charging indicator: a graphical container composed of bars that increase one by one while the battery is charging, and decrease one by one as the phone is used.
I see the same thing on laptops in every GUI operating system, like Windows and Linux.
I am not sure whether I am posting in the right place, because this involves both system programming and electrical engineering.
A visual illustration of my question is here:
http://gickr.com/results4/anim_eaccb534-1b58-ec74-697d-cd082c367a25.gif
I have been wondering for a long time what logic this works on.
How does the program manage to monitor the battery?
I tried a simple approach based on amp-hours, estimating how long each bar should take to increase while the battery is in charging mode, but that does not work reliably for me.
I also read the source code of a friend's Android battery indicator application, but the functions he used were system calls into the Android kernel (Linux kernel).
I need to build this from scratch.
I need to understand this logic because I am working on an operating-system kernel project, which will later need a battery charging monitor.
The thing I will implement right now is just showing the percentage on the console screen.
Please give me an idea of how I can do it. Thanks in advance.
Integrating amps over time is not a reliable way to code a battery meter. Use voltage instead.
Refer to the battery's datasheet for a graph of (approximate) voltage vs. charge level.
Obtain an analog input to your CPU. If it's a microcontroller with a built-in ADC, then hopefully that's sufficient.
Plug a reference voltage (e.g. a Zener diode) into the analog input. As the power supply voltage decreases, the reference will appear to increase, because the ADC only measures voltage proportionally. The CPU may include a built-in reference voltage generator that you can mux to the ADC, or the ADC might always measure relative to a fixed reference instead of rail-to-rail. Consult the ADC manual (or the ADC section of the microcontroller manual) for details.
Ensure that the ADC provides sufficient accuracy.
Sample the battery level and run a simple low-pass filter to eliminate noise, like displayed_level = (displayed_level * 0.95) + (measured_level * 0.05). Run that through an approximate function mapping the apparent reference voltage to the charge level.
Display the charge level.
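A minimal sketch of those steps in C, assuming a hypothetical read_adc_reference() HAL call, a 10-bit ADC, a 2.5 V reference, and a made-up voltage-to-charge curve (none of these numbers come from a real datasheet; substitute the curve from your battery's datasheet and your own ADC access):

#include <stdio.h>

/* Hypothetical HAL call: returns the raw reading of the reference channel
 * from a 10-bit ADC (0..1023). Replace with your hardware's real ADC access. */
extern unsigned int read_adc_reference(void);

/* Example voltage-to-charge curve, invented for this sketch. Values in mV. */
static const struct { int mv; int percent; } curve[] = {
    { 4200, 100 }, { 3900, 75 }, { 3700, 50 }, { 3550, 25 }, { 3300, 0 },
};

static double displayed_level = -1.0;   /* filtered charge level, in percent */

/* Linear interpolation between the datasheet points. */
static int voltage_to_percent(int mv)
{
    unsigned int i;
    if (mv >= curve[0].mv) return 100;
    for (i = 1; i < sizeof curve / sizeof curve[0]; i++) {
        if (mv >= curve[i].mv) {
            int dv = curve[i - 1].mv - curve[i].mv;
            int dp = curve[i - 1].percent - curve[i].percent;
            return curve[i].percent + (mv - curve[i].mv) * dp / dv;
        }
    }
    return 0;
}

/* Call this periodically (e.g. once per second). */
void battery_poll(void)
{
    unsigned int raw = read_adc_reference();
    if (raw == 0) return;                      /* avoid division by zero */

    /* The reference is fixed (say a 2.5 V Zener) and the ADC scale is the
     * battery rail, so: battery_mv = Vref_mv * full_scale / raw_reading. */
    int battery_mv = 2500 * 1023 / (int)raw;

    int measured = voltage_to_percent(battery_mv);

    /* The low-pass filter from the answer above, to smooth out noise. */
    if (displayed_level < 0)
        displayed_level = measured;
    else
        displayed_level = displayed_level * 0.95 + measured * 0.05;

    printf("Battery: %d%%\n", (int)(displayed_level + 0.5));
}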
I'm working on an embedded application on Linux that can be used with different PC hardware (displays, specifically).
This application should set the environment to the highest allowed resolution (obtained via the XRRSizes function from libXrandr).
The problem is that with some hardware, trying to set the highest option creates a virtual desktop, i.e. a desktop where the real resolution is smaller and you have to scroll with the mouse at the edges of the screen to access all of it.
Is there a way to detect within Xlib (or one of its siblings) that I am working with a virtual resolution (in other words, that the resize didn't go as expected)?
Hints for a workaround for this situation would also be appreciated...
Thanks
Read this: http://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt
You need to learn the difference between "screen", "output" and "CRTC". You need to check the modes available for each of the outputs you want to use, then properly set the modes you want on the CRTCs, associate the CRTCs with the outputs, and finally make the screen size fit the values you set on each output.
Take a look at the xrandr source code for examples: http://cgit.freedesktop.org/xorg/app/xrandr/tree/xrandr.c
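As a rough sketch of that kind of check, the heuristic below (my assumption, not something the RandR protocol states in these terms) treats the case where the X screen is larger than the union of the areas actually driven by the active CRTCs as a likely panning/virtual desktop:

/* Build with: gcc detect_virtual.c -o detect_virtual -lX11 -lXrandr */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int screen   = DefaultScreen(dpy);
    int screen_w = DisplayWidth(dpy, screen);
    int screen_h = DisplayHeight(dpy, screen);

    XRRScreenResources *res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));

    /* Union of the areas actually driven by the active CRTCs. */
    int driven_w = 0, driven_h = 0;
    for (int i = 0; i < res->ncrtc; i++) {
        XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, res->crtcs[i]);
        if (crtc->mode != None) {
            if ((int)(crtc->x + crtc->width)  > driven_w) driven_w = crtc->x + crtc->width;
            if ((int)(crtc->y + crtc->height) > driven_h) driven_h = crtc->y + crtc->height;
        }
        XRRFreeCrtcInfo(crtc);
    }
    XRRFreeScreenResources(res);

    if (screen_w > driven_w || screen_h > driven_h)
        printf("Screen %dx%d exceeds driven area %dx%d: panning/virtual desktop likely\n",
               screen_w, screen_h, driven_w, driven_h);
    else
        printf("Screen %dx%d matches driven area: no virtual desktop detected\n",
               screen_w, screen_h);

    XCloseDisplay(dpy);
    return 0;
}

If the server supports RandR 1.3, XRRGetPanning can also report the configured panning area for a CRTC directly.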
So far I have been able to create an application where the Kinect sensor stays in one place. I have used speech recognition, EmguCV (OpenCV) and AForge.NET to help me process images and learn and recognize objects. It all works fine, but there is always scope for improvement, and I am posing some problems: [Ignore the first three; I want the answer to the fourth.]
The frame rate is horrible. It's like 5 fps even though it should be around 30 fps. (This is WITHOUT any of the processing.) My application runs fine; it gets both color and depth frames from the camera and displays them, yet the frame rate is still bad. The samples run great, around 25 fps, and even though I ran the exact same code from the samples it just won't budge. :-( [There is no need for code; please tell me the possible problems.]
I would like to create a little robot on which the Kinect and my laptop will be mounted. I tried using the Mindstorms kit, but the low-torque motors don't do the trick. Please tell me how I can achieve this.
How do I supply power on board? I know that the Kinect uses 12 volts for the motor. But it gets that from an AC adapter. [I would not like to cut my cable and replace it with a 12 volt battery]
The biggest question: how in the world will it navigate? I have implemented A* and flood-fill algorithms. I have read this paper about a thousand times and got nothing. I have the navigation algorithm in my mind, but how on earth will it localize itself? [It should not use GPS or any other sensors, just its eyes, i.e. the Kinect.]
Any help would be awesome. I am a newbie, so please don't expect me to know everything. I have been searching the internet for 2 weeks with no luck.
Thanks a lot!
Localisation is a tricky task, as it depends on having prior knowledge of the environment in which your robot will be placed (i.e. a map of your house). While algorithms exist for simultaneous localisation and mapping, they tend to be domain-specific and as such not applicable to the general case of placing a robot in an arbitrary location and having it map its environment autonomously.
However, if your robot does have a rough (probabilistic) idea of what its environment looks like, Monte Carlo localisation is a good choice. At a high level, it goes something like this:
Firstly, the robot should make a large number of random guesses (called particles) as to where it could possibly be within its known environment.
With each update from the sensor (i.e. after the robot has moved a short distance), it adjusts the probability that each of its random guesses is correct using a statistical model of its current sensor data. This can work especially well if the robot takes 360° sensor measurements, but it is not strictly necessary (see the sketch below).
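Here is what that loop might look like in C, with made-up motion and sensor models: gaussian_noise() and expected_range() are placeholders you would replace with your own odometry noise and a ray-cast against your map, and the single measured_range stands in for whatever you extract from the Kinect depth image.

#include <math.h>
#include <stdlib.h>

#define N_PARTICLES 1000

typedef struct { double x, y, theta, weight; } Particle;

static Particle particles[N_PARTICLES];
static Particle resampled[N_PARTICLES];

/* Placeholder: zero-mean Gaussian noise with standard deviation sigma. */
extern double gaussian_noise(double sigma);
/* Placeholder: the range the map says a sensor at (x, y, theta) should see. */
extern double expected_range(double x, double y, double theta);

/* Step 1: spread particles randomly over the known map (here 10 m x 10 m). */
void init_particles(void)
{
    for (int i = 0; i < N_PARTICLES; i++) {
        particles[i].x = 10.0 * rand() / RAND_MAX;
        particles[i].y = 10.0 * rand() / RAND_MAX;
        particles[i].theta = 6.2831853 * rand() / RAND_MAX;  /* 0..2*pi */
        particles[i].weight = 1.0 / N_PARTICLES;
    }
}

/* Step 2: after the robot moves by (dist, dtheta), move every particle the
 * same way plus noise, re-weight it by how well the measured range matches
 * what the map predicts for that particle, then resample. */
void update(double dist, double dtheta, double measured_range)
{
    double total = 0.0;

    for (int i = 0; i < N_PARTICLES; i++) {
        Particle *p = &particles[i];
        p->theta += dtheta + gaussian_noise(0.05);
        p->x += (dist + gaussian_noise(0.02)) * cos(p->theta);
        p->y += (dist + gaussian_noise(0.02)) * sin(p->theta);

        double err = measured_range - expected_range(p->x, p->y, p->theta);
        p->weight *= exp(-(err * err) / (2.0 * 0.2 * 0.2));  /* sensor sigma 0.2 m */
        total += p->weight;
    }
    if (total <= 0.0) { init_particles(); return; }           /* filter diverged */

    /* Systematic resampling: keep particles in proportion to their weight so
     * the cloud gradually collapses onto the most likely pose. */
    double step = total / N_PARTICLES, target = step / 2.0, acc = 0.0;
    int j = 0;
    for (int i = 0; i < N_PARTICLES && j < N_PARTICLES; i++) {
        acc += particles[i].weight;
        while (j < N_PARTICLES && target <= acc) {
            resampled[j] = particles[i];
            resampled[j].weight = 1.0 / N_PARTICLES;
            target += step;
            j++;
        }
    }
    for (; j < N_PARTICLES; j++) {               /* guard against FP rounding */
        resampled[j] = particles[N_PARTICLES - 1];
        resampled[j].weight = 1.0 / N_PARTICLES;
    }
    for (int i = 0; i < N_PARTICLES; i++) particles[i] = resampled[i];
}

In practice you would weight each particle against many depth readings per update rather than a single range, but the structure stays the same.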
This lecture by Andrew Davison at Imperial College London gives a good overview of the mathematics involved. (The rest of the course will most likely be very interesting to you as well, given what you are trying to create). Good luck!
A while ago I came across this answer that introduced me to the (at least for me) obscure ISO 5218: a standard for representing human sexes (or is it genders? Thanks, @Paul).
For a pet project I'm working on I need my database schema to store the skin color of a person, and I'm wondering if a similar standard exists. All my life I've heard people using terms such as "White", "Caucasian", "Black", "Blonde", "Brunette", "Afro", "Albino" and so on, but after some research in Wikipedia I've realized that everybody is wrong, because those words can all have different meanings:
White: yeah, it's a color
Caucasian: defines the race
Black: yet another color
Blonde: skin or hair color
Brunette: again, skin or hair color
Afro: "hairdo"?!
Albino: also represents more than the skin color
Wikipedia has the following about human races:
Caucasoid
Congoid
Capoid
Mongoloid
Australoid
Seriously, Mongoloid?! I don't know about its connotations in English, but in my native language (Portuguese) that's a synonym for a person who has Down syndrome...
This Wikipedia page also has some interesting additional information:
Johann Friedrich Blumenbach (1752-1840), one of the founders of what some call scientific racism theories, came up with the five color typology for humans: white people (the Caucasian or white race), more or less black people (the Ethiopian or black race), yellow people (the Mongolian or yellow race), cinnamon-brown or flame colored people (the American or red race) and brown people (the Malay or brown race).
The problem with using races (besides the horrific names chosen and the scientific racism) is that they don't necessarily represent the skin color of a person... Take the following photo from Wikipedia:
The most serious attempt I could find to classify skin color is the Von Luschan's chromatic scale:
Most people, however, are not aware of their von Luschan rating (myself included). I also thought of having the user visually specify their skin tone color, but that could lead to problems due to the different color profiles used by operating systems and monitors.
There is also a more general grouping of the von Luschan scale, used to classify sun-tanning risk:
von Luschan 1-5 (very light).
von Luschan 6-10 (light).
von Luschan 11-15 (intermediate).
von Luschan 16-21 ("Mediterranean").
von Luschan 22-28 (dark or "brown").
von Luschan 29-36 (very dark or "black").
Since this can become a very sensitive topic for some people, I'm wondering what would be the best way to store this information in a normalized database. Is there a correct, globally accepted standard for describing skin color that avoids hurting sensibilities, uses straightforward terms, and avoids complicated and unfamiliar definitions such as the von Luschan scale?
Similar standards exist for eye and hair color. How would you approach the skin tone terminology?
olayforyou.com defines these skin tones
(image: http://www.freeimagehosting.net/uploads/151ab0ddd7.jpg)
very fair
fair
olive
dark
very dark
Any person who uses cosmetics regularly would understand these terms. The rest of us are just guessing :-)
I'd do something like the Nintendo Wii's Mii Editor and just show several swatches of colors. Even if the monitor isn't calibrated, if someone sees them all on the screen at once they should be able to make the correct choice.
You can then give the color an internal name and do your data mining on that.
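A small sketch of that idea in C: the internal codes, hex swatches and labels below are invented for illustration, not taken from any standard, and only the internal code would end up in the database.

#include <stdio.h>

/* Illustrative only: code, swatch and label are all made up for this example. */
struct skin_tone {
    const char *code;    /* what you store and data-mine on              */
    const char *hex;     /* swatch rendered to the user on screen        */
    const char *label;   /* optional human-readable name (can be hidden) */
};

static const struct skin_tone tones[] = {
    { "T1", "#F5D5C0", "very fair" },
    { "T2", "#E8B98F", "fair"      },
    { "T3", "#C68E5E", "olive"     },
    { "T4", "#8C5A33", "dark"      },
    { "T5", "#4A2C1A", "very dark" },
};

int main(void)
{
    /* In a real UI you would render the hex values as clickable swatches;
     * here we just list them. */
    for (unsigned i = 0; i < sizeof tones / sizeof tones[0]; i++)
        printf("%s  %s  (%s)\n", tones[i].code, tones[i].hex, tones[i].label);
    return 0;
}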
You may wish to consider skin tones defined by cosmetic companies as these can be quite exact and even refer to tanning effects.
I personally wouldn't define a domain; make it a text box and let everybody fill in whatever they want. I prefer this approach simply because it can be a polemic and potentially offensive subject, like this one.
EDIT: Or, what about not displaying any names, but colors instead? Use the von Luschan scale with a "Select your color:" label. You don't need to name anything, and you can still define a domain in your database.
Despite the monitor calibration issues, I think that the von Luschan chromatic scale, along with the numbers and textual descriptions you have shown, is the best option. Sure, it's a bit subjective, but so are all the alternatives.
Seeing the entire available range of selections, and visualizing very light and very dark people, it's not too hard to come up with an estimate of where you lie on the scale.
Plus, the combination of numbers, colors, and words makes it easier to home in on your approximate color.
EDIT:
I do see that you have expressed doubts about using the chromatic scale in your post - I just thought you might consider these points. People don't have to be familiar with the scale ahead of time to use it. I've never seen it before but it makes perfect sense to me.
One skin tone scale that most people have come into contact with is the simplified Fitzpatrick scale used for the emoji skin tone modifiers in Unicode:
🏻 U+1F3FB Emoji Modifier Fitzpatrick Type-1-2 (light skin tone)
🏼 U+1F3FC Emoji Modifier Fitzpatrick Type-3 (medium-light skin tone)
🏽 U+1F3FD Emoji Modifier Fitzpatrick Type-4 (medium skin tone)
🏾 U+1F3FE Emoji Modifier Fitzpatrick Type-5 (medium-dark skin tone)
🏿 U+1F3FF Emoji Modifier Fitzpatrick Type-6 (dark skin tone)
Considering that Unicode is globally accepted as a text-encoding standard, it could be argued that using one of these characters to represent and encode a skin tone is the least controversial option.
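As a small illustration, the sketch below stores only the Fitzpatrick modifier code point and appends it to an arbitrary base emoji (U+1F44B, waving hand) when rendering. It assumes a UTF-8 terminal and hand-rolls the encoding just to show that the modifier is simply a second code point that follows the base character.

#include <stdio.h>

/* Encode a Unicode code point as UTF-8 into buf; returns the byte count.
 * Covers the full range, with no error checking (sketch only). */
static int utf8_encode(unsigned int cp, char *buf)
{
    if (cp < 0x80) { buf[0] = (char)cp; return 1; }
    if (cp < 0x800) {
        buf[0] = (char)(0xC0 | (cp >> 6));
        buf[1] = (char)(0x80 | (cp & 0x3F));
        return 2;
    }
    if (cp < 0x10000) {
        buf[0] = (char)(0xE0 | (cp >> 12));
        buf[1] = (char)(0x80 | ((cp >> 6) & 0x3F));
        buf[2] = (char)(0x80 | (cp & 0x3F));
        return 3;
    }
    buf[0] = (char)(0xF0 | (cp >> 18));
    buf[1] = (char)(0x80 | ((cp >> 12) & 0x3F));
    buf[2] = (char)(0x80 | ((cp >> 6) & 0x3F));
    buf[3] = (char)(0x80 | (cp & 0x3F));
    return 4;
}

int main(void)
{
    /* What you would actually store per person: one of the five modifiers. */
    unsigned int stored_tone = 0x1F3FD;   /* U+1F3FD, medium skin tone */
    unsigned int base_emoji  = 0x1F44B;   /* U+1F44B WAVING HAND (arbitrary) */

    char out[9] = {0};
    int n = utf8_encode(base_emoji, out);
    n += utf8_encode(stored_tone, out + n);   /* modifier follows the base */
    printf("%.*s\n", n, out);
    return 0;
}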