How easy is it to convert AMP-HTML to MIP-HTML?

All the documentation for MIP pages seems to be in Chinese, so I'm really not sure how it works; however, some sources claim it is basically a clone of AMP.
Has anyone got any experience of converting a full AMP build into MIP pages? Is it a case of swapping out like-for-like code, or is it more of a complete rebuild?

It was forked from AMP a long time ago, and it has specific components that cater to the China market (e.g. mip-appdl, a component for downloading apps that supports multiple Android app store download sources).
If you are mostly using basic components and extensions, you might be able to get pretty far by just regex-replacing "amp-" with "mip-" and swapping the AMP runtime for "https://c.mipcdn.com/static/v1/mip.js".
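As a rough illustration of that find-and-replace idea, here is a minimal sketch in TypeScript/Node. The tag prefix and the MIP runtime URL come from the answer above; the file names, the AMP runtime regex, and everything else are illustrative, and the output would still need to be checked against the MIP validator.

    // amp-to-mip.ts - naive first pass at converting an AMP page to MIP.
    // Not every amp-* component has a mip-* twin, so per-component review is still needed.
    import { readFileSync, writeFileSync } from "node:fs";

    const ampHtml = readFileSync("page.amp.html", "utf8");

    const mipHtml = ampHtml
      // <amp-img ...> -> <mip-img ...>, </amp-img> -> </mip-img>, etc.
      .replace(/<(\/?)amp-/g, "<$1mip-")
      // swap the AMP runtime for the MIP runtime
      .replace(
        /<script[^>]*src="https:\/\/cdn\.ampproject\.org\/v0\.js"[^>]*><\/script>/,
        '<script src="https://c.mipcdn.com/static/v1/mip.js"></script>'
      );

    writeFileSync("page.mip.html", mipHtml);

The AMP marker on the <html> tag and component-specific attributes still need manual attention, so treat this as a starting point rather than a converter.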

Related

What's a good way to navigate code base and find source for a webpage

I'm new to frontend development and thinking about a good way to find the source code for a webpage in our code base. What I usually do is go to the Elements tab in the Chrome dev tools, find a distinctive class name, and search for it in the code base to locate the file. I feel there should be a better way to do this. I tried the Sources tab in the dev tools, but it shows tons of files and folders in the navigation column. I also tried the Components tab since we're using React, but the component names are minified to single letters. So I want to get suggestions from you folks. How do you usually do this? Thanks!
You have the right idea; the problem is that you are looking at the minified (presumably production) version of the website. In general, while developing a website, you run a development server, in which all of the code (mostly) appears as it is written in your IDE/editor. That way you can find component names and inspect the source code through the Chrome dev tools.
You should talk to whoever is currently responsible for the code to help you get a development server running on your machine. Then you find the component names and do a "find in files" search through your IDE/editor to see what they are and where they are used in the code base. There may be many results that you have to sift through. That's par for the course in large code bases until you become more familiar with what goes where, and even afterwards.
I will say: even things that appear simple can be fiendishly complex, so it would be useful if the owner of the code could give you a rough outline of how things are organised, and why, to make navigating the code base easier. But it will always be a bit hard, and depending on how clean the code is, it might be nearly impenetrable. Good luck.
There are many ways to find source code or debug code:
①You can use the Chrome dev tools.
②You can use the debugger in VS.
③You can debug your code by putting a debugger statement in your JavaScript code.
④The browser has good functionality for finding code.

OCR reader in reactjs

I am new to React and I have a task in hand. I need to build an application which is capable of scanning a MyKad (Malaysian ID card) through the camera. Details like name, address, and image need to be extracted. I googled a bit about the open source Tesseract, but it is not giving me the right information, and some of the information it does return is misspelled. Can anyone guide me in the right direction?
Eventually I will develop a PWA and deploy it to mobile devices as well.
If you're looking for a free solution, Tesseract.js is the way to go: https://tesseract.projectnaptha.com
You need to be aware that reading data from MyKads will require not only an OCR component but also specifying the semantics of the document. Meaning, you'll need to tell Tesseract where the name is, where the address is, etc.
Also, Tesseract will not be able to detect the document in the image. For this you'll need to use a different tool.
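For the plain OCR part, a minimal Tesseract.js sketch looks roughly like this; the language code, image source, and the idea of mapping lines to fields are illustrative, and the MyKad-specific semantics mentioned above are not handled here.

    // Minimal OCR of a captured image with Tesseract.js.
    // npm install tesseract.js
    import Tesseract from "tesseract.js";

    async function readCardText(image: File | string): Promise<string> {
      const { data } = await Tesseract.recognize(image, "eng", {
        logger: (m) => console.log(m.status, m.progress), // progress updates
      });
      return data.text; // raw text; you still have to map lines to fields yourself
    }

    // e.g. from an <input type="file" accept="image/*" capture="environment"> in React:
    // const text = await readCardText(event.target.files[0]);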
Disclaimer: I'm working at Microblink, where we develop commercial OCR products, including one for reading data from IDs. For PWAs we have a JavaScript / TypeScript component which uses WASM to process the IDs. It supports not only MyKads but more than 500 document types worldwide.
Github link: https://github.com/BlinkID/blinkid-in-browser

How to collect data from a website

Preface: I have a broad, college-level knowledge of a handful of languages (C++, VB, C#, Java, many web languages), so go with whichever you like.
I want to make an Android app that compares numbers, but in order to do that I need a database. I'm a one-man team, and the numbers get updated biweekly, so I want to grab those numbers off of a wiki that gets updated as well.
So my question is: how can I access information from a website using one of the languages above?
What I understand the problem to be: some entity generates a data set (i.e. numbers) every other week, and you need to download that data set for processing (e.g. sorting).
Ideally, the web site maintaining the wiki would provide a service, like a RESTful interface, to easily gather the data. If that were the case, I'd go with any language that provides easy manipulation of HTTP requests & responses and makes your data manipulation easy. As a previous poster said, Java would work well.
If you are stuck with the wiki page, you have a couple of options. You can parse the HTML your browser receives (Perl comes to mind as a decent language for that), or you can use tools built for that purpose, such as the aforementioned Jsoup.
Your question also mentions some implementation details, such as needing a database. There isn't enough contextual information for me to know whether that's optimal, so I won't address that aspect of the problem.
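If you do end up parsing the wiki page yourself, the shape of the code is roughly the same in any of the languages you listed. Here is a small sketch in TypeScript using cheerio (a jQuery-like HTML parser playing the role Jsoup would play in Java); the URL, the table selector, and the number parsing are placeholders for whatever markup the wiki actually serves.

    // npm install cheerio  (Node 18+ has fetch built in)
    import * as cheerio from "cheerio";

    async function fetchNumbers(url: string): Promise<number[]> {
      const res = await fetch(url);
      const $ = cheerio.load(await res.text()); // parse the HTML

      const numbers: number[] = [];
      // Placeholder selector: adjust to the wiki's actual markup,
      // e.g. the cells of the table that holds the biweekly numbers.
      $("table.wikitable td").each((_, cell) => {
        const value = parseFloat($(cell).text().replace(/,/g, ""));
        if (!Number.isNaN(value)) numbers.push(value);
      });
      return numbers;
    }

    // fetchNumbers("https://example.org/wiki/SomePage").then(console.log);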
http://jsoup.org/ is a great Java tool for accessing content on HTML pages.
Consider https://scraperwiki.com/ - it's a site where users can contribute scrapers. It's free as long as you let your scraper be public. The results of your scraper are exposed as csv and JSON.
If you don't know what a "scraper" is, google "screen scraping" - it's a long and frustrating tradition for coders, who have dealt with the same problem you have since the beginning of networked computing.
You could check out http://web-harvest.sourceforge.net/
For Python, BeautifulSoup is one of the most tolerant HTML parsers out there. The documentation also lists similar libraries in Ruby and Java, so you'll probably find something relevant there.

Server-side options to deliver different page structure (HTML) to different mobile devices

I am researching best practices for developing 'classic' style mobile sites, i.e., mobile sites that are delivered and experienced as mobile HTML pages vs. small JavaScript applications (jQuery Mobile, Sencha, etc.).
There are two prevailing approaches:
Deliver the same page structure (HTML) to all mobile devices, then use CSS media queries or JavaScript to improve the experience for more capable devices.
Deliver an entirely different page structure (and possibly content) to devices with enhanced capabilities.
I'm specifically interested in best practices for the second approach. Two good examples are:
MIT's mobile site: different for Blackberries and feature(less) phones than for iOS & Android devices, but available at the same URLs -- http://m.mit.edu/
CNN's mobile site: ditto -- http://m.cnn.com/
I'd like to hear from people here at SO who have actually worked on something like this and can explain the best practices for delivering this type of device-dependent structure/content/experience.
I don't need a primer on mobile user-agent detection, or WURFL, or any of the concepts covered in other (great) SO threads like this one. I've used jQuery Mobile and Sencha Touch and I'm familiar with most approaches for delivering the final mobile experience, so no pointers required there either thanks.
What I really would like to understand is how these specific types of experiences are delivered in terms of server-side detection and delivery based on user-agent groups -- where there's one stripped-down page structure (different HTML) delivered to one group of devices, and another, richer type of HTML document delivered to newer devices, but both at the same sub-domain / URLs.
Hope that all makes sense. Many thanks in advance.
At NPR, we use a server-side 'application' to serve up the correct HTML/CSS/etc. depending on whether the user is on a high-end device or a lower-tier phone.
So, when a mobile device pings an npr.org page, our servers use a user-agent detection method to point it to the corresponding m.npr.org page. Once directed to the m.npr.org URL, the web app - which is written in Groovy, but I think could potentially be a number of things - sends back either the touch version of the site or the simpler, stripped-down content. The web app's choice is based at least partly on WURFL data.
I don't have enough rep points to post a comparison with screenshots, so I'll have to point you to the sites themselves.
You can see this in your desktop browser by typing in m.npr.org to see the stripped-down site. And you can override the default device detection by adding the parameter ?devicegate.client=iPhone_3_0 to see the touch version you would see if you just went to npr.org on your smartphone. If you view the source, you can see how different HTML & CSS are being served at the same subdomain.
Hope it helps seeing something like this in the wild. Does that make sense?
A common way to detect which format a mobile device needs is the Accept header:
application/xhtml+xml > XHTML
text/vnd.wap.wml > old WML WAP pages
...
On newer devices which can handle all the desktop HTML formats, you can use the user agent.
Then you have to ask yourself what you want to do:
Switch to another stylesheet (only works with newer devices).
Switch to another view logic, like building WML page templates.
Switch to a completely different page.
I think the second approach is the best one. Many web frameworks make it easy to switch to another view logic without rewriting the rest (the MVC pattern in all its glory).
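To make the "switch to another view logic" idea concrete, here is a stripped-down sketch with Node/Express. The header checks, the capability split, and the three template names are purely illustrative; a real site would lean on WURFL or another maintained device database rather than a couple of string tests.

    // npm install express ejs  (any server-side template engine works here)
    import express from "express";

    const app = express();
    app.set("view engine", "ejs");

    app.get("*", (req, res) => {
      const accept = req.headers["accept"] ?? "";
      const ua = req.headers["user-agent"] ?? "";

      // Very old WAP devices: serve the WML-style view.
      if (accept.includes("text/vnd.wap.wml")) {
        return res.render("page-wml", { /* same model, different view */ });
      }

      // Crude capability split on the user agent for everything else.
      const isTouch = /iPhone|Android/i.test(ua);
      res.render(isTouch ? "page-touch" : "page-basic", { /* same model */ });
    });

    app.listen(3000);

The point is that only the view chosen at the end changes; the controller and model code stay the same for every device group.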
I have two examples for you.
Read up on how Facebook achieves this using XHP to abstract different output for different markups: One Mobile Site to Serve Thousands of Phones
There will be a lot of good stuff in their actual implementation which I wish was available.
I use a framework called HawHaw, which lets you write your app once (in PHP objects or XML files) and outputs the correct markup to the device based on a few checks (Accept header, agent string, etc.).

How to scrape logos from websites?

First off, this is not a question about how to scrape websites. I am fully aware of the tools available to me for scraping (css_parser, Nokogiri, etc. - I'm using Ruby to do the scraping).
This is more of an overarching question on the best possible solution to scrape the logo of a website starting with nothing but a website address.
The two solutions I've begun to create are these:
Use Google AJAX APIs to do an image search that is scoped to the site in question, with the query "logo", and grab the first result. This gets the logo, I'd say, about 30% of the time.
The problem with the above is that Google doesn't really seem to care about CSS image-replaced logos (i.e. H1 text that is image-replaced with the logo). The solution I've tentatively come up with is to pull down all CSS files, scan for url() declarations, and then look for the words header or logo in the file names.
Solution two is problematic because of the many idiosyncrasies of all the people who write CSS for websites. They use Header instead of logo in the file name. Sometimes the file name is random, saying nothing about a logo. Other times, it's just the wrong image.
I realize I might be able to do something with some sort of machine learning, but I'm on a bit of a deadline for a client and need something fairly capable soon.
So with all that said, if anyone has any "out of the box" thinking on this one, I'd love to hear it. If I can create a solution that works well enough, I plan on open-sourcing the library for any other interested parties :)
Thanks!
Check out this API by Clearbit. It's super simple to use:
Just send a query to:
https://logo.clearbit.com/[enter-domain-here]
For example:
https://logo.clearbit.com/www.stackoverflow.com
and get back the logo image!
More about it here
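If you want to pull the images down as part of your scrape, the request is just a plain GET against that URL pattern. A small sketch, written here in TypeScript rather than the Ruby you're using, with illustrative file handling:

    import { writeFile } from "node:fs/promises";

    // Fetch a site's logo via the Clearbit endpoint described above and save it to disk.
    async function saveLogo(domain: string, outPath: string): Promise<void> {
      const res = await fetch(`https://logo.clearbit.com/${domain}`);
      if (!res.ok) throw new Error(`No logo found for ${domain} (HTTP ${res.status})`);
      const bytes = Buffer.from(await res.arrayBuffer());
      await writeFile(outPath, bytes);
    }

    // saveLogo("www.stackoverflow.com", "stackoverflow-logo.png");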
I had to find logos for ~10K websites for a previous project and tried the same technique you mentioned of extracting images with "logo" in the URL. My variation was that I loaded each webpage in WebKit so that images referenced from CSS or added by JavaScript were also loaded. This technique gave me logos for ~40% of websites.
Then I considered creating an app like Nick suggested to manually select the logo for the remaining websites, however I realized it was more cost effective to just give these to someone cheap (who I found via Elance) to do the work manually.
So I suggest don't bother solving this properly with a fully technical solution - outsource the manual labour.
Creating an application will definitely help you, but I believe in the end there will be some manual work involved. Here's what I would do.
Have your application store in a database links to all images on a website that are larger than a specified dimension, so that you can weed out small icons.
Then you can set up a form to access these results. You may want to set up the database table to store the website URL and the relationship between the URL and the image links.
Even if it were possible to write an application that could truly figure out whether an image is a logo or not, it seems like it would be a massive amount of code. In the end, it would probably weed out even more than the above, but you have to take into account that it could be faster for a human to visually parse the results than the time it would take you to write and test the complex code.
Yet another simple way to solve this problem is to get all leaf nodes and grab the first
<a><img src="http://example.com/a/file.png" /></a>
You can look for projects that extract HTML leaf nodes on the net, or use regular expressions to get all HTML tags.
I used a C# console app with the HtmlAgilityPack NuGet package to scrape logos from over 600 sites.
The algorithm is to get all images that have "logo" in the URL.
The challenges you will face during such extraction are:
Relative image URLs
The base URL being a CDN over HTTP/HTTPS (if you don't know the protocol before you make the request)
Images having ? or & with a query string at the end
With those things in mind I got approximately a 70% success rate, but some of the images were not actual logos.
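The same "grab images whose URL mentions logo" pass, sketched in TypeScript with cheerio instead of the C#/HtmlAgilityPack used above. The selector and URL handling are simplified, but the relative paths, protocol-relative CDN hosts, and query strings called out above are exactly what the URL resolution step is trying to smooth over.

    // npm install cheerio  (Node 18+ has fetch built in)
    import * as cheerio from "cheerio";

    async function findLogoCandidates(siteUrl: string): Promise<string[]> {
      const res = await fetch(siteUrl);
      const $ = cheerio.load(await res.text());

      const candidates: string[] = [];
      $("img[src]").each((_, img) => {
        const src = $(img).attr("src");
        if (!src || !/logo/i.test(src)) return;
        // Resolve relative and protocol-relative URLs against the page URL,
        // and drop any query string.
        const absolute = new URL(src, siteUrl);
        absolute.search = "";
        candidates.push(absolute.toString());
      });
      return candidates;
    }

    // findLogoCandidates("https://www.example.com/").then(console.log);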
