Is it possible to have a mobile website that can still function if there's no internet connection?
The user should still be able to use the website (if they have visited that page before), see the data that was loaded before, and add new items (cached locally).
When internet connection comes back online, all changed local data should be pushed online.
This should be a completely web-based solution, not a native app.
You should have a look at HTML5 offline storage, see http://diveintohtml5.ep.io/offline.html and the Offline Web Applications spec as a start. There are also quite a few posts here on SO.
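For the "cache locally, push when back online" half, here is a minimal sketch of an outbox queue (the /api/sync endpoint is hypothetical, standing in for whatever your server exposes):

    // Queue changes in localStorage while offline.
    function queueChange(change) {
      var outbox = JSON.parse(localStorage.getItem('outbox') || '[]');
      outbox.push(change);
      localStorage.setItem('outbox', JSON.stringify(outbox));
    }

    // When the connection comes back, push everything that was queued.
    window.addEventListener('online', function () {
      var outbox = JSON.parse(localStorage.getItem('outbox') || '[]');
      outbox.forEach(function (change) {
        fetch('/api/sync', {  // hypothetical sync endpoint
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(change)
        });
      });
      localStorage.setItem('outbox', '[]');
    });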
Bookmarklets work when a user is offline. The trick with a bookmarklet is that it's entirely self-contained JavaScript, wrapped up in such a way that it can live within the bookmark itself, e.g. a javascript: URL. You can also have a data: URL as a bookmark, which could be a complete HTML page. Usually these are base64 encoded with a MIME type.
Probably what I'd do is have a small base page as data:text/html;base64 which contained whatever offline content you cared about, but periodically tried to bootstrap the rest of the "real" content from wherever you host it.
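For illustration, a tiny sketch of such a page (the base64 form of just the <h1> line below is data:text/html;base64,PGgxPk9mZmxpbmU8L2gxPg==; a real page would encode the whole thing, and the script host is hypothetical):

    <h1>Offline</h1>
    <script>
      // Periodically try to bootstrap the "real" content from its host.
      (function tryBootstrap() {
        var s = document.createElement('script');
        s.src = 'https://example.com/app.js'; // hypothetical host
        s.onerror = function () { setTimeout(tryBootstrap, 30000); }; // still offline, retry later
        document.head.appendChild(s);
      })();
    </script>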
MDN docs state:
To enable a web page to contain an <img> element whose src attribute points to this image,
you could specify "web_accessible_resources" like this:
"web_accessible_resources": ["images/my-image.png"]
The file will then be available using a URL like:
moz-extension://<extension-UUID>/images/my-image.png
<extension-UUID> is not your extension's ID.
It is randomly generated for every browser instance.
This prevents websites from fingerprinting a browser by examining
the extensions it has installed.
So, I would think that these resources cannot be read by any web page outside the extension, since they would need to know the random UUID.
However, the same MDN docs also state:
Note that if you make a page web-accessible, then any website may then link or redirect
to that page. The page should then treat any input (POST data, for example)
as if it came from an untrusted source, just as a normal web page should.
I don't understand how "any website may then link or redirect to that page". Wouldn't it need to know the random UUID? How else could a webpage access this resource?
The point of Web Accessible Resources is to be able to include them in a web context.
While you can communicate the random UUID to the webpage so that it can use the file, the file doesn't have to be included by the website's own code. Here's a hypothetical scenario:
You're writing an extension that adds a button to evil.com site's UI. That button is supposed to have an image on it.
You bundle the image with your extension, but to add it as src or CSS property to the webpage you need to be able to reference it from a web context.
So, you make it web-accessible, and then inject your UI element with a content script.
Perfectly plausible scenario.
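A sketch of what that content script might look like, assuming the images/my-image.png resource from the MDN example above is listed in web_accessible_resources:

    // Content script (sketch): inject a button whose icon is a web-accessible resource.
    var img = document.createElement('img');
    // Resolves to moz-extension://<random-UUID>/images/my-image.png -- note that
    // this URL is now visible to the page through the DOM.
    img.src = browser.runtime.getURL('images/my-image.png');

    var button = document.createElement('button');
    button.appendChild(img);
    document.body.appendChild(button);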
Note that a random third-party site villains-united.com can't just probe the URL to learn whether your extension is installed, since the URL is unique per browser instance. This is the intent behind WebExtensions' random UUIDs, as opposed to Chrome's fixed extension-ID model.
However, let's continue our hypothetical scenario, from a security perspective.
The operators of evil.com are unhappy with your extra UI. They add a script to their code that looks for added buttons.
That script can see the DOM properties of the button, including the address of the image. Now evil.com's code can see your UUID.
Since you're the good guy, your extension's source code is available somewhere, including the page that launches nuclear missiles when called (why you would have that, and why it would be web-accessible, is another matter; perhaps to provide the functionality to good-guys-last-resort.org).
evil.com's script now can reconstruct the URL of this trigger page and XHR it, plunging the planet into nuclear apocalypse. Oops. You probably should've checked the origin of that request.
Basically, if a web-accessible resource is used in a page, the UUID likely leaks to that page's context via the DOM, and that may not be a page you control.
Based on my custom URL parameters I process, I am trying to modify dynamically a meta tag I have id'ed in index.html like so:
<meta name="og:image" content="http://example.com/someurl.jpg" id="ogImage"/>
The code below in my home.ts seems to work:
document.getElementById('ogImage').setAttribute("content", Media.ImageURL);
I can verify this via the browser dev console (Elements tab).
However, when I view the page from Facebook via their object graph debugger at
https://developers.facebook.com/tools/debug/og/object/
It appears to see the default
http://example.com/someurl.jpg
as if index.html is shipped before my home.ts gets a chance to make the update.
Perhaps my understanding is flawed and there is a better way to do this.
Thank you.
Note1: initially I was thinking I had to create some Angular binding between index.html and one of my services, but I could not locate any sample code; the closest I came to was this post:
How can I update meta tags in AngularJS?
But I don't know how to apply it to my ionic2/3 code, so I opted for the document.getElementById approach.
Note2: the ultimate goal here is to share a link on social media (web or app) like Facebook, or in a messenger like Viber/Skype, etc., and have it resolve to a meaningful image, title, and description that drive visits back to the site via the browser, or via the app if the user clicking the link is on a mobile device with my app version of the site installed.
Note3: if you decide to point me to Ionic deep linking, please provide code that matches the above, because I could not understand how to apply it to my case.
If you are trying to implement dynamic Open Graph meta tag values in your pages, you will need a server-side scripting language like PHP. Such a script runs on the server and updates the pages as needed; the pages are then served to the requesting site or application.
Client-side scripting (i.e. JavaScript) is usually ignored when a site or app is merely visiting your site/link for the purpose of extracting (a.k.a. scraping, parsing the HTML) information such as that provided by the Open Graph meta tags (og:title, og:description, og:image...).
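A minimal sketch of that idea, using Node/Express here instead of PHP (the route and the image lookup are hypothetical):

    const express = require('express');
    const app = express();

    // Hypothetical lookup of the image for a given article id.
    const imageFor = id => 'http://example.com/images/' + id + '.jpg';

    // Render the og: tags on the server, so scrapers that never run
    // JavaScript still receive the correct values in the HTML.
    app.get('/article/:id', (req, res) => {
      res.send('<!DOCTYPE html><html><head>' +
        '<meta property="og:image" content="' + imageFor(req.params.id) + '"/>' +
        '</head><body>...</body></html>');
    });

    app.listen(3000);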
I have a personal project which has consumed my free time and effort for about a year without significant profit. I have problems with its appearance in Google and would really appreciate getting help here.
This project (http://yuppi.com.ua - similar to Craigslist in the US) is a web-based AngularJS 1.2 application that uses a PHP REST API, hosted on GoDaddy. In order to make this application popular, it has to be very visible on the internet and very searchable in Google, and users have to be able to share pages via social networks or Skype.
According to Google's specification, Google's crawlers don't run JavaScript to get the content of a web page before indexing it, so I've added an _escaped_fragment_ page that displays the content of the web page without JavaScript. For example:
Page: http://yuppi.com.ua/#!/items/sub/18/_
Dirty: yuppi.com.ua/?_escaped_fragment_=/items/sub/18/_
This dirty page is redirected here, where Google will see the content:
http://yuppi.com.ua/server/crawler_proxy/routee.php?path=/items/sub/18/
So basically I have two versions of the HTML for that page. One version is the one available to users, which has styles, a lot more HTML tags, etc. The second is the version for the Google crawler: very lightweight, without any styles. I expect to see a clean link to my site in Google, not a dirty one.
If you search Google for all links to the site, you will see that one of the links displays its "dirty" state.
Another problem is sharing links in Skype.
When I send a link to someone, I expect the link to be transformed into a thumbnail preview, but that doesn't happen. Instead I see an ugly link to my website.
Please help me understand how to make everyone happy: users, the Google crawler, GoDaddy, and me.
I was encountering the same problems last year with a big project, and we ended up using https://prerender.io/.
It's a prerendering system that works with a PhantomJS browser: it detects bot requests and renders the full HTML template. It also maintains a cache so that a template that hasn't changed isn't rendered again.
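If you have (or can add) a Node layer in front of your site, wiring it up is small; a sketch using prerender.io's Express middleware (the token is a placeholder):

    const express = require('express');
    const prerender = require('prerender-node');

    const app = express();
    // Crawler requests get the prerendered HTML; normal users get the SPA.
    app.use(prerender.set('prerenderToken', 'YOUR_TOKEN')); // placeholder token
    app.use(express.static('public'));
    app.listen(3000);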
Hope it helps.
I have a WPF intranet app running in Trusted mode (local only).
I would like the users to be able to upload an image and attach it to an article in my newsletters section. I am having trouble deciding where these images should be stored.
Please provide me with your opinions.
At present I have a few ideas myself:
I could have an ASPX page that runs alongside this app and is hosted inside a browser control (iframe). This page could then handle the upload and display of the image.
I could also have the users copy images directly to a network share.
It seems that there should be a more elegant solution that I am not aware of.
Any ideas?
Don't force the solution towards ASPX just because you know how to do it there. It's unnatural to build a page, host a browser to show that page, etc., just so you can upload an image.
It's actually simpler to do this in a desktop client than on a web page. You have an open-file dialog: use it to get the path of the file the user wants to upload, and once you have that you can either:
copy it (inside your application) to your share,
or if you have a service - send it through some method call,
or you can even store it inside a database (recommended if the files are small)
There are really lots of options here... it depends on whether your client has a connection to the database, whether you have a service in between, etc.
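For example, the first option (copying to a share inside your application) is only a few lines; a sketch, with a hypothetical share path and handler name:

    // Sketch: pick an image with a file dialog, then copy it to a network share.
    private void UploadImage_Click(object sender, RoutedEventArgs e)
    {
        var dialog = new Microsoft.Win32.OpenFileDialog
        {
            Filter = "Images|*.png;*.jpg;*.jpeg"
        };
        if (dialog.ShowDialog() == true)
        {
            // \\server\newsletter-images is a hypothetical share path.
            var destination = System.IO.Path.Combine(
                @"\\server\newsletter-images",
                System.IO.Path.GetFileName(dialog.FileName));
            System.IO.File.Copy(dialog.FileName, destination, overwrite: true);
        }
    }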
We are considering hosting the core of our site (everything that doesn't need to be dynamically generated) on a CDN, so that our root domain (e.g. "http://example.com/") would point to the CDN, then everything dynamic would either point to an alternate second-level domain (e.g. "http://search.example.com/ for searches) or be layered on top of the static content by AJAX calls to an alternate domain (e.g. http://ajax.example.com/).
This seems like something that would be very desirable for lots of sites, but I don't see much information, even on the CDN home pages, about whole-site caching. There is at least one obvious problem that occurs to me: we currently detect whether the user is coming from a mobile browser and serve mobile content if so. The problem is that, as far as I know, with most CDNs you can only store one version of a page, so if you cache the regular page, mobile browsers will see that instead of the mobile version (and obviously vice versa).
We could get around this to some degree by moving the mobile stuff to a separate domain like m.example.com but we would need the CDN to detect mobile browsers and redirect them to that domain (which we would also like to have hosted on the CDN, but pointing at the mobile content instead of the regular content, obviously).
It seems like this should be widely supported but I can't find much information on it. Has anyone done something similar? If so, what CDN did you use and how did you address this issue? Were there other significant hurdles that needed to be overcome?
Edited to add a couple of things I forgot:
We also considered redirecting to the mobile site using JavaScript, but then obviously older phones without JavaScript would be left out in the cold, and they are the ones that probably need the mobile version the most.
One constraint that may factor into any answers to this question is that we need the URLs of our primary site to be very specific for SEO purposes but we don't care at all about SEO for the mobile version.
We have rules at our CDN (EdgeCast) that cache multiple versions (desktop, iPhone, BlackBerry, etc.) of the same incoming URL. The CDN rules append a query string to the request to the origin server. Custom code at our origin server renders the proper version depending on the incoming query string. For example:
Desktop: CDN requests /?nomobile; origin server returns the desktop rendering
iPhone: CDN requests /?iphone; origin server returns the iPhone rendering
BlackBerry: CDN requests /?mobile; origin server returns the mobile rendering
As far as the CDN is concerned, these are 3 different URLs, so 3 different pages are cached. The query string is completely transparent to the end user. Even if you use a responsive design with media queries, this approach is incredibly valuable in giving you the flexibility to alter the HTML at the server level.
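On the origin side, the custom code can be a few lines; a sketch using Express (your origin stack and template names may differ):

    const express = require('express');
    const app = express();
    app.set('view engine', 'pug'); // any server-side template engine works

    // The CDN appends ?nomobile / ?iphone / ?mobile before hitting the origin.
    app.get('/', (req, res) => {
      if ('iphone' in req.query) return res.render('home-iphone');
      if ('mobile' in req.query) return res.render('home-mobile');
      return res.render('home-desktop'); // ?nomobile and the default case
    });

    app.listen(3000);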
If the rendering of your page differs for various devices (e.g. mobile phones), it is not static content and should not be on your CDN.
Put only real static files on your CDN and consider a different caching strategy for your pages.
Anyhow, instead of detecting the client's browser via JavaScript, you could also do this on the server side, and I would actually recommend that over the JavaScript approach. Then you could implement the redirect.
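A sketch of that server-side detection and redirect (Express here; m.example.com matches the question's example, and the user-agent pattern is a rough assumption):

    const express = require('express');
    const app = express();

    // Redirect mobile user agents to the mobile host, so even phones
    // without JavaScript support end up on the mobile version.
    app.use((req, res, next) => {
      const ua = req.headers['user-agent'] || '';
      if (/Mobi|Android|BlackBerry|iPhone/i.test(ua) && req.hostname !== 'm.example.com') {
        return res.redirect(301, 'http://m.example.com' + req.originalUrl);
      }
      next();
    });

    app.listen(3000);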
Hope that helps.