Facebook Debugger needs several refreshes before returning proper og:image value - angularjs

I've been using the Facebook Debugger repeatedly to work around a problem with the og:image tags on our AngularJS site. My content editor has to clear Facebook's cache several times before the correct meta comes through. Here is our setup:
We are using a PhantomJS look-aside (with disk cache enabled) for all user agents matching Facebook, so that FB requests are served our prerendered static HTML markup. We have verified (via curl 'http://localhost:9000/path/to/my/page' | grep og:image) that the proper og:image tag is present before trying to share or present a new object to the FB Open Graph.
We have to consistently "Fetch new scrape information" 3 - 4 times before FB Debugger pulls the proper image. The debugger returns scrapes in the following way:
-- First fetch: Default og tags before Angular bindings hit.
It's hard to say why this happens, since we haven't tried to share the page previously, and we've already passed the page through the PhantomJS process and seen the proper og tags in the response (in order to cache the response before sharing or heading to FB).
-- Second fetch: Proper og tags filled in with desired image but with the OG Image warning
og:image was not defined, could not be downloaded or was not big enough. Please define a chosen image using the og:image metatag, and use an image that's at least 200x200px and is accessible from Facebook. Image 'XXXXXXX' will be used instead. Consult http://developers.facebook.com/docs/sharing/webmasters/crawler for more troubleshooting tips.
The desired image is a 600x337 PNG (no transparency), so size isn't an issue (and it eventually shows). The fallback image being used instead is the default og:image left over from scrape #1.
-- Third fetch:
The OG Image warning is gone and all additional fetches return the proper meta. Sharing works and we can move on.
So while this works, it is a little heavy-handed. Clearly we have an issue with FB seeing our default meta, caching it, and needing us to clear things out. Before we implement any cache warming in our PhantomJS process, and possibly a POST to the FB API to get the proper scrape markup into Open Graph, can someone explain why the second fetch produces the og:image warning before it goes away? If the proper og:image exists and is correctly sized, why the error?
We looked at this answer, and the comment there says to clear our browser cache when using the debugger. We've followed the suggestion to use multiple browsers, but to no avail. We've also tried cache-less POSTs using Postman to test this theory (since that may be how we end up cache warming), but we still see the need for the additional refreshes.
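For the cache-warming idea mentioned above, here is a minimal sketch of forcing a re-scrape through the Graph API once PhantomJS has rendered and cached the page. The ?id=...&scrape=true POST is the documented way to refresh Facebook's Open Graph cache; the page URL, access token and the use of Node 18+ global fetch are assumptions you'd adapt to your own setup.

// Hypothetical cache-warming step, run after PhantomJS has cached the page.
// PAGE_URL and APP_ACCESS_TOKEN are placeholders for your own values.
const PAGE_URL = 'https://example.com/path/to/my/page';
const APP_ACCESS_TOKEN = 'APP_ID|APP_SECRET';

async function warmFacebookScrape(url) {
  const endpoint = 'https://graph.facebook.com/?id=' + encodeURIComponent(url) +
    '&scrape=true&access_token=' + APP_ACCESS_TOKEN;
  const response = await fetch(endpoint, { method: 'POST' });
  const scraped = await response.json();
  // The response echoes back the Open Graph data Facebook just stored,
  // so you can verify the scrape saw the prerendered markup, not the defaults.
  console.log(scraped.title, scraped.image);
}

warmFacebookScrape(PAGE_URL);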

Related

Security with "web_accessible_resources"

MDN docs state:
To enable a web page to contain an <img> element whose src attribute points to this image,
you could specify "web_accessible_resources" like this:
"web_accessible_resources": ["images/my-image.png"]
The file will then be available using a URL like:
moz-extension://<extension-UUID>/images/my-image.png
<extension-UUID> is not your extension's ID.
It is randomly generated for every browser instance.
This prevents websites from fingerprinting a browser by examining
the extensions it has installed.
So, I would think that these resources cannot be read by any web page outside the extension, since they would need to know the random UUID.
However, the same MDN docs also state:
Note that if you make a page web-accessible, then any website may then link or redirect
to that page. The page should then treat any input (POST data, for example)
as if it came from an untrusted source, just as a normal web page should.
I don't understand how "any website may then link or redirect to that page". Wouldn't it need to know the random UUID? How else could a webpage access this resource?
The point of Web Accessible Resources is to be able to include them in a web context.
While you could communicate the random UUID to the webpage so that its own code can use the file, the resource doesn't have to be included by the website's code at all; your extension can inject it. Here's a hypothetical scenario:
You're writing an extension that adds a button to evil.com site's UI. That button is supposed to have an image on it.
You bundle the image with your extension, but to add it as src or CSS property to the webpage you need to be able to reference it from a web context.
So, you make it web-accessible, and then inject your UI element with a content script.
Perfectly plausible scenario.
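As a concrete illustration of that scenario, here is a minimal sketch; the file names, button wiring and the evil.com match pattern are invented for the example.

// manifest.json (excerpt):
//   "web_accessible_resources": ["images/my-button.png"],
//   "content_scripts": [{ "matches": ["*://evil.com/*"], "js": ["inject.js"] }]

// inject.js - content script that injects the button into evil.com's UI.
// runtime.getURL() resolves against the per-browser random UUID origin,
// e.g. moz-extension://<extension-UUID>/images/my-button.png
var runtime = (typeof browser !== 'undefined' ? browser : chrome).runtime;

var img = document.createElement('img');
img.src = runtime.getURL('images/my-button.png');

var button = document.createElement('button');
button.appendChild(img);
button.addEventListener('click', function () {
  /* do the useful thing the button exists for */
});

// Once this node is in the page's DOM, page scripts can read img.src
// and learn the UUID - that is the leak described below.
document.body.appendChild(button);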
Note that a random third-party site, say villains-united.com, can't just probe the URL to find out whether your extension is installed, since the URL is unique per browser instance. This is the intent behind WebExtensions' random UUID, as opposed to Chrome's fixed extension-ID model.
However, let's continue our hypothetical scenario, from a security perspective.
The operators of evil.com are unhappy with your extra UI. They add a script to their code that looks for added buttons.
That script can see the DOM properties of the button, including the address of the image. Now evil.com's code can see your UUID.
Since you're the good guy, your extension's source code is available somewhere, including the page that launches nuclear missiles when called (why you would have that, and why it would be web-accessible, is another matter; perhaps to provide the functionality to good-guys-last-resort.org).
evil.com's script can now reconstruct the URL of this trigger page and XHR it, plunging the planet into nuclear apocalypse. Oops. You probably should've checked the origin of that request.
Basically, if a web-accessible resource is used in a page, the UUID likely leaks to that page's context via DOM. That may not be a page you control.

Chromium fetching all my favicons all the time

In my AngularJS application, when using Chromium, clicking on any link causes all my favicons to be loaded.
My main HTML page contains 10 lines like
<link rel="icon" type="image/png" href="favicons/favicon-57x57.png" sizes="57x57">
with sizes going up to 192x192. This might be wrong, as it's just an "adaptation of something I found somewhere".
However, that doesn't explain why all of them get loaded every time, does it? The links just change the URL after the hashbang and usually cause no server request at all, apart from fetching the 10 favicons.
Even if I did everything wrong, the favicon is global for the whole site, so there's no need to reload it, right?
With a little fiddling with the headers I can serve them with any of 200 OK or 304 NOT MODIFIED or 200 OK (from cache), but whatever I do, they all always get requested.
This doesn't happen in Firefox.
What you described is a known issue of Chrome, related to the fact that Chrome doesn't support the sizes attribute.
Firefox also used to be impacted, and it still doesn't support sizes. However, it doesn't load all icons anymore. As far as I know, this is not documented anywhere. This may have been fixed as a side effect.
There is no "solution", only a workaround: declare fewer icons. I suggest you use this favicon generator; the generated images and HTML were designed with this issue in mind. For example, it doesn't generate the 192x192 PNG icon by default, because Android Chrome (the browser that icon is dedicated to) primarily uses the Web App manifest. Full disclosure: I'm the author of this service.
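As an illustration of "declare fewer icons", a trimmed-down set along these lines (the exact sizes and file names are only an example) keeps the redundant requests to a minimum:

<link rel="icon" type="image/png" href="favicons/favicon-16x16.png" sizes="16x16">
<link rel="icon" type="image/png" href="favicons/favicon-32x32.png" sizes="32x32">
<link rel="apple-touch-icon" href="favicons/apple-touch-icon.png">
<link rel="manifest" href="favicons/site.webmanifest">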

Should I sanitize this in angular1?

We are running a legacy CMS system that stores some content in pure HTML. This content is now fetched over HTTP by my Angular 1.5 application and displayed on the page. Should I sanitize this HTML before adding it to the page? If yes, how? If not, why not?
This depends on who can enter the HTML. If only authorized personnel can enter content, you could sanitize it so it gets displayed on the page; however, this may cause errors on the page when the markup is invalid, and I wouldn't suggest it if avoidable.
If the content can come from anyone, you definitely shouldn't insert it into your page unsanitized; that opens it up to XSS attacks!
See this video for how an attack can take over your site: https://www.youtube.com/watch?v=U4e0Remq1WQ
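If the content is rendered at all, a minimal Angular 1.5 sketch using ngSanitize looks roughly like this; the module name, controller and CMS endpoint are placeholders:

<!-- ng-bind-html runs the value through $sanitize (provided by ngSanitize),
     stripping script tags, inline event handlers and other dangerous markup. -->
<div ng-app="app" ng-controller="CmsController as cms">
  <div ng-bind-html="cms.html"></div>
</div>

<script src="angular.js"></script>
<script src="angular-sanitize.js"></script>
<script>
angular.module('app', ['ngSanitize'])
  .controller('CmsController', ['$http', function ($http) {
    var vm = this;
    // '/api/cms/page/42' stands in for the legacy CMS endpoint.
    $http.get('/api/cms/page/42').then(function (response) {
      vm.html = response.data; // raw HTML from the CMS
    });
  }]);
</script>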

Weird Content of Facebook share/like button preview from angular app

I have an issue with the share/like button in my Angular app. I finally got it working correctly with links, but the share/like preview is completely wrong. I tried XFBML.parse(), switching to HTML5 mode, etc.
There are two complete enigmas:
1. I got "Given URL is not allowed by the Application configuration..." despite adding all possible variants to the FB app settings.
2. When the share preview appears, it has the title "Angular", but I never added that anywhere.
Here is the link
Would be grateful for any ideas...
Thx
The Facebook Scraper only looks at the HTML code your server delivers; it does not execute any JavaScript.
So if you want to share different articles, you need an individual URL for each article that delivers the relevant meta data when requested from the server.
You can find more explanation and hints on how to implement this in this article: http://www.michaelbromley.co.uk/blog/171/enable-rich-social-sharing-in-your-angularjs-app
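One way to implement those individual, server-rendered URLs is to detect the crawler's user agent and serve prerendered meta tags, roughly along these lines (an Express sketch; the route, stand-in article data and field names are placeholders):

// Hypothetical Express middleware: serve static OG meta tags to the
// Facebook crawler, and the normal Angular app to everyone else.
var express = require('express');
var app = express();

var CRAWLER_UA = /facebookexternalhit|Facebot/i;

// Stand-in article data; in practice this would come from your CMS or API.
var articles = {
  '42': { id: '42', title: 'Example article', imageUrl: 'https://example.com/img/42.png' }
};

app.get('/articles/:id', function (req, res, next) {
  if (!CRAWLER_UA.test(req.get('User-Agent') || '')) return next();
  var article = articles[req.params.id];
  if (!article) return next();
  res.send(
    '<!DOCTYPE html><html><head>' +
    '<meta property="og:title" content="' + article.title + '">' +
    '<meta property="og:image" content="' + article.imageUrl + '">' +
    '<meta property="og:url" content="https://example.com/articles/' + article.id + '">' +
    '</head><body></body></html>');
});

// All other requests fall through to the regular AngularJS index.html.
app.use(express.static('public'));
app.listen(3000);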

Getting pages indexed in Kik browser

I'm having trouble getting pages to show up in the NEW tab and in the Optimized for Kik search results.
All my pages have the required title, meta description, canonical and script tag served if the user-agent contains the string "kik".
Here is an example of a page that isn't being indexed.
http://playcanv.as/p/MW862amA
The pages have been correctly set up for around a week and still aren't showing up. Any ideas why?
Currently, the Kik browser shows a loading screen on top of your website until the window.onload event has fired. If the website takes too long to load the user is presented with an error screen.
Testing locally, http://playcanv.as/p/MW862amA downloaded roughly 5MB before window.onload and took roughly 30 seconds to get there. I'm betting the search indexer isn't letting it in because of this.
So the fix is simply to defer expensive network requests until after window.onload. The easiest solution is to wrap your network calls in kik.ready(function(){}).
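A minimal sketch of that deferral (the loader function and asset path are invented for the example):

// Easiest fix per the answer above: keep the initial page light and start
// the expensive downloads only once the Kik loading screen has cleared.
// kik.ready comes from kik.js; loadGameAssets stands in for your own loader.
function loadGameAssets(packUrl) {
  // kick off the heavy XHRs / asset fetches here
  fetch(packUrl).then(function (response) { /* ... hand off to the game ... */ });
}

kik.ready(function () {
  loadGameAssets('assets/level-1.pack');
});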
