Polymerjs Starter Kit: can't serve Word documents - polymer-starter-kit

I am using the Polymer Starter Kit to build a small website. However, I have run into a problem: I want to serve up Word documents. The usual way is to place these in an anchor tag, e.g.
<a href="...">Session Notes</a>
However, the click is captured by Polymer as a page to load and produces the 404 page, even though the URL in the address bar is correct. When I refresh the page, the document is served up normally.
How can I adjust the starter kit, especially the _pageChanged function, so that there is no page change and the document is simply served in the normal way?
Edit:
I have solved the problem, though not with anchor tags. I created a small form component containing just a button. The form has two fields which become attributes: one for the form's action attribute and one for the button text. This means that in my pages I simply call this component with two attributes:
<form-button submit="Button text" action="file location"></form-button>
While not the most elegant solution, it has the desired effect.
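A minimal sketch of what such a component might look like in Polymer 1.x (the element name and attributes follow the call above; the internals are an illustration, not the actual component):

<link rel="import" href="../bower_components/polymer/polymer.html">
<dom-module id="form-button">
  <template>
    <!-- Submitting the form is a plain browser navigation, so Polymer's
         router never intercepts it and the server serves the file directly. -->
    <form method="GET" action$="[[action]]">
      <button type="submit">[[submit]]</button>
    </form>
  </template>
  <script>
    Polymer({
      is: 'form-button',
      properties: {
        action: String, // location of the file to serve
        submit: String  // text shown on the button
      }
    });
  </script>
</dom-module>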

The easiest way would be to find a CDN or use a cloud-based solution. You could, for example, put the files on Google Drive, use a share-link-to-direct-link converter, and have them served externally.
Another, harder, method would be to set up a second web server on a different port: for example, NGINX running on port 80 serving the PSK, and Apache running on port 81 serving your Word documents. Not as convenient, but it would work.

Related

Security with "web_accessible_resources"

MDN docs state:
To enable a web page to contain an <img> element whose src attribute points to this image,
you could specify "web_accessible_resources" like this:
"web_accessible_resources": ["images/my-image.png"]
The file will then be available using a URL like:
moz-extension://<extension-UUID>/images/my-image.png
<extension-UUID> is not your extension's ID.
It is randomly generated for every browser instance.
This prevents websites from fingerprinting a browser by examining
the extensions it has installed.
So, I would think that these resources cannot be read by any web page outside the extension, since they would need to know the random UUID.
However, the same MDN docs also state:
Note that if you make a page web-accessible, then any website may then link or redirect
to that page. The page should then treat any input (POST data, for example)
as if it came from an untrusted source, just as a normal web page should.
I don't understand how "any website may then link or redirect to that page". Wouldn't it need to know the random UUID? How else could a webpage access this resource?
The point of Web Accessible Resources is to be able to include them in a web context.
While you can communicate the random UUID to the webpage so that it can use the file, it doesn't have to be included by the website code itself. Here's a hypothetical scenario:
You're writing an extension that adds a button to evil.com site's UI. That button is supposed to have an image on it.
You bundle the image with your extension, but to add it as src or CSS property to the webpage you need to be able to reference it from a web context.
So, you make it web-accessible, and then inject your UI element with a content script.
Perfectly plausible scenario.
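As a concrete illustration of that scenario, a content script along these lines would do it (the file name and button wiring are hypothetical; browser.runtime.getURL is the WebExtensions call that builds the moz-extension:// URL):

// content-script.js -- injects the extension's button into the page.
// "images/my-image.png" must be listed under "web_accessible_resources"
// in the manifest, otherwise the page is not allowed to load it.
const button = document.createElement("button");
const icon = document.createElement("img");

// Resolves to moz-extension://<random-UUID>/images/my-image.png
icon.src = browser.runtime.getURL("images/my-image.png");
button.appendChild(icon);

button.addEventListener("click", () => {
  // ...whatever the button is supposed to do...
});

// Appending the element exposes the full moz-extension:// URL,
// UUID included, to the page's own scripts via the DOM.
document.body.appendChild(button);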
Note that a random third-party site villains-united.com can't just probe a known URL to find out whether your extension is installed, since the URL is unique per browser instance. This is the intent behind the WebExtensions UUID model, as opposed to Chrome's fixed extension-ID model.
However, let's continue our hypothetical scenario, from a security perspective.
The operators of evil.com are unhappy with your extra UI. They add a script to their code that looks for added buttons.
That script can see the DOM properties of the button, including the address of the image. Now evil.com's code can see your UUID.
Since you're the good guy, your extension's source code is available somewhere, including the page that launches nuclear missiles when called (why you would have that, and why it would be web-accessible, is another matter; perhaps to provide the functionality to good-guys-last-resort.org).
evil.com's script can now reconstruct the URL of this trigger page and XHR it, plunging the planet into nuclear apocalypse. Oops. You probably should've checked the origin of that request.
Basically, if a web-accessible resource is used in a page, the UUID likely leaks to that page's context via the DOM, and that may not be a page you control.
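In other words, once the element is in the page, a script running on that page could do something like this (a sketch of the attack; the trigger page name comes from the hypothetical scenario above):

// Runs in the page's own context, not the extension's.
const injected = document.querySelector('img[src^="moz-extension://"]');
if (injected) {
  // e.g. "moz-extension://<random-UUID>/images/my-image.png"
  const uuid = new URL(injected.src).hostname;
  // The page can now address any other web-accessible resource of
  // that extension, such as the hypothetical trigger page.
  fetch("moz-extension://" + uuid + "/launch-missiles.html");
}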

How to modify HTMLElement in index.html before page gets returned to requestor

Based on custom URL parameters I process, I am trying to dynamically modify a meta tag that I have given an id in index.html, like so:
<meta name="og:image" content="http://example.com/someurl.jpg" id="ogImage"/>
The code below, in my home.ts, seems to work:
document.getElementById('ogImage').setAttribute("content", Media.ImageURL);
I can verify this via the browser dev console's Elements panel.
However, when I view the page from Facebook via their object graph debugger at
https://developers.facebook.com/tools/debug/og/object/
It appears to see the default
http://example.com/someurl.jpg
as if index.html is shipped before my home.ts gets a chance to make the update.
Perhaps my understanding is flawed and there is a better way to do this.
Thank you.
Note 1: Initially I thought I had to create some Angular binding between index.html and one of my services, but I could not locate any sample code. The closest I came was this post:
How can I update meta tags in AngularJS?
But I don't know how to apply it to my Ionic 2/3 code, so I opted for the document.getElementById approach.
Note 2: The ultimate goal here is to share a link into social media (web or app) like Facebook, or a messenger like Viber/Skype, etc., and have it resolve to a meaningful image, title, and description that drive the visit back to the site via the browser, or via the app if the user clicking the link is on a mobile device with my app version of the site installed.
Note 3: If you decide to point me to Ionic deep linking, please provide code that matches the above, because I could not work out how to apply it to my case.
If you are trying to implement dynamic Open Graph meta tag values in your pages, you will need a server-side scripting language like PHP. Such a script runs on the server and updates the pages as needed before they are served to the requesting site or application.
Client-side scripting (i.e. JavaScript) is usually ignored when a site or app is merely visiting your site/link for the purpose of extracting (i.e. scraping and parsing the HTML for) information such as that provided by the Open Graph meta tags (og:title, og:description, og:image, ...).
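For illustration, here is a rough sketch of the same idea using Node/Express instead of PHP; the route, file path, and image URL scheme are assumptions, not part of the original setup:

// server.js -- rewrite og:image on the server before index.html leaves it,
// so scrapers that never run JavaScript still see the right value.
const express = require("express");
const fs = require("fs");
const app = express();

app.get("/share/:mediaId", (req, res) => {
  // Look up the image for this item however your app does it.
  const imageUrl = "http://example.com/images/" + req.params.mediaId + ".jpg";
  const html = fs
    .readFileSync("www/index.html", "utf8")
    .replace(/<meta name="og:image" content="[^"]*"/,
             '<meta name="og:image" content="' + imageUrl + '"');
  res.send(html);
});

app.use(express.static("www")); // everything else is served unchanged
app.listen(8080);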

How to load a part of application before loading rest of the application in angularjs?

I could not think of a better title; please suggest one.
I am planning to work on a large web application. It will take time to load the full application before it starts functioning.
Suppose it's something like asana.com. If you have a link to a task and you open that link, it loads the application first and then shows the details of the task.
Note: I have added another example in Update 2.
I want to do just the opposite: if I open the link directly, it should show me the task's details first and then load the whole application in the background.
What development strategy should I follow to implement such a feature? Will Angular be good for this? I have worked with Angular on small projects and am capable of thinking in Angular :)
I just want to be pointed in the right direction.
Update 1:
I am using Apache2/PHP5 on the back end as a REST API. I am thinking of changing to a Go HTTP server, but that does not matter in this context :)
Update 2:
I have not yet started working on the application, but I know that it is going to be big and will take time to load. It will be a JavaScript application, and communication with the server will be done mostly through an API. The API will be fast and won't slow the application down. My main concern is the JavaScript library and the approach to take so that I can display the content of the page before the application is loaded, and then load the application in the background.
As second example: https://chrome.google.com/webstore/detail/a-journey-through-middle/gjgkjeheegjnnmheaflhdocglkiegoni?utm_source=chrome-ntp-icon
If you open this link in Chrome, it loads the application and then loads the specific content in a popup. I want to load the content of the popup first and then load the application in the background. How should I write my application to achieve that?
My suggestion (and I say this as someone starting to do something similar rather than having proven it successful) would be to make some level of the framework fairly static, so that users get an almost instant response when the site loads, and then start the Angular app manually with something like this:
angular.bootstrap(document.getElementById("container"), ["app"])
Ref for the api - https://docs.angularjs.org/api/ng/function/angular.bootstrap
Ref for a demonstration of this - https://egghead.io/lessons/angularjs-angular-bootstrap-app-init
My expectation then is that you will be able to:
Load your static elements quickly (these will just have placeholders for your content/material)
Access the data you want, in the order you want, to present it on the screen
Bring in any other parts of the app you need to decorate the page or populate side items.
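A rough sketch of that flow, assuming a hypothetical /api/tasks endpoint and placeholder element ids; the only part taken from the answer above is the manual angular.bootstrap call:

// index.html must NOT use ng-app, since we bootstrap manually.
// 1. Fetch and show the critical content as soon as possible.
fetch("/api/tasks/" + taskIdFromUrl())
  .then(function (response) { return response.json(); })
  .then(function (task) {
    document.getElementById("task-placeholder").textContent = task.title;

    // 2. Only now pull in the heavy app bundle and start Angular.
    var script = document.createElement("script");
    script.src = "app.bundle.js";
    script.onload = function () {
      angular.bootstrap(document.getElementById("container"), ["app"]);
    };
    document.body.appendChild(script);
  });

function taskIdFromUrl() {
  // e.g. /tasks/42 -> "42"
  return window.location.pathname.split("/").pop();
}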

offline mobile website

Is it possible to have a mobile website that can still function if there's no internet connection?
The user should still be able to use the website (if they have visited that page before), see the data that was loaded before, and add new items (cached locally).
When the internet connection comes back online, all changed local data should be pushed to the server.
This should be a completely web-based solution, not a native app.
You should have a look at HTML5 offline storage; see http://diveintohtml5.ep.io/offline.html and the Offline Web Applications spec as a start. There are also quite a few posts here on SO.
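For the "cache locally, push when back online" part of the question, the gist in plain JavaScript looks something like this (the /api/items endpoint and payload shape are made up for illustration):

var QUEUE_KEY = "pending-changes";

// Called whenever the user adds or edits something.
function saveChange(change) {
  var queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || "[]");
  queue.push(change);
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
  if (navigator.onLine) flushQueue();
}

// Pushes all queued changes to the server and clears the queue.
function flushQueue() {
  var queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || "[]");
  if (queue.length === 0) return;
  fetch("/api/items", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(queue)
  }).then(function () {
    localStorage.removeItem(QUEUE_KEY);
  });
}

// Sync automatically when the connection comes back.
window.addEventListener("online", flushQueue);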
Bookmarklets work when a user is offline. The trick with a bookmarklet is that it's entirely self-contained JavaScript, wrapped up in such a way that it can live within the bookmark itself, e.g. a javascript: URL. You can also have a data: URL as a bookmark, which could be a complete HTML page. Usually these are base64 encoded with a MIME type.
Probably what I'd do would be to have a small base page as a data:text/html;base64,... URL containing whatever offline content you care about, which periodically tries to bootstrap the rest of the "real" content from wherever you host it.
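Toy illustrations of both kinds of bookmark, with made-up content:

// A data: URL bookmark that is itself a complete, self-contained HTML page:
var offlinePage = "data:text/html,<h1>Offline notes</h1><p>Cached content goes here.</p>";

// A javascript: bookmarklet -- all of its code travels inside the bookmark:
var bookmarklet = "javascript:(function(){alert('This runs with no network at all');})();";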

Is there any way to handle silverlight deep linking without '#' showing in the url?

I want to have two separate interfaces to my website: one that is Silverlight, and one that is normal HTML for people who don't have Silverlight, and for search engines. They would have exactly the same content; the Silverlight one would just be a richer experience.
If someone with Silverlight copies the URL to a certain page, it will have a '#' in it (app#page1). If they then want to link to that page on their blog or something, it will have the # in it, and a search engine probably wouldn't consider it as a separate page from app#page2.
Is there any way to make the navigation from within Silverlight update the URL with a '/' instead of a '#', without actually loading a separate page? This way the URLs in the address bar appear like a normal websites' URLs ('app/page1', 'app/page2').
Unfortunately, no. The reason that Silverlight navigation URLs use # is that you can move around within a page by moving to an anchor location. If you used a full URL with '/' separators, it would cause the browser to navigate to a new page, which would reload your Silverlight application. This would basically unload your Silverlight application, and load a new one with the new URL.
The reason they use the # sign is that it is interpreted by the browser as moving to a location within the page; anything else would reload the page.
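The difference comes down to plain browser behavior, not anything Silverlight-specific:

// Moving to an anchor location only updates the fragment; the page,
// and therefore the Silverlight plugin, keeps running:
location.hash = "page1";      // URL becomes app#page1, no reload

// Changing the path is a full navigation, which reloads the page
// and restarts the Silverlight application:
location.href = "/page1";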
As far as search engine implications I'm not sure either way. Maybe someone more experienced with SEO can chime in on that.
However, I'm sure you can get the behavior you're looking for; it just may take some trickiness on your end. Another way to pass information to the Silverlight client runtime is query string parameters. You can access them through the System.Windows.Browser.HtmlPage.Document.QueryString collection, and then load the Page or UserControl with the content you want based on that parameter.
As for mimicking a folder structure using '/' separators, I know there are ways to do this using custom web server settings / HttpModules. Assuming you're using IIS/ASP.NET, I would look into this post from Scott Guthrie:
http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx
It takes a bit of hackery, but if you're really set on doing it, I'm sure you could. You will also run into the issues the above poster mentioned if you try to use the same logic during a session. It may work, though, for just the deep-linking aspect you're looking for.

Resources