I have a small web application built with CakePHP, and I am trying to submit its sitemap.xml to Google Webmaster Tools. This is the sitemap's address: http://gomme.gommarolo.org/sitemap.xml
The problem is, when I submit that address, Google tells me that the sitemap is in HTML format and suggests that I use another format.
The content of the page itself looks like a proper XML file, and even Chrome says that its content type is application/xml.
What's wrong with that sitemap?
Based on the custom URL parameters I process, I am trying to dynamically modify a meta tag that I have given an id in index.html, like so:
<meta name="og:image" content="http://example.com/someurl.jpg" id="ogImage"/>
The code below in my home.ts seems to be working:
document.getElementById('ogImage').setAttribute("content", Media.ImageURL);
I can verify the change via the browser dev console (Elements panel).
However, when I view it from Facebook via their Object Graph debugger at
https://developers.facebook.com/tools/debug/og/object/
It appears to see the default
http://example.com/someurl.jpg
as if the index.html is shipped before my home.ts gets a chance to make the update.
Perhaps my understanding is flawed and there is a better way to do this.
Thank you.
Note 1: initially I thought I had to create some Angular binding between index.html and one of my services, but I could not locate any sample code; the closest I came was this post:
How can I update meta tags in AngularJS?
But I don't know how to apply it to my Ionic 2/3 code, so I opted for the document.getElementById approach.
Note 2: the ultimate goal here is to share a link to social media (web or app) like Facebook, or to a messenger like Viber/Skype, etc., and have it resolve to a meaningful image, title, and description that drive the visit back to the site via the browser, or via the app if the user clicking the link is on a mobile device with my app version of the site installed.
Note 3: if you decide to point me to Ionic deep linking, please provide code matching the above, because I could not understand how to apply it to my case.
If you are trying to implement dynamic Open Graph meta tag values in your pages, you will need a server-side scripting language like PHP. Such a script runs on the server, updates the pages as needed, and then the pages are served to the requesting site or application.
Client-side scripting (i.e. JavaScript) is usually ignored when a site or app merely visits your site/link for the purpose of extracting (i.e. scraping, parsing the HTML) information such as that provided by the Open Graph meta tags (og:title, og:description, og:image...).
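As a rough sketch of that idea, assuming the page is served by a PHP script rather than a static index.html (the query parameter name and the image URLs below are hypothetical):
<?php
// Hypothetical mapping from a URL parameter to an image.
$images = array(
    'default' => 'http://example.com/someurl.jpg',
    'beach'   => 'http://example.com/beach.jpg',
);
$key     = isset($_GET['media']) ? $_GET['media'] : 'default';
$ogImage = isset($images[$key]) ? $images[$key] : $images['default'];
?>
<!DOCTYPE html>
<html>
<head>
    <!-- The scraper sees this value because it is rendered on the server, -->
    <!-- before the page is sent to Facebook, Viber, Skype, etc. -->
    <meta property="og:image" content="<?php echo htmlspecialchars($ogImage); ?>" id="ogImage"/>
</head>
<body>
    ...
</body>
</html>
This is only an illustration of the server-side approach; the exact wiring depends on how your pages are generated.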
I am trying to create buttons on a web page that allow users to share links to PDF documents on LinkedIn. LinkedIn loads a window without any errors but offers no link or preview of the PDF or any indication of what is being shared.
Here are the two methods I have tried. First, the plugin method:
<script type="in/share" data-url="http://example.net/DocumentDownload.aspx?Command=Core_Download&entryID=114"></script>
And, secondly, with a custom URL:
TEST
Encoding the URL makes no difference.
The above links are direct document links from a DNN website using Document Exchange. If I change the URLs to any HTML page, it works fine and LinkedIn seems to be able to extract the useful information right from the page and use that for the share details.
Can LinkedIn handle this kind of thing? There is nothing to guide me on the type of links that can be shared. I can't find any information about it. There are no errors in the web console.
Not sure, but you could try providing LinkedIn with a link that has .pdf at the end, like http://example.com/documents/file1.pdf. My guess is that LinkedIn simply checks whether the URL ends in .pdf to decide whether or not it is a PDF document.
I have no problem sharing PDFs on LinkedIn. Check it out:
https://www.linkedin.com/sharing/share-offsite/?url=https://www.revoltlib.com/anarchism/the-conquest-of-bread/view.pdf
Works perfectly fine. And view.pdf is a script, not a file, so it's not looking for a PDF file to analyze so much as for headers that indicate you have a PDF file available to analyze. So, in PHP, at DocumentDownload.aspx, we would do:
header('Content-type: application/pdf; charset=utf-8');
This header lets the sharing app know that it can analyze the document as a PDF file and extract useful information from it.
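A slightly fuller sketch of such a download script in PHP (the file path and filename below are hypothetical):
<?php
// Hypothetical location of the stored document on the server.
$file = '/var/www/documents/file1.pdf';

// These headers tell the client (and a scraper such as LinkedIn)
// that a PDF document is being served, regardless of the URL's extension.
header('Content-Type: application/pdf; charset=utf-8');
header('Content-Length: ' . filesize($file));
header('Content-Disposition: inline; filename="file1.pdf"');

// Stream the file to the client.
readfile($file);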
I'm working on a web page that displays a PDF file that needs to be updatable via a (JSF) file upload. My question is, how can I set my web page up so that the newly uploaded file actually takes the place of the old one?
I have the file upload working so that an admin user can upload a different PDF file to replace the one currently displayed, saving the PDF to a folder on my Tomcat server with the same filename as the one previously displayed. I did this because I know you can't save the PDF to a resource file within the web application, as those are not dynamically loaded while the application is running. I am using the following HTML to display the PDF:
<object id="pdf" data="uploads/folder/replaceable.pdf" type="application/pdf" width="100%" height="100%">
  <p>It appears you don't have a PDF plugin for this browser.
  No biggie... you can <a href="uploads/folder/replaceable.pdf" onclick="updatePDF();">click here to
  download the PDF file.</a></p>
</object>
I've seen "Uploaded image only available after refreshing the page" and "How I save and retrieve an image on my server in a java webapp", and I see that this can be accomplished using a <Context> tag to retrieve the file, similarly to how I have data="uploads/folder/replaceable.pdf", but I don't know anything about the <Context> tag and haven't been able to get this to work.
I think the problem that you are having is that the browser is caching the PDF file, and even though there is a new file available on the server, the browser is not fetching it. In order to force the browser to fetch the latest version of the PDF file you need to set the expiration header in the HTTP response to a very short time. More information for setting this up in Tomcat can be found here. I believe this is a feature only available since Tomcat 7. For previous versions of Tomcat, you need to roll your own Servlet that modifies the response header, which you can easily find with a bit of googling.
To take a look at the actual HTTP response header, you can use the developer tool built into Chrome or Firebug with Firefox.
Here's the relevant entry in web.xml that you will need:
<!-- EXPIRES FILTER -->
<filter>
    <filter-name>ExpiresFilter</filter-name>
    <filter-class>org.apache.catalina.filters.ExpiresFilter</filter-class>
    <init-param>
        <param-name>ExpiresByType application/pdf</param-name>
        <param-value>access plus 1 minutes</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>ExpiresFilter</filter-name>
    <url-pattern>/uploads/folder/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>
Not sure if this will work, but I've done an asynchronous upload script using jQuery and Ajax that updates images on the fly on a page. This might work for your scenario as well.
The basic principle: create an upload script with jQuery and the blueimp upload library: http://blueimp.github.com/jQuery-File-Upload/
Point the upload to a servlet, JSP page or the like that handles the upload. This page stores the file and whatever else is needed, and then provides a callback with JSON or XML data back to the page the upload was sent from, including the filename of the stored file. Then use jQuery to update the contents of your object on the JSF page.
Note that the filename should change for each upload; otherwise the browser might fetch the old PDF file from its cache and nothing will appear to change. I can't figure out how to get around this as I'm writing this, so you might need to do some more research on it.
For your convenience, I also wrote a blog post about it that might help you.
I have a mobile simulator at http://businessmobilewebsite.com/tester/ and was wondering if there is a script or way to populate the text field so that I can send a sample of the customer's mobile website to them.
Maybe I need to change it to a PHP file or something.
So I would like to send them to, say,
http://businessmobilewebsite.com/tester/?url=http://speedie.mobi/luchetti/#
That website uses JavaScript within the page to load the entered URL within their "emulator", and it does not support passing parameters, as the page is not a "real" web form.
If you have a specific question about the functionality of a specific website, why not contact them: http://businessmobilewebsite.com/contact/
There are alternative online mobile emulators which do support the functionality you require, such as:
http://iphonetester.com/?url=http%3A%2F%2Fspeedie.mobi%2Fluchetti%2F
or
http://emulator.mtld.mobi/emulator.php?webaddress=speedie.mobi/luchetti/&emulator=sonyK750
I am trying to fetch URLs using Google App Engine's urlFetch service and implement a proxy site. Sites like Twitter and Facebook appear disfigured, as if they are missing their stylesheets; even Google is missing the Google logo, but Yahoo opens just fine. I can't understand why.
When you use urlfetch, it fetches the HTML of the page, and none of the images, CSS, JavaScript, or any other resources.
Yahoo looks fine presumably because they specify their images and CSS using absolute URLs (e.g., http://www.yahoo.com/image.png), so when your urlfetch'd page displays, it includes full image URLs from yahoo.com. Keep in mind that when someone doesn't have access to yahoo.com, those images won't appear on your proxied page either.
edit: It looks like Yahoo inlines their CSS into the HTML page itself, which would explain why it works in your fetched copy.
Google appears without CSS/images because their CSS/images are specified as relative URLs (e.g., /image.png), and your proxy doesn't have an image at /image.png.
You'll have to parse the urlfetch'ed page content to find images and CSS that need to be fetched and proxied as well. Just be sure to handle relative URLs like /resource.png as well as absolute URLs like www.foo.com/resource.png.
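The fetch itself would use whatever urlfetch API your App Engine runtime provides; the rewriting step is independent of that. A rough sketch of the idea, written in PHP purely for illustration (the $base and $proxy values are hypothetical, and file_get_contents stands in for the actual urlfetch call):
<?php
// Hypothetical values: the site being proxied and your proxy endpoint.
$base  = 'http://www.example.com';
$proxy = 'http://my-proxy.example/fetch?u=';

// Stand-in for the real fetch (on App Engine this would be the urlfetch service).
$html = file_get_contents($base . '/');

$doc = new DOMDocument();
@$doc->loadHTML($html); // suppress warnings on imperfect real-world HTML

// Rewrite the URLs of images, stylesheets and scripts so they are
// absolute and routed back through the proxy.
$targets = array('img' => 'src', 'link' => 'href', 'script' => 'src');
foreach ($targets as $tag => $attr) {
    foreach ($doc->getElementsByTagName($tag) as $node) {
        $url = $node->getAttribute($attr);
        if ($url === '') {
            continue;
        }
        if (strpos($url, '//') === false) {
            // Relative URL like /resource.png: make it absolute first.
            $url = $base . '/' . ltrim($url, '/');
        }
        $node->setAttribute($attr, $proxy . urlencode($url));
    }
}

echo $doc->saveHTML();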