I'm working on a web page that displays a PDF file that needs to be updatable via a (JSF) file upload. My question is: how can I set my web page up so that the newly uploaded file actually takes the place of the old one?
I have the file upload working so that an admin user can upload a different PDF file to replace the one currently displayed; the PDF is sent to a folder on my Tomcat server with the same filename as the one previously displayed. I did this because I know you can't save the PDF as a resource file within the web application, as these are not dynamically reloaded while the application is running. I am using the following HTML to display the PDF:
<object id="pdf" data="uploads/folder/replaceable.pdf" type="application/pdf" width="100%" height="100%">
<p>It appears you don't have a PDF plugin for this browser.
No biggie... you can <a hRef="uploads/folder/replaceable.pdf" onClick="updatePDF();">click here to
download the PDF file.</a></p>
</object>
I've seen "Uploaded image only available after refreshing the page" and "How I save and retrieve an image on my server in a java webapp" and see that this can be accomplished using a <Context> tag to serve the file similarly to how I have data="uploads/folder/replaceable.pdf", but I don't know anything about the <Context> tag and haven't been able to get this to work.
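For reference, the <Context> approach those answers describe amounts to mapping a folder outside the deployed webapp onto a URL path in Tomcat's configuration. A minimal sketch, assuming Tomcat and placeholder paths, added inside the <Host> element of conf/server.xml:

<!-- Serves an external upload folder at /uploads/folder; both paths below are placeholders. -->
<Context docBase="/var/uploads/folder" path="/uploads/folder" />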
I think the problem you are having is that the browser is caching the PDF file, and even though there is a new file available on the server, the browser is not fetching it. To force the browser to fetch the latest version of the PDF file, you need to set the expiration header in the HTTP response to a very short time. More information on setting this up in Tomcat can be found here. I believe this is a feature only available since Tomcat 7; for previous versions of Tomcat you need to roll your own servlet filter that modifies the response header, which you can easily find with a bit of googling.
To take a look at the actual HTTP response headers, you can use the developer tools built into Chrome, or Firebug with Firefox.
Here's the relevant entry in web.xml that you will need:
<!-- EXPIRES FILTER -->
<filter>
    <filter-name>ExpiresFilter</filter-name>
    <filter-class>org.apache.catalina.filters.ExpiresFilter</filter-class>
    <init-param>
        <param-name>ExpiresByType application/pdf</param-name>
        <param-value>access plus 1 minutes</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>ExpiresFilter</filter-name>
    <url-pattern>/uploads/folder/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>
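For pre-7 versions of Tomcat, where the ExpiresFilter isn't available, a hand-rolled filter that disables caching could look roughly like this (a minimal sketch; the package and class names are made up, and it would be mapped in web.xml to the same /uploads/folder/* pattern as above):

package com.example.filters;

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Tells the browser not to reuse a cached copy of whatever this filter is mapped to.
public class PdfNoCacheFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {
        // nothing to configure
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
        response.setHeader("Pragma", "no-cache");
        response.setDateHeader("Expires", 0);
        chain.doFilter(req, res);
    }

    public void destroy() {
        // nothing to clean up
    }
}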
Not sure if this will work, but I've done an asynchronous upload script using jQuery and Ajax that updates images on a page on the fly. It might work for your scenario too.
The basic principle: create an upload script with jQuery and the blueimp upload library: http://blueimp.github.com/jQuery-File-Upload/
Point the upload at a servlet, JSP page or the like that handles the upload. This page stores the file and whatever else is needed, and then provides a callback with JSON or XML data back to the page the upload was sent from, including the filename of the stored file. Then use jQuery to update the contents of your object on the JSF page.
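As a rough sketch of what such an upload servlet might look like (this assumes Servlet 3.0, i.e. Tomcat 7 or later; the URL mapping, form field name, target folder and JSON shape are placeholders, not anything from the original post):

package com.example.upload;

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import javax.servlet.ServletException;
import javax.servlet.annotation.MultipartConfig;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;

@WebServlet("/upload")
@MultipartConfig
public class PdfUploadServlet extends HttpServlet {

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // "file" is whatever field name the upload widget posts the file under.
        Part filePart = request.getPart("file");

        // Overwrite the previously displayed PDF with the newly uploaded one.
        File target = new File("/path/to/uploads/folder", "replaceable.pdf");
        try (InputStream in = filePart.getInputStream()) {
            Files.copy(in, target.toPath(), StandardCopyOption.REPLACE_EXISTING);
        }

        // Callback data for the page that started the upload.
        response.setContentType("application/json");
        response.getWriter().write("{\"fileName\": \"replaceable.pdf\"}");
    }
}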
Note that the filename should change for each upload; otherwise the browser might fetch the old PDF file from its cache and nothing appears to change. I can't figure out how to get around this as I'm writing this, so you might need to do some more research on it.
For your convenience, I also wrote a blog post about it that might help you.
Related
I have a small web application made with CakePHP, and I am trying to submit its sitemap.xml to Google Webmaster Tools. This is the sitemap's address: http://gomme.gommarolo.org/sitemap.xml
The problem is, when I submit that address, Google tells me that the sitemap is in HTML format and suggests I use another format.
The content of the page itself looks like a proper XML file, and even Chrome says that its content type is application/xml.
What's wrong with that sitemap?
I am trying to create buttons on a web page that allow users to share links to PDF documents on LinkedIn. LinkedIn loads a window without any errors but offers no link or preview of the PDF or any indication of what is being shared.
Here are the two methods I have tried. First, the plugin method.
<script type="in/share" data-url="http://example.net/DocumentDownload.aspx?Command=Core_Download&entryID=114"></script>
And, secondly, with a custom URL.
TEST
Encoding the URL makes no difference.
The above links are direct document links from a DNN website using Document Exchange. If I change the URLs to any HTML page, it works fine, and LinkedIn seems to be able to extract the useful information right from the page and use that for the share details.
Can LinkedIn handle this kind of thing? There is nothing to guide me on the type of links that can be shared. I can't find any information about it. There are no errors in the web console.
Not sure, but you should try to provide LinkedIn with a link that ends in .pdf, like http://example.com/documents/file1.pdf. I guess LinkedIn just checks whether the URL ends in .pdf to decide if it is a PDF document or not.
I have no problem sharing PDFs on LinkedIn. Check it out...
https://www.linkedin.com/sharing/share-offsite/?url=https://www.revoltlib.com/anarchism/the-conquest-of-bread/view.pdf
Works perfectly fine. And view.pdf is a script, not a static file, so LinkedIn isn't looking for an actual PDF file to analyze so much as for response headers that indicate a PDF is being served. In PHP, the equivalent of what DocumentDownload.aspx needs to send would be...
header('Content-type: application/pdf; charset=utf-8');
This header lets the sharing app know that it can analyze the document as a PDF file and extract useful information from it.
I have .url files on a server, and when I click on them I see the content of the file instead of the browser going to the URL. As an example, try clicking on this:
http://69.160.61.109/document/116_1.url
The contents of the .url file:
[DEFAULT]
BASEURL=http://www.agriculturemorethanever.ca/
[DOC_gform_ajax_frame_3]
BASEURL=about:blank
ORIGURL=about:blank
[InternetShortcut]
URL=http://www.agriculturemorethanever.ca/
IDList=
IconFile=http://www.agriculturemorethanever.ca/wp-content/uploads/2012/02/favicon1.ico
IconIndex=1
[{000214A0-0000-0000-C000-000000000046}]
Prop3=19,2
I tried with IE9 and FF 15. If I download the file on my desktop, it opens properly.
Thanks for your help.
Luc
".url" files are a Windows specific file format and have no meaning when served from the Internet.
If you want to send a visitor to another website you have several options.
URL Rewrite
If you are running apache with mod_rewrite you can add this to your .htaccess file:
RewriteRule path-to-file http://example.com/ [R=301,L]
Other web servers have similar options.
HTTP Header
You can send an HTTP Location header and 301 response code. The example below uses PHP, but any server programming language has similar functionality.
<?php
header("Location: http://example.com/", true, 301);
exit;
?>
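For comparison, if the server happens to run Java rather than PHP, roughly the same 301 redirect as a servlet (a sketch only; the class name and URL mapping are placeholders):

package com.example.redirect;

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/old-page")
public class RedirectServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Permanent redirect: 301 status plus a Location header.
        response.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY);
        response.setHeader("Location", "http://example.com/");
    }
}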
Meta Refresh (not recommended for usability reasons)
This will break the back button on some browsers, so use carefully.
<html>
<head>
<meta http-equiv="refresh" content="0;URL='http://example.com/'">
</head>
</html>
Most servers these days are hosted on Linux, and a server will not redirect the browser anywhere unless it is told to do so. In your case the .url file is nothing more than a file containing a bit of text. The same file works on Windows after downloading because .url files are shortcut/hyperlink files that Windows recognizes, so Windows automatically takes you to the website. You can read more about the .url file format here: http://www.fmtz.com/formats/url-file-format/article
If you want similar behaviour, where clicking a .url file on the server takes you to the corresponding URL, you'll have to use some kind of JavaScript trick, or a server-side script (PHP, for example) that reads the link contained in the .url file and redirects the browser to it.
Hope this answers your question.
I am writing a program in C that acts like a proxy server on a Linux system: the client asks it for a web page, it sends an HTTP GET request to a remote server, and it gets the server's response (the web page), which is saved in an .html file.
Here is my problem: most websites have references to images, so when I try to view the .html file the proxy created, the images don't appear.
I have searched a lot but found nothing. Is there a way to write some code to GET the images too?
Thank you in advance
You're going to have to write code that parses the HTML file you get back and looks for image references (img tags), then queries the server for those image files. This is what web browsers do under the hood.
You have an additional problem, though, which is that the image references in the HTML file point to the original server. I'm assuming that, since they don't load for you, the server that returned the original HTML isn't available. In that case, after you get each image file you will need to give it a name on the local filesystem and then alter the reference in the HTML (programmatically) to point to your new local image name.
So for example:
<img src='http://example.com/image1.png'>
would become
<img src='localImage1.png'>
If you're querying arbitrary websites, you'll also find there are various other files you'll need to do the same with, such as CSS and JavaScript files. In general it's hard to mirror arbitrary web pages accurately; browsers have complex object models they use to interpret web pages, because they have to deal with things like CSS and JavaScript, and you may need to be able to 'run' all that dynamic code to even be sure which files to download from the server (e.g. JavaScript including other JavaScript, etc.).
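Putting the parse/fetch/rewrite steps together, here is a rough sketch of the idea (shown in Java rather than the C the question uses, purely to keep the illustration short; the regex only copes with simple img tags using absolute URLs, as in the example above):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ImageRewriter {

    // Downloads every image referenced by an <img src="..."> tag and rewrites
    // the reference to point at the local copy instead of the original server.
    public static String rewriteImages(String html, Path outputDir) throws Exception {
        Pattern imgSrc = Pattern.compile("<img[^>]+src=['\"]([^'\"]+)['\"]", Pattern.CASE_INSENSITIVE);
        Matcher m = imgSrc.matcher(html);
        StringBuffer rewritten = new StringBuffer();
        int count = 0;

        while (m.find()) {
            String remoteUrl = m.group(1);
            String localName = "localImage" + (++count) + ".png";

            // Fetch the image from the original server and save it next to the HTML file.
            try (InputStream in = new URL(remoteUrl).openStream()) {
                Files.copy(in, outputDir.resolve(localName), StandardCopyOption.REPLACE_EXISTING);
            }

            // Point the reference at the local copy instead of the original server.
            m.appendReplacement(rewritten,
                    Matcher.quoteReplacement(m.group(0).replace(remoteUrl, localName)));
        }
        m.appendTail(rewritten);
        return rewritten.toString();
    }
}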
The only way I know to take the contents of a local file and push those bytes to a server is to set up a form post with an <input> of appropriate type to prompt the user to select a file.
I would like to do the same thing only pushing the data through XMLHttpRequest (no cross-scripting tricks).
Currently, we do this with an iframe to get the post behavior.
My sense is the iframe is the only solution, but I post here in case I've missed something.
You could use the JavaScript File API (available in Firefox 3.6 or later and the latest versions of Chrome and Safari). Basically, you can add an event listener to the <input> tag that will fire when a user selects a file. Then you can upload it using an XMLHttpRequest. The File API also allows you to do other fancy stuff, such as drag-and-drop uploads, getting information about a file before it is sent to the server, and providing a progress bar as a file is uploading. More info: https://developer.mozilla.org/en/using_files_from_web_applications
This is not a complete cross-browser solution because it doesn't have good support in all the popular browsers (notably Internet Explorer), but you could use feature detection in JavaScript to check whether the File API is available and fall back to your iframe method if it is not.