I am working on a MEAN.js application generated using https://github.com/DaftMonk/generator-angular-fullstack. I am trying to generate a .pdf file using PhantomJS and download it to the browser.
The issue is that the downloaded .pdf file always shows blank pages regardless of the number of pages. The original file on the server is not corrupt. When I investigated further, I found that the downloaded file is always much larger than the original file on disk. Also, this issue happens only with .pdf files; other file types work fine.
I've tried several methods, such as res.redirect('http://localhost:9000/assets/exports/receipt.pdf');, res.download('client\\assets\\exports\\receipt.pdf'), and streaming the file:
var fileSystem = require('fs');
var stat = fileSystem.statSync('client\\assets\\exports\\receipt.pdf');
res.writeHead(200, {
'Content-Type': 'application/pdf',
'Content-Length': stat.size
});
var readStream = fileSystem.createReadStream('client\\assets\\exports\\receipt.pdf');
return readStream.pipe(res);
I've even tried https://github.com/expressjs/serve-static, with no change in the result.
I am new to Node.js. What is the best way to download a .pdf file to the browser?
Update:
I am running this on a Windows 8.1 64-bit computer.
I had corruption when serving static PDFs too. I tried everything suggested above. Then I found this:
https://github.com/intesso/connect-livereload/issues/39
In essence, the usually excellent connect-livereload (package ~0.4.0) was corrupting the PDF.
So just tell it to ignore PDFs:
app.use(require('connect-livereload')({ignore: ['.pdf']}));
now this works:
app.use('/pdf', express.static(path.join(config.root, 'content/files')));
...great relief.
Here is a clean way to serve a file from Express; it uses a Content-Disposition: attachment header to make sure the file is downloaded:
var fs = require('fs');
var path = require('path');
var mime = require('mime');
app.get('/download', function(req, res){
  // Here do whatever you need to resolve the path of the file to send
  var filename = path.basename(file);
  var mimetype = mime.lookup(file);
  // The attachment disposition forces a download instead of inline display
  res.setHeader('Content-disposition', 'attachment; filename=' + filename);
  res.setHeader('Content-type', mimetype);
  var filestream = fs.createReadStream(file);
  filestream.pipe(res);
});
There are a couple of ways to do this:
If the file is a static one (a brochure, readme, etc.), then you can tell Express that the folder has static files (and should be served directly) and keep the file there. This is done using the static middleware:
app.use(express.static(pathtofile));
Here is the link: http://expressjs.com/starter/static-files.html
Now you can directly open the file using the URL from the browser, like:
window.open('http://localhost:9000/assets/exports/receipt.pdf');
or
res.redirect('http://localhost:9000/assets/exports/receipt.pdf');
should be working.
The second way is to read the file; the data will come as a buffer. Actually, it should be recognised if you send it directly, but you can try converting it to base64 encoding using:
var base64String = buf.toString('base64');
then set the content type:
res.writeHead(200, {
'Content-Type': 'application/pdf',
'Content-Length': stat.size
});
and send the data as response.
I will try to put an example of this.
EDIT: You don't even need to encode it. You may still try that, but I was able to make it work without encoding it.
You also do not need to set the headers; Express does it for you. The following is a snippet of API code that serves the PDF in case it is not public/static. You need an API to serve the PDF:
router.get('/viz.pdf', function(req, res){
  require('fs').readFile('viz.pdf', function(err, data){
    if (err) return res.sendStatus(500); // handle a missing or unreadable file
    res.send(data);
  });
});
Lastly, note that the URL for getting the PDF ends in .pdf; this helps the browser recognise that the incoming file is a PDF. Otherwise it will save the file without any extension.
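If you would rather not rely on the URL extension, here is a minimal sketch of setting the headers explicitly (the /receipt route and receipt.pdf file name are just placeholder assumptions):
router.get('/receipt', function(req, res){
  require('fs').readFile('viz.pdf', function(err, data){
    if (err) return res.sendStatus(500);
    // Tell the browser the content is a PDF and suggest a file name for the download
    res.setHeader('Content-Type', 'application/pdf');
    res.setHeader('Content-Disposition', 'attachment; filename=receipt.pdf');
    res.send(data);
  });
});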
Usually, if you are using PhantomJS to generate a PDF, the file will be written to disk, and you have to supply the path and a callback to the render function.
var fs = require('fs');

router.get('/pdf', function(req, res){
  // phantom initialization and generation logic
  // supposing you have the generation code above
  page.render(filePath, function (err) {
    var filename = 'myFile.pdf';
    res.setHeader('Content-type', "application/pdf");
    fs.readFile(filePath, function (err, data) {
      // if the file was read into the buffer without errors, you can delete it to save space
      if (err) throw err;
      fs.unlink(filePath, function () {});
      // send the file contents
      res.send(data);
    });
  });
});
I don't have experience with the frameworks you have mentioned, but I would recommend using a tool like Fiddler to see what is going on. For example, you may not need to add a Content-Length header, since you are streaming and your framework does chunked transfer encoding, etc.
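To illustrate, a minimal sketch of streaming with only the Content-Type set, letting Node fall back to chunked transfer encoding (the route and file path are placeholder assumptions):
var fs = require('fs');
app.get('/receipt.pdf', function(req, res){
  res.setHeader('Content-Type', 'application/pdf');
  // No Content-Length header: the response is sent with chunked transfer encoding
  fs.createReadStream('client/assets/exports/receipt.pdf')
    .on('error', function(){ res.sendStatus(404); })
    .pipe(res);
});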
Related
I have successfully managed to upload files into Firebase Storage, but now I want to display all the files in a table and have an option to download each file. I've read the Firebase documentation, but it won't work. When I click the button whose function is to get all the files, I want to display them in a table that users can see:
Show file function:
showFileUrl(){
storageRef.child('UploadedFiles/').listAll().then(function(res) {
res.items.forEach(function(folderRef) {
console.log("folderRef",folderRef.toString());
var blob = null;
var xhr = new XMLHttpRequest();
xhr.open("GET", "downloadURL");
xhr.responseType = "blob";
xhr.onload = function()
{
blob = xhr.response;//xhr.response is now a blob object
console.log(blob);
}
xhr.send();
});
}).catch(function(error) {
});
}
This is the network log I found when debugging. What do I need to do to get all the data, display it in a table, and have a download button that downloads the file when pressed?
(Screenshots attached: network log, Firebase Storage contents, and the Blob object of the files.)
Your code gets a list of all the files, but it doesn't actually do anything to read the data for each file.
When using the Web client SDK, the only way to get the data for a file is through a download URL as shown here. So you'll need to:
Loop through all the files you get back from listAll() (you're already doing this).
Call getDownloadURL() as shown here, to get a download URL for each file.
Then use another library/function (such as fetch() or XMLHttpRequest) to read the data for each file.
Alternatively, if your files are images, you can stuff the download URL in an img tag as the preview.
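Here is a minimal sketch of those three steps, assuming the Firebase Web SDK and the same storageRef and 'UploadedFiles/' path as in your code, and that fetch() is available in the browser:
storageRef.child('UploadedFiles/').listAll().then(function(res) {
  res.items.forEach(function(itemRef) {
    // Get a download URL for each file reference
    itemRef.getDownloadURL().then(function(url) {
      // Read the file data through the download URL
      return fetch(url);
    }).then(function(response) {
      return response.blob();
    }).then(function(blob) {
      console.log(blob); // e.g. build a table row with a download link here
    });
  });
}).catch(function(error) {
  console.error(error);
});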
I have a simple service in Angular 2 and TypeScript that requests Excel files from a server and then opens a file download dialogue for the user. However, as it currently stands, the file becomes corrupt when downloaded.
When downloaded, it opens fine in OpenOffice and derivatives, but throws a "File is Corrupt" error in Microsoft Excel, and asks if the user wants to recover as much as it can.
When Excel is prompted to recover the file, it does so successfully, and the recovered Excel file has all the rows and data expected. Comparing the recovered file against the file opened in OpenOffice and derivatives shows no outstanding differences.
The concrete Excel I am trying to download is generated with Apache POI in a microservice, then passed to the main backend and finally served to the frontend for the user to download. Both the backend and microservice are written in Java, through Spark Framework.
I ran some tests on the backends and concluded the problem is neither the report generation nor the data transfer:
Asking the microservice to save the generated Excel in a file within the server and then opening that file (hereafter file A) in Excel shows that file A is not corrupted.
Asking the main backend server to save the Excel file it receives from the microservice in a file within itself and then opening that file (hereafter file B) in Excel shows that file B is not corrupted.
Downloading both file A and file B through FileZilla from their respective servers yields completely uncorrupted files.
As such, I believe it is safe to assume the Excel file becomes corrupted somewhere between the time the file is received on the frontend and the time the user downloads it. Additionally, the Catalina logs do not show any error that might potentially be happening.
I have read several posts that deal with the issue, including a bug report (https://github.com/angular/angular/issues/14083) that included a workaround via XMLHttpRequest. However, none of the workarounds detailed were successful in solving my issue.
Attached is the code I am using to both obtain the Excel file from the backend and serve it to the user. I am including both an XMLHttpRequest and an Angular http call (within comments), since those are the two main ways I have been trying to make this work. Additionally, please take into account that the code has been altered to remove information I do not wish to make public.
download(body) {
let reply = Observable.create(observer => {
let xhr = new XMLHttpRequest();
xhr.open('POST', 'URL', true);
xhr.setRequestHeader('Content-type', 'application/json;charset=UTF-8');
xhr.setRequestHeader('Accept', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
xhr.setRequestHeader('Authorization', 'REDACTED');
xhr.responseType = 'blob';
xhr.onreadystatechange = function () {
if(xhr.readyState === 4) {
if(xhr.status === 200) {
var contentType = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
var blob = new Blob([xhr.response], { type: contentType });
observer.next(blob);
observer.complete();
}
else {
observer.error(xhr.response);
}
}
}
xhr.send(JSON.stringify(body));
});
return reply;
/*let headers = new Headers();
headers.set("Authorization", 'REDACTED');
headers.set("Accept", 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
let requestOptions :RequestOptions = new RequestOptions({headers: headers, responseType: ResponseContentType.Blob});
return this.http.post('URL', body, requestOptions);*/
}
Here is the code that prompts the user to download the Excel file. It is currently made to work with the XMLHttpRequest. Please note that I have also attempted to download without resorting to FileSaver, with no luck.
downloadExcel(data) {
let body = {
/*REDACTED*/
}
this.service.download(body)
.subscribe(data => {
FileSaver.saveAs(data, "Excel.xlsx");
});
}
Here are the versions of the tools I am using:
NPM: 5.6.0
NodeJs: 8.11.3
Angular: ^6.1.0
Browsers used: Chrome, Firefox, Edge.
Any help on this issue would be appreciated. Any additional information you may need I will be happy to provide.
I think what you want is CSV format, which opens in Excel. Update your service as follows:
You should tell Angular you are expecting a response of type blob (Binary Large Object), which is your Excel/CSV file.
Also make sure the URL/API on your server is set to accept Content-Type: 'text/csv'.
Here's an example with Angular 2.
@Injectable()
export class YourService {
constructor(private http: Http) {}
download() { //get file from the server
this.http.get("http://localhost/..", {
responseType: ResponseContentType.Blob,
headers: new Headers({'Content-Type': 'text/csv'})
}).subscribe(
response => {
var blob = new Blob([response.blob()], {type: 'text/csv'});
FileSaver.saveAs(blob, 'yourFileName.csv');
},
error => {
console.error('something went wrong');
}
);
}
}
Have you tried uploading/downloading your xls file as base64?
var encodedXLSToUpload = 'data:application/xls;base64,' + btoa(file);
Check this for more details: Creating a Blob from a base64 string in JavaScript
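For the download direction, a minimal sketch of turning a base64 string back into a Blob that FileSaver (or a plain link) can save; the MIME type and variable names here are just assumptions:
function base64ToBlob(base64, mimeType) {
  var byteString = atob(base64);                 // decode base64 into a binary string
  var bytes = new Uint8Array(byteString.length); // copy each char code into a byte array
  for (var i = 0; i < byteString.length; i++) {
    bytes[i] = byteString.charCodeAt(i);
  }
  return new Blob([bytes], { type: mimeType });
}
// e.g. FileSaver.saveAs(base64ToBlob(encodedData, 'application/vnd.ms-excel'), 'file.xls');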
I am using the PDF.js library to display PDF files within my site (using pdf_viewer.js to display documents on-screen), but the PDF files I am displaying are confidential, and I need to be able to show them within the site while blocking non-authorized public visitors from viewing the same files just by typing in their URLs and seeing them show up right in their browser.
I tried adding the Deny from all line in my .htaccess file, but that of course also blocked the viewer from showing the docs, so that seems to be a no-go. Clearly anyone could simply look at the inspector and see the PDF file that is being read by the viewer, so it seems a direct URL is not going to be secure in any way.
I did read about PDF.js being able to read binary data, but I have no knowledge of how I might read in a PDF from my own file system and prep it for use by the library, even if that means it is all a bit slower in loading to get the file contents and prep it on the fly.
Does anyone have a solution that allows PDF.js to work without revealing the source PDF URL, or to otherwise read the file using local file calls?
Okay, after some testing, the solution is very easy:
Get the PDF data using an Ajax-called function that can figure out what actual file is to be viewed.
In that PHP file...
Read the file into memory, using fopen and fread normally.
Convert it to base64 using base64_encode.
Pass that string back to the calling JavaScript.
In the original calling function, use the following to convert the string to a Uint8Array and then pass that to the PDF.js library...
// The function that turns the base64 string into a Uint8Array...
function base64ToUint8Array(base64) {
  var raw = atob(base64);
  var uint8Array = new Uint8Array(raw.length);
  for (var i = 0; i < raw.length; i++) {
    uint8Array[i] = raw.charCodeAt(i);
  }
  return uint8Array;
}
// the guts that get the file data, call the above function to convert it, and then call PDF.js to display it
$.ajax({
  type: "GET",
  data: {file: <a file id or whatever distinguishes this PDF>},
  url: 'getFilePDFdata.php', // the PHP file that reads the data and returns it encoded
  success: function(base64Data){
    var pdfData = base64ToUint8Array(base64Data);
    // Loading document.
    PDFJS.getDocument(pdfData).then(function (pdfDocument) {
      // Document loaded, specifying document for the viewer and
      // the (optional) linkService.
      pdfViewer.setDocument(pdfDocument);
      pdfLinkService.setDocument(pdfDocument, null);
    });
  }
});
Can anyone provide me an example in Plunker of how to load a JSON file for a Karma/Jasmine test? I want to read the data from a JSON file for the test cases I am writing. I have been searching, but nowhere is there a clear example of how to do it. I would appreciate it if anyone could provide an example.
You can load an external JSON data file using require:
var data = require('./data.json');
console.log(data);
// Your test cases goes here and you can use data object
Set the path to find your file; in this case my file (staticData.json) is located under the /test folder.
jasmine.getFixtures().fixturesPath = 'base/test/';
var staticData = JSON.parse(jasmine.getFixtures().read("staticData.json"));
You also have to add the pattern to the karma.conf.js file, something like:
{ pattern: 'test/**/*.json', included: false, served: true}
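In context, a minimal sketch of where that pattern sits in karma.conf.js (the spec file pattern is just an assumption):
// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: [
      'test/**/*.spec.js',
      { pattern: 'test/**/*.json', included: false, served: true }
    ]
  });
};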
Do you want to read the JSON file from a web server or a local file system? No one can give an example of loading from a local file system in Plunker, since it runs in a web browser and is denied access to the file system.
Here is an example of how to load a JSON file from disk in any Node.js program; this should work for Karma/Jasmine:
var fs = require('fs');
var filename = './test.json';
fs.readFile(filename, 'utf8', function (err, data) {
if (err) {
console.log('Error: ' + err);
return;
}
data = JSON.parse(data);
console.dir(data);
});
All the documentation I have found related to creating a new file and putting it in a user's Google Drive folder has the user uploading a file and the Python script using MediaFileUpload to gather the file and put it in Drive.
I want to create a new file in my GAE code and upload that. For example, my code renders a new XML string after hitting the database, and I would like to take that string, make it a file, and put it in Google Drive.
Anyone working with something like this?
You should use a MediaInMemoryUpload instead, which is designed for this exact purpose. You can pass a string and a MIME type.
media = MediaInMemoryUpload('some data', 'text/plain')
Use the following code; content is the string you're going to put. You don't have to use MediaFileUpload or the Python client library.
from google.appengine.api import urlfetch

def update(content, file_id):
    url = 'https://www.googleapis.com/upload/drive/v2/files/%s?uploadType=media' % file_id
    headers = {
        'Content-Type': 'text/plain',
        'Content-Length': str(len(content)),
        'Authorization': 'Bearer <oauth2 token>'
    }
    response = urlfetch.fetch(url, payload=content, method='PUT', headers=headers)
    assert response.status_code == 200
    return response.content