I have a simple service in Angular 2 and TypeScript that requests Excel files from a server and then opens a download dialogue for the user. However, as it stands, the file is corrupt when downloaded.
When downloaded, it opens fine in OpenOffice and its derivatives, but Microsoft Excel throws a "File is Corrupt" error and asks whether the user wants to recover as much of it as possible.
When Excel is prompted to recover the file, it does so successfully, and the recovered workbook contains all the rows and data expected. Comparing the recovered file against the same file opened in OpenOffice and its derivatives reveals no notable differences.
The specific Excel file I am trying to download is generated with Apache POI in a microservice, then passed to the main backend, and finally served to the frontend for the user to download. Both the backend and the microservice are written in Java, using the Spark framework.
I ran some tests on the backends and concluded that the problem lies in neither the report generation nor the data transfer:
Asking the microservice to save the generated Excel to a file within the server and then opening that file (hereafter file A) in Excel shows that file A is not corrupted.
Asking the main backend server to save the Excel file it receives from the microservice to a file within itself and then opening that file (hereafter file B) in Excel shows that file B is not corrupted.
Downloading both file A and file B through FileZilla from their respective servers yields completely uncorrupted files.
As such, I believe it is safe to assume the Excel file becomes corrupted somewhere between the time it is received on the frontend and the time the user downloads it. Additionally, the Catalina logs do not show any error that might potentially be happening.
I have read several posts that deal with this issue, including a bug report (https://github.com/angular/angular/issues/14083) that included a workaround via XMLHttpRequest. However, none of the workarounds described solved my issue.
Attached is the code I am using both to obtain the Excel file from the backend and to serve it to the user. I am including both an XMLHttpRequest and an Angular http call (in comments), since those are the two main ways I have been trying to make this work. Also, please take into account that the code has been altered to remove information I do not wish to make public.
download(body) {
  let reply = Observable.create(observer => {
    let xhr = new XMLHttpRequest();
    xhr.open('POST', 'URL', true);
    xhr.setRequestHeader('Content-type', 'application/json;charset=UTF-8');
    xhr.setRequestHeader('Accept', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
    xhr.setRequestHeader('Authorization', 'REDACTED');
    xhr.responseType = 'blob';
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        if (xhr.status === 200) {
          var contentType = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
          var blob = new Blob([xhr.response], { type: contentType });
          observer.next(blob);
          observer.complete();
        } else {
          observer.error(xhr.response);
        }
      }
    }
    xhr.send(JSON.stringify(body));
  });
  return reply;

  /*let headers = new Headers();
  headers.set("Authorization", 'REDACTED');
  headers.set("Accept", 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
  let requestOptions: RequestOptions = new RequestOptions({ headers: headers, responseType: ResponseContentType.Blob });
  return this.http.post('URL', body, requestOptions);*/
}
Below is the code that prompts the user to download the Excel file. It is currently set up to work with the XMLHttpRequest. Please note that I have also attempted the download without resorting to FileSaver, with no luck.
downloadExcel(data) {
  let body = {
    /*REDACTED*/
  }
  this.service.download(body)
    .subscribe(data => {
      FileSaver.saveAs(data, "Excel.xlsx");
    });
}
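For completeness, the FileSaver-free variant I attempted looked roughly like this (a simplified sketch: create an object URL from the blob and click a temporary anchor):
this.service.download(body)
  .subscribe((blob: Blob) => {
    // turn the blob into a temporary URL and trigger a download via an anchor
    const url = window.URL.createObjectURL(blob);
    const anchor = document.createElement('a');
    anchor.href = url;
    anchor.download = 'Excel.xlsx';
    document.body.appendChild(anchor);
    anchor.click();
    document.body.removeChild(anchor);
    window.URL.revokeObjectURL(url);
  });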
These are the versions of the tools I am using:
NPM: 5.6.0
NodeJs: 8.11.3
Angular: ^6.1.0
Browsers used: Chrome, Firefox, Edge.
Any help on this issue would be appreciated. I will be happy to provide any additional information you may need.
I think what you want is the CSV format, which opens in Excel. Update your service as follows:
You should tell Angular you are expecting a response of type blob (Binary Large Object); that is your Excel/CSV file.
Also make sure the URL/API on your server is set to accept content-type='text/csv'.
Here's an example with Angular 2.
@Injectable()
export class YourService {
  constructor(private http: Http) {}

  download() { // get the file from the server
    this.http.get("http://localhost/..", {
      responseType: ResponseContentType.Blob,
      headers: new Headers({ 'Content-Type': 'text/csv' })
    }).subscribe(
      response => {
        var blob = new Blob([response.blob()], { type: 'text/csv' });
        FileSaver.saveAs(blob, 'yourFileName.csv');
      },
      error => {
        console.error('something went wrong');
      }
    );
  }
}
Have you tried uploading/downloading your xls file as base64?
var encodedXLSToUpload = 'data:application/xls;base64,' + btoa(file);
Check this for more details: Creating a Blob from a base64 string in JavaScript
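For more context, here is a minimal sketch of the decoding direction that the linked question describes (turning a base64 string back into a Blob; the payload is assumed to come without the data: prefix):
function base64ToBlob(base64Data: string, contentType: string): Blob {
  // atob() decodes base64 into a binary string; copy its char codes into bytes
  const byteCharacters = atob(base64Data);
  const bytes = new Uint8Array(byteCharacters.length);
  for (let i = 0; i < byteCharacters.length; i++) {
    bytes[i] = byteCharacters.charCodeAt(i);
  }
  return new Blob([bytes], { type: contentType });
}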
Related
I am currently using a React-hook-powered component to record my screen and subsequently upload it to Google Cloud Storage. However, when it finishes, the file created inside Google Cloud Storage appears to be corrupt.
This is the gist of the code within my React component, where useMediaRecorder is from here: https://github.com/wmik/use-media-recorder
let {
  error,
  status,
  mediaBlob,
  stopRecording,
  getMediaStream,
  startRecording,
  liveStream,
} = useMediaRecorder({
  onCancelScreenShare: () => {
    stopRecording();
  },
  onDataAvailable: (chunk) => {
    // do the uploading here:
    onChunk(chunk);
  },
  recordScreen: true,
  blobOptions: { type: "video/webm;codecs=vp8,opus" },
  mediaStreamConstraints: { audio: audioEnabled, video: true },
});
As data becomes available through this hook, it calls onChunk(chunk), passing a binary Blob to that method. To perform the upload, I tie in with this section of code:
const onChunk = (binaryData) => {
  var formData = new FormData();
  formData.append("data", binaryData);
  let customerApi = new CustomerVideoApi();
  customerApi.uploadRecording(
    videoUUID,
    formData,
    (res) => {},
    (err) => {}
  );
};
customerApi.uploadRecording looks like this (using axios).
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
The HTTP request succeeds, and all is well with the world. The server-side upload code is based on Laravel:
// this is inside the controller.
public function index( Request $request )
{
    // Set file attributes.
    $filepath = '/public/chunks/';
    $file = $request->file('data');
    $filename = $uuid . ".webm";

    // streamupload
    File::streamUpload($filepath, $filename, $file, true);

    return response()->json(['uploaded' => true, 'uuid' => $uuid]);
}
// there's a service provider used to create a new macro on the File:: object, providing the facility for appropriately handling the stream:
public function boot()
{
    File::macro('streamUpload', function ($path, $fileName, $file, $overWrite = true) {
        $resource = fopen($file->getRealPath(), 'r+');

        $storageClient = new StorageClient([
            'projectId' => 'myprjectid',
            'keyFilePath' => '/my/path/to/servicejson.json',
        ]);
        $bucket = $storageClient->bucket('mybucket');
        $adapter = new GoogleStorageAdapter($storageClient, $bucket);
        $filesystem = new Filesystem($adapter);

        return $overWrite
            ? $filesystem->putStream($fileName, $resource)
            : $filesystem->writeStream($fileName, $resource);
    });
}
So to reiterate:
1) React app chunks out blobs,
2) server side determines if it should create or append in Google Cloud Storage,
3) server side succeeds,
4) video inside Google Cloud Platform is corrupted.
However, the video file inside the Google Cloud container is corrupted and won't play. I'm unsure exactly why it is corrupted, but my guesses so far:
Some sort of dodgy MIME type problem: different browsers seem to handle the codec/filetype from the MediaRecorder differently, e.g. Chrome seems to produce x-matroska (.mkv?), Firefox something different again. Ideally I would have a .webm container. Notice how I set the file name server side, and it isn't coming from the client. Should it? I'm unsure how to force the MediaRecorder to use a specific mimeType. I thought the blobOptions option should do it, but changing the extension and MIME type seems to have little to no impact on the corruption occurring.
Some sort of problem during upload where the HTTP requests don't execute and finish in order, e.g.
1) onDataAvailable completes second
2) onDataAvailable completes first
3) onDataAvailable completes third
I've sort of ruled this out because I think the chunks should be small enough.
Some sort of problem with Google Cloud Storage APIs that I'm using, perhaps in the wrong way? Does the cloud platform support streaming, and does this library send the correct params to do so?
Some sort of problem with how I'm uploading - should the axios headers be multipart formdata, or something else?
This is the package I'm using for the server side: https://github.com/Superbalist/flysystem-google-cloud-storage
Can anyone shed any light on how to achieve this goal of streaming up into Google Cloud without the video from the MediaRecorder being corrupted? Hopefully there's enough detail here in the question to help figure it out. The problem as illustrated isn't getting the file as far as Google Cloud, but rather the resulting file being unplayable in any video format.
Update
I've ordered my chunks client side now, and queued them properly before letting them reach the server. No difference to the output. As some have suggested - a single blob upload request works fine.
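For reference, the client-side queueing can be as simple as chaining every upload onto one promise, so each chunk only starts after the previous one has finished. A sketch, where uploadChunk is a hypothetical wrapper around the axios POST shown earlier:
// Serialize chunk uploads: each chunk waits for the previous one to finish.
let uploadQueue: Promise<unknown> = Promise.resolve();

const onChunk = (chunk: Blob) => {
  uploadQueue = uploadQueue
    .then(() => uploadChunk(chunk)) // hypothetical wrapper around the axios POST
    .catch((err) => console.error("chunk upload failed", err));
};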
Tried using the resumable config param (from reading the source code it seems like chunks need to be a certain size before Google recognises them as a resumable upload):
$filesystem = new Filesystem($adapter, [
    'resumable' => true
]);
Not sure how https://cloud.google.com/storage/docs/performing-resumable-uploads is implemented within the libraries I'm using (or within the Google Cloud APIs themselves, if at all). Do I need to implement that myself? Documentation is light on Google's part.
Short version:
The first thing you should do is buffer the whole video locally and send it as a single payload to the server and on to Google Cloud Storage. This will validate that your code for a small video is actually correct. Once you can verify this, you can move on to handling multi-chunk uploads.
Longer version:
For starters, you are passing the uuid to the request, but it's never used:
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
Next, you can't trust how chunking will work. I think you verified this behavior with the out-of-order result of the chunk logging. You need to assume that your server will receive chunks out of order and handle them correctly.
Each chunk you get on the server needs to be put in the right place; you can't just "writeStream", you need to write to the explicit binary block. Specifically, on every request, specify the byte range, per the Google docs (a client-side sketch follows the quoted docs):
curl -i -X PUT --data-binary @CHUNK_LOCATION \
    -H "Content-Length: CHUNK_SIZE" \
    -H "Content-Range: bytes CHUNK_FIRST_BYTE-CHUNK_LAST_BYTE/TOTAL_OBJECT_SIZE" \
    "SESSION_URI"
CHUNK_LOCATION is the local path to the chunk that you're currently uploading.
CHUNK_SIZE is the number of bytes you're uploading in the current request, for example 524288.
CHUNK_FIRST_BYTE is the starting byte in the overall object that the chunk you're uploading contains.
CHUNK_LAST_BYTE is the ending byte in the overall object that the chunk you're uploading contains.
TOTAL_OBJECT_SIZE is the total size of the object you are uploading.
SESSION_URI is the value returned in the Location header when you initiated the resumable upload.
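A minimal client-side sketch of that pattern with fetch, assuming the resumable session was already initiated and sessionUri holds the returned Location value (per the same docs, "*" can stand in for the total size while it is still unknown):
// PUT one chunk into an already-initiated resumable upload session.
async function uploadChunk(
  sessionUri: string,
  chunk: Blob,
  firstByte: number,
  totalSize: number | "*"
): Promise<Response> {
  const lastByte = firstByte + chunk.size - 1;
  return fetch(sessionUri, {
    method: "PUT",
    headers: {
      // Tells the server exactly where these bytes belong in the object.
      "Content-Range": `bytes ${firstByte}-${lastByte}/${totalSize}`,
    },
    body: chunk, // Content-Length is derived from the body automatically
  });
}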
Try to eliminate as many variables as possible and pinpoint where exactly the file is getting corrupted.
Since you are using a React (JS) -> Laravel (PHP) -> Google Cloud path,
the first thing I would suggest is to test each step separately:
React -> Laravel: save the file on your server and check if it's corrupted at this point.
Laravel -> Google Cloud: load a file from the server filesystem, upload it to the cloud, and see if it gets corrupted.
I don't have experience with Google Cloud, but I did something very similar with AWS and found that their video uploading service was extremely picky about the requests (including the order of the headers that were sent).
Try to compare the specs of the service you are using with your input, make the smallest possible thing that works, and start adding variables until you get to the final state.
Also, I don't see any kind of data ordering in your code.
If your chunks are close to each other (and with streaming that is highly likely), there is a chance they will arrive in a different order than originally sent. If you just append them to a file without any control over the ordering, the file will indeed get corrupted. I'm not sure whether, for webm, that would break just parts of the video or the entire thing.
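One way to get that control, sketched below, is to tag each chunk with a sequence number on the client so the server can reassemble them in order. The "index" field here is hypothetical; the server side would have to read it and write the chunk at the matching position:
// Tag every chunk with a monotonically increasing index before uploading.
let chunkIndex = 0;

const onChunk = (binaryData: Blob) => {
  const formData = new FormData();
  formData.append("data", binaryData);
  formData.append("index", String(chunkIndex++)); // hypothetical ordering field
  // ...then send formData with the existing uploadRecording call
};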
I have successfully managed to upload files into Firebase Storage, but now I want to display all the files in a table and have an option to download each file. I've read the Firebase documentation, but I can't get it to work. When I click the button, the function should get all the files, which I then want to visualize in a table that users can see:
Show file function:
showFileUrl() {
  storageRef.child('UploadedFiles/').listAll().then(function(res) {
    res.items.forEach(function(folderRef) {
      console.log("folderRef", folderRef.toString());

      var blob = null;
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "downloadURL");
      xhr.responseType = "blob";
      xhr.onload = function() {
        blob = xhr.response; // xhr.response is now a blob object
        console.log(blob);
      }
      xhr.send();
    });
  }).catch(function(error) {
  });
}
Below is the network log I captured while debugging. What do I need to do to get all the data, visualize it in a table, and have a download button that downloads the file when pressed?
(Screenshots of the network log, the Firebase Storage contents, and the blob objects of the files are not reproduced here.)
Your code gets a list of all the files, but it doesn't actually do anything to read the data for each file.
When using the Web client SDK, the only way to get the data for a file is through a download URL as shown here. So you'll need to:
Loop through all the files you get back from listAll() (you're already doing this).
Call `getDownloadURL()` as shown here, to get a download URL for each file.
Then use another library/function (such as fetch()/XMLHttpRequest) to read the data for each file (see the sketch below).
Alternatively, if your files are images, you can stuff the download URL in an img tag as the preview.
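Putting those steps together, a minimal sketch with the Web SDK style used in the question (listAll() and getDownloadURL() are the SDK's calls; the actual table rendering is left out):
// List the files, resolve a download URL for each, then fetch the bytes as a Blob.
storageRef.child('UploadedFiles/').listAll().then((res) => {
  res.items.forEach((itemRef) => {
    itemRef.getDownloadURL().then((url) => {
      fetch(url)
        .then((response) => response.blob())
        .then((blob) => {
          console.log(itemRef.name, blob); // build a table row / download link here
        });
    });
  });
});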
I want to download a file that can be in any format, viz. pdf, jpeg, png, xlsx, csv, etc. The download API on the backend, built with the Pyramid framework, sends a FileResponse as below:
def delivery_item_download_view(request, *args, **kw):
    context = request.context
    item_row = context.item_row
    if item_row and item_row["deleted_at"] is None:
        print(request.upload_dir + '/' + item_row["file_name"] + '.' + item_row["file_extension"])
        response = FileResponse(
            request.upload_dir + '/' + item_row["file_name"] + '.' + item_row["file_extension"],
            request=request,
        )
        response.headers["attachment"] = item_row["name"]
        return response
This, when executed using Postman, works as expected, giving the file as output. However, when I tried implementing the same using ReactJS, it doesn't work as expected. My client code is as below:
onDownloadItem = (item) => {
  console.log("item id is:", item.item_id)
  var apiBaseUrl = "https://dev.incodax.com/api/deliveries_items/" + item.item_id + "/download";
  fetch(apiBaseUrl, {
    method: "GET",
  }).then((res) => {
    fileDownload(res, item.file_name)
    console.log(res)
  })
}
This fileDownload function downloads a file, but with no content inside. In the downloaded file I can see something like:
[object Response]
I am getting a 200 response from the server, so I don't think there is any issue with the server-side code. How can I handle it on the client?
Thanks in advance
Would it suit you to simply redirect the user to a link to the file? The browser will automatically handle it and download the file.
The issue is in your fileDownload function, which you do not post here. It's not clear what the first parameter is supposed to be, but it is likely not the response object. You probably at least need to pull the body out of the response and save that! The response body can be converted to a buffer object, which could work (again, it depends on what fileDownload is expecting):
res.arrayBuffer().then(buffer => {
  fileDownload(buffer, item.file_name);
});
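If your fileDownload helper accepts a Blob instead (many do, but check what yours expects), a blob-based variant of the same fetch is an alternative sketch:
fetch(apiBaseUrl, { method: "GET" })
  .then((res) => res.blob()) // pull the body out as a Blob
  .then((blob) => {
    fileDownload(blob, item.file_name);
  });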
I am working on a MEAN.js application generated using https://github.com/DaftMonk/generator-angular-fullstack. I am trying to generate a .pdf file using PhantomJS and download it to the browser.
The issue is that the downloaded .pdf file always shows blank pages, regardless of the number of pages. The original file on the server is not corrupt. When I investigated further, I found that the downloaded file is always much larger than the original file on disk. Also, this issue happens only with .pdf files; other file types work fine.
I've tried several methods like res.redirect('http://localhost:9000/assets/exports/receipt.pdf');, res.download('client\\assets\\exports\\receipt.pdf'),
var fileSystem = require('fs');
var stat = fileSystem.statSync('client\\assets\\exports\\receipt.pdf');

res.writeHead(200, {
  'Content-Type': 'application/pdf',
  'Content-Length': stat.size
});

var readStream = fileSystem.createReadStream('client\\assets\\exports\\receipt.pdf');
return readStream.pipe(res);
and I've even tried https://github.com/expressjs/serve-static, with no change in the result.
I am new to Node.js. What is the best way to download a .pdf file to the browser?
Update:
I am running this on a Windows 8.1 64-bit computer.
I had corruption when serving static pdfs too. I tried everything suggested above. Then I found this:
https://github.com/intesso/connect-livereload/issues/39
In essence the usually excellent connect-livereload (package ~0.4.0) was corrupting the pdf.
So just get it to ignore pdfs via:
app.use(require('connect-livereload')({ignore: ['.pdf']}));
now this works:
app.use('/pdf', express.static(path.join(config.root, 'content/files')));
...great relief.
Here is a clean way to serve a file from Express; it uses an attachment header to make sure the file is downloaded:
var path = require('path');
var mime = require('mime');
var fs = require('fs'); // needed for createReadStream below

app.get('/download', function(req, res){
  // Here do whatever you need to get your file
  var filename = path.basename(file);
  var mimetype = mime.lookup(file);

  res.setHeader('Content-disposition', 'attachment; filename=' + filename);
  res.setHeader('Content-type', mimetype);

  var filestream = fs.createReadStream(file);
  filestream.pipe(res);
});
There are a couple of ways to do this:
If the file is a static one, like a brochure, readme, etc., then you can tell Express that your folder has static files (and they should be available directly) and keep the file there. This is done using the static middleware:
app.use(express.static(pathtofile));
Here is the link: http://expressjs.com/starter/static-files.html
Now you can directly open the file using the URL from the browser, like:
window.open('http://localhost:9000/assets/exports/receipt.pdf');
or
res.redirect('http://localhost:9000/assets/exports/receipt.pdf');
should be working.
The second way is to read the file; the data will come as a buffer. Actually, it should be recognised if you send it directly, but you can try converting it to base64 encoding using:
var base64String = buf.toString('base64');
then set the content type:
res.writeHead(200, {
  'Content-Type': 'application/pdf',
  'Content-Length': stat.size
});
and send the data as the response.
I will try to put an example of this.
EDIT: You don't even need to encode it. You may still try that, but I was able to make it work without encoding.
You also do not need to set the headers; Express does that for you. The following is a snippet of API code written to get the PDF in case it is not public/static. You need an API to serve the PDF:
router.get('/viz.pdf', function(req, res){
  require('fs').readFile('viz.pdf', function(err, data){
    res.send(data);
  })
});
Lastly, note that the URL for getting the PDF has the .pdf extension; this is for the browser to recognise that the incoming file is a PDF. Otherwise it will save the file without any extension.
Usually, if you are using Phantom to generate a PDF, the file will be written to disk, and you have to supply the path and a callback to the render function.
router.get('/pdf', function(req, res){
  // phantom initialization and generation logic
  // supposing you have the generation code above
  page.render(filePath, function (err) {
    var filename = 'myFile.pdf';
    res.setHeader('Content-type', "application/pdf");
    fs.readFile(filePath, function (err, data) {
      // if the file was read into the buffer without errors, you can delete it to save space
      if (err) throw err;
      fs.unlink(filePath);
      // send the file contents
      res.send(data);
    });
  });
});
I don't have experience with the frameworks you have mentioned, but I would recommend using a tool like Fiddler to see what is going on. For example, you may not need to add a Content-Length header, since you are streaming and your framework does chunked transfer encoding, etc.
I just want to ask how to simply upload a .csv file to a .php file on a server. In every other language this is straightforward; here I'm having a lot of trouble. I don't want to do complex things, only upload the file and see the server's response.
Thanks.
Here's my code; I used this library: https://github.com/daltoniam/SwiftHTTP
let fileUrl = NSURL.fileURLWithPath("/Users/mypath")
var request = HTTPTask()
request.POST("http://server.com/upload.php",
  parameters: ["file": HTTPUpload(fileUrl: fileUrl)],
  success: { (response: HTTPResponse) -> Void in
  },
  failure: { (error: NSError, response: HTTPResponse?) -> Void in
})