I am able to open and stream the file with no issue using the following; however, I need to be able to use the file information that is stored inside the bucket.
const db = connection.connections[0].db;
const bucket = new mongoose.mongo.GridFSBucket(db, {
  bucketName: bucketName
});
bucket.openDownloadStreamByName(filename).pipe(res);
For example, I would like to be able to set the following:
res.setHeader('Content-Type', (TYPE));
res.setHeader('Content-Length', (LENGTH));
I am wondering whether the call above allows options; I also don't know if the pipe stops us from setting the Content-Type and Content-Length once it starts piping.
According to the docs, no, you can't get file info from the stream, but judging from the source code it seems you can.
According to this and this, you could get the contentType by accessing
bucket.openDownloadStreamByName(...).s.files[0].contentType
or
bucket.openDownloadStreamByName(...).s.file?.contentType
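Putting this together, here is a minimal sketch of setting the headers before piping. It assumes the driver's 'file' event, which is emitted once the files-collection document has been fetched and before any data flows; the internal .s layout above is not part of the public API, so the event-based route may be the safer bet:

const downloadStream = bucket.openDownloadStreamByName(filename);
downloadStream.on('file', (file) => {
  // Fires before the first chunk, so the response headers can still be set.
  res.setHeader('Content-Type', file.contentType);
  res.setHeader('Content-Length', file.length);
});
downloadStream.pipe(res);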
The front end enables people to upload their photos, so I was initially sending the base64 to the server and working with it there, but a firewall blocks any request containing base64. As an alternative, I am trying to upload the image to an Azure blob, get the file name, and then send that to the server for processing, where I generate a SAS token for blob validation and processing.
This works perfectly fine when I work locally; the front end uploads with @azure/storage-blob and uploadBrowserData() when I send the ArrayBuffer as the param:
import { BlobServiceClient, AnonymousCredential } from '@azure/storage-blob';

const anonymousCredential = new AnonymousCredential();

export const uploadSelfieToBlob = async arrayBuffer => {
  try {
    // accountName, sasString and containerName are configured elsewhere
    const blobURL = `https://${accountName}.blob.core.windows.net${sasString}`;
    const blobServiceClient = new BlobServiceClient(blobURL, anonymousCredential);
    const containerClient = blobServiceClient.getContainerClient(containerName);
    const randomString = Math.random().toString(36).substring(7);
    const blobName = `${randomString}_${new Date().getTime()}.jpg`;
    const blockBlobClient = containerClient.getBlockBlobClient(blobName);
    const uploadBlobResponse = await blockBlobClient.uploadBrowserData(arrayBuffer);
    return { blobName, blobId: uploadBlobResponse.requestId };
  } catch (error) {
    console.log('error when uploading to blob', error);
    throw new Error('Error uploading the selfie to blob');
  }
};
When I deploy, this does not work. The front end is deployed in the EastUS2 region, while my local development region is different.
I thought the sasString generated for anonymous access had a timezone component, so I generated two different ones, one for local and one for the hosted server, with the same location selected.
Failed to send request to https://xxxx.blob.core.windows.net/contanainer-name/26pcie_1582087489288.jpg?sv=2019-02-02&ss=b&srt=c&sp=rwdlac&se=2023-09-11T07:57:29Z&st=2020-02-18T00:57:29Z&spr=https&sig=9IWhXo5i%2B951%2F8%2BTDqIY5MRXbumQasOnY4%2Bju%2BqF3gw%3D
What am I missing? Any lead would be helpful, thanks.
First, as mentioned in the comments, there was an issue with the CORS settings, because of which you were getting the initial error.
AuthorizationResourceTypeMismatch
This request is not authorized to perform this operation using this resource type.
RequestId:7ec96c83-101e-0001-4ef1-e63864000000
Time:2020-02-19T06:57:31.2867563Z
I looked up this error code here and then closely looked at your SAS URL.
One thing I noticed in your SAS URL is that you have set the signed resource type (srt) to c (container) and are trying to upload a blob. If you look at the description of the operations you can perform with srt=c here, you will notice that blob-related operations are not supported.
In order to perform blob-related operations (like blob upload), you would need to set the signed resource type value to o (for object).
Please regenerate your SAS token and include the signed resource type object (you can also include container and/or service in there as well), and then your request should work. Essentially, the srt in your SAS URL should be something like srt=o, srt=co, or srt=sco.
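For illustration, here is a sketch of generating such an account SAS with @azure/storage-blob; the account name and key are placeholders, and this must run server-side where the account key can be kept secret:

import {
  StorageSharedKeyCredential,
  generateAccountSASQueryParameters,
  AccountSASPermissions,
  AccountSASServices,
  AccountSASResourceTypes
} from '@azure/storage-blob';

const credential = new StorageSharedKeyCredential('<account-name>', '<account-key>');
const sasString = '?' + generateAccountSASQueryParameters({
  services: AccountSASServices.parse('b').toString(),            // blob service (ss=b)
  resourceTypes: AccountSASResourceTypes.parse('co').toString(), // container + object (srt=co)
  permissions: AccountSASPermissions.parse('rwdlac'),            // same permissions as the failing URL
  expiresOn: new Date('2023-09-11T07:57:29Z')
}, credential).toString();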
I couldn't see anything wrong with the code you mentioned above, but I have been using a different method to upload files to Azure Blob Storage from React; the method is exactly the same as in this blog article, which works perfectly for me.
https://medium.com/@stuarttottle/upload-to-azure-blob-storage-with-react-34f37805fdfc
I am trying to select a local JSON file and load it in my Blazor client component.
<input type="file" onchange="@LoadFile" accept="application/json,.json" class="btn btn-primary" />

protected async Task LoadFile(UIChangeEventArgs args)
{
    // args.Value only carries the selected file's name, not its contents
    string data = args.Value as string;
}
P.S. I do not understand: do I need to keep track of both the name of the file and the content when retrieving it?
I guess you're trying to read the contents of a JSON file on the client (Blazor), right? Why not on the server!?
Anyhow, args.Value can only furnish you with the name of the file. In order to read the contents of the file, you can use the FileReader API (see here: https://developer.mozilla.org/en-US/docs/Web/API/FileReader). That means you should use JS interop to communicate with the FileReader API. But before you start, I'd suggest you find out whether this API has already been implemented by the community (something like the localStorage wrappers, etc.). You may also need to deserialize the read contents into something meaningful, such as a C# object.
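To give an idea, here is a minimal sketch of the JavaScript side of that interop; the function name readFileAsText and the element id are assumptions for illustration:

window.readFileAsText = function (inputElementId) {
  return new Promise(function (resolve, reject) {
    var input = document.getElementById(inputElementId);
    if (!input || !input.files || input.files.length === 0) {
      reject('No file selected');
      return;
    }
    var reader = new FileReader();
    reader.onload = function () { resolve(reader.result); }; // the file contents as a string
    reader.onerror = function () { reject(reader.error); };
    reader.readAsText(input.files[0]);
  });
};

On the C# side you could then call something like JSRuntime.InvokeAsync<string>("readFileAsText", "myInputId") and deserialize the returned JSON.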
Hope this helps...
There is a tool that can help, but it currently doesn't support the 3.0 preview. https://github.com/jburman/W8lessLabs.Blazor.LocalFiles
(no affiliation with the developer)
The input control will give you the location of the file as a full path along with the name of the file. Then you still have to retrieve the file and download it to the server.
Late response, but with 3.1 there is an additional AspNetCore.Components module you can download via NuGet to get access to HttpClient extensions. These make it simple:
// fetch mock data for now
var results = await _http.GetJsonAsync<WellDetail[]>("sample-data/well.json");
You could inject the location of the file from your input control in place of the "sample-data/well.json" string.
Something like:
using System.Linq;
using Microsoft.AspNetCore.Components;

// _http should come from dependency injection, e.g. via an [Inject] property
private async Task<List<MyData>> LoadFile(string filePath)
{
    // fetch the file data and convert it to MyData objects
    var results = await _http.GetJsonAsync<MyData[]>(filePath);
    return results.ToList();
}
I am using the PDF.js library to display PDF files within my site (using pdf_viewer.js to display documents on-screen), but the PDF files I am displaying are confidential. I need to be able to show them within the site while blocking non-authorized visitors from viewing the same files just by typing their URLs into a browser.
I tried adding the Deny from all line in my .htaccess file, but that of course also blocked the viewer from showing the docs, so that seems to be a no-go. Clearly anyone could simply look at the inspector and see the PDF file that is being read by the viewer, so a direct URL is not going to be secure in any way.
I did read about PDF.js being able to read binary data, but I have no knowledge of how I might read a PDF from my own file system and prep it for use by the library, even if that means it is all a bit slower in loading to get the file contents and prep it on the fly.
Does anyone have a solution that allows PDF.js to work without revealing the source PDF URL, or that otherwise reads the file using local file calls?
Okay, after some testing, the solution is very easy:
1. Get the PDF data using an Ajax-called function that can figure out which actual file is to be viewed. In that PHP file:
2. Read the file into memory using fopen and fread as normal.
3. Convert it to base64 using the base64_encode function.
4. Pass that string back to the calling JavaScript.
5. In the original calling function, use the following to convert the string to a Uint8Array and then pass that to the PDF.js library.
// The function that turns the base64 string into a Uint8Array...
function base64ToUint8Array(base64) {
  var raw = atob(base64);
  var uint8Array = new Uint8Array(raw.length);
  for (var i = 0; i < raw.length; i++) {
    uint8Array[i] = raw.charCodeAt(i);
  }
  return uint8Array;
}
// The guts: get the file data, call the above function to convert it,
// and then call PDF.js to display it.
$.ajax({
  type: "GET",
  data: { file: fileId }, // fileId: a file id or whatever distinguishes this PDF
  url: 'getFilePDFdata.php', // the PHP file that reads the data and returns it encoded
  success: function (base64Data) {
    var pdfData = base64ToUint8Array(base64Data);
    // Loading document.
    PDFJS.getDocument(pdfData).then(function (pdfDocument) {
      // Document loaded, specifying document for the viewer and
      // the (optional) linkService.
      pdfViewer.setDocument(pdfDocument);
      pdfLinkService.setDocument(pdfDocument, null);
    });
  }
});
I am working on a MEAN.js application generated using https://github.com/DaftMonk/generator-angular-fullstack. I am trying to generate a .pdf file using PhantomJS and download it to the browser.
The issue is that the downloaded .pdf file always shows blank pages regardless of the number of pages. The original file on the server is not corrupt. When I investigated further, I found that the downloaded file is always much larger than the original file on disk. Also, this issue happens only with .pdf files; other file types work fine.
I've tried several methods like res.redirect('http://localhost:9000/assets/exports/receipt.pdf');, res.download('client\\assets\\exports\\receipt.pdf'),
var fileSystem = require('fs');
var stat = fileSystem.statSync('client\\assets\\exports\\receipt.pdf');
res.writeHead(200, {
'Content-Type': 'application/pdf',
'Content-Length': stat.size
});
var readStream = fileSystem.createReadStream('client\\assets\\exports\\receipt.pdf');
return readStream.pipe(res);
and I've even tried https://github.com/expressjs/serve-static, with no change in the result.
I am new to Node.js. What is the best way to download a .pdf file to the browser?
Update:
I am running this on a Windows 8.1 64-bit computer.
I had corruption when serving static pdfs too. I tried everything suggested above. Then I found this:
https://github.com/intesso/connect-livereload/issues/39
In essence the usually excellent connect-livereload (package ~0.4.0) was corrupting the pdf.
So just get it to ignore pdfs via:
app.use(require('connect-livereload')({ignore: ['.pdf']}));
now this works:
app.use('/pdf', express.static(path.join(config.root, 'content/files')));
...great relief.
Here is a clean way to serve a file from Express; it uses an attachment header to make sure the file is downloaded:
var path = require('path');
var mime = require('mime');
var fs = require('fs');

app.get('/download', function(req, res){
  // Here do whatever you need to get your file path into `file`
  var filename = path.basename(file);
  var mimetype = mime.lookup(file);
  res.setHeader('Content-disposition', 'attachment; filename=' + filename);
  res.setHeader('Content-type', mimetype);
  var filestream = fs.createReadStream(file);
  filestream.pipe(res);
});
There are a couple of ways to do this:
If the file is a static one like a brochure or readme, then you can tell Express that your folder has static files (and they should be available directly) and keep the file there. This is done using the static middleware:
app.use(express.static(pathtofile));
Here is the link: http://expressjs.com/starter/static-files.html
Now you can directly open the file using the url from the browser like:
window.open('http://localhost:9000/assets/exports/receipt.pdf');
or
res.redirect('http://localhost:9000/assets/exports/receipt.pdf');
should be working.
The second way is to read the file; the data will come as a buffer. Actually, it should be recognised if you send it directly, but you can try converting it to base64 encoding using:
var base64String = buf.toString('base64');
then set the content type:
res.writeHead(200, {
'Content-Type': 'application/pdf',
'Content-Length': stat.size
});
and send the data as response.
I will try to put an example of this.
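For instance, a sketch of the base64 variant just described (note that the Content-Length must then reflect the encoded length rather than stat.size, since base64 output is about a third larger than the raw bytes):

fs.readFile('client/assets/exports/receipt.pdf', function (err, buf) {
  if (err) throw err;
  var base64String = buf.toString('base64');
  res.writeHead(200, {
    'Content-Type': 'application/pdf',
    'Content-Length': Buffer.byteLength(base64String)
  });
  res.end(base64String);
});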
EDIT: You don't even need to encode it. You may still try that, but I was able to make it work without encoding.
Plus, you also do not need to set the headers; Express does it for you. The following is the snippet of API code written to get the pdf in case it is not public/static. You need an API to serve the pdf:
router.get('/viz.pdf', function(req, res){
  require('fs').readFile('viz.pdf', function(err, data){
    if (err) return res.sendStatus(500); // handle a missing or unreadable file
    res.send(data);
  });
});
Lastly, note that the URL for getting the pdf has the .pdf extension; this is for the browser to recognise that the incoming file is a pdf. Otherwise, it will save the file without any extension.
Usually, if you are using phantom to generate a pdf, the file will be written to disk and you have to supply the path and a callback to the render function.
router.get('/pdf', function(req, res){
  // phantom initialization and generation logic
  // supposing you have the generation code above
  page.render(filePath, function (err) {
    if (err) throw err;
    var filename = 'myFile.pdf';
    res.setHeader('Content-type', 'application/pdf');
    res.setHeader('Content-disposition', 'inline; filename=' + filename);
    fs.readFile(filePath, function (err, data) {
      // if the file was read into the buffer without errors you can delete it to save space
      if (err) throw err;
      fs.unlink(filePath, function () {});
      // send the file contents
      res.send(data);
    });
  });
});
I don't have experience with the frameworks you have mentioned, but I would recommend using a tool like Fiddler to see what is going on. For example, you may not need to add a Content-Length header, since you are streaming and your framework does chunked transfer encoding, etc.
I'm trying to replace a PDF file in a Google Drive Folder using a script. Since GAS does not provide a method for adding revisions (versions), I'm trying to replace the content of the file, but all I get is a blank PDF.
I can't use the DriveApp.File class since our Admin has disabled the new API, so I have to use DocsList.File instead.
Input:
OldFile.pdf (8 pages)
NewFile.pdf (20 pages)
Output expected:
OldFile.pdf with the same content as NewFile.pdf
Real Output:
OldFile.pdf with 20 empty pages.
Process:
// Note: 'new' is a reserved word in JavaScript, so the files need other names.
var oldFile = DocsList.getFileById("####");
var newFile = DocsList.getFileById("####");
oldFile.replace(newFile.getContentAsString());
Any ideas, please?
Thanks a lot in advance.
P.S.: I also tried calling oldFile.clear() first, but I'd say the problem lies in the getContentAsString method.
The Advanced Drive Service can be used to replace the content of an existing PDF file in Google Drive. This answer also includes an example of how to update a PDF file in a shared Drive.
function overwriteFile(blobOfNewContent, currentFileID) {
  var currentFile = DriveApp.getFileById(currentFileID);
  if (currentFile) { // if there is a truthy value for the current file
    Drive.Files.update({
      title: currentFile.getName(),
      mimeType: currentFile.getMimeType()
    }, currentFile.getId(), blobOfNewContent);
  }
}
References
https://developers.google.com/apps-script/advanced/drive
https://developers.google.com/drive/api/v3/reference/files/update
An example of use with a shared Drive:
Drive.Files.update({
  title: currentFile.getName(),
  mimeType: currentFile.getMimeType()
}, currentFile.getId(), blobOfNewContent, {supportsTeamDrives: true});
Try to get the content as a blob instead of a string.
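For example, pairing that with the overwriteFile function from the answer above (the "####" IDs are placeholders, as in the question):

var sourceFile = DriveApp.getFileById("####"); // the file holding the new content
overwriteFile(sourceFile.getBlob(), "####");   // the ID of the PDF to overwrite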