I am using the PDF.js library to display PDF files within my site (using pdf_viewer.js to render documents on-screen). The PDF files I am displaying are confidential, so I need to show them within the site while blocking unauthorized visitors from viewing the same files simply by typing their URLs into a browser.
I tried adding a Deny from all line in my .htaccess file, but of course that also blocked the viewer from showing the docs, so that seems to be a no-go. Clearly anyone could simply look at the inspector and see the PDF file being read by the viewer, so a direct URL is not going to be secure in any way.
I did read about PDF.js being able to read binary data, but I have no idea how I might read a PDF from my own file system and prepare it for use by the library, even if that means loading is a bit slower while the file contents are fetched and prepared on the fly.
Anyone have a solution that allows PDF.js to work without revealing the source PDF URL, or that otherwise reads the file using local file calls?
Okay, after some testing, the solution is very easy:
Get the PDF data using an Ajax call to a PHP script that can figure out which actual file is to be viewed.
In that PHP file...
Read the file into memory, using fopen and fread as normal.
Convert it to base64 using base64_encode.
Pass that string back to the calling JavaScript.
In the original calling function, use the following to convert the string to a Uint8Array and then pass that to the PDF.js library...
// The function that turns the base64 string into a Uint8Array...
function base64ToUint8Array(base64) {
    var raw = atob(base64);
    var uint8Array = new Uint8Array(raw.length);
    for (var i = 0; i < raw.length; i++) {
        uint8Array[i] = raw.charCodeAt(i);
    }
    return uint8Array;
}
// The code that gets the file data, calls the above function to convert it, and then calls PDF.js to display it
$.ajax({
    type: "GET",
    data: {file: <a file id or whatever distinguishes this PDF>},
    url: 'getFilePDFdata.php', // the PHP file that reads the data and returns it encoded
    success: function(base64Data) {
        var pdfData = base64ToUint8Array(base64Data);

        // Loading document.
        PDFJS.getDocument(pdfData).then(function (pdfDocument) {
            // Document loaded, specifying document for the viewer and
            // the (optional) linkService.
            pdfViewer.setDocument(pdfDocument);
            pdfLinkService.setDocument(pdfDocument, null);
        });
    }
});
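Note that newer PDF.js builds expose the library as pdfjsLib and return a loading task from getDocument, so on a recent version the equivalent call would look roughly like this (a sketch only; pdfViewer and pdfLinkService are the same objects used above):

// with PDF.js 2.x+ the data is passed inside an object and the result exposes .promise
pdfjsLib.getDocument({ data: pdfData }).promise.then(function (pdfDocument) {
    pdfViewer.setDocument(pdfDocument);
    pdfLinkService.setDocument(pdfDocument, null);
});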
I have successfully retrieved a JSON file from S3 storage. It is returned as a blob. I am able to turn the blob into text with this code (taken from https://docs.amplify.aws/lib/storage/download/q/platform/js/#monitor-progress-of-download):
export async function getS3Item(filename)
{
    const result = await Storage.get(filename, { download: true });
    result.Body.text().then(string => {
        // handle the String data
        console.log(string)
    });
}
but the text is all gibberish (I'm assuming because the object is binary?)... such as: "h�b```f�d`a}��ǀ|#1V ..."
Is there a way I can directly read this as a JSON object in JavaScript so that I can extract data from it?
Optionally, I can download the JSON file (which is shown in the link above, and I've gotten this to work), but I'd prefer not to download it, just to extract legible data from the file.
Thanks so much (I'm quite unfamiliar with blobs).
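For what it's worth, if the stored object really is plain UTF-8 JSON, a minimal sketch of reading it directly (using the same Amplify Storage.get call as above; the helper name getS3Json is made up) would be:

import { Storage } from 'aws-amplify';

// hypothetical helper: download the object and parse it as JSON
// (this only works if the stored bytes are plain UTF-8 JSON; garbled output
// like the sample above usually means the data is compressed or binary)
export async function getS3Json(filename) {
    const result = await Storage.get(filename, { download: true });
    const text = await result.Body.text();   // Blob -> string
    return JSON.parse(text);                 // string -> JavaScript object
}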
I am able to open and stream the file with no issue using the following; however, I need to be able to use the file information that is stored inside the bucket.
const db = connection.connections[0].db;
const bucket = new mongoose.mongo.GridFSBucket(db, {
    bucketName: bucketName
});
bucket.openDownloadStreamByName(filename).pipe(res);
For example, I would like to be able to set the following:
res.setHeader('Content-Type', (TYPE)),
res.setHeader('Content-Length', (LENGTH)),
The call above does accept options, but I don't know whether the pipe stops us from setting the Content-Type and Content-Length after it starts piping.
According to the docs, no, you can't get file info from the stream, but judging from the source code it seems you can.
According to this and this, you could get contentType by accessing
bucket.openDownloadStreamByName(...).s.files[0].contentType
or
bucket.openDownloadStreamByName(...).s.file?.contentType
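If relying on the internal .s property feels too fragile, another option (not from the linked sources, just a sketch assuming the same connection, bucketName, filename, and res from the question) is to look the file document up in the bucket's <bucketName>.files collection first, set the headers, and only then pipe the stream:

const db = connection.connections[0].db;
const bucket = new mongoose.mongo.GridFSBucket(db, { bucketName: bucketName });

// GridFS keeps one metadata document per file in <bucketName>.files
db.collection(bucketName + '.files').findOne({ filename: filename }).then(function (fileDoc) {
    if (fileDoc) {
        // contentType is only present if it was set when the file was uploaded
        res.setHeader('Content-Type', fileDoc.contentType);
        res.setHeader('Content-Length', fileDoc.length);
    }
    bucket.openDownloadStreamByName(filename).pipe(res);
});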
I am trying to select a local JSON file and load it in my Blazor client component.
<input type="file" onchange="LoadFile" accept="application/json;.json" class="btn btn-primary" />
protected async Task LoadFile(UIChangeEventArgs args)
{
string data = args.Value as string;
}
P.S. I do not understand: do I need to keep track of both the name of the file and the content when retrieving it?
I guess you're trying to read the contents of a JSON file on the client (Blazor), right? Why not on the server?!
Anyhow, args.Value can only furnish you with the name of the file. In order to read the contents of the file, you can use the FileReader API (see here: https://developer.mozilla.org/en-US/docs/Web/API/FileReader). That means you should use JSInterop to communicate with the FileReader API. But before you start, I'd suggest you try to find out whether this API has already been wrapped by the community (something like the localStorage wrappers, etc.). You may also need to deserialize the read contents into something meaningful, such as a C# object.
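As an illustration only, the JavaScript side of such an interop helper might look something like this (the function name is made up; you would call it from C# via IJSRuntime with a reference to the input element):

// hypothetical interop helper: reads the first file selected in an <input type="file">
// element and resolves with its text content
window.readInputFileAsText = function (inputElement) {
    return new Promise(function (resolve, reject) {
        var file = inputElement.files[0];
        var reader = new FileReader();
        reader.onload = function () { resolve(reader.result); };
        reader.onerror = function () { reject(reader.error); };
        reader.readAsText(file);
    });
};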
Hope this helps...
There is a tool that can help, but it currently doesn't support the 3.0 preview. https://github.com/jburman/W8lessLabs.Blazor.LocalFiles
(no affiliation with the developer)
The input control will give you the location of the file as a full path along with the name of the file. Then you still have to retrieve the file and download it to the server.
Late response, but with 3.1 there is an additional AspNetCore.Components module you can download via NuGet to get access to HttpClient extensions. These make it simple:
// fetch mock data for now
var results = await _http.GetJsonAsync<WellDetail[]>("sample-data/well.json");
You could inject the location of the file from your input control in place of the "sample-data/well.json" string.
Something like:
using Microsoft.AspNetCore.Components;

// the HttpClient injected into the component
[Inject]
public HttpClient _http { get; set; }

private async Task<List<MyData>> LoadFile(string filePath)
{
    // fetch the file and deserialize the JSON into MyData objects
    MyData[] results = await _http.GetJsonAsync<MyData[]>(filePath);
    return results.ToList();
}
I'm actually struggling with a problem handling some KML files with Google Maps in my JavaScript application.
I wrote a method that reads a KML file from a URL or my local file system and stores the content as a string in a database. Now I would like to activate layers that are stored in my DB by clicking a button. Everything is fine up to here.
In every example I can find, they only use the url attribute of a KmlLayer, passing a URL to a KML file.
like here:
var ctaLayer = new google.maps.KmlLayer({
    url: 'http://googlemaps.github.io/js-v2-samples/ggeoxml/cta.kml',
    map: map
});
But since my files are stored as strings in my DB, I don't have a URL to a file, only the content. I can't find a way to pass just the XML string as content.
Somebody here who can help?
Maybe someday somebody will struggle with a similar problem. The solution was a little bit tricky. I needed to create a Blob with the content of my string. From the Blob I created a file and wrapped it in an object URL, which you can pass to your KML parser. I used https://github.com/geocodezip/geoxml3 for that.
vm.activeLayers.forEach(function(value, key) {
    var file = new Blob([value], {type: 'kml'});
    var url = URL.createObjectURL(file);
    var myParser = new geoXML3.parser({
        map: map
    });
    myParser.parse(url);
});
I am trying to build a simple photo upload app on Ionic (Cordova). I am using the cordovaImagePicker plugin to have the user select images from the mobile device. This plugin returns an array of paths on the device.
For handling the upload part I am using jquery-file-upload (mostly because that is what I used for the browser version, and I am doing all kinds of processing for which I have the code ready). The problem, however, is that jquery-file-upload expects to work with an input element (<input type="file">), which creates a JavaScript File object containing all kinds of metadata.
So in order to get the cordovaImagePicker to work with jquery-file-upload, I figure I have to convert the file path to a File object. Below I am using the cordova file plugin to achieve this:
$cordovaImagePicker.getPictures($scope.pickOptions).then(function(filelist) {
    $.each(filelist, function (index, filepath) {
        $window.resolveLocalFileSystemURL(filepath, function(fileEntry) {
            fileEntry.file(function(file) {
                var reader = new FileReader();
                reader.onloadend = function(e) {
                    var fileObj = new File([this.result], "filename.jpg", {type: "image/jpeg"});
                    // send filelist from cordovaImagePicker to jquery-fileupload as if through file input
                    $('#fileupload').fileupload('send', {files: fileObj});
                };
                reader.readAsArrayBuffer(file);
            }, function(e) { $scope.errorHandler(e); });
        }, function(e) { $scope.errorHandler(e); });
    });
}, function(error) {
    // error getting photos
    console.log('Error selecting images through $cordovaImagePicker');
});
So first of all, this is not really working correctly; apparently I am doing something wrong, since, for example, the type attribute ends up being another object that contains the type attribute with the correct values (and other such weird issues). I would be happy if someone could point out what I am doing wrong.
Surely there must be something (a cordova plugin?) that I am not aware of that does this conversion for me (including, for example, adding a thumbnail)? Alternatively, maybe there is something that can easily make jquery-file-upload work with file paths? I couldn't find anything so far...
However, it feels like I am trying too hard to force together two components that were just not built to work together (File objects vs. file paths), and maybe I should just rewrite the processing and use the cordova file transfer plugin?
I ended up rewriting the uploader with cordova-file-transfer, which works like a charm. I wasted more time trying to work around it than it took to just rewrite it from scratch.
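For reference, an upload with the cordova file transfer plugin looks roughly like this (a sketch only; the server URL and option values here are placeholders, not the ones from the actual app):

// minimal sketch of an upload with cordova-plugin-file-transfer;
// 'https://example.com/upload' and the option values are placeholders
function uploadImage(filepath) {
    var options = new FileUploadOptions();
    options.fileKey = 'file';   // form field name the server expects
    options.fileName = filepath.substr(filepath.lastIndexOf('/') + 1);
    options.mimeType = 'image/jpeg';

    var ft = new FileTransfer();
    ft.upload(filepath, encodeURI('https://example.com/upload'),
        function (result) { console.log('Uploaded ' + result.bytesSent + ' bytes'); },
        function (error) { console.log('Upload failed, code ' + error.code); },
        options);
}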