How can I get the .data3d.buffer from my uploaded 3d model through storage API - archilogic

I can upload my 3D model through the storage API, but I cannot get the .data3d.buffer back from it. I found that the .data3d.buffer is necessary for A-Frame to load the 3D model. How can I get the .data3d.buffer through the storage API?

If you have the storage key, you can directly download the model from storage.3d.io.
Example:
io3d.storage.put(myFile).then(function (storageKey) {
  console.log('the data3d.buffer is now at', 'https://storage.3d.io' + storageKey)
})
Note that the downloaded file will have a .gz.data3d.buffer extension, because the browser decompresses the gzip-packed asset while downloading. You may have to remove the .gz from the filename to be able to use the file directly.
The storage API can also be used to fetch the file directly; in that case it will automatically parse the binary file into JSON for you.
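For example, a minimal sketch of fetching the parsed data through the storage API (io3d.storage.get is the method I believe 3dio-js exposes for this; treat the exact name as an assumption and check the docs):
io3d.storage.get(storageKey).then(function (data3d) {
  // data3d is the parsed JSON form of the .data3d.buffer file
  console.log('loaded data3d with keys:', Object.keys(data3d))
})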
To use the model in A-Frame you would only need the storage key anyway (in the browser):
io3d.storage.put(myFile).then(function (storageKey) {
  var model = document.createElement('a-entity')
  model.setAttribute('io3d-data3d', 'key: ' + storageKey)
  document.querySelector('a-scene').appendChild(model)
})

Related

How To Upload A Large File (>6MB) To SalesForce Through A Lightning Component Using Apex Aura Methods

I am aiming to take a file a user attaches through a Lightning Component and create a document object containing the data.
So far I have overcome the request size limits by chunking the data being uploaded into 1MB chunks. When the Apex Aura method receives these chunks of data it will either create a new document (if it is the first chunk), or will retrieve the existing document and add the new chunk to the end.
Data is received Base64 encoded, and then decoded server-side.
As the document data is stored as a Blob, the original file contents will be read as a String, and then appended with the chunk received. The new contents are then converted back into a Blob to be stored within the ContentVersion object.
The problem I'm having is that strings in Apex have a maximum length of around 6,000,000 characters. Whenever the file size exceeds 6 MB, this limit is hit during the concatenation and the upload halts.
I have attempted to avoid this limit by converting the Blob to a String only when necessary for the concatenation (as suggested here: https://developer.salesforce.com/forums/?id=906F00000008w9hIAA), but this hasn't worked. I'm guessing it was patched, because it still technically allocates a string larger than the limit.
The code is really simple when appending so far:
// fetch the existing document and append the freshly decoded chunk
ContentVersion originalDocument = [SELECT Id, VersionData FROM ContentVersion
                                   WHERE Id = :<existing_file_id> LIMIT 1];
Blob originalData = originalDocument.VersionData;
Blob appendedData = EncodingUtil.base64Decode(<base_64_data_input>);
// this concatenation is where the ~6 MB string/heap limit is hit
Blob newData = Blob.valueOf(originalData.toString() + appendedData.toString());
originalDocument.VersionData = newData;
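For reference, the browser-side half of that chunking scheme might look roughly like this (a sketch; uploadInChunks, saveChunk and CHUNK_SIZE are illustrative names, not code from the actual component):
// slice the file into 1 MB chunks and hand each to the Aura-enabled Apex method
const CHUNK_SIZE = 1024 * 1024;

async function uploadInChunks(file, saveChunk) {
  let documentId = null;
  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    const slice = file.slice(offset, offset + CHUNK_SIZE);
    const base64 = await blobToBase64(slice);
    // the first call creates the document, later calls append to it
    documentId = await saveChunk(documentId, base64);
  }
  return documentId;
}

function blobToBase64(blob) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result.split(',')[1]); // strip the data: prefix
    reader.onerror = reject;
    reader.readAsDataURL(blob);
  });
}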
You will have a hard time with it.
You could try offloading the concatenation to an asynchronous process (@future/Queueable/Schedulable/Batchable); those get 12 MB of heap instead of 6. That could buy you some time.
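A minimal sketch of that idea, assuming the chunk has already been handed to the job as a base64 string (the class and variable names are made up; note that ContentVersion bodies are not updatable, so the grown file has to be written as a new version):
public class AppendChunkJob implements Queueable {
    private Id contentVersionId;
    private String base64Chunk;

    public AppendChunkJob(Id contentVersionId, String base64Chunk) {
        this.contentVersionId = contentVersionId;
        this.base64Chunk = base64Chunk;
    }

    public void execute(QueueableContext ctx) {
        // the same concatenation as before, just with 12 MB of heap to play with
        ContentVersion prev = [SELECT VersionData, ContentDocumentId, Title, PathOnClient
                               FROM ContentVersion WHERE Id = :contentVersionId LIMIT 1];
        Blob chunk = EncodingUtil.base64Decode(base64Chunk);
        insert new ContentVersion(
            ContentDocumentId = prev.ContentDocumentId,
            Title = prev.Title,
            PathOnClient = prev.PathOnClient,
            VersionData = Blob.valueOf(prev.VersionData.toString() + chunk.toString())
        );
    }
}
You would enqueue it from the Aura method with System.enqueueJob(new AppendChunkJob(existingId, base64Data));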
You could try cheating by embedding an iframe (Visualforce or a lightning:container tag? Or maybe a "canvas app") that would grab your file and do some manual JavaScript magic calling the normal REST API for document upload: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_sobject_insert_update_blob.htm (the last code snippet is about multiple documents). Maybe jsforce?
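A hedged sketch of that REST route, uploading the whole file from the browser in one POST and bypassing the Apex heap entirely (the session id, API version, and same-origin access are assumptions you must sort out for your org):
async function uploadToContentVersion(file, sessionId) {
  // read the file as base64; REST accepts ContentVersion bodies inline as JSON
  const base64 = await new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result.split(',')[1]);
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
  const resp = await fetch('/services/data/v52.0/sobjects/ContentVersion', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + sessionId,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      Title: file.name,
      PathOnClient: file.name,
      VersionData: base64
    })
  });
  return resp.json();
}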
Can you upload it somewhere else (SharePoint? Heroku?) and have that system call into SF to push the files (no Apex = no heap size limit)? Or even look up "Files Connect".
Can you send an email with attachments? Crude, but if you write a custom Email-to-Case handler class you'll have 36 MB of heap.
You wrote "we needed multiple files to be uploaded and the multi-file-upload component provided doesn't support all extensions". That may be caused by these:
In Experience Builder sites, the file size limits and types allowed follow the settings determined by site file moderation.
lightning-file-upload doesn't support uploading multiple files at once on Android devices.
If the "Don't allow HTML uploads as attachments or document records" security setting is enabled for your organization, the file uploader cannot be used to upload files with the following extensions: .htm, .html, .htt, .htx, .mhtm, .mhtml, .shtm, .shtml, .acgi, .svg.

Can Hive in Flutter store big data (some GB files)?

I was wondering: if I store a video or a movie and open that box, will the video be loaded into my RAM, or does it just load from ROM? I am a bit confused. Can anyone explain this to me?
I think you have misunderstood the concept of a database.
A database solution is meant to store pure, organized informational data, not large files such as media, documents, or images.
Storage, on the contrary, need not be organized; all files can exist in one folder.
So whichever database solution you use, always store plain data types.
In this case you can have a data model, which is also an essential part of using a database.
@HiveType(typeId: 0)
class Movie extends HiveObject {
  @HiveField(0)
  String name;

  @HiveField(1)
  String path; // store the file path, not the file itself
}
Since Hive supports Dart objects, you don't have to convert to JSON or anything similar to store the data.
So once you have fetched the file from storage, you can get the path using path_provider or from the File object itself, and then create an object:
File file = ...; // get the movie file by any means (e.g. a file picker)
final path = file.path;

var box = await Hive.openBox('Movies');
var m = Movie()
  ..name = 'Batman Begins'
  ..path = path;
box.add(m); // add() already persists the object; save() is only needed after later changes
Hope this clears your doubt.
Copy/save your video/media files to local file storage and save the file path in a Hive box.
Whenever you need the file, get the path from Hive, then load the file from local storage using that path.
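A minimal sketch of that approach, assuming path_provider is available (the 'videos' box name and function name are illustrative):
import 'dart:io';
import 'package:hive/hive.dart';
import 'package:path_provider/path_provider.dart';

Future<String> saveVideo(File picked) async {
  // copy the large file into app-local storage; only the path goes into Hive
  final dir = await getApplicationDocumentsDirectory();
  final saved = await picked.copy('${dir.path}/${picked.uri.pathSegments.last}');

  final box = await Hive.openBox('videos');
  await box.add(saved.path); // a small string, cheap to load into RAM
  return saved.path;
}

// later: read the path back and open the file lazily from disk
Future<File> loadVideo(int index) async {
  final box = await Hive.openBox('videos');
  return File(box.getAt(index) as String);
}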

How to upload a local json file using Blazor

I am trying to select a local json file and load it in my blazor client component.
<input type="file" onchange="LoadFile" accept="application/json;.json" class="btn btn-primary" />
protected async Task LoadFile(UIChangeEventArgs args)
{
string data = args.Value as string;
}
P.S. I do not understand: do I need to keep track of both the name of the file and the content when retrieving it?
I guess you're trying to read the contents of a JSON file on the client (Blazor), right? Why not on the server?!
Anyhow, args.Value can only furnish you with the name of the file. In order to read the contents of the file, you can use the FileReader API (see https://developer.mozilla.org/en-US/docs/Web/API/FileReader). That means you should use JSInterop to communicate with the FileReader API. But before you start, I'd suggest you check whether this API has already been wrapped by the community (something like the localStorage wrappers, etc.). You may also need to deserialize the read contents into something meaningful, such as a C# object.
Hope this helps...
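A hedged sketch of that JSInterop route. "readFileAsText" is a JS helper you must register yourself (it is not a built-in Blazor API), e.g. in index.html: window.readFileAsText = (input) => new Promise((resolve, reject) => { const r = new FileReader(); r.onload = () => resolve(r.result); r.onerror = reject; r.readAsText(input.files[0]); }); The C# side then becomes:
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;
using Microsoft.JSInterop;

public class JsonFileReader
{
    private readonly IJSRuntime _js;
    public JsonFileReader(IJSRuntime js) => _js = js;

    // pass an ElementReference captured with @ref on the <input type="file">
    public async Task<T> ReadAsync<T>(ElementReference fileInput)
    {
        var text = await _js.InvokeAsync<string>("readFileAsText", fileInput);
        return JsonSerializer.Deserialize<T>(text);
    }
}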
There is a tool that can help, but it currently doesn't support the 3.0 preview. https://github.com/jburman/W8lessLabs.Blazor.LocalFiles
(no affiliation with the developer)
The input control will give you the name of the file (browsers deliberately withhold the real local path; you typically see a placeholder like C:\fakepath\). You still have to read the file's contents and upload them to the server.
Late response, but with 3.1 there is an additional AspNetCore.Components package you can download via NuGet to get access to the HttpClient JSON extensions. These make it simple:
// fetch mock data for now
var results = await _http.GetJsonAsync<WellDetail[]>("sample-data/well.json");
You could inject the location of the file from your input control in place of the "sample-data/well.json" string.
Something like:
using Microsoft.AspNetCore.Components;

[Inject]
private HttpClient _http { get; set; }

private async Task<List<MyData>> LoadFile(string filePath)
{
    // fetch the JSON (served relative to wwwroot) and convert it to MyData objects
    var results = await _http.GetJsonAsync<MyData[]>(filePath);
    return results.ToList();
}

make a copy of an image in blobstore

I have an image in the blobstore which is uploaded by users (their profile pic). I want to make a copy of it and resize the copy so that it can be displayed as a thumbnail. I want to store a separate copy instead of using the Images service each time, because the thumbnail will be requested more often than the full profile image.
What I am doing here is this:
reader = profile_image.open()  # get binary data from blob
data = reader.read()
file_name = files.blobstore.create(mime_type=profile_image.content_type)  # file to write to
with files.open(file_name, 'a') as f:
    f.write(data)
files.finalize(file_name)
blob_key = files.blobstore.get_blob_key(file_name)
image = images.Image(blob_key=blob_key)
image.resize(width=32, height=32)  # note: this only queues the transform; it is never executed
entity.small_profile_pic = <MyImageModel>(caption=<caption given by user>,
                                          picture=str(blob_key))
This is giving me this error:
BadValueError: Image instance must have a complete key before it can be stored as a reference.
I think this is because the blob is not saved (put()) into the datastore, but how do I do that? Does files.blobstore.get_blob_key(file_name) not do it?
I would also like to ask: does the blobstore also cache the dynamically transformed images served using get_serving_url()?
I would use the get_serving_url method. In the doc is stated that:
The get_serving_url() method allows you to generate a stable, dedicated URL for serving web-suitable image thumbnails. You simply store a single copy of your original image in Blobstore, and then request a high-performance per-image URL. This special URL can serve that image resized and/or cropped automatically, and serving from this URL does not incur any CPU or dynamic serving load on your application (though bandwidth is still charged as usual). Images are served with low latency from a highly optimized, cookieless infrastructure.
Also, the code you posted doesn't seem to follow the examples posted in the docs. I would use something like this:
img = images.Image(blob_key=original_image_key)
img.resize(width=32, height=32)
thumbnail = img.execute_transforms(output_encoding=images.JPEG)  # actually apply the resize

file_name = files.blobstore.create(mime_type='image/jpeg')  # file to write to
with files.open(file_name, 'a') as f:
    f.write(thumbnail)
files.finalize(file_name)
blob_key = files.blobstore.get_blob_key(file_name)
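As for the BadValueError: a referenced model instance needs a complete key, i.e. it must be put() before you assign it as a reference. A sketch of that last step (MyImageModel and the field names are assumptions taken from the question, not App Engine APIs):
# save the image entity first so it has a complete key, then reference it
small_pic = MyImageModel(caption=user_caption, picture=str(blob_key))
small_pic.put()
entity.small_profile_pic = small_pic
entity.put()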

Image serving from the high performance blobstore without direct access to get_serving_url()

I'm converting my site over to using the blobstore for image serving and am having a problem. I have a page with a large number of images being rendered dynamically (through jinja), and the only data available are entity keys that point to image objects that contain the relevant serving url.
Previously each image had a url along the lines of "/show-image?key={{image_key}}", which points to a request handler along the lines of this:
def get(self):
    imageInfo = db.get(self.request.args.get("key"))
    imagedata = imageInfo.data  # the image is stored as a blob in the normal datastore
    response = Response()
    response.data = imagedata
    response.headers['Content-Type'] = imageInfo.type
    return response
My question is: how can I modify this so that, rather than returning a response with imageInfo.data, I return a response based on imageInfo.saved_serving_url (generated with get_serving_url when the image object was created)? More importantly, is this even a good idea? It seems like converting the saved_serving_url back into data (e.g. using urlfetch) might just counteract the speed and efficiency of using the high-performance blobstore in the first place.
Maybe I should just rewrite my code so that the jinja template has direct access to the serving urls of each image. But ideally I'd like to avoid that due to the amount of parallel lists I'd have to pass about.
Why not return a redirect to the serving URL instead of the image data?
<img src="/show-image?key={{image_key}}" />
def get(self):
    imageInfo = db.get(self.request.args.get("key"))
    # a redirect makes the browser follow through to the high-performance URL
    # (assumes a werkzeug/flask-style redirect helper, matching the Response used above)
    return redirect(imageInfo.saved_serving_url)
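Alternatively, a sketch of the option the question mentions, giving the jinja template direct access to the serving URLs (render_template and the names here are illustrative assumptions); one list of dicts avoids the parallel lists:
# build a single list of dicts instead of parallel lists
pics = [{'caption': p.caption, 'url': p.saved_serving_url} for p in image_entities]
# in the template: {% for pic in pics %}<img src="{{ pic.url }}">{% endfor %}
return render_template('gallery.html', pics=pics)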
