In TensorFlow.js using Node (tfjs-node), is there any way to load the Universal Sentence Encoder (USE) from a local file?

I have a TensorFlow.js script/app that runs in Node.js using tfjs-node and the Universal Sentence Encoder (USE).
Each time the script runs, it downloads a 525 MB file (the USE model).
Is there any way to load the Universal Sentence Encoder model file from the local file system, to avoid downloading such a large file every time I need to run the script?
I've seen several similar model-loading examples, but none that work with the Universal Sentence Encoder, as it does not appear to have the same type of functionality. Below is a stripped-down example of a working script that downloads the 525 MB file every time it executes.
Any help or recommendations would be appreciated.
const tf = require('@tensorflow/tfjs-node');
const use = require('@tensorflow-models/universal-sentence-encoder');

// No form of Universal Sentence Encoder loader appears to be present
let model = tf.loadGraphModel('file:///Users/ray/Documents/tf_js_model_save_load2/models/model.json');

use.load().then(model => {
  const sentences = [
    'Hello.',
    'How are you?'
  ];
  model.embed(sentences).then(embeddings => {
    embeddings.print(true /* verbose */);
  });
});
I've tried several recommendations that appear to work for other models but not for the Universal Sentence Encoder, such as:
const tf = require('@tensorflow/tfjs');
const tfnode = require('@tensorflow/tfjs-node');

async function loadModel() {
  const handler = tfnode.io.fileSystem('tfjs_model/model.json');
  const model = await tf.loadLayersModel(handler);
  console.log("Model loaded");
}

loadModel();

It's not a model issue per se; it's a module issue.
The model can be loaded any way you want, but the @tensorflow-models/universal-sentence-encoder module implements only one specific internal way of loading the actual model data.
Specifically, it internally uses tf.util.fetch.
The solution? Use some library (or write your own) to register a global fetch handler that knows how to handle file:// prefixes - if a global fetch handler exists, tf.util.fetch will simply use it.
Hint: https://gist.github.com/joshua-gould/58e1b114a67127273eef239ec0af8989
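For concreteness, here is a minimal sketch of such a file:// aware fetch handler, along the lines of the gist above. It assumes node-fetch is installed; the local model/vocab paths are hypothetical, and the load({modelUrl, vocabUrl}) options should be checked against your installed version of the module:

const fs = require('fs').promises;
const nodeFetch = require('node-fetch');
const { Response } = nodeFetch;

// Register a global fetch that serves file:// URLs from local disk and
// delegates everything else to node-fetch. tf.util.fetch will pick it up.
global.fetch = async (url, init) => {
  const u = url.toString();
  if (u.startsWith('file://')) {
    const path = u.replace('file://', ''); // Unix-style paths assumed
    return new Response(await fs.readFile(path));
  }
  return nodeFetch(url, init);
};

const use = require('@tensorflow-models/universal-sentence-encoder');

// With the handler registered, point the loader at local copies of the
// model and vocab (hypothetical paths; recent module versions accept these options):
use.load({
  modelUrl: 'file:///path/to/use/model.json',
  vocabUrl: 'file:///path/to/use/vocab.json'
}).then(model => model.embed(['Hello.']))
  .then(embeddings => embeddings.print());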

Related

How to read in files with a specific file ending at compile time in Nim?

I am working on a desktop application using Nim's webgui package, which works somewhat like Electron in that it renders a GUI using HTML + CSS + JS. However, instead of bundling its own browser and having a backend in Node, it uses the browser supplied by the OS (Epiphany under Linux/GNOME, Edge under Windows, Safari under iOS) and allows writing the backend in Nim.
In that context I am basically writing an SPA in Angular and need to load the HTML, JS and CSS files into my binary at compile time.
Reading from a known absolute filepath is not an issue; you can use Nim's staticRead for that.
However, I would like to avoid having to adjust the filenames in my application code all the time, e.g. when a new build of the SPA changes a file name from main.a72efbfe86fbcbc6.js to main.b72efbfe86fbcbc6.js.
There are iterators in std/os that you can use at runtime, walkFiles and walkPattern, but these fail when used at compile time!
import std/[os, sequtils, strformat, strutils]
const resourceFolder = "/home/philipp/dev/imagestable/html" # Put into config file
const applicationFiles = toSeq(walkFiles(fmt"{resourceFolder}/*"))
/home/philipp/.choosenim/toolchains/nim-#devel/lib/pure/os.nim(2121, 11) Error: cannot 'importc' variable at compile time; glob
How do I get around this?
Thanks to enthus1ast from Nim's Discord server I arrived at an answer: use the collect macro with the walkDir iterator.
The walkDir iterator does not make use of anything that is only available at runtime and thus can safely be used at compile time. With the collect macro you can iterate over all files in a specific directory and collect their paths into a compile-time seq!
Basically, you write a collect block containing a simple for loop that evaluates to some value at the end of each iteration. The collect macro gathers those values into a seq.
The end result looks pretty much like this:
import std/[sequtils, sugar, strutils, strformat, os]
import webgui

const resourceFolder = "/home/philipp/dev/imagestable/html"

proc getFilesWithEnding(folder: string, fileEnding: string): seq[string] {.compileTime.} =
  result = collect:
    for path in walkDir(folder):
      if path.path.endsWith(fmt".{fileEnding}"): path.path

proc readFilesWithEnding(folder: string, fileEnding: string): seq[string] {.compileTime.} =
  result = getFilesWithEnding(folder, fileEnding).mapIt(staticRead(it))

Trying to make an image classification model using AutoML Vision run on a website

I've created a classification model using AutoML Vision and tried to use this tutorial to make a small web app that classifies an image in the browser.
The code I'm using is basically the same as the tutorial's, with some slight changes:
<script src="https://unpkg.com/#tensorflow/tfjs"></script>
<script src="https://unpkg.com/#tensorflow/tfjs-automl"></script>
<img id="test" crossorigin="anonymous" src="101_SI_24_23-01-2019_iOS_1187.JPG">
<script>
async function run() {
const model = await tf.automl.loadImageClassification('model.json');
const image = document.getElementById('test');
const predictions = await model.classify(image);
console.log(predictions);
// Show the resulting object on the page.
const pre = document.createElement('pre');
pre.textContent = JSON.stringify(predictions, null, 2);
document.body.append(pre);
}
run();
The index.html file above is located in the same folder as the model files and the image file. The problem is that when I try to run the file I receive this error:
error received
I have no idea what I should do to fix this error. I've tried many things without success; they only changed the error.
Models built with AutoML should not have dynamic ops, but it seems that yours does.
If that is truly a model designed using AutoML, then the AutoML library should be expanded to use asynchronous execution.
If the model were your own (not AutoML), it would be a simple await model.executeAsync() instead of model.execute(); but in the AutoML case, that part is hidden inside the AutoML library module, inside the classify call, and it needs to be addressed by the tfjs AutoML team.
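For illustration, a minimal sketch of that non-AutoML path (inside an async function; the model path and input element are placeholders):

// Load a plain graph model and run it asynchronously, which handles dynamic ops.
const model = await tf.loadGraphModel('model.json');
const input = tf.browser.fromPixels(document.getElementById('test'));
const output = await model.executeAsync(input); // instead of model.execute(input)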
Best to open an issue at https://github.com/tensorflow/tfjs/issues.
By the way, why post a link to an image containing the error message instead of copying the message as text here?

React Load Binary File / URL scheme "file" is not supported

Background
I built an app that converts files from type A to type B (a binary file). I want to import and use a dummy file of type B to fill in the data of file type A. The dummy always stays the same. The app has no backend. I want to share the HTML, so anything that requires turning off browser security, etc., isn't an option.
Problem
At the moment, I load the files as shown here, but this works only with a backend server:
Requesting blob images and transforming to base64 with fetch API
import dummy from '../templates/Grid2.shp';

let hex = await fetch(dummy)
  .then(response => response.blob())
  .then(blob => new Promise(callback => {
    let reader = new FileReader();
    reader.onload = function () {
      const serumShp = atob(this.result.substring(37)); // 37 strips the base64 info data:...
      callback(binaryToHex(serumShp));
    };
    reader.readAsDataURL(blob);
  }));
It works in development but not in the built version, as the browser then requests the file from the filesystem.
I found a solution using a file loader, but this solution also throws an error:
Using file-loader to load binary file in react
import/no-webpack-loader-syntax
Also, I don't see any configuration files for Webpack. As far as I have seen, I would need to eject to get them, which is also not recommended.
Question:
How can I import binary files into my app without a backend server/any changes, etc.?
Sorry, I cannot help beyond pointing out that there is a general discussion in CRA about supporting a more elegant way of importing binary/raw data. Sadly, there doesn't seem to be much progress; the proposal is from 2018.
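Until that lands, one workaround (not from the discussion above; file names are hypothetical) is to pre-encode the dummy binary as base64 into a plain JS module at build time, so no fetch or custom loader is needed:

// generate-dummy.js - run once with Node before building the app.
const fs = require('fs');
const b64 = fs.readFileSync('src/templates/Grid2.shp').toString('base64');
fs.writeFileSync('src/templates/grid2shp.js', `export default "${b64}";\n`);

Then, in the app:

import dummyB64 from '../templates/grid2shp';

// Decode the base64 string back into raw bytes in the browser - no server involved.
const bytes = Uint8Array.from(atob(dummyB64), c => c.charCodeAt(0));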

How to encode/compress gltf to draco

I want to compress/encode a glTF file with Draco programmatically, in three.js with React. I don't want to use any command-line tool; I want it done programmatically. Please suggest a solution.
I tried using gltf-pipeline, but it does not work client-side. The Cesium library showed an error when I used it in React.
Yes, this can be implemented with glTF-Transform. There's also an open feature request on three.js, not yet implemented.
First you'll need to download the Draco encoder/decoder libraries (the versions currently published to NPM do not work client side), host them in a folder, and then load them as global script tags. There should be six files, and two script tags (which will load the remaining files).
Files:
draco_decoder.js
draco_decoder.wasm
draco_wasm_wrapper.js
draco_encoder.js
draco_encoder.wasm
draco_encoder_wrapper.js
<script src="assets/draco_encoder.js"></script>
<script src="assets/draco_decoder.js"></script>
Then you'll need to write code to load a GLB file, apply compression, and do something with the compressed result. This will require first installing the two packages shown below, and then bundling the web application with your tool of choice (I used https://www.snowpack.dev/ here).
import { WebIO } from '@gltf-transform/core';
import { DracoMeshCompression } from '@gltf-transform/extensions';

const io = new WebIO()
  .registerExtensions([DracoMeshCompression])
  .registerDependencies({
    'draco3d.encoder': await new DracoEncoderModule(),
    'draco3d.decoder': await new DracoDecoderModule(),
  });

// Load an uncompressed GLB file.
const document = await io.read('./assets/Duck.glb');

// Configure compression settings.
document.createExtension(DracoMeshCompression)
  .setRequired(true)
  .setEncoderOptions({
    method: DracoMeshCompression.EncoderMethod.EDGEBREAKER,
    encodeSpeed: 5,
    decodeSpeed: 5,
  });

// Create compressed GLB, in an ArrayBuffer.
const arrayBuffer = io.writeBinary(document); // ArrayBuffer
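As one way to use the result (a sketch, not part of the answer above): wrap the ArrayBuffer in a Blob and trigger a browser download of the compressed GLB. Note that the glTF document variable shadows the DOM document in the snippet above, hence window.document here:

// Offer the compressed GLB as a file download in the browser.
const blob = new Blob([arrayBuffer], { type: 'model/gltf-binary' });
const url = URL.createObjectURL(blob);
const link = window.document.createElement('a');
link.href = url;
link.download = 'Duck.draco.glb'; // hypothetical output name
link.click();
URL.revokeObjectURL(url);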
In the latest version, 1.5.0, what are these two files: draco_decoder_gltf.js and draco_encoder_gltf.js? Does this mean we no longer need the draco_encoder and draco_decoder files? And how do we invoke the transcoder interface without using MeshBuilder? A simpler API would be much better.

Parsing Swagger JSON data and storing it in .net class

I want to parse Swagger data from the JSON I get from {service}/swagger/docs/v1 into a dynamically generated .NET class.
The problem I am facing is that different APIs can have different numbers of parameters and operations. How do I dynamically parse Swagger JSON data for different services?
My end result should be a list of all APIs and their operations, in a variable I can easily search.
Did you ever find an answer for this? Today I wanted to do the same thing, so I used the AutoRest open-source project from MSFT, https://github.com/Azure/autorest. While it looks like it's designed for generating client code (code to consume the API documented by your Swagger document), at some point on the way to producing this code it had to have done exactly what you asked in your question: parse the Swagger file and understand the operations, inputs, and outputs the API supports.
In fact, we can get at this information - AutoRest publicly exposes it.
So use NuGet to install AutoRest. Then add a reference to AutoRest.core and AutoRest.Model.Swagger. So far I've simply gone with:
using Microsoft.Rest.Generator;
using Microsoft.Rest.Generator.Utilities;
using System.IO;
...

var settings = new Settings();
settings.Modeler = "Swagger";

var mfs = new MemoryFileSystem();
mfs.WriteFile("AutoRest.json", File.ReadAllText("AutoRest.json"));
mfs.WriteFile("Swagger.json", File.ReadAllText("Swagger.json"));
settings.FileSystem = mfs;

var b = System.IO.File.Exists("AutoRest.json");
settings.Input = "Swagger.json";

Modeler modeler = Microsoft.Rest.Generator.Extensibility.ExtensionsLoader.GetModeler(settings);
Microsoft.Rest.Generator.ClientModel.ServiceClient serviceClient;

try
{
    serviceClient = modeler.Build();
}
catch (Exception exception)
{
    throw new Exception(String.Format("Something nasty hit the fan: {0}", exception.Message));
}
The Swagger document you want to parse is called Swagger.json and is in your bin directory. The AutoRest.json file you can grab from their GitHub (https://github.com/Azure/autorest/tree/master/AutoRest/AutoRest.Core.Tests/Resource). I'm not 100% sure how it's used, but it seems to be needed to inform the tool about what it supports. Both JSON files need to be in your bin.
The serviceClient object is what you want. It will contain information about the methods, model types, and method groups.
Let me know if this works. You can try it with their resource files. I used their ExtensionsLoaderTests for reference when I was playing around (https://github.com/Azure/autorest/blob/master/AutoRest/AutoRest.Core.Tests/ExtensionsLoaderTests.cs).
(Also, thank you to Denis, an author of AutoRest.)
If this is still a question, you can use the Swagger Parser library:
https://github.com/swagger-api/swagger-parser
It's as simple as:
// parse a swagger description from the petstore and get the result
SwaggerParseResult result = new OpenAPIParser().readLocation("https://petstore3.swagger.io/api/v3/openapi.json", null, null);
