Unable to use models created by the tensorflowjs converter - tensorflow.js

I am trying to reuse models created with TensorFlow in TensorFlow.js. In order to understand how the converter works, I have tried to convert the MobileNetV2 model:
tensorflowjs_converter --input_format=tf_hub --output_format=tensorflowjs 'https://tfhub.dev/google/imagenet/mobilenet_v2_050_224/classification/2' ./web_model
That seems to work. I then tried to use this newly converted model within the mobilenet demo by changing the way the model is loaded:
// const model = await mobilenet.load({version, alpha});
// replaced by
const model = await mobilenet.load({ modelUrl: './web_model/model.json', version, alpha, inputRange: [0, 1], fromTFHub: true });
// Classify the image.
const predictions = await model.classify(img);
The classify call triggers an error:
Uncaught (in promise) Error: Activation relu6 has not been implemented for the WebGL backend.
I have no clue how the official TensorFlow.js MobileNet model was generated :(

from keras.applications import MobileNetV2
from keras.models import save_model

model = MobileNetV2(weights='imagenet', include_top=False)
save_model(
    model,
    "mobilenet2.h5",
    overwrite=True,
)
Then convert the MobileNet feature extractor to TensorFlow.js:
tensorflowjs_converter --input_format keras \
path/to/mobilenet2.h5 \
path/to/tfjs_target_dir
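For reference, a minimal sketch of loading the converted layers model in the browser. The path is the placeholder from above; the 224x224 input size, the [-1, 1] input range (the usual keras.applications MobileNetV2 preprocessing) and the 'img' element id are assumptions, not taken from the question:
import * as tf from '@tensorflow/tfjs';

async function loadAndRun() {
  // A Keras .h5 conversion produces a layers model, not a graph model.
  const model = await tf.loadLayersModel('path/to/tfjs_target_dir/model.json');
  // Read an <img> element, resize to 224x224 and scale to [-1, 1].
  const pixels = tf.browser.fromPixels(document.getElementById('img'));
  const input = tf.image.resizeBilinear(pixels, [224, 224])
      .toFloat().div(127.5).sub(1).expandDims(0);
  const features = model.predict(input);
  features.print();
}
loadAndRun();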

The relu6 operator was added just a week ago. It should be available in the next TensorFlow.js release.
Please try the latest version once it's released.
See: https://github.com/tensorflow/tfjs/pull/2016
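Until then, a quick way to confirm that the failure comes from the converted graph itself rather than from the mobilenet wrapper is to load it directly with tf.loadGraphModel. A minimal sketch, assuming tf is already on the page and the model's 224x224 input size:
async function smokeTest() {
  // Load the converted TF Hub model without the mobilenet wrapper.
  const model = await tf.loadGraphModel('./web_model/model.json');
  // A dummy input is enough to surface an unsupported-op error, if any.
  const logits = model.predict(tf.zeros([1, 224, 224, 3]));
  logits.print();
}
smokeTest();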

This issue has nothing to do with a new release. I had the same issue and went round in circles. If you are working in a GPU runtime (I used a Colab GPU runtime), this issue happens. You just have to fit/fit_generator your models in CPU mode, and your model will be ready in a happy state.

Related

In Tensorflow JS, using Node (tfjs-node) is there any way to Load Universal Sentence Encoder (USE) from local File?

I have a tensorflow.js script/app that runs in Node.js using tfjs-node and Universal Sentence Encoder (USE).
Each time the script runs, it downloads a 525 MB file (the USE model file).
Is there any way to load the Universal Sentence Encoder Model File from the local file system to avoid downloading such a large file every time I need to run the node.js tensorflow script?
I've looked at several similar model-loading examples, but none of them work with Universal Sentence Encoder, as it does not appear to expose the same kind of loading functionality. Below is a stripped-down example of a functioning script that downloads the 525 MB file every time it executes.
Any help or recommendations would be appreciated.
const tf = require('@tensorflow/tfjs-node');
const use = require('@tensorflow-models/universal-sentence-encoder');

// No form of Universal Sentence Encoder loader appears to be present;
// this loads a graph model, but the use.load() call below still
// downloads its own copy of the model.
let model = tf.loadGraphModel('file:///Users/ray/Documents/tf_js_model_save_load2/models/model.json');

use.load().then(model => {
  const sentences = [
    'Hello.',
    'How are you?'
  ];
  model.embed(sentences).then(embeddings => {
    embeddings.print(true /* verbose */);
  });
});
I've tried several recommendations that appear to work for other models but not for Universal Sentence Encoder, such as:
const tf = require('@tensorflow/tfjs');
const tfnode = require('@tensorflow/tfjs-node');

async function loadModel() {
  const handler = tfnode.io.fileSystem('tfjs_model/model.json');
  const model = await tf.loadLayersModel(handler);
  console.log("Model loaded");
}
loadModel();
It's not a model issue per se; it's a module issue.
The model can be loaded any way you want, but the module @tensorflow-models/universal-sentence-encoder implements only one specific internal way of loading the actual model data.
Specifically, it internally uses tf.util.fetch.
The solution? Use some library (or write your own) to register a global fetch handler that knows how to handle file:// prefixes: if a global fetch handler exists, tf.util.fetch will simply use it.
Hint: https://gist.github.com/joshua-gould/58e1b114a67127273eef239ec0af8989
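For illustration, a minimal sketch of that idea, assuming Node 18+ (which ships a global fetch and Response) and assuming your version of the module accepts modelUrl/vocabUrl overrides in use.load (newer releases do); the file paths below are hypothetical:
const tf = require('@tensorflow/tfjs-node'); // registers the Node.js backend
const fs = require('fs').promises;
const { fileURLToPath } = require('url');
const use = require('@tensorflow-models/universal-sentence-encoder');

// Wrap the built-in fetch: serve file:// URLs from the local disk and
// delegate everything else to the real network fetch.
const realFetch = global.fetch;
global.fetch = async (url, init) => {
  const href = url.toString();
  if (!href.startsWith('file://')) return realFetch(url, init);
  const buffer = await fs.readFile(fileURLToPath(href));
  return new Response(buffer);
};

async function main() {
  const model = await use.load({
    modelUrl: 'file:///Users/ray/models/use/model.json',  // hypothetical path
    vocabUrl: 'file:///Users/ray/models/use/vocab.json',  // hypothetical path
  });
  const embeddings = await model.embed(['Hello.', 'How are you?']);
  embeddings.print(true /* verbose */);
}
main();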

Trying to make an image classification model using AutoML Vision run on a website

I've created a classification model using AutoML Vision and tried to follow this tutorial to make a small web app that classifies an image in the browser.
The code I'm using is basically the same as the tutorial with some slight changes:
<script src="https://unpkg.com/@tensorflow/tfjs"></script>
<script src="https://unpkg.com/@tensorflow/tfjs-automl"></script>
<img id="test" crossorigin="anonymous" src="101_SI_24_23-01-2019_iOS_1187.JPG">
<script>
  async function run() {
    const model = await tf.automl.loadImageClassification('model.json');
    const image = document.getElementById('test');
    const predictions = await model.classify(image);
    console.log(predictions);
    // Show the resulting object on the page.
    const pre = document.createElement('pre');
    pre.textContent = JSON.stringify(predictions, null, 2);
    document.body.append(pre);
  }
  run();
</script>
The index.html file above is located in the same folder as the model files and the image file. The problem is that when I try to run the file I receive this error:
[screenshot of the error message]
I have no idea what I should do to fix this error. I've tried many things without success; I've only managed to change the error.
Models built with AutoML should not have dynamic ops, but it seems that yours does.
If it is truly a model designed using AutoML, then AutoML should be expanded to use asynchronous execution.
If the model were your own (not AutoML), it would be a simple await model.executeAsync() instead of model.execute(), but in the AutoML case that part is hidden inside the AutoML library's classify call, and it needs to be addressed by the tfjs AutoML team.
Best to open an issue at https://github.com/tensorflow/tfjs/issues
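For a plain graph model (not the AutoML wrapper), the change would look roughly like this; a sketch, where the model path, the 224x224 input size and the /255 scaling are assumptions:
const model = await tf.loadGraphModel('model.json');
const pixels = tf.browser.fromPixels(document.getElementById('test'));
const input = tf.image.resizeBilinear(pixels, [224, 224])
    .toFloat().div(255).expandDims(0);
// executeAsync supports dynamic ops (control flow, dynamic shapes);
// the synchronous execute throws on them.
const predictions = await model.executeAsync(input);
console.log(predictions);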
BTW, why post a link to an image containing the error message instead of copying the message as text here?

OpenLayers v3.5.0 map, loading features from a GeoJSON using bbox strategy

I'm trying to use the approach described in this question, but instead of using jQuery to perform the ajax request, I'm using the angularJS $http method. I've already verified that the features are being loaded into the source of the layer, but nothing is shown.
Here is the definition of the source:
var vectorSource = new ol.source.Vector({
  loader: function(extent, resolution) {
    $http.get(url).success(function(data) {
      var formatGeo = new ol.format.GeoJSON();
      var features = formatGeo.readFeatures(data,
          {featureProjection: 'EPSG:4326'});
      vectorSource.addFeatures(features);
      console.log(vectorSource.getFeatures().length);
    });
  },
  strategy: ol.loadingstrategy.bbox
});
Are there any incompatibility problems between angularJS and OpenLayers?
The problem was a mismatch between the projection of the data in my GeoJSON (EPSG:4326) and that of the map (the OpenLayers 3 default, EPSG:3857).
To solve the problem, I changed the projection of the data that I was using to build the GeoJSON to EPSG:3857. Since the data was stored in a PostGIS database, I used the function ST_Transform to change the projection of the geom column containing the objects.
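Alternatively, the reprojection can be done client-side: readFeatures accepts both a dataProjection and a featureProjection, so the features can be transformed as they are read. A sketch, assuming the map keeps the OpenLayers 3 default view:
var features = formatGeo.readFeatures(data, {
  dataProjection: 'EPSG:4326',     // projection of the GeoJSON payload
  featureProjection: 'EPSG:3857'   // projection of the map view
});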

Parsing Swagger JSON data and storing it in .net class

I want to parse Swagger data from the JSON I get from {service}/swagger/docs/v1 into dynamically generated .NET classes.
The problem I am facing is that different APIs can have different numbers of parameters and operations. How do I dynamically parse Swagger JSON data for different services?
My end result should be a list of all APIs and their operations in a variable that I can easily search.
Did you ever find an answer for this? Today I wanted to do the same thing, so I used the AutoRest open source project from MSFT, https://github.com/Azure/autorest. While it looks like it's designed for generating client code (code to consume the API documented by your swagger document), at some point on the way to producing this code it had to have done exactly what you asked in your question: parse the Swagger file and understand the operations, inputs and outputs the API supports.
In fact we can get at this information, since AutoRest publicly exposes it.
So use NuGet to install AutoRest. Then add a reference to AutoRest.core and AutoRest.Model.Swagger. So far I've just simply gone for:
using Microsoft.Rest.Generator;
using Microsoft.Rest.Generator.Utilities;
using System.IO;
...
var settings = new Settings();
settings.Modeler = "Swagger";

// AutoRest reads its inputs through an abstract file system,
// so copy both JSON files into an in-memory one.
var mfs = new MemoryFileSystem();
mfs.WriteFile("AutoRest.json", File.ReadAllText("AutoRest.json"));
mfs.WriteFile("Swagger.json", File.ReadAllText("Swagger.json"));
settings.FileSystem = mfs;
var b = System.IO.File.Exists("AutoRest.json");
settings.Input = "Swagger.json";

Modeler modeler = Microsoft.Rest.Generator.Extensibility.ExtensionsLoader.GetModeler(settings);
Microsoft.Rest.Generator.ClientModel.ServiceClient serviceClient;
try
{
    serviceClient = modeler.Build();
}
catch (Exception exception)
{
    throw new Exception(String.Format("Something nasty hit the fan: {0}", exception.Message));
}
The swagger document you want to parse is called Swagger.json and is in your bin directory. The AutoRest.json file you can grab from their GitHub (https://github.com/Azure/autorest/tree/master/AutoRest/AutoRest.Core.Tests/Resource). I'm not 100% sure how it's used, but it seems to be needed to inform the tool about what it supports. Both JSON files need to be in your bin.
The serviceClient object is what you want. It will contain information about the methods, model types and method groups.
Let me know if this works. You can try it with their resource files; I used their ExtensionsLoaderTests for reference when I was playing around (https://github.com/Azure/autorest/blob/master/AutoRest/AutoRest.Core.Tests/ExtensionsLoaderTests.cs).
(Also, thank you to Denis, an author of AutoRest.)
If this is still a question, you can use the Swagger Parser library:
https://github.com/swagger-api/swagger-parser
It's as simple as:
// parse a swagger description from the petstore and get the result
SwaggerParseResult result = new OpenAPIParser().readLocation("https://petstore3.swagger.io/api/v3/openapi.json", null, null);

go-endpoint Invalid date format and method not found

Hi, I have some problems with Go endpoints and the Dart client library.
I use the Go library https://github.com/crhym3/go-endpoints and the Dart generator https://github.com/dart-lang/discovery_api_dart_client_generator
The easy examples work fine, but they never show how to use time.Time.
In my project, I have a struct with a field:
Created time.Time `json:"created"`
The output in the explorer looks like this:
"created": "2014-12-08T20:42:54.299127593Z",
When I use it in the Dart client library, I get the error
FormatException: Invalid date format 2014-12-08T20:53:56.346129718Z
Should I really format every time.Time field in the Go app (Format Timestamp in outgoing JSON in Golang?)?
My research suggests that Dart accepts something like:
t.Format(time.RFC3339) >> 2014-12-08T20:53:56Z
Second problem: if I comment out the Created field or leave it blank, I get another error:
The null object does not have a method 'map'.
NoSuchMethodError: method not found: 'map' Receiver: null Arguments:
[Closure: (dynamic) => dynamic]
But I can't figure out which object is null. I'm not sure if I'm using the Dart client correctly:
import 'package:http/browser_client.dart' as http;
...
var nameValue = querySelector('#name').value;
var json = {'name': nameValue};
LaylistApi api = new LaylistApi(new http.BrowserClient());
api.create(new NewLayListReq.fromJson(json)).then((LayList l) {
  print(l);
}).catchError((e) {
  querySelector('#err-message').innerHtml = e.toString();
});
Does anyone know of a larger project on GitHub using Go endpoints and Dart?
Thanks for any advice.
UPDATE [2014-12-11]:
I fixed the NoSuchMethodError with the correct discovery URL https://constant-wonder-789.appspot.com/_ah/api/discovery/v1/apis/greeting/v1/rest
The problem with the time FormatException is still open, but I'm one step further: if I create a new item, it doesn't work, but if I load the items from the datastore and send them back, it works.
I guess this can be fixed by implementing the Marshaler interface (thanks Alex). I will update my source soon.
See my example:
http://constant-wonder-789.appspot.com/
The full source code:
https://github.com/cloosli/greeting-example
