I am trying to use the Azure Maps Cluster Pie Chart feature on a map in Perspective mode, and this does not appear to be possible.
Is there a way to implement this feature in Perspective mode?
https://azuremapscodesamples.azurewebsites.net/HTML%20Markers/HtmlMarkerLayer/Clustered%20Pie%20Chart%20HTML%20Markers.html
I found the bug in the HtmlMarkerLayer class. It's actually being caused by a bug deeper within the SDK itself, which I have reported to the team, but that will take some time to get resolved. In the meantime, here is a workaround you can implement to resolve the issue in your app:
Open the HtmlMarkerLayer.ts file; I'll refer to the line numbers in the file I just linked.
At line 289 there is a line of code that looks like this:
var shapes = this._map.layers.getRenderedShapes(null, this, this._options.filter);
Replace this line with the following code:
const source = this.getSource();
const sourceId = (typeof source === 'string') ? source : source.getId();
// @ts-ignore
const shapes = this._map.map.querySourceFeatures(sourceId, {
    sourceLayer: this.getOptions().sourceLayer,
    filter: this._options.filter
});
I have a tensorflow.js script/app that runs in Node.js using tfjs-node and Universal Sentence Encoder (USE).
Each time the script runs, it downloads a 525 MB file (the USE model file).
Is there any way to load the Universal Sentence Encoder Model File from the local file system to avoid downloading such a large file every time I need to run the node.js tensorflow script?
I've noted several similar model-loading examples, but none that work with Universal Sentence Encoder, as it does not appear to have the same type of functionality. Below is a stripped-down example of a functioning script that downloads the 525 MB file every time it executes.
Any help or recommendations would be appreciated.
const tf = require('@tensorflow/tfjs-node');
const use = require('@tensorflow-models/universal-sentence-encoder');
// No Form of Universal Sentence Encoder loader appears to be present
let model = tf.loadGraphModel('file:///Users/ray/Documents/tf_js_model_save_load2/models/model.json');
use.load().then(model => {
const sentences = [
'Hello.',
'How are you?'
];
model.embed(sentences).then(embeddings => {
embeddings.print(true /* verbose */);
});
});
I've tried several recommendations that appear to work for other models but not Universal Sentence Encoder such as:
const tf = require('@tensorflow/tfjs');
const tfnode = require('@tensorflow/tfjs-node');
async function loadModel(){
const handler = tfnode.io.fileSystem('tfjs_model/model.json');
const model = await tf.loadLayersModel(handler);
console.log("Model loaded")
}
loadModel();
It's not a model issue per se; it's a module issue.
The model can be loaded any way you want, but the module @tensorflow-models/universal-sentence-encoder implements only one specific internal way of loading the actual model data.
Specifically, it internally uses tf.util.fetch.
The solution? Use some library (or write your own) to register a global fetch handler that knows how to handle file:// prefixes; if a global fetch handler exists, tf.util.fetch will simply use it.
Hint: https://gist.github.com/joshua-gould/58e1b114a67127273eef239ec0af8989
I've created a classification model using AutoML Vision and tried to use this tutorial to make a small web app to make the model classify an image using the browser.
The code I'm using is basically the same as the tutorial with some slight changes:
<script src="https://unpkg.com/@tensorflow/tfjs"></script>
<script src="https://unpkg.com/@tensorflow/tfjs-automl"></script>
<img id="test" crossorigin="anonymous" src="101_SI_24_23-01-2019_iOS_1187.JPG">
<script>
async function run() {
const model = await tf.automl.loadImageClassification('model.json');
const image = document.getElementById('test');
const predictions = await model.classify(image);
console.log(predictions);
// Show the resulting object on the page.
const pre = document.createElement('pre');
pre.textContent = JSON.stringify(predictions, null, 2);
document.body.append(pre);
}
run();
This index.html file above is located in the same folder of the model files and the image file. The problem is that when I try to run the file I'm receiving this error:
error received
I have no idea what I should do to fix this error. I've tried many things without success; they only changed the error.
Models built with AutoML should not have dynamic ops, but it seems that yours does.
If that truly is a model designed using AutoML, then the AutoML library should be expanded to use asynchronous execution.
If the model were your own (not AutoML), the fix would be a simple await model.executeAsync() instead of model.execute(), but in the AutoML case that part is hidden inside the AutoML library module, inside the classify call, and it needs to be addressed by the tfjs AutoML team.
Best to open an issue at https://github.com/tensorflow/tfjs/issues
By the way, why post a link to an image containing the error message instead of copying the message as text here?
I am trying to reuse models created by TensorFlow with tensorflowjs. In order to understand how the converter works, I have tried to convert the mobilenetv2 model:
tensorflowjs_converter --input_format=tf_hub --output_format=tensorflowjs 'https://tfhub.dev/google/imagenet/mobilenet_v2_050_224/classification/2' ./web_model
That seems to work. Then I tried to use this newly converted model within the mobilenet demo by changing the way the model is loaded:
// const model = await mobilenet.load({version, alpha});
// replaced by
const model = await mobilenet.load({ modelUrl: './web_model/model.json', version, alpha, inputRange: [0, 1], fromTFHub: true });
// Classify the image.
const predictions = await model.classify(img);
The classify call triggers an error:
Uncaught (in promise) Error: Activation relu6 has not been implemented for the WebGL backend.
I have no clue how the official tensorflowjs mobilenet model was generated :(
from keras.applications import MobileNetV2
from keras.models import save_model

model = MobileNetV2(weights='imagenet', include_top=False)
save_model(
model,
"mobilenet2.h5",
overwrite=True,
)
Convert mobilenet feature extractor to js
tensorflowjs_converter --input_format keras \
path/to/mobilenet2.h5 \
path/to/tfjs_target_dir
The relu6 operator was added just one week ago. It should be available in the next TensorFlow.js release.
Please try the latest version once it's released.
See: https://github.com/tensorflow/tfjs/pull/2016
This issue has nothing to do with the new release. I had the same issue and went around in circles. If you are working in a GPU runtime (I used the Colab GPU runtime), this issue happens. You just have to fit/fit_generator the models in CPU mode, and your model will be ready in a happy state.
I'm in the early stages of developing an app with react-native, and I need a DB implementation for testing and development. I thought that the obvious choice would be to use simple JSON files included with the source, but the only way I see to load JSON files requires that you know the file name ahead of time. This means that the following does not work:
getTable = (tableName) => require('./table-' + tableName + '.json') // ERROR!
I cannot find a simple way to load files at runtime.
What is the proper way to add test data to a react-native app?
I cannot find a simple way to load files at runtime.
In Node you can use import(), though I'm not sure if this is available in react-native. The syntax would be something like:
async function getTable(tableName){
const fileName = `./table-${tableName}.json`
try {
const file = await import(fileName)
return file
} catch(err){
console.log(err)
}
}
Though, like I said, I do not know if this is available in react-native's JavaScript environment, so YMMV.
Unfortunately, dynamic import is not supported by react-native, but there is a way to do this:
import tableName1 from './table/tableName1.json';
import tableName2 from './table/tableName2.json';
Then create your own object like:
const tables = {
tableName1,
tableName2,
};
After that, you can access the table through bracket notation:
getTable = (tableName) => tables[tableName];
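Put together, the pattern above can be sketched as follows; it is runnable in plain Node, with the static JSON imports replaced by inline data so the sketch is self-contained (the table names and contents are hypothetical):

```javascript
// Stand-ins for `import tableName1 from './table/tableName1.json';` etc.
// In a real react-native app these would be static JSON imports, which
// the bundler resolves at build time.
const tableName1 = [{ id: 1, name: 'alpha' }];
const tableName2 = [{ id: 2, name: 'beta' }];

// Static lookup object keyed by table name.
const tables = {
  tableName1,
  tableName2,
};

// Bracket-notation lookup replaces the dynamic require().
const getTable = (tableName) => tables[tableName];

console.log(getTable('tableName2')[0].name); // prints "beta"
```

The trade-off versus the dynamic require() is that every table must be listed up front, but all files are bundled with the app and lookups work at runtime by name.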
I am using bizreview as the theme for my Drupal 7 site. I am using the Feeds module to import thousands of records that are in CSV files into the site. I need to use a geofield to store the locations.
For this I created a field 'Coordinates' in my content type, made it a geofield and set the widget type to latitude/longitude. I can add the locations manually and they do show up in the map, but I just can't import the coordinates with Feeds.
This seems to be an ongoing issue with the geofield/feeds interface (see Drupal issue here). I had the same problem but applied the patch in comment #12 from the aforementioned link which worked.
One suggestion: If the current version of geofield is not the same as the one used in the patch, or if you are running WAMP without Cygwin, I would suggest applying the patch manually by following the directions here, making sure to save a safe backup file in the process. If you haven't worked with patches before, basically all that you (or the patch command) will do for this particular case is add the following lines of code after line 143 in the ./sites/all/modules/geofield/geofield.feeds.inc file (I am working with geofield version 7.x-2.3):
foreach ($field[LANGUAGE_NONE] as $delta => $value) {
if (!empty($value['lat']) && !empty($value['lon'])) {
// Build up geom data.
$field[LANGUAGE_NONE][$delta] = geofield_compute_values($value, 'latlon');
}
}