Error running AutoML Edge model with tfjs-node - tensorflow.js

I am still very new to TensorFlow, but when trying to run an AutoML Object Detection edge model with tfjs-node I run into this error:
Error: Session fail to run with error: Unknown image file format. One of JPEG, PNG, GIF, BMP required.
[[{{node map/while/DecodeJpeg}}]]
I cannot figure out why TensorFlow is throwing this error when I have checked that the string is a JPEG in Base64. To elaborate further, this was the example I was following, but I had to deviate from it as I was getting errors about the inputs needing to be image_bytes and key.
import { readFileSync } from 'fs';
import { node, scalar } from '@tensorflow/tfjs-node';

const model = await node.loadSavedModel("[model directory path]", ['serve'], 'serving_default');
const input = readFileSync("[image path]", "base64");

const output = model.predict({
    image_bytes: scalar(input).expandDims(0),
    key: scalar("").expandDims(0),
});

// arraySync() is synchronous, so no await is needed here
const scores = output['detection_scores'].arraySync();
const boxes = output['detection_boxes'].arraySync();
const names = output['detection_classes'].arraySync();

output['detection_scores'].dispose();
output['detection_boxes'].dispose();
output['detection_classes'].dispose();
output['num_detections'].dispose();

console.log(scores);
console.log(boxes);
console.log(names);
EDIT: I've found the file in TensorFlow that throws this error, but I am still unsure how to fix it as I am not familiar with C++.
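One thing worth checking (an assumption about the cause, not a confirmed fix): the DecodeJpeg node in that error operates on raw JPEG bytes, and a base64 string is no longer a valid JPEG to it. A minimal, model-free sketch of the difference:

```javascript
// Sketch: reading a file with the 'base64' encoding yields a base64
// *string*, not JPEG bytes - the JPEG magic bytes (0xFF 0xD8) that
// DecodeJpeg looks for are gone from it.
const raw = Buffer.from([0xff, 0xd8, 0xff, 0xe0]); // first bytes of a real JPEG
const b64 = raw.toString('base64');                // what readFileSync(path, 'base64') returns

console.log(raw[0].toString(16)); // 'ff' - magic byte intact in the Buffer
console.log(b64.slice(0, 4));     // '/9j/' - no longer a JPEG header
```

If this model's image_bytes input really expects raw bytes, wrapping the plain Buffer (e.g. `scalar(new Uint8Array(raw), 'string')`, hypothetical for this model) may be worth trying; if it genuinely expects base64, the decode has to happen inside the graph instead.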

Related

How to send a local image instead of URL to Computer Vision API using React

I would like to upload a local image file and extract text from it. I followed the link below, and it works as expected when I pass a URL. https://learn.microsoft.com/en-us/azure/developer/javascript/tutorial/static-web-app/add-computer-vision-react-app
I managed to configure it for a local image and get the base64-encoded data URL of the uploaded image. But when I pass the base64-encoded data URL to the Computer Vision API, it says "Input data is not a valid image" (POST 400 status code). I am getting the error in the line shown below:
const analysis = await computerVisionClient.analyzeImage(urlToAnalyze, { visualFeatures });
The code I have included for handling local image:
const handleChange = (e) => {
    var file = e.target.files[0];
    var reader = new FileReader();
    reader.onloadend = function () {
        setFileSelected(reader.result); // this is the base64 encoded data URL
    };
    reader.readAsDataURL(file);
}
In the computerVision.js file, I changed the 'Content-Type' in the header as below.
const computerVisionClient = new ComputerVisionClient(
    new ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': key, 'Content-Type': 'application/octet-stream' } }), endpoint);
I tried replacing client.read() with readTextInStream() as per the docs in computerVision.js (please refer to the link above), but it still throws the error.
May I know why I get the error "Input data is not a valid image"? Thanks.
Here is the link for input requirements.
There is a brand-new online portal provided by Microsoft: https://preview.vision.azure.com/demo/OCR
The advantage is that it directly lists your available resources, so you just have to pick the right one and then test; there are also some samples.
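A hedged sketch of one possible fix, assuming the JavaScript Computer Vision SDK: analyzeImage expects a publicly reachable URL, while local bytes go through analyzeImageInStream, so the data URL's base64 payload would first need decoding back to raw bytes:

```javascript
// Decode the FileReader data URL back into raw image bytes.
const dataUrl = 'data:image/png;base64,iVBORw0KGgo='; // stand-in for reader.result
const base64Payload = dataUrl.split(',')[1];          // strip the 'data:...;base64,' prefix
const bytes = Uint8Array.from(atob(base64Payload), c => c.charCodeAt(0));

console.log(bytes[0].toString(16)); // '89' - the PNG magic byte is back

// Then (not run here; assumes the same computerVisionClient as above):
// const analysis = await computerVisionClient.analyzeImageInStream(
//     bytes.buffer, { visualFeatures });
```

The key point is that a data URL is neither a valid remote URL nor raw image data, which would explain the "Input data is not a valid image" response.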

React App Using TensorFlow Detects Wrong Object

I am following a tutorial on Object Detection with Javascript.
Here's a code snippet that bothers me right now:
let classesDir = {
    1: {
        name: 'Kangaroo',
        id: 1,
    },
    2: {
        name: 'Other',
        id: 2,
    }
}
So far, the app should detect a Kangaroo object, and detect anything else as Other. The interesting thing is that whatever object I try to detect using the device camera, it detects it as Kangaroo. Right now, it uses the following model:
const model = await loadGraphModel("https://raw.githubusercontent.com/hugozanini/TFJS-object-detection/master/models/kangaroo-detector/model.json");
When I tried instead to use the following model, which is a bit different:
const model = await loadGraphModel("https://storage.googleapis.com/tfjs-models/savedmodel/ssdlite_mobilenet_v2/model.json");
it throws an exception:
Unhandled Rejection (TypeError): Cannot read properties of undefined (reading 'arraySync')
Here's the code snippet where the exception starts in the index.js file:
//Getting predictions
const boxes = predictions[4].arraySync(); //Exception here
const scores = predictions[5].arraySync();
const classes = predictions[6].dataSync();
For the above scenario, I changed the object name to detect to something else, but every time it detects every object as the same one, even when it's different. Is there something that needs to be done with the model? I am not sure.
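One likely factor (an assumption, not a confirmed diagnosis): the order of output tensors in a converted graph model is model-specific, so hard-coded indices like predictions[4] can point at nothing for a different model, which would explain the arraySync exception. A sketch, using plain mock objects in place of real tf.Tensors, of selecting outputs by shape instead of by index:

```javascript
// Mock predictions array standing in for the result of model.executeAsync().
// Names and shapes mirror typical detection-model outputs; these are
// illustrative stand-ins, not this model's actual output list.
const predictions = [
    { name: 'detection_scores',  shape: [1, 100],    arraySync: () => [] },
    { name: 'detection_boxes',   shape: [1, 100, 4], arraySync: () => [] },
    { name: 'detection_classes', shape: [1, 100],    dataSync:  () => [] },
];

// Boxes are the only rank-3 output whose last dimension is 4 coordinates.
const boxesTensor = predictions.find(t => t.shape.length === 3 && t.shape[2] === 4);
console.log(boxesTensor.name); // 'detection_boxes'
```

With a real model, logging predictions.map(t => t.shape) first would show whether the new model even returns seven tensors; an undefined entry at index 4 produces exactly the error quoted above.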

Using client in different files

Is it possible to use the client/bot constant across different files?
My index.js file has gotten pretty clogged up, so I tried splitting different functions into different files.
I already tried exporting the bot constant from index.js:
// in index.js
module.exports = {
    Gbot: bot
}

// in a different file
const index = require('../index.js')
const bot = index.Gbot

bot.on('message', message => {
    message.channel.send("test")
})
The second file does not do anything; it does not respond with "test".
There are no errors either.
Is this not possible, or am I doing something wrong?
This is possible, but why do you want to do that? If you are using a command handler, you define client or bot once and can use it everywhere. And if not, you are running everything in index.js, the file where you defined client or bot, anyway.
Edit:
//index.js
module.exports.Gbot = bot;
//other file
const index = require("../index.js");
const bot = index.Gbot;
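A sketch of an alternative pattern (my suggestion, not from the original answer): requiring index.js back from another file can hit a circular require, where Gbot is read before index.js has finished assigning its exports. Passing the bot into each module avoids that; a mock object stands in for the Discord client here so the snippet is self-contained:

```javascript
// events.js would export a function that receives the bot:
const registerEvents = (bot) => {
    bot.on('message', (message) => console.log('got:', message));
};

// index.js would create the client and hand it over:
const handlers = {};
const mockBot = { // stand-in for new Discord.Client()
    on: (event, fn) => { handlers[event] = fn; },
    emit: (event, arg) => handlers[event](arg),
};
registerEvents(mockBot);
mockBot.emit('message', 'hello'); // prints: got: hello
```

With the real client, index.js would just call registerEvents(bot) after constructing it, and no file ever needs to require index.js back.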

mxCodec doesn't decode xml correctly

I integrated mxGraph (the mxgraph npm package) into my React app, and when I try to load a graph from XML, I get an error in the console:
TypeError: geo.clone is not a function
Doing the same in a single HTML file works.
I investigated and found that the mxCell in the React app is different from the HTML one.
In the HTML case the geometry prop is filled, unlike in React (check the screenshots below).
Can someone help me decode the XML correctly?
Decoded mxCell from single HTML console: https://monosnap.com/file/yAHAi29zFGFpauqU2RtDcvmfPpZ0YJ
Decoded mxCell from React app console: https://monosnap.com/file/0XxPwyEracX7hMCnMHckAmI8Rl6OEh
Source code from React component:
const graph = new mx.mxGraph(this.automationRef.current)
new mx.mxRubberband(graph);

const xml = '<root>...</root>';
const doc = mx.mxUtils.parseXml(xml);
const codec = new mxCodec(doc);

let elt = doc.documentElement.firstChild;
const cells = [];
while (elt != null) {
    const cell = codec.decodeCell(elt);
    cells.push(cell);
    graph.refresh();
    elt = elt.nextSibling;
}
graph.addCells(cells);
Found the issue.
Here's the solution:
https://github.com/jgraph/mxgraph/issues/301#issuecomment-514284868
Quote:
you should add
window['mxGraphModel'] = mxGraphModel;
window['mxGeometry'] = mxGeometry;
before
let doc = mxUtils.parseXml(xml);
let codec = new mxCodec(doc);
codec.decode(doc.documentElement, graph.getModel());
I found that the decode method needs these on window to resolve the XML.
End quote
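A sketch of why that fix works (my reading of the linked issue, not verified against mxGraph's source): the codec resolves XML tag names to constructor functions through the global object, so classes that a bundler keeps module-scoped come back undefined, and the decoded cell's geometry then lacks clone(). A mock of that lookup:

```javascript
const globalScope = {}; // stand-in for window

// How a codec-style lookup resolves a tag name to a constructor:
function resolveCtor(tagName) {
    return globalScope[tagName];
}

// Mock class standing in for the bundled, module-scoped mxGeometry:
class MxGeometryMock { clone() { return new MxGeometryMock(); } }

console.log(resolveCtor('mxGeometry'));        // undefined - hence "geo.clone is not a function"
globalScope['mxGeometry'] = MxGeometryMock;    // the fix from the answer
console.log(typeof resolveCtor('mxGeometry')); // 'function'
```

This is also why the single-HTML version works: a plain script tag puts every mxGraph class on window automatically.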

How to send parsed .csv file as a byte array or ArrayBuffer data from Node.js backend to AngularJS frontend?

I'm working on AngularJS app.
The module I'm currently working on should be able to either show a preview of a spreadsheet file or allow downloading it.
The steps:
When "Preview File" is clicked, it should send the needed file's name as a parameter of a POST request.
The backend will find the needed file, which is a .csv file, convert it to a byte array and send it to the frontend.
The frontend should handle this byte array and convert it to the .xls or .xlsx file type.
The spreadsheet data should be opened in some small read-only preview window, like 1000x1000 px.
The POST request line looks like that:
this.$http.post(this.url + 'endpoint/getFile', params,
{responseType: "arraybuffer", showLoadingOverlay: true}
)
The response indeed looks like an ArrayBuffer: three of them in one object, i.e. Uint8Array, Uint16Array and Uint32Array.
The code which should read this and convert it to content suitable for preview is not working:
const byteArray = new Uint8Array(data);
const blob = new Blob([byteArray], { type: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' });
const objectUrl = URL.createObjectURL(blob);
this.$window.open(objectUrl, 'C-Sharpcorner', 'width=1000,height=1000');
Because when the blob is created, it already has a length of 0 bytes, so there's no data inside.
As for visualising the .xls in a browser window, I think it can be achieved with the canvas-datagrid library. I haven't used it, but it looks cool.
Also, I have a problem trying to set up mock data for Node.js (and AngularMock) for local testing when there's no data on the Java backend.
I'm using 'fs' and 'csv-parse':
const fs = require('fs');
const csvParse = require("csv-parse/lib/es5");

module.exports = function stir(app) {
    const getFile = () => {
        const csvOutput = csvParse('../static/someData.csv', (parsed) => {
            return parsed;
        });
        fs.readFileSync(csvOutput);
    };
    app.post('/stir/getFile', (req, res) => res.json(getFile()));
};
Which results in an error:
TypeError: path must be a string or Buffer
What is the proper way of parsing the .csv using csv-parse and sending the parsed data as an ArrayBuffer to the frontend in Node and AngularMock?
The csv-parse docs say that, underneath, the lib will convert the parsed output to a Node stream.
So why does that error happen?
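A hedged sketch of the likely culprit: csv-parse expects CSV content (a string or Buffer), not a file path, and fs.readFileSync expects a path, so the two calls are effectively swapped. Read the file first, then parse its contents; a hand-rolled split stands in for the parser here so the snippet is self-contained:

```javascript
// Read the file contents first, then parse them.
// const csvText = fs.readFileSync('../static/someData.csv', 'utf8');
const csvText = 'a,b\n1,2\n'; // stand-in for the file contents

// csv-parse would be called as csvParse(csvText, callback); a minimal
// split illustrates the same content-in, rows-out shape:
const parsed = csvText.trim().split('\n').map((line) => line.split(','));
console.log(parsed); // [ [ 'a', 'b' ], [ '1', '2' ] ]
```

Separately, res.json(getFile()) would send undefined as written, since getFile never returns anything; returning the parsed rows (or a Buffer, for the ArrayBuffer case) is needed either way.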
