Is Python runtime needed to use Tensorflow.js Node? - tensorflow.js

As mentioned in the Node version installation instructions, is a Python runtime needed to use TensorFlow.js Node? I can install whatever is required, but I'm not sure whether our production servers have it.

It depends on what you want to do.
If you already have a TensorFlow model written in Python that you would like to deploy for inference in Node.js,
you can use the tensorflow.js converter; in that case you will need a Python runtime.
Since version 1.3 of tfjs-node, it is also possible to load a SavedModel directly in JS (only possible in Node.js) using loadSavedModel.
But if you want to write your complete pipeline in JS, you don't need to have Python installed.
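To make the loadSavedModel route concrete, here is a minimal sketch of loading a SavedModel in Node.js with tfjs-node 1.3+; the model path, tag, signature name, and input shape are placeholders that depend on how your model was exported:

```javascript
// Load a TensorFlow SavedModel directly in Node.js (tfjs-node >= 1.3),
// with no Python needed at runtime.
const tf = require('@tensorflow/tfjs-node');

async function main() {
  // 'serve' / 'serving_default' are the usual tag and signature names
  // produced when a model is exported from Python.
  const model = await tf.node.loadSavedModel(
    './saved_model', ['serve'], 'serving_default');

  const input = tf.zeros([1, 224, 224, 3]); // example input shape
  const output = model.predict(input);
  output.print();
}

main();
```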

Related

Can Tensorflow.js be used for face recognition?

There are Python and C++ APIs available for doing image recognition, with tutorials provided on tensorflow.org. But since tensorflow.js was released only a few months ago, does it support all the APIs available in the Python and C++ implementations?
Vincent Mühler has created face-api.js, a JavaScript API based on tensorflow.js. You can find the blog post and code at the links below.
https://itnext.io/face-api-js-javascript-api-for-face-recognition-in-the-browser-with-tensorflow-js-bcc2a6c4cf07
https://github.com/justadudewhohacks/face-api.js
Adding to the answers above: TensorFlow in JavaScript can be quite slow compared to the native implementation.
However, if you do run tensorflow.js on Node, you can make use of the bindings directly to the TensorFlow APIs written in C, which are fast. You can also run the CUDA versions if you import the right packages in Node.js.
In the browser, WebGL is used to run TensorFlow. Using TensorFlow or other ML in the browser opens up whole new opportunities to do things right from within the browser.
As Jirapol suggested, you could take a look at https://github.com/justadudewhohacks/face-api.js, which is super easy to use. It actually took me a very short while to start writing a facial-recognition login system on Node using face-api.js. Here's a link if you want to take a look at the unfinished code: https://github.com/WinstonMarvel/face-recognition-authentication
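The core of such a login flow is just comparing face descriptors. A rough sketch using face-api.js API names from its README (model directory, threshold, and function names are placeholders, not code from the linked repo):

```javascript
const faceapi = require('face-api.js');

// Compare a stored reference photo against a login attempt.
async function isSamePerson(referenceImage, queryImage) {
  // Load the models once at startup (directory path is hypothetical).
  await faceapi.nets.ssdMobilenetv1.loadFromDisk('./models');
  await faceapi.nets.faceLandmark68Net.loadFromDisk('./models');
  await faceapi.nets.faceRecognitionNet.loadFromDisk('./models');

  // Compute a 128-dimensional descriptor for each face.
  const ref = await faceapi
    .detectSingleFace(referenceImage)
    .withFaceLandmarks()
    .withFaceDescriptor();
  const query = await faceapi
    .detectSingleFace(queryImage)
    .withFaceLandmarks()
    .withFaceDescriptor();

  // A Euclidean distance below ~0.6 is commonly treated as a match.
  const distance = faceapi.euclideanDistance(
    ref.descriptor, query.descriptor);
  return distance < 0.6;
}
```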
does it support all the api's available in the python and c++ implementation.
No, it still has a limited set of features. Keep in mind it is still at version 0.11.6, so that will change. You can look at the documentation to see what's available.
If you want to port a specific model to tfjs, try to get it as a Keras model, then convert it with tensorflowjs_converter to a tfjs-compatible one, as this tutorial shows.
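Once the converter is installed (this is where the Python runtime comes in), the conversion itself is a one-liner; the file names here are placeholders:

```shell
# Install the converter from PyPI
pip install tensorflowjs

# Convert a Keras HDF5 model into a tfjs-compatible web model
tensorflowjs_converter --input_format keras model.h5 web_model/
```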
There is even a tfjs example that works with webcam data (Tutorial, Live Demo), so you could look into that to get started.
Yes, it can.
And with the help of WebAssembly and SIMD in the browser,
you can have a smooth image- and video-processing experience. Have a look at this link from Google's V8 team.
The good news is that with the same API you can run TensorFlow.js in the browser, in Node.js, and in React Native, all with native speed and using native capabilities.

DC/OS Mesosphere install/deploy my own application across a cluster

I am trying to deploy my own cluster using the DC/OS CLI installation. Mesosphere has huge support, as there are many ready-to-install packages provided in the Mesosphere Universe repo (https://github.com/mesosphere/universe).
However, I would like to go one step further. I am trying to install my own applications on my cluster using the DC/OS CLI installation process. To do this, as far as I understand, I need to either (i) make my application recognizable to the system repo (like the other repo packages provided in Universe) or (ii) make a new image that contains all my applications and modify the DC/OS script to make the installation possible.
Unfortunately, my knowledge here is modest, and I could not find a clear answer anywhere.
Therefore, I would like to ask:
1) Is it possible to do what I am trying to do?
2) If the answer is YES, how exactly should I do it? My goal is to install my apps for my own purposes, not to publish them. But to add my apps as a repo in Universe, it seems like I would have to publish them.
It is possible! :)
Please follow these instructions
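For a private app you don't necessarily need a Universe package at all: a Marathon app definition deployed through the DC/OS CLI is enough. A minimal sketch, where the app id, resources, and Docker image are hypothetical placeholders:

```shell
# Write a Marathon app definition for your own application
cat > myapp.json <<'EOF'
{
  "id": "/myapp",
  "instances": 1,
  "cpus": 0.5,
  "mem": 128,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "myorg/myapp:latest" }
  }
}
EOF

# Deploy it to the cluster with the DC/OS CLI
dcos marathon app add myapp.json
```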

Tensorflow Keras API on Google cloud

I have a question on using tensorflow on google cloud platform.
I heard that Google Cloud's TensorFlow doesn't support Keras (keras.io). However, now I can see that TensorFlow has its own API to access Keras (https://www.tensorflow.org/api_docs/python/tf/contrib/keras).
Given this, can I use the above-mentioned API inside Google Cloud, since it comes along with the TensorFlow package? Any ideas?
I am able to access this API from TensorFlow installed on an Anaconda machine.
Option 1: try the --package-path option.
As per the docs:
--package-path=PACKAGE_PATH
"Path to a Python package to build. This should point to a directory containing the Python source for the job"
Try giving a relative path to keras from your main script.
More details here:
https://cloud.google.com/sdk/gcloud/reference/beta/ml-engine/local/train
Option 2: if you have a setup.py file,
pass the argument install_requires=['keras'] to the setup() call inside it.
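A minimal sketch of such a setup.py (the package name and version are hypothetical):

```python
# setup.py at the root of the training package
from setuptools import find_packages, setup

setup(
    name='trainer',
    version='0.1',
    packages=find_packages(),
    # Cloud ML Engine installs these on the training workers
    install_requires=['keras'],
)
```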
Google Cloud Machine Learning Engine does support Keras (keras.io), but you have to list it as a dependency when starting a training job. For some specific instructions, see this SO post, or a longer exposition on this blog page. If you'd like to serve your model on Google Cloud Machine Learning or using TensorFlow Serving, then see this SO post about exporting your model.
That said, you can also use tf.contrib.keras, as long as you use the --runtime-version=1.2 flag. Just keep in mind that packages in contrib are experimental and may introduce breaking API changes between versions.
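Putting the answers together, a job submission looks roughly like this; the job name, bucket, region, and module names are placeholders:

```shell
gcloud ml-engine jobs submit training keras_job_1 \
  --package-path trainer/ \
  --module-name trainer.task \
  --staging-bucket gs://my-bucket \
  --runtime-version 1.2 \
  --region us-central1
```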
Have a look at this example on Git, which I saw was recently added:
Keras Cloud ML Example

Is it possible to embed a package without copying it?

Say we have the package encoding/json. Can I just create a package mypackage and embed all the functions (at least the public ones) into my package without copying them by hand, basically making calls back to the actual json package? I'm developing a cross-platform (Google App Engine / native) solution, and I would find such a solution quite useful.
No.
It sounds like you want some sort of package inheritance, which is not a supported feature.
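There is no automatic embedding at the package level, but a thin forwarding layer is short to write by hand: type aliases (Go 1.9+) plus package-level function values. A sketch, shown in one file for brevity (in a real project the alias declarations would live in their own package that both builds import):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Type alias (Go 1.9+): Decoder IS json.Decoder, so values are
// interchangeable between this package and encoding/json.
type Decoder = json.Decoder

// Package-level function values forward calls to the real package.
var (
	Marshal   = json.Marshal
	Unmarshal = json.Unmarshal
)

func main() {
	b, err := Marshal(map[string]int{"answer": 42})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"answer":42}
}
```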

Should I be using Gradle for continuous deployment?

Does anyone have past experience with Gradle? I'm thinking of using it for continuous deployment. I'm considering either using my own scripts (Python) or Gradle.
Can anyone who has tried it say which way they'd recommend? Note that I already use Maven, and I don't intend to move away from it for my dependency management and project management.
Thanks
We have implemented Gradle-based deployment and environment management in a big governmental project (100+ servers). But we had to develop a custom set of plugins (which is actually a rather straightforward process in Gradle) to handle tasks like remote SSH command execution through a Groovy DSL, creation of application-server domains/clusters (we are using WebLogic), and application/configuration deployment.
We are also thinking of integrating Gradle with Puppet for easier Linux administration.
If you are coming from the Java world, then using Gradle (which is Groovy-based) will be rather simple for you, because you can reuse your Java/Ant/Maven/Groovy knowledge to write scripts. The ability to create DSLs in Groovy may also allow you to build interesting abstractions. Gradle has a very clean API for building dependencies between tasks, integrates very well with Maven infrastructure, and lets you reuse all Ant tasks.
Yes, Gradle-based deployment is possible with the gradle-ssh-plugin.
Here is an article with a good usage example.
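For a flavor of what that looks like, here is a minimal build.gradle sketch following the gradle-ssh-plugin DSL; the plugin version, host name, key path, artifact, and restart command are all placeholders for your environment:

```groovy
plugins {
    id 'org.hidetake.ssh' version '2.10.1'
}

// Declare the machines you deploy to
remotes {
    web01 {
        host = 'web01.example.com'
        user = 'deploy'
        identity = file("${System.properties['user.home']}/.ssh/id_rsa")
    }
}

// A deployment task: copy the artifact and restart the service
task deploy {
    doLast {
        ssh.run {
            session(remotes.web01) {
                put from: 'build/libs/app.war', into: '/opt/app/'
                execute 'sudo systemctl restart app'
            }
        }
    }
}
```

Run it with `gradle deploy`; because remotes and sessions are plain Groovy, you can loop over many servers from one script.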
