TensorFlow.js: does loadFrozenModel mean I cannot access layers? - tensorflow.js

I used to load the model by calling tf.loadModel('https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_0.25_128/model.json'); however, I needed to change the MobileNet version.
So I took the version I needed from TensorFlow Hub, ran it through tensorflowjs_converter, and got two files (the .pb and the weights file). Then I loaded it using tf.loadGraphModel. However, model.getLayer throws:
model.getLayer is not a function.
The loading looks like this:
const model = await tf.loadGraphModel(modelUrl); // url points to the .pb
Then I saved the MobileNet model as a frozen model, ran it through tensorflowjs_converter again, and tried to load it with tf.loadFrozenModel, which returned the same error.
I'm confused.
Is there a way to get layers from a non-Keras model?
EDIT: for clarification, the model I took from TensorFlow Hub is:
https://tfhub.dev/google/imagenet/mobilenet_v2_075_96/classification/2

loadFrozenModel is deprecated since 0.15. loadGraphModel does the same thing with fewer parameters: it takes only the model topology file.
If there are no layers in the loaded object, it is either because the model was not loaded correctly or because the frozen model does not contain any.
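With a tf.GraphModel you run the whole graph rather than inspecting individual layers; a minimal sketch (the 96x96x3 input shape is an assumption based on the module listed in the question):

const model = await tf.loadGraphModel(modelUrl);
// GraphModels expose predict()/execute(), not getLayer():
const logits = model.predict(tf.zeros([1, 96, 96, 3])); // input shape assumed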

TF.js supports two APIs and corresponding serialization formats: the Layers API (corresponding to Keras models) and the lower-level Core API (corresponding to arbitrary TensorFlow graphs).
Depending on where you obtain a model and how you convert it, your file can be loaded either via tf.loadLayersModel() or tf.loadGraphModel(), but not both. Please see the table of available conversions.
Even if a model was originally trained using Keras, it may have been saved as a low-level TensorFlow graph where the Keras layers structure is lost. I believe this is currently the case for all TF-Hub modules. Thus your current approach gives you a tf.GraphModel, from which the layers cannot be reconstituted.
We provide MobileNet v1, already converted from the original Keras model to the TF.js Layers format, at the URL you listed, so you can use loadLayersModel() (formerly loadModel()) with that directly. We don't currently host a converted MobileNet v2. However, you can get the original Keras .h5 model here, and then convert that to the TF.js Layers format using tensorflowjs_converter.
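For example, a sketch of that route (the output path is illustrative, and getLayer is shown by index since the v2 layer names differ from v1):

// One-time conversion of the Keras model, run in a shell (paths illustrative):
//   tensorflowjs_converter --input_format keras mobilenet_v2.h5 web_model/
// The result loads as a Layers model, so the layer structure is preserved:
const model = await tf.loadLayersModel('web_model/model.json');
model.summary();                       // prints the recovered layer names
const layer = model.getLayer(null, 0); // look layers up by name or by index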

Related

Flink reference data advice/best practice

Looking for some advice on where to store/access Flink reference data. The use case here is really simple: I have a single-column text file with a list of countries. I am streaming Twitter data and then matching the countries from the text file against the (parsed) Location field of each tweet.

In the IDE (Eclipse) it's all good: I have a static ArrayList populated when the routine fires up, via a static Build method in my Flink mapper (i.e. it implements Flink's MapFunction). The class is now a static inner class, as serialization complains otherwise. The point is, when the overridden map function is invoked at runtime from within the stream, the static array of country data is there waiting, fully populated and ready to be matched against. Works a charm. BUT, when deployed to a Flink cluster (and it took me to hell and back last week just to get the code to FIND the text file), the array is only populated as part of the Build method. By the time it comes to be used, the data has mysteriously disappeared and I am left with an array of size 0 (ergo, not a lot of matches get found).

Thus, two questions: why does it work in Eclipse and not on deploy (which renders a lot of Eclipse unit tests pointless as well)? Or perhaps more generally, what is the right way to reference this kind of static, fixed reference data within Flink (in a way that works both in Eclipse and on the cluster)?
The standard way to handle static reference data is to load the data in the open method of a RichMapFunction or RichFlatMapFunction. Rich functions have open and close methods that are useful for creating and finalizing local state, and can access the runtime context.
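A minimal sketch of that pattern, assuming a Tweet POJO with a parsed location field and a countries.txt file bundled on the classpath (all names here are illustrative):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.Set;

public class CountryTagger extends RichMapFunction<Tweet, Tweet> {
    private transient Set<String> countries;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Runs once per parallel task instance, on whichever task manager
        // executes it, so the set is populated wherever map() actually runs.
        countries = new HashSet<>();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                getClass().getResourceAsStream("/countries.txt"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                countries.add(line.trim());
            }
        }
    }

    @Override
    public Tweet map(Tweet tweet) {
        // Tweet and its accessors are hypothetical stand-ins for your POJO.
        if (countries.contains(tweet.getLocation())) {
            tweet.setMatchedCountry(tweet.getLocation());
        }
        return tweet;
    }
}

Reading the file from the classpath (i.e. packaging it inside the job jar) also sidesteps the problem of the cluster not finding the text file.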

What's the preferred way to go about using backbone with non-crud resources?

New to Backbone/Marionette, but I believe I understand how to use Backbone when dealing with CRUD/REST; however, consider something like the results of a search query. How should one model this? Of course the results likely relate to some model(s), but they are not meant to be tied to said model(s).
Part of me thinks that I should use a collection whose model doesn't actually sync with a data store through the server, but instead just exists as a means of modeling a search-result object.
Another solution could be to have a collection with no models and just override parse.
I assume that the former is preferred, but again I have no experience with the framework. If there's an alternative/better solution than those listed above, please advise.
I prefer having one object that is responsible for both the request and for parsing the response. It parses the response into the appropriate models and nothing more. If some of those parsed models are required somewhere in your page, whatever needs them keeps a reference to this wrapper object and pulls the models it requires out of the response via the wrapper's methods.
Another option is to use Radio (https://github.com/marionettejs/backbone.radio) with this wrapper: then you do not have to keep a reference to the wrapper object in different places, but instead ask for the data via Radio, as in the sketch below.
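A loose sketch of the wrapper idea, assuming a /search endpoint and backbone.radio (all names are illustrative):

var searchChannel = Backbone.Radio.channel('search');

var SearchService = {
  // Owns the request and parses the raw response into throwaway models
  // that never sync back to the server.
  query: function (term) {
    return Backbone.ajax({ url: '/search', data: { q: term } })
      .then(function (response) {
        return new Backbone.Collection(response.results);
      });
  }
};

// Consumers never hold a reference to the wrapper; they ask over Radio.
searchChannel.reply('query', SearchService.query, SearchService);

// Elsewhere in the page:
searchChannel.request('query', 'backbone').then(function (results) {
  // results is a plain Backbone.Collection of search-result models
});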

Can/should a backbone model store (temporary) data that's not on the server?

With Backbone, I do a somewhat expensive calculation for each Model in my Collection and there can be a lot of Models. I'm thinking I'd like to store the result in each Model with set(), but I don't want to save it to the server. Is this generally a bad idea?
If that's not good, is the better practice to keep it in an array variable or a Model (a calculations results model separate from the cached server data model)?
Why do I think this might be a good idea?
I wouldn't ever have to give thought to the array variable's scope/context.
No looking up the array contents once I have the relevant Model.
Data is more encapsulated.
Why do I think this might be a bad idea?
Mixes cached server data with calculated local data.
Probably have to write sync code so that save() only saves the attributes the server should get.
Thanks!
EDIT
Found someone exploring a similar issue, with good discussion: Custom Model Property in Template.
This seems to have a pretty thorough answer that I am exploring: Backbone Computed Properties.
One solution might be to override the toJSON function of your Model.
That function is called by the save function to get the attributes to be sent back to the server.
Looking at the docs, the toJSON function basically says you could use it for your specific purpose:
Return a copy of the model's attributes for JSON stringification. This can be used for
persistence, serialization, or for augmentation before being sent to the server.
I would personally not consider it bad practice, but it all depends on the amount of data and on the calculations themselves. So it would depend on your specific use case.
Also, you could store the calculated object not in your model.attributes but somewhere else on your model instance. That way it is hidden from the model attributes you synchronize back and forth with your server.
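A minimal sketch of both suggestions ('score' and expensiveCalculation are illustrative names):

var Item = Backbone.Model.extend({
  // Option 1: keep the result in attributes (so views react to change
  // events), but strip it from what save() sends to the server.
  toJSON: function (options) {
    var attrs = _.clone(this.attributes);
    delete attrs.score;
    return attrs;
  },

  // Option 2: keep the result off attributes entirely, as a plain
  // instance property that sync never sees.
  cacheScore: function () {
    this.score = this.expensiveCalculation(); // hypothetical calculation
  }
});

Note that with option 1 the stripped attribute also disappears from anything else that relies on toJSON, such as template rendering, so option 2 is safer if you only need the value in code.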

When using an object database, how do you handle significant changes to your object model?

If you use an object database, what happens when you need to change the structure of your object model?
For instance, I'm playing around with the Google App Engine. While I'm developing my app, I've realized that in some cases, I mis-named a class, and I want to change the name. And I have two classes that I think I need to consolidate.
However, I don't think I can, because the name of the class is intimately tied to the datastore, and there is actual data stored under those class names.
I suppose the good thing about the "old way" of abstracting the object model from the data storage is that the data storage doesn't know anything about the object model; it's just data. So you can change your object model and simply load the data out of the datastore differently.
So, in general, when using a datastore which is intimate with your data model...how do you change things around?
If it's just class naming you're concerned about, you can change the class name without changing the kind (the identifier that is used in the datastore):
class Foo(db.Model):
    @classmethod
    def kind(cls):
        return 'Bar'
If you want to rename your class, just implement the kind() method as above, and have it return the old kind name.
If you need to make changes to the actual representation of data in the datastore, you'll have to run a mapreduce to update the old data.
The same way you do it in relational databases, except without a nice simple SQL script: http://code.google.com/appengine/articles/update_schema.html
Also, just like in the old days, objects missing a property don't automatically get defaults, and properties that no longer exist in the schema still hang around as phantoms in the objects.
To rename a property, I expect you can remove the old property (the phantom hangs around), add the new name, and populate it with a copy of the data from the old (phantom) property. The rewritten object will then only have the new property.
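A rough sketch of that copy-and-drop pass, assuming the old db API and an Expando model so the phantom property stays reachable (all names are illustrative; in practice you would run this in batches, e.g. from the mapreduce mentioned above):

from google.appengine.ext import db

class Person(db.Expando):
    full_name = db.StringProperty()  # the new property name

def migrate():
    for person in Person.all():
        # 'name' is the old, now-phantom property: copy it, then drop it
        if hasattr(person, 'name'):
            person.full_name = person.name
            del person.name
            person.put()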
You may be able to do it the way we are doing it in our project:
Before we update the object model (schema), we export our data to a file or blob in JSON format, using a custom export function that writes a version tag on top. After the schema has been updated, we import the JSON with another custom function, which creates new entities and populates them with the old data. Of course, the import function needs to know the JSON format associated with each version number.
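A loose sketch of that idea (the JSON layout, version handling, and all names are illustrative, and non-JSON-serializable property types would need extra handling):

import json
from google.appengine.ext import db

SCHEMA_VERSION = 2  # bumped whenever the object model changes

def export_all(model_class):
    return json.dumps({
        'version': SCHEMA_VERSION,
        'entities': [db.to_dict(e) for e in model_class.all()],
    })

def import_all(model_class, payload):
    data = json.loads(payload)
    # A real importer dispatches on data['version'] to map old field
    # names onto the new schema before constructing entities.
    db.put([model_class(**props) for props in data['entities']])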

Google App Engine: Saving a list of objects?

I need to save in my model a list of objects of a certain class in the datastore.
Is there any simple way to achieve this with ListProperty and custom properties, without resorting to pickled/simplejson blob data?
I just want something like this:
class Test:
    pass

class model(db.Model):
    list = db.ListProperty(Test)
Looking at the GAE documentation, I can't really tell whether this is possible with the current version or not.
I was trying to avoid pickling because it's slow and has size limits.
You can only store a limited set of types directly in the datastore. To store your own types, you need to convert them into one of the accepted types in some manner - pickling is one common approach, as is serializing it as JSON.
The size limit isn't unique to pickling - 1MB is the largest Entity you can insert regardless of the fields and types.
You could save your Test objects in the datastore directly, by making Test a model/entity type. Otherwise you will have to serialize them somehow (using something like pickle or JSON).
You could have a list of keys, or you could give the 'Test' entities a parent that is an entity of your 'model' class. A sketch of the list-of-keys approach follows.
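A minimal sketch of the list-of-keys approach, using the old db API (model and property names are illustrative):

from google.appengine.ext import db

class Test(db.Model):
    value = db.StringProperty()

class Container(db.Model):
    tests = db.ListProperty(db.Key)  # keys of Test entities

# Store the Test objects as first-class entities and keep only their keys:
t1, t2 = Test(value='a'), Test(value='b')
db.put([t1, t2])
container = Container(tests=[t1.key(), t2.key()])
container.put()

# Fetch them back in one batch:
tests = db.get(container.tests)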
