tensorflowjs-converter: Failed to import metagraph, check error log for more info - tensorflow.js

I just want to convert a Python model to a TensorFlow.js model. After saving it as a SavedModel (.pb), I run "tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model --signature_name=serving_default --saved_model_tags=serve ./saved_model ./web_model", and the following error appears.
2019-03-20 23:07:05.970985: I tensorflow/core/grappler/devices.cc:53] Number of eligible GPUs (core count >= 8): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-03-20 23:07:05.978764: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2019-03-20 23:07:05.985340: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-03-20 23:07:06.072370: E tensorflow/core/grappler/grappler_item_builder.cc:636] Init node Variable/Assign doesn't exist in graph
Traceback (most recent call last):
File "d:\anaconda3\lib\site-packages\tensorflow\python\grappler\tf_optimizer.py", line 43, in OptimizeGraph
verbose, graph_id, status)
SystemError: returned NULL without setting an error
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "d:\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "d:\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\Anaconda3\Scripts\tensorflowjs_converter.exe__main__.py", line 9, in
File "d:\anaconda3\lib\site-packages\tensorflowjs\converters\converter.py", line 358, in main
strip_debug_ops=FLAGS.strip_debug_ops)
File "d:\anaconda3\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 271, in convert_tf_saved_model
concrete_func)
File "d:\anaconda3\lib\site-packages\tensorflow\python\framework\convert_to_constants.py", line 140, in convert_variables_to_constants_v2
graph_def = _run_inline_graph_optimization(func)
File "d:\anaconda3\lib\site-packages\tensorflow\python\framework\convert_to_constants.py", line 59, in _run_inline_graph_optimization
return tf_optimizer.OptimizeGraph(config, meta_graph)
File "d:\anaconda3\lib\site-packages\tensorflow\python\grappler\tf_optimizer.py", line 43, in OptimizeGraph
verbose, graph_id, status)
File "d:\anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 548, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info.
This is my code. The TensorFlow version is 1.14.0 (a preview build, as I failed to install TF 2.0).
# coding=utf-8
import tensorflow as tf
import numpy as np

# XOR training data
x_data = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y_data = [[0.0], [1.0], [1.0], [0.0]]
x_test = [[0.0, 1.0], [1.0, 1.0]]

xs = tf.placeholder(tf.float32, [None, 2])
ys = tf.placeholder(tf.float32, [None, 1])

# two-layer network: 2 -> 10 -> 1
W1 = tf.Variable(tf.random_normal([2, 10]))
B1 = tf.Variable(tf.zeros([1, 10]) + 0.1)
out1 = tf.nn.relu(tf.matmul(xs, W1) + B1)
W2 = tf.Variable(tf.random_normal([10, 1]))
B2 = tf.Variable(tf.zeros([1, 1]) + 0.1)
prediction = tf.add(tf.matmul(out1, W2), B2, name="model")

loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(40):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))

re = sess.run(prediction, feed_dict={xs: x_test})
print(re)
for x in re:
    if x[0] > 0.5:
        print(1)
    else:
        print(0)

tf.saved_model.simple_save(sess, "./saved_model", inputs={"x": xs}, outputs={"model": prediction})

In the end I gave up on it, since the latest version has removed loadFrozenModel and support is thin. I tried a Keras model instead and it works. However, I would still like to know why my TF model fails to convert to a TFJS model.
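(For reference, the Keras route that worked looks roughly like this; a sketch, assuming a tf.keras model and the file name model.h5, neither of which appears in the original script:)

model.save('model.h5')  # save the trained tf.keras model in HDF5 format

and then convert it with:

tensorflowjs_converter --input_format=keras model.h5 ./web_model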

Just add
tf.enable_resource_variables()
before initializing x_data, and use this command for conversion:
tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model ./saved_model ./web_model
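A minimal sketch of where the call goes in the script above (assuming TF 1.14; the converter's V2 path, convert_variables_to_constants_v2 in the traceback, works with resource variables, which appears to be why the legacy "Init node Variable/Assign" lookup fails):

# coding=utf-8
import tensorflow as tf
import numpy as np

tf.enable_resource_variables()  # must run before any tf.Variable is created

x_data = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
# ... rest of the script unchanged ...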

Related

How can I run GridSearchCV in dask_ml despite this error?

This is my code in Google Colab:
import cupy as cp
import numpy as np
import joblib
import dask_ml.model_selection as dcv
from cuml import svm  # assumed import; the original snippet calls svm.SVC() without importing svm

def ParamSelection(X, Y, nfolds):
    param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100], 'kernel': ['linear'], 'gamma': [0.001, 0.01, 0.1, 1, 10, 100]}
    svc = svm.SVC()
    grid_search = dcv.GridSearchCV(svc, param_grid, cv=nfolds)
    grid_search.fit(X, Y)
    print(grid_search.best_params_)
    print(grid_search.best_estimator_)
    print(grid_search.best_score_)
    return grid_search.best_estimator_

svc = ParamSelection(X_train.astype(cp.int_), y_train.astype(cp.int_), 10)
I get this error:
TypeError Traceback (most recent call last)
<ipython-input-163-56196d6a31bd> in <module>()
15 return grid_search.best_estimator_
16
---> 17 svc = ParamSelection(X_train.astype(cp.int_), y_train.astype(cp.int_), 10)
18
9 frames
/usr/local/lib/python3.7/site-packages/cudf/core/frame.py in __array__(self, dtype)
1677 def __array__(self, dtype=None):
1678 raise TypeError(
-> 1679 "Implicit conversion to a host NumPy array via __array__ is not "
1680 "allowed, To explicitly construct a GPU array, consider using "
1681 "cupy.asarray(...)\nTo explicitly construct a "
TypeError: Implicit conversion to a host NumPy array via __array__ is not allowed, To explicitly construct a GPU array, consider using cupy.asarray(...)
To explicitly construct a host array, consider using .to_array()
For train_test_split I use the function from:
from dask_ml.model_selection import train_test_split
I don't really know where the problem is.
Any suggestions?
Somewhere in the internals, Dask-ML is likely calling np.asarray on a CuPy array. Implicitly triggering a GPU to CPU transfer this way is generally not permitted, so an error is thrown.
If you instead use CPU based data with a cuML estimator, this should work as expected.
import cupy as cp
import dask_ml.model_selection as dcv
from sklearn.datasets import make_classification
from cuml import svm

X, y = make_classification(
    n_samples=100
)

def ParamSelection(X, Y, nfolds):
    param_grid = {'C': [0.001, 10, 100], 'gamma': [0.001, 100]}
    svc = svm.SVC()
    grid_search = dcv.GridSearchCV(svc, param_grid, cv=nfolds)
    grid_search.fit(X, Y)
    print(grid_search.best_params_)
    print(grid_search.best_estimator_)
    print(grid_search.best_score_)
    return grid_search.best_estimator_

svc = ParamSelection(X, y, 2)
{'C': 10, 'gamma': 0.001}
SVC()
0.8399999737739563
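Alternatively, if you want to keep your cuDF/CuPy inputs, copy them to the host explicitly before fitting, as the error message itself suggests (a sketch; it assumes X_train is a cuDF DataFrame and y_train a cuDF Series, as the traceback implies):

X_host = X_train.to_pandas().values  # explicit device -> host copy of the features
y_host = y_train.to_array()          # .to_array() returns a host NumPy array
svc = ParamSelection(X_host, y_host, 10)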

Datastore error: BadValueError: Expected integer, got [0, 1, 2, 3]

Others have reported a similar error, but the solutions given do not solve my problem.
For example there is a good answer here. The answer in the link mentions how ndb changes from a first use to a later use and suggests there is a problem because a first run produces a None in the Datastore. I cannot reproduce or see that happening in the Datastore for my SDK, but that may be because I am running it from the interactive console.
I am pretty sure I got an initial good run with the GAE interactive console, but every run since then has failed with the error in the title of this question.
I have left the print statements in the following code because they show good results and assure me that the error is occurring in the put() at the very end.
from google.appengine.ext import ndb

class Account(ndb.Model):
    week = ndb.IntegerProperty(repeated=True)
    weeksNS = ndb.IntegerProperty(repeated=True)
    weeksEW = ndb.IntegerProperty(repeated=True)

terry = Account(week=[], weeksNS=[], weeksEW=[])
terry_key = terry.put()
terry = terry_key.get()
print terry
for t in list(range(4)):  # just dummy input, but like real input
    terry.week.append(t)
print terry.week
region = 1  # same error message for region = 0
if region:
    terry.weeksEW.append(terry.week)
else:
    terry.weeksNS.append(terry.week)
print 'EW' + str(terry.weeksEW)
print 'NS' + str(terry.weeksNS)
terry.week = []
print 'week' + str(terry.week)
terry.put()
The idea of my code is to first build up the terry.week list values incrementally and then later store the whole list to the appropriate region, either NS or EW. So I'm looking for a workaround for this scheme.
The error message is likely of no value but I am reproducing it here.
Traceback (most recent call last):
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime/request_handler.py", line 237, in handle_interactive_request
exec(compiled_code, self._command_globals)
File "<string>", line 55, in <module>
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 3458, in _put
return self._put_async(**ctx_options).get_result()
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
self.check_success()
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/context.py", line 824, in put
key = yield self._put_batcher.add(entity, options)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/tasklets.py", line 430, in _help_tasklet_along
value = gen.send(val)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/context.py", line 358, in _put_tasklet
keys = yield self._conn.async_put(options, datastore_entities)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/datastore/datastore_rpc.py", line 1858, in async_put
pbs = [entity_to_pb(entity) for entity in entities]
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 697, in entity_to_pb
pb = ent._to_pb()
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 3167, in _to_pb
prop._serialize(self, pb, projection=self._projection)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 1422, in _serialize
values = self._get_base_value_unwrapped_as_list(entity)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 1192, in _get_base_value_unwrapped_as_list
wrapped = self._get_base_value(entity)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 1180, in _get_base_value
return self._apply_to_values(entity, self._opt_call_to_base_type)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 1352, in _apply_to_values
value[:] = map(function, value)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 1234, in _opt_call_to_base_type
value = _BaseValue(self._call_to_base_type(value))
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 1255, in _call_to_base_type
return call(value)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 1331, in call
newvalue = method(self, value)
File "/Users/brian/google-cloud-sdk/platform/google_appengine/google/appengine/ext/ndb/model.py", line 1781, in _validate
(value,))
BadValueError: Expected integer, got [0, 1, 2, 3]
I believe the error comes from these lines:
terry.weeksEW.append(terry.week)
terry.weeksNS.append(terry.week)
You are not appending another integer; you are appending a list where an integer is expected.
>>> aaa = [1,2,3]
>>> bbb = [4,5,6]
>>> aaa.append(bbb)
>>> aaa
[1, 2, 3, [4, 5, 6]]
>>>
This fails the ndb.IntegerProperty test.
Try:
terry.weeksEW += terry.week
terry.weeksNS += terry.week
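Unlike append, += extends the list element by element, so every stored item stays an integer:

>>> aaa = [1,2,3]
>>> bbb = [4,5,6]
>>> aaa += bbb
>>> aaa
[1, 2, 3, 4, 5, 6]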
EDIT: To save a list of lists, do not use the IntegerProperty(), but instead the JsonProperty(). Better still, the ndb datastore is deprecated, so... I recommend Firestore, which uses JSON objects by default. At least use Cloud Datastore, or Cloud NDB.
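A sketch of the JsonProperty variant (same Account model as above; JsonProperty accepts any JSON-serializable value, so nested lists are allowed):

class Account(ndb.Model):
    week = ndb.IntegerProperty(repeated=True)
    weeksNS = ndb.JsonProperty()  # can hold a list of lists
    weeksEW = ndb.JsonProperty()

terry = Account(week=[], weeksNS=[], weeksEW=[])
terry.weeksEW.append([0, 1, 2, 3])  # appending a whole list is fine here
terry.put()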

error: (-209) The operation is neither 'array op array' ... (Python 3.4, opencv, picamera)

I want to give as input a video taken with the picamera (.h264) to my Python code on my PC [as opencv-python is not getting installed on my Raspbian OS]. I converted the video from .h264 to .mp4 and gave that as the input video file, and I get the error below.
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op, file /io/opencv/modules/core/src/arithm.cpp, line 659
Traceback (most recent call last):
File "/home/ramakrishna/PycharmProjects/Lanedect/driving-lane-departure-warning-master/main.py", line 36, in <module>
clip = clip1.fl_image(process_frame) #NOTE: it should be in BGR format
File "/home/ramakrishna/.local/lib/python3.4/site-packages/moviepy/video/VideoClip.py", line 533, in fl_image
return self.fl(lambda gf, t: image_func(gf(t)), apply_to)
File "/home/ramakrishna/.local/lib/python3.4/site-packages/moviepy/Clip.py", line 136, in fl
newclip = self.set_make_frame(lambda t: fun(self.get_frame, t))
File "<decorator-gen-57>", line 2, in set_make_frame
File "/home/ramakrishna/.local/lib/python3.4/site-packages/moviepy/decorators.py", line 14, in outplace
f(newclip, *a, **k)
File "/home/ramakrishna/.local/lib/python3.4/site-packages/moviepy/video/VideoClip.py", line 694, in set_make_frame
self.size = self.get_frame(0).shape[:2][::-1]
File "<decorator-gen-14>", line 2, in get_frame
File "/home/ramakrishna/.local/lib/python3.4/site-packages/moviepy/decorators.py", line 89, in wrapper
return f(*new_a, **new_kw)
File "/home/ramakrishna/.local/lib/python3.4/site-packages/moviepy/Clip.py", line 95, in get_frame
return self.make_frame(t)
File "/home/ramakrishna/.local/lib/python3.4/site-packages/moviepy/Clip.py", line 136, in <lambda>
newclip = self.set_make_frame(lambda t: fun(self.get_frame, t))
File "/home/ramakrishna/.local/lib/python3.4/site-packages/moviepy/video/VideoClip.py", line 533, in <lambda>
return self.fl(lambda gf, t: image_func(gf(t)), apply_to)
File "/home/ramakrishna/PycharmProjects/Lanedect/driving-lane-departure-warning-master/lane.py", line 619, in process_frame
output = create_output_frame(offcenter, pts, img_undist_, fps, curvature, curve_direction, binary_sub)
File "/home/ramakrishna/PycharmProjects/Lanedect/driving-lane-departure-warning-master/lane.py", line 486, in create_output_frame
output = cv2.addWeighted(undist_ori, 1, newwarp_, 0.3, 0)
cv2.error: /io/opencv/modules/core/src/arithm.cpp:659: error: (-209) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function arithm_op
Process finished with exit code 1
Please help me solve this error.
Below are the input dimensions already set in the code. But my video file is 20.5 MB with dimensions 1920 x 1080. How can I change the dimensions if I have to? (One way to resize is sketched after the snippet below.)
left_lane = Lane()
right_lane = Lane()
frame_width = 1280
frame_height = 720
LANEWIDTH = 3.7 # highway lane width in US: 3.7 meters
input_scale = 4
output_frame_scale = 4
N = 4 # buffer previous N lines
# fullsize:1280x720
x = [194, 1117, 705, 575]
y = [719, 719, 461, 461]
X = [290, 990, 990, 290]
Y = [719, 719, 0, 0]
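One way to bring the 1920 x 1080 input down to the expected 1280 x 720 before it reaches the pipeline is to resize each frame (a sketch; input.mp4 is a placeholder name, and it assumes frames are handled as NumPy arrays with OpenCV):

import cv2

cap = cv2.VideoCapture('input.mp4')
ok, frame = cap.read()
while ok:
    frame = cv2.resize(frame, (1280, 720))  # match frame_width x frame_height above
    # ... feed the resized frame to the lane-detection pipeline ...
    ok, frame = cap.read()
cap.release()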

OpenMDAO 1.5 : Running DOEdriver with array as desvar

I have used the example described here (http://openmdao.readthedocs.org/en/1.5.0/usr-guide/tutorials/doe-drivers.html?highlight=driver) to show my problem. I want to use the same approach for a component where the "params" are arrays and no longer floats. See the example below.
from openmdao.api import IndepVarComp, Group, Problem, ScipyOptimizer, ExecComp, DumpRecorder, Component
from openmdao.drivers.latinhypercube_driver import LatinHypercubeDriver, OptimizedLatinHypercubeDriver
import numpy as np

class Paraboloid(Component):
    """ Evaluates the equation f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3 """

    def __init__(self):
        super(Paraboloid, self).__init__()
        self.add_param('x', val=0.0)
        self.add_param('y', val=0.0)
        self.add_output('f_xy', val=0.0)

    def solve_nonlinear(self, params, unknowns, resids):
        """f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3"""
        x = params['x']
        y = params['y']
        unknowns['f_xy'] = (x - 3.0)**2 + x * y + (y + 4.0)**2 - 3.0

    def linearize(self, params, unknowns, resids):
        """ Jacobian for our paraboloid."""
        x = params['x']
        y = params['y']
        J = {}
        J['f_xy', 'x'] = 2.0 * x - 6.0 + y
        J['f_xy', 'y'] = 2.0 * y + 8.0 + x
        return J

class ParaboloidArray(Component):
    """ Evaluates the equation f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3 """

    def __init__(self):
        super(ParaboloidArray, self).__init__()
        self.add_param('X', val=np.array([0., 0.]))
        self.add_output('f_xy', val=0.0)

    def solve_nonlinear(self, params, unknowns, resids):
        """f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3"""
        x = params['X'][0]
        y = params['X'][1]
        unknowns['f_xy'] = (x - 3.0)**2 + x * y + (y + 4.0)**2 - 3.0

top = Problem()
root = top.root = Group()
root.add('p1', IndepVarComp('x', 50.0), promotes=['*'])
root.add('p2', IndepVarComp('y', 50.0), promotes=['*'])
root.add('comp', Paraboloid(), promotes=['*'])
top.driver = OptimizedLatinHypercubeDriver(num_samples=4, seed=0, population=20, generations=4, norm_method=2)
top.driver.add_desvar('x', lower=-50.0, upper=50.0)
top.driver.add_desvar('y', lower=-50.0, upper=50.0)
top.driver.add_objective('f_xy')
top.setup()
top.run()
top.cleanup()

###########################
print("case float ok")

top = Problem()
root = top.root = Group()
root.add('p1', IndepVarComp('X', np.array([50., 50.])), promotes=['*'])
root.add('comp', ParaboloidArray(), promotes=['*'])
top.driver = OptimizedLatinHypercubeDriver(num_samples=4, seed=0, population=20, generations=4, norm_method=2)
top.driver.add_desvar('X', lower=np.array([-50., -50.]), upper=np.array([50., 50.]))
top.driver.add_objective('f_xy')
top.setup()
top.run()
top.cleanup()
I obtain the following error:
Traceback (most recent call last):
File "C:\Program Files (x86)\Wing IDE 101 5.0\src\debug\tserver\_sandbox.py", line 102, in <module>
File "D:\tlefeb\Anaconda2\Lib\site-packages\openmdao\core\problem.py", line 1038, in run
self.driver.run(self)
File "D:\tlefeb\Anaconda2\Lib\site-packages\openmdao\drivers\predeterminedruns_driver.py", line 108, in run
for run in runlist:
File "D:\tlefeb\Anaconda2\Lib\site-packages\openmdao\drivers\latinhypercube_driver.py", line 57, in _build_runlist
design_var_buckets = self._get_buckets(bounds['lower'], bounds['upper'])
File "D:\tlefeb\Anaconda2\Lib\site-packages\openmdao\drivers\latinhypercube_driver.py", line 101, in _get_buckets
bucket_walls = np.linspace(low, high, self.num_samples + 1)
File "D:\tlefeb\Anaconda2\Lib\site-packages\numpy\core\function_base.py", line 102, in linspace
if step == 0:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Did I misunderstand something in my way of coding?
I get a different error than you using the latest OpenMDAO master, but I get an error nonetheless. There isn't anything wrong with your model; rather, there are some bugs with using array variables for DOEs. I've added a bug-fix story to the OpenMDAO backlog, which we'll hopefully be able to deal with in the next couple of weeks. We'd gladly accept a pull request if you develop a fix before we get to it, though. In the meantime, a possible workaround is sketched below.
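The workaround (a sketch reusing only the API already shown above; the scalar names x0/x1 are assumptions, not the eventual fix) is to drive each array element as a scalar design variable and rebuild the array inside the component:

# scalar IndepVarComps instead of one array source
root.add('p1', IndepVarComp('x0', 50.0), promotes=['*'])
root.add('p2', IndepVarComp('x1', 50.0), promotes=['*'])
root.add('comp', ParaboloidArray(), promotes=['*'])

# in ParaboloidArray.__init__, declare the scalars instead of the array:
#     self.add_param('x0', val=0.0)
#     self.add_param('x1', val=0.0)
# and in solve_nonlinear, rebuild the array:
#     X = np.array([params['x0'], params['x1']])

top.driver.add_desvar('x0', lower=-50.0, upper=50.0)
top.driver.add_desvar('x1', lower=-50.0, upper=50.0)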

Blender from_pydata error when reading vertex positions from a file

I get the error:
Error: Array length mismatch (expected 3, got 13)
TypeError: a float is required
Traceback (most recent call last):
File "\Test.py", line 393, in from_pydata
File "C:\Program Files (x86)\Blender Foundation\Blender\2.68\2.68\scripts\modules\bpy_types.py", line 393, in from_pydata
self.vertices.foreach_set("co", vertices_flat)
TypeError: couldn't access the py sequence
Error: Python script fail, look in the console for now...
Here is the code:
filePath = "C:\\Users\\siba\\Desktop\\1x1x1.blb"
f = open(filePath)
line = f.readline()
while line:
if(line == "POSITION:\n"):
POS1 = f.readline().replace('\n','')
line = f.readline()
f.close()
coord1 = POS1
Verts = [coord1]
import bpy
profile_mesh = bpy.data.meshes.new("Base_Profile_Data")
profile_mesh.from_pydata(Verts, [], [])
profile_mesh.update()
profile_object = bpy.data.objects.new("Base_Profile", profile_mesh)
profile_object.data = profile_mesh
scene = bpy.context.scene
scene.objects.link(profile_object)
profile_object.select = True
Here is 1x1x1.blb:
POSITION:
0.5 0.5 0.5
Just a stab in the dark, as I don't script Blender and I cannot be bothered to find the docs, but I would imagine Verts needs to be a list of vertices, each one three floats, and you are providing a single space-separated string (13 characters, hence "expected 3, got 13"), so this might work:
coord1 = [float(s) for s in POS1.split()]  # "0.5 0.5 0.5" -> [0.5, 0.5, 0.5]
Verts = [coord1]  # from_pydata expects a sequence of 3-float vertices
