TypeError: The JSON content is required to be a `dict`, but found <class 'list'> - tensorflow.js
When I run "tensorflowjs_converter" under Python 3.7, it reports this error:
TypeError: The JSON content is required to be a `dict`, but found <class 'list'>.
I want to convert the JSON file to a Keras SavedModel:
tensorflowjs_converter --input_format tfjs_layers_model --output_format keras_saved_model tiny_face_js/tiny_face_detector_model-weights_manifest.json tiny_face_h5
But it failed, so I looked into the JSON file:
[{"weights":[{"name":"conv0/filters","shape":[3,3,3,16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.009007044399485869,"min":-1.2069439495311063}},{"name":"conv0/bias","shape":[16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.005263455241334205,"min":-0.9211046672334858}},{"name":"conv1/depthwise_filter","shape":[3,3,16,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004001977630690033,"min":-0.5042491814669441}},{"name":"conv1/pointwise_filter","shape":[1,1,16,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.013836609615999109,"min":-1.411334180831909}},{"name":"conv1/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0015159862590771096,"min":-0.30926119685173037}},{"name":"conv2/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002666276225856706,"min":-0.317286870876948}},{"name":"conv2/pointwise_filter","shape":[1,1,32,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.015265831292844286,"min":-1.6792414422128714}},{"name":"conv2/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0020280554598453,"min":-0.37113414915168985}},{"name":"conv3/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006100742489683862,"min":-0.8907084034938438}},{"name":"conv3/pointwise_filter","shape":[1,1,64,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.016276211832083907,"min":-2.0508026908425725}},{"name":"conv3/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.003394414279975143,"min":-0.7637432129944072}},{"name":"conv4/depthwise_filter","shape":[3,3,128,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006716050119961009,"min":-0.8059260143953211}},{"name":"conv4/pointwise_filter","shape":[1,1,128,256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.021875603993733724,"min":-2.887579727172
8514}},{"name":"conv4/bias","shape":[256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0041141652009066415,"min":-0.8187188749804216}},{"name":"conv5/depthwise_filter","shape":[3,3,256,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008423839597141042,"min":-0.9013508368940915}},{"name":"conv5/pointwise_filter","shape":[1,1,256,512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.030007277283014035,"min":-3.8709387695088107}},{"name":"conv5/bias","shape":[512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008402082966823203,"min":-1.4871686851277068}},{"name":"conv8/filters","shape":[1,1,512,25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.028336129469030042,"min":-4.675461362389957}},{"name":"conv8/bias","shape":[25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002268134028303857,"min":-0.41053225912299807}}],"paths":["tiny_face_detector_model-shard1"]}]
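The error itself points at the mismatch: the converter expects the top-level JSON to be a dict (a full model.json with a "modelTopology" field), while a weights manifest like the one above is a top-level list. A minimal sketch to tell the two apart before running the converter (the helper name is mine, not part of tensorflowjs):

```python
import json

def classify_tfjs_json(path):
    """Return 'layers-model config' if the file looks like a model.json
    produced by the tfjs converter, else 'weights manifest'."""
    with open(path) as f:
        content = json.load(f)
    # A full model.json is a dict containing "modelTopology";
    # a bare weights manifest is a top-level list of weight groups.
    if isinstance(content, dict) and "modelTopology" in content:
        return "layers-model config"
    return "weights manifest"
```

Running this on the file above would report a weights manifest, which is why the converter refuses it regardless of whether the outer brackets are removed.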
I tried deleting the outer "[]", but then it reports:
Traceback (most recent call last):
  File "e:\users\admin\anaconda3\envs\ai_python3.7\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "e:\users\admin\anaconda3\envs\ai_python3.7\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\admin\AppData\Roaming\Python\Python37\Scripts\tensorflowjs_converter.exe\__main__.py", line 7, in <module>
  File "C:\Users\admin\AppData\Roaming\Python\Python37\site-packages\tensorflowjs\converters\converter.py", line 638, in pip_main
    main([' '.join(sys.argv[1:])])
  File "C:\Users\admin\AppData\Roaming\Python\Python37\site-packages\tensorflowjs\converters\converter.py", line 642, in main
    convert(argv[0].split(' '))
  File "C:\Users\admin\AppData\Roaming\Python\Python37\site-packages\tensorflowjs\converters\converter.py", line 605, in convert
    args.output_path)
  File "C:\Users\admin\AppData\Roaming\Python\Python37\site-packages\tensorflowjs\converters\converter.py", line 257, in dispatch_tensorflowjs_to_keras_saved_model_conversion
    model = keras_tfjs_loader.load_keras_model(config_json_path)
  File "C:\Users\admin\AppData\Roaming\Python\Python37\site-packages\tensorflowjs\converters\keras_tfjs_loader.py", line 194, in load_keras_model
    _check_config_json(config_json)
  File "C:\Users\admin\AppData\Roaming\Python\Python37\site-packages\tensorflowjs\converters\keras_tfjs_loader.py", line 96, in _check_config_json
    raise KeyError('Field "modelTopology" is missing from the JSON content.')
KeyError: 'Field "modelTopology" is missing from the JSON content.'
Is there any workaround to resolve the problem?
Thanks & Regards!
Jun Yan
When you specify tfjs_layers_model as the input format, the input should be a model.json generated by the tfjs-converter in advance. The format looks like this:
{
"format": "layers-model",
"generatedBy": "1.13.1",
"convertedBy": "TensorFlow.js Converter v1.4.0",
"userDefinedMetadata": {
//...
}
}
One note: a tfjs_layers_model can only be created from keras or keras_saved_model; tf_saved_model is not supported for the layers model. The command to create a layers model may look like this:
$ tensorflowjs_converter \
--input_format=keras \
--output_format=tfjs_layers_model \
/path/to/keras_model \
/path/to/tfjs_model
Then you can recreate the Keras model like this:
$ tensorflowjs_converter \
--input_format tfjs_layers_model \
--output_format keras_saved_model \
/path/to/tfjs_model/model.json \
/path/to/tiny_face_h5
For more detail, see: Converting a TensorFlow SavedModel, TensorFlow Hub module, Keras HDF5 or tf.keras SavedModel to a web-friendly format
Related
NeuralProphet ValueError without any message
I will try to be as short as possible. I ran a NeuralProphet forecasting job on multiple products.

Task 'model_selection': Exception encountered during task execution!
Traceback (most recent call last):
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/prefect/engine/task_runner.py", line 880, in get_task_run_state
    value = prefect.utilities.executors.run_task_with_timeout(
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/prefect/utilities/executors.py", line 468, in run_task_with_timeout
    return task.run(*args, **kwargs)  # type: ignore
  File "/builds/-/--prefect-workflows/workflows/worker_flow.py", line 108, in model_selection
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/python_translation/model_selection_master.py", line 483, in run_model_selection
    ) = cross_validate_neuralprophet(
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/python_translation/models/NeuralProphet.py", line 169, in cross_validate_neuralprophet
    train = NeuralProphet_model.fit(df=df_train, freq="W-MON")
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/forecaster.py", line 592, in fit
    metrics_df = self._train(df_dict, progress=progress)
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/forecaster.py", line 1806, in _train
    loader = self._init_train_loader(df_dict)
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/forecaster.py", line 1572, in _init_train_loader
    self.config_normalization.init_data_params(
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/configure.py", line 41, in init_data_params
    self.local_data_params, self.global_data_params = df_utils.init_data_params(
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/df_utils.py", line 260, in init_data_params
    global_data_params = data_params_definition(
  File "/root/.cache/pypoetry/virtualenvs/--py3.8/lib/python3.8/site-packages/neuralprophet/df_utils.py", line 176, in data_params_definition
    data_params[covar] = get_normalization_params(
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/df_utils.py", line 300, in get_normalization_params
    norm_type = auto_normalization_setting(array)
  File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/df_utils.py", line 290, in auto_normalization_setting
    raise ValueError
ValueError

Describe the bug: Ran a forecasting job ... and it raised a ValueError without any additional mentions.
To reproduce: I really do not know. It was a Prefect job that I ran over 200 products, and I have no idea why it failed.
Expected behavior: I expected it to forecast without returning an error.
What actually happens: It crashes with a ValueError.
Screenshots: Printouts are above.
Environment: Python 3.8.10; NeuralProphet 0.3.2, installed from PyPI with pip install neuralprophet.
Additional context: These are scheduled as a Prefect workflow, hence I do not run things manually. Around 150 products ran without any issues, and this one returned a ValueError.
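The traceback ends in neuralprophet's auto_normalization_setting, which inspects a single column's values. Without the library's exact rules at hand, a hedged pre-flight check for the usual suspects that break per-column normalization (constant, all-NaN, or non-numeric series) can at least identify which of the 200 products trips it; the function and column names below are placeholders, not NeuralProphet API:

```python
import pandas as pd

def preflight_check(df, value_cols):
    """Flag columns that commonly break auto-normalization:
    non-numeric, all-NaN, or constant series.  Column names are
    placeholders -- adapt to your NeuralProphet dataframe."""
    problems = {}
    for col in value_cols:
        s = df[col]
        if not pd.api.types.is_numeric_dtype(s):
            problems[col] = "non-numeric"
        elif s.isna().all():
            problems[col] = "all NaN"
        elif s.nunique(dropna=True) <= 1:
            problems[col] = "constant"
    return problems
```

Running this over each product's dataframe before calling fit() would narrow down whether a degenerate input, rather than the library itself, is the trigger.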
Error Importing Pandas in Volttron Platform
I have created a Volttron agent and am trying to use the pandas library, but I am getting an error that pandas is not installed, while it is actually there.

ERROR:volttron.platform.packaging:
Traceback (most recent call last):
  File "setup.py", line 11, in <module>
    _temp = __import__(agent_module, globals(), locals(), ['__version__'], 0)
  File "/tmp/tmpd_5srlls/pkg/weather/agent.py", line 12, in <module>
    import pandas as pd
  File "/home/pi/volttron/env/lib/python3.7/site-packages/pandas/__init__.py", line 17, in <module>
    "Unable to import required dependencies:\n" + "\n".join(missing_dependencies)
ImportError: Unable to import required dependencies:
numpy:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed.

We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html

Please note and check the following:

  * The Python version is: Python3.7 from "/home/pi/volttron/env/bin/python"
  * The NumPy version is: "1.19.5"

and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help.

Original error was: libf77blas.so.3: cannot open shared object file: No such file or directory

Traceback (most recent call last):
  File "scripts/install-agent.py", line 340, in <module>
    if not os.path.isfile(opts.package):
  File "/usr/lib/python3.7/genericpath.py", line 30, in isfile
    st = os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
The shared library cannot be located. Please try: sudo apt-get install libatlas-base-dev
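For a quick check from Python of whether the dynamic loader can now locate the BLAS library the traceback names (libf77blas, provided by libatlas-base-dev on Debian-based systems such as Raspbian), ctypes can be used. This is a diagnostic sketch, not part of Volttron; the helper name is mine:

```python
import ctypes.util

def blas_available(name="f77blas"):
    """Return the resolved shared-library name if the loader can find
    lib<name>, else None.  'f77blas' comes from the error message above."""
    return ctypes.util.find_library(name)

# After `sudo apt-get install libatlas-base-dev`, this should print a
# library name instead of None.
print(blas_available())
```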
NoseGAE: Cannot import dev_appserver, but App Engine is still in PYTHONPATH
I am getting the following error when trying to run nosetests from my GAE project:

nosetests --nologcapture --with-gae --without-sandbox --gae-lib-root=/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine

Traceback (most recent call last):
  File "/usr/local/bin/nosetests", line 8, in <module>
    load_entry_point('nose==1.3.4', 'console_scripts', 'nosetests')()
  File "/Library/Python/2.7/site-packages/nose/core.py", line 121, in __init__
    **extra_args)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/main.py", line 94, in __init__
    self.parseArgs(argv)
  File "/Library/Python/2.7/site-packages/nose/core.py", line 145, in parseArgs
    self.config.configure(argv, doc=self.usage())
  File "/Library/Python/2.7/site-packages/nose/config.py", line 346, in configure
    self.plugins.configure(options, self)
  File "/Library/Python/2.7/site-packages/nose/plugins/manager.py", line 284, in configure
    cfg(options, config)
  File "/Library/Python/2.7/site-packages/nose/plugins/manager.py", line 99, in __call__
    return self.call(*arg, **kw)
  File "/Library/Python/2.7/site-packages/nose/plugins/manager.py", line 167, in simple
    result = meth(*arg, **kw)
  File "/Library/Python/2.7/site-packages/nosegae.py", line 87, in configure
    from google.appengine.tools import old_dev_appserver as dev_appserver
ImportError: cannot import name old_dev_appserver

The sys.path reads:

'/Users/dsinha/Downloads/eclipse/plugins/org.python.pydev_3.9.0.201411111611/pysrc',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/antlr3',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/fancy_urllib',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/ipaddr',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/yaml-3.10',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/rsa',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/pyasn1',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/pyasn1_modules',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/simplejson',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/django-1.4',
'/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/goo...

so the App Engine libraries should be getting pulled in. Actually it is first failing to import dev_appserver, and then trying and failing to import old_dev_appserver.

The directory listing inside the App Engine tools package is:

bash-3.2$ ls
__init__.py appcfg.py backends_xml_parser.py dispatch_xml_parser.py handler_generator.py php_cli.py value_mixin.pyc __init__.pyc appcfg_java.py boolean_action.py docker handler_generator.pyc queue_xml_parser.py web_xml_parser.py adaptive_thread_pool.py appengine_rpc.py boolean_action.pyc dos_xml_parser.py jarfile.py queue_xml_parser.pyc web_xml_parser.pyc api_server.py appengine_rpc.pyc bulkload_client.py download_appstats.py java_quickstart.py remote_api_shell.py xml_parser_utils.py app_engine_config_exception.py appengine_rpc_httplib2.py bulkloader.py endpointscfg.py java_quickstart.pyc requeue.py xml_parser_utils.pyc app_engine_config_exception.pyc augment_mimetypes.py cron_xml_parser.py gen_protorpc.py java_utils.py sdk_update_checker.py yaml_translator.py app_engine_web_xml_parser.py augment_mimetypes.pyc dev-channel-js.js handler.py java_utils.pyc sdk_update_checker.pyc yaml_translator.pyc app_engine_web_xml_parser.pyc backends_conversion.py devappserver2 handler.pyc os_compat.py value_mixin.py
bash-3.2$ pwd
/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/google/appengine/tools

I also tried to find the modules available inside the google.appengine.tools package:

>>> import pkgutil
>>> [name for _, name, _ in pkgutil.iter_modules(['testpkg'])]
[]

This problem started occurring after I upgraded to App Engine 1.9.10 (to use the async search features). In a problem that I think is related, when I try to run the debug server from PyDev, it just silently terminates on any page request (localhost:8080). Running dev_appserver . from the command line works fine though.
NoseGAE broke with App Engine 1.9.17: https://github.com/Trii/NoseGAE/issues/6. Downgrading to 1.9.15 made the problem go away temporarily while waiting for the issue to be resolved by NoseGAE.
Exception in IDLE (Python 2.7) - possible bug in IDLE?
I'm trying to run a meta-analysis on a database of fMRI data, using the neurosynth Python library through IDLE. When I try to run even some of the most basic functions, I get an error that is not in my own code or in the neurosynth modules; it seems to be a bug in IDLE itself. I uninstalled and reinstalled Python 2.7, reinstalled neurosynth and its dependencies, and ran into the same error. I've pasted my code below, followed by the error message, which appears in the Unix shell (not in the IDLE shell). Has anybody come across this error before using IDLE and Python 2.7?

The script:

from neurosynth.base.dataset import Dataset
from neurosynth.analysis import meta, decode, network
import neurosynth
neurosynth.set_logging_level('info')
dataset = Dataset('data/database.txt')
dataset.add_features('data/features.txt')
dataset.save('dataset.pkl')
print 'done'

The error message which appeared in the Unix shell:

----------------------------------------
Unhandled server exception!
Thread: SockThread
Client Address: ('127.0.0.1', 46779)
Request: <socket._socketobject object at 0xcb8d7c0>
Traceback (most recent call last):
  File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 284, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 310, in process_request
    self.finish_request(request, client_address)
  File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 323, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 503, in __init__
    SocketServer.BaseRequestHandler.__init__(self, sock, addr, svr)
  File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 638, in __init__
    self.handle()
  File "/usr/global/python/2.7.3/lib/python2.7/idlelib/run.py", line 265, in handle
    rpc.RPCHandler.getresponse(self, myseq=None, wait=0.05)
  File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 280, in getresponse
    response = self._getresponse(myseq, wait)
  File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 300, in _getresponse
    response = self.pollresponse(myseq, wait)
  File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 424, in pollresponse
    message = self.pollmessage(wait)
  File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 376, in pollmessage
    packet = self.pollpacket(wait)
  File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 347, in pollpacket
    r, w, x = select.select([self.sock.fileno()], [], [], wait)
error: (4, 'Interrupted system call')
*** Unrecoverable, server exiting!
----------------------------------------

Thanks in advance!
IDLE is meant for interactive exploration in the shell, for editing in an editor, and for testing programs by running them from an editor. It is not meant for production running of programs once developed. If there is a problem, one should separate the IDLE part from the running-with-Python part. So in the Unix shell, run python -m idlelib (for instance) to see whether IDLE starts correctly. Then, in an appropriate directory, run python path-to-my-file.py. Which of the two does not work? The error message is definitely odd, as it has more than just the Python traceback. On the other hand, it does not start with a line of your code. I have no idea why the select call would be interrupted.
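For what it's worth, the "(4, 'Interrupted system call')" is EINTR: on Python 2, a signal arriving during select.select raised select.error instead of retrying, which is exactly what kills IDLE's RPC loop here (Python 3.5+ retries automatically per PEP 475). A sketch of the classic retry pattern, written to run on both Python 2 and 3:

```python
import errno
import select

def select_retry_eintr(rlist, wlist, xlist, timeout):
    """select.select, retried when a signal interrupts the call.

    On Python 2, an interrupted select raised select.error whose first
    argument was errno.EINTR; since Python 3.5 (PEP 475) the interpreter
    retries automatically, so this wrapper mainly documents the pattern
    a caller like IDLE's rpc.py would have needed.
    """
    while True:
        try:
            return select.select(rlist, wlist, xlist, timeout)
        except select.error as e:
            # py3: OSError with .errno; py2: select.error with args[0]
            code = getattr(e, "errno", None)
            if code is None and e.args:
                code = e.args[0]
            if code != errno.EINTR:
                raise
            # Interrupted by a signal: retry with the same arguments.
```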
'platform' import disappearing from selenium as a result of my script
Linux ip-172-31-36-170 3.10.35-43.137.amzn1.x86_64 #1 SMP Wed Apr 2 09:36:59 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Amazon Linux AMI release 2014.03 (cpe:/o:amazon:linux:2014.03:ga)

I've run into a weird problem with a script that uses selenium.webdriver.PhantomJS.

SYMPTOMS: My script uses the following to start a PhantomJS session:

from selenium import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
(...)
def load_driver(self, _driver="phantomjs", _path="./phantomjs"):
    if "phantom" in str(_driver).lower():
        self.driver = webdriver.PhantomJS(_path)

It fails with:

selenium.common.exceptions.WebDriverException: Message: 'Unable to start phantomjs with ghostdriver.' ; Screenshot: available via screen

HOWEVER, at the Python command line, everything works fine:

from selenium import webdriver
_path = './phantomjs.exe'
driver = webdriver.PhantomJS(_path)
PLATFORM: Linux
platform.system() != 'Windows': True

(I'll explain "PLATFORM: Linux" and "platform.system() != 'Windows': True" below.)

SO... I traced the error to "/usr/lib/python2.6/site-packages/selenium/webdriver/phantomjs/service.py", and specifically this code:

def start(self):
    """
    """
    try:
        print "PLATFORM:", platform.system()  # ADDED BY ME
        # FOLLOWING ADDED BY ME, NOW CAUSING THE ERROR
        print "platform.system() != 'Windows': ", platform.system() != 'Windows'
        self.process = subprocess.Popen(self.service_args, stdin=subprocess.PIPE,
                                        close_fds=platform.system() != 'Windows',  # <-- THIS CAUSED ORIG PROB
                                        stdout=self._log, stderr=self._log)
    except Exception as e:
        raise WebDriverException("Unable to start phantomjs with ghostdriver.", e)

AND HERE'S THE ISSUE: When I run the code at the Python command line (see above), everything is fine, AND the responses to platform.system() and platform.system() != 'Windows' are correct. However, when I run my script, platform.system() reports blank and platform.system() != 'Windows' errors (see the actual output below). So, for some reason, when my script loads selenium and runs, THE SELENIUM CODE loses the 'platform' import. Thanks for the help!

ACTUAL OUTPUT BELOW (notice that 'PLATFORM:' is followed by blank instead of 'Linux', and the next line, platform.system() != 'Windows', triggers the error. These lines were added by me into the SELENIUM code, not my code!)

PLATFORM:
Traceback (most recent call last):
  File "./agmarknet.py", line 834, in <module>
    username = options.username  # --username
  File "./agmarknet.py", line 124, in __init__
    self.load_driver(driver, driver_path)
  File "./agmarknet.py", line 521, in load_driver
    self.driver = webdriver.PhantomJS(_path)
  File "/usr/lib/python2.6/site-packages/selenium/webdriver/phantomjs/webdriver.py", line 50, in __init__
    self.service.start()
  File "/usr/lib/python2.6/site-packages/selenium/webdriver/phantomjs/service.py", line 64, in start
    print "PLATFORM:", platform.system()  # 333
  File "/usr/lib64/python2.6/platform.py", line 1272, in system
    return uname()[0]
  File "/usr/lib64/python2.6/platform.py", line 1239, in uname
    processor = _syscmd_uname('-p','')
  File "/usr/lib64/python2.6/platform.py", line 995, in _syscmd_uname
    output = string.strip(f.read())
  File "./agmarknet.py", line 350, in _signal_handler
    self._cleanup()
  File "./agmarknet.py", line 194, in _cleanup
    self.driver.close()
AttributeError: 'Agmarknet' object has no attribute 'driver'

2014-05-18 12:42:40,281 - Agmarknet - INFO - Closing WebDriver...

Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "./agmarknet.py", line 194, in _cleanup
    self.driver.close()
AttributeError: 'Agmarknet' object has no attribute 'driver'

Error in sys.exitfunc:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "./agmarknet.py", line 194, in _cleanup
    self.driver.close()
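As an aside, the secondary AttributeError in the output comes from the script's own _cleanup running (via the signal handler and atexit) before self.driver has ever been assigned. A defensive guard avoids that noise; the class and method names below are taken from the traceback, but the body is a hypothetical sketch, not the original script:

```python
class Agmarknet(object):
    # ... rest of the class as in the original script ...

    def _cleanup(self):
        # The signal handler / atexit hook can fire before load_driver()
        # has assigned self.driver, so check before closing.
        driver = getattr(self, "driver", None)
        if driver is not None:
            driver.close()
```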