I will try to keep this as short as possible.
I ran a NeuralProphet forecasting job on multiple products.
Task 'model_selection': Exception encountered during task execution!
Traceback (most recent call last):
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/prefect/engine/task_runner.py", line 880, in get_task_run_state
value = prefect.utilities.executors.run_task_with_timeout(
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/prefect/utilities/executors.py", line 468, in run_task_with_timeout
return task.run(*args, **kwargs) # type: ignore
File "/builds/-/--prefect-workflows/workflows/worker_flow.py", line 108, in model_selection
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/python_translation/model_selection_master.py", line 483, in run_model_selection
) = cross_validate_neuralprophet(
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/python_translation/models/NeuralProphet.py", line 169, in cross_validate_neuralprophet
train = NeuralProphet_model.fit(df=df_train, freq="W-MON")
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/forecaster.py", line 592, in fit
metrics_df = self._train(df_dict, progress=progress)
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/forecaster.py", line 1806, in _train
loader = self._init_train_loader(df_dict)
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/forecaster.py", line 1572, in _init_train_loader
self.config_normalization.init_data_params(
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/configure.py", line 41, in init_data_params
self.local_data_params, self.global_data_params = df_utils.init_data_params(
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/df_utils.py", line 260, in init_data_params
global_data_params = data_params_definition(
File "/root/.cache/pypoetry/virtualenvs/--py3.8/lib/python3.8/site-packages/neuralprophet/df_utils.py", line 176, in data_params_definition
data_params[covar] = get_normalization_params(
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/df_utils.py", line 300, in get_normalization_params
norm_type = auto_normalization_setting(array)
File "/root/.cache/pypoetry/virtualenvs/--prefect-workflows-9TtSrW0h-py3.8/lib/python3.8/site-packages/neuralprophet/df_utils.py", line 290, in auto_normalization_setting
raise ValueError
ValueError
Describe the bug
Ran a forecasting job ... and it raised a ValueError without any additional message.
To Reproduce
I really do not know how to reproduce it. It was a Prefect job that I ran over 200 products, and I have no idea why it failed for this one.
Expected behavior
I expected it to forecast without returning an error.
What actually happens
It crashes with a ValueError.
Screenshots
Printouts are above.
Environment (please complete the following information):
Python environment: 3.8.10
NeuralProphet version: neuralprophet 0.3.2, installed from PyPI with pip install neuralprophet
Additional context
These runs are scheduled as a Prefect workflow, so I do not run anything manually. Around 150 products ran without any issues, and this one returned a ValueError.
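For anyone hitting the same bare ValueError: the traceback ends in neuralprophet's auto_normalization_setting, which (in this version) raises when a series cannot be assigned a normalization strategy. One plausible trigger is a target or covariate column that is constant over the training window, e.g. a product with no sales variation in the cross-validation slice. A pre-flight check like the following (the helper name is my own, not part of NeuralProphet) can flag degenerate products before fitting:

```python
import pandas as pd

def find_constant_columns(df: pd.DataFrame) -> list:
    """Return names of columns with fewer than two distinct non-NaN values,
    a common trigger for normalization errors during fitting."""
    return [col for col in df.columns if df[col].dropna().nunique() < 2]

# Hypothetical example: a product whose training window is all one value
df_train = pd.DataFrame({
    "ds": pd.date_range("2021-01-04", periods=4, freq="W-MON"),
    "y": [5.0, 5.0, 5.0, 5.0],
})
print(find_constant_columns(df_train))  # ['y']
```

Skipping (or logging) the products this flags would let the remaining ~199 forecasts complete instead of failing the whole Prefect task.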
Related
I'm really interested in running load tests with locust/selenium. I saw some really promising results using the older framework 'realbrowserlocusts', but I'm having an issue getting locust_plugins to run on Windows.
Do you have any projects on GitHub that would run on Windows? I've started the Selenium server and have the chromedriver in the right place.
Here's the call stack:
(venv) C:\Users\localuser\PycharmProjects\pythonProject\locust-plugins\examples>locust -f cyberw_test.py
[2021-05-28 11:24:51,348] LHTU05CD943125T/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
[2021-05-28 11:24:51,356] LHTU05CD943125T/INFO/locust.main: Starting Locust 1.4.3
Traceback (most recent call last):
File "src\gevent\greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\locust\web.py", line 339, in start_server
self.server.serve_forever()
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\baseserver.py", line 398, in serve_forever
self.start()
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\baseserver.py", line 336, in start
self.init_socket()
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\pywsgi.py", line 1545, in init_socket
StreamServer.init_socket(self)
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\server.py", line 180, in init_socket
self.socket = self.get_listener(self.address, self.backlog, self.family)
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\server.py", line 192, in get_listener
return _tcp_listener(address, backlog=backlog, reuse_addr=cls.reuse_addr, family=family)
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\server.py", line 288, in _tcp_listener
sock.bind(address)
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\_socketcommon.py", line 563, in bind
return self._sock.bind(address)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted: ('', 8089)
2021-05-28T17:24:51Z <Greenlet at 0x4f5c820: <bound method WebUI.start_server of <locust.web.WebUI object at 0x04F98130>>> failed with OSError
[2021-05-28 11:24:51,388] LHTU05CD943125T/CRITICAL/locust.web: Unhandled exception in greenlet: <Greenlet at 0x4f5c820: <bound method WebUI.start_server of <locust.web.WebUI object at 0x04F98130>>>
Traceback (most recent call last):
File "src\gevent\greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\locust\web.py", line 339, in start_server
self.server.serve_forever()
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\baseserver.py", line 398, in serve_forever
self.start()
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\baseserver.py", line 336, in start
self.init_socket()
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\pywsgi.py", line 1545, in init_socket
StreamServer.init_socket(self)
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\server.py", line 180, in init_socket
self.socket = self.get_listener(self.address, self.backlog, self.family)
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\server.py", line 192, in get_listener
return _tcp_listener(address, backlog=backlog, reuse_addr=cls.reuse_addr, family=family)
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\server.py", line 288, in _tcp_listener
sock.bind(address)
File "c:\users\localuser\pycharmprojects\pythonproject\venv\lib\site-packages\gevent\_socketcommon.py", line 563, in bind
return self._sock.bind(address)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted: ('', 8089)
[2021-05-28 11:24:51,389] LHTU05CD943125T/INFO/locust.main: Running teardowns...
[2021-05-28 11:24:51,390] LHTU05CD943125T/INFO/locust.main: Shutting down (exit code 2), bye.
[2021-05-28 11:24:51,390] LHTU05CD943125T/INFO/locust.main: Cleaning up runner...
Name # reqs # fails | Avg Min Max Median | req/s failures/s
Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00
Response time percentiles (approximated)
Type Name 50% 66% 75% 80% 90% 95% 98% 99% 99.9% 99.99% 100% # reqs
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
I think this is the underlying problem:
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted: ('', 8089)
Make sure you don't have some previously started instance of Locust running.
locust-plugins is mainly built/tested for Linux/macOS, so you may run into other issues as well, but this one looks like a pure Locust issue.
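To confirm that theory before restarting anything: WinError 10048 means the web UI port (8089) is already bound. On Windows you can find the owner with netstat, or use a quick cross-platform pre-check like this (a sketch of my own, not part of Locust):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port Locust wants; if bind fails, something
    (e.g. a leftover Locust process) already holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            return True
        return False

print(port_in_use(8089))
```

If it reports True, kill the old process (taskkill on Windows) or start Locust with a different port via --web-port.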
For Windows 10 users wanting to run locust-plugins, especially webdriver.py: it took a combination of the right versions of a few things to get it running. When running in secure environments there are additional roadblocks you need to work around. If you install this list of software versions you'll be all set up to run.
Many thanks to Cyberwiz for his help.
This is an odd case for me.
The following function, rpc_presence, uses the pypresence library (https://qwertyquerty.github.io/pypresence/html/index.html), which in turn uses asyncio.
Here's my code:
import random
import threading
import time

def stuff():
    print("do stuff")

def rpc_presence():
    while True:  # The presence stays on as long as the program is running
        RPC.update(details="Great", state=random.choice(quotes), large_image="actual_logo")
        time.sleep(1)

def main_func():
    rpc_thread = threading.Thread(target=rpc_presence)  # was: thread.Threading
    rpc_thread.start()
    stuff_thread = threading.Thread(target=stuff)  # second thread should run stuff, not rpc_presence again
    stuff_thread.start()
I get the error:
Exception in thread Thread-3:
Traceback (most recent call last):
File "C:\Users\Zylly\AppData\Local\Programs\Python\Python39\lib\threading.py", line 954, in _bootstrap_inner
self.run()
File "C:\Users\Zylly\AppData\Local\Programs\Python\Python39\lib\threading.py", line 892, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Zylly\Desktop\Sypherus\Sypherus.py", line 29, in rpc_presence_thread
RPC = Presence(client_id,pipe=0)
File "C:\Users\Zylly\AppData\Local\Programs\Python\Python39\lib\site-packages\pypresence\presence.py", line 13, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\Zylly\AppData\Local\Programs\Python\Python39\lib\site-packages\pypresence\baseclient.py", line 40, in __init__
self.update_event_loop(self.get_event_loop())
File "C:\Users\Zylly\AppData\Local\Programs\Python\Python39\lib\site-packages\pypresence\baseclient.py", line 83, in get_event_loop
loop = asyncio.get_event_loop()
File "C:\Users\Zylly\AppData\Local\Programs\Python\Python39\lib\asyncio\events.py", line 642, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'Thread-3'.
I spoke to the developer and he said he's not much of an asyncio or threading developer, so he can't really help me.
How can I run this function and another function at the same time, when the other function already uses asyncio?
Feel free to look through the source code of the pypresence library; I recommend going through baseclient.py, especially the get_event_loop() function.
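The traceback points at the core issue: asyncio.get_event_loop() raises RuntimeError in any thread except the main one, because worker threads have no event loop by default. Since pypresence calls get_event_loop() in its constructor, constructing Presence inside the thread fails. One fix is to create and register a loop in the thread before the library is used there. This sketch demonstrates the mechanism only (it does not touch pypresence itself):

```python
import asyncio
import threading

results = {}

def worker():
    # A non-main thread has no event loop; asyncio.get_event_loop() raises
    # RuntimeError there. Create and register one first, then construct any
    # asyncio-based client (e.g. pypresence's Presence) in this thread.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    results["has_loop"] = asyncio.get_event_loop() is loop
    loop.close()

t = threading.Thread(target=worker)
t.start()
t.join()
print(results["has_loop"])  # True
```

Applied to the code above, that means the first two lines of rpc_presence should be asyncio.new_event_loop() / asyncio.set_event_loop(...), with RPC = Presence(client_id, pipe=0) created inside the thread rather than at module level.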
I created a training script with hard-coded input. It works as expected as a regular training job, but I couldn't make it work in local mode.
It brings up a container in my local Docker and exits with code 1.
Code:
estimator = SKLearn(entry_point="train_model.py",
                    train_instance_type="local")
estimator.fit()
Here is the exception:
2020-02-22 06:21:05,470 sagemaker-containers INFO Imported framework sagemaker_sklearn_container.training
2020-02-22 06:21:05,480 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)
2020-02-22 06:21:05,504 sagemaker_sklearn_container.training INFO Invoking user training script.
2020-02-22 06:21:06,407 sagemaker-containers ERROR Reporting training FAILURE
2020-02-22 06:21:06,407 sagemaker-containers ERROR framework error:
Traceback (most recent call last):
File "/miniconda3/lib/python3.7/site-packages/sagemaker_containers/_trainer.py", line 81, in train
entrypoint()
File "/miniconda3/lib/python3.7/site-packages/sagemaker_sklearn_container/training.py", line 36, in main
train(framework.training_env())
File "/miniconda3/lib/python3.7/site-packages/sagemaker_sklearn_container/training.py", line 32, in train
training_environment.to_env_vars(), training_environment.module_name)
File "/miniconda3/lib/python3.7/site-packages/sagemaker_containers/_modules.py", line 301, in run_module
_files.download_and_extract(uri, _env.code_dir)
File "/miniconda3/lib/python3.7/site-packages/sagemaker_containers/_files.py", line 129, in download_and_extract
s3_download(uri, dst)
File "/miniconda3/lib/python3.7/site-packages/sagemaker_containers/_files.py", line 164, in s3_download
s3.Bucket(bucket).download_file(key, dst)
File "/miniconda3/lib/python3.7/site-packages/boto3/s3/inject.py", line 246, in bucket_download_file
ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
File "/miniconda3/lib/python3.7/site-packages/boto3/s3/inject.py", line 172, in download_file
extra_args=ExtraArgs, callback=Callback)
File "/miniconda3/lib/python3.7/site-packages/boto3/s3/transfer.py", line 307, in download_file
future.result()
File "/miniconda3/lib/python3.7/site-packages/s3transfer/futures.py", line 106, in result
return self._coordinator.result()
File "/miniconda3/lib/python3.7/site-packages/s3transfer/futures.py", line 265, in result
raise self._exception
File "/miniconda3/lib/python3.7/site-packages/s3transfer/tasks.py", line 255, in _main
self._submit(transfer_future=transfer_future, **kwargs)
File "/miniconda3/lib/python3.7/site-packages/s3transfer/download.py", line 345, in _submit
**transfer_future.meta.call_args.extra_args
File "/miniconda3/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/miniconda3/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
An error occurred (403) when calling the HeadObject operation: Forbidden
tmpe_msr8pi_algo-1-kt1vh_1 exited with code 1
I found out that restarting Docker solved the issue.
After a while it happened again, and a restart solved it again.
I'm using Docker for Windows, and the issue is probably related to the created container's configuration.
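The "restart Docker and it works for a while" pattern is consistent with a known Docker Desktop for Windows quirk: after the host sleeps, the Linux VM's clock can drift, and AWS rejects SigV4-signed requests (including the S3 HeadObject behind this download) once the skew exceeds roughly five minutes; on a HEAD request there is no error body, so botocore surfaces it as a bare 403 Forbidden. Restarting Docker resyncs the clock. This is one plausible explanation, not a confirmed diagnosis; the illustration below uses made-up timestamps:

```python
from datetime import datetime, timedelta

# AWS SigV4 signatures are rejected once client clock skew exceeds ~5 minutes.
MAX_SKEW = timedelta(minutes=5)

def signature_will_fail(vm_clock: datetime, real_time: datetime) -> bool:
    """Rough check: would this much clock skew break request signing?"""
    return abs(vm_clock - real_time) > MAX_SKEW

# Hypothetical drift after the host sleeps: the Docker VM lags 11 minutes
print(signature_will_fail(datetime(2020, 2, 22, 6, 10, 0),
                          datetime(2020, 2, 22, 6, 21, 5)))  # True
```

A quick way to check for real is to compare date on the host with date inside a throwaway container before launching local mode.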
My Flask-based GAE app has been running for a few weeks without issue. Today I noticed the root URL produces a 500 Internal Server Error most of the time. In the logging I see this appears to be related to session handling in Flask (using Flask-Session). Before transitioning to GAE, this app ran on a VM with local Redis instance for well over a year without any problems.
The Memorystore instance has only about 1500 keys at this time and 3 or 4 MB of data, so it is not heavily loaded. The server itself receives very little traffic (just me and the occasional robot). I am looking for insight into what has produced this change in behavior, or what diagnostic procedures I should pursue, since I am new to GAE and the Google Cloud environment.
A typical traceback of the failure looks like this:
Traceback (most recent call last):
File "/env/lib/python3.7/site-packages/flask/app.py", line 1969, in finalize_request
    response = self.process_response(response)
File "/env/lib/python3.7/site-packages/flask/app.py", line 2268, in process_response
    self.session_interface.save_session(self, ctx.session, response)
File "/env/lib/python3.7/site-packages/flask_session/sessions.py", line 166, in save_session
    time=total_seconds(app.permanent_session_lifetime))
File "/env/lib/python3.7/site-packages/redis/client.py", line 1540, in setex
    return self.execute_command('SETEX', name, time, value)
File "/env/lib/python3.7/site-packages/redis/client.py", line 836, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
File "/env/lib/python3.7/site-packages/redis/connection.py", line 1065, in get_connection
    if connection.can_read():
File "/env/lib/python3.7/site-packages/redis/connection.py", line 682, in can_read
    return self._parser.can_read(timeout)
File "/env/lib/python3.7/site-packages/redis/connection.py", line 295, in can_read
    return self._buffer and self._buffer.can_read(timeout)
File "/env/lib/python3.7/site-packages/redis/connection.py", line 205, in can_read
    raise_on_timeout=False)
File "/env/lib/python3.7/site-packages/redis/connection.py", line 173, in _read_from_socket
    data = recv(self._sock, socket_read_size)
File "/env/lib/python3.7/site-packages/redis/_compat.py", line 58, in recv
    return sock.recv(*args, **kwargs)
ConnectionResetError: [Errno 104] Connection reset by peer
Again, this is new behavior. The server worked flawlessly for a couple of weeks. What might have changed and where should I look?
Possible related issue: https://github.com/andymccurdy/redis-py/issues/1186
Using health_check_interval eliminated most, but not all of these "Connection reset by peer" errors for us (GAE Python 2.7):
self._redis = Redis(
    environ.get("REDISHOST", "localhost"),
    int(environ.get("REDISPORT", 6379)),
    health_check_interval=30,
)
Perhaps a value lower than 30 would eliminate the remaining occurrences.
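On top of health_check_interval, an application-level retry can absorb the occasional reset that still slips through: Memorystore (like many managed Redis services) silently drops idle connections, so the first command after a quiet period can hit ECONNRESET even though the server is healthy, and simply reissuing the command on a fresh connection succeeds. A minimal sketch (helper names are mine, not part of redis-py or Flask-Session):

```python
import time

def call_with_retry(fn, attempts=3, delay=0.2,
                    retry_on=(ConnectionResetError,)):
    """Call fn(), retrying a couple of times on connection resets.
    Useful for wrapping Redis operations against idle-timeout drops."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Hypothetical usage with a flaky call that fails once, then succeeds
state = {"calls": 0}
def flaky_setex():
    state["calls"] += 1
    if state["calls"] < 2:
        raise ConnectionResetError
    return "OK"

print(call_with_retry(flaky_setex))  # OK
```

Newer redis-py versions also accept a retry/retry_on_error configuration on the client itself, which may be cleaner than wrapping calls by hand.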
I'm trying to run a meta-analysis on a database of fMRI data, using the neurosynth Python library through IDLE. When I try to run even some of the most basic functions, I get an error, and it is not an error in my own code or in the neurosynth modules; it seems to be a bug in IDLE itself.
I uninstalled and reinstalled Python 2.7, reinstalled neurosynth and its dependencies, and ran into the same error. I've pasted my code below, followed by the error message, which appears in the Unix shell (not in the IDLE shell).
Has anybody come across this error before when using IDLE and Python 2.7?
The script:
from neurosynth.base.dataset import Dataset
from neurosynth.analysis import meta, decode, network
import neurosynth
neurosynth.set_logging_level('info')
dataset = Dataset('data/database.txt')
dataset.add_features('data/features.txt')
dataset.save('dataset.pkl')
print 'done'
The error message which appeared in the unix shell:
----------------------------------------
Unhandled server exception!
Thread: SockThread
Client Address: ('127.0.0.1', 46779)
Request: <socket._socketobject object at 0xcb8d7c0>
Traceback (most recent call last):
File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 284, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 310, in process_request
self.finish_request(request, client_address)
File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 503, in __init__
SocketServer.BaseRequestHandler.__init__(self, sock, addr, svr)
File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 638, in __init__
self.handle()
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/run.py", line 265, in handle
rpc.RPCHandler.getresponse(self, myseq=None, wait=0.05)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 280, in getresponse
response = self._getresponse(myseq, wait)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 300, in _getresponse
response = self.pollresponse(myseq, wait)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 424, in pollresponse
message = self.pollmessage(wait)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 376, in pollmessage
packet = self.pollpacket(wait)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 347, in pollpacket
r, w, x = select.select([self.sock.fileno()], [], [], wait)
error: (4, 'Interrupted system call')
*** Unrecoverable, server exiting!
----------------------------------------
Thanks in advance!
IDLE is meant for interactive exploration in the shell, for editing in an editor, and for testing programs by running them from an editor. It is not meant for production running of programs once developed. If there is a problem, one should separate the IDLE part from the running-with-Python part. So in the Unix shell, run python -m idlelib (for instance) to see if IDLE starts correctly. Then, in an appropriate directory, run python path-to-my-file.py. Which of these does not work?
The error message is definitely odd, as it contains more than just the Python traceback. On the other hand, it does not start with a line of your code. I have no idea why the select call would be interrupted.
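For background on the mechanics, whatever the source of the signal: the (4, 'Interrupted system call') is errno.EINTR raised by select(). Under Python 2, any signal delivered during a blocking system call could abort it this way, and IDLE's rpc.py did not retry; Python 3.5+ retries interrupted system calls automatically (PEP 475), which is one reason this class of crash faded. The classic workaround pattern looks like this (an illustration of the failure mode, not a patch for IDLE itself):

```python
import errno
import select

def select_retrying_eintr(rlist, wlist, xlist, timeout):
    """Wrap select.select(), retrying when a signal interrupts the call.
    On Python 2 this raised select.error with errno EINTR; Python 3.5+
    performs this retry automatically (PEP 475)."""
    while True:
        try:
            return select.select(rlist, wlist, xlist, timeout)
        except (OSError, select.error) as e:  # select.error is OSError on Py3
            if e.args[0] != errno.EINTR:
                raise

print(select_retrying_eintr([], [], [], 0))  # ([], [], [])
```

So beyond running the script directly with python instead of through IDLE, moving to a Python version with PEP 475 would also make this particular crash impossible.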