Python Behave tests work, then stop working, then work again without change

I have a simple feature where I am passing in 2 examples.
Background: I create context params calls
  Given I create context params calls
  And I populate default array1
  And I populate default array2

Scenario Outline: I enter x array <a> and <b> and <c> and <d> and <e> and <f> and <g> and <h> and <i> and <j> and <k> and <l>
  Given I have a a array <a>
  And I have a b array <b>
  And I have a c array <c>
  And I have a d array <d>
  And I have a e array <e>
  When I call the interface
  Then I will see <f> <g> <h> <i> <j> <k> <l>
When I run the test, it might work the first time I type behave at the command line, or only on the third, or even the seventh, attempt.
At first I assumed this was my local setup, so I transferred to another computer and installed behave there, and the same problem happened. I assume this is an error in my steps files, but the only error I get is:
Exception OSError: raw write() returned invalid length 1508 (should have been between 0 and 754)
Traceback (most recent call last):
File "\lib\runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "\Scripts\behave.exe\__main__.py", line 9, in <module>
File "\lib\site-packages\behave\__main__.py", line 183, in main
return run_behave(config)
File "\lib\site-packages\behave\__main__.py", line 127, in run_behave
failed = runner.run()
File "\lib\site-packages\behave\runner.py", line 804, in run
return self.run_with_paths()
File "\lib\site-packages\behave\runner.py", line 824, in run_with_paths
return self.run_model()
File "\lib\site-packages\behave\runner.py", line 626, in run_model
failed = feature.run(self)
File "\lib\site-packages\behave\model.py", line 321, in run
failed = scenario.run(runner)
File "\lib\site-packages\behave\model.py", line 1114, in run
failed = scenario.run(runner)
File "c\lib\site-packages\behave\model.py", line 711, in run
if not step.run(runner):
File "\lib\site-packages\behave\model.py", line 1311, in run
formatter.match(match)
File "\lib\site-packages\behave\formatter\pretty.py", line 130, in match
self.print_statement()
File "lib\site-packages\behave\formatter\pretty.py", line 265, in print_statement
self.stream.write("\n")
OSError: raw write() returned invalid length 1508 (should have been between 0 and 754)
Currently I assume it is something to do with my steps.py not initialising correctly, but why would it work after several runs of the behave command? And does anyone know how to resolve this issue?

I found https://github.com/Microsoft/vscode/issues/39149. As I was using the PyCharm terminal, I believe I was suffering from this problem. When I switched to the native terminal, it ran every time.
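If switching terminals isn't an option, reducing or redirecting the console output from behave's pretty formatter (where the traceback shows the write failing) may also sidestep the problem. These are untested suggestions based on the linked issue:

behave -f plain --no-color
behave > behave_output.txt 2>&1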

Related

Trying to upload compressed data (unicode) via bulkuploader

I ran into an issue where the data being uploaded to db.Text was over 1 MB, so I compressed the information using zlib. The bulkloader by default didn't support the unicode data being uploaded, so I switched out the source code to use unicodecsv rather than Python's built-in csv module. The problem I'm running into is that Google App Engine's bulkloader is unable to support the unicode characters (even though the db.Text entity is unicode).
[ERROR ] [Thread-12] DataSourceThread:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/bulkloader.py", line 1611, in run
self.PerformWork()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/bulkloader.py", line 1730, in PerformWork
for item in content_gen.Batches():
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/bulkloader.py", line 542, in Batches
self._ReadRows(key_start, key_end)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/bulkloader.py", line 452, in _ReadRows
row = self.reader.next()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/bulkload/csv_connector.py", line 219, in generate_import_record
for input_dict in self.dict_generator:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/unicodecsv/__init__.py", line 188, in next
row = csv.DictReader.next(self)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/csv.py", line 108, in next
row = self.reader.next()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/unicodecsv/__init__.py", line 106, in next
row = self.reader.next()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/bulkload/csv_connector.py", line 55, in utf8_recoder
for line in codecs.getreader(encoding)(stream):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 612, in next
line = self.readline()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 527, in readline
data = self.read(readsize, firstline=True)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 474, in read
newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x9c in position 29: invalid start byte
I know that for my local testing I could modify the Python files to use the unicodecsv module instead, but that doesn't solve the problem for GAE's Datastore in production. Is there an existing solution to this problem that anyone is aware of?
Solved this the other week: you just need to base64-encode the results so you won't have any issues with the bulkloader. The size increases by 30-50%, but since zlib had already compressed my data to 10% of the original, this wasn't too bad.
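For reference, a minimal sketch of that round trip (the function names are mine, not part of the bulkloader):

import base64
import zlib

def encode_for_upload(raw_bytes):
    # Compress first, then base64-encode so the CSV/bulkloader layer
    # only ever sees 7-bit ASCII. base64 grows the payload by ~33%,
    # but zlib has typically shrunk it by far more than that already.
    return base64.b64encode(zlib.compress(raw_bytes))

def decode_from_store(stored_text):
    # Reverse the steps when reading the db.Text value back.
    return zlib.decompress(base64.b64decode(stored_text))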

Unicode fonts in pdf at GAE with web2py/pyfpdf

I'm writing an app which produces a pdf file containing some text with unicode characters. On the GAE devserver it works fine, but after deploying it can't load the font file (it crashes after add_font() (pyfpdf)).
The code is:
# -*- coding: utf-8 -*-
def fun1():
    from gluon.contrib.pyfpdf import FPDF, HTMLMixin

    class MyFPDF(FPDF, HTMLMixin):
        pass

    pdf = MyFPDF()
    pdf.add_font('DejaVu', '', 'DejaVuSansCondensed.ttf', uni=True)
    pdf.add_page()
    pdf.set_font('DejaVu', '', 16)
    pdf.write(10, 'test-ąśł')
    response.headers['Content-Type'] = 'application/pdf'
    return pdf.output(dest='S')
The font files (along with a DejaVuSansCondensed.pkl file generated after the first run on the web2py server...) are in /gluon/contrib/fpdf/font. I didn't add anything to routers.py (I'm using the pattern-based system), and app.yaml is unchanged. And I get this:
In FILE: /base/data/home/apps/s~myapp/web2py-04.369240954601780983/applications/app3/controllers/default.py
Traceback (most recent call last):
File "/base/data/home/apps/s~myapp/web2py-04.369240954601780983/gluon/restricted.py", line 212, in restricted
exec ccode in environment
File "/base/data/home/apps/s~myapp/web2py-04.369240954601780983/applications/app3/controllers/default.py", line 674, in <module>
File "/base/data/home/apps/s~myapp/web2py-04.369240954601780983/gluon/globals.py", line 194, in <lambda>
self._caller = lambda f: f()
File "/base/data/home/apps/s~myapp/web2py-04.369240954601780983/applications/app3/controllers/default.py", line 493, in fun1
pdf.add_font('DejaVu', '', 'DejaVuSansCondensed.ttf', uni=True)
File "/base/data/home/apps/s~myapp/web2py-04.369240954601780983/gluon/contrib/fpdf/fpdf.py", line 432, in add_font
font_dict = pickle.load(fh)
File "/base/data/home/runtimes/python27p/python27_dist/lib/python2.7/pickle.py", line 1378, in load
return Unpickler(file).load()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/pickle.py", line 966, in load_string
raise ValueError, "insecure string pickle"
ValueError: insecure string pickle
As I said, locally (both web2py/rocket and GAE) it works well. After deploying, only something like this works:
pdf = MyFPDF()
pdf.add_page()
pdf.set_font('Arial', '', 16)
pdf.write(10, 'testąśł')
But without "unusual" characters...
The best solution would be to add my own font files (like DejaVu), but basically I need unicode characters in any font... maybe some "half-solution" using "generic GAE unicode" fonts... if something like that exists...
Thanks for the suggestion, Tim!
I found a solution... it isn't the best one, but it works...
The problem is with using pickle on GAE. The best solution would probably be to override/rewrite the add_font() function for GAE so that it writes to the datastore instead of the filesystem. Additionally, the ValueError: insecure string pickle error can still occur; I tried b64 encoding according to this, but I still got errors. So my solution is to override the add_font() function with these parts commented out/deleted:
if os.path.exists(unifilename):
    fh = open(unifilename)
    try:
        font_dict = pickle.load(fh)
    finally:
        fh.close()
else:
and
try:
    fh = open(unifilename, "w")
    pickle.dump(font_dict, fh)
    fh.close()
except IOError, e:
    if not e.errno == errno.EACCES:
        raise  # Not a permission error.
Because of this, the function recalculates a bit more every time instead of just reading the data from the pickle... but it works on GAE.
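As a rough sketch of the datastore/memcache idea mentioned above (hypothetical and untested; build_font_dict stands in for the TTF-parsing code already inside add_font()), the font metrics could be cached outside the read-only filesystem like this:

from google.appengine.api import memcache

def load_font_dict(unifilename, build_font_dict):
    # memcache replaces the filesystem pickle cache, which deployed
    # GAE apps can't write to. Entries may be evicted, in which case
    # the metrics are simply recalculated on the next request.
    font_dict = memcache.get(unifilename)
    if font_dict is None:
        font_dict = build_font_dict()
        memcache.set(unifilename, font_dict)
    return font_dict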

GAE Full Text Search development console UnicodeEncodeError

I have an index with many words with accented characters (e.g. São Paulo, José, etc.).
The search API works fine, but when I try to run some test queries in the development console, I can't access the index data.
This error only occurs in the development environment. On production GAE everything works fine.
Below is the traceback:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/_webapp25.py", line 701, in __call__
handler.get(*groups)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/admin/__init__.py", line 1704, in get
'values': self._ProcessSearchResponse(resp),
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/admin/__init__.py", line 1664, in _ProcessSearchResponse
value = TruncateValue(doc.fields[field_name].value)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/admin/__init__.py", line 158, in TruncateValue
value = str(value)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc1' in position 5: ordinal not in range(128)
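For reference, the failure is plain Python 2 behaviour: str() implicitly encodes with the ascii codec, which can't represent accented characters:

value = u'S\xe3o Paulo'
# str(value) raises UnicodeEncodeError: 'ascii' codec can't encode
# character u'\xe3' -- the dev console's TruncateValue does exactly this.
encoded = value.encode('utf-8')  # an explicit encoding works fine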

GAE error (Error code 104) while creating a new blob

On GAE this line of code:
file_name = files.blobstore.create(mime_type='image/png')
raises google.appengine.runtime.DeadlineExceededError
Here is the full method code:
class UploadsHandler(JSONRequestHandler):
    def upload_blob(self, content, filename):
        file_name = files.blobstore.create(mime_type='image/png')
        file_str_list = split_len(content, 65520)
        with files.open(file_name, 'a') as f:
            for line in file_str_list:
                f.write(line)
        files.finalize(file_name)
        return files.blobstore.get_blob_key(file_name)
Logging message ends with:
A serious problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may be throwing exceptions during the initialization of your application. (Error code 104)
Full error stack:
<class 'google.appengine.runtime.DeadlineExceededError'>:
Traceback (most recent call last):
File "/base/data/home/apps/s~mockup-cloud/1.352909931378411668/main.py", line 389, in main
util.run_wsgi_app(application)
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/util.py", line 98, in run_wsgi_app
run_bare_wsgi_app(add_wsgi_middleware(application))
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/util.py", line 116, in run_bare_wsgi_app
result = application(env, _start_response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in __call__
handler.post(*groups)
File "/base/data/home/apps/s~mockup-cloud/1.352909931378411668/main.py", line 339, in post
original_key = "%s" % self.upload_blob(src)
File "/base/data/home/apps/s~mockup-cloud/1.352909931378411668/main.py", line 268, in upload_blob
file_name = files.blobstore.create(mime_type='image/png')
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/blobstore.py", line 68, in create
return files._create(_BLOBSTORE_FILESYSTEM, params=params)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/file.py", line 487, in _create
_make_call('Create', request, response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/file.py", line 228, in _make_call
rpc.wait()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 533, in wait
self.__rpc.Wait()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 119, in Wait
rpc_completed = self._WaitImpl()
File "/base/python_runtime/python_lib/versions/1/google/appengine/runtime/apiproxy.py", line 131, in _WaitImpl
rpc_completed = _apphosting_runtime___python__apiproxy.Wait(self)
The blob is created during file upload. Other methods of the app work great. It looks like the blobstore is not responding within the 30-second deadline.
Any ideas why this happens?
Thanks!
Seems like you're not the only one having this issue:
http://groups.google.com/group/google-appengine/browse_thread/thread/27e52484946cbdc1#
(posted today)
It seems Google had reconfigured some of their servers. Now everything's working fine, as it was before.
A runtime.DeadlineExceededError occurs when your request handler takes too long to execute - the blobstore call just happened to be what was running when that happened. You need to profile your handler with appstats to see why it's so slow.
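For reference, wiring up appstats on the Python 2.7 runtime is a few lines in appengine_config.py (the standard recipe from the GAE docs), plus enabling the appstats builtin in app.yaml:

# appengine_config.py
def webapp_add_wsgi_middleware(app):
    # Record every RPC (blobstore, datastore, ...) made while handling
    # a request; the timings are then browsable at /_ah/stats.
    from google.appengine.ext.appstats import recording
    return recording.appstats_wsgi_middleware(app)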

What is TombstonedTaskError from App Engine's Task Queue?

What does the TombstonedTaskError mean? It is being raised while trying to add a task to the queue from a cron job:
Traceback (most recent call last):
File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 501, in __call__
handler.get(*groups)
File "/base/data/home/apps/.../tasks.py", line 132, in get
).add(queue_name = 'userfeedcheck')
File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 495, in add
return Queue(queue_name).add(self)
File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 563, in add
self.__TranslateError(e)
File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 619, in __TranslateError
raise TombstonedTaskError(error.error_detail)
TombstonedTaskError
Searching the documentation turns up only the following:
exception TombstonedTaskError(InvalidTaskError)
Task has been tombstoned.
...which isn't particularly helpful.
I couldn't find anything useful in the App Engine code either...
You've added a task with that exact name before. Although it has already run, executed task names are kept around for some time to prevent accidental duplicates. If you're assigning task names, you should use ones that are globally unique to prevent this from occurring.
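For illustration, a sketch of one such naming scheme plus the errors worth catching (the URL, handler name, and hourly suffix are my choices, not required by the API; this uses the non-labs taskqueue module):

import time
from google.appengine.api import taskqueue

def enqueue_feed_check(feed_id):
    # Unique per feed and per hour: re-adds within the same hour are
    # deduplicated, and last hour's tombstone can't collide with this
    # hour's name.
    task_name = 'userfeedcheck-%s-%s' % (feed_id, time.strftime('%Y%m%d%H'))
    try:
        taskqueue.add(name=task_name, url='/tasks/feedcheck',
                      queue_name='userfeedcheck')
    except (taskqueue.TaskAlreadyExistsError, taskqueue.TombstonedTaskError):
        pass  # already queued, or ran recently -- safe to skip here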
