mmrotate eval_map.py IndexError: tuple index out of range

When I trained /mmrotate/configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_6x_hrsc_rr_le90.py with fine_grained, I got IndexError: tuple index out of range.
After training this baseline without evaluation (--no-validate), it runs successfully.
This is the link to the baseline on GitHub: https://github.com/open-mmlab/mmrotate/blob/main/configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_6x_hrsc_rr_le90.py
The following is the traceback. Could you tell me how to solve it? I just want to evaluate the baseline result with fine_grained on the HRSC2016 dataset. Thank you very much.
Traceback (most recent call last):
File "tools/train.py", line 192, in <module>
main()
File "tools/train.py", line 181, in main
train_detector(
File "/root/autodl-tmp/mmrotate/mmrotate/apis/train.py", line 141, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/root/miniconda3/envs/mmrotate/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
epoch_runner(data_loaders[i], **kwargs)
File "/root/miniconda3/envs/mmrotate/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 58, in train
self.call_hook('after_train_epoch')
File "/root/miniconda3/envs/mmrotate/lib/python3.8/site-packages/mmcv/runner/base_runner.py", line 317, in call_hook
getattr(hook, fn_name)(self)
File "/root/miniconda3/envs/mmrotate/lib/python3.8/site-packages/mmcv/runner/hooks/evaluation.py", line 271, in after_train_epoch
self._do_evaluate(runner)
File "/root/miniconda3/envs/mmrotate/lib/python3.8/site-packages/mmdet/core/evaluation/eval_hooks.py", line 63, in _do_evaluate
key_score = self.evaluate(runner, results)
File "/root/miniconda3/envs/mmrotate/lib/python3.8/site-packages/mmcv/runner/hooks/evaluation.py", line 367, in evaluate
eval_res = self.dataloader.dataset.evaluate(
File "/root/autodl-tmp/mmrotate/mmrotate/datasets/hrsc.py", line 251, in evaluate
mean_ap, _ = eval_rbbox_map(
File "/root/autodl-tmp/mmrotate/mmrotate/core/evaluation/eval_map.py", line 243, in eval_rbbox_map
print_map_summary(
File "/root/autodl-tmp/mmrotate/mmrotate/core/evaluation/eval_map.py", line 305, in print_map_summary
label_names[j], num_gts[i, j], results[j]['num_dets'],
IndexError: tuple index out of range
By the way, I checked that num_classes=33 and classwise=True.
My goal is to evaluate https://github.com/open-mmlab/mmrotate/blob/main/configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_6x_hrsc_rr_le90.py with fine_grained on the HRSC2016 dataset.
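A minimal diagnostic sketch, based on reading the traceback rather than on any official fix: the IndexError at label_names[j] suggests the evaluation dataset exposes fewer class names than the head predicts, so it is worth checking that the val/test HRSC dataset is actually built with the 33 fine-grained classes. The config path is taken from the question; build_dataset being exported from mmrotate.datasets (mirroring mmdet) is an assumption.
from mmcv import Config
from mmrotate.datasets import build_dataset  # assumed export, mirroring mmdet

cfg = Config.fromfile(
    'configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_6x_hrsc_rr_le90.py')
val_dataset = build_dataset(cfg.data.val)
# If this prints e.g. 1 vs 33, the val/test pipeline is not in fine-grained
# (classwise) mode and print_map_summary will index past the class-name tuple.
print(len(val_dataset.CLASSES), cfg.model.bbox_head.num_classes)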

Related

ODOO 12 server error regarding invoice sequencing

I am trying to change the sequence of my invoicing so that, instead of resetting each new year, the count keeps going upwards continuously.
(for example)
inv/2021/0001 date 1/1/2023   (this one should be 2366)
inv/2021/2365    date 31/12/2022
Researching the subject, I found out I need to go into Technical -> Sequences to get the invoice numbers I want.
But my problem is, once I click Sequences, I get the following server error:
Error:
Odoo Server Error
Traceback (most recent call last):
File "/odoo/odoo-server/odoo/api.py", line 1039, in get
value = self._data[key][field][record._ids[0]]
KeyError: 254
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/odoo/odoo-server/odoo/fields.py", line 981, in __get__
value = record.env.cache.get(record, self)
File "/odoo/odoo-server/odoo/api.py", line 1041, in get
raise CacheMiss(record, field)
odoo.exceptions.CacheMiss: ('ir.sequence(254,).number_next_actual', None)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/odoo/odoo-server/odoo/http.py", line 656, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/odoo/odoo-server/odoo/http.py", line 314, in _handle_exception
raise pycompat.reraise(type(exception), exception, sys.exc_info()[2])
File "/odoo/odoo-server/odoo/tools/pycompat.py", line 87, in reraise
raise value
File "/odoo/odoo-server/odoo/http.py", line 698, in dispatch
result = self._call_function(**self.params)
File "/odoo/odoo-server/odoo/http.py", line 346, in _call_function
return checked_call(self.db, *args, **kwargs)
File "/odoo/odoo-server/odoo/service/model.py", line 97, in wrapper
return f(dbname, *args, **kwargs)
File "/odoo/odoo-server/odoo/http.py", line 339, in checked_call
result = self.endpoint(*a, **kw)
File "/odoo/odoo-server/odoo/http.py", line 941, in __call__
return self.method(*args, **kw)
File "/odoo/odoo-server/odoo/http.py", line 519, in response_wrap
response = f(*args, **kw)
File "/odoo/odoo-server/addons/web/controllers/main.py", line 904, in search_read
return self.do_search_read(model, fields, offset, limit, domain, sort)
File "/odoo/odoo-server/addons/web/controllers/main.py", line 926, in do_search_read
offset=offset or 0, limit=limit or False, order=sort or False)
File "/odoo/odoo-server/odoo/models.py", line 4589, in search_read
result = records.read(fields)
File "/odoo/odoo-server/odoo/models.py", line 2791, in read
vals[name] = convert(record[name], record, use_name_get)
File "/odoo/odoo-server/odoo/models.py", line 5117, in __getitem__
return self._fields[key].__get__(self, type(self))
File "/odoo/odoo-server/odoo/fields.py", line 985, in __get__
self.determine_value(record)
File "/odoo/odoo-server/odoo/fields.py", line 1098, in determine_value
self.compute_value(recs)
File "/odoo/odoo-server/odoo/fields.py", line 1052, in compute_value
self._compute_value(records)
File "/odoo/odoo-server/odoo/fields.py", line 1043, in _compute_value
getattr(records, self.compute)()
File "/odoo/odoo-server/odoo/addons/base/models/ir_sequence.py", line 96, in _get_number_next_actual
seq.number_next_actual = _predict_nextval(self, seq_id)
File "/odoo/odoo-server/odoo/addons/base/models/ir_sequence.py", line 68, in _predict_nextval
self.env.cr.execute(query % {'seq_id': seq_id})
File "/odoo/odoo-server/odoo/sql_db.py", line 148, in wrapper
return f(self, *args, **kwargs)
File "/odoo/odoo-server/odoo/sql_db.py", line 225, in execute
res = self._obj.execute(query, params)
psycopg2.ProgrammingError: relation "ir_sequence_1000015" does not exist
LINE 6: FROM ir_sequence_1000015
I believe it could be a database error but I am not sure what this is about. Any idea?
Thanks!
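One hedged way to investigate this from the Odoo shell (odoo shell -d <your_db>): the ir_sequence_<id> naming and the idea of recreating the missing PostgreSQL sequence are assumptions about how Odoo 12 handles "standard" sequences, so verify them against your own database before running anything.
seq = env['ir.sequence'].browse(1000015)  # 1000015 is the id taken from the traceback
env.cr.execute("SELECT 1 FROM pg_class WHERE relname = %s",
               ('ir_sequence_%03d' % seq.id,))
if not env.cr.fetchone():
    # Recreate the backing PostgreSQL sequence so number_next_actual can be
    # computed again; start it from the stored number_next of the record.
    env.cr.execute(
        "CREATE SEQUENCE ir_sequence_%03d INCREMENT BY %%s START WITH %%s" % seq.id,
        (seq.number_increment, seq.number_next),
    )
    env.cr.commit()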

PyFlink Expected IPC message of type schema but got record batch

Feature: windows of size 10 minutes that slide by 5 minutes for data aggregation, then further processing; almost 2 GB of data per window, about 1 million data items.
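For context, a rough sketch of the kind of sliding-window Pandas aggregation described above; the table name, schema and the mean_fee UDAF are illustrative assumptions rather than the original job, but a Pandas UDAF like this is what exercises the Arrow decode path shown in the traceback below.
from pyflink.table import DataTypes, EnvironmentSettings, TableEnvironment
from pyflink.table.expressions import col, lit
from pyflink.table.window import Slide
from pyflink.table.udf import udaf

t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_streaming_mode().build())

@udaf(result_type=DataTypes.DOUBLE(), func_type='pandas')
def mean_fee(fee):
    # fee arrives as a pandas.Series per window/group
    return fee.mean()

orders = t_env.from_path('orders')  # assumes a registered source with rowtime column 'ts'
result = (orders
          .window(Slide.over(lit(10).minutes).every(lit(5).minutes).on(col('ts')).alias('w'))
          .group_by(col('w'), col('city'))
          .select(col('city'), mean_fee(col('fee')), col('w').end))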
Job params:
bin/yarn-session.sh -s 2 -jm 2048 -tm 48768 \
-Dyarn.containers.vcores=4 \
-Dtaskmanager.memory.managed.consumer-weights=DATAPROC:30,PYTHON:70 \
-Dtaskmanager.memory.managed.fraction=0.7 \
-Dtaskmanager.memory.task.off-heap.size=5120m \
-nm $task_name -qu $queue -d
Exception msg as below:
Traceback (most recent call last):
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 253, in _execute
response = task()
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 310, in <lambda>
lambda: self.create_worker().do_instruction(request), request)
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 480, in do_instruction
getattr(request, request_type), request.instruction_id)
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 515, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 978, in process_bundle
element.data)
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 218, in process_encoded
self.output(decoded_value)
File "apache_beam/runners/worker/operations.py", line 330, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam/runners/worker/operations.py", line 332, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam/runners/worker/operations.py", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 71, in pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 73, in pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/pyflink/fn_execution/beam/beam_coder_impl_slow.py", line 627, in decode_from_stream
yield self._decode_one_batch_from_stream(in_stream, in_stream.read_var_int64())
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/pyflink/fn_execution/beam/beam_coder_impl_slow.py", line 638, in _decode_one_batch_from_stream
return arrow_to_pandas(self._timezone, self._field_types, [next(self._batch_reader)])
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/pyflink/fn_execution/beam/beam_coder_impl_slow.py", line 631, in _load_from_stream
reader = pa.ipc.open_stream(stream)
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/pyarrow/ipc.py", line 137, in open_stream
return RecordBatchStreamReader(source)
File "/data1/hadoopdata/nodemanager/local/usercache/prod_intl_discount_car/appcache/application_1571902879759_12031/python-dist-2659d300-efda-4c34-863d-d5a3a8aa369f/python-archives/venv.zip/venv/lib/python3.7/site-packages/pyarrow/ipc.py", line 61, in __init__
self._open(source)
File "pyarrow/ipc.pxi", line 352, in pyarrow.lib._RecordBatchStreamReader._open
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Expected IPC message of type schema but got record batch
Yes, this is indeed a bug; please refer to FLINK-21208.

How to create a Tensorflow Dataset without labels? Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string

Using Tensorflow 2.3, I'm trying to create a tf.data.Dataset without labels.
I have my .png files in a folder './Folder/'. For a minimal working sample, I think the only relevant line is the one where I call tf.keras.preprocessing.image_dataset_from_directory. The class definition is here.
dataset = tf.keras.preprocessing.image_dataset_from_directory('./Folder/',label_mode=None,batch_size=100)
When the Python interpreter reaches the line above, it returns this error message:
Traceback (most recent call last):
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 465, in _apply_op_helper
values = ops.convert_to_tensor(
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1473, in convert_to_tensor
raise ValueError(
ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: <tf.Tensor 'args_0:0' shape=() dtype=float32>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "04-vaeAnomalyScores.py", line 135, in <module>
historicKLD, encoder, decoder, vae = artVAE_Instance.run_autoencoder() # Train
File "/media/roi/9b168630-3b62-4215-bb7d-fed9ba179dc7/images/largePatches/artvae.py", line 386, in run_autoencoder
trainingDataSet = self.loadImages(self.trainingDir)
File "/media/roi/9b168630-3b62-4215-bb7d-fed9ba179dc7/images/largePatches/artvae.py", line 231, in loadImages
dataset = tf.keras.preprocessing.image_dataset_from_directory(dir[:-1]+'Downscaled/',label_mode=None,batch_size=self.BATCH_SIZE)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/keras/preprocessing/image_dataset.py", line 192, in image_dataset_from_directory
dataset = paths_and_labels_to_dataset(
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/keras/preprocessing/image_dataset.py", line 219, in paths_and_labels_to_dataset
img_ds = path_ds.map(
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1695, in map
return MapDataset(self, map_func, preserve_cardinality=True)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4041, in __init__
self._map_func = StructuredFunctionWrapper(
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 3371, in __init__
self._function = wrapper_fn.get_concrete_function()
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2938, in get_concrete_function
graph_function = self._get_concrete_function_garbage_collected(
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2906, in _get_concrete_function_garbage_collected
graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3065, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 3364, in wrapper_fn
ret = _wrapper_helper(*args)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 3299, in _wrapper_helper
ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 255, in wrapper
return converted_call(f, args, kwargs, options=options)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 532, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 339, in _call_unconverted
return f(*args, **kwargs)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/keras/preprocessing/image_dataset.py", line 220, in <lambda>
lambda x: path_to_image(x, image_size, num_channels, interpolation))
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/keras/preprocessing/image_dataset.py", line 228, in path_to_image
img = io_ops.read_file(path)
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/ops/gen_io_ops.py", line 574, in read_file
_, _, _op, _outputs = _op_def_library._apply_op_helper(
File "/home/roi/.local/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 492, in _apply_op_helper
raise TypeError("%s expected type of %s." %
TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.
Thank you so much for your help.
One way to fix this that I found is to put all your images in a sub-directory inside the directory whose path you are feeding to image_dataset_from_directory.
Taking your example, you would create a new folder, let's call it new_folder, inside of ./Folder/ where you would put all your images, such that now the path to all your images is ./Folder/new_folder/. Then you can call the image_dataset_from_directory method with the exact same arguments as you have done in your question:
tf.keras.preprocessing.image_dataset_from_directory(
'./Folder/',
label_mode=None,
batch_size=100
)
I found this to work for me so hopefully someone else will also find it helpful!
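As an alternative not mentioned above, here is a hedged sketch that builds the unlabeled dataset directly with tf.data, which avoids the sub-directory requirement entirely; the './Folder/*.png' glob and the batch size of 100 mirror the question and should be adjusted as needed.
import tensorflow as tf

paths = tf.data.Dataset.list_files('./Folder/*.png', shuffle=False)

def load_image(path):
    # path is a tf.string tensor here, so ReadFile receives the dtype it expects
    img = tf.io.read_file(path)
    img = tf.image.decode_png(img, channels=3)
    return tf.image.convert_image_dtype(img, tf.float32)

dataset = paths.map(load_image,
                    num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(100)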

Migrations fails on Django 1.11.20 with ugettext() got an unexpected keyword argument 'default'

I tried to make migrations on a Django project, version 1.11.20.
But I get an error and I don't understand where it comes from.
There must have been migrations before, because the project works; I just can't add modifications to the project and apply a new migration.
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 356, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.6/site-packages/django/core/management/commands/makemigrations.py", line 193, in handle
self.write_migration_files(changes)
File "/usr/local/lib/python3.6/site-packages/django/core/management/commands/makemigrations.py", line 231, in write_migration_files
migration_string = writer.as_string()
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/writer.py", line 163, in as_string
operation_string, operation_imports = OperationWriter(operation).serialize()
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/writer.py", line 120, in serialize
_write(arg_name, arg_value)
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/writer.py", line 72, in _write
arg_string, arg_imports = MigrationWriter.serialize(item)
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/writer.py", line 293, in serialize
return serializer_factory(value).serialize()
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/serializer.py", line 44, in serialize
item_string, item_imports = serializer_factory(item).serialize()
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/serializer.py", line 229, in serialize
return self.serialize_deconstructed(path, args, kwargs)
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/serializer.py", line 101, in serialize_deconstructed
arg_string, arg_imports = serializer_factory(arg).serialize()
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/serializer.py", line 332, in serializer_factory
value = force_text(value)
File "/usr/local/lib/python3.6/site-packages/django/utils/encoding.py", line 76, in force_text
s = six.text_type(s)
File "/usr/local/lib/python3.6/site-packages/django/utils/functional.py", line 119, in __text_cast
return func(*self.__args, **self.__kw)
TypeError: ugettext() got an unexpected keyword argument 'default'
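Reading the traceback, the migration serializer ends up evaluating a lazy translation whose wrapped ugettext() was given a default= keyword it does not accept. A hedged illustration of the kind of field definition that would produce this (the model and field names are made up, not from the post):
from django.db import models
from django.utils.translation import ugettext_lazy as _

class Example(models.Model):
    # Breaks at makemigrations time, when the serializer forces the lazy string:
    # name = models.CharField(max_length=100, verbose_name=_('Name', default='Name'))
    # ugettext() only takes the message string, so drop the default= keyword:
    name = models.CharField(max_length=100, verbose_name=_('Name'))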

Google App Engine : Bulkuploader : int64 too big error

I am getting this error while uploading data to the datastore using the bulkloader. Data used to upload fine with the previous CSV file. The new CSV file has an extra field that contains a list of strings (e.g. A,B,E,G,E,F). The following is the error I get.
Traceback (most recent call last):
File "/opt/google_appengine_1.6.4/google/appengine/tools/adaptive_thread_pool.py", line 176, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "/opt/google_appengine_1.6.4/google/appengine/tools/bulkloader.py", line 764, in PerformWork
transfer_time = self._TransferItem(thread_pool)
File "/opt/google_appengine_1.6.4/google/appengine/tools/bulkloader.py", line 935, in _TransferItem
self.request_manager.PostEntities(self.content)
File "/opt/google_appengine_1.6.4/google/appengine/tools/bulkloader.py", line 1420, in PostEntities
datastore.Put(entities)
File "/opt/google_appengine_1.6.4/google/appengine/api/datastore.py", line 576, in Put
return PutAsync(entities, **kwargs).get_result()
File "/opt/google_appengine_1.6.4/google/appengine/datastore/datastore_rpc.py", line 786, in get_result
results = self.__rpcs[0].get_result()
File "/opt/google_appengine_1.6.4/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/opt/google_appengine_1.6.4/google/appengine/datastore/datastore_rpc.py", line 1556, in __put_hook
self.check_rpc_success(rpc)
File "/opt/google_appengine_1.6.4/google/appengine/datastore/datastore_rpc.py", line 1191, in check_rpc_success
rpc.check_success()
File "/opt/google_appengine_1.6.4/google/appengine/api/apiproxy_stub_map.py", line 558, in check_success
self.__rpc.CheckSuccess()
File "/opt/google_appengine_1.6.4/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "/opt/google_appengine_1.6.4/google/appengine/ext/remote_api/remote_api_stub.py", line 248, in MakeSyncCall
handler(request, response)
File "/opt/google_appengine_1.6.4/google/appengine/ext/remote_api/remote_api_stub.py", line 397, in _Dynamic_Put
'datastore_v3', 'Put', put_request, put_response)
File "/opt/google_appengine_1.6.4/google/appengine/ext/remote_api/remote_api_stub.py", line 177, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "/opt/google_appengine_1.6.4/google/appengine/ext/remote_api/remote_api_stub.py", line 185, in _MakeRealSyncCall
request_pb.set_request(request.Encode())
File "/opt/google_appengine_1.6.4/google/net/proto/ProtocolBuffer.py", line 56, in Encode
self.Output(e)
File "/opt/google_appengine_1.6.4/google/net/proto/ProtocolBuffer.py", line 205, in Output
self.OutputUnchecked(e)
File "/opt/google_appengine_1.6.4/google/appengine/datastore/datastore_pb.py", line 4400, in OutputUnchecked
self.entity_[i].OutputUnchecked(out)
File "/opt/google_appengine_1.6.4/google/appengine/datastore/entity_pb.py", line 2380, in OutputUnchecked
self.property_[i].OutputUnchecked(out)
File "/opt/google_appengine_1.6.4/google/appengine/datastore/entity_pb.py", line 1307, in OutputUnchecked
self.value_.OutputUnchecked(out)
File "/opt/google_appengine_1.6.4/google/appengine/datastore/entity_pb.py", line 945, in OutputUnchecked
self.referencevalue_.OutputUnchecked(out)
File "/opt/google_appengine_1.6.4/google/appengine/datastore/entity_pb.py", line 675, in OutputUnchecked
self.pathelement_[i].OutputUnchecked(out)
File "/opt/google_appengine_1.6.4/google/appengine/datastore/entity_pb.py", line 135, in OutputUnchecked
out.putVarInt64(self.id_)
File "/opt/google_appengine_1.6.4/google/net/proto/ProtocolBuffer.py", line 402, in putVarInt64
raise ProtocolBufferEncodeError, "int64 too big"
Changing the data type of the problematic entries from IntegerProperty to StringProperty might help.
I was having the same problem: I was storing user_id for the Users entity as an integer, but when confronted with a bigger number it simply couldn't hold it. So I am storing it as a string now.
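A hedged sketch of the kind of model change described above (the model and property names are illustrative, not from the original post):
from google.appengine.ext import db

class UserRecord(db.Model):
    # Very large identifiers overflow the signed 64-bit varint encoding used by
    # the datastore protocol, so store them as strings instead of integers.
    user_id = db.StringProperty()  # was: db.IntegerProperty()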
