do_rootfs function failed in Yocto Project

I am just getting started with the Yocto Project and am trying to build an image for the x86 architecture to be emulated with QEMU (the host runs Ubuntu 16.04). I am getting the following error while building the OS image.
ERROR: core-image-sato-1.0-r0 do_rootfs: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_rootfs(d)
0003:
File: '/home/rahul/poky/poky/meta/classes/image.bbclass', lineno: 258, function: do_rootfs
0254: progress_reporter.next_stage()
0255:
0256: # generate rootfs
0257: d.setVarFlag('REPRODUCIBLE_TIMESTAMP_ROOTFS', 'export', '1')
*** 0258: create_rootfs(d, progress_reporter=progress_reporter, logcatcher=logcatcher)
0259:
0260: progress_reporter.finish()
0261:}
0262:do_rootfs[dirs] = "${TOPDIR}"
File: '/home/rahul/poky/poky/meta/lib/oe/rootfs.py', lineno: 1010, function: create_rootfs
1006: env_bkp = os.environ.copy()
1007:
1008: img_type = d.getVar('IMAGE_PKGTYPE')
1009: if img_type == "rpm":
*** 1010: RpmRootfs(d, manifest_dir, progress_reporter, logcatcher).create()
1011: elif img_type == "ipk":
1012: OpkgRootfs(d, manifest_dir, progress_reporter, logcatcher).create()
1013: elif img_type == "deb":
1014: DpkgRootfs(d, manifest_dir, progress_reporter, logcatcher).create()
File: '/home/rahul/poky/poky/meta/lib/oe/rootfs.py', lineno: 201, function: create
0197: if self.progress_reporter:
0198: self.progress_reporter.next_stage()
0199:
0200: # call the package manager dependent create method
*** 0201: self._create()
0202:
0203: sysconfdir = self.image_rootfs + self.d.getVar('sysconfdir')
0204: bb.utils.mkdirhier(sysconfdir)
0205: with open(sysconfdir + "/version", "w+") as ver:
File: '/home/rahul/poky/poky/meta/lib/oe/rootfs.py', lineno: 450, function: _create
0446: rpm_pre_process_cmds = self.d.getVar('RPM_PREPROCESS_COMMANDS')
0447: rpm_post_process_cmds = self.d.getVar('RPM_POSTPROCESS_COMMANDS')
0448:
0449: # update PM index files
*** 0450: self.pm.write_index()
0451:
0452: execute_pre_post_process(self.d, rpm_pre_process_cmds)
0453:
0454: if self.progress_reporter:
File: '/home/rahul/poky/poky/meta/lib/oe/package_manager.py', lineno: 543, function: write_index
0539:
0540: def write_index(self):
0541: lockfilename = self.d.getVar('DEPLOY_DIR_RPM') + "/rpm.lock"
0542: lf = bb.utils.lockfile(lockfilename, False)
*** 0543: RpmIndexer(self.d, self.rpm_repo_dir).write_index()
0544: bb.utils.unlockfile(lf)
0545:
0546: def insert_feeds_uris(self, feed_uris, feed_base_paths, feed_archs):
0547: from urllib.parse import urlparse
File: '/home/rahul/poky/poky/meta/lib/oe/package_manager.py', lineno: 105, function: write_index
0101: else:
0102: signer = None
0103:
0104: createrepo_c = bb.utils.which(os.environ['PATH'], "createrepo_c")
*** 0105: result = create_index("%s --update -q %s" % (createrepo_c, self.deploy_dir))
0106: if result:
0107: bb.fatal(result)
0108:
0109: # Sign repomd
File: '/home/rahul/poky/poky/meta/lib/oe/package_manager.py', lineno: 21, function: create_index
0017:def create_index(arg):
0018: index_cmd = arg
0019:
0020: bb.note("Executing '%s' ..." % index_cmd)
*** 0021: result = subprocess.check_output(index_cmd, stderr=subprocess.STDOUT, shell=True).decode("utf-8")
0022: if result:
0023: bb.note(result)
0024:
0025:"""
File: '/usr/lib/python3.5/subprocess.py', lineno: 626, function: check_output
0622: # empty string. That is maintained here for backwards compatibility.
0623: kwargs['input'] = '' if kwargs.get('universal_newlines', False) else b''
0624:
0625: return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
*** 0626: **kwargs).stdout
0627:
0628:
0629:class CompletedProcess(object):
0630: """A process that has finished running.
File: '/usr/lib/python3.5/subprocess.py', lineno: 708, function: run
0704: raise
0705: retcode = process.poll()
0706: if check and retcode:
0707: raise CalledProcessError(retcode, process.args,
*** 0708: output=stdout, stderr=stderr)
0709: return CompletedProcess(process.args, retcode, stdout, stderr)
0710:
0711:
0712:def list2cmdline(seq):
Exception: subprocess.CalledProcessError: Command '/home/rahul/poky/poky/build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/recipe-sysroot-native/usr/bin/createrepo_c --update -q /home/rahul/poky/poky/build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/oe-rootfs-repo' returned non-zero exit status 1
Subprocess output:
Temporary repodata directory /home/rahul/poky/poky/build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/oe-rootfs-repo/.repodata/ already exists! (Another createrepo process is running?)
ERROR: core-image-sato-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/rahul/poky/poky/build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/temp/log.do_rootfs.5019
ERROR: Task (/home/rahul/poky/poky/meta/recipes-sato/images/core-image-sato.bb:do_rootfs) failed with exit code '1'
The build process runs up to almost 90 percent, after which this error comes up and terminates the build. What could be the issue?

I got the same error when my host machine shut down abruptly, but everything worked after I deleted the .repodata folder with sudo rm -r build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/oe-rootfs-repo/.repodata/ and built again.

I stopped a build using Ctrl-C and got the Python error described in the original question.
The .repodata folder (please see the answer from jmiranda) was empty, so I deleted the whole oe-rootfs-repo folder and this worked for me.

I get the same issue, but with the error "Directory not empty" instead, when building in a Docker container. Deleting the destination directory with rm -r and running the build again works.

None of these methods worked for me.
I then cleaned the build with bitbake -c clean mybuildname and ran the build again, and it worked flawlessly. I hope this helps someone.

Related

`vespa` tutorial: ./src/python/user_search.py U33527 10 KeyError: 'children'

I'm following the Vespa tutorials step by step: https://docs.vespa.ai/en/tutorials/news-5-recommendation.html
(vespa) raphy#pc:~/vespa/sample-apps/news$ python3 src/python/train_cold_start.py mind 10
Total loss after epoch 9: 534.6995239257812 (0.4087916910648346 avg)
{'auc': 0.8578, 'mrr': 0.4789, 'ndcg#5': 0.5482, 'ndcg#10': 0.6013}
{'auc': 0.6265, 'mrr': 0.2846, 'ndcg#5': 0.3117, 'ndcg#10': 0.3747}
Total loss after epoch 10: 517.1571044921875 (0.39538004994392395 avg)
{'auc': 0.8758, 'mrr': 0.5073, 'ndcg#5': 0.5817, 'ndcg#10': 0.6315}
{'auc': 0.6246, 'mrr': 0.2843, 'ndcg#5': 0.3113, 'ndcg#10': 0.3732}
(vespa) raphy#pc:~/vespa/sample-apps/news$
But I'm encountering this problem:
(vespa) raphy#pc:~/vespa/sample-apps/news$ ./src/python/user_search.py U33527 10
Traceback (most recent call last):
File "./src/python/user_search.py", line 58, in <module>
main()
File "./src/python/user_search.py", line 51, in main
user_vector = query_user_embedding(user_id)
File "./src/python/user_search.py", line 21, in query_user_embedding
embedding = parse_embedding(result["root"]["children"][0])
KeyError: 'children'
(vespa) raphy#pc:~/vespa/sample-apps/news$
(vespa) raphy#pc:~/vespa/sample-apps/news$ grep "U33527" mind/vespa_user_embeddings.json
{"put": "id:user:user::U33527", "fields": {"user_id":"U33527", "embedding": {"values": [0.000000,0.060903,0.158397,0.003585,0.230960,0.005171,-0.300856,-0.295116,-0.042150,-0.416067,-0.173345,-0.241960,-0.140207,-0.000399,0.463869,-0.294422,-0.080257,-0.208765,-0.070218,0.189583,0.031040,-0.073909,-0.147883,-0.164819,-0.229605,-0.248327,0.174647,-0.168265,-0.370106,-0.209611,-0.206252,-0.288447,0.091576,-0.122662,0.000394,0.172982,-0.147844,0.326629,-0.103831,-0.312612,-0.209032,0.190745,-0.335539,0.261593,0.699852,0.041234,0.241921,0.052331,0.103968,-0.216830,-0.279406]} }},
OS: Ubuntu 20.04
How can I solve this?
The Vespa index has no user documents here, so most likely the user and news embeddings have not been fed to the system. After they are calculated in the previous step (https://docs.vespa.ai/en/tutorials/news-4-embeddings.html), be sure to feed them to Vespa:
$ java -jar vespa-http-client-jar-with-dependencies.jar \
--file mind/vespa_user_embeddings.json \
--endpoint http://localhost:8080
$ java -jar vespa-http-client-jar-with-dependencies.jar \
--file mind/vespa_news_embeddings.json \
--endpoint http://localhost:8080
That will solve the problem.
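If you want to confirm that the feed actually worked before re-running user_search.py, a quick check along these lines can help. This is only a sketch: it assumes the Vespa container listens on localhost:8080 as in the tutorial, and the schema/field names (user, user_id) are taken from the JSON shown above.
# Sanity check: ask Vespa for the user document and look at totalCount before
# indexing into result["root"]["children"], which is exactly the key that is
# missing when no documents have been fed.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "yql": 'select * from sources user where user_id contains "U33527"',
    "hits": 1,
})
with urllib.request.urlopen("http://localhost:8080/search/?" + params) as resp:
    result = json.load(resp)

if result["root"]["fields"]["totalCount"] == 0:
    print("No user documents indexed yet - feed mind/vespa_user_embeddings.json first")
else:
    print(result["root"]["children"][0]["fields"])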

dropDatabase fails with “chunk is not in COMPLETE state”

I run the following code in Python:
exchange_path = f"dfs://market/{exchange.value}"
script = f'''if(existsDatabase(\"{exchange_path}\")) {{ dropDatabase(\"{exchange_path}\") }}'''
session.run(script)
There is an error:
RuntimeError: <Server Exception> in run: dropDatabase("dfs://market/BINANCE") => deleteSubChunks failed on '/market/BINANCE', chunk 17a19f25-cc16-15ae-714b-5ee30d5d6795 is not in COMPLETE state
First drop the affected partitions with dropPartition, then use dropDatabase to delete the database. Note that dbName here is the database path without the dfs: scheme (e.g. "/market/BINANCE"), which is the form in which getClusterChunksStatus reports chunk files:
dbName = "/market/BINANCE"
fileCond = dbName + "%"
t = exec substr(file, strlen(dbName)) from rpc(getControllerAlias(), getClusterChunksStatus) where file like fileCond, state != "COMPLETE"
dropPartition(database("dfs:/" + dbName), t, , true)
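For reference, the same cleanup can be driven from the Python session used in the question. This is only a sketch (not tested against a live cluster); it assumes exchange_path has the form "dfs://market/BINANCE" and that session is an already-connected dolphindb session.
# Drop the non-COMPLETE partitions first, then the database. chunk_prefix strips
# the "dfs:/" scheme because getClusterChunksStatus reports chunk files as plain
# paths such as "/market/BINANCE/..." (assumption based on the answer above).
chunk_prefix = exchange_path[len("dfs:/"):]  # e.g. "/market/BINANCE"
cleanup_script = f'''
dbName = "{chunk_prefix}"
fileCond = dbName + "%"
t = exec substr(file, strlen(dbName)) from rpc(getControllerAlias(), getClusterChunksStatus) where file like fileCond, state != "COMPLETE"
dropPartition(database("dfs:/" + dbName), t, , true)
if(existsDatabase("dfs:/" + dbName)) {{ dropDatabase("dfs:/" + dbName) }}
'''
session.run(cleanup_script)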

State of process is Launching when using the lldb module in Python

I am learning to use the lldb Python module and am trying to run the following example I found on http://lldb.llvm.org/python-reference. I have already added lldb.so to PYTHONPATH. Here is the result I got:
Creating a target for './a.out'
a.out
SBBreakpoint: id = 1, name = 'main', module = a.out, locations = 1
SBProcess: pid = 0, state = launching, threads = 0, executable = a.out
It seems like the program doesn't get started; the state of the process is always Launching. Are there any configuration problems or missing code?
import lldb
import os

def disassemble_instructions(insts):
    for i in insts:
        print i

# Set the path to the executable to debug
exe = "./a.out"

# Create a new debugger instance
debugger = lldb.SBDebugger.Create()

# When we step or continue, don't return from the function until the process
# stops. Otherwise we would have to handle the process events ourselves which,
# while doable, is a little tricky. We do this by setting the async mode to false.
debugger.SetAsync(False)

# Create a target from a file and arch
print "Creating a target for '%s'" % exe
target = debugger.CreateTargetWithFileAndArch(exe, lldb.LLDB_ARCH_DEFAULT)

if target:
    # If the target is valid set a breakpoint at main
    main_bp = target.BreakpointCreateByName("main", target.GetExecutable().GetFilename())
    print main_bp

    # Launch the process. Since we specified synchronous mode, we won't return
    # from this function until we hit the breakpoint at main
    process = target.LaunchSimple(["./story.txt"], None, os.getcwd())

    # Make sure the launch went ok
    if process:
        # Print some simple process info
        state = process.GetState()
        print process
        if state == lldb.eStateStopped:
            # Get the first thread
            thread = process.GetThreadAtIndex(0)
            if thread:
                # Print some simple thread info
                print thread
                # Get the first frame
                frame = thread.GetFrameAtIndex(0)
                if frame:
                    # Print some simple frame info
                    print frame
                    function = frame.GetFunction()
                    # See if we have debug info (a function)
                    if function:
                        # We do have a function, print some info for the function
                        print function
                        # Now get all instructions for this function and print them
                        insts = function.GetInstructions(target)
                        disassemble_instructions(insts)
                    else:
                        # See if we have a symbol in the symbol table for where we stopped
                        symbol = frame.GetSymbol()
                        if symbol:
                            # We do have a symbol, print some info for the symbol
                            print symbol

dronekit newbie query with hello.py example

I have installed dronekit and dronekit-sitl and created the hello.py example, and I get the following error. I am a novice, so sorry if it's obvious. I am running on a Raspberry Pi 3 under Debian with Python 2.7. Here is the code:
print "Start simulator (SITL)"
import dronekit_sitl
sitl = dronekit_sitl.start_default()
connection_string = sitl.connection_string()
# Import DroneKit-Python
from dronekit import connect, VehicleMode
# Connect to the Vehicle.
print("Connecting to vehicle on: %s" % (connection_string,))
vehicle = connect(connection_string, wait_ready=True)
# Get some vehicle attributes (state)
print "Get some vehicle attribute values:"
print " GPS: %s" % vehicle.gps_0
print " Battery: %s" % vehicle.battery
print " Last Heartbeat: %s" % vehicle.last_heartbeat
print " Is Armable?: %s" % vehicle.is_armable
print " System status: %s" % vehicle.system_status.state
print " Mode: %s" % vehicle.mode.name # settable
# Close vehicle object before exiting script
vehicle.close()
# Shut down simulator
sitl.stop()
print("Completed")
And here is the output I get:
python hellp.py
Start simulator (SITL)
Starting copter simulator (SITL)
SITL already Downloaded and Extracted.
Ready to boot.
Traceback (most recent call last):
File "hellp.py", line 3, in <module>
sitl = dronekit_sitl.start_default()
File "/home/gus/.local/lib/python2.7/site-packages/dronekit_sitl/__init__.py", line 341, in start_default
sitl.launch(sitl_args, await_ready=True, restart=True)
File "/home/gus/.local/lib/python2.7/site-packages/dronekit_sitl/__init__.py", line 271, in launch
p = Popen([self.path] + args, cwd=wd, shell=sys.platform == 'win32', stdout=PIPE, stderr=PIPE)
File "/usr/lib/python2.7/subprocess.py", line 390, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1024, in _execute_child
raise child_exception
OSError: [Errno 8] Exec format error
Thanks so much in advance for any help
As you are running SITL on ARM, this might be the issue; you might have installed the packages directly with apt-get. You can try to build SITL on the Pi itself and run it after that.
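To spell that out: dronekit_sitl.start_default() downloads a prebuilt SITL binary that is not built for ARM, which is what typically produces the Errno 8 "Exec format error" on the Pi. Below is a minimal sketch of the workaround, assuming you build and start ArduPilot SITL natively on the Pi yourself (for example with sim_vehicle.py) and that it listens on its usual default TCP endpoint.
# Skip dronekit_sitl entirely and connect DroneKit to the natively built SITL.
# "tcp:127.0.0.1:5760" is SITL's usual default endpoint - adjust if yours differs.
from dronekit import connect

connection_string = "tcp:127.0.0.1:5760"
vehicle = connect(connection_string, wait_ready=True)

print("Mode: %s" % vehicle.mode.name)
print("Battery: %s" % vehicle.battery)

vehicle.close()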

Looping over with open() throws IOError

I am iterating over several DB records and writing data from their respective BLOB fields into files:
def build(self, records):
    """
    Builds openimmo.anhang
    """
    result = None
    anh_records = [r for r in records if type(r) == anhaenge]
    if not anh_records:
        return result
    anhang = []
    print('RECORDS: ' + str(len(anh_records)))
    for anh_record in anh_records:
        if anh_record.daten:
            __, path = mkstemp()
            with open(path, 'wb') as target:
                target.write(anh_record.daten)
            anh = openimmo.anhang()
            anh.anhangtitel = anh_record.anhangtitel
            anh.format = 'image/jpeg'  # MIMEUtil.getmime(path)
            anh.daten = openimmo.daten()
            anh.daten.pfad = path
            anh.location = id2location.get(anh_record.location)
            anh.gruppe = id2gruppe.get(anh_record.gruppe)
            anhang.append(anh)
    try:
        result.validateBinding()
    except:
        self.log.err('Could not build "anhang": ' + str(result))
    if anhang:
        result = openimmo.anhaenge()
        result.anhang = anhang
    return result
This, however, produces the following error:
RECORDS: 5
Message: "[Errno 24] Too many open files: '/tmp/tmpo54qfq'
daemon panic:
Caught unexpected exception in _main() on 2014-08-20 11:53:37.918353
Message: "[Errno 24] Too many open files: '/tmp/tmpo54qfq'" of type "<class 'IOError'>"
Traceback (most recent call last):
File "/usr/local/lib/python3.2/dist-packages/homie_core-1.0-py3.2.egg/homie/serv/daemon.py", line 345, in __run
File "/usr/local/lib/python3.2/dist-packages/homie_core-1.0-py3.2.egg/homie/serv/service.py", line 72, in _main
File "/usr/local/lib/python3.2/dist-packages/homie_core-1.0-py3.2.egg/homie/api/itf.py", line 127, in export
File "/usr/local/lib/python3.2/dist-packages/homie_openimmodb-0.2_indev-py3.2.egg/openimmodb/itf.py", line 51, in _retrieve
File "/usr/local/lib/python3.2/dist-packages/homie_openimmodb-0.2_indev-py3.2.egg/openimmodb/conv.py", line 27, in decode
File "/usr/local/lib/python3.2/dist-packages/homie_openimmodb-0.2_indev-py3.2.egg/openimmodb/factories/openimmo/immobilie.py", line 60, in build
File "/usr/local/lib/python3.2/dist-packages/homie_openimmodb-0.2_indev-py3.2.egg/openimmodb/factories/openimmo/anhaenge.py", line 30, in build
IOError: [Errno 24] Too many open files: '/tmp/tmpo54qfq'
According to lsof the whole process has over 5k open files:
# lsof| grep python3| wc -l
5375
I checked it several times: I am using with open(file) as desc everywhere in the code when I open a file.
Shouldn't the files be closed automatically at the end of each with block, or am I missing something?
tempfile.mkstemp() opens a file for you:
fd, path = mkstemp()
with open(fd, 'wb') as target:  # reuse the descriptor mkstemp() already opened
    target.write(anh_record.daten)
# the descriptor is closed automatically when the with block exits
You don't need open(path), which opens another file (with the same name).
You could use tempfile.NamedTemporaryFile(delete=False) instead of tempfile.mkstemp().
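Applied to the loop in the question, the NamedTemporaryFile variant could look like this (a sketch, reusing the anh_records/anh_record names from the question): each record now opens exactly one file handle, and the file is kept on disk so its path can still be stored on the anhang object.
from tempfile import NamedTemporaryFile

for anh_record in anh_records:
    if anh_record.daten:
        # one handle per record, closed when the with block exits;
        # delete=False keeps the file so `path` stays usable afterwards
        with NamedTemporaryFile(delete=False) as target:
            target.write(anh_record.daten)
        path = target.name
        # ... build the openimmo.anhang() object with `path` as before ...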
