Cycle in dependencies between targets 'FBReactNativeSpec' and 'Yoga'; building could produce unreliable results

Whenever I run "react-native run-ios" I get the following build error:
error: Cycle in dependencies between targets 'FBReactNativeSpec' and
'Yoga'; building could produce unreliable results. Cycle path:
FBReactNativeSpec → Folly → glog → YogaKit → Yoga → FBReactNativeSpec
Cycle details: → Target 'FBReactNativeSpec' has target dependency on
Target 'Folly' → Target 'Folly' has target dependency on Target 'glog'
→ Target 'glog' has compile command with input
'/Users/ajayhg/Myproj/ios/Pods/Target Support Files/glog/glog-dummy.m'
○ That command depends on command in Target 'YogaKit': script phase
“Copy generated compatibility header” → Target 'YogaKit' has target
dependency on Target 'Yoga' → Target 'Yoga' has compile command with
input '/Users/ajayhg/Myproj/ios/Pods/Target Support
Files/Yoga/Yoga-dummy.m'
How do I resolve this?

I solved the problem by pressing ⌘+Shift+K (Xcode's Clean shortcut) and rebuilding.

I have the exact same problem in our app, but unfortunately no solution yet.
Known workarounds are:
Switch to the legacy build system. It doesn't trip over those "fake" cycles.
Clean and rebuild usually helps. If not, try cleaning and reinstalling CocoaPods.
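If cleaning from Xcode is not enough, a full CocoaPods clean and reinstall typically looks something like this (a sketch; the DerivedData path assumes Xcode's default location):
cd ios
rm -rf Pods build                              # remove installed pods and the iOS build output
pod install                                    # reinstall pods from the existing Podfile/Podfile.lock
cd ..
rm -rf ~/Library/Developer/Xcode/DerivedData   # clear Xcode's cached build products (default location)
npx react-native run-ios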

Relevant thread here.
It appears as though Firebase adds a script phase in an order that creates a dependency loop. You can fix it (laboriously, after every pod install) by going to Xcode > Pods (in the left bar) > FBReactNativeSpec and moving the [CP-User] Generate Specs phase up above the Headers script(s).
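If you'd rather not repeat that by hand after every pod install, a post_install hook in the Podfile can do the reordering automatically. This is only a sketch: the phase name and the build_phases manipulation via the xcodeproj API are assumptions you should verify against your generated Pods project.
post_install do |installer|
  installer.pods_project.targets.each do |target|
    next unless target.name == 'FBReactNativeSpec'
    # find the code-generation phase and move it ahead of the header script phases (assumed API usage)
    phase = target.build_phases.find { |p| p.respond_to?(:name) && p.name == '[CP-User] Generate Specs' }
    if phase
      target.build_phases.delete(phase)
      target.build_phases.insert(0, phase)
    end
  end
end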

Related

SWUpdate on RPi4 via Yocto - error parsing configuration file

After booting the Yocto-generated SWUpdate image for the first time, executing swupdate results in the error message:
Error parsing configuration file: 'globals' section missing, exiting.
I tried to strictly follow SWUpdate's documentation, but it is rather thin when it comes to Yocto integration. I'm using the meta-swupdate, meta-swupdate-boards, and meta-openembedded layers together with the poky example repository, all at the Kirkstone tag, building via bitbake update-image, with local.conf modified as follows:
MACHINE ??= "raspberrypi4-64"
ENABLE_UART = "1"
RPI_USE_U_BOOT = "1"
IMAGE_FSTYPES = "wic ext4.gz"
PREFERRED_PROVIDER_u-boot-fw-utils = "libubootenv"
IMAGE_INSTALL:append = " swupdate"
Is there anything else I need to modify to generate the configuration file and be able to run the SWUpdate binary properly?
Side question: the documentation recommends appending swupdate-www to get a better web server. However, if I append it, there is no swupdate-www binary inside the /usr/bin directory.
As with the other recipe folders, the recipes-support/swupdate/swupdate/raspberrypi4-64 folder was missing inside the meta-swupdate-boards layer; therefore, an empty config file was always generated. After adding this folder and all related files, strongly inspired by the raspberrypi3 folder, the error was gone and swupdate -h produced the expected output.
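If you prefer not to patch meta-swupdate-boards itself, the same effect can be had from your own layer with a bbappend that extends the file search path (a sketch; the layer layout and file names are assumptions):
# recipes-support/swupdate/swupdate_%.bbappend in your own layer
FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
# then provide recipes-support/swupdate/swupdate/raspberrypi4-64/defconfig (plus any .cfg fragments),
# copied and adapted from the raspberrypi3 folder in meta-swupdate-boards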
There was also one new error thrown by Yocto during the build process. It was related to a missing systemd requirement and was solved by adding:
DISTRO_FEATURES:append = " systemd"
to local.conf

Are the added dependencies really being compiled by shadow-cljs? If so, why do the build numbers stay the same?

I am following the shadow-cljs Quick Start documentation on a minimal example project. Here is the link.
Initially, I had this shadow-cljs.edn file:
;; shadow-cljs configuration
{:source-paths
 ["src/dev"
  "src/main"
  "src/test"]

 :dev-http {8080 "public"}

 :dependencies
 []

 :builds
 {:frontend
  {:target :browser
   :modules {:main {:init-fn acme.frontend.app/init}}}}}
In /Users/pedro/projects/acme-app/src/main/acme/frontend/app.cljs, I also have:
(ns acme.frontend.app)

(defn init []
  (println "Hello World"))
I can build it with the command:
$ npx shadow-cljs compile frontend
shadow-cljs - config: /Users/pedro/projects/acme-app/shadow-cljs.edn
shadow-cljs - updating dependencies
shadow-cljs - dependencies updated
[:frontend] Compiling ...
[:frontend] Build completed. (79 files, 0 compiled, 0 warnings, 4.88s)
I have been adding dependencies such as:
:dependencies [[day8.re-frame/re-frame-10x "1.2.1"]
               [proto-repl "0.3.1"]
               [re-frame "1.2.0"]
               [com.degel/re-frame-firebase "0.9.6-SNAPSHOT"]
               [bidi "2.1.5"]
               [re-com "2.13.2-106-180ea1f-SNAPSHOT-TALLYFOR"]
               [com.andrewmcveigh/cljs-time "0.5.2"]
               [com.pupeno/free-form "0.6.0"]
               [binaryage/dirac "RELEASE"]
               [hickory "0.7.1"]
               [cljs-hash "0.0.2"]
               [medley "1.2.0"]]
But the build output does not change in terms of files, compiled, or warnings. Only the time changes a bit, and that is probably just noise (79 files, 0 compiled, 0 warnings, 5.59s).
Are the dependencies really being compiled? How do I know whether the dependencies were compiled too?
If they are being compiled, why does the number of files stay the same?
Note: I am not invoking any functions from the dependencies, and for debugging reasons I do not want to invoke them.
Adding the :dependencies does very little; they will not be compiled on their own. They are only made available on the classpath.
They will only be compiled and loaded once you add them to the :require of the ns forms in your files, or dynamically require them at the REPL. Without an explicit request (i.e. :require) to load them, they are just passive resources that sit unused.
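For example (a sketch; medley.core is the namespace the medley dependency provides, and the function call is only there to show some real use), adding a :require to the app namespace is what actually pulls the library into the build:
(ns acme.frontend.app
  (:require [medley.core :as m]))   ; the :require makes shadow-cljs compile medley into the build

(defn init []
  (println "Hello World")
  (println (m/find-first odd? [2 3 4])))   ; => 3
After a change like this, the file and compiled counts reported by shadow-cljs should go up, which is how you can tell a dependency is really being compiled.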

Trace visualization tool (VizTrace) for UnetStack JSON trace files

I am trying to use the VizTrace trace visualization tool. I installed Julia on the system successfully. When I run the command julia --project viztrace.jl trace.jso, I get some errors, as shown in the image. Please help me solve these errors.
I'm unable to reproduce your problem. If I download your trace.json file and run it with the latest unet-contrib version of viztrace.jl, I get:
» julia --project viztrace.jl ~/Downloads/trace.json
Specify a trace:
1: 1644310724592 [B] AddressAllocReq ⟦ node → arp ⟧ (1 events)
2: 1644310724591 [A] AddressAllocReq ⟦ node → arp ⟧ (1 events)
3: 1644310742254 [A] AddressResolutionReq ⟦ websh → arp ⟧ (1 events)
4: 1644310742298 [A] RangeReq ⟦ websh → ranging ⟧ (23 events)
I'm running on Julia 1.7.2, but there should be no strong dependency on the exact Julia version.
Try with a fresh copy in a new directory, and make sure you don't have any conflicting packages in your global Julia environment.
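A fresh setup could look roughly like this (a sketch; the repository URL and the directory holding viztrace.jl are assumptions about unet-contrib's layout):
git clone https://github.com/org-arl/unet-contrib.git
cd unet-contrib/contrib/viztrace                    # wherever viztrace.jl and its Project.toml live
julia --project -e 'using Pkg; Pkg.instantiate()'   # install the project's pinned dependencies into its own environment
julia --project viztrace.jl ~/Downloads/trace.json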

What does gitolite setup fix?

gitolite info didn't work, and adding keys turned them into "no access" keys and did NOT create corresponding entries in the authorized_keys file.
To fix this run gitolite setup on gitolite server
Question: what could have landed me in that mess?
And what does gitolite setup do when invoked for the n-th time? It's no longer setting things up; according to the docs it fixes hooks, but I wonder what the use case for that would be, and which one was mine.
More details on gitolite info
The gitolite info command is invoked like so:
> ssh git-user@ser-git
PTY allocation request failed on channel 0
hello git-admin, this is ...@... running gitolite3 3.6.7-2 (Debian) on git 2.17.1
R W some-repository
R W gitolite-admin
R W testing
Connection to ser-git closed.
Bad output is: FATAL: unknown git/gitolite command: 'info'
More details: keys without access.
gitolite sshkeys-lint was showing keys with (no access); now those keys have the access I set for them ("now" meaning after gitolite setup).
ssh-keygen -lf /home/repo/.ssh/authorized_keys | wc -l (with or without the piped part) - the count and names of the keys indicated that the newest one had not been added.
Similar question that did not work for me: keydir entries not propagating to authorized_keys
The docs pretty much had the answer once I dug deeper, I guess. Which is fairly nice of @sitaramc.
Without options, 'gitolite setup' is a general "fix up everything" command
(for example, if you brought in repos from outside, or someone messed
around with the hooks, or you made an rc file change that affects access
rules, etc.)
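In practice, the fix boils down to something like this on the gitolite server (a sketch; run it as the hosting user with a normal shell, not through the gitolite-restricted key):
gitolite setup              # no options: a general fix-up - recompiles the config, reruns POST_COMPILE, re-fixes hooks
gitolite sshkeys-lint       # check that the keys in keydir now map to users with access
ssh git-user@ser-git info   # from a client: 'info' should answer again instead of FATAL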
Symptoms: keys stopped propagating, and the error FATAL: unknown git/gitolite command: 'info' on ssh git-user@ser-git. The fix was to run gitolite setup. So, on to the first question, the title one:
what does gitolite setup fix?
gitolite setup is implemented here.
My Perl is rather weak, but there's a setup function at line 56. It calls args (which parses options; here there was nothing to parse). Then comes unless ($h_only): since the hooks-only argument wasn't used, that block runs, so it sets up the rc file and the admin repo, runs gitolite compile and the POST_COMPILE trigger, and then goes on to the hooks.
sub setup {
    my ( $admin, $pubkey, $h_only, $message ) = args();

    unless ($h_only) {
        setup_glrc();
        setup_gladmin( $admin, $pubkey, $message );

        _system("gitolite compile");
        _system("gitolite trigger POST_COMPILE");
    }
    hook_repos();    # all of them, just to be sure
}
The package Gitolite::Conf::Store has hook_repos() at line 228: we change directory to the repo base dir (as per the config file), and for each phy_repo we do hook_1(phy_repo). What is a phy_repo? A physical one.
Same package, different method and line: hook_1($repo), at line 354.
Method hook_1($repo)
It's quite literally about fixing all the hooks.
Recreates dirs for common and admin hooks.
Rewrites update_hook (common) and post_update_hook (admin).
Sets 755 permissions for both common and admin hooks.
Then using ln_sf it symlinks the folders for common/admin hooks.
ln_sf is in the common module, at line 162.
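The effect is visible in each repository afterwards (a sketch; the paths assume a default ~/repositories layout under the hosting user's home):
ls -l ~/repositories/testing.git/hooks/update
# update -> ~/.gitolite/hooks/common/update   (the symlink hook_1 recreates)
find ~/repositories -name update -type l      # every repository's update hook should be a symlink again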

rpm and Yum don't believe a package is installed after Chef installs

Running chef-solo (installed via Chef Omnibus 12.3) on CentOS 6.6.
My recipe has the following simple code:
package 'cloud-init' do
  action :install
end

log 'rpm-qi' do
  message `rpm -qi cloud-init`
  level :warn
end

log 'yum list' do
  message `yum list cloud-init`
  level :warn
end
But it outputs the following:
- install version 0.7.5-10.el6.centos.2 of package cloud-init
* log[rpm-qi] action write[2015-07-16T16:46:35+00:00] WARN: package cloud-init is not installed
[2015-07-16T16:46:35+00:00] WARN: Loaded plugins: fastestmirror, presto
Available Packages
cloud-init.x86_64 0.7.5-10.el6.centos.2 extras
I am at a loss as to why rpm/yum (and rpmquery as well) don't see the package as installed.
EDIT: To clarify, I am specifically looking for the file found by the following query after the package install, so that I can then apply a change to it (I understand this is not a very Chef way of doing things; I am happy to accept suggestions):
rpmquery -l cloud-init | grep 'distros/__init__.py$'
I have found that by using the following:
install_report = shell_out('yum install -y cloud-init').stdout
cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
I can then get the file I am looking for and perform:
Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
The file's location varies by distribution, but I need to edit that specific file in place.
Untested code, just to give the idea:
package 'cloud-init' do
  action :install
  notifies :run, 'ruby_block[update_cloud_init]'
end

ruby_block 'update_cloud_init' do
  block do
    # locate the installed file, then edit it in place
    cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
    rc = Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
    rc.search_file_replace_line(/^what to find$/,
                                'replacement data for the line')
    rc.write_file
  end
end
ruby_block example taken and adapted from here
I would rather use a template to manage the whole file; what I don't understand is why you don't know where it will be in the first place...
Previous answer
I assume it's a compile vs. converge problem: at the time the message is stored (and so your command is executed), the package is not yet installed.
Chef runs in two phases, compile then converge.
At compile time it builds a collection of resources, and at converge time it executes code for each resource to bring it into the described state.
When your log resource is compiled, the ugly backticks are evaluated; at this point there is a package resource in the collection, but that resource has not been executed yet, so the output is correct.
I don't understand what you want to achieve with those log resources at all.
If you want to test your node's state after the Chef run, use a handler, or something like ServerSpec as in Test-Kitchen.
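That said, if the goal is only for the log to reflect the post-install state, you can defer evaluation to converge time. Here is a minimal sketch using Chef's lazy property evaluation and the same backticks as in the question:
log 'rpm-qi' do
  # lazy {} delays evaluating the message until this resource converges,
  # i.e. after the package resource has actually installed cloud-init
  message lazy { `rpm -qi cloud-init` }
  level :warn
end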
