I have a Grails 2.4.4 app and when I try to install Neo4j by means of the plugin, I get this:
| Error Resolve error obtaining dependencies: Could not find artifact org.neo4j:neo4j-jdbc:jar:2.0.2 in grailsCentral (https://repo.grails.org/grails/plugins) (Use --stacktrace to see the full trace)
| Error Could not find artifact org.neo4j:neo4j-jdbc:jar:2.0.2 in grailsCentral (https://repo.grails.org/grails/plugins)
This is the relevant output from grails dependency-report:
+--- org.grails.plugins:neo4j:2.0.0-M02
|    \--- org.grails:grails-datastore-gorm-neo4j:2.0.0-M02
|         >>>> org.neo4j:neo4j-jdbc:2.0.2
|         \--- org.neo4j:neo4j-community:2.0.3
|              \--- org.neo4j:neo4j-kernel:2.0.3
|                   \--- org.apache.geronimo.specs:geronimo-jta_1.1_spec:1.1.1
|              \--- org.neo4j:neo4j-lucene-index:2.0.3
|                   \--- org.apache.lucene:lucene-core:3.6.2
|              \--- org.neo4j:neo4j-graph-algo:2.0.3
|              \--- org.neo4j:neo4j-udc:2.0.3
|              \--- org.neo4j:neo4j-graph-matching:2.0.3
|              \--- org.neo4j:neo4j-cypher:2.0.3
|                   \--- org.neo4j:neo4j-cypher-commons:2.0.3
|                   \--- org.neo4j:neo4j-cypher-compiler-1.9:2.0.3
|                   \--- org.neo4j:neo4j-cypher-compiler-2.0:2.0.3
|                        \--- org.parboiled:parboiled-scala_2.10:1.1.6
|                             \--- org.parboiled:parboiled-core:1.1.6
|                        \--- net.sf.opencsv:opencsv:2.0
|                   \--- org.scala-lang:scala-library:2.10.3
|              \--- org.neo4j:neo4j-jmx:2.0.3
|         \--- org.grails:grails-datastore-gorm-plugin-support:2.0.7.RELEASE
|              \--- org.grails:grails-datastore-web:2.0.7.RELEASE
(The >>>> org.neo4j:neo4j-jdbc:2.0.2 line is highlighted in red.)
Is there another version of the plugin that would work? Or a newer version of Grails? Or do I just need a newer version of some coffee?
The Neo4j JDBC driver is not available from Maven Central. Instead, it can be found at http://m2.neo4j.org/, so you need to amend the repositories closure in BuildConfig.groovy:
mavenRepo 'http://m2.neo4j.org/content/repositories/releases/'
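For context, a minimal sketch of how the repositories block in grails-app/conf/BuildConfig.groovy might look after the change (the surrounding entries are the usual Grails defaults, shown only for orientation):

```groovy
// grails-app/conf/BuildConfig.groovy
grails.project.dependency.resolution = {
    repositories {
        grailsCentral()
        mavenCentral()
        // Neo4j JDBC artifacts live in Neo4j's own Maven repository:
        mavenRepo 'http://m2.neo4j.org/content/repositories/releases/'
    }
}
```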
I am using yarn to run react-scripts build on a private Typescript repository. The build fails with the following message:
yarn build -v
yarn run v1.22.17
$ react-scripts build -v
Creating an optimized production build...
Failed to compile.
TS2322: Type 'string | undefined' is not assignable to type 'string | number | boolean'.
Type 'undefined' is not assignable to type 'string | number | boolean'.
114 | withCredentials: true,
115 | headers: {
> 116 | Authorization: token ? `Bearer ${token}` : undefined,
| ^^^^^^^^^^^^^
117 | },
118 | });
119 |
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I understand the error, but I cannot find which file the error is in, because react-scripts build doesn't seem to output the file name. I'm not sure if this is a problem with my local configuration, but is there a way to get react-scripts to tell me where exactly this error is?
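For what it's worth, the type error itself (independent of locating the file) comes from assigning `undefined` where the header map only accepts `string | number | boolean`. A common fix, sketched here with a hypothetical `buildHeaders` helper (not from the original post), is to conditionally spread the Authorization entry so the key is omitted entirely when there is no token:

```typescript
// Hypothetical helper illustrating the fix: omit the Authorization key
// instead of assigning it the value `undefined`.
function buildHeaders(token: string | null): Record<string, string | number | boolean> {
  return {
    // The conditional spread adds the key only when a token exists,
    // so no property ever holds `undefined`.
    ...(token ? { Authorization: `Bearer ${token}` } : {}),
  };
}

console.log(buildHeaders("abc123")); // { Authorization: 'Bearer abc123' }
console.log(buildHeaders(null));     // {}
```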
PS C:\code\Projects\Exam portal\examfront> ng add @angular/material
see about_Execution_Policies at https:/go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ ng add @angular/material
+ ~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
Step 1:> PS C:\code\Projects\Exam portal\examfront> set-ExecutionPolicy RemoteSigned -S
Step 2:> PS C:\code\Projects\Exam portal\examfront> Get-ExecutionPolicy
RemoteSigned
Scope ExecutionPolicy
MachinePolicy Undefined
UserPolicy Undefined
Process Undefined
CurrentUser RemoteSigned
LocalMachine Undefined
Step 3:> PS C:\code\Projects\Exam portal\examfront> ng --version
     _                      _                 ____ _     ___
    / \   _ __   __ _ _   _| | __ _ _ __     / ___| |   |_ _|
   / △ \ | '_ \ / _` | | | | |/ _` | '__|   | |   | |    | |
  / ___ \| | | | (_| | |_| | | (_| | |      | |___| |___ | |
 /_/   \_\_| |_|\__, |\__,_|_|\__,_|_|       \____|_____|___|
                |___/
Angular CLI: 12.0.3
Node: 14.15.3
Package Manager: npm 6.14.9
OS: win32 x64
Angular: 12.0.3
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1200.3
@angular-devkit/build-angular 12.0.3
@angular-devkit/core 12.0.3
@angular-devkit/schematics 12.0.3
@schematics/angular 12.0.3
rxjs 6.6.7
typescript 4.2.4
Step 4:> PS C:\code\Projects\Exam portal\examfront> ng add @angular/material
ℹ Using package manager: npm
✔ Found compatible package version: @angular/material@12.0.3.
✔ Package information loaded.
The package @angular/material@12.0.3 will be installed and executed.
Would you like to proceed? Yes
✔ Package successfully installed.
? Choose a prebuilt theme name, or "custom" for a custom theme: Indigo/Pink [ Preview: https://material.angular.io?theme=indigo-pink ]
? Set up global Angular Material typography styles? Yes
? Set up browser animations for Angular Material? Yes
UPDATE package.json (1137 bytes)
✔ Packages installed successfully.
UPDATE src/app/app.module.ts (502 bytes)
UPDATE angular.json (3260 bytes)
UPDATE src/index.html (577 bytes)
UPDATE src/styles.css (181 bytes)
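As a point of reference, the UPDATE to src/app/app.module.ts above typically just adds the animations module. A sketch of the resulting file (everything besides BrowserAnimationsModule is the CLI's default scaffold, assumed here):

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
// Added by `ng add @angular/material` when browser animations are enabled:
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, BrowserAnimationsModule],
  bootstrap: [AppComponent],
})
export class AppModule {}
```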
Trying to install ArangoDB on Mint 16.04, I was able to install it, but it gives the following error:
Setting up arangodb3 (3.2.10) ...
FATAL ERROR: EXIT_FAILED - "exit with error"
dpkg: error processing package arangodb3 (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
arangodb3
E: Sub-process /usr/bin/dpkg returned an error code (1)
Can someone help me with this?
Try this:
sudo dpkg --purge arangodb3
Then upgrade your system to check whether the same error recurs:
sudo apt-get upgrade
Then reinstall arangodb3.
It worked for me :-)
Error: I had a similar issue. I checked https://github.com/arangodb/arangodb/issues/7705 (an issue raised there), where they claim the error results from the default SSE4.2 configuration clashing with older CPU types; see the conversation thread.
Solution: I directly downloaded the .deb file; my specific version was arangodb3-3.3.0-1_amd64.deb (you can play around with the version numbers, because the current version is arangodb3-3.9.1-1). Then I ran the install command below:
sudo dpkg -i arangodb3-3.3.0-1_amd64.deb
And on executing the arangosh command in the terminal, voilà:
Please specify a password:
                                       _
  __ _ _ __ __ _ _ __   __ _  ___  ___| |__
 / _` | '__/ _` | '_ \ / _` |/ _ \/ __| '_ \
| (_| | | | (_| | | | | (_| | (_) \__ \ | | |
 \__,_|_|  \__,_|_| |_|\__, |\___/|___/_| |_|
                       |___/
arangosh (ArangoDB 3.3.0 [linux] 64bit, using jemalloc, VPack 0.1.30, RocksDB 5.6.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.1.0f 25 May 2017)
Copyright (c) ArangoDB GmbH
Pretty printing values.
Connected to ArangoDB 'http+tcp://127.0.0.1:8529' version: 3.3.0 [server], database: '_system', username: 'root'
I found I can select the target machine under project properties. There is a large list:
MachineX86 (/MACHINE:X86)
MachineAM33 (/MACHINE:AM33)
MachineARM (/MACHINE:ARM)
MachineEBC (/MACHINE:EBC)
MachineIA64 (/MACHINE:IA64)
MachineM32R (/MACHINE:M32R)
MachineMIPS (/MACHINE:MIPS)
MachineMIPS16 (/MACHINE:MIPS16)
MachineMIPSFPU (/MACHINE:MIPSFPU)
MachineMIPSFPU16 (/MACHINE:MIPSFPU16)
MachineMIPSR41XX (/MACHINE:MIPSR41XX)
MachineSH3 (/MACHINE:SH3)
MachineSH3DSP (/MACHINE:SH3DSP)
MachineSH4 (/MACHINE:SH4)
MachineSH5 (/MACHINE:SH5)
MachineTHUMB (/MACHINE:THUMB)
MachineX64 (/MACHINE:X64)
I have the following questions:
What do these acronyms mean?
What hardware does each require?
What software and/or operating system does each require?
Is it possible to build binaries for these targets on an x86 system? If so, how do I configure Visual Studio 2008 Express?
Must any changes be made to the code if it compiles fine for x86, x64, and Itanium?
This setting is for configuring what CPU architecture to compile your source code for. Depending on the architecture you choose, you'll get a different binary.
A binary compiled for a specific CPU architecture has a structure that, after being loaded into memory, the CPU can understand. It contains, for example, instructions for the CPU to execute and data to load.
Whether or not the same source code can be compiled for different architectures depends on what you're doing. You'll usually be OK, but it's easy to break cross-compatibility by doing data-size sensitive stuff, manipulating memory directly, depending on architecture-specific features, etc.
Also, it's easy to write code which depends on features available in a specific OS. This isn't directly related to this setting, but of course if you're calling e.g. a Windows API that's only available on the PC and trying to compile for ARM, it won't work: there won't be any library compiled for ARM which defines that function to link with.
As an answer to question 1: you can find the list of conventional machine-type acronyms in the book "The Common Language Infrastructure Annotated Standard" by James S. Miller and Susann Ragsdale, p. 738.
+------------------------------+---------+------------------------------------------------+
| Constant                     | Value   | Description                                    |
+------------------------------+---------+------------------------------------------------+
| IMAGE_FILE_MACHINE_AM33 | 0x1d3 | Matsushita AM33 |
| IMAGE_FILE_MACHINE_AMD64 | 0x8664 | AMD AMD64 |
| IMAGE_FILE_MACHINE_ARM | 0x1c0 | ARM, little endian |
| IMAGE_FILE_MACHINE_CEE | 0xc0ee | clr pure MSIL (object only) |
| IMAGE_FILE_MACHINE_EBC | 0xebc | EFI Byte Code |
| IMAGE_FILE_MACHINE_I386 | 0x14c | Intel 386 or later, and compatible processors |
| IMAGE_FILE_MACHINE_IA64 | 0x200 | Intel IA64 |
| IMAGE_FILE_MACHINE_M32R | 0x9041 | Mitsubishi M32R, little endian |
| IMAGE_FILE_MACHINE_MIPS16 | 0x266 | |
| IMAGE_FILE_MACHINE_MIPSFPU | 0x366 | MIPS with FPU |
| IMAGE_FILE_MACHINE_MIPSFPU16 | 0x466 | MIPS16 with FPU |
| IMAGE_FILE_MACHINE_POWERPC | 0x1f0 | Power PC, little endian |
| IMAGE_FILE_MACHINE_POWERPCFP | 0x1f1 | Power PC with floating point support |
| IMAGE_FILE_MACHINE_R4000 | 0x166 | MIPS, little endian |
| IMAGE_FILE_MACHINE_SH3 | 0x1a2 | Hitachi SH3 |
| IMAGE_FILE_MACHINE_SH3DSP | 0x1a3 | Hitachi SH3 DSP |
| IMAGE_FILE_MACHINE_SH4 | 0x1a6 | Hitachi SH4 |
| IMAGE_FILE_MACHINE_SH5 | 0x1a8 | Hitachi SH5 |
| IMAGE_FILE_MACHINE_THUMB | 0x1c2 | Thumb |
| IMAGE_FILE_MACHINE_WCEMIPSV2 | 0x169 | MIPS WCE v2, little endian |
+------------------------------+---------+------------------------------------------------+
Note the differences in naming:
1. IMAGE_FILE_MACHINE_AMD64 -> MachineX64
2. IMAGE_FILE_MACHINE_I386 -> MachineX86
I'm not sure about "MachineMIPSR41XX" from your list, though.
I'm running a stock example route in Karaf. I know it's working because when I change stream:out to log:test, I can see the messages in the log every 5 seconds. However, I don't seem to get anything in the Karaf console. I also have camel-stream installed and don't see any errors in the log. Should I expect to see the stream:out data when I run the Karaf bin/client? Thanks!
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<camelContext xmlns="http://camel.apache.org/schema/blueprint" id="simple">
<route>
<from uri="timer:simple?period=5000"/>
<setBody>
<simple>Hello World</simple>
</setBody>
<to uri="stream:out"/>
</route>
</camelContext>
</blueprint>
I added the features info below. I'm using Karaf 2.2.9 now; I also tried it on 2.3.0:
karaf#root> features:list | grep stream
[installed ] [2.9.4 ] camel-stream camel-2.9.4
[uninstalled] [2.9.4 ] camel-xstream camel-2.9.4
karaf#root> features:list | grep bluepr
[installed ] [2.9.4 ] camel-blueprint camel-2.9.4
[installed ] [5.7.0 ] activemq-blueprint activemq-5.7.0
karaf#root> ^D
I've tried the following as well:
service KARAF-service stop
cd /usr/local/karaf
rm -rf data/cache/*
service KARAF-service start
echo "features:install camel-blueprint camel-stream" | bin/client
cp /root/simple-camel-blueprint.xml deploy/ && tail -f data/log/karaf.log
I get:
***deleted startup messages***
2012-12-01 14:56:42,166 | INFO | - timer://simple | test | 64 - org.apache.camel.camel-core - 2.9.4 | Exchange[ExchangePattern:InOnly, BodyType:String, Body:Hello World]
2012-12-01 14:56:47,134 | INFO | - timer://simple | test | 64 - org.apache.camel.camel-core - 2.9.4 | Exchange[ExchangePattern:InOnly, BodyType:String, Body:Hello World]
2012-12-01 14:56:52,135 | INFO | - timer://simple | test | 64 - org.apache.camel.camel-core - 2.9.4 | Exchange[ExchangePattern:InOnly, BodyType:String, Body:Hello World]
2012-12-01 14:56:57,135 | INFO | - timer://simple | test | 64 - org.apache.camel.camel-core - 2.9.4 | Exchange[ExchangePattern:InOnly, BodyType:String, Body:Hello World]
2012-12-01 14:57:02,135 | INFO | - timer://simple | test | 64 - org.apache.camel.camel-core - 2.9.4 | Exchange[ExchangePattern:InOnly, BodyType:String, Body:Hello World]
2012-12-01 14:57:07,136 | INFO | - timer://simple | test | 64 - org.apache.camel.camel-core - 2.9.4 | Exchange[ExchangePattern:InOnly, BodyType:String, Body:Hello World]
2012-12-01 14:57:12,135 | INFO | - timer://simple | test | 64 - org.apache.camel.camel-core - 2.9.4 | Exchange[ExchangePattern:InOnly, BodyType:String, Body:Hello World]
2012-12-01 14:57:17,135 | INFO | - timer://simple | test | 64 - org.apache.camel.camel-core - 2.9.4 | Exchange[ExchangePattern:InOnly, BodyType:String, Body:Hello World]
If I change the 'to' URI from "log:test" to "stream:out" and then run bin/client, I don't get anything. Is that a valid console-output test?
I should add this too:
karaf#root> camel:route-info route3
Camel Route route3
Camel Context: simple
Properties
id = route3
parent = 7fcb9c0a
Statistics
Exchanges Total: 5
Exchanges Completed: 5
Exchanges Failed: 0
Min Processing Time: 1ms
Max Processing Time: 5ms
Mean Processing Time: 1ms
Total Processing Time: 9ms
Last Processing Time: 1ms
Load Avg: 0.00, 0.00, 0.00
First Exchange Date: 2012-12-01 15:08:32
Last Exchange Completed Date: 2012-12-01 15:08:52
Definition
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<route id="route3" xmlns="http://camel.apache.org/schema/spring">
<from uri="timer:simple?period=5000"/>
<setBody id="setBody3">
<simple>Hello World</simple>
</setBody>
<to uri="stream:out" id="to3"/>
</route>
Which Camel and Karaf versions do you use?
And have you installed the camel-stream feature? You would need to do that before installing and starting your own bundle.