It's the first time I've used the OpenEJB container system. When I use the lookup method of the InitialContext, I get a NameNotFoundException. I've read lots of examples and tutorials, and in every example the lookup call looks like this:
initialContext.lookup("NameOfBean");
Now I've found another solution, which uses a lookup like the following code snippet; this works for me too:
initialContext.lookup("java:global/classpath.ear/ProjectName/NameofBean");
My question is: why doesn't the first version work for me, and what have I done wrong?
Excerpts from the OpenEJB log:
INFO - ********************************************************************************
INFO - OpenEJB http://openejb.apache.org/
INFO - Startup: Sat Dec 22 13:17:59 CET 2012
INFO - Copyright 1999-2012 (C) Apache OpenEJB Project, All Rights Reserved.
INFO - Version: 4.5.1
INFO - Build date: 20121209
INFO - Build time: 08:47
INFO - ********************************************************************************
INFO - openejb.home = D:\workspace\ProjectName
INFO - openejb.base = D:\workspace\ProjectName
INFO - Succeeded in installing singleton service
INFO - Cannot find the configuration file [conf/openejb.xml]. Will attempt to create one for the beans deployed.
INFO - Configuring Service(id=Default Security Service, type=SecurityService, provider-id=Default Security Service)
INFO - Configuring Service(id=Default Transaction Manager, type=TransactionManager, provider-id=Default Transaction Manager)
INFO - Using 'openejb.deployments.classpath.include=.*'
INFO - Found EjbModule in classpath: D:\workspace\ProjectName\build\classes
INFO - Searched 17 classpath urls in 2184 milliseconds. Average 128 milliseconds per url.
INFO - Beginning load: D:\workspace\ProjectName\build\classes
INFO - Configuring enterprise application: D:\workspace\ProjectName\classpath.ear
WARNUNG - Method 'lookup' is not available for 'javax.annotation.Resource'. Probably using an older Runtime.
INFO - Auto-deploying ejb NameOfBean: EjbDeployment(deployment-id=NameOfBean)
[... AUTHOR'S NOTE: SOME MORE BEANS]
INFO - Assembling app: D:\workspace\ProjectName\classpath.ear
INFO - Hibernate Validator 4.2.0.Final
INFO - Ignoring XML configuration.
JAVA AGENT NOT INSTALLED. The JPA Persistence Provider requested installation of a ClassFileTransformer which requires a JavaAgent. See http://openejb.apache.org/3.0/javaagent.html
INFO - OpenJPA dynamically loaded a validation provider.
INFO - Jndi(name=NameOfBeanRemote) --> Ejb(deployment-id=NameofBean)
INFO - Jndi(name=global/classpath.ear/ProjectName/NameOfBean!de.mypath.stateless.NameOfBeanInterface) --> Ejb(deployment-id=NameofBean)
INFO - Jndi(name=global/classpath.ear/ProjectName/NameofBean) --> Ejb(deployment-id=NameOfBean)
[... AUTHOR'S NOTE: SAME FOR OTHER BEANS]
INFO - OpenWebBeans Container is starting...
INFO - Adding OpenWebBeansPlugin : [CdiPlugin]
INFO - All injection points are validated successfully.
INFO - OpenWebBeans Container has started, it took 250 ms.
INFO - Created Ejb(deployment-id=NameOfBean, ejb-name=NameOfBean, container=Default Stateless Container)
[... AUTHOR'S NOTE: SAME FOR OTHER BEANS]
INFO - Quartz scheduler 'OpenEJB-TimerService-Scheduler' initialized from an externally provided properties instance.
INFO - Quartz scheduler version: 2.1.6
INFO - Scheduler OpenEJB-TimerService-Scheduler_$_OpenEJB started.
INFO - Started Ejb(deployment-id=NameOfBean, ejb-name=NameOfBean, container=Default Stateless Container)
[... AUTHOR'S NOTE: SAME FOR OTHER BEANS]
This is my test class:
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class NameOfBeanOpenEJBTest {

    private static InitialContext initialContext;

    @BeforeClass
    public static void setUp() throws Exception {
        Properties properties = new Properties();
        properties.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.apache.openejb.client.LocalInitialContextFactory");
        properties.setProperty("openejb.deployments.classpath.include", ".*");
        initialContext = new InitialContext(properties);
    }

    @Test
    public void testBean() throws NamingException {
        Object object = initialContext.lookup("java:global/classpath.ear/ProjectName/NameOfBean");
        assertNotNull(object);
        assertTrue(object instanceof NameOfBean);
    }

    @AfterClass
    public static void afterClass() throws Exception {
        if (initialContext != null) {
            initialContext.close();
        }
    }
}
Does anyone have tips or solutions for me?
Thanks a lot.
Edit:
In JBoss AS 7.1, the lookup can be written like this:
new InitialContext().lookup("ejb:/ProjectName//NameOfBean!de." + "mypath.sessionbean.stateless.NameOfBeanInterface");
Isn't that possible in OpenEJB? Do I have to change every lookup call in every bean just to run a local test with OpenEJB? That wouldn't be very efficient or time-saving.
Problem solved!
The structure of the lookup name is {deploymentId}{interfaceType.annotationName}. Therefore, in my case it must be
initialContext.lookup("NameOfBeanLocal");
or
initialContext.lookup("NameOfBeanRemote");
depending on the type of the interface.
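For completeness, a hedged sketch of such a lookup with the cast (the interface name is the anonymized one from the question; the returned object is a container proxy implementing the business interface):

// Lookup by {deploymentId} + interface-type suffix under OpenEJB's default JNDI names.
NameOfBeanInterface bean = (NameOfBeanInterface) initialContext.lookup("NameOfBeanLocal");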
To solve the problem with JBoss, you can switch from the default lookup
new InitialContext().lookup("ejb:/ProjectName//NameOfBean!de." + "mypath.sessionbean.stateless.NameOfBeanInterface");
to something more flexible like dependency lookup or dependency injection with the @EJB annotation. Both approaches are supported by JBoss and OpenEJB.
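For illustration, a minimal sketch of the injection variant, assuming a container-managed client; the bean, interface, and method names are the anonymized/hypothetical ones used above:

import javax.ejb.EJB;
import javax.ejb.Stateless;

@Stateless
public class NameOfBeanClient {

    // The container (JBoss AS or OpenEJB) resolves this reference itself,
    // so no vendor-specific JNDI string is hard-coded in the client.
    @EJB
    private NameOfBeanInterface nameOfBean;

    public void doWork() {
        nameOfBean.someBusinessMethod(); // hypothetical business method
    }
}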
Related
I am new to Flink. I am trying to run the Flink example on my local PC (Windows).
However, after I run start-cluster.bat and log in to the dashboard, it shows that the number of task managers is 0.
I checked the log, and it seems the TaskManager fails to initialize:
2020-02-21 23:03:14,202 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner - TaskManager initialization failed.
org.apache.flink.configuration.IllegalConfigurationException: Failed to create TaskExecutorResourceSpec
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:72)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:356)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:152)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:308)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:322)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:321)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:287)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: The required configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback keys: []) is not set
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkConfigOptionIsSet(TaskExecutorResourceUtils.java:90)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.lambda$checkTaskExecutorResourceConfigSet$0(TaskExecutorResourceUtils.java:84)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkTaskExecutorResourceConfigSet(TaskExecutorResourceUtils.java:84)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:70)
... 7 more
2020-02-21 23:03:14,217 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
Basically, it looks like the required option 'taskmanager.cpu.cores' is not set. However, I can't find this property in flink-conf.yaml or in the documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html) either.
I am using Flink 1.10.0. Any help would be highly appreciated!
That configuration option is intended for internal use only -- it shouldn't be user-configured, which is why it isn't documented.
The windows start-cluster.bat is failing because of a bug introduced in Flink 1.10. See https://jira.apache.org/jira/browse/FLINK-15925.
One workaround is to use the bash script, start-cluster.sh, instead.
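Not from the linked resources, but as a further way to experiment on Windows without the start scripts at all, a job can run in an embedded MiniCluster via the local execution environment. A minimal sketch (class and job names are hypothetical):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalSmokeTest {
    public static void main(String[] args) throws Exception {
        // createLocalEnvironment() starts an embedded MiniCluster inside
        // this JVM, so neither start-cluster.bat nor start-cluster.sh is involved.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();

        env.fromElements(1, 2, 3, 4)
           .map(new MapFunction<Integer, Integer>() {
               @Override
               public Integer map(Integer value) {
                   return value * value;
               }
           })
           .print();

        env.execute("local smoke test");
    }
}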
See also this mailing list thread: https://lists.apache.org/thread.html/r7693d0c06ac5ced9a34597c662bcf37b34ef8e799c32cc0edee373b2%40%3Cdev.flink.apache.org%3E
I am facing an issue with the Ninject IoC container.
I am using Sitecore 8.2 update 5 and switching from Lucene to Solr search engine using the steps mentioned in https://sitecorerockz.wordpress.com/2018/08/01/lucene-to-solr/
I am using Solr 6.6.3. This project was originally on Sitecore 6.x; upgrades happened from time to time, and now it is on Sitecore 8.2 update 5.
The same Solr setup is working fine for the fresh Sitecore 8.2 update 5 setup.
I created a Solr diagnostic page and placed it in the /sitecore/admin folder to check the error details. I am getting the below error for all the indexes:
Solr Indexes

Microsoft.Practices.ServiceLocation.ActivationException: Activation error occured while trying to get instance of type ISolrOperations`1, key "sitecore_analytics_index" ---> Ninject.ActivationException: Error activating ISolrOperations{Dictionary{string, Object}}
No matching bindings are available, and the type is not self-bindable.
Activation path:
1) Request for ISolrOperations{Dictionary{string, Object}}

Suggestions:
1) Ensure that you have defined a binding for ISolrOperations{Dictionary{string, Object}}.
2) If the binding was defined in a module, ensure that the module has been loaded into the kernel.
3) Ensure you have not accidentally created more than one kernel.
4) If you are using constructor arguments, ensure that the parameter name matches the constructors parameter name.
5) If you are using automatic module loading, ensure the search path and filters are correct.

at Ninject.KernelBase.Resolve(IRequest request) in c:\Projects\Ninject\ninject\src\Ninject\KernelBase.cs:line 376
at Ninject.ResolutionExtensions.Get(IResolutionRoot root, Type service, String name, IParameter[] parameters) in c:\Projects\Ninject\ninject\src\Ninject\Syntax\ResolutionExtensions.cs:line 164
at MyLibrary.test.Infrastructure.NinjectServiceLocator.DoGetInstance(Type serviceType, String key) in C:\test_Git\Sitecore\src\test\Infrastructure\NinjectServiceLocator.cs:line 15
at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 49
--- End of inner exception stack trace ---
at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 53
at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance[TService](String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 103
at Sitecore.ContentSearch.SolrProvider.SolrSearchIndex.Initialize()
at ASP._Page_sitecore_admin_solr_diagnostic_cshtml.Execute() in c:\test_Git\Sitecore\build\25Sep2019\Website\sitecore\admin\solr-diagnostic.cshtml:line 29
What am I missing? Could you please advise?
The SetLocatorProvider was getting initialized twice. To fix it, I:
- Modified the custom code related to the Ninject IoC container
- Replaced Ninject.dll 3.0.0.0, which our solution was using, with the newer Ninject.dll 3.2.2.0 under the bin\Social folder
- Replaced all the Solr DLL files with those from a fresh Sitecore 8.2 update 5 installation
Setting up JanusGraph, I noticed the following in the console:
09:04:12,175 INFO ReflectiveConfigOptionLoader:173 - Loaded and initialized config classes: 10 OK out of 12 attempts in PT0.023S
09:04:12,230 INFO Reflections:224 - Reflections took 28 ms to scan 1 urls, producing 2 keys and 2 values
09:04:12,291 WARN GraphDatabaseConfiguration:1445 - Local setting index.search.index-name=entity (Type: GLOBAL_OFFLINE) is overridden by globally managed value (janusgraph). Use the ManagementSystem interface instead of the local configuration to control this setting.
09:04:12,294 WARN GraphDatabaseConfiguration:1445 - Local setting index.search.backend=solr (Type: GLOBAL_OFFLINE) is overridden by globally managed value (elasticsearch). Use the ManagementSystem interface instead of the local configuration to control this setting.
09:04:12,300 INFO CassandraThriftStoreManager:628 - Closed Thrift connection pooler.
and then I see the following:
Exception in thread "main" java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
How do I stop using Elasticsearch and switch to Solr?
My properties file is as follows:
index.search.backend=solr
index.search.directory=/path/to/directory/for/solr/index/something
index.search.index-name=something
index.search.solr.mode=http
index.search.solr.http-urls=http://127.0.0.1:8983/solr
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
The answer to this is basically the same as this one for Titan. JanusGraph was forked from Titan.
You are probably trying to connect to an existing graph that was previously configured to use Elasticsearch. By default, the keyspace is named janusgraph.
1) You could connect to a different keyspace by updating conf/janusgraph-cassandra.properties:
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
storage.cassandra.keyspace=mygraph
2) You could drop the existing keyspace. If you used bin/janusgraph.sh start from the quick start directions (which starts a single-node Cassandra and a single-node Elasticsearch), run:
bin/janusgraph.sh clean
Or if you have a standalone Cassandra installation:
$CASSANDRA_HOME/bin/cqlsh -e 'drop keyspace if exists janusgraph'
Then you would be able to connect with the default conf/janusgraph-cassandra.properties.
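For completeness, the GLOBAL_OFFLINE warnings in the question point at a third option: changing the globally managed values through the ManagementSystem interface instead of the local properties file. This only helps if the graph can still be opened (i.e., the currently configured Elasticsearch backend is reachable), and the new values take effect on the next restart. A hedged sketch, with property values taken from the question:

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.schema.JanusGraphManagement;

public class SwitchIndexBackend {
    public static void main(String[] args) {
        // Assumes the graph opens with its current configuration and that
        // this is the only open JanusGraph instance.
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cassandra.properties");

        JanusGraphManagement mgmt = graph.openManagement();
        mgmt.set("index.search.backend", "solr");         // GLOBAL_OFFLINE option
        mgmt.set("index.search.index-name", "something"); // value from the question
        mgmt.commit();

        // The new values apply the next time the graph is opened.
        graph.close();
    }
}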
While going through the Google docs, I'm getting the below stack trace on the final export command (executed from the master instance with appropriate env variables set).
${HADOOP_HOME}/bin/hadoop jar ${HADOOP_BIGTABLE_JAR} export-table -libjars ${HADOOP_BIGTABLE_JAR} <table-name> <gs://bucket>
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-install/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-install/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-02-08 23:39:39,068 INFO [main] mapreduce.Export: versions=1, starttime=0, endtime=9223372036854775807, keepDeletedCells=false
2016-02-08 23:39:39,213 INFO [main] gcs.GoogleHadoopFileSystemBase: GHFS version: 1.4.4-hadoop2
java.lang.IllegalAccessError: tried to access field sun.security.ssl.Handshaker.localSupportedSignAlgs from class sun.security.ssl.ClientHandshaker
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:278)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:913)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:849)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1035)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1344)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1371)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1355)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getBucket(GoogleCloudStorageImpl.java:1599)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getItemInfo(GoogleCloudStorageImpl.java:1554)
at com.google.cloud.hadoop.gcsio.CacheSupplementedGoogleCloudStorage.getItemInfo(CacheSupplementedGoogleCloudStorage.java:547)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1042)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.exists(GoogleCloudStorageFileSystem.java:383)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configureBuckets(GoogleHadoopFileSystemBase.java:1650)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.configureBuckets(GoogleHadoopFileSystem.java:71)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1598)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:783)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:746)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:509)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:207)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:168)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:291)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:92)
at org.apache.hadoop.hbase.mapreduce.IdentityTableMapper.initJob(IdentityTableMapper.java:51)
at org.apache.hadoop.hbase.mapreduce.Export.createSubmittableJob(Export.java:75)
at org.apache.hadoop.hbase.mapreduce.Export.main(Export.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:153)
at com.google.cloud.bigtable.mapreduce.Driver.main(Driver.java:35)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Here's my environment variable setup, in case it's helpful:
export HBASE_HOME=/home/hadoop/hbase-install
export HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
export HADOOP_HOME=/home/hadoop/hadoop-install
export HADOOP_CLIENT_OPTS="-Xbootclasspath/p:${HBASE_HOME}/lib/bigtable/alpn-boot-7.1.3.v20150130.jar"
export HADOOP_BIGTABLE_JAR=${HBASE_HOME}/lib/bigtable/bigtable-hbase-mapreduce-0.2.2-shaded.jar
export HADOOP_HBASE_JAR=${HBASE_HOME}/lib/hbase-server-1.1.2.jar
Also, when I try to run hbase shell and then list tables, it just hangs and doesn't fetch the list of tables. This is what happens:
~$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-install/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-install/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-02-09 00:02:01,334 INFO [main] grpc.BigtableSession: Opening connection for projectId mystical-height-89421, zoneId us-central1-b, clusterId twitter-data, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
2016-02-09 00:02:01,358 INFO [BigtableSession-startup-0] grpc.BigtableSession: gRPC is using the JDK provider (alpn-boot jar)
2016-02-09 00:02:01,648 INFO [bigtable-connection-shared-executor-pool1-t2] io.RefreshingOAuth2CredentialsInterceptor: Refreshing the OAuth token
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2, rcc2b70cf03e3378800661ec5cab11eb43fafe0fc, Wed Aug 26 20:11:27 PDT 2015
hbase(main):001:0> list
TABLE
I've tried:
Double-checking that ALPN and the environment variables are appropriately set
Double-checking hbase-site.xml and hbase-env.sh to make sure nothing looks wrong
I also tried connecting to my cluster (as I was previously able to do following these directions) from another gcloud instance, but I can't get that to work now either (it also hangs):
user#gcloud-instance:hbase-1.1.2$ bin/hbase shell
2016-02-09 00:07:03,506 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-02-09 00:07:03,913 INFO [main] grpc.BigtableSession: Opening connection for projectId <project>, zoneId us-central1-b, clusterId <cluster>, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
2016-02-09 00:07:04,039 INFO [BigtableSession-startup-0] grpc.BigtableSession: gRPC is using the JDK provider (alpn-boot jar)
2016-02-09 00:07:05,138 INFO [Credentials-Refresh-0] io.RefreshingOAuth2CredentialsInterceptor: Refreshing the OAuth token
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2, rcc2b70cf03e3378800661ec5cab11eb43fafe0fc, Wed Aug 26 20:11:27 PDT 2015
hbase(main):001:0> list
TABLE
Feb 09, 2016 12:07:08 AM com.google.bigtable.repackaged.io.grpc.internal.TransportSet$1 run
INFO: Created transport com.google.bigtable.repackaged.io.grpc.netty.NettyClientTransport@7b480442(bigtabletableadmin.googleapis.com/64.233.183.219:443) for bigtabletableadmin.googleapis.com/64.233.183.219:443
Any ideas about what I'm doing wrong? It looks like an access issue; how do I fix it?
Thanks!
You can:
1) Spin up a Dataproc cluster with Bigtable enabled following these instructions.
2) ssh to the master with ./cluster.sh ssh
3) Run hbase shell to verify that all is in order.
4) Run hadoop jar ${HADOOP_BIGTABLE_JAR} export-table -libjars ${HADOOP_BIGTABLE_JAR} <table-name> gs://<bucket>/some-folder
5) Run gsutil ls gs://<bucket>/some-folder/** and see if _SUCCESS exists. If so, the remaining files are your data.
6) exit from your cluster master.
7) Run ./cluster.sh delete to get rid of the cluster, if you no longer require it.
You ran into a problem with the weekly Java runtime update, which has since been corrected.
I have a solution with several web projects.
I have built each project into its own folder (there are about 10 projects):
/build/projA
/build/projB
/build/projC
If I call
aspnet_compiler -v / -p "build/#{proj}" merged/#{proj}
on each one in turn, everything is fine.
If I make all 10 calls to aspnet_compiler in parallel, I get assembly loading errors, saying the referenced assemblies are in use.
Is this a bug? Why would it be in use? It shouldn't be.
The assemblies that can't be loaded are used by multiple projects, but each project has its own copy in its bin folder.
Update: it seems to occur for (some of) the assemblies that are references of references. Those loaded with complete binding information seem to work.
Here's a dump of one of the failed loads from fuslogvw:
The operation failed.
Bind result: hr = 0x80070020. The process cannot access the file because it is being used by another process.
Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll
Running under executable c:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe
--- A detailed error log follows.
=== Pre-bind state information ===
LOG: User = andrew.bullock
LOG: DisplayName = Yahoo.Yui.Compressor
(Partial)
WRN: Partial binding information was supplied for an assembly:
WRN: Assembly Name: Yahoo.Yui.Compressor | Domain ID: 2
WRN: A partial bind occurs when only part of the assembly display name is provided.
WRN: This might result in the binder loading an incorrect assembly.
WRN: It is recommended to provide a fully specified textual identity for the assembly,
WRN: that consists of the simple name, version, culture, and public key token.
WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue.
LOG: Appbase = file:///E:/build/ProjA/
LOG: Initial PrivatePath = E:\build\ProjA\bin
LOG: Dynamic Base = C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\src_projects_proja\061ad2e7
LOG: Cache Base = C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\src_projects_proja\061ad2e7
LOG: AppName = 627cc9a9
Calling assembly : (Unknown).
===
LOG: This bind starts in default load context.
LOG: Using application configuration file: E:\build\ProjA\web.config
LOG: Using host configuration file:
LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config.
LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
LOG: Attempting download of new URL file:///C:/Windows/Microsoft.NET/Framework/v4.0.30319/Temporary ASP.NET Files/src_projects_proja/061ad2e7/627cc9a9/Yahoo.Yui.Compressor.DLL.
LOG: Attempting download of new URL file:///C:/Windows/Microsoft.NET/Framework/v4.0.30319/Temporary ASP.NET Files/src_projects_proja/061ad2e7/627cc9a9/Yahoo.Yui.Compressor/Yahoo.Yui.Compressor.DLL.
LOG: Attempting download of new URL file:///E:/build/ProjA/bin/Yahoo.Yui.Compressor.DLL.
LOG: Assembly download was successful. Attempting setup of file: E:\build\ProjA\bin\Yahoo.Yui.Compressor.dll
LOG: Entering download cache setup phase.
ERR: Error extracting manifest import from file (hr = 0x80070020).
ERR: Setup failed with hr = 0x80070020.
ERR: Failed to complete setup of assembly (hr = 0x80070020). Probing terminated.
If you call the ClientBuildManager directly, outside of aspnet_compiler, there doesn't seem to be a problem; I have no idea why.
// Precompile the application in-process rather than shelling out to
// aspnet_compiler.exe.
var parameter = new ClientBuildManagerParameter
{
    PrecompilationFlags = PrecompilationFlags.ForceDebug
};

// "/" is the virtual path; src and dest are the source and target
// directories, matching aspnet_compiler's -v/-p and target arguments.
var client = new ClientBuildManager("/", src, dest, parameter);
client.PrecompileApplication();