Spring Turbine dashboard not working - hystrix

I am facing an issue with the Turbine dashboard. I can get the turbine stream for the given cluster, but I cannot see anything on the dashboard; it just keeps loading, as shown in the screenshots below. Kindly help if any configuration is missing.
Below are my configurations:
config.properties
turbine.aggregator.clusterConfig=SpringHystrixDemo2
turbine.instanceUrlSuffix=:9080/hystrix.stream
turbine.EurekaInstanceDiscovery.hystrix2.instances=localhost
InstanceDiscovery.impl=com.netflix.turbine.discovery.EurekaInstanceDiscovery.class
turbine.InstanceMonitor.eventStream.skipLineLogic.enabled=false
Application.yml
server:
  port: 8080
turbine:
  aggregator:
    clusterConfig: SPRINGHYSTRIXDEMO2
  clusterNameExpression: new String("default")
  appConfig: SpringHystrixDemo2
  InstanceMonitor:
    eventStream:
      skipLineLogic:
        enabled: false
bootstrap.yml
spring:
  application:
    name: SpringTurbine
  cloud:
    config:
      discovery:
        enabled: true
eureka:
  instance:
    nonSecurePort: ${server.port:8080}
  client:
    serviceUrl:
      defaultZone: http://${eureka.host:localhost}:${eureka.port:8761}/eureka/
Application.java
@SpringBootApplication
@EnableHystrix
@EnableEurekaClient
@EnableHystrixDashboard
@EnableTurbine
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
For the cluster SpringHystrixDemo2, I have configured a different application running on another port:
application.yml -
server:
  port: 9080
hystrix:
  command:
    RemoteMessageClientCommand:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 5000
    RemoteMessageAnnotationClient:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 5000
bootstrap.yml
spring:
  application:
    name: SpringHystrixDemo2
  cloud:
    config:
      enabled: true
      discovery:
        enabled: true
        serviceId: SPRINGCONFIGSERVER
eureka:
  instance:
    nonSecurePort: ${server.port:9080}
  client:
    serviceUrl:
      defaultZone: http://${eureka.host:localhost}:${eureka.port:8761}/eureka/
Application.java - this is from the Hystrix dashboard service.
@SpringBootApplication
@EnableHystrix
@EnableHystrixDashboard
@EnableEurekaClient
@EnableDiscoveryClient
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
I have configured the Eureka server on port 8761, which is listening to all the other Eureka clients:
Eureka server
This is why I am not able to see the Turbine dashboard; it just keeps loading:
turbine stream view

The first thing that comes to my mind is that you could declare a management endpoint like this:
management:
  port: 9081
  contextPath: /management
Then the turbine stream would be accessible via {yourHost}:9081/management/turbine.stream, while the Hystrix dashboard will be served under {yourHost}:9080/hystrix.
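For illustration, a minimal sketch of an application.yml combining a server port with such a management block (this assumes the Spring Boot 1.x style management properties, which the management.port/contextPath keys above imply):

server:
  port: 8080
management:
  port: 9081
  contextPath: /management

Endpoints exposed over the management context would then be served from port 9081, as in the URLs above, instead of from the main server port.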

From what I have read, given your configuration
turbine.aggregator.clusterConfig=SpringHystrixDemo2
turbine.instanceUrlSuffix=:9080/hystrix.stream
turbine.EurekaInstanceDiscovery.hystrix2.instances=localhost
InstanceDiscovery.impl=com.netflix.turbine.discovery.EurekaInstanceDiscovery.class
turbine.InstanceMonitor.eventStream.skipLineLogic.enabled=false
the following may be the problem(s). You are missing a few configs, and you may have some extra ones too.
You do not need "turbine.EurekaInstanceDiscovery.hystrix2.instances" unless you really have multiple instances.
You do not need "turbine.InstanceMonitor.eventStream.skipLineLogic.enabled" because it is false by default; it is only needed if you want to set it to true when you have high latency.
You need "turbine.appConfig=". In your case I think it is something like SpringHystrixDemo2 or maybe hystrix2; use the proper name here.
You need "turbine.aggregator.clusterConfig=", which worked for me only when I used it in capitals, i.e. HYSTRIX2.
If you are using a different management port on your service, set "turbine.instanceUrlSuffix.HYSTRIX2=:/hystrix.stream".
Then "turbine.instanceInsertPort=false" will disable default port insertion by Turbine; basically, you are telling Eureka not to insert any port on its own when trying to reach hystrix.stream.
The following are my properties:
#turbine.clusterNameExpression=new String('default')
#turbine.clusterNameExpression="'default'"
turbine.instanceInsertPort=false
turbine.appConfig=service1
turbine.aggregator.clusterConfig=SERVICE1
turbine.instanceUrlSuffix.SERVICE1=:51512/hystrix.stream
#turbine.ConfigPropertyBasedDiscovery.USER.instances=service1-host1.abc.com,service1-host2.abc.com
InstanceDiscovery.impl=com.netflix.turbine.discovery.EurekaInstanceDiscovery.class
#for high latencies
#turbine.InstanceMonitor.eventStream.skipLineLogic.enabled=false
Then try the turbine stream at
http://host:port/turbine.stream?cluster=SERVICE1
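For reference, a minimal sketch of the Turbine application class that such properties would typically accompany (the class name is hypothetical; the annotations are the standard Spring Cloud Netflix ones):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.netflix.turbine.EnableTurbine;

@SpringBootApplication
@EnableEurekaClient
@EnableTurbine
public class TurbineApplication {
    public static void main(String[] args) {
        // Turbine discovers the service1 instances via Eureka and aggregates
        // their hystrix.stream endpoints into a single turbine.stream.
        SpringApplication.run(TurbineApplication.class, args);
    }
}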

Related

Change Camel's basic /camel url

Issue:
I need to change the basic /camel URL which Camel uses by default, but when I try to change it in application.yml nothing happens.
I would like to keep the other systems intact without changing the URLs they already use (that would require quite a bit of work in the back-end systems).
Current URL: http://localhost:8080/camel/hello
Desired URL: http://localhost:8080/service/hello
Links I checked that are NOT working for me:
Link1
Link2
Link3
E.g. application.yml:
camel:
  springboot:
    name: CamelRestContext
  component:
    servlet:
      mapping:
        enabled: true
        context-path: /service
So apparently this way works:
camel:
  springboot:
    name: RestDSLContext
  servlet:
    mapping:
      context-path: /service/*
  rest:
    context-path: /service
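To make the effect concrete, here is a minimal sketch of a REST DSL route that the above config would expose at http://localhost:8080/service/hello (the route and class names are hypothetical):

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class HelloRoute extends RouteBuilder {
    @Override
    public void configure() {
        // With camel.servlet.mapping.context-path=/service/*, this GET
        // endpoint is reachable at /service/hello instead of /camel/hello.
        rest("/hello")
            .get()
            .to("direct:hello");

        from("direct:hello")
            .transform().constant("Hello from the /service context path");
    }
}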

How do I connect Ecto to CockroachDB Serverless?

I'd like to use CockroachDB Serverless for my Ecto application. How do I specify the connection string?
I get an error like this when trying to connect.
[error] GenServer #PID<0.295.0> terminating
** (Postgrex.Error) FATAL 08004 (sqlserver_rejected_establishment_of_sqlconnection) codeParamsRoutingFailed: missing cluster name in connection string
(db_connection 2.4.1) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
CockroachDB Serverless says to connect by including the cluster name in the connection string, like this:
postgresql://username:<ENTER-PASSWORD>@free-tier.gcp-us-central1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=$HOME/.postgresql/root.crt&options=--cluster%3Dcluster-name-1234
but I'm not sure how to get Ecto to create this connection string via its configuration.
The problem is that Postgrex is not able to parse all of the information from the connection URL - notably the SSL configuration. The solution is to specify the connection parameters explicitly, including the cacertfile SSL option. Assuming that you have downloaded your cluster's CA certificate to priv/certs/ca-cert.crt, you can use the following config as a template:
config :my_app, MyApp.Repo,
  username: "my_user",
  password: "my_password",
  database: "defaultdb",
  hostname: "free-tier.gcp-us-central1.cockroachlabs.cloud",
  port: 26257,
  ssl: true,
  ssl_opts: [
    cacertfile: Path.expand("priv/certs/ca-cert.crt")
  ],
  parameters: [options: "--cluster=my-cluster-123"]
Possible Other Issues
Table Locking
Since CockroachDB also does not support the locking that Ecto/Postgrex attempts on the migration table, the :migration_lock config needs to be disabled as well:
config :my_app, MyApp.Repo,
  # ...
  migration_lock: false
Auth generator
Finally, the new phx.gen.auth generator defaults to using the citext extension for storing a user's email address in a case-insensitive manner. The line in the generated migration that executes CREATE EXTENSION IF NOT EXISTS citext should be removed, and the column type for the :email field should be changed from :citext to :string.
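As a sketch, the relevant part of the generated migration would then look roughly like this (the table and column names follow the phx.gen.auth defaults; adjust to your actual generator output):

defmodule MyApp.Repo.Migrations.CreateUsersAuthTables do
  use Ecto.Migration

  def change do
    # The `execute "CREATE EXTENSION IF NOT EXISTS citext", ""` line emitted
    # by the generator has been removed here, since CockroachDB does not
    # support the citext extension.
    create table(:users) do
      add :email, :string, null: false  # was :citext in the generated file
      add :hashed_password, :string, null: false
      add :confirmed_at, :naive_datetime
      timestamps()
    end

    create unique_index(:users, [:email])
  end
end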
This configuration allows Ecto to connect to CockroachDB Serverless correctly:
config :myapp, MyApp.Repo,
  username: "username",
  password: "xxxx",
  database: "defaultdb",
  hostname: "free-tier.gcp-us-central1.cockroachlabs.cloud",
  port: 26257,
  ssl: true,
  ssl_opts: [
    cert_pem: "foo.pem",
    key_pem: "bar.pem"
  ],
  show_sensitive_data_on_connection_error: true,
  pool_size: 10,
  parameters: [
    options: "--cluster=cluster-name-1234"
  ]

Apache SkyWalking, Zipkin Receiver is not activated

Following the documentation steps, I wasn't able to activate the Zipkin Receiver in apache-skywalking-apm-bin-es7-8.7.0.
My application.yml is:
cluster:
  selector: ${SW_CLUSTER:standalone}
  ...
storage:
  selector: ${SW_STORAGE:zipkin-elasticsearch7}
  ...
  zipkin-elasticsearch7:
    nameSpace: ${SW_NAMESPACE:""}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
    dayStep: ${SW_STORAGE_DAY_STEP:1}
    indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1}
    indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1}
    superDatasetDayStep: ${SW_SUPERDATASET_STORAGE_DAY_STEP:-1}
    superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5}
    superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0}
    user: ${SW_ES_USER:""}
    password: ${SW_ES_PASSWORD:""}
    secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""}
    bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:5000}
    flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:15}
    concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2}
    resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
    metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
    segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
    profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
    oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{\"analyzer\":{\"oap_analyzer\":{\"type\":\"stop\"}}}"}
    oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{\"analyzer\":{\"oap_log_analyzer\":{\"type\":\"standard\"}}}"}
    advanced: ${SW_STORAGE_ES_ADVANCED:""}
...
receiver_zipkin:
  selector: ${SW_RECEIVER_ZIPKIN:-}
  default:
    host: ${SW_RECEIVER_ZIPKIN_HOST:0.0.0.0}
    port: ${SW_RECEIVER_ZIPKIN_PORT:9411}
    contextPath: ${SW_RECEIVER_ZIPKIN_CONTEXT_PATH:/}
    jettyMinThreads: ${SW_RECEIVER_ZIPKIN_JETTY_MIN_THREADS:1}
    jettyMaxThreads: ${SW_RECEIVER_ZIPKIN_JETTY_MAX_THREADS:200}
    jettyIdleTimeOut: ${SW_RECEIVER_ZIPKIN_JETTY_IDLE_TIMEOUT:30000}
    jettyAcceptorPriorityDelta: ${SW_RECEIVER_ZIPKIN_JETTY_DELTA:0}
    jettyAcceptQueueSize: ${SW_RECEIVER_ZIPKIN_QUEUE_SIZE:0}
    instanceNameRule: ${SW_RECEIVER_ZIPKIN_INSTANCE_NAME_RULE:[spring.instance_id,node_id]}
Started with:
./bin/startup.sh
Ports 11800 and 12900 are listed as listening, and the UI is working, but port 9411 is not listed as listening.
You should set SW_RECEIVER_ZIPKIN=default through the system environment, or change the selector in application.yml to selector: ${SW_RECEIVER_ZIPKIN:default}.
You should follow the Level 2 instructions in the backend setup doc: https://skywalking.apache.org/docs/main/v8.7.0/en/setup/backend/backend-setup/#applicationyml
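For example, the relevant fragment of application.yml after the change would look like this (taken from the config above, with only the selector default changed):

receiver_zipkin:
  selector: ${SW_RECEIVER_ZIPKIN:default}
  default:
    host: ${SW_RECEIVER_ZIPKIN_HOST:0.0.0.0}
    port: ${SW_RECEIVER_ZIPKIN_PORT:9411}
    contextPath: ${SW_RECEIVER_ZIPKIN_CONTEXT_PATH:/}

The - default in ${SW_RECEIVER_ZIPKIN:-} means the receiver module is disabled unless the environment variable overrides it, which is why port 9411 never starts listening.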

FailedToStartRouteException exception while using camel-spring-boot, amqp and kafka starters with SpringBoot, unable to find connectionFactory bean

I am creating an application using Apache Camel to transfer messages from AMQP to Kafka. Code can also be seen here - https://github.com/prashantbhardwaj/qpid-to-kafka-using-camel
I thought of creating it as a standalone Spring Boot app using the spring, amqp and kafka starters. I created a route like this:
@Component
public class QpidToKafkaRoute extends RouteBuilder {
    public void configure() throws Exception {
        from("amqp:queue:destinationName")
            .to("kafka:topic");
    }
}
And the Spring Boot application configuration is:
@SpringBootApplication
public class CamelSpringJmsKafkaApplication {
    public static void main(String[] args) {
        SpringApplication.run(CamelSpringJmsKafkaApplication.class, args);
    }

    @Bean
    public JmsConnectionFactory jmsConnectionFactory(@Value("${qpidUser}") String qpidUser,
            @Value("${qpidPassword}") String qpidPassword,
            @Value("${qpidBrokerUrl}") String qpidBrokerUrl) {
        return new JmsConnectionFactory(qpidUser, qpidPassword, qpidBrokerUrl);
    }

    @Bean
    @Primary
    public CachingConnectionFactory jmsCachingConnectionFactory(JmsConnectionFactory jmsConnectionFactory) {
        return new CachingConnectionFactory(jmsConnectionFactory);
    }
}
The jmsConnectionFactory bean, which is created using the Spring @Bean annotation, should be picked up by the AMQP starter and injected into the route. But that is not happening. When I started this application, I got the following exception:
org.apache.camel.FailedToStartRouteException: Failed to start route route1 because of Route(route1)[From[amqp:queue:destinationName] -> [To[kafka:.
Caused by: java.lang.IllegalArgumentException: connectionFactory must be specified
If I am not wrong, the connectionFactory should be created automatically if I pass the right properties in the application.properties file.
My application.properties file looks like:
camel.springboot.main-run-controller = true
camel.component.amqp.enabled = true
camel.component.amqp.connection-factory = jmsCachingConnectionFactory
camel.component.amqp.async-consumer = true
camel.component.amqp.concurrent-consumers = 1
camel.component.amqp.map-jms-message = true
camel.component.amqp.test-connection-on-startup = true
camel.component.kafka.brokers = localhost:9092
qpidBrokerUrl = amqp://localhost:5672?jms.username=guest&jms.password=guest&jms.clientID=clientid2&amqp.vhost=default
qpidUser = guest
qpidPassword = guest
Could you please suggest why the connectionFactory object is not being used during autoconfiguration? When I debug this code, I can clearly see that the connectionFactory bean is getting created.
I can even see one more log line:
CamelContext has only been running for less than a second. If you intend to run Camel for a longer time then you can set the property camel.springboot.main-run-controller=true in application.properties or add spring-boot-starter-web JAR to the classpath.
However, as you can see in my application.properties file, the required property is present on the very first line.
There is one more log line I can see at the beginning of application startup:
[main] trationDelegate$BeanPostProcessorChecker : Bean 'org.apache.camel.spring.boot.CamelAutoConfiguration' of type [org.apache.camel.spring.boot.CamelAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
Is this log line suggesting anything?
Note: interestingly, exactly the same code was running fine last night; I just restarted my desktop, not a single word has changed, and now it is throwing this exception.
This just refers to an interface:
camel.component.amqp.connection-factory = javax.jms.ConnectionFactory
Instead it should refer to an existing factory instance, such as
camel.component.amqp.connection-factory = #myFactory
which you can set up via the Spring Boot @Bean annotation style.
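A minimal sketch of such a bean definition (the bean name myFactory is the hypothetical name that #myFactory above points at; the broker URL and credentials are placeholders):

import org.apache.qpid.jms.JmsConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AmqpConfig {

    // "#myFactory" in application.properties resolves to this bean by name.
    @Bean("myFactory")
    public JmsConnectionFactory myFactory() {
        return new JmsConnectionFactory("guest", "guest", "amqp://localhost:5672");
    }
}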

Flink to Nifi the Magic Header was not present

I am trying to use this example to connect Nifi to Flink:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
        .url("http://localhost:8090/nifi")
        .portName("Data for Flink")
        .requestBatchCount(5)
        .buildConfig();

SourceFunction<NiFiDataPacket> nifiSource = new NiFiSource(clientConfig);
DataStream<NiFiDataPacket> streamSource = env.addSource(nifiSource).setParallelism(2);

DataStream<String> dataStream = streamSource.map(new MapFunction<NiFiDataPacket, String>() {
    @Override
    public String map(NiFiDataPacket value) throws Exception {
        return new String(value.getContent(), Charset.defaultCharset());
    }
});

dataStream.print();
env.execute();
I am running Nifi as a standalone server with default properties, except these properties:
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8090
nifi.remote.input.http.enabled=true
The call fails each time, with the following log in NiFi:
[Site-to-Site Worker Thread-24] o.a.nifi.remote.SocketRemoteSiteListener
Unable to communicate with remote instance null due to
org.apache.nifi.remote.exception.HandshakeException: Handshake
with nifi://localhost:61680 failed because the Magic Header
was not present; closing connection
Nifi version: 1.7.1, Flink version: 1.7.1
After using the nifi-toolkit, I removed the custom value of nifi.remote.input.socket.port and then added transportProtocol(SiteToSiteTransportProtocol.HTTP) to my SiteToSiteClientConfig, with http://localhost:8080/nifi as the URL.
The reason why I changed the port in the first place is that without specifying the HTTP protocol, the client uses RAW by default.
And when using the RAW protocol from the Flink side, the client cannot create a Transaction and prints the following warning:
Unable to refresh Remote Group's peers due to Remote instance of NiFi
is not configured to allow RAW Socket site-to-site communications
That's why I thought it was a port issue.
So now with the default config of Nifi, this works as expected:
SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
        .url("http://localhost:8080/nifi")
        .portName("portNameAsInNifi")
        .transportProtocol(SiteToSiteTransportProtocol.HTTP)
        .requestBatchCount(1)
        .buildConfig();
