LaunchDarkly - Connection unexpectedly closed

I'm facing issues updating a key. In the logs I'm getting:
`com.launchdarkly.shaded.com.launchdarkly.eventsource.EventSource` - Connection unexpectedly closed.
LaunchDarkly dependency: implementation group: 'com.launchdarkly', name: 'launchdarkly-client', version: '4.61'
Creating the new client:
new LDClient(credentials.get(ApiConstants.LD_CLIENT_CREDS_KEY), ldConfig);
@Bean
@Profile("aws")
public LDConfig ldConfig() {
    return new LDConfig.Builder()
        .connectTimeout(30)
        .socketTimeout(30)
        .build();
}
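For context, a hedged sketch (not from the original post) of how the client that pairs with this config might be registered as a singleton bean; the credentials map and ApiConstants.LD_CLIENT_CREDS_KEY are taken from the snippet above:
@Bean(destroyMethod = "close")
public LDClient ldClient(LDConfig ldConfig) {
    // Sketch only: LaunchDarkly recommends a single long-lived LDClient per application;
    // destroyMethod = "close" shuts down the streaming connection cleanly on context shutdown.
    return new LDClient(credentials.get(ApiConstants.LD_CLIENT_CREDS_KEY), ldConfig);
}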

Related

Spring boot: RabbitMq and Database management when error occurred in Exchange

I have a problem managing the RabbitMQ and database transaction when the exchange is not found. This is the simple sequence:
Put the message to the exchange
Mark the message as sent in the database
When the exchange is not found, the message is not sent; however, the row is still updated in the database, which doesn't respect the transactional behavior.
Transactions are correctly managed for the other error cases (DB error or RabbitMQ not available).
How can I manage this use case as transactional processing?
Transactions are enabled in the configuration:
@Bean
@ConditionalOnMissingClass("org.springframework.orm.jpa.JpaTransactionManager")
public RabbitTransactionManager rabbitTransactionManager(ConnectionFactory connectionFactory) {
    return new RabbitTransactionManager(connectionFactory);
}

@Bean
public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    template.setMessageConverter(jacksonMessageConverter());
    template.setChannelTransacted(true);
    return template;
}
My service:
@Override
@Transactional
public void push(Message message) {
    rabbitTemplate.convertAndSend(
        "MessageExchange",
        "binding.key",
        objectMapper.writeValueAsString(message));
    repository.markAsSent(message.getId());
}
The error is fired after leaving the method, not inside the rabbitTemplate.convertAndSend call:
[AMQP Connection] ERROR o.s.a.r.c.CachingConnectionFactory.log : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'MessageExchange' in vhost '/', class-id=60, method-id=40)
[ThreadPoolTaskScheduler1] ERROR o.s.t.s.TransactionSynchronizationUtils.invokeAfterCompletion : TransactionSynchronization.afterCompletion threw exception
java.lang.IllegalStateException: Channel closed during transaction
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$CachedChannelInvocationHandler.invoke(CachingConnectionFactory.java:1171)
at com.sun.proxy.$Proxy143.txCommit(Unknown Source)
at org.springframework.amqp.rabbit.connection.RabbitResourceHolder.commitAll(RabbitResourceHolder.java:153)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils$RabbitResourceSynchronization.afterCompletion(ConnectionFactoryUtils.java:332)
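The broker reports the missing exchange asynchronously, after convertAndSend has already returned, which is why the failure only surfaces in afterCompletion. A minimal sketch of one common mitigation (not from the original thread; the class name, exchange type and flags are assumptions) is to declare the exchange as a bean so it is created on startup and the publish can never target a missing exchange:
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MessagingDeclarations {

    // Name mirrors the exchange used in push(); durable = true, autoDelete = false are assumptions.
    @Bean
    public DirectExchange messageExchange() {
        return new DirectExchange("MessageExchange", true, false);
    }

    // RabbitAdmin declares any Exchange/Queue/Binding beans when the first connection opens.
    @Bean
    public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
        return new RabbitAdmin(connectionFactory);
    }
}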

FailedToStartRouteException while using camel-spring-boot, amqp and kafka starters with Spring Boot; unable to find connectionFactory bean

I am creating an application using Apache Camel to transfer messages from AMQP to Kafka. Code can also be seen here - https://github.com/prashantbhardwaj/qpid-to-kafka-using-camel
I thought of creating it as a standalone Spring Boot app using the spring, amqp and kafka starters. I created a route like:
@Component
public class QpidToKafkaRoute extends RouteBuilder {
    public void configure() throws Exception {
        from("amqp:queue:destinationName")
            .to("kafka:topic");
    }
}
And the Spring Boot application configuration is:
@SpringBootApplication
public class CamelSpringJmsKafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(CamelSpringJmsKafkaApplication.class, args);
    }

    @Bean
    public JmsConnectionFactory jmsConnectionFactory(@Value("${qpidUser}") String qpidUser, @Value("${qpidPassword}") String qpidPassword, @Value("${qpidBrokerUrl}") String qpidBrokerUrl) {
        return new JmsConnectionFactory(qpidUser, qpidPassword, qpidBrokerUrl);
    }

    @Bean
    @Primary
    public CachingConnectionFactory jmsCachingConnectionFactory(JmsConnectionFactory jmsConnectionFactory) {
        return new CachingConnectionFactory(jmsConnectionFactory);
    }
}
The jmsConnectionFactory bean, created via Spring's @Bean annotation, should be picked up by the AMQP starter and injected into the route, but that is not happening. When I start the application, I get the following exception:
org.apache.camel.FailedToStartRouteException: Failed to start route route1 because of Route(route1)[From[amqp:queue:destinationName] -> [To[kafka:.
Caused by: java.lang.IllegalArgumentException: connectionFactory must be specified
If I am not wrong, the connectionFactory should be picked up automatically if I pass the right properties in the application.properties file.
My application.properties file looks like:
camel.springboot.main-run-controller = true
camel.component.amqp.enabled = true
camel.component.amqp.connection-factory = jmsCachingConnectionFactory
camel.component.amqp.async-consumer = true
camel.component.amqp.concurrent-consumers = 1
camel.component.amqp.map-jms-message = true
camel.component.amqp.test-connection-on-startup = true
camel.component.kafka.brokers = localhost:9092
qpidBrokerUrl = amqp://localhost:5672?jms.username=guest&jms.password=guest&jms.clientID=clientid2&amqp.vhost=default
qpidUser = guest
qpidPassword = guest
Could you please suggest why the connectionFactory bean is not being used during auto-configuration? When I debug this code, I can clearly see that the connectionFactory bean is getting created.
I can even see one more log line:
CamelContext has only been running for less than a second. If you intend to run Camel for a longer time then you can set the property camel.springboot.main-run-controller=true in application.properties or add spring-boot-starter-web JAR to the classpath.
However, if you look at my application.properties file, the required property is present on the very first line.
One more log line I can see at the beginning of application startup:
[main] trationDelegate$BeanPostProcessorChecker : Bean 'org.apache.camel.spring.boot.CamelAutoConfiguration' of type [org.apache.camel.spring.boot.CamelAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
Is this log line suggesting anything?
Note - interestingly, exactly the same code was running fine last night; I just restarted my desktop, not a single word has changed, and now it is throwing this exception.
This just refers to an interface:
camel.component.amqp.connection-factory = javax.jms.ConnectionFactory
Instead it should refer to an existing factory instance, such as
camel.component.amqp.connection-factory = #myFactory
which you can set up via the Spring Boot @Bean annotation style, for example as in the sketch below.
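For illustration, a hedged sketch (the bean name myFactory matches the property above and is otherwise hypothetical) of adding such a bean to the CamelSpringJmsKafkaApplication class from the question:
// Sketch: expose the caching factory under the name referenced from application.properties.
@Bean("myFactory")
public CachingConnectionFactory myFactory(JmsConnectionFactory jmsConnectionFactory) {
    return new CachingConnectionFactory(jmsConnectionFactory);
}
and then in application.properties:
camel.component.amqp.connection-factory = #myFactory
Alternatively, the property can reference the bean name already defined in the question, i.e. camel.component.amqp.connection-factory = #jmsCachingConnectionFactory.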

Issues with testing Cloud SQL locally using App Engine

I've been trying to connect to a Cloud SQL instance using HikariCP from App Engine locally so I can make queries. Every time I run App Engine using the ./gradlew appengineRun command, I get a java.net.SocketException: already connected error. This works fine when I deploy it to App Engine, but locally it just won't work. I'm stumped.
Here's the configuration for Hikari:
val config = HikariConfig().apply {
jdbcUrl = "jdbc:postgresql://google/[DB-NAME]"
username = "[USERNAME]"
password = "[PASSWORD]"
addDataSourceProperty("cloudSqlInstance", "[INSTANCE-CONNECTION-NAME")
addDataSourceProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory")
}
private val dataSource = HikariDataSource(config)
private val connection = dataSource.connection
And then to execute the query:
connection.use { connection ->
connection.prepareCall("SELECT EXISTS(SELECT 1 FROM profiles WHERE username = '$username')").use { statement ->
statement.executeQuery().use { resultSet ->
try {
val exists = generateSequence {
if (resultSet.next()) resultSet.getBoolean(1) else null
}.toList()
onComplete(exists.any { it }, null)
} catch (e: Exception) {
onComplete(false, e)
}
}
}
}
I was certain this was the correct configuration to connect to SQL, but I keep getting this stacktrace:
Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: The connection attempt failed.
at com.zaxxer.hikari.pool.HikariPool.throwPoolInitializationException(HikariPool.java:597)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:576)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:81)
at appengine.sql.repository.ProfileRepository.<clinit>(ProfileRepository.kt:34)
... 48 more
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:262)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:67)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:216)
at org.postgresql.Driver.makeConnection(Driver.java:406)
at org.postgresql.Driver.connect(Driver.java:274)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:353)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:473)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:562)
... 51 more
Caused by: java.net.SocketException: already connected
at java.net.Socket.connect(Socket.java:569)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
at org.postgresql.core.PGStream.<init>(PGStream.java:64)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:133)
... 60 more
Are the connections being properly closed after use? Is it possible the application is attempting to reuse connections after they have been closed by the SQL instance? I'd advise implementing a connection pool within the application for efficient use of connections to the database (a sketch follows the links below). Some Cloud SQL documentation pages that cover these topics may help [1][2].
[1] https://cloud.google.com/sql/docs/mysql/manage-connections#opening_and_closing_connections
[2] https://cloud.google.com/sql/docs/mysql/diagnose-issues
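A minimal Java sketch of the pooling pattern described above (the class name, pool size and bracketed placeholders are illustrative): keep one shared HikariDataSource for the application and borrow a connection per query, so each connection is returned to the pool when the try-with-resources block closes rather than being held for the lifetime of the repository. As a side benefit, binding the username as a parameter avoids the string interpolation used in the original query.
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class ProfileDao {

    // One shared pool for the whole application; the size is an illustrative guess.
    private static final HikariDataSource DATA_SOURCE;

    static {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://google/[DB-NAME]");
        config.setUsername("[USERNAME]");
        config.setPassword("[PASSWORD]");
        config.addDataSourceProperty("cloudSqlInstance", "[INSTANCE-CONNECTION-NAME]");
        config.addDataSourceProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory");
        config.setMaximumPoolSize(5);
        DATA_SOURCE = new HikariDataSource(config);
    }

    // Borrow a connection per call; try-with-resources returns it to the pool.
    public boolean usernameExists(String username) throws SQLException {
        try (Connection connection = DATA_SOURCE.getConnection();
             PreparedStatement statement = connection.prepareStatement(
                     "SELECT EXISTS(SELECT 1 FROM profiles WHERE username = ?)")) {
            statement.setString(1, username);
            try (ResultSet resultSet = statement.executeQuery()) {
                return resultSet.next() && resultSet.getBoolean(1);
            }
        }
    }
}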

Flink to Nifi the Magic Header was not present

I am trying to use this example to connect Nifi to Flink:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
.url("http://localhost:8090/nifi")
.portName("Data for Flink")
.requestBatchCount(5)
.buildConfig();
SourceFunction<NiFiDataPacket> nifiSource = new NiFiSource(clientConfig);
DataStream<NiFiDataPacket> streamSource = env.addSource(nifiSource).setParallelism(2);
DataStream<String> dataStream = streamSource.map(new MapFunction<NiFiDataPacket, String>() {
@Override
public String map(NiFiDataPacket value) throws Exception {
return new String(value.getContent(), Charset.defaultCharset());
}
});
dataStream.print();
env.execute();
I am running Nifi as a standalone server with default properties, except these properties:
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8090
nifi.remote.input.http.enabled=true
The call fails each time, with the following log in NiFi:
[Site-to-Site Worker Thread-24] o.a.nifi.remote.SocketRemoteSiteListener
Unable to communicate with remote instance null due to
org.apache.nifi.remote.exception.HandshakeException: Handshake
with nifi://localhost:61680 failed because the Magic Header
was not present; closing connection
Nifi version: 1.7.1, Flink version: 1.7.1
After using the nifi-toolkit I removed the custom value of nifi.remote.input.socket.port and then added transportProtocol(SiteToSiteTransportProtocol.HTTP) to my SiteToSiteClientConfig and http://localhost:8080/nifi as the URL.
The reason I changed the port in the first place is that, without specifying the HTTP protocol, the client uses RAW by default.
And when using the RAW protocol from the Flink side, the client cannot create a transaction and prints the following warning:
Unable to refresh Remote Group's peers due to Remote instance of NiFi
is not configured to allow RAW Socket site-to-site communications
That's why I thought it was a port issue.
So now, with the default config of NiFi, this works as expected:
SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
.url("http://localhost:8080/nifi")
.portName("portNameAsInNifi")
.transportProtocol(SiteToSiteTransportProtocol.HTTP)
.requestBatchCount(1)
.buildConfig();

Problem with grails web app running in production: "No such property: save for class: JsecRole"

I've got a Grails 1.1 web app running great in development, but when I try to run it in production with a SQL Server database it crashes in a weird way.
The relevant part of my DataSource.groovy is as follows:
environments {
    development {
        dataSource {
            dbCreate = "create-drop" // one of 'create', 'create-drop', 'update'
            url = "jdbc:hsqldb:mem:devDB"
        }
    }
    test {
        dataSource {
            dbCreate = "update"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            driverClassName = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
            username = "sa"
            password = "pw4db"
            url = "jdbc:sqlserver://localhost:1433;databaseName=ReleasePlanner;selectMethod=cursor"
        }
    }
}
The error message I receive is:
Message: No such property: save for class: JsecRole
Caused by: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole
Class: ProjectController
At Line: [28]
Code Snippet:
27: println "###about to create project roles"
28: userManagerService.createProjectRoles(project)
29: userManagerService.addUserToProject(session.user.id.toString(), project, 'owner')
}
}
}
The stacktrace is as follows:
org.codehaus.groovy.runtime.InvokerInvocationException: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole
at org.jsecurity.web.servlet.JSecurityFilter.doFilterInternal(JSecurityFilter.java:382)
at org.jsecurity.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:180)
Caused by: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole
at UserManagerService.createProjectRoles(UserManagerService.groovy:9)
at UserManagerService$$FastClassByCGLIB$$6fa73713.invoke(<generated>)
at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:149)
at UserManagerService$$EnhancerByCGLIB$$fcf60984.createProjectRoles(<generated>)
at UserManagerService$createProjectRoles.call(Unknown Source)
at ProjectController$_closure4.doCall(ProjectController.groovy:28)
at ProjectController$_closure4.doCall(ProjectController.groovy)
... 2 more
Any help is appreciated.
Thanks
Sarah
I fixed my problem by deleting my database and creating a new database. I think some of the fields in my database weren't mapping correctly as I changed my domain objects. The error didn't really point me in this direction though!
Sarah
This problem is discussed in this thread on the Grails mailing list. It is supposed to be fixed in Grails 1.2. A workaround for earlier versions of Grails is to add the following to BootStrap.groovy:
JsecRole.get(-1)
