Setting connection timeout in RestEASY

JBoss tells us
http://docs.jboss.org/seam/3/rest/latest/reference/en-US/html/rest.client.html
that to set the timeout for a RestEASY ClientRequest we must create a custom ClientExecutor, then call deprecated static methods on ConnManagerParams. This seems rather hokey. Is there a better way? This is RestEASY 2.3.6.

Here is a clean working solution :-)
@Singleton
public class RestEasyConfig {

    @Inject
    @MyConfig
    private Integer httpClientMaxConnectionsPerRoute;

    @Inject
    @MyConfig
    private Integer httpClientTimeoutMillis;

    @Inject
    @MyConfig
    private Integer httpClientMaxTotalConnections;

    @Produces
    private ClientExecutor clientExecutor;

    @PostConstruct
    public void createExecutor() {
        final BasicHttpParams params = new BasicHttpParams();
        HttpConnectionParams.setConnectionTimeout(params, this.httpClientTimeoutMillis);
        final SchemeRegistry schemeRegistry = new SchemeRegistry();
        schemeRegistry.register(new Scheme("http", 80, PlainSocketFactory.getSocketFactory()));
        final ThreadSafeClientConnManager connManager = new ThreadSafeClientConnManager(schemeRegistry);
        connManager.setDefaultMaxPerRoute(this.httpClientMaxConnectionsPerRoute);
        connManager.setMaxTotal(this.httpClientMaxTotalConnections);
        final HttpClient httpClient = new DefaultHttpClient(connManager, params);
        this.clientExecutor = new ApacheHttpClient4Executor(httpClient);
    }
}
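For completeness, here is a minimal sketch of how the produced executor could be consumed; the injection point and target URL are illustrative, not part of the original setup:

@Inject
private ClientExecutor clientExecutor;

public String fetchStatus() throws Exception {
    // Hypothetical call site: a ClientRequest built with this executor inherits
    // the connection timeout configured on the underlying HttpClient.
    ClientRequest request = new ClientRequest("http://example.com/api/status", clientExecutor);
    ClientResponse<String> response = request.get(String.class);
    return response.getEntity();
}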

With RestEASY 3.12.1.Final, as explained on the Red Hat website, I did it this way:
private Client clientBuilder() {
    return new ResteasyClientBuilder()
        .connectTimeout(2, TimeUnit.SECONDS)
        .readTimeout(10, TimeUnit.SECONDS)
        .build()
        .register(ClientRestLoggingFilter.class)
        .register(ObjectMapperContextResolver.class);
}
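A short usage sketch, assuming the standard JAX-RS Client API (the target URL is a placeholder):

Client client = clientBuilder();
try {
    // Requests made through this client inherit the 2s connect and 10s read timeouts.
    String payload = client.target("http://example.com/api/status")
        .request(MediaType.APPLICATION_JSON)
        .get(String.class);
} finally {
    client.close(); // JAX-RS clients are heavyweight; close them when done
}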

Related

Is JSONDeserializationSchema() deprecated in Flink?

I am new to Flink and doing something very similar to the below link.
Cannot see message while sinking kafka stream and cannot see print message in flink 1.2
I am also trying to add JSONDeserializationSchema() as a deserializer for my Kafka input JSON message which is without a key.
But I found JSONDeserializationSchema() is not present.
Please let me know if I am doing anything wrong.
JSONDeserializationSchema was removed in Flink 1.8, after having been deprecated earlier.
The recommended approach is to write a deserializer that implements DeserializationSchema<T>. Here's an example, which I've copied from the Flink Operations Playground:
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;

/**
 * A Kafka {@link DeserializationSchema} to deserialize {@link ClickEvent}s from JSON.
 */
public class ClickEventDeserializationSchema implements DeserializationSchema<ClickEvent> {

    private static final long serialVersionUID = 1L;

    private static final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public ClickEvent deserialize(byte[] message) throws IOException {
        return objectMapper.readValue(message, ClickEvent.class);
    }

    @Override
    public boolean isEndOfStream(ClickEvent nextElement) {
        return false;
    }

    @Override
    public TypeInformation<ClickEvent> getProducedType() {
        return TypeInformation.of(ClickEvent.class);
    }
}
For a Kafka producer you'll want to implement KafkaSerializationSchema<T>, and you'll find examples of that in that same project.
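For illustration, here is a minimal sketch of such a serializer, mirroring the deserializer above; the topic name is a placeholder and ClickEvent is assumed to be the same playground class:

import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ClickEventSerializationSchema implements KafkaSerializationSchema<ClickEvent> {

    private static final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public ProducerRecord<byte[], byte[]> serialize(ClickEvent element, Long timestamp) {
        try {
            // No record key is set, matching the keyless JSON messages from the question.
            return new ProducerRecord<>("output-topic", objectMapper.writeValueAsBytes(element));
        } catch (Exception e) {
            throw new IllegalArgumentException("Could not serialize record: " + element, e);
        }
    }
}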
To solve the problem of reading non-keyed JSON messages from Kafka, I used a case class and a JSON parser.
The following code defines a case class and parses the JSON fields using the Play API.
import play.api.libs.json.JsValue

object CustomerModel {
  def readElement(jsonElement: JsValue): Customer = {
    val id = (jsonElement \ "id").get.toString().toInt
    val name = (jsonElement \ "name").get.toString()
    Customer(id, name)
  }
  case class Customer(id: Int, name: String)
}

def main(args: Array[String]): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  val properties = new Properties()
  properties.setProperty("bootstrap.servers", "xxx.xxx.0.114:9092")
  properties.setProperty("group.id", "test-grp")
  val consumer = new FlinkKafkaConsumer[String]("customer", new SimpleStringSchema(), properties)
  val stream1 = env.addSource(consumer).rebalance
  // On parse failure, the outer Try falls back to a Customer whose name field
  // carries the failure text produced by the second (repeated) parse attempt.
  val stream2: DataStream[Customer] = stream1.map { str =>
    Try(CustomerModel.readElement(Json.parse(str)))
      .getOrElse(Customer(0, Try(CustomerModel.readElement(Json.parse(str))).toString))
  }
  stream2.print("stream2")
  env.execute("This is Kafka+Flink")
}
The Try method lets you recover from an exception thrown while parsing the data: it either returns the exception in one of the fields (if we want) or else returns the case class object with the given or default fields.
The sample output of the Code is:
stream2:1> Customer(1,"Thanh")
stream2:1> Customer(5,"Huy")
stream2:3> Customer(0,Failure(com.fasterxml.jackson.databind.JsonMappingException: No content to map due to end-of-input
at [Source: ; line: 1, column: 0]))
I am not sure if it is the best approach but it is working for me as of now.

Camel route test - No bean could be found in the registry for: mock:result

Hi, I have a complex Camel route, and in the middle of the route I am sending a message to MQ using a bean.
.bean("{{tp.mqservice}}")
application.yaml
mqservice: bean:mqService
application-test.yaml
mqservice: mock:result
Below is my PortfolioRouteTest
@ActiveProfiles("test")
@RunWith(CamelSpringBootRunner.class)
@SpringBootTest(classes = MainApplication.class, webEnvironment = WebEnvironment.RANDOM_PORT)
@MockEndpoints
public class PortfolioTncRouteTest {

    @EndpointInject(value = "{{trade-publisher.portfolio-tnc.source-endpoint}}")
    private ProducerTemplate producerTemplate;

    @EndpointInject(value = "{{trade-publisher.mqservice}}")
    private MockEndpoint mock;
}
JUnit
@Test
public void portfolioTncRouteTest() throws InterruptedException {
    data = ...
    Mockito.when(service.search(Mockito.any(....class))).thenReturn(...);
    producerTemplate.sendBody(data);
    mock.expectedMessageCount(1);
    mock.assertIsSatisfied(30000);
}
However, when I run the test I am getting the below error. Am I missing something?
Stacktrace
Caused by: org.apache.camel.NoSuchBeanException: No bean could be found in the registry for: mock:result
at org.apache.camel.component.bean.RegistryBean.getBean(RegistryBean.java:92)
at org.apache.camel.component.bean.RegistryBean.createCacheHolder(RegistryBean.java:67)
at org.apache.camel.reifier.BeanReifier.createProcessor(BeanReifier.java:57)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessorImpl(ProcessorReifier.java:448)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:415)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:212)
at org.apache.camel.reifier.ExpressionReifier.createFilterProcessor(ExpressionReifier.java:39)
at org.apache.camel.reifier.WhenReifier.createProcessor(WhenReifier.java:32)
at org.apache.camel.reifier.WhenReifier.createProcessor(WhenReifier.java:24)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ChoiceReifier.createProcessor(ChoiceReifier.java:54)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessorImpl(ProcessorReifier.java:448)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:415)
at org.apache.camel.reifier.TryReifier.createProcessor(TryReifier.java:38)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessorImpl(ProcessorReifier.java:448)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:415)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:212)
at org.apache.camel.reifier.ProcessorReifier.createChildProcessor(ProcessorReifier.java:231)
at org.apache.camel.reifier.SplitReifier.createProcessor(SplitReifier.java:42)
at org.apache.camel.reifier.ProcessorReifier.makeProcessorImpl(ProcessorReifier.java:536)
at org.apache.camel.reifier.ProcessorReifier.makeProcessor(ProcessorReifier.java:497)
at org.apache.camel.reifier.ProcessorReifier.addRoutes(ProcessorReifier.java:241)
at org.apache.camel.reifier.RouteReifier.addRoutes(RouteReifier.java:358)
... 56 more
Use .to instead of .bean so it is sending to a Camel endpoint; then you can send to the mock endpoint. When using .bean, it is for calling a POJO Java bean only.
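A minimal sketch of that change, with illustrative route and property names matching the question:

// Send to an endpoint URI with .to so the test profile's placeholder value
// ("mock:result") is a valid target; "bean:mqService" still works with .to.
from("direct:portfolio")
    .to("{{tp.mqservice}}");

// application.yaml:      mqservice: bean:mqService
// application-test.yaml: mqservice: mock:result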

On JpaRepository.save(Entity e), e has a negative value as primary key in MS SQL Server database

When I do JpaRepository.save(Entity e), the primary key generated with the help of the Hibernate sequence is saved as a seemingly random value, generally starting from -43 or -42.
I am using a Spring Boot project with JPA.
Below is my property file:
hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
hibernate.hbm2ddl.auto=validate
hibernate.ejb.naming_strategy=org.hibernate.cfg.ImprovedNamingStrategy
hibernate.show_sql=false
hibernate.format_sql=true
This is my entity on which I am calling save. The sequence CPU_Responses_Seq is already present in the DB:
@Entity
@Table(name = "CPU_Responses")
public class CPUResponses extends BaseEnity {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(generator = "CPUResponseSeq", strategy = GenerationType.SEQUENCE)
    @SequenceGenerator(name = "CPUResponseSeq", sequenceName = "CPU_Responses_Seq")
    @Column(name = "Response_ID", nullable = false, updatable = false)
    private long responseId;
This is my persistence config class:
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(basePackages = {"package path"})
@PropertySource("classpath:application.properties")
public class PersistanceConfiguration {

    @Autowired
    private Environment env;

    public Environment getEnv() {
        return env;
    }

    public void setEnv(Environment env) {
        this.env = env;
    }

    @Bean
    LocalContainerEntityManagerFactoryBean entityManagerFactory() throws NamingException {
        LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactoryBean.setDataSource(dataSource());
        entityManagerFactoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        entityManagerFactoryBean.setPackagesToScan("entity path");
        Properties jpaProperties = new Properties();
        // Configures the used database dialect. This allows Hibernate to create SQL
        // that is optimized for the used database.
        jpaProperties.put("hibernate.dialect", env.getRequiredProperty("hibernate.dialect"));
        entityManagerFactoryBean.setJpaProperties(jpaProperties);
        return entityManagerFactoryBean;
    }

    @Bean
    public DataSource dataSource() throws NamingException {
        JndiObjectFactoryBean bean = new JndiObjectFactoryBean();
        bean.setJndiName("java:comp/env/jdbc/CPUDB");
        bean.setProxyInterface(DataSource.class);
        bean.setLookupOnStartup(false);
        bean.afterPropertiesSet();
        return (DataSource) bean.getObject();
    }

    @Bean
    JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(entityManagerFactory);
        return transactionManager;
    }
}
I don't know what is going wrong. Data is getting saved in the DB, but with a negative primary key. My sequence in the DB has a min value of zero, so the sequence itself is correct.
Kindly help.
I think this is related to the changes Hibernate introduced in its sequence generators; try adding
hibernate.id.new_generator_mappings=false
or
spring.jpa.properties.hibernate.id.new_generator_mappings=false
Note that the "new generator" mappings are not compatible with previous versions, so start from a clean database to avoid issues.
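If you prefer to keep the new mappings, another common cause of negative or jumping IDs is the pooled optimizer: @SequenceGenerator defaults to allocationSize = 50, which must match the database sequence's increment. A hedged sketch of that alternative, reusing the names from the question and assuming the sequence increments by 1:

@Id
@GeneratedValue(generator = "CPUResponseSeq", strategy = GenerationType.SEQUENCE)
@SequenceGenerator(name = "CPUResponseSeq", sequenceName = "CPU_Responses_Seq",
        allocationSize = 1) // match the sequence's INCREMENT BY; the JPA default is 50
@Column(name = "Response_ID", nullable = false, updatable = false)
private long responseId;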

How to test fluent migrations with an in-process migration runner and an in-memory SQLite database

I have just started to use FluentMigrator for my current project. I wrote my first migration, but I have some trouble writing a unit test for it.
Here is some sample code:
private ServiceProvider CreateServiceProvider()
{
    return new ServiceCollection()
        .AddLogging(lb => lb.AddFluentMigratorConsole())
        .AddFluentMigratorCore()
        .ConfigureRunner(
            builder => builder
                .AddSQLite()
                .WithGlobalConnectionString("Data Source=:memory:;Version=3;New=True;")
                .WithMigrationsIn(typeof(MigrationOne).Assembly))
        .BuildServiceProvider();
}

private void PerformMigrateUp(IServiceScope scope)
{
    var runner = scope.ServiceProvider.GetRequiredService<IMigrationRunner>();
    runner.MigrateUp(1);
}

[Test]
public void ShouldHaveTablesAfterMigrateUp()
{
    var provider = this.CreateServiceProvider();
    using (var scope = provider.CreateScope())
    {
        this.PerformMigrateUp(scope);
        // here I'd like to test if tables have been created in the database by the migration
    }
}
I don't know how (or if it is possible) to access the current database connection so I can perform a query. Any suggestions would be helpful. Thanks.
OK, I found a solution. I have to use the runner's processor to run my own SQL query.
It looks like this:
private ServiceProvider CreateServiceProvider()
{
    return new ServiceCollection()
        .AddLogging(lb => lb.AddFluentMigratorConsole())
        .AddFluentMigratorCore()
        .ConfigureRunner(
            builder => builder
                .AddSQLite()
                .WithGlobalConnectionString(@"Data Source=:memory:;Version=3;New=True;")
                .WithMigrationsIn(typeof(MigrationDate20181026113000Zero).Assembly))
        .BuildServiceProvider();
}

[Test]
public void ShouldHaveNewVersionAfterMigrateUp()
{
    var serviceProvider = this.CreateServiceProvider();
    var scope = serviceProvider.CreateScope();
    var runner = scope.ServiceProvider.GetRequiredService<IMigrationRunner>();
    runner.MigrateUp(1);

    string sqlStatement = "SELECT Description FROM VersionInfo";
    DataSet dataSet = runner.Processor.Read(sqlStatement, string.Empty);

    Assert.That(dataSet, Is.Not.Null);
    Assert.That(dataSet.Tables[0].Rows[0].ItemArray[0], Is.EqualTo("Migration1"));
}
This is an old question but an important one. I find it strange that I couldn't find any documentation on this.
In any case, here is my solution, which I find to be a bit better, as you don't need to rely on the runner; without that dependency, the options open up hugely for constructor arguments.
Firstly, make sure you install Microsoft.Data.Sqlite, or you will get a strange error.
SQLite in-memory databases exist for as long as the connection does, and at first glance there is one database per connection. There is, however, a way to share the database between connections, as long as at least one connection is open at all times (according to my experiments). You just need to name it.
https://learn.microsoft.com/en-us/dotnet/standard/data/sqlite/connection-strings#sharable-in-memory
So to begin with, I created a connection that stays open until the test finishes. It is named using Guid.NewGuid() so that subsequent connections work as expected.
var dbName = Guid.NewGuid().ToString();
var connectionString = $"Data Source={dbName};Mode=Memory;Cache=Shared";
var connection = new SqliteConnection(connectionString);
connection.Open();
After that the crux of running the migrations is the same as previously answered but the connection string uses the named database:
var sp = services.AddFluentMigratorCore()
    .ConfigureRunner(fluentMigratorBuilder => fluentMigratorBuilder
        .AddSQLite()
        .WithGlobalConnectionString(connectionString)
        .ScanIn(AssemblyWithMigrations).For.Migrations())
    .BuildServiceProvider();

var runner = sp.GetRequiredService<IMigrationRunner>();
runner.MigrateUp();
Here is a class I use to inject a connection factory everywhere that needs to connect to the database for normal execution:
internal class PostgresConnectionFactory : IConnectionFactory
{
    private readonly string connectionString;

    public PostgresConnectionFactory(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public DbConnection Create()
    {
        return new NpgsqlConnection(connectionString);
    }
}
I just replaced this (all hail dependency inversion) with:
internal class InMemoryConnectionFactory : IConnectionFactory
{
    private readonly string connectionstring;

    public InMemoryConnectionFactory(string connectionstring)
    {
        this.connectionstring = connectionstring;
    }

    public DbConnection Create()
    {
        return new SqliteConnection(connectionstring);
    }
}
where the connection string is the same named one I defined above.
Now you can simply use that connection factory anywhere that needs to connect to the same in memory database, and since we can now connect multiple times possibilities for integration testing open up.
Here is the majority of my implementation:
public static IDisposable CreateInMemoryDatabase(Assembly AssemblyWithMigrations, IServiceCollection services = null)
{
    if (services == null)
        services = new ServiceCollection();

    var connectionString = GetSharedConnectionString();
    var connection = GetPersistantConnection(connectionString);
    MigrateDb(services, connectionString, AssemblyWithMigrations);

    services.AddSingleton<IConnectionFactory>(new InMemoryConnectionFactory(connectionString));

    return services.BuildServiceProvider()
        .GetRequiredService<IDisposableUnderlyingQueryingTool>();
}

private static string GetSharedConnectionString()
{
    var dbName = Guid.NewGuid().ToString();
    return $"Data Source={dbName};Mode=Memory;Cache=Shared";
}

private static void MigrateDb(IServiceCollection services, string connectionString, Assembly assemblyWithMigrations)
{
    var sp = services.AddFluentMigratorCore()
        .ConfigureRunner(fluentMigratorBuilder => fluentMigratorBuilder
            .AddSQLite()
            .WithGlobalConnectionString(connectionString)
            .ScanIn(assemblyWithMigrations).For.Migrations())
        .BuildServiceProvider();

    var runner = sp.GetRequiredService<IMigrationRunner>();
    runner.MigrateUp();
}

private static IDbConnection GetPersistantConnection(string connectionString)
{
    var connection = new SqliteConnection(connectionString);
    connection.Open();
    return connection;
}
Then here is a sample test:
public class Test : IDisposable
{
    private readonly IDisposable _holdingConnection;

    public Test()
    {
        _holdingConnection = CreateInMemoryDatabase(typeof(MyFirstMigration).Assembly);
    }

    public void Dispose()
    {
        _holdingConnection.Dispose();
    }
}
You may notice that the static factory returns a custom interface. It's just an interface that extends the normal tooling I inject into repositories, but it also implements IDisposable.
Untested bonus for integration testing where you will have a service collection created via WebApplicationFactory or TestServer etc:
public void AddInMemoryPostgres(Assembly AssemblyWithMigrations)
{
    var lifetime = services.BuildServiceProvider().GetService<IHostApplicationLifetime>();
    var holdingConnection = InMemoryDatabaseFactory.CreateInMemoryDapperTools(AssemblyWithMigrations, services);
    lifetime.ApplicationStopping.Register(() =>
    {
        holdingConnection.Dispose();
    });
}

How to set receiveTimeout and connection timeout for cxfEndpoint

I am trying to set receiveTimeout and connection timeout for a cxfEndpoint in the below code. I found many Spring DSL related answers, but I am using the Camel DSL specifically.
void configure() throws Exception {
    super.configure()
    CamelContext context = getContext()
    String version = context.resolvePropertyPlaceholders('{{' + CommonConstants.VERSION_PROPERTY + '}}')
    String region = context.resolvePropertyPlaceholders('{{' + CommonConstants.REGION_PROPERTY + '}}')
    String getContextRoot = context.resolvePropertyPlaceholders('{{' + CommonConstants.CONTEXT_ROOT_PROPERTY + '}}')
    boolean validateResponse = getContextRoot

    // main route exposing a GET
    rest("/$version/$region/")
        .get("/$getContextRoot")
        .produces('application/json')
        .to('direct:validate')

    from('direct:validate')
        .routeId('validate')
        .bean(ValidatorSubRouteHelper.class, 'validate')
        .to('direct:get-deviceIdentification')

    from('direct:get-deviceIdentification')
        .routeId('get-deviceIdentification')
        // pre-processing closure
        .process {
            it.out.body = [it.properties[MessageReferenceConstants.USER_AGENT_HEADER], new CallContext()]
            it.in.headers[CxfConstants.OPERATION_NAME] = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_OPERATION_NAME + '}}')
            it.in.headers[Exchange.SOAP_ACTION] = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_SOAP_ACTION + '}}')
            Map<String, Object> reqCtx = new HashMap<String, Object>()
            HTTPClientPolicy clientHttpPolicy = new HTTPClientPolicy()
            clientHttpPolicy.setReceiveTimeout(10000)
            reqCtx.put(HTTPClientPolicy.class.getName(), clientHttpPolicy)
            it.in.headers[Client.REQUEST_CONTEXT] = reqCtx
        }
        .to(getEndpointURL())
        // In case of SOAPFault from device, handling the exception in processSOAPResponse
        .onException(SoapFault.class)
            .bean(ProcessResponseExceptionHelper.class, "processSOAPResponse")
        .end()
        // post-processing closure
        .process {
            log.info("processing the response retrieved from device service")
            MessageContentsList li = it.in.getBody(MessageContentsList.class)
            DeviceFamily deviceFamily = (DeviceFamily) li.get(0)
            log.debug('device type is ' + deviceFamily.deviceType.value)
            it.properties[MessageReferenceConstants.PROPERTY_RESPONSE_BODY] = deviceFamily.deviceType.value
        }
        .to('direct:transform')

    from('direct:transform')
        .routeId('transform')
        // transform closure
        .process {
            log.info("Entering the FilterTransformSubRoute(transform)")
            Device device = new Device()
            log.debug('device type ' + it.properties[MessageReferenceConstants.PROPERTY_RESPONSE_BODY])
            device.familyName = it.properties[MessageReferenceConstants.PROPERTY_RESPONSE_BODY]
            it.out.body = device
        }
        .choice()
            .when(simple('{{validateResponse}}'))
                .to('direct:validateResponse')

    if (validateResponse) {
        from('direct:validateResponse')
            .bean(DataValidator.getInstance('device.json'))
    }
}

/**
 * Constructs the endpoint url.
 * Formatting end point URL for device identification service call
 * @return the endpoint url
 */
private String getEndpointURL() {
    CamelContext context = getContext()
    def serviceURL = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.SERVICE_URL + '}}')
    def wsdlURL = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.WSDL_URL + '}}')
    boolean isGZipEnable = CommonConstants.TRUE.equalsIgnoreCase(context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.GZIP_ENABLED + '}}'))
    def serviceClass = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_SERVICE_CLASS + '}}')
    def serviceName = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_SERVICE_NAME + '}}')
    def url = "cxf:$serviceURL?" +
        "wsdlURL=$wsdlURL" +
        "&serviceClass=$serviceClass" +
        "&serviceName=$serviceName"
    if (isGZipEnable) {
        url += "&cxfEndpointConfigurer=#deviceIdentificationServiceCxfConfigurer"
    }
    log.debug("endpoint url is " + url)
    url
}
You have already found the cxfEndpointConfigurer option for it.
Now you just need to implement the configurer interface like this:
public static class MyCxfEndpointConfigurer implements CxfEndpointConfigurer {

    @Override
    public void configure(AbstractWSDLBasedEndpointFactory factoryBean) {
        // Do nothing here
    }

    @Override
    public void configureClient(Client client) {
        // reset the timeout option to override the spring configuration one
        HTTPConduit conduit = (HTTPConduit) client.getConduit();
        HTTPClientPolicy policy = new HTTPClientPolicy();
        // You can setup the timeout option here
        policy.setReceiveTimeout(60000);
        policy.setConnectionTimeout(30000);
        conduit.setClient(policy);
    }

    @Override
    public void configureServer(Server server) {
        // Do nothing here
    }
}
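The configurer then has to be resolvable from the Camel registry under the name used in the endpoint URI. A sketch, assuming a Spring setup; the bean name mirrors the #deviceIdentificationServiceCxfConfigurer reference from the route above:

@Bean("deviceIdentificationServiceCxfConfigurer")
public CxfEndpointConfigurer deviceIdentificationServiceCxfConfigurer() {
    // The bean name must match the cxfEndpointConfigurer=#... reference in the URL
    return new MyCxfEndpointConfigurer();
}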
