GWT "file not found" error during database connection

I'm trying to create a GWT app that works with a local PostgreSQL database.
I'm using GWT 2.4 on Eclipse Juno.
This is my server-side implementation (TaskServiceImpl):
public class TaskServiceImpl extends ServiceImpl implements TaskService {
    @Override
    public List<Task> getAllTasks() {
        em = this.getEntityManager();
        Query q = em.createQuery("SELECT x FROM Task x");
        List<Task> list = createList(q.getResultList().toArray(),
                new ArrayList<Task>(), em);
        em.close();
        return list;
    }
}
and this is the database connection class on the client side:
public class DatabaseConnection {
    public static final TaskServiceAsync taskService;
    static {
        taskService = GWT.create(TaskService.class);
    }
}
I now try to run getAllTasks() like this:
public void onModuleLoad() {
    DatabaseConnection.taskService.getAllTasks(new AsyncCallback<List<Task>>() {
        @Override
        public void onSuccess(List<Task> result) {
            System.out.println("Success!");
        }

        @Override
        public void onFailure(Throwable caught) {
            System.out.println("Fail!");
        }
    });
}
It always returns "Fail!" and gives me this error:
com.google.appengine.tools.development.LocalResourceFileServlet doGet
WARNING: No file found for: /fantapgl/task
This is my web.xml
<servlet>
    <servlet-name>taskServiceImpl</servlet-name>
    <servlet-class>fieldProject.server.service.TaskServiceImpl</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>taskServiceImpl</servlet-name>
    <url-pattern>/fantaPGL/task</url-pattern>
</servlet-mapping>
To open the connection to the DB I have this code in persistence.xml:
<properties>
    <property name="openjpa.jdbc.DBDictionary" value="postgres" />
    <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema()"/>
    <property name="openjpa.ConnectionDriverName" value="org.postgresql.Driver"/>
    <property name="openjpa.ConnectionURL" value="jdbc:postgresql://localhost:5432/db" />
    <property name="openjpa.ConnectionUserName" value="postgres" />
    <property name="openjpa.ConnectionPassword" value="password" />
</properties>
I don't understand where I'm going wrong. Can someone please help me?

I'm not sure what the problem is, but the error message seems to suggest you have Google App Engine enabled. That doesn't make sense, because you would only need that if you wanted to deploy on Google App Engine, and you are clearly developing for something else, since you can't run PostgreSQL on Google App Engine.
Furthermore, make sure to close database connections by placing the close() call in a finally block, and prefer to return concrete types, that is, ArrayList instead of List. Otherwise the GWT compiler will generate serialization code for all subclasses of List, because at compile time it can't know which subclass will be used.
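For example, a minimal sketch of the close-in-finally pattern applied to the service method above (assuming getEntityManager() and createList() behave as in the question):

@Override
public List<Task> getAllTasks() {
    EntityManager em = this.getEntityManager();
    try {
        Query q = em.createQuery("SELECT x FROM Task x");
        // createList(...) is the helper from the question; its semantics are assumed here.
        return createList(q.getResultList().toArray(), new ArrayList<Task>(), em);
    } finally {
        // Runs even if the query throws, so the EntityManager is never leaked.
        em.close();
    }
}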

Before executing the query:
1) Load the JDBC driver
try {
    Class.forName("org.postgresql.Driver");
} catch (ClassNotFoundException e) {
    System.out.println("Your PostgreSQL JDBC Driver is missing! "
            + "Include it in your library path!");
    e.printStackTrace();
    return "error";
}
2) Connect to the database
Connection connection = null;
connection = DriverManager.getConnection(
        "jdbc:postgresql://127.0.0.1:5432/YourDB",
        "admin",
        "pass");
3) Then execute the query
if (connection != null) { /* your query */ }
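Putting the three steps together, a self-contained sketch might look like this (the database name, credentials, and the test query are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PostgresSmokeTest {
    public static void main(String[] args) {
        try {
            // 1) Load the JDBC driver (optional with JDBC 4 drivers, which self-register).
            Class.forName("org.postgresql.Driver");
        } catch (ClassNotFoundException e) {
            System.out.println("Your PostgreSQL JDBC Driver is missing!");
            return;
        }
        // 2) Connect, then 3) query; try-with-resources closes everything even on failure.
        try (Connection connection = DriverManager.getConnection(
                "jdbc:postgresql://127.0.0.1:5432/YourDB", "admin", "pass");
             Statement st = connection.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Connection OK, got: " + rs.getInt(1));
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}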

You need to add the @RemoteServiceRelativePath annotation to your service interface (the synchronous TaskService, not the impl class).
Please refer to https://developers.google.com/web-toolkit/doc/latest/tutorial/RPC
Or, if you have installed the Google Eclipse plugin, create a new project with sample code; you can refer to the sample code as well.
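A minimal sketch, assuming the servlet mapping from the question (the "task" segment must match the servlet path relative to the GWT module base URL):

import java.util.List;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// Synchronous interface, shared between client and server.
@RemoteServiceRelativePath("task")
public interface TaskService extends RemoteService {
    List<Task> getAllTasks();
}

// Matching async interface used by the client-side proxy.
public interface TaskServiceAsync {
    void getAllTasks(AsyncCallback<List<Task>> callback);
}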

Related

Apache Camel "transacted" does not work well with sql component`s "outputType=StreamList"

In my transacted Camel route I need to:
Call an Oracle package to set a value for a variable in that package;
Execute a SQL statement which uses the variable from the package;
Note that the package variable is only visible in the connection from which it was set, so I need to use "transacted" here.
Here is a sample which demonstrates the problem:
from("direct-vm:process")
.transacted()
.to("sql:call my_pack.set_v1('10')")
.to("sql:select my_pack.get_v1 from dual?outputType=StreamList")
.split(body()).streaming()
.log("${body}")
.end();
The result for the above code will be: GET_V1=null
If I comment out ".transacted()" I get: GET_V1=10
If I remove the "StreamList" option from the sql endpoint and leave ".transacted()" uncommented: GET_V1=10
Question: is it not possible for "transacted" to work with the sql component's "StreamList" option?
Additional info:
If I start the above route in multiple threads, like this:
Map<String, String> map = new HashMap<>();
map.put("10", "10");
map.put("20", "20");
map.put("30", "30");
map.put("40", "40");
map.put("50", "50");
map.forEach((key, values) -> {
    from("timer://runOnce?repeatCount=1")
        .setHeader("key", constant(key))
        .setHeader("value", constant(values))
        .inOnly("seda:processParallel");
});

from("seda:processParallel?concurrentConsumers=5")
    .to("direct:process");

from("direct:process")
    //.transacted()
    .to("sql:call my_pack.pset_v1(:#value)?dataSource=generalDataSource")
    .to("sql:select :#key key, my_pack.get_v1 value from dual?outputType=StreamList")
    .split(body()).streaming()
        .to("log:row")
    .end();
I will get inconsistent results:
KEY=20, VALUE=50
KEY=50, VALUE=40
KEY=40, VALUE=20
KEY=10, VALUE=30
KEY=30, VALUE=10
The transaction manager is configured as shown below:
@Bean
public DataSourceTransactionManager dataSourceTransactionManager(DataSource dataSource) {
    DataSourceTransactionManager dataSourceTransactionManager = new DataSourceTransactionManager();
    dataSourceTransactionManager.setDataSource(dataSource);
    return dataSourceTransactionManager;
}
If it is only for the same database, you don't need transacted(), which is for XA transactions; XA usually covers different resources, for example one JMS broker and another database.
Can you show us how you have defined your transactionManager? In particular, did you bind this txManager to your datasource?
<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="generalDataSource"/>
</bean>
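To make the route use that manager explicitly, one option (a sketch, assuming Java-config wiring alongside the beans above) is a SpringTransactionPolicy bound to the txManager and referenced by name from transacted():

import org.apache.camel.spring.spi.SpringTransactionPolicy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

@Configuration
public class TxPolicyConfig {

    // The bean name is what .transacted("PROPAGATION_REQUIRED") looks up.
    @Bean("PROPAGATION_REQUIRED")
    public SpringTransactionPolicy requiredPolicy(DataSourceTransactionManager txManager) {
        SpringTransactionPolicy policy = new SpringTransactionPolicy();
        policy.setTransactionManager(txManager);
        policy.setPropagationBehaviorName("PROPAGATION_REQUIRED");
        return policy;
    }
}

The route then uses .transacted("PROPAGATION_REQUIRED") instead of the bare .transacted().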

Spring Boot auto connection to database [duplicate]

I have a nice little Spring Boot JPA web application. It is deployed on Amazon Beanstalk and uses an Amazon RDS for persisting data. It is, however, not used that often and therefore fails after a while with this kind of exception:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 79,870,633 milliseconds ago.
The last packet sent successfully to the server was 79,870,634 milliseconds ago. is longer than the server configured value of 'wait_timeout'.
You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
I am not sure how to configure this setting and cannot find information on it on http://spring.io (a very good site though). What are some ideas or pointers to information?
I assume that Boot is configuring the DataSource for you. In this case, and since you are using MySQL, you can add the following to your application.properties (up to Spring Boot 1.3):
spring.datasource.testOnBorrow=true
spring.datasource.validationQuery=SELECT 1
As djxak noted in the comment, 1.4+ defines specific namespaces for the four connection pools Spring Boot supports: tomcat, hikari, dbcp, dbcp2 (dbcp is deprecated as of 1.5). You need to check which connection pool you are using and whether that feature is supported. The example above was for Tomcat, so you'd have to write it as follows in 1.4+:
spring.datasource.tomcat.testOnBorrow=true
spring.datasource.tomcat.validationQuery=SELECT 1
Note that the use of autoReconnect is not recommended:
The use of this feature is not recommended, because it has side effects related to session state and data consistency when applications don't handle SQLExceptions properly, and is only designed to be used when you are unable to configure your application to handle SQLExceptions resulting from dead and stale connections properly.
The above suggestions did not work for me.
What really worked was the inclusion of the following lines in application.properties:
spring.datasource.testWhileIdle = true
spring.datasource.timeBetweenEvictionRunsMillis = 3600000
spring.datasource.validationQuery = SELECT 1
You can find the explanation here.
Setting spring.datasource.tomcat.testOnBorrow=true in application.properties didn't work.
Programmatically setting it like below worked without any issues:
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

@Bean
public DataSource dataSource() {
    PoolProperties poolProperties = new PoolProperties();
    poolProperties.setUrl(this.properties.getDatabase().getUrl());
    poolProperties.setUsername(this.properties.getDatabase().getUsername());
    poolProperties.setPassword(this.properties.getDatabase().getPassword());
    // here it is
    poolProperties.setTestOnBorrow(true);
    poolProperties.setValidationQuery("SELECT 1");
    return new DataSource(poolProperties);
}
I just moved to Spring Boot 1.4 and found these properties were renamed:
spring.datasource.dbcp.test-while-idle=true
spring.datasource.dbcp.time-between-eviction-runs-millis=3600000
spring.datasource.dbcp.validation-query=SELECT 1
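For comparison, if you are on the default Tomcat pool rather than DBCP, the equivalent 1.4+ keys would be the following (an assumption based on the namespace scheme described above, so verify against your Boot version):

spring.datasource.tomcat.test-while-idle=true
spring.datasource.tomcat.time-between-eviction-runs-millis=3600000
spring.datasource.tomcat.validation-query=SELECT 1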
whoami's answer is the correct one. Using the properties as suggested, I was unable to get this to work (using Spring Boot 1.5.3.RELEASE).
I'm adding my answer since it's a complete configuration class, so it might help someone using Spring Boot:
@Configuration
@Log4j
public class SwatDataBaseConfig {

    @Value("${swat.decrypt.location}")
    private String fileLocation;

    @Value("${swat.datasource.url}")
    private String dbURL;

    @Value("${swat.datasource.driver-class-name}")
    private String driverName;

    @Value("${swat.datasource.username}")
    private String userName;

    @Value("${swat.datasource.password}")
    private String hashedPassword;

    @Bean
    public DataSource primaryDataSource() {
        PoolProperties poolProperties = new PoolProperties();
        poolProperties.setUrl(dbURL);
        poolProperties.setUsername(userName);
        poolProperties.setPassword(hashedPassword);
        poolProperties.setDriverClassName(driverName);
        poolProperties.setTestOnBorrow(true);
        poolProperties.setValidationQuery("SELECT 1");
        poolProperties.setValidationInterval(0);
        DataSource ds = new org.apache.tomcat.jdbc.pool.DataSource(poolProperties);
        return ds;
    }
}
I had a similar problem with Spring 4 and Tomcat 8. I solved it with this Spring configuration:
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource" destroy-method="close">
<property name="initialSize" value="10" />
<property name="maxActive" value="25" />
<property name="maxIdle" value="20" />
<property name="minIdle" value="10" />
...
<property name="testOnBorrow" value="true" />
<property name="validationQuery" value="SELECT 1" />
</bean>
I have tested it and it works well. These two lines do everything needed to reconnect to the database:
<property name="testOnBorrow" value="true" />
<property name="validationQuery" value="SELECT 1" />
In case anyone is using a custom DataSource:
#Bean(name = "managementDataSource")
#ConfigurationProperties(prefix = "management.datasource")
public DataSource dataSource() {
return DataSourceBuilder.create().build();
}
Properties should look like the following. Notice the @ConfigurationProperties with prefix. The prefix is everything before the actual property name:
management.datasource.test-on-borrow=true
management.datasource.validation-query=SELECT 1
A reference for Spring Boot version 1.4.4.RELEASE.
As some people already pointed out, Spring Boot 1.4+ has specific namespaces for the four connection pools. By default, HikariCP is used in Spring Boot 2+, so you will have to specify the validation SQL there. The default is SELECT 1. Here's what you would need for DB2, for example:
spring.datasource.hikari.connection-test-query=SELECT current date FROM sysibm.sysdummy1
Caveat: If your driver supports JDBC4 we strongly recommend not setting this property. This is for "legacy" drivers that do not support the JDBC4 Connection.isValid() API. This is the query that will be executed just before a connection is given to you from the pool to validate that the connection to the database is still alive. Again, try running the pool without this property, HikariCP will log an error if your driver is not JDBC4 compliant to let you know. Default: none
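Following that caveat, if your driver is JDBC4-compliant you can usually drop the test query entirely and, if needed, tune the pool's lifetime/validation settings instead (illustrative values; check your pool's documentation):

spring.datasource.hikari.max-lifetime=600000
spring.datasource.hikari.validation-timeout=5000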
For those who want to do it from YAML with multiple data sources, there is a great blog post about it: https://springframework.guru/how-to-configure-multiple-data-sources-in-a-spring-boot-application/
It basically says you need to configure both the data source properties and the data source itself, like this:
@Bean
@Primary
@ConfigurationProperties("app.datasource.member")
public DataSourceProperties memberDataSourceProperties() {
    return new DataSourceProperties();
}

@Bean
@Primary
@ConfigurationProperties("app.datasource.member.hikari")
public DataSource memberDataSource() {
    return memberDataSourceProperties().initializeDataSourceBuilder()
            .type(HikariDataSource.class).build();
}
Do not forget to remove @Primary from the other datasources.
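The matching YAML would then look something like this (a sketch; the URL, credentials, and pool values are placeholders bound to the prefixes above):

app:
  datasource:
    member:
      url: jdbc:mysql://localhost:3306/memberdb
      username: dbuser
      password: dbpass
      hikari:
        maximum-pool-size: 10
        connection-test-query: SELECT 1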

Apache Camel: custom SFTP configuration with the SFTP component

I am trying to add a custom SFTP component in Apache Camel to wrap the username, host, port and password in a configuration object that is passed to the SftpComponent.
Below is the code that I have tried:
@Configuration
class SftpConfig {

    @Bean("sourceSftp")
    public SftpComponent getSourceSftpComponent(
            @Qualifier("sftpConfig") SftpConfiguration sftpConfig) throws Exception {
        SftpComponent sftpComponent = new SftpComponent();
        // not getting a way to set the configuration
        return sftpComponent;
    }

    @Bean("sftpConfig")
    public SftpConfiguration getSftpConfig(
            @Value("${host}") String host,
            @Value("${port}") int port,
            @Value("${applicationUserName}") String applicationUserName,
            @Value("${password}") String password) {
        SftpConfiguration sftpConfiguration = new SftpConfiguration();
        sftpConfiguration.setHost(host);
        sftpConfiguration.setPort(port);
        sftpConfiguration.setUsername(applicationUserName);
        sftpConfiguration.setPassword(password);
        return sftpConfiguration;
    }
}
// In another class
from("sourceSftp:<path of directory>") // custom component
A similar approach with JmsComponent works fine, where I have created a bean for sourcejms, but I am not able to do it for SFTP, as SftpComponent doesn't have a setter for SftpConfiguration.
The Camel maintainers seem to be moving away from providing individual components with a "setXXXConfiguration" method to configure their properties. The "approved" method of providing properties -- which works with the SFTP component -- is to specify them on the endpoint URL:
from ("sftp://host:port/foo?username=foo&password=bar")
    .to (....)
An alternative approach is to instantiate an endpoint and set its properties, and then use a reference to the endpoint in the from() call. There's a gazillion ways of configuring Camel -- this works for me for XML-based configuration:
<endpoint id="fred" uri="sftp://acme.net/test/">
<property key="username" value="xxxxxxx"/>
<property key="password" value="yyyyyyy"/>
</endpoint>
<route>
<from uri="fred"/>
<to uri="log:foo"/>
</route>
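In the Java DSL, a similar effect can be had with property placeholders in the URI (a sketch; sftp.username and sftp.password are assumed property keys, and RAW() keeps special characters in the password from being URI-decoded):

from("sftp://acme.net/test/?username={{sftp.username}}&password=RAW({{sftp.password}})")
    .to("log:foo");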
You can customize it by extending SftpComponent. This allows you to define multiple endpoints without providing the username/password in each endpoint definition.
Step 1: Extend SftpComponent and give your component a custom name, i.e. customSftp
#Component("customSftp")
public class CustomSftpComponent extends SftpComponent {
private static final Logger LOG = LoggerFactory.getLogger(CustomSftpComponent.class);
#Value("${sftp.username}")
private String username;
#Value("${sftp.password}")
private String password;
#SuppressWarnings("rawtypes")
protected void afterPropertiesSet(GenericFileEndpoint<SftpRemoteFile> endpoint) throws Exception {
SftpConfiguration config = (SftpConfiguration) endpoint.getConfiguration();
config.setUsername(username);
config.setPassword(password);
}
}
Step 2: Create a Camel route to poll two different folders using your custom component name.
@Component
public class PollSftpRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("{{sftp.endpoint1}}").routeId("pollSftpRoute1")
            .log(LoggingLevel.INFO, "Downloaded file from input folder 1.")
            .to("file:data/out1");
        from("{{sftp.endpoint2}}").routeId("pollSftpRoute2")
            .log(LoggingLevel.INFO, "Downloaded file from input folder 2.")
            .to("file:data/out2");
    }
}
Step 3: Place this in application.properties:
camel.springboot.main-run-controller=true
sftp.endpoint1=customSftp://localhost.net/input/1?delay=30s
sftp.endpoint2=customSftp://localhost.net/input/2?delay=30s
sftp.username=sftp_user1_l
sftp.password=xxxxxxxxxxxx
With this you don't have to repeat the username/password for each endpoint.
Note: with this approach you won't be able to set the username/password in the URI endpoint configuration. Anything you set in the URI will be replaced in afterPropertiesSet.

CXF: WARNING: No message body writer has been found for response class ArrayList

I'm getting the following error:
WARNING: No message body writer has been found for response class ArrayList.
On the following code:
@GET
@Consumes("application/json")
public List getBridges() {
    return new ArrayList(bridges);
}
I know it's possible for CXF to handle this case because I've done it before - with a platform that defined the CXF and related Maven artifacts behind the scenes (i.e. I didn't know how it was done).
So, the question: how can I get CXF to support this without adding XML bindings or other source code modifications?
Note that the following answer addresses the same problem with XML bindings, which is not satisfactory for my case:
No message body writer has been found for response class ArrayList
The problem turns out to be a simple missing Accept header:
Accept: application/json
Adding this to the request resolves the problem.
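For instance, a hypothetical client showing the fix (the URL is a placeholder):

import java.net.HttpURLConnection;
import java.net.URL;

public class BridgesClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/services/bridges"); // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Without this header CXF cannot select a JSON body writer for the List.
        conn.setRequestProperty("Accept", "application/json");
        System.out.println("HTTP " + conn.getResponseCode());
    }
}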
The best thing is indeed to use Jackson.
The following post gives a great description of why and how to do it.
For your convenience I've summarized the main points:
You will need to set Jackson as the provider; the best way to do it is to use your own custom Application overriding javax.ws.rs.core.Application.
You will need to add the following code:
@Override
public Set<Object> getSingletons() {
    Set<Object> s = new HashSet<Object>();
    // Register the Jackson provider for JSON.
    // Make the (de)serializer use a subset of JAXB and (afterwards) Jackson annotations.
    // See http://wiki.fasterxml.com/JacksonJAXBAnnotations for more information.
    ObjectMapper mapper = new ObjectMapper();
    AnnotationIntrospector primary = new JaxbAnnotationIntrospector();
    AnnotationIntrospector secondary = new JacksonAnnotationIntrospector();
    AnnotationIntrospector pair = new AnnotationIntrospector.Pair(primary, secondary);
    mapper.getDeserializationConfig().setAnnotationIntrospector(pair);
    mapper.getSerializationConfig().setAnnotationIntrospector(pair);
    // Set up the provider.
    JacksonJaxbJsonProvider jaxbProvider = new JacksonJaxbJsonProvider();
    jaxbProvider.setMapper(mapper);
    s.add(jaxbProvider);
    return s;
}
Finally, do not forget to add Jackson to your Maven pom:
<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-json-jackson</artifactId>
    <version>2.7</version>
    <scope>compile</scope>
</dependency>
Having the same problem, I finally solved it like this. In your Spring context.xml define the bean:
<bean id="jsonProvider" class="org.codehaus.jackson.jaxrs.JacksonJaxbJsonProvider"/>
And use it in the <jaxrs:server> as a provider:
<jaxrs:server id="restService" address="/resource">
<jaxrs:providers>
<ref bean="jsonProvider"/>
</jaxrs:providers>
</jaxrs:server>
In your Maven pom.xml add:
<dependency>
    <groupId>org.codehaus.jackson</groupId>
    <artifactId>jackson-jaxrs</artifactId>
    <version>1.9.0</version>
</dependency>
If you are using Jackson you can write a custom message body writer:
public class KPMessageBodyWriter implements MessageBodyWriter<ArrayList<String>> {

    private static final Logger LOG = LoggerFactory.getLogger(KPMessageBodyWriter.class);

    public boolean isWriteable(Class<?> type, Type genericType,
            Annotation[] annotations, MediaType mediaType) {
        return true;
    }

    public long getSize(ArrayList<String> t, Class<?> type, Type genericType,
            Annotation[] annotations, MediaType mediaType) {
        // -1 tells JAX-RS the byte length is not known in advance
        // (returning t.size() would report the element count, not the content length).
        return -1;
    }

    public void writeTo(ArrayList<String> t, Class<?> type, Type genericType,
            Annotation[] annotations, MediaType mediaType,
            MultivaluedMap<String, Object> httpHeaders,
            OutputStream entityStream) throws IOException, WebApplicationException {
        ObjectMapper mapper = new ObjectMapper();
        mapper.writeValue(entityStream, t);
    }
}
In the CXF configuration file add the provider:
<jaxrs:providers>
    <bean class="com.kp.KPMessageBodyWriter" />
</jaxrs:providers>

Google App Engine: use mapreduce to empty the datastore

I am trying to use an early experimental release of the mapper implementation to empty the datastore. This solution was proposed in a similar SO question.
This is the AppEngineMapper I am currently using; it just deletes the entity:
public class EmptyFixesMapper extends AppEngineMapper<Key, Entity, NullWritable, NullWritable> {

    public EmptyFixesMapper() {
    }

    @Override
    public void taskSetup(Context context) {
    }

    @Override
    public void taskCleanup(Context context) {
    }

    @Override
    public void setup(Context context) throws IOException, InterruptedException {
        super.setup(context);
    }

    @Override
    public void cleanup(Context context) {
        getAppEngineContext(context).flush();
    }

    @Override
    public void map(Key key, Entity value, Context context) {
        log.warning("Mapping key: " + key);
        DatastoreMutationPool mutationPool =
                this.getAppEngineContext(context).getMutationPool();
        mutationPool.delete(value.getKey());
    }
}
This is my mapreduce.xml configuration file:
<configurations>
    <configuration name="Empty Entities">
        <property>
            <name>mapreduce.map.class</name>
            <value>com.google.appengine.demos.mapreduce.EmptyFixesMapper</value>
        </property>
        <property>
            <name>mapreduce.inputformat.class</name>
            <value>com.google.appengine.tools.mapreduce.DatastoreInputFormat</value>
        </property>
        <property>
            <name human="Entity Kind to Map Over">mapreduce.mapper.inputformat.datastoreinputformat.entitykind</name>
            <value template="optional">Fix</value>
        </property>
    </configuration>
...
When I enter the mapreduce control panel at mydomain/mapreduce/status, I can launch the tasks, but they never complete; the status screen shows "0/0 shards".
I can also see that some tasks are created in the App Engine default task queue, with a lot of retries.
And finally, in my GAE application logs I see:
1.
09-11 03:23AM 08.556 /mapreduce/mapperCallback 500 10081ms 0cpu_ms 0kb AppEngine-Google; (+http://code.google.com/appengine)
0.1.0.2 - - [11/Sep/2010:03:23:18 -0700] "POST /mapreduce/mapperCallback HTTP/1.1" 500 0 "http://xxx.appspot.com/mapreduce/command/start_job" "AppEngine-Google; (+http://code.google.com/appengine)" "xxx.appspot.com" ms=10081 cpu_ms=0 api_cpu_ms=0 cpm_usd=0.000057 queue_name=default task_name=worker-attempt-1284198892815-0001-m-000002-1--0
2.
W 09-11 03:23AM 18.638
Request was aborted after waiting too long to attempt to service your request. This may happen sporadically when the App Engine serving cluster is under unexpectedly high or uneven load. If you see this message frequently, please contact the App Engine team.
What could be happening? I'm sure I've followed the steps described in the getting started guide, and I have fewer than 1000 entities in the datastore...
Well, the problem has nothing to do with appengine-mapreduce. I was securing the /mapreduce/** URIs, so the tasks in the default task queue were not able to reach /mapreduce/mapperCallback, /mapreduce/command/start_job, etc., because no username/password information is sent.
It is an interesting issue anyway, because I don't really want to open /mapreduce/** to everyone...
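One way to keep /mapreduce/** closed to the public while still letting the task queue through is an admin-only security constraint in web.xml; App Engine executes task queue requests with admin rights, so they pass such constraints (a sketch based on the standard App Engine web.xml docs):

<security-constraint>
    <web-resource-collection>
        <web-resource-name>mapreduce</web-resource-name>
        <url-pattern>/mapreduce/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <!-- Task queue requests are issued with admin rights, so they pass this check. -->
        <role-name>admin</role-name>
    </auth-constraint>
</security-constraint>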
