After specifying an alternate datasource, Grails is throwing a SQL exception for embedded H2 (SQL Server)

I have a small suite of Grails 3.0.11 applications. We have a domain module which is shared between the applications.
When one of the apps (app1, app2, etc.) starts up, it connects to the datasource and creates a table for every class in the domain module. All of the apps in the suite will attempt to create these tables.
I modified application.yml so that it would use an MSSQL instance rather than the embedded H2 db, and I set dbCreate to update so that the schema persists across shutdowns of the application.
I am trying to split up the domain so that each application will only manage the schema for its relevant classes, i.e. app1 will handle DDL for classA, classB, and classC on startup, and app2 will handle classX, classY, and classZ.
Following this guide, I have defined a second datasource unique to each app (i.e. app1db, app2db, etc.), and I have added mapping = { datasource 'appXdb' } to each class, specifying the relevant app.
Now, when I start the app, I am getting a SQL exception:
Caused by: java.sql.SQLException: Driver:jTDS 1.3.1 returned null for
URL:jdbc:h2:mem:grailsDB;MVCC=TRUE;LOCK_TIMEOUT=10000
Why is my app still trying to access an H2 db after redefining the default datasource to point to an MSSQL instance and adding a second datasource which also points to MSSQL?
application.yml:
---
server:
    port: 3434
    contextPath: '/app1'
---
grails:
    profile: web
    codegen:
        defaultPackage: cars.app
info:
    app:
        name: '#info.app.name#'
        version: '#info.app.version#'
        grailsVersion: '#info.app.grailsVersion#'
spring:
    groovy:
        template:
            check-template-location: false
---
grails:
    mime:
        disable:
            accept:
                header:
                    userAgents:
                        - Gecko
                        - WebKit
                        - Presto
                        - Trident
        types:
            all: '*/*'
            atom: application/atom+xml
            css: text/css
            csv: text/csv
            form: application/x-www-form-urlencoded
            html:
              - text/html
              - application/xhtml+xml
            js: text/javascript
            json:
              - application/json
              - text/json
            multipartForm: multipart/form-data
            pdf: application/pdf
            rss: application/rss+xml
            text: text/plain
            hal:
              - application/hal+json
              - application/hal+xml
            xml:
              - text/xml
              - application/xml
    urlmapping:
        cache:
            maxsize: 1000
    controllers:
        defaultScope: singleton
    converters:
        encoding: UTF-8
    views:
        default:
            codec: html
        gsp:
            encoding: UTF-8
            htmlcodec: xml
            codecs:
                expression: html
                scriptlets: html
                taglib: none
                staticparts: none
---
hibernate:
    cache:
        queries: false
        use_second_level_cache: true
        use_query_cache: false
        region.factory_class: 'org.hibernate.cache.ehcache.EhCacheRegionFactory'
endpoints:
    jmx:
        unique-names: true
    shutdown:
        enabled: true

dataSources:
    dataSource:
        pooled: true
        jmxExport: true
        driverClassName: net.sourceforge.jtds.jdbc.Driver
        username: grails
        password: password
    app1DataSource:
        pooled: true
        jmxExport: true
        driverClassName: net.sourceforge.jtds.jdbc.Driver
        username: grails
        password: password

environments:
    development:
        dataSource:
            dbCreate: update
            url: jdbc:jtds:sqlserver://127.0.0.1;databaseName=cars_demo
        appDataSource:
            dbCreate: update
            url: jdbc:jtds:sqlserver://1127.0.0.1;databaseName=cars_demo
    test:
        dataSource:
            dbCreate: update
            url: jdbc:h2:mem:testDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;DB_CLOSE_DELAY=-1
    production:
        dataSource:
            dbCreate: update
            url: jdbc:h2:mem:prodDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;DB_CLOSE_DELAY=-1
            properties:
                jmxEnabled: true
                initialSize: 5
                maxActive: 50
                minIdle: 5
                maxIdle: 25
                maxWait: 10000
                maxAge: 600000
                timeBetweenEvictionRunsMillis: 5000
                minEvictableIdleTimeMillis: 60000
                validationQuery: SELECT 1
                validationQueryTimeout: 3
                validationInterval: 15000
                testOnBorrow: true
                testWhileIdle: true
                testOnReturn: false
                jdbcInterceptors: ConnectionState
                defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
Mapping element: static mapping = { datasource 'app1DataSource' }
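For reference, a minimal sketch of what one of the shared domain classes looks like once the mapping is applied (the class and property here are hypothetical; the datasource name matches the config above):

// Hypothetical domain class owned by app1; only the mapping block is the relevant part.
class ClassA {

    String name

    static mapping = {
        datasource 'app1DataSource'   // DDL and queries for ClassA go to app1's datasource
    }
}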
Edit 1: added application.yml and the mapping element.

It turns out I had not properly nested my environment overrides in application.yml. I had added a parent element, dataSources:, above the actual datasource definitions in the main section, but did not do the same in the environments section, which meant that my datasources were loading without a url and therefore fell back to the default Grails H2 database.
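In other words, the environment overrides needed the same dataSources: parent, roughly like this (datasource names and URLs as in the config above):

environments:
    development:
        dataSources:                # <- the parent element that was missing in the environments section
            dataSource:
                dbCreate: update
                url: jdbc:jtds:sqlserver://127.0.0.1;databaseName=cars_demo
            app1DataSource:
                dbCreate: update
                url: jdbc:jtds:sqlserver://127.0.0.1;databaseName=cars_demo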
Thanks @quindimildev! I recognized my oversight while trying to figure out the formatting needed to follow your suggestion.

In my case, I had to clear out all of the old compiled files, which still contained an old configuration.
To make sure I had a fresh compile, I ran the following:
grails clean-all
grails refresh-dependencies
grails compile
grails prod war
After that, the error "returned null for URL:jdbc:h2:mem:grailsDB;MVCC=TRUE;LOCK_TIMEOUT=10000" went away.

Related

Error - io:job could not be initialized: missing field accessing 'heartbeat.monitors.0.hosts.0' (source:'/etc/heartbeat.yml')

The Heartbeat configuration file is below:
# Directory + glob pattern to search for configuration files
path: ${path.config}/monitors.d/*.yml
# If enabled, heartbeat will periodically check the config.monitors path for changes
reload.enabled: true
# How often to check for changes
reload.period: 10s

heartbeat.monitors:
- type: http
  id: my_app
  name: "Check my_app liveness endpoint"
  labels.application.name: my_app
  schedule: '@every 1m'
  service.name: 'my_app' # must be same as in apm
  hosts: ["https://${host}/path/to/destination1", "https://${host}/path/to/destination2"]
  check.request.method: HEAD
  check.response.status: [200]
  fields_under_root: true
  fields:
    service.environment: "${my_env}"
    labels.application.name: my_app

#### Enabling logging to heartbeat ####
logging.level: debug
logging.to_files: true
logging.files.path: /usr/share/heartbeat/logs
logging.files.name: heartbeat-log
logging.files.keepfiles: 30
logging.files.permissions: 0640

output.kafka:
  hosts: ["${KAFKA_URL}"]
  ssl.verification_mode: "none"
  topic: "heartbeat"
  partition.round_robin:
    reachable_only: true
  client_id: ${MY_APPLICATION}-heartbeat-${MY_ENVIRONMENT}
  required_acks: 1

monitoring:
  enabled: false
This configuration is deployed as a ConfigMap inside the Heartbeat pod.
But after the deployment, we are getting the error above in the Kibana Uptime monitor.
I also tried hardcoding the variables used inside the YAML posted above; the result is the same.
Can anybody help me?

How to run spring boot app even if database fails

I have a Spring Boot app which has multiple databases. I want to run the app regardless of the state of the databases. Here is what my application YAML looks like:
logging:
  level:
    org:
      hibernate:
        SQL: DEBUG
        type:
          descriptor:
            sql:
              BasicBinder: TRACE
spring:
  sql:
    init:
      continue-on-error: true # <-- didn't work
  jpa:
    properties:
      hibernate:
        format_sql: 'true'
    show-sql: 'true'
  datasource:
    azure-read-only:
      url: <azure_db1_url>
      username: '<username>'
      password: '<password>'
      driver-class-name: com.microsoft.sqlserver.jdbc.SQLServerDriver
      continue-on-error: true
      hikari: # <--- added this to bypass boot failure but it didn't work either
        minimum-idle: 0
        maximum-pool-size: 15
        connection-timeout: 10000 #10s
        idle-timeout: 300000 #5m
        max-lifetime: 600000 #10m
        initialization-fail-timeout: -1
        validation-timeout: 1000 #1s
    azure-read-write:
      url: <azure_db2_url>
      username: '<username>'
      password: '<password>'
      driver-class-name: com.microsoft.sqlserver.jdbc.SQLServerDriver
      continue-on-error: true
      hikari:
        minimum-idle: 0
        maximum-pool-size: 15
        connection-timeout: 10000 #10s
        idle-timeout: 300000 #5m
        max-lifetime: 600000 #10m
        initialization-fail-timeout: -1
        validation-timeout: 1000 #1s
    db2-testdb1:
      url: <another_db_not_hosted_on_azure>
      username: '<username>'
      password: '<password>'
      driver-class-name: com.ibm.db2.jcc.DB2Driver
      continue-on-error: true
      hikari:
        minimum-idle: 0
        maximum-pool-size: 15
        connection-timeout: 10000 #10s
        idle-timeout: 300000 #5m
        max-lifetime: 600000 #10m
        initialization-fail-timeout: -1
        validation-timeout: 1000 #1s
For the datasources, I have one datasource which is the primary bean and the others are non-primary. How can I run the Spring Boot app even if it fails to connect to ALL of the DBs?
You can instantiate a datasource "manually" and then feed it to Spring. For example:
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Lazy;

@Configuration
public class MyConfigBean {

    // 'log', 'props' and JDBC_DRIVER_CLASS_NAME are the answer author's own logger,
    // properties holder and driver-class constant; substitute your own equivalents.
    @Lazy
    @Autowired
    private DataSource dataSource;

    @Bean
    public DataSource getDataSource() {
        log.debug(">>> Building the DATA-SOURCE");
        DataSourceBuilder<?> dataSourceBuilder = DataSourceBuilder.create();
        dataSourceBuilder.driverClassName(JDBC_DRIVER_CLASS_NAME);
        dataSourceBuilder.url(props.getJDBCUrl());
        dataSourceBuilder.username(props.getJDBCUsername());
        dataSourceBuilder.password(props.getJDBCPassword());
        return dataSourceBuilder.build();
    }
}
The @Lazy annotation defers the call to the getDataSource() method until the datasource is first needed. This way you can retrieve/populate the URL and credentials earlier, so they are ready for the moment they are needed.
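To illustrate the effect with a hedged sketch (the service and method below are made up, not part of the original answer): because the injected DataSource is a lazy proxy, the application can finish starting while the database is down, and the connection failure only surfaces, and can be handled, when a connection is actually requested:

import java.sql.Connection;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;

// Hypothetical service: the app starts even if the DB is unreachable, and any
// SQLException is thrown (and handled) only on first use of the datasource.
@Service
public class DatabaseHealthService {

    @Lazy
    @Autowired
    private DataSource dataSource;

    public boolean databaseIsUp() {
        try (Connection connection = dataSource.getConnection()) {
            return connection.isValid(2); // 2-second validation timeout
        } catch (SQLException e) {
            return false; // DB is down: report it instead of failing startup
        }
    }
}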

How do I connect Ecto to CockroachDB Serverless?

I'd like to use CockroachDB Serverless for my Ecto application. How do I specify the connection string?
I get an error like this when trying to connect.
[error] GenServer #PID<0.295.0> terminating
** (Postgrex.Error) FATAL 08004 (sqlserver_rejected_establishment_of_sqlconnection) codeParamsRoutingFailed: missing cluster name in connection string
(db_connection 2.4.1) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
CockroachDB Serverless says to connect by including the cluster name in the connection string, like this:
postgresql://username:<ENTER-PASSWORD>@free-tier.gcp-us-central1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=$HOME/.postgresql/root.crt&options=--cluster%3Dcluster-name-1234
but I'm not sure how to get Ecto to create this connection string via its configuration.
The problem is that Postgrex is not able to parse all of the information from the connection URL, notably the SSL configuration. The solution is to specify the connection parameters explicitly, including the cacertfile SSL option. Assuming that you have downloaded your cluster's CA certificate to priv/certs/ca-cert.crt, you can use the following config as a template:
config :my_app, MyApp.Repo,
  username: "my_user",
  password: "my_password",
  database: "defaultdb",
  hostname: "free-tier.gcp-us-central1.cockroachlabs.cloud",
  port: "26257",
  ssl: true,
  ssl_opts: [
    cacertfile: Path.expand("priv/certs/ca-cert.crt")
  ],
  parameters: [options: "--cluster=my-cluster-123"]
Possible Other Issues
Table Locking
Since CockroachDB also does not support the locking that Ecto/Postgrex attempts on the migration table, the :migration_lock config needs to be disabled as well:
config :my_app, MyApp.Repo,
  # ...
  migration_lock: false
Auth generator
Finally, the new phx.gen.auth generator defaults to using the citext extension for storing a user's email address in a case-insensitive manner. The line in the generated migration that executes CREATE EXTENSION IF NOT EXISTS citext should be removed, and the column type for the :email field should be changed from :citext to :string.
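As a rough sketch of that migration edit (the table and column names below follow the phx.gen.auth defaults, so check your own generated file; the users_tokens table is omitted here):

defmodule MyApp.Repo.Migrations.CreateUsersAuthTables do
  use Ecto.Migration

  def change do
    # Removed: execute "CREATE EXTENSION IF NOT EXISTS citext", ""

    create table(:users) do
      add :email, :string, null: false         # was :citext
      add :hashed_password, :string, null: false
      add :confirmed_at, :naive_datetime

      timestamps()
    end

    create unique_index(:users, [:email])
  end
end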
This configuration allows Ecto to connect to CockroachDB Serverless correctly:
config :myapp, MyApp.Repo,
  username: "username",
  password: "xxxx",
  database: "defaultdb",
  hostname: "free-tier.gcp-us-central1.cockroachlabs.cloud",
  port: 26257,
  ssl: true,
  ssl_opts: [
    cert_pem: "foo.pem",
    key_pem: "bar.pem"
  ],
  show_sensitive_data_on_connection_error: true,
  pool_size: 10,
  parameters: [
    options: "--cluster=cluster-name-1234"
  ]

Getting started with grails 3.3.9 and PostgreSQL 12.7: Error "org.postgresql.Driver"

I tried to connect a PostgreSQL database in my Grails 3.3.9 project.
My Postgres server is working, since I can connect to and operate on the Postgres database from the IntelliJ 2021 database tool, but I can't connect from Grails 3.3.9.
The password and user are correct, but it always throws me this error:
Running application...
2021-07-05 19:06:25.298 ERROR --- [ main] org.postgresql.Driver : Connection error:
org.postgresql.util.PSQLException: The server requested password-based authentication, but no password was provided.
And this is my application.yml:
hibernate:
    cache:
        queries: false
        use_second_level_cache: false
        use_query_cache: false

dataSource:
    IkebanaUsuarios:
        pooled: true
        jmxExport: true
        driverClassName: "org.postgresql.Driver"
        username: "postgres"
        password: "postgres"

environments:
    development:
        dataSource:
            dbCreate: update
            url: jdbc:postgresql://localhost:5432/IkebanaERP
    test:
        dataSource:
            dbCreate: update
            url: jdbc:postgresql://localhost:5432/IkebanaERP
    production:
        dataSource:
            dbCreate: update
            url: jdbc:postgresql://localhost:5432/IkebanaERP
            properties:
                jmxEnabled: true
                initialSize: 5
                maxActive: 50
                minIdle: 5
                maxIdle: 25
                maxWait: 10000
                maxAge: 600000
                timeBetweenEvictionRunsMillis: 5000
                minEvictableIdleTimeMillis: 60000
                validationQuery: SELECT 1
                validationQueryTimeout: 3
                validationInterval: 15000
                testOnBorrow: true
                testWhileIdle: true
                testOnReturn: false
                jdbcInterceptors: ConnectionState
                defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
and my build.gradle is this:
.......
compile "org.grails.plugins:cache"
compile "org.grails.plugins:async"
compile "org.grails.plugins:scaffolding"
compile "org.grails.plugins:events"
compile "org.grails.plugins:hibernate5"
compile "org.hibernate:hibernate-core:5.1.16.Final"
compile "org.grails.plugins:gsp"
console "org.grails:grails-console"
profile "org.grails.profiles:web"
runtime "org.glassfish.web:el-impl:2.1.2-b03"
runtime "com.h2database:h2"
runtime "org.apache.tomcat:tomcat-jdbc"
runtime "com.bertramlabs.plugins:asset-pipeline-grails:2.15.1"
runtime 'org.xerial:sqlite-jdbc:3.6.17'
runtime 'org.postgresql:postgresql:9.4.1208.jre1.8'
runtime 'mysql:mysql-connector-java:5.1.29'
runtime 'org.postgresql:postgresql:42.2.1.jre7'
testCompile "org.grails:grails-gorm-testing-support"
testCompile "org.grails.plugins:geb"
testCompile "org.grails:grails-web-testing-support"
testRuntime "org.seleniumhq.selenium:selenium-htmlunit-driver:2.47.1"
testRuntime "net.sourceforge.htmlunit:htmlunit:2.18"
}
Help please; thanks a lot.
You are defining properties for your IkebanaUsuarios dataSource, but not for the default one used by Grails.
You may remove the IkebanaUsuarios block, leaving this:
dataSource:
    pooled: true
    jmxExport: true
    driverClassName: "org.postgresql.Driver"
    username: "postgres"
    password: "postgres"
Or, if you needed that secondary datasource as well, you may define properties for both. I'm guessing you did not need it, since you did not mention intentionally having two.
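If you did want both, a rough sketch of a multi-datasource layout (the secondary database name and URL below are placeholders; domain classes would opt in with static mapping = { datasource 'IkebanaUsuarios' }):

dataSource:
    pooled: true
    jmxExport: true
    driverClassName: "org.postgresql.Driver"
    username: "postgres"
    password: "postgres"
    url: jdbc:postgresql://localhost:5432/IkebanaERP
dataSources:
    IkebanaUsuarios:
        pooled: true
        driverClassName: "org.postgresql.Driver"
        username: "postgres"
        password: "postgres"
        url: jdbc:postgresql://localhost:5432/IkebanaUsuarios   # placeholder URL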

Unable to parse Open API, or Google Service Configuration specification from openapiapp.yaml

I tried to follow exactly this tutorial (https://cloud.google.com/community/tutorials/exposing-aspnet-webapi-using-dotnetcore-with-cloud-endpoints), but I get the following error when trying gcloud endpoints services deploy openapi.yaml:
ERROR: (gcloud.endpoints.services.deploy) Unable to parse Open API, or Google Service Configuration specification from [SampleSolution]
The body of openapi.yaml:
openapi: 3.0.1
info:
  title: Notes API
  version: v1
host: [google cloud project ID].appspot.com
paths:
  /WeatherForecast:
    get:
      tags:
        - WeatherForecast
      responses:
        '200':
          description: Success
          content:
            text/plain:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/WeatherForecast'
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/WeatherForecast'
            text/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/WeatherForecast'
components:
  schemas:
    WeatherForecast:
      type: object
      properties:
        date:
          type: string
          format: date-time
        temperatureC:
          type: integer
          format: int32
        temperatureF:
          type: integer
          format: int32
          readOnly: true
        summary:
          type: string
          nullable: true
      additionalProperties: false
I see only two issues there:
openapi: 3.0.1. This should be swagger: "2.0" according to the basic structure of an OpenAPI document.
host: [google cloud project ID].appspot.com. This should contain the proper project ID.
UPDATE
After checking the composition of your file and the Endpoints OpenAPI docs:
OpenAPI feature limitations:
Currently, Cloud Endpoints accepts only version 2 of the OpenAPI Specification.
The Components section in the Swagger docs mentions that it applies to OpenAPI 3. This is not compatible with point 1.
I suggest remaking your file following the basic structure of an OpenAPI document.
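For illustration only, a rough Swagger 2.0 rewrite of the spec above might start like this (a sketch, not a drop-in file; the host still needs your real project ID, and nullable has no direct equivalent in version 2):

swagger: "2.0"
info:
  title: Notes API
  version: v1
host: "[google cloud project ID].appspot.com"   # replace with your real project ID
schemes:
  - https
paths:
  /WeatherForecast:
    get:
      tags:
        - WeatherForecast
      produces:
        - application/json
      responses:
        '200':
          description: Success
          schema:
            type: array
            items:
              $ref: '#/definitions/WeatherForecast'
definitions:
  WeatherForecast:
    type: object
    properties:
      date:
        type: string
        format: date-time
      temperatureC:
        type: integer
        format: int32
      temperatureF:
        type: integer
        format: int32
        readOnly: true
      summary:
        type: string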
Swashbuckle.AspNetCore now supports OpenAPI 3, but it also has backward compatibility with Swagger v2, which you can enable in the Configure method (Startup class) with:
app.UseSwagger(c =>
{
c.SerializeAsV2 = true;
});
