I upgraded to Mongoid 3.0.1 and created the new format of mongoid.yml. My mongoid.yml looks like this:
production:
  sessions:
    default:
      database: grbr_production
      hosts:
        - localhost:27017
      options:
        consistency: :strong
  options:
    raise_not_found_error: false
test:
  sessions:
    default:
      database: grbr_test
      hosts:
        - localhost:27017
      options:
        consistency: :strong
        raise_not_found_error: false
development:
  sessions:
    default:
      database: grbr_development
      hosts:
        - localhost:27017
      options:
        consistency: :strong
        raise_not_found_error: false
In development, I see the correct database being picked up, but in production the "admin" database is picked instead, and that breaks my app. I have set RAILS_ENV to "production" on my production machine, but I still see this error. Another very strange thing is that in production, Moped does not even query the database.
The following logs from development and production show this:
Development log:
MOPED: 127.0.0.1:27017 COMMAND database=admin command={:ismaster=>1} (0.6645ms)
MOPED: 127.0.0.1:27017 QUERY database=grbr_development collection=topsearches selector={"$query"=>{"type"=>"books"}, "$orderby"=>{"cnt"=>-1}} flags=[] limit=10 skip=0 fields=nil (0.8984ms)
Production log:
MOPED: 127.0.0.1:27017 COMMAND database=admin command={:ismaster=>1} (0.6878ms)
So in production, I cannot see the queries being fired against the production database.
Why is your production pointing to localhost? That seems off.
You might try something like this:
production:
  sessions:
    default:
      uri: "YOUR-DB-ADDRESS"
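A slightly fuller sketch of the uri form, with a hypothetical remote host and credentials (db.example.com, produser, prodpass are placeholders, not values from the question):

```yaml
production:
  sessions:
    default:
      # hypothetical host and credentials - substitute your own
      uri: "mongodb://produser:prodpass@db.example.com:27017/grbr_production"
  options:
    raise_not_found_error: false
```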
I have a simple NFS server (followed instructions here) connected to a Kubernetes (v1.24.2) cluster as a storage class. When a new PVC is created, it creates a PV as expected with a new directory on the NFS server.
The NFS provider was deployed as instructed here.
My issue is that containers don't seem to be able to perform all the functions they expect to when interacting with the NFS server. For example:
A PVC and PV are created with the following yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
This creates a directory on the NFS server as expected.
Then this deployment is created to use the PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      hostname: mssqlinst
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              value: "Password123"
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
The server comes up and responds to requests but does so with the error:
[S0002][823] com.microsoft.sqlserver.jdbc.SQLServerException: The operating system returned error 1117(The request could not be performed because of an I/O device error.) to SQL Server during a read at offset 0x0000000009a000 in file '/var/opt/mssql/data/master.mdf'. Additional messages in the SQL Server error log and operating system error log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
My /etc/exports file has the following contents:
/srv *(rw,no_subtree_check,no_root_squash)
When the SQL container starts, the container itself doesn't restart, but the SQL service inside it appears to enter some sort of restart loop until a connection is attempted; it then throws the error and appears to stop.
Is there something I'm missing in the /etc/exports file? I tried variations with sync, async, and insecure but can't seem to get past the SQL error.
I gather from the error that this has something to do with the container's ability to read/write from/to the disk. Am I in the right ballpark?
The config that ended up working was:
/srv *(rw,no_root_squash,insecure,sync,no_subtree_check)
This was after a reinstall of the cluster. There were no significant changes elsewhere, but it still seems like there may have been more to the issue than this one config change.
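For reference, here is the working export line again with each option annotated (descriptions paraphrased from the exports(5) man page):

```
/srv *(rw,no_root_squash,insecure,sync,no_subtree_check)
# rw               - clients may both read and write
# no_root_squash   - don't map client root to an anonymous user
# insecure         - accept requests originating from ports >= 1024
# sync             - reply only after changes reach stable storage
# no_subtree_check - skip subtree checking (avoids issues when files
#                    are renamed while a client holds them open)
```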
I want to use MongoDB Atlas with Strapi, so I am reading the values from .env.
These are the settings:
connections: {
  default: {
    connector: 'mongoose',
    settings: {
      host: env('DB_MONGO_URL', '127.0.0.1'),
      srv: env.bool('DATABASE_SRV', false),
      port: env.int('DB_PORT', 27017),
      database: env('DATABASE_NAME', 'mydb'),
      username: env('DB_USER_NAME', null),
      password: env('DB_PASSWORD', null)
    },
    options: {
      authenticationDatabase: env('AUTHENTICATION_DATABASE', null),
      ssl: env.bool('DB_SSL_ENABLE', false),
    },
  },
},
And here are the values:
DB_MONGO_URL=myproject.random.mongodb.net/mydbt?ssl_cert_reqs=CERT_NONE
DB_USER_NAME=myuser
DB_PASSWORD=mypasswor
But this is what shows up:
[2021-10-12T13:18:25.485Z] debug ⛔️ Server wasn't able to start properly.
[2021-10-12T13:18:25.490Z] error Error connecting to the Mongo database. Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/
error Command failed with exit code 1.
In Network Access I have allowed 0.0.0.0/0, which includes any IP address.
If I pass this URL to MongoDB Compass:
mongodb+srv://myproject.random.mongodb.net/mydbt?ssl_cert_reqs=CERT_NONE
then it works.
I tried the same URL in .env as well:
DB_MONGO_URL=mongodb+srv://myproject.random.mongodb.net/mydbt?ssl_cert_reqs=CERT_NONE
DB_USER_NAME=myuser
DB_PASSWORD=mypassword
but this shows:
[2021-10-13T06:59:01.372Z] debug ⛔️ Server wasn't able to start properly.
[2021-10-13T06:59:01.374Z] error Error connecting to the Mongo database. getaddrinfo ENOTFOUND mongodb+srv
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
If I enable srv (DATABASE_SRV=true), I get:
[2021-10-13T07:01:20.065Z] debug ⛔️ Server wasn't able to start properly.
[2021-10-13T07:01:20.067Z] error Error connecting to the Mongo database. URI does not have hostname, domain name and tld
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
How can I solve this?
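One thing worth checking, based on how the mongoose connector splits host and srv in the config above (this is an assumption, not a confirmed fix): with an Atlas SRV cluster, the host setting typically takes only the bare cluster hostname, with the scheme, credentials, and database supplied through the other settings. A sketch of the .env values under that assumption:

```
# hypothetical sketch: bare hostname in DB_MONGO_URL, srv enabled
DB_MONGO_URL=myproject.random.mongodb.net
DATABASE_SRV=true
DATABASE_NAME=mydbt
DB_USER_NAME=myuser
DB_PASSWORD=mypassword
DB_SSL_ENABLE=true
```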
I am new to both GKE and Django. I made an app in Django, built a Docker container, pushed it to GCR, and deployed it via GKE. The deployment works fine, but when I try to log in, I get an OperationalError. For the database connection, I am using the Cloud SQL proxy. I have collected the static files and stored them in Google Storage. Any help will be highly appreciated.
I have tried quite a few of the suggestions already available online, but without success.
When I try to log in as admin, I get the following output after entering my username and password:
OperationalError at /admin/login
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Following are my database settings in Django:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'polls',
        'USER': os.getenv('DATABASE_USER'),
        'PASSWORD': os.getenv('DATABASE_PASSWORD'),
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}
You should check the Docker logs to see whether there is any error connecting to the database.
If it is a database connection issue, you can try the following in your docker-compose.yml. You can customize the rest of the variables mentioned as needed for your polls application:
web:
  build: ./app
  image: {imagename}
  depends_on:
    - cloud-sql-proxy
  environment:
    - SQL_ENGINE=django.db.backends.postgresql_psycopg2
    - SQL_DATABASE=test_db
    - SQL_USER=postgres1
    - SQL_PASSWORD=6728298
    - SQL_HOST=cloud-sql-proxy
    - SQL_PORT=5432
    - DATABASE=postgres
cloud-sql-proxy:
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: /cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:5432 -credential_file=/config
  volumes:
    - {service_account_creds_path.json}:/config
You could read this article for reference: https://adilsoncarvalho.com/how-to-use-cloud-sql-proxy-on-docker-compose-f7418c53eed9. The article is about MySQL, but the concepts are the same. Good luck!
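On the Django side, the settings module can then read those same variables instead of hard-coding 127.0.0.1. A minimal sketch, assuming the SQL_* variable names from the compose file above (they are illustrative choices, not a Django convention):

```python
import os

# Read connection details from the environment so the same settings work
# with a local proxy (falls back to 127.0.0.1) or with a cloud-sql-proxy
# service reachable by hostname. The SQL_* names mirror the compose file
# above and are assumptions, not required names.
DATABASES = {
    "default": {
        "ENGINE": os.getenv("SQL_ENGINE", "django.db.backends.postgresql_psycopg2"),
        "NAME": os.getenv("SQL_DATABASE", "polls"),
        "USER": os.getenv("SQL_USER"),
        "PASSWORD": os.getenv("SQL_PASSWORD"),
        "HOST": os.getenv("SQL_HOST", "127.0.0.1"),
        "PORT": os.getenv("SQL_PORT", "5432"),
    }
}
```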
I can't get my Symfony 3 project on Google App Engine Flexible to connect to Google Cloud SQL (MySQL, 2nd generation).
Starting off with these examples:
https://github.com/GoogleCloudPlatform/php-docs-samples/tree/master/appengine/flexible/symfony
https://github.com/GoogleCloudPlatform/php-docs-samples/tree/master/appengine/flexible/cloudsql-mysql
I created a Google App Engine app and a Google Cloud SQL instance (MySQL, 2nd generation), both in europe-west3.
I tested these examples on my GAE and both run fine.
Then I went on and created a new, very simple Symfony 3.3 project from scratch.
After creation (with only the welcome screen and "Your application is now ready. You can start working on it at: /app/") I deployed that to my GAEflex and it works fine.
After that I added one simple entity and an extra controller for that.
This works fine, as it should, when starting Google's cloud_sql_proxy and the Symfony built-in webserver ("php bin/console server:run").
It reads and writes from and to the Cloud SQL, all easy peasy.
But no matter what I tried, I always get a "Connection refused" on one (or another, depending on my state of experimentation) of the commands from "symfony-scripts" / "post-install-cmd" in my composer.json.
"$ gcloud beta app deploy" output (excerpt)
...
Step #1: Install PHP extensions...
Step #1: Running composer...
Step #1: Loading composer repositories with package information
...
Step #1: [Doctrine\DBAL\Exception\ConnectionException]
Step #1: An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
Step #1:
Step #1:
Step #1: [Doctrine\DBAL\Driver\PDOException]
Step #1: SQLSTATE[HY000] [2002] Connection refused
Step #1:
Step #1:
Step #1: [PDOException]
Step #1: SQLSTATE[HY000] [2002] Connection refused
...
parameters.yml (excerpt)
parameters:
    database_host: 127.0.0.1
    database_port: 3306
    database_name: gae-test-project
    database_user: root
    database_password: secretpassword
config.yml (excerpt)
doctrine:
    dbal:
        driver: pdo_mysql
        host: '%database_host%'
        port: '%database_port%'
        unix_socket: '/cloudsql/restof:europe-west3:connectionname'
        dbname: '%database_name%'
        user: '%database_user%'
        password: '%database_password%'
        charset: utf8mb4
        default_table_options:
            charset: utf8mb4
            collate: utf8mb4_unicode_ci
composer.json (excerpt)
"require": {
    "php": "^7.1",
    "doctrine/doctrine-bundle": "^1.6",
    "doctrine/orm": "^2.5",
    "incenteev/composer-parameter-handler": "^2.0",
    "sensio/distribution-bundle": "^5.0.19",
    "sensio/framework-extra-bundle": "^3.0.2",
    "sensio/generator-bundle": "^3.0",
    "symfony/monolog-bundle": "^3.1.0",
    "symfony/polyfill-apcu": "^1.0",
    "symfony/swiftmailer-bundle": "^2.3.10",
    "symfony/symfony": "3.3.*",
    "twig/twig": "^1.0||^2.0"
},
"require-dev": {
    "symfony/phpunit-bridge": "^3.0"
},
"scripts": {
    "symfony-scripts": [
        "Incenteev\\ParameterHandler\\ScriptHandler::buildParameters",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::buildBootstrap",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::clearCache",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installAssets",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installRequirementsFile",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::prepareDeploymentTarget"
    ],
    "post-install-cmd": [
        "chmod -R ug+w $APP_DIR/var",
        "@symfony-scripts"
    ],
    "post-update-cmd": [
        "@symfony-scripts"
    ]
},
app.yml (complete)
service: default
runtime: php
env: flex

runtime_config:
  document_root: web

env_variables:
  SYMFONY_ENV: prod
  MYSQL_USER: root
  MYSQL_PASSWORD: secretpassword
  MYSQL_DSN: mysql:unix_socket=/cloudsql/restof:europe-west3:connectionname;dbname=gae-test-project
  WHITELIST_FUNCTIONS: libxml_disable_entity_loader

#[START cloudsql_settings]
# Use the connection name obtained when configuring your Cloud SQL instance.
beta_settings:
  cloud_sql_instances: "restof:europe-west3:connectionname"
#[END cloudsql_settings]
Keep in mind that I only added or altered lines that the examples and documentation advised me to.
What am I doing wrong?
The examples both work fine, and a Symfony project connecting to the DB through a running cloud_sql_proxy works fine, but when I try to get Doctrine to connect to Cloud SQL from within GAE Flex, I get no connection.
Can anyone spot the rookie mistake I might have made?
Has anyone had this problem, and did I maybe forget one or the other setting in GAE or Cloud SQL?
Can anyone point me to a repository of a working Symfony 3/Doctrine project that uses GAE Flex and Cloud SQL?
I am stuck, as I can't find any up-to-date documentation on how to get this rather obvious combination to run.
Any help is greatly appreciated!
Cheers /Carsten
I found it.
I read this post: Flexible env: post-deploy-cmd from composer.json is not being executed, which yielded two interesting pieces of information:
App Engine Flex for PHP doesn't run post-deploy-cmd any more;
just use post-install-cmd as a drop-in replacement, unless you need a DB connection.
So, during deployment, there is simply no database connection / socket available!
That's why Google introduced the (non-standard) post-deploy-cmd, which unfortunately seems to have caused some other trouble, so they removed it again. Bummer.
In effect, as of today, one can't use anything that touches the database within post-install-cmd.
In the case above, I just changed
composer.json (excerpt)
"scripts": {
    "symfony-scripts": [
        "Incenteev\\ParameterHandler\\ScriptHandler::buildParameters",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::buildBootstrap",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::clearCache",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installAssets",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installRequirementsFile",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::prepareDeploymentTarget"
    ],
    "pre-install-cmd": [
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installAssets"
    ],
    "post-install-cmd": [
        "chmod -R ug+w $APP_DIR/var",
        "Incenteev\\ParameterHandler\\ScriptHandler::buildParameters",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::buildBootstrap",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::clearCache",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installRequirementsFile",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::prepareDeploymentTarget"
    ],
    "post-update-cmd": [
        "@symfony-scripts"
    ]
},
And everything runs fine...
I'm not sure about the innards of installAssets and whether I still achieve the originally intended effect when I move it to pre-install-cmd, but, well, it works for a very simple example application...
In the next days I'm going to work on carrying this over to my more complex application and see if I can get that to work.
Further discussion will most likely happen over at the Google Cloud Platform Community.
Cheers everyone!
I followed the steps described here to connect my Grails 3.2.9 app to a Google Cloud SQL instance on the Google App Engine flexible environment:
http://guides.grails.org/grails-google-cloud/guide/index.html#deployingTheApp
My Grails version is as follows:
==> grails -version
| Grails Version: 3.2.9
| Groovy Version: 2.4.10
| JVM Version: 1.8.0_131
My application.yml looks as follows:
# tag::dataSourceConfiguration[]
dataSource:
    pooled: true
    jmxExport: true
    dialect: org.hibernate.dialect.MySQL5InnoDBDialect
environments:
    development:
        dataSource:
            dbCreate: create-drop
            url: jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
    test:
        dataSource:
            dbCreate: update
            url: jdbc:h2:mem:testDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
    production:
        dataSource:
            driverClassName: com.mysql.jdbc.Driver
            dbCreate: create-drop
            url: jdbc:cloudsql://google/{DATABASE_NAME}?cloudSqlInstance={INSTANCE_NAME}&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user={USERNAME}&password={PASSWORD}&useSSL=false
            properties:
When I run locally using grails run-app, the app runs correctly.
I run ./gradlew appengineDeploy to deploy, and it deploys correctly.
But when I try to open the scaffolded pages in the browser, I see the following error in the logs:
==> gcloud app logs tail -s default
ERROR --- [ main] o.h.engine.jdbc.spi.SqlExceptionHelper :
Driver:com.mysql.jdbc.Driver#75b3ef1a returned null for
URL:jdbc:cloudsql://google/{DATABASE_NAME}?cloudSqlInstance={INSTANCE_NAME}&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user={USERNAME}&password={PASSWORD}&useSSL=false
In addition, the following error also appears in the logs:
ERROR --- [ Thread-16] .SchemaDropperImpl$DelayedDropActionImpl :
HHH000478: Unsuccessful: alter table property drop foreign key
FKgcduyfiunk1ewg7920pw4l3o9
Does the HH prefix indicate that it is using the H2 database in the production env?
Please help me debug this.
It seems the issue is linked to Hibernate. The same error for Grails is discussed here:
https://hibernate.atlassian.net/browse/HHH-11470
You are using MySQL, not H2, as the error itself shows:
Driver:com.mysql.jdbc.Driver#75b3ef1a returned null for
Your problem is that you need to configure the URL with your particular details, replacing {DATABASE_NAME} with the name of your database (and likewise for the other placeholders).
You can see how to replace them in the example at http://guides.grails.org/grails-google-cloud/guide/index.html#dataSourceGoogleCloudSQL
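For illustration only, here is the production dataSource with the placeholders filled in with made-up values (mydb, my-gcp-project:us-central1:my-instance, root, and secret are hypothetical, not from the question):

```yaml
production:
    dataSource:
        driverClassName: com.mysql.jdbc.Driver
        # all values in the url below are hypothetical placeholders
        url: jdbc:cloudsql://google/mydb?cloudSqlInstance=my-gcp-project:us-central1:my-instance&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=root&password=secret&useSSL=false
```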