How do I update the configuration of an existing OpenFaaS installation, for example:
--set faasIdler.dryRun=true/false
We can specify the configuration while creating the cluster, but how do I update the configuration of an existing installation using arkade?
You can re-run arkade install with the new parameters and it will upgrade the installation in place. If you want to test in a safe space, use ark get kind and then use kind to build a test cluster on your local machine. That's what I did to get the output below.
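If you don't have a safe cluster to try this on, a minimal sketch (the cluster name is just an example):

# fetch the kind binary, then create a throwaway local cluster
ark get kind
kind create cluster --name openfaas-test

# initial install into the test cluster
ark install openfaas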
Background: under the hood, arkade uses Helm to manage applications installed into Kubernetes clusters, and Helm can do in-place upgrades.
Below is an example
Before, with 1 gateway replica:
kubectl get pods -n openfaas
NAMESPACE   NAME                                 READY   STATUS    RESTARTS   AGE
openfaas    alertmanager-697bb8b556-8mtt7        1/1     Running   0          2m41s
openfaas    basic-auth-plugin-858495b9c6-jnr2m   1/1     Running   0          2m41s
openfaas    gateway-755d7f49fb-8q987             2/2     Running   0          2m41s
openfaas    nats-cdc589ff7-7l8x8                 1/1     Running   0          2m41s
openfaas    prometheus-666d8674bb-958td          1/1     Running   0          2m41s
openfaas    queue-worker-79876dbdc4-hpxg6        1/1     Running   0          2m41s
Upgrading to 2 gateway replicas:
ark install openfaas --max-inflight=5 --set gateway.replicas=2
The output from arkade install will show you the actual Helm command used. In this case it is helm upgrade --install, which installs the app if it doesn't exist and upgrades it if it does:
VALUES values.yaml
Command: /home/kylos/.arkade/bin/helm [upgrade --install openfaas openfaas/openfaas --namespace openfaas --values /tmp/charts/openfaas/values.yaml --set clusterRole=false --set operator.create=false --set openfaasImagePullPolicy=IfNotPresent --set faasnetes.imagePullPolicy=Always --set basicAuthPlugin.replicas=1 --set queueWorker.replicas=1 --set serviceType=NodePort --set gateway.directFunctions=true --set gateway.replicas=2 --set ingressOperator.create=false --set queueWorker.maxInflight=5 --set basic_auth=true]
Release "openfaas" has been upgraded. Happy Helming!
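Because Helm manages the release, you can also confirm what was applied after the upgrade, for example:

# show the user-supplied values recorded for the release
helm get values openfaas -n openfaas

# show the revision history that the upgrades created
helm history openfaas -n openfaas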
Here you'll see two gateway pods:
kubectl get pods -n openfaas
NAME                                 READY   STATUS    RESTARTS   AGE
alertmanager-697bb8b556-8mtt7        1/1     Running   0          7m41s
basic-auth-plugin-858495b9c6-jnr2m   1/1     Running   0          7m41s
gateway-755d7f49fb-8q987             2/2     Running   0          7m41s
gateway-755d7f49fb-vw8z8             2/2     Running   0          4m39s
nats-cdc589ff7-7l8x8                 1/1     Running   0          7m41s
prometheus-666d8674bb-958td          1/1     Running   0          7m41s
queue-worker-79876dbdc4-hpxg6        1/1     Running   0          7m41s
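The same pattern answers the original question: re-run the install with the flag you want to change, keeping any other non-default flags you still want, since the Helm command is rebuilt from the flags you pass on each run:

ark install openfaas --max-inflight=5 --set gateway.replicas=2 --set faasIdler.dryRun=false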
I'm running two 12c R2 databases on Oracle Linux 7.9.
I installed the first database (CDB) along with a listener, and both were running fine. Then I installed another 12c R2 database (CDB2) on the same system.
These are my instances:
CDB
CDB2
The problem is that if I start up only the second database (CDB2) and then the listener, running lsnrctl status LISTENER says: The listener supports no services.
If I start up the first database (CDB) after that, the listener comes to life, saying that it supports both services (CDB and CDB2).
So it only supports CDB2 if I also start up CDB; if I start up only CDB2, it does not support it.
However, after CDB has been started and the listener supports both services, if I shut down CDB, the listener still supports CDB2.
As a summary: if I start up CDB2 and then the listener, the listener supports no services. If I then start up CDB, the listener supports both databases. If I then shut down CDB, the listener supports only CDB2, which is what I wanted in the first place.
First step, with nothing running yet:
[oracle@oel7 ~]$ ps -ef | grep pmon
oracle    4350  2463  0 20:56 pts/0    00:00:00 grep --color=auto pmon
[oracle@oel7 ~]$ ps -ef | grep tns
root        37     2  0 20:37 ?        00:00:00 [netns]
oracle    4458  2463  0 20:57 pts/0    00:00:00 grep --color=auto tns
[oracle@oel7 ~]$
Starting CDB2 & listener:
[oracle@oel7 ~]$ ps -ef | grep pmon
oracle    2547     1  0 20:41 ?        00:00:00 ora_pmon_CDB2
oracle    4498  2463  0 20:58 pts/0    00:00:00 grep --color=auto pmon
[oracle@oel7 ~]$
[oracle@oel7 ~]$ ps -ef | grep tns
root        37     2  0 20:37 ?        00:00:00 [netns]
oracle    4537     1  0 20:58 ?        00:00:00 /u01/app/oracle/product/12.2.0/dbhome_1/bin/tnslsnr LISTENER -inherit
oracle    4563  2463  0 20:59 pts/0    00:00:00 grep --color=auto tns
[oracle@oel7 ~]$
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 21-JAN-2021 20:42:01
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/oel7/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oel7.localdomain)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully
After starting CDB:
[oracle@oel7 dbhome_1]$ ps -ef | grep pmon
oracle    2351     1  0 20:41 ?        00:00:00 ora_pmon_CDB
oracle    2547     1  0 20:41 ?        00:00:00 ora_pmon_CDB2
oracle    4814  2463  0 21:02 pts/0    00:00:00 grep --color=auto pmon
[oracle@oel7 dbhome_1]$
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 21-JAN-2021 20:42:01
Uptime 0 days 0 hr. 0 min. 49 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/oel7/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oel7.localdomain)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oel7.localdomain)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/CDB/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "CDB.localdomain" has 1 instance(s).
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDBXDB.localdomain" has 1 instance(s).
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "b8c025790af43eafe0536f64a8c04644.localdomain" has 1 instance(s).
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "cdbpdb1.localdomain" has 1 instance(s).
Instance "CDB", status READY, has 1 handler(s) for this service...
The command completed successfully
After shutdown of CDB and only CDB2 running:
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 21-JAN-2021 20:58:52
Uptime 0 days 0 hr. 4 min. 55 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/oel7/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oel7.localdomain)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@oel7 dbhome_1]$
Also, even if I stop and start the listener again (with only CDB2 running and CDB shut down), it works as it should: lsnrctl status LISTENER reports the same two services, CDB2 and CDB2XDB, with instance CDB2 in status READY.
Both databases have the same Oracle Home: /u01/app/oracle/product/12.2.0/dbhome_1
Network files:
[oracle@oel7 admin]$ cat listener.ora
# listener.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel7.localdomain)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
[oracle@oel7 admin]$ cat tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
LISTENER_CDB =
  (ADDRESS = (PROTOCOL = TCP)(HOST = oel7.localdomain)(PORT = 1521))

CDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oel7.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CDB2)
    )
  )

CDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oel7.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CDB.localdomain)
    )
  )
What is the problem? Why doesn't the listener support the second database (CDB2) from the start, so that I have to start up the first database (CDB) as well in order for it to support both of them?
Thanks.
Overall, I would suggest creating a second listener definition in listener.ora, with the service of CDB2 in it, so that every database has its own listener with its own services. Define those listener names from listener.ora in the tnsnames.ora file as well, the way you already tried to define LISTENER_CDB in tnsnames. Then start and inspect each listener separately and manually, e.g. lsnrctl start LISTENER_CDB1, lsnrctl start LISTENER_CDB2, lsnrctl status LISTENER_CDB1, and so on; a sketch of such a setup follows below.
Regarding your current definitions, I think the problem may be just that: you defined LISTENER_CDB in tnsnames.ora, but in listener.ora you only have LISTENER.
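As an illustration of that suggestion, a minimal listener.ora sketch with one listener per database (the listener names and the second port are examples, not values from your system):

LISTENER_CDB =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel7.localdomain)(PORT = 1521))
    )
  )

LISTENER_CDB2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel7.localdomain)(PORT = 1522))
    )
  )

Each instance then needs its LOCAL_LISTENER parameter pointed at its own alias (resolved through tnsnames.ora) so that dynamic registration targets the right listener, for example in CDB2:

ALTER SYSTEM SET LOCAL_LISTENER='LISTENER_CDB2' SCOPE=BOTH;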
On a side note, something to consider as a troubleshooting utility: the PMON process wakes up every 60 seconds and provides information to the listener (from 12c onwards the LREG process handles this registration). If a problem arises and the registration process fails, service information is not registered with the listener periodically. In that case you can do a manual service registration using the command:
ALTER SYSTEM REGISTER;
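A minimal sketch of using that in the scenario above, forcing CDB2 to register right after the listener starts instead of waiting up to 60 seconds for the next registration cycle (this assumes the environment, e.g. ORACLE_SID=CDB2, points at the instance you want registered):

[oracle@oel7 ~]$ sqlplus / as sysdba
SQL> ALTER SYSTEM REGISTER;
SQL> exit
[oracle@oel7 ~]$ lsnrctl status LISTENER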
I'm developing a Node.js & Angular app, using GitLab to store the code and GitLab CI/CD to deploy the current version of the app on GCP. About a month ago I started to get an error during the Node.js installation:
Step #1: INFO[0028] RUN /usr/local/bin/install_node '>=12'
Step #1: INFO[0028] cmd: /bin/sh
Step #1: INFO[0028] args: [-c /usr/local/bin/install_node '>=12']
Step #1: % Total % Received % Xferd Average Speed Time Time Time Current
Step #1: Dload Upload Total Spent Left Speed
Step #1: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 32.1M 100 32.1M 0 0 47.5M 0 --:--:-- --:--:-- --:--:-- 47.5M
Step #1: % Total % Received % Xferd Average Speed Time Time Time Current
Step #1: Dload Upload Total Spent Left Speed
Step #1: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 3838 100 3838 0 0 21107 0 --:--:-- --:--:-- --:--:-- 21204
Step #1: gpg: Signature made Thu Sep 10 15:04:50 2020 UTC using RSA key ID C17AB93C
Step #1: gpg: Can't check signature: public key not found
Step #1: The Node.js binary could not be verified.
Step #1: This means it may not be an officially released Node.js binary
Step #1: or may have been tampered with.
I thought it was an issue with the GitLab & Google accounts, so I created the project from scratch (both GitLab and GAE), but had no luck.
The script section of my gitlab-ci.yml:
- echo $GCP_SERVICE_KEY > /tmp/$CI_PIPELINE_ID.json
- gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
- gcloud --project $GCP_PROJECT_ID app deploy app.yaml dispatch.yaml
app.yaml:
runtime: nodejs
env: flex

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 4

env_variables:
  CLOUD_STORAGE_BUCKET: gs://[projectname]-deploy
Could anyone help me solve this error, or point me to steps for solving it?
Thanks in advance.
For anyone facing the same issue: remove the engines section from package.json, because it apparently confuses the GCP installer.
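For reference, the section to remove looks something like this in package.json (the name is a placeholder; the version range matches the '>=12' visible in the build log above):

{
  "name": "my-app",
  "engines": {
    "node": ">=12"
  }
}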
The answer can be found in:
https://github.com/GoogleCloudPlatform/nodejs-docker/issues/214
It seems like the issue is with Node 14.10.
I recently moved my PostgreSQL data_directory from /var/lib/pgsql/data to /home/databasepostgre/. These are the steps I followed:
1. sudo systemctl stop postgresql-9.4.service
2. Edit postgresql.conf, setting data_directory to /home/databasepostgre/pgsql/9.4/data
3. sudo rsync -av /var/lib/pgsql/9.4/data /home/databasepostgre/pgsql/9.4/data
4. su postgres
5. In psql, run SHOW data_directory; (it shows the new directory, /home/databasepostgre/pgsql/9.4/data)
6. systemctl start postgresql-9.4.service
But each time I execute step #6, I always end up with this error:
Job for postgresql-9.4.service failed because a timeout was exceeded. See systemctl status postgresql-9.4.service and journalctl -xe for details.
From journalctl -xe the error is as follows:
May 11 13:35:04 systemd[1]: postgresql-9.4.service start operation timed out. Terminating.
May 11 13:35:04 systemd[1]: Failed to start PostgreSQL 9.4 database server.
-- Subject: Unit postgresql-9.4.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit postgresql-9.4.service has failed.
--
-- The result is failed.
May 11 13:35:04 systemd[1]: Unit postgresql-9.4.service entered failed state.
May 11 13:35:04 systemd[1]: postgresql-9.4.service failed.
Can anyone please help me? This is a production server and I still cannot find the cause or how to solve it.
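One thing worth checking against step 3: rsync copies the source directory itself when the source path has no trailing slash, so the cluster files may have landed one level deeper than data_directory expects. The directory also has to keep the ownership and permissions PostgreSQL insists on. A quick check, with paths as above (the requirements themselves are standard PostgreSQL behavior):

# PG_VERSION must sit directly under the configured data_directory;
# if it only exists at .../data/data/PG_VERSION, the rsync nested it
ls /home/databasepostgre/pgsql/9.4/data/PG_VERSION

# the server refuses to start unless the directory is owned by postgres
# and closed to group/other
sudo chown -R postgres:postgres /home/databasepostgre/pgsql/9.4/data
sudo chmod 700 /home/databasepostgre/pgsql/9.4/data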
I am new to Flink and trying to deploy my jar on an EMR cluster. I have a 3-node cluster (1 master and 2 slaves) with the default configuration; I have not made any configuration changes. On running the following command on my master node:
flink run -m yarn-cluster -yn 2 -c Main /home/hadoop/myjar-0.1.jar
I am getting the following error:
INFO org.apache.flink.yarn.YarnClusterDescriptor - Deployment took more than 60 seconds. Please check if the requested resources are available in the YARN cluster
Can anyone please explain what could be the possible reason for this error?
As you didn't specify any resources (memory, CPU cores), I guess it's because the YARN cluster doesn't have the desired resources available, especially memory.
Try submitting your jar file using the following type of commands:
flink run -m yarn-cluster -yn 5 -yjm 768 -ytm 1400 -ys 2 -yqu streamQ my_program.jar
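For reference, here is a breakdown of the y-prefixed options used above; the numbers are only examples and should be sized to the memory and vcores your core nodes actually have free in YARN:

flink run -m yarn-cluster -yn 5 -yjm 768 -ytm 1400 -ys 2 -yqu streamQ my_program.jar
# -yn   number of YARN containers to allocate (task managers)
# -yjm  memory for the job manager container, in MB
# -ytm  memory for each task manager container, in MB
# -ys   number of task slots per task manager
# -yqu  the YARN queue to submit the job to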
You can find more information about the command options in the Flink documentation, and you can check the application logs in the YARN web UI to see what exactly the problem is.
I've built a Docker image which consists of two parts:
a simple Node.js app which listens on port 8080
a Haskell service which uses the Snap framework (port 8000)
I know that it's better to run those two parts in different containers, but there is a reason to keep them in one, so I found a way to run two services in one container with the use of supervisord.
In the Dockerfile I expose 8080, and when I run the Docker image locally it works just fine: I can make POST requests to the Node.js app, which in its turn makes POST requests to the haskellmodule on port 8000. I run it with the following command:
docker run -p 8080:8080 image_name
So I pushed the image to Google Container Registry and deployed it with the use of the --image-url flag. The deployment process finishes without any error, but after that I cannot reach my app. Looking at the running version's logs, I see the following:
A /usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
A 'Supervisord is running as root and it is searching '
A 2017-10-08 14:08:45,368 CRIT Supervisor running as root (no user in config file)
A 2017-10-08 14:08:45,368 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
A 2017-10-08 14:08:45,423 INFO RPC interface 'supervisor' initialized
A 2017-10-08 14:08:45,423 CRIT Server 'unix_http_server' running without any HTTP authentication checking
A 2017-10-08 14:08:45,424 INFO supervisord started with pid 1
A 2017-10-08 14:08:46,425 INFO spawned: 'haskellmodule' with pid 7
A 2017-10-08 14:08:46,427 INFO spawned: 'nodesrv' with pid 8
A 2017-10-08 14:08:47,429 INFO success: haskellmodule entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
A 2017-10-08 14:08:47,429 INFO success: nodesrv entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
A 2017-10-08 14:13:49,124 WARN received SIGTERM indicating exit request
A 2017-10-08 14:13:49,127 INFO waiting for haskellmodule, nodesrv to die
A 2017-10-08 14:13:49,128 INFO stopped: nodesrv (terminated by SIGTERM)
A 2017-10-08 14:13:49,138 INFO stopped: haskellmodule (terminated by SIGTERM)
Then it starts over, and everything repeats again and again.
My Dockerfile:
FROM node:latest
RUN apt-get update
RUN curl -sSL https://get.haskellstack.org/ | sh
COPY ./nodesrv /nodesrv
COPY ./haskellmodule /haskellmodule
RUN mkdir /log
WORKDIR /haskellmodule
RUN stack build
WORKDIR /
RUN apt-get update && apt-get install -y supervisor
ADD ./configs/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 8080
ENTRYPOINT ["/usr/bin/supervisord"]
My supervisord config:
[supervisord]
nodaemon=true
[program:nodesrv]
command=node index.js
directory=/nodesrv/
user=root
[program:haskellmodule]
command=stack exec haskellmodule-exe
directory=/haskellmodule/
user=root
My app.yaml file I use for deployment:
runtime: custom
env: flex
So it seems like Google App Engine is shutting supervisord down (taking into account that everything works on localhost). What could be the reason for that?
Thanks in advance
You need to configure your app.yaml file to open ports 8080 and 8000, in addition to opening the port in your Dockerfile with EXPOSE. This is covered in the app.yaml documentation; the example from the docs is copied below.
Add the following to your app.yaml:
network:
  instance_tag: TAG_NAME
  name: NETWORK_NAME
  subnetwork_name: SUBNETWORK_NAME
  forwarded_ports:
    - PORT
    - HOST_PORT:CONTAINER_PORT
    - PORT/tcp
    - HOST_PORT:CONTAINER_PORT/udp
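Applied to this case, a minimal sketch might look like this (only port 8000 should need forwarding, since App Engine flexible already routes external traffic to the port 8080 your Node.js app listens on):

network:
  forwarded_ports:
    - 8000/tcp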
Run supervisord with the -n argument. This will run supervisord in the foreground. That works fine for me in the App Engine flexible environment.
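In Dockerfile terms that is the same entrypoint as in the question, plus the flag:

ENTRYPOINT ["/usr/bin/supervisord", "-n"]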
Thanks