How to run kubectl exec on scripts that never end - sql-server

I have an mssql pod and I need to use sql_exporter to export its metrics.
I was able to set up this whole thing manually fine:
1. download the binary
2. install the package
3. run ./sql_exporter on the pod to start listening on a port for metrics
I tried to automate this using kubectl exec -it ... and was able to do steps 1 and 2. When I try to do step 3 with kubectl exec -it "$mssql_pod_name" -- bash -c ./sql_exporter, the command just hangs. I understand why, since the server is going to keep listening forever, but this blocks the rest of my installation scripts:
I0722 21:26:54.299112 435 main.go:52] Starting SQL exporter (version=0.5, branch=master, revision=fc5ed07ee38c5b90bab285392c43edfe32d271c5) (go=go1.11.3, user=root#f24ba5099571, date=20190114-09:24:06)
I0722 21:26:54.299534 435 config.go:18] Loading configuration from sql_exporter.yml
I0722 21:26:54.300102 435 config.go:131] Loaded collector "mssql_standard" from mssql_standard.collector.yml
I0722 21:26:54.300207 435 main.go:67] Listening on :9399
<nothing else, never ends>
Any tips on silencing this and letting it run in the background? (I cannot Ctrl-C, as that will stop the port listening.) Or is there a better way to automate plugin installation upon pod deployment? Thank you

To answer your question:
This answer should help you. You should (!?) be able to use ./sql_exporter & to run the process in the background (when not using --stdin --tty). If that doesn't work, you can try nohup, as described in the same answer.
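A minimal sketch of the non-interactive variant (it assumes the pod name is in $mssql_pod_name, that sql_exporter sits in the container's working directory, and the log file name is arbitrary):

# run the exporter detached inside the container, without a TTY
kubectl exec "$mssql_pod_name" -- bash -c \
  'nohup ./sql_exporter > sql_exporter.log 2>&1 &'

Redirecting stdout/stderr matters here: kubectl exec waits for the remote output streams to close, so without the redirect the call can keep hanging even with & and nohup.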
To recommend a better approach:
Using kubectl exec is not a good way to program a Kubernetes cluster.
kubectl exec is best used for debugging rather than deploying solutions to a cluster.
I assume someone has created a Kubernetes Deployment (or similar) for Microsoft SQL Server. You now want to complement that Deployment with the exporter.
You have options:
Augment the existing Deployment and add the sql_exporter as a sidecar (another container) in the Pod that includes the Microsoft SQL Server container. The exporter accesses the SQL Server via localhost. This is a common pattern when deploying functionality that complements an application (e.g. logging, monitoring); a sketch of this option is shown below.
Create a new Deployment (or similar) for the sql_exporter and run it as a standalone Service. Configure it to scrape one|more Microsoft SQL Server instances.
Both these approaches:
take more work but they're "more Kubernetes" solutions and provide better repeatability|auditability etc.
require that you create a container for sql_exporter (although I assume the exporter's authors already provide this).
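A hedged sketch of the sidecar option (the image names and ports are assumptions, and env/config for both containers is omitted; fold the extra container into your real Deployment rather than applying this verbatim):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql                              # existing SQL Server container (env/volumes omitted)
        image: mcr.microsoft.com/mssql/server:2019-latest
        ports:
        - containerPort: 1433
      - name: sql-exporter                       # sidecar: reaches SQL Server via localhost:1433
        image: your-registry/sql_exporter:0.5    # placeholder image; build or pull a real one
        ports:
        - containerPort: 9399                    # metrics endpoint for Prometheus to scrape
EOF

Prometheus then scrapes port 9399 on the Pod, and nothing needs kubectl exec at all.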

Related

How to use `gcloud builds` for database migrations

I have a Rails 5 app deployed with Google App Engine using Cloud SQL for MySQL following their tutorial.
When I run a database migration,
bundle exec rake appengine:exec -- bundle exec rake db:migrate
I get a deprecation warning:
WARNING: This command is deprecated and will be removed on or after 2018-10-31. Please use `gcloud builds submit` instead.
Before I go off on a vision quest to sort this out, has anyone else converted their Rails app to use gcloud builds for rake tasks like this? Mind sharing the gist? Thanks!
Go to the Cloud SQL Instances page in the Google Cloud Platform Console. ...
Select the instance you want to add the database to.
Select the Databases tab.
Click Create database.
In the Create a database dialog, specify the name of the database, and optionally the character set and collation. ...
Click Create.
If this isn't what you're looking for, then try to start over.
I ended up finding this answer, which goes through installing the Cloud SQL Proxy so you can run the migration locally:
RAILS_ENV=production bin/rails db:migrate
I'm still interested in a new way to easily execute the command in the cloud, but running locally with a db proxy totally works for now.
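A hedged sketch of that local-proxy approach (the instance connection name, credentials file, and MySQL port are placeholders; the linked answer has the exact steps):

# start the proxy, forwarding the Cloud SQL instance to localhost:3306
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306 \
    -credential_file=service-account.json &

# point database.yml (or DATABASE_URL) at 127.0.0.1:3306, then migrate
RAILS_ENV=production bin/rails db:migrate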

Running different versions of postgresql side by side

I have postgresql 9.3 installed.
I would also like to have Postgres 9.6.1 installed.
Each application is using a different DB. Most of the times I don't run both applications, so I don't need them to run concurrently.
I downloaded the installer recommended by postgres, and installed 9.6.1, but then it seems that 9.3 is not able to start anymore. I'm getting an error trying to run sudo service postgres start:
Starting PostgreSQL 9.3 database server
The PostgreSQL server failed to start. Please check the log output.
The log file is empty (not sure that's the interesting one) - /var/log/postgresql/postgresql-9.3-main.log
Any idea how to be able to run both instances?
You need to check the postgresql.conf config file.
If you want to run both instances at the same time, they will need to run on different ports, otherwise they will conflict. The default is 5432; change it for one of the DBs.
Then make sure that the data directory and log file are unique for each instance.
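A hedged sketch on a Debian/Ubuntu-style layout (the paths and the chosen port are assumptions):

# /etc/postgresql/9.6/main/postgresql.conf
#   port = 5433                                      # 9.3 keeps the default 5432
#   data_directory = '/var/lib/postgresql/9.6/main'  # separate from the 9.3 cluster

sudo service postgresql restart   # restarts every installed cluster
pg_lsclusters                     # Debian/Ubuntu helper: lists clusters, versions and ports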

What is the common practice to create db schema in Cloud Foundry?

I have been searching for a while for a best practice to initialize a relational database schema and pre-populate data.
There are a couple of ways to make it happen:
Install cf-ex-phpmyadmin and import the data and schema through it
Use the VMC CLI tool to create a tunnel to the service, as described at this link
If using Ruby or Python, use the DB migration command in the manifest.yml. However, it will be executed on each instance and every time the instance re-stages.
Which one is commonly used and most effective?
VMC is very old and is no longer supported. I'd be surprised if it even works against a Cloud Foundry installation that has been deployed within the last couple years. You should use the new cf CLI.
If you were to put the command in your manifest, you could avoid having it run on every instance by adding a conditional guard that only runs the migrations when $CF_INSTANCE_INDEX equals 0. However, it's not always a great idea to run migrations in your start command: there is a hard timeout on the start command, and you don't want long migrations to be interrupted.
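A hedged sketch of that guard as a start command (the migrate and server invocations are Rails-style assumptions):

# run migrations on instance 0 only, then start the app on every instance
if [ "$CF_INSTANCE_INDEX" = "0" ]; then
  bundle exec rake db:migrate
fi
bundle exec rails server -b 0.0.0.0 -p "$PORT"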
A good suggestion I've heard [1] is that migrations should be handled as a separate part of your deploy process, either by cf ssh or running them locally, pointed at the URL and credentials of your database service instance.
[1] credit to Travis Grathwell for this suggestion.

Heroku auto restore

I need to implement an automatic transfer of daily backups from one DB to another DB. Both DBs and apps are hosted on Heroku.
I know this is possible to do manually from a local machine with the command:
heroku pgbackups:restore DATABASE `heroku pgbackups:url --app production-app` --app staging-app
But this process should be automated and not run from a local machine.
My idea is to write a rake task which will execute this command, and to run it daily with the help of the Heroku Scheduler add-on.
Any ideas how it is better to do this? Or maybe there is a better way for this task?
Thanks in advance.
I managed to solve the issue myself. It turned out to be not so complex. Here is the solution; maybe it'll be useful to somebody else:
1. I wrote a script which copies the latest dump from a certain server to the DB of the current server
namespace :backup do
  desc "copy the latest dump from a certain server to the DB of the current server"
  task restore_last_write_dump: :environment do
    # %x() output ends with a newline, so strip it before interpolating into the shell command
    last_dump_url = %x(heroku pgbackups:url --app [source_app_name]).strip
    system("heroku pgbackups:restore [DB_to_target_app] '#{last_dump_url}' -a [target_app_name] --confirm [target_app_name]")
    puts "Restored dump: #{last_dump_url}"
  end
end
2. To avoid authentication upon each request to the servers, create a .netrc file in the app root (see details here https://devcenter.heroku.com/articles/authentication#usage-examples); a sketch follows this list.
3. Set up the Scheduler add-on for Heroku and add our rake task along with the frequency of its running.
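A hedged sketch of that .netrc (the email and token are placeholders; the exact format is in the linked Heroku docs):

# create .netrc in the app root so the heroku CLI on the dyno can authenticate
cat > .netrc <<'EOF'
machine api.heroku.com
  login me@example.com
  password <heroku-api-token>
EOF

Heroku Scheduler then only needs the job command, e.g. rake backup:restore_last_write_dump, at the chosen frequency.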
That is all.

Running batch file remotely using Hudson

What is the simplest way to schedule a batch file to run on a remote machine using Hudson (latest and greatest version)? I was exploring the master slave setup. I created a dumb slave but I am not sure what the parameters should be so that I can trigger the batch file in the remote slave machine.
Basically, I am trying to run 2 different batch files on two different remote machines sequentially, triggered from my machine (the master). The Step by step guide on the Hudson website is a dead link. There are similar questions posted on SO but it does not quite work for me when I use the parameters they mention.
If anyone has done something similar please suggest ways to make this work.
(I know how to set up jobs and add a step to run a batch file, etc.; what I am having trouble configuring is doing this on a remote machine using Hudson's built-in features.)
UPDATE
Thank you all for the suggestions. Quick update on this:
What I wanted to get done is partially working; below are the steps I followed to get there:
Created a new Node from Manage Nodes -> New Node -> set # of Executors to 1, set Remote FS root to '/var/hudson', set Launch method to JNLP, set the slave name and saved.
Once the slave was set up (from the master machine), I logged into the slave's physical machine, downloaded _slave.jar from http://masterserver:port/jnlpJars/slave.jar, and ran the following from the command line at the download location: java -jar _slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp. The connection was made successfully.
Checked 'Restrict where this project can be run' in the master job configuration, and set the parameter to the slave name.
Checked "Add Build Step" and added my batch job script.
What I am still missing is a way to connect to two slaves from one job in sequence. Is that possible?
It is fairly easy and straightforward. Let's assume you already have a slave running. Then you configure the job as if you were locally on the target box. The setting for Restrict where this project can be run needs to be the node that you want to run on. This is all for the job configuration.
For the slave configuration read the following pages.
Installing Hudson as a Windows service
Distributed builds
On Windows I prefer to run the slave as a service and let the remote machine manage the start up and shut down of the slave. The only disadvantage with this is that you need to upgrade the client every time you update the server. Just get the new client.jar from the server after the upgrade and put it on the slave. Then restart the slave and you are done.
I had trouble using the install-as-a-service option for the slave, even though I did it as a local administrator. I then used srvany to wrap the jar into a service. Here is a blog about it. The command that you need to wrap you will get from your Hudson server on the slave page. For all of this to work, you should set up the slave management as JNLP.
If you have an SSH server on your target machine, you can use the SSH slave settings. These work for me like a charm. I use them with my Unix slaves. So far the SSH option with Unix is less of a hassle than the Windows service clients.
I had some similar trouble with slave setup and wrote up this blog post - I was running on Linux rather than Windows, but hopefully this will help.
I don't know how to use built-in Hudson features for this job, but in one of my project builds I run a batch file that in turn uses PsTools to run the job on a remote server. I found PsTools extremely easy to use: download, unpack and run the command with the right parameters, hence I opted to use this.
