What's wrong with this simple file.managed SaltStack configuration?

From a fresh SaltStack installation on server and client, the goal is to serve a file with a number inside:
SERVER
$vim /etc/salt/master
...
file_roots:
  base:
    - /srv/salt
...
$echo 1 > /srv/salt/tmp/salt.config.version
$cat /srv/salt/top.sls
base:
  '*':
    - tmpversion
$cat /srv/salt/tmpversion/init.sls
/tmp/salt.config.version:
  file.managed:
    - source: salt://tmp/salt.config.version
    - user: root
    - group: root
    - mode: 644
CLIENT (minion)
$vim /etc/salt/minion
...
master: <masterhostnamehere>
...
I'm using salt '*' state.sls tmpversion to apply the configuration, but I don't know how to get the changes applied automatically.

Salt doesn't do anything until you tell it to. That means you have to run the salt command on the CLI whenever you want a state to be applied, or you can use Salt's internal scheduler or your system's cron to run the job regularly (see the sketch below).
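For example, a minimal sketch of Salt's internal scheduler, added to the minion config (the state name matches the question; the 30-minute interval is an assumption):
# /etc/salt/minion
schedule:
  apply_tmpversion:
    function: state.sls
    args:
      - tmpversion
    minutes: 30
Alternatively, a cron entry on the master such as 0 * * * * salt '*' state.sls tmpversion reapplies the state every hour.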

Related

Setup Xdebug for Shopware docker failed

I'm trying to set up Xdebug for shopware-docker, without success.
VHOST_[FOLDER_NAME_UPPER_CASE]_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php74-xdebug
After replacing the folder name and running swdc up, Xdebug should be activated.
Which folder name should I use?
Using myname, the same name as in /var/www/html/myname, returns an error on swdc up myname:
swdc up myname
[+] Running 2/0
⠿ Network shopware-docker_default Created 0.0s
⠿ Container shopware-docker-mysql-1 Created 0.0s
[+] Running 1/1
⠿ Container shopware-docker-mysql-1 Started 0.3s
.database ready!
[+] Running 0/1
⠿ app_myname Error 1.7s
Error response from daemon: manifest unknown
EDIT #1
With this setup VHOST_MYNAME_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php81-xdebug (versioned Xdebug) the app started:
// $HOME/.config/swdc/env
...
VHOST_MYNAME_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php81-xdebug
But when I set a debug breakpoint (e.g. in index.php), nothing happens.
EDIT #2
As @Alex recommended, I placed xdebug_break() inside my code and it works.
When stopping on the breakpoint, the debugger log answers with hints/warnings like those described in the manual:
...
Cannot find a local copy of the file on server /var/www/html/%my_path%
Local path is //var/www/html/%my_path%
...
Clicking on Click to set up path mapping opens the modal.
Inside the modal, I select the input Use path mapping (...).
The input field File path in project responds with undefined.
But I have already set up the mapping as described in the manual, under File | Settings | PHP | Servers:
Why does my mapping not work? Where did my setup fail?
The path mapping needs to be between your local project path on your workstation and the path inside the Docker containers, e.g. mapping /home/you/projects/myname on your workstation to /var/www/html/myname in the container. Without it, Xdebug has a hard time mapping the breakpoints from PhpStorm to the actual code inside the container.
If mapping the path correctly does not work, and if it's a possibility for you, I can highly recommend switching to http://devenv.sh for your development environment. Shopware itself promotes this new environment in their documentation: https://developer.shopware.com/docs/guides/installation/devenv and provides an example of how to enable Xdebug:
# devenv.local.nix file
{ pkgs, config, lib, ... }:
{
  languages.php.package = pkgs.php.buildEnv {
    extensions = { all, enabled }: with all; enabled ++ [ amqp redis blackfire grpc xdebug ];
    extraConfig = ''
      # Copy the config from devenv.nix and append the Xdebug config
      # [...]
      xdebug.mode=debug
      xdebug.discover_client_host=1
      xdebug.client_host=127.0.0.1
    '';
  };
}
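After adding this file, re-entering the environment (e.g. with devenv shell, or restarting devenv up) should rebuild PHP with the Xdebug extension enabled; the exact reload step depends on how you run devenv.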
A correct path mapping should not be needed here, as your local file location is the same for Xdebug and your PhpStorm.

In Python, how do I construct a URL to test if my SQL Server instance is running?

I'm using Python 3.8 with the pytest-docker-compose plugin -- https://pypi.org/project/pytest-docker-compose/ . Does anyone know how to write a URL that would eventually tell me if my SQL Server is running?
I have this docker-compose.yml file
version: "3.2"
services:
sql-server-db:
build: ./
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "password"
ACCEPT_EULA: "Y"
but I don't know what URL to pass to my Retry object to test that the server is running. This fails ...
import pytest
import requests
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter
...

@pytest.fixture(scope="function")
def wait_for_api(function_scoped_container_getter):
    """Wait for sql server to become responsive"""
    request_session = requests.Session()
    retries = Retry(total=5,
                    backoff_factor=0.1,
                    status_forcelist=[500, 502, 503, 504])
    request_session.mount('http://', HTTPAdapter(max_retries=retries))
    service = function_scoped_container_getter.get("sql-server-db").network_info[0]
    api_url = "http://%s:%s/" % (service.hostname, service.host_port)
    assert request_session.get(api_url)
    return request_session, api_url
with this exception
raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=1433): Max retries exceeded with url: / (Caused by ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))
You could use something like this and it would output if it was connected (note: this assumes an already-open mysql-connector-python connection object):
# 'connection' is assumed to be an open mysql.connector connection
if connection.is_connected():
    db_Info = connection.get_server_info()
    print("Connected to MySQL Server version ", db_Info)
    cursor = connection.cursor()
    cursor.execute("select database();")
    record = cursor.fetchone()
    print("You're connected to database: ", record)
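For completeness, a minimal sketch of how such a connection object could be created (this uses mysql-connector-python with hypothetical local credentials; the question targets SQL Server, so treat it purely as an illustration of the pattern):
import mysql.connector

# hypothetical credentials for illustration only
connection = mysql.connector.connect(host="localhost", user="root", password="password")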
Here is a sample function that will retry connecting to the DB and won't return until it has successfully connected or the defined maxAttempts is reached:
import time
import traceback

import pyodbc

def waitDb(server, database, username, password, maxAttempts, waitBetweenAttemptsSeconds):
    """
    Returns True if the connection is successfully established before the maxAttempts number is reached.
    Conversely, returns False.
    pyodbc.connect has a built-in timeout. Use a waitBetweenAttemptsSeconds greater than zero to add a delay on top of this timeout.
    """
    for attemptNumber in range(maxAttempts):
        cnxn = None
        try:
            cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
            cursor = cnxn.cursor()
        except Exception:
            print(traceback.format_exc())
        finally:
            if cnxn:
                print("The DB is up and running: ")
                return True
            else:
                print("DB not running yet on attempt number " + str(attemptNumber))
                time.sleep(waitBetweenAttemptsSeconds)
    print("Max attempts waiting for DB to come online exceeded")
    return False
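A minimal usage sketch, assuming the compose-file from the question (the server string and credentials are illustrative):
# connect via the port published by the compose-file, retrying up to 10 times
if waitDb("localhost,1433", "master", "sa", "password", maxAttempts=10, waitBetweenAttemptsSeconds=2):
    print("DB is ready, run the integration tests")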
I wrote a minimal example here: https://github.com/claudiumocanu/flow-pytest-docker-compose-wait-mssql.
I included the three actions that can be executed independently, but you can jump to the last step specifically for what you asked:
1. Connected from python to the mssql launched by the compose-file:
For me it was quite annoying to find and install the appropriate ODBC Driver and its dependencies - ODBC Driver 17 for SQL Server worked the best for me on Ubuntu 18.
To perform only this step, docker-compose up the docker-compose.yml in my example, then run example-connect.py.
2. Created a function that attempts to connect to the DB with a maxAttemptsNumber and a delay between retries:
Just run example-waitDb.py. You can play with the maxAttempts and the delayBetweenAttempts values, then bring up the database at random times, to test it.
3. Put everything together in the test_db.py test suite:
- the waitDb function described above
- the same wrapper and annotations that you provided in your example to spin up the resources defined in the compose-file
- a dummy integration test that will not be executed before waitDb returns (if you want to block these tests completely, you can throw instead of returning False from the waitDb function)
PS: Keep using env vars/a vault etc. rather than storing real passwords like I did for the dummy example.
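For example, a minimal sketch of reading the password from an environment variable instead (the variable name is hypothetical):
import os

# read the SA password from the environment rather than hard-coding it
password = os.environ["MSSQL_SA_PASSWORD"]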

drone ci publish generated latex pdf

Actually I am using Travis, but I want to change to Drone.
For all TeX documents I'm using a small Makefile with a container to generate my PDF file and deploy it on my repository.
But since I'm using Gitea, I want to set up my integration pipeline with Drone; however, I don't know how to configure the .drone.yml to deploy my PDF file as a release on every tag.
Currently I'm using the following .drone.yml, and I am happy to say that the build process works fine at the moment.
clone:
  git:
    image: plugins/git
    tags: true

pipeline:
  pdf:
    image: volkerraschek/docker-latex:latest
    pull: true
    commands:
      - make
and this is my Makefile
# Docker Image
IMAGE := volkerraschek/docker-latex:latest

# Input tex-file and output pdf-file
FILE := index
TEX_NAME := ${FILE}.tex
PDF_NAME := ${FILE}.pdf

latexmk:
	latexmk \
		-shell-escape \
		-synctex=1 \
		-interaction=nonstopmode \
		-file-line-error \
		-pdf ${TEX_NAME}

docker-latexmk:
	docker run \
		--rm \
		--user="$(shell id -u):$(shell id -g)" \
		--net="none" \
		--volume="${PWD}:/data" ${IMAGE} \
		make latexmk
Which tags and conditions are missing in my .drone.yml to deploy my index.pdf as a release in Gitea when I push a new Git tag?
Volker
I have this setup on my gitea / drone pair. This is a MWE of my .drone.yml:
pipeline:
  build:
    image: tianon/latex
    commands:
      - pdflatex <filename.tex>
  gitea_release:
    image: plugins/gitea-release
    base_url: <gitea domain>
    secrets: [gitea_token]
    files: <filename.pdf>
    when:
      event: tag
So rather than setting up the Docker build in the Makefile, we add a step using a Docker image with LaTeX, compile the PDF, and use a pipeline step to release it.
You'll also have to set your Drone repo to trigger builds on tags and set a Gitea API token to use. To set the API token, you can use the command line interface:
$ drone secret add <org/repo> --name gitea_token --value <token value> --image plugins/gitea-release
You can set up the drone repo to trigger builds in the repository settings in the web UI.
Note that you'll also likely have to allow *.pdf attachments in your gitea settings, as they are disallowed by default. In your gitea app.ini add this to the attachment section:
[attachment]
ENABLED = true
PATH = /data/gitea/attachments
MAX_SIZE = 10
ALLOWED_TYPES = */*
In addition to Gabe's answer, if you are using an NGINX reverse proxy, you might also have to allow larger file uploads in your nginx.conf. (This applies to all file types, not just .pdf)
server {
    [ ... ]
    location / {
        client_max_body_size 10M; # add this line
        proxy_pass http://gitea:3000;
    }
}
This fixed the problem for me.

AWS: Why does my RDS instance keep starting after I turned it off?

I have an RDS database instance on AWS and have turned it off for now. However, every few days it starts up on its own. I don't have any other services running right now.
There is this event in my RDS log:
"DB instance is being started due to it exceeding the maximum allowed time being stopped."
Why is there a limit to how long my RDS instance can be stopped? I just want to put my project on hold for a few weeks, but AWS won't let me turn off my DB? It costs $12.50/mo to have it sit idle, so I don't want to pay for this, and I certainly don't want AWS starting an instance for me that does not get used.
Please help!
That's a limitation of this new feature.
You can stop an instance for up to 7 days at a time. After 7 days, it will be automatically started. For more details on stopping and starting a database instance, please refer to Stopping and Starting a DB Instance in the Amazon RDS User Guide.
You can set up a cron job to stop the instance again after 7 days, as sketched below. You can also change to a smaller instance size to save money.
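A minimal sketch of such a cron entry, assuming the AWS CLI is configured on the host and your instance identifier is mydb (hypothetical):
# stop the instance every 6 days at 03:00, before the 7-day auto-start kicks in
0 3 */6 * * aws rds stop-db-instance --db-instance-identifier mydb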
Another option is the upcoming Aurora Serverless, which stops and starts for you automatically. It might be more expensive than a dedicated instance when running 24/7.
Finally, there is always Heroku which gives you a free database instance that starts and stops itself with some limitations.
You can also try saving the following CloudFormation template as KeepDbStopped.yml and then deploying it with this command:
aws cloudformation deploy --template-file KeepDbStopped.yml --stack-name stop-db --capabilities CAPABILITY_IAM --parameter-overrides DB=arn:aws:rds:us-east-1:XXX:db:XXX
Make sure to change arn:aws:rds:us-east-1:XXX:db:XXX to your RDS ARN.
Description: Automatically stop RDS instance every time it turns on due to exceeding the maximum allowed time being stopped
Parameters:
  DB:
    Description: ARN of database that needs to be stopped
    Type: String
    AllowedPattern: arn:aws:rds:[a-z0-9\-]+:[0-9]+:db:[^:]*
Resources:
  DatabaseStopperFunction:
    Type: AWS::Lambda::Function
    Properties:
      Role: !GetAtt DatabaseStopperRole.Arn
      Runtime: python3.6
      Handler: index.handler
      Timeout: 20
      Code:
        ZipFile:
          Fn::Sub: |
            import boto3
            import time

            def handler(event, context):
                print("got", event)
                db = event["detail"]["SourceArn"]
                id = event["detail"]["SourceIdentifier"]
                message = event["detail"]["Message"]
                region = event["region"]
                rds = boto3.client("rds", region_name=region)
                if message == "DB instance is being started due to it exceeding the maximum allowed time being stopped.":
                    print("database turned on automatically, setting last seen tag...")
                    last_seen = int(time.time())
                    rds.add_tags_to_resource(ResourceName=db, Tags=[{"Key": "DbStopperLastSeen", "Value": str(last_seen)}])
                elif message == "DB instance started":
                    print("database started (and sort of available?)")
                    last_seen = 0
                    for t in rds.list_tags_for_resource(ResourceName=db)["TagList"]:
                        if t["Key"] == "DbStopperLastSeen":
                            last_seen = int(t["Value"])
                    if time.time() < last_seen + (60 * 20):
                        print("database was automatically started in the last 20 minutes, turning off...")
                        time.sleep(10)  # even waiting for the "started" event is not enough, so add some wait
                        rds.stop_db_instance(DBInstanceIdentifier=id)
                        print("success! removing auto-start tag...")
                        rds.add_tags_to_resource(ResourceName=db, Tags=[{"Key": "DbStopperLastSeen", "Value": "0"}])
                    else:
                        print("ignoring manual database start")
                else:
                    print("error: unknown database event!")
  DatabaseStopperRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Action:
              - sts:AssumeRole
            Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: Notify
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Action:
                  - rds:StopDBInstance
                Effect: Allow
                Resource: !Ref DB
              - Action:
                  - rds:AddTagsToResource
                  - rds:ListTagsForResource
                  - rds:RemoveTagsFromResource
                Effect: Allow
                Resource: !Ref DB
                Condition:
                  ForAllValues:StringEquals:
                    aws:TagKeys:
                      - DbStopperLastSeen
  DatabaseStopperPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt DatabaseStopperFunction.Arn
      Principal: events.amazonaws.com
      SourceArn: !GetAtt DatabaseStopperRule.Arn
  DatabaseStopperRule:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.rds
        detail-type:
          - "RDS DB Instance Event"
        resources:
          - !Ref DB
        detail:
          Message:
            - "DB instance is being started due to it exceeding the maximum allowed time being stopped."
            - "DB instance started"
      Targets:
        - Arn: !GetAtt DatabaseStopperFunction.Arn
          Id: DatabaseStopperLambda
It has worked for at least one person. If you have issues please report here.

GeoNetwork database with LDAP connection error

I'm trying to connect my LDAP with the GeoNetwork database, but every time I log in it doesn't show the administrator button. When I check the database, it is empty. I am using geOrchestra 13.09 in a localhost environment; GeoServer and mapfishapp are running well and log in without a problem.
My config-security.properties is:
# Core security properties
logout.success.url=/index.html
passwordSalt=secret-hash-salt=
# LDAP Connection Settings
ldap.base.provider.url=ldap://localhost:389
ldap.base.dn=dc=geobolivia,dc=gob,dc=bo
ldap.security.principal=cn=admin,dc=geobolivia,dc=gob,dc=bo
ldap.security.credentials=geobolivia
ldap.base.search.base=ou=users
ldap.base.dn.pattern=uid={0},${ldap.base.search.base}
#ldap.base.dn.pattern=mail={0},${ldap.base.search.base}
# Define if groups and profile information are imported from LDAP. If not, local database is used.
# When a new user connect first, the default profile is assigned. A user administrator can update
# privilege information.
ldap.privilege.import=true
ldap.privilege.export=true
ldap.privilege.create.nonexisting.groups=false
# Define the way to extract profiles and privileges from the LDAP
# 1. Define one attribute for the profile and one for groups in config-security-overrides.properties
# 2. Define one attribute for the privilege and define a custom pattern (use LDAPUserDetailsContextMapperWithPa$
ldap.privilege.pattern=
#ldap.privilege.pattern=CAT_(.*)_(.*)
ldap.privilege.pattern.idx.group=1
ldap.privilege.pattern.idx.profil=2
# 3. Define custom location for extracting group and role (no support for group/role combination) (use LDAPUser$
#ldap.privilege.search.group.attribute=cn
#ldap.privilege.search.group.object=ou=groups
#ldap.privilege.search.group.query=(&(objectClass=posixGroup)(memberUid={0})(cn=EL_*))
#ldap.privilege.search.group.pattern=EL_(.*)
#ldap.privilege.search.privilege.attribute=cn
#ldap.privilege.search.privilege.object=ou=groups
#ldap.privilege.search.privilege.query=(&(objectClass=posixGroup)(memberUid={0})(cn=SV_*))
#ldap.privilege.search.privilege.pattern=SV_(.*)
ldap.privilege.search.group.attribute=cn
ldap.privilege.search.group.object=ou=groups
ldap.privilege.search.group.query=(&(objectClass=posixGroup)(memberUid={1})(cn=EL_*))
ldap.privilege.search.group.pattern=EL_(.*)
ldap.privilege.search.privilege.attribute=cn
ldap.privilege.search.privilege.object=ou=groups
ldap.privilege.search.privilege.query=(&(objectClass=posixGroup)(memberUid={1})(cn=SV_ADMIN))
ldap.privilege.search.privilege.pattern=SV_(.*)
# Run LDAP sync every day at 23:30
#ldap.sync.cron=0 30 23 * * ?
ldap.sync.cron=0 * * * * ?
#ldap.sync.cron=0 0/1 * 1/1 * ? *
ldap.sync.startDelay=60000
ldap.sync.user.search.base=${ldap.base.search.base}
ldap.sync.user.search.filter=(&(objectClass=*)(mail=*#*)(givenName=*))
ldap.sync.user.search.attribute=uid
ldap.sync.group.search.base=ou=groups
ldap.sync.group.search.filter=(&(objectClass=posixGroup)(cn=EL_*))
ldap.sync.group.search.attribute=cn
ldap.sync.group.search.pattern=EL_(.*)
# CAS properties
cas.baseURL=https://localhost:8443/cas
cas.ticket.validator.url=${cas.baseURL}
cas.login.url=${cas.baseURL}/login
cas.logout.url=${cas.baseURL}/logout?url=${geonetwork.https.url}/
<import resource="config-security-cas.xml"/>
<import resource="config-security-cas-ldap.xml"/>
# either the hardcoded url to the server
# or if has the form it will be replaced with
# the server details from the server configuration
geonetwork.https.url=https://localhost/geonetwork-private/
#geonetwork.https.url=https://geobolivia.gob.bo:443
#geonetwork.https.url=https://localhost:443
The geonetwork.log shows these results:
2014-03-11 13:41:00,004 DEBUG [geonetwork.ldap] - LDAPSynchronizerJob starting ...
2014-03-11 13:41:00,006 DEBUG [org.springframework.ldap.core.support.AbstractContextSource] - Got Ldap context on server 'ldap://localhost:389/dc=geobolivia,dc=gob,dc=bo'
2014-03-11 13:41:00,008 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Returning cached instance of singleton bean 'resourceManager'
2014-03-11 13:41:00,026 DEBUG [geonetwork.ldap] - LDAPSynchronizerJob done.
2014-03-11 13:41:26,429 INFO [geonetwork.lucene] - Done running PurgeExpiredSearchersTask. 0 versions still cached.
2014-03-11 13:41:56,430 INFO [geonetwork.lucene] - Done running PurgeExpiredSearchersTask. 0 versions still cached.
and the error that appears in the geonetwork.log is
2014-03-11 13:44:06,426 INFO [jeeves.service] - Dispatching : xml.search.keywords
2014-03-11 13:44:06,427 ERROR [jeeves.service] - Exception when executing service
2014-03-11 13:44:06,427 ERROR [jeeves.service] - (C) Exc : java.lang.IllegalArgumentException: The thesaurus external.theme.inspire-service-taxonomy does not exist, there for the query cannot be excuted: 'Query [query=SELECT DISTINCT id,uppc,lowc,broader,spa_prefLabel,spa_note FROM {id} rdf:type {skos:Concept},[{id} gml:BoundedBy {} gml:upperCorner {uppc}],[{id} gml:BoundedBy {} gml:lowerCorner {lowc}],[{id} skos:broader {broader}],[{id} skos:prefLabel {spa_prefLabel} WHERE lang(spa_prefLabel) LIKE "es" IGNORE CASE],[{id} skos:scopeNote {spa_note} WHERE lang(spa_note) LIKE "es" IGNORE CASE] WHERE (spa_prefLabel LIKE "***" IGNORE CASE OR id LIKE "*") LIMIT 35 USING NAMESPACE skos=<http://www.w3.org/2004/02/skos/core#>,gml=<http://www.opengis.net/gml#>, interpreter=KeywordResultInterpreter]'
The version of GeoNetwork currently used in geOrchestra does not show the "administration" button on its first page. You have to fire a search, then in the "other actions" menu on the top right you should be able to get to the administration interface. We know that it is not very intuitive, but it should change in the coming months (we recently planned an upgrade of GeoNetwork before the end of the year).
Did you solve it? I think the issue is in your config-security.properties, in this line: ldap.base.dn.pattern=uid={0},${ldap.base.search.base}
The {0} placeholder is replaced with the username typed in the sign-in screen of GeoNetwork (e.g. a login of jsmith becomes uid=jsmith,ou=users,dc=geobolivia,dc=gob,dc=bo), so make sure that pattern matches how your users are stored in LDAP.
