In Python, how do I construct a URL to test if my SQL Server instance is running? - sql-server

I'm using Python 3.8 with the pytest-docker-compose plugin -- https://pypi.org/project/pytest-docker-compose/ . Does anyone know how to write a URL that would eventually tell me if my SQL Server is running?
I have this docker-compose.yml file:
version: "3.2"
services:
  sql-server-db:
    build: ./
    container_name: sql-server-db
    image: microsoft/mssql-server-linux:2017-latest
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "password"
      ACCEPT_EULA: "Y"
but I don't know what URL to pass to my Retry object to test that the server is running. This fails ...
import pytest
import requests
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter
...

@pytest.fixture(scope="function")
def wait_for_api(function_scoped_container_getter):
    """Wait for sql server to become responsive"""
    request_session = requests.Session()
    retries = Retry(total=5,
                    backoff_factor=0.1,
                    status_forcelist=[500, 502, 503, 504])
    request_session.mount('http://', HTTPAdapter(max_retries=retries))
    service = function_scoped_container_getter.get("sql-server-db").network_info[0]
    api_url = "http://%s:%s/" % (service.hostname, service.host_port)
    assert request_session.get(api_url)
    return request_session, api_url
with this exception:
raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=1433): Max retries exceeded with url: / (Caused by ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))
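Note that this approach can't work as written: SQL Server speaks the TDS protocol on port 1433, not HTTP, so an HTTP GET against it will always fail no matter how many retries you configure. What you want instead is either a raw TCP reachability check or an actual database login. A minimal sketch of the TCP variant, assuming the host/port from the compose file above:

import socket
import time

def wait_for_port(host="localhost", port=1433, attempts=10, delay=1.0):
    """Return True once a TCP connection to host:port succeeds, False if all attempts fail."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # something is listening on the port
        except OSError:
            time.sleep(delay)  # port not open yet; wait and retry
    return False

A TCP check only proves the port is open; the server may still be initializing internally, which is why the pyodbc-based answers below go one step further and perform a real login.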

You could use something like this, and it would print whether it was connected. (Note that this snippet uses the MySQL connector; for SQL Server the equivalent check would go through a driver such as pyodbc, as in the next answer. The connection parameters below are illustrative.)

import mysql.connector

# connection parameters are illustrative; fill in your own host/credentials
connection = mysql.connector.connect(host="localhost", user="user", password="password")
if connection.is_connected():
    db_Info = connection.get_server_info()
    print("Connected to MySQL Server version ", db_Info)
    cursor = connection.cursor()
    cursor.execute("select database();")
    record = cursor.fetchone()
    print("You're connected to database: ", record)

Here is a sample function that will retry connecting to the DB and won't return until it has successfully connected or the defined maxAttempts is reached:

import time
import traceback

import pyodbc

def waitDb(server, database, username, password, maxAttempts, waitBetweenAttemptsSeconds):
    """
    Returns True if the connection is successfully established before the maxAttempts number is reached.
    Conversely, returns False.
    pyodbc.connect has a built-in timeout. Use a waitBetweenAttemptsSeconds greater than zero to add a delay on top of this timeout.
    """
    for attemptNumber in range(maxAttempts):
        cnxn = None
        try:
            cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
            cursor = cnxn.cursor()
        except Exception as e:
            print(traceback.format_exc())
        finally:
            if cnxn:
                print("The DB is up and running: ")
                return True
            else:
                print("DB not running yet on attempt number " + str(attemptNumber))
                time.sleep(waitBetweenAttemptsSeconds)
    print("Max attempts waiting for DB to come online exceeded")
    return False
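For example, wired to the compose file from the question (the values are illustrative):

# host, credentials and attempt counts are illustrative; pyodbc accepts "host,port" in SERVER
if waitDb("localhost,1433", "master", "sa", "password", 30, 1):
    print("SQL Server is ready, tests can run")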
I wrote a minimal example here: https://github.com/claudiumocanu/flow-pytest-docker-compose-wait-mssql.
I included the three actions that can be executed independently, but you can jump to the last step specifically for what you asked:
1. Connected from Python to the mssql instance launched by the compose-file:
For me it was quite annoying to find and install the appropriate ODBC driver and its dependencies - ODBC Driver 17 for SQL Server worked the best for me on Ubuntu 18.
To perform only this step, docker-compose up the docker-compose.yml in my example, then run example-connect.py.
2. Created a function that attempts to connect to the DB with a maximum number of attempts and a delay between retries:
Just run example-waitDb.py. You can play with the maxAttempts and delayBetweenAttempts values, then bring up the database at a random moment, to test it.
3. Put everything together in the test_db.py test suite. It uses:
- the waitDb function described above;
- the same wrapper and annotations that you provided in your example, to spin up the resources defined in the compose-file;
- a dummy integration test that will not be executed before waitDb returns (if you want to block these tests completely, you can throw instead of returning False from the waitDb function).
PS: Keep using ENVs/a vault etc. rather than storing real passwords like I did for the dummy example; see the sketch below.
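A minimal sketch of reading the credentials from the environment instead of hard-coding them (the variable names are illustrative):

import os

# MSSQL_SERVER and MSSQL_SA_PASSWORD are hypothetical variable names; pick your own
server = os.environ.get("MSSQL_SERVER", "localhost,1433")
password = os.environ["MSSQL_SA_PASSWORD"]  # raises KeyError if the variable is unset
waitDb(server, "master", "sa", password, 30, 1)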

Related

Why does dev_appserver.py exit without throwing errors?

I am trying to run a simple python 2 server code with AppEngine and Datastore. When I run dev_appserver.py app.yaml, the program immediately exits (without an error) after the following outputs:
/home/username/google-cloud-sdk/lib/third_party/google/auth/crypt/_cryptography_rsa.py:22: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in the next release.
import cryptography.exceptions
INFO 2022-12-20 11:59:41,931 devappserver2.py:239] Using Cloud Datastore Emulator.
We are gradually rolling out the emulator as the default datastore implementation of dev_appserver.
If broken, you can temporarily disable it by --support_datastore_emulator=False
Read the documentation: https://cloud.google.com/appengine/docs/standard/python/tools/migrate-cloud-datastore-emulator
INFO 2022-12-20 11:59:41,936 devappserver2.py:316] Skipping SDK update check.
INFO 2022-12-20 11:59:42,332 datastore_emulator.py:156] Starting Cloud Datastore emulator at: http://localhost:22325
INFO 2022-12-20 11:59:42,981 datastore_emulator.py:162] Cloud Datastore emulator responded after 0.648865 seconds
INFO 2022-12-20 11:59:42,982 <string>:384] Starting API server at: http://localhost:38915
Ideally, it should have continued by running the server on port 8000. Also, it works with the option --support_datastore_emulator=False.
This is the code:
import webapp2
import datetime
from google.appengine.ext import db, deferred, ndb
import uuid
from base64 import b64decode, b64encode
import logging

class Email(ndb.Model):
    email = ndb.StringProperty()

class DB(webapp2.RequestHandler):
    def post(self):
        try:
            mail = Email()
            mail.email = 'Test'
            mail.put()
        except Exception as e:
            print(e)
            self.response.headers['Content-Type'] = 'application/json; charset=utf-8'
            return self.response.out.write(e)

    def get(self):
        try:
            e1 = Email.query()
            logging.critical('count is: %s' % e1.count)
            e1k = e1.get(keys_only=True)
            logging.critical('count 2 is: %s' % e1k.count)
            e1 = e1.get()
            key = unicode(e1.key.urlsafe())
            logging.critical('This is a critical message: %s' % key)
            logging.critical('This is a critical message: %s' % e1k)
            e2 = ndb.Key(urlsafe=key).get()
            self.response.headers['Content-Type'] = 'application/json; charset=utf-8'
            return self.response.out.write(str(e2.email))
        except Exception as e:
            print(e)
            self.response.headers['Content-Type'] = 'application/json; charset=utf-8'
            return self.response.out.write(e)

app = webapp2.WSGIApplication([
    ('/', DB)
], debug=True)
How can I find the reason this is not working?
Edit: I figured out that the dev server works and writes to a datastore even with the support_datastore_emulator=False option. I am confused by this option. I also don't know where the database is stored currently.
It should be count() and not count, i.e.:
logging.critical('count is: %s' % e1.count())
A get returns only 1 record, so it doesn't make sense to do a count after calling a get. Besides, the count operation is a method of the query instance, not of the results. This means the following code is incorrect:
e1k = e1.get(keys_only=True)
logging.critical('count 2 is: %s' % e1k.count)
You should replace it with:
e1k = e1.fetch(keys_only=True)  # fetch gives an array
logging.critical('count 2 is: %s' % len(e1k))
When you first run your App, it will execute the GET part of your code, and because this is the first time your App is being run, you have no record in Datastore. This means e1 = e1.get() will return None and key = unicode(e1.key.urlsafe()) will lead to an error.
You have to modify your code to first check that you have a value for e1 or e2 before you attempt to use their keys.
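A minimal sketch of such a guard, reusing the handler's own names:

e1 = Email.query().get()
if e1 is None:
    # nothing stored yet, so there is no key to serialize
    self.response.headers['Content-Type'] = 'application/json; charset=utf-8'
    return self.response.out.write('no records yet')
key = unicode(e1.key.urlsafe())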
I ran your code with dev_appserver.py and it displayed these errors for me in the logs. But I ran it with an older version of the gcloud SDK (Google Cloud SDK 367.0.0). I don't know why yours exited without displaying any errors. Maybe it's due to the version?
Separately, I don't know why you're importing db. Google moved on to ndb long ago, and you don't use db in your code.
The default datastore (for the older generation runtimes like Python 2) is in .config (hidden folder) > gcloud > emulators > datastore
You can also specify your own location by using the flag --datastore_path. See the documentation.
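For example (the path is illustrative):

dev_appserver.py app.yaml --datastore_path=/tmp/my_local_datastore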

Phpstan doctrine database connection error

I'm using doctrine in a project (not symfony). In this project I also use phpstan; I installed both phpstan/phpstan-doctrine and phpstan/extension-installer.
My phpstan.neon is like this:
parameters:
  level: 8
  paths:
    - src/
  doctrine:
    objectManagerLoader: tests/object-manager.php
tests/object-manager.php returns the result of a call to a function that returns the entity manager.
Here is the code that creates the entity manager:
$database_url = $_ENV['DATABASE_URL'];
$isDevMode = $this->isDevMode();
$proxyDir = null;
$cache = null;
$useSimpleAnnotationReader = false;
$config = Setup::createAnnotationMetadataConfiguration(
    [$this->getProjectDirectory() . '/src'],
    $isDevMode,
    $proxyDir,
    $cache,
    $useSimpleAnnotationReader
);
// database configuration parameters
$conn = [
    'url' => $database_url,
];
// obtaining the entity manager
$entityManager = EntityManager::create($conn, $config);
When I run vendor/bin/phpstan analyze I get this error:
Internal error: An exception occurred in the driver: SQLSTATE[08006] [7] could not translate host name "postgres_db" to address: nodename nor servname provided, or not known
This appears because I'm using docker and my database url is postgres://user:password@postgres_db/database. postgres_db is the name of my database container, so the hostname is only known inside the docker network.
When I run phpstan inside the container, I do not get the error.
So is there a way to run phpstan outside docker? Because I'm pretty sure that when I push my code, the GitHub workflow will fail because of this.
Does phpstan need to try to reach the database?
I opened an issue on the phpstan-doctrine GitHub and got an answer from @jlherren that explained:
The problem is that Doctrine needs to know what version of the DB server it is working with in order to instantiate the correct AbstractPlatform implementation, of which there are several available for the same DB vendor (e.g. PostgreSQL94Platform or PostgreSQL100Platform for postgres, and similarly for other DB drivers). To auto-detect this information, it will simply connect to the DB and query the version.
I just changed my database url from:
DATABASE_URL=postgres://user:password@database_ip/database
to:
DATABASE_URL=postgres://user:password@database_ip/database?serverVersion=14.2
With serverVersion specified explicitly, Doctrine no longer has to connect to the server to detect the platform, so phpstan can run without a reachable database.

OperationalError: 250003: Failed to execute request: 'SSLSocket' object has no attribute 'connection' < snowflake.connector trouble

I'm running my script in Spyder/conda... I was forced to update one thing, which led to another. Finally, I am stuck trying to fix this error, which is pretty important, because I cannot get my data or fill tables with the results of my analysis without connecting to Snowflake.
To verify my connection I am now simply trying to run their boilerplate connection script:
#!/usr/bin/env python
import snowflake.connector

# Gets the version
ctx = snowflake.connector.connect(
    user='email',
    password='pw',
    account='acct'
)
cs = ctx.cursor()
try:
    cs.execute("SELECT current_version()")
    one_row = cs.fetchone()
    print(one_row[0])
finally:
    cs.close()
ctx.close()
I keep getting the "OperationalError: 250003: Failed to execute request: 'SSLSocket' object has no attribute 'connection'" error, and am not sure what to do after reinstalling sqlalchemy, pyarrow, pandas, snowflake-connector-python
Any help/suggestions welcome! I'm out of ideas today.

AWS: Why does my RDS instance keep starting after I turned it off?

I have an RDS database instance on AWS and have turned it off for now. However, every few days it starts up on its own. I don't have any other services running right now.
There is this event in my RDS log:
"DB instance is being started due to it exceeding the maximum allowed time being stopped."
Why is there a limit to how long my RDS instance can be stopped? I just want to put my project on hold for a few weeks, but AWS won't let me turn off my DB? It costs $12.50/mo to have it sit idle, so I don't want to pay for this, and I certainly don't want AWS starting an instance for me that does not get used.
Please help!
That's a limitation of this new feature.
You can stop an instance for up to 7 days at a time. After 7 days, it will be automatically started. For more details on stopping and starting a database instance, please refer to Stopping and Starting a DB Instance in the Amazon RDS User Guide.
You can set up a cron job to stop the instance again after 7 days (a sketch follows below). You can also change to a smaller instance size to save money.
Another option is the upcoming Aurora Serverless which stops and starts for you automatically. It might be more expensive than a dedicated instance when running 24/7.
Finally, there is always Heroku which gives you a free database instance that starts and stops itself with some limitations.
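A minimal sketch of the scheduled-stop approach with boto3 (the instance identifier and region are illustrative):

import boto3

# run this on a schedule (cron, EventBridge, ...) to re-stop the instance after AWS restarts it
rds = boto3.client("rds", region_name="us-east-1")
status = rds.describe_db_instances(DBInstanceIdentifier="mydb")["DBInstances"][0]["DBInstanceStatus"]
if status == "available":
    rds.stop_db_instance(DBInstanceIdentifier="mydb")  # "mydb" is a placeholder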
You can also try saving the following CloudFormation template as KeepDbStopped.yml and then deploying it with this command:
aws cloudformation deploy --template-file KeepDbStopped.yml --stack-name stop-db --capabilities CAPABILITY_IAM --parameter-overrides DB=arn:aws:rds:us-east-1:XXX:db:XXX
Make sure to change arn:aws:rds:us-east-1:XXX:db:XXX to your RDS ARN.
Description: Automatically stop RDS instance every time it turns on due to exceeding the maximum allowed time being stopped
Parameters:
  DB:
    Description: ARN of database that needs to be stopped
    Type: String
    AllowedPattern: arn:aws:rds:[a-z0-9\-]+:[0-9]+:db:[^:]*
Resources:
  DatabaseStopperFunction:
    Type: AWS::Lambda::Function
    Properties:
      Role: !GetAtt DatabaseStopperRole.Arn
      Runtime: python3.6
      Handler: index.handler
      Timeout: 20
      Code:
        ZipFile:
          Fn::Sub: |
            import boto3
            import time

            def handler(event, context):
                print("got", event)
                db = event["detail"]["SourceArn"]
                id = event["detail"]["SourceIdentifier"]
                message = event["detail"]["Message"]
                region = event["region"]
                rds = boto3.client("rds", region_name=region)
                if message == "DB instance is being started due to it exceeding the maximum allowed time being stopped.":
                    print("database turned on automatically, setting last seen tag...")
                    last_seen = int(time.time())
                    rds.add_tags_to_resource(ResourceName=db, Tags=[{"Key": "DbStopperLastSeen", "Value": str(last_seen)}])
                elif message == "DB instance started":
                    print("database started (and sort of available?)")
                    last_seen = 0
                    for t in rds.list_tags_for_resource(ResourceName=db)["TagList"]:
                        if t["Key"] == "DbStopperLastSeen":
                            last_seen = int(t["Value"])
                    if time.time() < last_seen + (60 * 20):
                        print("database was automatically started in the last 20 minutes, turning off...")
                        time.sleep(10)  # even waiting for the "started" event is not enough, so add some wait
                        rds.stop_db_instance(DBInstanceIdentifier=id)
                        print("success! removing auto-start tag...")
                        rds.add_tags_to_resource(ResourceName=db, Tags=[{"Key": "DbStopperLastSeen", "Value": "0"}])
                    else:
                        print("ignoring manual database start")
                else:
                    print("error: unknown database event!")
  DatabaseStopperRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Action:
              - sts:AssumeRole
            Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: Notify
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Action:
                  - rds:StopDBInstance
                Effect: Allow
                Resource: !Ref DB
              - Action:
                  - rds:AddTagsToResource
                  - rds:ListTagsForResource
                  - rds:RemoveTagsFromResource
                Effect: Allow
                Resource: !Ref DB
                Condition:
                  ForAllValues:StringEquals:
                    aws:TagKeys:
                      - DbStopperLastSeen
  DatabaseStopperPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt DatabaseStopperFunction.Arn
      Principal: events.amazonaws.com
      SourceArn: !GetAtt DatabaseStopperRule.Arn
  DatabaseStopperRule:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.rds
        detail-type:
          - "RDS DB Instance Event"
        resources:
          - !Ref DB
        detail:
          Message:
            - "DB instance is being started due to it exceeding the maximum allowed time being stopped."
            - "DB instance started"
      Targets:
        - Arn: !GetAtt DatabaseStopperFunction.Arn
          Id: DatabaseStopperLambda
It has worked for at least one person. If you have issues, please report them here.

Error in Jenkins using SQL Server Driver

I'm trying to run a Groovy script to connect to Microsoft SQL Server within Jenkins and insert new data. I want to use the SQL Server driver, and I placed the driver in Jenkins\war\WEB-INF\lib. I get an error when I try to run the following code in the build step Execute Groovy Script:
import groovy.sql.Sql
import com.microsoft.sqlserver.jdbc.SQLServerDriver

class Connection {
    def sqlConnection
    def route = "xxxx"
    def user = "xxxx"
    def password = "xxxxx"
    def driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"

    Connection(db) {
        this.route += db.toString()
        this.sqlConnection = Sql.newInstance(route, user, password, driver)
    }

    static main(args) {
        Connection con = new Connection("nameDataBase")
    }
}
The error is:
1 error org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
E:\Jenkins\workspace\pruebaDB\hudson1035758401251924782.groovy: 2:
unable to resolve class com.microsoft.sqlserver.jdbc.SQLServerDriver @ line 2, column 1.
import com.microsoft.sqlserver.jdbc.SQLServerDriver
The most likely cause of such an exception is that the necessary class is not yet on the classpath when the script is executed.
During which build step is the script executed?
If my assumptions are right and you can't move the script execution to another build step (one where all the necessary classes/libs are already on the classpath), try to use a class loader.
Here are a few links which should help you with this:
class loading fun
class loading in groovy
