OpenShift action hook can't access environment variables - database

For my application on Openshift, I am trying to write a pre_build script that accesses the database. The goal is to have migration scripts between database versions that are executed when the code is deployed. The script would compare the current database version with the version needed by the application code and then run the correct script to migrate the database.
Now the problem is that the pre_build script is apparently executed on Jenkins and not on the destination cartridge, and therefore the environment variables with the database connection arguments are not available.
This is the pre_build script that I've written so far:
#!/usr/bin/env python
import os
import sys

import psycopg2

print "*** Database migration script ***"

# Get the version the application code needs.
homedir = os.environ["OPENSHIFT_HOMEDIR"]
migration_scripts_dir = homedir + "app-root/runtime/repo/.openshift/action_hooks/migration-scripts/"
f = open(migration_scripts_dir + "db-version.txt")
goal = int(f.read())
f.close()
print "I need database version " + str(goal)

# Get database connection details.
# TODO: find a solution for not hard-coding the connection details here!!!
# Maybe by using Jenkins environment variables like OPENSHIFT_APP_NAME and JOB_NAME.
db_host = "..."
db_port = "..."
db_user = "..."
db_password = "..."
db_name = "..."

try:
    conn = psycopg2.connect("dbname='" + db_name + "' user='" + db_user + "' host='" + db_host +
                            "' password='" + db_password + "' port='" + db_port + "'")
    print "Successfully connected to the database"
except psycopg2.Error:
    print "I am unable to connect to the database"
    sys.exit(1)  # without a connection there is nothing more to do

cur = conn.cursor()

def get_current_version(cur):
    # Read the stored version; create and initialise the table on first run.
    try:
        cur.execute("""SELECT * FROM db_version""")
    except psycopg2.Error:
        conn.set_isolation_level(0)
        cur.execute("""CREATE TABLE db_version (db_version bigint NOT NULL)""")
        cur.execute("""INSERT INTO db_version VALUES (0)""")
        cur.execute("""SELECT * FROM db_version""")
    current_version = cur.fetchone()[0]
    print "The current database version is " + str(current_version)
    return current_version

def recursive_execute_migration(cursor):
    # Move one version at a time towards the goal, then recurse.
    current_version = get_current_version(cursor)
    if current_version == goal:
        print "Database is on the correct version"
        return
    elif current_version < goal:
        sql_filename = "upgrade" + str(current_version) + "-" + str(current_version + 1) + ".sql"
        print "Upgrading database with " + sql_filename
    else:
        sql_filename = "downgrade" + str(current_version) + "-" + str(current_version - 1) + ".sql"
        print "Downgrading database with " + sql_filename
    cursor.execute(open(migration_scripts_dir + sql_filename, "r").read())
    recursive_execute_migration(cursor)

conn.set_isolation_level(0)
recursive_execute_migration(cur)
cur.close()
conn.close()
Is there another way of doing automatic database migrations?
Thanks for your help.
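Update: one idea that might avoid hard-coding the connection details entirely is to run the migration from the deploy action hook instead of pre_build: unlike pre_build, deploy executes on the application gear itself, where the cartridge environment variables are set. A minimal sketch, assuming OpenShift v2 with a PostgreSQL cartridge (the OPENSHIFT_POSTGRESQL_DB_* names come from that cartridge):
import os

# Sketch for .openshift/action_hooks/deploy: on the gear itself the PostgreSQL
# cartridge exports its connection details, so nothing needs to be hard-coded.
db_host = os.environ["OPENSHIFT_POSTGRESQL_DB_HOST"]
db_port = os.environ["OPENSHIFT_POSTGRESQL_DB_PORT"]
db_user = os.environ["OPENSHIFT_POSTGRESQL_DB_USERNAME"]
db_password = os.environ["OPENSHIFT_POSTGRESQL_DB_PASSWORD"]
db_name = os.environ["OPENSHIFT_APP_NAME"]  # the default database is named after the app
The rest of the migration logic above could then stay unchanged.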

Related

Can't restore a Flink job that uses the Table API and Kafka connector with a savepoint

I cancelled a Flink job with a savepoint, then tried to restore the job from that savepoint (using the same jar file), but it says it cannot map the savepoint state. Since I was using the same jar file, shouldn't the execution plan be the same? Why would it have a new operator ID if I didn't change the code? I wonder if it's possible at all to restore from a savepoint for a job that uses the Kafka connector and the Table API.
Related errors:
used by: java.util.concurrent.CompletionException: java.lang.IllegalStateException: Failed to rollback to checkpoint/savepoint file:/root/flink-savepoints/savepoint-5f285c-c2749410db07. Cannot map checkpoint/savepoint state for operator dd5fc1f28f42d777f818e2e8ea18c331 to the new program, because the operator is not available in the new program. If you want to allow to skip this, you can set the --allowNonRestoredState option on the CLI.
used by: java.lang.IllegalStateException: Failed to rollback to checkpoint/savepoint file:/root/flink-savepoints/savepoint-5f285c-c2749410db07. Cannot map checkpoint/savepoint state for operator dd5fc1f28f42d777f818e2e8ea18c331 to the new program, because the operator is not available in the new program. If you want to allow to skip this, you can set the --allowNonRestoredState option on the CLI.
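For reference, the option named in the error message is passed when resubmitting the job, along the lines of (savepoint path taken from the error; the jar name is a placeholder):
flink run -s file:/root/flink-savepoints/savepoint-5f285c-c2749410db07 --allowNonRestoredState my-flink-job.jar
Note that this drops the state that cannot be mapped rather than restoring it, so it sidesteps the mapping problem instead of solving it.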
My Code:
public final class FlinkJob {
    public static void main(String[] args) {
        final String JOB_NAME = "FlinkJob";

        final EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
        final TableEnvironment tEnv = TableEnvironment.create(settings);
        tEnv.getConfig().set("pipeline.name", JOB_NAME);
        tEnv.getConfig().setLocalTimeZone(ZoneId.of("UTC"));

        tEnv.executeSql("CREATE TEMPORARY TABLE ApiLog (" +
                " `_timestamp` TIMESTAMP(3) METADATA FROM 'timestamp' VIRTUAL," +
                " `_partition` INT METADATA FROM 'partition' VIRTUAL," +
                " `_offset` BIGINT METADATA FROM 'offset' VIRTUAL," +
                " `Data` STRING," +
                " `Action` STRING," +
                " `ProduceDateTime` TIMESTAMP_LTZ(6)," +
                " `OffSet` INT" +
                ") WITH (" +
                " 'connector' = 'kafka'," +
                " 'topic' = 'api.log'," +
                " 'properties.group.id' = 'flink'," +
                " 'properties.bootstrap.servers' = '<mykafkahost...>'," +
                " 'format' = 'json'," +
                " 'json.timestamp-format.standard' = 'ISO-8601'" +
                ")");

        tEnv.executeSql("CREATE TABLE print_table (" +
                " `_timestamp` TIMESTAMP(3)," +
                " `_partition` INT," +
                " `_offset` BIGINT," +
                " `Data` STRING," +
                " `Action` STRING," +
                " `ProduceDateTime` TIMESTAMP(6)," +
                " `OffSet` INT" +
                ") WITH ('connector' = 'print')");

        tEnv.executeSql("INSERT INTO print_table" +
                " SELECT * FROM ApiLog");
    }
}

Export all Tables and Views from a database (data dump)

I understand that you cannot do a full Snowflake data dump and instead need to use the COPY command to unload data from a table into an internal (i.e. Snowflake-managed) stage.
To automate the process, I thought I would do it with Python. Do you think that is the best method?
import traceback

import pandas as pd
import snowflake.connector
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine

url = URL(
    user='??????',
    password='????????',
    account='??????-??????',
    database='SNOWFLAKE',
    role='ACCOUNTADMIN'
)

out_put_string = ""
connection = None
try:
    engine = create_engine(url)
    connection = engine.connect()

    # Get all the views from the SNOWFLAKE database
    query = '''
    show views in database SNOWFLAKE
    '''
    df = pd.read_sql(query, connection)

    # Loop over all the views
    df = df.reset_index()  # make sure indexes pair with number of rows
    for index, row in df.iterrows():
        out_put_string += "VIEW:----------" + row['schema_name'] + "." + row['name'] + "----------\n"
        df_view = pd.read_sql('select * from ' + row['schema_name'] + "." + row['name'], connection)
        df_view.to_csv("/Temp/Output_CVS/" + row['schema_name'] + "-" + row['name'] + ".csv")
        out_put_string += df_view.to_string() + "\n"
except Exception:
    print("ERROR:")
    traceback.print_exc()
finally:
    if connection is not None:
        connection.close()

# Export all the views in one file
text_file = open("/Temp/Output_CVS/AllViewsData.txt", "w")
text_file.write(out_put_string)
text_file.close()
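If the goal is a true unload rather than CSVs built in pandas, a COPY-based sketch might look like the following (stage path, table names, file format, and local directory are placeholders; COPY INTO <stage> and GET are standard Snowflake SQL):
import snowflake.connector

# Hypothetical sketch: unload one table to the user stage with COPY INTO,
# then fetch the resulting files to a local directory with GET.
conn = snowflake.connector.connect(user='??????', password='????????', account='??????-??????')
cur = conn.cursor()
cur.execute("COPY INTO @~/unload/MY_TABLE/ FROM MY_DB.MY_SCHEMA.MY_TABLE "
            "FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP) OVERWRITE = TRUE HEADER = TRUE")
cur.execute("GET @~/unload/MY_TABLE/ file:///Temp/Output_CVS/")
cur.close()
conn.close()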

Python FTP Not a Directory error

I am trying to download files from an FTP server and import the data into Django. I created a dict containing the server address, login details, remote path, file name pattern, and the local path the file should be downloaded to, and pass it to a function that does the downloading. It works fine on my system, but when I move it to the client's server it shows an error like:
" error downloading C_VAR1_31012014_1.DAT - [Errno 20] Not a directory: 'common/VARRate/C_VAR1_31012014_1.DAT"
This is what the dict looks like:
self.fileDetails = {
    'NSE FO VAR RATE FILE': ('ftp.xxx.com', username, passwd, 'common/VARRate',
                             'C_VAR1_\d{4}201[45]_\d{1}.DAT',
                             'Data/samba/Ftp/Capex10/NSECM/VAR RATE'),
}
for fileType in self.fileDetails:
    self.ftpDownloadFiles(fileType)
These details are passed to the following function:
def ftpDownloadFiles(self, fileType):
    logging.info('Started ' + str(fileType))
    try:
        ftpclient = ftplib.FTP(self.fileDetails[fileType][FDTL_SRV_POS],
                               self.fileDetails[fileType][FDTL_USR_POS],
                               self.fileDetails[fileType][FDTL_PSWD_POS],
                               timeout=120)
        #ftpclient.set_debuglevel(2)
        ftpclient.set_pasv(True)
        logging.info('Logged in to ' + self.fileDetails[fileType][FDTL_SRV_POS] +
                     time.asctime())
        logging.info('\tfor type: ' + fileType)
    except BaseException as e:
        print e
        return
    remotepath = self.fileDetails[fileType][FDTL_PATH_POS]
    #matched, unmatched, downloaded = 0
    try:
        ftpclient.cwd(remotepath)
        ftpclient.dir(filetimestamps.append)
    except BaseException as e:
        logging.info('\tchange dir error : ' + remotepath + ' ' +
                     e.__str__())
    self.walkTree(ftpclient, remotepath, fileType)
    #logging.info('\n\tMatched %d, Unmatched %d, Downloaded %d'
    #             % (matched, unmatched, downloaded))
    ftpclient.close()
From there it calls the next function, where the actual download happens:
def walkTree(self, ftpclient, remotepath, fileType):
    # process files inside remotepath; cwd already done
    # remotepath to be created if it doesnt exist locally
    copied = matched = downloaded = imported = 0
    files = ftpclient.nlst()
    localpath = self.fileDetails[fileType][FDTL_DSTPATH_POS]
    rexpCompiled = re.compile(self.fileDetails[fileType][FDTL_PATRN_POS])
    for eachFile in files:
        try:
            ftpclient.cwd(remotepath + '/' + eachFile)
            self.walkTree(ftpclient, remotepath + '/' + eachFile + '/', fileType)
        except ftplib.error_perm:  # not a folder, process the file
            # every file to be saved in same local folder as on ftp srv
            saveFolder = remotepath
            saveTo = remotepath + '/' + eachFile
            if not os.path.exists(saveFolder):
                try:
                    os.makedirs(saveFolder)
                    print "directory created"
                except BaseException as e:
                    logging.info('\tcreating %s : %s' % (saveFolder, e.__str__()))
            if not os.path.exists(saveTo):
                try:
                    ftpclient.retrbinary('RETR ' + eachFile, open(saveTo, 'wb').write)
                    #logging.info('\tdownloaded ' + saveTo)
                    downloaded += 1
                except BaseException as e:
                    logging.info('\terror downloading %s - %s' % (eachFile, e.__str__()))
                except ftplib.error_perm:
                    logging.info('\terror downloading %s - %s' % (eachFile, ftplib.error_perm))
            elif fileType == 'NSE CASH CLOSING FILE':  # spl case if file exists
                try:
                    # rename file
                    yr = int(time.strftime('%Y')) - 1
                    os.rename(saveTo, saveTo + str(yr))
                    # download it
                    ftpclient.retrbinary('RETR ' + eachFile, open(saveTo, 'wb').write)
                    downloaded += 1
                except BaseException as e:
                    logging.info('\terror rename/ download %s - %s' % (eachFile, e.__str__()))
Can anyone help me resolve this problem?
Try using os.path.join() instead of the hardcoded slashes as path separators for the local path you download to; whether / or \ is used depends on the local OS.
E.g. in your code:
saveTo = remotepath + '/' + eachFile
would become:
saveTo = os.path.join(remotepath,eachFile)
see https://docs.python.org/2/library/os.path.html
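Building on that: walkTree computes localpath from the dict but never uses it, and instead builds the local save path from the FTP server's remote path, which is what produces the "Not a directory" error on a machine where that relative path doesn't exist. A sketch of the combined fix, using the names from the question's code:
# Hypothetical fix sketch: save under the configured local destination
# (localpath) instead of reusing the remote FTP path, and let
# os.path.join pick the separator for the local OS.
saveFolder = localpath
saveTo = os.path.join(saveFolder, eachFile)
if not os.path.exists(saveFolder):
    os.makedirs(saveFolder)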

Not able to access MSSQL Analysis Services cubes

I am using the following code to access MSSQL Analysis Services cubes from Java using olap4j 1.1.0:
Class.forName("org.olap4j.driver.xmla.XmlaOlap4jDriver");
OlapConnection con = (OlapConnection)
        DriverManager.getConnection("jdbc:xmla:Server=http://mssql.com/mssql/msmdpump.dll;" +
                "Cache=org.olap4j.driver.xmla.cache.XmlaOlap4jNamedMemoryCache;" +
                "Cache.Name=MyNiftyConnection;Cache.Mode=LFU;Cache.Timeout=600;Cache.Size=100",
                "username", "password");
OlapWrapper wrapper = (OlapWrapper) con;
OlapConnection olapConnection = wrapper.unwrap(OlapConnection.class);
OlapStatement stmt = olapConnection.createStatement();
CellSet cellSet = stmt.executeOlapQuery("SELECT {" +
        " [Measures].[LoginTime_Format]," +
        "[Measures].[EngageTime_Format]," +
        "[Measures].[ChatTime_Format]," +
        "[Measures].[AverageHandleTime_Format]," +
        "[Measures].[MultipleChatTime_Format]," +
        "[Measures].[NonEngagedTime_Format]," +
        "[Measures].[TimeAvailable_Format]," +
        "[Measures].[TimeAvailableNotChatting_Format]," +
        "[Measures].[TimeNotAvailable_Format]," +
        "[Measures].[TimeNotAvailableChatting_Format]," +
        "[Measures].[AcdTimeouts]," +
        "[Measures].[AvgConcurrency]," +
        "[Measures].[OperatorUtilization]} ON 0," +
        " NON EMPTY ([DimTime].[CalenderDayHour].[CalenderDayHour], [DimAgent].[Agent]." +
        "[Agent],[DimAgent].[Agent Name].[Agent Name]) ON 1" +
        " FROM (SELECT [DimClient].[Client].&[4] ON 0 FROM" +
        " (SELECT [DimTime].[CalenderDayHour].[CalenderDayHour].&[2013010100]:" +
        "[DimTime].[CalenderDayHour].[CalenderDayHour].&[2013121216] ON 0 FROM [247OLAP]))");
When I run this code, I get the following exception at the executeOlapQuery line:
Exception in thread "main" java.lang.RuntimeException: [FATAL]:1:1: Content is not allowed in prolog.
    at org.olap4j.driver.xmla.XmlaOlap4jUtil.checkForParseError(XmlaOlap4jUtil.java:134)
    at org.olap4j.driver.xmla.XmlaOlap4jUtil.parse(XmlaOlap4jUtil.java:83)
    at org.olap4j.driver.xmla.XmlaOlap4jConnection.executeMetadataRequest(XmlaOlap4jConnection.java:884)
    at org.olap4j.driver.xmla.XmlaOlap4jDatabaseMetaData.getMetadata(XmlaOlap4jDatabaseMetaData.java:137)
    at org.olap4j.driver.xmla.XmlaOlap4jDatabaseMetaData.getMetadata(XmlaOlap4jDatabaseMetaData.java:67)
    at org.olap4j.driver.xmla.XmlaOlap4jDatabaseMetaData.getDatabaseProperties(XmlaOlap4jDatabaseMetaData.java:1044)
    at org.olap4j.driver.xmla.XmlaOlap4jConnection.makeConnectionPropertyList(XmlaOlap4jConnection.java:324)
    at org.olap4j.driver.xmla.XmlaOlap4jConnection.generateRequest(XmlaOlap4jConnection.java:1037)
    at org.olap4j.driver.xmla.XmlaOlap4jConnection.populateList(XmlaOlap4jConnection.java:849)
    at org.olap4j.driver.xmla.DeferredNamedListImpl.populateList(DeferredNamedListImpl.java:136)
    at org.olap4j.driver.xmla.DeferredNamedListImpl.getList(DeferredNamedListImpl.java:90)
    at org.olap4j.driver.xmla.DeferredNamedListImpl.size(DeferredNamedListImpl.java:116)
    at org.olap4j.driver.xmla.XmlaOlap4jConnection.getOlapDatabase(XmlaOlap4jConnection.java:451)
    at org.olap4j.driver.xmla.XmlaOlap4jConnection.getOlapCatalog(XmlaOlap4jConnection.java:501)
    at org.olap4j.driver.xmla.XmlaOlap4jConnection.getCatalog(XmlaOlap4jConnection.java:496)
    at org.olap4j.driver.xmla.XmlaOlap4jStatement.executeOlapQuery(XmlaOlap4jStatement.java:291)
    at com.tfsc.ilabs.olap4j.POC.main(POC.java:28)
Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog.
    at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
    at org.olap4j.driver.xmla.XmlaOlap4jUtil.parse(XmlaOlap4jUtil.java:80)
Any help will be much appreciated.
You should check what's being sent back by the server, using Wireshark or something similar. This kind of error happens when the XML parser tries to parse the response it got; the server is probably not sending XML content back.
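A quick way to eyeball the response without Wireshark is a hypothetical sketch like the one below (the URL is the endpoint from the question; note a plain GET may be answered differently than the driver's SOAP POST, but an HTML or plain-text error page would still be telling):
import urllib2

# Fetch the XMLA endpoint and inspect what actually comes back; anything
# other than XML would explain the "Content is not allowed in prolog" error.
try:
    resp = urllib2.urlopen("http://mssql.com/mssql/msmdpump.dll")
    print resp.info().getheader("Content-Type")
    print resp.read()[:500]
except urllib2.HTTPError as e:
    print e.code
    print e.read()[:500]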

Question regarding Active Directory of remote server

I am new to Active Directory and still learning some of the concepts.
The code below connects to AD on my local machine and works properly:
DirectoryEntry entry = new DirectoryEntry("LDAP://CN=testing1,CN=Users,DC=mydomain,DC=com");
DirectoryEntryConfiguration entryConfiguration = entry.Options;
Console.WriteLine("Server: " + entryConfiguration.GetCurrentServerName());
Console.WriteLine("Page Size: " + entryConfiguration.PageSize.ToString());
Console.WriteLine("Password Encoding: " + entryConfiguration.PasswordEncoding.ToString());
Console.WriteLine("Password Port: " + entryConfiguration.PasswordPort.ToString());
Console.WriteLine("Referral: " + entryConfiguration.Referral.ToString());
Console.WriteLine("Security Masks: " + entryConfiguration.SecurityMasks.ToString());
Console.WriteLine("Is Mutually Authenticated: " + entryConfiguration.IsMutuallyAuthenticated().ToString());
Console.WriteLine();
Console.ReadLine();
Here is my problem: when I replace mydomain in the LDAP path with the domain of another machine, like this:
LDAP://CN=testing1,CN=Users,DC=XXXX,DC=com
it gives me this error:
System.DirectoryServices.DirectoryServicesCOMException was unhandled
Message=A referral was returned from the server.
This was basically a teething error.
Instead of this:
LDAP://CN=testing1,CN=Users,DC=XXXX,DC=com
I should have written
LDAP://XXX.com/CN=testing1,CN=Users,DC=XXXX,DC=com
