WildFly environment variable parsing a JDBC string with semicolon - sql-server

I am having a heck of a time using an environment variable with a semicolon in a properties file read by WildFly (24) on Linux. One like:
DATABASE_JDBC_URL=jdbc:sqlserver://sqlserver.c3klg5a2ws.us-east-1.rds.amazonaws.com:1433;DatabaseName=ejbca;encrypt=false
The issue is that it's truncating the value at the semicolon if I don't use quotes, so I end up with the driver trying to write to master, since it thinks no database is specified.
I have it set up so that the variable is in a file called datasource.properties that gets read from standalone.conf, where this variable sits:
JAVA_OPTS="$JAVA_OPTS -DDATABASE_JDBC_URL=${DATABASE_JDBC_URL}"
It's read in with the following in standalone.conf:
set -a
. /opt/wildfly_config/datasource.properties
set +a
That in turn gets populated in standalone.xml with:
<connection-url>${env.DATABASE_JDBC_URL}</connection-url>
I tried putting it in quotes, and oddly enough WildFly doesn't start at all; standalone.sh is no longer able to parse it:
/opt/wildfly/bin/standalone.sh: line 338: --add-exports=java.desktop/sun.awt=ALL-UNNAMED: No such file or directory
So I then escape the semicolons inside the quotes like this:
DATABASE_JDBC_URL="jdbc:sqlserver://sqlserver.c3klg5a2ws.us-east-1.rds.amazonaws.com:1433\;DatabaseName=ejbca\;encrypt=false"
Startup looks good in the log output this way:
-DDATABASE_JDBC_URL=jdbc:sqlserver://sqlserver.c3klg5a2ws.us-east-1.rds.amazonaws.com:1433;DatabaseName=ejbca;encrypt=false
But then Java doesn't like it; for some reason it still sees the escape backslashes:
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The port number 1433\ is not valid.
I can use sed to change the value in standalone.xml, but all of the other properties I am handling this way work fine, with the exception of this one and:
<check-valid-connection-sql>${env.DATABASE_CONNECTION_CHECK}</check-valid-connection-sql>
where that value is "SELECT 1;", which it also does not like. That one worked as "'SELECT 1;'", but the same trick does not work for the JDBC URL. I tried single quotes as well; that also gives the parsing error above. Is there any way to read in this environment variable that keeps WildFly happy?

You can enclose the characters you want to escape in { and } braces.
From the SQL Server documentation:
For example, {;} escapes a semicolon.
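To make the quoted rule concrete: the braces go around a value that contains the special character. A hypothetical example (the host and password are invented here, and brace escaping assumes a driver version that supports it, mssql-jdbc 8.4 or later):
jdbc:sqlserver://dbhost.example.com:1433;DatabaseName=ejbca;password={pa;ss};encrypt=false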
Just to note: different database vendors will most likely have different ways of escaping characters in their connection URLs. The above approach works for SQL Server; to give one contrasting example, MySQL uses URL encoding.

An alternate solution was to change how the variables were created. Part of the problem I had was that sourcing them from a properties file made them properties and not variables. I ended up creating a /opt/wildfly/bin/start.sh script with:
#!/bin/bash
# Export the URL directly so standalone.xml can resolve ${env.DATABASE_JDBC_URL};
# set in the process environment, the semicolons need no escaping.
export DATABASE_JDBC_URL="jdbc:sqlserver://sqlserver.c3klg5a2ws.us-east-1.rds.amazonaws.com:1433;DatabaseName=ejbca;encrypt=false"
/opt/wildfly/bin/standalone.sh
I then pointed the WildFly service at that new start.sh script. I no longer have any parsing issues now that the variables are set directly in the process environment.
No escaping was needed after that.
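If WildFly runs as a systemd service, the switch amounts to pointing the unit at the wrapper script (the unit file name and location below are assumptions, not from the original setup):
# /etc/systemd/system/wildfly.service (excerpt)
[Service]
ExecStart=/opt/wildfly/bin/start.sh
followed by systemctl daemon-reload and a restart of the service.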

Related

Remotely changing a Chrome config file on a Raspbian system via ssh from macOS

Long time listener (and a massive fan; this platform has gotten me out of predicaments more times than I can count), first time writer:
I am looking after a project where we run several Raspbian installs (Jessie Lite) that run Chromium via a server in fullscreen - the whole boot process is automated - and the two tabs that get opened come from the /.config/chromium/default/preferences file.
In the relevant section, it says:
{"restore_on_startup":4,"startup_urls":["http://www.example.com/","https://www.example.org"]}
Since these units are essentially headless and I want to (Apple)Script my way to changing these URLs remotely, I looked into calling sed via ssh with a public key on the Raspberry Pi 400s. I have made good progress on the ssh and public-key side, but I am still not finding it easy to get my head around patterns and (double) escaping this query...
Purely on macOS first, before getting into the ssh side of things, this is what I have come up with so far:
sed -i .new 's/"startup_urls":[".*"]}/"startup_urls":["http://www.example.net","http://www.example.com"]}/g' ~/Library/Application\ Support/Google/Chrome/Default/Secure\ Preferences
However, that just gives me:
sed: 1: "s/"startup_urls":[".*"] ...": bad flag in substitute command: '/'
Any help and/or pointers greatly appreciated, otherwise - carry on!
Cheers,
Fred
It would seem there are a few issues here:
The default delimiter / conflicts with the / characters in your replacement text, so you need to use an alternative delimiter.
The opening bracket [ is never matched, as it needs to be escaped in the pattern.
You can try this sed to match and replace everything within the brackets:
$ sed -i.bak 's|\[[^]]*|["http://www.example.net","http://www.example.com"|' ~/Library/Application\ Support/Google/Chrome/Default/Secure\ Preferences
{"restore_on_startup":4,"startup_urls":["http://www.example.net","http://www.example.com"]}

Why is Jena tdb2.tdbquery optimization stuck on "Reorder/generic"

I am using apache-jena-4.5.0 and Fuseki pretty much out of the box. I created a TDB2 dataset using Fuseki, but have now shut it off and am using Jena's command-line utilities on a Windows box inside a bash shell.
My basic command is:
java -cp "*" tdb2.tdbquery --loc ~/path/to/databases/DEMO --explain --set arq:logExec=FINE --time --query ~/path/to/demoquery.txt
And my question is why the output always contains only Reorder/generic, like this:
15:56:00 INFO exec :: Reorder/generic
Even after I have tried all of these:
successfully ran tdb2.tdbstats and got a reasonable-looking temp.opt file as output
moved that temp.opt to each of /path/to/DEMO/stats.opt and /path/to/DEMO/Data-001/stats.opt
tried uppercase STATS.OPT for each, since I'm on Windows, just to be sure
I still don't seem to be able to produce any output with Reorder/stats.
This question did not contain enough detail to answer. The intended question was "why won't TDB2 optimize my query?", and the answer was in the SPARQL, not in the invocation of tdb2.tdbquery or the location of the stats.opt file.
My SPARQL contained multiple FROM clauses, which forced TDB2 into BGP mode (instead of quads) and thwarted any optimization. As best we can tell at the moment, anyone wishing to use the TDB2 optimizer should use either the default graph, or a combination of FROM NAMED and GRAPH, which causes the graphs to be evaluated one at a time.
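To illustrate the rewrite with hypothetical graph names (not from the original query):
# Multiple FROM clauses force BGP mode and defeat the stats-based reorder:
SELECT * FROM <urn:example:g1> FROM <urn:example:g2> WHERE { ?s ?p ?o }
# FROM NAMED plus GRAPH evaluates one graph at a time and can use stats.opt:
SELECT * FROM NAMED <urn:example:g1> FROM NAMED <urn:example:g2> WHERE { GRAPH ?g { ?s ?p ?o } }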

SSIS - Flat File with escape characters

I have a large flat file I'm using to recover data. It was exported from a system using double quotes (") as the text qualifier and a pipe (|) as the delimiter. SSIS can be configured for this without a problem, but where I'm running into issues is with the \ escape character.
The row causing the issue:
"125004267"|"125000316"|"125000491"|"height"|"5' 11\""|"12037"|"46403"|""|"t"|""|"2012-10-01 22:34:01"|"2012-10-01 22:34:01"|"1900-01-01 00:00:00"
The fourth column in the database should be 5' 11".
I'm getting the following error:
Error: 0xC0202055 at Data Flow Task 1, Flat File Source [2]: The column delimiter for column "posting_value" was not found.
How can I tell SSIS to handle the \ escape character?
I know this is quite old, but I just ran into a similar issue regarding escaping quotes in CSVs in SSIS. It seems odd there isn't more flexible support for this, but SSIS does support VB-style doubled double quotes. So in your example you could pre-parse the file to translate it into
"125004267"|"125000316"|"125000491"|"height"|"5' 11"""|"12037"|"46403"|""|"t"|""|"2012-10-01 22:34:01"|"2012-10-01 22:34:01"|"1900-01-01 00:00:00"
to get your desired output. This at least works on SQL Server 2014.
This also works for Excel (tested with 2010), though, oddly, only when importing data from a text file, not when opening a CSV with Excel.
This does appear to be the standardized method according to RFC 4180, which states:
Fields containing line breaks (CRLF), double quotes, and commas should be enclosed in double-quotes.
...
If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped by preceding it with another double quote.
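The pre-parse step itself can be a one-liner; a sketch, assuming a backslash in the export only ever appears as the escape in front of a quote (the file names are placeholders):
sed 's/\\"/""/g' export.txt > export_fixed.txt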
This probably isn't the answer you are looking for, but...
I'd reach out to the technical contacts at the source of the data and explain that if they're going to send you a file that uses double quotes as text qualifiers, that implies there are never any double quotes in the text. If that is possible, as it happens here, ask them to use another text qualifier, or none at all.
Since pipe delimiters are in use, what's the point of having text qualifiers?
Seems redundant.
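For comparison, the same row with no text qualifier at all parses unambiguously, provided the data itself never contains a pipe (an assumption about the source system):
125004267|125000316|125000491|height|5' 11"|12037|46403||t||2012-10-01 22:34:01|2012-10-01 22:34:01|1900-01-01 00:00:00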

XSLT document() function with escaped chars in file name

I am parsing XML with file names that contain escaped characters. This is the file name on the server:
Account-V%29%27%22%3B%3A%3E.layout
When I apply the document() function, it automatically decodes the escaped characters.
<xsl:apply-templates select="document('Account-V%29%27%22%3B%3A%3E.layout')/Layout"/>
The above yields an error, as it cannot find this file on the server:
Account-V)'";:>.layout
Is there a way to tell the document() function not to decode the escaped characters in the file name? I tried wrapping this in variables, but it did not work.
If you're using XSLT 2.0, try using encode-for-uri()
select="document(encode-for-uri('Account-V%29%27%22%3B%3A%3E.layout'))/Layout"
The way that the URI you pass to the document() function gets dereferenced is in many ways implementation-defined, and many XSLT processors give you some control over it, for example by allowing you to supply a user-written URIResolver.
So I don't think the question can be answered without knowing your XSLT processor.
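With a JAXP-based processor, for example, a user-supplied URIResolver can hand the href to the filesystem as a literal file name, bypassing URI decoding. A minimal sketch (the stylesheet and file names are placeholders, and whether your processor routes document() through the resolver this way is implementation-dependent):
import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class LiteralResolver {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("layout.xsl")));
        // Open the href from document() as a literal file name,
        // so %29 etc. are not decoded before the file is looked up.
        t.setURIResolver((href, base) -> new StreamSource(new File(href)));
        t.transform(new StreamSource(new File("input.xml")),
                    new StreamResult(new File("output.xml")));
    }
}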
Found a workaround, which works; it isn't the prettiest. Before performing the XSLT, do a string replace in Java, such as fileNames.replace("%","%25"). That way, when the document() function decodes %25 back to a percent sign, it generates the correct file name on the server.

Can you make SQLCMD immediately run each statement in a script without the use of "GO"?

When running a script in SQLCMD, is there a setting, parameter or any other way of making it effectively run a GO command at the end of each SQL statement?
This would prevent the need to insert periodic GO commands into very large scripts.
No.
SQLCMD would have to implement a full T-SQL parser itself in order to determine where the statement boundaries are; as it currently exists, all it has to do is find lines with "GO" on them.
Determining statement boundaries can be quite tricky, given that ";" terminators are still optional for most statements.
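A hypothetical pair of statements shows why: the semicolon inside the string literal is not a boundary, and the second statement has no terminator at all:
INSERT INTO notes (body) VALUES ('first; second');
UPDATE notes SET body = 'done' WHERE id = 1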
Hmmm, I guess this is not going to help, but it is an answer to your question:
http://msdn.microsoft.com/en-us/library/ms162773.aspx
-c cmd_end
Specifies the batch terminator. By default, commands are terminated and sent to SQL Server by typing the word "GO" on a line by itself. When you reset the batch terminator, do not use Transact-SQL reserved keywords or characters that have special meaning to the operating system, even if they are preceded by a backslash.
Especially the last sentence sounds daunting....
If all your lines end in ; and there is no ; anywhere else (in text fields, for example), try
sqlcmd -c ;
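A fuller invocation might look like this (the server, database, and script names are placeholders; the semicolon is quoted for the shell's benefit):
sqlcmd -S myserver -d mydb -i big_script.sql -c ";"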
