Execute stored procedure with DbSlim with FitNesse (Selenium, Xebium) - sql-server

https://github.com/markfink/dbslim
I'd like to execute stored procedures with DbSlim using FitNesse (Selenium, Xebium).
What I tried to do is:
!define dbQuerySelectCustomerbalance (
execute dbo.uspLogError
)
| script | Db Slim Select Query | !-${dbQuerySelectCustomerbalance}-! |
This gives a green indicator; however, SQL Server Profiler shows no activity/logging...
So what I'd like to know is: is it possible to use DbSlim to execute stored procedures,
and if so, what is the correct way to do it?
By the way, the database connection is defined on one page, and the query page includes that connection page. (Is that OK?)

Take out the !- ... -!. It is used to escape wiki words, but in this case you want the variable to be translated into the actual query.
!define dbQuerySelectCustomerbalance ( execute dbo.uspLogError )
| script | Db Slim Select Query | ${dbQuerySelectCustomerbalance} |
| show | data by column index | 1 | and row index | 1 |
You can add the last line, which outputs the first column of the first row, for testing purposes if your stored procedure returns a result set (or you can create one simple stored procedure just to test this out, as sketched below).
Specifying the connection anywhere before this block will be fine, be it on the same page or in a SetUp/SuiteSetUp/normal page included or executed earlier.
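If you need a simple procedure just for this smoke test, a minimal sketch could look like the following (the name dbo.uspDbSlimSmokeTest is only an illustration, not something DbSlim provides); once it exists, the show line above should display 'hello':

CREATE PROCEDURE dbo.uspDbSlimSmokeTest
AS
BEGIN
    -- Return a one-row, one-column result set so the
    -- "show | data by column index | 1 | and row index | 1" line has something to print.
    SELECT 'hello' AS greeting;
END

The wiki variable would then be defined as execute dbo.uspDbSlimSmokeTest.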

Related

How to check if a request located in JDBC_SESSION_INIT_STATEMENT is working? DataFrameReader

I am trying to connect to SQL Server with spark-jdbc, using JDBC_SESSION_INIT_STATEMENT to create a temporary table and then download data from the temporary table in the main query.
I have the following code:
//df is org.apache.spark.sql.DataFrameReader
val s = """select * into #tmp_table from ( SELECT op.ID,
| op.Date,
| op.DocumentID,
| op.Amount,
| op.AmountCurr,
| op.CurrencyID,
| operson.ObjectTypeId AS PersonOT,
| op.PersonID,
| ocontract.ObjectTypeId AS ContractOT,
| op.ContractID,
| op.DocNum,
| op.MomentCreate,
| op.ObjectTypeID,
| op.OwnerObjectID
|FROM dbo.Operation op With (Index = IX_Operation_Date) -- Without the hint it sometimes falls back to a full table scan
|LEFT JOIN dbo.Object ocontract ON op.ContractID = ocontract.ID
|LEFT JOIN dbo.Object operson ON op.PersonID = operson.ID
|WHERE op.Date>='2019-01-01' and op.Date<'2020-01-01' AND 1=1
|) wrap_for_single_connect
|OPTION (LOOP JOIN, FORCE ORDER, MAX_GRANT_PERCENT=25)""".stripMargin
df
  .option(JDBCOptions.JDBC_SESSION_INIT_STATEMENT, s)
  .jdbc(
    jdbcUrl,
    "(select * from tempdb.#tmp_table) sub",
    connectionProps)
I get com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name '#tmp_table'.
And I have a feeling that JDBC_SESSION_INIT_STATEMENT is not being executed at all, because I deliberately tried to mess up the statement and still got the same Invalid object error.
How can I check whether the statement in JDBC_SESSION_INIT_STATEMENT is actually executed?
One way to know whether your JDBCOptions.JDBC_SESSION_INIT_STATEMENT is executed is to enable INFO logging for the org.apache.spark.sql.execution.datasources.jdbc logger.
That should make Spark print the following message to the logs:
Executing sessionInitStatement: [sql]
Given the comment I don't think you should use it to create a source table to load records from:
// This executes a generic SQL statement (or PL/SQL block) before reading
// the table/query via JDBC. Use this feature to initialize the database
// session environment, e.g. for optimizations and/or troubleshooting.
You should use dbtable or query parameter instead.
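For illustration, the original statement could be folded into a single derived table and passed as the dbtable argument (the second argument of .jdbc above) instead of going through JDBC_SESSION_INIT_STATEMENT. A sketch, with the caveat that the OPTION (...) hint is dropped because SQL Server does not allow it inside a subquery:

-- Hypothetical dbtable value: Spark wraps this in an outer SELECT itself.
(SELECT op.ID,
        op.Date,
        op.DocumentID,
        op.Amount,
        op.AmountCurr,
        op.CurrencyID,
        operson.ObjectTypeId AS PersonOT,
        op.PersonID,
        ocontract.ObjectTypeId AS ContractOT,
        op.ContractID,
        op.DocNum,
        op.MomentCreate,
        op.ObjectTypeID,
        op.OwnerObjectID
 FROM dbo.Operation op WITH (INDEX = IX_Operation_Date)
 LEFT JOIN dbo.Object ocontract ON op.ContractID = ocontract.ID
 LEFT JOIN dbo.Object operson ON op.PersonID = operson.ID
 WHERE op.Date >= '2019-01-01' AND op.Date < '2020-01-01') src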

How to convert regexp_substr(Oracle) to SQL Server?

I have a data table which has a column Acctno; the expected output is shown in a separate column:
|Acctno | expected_output|
|ABC:BKS:1023049101 | 1023049101 |
|ABC:UWR:19048234582 | 19048234582 |
|ABC:UEW:1039481843 | 1039481843 |
In Oracle SQL I used the below:
select regexp_substr(acctno, '[^:]+', 1, 3) as expected_output
from temp_mytable
but in Microsoft SQL Server I am getting an error that regexp_substr is not a built-in function.
How can I resolve this issue?
We can use PATINDEX with SUBSTRING here:
SELECT SUBSTRING(acctno, PATINDEX('%:[0-9]%', acctno) + 1, LEN(acctno)) AS expected_output
FROM temp_mytable;
Note that this answer assumes that the third component would always start with a digit, and that the first two components would not have any digits. If this were not true, then we would have to do more work.
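As a quick sanity check of how this evaluates, here is a standalone example on one of the sample values:

-- PATINDEX('%:[0-9]%', ...) returns 8, the position of the ':' that is
-- immediately followed by a digit, so SUBSTRING starts reading at position 9.
SELECT SUBSTRING('ABC:BKS:1023049101',
                 PATINDEX('%:[0-9]%', 'ABC:BKS:1023049101') + 1,
                 LEN('ABC:BKS:1023049101')) AS expected_output;  -- 1023049101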
Just another option, if the desired value is the last portion of the string and there are no more than 4 segments:
Select *
,NewValue = parsename(replace(Acctno,':','.'),1)
from YourTable
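For example, on one of the sample values this evaluates as follows:

-- REPLACE turns 'ABC:BKS:1023049101' into 'ABC.BKS.1023049101';
-- PARSENAME(..., 1) then returns the rightmost dot-separated part.
SELECT PARSENAME(REPLACE('ABC:BKS:1023049101', ':', '.'), 1) AS NewValue;  -- 1023049101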

SSRS System.InvalidCastException - at OracleDataReader.GetDecimal(Int32 i)

I have an SSRS report that was pointed to SQL Server views, which pointed to Oracle tables. I edited the SSRS report Dataset so as to query directly from the Oracle db. It seems like a very simple change until I got this error message:
System.InvalidCastException: Specified cast is not valid.
With the following details...
The error refers to the field ‘UOM_QTY’ and it also points at
Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i).
The SELECT statement on that field is pretty simple:
, (DELV_RECEIPT.INV_LBS/ITEM_UOM_XREF.CONV_TO_LBS) AS UOM_QTY
Does anyone know what would cause this message, and how to resolve the error? My objective is to use the Oracle data source instead of SQL Server.
Error 1
Severity Code Description Project File Line Suppression State
Warning [rsErrorReadingDataSetField] The dataset ‘dsIngredientCosts’ contains a definition for the Field ‘UOM_QTY’. The data extension returned an error during reading the field. System.InvalidCastException: Specified cast is not valid.
at Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i)
at Oracle.ManagedDataAccess.Client.OracleDataReader.GetValue(Int32 i)
at Microsoft.ReportingServices.DataExtensions.DataReaderWrapper.GetValue(Int32 fieldIndex)
at Microsoft.ReportingServices.DataExtensions.MappingDataReader.GetFieldValue(Int32 aliasIndex) C:\Users\bl0040\Documents\Visual Studio 2015\Projects\SSRS\Project_ssrs2016\Subscription Reports\Feed Ingredient Weekly Price Avg.rdl 0
Error 2
Severity Code Description Project File Line Suppression State
Warning [rsMissingFieldInDataSet] The dataset ‘dsIngredientCosts’ contains a definition for the Field ‘UOM_QTY’. This field is missing from the returned result set from the data source. C:\Users\bl0040\Documents\Visual Studio 2015\Projects\SSRS\Project_ssrs2016\Subscription Reports\Feed Ingredient Weekly Price Avg.rdl 0
Source Tables:
+------------+---------------+-------------+---------------+-----------+
| Source | TABLE_NAME | COLUMN_NAME | DataSize | COLUMN_ID |
+------------+---------------+-------------+---------------+-----------+
| ORACLE | DELV_RECEIPT | INV_LBS | NUMBER (7,0) | 66 |
+------------+---------------+-------------+---------------+-----------+
| ORACLE | ITEM_UOM_XREF | CONV_TO_LBS | NUMBER (9,4) | 3 |
+------------+---------------+-------------+---------------+-----------+
| SQL SERVER | DELV_RECEIPT | INV_LBS | numeric (7,0) | 66 |
+------------+---------------+-------------+---------------+-----------+
| SQL SERVER | ITEM_UOM_XREF | CONV_TO_LBS | numeric (9,4) | 3 |
+------------+---------------+-------------+---------------+-----------+
The error went away after adding a datatype conversion statement to the data selection.
, CAST(DELV_RECEIPT.INV_LBS/ITEM_UOM_XREF.CONV_TO_LBS AS NUMERIC(9,4)) AS UOM_QTY
Can anyone provide some information on why the original query would be a problem and why the CAST would fix these errors? I tried casting the results because someone on the Code Project forum said...
why don't you use typed datasets? you get such head aches just because
of not coding in a type-safe manner. you have a dataset designer in
the IDE which makes the life better, safer, easier and you don't use
it. I really can't understand.
Here is an approach to fix this error with an extension method instead of modifying the SQL query.
public static Decimal MyGetDecimal(this OracleDataReader reader, int i)
{
    try
    {
        return reader.GetDecimal(i);
    }
    catch (System.InvalidCastException)
    {
        // An Oracle NUMBER can carry more significant digits than System.Decimal.
        // Re-read the value as an OracleDecimal, reduce it to a precision that fits
        // into a .NET decimal, and convert that instead.
        Oracle.ManagedDataAccess.Types.OracleDecimal hlp = reader.GetOracleDecimal(i);
        Oracle.ManagedDataAccess.Types.OracleDecimal hlp2 =
            Oracle.ManagedDataAccess.Types.OracleDecimal.SetPrecision(hlp, 27);
        return hlp2.Value;
    }
}
Thank you for this but what happens if your query looks like:
SELECT x.* from x
and .GetDecimal appears nowhere?
Any suggestions in that case? I have created a function in ORACLE itself that rounds all values in a result set to avoid this for basic select statements but this seems wrong for loading updateable datasets...
Obviously this is an old-school approach to getting data.

Sqoop & Hadoop - How to join/merge old data and new data imported by Sqoop in lastmodified mode?

Background:
I have a table with the following schema on a SQL server. Updates to existing rows are possible and new rows are also added to this table.
unique_id | user_id | last_login_date | count
123-111 | 111 | 2016-06-18 19:07:00.0 | 180
124-100 | 100 | 2016-06-02 10:27:00.0 | 50
I am using Sqoop to add incremental updates in lastmodified mode. My --check-column parameter is the last_login_date column. In my first run, I got the above two records into Hadoop - let's call this the current data. I noted that the last value (the max value of the check column from this first import) is 2016-06-18 19:07:00.0.
Assuming there is a change on the SQL server side, I now have the following changes on the SQL server side:
unique_id | user_id | last_login_date | count
123-111 | 111 | 2016-06-25 20:10:00.0 | 200
124-100 | 100 | 2016-06-02 10:27:00.0 | 50
125-500 | 500 | 2016-06-28 19:54:00.0 | 1
I have the row 123-111 updated with a more recent last_login_date value and the count column has also been updated. I also have a new row 125-500 added.
On my second run, Sqoop looks at all rows with a last_login_date greater than my known last value from the previous import - 2016-06-18 19:07:00.0.
This gives me only the changed data, i.e. the 123-111 and 125-500 records. Let's call this the new data.
Question
How do I do a merge join in Hadoop/Hive using the current data and the new data so that I end up with the updated version of 123-111, 124-100, and the newly added 125-500?
Loading changed data using Sqoop is a two-phase process:
1st phase - load the changed data into a temp (staging) table using the sqoop import utility.
2nd phase - merge the changed data with the old data using the sqoop-merge utility.
If the table is small (say, a few million records), then just do a full load using sqoop import.
Sometimes it is possible to load only the latest partition - in such a case use the sqoop import utility to load the partition with a custom query, then instead of merging simply insert overwrite the loaded partition into the target table, or copy the files - this will work faster than sqoop merge.
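If you would rather do the reconciliation in Hive itself instead of using sqoop-merge, one common pattern is to union the current data with the imported delta and keep only the newest row per key. A sketch, assuming the current data sits in user_logins and the delta was imported into user_logins_incr (both table names are placeholders):

-- Rebuild a merged table that keeps, for each unique_id, the row with the
-- most recent last_login_date (newly added rows survive automatically).
DROP TABLE IF EXISTS user_logins_merged;
CREATE TABLE user_logins_merged AS
SELECT unique_id, user_id, last_login_date, `count`
FROM (
  SELECT unique_id, user_id, last_login_date, `count`,
         ROW_NUMBER() OVER (PARTITION BY unique_id
                            ORDER BY last_login_date DESC) AS rn
  FROM (
    SELECT unique_id, user_id, last_login_date, `count` FROM user_logins
    UNION ALL
    SELECT unique_id, user_id, last_login_date, `count` FROM user_logins_incr
  ) unioned
) ranked
WHERE rn = 1;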
You can change the existing Sqoop query (by specifying a new custom query) to get ALL the data from the source table instead of getting only the changed data. Refer to using_sqoop_to_move_data_into_hive. This would be the simplest way to accomplish this - i.e. doing a full data refresh instead of applying deltas.

How to get last access/modification date of a PostgreSQL database?

On a development server I'd like to remove unused databases. To do that I need to know whether a database is still being used by someone or not.
Is there a way to get last access or modification date of given database, schema or table?
You can do it by checking the last modification time of the table's file.
In PostgreSQL, every table corresponds to one or more OS files, like this:
select relfilenode from pg_class where relname = 'test';
The relfilenode is the file name of table "test". Then you can find the file in the database's directory.
In my test environment:
cd /data/pgdata/base/18976
ls -l -t | head
The last command lists all files ordered by last modification time.
There is no built-in way to do this - and all the approaches that check the file mtime described in other answers here are wrong. The only reliable option is to add triggers to every table that record a change to a single change-history table, which is horribly inefficient and can't be done retroactively.
If you only care about "database used" vs "database not used" you can potentially collect this information from the CSV-format database log files. Detecting "modified" vs "not modified" is a lot harder; consider SELECT writes_to_some_table(...).
If you don't need to detect old activity, you can use pg_stat_database, which records activity since the last stats reset. e.g.:
-[ RECORD 6 ]--+------------------------------
datid | 51160
datname | regress
numbackends | 0
xact_commit | 54224
xact_rollback | 157
blks_read | 2591
blks_hit | 1592931
tup_returned | 26658392
tup_fetched | 327541
tup_inserted | 1664
tup_updated | 1371
tup_deleted | 246
conflicts | 0
temp_files | 0
temp_bytes | 0
deadlocks | 0
blk_read_time | 0
blk_write_time | 0
stats_reset | 2013-12-13 18:51:26.650521+08
so I can see that there has been activity on this DB since the last stats reset. However, I don't know anything about what happened before the stats reset, so if I had a DB showing zero activity since a stats reset half an hour ago, I'd know nothing useful.
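For reference, the expanded record above comes straight out of the pg_stat_database view, so a plain query like this (a minimal example) is enough to eyeball activity per database:

-- Per-database activity counters accumulated since the last stats reset.
SELECT datname, numbackends, xact_commit, xact_rollback,
       tup_inserted, tup_updated, tup_deleted, stats_reset
FROM pg_stat_database
ORDER BY datname;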
PostgreSQL 9.5 lets us track the timestamp of the last committed transaction.
1. Check whether track_commit_timestamp is on or off using the following query:
show track_commit_timestamp;
2. If it returns "on", go to step 3; otherwise modify postgresql.conf:
cd /etc/postgresql/9.5/main/
vi postgresql.conf
Change
track_commit_timestamp = off
to
track_commit_timestamp = on
then restart PostgreSQL / the system and repeat step 1.
3. Use the following queries to track the last commit:
SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME;
SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME where COLUMN_NAME=VALUE;
My way to get the modification date of my tables:
Python Function
CREATE OR REPLACE FUNCTION py_get_file_modification_timestamp(afilename text)
RETURNS timestamp without time zone AS
$BODY$
import os
import datetime
return datetime.datetime.fromtimestamp(os.path.getmtime(afilename))
$BODY$
LANGUAGE plpythonu VOLATILE
COST 100;
SQL Query
SELECT
schemaname,
tablename,
py_get_file_modification_timestamp('*postgresql_data_dir*/*tablespace_folder*/'||relfilenode)
FROM
pg_class
INNER JOIN
pg_catalog.pg_tables ON (tablename = relname)
WHERE
schemaname = 'public'
I'm not sure if things like vacuum can mess up this approach, but in my tests it's a pretty accurate way to find tables that are no longer used, at least for INSERT/UPDATE operations.
I guess you should activate some logging options. You can find information about logging in the PostgreSQL documentation.
