Retrieve data from multiple tables from a SQLite database in Titanium

I am trying to retrieve data from multiple tables in a SQLite database with Titanium, using a SELECT statement with a LEFT JOIN.
It retrieves null or undefined, yet when I use the same statement in SQLite Administrator it retrieves data.
You can check my code:
var db = Ti.Database.install('Path/DB_Name.s3db', 'DB_Name');
var rows = db.execute(
'select c.CustomerID,c.Name, c.CustomerCode,v.ConfirmationDate from Customer as c' +
' left join Visits as v on c.CustomerID==v.CustomerID ' +
' order by v.ConfirmationDate desc');

Try it like this:
var db = Ti.Database.install('Path/DB_Name.s3db', 'DB_Name');
var rows = db.execute(
'select c.CustomerID,c.Name, c.CustomerCode,v.ConfirmationDate from Customer as c' +
' left join Visits as v on c.CustomerID=v.CustomerID ' +
' order by v.ConfirmationDate desc');
Note the single "=" in the join condition instead of "==".
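If the join is correct but the values still come back undefined, it is also worth checking how the result set is read. A minimal sketch using the Ti.Database.ResultSet API (isValidRow / fieldByName / next):
var db = Ti.Database.install('Path/DB_Name.s3db', 'DB_Name');
var rows = db.execute(
'select c.CustomerID, c.Name, c.CustomerCode, v.ConfirmationDate from Customer as c' +
' left join Visits as v on c.CustomerID = v.CustomerID' +
' order by v.ConfirmationDate desc');
while (rows.isValidRow()) {
    // ConfirmationDate may be null for customers with no visits (LEFT JOIN)
    Ti.API.info(rows.fieldByName('Name') + ' -> ' + rows.fieldByName('ConfirmationDate'));
    rows.next();
}
rows.close();
db.close();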

Related

Implementing geometry_columns view in MS SQL Server

(We're using MSSQL Server 2014 as far as I know)
I have never seen a good solution for maintaining a geometry_columns table in MSSQL Server. https://gis.stackexchange.com/questions/71558 never got figured out, and even if it did, the PostGIS approach of using a view (rather than a table) is a much better solution.
With that said, I can't seem to figure out how to implement the basics of how this might work.
The basic schema of the geometry_columns view from PostGIS is the following set of columns (the DDL is a bit more complicated, but can be provided if need be): f_table_catalog, f_table_schema, f_table_name, f_geometry_column, coord_dimension, srid, and type.
MS SQL Server will allow you to query the information_schema.columns view to show columns with a 'geometry' data type:
SELECT *
FROM information_schema.columns
WHERE data_type = 'geometry'
I'm imagining the geometry_columns view could be defined with something similar to the following, but I can't figure out how to get the information about the geometry columns to populate in the query:
SELECT
TABLE_CATALOG as f_table_catalog
, TABLE_SCHEMA as f_table_schema
, table_name as f_table_name
, COLUMN_NAME as f_geometry_column
/*how to deal with these in view?
, geometry_column.STDimension() as coord_dimension
, geometry_column.STSrid as srid
, geometry_column.STGeometryType() as type
*/
FROM information_schema.columns where data_type = 'geometry'
I'm hung up on how the three ST methods can dynamically report the dimension, SRID, and geometry type in the view when querying from the information_schema table. Perhaps this is more of a SQL problem than anything, but I can't wrap my head around it for some reason.
Also, please let me know if this question a) could be asked differently because it is a general SQL question, and/or b) belongs on another forum (GIS.SE didn't have an answer, as I believe this is more on the database side than spatial/GIS).
Based on a little reading, it seems that PostGIS - as befits a dedicated GIS system - is a little more clever than SQL Server, when it comes to geometry columns. It looks like in PostGIS you can say that a particular geometry column will only ever contain, say, a POINT, or a LINESTRING. This is how the geometry_columns view can then be more specific about the columns it is describing.
I don't believe it is possible to readily constrain a SQL Server geometry column in this way (triggers or check constraints might allow it, but would be messy; see the sketch below). PostGIS can have a general geometry column with no further restriction. Let's suppose you're happy for your SQL Server geometry_columns view to return the dimension, SRID, and type based on an arbitrary row of data.
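For illustration, the kind of messy constraint I mean might look like the sketch below, on a hypothetical Parcels table (note that a CHECK constraint still lets NULL geometries through):
-- Hypothetical table; the check pins the column to a single geometry type.
CREATE TABLE dbo.Parcels (
Id int NOT NULL PRIMARY KEY
, Shape geometry
, CONSTRAINT CK_Parcels_Shape_Polygon
CHECK (Shape.STGeometryType() = 'Polygon') -- NULL values still pass
);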
We can get the column metadata out of the catalog views, but I think the only way to do the necessary querying to also get the geometry metadata is with dynamic SQL. This rules out views and functions. I can do you a stored procedure though:
CREATE PROCEDURE GetGeometryColumns
AS
BEGIN
DECLARE @sql nvarchar(max);
SET @sql = ( SELECT
STUFF((
SELECT ' UNION ALL ' + Query
FROM
( SELECT
'SELECT ''' + s.name + ''' SchemaName'
+ ', ''' + t.name + ''' TableName'
+ ', ''' + c.name + ''' ColumnName'
+ ', ( SELECT TOP (1) ' + c.name + '.STDimension() FROM ' + s.name + '.' + t.name + ') Dimension'
+ ', ( SELECT TOP (1) ' + c.name + '.STSrid FROM ' + s.name + '.' + t.name + ') SRID'
+ ', ( SELECT TOP (1) ' + c.name + '.STGeometryType() FROM ' + s.name + '.' + t.name + ') GeometryType'
AS Query
FROM
sys.schemas s
INNER JOIN sys.tables t ON s.schema_id = t.schema_id
INNER JOIN sys.columns c ON t.object_id = c.object_id
WHERE
c.system_type_id = 240 -- 240 = geometry
) GeometryColumn
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 10, '')
);
EXEC ( @sql );
END
This builds a SQL statement which is a UNION of SELECTs, one for each geometry column defined in the database. Note that I'm using the sys. catalog views, which for SQL Server are better than using INFORMATION_SCHEMA.
Each of the individual SELECTs that this builds will return the name of the column, plus metadata from the value in the first row (arbitrarily picked).
The sproc then executes the statement it has built, and returns the combined result.
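Stripped of the string building, the catalog lookup at the heart of the procedure is just this (240 being the system type id for geometry, as used above):
SELECT s.name AS SchemaName, t.name AS TableName, c.name AS ColumnName
FROM sys.schemas s
INNER JOIN sys.tables t ON s.schema_id = t.schema_id
INNER JOIN sys.columns c ON t.object_id = c.object_id
WHERE c.system_type_id = 240; -- geometry columns only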
To use:
CREATE TABLE T1 (
Id int NOT NULL PRIMARY KEY
, Region geometry
)
;
CREATE TABLE T2 (
Id int NOT NULL PRIMARY KEY
, Source geometry
, Destination geometry
)
;
INSERT T1 VALUES ( 1, geometry::STGeomFromText('POLYGON((1 1, 3 3, 3 1, 1 1))', 4236)) ;
INSERT T2 VALUES ( 10
, geometry::STGeomFromText('POINT(1.3 2.4)', 4236)
, geometry::STGeomFromText('POINT(2.6 2.5)', 4236)) ;
then simply
EXEC GetGeometryColumns;
to get
SchemaName TableName ColumnName Dimension SRID GeometryType
---------- --------- ----------- ----------- ----------- ----------------------
dbo T1 Region 2 4236 Polygon
dbo T2 Source 0 4236 Point
dbo T2 Destination 0 4236 Point
If you want the results in a table, you can, for example, do this:
DECLARE #geometryColumn TABLE
(
SchemaName sysname
, TableName sysname
, ColumnName sysname
, Dimension int
, SRID int
, GeometryType nvarchar(100)
);
INSERT #geometryColumn EXEC GetGeometryColumns
SELECT * FROM #geometryColumn
I'd be interested to see if anyone can get the necessary logic into an actual VIEW...

T-SQL performance issue with bulk insert of millions of rows

I have created a query which is doing a bulk insert of millions of rows of data.
While running this query, I'm getting a tempdb memory error.
This is the query:
INSERT INTO ods.contact_method (cmeth_cust_id, cmeth_chan_type_id, cmeth_address_id,
cmeth_identifier, cmeth_active, cmeth_review_date,
cmeth_last_validated, cmeth_updatesrc_id, cmeth_updated_date)
SELECT
custpers_cust_id, 5, ad.adet_id,
COALESCE(street3, '') + ' ' + COALESCE(street2, '') + ' '
+ COALESCE(housenumber, '') + ' ' + COALESCE(street, ''),
CASE custpers_status
WHEN 'InActive' THEN 'N'
ELSE 'Y'
END,
Dateadd(year, 2, last_update_date),
last_update_date, 1, Getdate()
FROM
ods.address_detail (nolock) ad
JOIN
ods.customer_persona (nolock) cp ON cp.custpers_cust_id = ad.adet_updated_by
JOIN
ods.tempcust_address_insert (nolock)tp ON tp.bvoc = cp.custpers_bvoc_id
WHERE
NOT EXISTS (SELECT 1
FROM ods.contact_method (nolock) cm
WHERE cm.cmeth_cust_id = cp.custpers_cust_id
AND cm.cmeth_address_id IS NOT NULL
AND ad.adet_id = cm.cmeth_address_id)
I need help optimizing this query; for a bulk insert over millions of rows, should I use a LEFT JOIN or the NOT EXISTS condition?
You are getting a memory error in tempdb; this can be due to the two issues below.
1) Your query has a performance problem and is selecting unnecessary data. I cannot comment on this without knowing the table structures, indexes, fragmentation, and size of the data; however, changing the NOT EXISTS condition to a LEFT JOIN may well improve performance:
FROM ods.address_detail (nolock) ad
JOIN ods.customer_persona (nolock) cp
ON cp.custpers_cust_id = ad.adet_updated_by
JOIN ods.tempcust_address_insert (nolock)tp
ON tp.bvoc = cp.custpers_bvoc_id
left join contact_method cm (nolock)
on cm.cmeth_cust_id = cp.custpers_cust_id
AND ad.adet_id = cm.cmeth_address_id
AND cm.cmeth_address_id IS NOT NULL -- not sure if this condition is required
WHERE cm.cmeth_cust_id IS NULL -- add all primary key columns of contact_method here
2) A tempdb memory error will also occur if you are selecting a huge amount of data relative to the tempdb size.
To solve this, you can use TOP while inserting the data and run the same query multiple times; the LEFT JOIN condition in your INSERT query will make sure that no duplicate data is inserted.
SELECT TOP (1000000) -- this makes sure you are selecting a limited amount of data
custpers_cust_id,
5,
ad.adet_id,
COALESCE(street3, '') + ' '
........
If this is not a one-time activity, you will have to write a WHILE loop using the @@ROWCOUNT value to insert the data in batches:
DECLARE @count int = 1;
while @count > 0
begin
<your insert statement with select top>
set @count = @@ROWCOUNT
end
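A minimal, self-contained sketch of that batching pattern, using hypothetical src/tgt tables rather than the question's schema:
-- Hypothetical tables for illustration only.
CREATE TABLE dbo.src (id int PRIMARY KEY, payload varchar(100));
CREATE TABLE dbo.tgt (id int PRIMARY KEY, payload varchar(100));

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- Insert the next batch of rows not yet present in the target.
    INSERT INTO dbo.tgt (id, payload)
    SELECT TOP (100000) s.id, s.payload
    FROM dbo.src s
    LEFT JOIN dbo.tgt t ON t.id = s.id
    WHERE t.id IS NULL      -- the anti-join prevents duplicate inserts
    ORDER BY s.id;          -- deterministic batch boundaries

    SET @rows = @@ROWCOUNT; -- 0 once nothing is left to insert, ending the loop
END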

MS SQL: combine first and last name, then compare to a single full-name value

I'm trying to pull first and last name from a database, combine them into one value, and then compare the results with a different server's database. The problem is that my database has first and last name separate, while the target database has them combined into one string. Basically I'm trying to get a list from both databases, matching on the full name.
select a.empid,
select (SELECT REPLACE(RTRIM(COALESCE(a.FNAM + ' ', '') +
COALESCE(a.LNAM, '')), ' ', ' '))name1,
a.Email
from [db]..[user].[table] a, [server].[db].[dbo].[tblUsers] t
where name1 = t.Name
With the above, it just says invalid column name1, which makes sense because that is just a result-set column alias. How can I build this full-name value from my DB and then match it against the full-name value of column t.Name?
select a.empid,
a.FNAM, a.LNAM, t.Name
from [db]..[user].[table] a
join [server].[db].[dbo].[tblUsers] t
on Replace(a.FNAM + a.LNAM, ' ', '')
= Replace(t.Name, ' ', '')
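One caveat: if FNAM or LNAM can be NULL, the concatenation yields NULL and those rows silently drop out of the join. A variant guarding against that with COALESCE (same tables as above) would be:
select a.empid,
a.FNAM, a.LNAM, t.Name
from [db]..[user].[table] a
join [server].[db].[dbo].[tblUsers] t
on Replace(COALESCE(a.FNAM, '') + COALESCE(a.LNAM, ''), ' ', '')
= Replace(t.Name, ' ', '')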

SQL Server query taking a long time

I am executing the query below. It takes 80 seconds for just 17 records.
Can anybody tell me the reason, if you know it? I have already tried using indexes.
SELECT DISTINCT t.i_UserID,
u.vch_LoginName,
t.vch_PreviousEmailAddress AS 'vch_EmailAddress',
u.vch_DisplayName,
t.d_TransactionDate AS 'd_DateAdded',
'Old' AS 'vch_RecordStatus'
FROM tblEmailTransaction t
INNER JOIN tblUser u
ON t.i_UserID = u.i_UserID
WHERE t.vch_PreviousEmailAddress LIKE '%kala%'
Change the collation of the vch_PreviousEmailAddress column to Latin1_General_100_BIN2; a binary collation makes LIKE comparisons much cheaper than a linguistic one.
Create a covering index:
CREATE NONCLUSTERED INDEX ix
ON dbo.tblEmailTransaction (vch_PreviousEmailAddress)
INCLUDE (i_UserID, d_TransactionDate)
GO
And have fun with this query:
SELECT t.i_UserID,
u.vch_LoginName,
t.vch_PreviousEmailAddress AS vch_EmailAddress,
u.vch_DisplayName,
t.d_TransactionDate AS d_DateAdded,
'Old' AS vch_RecordStatus
FROM (
SELECT DISTINCT i_UserID,
vch_PreviousEmailAddress,
d_TransactionDate
FROM dbo.tblEmailTransaction
WHERE vch_PreviousEmailAddress LIKE '%kala%' COLLATE Latin1_General_100_BIN2
) t
JOIN dbo.tblUser u ON t.i_UserID = u.i_UserID
One other thing, which I find useful in solving problems like this:
Try running the following script. It will tell you which indexes you could add to your SQL Server database that would make the most (positive) improvement.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 100
ROUND(s.avg_total_user_cost * s.avg_user_impact * (s.user_seeks + s.user_scans),0) AS 'Total Cost',
s.avg_user_impact,
d.statement AS 'Table name',
d.equality_columns,
d.inequality_columns,
d.included_columns,
'CREATE INDEX [IndexName] ON ' + d.statement + ' ( '
+ case when (d.equality_columns IS NULL OR d.inequality_columns IS NULL)
then ISNULL(d.equality_columns, '') + ISNULL(d.inequality_columns, '')
else ISNULL(d.equality_columns, '') + ', ' + ISNULL(d.inequality_columns, '')
end + ' ) '
+ CASE WHEN d.included_columns IS NULL THEN '' ELSE 'INCLUDE ( ' + d.included_columns + ' )' end AS 'CREATE INDEX command'
FROM sys.dm_db_missing_index_groups g,
sys.dm_db_missing_index_group_stats s,
sys.dm_db_missing_index_details d
WHERE d.database_id = DB_ID()
AND s.group_handle = g.index_group_handle
AND d.index_handle = g.index_handle
ORDER BY [Total Cost] DESC
The right-hand column displays the CREATE INDEX command you'd need to run to create that index.
This is one of those lifesaver scripts which I run on our in-house databases every so often.
But yes, in your example, this is just likely to tell you that you need an index on the vch_PreviousEmailAddress field in your tblEmailTransaction table.
There are a few probable bottlenecks:
Missing index on tblEmailTransaction.i_UserID: check whether the table has the index.
Missing index on tblUser.i_UserID: check whether the table has the index.
LIKE statement: a LIKE with leading and trailing wildcards is known to perform poorly; as Devart suggested, try specifying the collation in this way:
WHERE vch_PreviousEmailAddress LIKE '%kala%' COLLATE Latin1_General_100_BIN2
To get a better view of what your query is doing, run this command together with it:
SET STATISTICS IO ON
It will report the I/O access that the query performs, and then we can see what is happening.
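For instance, run something like this against the question's table and check the Messages tab for per-table logical reads (the numbers will vary with your data):
SET STATISTICS IO ON;
SELECT DISTINCT t.i_UserID
FROM tblEmailTransaction t
WHERE t.vch_PreviousEmailAddress LIKE '%kala%';
-- The Messages tab then reports logical/physical reads for each table accessed.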
Just a final question: how many rows do the two tables contain?
Ciao

Database Tuning Advisor recommends to create an existing index

When I run SQL Server 2005 Database Tuning Advisor, it gives a recommendation to create an index, but it recommends indexing a column which already has an index on it. Why does it recommend creating the same index again?
Here is my SQL:
SELECT t.name AS 'affected_table'
, 'Create NonClustered Index IX_' + t.name + '_'
+ CAST(ddmid.index_handle AS VARCHAR(10))
+ ' On ' + ddmid.STATEMENT
+ ' (' + IsNull(ddmid.equality_columns,'')
+ CASE
WHEN ddmid.equality_columns IS NOT NULL
AND ddmid.inequality_columns IS NOT NULL
THEN ','
ELSE ''
END
+ ISNULL(ddmid.inequality_columns, '')
+ ')'
+ ISNULL(' Include (' + ddmid.included_columns + ');', ';')
AS sql_statement
, ddmigs.user_seeks
, ddmigs.user_scans
, CAST((ddmigs.user_seeks + ddmigs.user_scans)
* ddmigs.avg_user_impact AS INT) AS 'est_impact'
, ddmigs.last_user_seek
FROM
sys.dm_db_missing_index_groups AS ddmig
INNER JOIN sys.dm_db_missing_index_group_stats AS ddmigs
ON ddmigs.group_handle = ddmig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS ddmid
ON ddmig.index_handle = ddmid.index_handle
INNER Join sys.tables AS t
ON ddmid.OBJECT_ID = t.OBJECT_ID
WHERE
ddmid.database_id = DB_ID()
AND CAST((ddmigs.user_seeks + ddmigs.user_scans)
* ddmigs.avg_user_impact AS INT) > 100
ORDER BY
CAST((ddmigs.user_seeks + ddmigs.user_scans)
* ddmigs.avg_user_impact AS INT) DESC;
Perhaps try "DESC" to order a different way?
This worked in another similar SO question... Why does SQL Server 2005 Dynamic Management View report a missing index when it is not?
You may need to run your queries with a hint that forces use of the index that is already there:
SELECT * FROM table WITH (INDEX(IX_INDEX_SHOULD_BE_USED)) WHERE x = y
The existing index might not be considered useful by SQL Server. Run the query that is producing the missing-index suggestion, check the execution plan in SQL Server, and then build the other indexes that are needed.
Can you please post the full missing-index warning message? Generally, it is asking to create an index on the table but only to return certain fields (a covering index with INCLUDE columns), instead of an index on the table which will return all fields by default.
Go ahead and script out the details of your current index structure and then compare this to the recommendations made by the DTA.
I suspect that you will find there are structural differences in the results.
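To script out the existing index structure for that comparison, a query along these lines against the catalog views should do (a sketch, not DTA output):
SELECT t.name AS TableName,
i.name AS IndexName,
c.name AS ColumnName,
ic.key_ordinal,
ic.is_included_column
FROM sys.indexes i
INNER JOIN sys.index_columns ic
ON i.object_id = ic.object_id AND i.index_id = ic.index_id
INNER JOIN sys.columns c
ON ic.object_id = c.object_id AND ic.column_id = c.column_id
INNER JOIN sys.tables t
ON i.object_id = t.object_id
ORDER BY t.name, i.name, ic.key_ordinal;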
