Use the IN operator along with NOT LIKE for wildcard matching - database

I have two tables.
TableA (% is a wildcard character):
--------
| str  |
--------
| abc% |
| xyz% |
| pop% |
--------
TableB
----------------------
| id | name | domain |
----------------------
| 1  | Paul | zzz.ko |
| 2  | John | abc.lo |
| 3  | Kal  | pop.cm |
----------------------
I want to fetch all the records from TableB where domain does not match the TableA str field (wildcard matching).

A crude query would look like this:
select * from TableB b
where not exists (select 1 from TableA a where b.domain like a.str)
Also, you might want to avoid using "str" as a column name.
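The same anti-join can also be written with a LEFT JOIN; here is a sketch using the same table and column names as above:
select b.*
from TableB b
left join TableA a on b.domain like a.str
where a.str is null
Both forms return the TableB rows for which no TableA pattern matches.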

Related

T-SQL Query comparing Member counts between 2 tables

TABLE 1: Data sent to vendor
| MemberID | FirstName | LastName | Etc |
| :------: | :-------: | :------: | :-: |
| 1 | John | Smith | Etc |
| 2 | Jane | Doe | Etc |
| 3 | Dan | Laren | Etc |
TABLE 2: Data returned from vendor
| MemberID | FirstName | LastName | Etc |
| :------: | :-------: | :------: | :-: |
| 1 | John | Smith | Etc |
| 2 | Jane | Doe | Etc |
| 3 | Dan | Laren | Etc |
We send data to a vendor, where it is used in their matching algorithm, and they return the data with new information. Members are matched on a MemberID data element. How would I write a query that shows me which MemberIDs we sent to the vendor but the vendor didn't return?
NOT EXISTS would be my first choice here.
Example
SELECT *
FROM Table1 A
WHERE NOT EXISTS (SELECT 1
                  FROM Table2 B
                  WHERE A.MemberID = B.MemberID)
SELECT MemberID
FROM Table1
WHERE MemberID NOT IN (SELECT MemberID FROM Table2)
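One caveat with the NOT IN form, assuming Table2.MemberID is nullable: if the subquery returns any NULL, NOT IN matches nothing at all, so it is safer to filter NULLs out explicitly:
SELECT MemberID
FROM Table1
WHERE MemberID NOT IN (SELECT MemberID FROM Table2 WHERE MemberID IS NOT NULL)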
Using EXCEPT is one option.
SELECT sent.[MemberID] FROM Tbl1_SentToVendor sent
EXCEPT
SELECT recv.[MemberID] FROM Tbl2_ReturnedFromVendor recv
This compares just MemberID, but the EXCEPT syntax also supports additional columns (e.g., in case you want to filter out rows that are identical to what you already have).
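For example, a sketch of the same comparison over several columns at once (reusing the other columns from the tables above):
SELECT sent.[MemberID], sent.[FirstName], sent.[LastName] FROM Tbl1_SentToVendor sent
EXCEPT
SELECT recv.[MemberID], recv.[FirstName], recv.[LastName] FROM Tbl2_ReturnedFromVendor recv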

TSQL query parser in TSQL

I would like to have something like a procedure that takes a query definition as input and outputs a set of tables containing the individual elements of the query.
Searching the internet for this yields numerous results in various programming languages, but not in T-SQL itself. Is there such a resource around?
An example in order to illustrate what I mean by parser:
Input example (any query, really):
'select t1.col1,t2.col2
from table1 t1
inner join table2 t2
on t1.t2ref=t2.key'
The output, of course, will be a multitude of data. I mentioned tables, but it could be in any form, e.g. XML. Here is a VERY SIMPLISTIC and arbitrary example of decomposition for the query above:
tables_used:
+----+-----------+--------+------------+
| id | object_id | name | alias used |
+----+-----------+--------+------------+
| 1 | 43252345 | table1 | t1 |
| 2 | 6542625 | table2 | t2 |
+----+-----------+--------+------------+
columns_used:
+----------+-------------+
| table_id | column name |
+----------+-------------+
| 1 | col1 |
| 1 | t2ref |
| 2 | key |
| 2 | col2 |
+----------+-------------+
joins_used:
+-----+-----+-------+-----------------+
| tb1 | tb2 | type | on |
+-----+-----+-------+-----------------+
| 1 | 2 | inner | t1.t2ref=t2.key |
+-----+-----+-------+-----------------+

ACCESS: Compare numbers from different tables

I don't know much about Microsoft Access, but I need it to solve this issue:
Let's suppose that I have two tables:
Table A:
ZipCode Start | ZipCode End | Etc1          | Etc2
==============================================================
20000-000     | 29999-999   | Sample data 1 | Another Sample 1
30000-000     | 39999-999   | Sample data 2 | Another Sample 2
40000-000     | 49999-999   | Sample data 3 | Another Sample 3
Table B:
NAME     | ZipCode   | Etc1 | Etc2
==================================
John Doe | 31564-888 |      |
Johnny   | 22559-010 |      |
James    | 44411-000 |      |
How can I compare the zip codes in Table B with the specified ranges in Table A, and return the Etc1 and Etc2 values that match?
Thank you ALL!!
You can do it like this:
Select
    TableB.Name,
    TableB.ZipCode,
    TableA.Etc1,
    TableA.Etc2
From
    TableA,
    TableB
Where
    TableB.ZipCode Between TableA.[ZipCode Start] And TableA.[ZipCode End]

Unpivot SQL each row in SQL table to key-value pairs with group IDs

Running SQL Server 2012, I have a table in the following format:
ENC_ID | Name  | ED_YN | Seq
----------------------------
1234   | John  | Y     | 1
1234   | Sally | N     | 2
2345   | Chris | N     | 1
2345   | Sally | N     | 2
I would like to unpivot this into an entity-attribute-value list (if that's the right terminology - I am thinking of them as key-value pairs grouped by IDs), with the following format:
ENC_ID | Seq | Key   | Value
----------------------------
1234   | 1   | Name  | John
1234   | 1   | ED_YN | Y
1234   | 2   | Name  | Sally
1234   | 2   | ED_YN | N
2345   | 1   | Name  | Chris
2345   | 1   | ED_YN | N
2345   | 2   | Name  | Sally
2345   | 2   | ED_YN | N
I have seen various answers to this using UNION or UNPIVOT, but these solutions tend to be long and must be closely customized to the table. I'm looking for a solution that can be reused without a great deal of rewriting, as this pattern solves a problem I expect to run into frequently (ingesting data from a star schema into Tableau via extracts, where I don't necessarily know the number of value columns).
The closest thing I've found to solve this is this question/answer, but I haven't had success altering the solution to add ENC_ID and Seq to each row in the result table.
Thanks.
I would use CROSS APPLY with a table value constructor to do this unpivoting:
select ENC_ID, Seq, [Key], [Value]
from yourtable
cross apply
    (values ('Name', Name),
            ('ED_YN', ED_YN)) CS ([Key], [Value])
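If the table had more value columns, the VALUES list can simply be extended with one pair per column; as a sketch, assuming a hypothetical numeric Age column that needs a cast so the value types line up:
select ENC_ID, Seq, [Key], [Value]
from yourtable
cross apply
    (values ('Name', Name),
            ('ED_YN', ED_YN),
            ('Age', cast(Age as varchar(100)))) CS ([Key], [Value])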
You can try the query below. It uses classic UNPIVOT syntax.
Since I am unsure about your column types, I have cast both of them as varchar(100); you can increase the length from 100.
Working SQL Fiddle demo link:
select enc_id, seq, k [key], v [value]
from (select enc_id, seq,
             cast(name as varchar(100)) as name,
             cast(ed_yn as varchar(100)) as ed_yn
      from r) s
UNPIVOT
    (v for k in ([name], [ed_yn])) up

MySQL: I want to optimize this further

So I started off with this query:
SELECT * FROM TABLE1 WHERE hash IN (SELECT id FROM temptable);
It took forever, so I ran an explain:
mysql> explain SELECT * FROM TABLE1 WHERE hash IN (SELECT id FROM temptable);
+----+--------------------+-----------+------+---------------+------+---------+------+------------+-------------+
| id | select_type        | table     | type | possible_keys | key  | key_len | ref  | rows       | Extra       |
+----+--------------------+-----------+------+---------------+------+---------+------+------------+-------------+
|  1 | PRIMARY            | TABLE1    | ALL  | NULL          | NULL | NULL    | NULL | 2554388553 | Using where |
|  2 | DEPENDENT SUBQUERY | temptable | ALL  | NULL          | NULL | NULL    | NULL |       1506 | Using where |
+----+--------------------+-----------+------+---------------+------+---------+------+------------+-------------+
2 rows in set (0.01 sec)
It wasn't using an index. So, my second pass:
mysql> explain SELECT * FROM TABLE1 JOIN temptable ON TABLE1.hash=temptable.hash;
+----+-------------+-----------+------+---------------+------+---------+-----------------------+------+-------------+
| id | select_type | table     | type | possible_keys | key  | key_len | ref                   | rows | Extra       |
+----+-------------+-----------+------+---------------+------+---------+-----------------------+------+-------------+
|  1 | SIMPLE      | temptable | ALL  | hash          | NULL | NULL    | NULL                  | 1506 |             |
|  1 | SIMPLE      | TABLE1    | ref  | hash          | hash | 5       | testdb.temptable.hash |  527 | Using where |
+----+-------------+-----------+------+---------------+------+---------+-----------------------+------+-------------+
2 rows in set (0.00 sec)
Can I do any other optimization?
You can gain some more speed by using a covering index, at the cost of extra space consumption. A covering index is one which can satisfy all requested columns in a query without performing a further lookup into the clustered index.
First of all, get rid of the SELECT * and explicitly select the fields that you require. Then you can add all the fields in the SELECT clause to the right-hand side of your composite index. For example, if your query looks like this:
SELECT first_name, last_name, age
FROM table1
JOIN temptable ON table1.hash = temptable.hash;
Then you can have a covering index that looks like this:
CREATE INDEX ix_index ON table1 (hash, first_name, last_name, age);
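To confirm that the index is actually covering, re-run EXPLAIN on the narrowed query; if MySQL can serve table1 entirely from the index, the Extra column should show "Using index":
EXPLAIN SELECT first_name, last_name, age
FROM table1
JOIN temptable ON table1.hash = temptable.hash;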
