Apologies if the title is too vague; I had trouble phrasing it properly.
Basically, I'm trying to figure out whether Apache Spark, together with Apache Kafka, can sync data from my relational database to Elasticsearch.
My plan is to use one of the Kafka connectors to read data from the RDBMS and push it into Kafka topics. Below are the DDL and sample data for the model. It's quite basic: Report and Product tables with a many-to-many relationship held in the ReportProduct table:
CREATE TABLE dbo.Report (
ReportID INT NOT NULL PRIMARY KEY,
Title NVARCHAR(500) NOT NULL,
PublishedOn DATETIME2 NOT NULL);
CREATE TABLE dbo.Product (
ProductID INT NOT NULL PRIMARY KEY,
ProductName NVARCHAR(100) NOT NULL);
CREATE TABLE dbo.ReportProduct (
ReportID INT NOT NULL,
ProductID INT NOT NULL,
PRIMARY KEY (ReportID, ProductID),
FOREIGN KEY (ReportID) REFERENCES dbo.Report (ReportID),
FOREIGN KEY (ProductID) REFERENCES dbo.Product (ProductID));
INSERT INTO dbo.Report (ReportID, Title, PublishedOn)
VALUES (1, N'Yet Another Apache Spark StackOverflow question', '2017-09-12T19:15:28');
INSERT INTO dbo.Product (ProductID, ProductName)
VALUES (1, N'Apache'), (2, N'Spark'), (3, N'StackOverflow'), (4, N'Random product');
INSERT INTO dbo.ReportProduct (ReportID, ProductID)
VALUES (1, 1), (1, 2), (1, 3), (1, 4);
SELECT *
FROM dbo.Report AS R
INNER JOIN dbo.ReportProduct AS RP
ON RP.ReportID = R.ReportID
INNER JOIN dbo.Product AS P
ON P.ProductID = RP.ProductID;
My goal is to transform this into a document with the following structure:
{
"ReportID":1,
"Title":"Yet Another Apache Spark StackOverflow question",
"PublishedOn":"2017-09-12T19:15:28+00:00",
"Product":[
{
"ProductID":1,
"ProductName":"Apache"
},
{
"ProductID":2,
"ProductName":"Spark"
},
{
"ProductID":3,
"ProductName":"StackOverflow"
},
{
"ProductID":4,
"ProductName":"Random product"
}
]
}
I was able to produce this kind of structure using static data that I mocked up locally:
import org.apache.spark.sql.functions.{collect_list, struct}

report.join(
  report_product.join(product, "product_id")
    .groupBy("report_id")
    .agg(
      collect_list(struct("product_id", "product_name")).alias("product")
    ),
  "report_id"
).show()
But I realize this is too basic, and streams are going to be far more complicated.
Data changes irregularly: reports and their products are constantly being changed, while products themselves change only once in a while (mostly weekly).
I would like to replicate into Elasticsearch any change that happens in any of these tables.
Use Kafka Connect to pull the data from your source DB: you can use the JDBC Source connector, which is available as part of the Confluent Platform (or separately), and you may also want to investigate kafka-connect-cdc-mssql.
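For illustration, a minimal standalone config for the JDBC source connector could look roughly like this. The connection URL, database name, and the LastModified column are assumptions for your environment (the DDL above has no timestamp column, so you would add one, or fall back to bulk mode):

name=mssql-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:sqlserver://localhost:1433;databaseName=Reports
table.whitelist=Report,Product,ReportProduct
mode=timestamp
timestamp.column.name=LastModified
topic.prefix=mssql-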
Once you've got the data in Kafka, use either the Kafka Streams API or the newly released KSQL to manipulate the data as desired. Which you choose will be driven by things like your preference for coding in Java (with Kafka Streams) or manipulating data in a SQL-like environment (with KSQL). Either way, the output is another Kafka topic.
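As a sketch of the KSQL route (the stream and topic names here are assumptions, and the nested-array aggregation itself may still be easier in Kafka Streams), you would first register a Connect topic as a stream:

CREATE STREAM report_stream (ReportID INTEGER, Title VARCHAR, PublishedOn VARCHAR)
  WITH (KAFKA_TOPIC='mssql-Report', VALUE_FORMAT='JSON');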
Finally, stream the Kafka topic from above into Elasticsearch using the Elasticsearch Kafka Connect plugin (available standalone, or as part of the Confluent Platform).
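Again as a sketch, the sink side could be configured along these lines (the topic name and Elasticsearch URL are placeholders):

name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
topics=reports-with-products
connection.url=http://localhost:9200
type.name=report
key.ignore=false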
Related
In CitusDB, I can create an empty table with:
CREATE TABLE table1 (col1 text, col2 text);
I can tell table1 how to partition the data, which will later be loaded into the table, by running this:
SELECT create_distributed_table('table1', 'col1');
At this point, I know how my table is distributed across the CitusDB nodes.
However, if I come across a new table that I didn't create, but which I know is distributed, how do I find out which column it is distributed on?
You want to use the Citus column_to_column_name function described in the Citus docs: http://docs.citusdata.com/en/v9.3/develop/api_udf.html
SELECT column_to_column_name(logicalrelid, partkey) AS dist_col_name
FROM pg_dist_partition
WHERE logicalrelid='<table>'::regclass;
I am building a SQLite database and am not sure how to proceed with this scenario.
I'll use a real-world example to explain what I need:
I have a list of products that are sold by many stores in various states. Not every Store sells a particular Product at all, and those that do may only sell it in one State or another. Most stores sell a product in most states, but not all.
For example, let's say I am trying to buy a vacuum cleaner in Hawaii. Joe's Hardware sells vacuums in 18 states, but not in Hawaii. Walmart sells vacuums in Hawaii, but not microwaves. Burger King does not sell vacuums at all, but will give me a Whopper anywhere in the US.
So if I am in Hawaii and search for a vacuum, I should only get Walmart as a result. While other stores may sell vacuums, and may sell in Hawaii, they don't do both but Walmart does.
How do I efficiently create this type of relationship in a relational database? (Specifically, I am currently using SQLite, but need to be able to convert to MySQL in the future.)
Obviously, I would need tables for Product, Store, and State, but I am at a loss on how to create and query the appropriate join tables...
If I, for example, query a certain Product, how would I determine which Store would sell it in a particular State, keeping in mind that Walmart may not sell vacuums in Hawaii, but they do sell tea there?
I understand the basics of 1:1, 1:n, and M:n relationships in relational databases, but I am not sure how to handle this complexity where there is a many-to-many-to-many situation.
If you could show some SQL statements (or DDL) that demonstrates this, I would be very grateful. Thank you!
An accepted and common way is to use a table that has a column referencing the product and another referencing the store. There are many names for such a table: reference table, associative table, mapping table, to name some.
You want these to be efficient, so try to reference by a number, which of course has to uniquely identify what it is referencing. With SQLite, by default a table has a special column, normally hidden, that is such a unique number. It's the rowid, and it is typically the most efficient way of accessing rows, as SQLite has been designed with this common usage in mind.
SQLite allows you to create a column per table that is an alias of the rowid: you simply define the column as INTEGER PRIMARY KEY, and typically you'd name the column id.
So utilising these the reference table would have a column for the product's id and another for the store's id catering for every combination of product/store.
As an example, three tables are created (stores, products, and a reference/mapping table), the first two being populated using :-
CREATE TABLE IF NOT EXISTS _products(id INTEGER PRIMARY KEY, productname TEXT, productcost REAL);
CREATE TABLE IF NOT EXISTS _stores (id INTEGER PRIMARY KEY, storename TEXT);
CREATE TABLE IF NOT EXISTS _product_store_relationships (productreference INTEGER, storereference INTEGER);
INSERT INTO _products (productname,productcost) VALUES
('thingummy',25.30),
('Sky Hook',56.90),
('Tartan Paint',100.34),
('Spirit Level Bubbles - Large', 10.43),
('Spirit Level bubbles - Small',7.77)
;
INSERT INTO _stores (storename) VALUES
('Acme'),
('Shops-R-Them'),
('Harrods'),
('X-Mart')
;
At this point _products and _stores contain the rows inserted above, while _product_store_relationships is empty.
Placing products into stores (for example) could be done using :-
-- Build some relationships/references/mappings
INSERT INTO _product_store_relationships VALUES
(2,2), -- Sky Hooks are in Shops-R-Them
(2,4), -- Sky Hooks in X-Mart
(1,3), -- thingummys in Harrods
(1,1), -- and Acme
(1,2), -- and Shops-R-Them
(4,4), -- Spirit Level Bubbles Large in X-Mart
(5,4), -- Spirit Level Bubbles Small in X-Mart
(3,3) -- Tartan Paint in Harrods
;
The _product_store_relationships table then contains the eight rows above.
A query such as the following would list the products in stores sorted by store and then product :-
SELECT storename, productname, productcost FROM _stores
JOIN _product_store_relationships ON _stores.id = storereference
JOIN _products ON _product_store_relationships.productreference = _products.id
ORDER BY storename, productname
;
This query will only list products whose name contains an s or S (LIKE is case-insensitive by default in SQLite), with the output sorted by productcost in ASCending order, then storename, then productname :-
SELECT storename, productname, productcost FROM _stores
JOIN _product_store_relationships ON _stores.id = storereference
JOIN _products ON _product_store_relationships.productreference = _products.id
WHERE productname LIKE '%s%'
ORDER BY productcost,storename, productname
;
Expanding the above to consider states.
Two new tables, _states and _store_state_references, are added.
Although there is no real need for a reference table (a store would only be in one state), it also copes with the case where you consider a chain of stores to be a single store.
The SQL could be :-
CREATE TABLE IF NOT EXISTS _states (id INTEGER PRIMARY KEY, statename TEXT);
INSERT INTO _states (statename) VALUES
('Texas'),
('Ohio'),
('Alabama'),
('Queensland'),
('New South Wales')
;
CREATE TABLE IF NOT EXISTS _store_state_references (storereference INTEGER, statereference INTEGER);
INSERT INTO _store_state_references VALUES
(1,1),
(2,5),
(3,1),
(4,3)
;
If the following query were run :-
SELECT storename,productname,productcost,statename
FROM _stores
JOIN _store_state_references ON _stores.id = _store_state_references.storereference
JOIN _states ON _store_state_references.statereference = _states.id
JOIN _product_store_relationships ON _stores.id = _product_store_relationships.storereference
JOIN _products ON _product_store_relationships.productreference = _products.id
WHERE statename = 'Texas' AND productname = 'Sky Hook'
;
The output would be empty, since neither store that stocks Sky Hooks has a presence in Texas.
Without the WHERE clause, every product/store/state combination would be listed.
The following would make Stores-R-Them have a presence in all states :-
INSERT INTO _store_state_references VALUES
(2,1),(2,2),(2,3),(2,4)
;
Re-running the Sky Hooks in Texas query now returns a row for Shops-R-Them.
Note: this just covers the basics of the topic.
You will need to create a combined mapping table of products, states, and stores, e.g. tbl_product_states_stores, which stores the mapping between product, state, and store. The columns will be id, product_id, state_id, stores_id.
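A minimal SQLite sketch of that combined mapping table, reusing the _products, _stores and _states tables from the earlier answer, might be:

CREATE TABLE IF NOT EXISTS tbl_product_states_stores (
    id INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES _products(id),
    state_id INTEGER REFERENCES _states(id),
    stores_id INTEGER REFERENCES _stores(id)
);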
This question was asked in an interview. Design an organizational structure where an employee can have direct reports and indirect reports (that is, reports of reports). The design should be such that a single query can retrieve either direct or indirect reports, or both.
I suggested,
Employee
----------
id
name
Reportee
------
emp_id FK
reportee_id FK
isDirect
The interviewer said the optimal solution is
Employee
-------
id
name
reporting_path like (a>b>c)
Adding an additional table takes more space, but the query will execute faster. I said that due to the string matching, the path-based approach is bad and yields poor performance.
So which approach is optimal?
The interviewer's approach is dumb because it does not use referential integrity.
For a purely hierarchical model (an employee cannot report to more than one boss), this is the best approach:
create table employees (
employee_id int primary key,
name varchar(whatever) not null,
supervisor_id int null references employees(employee_id)
);
insert into employees (employee_id, name, supervisor_id) values
(1, 'Big Boss Bill', null),
(2, 'Vice President Victor', 1),
(3, 'Underling Ulysses', 2),
(4, 'Subordinate Sam', 2);
You can then use Recursive Common Table Expressions to query reports.
Some example queries here:
http://blog.databasepatterns.com/2014/02/trees-paths-recursive-cte-postgresql.html
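For instance, a recursive CTE over the employees table above (PostgreSQL syntax; SQL Server uses WITH without the RECURSIVE keyword) lists all direct and indirect reports of Big Boss Bill:

WITH RECURSIVE reports AS (
    -- anchor: direct reports of employee 1
    SELECT employee_id, name, supervisor_id
    FROM employees
    WHERE supervisor_id = 1
    UNION ALL
    -- recursive step: reports of the people found so far
    SELECT e.employee_id, e.name, e.supervisor_id
    FROM employees e
    JOIN reports r ON e.supervisor_id = r.employee_id
)
SELECT * FROM reports;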
I have several different tables in my database and I'm trying to use Sphinx to do fast full-text searches. For ease of discussion, let's say the main records of interest are packing slips, one of which is included when an order ships. How do I use Sphinx to execute complex queries across all of these tables without completely denormalizing the database?
Each packing slip lists the order number, shipper, recipient, and the tracking number of each box included with the shipment. A separate table contains information about the order items. An additional table contains the customer address information. So, orders contain boxes and boxes contain items. (Example schema listed at the bottom of this question).
I would like to be able to query Sphinx to answers to questions like:
How many people who live on a street named "Maple" ordered an item with "large" in the description?
Which orders include the word "blue" in either the box description or the order items' descriptions?
To answer these types of questions, I need to refer to several tables. Since Sphinx doesn't have JOINs, one option is to denormalize the database. Denormalizing using a view, so that each row represents an order item plus all of the data of its parent box and order, would result in billions of very wide rows. So I've been creating a separate index for each table instead. But that doesn't allow me to query across tables as a SQL JOIN would. Is there another solution?
Example database
CREATE TABLE orders (
id integer PRIMARY KEY,
date_ordered date,
customer_po varchar
);
INSERT INTO orders VALUES (1, '2012-12-13', NULL);
INSERT INTO orders VALUES (2, '2012-12-14', 'DF312442');
CREATE TABLE parties (
id integer PRIMARY KEY,
order_id integer NOT NULL REFERENCES orders(id),
party_type varchar,
company varchar,
city varchar,
state char(2)
);
INSERT INTO parties VALUES (1, 1, 'shipper', 'ACME, Inc.', 'New York', 'NY');
INSERT INTO parties VALUES (2, 1, 'recipient', 'Wylie Coyote Corp.', 'Flagstaff', 'AZ');
INSERT INTO parties VALUES (3, 2, 'shipper', 'Cyberdyne', 'Las Vegas', 'NV');
-- Please disregard the fact that this design permits multiple shippers and multiple recipients
-- per order. This is a vastly simplified version of the system I'm working on.
CREATE TABLE boxes (
id integer PRIMARY KEY,
order_id integer NOT NULL REFERENCES orders(id),
tracking_num varchar NOT NULL,
description varchar NOT NULL
);
INSERT INTO boxes VALUES (1, 1, '1234567890', 'household goods');
INSERT INTO boxes VALUES (2, 1, '0987654321', 'kitchen appliances');
INSERT INTO boxes VALUES (3, 2, 'ABCDE12345', 'audio equipment');
CREATE TABLE box_contents (
id integer PRIMARY KEY,
order_id integer NOT NULL REFERENCES orders(id),
box integer NOT NULL REFERENCES boxes(id),
qty_units integer,
description varchar
);
INSERT INTO box_contents VALUES (1, 1, 1, 4, 'cookbook');
INSERT INTO box_contents VALUES (2, 1, 1, 2, 'baby bottle');
INSERT INTO box_contents VALUES (3, 1, 2, 1, 'television');
INSERT INTO box_contents VALUES (4, 2, 3, 2, 'lamp');
You put the JOIN in the sql_query that builds the index. The tables remain normalized, but you denormalize when building the index.
It's only a basic example, but your query would be something like:
sql_query = SELECT o.id,customer_po,UNIX_TIMESTAMP(date_ordered) AS date_ordered, \
GROUP_CONCAT(DISTINCT party_type) AS party_type, \
GROUP_CONCAT(DISTINCT company) AS company, \
GROUP_CONCAT(DISTINCT city) AS city, \
GROUP_CONCAT(DISTINCT description) AS description \
FROM orders o \
INNER JOIN parties p ON (o.id = p.order_id) \
INNER JOIN box_contents b ON (o.id = b.order_id) \
GROUP BY o.id \
ORDER BY NULL
Update: alternatively, you can use sql_joined_field to do the same but avoid actual joins in sql_query. Sphinx then does the join process for you.
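As a rough sketch of that variant against the schema above (assuming the order id is the Sphinx document id; note that the query must return rows ordered by it):

sql_joined_field = description from query; \
    SELECT order_id, description FROM box_contents ORDER BY order_id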
I have a competition results table which holds team members' names and their rankings on one hand.
On the other hand, I need to maintain a table of unique competitor names:
CREATE TABLE Competitors (cName nvarchar(64) primary key)
Now I have some 200,000 results in the first table, and when the Competitors table is empty I can perform this:
INSERT INTO Competitors SELECT DISTINCT Name FROM CompResults
And the query only takes some 5 seconds to insert about 11,000 names.
So far this is not a critical application, so I can consider truncating the Competitors table once a month, when I receive the new competition results with some 10,000 rows.
But what is the best practice when new results are added, with new AND existing competitors? I don't want to truncate the existing Competitors table.
I need to perform an INSERT statement for new competitors only, and do nothing if they already exist.
Semantically you are asking "insert Competitors that don't already exist":
INSERT Competitors (cName)
SELECT DISTINCT Name
FROM CompResults cr
WHERE
NOT EXISTS (SELECT * FROM Competitors c
WHERE cr.Name = c.cName)
Another option is to left join your results table with your existing Competitors table and find the new competitors by filtering for the distinct records that don't match in the join:
INSERT Competitors (cName)
SELECT DISTINCT cr.Name
FROM CompResults cr left join
Competitors c on cr.Name = c.cName
where c.cName is null
The newer MERGE syntax also offers a compact, elegant and efficient way to do that:
MERGE INTO Competitors AS Target
USING (SELECT DISTINCT Name FROM CompResults) AS Source ON Target.cName = Source.Name
WHEN NOT MATCHED THEN
INSERT (cName) VALUES (Source.Name);
Don't know why anyone else hasn't said this yet;
NORMALISE.
You've got a table that models competitions? Competitions are made up of Competitors? You need a distinct list of Competitors in one or more Competitions...
You should have the following tables:
CREATE TABLE Competitor (
[CompetitorID] INT IDENTITY(1,1) PRIMARY KEY
, [CompetitorName] NVARCHAR(255)
)
CREATE TABLE Competition (
[CompetitionID] INT IDENTITY(1,1) PRIMARY KEY
, [CompetitionName] NVARCHAR(255)
)
CREATE TABLE CompetitionCompetitors (
[CompetitionID] INT
, [CompetitorID] INT
, [Score] INT
, PRIMARY KEY (
[CompetitionID]
, [CompetitorID]
)
)
With Constraints on CompetitionCompetitors.CompetitionID and CompetitorID pointing at the other tables.
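For completeness, those constraints could be declared like so (the constraint names are just examples):

ALTER TABLE CompetitionCompetitors ADD CONSTRAINT FK_CompetitionCompetitors_Competition
    FOREIGN KEY ([CompetitionID]) REFERENCES Competition ([CompetitionID])

ALTER TABLE CompetitionCompetitors ADD CONSTRAINT FK_CompetitionCompetitors_Competitor
    FOREIGN KEY ([CompetitorID]) REFERENCES Competitor ([CompetitorID])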
With this kind of table structure -- your keys are all simple INTS -- there doesn't seem to be a good NATURAL KEY that would fit the model so I think a SURROGATE KEY is a good fit here.
So if you had this, then to get the distinct list of competitors in a particular competition you can issue a query like this:
DECLARE @CompetitionName VARCHAR(50)
SET @CompetitionName = 'London Marathon'

SELECT
    p.[CompetitorName] AS [CompetitorName]
FROM
    Competitor AS p
WHERE
    EXISTS (
        SELECT 1
        FROM
            CompetitionCompetitors AS cc
            JOIN Competition AS c ON c.[CompetitionID] = cc.[CompetitionID]
        WHERE
            cc.[CompetitorID] = p.[CompetitorID]
            AND c.[CompetitionName] = @CompetitionName
    )
And if you wanted the score for each competition a competitor is in:
SELECT
p.[CompetitorName]
, c.[CompetitionName]
, cc.[Score]
FROM
Competitor AS p
JOIN CompetitionCompetitors AS cc ON cc.[CompetitorID] = p.[CompetitorID]
JOIN Competition AS c ON c.[CompetitionID] = cc.[CompetitionID]
And when you have a new competition with new competitors, you simply check which ones already exist in the Competitor table. If they already exist, you don't insert into Competitor for those competitors, and you do insert the new ones.
Then you insert the new Competition in Competition and finally you just make all the links in CompetitionCompetitors.
You will need to join the tables together and get a list of unique competitors that don't already exist in Competitors.
This will insert unique records.
INSERT Competitors (cName)
SELECT DISTINCT Name
FROM CompResults cr LEFT JOIN Competitors c ON cr.Name = c.cName
WHERE c.cName IS NULL
There may come a time when this insert needs to be done quickly without being able to wait for the selection of unique names. In that case, you could insert the unique names into a temporary table, and then use that temporary table to insert into your real table. This works well because all the processing happens at the time you are inserting into a temporary table, so it doesn't affect your real table. Then when you have all the processing finished, you do a quick insert into the real table. I might even wrap the last part, where you insert into the real table, inside a transaction.
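A sketch of that staging approach, using the table names from the question (the temp table name is arbitrary):

-- stage the new names; the expensive work happens against the temp table
SELECT DISTINCT cr.Name
INTO #NewCompetitors
FROM CompResults cr
LEFT JOIN Competitors c ON cr.Name = c.cName
WHERE c.cName IS NULL;

-- then a quick insert into the real table, wrapped in a transaction
BEGIN TRANSACTION;
INSERT INTO Competitors (cName)
SELECT Name FROM #NewCompetitors;
COMMIT TRANSACTION;

DROP TABLE #NewCompetitors;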
The answers above which talk about normalizing are great! But what if you find yourself in a position like mine, where you're not allowed to touch the database schema or structure as it stands? E.g., the DBAs are 'gods' and all suggested revisions go to /dev/null?
In that respect, I feel this has also been answered by another Stack Overflow post, in addition to all the users above giving code samples.
I'm reposting the code from INSERT VALUES WHERE NOT EXISTS which helped me the most since I can't alter any underlying database tables:
INSERT INTO #table1 (Id, guidd, TimeAdded, ExtraData)
SELECT Id, guidd, TimeAdded, ExtraData
FROM #table2
WHERE NOT EXISTS (Select Id, guidd From #table1 WHERE #table1.id = #table2.id)
-----------------------------------
MERGE #table1 as [Target]
USING (select Id, guidd, TimeAdded, ExtraData from #table2) as [Source]
(id, guidd, TimeAdded, ExtraData)
on [Target].id = [Source].id
WHEN NOT MATCHED THEN
INSERT (id, guidd, TimeAdded, ExtraData)
VALUES ([Source].id, [Source].guidd, [Source].TimeAdded, [Source].ExtraData);
------------------------------
INSERT INTO #table1 (id, guidd, TimeAdded, ExtraData)
SELECT id, guidd, TimeAdded, ExtraData from #table2
EXCEPT
SELECT id, guidd, TimeAdded, ExtraData from #table1
------------------------------
INSERT INTO #table1 (id, guidd, TimeAdded, ExtraData)
SELECT #table2.id, #table2.guidd, #table2.TimeAdded, #table2.ExtraData
FROM #table2
LEFT JOIN #table1 on #table1.id = #table2.id
WHERE #table1.id is null
The above code uses different fields than what you have, but you get the general gist with the various techniques.
Note that as per the original answer on Stack Overflow, this code was copied from here.
Anyway, my point is that "best practice" often comes down to what you can and can't do, as much as theory.
If you're able to normalize and generate indexes/keys -- great!
If not, and you have to resort to code hacks like me, hopefully the above helps.
Good luck!
Normalizing your operational tables, as suggested by Transact Charlie, is a good idea and will save many headaches and problems over time. But there are such things as interface tables, which support integration with external systems, and reporting tables, which support things like analytical processing; those types of tables should not necessarily be normalized. In fact, it is very often much more convenient and performant for them not to be.
In this case, I think Transact Charlie's proposal for your operational tables is a good one.
But I would add an index (not necessarily unique) to CompetitorName in the Competitors table to support efficient joins on CompetitorName for the purposes of integration (loading of data from external sources), and I would put an interface table into the mix: CompetitionResults.
CompetitionResults should contain whatever data your competition results have in it. The point of an interface table like this one is to make it as quick and easy as possible to truncate and reload it from an Excel sheet or a CSV file, or whatever form you have that data in.
That interface table should not be considered part of the normalized set of operational tables. Then you can join with CompetitionResults, as suggested by Richard, to insert records into Competitors that don't already exist and update the ones that do (for example, if you actually have more information about competitors, like their phone number or email address).
One thing I would note - in reality, Competitor Name, it seems to me, is very unlikely to be unique in your data. In 200,000 competitors, you may very well have 2 or more David Smiths, for example. So I would recommend that you collect more information from competitors, such as their phone number or an email address, or something which is more likely to be unique.
Your operational table, Competitors, should just have one column for each data item that contributes to the composite natural key; for example, it should have one column for a primary email address. But the interface table should have a slot for the old and new values of a primary email address, so that the old value can be used to look up the record in Competitors and update that part of it to the new value.
So CompetitionResults should have some "old" and "new" fields - oldEmail, newEmail, oldPhone, newPhone, etc. That way you can form a composite key, in Competitors, from CompetitorName, Email, and Phone.
Then when you have some competition results, you can truncate and reload your CompetitionResults table from your Excel sheet or whatever you have, run a single, efficient insert to add all the new competitors to the Competitors table, and a single, efficient update to refresh the information about existing competitors from CompetitionResults. And you can do a single insert to add new rows to the CompetitionCompetitors table. These things can be done in a ProcessCompetitionResults stored procedure, which could be executed after loading the CompetitionResults table.
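A rough sketch of what that procedure might look like; note that the Email and Phone columns, and the CompetitionResults layout, are the suggestions from this answer, not part of the original schema:

CREATE PROCEDURE ProcessCompetitionResults
AS
BEGIN
    -- insert competitors we have never seen before
    INSERT INTO Competitors (cName, Email, Phone)
    SELECT DISTINCT cr.CompetitorName, cr.newEmail, cr.newPhone
    FROM CompetitionResults cr
    WHERE NOT EXISTS (SELECT 1 FROM Competitors c
                      WHERE c.cName = cr.CompetitorName
                        AND c.Email = cr.oldEmail);

    -- update details for competitors we already know
    UPDATE c
    SET Email = cr.newEmail,
        Phone = cr.newPhone
    FROM Competitors c
    JOIN CompetitionResults cr
        ON c.cName = cr.CompetitorName
       AND c.Email = cr.oldEmail;
END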
That's a sort of rudimentary description of what I've seen done over and over in the real world with Oracle Applications, SAP, PeopleSoft, and a laundry list of other enterprise software suites.
One last comment I'd make is one I've made before on SO: if you create a foreign key that ensures a Competitor exists in the Competitors table before you can add a row with that Competitor in it to CompetitionCompetitors, make sure that foreign key is set to cascade updates and deletes. That way, if you need to delete a competitor, you can do it, and all the rows associated with that competitor will be deleted automatically. Otherwise, by default, the foreign key will require you to delete all the related rows out of CompetitionCompetitors before it will let you delete a Competitor.
(Some people think non-cascading foreign keys are a good safety precaution, but my experience is that they're just a freaking pain in the butt that are more often than not simply the result of an oversight, and they create a bunch of make-work for DBAs. Dealing with people accidentally deleting stuff is why you have things like "are you sure" dialogs, various types of regular backups, and redundant data sources. It's far, far more common to actually want to delete a competitor, whose data is all messed up for example, than it is to accidentally delete one and then go "Oh no! I didn't mean to do that! And now I don't have their competition results! Aaaahh!" The latter is certainly common enough, so you do need to be prepared for it, but the former is far more common, so the easiest and best way to prepare for the former, imo, is to just make foreign keys cascade updates and deletes.)
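In T-SQL, a cascading version of the CompetitionCompetitors foreign key sketched earlier would look like this (the constraint name is again illustrative):

ALTER TABLE CompetitionCompetitors ADD CONSTRAINT FK_CompetitionCompetitors_Competitor_Cascade
    FOREIGN KEY ([CompetitorID]) REFERENCES Competitor ([CompetitorID])
    ON UPDATE CASCADE
    ON DELETE CASCADE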
OK, this was asked 7 years ago, but I think the best solution here is to forgo the new table entirely and just do this as a custom view. That way you're not duplicating data, there's no worry about uniqueness, and it doesn't touch the actual database structure. Something like this:
CREATE VIEW vw_competitions
AS
SELECT
    Id,
    CompetitionName,
    CompetitionType,
    OtherField1,
    OtherField2 -- add the fields you want exposed from the Competitions table
FROM Competitions
GO
Other items can be added here like joins on other tables, WHERE clauses, etc. This is most likely the most elegant solution to this problem, as you now can just query the view:
SELECT *
FROM vw_competitions
...and add any WHERE, IN, or EXISTS clauses to the view query.
Additionally, if you have multiple columns to insert and want to check whether the combination already exists, use something like the following (this assumes the incoming results table also carries city and state columns):
Insert Into [Competitors] (cName, cCity, cState)
Select new.cName, new.cCity, new.cState
from
(
    -- assumes the incoming results table (CompResults here) carries name, city and state
    select distinct cName, cCity, cState
    from [CompResults]
) new
left join
(
    select distinct cName, cCity, cState
    from [Competitors]
) existing
on new.cName = existing.cName and new.cCity = existing.cCity and new.cState = existing.cState
where existing.cName is null