All:
I am having trouble figuring out how to do cascading parameters with hard-coded values.
I have a @Company parameter that shows the following hard-coded values:
--Select a Company--
Walmart
Target
KMart
When a user selects a company, I need to populate a second parameter, @Site, with hard-coded values as well, but the @Site values change depending upon the @Company selected.
All the values are hard-coded; none of them come from a database. All the examples I have found show pulling the information from a database.
Would anyone be able to help?
You can simulate a database table.
Create a new datasource if you don't already have one.
I assumed you have Values (IDs) and Labels (company names) in your first parameter and that it is called CompanyID; adjust the following code to suit if not.
Then create a dataset something like this.
DECLARE @t TABLE(CompanyID int, CompanyName varchar(100), Site varchar(100))
INSERT INTO @t
VALUES
(1, 'Walmart', 'Site A'),
(1, 'Walmart', 'Site B'),
(1, 'Walmart', 'Site C'),
(2, 'Target', 'Site 1'),
(2, 'Target', 'Site 2'),
(2, 'Target', 'Site 3'),
(3, 'KMart', 'Site X'),
(3, 'KMart', 'Site Y'),
(3, 'KMart', 'Site Z')
SELECT Site FROM @t WHERE CompanyID = @CompanyID
And don't forget to set your second parameter to be multi-value if you want more than one site returned.
I am trying to remove specific words/phrases from a list of strings. I have created a table of words to exclude. I need to remove any of these words that appear in the string. I would like to maintain a list of exclusion words/phrases which will grow over time as I discover more words/phrases I want to exclude.
As an example, I want to exclude the phrase 'blown tyre', but if the word 'blown' appears without 'tyre', then I want to keep it.
Below I have some code which is close to working but doesn't fully exclude every word from every line due to the cross apply. It also does not work with phrases, due to the spaces between words. I am not sure if cross apply with string_split is the correct approach to take.
Please note: I am using SQL Server 2016 with compatibility level 130
declare @String table (stringid int, string varchar(100))
declare @WordExclude table (wordexcludeid int, wordexclude varchar(50))
insert into @String values
    (1, 'The police were called'),
    (2, 'I noticed ice froze the door shut and wind blew the roof off'),
    (3, 'The car had a blown tyre due to the storm')
insert into @WordExclude values
    (1, 'Police'), (2, 'Noticed'), (3, 'Shut'), (4, 'Blown tyre'), (5, 'Storm')
select
s.stringid
,value stringwords
,replace(s.string, e.wordexclude, '')
from @String s
cross apply string_split(s.string, ' ')
join @WordExclude e
on string like '%'+e.wordexclude+'%'
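For what it's worth, one possible direction (a sketch of my own, not code from the thread): drop string_split and instead apply replace once per exclusion row with a recursive CTE, which handles multi-word phrases like 'Blown tyre'. It assumes the wordexcludeid values are contiguous starting at 1, and it relies on a case-insensitive default collation so 'Police' matches 'police'.
;with Cleaned as
(
    -- anchor: the original strings; step 0 means nothing removed yet
    select stringid, cast(string as varchar(100)) as string, 0 as step
    from @String
    union all
    -- recursive part: remove exclusion word/phrase number step + 1
    select c.stringid,
        cast(replace(c.string, e.wordexclude, '') as varchar(100)),
        c.step + 1
    from Cleaned c
    join @WordExclude e on e.wordexcludeid = c.step + 1
)
select stringid, string
from Cleaned
where step = (select max(wordexcludeid) from @WordExclude)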
Let's say that we have to store information about different types of products in a database. However, these products have different specifications. For example:
Phone: cpu, ram, storage...
TV: size, resolution...
We want to store each specification in a column of a table, and all the products (whatever the type) must have a different ID.
To comply with that, I currently have one general table named Products (with an auto-increment ID) and one subordinate table for each type of product (ProductsPhones, ProductsTV...) holding the specifications and linked to the principal table with a foreign key.
I find this solution inefficient, since the table Products has only one column (the auto-incremented ID).
I would like to know if there is a better approach to solve this problem using relational databases.
The short answer is no. The relational model is a first-order logical model, meaning predicates can vary over entities but not over other predicates. That means dependent types and EAV models aren't supported.
EAV models are possible in SQL databases, but they don't qualify as relational since the domain of the value field in an EAV row depends on the value of the attribute field (and sometimes on the value of the entity field as well). Practically, EAV models tend to be inefficient to query and maintain.
PostgreSQL supports shared sequences, which allow you to ensure unique auto-incremented IDs without a common supertype table. However, the supertype table may still be a good idea for FK constraints.
You may find some use for your Products table later to hold common attributes like Type, Serial number, Cost, Warranty duration, Number in stock, Warehouse, Supplier, etc...
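A minimal sketch of the shared-sequence approach mentioned above (table and column names are illustrative, not from the question):
-- one sequence shared by every subtype table, so IDs never collide
CREATE SEQUENCE product_id_seq;

CREATE TABLE products_phones (
    id integer PRIMARY KEY DEFAULT nextval('product_id_seq'),
    cpu varchar,
    ram_gb integer,
    storage_gb integer
);

CREATE TABLE products_tvs (
    id integer PRIMARY KEY DEFAULT nextval('product_id_seq'),
    size_inches integer,
    resolution varchar
);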
Having a Products table is fine. You can put all the columns common across all types there: product name, description, cost, and price, to name some. So it's not just an auto-increment ID. Having an internal ID of type int or bigint as the primary key is recommended. You may also add another field, "code" or whatever you want to call it, for a user-entered or user-friendly identifier, which is common in product management systems. Make sure you index it if it is used in searching or as query criteria.
HTH
While this can't be done completely relationally, you can still normalize your tables some and make it a little easier to code around.
You can have these tables:
-- what are the products?
Products (Id, ProductTypeId, Name)
-- what kind of product is it?
ProductTypes (Id, Name)
-- what attributes can a product have?
Attributes (Id, Name, ValueType)
-- what are the attributes that come with a specific product type?
ProductTypeAttributes (Id, ProductTypeId, AttributeId)
-- what are the values of the attributes for each product?
ProductAttributes (ProductId, ProductTypeAttributeId, Value)
So for a Phone and TV:
ProductTypes (1, Phone) -- a phone type of product
ProductTypes (2, TV) -- a tv type of product
Attributes (1, ScreenSize, integer) -- how big is the screen
Attributes (2, Has4G, boolean) -- does it get 4g?
Attributes (3, HasCoaxInput, boolean) -- does it have an input for coaxial cable?
ProductTypeAttributes (1, 1, 1) -- a phone has a screen size
ProductTypeAttributes (2, 1, 2) -- a phone can have 4g
-- a phone does not have coaxial input
ProductTypeAttributes (3, 2, 1) -- a tv has a screen size
ProductTypeAttributes (4, 2, 3) -- a tv can have coaxial input
-- a tv does not have 4g (simple example)
Products (1, 1, CoolPhone) -- product 1 is a phone called coolphone
Products (2, 1, AwesomePhone) -- prod 2 is a phone called awesomephone
Products (3, 2, CoolTV) -- prod 3 is a tv called cooltv
Products (4, 2, AwesomeTV) -- prod 4 is a tv called awesometv
ProductAttributes (1, 1, 6) -- coolphone has a 6 inch screen
ProductAttributes (1, 2, True) -- coolphone has 4g
ProductAttributes (2, 1, 4) -- awesomephone has a 4 inch screen
ProductAttributes (2, 2, False) -- awesomephone has NO 4g
ProductAttributes (3, 3, 70) -- cooltv has a 70 inch screen
ProductAttributes (3, 4, True) -- cooltv has coax input
ProductAttributes (4, 3, 19) -- awesometv has a 19 inch screen
ProductAttributes (4, 4, False) -- awesometv has NO coax input
The reason this is not fully relational is that you'll still need to evaluate the value type (bool, int, etc) of the attribute before you can use it in a meaningful way in your code.
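To make that caveat concrete, here is a sketch (my own illustration, using the table and column names above) of finding every product with a screen of at least 40 inches; the CAST is the "evaluate the value type" step:
SELECT p.Name
FROM Products p
JOIN ProductAttributes pa ON pa.ProductId = p.Id
JOIN ProductTypeAttributes pta ON pta.Id = pa.ProductTypeAttributeId
JOIN Attributes a ON a.Id = pta.AttributeId
WHERE a.Name = 'ScreenSize'
  -- Value is stored as text, so it must be cast before a numeric comparison;
  -- in production, guard the cast (e.g. TRY_CAST in SQL Server), since the
  -- optimizer may evaluate it before the attribute filter
  AND CAST(pa.Value AS int) >= 40;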
I'm worn out after more than five hours of trying to track down an issue.
I have a table named tblDomains. When I insert a simple record into that table, it also inserts a record into another table, tblSites. I don't know how it's happening.
There was only one trigger created on table tblDomains. I dropped the trigger, and it's still inserting a row into the other table.
Below is the simple query I'm using to insert records into table tblDomains:
INSERT INTO tblDomains(DomainName, CompanyName, Logo,
Address1, Address2, City, State, Zip, Phone, Fax,
tblConfigs_ID, Enabled, tblPricingPlans_ID, ExpireDate, AllowExport,
InactiveDate, tblDomainTypes_ID, ActiveLicenses, AllowPPC, Debitor)
VALUES ('Dname2', 'Cname2', '', 'ad1', 'ad2', 'ct', 'st', '12345678', '21212121211',
'32132132131', 4, '1', 1, null, 0, null, 1, 5, 0, 0);
Can anyone help me figure out how to track down the issue?
I'm not sure what the issue is, but you can try one thing: if possible, rename tblSites, or insert values into tblDomains that will cause an error when they are inserted into tblSites. From the error message you may get some clue as to who is inserting data into tblSites :)
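Another diagnostic worth trying (a sketch using SQL Server's catalog views, not something from the thread): search every trigger definition in the database for references to tblSites, in case a trigger on some other table, or a recreated one, is doing the insert.
-- list every trigger whose body mentions tblSites
SELECT t.name AS trigger_name,
       OBJECT_NAME(t.parent_id) AS parent_object,
       m.definition
FROM sys.triggers t
JOIN sys.sql_modules m ON m.object_id = t.object_id
WHERE m.definition LIKE '%tblSites%';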
I'm writing a booking procedure for a mock airline booking database and what I really want to do is something like this:
IF NOT EXISTS (SELECT * FROM LeadCustomer
    WHERE FirstName = 'John' AND Surname = 'Smith')
THEN
INSERT INTO LeadCustomer (Firstname, Surname, BillingAddress, email)
VALUES ('John', 'Smith', '6 Brewery close, Buxton, Norfolk', 'cmp.testing@example.com');
But Postgres doesn't support IF statements without loading the PL/pgSQL extension. I was wondering if there was a way to do some equivalent of this or if there's just going to have to be some user interaction in this step?
That specific command can be done like this:
insert into LeadCustomer (Firstname, Surname, BillingAddress, email)
select
'John', 'Smith',
'6 Brewery close, Buxton, Norfolk', 'cmp.testing@example.com'
where not exists (
select 1 from leadcustomer where firstname = 'John' and surname = 'Smith'
);
It will insert the result of the select statement, and the select will only return a row if that customer does not exist.
As of version 9.5, PostgreSQL includes upsert support, using INSERT ... ON CONFLICT DO UPDATE ...
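For this case it would look something like the sketch below; it assumes a unique constraint on (firstname, surname) to serve as the conflict target, and uses DO NOTHING since no update is wanted:
-- skip the insert silently when a matching row already exists
INSERT INTO LeadCustomer (Firstname, Surname, BillingAddress, email)
VALUES ('John', 'Smith', '6 Brewery close, Buxton, Norfolk', 'cmp.testing@example.com')
ON CONFLICT (firstname, surname) DO NOTHING;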
The answer below is no longer relevant. Postgres 9.5 was released a couple years later with a better solution.
Postgres doesn't have "upsert" functionality without adding new functions.
What you'll have to do is run the select query and see if you have matching rows. If you don't, then insert it.
I know you're not wanting an upsert exactly, but it's pretty much the same.
-- Use the following format to insert data into any table, like this --
create table users
(
    user_id varchar(25) primary key,
    phone_num numeric(15),
    failed_login int not null default 0,
    login_time timestamp
);
INSERT INTO users (user_id, phone_num, failed_login, login_time)
VALUES ('12345', 123456789, 3, '2021-01-16 04:24:01.755');
I have several different tables in my database and I'm trying to use Sphinx to do fast full-text searches. For ease of discussion, let's say the main records of interest are packing slips, one of which is included when an order ships. How do I use Sphinx to execute complex queries across all of these tables without completely denormalizing the database?
Each packing slip lists the order number, shipper, recipient, and the tracking number of each box included with the shipment. A separate table contains information about the order items. An additional table contains the customer address information. So, orders contain boxes and boxes contain items. (Example schema listed at the bottom of this question).
I would like to be able to query Sphinx for answers to questions like:
How many people who live on a street named "Maple" ordered an item with "large" in the description?
Which orders include the word "blue" in either the box description or the order items' descriptions?
To answer these types of questions, I need to refer to several tables. Since Sphinx doesn't have JOINs, one option is to denormalize the database. Denormalizing using a view, so that each row represents an order item plus all of the data of its parent box and order, would result in billions of very wide rows. So I've been creating a separate index for each table instead. But that doesn't allow me to query across tables as a SQL JOIN would. Is there another solution?
Example database
CREATE TABLE orders (
id integer PRIMARY KEY,
date_ordered date,
customer_po varchar
);
INSERT INTO orders VALUES (1, '2012-12-13', NULL);
INSERT INTO orders VALUES (2, '2012-12-14', 'DF312442');
CREATE TABLE parties (
id integer PRIMARY KEY,
order_id integer NOT NULL REFERENCES orders(id),
party_type varchar,
company varchar,
city varchar,
state char(2)
);
INSERT INTO parties VALUES (1, 1, 'shipper', 'ACME, Inc.', 'New York', 'NY');
INSERT INTO parties VALUES (2, 1, 'recipient', 'Wylie Coyote Corp.', 'Flagstaff', 'AZ');
INSERT INTO parties VALUES (3, 2, 'shipper', 'Cyberdyne', 'Las Vegas', 'NV');
-- Please disregard the fact that this design permits multiple shippers and multiple recipients
-- per order. This is a vastly simplified version of the system I'm working on.
CREATE TABLE boxes (
id integer PRIMARY KEY,
order_id integer NOT NULL REFERENCES orders(id),
tracking_num varchar NOT NULL,
description varchar NOT NULL
);
INSERT INTO boxes VALUES (1, 1, '1234567890', 'household goods');
INSERT INTO boxes VALUES (2, 1, '0987654321', 'kitchen appliances');
INSERT INTO boxes VALUES (3, 2, 'ABCDE12345', 'audio equipment');
CREATE TABLE box_contents (
id integer PRIMARY KEY,
order_id integer NOT NULL REFERENCES orders(id),
box integer NOT NULL REFERENCES boxes(id),
qty_units integer,
description varchar
);
INSERT INTO box_contents VALUES (1, 1, 1, 4, 'cookbook');
INSERT INTO box_contents VALUES (2, 1, 1, 2, 'baby bottle');
INSERT INTO box_contents VALUES (3, 1, 2, 1, 'television');
INSERT INTO box_contents VALUES (4, 2, 3, 2, 'lamp');
You put the JOIN in the sql_query that builds the index. The tables remain normalized, but you denormalize when building the index.
It's only a basic example, but your query would be something like this:
sql_query = SELECT o.id,customer_po,UNIX_TIMESTAMP(date_ordered) AS date_ordered, \
GROUP_CONCAT(DISTINCT party_type) AS party_type, \
GROUP_CONCAT(DISTINCT company) AS company, \
GROUP_CONCAT(DISTINCT city) AS city, \
GROUP_CONCAT(DISTINCT description) AS description \
FROM orders o \
INNER JOIN parties p ON (o.id = p.order_id) \
INNER JOIN box_contents b ON (o.id = b.order_id) \
GROUP BY o.id \
ORDER BY NULL
Update: alternatively, you can use sql_joined_field to do the same thing while avoiding JOINs in the actual sql_query. Sphinx then does the join process for you.
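For reference, a sketch of that variant (field names and queries are illustrative; per the Sphinx docs, each joined query must return rows ordered by document ID, with the ID in the first column and the text in the second):
sql_query = SELECT id, customer_po, UNIX_TIMESTAMP(date_ordered) AS date_ordered \
    FROM orders
# each joined field is fetched by its own query instead of a JOIN in sql_query
sql_joined_field = description from query; \
    SELECT order_id, description FROM box_contents ORDER BY order_id
sql_joined_field = company from query; \
    SELECT order_id, company FROM parties ORDER BY order_id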