Program/Currency lookup table - sql-server

I need to create a lookup table so that I can determine, by program and currency, whether the current day of a particular month is a holiday. I thought about building a calendar for each, but that is going to get too big to deal with as programs may come and go.
A sample set of data would be something like:
Program | Currency | January | February | March | April | May | June | July | August | September | October | November | December
--------| ---------| --------| ---------| ------| -----------| ----------| -----| -----| -------| ----------| --------| ---------| -----------
Default | AUD | 1, 27 | - | - | 10, 13 | - | 8 | - | - | - | 5 | - | 25, 28
Default | CAD | 1 | 17 | - | 10 | 18 | - | 1 | 3 | 7 | 12 | 11 | 25, 28
Default | CHF | 1, 2 | - | - | 10, 13 | 1, 21 | 1 | - | - | - | - | - | 25
Default | DKK | 1 | - | - | 9, 10, 13 | 8, 21, 22 | 1, 5 | - | - | - | - | - | 24, 25, 31
Default | EUR | 1 | - | - | 10, 13 | 1 | - | - | - | - | - | - | 25
Default | GBP | 1 | - | - | 10, 13 | 8, 25 | - | - | 31 | - | - | - | 25, 28
I am not sure how to define this table.

I'd suggest something like:
Create Table Holiday_List
(
    ID Int
    , Program Varchar(50)
    , Currency Char(3)
    , Holiday_Date Date
)
Of course, you'll need an interface to maintain that table, but that should be pretty simple.
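Checking whether a given day is a holiday then becomes a simple EXISTS lookup. A minimal sketch against that table (the program, currency, and date values are placeholders):
SELECT CASE WHEN EXISTS
           (SELECT 1
            FROM Holiday_List
            WHERE Program = 'Default'                     -- placeholder program
              AND Currency = 'AUD'                        -- placeholder currency
              AND Holiday_Date = CAST(GETDATE() AS DATE)) -- today
       THEN 1 ELSE 0 END AS Is_Holiday;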

Related

Order a group of transactions with multiple levels using FIFO?

I have a group of transactions with multiple levels that need to be ordered using FIFO. I don't have a typical parent-child record set either. Transaction numbers are not sequenced, but each transaction has an "other transaction #" relating it to any transactions that are transferred between accounts.
A transfer is movement of units between accounts in the same fund.
A switch is a switch of units between funds on the same account.
My transactions are below, in level order.
The result I am looking for should have the same result set, but with an additional "order" column. The ordering needs to use FIFO, where the first transaction is number 1, and the order must follow transfers and switches in and out appropriately.
The challenge: the order needs to follow the transfers and switches down the whole hierarchy before moving to the next transaction at the same level.
Sample record set
CREATE TABLE #data (
    [Level] INT NOT NULL,
    FundId INT NOT NULL,
    AccountId INT NOT NULL,
    TransId VARCHAR(20) NOT NULL PRIMARY KEY CLUSTERED,
    OtherTransId VARCHAR(20) NULL,
    TransType VARCHAR(20) NOT NULL,
    TransDate DATE NOT NULL,
    AgeDays INT NULL,
    Units INT NOT NULL
);
INSERT #data VALUES
(1,200,5000,'00000015','00000035','Switch In','2019-01-01',NULL,500),
(2,200,5000,'00000035','00000015','Switch Out','2019-01-01',NULL,500),
(2,200,5000,'00000070',NULL,'Buy','2018-09-09',452,700),
(2,200,5000,'00000046',NULL,'Sell','2018-09-12',449,200),
(1,100,5000,'00000001',NULL,'Buy','2019-06-30',159,100),
(1,100,5000,'00000002',NULL,'Sell','2019-07-15',NULL,20),
(1,100,5000,'00000003',NULL,'Buy','2019-07-31',128,50),
(1,100,5000,'00000004','00000011','Transfer Out','2019-08-15',NULL,45),
(2,100,6000,'00000005','00000020','Transfer In','2019-08-17',NULL,200),
(2,100,6000,'00000020','00000005','Transfer Out','2019-08-17',NULL,200),
(2,100,6000,'00000044',NULL,'Buy','2019-06-11',177,70),
(2,100,6000,'00000050','00000088','Transfer In','2019-06-10',NULL,130),
(3,100,7000,'00000088','00000050','Transfer Out','2019-06-10',NULL,130),
(3,100,7000,'00000079',NULL,'Buy','2019-06-01',187,130);
+-------+--------+-----------+---------------+---------------------+------------------+-----------------+------------+-------+
| Level | Fund # | Account # | Transaction # | Other Transaction # | Transaction Type | Date | Age (Days) | Units |
+-------+--------+-----------+---------------+---------------------+------------------+-----------------+------------+-------+
| 1 | 200 | 5000 | 00000015 | 00000035 | Switch In | January 1, 2019 | | 500 |
| 2 | 200 | 5000 | 00000035 | 00000015 | Switch Out | January 1, 2019 | | 500 |
| 2 | 200 | 5000 | 00000070 | | Buy | Sept 9, 2018 | 452 | 700 |
| 2 | 200 | 5000 | 00000046 | | Sell | Sept 12, 2018 | 449 | 200 |
| 1 | 100 | 5000 | 00000001 | | Buy | June 30, 2019 | 159 | 100 |
| 1 | 100 | 5000 | 00000002 | | Sell | July 15, 2019 | | 20 |
| 1 | 100 | 5000 | 00000003 | | Buy | July 31, 2019 | 128 | 50 |
| 1 | 100 | 5000 | 00000004 | 00000011 | Transfer Out | August 15, 2019 | | 45 |
| 2 | 100 | 6000 | 00000005 | 00000020 | Transfer In | August 17, 2019 | | 200 |
| 2 | 100 | 6000 | 00000020 | 00000005 | Transfer Out | August 17, 2019 | | 200 |
| 2 | 100 | 6000 | 00000044 | | Buy | June 11, 2019 | 177 | 70 |
| 2 | 100 | 6000 | 00000050 | 00000088 | Transfer In | June 10, 2019 | | 130 |
| 3 | 100 | 7000 | 00000088 | 00000050 | Transfer Out | June 10, 2019 | | 130 |
| 3 | 100 | 7000 | 00000079 | | Buy | June 1, 2019 | 187 | 130 |
+-------+--------+-----------+---------------+---------------------+------------------+-----------------+------------+-------+
Desired output (with an added "Order" column):
+-------+-------+--------+-----------+---------------+---------------------+------------------+-----------------+------------+-------+
| Order | Level | Fund # | Account # | Transaction # | Other Transaction # | Transaction Type | Date | Age (Days) | Units |
+-------+-------+--------+-----------+---------------+---------------------+------------------+-----------------+------------+-------+
| 1 | 1 | 200 | 5000 | 00000015 | 00000035 | Switch In | January 1, 2019 | | 500 |
| 2 | 2 | 200 | 5000 | 00000035 | 00000015 | Switch Out | January 1, 2019 | | 500 |
| 3 | 2 | 200 | 5000 | 00000070 | | Buy | Sept 9, 2018 | 452 | 700 |
| 4 | 2 | 200 | 5000 | 00000046 | | Sell | Sept 12, 2018 | 449 | 200 |
| 5 | 1 | 100 | 5000 | 00000001 | | Buy | June 30, 2019 | 159 | 100 |
| 6 | 1 | 100 | 5000 | 00000002 | | Sell | July 15, 2019 | | 20 |
| 7 | 1 | 100 | 5000 | 00000003 | | Buy | July 31, 2019 | 128 | 50 |
| 8 | 1 | 100 | 5000 | 00000004 | 00000011 | Transfer Out | August 15, 2019 | | 45 |
| 9 | 2 | 100 | 6000 | 00000005 | 00000020 | Transfer In | August 17, 2019 | | 200 |
| 10 | 2 | 100 | 6000 | 00000020 | 00000005 | Transfer Out | August 17, 2019 | | 200 |
| 11 | 2 | 100 | 6000 | 00000044 | | Buy | June 11, 2019 | 177 | 70 |
| 12 | 2 | 100 | 6000 | 00000050 | 00000088 | Transfer In | June 10, 2019 | | 130 |
| 13 | 3 | 100 | 7000 | 00000088 | 00000050 | Transfer Out | June 10, 2019 | | 130 |
| 14 | 3 | 100 | 7000 | 00000079 | | Buy | June 1, 2019 | 187 | 130 |
+-------+-------+--------+-----------+---------------+---------------------+------------------+-----------------+------------+-------+
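One possible starting point (not a full solution): a recursive CTE that walks the TransId -> OtherTransId links so each transfer or switch pulls in its counterpart before the scan moves on. This sketch only chains the linked pairs; slotting each account's unlinked Buys and Sells into the chain by date, which is the heart of the FIFO rule, still has to be layered on top of it:
WITH chain AS (
    -- anchor: level-1 transactions are the roots of each hierarchy
    SELECT d.TransId, d.OtherTransId,
           CAST(d.TransId AS VARCHAR(4000)) AS SortPath
    FROM #data AS d
    WHERE d.[Level] = 1
    UNION ALL
    -- recurse: follow the "other transaction #" link downward
    SELECT c.TransId, c.OtherTransId,
           CAST(p.SortPath + '>' + c.TransId AS VARCHAR(4000))
    FROM chain AS p
    JOIN #data AS c
      ON c.TransId = p.OtherTransId
    WHERE p.SortPath NOT LIKE '%' + c.TransId + '%'  -- crude guard against A<->B cycles
)
SELECT ROW_NUMBER() OVER (ORDER BY c.SortPath) AS [Order], d.*
FROM chain AS c
JOIN #data AS d
  ON d.TransId = c.TransId;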

Get all rows where a column value is the same in Cassandra CQL

This is my table.
cqlsh:sachhya> select * FROM emp;
emp_id | age | emp_name | exp | mobile
--------+-----+--------------+-----+------------
5 | 29 | RAHUL SHARMA | 9 | 2312343123
1 | 24 | SACHHYA | 15 | 9090987876
2 | 14 | SACHHYA | 15 | 9090987876
4 | 22 | ANKUR | 32 | 3213456321
90 | 30 | sumeet | 2 | 91234212
3 | 14 | SACHHYA | 3 | 9090987876
The PRIMARY KEY (partition key) is emp_id.
I want to display all rows where emp_name is 'SACHHYA'. What command should I use?
Below is the CQL query that I am using.
select * FROM emp WHERE emp_name='SACHHYA';
But i am getting an error:
InvalidRequest: Error from server: code=2200 [Invalid query]
message="Predicates on non-primary-key columns (emp_name) are not yet
supported for non secondary index queries"
I have found one solution for my question: we can create an index on the 'emp_name' column, after which we can filter on 'emp_name'.
Example:
CREATE INDEX NameIndx ON emp (emp_name);
SELECT * from sachhya.emp WHERE emp_name = 'SACHHYA';
My output:
emp_id | age | desegnation | emp_name | exp | mobile
--------+-----+------------------+----------+-----+------------
711 | 22 | Trainee Engineer | SACHHYA | 1 | 9232189345
2 | 24 | Engineer | SACHHYA | 3 | 9033864540
My Table:
emp_id | age | desegnation | emp_name | exp | mobile
--------+-----+------------------+----------+------+------------
5 | 29 | Technical Lead | RAHUL | 9 | 2312343123
10 | 45 | Deleviry Manager | ANDREW | 22 | 9214569345
711 | 22 | Trainee Engineer | SACHHYA | 1 | 9232189345
2 | 24 | Engineer | SACHHYA | 3 | 9033864540
4 | 26 | Engineer | ANKUR | 3 | 3213456321
22 | 20 | Intern | SAM | null | 8858699345
7 | 22 | Trainee Engineer | JACOB | 1 | 9232189345
17 | 28 | Senior Engineer | JACK | 4 | 8890341799
90 | 30 | Senior Engineer | HERCULES | 6 | 9353405163
3 | 32 | Technical Lead | ROSS | 8 | 7876561355
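Note that for a one-off query you can also skip the index and append ALLOW FILTERING; Cassandra then accepts the non-key predicate but performs a full table scan, so this is only sensible on small tables or for ad-hoc checks:
select * FROM emp WHERE emp_name='SACHHYA' ALLOW FILTERING;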

Cumulative value per record in particular year and month

I have got a problem with a query. I have got a table, shown below. The real values differ from row to row; I'm presenting the same value (10) in each row to simplify the problem and make it easier to understand.
| ID | UniqueProcutId | Year | Month | Value |
-----|----------------|------|-------|-------|
| 1 | wwee124 | 2015 | 1 | 10 |
| 2 | wwee124 | 2015 | 2 | 10 |
| 3 | wwee124 | 2015 | 3 | 10 |
| 4 | wwee124 | 2015 | 4 | 10 |
| 5 | wwee124 | 2015 | 5 | 10 |
| 6 | wwee124 | 2015 | 6 | 10 |
| 7 | wwee124 | 2015 | 7 | 10 |
| 8 | wwee124 | 2015 | 8 | 10 |
| 9 | wwee124 | 2015 | 9 | 10 |
| 10 | wwee124 | 2015 | 10 | 10 |
| 11 | wwee124 | 2015 | 11 | 10 |
| 12 | wwee124 | 2015 | 12 | 10 |
| 13 | wwee124 | 2016 | 1 | 10 |
| 14 | wwee124 | 2016 | 2 | 10 |
| 15 | wwee124 | 2016 | 3 | 10 |
And what I want to achieve is a query that will return a cumulative value for each month in the year. I mean:
SELECT ID, PRODUCTID, YEAR, MONTH,
SUM(VALUE) OVER(?????)
I can't handle it :(
Query should return:
| ID | UniqueProcutId | Year | Month | Value |
-----|----------------|------|-------|-------|
| 1 | wwee124 | 2015 | 1 | 10 |
| 2 | wwee124 | 2015 | 2 | 20 |
| 3 | wwee124 | 2015 | 3 | 30 |
| 4 | wwee124 | 2015 | 4 | 40 |
| 5 | wwee124 | 2015 | 5 | 50 |
| 6 | wwee124 | 2015 | 6 | 60 |
| 7 | wwee124 | 2015 | 7 | 70 |
| 8 | wwee124 | 2015 | 8 | 80 |
| 9 | wwee124 | 2015 | 9 | 90 |
| 10 | wwee124 | 2015 | 10 | 100 |
| 11 | wwee124 | 2015 | 11 | 110 |
| 12 | wwee124 | 2015 | 12 | 120 |
| 13 | wwee124 | 2016 | 1 | 10 |
| 14 | wwee124 | 2016 | 2 | 20 |
| 15 | wwee124 | 2016 | 3 | 30 |
You are close. Try this:
SELECT
    ID,
    UniqueProcutId,
    [Year],
    [Month],
    SUM(Value) OVER (PARTITION BY UniqueProcutId, [Year]
                     ORDER BY [Month] ASC
                     ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CumulativeValue
FROM
    YourTable
Partitioning by UniqueProcutId as well as [Year] restarts the running total for each product and each year, so the query keeps working once more than one product is in the table.
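For a quick sanity check, here is a self-contained repro using a few of the sample rows (the temp-table name is made up):
CREATE TABLE #forecast (ID INT, UniqueProcutId VARCHAR(20), [Year] INT, [Month] INT, Value INT);
INSERT #forecast VALUES
(1,'wwee124',2015,1,10),(2,'wwee124',2015,2,10),(3,'wwee124',2015,3,10),
(13,'wwee124',2016,1,10),(14,'wwee124',2016,2,10);

SELECT ID, UniqueProcutId, [Year], [Month],
       SUM(Value) OVER (PARTITION BY UniqueProcutId, [Year]
                        ORDER BY [Month]
                        ROWS UNBOUNDED PRECEDING) AS CumulativeValue
FROM #forecast
ORDER BY ID;
-- returns 10, 20, 30 for 2015 and 10, 20 for 2016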

SQL tables with different detail level - one containing the sum of the other

I am trying to build a simple model with tables containing forecast data.
The first I will call [Year Forecast] and the second [Month Forecast]. Like so:
| Year | Forecast |
-------------------
| 2018 | 144000 |
| 2019 | 180000 |
| 2020 | 220000 |
| .... | ...... |
I want the DB to allow manual input in [Year Forecast] for [Year] >= year(getdate())+2. So in the example the forecast number for 2020 would have been manually entered as a whole number. (Note that Year would be a unique identifier.)
For [Year] < year(getdate())+2 the table [Year Forecast] should take the sum of [Month Forecast]. This would be for 2018 and 2019 in this example.
| ID | Year | Month | Forecast |
--------------------------------
| 1 | 2018 | 1 | 12000 |
| 2 | 2018 | 2 | 12000 |
| 3 | 2018 | 3 | 12000 |
| 4 | 2018 | 4 | 12000 |
| 5 | 2018 | 5 | 12000 |
| 6 | 2018 | 6 | 12000 |
| 7 | 2018 | 7 | 12000 |
| 8 | 2018 | 8 | 12000 |
| 9 | 2018 | 9 | 12000 |
| 10 | 2018 | 10 | 12000 |
| 11 | 2018 | 11 | 12000 |
| 12 | 2018 | 12 | 12000 |
| 13 | 2019 | 1 | 15000 |
| 14 | 2019 | 2 | 15000 |
| .. | .... | ..... | ........ |
The relationship would be straightforward, but I want to define a procedure that takes the sum of Forecast for the related year in [Month Forecast] and prohibits manual data input for [Year] < year(getdate())+2.
I've come quite far in my SQL journey and I know this should be possible, but it is still a bit above my skill level. How should I go about this?
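One workable sketch (table and column names assumed from the post): derive the near years in a view from [Month Forecast], and block manual near-year rows in [Year Forecast] with a CHECK constraint:
-- Near years come from [Month Forecast]; far years stay manual
CREATE VIEW dbo.[Combined Year Forecast] AS
SELECT mf.[Year], SUM(mf.Forecast) AS Forecast
FROM dbo.[Month Forecast] AS mf
WHERE mf.[Year] < YEAR(GETDATE()) + 2
GROUP BY mf.[Year]
UNION ALL
SELECT yf.[Year], yf.Forecast
FROM dbo.[Year Forecast] AS yf
WHERE yf.[Year] >= YEAR(GETDATE()) + 2;
GO
-- Reject manual input for near years at write time
ALTER TABLE dbo.[Year Forecast]
ADD CONSTRAINT CK_YearForecast_FutureOnly
CHECK ([Year] >= YEAR(GETDATE()) + 2);
Note the CHECK is only evaluated on insert and update, so rows that age past the cutoff simply stay put; the view is what guarantees they stop being used.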

SQL Server : How to subtract values throughout the rows by using values in another column?

I have two tables, stock and sales, as below:
Stock Table:
+--------+----------+---------+
| Stk_ID | Stk_Name | Stk_Qty |
+--------+----------+---------+
| 1001 | A | 20 |
| 1002 | B | 50 |
+--------+----------+---------+
Sales Table:
+----------+------------+------------+-----------+
| Sales_ID | Sales_Date | Sales_Item | Sales_Qty |
+----------+------------+------------+-----------+
| 2001 | 2016-07-15 | A | 5 |
| 2002 | 2016-07-20 | B | 7 |
| 2003 | 2016-07-23 | A | 4 |
| 2004 | 2016-07-29 | A | 2 |
| 2005 | 2016-08-03 | B | 15 |
| 2006 | 2016-08-07 | B | 10 |
| 2007 | 2016-08-10 | A | 5 |
+----------+------------+------------+-----------+
With the tables above, how can I find the available stock Ava_Stk for each item after every sale?
Ava_Stk is expected to subtract the cumulative Sales_Qty from Stk_Qty after every sale.
+----------+------------+------------+-----------+---------+
| Sales_ID | Sales_Date | Sales_Item | Sales_Qty | Ava_Stk |
+----------+------------+------------+-----------+---------+
| 2001 | 2016-07-15 | A | 5 | 15 |
| 2002 | 2016-07-20 | B | 7 | 43 |
| 2003 | 2016-07-23 | A | 4 | 11 |
| 2004 | 2016-07-29 | A | 2 | 9 |
| 2005 | 2016-08-03 | B | 15 | 28 |
| 2006 | 2016-08-07 | B | 10 | 18 |
| 2007 | 2016-08-10 | A | 5 | 4 |
+----------+------------+------------+-----------+---------+
Thank you!
You want a cumulative sum and to subtract it from the stock table. In SQL Server 2012+:
select s.*,
       (st.stk_qty -
        sum(s.sales_qty) over (partition by s.sales_item order by s.sales_date)
       ) as ava_stk
from sales s join
     stock st
     on s.sales_item = st.stk_name;
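One detail worth noting: with the default window frame (RANGE), two sales of the same item on the same date count as ties, so both rows would get the fully accumulated quantity. Adding the unique sales_id to the ordering makes the running total deterministic:
select s.*,
       (st.stk_qty -
        sum(s.sales_qty) over (partition by s.sales_item
                               order by s.sales_date, s.sales_id)
       ) as ava_stk
from sales s join
     stock st
     on s.sales_item = st.stk_name;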
