I have a problem in QlikView.
I would like to get the associated value of the max date when I select an ID.
For example, if I have the following data:
id value date
1 2 1
1 4 2
1 6 3
1 5 4
When I select ID=1, I would like to get the value associated with the max date (4), which is 5.
Thank you!
You can use set analysis to achieve this:
= Sum( {< date = {"$(=Max(date))"} >} value )
You can read more about set analysis on the Qlik Help website.
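If you prefer an expression that does not rely on dollar-sign expansion, FirstSortedValue should give the same result; a sketch using the field names from the example (assuming one row per date):
= FirstSortedValue( value, -date )
Sorting by -date makes the function return the value on the row with the greatest date within the current selection.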
I have the below requirement.
The input is as below.
Create table Numbers
(
Num int
)
Insert into Numbers
values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15)
Create table FromTo
(
FromNum int
,ToNum int
)
Select * From FromTo
Output should be as below.
FromNum ToNum
1 5
6 10
11 15
The actual requirement is as below.
I need to load the data for a column into a table that will have thousands of records with different numbers.
Consider something like the following:
1, 2, 5, 7, 9, 11, 15, 34, 56, 78, 98, 123, 453, 765, etc.
I need to load these into another table that has FROM and TO columns with intervals of 5000. For example, if within the first 5000 the numbers only go up to 3000, my first row should have FromNum 1 and ToNum 3000. For the second row, if the data does not continue up to 10000 and the next number starts at 12312 (this is the second row's FromNum), the ToNum should be +5000, i.e. 17312. Again, if the data does not have numbers all the way up to 17312, the ToNum should be the largest number between 12312 and 17312.
Output should be as below.
FromNum ToNum
1 3205
1095806 1100805
1100808 1105806
1105822 1110820
Can you please help me with a solution for the above?
Thanks in advance.
What you may try in this situation is to group the data to get the expected results:
DECLARE @interval int = 5
INSERT INTO FromTo (FromNum, ToNum)
SELECT MIN(Num) AS FromNum, MAX(Num) AS ToNum
FROM Numbers
GROUP BY (Num - 1) / @interval
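To see why the grouping works: integer division puts every run of @interval consecutive numbers into the same bucket, so MIN and MAX per bucket give the range boundaries. A quick check against the sample Numbers table (a sketch reusing the names above; for the real data you would presumably set @interval to 5000):
DECLARE @interval int = 5;

SELECT Num, (Num - 1) / @interval AS Bucket
FROM Numbers
ORDER BY Num;
-- Num 1-5   -> Bucket 0
-- Num 6-10  -> Bucket 1
-- Num 11-15 -> Bucket 2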
When I run the following script:
tbl: update prob: 1?100 from tbl;
I was expecting to get a new column with a different random number in each row. However, I get back a column containing the same number for all the rows in the table.
How do I resolve this? I need to update my existing table and not create a table from scratch.
When you use 1?100 you are only requesting one random value in the range 0-99. If you use 10?100, you will get back a list of 10 random values in that range.
So to do this in an update you want to use something like this:
tbl:([]time:5?.z.p;sym:5?`3;price:5?10f;qty:5?10)
time sym price qty
-----------------------------------------------
2012.02.19D18:34:27.148501760 gkn 8.376952 9
2008.07.29D20:23:13.601434560 odo 7.041609 3
2007.02.07D08:17:59.482332864 pbl 0.955069 9
2001.04.27D03:36:44.475531384 aph 1.127308 2
2010.03.03D03:35:55.253069888 mgi 0.7663449 6
update r:abs count[i]?0h from tbl
time sym price qty r
-----------------------------------------------------
2012.02.19D18:34:27.148501760 gkn 8.376952 9 23885
2008.07.29D20:23:13.601434560 odo 7.041609 3 19312
2007.02.07D08:17:59.482332864 pbl 0.955069 9 10372
2001.04.27D03:36:44.475531384 aph 1.127308 2 25281
2010.03.03D03:35:55.253069888 mgi 0.7663449 6 27503
Note that I am using type short and abs to return positive values.
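Applied to your original statement, keeping the 0-99 range of ?100, that would look something like this (the column name prob is taken from your script):
tbl: update prob: count[i]?100 from tbl;  / one random draw per row
Inside the update, count[i] evaluates to the number of rows, so each row gets its own random value.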
You need to seed your random number generator, using something like rand(time); otherwise it will use the same seed and thus give the same sequence of random numbers.
EDIT: Per https://code.kx.com/wiki/Reference/SystemCommands
Use \S n, where n is any integer.
EDIT2: See https://code.kx.com/wiki/Reference/SystemCommands#.5CS_.5Bn.5D_-_random_seed for how to use the random seed.
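In a q session, re-seeding with the same value reproduces the same sequence; a minimal sketch (the seed value is arbitrary):
/ set the random seed
\S 42
5?100
/ resetting the same seed reproduces the same five numbers
\S 42
5?100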
Just generate as many random numbers as you have rows using count tbl:
First create your table tbl:
tbl:([]date:reverse .z.d-til 100;price:sums 100?1f)
date price
--------------------
2018.04.26 0.2426471
2018.04.27 0.6163571
2018.04.28 1.179559
..
Then add a column of random numbers between 0 and 100:
update rdn:(count tbl)?100 from tbl
date price rdn
------------------------
2018.04.26 0.2426471 25
2018.04.27 0.6163571 33
2018.04.28 1.179559 13
..
The member below returns a running total between the first and the chosen date. Is it possible to aggregate the data up to one day/week/month before?
WITH
MEMBER [Measures].[SUM] AS
AGGREGATE(
NULL:TAIL(EXISTING [Date].[Date].[Date].Members).Item(0),
[Measures].[X]
)
Here is an example (the date can be a day, month, year, ...):
DATE X SUM
------------
1 1 NULL
2 4 1
3 2 5
4 2 7
I think you've almost got it: to end the aggregation a given number of days before the chosen date, you can use lag:
WITH
MEMBER [Measures].[SUM] AS
AGGREGATE(
NULL
:
TAIL(
EXISTING [Date].[Date].[Date].Members
).Item(0).lag(7) //<<<< finishes 7 days before chosen date
,[Measures].[X]
)
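For a single day back, the same pattern with lag(1) should work; a week is lag(7) as above, while a full calendar month on a date-level attribute would presumably need a lag by that month's number of days or a hierarchy with a Month level. A sketch for one day:
WITH
MEMBER [Measures].[SUM] AS
AGGREGATE(
NULL
:
TAIL(
EXISTING [Date].[Date].[Date].Members
).Item(0).lag(1) //<<<< finishes 1 day before chosen date
,[Measures].[X]
)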
I have two data sets. FIRST is a list of products and their daily prices from a supplier, and SECOND is a list of start and end dates (as well as other important data for analysis). How can I tell Stata to pull the price at the beginning date and then the price at the end date from FIRST into SECOND for the given dates? Please note, if there is no exact matching date I would like it to grab the last date available. For example, if SECOND has the date 1/1/2013 and FIRST has prices on ... 12/30/2012, 12/31/2012, 1/2/2013, ..., it would grab the 12/31/2012 price.
I would usually do this with Excel, but I have millions of observations, and it is not feasible.
I have put an example of FIRST and SECOND below, as well as what the optimal solution would give as an output, POST_SECOND.
FIRST
Product Price Date
1 3 1/1/2010
1 3 1/3/2010
1 4 1/4/2010
1 2 1/8/2010
2 1 1/1/2010
2 5 2/5/2010
3 7 12/26/2009
3 2 1/1/2010
3 6 4/3/2010
SECOND
Product Start Date End Date
1 1/3/2010 1/4/2010
2 1/1/2010 1/1/2010
3 12/26/2009 4/3/2010
POST_SECOND
Product Start Date End Date Price_Start Price_End
1 1/3/2010 1/4/2010 3 4
2 1/1/2010 1/1/2010 1 1
3 12/26/2009 4/3/2010 7 6
Here's a merge/keep/sort/collapse* solution that relies on using the last date. I altered your example data slightly.
/* Make Fake Data & Convert Dates to Date Format */
clear
input byte Product byte Price str12 str_date
1 3 "1/1/2010"
1 3 "1/3/2010"
1 4 "1/4/2010"
1 2 "1/8/2010"
2 1 "1/1/2010"
2 5 "2/5/2010"
3 7 "12/26/2009"
3 7 "12/28/2009"
3 2 "1/1/2010"
3 6 "4/3/2010"
4 8 "12/30/2012"
4 9 "12/31/2012"
4 10 "1/2/2013"
4 10 "1/3/2013"
end
gen Date = date(str_date,"MDY")
format Date %td
drop str_date
save "First.dta", replace
clear
input byte Product str12 str_Start_Date str12 str_End_Date
1 "1/3/2010" "1/4/2010"
2 "1/1/2010" "1/1/2010"
3 "12/27/2009" "4/3/2010"
4 "1/1/2013" "1/2/2013"
end
gen Start_Date = date(str_Start_Date,"MDY")
gen End_Date = date(str_End_Date,"MDY")
format Start_Date End_Date %td
drop str_*
save "Second.dta", replace
/* Data Transformation */
use "First.dta", clear
merge m:1 Product using "Second.dta", nogen
bys Product: egen ads = min(abs(Start_Date-Date))
bys Product: egen ade = min(abs(End_Date - Date))
keep if (ads==abs(Date - Start_Date) & Date <= Start_Date) | (ade==abs(Date - End_Date) & Date <= End_Date)
sort Product Date
collapse (first) Price_Start = Price (last) Price_End = Price, by(Product Start_Date End_Date)
list, clean noobs
*Some people are reshapers. Others are collapsers. Often both can get the job done, but I think collapse is easier in this case.
In Stata, I've never been able to get something like this to work nicely in one step (something you can do in SAS via a SQL call). In any case, I think you'd be better off creating an intermediate file from FIRST.dta and then merging that 2x on each of your StartDate and EndDate variables in SECOND.dta.
Say you have data for price adjustments from Jan 1, 2010 to Dec 31, 2013 (specified with varied intervals as you have shown above). I assume all the date variables are already in date format in FIRST.dta & SECOND.dta, and that variable names in SECOND do not have spaces in them.
tempfile prod prices
use FIRST.dta, clear
keep Product
duplicates drop
save `prod'
clear
set obs 1461 /* one observation per day from 1 Jan 2010 through 31 Dec 2013 */
g Date=date("12-31-2009","MDY")+_n
format Date %td
cross using `prod'
merge 1:1 Product Date using FIRST.dta, assert(1 3) nogen
gsort +Product +Date /*this ensures the data are sorted properly for the next step */
replace Price=Price[_n-1] if Price==. & Product==Product[_n-1] /* carry the last available price forward */
save `prices'
use SECOND.dta, clear
foreach i in Start End {
rename `i'Date Date
merge 1:1 Product Date using `prices', assert(2 3) keep(3) nogen
rename Price Price_`i'
rename Date `i'Date
}
This should work if I understand your data structures correctly, and it should address the issue being discussed in the comments to @Dimitriy's answer. I'm open to critiques on how to make this nicer, as it's something I've had to do a few times and this is how I usually go about it.
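If you want the final file to mirror the POST_SECOND layout from the question, a closing step might look like this (a sketch; the output file name, and the StartDate/EndDate variable names without underscores, are assumptions based on the above):
order Product StartDate EndDate Price_Start Price_End
save "POST_SECOND.dta", replace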
I have a select query which returns the following output:
1 Arun,
2 Kumar,
3 Babu,
4 Ram,
Is it possible to add an initial value to this output without inserting a value into the table, in other words by hardcoding the initial value?
Can I get the output as
0 Select,
1 Arun,
2 Kumar,
3 Babu,
4 Ram
Try this:
select
0,
'Select'
union all
<your query>
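For example, if the existing query were selecting from a hypothetical Person table with Id and Name columns, the combined statement might look like this (table and column names are assumptions):
select 0 as Id, 'Select' as Name
union all
select Id, Name
from Person
order by Id
The order by Id keeps the hard-coded 0 / 'Select' row first, since a union by itself does not guarantee row order.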