I have a dataframe with cumulative stock returns from 1 to 5 days:
        1dReturn  2dReturn  3dReturn  4dReturn  5dReturn
Ticker
LUNA      -3.077    -3.077    -6.923    -6.915    -6.615
YTEN      -2.139    -2.139   -18.182   -16.043   -16.578
I would like to compute the daily returns. Is there a function for that?
The code below creates the table above:
import pandas as pd

df = pd.DataFrame({'1dReturn': [-3.077, -2.139],
                   '2dReturn': [-3.077, -2.139],
                   '3dReturn': [-6.923, -18.182],
                   '4dReturn': [-6.915, -16.043],
                   '5dReturn': [-6.615, -16.578]},
                  index=['LUNA', 'YTEN'])
The formula to arrive at the daily returns works as follows:
daily return day 2: d2 = cD2 / d1
daily return day 3: d3 = cD3 / (d1 * d2)
daily return day 4: d4 = cD4 / (d1 * d2 * d3)
daily return day 5: d5 = cD5 / (d1 * d2 * d3 * d4)
where cDn is the cumulative gross return through day n and dn is the gross daily return for day n; note that cD1 = d1.
np.exp(np.log(cumReturn + 1.0).diff()) - 1
cumReturn is the cumulative return series in Pandas.
R_cum_i = (1 + R_daily_i) * (1 + R_daily_i-1) * ... - 1
R_cum_i-1 = (1 + R_daily_i-1) * (1 + R_daily_i-2) * ... - 1
so
R_cum_i = (R_cum_i-1 + 1) * (1 + R_daily_i) - 1
1 + R_daily_i = (R_cum_i + 1) / (R_cum_i-1 + 1)
1 + R_daily_i = exp(log((R_cum_i + 1) / (R_cum_i-1 + 1)))
1 + R_daily_i = exp(log(R_cum_i + 1) - log(R_cum_i-1 + 1))
1 + R_daily_i = exp(log(R_cum_i + 1).diff())
then
R_daily_i = exp(log(R_cum_i + 1).diff()) - 1
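Putting the derivation to work on the example DataFrame (a sketch; it assumes the columns hold percentage returns, which the question does not state explicitly):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'1dReturn': [-3.077, -2.139],
                   '2dReturn': [-3.077, -2.139],
                   '3dReturn': [-6.923, -18.182],
                   '4dReturn': [-6.915, -16.043],
                   '5dReturn': [-6.615, -16.578]},
                  index=['LUNA', 'YTEN'])

# Assumption: the values are percentages, so convert to fractions first.
cum = df / 100.0

# diff() works down the rows by default; the horizons run across the
# columns here, so diff along axis=1.
daily = np.exp(np.log(cum + 1.0).diff(axis=1)) - 1

# The first column has no predecessor: the day-1 daily return is simply
# the day-1 cumulative return.
daily.iloc[:, 0] = cum.iloc[:, 0]

daily_pct = daily * 100.0  # back to percent, same units as the input
```

A handy sanity check: compounding the recovered daily returns, `(1 + daily).cumprod(axis=1) - 1`, reproduces `cum` exactly.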
I have a relatively big dataset, and my mixed-effects logistic regression is shown below. Is it normal for it to take this long to run, or have I made a mistake somewhere?
library(lme4)
glmer_EBRD_undersample_1 <- glmer(leave_happened ~
performance_rating_2016 + performance_rating_2017 + performance_rating_2018 + performance_rating_2019 + performance_rating_2020
+ gender
+ target_group
+ target_pmf_band
+ target_hq_or_ro
+ target_office_location_country_distilled
+ target_org_unit_cost_centre_code_distilled
+ target_ebrd_region_distilled
+ target_contract_group_distilled
+ target_position_tenure_group
+ target_length_of_service_group_distilled
+ leaves_to_date
+ moves_to_date
+ joins_to_date
+ applied_count_to_date
+ line_reviewed_to_date
+ interviewed_to_date
+ offered_to_date
+ hired_to_date
+ (1 | person_id)
,
data = train_undersample_1,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 10
)
summary(glmer_EBRD_undersample_1)
It also gave a warning like this:
Warning in commonArgs(par, fn, control, environment()) :
maxfun < 10 * length(par)^2 is not recommended.
My aliases keep getting renamed to Expr1, Expr2, etc. after I save my view. I have multiple CASE WHEN ... THEN statements in the view that use the same aliases (X0, Y0, and Z0). Essentially, every case statement after the first has its aliases replaced with Expr1, Expr2, and so on.
Any help would be greatly appreciated. I also tried putting square brackets around the aliases, but that did not work.
I should also note that when I go to save, I get a warning from SQL Server Management Studio about my ORDER BY clause. It's probably unrelated, but you never know.
SELECT
TOP (100) PERCENT
s.dtmEvaluation AS SurgeryDate,
p.idPatient,
d.strLead AS LeadType,
ds.strDBSSite AS Target,
CASE
WHEN intSide = 0 THEN 'L'
ELSE 'R'
END AS Side,
ROUND(s.dblACX, 2, 1) AS ACX,
ROUND(s.dblACY, 2, 1) AS ACY,
ROUND(s.dblACZ, 2, 1) AS ACZ,
ROUND(s.dblPCX, 2, 1) AS PCX,
ROUND(s.dblPCY, 2, 1) AS PCY,
ROUND(s.dblPCZ, 2, 1) AS PCZ,
ROUND(s.dblInitX, 2, 1) AS InitialX,
ROUND(s.dblInitY, 2, 1) AS [Initial Y],
ROUND(s.dblInitZ, 2, 1) AS [Initial Z],
s.dblACPCAngle AS InitialACPCAngle,
s.dblCenterAngle AS InitialCLangle,
s.dblMicroPasses AS MicroPasses,
s.dblMacroPasses AS MacroPasses,
ROUND(s.dblFinalX, 2, 1) AS FinalX,
ROUND(s.dblFinalY, 2, 1) AS FinalY,
ROUND(s.dblFinalZ, 2, 1) AS FinalZ,
ROUND(s.dblMeasuredX, 2, 1) AS MeasX,
ROUND(s.dblMeasuredY, 2, 1) AS MeasY,
ROUND(s.dblMeasuredZ, 2, 1) AS MeasZ,
s.dblMeasuredACPCAngle,
s.dblMeasuredCtrAngle,
ROUND(SQRT(POWER(s.dblMeasuredX - s.dblFinalX, 2) + POWER(s.dblMeasuredY - s.dblFinalY, 2) + POWER(s.dblMeasuredZ - s.dblFinalZ,2)), 2, 1) AS Delta,
CASE
WHEN s.intSide = 1 THEN s.dblMeasuredx + ((dblelectrodelength - 1) + 0 * (dblelectrodespacing + dblelectrodelength) + (dblelectrodelength / 2)) * SIN(RADIANS(dblMeasuredCtrAngle))
ELSE dblMeasuredx - ((dblelectrodelength - 1) + 0 * (dblelectrodespacing + dblelectrodelength) + (dblelectrodelength / 2)) * SIN(RADIANS(dblMeasuredCtrAngle))
END AS X0,
s.dblMeasuredY +
(((d.dblElectrodeLength - 1)
+ 0 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) +
d.dblElectrodeLength / 2) * COS(RADIANS(s.dblMeasuredACPCAngle))
* COS(RADIANS(s.dblMeasuredCtrAngle)) AS Y0,
s.dblMeasuredZ + (((d.dblElectrodeLength - 1) + 0 *
(d.dblElectrodeSpacing + d.dblElectrodeLength))
+ d.dblElectrodeLength / 2) *
SIN(RADIANS(s.dblMeasuredACPCAngle)) *
COS(RADIANS(s.dblMeasuredCtrAngle)) AS Z0,
CASE
WHEN intSide = 1 THEN dblMeasuredx +
((dblelectrodelength - 1) + 1 * (dblelectrodespacing +
dblelectrodelength) + (dblelectrodelength / 2))
* SIN(RADIANS(dblMeasuredCtrAngle))
ELSE dblMeasuredx -
((dblelectrodelength - 1) + 1 * (dblelectrodespacing +
dblelectrodelength) + (dblelectrodelength / 2))
* SIN(RADIANS(dblMeasuredCtrAngle))
END AS Expr1,
s.dblMeasuredY + (((d.dblElectrodeLength - 1) + 1 *
(d.dblElectrodeSpacing + d.dblElectrodeLength))
+ d.dblElectrodeLength / 2) *
COS(RADIANS(s.dblMeasuredACPCAngle)) *
COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr2,
s.dblMeasuredZ + (((d.dblElectrodeLength - 1) + 1 *
(d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength /
2)
* SIN(RADIANS(s.dblMeasuredACPCAngle)) *
COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr3,
CASE
WHEN intSide = 1 THEN dblMeasuredx + ((dblelectrodelength - 1)
+ 2 * (dblelectrodespacing + dblelectrodelength) +
(dblelectrodelength / 2)) * SIN(RADIANS(dblMeasuredCtrAngle))
ELSE dblMeasuredx - ((dblelectrodelength - 1)
+ 2 * (dblelectrodespacing + dblelectrodelength) +
(dblelectrodelength / 2)) * SIN(RADIANS(dblMeasuredCtrAngle))
END AS
Expr4,
s.dblMeasuredY + (((d.dblElectrodeLength - 1) + 2 *
(d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength
/ 2)
* COS(RADIANS(s.dblMeasuredACPCAngle)) *
COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr5,
s.dblMeasuredZ +
(((d.dblElectrodeLength - 1)
+ 2 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) +
d.dblElectrodeLength / 2) * SIN(RADIANS(s.dblMeasuredACPCAngle)) *
COS(RADIANS(s.dblMeasuredCtrAngle))
AS Expr6,
CASE
WHEN intSide = 1 THEN dblMeasuredx +
((dblelectrodelength - 1) + 3 * (dblelectrodespacing +
dblelectrodelength) + (dblelectrodelength / 2))
* SIN(RADIANS(dblMeasuredCtrAngle))
ELSE dblMeasuredx -
((dblelectrodelength - 1) + 3 * (dblelectrodespacing +
dblelectrodelength) + (dblelectrodelength / 2))
* SIN(RADIANS(dblMeasuredCtrAngle))
END AS Expr7,
s.dblMeasuredY + (((d.dblElectrodeLength - 1) + 3 *
(d.dblElectrodeSpacing + d.dblElectrodeLength))
+ d.dblElectrodeLength / 2) *
COS(RADIANS(s.dblMeasuredACPCAngle)) *
COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr8,
s.dblMeasuredZ + (((d.dblElectrodeLength - 1) + 3 *
(d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength /
2)
* SIN(RADIANS(s.dblMeasuredACPCAngle)) *
COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr9,
s.dblAtlasScaleX,
s.dblAtlasScaleY,
s.dblAtlasScaleZ,
s.dblAtlasMovementX,
s.dblAtlasMovementY,
s.dblAtlasMovementZ,
s.dblAtlasRotationX,
s.dblAtlasRotationY,
s.dblAtlasRotationZ
FROM dbo.tblDBSSurgery AS s
INNER JOIN dbo.tblPatientDemographics AS p
ON p.idPatient =
s.idPatient
LEFT OUTER JOIN dbo.tblLookupLeads AS d
ON d.idLead = s.intLeadType
LEFT OUTER JOIN dbo.tblLookupDBSSites AS ds
ON ds.idDBSSite = s.intSite
WHERE (s.intProcedure = 0
OR s.intProcedure = 2)
AND (s.blnOutside = 0)
AND (NOT
(p.strMRN = '09999999'))
AND (NOT (p.strMRN = '08888888'))
ORDER BY surgerydate
I have the following anonymous function (with x as an array):
f = @(x) 312*x(2) - 240*x(1) + 30*x(3) - 24*x(4) + 282*x(1)*x(2) + 30*x(1)*x(3) + 18*x(1)*x(4) + 54*x(2)*x(3) + 6*x(2)*x(4) + 6*x(3)*x(4) + 638*x(1)^2 + 207*x(2)^2 + 6*x(3)^2 + 3*x(4)^2 + 4063
I want to compute the gradient of this function and save it for future use, also with array input:
X = [ 0;...
0;...
0;...
0];
F = f(X)
G = g(X)
Is it possible to achieve this with this type of function? Or maybe it is possible to do it via the diff command? Something like this:
g = [diff(f, x(1));...
diff(f, x(2));...
diff(f, x(3));...
diff(f, x(4))]
I guess the following is what you want. I'm afraid you need the Symbolic Math Toolbox for a simple solution; otherwise I'd rather calculate the derivatives by hand.
x = [1 2 3 4];
%// define function
syms a b c d
f = 312*b - 240*a + 30*c - 24*d + 282*a*b + 30*a*c + 18*a*d + 54*b*c + ...
6*b*d + 6*c*d + 638*a^2 + 207*b^2 + 6*c^2 + 3*d^2 + 4063
%// symbolic gradient
g = gradient(f,[a,b,c,d])
%// eval symbolic function
F = subs(f,[a,b,c,d],x)
G = subs(g,[a,b,c,d],x)
%// convert symbolic value to double
Fd = double(F)
Gd = double(G)
or alternatively:
%// convert symbolic function to anonymous function
fd = matlabFunction(f)
gd = matlabFunction(g)
%// eval anonymous function
x = num2cell(x)
Fd = fd(x{:})
Gd = gd(x{:})
f =
638*a^2 + 282*a*b + 30*a*c + 18*a*d - 240*a + 207*b^2 + 54*b*c +
6*b*d + 312*b + 6*c^2 + 6*c*d + 30*c + 3*d^2 - 24*d + 4063
g =
1276*a + 282*b + 30*c + 18*d - 240
282*a + 414*b + 54*c + 6*d + 312
30*a + 54*b + 12*c + 6*d + 30
18*a + 6*b + 6*c + 6*d - 24
F =
7179
G =
1762
1608
228
48
fd =
@(a,b,c,d)a.*-2.4e2+b.*3.12e2+c.*3.0e1-d.*2.4e1+a.*b.*2.82e2+a.*c.*3.0e1+a.*d.*1.8e1+b.*c.*5.4e1+b.*d.*6.0+c.*d.*6.0+a.^2.*6.38e2+b.^2.*2.07e2+c.^2.*6.0+d.^2.*3.0+4.063e3
gd =
@(a,b,c,d)[a.*1.276e3+b.*2.82e2+c.*3.0e1+d.*1.8e1-2.4e2;a.*2.82e2+b.*4.14e2+c.*5.4e1+d.*6.0+3.12e2;a.*3.0e1+b.*5.4e1+c.*1.2e1+d.*6.0+3.0e1;a.*1.8e1+b.*6.0+c.*6.0+d.*6.0-2.4e1]
x =
[1] [2] [3] [4]
Fd =
7179
Gd =
1762
1608
228
48
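If the Symbolic Math Toolbox is not an option, the same computation can be sketched outside MATLAB with Python's SymPy (an alternative added here for illustration, not part of the original answer):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
f = (312*b - 240*a + 30*c - 24*d + 282*a*b + 30*a*c + 18*a*d
     + 54*b*c + 6*b*d + 6*c*d + 638*a**2 + 207*b**2 + 6*c**2
     + 3*d**2 + 4063)

# Symbolic gradient: one partial derivative per variable.
g = [sp.diff(f, v) for v in (a, b, c, d)]

# Evaluate both at x = [1 2 3 4], as in the MATLAB example.
vals = {a: 1, b: 2, c: 3, d: 4}
F = f.subs(vals)                   # 7179
G = [gi.subs(vals) for gi in g]    # [1762, 1608, 228, 48]

# lambdify plays the role of matlabFunction: it turns the symbolic
# expressions into plain numeric functions for reuse.
fd = sp.lambdify((a, b, c, d), f)
gd = sp.lambdify((a, b, c, d), g)
```

The numeric values match the MATLAB output above (F = 7179, G = [1762; 1608; 228; 48]).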
I am writing a simple C 4x4 matrix math library and wanted some feedback, especially from people with OpenGL experience.
Typically there are two ways to do matrix multiplication. I tested this code and it works according to results from Wolfram Alpha, but my main concern is that the matrix is in the right order.
My matrix is just an array of 16 doubles.
The code to do the multiplication is below
out->m[0] = ( a->m[0] * b->m[0]) + (a->m[1] * b->m[4]) + (a->m[2] * b->m[8]) + (a->m[3] * b->m[12] );
out->m[4] = ( a->m[4] * b->m[0]) + (a->m[5] * b->m[4]) + (a->m[6] * b->m[8]) + (a->m[7] * b->m[12] );
out->m[8] = ( a->m[8] * b->m[0]) + (a->m[9] * b->m[4]) + (a->m[10] * b->m[8]) + (a->m[11] * b->m[12] );
out->m[12] = ( a->m[12] * b->m[0]) + (a->m[13] * b->m[4]) + (a->m[14] * b->m[8]) + (a->m[15] * b->m[12] );
out->m[1] = ( a->m[0] * b->m[1]) + (a->m[1] * b->m[5]) + (a->m[2] * b->m[9]) + (a->m[3] * b->m[13] );
out->m[5] = ( a->m[4] * b->m[1]) + (a->m[5] * b->m[5]) + (a->m[6] * b->m[9]) + (a->m[7] * b->m[13] );
out->m[9] = ( a->m[8] * b->m[1]) + (a->m[9] * b->m[5]) + (a->m[10] * b->m[9]) + (a->m[11] * b->m[13] );
out->m[13] = ( a->m[12] * b->m[1]) + (a->m[13] * b->m[5]) + (a->m[14] * b->m[9]) + (a->m[15] * b->m[13] );
out->m[2] = ( a->m[0] * b->m[2]) + (a->m[1] * b->m[6]) + (a->m[2] * b->m[10]) + (a->m[3] * b->m[14] );
out->m[6] = ( a->m[4] * b->m[2]) + (a->m[5] * b->m[6]) + (a->m[6] * b->m[10]) + (a->m[7] * b->m[14] );
out->m[10] = ( a->m[8] * b->m[2]) + (a->m[9] * b->m[6]) + (a->m[10] * b->m[10]) + (a->m[11] * b->m[14] );
out->m[14] = ( a->m[12] * b->m[2]) + (a->m[13] * b->m[6]) + (a->m[14] * b->m[10]) + (a->m[15] * b->m[14] );
out->m[3] = ( a->m[0] * b->m[3]) + (a->m[1] * b->m[7]) + (a->m[2] * b->m[11]) + (a->m[3] * b->m[15] );
out->m[7] = ( a->m[4] * b->m[3]) + (a->m[5] * b->m[7]) + (a->m[6] * b->m[11]) + (a->m[7] * b->m[15] );
out->m[11] = ( a->m[8] * b->m[3]) + (a->m[9] * b->m[7]) + (a->m[10] * b->m[11]) + (a->m[11] * b->m[15] );
out->m[15] = ( a->m[12] * b->m[3]) + (a->m[13] * b->m[7]) + (a->m[14] * b->m[11]) + (a->m[15] * b->m[15] );
I wanted to make sure that this will give me the correct results for setting up my transformation matrix.
matrix m = 1,3,4,-1,5,6,7,-1,8,8,8,-1,0,0,0,1
which is arranged in memory like this:
1,3,4,-1
5,6,7,-1
8,8,8,-1
0,0,0,1
which I think is the way OpenGL lays out its matrices as 16 numbers.
using my code my answer comes out to be
[ 48.000000 53.000000 57.000000 -9.000000 ]
[ 91.000000 107.000000 118.000000 -19.000000 ]
[ 112.000000 136.000000 152.000000 -25.000000 ]
[ 0.000000 0.000000 0.000000 1.000000 ]
which is the transpose of Wolfram Alpha's answer:
(48 | 91 | 112 | 0
53 | 107 | 136 | 0
57 | 118 | 152 | 0
-9 | -19 | -25 | 1)
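As a quick cross-check (my addition, assuming a row-major reading of the 16 doubles), squaring the matrix with NumPy reproduces the C code's output exactly, so the arithmetic is right and the disagreement with Wolfram Alpha is purely a transpose/layout question:

```python
import numpy as np

# The 16 doubles read row-major, as listed in the question.
m = np.array([[1, 3, 4, -1],
              [5, 6, 7, -1],
              [8, 8, 8, -1],
              [0, 0, 0, 1]], dtype=float)

result = m @ m
# result:
# [[ 48.  53.  57.  -9.]
#  [ 91. 107. 118. -19.]
#  [112. 136. 152. -25.]
#  [  0.   0.   0.   1.]]
# result.T is what Wolfram Alpha displays, i.e. the same product read
# column-major (the layout OpenGL expects by default).
```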
Typically the transform chain looks like this, with a vertex point v and the model, view, and projection matrices:
position = projection * view * model * v
I can't tell you why your results differ, but one thing that helps: if you send the matrix to a GLSL mat4 uniform, you can use OpenGL's built-in transpose functionality to get the right matrix alignment:
glUniformMatrix4fv( Uniform_Location, 1, GL_TRUE, MatrixPointer );
The third parameter specifies whether OpenGL should transpose the matrix before setting the uniform.
Here is the index-creation code:
g = TitanFactory.build().set("storage.backend", "cassandra")
.set("storage.hostname", "127.0.0.1").open();
TitanManagement mgmt = g.getManagementSystem();
PropertyKey db_local_name = mgmt.makePropertyKey("db_local_name")
.dataType(String.class).make();
mgmt.buildIndex("byDb_local_name", Vertex.class).addKey(db_local_name)
.buildCompositeIndex();
PropertyKey db_schema = mgmt.makePropertyKey("db_schema")
.dataType(String.class).make();
mgmt.buildIndex("byDb_schema", Vertex.class).addKey(db_schema)
.buildCompositeIndex();
PropertyKey db_column = mgmt.makePropertyKey("db_column")
.dataType(String.class).make();
mgmt.buildIndex("byDb_column", Vertex.class).addKey(db_column)
.buildCompositeIndex();
PropertyKey type = mgmt.makePropertyKey("type").dataType(String.class)
.make();
mgmt.buildIndex("byType", Vertex.class).addKey(type)
.buildCompositeIndex();
PropertyKey value = mgmt.makePropertyKey("value")
.dataType(Object.class).make();
mgmt.buildIndex("byValue", Vertex.class).addKey(value)
.buildCompositeIndex();
PropertyKey index = mgmt.makePropertyKey("index")
.dataType(Integer.class).make();
mgmt.buildIndex("byIndex", Vertex.class).addKey(index)
.buildCompositeIndex();
mgmt.commit();
Here is the code that searches for vertices and then adds a vertex with 3 edges, running on a PC with a 3 GHz CPU and 2 GB RAM. It processes 830 vertices in 3 hours, and I have 100,000 rows of data, so it is far too slow. The code is below:
for (Object[] rowObj : list) {
// TXN_ID
Iterator<Vertex> iter = g.query()
.has("db_local_name", "Report Name 1")
.has("db_schema", "MPS").has("db_column", "txn_id")
.has("value", rowObj[0]).vertices().iterator();
if (iter.hasNext()) {
vertex1 = iter.next();
logger.debug("vertex1=" + vertex1.getId() + ","
+ vertex1.getProperty("db_local_name") + ","
+ vertex1.getProperty("db_schema") + ","
+ vertex1.getProperty("db_column") + ","
+ vertex1.getProperty("type") + ","
+ vertex1.getProperty("index") + ","
+ vertex1.getProperty("value"));
}
// TXN_TYPE
iter = g.query().has("db_local_name", "Report Name 1")
.has("db_schema", "MPS").has("db_column", "txn_type")
.has("value", rowObj[1]).vertices().iterator();
if (iter.hasNext()) {
vertex2 = iter.next();
logger.debug("vertex2=" + vertex2.getId() + ","
+ vertex2.getProperty("db_local_name") + ","
+ vertex2.getProperty("db_schema") + ","
+ vertex2.getProperty("db_column") + ","
+ vertex2.getProperty("type") + ","
+ vertex2.getProperty("index") + ","
+ vertex2.getProperty("value"));
}
// WALLET_ID
iter = g.query().has("db_local_name", "Report Name 1")
.has("db_schema", "MPS").has("db_column", "wallet_id")
.has("value", rowObj[2]).vertices().iterator();
if (iter.hasNext()) {
vertex3 = iter.next();
logger.debug("vertex3=" + vertex3.getId() + ","
+ vertex3.getProperty("db_local_name") + ","
+ vertex3.getProperty("db_schema") + ","
+ vertex3.getProperty("db_column") + ","
+ vertex3.getProperty("type") + ","
+ vertex3.getProperty("index") + ","
+ vertex3.getProperty("value"));
}
vertex4 = g.addVertex(null);
vertex4.setProperty("db_local_name", "Report Name 1");
vertex4.setProperty("db_schema", "MPS");
vertex4.setProperty("db_column", "amount");
vertex4.setProperty("type", "indivisual_0");
vertex4.setProperty("value", rowObj[3].toString());
vertex4.setProperty("index", i);
vertex1.addEdge("data", vertex4);
logger.debug("vertex1 added");
vertex2.addEdge("data", vertex4);
logger.debug("vertex2 added");
vertex3.addEdge("data", vertex4);
logger.debug("vertex3 added");
i++;
g.commit();
}
Is there any way to optimize this code?
For completeness, this question was answered in the Aurelius Graphs mailing list:
https://groups.google.com/forum/#!topic/aureliusgraphs/XKT6aokRfFI
Basically:
1. Build/use a real composite index:
   mgmt.buildIndex("by_local_name_schema_value", Vertex.class).addKey(db_local_name).addKey(db_schema).addKey(value).buildCompositeIndex();
2. Don't call g.commit() after each loop cycle; instead do something like: if (++i % 10000 == 0) g.commit();
3. Turn on storage.batch-loading if you are not already doing so.
4. If all you can throw at Cassandra is 2 GB of RAM, consider using BerkeleyDB. Cassandra prefers a minimum of 4 GB of RAM and would probably like "more".
5. I don't know the nature of your data, but can you pre-sort it and use BatchGraph as described in the Powers of Ten - Part I blog post and in the wiki? Using BatchGraph would save you from having to maintain the transaction described in number 2 above.