How to forecast unknown future target values with gluonts DeepAR?
I have a time series from 1995-01-01 to 2021-10-01 with monthly frequency. How do I forecast values for the next 3 months, 2021-11-01 to 2022-01-01? Note that I don't have the target values for 2021-11-01, 2021-12-01, and 2022-01-01.
Many thanks!
from gluonts.model.deepar import DeepAREstimator
from gluonts.mx import Trainer
import numpy as np
import mxnet as mx
np.random.seed(7)
mx.random.seed(7)
estimator = DeepAREstimator(
    prediction_length=12,
    context_length=120,
    freq='M',
    trainer=Trainer(
        epochs=5,
        learning_rate=1e-03,
        num_batches_per_epoch=50))
predictor = estimator.train(training_data=df_train)
# Forecasting
predictions = predictor.predict(df_test)
predictions = list(predictions)[0]
predictions = predictions.quantile(0.5)
print(predictions)
[163842.34 152805.08 161326.3 176823.97 127003.79 126937.78
139575.2 117121.67 115754.67 139211.28 122623.586 120102.65 ]
As I understand it, the predicted values are not for "2021-11-01", "2021-12-01" and "2022-01-01". How do I know which months these values refer to? How do I forecast values for the next 3 months: "2021-11-01", "2021-12-01" and "2022-01-01"?
Take a look at this code. It comes from "Advanced Forecasting with Python".
https://github.com/Apress/advanced-forecasting-python/blob/main/Chapter%2020%20-%20Amazon's%20DeepAR.ipynb
It does not seem to forecast unknown future values, since it compares the last 28 values of test_ds (Listing 20-5. R2 score and prediction graph) with the predictions made over that same dataset test_ds (Listing 20-4. Prediction).
How do I forecast unknown future values?
Many thanks!
Data source
https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting
# Listing 20-1. Importing the data
import pandas as pd
y = pd.read_csv('air_visit_data.csv.zip')
y = y.pivot(index='visit_date', columns='air_store_id')['visitors']
y = y.fillna(0)
y = pd.DataFrame(y.sum(axis=1))
y = y.reset_index(drop=False)
y.columns = ['date', 'y']
# Listing 20-2. Preparing the data format required by the gluonts library
from gluonts.dataset.common import ListDataset
start = pd.Timestamp("01-01-2016", freq="H")
# train dataset: cut the last window of length "prediction_length", add "target" and "start" fields
train_ds = ListDataset([{'target': y.loc[:450,'y'], 'start': start}], freq='H')
# test dataset: use the whole dataset, add "target" and "start" fields
test_ds = ListDataset([{'target': y['y'], 'start': start}],freq='H')
# Listing 20-3. Fitting the default DeepAR model
from gluonts.model.deepar import DeepAREstimator
from gluonts.trainer import Trainer
import mxnet as mx
import numpy as np
np.random.seed(7)
mx.random.seed(7)
estimator = DeepAREstimator(
    prediction_length=28,
    context_length=100,
    freq='H',
    trainer=Trainer(ctx="gpu", # remove if running on windows
                    epochs=5,
                    learning_rate=1e-3,
                    num_batches_per_epoch=100
                    )
)
predictor = estimator.train(train_ds)
# Listing 20-4. Prediction
predictions = predictor.predict(test_ds)
predictions = list(predictions)[0]
predictions = predictions.quantile(0.5)
# Listing 20-5. R2 score and prediction graph
from sklearn.metrics import r2_score
print(r2_score( list(test_ds)[0]['target'][-28:], predictions))
import matplotlib.pyplot as plt
plt.plot(predictions)
plt.plot(list(test_ds)[0]['target'][-28:])
plt.legend(['predictions', 'actuals'])
plt.show()

In your case the context length is 120 and the prediction length is 12, so the model looks back at 120 data points to predict the next 12 data points.
The recommendation is to reduce the context length to maybe 10 and include the data from the past 10 months in the df_test table.
You can get the start of the forecast using
list(predictor.predict(df_test))[0].start_date
Based on this, create a future table of 12 dates (as 12 is the prediction length), as in the sketch below.
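A minimal sketch of that last step, assuming the variables from the question and monthly frequency (start_date may be a pandas Timestamp or a Period depending on the gluonts version; pd.period_range accepts either):

import pandas as pd

forecast = list(predictor.predict(df_test))[0]

# Label each of the 12 predicted values with the month it refers to
future_months = pd.period_range(start=forecast.start_date, periods=12, freq='M')
predictions = pd.Series(forecast.quantile(0.5), index=future_months)
print(predictions)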

Related

Webscraping to a DataFrame

I am trying to get information from a website, and into a Dataframe, but I'm having some trouble.
I have extracted the data, but I'm trying to merge two dataframes, and reshape them into one. Here is what I have:
import numpy as np
import pandas as pd
import requests
from bs4 import BeautifulSoup
url = 'https://www.civilaviation.gov.in/'
resp = requests.get(url)
soup = BeautifulSoup(resp.content.decode(), 'html.parser')
div = soup.find('div', {'class':'airport-col vande-bharat-col'})
div2 = soup.find('div', {'class':'airport-col airport-widget'})
div['class'] = 'Domestic traffic'
div2['class'] = 'International traffic'
dom = div.get_text()
intl = div2.get_text()
def str2frame(estr, sep='\n', lineterm='\n\n\n\n\n', set_header=True):
    dat = [x.split(sep) for x in estr.split(lineterm)][0:-1]
    df = pd.DataFrame(dat)
    if set_header:
        df = df.T.set_index(0, drop=True).T  # flip, set index, flip back
    return df
df1 = str2frame(dom)
df2 = str2frame(intl)
df1.rename(columns={"अन्तर्देशीय यातायात Domestic traffic On 29 Jan 2023":"Domestic Traffic"}, inplace=True)
df2.rename(columns={"अंतर्राष्ट्रीय यातायात International traffic On 29 Jan 2023":"International Traffic"}, inplace=True)
So now I get two separate DataFrames with all the information I want, but not in the format I want. The shape of my dataframes is (6, 2) (one of the columns is blank)... I need them merged into one dataframe of shape (2, 6). So basically I show
Domestic Traffic
1 Departing flights 2,967
2 Departing Pax 4,24,224
3 Arriving flights 2,960
4 Arriving Pax 4,18,697
5 Aircraft movements 5,927
6 Airport footfalls 8,42,921
I would like to see two rows, one for domestic and one for international traffic, and each column based on the given values. I apologize if my question or my coding is unclear. This is my first time asking a question on this forum. Thank you for your help.
Not sure if this is the expected result but you could transform and concat your data:
pd.concat([
    df1.set_index(df1.columns[0]).T,
    df2.set_index(df2.columns[0]).T
]).reset_index()
Output

                                                             0  Departing flights  Departing Pax  Arriving flights  Arriving Pax  Aircraft movements  Airport footfalls
0          अन्तर्देशीय यातायात Domestic traffic On 30 Jan 2023              2,862       4,07,957             2,864      4,04,799               5,726           8,12,756
1  अंतर्राष्ट्रीय यातायात International traffic On 30 Jan 2023                433         90,957               516        82,036                 949           1,72,993
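If shorter row labels are wanted, a possible follow-up (a sketch assuming df1 and df2 from the question; the first column of the concat result holds the long header strings):

out = pd.concat([
    df1.set_index(df1.columns[0]).T,
    df2.set_index(df2.columns[0]).T
]).reset_index()

# Replace the long header strings with short category labels
out = out.rename(columns={out.columns[0]: 'Traffic'})
out['Traffic'] = ['Domestic', 'International']
print(out)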

How can I speed up my optimization with Gekko?

My program optimizes the charging and discharging of a home battery to minimize the cost of electricity at the end of the year. The electricity usage of homes is measured every 15 minutes, so I have 96 measurement points in 1 day. I want to optimize the charging and discharging of the battery for 2 days, so that day 1 takes the usage of day 2 into account. I wrote the following code and it works.
from gekko import GEKKO
import numpy as np
import pandas as pd
import time
import math
# ------------------------ Import and read input data ------------------------
file = r'D:\Bedrijfseconomie\MP Thuisbatterijen\Spyder - Gekko\Data Sim 1.xlsx'
data = pd.read_excel(file, sheet_name='Input', na_values='NaN')
dataRead = pd.DataFrame(data, columns=['Timestep','Verbruik woning (kWh)','Prijs afname (€/kWh)',
                                       'Capaciteit batterij (kW)','Capaciteit batterij (kWh)',
                                       'Rendement (%)','Verbruikersprofiel'])
timestep = dataRead['Timestep'].to_numpy()
usage_home = dataRead['Verbruik woning (kWh)'].to_numpy()
price = dataRead['Prijs afname (€/kWh)'].to_numpy()
cap_batt_kW = dataRead['Capaciteit batterij (kW)'].iloc[0]
cap_batt_kWh = dataRead['Capaciteit batterij (kWh)'].iloc[0]
efficiency = dataRead['Rendement (%)'].iloc[0]
usersprofile = dataRead['Verbruikersprofiel'].iloc[0]
# ---------------------------- Optimization model ----------------------------
# Initialise model
m = GEKKO()
# Global options
m.options.SOLVER = 1
# Constants
snelheid_laden = cap_batt_kW/4
T = len(timestep)
loss_charging = m.Const(value = (1-efficiency)/2)
max_cap_batt = m.Const(value = cap_batt_kWh)
min_cap_batt = m.Const(value = 0)
max_charge = m.Const(value = snelheid_laden) # max battery can charge in 15min
max_decharge = m.Const(value = -snelheid_laden) # max battery can decharge in 15min
# Parameters
dummy = np.array(np.ones([T]))
# Variables
e_batt = m.Array(m.Var, (T), lb = min_cap_batt, ub = max_cap_batt) # energy in battery
usage_net = m.Array(m.Var, (T)) # usage home & charge/decharge battery
price_paid = m.Array(m.Var, (T)) # price paid each 15min
charging = m.Array(m.Var, (T), lb = max_decharge, ub = max_charge) # amount charge/decharge each 15min
# Intermediates
e_batt[0] = m.Intermediate(charging[0])
for t in range(T):
    e_batt[t] = m.Intermediate(m.sum([charging[i]*(1-loss_charging) for i in range(t)]))
usage_net = [m.Intermediate(usage_home[t] + charging[t]) for t in range(T)]
price_paid = [m.Intermediate(usage_net[t] * price[t] / 100) for t in range(T)]
total_price = m.Intermediate(m.sum([price_paid[t] for t in range(T)]))
# Equations (constraints)
m.Equation([min_cap_batt*dummy[t] <= e_batt[t] for t in range(T)])
m.Equation([max_cap_batt*dummy[t] >= e_batt[t] for t in range(T)])
m.Equation([max_charge*dummy[t] >= charging[t] for t in range(T)])
m.Equation([max_decharge*dummy[t] <= charging[t] for t in range(T)])
m.Equation([min_cap_batt*dummy[t] <= usage_net[t] for t in range(T)])
m.Equation([(-1*charging[t]) <= (1-loss_charging)*e_batt[t] for t in range(T)])
# Objective
m.Minimize(total_price)
# Solve problem
m.solve()
My code runs and it works, but despite the reported solution time of 10 seconds, the total time to run is around 8 minutes. Does anyone know a way I can speed it up?
There are a few ways to speed up the Gekko code:
Solve locally instead of on the public server with m=GEKKO(remote=False). The public server can slow down when it has many jobs.
Use sum() instead of m.sum(). This can be faster for compiling the model. Otherwise, use m.integral(x) if you need the integral of x.
Many of the equations are repeated at each step of the time horizon. Gekko is more efficient with a single equation definition using IMODE=2 (for algebraic equation models) or IMODE=6 (for differential / algebraic equation models); it then creates the equations over the time horizon. You may need to use m.vsum() instead of m.sum().
For additional diagnosis, try setting m.options.DIAGLEVEL=1 to get a detailed timing report of how long it takes to compile the model and perform each function, 1st derivative, and 2nd derivative calculation. It also gives a detailed view of solver versus model time during the solution phase. A short sketch of the first and last suggestions follows below.
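A minimal sketch of the remote=False and DIAGLEVEL options (both are standard Gekko settings; the trivial model here just stands in for the battery model above):

from gekko import GEKKO

# Solve locally instead of on the public server
m = GEKKO(remote=False)

# Trivial stand-in model; replace with the battery model above
x = m.Var(value=1, lb=0)
m.Equation(x >= 2)
m.Minimize(x)

# Print a detailed timing report: compile time, function and
# derivative evaluations, and solver versus model time
m.options.DIAGLEVEL = 1
m.solve()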
Update with Data File Testing
Thanks for sending the data file. The run directory shows that the model file is 58,682 lines long. It takes a while to compile a model that size. Here is the solution from the files you sent:
--------- APM Model Size ------------
Each time step contains
Objects : 193
Constants : 5
Variables : 20641
Intermediates: 578
Connections : 18721
Equations : 20259
Residuals : 19681
Number of state variables: 20641
Number of total equations: - 19873
Number of slack variables: - 1152
---------------------------------------
Degrees of freedom : -384
* Warning: DOF <= 0
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter Objective Convergence
0 3.37044E+01 5.00000E+00
1 2.81987E+01 1.00000E-10
2 2.81811E+01 5.22529E-12
3 2.81811E+01 2.10942E-15
4 2.81811E+01 2.10942E-15
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 10.5119999999879 sec
Objective : 28.1811214884047
Successful solution
---------------------------------------------------
Here is a version that uses IMODE=6 instead. You define the variables and equations once and let Gekko handle the time discretization. It makes a much more efficient model because there is no unnecessary duplication of equations.
from gekko import GEKKO
import numpy as np
import pandas as pd
import time
import math
# ------------------------ Import and read input data ------------------------
file = r'Data Sim 1.xlsx'
data = pd.read_excel(file, sheet_name='Input', na_values='NaN')
dataRead = pd.DataFrame(data, columns=['Timestep','Verbruik woning (kWh)','Prijs afname (€/kWh)',
                                       'Capaciteit batterij (kW)','Capaciteit batterij (kWh)',
                                       'Rendement (%)','Verbruikersprofiel'])
timestep = dataRead['Timestep'].to_numpy()
usage_home = dataRead['Verbruik woning (kWh)'].to_numpy()
price = dataRead['Prijs afname (€/kWh)'].to_numpy()
cap_batt_kW = dataRead['Capaciteit batterij (kW)'].iloc[0]
cap_batt_kWh = dataRead['Capaciteit batterij (kWh)'].iloc[0]
efficiency = dataRead['Rendement (%)'].iloc[0]
usersprofile = dataRead['Verbruikersprofiel'].iloc[0]
# ---------------------------- Optimization model ----------------------------
# Initialise model
m = GEKKO()
m.open_folder()
# Global options
m.options.SOLVER = 1
m.options.IMODE = 6
# Constants
snelheid_laden = cap_batt_kW/4
m.time = timestep
loss_charging = m.Const(value = (1-efficiency)/2)
max_cap_batt = m.Const(value = cap_batt_kWh)
min_cap_batt = m.Const(value = 0)
max_charge = m.Const(value = snelheid_laden) # max battery can charge in 15min
max_decharge = m.Const(value = -snelheid_laden) # max battery can decharge in 15min
# Parameters
usage_home = m.Param(usage_home)
price = m.Param(price)
# Variables
e_batt = m.Var(value=0, lb = min_cap_batt, ub = max_cap_batt) # energy in battery
price_paid = m.Var() # price paid each 15min
charging = m.Var(lb = max_decharge, ub = max_charge) # amount charge/decharge each 15min
usage_net = m.Var(lb=min_cap_batt)
# Equations
m.Equation(e_batt==m.integral(charging*(1-loss_charging)))
m.Equation(usage_net==usage_home + charging)
price_paid = m.Intermediate(usage_net * price / 100)
m.Equation(-charging <= (1-loss_charging)*e_batt)
# Objective
m.Minimize(price_paid)
# Solve problem
m.solve()
import matplotlib.pyplot as plt
plt.plot(m.time,e_batt.value,label='Battery Charge')
plt.plot(m.time,charging.value,label='Charging')
plt.plot(m.time,price_paid.value,label='Price')
plt.plot(m.time,usage_net.value,label='Net Usage')
plt.xlabel('Time'); plt.grid(); plt.legend(); plt.show()
The model is only 31 lines long (see gk0_model.apm) and it solves much faster (a couple seconds total).
--------- APM Model Size ------------
Each time step contains
Objects : 0
Constants : 5
Variables : 8
Intermediates: 1
Connections : 0
Equations : 6
Residuals : 5
Number of state variables: 1337
Number of total equations: - 955
Number of slack variables: - 191
---------------------------------------
Degrees of freedom : 191
----------------------------------------------
Dynamic Control with APOPT Solver
----------------------------------------------
Iter Objective Convergence
0 3.46205E+01 3.00000E-01
1 3.30649E+01 4.41141E-10
2 3.12774E+01 1.98558E-11
3 3.03148E+01 1.77636E-15
4 2.96824E+01 3.99680E-15
5 2.82700E+01 8.88178E-16
6 2.82039E+01 1.77636E-15
7 2.81334E+01 8.88178E-16
8 2.81085E+01 1.33227E-15
9 2.81039E+01 8.88178E-16
Iter Objective Convergence
10 2.81005E+01 8.88178E-16
11 2.80999E+01 1.77636E-15
12 2.80996E+01 8.88178E-16
13 2.80996E+01 8.88178E-16
14 2.80996E+01 8.88178E-16
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 0.527499999996508 sec
Objective : 28.0995878585948
Successful solution
---------------------------------------------------
There is no long compile time. Also, the solver time is reduced from 10 sec to 0.5 sec. The objective function is nearly the same (28.18 versus 28.10).
Here is a complete version without the data file dependency (in case the data file isn't available in the future).
from gekko import GEKKO
import numpy as np
import pandas as pd
import time
import math
# ------------------------ Import and read input data ------------------------
timestep = np.arange(1,193)
usage_home = np.array([0.05,0.07,0.09,0.07,0.05,0.07,0.07,0.07,0.06,
0.05,0.07,0.07,0.09,0.07,0.06,0.07,0.07,
0.07,0.16,0.12,0.17,0.08,0.10,0.11,0.06,
0.06,0.06,0.06,0.06,0.07,0.07,0.07,0.08,
0.08,0.06,0.07,0.07,0.07,0.07,0.05,0.07,
0.07,0.07,0.07,0.21,0.08,0.07,0.08,0.27,
0.12,0.09,0.10,0.11,0.09,0.09,0.08,0.08,
0.12,0.15,0.08,0.10,0.08,0.10,0.09,0.10,
0.09,0.08,0.10,0.12,0.10,0.10,0.10,0.11,
0.10,0.10,0.11,0.13,0.21,0.12,0.10,0.10,
0.11,0.10,0.11,0.12,0.12,0.10,0.11,0.10,
0.10,0.10,0.11,0.10,0.10,0.09,0.08,0.12,
0.10,0.11,0.11,0.10,0.06,0.05,0.06,0.06,
0.06,0.07,0.06,0.06,0.05,0.06,0.05,0.06,
0.05,0.06,0.05,0.06,0.07,0.06,0.09,0.10,
0.10,0.22,0.08,0.06,0.05,0.06,0.08,0.08,
0.07,0.08,0.07,0.07,0.16,0.21,0.08,0.08,
0.09,0.09,0.10,0.09,0.09,0.08,0.12,0.24,
0.09,0.08,0.09,0.08,0.10,0.24,0.08,0.09,
0.09,0.08,0.08,0.07,0.06,0.05,0.06,0.07,
0.07,0.05,0.05,0.06,0.05,0.28,0.11,0.20,
0.10,0.09,0.28,0.10,0.15,0.09,0.10,0.18,
0.12,0.13,0.30,0.10,0.11,0.10,0.10,0.11,
0.10,0.21,0.10,0.10,0.12,0.10,0.08])
price = np.array([209.40,209.40,209.40,209.40,193.00,193.00,193.00,
193.00,182.75,182.75,182.75,182.75,161.60,161.60,
161.60,161.60,154.25,154.25,154.25,154.25,150.70,
150.70,150.70,150.70,150.85,150.85,150.85,150.85,
150.00,150.00,150.00,150.00,153.25,153.25,153.25,
153.25,153.25,153.25,153.25,153.25,151.35,151.35,
151.35,151.35,151.70,151.70,151.70,151.70,154.95,
154.95,154.95,154.95,150.20,150.20,150.20,150.20,
153.75,153.75,153.75,153.75,160.55,160.55,160.55,
160.55,179.90,179.90,179.90,179.90,202.00,202.00,
202.00,202.00,220.25,220.25,220.25,220.25,245.75,
245.75,245.75,245.75,222.90,222.90,222.90,222.90,
203.40,203.40,203.40,203.40,205.30,205.30,205.30,
205.30,192.80,192.80,192.80,192.80,177.00,177.00,
177.00,177.00,159.90,159.90,159.90,159.90,152.50,
152.50,152.50,152.50,143.95,143.95,143.95,143.95,
142.10,142.10,142.10,142.10,143.75,143.75,143.75,
143.75,170.80,170.80,170.80,170.80,210.35,210.35,
210.35,210.35,224.45,224.45,224.45,224.45,226.30,
226.30,226.30,226.30,227.85,227.85,227.85,227.85,
225.45,225.45,225.45,225.45,225.80,225.80,225.80,
225.80,224.50,224.50,224.50,224.50,220.30,220.30,
220.30,220.30,220.00,220.00,220.00,220.00,221.90,
221.90,221.90,221.90,230.25,230.25,230.25,230.25,
233.60,233.60,233.60,233.60,225.20,225.20,225.20,
225.20,179.85,179.85,179.85,179.85,171.85,171.85,
171.85,171.85,162.90,162.90,162.90,162.90,158.85,
158.85,158.85,158.85])
cap_batt_kW = 3.00
cap_batt_kWh = 5.00
efficiency = 0.95
usersprofile = 1
# ---------------------------- Optimization model ----------------------------
# Initialise model
m = GEKKO()
#m.open_folder()
# Global options
m.options.SOLVER = 1
m.options.IMODE = 6
# Constants
snelheid_laden = cap_batt_kW/4
m.time = timestep
loss_charging = m.Const(value = (1-efficiency)/2)
max_cap_batt = m.Const(value = cap_batt_kWh)
min_cap_batt = m.Const(value = 0)
max_charge = m.Const(value = snelheid_laden) # max battery can charge in 15min
max_decharge = m.Const(value = -snelheid_laden) # max battery can decharge in 15min
# Parameters
usage_home = m.Param(usage_home)
price = m.Param(price)
# Variables
e_batt = m.Var(value=0, lb = min_cap_batt, ub = max_cap_batt) # energy in battery
price_paid = m.Var() # price paid each 15min
charging = m.Var(lb = max_decharge, ub = max_charge) # amount charge/decharge each 15min
usage_net = m.Var(lb=min_cap_batt)
# Equations
m.Equation(e_batt==m.integral(charging*(1-loss_charging)))
m.Equation(usage_net==usage_home + charging)
price_paid = m.Intermediate(usage_net * price / 100)
m.Equation(-charging <= (1-loss_charging)*e_batt)
# Objective
m.Minimize(price_paid)
# Solve problem
m.solve()
import matplotlib.pyplot as plt
plt.plot(m.time,e_batt.value,label='Battery Charge')
plt.plot(m.time,charging.value,label='Charging')
plt.plot(m.time,price_paid.value,label='Price')
plt.plot(m.time,usage_net.value,label='Net Usage')
plt.xlabel('Time'); plt.grid(); plt.legend(); plt.show()

Issue with loop in trading strategy backtest

I'm currently trying to put together some code that backtests a simple trading strategy: it steps through time series price data, incrementally fits an ARIMA model, makes future price predictions, and then either adds a share if the price is predicted to increase, or sells all accumulated shares if the price is predicted to go down. Currently, it's returning NaN values for the projected returns from trades and appears to only be selling somehow.
I've attached my code below. There's just a few simple functions for calculating a sharpe ratio and then the main function for running backtests.
import yfinance as yf
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima.model import ARIMA
import numpy as np
import seaborn as sns
from tqdm import tqdm
import pandas as pd
from statsmodels.tools.sm_exceptions import ValueWarning, HessianInversionWarning, ConvergenceWarning
import warnings
# in practice do not suppress these warnings; they carry important information about the status of your model
warnings.filterwarnings('ignore', category=ValueWarning)
warnings.filterwarnings('ignore', category=HessianInversionWarning)
warnings.filterwarnings('ignore', category=ConvergenceWarning)
tickerSymbol = 'SPY'
data = yf.Ticker(tickerSymbol)
prices = data.history(start='2017-01-01', end='2019-01-01').Close
returns = prices.pct_change().dropna()
def std_dev(data):
    # Get number of observations
    n = len(data)
    # Calculate mean
    mean = sum(data) / n
    # Calculate deviations from the mean
    deviations = sum([(x - mean)**2 for x in data])
    # Calculate Variance & Standard Deviation
    variance = deviations / (n - 1)
    s = variance**(1/2)
    return s

# Sharpe Ratio From Scratch
def sharpe_ratio(data, risk_free_rate=0):
    # Calculate Average Daily Return
    mean_daily_return = sum(data) / len(data)
    print(f"mean daily return = {mean_daily_return}")
    # Calculate Standard Deviation
    s = std_dev(data)
    # Calculate Daily Sharpe Ratio
    daily_sharpe_ratio = (mean_daily_return - risk_free_rate) / s
    # Annualize Daily Sharpe Ratio
    sharpe_ratio = 252**(1/2) * daily_sharpe_ratio
    return sharpe_ratio
def run_simulation(returns, prices, amt, order, thresh, verbose=True, plot=True):
    if type(order) == float:
        thresh = None
    sum_list = []
    events_list = []
    sharpe_list = []
    init_amt = amt
    #go through dates
    for date, r in tqdm(returns.iloc[14:].items(), total=len(returns.iloc[14:])):
        #if you're currently holding the stock, sell it
        #get data til just before current date
        curr_data = returns[:date]
        if type(order) == tuple:
            try:
                #fit model
                model = ARIMA(curr_data, order=order).fit(maxiter=200)
                #get forecast
                pred = model.forecast()[0][0]
            except:
                pred = thresh - 1
        #if you predict a high enough return and not holding, buy stock
        if ((type(order) == float and np.random.random() < order)
                or (type(order) == tuple and pred > thresh)):
            buy_price = prices.loc[date]
            events_list.append(('b', date))
            int_buy_price = int(buy_price)
            sum_list.append(int_buy_price)
            if verbose:
                print('Bought at $%s'%buy_price)
                print('Predicted Return: %s'%round(pred,4))
                print('Actual Return: %s'%(round(ret, 4)))
                print('=======================================')
            continue
        #if you predict below the threshold return, sell the stock
        if ((type(order) == float and np.random.random() < order)
                or (type(order) == tuple and thresh > pred)
                or (order == 'last' and curr_data[-1] > 0)):
            sell_price = prices.loc[date]
            total_return = len(sum_list) * sell_price
            ret = (total_return-sum(sum_list))/sum(sum_list)
            amt *= (1+ret)
            events_list.append(('s', date, ret))
            sharpe_list.append(ret)
            sum_list.clear()
            if verbose:
                print('Sold at $%s'%sell_price)
                print('Predicted Return: %s'%round(pred,4))
                print('Actual Return: %s'%(round(ret, 4)))
                print('=======================================')
    if verbose:
        sharpe = sharpe_ratio(sharpe_list, risk_free_rate=0)
        print('Total Amount: $%s'%round(amt,2))
        print(f"Sharpe Ratio: {sharpe}")
    #graph
    if plot:
        plt.figure(figsize=(10,4))
        plt.plot(prices[14:])
        y_lims = (int(prices.min()*.95), int(prices.max()*1.05))
        shaded_y_lims = int(prices.min()*.5), int(prices.max()*1.5)
        for idx, event in enumerate(events_list):
            plt.axvline(event[1], color='k', linestyle='--', alpha=0.4)
            if event[0] == 's':
                color = 'green' if event[2] > 0 else 'red'
                plt.fill_betweenx(range(*shaded_y_lims),
                                  event[1], events_list[idx-1][1], color=color, alpha=0.1)
        tot_return = round(100*(amt / init_amt - 1), 2)
        sharpe = sharpe_ratio(sharpe_list, risk_free_rate=0)
        tot_return = str(tot_return) + '%'
        plt.title("%s Price Data\nThresh=%s\nTotal Amt: $%s\nTotal Return: %s"%(tickerSymbol, thresh, round(amt,2), tot_return), fontsize=20)
        plt.ylim(*y_lims)
        plt.show()
    print(sharpe)
    return amt
# A model with a dth difference to fit and ARMA(p,q) model is called an ARIMA process
# of order (p,d,q). You can select p,d, and q with a wide range of methods,
# including AIC, BIC, and empirical autocorrelations (Petris, 2009).
for thresh in [0.001]:
    run_simulation(returns, prices, 100000, (7,1,7), thresh, verbose=True)
I've discovered that it's failing to fit the ARIMA model for some reason (not converging to a solution, I guess). I'm not sure why, though, because it fit just fine when I was running a different strategy using the same data and order.
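One way to surface the real error (a diagnostic sketch, not part of the original post): the bare except in run_simulation silently turns every failed fit into pred = thresh - 1, which always triggers the sell branch. Replacing it with a logged exception inside the loop shows why the fit fails:

try:
    # fit model and forecast one step ahead
    model = ARIMA(curr_data, order=order).fit(maxiter=200)
    pred = model.forecast()[0][0]
except Exception as e:
    # Print the failure instead of hiding it. With recent statsmodels,
    # fit() no longer accepts maxiter and forecast() returns a Series,
    # so either call above can raise here.
    print(f"ARIMA fit/forecast failed on {date}: {e}")
    pred = thresh - 1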

Increase speed creation for masked xarray file

I am currently trying to crop a rectangular xarray dataset to the shape of a country using a mask grid. Below you can find my current solution (with simpler and smaller arrays). The code works and I get the desired mask based on 1s and 0s. The problem is that, when run on a real country shape (larger and more complex), the code takes over 30 minutes to run. Since I am using very basic operations here, like nested for loops, I also tried different alternatives, like a list approach. However, when timing the process, it did not improve on the code below. I wonder if there is a faster way to obtain this mask (vectorization?) or if I should approach the problem in a different way (I tried exploring xarray's properties, but have not found anything that tackles this issue yet).
Code below:
import geopandas as gpd
from shapely.geometry import Polygon, Point
import pandas as pd
import numpy as np
import xarray as xr
df = pd.read_csv('Brazil_borders.csv',index_col=0)
lats = np.array([-20, -5, -5, -20,])
lons = np.array([-60, -60, -30, -30])
lats2 = np.array([-10.25, -10.75, -11.25, -11.75, -12.25, -12.75, -13.25, -13.75,
-14.25, -14.75, -15.25, -15.75, -16.25, -16.75, -17.25, -17.75,
-18.25, -18.75, -19.25, -19.75, -20.25, -20.75, -21.25, -21.75,
-22.25, -22.75, -23.25, -23.75, -24.25, -24.75, -25.25, -25.75,
-26.25, -26.75, -27.25, -27.75, -28.25, -28.75, -29.25, -29.75,
-30.25, -30.75, -31.25, -31.75, -32.25, -32.75])
lons2 = np.array([-61.75, -61.25, -60.75, -60.25, -59.75, -59.25, -58.75, -58.25,
-57.75, -57.25, -56.75, -56.25, -55.75, -55.25, -54.75, -54.25,
-53.75, -53.25, -52.75, -52.25, -51.75, -51.25, -50.75, -50.25,
-49.75, -49.25, -48.75, -48.25, -47.75, -47.25, -46.75, -46.25,
-45.75, -45.25, -44.75, -44.25])
points = []
for i in range(len(lats)):
    _ = [lats[i], lons[i]]
    points.append(_)
poly_proj = Polygon(points)
mask = np.zeros((len(lats2),len(lons2))) # Mask with the dataset's shape and size.
for i in range(len(lats2)): # Iteration to verify if a given coordinate is within the polygon's area
    for j in range(len(lons2)):
        grid_point = Point(lats2[i], lons2[j])
        if grid_point.within(poly_proj):
            mask[i][j] = 1
bool_final = mask
bool_final
Here is the alternative based on a list approach, which had even worse processing time (according to timeit):
lats = np.array([-20, -5, -5, -20,])
lons = np.array([-60, -60, -30, -30])
lats2 = np.array([-10.25, -10.75, -11.25, -11.75, -12.25, -12.75, -13.25, -13.75,
-14.25, -14.75, -15.25, -15.75, -16.25, -16.75, -17.25, -17.75,
-18.25, -18.75, -19.25, -19.75, -20.25, -20.75, -21.25, -21.75,
-22.25, -22.75, -23.25, -23.75, -24.25, -24.75, -25.25, -25.75,
-26.25, -26.75, -27.25, -27.75, -28.25, -28.75, -29.25, -29.75,
-30.25, -30.75, -31.25, -31.75, -32.25, -32.75])
lons2 = np.array([-61.75, -61.25, -60.75, -60.25, -59.75, -59.25, -58.75, -58.25,
-57.75, -57.25, -56.75, -56.25, -55.75, -55.25, -54.75, -54.25,
-53.75, -53.25, -52.75, -52.25, -51.75, -51.25, -50.75, -50.25,
-49.75, -49.25, -48.75, -48.25, -47.75, -47.25, -46.75, -46.25,
-45.75, -45.25, -44.75, -44.25])
points = []
for i in range(len(lats)):
    _ = [lats[i], lons[i]]
    points.append(_)
poly_proj = Polygon(points)
grid_point = [Point(lats2[i],lons2[j]) for i in range(len(lats2)) for j in range(len(lons2))]
mask = [1 if grid_point[i].within(poly_proj) else 0 for i in range(len(grid_point))]
bool_final2 = np.reshape(mask,(((len(lats2)),(len(lons2)))))
Thank you in advance!
Based on this answer from snowman2, I created this simple function that provides a much faster solution using geopandas and rioxarray. Instead of using a list of latitudes and longitudes, one has to use a shapefile with the desired shape to be masked (instructions for GeoDataFrame creation from a list of coordinates).
import xarray as xr
import geopandas as gpd
import rioxarray
from shapely.geometry import mapping
def mask_shape_border(DS, shape_shp): # Inputs are the dataset to be cropped and the path of the mask file (.shp)
    if 'lat' in DS: # Some datasets use lat/lon, others latitude/longitude
        DS.rio.set_spatial_dims(x_dim="lon", y_dim="lat", inplace=True)
    elif 'latitude' in DS:
        DS.rio.set_spatial_dims(x_dim="longitude", y_dim="latitude", inplace=True)
    else:
        print("Error: check latitude and longitude variable names.")
    DS.rio.write_crs("epsg:4326", inplace=True)
    mask = gpd.read_file(shape_shp, crs="epsg:4326")
    DS_clipped = DS.rio.clip(mask.geometry.apply(mapping), mask.crs, drop=False)
    return DS_clipped
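A usage sketch (the file names here are hypothetical; any gridded dataset with latitude/longitude coordinates and a country-border shapefile in EPSG:4326 should work):

import xarray as xr

# Hypothetical inputs: a gridded NetCDF dataset and a border shapefile
DS = xr.open_dataset('precipitation_brazil.nc')
DS_clipped = mask_shape_border(DS, 'brazil_border.shp')

# Cells outside the border become NaN; drop=False keeps the original grid
print(DS_clipped)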

Interpolating GFS winds from isobaric to height coordinates using Metpy

I have been tasked with making plots of winds at various levels of the atmosphere to support aviation. While I have been able to make some nice plots using GFS model data (see code below), I'm really having to make a rough approximation of height using the pressure coordinates available from the GFS. I'm using winds at 300 hPa, 700 hPa, and 925 hPa to approximate the winds at 30,000 ft, 9,000 ft, and 3,000 ft. My question is really for the MetPy gurus out there: is there a way to interpolate these winds to a height surface? It sure would be nice to get the actual winds at these height levels! Thanks for any light anyone can shed on this subject!
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
from netCDF4 import num2date
from datetime import datetime, timedelta
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
from PIL import Image
from matplotlib import cm
# For the vertical levels we want to grab with our queries
# Levels need to be in Pa not hPa
Levels = [30000,70000,92500]
# Time deltas for days
Deltas = [1,2,3]
#Deltas = [1]
# Levels in hPa for the file names
LevelDict = {30000:'300', 70000:'700', 92500:'925'}
# The path to where our banners are stored
impath = 'C:\\Users\\shell\\Documents\\Python Scripts\\Banners\\'
# Final images saved here
imoutpath = 'C:\\Users\\shell\\Documents\\Python Scripts\\TVImages\\'
# Quick function for finding out which variable is the time variable in the
# netCDF files
def find_time_var(var, time_basename='time'):
    for coord_name in var.coordinates.split():
        if coord_name.startswith(time_basename):
            return coord_name
    raise ValueError('No time variable found for ' + var.name)
# Function to grab data at different levels from Siphon
def grabData(level):
    query.var = set()
    query.variables('u-component_of_wind_isobaric', 'v-component_of_wind_isobaric')
    query.vertical_level(level)
    data = ncss.get_data(query)
    u_wind_var = data.variables['u-component_of_wind_isobaric']
    v_wind_var = data.variables['v-component_of_wind_isobaric']
    time_var = data.variables[find_time_var(u_wind_var)]
    lat_var = data.variables['lat']
    lon_var = data.variables['lon']
    return u_wind_var, v_wind_var, time_var, lat_var, lon_var
# Construct a TDSCatalog instance pointing to the gfs dataset
best_gfs = TDSCatalog('http://thredds-jetstream.unidata.ucar.edu/thredds/catalog/grib/'
'NCEP/GFS/Global_0p5deg/catalog.xml')
# Pull out the dataset you want to use and look at the access URLs
best_ds = list(best_gfs.datasets.values())[1]
#print(best_ds.access_urls)
# Create NCSS object to access the NetcdfSubset
ncss = NCSS(best_ds.access_urls['NetcdfSubset'])
print(best_ds.access_urls['NetcdfSubset'])
# Looping through the forecast times
for delta in Deltas:
    # Create lat/lon box and the time(s) for location you want to get data for
    now = datetime.utcnow()
    fcst = now + timedelta(days=delta)
    timestamp = datetime.strftime(fcst, '%A')
    query = ncss.query()
    query.lonlat_box(north=78, south=45, east=-90, west=-220).time(fcst)
    query.accept('netcdf4')
    # Now looping through the levels to create our plots
    for level in Levels:
        u_wind_var, v_wind_var, time_var, lat_var, lon_var = grabData(level)
        # Get actual data values and remove any size 1 dimensions
        lat = lat_var[:].squeeze()
        lon = lon_var[:].squeeze()
        u_wind = u_wind_var[:].squeeze()
        v_wind = v_wind_var[:].squeeze()
        # converting to knots
        u_windkt = u_wind*1.94384
        v_windkt = v_wind*1.94384
        wspd = np.sqrt(np.power(u_windkt,2)+np.power(v_windkt,2))
        # Convert number of hours since the reference time into an actual date
        time = num2date(time_var[:].squeeze(), time_var.units)
        print(time)
        # Combine 1D latitude and longitudes into a 2D grid of locations
        lon_2d, lat_2d = np.meshgrid(lon, lat)
        # Create new figure
        #fig = plt.figure(figsize = (18,12))
        fig = plt.figure()
        fig.set_size_inches(26.67, 15)
        datacrs = ccrs.PlateCarree()
        plotcrs = ccrs.LambertConformal(central_longitude=-150,
                                        central_latitude=55,
                                        standard_parallels=(30, 60))
        # Add the map and set the extent
        ax = plt.axes(projection=plotcrs)
        ext = ax.set_extent([-195., -115., 50., 72.], datacrs)
        ext2 = ax.set_aspect('auto')
        ax.background_patch.set_fill(False)
        # Add state boundaries to plot
        ax.add_feature(cfeature.STATES, edgecolor='black', linewidth=2)
        # Add geopolitical boundaries for map reference
        ax.add_feature(cfeature.COASTLINE.with_scale('50m'))
        ax.add_feature(cfeature.OCEAN.with_scale('50m'))
        ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor='#cc9666', linewidth=4)
        if level == 30000:
            spdrng_sped = np.arange(30, 190, 2)
            windlvl = 'Jet_Stream'
        elif level == 70000:
            spdrng_sped = np.arange(20, 100, 1)
            windlvl = '9000_Winds_Aloft'
        elif level == 92500:
            spdrng_sped = np.arange(20, 80, 1)
            windlvl = '3000_Winds_Aloft'
        else:
            pass
        top = cm.get_cmap('Greens')
        middle = cm.get_cmap('YlOrRd')
        bottom = cm.get_cmap('BuPu_r')
        newcolors = np.vstack((top(np.linspace(0, 1, 128)),
                               middle(np.linspace(0, 1, 128))))
        newcolors2 = np.vstack((newcolors, bottom(np.linspace(0, 1, 128))))
        cmap = ListedColormap(newcolors2)
        cf = ax.contourf(lon_2d, lat_2d, wspd, spdrng_sped, cmap=cmap,
                         transform=datacrs, extend='max', alpha=0.75)
        cbar = plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50,
                            drawedges='true')
        cbar.ax.tick_params(labelsize=16)
        wslice = slice(1, None, 4)
        ax.quiver(lon_2d[wslice, wslice], lat_2d[wslice, wslice],
                  u_windkt[wslice, wslice], v_windkt[wslice, wslice], width=0.0015,
                  headlength=4, headwidth=3, angles='xy', color='black', transform=datacrs)
        plt.savefig(imoutpath+'TV_UpperAir'+LevelDict[level]+'_'+timestamp+'.png', bbox_inches='tight')
        # Now we use Pillow to overlay the banner with the appropriate day
        background = Image.open(imoutpath+'TV_UpperAir'+LevelDict[level]+'_'+timestamp+'.png')
        im = Image.open(impath+'Banner_'+windlvl+'_'+timestamp+'.png')
        # resize the image
        size = background.size
        im = im.resize(size, Image.ANTIALIAS)
        background.paste(im, (17, 8), im)
        background.save(imoutpath+'TV_UpperAir'+LevelDict[level]+'_'+timestamp+'.png', 'PNG')
Thanks for the question! My approach here is, for each column separately, to interpolate the pressure coordinate of the GFS-output geopotential height onto your provided altitudes, to estimate the pressure of each height level for each column. Then I can use those pressures to interpolate the GFS-output u and v onto. The GFS-output GPH and winds have very slightly different pressure coordinates, which is why I interpolated twice. I performed the interpolation using MetPy's interpolate.log_interpolate_1d, which performs a linear interpolation on the log of the inputs. Here is the code I used!
from datetime import datetime
import numpy as np
import metpy.calc as mpcalc
from metpy.units import units
from metpy.interpolate import log_interpolate_1d
from siphon.catalog import TDSCatalog
gfs_url = 'https://tds.scigw.unidata.ucar.edu/thredds/catalog/grib/NCEP/GFS/Global_0p5deg/catalog.xml'
cat = TDSCatalog(gfs_url)
now = datetime.utcnow()
# A shortcut to NCSS
ncss = cat.datasets['Best GFS Half Degree Forecast Time Series'].subset()
query = ncss.query()
query.var = set()
query.variables('u-component_of_wind_isobaric', 'v-component_of_wind_isobaric', 'Geopotential_height_isobaric')
query.lonlat_box(north=78, south=45, east=-90, west=-220)
query.time(now)
query.accept('netcdf4')
data = ncss.get_data(query)
# Reading in the u(isobaric), v(isobaric), isobaric vars and the GPH(isobaric6) and isobaric6 vars
# These are two slightly different vertical pressure coordinates.
# We will also assign units here, and this can allow us to go ahead and convert to knots
lat = units.Quantity(data.variables['lat'][:].squeeze(), units('degrees'))
lon = units.Quantity(data.variables['lon'][:].squeeze(), units('degrees'))
iso_wind = units.Quantity(data.variables['isobaric'][:].squeeze(), units('Pa'))
iso_gph = units.Quantity(data.variables['isobaric6'][:].squeeze(), units('Pa'))
u = units.Quantity(data.variables['u-component_of_wind_isobaric'][:].squeeze(), units('m/s')).to(units('knots'))
v = units.Quantity(data.variables['v-component_of_wind_isobaric'][:].squeeze(), units('m/s')).to(units('knots'))
gph = units.Quantity(data.variables['Geopotential_height_isobaric'][:].squeeze(), units('gpm'))
# Here we will select our altitudes to interpolate onto and convert them to geopotential meters
altitudes = ([30000., 9000., 3000.] * units('ft')).to(units('gpm'))
# Now we will interpolate the pressure coordinate for model output geopotential height to
# estimate the pressure level for our given altitudes at each grid point
pressures_of_alts = np.zeros((len(altitudes), len(lat), len(lon)))
for ilat in range(len(lat)):
    for ilon in range(len(lon)):
        pressures_of_alts[:, ilat, ilon] = log_interpolate_1d(altitudes,
                                                              gph[:, ilat, ilon],
                                                              iso_gph)
pressures_of_alts = pressures_of_alts * units('Pa')
# Similarly, we will use our interpolated pressures to interpolate
# our u and v winds across their given pressure coordinates.
# This will provide u, v at each of our interpolated pressure
# levels corresponding to our provided initial altitudes
u_at_levs = np.zeros((len(altitudes), len(lat), len(lon)))
v_at_levs = np.zeros((len(altitudes), len(lat), len(lon)))
for ilat in range(len(lat)):
    for ilon in range(len(lon)):
        u_at_levs[:, ilat, ilon], v_at_levs[:, ilat, ilon] = log_interpolate_1d(pressures_of_alts[:, ilat, ilon],
                                                                                iso_wind,
                                                                                u[:, ilat, ilon],
                                                                                v[:, ilat, ilon])
u_at_levs = u_at_levs * units('knots')
v_at_levs = v_at_levs * units('knots')
# We can use mpcalc to calculate a wind speed array from these
wspd = mpcalc.wind_speed(u_at_levs, v_at_levs)
I was able to take my output from this and coerce it into your plotting code (with some unit stripping).
Your 300-hPa GFS winds: [image]
My "30000-ft" GFS winds: [image]
Here is what my interpolated pressure fields at each estimated height level look like: [image]
Hope this helps!
I am not sure if this is what you are looking for (I am very new to MetPy), but I have been using the MetPy height_to_pressure_std(altitude) function. It returns a value in hPa, which I then convert to pascals and then to a unitless value to use in the Siphon vertical_level(float) function.
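A minimal sketch of that approach (height_to_pressure_std and the unit handling are standard MetPy/pint; the 30,000 ft value is just an example):

import metpy.calc as mpcalc
from metpy.units import units

# Convert a flight level to standard-atmosphere pressure
altitude = 30000 * units('ft')
pressure = mpcalc.height_to_pressure_std(altitude)  # returns hPa

# Strip to a plain float in Pa for Siphon's query.vertical_level()
level_pa = pressure.to(units('Pa')).magnitude
print(level_pa)
# query.vertical_level(level_pa)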
I don't think you can use MetPy functions to convert height to pressure (or vice versa) in the upper atmosphere. The errors are too large when using the Standard Atmosphere to convert, say, pressure to feet.
