I'm trying to load the Cornell dataset from PyTorch Geometric to train my graph neural network. I want to apply a mask, but I get this error (also on the Chameleon, Wisconsin, and Texas datasets). My Dataset class works perfectly with all the Planetoid datasets, whose masks are one-dimensional tensors; the two-dimensional mask tensors are probably what causes the problem.
Here is my code, which can be run on Colab without problems.
!pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.12.0+cu113.html
import torch_geometric
from torch_geometric.datasets import Planetoid, WebKB
from torch_geometric.utils import to_dense_adj, to_undirected, remove_self_loops
class Dataset(object):
    def __init__(self, name):
        super(Dataset, self).__init__()
        self.name = name
        if name == 'Cora':
            dataset = Planetoid(root='/tmp/Cora', name='Cora', split="full")
        if name == 'Citeseer':
            dataset = Planetoid(root='/tmp/Cora', name='Citeseer', split="full")
        if name == 'PubMed':
            dataset = Planetoid(root='/tmp/Cora', name='Pubmed', split="full")
        if name == 'Cornell':
            dataset = WebKB(root='/tmp/WebKB', name='Cornell')
        self.data = dataset[0]
        print(self.data)
        self.train_mask = self.data.train_mask
        self.valid_mask = self.data.val_mask
        self.test_mask = self.data.test_mask

    def train_val_test_split(self):
        train_x = self.data.x[self.data.train_mask]
        train_y = self.data.y[self.data.train_mask]
        valid_x = self.data.x[self.data.val_mask]
        valid_y = self.data.y[self.data.val_mask]
        test_x = self.data.x[self.data.test_mask]
        test_y = self.data.y[self.data.test_mask]
        return train_x, train_y, valid_x, valid_y, test_x, test_y

    def get_fullx(self):
        return self.data.x

    def get_edge_index(self):
        return self.data.edge_index

    def get_adjacency_matrix(self):
        # We will ignore this for the first part
        adj = to_dense_adj(self.data.edge_index)[0]
        return adj
The error I get is the one in the title, and it occurs in this snippet:
cornell_dataset = Dataset(name = 'Cornell')
train_x, train_y, valid_x, valid_y, test_x, test_y = cornell_dataset.train_val_test_split()
# check and confirm our data shapes match our expectations
print(f"Train shape x: {train_x.shape}, y: {train_y.shape}")
print(f"Val shape x: {valid_x.shape}, y: {valid_y.shape}")
print(f"Test shape x: {test_x.shape}, y: {test_y.shape}")
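If the two-dimensional masks are indeed the culprit: the WebKB datasets (and Chameleon) ship with 10 predefined splits, so data.train_mask has shape [num_nodes, 10] rather than the one-dimensional [num_nodes] masks Planetoid provides. A minimal sketch of a fix, assuming any single split (here the first) is acceptable, is to select one column in __init__:
# Sketch: reduce the 2-D masks ([num_nodes, 10], one column per split) to 1-D
if self.data.train_mask.dim() > 1:
    split = 0  # pick any of the 10 predefined splits
    self.data.train_mask = self.data.train_mask[:, split]
    self.data.val_mask = self.data.val_mask[:, split]
    self.data.test_mask = self.data.test_mask[:, split]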
I tried to follow the examples in these two links:
Link 1 - Sparse Matrix: https://www.tidyverse.org/blog/2020/11/tidymodels-sparse-support/
Link 2 - Workflow_sets: https://www.tmwr.org/workflow-sets.html
but I had trouble including the blueprint in the workflow set.
Here is how workflow_set is defined in link 2:
no_pre_proc <-
  workflow_set(
    preproc = list(simple = model_vars),
    models = list(MARS = mars_spec, CART = cart_spec, CART_bagged = bag_cart_spec,
                  RF = rf_spec, boosting = xgb_spec, Cubist = cubist_spec)
  )
and here is how the blueprint is added to a workflow in link 1:
wf_sparse <-
  workflow() %>%
  add_recipe(text_rec, blueprint = sparse_bp) %>%
  add_model(lasso_spec)

wf_default <-
  workflow() %>%
  add_recipe(text_rec) %>%
  add_model(lasso_spec)
Where and how do I add the "blueprint = sparse_bp" option in the workflow_set above?
My attempt was:
no_pre_proc <-
  workflow_set(
    preproc = list(simple = model_vars),
    models = list(MARS = mars_spec, CART = cart_spec, CART_bagged = bag_cart_spec,
                  RF = rf_spec, boosting = xgb_spec, Cubist = cubist_spec)) %>%
  option_add(update_blueprint(blueprint = sparse_bp))
Running the racing tune gave me this error
Error: Problem with `mutate()` column `option`.
i `option = purrr::map(option, append_options, dots)`.
x All options should be named.
Run `rlang::last_error()` to see where the error occurred
<error/rlang_error>
There were 9 workflows that had no results.
Backtrace:
1. ggplot2::autoplot(...)
2. workflowsets:::autoplot.workflow_set(...)
3. workflowsets:::rank_plot(...)
4. workflowsets:::pick_metric(object, rank_metric, metric)
6. workflowsets:::collect_metrics.workflow_set(x)
7. workflowsets:::check_incompete(x, fail = TRUE)
8. workflowsets:::halt(msg)
Run `rlang::last_trace()` to see the full context.
> rlang::last_trace()
<error/rlang_error>
There were 9 workflows that had no results.
Backtrace:
x
1. +-ggplot2::autoplot(...)
2. \-workflowsets:::autoplot.workflow_set(...)
3. \-workflowsets:::rank_plot(...)
4. \-workflowsets:::pick_metric(object, rank_metric, metric)
5. +-tune::collect_metrics(x)
6. \-workflowsets:::collect_metrics.workflow_set(x)
7. \-workflowsets:::check_incompete(x, fail = TRUE)
8. \-workflowsets:::halt(msg)
>
Thanks,
Thank you for asking this question; we definitely are not supporting this use case (passing non-default arguments to the recipe or model) very well right now. We've opened an issue here where you can track our work on this.
In the meantime, you could try a bit of a hacky workaround by manually using update_recipe() on the workflow you are interested in:
library(tidymodels)
#> Registered S3 method overwritten by 'tune':
#>   method                   from
#>   required_pkgs.model_spec parsnip

data(parabolic)

set.seed(1)
split <- initial_split(parabolic)
train_set <- training(split)
test_set <- testing(split)

glmnet_spec <-
  logistic_reg(penalty = 0.1, mixture = 0) %>%
  set_engine("glmnet")

rec <-
  recipe(class ~ ., data = train_set) %>%
  step_YeoJohnson(all_numeric_predictors())

sparse_bp <- hardhat::default_recipe_blueprint(composition = "dgCMatrix")

wfs_orig <-
  workflow_set(
    preproc = list(yj = rec,
                   norm = rec %>% step_normalize(all_numeric_predictors())),
    models = list(regularized = glmnet_spec)
  )

new_wf <-
  wfs_orig %>%
  extract_workflow("yj_regularized") %>%
  update_recipe(rec, blueprint = sparse_bp)
Created on 2021-12-09 by the reprex package (v2.0.1)
Then (I know this feels hacky for now) manually take this new_wf and stick it into the wfs_orig$info[[1]]$workflow slot to replace what is there.
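A minimal sketch of that manual replacement, assuming yj_regularized is the first row of the workflow set as in this example:
# info is a list-column of tibbles, and workflow is itself a list-column
# inside each tibble, hence the [[1]] on both:
wfs_orig$info[[1]]$workflow[[1]] <- new_wf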
I have been tasked with making plots of winds at various levels of the atmosphere to support aviation. While I have been able to make some nice plots using GFS model data (see code below), I'm really having to make a rough approximation of height using the pressure coordinates available from the GFS. I'm using winds at 300 hPa, 700 hPa, and 925 hPa as an approximation of the winds at 30,000 ft, 9,000 ft, and 3,000 ft. My question is really for the MetPy gurus out there: is there a way that I can interpolate these winds to a height surface? It sure would be nice to get the actual winds at these height levels! Thanks for any light anyone can shed on this subject!
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
from netCDF4 import num2date
from datetime import datetime, timedelta
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
from PIL import Image
from matplotlib import cm

# For the vertical levels we want to grab with our queries
# Levels need to be in Pa not hPa
Levels = [30000, 70000, 92500]

# Time deltas for days
Deltas = [1, 2, 3]
#Deltas = [1]

# Levels in hPa for the file names
LevelDict = {30000: '300', 70000: '700', 92500: '925'}

# The path to where our banners are stored
impath = 'C:\\Users\\shell\\Documents\\Python Scripts\\Banners\\'

# Final images saved here
imoutpath = 'C:\\Users\\shell\\Documents\\Python Scripts\\TVImages\\'

# Quick function for finding out which variable is the time variable in the
# netCDF files
def find_time_var(var, time_basename='time'):
    for coord_name in var.coordinates.split():
        if coord_name.startswith(time_basename):
            return coord_name
    raise ValueError('No time variable found for ' + var.name)

# Function to grab data at different levels from Siphon
def grabData(level):
    query.var = set()
    query.variables('u-component_of_wind_isobaric', 'v-component_of_wind_isobaric')
    query.vertical_level(level)
    data = ncss.get_data(query)
    u_wind_var = data.variables['u-component_of_wind_isobaric']
    v_wind_var = data.variables['v-component_of_wind_isobaric']
    time_var = data.variables[find_time_var(u_wind_var)]
    lat_var = data.variables['lat']
    lon_var = data.variables['lon']
    return u_wind_var, v_wind_var, time_var, lat_var, lon_var

# Construct a TDSCatalog instance pointing to the gfs dataset
best_gfs = TDSCatalog('http://thredds-jetstream.unidata.ucar.edu/thredds/catalog/grib/'
                      'NCEP/GFS/Global_0p5deg/catalog.xml')

# Pull out the dataset you want to use and look at the access URLs
best_ds = list(best_gfs.datasets.values())[1]
#print(best_ds.access_urls)

# Create NCSS object to access the NetcdfSubset
ncss = NCSS(best_ds.access_urls['NetcdfSubset'])
print(best_ds.access_urls['NetcdfSubset'])

# Looping through the forecast times
for delta in Deltas:
    # Create lat/lon box and the time(s) for location you want to get data for
    now = datetime.utcnow()
    fcst = now + timedelta(days=delta)
    timestamp = datetime.strftime(fcst, '%A')
    query = ncss.query()
    query.lonlat_box(north=78, south=45, east=-90, west=-220).time(fcst)
    query.accept('netcdf4')

    # Now looping through the levels to create our plots
    for level in Levels:
        u_wind_var, v_wind_var, time_var, lat_var, lon_var = grabData(level)

        # Get actual data values and remove any size 1 dimensions
        lat = lat_var[:].squeeze()
        lon = lon_var[:].squeeze()
        u_wind = u_wind_var[:].squeeze()
        v_wind = v_wind_var[:].squeeze()

        # Converting to knots
        u_windkt = u_wind * 1.94384
        v_windkt = v_wind * 1.94384
        wspd = np.sqrt(np.power(u_windkt, 2) + np.power(v_windkt, 2))

        # Convert number of hours since the reference time into an actual date
        time = num2date(time_var[:].squeeze(), time_var.units)
        print(time)

        # Combine 1D latitude and longitudes into a 2D grid of locations
        lon_2d, lat_2d = np.meshgrid(lon, lat)

        # Create new figure
        #fig = plt.figure(figsize = (18,12))
        fig = plt.figure()
        fig.set_size_inches(26.67, 15)
        datacrs = ccrs.PlateCarree()
        plotcrs = ccrs.LambertConformal(central_longitude=-150,
                                        central_latitude=55,
                                        standard_parallels=(30, 60))

        # Add the map and set the extent
        ax = plt.axes(projection=plotcrs)
        ext = ax.set_extent([-195., -115., 50., 72.], datacrs)
        ext2 = ax.set_aspect('auto')
        ax.background_patch.set_fill(False)

        # Add state boundaries to plot
        ax.add_feature(cfeature.STATES, edgecolor='black', linewidth=2)

        # Add geopolitical boundaries for map reference
        ax.add_feature(cfeature.COASTLINE.with_scale('50m'))
        ax.add_feature(cfeature.OCEAN.with_scale('50m'))
        ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor='#cc9666', linewidth=4)

        # Contour ranges and banner names for each level
        if level == 30000:
            spdrng_sped = np.arange(30, 190, 2)
            windlvl = 'Jet_Stream'
        elif level == 70000:
            spdrng_sped = np.arange(20, 100, 1)
            windlvl = '9000_Winds_Aloft'
        elif level == 92500:
            spdrng_sped = np.arange(20, 80, 1)
            windlvl = '3000_Winds_Aloft'
        else:
            pass

        # Stack three colormaps into one for the speed shading
        top = cm.get_cmap('Greens')
        middle = cm.get_cmap('YlOrRd')
        bottom = cm.get_cmap('BuPu_r')
        newcolors = np.vstack((top(np.linspace(0, 1, 128)),
                               middle(np.linspace(0, 1, 128))))
        newcolors2 = np.vstack((newcolors, bottom(np.linspace(0, 1, 128))))
        cmap = ListedColormap(newcolors2)

        cf = ax.contourf(lon_2d, lat_2d, wspd, spdrng_sped, cmap=cmap,
                         transform=datacrs, extend='max', alpha=0.75)
        cbar = plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50,
                            drawedges='true')
        cbar.ax.tick_params(labelsize=16)

        # Thin the wind vector grid before plotting
        wslice = slice(1, None, 4)
        ax.quiver(lon_2d[wslice, wslice], lat_2d[wslice, wslice],
                  u_windkt[wslice, wslice], v_windkt[wslice, wslice], width=0.0015,
                  headlength=4, headwidth=3, angles='xy', color='black', transform=datacrs)

        plt.savefig(imoutpath + 'TV_UpperAir' + LevelDict[level] + '_' + timestamp + '.png',
                    bbox_inches='tight')

        # Now we use Pillow to overlay the banner with the appropriate day
        background = Image.open(imoutpath + 'TV_UpperAir' + LevelDict[level] + '_' + timestamp + '.png')
        im = Image.open(impath + 'Banner_' + windlvl + '_' + timestamp + '.png')

        # Resize the banner image to the background size
        size = background.size
        im = im.resize(size, Image.ANTIALIAS)
        background.paste(im, (17, 8), im)
        background.save(imoutpath + 'TV_UpperAir' + LevelDict[level] + '_' + timestamp + '.png', 'PNG')
Thanks for the question! My approach here is, for each column, to interpolate the pressure coordinate of the GFS-output geopotential height onto your provided altitudes, which estimates the pressure of each height level in that column. I can then use those pressures to interpolate the GFS-output u and v. The GFS-output geopotential height and winds have very slightly different pressure coordinates, which is why I interpolate twice. I performed the interpolation using MetPy's interpolate.log_interpolate_1d, which performs a linear interpolation on the log of the inputs. Here is the code I used!
from datetime import datetime
import numpy as np
import metpy.calc as mpcalc
from metpy.units import units
from metpy.interpolate import log_interpolate_1d
from siphon.catalog import TDSCatalog
gfs_url = 'https://tds.scigw.unidata.ucar.edu/thredds/catalog/grib/NCEP/GFS/Global_0p5deg/catalog.xml'
cat = TDSCatalog(gfs_url)
now = datetime.utcnow()
# A shortcut to NCSS
ncss = cat.datasets['Best GFS Half Degree Forecast Time Series'].subset()
query = ncss.query()
query.var = set()
query.variables('u-component_of_wind_isobaric', 'v-component_of_wind_isobaric', 'Geopotential_height_isobaric')
query.lonlat_box(north=78, south=45, east=-90, west=-220)
query.time(now)
query.accept('netcdf4')
data = ncss.get_data(query)
# Reading in the u(isobaric), v(isobaric), isobaric vars and the GPH(isobaric6) and isobaric6 vars
# These are two slightly different vertical pressure coordinates.
# We will also assign units here, and this can allow us to go ahead and convert to knots
lat = units.Quantity(data.variables['lat'][:].squeeze(), units('degrees'))
lon = units.Quantity(data.variables['lon'][:].squeeze(), units('degrees'))
iso_wind = units.Quantity(data.variables['isobaric'][:].squeeze(), units('Pa'))
iso_gph = units.Quantity(data.variables['isobaric6'][:].squeeze(), units('Pa'))
u = units.Quantity(data.variables['u-component_of_wind_isobaric'][:].squeeze(), units('m/s')).to(units('knots'))
v = units.Quantity(data.variables['v-component_of_wind_isobaric'][:].squeeze(), units('m/s')).to(units('knots'))
gph = units.Quantity(data.variables['Geopotential_height_isobaric'][:].squeeze(), units('gpm'))
# Here we will select our altitudes to interpolate onto and convert them to geopotential meters
altitudes = ([30000., 9000., 3000.] * units('ft')).to(units('gpm'))
# Now we will interpolate the pressure coordinate for model output geopotential height to
# estimate the pressure level for our given altitudes at each grid point
pressures_of_alts = np.zeros((len(altitudes), len(lat), len(lon)))
for ilat in range(len(lat)):
    for ilon in range(len(lon)):
        pressures_of_alts[:, ilat, ilon] = log_interpolate_1d(altitudes,
                                                              gph[:, ilat, ilon],
                                                              iso_gph)
pressures_of_alts = pressures_of_alts * units('Pa')
# Similarly, we will use our interpolated pressures to interpolate
# our u and v winds across their given pressure coordinates.
# This will provide u, v at each of our interpolated pressure
# levels corresponding to our provided initial altitudes
u_at_levs = np.zeros((len(altitudes), len(lat), len(lon)))
v_at_levs = np.zeros((len(altitudes), len(lat), len(lon)))
for ilat in range(len(lat)):
    for ilon in range(len(lon)):
        u_at_levs[:, ilat, ilon], v_at_levs[:, ilat, ilon] = log_interpolate_1d(pressures_of_alts[:, ilat, ilon],
                                                                                iso_wind,
                                                                                u[:, ilat, ilon],
                                                                                v[:, ilat, ilon])
u_at_levs = u_at_levs * units('knots')
v_at_levs = v_at_levs * units('knots')
# We can use mpcalc to calculate a wind speed array from these
wspd = mpcalc.wind_speed(u_at_levs, v_at_levs)
I was able to take the output from this and coerce it into your plotting code (with some unit stripping).
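For instance, a sketch of that unit stripping (the .magnitude attribute of a pint quantity returns the bare NumPy array that the contourf/quiver calls expect):
# Drop units before handing the arrays to the plotting code
wspd_plot = wspd.magnitude
u_plot = u_at_levs.magnitude
v_plot = v_at_levs.magnitude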
Your 300-hPa GFS winds
My "30000-ft" GFS winds
Here is what my interpolated pressure fields at each estimated height level look like.
Hope this helps!
I am not sure if this is what you are looking for (I am very new to MetPy), but I have been using MetPy's height_to_pressure_std(altitude) function. It returns a value in hPa, which I then convert to pascals and then to a unitless float to use in Siphon's vertical_level(float) function.
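A minimal sketch of that conversion, assuming a 30,000 ft target level (height_to_pressure_std uses the U.S. Standard Atmosphere):
from metpy.calc import height_to_pressure_std
from metpy.units import units

p_hpa = height_to_pressure_std(30000 * units.feet)  # quantity in hPa
p_pa = p_hpa.to(units.Pa)                           # convert to Pa to match the GFS levels
query.vertical_level(float(p_pa.magnitude))         # Siphon wants a plain float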
I don't think you can use MetPy functions to convert height to pressure (or vice versa) in the upper atmosphere. The errors are too large when using the Standard Atmosphere to convert, say, pressure to feet.
I have wind data from multiple weather stations, and I have the coordinates of each weather station. I want to overlay a wind rose from each station on a map using the stations' latitude and longitude. Is there a straightforward way of doing this in R?
This is how I have done it so far in R: I saved the wind roses as PNGs and then overlaid them on the map.
library(ggmap)    # get_map(), ggmap(), inset_raster()
library(openair)  # windRose()
library(png)      # readPNG()
# dev.off.crop() comes from the author's setup; base dev.off() also works

#######################################
########produce map of GTA#############
#######################################
ggmap = get_map(location = c(left = -80.7, bottom = 43, right = -77.7, top = 44.9))
gta = ggmap(ggmap) + scale_y_continuous(limits = c(43.2, 44.5))

####################################################
###########for loop to make wind roses#############
####################################################
data_list <- list.files(path = "/home/npak/Documents/weather_data/meso_west_data/",
                        pattern = "\\.csv$", recursive = FALSE, full.names = TRUE)
l <- length(data_list)

for (i in 1:l) {
  # Station metadata sits in the first 8 header lines of each file
  header <- readLines(data_list[i], 8)
  variables = strsplit(header, ',')
  vars = variables[[7]]
  info = strsplit(header, ':')
  lat = as.numeric(info[[3]][2])
  long = as.numeric(info[[4]][2])
  elev = as.numeric(info[[5]][2])
  name = info[[2]][2]

  data <- read.table(data_list[i], header = FALSE, sep = ",",
                     col.names = paste0("V", seq_len(30)), fill = TRUE, skip = 8)
  missing = length(data) - length(vars)
  colnames(data) = c(vars, rep("Empty", missing))
  print(unique(data$Station_ID))

  data$ws <- data$wind_speed_set_1
  data$wd <- data$wind_direction_set_1
  data$date <- as.POSIXct(data$Date_Time, format = "%Y-%m-%d %H:%M")
  ID <- unique(data$Station_ID)

  # Draw the wind rose to a transparent PNG
  png(filename = (file <- paste('/home/npak/Documents/weather_data/meso_west_data/map_1year/', ID, 'all_year_map.png')),
      width = 2400, height = 2400, bg = "transparent")
  windRose(data,
           breaks = c(1.5, 3.3, 5.5, 8),
           max.freq = 25,
           paddle = FALSE,
           annotate = FALSE, key = FALSE,
           auto.text = FALSE, grid.line = list(lty = 0, value = 10),
           cols = c("red", "red2", "red3", "red4"))
  dev.off.crop(file = file)

  # Read the PNG back in and inset it at the station's coordinates
  mypng <- readPNG(file <- paste('/home/npak/Documents/weather_data/meso_west_data/map_1year/', ID, 'all_year_map.png'))
  gta = gta + inset_raster(mypng, ymin = lat - 0.2, ymax = lat + 0.2, xmin = long - 0.2, xmax = long + 0.2)
}

######################################################
###############save map into png######################
######################################################
png('/home/npak/Documents/weather_data/meso_west_data/map_1year/gta_annual.png',
    width = 2400, height = 2400)
print(gta)
dev.off()
But it doesn't look as good as I would like. Right now I am saving the PNG images with a transparent background, but the axes and frames are still there, which makes the map look a bit messy. This is how it looks:
Wind roses overlaid on map
So I am looking for something similar, but plotting directly over the map instead of going through the PNG files.
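One direction that avoids the PNG round trip entirely (a sketch, not from the original post: it assumes the per-station data frame data with ws/wd columns built in the loop above, and draws each rose with ggplot2 so theme_void() strips all axes and frames before insetting it with ggmap::inset()):
library(ggplot2)
library(ggmap)

# Build a frameless wind rose as a ggplot grob (bins are illustrative)
rose <- ggplot(data, aes(x = cut(wd, breaks = seq(0, 360, by = 30)),
                         fill = cut(ws, breaks = c(0, 1.5, 3.3, 5.5, 8, Inf)))) +
  geom_bar(width = 1) +
  coord_polar(start = -pi / 12) +
  scale_fill_manual(values = c("red", "red2", "red3", "red4", "darkred")) +
  theme_void() +                      # no axes, no frame
  theme(legend.position = "none")

# Inset the grob at the station location instead of a PNG
gta <- gta + inset(ggplotGrob(rose),
                   xmin = long - 0.2, xmax = long + 0.2,
                   ymin = lat - 0.2, ymax = lat + 0.2)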
I'm trying to do a churn analysis with R and SQL Server 2016.
I have uploaded my dataset to a database on a local SQL Server and have done all the preliminary work on it.
Now I have this function trainModel(), which I use to estimate my random forest model:
trainModel = function(sqlSettings, trainTable) {
  sqlConnString = sqlSettings$connString
  trainDataSQL <- RxSqlServerData(connectionString = sqlConnString,
                                  table = trainTable,
                                  colInfo = cdrColInfo)

  ## Create training formula
  labelVar = "churn"
  trainVars <- rxGetVarNames(trainDataSQL)
  trainVars <- trainVars[!trainVars %in% c(labelVar)]
  temp <- paste(c(labelVar, paste(trainVars, collapse = "+")), collapse = "~")
  formula <- as.formula(temp)

  ## Train a random forest with rxDForest on the SQL Server data source
  library(RevoScaleR)
  rx_forest_model <- rxDForest(formula = formula,
                               data = trainDataSQL,
                               nTree = 8,
                               maxDepth = 16,
                               mTry = 2,
                               minBucket = 1,
                               replace = TRUE,
                               importance = TRUE,
                               seed = 8,
                               parms = list(loss = c(0, 4, 1, 0)))
  return(rx_forest_model)
}
But when I run the function I get this output, with a warning:
> system.time({
+   trainModel(sqlSettings, trainTable)
+ })
   user  system elapsed
   0.29    0.07   58.18
Warning message:
In tempGetNumObs(numObs) :
  Number of observations not available for this data source. 'numObs' set to 1e6.
After this warning message, the function trainModel() does not create the rx_forest_model object.
Does anyone have any suggestions on how to solve this problem?
After several attempts, I found the reason why the function trainModel() did not work properly.
It is not a connection string problem, nor a data source issue.
The problem is in the syntax of the function trainModel(). It is enough to remove this statement from the body of the function:
return(rx_forest_model)
This way the function still emits the same warning message, but it creates the rx_forest_model object correctly.
So the correct function is:
trainModel = function(sqlSettings, trainTable) {
  sqlConnString = sqlSettings$connString
  trainDataSQL <- RxSqlServerData(connectionString = sqlConnString,
                                  table = trainTable,
                                  colInfo = cdrColInfo)

  ## Create training formula
  labelVar = "churn"
  trainVars <- rxGetVarNames(trainDataSQL)
  trainVars <- trainVars[!trainVars %in% c(labelVar)]
  temp <- paste(c(labelVar, paste(trainVars, collapse = "+")), collapse = "~")
  formula <- as.formula(temp)

  ## Train a random forest with rxDForest on the SQL Server data source
  library(RevoScaleR)
  rx_forest_model <- rxDForest(formula = formula,
                               data = trainDataSQL,
                               nTree = 8,
                               maxDepth = 16,
                               mTry = 2,
                               minBucket = 1,
                               replace = TRUE,
                               importance = TRUE,
                               seed = 8,
                               parms = list(loss = c(0, 4, 1, 0)))
}
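A minimal usage sketch, assuming the same sqlSettings and trainTable as in the question. In R the last evaluated expression is the function's value, and an assignment returns that value invisibly, so capture it explicitly at the call site:
# Keep the fitted forest by assigning the (invisible) return value
rx_forest_model <- trainModel(sqlSettings, trainTable)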