Error adding scale bar with map.scale - maps

I managed to create a map and even include a north arrow, but I can't get map.scale to work; I get this kind of error:
Error in map.scale(x = -83, y = 12, ratio = FALSE, relwidth = 0.2, cex
= 0.6) : unused arguments (ratio = FALSE, relwidth = 0.2, cex = 0.6)
Here is the code:
library(maps)
library(mapdata)
library(ggmap)
library(mapproj)
library(maptools) #for shapefiles
library(scales) #for transparency
library(GISTools)
range <- readShapePoly("isthmanianpacificmoistforestecoregion") #layer of data for species range
map("worldHires", c('Cost', 'pan', 'Nic', 'Colombia'), xlim=c(-89,-75),ylim=c(5,13), col="lightgray", fill=TRUE) #plot the region I want
map.scale(-81,8,relwidth = 0.15, metric = TRUE, ratio = TRUE)
plot(range, add=TRUE, xlim=c(-89,-75),ylim=c(5,13), col=alpha("green", 0.6), border=TRUE)
map.scale(x=-80, y=10) #, relwidth=0.3, cex=0.5, ratio=FALSE)
north.arrow(xb=-77, yb=12, len=0.2, lab="N", col="black", fill=TRUE) #

The problem is that map.scale() is a function in both the maps and GISTools packages, and you are trying to use the one from maps. Since you load maps first and then GISTools, map.scale() from maps is masked (R probably prints a masking warning when the last package is loaded).
The solution is to specify the package in the function call:
maps::map.scale(-81,8,relwidth = 0.15, metric = TRUE, ratio = TRUE)
Also, why two calls to map.scale()? You should probably remove one of them.

Related

Modelling and fitting bi-modal lognormal distributions in a loop using lmfit

I have been spending FAR too much time trying to figure this out - so time to seek help. I am attempting to use lmfit to fit two lognormals (a and c) as well as the sum of these two lognormals (a+c) to a size distribution. Mode a centers around x=0.2, y=1; mode c centers around x=1.2, y << 1. There are numerous size distributions (>200), all slightly different, which are passed into the following code from an outside loop. For this example, I have provided a real-life distribution and have not included the loop. Hopefully my code is sufficiently annotated to allow understanding of what I am trying to achieve.
I must be missing some fundamental understanding of lmfit (spoiler alert - I'm not great at Maths either) as I have 2 problems:
1. The fits (a, c and a+c) do not accurately represent the data. Note how the fit (red solid line) diverges from the data (blue solid line). I assume this is something to do with the initial guess parameters. I have tried LOTS and have been unable to get a good fit.
2. Re-running the model with "new" best-fit values (results2, results3) doesn't appear to significantly improve the fit at all. Why?
Example result using provided x and y data:
Here is one I made earlier showing the type of fit I am after (produced using the older mpfit module, with different data than provided below and with unique initial best-guess parameters, not in a loop). Excuse the legend format; I had to remove certain information:
Any assistance is much appreciated. Here is the code with an example distribution:
from lmfit import models
import matplotlib.pyplot as plt
import numpy as np
# real life data example
y = np.array([1.000000, 0.754712, 0.610303, 0.527856, 0.412125, 0.329689, 0.255756, 0.184424, 0.136819,
              0.102316, 0.078763, 0.060896, 0.047118, 0.020297, 0.007714, 0.010202, 0.008710, 0.005579,
              0.004644, 0.004043, 0.002618, 0.001194, 0.001263, 0.001043, 0.000584, 0.000330, 0.000179,
              0.000117, 0.000050, 0.000035, 0.000017, 0.000007])
x = np.array([0.124980, 0.130042, 0.135712, 0.141490, 0.147659, 0.154711, 0.162421, 0.170855, 0.180262,
              0.191324, 0.203064, 0.215738, 0.232411, 0.261810, 0.320252, 0.360761, 0.448802, 0.482528,
              0.525526, 0.581518, 0.658988, 0.870114, 1.001815, 1.238899, 1.341285, 1.535134, 1.691963,
              1.973359, 2.285620, 2.572177, 2.900414, 3.342739])
# create the joint model using prefixes for each mode
model = (models.LognormalModel(prefix='p1_') +
         models.LognormalModel(prefix='p2_'))
# add some best guesses for the model parameters
params = model.make_params(p1_center=0.1, p1_sigma=2, p1_amplitude=1,
                           p2_center=1, p2_sigma=2, p2_amplitude=0.000000000000001)
# bound those best guesses
# params['p1_amplitude'].min = 0.0
# params['p1_amplitude'].max = 1e5
# params['p1_sigma'].min = 1.01
# params['p1_sigma'].max = 5
# params['p1_center'].min = 0.01
# params['p1_center'].max = 1.0
#
# params['p2_amplitude'].min = 0.0
# params['p2_amplitude'].max = 1
# params['p2_sigma'].min = 1.01
# params['p2_sigma'].max = 10
# params['p2_center'].min = 1.0
# params['p2_center'].max = 3
# actually fit the model
result = model.fit(y, params, x=x)
# ====================================
# ================================
# re-run using the best-fit params derived above
params2 = model.make_params(p1_center=result.best_values['p1_center'], p1_sigma=result.best_values['p1_sigma'],
                            p1_amplitude=result.best_values['p1_amplitude'],
                            p2_center=result.best_values['p2_center'], p2_sigma=result.best_values['p2_sigma'],
                            p2_amplitude=result.best_values['p2_amplitude'])
# re-fit the model
result2 = model.fit(y, params2, x=x)
# ================================
# re-run again using the best-fit params derived above
params3 = model.make_params(p1_center=result2.best_values['p1_center'], p1_sigma=result2.best_values['p1_sigma'],
                            p1_amplitude=result2.best_values['p1_amplitude'],
                            p2_center=result2.best_values['p2_center'], p2_sigma=result2.best_values['p2_sigma'],
                            p2_amplitude=result2.best_values['p2_amplitude'])
# re-fit the model
result3 = model.fit(y, params3, x=x)
# ================================
# add individual fine and coarse modes using the revised fit parameters
model_a = models.LognormalModel()
params_a = model_a.make_params(center=result3.best_values['p1_center'], sigma=result3.best_values['p1_sigma'],
                               amplitude=result3.best_values['p1_amplitude'])
result_a = model_a.fit(y, params_a, x=x)
model_c = models.LognormalModel()
params_c = model_c.make_params(center=result3.best_values['p2_center'], sigma=result3.best_values['p2_sigma'],
                               amplitude=result3.best_values['p2_amplitude'])
result_c = model_c.fit(y, params_c, x=x)
# ====================================
plt.plot(x, y, 'b-', label='data')
plt.plot(x, result.best_fit, 'r-', label='best_fit_1')
plt.plot(x, result.init_fit, 'lightgrey', ls=':', label='ini_fit_1')
plt.plot(x, result2.best_fit, 'r--', label='best_fit_2')
plt.plot(x, result2.init_fit, 'lightgrey', ls='--', label='ini_fit_2')
plt.plot(x, result3.best_fit, 'r.-', label='best_fit_3')
plt.plot(x, result3.init_fit, 'lightgrey', ls='--', label='ini_fit_3')
plt.plot(x, result_a.best_fit, 'grey', ls=':', label='best_fit_a')
plt.plot(x, result_c.best_fit, 'grey', ls='--', label='best_fit_c')
plt.xscale("log")
plt.yscale("log")
plt.legend()
plt.show()
There are three main pieces of advice I can give:
1. Initial values matter and should not be so far off as to make portions of the model completely insensitive to the parameter values. Your initial model is off by several orders of magnitude (a quick check is sketched right after this list).
2. Always look at the fit result. This is the primary result -- the plot of the fit is a representation of the actual numerical results. Not showing that you printed out the fit report is a good indication that you did not look at the actual result. Really, always look at the results.
3. If you are judging the quality of the fit from a plot of the data and model, use how you choose to plot the data to guide how you fit the data. Specifically, in your case, if you are plotting on a log scale, then fit the log of the data to the log of the model: fit in "log space".
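For example, a quick way to act on the first two points, using the model, params, x, and y already defined in the question's code (just a sketch):
# Evaluate the model at the initial guesses and compare magnitudes with the data;
# an orders-of-magnitude mismatch means the optimizer may never recover.
init = model.eval(params, x=x)
print(y.max(), init.max())
# After fitting, read the numerical report rather than judging only from the plot.
result = model.fit(y, params, x=x)
print(result.fit_report())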
Such a fit might look like this:
from lmfit import models, Model
from lmfit.lineshapes import lognormal
import matplotlib.pyplot as plt
import numpy as np
y = np.array([1.000000, 0.754712, 0.610303, 0.527856, 0.412125, 0.329689, 0.255756, 0.184424, 0.136819,
              0.102316, 0.078763, 0.060896, 0.047118, 0.020297, 0.007714, 0.010202, 0.008710, 0.005579,
              0.004644, 0.004043, 0.002618, 0.001194, 0.001263, 0.001043, 0.000584, 0.000330, 0.000179,
              0.000117, 0.000050, 0.000035, 0.000017, 0.000007])
x = np.array([0.124980, 0.130042, 0.135712, 0.141490, 0.147659, 0.154711, 0.162421, 0.170855, 0.180262,
              0.191324, 0.203064, 0.215738, 0.232411, 0.261810, 0.320252, 0.360761, 0.448802, 0.482528,
              0.525526, 0.581518, 0.658988, 0.870114, 1.001815, 1.238899, 1.341285, 1.535134, 1.691963,
              1.973359, 2.285620, 2.572177, 2.900414, 3.342739])
# use a model that is the log of the sum of two log-normal functions
# note to be careful about log(x) for x < 0.
def log_lognormal(x, amp1, cen1, sig1, amp2, cen2, sig2):
    comp1 = lognormal(x, amp1, cen1, sig1)
    comp2 = lognormal(x, amp2, cen2, sig2)
    total = comp1 + comp2
    total[np.where(total < 1.e-99)] = 1.e-99   # floor so the log never sees zero or negative values
    return np.log(total)
model = Model(log_lognormal)
params = model.make_params(amp1=5.0, cen1=-4, sig1=1,
                           amp2=0.1, cen2=-1, sig2=1)
# part of making sure that the lognormals are strictly positive
params['amp1'].min = 0
params['amp2'].min = 0
result = model.fit(np.log(y), params, x=x)
print(result.fit_report()) # <-- HERE IS WHERE THE RESULTS ARE!!
# also, make a plot of data and fit
plt.plot(x, y, 'b-', label='data')
plt.plot(x, np.exp(result.best_fit), 'r-', label='best_fit')
plt.plot(x, np.exp(result.init_fit), 'grey', label='ini_fit')
plt.xscale("log")
plt.yscale("log")
plt.legend()
plt.show()
This will print out
[[Model]]
Model(log_lognormal)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 211
# data points = 32
# variables = 6
chi-square = 0.91190970
reduced chi-square = 0.03507345
Akaike info crit = -101.854407
Bayesian info crit = -93.0599914
[[Variables]]
amp1: 21.3565856 +/- 193.951379 (908.16%) (init = 5)
cen1: -4.40637490 +/- 3.81299642 (86.53%) (init = -4)
sig1: 0.77286862 +/- 0.55925566 (72.36%) (init = 1)
amp2: 0.00401804 +/- 7.5833e-04 (18.87%) (init = 0.1)
cen2: -0.74055538 +/- 0.13043827 (17.61%) (init = -1)
sig2: 0.64346873 +/- 0.04102122 (6.38%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(amp1, cen1) = -0.999
C(cen1, sig1) = -0.999
C(amp1, sig1) = 0.997
C(cen2, sig2) = -0.964
C(amp2, cen2) = -0.940
C(amp2, sig2) = 0.849
C(sig1, amp2) = -0.758
C(cen1, amp2) = 0.740
C(amp1, amp2) = -0.726
C(sig1, cen2) = 0.687
C(cen1, cen2) = -0.669
C(amp1, cen2) = 0.655
C(sig1, sig2) = -0.598
C(cen1, sig2) = 0.581
C(amp1, sig2) = -0.567
and generate a plot of the data together with the initial and best fits.
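If you also want the individual fine and coarse modes from this single log-space fit (rather than re-fitting each mode separately, as in the question's code), they can be rebuilt from the best-fit values with the same lognormal lineshape. A minimal sketch, assuming the result object from the fit above:
# Reconstruct each lognormal component from the joint fit's best-fit parameters
best = result.best_values
comp_a = lognormal(x, best['amp1'], best['cen1'], best['sig1'])  # fine mode (a)
comp_c = lognormal(x, best['amp2'], best['cen2'], best['sig2'])  # coarse mode (c)
plt.plot(x, y, 'b-', label='data')
plt.plot(x, np.exp(result.best_fit), 'r-', label='best_fit (a+c)')
plt.plot(x, comp_a, 'grey', ls=':', label='mode a')
plt.plot(x, comp_c, 'grey', ls='--', label='mode c')
plt.xscale("log")
plt.yscale("log")
plt.legend()
plt.show()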

Interpolating GFS winds from isobaric to height coordinates using Metpy

I have been tasked with making plots of winds at various levels of the atmosphere to support aviation. While I have been able to make some nice plots using GFS model data (see code below), I'm really having to make a rough approximation of height using the pressure coordinates available from the GFS. I'm using winds at 300 hPa, 700 hPa, and 925 hPa to approximate the winds at 30,000 ft, 9,000 ft, and 3,000 ft. My question is really for the MetPy gurus out there: is there a way I can interpolate these winds to a height surface? It sure would be nice to get the actual winds at these height levels! Thanks for any light anyone can shed on this subject!
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
from netCDF4 import num2date
from datetime import datetime, timedelta
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
from PIL import Image
from matplotlib import cm
# For the vertical levels we want to grab with our queries
# Levels need to be in Pa not hPa
Levels = [30000,70000,92500]
# Time deltas for days
Deltas = [1,2,3]
#Deltas = [1]
# Levels in hPa for the file names
LevelDict = {30000:'300', 70000:'700', 92500:'925'}
# The path to where our banners are stored
impath = 'C:\\Users\\shell\\Documents\\Python Scripts\\Banners\\'
# Final images saved here
imoutpath = 'C:\\Users\\shell\\Documents\\Python Scripts\\TVImages\\'
# Quick function for finding out which variable is the time variable in the
# netCDF files
def find_time_var(var, time_basename='time'):
    for coord_name in var.coordinates.split():
        if coord_name.startswith(time_basename):
            return coord_name
    raise ValueError('No time variable found for ' + var.name)
# Function to grab data at different levels from Siphon
def grabData(level):
    query.var = set()
    query.variables('u-component_of_wind_isobaric', 'v-component_of_wind_isobaric')
    query.vertical_level(level)
    data = ncss.get_data(query)
    u_wind_var = data.variables['u-component_of_wind_isobaric']
    v_wind_var = data.variables['v-component_of_wind_isobaric']
    time_var = data.variables[find_time_var(u_wind_var)]
    lat_var = data.variables['lat']
    lon_var = data.variables['lon']
    return u_wind_var, v_wind_var, time_var, lat_var, lon_var
# Construct a TDSCatalog instance pointing to the gfs dataset
best_gfs = TDSCatalog('http://thredds-jetstream.unidata.ucar.edu/thredds/catalog/grib/'
                      'NCEP/GFS/Global_0p5deg/catalog.xml')
# Pull out the dataset you want to use and look at the access URLs
best_ds = list(best_gfs.datasets.values())[1]
#print(best_ds.access_urls)
# Create NCSS object to access the NetcdfSubset
ncss = NCSS(best_ds.access_urls['NetcdfSubset'])
print(best_ds.access_urls['NetcdfSubset'])
# Looping through the forecast times
for delta in Deltas:
    # Create lat/lon box and the time(s) for location you want to get data for
    now = datetime.utcnow()
    fcst = now + timedelta(days=delta)
    timestamp = datetime.strftime(fcst, '%A')
    query = ncss.query()
    query.lonlat_box(north=78, south=45, east=-90, west=-220).time(fcst)
    query.accept('netcdf4')
    # Now looping through the levels to create our plots
    for level in Levels:
        u_wind_var, v_wind_var, time_var, lat_var, lon_var = grabData(level)
        # Get actual data values and remove any size 1 dimensions
        lat = lat_var[:].squeeze()
        lon = lon_var[:].squeeze()
        u_wind = u_wind_var[:].squeeze()
        v_wind = v_wind_var[:].squeeze()
        # converting to knots
        u_windkt = u_wind * 1.94384
        v_windkt = v_wind * 1.94384
        wspd = np.sqrt(np.power(u_windkt, 2) + np.power(v_windkt, 2))
        # Convert number of hours since the reference time into an actual date
        time = num2date(time_var[:].squeeze(), time_var.units)
        print(time)
        # Combine 1D latitude and longitudes into a 2D grid of locations
        lon_2d, lat_2d = np.meshgrid(lon, lat)
        # Create new figure
        #fig = plt.figure(figsize = (18,12))
        fig = plt.figure()
        fig.set_size_inches(26.67, 15)
        datacrs = ccrs.PlateCarree()
        plotcrs = ccrs.LambertConformal(central_longitude=-150,
                                        central_latitude=55,
                                        standard_parallels=(30, 60))
        # Add the map and set the extent
        ax = plt.axes(projection=plotcrs)
        ext = ax.set_extent([-195., -115., 50., 72.], datacrs)
        ext2 = ax.set_aspect('auto')
        ax.background_patch.set_fill(False)
        # Add state boundaries to plot
        ax.add_feature(cfeature.STATES, edgecolor='black', linewidth=2)
        # Add geopolitical boundaries for map reference
        ax.add_feature(cfeature.COASTLINE.with_scale('50m'))
        ax.add_feature(cfeature.OCEAN.with_scale('50m'))
        ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor='#cc9666', linewidth=4)
        if level == 30000:
            spdrng_sped = np.arange(30, 190, 2)
            windlvl = 'Jet_Stream'
        elif level == 70000:
            spdrng_sped = np.arange(20, 100, 1)
            windlvl = '9000_Winds_Aloft'
        elif level == 92500:
            spdrng_sped = np.arange(20, 80, 1)
            windlvl = '3000_Winds_Aloft'
        else:
            pass
        top = cm.get_cmap('Greens')
        middle = cm.get_cmap('YlOrRd')
        bottom = cm.get_cmap('BuPu_r')
        newcolors = np.vstack((top(np.linspace(0, 1, 128)),
                               middle(np.linspace(0, 1, 128))))
        newcolors2 = np.vstack((newcolors, bottom(np.linspace(0, 1, 128))))
        cmap = ListedColormap(newcolors2)
        cf = ax.contourf(lon_2d, lat_2d, wspd, spdrng_sped, cmap=cmap,
                         transform=datacrs, extend='max', alpha=0.75)
        cbar = plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50,
                            drawedges='true')
        cbar.ax.tick_params(labelsize=16)
        wslice = slice(1, None, 4)
        ax.quiver(lon_2d[wslice, wslice], lat_2d[wslice, wslice],
                  u_windkt[wslice, wslice], v_windkt[wslice, wslice], width=0.0015,
                  headlength=4, headwidth=3, angles='xy', color='black', transform=datacrs)
        plt.savefig(imoutpath+'TV_UpperAir'+LevelDict[level]+'_'+timestamp+'.png', bbox_inches='tight')
        # Now we use Pillow to overlay the banner with the appropriate day
        background = Image.open(imoutpath+'TV_UpperAir'+LevelDict[level]+'_'+timestamp+'.png')
        im = Image.open(impath+'Banner_'+windlvl+'_'+timestamp+'.png')
        # resize the image
        size = background.size
        im = im.resize(size, Image.ANTIALIAS)
        background.paste(im, (17, 8), im)
        background.save(imoutpath+'TV_UpperAir'+LevelDict[level]+'_'+timestamp+'.png', 'PNG')
Thanks for the question! My approach here is, for each column separately, to interpolate the pressure coordinate of the GFS-output geopotential height onto your provided altitudes, which estimates the pressure of each height level at each grid point. I can then use those pressures to interpolate the GFS-output u and v winds onto. The GFS-output geopotential height and winds have slightly different pressure coordinates, which is why I interpolate twice. I performed the interpolation using MetPy's interpolate.log_interpolate_1d, which does a linear interpolation on the log of the inputs. Here is the code I used!
from datetime import datetime
import numpy as np
import metpy.calc as mpcalc
from metpy.units import units
from metpy.interpolate import log_interpolate_1d
from siphon.catalog import TDSCatalog
gfs_url = 'https://tds.scigw.unidata.ucar.edu/thredds/catalog/grib/NCEP/GFS/Global_0p5deg/catalog.xml'
cat = TDSCatalog(gfs_url)
now = datetime.utcnow()
# A shortcut to NCSS
ncss = cat.datasets['Best GFS Half Degree Forecast Time Series'].subset()
query = ncss.query()
query.var = set()
query.variables('u-component_of_wind_isobaric', 'v-component_of_wind_isobaric', 'Geopotential_height_isobaric')
query.lonlat_box(north=78, south=45, east=-90, west=-220)
query.time(now)
query.accept('netcdf4')
data = ncss.get_data(query)
# Reading in the u(isobaric), v(isobaric), isobaric vars and the GPH(isobaric6) and isobaric6 vars
# These are two slightly different vertical pressure coordinates.
# We will also assign units here, and this can allow us to go ahead and convert to knots
lat = units.Quantity(data.variables['lat'][:].squeeze(), units('degrees'))
lon = units.Quantity(data.variables['lon'][:].squeeze(), units('degrees'))
iso_wind = units.Quantity(data.variables['isobaric'][:].squeeze(), units('Pa'))
iso_gph = units.Quantity(data.variables['isobaric6'][:].squeeze(), units('Pa'))
u = units.Quantity(data.variables['u-component_of_wind_isobaric'][:].squeeze(), units('m/s')).to(units('knots'))
v = units.Quantity(data.variables['v-component_of_wind_isobaric'][:].squeeze(), units('m/s')).to(units('knots'))
gph = units.Quantity(data.variables['Geopotential_height_isobaric'][:].squeeze(), units('gpm'))
# Here we will select our altitudes to interpolate onto and convert them to geopotential meters
altitudes = ([30000., 9000., 3000.] * units('ft')).to(units('gpm'))
# Now we will interpolate the pressure coordinate for model output geopotential height to
# estimate the pressure level for our given altitudes at each grid point
pressures_of_alts = np.zeros((len(altitudes), len(lat), len(lon)))
for ilat in range(len(lat)):
    for ilon in range(len(lon)):
        pressures_of_alts[:, ilat, ilon] = log_interpolate_1d(altitudes,
                                                              gph[:, ilat, ilon],
                                                              iso_gph)
pressures_of_alts = pressures_of_alts * units('Pa')
# Similarly, we will use our interpolated pressures to interpolate
# our u and v winds across their given pressure coordinates.
# This will provide u, v at each of our interpolated pressure
# levels corresponding to our provided initial altitudes
u_at_levs = np.zeros((len(altitudes), len(lat), len(lon)))
v_at_levs = np.zeros((len(altitudes), len(lat), len(lon)))
for ilat in range(len(lat)):
    for ilon in range(len(lon)):
        u_at_levs[:, ilat, ilon], v_at_levs[:, ilat, ilon] = log_interpolate_1d(pressures_of_alts[:, ilat, ilon],
                                                                                iso_wind,
                                                                                u[:, ilat, ilon],
                                                                                v[:, ilat, ilon])
u_at_levs = u_at_levs * units('knots')
v_at_levs = v_at_levs * units('knots')
# We can use mpcalc to calculate a wind speed array from these
wspd = mpcalc.wind_speed(u_at_levs, v_at_levs)
I was able to take the output from this and coerce it into your plotting code (with some unit stripping).
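For reference, the unit stripping can be done with pint's .magnitude before handing the arrays to matplotlib; a small sketch using the arrays computed above:
# matplotlib's contourf/quiver want plain numpy arrays, so drop the pint units first
wspd_kt = wspd.to(units('knots')).magnitude   # shape (altitude, lat, lon)
u_kt = u_at_levs.to(units('knots')).magnitude
v_kt = v_at_levs.to(units('knots')).magnitude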
Your 300-hPa GFS winds
My "30000-ft" GFS winds
Here is what my interpolated pressure fields at each estimated height level look like.
Hope this helps!
I am not sure if this is what you are looking for (I am very new to MetPy), but I have been using the MetPy height_to_pressure_std(altitude) function. It returns the pressure in hPa, which I then convert to pascals and then to a unitless value to use in Siphon's vertical_level(float) function.
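That conversion might look something like the sketch below (the 30,000 ft value is just an example, and query is assumed to be an existing Siphon NCSS query):
from metpy.calc import height_to_pressure_std
from metpy.units import units
# Standard-atmosphere pressure for a given altitude; returns a Quantity in hPa
p = height_to_pressure_std(30000 * units('ft'))
p_pa = float(p.to(units('Pa')).magnitude)   # plain number in Pa for Siphon
query.vertical_level(p_pa)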
I don't think you can use MetPy functions to convert height to pressure (or vice versa) in the upper atmosphere. The errors are too large when using the Standard Atmosphere to convert, say, pressure to feet.

How to separately set edge.label.position for parameter estimates and covariances

I am trying to separately position the edge labels for the parameter estimates and the covariances. For the parameter estimates I would like them at .1, and for the covariances and disturbances I would like them at .5, but I only know how to change them through the edge.label.position argument.
semPaths(fit, "std", layout = "tree3",
         whatLabels = "std", edge.label.cex = .9, curvePivot = TRUE,
         residuals = TRUE, rotation = 2, sizeMan = 16, sizeMan2 = 6,
         sizeInt = 8, exoVar = FALSE, exoCov = FALSE,
         nodeLabels = c("Social", "Intrapersonal", "Distress Tolerance",
                        "Intrusion", "Avoidance", "NACM", "Arousal",
                        "BPD Symptoms"),
         edge.label.position = .5,
         label.cex = .8, label.scale = FALSE)
I expect the parameter estimates to sit on the left side of the structural paths, near the manifest variables, and the covariance and disturbance labels to sit in the middle of their paths.

Tensorflow Probability Logistic Regression Example

I feel I must be missing something obvious in struggling to get a positive control for logistic regression going in TensorFlow Probability.
I've modified the example for logistic regression here, and created positive-control features and labels data. I struggle to achieve accuracy over 60%, yet this is an easy problem for a 'vanilla' Keras model (100% accuracy). What am I missing? I tried different layers, activations, etc. With this method of setting up the model, is posterior updating actually being performed? Do I need to specify an interceptor object? Many thanks.
### Added positive control
nSamples = 80
features1 = np.float32(np.hstack((np.reshape(np.ones(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features2 = np.float32(np.hstack((np.reshape(np.zeros(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features = np.vstack((features1, features2))
labels = np.concatenate((np.zeros(40), np.ones(40)))
featuresInt, labelsInt = build_input_pipeline(features, labels, 10)
###
#w_true, b_true, features, labels = toy_logistic_data(FLAGS.num_examples, 2)
#featuresInt, labelsInt = build_input_pipeline(features, labels, FLAGS.batch_size)
with tf.name_scope("logistic_regression", values=[featuresInt]):
    layer = tfp.layers.DenseFlipout(
        units=1,
        activation=None,
        kernel_posterior_fn=tfp.layers.default_mean_field_normal_fn(),
        bias_posterior_fn=tfp.layers.default_mean_field_normal_fn())
    logits = layer(featuresInt)
    labels_distribution = tfd.Bernoulli(logits=logits)
neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(labelsInt))
kl = sum(layer.losses)
elbo_loss = neg_log_likelihood + kl
predictions = tf.cast(logits > 0, dtype=tf.int32)
accuracy, accuracy_update_op = tf.metrics.accuracy(
    labels=labelsInt, predictions=predictions)
with tf.name_scope("train"):
    optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate)
    train_op = optimizer.minimize(elbo_loss)
init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())
with tf.Session() as sess:
    sess.run(init_op)
    # Fit the model to data.
    for step in range(FLAGS.max_steps):
        _ = sess.run([train_op, accuracy_update_op])
        if step % 100 == 0:
            loss_value, accuracy_value = sess.run([elbo_loss, accuracy])
            print("Step: {:>3d} Loss: {:.3f} Accuracy: {:.3f}".format(
                step, loss_value, accuracy_value))
### Check with basic Keras
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1)])
optimizer = tf.train.AdamOptimizer(5e-2)
kerasModel.compile(optimizer = optimizer, loss = 'binary_crossentropy',
                   metrics = ['accuracy'])
kerasModel.fit(features, labels, epochs = 50) #100% accuracy
Compared to the GitHub example, you forgot to divide by the number of examples when defining the KL divergence:
kl = sum(layer.losses) / FLAGS.num_examples
When I make this change to your code, I quickly get to an accuracy of 99.9% on your toy data.
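The reason is that the negative log-likelihood above is a per-example mean (tf.reduce_mean), while sum(layer.losses) is the KL divergence for the whole dataset, so it has to be put on the same per-example scale. Using the variables already defined in the question's code, the corrected loss would read:
neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(labelsInt))
kl = sum(layer.losses) / FLAGS.num_examples   # num_examples should match the training-set size (80 rows here)
elbo_loss = neg_log_likelihood + kl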
Additionally, the output layer of your Keras model should use a sigmoid activation for this problem (binary classification):
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid')])
It's a toy problem, but you will notice that the model gets to 100% accuracy faster with a sigmoid activation.

Wind roses overlaid on a map

I have wind data from multiple weather stations, and I have the coordinates of each weather station. I want to overlay a wind rose for each station on a map, using the stations' lat and long. Is there a straightforward way of doing this in R?
This is what I have done so far in R: I saved the wind roses as PNGs and then overlaid them on the map.
#######################################
########produce map of GTA##############
#######################################
ggmap=get_map(location=c(left=-80.7 , bottom=43 , right=-77.7 , top=44.9))
gta=ggmap(ggmap)+ scale_y_continuous(limits=c(43.2, 44.5))
####################################################
###########for loop to make wind roses##############
###################################################
data_list<- list.files(path= "/home/npak/Documents/weather_data/meso_west_data/", pattern = "\\.csv$", recursive = FALSE, full.names = TRUE)
l<-length(data_list)
for (i in 1:l){
  header <- readLines(data_list[i], 8)
  variables = strsplit(header, ',')
  vars = variables[[7]]
  info = strsplit(header, ':')
  lat = as.numeric(info[[3]][2])
  long = as.numeric(info[[4]][2])
  elev = as.numeric(info[[5]][2])
  name = info[[2]][2]
  data <- read.table(data_list[i], header = FALSE, sep = ",", col.names = paste0("V", seq_len(30)), fill = TRUE, skip = 8)
  missing = length(data) - length(vars)
  colnames(data) = c(vars, rep("Empty", missing))
  print(unique(data$Station_ID))
  data$ws <- data$wind_speed_set_1
  data$wd <- data$wind_direction_set_1
  data$date <- as.POSIXct(data$Date_Time, format = "%Y-%m-%d %H:%M")
  ID <- unique(data$Station_ID)
  png(filename = (file <- paste('/home/npak/Documents/weather_data/meso_west_data/map_1year/', ID, 'all_year_map.png')),
      width = 2400, height = 2400, bg = "transparent")
  windRose(data,
           breaks = c(1.5, 3.3, 5.5, 8),
           max.freq = 25,
           paddle = FALSE,
           annotate = FALSE, key = FALSE,
           auto.text = FALSE, grid.line = list(lty = 0, value = 10),
           cols = c("red", "red2", "red3", "red4"))
  #dev.off()
  dev.off.crop(file = file)
  mypng <- readPNG(file <- paste('/home/npak/Documents/weather_data/meso_west_data/map_1year/', ID, 'all_year_map.png'))
  gta = gta + inset_raster(mypng, ymin = lat - 0.2, ymax = lat + 0.2, xmin = long - 0.2, xmax = long + 0.2)
}
######################################################
###############save map into png#######################################
##########################################################
png('/home/npak/Documents/weather_data/meso_west_data/map_1year/gta_annual.png',
width = 2400, height = 2400)
print(gta)
dev.off()
But it doesn't look very good. Right now I save the PNG images with a transparent background, but the axes and frames are still there, which makes the map look a bit messy. This is what it looks like:
Wind roses overlaid on map
So I am looking for something similar, but plotting the wind roses directly over the map rather than going through the PNG images.
