Receiving different measured values from crossK and lohboot - sf

I have a marked ppp dataset looking at crimes and their relation to locations.
I am performing an inhomogeneous cross-K using Kcross.inhom, and am using lohboot to bootstrap confidence intervals around the inhomogeneous cross-K. However, I am getting different measured values of iso from the two functions, when I would anticipate identical values.
The crime dataset is 26k rows, and I am unsure how to subset it to create a reproducible example.
#creating the ppp
library(sf)       #st_coordinates
library(spatstat) #ppp, owin, ppm, Kcross.inhom, lohboot
crime.coords = as.data.frame(st_coordinates(crime))   #coordinates of crimes
center.coords = as.data.frame(st_coordinates(center)) #coordinates of locations
temp = rbind(data.frame(x=crime.coords$X, y=crime.coords$Y, type='crime'),
             data.frame(x=center.coords$X, y=center.coords$Y, type='center')) #df for marked ppp
temp = ppp(temp[,1], temp[,2], window=owin(border.coords),
           marks=relevel(as.factor(temp$type), 'crime')) #creating marked ppp
#creating an intensity model of the crimes
temp = rescale(temp, 10000) #rescaling for polynomial model coefficients
crime.ppp = unmark(split(temp)$crime)
model.crime = ppm(crime.ppp ~ polynom(x, y, 2), Poisson())
ck = Kcross.inhom(temp, i='crime', j='center', lambdaI=model.crime) #cross-K with intensity function
ckenv = lohboot(temp, fun='Kcross.inhom', i='crime', j='center',
                lambdaI=model.crime) #bootstrapped CIs for the cross-K with intensity function
Here are the values plotted, showing different curves:
A few things I've noted are that the r values are different for the two functions, and setting r in lohboot does not in fact make them identical. I am unsure where to go from here, having exhausted all my resources in finding a solution. Thank you in advance.

These curves are not guaranteed to be equal. lohboot subdivides the data, randomly resamples the subdivisions, computes the contributions from these randomly selected subdivisions, and averages them. If you repeat the experiment you should get a slightly different answer from lohboot each time. See the help file for lohboot.
It would be desirable for the two curves to be close. Unfortunately the default behaviour of lohboot does not often achieve that. For consistency, the default behaviour follows the original implementation, which was not very good. Try setting block = TRUE for better performance. Also try the other options basicboot and Vcorrection.
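For example, re-running the bootstrap with blocked resampling, reusing the objects from the question (basicboot and Vcorrection are the other lohboot arguments mentioned above), might look like this sketch:
ckenv.block = lohboot(temp, fun='Kcross.inhom', i='crime', j='center',
                      lambdaI=model.crime, block=TRUE) #blocked resampling
ckenv.basic = lohboot(temp, fun='Kcross.inhom', i='crime', j='center',
                      lambdaI=model.crime, block=TRUE, basicboot=TRUE) #basic bootstrap variant
ckenv.Vcorr = lohboot(temp, fun='Kcross.inhom', i='crime', j='center',
                      lambdaI=model.crime, block=TRUE, Vcorrection=TRUE) #variance-corrected variant
plot(ckenv.block)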

MatchIt - how to make matching date specific?

I'm trying to use MatchIt to create two sets of matched investment companies (treatment vs control).
I need to match the treatment companies to the control companies using only data from the 1-3 years preceding the treatment.
For example, if a company received treatment in 2009, then I would want to match it using data from 2009, 2008, and 2007 (my after-treatment-effects dummy would hold a value from 2010 onwards in this case).
I am unsure how to add this selection into my matching code, which currently looks like this:
matchit(signatory ~ totalUSD + brownUSD + country + strategy, data = panel6, method = "full")
Should I consider using the after-treatment-effects dummy in some way?
Any tips for how I add this in would be greatly appreciated!
There is no straightforward way to do this in MatchIt. You can set a caliper, which requires the control companies to be within a certain number of years from a treated company, but there isn't a way to require that control companies have a year strictly before the treated company. You can perform exact matching on year so that the treated and control companies have exactly the same year using the exact argument.
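For example, a minimal sketch of exact matching on year, keeping the rest of the call from the question, would be:
library(MatchIt)
#Exact matching on year combined with full matching on the propensity score
m.exact <- matchit(signatory ~ totalUSD + brownUSD + country + strategy,
                   data = panel6, method = "full", exact = "year")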
Another, slightly more involved way is to construct a distance matrix yourself and set to Inf any distances between units that are forbidden to match with each other. The first step is estimating a propensity score, which you can do manually or using matchit(). Then you construct a distance matrix and, for each entry in the distance matrix, decide whether to set the distance to Inf. Finally, you can supply the distance matrix to the distance argument of matchit(). Here's how you would do that:
#Estimate the propensity score
ps <- matchit(signatory ~ totalUSD + brownUSD + country + strategy,
              data = panel6, method = NULL)$distance
#Create the distance matrix (rows are treated units, columns are control units)
dist <- optmatch::match_on(signatory ~ ps, data = panel6)
#Loop through the matrix and set disallowed matches to Inf
t <- which(panel6$signatory == 1)
u <- which(panel6$signatory != 1)
for (i in seq_along(t)) {
  for (j in seq_along(u)) {
    if (panel6$year[u[j]] > panel6$year[t[i]] || panel6$year[u[j]] < panel6$year[t[i]] - 2)
      dist[i, j] <- Inf
  }
}
#Note: can be vectorized for speed (see the sketch below), but it shouldn't take long regardless
#Supply the distance matrix to matchit() and match
m <- matchit(signatory ~ totalUSD + brownUSD + country + strategy,
             data = panel6, method = "full", distance = dist)
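As noted in the comment above, the loop can be vectorized; one sketch, assuming dist can be indexed like an ordinary matrix (as it is in the loop), uses outer() on the year vectors:
#Vectorized equivalent of the double loop
yt <- panel6$year[t] #years of treated units (rows of dist)
yu <- panel6$year[u] #years of control units (columns of dist)
dist[outer(yt, yu, function(a, b) b > a | b < a - 2)] <- Inf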
That should work. You can verify by looking at individual groups of matched companies using match.data():
md <- match.data(m, data = panel6)
md <- md[with(md, order(subclass, signatory)),]
View(md) #assuming you're using RStudio
You should see that, within each subclass, the control units' years fall 0-2 years before the treated units' years.
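If you prefer a programmatic check, a base-R sketch over md (verifying that every treated/control pairing within each subclass respects the 0-2 year window) might be:
ok <- sapply(split(md, md$subclass), function(s) {
  ty <- s$year[s$signatory == 1] #treated years in this subclass
  cy <- s$year[s$signatory != 1] #control years in this subclass
  all(outer(ty, cy, function(a, b) b <= a & b >= a - 2))
})
all(ok) #should be TRUE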

Build Dictionary of Arrays Efficiently in julia

I want to save the (x,y) coordinates in a grid network that are visited by different individuals. Let's say I have 1000 individuals and the network size is x = 1:100 and y = 1:100. I am using Dict(), and here is some sample code showing what I want to do:
individuals = 1:1000
x = 1:100
y = 1:100
function Visited_nodes()
    nodes_of_inds = Dict{Int64, Array{Tuple{Int64, Int64}}}()
    for ind in individuals
        dum_array = Array{Tuple{Int64, Int64}}(0)
        for i in x
            for j in y
                if rand() < 0.2 # some conditions
                    push!(dum_array, (i,j))
                end
            end
        end
        nodes_of_inds[ind] = unique(dum_array)
    end
    return nodes_of_inds
end

@time nodes_of_inds = Visited_nodes()
# result: 1.742297 seconds (12.31 M allocations: 607.035 MB, 6.72% gc time)
But this is not efficient. I would appreciate any advice on how to make it more efficient.
Please see the performance tips. Very first piece of advice there: avoid global variables. individuals, x, and y are all non-constant global variables. Make them arguments to your function instead. That change alone speeds up your function by an order of magnitude.
By construction, you're not going to have any duplicate tuples in your dum_array, so you don't need to call unique. That shaves off another factor of two.
Finally, Array{T} isn't a concrete type. Julia's arrays also encode the dimensionality as a type parameter, which must be included for the dictionary of arrays to be efficient. Use Array{T, 1} or Vector{T} instead. This isn't a major consideration within the time of this function, though.
The major thing that's left is just the O(length(individuals)*length(x)*length(y)) computational complexity. Doing anything ten million times will add up quickly, no matter how efficient it is.
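For reference, a sketch combining those suggestions (same names as in the question, loop body unchanged) might look like this:
function Visited_nodes(individuals, x, y)
    nodes_of_inds = Dict{Int64, Vector{Tuple{Int64, Int64}}}()
    for ind in individuals
        dum_array = Tuple{Int64, Int64}[] # concrete, empty Vector
        for i in x, j in y
            if rand() < 0.2 # some conditions
                push!(dum_array, (i, j))
            end
        end
        nodes_of_inds[ind] = dum_array # (i, j) pairs never repeat, so no unique() needed
    end
    return nodes_of_inds
end

@time nodes_of_inds = Visited_nodes(1:1000, 1:100, 1:100)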
@Matt B., thanks for your response. About the global variables, I tried a simplified version of my code and it did not help the performance.
Let's say I read my input data from a couple of CSV files and I have three functions with different arguments:
function Read_input_data()
    # read input data
    individuals = readcsv("file1")
    x = readcsv("file2")
    y = readcsv("file3")
    A = readcsv("file4")
    B = readcsv("file5") # and a few other files
    # call different functions
    result_1 = Function1(individuals, x, y)
    result_2 = Function2(result_1, y, A, B)
    result_3 = Function3(result_2, individuals, A, B)
    return result_1, result_2, result_3
end

result_1, result_2, result_3 = Read_input_data()
I do not know why the performance is no better than when I define everything as a global! I would appreciate any comments on this!

MATLAB solve array

I've got multiple arrays that you can't quite fit a curve/equation to, but I do need to solve them for a lot of values. Simplified, it looks like this when I plot it, but the real ones have a lot more points:
So say I would like to solve for y = 22, how would I do that? As you can see, there would be three solutions to this, but I only need the leftmost one.
Linear is okay, but I'd rather use a non-linear method.
The only way I've found is to fit an equation to a set of points and solve that equation, but an equation can't approximate the array accurately enough.
This implementation uses first-order (linear) interpolation; if you're looking for higher accuracy and it feels appropriate, you can use a similar strategy with a higher-order estimator.
Assume data is the name of your array, with x values in the first column and y values in the second, that the rows are sorted by increasing or decreasing x, and that you want to find all solutions at the value y = 22:
searchPoint = 22; %search for all solutions where y = 22
matchPoints = []; %vector collecting all matching x values
for ii = 1:length(data)-1
    if (data(ii,2)>searchPoint)&&(data(ii+1,2)<searchPoint)
        xMatch = data(ii,1)+(searchPoint-data(ii,2))*(data(ii+1,1)-data(ii,1))/(data(ii+1,2)-data(ii,2)); %Linear interpolation to solve for xMatch
        matchPoints = [matchPoints xMatch];
    elseif (data(ii,2)<searchPoint)&&(data(ii+1,2)>searchPoint)
        xMatch = data(ii,1)+(searchPoint-data(ii,2))*(data(ii+1,1)-data(ii,1))/(data(ii+1,2)-data(ii,2)); %Linear interpolation to solve for xMatch
        matchPoints = [matchPoints xMatch];
    elseif (data(ii,2)==searchPoint) %check if data(ii,2) is an exact match
        matchPoints = [matchPoints data(ii,1)];
    end
end
if (data(end,2)==searchPoint) %ii only runs to length(data)-1, so check the last point separately
    matchPoints = [matchPoints data(end,1)];
end
This was written sans-compiler, but the logic was tested in octave (in other words, sorry if there's a slight typo in variable names, but the math should be correct)
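Since the question only needs the leftmost crossing, a trivial follow-up on the result above (assuming at least one crossing was found) would be:
xLeft = min(matchPoints); %leftmost (smallest-x) solution for y = 22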

Efficiently calculating weighted distance in MATLAB

Several posts exist about efficiently calculating pairwise distances in MATLAB. These posts tend to concern quickly calculating euclidean distance between large numbers of points.
I need to create a function which quickly calculates the pairwise differences between smaller numbers of points (typically fewer than 1000 pairs). Within the grander scheme of the program I am writing, this function will be executed many thousands of times, so even small gains in efficiency are important. The function needs to be flexible in two ways:
On any given call, the distance metric can be euclidean OR city-block.
The dimensions of the data are weighted.
As far as I can tell, no solution to this particular problem has been posted. The statistics toolbox offers pdist and pdist2, which accept many different distance functions, but not weighting. I have seen extensions of these functions that allow for weighting, but these extensions do not allow users to select different distance functions.
Ideally, I would like to avoid using functions from the statistics toolbox (I am not certain the user of the function will have access to that toolbox).
I have written two functions to accomplish this task. The first uses tricky calls to repmat and permute, and the second simply uses for-loops.
function [D] = pairdist1(A, B, wts, distancemetric)
% get some information about the data
numA = size(A,1);
numB = size(B,1);
if strcmp(distancemetric,'cityblock')
    r = 1;
elseif strcmp(distancemetric,'euclidean')
    r = 2;
else
    error('Function only accepts "cityblock" and "euclidean" distance')
end
% format weights for multiplication
wts = repmat(wts,[numA,1,numB]);
% get featural differences between A and B pairs
A = repmat(A,[1 1 numB]);
B = repmat(permute(B,[3,2,1]),[numA,1,1]);
differences = abs(A-B).^r;
% weigh difference values before combining them
differences = differences.*wts;
differences = differences.^(1/r);
% combine features to get distance
D = permute(sum(differences,2),[1,3,2]);
end
AND:
function [D] = pairdist2(A, B, wts, distancemetric)
% get some information about the data
numA = size(A,1);
numB = size(B,1);
if strcmp(distancemetric,'cityblock')
    r = 1;
elseif strcmp(distancemetric,'euclidean')
    r = 2;
else
    error('Function only accepts "cityblock" and "euclidean" distance')
end
% use for-loops to generate differences
D = zeros(numA,numB);
for i = 1:numA
    for j = 1:numB
        differences = abs(A(i,:) - B(j,:)).^r;
        differences = differences.*wts;
        differences = differences.^(1/r);
        D(i,j) = sum(differences,2);
    end
end
end
Here are the performance tests:
A = rand(10,3);
B = rand(80,3);
wts = [0.1 0.5 0.4];
distancemetric = 'cityblock';
tic
D1 = pairdist1(A,B,wts,distancemetric);
toc
tic
D2 = pairdist2(A,B,wts,distancemetric);
toc
Elapsed time is 0.000238 seconds.
Elapsed time is 0.005350 seconds.
It's clear that the repmat-and-permute version works much more quickly than the double-for-loop version, at least for smaller datasets. But I also know that calls to repmat can often slow things down. So I am wondering if anyone in the SO community has any advice to offer to improve the efficiency of either function!
EDIT
@Luis Mendo offered a nice cleanup of the repmat-and-permute function using bsxfun. I compared his function with my original on datasets of varying size:
As the data become larger, the bsxfun version becomes the clear winner!
EDIT #2
I have finished writing the function and it is available on github [link]. I ended up finding a pretty good vectorized method for computing euclidean distance [link], so I use that method in the euclidean case, and I took @Divakar's advice for city-block. It is still not as fast as pdist2, but it's much faster than either of the approaches I laid out earlier in this post, and it easily accepts weightings.
You can replace repmat by bsxfun. Doing so avoids explicit repetition, therefore it's more memory-efficient, and probably faster:
function D = pairdist1(A, B, wts, distancemetric)
if strcmp(distancemetric,'cityblock')
    r = 1;
elseif strcmp(distancemetric,'euclidean')
    r = 2;
else
    error('Function only accepts "cityblock" and "euclidean" distance')
end
differences = abs(bsxfun(@minus, A, permute(B, [3 2 1]))).^r;
differences = bsxfun(@times, differences, wts).^(1/r);
D = permute(sum(differences,2),[1,3,2]);
end
For r = 1 (the "cityblock" case), you can use bsxfun to get elementwise subtractions and then use matrix multiplication, which should speed things up. The implementation would look something like this -
%// Calculate absolute elementwise subtractions
absm = abs(bsxfun(@minus,permute(A,[1 3 2]),permute(B,[3 1 2])));
%// Perform matrix multiplication with the given weights and reshape
D = reshape(reshape(absm,[],size(A,2))*wts(:),size(A,1),[]);
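To sanity-check this snippet against the loop-based version from the question (a sketch; it assumes pairdist2 and the test data defined earlier are on your path):
A = rand(10,3); B = rand(80,3); wts = [0.1 0.5 0.4];
absm = abs(bsxfun(@minus,permute(A,[1 3 2]),permute(B,[3 1 2])));
D = reshape(reshape(absm,[],size(A,2))*wts(:),size(A,1),[]);
D2 = pairdist2(A, B, wts, 'cityblock');
max(abs(D(:) - D2(:))) %should be ~0 (floating-point roundoff only)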

MATLAB: vectorize filling of 3D-array

I would like to save a certain number of grayscale images (i.e. 2D arrays) as layers in a 3D array.
Because it should be very fast for a real-time application, I would like to vectorize the following code, where m is the number of shifts:
for i=1:m
    array(:,:,i) = imabsdiff(circshift(img1,[0 i-1]), img2);
end
nispio showed me a very advanced version, which you can see here:
I = speye(size(img1,2)); E = -1*I;
ii = toeplitz(1:m,[1,size(img1,2):-1:2]);
D = vertcat(repmat(I,1,m),E(:,ii));
data_c = reshape(abs([double(img1),double(img2)]*D),size(data_r,1),size(data_r,2),m);
At the moment the results of the two operations are not the same; maybe it shifts the image in the wrong direction. My knowledge is very limited, so I don't understand the code completely.
You could do this:
M = 16; N = 20; img1 = randi(255,M,N); % Create a random M x N image
ii = toeplitz(1:N,circshift(fliplr(1:N)',1)); % Create an indexing variable
% Create layers that are shifted copies of the image
array = reshape(img1(:,ii),M,N,N);
As long as your image dimensions don't change, you only ever need to create the ii variable once. After that, you can call the last line each time your image changes. I don't know for sure that this will give you a speed advantage over a for loop, but it is vectorized like you requested. :)
UPDATE
In light of the new information shared about the problem, this solution should give you an order of magnitudes increase in speed:
clear all;
% Set image sizes
M = 360; N = 500;
% Number of column shifts to test
ncols = 200;
% Create comparison matrix (see NOTE)
I = speye(N); E = -1*I;
ii = toeplitz([1:N],[1,N:-1:(N-ncols+2)]);
D = vertcat(repmat(I,1,ncols),E(:,ii));
% Generate some test images
img1 = randi(255,M,N);
img2 = randi(255,M,N);
% Compare images (vectorized)
data_c = reshape(abs([img2,img1]*D),M,N,ncols);
% Compare images (for loop)
array = zeros(M,N,ncols); % <-- Pre-allocate this array!
for i=1:ncols
    array(:,:,i) = imabsdiff(circshift(img1,[0 i-1]),img2);
end
This uses matrix multiplication to do the comparisons instead of generating a whole bunch of shifted copies of the image.
NOTE: The matrix D should only be generated one time if your image size is not changing. Notice that the D matrix is completely independent of the images, so it would be wasteful to regenerate it every time. However, if the image size does change, you will need to update D.
Edit: I have updated the code to more closely match what you seem to be looking for. Then I throw the "original" for-loop implementation in to show that they give the same result. One thing worth noting about the vectorized version is that it has the potential to be very memory intensive. If ncols = N then the D matrix has N^3 elements. Even though D is sparse, things fall apart fast when you multiply D by the non-sparse images.
Also, notice that I pre-allocate array before the for loop. This is always good practice in Matlab, where practical, and it will almost invariably give you a large performance boost over the dynamic sizing.
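If you want to confirm on your own data that the vectorized and loop versions agree, a quick check using the variables from the snippet above is:
max(abs(data_c(:) - array(:))) %should be 0 if the two approaches really do match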
If I understand the question correctly, I think you need a for loop:
for v=1:1:20
    array(:,:,v) = circshift(image,[0 v]);
end
