Time average of set of trials in Julia? - loops

In Julia, I have a function that times each iteration of a for loop using the @elapsed macro, shown below. Instead of timing each iteration once, I would like to perform each kth iteration 100 times and record the average of those 100 trials as the "time" for that iteration, stored in the time_vector array. Is there any way to modify this function to accomplish this?
function timeloop2(k)
    time_vector = zeros(k)
    for x = 1:k
        # for each iteration k, do 100 trials and compute the "average" of the 100 trials?
        # how to store each average of 100 trials in time_vector?
        t = @elapsed begin
            L₁ = rand(10, k)
            L₂ = rand(k, 10)
            L₁ * L₂
        end
        time_vector[x] = t
    end
    time_vector
end

If I've understood correctly, this should work for this specific case:
function timeloop2(k)
    time_vector = zeros(k)
    for x = 1:k
        t = @elapsed for j in 1:100
            L₁ = rand(10, k)
            L₂ = rand(k, 10)
            L₁ * L₂
        end
        time_vector[x] = t / 100
    end
    time_vector
end
However, perhaps you would be interested in using BenchmarkTools for benchmarking in general.

For accurate results, it's most advisable to use @benchmark from BenchmarkTools.jl. The output of @benchmark contains a lot of information (allocs, gctimes, memory, params, times); what you're interested in here is the average of the times vector, so computing mean(t.times) and dividing by 10^9 gives the result in seconds (the recorded times are in nanoseconds).
Note that @benchmark automatically runs the code as many times as is required for accurate timing. But note also that variables like k should be interpolated as $k, to avoid the inaccuracies that come with benchmarking global variables.
using BenchmarkTools
using Statistics  # for mean

function timeloop2(k)
    time_vector = zeros(k)
    for x = 1:k
        t = @benchmark begin
            L₁ = rand(10, $k)
            L₂ = rand($k, 10)
            L₁ * L₂
        end
        time_vector[x] = mean(t.times) / 10^9
    end
    time_vector
end
Example:
timeloop2(9)
9-element Vector{Float64}:
7.992059027777779e-7
8.008377931034483e-7
8.042304827586206e-7
7.996186206896552e-7
8.017322068965517e-7
7.996148275862069e-7
7.990756551724139e-7
8.239104999999999e-7
8.269181679389314e-7
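As a side note, if you only want a single number per inner benchmark, BenchmarkTools also provides the @belapsed macro, which runs the expression repeatedly and returns the minimum elapsed time in seconds. A minimal sketch of the same loop using it (timeloop3 is just a made-up name for this variant; note that it records the minimum rather than the mean):
using BenchmarkTools

function timeloop3(k)
    time_vector = zeros(k)
    for x = 1:k
        # @belapsed returns the minimum elapsed time in seconds,
        # with the same $-interpolation rules as @benchmark
        time_vector[x] = @belapsed begin
            L₁ = rand(10, $k)
            L₂ = rand($k, 10)
            L₁ * L₂
        end
    end
    time_vector
end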

Related

How to optimize this random entropy calculation for a large number of iterations?

What my code is trying to do:
I have an initial array containing an arrangement of molecules, each in a cell and confined to moving in a 2D array (up, down, left, right) (array size: 200x200). At each step I take a random molecule and move it into a random adjacent cell.
Starting after a certain number of steps, and every few iterations thereafter, I calculate the entropy of this grid. The grid is cut into small squares of 25x25, and I then use the Shannon entropy to calculate the entropy of the system.
Objective:
Doing 1e8+ iterations in a decent time using an i5-6500, no GPU.
My code:
function advance_multi_lattice(grid) # Find the next state of the system
    rnd = rand(1:count(!iszero,grid))       # Random number used to pick a random molecule.
    slots = find(!iszero,grid)              # Cells containing molecules.
    chosen_slot = find(!iszero,grid)[rnd]   # Random cell. May contain multiple molecules.
    dim = size(grid)[1]                     # Needed for rnd = 3, 4 later.
    grid[chosen_slot] -= 1                  # Remove the molecule from the cell.
    rnd_arr = [1,2,3,4]                     # Array to draw the direction from.
    while true
        rnd = rand(rnd_arr)  # Random number deciding which side the molecule should go to.
        if rnd == 1          # Right, for example.
            try              # In case moving right is impossible, i.e. moving right takes the molecule out of the grid: remove 1 from rnd_arr and repeat.
                grid[chosen_slot+1] += 1
                break
            catch
                filter!(e->e!=1, rnd_arr)
            end
        elseif rnd == 2
            try              # Same
                grid[chosen_slot-1] += 1
                break
            catch
                filter!(e->e!=2, rnd_arr)
            end
        # Repeat for the other numbers: 3 and 4...
        end
    end
    return grid
end
function S(P) # Entropy; if no molecules then return 0.
    s = []
    for k in P
        if k == 0
            push!(s, 0)
        else
            push!(s, -k*log(k))
        end
    end
    return s
end

function find_molecules(grid) # How many molecules are in the array
    s = 0
    for slot in grid
        s += slot
    end
    return s
end

function entropy_scale(grid, total_molecules) # Calculate the entropy of the grid.
    P_array = Array{Float64}([])
    for i = 1:8
        for j = 1:8
            push!(P_array, find_molecules(grid[(i-1)*25+1:i*25, (j-1)*25+1:j*25]))
        end
    end
    P_array = P_array ./ total_molecules
    return sum(S(P_array))
end
function entropy_evolution(grid, n) # The loop function. Advances the grid and returns the entropy as a function of steps.
    t_arr = Array{Int64}([])
    S_arr = Array{Float64}([])
    p = Progress(Int(n))  # Progress bar, using ProgressMeter.
    total_molecules = find_molecules(grid)
    for k = 1:1e3
        grid = advance_multi_lattice(grid)
        next!(p)
    end
    for k = 1e3+1:n
        grid = advance_multi_lattice(grid)
        if k % 500 == 0  # Only record entropy every 500 steps
            push!(S_arr, entropy_scale(grid, total_molecules))
        end
        next!(p)
    end
    return S_arr, grid
end
Results for my code:
For 1e5 iterations I get 43 seconds, which means that if I want an interesting result (1e9+ iterations) I need a lot of time, upwards of an hour. Changing the entropy calculation threshold barely affects the performance unless it's really small.
I am assuming you are working under Julia 1.0 (for Julia 0.6 a small change is needed - I noted it in the code).
In order to improve the performance you should keep a vector of molecules - not a grid (you do not need it as you allow molecules to occupy the same location).
We will encode the location of a molecule as a tuple (x,y). Now you need a function that randomly moves one molecule. Here is how you can implement it (I hard-coded the boundaries, but of course you could make them a parameter):
function move_molecule((x,y)) # in Julia 0.6 it should be move_molecule(t)
    # and here in Julia 0.6 you should add: x, y = t
    if x == 1
        if y == 1
            ((1,2), (2,1))[rand(1:2)]
        elseif y == 200
            ((1,199), (2,200))[rand(1:2)]
        else
            ((2,y), (1,y-1), (1,y+1))[rand(1:3)]
        end
    elseif x == 200
        if y == 1
            ((200,2), (199,1))[rand(1:2)]
        elseif y == 200
            ((200,199), (199,200))[rand(1:2)]
        else
            ((199,y), (200,y-1), (200,y+1))[rand(1:3)]
        end
    else
        if y == 1
            ((x,2), (x-1,1), (x+1,1))[rand(1:3)]
        elseif y == 200
            ((x,199), (x-1,200), (x+1,200))[rand(1:3)]
        else
            ((x+1,y), (x-1,y), (x,y+1), (x,y-1))[rand(1:4)]
        end
    end
end
Now, a function that moves one random molecule per step, for a given number of steps, is:
function go_sim!(molecules, steps)
    for k in 1:steps
        i = rand(axes(molecules, 1)) # in Julia 0.6 it should be: i = rand(1:length(molecules))
        @inbounds molecules[i] = move_molecule(molecules[i])
        if k % 500 == 0
            # here do entropy calculation
        end
    end
end
You did not provide a fully reproducible example, so I stop here - but it should be easy enough to rewrite the rest of the entropy calculation using this data structure (actually it might be even simpler). Here is a benchmark (the performance depends neither on the size of the grid nor on the number of molecules, which is an important advantage over the code that uses a grid):
julia> molecules = [(rand(1:200), rand(1:200)) for i in 1:1000];
julia> @time go_sim!(molecules, 1e9)
66.212943 seconds (22.64 k allocations: 1.191 MiB)
And you get 1e9 steps in around one minute (without entropy calculation).
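The entropy part is left out above; one possible (untested) sketch of how it could be computed directly from the vector of (x, y) tuples, with the 25x25 blocks and Shannon entropy from the question, is:
function block_entropy(molecules; blocksize = 25, gridsize = 200)
    nblocks = div(gridsize, blocksize)
    counts = zeros(Int, nblocks, nblocks)
    for (x, y) in molecules
        counts[div(x - 1, blocksize) + 1, div(y - 1, blocksize) + 1] += 1
    end
    total = length(molecules)
    s = 0.0
    for c in counts
        p = c / total
        p > 0 && (s -= p * log(p))   # Shannon entropy term; empty blocks contribute 0
    end
    return s
end
A call to block_entropy(molecules) could then be pushed onto a results vector every 500 steps inside go_sim!.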
What are the key elements needed for good performance:
do not use try-catch blocks, as they are very slow;
try to avoid allocating memory (i.e. creating mutable objects); my code does essentially no allocations - in particular, that is why I used tuples everywhere (you could use matrices in the move_molecule function for simplicity, but the performance would be around 2x worse) - see the small sketch below.
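As a small illustration of the tuple point (the names below are made up for this sketch, not part of the code above):
# Returning a tuple creates an immutable isbits value, so no heap allocation
# is needed; returning a Vector allocates a new array on every call.
step_tuple((x, y)) = (x + 1, y)
step_array(v)      = [v[1] + 1, v[2]]

# After a first (compiling) call, the tuple version typically reports zero
# bytes allocated, while the array version always reports a nonzero amount:
#   @allocated step_tuple((1, 2))
#   @allocated step_array([1, 2])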
Hope this helps.

What is the fastest way to count elements in an array?

In my models, one of the most repeated tasks is counting the number of each element within an array. The counting is from a closed set, so I know there are X types of elements, and all or some of them populate the array, along with zeros that represent 'empty' cells. The array is not sorted in any way, can be quite long (about 1M elements), and this task is done thousands of times during one simulation (which is itself part of hundreds of simulations). The result should be a vector r of size X, so that r(k) is the number of occurrences of k in the array.
Example:
For X = 9, if I have the following input vector:
v = [0 7 8 3 0 4 4 5 3 4 4 8 3 0 6 8 5 5 0 3]
I would like to get this result:
r = [0 0 4 4 3 1 1 3 0]
Note that I don't want the count of zeros, and that elements that don't appear in the array (like 2) have a 0 in the corresponding position of the result vector (r(2) == 0).
What would be the fastest way to achieve this goal?
tl;dr: The fastest method depends on the size of the array. For arrays smaller than 2^14, method 3 below (accumarray) is faster. For arrays larger than that, method 2 below (histcounts) is better.
UPDATE: I also tested this with implicit broadcasting, introduced in R2016b, and the results are almost identical to the bsxfun approach, with no significant difference for this method (relative to the other methods).
Let's see what methods are available to perform this task. For the following examples we will assume X has n elements, from 1 to n, and our array of interest is M, a column array that can vary in size. Our result vector will be spp¹, such that spp(k) is the number of ks in M. Although I write here about X, there is no explicit implementation of it in the code below; I just define n = 500, and X is implicitly 1:500.
The naive for loop
The simplest and most straightforward way to handle this task is a for loop that iterates over the elements in X and counts the number of elements in M equal to each:
function spp = loop(M,n)
spp = zeros(n,1);
for k = 1:size(spp,1);
spp(k) = sum(M==k);
end
end
This is of course not very smart, especially if only a small group of elements from X populates M, so we had better first look for those that are already in M:
function spp = uloop(M,n)
u = unique(M); % finds which elements to count
spp = zeros(n,1);
for k = u(u>0).';
spp(k) = sum(M==k);
end
end
Usually, in MATLAB, it is advisable to take advantage of the built-in functions as much as possible, since most of the time they are much faster. I thought of 5 options to do so:
1. The function tabulate
The function tabulate returns a very convenient frequency table that at first sight seems to be the perfect solution for this task:
function tab = tabi(M)
tab = tabulate(M);
if tab(1)==0
tab(1,:) = [];
end
end
The only fix to be done is to remove the first row of the table if it counts the 0 element (it could be that there are no zeros in M).
2. The function histcounts
Another option that can be tweaked quite easily to our needs is histcounts:
function spp = histci(M,n)
spp = histcounts(M,1:n+1);
end
Here, in order to count all the different elements between 1 and n separately, we define the edges to be 1:n+1, so every element in X has its own bin. We could also write histcounts(M(M>0),'BinMethod','integers'), but I already tested it, and it takes more time (though it makes the function independent of n).
3. The function accumarray
The next option I'll bring here is the use of the function accumarray:
function spp = accumi(M)
spp = accumarray(M(M>0),1);
end
Here we pass M(M>0) as input, to skip the zeros, and use 1 as the vals input to count all unique elements.
4. The function bsxfun
We can even use the binary operation @eq (i.e. ==) to look for all elements of each type:
function spp = bsxi(M,n)
spp = bsxfun(@eq,M,1:n);
spp = sum(spp,1);
end
If we keep the first input M and the second, 1:n, in different dimensions, so that one is a column vector and the other a row vector, the function compares each element in M with each element in 1:n and creates a length(M)-by-n logical matrix that we can sum to get the desired result.
5. The function ndgrid
Another option, similar to the bsxfun, is to explicitly create the two matrices of all possibilities using the ndgrid function:
function spp = gridi(M,n)
[Mx,nx] = ndgrid(M,1:n);
spp = sum(Mx==nx);
end
then we compare them and sum over columns, to get the final result.
Benchmarking
I have done a little test to find the fastest of all the methods mentioned above; I defined n = 500 for all trials. For some methods (especially the naive for) n has a great impact on the execution time, but this is not the issue here since we want to test for a given n.
Here are the results:
We can notice several things:
Interestingly, there is a shift in the fastest method. For arrays smaller than 2^14, accumarray is the fastest. For arrays larger than 2^14, histcounts is the fastest.
As expected, the naive for loops, in both versions, are the slowest, but for arrays smaller than 2^8 the "unique & for" option is slower. ndgrid becomes the slowest for arrays bigger than 2^11, probably because of the need to store very large matrices in memory.
There is some irregularity in the way tabulate works on arrays smaller than 2^9. This result was consistent (with some variation in the pattern) in all the trials I conducted.
(The bsxfun and ndgrid curves are truncated because they make my computer get stuck at larger sizes, and the trend is already quite clear.)
Also, notice that the y-axis is in log10, so a decrease of one unit (as for arrays of size 2^19, between accumarray and histcounts) means a 10-times faster operation.
I'll be glad to hear in the comments for improvements to this test, and if you have another, conceptually different method, you are most welcome to suggest it as an answer.
The code
Here are all the functions wrapped in a timing function:
function out = timing_hist(N,n)
M = randi([0 n],N,1);
func_times = {'for','unique & for','tabulate','histcounts','accumarray','bsxfun','ndgrid';
    timeit(@() loop(M,n)),...
    timeit(@() uloop(M,n)),...
    timeit(@() tabi(M)),...
    timeit(@() histci(M,n)),...
    timeit(@() accumi(M)),...
    timeit(@() bsxi(M,n)),...
    timeit(@() gridi(M,n))};
out = cell2mat(func_times(2,:));
end
function spp = loop(M,n)
spp = zeros(n,1);
for k = 1:size(spp,1);
spp(k) = sum(M==k);
end
end
function spp = uloop(M,n)
u = unique(M);
spp = zeros(n,1);
for k = u(u>0).';
spp(k) = sum(M==k);
end
end
function tab = tabi(M)
tab = tabulate(M);
if tab(1)==0
tab(1,:) = [];
end
end
function spp = histci(M,n)
spp = histcounts(M,1:n+1);
end
function spp = accumi(M)
spp = accumarray(M(M>0),1);
end
function spp = bsxi(M,n)
spp = bsxfun(@eq,M,1:n);
spp = sum(spp,1);
end
function spp = gridi(M,n)
[Mx,nx] = ndgrid(M,1:n);
spp = sum(Mx==nx);
end
And here is the script to run this code and produce the graph:
N = 25; % it is not recommended to run this with N>19 for the `bsxfun` and `ndgrid` functions.
func_times = zeros(N,7); % one column per timed function
for n = 1:N
func_times(n,:) = timing_hist(2^n,500);
end
% plotting:
hold on
mark = 'xo*^dsp';
for k = 1:size(func_times,2)
plot(1:size(func_times,1),log10(func_times(:,k).*1000),['-' mark(k)],...
'MarkerEdgeColor','k','LineWidth',1.5);
end
hold off
xlabel('Log_2(Array size)','FontSize',16)
ylabel('Log_{10}(Execution time) (ms)','FontSize',16)
legend({'for','unique & for','tabulate','histcounts','accumarray','bsxfun','ndgrid'},...
'Location','NorthWest','FontSize',14)
grid on
¹ The reason for this weird name comes from my field, ecology. My models are cellular automata that typically simulate individual organisms in a virtual space (the M above). The individuals are of different species (hence spp), and together they form what is called an "ecological community". The "state" of the community is given by the number of individuals of each species, which is the spp vector in this answer. In these models, we first define a species pool (X above) from which the individuals are drawn, and the community state takes into account all species in the species pool, not only those present in M.
We know that the input vector always contains integers, so why not use this to "squeeze" a bit more performance out of the algorithm?
I've been experimenting with some optimizations of the two best binning methods suggested by the OP, and this is what I came up with:
The number of unique values (X in the question, or n in the example) should be explicitly converted to an (unsigned) integer type.
It's faster to compute an extra bin and then discard it, than to "only process" valid values (see the accumi_new function below).
This function takes about 30sec to run on my machine. I'm using MATLAB R2016a.
function q38941694
datestr(now)
N = 25;
func_times = zeros(N,4);
for n = 1:N
func_times(n,:) = timing_hist(2^n,500);
end
% Plotting:
figure('Position',[572 362 758 608]);
hP = plot(1:n,log10(func_times.*1000),'-o','MarkerEdgeColor','k','LineWidth',2);
xlabel('Log_2(Array size)'); ylabel('Log_{10}(Execution time) (ms)')
legend({'histcounts (double)','histcounts (uint)','accumarray (old)',...
'accumarray (new)'},'FontSize',12,'Location','NorthWest')
grid on; grid minor;
set(hP([2,4]),'Marker','s'); set(gca,'Fontsize',16);
datestr(now)
end
function out = timing_hist(N,n)
% Convert n into an appropriate integer class:
if n < intmax('uint8')
classname = 'uint8';
n = uint8(n);
elseif n < intmax('uint16')
classname = 'uint16';
n = uint16(n);
elseif n < intmax('uint32')
classname = 'uint32';
n = uint32(n);
else % n < intmax('uint64')
classname = 'uint64';
n = uint64(n);
end
% Generate an input:
M = randi([0 n],N,1,classname);
% Time different options:
warning off 'MATLAB:timeit:HighOverhead'
func_times = {'histcounts (double)','histcounts (uint)','accumarray (old)',...
'accumarray (new)';
timeit(@() histci(double(M),double(n))),...
timeit(@() histci(M,n)),...
timeit(@() accumi(M)),...
timeit(@() accumi_new(M))
};
out = cell2mat(func_times(2,:));
end
function spp = histci(M,n)
spp = histcounts(M,1:n+1);
end
function spp = accumi(M)
spp = accumarray(M(M>0),1);
end
function spp = accumi_new(M)
spp = accumarray(M+1,1);
spp = spp(2:end);
end

Basic operations combining two SharedArrays

I've spent the last month or so learning Julia and I'm very impressed. In particular, I'm analysing a large amount of climate model output; I put it all into SharedArrays and adjust and plot it all in parallel. So far this has been very quick and efficient, and I've built up quite a library of code. My current problem is creating a function that can do basic operations on two SharedArrays. I've successfully written a function that takes two arrays and a specification of how to process them. The code is based on the example in the parallel section of the Julia docs and uses the myrange function shown there.
function myrange(q::SharedArray)
    idx = indexpids(q)
    #@show (idx)
    if idx == 0
        # This worker is not assigned a piece
        return 1:0, 1:0
        print("NO WORKERS ASSIGNED")
    end
    nchunks = length(procs(q))
    splits = [round(Int, s) for s in linspace(0,length(q),nchunks+1)]
    splits[idx]+1:splits[idx+1]
end
function combine_arrays_chunk!(array_1,array_2,output_array,func, length_range);
    #@show (length_range)
    for i in length_range
        output_array[i] = func(array_1[i], array_2[i]);
        # hardwired example for func = +
        # output_array[i] = +(array_1[i], array_2[i]);
    end
    output_array
end
combine_arrays_shared_chunk!(array_1,array_2,output_array,func) = combine_arrays_chunk!(array_1,array_2,output_array,func, myrange(array_1));
function combine_arrays_shared(array_1::SharedArray,array_2::SharedArray,func)
    if size(array_1)!=size(array_2)
        return print("inputs not of the same size")
    end
    output_array=SharedArray(Float64,size(array_1));
    @sync begin
        for p in procs(array_1)
            @async remotecall_wait(p, combine_arrays_shared_chunk!, array_1,array_2,output_array,func)
        end
    end
    output_array
end
This works, so one can do
strain_div = combine_arrays_shared(eps_1,eps_2,+);
strain_tot = combine_arrays_shared(eps_1,eps_2,hypot);
with the correct results and the output as a shared array, as required. But ... it's quite slow. It's actually quicker to convert the SharedArrays to normal arrays on one processor, calculate, and then convert back to a SharedArray (for my test cases anyway, with each array approx. 200 MB; when I move up to GBs I guess that will no longer hold). I can hardwire the combine_arrays_shared function to only do addition (or some other function), and then I get the speed increase, but with the function being passed as an argument to combine_arrays_shared the whole thing is slow (10 times slower than the hardwired addition).
I've looked at the FastAnonymous.jl package but I can't see how it would work in this case. I tried, and failed. Any ideas?
I might just resort to writing a different combine_arrays_... function for each basic function I use, or having the func argument as an option and calling different functions from within combine_arrays_shared, but I want it to be more elegant! Also, this is a good way to learn more about Julia.
Harry
This question actually has nothing to do with SharedArrays, and is just "how do I pass functions-as-arguments and get better performance?"
The way FastAnonymous works - and similar to the way closures will soon work in Julia - is to create a type with a call method. If you're having trouble with FastAnonymous for some reason, you can always do it manually:
julia> immutable Foo end
julia> Base.call(f::Foo, x, y) = x*y
call (generic function with 1036 methods)
julia> function applyf(f, X)
           s = zero(eltype(X))
           for x in X
               s += f(x, x)
           end
           s
       end
applyf (generic function with 1 method)
julia> X = rand(10^6);
julia> f = Foo()
Foo()
# Run the function once with each type of argument to JIT-compile
julia> applyf(f, X)
333375.63216645207
julia> applyf(*, X)
333375.63216645207
# Compile anything used by @time
julia> @time 1
  0.000004 seconds (148 allocations: 10.151 KB)
1

# Now let's benchmark
julia> @time applyf(f, X)
  0.002860 seconds (5 allocations: 176 bytes)
333433.439233112

julia> @time applyf(*, X)
  0.142411 seconds (4.00 M allocations: 61.035 MB, 19.24% gc time)
333433.439233112
Note the big increase in speed and greatly-reduced memory consumption.
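As an aside, on Julia 0.5 and later this manual workaround is no longer needed: every function has its own type, so a higher-order function such as applyf is compiled separately for each function passed to it, and calling it with * or an anonymous function is fast without any wrapper type. A minimal sketch (repeating applyf so the snippet is self-contained):
# On Julia >= 0.5, applyf specializes on the type of f automatically.
function applyf(f, X)
    s = zero(eltype(X))
    for x in X
        s += f(x, x)
    end
    s
end

X = rand(10^6)
applyf(*, X)                 # compiles a method specialized for typeof(*)
applyf((a, b) -> a * b, X)   # anonymous functions get the same specialization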

Optimizing the value N to split arrays up for vectorizing an array so it runs the quickest

I'm trying to optimize the value N used to split an array into chunks so that the vectorized computation runs the quickest on different machines. I have some test code below.
#example use random values
clear all,
t=rand(1,556790);
inner_freq=rand(8193,6);
N=100; # use N chunks
nn = int32(linspace(1, length(t)+1, N+1))
aa_sig_combined=zeros(size(t));
total_time_so_far=0;
for ii=1:N
tic;
ind = nn(ii):nn(ii+1)-1;
aa_sig_combined(ind) = sum(diag(inner_freq(1:end-1,2)) * cos(2 .* pi .* inner_freq(1:end-1,1) * t(ind)) .+ repmat(inner_freq(1:end-1,3),[1 length(ind)]));
toc
total_time_so_far=total_time_so_far+sum(toc)
end
fprintf('- Complete test in %4.4fsec or %4.4fmins\n',total_time_so_far,total_time_so_far/60);
This takes 162.7963 sec, or 2.7133 min, to complete when N=100 on a 16 GB i7 machine running Ubuntu.
Is there a way to find out what value N should be to get this to run the fastest on different machines?
PS: I'm running Octave 3.8.1 on a 16 GB i7 Ubuntu 14.04 machine, but it will also be running on machines as small as a 1 GB Raspberry Pi 2.
This is the MATLAB test script that I used to time each part. The return statement breaks out after the first iteration, since the remaining iterations look similar.
%example use random values
clear all;
t=rand(1,556790);
inner_freq=rand(8193,6);
N=100; % use N chunks
nn = int32( linspace(1, length(t)+1, N+1) );
aa_sig_combined=zeros(size(t));
D = diag(inner_freq(1:end-1,2));
A = inner_freq(1:end-1,1); % frequency column used inside cos (taken from the original expression)
for ii=1:N
ind = nn(ii):nn(ii+1)-1;
tic;
cosPara = 2 * pi * A * t(ind);
toc;
cosResult = cos( cosPara );
sumParaA = D * cosResult;
toc;
sumParaB = repmat(inner_freq(1:end-1,3),[1 length(ind)]);
toc;
aa_sig_combined(ind) = sum( sumParaA + sumParaB );
toc;
return;
end
The output is indicated as follows. Note that I have a slow computer.
Elapsed time is 0.156621 seconds.
Elapsed time is 17.384735 seconds.
Elapsed time is 17.922553 seconds.
Elapsed time is 18.452994 seconds.
As you can see, the cos operation is what's taking so long. You are running cos on an 8192x5568 matrix (45,613,056 elements), so it makes sense that it takes so long.
If you wish to improve performance, use parfor as it appears each iteration is independent. Assuming you had 100 cores to run your 100 iterations, your script would be done in 17 seconds + parfor overhead.
Within the cos calculation, you might want to look into whether another method exists to calculate the cosine of a value faster and with more parallelism than the stock method.
Another minor optimization is the following line, which ensures that the diag function isn't called within the loop, since the diagonal matrix is constant. You don't want an 8192x8192 diagonal matrix to be generated every time! I just stored it outside the loop, and it gives a bit of a performance boost as well.
D = diag(inner_freq(1:end-1,2));
Note that I didn't use the MATLAB profiler, as it didn't work for me, but you should use it in the future for more functionalized code.

Matlab random sample of a dataset

I have a dataset (Data) which is a vector of, let's say, 1000 real numbers. I would like to extract, at random from Data, 100 samples of 10 contiguous numbers each. I don't know how to use datasample for that purpose.
Thanks in advance for your help.
You can just pick 100 random numbers between 1 and 991:
I = randi(991, 100, 1)
Then use them as the starting points to index 10 contiguous elements:
cell2mat(arrayfun(@(x)(Data(x:x+9)), I, 'uni', false))
Here you have a snippet; instead of using datasample, I used randi to generate random indices.
n_times = 100;
l_data = length(Data);
index_random = randi(l_data-9,n_times,1); % '- 9' to not to surpass the vector limit when you read the 10 items
for ind1 = 1:n_times
random_number(ind1,:) = Data(index_random(ind1):index_random(ind1)+9)
end
This is similar to Dan's answer, but avoids using cells and arrayfun, so it may be faster.
Let Ns denote the number of contiguous numbers you want (10 in your example), and Nt the number of times (100 in your example). Then:
result = Data(bsxfun(@plus, randi(numel(Data)-Ns+1, Nt, 1), 0:Ns-1)); %// Nt x Ns
Here is another solution, close to @Luis's, but with cumsum instead of bsxfun:
A = rand(1,1000); % The vector to sample
sz = size(A,2);
N = 100; % no. of samples
B = 10; % size of one sample
first = randi(sz-B+1,N,1); % the starting point for all blocks
rand_blocks = A(cumsum([first ones(N,B-1)],2)); % the result
This results in an N-by-B matrix (rand_blocks), each row of it is one sample. Of course, this could be one-lined, but it won't make it faster, and I want to keep it clear. For small N or B this method is slightly faster. If N or B becomes very large then the bsxfun method is slightly faster. This ranking is not affected by the size of A.
