Create an array in Ruby with a begin value, end value and step for float values

How can one create an array filled with values within a range (having a begin and end value) and a step? It should support begin and end values of float type.

For floats with custom stepping you can use Numeric#step like so:
-1.25.step(by: 0.5, to: 1.25).to_a
# => [-1.25, -0.75, -0.25, 0.25, 0.75, 1.25]
If your begin and end values are integers, you can build a Range directly (see this post or that post on how to create ranges) and simply call .to_a at the end. Example:
(-1..1).step(0.5).to_a
# => [-1.0, -0.5, 0.0, 0.5, 1.0]
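A Range with float endpoints also accepts a float step, so the following should produce the same array as the Numeric#step version above (a minimal sketch; behaviour may vary slightly across Ruby versions):
(-1.25..1.25).step(0.5).to_a
# => [-1.25, -0.75, -0.25, 0.25, 0.75, 1.25]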

Related

Tensorflow-probability transform event shape of JointDistribution

I would like to create a distribution for n categorical variables C_1, ..., C_n whose event shape is n. Using JointDistributionSequentialAutoBatched, the event shape is a list [[], ..., []]. For example, for n = 2:
import tensorflow_probability.python.distributions as tfd
probs = [
    [0.8, 0.2],       # C_1 in {0,1}
    [0.3, 0.3, 0.4],  # C_2 in {0,1,2}
]
D = tfd.JointDistributionSequentialAutoBatched([tfd.Categorical(probs=p) for p in probs])
>>> D
<tfp.distributions.JointDistributionSequentialAutoBatched 'JointDistributionSequentialAutoBatched' batch_shape=[] event_shape=[[], []] dtype=[int32, int32]>
How do I reshape it to get event shape [2]?
A few different approaches could work here:
Create a batch of Categorical distributions and then use tfd.Independent to reinterpret the batch dimension as the event:
vector_dist = tfd.Independent(
    tfd.Categorical(
        probs=[
            [0.8, 0.2, 0.0],  # C_1 in {0,1}
            [0.3, 0.3, 0.4],  # C_2 in {0,1,2}
        ]),
    reinterpreted_batch_ndims=1)
Here I added an extra zero to pad out probs so that both distributions can be represented by a single Categorical object.
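As a quick sanity check on this first approach (a sketch using standard distribution methods; the sampled values shown are just illustrative), the reshaped distribution now has a vector event:
vector_dist.event_shape        # [2]
vector_dist.sample()           # a length-2 int32 vector, e.g. [0, 2]
vector_dist.log_prob([0, 2])   # log(0.8) + log(0.4)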
Use the Blockwise distribution, which stuffs its component distributions into a single vector (as opposed to the JointDistribution classes, which return them as separate values):
vector_dist = tfd.Blockwise([tfd.Categorical(probs=p) for p in probs])
The closest to a direct answer to your question is to apply the Split bijector, whose inverse is Concat, to the joint distribution:
import tensorflow_probability as tfp
tfb = tfp.bijectors
D = tfd.JointDistributionSequentialAutoBatched(
    [tfd.Categorical(probs=[p]) for p in probs])
vector_dist = tfb.Invert(tfb.Split(2))(D)
Note that I had to awkwardly write probs=[p] instead of just probs=p. This is because the Concat bijector, like tf.concat, can't change the tensor rank of its argument (it can concatenate small vectors into a big vector, but not scalars into a vector), so we have to ensure that its inputs are themselves vectors. This could be avoided if TFP had a Stack bijector analogous to tf.stack / tf.unstack (it doesn't currently, but there's no reason this couldn't exist).

Why are my 1-D histograms not showing correctly?

I have two sets of data (x, y) corresponding to two 1-D histograms that are meant to be plotted next to each other as subplots. Both x and y values are different and hence they would be represented in different axes. The histogram heights (first item in hists) and the corresponding sequence of bins (second items in hists) are given for each subplot as the following:
*Please note that each height corresponds to a bin in the sequence; the heights are already known for each bin. I just want to put the data in bar format using the hist function.
import numpy as np
import matplotlib.pyplot as plt

array_1 = np.array([ 8.20198063, 8.30645018, 8.30829034, 8.63297701, 0., 0., 10.43478942])
array_random_1 = np.array([ 8.23460584, 8.31556503, 8.3090378, 8.63147021, 0., 0., 10.41481862])
array_2 = np.array([10.4348338, 8.69943553, 8.68710347, 6.67854038])
array_random_2 = np.array([10.41597028, 8.76635268, 8.19516216, 6.68126994])
bins_1, bins_2 = [8.0, 8.6, 9.2, 9.8, 10.4, 11.0, 11.6, 12.2], [0.0, 0.25, 0.5, 0.75, 1.0]
Here is my attempt to plot these two subplots using the hist function from Matplotlib:
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=False, sharey=False, figsize=(12,3))
ax1.hist(array_1, bins_1, ec='blue', fc='none', lw=1.5, histtype='step', label='1')
ax1.hist(array_random_1, bins_1, ec='red', fc='none', lw=1.5, histtype='step', label='Random_1')
ax1.set_xlabel('X1')
ax1.set_ylabel('Y1')
ax2.hist(array_2, bins_2, ec='blue', fc='none', lw=1.5, histtype='step', label='2')
ax2.hist(array_random_2, bins_2, ec='red', fc='none', lw=1.5, histtype='step', label='Random_2')
ax2.set_xlabel('X2')
plt.show()
However, as you can see, the bars are not drawn to the correct height in the left panel (the blue bars are missing entirely), and everything is missing from the second panel. What is the issue in making these 1-D histograms? Does this mean that I cannot use hist for my purpose?
What I want is the following, which is doable using bar. How can I do it using hist?
From what I understood, try replacing the following in your code:
histtype='step'
with
histtype='bar'
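If the goal is to draw the already-known heights exactly rather than have hist re-bin the raw values, one further option is to pass the left bin edges as the data and the heights as weights (a sketch reusing the arrays, bins and axes defined in the question):
# each left edge falls into its own bin, so the corresponding weight is drawn as the bar height
ax1.hist(bins_1[:-1], bins=bins_1, weights=array_1, ec='blue', fc='none', lw=1.5, histtype='step', label='1')
ax2.hist(bins_2[:-1], bins=bins_2, weights=array_2, ec='blue', fc='none', lw=1.5, histtype='step', label='2')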

How can I directly modify the weight values in the Julia library Flux?

In the Julia library Flux, we can take a neural network, let's call it m, and extract its weights with the following code:
params(m)
This returns a Zygote.Params type of object, of the form:
Params([Float32[0.20391908 -0.101616435 0.09610984 -0.1013181 -0.13325627 -0.034813307 -0.13811183 0.27022845 ...]...)
If I wanted to alter each of the weights slightly, how would I be able to access them?
Edit:
As requested, here is the structure for m:
Chain(LSTM(8,10),Dense(10,1))
You can iterate over a Params object to access each set of parameters as an array, which you can then manipulate in place.
Supposing you want to change every parameter by 1‰, you could do something like the following:
julia> using Flux
julia> m = Dense(10, 5, σ)
Dense(10, 5, σ)
julia> params(m)
Params([Float32[-0.026854342 -0.57200056 … 0.36827534 -0.39761665; -0.47952518 0.594778 … 0.32624483 0.29363066; … ; -0.22681071 -0.0059174187 … -0.59344876 -0.02679312; -0.4910349 0.60780525 … 0.114975974 0.036513895], Float32[0.0, 0.0, 0.0, 0.0, 0.0]])
julia> for p in params(m)
           p .*= 1.001
       end
julia> params(m)
Params([Float32[-0.026881196 -0.5725726 … 0.3686436 -0.39801428; -0.4800047 0.5953728 … 0.32657108 0.2939243; … ; -0.22703752 -0.0059233364 … -0.5940422 -0.026819913; -0.49152592 0.60841304 … 0.11509095 0.03655041], Float32[0.0, 0.0, 0.0, 0.0, 0.0]])
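If "altering slightly" means adding a small random perturbation rather than scaling, the same in-place pattern applies (a sketch; the 0.01 noise scale is an arbitrary choice):
using Flux
m = Chain(LSTM(8,10), Dense(10,1))   # the structure from the question
for p in params(m)
    p .+= 0.01f0 .* randn(Float32, size(p)...)   # perturb every parameter array in place
end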

Efficiently sorting and filtering a JaggedArray by another one

I have a JaggedArray (awkward.array.jagged.JaggedArray) that contains indices that point to positions in another JaggedArray. Both arrays have the same length, but each of the numpy.ndarrays that the JaggedArrays contain can be of different length. I would like to sort the second array using the indices of the first array, at the same time dropping the elements from the second array that are not indexed from the first array. The first array can additionally contain values of -1 (these could also be replaced by None if needed, but this is currently not the case) that mean that there is no match in the second array. In such a case, the corresponding position in the result should be set to a default value (e.g. 0).
Here's a practical example and how I solve this at the moment:
import uproot
import numpy as np
import awkward
def good_index(my_indices, my_values):
    my_list = []
    for index in my_indices:
        if index > -1:
            my_list.append(my_values[index])
        else:
            my_list.append(0)
    return my_list
indices = awkward.fromiter([[0, -1], [3,1,-1], [-1,0,-1]])
values = awkward.fromiter([[1.1, 1.2, 1.3], [2.1,2.2,2.3,2.4], [3.1]])
new_map = awkward.fromiter(map(good_index, indices, values))
The resulting new_map is: [[1.1 0.0] [2.4 2.2 0.0] [0.0 3.1 0.0]].
Is there a more efficient/faster way of achieving this? I was thinking that one could use numpy functionality such as numpy.where, but due to the different lengths of the ndarrays this fails, at least for the approaches that I tried.
If all of the subarrays in values are guaranteed to be non-empty (so that indexing with -1 returns the last subelement, not an error), then you can do this:
>>> almost = values[indices] # almost what you want; uses -1 as a real index
>>> almost.content = awkward.MaskedArray(indices.content < 0, almost.content)
>>> almost.fillna(0.0)
<JaggedArray [[1.1 0.0] [2.4 2.2 0.0] [0.0 3.1 0.0]] at 0x7fe54c713c88>
The last step is optional because without it, the missing elements are None, rather than 0.0.
If some of the subarrays in values are empty, you can pad them to ensure they have at least one subelement. All of the original subelements are indexed the same way they were before, since pad only increases the length, if need be.
>>> values = awkward.fromiter([[1.1, 1.2, 1.3], [], [2.1, 2.2, 2.3, 2.4], [], [3.1]])
>>> values.pad(1)
<JaggedArray [[1.1 1.2 1.3] [None] [2.1 2.2 2.3 2.4] [None] [3.1]] at 0x7fe54c713978>
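After padding, the same three steps as above should apply unchanged (a sketch reusing the names from the first snippet; padded is a new name, and indices is assumed to have one sublist per sublist of values):
>>> padded = values.pad(1)
>>> almost = padded[indices]
>>> almost.content = awkward.MaskedArray(indices.content < 0, almost.content)
>>> almost.fillna(0.0)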

Create array with non-integer increments

I am trying to create time stamp arrays in Swift.
So, say I want to go from 0 to 4 seconds, I can use Array(0...4), which gives [0, 1, 2, 3, 4]
But how can I get [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]?
Essentially I want a flexible delta, such as 0.5, 0.05, etc.
You can use stride(from:through:by:):
let a = Array(stride(from: 0.0, through: 4.0, by: 0.5))
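For the values in the question this yields [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]. Wrapping the call in a tiny helper makes the delta easy to vary (a sketch; the timestamps name is purely illustrative):
func timestamps(from start: Double, through end: Double, by delta: Double) -> [Double] {
    return Array(stride(from: start, through: end, by: delta))
}
let coarse = timestamps(from: 0.0, through: 4.0, by: 0.5)   // [0.0, 0.5, ..., 4.0]
let fine = timestamps(from: 0.0, through: 4.0, by: 0.05)    // the usual floating-point caveats apply near the end point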
An alternative for non-constant increments (even more viable in Swift 3.1)
The stride(from:through:by:) function covered in Alexander's answer is the fit-for-purpose solution here. But for readers of this Q&A who want to construct a sequence (/collection) with non-constant increments (a case where the linear-sequence-constructing stride(...) falls short), I'll also include another alternative.
For such scenarios, the sequence(first:next:) is a good method of choice; used to construct a lazy sequence that can be repeatedly queried for the next element.
E.g., constructing the first 5 ticks for a log10 scale (Double array)
let log10Seq = sequence(first: 1.0, next: { 10*$0 })
let arr = Array(log10Seq.prefix(5)) // [1.0, 10.0, 100.0, 1000.0, 10000.0]
Swift 3.1 is intended to be released in the spring of 2017, and with this (among lots of other things) comes the implementation of the following accepted Swift evolution proposal:
SE-0045: Add prefix(while:) and drop(while:) to the stdlib
prefix(while:) in combination with sequence(first:next:) provides a neat tool for generating sequences, covering everything from simple next closures (such as imitating the simple behaviour of stride(...)) to more advanced ones. The stride(...) example of this question is a good minimal (very simple) example of such usage:
/* this we can do already in Swift 3.0 */
let delta = 0.5
let seq = sequence(first: 0.0, next: { $0 + delta})
/* 'prefix(while:)' soon available in Swift 3.1 */
let arr = Array(seq.prefix(while: { $0 <= 4.0 }))
// [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
// ...
for elem in sequence(first: 0.0, next: { $0 + delta})
.prefix(while: { $0 <= 4.0 }) {
// ...
}
Again, this is not in contest with stride(...) in the simple case of this question, but it becomes very viable as soon as the simple applications of stride(...) fall short, e.g. for constructing non-linear sequences.
