How to determine repeating pattern of bits in a single array - c

So here is the problem: I'm building arrays of 128 bits, in C, to represent beat divisions. Those divisions can be logically AND/OR/XOR'd against one another, so you can have results like below. The problem I'm having is how to determine when a pattern repeats, and what the start/end indices of the first repeated section are, so that I can loop over just that section to prevent strange things happening when I reach the max (currently 128).
It seems like I'm going to need to increase the size to 256 or larger to account for situations where the more complex nested logic creates patterns that don't repeat for a while. Looking for advice on how to detect the pattern algorithmically within an array of bits.
2: 01010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101
3: 00100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100
3 OR 2: 01110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101
3 AND 2: 00000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100
3 XOR 2: 01110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001
|| || || || || ...
5: 00001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000
5 OR (3 XOR 2): 01111001110001110001110011110101111001110001110001110011110101111001110001110001110011110101111001110001110001110011110101111001
|| || || || ||
5 OR (3 OR 2): 01111101110101110101110111110101111101110101110101110111110101111101110101110101110111110101111101110101110101110111110101111101
|| || || ||
5: 00001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000
6: 00000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100
5 OR 6: 00001100010100100101000110000100001100010100100101000110000100001100010100100101000110000100001100010100100101000110000100001100
|| || || || ||
7: 00000010000001000000100000010000001000000100000010000001000000100000010000001000000100000010000001000000100000010000001000000100
7 XOR (5 OR 6): 00001110010101100101100110010100000100010000100111000111000100101100000100101101000110000110001101010100000101010110001100001000
shoot, 7 XOR (5 OR 6) doesn't repeat within 128 bits..
8: 00000001000000010000000100000001000000010000000100000001000000010000000100000001000000010000000100000001000000010000000100000001
16: 00000000000000010000000000000001000000000000000100000000000000010000000000000001000000000000000100000000000000010000000000000001
To provide a little more context, this is for a logical clock divider that I've written for a musical module (https://llllllll.co/t/chrono-sage-meadowphysics-logical-clock-divider-v1-2-5/27182), and the issue I'm trying to solve is the ability to have these logical/nested logic combinations of beat divisions line up so there is no stutter when a pattern wraps around.

Ian Abbott commented that the "repeating pattern should be no longer than the least common multiple of M and N" (M and N being the two metres, I assume, although you've shown examples combining more than two). In fact, the naive combination of more than one metre is exactly a repeating pattern of length equal to their least common multiple.
But it looks like you are making additional, arbitrary choices, indicated by the lines in the examples, about where you'd like the "downbeat." On the face of it, without more clarification, we would be hard pressed to tell a computer what these criteria are. What are you thinking about (or better yet, hearing) that advises you to place the downbeats where you did, rather than at the start in the examples? Or is the issue that we don't have access to the original metres that were combined?
3/4 x . x|x . x|x . x|x . x
4/4 x . x x|x . x x|x . x x
(lcm = 12)
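If you do want to detect the loop length from the bits themselves rather than track the LCM of the divisions as they are combined, a brute-force period check is cheap at this size. Below is a minimal C sketch (my own illustration, not your module's code): it assumes the pattern is stored one bit per uint8_t, that for a division d bit i is set when i % d == d-1 (matching the strings above), and the gcd/lcm/smallest_period helpers are hypothetical names.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static unsigned gcd(unsigned a, unsigned b) {
    while (b) { unsigned t = a % b; a = b; b = t; }
    return a;
}

static unsigned lcm(unsigned a, unsigned b) {
    return a / gcd(a, b) * b;
}

/* Smallest p such that bits[i] == bits[i % p] for every i in [0, n). */
static size_t smallest_period(const uint8_t *bits, size_t n) {
    for (size_t p = 1; p < n; p++) {
        int ok = 1;
        for (size_t i = p; i < n && ok; i++)
            ok = (bits[i] == bits[i - p]);
        if (ok)
            return p;
    }
    return n;   /* no shorter repetition inside the window */
}

int main(void) {
    enum { N = 128 };
    uint8_t div2[N], div3[N], combined[N];
    for (int i = 0; i < N; i++) {
        div2[i] = (i % 2 == 1);            /* the "2" line above */
        div3[i] = (i % 3 == 2);            /* the "3" line above */
        combined[i] = div2[i] ^ div3[i];   /* 3 XOR 2            */
    }
    printf("period of 3 XOR 2: %zu, lcm(2,3) = %u\n",
           smallest_period(combined, N), lcm(2, 3));   /* both are 6 */
    return 0;
}

In practice it is probably simpler to carry the LCM along with every AND/OR/XOR combination, since lcm(p, q) is always a valid loop length for the combination and never requires inspecting the bits at all.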

Related

Is there any mechanism to auto squeeze in Matlab / Octave

For an nD array, it would be nice to be able to auto squeeze to remove singleton dimensions. Is there a way to do this that I don't know about? This would be especially useful for aggregate functions (e.g. sum, mean, etc) where you always expect a result with fewer dimensions.
Here's a simple example:
>> A = ones(3,3,3);
>> B = mean(A);
>> size(B)
ans =
1 3 3
>> squeeze(B)
ans =
1 1 1
1 1 1
1 1 1
It would be nice if Matlab/Octave would automatically do the squeezing for me. Or if there was a way to turn that option on (something similar to hold on for plots).
As far as I know, Matlab does not have that. And I don't think it would be a good idea. Consider a modified version of your example:
>> A = ones(3,1,1,3);
>> B = mean(A);
>> size(B)
ans =
1 1 1 3
What should "auto-squeeze" do here? Reduce B to size [1 1 3] or to [1 3]?
You could argue that it should remove the same dimension that mean has turned into a singleton. But then it would have to be done within the mean function, perhaps with an optional input argument. Once you get the function output, there is no information about how it was obtained.
Or you could argue that it should remove all singleton dimensions, like squeeze (more or less) does. But then it would remove dimensions that were already singleton in the function input, which is probably unwanted.
If you ask me, having a second input in squeeze specifying which (singleton) dimensions to remove would be a nice addition (in the same vein as you can use mean(A, 1) to force the operation to be applied along the first dimension even if A happens to be a row vector).
I agree with Luis and Cris, but I would add the following.
Both Matlab and Octave do automatically squeeze extra dimensions, in a very particular scenario: any dimensions at the end that have been reduced to singletons are automatically squeezed out.
E.g.
A = ones([1,2,3,4]);
B = mean(A, 4);
size(B)
% ans = 1 2 3
Note, how the answer is [1,2,3], and not [1,2,3,1]. This is in contrast to languages like python, for instance, where a size of (1,1) is very different to a size of (1,).
Therefore, with regard to your questions, one way to use this to your advantage could be to ensure that the dimension that is to be reduced is always found at the end, and thus automatically simplified.
This becomes even more useful when you realise that:
size(A(:)) % ans = 24 1 (i.e. 24)
size(A(:,:)) % ans = 1 24
size(A(:,:,:)) % ans = 1 2 12
size(A(:,:,:,:)) % ans = 1 2 3 4
Meaning, if you order your dimensions hierarchically you can ensure that any operations that need to take place over the higher dimensions, can a) be vectorised easily, and b) give a natural result, without the need to waste time squeezing or permuting the resulting dimensions.

Daily Coding Problem 260 : Reconstruct a jumbled array - Intuition?

I'm going through the question below.
The sequence [0, 1, ..., N] has been jumbled, and the only clue you have for its order is an array representing whether each number is larger or smaller than the last. Given this information, reconstruct an array that is consistent with it.
For example, given [None, +, +, -, +], you could return [1, 2, 3, 0, 4].
I went through the solution on this post but still unable to understand it as to why this solution works. I don't think I would be able to come up with the solution if I had this in front of me during an interview. Can anyone explain the intuition behind it? Thanks in advance!
This answer tries to give a general strategy for finding an algorithm to tackle this type of problem. It is not trying to prove why the given solution is correct, but to lay out a route towards such a solution.
A tried and tested way to tackle this kind of problem (actually a wide range of problems) is to start with small examples and work your way up. This works for puzzles, but even more so for problems encountered in reality.
First, note that the question is formulated deliberately to not point you in the right direction too easily. It makes you think there is some magic involved. How can you reconstruct a list of N numbers given only the list of plusses and minuses?
Well, you can't. For 10 numbers, there are 10! = 3628800 possible permutations. And there are only 2⁹ = 512 possible lists of signs. That's a huge difference. Most original lists will be completely different after reconstruction.
Here's an overview of how to approach the problem:
Start with very simple examples
Try to work your way up, adding a bit of complexity
If you see something that seems a dead end, try increasing complexity in another way; don't spend too much time with situations where you don't see progress
While exploring alternatives, revisit old dead ends, as you might have gained new insights
Try whether recursion could work:
given a solution for N, can we easily construct a solution for N+1?
or even better: given a solution for N, can we easily construct a solution for 2N?
Given a recursive solution, can it be converted to an iterative solution?
Does the algorithm do some repetitive work that can be postponed to the end?
....
So, let's start simple (writing 0 for the None at the start):
very short lists are easy to guess:
'0++' → 0 1 2 → clearly only one solution
'0--' → 2 1 0 → only one solution
'0-+' → 1 0 2 or 2 0 1 → hey, there is no unique outcome, though the question only asks for one of the possible outcomes
lists with only plusses:
'0++++++' → 0 1 2 3 4 5 6 → only possibility
lists with only minuses:
'0-------'→ 7 6 5 4 3 2 1 0 → only possibility
lists with one minus, the rest plusses:
'0-++++' → 1 0 2 3 4 5 or 5 0 1 2 3 4 or ...
'0+-+++' → 0 2 1 3 4 5 or 4 5 0 1 2 3 or ...
→ no very obvious pattern seems to emerge
maybe some recursion could help?
given a solution for N, appending one sign more?
appending a plus is easy: just keep the existing solution and append the largest value plus 1
appending a minus, after some thought: increase all the numbers by 1 and append a zero
→ hey, we have a working solution, but maybe not the most efficient one
the algorithm just appends to an existing list, no need to really write it recursively (although the idea is expressed recursively)
appending a plus can be improved, by storing the largest number in a variable so it doesn't need to be searched at every step; no further improvements seem necessary
appending a minus is more troublesome: the list needs to be traversed with each append
what if instead of appending a zero, we append -1, and do the adding at the end?
this clearly works when there is only one minus
when two minus signs are encountered, the first time append -1, the second time -2
→ hey, this works for any number of minuses encountered: just keep a count of them in a variable and add it to everything at the end of the algorithm
This is in bird's eye view one possible route towards coming up with a solution. Many routes lead to Rome. Introducing negative numbers might seem tricky, but it is a logical conclusion after contemplating the recursive algorithm for a while.
It works because all changes are sequential, either adding one or subtracting one, with both the increasing and the decreasing sequences starting from the same place. That guarantees that, taken together, the values form one contiguous range of integers. For example, given the arbitrary
[None, +, -, +, +, -]
turned vertically for convenience, we can see
None 0
+ 1
- -1
+ 2
+ 3
- -2
Now just shift them up by two (to account for -2):
2 3 1 4 5 0
+ - + + -
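Here is a minimal C sketch of that single pass (my own illustration with hypothetical names, not code from the linked solution): a '+' appends one more than the highest value used so far, a '-' appends one less than the lowest (going negative), and a final pass shifts everything so the minimum becomes 0.

#include <stdio.h>

/* signs[0] stands for None (written '0' here), the rest are '+' or '-'. */
static void reconstruct_shift(const char *signs, int n, int *out) {
    int hi = 0, lo = 0;            /* highest and lowest values used so far */
    out[0] = 0;
    for (int k = 1; k <= n; k++)
        out[k] = (signs[k] == '+') ? ++hi : --lo;
    for (int k = 0; k <= n; k++)   /* shift so the minimum becomes 0 */
        out[k] -= lo;
}

int main(void) {
    const char *signs = "0+-++-";  /* the [None, +, -, +, +, -] example */
    int b[6];
    reconstruct_shift(signs, 5, b);
    for (int k = 0; k < 6; k++)
        printf("%d ", b[k]);       /* prints 2 3 1 4 5 0, as derived above */
    printf("\n");
    return 0;
}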
Let's first look at a solution which (I think) is easier to understand, formalize and prove correct (though I will only explain it, not prove it formally):
We name A[0..N] our input array (where A[k] is None if k = 0 and is + or - otherwise) and B[0..N] our output array (where B[k] is in the range [0, N] and all values are unique)
At first we see that our problem (find B such that B[k] > B[k-1] if A[k] == + and B[k] < B[k-1] if A[k] == -) can be solved by solving a more constrained problem:
Find B such that B[k] == max(B[0..k]) if A[k] == + and B[k] == min(B[0..k]) if A[k] == -.
This strengthens "a value must be larger or smaller than the last" to "a value must be larger or smaller than everything before it",
so a solution to the stricter problem is a solution to the original one as well.
Now how do we approach this problem?
A greedy solution is sufficient: indeed, it is easy to show that the value associated with the last + must be the biggest number overall (which is N), the one associated with the second-to-last + must be the second biggest (which is N-1), etc.
At the same time, the value associated with the last - must be the smallest number overall (which is 0), the one associated with the second-to-last - must be the second smallest (which is 1), etc.
So we can fill B from right to left, remembering how many + we have seen so far (call this X) and how many - we have seen so far (call this Y), and looking at the current symbol: if it is a + we put N-X in B and increase X by 1, and if it is a - we put Y in B and increase Y by 1.
In the end we fill B[0] with the only remaining value, which (using the final counts) is equal to both Y and N-X.
An interesting property of this solution is that the values associated with a - are exactly the values from 0 to Y-1 (where Y is now the total number of -), sorted in reverse order; the values associated with a + are exactly the values from N-X+1 to N (where X is the total number of +), sorted; and B[0] is always Y, which equals N-X.
So the - positions hold all the values strictly smaller than B[0], reverse sorted, and the + positions hold all the values strictly bigger than B[0], sorted.
This property is the key to understanding why the solution proposed here works:
it considers B[0] equal to 0 and then fills B following the property; this isn't a solution yet, because the values are not in the range [0, N], but a simple translation moves the range to [0, N].
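Here is a small C sketch of the greedy right-to-left filling described above (my own illustration, using hypothetical names, not the linked code): every '+' receives the largest value not yet used, every '-' receives the smallest, and B[0] gets the single value left over.

#include <stdio.h>

/* signs[0] stands for None, signs[1..n] are '+' or '-'; out has n+1 slots. */
static void reconstruct_greedy(const char *signs, int n, int *out) {
    int x = 0;                     /* number of '+' handled so far */
    int y = 0;                     /* number of '-' handled so far */
    for (int k = n; k >= 1; k--) {
        if (signs[k] == '+')
            out[k] = n - x++;      /* next biggest unused value  */
        else
            out[k] = y++;          /* next smallest unused value */
    }
    out[0] = y;                    /* the remaining value (== n - x) */
}

int main(void) {
    const char *signs = "0++-+";   /* the example from the question */
    int b[5];
    reconstruct_greedy(signs, 4, b);
    for (int k = 0; k < 5; k++)
        printf("%d ", b[k]);       /* prints 1 2 3 0 4 */
    printf("\n");
    return 0;
}

For this input it happens to reproduce exactly the [1, 2, 3, 0, 4] given in the problem statement.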
The idea is to produce a permutation of [0,1...N] which will follow the pattern of [+,-...]. There are many permutations which will be applicable; it isn't just a single one. For instance, look at the example provided:
[None, +, +, -, +], you could return [1, 2, 3, 0, 4].
But you also could have returned other solutions, just as valid: [2,3,4,0,1] and [0,3,4,1,2] are also solutions. The only concern is that the first number needs at least two numbers above it for positions [1],[2], and you need to leave one number near the end which is lower than the one before and the one after it.
So the question isn't finding the one and only pattern which is scrambled, but to produce any permutation which will work with these rules.
This algorithm answers two questions for the next member of the list: get a number that is higher/lower than the previous one, and get a number that hasn't been used yet. It takes a starting number and essentially creates two lists: an ascending list for the '+' and a descending list for the '-'. This way we guarantee that the next member is higher/lower than the previous one (because it is in fact higher/lower than all previous members, a stricter condition than the one required) and for the same reason we know this number wasn't used before.
So the intuition of the referenced algorithm is to start with a reference number and work your way through. Let's assume we start from 0. In the first place we put 0+1, which is 1. We keep 0 as our lowest, 1 as the highest.
l[0] h[1] list[1]
the next symbol is '+' so we take the highest number and raise it by one to 2, and update both the list with a new member and the highest number.
l[0] h[2] list [1,2]
The next symbol is '+' again, and so:
l[0] h[3] list [1,2,3]
The next symbol is '-' and so we have to put in our 0. Note that if the following symbol were another '-', we would be stuck, since we would have no lower number left to produce.
l[0] h[3] list [1,2,3,0]
Luckily for us, we've chosen well and the last symbol is '+', so we can put our 4 and call it a day.
l[0] h[4] list [1,2,3,0,4]
This is not necessarily the smartest solution, as it can never know in advance whether the chosen starting number will solve the sequence, and it always progresses by 1. That means that for some patterns [+,-...] it will not be able to find a solution. But for the pattern provided it works well with 0 as the initial starting point. If we chose the number 1 it would also work and produce [2,3,4,0,1], but for 2 and above it would fail. It will never produce the solution [0,3,4,1,2].
I hope this helps understanding the approach.
This is not an explanation for the question put forward by OP.
Just want to share a possible approach.
Given: N = 7
Index:   0 1 2 3 4 5 6 7
Pattern: X + - + - + - +   //X = None
Go from 0 to N.
[1] Fill all '-' positions starting from the right, going left.
Index:   0 1 2 3 4 5 6 7
Pattern: X + - + - + - +   //X = None
Answer:      2   1   0
[2] Fill all the vacant places, i.e. [X & +], starting from the left, going right.
Index:   0 1 2 3 4 5 6 7
Pattern: X + - + - + - +   //X = None
Answer:  3 4   5   6   7
Final:
Pattern: X + - + - + - +   //X = None
Answer:  3 4 2 5 1 6 0 7
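The same two passes in a short C sketch (my own illustration of the idea above, with hypothetical names):

#include <stdio.h>

static void reconstruct_two_pass(const char *signs, int n, int *out) {
    int v = 0;
    for (int k = n; k >= 1; k--)    /* pass 1: '-' positions, right to left         */
        if (signs[k] == '-')
            out[k] = v++;
    for (int k = 0; k <= n; k++)    /* pass 2: None and '+' positions, left to right */
        if (k == 0 || signs[k] == '+')
            out[k] = v++;
}

int main(void) {
    const char *signs = "0+-+-+-+"; /* the N = 7 example above */
    int b[8];
    reconstruct_two_pass(signs, 7, b);
    for (int k = 0; k < 8; k++)
        printf("%d ", b[k]);        /* prints 3 4 2 5 1 6 0 7 */
    printf("\n");
    return 0;
}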
My answer is definitely too late for your problem, but if you want a simple proof, you might like to read it:
min_last or min_so_far is a decreasing value starting from 0.
max_last or max_so_far is an increasing value starting from 0.
In the input, each value is either "+" or "-" (excluding the first one, which is None); each "+" increases max_so_far by one and each "-" decreases min_so_far by one. So max_so_far - min_so_far is exactly equal to N. But you need the values to lie in the range [0, N], while the values produced so far lie in [min_so_far, max_so_far] (max_so_far equals the number of "+"s and min_so_far is minus the number of "-"s, so only [0, max_so_far] overlaps [0, N]). What you need to do for the final solution is shift by min_so_far: since min_so_far <= 0, you subtract min_so_far from (i.e. add abs(min_so_far) to) each value of the current answer.

Binary search modification

I have been attempting to solve the following problem. I have a sequence of positive integer numbers which can be very long (several millions of elements). This sequence can contain "jumps" in the element values. A jump means that two consecutive elements differ from each other by more than 1.
Example 01:
1 2 3 4 5 6 7 0
In the above mentioned example the jump occurs between 7 and 0.
I have been looking for an efficient algorithm (in terms of time) for finding the position where this jump occurs. The issue is complicated by the fact that there can be a situation where two jumps are present: one of them is the jump I am looking for, and the other one is a wrap-around which I am not looking for.
Example 02:
9 1 2 3 4 6 7 8
Here the first jump between 9 and 1 is a wrap-around. The second jump between
4 and 6 is the jump which I am looking for.
My idea is to somehow modify the binary search algorithm, but I am not sure whether that is possible due to the presence of the wrap-around. It is worth saying that at most two jumps can occur, and between these jumps the elements are sorted. Does anybody have any idea? Thanks in advance for any suggestions.
You cannot find a solution more efficient than O(n) (i.e. one that does not look at all numbers), since you cannot conclude anything about your numbers by looking at less than all of them. For example, if you only look at every second number (still O(n), but with a better constant factor) you would miss double jumps like these: 1 5 3. You can and must look at every single number and compare it to its neighbours. You could split your workload and use a multicore approach, but that's about it.
Update
If you have the special case that there is only 1 jump in your list and the rest is sorted (e.g. 1 2 3 7 8 9) you can find this jump rather efficiently. You cannot use vanilla binary search, since the list might not be fully sorted and you don't know what number you are searching for, but you could use an adaptation of exponential search, which bears some resemblance.
We need the following assumptions for this algorithm to work:
There is only 1 jump (I ignore the "wrap around jump" since it is not technically between any following elements)
The list is otherwise sorted and it is strictly monotonically increasing
With these assumptions we are now basically searching for an interruption in the monotonicity. That means we are searching for two elements a and b that are k positions apart but do not fulfil b = a + k; this equation must hold whenever there is no jump between the two elements. Now you only need to find elements which do not fulfil it without checking every position, hence the exponential approach. This pseudocode could be such an algorithm:
let numbers be an array of length n fulfilling our assumptions
start = 0
stepsize = 1
while (start < n-1)
    while (start + stepsize > n-1)     // don't step past the last element
        stepsize -= 1
    stop = start + stepsize
    while (numbers[stop] != numbers[start] + stepsize)
        // the jump must be somewhere between start and stop
        if (stepsize == 1)
            // congratulations, the jump is between start and start + 1
            return start
        else
            stepsize /= 2
            stop = start + stepsize    // shrink the window and test again
    start += stepsize
    stepsize *= 2
return "no jump found"
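For what it's worth, here is a hedged C translation of the pseudocode above (same assumptions: exactly one jump, otherwise strictly increasing by one; find_jump is just an illustrative name):

#include <stdio.h>

/* Returns i such that the jump lies between numbers[i] and numbers[i+1],
   or -1 if the sequence contains no jump. */
static int find_jump(const int *numbers, int n) {
    int start = 0;
    int stepsize = 1;
    while (start < n - 1) {
        if (start + stepsize > n - 1)   /* don't step past the last element */
            stepsize = n - 1 - start;
        while (numbers[start + stepsize] != numbers[start] + stepsize) {
            if (stepsize == 1)
                return start;           /* jump between start and start + 1 */
            stepsize /= 2;              /* jump is inside the window: shrink it */
        }
        start += stepsize;              /* window is clean: move past it */
        stepsize *= 2;
    }
    return -1;                          /* no jump found */
}

int main(void) {
    int a[] = { 1, 2, 3, 7, 8, 9 };     /* the single-jump example above */
    int n = (int)(sizeof a / sizeof a[0]);
    printf("jump after index %d\n", find_jump(a, n));   /* prints 2 (between 3 and 7) */
    return 0;
}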

Find high & low peak points in cell array MATLAB

I want to find "significant" changes in a cell array in MATLAB for when I have a movement.
E.g. I have YT, which represents movements in a yaw presentation for a face interaction. YT can vary in size depending on the interaction, from around 80x1 up to 400x1. The first few lines might be
YT = {-7 -8 -8 -8 -8 -9 -9 -9 -6 ...}
I would like to record the following
Over the entire cell array;
1) Count the number of high and low peaks
I can do this with findpeaks, but not for low peaks?
2) Measure the difference between each peak -
For this example, peaks -9 and -6 so difference of +3 between those. So report 1 peak change of +3. At the moment I am only interested in changes of +/- 3, but this might change, so I will need a threshold?
and then over X number of cells (repeating for the cell array)
3) count the number of changes - for this example, 3 changes
4) count the number of significant changes - for this example, 1 change of +/-3
5) describe the changes - 1 change of -1, 1 change of -1, 1 change of +3
Any help would be appreciated, bit of a MATLAB noob.
Thanks!
1) Finding negative peaks is the same as finding positive ones - all you need to do is multiply the sequence by -1 and then findpeaks again
2) If you simply want the differences, then you could subtract the vectors of the positive and negative peaks (possibly offset by one if you want differences in both directions). Something like pospeaks-negpeaks would do one side. You'd need to identify whether the positive or negative peak was first (use the loc return from findpeaks to determine this), and then do pospeaks(1:end-1)-negpeaks(2:end) or vice versa as appropriate.
[edit]As pointed out in your comment, the above assumes that pospeaks and negpeaks are the same length. I shouldn't have been so lazy! The code might be better written as:
if (length(pospeaks)>length(negpeaks))
    % Starts and ends with a positive peak
    neg_diffs=pospeaks(1:end-1)-negpeaks;
    pos_diffs=negpeaks-pospeaks(2:end);
elseif (length(pospeaks)<length(negpeaks))
    % Starts and ends with a negative peak
    pos_diffs=negpeaks(1:end-1)-pospeaks;
    neg_diffs=pospeaks-negpeaks(2:end);
elseif posloc(1)<negloc(1)
    % Starts with a positive peak, and ends with a negative one
    neg_diffs=pospeaks-negpeaks;
    pos_diffs=pospeaks(2:end)-negpeaks(1:end-1);
else
    % Starts with a negative peak, and ends with a positive one
    pos_diffs=negpeaks-pospeaks;
    neg_diffs=negpeaks(2:end)-pospeaks(1:end-1);
end
I'm sure that could be coded more effectively, but I can't think just now how to write it more compactly. posloc and negloc are the location returns from findpeaks.[/edit]
For (3) to (5) it is easier to record the differences between samples: changes=[YT{2:end}]-[YT{1:end-1}];
3) To count changes, count the number of non-zeros in the difference between adjacent elements: sum(changes~=0)
4) You don't define what you mean by "significant changes", but the test is almost identical to 3) sum(abs(changes)>=3)
5) It is simply changes(changes~=0)
I would suggest diff is the command which can provide the basis of a solution to all your problems (after first converting the cell to an array with cell2mat). It outputs the difference between adjacent values along an array:
1) You'd have to define what a 'peak' is but at a guess:
YT = cell2mat(YT); % convert cell to array
change = diff(YT); % get diffs
highp = sum(change >= 3); % high peak threshold
lowp = sum(change <= -3); % low peak threshold
2) diff(cell2mat(YT)) provides this.
3)
YT = cell2mat(YT); % convert cell to array
change = diff(YT); % get diffs
count = sum(change~=0);
4) Seems to be answered in the other points?

What is the advantage of linspace over the colon ":" operator?

Is there some advantage of writing
t = linspace(0,20,21)
over
t = 0:1:20
?
I understand the former produces a vector, as the latter does.
Can anyone give me a situation where linspace is more useful than t = 0:1:20?
It's not just the usability. Though the documentation says:
The linspace function generates linearly spaced vectors. It is
similar to the colon operator :, but gives direct control over the
number of points.
it is not quite the same. The main difference, and the advantage of linspace, is that it generates a vector of integers with the desired length (or default 100) and scales it afterwards to the desired range. The colon operator : creates the vector directly, by increments.
Imagine you need to define bin edges for a histogram, and in particular you need the bin edge 0.35 to be exactly in its right place:
edges = [0.05:0.10:.55];
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 0 0 0
does not define the right bin edge, but:
edges = linspace(0.05,0.55,6); %// 6 = (0.55-0.05)/0.1+1
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 1 0 0
does.
Well, it's basically a floating point issue, which can be avoided by linspace, because a single division of an integer is not as delicate as the cumulative sum of floating point numbers. But as Mark Dickinson pointed out in the comments:
You shouldn't rely on any of the computed values being exactly what you expect. That is not what linspace is for.
In my opinion it's a matter of how likely you are to get floating point issues and how much you can reduce the probability of them, or how small you can set the tolerances. Using linspace can reduce the probability of these issues occurring, but it's not a guarantee.
That's the code of linspace:
n1 = n-1
c = (d2 - d1).*(n1-1) % opposite signs may cause overflow
if isinf(c)
    y = d1 + (d2/n1).*(0:n1) - (d1/n1).*(0:n1)
else
    y = d1 + (0:n1).*(d2 - d1)/n1
end
To sum up: linspace and colon are reliable at doing different tasks. linspace tries to ensure (as the name suggests) linear spacing, whereas colon tries to ensure symmetry.
In your special case, as you create a vector of integers, there is no advantage of linspace (apart from usability), but when it comes to floating point delicate tasks, there may be.
The answer of Sam Roberts provides some additional information and clarifies further things, including some statements of MathWorks regarding the colon operator.
linspace and the colon operator do different things.
linspace creates a vector of integers of the specified length, and then scales it down to the specified interval with a division. In this way it ensures that the output vector is as linearly spaced as possible.
The colon operator adds increments to the starting point, and subtracts decrements from the end point to reach a middle point. In this way, it ensures that the output vector is as symmetric as possible.
The two methods thus have different aims, and will often give very slightly different answers, e.g.
>> a = 0:pi/1000:10*pi;
>> b = linspace(0,10*pi,10001);
>> all(a==b)
ans =
0
>> max(a-b)
ans =
3.5527e-15
In practice, however, the differences will often have little impact unless you are interested in tiny numerical details. I find linspace more convenient when the number of gaps is easy to express, whereas I find the colon operator more convenient when the increment is easy to express.
See this MathWorks technical note for more detail on the algorithm behind the colon operator. For more detail on linspace, you can just type edit linspace to see exactly what it does.
linspace is useful where you know the number of elements you want rather than the size of the "step" between them. So if I said make a vector with 360 elements between 0 and 2*pi as a contrived example it's either going to be
linspace(0, 2*pi, 360)
or if you just had the colon operator you would have to manually calculate the step size:
0:(2*pi - 0)/(360-1):2*pi
linspace is just more convenient
For a simple real world application, see this answer where linspace is helpful in creating a custom colour map

Resources