Get the actual ALSA period length used

One of the hardware parameters that can be configured for ALSA is the period: the interval between interrupts. You can indicate the range of values you want to use with the snd_pcm_hw_params_set_period_time family of functions.
But how do you get the actual value it selected? ALSA has a snd_pcm_hw_params_get_period_time function, but that does not seem to tell you the actual value; rather, it tells you whether the value is in a particular range.

The snd_pcm_hw_params_set_period_time() function selects a single value; if it succeeds, you know the value.
If you have set an interval with the _min/_max functions, or did not restrict the period size at all, the actual period size is chosen when you call snd_pcm_hw_params(). You can then read the period length from the hw_params object.
The dir parameter indicates whether the actual value is less than, equal to, or greater than the returned integer; it does not define an interval.
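A minimal C sketch of that flow (hedged: the device name, the requested range, and the omitted error handling are all placeholders, and the value actually chosen depends on your hardware):

```c
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *params;
    unsigned int period_us;
    int dir = 0;

    snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_hw_params_malloc(&params);
    snd_pcm_hw_params_any(pcm, params);

    /* Restrict the period time to a range instead of a single value. */
    unsigned int min_us = 10000, max_us = 100000;
    snd_pcm_hw_params_set_period_time_min(pcm, params, &min_us, &dir);
    snd_pcm_hw_params_set_period_time_max(pcm, params, &max_us, &dir);

    /* The actual period is chosen here ... */
    snd_pcm_hw_params(pcm, params);

    /* ... and can now be read back from the hw_params object. */
    snd_pcm_hw_params_get_period_time(params, &period_us, &dir);
    printf("period: %u us (dir=%d)\n", period_us, dir);

    snd_pcm_hw_params_free(params);
    snd_pcm_close(pcm);
    return 0;
}
```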


Is it possible to set a random number generator seed to get reproducible training?

I would like to re-run training with fewer epochs to stop with the same state it had at that point in the earlier training.
I see that tf.initializers take a seed argument. tf.layers.dropout does as well but 1.2.7 reports "Error: Non-default seed is not implemented in Dropout layer yet: 1". But even without dropout are there other sources of randomness? And can those be provided with a seed?
You can get reproducible training by setting the initial weight values; by default these are randomly generated at the beginning of training.
To set the value of the weights, the kernelInitializer property of the layer's configuration object can be used.
Another way to set the weights is to call setWeights on the model, passing the weight values as arguments.
Also, the shuffle property in the model.fit parameters is set to true by default. It has to be set to false to prevent the training data from being shuffled at each epoch.

Need help finding a logical solution to a problem

Given a variable 'points' which increases every time the 'player' collects a point, how do I logically reward the user for collecting 30 points within a 5-minute limit? There's no countdown timer.
E.g. the player may have 4 points now, but if within 5 minutes he reaches 34 points, that also counts.
I was thinking about using timestamps but I don't really know how to do that.
What you are talking about is a "sliding window". Your window is time based. Record each point's timestamp and slide your window over these timestamps. You will need to pick a time increment to slide your window.
Upon each "slide", count your points. When you get the amount you need, "reward your user". The "upon each slide" means you need some sort of timer that calls a function each time to evaluate the result and do what you want.
For example, set a window of 5 minutes and a slide of 1 second. Don't keep a single variable called points. Instead, simply create an array of timestamps. Every timer tick (of 1 second in this case), count the number of timestamps that fall between t - 5 minutes and the current time t; if there are 30 or more, you've met your threshold and can reward your super-fast user. If you need the actual value, which may be 34, well, you've just computed it, so you can use it.
There may be ways to optimize this. I've provided the naive approach. Timestamps that have gone out of range can be deleted to save space.
If there are "points going into the window" that count, then just add them to the sum.

Is GetDate() deterministic?

This is a sort of philosophical question. In an interview I attended, GetDate was given as an example of a non-deterministic function. I can see why that argument holds water, but it seems a specious argument to me.
To elaborate:
For a given instance in time (within a 100 microsecond band) getdate will return a specific value.
For two computers running to the same clock time (synchronised by a sufficiently accurate clock) they will both return the same value for getdate.
So that is deterministic.
It can be argued that getdate returns different values at different times and so it can not be described as deterministic.
But a sql query "get x from y where primary key equals z" will return the same value for x where z is the same value. So if the clock is fixed to a certain value then we shall always receive the same value for getdate.
In other words the value of getdate is determined by an external parameter, in exactly the same way as a SQL query that uses a where clause is controlled by that where clause parameter.
So why should we say that getdate is non-deterministic, whereas any other variable parameter in a select query which produces a result is described as deterministic?
And just to extend the question; if the data changes then we receive different values to the select query, which we then explain do not affect the deterministicity (to coin a word) as the values have changed in time, just as getdate has.
To expand (as an edit): I could use xp_cmdshell to set a particular date and then immediately run GetDate(); ignoring the vagaries of the speed of the system etc., I would then always get the same answer. This effectively negates the argument that the system date-time is not an input, as I have modified it via SQL and thus kept the whole process within a SQL-controlled loop.
For two computers running to the same clock time (synchronised by a sufficiently accurate clock) they will both return the same value for getdate.
So that is deterministic.
No, it's not - deterministic means that the function returns the same value given the same inputs. In this case you have no inputs, but you get different values all the time! The system clock is not an input, it is external state that the function relies upon.
Any query that relies on table data is non-deterministic because it relies on external state. Examples of deterministic functions are those that do NOT rely on external state, but rely solely on inputs to the function: FLOOR, DATEADD, etc.
if the data changes then we receive different values to the select query, which we then explain do not affect the deterministicity (to coin a word) as the values have changed in time, just as getdate has.
Actually, that proves that the query is NOT deterministic - if a change in external state changes the output of the query.
In my experience it is deterministic within a single query. If you run
SELECT GetDate() FROM TableX
where TableX has 1 million rows, you would expect the same value returned for all rows, since the evaluation of GetDate does not depend on any value in any of the rows.

Matlab: Return input value with the highest output from a custom function

I have a vector of numbers like this:
myVec= [ 1 2 3 4 5 6 7 8 ...]
and I have a custom function which takes the input of one number, performs an algorithm and returns another number.
cust(1)= 55, cust(2)= 497, cust(3)= 14, etc.
I want to be able to return the number in the first vector which yielded the highest outcome.
My current thought is to generate a second vector, outcomeVec, which contains the output from the custom function, and then find the index of that vector that has max(outcomeVec), then match that index to myVec. I am wondering, is there a more efficient way of doing this?
What you described is a good way to do it.
outcomeVec = myfunc(myVec);
[~,ndx] = max(outcomeVec);
myVec(ndx) % input that produces max output
Another option is to do it with a loop. This saves a little memory, but may be slower.
maxOutputValue = -Inf;
maxOutputNdx = NaN;
for ndx = 1:length(myVec)
    output = myfunc(myVec(ndx));
    if output > maxOutputValue
        maxOutputValue = output;
        maxOutputNdx = ndx;
    end
end
myVec(maxOutputNdx) % input that produces max output
Those are pretty much your only options.
You could make it fancy by writing a general purpose function that takes in a function handle and an input array. That method would implement one of the techniques above and return the input value that produces the largest output.
Depending on the size of the range of discrete numbers you are searching over, you may find a solution with a golden section algorithm works more efficiently. I tried for instance to minimize the following:
bf = -21;
f = @(x) round(x - bf).^2;
within the range [-100 100] with a routine based on a script from the Mathworks file exchange. This specific file exchange script does not appear to implement the golden section correctly as it makes two function calls per iteration. After fixing this the number of calls required is reduced to 12, which certainly beats evaluating the function 200 times prior to a "dumb" call to min. The gains can quickly become dramatic. For instance, if the search region is [-100000 100000], golden finds the minimum in 25 function calls as opposed to 200000 - the dependence of the number of calls in golden section on the range is logarithmic, not linear.
So if the range is sufficiently large, other methods can definitely beat min by requiring fewer function calls. Minimization search routines sometimes incorporate such a search in their early steps. However, you will have a problem with convergence (termination) criteria, which you will have to modify so that the routine knows when to stop. The best option is probably to narrow the search region for the application of min by starting out with a few iterations of golden section.
An important caveat is that golden section is guaranteed to work only with unimodal regions, that is, displaying a single minimum. In a region containing multiple minima it's likely to get stuck in one and may miss the global minimum. In that sense min is a sure bet.
Note also that the function in the example here rounds input x, whereas your function takes an integer input. This means you would have to place a wrapper around your function which rounds the input passed by the calling golden routine.
Others appear to have used genetic algorithms to perform such a search, although I did not research this.

Can CUDA do argmax?

Question says it all;
Assuming each thread is doing something like
value=blockDim.x*blockIdx.x+threadIdx.x;
result=f(value);
where f is a device function, it's easy enough to find the max result by adding an atomicMax() call, but how could you find out what the value was?
Does this make sense? Just add an if statement comparing the max result to the thread's result. If it matches, save the thread's value.
value=blockDim.x*blockIdx.x+threadIdx.x;
result=f(value);
atomicMax(max,result);
if (result == *max)
    max_value = value;
Or, perhaps you need to specify behavior if multiple threads have the max result... for example taking the lowest thread:
value=blockDim.x*blockIdx.x+threadIdx.x;
result=f(value);
atomicMax(max,result);
if (result == *max)
    atomicMin(max_value, value);
That said, if you are finding the max result out of every thread, you will want to use a reduction instead of atomicMax. If I understand correctly, the atomicMax function is basically going to execute serially, whereas a reduction will be largely in parallel. When you use a reduction, you can manually track the value along with the result - that's what I do. (Although perhaps the above if statement approach will work at the end of the reduction, too. I may have to try it in my code...)
