Yup: allow decimal values greater than 0 - reactjs

I want to allow a number that is greater than 0, including decimal values.
I tried
const schema = yup.object().shape({
  amount: yup.number().min(0),
});
But it allows 0, which I don't want. So I tried
const schema = yup.object().shape({
  amount: yup.number().min(0.1),
});
But that does not allow 0.001 or 0.0002.
How can I change it to allow any decimal value greater than 0? Any help would be really appreciated.

You can use yup.number().positive() instead of min(0):
https://github.com/jquense/yup#numberpositivemessage-string--function-schema
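For example, a minimal sketch of the corrected schema (positive() rejects 0 and all negative numbers, so any decimal greater than 0 passes):
const schema = yup.object().shape({
  amount: yup.number().positive(),
});
schema.isValid({ amount: 0.0002 }).then(valid => console.log(valid)); // true
schema.isValid({ amount: 0 }).then(valid => console.log(valid)); // false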

Related

Generate all combinations of the SUM in Ruby but only using a specific amount of numbers

I am currently pulling in F1 prices from an API, placing them into an array, and determining which combinations are less than or equal to 20, using the below successfully:
require 'net/http'
require 'json'

@url = 'HIDDEN URL AS HAS NO RELEVANCE'
@uri = URI(@url)
@response = Net::HTTP.get(@uri)
@fantasy = JSON.parse(@response)
arr = [[@fantasy.first["Mercedes"].to_f, @fantasy.first["Ferrari"].to_f], [@fantasy.first["Hamilton"].to_f, @fantasy.first["Verstappen"].to_f]]
target = 20
@array = arr[0].product(*arr[1..-1]).select { |a| a.reduce(:+) <= target }
Where:
@fantasy = [{"Mercedes" => "4", "Ferrari" => "6.2", "Hamilton" => "7.1", "Verstappen" => "3"}]
This is successfully outputting:
[[4.0, 7.1], [4.0, 3.0], [6.2, 7.1], [6.2, 3.0]]
Eventually this will contain all F1 teams on the left side and all F1 drivers on the right (I am making an F1 fantasy team builder). But the idea is that only 1 constructor and 5 drivers are needed for the combination, which should be less than or equal to 20.
Is there a way to define this? To only use 1 team (Mercedes, Ferrari, etc.) and 5 drivers (Hamilton, Verstappen, etc.) in the calculation? Obviously I do not have 5 drivers included yet, as I am just testing. So my output would be:
[[4.0, 7.1, 3.0], [6.2, 7.1, 3.0]]
Where the constructor forms the 'base' for the calculation and it can then have any 5 of the drivers?
My final question is: considering what I am trying to do, is this the best way to put my API data into an array? That is, manually placing @fantasy.first["Mercedes"].to_f inside my array brackets?
Thanks!
Not sure if I understand the question, but does this help?
arr = @fantasy.first.values.map(&:to_f)
target = 20
p result = arr.combination(2).select { |combi| combi.sum <= target }
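If the requirement is exactly 1 team as the base plus a fixed number of drivers, the idea in Ruby would be roughly teams.product(drivers.combination(5).to_a): take the Cartesian product of the teams with every driver combination. A hedged sketch of that idea in TypeScript (prices taken from the question; 2 drivers used since only 2 are present in the test data):
// One constructor forms the base; pair it with every k-driver
// combination and keep the lineups that fit the budget.
const teams = [4.0, 6.2]    // constructor prices from the question
const drivers = [7.1, 3.0]  // driver prices; k = 2 here, 5 eventually
const target = 20

// all k-element combinations of an array
function combinations<T>(arr: T[], k: number): T[][] {
  if (k === 0) return [[]]
  return arr.flatMap((v, i) =>
    combinations(arr.slice(i + 1), k - 1).map(rest => [v, ...rest]))
}

const lineups = teams.flatMap(team =>
  combinations(drivers, 2).map(combo => [team, ...combo]))
const affordable = lineups.filter(l => l.reduce((a, b) => a + b, 0) <= target)
console.log(affordable) // [[4, 7.1, 3], [6.2, 7.1, 3]]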

When creating a tensor with an array of timestamps, the numbers are incorrect

Looking for some kind of solution to this issue:
I am trying to create a tensor from an array of timestamps
[
  1612892067115,
],
but here is what happens:
tf.tensor([1612892067115]).arraySync()
> [ 1612892078080 ]
As you can see, the result is incorrect.
Somebody pointed out that I may need to use the datatype int64, but this doesn't seem to exist in tfjs 😭
I have also tried to divide my timestamp down to a small float, but I get a similar result:
tf.tensor([1.612892067115, 1.612892068341]).arraySync()
[ 1.6128920316696167, 1.6128920316696167 ]
If you know a way to work around using timestamps in a tensor, please help :)
Edit:
As an attempted workaround, I tried to remove the year, month, and date from my timestamp.
Here are my subsequent input values:
[
56969701,
56969685,
56969669,
56969646,
56969607,
56969602
]
and their outputs:
[
56969700,
56969684,
56969668,
56969648,
56969608,
56969600
]
As you can see, they are still incorrect, and they should be well within the acceptable range.
I found a solution that worked for me:
Since I only require a subset of the timestamp (just the date / hour / minute / second / ms) for my purposes, I simply truncate out the year / month:
export const subts = (ts: number) => {
  // a sub-timestamp which can be used over the period of a month
  const yearMonth = +new Date(new Date().getFullYear(), new Date().getMonth())
  return ts - yearMonth
}
Then I can use this with:
const subTimestamps = timestamps.map(ts => subts(ts))
const x_vals = tf.tensor(subTimestamps, [subTimestamps.length], 'int32')
Now all my results work as expected.
Currently only int32 is supported by tensorflow.js (there is no int64), and your data has gone out of the range that an int32 can represent.
Until int64 is supported, this can be solved by using a relative timestamp. A timestamp in js is the number of ms that have elapsed since 1 January 1970. A relative timestamp can be computed against another origin: take the difference in ms elapsed since that date. That way, we get a smaller number that can be represented using int32. The best origin to take is the starting date of the records.
const a = Date.now() // out of int32 range, so a tensor computed directly from it would be inaccurate
const origin = new Date("02/01/2021").getTime()
const relative = a - origin
const tensor = tf.tensor(relative, undefined, 'int32')
// get back the data
const data = tensor.dataSync()[0]
// recover the initial date
const initialDate = new Date(data + origin)
In other scenarios, if ms precision is not of interest, using the number of seconds that have elapsed since 1 January 1970 is better; this is called Unix time.
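For instance, a minimal sketch (Unix time in seconds is currently around 1.6e9, which still fits in an int32, so no precision is lost):
const seconds = Math.floor(Date.now() / 1000)
const t = tf.tensor([seconds], [1], 'int32')
console.log(t.arraySync()) // the exact seconds value, no rounding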

How to implement input text masking in React-Native?

The mask needed is 90.99%, where:
9 - optional digit
0 - required digit
%,. - the literal characters '%' and '.'
For example:
Input / Result
1 ---> 1%
12 ---> 12%
12.1 ---> 12.1%
12.12 ---> 12.12%
I'm using redux-form.
I've tried react-native-text-input-mask and react-native-masked-text already; however, neither package has this functionality (the first one has something similar, but '%' is displayed correctly only when it is used before the number, and here the character should come after it).
The best way here is to apply the masking next to the input itself.
It highly depends on how you use the Field component (do you even use it?).
You can try to use the format prop on the Field.
Or you can provide your own component to render the field and provide your own format functionality:
const renderPercentagedInput = (field) => {
  function onChange(text) {
    // onChangeText passes the raw text directly (not an event);
    // keep only digits and separators, then re-append the '%'
    const numbers = text.replace(/[^0-9.,]/g, '')
    field.input.onChange(numbers + '%')
  }
  return (
    <TextInput
      {...field.input}
      onChangeText={onChange}
    />
  );
}
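The renderer can then be wired into redux-form in the usual way; a minimal sketch (the field name "amount" is just an assumption for illustration):
<Field
  name="amount"
  component={renderPercentagedInput}
/>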

Tensorflow Probability Logistic Regression Example

I feel I must be missing something obvious while struggling to get a positive control for logistic regression going in TensorFlow Probability.
I've modified the logistic regression example here, and created positive-control features and labels data. I struggle to achieve accuracy over 60%; however, this is an easy problem for a 'vanilla' Keras model (accuracy 100%). What am I missing? I tried different layers, activations, etc. With this method of setting up the model, is posterior updating actually being performed? Do I need to specify an interceptor object? Many thanks.
### Added positive control
nSamples = 80
features1 = np.float32(np.hstack((np.reshape(np.ones(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features2 = np.float32(np.hstack((np.reshape(np.zeros(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features = np.vstack((features1, features2))
labels = np.concatenate((np.zeros(40), np.ones(40)))
featuresInt, labelsInt = build_input_pipeline(features, labels, 10)
###

#w_true, b_true, features, labels = toy_logistic_data(FLAGS.num_examples, 2)
#featuresInt, labelsInt = build_input_pipeline(features, labels, FLAGS.batch_size)

with tf.name_scope("logistic_regression", values=[featuresInt]):
    layer = tfp.layers.DenseFlipout(
        units=1,
        activation=None,
        kernel_posterior_fn=tfp.layers.default_mean_field_normal_fn(),
        bias_posterior_fn=tfp.layers.default_mean_field_normal_fn())
    logits = layer(featuresInt)
    labels_distribution = tfd.Bernoulli(logits=logits)

neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(labelsInt))
kl = sum(layer.losses)
elbo_loss = neg_log_likelihood + kl

predictions = tf.cast(logits > 0, dtype=tf.int32)
accuracy, accuracy_update_op = tf.metrics.accuracy(
    labels=labelsInt, predictions=predictions)

with tf.name_scope("train"):
    optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate)
    train_op = optimizer.minimize(elbo_loss)

init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())

with tf.Session() as sess:
    sess.run(init_op)
    # Fit the model to data.
    for step in range(FLAGS.max_steps):
        _ = sess.run([train_op, accuracy_update_op])
        if step % 100 == 0:
            loss_value, accuracy_value = sess.run([elbo_loss, accuracy])
            print("Step: {:>3d} Loss: {:.3f} Accuracy: {:.3f}".format(
                step, loss_value, accuracy_value))

### Check with basic Keras
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1)])
optimizer = tf.train.AdamOptimizer(5e-2)
kerasModel.compile(optimizer=optimizer, loss='binary_crossentropy',
                   metrics=['accuracy'])
kerasModel.fit(features, labels, epochs=50)  # 100% accuracy
Compared to the GitHub example, you forgot to divide by the number of examples when defining the KL divergence:
kl = sum(layer.losses) / FLAGS.num_examples
The negative log-likelihood is a per-example mean, so the KL term must be scaled down to a per-example contribution as well; otherwise it dominates the ELBO. When I make this change in your code, I quickly get to an accuracy of 99.9% on your toy data.
Additionally, the output layer of your Keras model actually expects a sigmoid activation for this problem (binary classification):
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid')])
It's a toy problem, but you will notice that the model gets to 100% accuracy faster with a sigmoid activation.

How to validate user input of currency

So I am currently developing a website that supports many languages. I have an input box where users can enter an amount of currency, and I need a function to validate whether that input is legit or not.
However, different countries use different number formats.
For example: England uses '.' for the decimal point and ',' as the thousands separator,
whereas Germany uses ',' for the decimal and '.' as the thousands separator,
and French uses ',' for the decimal and a space as the thousands separator...
And Chinese/Japanese may not even use the digits 1-9 to write numbers.
I could make a very big if-else function that does the validation based on the language in use, something like this:
number = userinput()
if "de":
    return deValidator(number)
if "fr":
    return frValidator(number)
if "en":
    return enValidator(number)
if "zh":
    return zhValidator(number)
However, is there a wiser way to do it? What I am looking for is an already-built validator/library, or an easier approach to this problem that does not require writing a different validator for every language.
You can leverage the toLocaleString() method to help build a validator; the toLocaleString() method returns a string with a language-sensitive representation of the number.
const number = 123456.789;
// German uses comma as decimal separator and period for thousands
console.log(number.toLocaleString('de-DE'));
// → 123.456,789
// Arabic in most Arabic speaking countries uses Eastern Arabic digits
console.log(number.toLocaleString('ar-EG'));
// → ١٢٣٤٥٦٫٧٨٩
// India uses thousands/lakh/crore separators
console.log(number.toLocaleString('en-IN'));
// → 1,23,456.789
// the nu extension key requests a numbering system, e.g. Chinese decimal
console.log(number.toLocaleString('zh-Hans-CN-u-nu-hanidec'));
// → 一二三,四五六.七八九
// when requesting a language that may not be supported, such as
// Balinese, include a fallback language, in this case Indonesian
console.log(number.toLocaleString(['ban', 'id']));
// → 123.456,789
With this method, you can also format numbers with currency information:
const number = 10000000;
number.toLocaleString('it-IT', {style: 'currency', currency: 'EUR'})
// → 10.000.000,00 €
number.toLocaleString('it-IT', {style: 'currency', currency: 'USD'})
// → 10.000.000,00 US$
number.toLocaleString('en-US', {style: 'currency', currency: 'EUR'})
// → €10,000,000.00
number.toLocaleString('en-US', {style: 'currency', currency: 'USD'})
// → $10,000,000.00
For more details, see toLocaleString on MDN: https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Number/toLocaleString
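Going one step further, Intl.NumberFormat (the machinery behind toLocaleString) can report which separators a locale uses, which allows one generic validator instead of one per language. A minimal sketch (the helper name parseLocaleNumber is my own, and non-Latin digit systems such as Eastern Arabic digits would still need extra handling):
const parseLocaleNumber = (input: string, locale: string) => {
  // format a known number to learn which separators this locale uses
  const parts = new Intl.NumberFormat(locale).formatToParts(12345.6)
  const group = parts.find(p => p.type === 'group')?.value ?? ','
  const decimal = parts.find(p => p.type === 'decimal')?.value ?? '.'
  // strip group separators and normalise the decimal separator to '.'
  const normalised = input.split(group).join('').split(decimal).join('.')
  const value = Number(normalised)
  return Number.isNaN(value) ? null : value // null means the input is not legit for this locale
}
parseLocaleNumber('123.456,789', 'de-DE') // 123456.789
parseLocaleNumber('123,456.789', 'en-GB') // 123456.789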
