Am I using 'ontimer' incorrectly in Vizard?

I am very new to programming in Vizard, but I am a fairly strong JavaScript programmer. I have an art gallery scene and I want a man to walk from picture to picture, waiting for a few seconds at each one.
So I have a number of walking sequences, and I'm trying to use the 'ontimer' function to call the next walk sequence and also add a few seconds of delay.
It works perfectly the first time it is called, in dostuff(), but doesn't work at all in dostuff2(). I assume I am using 'ontimer' incorrectly; could anyone explain where I am going wrong?
Any help or advice would be hugely appreciated!
walkOne = vizact.walkto(4, -0.5, 4)
turnOne = vizact.turn(60)
walking_sequence = vizact.sequence([walkOne, turnOne])

walkTwo = vizact.walkto(5.350, -0.5, -2)
turnTwo = vizact.turn(60)
walking_sequenceTwo = vizact.sequence([walkTwo, turnTwo])

def dostuff():
    male.addAction(walking_sequence)
    vizact.ontimer(10, dostuff2)

def dostuff2():
    male.addAction(walking_sequenceTwo)
    print(vizact.ontimer)
    vizact.ontimer(20, dostuff)

Cracked it! I got rid of the ontimer calls completely and used vizact.waittime inside the sequences instead; it seems to work fine.
walkOne = vizact.walkto(4, -0.5, 4)
turnOne = vizact.turn(60)
walking_sequence = vizact.sequence(walkOne, turnOne, vizact.waittime(10))

walkTwo = vizact.walkto(5.350, -0.5, -2)
turnTwo = vizact.turn(60)
walking_sequenceTwo = vizact.sequence(walkTwo, turnTwo, vizact.waittime(10))

def dostuff():
    male.addAction(walking_sequence)
    dostuff2()

def dostuff2():
    male.addAction(walking_sequenceTwo)
    dostuff3()
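For reference, since vizact.waittime is itself an action, the whole tour can also be built as one nested sequence and added to the avatar once, with no timers or manual function chaining. A minimal sketch using only the calls already shown above (the coordinates are the ones from the snippets):

# Each leg: walk to the picture, turn to face it, pause for 10 seconds.
legOne = vizact.sequence(vizact.walkto(4, -0.5, 4), vizact.turn(60), vizact.waittime(10))
legTwo = vizact.sequence(vizact.walkto(5.350, -0.5, -2), vizact.turn(60), vizact.waittime(10))

# Chain the legs into a single action and add it to the avatar once.
full_tour = vizact.sequence(legOne, legTwo)
male.addAction(full_tour)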

Related

The gof function from package btergm gives AUC value of a precision-recall greater than 1

I was trying to do out-of-sample prediction using the gof function from the btergm package. When calculating the AUC of the precision-recall curve on the testing set, I get a result of 1.012909, which seems theoretically impossible.
How can I interpret this result, or am I doing something wrong? Thank you! Here is my code:
network <- readRDS(url("https://www.dropbox.com/s/zxhqxa8h9awkzpd/network.rds?dl=1"))

model.3 <- btergm(network[1:9] ~ edges + gwodegree(1, fixed = TRUE) + transitiveties + ctriple +
                    gwidegree(1, fixed = TRUE) + mutual + gwesp(1.5, fixed = TRUE) + ttriple +
                    memory(type = "stability") + delrecip, R = 1000)

gof.3 <- gof(model.3, nsim = 1000, target = network[[10]],
             formula = network[9:10] ~ edges + gwodegree(1, fixed = TRUE) +
               transitiveties + ctriple + gwidegree(1, fixed = TRUE) + mutual +
               gwesp(1.5, fixed = TRUE) + ttriple +
               memory(type = "stability") + delrecip,
             coef = coef(model.3),
             statistics = rocpr)

gof.3[[1]]$auc.pr

What strategy can I use to OCR Magic the Gathering corner text?

I need to recognize the text in the bottom left corner of Magic: The Gathering paper cards (the latest card design). Here is an example:
If the text looks like this,
I want to retrieve the following text:
198/280 U
M20 EN
(I don't need the card artist's name - Lake Hurwitz in this example.)
What OCR library can I use? I've tried Tesseract without any tuning, but the results are not correct. Any advice, or a link to a project that already does this?
You can do it with Tesseract (3.04.01) by sanitizing your image a bit, as in the code below:
import numpy as np
import cv2

def prepro(zone, prefix):
    filename = 'stackmagic.png'
    oriimg = cv2.imread(filename)

    # keep the interesting part
    (a, b, c, d) = zone
    text_zone = oriimg[a:b, c:d]
    height, width, depth = text_zone.shape

    # resize it to be bigger (so less pixelized)
    H = 50
    imgScale = H / height
    newX, newY = text_zone.shape[1] * imgScale, text_zone.shape[0] * imgScale
    newimg = cv2.resize(text_zone, (int(newX), int(newY)))

    # binarize it
    gray = cv2.cvtColor(newimg, cv2.COLOR_BGR2GRAY)
    th, img = cv2.threshold(gray, 130, 255, cv2.THRESH_BINARY)

    # erode it
    kernel = np.ones((1, 1), np.uint8)
    erosion = cv2.erode(img, kernel, iterations=1)

    cv2.imwrite(prefix + '_ero.png', erosion)
    cv2.imshow("Show by CV2", erosion)
    cv2.waitKey(0)

prepro((16, 27, 6, 130), 'upzone')
prepro((27, 36, 6, 130), 'downzone')
Starting from your cropped image, the preprocessing gives you an upper strip and a lower strip, and Tesseract does seem to be able to extract:
xx$ tesseract upzone_ero.png stdout
198/ 280 U
xx$ tesseract downzone_ero.png stdout
M20 ~ EN Duluu Hun-nu
Notice that we fail to extract the artist name (Lake Hurwitz) cleanly, but hopefully you were not interested in that :)
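If you would rather drive Tesseract from Python instead of the shell, a minimal sketch using pytesseract (an assumption here, it is not part of the original setup) could read the preprocessed strips back in:

import pytesseract
from PIL import Image

# Run OCR on the two eroded strips written out by prepro();
# --psm 7 tells Tesseract to treat each image as a single line of text.
for name in ('upzone_ero.png', 'downzone_ero.png'):
    text = pytesseract.image_to_string(Image.open(name), config='--psm 7')
    print(name, '->', text.strip())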
There are other tools, but recommending them would be advertising and subjective.

Tensorflow Probability Logistic Regression Example

I feel I must be missing something obvious in my struggle to get a positive control for logistic regression working in TensorFlow Probability.
I've modified the example for logistic regression here and created positive-control features and labels. I struggle to achieve accuracy over 60%, yet this is an easy problem for a 'vanilla' Keras model (100% accuracy). What am I missing? I tried different layers, activations, etc. With this way of setting up the model, is posterior updating actually being performed? Do I need to specify an interceptor object? Many thanks.
### Added positive control
nSamples = 80
features1 = np.float32(np.hstack((np.reshape(np.ones(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features2 = np.float32(np.hstack((np.reshape(np.zeros(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features = np.vstack((features1, features2))
labels = np.concatenate((np.zeros(40), np.ones(40)))
featuresInt, labelsInt = build_input_pipeline(features, labels, 10)
###

#w_true, b_true, features, labels = toy_logistic_data(FLAGS.num_examples, 2)
#featuresInt, labelsInt = build_input_pipeline(features, labels, FLAGS.batch_size)

with tf.name_scope("logistic_regression", values=[featuresInt]):
    layer = tfp.layers.DenseFlipout(
        units=1,
        activation=None,
        kernel_posterior_fn=tfp.layers.default_mean_field_normal_fn(),
        bias_posterior_fn=tfp.layers.default_mean_field_normal_fn())
    logits = layer(featuresInt)
    labels_distribution = tfd.Bernoulli(logits=logits)

neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(labelsInt))
kl = sum(layer.losses)
elbo_loss = neg_log_likelihood + kl

predictions = tf.cast(logits > 0, dtype=tf.int32)
accuracy, accuracy_update_op = tf.metrics.accuracy(
    labels=labelsInt, predictions=predictions)

with tf.name_scope("train"):
    optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate)
    train_op = optimizer.minimize(elbo_loss)

init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())

with tf.Session() as sess:
    sess.run(init_op)

    # Fit the model to data.
    for step in range(FLAGS.max_steps):
        _ = sess.run([train_op, accuracy_update_op])
        if step % 100 == 0:
            loss_value, accuracy_value = sess.run([elbo_loss, accuracy])
            print("Step: {:>3d} Loss: {:.3f} Accuracy: {:.3f}".format(
                step, loss_value, accuracy_value))

### Check with basic Keras
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1)])
optimizer = tf.train.AdamOptimizer(5e-2)
kerasModel.compile(optimizer=optimizer, loss='binary_crossentropy',
                   metrics=['accuracy'])
kerasModel.fit(features, labels, epochs=50)  # 100% accuracy
Compared to the GitHub example, you forgot to divide by the number of examples when defining the KL divergence:
kl = sum(layer.losses) / FLAGS.num_examples
When I make this change to your code, I quickly get to an accuracy of 99.9% on your toy data.
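For clarity, the corrected loss construction, using the names already defined in the question, would then read:

# Scale the summed KL term by the dataset size so it is comparable
# to the per-example negative log-likelihood.
kl = sum(layer.losses) / FLAGS.num_examples
elbo_loss = neg_log_likelihood + kl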
Additionally, the output layer of your Keras model should use a sigmoid activation for this problem (binary classification), since binary_crossentropy expects probabilities rather than logits:
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid')])
It's a toy problem, but you will notice that the model reaches 100% accuracy faster with the sigmoid activation.

Conversion of MetaTrader4 to NinjaTrader

I am trying to port an indicator originally written for MT4 to NT7.
I have the following calculations in MT4:
dayi = iBarShift(Symbol(), myPeriod, Time[i], false);
Q = (iHigh(Symbol(), myPeriod,dayi+1) - iLow(Symbol(),myPeriod,dayi+1));
L = iLow(NULL,myPeriod,dayi+1);
H = iHigh(NULL,myPeriod,dayi+1);
O = iOpen(NULL,myPeriod,dayi+1);
C = iClose(NULL,myPeriod,dayi+1);
myPeriod is a variable holding the period in minutes (1440 = 1 day).
What are the equivalent functions in NT7 to iBarShift, iHigh and so on?
Thanks in advance
For NinjaTrader:
iLow = Low or Lows for multi-time frame
iHigh = High or Highs
iOpen = Open or Opens
iClose = Close or Closes
So an example would be
double low = Low[0]; // Gets the low of the bar at index 0, or the last fully formed bar (If CalculateOnBarClose = true)
In order to make sure you are working on the 1440 minute time frame, you will need to add the following in the Initialize() method:
Add(PeriodType.Minute, 1440);
If there are no Add statements prior to this one, it will place it at index 1 (0 being the chart's default index) in a two-dimensional array. So accessing the low of the 1440-minute bar at index 0 would be:
double low = Lows[1][0];
For iBarShift look at
int barIndex = Bars.GetBar(time);
which will give you the index of the bar with the matching time. If you need to use this function on the 1440 bars (or other ones), use the BarsArray property to access the correct Bar object and then use the GetBar method on it. For example:
int barIndex = BarsArray[1].GetBar(time);
Hope that helps.

Can't edit root_blip of fetched wavelet

When I edit the root_blip of the wavelet, everything works fine, but if I fetch the wavelet, nothing happens either in Google Wave or in the logs (no errors occur), although "wave_list.reply(text)" works. I have already called myRobot.setup_oauth().
def OnWaveletSelfAdded(event, wavelet):
    text = "123"
    wave_list = myRobot.fetch_wavelet(wave_id="googlewave.com!w+O5yFQIteC",
                                      wavelet_id="googlewave.com!conv+root")
    wave_list.submit_with(wavelet)
    root_blip = wave_list.root_blip
    root_blip.all().delete()
    root_blip.append("WaveList\n" + text)
    logging.info("root_blip.wave_id: %s" % root_blip.wave_id)
What am I doing wrong? I've also tried myRobot.submit(wave_list), with no results:
wave_list = myRobot.fetch_wavelet(wave_id = wave_id, wavelet_id="googlewave.com!conv+root")
root_blip = wave_list.root_blip
root_blip.all().delete()
root_blip.append("WaveList:\n")
myRobot.submit(wave_list)
solved...
