Output from scalatest is not synchronized with output from the tests

I have a suite of scalatest tests that output information to the console using println as they run.
When I run the suite using the Eclipse Scala plug-in (using Run As ... / 3 ScalaTest - File from the context menu) there is additional output to the console about which tests pass and which fail. I guess this output is from the runner.
The problem is that the lines from my code and the lines from the runner are not interleaved sensibly. It's as if they are being printed from two different threads that aren't synchronized.
For example, here is the output from a run:
>>>>>>>>>>>>>Starting The parser should warn when the interface name at the end does not match >>>>>>>>>>>>>>>>>>
(interface Fred
interface Bob)
-----------------------------
File: <unknown> line: 2 column: 11 Name does not match.
----The AST after parsing-------------
[ IntfDeclNd( (), (), () ) ]
---------------------------------------
<<<<<<<<<<<<<Finished The parser should warn when the interface name at the end does not match <<<<<<<<<<<<<<<<<
>>>>>>>>>>>>>Starting The parser should parse a class with generic args >>>>>>>>>>>>>>>>>>
(class Fred{type a, type b extends B}() class)
- should parse multiline comment at end of file *** FAILED ***
Expected 0, but got 1 (TestsBase.scala:103)
- should fail on incomplete multiline comment
- should parse single line comments
- should allow a class name to be repeated at the end
- should warn when the class name at the end does not match
- should allow an interface name to be repeated at the end
- should warn when the interface name at the end does not match
----The AST after parsing-------------
The lines starting with "- should" or "Expected" come from the runner, and you can see that a bunch of them land in the middle of the output from one of my tests. Other output from the runner appears elsewhere; this isn't all of it.
My questions: Why is this happening? Is there some way to get the runner's output to coordinate with my output?

Most likely, the suites are running in parallel.
http://www.scalatest.org/user_guide/using_the_runner#executingSuitesInParallel
With the proliferation of multi-core architectures, and the often
parallelizable nature of tests, it is useful to be able to run tests
in parallel. [...]
The -P option may optionally be appended with a number (e.g. "-P10" --
no intervening space) to specify the number of threads to be created
in the thread pool. If no number (or 0) is specified, the number of
threads will be decided based on the number of processors available.
So basically, pass -P1 to the runner. In Eclipse, the place to add it would likely be the Arguments tab of the run configuration.
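For example, if you invoke the ScalaTest Runner yourself from a shell, a single-threaded run might look like this (a sketch; the jar name and runpath are placeholders for your own setup, -o selects the standard-output reporter):
$ scala -classpath scalatest-<version>.jar org.scalatest.tools.Runner -R target/test-classes -o -P1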

Related

What might be the reason I get a "Test ignored." message when running any test class in Apex?

Tests ignored: 14, passed: 0
Whenever I run any test class I get this type of message:
Test ignored.
Test method AccountAddressHelperTest.testInvalidBillingCountry was
never reported as completed. Trigger.AllOppLineItemTriggers: line 188,
column 31: Method does not exist or incorrect signature: void
updateMISROnOpportunity(Map<Id,OpportunityLineItem>,
Map<Id,OpportunityLineItem>) from the type OppLineItemHelper
You have compilation failures. What happens if you hit Setup -> Classes -> Compile all?
You or a colleague edited something, probably in OppLineItemHelper or in some other file it reuses, and now not everything compiles; dependencies aren't met. So the best thing Salesforce can do is skip these tests.
You can edit like that in sandboxes; Salesforce will not always block you. But compile errors will prevent deployment to production even before any tests are run.
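Judging by the compile error, the trigger expects a method with the signature below. A minimal sketch of what OppLineItemHelper would need for everything to compile again (the static modifier, parameter names, and empty body are assumptions; restore the original logic):
public class OppLineItemHelper {
    // Signature copied from the compile error; the body is only a placeholder
    public static void updateMISROnOpportunity(
            Map<Id, OpportunityLineItem> newItems,
            Map<Id, OpportunityLineItem> oldItems) {
        // ... the original update logic goes here ...
    }
}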

How to change the GTSon configuration (i.e. have individual nodes make different GTS requests) at the command line in the Castalia 3.3 simulator?

I'm executing the BANtest experiment provided with the simulation examples in the Castalia 3.3 simulator. I'm exploring GTS in the Contention Free Period (CFP) under the beacon-enabled mode of the IEEE 802.15.4 MAC. I want to change the configuration (i.e. GTSon) so that each individual node requests different GTS slots at the command line. What do I need to change in the configuration file?
I read and understood the procedure for changing a configuration at the command line from section 3.5.3 of the Castalia user's manual. Currently, I'm able to change an "equal" GTS request made by all nodes in the GTSon configuration at the command line, but I want to get a different GTS slot request from each individual node.
Case 1: code for an equal GTS request from all nodes
# Define an equal GTS request for all nodes in omnetpp.ini
[Config GTSon]
SN.node[*].Communication.MAC.requestGTS = ${GTS=1,2}
# Execute the BANtest example - take the GTS request from the config file
$ Castalia -c ZigBeeMAC,[GTSon]
# Changing the configuration (i.e. GTSon) from the command line - runs successfully
$ Castalia -c GTSon=\$\{GTS=0,3\}
Case 2: code for different GTS requests from individual nodes
# Define different GTS requests for the nodes in omnetpp.ini
[Config GTSon]
SN.node[1].Communication.MAC.requestGTS = ${GTS1=0}
SN.node[2].Communication.MAC.requestGTS = ${GTS2=4}
SN.node[3].Communication.MAC.requestGTS = ${GTS3=3}
SN.node[4].Communication.MAC.requestGTS = ${GTS4=0}
SN.node[5].Communication.MAC.requestGTS = ${GTS5=0}
# Execute the BANtest example - runs successfully
$ Castalia -c ZigBeeMAC,[GTSon]
# Changing the configuration (i.e. GTSon) from the command line - shows an error
$ Castalia -c GTSon=\$\{GTS1=0,GTS2=1,GTS3=5,GTS4=0,GTS5=0\}
Case 1 runs successfully, but case 2 gives the error below:
"ERROR: configuration 'GTSon' has more than one parameter and cannot be used with '=' syntax"
The error you are getting is simply a limitation of the Castalia script. Castalia's user manual is explicit about this limitation in section 3.5.3. You can also search for the string "has more than one parameter and cannot be used with '=' syntax" in the Castalia script to find more details, or to think about how you could extend it to support multiple command-line parameters per configuration.
But an extension of the functionality is not really needed. One simple workaround would be to define an individual configuration for each node. For example:
[Config GTSon-n1]
SN.node[1].Communication.MAC.requestGTS = ${GTS1=0}
[Config GTSon-n2]
SN.node[2].Communication.MAC.requestGTS = ${GTS2=4}
...
Then you can run Castalia with:
$ Castalia -c ZigBeeMAC,GTSon-n1,GTSon-n2
Or change the parameters on the command line:
$ Castalia -c ZigBeeMAC,GTSon-n1=3,GTSon-n2=5
In general, I'd like to suggest that changing simulation parameters at the command line is not a good idea (at least for your regular simulations). You should only use this feature to run throwaway exploratory simulations, where you quickly want to test the effect of a change without having to edit the ini file. The added bonus here is that the command line is saved with the output file, so you have some trace of how the output file was produced; that's why this feature was added in Castalia. For your regular simulation studies, however, you should have the parameter values (or ranges of values) in the ini file itself. That way there is a proper record of what the simulation study is supposed to be. OMNeT++ ini files are quite versatile, and you can achieve a lot with their syntax. Make sure you know about all that OMNeT++ has to offer by reading chapter 9 of the OMNeT++ 4.x manual.
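For instance, a small parameter study kept in the ini file itself could use OMNeT++'s iteration syntax (a sketch; the configuration name, iteration variables, and value ranges are illustrative):
[Config GTSstudy]
# One run is generated per combination of the iteration variables below
SN.node[1].Communication.MAC.requestGTS = ${GTS1=0..4 step 1}
SN.node[2].Communication.MAC.requestGTS = ${GTS2=0,2,4}
# Run the whole study:
$ Castalia -c ZigBeeMAC,GTSstudy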

Frama-C Assertion

Recently I have been working with Frama-C and I have faced a problem which is a bit confusing.
I have written a very simple program in Frama-C, which is this:
void main(void)
{
    int a = 3;
    int b = 4;
    /*@ assert a == b; */
}
I expect Frama-C to say the assertion is not valid, which in the GUI is shown by a red bullet; but instead Frama-C says the assertion is not valid according to Value (under hypothesis), which is shown by an orange-red bullet.
My question is: why would Frama-C say the assertion is not valid under hypothesis?
What are the possible hypotheses?
I am asking this because my program is very simple and I can't find any hypothesis or dependency in my program that is related to the assertion, so I would guess Frama-C should just say the assertion is not valid.
If you have Graphviz configured with your Frama-C installation (i.e. it was available when Frama-C was configured, either manually or via opam), you can double-click the property in the Properties panel, and a window should open showing the dependency graph for the property.
In that graph, we can see all the hypotheses used by a property, and we see that the "under hypothesis" mentioned here is the reachability of the assertion. Eva (the Value plug-in) computes an over-approximation of the reachable states, so it cannot prove that a given state is reachable, only that it is unreachable.
Currently, the only plug-in that can definitely prove reachability statuses is PathCrawler. In practice, however, this is rarely an issue.
An alternative way to see the dependencies of a property proven under hypotheses is to use the Report plug-in on the command line:
$ frama-c -val c.c -then -report
[report] Computing properties status...
--------------------------------------------------------------------------------
--- Properties of Function 'main'
--------------------------------------------------------------------------------
[ Alarm ] Assertion (file c.c, line 5)
By Value, with pending:
- Unreachable program point (file c.c, line 5)
--------------------------------------------------------------------------------
--- Status Report Summary
--------------------------------------------------------------------------------
1 Alarm emitted
1 Total
--------------------------------------------------------------------------------
The pending information lists all the properties required to finish the proof. For statuses emitted by the Value/Eva plug-in, the only dependencies emitted will always be reachability ones.
(This is actually misleading: the status of the n-th property really depends on the statuses of the properties k with k < n. That would generate too big a dependency graph, so those dependencies are not tracked.)

More than 2 notebooks with R code in Zeppelin failing with error "sparkr interpreter not responding"

I have run into a strange issue running R notebooks in Zeppelin (0.7.2).
The Spark interpreter is in per-note scoped mode, the Spark version is 1.6.2, and SPARK_HOME is set.
Please find the steps below to reproduce the issue:
Create a notebook (Note1) and run any R code in a paragraph. I ran the following code:
%r
rdf <- data.frame(c(1,2,3,4))
colnames(rdf) <- c("myCol")
sdf <- createDataFrame(sqlContext, rdf)
withColumn(sdf, "newCol", sdf$myCol * 2.0)
Create another notebook (Note2) and run any R code in a paragraph. I ran the same code as above.
Up to this point, everything works fine.
Create a third notebook (Note3) and run any R code in a paragraph. I ran the same code. This notebook fails with the error:
org.apache.zeppelin.interpreter.InterpreterException: sparkr is not
responding
What I understood from my analysis is that the process created for the SparkR interpreter is not being killed properly, and this makes every third notebook throw an error while executing. The process is killed on restarting the SparkR interpreter, after which another two notebooks can be executed successfully. That is, the error is thrown for every third notebook run using the SparkR interpreter.
Help me fix the problem.
You need to set spark.r.numRBackendThreads to a value larger than 2. By default it is 2, which means you can only have 2 threads for RBackend. Since you are using per-note scoped mode, each note consumes one thread for RBackend, so you can only run 2 notes.
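For example, the property can be set in the Spark interpreter settings in the Zeppelin UI, or passed through zeppelin-env.sh (a sketch; the value 10 is illustrative, size it to the number of concurrent notes you need):
# In the Spark interpreter settings (Interpreter menu), add the property:
spark.r.numRBackendThreads = 10
# Or, assuming you submit Spark through zeppelin-env.sh:
export SPARK_SUBMIT_OPTIONS="--conf spark.r.numRBackendThreads=10"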

Parallel processing - `forking` fails under Mac OS 10.6.8

It appears that fork fails under Mac OS 10.6.8. The algorithm is coded in R and I have mainly been using the foreach package for parallel processing. The code works perfectly well when run sequentially (foreach and %do%) but not when run in parallel (foreach and %dopar%), even though the processes run in parallel do NOT communicate.
I used foreach on this same machine a month ago and it worked fine. An update of the OS has been performed in the meantime.
Error Messages
I received several kinds of error messages that seem to occur almost stochastically. The errors also differ depending on whether the code is run from the Terminal (either by copy-pasting into the R environment or with R CMD BATCH) or from the R console.
When run from the Terminal, I get:
Error in { : task 1 failed - "object 'dp' not found"
When run from the R console, I get either:
The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().
Break on __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__() to debug.
....
<repeated many times when run on the R console>
or
Error in { : task 1 failed - "object 'dp' not found"
with the exact same code! Note that although this second error message is the same as the one received on the Terminal, the amount of output printed (through the print() function) on the screen vastly differs!
What I've tried
I updated the foreach package and also restarted my computer, but it did not quite help.
I tried to print pretty much anything I could, but it ended up being quite hard to keep track of what the algorithm is doing. For example, it often threw the error about the missing object dp without executing the print statement on the line that precedes the use of dp.
I tried to use %dopar% while registering only 1 CPU. The output did not change on the Terminal, but it changed on the console: the console now gives the exact same error, at the same time, as the Terminal.
I made sure that several CPUs were in use when I asked for several CPUs.
I tried to use mclapply instead of foreach, and registerDoMC() instead of registerDoParallel(), to register the number of cores. (A minimal sketch of this kind of setup appears after this list.)
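The question does not include the actual code, but a minimal foreach setup matching the description might look like this (a sketch; dp, the backend registration, and the loop body are assumptions, since the real code is not shown):
# Assumed setup: doParallel backend with forked workers, as on Mac OS
library(foreach)
library(doParallel)

registerDoParallel(cores = 2)   # register 2 CPUs for %dopar%

dp <- data.frame(x = 1:10)      # stands in for the object reported as missing

res <- foreach(i = 1:4) %dopar% {
  # Each task reads 'dp'; with a fork-based backend the workers inherit it
  # from the master process, so "object 'dp' not found" suggests a worker
  # that did not get a proper copy of the master environment.
  sum(dp$x) + i
}
print(res)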
Extra-Info
My version of R is 3.0.2 GUI 1.62 Snow Leopard build. My machine has 16 cores.
