How to skip phoronix-test-suite initial questions - benchmarking

I would like to use phoronix-test-suite to benchmark cloud instances of different providers.
However, the automation hangs because phoronix-test-suite asks three initial questions on first run: whether to accept the license agreement, whether to upload benchmark results to OpenBenchmarking.org, and so on.
I know that batch runs can be preconfigured using the user-config.xml file, but this does not seem sufficient to run benchmarks non-interactively the first time.
Phoronix-test-suite still asks its initial questions, which prevents fully automatic benchmarking of the instances.
Can anybody help? Is there another file which phoronix-test-suite needs in order not to ask its initial questions?

running "phoronix-test-suite enterprise-setup" on PTS 5.6+ is another way to avoid the initial setup questions.

After digging in the sources I discovered an environment variable, PTS_SILENT_MODE. Just set it to 1.
Example:
On a fresh install, when I run
PTS_SILENT_MODE=1 phoronix-test-suite benchmark pts/openssl-1.9.0
the three initial questions are not asked.
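If you are driving the cloud runs from a script anyway, the same trick can be wrapped in a small Python helper. This is only a sketch; the profile name is simply the one from the command above, and it just exports the variable before invoking the CLI.

# sketch: run a benchmark non-interactively by exporting PTS_SILENT_MODE=1
import os
import subprocess

def run_benchmark(profile="pts/openssl-1.9.0"):
    env = dict(os.environ, PTS_SILENT_MODE="1")  # suppresses the initial questions
    subprocess.run(["phoronix-test-suite", "benchmark", profile],
                   env=env, check=True)

if __name__ == "__main__":
    run_benchmark()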


How to clean database after scenario in Python behave

I'm pretty new to the world of Python/behave and API testing, and I'm trying to clean the database after a scenario is run by tagging it with @clean_database.
Can you please assist?
I guess that I will need a database_context.py in my context_steps folder, but I'm not sure how to make the connection to the database...
Seems like you have 2 questions here:
(1) How do I connect to the database?
This question doesn't involve behave, so you should ask it elsewhere, perhaps on the MySQL-Python thread if you're using MySQL (which you haven't specified) or on the Python thread.
(2) How do I use behave to call specific tags?
For the latter, check out the documentation for running tagged tests and see how to run behave from your Python program.
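If it helps, behave can also react to the tag at runtime through hooks in an environment.py file at the root of your features directory. Below is a minimal sketch; sqlite3 and the table names are only stand-ins, since you haven't said which database you're using, so swap in your real driver and cleanup statements.

# environment.py -- sketch of a cleanup hook for scenarios tagged @clean_database
import sqlite3  # stand-in driver; replace with your database's driver

def before_all(context):
    # Open one connection for the whole test run.
    context.db = sqlite3.connect("test.db")

def after_scenario(context, scenario):
    # Runs after every scenario; only clean up when the scenario carries the tag.
    if "clean_database" in scenario.tags:
        cur = context.db.cursor()
        cur.execute("DELETE FROM orders")  # hypothetical table
        cur.execute("DELETE FROM users")   # hypothetical table
        context.db.commit()

def after_all(context):
    context.db.close()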

Is using ENV variables a good idea?

Background
Our app uses a MySQL DB and a couple more services.
To connect our app to these servers, we have the usernames and passwords saved in a prod.config file. If we are in dev, we use a dev.config file and so on...
Recently, I have been studying good practices in the industry (such as the https://12factor.net/) and the majority of them (if not all) specify that information like the usernames and passwords used to connect to the DB and other services should not be in config files but rather in ENV variables.
If you have no idea what the 12 factor spec is you can check this free tutorial:
https://egghead.io/lessons/egghead-summary-concepts-of-the-twelve-factor-app
Problem
Now, at first this looks fine. Many CI tools like Travis or CircleCI already force you to do this anyway. The problem here is when your smallest app uses multiple services.
In our case, for our smallest app, we would need 13 ENV variables, variables that wouldn't live in any specific file; they would all have to be set in the environment of the machine the app runs on.
I fail to see how this can be considered a good practice. I understand the main idea of not pushing your config files with all this sensitive data, but this approach poses several issues:
When the machine reboots, you lose all your ENV variables.
If you want to avoid the previous problem, you need to run a script on machine start that sets these variables, which means you would have them stored in a file, defeating the whole purpose.
Where do you save these variables? They need to be somewhere other than your flimsy head!
Questions
How would I solve the previous issues?
Why is saving private info in ENV variables seen as a good idea?
I'm going to step back a bit here and pose a question to you: why are you trying to connect to a production database from a testing environment?
The beauty of CI tools is that they allow you to spin up Docker containers to act as testing services. In your production code, it is considered best practice to keep your passwords in environment variables for two main reasons:
1.) If someone ever got hold of your code, they would also have access to your database. Preventing that would require an extra level of security around the code itself that is just not realistic.
2.) If someone did get hold of your passwords, you want to be able to change them quickly. This is easier to do if your code references environment variables instead of hard-coded strings.
When you move to a CI system, point #2 becomes moot, but point #1 becomes exceedingly important. With Travis and CircleCI on a public repository, your config file is public. If you put your production password into your config file, I (or someone much more malicious) could just go scan your file and jump into your database. I've heard stories of hackers scraping public repositories for hardcoded passwords in config files. It's even easier with a tool like CircleCI.
The environment variables you set in Travis and CircleCI are stored at the repository level; you shouldn't need to move variables around or save them anywhere else.
Environment variables in a production system should be set up as part of a startup script. This is highly dependent on what kind of service you're using, so I won't go into much detail here.
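To illustrate the application side, the credential handling can live in one small loader that reads everything from the environment and fails fast if something is missing. This is only a sketch; the APP_DB_* names are made up, so use whatever naming scheme fits your services.

# sketch: load service credentials from the environment instead of a config file
import os

REQUIRED = ["APP_DB_HOST", "APP_DB_USER", "APP_DB_PASSWORD"]  # example names

def load_db_config():
    missing = [name for name in REQUIRED if name not in os.environ]
    if missing:
        # Fail fast at startup instead of at the first database call.
        raise RuntimeError("missing environment variables: " + ", ".join(missing))
    return {
        "host": os.environ["APP_DB_HOST"],
        "user": os.environ["APP_DB_USER"],
        "password": os.environ["APP_DB_PASSWORD"],
    }

if __name__ == "__main__":
    print(sorted(load_db_config()))  # print only the keys, never the values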

Manage SSDT project file properly with version control (*.sqlproj)

We have a constant problem with the project XML file (*.sqlproj). If files are added, removed, or moved, entries are automatically added to or removed from the project file in unexpected places. After that we have big trouble merging it when somebody else has changed that file as well.
We came to the conclusion that we might sort the file before check-in. If we sorted it alphabetically, the merge tool would understand it much better (a rough sketch of that idea follows the example below).
So, my questions would be:
Is it possible to re-arrange the sqlproj file somehow before EVERY check-in? Maybe there is some kind of option or tool that already does that?
Are there any other ways to make developers' lives easier?
UPDATE:
Once again I ran into the same problem. The sqlproj file was modified three times and I want to merge only the last change to production; the other two are not tested yet. In the merge tool I have the option to take all three new objects or to leave the file without changes. I am not able to select only the last change ...
EXAMPLE:
developerA created tableA and checked in;
developerB got the latest version of the dev branch, created tableB and checked in;
developerC got the latest version of the dev branch, created tableC and checked in. DeveloperC tested the code and is ready to go to production. He tries to merge his code to QA and gets a conflict where his only option is to take ALL the changes.
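To make the sorting idea mentioned above concrete, something along these lines is what I have in mind. It is only a sketch: it assumes the usual MSBuild layout where items such as <Build Include="..."/> sit inside <ItemGroup> elements, and it does not try to preserve comments or the exact original whitespace.

# sort_sqlproj.py -- sketch: sort the items in each ItemGroup before check-in
import sys
import xml.etree.ElementTree as ET

MSBUILD_NS = "http://schemas.microsoft.com/developer/msbuild/2003"
ET.register_namespace("", MSBUILD_NS)  # keep the default MSBuild namespace on output

def sort_item_groups(path):
    tree = ET.parse(path)
    root = tree.getroot()
    for group in root.findall("{%s}ItemGroup" % MSBUILD_NS):
        # Sort by element tag and Include attribute so diffs and merges stay stable.
        items = sorted(group, key=lambda e: (e.tag, e.get("Include", "")))
        for item in list(group):
            group.remove(item)
        for item in items:
            group.append(item)
    tree.write(path, xml_declaration=True, encoding="utf-8")

if __name__ == "__main__":
    sort_item_groups(sys.argv[1])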
I understand the scenario you are running into very well. This typically happens when you have multiple work streams happening in the context of a single repository and you don't have a common promotion schedule (as in all work will go to QA at the same time and PROD at the same time).
There are a few ways I can think of to get around this problem, and there are pros and cons to each option.
Lock each environment until everything can promote together. Not realistic in most cases.
When you are ready to promote, create a promotion branch from source environment and take things out of the promotion branch that aren't ready to promote to destination environment. This allows devs to keep working and be able to promote without freezing.
Hybrid approach... Don't source control anything in Dev until it's ready to promote to test. Then either do option #1 or 2 from there onward.
Create a more flexible ecosystem that can spin up an environment for each feature branch in order to demo/test with others (or at least allocate/rotate enough environments between the developers to accomplish the same objective). Once it's accepted, promote. This is what we are working towards currently, but building out the infrastructure and process when you have a ton of interconnected databases and apps that share them is a bit challenging to say the least (especially in the Microsoft world).
Anyway, hope this helps...
1 - What source control are you using? No source control system that I am aware of understands the contents of sqlproj files, but this isn't normally a problem.
2.a - This shouldn't be a problem you run into constantly; are you checking in and out regularly? I would only expect to see issues if different developers are making large-scale changes to the projects and not checking out / checking in before and after.
2.b - It is also possible you are not merging correctly; if you take both sets of changes then it is normally fine.
ed

Interpreting WinBUGS traps and how to automate the program?

First of all, does anybody know of a developer's guide for WinBUGS? The website is full of detailed examples for Doodles and documentation for the model language, but I have yet to find anything about how to interpret trap windows.
Secondly, has anybody found any ways to streamline the check/load/compile/init/monitor/update cycle? By that I mean, there doesn't seem to be any way to say "don't bother rechecking the model or putting any of the settings back to their defaults (!!!), just keep loading data from these files, inits from those files, and for each generate a new coda". Even the standard Windows shortcuts are neutered here, forcing the user to keep clicking and filling the same fields with the same values over and over. This might seem like a minor issue, but when you are doing many similar analyses one after the other, it gets old fast.
I'm at the point where I'm about to use TRON.EXE to send fake mouseclicks to the program, but before going to that extreme I'm hoping there is some native and more elegant way to automate repetitive WinBUGS tasks.
Well... that's WinBUGS being its normal self :-) Unfriendly, showing traps that would scare off an experienced kernel hacker... :-) I don't think there is any guide to the traps. I mean, if the WinBUGS creators had wanted to put some effort into being more user friendly, they would probably have first made the traps more understandable, so that no guide was necessary.
I was trying to do something similar, i.e. to customize WinBUGS behaviour. First, you can call WinBUGS from R using R2WinBUGS. That way you can automate a lot, but not everything. For example, I wanted to have something like progress information in WinBUGS. The problem is that the WinBUGS UI gets stuck during update cycles. R2WinBUGS creates the script.txt command script, which contains a command like update(<big number of cycles>). What I wanted was to customize this script.txt to contain many smaller update(..) commands instead of one big one. But the problem is that R2WinBUGS generates this script itself and you cannot change it.
So one way to customize WinBUGS is to create your own wrapper that generates the script.txt and the other input files. I believe you could do a lot more customization of WinBUGS this way.
However, I'm not sure if WinBUGS is worth it. Its development has stopped and, while favoured by many people, it remains rigid. You can try JAGS or CppBugs, which seem to have a much more promising future.
For a wrapper around R2WinBUGS that adds lots of functionality to streamline serious WinBUGS use, see my package rube (http://www.stat.cmu.edu/~hseltman/rube/) which is not yet on CRAN.
Among other things, it gives plain English error messages rather than passing your model/data/inits along to WinBUGS when a trap error is certain. It also gives a highly useful summary of your model/data/inits for finding problems that cannot be automatically detected. Of course, it does not catch all trap errors.
Turns out I didn't RTFM enough on the second part of my question. The section of the WinBUGS 1.4 manual entitled "Batch-Mode: Scripts" lists all the batch commands. All the important UI functionality has a batch-mode command. There was only a little trial and error in getting the arguments right (for example over.relax('true')). What really took me a while to sort out is that WinBUGS seems to have trouble with some Windows paths, but as long as everything is in a subdirectory of the directory where WinBUGS is installed, it runs okay.
It's still kind of messy to have to keep loading all these little files, but I wrote an R script that uses functions from the BRugs package to create all the files, name them in a consistent pattern, and generate a script that will then initialize the model and load them, over and over again.
I'll leave this question open for a while, though, to see if anybody has any suggestions on where I can learn to make better use of traps.

Testing C code using the web

I have a number of C functions which implement mathematical formulae. To date these have been tested for mathematical "soundness" by passing parameters through command-line applications or by compiling DLLs for applications like Excel. Is there an easy way of doing this testing over the web?
Ideally something along the lines of:
compile a library
spend five minutes defining a web form which calls this code
testers can view the webpage, input parameters and review the output
A simple example of a calculation could be to calculate the "accrued interest" of a bond:
Inputs: current date, maturity date, coupon payment frequency (integer), coupon amount (double)
Outputs: the accrued interest (double)
You should have a look at automated testing. Manual tests will all have to be repeated every time you change something in your code; automated tests are the solution for this kind of testing. Let testers write test cases with the accompanying expected results, then turn them into unit tests.
See also: unit testing
The quickest thing I can think of is to have these C programs compiled on the server, and to create a PHP page that receives the parameters, executes the compiled program on the server, and parses its output. Technologies other than PHP would also work just fine. What you need to figure out, for your specific technology, is:
How to start a process
How to redirect standard input/output
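For instance, here is a rough sketch of that idea in Python rather than PHP. It assumes a compiled binary named ./accrued_interest that takes the four inputs as command-line arguments and prints the result; both the binary name and the parameter names are placeholders.

# sketch: tiny web front end that shells out to a compiled test binary
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class TestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        args = [params.get(k, [""])[0] for k in
                ("current_date", "maturity_date", "frequency", "coupon")]
        # Run the compiled C program and capture its standard output.
        result = subprocess.run(["./accrued_interest", *args],
                                capture_output=True, text=True, timeout=10)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout.encode() or result.stderr.encode())

if __name__ == "__main__":
    HTTPServer(("", 8000), TestHandler).serve_forever()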
I have also seen a number of web sites which let users submit their C code, which then gets compiled on the server. After that the program is given some input file and produces output, and the output is then verified against the correct answer. For example, visit this site: http://acm.timus.ru/
If you're going to do this, you should be sure that every web interaction is captured in a permanent database of tests. Then you can use this database to
Automatically re-run all tests if the software changes
Possibly find inconsistencies that result if a person gives you the wrong answer
In other words, the web form should be the front end to a persistent infrastructure for testing, not a means of running tests that disappear just after they are viewed.
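A minimal sketch of what that persistent layer could look like; sqlite, the table layout, and the ./accrued_interest binary are illustrative placeholders only.

# sketch: store every submitted test case and re-run the whole suite on demand
import sqlite3
import subprocess

def record_case(db, args, expected):
    # Called by the web front end once a tester has approved a result.
    db.execute("CREATE TABLE IF NOT EXISTS cases (args TEXT, expected TEXT)")
    db.execute("INSERT INTO cases VALUES (?, ?)", (" ".join(args), expected))
    db.commit()

def rerun_all(db, binary="./accrued_interest"):
    # Re-run every stored case against the current binary and report mismatches.
    failures = []
    for args, expected in db.execute("SELECT args, expected FROM cases"):
        out = subprocess.run([binary, *args.split()],
                             capture_output=True, text=True).stdout.strip()
        if out != expected:
            failures.append((args, expected, out))
    return failures

if __name__ == "__main__":
    db = sqlite3.connect("webtests.db")
    print(rerun_all(db))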
Or similarly, create a Perl CGI that checks the input values and then passes them through to the C program. BTW, this should only be done for testing and not for final deployment.
You should really automate the testing to check that the behaviour is as expected over a wide range of values.
Or shouldn't you be testing this in an environment that is as close as possible to the final deployment environment?
cheers,
Rob
This is what you're looking for:
http://codepad.org/
It will execute C, C++, D, Haskell, Lua, and many others online, and display the results.
If you've got a large library to compile it may get unwieldy, but testing a function briefly is simply a matter of pasting the code and hitting "Submit".
This sounds much like FIT. You could probably make a new fixture for it, or for one of the other language ports like the Python one, that calls a C library with your function. This would take advantage of the work that's gone into making FIT convenient, the kind of work Norman Ramsey recommends in his answer.
