Testing C code using the web

I have a number of C functions which implement mathematical formulae. To date, these have been tested for mathematical "soundness" by passing parameters through command-line applications or by compiling DLLs for applications like Excel. Is there an easy way of doing this testing over the web?
Ideally something along the lines of:
compile a library
spend five minutes defining a web form which calls this code
testers can view the webpage, input parameters and review the output
A simple example of a calculation could be to calculate the "accrued interest" of a bond:
Inputs: current date, maturity date, coupon payment frequency (integer), coupon amount (double)
Outputs: the accrued interest (double)
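For concreteness, here is a stripped-down C sketch of such a function. The signature and the day-count convention are purely illustrative (real bond math depends on the market convention), so treat it as a stand-in:

    #include <stdio.h>

    /* Illustrative only: accrued interest as the coupon amount pro-rated
     * over the elapsed fraction of the coupon period.  Dates are passed as
     * serial day numbers to keep the example self-contained. */
    double accrued_interest(int current_day, int last_coupon_day,
                            int coupon_frequency, double coupon_amount)
    {
        double days_in_period = 365.0 / coupon_frequency;  /* hypothetical ACT/365 */
        double days_elapsed   = (double)(current_day - last_coupon_day);
        return coupon_amount * days_elapsed / days_in_period;
    }

    int main(void)
    {
        /* e.g. 45 days into a semi-annual (frequency 2) period, coupon 3.0 */
        printf("%f\n", accrued_interest(45, 0, 2, 3.0));
        return 0;
    }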

You should have a look into automated testing. Manual tests will all have to be repeated every time you change something in your code. Automated tests are the solution for your kind of tests. Let testers write test cases with the accompanying results, then make them into unit tests.
See also: unit testing
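For example, a tester-supplied case for the accrued-interest function sketched in the question could become a plain C unit test; the expected value below is illustrative:

    #include <assert.h>
    #include <math.h>

    double accrued_interest(int, int, int, double);  /* from the sketch above */

    /* Each case a tester supplies (inputs plus expected result) becomes one
     * assertion that can be re-run automatically on every build. */
    static void test_halfway_through_semiannual_period(void)
    {
        double result = accrued_interest(91, 0, 2, 3.0);
        assert(fabs(result - 1.495890) < 1e-4);  /* expected value from the tester */
    }

    int main(void)
    {
        test_halfway_through_semiannual_period();
        return 0;  /* reaching this point means the test passed */
    }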

The quickest thing I can think of is to have these C programs compiled on the server, and to create a PHP page that receives command-line parameters, executes the compiled program on the server, and parses the output. Technologies other than PHP would also work just fine. What you need to figure out, for a specific technology, is:
How to start a process
How to redirect standard input/output
I have also seen a number of web sites that let users submit their C code, which is then compiled on the server. The program is then given some input file and produces output, which is verified against the correct answer. For an example, visit http://acm.timus.ru/
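For illustration, here is a minimal harness such a page could invoke (e.g. via PHP's exec()), assuming the accrued_interest function from the sketch in the question; the web layer only builds the argument list and parses stdout:

    #include <stdio.h>
    #include <stdlib.h>

    double accrued_interest(int, int, int, double);  /* from the sketch above */

    /* Hypothetical command-line wrapper: the web page passes the parameters
     * as arguments and reads the single number printed on stdout. */
    int main(int argc, char *argv[])
    {
        if (argc != 5) {
            fprintf(stderr, "usage: %s current_day last_coupon_day frequency coupon\n",
                    argv[0]);
            return 1;
        }
        int    current_day     = atoi(argv[1]);
        int    last_coupon_day = atoi(argv[2]);
        int    frequency       = atoi(argv[3]);
        double coupon          = atof(argv[4]);

        printf("%f\n", accrued_interest(current_day, last_coupon_day,
                                        frequency, coupon));
        return 0;
    }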

If you're going to do this, you should be sure that every web interaction is captured in a permanent database of tests. Then you can use this database to
Automatically re-run all tests if the software changes
Possibly find inconsistencies that result if a person gives you the wrong answer
In other words, the web form should be the front end to a persistent infrastructure for testing, not a means of running tests that disappear just after they are viewed.
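As a sketch of that idea (the file name and format are made up, and accrued_interest is the function from the question's sketch): every captured web interaction becomes a line in a flat file standing in for the test database, and a small runner replays them all after each change.

    #include <stdio.h>
    #include <math.h>

    double accrued_interest(int, int, int, double);  /* function under test */

    /* Regression runner: each line of cases.txt holds
     *   current_day last_coupon_day frequency coupon expected  */
    int main(void)
    {
        FILE *f = fopen("cases.txt", "r");  /* stand-in for the test database */
        if (!f) { perror("cases.txt"); return 1; }

        int cur, last, freq, failures = 0;
        double coupon, expected;
        while (fscanf(f, "%d %d %d %lf %lf",
                      &cur, &last, &freq, &coupon, &expected) == 5) {
            double got = accrued_interest(cur, last, freq, coupon);
            if (fabs(got - expected) > 1e-6) {
                printf("FAIL: (%d, %d, %d, %g) expected %g, got %g\n",
                       cur, last, freq, coupon, expected, got);
                failures++;
            }
        }
        fclose(f);
        return failures ? 1 : 0;
    }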

Or similarly, create a Perl CGI that checks input values and then passes them through to the C program. By the way, this should only be done for testing, not for final deployment.
You should really automate the testing to check your behaviour is as expected over a wide range of values.
Or shouldn't you be testing this in an environment that is as close as possible to the final deployment environment?

This is what you're looking for:
http://codepad.org/
It will execute C, C++, D, Haskell, Lua, and many others online, and display the results.
If you've got a large library to compile it may get unwieldy, but testing a function briefly is simply a matter of pasting the code and hitting "Submit".

This sounds much like FIT. You could probably make a new fixture for it, or for one of the other language ports like the Python one, that calls a C library with your function. This would take advantage of the work that's gone into making FIT convenient, the kind of work Norman Ramsey recommends in his answer.

Related

Recreating KaTeX emulation in LaTeX?

I'm working on a site that is using KaTeX for rendering math. However, the interface for entering the math content is (really) not ideal, so it is actually faster for me to work in an editor like Sublime Text 3 and import the work; however, an issue I run into is that when I import, I discover various functions / environments aren't supported (i.e. emulated) by KaTeX.
If it were just me working on the material, I would simply learn as I go and consult the KaTeX documentation page; however, I have several contractors working on digitizing content who do not have access to the site (and I don't have the ability to give them access), and so cannot learn by trial-and-error. Instead, I end up with piles of documents that all need to be manually adjusted, to render as desired with KaTeX.
As such, I wanted to assemble a preamble for a LaTeX document that would recreate the abilities (i.e. functions and environments) KaTeX can emulate, and was wondering if such a preamble / package already exists? I have tried a few quick searches, but because I'm looking for something that imitates an emulator, I'm finding it tricky to find the right choice of words to get relevant results.
I wasn't sure if this were best posted here or on TeX.SE - I suspect it falls in between the two - so I apologize if my guess was wrong and I should've tried there first. Any suggestions would be very much appreciated, as this is creating a substantial bottleneck in my workflow but is also just outside of my ability to solve on my own.
Supported functions are one thing. To tackle that, you might actually stand a fair chance of just tokenizing the input, looking for backslash-name sequences and checking them against a list extracted from the KaTeX sources to see which are supported.
I guess one could even try to remove all other functions from LaTeX. Or rather hide them, such that the user input can't access them but third-party libraries can. Getting rid of language features (as opposed to macros) such as \def would probably be even harder. Better to ask on the TeX Stack Exchange for details if you really want to follow this route.
As an alternative, I guess you might be able to perform the check I described above in TeX itself: write a macro which reads the current file as plain text instead of TeX source, to perform this analysis. Or some such. But a separate stand-alone tool would be much easier.
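Such a scanner can be very small; here is a rough C sketch (hypothetical, and any scripting language would do just as well) that prints every control word on stdin so the list can be diffed against one extracted from the KaTeX sources:

    #include <stdio.h>
    #include <ctype.h>

    /* Prints every backslash-name control sequence found on stdin, one per
     * line, e.g.  ./scan < chapter.tex | sort -u  */
    int main(void)
    {
        int c = getchar();
        while (c != EOF) {
            if (c == '\\') {
                c = getchar();
                if (isalpha(c)) {
                    putchar('\\');
                    while (isalpha(c)) {   /* control words are letters only */
                        putchar(c);
                        c = getchar();
                    }
                    putchar('\n');
                    continue;              /* c already holds the next character */
                }
                /* single-character control symbols (\\, \%, \{ ...) are skipped */
            }
            c = getchar();
        }
        return 0;
    }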
If you are going for a separate tool, you might as well write it in JavaScript for Node, and have it run KaTeX on the input. That way you can at least tell whether it will get typeset to something or error out.
Whether the rendering is what you expect from LaTeX may be another question. In general KaTeX aims to reproduce LaTeX behaviour, so any difference might indicate a bug. But bugs exist, so all of this might not avoid the need for checks. How about just processing the math part of the input with KaTeX into some HTML which authors can check without access to the site?
As for existing tools or macro packages, I know of none, but tool or library questions are off topic on Stack Exchange anyway.

Can anyone report experiences with HWUT for unit testing?

Can anyone report experiences with the HWUT tool (http://hwut.sourceforge.net)?
Has anyone used it for a longer period of time?
What about robustness?
Are the features like test generation, state machine walking, and Makefile generation useful?
What about execution speed? Any experience in larger projects?
How well do the code coverage measurements perform?
I have been using HWUT for several years for software unit testing in larger automotive infotainment projects. It is easy to use, performance is great, and it does indeed cover state-machine walkers, test generation, and makefile generation, too. Code coverage works well. What I really like about HWUT are the state-machine walker and the test generation, since they allow you to create a large number of test cases in a very short time. I highly recommend HWUT.
Much faster than commercial tools, which can save a lot of time for larger projects!
I really like the idea of testing by comparing program output, which makes it easy to start writing tests, and also works well when scaling the project later on. Combined with makefile generation it is really easy to set up a test.
I used HWUT to test multiple software components. It is really straightforward, and as a software developer you don't have to click around in GUIs. You can just create a new source code file (*.c or whatever) and your test is nearly done.
This is very handy when using version control. You just have to check in the "test.c" file, the Makefile, and the results of the test - that's it; no need to check in binary files.
I like using the generators which HWUT offers. With them it is easily possible to create tens of thousands of test cases (or even more), which is very handy if you want to test the boundary conditions of, e.g., a conversion function.
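To illustrate the output-comparison style the answers above describe, here is a rough sketch of what such a test source might look like. The GOOD/ reference-output directory and the --hwut-info handshake are my reading of the HWUT conventions, so verify the details against the manual:

    #include <stdio.h>
    #include <string.h>

    /* Sketch of an HWUT-style test: the program prints its results and the
     * framework compares stdout against a stored reference output (kept, as
     * far as I recall, in a GOOD/ directory next to the test). */
    int main(int argc, char *argv[])
    {
        if (argc > 1 && strcmp(argv[1], "--hwut-info") == 0) {
            printf("Convert function: boundary conditions;\n");
            return 0;
        }
        /* The test itself: print inputs and outputs; no assertions needed,
         * since a mismatch with the reference output is the failure signal. */
        int inputs[] = { -1, 0, 1, 65535, 65536 };
        for (unsigned i = 0; i < sizeof inputs / sizeof inputs[0]; ++i)
            printf("convert(%6d) -> %6d\n",
                   inputs[i], inputs[i] / 2);  /* stand-in for the real convert() */
        return 0;
    }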

Test-Driven Development in CakePHP

I'm using CakePHP 2.3 and would like to know how to properly go about building a CakePHP website using test-driven development (TDD). I've read the official documentation on testing, read Mark Story's Testing CakePHP Controllers the hard way, and watched Mark Story's Win at life with Unit testing (PDF of slides) but am still confused. I should note that I've never been very good about writing tests in any language and don't have a lot of experience with it, and that is likely contributing to my confusion.
I'd like to see a step by step walkthrough with code examples on how to build a CakePHP website using TDD. There are articles on TDD, there are articles on testing with CakePHP, but I have yet to find an in-depth article that is about both. I want something that holds my hand through the whole process. I realize this is somewhat of a tall order because, unless my Google-fu is failing me, I'm pretty sure such an article hasn't yet been published, so I'm basically asking you to write an article (or a long Stack Overflow answer), which takes time. Because this is a tall order, I plan to start a bounty on this question worth a lot of points once I'm able to in order to better reward somebody for their efforts, should anybody be willing to do this. I thank you for your time.
TDD is a bit of a fallacy in that it's essentially just writing tests before you code, to ensure that you actually write tests.
All you need to do is create your tests for a thing before you go create it. This requires thought and analysis of your use cases in order to write tests.
So if you want someone to view data, you'll want to write a test for a controller. It'll probably be something like testViewSingleItem(), and you'll probably want to assertContains() some data that you expect.
Once this is written, it should fail, then you go write your controller method in order to make the test pass.
That's it. Just rinse and repeat for each use case. This is Unit Testing.
Other tests, such as functional tests and integration tests, just test different aspects of your application. It's up to you to think about which of these tests are useful to your project.
Most of the time unit testing is the way to go, as you can test individual parts of the application - usually the parts which impact the functionality the most, the "critical path".
This is an incredibly useful TDD tutorial. http://net.tutsplus.com/sessions/test-driven-php/

Interpreting WinBUGS traps and how to automate the program?

First of all, does anybody know of a developer's guide for WinBUGS? The website is full of detailed examples for Doodles and documentation for the model language, but I have yet to find anything about how to interpret trap windows.
Secondly, has anybody found any ways to streamline the check/load/compile/init/monitor/update cycle? By that I mean, there doesn't seem to be any way to say "don't bother rechecking the model or putting any of the settings back to their defaults (!!!), just keep loading data from these files, inits from those files, and for each generate a new coda". Even the standard Windows shortcuts are neutered here, forcing the user to keep clicking and filling the same fields with the same values over and over. This might seem like a minor issue, but when you are doing many similar analyses one after the other, it gets old fast.
I'm at the point where I'm about to use TRON.EXE to send fake mouseclicks to the program, but before going to that extreme I'm hoping there is some native and more elegant way to automate repetitive WinBUGS tasks.
Well... that's WinBUGS at its normal :-) Unfriendly, showing traps that would scare off an experienced kernel hacker... :-) I don't think there exists any guide to the traps. I mean, if the WinBUGS creators had wanted to put some effort into being more user friendly, they would probably have made the traps more understandable first, so that no guide was necessary.
I was trying to do something similar - i.e., to customize WinBUGS behaviour. First, you can call WinBUGS from R using R2WinBUGS. That way you are able to do a lot of automation, but not everything. For example, I wanted to have something like progress information in WinBUGS. The problem is that the WinBUGS UI gets stuck during update cycles. R2WinBUGS creates the script.txt command script, and in it there is the command update(<big number of cycles>). What I wanted was to customize this script.txt to contain many smaller update(..) commands instead of one big one. But the problem is that R2WinBUGS generates this script itself and you cannot change it.
So the way to customize WinBUGS could be that you create your own wrapper that creates the script.txt and other files. I believe you could do a lot more customization to WinBUGS this way.
However, I'm not sure if WinBUGS is worth it. Its development has stopped, and while favoured by many people, it remains rigid. You can try JAGS or CppBugs, which seem to have a much more promising future.
For a wrapper around R2WinBUGS that adds lots of functionality to streamline serious WinBUGS use, see my package rube (http://www.stat.cmu.edu/~hseltman/rube/) which is not yet on CRAN.
Among other things, it gives plain English error messages rather than passing your model/data/inits along to WinBUGS when a trap error is certain. It also gives a highly useful summary of your model/data/inits for finding problems that cannot be automatically detected. Of course, it does not catch all trap errors.
Turns out I didn't RTFM enough on the second part of my question. The section of the WinBUGS 1.4 manual entitled "Batch-Mode: Scripts" lists all the batch commands. All the important UI functionality has a batch-mode command. There was only a little trial and error in getting the arguments right (for example, over.relax('true')). What really took me a while to sort out is that WinBUGS seems to have trouble with some Windows paths, but as long as everything is in a subdirectory of the directory where WinBUGS is installed, it runs okay.
It's still kind of messy to have to keep loading all these little files, but I wrote an R-script that uses functions from the BRugs package to create all the files, name them in a consistent pattern, and generate a script that will then initialize the model and load them, over and over again.
I'll leave this question open for a while, though, to see if anybody has any suggestions on where I can learn to make better use of traps.

How to Test Web Code?

Does anyone have some good hints for writing test code for database-backend development where there is a heavy dependency on state?
Specifically, I want to write tests for code that retrieves records from the database, but the answers will depend on the data in the database (which may change over time).
Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set?
I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience.
Are there good articles out there that discuss this issue of web-based development in general?
I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic.
You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data which represents your test data, and thus each test will no longer depend on the database and some existing state. This way, each test is self contained and will not break during further database usage.
Update: A quick google search showed a DB unit extension for PHPUnit.
If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc.
I guess it depends what database you're using, but Red Gate (www.red-gate.com) make a tool called SQL Data Generator. This can be configured to fill your database with sensible looking test data. You can also tell it to always use the same seed in its random number generator so your 'random' data is the same every time.
You can then write your unit tests to make use of this reliable, repeatable data.
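The fixed-seed idea is easy to reproduce by hand, too; a minimal C sketch (the seed value and the row shape are arbitrary):

    #include <stdio.h>
    #include <stdlib.h>

    /* Deterministic "random" test data: because the seed is fixed, every run
     * emits exactly the same rows, so tests against them are repeatable. */
    int main(void)
    {
        srand(42);  /* fixed seed -> reproducible "random" data */
        for (int id = 1; id <= 5; ++id)
            printf("INSERT INTO account (id, balance) VALUES (%d, %d.%02d);\n",
                   id, rand() % 10000, rand() % 100);
        return 0;
    }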
As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them!
We use an in-memory database (hsql: http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing DB, with the added bonus that they run as quick as lightning.
I have the exact same problem with my work, and I find that the best idea is to have a PHP script re-create the database, and then a separate script where I throw crazy data at it to see if it breaks.
I have never used any unit testing or suchlike, so I cannot say whether it works, sorry.
If you can setup the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with.
Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database.
It's definitely worth setting up either a test version of the database - or make your test scripts populate the database with known data as part of the tests.
You could try http://selenium.openqa.org/ - it is more for GUI testing than for data-layer testing, but it does record your actions, which can then be played back to automate tests across different platforms.
Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP):
I have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.
I would propose to use three databases. One production database, one development database (filled with some meaningful data for each developer) and one testing database (with empty tables and maybe a few rows that are always needed).
A way to test database code is:
Insert a few rows (using SQL) to initialize state
Run the function that you want to test
Compare expected with actual results. Here you could use your normal unit testing framework
Clean up the rows that were changed (so the next run won't see the previous run)
The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table.
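Here is a sketch of that four-step cycle in C against an in-memory SQLite database (the schema and the stand-in query are hypothetical); keeping the database in memory echoes the hsql suggestion above, since no shared state is ever touched:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Build with: cc test_db.c -lsqlite3 */
    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

        /* 1. Insert a few rows to initialize state. */
        sqlite3_exec(db,
            "CREATE TABLE account (id INTEGER, balance INTEGER);"
            "INSERT INTO account VALUES (1, 100), (2, 250);",
            NULL, NULL, NULL);

        /* 2. Run the code under test -- here just a stand-in query. */
        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db, "SELECT SUM(balance) FROM account;", -1, &stmt, NULL);
        sqlite3_step(stmt);
        int total = sqlite3_column_int(stmt, 0);
        sqlite3_finalize(stmt);

        /* 3. Compare expected with actual results. */
        printf(total == 350 ? "PASS\n" : "FAIL: got %d\n", total);

        /* 4. Clean up: trivial here, the database dies with the connection. */
        sqlite3_close(db);
        return total == 350 ? 0 : 1;
    }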
In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use some CRUD API that is used in the product, to create data as similar to production as possible...
