Python behave issue

I'm trying to use behave 1.2.5 with Python 2.6.
Unfortunately I'm stuck with this version of Python for the moment.
When defining step handlers with a parameter, such as
@given('we have behave {x} installed')
def step_impl(context, x):
    ...
I get the following error message
File "build/bdist.solaris-2.11-sun4v/egg/behave/model.py", line 1903, in run
self.func(context, *args, **kwargs)
TypeError: step_impl() keywords must be strings
To me this is an indication that the step handler is being invoked with a keyword dictionary whose keys are unicode strings rather than native str strings.
If this is the case, is there a solution?
Kurt

You're supposed to include double quotes around the braces in your step definition. It should be:
@given('we have behave "{x}" installed')
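If upgrading Python isn't an option, the unicode-keyword failure described in the question can also be worked around by normalizing the keyword dictionary before the handler is called. A minimal sketch (the native_keys helper is hypothetical, not part of behave):

```python
def native_keys(kwargs):
    """Convert unicode keyword names to native str keys.

    On Python 2.6, f(**{u'x': 1}) raises "TypeError: keywords must be
    strings"; on Python 3 the conversion is a harmless no-op."""
    return dict((str(k), v) for k, v in kwargs.items())

def step_impl(context, x):
    return x

# What behave effectively builds from the matched step text
kwargs = {u'x': '1.2.5'}
print(step_impl(None, **native_keys(kwargs)))
```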

Related

Matlab / M - pass a cell array into a function as input [duplicate]

I'm a new user of Matlab; can you please help?
I have the following code in an .M file:
function f = divrat(w, C)
S=sqrt(diag(diag(C)));
s=diag(S);
f=sqrt(w'*C*w)/(w'*s);
I have stored this file (divrat.M) in the normal Matlab path, so I'm assuming that Matlab will read the function when it starts and that this function should therefore be available to use.
However, when I type
>> divrat(w, C)
I get the following error
??? Undefined function or method 'divrat' for input arguments of type 'double'.
What is the error message telling me to do? I can't see any error in the code or the function call.
You get this error when the function isn't on the MATLAB path or in pwd.
First, make sure that you are able to find the function using:
>> which divrat
c:\work\divrat\divrat.m
If it returns:
>> which divrat
'divrat' not found.
It is not on the MATLAB path or in PWD.
Second, make sure that the directory that contains divrat is on the MATLAB path using the PATH command. It may be that a directory that you thought was on the path isn't actually on the path.
Finally, make sure you aren't using a "private" directory. If divrat is in a directory named private, it will be accessible by functions in the parent directory, but not from the MATLAB command line:
>> foo
ans =
1
>> divrat(1,1)
??? Undefined function or method 'divrat' for input arguments of type 'double'.
>> which -all divrat
c:\work\divrat\private\divrat.m % Private to divrat
As others have pointed out, this is very probably a problem with the path of the function file not being in Matlab's 'path'.
One easy way to verify this is to open your function in the Editor and press the F5 key. This makes the Editor try to run the file, and if the file is not on the path, it will prompt you with a message box. Choose Add to Path there, and you should be good to go.
One side note: at the end of the above process, the Matlab command window will give an error about missing arguments: obviously, we didn't provide any arguments when we tried to run from the editor. But from now on you can use the function from the command line with the correct arguments.
The most common cause of this problem is that Matlab cannot find the file on its search path. Basically, Matlab looks for files in:
The current directory (pwd);
Directly in a directory on the path (to see the path, type path at the command line)
In a directory named @(whatever the class of the first argument is) that is in any directory above.
As someone else suggested, you can use the command which, but that is often unhelpful in this case - it tells you Matlab can't find the file, which you knew already.
So the first thing to do is make sure the file is locatable on the path.
Next thing to do is make sure that the file Matlab is finding (use which) requires the same type as the first argument you are actually passing. I.e., if w is supposed to be a different class, and there is a divrat function for that class, but w is actually empty ([]), then Matlab looks for double/divrat when there is only an @yourclass/divrat. This is just speculation on my part, but this often bites me.
The function itself is valid matlab-code. The problem must be something else.
Try calling the function from within the directory where it is located, or add that directory to your search path using addpath('pathname').
The error indicates that the function definition cannot be found. Make sure you're calling the function from the same working directory as where divrat.m is stored. Also make sure divrat is not a subfunction; it should be the first function declaration in the file. You can also try to call the function from within divrat.m itself, to see whether the problem is with the working directory or with the function.
By the way, why didn't you simply say
s = sqrt(diag(C));
Wouldn't it be the same?
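The two expressions are indeed equivalent: diag(diag(C)) just embeds the diagonal of C in a diagonal matrix, Matlab's sqrt is elementwise, and the outer diag extracts the diagonal again. A quick numeric check (sketched in Python with a made-up 2x2 matrix, since the point is just arithmetic):

```python
import math

# Hypothetical 2x2 symmetric matrix C
C = [[4.0, 1.0], [1.0, 9.0]]

d = [C[i][i] for i in range(2)]        # diag(C) as a vector
s_short = [math.sqrt(x) for x in d]    # sqrt(diag(C))

# The original: diag(diag(C)) builds a diagonal matrix, sqrt applies
# elementwise, and the outer diag extracts the diagonal again.
S = [[math.sqrt(C[i][j]) if i == j else 0.0 for j in range(2)]
     for i in range(2)]
s_long = [S[i][i] for i in range(2)]

print(s_short == s_long)
```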
Also, name it divrat.m, not divrat.M. This shouldn't matter on most OSes, but who knows...
You can also test whether matlab can find a function by using the which command, i.e.
which divrat
I am pretty sure the reason this problem happens is the license of the toolbox (package) to which the function belongs. Run which divrat and look at the result. If it returns the path of the function together with the comment "Has no license available", then the problem is license-related: the license of the package is not set up correctly. This mostly happens when the package (toolbox) containing the function is added later, i.e., after the original Matlab installation. Resolve the license issue and it will work fine.

"Use" the Perl file that h2ph generated from a C header?

The h2ph utility generates a .ph "Perl header" file from a C header file, but what is the best way to use this file? Like, should it be require or use?:
require 'myconstants.ph';
# OR
use myconstants; # after mv myconstants.ph myconstants.pm
# OR, something else?
Right now, I am doing the use version shown above, because with that one I never need to type parentheses after the constant. I want to type MY_CONSTANT and not MY_CONSTANT(), and I have use strict and use warnings in effect in the Perl files where I need the constants.
It's a bit strange though to do a use with this file since it doesn't have a module name declared, and it doesn't seem to be particularly intended to be a module.
I have just one file I am running through h2ph, not a hundred or anything.
I've looked at perldoc h2ph, but it didn't mention the subject of the intended mechanism of import at all.
Example input and output: For further background, here's an example input file and what h2ph generates from it:
// File myconstants.h
#define MY_CONSTANT 42
...
# File myconstants.ph - generated via h2ph -d . myconstants.h
require '_h2ph_pre.ph';
no warnings qw(redefine misc);
eval 'sub MY_CONSTANT () {42;}' unless defined(&MY_CONSTANT);
1;
Problem example: Here's an example of "the problem," where I need to use parentheses to get the code to compile with use strict:
use strict;
use warnings;
require 'myconstants.ph';
sub main {
    print "Hello world " . MY_CONSTANT; # error until parentheses are added
}
main;
which produces the following error:
Bareword "MY_CONSTANT" not allowed while "strict subs" in use at main.pl line 7.
Execution of main.pl aborted due to compilation errors.
Conclusion: So is there a better or more typical way that this is used, as far as following best practices for importing a file like myconstants.ph? How would Larry Wall do it?
You should require your file. As you have discovered, use accepts only a bareword module name, and it is wrong to rename myconstants.ph to have a .pm suffix just so that use works.
The choice of use or require makes no difference to whether parentheses are needed when you use a constant in your code. The resulting .ph file defines constants in the same way as the constant module, and all you need in the huge majority of cases is the bare identifier. One exception to this is when you are using the constant as a hash key, when
my %hash = (CONSTANT => 99);
my $val = $hash{CONSTANT};
doesn't work, as you are using the string CONSTANT as a key. Instead, you must write
my %hash = (CONSTANT() => 99);
my $val = $hash{CONSTANT()};
You may also want to wrap your require inside a BEGIN block, like this
BEGIN {
    require 'myconstants.ph';
}
to make sure that the values are available to all other parts of your code, including anything in subsequent BEGIN blocks.
The problem does somewhat lie in the require.
Since require is a statement that is evaluated at run time, it cannot have any effect on the parsing of the later part of the script. So when perl reads the MY_CONSTANT in the print statement, it does not even know of the existence of the subroutine, and will parse it as a bareword.
It is the same for eval.
One solution, as mentioned by others, is to put it into a BEGIN block. Alternatively, you may forward-declare it yourself:
require 'some-file';
sub MY_CONSTANT;
print 'some text' . MY_CONSTANT;
Finally, for my own part, I have never used .ph files in my Perl programming.

Writing a tuple in file in erlang

How do I write a tuple to a file in Erlang? What I have tried so far is:
Result={1,"OK",3},
file:write_file("/tmp/logs.txt", Result, [append])
But it gives a badarg error. Any solutions?
file:write_file takes an iolist, not a plain Erlang term. You could format the term as an iolist using io_lib:format:
file:write_file("/tmp/logs.txt", io_lib:format("~p.~n", [Result]), [append])
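For comparison, the same pattern — render the term as text, then append that text to the file — looks like this in Python (the path is built via tempfile purely for illustration):

```python
import os
import tempfile

# Same idea as io_lib:format("~p.~n", [Result]): format the term as
# text first, then append the text to the file.
result = (1, "OK", 3)
path = os.path.join(tempfile.gettempdir(), "logs.txt")
with open(path, "a") as f:
    f.write(repr(result) + "\n")
```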

Should I unescape bytea field in C-function for Postgresql and if so - how to do it?

I'm writing my own C function for PostgreSQL which has a bytea parameter. The function is defined as follows:
CREATE OR REPLACE FUNCTION putDoc(entity_type int, entity_id int,
doc_type text, doc_data bytea) RETURNS text
AS 'proj_pg', 'call_putDoc'
LANGUAGE C STRICT;
My function call_putDoc, written in C, reads doc_data and passes it to another function, such as file_magic, to determine the MIME type of the data, and then passes the data to the appropriate file converter.
I call this PostgreSQL function from a PHP script which loads the file contents into the last parameter, so I pass the file contents through pg_escape_bytea.
When the data reaches the call_putDoc C function, has it already been unescaped, and if not, how do I unescape it?
Edit: As I found, no, the data passed to the C function is not unescaped. How do I unescape it?
When it comes to programming C functions for PostgreSQL, the documentation explains some of the basics, but for the rest it's usually down to reading the source code for the PostgreSQL server.
Thankfully the code is usually well structured and easy to read. I wish it had more doc comments though.
Some vital tools for navigating the source code are either:
A good IDE; or
The find and git grep commands.
In this case, after having a look I think your bytea argument is being decoded - at least in Pg 9.2, it's possible (though rather unlikely) that 8.4 behaved differently. The server should automatically do that before calling your function, and I suspect you have a programming error in how you are calling your putDoc function from SQL. Without sources it's hard to say more.
Try calling putDoc from psql with some sample data you've verified is correctly escape-encoded for your 8.4 server
Try setting a breakpoint in byteain to make sure it's called before your function
Follow through the steps below to verify that what I've said applies to 8.4.
Set a breakpoint in your function and step through with gdb, using the print function as you go to examine the variables. There are lots of gdb tutorials that'll teach you the required break, backtrace, cont, step, next, print, etc commands, so I won't repeat all that here.
As for what's wrong: You could be double-encoding your data - for example, given your comments I'm wondering if you've base64 encoded data and passed it to Pg with bytea_output set to escape. Pg would then decode it ... giving you a bytea containing the bytea representation of the base64 encoding of the bytes, not the raw bytes themselves. (Edit Sounds like probably not based on comments).
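That double-encoding failure mode is easy to reproduce in miniature; here is a Python sketch with made-up data (nothing PostgreSQL-specific, just the encoding arithmetic):

```python
import base64

raw = b"\x89binary\x00data"       # hypothetical file contents
sent = base64.b64encode(raw)      # client base64-encodes before sending

# If the server then only undoes the bytea escaping, what your C
# function receives is the base64 *text*, not the original bytes...
assert sent != raw

# ...and recovering the raw bytes needs the extra decode you'd be missing:
assert base64.b64decode(sent) == raw
print(len(raw), len(sent))
```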
For correct use of bytea see:
http://www.postgresql.org/docs/current/static/runtime-config-client.html
http://www.postgresql.org/docs/current/static/datatype-binary.html
To say more I'd need source code.
Here's what I did:
A quick find -name bytea\* in the source tree locates src/include/utils/bytea.h. A comment there notes that the function definitions are in utils/adt/varlena.c - which turns out to actually be src/backend/utils/adt/varlena.c.
In bytea.h you'll also notice the definition of the bytea_output GUC parameter, which is what you see when you SHOW bytea_output or SET bytea_output in psql.
Let's have a look at a function we know does something with bytea data, like bytea_substr, in varlena.c. It's so short I'll include one of its declarations here:
Datum
bytea_substr(PG_FUNCTION_ARGS)
{
    PG_RETURN_BYTEA_P(bytea_substring(PG_GETARG_DATUM(0),
                                      PG_GETARG_INT32(1),
                                      PG_GETARG_INT32(2),
                                      false));
}
Many of the public functions are wrappers around private implementation, so the private implementation can be re-used with functions that have differing arguments, or from other private code too. This is such a case; you'll see that the real implementation is bytea_substring. All the above does is handle the SQL function calling interface. It doesn't mess with the Datum containing the bytea input at all.
The real implementation bytea_substring follows directly below the SQL interface wrappers in this particular case, so read on in varlena.c.
The implementation doesn't seem to refer to the bytea_output GUC, and basically just calls DatumGetByteaPSlice to do the work after handling some border cases. git grep DatumGetByteaPSlice shows us that DatumGetByteaPSlice is in src/include/fmgr.h, and is a macro defined as:
#define DatumGetByteaPSlice(X,m,n) ((bytea *) PG_DETOAST_DATUM_SLICE(X,m,n))
where PG_DETOAST_DATUM_SLICE is
#define PG_DETOAST_DATUM_SLICE(datum,f,c) \
pg_detoast_datum_slice((struct varlena *) DatumGetPointer(datum), \
(int32) (f), (int32) (c))
so it's just detoasting the datum and returning a memory slice. This leaves me wondering: has the decoding been done elsewhere, as part of the function call interface? Or have I missed something?
A look at byteain, the input function for bytea, shows that it's certainly decoding the data. Set a breakpoint in that function and it should trip when you call your function from SQL, showing that the bytea data is really being decoded.
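In spirit, the escape-format decoding that byteain performs amounts to the following; this is an illustrative Python sketch of the rules ('\\' becomes one backslash, '\nnn' octal becomes one byte), not the server's actual code:

```python
def bytea_unescape(text):
    """Rough model of escape-format bytea decoding: '\\\\' -> one
    backslash, '\\nnn' (octal digits) -> one byte; everything else
    passes through unchanged. Not the real byteain."""
    out = bytearray()
    i = 0
    while i < len(text):
        if text[i] == "\\" and text[i + 1:i + 2] == "\\":
            out.append(0x5C)          # escaped backslash
            i += 2
        elif (text[i] == "\\" and len(text) - i >= 4
              and all(c in "01234567" for c in text[i + 1:i + 4])):
            out.append(int(text[i + 1:i + 4], 8))   # octal escape
            i += 4
        else:
            out.append(ord(text[i]))  # plain character
            i += 1
    return bytes(out)

print(bytea_unescape(r"abc\000\\"))
```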
For example, let's see if byteain gets called when we call bytea_substr with:
SELECT substring('1234'::bytea, 2,2);
In case you're wondering how substring(bytea) gets turned into a C call to bytea_substr, look at src/include/catalog/pg_proc.h for the mappings.
We'll start psql and get the pid of the backend:
$ psql -q regress
regress=# select pg_backend_pid();
pg_backend_pid
----------------
18582
(1 row)
then in another terminal connect to that pid with gdb, set a breakpoint, and continue execution:
$ sudo -u postgres gdb -q -p 18582
Attaching to process 18582
... blah blah ...
(gdb) break bytea_substr
Breakpoint 1 at 0x6a9e40: file varlena.c, line 1845.
(gdb) cont
Continuing.
In the 1st terminal we execute in psql:
SELECT substring('1234'::bytea, 2,2);
... and notice that it hangs without returning a result. Good. That's because we tripped the breakpoint in gdb, as you can see in the 2nd terminal:
Breakpoint 1, bytea_substr (fcinfo=0x1265690) at varlena.c:1845
1845 PG_RETURN_BYTEA_P(bytea_substring(PG_GETARG_DATUM(0),
(gdb)
A backtrace with the bt command doesn't show byteain in the call path; it's all SQL function call machinery. So Pg is decoding the bytea before it passes it to bytea_substr.
You can now detach the debugger with quit. This won't quit the Pg backend, only detach and quit the debugger.

how to share common definition between c and tcl

For testing purposes, I need to share some definitions between Tcl and C. Is it possible to include a C-style header file in Tcl scripts? Any alternative suggestion is welcome, but I prefer not to write a parser for C header files.
SWIG supports Tcl so possibly you can make use of that. Also I remember seeing some code to parse C headers on the Tcl wiki - so you might try looking at the Parsing C page there. That should save you writing one from scratch.
If you're doing a full API of any complexity, you'd be best off using SWIG (or critcl†) to do the binding. SWIG can do a binding between a C API and Tcl with very little user input (often almost none). It should be noted though that the APIs it produces are not very natural from a Tcl perspective (because Tcl isn't C and has different idioms).
Yet if you are instead after some way of handling just the simplest parts of definitions — just the #defines of numeric constants — then the simplest way to deal with that is via a bit of regular expression parsing:
proc getDefsFromIncludeFile {filename} {
    set defs {}
    set f [open $filename]
    foreach line [split [read $f] "\n"] {
        # Doesn't handle all edge cases, but does do a decent job
        if {[regexp {^\s*#\s*define\s+(\w+)\s+([^\s\\]+)} $line -> def val]} {
            lappend defs $def [string trim $val "()"]
        }
    }
    close $f
    return $defs
}
It does a reasonably creditable job on Tcl's own headers. (Handling conditional definitions and nested #include statements is left as an exercise; I suggest you try to arrange your C headers so as to make that exercise unnecessary.) When I do that, the first few definitions extracted are:
TCL_ALPHA_RELEASE 0 TCL_BETA_RELEASE 1 TCL_FINAL_RELEASE 2 TCL_MAJOR_VERSION 8
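If you want to prototype the same extraction outside Tcl, the identical regex approach looks like this in Python (the header text here is made up for the example):

```python
import re

# Same regex idea as the Tcl proc: pull "#define NAME VALUE" pairs.
DEFINE_RE = re.compile(r"^\s*#\s*define\s+(\w+)\s+([^\s\\]+)")

def get_defs(header_text):
    defs = {}
    for line in header_text.splitlines():
        m = DEFINE_RE.match(line)
        if m:
            # strip surrounding parentheses, as the Tcl version does
            defs[m.group(1)] = m.group(2).strip("()")
    return defs

header = "#define TCL_MAJOR_VERSION 8\n#define FOO (42)\n"  # made-up header
print(get_defs(header))
```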
† Critcl is a different way of doing Tcl/C bindings, and it works by embedding the C language inside Tcl. It can produce very natural-working Tcl interfaces to C code; I think it's great. However, I don't think it's likely to be useful for what you are trying to do.
