Writing a tuple to a file in Erlang - file

How do I write a tuple to a file in Erlang? What I have tried so far is:
Result = {1, "OK", 3},
file:write_file("/tmp/logs.txt", Result, [append])
But it gives a badarg error.
Any solutions?

file:write_file takes an iolist, not a plain Erlang term. You could format the term as an iolist using io_lib:format:
file:write_file("/tmp/logs.txt", io_lib:format("~p.~n", [Result]), [append])

Related

C - How do I make a function that finds the location of a file it has to use just by giving it the filename? (Windows)

I am having trouble with the function fopen(). I would always send the exact location of a file as an argument to fopen(),
which would look something like this:
fopen("c:\\Users/Username/Desktop/Projects/program_name/version 1.0/data/important_data.txt", "r");
That works just fine on my computer, but what if I decide to transfer the program to another computer?
The location would completely change.
It would no longer be:
c:\\Users/Username/Desktop/Projects/program_name/version 1.0/data/important_data.txt
But it would rather be something like:
c:\\Users/OtherUsername/Desktop/program_name/version 1.0/data/important_data.txt
My question is, how do I make a portable function which can obtain the location of a file, if I only give the
name (including the type e.g. .txt) of the file to the function?
Keep in mind, I've been learning C for less than a year. There are still a lot of things which I must learn, and
things like this are of high importance.
This is operating-system specific. On Linux you can use the locate shell command and parse its output (http://www.linfo.org/locate.html). See also:
C: Run a System Command and Get Output?
How do I execute a Shell built-in command with a C function?
However, this approach will only work on Linux. I think yano's solution in the comments above is better...

Parsing Testing Data into C Program

I'm developing a module that will be run on an Embedded ARM chip to run an attitude controller, which is written in C. We have a MATLAB simulation, with a bunch of low-level functions that I'd like to be able to make unit tests for with data generated by the MATLAB program.
Each function is reasonably complex, and I'd like to calculate the error between the Matlab output and the C output for validation purposes. Each function has the same Inputs and Outputs between the two implementations, so they should match (to an allowable tolerance).
Are there any good existing file formats that could be useful? The types of test data would be:
<Test Input 1> <Test Input 2> <Test input 3> <Expected Output 1> <Expected output 2>
Where inputs and outputs are arbitrary single floats, arrays or matrices. I have considered XML because there are some nice parsers, but that's about all I know.
Any suggestions or direction?
An easy way is to use the CSV file format:
it is easy to handle from C, and
you can open the files in OpenOffice/Excel later by just changing the file suffix to *.csv.
It sounds like you want to run these unit tests from C? Have you considered running them in MATLAB instead? If so, you would be able to leverage the MATLAB Unit Test Framework and parameterized testing to encode the actual and expected values (using the "sequential" ParameterCombination attribute) in your MATLAB test. This would require that you create MEX wrappers for your C code so that you can invoke it from MATLAB, but other than that extra step this could be quite seamless. Also, have you looked into using MATLAB Coder for this?
The MATLAB Unit Test would look something like this:
classdef Times2Test < matlab.unittest.TestCase
    properties(TestParameter)
        input = {1,2,3};
        expectedResult = {2,4,6};
    end
    methods(Test, ParameterCombination='sequential')
        function testMATLABSimulation(testCase, input, expectedResult)
            actualResult = times2(input);
            testCase.verifyEqual(actualResult, expectedResult, ...
                'RelTol', 1e-6);
        end
        function testCAlgorithm(testCase, input, expectedResult)
            % Must expose to MATLAB by compiling the C code to a MEX file
            actualResult = times2Mex(input);
            testCase.verifyEqual(actualResult, expectedResult, ...
                'RelTol', 1e-6);
        end
    end
end
Since each function has the same inputs in both implementations, there is no reason not to create the input files in the simplest form possible: just numbers!
You know exactly the type and number of values you want to read, so just read them using fscanf.
The file could look like:
12.3 100 200.3
1 2 3
4 5 6
7 8 9
The first row is the arbitrary float numbers, you read each one into a variable.
The next 9 are a matrix, so you read them in a loop into a 3x3 matrix, etc...
There is one bit in your question which is kind of an eyebrow-raiser:
"inputs and outputs are arbitrary single floats, arrays or matrices". This is going to add some complexity, but maybe there is no way around that.
The XML file format is a good choice because it gives you a lot of flexibility, and you can inspect your tests in an editor to help you make sense of them.
But perhaps an even better choice is the JSON file format. It offers the same flexibility as XML but is not as heavyweight. There are open-source libraries available for working with it in C, and I'm sure MATLAB can export data in this format as well.

What do you call the "happy-dog" part of the file "happy-dog.png"?

I just realized I don't know what the file part of file.ext is called.
The whole file.ext is called a file or filename, and ext is called the extension, but what do you call the file part itself of file.ext?
For example, happy-dog.png: the whole filename is happy-dog.png and the extension is png, but what do you call happy-dog?
It's not basename. Is it something like titlename? Or filepart? Any ideas?
I believe there is no short name for this thing. Some libraries just refer to it with names like "filename-without-extension" or "filename-without-path-or-extension".
You could use the term "basename", because that is the program or function often used to generate this thing. It is not quite accurate because basename may or may not strip the extension depending on what arguments you pass it, but I think programmers would understand you.
FileBaseName[] in Mathematica.

xmlParseFile vs xmlReadFile (libxml2)

I'm writing some C code using the libxml2 library to read an XML file. There seem to be two different functions for this purpose, xmlParseFile and xmlReadFile, and I'm not sure of the difference between them (besides the fact that xmlReadFile() takes some additional parameters).
The examples on the libxml2 website sometimes use xmlParseFile and sometimes xmlReadFile.
So when should you use xmlParseFile and when should you use xmlReadFile?
I haven't been able to find anything that explains this.
xmlReadFile() is a bit more powerful, as it is able to take a URL instead of a local file path and lets you specify more options (http://xmlsoft.org/html/libxml-parser.html#xmlParserOption), so I tend to use it instead of xmlParseFile(). That said, if you are parsing a local XML file and not using the parser options, you will be fine with xmlParseFile().
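For reference, a minimal xmlReadFile() sketch (the file name config.xml is a placeholder; build against libxml2, e.g. with the flags from `xml2-config --cflags --libs`):

```c
#include <stdio.h>
#include <libxml/parser.h>

int main(void)
{
    /* NULL encoding = auto-detect; pass parser options in the last arg */
    xmlDocPtr doc = xmlReadFile("config.xml", NULL, XML_PARSE_NOBLANKS);
    if (doc == NULL) {
        fprintf(stderr, "failed to parse config.xml\n");
        return 1;
    }

    xmlNodePtr root = xmlDocGetRootElement(doc);
    if (root != NULL)
        printf("root element: %s\n", (const char *)root->name);

    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}
```

With options set to 0 and a local path, this behaves much like xmlParseFile(); the extra arguments are where the added power lives.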
xmlReadFile() is more powerful and is the newer interface for parsing XML. I am also using it in place of xmlParseFile().
I have XML arriving in a character buffer 'msg' over a TCP connection, so I use the libxml2 call xmlReadDoc() instead, as follows, with the options XML_PARSE_NOBLANKS and XML_PARSE_OLD10:
xmlDocPtr parsed_xml_dom;
parsed_xml_dom = xmlReadDoc((xmlChar *)(msg), NULL, NULL, XML_PARSE_NOBLANKS| XML_PARSE_OLD10);

Should I unescape bytea field in C-function for Postgresql and if so - how to do it?

I am writing my own C function for PostgreSQL which has a bytea parameter. The function is defined as follows:
CREATE OR REPLACE FUNCTION putDoc(entity_type int, entity_id int,
doc_type text, doc_data bytea) RETURNS text
AS 'proj_pg', 'call_putDoc'
LANGUAGE C STRICT;
My function call_putDoc, written in C, reads doc_data, passes the data to another function like file_magic to determine its MIME type, and then passes the data to the appropriate file converter.
I call this PostgreSQL function from a PHP script which loads the file content into the last parameter. So I pass the file contents through pg_escape_bytea.
When the data reaches the call_putDoc C function, has it already been unescaped, and if not, how do I unescape it?
Edit: As I found out, no, the data passed to the C function is not unescaped. How do I unescape it?
When it comes to programming C functions for PostgreSQL, the documentation explains some of the basics, but for the rest it's usually down to reading the source code for the PostgreSQL server.
Thankfully the code is usually well structured and easy to read. I wish it had more doc comments though.
Some vital tools for navigating the source code are either:
A good IDE; or
The find and git grep commands.
In this case, after having a look I think your bytea argument is being decoded - at least in Pg 9.2, it's possible (though rather unlikely) that 8.4 behaved differently. The server should automatically do that before calling your function, and I suspect you have a programming error in how you are calling your putDoc function from SQL. Without sources it's hard to say more.
Try calling putDoc from psql with some sample data you've verified is correctly escape-encoded for your 8.4 server
Try setting a breakpoint in byteain to make sure it's called before your function
Follow through the steps below to verify that what I've said applies to 8.4.
Set a breakpoint in your function and step through with gdb, using the print function as you go to examine the variables. There are lots of gdb tutorials that'll teach you the required break, backtrace, cont, step, next, print, etc commands, so I won't repeat all that here.
As for what's wrong: You could be double-encoding your data - for example, given your comments I'm wondering if you've base64 encoded data and passed it to Pg with bytea_output set to escape. Pg would then decode it ... giving you a bytea containing the bytea representation of the base64 encoding of the bytes, not the raw bytes themselves. (Edit Sounds like probably not based on comments).
For correct use of bytea see:
http://www.postgresql.org/docs/current/static/runtime-config-client.html
http://www.postgresql.org/docs/current/static/datatype-binary.html
To say more I'd need source code.
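For reference, once the call reaches C the argument is already plain binary data; a sketch of accessing it with the standard fmgr macros (untested here; the function name and argument order mirror the question's CREATE FUNCTION):

```c
#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(call_putDoc);

Datum
call_putDoc(PG_FUNCTION_ARGS)
{
    /* doc_data is the 4th argument (0-based index 3) */
    bytea *doc_data = PG_GETARG_BYTEA_PP(3);

    /* already-decoded raw bytes -- no unescaping needed */
    char *bytes = VARDATA_ANY(doc_data);
    int   len   = VARSIZE_ANY_EXHDR(doc_data);

    elog(DEBUG1, "call_putDoc got %d bytes", len);
    (void) bytes;  /* hand off to file_magic / the converter here */

    PG_RETURN_TEXT_P(cstring_to_text("ok"));
}
```

If you see escaped text in bytes here, the escaping happened client-side (e.g. double-encoding in PHP), not in the server's bytea handling.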
Here's what I did:
A quick find -name bytea\* in the source tree locates src/include/utils/bytea.h. A comment there notes that the function definitions are in utils/adt/varlena.c - which turns out to actually be src/backend/utils/adt/varlena.c.
In bytea.h you'll also notice the definition of the bytea_output GUC parameter, which is what you see when you SHOW bytea_output or SET bytea_output in psql.
Let's have a look at a function we know does something with bytea data, like bytea_substr, in varlena.c. It's so short I'll include its definition here:
Datum
bytea_substr(PG_FUNCTION_ARGS)
{
    PG_RETURN_BYTEA_P(bytea_substring(PG_GETARG_DATUM(0),
                                      PG_GETARG_INT32(1),
                                      PG_GETARG_INT32(2),
                                      false));
}
Many of the public functions are wrappers around private implementation, so the private implementation can be re-used with functions that have differing arguments, or from other private code too. This is such a case; you'll see that the real implementation is bytea_substring. All the above does is handle the SQL function calling interface. It doesn't mess with the Datum containing the bytea input at all.
The real implementation bytea_substring follows directly below the SQL interface wrappers in this particular case, so read on in varlena.c.
The implementation doesn't seem to refer to the bytea_output GUC, and basically just calls DatumGetByteaPSlice to do the work after handling some border cases. git grep DatumGetByteaPSlice shows us that DatumGetByteaPSlice is in src/include/fmgr.h, and is a macro defined as:
#define DatumGetByteaPSlice(X,m,n) ((bytea *) PG_DETOAST_DATUM_SLICE(X,m,n))
where PG_DETOAST_DATUM_SLICE is
#define PG_DETOAST_DATUM_SLICE(datum,f,c) \
pg_detoast_datum_slice((struct varlena *) DatumGetPointer(datum), \
(int32) (f), (int32) (c))
so it's just detoasting the datum and returning a memory slice. This leaves me wondering: has the decoding been done elsewhere, as part of the function call interface? Or have I missed something?
A look at byteain, the input function for bytea, shows that it's certainly decoding the data. Set a breakpoint in that function and it should trip when you call your function from SQL, showing that the bytea data is really being decoded.
For example, let's see if byteain gets called when we call bytea_substr with:
SELECT substring('1234'::bytea, 2,2);
In case you're wondering how substring(bytea) gets turned into a C call to bytea_substr, look at src/include/catalog/pg_proc.h for the mappings.
We'll start psql and get the pid of the backend:
$ psql -q regress
regress=# select pg_backend_pid();
pg_backend_pid
----------------
18582
(1 row)
then in another terminal connect to that pid with gdb, set a breakpoint, and continue execution:
$ sudo -u postgres gdb -q -p 18582
Attaching to process 18582
... blah blah ...
(gdb) break bytea_substr
Breakpoint 1 at 0x6a9e40: file varlena.c, line 1845.
(gdb) cont
Continuing.
In the 1st terminal we execute in psql:
SELECT substring('1234'::bytea, 2,2);
... and notice that it hangs without returning a result. Good. That's because we tripped the breakpoint in gdb, as you can see in the 2nd terminal:
Breakpoint 1, bytea_substr (fcinfo=0x1265690) at varlena.c:1845
1845 PG_RETURN_BYTEA_P(bytea_substring(PG_GETARG_DATUM(0),
(gdb)
A backtrace with the bt command doesn't show byteain anywhere in the call path; it's all SQL function-call machinery. So Pg has already decoded the bytea before passing it to bytea_substr.
You can now detach the debugger with quit. This won't quit the Pg backend, only detach and quit the debugger.