I'm new to Prolog and have just started looking around. I read the Definite Clause Grammars chapter in both Simply Logical and Learn Prolog Now!, so now I wanted to get started with some exercises, but I'm stuck.
I have to read from a file with this syntax:
setName = {element1, element2, ..., elementN}.
element1: element2 > element3.
Now, I have read that when you define a DCG you get a parser for free, so I wanted to do that to get the data from my file into the Prolog program.
My problem is that in all the examples I have read they always provide a basic dictionary like
article --> [the].
but I cannot do that because I don't know what is going to be written in the file.
Any suggestions?
In SWI-Prolog, consider using library(dcg/basics). It provides building blocks that you can use in your DCG. Focus on a clear, declarative description of what the contents of the file look like, and state this with a DCG. Then use phrase_from_file/2 from library(pio) to apply the DCG to a file.
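To make that concrete, here is a minimal sketch of what such a DCG could look like for the two kinds of lines in the question. It is only a sketch: the rule names (set_def, relation, identifier, lines), the term shapes and the file name in the query are made up for illustration, and element names are assumed to be any run of characters that contains no delimiter.

:- use_module(library(dcg/basics)).
:- use_module(library(pio)).

% setName = {element1, element2, ..., elementN}.
set_def(set(Name, Elements)) -->
    identifier(Name), blanks, "=", blanks,
    "{", elements(Elements), "}", blanks, ".".

elements([E|Es]) --> blanks, identifier(E), blanks, more_elements(Es).
more_elements(Es) --> ",", !, elements(Es).
more_elements([]) --> [].

% element1: element2 > element3.
relation(rel(A, B, C)) -->
    identifier(A), blanks, ":", blanks,
    identifier(B), blanks, ">", blanks,
    identifier(C), blanks, ".".

% An element or set name: one or more characters up to the next delimiter.
identifier(Name) -->
    string_without(" \t\n,{}=:>.", Codes),
    { Codes \== [], atom_codes(Name, Codes) }.

% A file is a sequence of such lines, possibly separated by whitespace.
lines([]) --> blanks, eos, !.
lines([L|Ls]) --> blanks, ( set_def(L) ; relation(L) ), lines(Ls).

The parse is then a single query, e.g. ?- phrase_from_file(lines(Terms), 'input.txt'). and Terms comes back as a list of set/2 and rel/3 terms that the rest of your program can work with, without you ever having to list the element names themselves in the grammar.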
I'm using ESNACC for compiling multiple ASN.1 source files to C code. For ease of understanding, I will explain the scenario here as succinctly as possible:
FileA.asn1 contains the following:
FileA DEFINITIONS ::=
BEGIN
A ::= SEQUENCE
{
AContent [0] OCTET STRING (CONTAINING FileB.B)
}
END
FileB.asn1 contains the following:
FileB DEFINITIONS ::=
BEGIN
B ::= SEQUENCE
{
BElem1 [0] INTEGER,
BElem2 [1] INTEGER
}
END
I used ESNACC to compile both files in one command. Upon analysing the C source files generated, I observed that the AContent field will be decoded as a constructed OCTET STRING (the data being received in the application guarantees that the field will be specified as constructed) with its contents being filled into a simple string. This means that FileB does not come into the picture at all. I was hoping that AContent would be further decoded with a structure of FileB being filled, so that I can easily access the elements within. This does not seem to be the case.
I'm fairly new to ASN.1, so please let me know if my understanding is wrong in any way.
Is ESNACC not capable of generating code that properly supports the CONTAINING keyword?
Are there other compilers that are able to do this?
Can this be done by using ESNACC in any way?
If this cannot be done using ESNACC, and I don't want to use any other compiler, how would I access the contents within AContent at runtime easily?
I am not sure of the capabilities of ESNACC, but there are many other compilers that support the CONTAINING keyword. An excellent list of compilers can be found at https://www.itu.int/en/ITU-T/asn1/Pages/Tools.aspx which is part of the ITU-T ASN.1 Project.
Heimdal's ASN.1 compiler (lib/asn1/) has support for the funky Information Object System syntax extensions that allow you to declare things such as what goes into Certificate extensions (for example), and the generated code will decode everything recursively in one go.
I have a large source-code package in C and I would like to spell check all string literals and comments. After adding exceptions to a file, I would like to perform the same procedure on each release, to see whether any spelling errors have been introduced.
I have checked ispell, hunspell and aspell but, to my disappointment and surprise, although they do seem to understand HTML, TeX and a few other languages, they do not have a C mode. The closest I found was a "ccpp" filter mentioned for aspell, but when I run "aspell dump filters" the ccpp filter is not listed.
Any ideas?
You have to write a lexer first to extract the string constants and comments to a text file, together with the associated line and column in the source file.
(ply or lex/yacc can be useful here, but it needs some coding.)
Then run whatever spell checker you like on the extracted text, parse its report, and trace each hit back to the original location in the C file.
Or connect the spell checker directly to your lexer.
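If you go that route, here is a minimal sketch in Python of the extraction step (Python is mentioned above via ply; this sketch uses a single regular expression instead of a real lexer, so it will miss some corner cases of the C grammar, and the script name and output format are my own choices):

#!/usr/bin/env python3
# Pull C string literals and comments out of source files, one per line,
# prefixed with file:line so a spelling report can be traced back.
import re
import sys

# Scanning left to right means quotes inside comments and comment markers
# inside strings are not mis-classified; character constants are matched
# only so that they can be skipped.
TOKEN_RE = re.compile(
    r'"(?:\\.|[^"\\\n])*"'       # double-quoted string literal
    r"|'(?:\\.|[^'\\\n])*'"      # character constant (ignored below)
    r'|//[^\n]*'                 # line comment
    r'|/\*.*?\*/',               # block comment
    re.DOTALL)

def extract(path):
    text = open(path, encoding="utf-8", errors="replace").read()
    for m in TOKEN_RE.finditer(text):
        tok = m.group(0)
        if tok.startswith("'"):              # skip character constants
            continue
        line = text.count("\n", 0, m.start()) + 1
        if tok.startswith('"'):
            body = tok[1:-1]                 # drop the quotes
        elif tok.startswith("//"):
            body = tok[2:]
        else:
            body = tok[2:-2]                 # drop /* and */
        body = " ".join(body.split())        # keep each hit on one report line
        print(f"{path}:{line}:{body}")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        extract(filename)

You can then feed only the text column to the spell checker while keeping the file:line prefix for tracing back, for example: python extract_strings.py src/*.c > strings.txt && cut -d: -f3- strings.txt | aspell list | sort -u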
I am trying to write C code that will find hyperlinks in an e-mail and replace them.
Is using the PCRE library a good way to do this?
Since PCRE is allegedly too slow, is there an alternative?
C is the last language I would choose for this. Firstly, if you want to do this with high accuracy, use a MIME parser to get the HTML body out. Java has mime4j, Perl has MIME::Parser, Python has email, etc. This isn't too hard, and I'm willing to help with this step in any of these languages if you'd like. Secondly, use an HTML parser to isolate the links.
If you're OK with some mistakes, then just write a one-line program in Perl or PHP, or even sed. Really. If you are replacing the links with a fixed URL, use sed. If you are modifying the URL, the only reason this won't work as-is is that you'll probably have to URL-encode it, which a P-language can handle in one line.
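To illustrate the first route, here is a minimal sketch in Python, since its email package is mentioned above; the replacement URL is a made-up placeholder and the single regex for spotting http(s) links is a deliberate simplification:

#!/usr/bin/env python3
# Read a complete mail message on stdin, rewrite every http(s) link in the
# text/plain and text/html parts, and write the modified message to stdout.
import re
import sys
from email import policy
from email.parser import BytesParser

URL_RE = re.compile(r'https?://[^\s"\'<>]+')
REPLACEMENT = 'https://example.invalid/redirect'   # hypothetical target URL

msg = BytesParser(policy=policy.default).parse(sys.stdin.buffer)

for part in msg.walk():
    ctype = part.get_content_type()
    if ctype in ('text/plain', 'text/html'):
        text = part.get_content()                          # decoded to str
        part.set_content(URL_RE.sub(REPLACEMENT, text),
                         subtype=ctype.split('/', 1)[1])   # re-encode the part

sys.stdout.buffer.write(msg.as_bytes())

Doing the same in C means pulling in a MIME library plus a regex or HTML parser yourself, which is the point being made above.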
For testing purposes, I need to share some definitions between Tcl and C. Is it possible to include a C-style include file in Tcl scripts? Any alternative suggestion is welcome, but I would prefer not to write a parser for C header files.
SWIG supports Tcl, so you may be able to make use of that. I also remember seeing some code to parse C headers on the Tcl wiki, so you might try looking at the Parsing C page there. That should save you from writing one from scratch.
If you're doing a full API of any complexity, you'd be best off using SWIG (or critcl†) to do the binding. SWIG can do a binding between a C API and Tcl with very little user input (often almost none). It should be noted though that the APIs it produces are not very natural from a Tcl perspective (because Tcl isn't C and has different idioms).
Yet if you are instead after some way of handling just the simplest parts of definitions — just the #defines of numeric constants — then the simplest way to deal with that is via a bit of regular expression parsing:
proc getDefsFromIncludeFile {filename} {
    set defs {}
    set f [open $filename]
    foreach line [split [read $f] "\n"] {
        # Doesn't handle all edge cases, but does do a decent job
        if {[regexp {^\s*#\s*define\s+(\w+)\s+([^\s\\]+)} $line -> def val]} {
            lappend defs $def [string trim $val "()"]
        }
    }
    close $f
    return $defs
}
It does a reasonably creditable job on Tcl's own headers. (Handling conditional definitions and nested #include statements is left as an exercise; I suggest you try to arrange your C headers so as to make that exercise unnecessary.) When I do that, the first few definitions extracted are:
TCL_ALPHA_RELEASE 0 TCL_BETA_RELEASE 1 TCL_FINAL_RELEASE 2 TCL_MAJOR_VERSION 8
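A hypothetical usage example (the path to tcl.h is an assumption; point it at whichever header you actually need):

# Load the extracted #defines into an array and look one up
array set tclDefs [getDefsFromIncludeFile "/usr/include/tcl.h"]
puts $tclDefs(TCL_MAJOR_VERSION)   ;# prints 8 with the headers above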
† Critcl is a different way of doing Tcl/C bindings, and it works by embedding the C language inside Tcl. It can produce very natural-working Tcl interfaces to C code; I think it's great. However, I don't think it's likely to be useful for what you are trying to do.
Beginner question: I am trying to use SAS macro arrays as explained in this article: http://www2.sas.com/proceedings/sugi31/040-31.pdf, specifically in the section %ARRAY WITH DATA= AND VAR=. Unfortunately there are no examples of a full program using this, and I can't find any simple examples online. I tried to create a simple example, guessing at some things, but it didn't work. (I got two errors for each macro: "Apparent invocation of macro ARRAY not resolved." and "Statement is not valid or it is used out of proper order.") What am I doing wrong?
Here is the code:
data data1;
input variable1;
datalines;
1
2
3
4
run;
%array(array1, data=data1, var=variable1);
%do_over(array1, phrase=PROC PRINT DATA=data1(obs=?));
run;
(Also, does anyone know the name of the SAS website which is sort of like this one? I remember seeing it but I can't find it again.)
Thanks!
You can download a zip file with the macros at the SAS Community website: http://www.sascommunity.org/wiki/Tight_Looping_with_Macro_Arrays
%include them in your SAS program before the %array and %do_over calls and it should work; the "Apparent invocation of macro ARRAY not resolved" error means the macro definitions have not yet been compiled in your session.
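For example (a minimal sketch; the directory and the macro file names are my assumptions, so substitute whatever the downloaded zip actually contains):

%include "C:\sasmacros\array.sas";    /* compiles the %ARRAY macro   */
%include "C:\sasmacros\do_over.sas";  /* compiles the %DO_OVER macro */

%array(array1, data=data1, var=variable1);
%do_over(array1, phrase=PROC PRINT DATA=data1(obs=?));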