I create an input stream from a string with
pANTLR3_UINT8 input_string = (pANTLR3_UINT8) "test";
pANTLR3_INPUT_STREAM stream = antlr3StringStreamNew(input_string, ANTLR3_ENC_8BIT, strlen((const char *) input_string), (pANTLR3_UINT8) "testname");
and then use my lexer and parser to process the string. When I'm done with this string I want to process a new one, but re-creating the lexer and parser objects seems inefficient.
I've found the reset method of the lexer and parser classes and the reuse method of the stream, but how do I use those to parse a new string?
I believe what you're looking for is the setCharStream() function.
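Something along these lines should work (a rough sketch against the 3.4 C runtime; lexer, tokens and parser stand for the objects you created the first time around, start_rule for whatever your entry rule is called, and the exact home of the reset calls may differ in your version):

pANTLR3_UINT8 next_string = (pANTLR3_UINT8) "second test";
/* reuse() points the existing input stream at the new buffer */
stream->reuse(stream, next_string, strlen((const char *) next_string), (pANTLR3_UINT8) "testname2");
/* re-attach the stream to the lexer; this also resets the lexer */
lexer->pLexer->setCharStream(lexer->pLexer, stream);
/* rewind the token stream and reset the parser's recognizer state */
tokens->reset(tokens);
parser->pParser->rec->reset(parser->pParser->rec);
/* parse the new string */
parser->start_rule(parser);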
I am trying to write a GUI that will display the name of the sketch it was generated from using a simple text() command. However, I am running into trouble getting any of the general JS solutions to work for me. Many solutions I have found use the filename reserved word, but that does not seem to be reserved in Processing 3.5.4. I have also tried parsing the strings using a similar method to what can be found here. I am very new to Processing and this is only my second attempt at using it.
Any advice would be greatly appreciated
You can get the path (as a string) to the sketch with sketchPath().
From there you could either parse the string (pull off everything after the last slash) to get the sketch name, or you can use sketchFile() to get a reference to the file itself and get the name from there:
String path = sketchPath();
File file = sketchFile(path);
String sketchName = file.getName();
println(sketchName);
You could combine this all into one line like so:
String sketchName = sketchFile(sketchPath()).getName();
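If you prefer the string-parsing route instead, here is a minimal sketch of that (relying on Processing's default java.io import for File):

String path = sketchPath();
// everything after the last path separator is the sketch folder name
String sketchName = path.substring(path.lastIndexOf(File.separator) + 1);
println(sketchName);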
Is it possible to get URL fragment parameters in C with GLib?
I've got a URL like file://localhost/home/me/notepad.txt#line=100,2
What's the best way to get the parameters specified at the end of the URL?
There’s no single GLib function which will do what you want. If you can use libsoup (also part of the GNOME stack), then you can do it using SoupURI:
g_autoptr(SoupURI) uri = soup_uri_new (uri_string);
const gchar *fragment = soup_uri_get_fragment (uri);
That will set fragment to line=100,2. You’ll have to do further parsing of whatever your fragment format is, by hand. g_strsplit() would be useful for that.
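For example, a rough sketch for the line=100,2 form from the question (the key=value,value layout is just assumed from that example):

/* assumes fragment != NULL */
gchar **parts = g_strsplit (fragment, "=", 2);        /* "line" and "100,2" */
if (g_strv_length (parts) == 2 && g_str_equal (parts[0], "line"))
  {
    gchar **coords = g_strsplit (parts[1], ",", -1);  /* "100" and "2" */
    guint64 line = g_ascii_strtoull (coords[0], NULL, 10);
    guint64 column = coords[1] != NULL ? g_ascii_strtoull (coords[1], NULL, 10) : 0;
    g_print ("line %" G_GUINT64_FORMAT ", column %" G_GUINT64_FORMAT "\n", line, column);
    g_strfreev (coords);
  }
g_strfreev (parts);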
You may also take a look at the parse_sftp_uri() function in gnome-terminal's terminal-nautilus.c file. It can easily be adapted for general URIs.
Unsure if you mean to parse notepad.txt#line=100,2 or #line=100,2, nevertheless my answer should work in both cases.
You can use the strrchr() (man strrchr) function to get the last occurrence of a character within a string.
Something like:
char *file = strrchr(url, '/');  /* find the last '/' */
if (file != NULL)
    file++;                      /* skip the '/' itself: "notepad.txt#line=100,2" */
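If it's just the fragment you're after, the same trick works with '#':

const char *hash = strrchr(url, '#');                   /* find the last '#' */
const char *fragment = (hash != NULL) ? hash + 1 : "";  /* "line=100,2", or "" if absent */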
Elixir's File.stream! splits on an assumed \n character.
Is it possible to specify, for example, \r\n or any other pattern?
Such a convenience would make File.stream! a handy file parser.
Edit: Added source file content:
iex(1)> File.read! "D:\\Projects\\Telegram\\PQ.txt"
"1039027537039357001\r\n1124138842463513719\r\n1137145765766942221\r\n1159807134726147157\r\n1162386423249503807\r\n1166092057686212149\r\n1192934946182607263\r\n1239437837009623463\r\n1242249431735251217\r\n1286092661601003031\r\n1300223652350017207\r\n1320700236992142661\r\n1322986082402655259\r\n1342729635050601557\r\n1342815051384338027\r\n1361578683715077199\r\n1381265403472415423\r\n1387654405700676857\r\n1414719090657425471\r\n1438176310698548801\r\n1440426998028857687\r\n1444777794598883737\r\n1448786004429696643\r\n1449069084476072141\r\n1449922801627060913\r\n1459186197300152561\r\n1470497644058466497\r\n1497532721434112879\r\n1514370843858307907\r\n1528087672407582373\r\n1530255914631110911\r\n1537681216742780453\r\n1547498566041252091\r\n1563354550428106363\r\n1570520040759209689\r\n1570650619548126013\r\n1572342415580617699\r\n1595238677050713949\r\n1602246062455069687\r\n1603930707387709439\r\n1620038771342153713\r\n1626781435762382063\r\n1628817368590631491\r\n1646011824126204499\r\n1654346190847567153\r\n1671293643237388043\r\n1674249379765115707\r\n1683876665120978837\r\n1700490364729897369\r\n1724114033281923457\r\n1729626235343064671\r\n1736390408379387421\r\n1742094280210984849\r\n1750652888783086363\r\n1756848379834132853\r\n1769689620230136307\r\n1791811376213642701\r\n1802412521744570741\r\n1816018323888992941\r\n1816202297040826291\r\n1833488086890603497\r\n1834281595607491843\r\n1840295490995033057\r\n1843931859412695937\r\n1845134226412607369\r\n1847514467055999659\r\n1868936961235125427\r\n18733753
Example:
iex(134)> s|> Enum.to_list
["1039027537039357001\n", "1124138842463513719\n", "1137145765766942221\n",
"1159807134726147157\n", "1162386423249503807\n", "1166092057686212149\n",
"1192934946182607263\n", "1239437837009623463\n", "1242249431735251217\n",
"1286092661601003031\n", "1300223652350017207\n", "1320700236992142661\n",
"1322986082402655259\n", "1342729635050601557\n", "1342815051384338027\n",
"1361578683715077199\n", "1381265403472415423\n", "1387654405700676857\n",
"1414719090657425471\n", "1438176310698548801\n", "1440426998028857687\n",
"1444777794598883737\n", "1448786004429696643\n", "1449069084476072141\n",
"1449922801627060913\n", "1459186197300152561\n", "1470497644058466497\n",
"1497532721434112879\n", "1514370843858307907\n", "1528087672407582373\n",
"1530255914631110911\n", "1537681216742780453\n", "1547498566041252091\n",
"1563354550428106363\n", "1570520040759209689\n", "1570650619548126013\n",
"1572342415580617699\n", "1595238677050713949\n", "1602246062455069687\n",
"1603930707387709439\n", "1620038771342153713\n", "1626781435762382063\n",
"1628817368590631491\n", "1646011824126204499\n", "1654346190847567153\n",
"1671293643237388043\n", "1674249379765115707\n", "1683876665120978837\n",
"1700490364729897369\n", "1724114033281923457\n", ...]
iex(135)> s|> String.to_integer|> Primes.factorize|> Enum.to_list
Elixir handles the differences between Windows and Unix just fine by always normalizing "\r\n" into "\n", so developers don't need to worry about both formats. That's what is happening in the example above, and that's what you should expect from the operations in both the IO and File modules.
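So for the numbers in the file above, trimming the normalized line ending is enough to get one integer per line; a small sketch:

# String.trim/1 also copes with any stray "\r" that survives
"D:\\Projects\\Telegram\\PQ.txt"
|> File.stream!()
|> Stream.map(&String.trim/1)
|> Stream.map(&String.to_integer/1)
|> Enum.take(3)
#=> [1039027537039357001, 1124138842463513719, 1137145765766942221]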
You could also open the file in raw mode and check the characters yourself.
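A sketch of that idea, assuming the file is small enough to read into memory: stream fixed-size byte chunks instead of lines, then split on the exact separator yourself.

# 2048-byte chunks bypass line handling entirely
raw =
  "D:\\Projects\\Telegram\\PQ.txt"
  |> File.stream!([], 2048)
  |> Enum.into("")

raw |> String.split("\r\n", trim: true) |> Enum.take(3)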
I developed an ANTLR 3.4 grammar which generates an AST for later parsing. The generated parser uses ANTLR's C interface. When the parser encounters an unexpected token, it adds a
"Tree Error Node" to the AST token stream and continues processing input. (Internally, "Tree Error Node" represents ANTLR3_TOKEN_INVALID.)
When I pass the output of the parser to the AST parser, it halts upon the "Tree Error Node". Is there any way to handle invalid tokens in an AST stream?
I'm using:
libantlr3c-3.4
antlr3.4
It turns out you can override the tree adaptor method "errorNode" to emit a user-specified token. That token can then be handled in the AST parser.
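In the C runtime this looks roughly like the following. Take it as a sketch from memory rather than gospel: verify the errorNode signature against antlr3basetreeadaptor.h in your libantlr3c, and MY_ERROR_TOKEN is a hypothetical token type declared in the grammar:

static pANTLR3_BASE_TREE
myErrorNode(pANTLR3_BASE_TREE_ADAPTOR adaptor, pANTLR3_TOKEN_STREAM tnstream,
            pANTLR3_COMMON_TOKEN start, pANTLR3_COMMON_TOKEN stop, pANTLR3_EXCEPTION e)
{
    /* Emit a normal node carrying MY_ERROR_TOKEN instead of
     * ANTLR3_TOKEN_INVALID, so the tree grammar can match it. */
    return adaptor->createTypeText(adaptor, MY_ERROR_TOKEN, (pANTLR3_UINT8) "ERROR");
}

/* after constructing the parser: */
parser->adaptor->errorNode = myErrorNode;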
You need to override Match() using the method described above and perform recovery in the parser (this is C# pseudocode):
public override object Match(IIntStream input, int ttype, BitSet follow)
{
    if (/* needs recovery */)
    {
        // Recover from the mismatch, i.e. skip until the next valid terminal.
    }
    return base.Match(input, ttype, follow);
}
Also, you need to recover from a mismatched token:
protected override object RecoverFromMismatchedToken(IIntStream input, int ttype, BitSet follow)
{
    if (/* needs recovery */)
    {
        if (/* unwanted token(input, ttype) */)
        {
            // Skip to the next valid terminal, consume it as if it were
            // valid, and return the next valid token.
        }
        if (/* missing token(input, follow) */)
        {
            // Skip to the next valid terminal, insert the missing symbol,
            // and return it.
        }
        // Otherwise, throw.
    }
    // Otherwise, call base.RecoverFromMismatchedToken(input, ttype, follow);
}
Let me know if there are additional questions.
I need to read a file from the file system and load the entire contents into a string in a Groovy controller. What's the easiest way to do that?
String fileContents = new File('/path/to/file').text
If you need to specify the character encoding, use the following instead:
String fileContents = new File('/path/to/file').getText('UTF-8')
The shortest way is indeed just
String fileContents = new File('/path/to/file').text
but in this case you have no control over how the bytes in the file are interpreted as characters. AFAIK, Groovy tries to guess the encoding here by looking at the file content.
If you want a specific character encoding you can specify a charset name with
String fileContents = new File('/path/to/file').getText('UTF-8')
See API docs on File.getText(String) for further reference.
A slight variation...
new File('/path/to/file').eachLine { line ->
println line
}
In my case new File() doesn't work; it causes a FileNotFoundException when run in a Jenkins pipeline job. The following code solved this, and is even easier in my opinion:
def fileContents = readFile "path/to/file"
I still don't understand this difference completely, but maybe it'll help anyone else with the same trouble. Possibly the exception was caused because new File() refers to a file on the system that executes the Groovy code, which was a different system from the one that contains the file I wanted to read.
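If you need a specific encoding, I believe the readFile pipeline step also accepts an encoding parameter:

def fileContents = readFile file: 'path/to/file', encoding: 'UTF-8'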
The easiest way would be:
new File(filename).getText()
which means you could just do:
new File(filename).text
Here you can find some other ways to do the same.
Read a file:
File file1 = new File("C:\\Build\\myfolder\\myTestfile.txt");
List<String> lines = file1.readLines();
Read a full file:
File file1 = new File("C:\\Build\\myfolder\\myfile.txt");
String yourData = file1.getText();
Read a file line by line:
File file1 = new File("C:\\Build\\myfolder\\myTestfile.txt");
List<String> lines = file1.readLines();
for (int i = 0; i <= 30; i++) { // specify how many lines to read, e.g. 30
    log.info lines.get(i)
}
Create a new file:
new File("C:\\Temp\\FileName.txt").createNewFile();