Using org-mode as a flat file database and sanitizing input - database

I'd like to use an org-mode file as a flat file database that can be edited both programmatically and by hand. An example follows showing a list of bookmarks.
* Somebody's blog :: I like org-mode
:url: http://somebody.com/org
** Quotation 1
:date: 2013/01/13 08:32:11 EST
Very interesting observations here.
** Quotation 2
:date: 2013/01/13 08:33:46 EST
A marvelous code snippet.
* Man bites dog
:url: http://newssite.com/today
I'd like emacs or a webserver cgi-script or similar to edit such a file (in the example above, to add more bookmarks or more quotations to existing bookmarks).
The problem is that when accepting arbitrary selections from websites to insert under an org-mode heading, it becomes necessary to sanitize the input so that, at minimum, quoted lines starting with asterisks don't affect the file's structure: if a quotation starts with "* this is pathological example" and is inserted into the file under some heading, it will appear as a new first-level (h1) heading when I open the file in emacs.
How can I meet the twin goals of (i) an editable org-mode flat file database (this rules out escaping and all our XML tricks) and (ii) isolating arbitrary inputs?
anti-solution: #+BEGIN_QUOTE wouldn't work because lines starting with "* " are rendered as new headings.
possibility 1: box/rebox everything from the outside world (http://www.emacswiki.org/emacs/BoxQuote); this seems excessive, though.
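One minimal way to isolate pasted text (a sketch only; the function name and the one-space-indent convention are my own, not an established org-mode mechanism) is to indent any line whose first column would otherwise be significant to org-mode before inserting it under a heading:

import re

def sanitize_for_org(text):
    """Indent lines that org-mode would parse as structure.

    Headings ("* foo") and keywords ("#+BEGIN_QUOTE") only count when the
    marker sits in column 0, so one leading space is enough to neutralize
    them while keeping the file readable and hand-editable.
    """
    safe = []
    for line in text.splitlines():
        if re.match(r'\*+\s', line) or line.startswith('#+'):
            line = ' ' + line   # push the marker out of column 0
        safe.append(line)
    return '\n'.join(safe)

# "* this is pathological example" comes back as " * this is pathological example",
# which no longer becomes a new first-level heading when inserted under a node.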

Related

CSV file not recognized as csv, reason nominal value not declared in header

I am trying to load a dataset in Weka. I have tried many solutions, such as converting to ARFF format, fixing commas, etc., but they all failed. Could any of you give me a working solution, or load this dataset in the correct format?
Here is a link to the dataset
Instead of using Weka's functionality for reading CSV files, you could use ADAMS (developed at the same university; I'm the lead developer).
Download the adams-ml-app snapshot and then use the Weka Investigator to load/save the file:
Load it as ADAMS Spreadsheets (.csv, .csv.gz)
Save it as Arff data files (.arff, .arff.gz) or Simple ARFF data files (.arff, .arff.gz)
The Reviews column contains an erroneous 3.0M, which prevents it from becoming numeric.
If you want an introduction to the Weka Investigator, take a look at my talk from the Weka User Conference 2021: Taking Weka to the next level with ADAMS.
There are too many issues with lines in this file.
In line 23, I eliminated the odd-looking brackets.
I removed all single quotes (').
I eliminated all repeated double quotes ("").
In line 10474, the first two fields (before the number) didn't seem to be separated, so I added a comma.
This allowed the file to go through initial screening, but...
The file contains a lot of odd emojis. I started to eliminate them one by one, but there are clearly more of these than I wish to deal with.
Each time I got rid of one, it would read farther into the file, then stop at the next one.
If I just try to read the top of the file (the first 20 lines, before we get to any of these problems), it reads fine.
My partial editing can be found here: https://www.dropbox.com/s/ij707mb23dt1jvz/googleplaystore3.csv?dl=0
I think if you clear up the remaining emojis the file should be usable.
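For what it's worth, the manual cleanup described above can be scripted. Here is a rough Python sketch, assuming the file is UTF-8 and that simply dropping every non-ASCII character (the emojis) is acceptable; the file names are placeholders:

import re

with open('googleplaystore.csv', encoding='utf-8', errors='replace') as f:
    text = f.read()

text = text.replace("'", '')                 # remove all single quotes
text = text.replace('""', '')                # remove repeated double quotes
text = re.sub(r'[^\x00-\x7F]', '', text)     # strip emojis / non-ASCII characters

with open('googleplaystore_clean.csv', 'w', encoding='ascii') as f:
    f.write(text)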

Importing loosely structured data into database

I get daily data feeds with data that is only loosely structured. I need to import it into a database so I can run a report that finds new records and changes to existing records.
The data looks like this:
--------------------------------
blah:
foo
bar
lorum: ipsum
dolor: sit
foo: bar
bar: foo
123-555-1212
Lorum / Ipsum / Dolor / Sit
Foo / Bar
--------------------------------
As you can see, there are some field headings like "blah", "lorum", etc., but some data lacks a heading, like the phone number or the slash-delimited list. Some headings are on the same line as their values and others are not.
Just to keep us on our toes, the records do not have the same number of fields.
So I'm thinking that the parser needs at least 3 ways to handle the data, like:
if "heading:$" then grab the next lines until the next "*.:" is read
and
grab "heading: value"
and
if line starts with number assume heading of "phone"
and
if line contains slash delimited list assume heading "features" until "--------..."
But I have no idea how to start coding something like this. The language is open at this point although I have to run the code in MacOS.
I suppose Perl might be good for this, but my Perl-fu is very poor.
Don't even know where to start with this one.
You always need to assume something about your text, otherwise you have an exercise in NLP.
Can we assume that the non-key-value part is at the end? If so, the following regexes will help you:
# split the text into records:
my @records = split /\n-+\n/, $text;
# this will find lines that have another key/value pair after it
qr/\A(\w+):(.*?)(?=\n\w+:)/ms
# then the last key/value, that probably must be one line:
qr/^(\w+):(.*)/
I recommend that after each successful match you remove the matched text and continue.
Other useful assumptions: that the phone number can appear only once in the record (and not as part of another key/value pair), and that the tags are at the end.
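If Perl isn't a must, the same assumptions can be turned into a rough Python sketch; the field names "phone" and "features" for the unlabelled lines are my own invention, and the heuristics are only as good as the assumptions above:

import re

def parse_feed(text):
    records = []
    for chunk in re.split(r'\n-+\n', text):        # records are separated by dashed lines
        record, current_key = {}, None
        for line in chunk.splitlines():
            line = line.strip()
            if not line or set(line) == {'-'}:     # skip blanks and leftover separators
                continue
            m = re.match(r'(\w+):\s*(.*)$', line)
            if m:                                   # "heading:" or "heading: value"
                current_key = m.group(1)
                record[current_key] = m.group(2)
            elif re.match(r'^[\d()\- ]+$', line):   # bare number: assume it's a phone
                record['phone'] = line
            elif ' / ' in line:                     # slash-delimited list: assume features
                record.setdefault('features', []).extend(
                    part.strip() for part in line.split('/'))
            elif current_key:                       # plain line: continuation of last heading
                record[current_key] = (record[current_key] + ' ' + line).strip()
        if record:
            records.append(record)
    return records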

Fix CSV file with new lines

I ran a query on an MS SQL database using SQL Server Management Studio, and some of the fields contained new lines. I chose to save the result as a CSV, and apparently MS SQL isn't smart enough to give me a correctly formatted CSV file.
Some of these fields with new lines are wrapped in quotes, but some aren't, and I'm not sure why (it seems to quote fields if they contain more than one new line, but not if they contain only one new line; thanks, Microsoft, that's useful).
When I try to open this CSV in Excel, some of the rows are wrong because of the new lines: it thinks that one row is two rows.
How can I fix this?
I was thinking I could use a regex. Maybe something like:
/,[^,]*\n[^,]*,/
The problem with this is that it matches the last element of one line and the first element of the next line.
Here is an example csv that demonstrates the issue:
field a,field b,field c,field d,field e
1,2,3,4,5
test,computer,I like
pie,4,8
123,456,"7
8
9",10,11
a,b,c,d,e
A simple regex replacement won't work, but here's a solution based on preg_replace_callback:
function add_quotes($matches) {
    return preg_replace('~(?<=^|,)(?>[^,"\r\n]+\r?\n[^,]*)(?=,|$)~',
                        '"$0"',
                        $matches[0]);
}
$row_regex = '~^(?:(?:(?:"[^"]*")+|[^,]*)(?:,|$)){5}$~m';
$result = preg_replace_callback($row_regex, 'add_quotes', $source);
The secret to $row_regex is knowing ahead of time how many columns there are. It starts at the beginning of a line (^ in multiline mode) and consumes the next five things that look like fields. It's not as efficient as I'd like, because it always overshoots on the last column, consuming the "real" line separator and the first field of the next row before backtracking to the end of the line. If your documents are very large, that might be a problem.
If you don't know in advance how many columns there are, you can discover that by matching just the first row and counting the matches. Of course, that assumes the row doesn't contain any of the funky fields that caused the problem. If the first row contains column headers you shouldn't have to worry about that, or about legitimate quoted fields either. Here's how I did it:
preg_match_all('~\G,?[^,\r\n]++~', $source, $cols);
$row_regex = '~^(?:(?:(?:"[^"]*")+|[^,]*)(?:,|$)){' . count($cols[0]) . '}$~m';
Your sample data contains only linefeeds (\n), but I've allowed for DOS-style \r\n as well. (Since the file is generated by a Microsoft product, I won't worry about the older-Mac style CR-only separator.)
See an online demo
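If PHP isn't an option, the same idea can be sketched in Python, turned around slightly: let the csv module parse what it can, then glue back together the rows that come up short of the expected column count. Again, this assumes the header row is intact and that the broken fields contain no extra commas of their own:

import csv, io

def repair_csv(text, n_cols=None):
    rows = list(csv.reader(io.StringIO(text)))   # properly quoted multi-line fields survive this
    n_cols = n_cols or len(rows[0])              # take the column count from the header row
    fixed, pending = [], None
    for row in rows:
        if pending is not None:                  # previous row was cut short by a newline
            row = pending[:-1] + [pending[-1] + '\n' + row[0]] + row[1:]
            pending = None
        if len(row) < n_cols:
            pending = row                        # still incomplete, keep merging
        else:
            fixed.append(row)
    out = io.StringIO()
    csv.writer(out, lineterminator='\n').writerows(fixed)   # re-quotes fields containing newlines
    return out.getvalue()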
If you want a Java programmatic solution, open the file using the OpenCSV library. If it is a manual operation, then open the file in a text editor such as Vim and run a replace command. If it is a batch operation, you can use a perl command to clean up the CRLFs.

PDFBox adding white spaces within words

When I try to extract text from my PDF files, it seems to insert white spaces within several words at random.
I am using pdfbox-app-1.6.0.jar (the latest version) on the following sample file from the Downloads section of this page:
http://www.sheffield.gov.uk/roads/children/parents/6-11/pedestrian-training
I've tried several other PDF files and it seems to do the same on several pages.
I do the following:
java -jar pdfbox-app-1.6.0.jar ExtractText -force -console ~/Desktop/ped training pdf.pdf
on the downloaded file, and you will see spaces wrongly inserted in the following excerpts from the console output:
"• If ch ildren are able to walk to
schoo l safely this could reduce the
congestion. "
"• Develops good hab its for later life."
"www.sheff ield.gov.uk"
"Think Ahead!, wh ich is based on the"
etc etc.
As you can see, several of the words above have spaces inside them for no reason I can fathom.
I am on ubuntu and running Sun's JDK 1.6.
I've tried this on several different PDF files and tried searching for solution on forums, there were similar bugs but all seemed to have been resolved.
Any help would be appreciated; if anyone else has the same problem, please comment. This is causing a big problem for indexing the content properly for searching.
Unfortunately there is currently no easy solution for this.
Internally PDF documents simply contain instructions like "place characters 'abc' in position X" and "place characters 'def' in position Y", and PDFBox tries to reason whether the resulting extracted text should be "abc def" or "abcdef" based on things like the distance between X and Y. These heuristics are generally pretty accurate, but as you can see they don't always produce the correct result.
One way to improve the quality of the extracted text is to try a dictionary lookup on each extracted word or token. If the lookup fails, try combining the token with the next one. If a dictionary lookup on the combined token succeeds, then it's fairly likely that the text extractor has mistakenly added an extra space inside the word. Unfortunately such a feature does not yet exist in PDFBox. See https://issues.apache.org/jira/browse/PDFBOX-1153 for the feature request filed for this. Patches welcome!
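In the meantime, the dictionary-lookup heuristic is easy to prototype as a post-processing step outside PDFBox. A rough sketch; the word-list location is an assumption, and punctuation/case handling is deliberately naive:

def merge_split_words(tokens, dictionary):
    merged, i = [], 0
    while i < len(tokens):
        word = tokens[i]
        if (word.lower() not in dictionary
                and i + 1 < len(tokens)
                and (word + tokens[i + 1]).lower() in dictionary):
            merged.append(word + tokens[i + 1])   # "hab" + "its" -> "habits"
            i += 2
        else:
            merged.append(word)
            i += 1
    return merged

# e.g. with a word list such as /usr/share/dict/words (an assumption):
# dictionary = {w.strip().lower() for w in open('/usr/share/dict/words')}
# merge_split_words("Develops good hab its for later life".split(), dictionary)
# -> ['Develops', 'good', 'habits', 'for', 'later', 'life']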
The class org.apache.pdfbox.util.PDFTextStripper (pdfbox-1.7.1) lets you modify the tolerance used to decide whether two strings are part of the same word or not.
Increasing spacingTolerance will reduce the number of inserted spaces.
/**
 * Set the space width-based tolerance value that is used
 * to estimate where spaces in text should be added. Note that the
 * default value for this has been determined from trial and error.
 * Setting this value larger will reduce the number of spaces added.
 *
 * @param spacingToleranceValue tolerance / scaling factor to use
 */
public void setSpacingTolerance(float spacingToleranceValue) {
    this.spacingTolerance = spacingToleranceValue;
}

Need to extract/consolidate info from database files

Here's a summary of my problem:
Our company's old software had a large database of contacts in it.
We switched to a new program and have no way to easily transfer those contacts to it.
The contacts database appears to have 4 files which can all be opened in Excel, but not MSAccess. The four files contain the following:
File 1: A nicely formatted spreadsheet of names and some other BASIC info for each contact. There is an ID number on each one, but the numbers do not seem to correspond to anything in File 2.
File 2: Info on each contact, but not in rows. Instead it looks something like this:
JHGH_CONTACT_BLOB: 1426367745
EMAIL: SMITH
WEB:
PHONE_COUNT: 1
FAX_COUNT: 0
ADDRESS_COUNT: 0
NOTE_COUNT: 0
555-7364
(I changed some info for privacy reasons)
Each blob of info is on a separate spreadsheet row. Each starts off with the same first line (even the number is the same), so it can't be some sort of ID number.
File 3: A file containing a lot of gobbledygook, interspersed with a few readable bits of text here and there. The readable text looks like it belongs to the database (ie, it is info on contacts like place of work and other notes.)
File 4: Contains one row and one column labeled ID, with the number 12725 in it.
I need to somehow get the info from File 2, into the nicely formatted file 1. In essence, I need to add the phone numbers, emails etc included in a messy fashion in file 2 on their proper rows in file 1.
This probably makes little sense and I thank you for even reading down this far. If you have any suggestions, I'd love to hear them.
Thanks
We have established that you have a DBF file, an FPT file and a CDX file. These are likely to all relate to Visual FoxPro (a now discontinued Microsoft product).
The .dbf file can be opened in Excel via the standard file open dialog by changing "Files of type" to "dBase files (*.dbf)". Going by your original post, Excel seems to be able to open this sensibly in the first place.
The combination of all three files might be accessible by downloading this OLE DB provider for FoxPro, which would let you access the database from Excel using the methods outlined here.
You can get more info on the specific file structures at the following links: DBF, FPT and CDX. The DBF contains most of the data, the FPT contains binary memo data and the CDX is an index file.
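If scripting is easier than Excel for the merge, the .dbf table can also be read directly. A minimal sketch using the third-party dbfread package (pip install dbfread); the file name and encoding are assumptions, and I'm going from memory of dbfread's API (it exposes field_names and yields dict-like records):

import csv
from dbfread import DBF

# Dump every record of the DBF table to a CSV that the new program can import;
# memo data from the FPT file is decoded where possible.
table = DBF('contacts.dbf', encoding='latin-1', ignore_missing_memofile=True)
with open('contacts.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(table.field_names)
    for record in table:
        writer.writerow(list(record.values()))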

Resources