Foxpro: How to export cursor into txt file - cursor

I am currently having issues exporting data from my cursor into a txt file. Unfortunately, the txt file has to look a certain way. I have my cursor, which I just named "Export", and I have to push it into a txt file so that it looks like this (the asterisk also has to be there):
*Col1,Col2
Col1,Col2,Col3,Col4,Col5,Col6,Col7,Col8,Col9,Col10.
and repeat about 647 times. I have been searching for a good way to do this, but I feel as if my end result is too specific, which I hope isn't true. Any help would be immensely appreciated.

set textmerge on noshow
set textmerge to myfile.txt
select export
scan
\\*<<col1>>,<<col2>><<chr(13)>><<chr(10)>>
\\<<Col1>>,<<Col2>>,<<Col3>>,<<Col4>>,<<Col5>>,<<Col6>>,<<Col7>>,<<Col8>>,<<Col9>>,<<Col10>><<chr(13)>><<chr(10)>>
endscan
set textmerge off
set textmerge to
Note that chr(13) by itself emits a bare carriage return; chr(13) followed by chr(10) produces a standard CRLF line ending.

Related

Duplicate Block in Tab Delimited File and replace a word

I currently have the following sample text file:
http://pastebin.com/BasTiD4x
and I need to duplicate the CDS blocks. Essentially the line that has the word "CDS" and the 4 lines after it are part of the CDS block.
I need to insert this duplicated CDS block right before a line that says CDS, and I need to change the word CDS in the duplicated block to mRNA.
Of course, this needs to happen every time there is an instance of CDS.
A sample output would be here:
http://pastebin.com/mEMAB50t
Essentially for every CDS block, I need an mRNA block that says exactly the same thing.
Would appreciate help with this, never done 4 line insertions and replacements.
Thanks,
Adrian
Sorry for the very specific question. Here is a working solution provided by someone else:
perl -ne 'if (! /^\s/){$ok=0;if($mem){$mem=~s/CDS/mRNA/;print $mem;$mem="";}}$ok=1 if (/\d+\s+CDS/);if($ok){$mem.=$_};print;' exemple
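For comparison, here is a minimal Python sketch of the same idea, written directly from the spec in the question: the line containing the word CDS plus the 4 lines after it form a block, and a copy of that block, with CDS changed to mRNA, is inserted before the original. The 5-line block size and the whole-word CDS match are assumptions taken from the description, not verified against the pastebin samples.

```python
import re

def duplicate_cds_as_mrna(lines):
    """For every CDS block (the line containing the word CDS plus the
    4 lines that follow it), insert a copy of the block, with CDS
    replaced by mRNA, immediately before the original block."""
    out = []
    i = 0
    while i < len(lines):
        if re.search(r'\bCDS\b', lines[i]):
            block = lines[i:i + 5]  # CDS line + 4 continuation lines
            out.extend(l.replace('CDS', 'mRNA', 1) for l in block)
            out.extend(block)
            i += 5
        else:
            out.append(lines[i])
            i += 1
    return out
```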

Export query result in Pervasive to txt / csv file

I'm using Pervasive 10 with PCC (Pervasive Control Center) and I need to export a lot of results (over 100,000) to a TXT file. I know "Execute in Text" is possible, but this feature does not work for me because the program stops after exporting about 20,000 records. I have also changed the settings in PCC (Windows -> Preferences -> Text Output -> Maximum number of rows to display = 500,000).
Anyone know a way to export my query result to a txt file?
You should be able to use the Export Data function. Right click on the table name in the PCC and select Export Data. From there, you can either execute the standard "select * from " or make a more complex query to pull only the data you need. You can set the delimiter to Comma, Tab, or Colon.
Nice answer mirtheil, I was wondering about this myself as well.
To add something to the answer: it does not matter which table you right-click and choose "Export Data" on, because your query will override the default table query.

Fix CSV file with new lines

I ran a query on a MS SQL database using SQL Server Management Studio, and some of the fields contained new lines. I selected to save the result as a CSV, and apparently MS SQL isn't smart enough to give me a correctly formatted CSV file.
Some of these fields with new lines are wrapped in quotes, but some aren't; I'm not sure why (it seems to quote fields if they contain more than one new line, but not if they contain only one new line, thanks Microsoft, that's useful).
When I try to open this CSV in Excel, some of the rows are wrong because of the new lines; it thinks that one row is two rows.
How can I fix this?
I was thinking I could use a regex. Maybe something like:
/,[^,]*\n[^,]*,/
Problem with this is it matches the last element of one line and the 1st of the next line.
Here is an example csv that demonstrates the issue:
field a,field b,field c,field d,field e
1,2,3,4,5
test,computer,I like
pie,4,8
123,456,"7
8
9",10,11
a,b,c,d,e
A simple regex replacement won't work, but here's a solution based on preg_replace_callback:
function add_quotes($matches) {
    return preg_replace('~(?<=^|,)(?>[^,"\r\n]+\r?\n[^,]*)(?=,|$)~',
                        '"$0"',
                        $matches[0]);
}
$row_regex = '~^(?:(?:(?:"[^"]*")+|[^,]*)(?:,|$)){5}$~m';
$result = preg_replace_callback($row_regex, 'add_quotes', $source);
The secret to $row_regex is knowing ahead of time how many columns there are. It starts at the beginning of a line (^ in multiline mode) and consumes the next five things that look like fields. It's not as efficient as I'd like, because it always overshoots on the last column, consuming the "real" line separator and the first field of the next row before backtracking to the end of the line. If your documents are very large, that might be a problem.
If you don't know in advance how many columns there are, you can discover that by matching just the first row and counting the matches. Of course, that assumes the row doesn't contain any of the funky fields that caused the problem. If the first row contains column headers you shouldn't have to worry about that, or about legitimate quoted fields either. Here's how I did it:
preg_match_all('~\G,?[^,\r\n]++~', $source, $cols);
$row_regex = '~^(?:(?:(?:"[^"]*")+|[^,]*)(?:,|$)){' . count($cols[0]) . '}$~m';
Your sample data contains only linefeeds (\n), but I've allowed for DOS-style \r\n as well. (Since the file is generated by a Microsoft product, I won't worry about the older-Mac style CR-only separator.)
See an online demo
If you want a java programmatic solution, open the file using the OpenCSV library. If it is a manual operation, then open the file in a text editor such as Vim and run a replace command. If it is a batch operation, you can use a perl command to cleanup the CRLFs.
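Another option, sketched here in Python and assuming you know the expected column count (e.g. from the header row): Python's csv module already parses *quoted* embedded newlines correctly, so rows that come out short must have been split by an unquoted newline and can simply be glued back together.

```python
import csv
import io

def fix_csv(text, expected_cols):
    """Merge rows that were split by unquoted embedded newlines.
    csv.reader handles quoted newlines already; any row shorter than
    expected_cols is a fragment of the previous row, so rejoin them."""
    fixed = []
    for row in csv.reader(io.StringIO(text)):
        if fixed and len(fixed[-1]) < expected_cols:
            prev = fixed[-1]
            prev[-1] += '\n' + row[0]  # rejoin the field split in two
            prev.extend(row[1:])
        else:
            fixed.append(row)
    out = io.StringIO()
    # csv.writer re-quotes any field that still contains a newline
    csv.writer(out, lineterminator='\n').writerows(fixed)
    return out.getvalue()
```

This assumes no legitimate field is empty in a way that makes a fragment row ambiguous; for the sample above it reassembles the "I like / pie" row and leaves the properly quoted "7 8 9" field alone.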

How to export data from an ancient SQL Anywhere?

I'm tasked with exporting data from an old application that is using SQL Anywhere, apparently version 5, maybe 5.6. I never worked with this database before so I'm not sure where to start here. Does anybody have a hint?
I'd like to export it in more or less any text representation that then I can work with. Thanks.
I ended up exporting the data by using isql and these commands (where #{table} is each of the tables, a list I built manually):
SELECT * FROM #{table};
OUTPUT TO "C:\export\#{table}.csv" FORMAT ASCII DELIMITED BY ',' QUOTE '"' ALL;
SELECT * FROM #{table};
OUTPUT TO "C:\export\#{table}.txt" FORMAT TEXT;
I used the CSV to import the data itself and the txt to pick up the names of the fields (only parsing the first line). The txt can become rather huge if you have a lot of data.
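Since the table list was built manually, generating the isql script itself is easy to automate. A minimal sketch (the table names here are hypothetical placeholders):

```python
# Generate an isql script that exports each table twice, mirroring the
# OUTPUT TO commands above: once as CSV data, once as FORMAT TEXT for
# the field names.
tables = ["customers", "orders"]  # hypothetical: your manually built list

script = ""
for t in tables:
    script += (
        f'SELECT * FROM {t};\n'
        f'OUTPUT TO "C:\\export\\{t}.csv" FORMAT ASCII DELIMITED BY \',\' QUOTE \'"\' ALL;\n'
        f'SELECT * FROM {t};\n'
        f'OUTPUT TO "C:\\export\\{t}.txt" FORMAT TEXT;\n'
    )
print(script)
```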
Have a read http://www.lansa.com/support/tips/t0220.htm

Inserting data at a particular position in file using fseek() in C

Basically I want to write data to a file at a particular position and don't want to load the data into memory for sorting it. For example if I have in a file:
FILE.txt
Andy dsoza
Arpit Raj
Karishma Shah
Pratik Mehta
Zppy andre
If I want to insert a contact "Barbie Patel", I will read the first letter of every line, so Barbie should be inserted after Arpit and before Karishma, and the file after editing should be:
FILE.txt
Andy dsoza
Arpit Raj
Barbie Patel
Karishma Shah
Pratik Mehta
Zppy andre
But fseek drives me to that position and doesn't help me insert when I use fprintf/fwrite/putc. It replaces the byte but does not insert before that particular byte.
Loading all the data into memory and sorting it out would not be good if I had a lot of contacts in the future.
You won't be able to insert into a file in place without reading part of it into memory. How you manage larger files depends on your design approach.
One approach would be to use different files.
You cannot insert data in the middle of a file. You have to first read everything that's in the file from that point to the end, then write the new data followed by what you read.
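A minimal sketch of that read-then-rewrite approach, shown in Python for brevity (the same seek/read/write sequence applies with fseek/fread/fwrite in C). Note the tail of the file still has to be held in memory; for very large files you would copy it in chunks instead.

```python
def insert_at(path, offset, data):
    """Insert data at a byte offset: read everything from the offset
    to EOF, seek back, write the new bytes, then write the saved tail."""
    with open(path, 'r+b') as f:
        f.seek(offset)
        tail = f.read()      # everything after the insertion point
        f.seek(offset)
        f.write(data + tail)
```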