Drools version upgrade from 6.x to 7.x

In version 7.16, the .drl file generated from an Excel decision table is corrupted. In the action column of the .xls sheet there are if/else conditions written on separate lines within a single cell. When the .drl file is generated, instead of printing the conditions on new lines, it prints literal \n characters. Can anyone help with this?

Related

Add line numbers to Microsoft Report Builder output

I have a report made with Microsoft Report Builder and I need to add line numbers. When the data is presented as a table, this is easy to accomplish by adding a new column for the line number. However, when the data is a text block inside a text block, there is no way to add numbers to each line of text. I have thought of exporting the report to Word, reading the Word doc into Python, and adding the line numbers there, but at first attempt it seems a little difficult. Has anyone had to do this before?
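For what it's worth, the export-to-Word idea is less painful than it sounds with the python-docx package. Here's a minimal sketch, assuming the report has been exported to report.docx (file names are placeholders, and note that assigning paragraph.text discards run-level formatting):
from docx import Document  # pip install python-docx

doc = Document("report.docx")  # hypothetical path to the exported report
line_no = 0
for para in doc.paragraphs:
    if para.text.strip():  # number only non-blank paragraphs
        line_no += 1
        para.text = f"{line_no:>4}  {para.text}"
doc.save("report_numbered.docx")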

Error importing data from CSV with OpenRowset in SQL Server - Mysterious value of "S7"

I have a file dump which needs to be imported into SQL Server on a daily basis, so I have created a scheduled task to do this unattended. All CSV files are delimited by ',' and are Windows CR/LF files encoded as UTF-8.
To import data from these CSV files, I mainly use OpenRowset. It worked well until I ran into a file containing the value "S7". If a file contains the value "S7", that column gets recognized as a numeric datatype during the OpenRowset import, which causes the other, alphabetic values in the column to fail to import, leaving only NULL values.
This is what I have tried so far:
Using IMEX=1: openrowset('Microsoft.ACE.OLEDB.15.0','text;IMEX=1;HDR=Yes;
Using text driver: OpenRowset('MSDASQL','Driver=Microsoft Access Text Driver (*.txt, *.csv);
Using Bulk Insert with or without a format file.
The interesting part is that if I use Bulk Insert, it gives me a warning about an unexpected end of file. To solve this, I tried various row terminators such as '0x0a', '\n', and '\r\n', as well as leaving it unspecified, but they all failed. I finally managed to import some of the records using a row terminator of ',\n'. However, the original file contains around 1000 records and only 100 were imported, with no errors or warnings.
Any tips or help would be much appreciated.
Edit 1:
The file ends with a newline character, as far as I can tell from Notepad++. I managed to import the files that give the unexpected-end-of-file error by removing the last record from them. However, even with this method I still cannot import all the records; only a portion of them make it in.
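One way to take the terminator question out of the equation is to normalize the dump before importing it. A minimal Python sketch, assuming the file parses as standard CSV (file names are placeholders):
import csv

# Re-write the dump with uniform CRLF terminators and every field quoted,
# so OPENROWSET/BULK INSERT sees one unambiguous format.
with open("dump.csv", newline="", encoding="utf-8") as src, \
        open("dump_clean.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst, quoting=csv.QUOTE_ALL, lineterminator="\r\n")
    for row in csv.reader(src):
        writer.writerow(row)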

pgadmin4 - Download Query result as CSV

I wrote a query using the query tool in pgAdmin 4. Now I want to download the results as a CSV. I've got two problems with that.
The 'Download as CSV' button sometimes does not work, especially when the result contains 1000+ rows.
When I finally have a CSV and I want to open it, this message is all I see:
"'ascii' codec can't encode character u'\xbb' in position 26: ordinal not in range(128)"
Since I'm fairly new to all of this, could someone enlighten me as to what is wrong?
On your questions:
The broken CSV download was a known bug that was fixed in pgAdmin v1.5 (Bug summary at the login-required https://redmine.postgresql.org/issues/2253; the gist is that there were multiple issues with exporting JSON data and Unicode). If you're not on that version, try updating and see whether you continue to have the issue.
You didn't specify where you're seeing that message regarding encoding, but the character referenced in the error is a "Right-Pointing Double Angle Quotation Mark" (») (http://www.codetable.net/hex/bb).
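As a rough illustration of where that message comes from: it's a Python 2 UnicodeEncodeError (pgAdmin 4 runs on Python), and you can reproduce the same failure by encoding that character to ASCII (the position differs because only the one character is encoded here):
>>> u'\xbb'.encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode character u'\xbb' in position 0: ordinal not in range(128)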

Text Qualifier þ (thorn) in SSIS [duplicate]

I'm trying to read a flat file in SSIS which is in this format
col1 þ col2 þ col 3
I'm using the flatfile connection manager but there is no option for the 'þ' character in the column delimiter section of the connection manager.
What would be the workaround for this, other than reading the file and replacing the thorn character with an SSIS-supported delimiter?
Being a dumb 'merican, I think the lower case thorn character is 0xFE while upper case is 0xDE. This will become important soon.
I created an SSIS package with a Flat File Connection Manager. I pointed it at a comma delimited file that looked like
col 1,col 2,col 3
This allowed me to get the metadata set for the file. Once you have all the columns defined and the package is otherwise good, save it. Commit it to your version control system. If you're not using version control, shame on you; failing that, make a copy of your .dtsx file and put it somewhere handy.
Replace the comma-delimited file with the thorn-delimited one.
What we're doing
What we're going to do is hand-edit the XML that is our SSIS package to exchange the , delimiter for a þ. It's a straightforward operation, but since you're going off the reservation, it's easy to foul up, and then your package won't open properly in the editor.
How to fix it
If you have the package open, close the package but leave Visual Studio open. Right click on the file and select "View Code".
In an SSIS 2012 package, you'll be looking for
DTS:ColumnDelimiter="_x002C_"
In a 2008 package,
<DTS:Property DTS:Name="ColumnDelimiter" xml:space="preserve">_x002C_</DTS:Property>
What we're going to do is substitute _x00FE_ (thorn) for _x002C_ (comma). Save the file and then double-click to open it back up.
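After the substitution, the edited 2012 attribute should read
DTS:ColumnDelimiter="_x00FE_"
and the 2008 property
<DTS:Property DTS:Name="ColumnDelimiter" xml:space="preserve">_x00FE_</DTS:Property>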
Your connection manager should now show the thorn symbol on the Columns tab.
Interestingly enough, after you open the package, if you go back into the Code view, the editor will have swapped the actual thorn character into the file in place of the hexadecimal character code. Weird.

Fix CSV file with new lines

I ran a query on an MS SQL database using SQL Server Management Studio, and some of the fields contained newlines. I chose to save the result as a CSV, and apparently MS SQL isn't smart enough to give me a correctly formatted CSV file.
Some of these fields with newlines are wrapped in quotes, but some aren't; I'm not sure why (it seems to quote fields that contain more than one newline, but not fields that contain only one; thanks Microsoft, that's useful).
When I try to open this CSV in Excel, some of the rows come out wrong because of the newlines: it thinks one row is two rows.
How can I fix this?
I was thinking I could use a regex. Maybe something like:
/,[^,]*\n[^,]*,/
The problem with this is that it matches the last element of one line together with the first element of the next line.
Here is an example csv that demonstrates the issue:
field a,field b,field c,field d,field e
1,2,3,4,5
test,computer,I like
pie,4,8
123,456,"7
8
9",10,11
a,b,c,d,e
A simple regex replacement won't work, but here's a solution based on preg_replace_callback:
function add_quotes($matches) {
    return preg_replace(
        '~(?<=^|,)(?>[^,"\r\n]+\r?\n[^,]*)(?=,|$)~',
        '"$0"',
        $matches[0]
    );
}
$row_regex = '~^(?:(?:(?:"[^"]*")+|[^,]*)(?:,|$)){5}$~m';
$result = preg_replace_callback($row_regex, 'add_quotes', $source);
The secret to $row_regex is knowing ahead of time how many columns there are. It starts at the beginning of a line (^ in multiline mode) and consumes the next five things that look like fields. It's not as efficient as I'd like, because it always overshoots on the last column, consuming the "real" line separator and the first field of the next row before backtracking to the end of the line. If your documents are very large, that might be a problem.
If you don't know in advance how many columns there are, you can discover that by matching just the first row and counting the matches. Of course, that assumes the row doesn't contain any of the funky fields that caused the problem. If the first row contains column headers you shouldn't have to worry about that, or about legitimate quoted fields either. Here's how I did it:
preg_match_all('~\G,?[^,\r\n]++~', $source, $cols);
$row_regex = '~^(?:(?:(?:"[^"]*")+|[^,]*)(?:,|$)){' . count($cols[0]) . '}$~m';
Your sample data contains only linefeeds (\n), but I've allowed for DOS-style \r\n as well. (Since the file is generated by a Microsoft product, I won't worry about the older-Mac style CR-only separator.)
If you want a programmatic solution in Java, open the file using the OpenCSV library. If it is a one-off manual operation, open the file in a text editor such as Vim and run a replace command. If it is a batch operation, you can use a Perl one-liner to clean up the CRLFs.
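If none of those tools is handy, the rejoin-by-column-count idea behind the regex answer above is also easy to sketch in Python (shown purely as an illustration: it assumes the header row is well-formed and that no field contains a quoted comma, and it collapses embedded newlines to spaces):
with open("broken.csv", encoding="utf-8") as src, \
        open("fixed.csv", "w", encoding="utf-8") as dst:
    header = next(src)
    expected = header.count(",")  # commas in a well-formed record
    dst.write(header)
    buf = ""
    for line in src:
        # Accumulate physical lines until the logical record is complete.
        buf = buf + " " + line.strip() if buf else line.strip()
        if buf.count(",") >= expected:
            dst.write(buf + "\n")
            buf = ""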
