I have a package that is giving me a very confusing "Text was truncated or one or more characters had no match in the target code page" error, but only when I run the full package in the control flow, not when I run the task by itself.
The first task takes CSV files and combines them into one file. The next task reads the output of that first task and begins to process the records. What is really odd is that the truncation error is thrown by the flat file source in the 2nd step. This is the exact same flat file connection manager that was the destination in the previous step.
If there was a truncation error, wouldn't it be thrown by the previous step, the one that created the file? Since the 1st step created the file without truncation, why can't I read that same file back in the very next task?
Note: the only thing that makes this package different from the others I have worked on is that I am dealing with special characters, so I am using code page 65001 (UTF-8) to capture the fields that contain them. My other packages all referenced flat file connection managers with code page 1252.
The problem was caused by the foreach loop and by the expression I put on the flat file connection manager's ColumnNamesInFirstDataRow property: @[User::count_raw_input_rows] < 0. I initialize the variable to -1 and bind that expression to ColumnNamesInFirstDataRow. Inside the loop I update the variable with a row counter on each read of a CSV file. This writes the header the first time through (while the variable is still -1) but avoids repeating it for all the other CSV files. When I exit the loop and try to read the combined file, the expression is still false, so the header row gets treated as data; the header text no longer fits the tightened column definitions and blows up with the truncation error. I only avoided this in my last package because I didn't tighten the flat file's column definitions the way I did in this package. Thanks for the help.
Related
I am receiving the error below while executing a CSV file which includes around 400k rows.
Error:
ERROR CSV Reader 2:1 Execute failed: Too few data elements (line: 2 (Row0), source: 'file:/Users/shobha.dhingra/Desktop/SBC:Non%20SBC/SBC.csv')
I tried executing another CSV file with a few lines and did not face the issue.
It is not about the number of lines but about the content of the line (line 2 in your case). It seems your SBC.csv file is not correct: either it has extra header content, or the second line is missing the commas that would represent the missing cells.
You can use the CSV Reader node's Support Short Lines option to let KNIME handle this case by producing missing cells.
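For readers hitting the same symptom outside KNIME, here is a minimal C sketch of what that option effectively does: rows that come up short are padded with empty trailing cells until they reach the expected column count. The comma delimiter and the expected count of 12 are assumptions for illustration, and quoted fields containing delimiters are not handled:

#include <stdio.h>
#include <string.h>

#define EXPECTED_COLS 12  /* assumed column count, for illustration only */

int main(void)
{
    char line[4096];
    /* Read CSV rows from stdin, pad short rows, write to stdout. */
    while (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\r\n")] = '\0';  /* strip the line ending */

        /* Count the fields already present: commas + 1. */
        int cols = 1;
        for (const char *p = line; *p; p++)
            if (*p == ',')
                cols++;

        fputs(line, stdout);
        /* Emit empty trailing cells until the row has EXPECTED_COLS. */
        for (; cols < EXPECTED_COLS; cols++)
            fputc(',', stdout);
        fputc('\n', stdout);
    }
    return 0;
}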
I get this error when end-of-line characters exist inside a field. You could load the file into a text editor and look for non-printing characters (tabs, carriage returns, etc.) between your delimiters.
If you can't get a clean version of the file, consider using the regex [^ -~] to identify any character that is not a space or a visible ASCII character.
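If a regex-capable editor isn't handy, a small C program can run the same check. This is just a sketch: it reports every byte outside the printable ASCII range (space 0x20 through tilde 0x7E), which is exactly what [^ -~] matches, while skipping the expected line feeds:

#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    long line = 1, col = 1;
    int c;
    while ((c = fgetc(f)) != EOF) {
        if (c == '\n') { line++; col = 1; continue; }  /* expected newlines are fine */
        if (c < 0x20 || c > 0x7E)                      /* the [^ -~] class */
            printf("line %ld, col %ld: byte 0x%02X\n", line, col, c);
        col++;
    }
    fclose(f);
    return 0;
}

Stray carriage returns and tabs between delimiters will show up in the report, which is usually what breaks the parse.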
I hope this helps.
I have many files that are extracted into .txt files with a batch file, but they don't have headers. I've read here that a possible solution is to append the exported rows to a .txt file that already contains the headers.
With this:
echo. >> titles.txt
type data.txt >> titles.txt
This takes a lot of time and is not efficient, since it appends the big data file to the small file that holds the headers.
Another possible solution is to hardcode the titles into the SQL query, but this changes the type of the columns (if they are numeric they will be changed to varchar).
Is there a way to insert the headers into the first row of the data .txt, rather than doing it the other way around?
I might be wrong, but as far as I know (and as far as I learned from earlier experiments in doing exactly this): no, it is not possible. The tasks mentioned act on the file sequentially. You can open a file for reading, writing, or appending. If you open the titles.txt file for writing, it is overwritten, and with that emptied. If you open it for appending, you can only write after the end of the existing content, so the data can only ever land after the header. The only way it might work, and it is pretty nasty, is to append the title to the end of the file and, during later processing (e.g. xls or whatever), resort the rows and move the last one to the beginning. But as mentioned: nasty and not really the way to go.
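For what it's worth, here is a minimal C sketch of the rewrite described above: write the header line first, then stream the data file in after it. The file names and header text are placeholders:

#include <stdio.h>

/* Build out.txt as: header line first, then the whole of data.txt.
   The data is streamed in chunks rather than loaded into memory. */
int main(void)
{
    FILE *in  = fopen("data.txt", "rb");   /* placeholder names */
    FILE *out = fopen("out.txt", "wb");
    if (!in || !out) { perror("fopen"); return 1; }

    fputs("col1,col2,col3\n", out);        /* assumed header line */

    char buf[8192];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);

    fclose(in);
    fclose(out);
    return 0;
}

A full copy of the data is still unavoidable, but it happens exactly once per file, and unlike appending, the header ends up where it belongs.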
If the number of files to process is a bigger problem than any individual file size, switching from bcp to sqlcmd might help.
I have tried many times to load this *.csv file, but I have failed. I am using Weka 3.7.
Here is the error:
Wrong number of values. Read 1, expected 12, read Token[EOL], line 2
This is line 2 in my file:
7;0.27;0.36;20.7;0.045;45;170;1.001;3;0.45;8.8;6
I don't know what is wrong with this. Can someone help me? Thank you very much.
I tried to import a semicolon-delimited file as a CSV into Weka, and what appeared to happen was that the contents were loaded as a single attribute (due to the lack of commas in the file structure). I didn't get the error that you reported.
What you need to do is replace all of the semicolons with commas and then load the contents again. In your case above, I assumed that the first line contained the attribute names, which loaded successfully in my test.
As such, the format Weka is likely expecting is the Comma-Separated Values Format.
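If you'd rather not do the replacement in an editor, a tiny C filter is enough for a file like this one. The caveat: this is only safe because the data contains no quoted fields that could themselves contain semicolons or commas:

#include <stdio.h>

/* Convert a semicolon-delimited file to comma-separated on stdout.
   Safe here only because the data has no quoted fields. */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(c == ';' ? ',' : c);
    return 0;
}

Run it as semi2csv < input.csv > output.csv (the names are placeholders), then load the converted file into Weka.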
I am reading numbers from a txt file, and after that I am adding to those numbers others that I keep in another file with the same structure.
At the start of each line in the file is a number that identifies a specific product. That code lets me find the same product in the other file. My program has to add the "variables" from one file to those in the other, and then write the result back in the same place in one of the files.
I didn't open either of those files with "a" or "a+"; I opened them with "r" and "r+", because I want to replace information in lines that may be in the middle of the file, not at the end of it.
The program compiles and runs, but when it comes to replacing the info in the file, it just doesn't do anything.
How should I resolve the problem?
A program can replace (overwrite) text in the middle of a file. But the question is whether or not this should be done.
In order to insert longer or shorter text (and close up the gap), a new text file must be written. This assumes the records are not fixed width. The fundamental rule is: copy all of the original text before the insertion point to a new file, write the new text, then write the remaining original text. This is a lot of work and will slow down even the simplest of programs.
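For the fixed-width case, here is a hedged C sketch of an in-place overwrite. It may also explain the "it just doesn't do anything" symptom above: on a stream opened in update mode ("r+"), ISO C requires a file-positioning call such as fseek (or an fflush) when switching between reading and writing, and forgetting it is a classic cause of writes that silently vanish. The record length is an assumption:

#include <stdio.h>
#include <string.h>

#define RECLEN 32  /* assumed fixed record length, including the '\n' */

/* Overwrite record number `recno` (0-based) in place. The replacement
   must be exactly as long as the original, or the neighbouring
   records will be corrupted. */
int overwrite_record(const char *path, long recno, const char *text)
{
    FILE *f = fopen(path, "r+");
    if (!f) { perror("fopen"); return -1; }

    /* The fseek both positions the write and satisfies the C rule
       about inserting a positioning call between reads and writes
       on an update-mode stream. */
    if (fseek(f, recno * (long)RECLEN, SEEK_SET) != 0) {
        perror("fseek");
        fclose(f);
        return -1;
    }
    fwrite(text, 1, strlen(text), f);

    fclose(f);  /* flushes the write to disk */
    return 0;
}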
I suggest you design your data layout before you go any further. Also consider using a database, see my post: At what point is it worth using a database?
Your objective is to design the data to minimize duplication and data fetching.
I've got a service which runs all the time and also keeps a log file, adding new lines to it every few seconds. I've written a small program which reads these lines and then parses them into various actions. The question I have is: how can I delete the lines which I have already parsed from the log file without disrupting the writing of the log file by the service?
Usually when I need to delete a line from a file, I open the original and a temporary file, and write all the lines to the temp file except the one I want to delete. Obviously this method will not work here, since the service is still writing to the original.
So how do I go about deleting them?
In most commonly used file systems you can't delete lines from the beginning of a file without rewriting the entire file. Instead of one large file, I'd suggest using lots of small files and rotating them, for example once per day. The old files can be deleted when you no longer need them.
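If you control the service's source, the rotation could be as simple as this C sketch: close the current log, move it aside under a dated name, and reopen a fresh file. The naming scheme and the once-per-day trigger are assumptions:

#include <stdio.h>
#include <time.h>

/* Close the current log, move it aside under a dated name, and open
   a fresh, empty file in its place. Call this e.g. once per day. */
FILE *rotate_log(FILE *log, const char *path)
{
    char datebuf[16];
    char dated[512];
    time_t now = time(NULL);

    strftime(datebuf, sizeof datebuf, "%Y%m%d", localtime(&now));
    snprintf(dated, sizeof dated, "%s.%s", path, datebuf);  /* e.g. service.log.20240131 */

    fclose(log);                 /* flush pending writes first */
    if (rename(path, dated) != 0)
        perror("rename");        /* a real version should handle this */
    return fopen(path, "a");     /* the writer continues on the new file */
}

The parser then processes the dated files at its leisure and deletes them when done, never touching the file the service is currently writing to.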
Can't be done, unfortunately, without rewriting the file, either in-place or as a separate file.
One thing you may want to look at is maintaining a pointer in another file, specifying the position of the first unprocessed line.
Then your process simply opens the log, seeks to that location, processes some lines, and updates the pointer.
You'll still need to roll over the files at some point lest they continue to grow forever.
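A minimal C sketch of that pointer idea, assuming the service only ever appends (the file names are placeholders):

#include <stdio.h>

/* Resume processing the log from the byte offset saved last time,
   then persist the new offset. The writer is never touched. */
int main(void)
{
    long offset = 0;

    /* Load the saved position, if any. */
    FILE *p = fopen("log.offset", "r");
    if (p) { fscanf(p, "%ld", &offset); fclose(p); }

    FILE *log = fopen("service.log", "r");
    if (!log) { perror("fopen"); return 1; }
    fseek(log, offset, SEEK_SET);

    char line[4096];
    while (fgets(line, sizeof line, log)) {
        /* parse_line(line);  -- your per-line actions go here */
    }
    offset = ftell(log);
    fclose(log);

    /* Save the position for the next run. */
    p = fopen("log.offset", "w");
    if (p) { fprintf(p, "%ld\n", offset); fclose(p); }
    return 0;
}

One caveat: the final fgets may return a line the service has only half written, so a more careful version would only advance the saved offset past lines that end in '\n'.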
I'm not sure, but I'm thinking of it this way:
A newline is a char, so you would have to delete that line's characters plus the newline char.
And "moving" all of the following characters back (to overwrite the old line) amounts to copying each character to a different position and removing it from its old one.
So no, I don't think you can just delete a line; you would have to rewrite the whole file.
You can't, that just isn't how files work.
It sounds like you need some sort of message logging service / library that your program could connect to in order to log messages, which could then hide the underlying details of file opening / closing etc.
If each log line has a unique identifier (or even just a line number), you could simply have your parser store the identifier up to which it has parsed. That way you don't have to change anything in the log file.
If the log file then starts to get too big, you could switch to a new one each day (for example).