How to fix the "nominal value not declared in header" error when uploading an ARFF file in WEKA?

Hi, I'm using WEKA for data mining and I have a project based on kids' usage of the internet. I downloaded the data from OpenML in .arff form and I'm processing it in Notepad, where I changed the values "," to "." and "?" to ",". However, when I try to open the file in WEKA I get this:
"nominal value not declared in header, read line 76"
Line 76 is the first data line after @data.

The ARFF format defines the comma as the separator between columns. Replacing the commas with periods essentially collapsed each row into a single nominal value, one that was never declared for that attribute in the header. Nominal attributes require all of their possible values to be declared in the header section.
What was the reason for converting commas to periods?
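If you want to pinpoint exactly which value WEKA is objecting to before repairing the file, here is a minimal sketch (the file name kids_internet.arff is a placeholder; quoted attribute names and values containing commas inside quotes are not handled): it parses the @attribute declarations and reports every data value that is not declared for its nominal attribute, which is precisely what triggers the "nominal value not declared in header" error.

import re

declared = []     # list of (attribute_name, set_of_nominal_values or None)
in_data = False

with open("kids_internet.arff", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        line = line.strip()
        if not line or line.startswith("%"):          # skip blanks and comments
            continue
        lower = line.lower()
        if lower.startswith("@attribute"):
            match = re.match(r"@attribute\s+(\S+)\s+\{(.*)\}", line, re.IGNORECASE)
            if match:                                  # nominal: values listed in braces
                values = {v.strip().strip("'\"") for v in match.group(2).split(",")}
                declared.append((match.group(1), values))
            else:                                      # numeric/string/date: nothing to check
                declared.append((line.split()[1], None))
        elif lower.startswith("@data"):
            in_data = True
        elif in_data:
            fields = [v.strip().strip("'\"") for v in line.split(",")]
            for (name, values), value in zip(declared, fields):
                if values is not None and value != "?" and value not in values:
                    print(f"line {lineno}: value {value!r} not declared for attribute {name!r}")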

Related

Changing .csv delimiter on ADF

I am trying to load a .csv table to MS SQL Server via Azure Data Factory, but I have a problem with the delimiter (;) since it appears as a character in some of the values included in some columns.
As a result, I get an error saying in the details "found more columns than expected column count".
Is there any way to change the delimiter directly in ADF before/while loading the .csv table (e.g. changing it from ";" to "|||")?
Thanks in advance!
I have a problem with the delimiter (;) since it appears as a character in some of the values included in some columns.
As you have quoted, your delimiter is ;, but it also occurs as an ordinary character in some of the column values, which means there is no specific pattern to its occurrence. Hence, this is not possible directly in ADF.
The recommendation is to write a program in any preferred language (such as Python) that iterates over each row of the dataset, with logic that replaces the delimiter with ||| (or removes the unrequired ;), and writes the changes to a new file, as sketched below. You can then ingest this new file in ADF.
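A minimal sketch of that pre-processing step, assuming the values that contain stray semicolons are quoted in the source file (the file names are placeholders); if they are not quoted, you would need a column-count heuristic instead:

import csv

with open("source.csv", newline="", encoding="utf-8") as src, \
     open("clean.csv", "w", newline="", encoding="utf-8") as dst:
    # csv.reader honours the quotes, so a ';' inside a quoted value stays part
    # of that value instead of being treated as a column separator.
    for row in csv.reader(src, delimiter=";", quotechar='"'):
        # Re-join the parsed fields with the new ||| delimiter for ADF.
        dst.write("|||".join(row) + "\n")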

CSV file not recognized as csv, reason nominal value not declared in header

I am trying to load a dataset in Weka. I have tried many solutions, such as converting to ARFF format, fixing the commas, etc., but they were all a failure. Could any of you give me a working solution, or show me how to load this dataset in the correct format?
Here is a link to dataset
Instead of using Weka's functionality for reading CSV files, you could use ADAMS (developed at the same university; I'm the lead developer).
Download the adams-ml-app snapshot and then use the Weka Investigator to load/save the file:
Load it as ADAMS Spreadsheets (.csv, .csv.gz)
Save it as Arff data files (.arff, .arff.gz) or Simple ARFF data files (.arff, .arff.gz)
The Reviews column contains an erroneous 3.0M, which prevents it from becoming numeric.
If you want an introduction to the Weka Investigator, then take a look at my talk from the Weka User Conference 2021: Taking Weka to the next level with ADAMS.
There are too many issues with lines in this file.
In line 23, I eliminated the odd-looking brackets.
I removed all single quotes (')
I eliminated all repeated double quotes ("")
In line 10474 the first two fields (before the number) didn't seem to be separated, so I added a comma.
This allowed the file to go through initial screening, but...
The file contains a lot of odd emojis. I started to eliminate them one by one, but there are clearly more of these than I wish to deal with.
Each time I got rid of one, it would read farther into the file, then stop at the next one.
If I just try to read the top of the file, the first 20 lines before we get to any of these problems, it reads fine.
My partial editing can be found here: https://www.dropbox.com/s/ij707mb23dt1jvz/googleplaystore3.csv?dl=0
I think if you clear up the remaining emojis the file should be usable.
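A minimal sketch of automating that last cleanup step (the file names are placeholders): it drops every character above the Basic Multilingual Plane, which is where most emoji live, so the CSV loader no longer stops at each one.

input_path = "googleplaystore3.csv"           # the partially cleaned file linked above
output_path = "googleplaystore3_clean.csv"

with open(input_path, encoding="utf-8", errors="replace") as src, \
     open(output_path, "w", encoding="utf-8") as dst:
    for line in src:
        # Most emoji are encoded above U+FFFF (the supplementary planes); keeping
        # only code points at or below U+FFFF removes them while preserving
        # ordinary accented characters.
        dst.write("".join(ch for ch in line if ord(ch) <= 0xFFFF))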

Can't import characters due to incorrect code page

I have an SSIS job to import data from a flat file into an SQL Server table. I'm having an issue regarding the encoding of the source file and destination table.
The file is a UTF-8 encoded CSV file with some standard accented Latin characters (ãóé, etc.). My destination table is defined as having the Latin1_General_CI_AS collation, which means I can manually insert the following text with no problem: "JOÃO ANTÓNIO".
When I declare the Flat File source, it automatically determines the file as having the 65001 code page (UTF-8), and infers the string [DT_STR] data type for each column. However, the SSIS package automatically assumes the destination table as having the 1252 Code Page, giving me the following error:
Validation error. <STEPNAME>: <STEPNAME>: The code page 65001 specified on output column "<MYCOLUMN>" (180) is not valid. Select a different code page for output column "<MYCOLUMN>".
I understand why, since the database collation is defined as having that Code Page. However, if I try to set the Flat File datasource as having the Latin1 1252 encoding, the SSIS executes but it imports characters incorrectly:
JOÃO ANTÓNIO (Flat File) -> JOAO ANTÓNIO (Database).
I have already tried to configure the flat file source as being Unicode compliant, but then, after I configure each column with a Unicode compliant data type, I can't update the destination step, since SSIS infers data types directly from the database and doesn't allow me to change them.
Is there a way to keep the flat file source as being CP 1252, but also importing the correct characters? What am I missing here?
Thanks to Larnu's comment I've been able to get around this problem.
Since SSIS doesn't allow implicit data conversion, I needed to set up a data conversion step first (a Derived Column Transformation). Since the source columns were already set up as DT_STR with the 65001 code page, I had to configure new derived columns from an expression, converting from the source code page into the destination code page, with the following expression:
(DT_STR, 50, 1252)<SourceColumn>
Here a direct cast to DT_STR is made, stating that the column will have a maximum size of 50 characters and that the data will be represented with the 1252 code page.
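An alternative work-around, offered only as a sketch and not as the fix described above: re-encode the flat file from UTF-8 to Windows-1252 before SSIS reads it, so the source and destination code pages already match. The file names are placeholders.

# Characters with no Windows-1252 equivalent are replaced rather than
# aborting the conversion.
with open("source_utf8.csv", encoding="utf-8") as src, \
     open("source_1252.csv", "w", encoding="cp1252", errors="replace") as dst:
    for line in src:
        dst.write(line)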

SAP Data Services .csv data file load from Excel with special characters

I am trying to load data from an Excel .csv file to a flat file format to use as a datasource in a Data Services job data flow which then transfers the data to an SQL-Server (2012) database table.
I consistently lose 1 in 6 records.
I have tried various parameter values in the file format definition and settled on setting Adaptable file scheme to "Yes", file type "delimited", column delimiter "comma", row delimiter {windows new line}, text delimiter ", language eng(English), and all else as defaults.
I have also set "write errors to file" to "yes" but it just creates an empty error file (I expected the 6,000 odd unloaded rows to be in here).
If we strip out the three columns containing special characters (visible in Excel), it loads a treat, so I think these characters are the problem.
The thing is, we need the data in those columns and unfortunately, this .csv file is as good a data source as we are likely to get and it is always likely to contain special characters in these three columns so we need to be able to read it in if possible.
Should I try to specifically strip the columns in the Query source component of the dataflow? Am I missing a data-cleansing trick in the query or file format definition?
OK, so I didn't get the answer I was looking for, but I did get it to work by setting the "Row within Text String" parameter to "Row delimiter".
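For anyone hitting the same silent record loss, here is a minimal diagnostic sketch (the file name is a placeholder): it lists the rows that contain characters outside the Latin-1 range, which is one quick way to confirm that the special characters are what is being dropped.

with open("source_from_excel.csv", encoding="utf-8", errors="replace") as f:
    for lineno, line in enumerate(f, start=1):
        # Flag any row with a character that cannot be represented in Latin-1;
        # these are the rows most likely to be rejected by the file format.
        offenders = {ch for ch in line if ord(ch) > 0xFF}
        if offenders:
            print(f"row {lineno}: {''.join(sorted(offenders))}")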

Import CSV data into SQL Server

I have data in the csv file similar to this:
Name,Age,Location,Score
"Bob, B",34,Boston,0
"Mike, M",76,Miami,678
"Rachel, R",17,Richmond,"1,234"
While trying to BULK INSERT this data into a SQL Server table, I encountered two problems.
If I use FIELDTERMINATOR=',' then it splits the first (and sometimes the last) column
The last column is an integer column, but it has quotes and a comma thousands separator whenever the number is greater than 1000
Is there a way to import this data (using XML Format File or whatever) without manually parsing the csv file first?
I appreciate any help. Thanks.
You can parse the file with FileHelpers: http://filehelpers.sourceforge.net/
With that result, use the approach from SQL Bulkcopy YYYYMMDD problem, or feed it straight into SqlBulkCopy.
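If you would rather pre-process the file yourself than pull in FileHelpers, here is a sketch of the same idea in Python (the connection string, file name, and dbo.Scores table are placeholders, and pyodbc is assumed to be installed): parse the quoted fields with the csv module, strip the thousands separator from Score, and insert the cleaned rows.

import csv
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
cursor = conn.cursor()

with open("scores.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)      # handles the quoted "Bob, B" and "1,234" fields
    for row in reader:
        cursor.execute(
            "INSERT INTO dbo.Scores (Name, Age, Location, Score) VALUES (?, ?, ?, ?)",
            row["Name"],
            int(row["Age"]),
            row["Location"],
            int(row["Score"].replace(",", "")),   # drop the thousands separator
        )

conn.commit()
conn.close()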
Use MySQL's LOAD DATA:
LOAD DATA LOCAL INFILE 'path-to-/filename.csv' INTO TABLE `sql_tablename`
CHARACTER SET 'utf8'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
IGNORE 1 LINES;
The OPTIONALLY ENCLOSED BY '\"' part (the escape character plus the quote) keeps quoted values such as "Bob, B" together as a single field in the first column.
IGNORE 1 LINES leaves out the header row with the field names.
The CHARACTER SET 'utf8' line is optional, but good to use if names contain diacritics, as in José.
