Greek language - X Cart - database

I recently moved an old website of mine to a different web server. I uploaded the database and the website files, checked the connections, and everything is running smoothly.
The only thing I can't fix is that the Greek text is displayed as "????". I checked the database and everything there is correct: the letters show up and the encoding is utf8. So I ended up thinking it is X-Cart's problem. What can I try? The X-Cart version is 4.4.1.

You have to check these points:
1) Your database.sql file to import is UTF-8. Greek symbols are readable in a text editor:
aim-server[~/tmp]$ file -ib database.sql
text/plain; charset=utf-8
aim-server[~/tmp]$ grep ελληνικά database.sql
INSERT INTO `xcart_languages` VALUES ('el','lbl_categories','Categories ελληνικά','Labels');
aim-server[~/tmp]$
2) Every MySQL variable is UTF-8. Greek symbols are readable in a MySQL client:
[aim_xcart_4_4_1_gold]>show variables like '%colla%';
+----------------------+-----------------+
| Variable_name | Value |
+----------------------+-----------------+
| collation_connection | utf8_general_ci |
| collation_database | utf8_general_ci |
| collation_server | utf8_general_ci |
+----------------------+-----------------+
3 rows in set (0,00 sec)
[aim_xcart_4_4_1_gold]>show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | utf8 |
| character_set_connection | utf8 |
| character_set_database | utf8 |
| character_set_filesystem | binary |
| character_set_results | utf8 |
| character_set_server | utf8 |
| character_set_system | utf8 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0,00 sec)
[aim_xcart_4_4_1_gold]>select * from xcart_languages where name='lbl_categories';
+------+----------------+-----------------------------+--------+
| code | name | value | topic |
+------+----------------+-----------------------------+--------+
| en | lbl_categories | Categories ελληνικά | Labels |
3) Charset is UTF-8 on the 'Main page :: Edit languages :: Greek' page
4) mysql_query("SET NAMES 'utf8'"); is added to the include/func/func.db.php file according to
https://help.x-cart.com/index.php?title=X-Cart:FAQs#How_do_I_set_up_my_X-Cart_to_support_UTF-8.3F
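For point 4, here is a minimal sketch of what that change might look like, assuming a legacy mysql_* connection wrapper inside include/func/func.db.php (the wrapper name is illustrative; the actual function in X-Cart 4.4.x may differ):
function db_connect($host, $user, $password, $database)
{
    // Hypothetical wrapper; X-Cart 4.4.x still uses the old mysql_* API.
    $link = mysql_connect($host, $user, $password);
    mysql_select_db($database, $link);
    // Force the client connection charset to UTF-8 so Greek text is not
    // turned into '????' on the way into or out of MySQL.
    mysql_query("SET NAMES 'utf8'", $link);
    return $link;
}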

Related

Snowflake - upload CSV - issue with only one accented character

I have an issue with an accented character when I upload a CSV file and then copy it into a table. The weird thing is that most accented letters are just fine, but one is being replaced by '�' when queried.
Another thing: when I use an INSERT statement, there is no issue whatsoever.
I use an internal stage. Here's the file format definition:
create or replace file format MY_FORMAT
type = csv
record_delimiter = '\n'
field_delimiter = ';'
field_optionally_enclosed_by = '"'
skip_header = 1
null_if = ('NULL', 'null')
empty_field_as_null = true
compression = gzip
validate_UTF8=false
skip_blank_lines = true;
The file was built in Excel and saved as CSV UTF-8. No other issues, no errors, all my rows are uploaded; it's just that one character, which is supposed to be a "û", that turns out as "�".
Any ideas?
Thanks,
JFS.
It could be an issue with the terminal being used. Please check in a different terminal or the web UI.
I simulated the scenario and I get the expected result. Please refer to the steps below:
#### Data contents for so_testfile.csv
id;name
1;"û"
2;"û"
3;"û"
4;"û"
SNOWFLAKE1#COMPUTE_WH@TEST_DB.PUBLIC>create or replace stage so_my_stage file_format=SO_MY_FORMAT;
+----------------------------------------------+
| status |
|----------------------------------------------|
| Stage area SO_MY_STAGE successfully created. |
+----------------------------------------------+
1 Row(s) produced. Time Elapsed: 0.138s
SNOWFLAKE1#COMPUTE_WH@TEST_DB.PUBLIC>put file://c:\snowflake\so_testfile.csv @so_my_stage;
+-----------------+--------------------+-------------+-------------+--------------------+--------------------+----------+---------+
| source | target | source_size | target_size | source_compression | target_compression | status | message |
|-----------------+--------------------+-------------+-------------+--------------------+--------------------+----------+---------|
| so_testfile.csv | so_testfile.csv.gz | 39 | 64 | NONE | GZIP | UPLOADED | |
+-----------------+--------------------+-------------+-------------+--------------------+--------------------+----------+---------+
1 Row(s) produced. Time Elapsed: 1.100s
SNOWFLAKE1#COMPUTE_WH@TEST_DB.PUBLIC>select $1,$2 from @so_my_stage;
+----+----+
| $1 | $2 |
|----+----|
| 1 | û |
| 2 | û |
| 3 | û |
| 4 | û |
+----+----+
4 Row(s) produced. Time Elapsed: 0.308s
SNOWFLAKE1#COMPUTE_WH@TEST_DB.PUBLIC>select * from SO_TEST_TAB;
+----+------+
| ID | COL1 |
|----+------|
+----+------+
0 Row(s) produced. Time Elapsed: 0.388s
SNOWFLAKE1#COMPUTE_WH@TEST_DB.PUBLIC>copy into SO_TEST_TAB from @so_my_stage;
+--------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------+
| file | status | rows_parsed | rows_loaded | error_limit | errors_seen | first_error | first_error_line | first_error_character | first_error_column_name |
|--------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------|
| so_my_stage/so_testfile.csv.gz | LOADED | 4 | 4 | 1 | 0 | NULL | NULL | NULL | NULL |
+--------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------+
1 Row(s) produced. Time Elapsed: 0.833s
SNOWFLAKE1#COMPUTE_WH@TEST_DB.PUBLIC>select * from so_test_tab;
+----+------+
| ID | COL1 |
|----+------|
| 1 | û |
| 2 | û |
| 3 | û |
| 4 | û |
+----+------+
4 Row(s) produced. Time Elapsed: 0.263s
SNOWFLAKE1#COMPUTE_WH@TEST_DB.PUBLIC>
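As a side note, if re-saving the file as UTF-8 were not an option, the CSV file format could also declare the file's actual encoding so Snowflake transcodes it on load. A sketch based on the format definition above (the ISO-8859-1 value is only an assumption about what Excel produced; replace it with the real source encoding):
create or replace file format MY_FORMAT
type = csv
record_delimiter = '\n'
field_delimiter = ';'
field_optionally_enclosed_by = '"'
skip_header = 1
null_if = ('NULL', 'null')
empty_field_as_null = true
compression = gzip
encoding = 'ISO-8859-1'   -- assumed source encoding, not taken from the question
skip_blank_lines = true;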
It turns out that encoding the file properly as CSV UTF-8 from Excel worked. The "û" is now displayed correctly, just like all the other accented letters we have in French.
Thanks for your help.
JFS.

Map bash table output to array

How can I map the values I got from the name column into an array that I can later use in my bash script?
+------------------------------+----------+-----------+---------+
| name | status | update | version |
+------------------------------+----------+-----------+---------+
| enable-jquery-migrate-helper | inactive | none | 1.0.1 |
| gravityforms | inactive | none | 2.4.17 |
| gutenberg | inactive | none | 8.8.0 |
| redirection | inactive | none | 4.8 |
| regenerate-thumbnails | inactive | none | 3.1.3 |
| safe-svg | inactive | none | 1.9.9 |
| weglot | inactive | none | 3.1.9 |
| wordpress-seo | inactive | available | 14.8 |
+------------------------------+----------+-----------+---------+
I already tried the following, but it only saves the names of the headers in the table:
IFS=$'\n' read -r -d '' -a my_array < <( wp plugin list --status=inactive --skip-plugins && printf '\0' )
echo $my_array
name status update version
After I have retrieved the values, I want to loop over them to add them to an array.
It's better to use the CSV output format rather than the default table format if your intent is to process the result with a shell or awk script:
wp plugin list --status=inactive --skip-plugins --format=csv
which would output this:
name,status,update,version
enable-jquery-migrate-helper,inactive,none,1.0.1
gravityforms,inactive,none,2.4.17
gutenberg,inactive,none,8.8.0
redirection,inactive,none,4.8
regenerate-thumbnails,inactive,none,3.1.3
safe-svg,inactive,none,1.9.9
weglot,inactive,none,3.1.9
wordpress-seo,inactive,available,14.8
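From there, one way to get the plugin names into a bash array is to drop the header row and keep only the first CSV column. A sketch, assuming the same wp plugin list invocation as above and bash 4+ for mapfile:
#!/usr/bin/env bash
# tail -n +2 drops the "name,status,update,version" header row;
# cut -d',' -f1 keeps only the first (name) column.
mapfile -t plugin_names < <(
  wp plugin list --status=inactive --skip-plugins --format=csv | tail -n +2 | cut -d',' -f1
)

for plugin in "${plugin_names[@]}"; do
  echo "inactive plugin: $plugin"
done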

TSql Preceding Backslash Changes column Value

So, I fat-fingered something when typing my commands and ended up discovering that a preceding backslash before a column name makes the result set return zeros, for multiple data types.
Why is that?
It appears SELECT \ just returns 0.00. The column name following it is added as a label, similar to using AS. I am not sure why \ equates to 0.00.
The answer as gleaned from comments is that SQL Server considers a backslash a currency symbol. This can be observed with the results of this query, which returns zero values of type money:
SELECT \ AS Backslash, ¥ AS Yen, $ AS Dollar;
Results:
+-----------+------+--------+
| Backslash | Yen | Dollar |
+-----------+------+--------+
| 0.00 | 0.00 | 0.00 |
+-----------+------+--------+
Getting result set metadata using sp_describe_first_result_set shows the money type is returned:
EXEC sp_describe_first_result_set N'SELECT \ AS Backslash, ¥ AS Yen, $ AS Dollar;';
Results (extraneous columns omitted):
+-----------+----------------+-----------+-------------+----------------+------------------+------------+-----------+-------+
| is_hidden | column_ordinal | name | is_nullable | system_type_id | system_type_name | max_length | precision | scale |
+-----------+----------------+-----------+-------------+----------------+------------------+------------+-----------+-------+
| 0 | 1 | Backslash | 0 | 60 | money | 8 | 19 | 4 |
| 0 | 2 | Yen | 0 | 60 | money | 8 | 19 | 4 |
| 0 | 3 | Dollar | 0 | 60 | money | 8 | 19 | 4 |
+-----------+----------------+-----------+-------------+----------------+------------------+------------+-----------+-------+
The behavior is especially unintuitive in this case because SQL Server allows one to specify a currency prefix without an amount as a money literal. The resultant currency value is zero when no amount is specified.
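A quick illustration of that rule (expected values shown in the comments):
SELECT $ AS NoAmount,       -- 0.00  (currency prefix only; the amount defaults to zero)
       $12.34 AS Amount,    -- 12.34
       \20 AS Backslash20;  -- 20.00 (the backslash is accepted as a currency symbol)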
This Wiki article calls out the general confusion with the backslash and yen mark. This can be observed from SSMS by running the query below to return a backslash string literal. When the results are displayed with font MS Gothic, for example, the resultant character glyph is ¥ (yen mark) instead of the expected backslash:
SELECT '\' AS Backslash;
Results:
+-----------+
| Backslash |
+-----------+
| ¥ |
+-----------+

Comma-separated value to rows and split other columns also /2 or /3

I have a table like this:
MeterSizeGroup | WrenchTime | DriveTime
1,2,3          | 7.843      | 5.099
I want to separate the comma-delimited string into three rows, like this:
MeterSizeGroup | WrenchTime | DriveTime
1              | 2.614      | 1.699
2              | 2.614      | 1.699
3              | 2.614      | 1.699
Please help me write a query for this kind of split. It has to split in such a way that WrenchTime and DriveTime are also divided by 3.
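One possible approach, sketched under the assumption that this is SQL Server 2016 or later (for STRING_SPLIT) and that the table is named MeterReadings; the column names are taken from the question:
SELECT s.value                AS MeterSizeGroup,
       t.WrenchTime / c.cnt   AS WrenchTime,   -- 7.843 / 3 ≈ 2.614
       t.DriveTime  / c.cnt   AS DriveTime     -- 5.099 / 3 ≈ 1.699
FROM MeterReadings AS t
CROSS APPLY STRING_SPLIT(t.MeterSizeGroup, ',') AS s
CROSS APPLY (SELECT COUNT(*) AS cnt
             FROM STRING_SPLIT(t.MeterSizeGroup, ',')) AS c;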

How can we validate tabular data in robot framework?

In Cucumber, we can directly validate database table content in tabular format by mentioning the values in the following format:
| Type | Code | Amount |
| A | HIGH | 27.72 |
| B | LOW | 9.28 |
| C | LOW | 4.43 |
Do we have something similar in Robot Framework? I need to run a query on the DB, and the output looks like the table given above.
No, there is nothing built in to do exactly what you say. However, it's fairly straightforward to write a keyword that takes a table of data and compares it to another table of data.
For example, you could write a keyword that takes the result of the query and then rows of information (though the rows must all have exactly the same number of columns):
| | ${ResultOfQuery}= | <do the database query>
| | Database should contain | ${ResultOfQuery}
| | ... | #Type | Code | Amount
| | ... | A | HIGH | 27.72
| | ... | B | LOW | 9.28
| | ... | C | LOW | 4.43
Then it's just a matter of iterating over all of the arguments three at a time, and checking if the data has that value. It would look something like this:
*** Keywords ***
| Database should contain
| | [Arguments] | ${actual} | @{expected}
| | :FOR | ${type} | ${code} | ${amount} | IN | @{expected}
| | | <verify that the values are in ${actual}>
Even easier might be to write a python-based keyword, which makes it a bit easier to iterate over datasets.
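A minimal sketch of such a keyword, assuming the query result is an iterable of rows and that the expected values arrive three cells at a time as in the example above; every name here is illustrative:
def database_should_contain(actual, *expected):
    """Fail unless every (Type, Code, Amount) triple in expected appears in actual."""
    # Normalise the query result to a set of string tuples for comparison.
    rows = {tuple(str(cell) for cell in row) for row in actual}
    # Walk the expected cells three at a time: Type, Code, Amount.
    for i in range(0, len(expected), 3):
        triple = tuple(str(cell) for cell in expected[i:i + 3])
        if triple not in rows:
            raise AssertionError("row %s not found in query result" % (triple,))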
