T-SQL Preceding Backslash Changes Column Value - sql-server

So, I fat-fingered something while typing my commands and ended up discovering that a preceding backslash before a column name makes the result set return zeros - for multiple data types.
Why is that?
Screen cap for example:

It appears SELECT \ just returns 0.00. The column name that follows is used as a label, similar to using AS. I am not sure why \ equates to 0.00.

The answer as gleaned from comments is that SQL Server considers a backslash a currency symbol. This can be observed with the results of this query, which returns zero values of type money:
SELECT \ AS Backslash, ¥ AS Yen, $ AS Dollar;
Results:
+-----------+------+--------+
| Backslash | Yen  | Dollar |
+-----------+------+--------+
| 0.00      | 0.00 | 0.00   |
+-----------+------+--------+
Getting result set metadata using sp_describe_first_result_set shows the money type is returned:
EXEC sp_describe_first_result_set N'SELECT \ AS Backslash, ¥ AS Yen, $ AS Dollar;';
Results (extraneous columns omitted):
+-----------+----------------+-----------+-------------+----------------+------------------+------------+-----------+-------+
| is_hidden | column_ordinal | name      | is_nullable | system_type_id | system_type_name | max_length | precision | scale |
+-----------+----------------+-----------+-------------+----------------+------------------+------------+-----------+-------+
| 0         | 1              | Backslash | 0           | 60             | money            | 8          | 19        | 4     |
| 0         | 2              | Yen       | 0           | 60             | money            | 8          | 19        | 4     |
| 0         | 3              | Dollar    | 0           | 60             | money            | 8          | 19        | 4     |
+-----------+----------------+-----------+-------------+----------------+------------------+------------+-----------+-------+
The behavior is especially unintuitive in this case because SQL Server allows one to specify a currency prefix without an amount as a money literal. The resultant currency value is zero when no amount is specified.
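For comparison, a currency prefix with an amount yields a non-zero money literal (the values below are illustrative; the backslash form follows from the same parsing rule described above):
SELECT $123.45 AS Dollar, \123 AS Backslash;
-- Both should come back as money: 123.45 and 123.00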
This Wiki article calls out the general confusion with the backslash and yen mark. This can be observed from SSMS by running the query below to return a backslash string literal. When the results are displayed with font MS Gothic, for example, the resultant character glyph is ¥ (yen mark) instead of the expected backslash:
SELECT '\' AS Backslash;
Results:
+-----------+
| Backslash |
+-----------+
| ¥         |
+-----------+

Related

Removing trailing zeroes after decimal Snowflake

I have been trying to remove trailing zeroes from a numeric column after the decimal. For example:
0.978219150000 -> 0.97821915
0.650502591918 -> 0.650502591918
0.975479450000 -> 0.97547945
The data type is NUMBER(38,12). Is there any way to remove the trailing zeroes as I mentioned above?
You can try to cast to float:
create or replace table test (a NUMBER(38,12));
insert into test values (0.97821915), (0.650502591918), (0.975479450000);
select a from test;
+----------------+
| A              |
|----------------|
| 0.978219150000 |
| 0.650502591918 |
| 0.975479450000 |
+----------------+
select a::float from test;
+--------------+
| A::FLOAT     |
|--------------|
| 0.97821915   |
| 0.6505025919 |
| 0.97547945   |
+--------------+
However, depending on what you want to achieve, using a floating-point number might not be a good idea due to potential rounding issues.
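For example (an illustrative value, not from the question): Snowflake's FLOAT is an IEEE 754 double with roughly 15-17 significant digits, so a NUMBER(38,12) value carrying more significant digits than that can change under the cast:
select 1234567.890123456789::number(38,12) as exact_value,
       1234567.890123456789::number(38,12)::float as float_value;
-- float_value may no longer match the exact decimal, since 19 significant
-- digits exceed what a double can represent exactly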
UPDATE:
I tried a regexp version; I'm not sure if I missed any test cases:
create or replace table test (a NUMBER(38,12));
insert into test values
(0.97),
(0.650502591918),
(0.975479450000),
(10000),
(1450000),
(12.2000),
(14.0200);
select regexp_replace(
    a::varchar,
    '^([0-9]+)$|' ||
    '^([0-9]+)\.0*$|' ||
    '^([0-9]+\.[0-9]{1,}[1-9])0*$|' ||
    '^([0-9]+\.[1-9])0*$', '\\1\\2\\3\\4'
) as a from test;
+----------------+
| A              |
|----------------|
| 0.97           |
| 0.650502591918 |
| 0.97547945     |
| 10000          |
| 1450000        |
| 12.2           |
| 14.02          |
+----------------+
Where:
^([0-9]+)$ -> covers integers like 10000
^([0-9]+)\.0*$ -> covers integer values like 10.000000
^([0-9]+\.[0-9]{1,}[1-9])0*$ -> covers values like 14.0200000
^([0-9]+\.[1-9])0*$ -> covers values like 12.20000 or 0.97540000
If this is just a formatting/display issue, you can use the to_varchar() function with a fixed decimal format string:
select 123.45::number(38,12); -- 123.450000000000
select to_varchar(123.45::number(38,12), '99G999G999G999G999G999G999G999G999D999999999999'); -- 123.45
Since the format string is a bit long, it may make sense to put it in a UDF to make it more compact in SQL:
create or replace function DISPLAY_38_12(N number(38,12))
returns varchar
language sql
as
$$
to_varchar(N, '99G999G999G999G999G999G999G999G999D999999999999')
$$;
select DISPLAY_38_12(123.45::number(38,12));

Comma-separated value to rows and split other columns also /2 or /3

I have a table like this:
MeterSizeGroup | WrenchTime | DriveTime
1,2,3          | 7.843      | 5.099
I want to separate the comma-delimited string into three rows, like this:
MeterSizeGroup | WrenchTime | DriveTime
1              | 2.614      | 1.699
2              | 2.614      | 1.699
3              | 2.614      | 1.699
Please help me write a query for this kind of split. It has to split in such a way that WrenchTime and DriveTime are also divided by 3.
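A minimal sketch of one way to do this, assuming SQL Server 2016+ (for STRING_SPLIT); the table name MeterReadings is a placeholder, while the column names come from the sample above:
SELECT
    s.value              AS MeterSizeGroup,
    m.WrenchTime / cnt.n AS WrenchTime,
    m.DriveTime / cnt.n  AS DriveTime
FROM MeterReadings AS m
CROSS APPLY STRING_SPLIT(m.MeterSizeGroup, ',') AS s
CROSS APPLY (
    -- number of comma-separated items, used to divide the time columns
    SELECT COUNT(*) AS n
    FROM STRING_SPLIT(m.MeterSizeGroup, ',')
) AS cnt;
Each value from the list becomes its own row, and both time columns are divided by however many values the list contains (3 in the example).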

Greek language - X-Cart

I recently moved an old website of mine to a different webserver. I uploaded the database and the website files, checked the connections, and everything is running smoothly.
The only thing that I can't fix is that the Greek language is displayed as "????". I checked the database and everything is correct: the letters show up and the encoding is utf8. So I ended up thinking it is X-Cart's problem. What can I try? The X-Cart version is 4.4.1.
You have to check these points:
1) Your database.sql file to import is UTF-8. Greek symbols are readable in a text editor:
aim-server[~/tmp]$ file -ib database.sql
text/plain; charset=utf-8
aim-server[~/tmp]$ grep ελληνικά database.sql
INSERT INTO `xcart_languages` VALUES ('el','lbl_categories','Categories ελληνικά','Labels');
aim-server[~/tmp]$
2) Every MySQL variable is UTF-8. Greek symbols are readable in a mysql client:
[aim_xcart_4_4_1_gold]>show variables like '%colla%';
+----------------------+-----------------+
| Variable_name        | Value           |
+----------------------+-----------------+
| collation_connection | utf8_general_ci |
| collation_database | utf8_general_ci |
| collation_server | utf8_general_ci |
+----------------------+-----------------+
3 rows in set (0,00 sec)
[aim_xcart_4_4_1_gold]>show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0,00 sec)
[aim_xcart_4_4_1_gold]>select * from xcart_languages where name='lbl_categories';
+------+----------------+---------------------+--------+
| code | name           | value               | topic  |
+------+----------------+---------------------+--------+
| en   | lbl_categories | Categories ελληνικά | Labels |
+------+----------------+---------------------+--------+
3) Charset is UTF-8 on the 'Main page :: Edit languages :: Greek' page
4) mysql_query("SET NAMES 'utf8'"); is added to the include/func/func.db.php file according to
https://help.x-cart.com/index.php?title=X-Cart:FAQs#How_do_I_set_up_my_X-Cart_to_support_UTF-8.3F
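Beyond the server variables, it may also be worth confirming the character set declared on the table and its columns (table name taken from the queries above):
SHOW CREATE TABLE xcart_languages;
SHOW FULL COLUMNS FROM xcart_languages;
-- both should report utf8 / utf8_general_ci rather than latin1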

SQL Server: Islands and Gaps

I'm struggling with an "Islands and Gaps" issue. This is for SQL Server 2008 / 2012 (we have databases on both).
I have a table which tracks "available" Serial-#'s for a Pass Outlet; i.e., Bus Passes, Admissions Tickets, Disneyland Tickets, etc. Those Serial-#'s are VARCHAR, and can be any combination of numbers and characters... any length, up to the max value of the defined column... which is VARCHAR(30). And this is where I'm mightily struggling with the syntax/design of a VIEW.
The table (IM_SER) which contains all this data has a primary key consisting of:
ITEM_NO...VARCHAR(20),
SERIAL_NO...VARCHAR(30)
In many cases... particularly with different types of the "Bus Passes" involved, those Serial-#'s could easily track into the TENS of THOUSANDS. What is needed... is a simple view in SQL Server... which simply outputs the CONSECUTIVE RANGES of Available Serial-#'s...until a GAP is found (i.e. a BREAK in the sequences). For example, say we have the following Serial-#'s on hand, for a given Item-#:
123
124
125
139
140
ABC123
ABC124
ABC126
XYZ240003
XYY240004
In my example above, the output would be displayed as follows:
123 -to- 125
139 -to- 140
ABC123 -to- ABC124
ABC126 -to- ABC126
XYZ240003 -to- XYZ240004
In total, there would be 10 Serial-#'s...but since we're outputting the sequential ranges...only 5-lines of output would be necessary. Does this make sense? Please let me know...and, again, THANK YOU!...Mark
This should get you started... the fun part will be determining whether there are gaps. You will have to handle each serial format a little differently to detect them...
select x.item_no, x.s_format, x.s_length, x.serial_no,
       LAG(x.serial_no) OVER (PARTITION BY x.item_no, x.s_format, x.s_length
                              ORDER BY x.item_no, x.s_format, x.s_length, x.serial_no) PreviousValue,
       LEAD(x.serial_no) OVER (PARTITION BY x.item_no, x.s_format, x.s_length
                               ORDER BY x.item_no, x.s_format, x.s_length, x.serial_no) NextValue
from
(
    select item_no, serial_no,
           len(serial_no) as S_LENGTH,
           case
               WHEN PATINDEX('%[0-9]%', serial_no) > 0 AND
                    PATINDEX('%[a-z]%', serial_no) = 0 THEN 'NUMERIC'
               WHEN PATINDEX('%[0-9]%', serial_no) > 0 AND
                    PATINDEX('%[a-z]%', serial_no) > 0 THEN 'ALPHANUMERIC'
               ELSE 'ALPHA'
           end as S_FORMAT
    from table1
) x
order by item_no, s_format, s_length, serial_no
http://sqlfiddle.com/#!3/5636e2/7
| item_no | s_format     | s_length | serial_no | PreviousValue | NextValue |
|---------|--------------|----------|-----------|---------------|-----------|
| 1       | ALPHA        | 4        | ABCD      | (null)        | ABCF      |
| 1       | ALPHA        | 4        | ABCF      | ABCD          | (null)    |
| 1       | ALPHANUMERIC | 6        | ABC123    | (null)        | ABC124    |
| 1       | ALPHANUMERIC | 6        | ABC124    | ABC123        | ABC126    |
| 1       | ALPHANUMERIC | 6        | ABC126    | ABC124        | (null)    |
| 1       | ALPHANUMERIC | 9        | XYY240004 | (null)        | XYZ240003 |
| 1       | ALPHANUMERIC | 9        | XYZ240003 | XYY240004     | (null)    |
| 1       | NUMERIC      | 3        | 123       | (null)        | 124       |
| 1       | NUMERIC      | 3        | 124       | 123           | 125       |
| 1       | NUMERIC      | 3        | 125       | 124           | 139       |
| 1       | NUMERIC      | 3        | 139       | 125           | 140       |
| 1       | NUMERIC      | 3        | 140       | 139           | (null)    |
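Building on that, here is a minimal sketch of collapsing the purely numeric serials into ranges with the classic row-number-difference trick, assuming the same table1(item_no, serial_no) as above; the alphanumeric formats would need their own ordering/increment logic:
with numeric_serials as (
    select item_no,
           cast(serial_no as bigint) as serial_num
    from table1
    where serial_no not like '%[^0-9]%'   -- purely numeric serials only
), grouped as (
    select item_no, serial_num,
           serial_num - row_number() over (partition by item_no
                                           order by serial_num) as grp
    from numeric_serials
)
select item_no,
       min(serial_num) as range_start,
       max(serial_num) as range_end
from grouped
group by item_no, grp
order by item_no, range_start;
For the sample data this yields 123-125 and 139-140, matching the first two lines of the desired output.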

MySQL import into InnoDB table severely spikes at a certain point

I'm trying to migrate a 30GB database from one server to another.
The short story is that at a certain point in the process, the time it takes to import records increases severely, as a spike. The following is from using the SOURCE command to import a chunk of 500k records (out of roughly 25-30 million across the database) that was exported as a SQL file and tunnelled over SSH to the new server:
...
Query OK, 2871 rows affected (0.73 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2870 rows affected (0.98 sec)
Records: 2870 Duplicates: 0 Warnings: 0
Query OK, 2865 rows affected (0.80 sec)
Records: 2865 Duplicates: 0 Warnings: 0
Query OK, 2871 rows affected (0.87 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (2.60 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2866 rows affected (7.53 sec)
Records: 2866 Duplicates: 0 Warnings: 0
Query OK, 2879 rows affected (8.70 sec)
Records: 2879 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (7.53 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2873 rows affected (10.06 sec)
Records: 2873 Duplicates: 0 Warnings: 0
...
The spikes eventually average out to 16-18 seconds per ~2800 rows affected. Granted, I don't usually use SOURCE for a large import, but for the sake of showing legitimate output, I used it to understand when the spikes happen. Using the mysql command or mysqlimport yields the same results. Even piping the results directly into the new database instead of going through a SQL file produces these spikes.
As far as I can tell, this happens after a certain number of records have been inserted into a table. The first time I boot up the server and import a chunk that size, it goes through just fine; the spikes only start once roughly that estimated amount has been handled. I can't correlate it exactly because I haven't replicated the issue consistently enough to conclude anything definitive. There are ~20 tables with under 500,000 records each that all imported just fine when they were loaded through a single command. This seems to only happen to tables that have an excessive amount of data.

Granted, the solutions I've come across so far seem to only address the natural slowdown that occurs as an import runs over time. (The expected outcome in my case was that, by the end of importing 500k records, it would take 2-3 seconds per ~2800 rows, whereas those questions were addressing why it shouldn't take that long at the end.) This comes from a single SugarCRM table called 'campaign_log', which has ~9 million records. I was able to import it in chunks of 500k back onto the old server I'm migrating off of without these spikes occurring, so I assume this has to do with my new server's configuration.

Another thing: whenever these spikes occur, the table being imported into reports its record count in an odd way. I know InnoDB gives count estimates, but the number isn't shown with the ~ that indicates an estimate; it's usually accurate, and refreshing the table doesn't change the displayed count (this is based on what it reports through phpMyAdmin).
Here are the InnoDB system variables, system specs, and export command I'm using on the new server:
INNODB System Vars:
+---------------------------------+------------------------+
| Variable_name | Value |
+---------------------------------+------------------------+
| have_innodb | YES |
| ignore_builtin_innodb | OFF |
| innodb_adaptive_flushing | ON |
| innodb_adaptive_hash_index | ON |
| innodb_additional_mem_pool_size | 8388608 |
| innodb_autoextend_increment | 8 |
| innodb_autoinc_lock_mode | 1 |
| innodb_buffer_pool_instances | 1 |
| innodb_buffer_pool_size | 8589934592 |
| innodb_change_buffering | all |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_data_home_dir | |
| innodb_doublewrite | ON |
| innodb_fast_shutdown | 1 |
| innodb_file_format | Antelope |
| innodb_file_format_check | ON |
| innodb_file_format_max | Antelope |
| innodb_file_per_table | OFF |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | fsync |
| innodb_force_load_corrupted | OFF |
| innodb_force_recovery | 0 |
| innodb_io_capacity | 200 |
| innodb_large_prefix | OFF |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_buffer_size | 8388608 |
| innodb_log_file_size | 5242880 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 75 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_old_blocks_pct | 37 |
| innodb_old_blocks_time | 0 |
| innodb_open_files | 300 |
| innodb_print_all_deadlocks | OFF |
| innodb_purge_batch_size | 20 |
| innodb_purge_threads | 1 |
| innodb_random_read_ahead | OFF |
| innodb_read_ahead_threshold | 56 |
| innodb_read_io_threads | 8 |
| innodb_replication_delay | 0 |
| innodb_rollback_on_timeout | OFF |
| innodb_rollback_segments | 128 |
| innodb_spin_wait_delay | 6 |
| innodb_stats_method | nulls_equal |
| innodb_stats_on_metadata | ON |
| innodb_stats_sample_pages | 8 |
| innodb_strict_mode | OFF |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 30 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 0 |
| innodb_thread_sleep_delay | 10000 |
| innodb_use_native_aio | ON |
| innodb_use_sys_malloc | ON |
| innodb_version | 5.5.39 |
| innodb_write_io_threads | 8 |
+---------------------------------+------------------------+
System Specs:
Intel Xeon E5-2680 v2 (Ivy Bridge) 8 Processors
15GB Ram
2x80 SSDs
CMD to Export:
mysqldump -u <olduser> -p<oldpw> <olddb> <table> --verbose --disable-keys --opt | ssh -i <privatekey> <newserver> "cat > <nameoffile>"
Thank you for any assistance. Let me know if there's any other information I can provide.
I figured it out. I increased innodb_log_file_size from 5MB to 1024MB. Not only did the import speed up significantly (it never went above 1 second per ~3000 rows), it also fixed the spikes. There were only 2 spikes in all the records I imported, and after each one the rate immediately went back to under 1 second.
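For reference, a quick way to check the setting before and after (the 1024MB value is the one reported above; on MySQL 5.5 this variable cannot be changed at runtime):
-- current redo log size in bytes (5242880 = the 5MB default shown in the table above)
SHOW VARIABLES LIKE 'innodb_log_file_size';
-- on 5.5 it has to be set in my.cnf (e.g. innodb_log_file_size = 1024M), followed by a
-- clean shutdown, moving the old ib_logfile* files aside, and a restart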
