What differs in stored procedures between Sybase version 12.5 and version 15?
Here's the procedures reference manual for v15, and here is the one for version 12.5... or you might find that the What's New in Adaptive Server Enterprise? guide has some specifics, so you don't have to do the comparison yourself. Happy reading!
I think you should specify your need. For example, if you intend to ask "what is the declaration difference?", the answer could be as below:
Sybase ASE 15.7
create procedure [owner.]procedure_name[;number]
[[(@parameter_name datatype [(length) | (precision [, scale])]
[= default][output]
[, @parameter_name datatype [(length) | (precision [, scale])]
[= default][output]]...)]]
[with {recompile | execute as {owner | caller}} ]
as {SQL_statements | external name dll_name}
Sybase ASE 12.5
create procedure [owner.]procedure_name[;number]
[[(]@parameter_name datatype [(length) | (precision [, scale])]
[= default][output]
[, @parameter_name datatype [(length) | (precision [, scale])]
[= default][output]]...[)]]
[with recompile]
as {SQL_statements | external name dll_name}
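Comparing the two grammars, the visible addition in 15.x is the execute as {owner | caller} choice in the with clause; 12.5 only allows with recompile. A minimal sketch (the procedure name is hypothetical) of a declaration that 15.x accepts and 12.5 rejects:

```sql
-- ASE 15.x only: run the procedure under the caller's permissions
-- rather than the owner's. ASE 12.5 supports only "with recompile" here.
create procedure dbo.who_am_i
with execute as caller
as
    select suser_name()
```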
I am trying to set DATA_RETENTION_TIME_IN_DAYS for a table to a specific value (5), but it fails with an invalid value error. Setting it to 1 works, and setting it to 5 works on another database on the same account.
Are there any other parameters affecting the maximum value other than the Snowflake Edition type, which shouldn't matter since we are using the Enterprise Edition?
ALTER TABLE MY_TABLE SET DATA_RETENTION_TIME_IN_DAYS = 5;
SQL State : 22023
Error Code : 1008
Message : SQL compilation error:
invalid value [5] for parameter 'DATA_RETENTION_TIME_IN_DAYS'
Location : some-file.sql
Line : 4
Statement : ALTER TABLE MY_TABLE SET DATA_RETENTION_TIME_IN_DAYS = 5
According to the docs, the maximum value for the Snowflake Enterprise Edition, which we are using, is 90.
Are there any other parameters affecting the maximum value?
The Time Travel capability depends on the type of the table. The 0-90 value range for Enterprise Edition applies only to permanent tables.
Comparison of Table Types:
+-------------------------------------------+-----+-------------------------------------+
| Type | ... | Time Travel Retention Period (Days) |
+-------------------------------------------+-----+-------------------------------------+
| Temporary | | 0 or 1 (default is 1) |
| Transient | | 0 or 1 (default is 1) |
| Permanent (Standard Edition) | | 0 or 1 (default is 1) |
| Permanent (Enterprise Edition and higher) | | 0 to 90 (default is configurable) |
+-------------------------------------------+-----+-------------------------------------+
TRANSIENT databases have a maximum value of 1 for DATA_RETENTION_TIME_IN_DAYS, even in Enterprise Edition. The database that was causing this error is TRANSIENT.
https://docs.snowflake.com/en/sql-reference/sql/create-database.html
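A quick way to confirm which case you are in (a sketch, using MY_TABLE from the question; the kind column of SHOW TABLES reports whether a table is transient):

```sql
-- The "kind" column distinguishes permanent tables from TRANSIENT ones;
-- a transient table caps DATA_RETENTION_TIME_IN_DAYS at 1 even in
-- Enterprise Edition.
show tables like 'MY_TABLE';

-- On a permanent table in Enterprise Edition, any value from 0 to 90 works:
alter table MY_TABLE set DATA_RETENTION_TIME_IN_DAYS = 5;
```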
How can I convert a time like 2020-06-02 10:40:28.001 to 1591065628 in a TDengine database?
I want to see the time in a different format in the shell. For example, I want to convert the time 2020-06-02 10:40:28.001 to 1591065628. What should I do?
You can use the taos -r option; it will output timestamps as uint64_t epoch values in milliseconds. Dividing by 1000 gives the seconds-precision value you want (1591065628001 ms becomes 1591065628 s).
ubuntu#taos ~ $ taos -r
taos> use test;
Database changed.
taos> select * from tb;
ts | speed | desc |
================================================================
1644216894189 | 1 | test |
Query OK, 1 row(s) in set (0.006103s)
A while ago, I asked "How can I generate a breadcrumb of Categories in pure MySQL?", and a fellow Stack Overflow member provided this neat code for my MySQL needs:
select group_concat(t2.name order by locate(concat('/', t2.id, '/'), concat(t1.path, '/')) separator ' - ') breadcrumb
from mdl_course_categories t1,
mdl_course_categories t2
where locate(concat('/', t2.id, '/'), concat(t1.path, '/'))
Today, I find I need an SQL Server 2016 solution for this combination of functions (group_concat() and locate()) to replicate this functionality. I tried running the code against my SQL Server database, but I am hit with this error message instead:
SQLState: 42000
Error Code: 156
Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near the keyword 'order'.
I checked other questions, but was unable to transfer this knowledge to my needs. How can I replicate this functionality in SQL Server 2016?
Edit: I've read the links that @Bacon Bits provided (thank you!), but doing this in SQL Server 2016 seems as possible as programming a slice of cheese to fly to the moon. Unfortunately upgrading is not an option, so I'm stuck with this hot mess. All I need to do is replace the numbers in the path column with the name as per the id. E.g.
| id      | name          | path              | should display as              |
|---------|---------------|-------------------|--------------------------------|
| 1       | Fruit and Veg | /1                | Fruit and Veg                  |
| 436547  | Fruit         | /1/436547         | Fruit and Veg - Fruit          |
| 4657598 | Apples        | /1/436547/4657598 | Fruit and Veg - Fruit - Apples |
SO FRUSTRATING! Here's my code so far:
select
stuff((',' + t2.name), 1, 1, charindex(concat('/', t2.id, '/'), concat(t1.path, '/')))
from prefix_course_categories t1,
prefix_course_categories t2
where charindex(concat('/', t2.id, '/'), concat(t1.path, '/'))
This produces the following error:
SQLState: 42000
Error Code: 4145
Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]An expression of non-boolean type specified in a context where a condition is expected, near 'ORDER'.
Help appreciated, thank you.
GROUP_CONCAT() doesn't have a good equivalent in MS SQL Server until SQL Server 2017, when the STRING_AGG() function was introduced. SQL Server 2016 and earlier can fake it with the STUFF ... FOR XML PATH method, which is arcane, obnoxious, and irritating, and has a pitfall where XML entities can leak into the output if you don't call it just right. But it does generally perform fairly well.
MySQL's LOCATE() is roughly equivalent to CHARINDEX(), I think. There's also the PATINDEX() function, which is a bit more flexible but doesn't perform as well.
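For completeness, here is a sketch of the breadcrumb query using that STUFF ... FOR XML PATH pattern, assuming the prefix_course_categories table and the id/name/path columns from the question. The ORDER BY goes inside the correlated subquery (putting it inside STUFF directly is what triggered the "Incorrect syntax near 'order'" error), and CHARINDEX(...) > 0 supplies the boolean comparison that MySQL lets you omit:

```sql
select t1.id,
       t1.path,
       stuff((
           -- build ' - Name' fragments for every category on t1's path
           select ' - ' + t2.name
           from prefix_course_categories t2
           -- keep only categories whose id appears in t1.path
           where charindex(concat('/', t2.id, '/'), concat(t1.path, '/')) > 0
           -- order names by their position within the path
           order by charindex(concat('/', t2.id, '/'), concat(t1.path, '/'))
           for xml path(''), type
       ).value('.', 'nvarchar(max)'), 1, 3, '') as breadcrumb
from prefix_course_categories t1;
```

The TYPE directive plus .value() avoids the XML-entity escaping pitfall mentioned above, and STUFF(..., 1, 3, '') strips the leading ' - ' separator.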
I have an SSRS report that was pointed at SQL Server views, which in turn pointed at Oracle tables. I edited the SSRS report's dataset to query directly from the Oracle DB. It seemed like a very simple change until I got this error message:
System.InvalidCastException: Specified cast is not valid.
With the following details: it names the field 'UOM_QTY' and points at
Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i).
The SELECT statement on that field is pretty simple:
, (DELV_RECEIPT.INV_LBS/ITEM_UOM_XREF.CONV_TO_LBS) AS UOM_QTY
Does anyone know what would cause this message, and how to resolve the error? My objective is to use the Oracle data source instead of SQL Server.
Error 1
Severity Code Description Project File Line Suppression State
Warning [rsErrorReadingDataSetField] The dataset ‘dsIngredientCosts’ contains a definition for the Field ‘UOM_QTY’. The data extension returned an error during reading the field. System.InvalidCastException: Specified cast is not valid.
at Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i)
at Oracle.ManagedDataAccess.Client.OracleDataReader.GetValue(Int32 i)
at Microsoft.ReportingServices.DataExtensions.DataReaderWrapper.GetValue(Int32 fieldIndex)
at Microsoft.ReportingServices.DataExtensions.MappingDataReader.GetFieldValue(Int32 aliasIndex) C:\Users\bl0040\Documents\Visual Studio 2015\Projects\SSRS\Project_ssrs2016\Subscription Reports\Feed Ingredient Weekly Price Avg.rdl 0
Error 2
Severity Code Description Project File Line Suppression State
Warning [rsMissingFieldInDataSet] The dataset ‘dsIngredientCosts’ contains a definition for the Field ‘UOM_QTY’. This field is missing from the returned result set from the data source. C:\Users\bl0040\Documents\Visual Studio 2015\Projects\SSRS\Project_ssrs2016\Subscription Reports\Feed Ingredient Weekly Price Avg.rdl 0
Source Tables:
+------------+---------------+-------------+---------------+-----------+
| Source | TABLE_NAME | COLUMN_NAME | DataSize | COLUMN_ID |
+------------+---------------+-------------+---------------+-----------+
| ORACLE | DELV_RECEIPT | INV_LBS | NUMBER (7,0) | 66 |
+------------+---------------+-------------+---------------+-----------+
| ORACLE | ITEM_UOM_XREF | CONV_TO_LBS | NUMBER (9,4) | 3 |
+------------+---------------+-------------+---------------+-----------+
| SQL SERVER | DELV_RECEIPT | INV_LBS | numeric (7,0) | 66 |
+------------+---------------+-------------+---------------+-----------+
| SQL SERVER | ITEM_UOM_XREF | CONV_TO_LBS | numeric (9,4) | 3 |
+------------+---------------+-------------+---------------+-----------+
The error went away after adding a datatype conversion statement to the data selection.
, CAST(DELV_RECEIPT.INV_LBS/ITEM_UOM_XREF.CONV_TO_LBS AS NUMERIC(9,4)) AS UOM_QTY
Can anyone provide some information on why the original query would be a problem and why the CAST would fix these errors? I tried casting the results because someone on the Code Project forum said...
why don't you use typed datasets? you get such head aches just because
of not coding in a type-safe manner. you have a dataset designer in
the IDE which makes the life better, safer, easier and you don't use
it. I really can't understand.
Here is an approach to fix this error with an extension method instead of modifying the SQL query.
public static Decimal MyGetDecimal(this OracleDataReader reader, int i)
{
    try
    {
        // Fast path: works whenever the Oracle NUMBER fits into a .NET decimal.
        return reader.GetDecimal(i);
    }
    catch (System.InvalidCastException)
    {
        // An Oracle NUMBER can carry more significant digits than
        // System.Decimal supports, so GetDecimal() throws. Read the raw
        // OracleDecimal and reduce its precision before converting.
        Oracle.ManagedDataAccess.Types.OracleDecimal hlp = reader.GetOracleDecimal(i);
        Oracle.ManagedDataAccess.Types.OracleDecimal hlp2 = Oracle.ManagedDataAccess.Types.OracleDecimal.SetPrecision(hlp, 27);
        return hlp2.Value;
    }
}
Thank you for this but what happens if your query looks like:
SELECT x.* from x
and .GetDecimal appears nowhere?
Any suggestions in that case? I have created a function in ORACLE itself that rounds all values in a result set to avoid this for basic select statements but this seems wrong for loading updateable datasets...
Obviously this is an old-school approach to getting data.
I have a problem with Hibernate generating SQL that does not work on SQL Server (it works on PostgreSQL without any problems). I have tried to set the Hibernate dialect for SQL Server, but the same SQL is still generated and it still does not work. The HQL query looks like this:
select count(t) from ValidationLog t
The generated SQL looks like this:
select count((vl.dataKey, vl.dataType)) from ValidationLog vl;
So my question is: is there any way around this? I would really like to have the same code for both databases.
According to the JPA specification, your JPQL query is perfectly valid:
4.8 SELECT Clause
...
The SELECT clause has the following
syntax:
select_clause ::= SELECT [DISTINCT] select_expression {, select_expression}*
select_expression ::=
single_valued_path_expression |
aggregate_expression |
identification_variable |
OBJECT(identification_variable) |
constructor_expression
constructor_expression ::=
NEW constructor_name ( constructor_item {, constructor_item}*)
constructor_item ::= single_valued_path_expression | aggregate_expression
aggregate_expression ::=
{ AVG | MAX | MIN | SUM } ([DISTINCT] state_field_path_expression) |
COUNT ([DISTINCT] identification_variable | state_field_path_expression |
single_valued_association_path_expression)
However, you might be a victim of a bug reported in issues like HHH-4044, HHH-3096, HHH-2266 (or even HHH-5419).
Possible workaround: use a state_field_path_expression.
select count(t.someField) from ValidationLog t
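With a state_field_path_expression the aggregate collapses to a single column, so the generated SQL (a sketch, assuming dataKey is one of the entity's mapped fields) is accepted by SQL Server as well as PostgreSQL:

```sql
-- A single-column COUNT instead of count((vl.dataKey, vl.dataType)):
-- SQL Server rejects the row-value expression inside the aggregate,
-- while PostgreSQL happens to accept it.
select count(vl.dataKey) from ValidationLog vl;
```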
The HQL looks wrong to me; it should be:
select count(t.dataKey) from ValidationLog t