How to create a nanosecond precision table in TDengine - tdengine

I know that TDengine supports nanosecond, microsecond and millisecond precision, but I could not figure out how to configure it when I tried to create a nanosecond table yesterday. As far as I can tell there is no place to put a precision clause in a CREATE TABLE statement, which looks like:
create table if not exists tableName (ts timestamp,col int,...)
I just want to know how to configure a table's precision in TDengine. Can someone help?

You can create a database with nanosecond precision; all the tables under this database will then use nanosecond precision.
create database test precision 'ns';
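For completeness, a minimal end-to-end sketch (hypothetical table and column names; the precision clause belongs on the database, not on the table):
create database if not exists test precision 'ns';
use test;
create table if not exists readings (ts timestamp, val int);
insert into readings values (now, 1);
Every timestamp stored in readings then carries nanosecond precision, because precision is a property of the database rather than of an individual table.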

Related

Is SQL Server able to use internal parallelism for an update statement?

I am struggling to find the best way to migrate some varchar columns to nvarchar. One of the options I am considering is to add a new nvarchar column, update it with the values from the original column, then drop the original column and rename the new one to the old name.
I know it will generate a lot of UNDO and REDO data. Still, I have other limitations (mostly because SQL Server supports neither parallel DDL nor multi-column ALTER TABLE operations), so let's focus on how to run the update statement faster.
My Oracle experience tells me to use internal parallelism, but is it available in SQL Server?
I am not able to run this statement in parallel, even though I specifically created the table as a heap (no clustered index).
update t
set new_col_1 = col_1,
    new_col_2 = col_2,
    ...,
    new_col_N = col_N;
Thanks in advance!!
I read the documentation but have not found definitive proof that parallel update plans are possible.
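For reference, the full sequence described above might be sketched like this (hypothetical table/column names and length; this does not by itself answer the parallelism question):
ALTER TABLE t ADD new_col_1 nvarchar(50) NULL;
UPDATE t SET new_col_1 = col_1;
ALTER TABLE t DROP COLUMN col_1;
EXEC sp_rename 't.new_col_1', 'col_1', 'COLUMN';
As a side note: SQL Server can use parallelism for the read side of an UPDATE plan, but the update operator itself generally runs serially; the parallel-DML cases that do exist (eg INSERT ... SELECT into a heap with TABLOCK on recent versions) do not extend to UPDATE.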

Finding total number of tables in union query

I am writing code that supports different versions of Sybase ASE. I am using union queries, and the problem is that different versions of Sybase ASE support different numbers of tables in a union query. The union query is dynamic and is formed depending on the number of databases present on the server.
Is there any way to find the maximum number of tables supported in a union query by a particular Sybase ASE server? The only solution I know right now is to fetch the version using a query, pick the version number out of the result, and set the number accordingly in the code. But this is not a very good solution. I checked whether there are any system tables that store this value, but nothing came up. Can anyone suggest a solution?
Since that's my SAP response you've re-posted here, I'll add some more notes ...
that was a proof of concept that answered the basic question of how to get the info via T-SQL; it was assumed anyone actually looking to implement the solution would (eventually) get around to addressing the various issues re: overhead/maintenance, eg ...
setting a tracefile is going to require permissions to do it; which permissions depends on whether or not you've got granular permissions enabled (see the notes for the 'set tracefile' command in the Reference manual); you'll need to decide if/how you want to grant the permissions to other users
while it's true you cannot re-use the tracefile, you can create a proxy table for the directory where the tracefile exists, then 'delete' the tracefile from the directory, eg:
create proxy_table tracedir external directory at '/tmp'
go
delete tracedir where filename = 'my_serverlimits'
go
if you could have multiple copies of the proxy table solution running at the same time then you'll obviously (?) need to make sure you generate a unique tracefile name for each session; while you could do this by appending ##spid to the file name, you could also add the login name (suser_name()), the kpid (select KPID from master..monProcess where SPID = ##spid), etc; you'll also want to make sure such a file doesn't exist before trying to create it (eg, delete tracedir where filename = '.....'; set tracefile ...)
your error (when selecting from the proxy table) appears to be related to your client application running in transaction isolation level 0 (which, by default, requires a unique index on the table ... not something you're going to accomplish against a proxy table pointing to an OS file); try setting your isolation level to 1, or use a client application that doesn't default to isolation level 0 (eg, that example runs fine with the basic isql command line tool)
if this solution were to be productionalized then you'll probably want to get a separate filesystem allocated so that any 'run away' tracing sessions don't fill up an important filesystem (eg, /var, /tmp, $SYBASE, etc)
also from a production/security perspective, I'd probably want to investigate the possibility of encapsulating a lot of the details in a DBA/system proc (created to execute under the permissions of the creator) so as to ensure developers can't create tracefiles in the 'wrong' directories ... and on and on and on re: control/security ...
Then again ...
If you're going to be doing this a LOT ... and you're only interested in the max number of tables in a (union) query, then it'd probably be much easier to just build a static if/then/else (or case) expression that matches your ASE version with the few possible numbers (see RobV's post).
Let's face it, how often are you really, Really, REALLY going to be building a query with more than, say, 100 tables, let alone 500, 1000 or more? [You really don't want to deal with trying to tune such a monster!! YIKES] Realistically speaking, I can't see any reason why you'd want to productionalize the proxy table solution just to access a single row from dbcc serverlimits when you could just implement a hard limit (eg, max of 100 tables).
And the more I think about it, as a DBA I'm going to do whatever I can to make sure your application can't create some monster, multi-hundred table query that ends up bogging down my dataserver simply because the developer couldn't come up with a more efficient solution. [And heaven forbid this type of application gets rolled out to the general user community, ie, I'd have to deal with dozens/hundreds of copies of this monster running in my dataserver?!?!?!]
You can get such limits by running 'dbcc serverlimits' (enable traceflag 3604 first).
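For an interactive session that looks like this (a minimal sketch; trace flag 3604 routes dbcc output to the client):
dbcc traceon(3604)
go
dbcc serverlimits
go
dbcc traceoff(3604)
go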
Up until version 15.7, the maximum was 256.
In 16.0, this was raised to 512.
In 16.0 SP01, this was raised again to 1023.
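Given those numbers, the static mapping suggested above can be as simple as a case expression keyed off @@version (a sketch; the version-string patterns are assumptions you should verify against your servers):
declare @maxtabs int
select @maxtabs = case
    when @@version like '%16.0 SP01%' then 1023
    when @@version like '%16.0%' then 512
    else 256
end
go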
I suggest you open a case/ticket with SAP support to find out whether there is any system table that stores this information. If there is none, I would implement the tedious solution you mentioned and monitor the ASE 15.7 logs for the following error:
CR 805525 -- If you exceed the number of tables in a UNION query you can get a signal 11 in ord_getrowbounds instead of an error message.
This is the answer that I got from the SAP community
-- enable trace file for your spid
set tracefile '/tmp/my_serverlimits' for ##spid
go
-- dump dbcc serverlimits output to your tracefile
dbcc serverlimits
go
-- turn off tracing
set tracefile off for ##spid
go
-- enable external file access:
sp_configure 'enable file access',1
go
-- create proxy table pointing at the trace file
create proxy_table dbcc_serverlimits external file at '/tmp/my_serverlimits'
go
-- find our column name ('record' of type varchar(255) in this case)
sp_help dbcc_serverlimits
go
-- extract the desired row; store the 'record' value in a #variable
-- and parse for the desired info ...
select * from dbcc_serverlimits where lower(record) like '%union%'
go
record
------------------------------------------------------------------------
Max number of user tables overall in a statement using UNIONs : 512
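The "parse for the desired info" step from the comments might look something like this (a sketch; hypothetical variable names, assuming the record layout shown above):
declare @rec varchar(255), @maxtabs int
select @rec = record from dbcc_serverlimits where lower(record) like '%union%'
select @maxtabs = convert(int, substring(@rec, charindex(':', @rec) + 1, 10))
go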
There are some problems with this approach, though. The first issue is setting the trace file: I am going to run this code roughly daily, and in Sybase I think we can't delete or overwrite a trace file. The second is the proxy table. The proxy table will have to be deleted, but this can be taken care of with the following code:
if exists (select 1 from sysobjects where type = 'U' and name = 'dbcc_serverlimits')
begin
    drop table dbcc_serverlimits
end
go
Final problem comes when a select query is made from dbcc_serverlimits table. It throws the following error
Could not execute statement. The optimizer could not find a unique
index which it could use to scan table 'dbo.dbcc_serverlimits' for
cursor 'jconnect_implicit_26'. SQLCODE=311 Server=************,
Severity Level=16, State=2, Transaction State=1, Line=1 Line 24
select * from dbcc_serverlimits
All these commands will have to be wrapped in a procedure (that is what I am thinking). Is there a more elegant solution?

Sybase ASE 15.7 - How can I merge objects from two databases into one single database?

I have an application which uses Sybase ASE 15.7 for the underlying database. In the old days it was recommended to split the tables and locate them in two different databases, say db1 and db2. I know that there are no naming conflicts, which means that I could migrate objects either from db1 to db2 or vice versa.
What would be the best option to migrate the data? I have SQL scripts to create all the objects I need in the remaining database. Is there a better option than using this:
1> INSERT INTO db2..tblA
2> SELECT * FROM db1..tblA
3> GO
Some of the tables are quite huge. So I need to take care that the transaction log is not filled up.
BCP might also be an option, like this:
bcp db1..tblA out tblA.save -U... -P....
bcp db2..tblA in tblA.save -U... -P....
Is there a tool available that could connect to both databases and handle something like this?
Maybe someone has an idea. Thanks in advance.
Best regards
Jens
To prevent the log filling up you could perform an unlogged operation.
If that is not possible you could set the DB option to truncate the log on checkpoint during the copy procedure, but this second method might not be enough to ensure the log is not exhausted.
If an unlogged operation is used then after completion a full DB dump should be done to create a backup of the new baseline.
Unlogged operations might be dangerous if done outside of an offline maintenance window.
Truncate on Checkpoint (do the following and then perform a checkpoint)
To switch the Truncate mode on/off use
use master;
sp_dboption dbname, 'trunc log on chkpt', [true|false];
To allow BCP/select INTO on a DB (do the following and then perform a checkpoint)
use master;
sp_dboption dbname, 'select into/bulkcopy/pllsort', [true|false];
To perform a checkpoint
To checkpoint in database dbname use:
use dbname;
checkpoint;
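Putting those together for a hypothetical target database db2:
use master
go
sp_dboption db2, 'trunc log on chkpt', true
go
sp_dboption db2, 'select into/bulkcopy/pllsort', true
go
use db2
go
checkpoint
go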
BCP Option
If you choose to use BCP then make sure you use fast BCP. Fast BCP is unlogged.
The rules to ensure Fast BCP is used are specified at http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc30191.1570100/doc/html/san1367605064460.html.
You can do it in two phases: an 'extract' (out) run and then a 'load' (in) run. The commands would look something like this.
$SYBASE/$SYBASE_OCS/bin/bcp ${DB1NAME}..${TABNAME} out $DUMPDIR/$DB1NAME/${TABNAME}.TXT -c -U$SQLUSER -P$USERPASS -S$SERVER1NAME
$SYBASE/$SYBASE_OCS/bin/bcp ${DB2NAME}..${TABNAME} in $DUMPDIR/$DB1NAME/${TABNAME}.TXT -c -U$SQLUSER -P$USERPASS -S$SERVER1NAME
The select into / bulk copy DB option needs to be set in the DB for this to work.
You will need to dump the DB after this operation.
Unlogged Operation - Select Into
As you have access to both databases from within the same server you should have a look at using select into.
‘select into’ is an unlogged operation.
The target table for the select into statement cannot already exist, so what you will need to do is move the original target table aside using sp_rename and then run the select into using the two source tables in your query.
The select into / bulk copy DB option needs to be set for the DB for this to work.
You will need to dump the DB after this operation.
This might be slow due to the union. Also, union is not a good option if the rows are not unique, since plain union removes duplicate rows (union all keeps them).
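A sketch of that rename-then-select-into sequence, assuming tblA exists in both databases (verify that your ASE version accepts select into over a union before relying on this):
use db2
go
exec sp_rename 'tblA', 'tblA_old'
go
select * into tblA from tblA_old
union all
select * from db1..tblA
go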
Indexes
In general, if there is a large index on the target table (especially a clustered one), it is probably more efficient to drop it during the copy and recreate it afterwards.

Minimum-impact "like" query on a Sybase ASE 12.5 DB

I would like to minimize the performance impact of the following query on a Sybase ASE 12.5 database:
SELECT description_field FROM table WHERE description_field LIKE 'HEADER%'
GO
I suspect I cannot do better than a full table scan without modifying the database, but does anyone have an idea?
Perhaps the locking impact could be reduced with some special syntax?
In this case you should get a large speedup by adding an index on description_field.
This works because the like pattern starts with non-wildcard characters. If the pattern started with a % there would be no alternative to doing a table scan.
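For example (hypothetical table and index names):
-- the optimizer can then restrict the scan to keys starting with 'HEADER'
create index descr_ix on the_table(description_field)
go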

Developer moving from SQL Server to Oracle

We are bringing a new project in house, and whereas previously all our work was on SQL Server, the new product uses an Oracle back end.
Can anyone recommend crib sheets or the like that would give an SQL Server person like me a rundown of the major differences? I would like to get up and running as soon as possible.
#hamishmcn
Your assertion that '' == Null is simply not true. In the relational world Null should only ever be read to mean "I don't know". The only result you will get from Oracle (and most other decent databases) when you compare a value to Null is 'False'.
Off the top of my head the major differences between SQL Server and Oracle are:
Learn to love transactions, they are your friend - auto commit is not.
Read consistency and the lack of blocking reads
SQL Server Database == Oracle Schema
PL/SQL is a lot more feature rich than T-SQL
Learn the difference between an instance and a database in Oracle
You can have more than one Oracle instance on a server
No pointy clicky wizards (unless you really, really want them)
Everyone else, please help me out and add more.
The main difference I noticed in moving from SQL Server to Oracle was that in Oracle you need to use cursors (ref cursors) to return result sets from SELECT statements in stored procedures.
Also, temporary tables are used differently. In SQL Server you can create one in a procedure and then DROP it at the end, but in Oracle you're supposed to already have a temporary table created before the procedure is executed.
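For example, an Oracle global temporary table is created once, up front, and is then shared by all sessions while its rows stay private to each session (a sketch; hypothetical names):
CREATE GLOBAL TEMPORARY TABLE tmp_results (
    id    NUMBER,
    label VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;
With ON COMMIT PRESERVE ROWS the rows survive until the session ends; ON COMMIT DELETE ROWS empties the table at each commit.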
I'd look at datatypes too since they're quite different.
String concatenation:
Oracle: || or concat()
SQL Server: +
These links could be interesting:
http://www.dba-oracle.com/oracle_news/2005_12_16_sql_syntax_differences.htm
http://www.mssqlcity.com/Articles/Compare/sql_server_vs_oracle.htm (old one: Ora9 vs Sql 2000)
#hamishmcn
Generally that's a bad idea. Temporary tables in Oracle should just be created once and left in place (unless they are one-off/very rarely used). The contents of a temporary table are local to each session and truncated when the session is closed. There is little point in paying the cost of creating/dropping the temporary table; it might even result in clashes if two processes try to create the table at the same time, and in unexpected commits, since DDL in Oracle performs an implicit commit.
What you have asked here is a huge topic, especially since you haven't really said what you are using the database for (eg, are you going to be going from TSQL -> PL/SQL, or just changing the backend database your Java application is connected to?).
If you are serious about using your database choice to its potential, then I suggest you dig a bit deeper and read something like Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions by Tom Kyte.
Watch out for the difference in the way the empty string is treated.
INSERT INTO atable (a_varchar_column) VALUES ('');
is the same as
INSERT INTO atable (a_varchar_column) VALUES (NULL);
I have no SQL Server experience, but I understand that it differentiates between the two.
If you need to you can create and drop temporary tables in procedures using the Execute Immediate command.
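For instance (a sketch; generally discouraged for the reasons given above, not least because each DDL statement issues an implicit commit):
BEGIN
    EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE tmp_x (n NUMBER) ON COMMIT PRESERVE ROWS';
    -- ... work with tmp_x ...
    EXECUTE IMMEDIATE 'DROP TABLE tmp_x';
END;
/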
To andy47: I did not mean that you can use the empty string in a comparison, but Oracle treats it like null if you use it in an insert.
Re-read my entry, then try the following SQL:
CREATE TABLE atable (acol VARCHAR(10));
INSERT INTO atable VALUES( '' );
SELECT * FROM atable WHERE acol IS NULL;
And to avoid a "yes it is, no it isn't" situation, here is an external link
