I am using TDengine as my time-series storage engine and I want to change a column name of a super table. I tried to find a way in the official documentation but failed. Is there a way to rename a column of a super table?
Changing a column name is not supported in TDengine 2.x.
TDengine 2.x supports adding a new column, dropping an existing column, and changing the column length of binary/nchar columns.
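For reference, those operations look roughly like this in 2.x (hypothetical super table meters; MODIFY COLUMN for length changes may require a recent 2.x release, so check your version's documentation):
ALTER TABLE meters ADD COLUMN phase FLOAT
ALTER TABLE meters DROP COLUMN phase
ALTER TABLE meters MODIFY COLUMN location BINARY(64)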
No, I'm afraid not. Currently TDengine 2.x does not support renaming columns. You can:
Use ALTER TABLE to drop the column and add a new one with the right name, but you may lose the data stored in it.
Use SELECT col AS col2 so queries behave as if you had a column col2, but this does not work for inserts.
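For the aliasing option, a minimal sketch (hypothetical super table and column names):
SELECT col AS col2 FROM my_stable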
Does TDengine support an index on a normal table column? I know TDengine has a timestamp index on the first column. I want to filter on a normal column efficiently. Can I add an index on that column?
I think TDengine doesn't support secondary indexes in the 2.x versions. In later versions an inverted index may be added, as in Elasticsearch.
Is it possible to issue something like
RENAME COLUMN col1 col2
in Google Cloud Spanner? It looks from the DDL that this isn't possible; if not, is this a design choice or a limitation whilst in Beta?
No, this is not possible. Currently you can only do the following with regard to altering columns in a table:
Add a new one
Delete an existing one, unless it's a key column
Change delete behavior (cascading or not)
Convert between STRING and BYTES
Change the length of a STRING or BYTES column
Add or remove NOT NULL modifier
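For reference, those alterations look roughly like this (hypothetical table Users; see the Cloud Spanner DDL reference for the exact syntax):
ALTER TABLE Users ADD COLUMN Nickname STRING(64)
ALTER TABLE Users DROP COLUMN Nickname
ALTER TABLE Users ALTER COLUMN Email STRING(MAX) NOT NULL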
A workaround is possible by following these steps in order:
Add the new column to your table
Update your code to read from both columns
Update your code to only write to the new one
Run a Cloud Dataflow job to migrate the data from the old column to the new column
Update your code to only read from the new column
Drop the old column
Keep in mind the above steps will not work for primary key columns; for those you'll have to create a new table and migrate the data that way.
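A DDL sketch of the first and last steps (hypothetical table and column names; the middle steps are application changes plus the Dataflow migration):
ALTER TABLE MyTable ADD COLUMN new_col STRING(MAX)
ALTER TABLE MyTable DROP COLUMN old_col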
I'm pretty good with Oracle, but I've been struggling to find a decent solution to a problem I'm having with Sybase.
I have a table with an IDENTITY column that uses a user-defined datatype (UDD) "id", which is numeric(10,0). I've decided to replace the UDD with the native datatype, but I get an error when I do this.
I've found that the only way to do this is:
Rename the original table (table_a to table_a_backup) using the procedure sp_rename
Recreate the original table (table_a) but use native data types
Copy the contents of the backup table into the original (i.e. insert into table_a select * from table_a_backup)
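In outline, the steps look something like this (hypothetical column list; setting identity_insert preserves the existing ID values):
sp_rename table_a, table_a_backup
go
create table table_a (id numeric(10,0) identity, name varchar(50))
go
set identity_insert table_a on
go
insert into table_a (id, name) select id, name from table_a_backup
go
set identity_insert table_a off
go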
This works, however I have over 10M records and it eventually runs out of log segment and halts (I can't increase the segment any further due to physical constraints).
Does anybody have a solution, preferably one that doesn't involve processing the records as anything other than one large set?
Cheers,
JLove
conceptually, something like this works (in Sybase ASE 12.5.x) ...
do an "alter table drop column" on your current ID column
do "alter table add column" stmt to add new column (w/ native datatype) with IDENTITY attribute
Note that the ID field might not have the same numbers, so be very wary of doing the above if the ID field is used as an explicit or implicit key to other tables.
I am using SQL Server 2000 and I have two databases that both replicate (transactional push subscription) to a single database. I need to know which database the records came from.
So I want to add a column with a fixed value, specified in the publication, to each table so I can tell which database a row originated from.
How do I go about doing this?
I would like to avoid altering the main databases, mostly because there are many tables I would need to do this to. I was hoping for some built-in feature of replication that would do this for me somewhere. Other than that, I would go with the view idea.
You could use a computed column. Use the following on the two databases:
ALTER TABLE TableName ADD
MyColumn AS 'Server1'
Then just define the single "master" database to use a VARCHAR column (or whatever you want) that you fill using the computed column's value.
You can create a view that adds the "constant" column, and use it as the replication source.
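A minimal sketch of that view (hypothetical names):
CREATE VIEW TableName_ForReplication AS
SELECT *, 'Server1' AS SourceDatabase
FROM TableName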
So the solution for me was to set up the replication publications to allow transformations, and to create a DTS package for each site that appends the site ID to the tables to keep the IDs unique, as I can't use GUIDs.
Does anyone know of a way to alter a computed column without dropping the column in SQL Server? I want to stop using the column as a computed column and start storing data directly in it, but would like to retain the current values.
Is this even possible?
Not that I know of, but here is something you can do:
add another column to the table
update that column with the values of the computed column, then drop the computed column
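A rough T-SQL sketch of those steps (hypothetical table T with computed column X stored as int; the GO separators matter because a batch can't reference a column it has just added):
ALTER TABLE T ADD NewX int NULL
GO
UPDATE T SET NewX = X
GO
ALTER TABLE T DROP COLUMN X
GO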
If you need to maintain the name of the column (so as not to break client code), you will need to drop the column and add back a stored column with the same name. You can do this without downtime by making the changes (along the lines of SQLMenace's solution) in a single transaction. Here's some pseudo-code:
begin transaction
drop computed column X
add stored column X
populate column using the old formula
commit transaction
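A concrete sketch under assumptions (hypothetical table T, computed column X defined as A + B, stored as int); the dynamic SQL defers compilation of the UPDATE until after the new column exists:
BEGIN TRANSACTION
ALTER TABLE T DROP COLUMN X
ALTER TABLE T ADD X int NULL
EXEC('UPDATE T SET X = A + B')
COMMIT TRANSACTION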
Ok, so let me see if I got this straight. You want to take a column that is currently computed and make it a plain-jane data column. Normally this would drop the column but you want to keep the data in the column.
Make a new table with the primary key columns from your source table and the generated column.
Copy the data from your source table into the new table.
Change the column on your source table.
Copy the data back.
No matter what you do, I am pretty sure changing the column will require dropping it. This way is a bit more complex, but not that bad, and it saves your data.
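A sketch of those four steps, using a temp table as the holding copy (hypothetical key column pk and computed column X stored as int):
SELECT pk, X INTO #keep FROM T
GO
ALTER TABLE T DROP COLUMN X
GO
ALTER TABLE T ADD X int NULL
GO
UPDATE t SET X = k.X FROM T t JOIN #keep k ON t.pk = k.pk
GO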
[Edit: #SqlMenace's answer is much easier. :) Curse you Menace!! :)]