Is there a way to reorg rebuild Sybase tables automatically? Can we do it with the Job Scheduler or a script?
Reorgs can be run using the Job Scheduler, or via batch/shell scripts. You will have to generate the list of tables you wish to reorg programmatically, as there is no command to do this automatically.
There are a couple of approaches. One is to use the 'optdiag' utility to check table health and use that information to decide dynamically which tables to reorg. See my answer to this question for more on 'optdiag'.
Another method is to simply reorg everything, which I only recommend for small databases. A script to do this can be generated with the following SQL.
First, the database option "select into/bulkcopy/pllsort" must be set to true to be able to run reorg rebuild:
use master
go
sp_dboption <dbname>, "select into/bulkcopy/pllsort", true
go
The following generates a script that can then be run against the server to rebuild the tables. Depending on how you generate this, you may have to remove the first row of the file if it contains column headers.
use <dbname>
go
set nocount on
select "reorg rebuild "+ name + char(10) + "go"
from sysobjects
where name not like "sys%" -- exclude system tables
go
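If the reorg is driven from a shell or batch job instead, the same generation step can be scripted outside the server. A minimal sketch in Python (the helper name and the hard-coded table list are illustrative, not from the original; in practice you would pull the names from sysobjects as above, or from optdiag output):

```python
# Sketch: build a "reorg rebuild" script from a list of user table names.
def build_reorg_script(tables):
    """Return an SQL script with one 'reorg rebuild'/'go' pair per table."""
    lines = []
    for name in tables:
        lines.append("reorg rebuild " + name)
        lines.append("go")
    return "\n".join(lines) + "\n"

script = build_reorg_script(["customers", "orders"])
# The resulting text can be saved to a file and run with isql.
```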
ETL Script to dynamically map multiple execute sql resultset to multiple tables (table name based on sql file provided)
I have a source folder with sql files (I could set them up as stored procedures as well). I know how to loop and execute sql tasks in a foreach container. Where I'm stuck is that I need to take the final result set of each sql query and shove it into a table with the same name as the sql file.
So, Folder -> script1.sql, script2.sql etc -> ETL -> goes to table script1, table script2 etc.
EDIT: Based on the comment made by Joe, I just want to say that I'm aware of using an insert within a script, but I need to insert into a table on a different server, and linked servers are not the ideal solution.
Any pseudocode or link to tutorials will be extremely helpful. Thanks!
I would add the table creation to the script. It is probably the simplest way to do this. If your script is Select SomeField From Table1, you could change it to Select SomeField Into script1 From Table1. Then there is no need to map in SSIS, which in my experience is not easy to do.
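If you want to automate that rewrite across the whole folder rather than editing each file by hand, one option is to wrap each file's query as a derived table and name the target after the file. A rough sketch (assuming each .sql file contains a single SELECT statement; the helper name is mine):

```python
import os

def wrap_select_into(sql_file, query):
    """Wrap a SELECT so its result set lands in a table named after the
    .sql file, e.g. script1.sql -> table script1 (derived-table form)."""
    table = os.path.splitext(os.path.basename(sql_file))[0]
    body = query.rstrip().rstrip(";")
    return "SELECT src.* INTO [" + table + "] FROM (\n" + body + "\n) AS src"

wrapped = wrap_select_into("script1.sql", "SELECT SomeField FROM Table1;")
# wrapped is T-SQL that creates table [script1] from the query's result.
```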
In SQL Developer (the Oracle tool) I use ';' most of the time to indicate the end of a batch.
We also use ';' in larger scripts with many batches in them. For example, a script where the first batch creates a table, the second inserts data into that table, the third joins the just-created table with another table, etc.
In SQL Developer the script (with the different batches in it) works fine. But when we copied the exact same script to SQL Server Management Studio (SSMS) and ran it, it gave errors that the table (joined in the third batch) does not exist.
How can I make the script run in SSMS without failing?
In SQL Server you can use 'GO' to separate batches of statements, something like below:
ALTER TABLE [dbo].[Security] ADD CONSTRAINT [DF_Security_ImportSettings] DEFAULT ((11111011111111111.)) FOR [ImportSettings]
GO
ALTER TABLE [dbo].[Security] ADD CONSTRAINT [DF_Security_PricingType] DEFAULT ((-1)) FOR [PricingType]
GO
ALTER TABLE [dbo].[Security] ADD CONSTRAINT [DF_Security_AutoUpdateCustomPricing] DEFAULT ((1)) FOR [AutoUpdateCustomPricing]
GO
GO is the keyword you are looking for.
Example:
insert into t1
select 1
go
alter table t1
add abc int
The batch separator is also configurable in SSMS (I haven't tested this, though) to ';' or some other word.
It appears that in SQL Server Management Studio (SSMS) it is sometimes necessary to use 'GO' instead of ';'.
The ';' works differently in SSMS compared to its use in, for example, SQL Developer (the Oracle tool).
In SQL Developer, ';' acts as an end-of-batch indicator. SSMS, however, treats ';' only as a statement terminator: the whole script is parsed and compiled as one batch, so statements that depend on schema changes made earlier in the same batch (for example a column added by ALTER TABLE, or a CREATE VIEW that must be the only statement in its batch) can fail.
In my situation that meant I had to use 'GO' so the first, second and third parts are sent as separate batches, each compiled after the previous one has run. I replaced all the ';' with GO in the script (which in fact has a whole lot more batches in it) and that did the trick. I'm not sure it is completely right to do it this way, but at least it worked. :)
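It helps to know that the batch separator is handled entirely by the client tool, not by the server. A simplified sketch of what SSMS does with 'GO' (real tools also handle a 'GO n' repeat count and comments; this version ignores those):

```python
def split_batches(script):
    """Split a T-SQL script into batches on lines containing only GO
    (case-insensitive); each batch is then sent to the server on its own."""
    batches, current = [], []
    for line in script.splitlines():
        if line.strip().upper() == "GO":
            if current:
                batches.append("\n".join(current))
            current = []
        else:
            current.append(line)
    if current:
        batches.append("\n".join(current))
    return batches

parts = split_batches("CREATE TABLE t (a int)\nGO\nINSERT t VALUES (1)")
# parts[0] creates the table; parts[1] is compiled only after it exists.
```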
Also see:
What is the use of GO in SQL Server Management Studio & Transact SQL?
When do I need to use Begin / End Blocks and the Go keyword in SQL Server?
I've created an SSIS package that pulls data from various sources and aggregates it as needed for the business. The goal of this processing is to create a single table, for example "Data_Tableau". This table is the datasource for connected Tableau dashboards.
The Tableau dashboards need to be available during the processing, so I don't truncate "Data_Tableau" and re-populate with the SSIS package. Instead, the SSIS package steps create "Data_Stage". Then the final step of the package is a drop/rename, wherein I drop "Data_Tableau" and sp_rename "Data_Stage" to "Data_Tableau".
USE dbname
DROP TABLE Data_Tableau
EXEC sp_rename Data_Stage, Data_Tableau
Before this final step, I expect max(buydate) from "Data_Stage" to be greater than max(buydate) from "Data_Tableau", since "Data_Stage" would have additional records since the last time the process ran.
However, sometimes there are issues with upstream data and I end up with max(buydate) from "Data_Stage" = max(buydate) from "Data_Tableau". In such cases, I would not want the final drop/rename process to run. Instead, I want the job to fail and I'll send an alert to the appropriate upstream data team when I get the failure notification.
That's the long-winded background. My question is: how do I check the dates and cause a failure within the SSIS package? I'm using VS 2012.
I was thinking of creating a precedence constraint before the final drop/rename step, but I haven't created variables or expressions before and am unsure how to achieve this.
I was also considering creating a 2-row table as follows:
SELECT MAX(buydate) 'MaxDate', 'Tableau' 'FieldType' FROM dbname.dbo.Data_Tableau
UNION ALL
SELECT MAX(buydate) 'MaxDate', 'Stage' 'FieldType' FROM dbname.dbo.Data_Stage
and then using a query against that table as some sort of constraint, but not sure if that makes any sense and/or is better than the option of creating variables/expressions.
Goal: If MAX(buydate) from "Data_Stage" > MAX(buydate) from "Data_Tableau", then I'd want the drop/rename step to run, otherwise it should fail and "Data_Tableau" will contain the same data as before the package ran.
Suggestions? Step-by-step instructions would be greatly appreciated.
I would do this by putting this:
Then the final step of the package is a drop/rename, wherein I drop
"Data_Tableau" and sp_rename "Data_Stage" to "Data_Tableau".
into a stored procedure that gets called by the SSIS package.
Then it's simply a matter of using an IF block before that part of the code:
-- Rename only when the staging table has newer data; otherwise raise
-- an error so the stored procedure, and therefore the SSIS package, fails.
IF (SELECT MAX(buydate) FROM Data_Stage) > (SELECT MAX(buydate) FROM Data_Tableau)
BEGIN
    DROP TABLE Data_Tableau;
    EXEC sp_rename 'Data_Stage', 'Data_Tableau';
END
ELSE
    RAISERROR('Data_Stage is not newer than Data_Tableau; rename skipped.', 16, 1);
I'm trying to rename a table using the following syntax
sp_rename [oldname],[newname]
but any time I run this, I get the following [using Aqua Datastudio]:
Command was executed successfully
Warnings: --->
W (1): The SQL Server is terminating this process.
<---
[Executed: 16/08/10 11:11:10 AM] [Execution: 359ms]
Then the connection is dropped (can't do anything else in the current query analyser (unique spid for each window))
Do I need to be using master when I run these commands, or am I doing something else wrong?
You shouldn't be getting the behaviour you're seeing.
It should either raise an error (e.g. if you don't have permission) or work successfully.
I suspect something is going wrong under the covers.
Have you checked the errorlog for the ASE server? Typically these sorts of problems (connections being forcibly closed) will be accompanied by an entry in the errorlog with a little bit more information.
The error log will be on the host that runs the ASE server, and will probably be in the same location that ASE is installed into. Something like
/opt/sybase/ASE-12_5/install/errorlog_MYSERVER
Try to avoid using "sp_rename", because some references in system tables keep the old name. Someday this may cause failures if you forget about the change.
I suggest:
select * into table_backup from [tableRecent]
go
select * into [tableNew] from table_backup
go
drop table [tableRecent] -- skip this drop if you want to keep a backup
go
drop table table_backup -- skip this drop if you want to keep a backup
go
To do this, the "select into/bulkcopy/pllsort" option must be enabled on your database.
If your data is huge, check the free space in that database.
and enjoy :)
I recently had to rename a table (and a column and FK/PK constraints) in SQL Server 2000 without losing any data. There did not seem to be an obvious DDL T-SQL statement for performing this action, so I used sp_rename to directly fiddle with object names.
Was this the only solution to the problem? (Other than giving the table the correct name in the first place - doh!)
sp_rename is the correct way to do it.
EXEC sp_rename 'Old_TableName', 'New_TableName'
Yes,
EXEC sp_rename 'Old_TableName', 'New_TableName'
works fine, but is there any keyword like
"ALTER TABLE old_name RENAME TO new_name"?
Maybe not the only one: I guess you could always toy with the system catalogs and update the table name there - but this is highly inadvisable.
There is a solution that lets you work concurrently with both the old and new versions of the table. This is particularly important if your data is replicated and/or accessed through a client interface (meaning old versions of the client interface will still work with the old table name):
Modify the constraints (including FKs) on your table through the "ALTER TABLE" command.
Do not change the table name or field names, but create a view such as:
SELECT oldTable.oldField1 as newField1, ...
Save it as newTable (and, if requested, distribute it on your different servers).
Note that you cannot modify your PK this way.