Is it possible to change the name of a system table? - sql-server

I want to change the name of a system table in my database. Is it possible? I probably shouldn't change it, but I'm curious.
When I execute sp_rename I get the following error:
Msg 15001, Level 16, State 1, Procedure sp_rename, Line 404
Object 'cdc.[dbo_CdcTest_CT]' does not exist or is not a valid object for this operation.
Edit:
I want to rename the tables created by Change Data Capture because I want to disable the CDC mechanism for a table and still keep its data. I know that I could create an additional table and move the data from the CDC table into it, but it would be easier to rename the CDC table and then disable CDC for the specified table.

No, you cannot change the name of system tables. However, you can refer to them by a different name.
You can use synonyms for that:
CREATE SYNONYM [ schema_name_1. ] synonym_name FOR <object>
<object> :: =
{
[ server_name.[ database_name ] . [ schema_name_2 ].| database_name . [ schema_name_2 ].| schema_name_2. ] object_name
}
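For example, a minimal sketch using the change table from the error message (the synonym and archive-table names are hypothetical):
CREATE SYNONYM dbo.CdcTest_Changes FOR cdc.dbo_CdcTest_CT;
Keep in mind a synonym is only an alias; disabling CDC still drops the underlying change table. To keep the data, copy it into a regular table before disabling CDC:
SELECT * INTO dbo.CdcTest_CT_Archive FROM cdc.dbo_CdcTest_CT;
EXEC sys.sp_cdc_disable_table
    @source_schema = N'dbo',
    @source_name = N'CdcTest',
    @capture_instance = N'dbo_CdcTest';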
Also note what the documentation for sp_rename says:
Changes the name of a user-created object in the current database. This object can be a table, index, column, alias data type, or Microsoft .NET Framework common language runtime (CLR) user-defined type.
A CDC change table is created by the system rather than by the user, which is why sp_rename refuses it.

Related

In the tutorial "Tutorial: Bulk Loading from a local file system using copy" what is the difference between my_stage and my_table permissions?

I started to go through the first tutorial for how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
Use Database mydb;
CREATE ROLE ANALYST;
grant usage on database mydb to role sysadmin;
grant usage on database mydb to role analyst;
grant usage, create file format, create stage, create table on schema mydb.public to role analyst;
grant operate, usage on warehouse mywh to role analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
//Use snowsql for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my
List @~;
list @%mytable;
Then in my active Snowsql when I run:
Put file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role Accountadmin:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in Snowsql and am successful:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
Put file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to my_table and to my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @My_stage pattern = '.*.csv.gz';
REMOVE @My_stage pattern = '.*.json.gz';
//yay you are done!
The put command copies the file from your local drive to the stage. You should do the put to the stage, not the table.
put file:///Users/<>/Documents/data/data.csv @my_stage;
The copy command loads it from the stage.
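Putting the two together, a minimal sketch of the intended flow, assuming the my_stage and mycsvformat objects created earlier (PUT auto-compresses the file by default, hence the .gz suffix):
PUT file:///Users/<user>/Documents/data/data.csv @my_stage;
COPY INTO my_table
FROM @my_stage/data.csv.gz
FILE_FORMAT = (FORMAT_NAME = 'mycsvformat')
ON_ERROR = 'skip_file';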
But in the documentation it is mentioned that one gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
so in this case, without creating a stage, shouldn't the file load into the default Snowflake stage allocated to the table?
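Note the % prefix in the quoted documentation: the default table stage is @%my_table, while @my_table refers to a named stage, which is why the PUT failed. A sketch using the table stage instead (same hypothetical file path as above):
PUT file:///Users/<user>/Documents/data/data.csv @%my_table;
COPY INTO my_table FILE_FORMAT = (FORMAT_NAME = 'mycsvformat');
With no FROM clause, COPY INTO reads from the table's own stage by default.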

How to create schema binding and index on view from other server?

I have tried creating a view with schema binding and an index, where the view references a table on another server. But SQL Server threw the error below for the following query.
create VIEW [dbo].[Vxyz]
with schemabinding
AS
SELECT
ELID,USECOUNT,LASTUPDATE,TYPE,CODENE,CASNUE,NAME_ENG,ISGROUP,CHGROUP,DLink
IDE,LOCKBY,PhyApB,BUILDNO,PMNNumE,EINECE
FROM IADL.dbo.tblxyz
GO
create unique clustered index IDX_xyz on [dbo].[Vxyz](ELID)
I got the error below:
Msg 4512, Level 16, State 3, Procedure IADL.dbo.tblxyz, Line 3 [Batch Start Line 11]
Cannot schema bind view '[dbo].[Vxyz]' because name 'IADL.dbo.tblxyz' is invalid for schema binding. Names must be in two-part format
and an object cannot reference itself.
Msg 1939, Level 16, State 1, Line 17
Cannot create index on view '[dbo].[Vxyz]' because the view is not schema bound.
select distinct
ISNULL(A.elid, B.elid) ElementID,
CASE when A.elid is null and B.elid is not null then 'Missing ElementID :'+
B.elid+' in Mainproductsall table' when A.elid is not null
and B.elid is null then 'Missing ElementID :'+ A.elid+' in Genproductsall table' Else 'OK'
end Datastatus
into ABC
from [dbo].[Vxyz] As A
full outer join [dbo].[Vxyzwa] as B on A.elid = B.elid
where A.elid is null or B.elid is null
Each table in the FROM clause above is a view. As per my first query, the views reference tables on another server, so I want to optimize the query, and that is why I am trying to create the index.
If you check the official documentation, you will see that it states:
All referenced objects must be in the same database.
So you cannot reference a base table from another database.
This means every object the view references must live in the current database and must be referenced in two-part format (schema_name.object_name).
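One common workaround, sketched here with hypothetical names and assuming the source IADL database is reachable from this instance, is to materialize the remote data into a local table (for example from a scheduled job) and schema-bind the view against that local copy:
-- Copy the remote rows into a local table
SELECT * INTO dbo.tblxyz_local FROM IADL.dbo.tblxyz;
GO
-- The view now references a same-database object with a two-part name
CREATE VIEW dbo.Vxyz_local
WITH SCHEMABINDING
AS
SELECT ELID, USECOUNT, LASTUPDATE
FROM dbo.tblxyz_local;
GO
-- The unique clustered index assumes ELID values are unique
CREATE UNIQUE CLUSTERED INDEX IDX_xyz_local ON dbo.Vxyz_local (ELID);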

Change sa.Column nullable property with Alembic in Batch mode

I have a database upgrade migration I want to apply to a database column:
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table('details') as batch_op:
        batch_op.alter_column('details', 'non_essential_cookies',
                              existing_type=sa.BOOLEAN(),
                              nullable=False)
    # ### end Alembic commands ###
I am implementing batch mode since ALTER is unsupported in SQLite, and previously I received this error: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "ALTER":. However, I hoped batch mode would work, but now I receive this new error instead:
TypeError: <flask_script.commands.Command object at 0x1149bb278>: alter_column() got multiple values for argument 'nullable'.
I only have one tuple in the table, and the relevant attribute is not NULL, so the database migration is valid. I just don't understand why there are multiple values.
From the docs:
The method is used as a context manager, which returns an instance of
BatchOperations; this object is the same as Operations except that
table names and schema names are omitted.
The key point here is that you don't have to provide the table name when calling ops on the BatchOperations instance.
The signature for alter_column is:
alter_column(table_name, column_name, nullable=None, server_default=False, new_column_name=None, type_=None, existing_type=None, existing_server_default=False, existing_nullable=None, schema=None, **kw)
So from your code:
with op.batch_alter_table('details') as batch_op:
    batch_op.alter_column('details', 'non_essential_cookies',
                          existing_type=sa.BOOLEAN(),
                          nullable=False)
'details' is being passed to column_name, and 'non_essential_cookies' is being passed to nullable as a positional argument. The error is raised later, when you specify the value of nullable again with the keyword argument nullable=False.
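A minimal corrected sketch, simply dropping the table name from the call inside the batch context (standard migration imports shown for completeness):
from alembic import op
import sqlalchemy as sa

def upgrade():
    # Inside batch_alter_table the table name is implied by the context,
    # so alter_column takes only the column name.
    with op.batch_alter_table('details') as batch_op:
        batch_op.alter_column('non_essential_cookies',
                              existing_type=sa.BOOLEAN(),
                              nullable=False)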

How to make the 'public' schema default in a Scala Play project that uses PostgreSQL?

I am not sure if my issue is related to the Scala Play 2.5.x Framework or to PostgreSQL, so I am going to describe my setup.
I am using Play 2.5.6 with Scala and PostgreSQL 9.5.4-2 from the BigSQL Sandboxes. I use the Play Framework's default evolutions package to manage the DB versions.
I created a new database in BigSQL Sandbox's PGSQL, and PGSQL created a default schema called public. I use this schema for development.
I would like to create a table with the following script (1.sql in DB evolution config):
# Initialize the database
# --- !Ups
CREATE TABLE user (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT NOT NULL,
  creation_date TIMESTAMP NOT NULL
);
# --- !Downs
DROP TABLE user;
Besides that, I would like to read the table with code like this:
val resultSet = statement.executeQuery("SELECT id, name, email FROM public.user WHERE id=" + id.toString)
I get an error when I execute any of the mentioned code, and even when I run the CREATE TABLE... statement in pgAdmin. The issue is with the user table name. If I prefix it with the schema (i.e. public.user), everything works fine.
My questions are:
Is it normal to prefix the table name with the schema name every time? It seems odd to me.
How can I make the public schema the default so I do not have to qualify the table name? (e.g. CREATE TABLE user (...); would not throw an error)
I tried the following:
I set the search_path for my user: ALTER USER my_user SET search_path to public;
I set the search_path for my database: ALTER database "my_database" SET search_path TO my_schema;
search_path correctly shows this: "$user",public
I got the following errors:
In Play: p.a.d.e.DefaultEvolutionsApi - ERROR: syntax error at or near "user"
In pgadmin:
ERROR: syntax error at or near "user"
LINE 1: CREATE TABLE user (
********** Error **********
ERROR: syntax error at or near "user"
SQL state: 42601
Character: 14
This has nothing to do with the default schema. user is a reserved word.
You need to use double quotes to be able to create such a table:
CREATE TABLE "user" (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT NOT NULL,
  creation_date TIMESTAMP NOT NULL
);
But I strongly recommend not doing that. Find a different name that does not require a quoted identifier.
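For instance, a minimal sketch of the same evolution using an unreserved name (app_user is a hypothetical choice):
# --- !Ups
CREATE TABLE app_user (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT NOT NULL,
  creation_date TIMESTAMP NOT NULL
);

# --- !Downs
DROP TABLE app_user;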

Vaadin MSSQLGenerator with a table inside a named schema causes error

I tried to use a Vaadin MSSQLGenerator to generate a SQLContainer from a SQL Server 2012 table inside a named schema foo. Something like this:
MSSQLGenerator msql = new MSSQLGenerator();
TableQuery tq = new TableQuery(tableName, conPool, msql);
but this causes a 'Primary key constraints have not been defined for the table' error.
Notice this only happens when the table is inside a named schema different from dbo, for instance foo.tableName.
Any workaround or advice on this? I cannot change the foo schema, by the way, or move the table to the dbo schema.
Since version 7.1, Vaadin allows passing the catalog and schema to the TableQuery constructor:
new TableQuery("catalog", "schema", "table", pool, msqlgenerator)
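Applied to the setup from the question, a hedged sketch (passing null for the catalog is an assumption; use your actual catalog name if you have one):
MSSQLGenerator msql = new MSSQLGenerator();
// schema "foo" from the question; tableName and conPool as defined earlier
TableQuery tq = new TableQuery(null, "foo", tableName, conPool, msql);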
