How/where is table structure (not data) stored in SQL Server? - sql-server

I know that the data in SQL Server is stored in data pages, but I don't know where the table structure is stored. I came across a statement about TRUNCATE:
"TRUNCATE removes the data by deallocating the data pages. TRUNCATE removes all rows from a table, but the table structure and columns remain."
This made me realize that the table structure and column information are stored outside pages (or data pages in particular). So, how/where is the table structure (not data) stored in SQL Server?
Thank you.

You can access SQL Server metadata through the INFORMATION_SCHEMA views. Below are the most useful views and their contents:
INFORMATION_SCHEMA.TABLES: information about the schemas, tables, and views in the current database.
INFORMATION_SCHEMA.COLUMNS: full information about table columns, such as the data type and whether the column is nullable.
INFORMATION_SCHEMA.VIEWS: information about views, including the code for creating them again.
INFORMATION_SCHEMA.KEY_COLUMN_USAGE: information about foreign keys, unique keys, primary keys, and so on.
To use them, simply query them as if they were ordinary views: SELECT * FROM INFORMATION_SCHEMA.TABLES
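For example, a small sketch that lists the columns of one table (MyTable is a placeholder name, not from the question):
-- List each column of MyTable with its type and nullability
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
ORDER BY ORDINAL_POSITION;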
For a full reference, see MSDN: https://msdn.microsoft.com/en-us/library/ms186778.aspx

There are system tables that store all of the metadata about the database. These tables are not directly queryable (except when connected via the Dedicated Administrator Connection, or DAC), but there are numerous views and functions built atop them. These are referred to as the Catalog Views.
So, for instance, there is the sys.columns view, which describes each column in the database. It's a view built atop the syscolpars table, one of the system tables mentioned above that you cannot query directly.
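For instance, a quick sketch joining the catalog views to list every column of every user table:
-- Every column of every user table, via the catalog views
SELECT t.name AS table_name,
       c.name AS column_name,
       c.column_id,
       c.is_nullable
FROM sys.tables AS t
JOIN sys.columns AS c
    ON c.object_id = t.object_id
ORDER BY t.name, c.column_id;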
There are also the INFORMATION_SCHEMA views which hespi mentions. These are meant to be a "standard" way of accessing metadata, supported by all SQL database systems. Unfortunately, support for them is not 100%, and because they are meant to be cross-platform, they tend not to expose advanced, product-specific features.

A SQL Server database usually consists of two files:
Master Data File (*.mdf)
Transaction Log File (*.ldf)
The Master Data File contains the schema and data information.
The Transaction Log File contains log information for actions in your database.
If you run SELECT * FROM sys.database_files in your database, it will show you the file names, locations, sizes, etc.
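A minimal sketch of that query, selecting just the most useful columns:
-- Data and log files of the current database
SELECT name, physical_name, type_desc, size
FROM sys.database_files;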

Related

Flink difference between view vs temporary table vs table

What is the difference between a view, a temporary table, and a table, and what are their use cases? I am trying to understand when to use which.
You can read more on this topic at https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/common/#temporary-vs-permanent-tables
Temporary tables are always stored in memory and only exist for the duration of the Flink session they are created within. These tables are not visible to other sessions. They are not bound to any catalog or database but can be created in the namespace of one. Temporary tables are not dropped if their corresponding database is removed.
Tables can be either virtual (VIEWS) or regular (TABLES). VIEWS can be created from an existing Table object, usually the result of a Table API or SQL query. TABLES describe external data, such as a file, database table, or message queue.
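A minimal Flink SQL sketch of the three object kinds (the table names and connector options are illustrative assumptions, not from the question):
-- A regular TABLE describes external data, e.g. files in a directory
CREATE TABLE Orders (
    order_id BIGINT,
    amount   DOUBLE
) WITH (
    'connector' = 'filesystem',
    'path'      = '/data/orders',
    'format'    = 'csv'
);

-- A TEMPORARY TABLE exists only for the current session
CREATE TEMPORARY TABLE OrdersScratch (
    order_id BIGINT,
    amount   DOUBLE
) WITH (
    'connector' = 'datagen'
);

-- A VIEW is virtual: the named result of a query over existing tables
CREATE VIEW BigOrders AS
SELECT order_id, amount FROM Orders WHERE amount > 100;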

Does PostgreSQL manages tables inside tables?

I was assigned the task of creating a simple database management system for a class, so I looked up Postgres and noticed that the CLI tool (psql) has commands (\d and \l) that output information about the database and the columns of a table, formatted like the result of a SELECT. So my question is: does Postgres manage user tables inside system tables, so that when you run \d or \l you are actually doing a SELECT on those system tables? I am asking to understand whether that is a good way of managing tables in a database, versus just using regular data structures like lists.
It does indeed. You can run psql with -E to see the queries it is using.
Then check the online manuals.
The items to search for are "system catalogs" and "INFORMATION_SCHEMA". The latter is a standard way of describing database schemas and should mostly work across different RDBMSs.
Yes, Postgres uses tables that it creates to manage the tables that you create.
There is an entire chapter in the documentation explaining this. To quote:
The system catalogs are the place where a relational database management system stores schema metadata, such as information about tables and columns, and internal bookkeeping information. PostgreSQL's system catalogs are regular tables.
As mentioned in the other answer, the SQL standard requires that metadata be provided in certain table structures defined within the standard. These must be housed in a schema named exactly INFORMATION_SCHEMA. Postgres provides that schema and its prescribed tables, but implements them as views on the actual system tables. See the chapter on INFORMATION_SCHEMA in the Postgres documentation.
You can access the metadata, for example to get a list of all the tables you have defined, or a list of all the columns you defined in a particular table. To do so, run a SELECT query like any other.
For portability, meaning to write code that works in other database systems in addition to Postgres, query against INFORMATION_SCHEMA.
For additional details not required by the SQL standard, and for Postgres-specific info, query against the Postgres-specific system tables. Their names all start with pg_.
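For example, a minimal sketch of both approaches, listing user-defined tables while skipping the system schemas:
-- Portable: the SQL-standard INFORMATION_SCHEMA (works in most RDBMSs)
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');

-- Postgres-specific: the same list via a pg_catalog view
SELECT schemaname, tablename
FROM pg_catalog.pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');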

Is it possible to move a schema to another database in SQL Server?

We have a legacy database that has dozens of schemas in it, and we're looking to split that database up into several smaller distinct databases instead.
Is there any way I can create a new database on the same physical server, and then transfer an entire schema over to the new database?
Our tables look like:
Foo.Table1
Foo.Table2
Foo.Table3
...
Bar.Table1
Bar.Table2
...
Xxx.Table1
Xxx.Table2
...
...and I want to move Foo.* to a new database.
The typical recommendation is some kind of per-table export/import, but that's quite cumbersome with the 150+ tables in the schema.
As far as my trivial research goes, the options appear to be:
Export/import each table individually.
Back up the entire database, restore it to a different destination, and delete everything else (painful, since the entire database is ~900 GB).
Deploy the dacpac of the single schema to the new database and do a cross-database initial seeding, i.e.:
INSERT INTO newDb.Foo.Table1 SELECT * FROM oldDb.Foo.Table1;
INSERT INTO newDb.Foo.Table2 SELECT * FROM oldDb.Foo.Table2;
INSERT INTO newDb.Foo.Table3 SELECT * FROM oldDb.Foo.Table3;
...
All of these options are a lot of effort... is there any other approach that will simply move an entire schema into a new database?
I am not aware of any fully automated way, but this can be done relatively simply with the help of Excel.
In SSMS you can use the "Object Explorer Details" pane to script the schema of multiple tables easily (with a few mouse clicks).
With the help of the system views (sys.tables, sys.columns, etc.) and Excel, you should be able to generate INSERT INTO ... SELECT ... scripts for all of your tables in minutes.
In Excel (or a similar application) you paste the list of your tables (obtained from sys.tables) and then write a formula to generate a script for each table.
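Alternatively, a minimal pure T-SQL sketch that generates those statements without Excel (the database and schema names follow the question; adjust as needed, and note that tables with identity columns would also need SET IDENTITY_INSERT handling):
-- Generate one INSERT INTO ... SELECT per table in the Foo schema
SELECT 'INSERT INTO newDb.Foo.' + QUOTENAME(t.name)
     + ' SELECT * FROM oldDb.Foo.' + QUOTENAME(t.name) + ';'
FROM oldDb.sys.tables AS t
JOIN oldDb.sys.schemas AS s
    ON s.schema_id = t.schema_id
WHERE s.name = 'Foo'
ORDER BY t.name;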
You can create a filegroup for each schema and move the tables of each schema into the related filegroup. After that, you back up each filegroup and restore it.

preserve the data while dropping a hive internal table

I have loaded a huge table from SQL Server into Hive. The mistake I made is that I created it as an internal (managed) table in Hive. Can anyone suggest a hack so that I can alter the table structure without dropping the data?
The data is huge and I can't afford to export it from the source again.
The problem right now is that since the column order doesn't match the SQL Server table, a lot of columns display NULL.
Any help will be highly appreciated.
I do not see any problem with using ALTER TABLE on an internal table. (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/Partition/Column)
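For the column-order problem specifically, a hedged sketch of what that could look like (table and column names are hypothetical):
-- Reorder a column in the Hive metadata so it lines up with the
-- position of the values in the underlying data files. CHANGE COLUMN
-- edits only the metadata; the data files are left untouched.
ALTER TABLE my_table CHANGE COLUMN amount amount DOUBLE AFTER order_id;

-- Moving a column to the front works with FIRST
ALTER TABLE my_table CHANGE COLUMN order_id order_id BIGINT FIRST;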
Another - but not recommended - option would be to open your Hive metastore (HCatalog) and apply the changes there. Hive reads the schema information out of a relational database (configured during the Hadoop setup, often MySQL). In that database you can try to change some settings. However, this is not recommended, as a single mistake can screw up your whole Hive database.
The safest way is creating a new table and using the existing one as the source:
create table new_table
as
select
[...]
from existing_table

What is the difference between table and external table in Netezza?

What is the difference between a table and an external table in Netezza? Does Netezza always read the data file in the backend? After loading the data, is it required to copy the data from the external table into a normal database table again?
This is covered pretty well on lots of blogs and tech sites, like this one: http://tennysusantobi.blogspot.no/2012/08/netezza-external-tables.html
Basically, external tables are just a definition residing in Netezza, allowing it to query data from (usually) local text files without having to physically load them into a database in Netezza. They are also used to export data easily (as covered in the link).
Tables:
Both the definition and the data reside in the database. More precisely, the data is stored physically on each data slice based on the distribution key.
External tables:
Only the table definition resides in the database, not the actual data. The data resides in the file itself.
External tables are mainly used to load and unload data. They can also be used to back up Netezza tables or to transfer data from one Netezza box to another.
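A hedged sketch of the load/unload pattern (the file path and table names are hypothetical):
-- Define an external table with the same columns as an existing table;
-- only the definition lives in the database, the data stays in the file
CREATE EXTERNAL TABLE ext_orders
    SAME AS orders
    USING (DATAOBJECT ('/tmp/orders.csv') DELIMITER ',');

-- Unload: write the table's rows out to the file
INSERT INTO ext_orders SELECT * FROM orders;

-- Load: copy the file's contents into a regular table
INSERT INTO orders_copy SELECT * FROM ext_orders;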
