I am seeing something weird in a Snowflake database I'm working with. Our table names in the metadata are different from the actual table names.
For example, we have a table named
TABLE_NAME
but if I query the metadata for TABLE_NAME, it comes back null. The metadata for TABLE_NAME appears instead to be stored under TABLE_NAME_1557893E-69ED-4225-9A54-C836977F01A8
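For reference, this is roughly the kind of lookup I'm running (a sketch, with MY_DB standing in for the real database name):
SELECT table_schema, table_name, table_type
FROM MY_DB.INFORMATION_SCHEMA.TABLES
WHERE table_name = 'TABLE_NAME';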
Has anyone here had any experience with why this might be happening?
Related
I have a PostgreSQL database and need to change the data type of a column using ALTER TABLE table_name ALTER COLUMN column_name TYPE new_type. There is a view that uses that table. Is there any way to change the column's data type without dropping the view?
No, you will have to drop and re-create the view.
The pg_get_viewdef function, which returns the definition of a view, will be helpful.
You can do the whole operation in a transaction if you don't want to expose the "view-less" state to concurrent transactions.
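A minimal sketch of that sequence, using hypothetical names (my_view, my_table, my_column):
BEGIN;
-- capture the current view definition before dropping (save the output somewhere)
SELECT pg_get_viewdef('my_view'::regclass, true);
-- drop the dependent view, change the column type, then re-create the view
DROP VIEW my_view;
ALTER TABLE my_table ALTER COLUMN my_column TYPE bigint;
CREATE VIEW my_view AS
SELECT my_column, other_column
FROM my_table;
COMMIT;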
I have a requirement where data from 150 tables with different columns should be copied to another table which has all these columns. I need a script which will do this activity automatically instead of manually inserting one by one.
Any suggestions?
You can get the column names from either sys.columns or information_schema.columns along with their data types; then it's just a matter of de-duplicating the columns (by name) and resolving any conflicts between differing data types to create your destination table.
Once you have that, you can create and execute all your INSERT statements.
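For example, a rough sketch of that second step (assuming SQL Server 2017+ for STRING_AGG, a destination table dbo.Combined already built from the de-duplicated column list, and a hypothetical SRC_ prefix identifying the 150 source tables):
DECLARE @sql NVARCHAR(MAX) = N'';
-- build one INSERT ... SELECT per source table, listing only the columns it actually has
SELECT @sql = @sql
    + N'INSERT INTO dbo.Combined (' + cols.col_list + N') '
    + N'SELECT ' + cols.col_list
    + N' FROM ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name) + N';' + CHAR(13)
FROM sys.tables AS t
CROSS APPLY (
    SELECT STRING_AGG(QUOTENAME(c.name), ', ') AS col_list
    FROM sys.columns AS c
    WHERE c.object_id = t.object_id
) AS cols
WHERE t.name LIKE 'SRC[_]%';
EXEC sys.sp_executesql @sql;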
Good luck.
It's the first time that I'm working with SQL Server, and I need some help. I have to write a stored procedure (with an empty table as the input parameter, or maybe a string containing its name is better?) that:
Identifies all tables on the database ending with the name of an input table. E.g. when the input table is called 'test', tables with names like 'table1_test', 'table2_test', ... should be selected.
Merges those tables into the input table (which is empty).
Assumption: tables ending with 'test' have the same structure.
I can identify the tables by using the LIKE operator on information_schema.tables. That returns a single-column result set containing all the selected table names. At this point, I'm stuck.
Does somebody know what to do? Thanks in advance!
Edit: the code I currently have for identifying which tables to insert
DECLARE @TARGET VARCHAR(255);
SET @TARGET = 'TEST';
BEGIN
    SELECT TABLE_NAME
    FROM sfdc_replica.information_schema.tables
    WHERE TABLE_NAME LIKE CONCAT('%', @TARGET);
END;
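One way to continue from there could be to turn that list into dynamic INSERT ... SELECT statements (a sketch only; it assumes the matching tables really do share the target table's structure and that the empty target table lives in dbo of the current database):
DECLARE @TARGET VARCHAR(255) = 'TEST';
DECLARE @sql NVARCHAR(MAX) = N'';
-- build one INSERT per table whose name ends with the target name
SELECT @sql = @sql
    + N'INSERT INTO dbo.' + QUOTENAME(@TARGET)
    + N' SELECT * FROM sfdc_replica.' + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME) + N';' + CHAR(13)
FROM sfdc_replica.information_schema.tables
WHERE TABLE_NAME LIKE CONCAT('%', @TARGET)
  AND TABLE_NAME <> @TARGET        -- don't insert the target into itself
  AND TABLE_TYPE = 'BASE TABLE';
EXEC sys.sp_executesql @sql;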
I am using Hive for work. When I created some external tables today, I forgot to type the EXTERNAL keyword, so the HiveQL looks like this:
CREATE TABLE year_2012_main (
some BIGINT,
fields BIGINT,
should BIGINT,
beee BIGINT,
here STRING,
buttt STRING,
Iveee STRING,
decide STRING,
tohide STRING,
them BIGINT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY ' '
MAP KEYS TERMINATED BY ':'
STORED AS TEXTFILE
LOCATION '/data/content/year_2012_main';
Then I tried select count(*) from year_2012_main; and it worked well.
So, just out of curiosity, what's the difference with or without EXTERNAL?
A Hive table that's not external is called a managed table. One of the main differences between an external and a managed table in Hive is that when an external table is dropped, the data associated with it (in your case /data/content/year_2012_main) doesn't get deleted; only the metadata (number of columns, type of columns, terminators, etc.) gets dropped from the Hive metastore. When a managed table gets dropped, both the metadata and the data get dropped.
I have so far always preferred making tables external, because if the schema of my Hive table changes, I can just drop the external table and re-create another external table over the same HDFS data with the new schema. However, most (if not all) changes to a schema can now be made through ALTER TABLE or similar commands, so my recommendation/preference for external tables over managed ones might be more of a legacy concern than a contemporary one.
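For comparison, the external form of the same definition differs only by the keyword (sketched here with a hypothetical name year_2012_main_ext and a trimmed column list):
CREATE EXTERNAL TABLE year_2012_main_ext (
  some BIGINT,
  here STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/content/year_2012_main';
Dropping year_2012_main_ext would then remove only the metastore entry; the files under /data/content/year_2012_main would stay in place.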
You can learn more about the terminology in the Hive documentation on managed and external tables.
I see from the SQL Server 2005 documentation that you cannot create an indexed view over an XML column.
Is this possible in 2008 or 2008 R2? I can't find any documentation saying that support was added, but I'm looking for confirmation, and I don't have handy access to a 2008 environment at the moment.
EDIT
My motivation behind this is that the amount of XML is growing to the point where SSRS reports that aggregate data from the XML are becoming slow.
Depending on your needs, what you could do is this:
create a set of stored functions that extract certain key bits of information from your XML (each function receives the XML as input, extracts the info using XPath/XQuery, and returns a VARCHAR or INT or similar value):
CREATE FUNCTION dbo.SomeFunction(#Input XML)
RETURNS VARCHAR(20)
WITH SCHEMABINDING
AS BEGIN
......
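-- e.g. (a sketch, assuming a hypothetical /Root/Key element):
-- RETURN @Input.value('(/Root/Key)[1]', 'VARCHAR(20)')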
END
add those key bits to your base table as computed columns that reference those functions, with the PERSISTED keyword:
ALTER TABLE dbo.YourTable
ADD ComputedColumns1 AS dbo.SomeFunction(XmlColumn) PERSISTED
create your view on the table and those computed columns, with schemabinding:
CREATE VIEW vYourView
WITH SCHEMABINDING
AS
SELECT (list of columns)
FROM dbo.YourTable
create a unique, clustered index on that view - unless you've violated any of the requirements of the indexed view, this should work just fine:
CREATE UNIQUE CLUSTERED INDEX CIX_YourView ON dbo.vYourView(.....)
This works fine if you need to extract a small number of key bits of information from an XML column - it's definitely not recommended for lots of XML elements / values.
I don't believe this is possible. Without a better explanation of what you are trying to do, one suggestion I can offer is to pull the XML apart before insert (perhaps using an INSTEAD OF trigger, or doing this shredding at the application layer) and store the part(s) you want to use for the indexed view in separate non-XML columns.
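As a rough sketch of the trigger route (with a hypothetical table dbo.Docs carrying an extra shredded column KeyValue, and a hypothetical /Root/Key path):
CREATE TRIGGER dbo.trg_Docs_Insert ON dbo.Docs
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO dbo.Docs (Id, Payload, KeyValue)
    SELECT i.Id,
           i.Payload,
           i.Payload.value('(/Root/Key)[1]', 'VARCHAR(20)')   -- shred the key at insert time
    FROM inserted AS i;
END
An indexed view (or just a regular index) can then be built on the plain KeyValue column rather than on the XML itself.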