Unable to create distributed hypertable on multi-node TimescaleDB setup

I am trying to create a distributed hypertable on a multi-node TimescaleDB setup. I can easily create the table and then convert it to a distributed hypertable using the "create_distributed_hypertable" command. This works in the "public" schema, but if I create the table in a schema I created myself, the regular PostgreSQL table gets created while the conversion fails with the following error:
ERROR: [multinode-timescaledb-data-1]: schema "myschema" does not exist
SQL state: 3F000
SQL for regular table:
CREATE TABLE myschema.stocks_intraday (
    "time" timestamp NOT NULL,
    symbol text NULL,
    price_open double precision NULL,
    price_close double precision NULL,
    price_low double precision NULL,
    price_high double precision NULL,
    trading_volume int NULL
);
SQL for conversion:
SELECT create_distributed_hypertable('myschema.stocks_intraday', 'time');

The error happens on the data node multinode-timescaledb-data-1 because, most likely, myschema has not been created on that data node.
TimescaleDB doesn't take care of creating schemas on data nodes when creating a hypertable there.
You need to create your schema on each data node, either by logging in to each data node or by using distributed_exec:
CALL distributed_exec($$ CREATE SCHEMA myschema $$);
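Putting the pieces together, the full sequence run from the access node would look roughly like this (a sketch assuming the data nodes have already been attached with add_data_node):

```sql
-- Run on the access node. Assumes data nodes were already attached
-- with add_data_node(); node and schema names are from the question.
CALL distributed_exec($$ CREATE SCHEMA myschema $$);  -- create schema on every data node
CREATE SCHEMA IF NOT EXISTS myschema;                 -- create it on the access node too

CREATE TABLE myschema.stocks_intraday (
    "time" timestamp NOT NULL,
    symbol text NULL,
    price_open double precision NULL,
    price_close double precision NULL,
    price_low double precision NULL,
    price_high double precision NULL,
    trading_volume int NULL
);

SELECT create_distributed_hypertable('myschema.stocks_intraday', 'time');
```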


VS Database Project keeps skipping a table

I have a Visual Studio (2019) Database Project that skips generating a table no matter what I do, and I cannot figure out why.
Background:
I created the project by importing from a SQL database up in Azure that had an existing table structure. The database was pretty much a prototype and I was pulling it in to speed up creating the "real" database.
I then modified pretty much all of the tables in the project, applying some standard naming conventions, indexes, etc. I added seed data files as part of the Post Deployment, etc.
Then, to prep for a test of a clean install of the database and the test data, I drop everything in the SQL Server database.
What I tried:
First I attempted to publish the database project to my newly empty database. I "Generate Script", execute the script (from VS) only to discover that it is trying to add an index to a table that doesn't exist. I swear the Item table is there...and sure enough it is in the Project. It is set to build too. But looking at the deploy script, it does not have the Item table.
What I tried next:
Fast forward to my next attempt, ignoring hours of VS updates, builds, rebuilds, cleans, restarts, etc. I move on to doing a schema compare from my project to the empty database, knowing that it will generate all of my create statements but not include my post-deploy files (I decide I can hand-run them later). Comparing against an empty database looks like one would expect (nothing but whitespace on the right side of the schema compare screen). I generate the update script, look at the "Preview" (excerpt below), and my Item table is there:
** Highlights
Tables that will be rebuilt
None
Clustered indexes that will be dropped
None
Clustered indexes that will be created
None
Possible data issues
None
** User actions
Create
[db_executor] (Role)
[dbo].[ApiKey] (Table)
[dbo].[AppLog] (Table)
[dbo].[AppLog].[IX_AppLog_SessionId] (Index)
[dbo].[Customer] (Table)
[dbo].[Customer].[IX_Customer_ExternalId] (Index)
[dbo].[Customer].[IX_Customer_OrganizationId] (Index)
[dbo].[Customer].[UX_Customer_Name] (Index)
[dbo].[CustomerDepartment] (Table)
[dbo].[CustomerDepartment].[IX_CustomerDepartment_DepartmentId] (Index)
[dbo].[CustomerSetting] (Table)
[dbo].[CustomerSetting].[IX_CustomerSetting_SettingId] (Index)
[dbo].[Department] (Table)
[dbo].[Department].[IX_Department_ExternalId] (Index)
[dbo].[Department].[UX_Department_Code] (Index)
[dbo].[ImageResource] (Table)
[dbo].[Item] (Table)
[dbo].[Item].[IX_Item_UPC] (Index)
<snip>
Next I look at the SQL script generated and find that my Item table is omitted again, with the index on Item coming right after the ImageResource table create:
PRINT N'Creating [dbo].[ImageResource]...';
GO
CREATE TABLE [dbo].[ImageResource] (
    [ImageResourceId] INT IDENTITY (1, 1) NOT NULL,
    [URL] NVARCHAR (2048) NULL,
    [ContentType] NVARCHAR (128) NOT NULL,
    [Width] INT NULL,
    [Height] INT NULL,
    [IsActive] BIT NOT NULL,
    [Created] DATETIME2 (7) NOT NULL,
    [CreatedBy] NVARCHAR (128) NOT NULL,
    [Modified] DATETIME2 (7) NULL,
    [ModifiedBy] NVARCHAR (128) NULL,
    CONSTRAINT [PK_ImageResource] PRIMARY KEY CLUSTERED ([ImageResourceId] ASC)
);
GO
PRINT N'Creating [dbo].[Item].[IX_Item_UPC]...';
GO
CREATE NONCLUSTERED INDEX [IX_Item_UPC]
ON [dbo].[Item]([UPC] ASC);
Any suggestions?
EDIT1:
To add some additional information: I do see in the output script that there are a few ALTERs that point to my "skipped" tables, so VS clearly thinks that the table exists in the destination DB. Since it doesn't, I speculate that the destination database metadata must be stored/cached somewhere and that it is out of date. I have deleted the obj and bin folders. Does anyone know if/where this cached destination DB metadata could be stored?
EDIT2:
So, confusingly, if I click "Publish" instead of "Generate Script", the publish actually works, creating all of the tables. But what is extra weird is that the ProjectName.publish.sql generated as part of the publish is just like the one created if you select "Generate Script", in that it is missing 5 out of the 25 tables. So the SQL commands run under the covers during the Publish are not the same as the ProjectName.publish.sql that is output.
Unfortunately, I need the publish SQL file to give to the DBA to run "for real"; I can only do a direct VS deploy to my dev database.

Unable to copy data from external table to exact copy of external table

While building a test DB environment in a SQL Azure DB, I dynamically generate a new DB using CREATE TABLE scripts generated from an originating, prototype DB.
Some of these tables require data from the prototype DB so for each of these I create an external table (referencing the table in the prototype DB) and then run an INSERT INTO query which takes data from the external table and inserts it into the exact copy in the test DB.
The important point here is that both the new table and the external table are dynamically generated using a script which is built in the prototype DB; the new table and the external table should therefore be exact copies of that in the prototype DB.
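For context, the per-table copy step described above could be sketched like this in Azure SQL elastic-query syntax (the table, column, and data-source names are illustrative; the external data source and its credential are assumed to exist already):

```sql
-- Illustrative sketch: an external table pointing at the prototype DB,
-- plus the copy into the identically-defined local table.
CREATE EXTERNAL TABLE dbo.MyTable_Ext (
    Id int NOT NULL,
    Payload nvarchar(4000) NULL
)
WITH (DATA_SOURCE = PrototypeDbSource);  -- references the prototype DB

INSERT INTO dbo.MyTable (Id, Payload)
SELECT Id, Payload
FROM dbo.MyTable_Ext;
```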
However, for one of these tables, I ran into an exception
Large object column support is limited to only nvarchar(max) data type.
The table in question didn't have anything greater than NVARCHAR (MAX) (such as TEXT), though it did have 10 NVARCHAR (MAX) columns. So I altered these columns to NVARCHAR (4000) and ran the process again.
I now encounter the following exception:
The data type of the column 'my_column_name' in the external table is different than the column's data type in the underlying standalone or sharded table present on the external source.
I've refreshed and checked the column type in the prototype DB, the external table and the new table, and all of these show that the data type is NVARCHAR (4000).
So why does it tell me that the data type is different?
Is it a coincidence that the column was previously NVARCHAR (MAX)?

Schema and call flow in Voltdb

What is the format of the schema when we create a new table in VoltDB?
I'm a newbie. I have researched for a while and read the explanation in https://docs.voltdb.com/UsingVoltDB/ChapDesignSchema.php
Please give me more detail about the schema format when I create a new table.
Another question is: what is the call flow of the system, from the moment a request arrives until a response is created?
Which classes/functions does it go through in the system?
Since VoltDB is a SQL-compliant database, you create a new table in VoltDB just as you would in any other traditional relational database. For example,
CREATE TABLE MY_TABLE (id INTEGER NOT NULL, name VARCHAR(10));
You can find all the SQL DDL statements that you can run on VoltDB here
1. Make a file yourSchemaName.sql anywhere on the system. Suppose yourSchemaName.sql looks like this:
CREATE TABLE Customer (
CustomerID INTEGER UNIQUE NOT NULL,
FirstName VARCHAR(15),
LastName VARCHAR (15),
PRIMARY KEY(CustomerID)
);
2. Start sqlcmd from the CLI inside the folder where you have installed VoltDB.
If you haven't set the path, then you have to type /bin/sqlcmd.
After starting it, a simple way to load the schema into your VoltDB database is to type file /path/to/yourSchemaName.sql; inside the sqlcmd utility; the schema in yourSchemaName.sql will then be imported into the database.
VoltDB is a relational database, so you can now use all standard SQL queries against it.

How to use BLOB data type with different databases in flyway?

Is it possible to use only one SQL script to create this table with a BLOB data type in MSSQL, MySQL, and Oracle with Flyway?
CREATE TABLE TFILEATTACHMENT (
ATTACHID decimal(16,0) NOT NULL,
FILENAME varchar(255) DEFAULT '',
FILEBLOB blob,
USERID varchar(10) DEFAULT '',
PRIMARY KEY (ATTACHID)
);
A Flyway migration works fine with this script under MySQL and Oracle. But MSSQL does not know the data type "blob", so we use the type "VARBINARY" in MSSQL.
But it would be nice if we had only one script for all databases.
Is it possible for Flyway to handle this DB-specific translation from blob to e.g. varbinary?
Or is there a better "standard" SQL data type than "blob"?
Thanks!
The simplest way to solve this is to use placeholders. You can then define a placeholder ${blobDataType} containing either the value BLOB or the value VARBINARY, depending on the configuration of your current environment.
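A sketch of how that could look, using the table from the question (the placeholder name and migration file name are illustrative; placeholder values are set via Flyway's flyway.placeholders.* configuration):

```sql
-- V1__create_tfileattachment.sql
-- ${blobDataType} is resolved by Flyway before the script runs:
-- e.g. flyway.placeholders.blobDataType=BLOB for MySQL/Oracle,
--      flyway.placeholders.blobDataType=VARBINARY(MAX) for MSSQL.
CREATE TABLE TFILEATTACHMENT (
    ATTACHID decimal(16,0) NOT NULL,
    FILENAME varchar(255) DEFAULT '',
    FILEBLOB ${blobDataType},
    USERID varchar(10) DEFAULT '',
    PRIMARY KEY (ATTACHID)
);
```

The same migration script is then shared by all three databases, with only the per-environment configuration differing.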

Translating SQL for use with Oracle

I have 2 Oracle questions
How do I translate this SQL Server statement to work on Oracle?
Create table MyCount(Line int identity(1,1))
What is the equivalent of SQL Server's Image type for storing pictures in an Oracle database?
You don't need to use triggers for this if you manage the inserts:
CREATE SEQUENCE seq;
CREATE TABLE mycount
(
line NUMBER(10,0)
);
Then, to insert a value:
INSERT INTO mycount(line) VALUES (seq.nextval);
For images, you can use BLOBs to store any binary data, or BFILE, which is managed more or less like a BLOB except that the data is stored on the file system, for instance as a jpg file.
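For instance, a minimal table for image storage might look like this (table and column names are illustrative):

```sql
-- Illustrative: a BLOB column holds the image bytes inside the database,
-- whereas a BFILE column would only reference a file on the server's file system.
CREATE TABLE my_images (
    image_id   NUMBER(10,0) PRIMARY KEY,
    image_data BLOB
);
```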
References:
Create Sequence reference.
Create table reference.
Oracle® Database Application Developer's Guide - Large Objects.
1: You'll have to create a sequence and a trigger
CREATE SEQUENCE MyCountIdSeq;
CREATE TABLE MyCount (
Line INTEGER NOT NULL,
...
);
CREATE TRIGGER MyCountInsTrg BEFORE INSERT ON MyCount
FOR EACH ROW
BEGIN
    SELECT MyCountIdSeq.NEXTVAL INTO :new.Line FROM dual;
END;
/
2: BLOB.
Our tools can answer these questions for you. I'm talking about Oracle SQL Developer.
First - it has a Create Table wizard - and 12/18c Database supports native implementation of Identity columns.
And your new table DDL
CREATE TABLE MYCOUNT
(
LINE INT GENERATED ALWAYS AS IDENTITY NOT NULL
);
Also, we have a Translator: it can take SQL Server bits and turn them into equivalent Oracle bits. There's a full-blown migration wizard which will capture and convert your entire data model.
But for one-offs, you can use the Scratchpad. It's available under the Tools > Migration menu.
It takes your code and gives you something that would work in any Oracle Database.
Definitely use the identity feature in 12/18c if you're on that version of Oracle. Fewer db objects to maintain.
