I've been struggling with this issue for most of the morning, and I'm ready to suggest this is a bug in SqlPackage.exe.
I extract a dacpac using the following command:
C:\"Program Files"\"Microsoft SQL Server"\150\DAC\bin\SqlPackage.exe /a:Extract /ssn:$server /sdn:$dbName /st:300 /p:CommandTimeout=300 /tf:obj\$dbName\extracted.dacpac
And generate a publish script using:
C:\"Program Files"\"Microsoft SQL Server"\150\DAC\bin\SqlPackage.exe /a:Script /tcs:"Server=$server`;Database=$newDbName`;Trusted_Connection=True`;Connection Timeout=300`;" /p:CommandTimeout=300 /p:ExcludeObjectType=Logins /p:ExcludeObjectType=Users /p:ExcludeObjectType=RoleMembership /p:IgnoreNotForReplication=true /p:UnmodifiableObjectWarnings=false /sf:obj\$dbName\extracted.dacpac /op:obj\$dbName\publish_script.sql
The script generates, but it fails when I try to execute it, with the error:
Procedure MY_FUNCTION, Line 39 Invalid object name 'MY_OTHER_FUNCTION'
If I examine the script I can see the following:
LINE 300: PRINT N'Creating [dbo].[MY_FUNCTION]...'
... More code ...
LINE 400: PRINT N'Creating [dbo].[MY_OTHER_FUNCTION]...';
I've gone as far as digging into the extracted dacpac to confirm that the model.xml is picking up the dependency MY_FUNCTION has on MY_OTHER_FUNCTION. I have also verified that this isn't a case of a circular dependency. MY_OTHER_FUNCTION is dependent on one table that was created back on LINE 100.
Why is the generated script creating them out of order?
Alright I'm certain this is a bug at this point. Steps to reproduce:
Create a new database.
Run the following creation scripts:
CREATE TABLE [dbo].[someTable]([Id] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL) ON [PRIMARY]
GO
CREATE FUNCTION [dbo].[someOtherFunction](@Id INT = 1) RETURNS @someResults TABLE (Id INT)
AS
BEGIN
INSERT INTO @someResults(Id)
SELECT * FROM [sqlpackagebug].[dbo].[someTable] st WHERE @Id = st.Id
RETURN;
END
GO
CREATE FUNCTION [dbo].[someFunction](@Id INT = 1) RETURNS TABLE
AS
RETURN
(
SELECT * FROM [sqlpackagebug].[dbo].someOtherFunction(@Id)
)
GO
Extract the dacpac using:
C:\"Program Files"\"Microsoft SQL Server"\150\DAC\bin\SqlPackage.exe /a:Extract /ssn:$server /sdn:$dbName /st:300 /p:CommandTimeout=300 /tf:extracted.dacpac
Create a script using:
C:\"Program Files"\"Microsoft SQL Server"\150\DAC\bin\SqlPackage.exe /a:Script /tsn:$server /tdn:$newDbName /tt:300 /p:CommandTimeout=300 /sf:extracted.dacpac /op:script.sql
Look over the script and you can see that the functions are not created in the proper order. The script will fail.
I started to go through the first tutorial for how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
Use Database mydb;
CREATE ROLE ANALYST;
grant usage on database mydb to role sysadmin;
grant usage on database mydb to role analyst;
grant usage, create file format, create stage, create table on schema mydb.public to role analyst;
grant operate, usage on warehouse mywh to role analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
// Use snowsql for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my
List @~;
list @%mytable;
Then, in my active SnowSQL session, when I run:
Put file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role (Accountadmin), but I get:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in Snowsql and am successful:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
Put file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to my_table and to my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @My_stage pattern = '.*.csv.gz';
REMOVE @My_stage pattern = '.*.json.gz';
// yay, you are done!
The PUT command copies the file from your local drive to the stage. You should do the PUT to the stage, not the table.
put file:///Users/<>/Documents/data/data.csv @my_stage;
The COPY INTO command then loads it from the stage into the table.
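A minimal sketch of the two-step flow, assuming the my_stage stage and the my_table created above (the local path and file name are just illustrative):
// in SnowSQL: upload the local file to the named stage (PUT gzips it by default)
PUT file:///Users/<user>/Documents/data/data.csv @my_stage;
// then load from the stage into the table, reusing the CSV file format defined earlier
COPY INTO my_table
FROM @my_stage/data.csv.gz
FILE_FORMAT = (FORMAT_NAME = 'mycsvformat')
ON_ERROR = 'skip_file';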
But in the documentation it is mentioned that one gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
In this case, without creating a stage, it should load into the default Snowflake stage allocated to the table.
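The table stage is referenced with the @% prefix, though; @my_table (without the %) resolves to a named stage called my_table, which does not exist, hence the error above even after the table was created. A minimal sketch of using the table stage instead (file name illustrative):
// upload directly to the table's own stage; no CREATE STAGE is needed
PUT file:///Users/<user>/Documents/data/data.csv @%my_table;
// load from the table stage into the table
COPY INTO my_table
FROM @%my_table
FILE_FORMAT = (FORMAT_NAME = 'mycsvformat')
ON_ERROR = 'skip_file';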
When I run the query below the first time it works, but when I run it again I get an exception:
Select count(*) into rec from all_tables where table_name='DefaultTable';
if(rec=1) then
CREATE TABLE DefaultTable(
Code INT NOT NULL,
Code1 INT NOT NULL,
ResultCode INT NOT NULL,
CONSTRAINT DefaultTable_PK PRIMARY KEY(Code,Code1)
);
else
PRMOPT DefaultTable Already Exist //To print in Console
end if;
Can anyone tell me what I am doing wrong, and how I should write the above query?
Error starting at line 2 in command:
if(rec=1) then
Error report:
Unknown Command
Error starting at line 3 in command:
CREATE TABLE DefaultTable(
Code INT NOT NULL,
Code1 INT NOT NULL,
ResultCode INT NOT NULL,
CONSTRAINT DefaultTable_PK PRIMARY KEY(Code,Code1)
Error at Command Line:3 Column:14
Error report:
SQL Error: ORA-00955: name is already used by an existing object
00955. 00000 - "name is already used by an existing object"
*Cause:
*Action:
Error starting at line 16 in command:
else
Error report:
Unknown Command
Error starting at line 17 in command:
PRMOPT Table Already Exist
Error report:
Unknown Command
Error starting at line 18 in command:
end if
Error report:
Unknown Command
Well, as I understand it, the author is trying to do this in a single SQL statement. But in Oracle you cannot use an IF statement in plain SQL. Moreover, even if you use PL/SQL, DDL statements cannot be invoked directly from PL/SQL code, so you have to use dynamic SQL. I think the following script will do what you want:
DECLARE
rec NUMBER;
BEGIN
SELECT COUNT(*) INTO rec FROM all_tables WHERE table_name='DEFAULTTABLE';
IF (rec=0) THEN
EXECUTE IMMEDIATE 'CREATE TABLE DefaultTable(
Code INT NOT NULL,
Code1 INT NOT NULL,
ResultCode INT NOT NULL,
CONSTRAINT DefaultTable_PK
PRIMARY KEY(Code,Code1,ResultCode)
)';
ELSE
dbms_output.put_line('DefaultTable Already Exist');
END IF;
END;
Please note that in order to see the messages printed via dbms_output, you should execute:
SET SERVEROUTPUT ON;
If you read the error message you will notice it says:
ORA-00955: name is already used by an existing object
This means you are trying to create a table that already exists. That explains why it works the first time but not on subsequent runs.
Check the entries in all_tables and you will find that Oracle stores table names in uppercase by default. So check for 'DEFAULTTABLE'.
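A quick way to confirm this, assuming the table was created as in the question:
-- unquoted identifiers are stored in uppercase, so compare against 'DEFAULTTABLE'
SELECT table_name
FROM all_tables
WHERE table_name = 'DEFAULTTABLE';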
I am building a SQL publish script that will be used to deploy a database to our internal servers, and then be used externally by our client.
The problem I have is that our internal script automates quite a few things for us, whereas in the actual production environment these steps will need to be completed manually.
For example, internally we would use the following script
-- Global variables
:setvar EnvironmentName 'Local'
-- Script.PostDeployment.sql
:r .\PopulateDefaultValues.sql
IF ($(EnvironmentName) = 'Test')
BEGIN
:r .\GivePermissionsToDevelopmentTeam.sql
:r .\PopulateTestData.sql
:r .\RunETL.sql
END
ELSE IF ($(EnvironmentName) = 'Client_Dev')
BEGIN
:r .\GivePermissionsToDevWebsite.sql
END
This would generate a script like this:
-- (Ignore syntax correctness, it's just the process I'm after)
IF($(EnvironmentName) = 'Test')
BEGIN
CREATE LOGIN [Developer1] AS USER [MyDomain\Developer1] WITH DEFAULT SCHEMA=[dbo];
CREATE LOGIN [Developer2] AS USER [MyDomain\Developer2] WITH DEFAULT SCHEMA=[dbo];
CREATE LOGIN [Developer3] AS USER [MyDomain\Developer3] WITH DEFAULT SCHEMA=[dbo];
-- Populate entire database (10000's of rows over 100 tables)
INSERT INTO Products ( Name, Description, Price ) VALUES
( 'Cheese Balls', 'Cheesy Balls ... mm mm mmmm', 1.00),
( 'Cheese Balls +', 'Cheesy Balls with a caffeine kick', 2.00),
( 'Cheese Squares', 'Cheesy squares with a hint of ginger', 2.50);
EXEC spRunETL 'AUTO-DEPLOY';
END
ELSE IF($(EnvironmentName) = 'Client_Dev')
BEGIN
CREATE LOGIN [WebLogin] AS USER [FABRIKAM\AppPoolUser];
END
END IF
This works fine for us. When this script is taken on site, however, it fails because it cannot authenticate the users of our internal environment.
One thought I had about the permissions was to just give our internal team sysadmin privileges, but the test data still fills the script up. When going on site, all of this test data just bloats the published script and isn't used anyway.
Is there any way to exclude a section entirely from a published file, so that all of the test data and extraneous inserts are removed, without any manual intervention on the published file?
Unfortunately, there is currently no way to remove the contents of a referenced script from the generated file entirely.
The only way to achieve this is to post-process the generated script (Powershell/Ruby/scripting language of choice) to find and remove the parts you care about using some form of string and file manipulation.
Based on: my experience doing this exact same thing to remove a sizable development-environment-only script that bloated the production deployment script with a lot of 'noise', making it harder for DBAs to review the script sensibly.
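One way to make that post-processing reliable (a sketch only; the marker comments are a convention you would add yourself, not a SqlPackage feature) is to wrap the internal-only references in distinctive comments that the post-processing step can search for and delete:
-- Script.PostDeployment.sql
:r .\PopulateDefaultValues.sql
-- BEGIN INTERNAL-ONLY
IF ($(EnvironmentName) = 'Test')
BEGIN
:r .\GivePermissionsToDevelopmentTeam.sql
:r .\PopulateTestData.sql
:r .\RunETL.sql
END
-- END INTERNAL-ONLY
The referenced scripts get inlined between the markers in the generated file, so the post-processing step only needs to delete everything between the two markers before handing the script to the client.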
I'm trying to run a simple external table program using oracle 11g on Linux VM. The problem is that I can't query any data from .txt files.
Here's my code:
CONN / as sysdba;
CREATE OR REPLACE DIRECTORY DIR1 AS 'home/oracle/TEMP/X/';
GRANT READ, WRITE ON DIRECTORY DIR1 TO user;
CONN user/password;
CREATE TABLE gerada
(
field1 INT,
field2 Varchar2(20)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY DIR1
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ';'
MISSING FIELD VALUES ARE NULL
)
LOCATION ('registros.txt')
)
REJECT LIMIT UNLIMITED;
--Error starts here.
SELECT * FROM gerada;
DROP TABLE gerada;
DROP DIRECTORY DIR1;
Here's the error message:
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
error opening file home/oracle/TEMP/X/GERADA_3375.log
And this is what registros.txt looks like:
1234;hello world;
I've checked my permissions on DIR1 and I do have read/write permissions.
Any ideas?
ORA-29913 and ORA-29400 mean that you're unable to access the directory and/or file.
Looking carefully at the CREATE DIRECTORY command it looks like the path you're using may be mis-formatted. Try putting a forward slash at the start of the path and removing the one at the end of the path when creating the directory - e.g. CREATE OR REPLACE DIRECTORY DIR1 AS '/home/oracle/TEMP/X';.
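A minimal sketch of the corrected setup, assuming the same placeholder user and file as in the question:
CONN / AS SYSDBA;
-- note the leading slash and no trailing slash
CREATE OR REPLACE DIRECTORY DIR1 AS '/home/oracle/TEMP/X';
GRANT READ, WRITE ON DIRECTORY DIR1 TO user;
CONN user/password;
-- the existing external table picks up the redefined directory
SELECT * FROM gerada;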
Share and enjoy.
CREATE TABLE LOG_FILES (
LOG_DTM VARCHAR(18),
LOG_TXT VARCHAR(300)
)
ORGANIZATION EXTERNAL(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY LOG_DIR
ACCESS PARAMETERS(
RECORDS DELIMITED BY NEWLINE
FIELDS(
LOG_DTM position(1:18),
LOG_TXT position(19:300)
)
)
LOCATION('logadm')
)
REJECT LIMIT UNLIMITED
/
LOG_DIR is an oracle directory that points to /u/logs/
The problem though is that the contents of /u/logs/ looks like this
logadm_12012012.log
logadm_13012012.log
logadm_14012012.log
logadm_15012012.log
Is there any way I can specify the location of the file dynamically? I.e. every time I run SELECT * FROM LOG_FILES it should use the log file of the day (e.g. logadm_DDMMYYYY.log).
I know I can use ALTER TABLE log_files LOCATION ('logadm_15012012.log'), but I would like not to have to issue the ALTER command.
Any other possibilities?
Thanks
It's a shame you're running 10g. On 11g we can associate a pre-processor script - a shell script - with an external table. In your case you could run a script which would figure out the latest file and then issue a copy command. Something like:
cp logadm_15012012.log logadm
Adrian Billington has blogged about this feature here. Frankly his write-up is more helpful than the official docs.
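For reference, the 11g version looks roughly like this (a sketch only; the EXEC_DIR directory and pick_latest.sh script are hypothetical, and the script itself, which would select the newest logadm_DDMMYYYY.log as described in the blog post, is not shown):
CREATE TABLE LOG_FILES (
LOG_DTM VARCHAR(18),
LOG_TXT VARCHAR(300)
)
ORGANIZATION EXTERNAL(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY LOG_DIR
ACCESS PARAMETERS(
RECORDS DELIMITED BY NEWLINE
PREPROCESSOR EXEC_DIR:'pick_latest.sh'
FIELDS(
LOG_DTM position(1:18),
LOG_TXT position(19:300)
)
)
LOCATION('logadm')
)
REJECT LIMIT UNLIMITED
/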
But as you're on 10g all you can do is run the ALTER TABLE statement, or use a scheduled job (cron or whatever) to sync a new file with the generic name.
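On 10g, the ALTER TABLE route can at least be automated. A sketch of a PL/SQL block (using the DDMMYYYY naming from your listing) that a scheduled job could run each day, or that you could call just before querying:
BEGIN
-- point the external table at today's file, e.g. logadm_15012012.log
EXECUTE IMMEDIATE
'ALTER TABLE log_files LOCATION (''logadm_'
|| TO_CHAR(SYSDATE, 'DDMMYYYY')
|| '.log'')';
END;
/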