I am looking to develop a VBScript that would extract all the tables/columns from a PowerDesigner model to an Excel file. After changing a few properties, I will update the model using VBScript. So I would like to know if there is any property of a column that can uniquely identify each column of a table, like the ROWID pseudo-column in Oracle.
Does PowerDesigner maintain a unique ID for each object created in a PDM?
Most (all?) modeling objects derive from IdentifiedObject, which has an ObjectID property, a GUID.
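For example, here is a minimal VBScript sketch (run from Tools > Execute Commands > Edit/Run Script in PowerDesigner) that lists every table/column of the active PDM together with its ObjectID; the Excel export/import itself is left out, and ActiveModel is assumed to be the open physical model:

' List each table/column of the active PDM with its ObjectID (GUID)
Dim tab, col
For Each tab In ActiveModel.Tables
    If Not tab.IsShortcut Then
        For Each col In tab.Columns
            ' ObjectID is stable, so it can be round-tripped through Excel
            Output tab.Code & vbTab & col.Code & vbTab & col.ObjectID
        Next
    End If
Next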
models.py:
class Name(models.Model):
    name = models.TextField(db_column="Name_of_the_persion")
This is how I defined the model, so that the table 'Name' is created on the database side with the column name "Name_of_the_persion".
My requirement is that I need to insert new rows into the table based on the Django model field names instead of the table's column names.
If anyone knows, please help me.
I tried inserting the data into the table using psycopg2, but that only works from the database (pgAdmin) point of view, meaning it takes the column names of the database instead of the model fields.
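In other words, what I'm after is an insert that works at the model field level, roughly the way the ORM does it (minimal sketch; the import path is an assumption):

from myapp.models import Name  # assumed app name

# The ORM insert uses the model field name "name"; Django maps it to the
# underlying column "Name_of_the_persion" via db_column.
Name.objects.create(name="some value")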
I am new to Azure Synapse and am technically a Data Scientist who is doing a Data Engineering task. Please help!
I have some xlsx files containing raw data that I need to import into an SQL database table. The issue is that the raw data does not have a uniqueidentifier column, and I need to add one before inserting the data into my SQL database.
I have been able to successfully add all the rows to the table by adding a new column in the Copy Data activity and setting it to @guid(). However, this sets the GUID of every row to the same value (not unique for each row).
GUID mapping:
DB Result:
If I do not add this mapping, the pipeline throws an error stating that it cannot import a NULL Id into the column Id, which makes sense as this column does not accept NULL values.
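For reference, the destination table looks roughly like this (simplified sketch, not the exact DDL):

-- Simplified sketch of the destination table; Id rejects NULLs
CREATE TABLE demo (
    Id uniqueidentifier NOT NULL,
    name varchar(100) NOT NULL
);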
Is there a way to have Azure Synapse Analytics read in a raw xlsx file and then import it into my DB with a unique identifier for each row? If so, how can I accomplish this?
Many many thanks for any support.
Giving dynamic content to a column in this way generates the same value for the entire column.
Instead, you can generate a new GUID for each row using a ForEach activity.
You can retrieve the data from your source Excel file using a Lookup activity (my source only has a name column). Give the output array of the Lookup activity to the ForEach activity.
@activity('Lookup1').output.value
Inside the ForEach, since you already have a linked service, create a Script activity. In this Script activity, you can build a query with dynamic content to insert values into the destination table. The following is the query I built using dynamic content.
insert into demo values ('@{guid()}','@{item().name}')
This allows you to iterate through the source rows and insert each row individually, generating a new GUID every time.
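For example, for a source row where name is Alice, the expression above would render to something like:

insert into demo values ('1b4e28ba-2fa1-11d2-883f-0016d3cca427','Alice')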
You can follow the above procedure to build a query that inserts each row with a unique identifier value. The following image shows where I used Copy Data to insert the first 2 rows (same as yours) and inserted the next 2 rows using the above procedure.
NOTE: I used an Azure SQL database for the demo, but that does not affect the procedure.
Background
I'm using Azure Data Factory v2 to load data from on-prem databases (for example SQL Server) to Azure Data Lake Storage Gen2. Since I'm going to load thousands of tables, I've created a dynamic ADF pipeline that loads the data as-is from the source, based on parameters for schema, table name, modified date (for identifying increments) and so on. This obviously means I can't specify any type of schema or mapping manually in ADF. This is fine, since I want the data lake to hold a persistent copy of the source data in the same structure. The data is loaded into ORC files.
Based on these ORC files I want to create external tables in Snowflake with virtual columns. I have already created normal tables in Snowflake with the same column names and data types as in the source tables, which I'm going to use in a later stage. I want to use the information schema for these tables to dynamically create the DDL statement for the external tables.
The issue
Since unquoted column names are always upper-cased in Snowflake, and Snowflake is case-sensitive in many ways, it is unable to parse the ORC file with the dynamically generated DDL statement, as the definitions of the virtual columns no longer correspond to the source column name casing. For example, it will generate one virtual column as -> ID NUMBER AS (value:ID::NUMBER)
This will return NULL, as the column is named "Id" with a lowercase d in the source database, and therefore also in the ORC file in the data lake.
This feels like a major drawback with Snowflake. Is there any reasonable way around this issue? The only options I can think of are:
1. Load the information schema from the source database to Snowflake separately and use that data to build a correct virtual column definition with correct cased column names.
2. Load the records in their entirety into some variant column in Snowflake, converted to UPPER or LOWER.
Both options add a lot of complexity or even mess up the data. Is there any straightforward way to return only the column names from an ORC file? Ultimately, I would need to be able to use something like Snowflake's DESCRIBE TABLE on the file in the data lake.
Unless you have set the parameter QUOTED_IDENTIFIERS_IGNORE_CASE = TRUE, you can declare your columns in whatever casing you want:
CREATE TABLE "MyTable" ("Id" NUMBER);
If your dynamic SQL carefully uses "Id" and not just Id, you will be fine.
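For example, a sketch of an external table DDL where the casing is preserved both in the column name and in the variant path (stage, directory, and table names are placeholders):

CREATE EXTERNAL TABLE my_ext_table (
    "Id" NUMBER AS (value:"Id"::NUMBER)  -- quoted on both sides
)
LOCATION = @my_stage/my_directory/
FILE_FORMAT = (TYPE = ORC);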
Found an even better way to achieve this, so I'm answering my own question.
With the query below, we can get the path/column names directly from the ORC file(s) in the stage, with a hint of the data type from the source. This filters out columns that only contain NULL values. I will most likely create some kind of data type ranking table for the final data type determination for the virtual columns we're aiming to define dynamically for the external tables.
SELECT f.path as "ColumnName"
, TYPEOF(f.value) as "DataType"
, COUNT(1) as NbrOfRecords
FROM (
SELECT $1 as "value" FROM @<db>.<schema>.<stg>/<directory>/ (FILE_FORMAT => '<fileformat>')
),
lateral flatten(value, recursive=>true) f
WHERE TYPEOF(f.value) != 'NULL_VALUE'
GROUP BY f.path, TYPEOF(f.value)
ORDER BY 1
I imported all entities and attributes from Excel and generated a logical diagram in PowerDesigner.
From the logical model, I created a physical diagram. But to generate SQL scripts, I need to update the column length for each table.
I tried to update the column length from the properties grid.
But I am unable to edit/add the column length. The tool allows me to enter the value, but it does not save it.
Please suggest a solution.
Thanks,
Vasu
I have a quick question: what is the name of the TFS 2010 database table that contains the values for any custom fields?
I did a query against the TFS_Warehouse DB and the dbo.DimWorkItem table. However, I cannot find any of my custom work item fields in this table.
Can someone point me to the correct TFS 2010 table containing the custom field data? When I worked with Quality Center, the tables were pretty well defined so it was easy to do backend DB queries. TFS does not seem that intuitive.
Thanks
You have to add "reportable" to the field definition.
Example: <FIELD name="Scope" refname="xxx.Scope" type="String" reportable="dimension" />
Wait a few minutes and you'll see the field in the warehouse DB.
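If the field already exists, the same change can also be made from the command line with witadmin (sketch; the collection URL is a placeholder):

witadmin changefield /collection:http://yourserver:8080/tfs/DefaultCollection /n:xxx.Scope /reportingtype:dimension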
Look, you need to go to your collection database and check a table called something like Fields.
There you will find the new field's properties, including its type.
You can change the type to string and make it reportable.
Go to the WORKITEMLATEST table and check the field; you can see the name of the field as it was given in the FIELDS table.
Open your work item normally, edit that field's information, and click save...
You can see your data updated in the WORKITEMLATEST table.
BUT...
The problem is that the STRING type is limited... I tried to add more text, and it kept telling me that the number of characters is over the limit!
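For reference, a sketch of the kind of lookup described in this answer; the table and column names here are assumptions, so verify them against your own collection database:

-- Assumed names in the TFS 2010 collection database
SELECT FldID, Name, ReferenceName, Type
FROM dbo.Fields
WHERE ReferenceName = 'xxx.Scope';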