Can I insert column/field comments in PowerDesigner to be used in a generated report?

I have a Conceptual Model in PowerDesigner with 150+ tables and a few thousand columns/fields in total. The columns have been documented separately outside of PowerDesigner, and I would like to import these comments into each column/field. Is there a way to do this with a script rather than manually adding field-level comments/notes?
The second step will be to produce a data model report with tables/columns that includes these comments. I believe I have seen an answer to this here on Stack Overflow, but if there are any newer tips, I welcome them.
I have dabbled with PowerDesigner scripts but have not done any advanced scripting yet. I am hoping to read a CSV-like or XML file and map field names to internal table/field combinations to add the comments.
I have a report template which generates a report of all tables, but column comments are empty.

Related

Count how many fields were cleansed and which fields in SSIS

I'm doing an exercise in which I have to clean data from a Flat File Source and write it to my database. I have already managed to clean all of the fields by applying data quality rules to each field, and I also generate error codes, which I write to a different table when a rule is broken.
My problem is that for the final step of the exercise I have to generate some Power BI graphics showing how many fields were fixed from the source and which fields were cleansed. The only approaches I have thought of are comparing the DB table to the flat file source, or maybe doing something with script components, but I don't think those are really good solutions.
Has anybody encountered this problem? If somebody could point me to info on something like this, it would be great. Thanks!
If I were facing a similar issue, I would do it in three steps:
1. Import the data without any transformation into a staging table.
2. Clean the data and load it into the destination table.
3. Compare the staging and destination tables to see how many values were fixed (a query sketch follows below).
From a design standpoint, establishing a key is essential before you start to clean. You could use an SSIS Derived Column transformation to build a business key by concatenating available fields into a unique key, using FINDSTRING and other string functions.
Similarly, add a flag column in your staging table, or use a derived column (depending on whether you are doing the cleanup in SQL or in SSIS tasks), to indicate whether each row was cleaned.
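As an illustration of the comparison in step 3, here is a minimal T-SQL sketch; Staging, Destination, BusinessKey, and CustomerName are hypothetical names standing in for your own:

-- Count how many values of one field were fixed during cleansing
-- (repeat per field, or unpivot the results into a per-field summary for Power BI):
SELECT COUNT(*) AS FixedCustomerNames
FROM Staging AS s
JOIN Destination AS d
  ON d.BusinessKey = s.BusinessKey
WHERE ISNULL(s.CustomerName, '') <> ISNULL(d.CustomerName, '');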

Querying the Audit Log through Database [SQL Server]

I would like to take the Audit History provided by Enterprise Architect and build a SQL query, reported through a BI tool, that will allow me and other users to search the history of an object, but I am having a little trouble understanding the audit table, t_snapshot.
From what I can tell, t_snapshot has a Style column containing "INSERT," "UPDATE," and "DELETE," which tells me what happened, and the Notes column tells me which object is referenced, but so far I've only been able to get a partial picture. What I have not been able to deduce is when an event occurred or which user made the change.
If anyone has encountered this problem in the past, your input would be appreciated.
Well, I don't know whether you really want to touch that.
There's a column called BinContent which contains what you are looking for. It looks like this:
<LogItem>
  <Row Number="0">
    <Column Name="object_id"><Old Value="1797"/><New Value="1797"/></Column>
    <Column Name="name"><Old Value="CB"/><New Value="CBc"/></Column>
    <Column Name="modifieddate"><Old Value="07.12.2018"/><New Value="11.12.2018"/></Column>
    <appliesTo><Element Type="Action"/></appliesTo>
  </Row>
  <Details User="Thomas" DateTime="2018-12-11 08:22:59"/>
</LogItem>
So it is basically some XML describing the change, including the plain-text user name and a timestamp.
The BinContent column(s) are actually zips, each containing a single file named str.dat that holds the above information.
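If you do want to query it, here is a minimal sketch using only the columns mentioned above; the BinContent blobs still have to be unzipped outside of SQL to get at the user and timestamp:

-- Pull the audit rows for one object by the name that appears in Notes;
-- Style gives the kind of change, BinContent (a zip) holds the details XML
-- ('CB' is just the example element name from the XML above):
SELECT Style, Notes, BinContent
FROM t_snapshot
WHERE Notes LIKE '%CB%';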
Good luck.

PL SQL Find Dependencies on Table Field

I need to find all references to a table's field, and I have hundreds of stored procedures to search through. The goal is to find where the field is used in WHERE clauses and add an extra value to the IN list. For example,
where myTable.Field_X in (1,2,3)
needs to become
myTable.Field_X in (1,2,3,4).
So I'm curious whether there is a system table, like dba_dependencies, or something else I can query that would show me which procedures, views, or functions reference the field. Thanks for any help you can give.
What version of Oracle are you using? If you are on at least 11.1, which introduced column-level dependency tracking, and you're not afraid to leverage some undocumented data dictionary tables, you can create a dba_dependency_columns view that will give you this information. But that will show every piece of code that depends on the column, not just the places where the column is used in the WHERE clause of a statement.
You can search dba_source for the text of procedures, functions, and triggers to look for code with that sort of WHERE clause. That gets a little tricky, though: if someone put the list of values on a different line, or if there is an inline view where the column is aliased to something else, a plain text search can miss it. For views, you'd need to use dbms_metadata to generate the DDL for each view and search that in a loop.
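As a starting point, here is a rough sketch; FIELD_X stands in for your column name, and this only catches literal occurrences of the name in stored PL/SQL source:

-- List candidate lines in procedures, functions, and triggers:
SELECT owner, name, type, line, text
FROM dba_source
WHERE UPPER(text) LIKE '%FIELD_X%'
ORDER BY owner, name, line;

-- For views, the text lives elsewhere; one option is to pull each view's DDL
-- and inspect it (the result is a CLOB, so filter it in a loop or client-side):
SELECT v.owner, v.view_name,
       dbms_metadata.get_ddl('VIEW', v.view_name, v.owner) AS view_ddl
FROM dba_views v
WHERE v.owner = 'MYSCHEMA';  -- hypothetical schema name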

Epicor asking for password after making a table change

Epicor - what a beastly creature!
Epicor is asking for a password after making a table change. Any idea why?
We removed the relationship from the part table and set up a criterion instead. Now it is asking for a password, which should not be happening.
The login prompt appears when I try to run the report. I am trying to figure out what I did to aggravate Epicor. The table was already there; I removed the relationship (part table) and added a criterion instead, which is otherwise exactly what I would have done. The only reason I did not add a table to the report data definition, as I originally wanted to, is that the parts table could only be added once. That is why I removed the relationship and added a criterion instead.
From your description, it sounds like the problem is related to the XML generated by Epicor for a non-BAQ-based report data definition. Crystal and SSRS reports ask for login information when more than one datasource is referenced in the report, or when improper relationships are defined.
Note:
If you are not a report developer and you have modified this in an attempt to change the end data, I recommend you contact the report developer responsible for maintaining these before proceeding. Otherwise, read on.
Based on my experience, if you are confident in the new relationship structure in the report data definition, the solution is likely within the report itself. Generate an XML file by running a test report, then open the .rpt (or .rdl) associated with the report and point its datasource at the new XML file. This updates the XML schema used as the datasource. Even if none of the fields were changed in the data definition, the datasource schema stored in these files defines exactly the data formatting the report expects to receive when it is opened by Epicor.
If that doesn't solve the problem and you are using Crystal, the XML relationships may be defined in a way that affects how the data is displayed; this can be adjusted in Crystal under Database Expert > Links tab. Reconnect all of the links to match the report data definition within Epicor.
If none of that works, open up the XML file and inspect it.
It is not unheard of for report data definitions in Epicor to break behind the scenes when relationships are altered, and the XML file generated by the test report may not be well-formed. I have seen many such files with unclosed elements and similar defects that cause various problems when the report is run. In this case, my recommendation is to create a completely new report data definition (do not copy the old one), re-enter all of the parameters that existed in the former definition, and repeat the datasource refresh described above; that should fix the problem.

Importing CSV to database (duplicate entries)

My job requires that I look up information in a long spreadsheet that is updated and sent to me once or twice a week. Sometimes the newest spreadsheet leaves out information that was in the previous one, forcing me to look through several spreadsheets to find what I need. I recently discovered that I could convert the spreadsheet to a CSV file and upload it into a database table. With a few lines of script, all I have to do is type in what I'm looking for, and voila! Now I've just received the newest spreadsheet, and I'm wondering if I can import it on top of the old one. There is a unique number for each row, which I have set as the primary key in the database. If I import the new file on top of the current data, will it just skip the rows where the primary key would be duplicated, or will it mess up my database?
Thought I'd ask the experts before I tried it. Thanks for your input!
Details:
The spreadsheet consists of our clients. Each row contains the client's name, a unique ID number, and their address and contact info. I can set the column containing the unique ID as the primary key, then upload it. My concern is that there is nothing to signify a new row in a CSV file (I think). When I upload it, it gives me the option to skip duplicates, but will it skip the entire row or just that one cell, causing my data to land in the wrong rows? It's an Apache server; I don't know which version of MySQL. I'm using 000webhost for this.
Higgs,
This issue, in database/ETL terminology, is called a deduplication strategy.
There is no template answer for this, but I suggest these helpful readings (and see the query sketch after the list below):
Academic paper - Joint Deduplication of Multiple Record Types in Relational Data
Deduplication article
Some open source tools:
Duke tool
DataCleaner
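In the meantime, here is a minimal sketch for spotting exact duplicates after an import; clients and client_id are hypothetical names for your table and unique ID column:

-- List any IDs that appear more than once:
SELECT client_id, COUNT(*) AS copies
FROM clients
GROUP BY client_id
HAVING COUNT(*) > 1;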
There's a little checkbox near the bottom of the import screen that says 'ignore duplicates' or something like that. Simpler than I thought.
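For the record, the SQL equivalent of that checkbox is MySQL's IGNORE keyword; here is a rough sketch with hypothetical file, table, and delimiter settings:

-- Rows whose primary key already exists are skipped as whole rows, never cell by cell:
LOAD DATA INFILE '/tmp/clients.csv'
IGNORE INTO TABLE clients
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the header row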
