PowerDesigner reverse engineering index column missing - powerdesigner

I am having an issue with validating my physical model against a DB2 V9.7 AIX database using the "Apply Model Changes to Database" option. After the database was successfully reverse engineered, the index columns were missing on the database side.
The index columns are in the database, but PowerDesigner is not picking them up during reverse engineering. I am using PowerDesigner 16.1. Any ideas?
Thanks,
Alex

Finally fixed the issue. The setting I needed to change was under Database -> Edit Current DBMS -> Script -> Objects -> Index -> Maxlen. The maximum length was too small for the index name, so I increased it and then it worked.

Related

None of the fields in the record map to the columns apache-nifi

I am trying to insert records from Oracle into PostgreSQL. To do it, I use:
QueryDatabaseTableRecord -> PutDatabaseRecord
QueryDatabaseTableRecord fetches from Oracle (Record Writer: CSV).
PutDatabaseRecord inserts the records into PostgreSQL (Record Reader: CSV).
A few weeks ago I faced the same issue with PostgreSQL (see my Cloudera question).
This time I set the schema to public and Translate Field Names: false.
I also changed the PostgreSQL table columns to upper case, matching what I used in Oracle.
I found the solution for this. It is not directly related to Apache NiFi; it is more of a PostgreSQL thing.
Data taken from Oracle comes with upper-case headers. The headers MUST be converted to lower case; creating the PostgreSQL columns with upper-case names won't solve the issue.
To do this, I used a ReplaceText processor.
Search Value : MY_COLUMN1,MY_COLUMN2
Replacement Value : my_column1,my_column2
I hope this will help someone who is trying to get data from Oracle and put them back into Postgresql.
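For anyone who wants to apply the same header fix outside NiFi, here is a minimal Python sketch of the idea; the file contents and column names are made-up examples:

```python
import csv
import io

def lowercase_headers(csv_text: str) -> str:
    """Rewrite only the header row of a CSV export to lower case,
    leaving the data rows untouched."""
    lines = csv_text.splitlines(keepends=True)
    if not lines:
        return csv_text
    header = next(csv.reader([lines[0]]))        # handles quoted fields too
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerow(h.lower() for h in header)
    return out.getvalue() + "".join(lines[1:])

# Oracle exports arrive with upper-case headers; PostgreSQL folds unquoted
# identifiers to lower case, so only the header row needs rewriting.
print(lowercase_headers("MY_COLUMN1,MY_COLUMN2\nfoo,BAR\n"))
```

Within NiFi itself, the ReplaceText approach described above achieves the same result without custom code.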

Case sensitive sybase query: Invalid column name

Details:
2 databases: sybase version 15 and sybase version 16
1 table each (identical): AuthRole with columns id, rolename and description
Tried both jTDS and jconn drivers
Query:
SELECT t1.roleName FROM AuthRole t1;
Results:
Sybase 15: rows returned successfully. 'roleName' could be upper, lower or a mix of case, i.e. not case sensitive
Sybase 16: Invalid column name 'roleName'. It will only work with 'rolename' which is the exact case of the column. Anyone know why this would happen and how to resolve it?
If on ASE 15 both queries work - with "rolename" and "roleName" - that means the sort order in that database server is case insensitive.
If on ASE 16 "rolename" is different from "roleName" - that means the sort order in that server is case sensitive.
You can check this by querying:
if "a" = "A" print "Case insensitive" else print "Case sensitive"
This setting is static for the whole server (and for all the databases the server contains), but it can be changed. Of course, changing the sort order is a time-consuming process, as it requires rebuilding all indexes based on character types.
You can check the server sortorder setting:
exec sp_configure 'sortorder id'
The information about sort order should be visible in the ASE errorlog when the database server starts:
00:0002:00000:00002:2017/07/04 16:49:26.35 server ASE's default unicode sort order is 'binary'.
00:0002:00000:00002:2017/07/04 16:49:26.35 server ASE's default sort order is:
00:0002:00000:00002:2017/07/04 16:49:26.35 server 'bin_iso_1' (ID = 50)
00:0002:00000:00002:2017/07/04 16:49:26.35 server on top of default character set:
00:0002:00000:00002:2017/07/04 16:49:26.35 server 'iso_1' (ID = 1).
In my example the sort order is binary - which is case sensitive.
Information on how to change the sort order for the server is in the ASE manual. Basically, to change the sort order you need to:
add the new sort order using the charset program,
change the config parameter 'sortorder id',
reboot the ASE server (the server boots, rebuilds the disk devices and then shuts down),
reboot the ASE server again,
rebuild the indexes built on character types, which are marked as invalid.
Sounds like an issue with the sort order, eg:
ASE 15 is configured with a case-insensitive sort order
ASE 16 is configured with a case-sensitive sort order
You should be able to confirm the above by running sp_helpsort.
In ASE, case (in)sensitivity applies to data as well as identifiers (eg, table/column names).
To get ASE 16 to function like ASE 15, the DBA will need to change the sort order in the ASE 16 dataserver (I'd suggest they also verify the character set while they're at it).
Keep in mind that changing the sort order (and/or character set) is a dataserver-wide configuration and will require (at a minimum) a rebuild of all indexes and re-running of update index statistics. [For more info the DBA should refer to the ASE System Administration Guide, Chapter on Configuring Character Sets, Sort Orders and Languages.]
Off the top of my head:
In older versions of Sybase ASE you had to carefully set the case-sensitivity at server installation time. The installer defaults to case-sensitive. Maybe the admin who installed ASE15 noticed this (and changed the default to case-insensitive), whereas the admin who installed your ASE16 didn't.
Yes, case sensitivity is a property of the server. You can change it at a later time with sp_configure or ALTER DATABASE, or both (I don't remember and I don't have the time to look it up). You can also use a graphical admin tool to change the server's default sort order.
In any case, only databases created after that configuration change will be affected. Confusingly, older databases will still be case-sensitive, or lots of warnings will be issued. This is because in your older tables, all primary keys (PKs) are implemented as indexes which assume case sensitivity, and PKs and the PK indexes cannot be changed by an installer or a config wizard.
In fact, you have to drop and re-create the indexes and run dbcc something (again, I don't remember).
For small databases, this drop-and-recreate of indexes can of course be done (use a script or a database reengineering tool to do so). For larger databases this can take some time.
Maybe it's different for ASE 16 - check the documentation.

Get column creation date?

I am using Oracle 11. I need to find when a specific column was created. I know we can find the last DDL change date, but I first created the column
and only some days later created an index on one of the columns of the same table. So now I need to find when that specific column was created.
Is there a way?
This depends on your audit settings: if the object was being audited, you may find it in the audit trail. I'd suggest reading
http://docs.oracle.com/cd/B28359_01/server.111/b28337/tdpsg_auditing.htm
Or you can use LogMiner to check the redo logs if your DB was running in ARCHIVELOG mode. But I have never used this, so I'm not sure about all the requirements there.

I need to make sure 2 DB are the same

I'm doing it programmatically (I'm a newbie to SQL). I'm getting the data per table within the first DB, with the table name being a value from a list of table names that I need to make sure are there. I then need to check whether they have the corresponding values in the same table in DB X, and list all the fields that do not have the same values, along with the table, field name, and row. The query I build looks like:
"SELECT * FROM [Dev.Chris21].[dbo].[" & PayrollTablemaskedarray(xxxxxx-2) & "]"
I can copy the whole thing into Excel, but I'm wondering: is there a way to do this using SQL?
Thanks
Since you mention that you're doing it programmatically, I assume you're using Visual Studio. If so, you can take advantage of SQL Server Data Tools (SSDT) to do comparisons of two database schemas or two database data sets. You get this out of the box with VS2012 or VS2013 (and earlier versions too). Might be worth a look...
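If you'd rather script the comparison than use SSDT, here is a rough Python sketch of the per-table approach the question describes, demonstrated with sqlite3 in place of the real servers; the Payroll table and its columns are hypothetical stand-ins:

```python
import sqlite3

def diff_table(conn_a, conn_b, table, key):
    """Compare one table across two databases. Returns (differing,
    only_in_a, only_in_b), where differing lists (table, field, key,
    value_a, value_b) for rows whose key exists on both sides but
    whose fields disagree. Table/key names are assumed trusted."""
    def fetch(conn):
        cur = conn.execute(f"SELECT * FROM {table}")
        cols = [d[0] for d in cur.description]
        k = cols.index(key)
        return cols, {row[k]: row for row in cur.fetchall()}

    cols, rows_a = fetch(conn_a)
    _, rows_b = fetch(conn_b)
    differing = []
    for pk in sorted(rows_a.keys() & rows_b.keys()):
        for col, va, vb in zip(cols, rows_a[pk], rows_b[pk]):
            if va != vb:
                differing.append((table, col, pk, va, vb))
    return differing, rows_a.keys() - rows_b.keys(), rows_b.keys() - rows_a.keys()

# Demo: two in-memory databases standing in for the two servers.
a, b = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for conn in (a, b):
    conn.execute("CREATE TABLE Payroll (id INTEGER, name TEXT)")
a.executemany("INSERT INTO Payroll VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
b.executemany("INSERT INTO Payroll VALUES (?, ?)", [(1, "Ann"), (2, "Rob")])
diffs, only_a, only_b = diff_table(a, b, "Payroll", "id")
print(diffs)  # [('Payroll', 'name', 2, 'Bob', 'Rob')]
```

Looping the function over the list of table names gives the full report; SSDT's data compare does the same thing per table with a UI.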

Merging multiple Access databases into SQL Server

We have a program in which each user is given their own Access database. We'd like to merge these all together into a single SQL Server database.
The problem is that, using the SQL Server import/export wizard, the primary/foreign keys do not get updated. So for instance if one user has this table:
1 Apple
2 Banana
and another user has this:
1 Coconut
2 Cheeseburger
the resulting table looks like this:
1 Apple
2 Banana
1 Coconut
2 Cheeseburger
Similarly, anything that referenced Banana by its primary key (2) is now referencing both Banana and Cheeseburger, which will not make the vegans very happy.
Is there any way to automatically update the primary/foreign key references when importing, other than writing an extremely long and complex import-script?
If you need to keep them fully compartmentalized, you have to assign some kind of partitioning column to each table. Is there a reason you need your SQL Server to have the same referential integrity as Access? Are you just importing to SQL Server for read-only reporting? In that case, I would not bother with RI. The queries will all require a partitionid/siteid/customerid. You could enforce that for single-entity access by wrapping tables with a table-valued UDF which required the partitionid. For cross-site that doesn't work.
If you are just loading to SQL Server for reporting, I would also consider altering the data model to support reporting (i.e. a dimensional model is sometimes better than a normalized model) instead of worrying about transaction processing.
I think we need to know more about the underlying goals.
Need more information of requirements.
My basic question is: 'Do you need to preserve the original record key?' E.g. 1:apple in table T of user-database A; 1:coconut in table T of user-database B. Table T is assumed to have the same structure in all database instances. Reasons I can suppose you may want to preserve the original data: (a) you may have a requirement to reference the original data (maybe a visual for previous reporting), and/or (b) there may be a data dependency in the application itself.
If the answer is 'no,' then you are probably interested only in preserving all of the distinct data values. Allow the SQL table to build using a new key and constrain the SQL table field such that it contains unique data. This approach seems to preserve the original table structure (but not the original key value or its 'location') and may suffice to meet your requirement.
If the answer is 'yes,' I do not see a way around creating an index that preserves a pointer to the original database and the key that was created in its table T. This approach would seem to require an application modification.
The best approach in this case is probably to split the incoming data into two tables: one to identify the database and original key, another to identify the distinct data values. For example: (database) table D has records such as 'A:1:a,' 'A:2:b,' 'B:1:c,' 'B:2:d,' 'B:15:a,' 'C:8:a'; (data) table T1 has records such as 'a:apple,' 'b:banana,' 'c:coconut,' 'd:cheeseburger,' where 'A' describes the original database 'location,' 1 is the original key value in location 'A,' and 'a' is a value that equates records in table D and table T1. (Otherwise you have a lot of redundant data in the one table; e.g. A:1:apple, B:15:apple, C:8:apple.) Also, T1 has a structure similar to the original T and seems to be more directly useful in the application.
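The simpler re-keying idea (let the target assign new keys on insert and keep an old-key -> new-key map for rewriting foreign keys) can be sketched with Python's sqlite3; the fruit/fruit_order schema here is a made-up stand-in for the per-user Access tables:

```python
import sqlite3

# Target database that the per-user Access databases get merged into.
target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE fruit (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
target.execute("CREATE TABLE fruit_order (id INTEGER PRIMARY KEY AUTOINCREMENT,"
               " fruit_id INTEGER REFERENCES fruit(id))")

def merge_user_db(fruits, orders):
    """Insert one user's rows, letting the target assign fresh primary
    keys, and remap every foreign key through an old->new key map."""
    key_map = {}
    for old_id, name in fruits:
        cur = target.execute("INSERT INTO fruit (name) VALUES (?)", (name,))
        key_map[old_id] = cur.lastrowid           # remember the new key
    for _old_order_id, old_fruit_id in orders:
        target.execute("INSERT INTO fruit_order (fruit_id) VALUES (?)",
                       (key_map[old_fruit_id],))  # remapped reference

# Both users reused keys 1 and 2 in their own databases; each has one
# order that referenced fruit 2 locally.
merge_user_db([(1, "Apple"), (2, "Banana")], [(1, 2)])
merge_user_db([(1, "Coconut"), (2, "Cheeseburger")], [(1, 2)])
rows = target.execute("SELECT f.name FROM fruit_order o"
                      " JOIN fruit f ON f.id = o.fruit_id ORDER BY o.id").fetchall()
print(rows)  # [('Banana',), ('Cheeseburger',)] - no collision on key 2
```

The same pattern extends to more tables: insert parent rows first, record each old->new key pair, then rewrite child rows through the map before inserting them.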
Ended up creating an SSIS project for this. SSIS is a visual programming tool made by Microsoft (part of their "Business Intelligence Development Studio", which comes with SQL Server) designed for solving exactly these sorts of problems.
Why not let Access use its replication manager to merge the databases? This will allow you to identify the conflicts and resolve them before importing to SQL Server. I'm fairly confident it will retain the foreign key relationships. If I understand your situation correctly, and the databases are the same structure with different data, you could load the combined database to the application and verify the data before moving to SQL Server.
What version of Access are you using? Here's a link for Access 2000. Use the language to adjust search parameters to fit your version.
http://technet.microsoft.com/en-us/library/cc751054.aspx