Temenos T24 database structure

I was working with R09 of Temenos T24, which had Oracle as the backend.
The table structure was 2 columns: RECID + data in a BLOB (XML format).
Has anyone got an idea whether the structure has been changed to a proper RDBMS structure in the newer T24 versions such as R17 or R18?
Thank you for any help in advance!

Temenos T24 core was built around the so-called "MultiValue database" UniVerse and then moved to jBASE around 2003. See https://en.wikipedia.org/wiki/MultiValue for an explanation of what a MultiValue database is.
Later, to add support for Oracle and other industry-standard "big" databases, Temenos developed a special DB driver for their system that was designed to imitate the MultiValue database functionality inside the RDBMS. The solution was to use XML to store the multi-dimensional fields. And so all T24 tables in Oracle have two columns:
RECID for the ID or Unique Key of the Record
XMLRECORD to store the data.
The XMLRECORD column is created as XMLTYPE by default, but it can also be of BLOB or CLOB type. In that case the data is stored as it used to be stored inside the old MultiValue database, i.e. as a string in which fields are separated by field markers, value markers and sub-value markers.
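For illustration, a hedged sketch of pulling one field out of an XMLTYPE XMLRECORD in Oracle. The table name FBNK_ACCOUNT and the /row/c1 element layout are assumptions based on the commonly seen T24 convention, not guaranteed for every release:

SELECT RECID,
       EXTRACTVALUE(XMLRECORD, '/row/c1') AS first_field  -- c1 = first T24 field (assumed layout)
FROM   FBNK_ACCOUNT                                       -- hypothetical T24 table name
WHERE  RECID = '12345';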
This basically means that T24 will never move to a proper RDBMS structure, as that would mean completely rewriting the whole T24 solution, or at least a significant part of it. Since T24 has been in development for 30 or more years now, you can imagine what it would take to perform such a task.

Working with R15 - still RECID + Blob.
I'm quite sure that R18 is the same, as we're currently upgrading to R18 and no DB schema change is on the road map.

You can select from a table view directly in the DB, e.g. SELECT * FROM V_FXXX_ACCOUNT. From that relational view you can select the fields you need.

Temenos do have a product called Relational Replication aimed at providing selected tables from T24 in a relational format. All the multi-value / group multi-value elements become child tables, and sub-value elements go into further child tables with foreign keys, so they are easier to index and query. They also have a data model viewer for T24 in Design Studio which gives you an idea of how these tables will be structured.
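To make that concrete, a rough sketch of how one multi-value field might land after relational replication. All table and column names here are hypothetical, not actual Temenos output:

CREATE TABLE ACCOUNT (
    RECID       VARCHAR2(64) PRIMARY KEY,
    SHORT_TITLE VARCHAR2(35)
);

-- one row per multi-value position, linked back to the parent record
CREATE TABLE ACCOUNT_JOINT_HOLDER (
    RECID        VARCHAR2(64) NOT NULL REFERENCES ACCOUNT(RECID),
    MV_POS       NUMBER       NOT NULL,  -- multi-value position in the original record
    JOINT_HOLDER VARCHAR2(64),
    PRIMARY KEY (RECID, MV_POS)
);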

Related

GUI to create and fill database with data and export it to sqlite

So, I have a relatively common task, and hope to get some suggestions here.
The idea is that I have a small database in mind; the database will have at least 2 types of tables:
dictionary-table - it will have just an id and a few columns of text
aggregation-table - it should combine different dictionary entries into some aggregation, so it will basically be mapping the ids of different dictionary entries together.
So, what I hoped to find is some software that will help me fill the database easily. I will add data to the dictionary-tables and say that 'this particular column of my aggregation table can only have values from this dictionary-table', so I would type words and it would just insert the ids from the dictionary-table instead. You know, like relationships in a database.
Except that in the end I want it to be a plain SQLite database, and SQLite doesn't support relationships.
So what I want is some cool high-level GUI tool that will simplify the way I input data into the database and will help me maintain the data when the DB grows in the future, but that can also export to a plain SQLite database.
I tried: SQLiteBrowser, SqliteAdmin, LibreOffice Base + SQLite ODBC. None of them supports what I want.
Anything else worth checking out?
How about PhpLiteAdmin? - https://code.google.com/p/phpliteadmin/
It allows you to directly add/modify the structure and data of an SQLite database, but also allows import and export of tables, structure, indexes, and data (SQL, CSV). If you're dealing with thousands of entries then this may be the important feature for whatever tool you use.
There's no installation and it's open-source
you said
...be a plain sqlite database, and sqlite doesn't support relationships.
But SQLite does support relationships (foreign keys). Enforcement is disabled by default; you can enable it with:
sqlite> PRAGMA foreign_keys = ON;
Now you can implement your requirement with proper foreign keys.
you said
I will add data to dictionary-tables,
Instead of multiple dictionary tables, just have one dictionary table and add one more column to it, such as dictionary_name.
Your aggregation table can then simply have foreign keys referring to the dictionary table.
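A minimal SQLite sketch of that layout (table and column names are illustrative):

PRAGMA foreign_keys = ON;

CREATE TABLE dictionary (
    id              INTEGER PRIMARY KEY,
    dictionary_name TEXT NOT NULL,  -- which logical dictionary this entry belongs to
    word            TEXT NOT NULL
);

-- each column may only hold ids that exist in the dictionary table
CREATE TABLE aggregation (
    id      INTEGER PRIMARY KEY,
    entry_a INTEGER NOT NULL REFERENCES dictionary(id),
    entry_b INTEGER NOT NULL REFERENCES dictionary(id)
);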

database design - best practice: one table for web form drop-down options or a separate table for each drop-down's options

I'm looking at the best practice approach here. I have a web page that has several drop-down options. The drop-downs are not related; they are for misc. values (location, building codes, etc.). The database right now has a table for each set of options (e.g. a table for building codes, a table for locations, etc.). I'm wondering if I could just combine them all into one table (called ListOptions) and then just query that one table.
Location Table
LocationID (int)
LocatValue (nvarchar(25))
LocatDescription (nvarchar(25))
BuildingCode Table
BCID (int)
BCValue (nvarchar(25))
BCDescription (nvarchar(25))
Instead of the above, is there any reason why I can't do this?
ListOptions Table
ID (int)
listValue (nvarchar(25))
listDescription (nvarchar(25))
groupID (int) //where groupID corresponds to Location, Building Code, etc.
Now, when I query the table, I can pass the groupID to the query to pull back the values I need.
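For clarity, a minimal sketch of the query described above (the group ids are assumed values):

SELECT ID, listValue, listDescription
FROM   ListOptions
WHERE  groupID = 1;  -- e.g. 1 = Location, 2 = Building Code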
Putting them in one table is an antipattern. These are different lookups, and you cannot enforce referential integrity in the database (which is the correct place to enforce it, as applications are often not the only way data gets changed) unless they are in separate tables. Data integrity is FAR more important than saving a few minutes of development time when you need an additional lookup.
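A hedged sketch of the integrity point (the referencing tables are hypothetical): a plain foreign key to the combined table cannot restrict a reference to one group, whereas with separate lookup tables the constraint is trivial:

-- With one combined table, nothing stops a "location" reference
-- from pointing at a building-code row:
CREATE TABLE Rooms (
    RoomID     int PRIMARY KEY,
    LocationID int NOT NULL REFERENCES ListOptions(ID)  -- any group accepted
);

-- With separate lookup tables the FK enforces the right domain:
CREATE TABLE Rooms2 (
    RoomID     int PRIMARY KEY,
    LocationID int NOT NULL REFERENCES Location(LocationID)
);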
If you plan to use the values later in some referencing foreign keys, better use separate tables.
But why do you need an "all in one" table? What problem does it solve?
You could do this.
I believe this is your master data, and it would not have such a huge number of rows that it might create performance problems.
Secondly, why would you want to do this once your app is up and running? It should have been thought about earlier. The tables might be used in a lot of places, and it might mean a lot of coding and, most importantly, testing.
Can you shed further light on your requirements?
You can keep them in separate tables and have your stored procedure return one set of data with a "datatype" key that signifies which set of values go with what option.
However, I would urge you to consider a much different approach. This suggestion is based on years of building data-driven websites. If these drop-down options don't change very often, then why not build server-side include files instead of querying the database? We did this with most of our websites. Think about it: each time the page is presented you query the database for the same list of values... and that data hardly ever changes.
In cases where that data did have a tendency to change, we simply added a routine to the back-end admin that rebuilt the server-side include file whenever an add, change or delete was done to one of the lookup values. This reduced database I/Os and sped up the load time of all our websites.
We had approximately 600 websites on the same server, all using the same instance of SQL Server (separate databases), and our total server database I/Os were drastically reduced.
Edit:
We simply built SSI that looked like this...
<option value="1">Blue</option>
<option value="2">Red</option>
<option value="3">Green</option>
With a single table it would be easy to add new groups instead of creating new tables, but as a best practice you should also have a group table so you can name those groups in the DB for future maintenance.
The best practice depends on your requirements.
Do the values of location and building vary frequently? Where do the values come from? Are they imported from external data? Do other tables refer to the single table (so that you would need a two-field key to properly join the tables)?
For example, I use a single table with heterogeneous data for constants or configuration values.
But if the data vary often or are imported from an external source, I prefer to use separate tables.

DB Design Pattern - Many to many classification / categorised tagging

I have an existing database design that stores Job Vacancies.
The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", "Salary range".
There is an EAV design for "Custom" fields that the Clients can set up themselves, such as "Manager Name" or "Working Hours". The field names are stored in a "ClientText" table and the data in a "VacancyClientText" table with VacancyId, ClientTextId and Value.
Lastly, there is a many-to-many EAV design for custom tagging / categorising the vacancies with things such as the locations/offices the vacancy is in, or a list of skills required. This is stored as a "ClientCategory" table listing the types of tag ("Locations", "Skills"), a "ClientCategoryItem" table listing the valid values for each category (e.g. "London, Paris, New York, Rome" or "C#, VB, PHP, Python"), and finally a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy.
There are no limits to the number of custom fields or custom categories that the client can add.
I am now designing a new system that is very similar to the existing system; however, I have the ability to restrict the number of custom fields a Client can have, and it's being built from scratch so I have no legacy issues to deal with.
For the Custom Fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5. This removes one of the EAV designs.
It is the tagging / categorising design that I am struggling with. If I limit a client to having 5 categories / types of tag, should I create 5 tables listing the possible values ("CustomCategoryItems1-5") and then an additional 5 many-to-many tables ("VacancyCustomCategoryItem1-5")?
This would result in 10 tables performing the same storage as the three tables in the existing system.
Also, should (heaven forbid) the requirements change so that I need 6 custom categories rather than 5, this will result in a lot of code change.
Therefore, can anyone suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has come across all the usual performance issues and complex queries associated with such a design.
Any advice / suggestions are much appreciated.
The DBMS system used is SQL Server 2005, however, 2008 is an option if required for any particular pattern.
Have you thought about using an XML column? You can enforce all your constraints declaratively through an XSD schema.
Instead of EAV, have a single column with XML data validated by a schema (or a collection of schemas).
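A hedged sketch of what that could look like in SQL Server 2005; the schema collection and table names are hypothetical, and the XSD is deliberately minimal:

CREATE XML SCHEMA COLLECTION VacancyFieldsSchema AS
N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="fields">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="field" minOccurs="0" maxOccurs="unbounded">
            <xs:complexType>
              <xs:attribute name="name" type="xs:string" use="required"/>
              <xs:attribute name="value" type="xs:string"/>
            </xs:complexType>
          </xs:element>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:schema>';

CREATE TABLE Vacancy (
    VacancyId    int PRIMARY KEY,
    CustomFields xml(VacancyFieldsSchema)  -- content is validated on every insert/update
);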
Take a look at this question/answer; it describes the observation pattern.
It uses five tables and can be implemented in a "standard" RDBMS -- SQL Server 2005 will do.
No limit on number of custom properties (observations) that an entity can have.
EDIT
If tags (categories) are needed for properties, take a look at this one.
Why not store the custom fields in a key-value table?
| vacancy ID | CustomFieldType | CustomFieldValue |
Then have auxiliary tables listing the possible values per type (one table) and maybe the possible types per vacancy type (this seems to be the original ClientCategory).
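A hedged sketch of that key-value layout (all names are illustrative; the Vacancy table itself is not shown):

-- field types, e.g. 'Manager Name', 'Working Hours'
CREATE TABLE CustomFieldType (
    CustomFieldTypeId int PRIMARY KEY,
    Name              nvarchar(64) NOT NULL
);

-- one row per (vacancy, field) pair
CREATE TABLE VacancyCustomField (
    VacancyId         int NOT NULL,  -- FK to the Vacancy table
    CustomFieldTypeId int NOT NULL REFERENCES CustomFieldType(CustomFieldTypeId),
    CustomFieldValue  nvarchar(255) NOT NULL,
    PRIMARY KEY (VacancyId, CustomFieldTypeId)
);

-- auxiliary table: the permitted values per field type
CREATE TABLE CustomFieldAllowedValue (
    CustomFieldTypeId int NOT NULL REFERENCES CustomFieldType(CustomFieldTypeId),
    AllowedValue      nvarchar(255) NOT NULL,
    PRIMARY KEY (CustomFieldTypeId, AllowedValue)
);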

SQL Server: One Table with 400 Columns or 40 Tables with 10 Columns?

I am using SQL Server 2005 Express and Visual Studio 2008.
I have a database which has a table with 400 columns. Things were (just about) manageable until I had to perform bi-directional sync between several databases.
I am wondering what the arguments for and against a 400-column table versus a 40-table design are.
The table is not normalised and comprises mainly nvarchar(64) columns and some TEXT columns. (There are no other datatypes, as it was converted from text files.)
There is one other table that links to this table in a 1-1 relationship (i.e. one entry relates to one entry in the 400-column table).
The table is a list of files that contain parameters that are "plugged" into an application.
I look forward to your replies.
Thank you
Based on your process description I would start with something like this (see the sketch after the list). The model is simplified, does not capture history, etc. -- but it is a good starting point. Note: parameter = property.
- Setup is a collection of properties. One setup can have many properties, one property belongs to one setup only.
- Machine can have many setups, one setup belongs to one machine only.
- Property is of a specific type (temperature, run time, spindle speed), there can be many properties of a certain type.
- Measurement and trait are types of properties. Measurement is a numeric property, like speed. Trait is a descriptive property, like color or some text.
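A hedged SQL sketch of that model (the names follow the description above, but the exact columns and types are assumptions):

CREATE TABLE Machine (
    MachineId int PRIMARY KEY,
    Name      nvarchar(64) NOT NULL
);

-- one machine has many setups; a setup belongs to exactly one machine
CREATE TABLE Setup (
    SetupId   int PRIMARY KEY,
    MachineId int NOT NULL REFERENCES Machine(MachineId)
);

-- a setup is a collection of properties; each property has a type
CREATE TABLE Property (
    PropertyId   int PRIMARY KEY,
    SetupId      int NOT NULL REFERENCES Setup(SetupId),
    PropertyType nvarchar(64) NOT NULL  -- temperature, run time, spindle speed, ...
);

-- subtypes: a property is either a measurement (numeric) or a trait (descriptive)
CREATE TABLE Measurement (
    PropertyId   int PRIMARY KEY REFERENCES Property(PropertyId),
    NumericValue decimal(18, 4) NOT NULL
);

CREATE TABLE Trait (
    PropertyId int PRIMARY KEY REFERENCES Property(PropertyId),
    TextValue  nvarchar(256) NOT NULL
);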
For having a wide table:
Quick to report on as it's presumably denormalized and so no joins are needed.
Easy to understand for end-consumers as they don't need to hold a data model in their heads.
Against having a wide table:
Probably need to have multiple composite indexes to get good query performance
More difficult to maintain data consistency i.e. need to update multiple rows when data changes if that data is on multiple rows
As you're having to update multiple rows and maintain multiple indexes, concurrent performance for updates may become an issue as locks escalate.
You might end up with records with loads of NULLs in columns if the attribute isn't relevant to the entity on that row, which can make handling results awkward.
If lazy developers do a SELECT * from the table you end up dragging loads of data across the network, so you generally have to maintain suitable subset views.
So it all really depends on what you're doing. If the main purpose of the table is OLAP reporting and updates are infrequent and affect few rows then perhaps a wide, denormalized table is the right thing to have. In an OLTP environment then it's probably not and you should prefer narrower tables. (I generally design in 3NF and then denormalize for query performance as I go along.)
You could always take the approach of normalizing and providing a wide-view for readers if that's what they want to see.
Without knowing more about the situation it's not really possible to say more about the pros and cons in your particular circumstance.
Edit:
Given what you've said in your comments, have you considered just having a long & skinny name=value pair table, so you'd just have UserId, PropertyName, PropertyValue columns? You might want to add some other meta-attributes too: timestamp, version, or whatever. SQL Server is quite efficient at handling these sorts of tables, so don't discount a simple solution like this out of hand.
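A minimal sketch of that long & skinny table (column names from the suggestion above; the meta-attribute and types are assumptions):

CREATE TABLE UserProperty (
    UserId        int           NOT NULL,
    PropertyName  nvarchar(64)  NOT NULL,
    PropertyValue nvarchar(256) NULL,
    UpdatedAt     datetime      NOT NULL DEFAULT GETDATE(),  -- optional meta-attribute
    PRIMARY KEY (UserId, PropertyName)  -- one value per property per user
);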

Database design - do I need one or two database fields for this?

I am putting together a schema for a database. The goal of the database is to track applications in our department. I have a repeated problem that I am trying to solve.
For example, I have an "Applications" table. I want to keep track of whether an application uses a database or a bug tracking system, so right now I have fields in the Applications table called:
Table: Applications
UsesDatabase (bit)
Database_ID (int)
UsesBugTracking (bit)
BugTracking_ID (int)
Table: Databases:
id
name
Table: BugTracking:
id
name
Should I consolidate the "uses" column with the respective ID columns so there is only one bug tracking column and only one database column in the applications table?
Any best practice here for database design?
NOTE: I would like to run reports like "Percent of Applications that use bug tracking" (although I guess either approach could generate this data).
You could remove the "uses" fields and make the id columns nullable, and let a null value mean that it doesn't use the feature. This is a common way of representing a missing value.
Edit:
To answer your note, you can easily get those statistics like this:
select
    count(*) as TotalApplications,
    count(Database_ID) as UsesDatabase,       -- count(column) counts only non-NULL rows,
    count(BugTracking_ID) as UsesBugTracking  -- so these tally only apps that use the feature
from
    Applications
Why not get rid of the two Uses fields and simply let a NULL value in the _ID fields indicate that the record does not use that feature (bug tracking or database)?
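A minimal sketch of that nullable-FK approach (column names from the question; the Databases and BugTracking tables are as defined above):

CREATE TABLE Applications (
    id             int PRIMARY KEY,
    name           nvarchar(100) NOT NULL,
    Database_ID    int NULL REFERENCES Databases(id),    -- NULL = uses no database
    BugTracking_ID int NULL REFERENCES BugTracking(id)   -- NULL = uses no bug tracker
);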
Either solution works. However, if you think you may occasionally want to just get a list of applications which do / do not have databases / bug tracking, consider that having the flag fields reduces the query by one (or two) joins.
Having the bit fields is slightly denormalized, as you have to keep two fields in sync to update one piece of data, but I tend to prefer them for cases like this for the reason given in the prior paragraph.
Another option would be to have the field nullable, and put null in it for those entries which do not have DBs / etc, but then you run into problems with foreign key constraints.
I don't think there is any one supreme right way; just consider the tradeoffs and go with what makes sense for your application.
I would use 3 tables for the objects: Application, Database, and BugTracking. Then I would use 2 join tables to do 1-to-many joins: ApplicationDatabases, and ApplicationBugTracking.
The 2 join tables would each have both an application_id and the id of the other table. If an application used a single database, it would have a single ApplicationDatabases record joining them together. Using this setup, an application could have zero databases (no records for this app in the ApplicationDatabases table) or many databases (multiple records for this app in the ApplicationDatabases table).
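A sketch of that join-table layout (names taken from the answer above; the base tables are assumed to exist):

CREATE TABLE ApplicationDatabases (
    application_id int NOT NULL REFERENCES Applications(id),
    database_id    int NOT NULL REFERENCES Databases(id),
    PRIMARY KEY (application_id, database_id)  -- each pairing recorded once
);

CREATE TABLE ApplicationBugTracking (
    application_id int NOT NULL REFERENCES Applications(id),
    bugtracking_id int NOT NULL REFERENCES BugTracking(id),
    PRIMARY KEY (application_id, bugtracking_id)
);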
"Should i consolidate the "uses" column"
If I look at your problem statement, then either there is no "uses" column at all, or there are two. In either case, it is wrong to speak of "THE" uses column.
May I politely suggest that you learn to be PRECISE when asking questions?
Yes, using NULL in the foreign key fields should be fine - it seems superfluous to have the bit fields.
Another way of doing it (though it might be considered evil by database people ^^) is to default them to 0 and add an ID 0 row in both the BugTracking and Databases tables with a name of "None"... When you do the reports, you'll have to do some more work, unless you present the "None" values as they are, with a neat percentage as well...
To answer the edited question:
Yes, the fields should be combined, with NULL meaning that the application doesn't have a database (or bug tracker).
