What is the use of Maps in AX 2012?

With a map, we can associate a map field with a field in one or more tables. This enables us to use the same field name to access fields that have different names but the same data type in different tables. That is the definition as I understood it, but I could not work out the actual usage of maps. What is the practical application of maps in AX 2012?

Maps are to tables what interfaces are to classes.
The purpose is to implement a behavior once and apply it to different tables. The classic example before AX 2012 was addresses: CustTable, VendTable, etc. share the same behavior (validation of the ZIP code, formatting of the Addressing field, and so on), so you write it once against the map instead of repeating it on each table.

Related

How do I create a new customer segment in Websphere Commerce using only DB2 database queries?

How can I add a new customer segment using only the database? I know how to create customer segments in CMC, but I'm looking to automate the process of adding, say, hundreds of user segments by writing a script to do it for me. However, I can't find any information on how to create a new customer segment using only DB2 database queries.
Is there a way to create a new customer segment using nothing but DB2 database queries?
I would not recommend that you use SQL directly to create customer segments, as this makes generation of primary keys and updates of the KEYS table your responsibility. And once you take that on, Murphy's Law states that you'll get something wrong.
Your question asks how to create "hundreds of user segments". However, I'm not sure if that's what you meant, or if you meant that you had hundreds of users to add to existing segments.
If you're talking about loading hundreds of users, then I'd refer you to this article in the Knowledge Center that explains how you can use the MemberGroupMemberMediator to load segments from e-mail addresses.
If you truly mean to create segments by data load, I'd refer you to this Knowledge Center article that shows how to create member groups. A customer segment is a member group with a particular usage type.
For reference, these are the tables involved:
MBRGRP: The base group (segment) definition
MBRGRPCOND: This is used to define the condition if it is a rule-based segment (e.g. "all shoppers over the age of 25")
MBRGRPDESC: The NLS description (name, etc) of the segment
MBRGRPMBR: For manually defined segments, this defines the members (relationship to MEMBER table)
MBRGRPTYPE: The type of the member group (e.g. "CustomerGroup")
MBRGRPUSG: The usage code for the member group (e.g. "GeneralPurpose")
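Before touching anything, it can help to see how those tables relate with a read-only query. This is only a sketch: the column names (MBRGRP_ID, MBRGRPNAME, LANGUAGE_ID, DESCRIPTION, MEMBER_ID) are assumptions based on the usual WebSphere Commerce naming conventions and should be verified against your schema version:
-- Read-only sketch: list each segment with its description and any explicitly
-- assigned members. Verify column names against your schema first.
SELECT g.MBRGRP_ID,
       g.MBRGRPNAME,
       d.DESCRIPTION,
       m.MEMBER_ID
FROM MBRGRP g
LEFT JOIN MBRGRPDESC d ON d.MBRGRP_ID = g.MBRGRP_ID AND d.LANGUAGE_ID = -1
LEFT JOIN MBRGRPMBR m ON m.MBRGRP_ID = g.MBRGRP_ID
ORDER BY g.MBRGRPNAME;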
What version fixpack / fep are you working with? Have you read http://www-01.ibm.com/support/knowledgecenter/SSZLC2_7.0.0/com.ibm.commerce.management-center.doc/tasks/tsbctsegbatch.htm?lang=en
Technically, changing the DB2 database directly is not officially supported. There are things like stagingprop that depend on certain actions happening in certain ways. For example, primary keys of any rows in any table that is a part of stagingprop cannot be updated. CMC will instead do this for you as a delete and an insert when you make the change through CMC.
That said, I've seen unsupported methods like this used to update/change/create data in WebSphere Commerce databases. I don't have the information specific to how to do this for customer segments. I just caution you that it is dangerous when changing the DB2 database directly, so be sure you have backups and evaluate the impact on other processes like stagingprop or dbclean very carefully.

When Designing a BI Star Schema, Should Dimension Tables Use Only User-Friendly Attribute Values?

I am designing the Dimension tables for my BI Star schema. I have already observed the value of user-friendly attribute values associated with each Dimension value, as these can be used quite readily and effectively in reporting.
I would like to know: is there ever any benefit to including/exposing the encoded values of the source system (not including the source system's unique key, of course)?
For example, if I have an attribute called Color, whose native code values in the source system are: x2, x7, x9 for Red, Blue, Green respectively - is there any value in maintaining 2 columns in the Dimension table: one for the source system code value (e.g. x2) and one for the user-friendly value (e.g. Red)?
Is it common in BI reporting (we're currently using Cognos atop our star schema) to join back to the source system to get other attributes?
Should these "other" attributes always be surfaced to the BI schema, thus never joining back to the source system?
I find it worthwhile to expose the codes in the final (presentation) layer... inevitably, there is a group of users who go by the codes rather than the descriptions (say, those in data entry, or the "export data to Excel and merge with some other data source" types). Additionally, it helps with debugging and traceability. You can organize the codes in their own folder or query subject (QS), keeping them separate from the business names as well. Thanks and good luck.
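To make that concrete, here is a small sketch of a dimension carrying both values side by side (table and column names are purely illustrative, not taken from any particular source system):
-- Illustrative sketch: keep both the source-system code and the friendly value,
-- so report authors can use whichever they know.
CREATE TABLE DimProduct (
    ProductKey      INT         NOT NULL PRIMARY KEY, -- surrogate key
    SourceProductId VARCHAR(20) NOT NULL,             -- source system natural key
    ColorCode       VARCHAR(10) NULL,                 -- e.g. 'x2'
    ColorName       VARCHAR(50) NULL                  -- e.g. 'Red'
);
Business reports group on ColorName, while the data-entry and reconciliation crowd can filter on ColorCode, and neither needs to join back to the source system.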

SQL Server Dynamic Columns Problem

I use a table GadgetData to store the properties of gadgets in my application. These gadgets are basically a sort of custom control; 80% of their properties, like height, width, color, type etc., are common. There is also a set of properties per gadget type that is unique to it. All of this data has to be stored in the database. Currently I am storing only the common properties. What design approach should I use to store this kind of data, where the columns are dynamic?
Create a table with the common properties as columns and add an extra column of type text to store all the unique properties of each gadget type in XML format.
Create a table with all possible columns in all of the gadget types.
Create a separate table for each type of gadget.
Any other better way you recommend?
(Note: The number of gadget types could grow even beyond 100.)
Option 3 is a very normalized option, but will come back and bite you if you have to query across multiple types - every SELECT will have another join if a new type is added. A maintenance nightmare.
Option 2 (sparse table) will have a lot of NULL values and take up extra space. The table definition will also need updating if another type is added in the future. Not so bad but still painful.
I use Option 1 in production (using an xml type instead of text). It allows me to serialize any type derived from my common type, extracting the common properties and leaving the unique ones in the XmlProperties column. This can be done in the application or in the database (e.g. a stored procedure).
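As a rough sketch of what that can look like in SQL Server (table and column names are invented for illustration, and /Properties/MaxZoom is just an example property path):
-- Common gadget properties as real columns; type-specific properties are
-- serialized into an xml column.
CREATE TABLE GadgetData (
    GadgetId      INT IDENTITY(1,1) PRIMARY KEY,
    GadgetType    VARCHAR(50) NOT NULL,
    Height        INT         NULL,
    Width         INT         NULL,
    Color         VARCHAR(30) NULL,
    XmlProperties XML         NULL   -- the per-type, "dynamic" properties
);

-- Pulling a type-specific property back out with XQuery.
SELECT GadgetId,
       XmlProperties.value('(/Properties/MaxZoom)[1]', 'INT') AS MaxZoom
FROM GadgetData
WHERE GadgetType = 'Camera';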
Your options:
Option 1: A good one. You could even enforce a schema on the XML.
Option 2: You cannot make those columns NOT NULL, so you will lose some data integrity there.
Option 3: As long as you do not allow searching across more than one type of gadget, it is good enough, but option 1 is better.
Option 4: I would use an ORM.
Notes:
If you would like to keep your database 'relational', but are not afraid to use ORM tools, then I would use one. In that case you can store the data (almost) any way you want, and have it properly handled as long as you map it correctly.
See:
Single Table Inheritance
Concrete Table Inheritance
If you need an SQL-only solution, then depending on your RDBMS I would probably use an XML column to store all the data that is specific to the gadget type: you get validation and can extend easily with new attributes. You can then keep everything in one table, search quickly on all common attributes, and also fairly easily search within one gadget type's attributes as well.
If all types of gadgets have many common mandatory properties that can be stored in one table and just several optional properties, you'd be better off using the first approach: that way you get the best of the relational schema and ease your life with XML. And don't forget to link the XML column to an XML Schema collection: you'll have full indexing and XQuery capabilities.
If the gadget types have very different descriptions and only 1-3 common columns among 5 or more different sets of properties, use the 3rd approach.
But for a situation with 100+ types of gadgets I'd use the 1st approach: it offers flexibility with good performance, and is easy to support and develop further.
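For the XML Schema collection idea mentioned above, a minimal sketch (the schema content and names are invented; in practice you would include one schema per gadget type, or a combined one):
-- Bind the xml column to a schema collection so the type-specific properties
-- are validated and can be indexed. Schema content is illustrative only.
CREATE XML SCHEMA COLLECTION GadgetPropertiesSchema AS
N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Properties">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="MaxZoom" type="xs:int" minOccurs="0"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>';

ALTER TABLE GadgetData
    ALTER COLUMN XmlProperties XML (GadgetPropertiesSchema);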
Depending on how different the "gadgets" are, I wouldn't like option 2: there would be a lot of NULLs floating around, which could get bad if you had a column that was mandatory for one gadget but not even used by another.
I would only go with option 3 if the number of gadget types changes infrequently, since it would require altering the database each time.
The unmentioned option is to store the gadgets with a child table that holds each gadget's unique values. But this would require a fair amount of work to return a gadget's details, or multiple database calls.
That leaves option 1, except I would use SQL Server's xml type instead of text; you can then use XQuery within your stored procedures.

Sorting and i18n in Database

I have the following data in my db:
ID   Name    Region
100  Sam     North
101  Dam     South
102  Wesson  East
...
Now, the region will be different in different languages. I want the sorting to be correct, based on the display value rather than the internal value.
Any Ideas? (And yeah, sorting in memory using Java is not an option!)
Unfortunately, if you want to do that in the database, you will have to store the internationalized versions as well in the database (or at least their order). Otherwise, how could the database engine possibly know how to sort?
You will have to create a table with three columns: the English version, the language code and the translated version (with the English version and the language code together being the primary key). Then join to this table in your query using the English word and a language code, and sort on the internationalized version.
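A minimal sketch of that layout, assuming the original table is called Person and using invented names for the translation table:
-- Translation table keyed by the internal (English) value plus a language code.
CREATE TABLE RegionTranslation (
    Region       VARCHAR(20) NOT NULL,  -- internal value, e.g. 'North'
    LanguageCode CHAR(2)     NOT NULL,  -- e.g. 'de', 'fr'
    DisplayValue VARCHAR(50) NOT NULL,  -- localized value to display and sort by
    PRIMARY KEY (Region, LanguageCode)
);

-- Sort by the localized region name for the requested language.
SELECT p.ID, p.Name, t.DisplayValue AS Region
FROM Person p
JOIN RegionTranslation t
  ON t.Region = p.Region
 AND t.LanguageCode = 'de'
ORDER BY t.DisplayValue;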
The best approach to internationalization is to remove the localized strings from your application and keep them in separate i18n databases (normally XML or YAML files). In your application you keep a key that is used to look the strings up in those databases.
As a rule of thumb I would suggest:
keep all database content in one format, in one place
extract the internationalization strings from your application
let your application pull the i18n strings from your internationalization database
You can check the Rails approach to i18n. It's simple, clean and easy to use.
What do you mean by "display value"? I take it you are somehow converting that into a different language in Java itself, and don't store that value (the localised one, I assume) in the db? If so, you're kind of screwed. You need to get that data into the DB to be able to sort with it there.

Best practices for consistent and comprehensive address storage in a database [closed]

Are there any best practices (or even standards) to store addresses in a consistent and comprehensive way in a database?
To be more specific, I believe at this stage that there are two cases for address storage:
you just need to associate an address with a person, a building or any other item (the most common case). Then a flat table with text columns (address1, address2, zip, city) is probably enough. This is not the case I'm interested in.
you want to run statistics on your addresses: how many items in a specific street, or city, or... Then you want to avoid misspellings of any sort, and ensure consistency. My question is about best practices in this specific case: what are the best ways to model a consistent address database?
A country specific design/solution would be an excellent start.
ANSWER: There does not seem to be a perfect answer to this question yet, but:
xAL, as suggested by Hank, is the closest thing to a global standard that popped up. It seems to be quite an overkill though, and I am not sure many people would want to implement it in their database...
To start one's own design (for a specific country), Dave's link to the Universal Postal Union (UPU) site is a very good starting point.
As for France, there is a norm (not official, but a de facto standard) for addresses, which bears the lovely name of AFNOR XP Z10-011 (French only), and has to be paid for. The UPU description for France is based on this norm.
I happened to find the equivalent norm for Sweden: SS 613401.
At European level, some effort has been made, resulting in the norm EN 14142-1. It is obtainable via CEN national members.
I've been thinking about this myself as well. Here are my loose thoughts so far, and I'm wondering what other people think.
xAL (and its sister that includes personal names, xNAL) is used by both Google's and Yahoo's geocoding services, giving it some weight. But since the same address can be described in xAL in many different ways (some more specific than others), I don't see how xAL itself is an acceptable format for data storage. Some of its field names could be used, however, but in reality the only basic format that can be used among the 16 countries that my company ships to is the following:
enum address-fields
{
name,
company-name,
street-lines[], // up to 4 free-type street lines
county/sublocality,
city/town/district,
state/province/region/territory,
postal-code,
country
}
That's easy enough to map into a single database table, just allowing for NULLs on most of the columns. And it seems that this is how Amazon and a lot of organizations actually store address data. So the question that remains is how should I model this in an object model that is easily used by programmers and by any GUI code. Do we have a base Address type with subclasses for each type of address, such as AmericanAddress, CanadianAddress, GermanAddress, and so forth? Each of these address types would know how to format themselves and optionally would know a little bit about the validation of the fields.
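Going back to the single-table mapping for a moment, a rough sketch of what I mean (column names simply mirror the enumeration above; the sizes are arbitrary):
-- One flat address table covering the common denominator; most columns are
-- nullable because not every country uses every field.
CREATE TABLE Address (
    AddressId   INT          NOT NULL PRIMARY KEY,
    Name        VARCHAR(100) NULL,
    CompanyName VARCHAR(100) NULL,
    StreetLine1 VARCHAR(100) NULL,
    StreetLine2 VARCHAR(100) NULL,
    StreetLine3 VARCHAR(100) NULL,
    StreetLine4 VARCHAR(100) NULL,
    Sublocality VARCHAR(100) NULL,      -- county / sublocality
    City        VARCHAR(100) NULL,      -- city / town / district
    Region      VARCHAR(100) NULL,      -- state / province / region / territory
    PostalCode  VARCHAR(20)  NULL,
    CountryCode CHAR(2)      NOT NULL   -- e.g. ISO 3166-1 alpha-2
);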
Each of these address types could also return some type of metadata about each of the fields, such as the following pseudocode data structure:
structure address-field-metadata
{
field-number, // corresponds to the enumeration above
field-index, // the order in which the field is usually displayed
field-name, // a "localized" name; US == "State", CA == "Province", etc
is-applicable, // whether or not the field is even looked at / valid
is-required, // whether or not the field is required
validation-regex, // an optional regex to apply against the field
allowed-values[] // an optional array of specific values the field can be set to
}
In fact, instead of having individual address objects for each country, we could take the slightly less object-oriented approach of having an Address object that eschews .NET properties and uses an AddressStrategy to determine formatting and validation rules:
object address
{
set-field(field-number, field-value),
address-strategy
}
object address-strategy
{
validate-field(field-number, field-value),
cleanse-address(address),
format-address(address, formatting-options)
}
When setting a field, that Address object would invoke the appropriate method on its internal AddressStrategy object.
The reason for using a SetField() method approach rather than properties with getters and setters is that it is easier for code to actually set these fields in a generic way without resorting to reflection or switch statements.
You can imagine the process going something like this:
GUI code calls a factory method or some such to create an address based on a country. (The country dropdown, then, is the first thing that the customer selects, or has a good guess pre-selected for them based on culture info or IP address.)
GUI calls address.GetMetadata() or a similar method and receives a list of the AddressFieldMetadata structures as described above. It can use this metadata to determine what fields to display (ignoring those with is-applicable set to false), what to label those fields (using the field-name member), display those fields in a particular order, and perform cursory, presentation-level validation on that data (using the is-required, validation-regex, and allowed-values members).
GUI calls the address.SetField() method using the field-number (which corresponds to the enumeration above) and its given values. The Address object or its strategy can then perform some advanced address validation on those fields, invoke address cleaners, etc.
There could be slight variations on the above if we want to make the Address object itself behave like an immutable object once it is created. (Which I will probably try to do, since the Address object is really more like a data structure, and probably will never have any true behavior associated with itself.)
Does any of this make sense? Am I straying too far off of the OOP path? To me, this represents a pretty sensible compromise between being so abstract that implementation is nigh-impossible (xAL) versus being strictly US-biased.
Update 2 years later: I eventually ended up with a system similar to this and wrote about it at my defunct blog.
I feel like this solution is the right balance between legacy data and relational data storage, at least for the e-commerce world.
I'd use an Address table, as you've suggested, and I'd base it on the data tracked by xAL.
In the UK there is a product called PAF from Royal Mail
This gives you a unique key per address - there are hoops to jump through, though.
I basically see 2 choices if you want consistency:
Data cleansing
Basic data table look ups
Ad 1. I work with the SAS System, and SAS Institute offers a tool for data cleansing - this basically performs some checks and validations on your data, and suggests that "Abram Lincoln Road" and "Abraham Lincoln Road" be merged into the same street. I also think it draws on national databases containing city-postal code matches and so on.
Ad 2. You build up a multiple-choice list (i.e. basic data), and people adding new entries pick from existing entries in your basic data. In your fact table, you store keys to street names instead of the street names themselves. If you detect a spelling error, you just correct it in your basic data, and all instances are corrected with it, through the key relation.
Note that these options don't rule out each other, you can use both approaches at the same time.
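A bare-bones sketch of the second approach (all names are invented):
-- Basic data table for street names; the fact table stores only the key, so a
-- spelling correction in Street propagates to every row that references it.
CREATE TABLE Street (
    StreetId   INT          NOT NULL PRIMARY KEY,
    StreetName VARCHAR(100) NOT NULL UNIQUE
);

CREATE TABLE AddressFact (
    AddressId   INT         NOT NULL PRIMARY KEY,
    StreetId    INT         NOT NULL REFERENCES Street (StreetId),
    HouseNumber VARCHAR(10) NULL
);

-- Fixing the spelling once fixes it everywhere.
UPDATE Street
SET StreetName = 'Abraham Lincoln Road'
WHERE StreetName = 'Abram Lincoln Road';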
In the US, I'd suggest choosing a National Change of Address vendor and model the DB after what they return.
The authorities on how addresses are constructed are generally the postal services, so for a start I would examine the data elements used by the postal services for the major markets you operate in.
See the website of the Universal Postal Union for very specific and detailed information on international postal address formats: http://www.upu.int/post_code/en/postal_addressing_systems_member_countries.shtml
"xAl is the closest thing to a global standard that popped up. It seems to be quite an overkill though, and I am not sure many people would want to implement it in their database..."
This is not a relevant argument. Implementing addresses is not a trivial task if the system needs to be "comprehensive and consistent" (i.e. worldwide). Implementing such a standard is indeed time consuming, but to meet the specified requirement nevertheless mandatory.
Normalize your database schema and you'll have the perfect structure for correct consistency. And this is why:
http://weblogs.sqlteam.com/mladenp/archive/2008/09/17/Normalization-for-databases-is-like-Dependency-Injection-for-code.aspx
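A simplified sketch of what a normalized address structure might look like (names and sizes are illustrative only):
-- Each distinct country, city and street exists exactly once; addresses hold
-- only foreign keys, so misspellings cannot multiply.
CREATE TABLE Country (
    CountryCode CHAR(2)      NOT NULL PRIMARY KEY,
    CountryName VARCHAR(100) NOT NULL
);

CREATE TABLE City (
    CityId      INT          NOT NULL PRIMARY KEY,
    CountryCode CHAR(2)      NOT NULL REFERENCES Country (CountryCode),
    CityName    VARCHAR(100) NOT NULL,
    UNIQUE (CountryCode, CityName)
);

CREATE TABLE Street (
    StreetId   INT          NOT NULL PRIMARY KEY,
    CityId     INT          NOT NULL REFERENCES City (CityId),
    StreetName VARCHAR(100) NOT NULL,
    UNIQUE (CityId, StreetName)
);

CREATE TABLE Address (
    AddressId   INT         NOT NULL PRIMARY KEY,
    StreetId    INT         NOT NULL REFERENCES Street (StreetId),
    HouseNumber VARCHAR(10) NULL,
    PostalCode  VARCHAR(20) NULL
);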
I asked something quite similar earlier: Dynamic contact information data/design pattern: Is this in any way feasible?
The short answer: storing addresses or any kind of contact information in a database is complex. The eXtensible Address Language (xAL) link above has some interesting information that is the closest to a standard/best practice that I've come across...
