I have a database which stores events as follows:
event(id, name, type, startDate, endDate)
The type is another custom object (Type(id, name, icon)) which is linked as one-to-many.
I want to add another custom object named EventDetail containing details, but the fields would differ depending on the type of the event.
For instance, if my event is a travel, the fields could be a web link to the Wikipedia page of the destination and a photo, but if the event is an internship for work they would obviously be different.
What kind of structure could handle this type of situation?
If it matters, I'm using Java for Android, with an ORM library named SugarORM.
Final answer
I finally went with the ORM approach for managing my database. Using a one-to-one relation between Type and EventDetail and a one-to-many between Type and Event, it works really well for now.
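For reference, here is a minimal sketch of what that final design might look like with SugarORM; the field names and the find call below are my own assumptions, not the original schema (each class would live in its own file):

import com.orm.SugarRecord;
import java.util.Date;

// Each event points to exactly one Type (many events can share a type).
public class Event extends SugarRecord {
    String name;
    Date startDate;
    Date endDate;
    Type type;        // persisted by SugarORM as a foreign key to the Type table

    public Event() {} // SugarORM requires a no-argument constructor
}

// Each Type has exactly one EventDetail (one-to-one).
public class Type extends SugarRecord {
    String name;
    String icon;
    EventDetail detail;

    public Type() {}
}

// Type-specific payload; fields that don't apply to a given type simply stay null.
public class EventDetail extends SugarRecord {
    String wikipediaLink; // e.g. for "travel" events
    String photoUri;      // e.g. for "travel" events
    String companyName;   // e.g. for "internship" events

    public EventDetail() {}
}

Fetching all events of a given type would then be something along the lines of Event.find(Event.class, "type = ?", String.valueOf(type.getId())), though the exact column name depends on SugarORM's naming conventions.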
You can go a few different ways with this...
One would be to create a catch-all table for EventDetail that has every field you would need for every type, and use the type to determine which fields you read from in that table.
Another way, if you do not have a lot of different types, would be to create a table for each type, structured to the needs of that type. In your queries you would then need to join to each of those tables based on the type (probably using UNION [ALL], one query per type, or by creating views).
A third, and probably more ideal depending on the situation and use, would be to remove that level of complexity from the database. Store your "Event Details" as a CLOB or BLOB or XMLTYPE, etc... and use the type to deserialize the data to an object on the client side.
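As a rough illustration of that third option, assuming Gson for the serialization (any serializer would do) and made-up detail classes:

import com.google.gson.Gson;

// Type-specific detail classes; the database only ever sees the serialized blob.
class TravelDetail { String wikipediaLink; String photoUri; }
class InternshipDetail { String company; String supervisor; }

class EventDetailCodec {
    private static final Gson GSON = new Gson();

    // Serialize before writing the CLOB/BLOB column.
    static String toBlob(Object detail) {
        return GSON.toJson(detail);
    }

    // Use the event's type to pick the class to deserialize into on the client side.
    static Object fromBlob(String blob, String eventType) {
        switch (eventType) {
            case "travel":     return GSON.fromJson(blob, TravelDetail.class);
            case "internship": return GSON.fromJson(blob, InternshipDetail.class);
            default: throw new IllegalArgumentException("Unknown event type: " + eventType);
        }
    }
}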
Related
So I have a use case wherein I need to create a document in MongoDB. I want to create a document by the name Electrical Products. In the document I want fields as follows:
id,
productType,
applianceProperties,
brandsAvailable.. and so on
Now the problem is the productType can be anything from a television to an air-conditioner, but the properties of that appliance will be changing based on productType.
For example, a TV will be having properties like Screen Display, Size, Colour, Type.. whereas the AC will be having properties like Capacity, Installation Type etc. Is it possible to create a document such that it allows me to save all the different type of properties in the same field, only distinguishable by the productType?
I also need to get and put the data frequently, and it cannot be a SQL database, for various reasons. I was thinking of making applianceProperties something general like List<Object> and making different classes for different products, like TelevisionProperties.java and ACProperties.java, but I am facing issues when casting the List<TelevisionProperties> to List<Object>.
Is there some way this entire thing can be achieved?
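One hedged sketch of how that could look on the Java side (the class and field names are assumptions based on your description): give the type-specific property classes a common interface and declare the field against that interface, which also avoids the List<TelevisionProperties> to List<Object> cast problem.

import java.util.List;

// Common marker for every type-specific property class.
interface ApplianceProperties {}

class TelevisionProperties implements ApplianceProperties {
    String screenDisplay;
    String size;
    String colour;
    String type;
}

class ACProperties implements ApplianceProperties {
    String capacity;
    String installationType;
}

// One document shape for every product; productType says which concrete class applies.
class ElectricalProduct {
    String id;
    String productType;                                       // "TV", "AC", ...
    List<? extends ApplianceProperties> applianceProperties;  // accepts either list without casting
    List<String> brandsAvailable;
}

When reading a document back, productType tells you which concrete class to map applianceProperties onto, either via your object mapper's polymorphism support or a small manual switch.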
OK, I don't know whether this question belongs here, but please correct me if I'm wrong.
I have some entities which have almost the same attributes; the differences are in maybe 2-3 columns.
Because of those different columns, I can't create one table whose columns are the union of the attributes of every entity, because a new entity type would require changing the table design by adding new columns specific to that entity type.
Instead, the currently working design is that every specific entity has its own table.
But if a new type of entity comes along, I must create a new table, which is a totally bad idea.
How can I create one table which contains the shared attributes for each type of entity, plus some additional mechanism to record the entity-specific attributes?
So the idea is to easily add new types of objects without changing the database design, configuring only the part that deals with the unique columns.
P.S. Maybe I'm not being clear, but I will add more description if it is needed.
I had a design like that once. What I did was create a table that housed all the shared properties. Then I had separate tables for the distinct values, and used joins to match a specific entity to its shared table row. I had fewer than 10 entity types, so I just updated the views that used unions whenever I added a new entity. But if you used a naming convention, you could write stored procs that find the table names dynamically and do the unions and joins on the fly. In my case, I used a base class and specific classes to make a custom data layer.
Another possibility is to have a generic table that's basically name/value pairs and a table that represents your shared properties. By joining the tables together, you could have any number of entity-specific properties for your entities. It's not very efficient and the SQL would get weird, but I've seen it done.
One solution is to store the common parts in one table, and the specific parts in tables specific to that entity.
eg: To have a set of people, some of whom are managers...
Person Table
PersonID
PersonName
Manager Table
ManagerID
PersonID
DepartmentManaged
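To read a complete picture of a manager you join the two tables on PersonID; here is a rough JDBC sketch of that query (illustrative only, assuming a plain java.sql connection is available):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class PersonQueries {
    // Lists every person, adding the manager-specific column where it exists.
    static void listPeople(Connection conn) throws Exception {
        String sql = "SELECT p.PersonID, p.PersonName, m.DepartmentManaged "
                   + "FROM Person p LEFT JOIN Manager m ON m.PersonID = p.PersonID";
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                String name = rs.getString("PersonName");
                String dept = rs.getString("DepartmentManaged"); // null for non-managers
                System.out.println(name + (dept != null ? " manages " + dept : ""));
            }
        }
    }
}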
As soon as you go down the path of having one table with variable field meanings - effectively an Entity Attribute Value design - you find yourself in querying hell.
Perhaps not the best or most academic, but what about this kind of "open structure"?
MainTable: all common fields
SpecialProperties: extra properties, as required
- MainRecordId (P, F->MainTable)
- PropertyName (P)
- PropertyText
- PropertyValue (for numeric values)
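Reading the extra properties back then means collecting the SpecialProperties rows for a given main record into a map; a hedged JDBC sketch using the column names above (not part of the original answer):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashMap;
import java.util.Map;

class SpecialPropertyLoader {
    // Collects all SpecialProperties rows for one MainTable record into a name -> value map.
    static Map<String, Object> load(Connection conn, long mainRecordId) throws Exception {
        String sql = "SELECT PropertyName, PropertyText, PropertyValue "
                   + "FROM SpecialProperties WHERE MainRecordId = ?";
        Map<String, Object> props = new HashMap<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, mainRecordId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    Object numeric = rs.getObject("PropertyValue");
                    // Prefer the numeric column when populated, otherwise fall back to the text column.
                    props.put(rs.getString("PropertyName"),
                              numeric != null ? numeric : rs.getString("PropertyText"));
                }
            }
        }
        return props;
    }
}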
I've been thinking about creating a database that, instead of having a table per object I want to represent, would have a series of generic tables that would allow me to represent anything I want and even modify (that's actually my main interest) the data associated with any kind of object I represent.
As an example, let's say I'm creating a web application that would let people make appointments with hairdressers. What I would usually do is having the following tables in my database :
clients
hairdressers: FK: id of the company the hairdresser works for
companies
appointments: FK: id of the client and the hairdresser for that appointment
But what happens if we deal with scientific hairdressers that want to associate more data to an appointment (e.g. quantity of shampoo used, grams of hair cut, number of scissor's strokes,...) ?
I was thinking that, instead, I could use the following tables:
entity: represents anything I want. PK(entity_id)
group: is an entity (when I create a group, I first create an entity whose id is then referred to by the FK of the group). PK(group_id), FK(entity_id)
entity_group: each group can contain multiple entities (thus also other groups): PK(entity_id, group_id).
role: e.g. Administrator, Client, HairDresser, Company. PK(role_id)
entity_role: each entity can have multiple roles: PK(entity_id, role_id)
metadata: contains the name and type of the metadata as well as the associated role and a flag that says whether it is mandatory or not. PK(metadata_id), FK(metadata_type_id, role_id)
metadata_type: contains information about the available metadata types. PK(metadata_type_id)
metadata_value: PK(metadata_value_id), FK(metadata_id)
metadata_*: different tables for the different types, e.g. char, text, integer, double, datetime, date. PK(metadata_*_id), FK(metadata_value_id). These contain the actual value of a metadata entry associated with an entity.
entity_metadata: contains the data associated with an entity, e.g. the name of a client, the address of a company,... PK(entity_id, metadata_value_id). Using the type of the metadata, it's possible to select the actual value of a metadata entry for this entity in the corresponding table.
This would allow me to have a completely flexible data structure but has a few drawbacks:
Selecting the metadata associated with an entity returns multiple rows that I have to process in my code to build the representation of the entity.
Selecting the metadata of multiple entities requires looping over the same process as above.
Selecting metadata will also require me to do a select for each of the metadata_* tables that I have.
On the other hand, it has some advantages. For example, instead of having a client table with a lot of fields that will almost never be filled, I just use the exact number of rows that I need.
Is this a good idea at all?
I hope that I've expressed clearly what I'm trying to achieve. I guess that I'm not the first one who wonders how to achieve that but I was not able to find the right keywords to find an answer to that question :/
Thanks!
I have an architectural question concerning custom fields in a view for an object. Let's say you have a User Object with some basic information like firstname, lastname, ... that can be used by all customers.
Now, we often get a request from a customer to add a couple of custom fields typical for their domain. Our solution so far has been an xml data column where key/value pairs are stored. This has been OK so far, but now we'll have to find a more architectural solution.
For instance, now a customer wants a dropdown where they can select the value for their custom field. We could still store the selected value in the xml data column, but where do we store all those dropdown values...
I know that in SharePoint you can also add custom fields like dropdowns, and I was wondering how to deal with this best. I want to avoid creating custom tables for customers, or having a table with 90 columns (10 basic and then 10 for each customer), ...
You get the idea, it should be generic and be able to deal with all sorts of problems in the future.
What I was thinking about is a table UserConfiguration where each record has a foreign key to the Customer (Channel in our database), then a column FieldName, a column FieldType and a column Values. The Values column should be an xml type column, because for a dropdown we'll need to store multiple values. Also, each value can have extra data attached to it (not just a name). The other problem then is how to store the selected value. I don't like the idea of having foreign keys into xml in my database (I read somewhere that Azure can't handle this all too well). Do you just store the name of the value (and what if that value were to disappear from the xml?)?
Any documentation, links on this kind of problems would also be great. I'm trying to find a design pattern that deals with this kind of problem in the database.
I want to answer your question in two parts:
1) Implementing custom fields in a database server
2) Restricting custom fields to an enumeration of values
Although common solutions to 1) are discussed in the question referenced by #Simon, maybe you are looking for a bit of discussion on what the problem is and why it hasn't been solved for us already.
databases are great for structured, typed data
custom fields are inherently less structured
therefore, custom fields are more difficult to work with in a database
some or many of the advantages of using a database are lost
some queries may be more difficult or impossible
type safety may be lost (in the database)
data integrity may no longer be enforced (by the database)
it's a lot more work for the implementers and maintainers
As discussed in the other question, there's no perfect solution.
But these benefits/features still need to be implemented somewhere, and so often the application becomes responsible for data integrity and type safety.
For situations like these, people have created Object-Relational Mapping tools, although, as Jeff Atwood says, even using an ORM could create more problems than it solved. However, you mentioned that it 'should be generic and be able to deal with all sorts of problems in the future' -- this makes me think an ORM might be your best bet.
So, to sum up my answer, this is a known problem with known solutions, none of which are completely satisfactory (because it's so hard). Pick your poison.
To answer the second part of (what I think is) your question:
As mentioned in the linked question, you could implement Entity-Attribute-Value in your database for custom fields, and then add an extra table to hold the legal values for each entity. Then, the attribute/value of the EAV table is a foreign key into the attribute-value table.
For example,
CREATE TABLE `attribute_value` ( -- enumerations go in this table
    `attribute` varchar(30),
    `value` varchar(30),
    PRIMARY KEY (`attribute`, `value`)
);

CREATE TABLE `eav` ( -- now the values of attributes are restricted
    `entityid` int,
    `attribute` varchar(30),
    `value` varchar(30),
    PRIMARY KEY (`entityid`, `attribute`),
    FOREIGN KEY (`attribute`, `value`) REFERENCES `attribute_value`(`attribute`, `value`)
);
Of course, this solution isn't perfect or complete -- it's only supposed to illustrate the idea. For instance, it uses varchars, and lacks a type column. Also, who gets to decide what the possible values for each attribute are? Can these be changed at any time by the user?
I'm doing something similar for a customer. I've created a JSON FieldType which holds the entire JSON stream of a complex object, plus a String containing the FQTN (fully qualified type name) of my C# model class.
By using custom New, Edit and Display forms we ensured that our custom objects are rendered the right way for the best user experience.
To promote fields from the complex C# model to the SharePoint list, we built something like Microsoft did in InfoPath. Users are able to select properties or metadata from the complex C# type, which are then automatically promoted to the hosting SharePoint list.
The big advantage of JSON is that it's smaller than XML and easier to work with in the web world (JavaScript...).
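The same "JSON payload plus type name" idea translates to other stacks as well; here is a rough Java equivalent, purely illustrative and assuming Gson is available:

import com.google.gson.Gson;

// A custom field stored as two values: the JSON payload and the fully qualified type name.
class JsonField {
    String json;     // e.g. {"capacity":"1.5 ton","installationType":"split"}
    String typeName; // e.g. "com.example.ACProperties"

    // Rebuild the strongly typed object on the application side.
    Object materialize() throws ClassNotFoundException {
        Class<?> clazz = Class.forName(typeName);
        return new Gson().fromJson(json, clazz);
    }
}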
When you let the users create the data models, I would recommend looking at a document database or 'NoSQL' store, since you want exactly that: to store schemaless data structures.
Also, SharePoint stores metadata the way you mentioned (10 columns for text, 5 for dates, etc.).
That said, in my current project (locked in SharePoint, so Framework 3.5 + SQL Server and all the constraints that follow) we use a somewhat similar structure as below:
Form
Id
Attribute (or Field)
Name
Type (enum) Text, List, Dates, Formulas etc
Hidden (bool)
Mandatory
DefaultValue
Options (for lists)
Readonly
Mask (for SSN etc)
Length (for text fields)
Order
Metadata
FormId
AttributeId
Text (the value for everything but dates)
Date (the value for dates)
Our formulas employ functions such as Increment: INC([attribute1][attribute2], 6) would produce something like 000999 for the 999th instance of the combined values of attribute 1 and attribute 2 on a form. This is stored as:
AttributeIncrementFormula
AttributeId
Counter
Token
Other 'formulas' (aka anything non-trivial) such as barcodes are stored as single metadata values. In the actual implementation, we would have something like this:
var form = formRepository.GetById(1);
form.Metadata["firstname"].Value
Value above is a readonly property that decides whether we should get the value from Text or Date and if some additional transform is required. Note that the database here is merely a storage, we hold all the domain complexity in the application.
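In Java terms, that Value accessor might look roughly like this; a sketch of the idea, not the actual implementation described above:

import java.util.Date;

// One metadata row: exactly one of text/date is populated, depending on the attribute type.
class MetadataEntry {
    enum AttributeType { TEXT, LIST, DATE, FORMULA }

    String text;
    Date date;
    AttributeType type;

    // Read-only accessor that picks the right backing column (and could apply further transforms).
    Object getValue() {
        return type == AttributeType.DATE ? date : text;
    }
}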
We also let our customer decide which attribute is the form title for example, so if firstname is the form title, they'll set an in-memory param that spans the entire application to be something like Params.InMemory.TitleAttributeId = <user-defined-id>.
I hope this gives you some insight on a production impl of a similar scenario.
This is really more of a comment than an answer, but I need more space than SO will allow for comments, so here 'tis:
I think your UserConfiguration table approach is good, and would suggest only abstracting the "type" and "value" pieces of your design a bit more:
Since your application will need to validate user input, each notion of "type" will have an associated piece of evaluation logic. Obviously the more of this you can abstract into data the easier it will be to keep your code small. Enumerated lists are a good start, but if your "validator" logic can be extended to handle pattern matching for text strings and Boolean logical expressions (e.g. to describe/enforce constraints on input values), then you can express pretty much any "type" of input that your application may need to handle in terms of (relatively) simple "atoms" that you can map naturally to DB tables.
When storing a user-specified value, you can either store the "raw" data (e.g. in JSON) and a foreign key to the associated "type", or you can add a lookup/cache system that assigns an integer to each new value that is encountered by the system ("novelty" can be detected by comparing a hash of the "raw" data, for example). The latter approach obviously scales better if you're expecting lots of data duplication (which of course you would in the case of a multiple-choice menu).
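A sketch of that lookup/cache idea, with the novelty check done over a hash of the raw JSON (in a real system the map would be a lookup table rather than in-memory state):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Assigns a stable integer id to each distinct raw value, keyed by a hash of its bytes.
class ValueCache {
    private final Map<String, Integer> idsByHash = new HashMap<>();
    private int nextId = 1;

    int idFor(String rawJson) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                                     .digest(rawJson.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        // A previously seen value reuses its id; a novel value gets a fresh one.
        return idsByHash.computeIfAbsent(hex.toString(), h -> nextId++);
    }
}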
I am trying to get Linq2SQL to work with my legacy database. I currently have a notes table that is generic to a few different entities and mapped m:m. Instead of mapping one relation table per entity type, whoever designed this database decided to use a single relation table with a type column (as a varchar, yuck!).
(Image: http://img130.imageshack.us/img130/326/capturefm.png)
How do I map Foo and Bar to have a Notes collection? Is this even possible? I am not seeing the light. I tried having two classes, FooNotes and BarNotes, that inherit from RelateNotes, and then mapping the Type field as the discriminator.
(Image: http://img130.imageshack.us/img130/3153/capture2f.png)
This doesn't work and I receive the below error.
Bad Storage property: '_EntityID' on member 'TestLinq.BarNotes.EntityID'.
I don't want to get too far down the Linq2SQL road before realising it's not possible. I am not allowed to change the database much.
Many Thanks,
I would consider expanding your app's design to include a Domain Model-based layered architecture.
This way you can create a Domain Model that meets the requirements of the system while abstracting away how the mapping works underneath. For example, you could have a common interface for the data access layer that returns the mapped entities. An implementation of this interface could be created for the old 'string-equality' m2m relationship in the legacy database. One day when you are ready to ditch the legacy database, a new implementation could be created for a different ER db model which would allow your Domain Model (object model) and higher layers (services, UI etc) to remain unchanged (because they all utilise the common interface).
In your object model you could define each object that needs Notes and have each of them contain a Notes collection per instance. E.g. Foo has a collection of Notes; Bar has a collection of Notes. Your Repository interface would look after returning these entities, but the implementation of that repo would worry about how the data is read from and persisted to the db.
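A rough sketch of the shape I mean, written in Java for brevity (the same structure applies in C#); all names here are my own, not from your schema:

import java.util.List;

// Domain entities each expose their own Notes collection.
class Note { long id; String text; }
class Foo  { long id; List<Note> notes; }

// The common data-access interface the rest of the application depends on.
interface FooRepository {
    Foo getById(long id);
    void save(Foo foo);
}

// This implementation knows about the legacy single relation table and its varchar
// type column; a future implementation against a cleaner schema could replace it
// without touching the domain model or the layers above.
class LegacyFooRepository implements FooRepository {
    public Foo getById(long id) {
        // e.g. query the relation table WHERE Type = 'Foo' and map the rows into foo.notes
        throw new UnsupportedOperationException("sketch only");
    }
    public void save(Foo foo) {
        // persist foo and its notes back through the same mapping
        throw new UnsupportedOperationException("sketch only");
    }
}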