Alternative approach to database design: "verticality"

I would like to ask someone who has experience in database design. This is my idea, and I can't assess the deeper consequences of applying such an approach to, let's say, a common problem. I appreciate your comments in advance...
Imagine:
- patients in hospital
- each patient should have:
1. personal data - Name, Surname, Street, SecurityID, contact, and many more (which could change over time)
2. personal records - a heap of various forms (also changing over time)
Typically I would design a table for the patient's personal data:
personaldata_tbl
| ID | SecurityID | Name | Surname ... | TimeOfEntry
and similar tables for each form in the program. This could be a very hard task, because it could reach several hundred such tables, and their number would probably keep growing.
And yes, all of them should be relationally connected, for example:
releaseform_tbl
| ID | personaldata_tbl_ID | DateOfRelease | CauseOfRelease ... | TimeOfEntry
My intention is to collapse the 2D tables into a single 1D table: all data about patients would be stored in one table! Other tables will describe (referentially) what kind of data is stored in the main table. Look at this:
data_info_tbl
| ID | Description |
| 1 | Name |
| 2 | Surname |
patient_data_tbl
| ID | patient_ID | data_info_ID | form_ID | TimeOfEntry | Value
| 1 | 121 | 1 | 7 | 17.12.2011 14:34 | John
| 2 | 121 | 2 | 7 | 17.12.2011 14:34 | Smith
The main reasons why this approach attracts me are:
- simplicity
- ability to store any data with appropriate specification and precision
- no table jungle
Cons:
- SQL querying could be problematic in some cases
- there would have to be a reliable algorithm for deleting, updating, and inserting data (one way is to dynamically create a table, perform the operations on it, and finally store it back)
- data-aware controls can't be used.
So what would you say ?
Thanks for your time and answers.

The most obvious problems . . .
You lose control of size. The "Value" column must be big enough to hold the largest type you use, which in the general case must be a blob. (X-ray images, in a hospital database.)
You lose data types. PostgreSQL, for example, includes the data types "point", bit string, internet address, cidr address, MAC address, and UUID. Storing all values in a column of a single type means you lose all the type-safety built into the specific data types.
You lose constraints. Some integers need to be constrained to between 1 and 10, others between 1000 and 3000. Some text strings need to be all numbers (ZIP codes), some need to be a particular mix of alpha and numerics (tire sizes).
You lose scalability. If there are 1000 attributes in a person's medical records, each person's data will take 1000 rows in the table. If you have 100,000 patients--an easily manageable number even in Microsoft Access and SQLite--your table suddenly balloons from a manageable 100,000 rows to 100,000,000 rows. Any query that does a table scan will have to scan 100 million rows, every time. Any single query that needs to return, say, 30 attributes will need 30 joins.
What you're proposing is the EAV anti-pattern. (Starts on slide 30.)
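To make that last point concrete, here is a minimal sketch against the question's own patient_data_tbl and data_info_tbl (IDs as in the sample rows above): rebuilding even a two-attribute row already takes one self-join per attribute.
-- Rebuild one logical row (Name, Surname) for patient 121.
-- Every additional attribute adds another self-join.
SELECT n.Value AS Name,
       s.Value AS Surname
FROM patient_data_tbl AS n
JOIN patient_data_tbl AS s
  ON s.patient_ID = n.patient_ID
WHERE n.data_info_ID = 1   -- 1 = Name (see data_info_tbl)
  AND s.data_info_ID = 2   -- 2 = Surname
  AND n.patient_ID = 121;
Thirty attributes means thirty aliases of the same table joined together.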

I disagree with Bert Evans (in the sense that I don't find this terribly valid).
First of all it's not clear to me what problem you are trying to solve. The three "benefits" you list:
- simplicity
- ability to store any data with appropriate specification and precision
- no table jungle
don't really make a lot of sense if the application is small, and if it isn't (like the hospital records you mention in your example), any possible "gain" is lost once you factor in that this will make any sort of query very inefficient, and that the people designing reports, data extractions, or extensions to the DB will have to put in a lot of extra effort.
Example: I suppose your hospital patient has an address and therefore a ZIP code... have you considered what hoops you will have to jump through to create a foreign key to the zip code/state table?
Another example: as soon as you realize that the patient may have a middle name, and that on the form it will be placed between the first and last name, what will you do? Renumber all the last-name fields? Or place the middle name at the bottom of the pile, so that your form will need special logic to show it in the "correct" position?
You may want to check some alternatives to SQL DBs, like for example XML-based data stores, or even MUMPS, but I really can't see any benefit in the approach you are proposing (and please consider I have seen an over-zealous DBA trying to do something very similar when designing a web application backed by an Oracle DB: every field/label/image on the webpage had just a numeric reference to a sequence-based ID record in the DB, making the whole webapp a nightmare to maintain - so I am not just being a "purist" here).
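To illustrate the ZIP code point: a hypothetical sketch (zip_code_tbl and its columns are my invention, not from the question) of why a declarative foreign key is out of reach in the one-big-table design.
CREATE TABLE zip_code_tbl (
  zip   VARCHAR(10) PRIMARY KEY,
  state VARCHAR(50) NOT NULL
);

CREATE TABLE patient_data_tbl (
  ID           INTEGER PRIMARY KEY,
  patient_ID   INTEGER NOT NULL,
  data_info_ID INTEGER NOT NULL,
  Value        VARCHAR(4000)
  -- FOREIGN KEY (Value) REFERENCES zip_code_tbl (zip)
  -- ^ cannot be declared: Value also holds names, dates, and form text,
  --   and the constraint would have to apply only on rows where
  --   data_info_ID happens to mean "ZIP code". You end up enforcing it
  --   with triggers or application code instead.
);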

Related

What is the best way to indicate a value that has special handling in a database table?

The setup.
I have a table that stores a list of physical items for a game. Items also have a hierarchical list of categories. Example base table:
Items
id | parent_id | is_category | name | description
-- | --------- | ----------- | ------- | -----------
1 | 0 | 1 | Weapon | Something intended to cause damage
2 | 1 | 1 | Ranged | Attack from a distance
3 | 1 | 1 | Melee | Must be able to reach the target with arm
4 | 2 | 0 | Musket | Shoots hot lead.
5 | 2 | 0 | Bomb | Fire damage over area
6 | 0 | 1 | Mount | Something that carries a load.
7 | 6 | 0 | Horse | It has no name.
8 | 6 | 0 | Donkey | Don't assume or you become one.
The system is currently running on PHP and SQLite, but the database back-end is flexible and may use MySQL, and the front-end may eventually use JavaScript or Objective-C/Swift.
The problem.
In the sample above the program must have different special handling for each of the top-level categories and the items underneath them, e.g. Weapon and Mount are sold by different merchants, and weapons may be carried while a mount cannot.
What is the best way to flag the top level tiers in code for special handling?
While the top level categories are relatively fixed I would like to keep them in the DB so it is easier to generate the full hierarchy for visualization using a single (recursive) function.
Nearly all foreign keys that identify an item may also identify an item category so separating them into different tables seemed very clunky.
My thoughts.
I can use a string match on the name and store the id in an internal constant upon first execution. An ugly solution at best that I would like to avoid.
I can store the id in an internal constant at install time. Better, but still not quite what I prefer.
I can store an array in code of the top-level elements instead of putting them in the table. This creates a lot of complications, like how a child points to the top-level parent. Another id would have to be added to the table that is used by maybe 100 of the 10K rows.
I can store an array in code and enable identity insert at install time to add the top-level elements sharing the identity of the static array. Probably my best idea, but I don't really like identity insert; it just doesn't feel "database" to me. Also, what if a new top-level item appears? Maybe start the ids at 1 million for these categories?
I can add a flag column ("varchar(1) top_category" or "int top_category") with a character or bitmap indicating the value. Again, a column used on maybe 10 of 10k rows.
As a software person I tend to find software solutions, so I'm curious if there is a more DB-type solution out there.
Original table, with a join to actions.
Yes, you can put everything in a single table. You'd just need to establish unique rows for every scenario. This sqlfiddle gives you an example... but IMO it starts to become difficult to make sense of. It doesn't take care of all scenarios, due to not being able to do full joins (just a limitation of sqlfiddle, which is awesome otherwise).
IMO, breaking things out into tables makes more sense. Here's another example of how I'd start to approach a schema design for some of the scenarios you described.
The base tables themselves look clunky, but they give so much more flexibility in how the data is used.
tl;dr analogy ahead
A database isn't a list of outfits, organized in rows. It's where you store the clothes that make up an outfit.
So the clunky feel of breaking things out into separate tables is actually the benefit of relational databases. Putting everything into a single table feels efficient and optimized at first... but as complexity grows, it starts to become a pain.
Think of your schema as a dresser. Drawers are your tables. If you only have a few socks and a little underwear, putting them all in one drawer is efficient. But once you get enough socks, it can become a pain to have them all in the same drawer as your underwear. You have dress socks, crew socks, ankle socks, furry socks. So you put them in another drawer. Once you have shirts, shorts, and pants, you start putting them in drawers too.
The urge to put all data into a single table is usually driven by how you intend to use the data.
Assuming your dresser is fully stocked and neatly organized, you have several potential unique outfits, all neatly organized in your dresser. You just need to put them together. Selects and joins are how you would assemble those outfits. The fact that your favorite jeans/t-shirt/sock combo isn't all in one drawer doesn't make it clunky or inefficient. The fact that they are separated and organized allows you to:
1. Quickly know where to get each item
2. See potential other new favorite combos
3. Quickly see what you have of each component of your outfit
There's nothing wrong with choosing to think of the outfit first, and how you will put it away later. If you only have one outfit, putting everything in one drawer is way easier than putting each piece in a separate drawer. However, as you expand your wardrobe, the single drawer for everything starts to become inefficient.
You typically want to plan for expansion and versatility. Your program can put the data together however you need it, and a well-organized schema can do that for you. Whether you use an ORM and do model-driven data storage, or start with the schema and then build models based on it: the more complex your data requirements become, the more similar the two approaches become.
A relational database is meant to store entities in tables that relate to each other. Very often you'll see examples of a company database consisting of departments, employees, jobs, etc. or of stores holding products, clients, orders, and suppliers.
It is very easy to query such database and for example get all employees that have a certain job in a particular department:
select *
from employees
where job_id = (select id from job where name = 'accountant')
and dept_id = (select id from departments where name = 'buying');
You on the other hand have only one table containing "things". One row can relate to another meaning "is of type". You could call this table "something". And were it about company data, we would get the job thus:
select *
from something
where description = 'accountant'
and parent_id = (select id from something where description = 'job');
and the department thus:
select *
from something
where description = 'buying'
and parent_id = (select id from something where description = 'department');
These two would still have to be related by persons working in a department in a job. A mere "is type of" doesn't suffice then. The short query I've shown above would become quite big and complex with your type of database. Imagine the same with a more complicated query.
And your app would either know nothing about what it's selecting (well, it would know it's a something which is of some type, and another something that is of some type, and that the person (if you go so far as to introduce a person table) is connected somehow with these two things), or it would have to know what the description "department" means and what the description "job" means.
Your database is blind. It doesn't know what a "something" is. If you make a programming mistake some time (most of us do), you may even store wrong relations (a Donkey is of type Musket and hence "shoots hot lead", while you can ride it), and your app may crash at one point or another, unable to deal with a query result.
Don't you want your app to know what a weapon is and what a mount is? That a weapon enables you to fight and a mount enables you to travel? So why make this a secret? Do you think you gain flexibility? Well, then add food to your table without altering the app. What will the app do with this information? You see, you must code this anyway.
Separate entity from data. Your entities are weapons and mounts so far. These should be tables. Then you have instances (rows) of these entities that have certain attributes. A bomb is a weapon with a certain range for instance.
Tables could look like this:
person (person_id, name, strength_points, ...)
weapon (weapon_id, name, range_from, range_to, weight, force_points, ...)
person_weapon(person_id, weapon_id)
mount (mount_id, name, speed, endurance, ...)
person_mount(person_id, mount_id)
food (food_id, name, weight, energy_points, ...)
person_food (person_id, food_id)
armor (armor_id, name, protection_points, ...)
person_armor <= a table for m:n or a mere person.id_armor for 1:n
...
This is just an example, but it shows clearly what entities your app is dealing with. It knows weapons and food are something the person carries, so these can only have a maximum total weight for a person. A mount is something to use for transport and can make a person move faster (or carry weight, if your app and tables allow for that). Etc.
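To make this concrete, here is a sketch of a few of those tables as DDL. The answer lists only column names, so every type, range, and constraint below is an illustrative assumption:
CREATE TABLE person (
  person_id       INTEGER PRIMARY KEY,
  name            VARCHAR(100) NOT NULL,
  strength_points INTEGER NOT NULL CHECK (strength_points BETWEEN 0 AND 100)
);

CREATE TABLE weapon (
  weapon_id    INTEGER PRIMARY KEY,
  name         VARCHAR(100) NOT NULL,
  range_from   INTEGER NOT NULL,
  range_to     INTEGER NOT NULL,
  weight       DECIMAL(6,2) NOT NULL,
  force_points INTEGER NOT NULL,
  CHECK (range_from <= range_to)   -- the DB enforces what the app expects
);

CREATE TABLE person_weapon (       -- m:n: who carries what
  person_id INTEGER NOT NULL REFERENCES person (person_id),
  weapon_id INTEGER NOT NULL REFERENCES weapon (weapon_id),
  PRIMARY KEY (person_id, weapon_id)
);
Notice that the wrong-relation bug described earlier (a Donkey of type Musket) becomes unrepresentable: person_weapon can only reference rows of weapon.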

Storing phone numbers country code

I split the phone number into 2 parts:
country prefix (e.g. +49)
phone number without the leading 0
My question is: what is the best approach to store the country code? As it is (+49), or as a foreign key to a countries table?
You should use the normal forms for databases.
https://en.wikipedia.org/wiki/Database_normalization
There are rules for dealing with such a problem.
/M
The choice is dependent upon:
No. of records
The database used
No. of relationships with other tables
As the country code would be a repeating column, it could be placed in a varchar-type column as it is, e.g. +91-9654637268. This will allow different formats of the phone number, but there is no validation at the database level that the entered value is a number; you will need to validate that at the code level. Using a varchar should be the first choice for storing phone numbers with their country codes, as it will be faster by avoiding joins.
But if you need a good amount of manipulation, use a bigint, which will store the number as e.g. 9764536377443, where the first two digits are the country code and the rest of the digits are the phone number part.
Or you could have a separate column for the country code, which would add an otherwise unnecessary join, but could be helpful if the country code is needed in several places and must be well validated and constrained; this could also be achieved by using either of the above techniques.
Hope it is helpful.
Transactional Database
If this is a transactional database (lots of updating), or a general-purpose database (querying and updating), then use database normalisation as Jonathan says. So have a table called Country with this structure:
| ID | CountryCode | CountryName |
| 1 | +49 | Germany |
| 2 | +1 | USA |
This way you keep the country code, and the descriptive information related to it, away from the data about the telephone number. So if a country changes its name or country code, rather than having to update each affected row in the telephone number table, you just update the one row in the Country table.
Then a table (or tables) for the rest of the telephone number (depending on whether you want to split up area code etc.) with a column that references the Country ID as a foreign key:
| ID | CountryCodeID | TelNumber |
| 1 | 1 | 12345 |
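As DDL, the two tables above might look like this; a minimal sketch where the column types are my assumptions (and note that || is the standard SQL string-concatenation operator, spelled CONCAT() in MySQL):
CREATE TABLE Country (
  ID          INTEGER PRIMARY KEY,
  CountryCode VARCHAR(5)  NOT NULL,  -- e.g. '+49'
  CountryName VARCHAR(60) NOT NULL
);

CREATE TABLE TelephoneNumber (
  ID            INTEGER PRIMARY KEY,
  CountryCodeID INTEGER NOT NULL REFERENCES Country (ID),
  TelNumber     VARCHAR(20) NOT NULL
);

-- Reassembling a full number for display:
SELECT c.CountryCode || t.TelNumber AS full_number
FROM TelephoneNumber AS t
JOIN Country AS c ON c.ID = t.CountryCodeID;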
However, bear in mind that this is a general-purpose way of doing things; in query-heavy situations with larger amounts of data (data mart, data warehouse) a different approach is best. See star schemas.

Database structure and relationships

I'm building this gaming portal and I have some database concerns. Currently I have about 10 tables, but I think there will be more than 20 when I'm finished programming. Anyway, I want to create some sort of relationships between the different tables (somewhat like WordPress). That table will hold any relation that one row from table A has to a row in table B. And what I came up with is the following:
table relationships
| rs_id | rs_type | rs_alpha | rs_beta |
rs_id -> just an id
rs_type -> the type of relation
rs_alpha -> related table #1 and row id
rs_beta -> related table #2 and row id
examples:
| 1 | cover | games:153 | images:318 |
| 2 | tag | news:183 | tags:18 |
| 3 | group_admin | users:918 | group:75 |
...
This might just do it, but here come my concerns:
1. This table is going to grow so fast that in no time there might be over 100,000 rows which will slow the load time.
2. To extract info I'll have to explode every call which might slow down the load time.
3. I might divide table name from id (rs_alpha, rs_beta), yet that might also slow down the load time.
Thank you and I'm open to any other solutions that might be better than this one :)
If you have time you can download my db structure from here to see what it looks like:
demirevdesign.com/public/pcanvil.sql.gz
(The addon_ tables will become the relationships table)
As far as I understand, the relationship type itself defines the tables involved, so there is no need to store table names.
Also, if you refactor your schema and add a common parent table for all entities that might be involved in a relationship, you won't need to care about table names at all; you just store the id from that new table.
Finally, a relationship always has a start date and may have an end date; I'd suggest adding these attributes to the relationships table.
As to performance, it's hard to answer without seeing how you are going to query the table. I guess that, in general, partitioning by the relationship-type column will be beneficial.
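A minimal sketch of that refactor (all names here are assumptions): a common parent table gives every row of every entity table one global id, so the relationships table can use real foreign keys plus the suggested validity dates.
CREATE TABLE entity (
  entity_id   INTEGER PRIMARY KEY,
  entity_kind VARCHAR(20) NOT NULL   -- 'game', 'image', 'news', 'user', ...
);

CREATE TABLE relationship (
  rs_id      INTEGER PRIMARY KEY,
  rs_type    VARCHAR(20) NOT NULL,   -- 'cover', 'tag', 'group_admin', ...
  rs_alpha   INTEGER NOT NULL REFERENCES entity (entity_id),
  rs_beta    INTEGER NOT NULL REFERENCES entity (entity_id),
  start_date DATE NOT NULL,
  end_date   DATE                    -- NULL while the relation is active
);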

Relational vs. parametrized data modeling when building semantic web applications?

Here is the summary of my question; then I'll describe it in more detail:
I read about using the parametrized data modeling method instead of standard relational data modeling when building semantic web applications. I think we'll lose 90% of normalization if we use this method. If I want to design the database for my semantic web application, should I use this approach? What is the practical value?
In more detail:
I've read a lot of articles around this. In the book "Programming the Semantic Web" (Toby Segaran, Colin Evans, and Jamie Taylor), on page 14, they tell us to use parametrized data modeling to get semantic relationships instead of the standard relational model, as described by this example:
In the standard relational database:
Venue : [ ID(PK), Name, Address ]
Restaurant : [ ID(PK), VenueID(FK), CuisineID]
Bar : [ ID(PK), VenueID(FK), DJ?, Specialty ]
Hours : [ VenueID(FK), Day, Open, Close ]
For semantic relationships: one table only!!! Fully parameterized venues:
Properties : [ VenueID, Field, Value ]
Example:
| VenueID | Field | Value |
| 1 | Cuisine | Deli |
| 1 | Price | $ |
| 1 | Name | Deli Llama |
| 1 | Address | Peachtree Rd |
| 2 | Cuisine | Chinese |
| 2 | Price | $$$ |
| 2 | Specialty Cocktail | Scorpion Bowl |
| 2 | DJ? | No |
| 2 | Name | Peking Inn |
| 2 | Address | Lake St |
| 3 | Live Music? | Yes |
| 3 | Music Genre | Jazz |
| 3 | Name | Thai Tanic |
| 3 | Address | Branch Dr |
Then the authors say:
Now each datum is described alongside the property that defines it. In doing this, we've taken the semantic relationships that previously were inferred from the table and column and made them data in the table. This is the essence of semantic data modeling: flexible schemas where the relationships are described by the data itself.
If I want to design the database for my semantic web application, should I use this approach? What is the practical value?
What you lose in immediate clarity, you gain in flexibility. Notice that with the more parametrized approach you gain the ability to easily add fields without altering any tables. This allows you to give different fields to different venues as it suits your application. By extension, it also makes it easy for you, or for future maintainers and modification authors (if you intend to release), to extend your web application down the road.
Just be careful when it comes to performance. Don't adopt a fully parametrized design where a standard relational design is easier. Let's say, for a moment, you have two different users tables, one relational, the other parametrized:
Table: users_relational
+---------+----------+------------------+----------+
| user_id | username | email            | password |
+---------+----------+------------------+----------+
| 1       | Sam      | sam@example.com  | ******** |
| 2       | John     | john@example.com | ******** |
| 3       | Jane     | jane@example.com | ******** |
+---------+----------+------------------+----------+
Table: users_parametrized
+---------+----------+------------------+
| user_id | field    | value            |
+---------+----------+------------------+
| 1       | username | Sam              |
| 1       | email    | sam@example.com  |
| 1       | password | ******** |
| 2       | username | John             |
| 2       | email    | john@example.com |
| 2       | password | ******** |
| 3       | username | Jane             |
| 3       | email    | jane@example.com |
| 3       | password | ******** |
+---------+----------+------------------+
Now you want to select a single user. With your relational table, you will select only one row, while your parametrized version will select as many rows as there are fields associated with that user, in this case 3.
The next issue is searchability (at times). Say you have that same users table from the example above, but instead of knowing the user ID, you only know the username. You may end up using two queries: one to find the user id, and another to get the data associated with the user.
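As a sketch, those two queries can also be collapsed into one self-join against the parametrized table above; compare this with a plain WHERE username = 'John' on the relational table:
-- Fetch all fields of the user whose username is 'John'.
SELECT p.field, p.value
FROM users_parametrized AS u
JOIN users_parametrized AS p
  ON p.user_id = u.user_id
WHERE u.field = 'username'
  AND u.value = 'John';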
Your last con stems from selecting only a few fields at a time. Taking the users tables as an example again, we can limit the fields easily with the relational one:
SELECT username, email FROM users_relational WHERE user_id = 2
We should get a single result with two columns.
Now, for the parametrized table:
SELECT field, value FROM users_parametrized WHERE user_id = 2 AND field IN('username','email')
It's a little more verbose and becomes less readable than the first one, especially as you take on more and more fields to select.
Additionally, the parametrized version will be slower for a few reasons. It now has to do text comparisons on the varchar field column, instead of using a single, numerically indexed user_id. With the first query, the database knows when to stop looking for the record because you're selecting by a primary key. In the parametrized version you are not selecting by a primary key, so you will take a performance hit, because your database must look through all the records.
This leads me to the final real difference (as far as your DBMS sees it). There is no primary key in the parametrized table, which (as you saw above) can be a performance issue, especially if you already have a considerable number of records. For something like a users table, where you can have thousands of users, your record count would be that number times 3 (as we have three non-user_id fields in this case). That's a lot of data for the database to search through.
There are quite a few things to consider when designing your application. Don't be afraid to mix parametrized and relational styles in your database; it just has to make sense practically. In the case you gave, it makes perfect sense to do so; in the case I displayed, it would be pointless.
It is possible to stay fully relational while pursuing the intent of storing data in a parameterized fashion. The following is a greatly oversimplified demonstration, but should suffice to show the main tricks that are needed -- in a nutshell, additional levels of abstraction, some surrogate primary keys, and some tables with composite primary keys. I will leave out exact description of foreign key constraints assuming the reader can grasp the obvious relations between tables below.
Your first table is only to establish the entities you want to store information about, and a key to look up what sorts of information will be stored:
entity_id | entity_type
---------------------------
1 | lawn mower
2 | toothbrush
3 | bicycle
4 | restaurant
5 | person
The next table relates entity type to the fields you wish to store for each entity type:
entity_type | attribute
------------------------
lawn mower | horsepower
lawn mower | retail price
lawn mower | gas_or_electric
lawn mower | ...etc
toothbrush | bristle stiffness
toothbrush | weight
toothbrush | head size
toothbrush | retail price
toothbrush | ...etc
person | name
person | email
person | birth date
person | ...etc
This is expandable to as many fields as you like for each entity type. It's still relational; this table does have a primary key, it's just a composite key composed of both columns.
This example is oversimplified for brevity; in actual practice you have to confront the namespacing issues with attributes and you probably want certain attribute names to be per-entity-type in case the same name means something different on an entirely different kind of entity. Use a surrogate primary key for the attributes in order to solve the namespacing issue, if you don't mind the decrease in readability when looking directly at the tables.
Meanwhile, and opposite of the preceding point, it's useful to make common and unambiguous attributes (such as "weight in grams" or "retail price in USD") available for reuse across multiple entity types. To handle this, add a level of abstraction between attributes and entity types: make a table of "attribute sets", with each set linked to 1..n attributes. Then each entity type in the table above would be linked not directly to attributes, but to one or more attribute sets.
You'll need to either guarantee that attribute sets do not overlap in what attributes they point to, or create a means of resolving conflicts by hierarchy, composition, set union, or whatever fits your needs.
So at this point, a lookup for a particular entity goes as follows: from the entity id we get the entity type; from the entity type we get 1..n attribute sets, which yield the resulting attribute set held by the entity. Finally, there is the big table with the actual data in it, as follows:
entity_id | attribute_id | value
---------------------------------------
923 | 1049272 | green
923 | 1049273 | 206.55
924 | 1049274 | 843-219-2862
924 | 1049275 | Smith
929 | 1049276 | soft
929 | 1049277 | ...etc
As with all of these tables, this one has a primary key, in this case composed of the entity_id and attribute_id columns. The values are stored in a plain-text column without units. The units are stored in a separate table linking attributes to units. More tables can be established if you need to get more specific on that; you can set up additional levels of abstraction to establish an "attribute type" system similar to the entity type system described above.
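Skipping the attribute-set indirection for brevity, the lookup described above might read as follows. All table and column names here are assumptions (the answer deliberately leaves the exact DDL open), and it presumes the surrogate attribute key suggested earlier:
-- All attribute/value pairs for entity 923.
SELECT a.attribute, v.value
FROM entity AS e
JOIN entity_type_attribute AS a
  ON a.entity_type = e.entity_type
JOIN entity_value AS v
  ON v.entity_id = e.entity_id
 AND v.attribute_id = a.attribute_id
WHERE e.entity_id = 923;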
If needed, you can go as far as storing relationships such as "attribute X is numerically convertible to attribute Y by the following formula", for numerical attributes. Or for non-numerical attributes you can establish equivalence tables to manage alternate spellings or formats for the allowed values of an attribute.
As you can imagine, the farther you go with your "attribute types and units" system, and the more you use that additional machinery in computation, the slower this all will be. In the worst case you're looking at many joins. But that problem can be addressed with caching and views, if your situation allows you to make tradeoffs such as slowing write speed to gain a great increase in read speed. Also, many of your queries to the database will be in situations where you already know what entity type you're working with at the moment and what its resulting attributes are and their types; so you only have to grab the literal values out of the entity/attribute/value table, and that is plenty fast.
In conclusion, hopefully I have shown how you can get as parameterized as you wish while remaining fully relational. It just requires more tables for more levels of abstraction than some of the simpler approaches do; yet it avoids the disadvantages of the "one-big-table" style. This style of entity>type>attribute>value storage is powerful, flexible, and can be extended as far as you need.
And thanks to a relational/normalized table setup, you can do all sorts of reorganizing along the way as your entity schema evolves, without losing data. The additional levels of abstraction allow you to re-parent attributes from one attribute set to another, change their names if needed, and change which sets of attributes an entity type makes use of, without losing stored values, as long as you write appropriate migrations. The other day I realized I needed to store a certain product attribute on a per-brand basis instead of per-product, and was able to make the schema change in five minutes with only a couple of updated rows in the database. In many other setups, particularly in a one-big-table setup, it could have been a lot more work, requiring as much as one or more updated rows per entity affected by the change.

Do I need multiple tables or a single one?

I'm developing a tool which may get more than a million rows of data to fill in.
Currently I have designed a single table with 36 columns. My question is: do I need to divide these into multiple tables, or keep a single one?
If single, what are the advantages and disadvantages?
If multiple, what are the advantages and disadvantages?
And what engine should I use for speed?
My concern is a large database which will receive at least 50,000 queries per day.
Any help?
Yes, you should normalize your database. A general rule of thumb is that if a column that isn't a foreign key contains duplicate values, the table should be normalized.
Normalization involves splitting your database into tables, and helps to:
Avoid modification anomalies.
Minimize impact of changes to the data structure.
Make the data model more informative.
There is plenty of information about normalization on Wikipedia.
If you have a serious amount of data and don't normalize, you will eventually come to a point where you will need to redesign your database, and this is incredibly hard to do retrospectively, as it will involve not only changing any code that accesses the database, but also migrating all existing data to the new design.
There are cases where it might be better to avoid normalization for performance reasons, but you should have a good understanding of normalization before making this decision.
First and foremost, ask yourself: are you repeating fields, or attributes of fields? Does your one table contain relationships or attributes that should be separated? Follow third normal form... We need more info to help, but generally speaking, one table with thirty-six columns smells like a DB fart.
If you want to store a million rows of the same kind, go for it. Any decent database will cope even with much bigger tables.
Design your database to best fit the data (as seen from your application), get it up, and optimize later. You will probably find that performance is not a problem.
You should model your database according to the data you want to store. This is called "normalization": essentially, each piece of information should be stored only once; otherwise, a table cell should point to another row or table containing the value. If, for example, you have a table containing phone numbers, and one column contains the area code, you will likely have more than one phone number with the same value in that column. Once this happens, you should set up a new table for area codes and link to its entries by referencing the primary key of the row the desired area code is stored in.
So instead of
id | area code | number
---+-----------+---------
1 | 510 | 555-1234
2 | 510 | 555-1235
3 | 215 | 555-1236
4 | 215 | 555-1237
you would have an area code table
id | area code
---+-----------
1  | 510
2  | 215
and a phone number table
id | number   | area code
---+----------+-----------
1  | 555-1234 | 1
2  | 555-1235 | 1
3  | 555-1236 | 2
4  | 555-1237 | 2
The more occurrences of the same value you have, the more likely you are to save memory and get quicker performance by organizing your data this way, especially when you're handling string values or binary data. Also, if an area code were to change, all you need to do is update a single cell, instead of having to perform an update operation on the whole table.
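A sketch of that single-cell update next to its denormalized counterpart (table names assumed):
-- Normalized: the area code changes in exactly one row.
UPDATE area_code SET code = '628' WHERE id = 1;

-- Denormalized: every phone number carrying the old code must be rewritten.
UPDATE phone_number SET area_code = '628' WHERE area_code = '510';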
Try this tutorial.
Correlation does not imply causation.
Just because a shitload of columns usually indicates a bad design doesn't mean that a shitload of columns is a bad design.
If you have a normalized model, you store whatever number of columns you need in a single table.
It depends!
Does that one table contain a single 'entity'? I.e., are all 36 columns attributes of a single thing, or are there several 'things' mixed together?
If mixed, then you should normalise (separate into distinct entities with relationships between them). You should aim for at least Third Normal Form (3NF).
A best practice is to normalise as much as you can; if you later identify a performance problem, then denormalise as little as you can.
