Here is my scenario with a SQL Server 2008 R2 database table
(Update: migration to SQL Server 2014 SP1 is in progress, so SQL Server 2014 can be used here).
A. Maintain daily history in the table (which is a fact table)
B. Create Tableau graphs using the fact and dimension tables
A few notes on how the table is populated:
A copy of the table from the source database is pushed to my SQL Server DAILY; it contains approximately 120,000 to 130,000 rows and about 20 columns.
a. 1st day: we get 120,000 records of source-system data.
b. 2nd day: we get, say, 122,000 records (2,000 newly inserted, 1,000 modified/updated versions of the previous day's data, and 119,000 unchanged from the previous day).
c. 3rd day: we get, say, 123,000 records (1,000 newly inserted, 1,000 modified/updated versions of the 2nd day's data, and 121,000 unchanged from the 2nd day).
Since daily history has to be maintained in the fact table, within a week the table will have about 1 million rows,
for 2 weeks - 2 million rows,
for 1 month - 5 million rows,
for 1 year - say 65-70 million rows,
for 12 years - around 1 billion rows (1,000 million).
Twelve years of history has to be maintained.
What would be the right strategy for storing data in this table to handle the scenario, while also providing sufficient performance when generating reports?
Fact Table Approaches:
1. Partition the table by month (each monthly partition would hold roughly 5 million rows)?
2. Copy only the differential data (new and modified rows) into the table daily - but it is not possible to create the Tableau reports with Approach 2.
Tableau graphs have to be created using the fact and dimension tables for scenarios like:
Weekly bar graph of sample count
Weekly (week number on the X-axis) plotter graph of average sample values (on the Y-axis)
Weekly (week number on the X-axis) average sample values (on the Y-axis) by quality
How to handle this scenario?
Please provide references on the approach to follow.
Should we create any indexes on the fact table?
A data warehouse can handle millions of rows these days without much difficulty. Many have tens of billions of rows, and only then do things get a little difficult. You should look at both table partitioning over time and at columnstore and page compression to see what is available; large warehouses often use both. 2008 R2 is quite old at this point, and huge progress has been made in this area in current versions of SQL Server.
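As a rough illustration of those two features together, here is a minimal sketch using SQL Server 2014 syntax. The table, column, and partition names are made up for this example, and the boundary values shown would need to be generated for the full date range:

    -- Monthly partition function/scheme; boundary values are only samples.
    CREATE PARTITION FUNCTION pf_monthly (date)
        AS RANGE RIGHT FOR VALUES ('2016-01-01', '2016-02-01', '2016-03-01');

    CREATE PARTITION SCHEME ps_monthly
        AS PARTITION pf_monthly ALL TO ([PRIMARY]);

    -- Hypothetical daily-snapshot fact table placed on the partition scheme.
    CREATE TABLE dbo.FactDailySnapshot
    (
        SnapshotDate date          NOT NULL,
        SampleId     int           NOT NULL,
        SampleValue  decimal(18,4) NULL
        -- ... the remaining ~20 columns
    ) ON ps_monthly (SnapshotDate);

    -- Clustered columnstore index: heavy compression plus fast scans for reporting.
    CREATE CLUSTERED COLUMNSTORE INDEX ccix_FactDailySnapshot
        ON dbo.FactDailySnapshot;

With this layout, old months can be switched out or archived partition by partition, while reporting queries that filter on the snapshot date only touch the relevant partitions.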
Use a standard fact-dimensional design, and try to avoid tweaking the actual schema with workarounds just to conserve space - that generally will bite you in the long run.
For proven, time-tested designs in warehousing I like the Kimball Group's patterns, e.g. The Data Warehouse Lifecycle Toolkit book.
There are a few different requirements in your case. Because of that, I suggest splitting the requirements according to the standard data warehouse three-tier model.
DWH model (delta-driven, historized, high performance)
Presentation model (Again, high performance, should fit Tableau)
Front end
DWH model
Basically, you have three different approaches here, all with their pros and cons.
3NF
Can become cumbersome down the road. Is highly flexible if used right. Time-to-market is long (depending on complexity). Historization can become complicated.
Star Schema (for DWH storage!)
Has a very, very fast time-to-market. Will become extremely complicated to maintain when business rules or the business structure change. Helpful for a very small business, but not for businesses that want to expand their Business Intelligence infrastructure. Historization can become a mess if the star schema is the main DWH model.
Data Vault
Has a medium time-to-market. Is easier to understand than 3NF but can be puzzling for people used to a star schema. Automatically historized, parallelizable and very flexible for changing business needs, because the business rules are implemented downstream. Scales quickly.
Anchor Modelling
Another highly flexible approach which I haven't used yet. It is in some ways the same approach as Data Vault, but with some differences.
Presentation model
Now, for presenting the never-touched-again data from the DWH layer, nothing fits better than a star schema. Also, while creating the star schema, you can implement the business logic.
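To make that concrete, a minimal star-schema sketch for the presentation layer could look like the following. The table and column names are illustrative, loosely based on the sample-value and quality reporting mentioned in the question:

    -- One dimension per reporting axis, one fact table holding the measures.
    CREATE TABLE dim_date (
        date_key      int  NOT NULL PRIMARY KEY,  -- e.g. 20160824
        calendar_date date NOT NULL,
        week_of_year  int  NOT NULL,
        month_of_year int  NOT NULL
    );

    CREATE TABLE dim_quality (
        quality_key  int         NOT NULL PRIMARY KEY,
        quality_name varchar(50) NOT NULL
    );

    CREATE TABLE fact_sample (
        date_key     int           NOT NULL REFERENCES dim_date (date_key),
        quality_key  int           NOT NULL REFERENCES dim_quality (quality_key),
        sample_count int           NOT NULL,
        sample_value decimal(18,4) NOT NULL
    );

Weekly aggregations by quality then become simple joins from the fact table to the date and quality dimensions.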
Front end
Shouldn't matter, take the tool you like.
In your case, it would be smart to implement a DWH (using one of those models) and put the presentation model on top of it. If any problems are in the star schema, you could always re-generate it with the new changes.
NOTE: If you use a star schema as the DWH model, you cannot re-create the star schema in the presentation layer without some complex transformation logic to begin with.
NOTE: Also, sometimes the star schema itself is seen as a DWH. I don't think that is a good use for it for any requirement which could become more complex.
EDIT
To clarify my last note, see this blog post: http://www.tobiasmaasland.de/2016/08/24/why-your-data-warehouse-is-not-a-data-warehouse/
Related
I have time-series data in a relational database (Postgres). Data is imported into the database every 5 minutes, but the input gets overwritten during the day, meaning that at the end of the day there is only 1 record for that day for a specific id (id and date form a composite primary key).
The current process is like this: data comes in and is loaded 1:1 (data goes into every table exactly as it is in the source, so there is a lot of redundancy).
3 problems:
1. Currently, the performance of getting data out of the database (reading) is fast (good performance). The frontend queries this database and shows the data, and the query results come back very quickly. If I normalize, the queries become slower, but writing and updating become easier. How can I optimize this database?
2. Missing data (ignore this problem).
3. If we were able to store more records daily (the history of one id at different points in time each day), then we could show a comparison of two points in time within a day. Can the database support such a huge amount of data every day?
DWH
There is just one source; all data comes from one source. Can we have a DWH for it, or is there no need since there is only one source?
Edit:
How can I optimize this database?
Currently there is only one schema in the database. Data comes in and is loaded 1:1. Writing is hard since we have redundancy.
My solution: I want to create 3 schemas for this database.
Schema 1: for inserting data into tables; the table structure is based on the data source. (I assume data remains here only temporarily and will be transferred to the second schema.)
Schema 2: the incoming data is stored here, structured in 3NF.
Schema 3: the data is denormalized again because we need fast queries (fast reading is required).
Your three-schema model is exactly how this has been done for many years.
Schema 1:
Names: Staging/Landing/Ingestion
Schema matches the source system but it is cleared and reloaded for every load batch. Typically has a "looser" schema definition to allow for import and capture of bad data
Schema 2:
Names: Replica/ODS/Persisted data store
Schema 2 is never cleared; it is permanent. Following a data load, this layer should look exactly like your source systems. Data in schema 1 is "merged" into schema 2 each time. For example, on a daily load cycle, schema 1 contains just that day's data, but schema 2 contains the entire history of data loaded. Reference data is merged on a known primary key. Transactional data might be merged on a key, or it might be merged on a "windowing" basis - i.e. delete the last day's data from schema 2 and load schema 1 in (a sketch of the reference-data merge follows below).
Some people like to have a "point in time view" where they can recreate what the source system looked like at a historical point in time. I've never seen anyone use that, though.
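As a hedged sketch of that schema-1-to-schema-2 merge for reference data, here is what the upsert could look like in Postgres. The schema, table, and column names are purely illustrative, and it assumes a unique constraint on the business key:

    -- Upsert today's staging rows into the persisted store.
    -- Assumes ods.customer has a unique constraint on customer_id.
    INSERT INTO ods.customer (customer_id, customer_name, load_date)
    SELECT customer_id, customer_name, load_date
    FROM   staging.customer
    ON CONFLICT (customer_id)
    DO UPDATE SET customer_name = EXCLUDED.customer_name,
                  load_date     = EXCLUDED.load_date;

    -- The "windowing" alternative would instead DELETE the affected date
    -- range from ods and re-insert everything from staging for that window.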
Schema 3:
Names: Business Layer/Star Schema/Reporting Layer/Datamart/Sematic Layer
Layer 2 is usually a replica of an OLTP data model (OLTP is optimised for entering data). In this layer it is transformed into a data model that is optimised for reporting.
The tried and tested data model here is a star schema. It's been around for decades. If you research any reporting tool (e.g. Power BI), they all say that the preferred data model to report from is a star schema. Yes, a star schema is denormalised, but it has other benefits beyond performance; for example, it is more easily understood by a business user, supports slowly changing dimensions, etc.
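For illustration, a typical reporting query against such a star schema might look like this (schema, table, and column names are made up):

    -- Weekly totals straight off the reporting star schema.
    SELECT d.calendar_year, d.week_of_year, SUM(f.measure_value) AS total_value
    FROM   reporting.fact_measurement AS f
    JOIN   reporting.dim_date         AS d ON d.date_key = f.date_key
    GROUP  BY d.calendar_year, d.week_of_year
    ORDER  BY d.calendar_year, d.week_of_year;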
All these concepts are explained further online, but if you have any specific questions I'm happy to expand further.
We have a 4-5 TB SQL Server database. The largest table is around 800 GB and contains 100 million rows. 4-5 other comparable tables are 1/3 to 2/3 of this size. We went through a process of creating new indexes to optimize performance. While performance certainly improved, we saw that the newly inserted data was the slowest to query.
It's a financial reporting application with a BI tool working on top of the database. The data is loaded overnight, continuing into the late morning, though the majority of the data is loaded by 7am. Users start to query data around 8am through the BI tool and are most concerned with the latest (daily) data.
I wanted to know whether newly inserted data causes indexes to go out of order. Is there anything we can do to get better performance on the newly inserted data than on the old data? I hope I have explained the issue well here. Let me know in case of any missing information. Thanks
Edit 1
Let me describe the architecture a bit.
I have a base table (let's call it Base) with (Date, Id) as the clustered index.
It has around 50 columns.
Then we have 5 derived tables (Derived1, Derived2, ...), according to different metric types, which also have (Date, Id) as their clustered index and a foreign key constraint to the Base table.
Tables Derived1 and Derived2 have 350+ columns. Derived3, 4 and 5 have around 100-200 columns. There is one large view created to join all the data tables, due to limitations of the BI tool. (Date, Id) are the join columns for all the tables joining to form the view (hence I created the clustered index on those columns). The main concern is BI tool performance. The BI tool always uses the view and generally sends similar queries to the server.
There are other indexes as well on other filtering columns.
The main question remains - how to prevent performance from deteriorating.
In addition I would like to know
Whether an NCI on (Date, Id) on all tables would be a better bet in addition to the clustered index on (Date, Id).
Does it make sense to have 150 columns as included columns in the NCI for the derived tables?
You have about 100 million rows, increasing every day with new portions, and those new portions are what is usually selected. With those numbers I would use partitioned indexes rather than regular indexes.
Your solution within SQL Server would be partitioning. Take a look at SQL partitioning and see if you can adopt it. Partitioning is a form of clustering where groups of data share a physical block. If you partition by year and month, for example, all 2018-09 records share the same physical space and are easy to find, so if you select records with those filters (plus more), it is as if the table were only the size of the 2018-09 records. That is not exactly accurate, but it is quite close. Be careful with the data values used for partitioning: unlike a standard PK cluster, where each value is unique, the partitioning column(s) should produce a manageable set of distinct combinations, each of which becomes a partition.
If you cannot use partitions, you have to create 'partitions' yourself using regular indexes. This will require some experimentation. The basic idea is a value (a number?) indicating a wave, or set of waves, of imported data. For example, data imported today and over the next 10 days is wave '1', the next 10 days are wave '2', and so on. By filtering on the latest 10 waves, you work on the latest 100 days of imports and effectively skip all the rest of the data. Roughly, if you divided your existing 100 million rows into 100 waves, started at wave 101, and searched for waves 90 or greater, you would have only about 10 million rows to search, provided SQL Server is made to use the new index first (it will eventually).
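A minimal sketch of that manual "wave" approach in T-SQL, assuming a new, purely illustrative LoadWave column that the load process populates:

    -- Tag each load batch; existing rows could be backfilled into ~100 waves.
    ALTER TABLE dbo.Base ADD LoadWave int NOT NULL DEFAULT 0;

    -- Narrow index so filters on recent waves touch few pages.
    CREATE NONCLUSTERED INDEX IX_Base_LoadWave
        ON dbo.Base (LoadWave)
        INCLUDE ([Date], Id);

    -- Reporting queries then restrict to the most recent waves, e.g.:
    SELECT Id, [Date]
    FROM   dbo.Base
    WHERE  LoadWave >= 90;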
This is a broad question, especially without knowing your system. But one thing I would try is manually updating your statistics on the indexes/tables once you are done loading data. With tables that big, it is unlikely that you will manipulate enough rows to trigger an auto-update. Without clean stats, SQL Server won't have an accurate histogram of your data.
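For example, using the table names from the question and assuming the dbo schema; WITH FULLSCAN gives the most accurate histogram but is the slowest option on tables this size, so a sampled update may be preferable:

    -- Refresh optimizer statistics after the nightly load completes.
    UPDATE STATISTICS dbo.Base;
    UPDATE STATISTICS dbo.Derived1 WITH FULLSCAN;  -- full scan: accurate but slow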
Next, dive into your execution plans and see what operators are the most expensive.
I am using SQLite in my application. The scenario is that I have stock market data and each company is a database with 1 table. That table stores records which can range from couple thousand to half a million.
Currently, when I update the data in real time, I open a connection, check whether that particular data already exists, insert it if it does not, and close the connection. This is done in a loop and each database (representing a company) is updated. The number of records inserted is low and is not the problem. But is this process okay?
An alternate way is to have 1 database with many tables (each company can be a table) and each table can have a lot of records. Is this better or not?
You can expect around 500 companies. I am coding in VS 2010; the language is VB.NET.
The optimal organization for your data is to make it properly normalized, i.e., put all data into a single table with a company column.
This is better for performance because the table- and database-related overhead is reduced.
Queries can be sped up with indexes, but what indexes you need depends on the actual queries.
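A minimal sketch of that single-table layout in SQLite (the table and column names are illustrative); note that INSERT OR IGNORE also replaces the manual check-then-insert step:

    -- One table for all ~500 companies, keyed by company and date.
    CREATE TABLE quotes (
        company    TEXT NOT NULL,
        trade_date TEXT NOT NULL,   -- ISO-8601 date string
        price      REAL NOT NULL,
        PRIMARY KEY (company, trade_date)
    );

    -- Skips rows whose (company, trade_date) already exists, so no
    -- separate existence check is needed before inserting.
    INSERT OR IGNORE INTO quotes (company, trade_date, price)
    VALUES ('ACME', '2013-05-01', 101.5);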
I did something similar, with similarly sized data in another field. It depends a lot on your indexes. Ultimately, separating each large table was best (1 table per file, representing a cohesive unit, in your case one company). Plus you gain the advantage of each company's table having the same name, versus having x tables with different names that share the same schema (and no sanitizing of company names to create new tables is required).
Internally, other DBMSs often keep at least one file per table in their internal structure; SQL is thus just a layer of abstraction above that. SQLite (despite its creators' boasting) is meant for small projects, and querying larger data models will get more finicky to make it work well.
I have a question about creating a data warehouse.
We have a system that generates more than 50 million records per day. I do some pre-processing on these records and then load them into a table in the database.
Now you can see the problem: the size of a single table and how to manage it (after about 15 days of loading ~50M records/day), and I need to keep records up to 60 days old.
Now my question is: what is the best way to design my data warehouse:
to use a different table for every day (or, say, every week),
OR to use a single table with many partitions,
OR some other approach that you find better for my case?
I need a starting point for my DWH design. I'm using Oracle 11g as my database.
Use partitioning if it's available.
Partitioning gets you the best of both worlds. You can access all your data at once, in one simple table. If the query predicates or partition-name syntax are used correctly, the table will act as if it were magically much smaller than it really is. And you can manage the data by day - bulk operations like loading and dropping data can be done in a way that affects only a single day's worth of data.
Interval partitioning makes things even easier. You don't even have to specify the partitions; just tell Oracle, "make each day a new partition". There are a few new things to learn, but it's a small price to pay for a significant boost in performance and manageability.
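A minimal sketch in Oracle 11g (the table and column names are illustrative):

    -- Range partitioning by load date with a daily interval: Oracle creates
    -- a new partition automatically for each new day of data.
    CREATE TABLE fact_events (
        event_id  NUMBER         NOT NULL,
        load_date DATE           NOT NULL,
        payload   VARCHAR2(4000)
    )
    PARTITION BY RANGE (load_date)
    INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
    (
        PARTITION p_initial VALUES LESS THAN (DATE '2015-01-01')
    );

    -- The 60-day retention can then be handled by dropping whole partitions,
    -- e.g.: ALTER TABLE fact_events DROP PARTITION FOR (DATE '2015-01-01');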
If you're using Enterprise Edition and have already licensed the partitioning option then there's no reason not to use it.
I'm a long time programmer who has little experience with DBMSs or designing databases.
I know there are similar posts regarding this, but am feeling quite discombobulated tonight.
I'm working on a project which will require that I store large reports, multiple times per day, and have not dealt with storage or tables of this magnitude. Allow me to frame my problem in a generic way:
The process:
A script collects roughly 300 rows of information, set A, 2-3 times per day. The structure of these rows never changes; the rows contain two columns, both integers.
The script also collects roughly 100 rows of information, set B, at the same time. The structure of these rows does not change either; the rows contain eight columns, all strings.
I need to store all of this data. Set A will be used frequently, and daily for analytics. Set B will be used frequently on the day that it is collected and then sparingly in the future for historical analytics. I could theoretically store each row with a timestamp for later query.
If stored linearly, with both sets of data in their own tables in a DBMS, the data will reach ~300k rows per year. Having little experience with DBMSs, this sounds like a lot for two tables to manage.
I feel as though throwing this information into a database with each pass of the script will lead to slow read times and poor general responsiveness. For example, generating an Access database and tossing this information into two tables seems like too easy a solution.
I suppose my question is: how many rows is too many for a table in terms of performance? I know that it would be in very poor taste to create tables for each day or month.
Of course this leads into my next, similar, issue: audit logs...
300 rows about 50 times a day for 6 months is not a big load for any DB. Which DB are you going to use? Most will handle this very easily. There are a couple of techniques for handling data fragmentation if the row counts exceed more than a few hundred million per table, but with effective indexing and cleaning you can achieve the performance you desire. I myself deal with heavy data tables of more than 200 million rows every week.
Make sure you have indexes in place to match the queries you will issue to fetch that data. Whatever you have in the WHERE clause should have an appropriate index in the DB.
If your row counts per table exceed many millions, you should look at partitioning of tables. DBs actually store data in the filesystem as files, so partitioning helps by making smaller groups of data files based on some predicate, e.g. a date or some other column. You would still see it as a single table, but on the file system the DB would store the data in different file groups.
Then you can also try table sharding, which is actually what you mentioned: different tables based on some predicate like date.
Hope this helps.
You are overthinking this. 300k rows is not significant. Just about any relational or NoSQL database will handle it without problems.
Your design sounds fine; however, I highly advise that you use the database's facility to add a primary key for each row, with whatever mechanism is available to you. Typically this involves using AUTO_INCREMENT or a sequence, depending on the database. If you used a NoSQL store like Mongo, it will add an id for you. Relational theory depends on having a primary key, and it's often helpful to have one for diagnostics.
So your basic design would be:
Table A tableA_id | A | B | CreatedOn
Table B tableB_id | columns… | CreatedOn
The CreatedOn column will facilitate date-range queries that limit data for summarization purposes and allow you to GROUP BY on date boundaries (days, weeks, months, years).
Make sure you have an index on CreatedOn, if you will be doing this type of grouping.
Also, use the smallest data types you can for any of the columns. For example, if the range of the integers falls below a particular limit, or is non-negative, you can usually choose a datatype that will reduce the amount of storage required.
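As a rough sketch of that design (MySQL-style AUTO_INCREMENT assumed, since that is one of the mechanisms mentioned above; the exact types are illustrative):

    -- Set A: two integer columns per row, plus a surrogate key and timestamp.
    CREATE TABLE TableA (
        tableA_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        A         SMALLINT     NOT NULL,   -- smallest type that fits the range
        B         SMALLINT     NOT NULL,
        CreatedOn DATETIME     NOT NULL,
        KEY idx_tablea_createdon (CreatedOn)
    );

    -- Set B: eight string columns per row (only one shown here).
    CREATE TABLE TableB (
        tableB_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        col1      VARCHAR(255) NOT NULL,
        -- ... seven more string columns
        CreatedOn DATETIME     NOT NULL,
        KEY idx_tableb_createdon (CreatedOn)
    );

    -- Example of the kind of date-boundary grouping the CreatedOn index supports:
    SELECT DATE(CreatedOn) AS collection_day, COUNT(*) AS rows_collected
    FROM   TableA
    GROUP  BY DATE(CreatedOn);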