Analytics alongside an OLTP database

I have a primary use case where I want to have a transactional relational database for which I am using Postgres.
I also need to run frequent aggregate queries (count, sum, average) on the data. These statistics cannot be precomputed because we have to support multiple search filters.
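For example, a typical query looks something like this (table and column names here are only illustrative):

    SELECT category,
           COUNT(*)   AS order_count,
           SUM(total) AS revenue,
           AVG(total) AS avg_order_value
    FROM   orders
    WHERE  created_at >= now() - interval '30 days'
      AND  region = 'EU'          -- any combination of filters may be applied
      AND  status = 'completed'
    GROUP  BY category;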
I was initially thinking of using Redshift as secondary storage to serve these queries, but then I would also need to build a system to keep the data in sync between the two stores.
Is there a better way to achieve this?

Take a look at AWS DMS. You can set it up to keep a near-real-time replica of your Postgres data in Redshift.
It is reliable and requires minimal maintenance (e.g. it copes when you add new columns to your source data).
Read both of these carefully, especially limitations and requirements.
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html
and
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html
Unless you need them, I recommend excluding text (and other large object) columns from the sync. This can be done easily by setting a flag, or it can be tailored column by column.
The source Postgres database does not have to be held on AWS.
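As a rough illustration (schema, table and column names are placeholders), a DMS table-mapping transformation rule can drop a large text column from the replication; the blanket "flag" I mentioned is the task's LOB setting (the option to not include LOB columns):

    {
      "rules": [
        {
          "rule-type": "selection",
          "rule-id": "1",
          "rule-name": "include-all",
          "object-locator": { "schema-name": "public", "table-name": "%" },
          "rule-action": "include"
        },
        {
          "rule-type": "transformation",
          "rule-id": "2",
          "rule-name": "drop-notes-column",
          "rule-target": "column",
          "object-locator": { "schema-name": "public", "table-name": "orders", "column-name": "notes" },
          "rule-action": "remove-column"
        }
      ]
    }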

Related

Loading data from SQL Server to Elasticsearch

Looking for suggestions on loading data from SQL Server into Elasticsearch or any other data store. The goal is to have transactional data available in real time for reporting.
We currently use a 3rd party tool, in addition to SSRS, for data analytics. The data transfer is done using daily batch jobs and as a result, there is a 24 hour data latency.
We are looking to build something out that would allow for more real time availability of the data, similar to SSRS, for our Clients to report on. We need to ensure that this does not have an impact on our SQL Server database.
My initial thought was to do a full dump of the data over the weekend and then apply writes in real time during weekdays.
Thanks.
Elasticsearch's main use cases are providing search-type capabilities on top of large volumes of unstructured, text-based data. For example, if you were ingesting large batches of emails into your data store every day, Elasticsearch is a good tool for parsing out pieces of those emails based on rules you set up, to enable searching (and to some degree querying) of those email messages.
If your data is already in SQL Server, it sounds like it's already structured, and therefore there's not much to gain from Elasticsearch in terms of reporting and availability. Rather, you'd likely be introducing extra complexity into your data workflow.
If you have structured data in SQL Server already, and you are experiencing issues with reporting directly off of it, you should look at building a data warehouse instead to handle your reporting. SQL Server comes with a number of features out of the box to help you replicate your data for this very purpose. The three main features you could look into are AlwaysOn Availability Groups, Replication, and SSIS.
Each option above (in addition to other out-of-the-box features of SQL Server) has different pros and cons. For example, AlwaysOn Availability Groups are very easy to set up and offer automatic failover if your main server has an outage, but they clone the entire database to a replica. Replication lets you choose more granularly to copy only specific tables and views, but then you can't fail over as easily if your main server has an outage. So you should read up on all three options and understand their differences.
Additionally, if you're having specific performance problems trying to report off of the main database, you may want to dig into the root cause of those problems first before looking into replicating your data as a solution for reporting (although it's a fairly common solution). You may find that a simple architectural change like using a columnstore index on the correct Table will improve your reporting capabilities immensely.
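For example, adding a nonclustered columnstore index to a (hypothetical) reporting table is a single statement:

    -- Columnstore indexes dramatically speed up scans and aggregations for reporting queries
    CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Sales_Columnstore
        ON dbo.Sales (OrderDate, CustomerId, ProductId, Quantity, Amount);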
I've been down both paths, implementing Elasticsearch and implementing a data warehouse using all three of the main data synchronization features above, for structured data and for unstructured large text data, and have experienced the proper use cases for both. One data warehouse I managed in the past had tables with billions of rows (each table terabytes in size), and it was highly performant to report off of on fairly modest hardware in AWS (we weren't even using Redshift).

Need a solution to get rid of multiple databases

In my company we have a multi-database structure hosted in SQL Server.
For example, whenever a new customer signs up with us, we create a new DB in SQL Server to hold their data.
Right now we already have 2000+ DBs on our database server. We expect more customers to sign up in the near future, which might push the count past 5000.
Having 5000+ DBs, with the count still growing, does not seem advisable. We sometimes run tasks that span all the DBs, and if we have to run them across 5000+ DBs we will surely end up with performance issues.
What would be an alternative solution that avoids creating a separate DB for each and every customer while still keeping their data separate?
I keep hearing about Big Data and other database solutions but cannot get a clear picture.
Can someone shed some light on this?
If the databases have an identical schema you could combine them into one. That way each customer's data becomes a set of rows in the new database; a new customer will probably just be a few new rows in the tables that store customer profiles.
You can use row-level security to restrict access to each customer's data:
https://msdn.microsoft.com/en-us/library/dn765131.aspx
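A minimal sketch of that approach, assuming each shared table carries a CustomerId column and the caller's customer id is placed in SESSION_CONTEXT (all names are hypothetical; this needs SQL Server 2016 or later):

    -- Inline predicate function: a row is visible only to its own customer
    CREATE FUNCTION dbo.fn_CustomerFilter(@CustomerId int)
        RETURNS TABLE
        WITH SCHEMABINDING
    AS
        RETURN SELECT 1 AS fn_result
               WHERE  @CustomerId = CAST(SESSION_CONTEXT(N'CustomerId') AS int);
    GO

    CREATE SECURITY POLICY dbo.CustomerPolicy
        ADD FILTER PREDICATE dbo.fn_CustomerFilter(CustomerId) ON dbo.Orders
        WITH (STATE = ON);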
For the pros and cons of this approach compared to your existing setup, see: Pros/Cons Using multiple databases vs using single database and Single or multiple databases
Other options would provide a great learning opportunity, but they may carry a significant transition cost even if some of them were indeed better.
One solution I would suggest is to use a per-customer prefix on table names. You can then handle the security issue by limiting each customer to their own set of tables.
The con is that you will have to rewrite your application to apply the prefix to each table name whenever it accesses a table. If you have a lot of tables, that will be a problem.
I think this is how some multi-site WordPress hosts handle their database issue.
You should consider whether you just store the data and access it with simple queries, or whether you usually run complex queries. If you just store the data, access it with simple queries, and your needs are not 100% relational, maybe you should consider moving part of your data to the HDFS file system:
https://en.wikipedia.org/wiki/Apache_Hadoop#HDFS .
To process the data in Hadoop there are many tools, but the rising one for sure is Spark:
https://en.wikipedia.org/wiki/Apache_Spark
Probably the best approach is to start by moving your historic data into HDFS just for storage, and keep the rest as it is until you become confident with the Hadoop and Spark paradigm.
Hadoop provides a distributed, fault-tolerant file system (HDFS), and Spark is an engine for batch processing huge amounts of structured or unstructured data. Keep in mind that data in Hadoop is usually not structured, so you have to change the way you process it. If you still want to use SQL, I suggest checking out Impala and Hive as well:
http://impala.io/
https://hive.apache.org/
Take a look at the Cloudera web site for a more structured, integrated solution, instead of a lot of individual tools that you will need to organize yourself:
http://www.cloudera.com/content/www/en-us/solutions.html
They have a quick-start VM for trying all the Hadoop ecosystem tools, which is probably the best way to start experimenting:
http://www.cloudera.com/content/www/en-us/downloads/quickstart_vms/5-4.html
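To give an idea of what the SQL-on-Hadoop side looks like, here is a minimal Hive sketch (the path and columns are just examples) that exposes archived CSV files on HDFS as a queryable table:

    -- External table: Hive only stores metadata, the files stay where they are on HDFS
    CREATE EXTERNAL TABLE archived_orders (
        customer_id  INT,
        order_id     BIGINT,
        order_date   STRING,
        amount       DOUBLE
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/data/archive/orders';

    SELECT customer_id, SUM(amount) AS total_spent
    FROM   archived_orders
    GROUP  BY customer_id;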

Need Suggestions: Utilizing columnar database

I am working on a highly performance-sensitive dashboard project where the results are mostly aggregated data mixed with non-aggregated data. The first page is loaded by 8 different complex queries returning this mixed data. The dashboard is served by a centralized database (Oracle 11g) which receives data from many systems in real time (via a replication tool). The data shown is produced by very complex queries (multiple joins, counts, group-bys and many WHERE conditions).
The issue is that as the data grows, the DB queries are taking more time than defined/agreed. I am thinking of moving the aggregation (all the counts) to a columnar database, say HBase, while the rest of the row-level data is still fetched from Oracle; the two result sets would be merged on a key in the app layer. I need an expert opinion on whether this is the correct approach.
There are few things which are not clear to me:
1. Will Sqoop be able to load data based on a query/view, or only whole tables? On a continuous basis, or only one time?
2. If a record is modified (e.g. its status changes), how will HBase get to know?
My two cents: HBase is a NoSQL database built for fast lookup queries, not for aggregated, ad-hoc queries.
If you are planning to use a Hadoop cluster, you can try Hive with the Parquet storage format. If you need near-real-time queries, you can go with an MPP database; commercial options are Vertica or maybe Redshift from Amazon, and for an open-source solution you can use Infobright.
These columnar options are going to give you great aggregate query performance.
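For example, a pre-aggregated Parquet table in Hive might be built roughly like this (table and column names are invented for illustration):

    -- CTAS into a columnar (Parquet) layout; aggregate scans over this are much cheaper
    CREATE TABLE dashboard_counts
    STORED AS PARQUET
    AS
    SELECT status,
           region,
           COUNT(*)    AS cnt,
           AVG(amount) AS avg_amount
    FROM   orders
    GROUP  BY status, region;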

Hadoop and database

I am currently looking at an issue where I am trying to integrate Hadoop with a database, since Hadoop offers parallelism but not performance. I was referring to the HadoopDB paper. Hadoop usually takes a file, splits it into chunks and places these chunks on different data nodes. During processing the namenode tells where a chunk can be found and a map task is run on that node. I am looking at the possibility of the user telling the namenode which datanode to run the map on, so that the map gets its data either from a file or from a database. Can you kindly tell me whether it is feasible to tell the namenode which datanode should run the map?
Thanks!
I'm not sure why you would like to tie a map/reduce task to a particular node. What happens if that particular node goes down? In Hadoop, map/reduce operations cannot be tied to a particular node in the cluster; that is part of what makes Hadoop scalable.
Also, you might want to take a look at Apache Sqoop for importing/exporting data between Hadoop and a database.
If you are looking to query data from a distributed data store, why don't you consider storing your data in HBase, which is a distributed database built on top of Hadoop and HDFS? It stores data in HDFS behind the scenes and gives you query semantics like a big database, so you don't have to worry about issuing queries to the right data node; HBase (also known as the Hadoop database) takes care of that for you.
For easy querying and storage, and if your data is time-series data, you can also consider OpenTSDB, which is a wrapper around HBase that provides easy tag-based query semantics and integrates nicely with gnuplot to give you graph-like visualization of your data.
HBase is very well suited for random reads/writes to a very large distributed data store. However, if your queries operate on bulk reads/writes, Hive may be a well-suited solution for your case. Similar to HBase, it is built on top of Hadoop MapReduce and HDFS and converts each query into underlying map-reduce jobs. The best thing about Hive is that it provides SQL-like semantics and you can query it just as you would a relational database.
As far as the organization of data and a basic introduction to Hive's features are concerned, you may like to go through the following points:
Hive adds structure to the data stored on HDFS. The schema of tables is stored in a separate metadata store. It converts SQL like semantics to multiple map reduce jobs running on HDFS in the backend.
Traditional databases follow the schema on write policy where once a schema is designed for a table, at the time of writing data itself, it is checked whether the data to be written conforms to the pre-defined schema. If it does not, the write is rejected.
In the case of Hive, it is the opposite: it uses a schema-on-read policy. Both policies have their own trade-offs. With schema on write, load time is longer and loads are slower because schema conformance is verified at the time of loading the data; however, it provides faster query time because the data can be indexed based on predefined columns in the schema. There may be cases where the indexing cannot be specified while initially populating the data, and this is where schema on read comes in handy: it provides the option to have 2 different schemas on the same underlying data, depending on the kind of analysis required.
Hive is well suited for bulk access and bulk updates of data, since an update requires a completely new table to be constructed. Also, query time is slower than in traditional databases because of the absence of indexing.
Hive stores the metadata into a relational database called the “Metastore”.
There are 2 kinds of tables in Hive:
Managed tables - the data files for the table are moved into the Hive warehouse directory on HDFS (in general, or any other Hadoop filesystem). When the table is dropped, both the metadata and the data are deleted from the filesystem.
External tables - here you can create data for the table lazily. No data is moved into the Hive warehouse directory, and the schema/metadata is only loosely coupled to the actual data. When the table is dropped, only the metadata is deleted and the actual data is left untouched. This is helpful if you want the data to be used by multiple databases, or when you need multiple schemas on the same underlying data.
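A short illustration of the managed vs. external difference (names and the location are hypothetical):

    -- Managed table: Hive owns the files; DROP TABLE removes data and metadata
    CREATE TABLE logs_managed (ts STRING, msg STRING);

    -- External table: Hive only tracks metadata; DROP TABLE leaves the files untouched
    CREATE EXTERNAL TABLE logs_external (ts STRING, msg STRING)
    LOCATION '/data/shared/logs';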

Synchronizing 2 databases with different schemas

We have a normalized SQL Server 2008 database designed using generic tables. So, instead of having a separate table for each entity (e.g. Products, Orders, OrderItems, etc), we have generic tables (Entities, Instances, Relationships, Attributes, etc).
We have decided to have a separate denormalized database for quick retrieval of data. Could you please advise me of various technologies out there to synchronize these 2 databases, assuming they have different schemas?
Cheers,
Mosh
When two databases have such radically different schemas you should be looking at techniques for data migration or replication, not synchronization. SQL Server provides two technologies for this, SSIS and Replication, or you can write your own script to do it.
Replication will take new or modified data from a source database and copy it to a target database. It provides mechanisms for scheduling, packaging and distributing changes and can handle both real-time as well as batch updates. To work it needs to add enough info in both databases to track modifications and matching rows. In your case it would be hard to identify which "Products" have changed as you would have to identify all relevant modified rows in 4 or more different tables. It can be done but it will require some effort. In any case, you would have to create views that match the target schema, as replication doesn't allow any transformation of the source data.
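As a sketch of what such a flattening view might look like (the generic table and column names below are guesses at your schema):

    CREATE VIEW dbo.vw_Products
    AS
    SELECT  i.InstanceId AS ProductId,
            MAX(CASE WHEN a.Name = 'ProductName' THEN a.Value END) AS ProductName,
            MAX(CASE WHEN a.Name = 'Price'       THEN a.Value END) AS Price
    FROM    dbo.Instances  AS i
    JOIN    dbo.Attributes AS a ON a.InstanceId = i.InstanceId
    WHERE   i.EntityType = 'Product'
    GROUP BY i.InstanceId;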
SSIS will pull data from one source, transform it and push it to a target. It has no built-in mechanisms for tracking changes so you will have to add fields to your tables to track changes. It is strictly a batch process that can run according to a schedule. The main benefit is that you can perform a wide variety of transformations while replication allows almost none (apart from drawing the data from a view). You could create dataflows that modify only the relevant Product field when a Product related Attribute record changes, or simply reconstitute an entire Product record and overwrite the target record.
Finally, you can create your own triggers or stored procedures that will run when the data changes and copy it from one database to the other.
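A rough sketch of that trigger approach, assuming both databases sit on the same instance and using invented table and column names:

    CREATE TRIGGER dbo.trg_Attributes_Sync
    ON dbo.Attributes
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Push changed product names straight into the denormalized reporting database
        UPDATE p
        SET    p.ProductName = i.Value
        FROM   ReportingDb.dbo.Products AS p
        JOIN   inserted                 AS i ON i.InstanceId = p.ProductId
        WHERE  i.Name = 'ProductName';
    END;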
I should also point out that you have probably over-normalized your database. In all three cases you will pay a performance penalty when you join all the tables to reconstitute an entity, resulting in more locking than necessary and inefficient use of indexes. You are sacrificing performance and scalability for the sake of ease of change.
Perhaps you should take a look at the Sparse Column feature of SQL Server 2008 for a way to support flexible schemas while maintaining performance and scalability.
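A minimal example of sparse columns plus a column set (the columns are purely illustrative):

    CREATE TABLE dbo.Products (
        ProductId  int           NOT NULL PRIMARY KEY,
        Name       nvarchar(200) NOT NULL,
        -- Rarely populated attributes cost no storage when NULL
        Weight     decimal(10,2) SPARSE NULL,
        Color      nvarchar(50)  SPARSE NULL,
        -- Optional: expose all sparse columns as a single XML column set
        OtherAttributes xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
    );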
