Data Structures and Databases

I recently completed the course Algorithms, Part I and learned about a lot of data structures that are used to store data in appropriate ways.
I realize that today, for maintaining data, we use databases rather than writing our own data structures. We can't change the data structures used by the database (or can we?). So when we go for permanent storage, do we always use a database (won't that hurt performance), or is there a way in which both are used in combination?

Databases are carefully designed to perform efficiently under a wide range of conditions, and they are typically used to store large amounts of data. If your needs are simpler and smaller, like those of a small program, the basic data structures you learned would work fine on top of flat files.
Also, I think you are comparing flat-file storage, where the data structures you write are directly visible to you, with databases, where you can't see the data structures they use to store the data and perform data-manipulation operations, because they are not exposed to the user.
But databases themselves use hashing, indexing, and other data structures to implement their storage techniques. They are efficiently implemented on top of internal data structures that you don't see from the outside, but they do rely on the same techniques.
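As a loose illustration of the indexing idea (not how any particular engine does it internally), here is a minimal Python sketch: a hash index built over a flat file of records, so a lookup can seek straight to a record instead of scanning the whole file. The file format and field names are invented for the example.

```python
# Hypothetical flat file: one "id,name,score" record per line.
def build_index(path):
    """Map each record's id to its byte offset in the file (a hash index)."""
    index = {}
    with open(path, "rb") as f:
        while True:
            offset = f.tell()
            line = f.readline()
            if not line:
                break
            record_id = line.split(b",", 1)[0].decode()
            index[record_id] = offset
    return index

def fetch(path, index, record_id):
    """Seek straight to the record instead of scanning the whole file."""
    with open(path, "rb") as f:
        f.seek(index[record_id])
        return f.readline().decode().rstrip("\n")

# Usage:
# idx = build_index("players.txt")
# print(fetch("players.txt", idx, "42"))
```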

The data structures you learned (I suppose), like lists, maps, trees, etc., are core concepts of modern relational databases.
For example:
The B-tree is used in many databases. The B+-tree is used in many well-known databases, as well as in common filesystems like NTFS.
SQLite uses B+ trees.
SQL Server uses heaps or B-trees.
The majority of databases use a combination of these data structures, sometimes implementing customized versions, and these are optimized for high performance.
For permanent storage you could use the file system and implement any data structure of your choice by hand, but that would be reinventing the wheel.
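As a small, hedged illustration using Python's built-in sqlite3 module (the table and index names are made up): when you create a table or an index in SQLite, the engine builds and maintains the underlying B-tree for you.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path instead for permanent storage
conn.execute("CREATE TABLE scores (player TEXT, points INTEGER)")
conn.execute("CREATE INDEX idx_points ON scores (points)")  # stored as a B-tree internally
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("alice", 12), ("bob", 7), ("carol", 31)])

# The query planner decides whether using the B-tree index is worthwhile here.
for row in conn.execute("SELECT player FROM scores WHERE points > 10"):
    print(row)
```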

Related

Where does Big Data go and how is it stored?

I'm trying to get to grips with Big Data, and mainly with how Big Data is managed.
I'm familiar with the traditional form of data management and data life cycle; e.g.:
Structured data collected (e.g. web form)
Data stored in tables in an RDBMS on a database server
Data cleaned and then ETL'd into a Data Warehouse
Data is analysed using OLAP cubes and various other BI tools/techniques
However, in the case of Big Data, I'm confused about the equivalent of points 2 and 3, mainly because I'm unsure whether every Big Data "solution" always involves a NoSQL database to handle and store unstructured data, and also what the Big Data equivalent of a Data Warehouse is.
From what I've seen, in some cases NoSQL isn't always used and can be totally omitted - is this true?
To me, the Big Data life cycle goes something along these lines:
Data collected (structured/unstructured/semi)
Data stored in NoSQL database on a Big Data platform; e.g. HBase on MapR Hadoop distribution of servers.
Big Data analytic/data mining tools clean and analyse data
But I have a feeling that this isn't always the case, and point 3 may be totally wrong altogether. Can anyone shed some light on this?
When we talk about Big Data, we are in most cases talking about a huge amount of data that is often being written constantly. The data can have a lot of variety as well. Think of a typical Big Data source as a machine on a production line that constantly produces sensor readings for temperature, humidity, etc. Not the typical kind of data you would find in your DWH.
What would happen if you transformed all this data to fit into a relational database? If you have worked with ETL a lot, you know that extracting from the source, transforming the data to fit a schema, and then storing it takes time and becomes a bottleneck. Creating a schema up front is too slow. This solution is also usually too costly, since you need expensive appliances to run your DWH; you would not want to fill them with sensor data.
You need fast writes on cheap hardware. With Big Data you store the data schemaless at first (often referred to as unstructured data) on a distributed file system. This file system splits the data into blocks (typically around 128 MB) and distributes them across the cluster nodes. Because the blocks are replicated, individual nodes can go down without losing data.
If you are coming from the traditional DWH world, you are used to technologies that work well with data that is well prepared and structured. Hadoop and friends are good at hunting for insights, like searching for the needle in the haystack. You gain the power to generate insights by parallelising data processing, which lets you work through huge amounts of data.
Imagine you have collected terabytes of data and want to run some analysis on it (e.g. a clustering). If you had to run it on a single machine, it would take hours. The key idea of Big Data systems is to parallelise execution in a shared-nothing architecture. If you want to increase performance, you add hardware to scale out horizontally, and with that you speed up processing over a huge amount of data.
Looking at a modern Big Data stack, you have data storage at the bottom. This can be Hadoop with a distributed file system such as HDFS, or a similar file system. On top of that you have a resource manager that manages access to the file system. On top of that again, you have a data processing engine such as Apache Spark that orchestrates execution on the storage layer.
On top of the core processing engine, you have applications and frameworks such as machine learning APIs that allow you to find patterns in your data. You can run either unsupervised learning algorithms to detect structure (such as a clustering algorithm) or supervised machine learning algorithms to give meaning to patterns in the data and to predict outcomes (e.g. linear regression or random forests).
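To make that stack a bit more concrete, here is a hedged sketch using PySpark and its MLlib clustering API; the input path and column names are invented, and a real deployment would sit on HDFS (or similar) behind a resource manager such as YARN.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("sensor-clustering").getOrCreate()

# Read the raw, schemaless-ish sensor dumps from the distributed file system.
df = spark.read.csv("hdfs:///data/sensors/*.csv", header=True, inferSchema=True)

# Assemble the numeric columns into a feature vector and cluster in parallel.
features = VectorAssembler(inputCols=["temperature", "humidity"],
                           outputCol="features").transform(df)
model = KMeans(k=3, featuresCol="features").fit(features)
model.transform(features).select("temperature", "humidity", "prediction").show()
```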
This is my Big Data in a nutshell for people who are experienced with traditional database systems.
Big data, simply put, is an umbrella term used to describe large quantities of structured and unstructured data that are collected by large organizations. Typically, the amounts of data are too large to be processed through traditional means, so state-of-the-art solutions utilizing embedded AI, machine learning, or real-time analytics engines must be deployed to handle it. Sometimes, the phrase "big data" is also used to describe tech fields that deal with data that has a large volume or velocity.
Big data can go into all sorts of systems and be stored in numerous ways, but it's often stored without structure first and then turned into structured data during the extract, transform, load (ETL) stage. This is the process of copying data from multiple sources into a single source, or into a different context than the one it was stored in originally. Most organizations that need to store and use big data sets will have an advanced data analytics solution. These platforms give you the ability to combine data from otherwise disparate systems into a single source of truth, where you can use all of your data to make the most informed decisions possible. Advanced solutions can even provide data visualizations for at-a-glance understanding of the information that was pulled, without the need to worry about the underlying data architecture.

What is the most effective method for handling large scale dynamic data for recommendation system?

We're thinking about a recommendation system based on large-scale data, and we're also looking for a professional way to keep a dynamic DB structure that we can work with quickly. We are considering a few alternative approaches. One is to keep everything in a normal SQL database, but that would be slower compared to using a plain file structure. The second is to use a NoSQL graph-model DB, but that is also not compatible with the algorithms we use, since we continuously pull all the data into a matrix. The final approach we are considering is to use plain files to keep the data, but then it is harder to keep track of and watch the changes, since there is no query facility or editor. So there are different methods, each with pros and cons. What would your choice be, and why?
I'm not sure why you mention "files" and "file structure" so many times, so maybe I'm missing something, but for efficient data processing you obviously don't want to store things in plain files. It is expensive to read/write data on disk, and it's hard to find an efficient, flexible way to query files sitting in a file system.
I suppose I'd start with a product that already does recommendations:
http://mahout.apache.org/
You can pick from various algorithms to run on your data for producing recommendations.
If you want to do it yourself, maybe a hybrid approach would work? You could still use a graph database to represent relationships, but then each node/vertex could be a pointer to a document database or a relational database where a more "full" representation of the data would exist.
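Since you mention pulling everything into a matrix anyway, here is a minimal sketch of item-based collaborative filtering over a user-item matrix with NumPy; the ratings are toy data and would really come from whichever store you settle on.

```python
import numpy as np

# Toy user-item rating matrix: rows = users, columns = items.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)

# Item-item cosine similarity.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / (np.outer(norms, norms) + 1e-9)

# Score items for user 0 as a similarity-weighted sum of their ratings.
user = R[0]
scores = sim @ user
scores[user > 0] = -np.inf   # don't recommend items already rated
print("recommend item", int(np.argmax(scores)))
```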

Best ORM, Simple data Structures, Strong Query analysis

What is the best ORM/DB combination for simple data structures? That is, data that contains names as identifiers and locations, but whose main interaction will be numerical data for times (sports durations) and currency-related data.
I initially want to create a sports database that will take names and statistics. Secondarily, I plan to start on an investment and stock-analysis DB.
Which ORM suits storing many numerical types and has strong query functions?
I am really not biased towards any DB engine (most likely SQLite or Mongo), so any suggestions for the best networkless DB server to suit said ORM are appreciated.
I have reviewed several options, but I don't want to influence any suggestion or opinion. For reference:
GemStone/GLASS - Smalltalk/Pharo/Squeak
Magma - Pharo/Squeak
SQLalchemy - Python
Sequel - Ruby
Access/Excel - Microsoft
I am learning Scheme but haven't seen an ORM on offer via Racket or Chicken at the moment.
Dabo - python
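For context, here is roughly the shape of thing I have in mind, sketched with the SQLAlchemy option above; it's just a sketch, not a decision for SQLAlchemy, and the table and column names are placeholders.

```python
from sqlalchemy import create_engine, Column, Integer, String, Float
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Result(Base):
    __tablename__ = "results"
    id = Column(Integer, primary_key=True)
    athlete = Column(String)
    duration_s = Column(Float)      # event duration in seconds
    prize_cents = Column(Integer)   # currency kept as integer cents

engine = create_engine("sqlite:///sports.db")   # networkless, file-based backend
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Result(athlete="A. Runner", duration_s=754.2, prize_cents=150000))
    session.commit()
    fast = session.query(Result).filter(Result.duration_s < 800).all()
    print([r.athlete for r in fast])
```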
I disagree that there is no need for an ORM with NoSQL, e.g. MongoDB. If there is any difference between the data store and the way objects are created, modified, inter-related, found, and deleted in the programming environment, then that difference needs to be made as small and unobtrusive as possible. This is the job of an ORM when working with an RDBMS. But in principle, the problem of mapping objects in one or more languages to a persistent store is much broader than just the subset where the persistent store is a relational database.
Today, with multiple levels of distributed and local storage, the problem is larger, not smaller. Data can be spread from process memory to local shared memory to local disk stores, which may be an arbitrary mix of SSDs and HDDs, and from there to distributed memory (e.g. memcached) and remote, possibly replicated stores. Not to mention mobile, local, and cloud.
The problem that ORMs were made to solve is deeper and wider today.
I wrote my first ORM in 1987, from Objective-C to a relational database core (file level). I then worked for an object database company for a few years, interfacing languages to their ODBMS. Even with an object database there was some mismatch and a need for language-specific, powerful but transparent interfaces.
In my case, I have to say that the best ORM I have used is The Sharp Factory.
It can handle thousands of tables and creates a repository, interfaces, entities, and all of the code needed to interact with the database.
The downside is that it only supports C#.

When to use an Embedded Database

I am writing an application which parses a large file, generates a large amount of data, and does some complex visualization with it. Since all this data can't be kept in memory, I did some research and I'm starting to consider embedded databases as a temporary container for this data.
My question is: is this a traditional way of solving this problem? And is an embedded database (besides structuring the data) supposed to manage data by keeping only a subset in memory (like a cache) while the rest is kept on disk? Thank you.
Edit: to clarify: I am writing a desktop application. The application will take as input a file hundreds of MB in size. After reading the file, the application will generate a large number of graphs which will be visualized. Since the graphs may have such a large number of nodes, they may not fit into memory. Should I save them into an embedded database which will take care of keeping only the relevant data in memory (do embedded databases do that?), or should I write my own sophisticated module that does that?
Tough question - but I'll share my experience and let you decide if it helps.
If you need to retain the output from processing the source file, and you use that to produce multiple views of the derived data, then you might consider using an embedded database. The reasons to use an embedded database (IMHO):
To take advantage of RDBMS features (ACID, relationships, foreign keys, constraints, triggers, aggregation...)
To make it easier to export the data in a flexible manner
To enable access to your processed data to external clients (known format)
To allow more flexible transformation of the data when preparing for viewing
Factors which you should consider when making the decision:
What is the target platform(s) (windows, linux, android, iPhone, PDA)?
What technology base? (Java, .Net, C, C++, ...)
What resource constraints are expected or need to be designed for? (RAM, CPU, HD space)
What operational behaviours do you need to take into account (connected to network, disconnected)?
On the typical modern desktop there is enough spare capacity to handle most operations. On eeePCs, PDAs, and other portable devices, maybe not. On embedded devices, very likely not. The language you use may have built-in features to help with memory management - maybe you can take advantage of those. The connectivity aspect (stateful/stateless, etc.) may affect how much you really need to keep in memory at any given point.
If you are dealing with really big files, you might consider a streaming approach so that you only have a small portion of the overall data in memory at a time - but that doesn't really mean you should (or shouldn't) use an embedded database. Plain text or binary files could work just as well (record-based, column-based, line-based... whatever).
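A minimal sketch of that streaming idea (pure Python, no database involved; the file name is a placeholder):

```python
def records(path, chunk_size=1 << 20):
    """Yield one line at a time while reading the file in fixed-size chunks,
    so only a small window of the data is ever in memory."""
    with open(path, "rb") as f:
        leftover = b""
        while chunk := f.read(chunk_size):
            lines = (leftover + chunk).split(b"\n")
            leftover = lines.pop()   # the last piece may be a partial line
            yield from lines
        if leftover:
            yield leftover

# Usage: aggregate without ever loading the whole file.
# total_records = sum(1 for _ in records("huge_input.dat"))
```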
Some databases will give you more effective ways to interact with the data once it is stored - it depends on the engine. I find that if a lot of aggregation is required over your base files (by which I mean the files you generate initially from the original source), then an RDBMS engine can be very helpful for simplifying your logic. Other options include building your base transform and then adding additional steps that process it into other temporary stores for each specific view, which are in turn processed for rendering to the target (report?) format.
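If you do go the embedded-database route for that kind of aggregation, a hedged sqlite3 sketch might look like this (schema and queries invented for illustration): the full dataset lives on disk and only query results need to fit in memory.

```python
import sqlite3

db = sqlite3.connect("derived.db")   # an embedded, on-disk store
db.execute("CREATE TABLE IF NOT EXISTS edges (src INTEGER, dst INTEGER, weight REAL)")

def load(parsed_edges):
    """Insert (src, dst, weight) tuples produced by the file parser."""
    db.executemany("INSERT INTO edges VALUES (?, ?, ?)", parsed_edges)
    db.commit()

# Let the engine do the aggregation, e.g. the 50 highest-degree nodes
# for the next chunk of the visualization:
# for node, degree in db.execute(
#         "SELECT src, COUNT(*) FROM edges GROUP BY src ORDER BY COUNT(*) DESC LIMIT 50"):
#     print(node, degree)
```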
Just a stream-of-consciousness response - hope that helps a little.
Edit:
Per your further clarification, I'm not sure an embedded database is the direction you want to take. You either need to make some simplifying assumptions for rendering your graphs, or investigate methods like segmentation (render sections of the graph, then cache the output before rendering the next section).

Database alternatives?

I was wondering what the trade-offs of using databases are, and what the other options are. Also, what problems are not well suited to databases?
I'm concerned with Relational Databases.
The concept of a database is very broad. I will make some simplifications in what I present here.
For some tasks, the most common database is the relational database, a database based on the relational model. The relational model assumes that you describe your data as rows belonging to tables, where each table has a given, fixed number of columns. You submit data on a per-row basis, meaning that you provide, in a single shot, a row containing the data for all the columns of your table. Every submitted row normally gets an identifier which is unique at the table level, sometimes at the database level. You can create relationships between entities in the relational database, for example by saying that a given cell in one table must refer to another table's row, in order to preserve the so-called referential integrity.
This model works fine, but it's not the only one out there. In some cases data is better organized as a tree. The filesystem is a hierarchical database: it starts at a root, and everything goes under this root in a tree-like structure. Another model is the key/value pair; Sleepycat BDB is basically a store of key/value entities.
LDAP is another kind of database with several advantages: it stores rather generic data, it's distributed by design, and it's optimized for reading.
Graph databases and triplestores allow you to store a graph and perform isomorphism searches. This is typically needed if you have a very generic dataset that can encompass a broad level of description of your entities, so broad that it is basically unknown in advance. This is in clear opposition to the relational model, where you create your tables with a very precise set of columns and you know what each column is going to contain.
Some column-based relational databases exist as well: instead of submitting data by row, you submit it by whole column.
So, to answer your question: a database is a method to store data. Technically, even a text file is a database, although not a particularly nice one. The choice of the model behind your database depends mostly on the typical needs of your application.
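To illustrate two of these models side by side, here is a small sketch using only Python's standard library: a relational store (sqlite3) versus a plain key/value store (dbm). The data is made up.

```python
import sqlite3
import dbm

# Relational model: fixed columns, rows, declarative queries.
rel = sqlite3.connect(":memory:")
rel.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
rel.execute("INSERT INTO people (name, city) VALUES ('Ada', 'London')")
print(rel.execute("SELECT name FROM people WHERE city = 'London'").fetchall())

# Key/value model: opaque values looked up by key, no schema, no joins.
with dbm.open("kv_store", "c") as kv:
    kv[b"person:1"] = b'{"name": "Ada", "city": "London"}'
    print(kv[b"person:1"])
```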
This is a rather broad question, but databases are well suited to managing relational data. The alternatives almost always imply designing your own data storage and retrieval engine, which for most standard/small applications is not worth the effort.
A typical scenario that is not well suited to a database is the storage of large amounts of data organized as a relatively small number of logical files; in that case a simple filesystem-like layout can be enough.
Don't forget to take a look at NoSQL databases. They are a fairly new family of technologies, well suited for data that doesn't fit or scale in a relational database.
Use a database if you have data to store and query.
Technically, most things are suited for databases. Computers are made to process data and databases are made to store them.
The only thing to consider is cost. Cost of deployment, cost of maintenance, time investment, but it will usually be worth it.
If you only need to store very simple data, flat files would be an alternative (text files).
Note: you used the generic term 'database', but there are many different types and implementations of these.
For search applications, full-text search engines (some of which are integrated into traditional DBMSes, and some of which are not) can be a good alternative, allowing both more features (various kinds of linguistic awareness, the ability to handle semi-structured data, ranking...) and better performance.
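For a small-scale taste of that, a hedged sketch using SQLite's FTS5 extension (bundled with most builds of Python's sqlite3; the document contents are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
db.executemany("INSERT INTO docs VALUES (?, ?)", [
    ("Big data storage", "Distributed file systems split data into blocks."),
    ("Relational basics", "Tables, rows and columns with referential integrity."),
])

# MATCH performs tokenised full-text search rather than LIKE's substring scan.
for row in db.execute("SELECT title FROM docs WHERE docs MATCH 'blocks'"):
    print(row)
```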
Also, I've seen applications where configuration data is stored in the database, and while this makes sense in some cases, using plain text files (or YAML, XML, and such) and loading the underlying objects during initialization may be preferable, due to the self-contained nature of such an alternative and the ease of modifying and replicating such files.
A flat log file can be a good alternative to logging to a DBMS, depending on usage, of course.
That said, in the last 10 years or so DBMSs in general have added many features to help them handle different forms of data and different search capabilities (e.g. the aforementioned full-text search, XML, smart storage/handling of BLOBs, powerful user-defined functions, etc.), which make them more versatile and hence a fairly ubiquitous service. Their strength remains mainly with relational data, however.
