Public Library Data over Time - database

I'm working on creating a better way to view the history of items held at the public library I work at. I have been using a combination of the built-in functions of SirsiDynix (our library software), Excel, and AutoHotkey to extract, manipulate, and display the data. I am currently stuck on designing a way to view the change in status of an item over time, since the system as it stands only shows information based on the last transaction. For example, if I have the following item:
0000519227318 005.54 WAL 101 EXCEL 2013 TIPS, TRICKS & TIMESAVERS Walkenbach, John, author WE-WH 2013 7 7/13/2013 6/29/2015 35
I can tell you it was created on 7/13/2013, last checked out on 6/29/2015, and has been checked out a total of 7 times. But I am unable to tell you anything about the length of those checkouts, or when they occurred, or if the book had been missing for a year in the middle of that time period.
With AutoHotkey and the SirsiDynix Director's Station I have been able to create "daily snapshot" CSV files that indicate where an item is every day. However, I am having trouble figuring out how to consolidate that information. Originally I was planning to simply add an additional column to the end of the record every day, so that after the general item information you would have a series of numbers listing the changing location. The AHK code I have to do this is somewhat slow, and I'm still working out how best to display it in Excel regardless. However, it occurred to me that there may be a much better way to handle this that could fully automate the process.
So I'm asking for suggestions for either a simple database system to use or an improvement to my current method. The queries I plan to run are simple: primarily, to type in an item number and have a chart display the status of that item over time, ideally with something that also shows whenever the total number of checkouts has increased. I have been looking at stock market charts as examples, but since most people working with those want open/close/high/low values, the responses they get seem beyond what I need. Additional queries, such as items with the longest period of time on the shelf relative to total time, would be useful although not initially required.
Any help as to what direction I may want to go would be appreciated. I have a basic understanding of AutoHotkey and Excel, and I briefly used MySQL several years ago, so I have a general feel for how a database can be used.

I'm not too familiar with your specific software or AutoHotkey, but for an efficient, secure, and scalable solution, consider any type of relational database management system (RDBMS), from server-level enterprise systems (some open-source, some proprietary) such as Oracle, SQL Server, DB2, PostgreSQL, and MySQL, to file-level databases such as SQLite and MS Access. One main thing is to try to move away from the concept of flat-file spreadsheets and applications. Excel is simply not a database and should only be used as an end-use document for reporting or graphics/analytics built on retrieved database content.
With a relational database you can maintain data across normalized, related tables linked together by primary and foreign keys. Essentially, you want to build a Library Management System, which could comprise the following tables (a rough SQL sketch follows the list):
Items - unique list of items: ISBN, Catalog, Title, Author, Publisher, Category (Fiction, Nonfiction, Reference, Media); Primary Key: ItemID (autonumber/auto-increment)
Stock - physical copies of items: Condition, Missing/Damaged Status, Cost, and Inventory Quantity; one-to-many relationship with the Items table; Primary Key: StockID, Foreign Key: ItemID
Checkouts - full history of checkout records: stock item, CheckoutDate, CheckinDate, Notes; one-to-many relationship with Stock (each copy can have many checkouts); Primary Key: CheckoutID, Foreign Key: StockID
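For illustration, a minimal sketch of that schema in SQL might look like the following (column names beyond those listed above, such as Barcode and Status, are assumptions; the auto-increment syntax shown is SQLite-style and will vary by RDBMS):

    -- Rough schema sketch; adjust types and auto-increment syntax for your RDBMS.
    CREATE TABLE Items (
        ItemID    INTEGER PRIMARY KEY AUTOINCREMENT,
        ISBN      TEXT,
        Title     TEXT NOT NULL,
        Author    TEXT,
        Publisher TEXT,
        Category  TEXT   -- Fiction, Nonfiction, Reference, Media
    );
    CREATE TABLE Stock (
        StockID       INTEGER PRIMARY KEY AUTOINCREMENT,
        ItemID        INTEGER NOT NULL REFERENCES Items(ItemID),
        Barcode       TEXT UNIQUE,   -- e.g. the item number on the spine label
        CopyCondition TEXT,
        Status        TEXT,          -- On Shelf, Checked Out, Missing, Discarded
        Cost          REAL
    );
    CREATE TABLE Checkouts (
        CheckoutID   INTEGER PRIMARY KEY AUTOINCREMENT,
        StockID      INTEGER NOT NULL REFERENCES Stock(StockID),
        CheckoutDate DATE NOT NULL,
        CheckinDate  DATE,           -- NULL while the copy is still out
        Notes        TEXT
    );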
Now with this schema, you can better manage each item throughout its life cycle (newly arrived, checked-out periods, and retired/discarded/basement-stored) with easy queries for real-time reporting. Additionally, you can use any general-purpose language (Java, C#, PHP, Python, Perl, VB) that can connect to any of the aforementioned RDBMSs to build the interface or tool for this back-end system. A host of free consoles are also available, including:
phpMyAdmin (PHP/MySQL)
pgAdmin (PostgreSQL)
SQLite Manager (Firefox add-on/SQLite)
Management Studio (SQL Server)
MS Access (Jet/ACE engine and MS Office): it is a common misconception to call MS Access a database when it is actually a GUI that connects by default to a Windows technology, the JET/ACE SQL engine; that default can be swapped out for any of the aforementioned RDBMSs.
From there, Excel can connect via ODBC/OLEDB in VBA for reporting on checked-out item status, history, and current shelf stock. Depending on the RDBMS, you can even build triggers so that as soon as an item is checked out, a record is added to the Checkouts table, or you can code this in your tool or scripts. Finally, output to txt, csv, xml, or pdf reports, plus email attachments to co-workers, the board, etc., can be integrated.
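As a small, hedged example of the kind of query this enables (using the hypothetical table and column names from the sketch above, and SQLite date functions), the full checkout history of a single barcode becomes one join that Excel can pull in and chart over time:

    -- Checkout history for one copy, with days on loan per checkout.
    SELECT i.Title,
           c.CheckoutDate,
           c.CheckinDate,
           julianday(COALESCE(c.CheckinDate, date('now')))
             - julianday(c.CheckoutDate) AS DaysOut   -- still out if CheckinDate is NULL
    FROM Checkouts c
    JOIN Stock s ON s.StockID = c.StockID
    JOIN Items i ON i.ItemID  = s.ItemID
    WHERE s.Barcode = '0000519227318'
    ORDER BY c.CheckoutDate;

A COUNT(*) over the same join gives total checkouts, so the "has the total increased" question falls out of the same data.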

Many-to-one ApexSQL Data Diff database transfer

I have searched the answers and haven't found anything relating to the project I'm currently working on. If I've missed it, a link to the relevant URL would be greatly appreciated.
Currently we have many sets in the field which run various applications to transfer field data to a master database. These applications were developed for different reasons at different times and currently send only specific data. We now have a need to send all data. I was hoping to leverage ApexSQL Data Diff, a product we already have, to transfer each in-field server's net-new row entries to the main database.
I was hoping to get some information from someone who may have already looked at this as a possibility or have implemented it in the past.
The extra fun aspect of this project is that it must fall under PCI compliance which I can figure out after the fact.
To transfer only new entries, you have to synchronize the records that exist in your source (the in-field servers) and are "missing" in your destination (the main database). ApexSQL Data Diff can do this, but you have to select all records in the Missing tab for the specific tables and unselect all records in the Additional tab. If you leave the records in the Additional tab selected, the records that exist in the main database but don't exist on the in-field servers will be deleted, so you have to be careful.
If you have the ApexSQL Diff API, you can use SynchronizeMissingRows and all missing rows in the database will be synchronized.

Alternative to Apatar for scripted data migration

I'm looking for the fastest-to-success alternative solution for related data migration between Salesforce environments with some specific technical requirements. We were using Apatar, which worked fine in testing, but late in the game it has started throwing the dreaded socket "connection reset" errors and we have not been able to resolve it - it has some other issues that are leading me to ditch it.
I need to move a modest amount of data (about 10k rows total) between several sandboxes and ultimately to a production environment. The data is spread across eight custom objects. There is a four-level-deep master-detail relationship, which obviously must be preserved.
The target environment tables are 100% empty.
The trickiest object has a master-detail and two lookup fields.
Ideally, the data from one table near the top of the hierarchy should be filtered by a simple WHERE clause, with children not under matching rows left unmigrated, but I'll settle for a solution that migrates all the data wholesale.
My fallback in this situation is going to be good old Data Loader, but it's not ideal because our schema is locked down and does not contain external ID fields, so scripting a solution that preserves all the M-D and lookups will take a while and be more error prone than I'd like.
It's been a long time since I've done a survey of the tools available, and I don't have much time to do one now, so I'm appealing to the crowd. I need an app that will be simple (able to configure and test very quickly), repeatable, and rock-solid.
I've always pictured an SFDC data migration app that you can just check off eight checkboxes from a source environment, point it to a destination environment, and it just works, preserving all your relationships. That would be perfect. Free would be nice too. Does such a shiny thing exist?
Sesame Relational Junction seems to best match what you're looking for. I haven't used it, though; so, I can't comment on its effectiveness for what you're attempting.
The other route you may want to look into is using the Bulk API or using the Data Loader CLI with Task Scheduling.
You may find this information (below), from an answer to a different question, helpful.
Here is a list of integration services (other than Apatar):
Informatica Cloud
Cast Iron
SnapLogic
Boomi
JitterBit
Sesame Relational Junction
Information on other tools, to integrate Salesforce with other databases, is available here:
Salesforce Web Services API
Salesforce Bulk API
Relational Junction has a unique feature set that supports cloning, splitting, and merging of Salesforce orgs, and will keep the relationships intact in a one-pass load. It works like this:
Download the source org to an empty database schema (any relational DBMS)
Download the target org to a second empty database schema
Run some scripts to condition the data; this varies by object. Sesame provides guidance and sample scripts, but essentially you have to set a control field to tell Relational Junction whether to create or update in Salesforce. This is also where you may need to replace source IDs with target IDs if some objects have been pre-populated during sandbox creation
Replicate the second database to the target org
Relational Junction handles the socket disconnects, timeouts, and whatever havoc happens during the unload/reload process gracefully and without creating duplicates.
This process was developed for a proof of concept at a large Silicon Valley network vendor in 2007, who became a customer. The entire down and up of 15 GB of data took 46 hours, plus about 2 days of preparation.

What should I have in mind when building OLAP solution from scratch?

I'm working for a company running a software product based on a MS SQL database server, and through the years I have developed 20-30 quite advanced reports in PHP, taking data directly from the database. This has been very successful, and people are happy with it.
But it has some drawbacks:
For new changes, it can be quite development intensive
The user can't experiment much with the data - it is locked to a hard-coded view
It can be slow for big reports
I am considering gradually moving to an OLAP-based approach, which could be queried from Excel or some web-based service. But I would like to do this in a way that introduces the least amount of new complexity into the IT environment - the fewest different services, synchronization jobs, etc.!
I have some questions in this regard:
1) Workflow-related:
What is a good development route from "black box SQL server" to "OLAP ready to use"?
Which servers and services should be set up, and which scripts should be written?
Which are the hardest/most critical/most time-intensive parts?
2) ETL:
I suppose it is best to have separate servers for the data warehouse and the production SQL Server?
How are these kept in sync (push/pull)? Using which technologies/languages?
For me, SSIS looks overly complicated, and the graphical workflow doesn't appeal much to me -- I would rather have a text-based script that does the job. Is this feasible?
Or is it advantageous to use the graphical client even with only one source and one destination?
3) Development:
How much of this (data integration, analysis services) can be efficiently maintained from a CLI-tool?
Can the setup be transferred back and forth between production and development easily?
I'm happy with any answer that covers just some of this - and even though it is a MS environment, I'm also interested to hear about advantages in other technologies.
I only have experience with Microsoft OLAP, so here are my two cents regarding what I know:
If you are implementing cubes, then separate the production SQL Server from the source for the cubes. Cubes require a lot of SELECT DISTINCT column_name FROM source.table. You don't want cube processing to block your mission critical production system.
Although you can implement OLAP cubes over standard relational tables, you will quickly find that unless your data is a ledger-style system you will probably need to fully reprocess your fact and dimension tables, and this will require re-querying the source database over and over again. That's a strong argument for building a separate data warehouse that uses ledger-style transactions for the fact tables. For instance, if a customer orders something and then cancels it, your source system may track this as a status change. In your fact table, you probably need to show this as a row for the order with a positive quantity and revenue stream and a row for the cancellation with a negative quantity and revenue stream.
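To make the ledger idea concrete, here is a hedged sketch (hypothetical fact table and column names) of how that order-then-cancel scenario lands in a ledger-style fact table as two offsetting rows rather than an update:

    -- Hypothetical FactOrders table: changes are appended as new rows, never updated in place.
    INSERT INTO FactOrders (OrderKey, DateKey, CustomerKey, Quantity, Revenue)
    VALUES (1001, 20150601, 42, 3, 59.97);     -- original order

    -- The customer cancels: append an offsetting negative row instead of updating the original.
    INSERT INTO FactOrders (OrderKey, DateKey, CustomerKey, Quantity, Revenue)
    VALUES (1001, 20150615, 42, -3, -59.97);   -- cancellation

    -- SUM(Quantity) and SUM(Revenue) now net to zero for this order
    -- without ever reprocessing historical partitions.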
OLAP may be overkill for your environment. The main issue you appear to raise is that your reports are static and users want access to the data directly. You could build a data model and give users Report Builder access in SSRS, or report-writing access in some other BI suite like Cognos, Business Objects, etc. I don't generally recommend this approach since it is way beyond what most users should have to know to get data, but in a small shop this may be sufficient and it is easy to implement. Let's face it -- users generally just want to get the data into Excel to manipulate it further. So if you don't want to give them a web front-end and you just want them to get to the data from Excel, you could give them direct database access to a copy of the production data. The downside of this approach is that users don't generally understand SQL or database relationships. OLAP helps you avoid forcing users to learn SQL or relationships, but it isn't easy to implement on your end. If you only have a couple of power users who need this kind of access, it could be easy enough to teach those few power users how to do basic queries in Excel against the database, and they will be happy to get this tomorrow. OLAP won't be ready by tomorrow.
If you only have a few kinds of source data systems, you could get away with building a super-dynamic static report. For instance, I have a report that was written in C# that basically allows users to select as many columns as they want from a list of 30 columns and filter the data on a few date range fields and field filter lists. This simple report covers about 40% of all ad hoc report requests from end-users since it covers all the basic, core customer metrics and fields. We recently moved this report to SSRS and that allowed us to up the number of fields to about 100 and improved the overall user experience. Regardless of the reporting platform, it is possible to give users some dynamic flexibility even in the confines of a static reporting system.
If you only have a couple of databases, you can probably backup and restore the databases as your ETL. However, if you want to do anything beyond that, then you might as well bite the bullet and use SSIS (or some other ETL tool). Once you get into ETL for data warehousing, you are going to use a graphic-oriented design tool. Coding works well for applications, but ETL is more about workflows and that's why the tools tend to converge on a graphical UI. You can work around this and try to code a data warehouse from a text editor, but in the end you are going to lose out on a lot. See this post for more details on the differences between loading data from code and loading data from SSIS.
FEEDBACK ON HOW TO USE CUBES WITH A RELATIONAL DATA STORE
It is possible to implement a cube over a relational data store, but there are some major problems with this approach. The main reason it is technically feasible has to do with how you configure your DSV (data source view). The DSV is essentially a logical layer between the physical database and the cube/dimension definitions. Instead of importing the relational tables into the DSV as-is, you can define Named Queries or create views in the database that flatten the data.
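As a hedged illustration (hypothetical table and column names, T-SQL syntax), a flattening view of the sort the DSV could point at might look like this; it also derives a surrogate date key and patches NULLs, which ties into the advantages listed below:

    -- Hypothetical flattening view for the DSV: joins the operational tables
    -- into a fact-shaped result, derives a YYYYMMDD date key, and patches NULLs.
    CREATE VIEW dbo.vw_FactOrderLines AS
    SELECT ol.OrderLineID,
           o.CustomerID,
           p.ProductCategory,
           CONVERT(int, CONVERT(char(8), o.OrderDate, 112)) AS OrderDateKey,
           COALESCE(ol.Quantity, 0)  AS Quantity,
           COALESCE(ol.LineTotal, 0) AS Revenue
    FROM dbo.OrderLines ol
    JOIN dbo.Orders     o ON o.OrderID   = ol.OrderID
    JOIN dbo.Products   p ON p.ProductID = ol.ProductID;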
The advantages of this approach are as follows:
It is relatively easy to implement since you don't have to build an entire ETL subsystem to get started with OLAP.
This approach works well for prototyping how you want to build a more long-term solution. You can prototype it in 1-2 days and show some of the benefits of OLAP today.
Some very, very large tables don't have to be completely duplicated just to support an OLAP cube. I have several multi-billion-row tables that are almost completely standardized fact tables. The only columns they don't have are date keys, and they also contain some NULL values in fields that shouldn't have nulls at all. Instead of duplicating these massive tables, you can create the surrogate date keys and set values for the nulls in the view or named query. If you aren't going to see a huge performance boost from duplicating the table, then this may be a candidate for leaving in a more raw format in the database itself.
The disadvantages of this approach are as follows:
If you haven't built a true Kimball method data warehouse, then you probably aren't tracking transactions in a ledger-style. Kimball method fact tables (at least as I understand them) always change values by adding and subtracting rows. If someone cancels part of an order, you can't update the value in the cube for the single transaction. Instead, you have to balance out the transaction with a negative value. If you have to update the transaction, then you will have to fully reprocess the partition of the cube to replace the value which can be a very expensive operation. Unless your source system is a ledger-style transaction system, you will probably have to build a ledger-style copy in your ETL subsystem.
If you don't build a Kimball method data warehouse, then you are probably using unobscured and possibly non-integer primary keys in your database. This directly impacts query performance inside the cube. It also sets you up for a theoretically inflexible data warehouse. For instance, if you have a product ordering system that uses an integer key and you start using a second product ordering system, either as a replacement for the legacy system or in tandem with it, you may struggle to combine the data merely through the DSV, since each system has different data points, metrics, workflows, data types, etc. Worse, if they have the same data type for the order id and the order id values overlap between systems, then you must declare a surrogate key that you can use across both systems. This can be difficult, but not impossible, to implement without using a flattened data warehouse.
You may have to build the system twice if you start with the relational data store and then move to a flattened database. Frankly, I think the amount of duplicated work is trivial. Most of what you learn building the cube off a relational data store will translate to setting up the new OLAP cube. The main problem, though, is that you will probably create a new cube altogether, and then any users of the old cube will have to migrate to the new cube. Any reports built in SSRS or Excel will probably break at that point and need to be rewritten from the ground up. So the main cost of rebuilding the cube is really in rebuilding dependent objects -- not in rebuilding the cube itself.
Let me know if you want me to expand on any of the above points. Good luck.
You're basically asking the million dollar question of "How do I build a DWH". This is not really a question that can decisively be answered.
Nevertheless, here is a kickstart:
If you are looking for a minimum viable product, be aware that you are in a data environment, and not a pure software one. In data-heavy environments, it is much harder to incrementally build a product, because the amount of effort to introduce changes in the system is much greater. Think about it as if every change you make in a piece of software has to be somehow backwards-compatible with anything you've ever done. Now you understand the hell Microsoft are in :-).
Also, data systems involve many third-party tools such as DBs, ETL tools and reporting platforms. The choices you make should be viable for the expected development of your system, else you might have to completely replace these tools down the road.
While you could start with DB cloning based on simple copy SQL scripts and then aggregate it or push it into an OLAP cube, I would recommend getting your hands dirty with a real ETL tool from the start. This is especially true if you foresee the need to grow. Nine times out of ten, the need will grow.
MS-SQL is a good choice for a DB if you don't mind the cost. The natural ETL tool would be SSIS, and it's a solid tool as well.
Even if your first transformations are merely "take this table and dump it in there", you still gain a lot in terms of process management (has the job run? What happens if it fails? etc) and debugging. Also, it is easier to organically grow as requirements and/or special cases have to be dealt with.

SQL Server: How to maintain the data

I am working on a product (an ASP.NET web site) developed for educational institutions. There are around 20 educational institutions on my site. For each of them, the academic year start and end dates vary. There are a huge number of records in the database for attendance and results.
Now I need to show all previous years' data (like attendance, results, etc.) whenever a student or teacher logs in. There are some reports which compare student performance across academic years.
Now my problem is how to maintain that huge amount of data.
I wanted to go with two databases: one for the current academic year, another for all previous years.
But my current-year DB schema may change for enhancements, so whenever I move the current-year data to the archive database it creates problems for me. Please suggest a good way to implement this.
Thanks,
seshu.
Have you thought about table partitioning? It allows you to rapidly move data through sliding windows - so that at the start of a new year, you slide last year's details into an archive partition. (You need to check which SQL Server edition you have to see whether it is enabled.)
MSDN details:
http://msdn.microsoft.com/en-us/library/ms345146(SQL.90).aspx
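As a hedged sketch of the sliding-window idea (hypothetical object names; requires an edition and version of SQL Server that supports partitioning), a by-academic-year layout for an attendance table might be set up along these lines:

    -- Partition attendance by academic-year boundary dates (hypothetical names).
    CREATE PARTITION FUNCTION pfAcademicYear (date)
    AS RANGE RIGHT FOR VALUES ('2013-06-01', '2014-06-01', '2015-06-01');

    CREATE PARTITION SCHEME psAcademicYear
    AS PARTITION pfAcademicYear ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Attendance
    (
        StudentID      int     NOT NULL,
        AttendanceDate date    NOT NULL,
        Status         char(1) NOT NULL
    ) ON psAcademicYear (AttendanceDate);

    -- At the start of a new year, switch the oldest partition out to an archive
    -- table with an identical structure, then merge/split the boundary values:
    -- ALTER TABLE dbo.Attendance SWITCH PARTITION 1 TO dbo.AttendanceArchive;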
If you want to keep two databases in sync, schema-wise, there are plenty of tools available for that. Here is mine, here is Red Gate's and here is Apex's. There are many more available, including one which comes with Visual Studio Team System Database edition (if you have that already - if you don't then one of the ones I have previously mentioned will be a lot cheaper).

What is an example of a non-relational database? Where/how are they used?

I have been working with relational databases for some time, but it has only recently occurred to me that there must be other types of databases that are non-relational.
What are some examples of non-relational databases, and where/how are they used in the real world? Why would you choose to use a non-relational database over relational databases?
Edit: Two other similar questions have been mentioned in the answers:
Database system that is not relational.
Good reasons NOT to use a relational database?
An admittedly obscure but interesting alternative to the types of databases mentioned here is the associative database, such as Sentences, from LazySoft Technology. There is a free personal version you can download and try on your own. The Enterprise Edition is also free, but requires a request to the company.
Essentially, an associative database allows you to store information in much the same way as our brains do: as things and associations between those things. The name "Sentences" comes from the way this information can be represented in a subject-verb-object syntax:
Tom is brother to Laura
San Francisco is located in California
Mike has a credit limit of $10,000
A sentence may be the subject or object of another sentence:
(Bus 570 arrives at 8:15am) on Sundays
Mary says (the pie was baked by William)
So, everything can be boiled down to entities and associations.
There is, of course, much more to Sentences than what can be expressed here. I recommend that you take some time to read more about it in a white paper from LazySoft.
"The Associative Model of Data" is a book available in PDF format by Simon Williams, one of the creators of Sentences.
Flat file
CSV or other delimited data
spreadsheets
/etc/passwd
mbox mail files
Hierarchical
Windows Registry
Subversion using the file system, FSFS, instead of Berkeley DB
A non-relational document oriented database we have been looking at is Apache CouchDB.
Apache CouchDB is a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API. Among other features, it provides robust, incremental replication with bi-directional conflict detection and resolution, and is queryable and indexable using a table-oriented view engine with JavaScript acting as the default view definition language.
Our interest was in providing a distributed user-preferences store that would be immune to shape changes, to which we could serialize preference objects from Java and which we could access just as easily with JavaScript from a XULRunner-based client application.
Any database that claims to be a "Berkeley-style database" or "key/value" database is not relational.
These databases are usually based on complex hashing algorithms and provide very fast O(1) lookups based on a key, but leave any form of relational goodness to the end user.
For example, in a relational database, you would normalize your structure and join many tables together to create a single result set.
In a key/value database, you would denormalize as much as possible and then use a unique key to look up data.
If you need to pull data from two sources, you have to join the resulting sets together by hand.
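A hedged SQL sketch of that contrast (hypothetical names; the key/value side is shown as a single denormalized table purely for illustration, since real key/value stores expose their own APIs):

    -- Relational style: normalized tables joined at query time.
    SELECT u.UserName, o.OrderDate, o.Total
    FROM Users u
    JOIN Orders o ON o.UserID = u.UserID
    WHERE u.UserID = 42;

    -- Key/value style: everything needed is denormalized under one key,
    -- so retrieval is a single lookup with no joins.
    SELECT StoredValue   -- e.g. a JSON blob: {"userName": "...", "orders": [...]}
    FROM KeyValueStore
    WHERE KeyName = 'user:42';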
All databases were originally non-relational; it was only with the arrival of DB2 and Oracle in the mid-1980s that relational databases became common. Before that, most databases were either flat files or hierarchical.
Flat files are inherently boring, but hierarchical databases are much less so, particularly as DB2 was actually implemented on top of a hierarchical implementation (namely VSAM) in the first instance. VSAM is, I believe, still around on mainframe systems and is of some considerable importance.
DB/1 (so obscure now I can't even find a Wikipedia link) was IBM's prime-time predecessor database to DB2 (hence the name). This was hierarchical - basically you had a file which consisted of any number of 'root' records, generally directly accessible by a key. Each root record could then have any number of child records off it, each of which could in turn have its own children. The net effect is an index file of root records, with each root being the top of a potential tree-like structure. Accessing the child records could be tricky - there were limitations on direct access, so usually you ended up traversing the tree looking for the record you needed. A 'database' could have any number of these files in it, usually related by keys.
This had major disadvantages - not least that actually doing anything demanded that a full program be written, basically the equivalent of a day's work for what we can now do in SQL in a few minutes. However, it really did score on execution speed; in those days a mainframe had about the processing power of your iPhone (albeit optimized for data I/O), and poor DB2 queries could kill a multi-million-dollar installation dead. This was never an issue with DB/1, and in a world where programmers were less expensive than CPU time it made sense.
Google App Engine Datastore:
The App Engine datastore is not a relational database. While the datastore interface has many of the same features of traditional databases, the datastore's unique characteristics imply a different way of designing and managing data to take advantage of the ability to scale automatically.
The PI historical database from OSIsoft is non-relational. It's only made to archive timestamped data. It's used a lot by industry, especially as the back-end database for all those 'dashboards'.
There's no need to be relational in it, since there are no joins.
Two other types of databases that haven't come up yet:
Content repositories are databases designed for content (i.e. files, documents, images, etc.). They typically have additional constructs such as a hierarchical way to browse content, search, transformation between different formats, versioning, and many other things. Examples: Alfresco, Documentum, JackRabbit, Day, OpenText, and many other ECM vendors.
Directories, e.g. Active Directory or LDAP directories. These are databases designed for low-write/high-read scenarios and are highly distributed across large geographical distances/high-latency connections. While mostly used for authentication/authorization, they don't have to be if your use case matches the requirements.
Dimensional Databases are great examples of non-relational databases. They are very commonly used for 'Business Dashboards'/'Business Intelligence' for KPIs and other types of aggregate or statistical data. They are usually populated from relational databases and can offer better performance in certain situations.
http://en.wikipedia.org/wiki/Dimensional_database
XML databases e.g. xindice
Object databases e.g. db4o
Be aware that the concept of relational databases is highly contentious. Purists such as C. J. Date would argue that many databases in common use (such as Oracle and SQL Server) do not comply sufficiently with the relational model to be termed 'relational'.
Non-Relational databases just do not meet Codd’s requirements.
Intersystems Caché seems to be a total rewrite/redesign of the old Pick operating system's database. From the little I've read of Caché, it appears to be a nicely done redesign.
It permits .NET programs to access the database just as they would SQL. Caché runs the Pick OS programs without requiring any changes. By importing your Pick files into Caché you can still run your old green-screen applications with it, but you can also write new programs using .NET, so you can migrate to Windows applications without abandoning the years of data design you've already invested in.
Here is some background on the Pick DB model. A Pick database uses totally variable-length records and fields. All tables are keyed by a single unique key and are accessible without reading an index. Pick designed the system to use a hashing algorithm that generally reads the item from disk on the first physical read (assuming system maintenance was performed correctly). Fields in Pick are untyped; all data is stored as strings, and casting is up to the programmer. Nulls are stored as an empty string, so a null does not take up disk space as it does in SQL. There is no need for foreign keys. In the relational world the DBA has to create an Order Header table and an Order Line Item table; in the Pick model there is a single table. For example, 'Order Date' is a field that would store a number of days since Dec 13, 1967 (the date the Pick OS was turned on for the first time), so Pick programmers did not have Y2K problems. A second column would be Customer Number. The big difference comes when you get to the Product Number column: it would be 'multi-valued' (the Codd non-conformance). In other words, the database can handle 1-32000 product numbers in that column. Other columns like Quantity Ordered would be in a controlling/dependent relationship with the Product Number and would also be multi-valued. When you get to Quantity Shipped, Pick goes to a third dimension and has sub-multi-valued fields: you would have a Shipment Number column, multi-valued by line item and sub-multi-valued containing the shipment quantity for that line for that shipment number. There are no inner joins needed. All data for that order is stored in one table and in a single record. No orphan rows ever!
Secondly, the data definition is a bit different. Our dictionaries can contain definitions for data that is not in the table or that is being manipulated. A couple of examples: Customer Name would be defined as 'use the Customer Number column and return the Name field from the Customer table'; Line Item Extension would be defined as a calculation of Quantity * Price / PricePer.
I believe I read somewhere Caché claims to have over 100,000 installations.
I would think a flat-file database in Excel is non-relational and used by quite a few people.
It is really just a database table that can not be joined with other tables.
Object-Oriented Databases are one interesting type of non-relational database.
The trading sector sometimes uses OO Databases since each deal/contract can look sort of like others in that category but have unique attributes as well. VERY difficult to represent it relationally.
eXist-db is an XML database that has been around for a long time. It is particularly useful for running XQuery over large collections of XML documents.
Any file or group of files that contains data but does not express relationships within that data is a non-relational database.
RRDtool is designed to store and aggregate log data. You configure a sampling interval and feed data into it, then it returns time-based results. It's optimized for fixed-size storage, and starts aggregating past results after a time. For example, suppose you have a round-robin database with a 5-minute time interval. Even if you send it temperature data once per second, it still only stores the results in 5-minute increments. After a week, it averages those results into hourly values. After a month, hourly results are averaged into daily numbers, and so on.
RRDtool is commonly used as the backend for tools like Cricket and MRTG to track network and environmental data for months and years at a stretch.
For a graph-based DBMS you have Neo4j.
For a hierarchical DBMS you have any standard filesystem or, with "schema" support, any LDAP implementation.
There are many answers but they all end up being in one of two major categories:
Navigational. Includes Tree/Hierarchy databases and Graph databases.
Databases that break first normal form (multiple values). Includes Pick databases and Lotus Notes and its offspring like CouchDB.
EDIT: And of course key/value stores like BDB aren't relational, but that just goes without saying doesn't it? I mean, they're just key/value stores.
dBase. Although it was marketed as such, it doesn't meet the requirements.
As an OO database, Intersystems Caché comes to mind. Some medical and library systems are built on this.
In my company, www.smartsgroup.com, we have a proprietary database engine we call a "transaction log database". It is built on flat files, each file containing a sequence of "events" or "messages", in binary format, plus various indexes on this data and algorithms for reproducing the state of a stock exchange's orderbook. It is highly optimised for sequential updates and sequential access.
In scientific applications, it is also common to use proprietary database engines rather than RDBMS's. I also worked for a company that has the world's largest database of EEG brain recordings: www.brainresource.com . There we use a flat file database, and it worked well for us.
SmartsGroup also uses a temporal database, which is like a non-relational database table, except that we store a history of all changes to all fields so we can reproduce the state of a particular row on a particular date.
The Wiki page for Dimensional Databases linked to above seems to have disappeared.
Some OLAP systems are backed by multidimensional databases (MOLAP); these are often used in financial analysis. They afford interactive clients that allow one to navigate through the data at different levels of aggregation.
At my university there is a group that researches deductive databases.
