Can anyone share an RDB + GDB use case?
I'm looking for use cases for AgensGraph, a multi-model database based on RDB + GDB.
I also need to know how to migrate data from an RDB to a GDB. Since the data models of the two databases are not the same, is there a tool that can migrate the data at least partially automatically?
RDB + GDB AgensGraph Use Case:
One of the AgensGraph use cases for RDB+GDB is keeping the network topology in graph format (as a graph layer) while storing time-series log data (e.g. from IoT devices) in relational tables.
Data from RDB to GDB:
You can use an FDW extension or pg_dump (PostgreSQL) to migrate data from an RDB into AgensGraph and then model it as a graph.
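As a rough sketch of the FDW route (all host names, credentials, table and graph names below are placeholders, and the exact Cypher depends on your target model), you can expose the source tables inside AgensGraph with postgres_fdw and then turn the rows into vertices with Cypher; since AgensGraph speaks the PostgreSQL protocol, psycopg2 works for both steps:

```python
# Sketch: pull rows from a source RDB via postgres_fdw, then model them as
# vertices in an AgensGraph graph. Hosts, credentials, and names are placeholders.
import psycopg2

conn = psycopg2.connect("host=agens-host dbname=agens user=agens password=secret")
conn.autocommit = True
cur = conn.cursor()

# 1) Expose the source RDB table inside AgensGraph through postgres_fdw.
cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
cur.execute("""
    CREATE SERVER IF NOT EXISTS src_rdb FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'src-host', dbname 'srcdb', port '5432')
""")
cur.execute("""
    CREATE USER MAPPING IF NOT EXISTS FOR CURRENT_USER
        SERVER src_rdb OPTIONS (user 'src_user', password 'src_pass')
""")
cur.execute("CREATE SCHEMA IF NOT EXISTS staging")
cur.execute("IMPORT FOREIGN SCHEMA public LIMIT TO (devices) FROM SERVER src_rdb INTO staging")

# 2) Create a graph and turn each relational row into a vertex with Cypher.
cur.execute("CREATE GRAPH iot_graph")
cur.execute("SET graph_path = iot_graph")

cur.execute("SELECT id, name FROM staging.devices")
for dev_id, dev_name in cur.fetchall():
    cur.execute("CREATE (:device {id: %s, name: %s})", (dev_id, dev_name))

cur.close()
conn.close()
```

AgensGraph's Cypher dialect also has a bulk-load clause for reading relational tables directly, which avoids the row-by-row loop; check your version's manual for the exact syntax.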
I've gotten a little too far along with RedisGraph, and it's now about to ship in production.
I therefore need to export and import data between servers and also to create backups.
I'm using the open-source community version (not Redis Enterprise).
How would you recommend handling backups and imports/exports?
Thanks for your feedback!
RedisGraph stores each graph in a single Redis key, so traditional Redis persistence methods can be used to persist and migrate data.
Backups are usually managed using RDB files or a combination of the RDB and AOF strategies; these are described here.
If your Redis keyspace should be duplicated in its entirety, or consists only of graph keys, you can copy the RDB file between servers; otherwise, you can export and import individual graph keys with the DUMP and RESTORE commands.
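For the DUMP/RESTORE route, here is a minimal sketch with redis-py (host names and the graph key name are placeholders); note that DUMP payloads are encoding- and module-version-sensitive, so both servers should run compatible RedisGraph versions:

```python
# Sketch: copy a single RedisGraph key between two Redis servers via DUMP/RESTORE.
# Host names and the key name are placeholders.
import redis

src = redis.Redis(host="old-server", port=6379)
dst = redis.Redis(host="new-server", port=6379)

key = "mygraph"                 # the graph name you use with GRAPH.QUERY
payload = src.dump(key)         # serialized binary value of the key
if payload is None:
    raise SystemExit(f"key {key!r} not found on the source server")

# ttl=0 means no expiry; replace=True overwrites an existing key on the target.
dst.restore(key, 0, payload, replace=True)
```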
I have a database hosted on a server, and I have to monitor it with a script using the necessary queries and stored procedures. The metrics that I have to monitor are:
which accounts or users are connected
which transactions are active
which resources those transactions use
at what time
processor usage
disk usage
I was told I can do this with MDA tables. How can I get those metrics from the ASE MDA tables, or with which stored procedures could I obtain them?
You are asking about the full functionality of a full-featured monitoring program. There are commercial tools available, like Bradmark Surveillance, or free ones, like asetune. You can also write your own scripts.
You could use built-in procedures like sp_sysmon, or you could write your own scripts that read the MDA tables and store the results. You can also try the tools delivered with the ASE server, like ASE Cockpit, Sybase Control Center (older versions), or Sybase Central (ancient ASE versions).
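As a rough sketch of such a script, here is a pyodbc example polling a few MDA tables; the DSN, login, and column lists are placeholders, the column names vary between ASE versions (check them with sp_help), and the mon* tables only return data once monitoring is enabled (sp_configure 'enable monitoring', 1):

```python
# Sketch: sample a few ASE MDA tables and print the results.
# DSN, login, and column lists are placeholders -- verify them for your ASE version.
import pyodbc

conn = pyodbc.connect("DSN=ASE1;UID=mon_user;PWD=secret", autocommit=True)
cur = conn.cursor()

queries = {
    # Who is connected and what they are currently running
    "sessions": "SELECT SPID, KPID, DBName, Command FROM master..monProcess",
    # Per-session resource consumption (CPU, waits, I/O)
    "activity": "SELECT SPID, CPUTime, WaitTime, PhysicalReads, LogicalReads "
                "FROM master..monProcessActivity",
    # Engine (processor) usage
    "engines": "SELECT EngineNumber, CPUTime, SystemCPUTime, UserCPUTime, IdleCPUTime "
               "FROM master..monEngine",
    # Disk device I/O
    "devices": "SELECT LogicalName, Reads, Writes, IOTime FROM master..monDeviceIO",
}

for name, sql in queries.items():
    print(f"--- {name} ---")
    for row in cur.execute(sql).fetchall():
        print(row)

cur.close()
conn.close()
```

To keep history, insert the rows into your own tables with a timestamp instead of printing them, and run the script on a schedule.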
One tool in Sybase that may be very helpful is sp_help table_name (just replace table_name with the name of the table you want to know more about). sp_help will show you everything you need to know about the tables and columns in your database, and I've found it extremely helpful when I need to build queries but can't remember the full structure of all the tables.
Once you have an idea of what values are stored where, you can build queries that pull the information you need. As @Adam points out in his answer above, Sybase has built-in procedures that will gather at least some of this data. The Sybase Infocenter is also a great source of information about what's already available to you.
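For instance, something along these lines (same hypothetical DSN as in the script above) prints the structure of an MDA table before you write queries against it:

```python
# Sketch: inspect an MDA table's structure with sp_help before building queries.
import pyodbc

conn = pyodbc.connect("DSN=ASE1;UID=mon_user;PWD=secret", autocommit=True)
cur = conn.cursor()

cur.execute("exec sp_help monProcessActivity")
# sp_help returns several result sets (table info, column list, indexes, ...).
while True:
    if cur.description:
        for row in cur.fetchall():
            print(row)
    if not cur.nextset():
        break

conn.close()
```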
I installed Grafana as explained on the website: http://docs.grafana.org/installation/debian/. Since I'm using PostgreSQL as my database, I wanted to add it as my data source. Unfortunately, I didn't find a Postgres option (only Graphite, MySQL, InfluxDB, and two other types were there). There isn't even a Postgres plugin on the website (the one that I found didn't work: https://github.com/sraoss/grafana-sqldb-datasource).
Do you have a solution so that Grafana supports Postgres?
Currently, Grafana only supports Postgres as the data store for Grafana itself (for storing dashboards, users, etc.), and there is no published plugin for using Postgres as a data source.
The MySQL data source plugin has just been added to Grafana and the Postgres plugin is on the Grafana roadmap and should be released later in the year.
If you are wondering why so many data sources are supported but not Postgres, it is because relational databases are not traditionally what Grafana is used for. The main use case is visualizing time-series data (often more data than a relational DB can handle). Relational DBs are not built for this use case, which is why most people use something like Graphite or InfluxDB. But it is quite common to have some data in a DB like Postgres that you would like to show on a dashboard or combine with data from a time-series DB. This is why the Grafana team is planning to release a plugin soon.
EDIT: The Postgres data source is now merged to Grafana master and will be released in Grafana 4.6: https://github.com/grafana/grafana/pull/9209
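Once you're on 4.6 or later, the Postgres data source can be added through the UI or scripted against Grafana's HTTP API. A sketch of the API route is below; the Grafana URL, credentials, and connection details are placeholders, and some payload field names have changed between Grafana releases, so check the data source API docs for your version:

```python
# Sketch: register a Postgres data source via Grafana's HTTP API.
# Grafana URL, admin credentials, and database details are placeholders.
import requests

payload = {
    "name": "my-postgres",
    "type": "postgres",               # built in from Grafana 4.6 onwards
    "access": "proxy",
    "url": "db-host:5432",
    "database": "metrics",
    "user": "grafana_reader",
    "secureJsonData": {"password": "secret"},
    "jsonData": {"sslmode": "disable"},
}

resp = requests.post(
    "http://localhost:3000/api/datasources",
    json=payload,
    auth=("admin", "admin"),          # basic auth; an API token also works
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```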
I know Aster Data leverages SQL-MapReduce, nCluster, and analytic capabilities.
From a database architecture perspective, which family does Aster belong to?
The Aster database doesn't formally belong to a particular database family, but you can characterize it with several database types:
it's a distributed, parallel, relational database;
it's an MPP (massively parallel processing) database;
it's based on (forked from) the PostgreSQL open-source code;
it's NOT based on the Teradata database.
I do not know the exact name for it, but it is a sharded DB, which means one queen server and several workers running Postgres instances.
I agree with topchef. I hope the following gives you some high-level information.
The database is built on top of Postgres, similar to other databases like Netezza and Greenplum.
Aster Data is built on Postgres, but in a distributed manner.
It has something called vprocs, which are similar to standalone Postgres DB instances.
A node (worker) will have multiple vprocs, and all the nodes are coordinated by a queen node (master).
Though it's built on Postgres, not all Postgres features are ported to Aster Data because of the distributed nature of the system.
I'm looking for a tool to export data from a PostgreSQL DB to an Oracle data warehouse. I'm really looking for a heterogeneous DB replication tool, rather than an export->convert->import solution.
Continuent Tungsten Replicator looks like it would do the job, but PostgreSQL support won't be ready for another couple months.
Are there any open-source tools out there that will do this? Or am I stuck with some kind of scheduled pg_dump/SQL*Loader solution?
You can create a database link from Oracle to Postgres (this is called heterogeneous connectivity). This makes it possible to select data from Postgres with a select statement in Oracle. You can use materialized views to schedule and store the results of those selects.
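A minimal sketch of that from the Oracle side with cx_Oracle; it assumes a heterogeneous-services gateway (e.g. DG4ODBC) is already configured with a tnsnames.ora entry I'm calling pg_gw, and the link, user, and table names are placeholders:

```python
# Sketch: create a database link to Postgres through an already-configured
# heterogeneous services gateway, then snapshot the remote data with a
# materialized view that refreshes hourly. All names are placeholders.
import cx_Oracle

conn = cx_Oracle.connect("dwh_user", "secret", "oracle-host/ORCLPDB1")
cur = conn.cursor()

# The link points at the gateway's tnsnames.ora entry, not at Postgres directly.
cur.execute("""
    CREATE DATABASE LINK pg_link
        CONNECT TO "pg_user" IDENTIFIED BY "pg_pass"
        USING 'pg_gw'
""")

# Postgres identifiers arrive case-sensitive through the gateway, hence the quotes.
cur.execute('SELECT COUNT(*) FROM "customers"@pg_link')
print(cur.fetchone())

# Schedule and store the result of the remote select on the Oracle side.
cur.execute("""
    CREATE MATERIALIZED VIEW customers_mv
        REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1/24
        AS SELECT * FROM "customers"@pg_link
""")

conn.close()
```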
It sounds like SymmetricDS would work for your scenario. SymmetricDS is web-enabled, database independent, data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time.
Sounds like you want an ETL (extract, transform, load) tool. There are a lot of open-source options; Enhydra Octopus and Talend Open Studio are a couple I've come across.
In general, ETL tools offer you better flexibility than the straight-across replication option.
Some offer scheduling, data quality, and data lineage.
Consider using the Confluent Kafka Connect JDBC sink and source connectors if you'd like to replicate data changes across heterogeneous databases in real time.
The source connector can select the entire database, particular tables, or the rows returned by a provided query, and send the data as Kafka messages to your Kafka broker. The source connector can detect changed rows based on an incrementing id column, a timestamp column, or be run in bulk mode where the entire contents are recopied periodically. The sink can read these messages, optionally validate them against an Avro or JSON schema, and populate the target database with the results. It's all free, and several sink and source connectors exist for many relational and non-relational databases.
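As a rough sketch of how that is wired up, both connectors are just JSON configs POSTed to the Kafka Connect REST API; the hosts, credentials, table, and topic names below are placeholders, and the config keys are the Confluent JDBC connector's:

```python
# Sketch: register a JDBC source (Postgres) and a JDBC sink (Oracle) with the
# Kafka Connect REST API. Hosts, credentials, tables, and topics are placeholders.
import requests

CONNECT_URL = "http://localhost:8083/connectors"

source = {
    "name": "pg-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://pg-host:5432/appdb",
        "connection.user": "replicator",
        "connection.password": "secret",
        "table.whitelist": "customers",
        # Detect changes via an incrementing id plus a last-updated timestamp.
        "mode": "timestamp+incrementing",
        "incrementing.column.name": "id",
        "timestamp.column.name": "updated_at",
        "topic.prefix": "pg.",
    },
}

sink = {
    "name": "oracle-sink",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "connection.url": "jdbc:oracle:thin:@oracle-host:1521/ORCLPDB1",
        "connection.user": "dwh_loader",
        "connection.password": "secret",
        "topics": "pg.customers",
        "insert.mode": "upsert",
        "pk.mode": "record_value",
        "pk.fields": "id",
        "auto.create": "true",
    },
}

for connector in (source, sink):
    resp = requests.post(CONNECT_URL, json=connector, timeout=10)
    resp.raise_for_status()
    print(resp.json()["name"], "created")
```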
One major caveat: some JDBC Kafka connectors cannot capture hard deletes.
To get around that limitation, you can use a log-based CDC connector such as Debezium (http://www.debezium.io); see also
Delete events from JDBC Kafka Connect Source.