I am creating a Flink job that needs Dynamic Tables with continuous queries. I found the concept here, but did not find any good example program to try it out.
Can someone help me with this?
Thanks
If you are looking for examples that use SQL, I would suggest either the Flink SQL Training or the Flink SQL demo shown in "Flink SQL in 2020: Time to show off!" by Fabian Hueske & Timo Walther.
If you prefer the Table API, there's a tutorial in the docs.
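To give a flavour of the concept itself, here is a minimal sketch along the lines of the example in the Dynamic Tables documentation (the table and column names are assumptions): a stream of click events is registered as a dynamic table, and a continuous query over it produces a result table that keeps updating as new events arrive.

```sql
-- Assumes `clicks` is a dynamic table registered on a stream of
-- click events with columns (user_name, url, cTime).
-- This continuous query maintains a per-user click count; every new
-- event on the stream updates the corresponding row of the result.
SELECT user_name, COUNT(url) AS cnt
FROM clicks
GROUP BY user_name;
```

Unlike a batch query, this query never terminates: the result table is continuously updated, and depending on the sink it is emitted as a changelog of insertions and updates.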
Can someone please share some links/references or guide me on how to do a database migration from IBM Netezza to Google's BigQuery?
I suggest you check the data warehouse migration offer, since this process involves many steps, such as:
Migration Strategy
Migration Plan
Effort estimation
Technical Architecture
Additionally, GCP does not offer official documentation for migrating from Netezza to BigQuery; nevertheless, you can take a look at the Data Transfer Service migration documentation to get an idea of all the work this migration involves.
On the other hand, I found some third-party companies that offer this service; maybe you can take a look there. Finally, there is a Medium post that talks about this.
I looked through the official Flink documentation, but didn't find which kinds of SQL Flink supports.
Flink’s SQL support is based on Apache Calcite, which implements the SQL standard. The SQL support is not yet feature complete, but lately each new release has brought some exciting new features -- such as temporal tables and match_recognize, which were added in Flink 1.7.
I believe this part of the documentation provides the details you are looking for.
For learning how to work with Flink SQL, I recommend the Apache Flink® SQL Training on GitHub.
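To give a flavour of one of the newer features, here is a sketch of a MATCH_RECOGNIZE query, loosely following the stock-ticker example used in the documentation (the `Ticker` table, its columns, and the price threshold are assumptions):

```sql
-- Sketch: for each symbol, find a dip below 10 followed by a recovery.
-- Assumes a table Ticker(symbol, rowtime, price) where rowtime is the
-- event-time attribute.
SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime
        MEASURES
            FIRST(DOWN.price) AS first_dip_price,
            UP.price          AS recovery_price
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        PATTERN (DOWN+ UP)
        DEFINE
            DOWN AS DOWN.price < 10,
            UP   AS UP.price >= 10
    ) AS T;
```

On a stream, this pattern match runs continuously, emitting one row per matched dip-and-recovery sequence as events arrive.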
I am very new to Cassandra and come from a database background. I want to know the process/tools/utilities by which I can bulk-load data into a Cassandra column family and read the data back for analytics.
Thank you in advance!
The commands you use to talk to Cassandra keyspaces (which are to Cassandra what databases are to SQL) are similar to SQL.
Check the CQL query language tutorials.
Also check the DataStax Cassandra tutorials or the ones at tutorialspoint, but make sure that what you read matches the version you want to use.
Once you get the hang of the basics, you can move on to Cassandra-specific concepts like data replication and partitioning.
A quick & easy start would be to get Cassandra on Docker and set up a container running your keyspace.
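As a rough sketch of what that looks like in practice (the keyspace, table, and file names below are made up): once a container is running, cqlsh lets you define a keyspace, bulk-load a CSV with the COPY command, and query the data back. For very large volumes, the sstableloader utility is the usual alternative.

```sql
-- In cqlsh (all names below are hypothetical):
CREATE KEYSPACE IF NOT EXISTS analytics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS analytics.page_views (
    user_id   uuid,
    view_time timestamp,
    url       text,
    PRIMARY KEY (user_id, view_time)
);

-- Bulk-load from CSV (COPY is a cqlsh command, not CQL proper):
COPY analytics.page_views (user_id, view_time, url)
    FROM 'page_views.csv' WITH HEADER = true;

-- Read it back for analytics (queries must follow the primary key):
SELECT url, view_time
FROM analytics.page_views
WHERE user_id = 123e4567-e89b-12d3-a456-426614174000;
```

Note the primary key design: in Cassandra you model tables around the queries you want to run, since arbitrary WHERE clauses are not supported the way they are in SQL.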
I am starting an Elasticsearch 5 project from data that currently lives in SQL Server, so I am starting from scratch:
I am thinking about how to import data from my SQL Server, and especially how to synchronise my data when it is updated or added.
I saw here that it is advised not to make batches too frequent.
But how do I build synchronisation batches? Do I have to write them myself, or are there widely used tools and practices?
The River and JDBC feeder plugins appear to have been widely used, but they don't work with Elasticsearch 5.x.
Any help would be very welcome.
I'd recommend using Logstash:
It's easy to use and set up
You can do your own ETL in logstash configuration files
You can have multiple JDBC sources in one file
You'll have to figure out how to make incremental (batched) updates to sync your data. It really depends on your data model.
This is a nice blog piece to begin with:
https://www.elastic.co/blog/logstash-jdbc-input-plugin
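For example, a minimal incremental-sync pipeline could look like the following (the connection string, table, and column names are assumptions; adapt them to your schema). The jdbc input persists the last seen value of `tracking_column` between runs and substitutes it as `:sql_last_value` in the query on each scheduled run:

```
input {
  jdbc {
    # Path to the Microsoft JDBC driver jar (hypothetical location)
    jdbc_driver_library => "/opt/drivers/mssql-jdbc.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://localhost:1433;databaseName=shop"
    jdbc_user => "logstash"
    jdbc_password => "secret"
    # Run every minute; only fetch rows changed since the last run
    schedule => "* * * * *"
    statement => "SELECT * FROM products WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "products"
    # Use the SQL primary key as the document id so re-synced rows
    # overwrite the existing documents instead of duplicating them
    document_id => "%{id}"
  }
}
```

Setting `document_id` from the primary key is what makes the sync idempotent: repeated runs update documents in place. Deletes are the hard part with this approach; they need either soft-delete flags in the source tables or a periodic full reconciliation.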
Is there any way to migrate to Datomic directly from Postgres?
I have an existing Postgres database I am planning to migrate to Datomic.
Is there any source or library from which I can get help?
There's no straightforward answer to this question, nor an automated, general-purpose tool for it. In general, ETL is not trivial, especially when moving between different types of databases (e.g. from table-backed SQL to Datomic).
That said, to get started in solving this for your case you might find Onyx, a project that transfers data to and from a SQL database (MySQL) and Datomic, to be a helpful example.
EDIT: As of December 2016, there is now a video by Stu Halloway demonstrating a way to architect an import job from SQL to Datomic, available here.