Is there a way to configure TDengine to use Amazon S3 storage? I cannot find anything about it in the official documentation. Does anyone know how to make TDengine store its data on S3?
No, TDengine only supports disk storage for now.
Related
TDengine is an open-source time-series database with clustering support, but does it have any specific cloud-native features?
Could anyone give me some information?
I think the plan is: first we release the TDengine database cloud service;
then TDengine 3.0 supports separation of storage and computation;
and finally, the virtual data node supports splitting and merging.
We're setting up a MongoDB cluster with AWS' DocumentDB.
DocumentDB allows us to set up replication in the cloud, but we also want to have local replicas. Is it possible to have a local replica of the Mongo cluster that runs in the cloud?
Thanks in advance!
Yes. You can use DocumentDB change streams to achieve this. You can find sample implementations in this GitHub repository. Episode 1 of the "DocumentDB from the dining table" series also shows how to use this feature for your goal.
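As a minimal sketch of that approach (assuming pymongo, placeholder connection strings, and placeholder database/collection names, with change streams already enabled on the DocumentDB collection), you can tail the change stream and replay each event into the local MongoDB replica:

```python
# Sketch only: hosts, credentials, and database/collection names are placeholders.
from pymongo import MongoClient

source = MongoClient(
    "mongodb://user:pass@docdb-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017"
    "/?tls=true&replicaSet=rs0&readPreference=secondaryPreferred"
)
target = MongoClient("mongodb://localhost:27017")  # the local replica

src = source["appdb"]["events"]
dst = target["appdb"]["events"]

# fullDocument="updateLookup" makes each update event carry the whole document,
# so the local copy can simply be overwritten.
with src.watch(full_document="updateLookup") as stream:
    for change in stream:
        op = change["operationType"]
        key = change["documentKey"]
        if op in ("insert", "update", "replace"):
            dst.replace_one(key, change["fullDocument"], upsert=True)
        elif op == "delete":
            dst.delete_one(key)
```

In practice you would also want to persist the stream's resume token so the replica can pick up where it left off after a restart.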
For staging in Snowflake, we need an S3, Azure, or local-machine layer. Instead of this, can we FTP a file from a source team directly to Snowflake internal storage, so that Snowpipe can pick the file up from there and load it into our Snowflake table?
If yes, please tell us how. If no, please confirm that as well. And if not, isn't depending on other platforms every time a big drawback of Snowflake?
You can use just about any Snowflake driver to move files to an internal stage on Snowflake: ODBC, JDBC, Python, SnowSQL, etc. FTP isn't a very common protocol in the cloud, though. Snowflake has a lot of customers without any presence on AWS, Azure, or GCP that use it this way without issues.
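As a rough sketch of the driver route (assuming the snowflake-connector-python package and placeholder account, stage, file, and table names), a PUT statement uploads the file to an internal stage, and either Snowpipe or an explicit COPY INTO loads it from there:

```python
# Sketch only: account, credentials, stage, file, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="public",
)
cur = conn.cursor()

# PUT copies the local file into the named internal stage;
# AUTO_COMPRESS gzips it on the way up.
cur.execute("PUT file:///data/export/orders.csv @my_internal_stage AUTO_COMPRESS=TRUE")

# If you are not using Snowpipe auto-ingest, load it explicitly:
cur.execute(
    "COPY INTO my_table FROM @my_internal_stage/orders.csv.gz "
    "FILE_FORMAT=(TYPE=CSV SKIP_HEADER=1)"
)

cur.close()
conn.close()
```

The same PUT/COPY pattern works from SnowSQL or the JDBC/ODBC drivers, so the source team never needs an S3 or Azure account of their own.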
I made a website that uses a sqlite3 database, and I'm trying to get my program onto AWS using Elastic Beanstalk. I've been googling but can't find any instructions/tutorials on how to get a sqlite3 database running on AWS. Does AWS support sqlite3? Is there some trick to making it work? And if not, what do you recommend? Many thanks.
You can refer to the documentation below, which will help you get to the Elastic Beanstalk console and add SQLite3 on AWS. It is written for MySQL, but you can change the database engine to SQLite3 from the Database settings.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.db.html
I am not entirely sure whether this is possible because I have not done it before, but I'll point you in the right direction.
There is documentation that shows you how to get started with a custom Amazon Machine Image (AMI) for your Elastic Beanstalk environment. So what I would recommend doing is:
install sqlite3 on an EC2 instance,
configure sqlite3 to your requirements,
ensure the instance starts the sqlite3 service on boot,
create an AMI of the instance (see the sketch after this list),
follow this documentation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html
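As a hedged sketch of the "create an AMI of the instance" step (assuming boto3 and a placeholder instance ID, image name, and region), this is roughly how you would snapshot the configured EC2 instance into an AMI that the Beanstalk environment can then reference:

```python
# Sketch only: instance ID, image name, and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an AMI from the EC2 instance that already has sqlite3 installed
# and configured; the resulting image ID goes into the Beanstalk
# environment's custom-AMI setting.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="sqlite3-base-image",
    Description="Base image with sqlite3 installed and configured",
)
print("New AMI:", response["ImageId"])
```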
Please let me know how you go and I may be able to help if you get stuck along the way.
It would be epic if AWS released a service/intermediate server for it. I love SQLite.
However, the problem is that SQLite does not support transactions over NFS. I actually tried storing SQLite on AWS EFS and then mounting EFS from both AWS Lambda and AWS Batch, so I hit this wall organically.
Given that cloud environments are largely multi-machine/multi-node, you really start to see the benefit of a server-based approach like PostgreSQL.
Is it possible in general to run Apache Kylin without other databases like HBase (plus HDFS), so that you can store the raw data and the cube metadata somewhere else?
I think you could use Apache Hive with managed native tables
(Hive storage handlers).
Hive can connect over an ODBC driver to MySQL, for example.
To use Kylin, HDFS is mandatory. Both the raw data and the cube data are stored in HDFS.
If you want support for another NoSQL datastore like Cassandra, you can consider another framework, such as FiloDB.