In the latest TDengine develop branch, taosAdapter is provided to replace httpd for the RESTful service.
Both taosAdapter and httpd use port 6041, so I am wondering: what is the difference between taosAdapter and httpd in the TDengine database?
httpd is the built-in RESTful agent of TDengine and runs as a thread, while taosAdapter runs as a separate process. They are both HTTP proxies. However, taosAdapter supports more features than the original httpd; for example, taosAdapter supports using schemaless methods to write data into TDengine.
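For example, taosAdapter can accept schemaless writes in InfluxDB line protocol. A minimal sketch in Java (the endpoint path, database name, and the default root:taosdata credentials are assumptions; check the taosAdapter docs for your version):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SchemalessWriteSketch {
    public static void main(String[] args) throws Exception {
        // One measurement in InfluxDB line protocol; with schemaless ingestion
        // the tables are created on the fly, no prior schema needed.
        String line = "cpu,host=server01 usage=0.64 1672531200000000000";

        // Hypothetical endpoint and default credentials; adjust to your setup.
        URL url = new URL("http://localhost:6041/influxdb/v1/write?db=test");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        String auth = Base64.getEncoder()
                .encodeToString("root:taosdata".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(line.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}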
In my case, I need to connect Gravitational Teleport to MongoDB in the cloud through an application deployed in Kubernetes.
For PostgreSQL, I used PgBouncer, which connected to PostgreSQL in the cloud using a user/password, while Teleport connected to it using a certificate. For MongoDB, I found a solution with mongos that can connect to MongoDB in the cloud, but only via keyFile, whereas I need user/password.
Can anyone help me find an analogue of PgBouncer for MongoDB, or suggest another solution? Thanks a lot!
When you say MongoDB in the cloud, are you using MongoDB Atlas? If so, Teleport has native integration with Atlas.
You can use the following guide for it: https://goteleport.com/docs/database-access/guides/mongodb-atlas/
I have an on-premises frontend Java servlet server, an on-premises Java backend app server, and an on-premises Oracle database server. My Oracle client version is 12.1.0, the Java version is OpenJDK 1.8.0_222, and the frontend servlets run on Tomcat 7.0.55. In this architecture, the backend server communicates with the Oracle DB to process SQL queries.
I have now moved all my servers to Docker-based containers on AWS, except the Oracle DB. My Java backend server running in Docker on AWS connects to the Oracle DB running in the on-premises datacenter.
I am now facing an issue where the AWS-based application has latency when it connects to the on-premises database, and the latency keeps increasing as the number of requests grows; eventually the application gets gateway timeouts if requests keep increasing. Strangely, this does not happen if I connect my AWS Tomcat frontend servers to the on-premises Java backend servers, which talk to the on-premises Oracle DB; it only happens when the AWS Java backend servers talk to the on-premises Oracle DB. I am not sure why this is happening. Any ideas will be highly appreciated.
The issue is related to the DAO / ORM framework we are using. If you use something like Hibernate or Spring, SQL calls are often not optimized into join statements, so fetching 1000 objects can issue 1000+ SQL calls (the classic N+1 problem). With even a single-digit-millisecond network latency to the database, that delay is multiplied 1000 times. So the solution was to keep the database as close as possible to the application.
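To make the effect concrete, here is a minimal sketch in plain JDBC (table and column names are made up) contrasting the N+1 pattern with a single join:

import java.sql.*;

public class NPlusOneSketch {
    // N+1 pattern: one query for the parents, then one query per parent.
    // With ~5 ms network latency, 1000 orders cost ~5 seconds in round trips alone.
    static void nPlusOne(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet orders = st.executeQuery("SELECT id FROM orders")) {
            while (orders.next()) {
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT * FROM order_items WHERE order_id = ?")) {
                    ps.setLong(1, orders.getLong("id"));
                    try (ResultSet items = ps.executeQuery()) { /* consume */ }
                }
            }
        }
    }

    // Single join: one round trip, regardless of how many orders there are.
    static void singleJoin(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT o.id, i.* FROM orders o " +
                 "JOIN order_items i ON i.order_id = o.id")) {
            while (rs.next()) { /* consume */ }
        }
    }
}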
We have a secured Kafka cluster in our production environment (using TLS encryption, certificate-based client authentication, and ACLs). I am trying to figure out how to configure the Flink Kafka connector with the appropriate settings so it can connect securely to our Kafka cluster. Is this possible with Flink? Do I have to pass the security configuration via the properties?
Our Flink cluster is running on Kubernetes (1.14.2) and uses the latest stable Flink release (v1.8) with the integrated Kafka connector.
After some fiddling with the docs, I got it working myself. I now provide the required Java keystores at deployment time when running Helm (we deploy the whole Flink stack through Helm charts). The keystores are base64-encoded and saved as a Kubernetes secret. The taskmanager pods mount the secret at the given location.
I can now pass the locations of the keystore/truststore and their passwords on the command line as params when running the Flink job. These params are finally used to configure the Kafka client via properties.
We have an application that was developed long ago (a very old application, from a time before ORMs were on the market) and uses Oracle as its database via JDBC. We are now in a situation where we have to connect to a different database, such as Postgres. We cannot adopt an ORM or a similar tool at this stage, when the application is completely developed. Is there any way we can provide multiple-database support for our application using JDBC?
You can create multiple DataSource connections using the Apache Commons DBCP package; see the sketch after this list.
If you have a Spring application, you can easily configure this in your Spring application context.
http://javarevisited.blogspot.in/2012/06/jdbc-database-connection-pool-in-spring.html
Connection Pooling with Apache DBCP
Or you can define them as JNDI resources in your application server, such as JBoss.
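Here is the sketch mentioned above: a minimal example of the first option with Commons DBCP 2 (URLs and credentials are placeholders). Callers only ever see javax.sql.DataSource, so the existing JDBC code stays unchanged; only SQL-dialect differences remain to be handled:

import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;

public class DataSourceFactory {
    // One pooled DataSource per supported database; callers work
    // against the javax.sql.DataSource interface only.
    public static DataSource oracle() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@//db-host:1521/ORCL"); // placeholder
        ds.setUsername("app");
        ds.setPassword("secret");
        return ds;
    }

    public static DataSource postgres() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://db-host:5432/app"); // placeholder
        ds.setUsername("app");
        ds.setPassword("secret");
        return ds;
    }
}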
How do I configure and run Apache Solr with Spring MVC on a Tomcat 7 server? Will Apache Solr run just by specifying the dependency in Maven, or do I have to install Apache Solr on my local machine?
You need to have the Solr service running on your system or on some server instance; only then will you be able to connect and perform operations on Solr.
Specifying the dependency in Maven is only useful for connecting to Solr and doing read/write operations against the Solr service.
For instance, if we compare this with any database system: your database server is running somewhere, and you use a JDBC driver in your application to perform CRUD operations on that RDBMS.
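For example, with the SolrJ dependency on your classpath, a minimal read/write sketch could look like this (the core name mycore is a placeholder, and the Solr server must already be running per the steps below):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrSketch {
    public static void main(String[] args) throws Exception {
        // Connects to an already-running Solr server (see the install steps below).
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycore").build()) {
            // Write: index one document and commit.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            doc.addField("title", "hello solr");
            client.add(doc);
            client.commit();

            // Read: query everything back.
            client.query(new SolrQuery("*:*"))
                  .getResults()
                  .forEach(System.out::println);
        }
    }
}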
You can follow the steps below to install and run Solr:
Make sure you have Java installed
Download Solr from http://lucene.apache.org/solr/
Extract Solr distribution archive to a directory
Start the Solr server by issuing the following command:
-- if you are on a Linux distribution:
$ bin/solr start
-- if you are on Windows:
bin\solr.cmd start
Once the Solr server is running, use a web browser to open the Admin Console at http://localhost:8983/solr/
For more detailed instructions, follow these links:
https://cwiki.apache.org/confluence/display/solr/Installing+Solr
https://cwiki.apache.org/confluence/display/solr/Running+Solr