We have a TDengine application with more than one type of client, including Docker, Linux, and Windows.
I'm using interval(1d), but the time on Windows & Docker differs by 8 hours. I checked the timezone settings; they are 'Asia/Shanghai' and Beijing Time. I can't tell what the problem might be.
I believe that when importing data, the time is converted based on the timezone setting of the client, and the server just stores the UTC timestamp. So when you query data from another client, the queried timestamp is converted based on that client's timezone setting too. Be careful when you query data by timestamp: make sure you account for the time offset between the two clients. It's better to keep client timezone settings consistent to avoid unexpected results.
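To illustrate the 8-hour gap, here is a minimal sketch in plain Java (the epoch value is made up for the example) of how the same stored UTC instant renders differently under two client zones:

import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TzDemo {
    public static void main(String[] args) {
        // Hypothetical stored value: the server keeps a UTC timestamp, here 2022-04-25 00:00:00 UTC
        Instant stored = Instant.ofEpochMilli(1650844800000L);
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        // A client configured for Asia/Shanghai renders it 8 hours ahead of a UTC client
        System.out.println(fmt.format(stored.atZone(ZoneId.of("Asia/Shanghai")))); // 2022-04-25 08:00:00
        System.out.println(fmt.format(stored.atZone(ZoneId.of("UTC"))));           // 2022-04-25 00:00:00
    }
}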
Be careful to set the TZ environment variable when you want to use TDengine in Docker or in a Kubernetes cluster, like this:
docker run --name tdengine -d -e TZ=Asia/Shanghai tdengine/tdengine
Recommend to use docker-compose to manage the runtime configurations for the TDengine container:
version: "3.7"
networks:
td:
external: true
services:
tdengine:
image: tdengine/tdengine:2.4.0.16
networks:
- td
environment:
TZ: Asia/Shanghai
TAOS_FQDN: localhsot
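To verify the container picked up the setting, check the effective time inside it (the container name here assumes the docker run example above; compose may generate a different one):

docker exec -it tdengine date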
If you use TDengine in k8s, check the docs here: https://taosdata.github.io/TDengine-Operator/en/2.2-tdengine-with-helm.html
Requirement in detail:
I have two databases (both are kept in sync), and if one goes down, the Spring Boot application starts throwing exceptions. In that case I want the application to connect to the second database.
Please help me with this.
Thanks in advance.
As you have a Data Guard implementation in Oracle, with a Primary database and another one in Standby mode, Oracle Transparent Application Failover is the way to go.
Transparent Application Failover (TAF) is a feature of the Java Database Connectivity (JDBC) Oracle Call Interface (OCI) driver. It enables the application to automatically reconnect to a database, if the database instance to which the connection is made fails. In this case, the active transactions roll back.
Database Setup
I am assuming your implementation of DG uses Oracle Restart.
Database: TESTDB
Service in TAF: TESTDB_HA
Primary site
srvctl add service -d testdb -s testdb_ha -l PRIMARY -y AUTOMATIC -e select -m BASIC -z 200 -w 1
srvctl start service -d testdb -s testdb_ha
Standby site
srvctl add service -d testdb -s testdb_ha -l PRIMARY -y AUTOMATIC -e select -m BASIC -z 200 -w 1
srvctl modify service -d testdb -s testdb_ha -failovermethod basic
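You can check the service state on either site with (standard srvctl syntax, using the names above):

srvctl status service -d testdb -s testdb_ha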
Your JDBC connection
jdbc:oracle:thin:@(description=(address=(host=primaryserver)(protocol=tcp)(port=yourdbport))(address=(host=standbyserver)(protocol=tcp)(port=yourport))(failover=yes)(connect_data=(service_name=testdb_ha)(failover_mode=(type=select)(method=basic))))
In this setup, in case of a failover from Primary to Standby, the connection will keep working once the failover is completed, without manual intervention.
I am currently using this configuration in applications deployed in Kubernetes, using Spring Boot and/or Hibernate, and in plain JBoss Java applications. I have personally tested failover scenarios, and they are totally transparent to the applications. Obviously, if you have a transaction or query running at the moment the failover is performed, you will get an error. But you don't need to manually change any JDBC settings when switching from the primary site to the standby site.
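Since the question mentions Spring Boot, here is a minimal sketch of pointing the datasource at the TAF service (the property keys are standard Spring Boot; the credentials are placeholders):

spring.datasource.url=jdbc:oracle:thin:@(description=(address=(host=primaryserver)(protocol=tcp)(port=yourdbport))(address=(host=standbyserver)(protocol=tcp)(port=yourport))(failover=yes)(connect_data=(service_name=testdb_ha)(failover_mode=(type=select)(method=basic))))
spring.datasource.username=app_user
spring.datasource.password=app_password
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver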
I am new to Docker and have been learning it for quite some time. I work with the LAPP (Linux, Apache, PHP and PostgreSQL) stack, and have always worked with a monolithic architecture where everything is put together on one single server / VPS.
After learning about Docker my mind changed quite a bit, and now I am trying to containerize the stack. When I came to the db, I thought that if I ran Docker in swarm mode, the db would be replicated or synced automatically to my worker node when I scale it up... but in fact it does not. Here is the .yml file I am using to reproduce the scenario.
version: "3"
services:
db:
image: postgres:9.5
container_name: db
restart: always
tty: true
ports:
- "5432:5432"
environment:
POSTGRES_PASSWORD: mydocker_pass
POSTGRES_USER: mydocker_user
POSTGRES_DB: mydocker
volumes:
- db-data:/var/lib/postgresql/data
volumes:
db-data:
At first I just created a single container on my swarm manager, but then I scaled this service up with docker service scale db_stack_db=2, and it worked perfectly: a new container was running on my worker node. On my worker node I also have the same volume that I already have on my manager node, but sadly, when I write something into the db on the manager node, it doesn't show up on my worker node, and vice versa, even though they have the same volume name, the same pre-created database (mydocker), and the same db user (mydocker_user) and db password (mydocker_pass). This means the instance on the worker node is not syncing with the manager node.
If anyone here has experienced this kind of scenario, please help me and kindly share your thoughts with me. What is the best practice with swarm mode? Do I have to keep the db on only one node? If I scale the db up and it runs on the worker node, is that useful, and does it help with the load on the db service, given that the manager and worker nodes don't share the same data?
Regards.
I want to replicate PostgreSQL data from a Windows server to a Linux server. I know how to set up replication between the same operating systems, but that method is not working between Windows and Linux. Is this possible, and if so, what would be the best way to do it?
You cannot use streaming replication between different operating systems.
Look at the PostgreSQL Wiki for a list of replication solutions. Some of them should work for you.
From PostgreSQL v10 on, you could consider logical replication.
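As a sketch of the logical replication route (v10 or later; the publication, subscription, table, and connection values here are made up):

-- on the Windows publisher
CREATE PUBLICATION my_pub FOR TABLE my_table;

-- on the Linux subscriber (the table must already exist with the same schema)
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=windows-server port=5432 dbname=mydb user=repuser password=secret'
    PUBLICATION my_pub;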
I've done this using PostgreSQL 9.5.21 as master on Windows 2012 R2 and slave on Ubuntu 14.04.
You have to take care of a few things:
the most similar CPUs possible (page size, architecture, registers): you can't mix 64-bit and 32-bit, or use CPUs with different endianness or page sizes;
the same word size for the OS as well: both 32-bit or both 64-bit;
the same major version of PG: 9.5.x with the same or another 9.5.x version (that's for streaming replication, which I'm using; logical replication works across different PG versions).
So, I found an already-installed PG on the Windows server. Edit postgresql.conf to enable replication and PITR, and pg_hba.conf to allow the connection.
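A sketch of the relevant master-side settings (values illustrative for 9.5; from 9.6 on the wal_level value is replica instead of hot_standby):

# postgresql.conf on the Windows master
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 64

# pg_hba.conf: allow the slave to connect for replication (slave address assumed)
host    replication    postgres    ip-slave/32    md5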
Then I moved to Ubuntu and, with PG stopped, fetched a base backup from the master with:
pg_basebackup -D /tmp/db/ -X stream -R -U postgres -h ip-master
Then I modified the configuration and replaced the data directory with /tmp/db.
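On Ubuntu 14.04 that swap looks roughly like this (Debian-style paths assumed; adjust to your layout):

# with PG still stopped, move the new base backup into place and fix ownership/permissions
sudo mv /var/lib/postgresql/9.5/main /var/lib/postgresql/9.5/main.old
sudo mv /tmp/db /var/lib/postgresql/9.5/main
sudo chown -R postgres:postgres /var/lib/postgresql/9.5/main
sudo chmod 700 /var/lib/postgresql/9.5/main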
Start the slave, and it is up and running, but look at this:
2020-03-18 21:05:31.598 CET [44640] LOG: database system is ready to accept read only connections
2020-03-18 21:05:31.631 CET [44645] LOG: started streaming WAL from primary at 36/C2000000 on timeline 1
2020-03-18 21:05:31.905 CET [44646] [unknown]@[unknown] LOG: incomplete startup packet
2020-03-18 21:05:32.416 CET [44649] postgres@postgres FATAL: database locale is incompatible with operating system
2020-03-18 21:05:32.416 CET [44649] postgres@postgres DETAIL: The database was initialized with LC_COLLATE "Italian_Italy.1252", which is not recognized by setlocale().
2020-03-18 21:05:32.416 CET [44649] postgres@postgres HINT: Recreate the database with another locale or install the missing locale.
Here's the funny thing: replication works, but you can't connect to the databases.
Anyway, if you raw-copy the data directory on Windows, it works like a charm.
Of course, if you re-create the cluster with UTF-8, there's no problem at all.
N.B.: thanks a lot to incognito and ilmari on the official IRC PG channel for the hints.
I'm saving dates in a MySQL db as DATETIME in UTC, so if the CST time is 2014-07-22 10:34 am, the db saves 2014-07-22 15:34. When testing the app locally (OS X 10.9), with either a local db or a connection to the remote db, Angular formats it correctly as 2014-07-22 10:34 am. When running the app on a server (Ubuntu + nginx + Sails.js), the date reads as 2014-07-22 3:34 pm, so the timezone is not being taken into account. On the server I've set the correct timezone using tzconfig, and it shows local as CST and universal as UTC. As I mentioned above, I can point the local Sails app at the remote database and the time gets formatted correctly. So as long as the Sails server runs locally the time is formatted correctly, but when the Sails app runs on the server the time is wrong. Any suggestions?
Thanks
I think your question needs a better explanation; it is hard to tell exactly when the date is getting mangled. Are you saying that when the Sails server is moved to a remote server the date is wrong?
Personally, I stopped using date/time formats and make everything a Unix timestamp (i.e., a number). This is universal and is never translated until I want it to be translated.
I use this in combination with moment.js (on the server and in Angular) to translate my times to/from numbers and into the correct timezone.
This method has helped tremendously, because dates can be manipulated by the server, the db, the client, or the db adapter. Maybe none, maybe all, and it might even be poor design on my part, but after switching to numbers I never had these problems again.
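A minimal sketch of that approach (assuming the moment and moment-timezone packages are installed; the zone and format strings are just examples):

// store a plain number (seconds since the epoch); render it per client with moment-timezone
var moment = require("moment-timezone");

var stored = moment.utc("2014-07-22 15:34", "YYYY-MM-DD HH:mm").unix(); // what goes into the db
var display = moment.unix(stored).tz("America/Chicago").format("YYYY-MM-DD h:mm a");
console.log(display); // "2014-07-22 10:34 am"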
I understood from the GAE documentation that the production server timezone is always UTC, while the local development server runs in CET. Is there a way to force the local development server to also run in UTC?
The development server is running on Mac OS.
Thanks,
Hugues
Just found the answer. In order to set the server timezone, go in Eclipse to "Run configurations", then "VM arguments", and add "-Duser.timezone=UTC".
This will set the server timezone to the value you want (UTC in this case). This is really handy, as Google App Engine production always runs in UTC, whereas the development server (at least in my case) was running in the local timezone. The net effect was that I had different behavior between dev and prod.
Hugues
Well, you can use this when saving a date value to your datastore to convert it to your specific timezone.
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Parse the incoming string as UTC, then re-format it in the target zone
DateFormat utcFormat = new SimpleDateFormat(patternString);
utcFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
DateFormat indianFormat = new SimpleDateFormat(patternString);
indianFormat.setTimeZone(TimeZone.getTimeZone("Asia/Kolkata"));
Date timestamp = utcFormat.parse(inputString);
String output = indianFormat.format(timestamp);
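For example, with patternString = "yyyy-MM-dd HH:mm" and inputString = "2014-07-22 15:34", output comes out as "2014-07-22 21:04", since Asia/Kolkata is UTC+05:30.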
GAE devserver uses the local timezone by default.
I use this code to force it to UTC:
import java.util.TimeZone;
import org.joda.time.DateTimeZone;
import com.google.appengine.api.utils.SystemProperty;

// Only the local dev server needs this; production already runs in UTC
boolean isDevEnvironment = SystemProperty.environment.value() == SystemProperty.Environment.Value.Development;
if (isDevEnvironment) {
    TimeZone.setDefault(DateTimeZone.UTC.toTimeZone()); // JDK default zone
    DateTimeZone.setDefault(DateTimeZone.UTC);          // Joda-Time default zone
}
You need to run it once, very early during server startup and initialization.
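One place to run it once is a ServletContextListener, sketched below (assuming a servlet 3.x runtime; the class name is made up):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import java.util.TimeZone;
import org.joda.time.DateTimeZone;
import com.google.appengine.api.utils.SystemProperty;

@WebListener
public class TimeZoneInitializer implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // runs once, before any request is served
        if (SystemProperty.environment.value() == SystemProperty.Environment.Value.Development) {
            TimeZone.setDefault(DateTimeZone.UTC.toTimeZone());
            DateTimeZone.setDefault(DateTimeZone.UTC);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}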