What is the proper connection pool configuration for a TDengine database?
I encountered this error:
connection is not available after 30 seconds.
My configuration is:
maximumPoolSize=10
minimumIdle=5
maxLifetime=0
connectionTimeout=30000
idleTimeout=0
Is this a good configuration? Does increasing maximumPoolSize guarantee that the "connection is not available" error will not occur? I want to know the best configuration for this.
maximumPoolSize = threadSize * timePerSQL / connectionTimeout
threadSize: the number of threads executing SQL simultaneously
connectionTimeout: the connection timeout of the pool
maximumPoolSize: the maximum number of connections in the connection pool
timePerSQL: the time each thread needs per query (acquire the connection, execute, close)
The number of threads can be greater than the number of connections in the pool, but make sure each thread releases its connection promptly. For example, suppose each thread takes 5 seconds to get a connection, execute its query, and close the connection.
With maximumPoolSize = 10 and connectionTimeout = 30 seconds, the processing capacity of the application is 30 s / 5 s * 10 = 60, i.e. the pool can serve at most 60 such threads before one of them waits longer than the timeout. If the number of threads exceeds 60, the error will definitely be reported.
Hence 60 is the limit for this configuration; increasing maximumPoolSize raises that limit, but the right value also depends on the CPU of your server, so it gets more complicated than the formula alone.
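As a rough check of the arithmetic above, here is a small sketch; the 5-second figure is the assumed per-query time from this example, not a measured value:

# Back-of-the-envelope sizing check using the figures discussed above.
connection_timeout_s = 30   # connectionTimeout = 30000 ms
time_per_sql_s = 5          # assumed time a thread holds a connection per query
maximum_pool_size = 10

# Threads the pool can serve before one waits longer than connectionTimeout.
max_threads = (connection_timeout_s / time_per_sql_s) * maximum_pool_size
print(max_threads)  # 60.0

# Rearranged: pool size needed for a given number of concurrent threads.
thread_size = 60
needed_pool_size = thread_size * time_per_sql_s / connection_timeout_s
print(needed_pool_size)  # 10.0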
"Connection closed" occurs when executing a function for data pre-processing.
The data pre-processing is as follows:
1. Import data points for about 30 topics from the database (data for 9 days at 1-minute intervals: 60 * 24 * 9 * 30 = 388,800 values).
2. Convert the data to a pandas DataFrame for pre-processing such as missing-value handling or resampling (this step takes the longest).
3. Process the data.
During the pre-processing above, the following error occurs:
volttron.platform.vip.rmq_connection ERROR: Connection closed unexpectedly, reopening in 30 seconds.
This error probably comes from how the VOLTTRON platform manages the agent.
Since step 2 takes more than 30 seconds, the error occurs and the VOLTTRON platform automatically restarts the agent.
Because of this, the agent cannot perform the data processing normally.
Does anyone know how to avoid this?
If this is happening during agent instantiation, I would suggest moving the pre-processing out of the init or configuration steps and into a function with the @Core.receiver("onstart") decorator. This will stop the agent instantiation and configuration steps from timing out. The listener agent's onstart method can be used as an example.
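A minimal sketch of that structure (the class name and the pre-processing body are placeholders, not the actual agent):

from volttron.platform.vip.agent import Agent, Core


class PreprocessAgent(Agent):
    """Hypothetical agent that defers heavy pre-processing until after startup."""

    def __init__(self, **kwargs):
        super(PreprocessAgent, self).__init__(**kwargs)
        # Keep __init__ (and configuration handling) lightweight so they do not time out.

    @Core.receiver("onstart")
    def onstart(self, sender, **kwargs):
        # The long-running work (database import, pandas resampling, etc.)
        # runs here, after instantiation and configuration have completed.
        self.preprocess_data()

    def preprocess_data(self):
        # Placeholder for the 30-topic import and resampling described above.
        pass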
After encountering the following error several times (on an RDS Postgres instance):
ERROR: canceling statement due to conflict with recovery
Detail: User query might have needed to see row versions that must be removed
I ran (on the hot standby):
SELECT *
FROM pg_stat_database_conflicts;
and found that all the conflicts have to do with confl_snapshot, which the documentation explains as:
confl_snapshot: Number of queries in this database that have been canceled due to old snapshots
What might be causing this kind of conflict (an old snapshot)?
If it helps, here are some of the relevant settings (from running SHOW ALL; on the standby):
hot_standby: on
hot_standby_feedback: off
max_standby_archive_delay: 30s
max_standby_streaming_delay: 1h
old_snapshot_threshold: -1
vacuum_defer_cleanup_age: 0
vacuum_freeze_min_age: 50000000
vacuum_freeze_table_age: 150000000
vacuum_multixact_freeze_min_age: 5000000
vacuum_multixact_freeze_table_age: 150000000
wal_level: replica
wal_receiver_status_interval: 10s
wal_receiver_timeout: 30s
wal_retrieve_retry_interval: 5s
wal_segment_size: 16MB
wal_sender_timeout: 30s
wal_writer_delay: 200ms
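For reference, the conflict counters and the standby settings quoted above can be collected in one pass; a small psycopg2 sketch (the connection string is a placeholder for the hot standby's connection info):

import psycopg2

DSN = "host=my-standby dbname=postgres"  # hypothetical standby connection info

SETTINGS = [
    "hot_standby_feedback",
    "max_standby_archive_delay",
    "max_standby_streaming_delay",
    "old_snapshot_threshold",
    "vacuum_defer_cleanup_age",
]

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        # Per-database conflict counters (confl_snapshot is the one seen above).
        cur.execute(
            "SELECT datname, confl_snapshot, confl_lock, confl_bufferpin, confl_deadlock "
            "FROM pg_stat_database_conflicts ORDER BY confl_snapshot DESC;"
        )
        for row in cur.fetchall():
            print(row)
        # The standby-side settings most relevant to snapshot conflicts.
        for name in SETTINGS:
            cur.execute("SELECT current_setting(%s);", (name,))
            print(name, "=", cur.fetchone()[0])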
I have a CockroachDB instance running in production and would like to know the values of --max-sql-memory and --cache that were specified when the database was started. I am trying to enhance performance by following this production checklist, but I am not able to infer the settings either on the dashboard or in the SQL console.
Where can I check the values of --max-sql-memory and --cache?
Note: I am able to access the cockroachdb admin console and sql tables.
You can find this information in the logs, shortly after node startup:
I190626 10:22:47.714002 1 cli/start.go:1082 CockroachDB CCL v19.1.2 (x86_64-unknown-linux-gnu, built 2019/06/07 17:32:15, go1.11.6)
I190626 10:22:47.815277 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 31 GiB, using system memory
I190626 10:22:47.815311 1 server/config.go:386 system total memory: 31 GiB
I190626 10:22:47.815411 1 server/config.go:388 server configuration:
max offset 500000000
cache size 7.8 GiB <====
SQL memory pool size 7.8 GiB <====
scan interval 10m0s
scan min idle time 10ms
scan max idle time 1s
event log enabled true
If the logs have been rotated, the effective values depend on the flags that were passed at startup.
The defaults for v19.1 are 128MB, with recommended settings being 0.25 (a quarter of system memory).
The settings are not currently logged periodically or exported through metrics.
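If the startup lines are still present, a quick way to pull them out of a node's log is a sketch like the following (the log path is an assumption based on the default store location; adjust it to your deployment):

LOG_PATH = "cockroach-data/logs/cockroach.log"  # assumed default; change as needed

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        # The startup configuration block prints the effective values on these lines.
        if "cache size" in line or "SQL memory pool size" in line:
            print(line.rstrip())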
I have a set of tasks that I want to run on a backend, capped at a maximum of 2 instances. Is there a way to execute as many tasks as possible without creating more instances than that maximum?
I want to execute the maximum permissible number of tasks while keeping to the 2-instance cap. I cannot cap the task rate, because some tasks take a second and some take 20 seconds to finish.
Thanks.
You can specify the number of instances that you want in backends.xml:
<backends>
<backend name="memdb">
<class>B8</class>
<instances>2</instances>
</backend>
</backends>
Or you can switch from backends to modules. Then you can choose a scaling method and set the maximum number of instances, if you select Basic Scaling.
https://developers.google.com/appengine/docs/java/modules/
From reading, I can see the Work_Queue wait can safely be ignored, but I can't find much about logcapture_wait. This is from BOL: "Waiting for log records to become available. Can occur either when waiting for new log records to be generated by connections or for I/O completion when reading log not in the cache. This is an expected wait if the log scan is caught up to the end of log or is reading from disk."
Average disk sec/write is basically 0 on both SQL Servers, so I'm guessing this wait type can safely be ignored?
Here are the top 10 waits from the primary:
wait_type pct running_pct
HADR_LOGCAPTURE_WAIT 45.98 45.98
HADR_WORK_QUEUE 44.89 90.87
HADR_NOTIFICATION_DEQUEUE 1.53 92.40
BROKER_TRANSMITTER 1.53 93.93
CXPACKET 1.42 95.35
REDO_THREAD_PENDING_WORK 1.36 96.71
HADR_CLUSAPI_CALL 0.78 97.49
HADR_TIMER_TASK 0.77 98.26
PAGEIOLATCH_SH 0.66 98.92
OLEDB 0.53 99.45
Here are the top 10 waits from the secondary:
wait_type pct running_pct
REDO_THREAD_PENDING_WORK 66.43 66.43
HADR_WORK_QUEUE 31.06 97.49
BROKER_TRANSMITTER 0.79 98.28
HADR_NOTIFICATION_DEQUEUE 0.79 99.07
Don't troubleshoot problems on your server by looking at total waits. If you want to troubleshoot what is causing you problems, then you need to look at current waits. You can do that by either querying sys.dm_os_waiting_tasks or by grabbing all waits (like you did above), waiting for 1 minute, grabbing all waits again, and subtracting them to see what waits actually occurred over that minute.
See the webcast I did for more info: Troubleshooting with DMVs
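A rough sketch of that two-sample approach (not the exact method from the webcast), assuming a SQL Server reachable via pyodbc with the placeholder connection string below:

import time

import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"  # hypothetical server
)
QUERY = "SELECT wait_type, wait_time_ms FROM sys.dm_os_wait_stats;"


def snapshot(cursor):
    # Return {wait_type: wait_time_ms} for the current cumulative totals.
    cursor.execute(QUERY)
    return {wait_type: wait_ms for wait_type, wait_ms in cursor.fetchall()}


with pyodbc.connect(CONN_STR) as conn:
    cur = conn.cursor()
    first = snapshot(cur)
    time.sleep(60)          # wait one minute between the two samples
    second = snapshot(cur)

# Waits that actually accumulated during that minute, largest first.
delta = {w: second[w] - first.get(w, 0) for w in second if second[w] > first.get(w, 0)}
for wait_type, ms in sorted(delta.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{wait_type:<40} {ms} ms")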
That aside, HADR_LOGCAPTURE_WAIT is a background wait type and does not affect any running queries. You can ignore it.
No, you can't simply ignore HADR_LOGCAPTURE_WAIT. This wait type happens when SQL Server is either waiting for new log data to be generated or when there is latency while trying to read data from the log file. Internal and external fragmentation of the log file or slow storage could contribute to this wait type as well.