MariaDB storage increase - database

We are facing a problem with MariaDB storage.
We run CentOS 8 with a monitoring system and MariaDB as the database. When creating the Linux machine we gave it 120 GB, but after adding nodes to the monitoring system the disk filled up, so we increased the Linux root space by 50 GB.
But the problem still exists:
the MariaDB service stops working, so we have to restart it;
we have to restart the nginx service to restore web access.
After these two manipulations the monitoring system works for 5-6 hours, then we have to restart the DB and web service again.
We think the database is not using all 170 GB and 'sees' only the initial 120 GB. As a test we deleted approximately 15 devices (roughly 15 GB) from the monitoring system and ran it for 5 days, and there were no DB or web issues.
MariaDB version: 10.3.28
The storage engine used is InnoDB.
We checked innodb_page_size:
innodb_page_size = 16384
Could someone help us?
(screenshots: InnoDB status 1, InnoDB status 2)
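To confirm that the filesystem behind the MariaDB datadir really reports the grown 170 GB, a quick check like the following can be run first (a minimal sketch; the datadir path /var/lib/mysql is an assumption, adjust it to your installation):
import shutil
# Hypothetical path: adjust to the actual MariaDB datadir location.
datadir = "/var/lib/mysql"
usage = shutil.disk_usage(datadir)
gib = 1024 ** 3
print(f"total: {usage.total / gib:.1f} GiB")  # should show the full ~170 GB if the resize worked
print(f"used : {usage.used / gib:.1f} GiB")
print(f"free : {usage.free / gib:.1f} GiB")
If this still reports only the original 120 GB, the logical volume or filesystem was never actually extended, and MariaDB will keep running into the old limit.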

Analysis of GLOBAL STATUS and VARIABLES:
Observations:
Version: 10.3.28-MariaDB
3.6 GB of RAM
Uptime = 6d 22:26:28
429 Queries/sec : 145 Questions/sec
The More Important Issues:
Do you have 3.6GB of RAM? Are you using InnoDB? If yes to both of those, then make these two changes; they may help performance a lot:
key_buffer_size = 40M
innodb_buffer_pool_size = 2G
I'm getting conflicting advice on table_open_cache; let's leave it alone for now.
If you have SSD, I recommend these two:
innodb_io_capacity = 1000
innodb_flush_neighbors = 0
innodb_log_file_size = 200M -- Caution: It may be complex to change this in the version you are running. If so, leave it alone.
You seem to DELETE far more rows than you INSERT; what is going on?
Unless you do a lot of ALTERs, make this change:
myisam_sort_buffer_size = 50M
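A minimal sketch of applying the two main changes at runtime, assuming the pymysql client and working credentials (both variables are dynamic in MariaDB 10.3, but add the same values to my.cnf so they survive a restart):
import pymysql
# Assumed connection details; adjust host/user/password to your server.
conn = pymysql.connect(host="localhost", user="root", password="***")
with conn.cursor() as cur:
    # Shrink the (unused) MyISAM key buffer and grow the InnoDB buffer pool.
    cur.execute("SET GLOBAL key_buffer_size = 40 * 1024 * 1024")
    cur.execute("SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024")
    cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'")
    print(cur.fetchone())  # confirm the new value took effect
conn.close()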
Details and other observations:
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) ) = ((128M - 1.2 * 0 * 1024)) / 3865470566.4 = 3.5% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size (now 134217728).
( Key_blocks_used * 1024 / key_buffer_size ) = 0 * 1024 / 128M = 0 -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size (now 134217728) to avoid unnecessary memory usage.
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) ) = ((128M / 0.20 + 128M / 0.70)) / 3865470566.4 = 22.3% -- Most of the available RAM should be made available for caching.
-- http://mysql.rjweb.org/doc.php/memory
( Key_reads + Key_writes + Innodb_pages_read + Innodb_pages_written + Innodb_dblwr_writes + Innodb_buffer_pool_pages_flushed ) = (0 + 0 + 107837817 + 14228075 + 669027 + 14217155) / 599188 = 228 /sec -- IOPs?
-- If the hardware can handle it, set innodb_io_capacity (now 200) to about this value.
( ( Key_reads + Key_writes + Innodb_pages_read + Innodb_pages_written + Innodb_dblwr_writes + Innodb_buffer_pool_pages_flushed ) / innodb_io_capacity / Uptime ) = ( 0 + 0 + 107837817 + 14228075 + 669027 + 14217155 ) / 200 / 599188 = 114.3% -- This may be a metric indicating whether innodb_io_capacity is set reasonably.
-- Increase innodb_io_capacity (now 200) if the hardware can handle it.
( Table_open_cache_misses ) = 12,156,771 / 599188 = 20 /sec
-- May need to increase table_open_cache (now 2000)
( Table_open_cache_misses / (Table_open_cache_hits + Table_open_cache_misses) ) = 12,156,771 / (184539214 + 12156771) = 6.2% -- Effectiveness of table_open_cache.
-- Increase table_open_cache (now 2000) and check table_open_cache_instances (now 8).
( innodb_buffer_pool_size ) = 128M -- InnoDB Data + Index cache
-- 128M (an old default) is woefully small.
( innodb_buffer_pool_size ) = 128M / 3865470566.4 = 3.5% -- % of RAM used for InnoDB buffer_pool
-- Set to about 70% of available RAM. (Too low is less efficient; too high risks swapping.)
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) ) = ((128M / 0.20 + 128M / 0.70)) / 3865470566.4 = 22.3% -- (metric for judging RAM usage)
( innodb_lru_scan_depth ) = 1,024
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixed by lowering lru_scan_depth
( innodb_io_capacity ) = 200 -- When flushing, use this many IOPs.
-- Reads could be sluggish or spiky.
( innodb_io_capacity_max / innodb_io_capacity ) = 2,000 / 200 = 10 -- Capacity: max/plain
-- Recommend 2. Max should be about equal to the IOPs your I/O subsystem can handle. (If the drive type is unknown 2000/200 may be a reasonable pair.)
( Innodb_log_writes ) = 33,145,091 / 599188 = 55 /sec
( Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group / innodb_log_file_size ) = 67,002,682,368 / (599188 / 3600) / 2 / 48M = 4 -- Ratio
-- (see minutes)
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 599,188 / 60 * 48M / 67002682368 = 7.5 -- Minutes between InnoDB log rotations Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf.
-- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size (now 50331648). (Cannot change in AWS.)
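As a quick sanity check of where the 200M log-file suggestion above comes from, the status numbers reported here can be plugged into a short calculation (aiming for roughly an hour of redo writes across the log file group):
# Numbers taken from the GLOBAL STATUS / VARIABLES above
innodb_os_log_written = 67_002_682_368  # bytes written to the redo log so far
uptime_s = 599_188                      # seconds of uptime
log_files_in_group = 2
bytes_per_hour = innodb_os_log_written / uptime_s * 3600
suggested_file_size = bytes_per_hour / log_files_in_group
print(f"{suggested_file_size / 1024**2:.0f} MB per log file")  # ~192 MB, i.e. roughly the suggested 200M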
( innodb_flush_method ) = innodb_flush_method = fsync -- How InnoDB should ask the OS to write blocks. Suggest O_DIRECT or O_ALL_DIRECT (Percona) to avoid double buffering. (At least for Unix.) See chrischandler for caveat about O_ALL_DIRECT
( default_tmp_storage_engine ) = default_tmp_storage_engine =
( innodb_flush_neighbors ) = 1 -- A minor optimization when writing blocks to disk.
-- Use 0 for SSD drives; 1 for HDD.
( ( Innodb_pages_read + Innodb_pages_written ) / Uptime / innodb_io_capacity ) = ( 107837817 + 14228075 ) / 599188 / 200 = 101.9% -- If > 100%, need more io_capacity.
-- Increase innodb_io_capacity (now 200) if the drives can handle it.
( innodb_io_capacity ) = 200 -- I/O ops per second the disk is capable of. 100 for slow drives; 200 for spinning drives; 1000-2000 for SSDs; multiply by RAID factor.
( sync_binlog ) = 0 -- Use 1 for added security, at some cost of I/O. =1 may lead to lots of "query end"; =0 may lead to "binlog at impossible position" and lost transactions in a crash, but is faster. 0 is OK for Galera.
( innodb_adaptive_hash_index ) = innodb_adaptive_hash_index = ON -- Usually should be ON.
-- There are cases where OFF is better. See also innodb_adaptive_hash_index_parts (now 8) (after 5.7.9) and innodb_adaptive_hash_index_partitions (MariaDB and Percona). ON has been implicated in rare crashes (bug 73890). 10.5.0 decided to default OFF.
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( myisam_sort_buffer_size ) = 134,216,704 / 3865470566.4 = 3.5% -- Used for ALTER, CREATE INDEX, OPTIMIZE, LOAD DATA; set when you need it. Also for MyISAM's REPAIR TABLE.
-- Decrease myisam_sort_buffer_size (now 134216704) to keep from blowing out RAM.
( innodb_ft_result_cache_limit ) = 2,000,000,000 / 3865470566.4 = 51.7% -- Byte limit on FULLTEXT resultset. (Possibly not preallocated, but grows?)
-- Lower the setting.
( innodb_autoextend_increment * 1048576 ) = (64 * 1048576) / 3865470566.4 = 1.7% -- How much to increase ibdata1 by (when needed).
-- Decrease setting to avoid premature swapping.
( character_set_server ) = character_set_server = latin1
-- Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = local_infile = ON
-- local_infile (now ON) = ON is a potential security issue
( Created_tmp_disk_tables / Created_tmp_tables ) = 542,381 / 1084764 = 50.0% -- Percent of temp tables that spilled to disk
-- Maybe increase tmp_table_size (now 16777216) and max_heap_table_size (now 16777216); improve indexes; avoid blobs, etc.
( Com_delete / Com_insert ) = 2,294,352 / 1521534 = 150.8% -- Deletes / Inserts (as a pct). (Ignores LOAD, REPLACE, etc.)
( Com_insert + Com_delete + Com_delete_multi + Com_replace + Com_update + Com_update_multi ) = (1521534 + 2294352 + 21366 + 0 + 45590666 + 0) / 599188 = 82 /sec -- writes/sec
-- 50 writes/sec + log flushes will probably max out I/O write capacity of normal drives
( Com__biggest ) = Com__biggest = Com_stmt_execute -- Which of the "Com_" metrics is biggest.
-- Normally it is Com_select (now 34545111). If something else, then it may be a sloppy platform, or may be something else.
( binlog_format ) = binlog_format = MIXED -- STATEMENT/ROW/MIXED.
-- ROW is preferred by 5.7 (10.3)
( slow_query_log ) = slow_query_log = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( back_log ) = 80 -- (Autosized as of 5.6.6; based on max_connections)
-- Raising to min(150, max_connections (now 151)) may help when doing lots of connections.
( thread_cache_size / Max_used_connections ) = 151 / 51 = 296.1%
-- There is no advantage in having the thread cache bigger than your likely number of connections. Wasting space is the disadvantage.
You have the Query Cache half-off. You should set both query_cache_type = OFF and query_cache_size = 0 . There is (according to a rumor) a 'bug' in the QC code that leaves some code on unless you turn off both of those settings.
Abnormally small:
(Com_select + Qcache_hits) / (Com_insert + Com_update + Com_delete + Com_replace) = 0.699
Com_show_tables = 0
Innodb_buffer_pool_bytes_data = 193 /sec
Table_locks_immediate = 0.53 /HR
eq_range_index_dive_limit = 0
innodb_spin_wait_delay = 4
Abnormally large:
Com_delete_multi = 0.036 /sec
Com_stmt_close = 141 /sec
Com_stmt_execute = 141 /sec
Com_stmt_prepare = 141 /sec
Handler_discover = 0.94 /HR
Innodb_buffer_pool_read_ahead = 162 /sec
Innodb_buffer_pool_reads * innodb_page_size / innodb_buffer_pool_size = 114837.2%
Innodb_data_pending_fsyncs = 2
Innodb_os_log_fsyncs = 55 /sec
Opened_plugin_libraries = 0.006 /HR
Table_open_cache_active_instances = 4
Tc_log_page_size = 4,096
Abnormal strings:
aria_recover_options = BACKUP,QUICK
innodb_fast_shutdown = 1
log_slow_admin_statements = ON
myisam_stats_method = NULLS_UNEQUAL
old_alter_table = DEFAULT

Related

How can I speed up my optimization with Gekko?

My program optimizes the charging and discharging of a home battery to minimize the cost of electricity at the end of the year. Household electricity usage is measured every 15 minutes, so I have 96 measurement points per day. I want to optimize the charging and discharging of the battery over 2 days, so that day 1 takes the usage of day 2 into account. I wrote the following code and it works.
from gekko import GEKKO
import numpy as np
import pandas as pd
import time
import math
# ------------------------ Import and read input data ------------------------
file = r'D:\Bedrijfseconomie\MP Thuisbatterijen\Spyder - Gekko\Data Sim 1.xlsx'
data = pd.read_excel(file, sheet_name='Input', na_values='NaN')
dataRead = pd.DataFrame(data, columns= ['Timestep','Verbruik woning (kWh)','Prijs afname (€/kWh)',
'Capaciteit batterij (kW)','Capaciteit batterij (kWh)',
'Rendement (%)','Verbruikersprofiel'])
timestep = dataRead['Timestep'].to_numpy()
usage_home = dataRead['Verbruik woning (kWh)'].to_numpy()
price = dataRead['Prijs afname (€/kWh)'].to_numpy()
cap_batt_kW = dataRead['Capaciteit batterij (kW)'].iloc[0]
cap_batt_kWh = dataRead['Capaciteit batterij (kWh)'].iloc[0]
efficiency = dataRead['Rendement (%)'].iloc[0]
usersprofile = dataRead['Verbruikersprofiel'].iloc[0]
# ---------------------------- Optimization model ----------------------------
# Initialise model
m = GEKKO()
# Global options
m.options.SOLVER = 1
# Constants
snelheid_laden = cap_batt_kW/4
T = len(timestep)
loss_charging = m.Const(value = (1-efficiency)/2)
max_cap_batt = m.Const(value = cap_batt_kWh)
min_cap_batt = m.Const(value = 0)
max_charge = m.Const(value = snelheid_laden) # max battery can charge in 15min
max_decharge = m.Const(value = -snelheid_laden) # max battery can decharge in 15min
# Parameters
dummy = np.array(np.ones([T]))
# Variables
e_batt = m.Array(m.Var, (T), lb = min_cap_batt, ub = max_cap_batt) # energy in battery
usage_net = m.Array(m.Var, (T)) # usage home & charge/decharge battery
price_paid = m.Array(m.Var, (T)) # price paid each 15min
charging = m.Array(m.Var, (T), lb = max_decharge, ub = max_charge) # amount charge/decharge each 15min
# Intermediates
e_batt[0] = m.Intermediate(charging[0])
for t in range(T):
    e_batt[t] = m.Intermediate(m.sum([charging[i]*(1-loss_charging) for i in range(t)]))
usage_net = [m.Intermediate(usage_home[t] + charging[t]) for t in range(T)]
price_paid = [m.Intermediate(usage_net[t] * price[t] / 100) for t in range(T)]
total_price = m.Intermediate(m.sum([price_paid[t] for t in range(T)]))
# Equations (constraints)
m.Equation([min_cap_batt*dummy[t] <= e_batt[t] for t in range(T)])
m.Equation([max_cap_batt*dummy[t] >= e_batt[t] for t in range(T)])
m.Equation([max_charge*dummy[t] >= charging[t] for t in range(T)])
m.Equation([max_decharge*dummy[t] <= charging[t] for t in range(T)])
m.Equation([min_cap_batt*dummy[t] <= usage_net[t] for t in range(T)])
m.Equation([(-1*charging[t]) <= (1-loss_charging)*e_batt[t] for t in range(T)])
# Objective
m.Minimize(total_price)
# Solve problem
m.solve()
My code runs and works, but although it reports a solution time of 10 seconds, the total time it takes to run is around 8 minutes. Does anyone know a way to speed it up?
There are a few ways to speed up the Gekko code:
Solve locally instead of on the public server. The option is m=GEKKO(remote=False). The public server can slow down with many jobs.
Use sum() instead of m.sum(). This can be faster for compiling the model. Otherwise, use m.integral(x) if you need the integral of x.
Many of the equations are repeated at each time horizon step. Gekko is more efficient using a single equation definition with IMODE=2 (for algebraic equation models) or IMODE=6 (for differential / algebraic equation models) and then it creates the equations over the time horizon. You may need to use m.vsum() instead of m.sum().
For additional diagnosis, try setting m.options.DIAGLEVEL=1 to get a detailed timing report of how long it takes to compile the model and perform each function, 1st derivative, and 2nd derivative calculation. It also gives a detailed view of the solver versus model time during the solution phase.
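For example, the first and last suggestions are just two option changes (a sketch; the rest of the model definition stays as in the question):
from gekko import GEKKO
m = GEKKO(remote=False)   # solve locally instead of on the public server
m.options.DIAGLEVEL = 1   # report timing for compile, derivative, and solver phases
# ... define constants, variables, equations and the objective as before ...
# m.solve()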
Update with Data File Testing
Thanks for sending the data file. The run directory shows that the model file is 58,682 lines long. It takes a while to compile a model that size. Here is the solution from the files you sent:
--------- APM Model Size ------------
Each time step contains
Objects : 193
Constants : 5
Variables : 20641
Intermediates: 578
Connections : 18721
Equations : 20259
Residuals : 19681
Number of state variables: 20641
Number of total equations: - 19873
Number of slack variables: - 1152
---------------------------------------
Degrees of freedom : -384
* Warning: DOF <= 0
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter Objective Convergence
0 3.37044E+01 5.00000E+00
1 2.81987E+01 1.00000E-10
2 2.81811E+01 5.22529E-12
3 2.81811E+01 2.10942E-15
4 2.81811E+01 2.10942E-15
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 10.5119999999879 sec
Objective : 28.1811214884047
Successful solution
---------------------------------------------------
Here is a version that uses IMODE=6 instead. You define the variables and equations once and let Gekko handle the time discretization. It makes a much more efficient model because there is no unnecessary duplication of equations.
from gekko import GEKKO
import numpy as np
import pandas as pd
import time
import math
# ------------------------ Import and read input data ------------------------
file = r'Data Sim 1.xlsx'
data = pd.read_excel(file, sheet_name='Input', na_values='NaN')
dataRead = pd.DataFrame(data, columns= ['Timestep','Verbruik woning (kWh)','Prijs afname (€/kWh)',
'Capaciteit batterij (kW)','Capaciteit batterij (kWh)',
'Rendement (%)','Verbruikersprofiel'])
timestep = dataRead['Timestep'].to_numpy()
usage_home = dataRead['Verbruik woning (kWh)'].to_numpy()
price = dataRead['Prijs afname (€/kWh)'].to_numpy()
cap_batt_kW = dataRead['Capaciteit batterij (kW)'].iloc[0]
cap_batt_kWh = dataRead['Capaciteit batterij (kWh)'].iloc[0]
efficiency = dataRead['Rendement (%)'].iloc[0]
usersprofile = dataRead['Verbruikersprofiel'].iloc[0]
# ---------------------------- Optimization model ----------------------------
# Initialise model
m = GEKKO()
m.open_folder()
# Global options
m.options.SOLVER = 1
m.options.IMODE = 6
# Constants
snelheid_laden = cap_batt_kW/4
m.time = timestep
loss_charging = m.Const(value = (1-efficiency)/2)
max_cap_batt = m.Const(value = cap_batt_kWh)
min_cap_batt = m.Const(value = 0)
max_charge = m.Const(value = snelheid_laden) # max battery can charge in 15min
max_decharge = m.Const(value = -snelheid_laden) # max battery can decharge in 15min
# Parameters
usage_home = m.Param(usage_home)
price = m.Param(price)
# Variables
e_batt = m.Var(value=0, lb = min_cap_batt, ub = max_cap_batt) # energy in battery
price_paid = m.Var() # price paid each 15min
charging = m.Var(lb = max_decharge, ub = max_charge) # amount charge/decharge each 15min
usage_net = m.Var(lb=min_cap_batt)
# Equations
m.Equation(e_batt==m.integral(charging*(1-loss_charging)))
m.Equation(usage_net==usage_home + charging)
price_paid = m.Intermediate(usage_net * price / 100)
m.Equation(-charging <= (1-loss_charging)*e_batt)
# Objective
m.Minimize(price_paid)
# Solve problem
m.solve()
import matplotlib.pyplot as plt
plt.plot(m.time,e_batt.value,label='Battery Charge')
plt.plot(m.time,charging.value,label='Charging')
plt.plot(m.time,price_paid.value,label='Price')
plt.plot(m.time,usage_net.value,label='Net Usage')
plt.xlabel('Time'); plt.grid(); plt.legend(); plt.show()
The model is only 31 lines long (see gk0_model.apm) and it solves much faster (a couple seconds total).
--------- APM Model Size ------------
Each time step contains
Objects : 0
Constants : 5
Variables : 8
Intermediates: 1
Connections : 0
Equations : 6
Residuals : 5
Number of state variables: 1337
Number of total equations: - 955
Number of slack variables: - 191
---------------------------------------
Degrees of freedom : 191
----------------------------------------------
Dynamic Control with APOPT Solver
----------------------------------------------
Iter Objective Convergence
0 3.46205E+01 3.00000E-01
1 3.30649E+01 4.41141E-10
2 3.12774E+01 1.98558E-11
3 3.03148E+01 1.77636E-15
4 2.96824E+01 3.99680E-15
5 2.82700E+01 8.88178E-16
6 2.82039E+01 1.77636E-15
7 2.81334E+01 8.88178E-16
8 2.81085E+01 1.33227E-15
9 2.81039E+01 8.88178E-16
Iter Objective Convergence
10 2.81005E+01 8.88178E-16
11 2.80999E+01 1.77636E-15
12 2.80996E+01 8.88178E-16
13 2.80996E+01 8.88178E-16
14 2.80996E+01 8.88178E-16
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 0.527499999996508 sec
Objective : 28.0995878585948
Successful solution
---------------------------------------------------
There is no long compile time. Also, the solver time is reduced from 10 sec to 0.5 sec. The objective function is nearly the same (28.18 versus 28.10).
Here is a complete version without the data file dependency (in case the data file isn't available in the future).
from gekko import GEKKO
import numpy as np
import pandas as pd
import time
import math
# ------------------------ Import and read input data ------------------------
timestep = np.arange(1,193)
usage_home = np.array([0.05,0.07,0.09,0.07,0.05,0.07,0.07,0.07,0.06,
0.05,0.07,0.07,0.09,0.07,0.06,0.07,0.07,
0.07,0.16,0.12,0.17,0.08,0.10,0.11,0.06,
0.06,0.06,0.06,0.06,0.07,0.07,0.07,0.08,
0.08,0.06,0.07,0.07,0.07,0.07,0.05,0.07,
0.07,0.07,0.07,0.21,0.08,0.07,0.08,0.27,
0.12,0.09,0.10,0.11,0.09,0.09,0.08,0.08,
0.12,0.15,0.08,0.10,0.08,0.10,0.09,0.10,
0.09,0.08,0.10,0.12,0.10,0.10,0.10,0.11,
0.10,0.10,0.11,0.13,0.21,0.12,0.10,0.10,
0.11,0.10,0.11,0.12,0.12,0.10,0.11,0.10,
0.10,0.10,0.11,0.10,0.10,0.09,0.08,0.12,
0.10,0.11,0.11,0.10,0.06,0.05,0.06,0.06,
0.06,0.07,0.06,0.06,0.05,0.06,0.05,0.06,
0.05,0.06,0.05,0.06,0.07,0.06,0.09,0.10,
0.10,0.22,0.08,0.06,0.05,0.06,0.08,0.08,
0.07,0.08,0.07,0.07,0.16,0.21,0.08,0.08,
0.09,0.09,0.10,0.09,0.09,0.08,0.12,0.24,
0.09,0.08,0.09,0.08,0.10,0.24,0.08,0.09,
0.09,0.08,0.08,0.07,0.06,0.05,0.06,0.07,
0.07,0.05,0.05,0.06,0.05,0.28,0.11,0.20,
0.10,0.09,0.28,0.10,0.15,0.09,0.10,0.18,
0.12,0.13,0.30,0.10,0.11,0.10,0.10,0.11,
0.10,0.21,0.10,0.10,0.12,0.10,0.08])
price = np.array([209.40,209.40,209.40,209.40,193.00,193.00,193.00,
193.00,182.75,182.75,182.75,182.75,161.60,161.60,
161.60,161.60,154.25,154.25,154.25,154.25,150.70,
150.70,150.70,150.70,150.85,150.85,150.85,150.85,
150.00,150.00,150.00,150.00,153.25,153.25,153.25,
153.25,153.25,153.25,153.25,153.25,151.35,151.35,
151.35,151.35,151.70,151.70,151.70,151.70,154.95,
154.95,154.95,154.95,150.20,150.20,150.20,150.20,
153.75,153.75,153.75,153.75,160.55,160.55,160.55,
160.55,179.90,179.90,179.90,179.90,202.00,202.00,
202.00,202.00,220.25,220.25,220.25,220.25,245.75,
245.75,245.75,245.75,222.90,222.90,222.90,222.90,
203.40,203.40,203.40,203.40,205.30,205.30,205.30,
205.30,192.80,192.80,192.80,192.80,177.00,177.00,
177.00,177.00,159.90,159.90,159.90,159.90,152.50,
152.50,152.50,152.50,143.95,143.95,143.95,143.95,
142.10,142.10,142.10,142.10,143.75,143.75,143.75,
143.75,170.80,170.80,170.80,170.80,210.35,210.35,
210.35,210.35,224.45,224.45,224.45,224.45,226.30,
226.30,226.30,226.30,227.85,227.85,227.85,227.85,
225.45,225.45,225.45,225.45,225.80,225.80,225.80,
225.80,224.50,224.50,224.50,224.50,220.30,220.30,
220.30,220.30,220.00,220.00,220.00,220.00,221.90,
221.90,221.90,221.90,230.25,230.25,230.25,230.25,
233.60,233.60,233.60,233.60,225.20,225.20,225.20,
225.20,179.85,179.85,179.85,179.85,171.85,171.85,
171.85,171.85,162.90,162.90,162.90,162.90,158.85,
158.85,158.85,158.85])
cap_batt_kW = 3.00
cap_batt_kWh = 5.00
efficiency = 0.95
usersprofile = 1
# ---------------------------- Optimization model ----------------------------
# Initialise model
m = GEKKO()
#m.open_folder()
# Global options
m.options.SOLVER = 1
m.options.IMODE = 6
# Constants
snelheid_laden = cap_batt_kW/4
m.time = timestep
loss_charging = m.Const(value = (1-efficiency)/2)
max_cap_batt = m.Const(value = cap_batt_kWh)
min_cap_batt = m.Const(value = 0)
max_charge = m.Const(value = snelheid_laden) # max battery can charge in 15min
max_decharge = m.Const(value = -snelheid_laden) # max battery can decharge in 15min
# Parameters
usage_home = m.Param(usage_home)
price = m.Param(price)
# Variables
e_batt = m.Var(value=0, lb = min_cap_batt, ub = max_cap_batt) # energy in battery
price_paid = m.Var() # price paid each 15min
charging = m.Var(lb = max_decharge, ub = max_charge) # amount charge/decharge each 15min
usage_net = m.Var(lb=min_cap_batt)
# Equations
m.Equation(e_batt==m.integral(charging*(1-loss_charging)))
m.Equation(usage_net==usage_home + charging)
price_paid = m.Intermediate(usage_net * price / 100)
m.Equation(-charging <= (1-loss_charging)*e_batt)
# Objective
m.Minimize(price_paid)
# Solve problem
m.solve()
import matplotlib.pyplot as plt
plt.plot(m.time,e_batt.value,label='Battery Charge')
plt.plot(m.time,charging.value,label='Charging')
plt.plot(m.time,price_paid.value,label='Price')
plt.plot(m.time,usage_net.value,label='Net Usage')
plt.xlabel('Time'); plt.grid(); plt.legend(); plt.show()

Solving multi-armed bandit problems with continuous action space

My problem has a single state and an infinite number of actions on the interval (0,1). After quite some time of googling I found a few papers about the zooming algorithm, which can solve problems with a continuous action space. However, my implementation is bad at exploiting, so I'm thinking about adding epsilon-greedy-style behavior.
Is it reasonable to combine different methods?
Do you know other approaches to my problem?
Code samples:
import portion as P
import numpy as np   # assumed import; np is used below
import math          # assumed import; math is used below

# The two methods below are excerpted from the agent class, so `self` carries
# active_arms, sigma_square, current_arm, max_profit, min_profit, high_count
# and low_count; `Arm` and `lower` are not defined in this excerpt.

def choose_action(self, i_ph):
    # Activation rule
    not_covered = P.closed(lower=0, upper=1)
    for arm in self.active_arms:
        confidence_radius = calc_confidence_radius(i_ph, arm)
        confidence_interval = P.closed(arm.norm_value - confidence_radius, arm.norm_value + confidence_radius)
        not_covered = not_covered - confidence_interval
    if not_covered != P.empty():
        rans = []
        height = 0
        heights = []
        for i in not_covered:
            rans.append(np.random.uniform(i.lower, i.upper))
            height += i.upper - i.lower
            heights.append(i.upper - i.lower)
        ran_n = np.random.uniform(0, height)
        j = 0
        ran = 0
        for i in range(len(heights)):
            if j < ran_n < j + heights[i]:
                ran = rans[i]
            j += heights[i]
        self.active_arms.append(Arm(len(self.active_arms), ran * (self.sigma_square - lower) + lower, ran))
    # Selection rule
    max_index = float('-inf')
    max_index_arm = None
    for arm in self.active_arms:
        confidence_radius = calc_confidence_radius(i_ph, arm)
        # index function from the zooming algorithm
        index = arm.avg_learning_reward + 2 * confidence_radius
        if index > max_index:
            max_index = index
            max_index_arm = arm
    action = max_index_arm.value
    self.current_arm = max_index_arm
    return action

def learn(self, action, reward):
    arm = self.current_arm
    arm.avg_reward = (arm.pulled * arm.avg_reward + reward) / (arm.pulled + 1)
    if reward > self.max_profit:
        self.max_profit = reward
    elif reward < self.min_profit:
        self.min_profit = reward
    # normalize reward to [0, 1]
    high = 100
    low = -75
    if reward >= high:
        reward = 1
        self.high_count += 1
    elif reward <= low:
        reward = 0
        self.low_count += 1
    else:
        reward = (reward - low)/(high - low)
    arm.avg_learning_reward = (arm.pulled * arm.avg_learning_reward + reward) / (arm.pulled + 1)
    arm.pulled += 1

# zooming algorithm confidence radius
def calc_confidence_radius(i_ph, arm: Arm):
    return math.sqrt((8 * i_ph)/(1 + arm.pulled))
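The excerpt does not include the Arm class itself; a minimal, hypothetical definition consistent with the fields used above (names inferred from the code, not taken from the original project) could look like this:
from dataclasses import dataclass

@dataclass
class Arm:
    # Hypothetical reconstruction of the arm record used in the excerpt above.
    index: int                        # position in self.active_arms
    value: float                      # action value on the original scale
    norm_value: float                 # action value normalized to (0, 1)
    avg_reward: float = 0.0           # running mean of raw rewards
    avg_learning_reward: float = 0.0  # running mean of normalized rewards
    pulled: int = 0                   # number of times this arm has been played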
You may find this paper useful; the full algorithm description is here. They grid the probes out uniformly; informing this choice (e.g. a normal distribution centered on an arm with reputedly high reward) is also possible, but it might invalidate some of the bounds (I am not sure).
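Regarding combining methods: it is reasonable to add an epsilon-greedy layer on top of the zooming selection rule. A minimal sketch (epsilon is an assumed new parameter; choose_action is the method from the question):
import numpy as np

def choose_action_eps(self, i_ph, epsilon=0.1):
    # With probability epsilon, explore a uniformly random action in (0, 1);
    # otherwise fall back to the zooming-algorithm selection rule.
    if np.random.random() < epsilon:
        # Note: self.current_arm is not updated here, so in this sketch learn()
        # should only be called for actions returned by the zooming rule.
        return np.random.uniform(0.0, 1.0)
    return self.choose_action(i_ph)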

What are the best values for Postgresql.conf for better performance

I am using PostgreSQL 9.0.1 on Debian 8 Linux, on an HP ProLiant ML110 Gen9 server:
Processor: (1) Intel Xeon E5-1603v3 (2.8GHz/4-core/10MB/140W)
RAM: 8GB DDR4
Hard disk: SATA 1TB 7.2K rpm LFF
More specifications here: https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.specifications.hpe-proliant-ml110-gen9-server.7796454.html
See the parameters below from postgresql.conf. Which values would you recommend, for example for cpu_index_tuple_cost and the other cpu_* settings, based on this server? I would rather not rely on the default values.
#seq_page_cost = 1.0
#random_page_cost = 4.0
#cpu_tuple_cost = 0.01
#cpu_index_tuple_cost = 0.005
#cpu_operator_cost = 0.0025
max_connections = 100
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 52428kB
maintenance_work_mem = 1GB
checkpoint_segments = 128
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500

Lua / Corona SDK / Loop Spawning / Positional Difference / Framerate independent animation

While building a mobile game with Corona SDK I have recently been facing a problem that I couldn't solve: positional differences when spawning display objects in a loop. I got some help and found out it must have something to do with frame-rate-independent animation. But now I'm facing this:
Although I'm using frame-rate-independent animation here, it still produces the same problem, and it becomes more pronounced when the speed of the loop is increased, as in the code below. What are your thoughts on this?
local loopSpeed = 306
local loopTimerSpeed = 1000
local gapTable = {}
local gapLoopTimer
local frameTime
local gap
--enterFrame for time only
local function frameTime(event)
    frameTime = system.getTimer()
end
--enterFrame
local function enterFrame(self, event)
    local deltaTime = frameTime - self.time
    print(deltaTime/1000)
    self.time = frameTime
    local speed = self.rate * deltaTime / 1000
    self:translate(speed, 0)
end
--loop speed function
local function setLoopSpeed(factor)
    loopSpeed = loopSpeed * factor
    loopTimerSpeed = loopTimerSpeed / factor
end
--set the loop speed
setLoopSpeed(3)
--loop to create gaps
local function createGap()
    gap = display.newRect(1, 1, 308, 442)
    gap.time = system.getTimer()
    gap.anchorX = 1
    gap.anchorY = 0
    --animation
    gap.rate = loopSpeed
    gap.enterFrame = enterFrame
    Runtime:addEventListener("enterFrame", gap)
    --fill table for cleaning up
    table.insert(gapTable, gap)
    --cleaning up
    for i = #gapTable, 1, -1 do
        local thisGap = gapTable[i]
        if thisGap.x > display.contentWidth + 500 then
            display.remove(thisGap)
            table.remove(gapTable, i)
            Runtime:removeEventListener("enterFrame", thisGap)
        end
        thisGap = nil
    end
end
Runtime:addEventListener("enterFrame", frameTime)
gapLoopTimer = timer.performWithDelay(
    loopTimerSpeed,
    createGap,
    0
)
If you can set the size of the gap between the rects, try the code below:
local gapTable = {}
local gapWidth = 50
local runtime = 0
local speed = 20
local gap
local function getDeltaTime()
    local temp = system.getTimer() -- Get current game time in ms
    local dt = (temp-runtime) / (1000/display.fps) -- 60 fps or 30 fps as base
    runtime = temp -- Store game time
    return dt
end
local function enterFrame()
    local dt = getDeltaTime()
    for i=1, #gapTable do
        gapTable[i]:translate(speed * dt, 0)
    end
end
local function createGap()
    local startX = 0
    if #gapTable > 0 then
        startX = gapTable[#gapTable].x - gapWidth - gapTable[#gapTable].width
    end
    gap = display.newRect(startX, 1, 308, 442)
    gap.anchorX, gap.anchorY = 1, 0
    table.insert(gapTable, gap)
    --cleaning up
    for i=#gapTable, 1, -1 do
        if gapTable[i].x > display.contentWidth + 500 then
            local rect = table.remove(gapTable, i)
            if rect ~= nil then
                display.remove(rect)
                rect = nil
            end
        end
    end
end
timer.performWithDelay(100, createGap, 0)
Runtime:addEventListener("enterFrame", enterFrame)
Hope this helps:)

How to run a function for a limited time?

I have a function and would like to call it every 2 seconds during 3 seconds.
I tried timer.performWithDelay() but it doesn't solve my problem.
Here is the function I want to call every 2 seconds during 3 seconds:
function FuelManage(event)
    if lives > 0 and pressed==true then
        lifeBar[lives].isVisible=false
        lives = lives - 1
        -- print( lifeBar[lives].x )
        livesValue.text = string.format("%d", lives)
    end
end
How can I use timer.performWithDelay(2000, callback, 1) to call my function FuelManage(event)?
So it looks like what you are actually after is to start a fuel check 2 seconds from "now" and run it for a duration of 3 seconds. You can schedule registering and unregistering the enterFrame listener. This will call your FuelManage function on every time step during the period of interest:
function cancelCheckFuel(event)
    Runtime:removeEventListener('enterFrame', FuelManage)
end
function FuelManage(event)
    if lives > 0 and pressed==true then
        lifeBar[lives].isVisible=false
        lives = lives - 1
        -- print( lifeBar[lives].x )
        livesValue.text = string.format("%d", lives)
    end
end
-- fuel management:
local startFuelCheckMS = 2000 -- start checking for fuel in 2 seconds
local fuelCheckDurationMS = 3000 -- check for 3 seconds
local stopFuelCheckMS = startFuelCheckMS + fuelCheckDurationMS
timer.performWithDelay(
    startFuelCheckMS,
    function() Runtime:addEventListener('enterFrame', FuelManage) end,
    1)
timer.performWithDelay(
    stopFuelCheckMS,
    function() Runtime:removeEventListener('enterFrame', FuelManage) end,
    1)
If that is too high a frequency, you'll want to use a timer instead and keep track of the elapsed time:
local fuelCheckDurationMS = 3000 -- check for 3 seconds
local timeBetweenChecksMS = 200 -- check every 200 ms
local totalCheckTimeMS = 0
local startedChecking = false
function FuelManage(event)
    if lives > 0 and pressed==true then
        lifeBar[lives].isVisible=false
        lives = lives - 1
        -- print( lifeBar[lives].x )
        livesValue.text = string.format("%d", lives)
    end
    if totalCheckTimeMS < fuelCheckDurationMS then
        timer.performWithDelay(timeBetweenChecksMS, FuelManage, 1)
        if startedChecking then
            totalCheckTimeMS = totalCheckTimeMS + timeBetweenChecksMS
        end
        startedChecking = true
    end
end
-- fuel management:
local startFuelCheckMS = 2000 -- start checking for fuel in 2 seconds
timer.performWithDelay(startFuelCheckMS, FuelManage, 1)
Set a timer inside a timer like this:
function FuelManage(event)
    if lives > 0 and pressed==true then
        lifeBar[lives].isVisible=false
        lives = lives - 1
        -- print( lifeBar[lives].x )
        livesValue.text = string.format("%d", lives)
    end
end
-- Main timer, called every 2 seconds
timer.performWithDelay(2000, function()
    -- Sub-timer, called every second for 3 seconds
    timer.performWithDelay(1000, FuelManage, 3)
end, 1)
Be careful, though: the way this is set up you could quickly end up with many timers running, since the first timer has a shorter lifetime than the second one. You may want to protect the inner timer by making sure it has been cancelled before it is started again, that kind of thing.
