What are the best values for postgresql.conf for better performance?

I am using the DBMS PostgreSQL 9.0.1 on Linux Debian 8, on an HP ProLiant ML110 Gen9 server:
Processor: (1) Intel Xeon E5-1603v3 (2.8GHz/4-core/10MB/140W)
RAM: 8GB DDR4
Hard disk: SATA 1TB 7.2K rpm LFF
More specifications here: https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.specifications.hpe-proliant-ml110-gen9-server.7796454.html
Below are the parameters present in postgresql.conf. Which values would you recommend, for example for cpu_index_tuple_cost and the other cpu_* costs, based on this server, so that I do not have to use the default values?
#seq_page_cost = 1.0
#random_page_cost = 4.0
#cpu_tuple_cost = 0.01
#cpu_index_tuple_cost = 0.005
#cpu_operator_cost = 0.0025
max_connections = 100
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 52428kB
maintenance_work_mem = 1GB
checkpoint_segments = 128
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500
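For illustration only, a hedged starting point for this hardware (8GB RAM, a single 7.2K rpm SATA disk); on spinning disks the planner cost parameters are usually left at their defaults, and only measurement on your workload can confirm these values:
seq_page_cost = 1.0              # default; sequential reads on an HDD
random_page_cost = 4.0           # keep high for a spinning disk; lower only for SSDs
cpu_tuple_cost = 0.01            # the cpu_* costs rarely need changing
cpu_index_tuple_cost = 0.005
cpu_operator_cost = 0.0025
shared_buffers = 2GB             # ~25% of 8GB RAM, as already set
effective_cache_size = 6GB       # ~75% of RAM; a planner hint, allocates no memory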

Related

How can I speed up my optimization with Gekko?

My program optimizes the charging and discharging of a home battery to minimize the cost of electricity at the end of the year. The electricity usage of homes is measured every 15 minutes, so I have 96 measurement points in 1 day. I want to optimize the charging and discharging of the battery over 2 days, so that day 1 takes the usage of day 2 into account. I wrote the following code and it works.
from gekko import GEKKO
import numpy as np
import pandas as pd
import time
import math
# ------------------------ Import and read input data ------------------------
file = r'D:\Bedrijfseconomie\MP Thuisbatterijen\Spyder - Gekko\Data Sim 1.xlsx'
data = pd.read_excel(file, sheet_name='Input', na_values='NaN')
dataRead = pd.DataFrame(data, columns= ['Timestep','Verbruik woning (kWh)','Prijs afname (€/kWh)',
'Capaciteit batterij (kW)','Capaciteit batterij (kWh)',
'Rendement (%)','Verbruikersprofiel'])
timestep = dataRead['Timestep'].to_numpy()
usage_home = dataRead['Verbruik woning (kWh)'].to_numpy()
price = dataRead['Prijs afname (€/kWh)'].to_numpy()
cap_batt_kW = dataRead['Capaciteit batterij (kW)'].iloc[0]
cap_batt_kWh = dataRead['Capaciteit batterij (kWh)'].iloc[0]
efficiency = dataRead['Rendement (%)'].iloc[0]
usersprofile = dataRead['Verbruikersprofiel'].iloc[0]
# ---------------------------- Optimization model ----------------------------
# Initialise model
m = GEKKO()
# Global options
m.options.SOLVER = 1
# Constants
snelheid_laden = cap_batt_kW/4
T = len(timestep)
loss_charging = m.Const(value = (1-efficiency)/2)
max_cap_batt = m.Const(value = cap_batt_kWh)
min_cap_batt = m.Const(value = 0)
max_charge = m.Const(value = snelheid_laden) # max battery can charge in 15min
max_decharge = m.Const(value = -snelheid_laden) # max battery can decharge in 15min
# Parameters
dummy = np.array(np.ones([T]))
# Variables
e_batt = m.Array(m.Var, (T), lb = min_cap_batt, ub = max_cap_batt) # energy in battery
usage_net = m.Array(m.Var, (T)) # usage home & charge/decharge battery
price_paid = m.Array(m.Var, (T)) # price paid each 15min
charging = m.Array(m.Var, (T), lb = max_decharge, ub = max_charge) # amount charge/decharge each 15min
# Intermediates
e_batt[0] = m.Intermediate(charging[0])
for t in range(T):
    e_batt[t] = m.Intermediate(m.sum([charging[i]*(1-loss_charging) for i in range(t)]))
usage_net = [m.Intermediate(usage_home[t] + charging[t]) for t in range(T)]
price_paid = [m.Intermediate(usage_net[t] * price[t] / 100) for t in range(T)]
total_price = m.Intermediate(m.sum([price_paid[t] for t in range(T)]))
# Equations (constraints)
m.Equation([min_cap_batt*dummy[t] <= e_batt[t] for t in range(T)])
m.Equation([max_cap_batt*dummy[t] >= e_batt[t] for t in range(T)])
m.Equation([max_charge*dummy[t] >= charging[t] for t in range(T)])
m.Equation([max_decharge*dummy[t] <= charging[t] for t in range(T)])
m.Equation([min_cap_batt*dummy[t] <= usage_net[t] for t in range(T)])
m.Equation([(-1*charging[t]) <= (1-loss_charging)*e_batt[t] for t in range(T)])
# Objective
m.Minimize(total_price)
# Solve problem
m.solve()
My code runs and it works, but although it reports a solution time of 10 seconds, the total time for it to run is around 8 minutes. Does anyone know a way I can speed it up?
There are a few ways to speed up the Gekko code:
Solve locally instead of on the public server. The option is m=GEKKO(remote=False). The public server can slow down with many jobs.
Use sum() instead of m.sum(). This can be faster for compiling the model. Otherwise, use m.integral(x) if you need the integral of x.
Many of the equations are repeated at each time horizon step. Gekko is more efficient using a single equation definition with IMODE=2 (for algebraic equation models) or IMODE=6 (for differential / algebraic equation models) and then it creates the equations over the time horizon. You may need to use m.vsum() instead of m.sum().
For additional diagnosis, try setting m.options.DIAGLEVEL=1 to get a detailed timing report of how long it takes to compile the model and perform each function, 1st derivative, and 2nd derivative calculation. It also gives a detailed view of the solver versus model time during the solution phase. A minimal sketch of the first and last suggestions is shown below.
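A minimal sketch of solving locally with diagnostics enabled (assumes the local solver bundled with Gekko is sufficient for your problem):
from gekko import GEKKO
m = GEKKO(remote=False)    # solve on the local machine instead of the public server
m.options.DIAGLEVEL = 1    # report compile time, function/derivative timings, solver vs. model time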
Update with Data File Testing
Thanks for sending the data file. The run directory shows that the model file is 58,682 lines long. It takes a while to compile a model that size. Here is the solution from the files you sent:
--------- APM Model Size ------------
Each time step contains
Objects : 193
Constants : 5
Variables : 20641
Intermediates: 578
Connections : 18721
Equations : 20259
Residuals : 19681
Number of state variables: 20641
Number of total equations: - 19873
Number of slack variables: - 1152
---------------------------------------
Degrees of freedom : -384
* Warning: DOF <= 0
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter Objective Convergence
0 3.37044E+01 5.00000E+00
1 2.81987E+01 1.00000E-10
2 2.81811E+01 5.22529E-12
3 2.81811E+01 2.10942E-15
4 2.81811E+01 2.10942E-15
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 10.5119999999879 sec
Objective : 28.1811214884047
Successful solution
---------------------------------------------------
Here is a version that uses IMODE=6 instead. You define the variables and equations once and let Gekko handle the time discretization. It makes a much more efficient model because there is no unnecessary duplication of equations.
from gekko import GEKKO
import numpy as np
import pandas as pd
import time
import math
# ------------------------ Import and read input data ------------------------
file = r'Data Sim 1.xlsx'
data = pd.read_excel(file, sheet_name='Input', na_values='NaN')
dataRead = pd.DataFrame(data, columns= ['Timestep','Verbruik woning (kWh)','Prijs afname (€/kWh)',
'Capaciteit batterij (kW)','Capaciteit batterij (kWh)',
'Rendement (%)','Verbruikersprofiel'])
timestep = dataRead['Timestep'].to_numpy()
usage_home = dataRead['Verbruik woning (kWh)'].to_numpy()
price = dataRead['Prijs afname (€/kWh)'].to_numpy()
cap_batt_kW = dataRead['Capaciteit batterij (kW)'].iloc[0]
cap_batt_kWh = dataRead['Capaciteit batterij (kWh)'].iloc[0]
efficiency = dataRead['Rendement (%)'].iloc[0]
usersprofile = dataRead['Verbruikersprofiel'].iloc[0]
# ---------------------------- Optimization model ----------------------------
# Initialise model
m = GEKKO()
m.open_folder()
# Global options
m.options.SOLVER = 1
m.options.IMODE = 6
# Constants
snelheid_laden = cap_batt_kW/4
m.time = timestep
loss_charging = m.Const(value = (1-efficiency)/2)
max_cap_batt = m.Const(value = cap_batt_kWh)
min_cap_batt = m.Const(value = 0)
max_charge = m.Const(value = snelheid_laden) # max battery can charge in 15min
max_decharge = m.Const(value = -snelheid_laden) # max battery can decharge in 15min
# Parameters
usage_home = m.Param(usage_home)
price = m.Param(price)
# Variables
e_batt = m.Var(value=0, lb = min_cap_batt, ub = max_cap_batt) # energy in battery
price_paid = m.Var() # price paid each 15min
charging = m.Var(lb = max_decharge, ub = max_charge) # amount charge/decharge each 15min
usage_net = m.Var(lb=min_cap_batt)
# Equations
m.Equation(e_batt==m.integral(charging*(1-loss_charging)))
m.Equation(usage_net==usage_home + charging)
price_paid = m.Intermediate(usage_net * price / 100)
m.Equation(-charging <= (1-loss_charging)*e_batt)
# Objective
m.Minimize(price_paid)
# Solve problem
m.solve()
import matplotlib.pyplot as plt
plt.plot(m.time,e_batt.value,label='Battery Charge')
plt.plot(m.time,charging.value,label='Charging')
plt.plot(m.time,price_paid.value,label='Price')
plt.plot(m.time,usage_net.value,label='Net Usage')
plt.xlabel('Time'); plt.grid(); plt.legend(); plt.show()
The model is only 31 lines long (see gk0_model.apm) and it solves much faster (a couple seconds total).
--------- APM Model Size ------------
Each time step contains
Objects : 0
Constants : 5
Variables : 8
Intermediates: 1
Connections : 0
Equations : 6
Residuals : 5
Number of state variables: 1337
Number of total equations: - 955
Number of slack variables: - 191
---------------------------------------
Degrees of freedom : 191
----------------------------------------------
Dynamic Control with APOPT Solver
----------------------------------------------
Iter Objective Convergence
0 3.46205E+01 3.00000E-01
1 3.30649E+01 4.41141E-10
2 3.12774E+01 1.98558E-11
3 3.03148E+01 1.77636E-15
4 2.96824E+01 3.99680E-15
5 2.82700E+01 8.88178E-16
6 2.82039E+01 1.77636E-15
7 2.81334E+01 8.88178E-16
8 2.81085E+01 1.33227E-15
9 2.81039E+01 8.88178E-16
Iter Objective Convergence
10 2.81005E+01 8.88178E-16
11 2.80999E+01 1.77636E-15
12 2.80996E+01 8.88178E-16
13 2.80996E+01 8.88178E-16
14 2.80996E+01 8.88178E-16
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 0.527499999996508 sec
Objective : 28.0995878585948
Successful solution
---------------------------------------------------
There is no long compile time. Also, the solver time is reduced from 10 sec to 0.5 sec. The objective function is nearly the same (28.18 versus 28.10).
Here is a complete version without the data file dependency (in case the data file isn't available in the future).
from gekko import GEKKO
import numpy as np
import pandas as pd
import time
import math
# ------------------------ Import and read input data ------------------------
timestep = np.arange(1,193)
usage_home = np.array([0.05,0.07,0.09,0.07,0.05,0.07,0.07,0.07,0.06,
0.05,0.07,0.07,0.09,0.07,0.06,0.07,0.07,
0.07,0.16,0.12,0.17,0.08,0.10,0.11,0.06,
0.06,0.06,0.06,0.06,0.07,0.07,0.07,0.08,
0.08,0.06,0.07,0.07,0.07,0.07,0.05,0.07,
0.07,0.07,0.07,0.21,0.08,0.07,0.08,0.27,
0.12,0.09,0.10,0.11,0.09,0.09,0.08,0.08,
0.12,0.15,0.08,0.10,0.08,0.10,0.09,0.10,
0.09,0.08,0.10,0.12,0.10,0.10,0.10,0.11,
0.10,0.10,0.11,0.13,0.21,0.12,0.10,0.10,
0.11,0.10,0.11,0.12,0.12,0.10,0.11,0.10,
0.10,0.10,0.11,0.10,0.10,0.09,0.08,0.12,
0.10,0.11,0.11,0.10,0.06,0.05,0.06,0.06,
0.06,0.07,0.06,0.06,0.05,0.06,0.05,0.06,
0.05,0.06,0.05,0.06,0.07,0.06,0.09,0.10,
0.10,0.22,0.08,0.06,0.05,0.06,0.08,0.08,
0.07,0.08,0.07,0.07,0.16,0.21,0.08,0.08,
0.09,0.09,0.10,0.09,0.09,0.08,0.12,0.24,
0.09,0.08,0.09,0.08,0.10,0.24,0.08,0.09,
0.09,0.08,0.08,0.07,0.06,0.05,0.06,0.07,
0.07,0.05,0.05,0.06,0.05,0.28,0.11,0.20,
0.10,0.09,0.28,0.10,0.15,0.09,0.10,0.18,
0.12,0.13,0.30,0.10,0.11,0.10,0.10,0.11,
0.10,0.21,0.10,0.10,0.12,0.10,0.08])
price = np.array([209.40,209.40,209.40,209.40,193.00,193.00,193.00,
193.00,182.75,182.75,182.75,182.75,161.60,161.60,
161.60,161.60,154.25,154.25,154.25,154.25,150.70,
150.70,150.70,150.70,150.85,150.85,150.85,150.85,
150.00,150.00,150.00,150.00,153.25,153.25,153.25,
153.25,153.25,153.25,153.25,153.25,151.35,151.35,
151.35,151.35,151.70,151.70,151.70,151.70,154.95,
154.95,154.95,154.95,150.20,150.20,150.20,150.20,
153.75,153.75,153.75,153.75,160.55,160.55,160.55,
160.55,179.90,179.90,179.90,179.90,202.00,202.00,
202.00,202.00,220.25,220.25,220.25,220.25,245.75,
245.75,245.75,245.75,222.90,222.90,222.90,222.90,
203.40,203.40,203.40,203.40,205.30,205.30,205.30,
205.30,192.80,192.80,192.80,192.80,177.00,177.00,
177.00,177.00,159.90,159.90,159.90,159.90,152.50,
152.50,152.50,152.50,143.95,143.95,143.95,143.95,
142.10,142.10,142.10,142.10,143.75,143.75,143.75,
143.75,170.80,170.80,170.80,170.80,210.35,210.35,
210.35,210.35,224.45,224.45,224.45,224.45,226.30,
226.30,226.30,226.30,227.85,227.85,227.85,227.85,
225.45,225.45,225.45,225.45,225.80,225.80,225.80,
225.80,224.50,224.50,224.50,224.50,220.30,220.30,
220.30,220.30,220.00,220.00,220.00,220.00,221.90,
221.90,221.90,221.90,230.25,230.25,230.25,230.25,
233.60,233.60,233.60,233.60,225.20,225.20,225.20,
225.20,179.85,179.85,179.85,179.85,171.85,171.85,
171.85,171.85,162.90,162.90,162.90,162.90,158.85,
158.85,158.85,158.85])
cap_batt_kW = 3.00
cap_batt_kWh = 5.00
efficiency = 0.95
usersprofile = 1
# ---------------------------- Optimization model ----------------------------
# Initialise model
m = GEKKO()
#m.open_folder()
# Global options
m.options.SOLVER = 1
m.options.IMODE = 6
# Constants
snelheid_laden = cap_batt_kW/4
m.time = timestep
loss_charging = m.Const(value = (1-efficiency)/2)
max_cap_batt = m.Const(value = cap_batt_kWh)
min_cap_batt = m.Const(value = 0)
max_charge = m.Const(value = snelheid_laden) # max battery can charge in 15min
max_decharge = m.Const(value = -snelheid_laden) # max battery can decharge in 15min
# Parameters
usage_home = m.Param(usage_home)
price = m.Param(price)
# Variables
e_batt = m.Var(value=0, lb = min_cap_batt, ub = max_cap_batt) # energy in battery
price_paid = m.Var() # price paid each 15min
charging = m.Var(lb = max_decharge, ub = max_charge) # amount charge/decharge each 15min
usage_net = m.Var(lb=min_cap_batt)
# Equations
m.Equation(e_batt==m.integral(charging*(1-loss_charging)))
m.Equation(usage_net==usage_home + charging)
price_paid = m.Intermediate(usage_net * price / 100)
m.Equation(-charging <= (1-loss_charging)*e_batt)
# Objective
m.Minimize(price_paid)
# Solve problem
m.solve()
import matplotlib.pyplot as plt
plt.plot(m.time,e_batt.value,label='Battery Charge')
plt.plot(m.time,charging.value,label='Charging')
plt.plot(m.time,price_paid.value,label='Price')
plt.plot(m.time,usage_net.value,label='Net Usage')
plt.xlabel('Time'); plt.grid(); plt.legend(); plt.show()

MariaDB storage increase

We faced a problem with MariaDB storage.
We have CentOS 8 with a monitoring system and MariaDB as the database system. When creating the Linux machine we gave it 120 GB, but after adding the nodes to the monitoring system the space filled up, so we increased the Linux root space by 50 GB.
But the problem still exists:
the mariadb service didn't work, so we had to restart it
we needed to restart the nginx service for web access.
After these 2 manipulations the monitoring system works for 5-6 hours, then we again have to restart the DB and the web service.
We think the database doesn't use all 170 GB and 'sees' only the initial 120 GB. As a test we deleted approximately 15 devices (±15 GB) from the monitoring system and tested for 5 days, and there were no DB or web issues.
MariaDB version: 10.3.28
The engine used is InnoDB
We checked innodb_page_size:
innodb_page_size = 16384
Could someone help us?
(screenshots attached: innodb status1, innodb status2)
Analysis of GLOBAL STATUS and VARIABLES:
Observations:
Version: 10.3.28-MariaDB
3.6 GB of RAM
Uptime = 6d 22:26:28
429 Queries/sec : 145 Questions/sec
The More Important Issues:
Do you have 3.6GB of RAM? Are you using InnoDB? If yes to both of those, then make these two changes; they may help performance a lot:
key_buffer_size = 40M
innodb_buffer_pool_size = 2G
I'm getting conflicting advice on table_open_cache; let's leave it alone for now.
If you have SSD, I recommend these two:
innodb_io_capacity = 1000
innodb_flush_neighbors = 0
innodb_log_file_size = 200M -- Caution: It may be complex to change this in the version you are running. If so, leave it alone.
You seem to DELETE far more rows than you INSERT; what is going on?
Unless you do a lot of ALTERs, make this change:
myisam_sort_buffer_size = 50M
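The suggestions above, collected into a single my.cnf sketch (apply only the lines that match your situation; the SSD lines and the log file size change are conditional as noted):
[mysqld]
key_buffer_size = 40M            # little MyISAM key data, so shrink the key buffer
innodb_buffer_pool_size = 2G     # main InnoDB cache, sized for ~3.6GB of RAM
# Only if the storage is SSD:
innodb_io_capacity = 1000
innodb_flush_neighbors = 0
# May be complex to change on MariaDB 10.3; skip if unsure:
innodb_log_file_size = 200M
myisam_sort_buffer_size = 50M    # only needed for ALTERs and MyISAM repairs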
Details and other observations:
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) ) = ((128M - 1.2 * 0 * 1024)) / 3865470566.4 = 3.5% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size (now 134217728).
( Key_blocks_used * 1024 / key_buffer_size ) = 0 * 1024 / 128M = 0 -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size (now 134217728) to avoid unnecessary memory usage.
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) ) = ((128M / 0.20 + 128M / 0.70)) / 3865470566.4 = 22.3% -- Most of available ram should be made available for caching.
-- http://mysql.rjweb.org/doc.php/memory
( Key_reads + Key_writes + Innodb_pages_read + Innodb_pages_written + Innodb_dblwr_writes + Innodb_buffer_pool_pages_flushed ) = (0 + 0 + 107837817 + 14228075 + 669027 + 14217155) / 599188 = 228 /sec -- IOPs?
-- If the hardware can handle it, set innodb_io_capacity (now 200) to about this value.
( ( Key_reads + Key_writes + Innodb_pages_read + Innodb_pages_written + Innodb_dblwr_writes + Innodb_buffer_pool_pages_flushed ) / innodb_io_capacity / Uptime ) = ( 0 + 0 + 107837817 + 14228075 + 669027 + 14217155 ) / 200 / 599188 = 114.3% -- This may be a metric indicating whether innodb_io_capacity is set reasonably.
-- Increase innodb_io_capacity (now 200) if the hardware can handle it.
( Table_open_cache_misses ) = 12,156,771 / 599188 = 20 /sec
-- May need to increase table_open_cache (now 2000)
( Table_open_cache_misses / (Table_open_cache_hits + Table_open_cache_misses) ) = 12,156,771 / (184539214 + 12156771) = 6.2% -- Effectiveness of table_open_cache.
-- Increase table_open_cache (now 2000) and check table_open_cache_instances (now 8).
( innodb_buffer_pool_size ) = 128M -- InnoDB Data + Index cache
-- 128M (an old default) is woefully small.
( innodb_buffer_pool_size ) = 128 / 3865470566.4 = 3.5% -- % of RAM used for InnoDB buffer_pool
-- Set to about 70% of available RAM. (Too low is less efficient; too high risks swapping.)
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) ) = ((128M / 0.20 + 128M / 0.70)) / 3865470566.4 = 22.3% -- (metric for judging RAM usage)
( innodb_lru_scan_depth ) = 1,024
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixed by lowering lru_scan_depth
( innodb_io_capacity ) = 200 -- When flushing, use this many IOPs.
-- Reads could be sluggish or spiky.
( innodb_io_capacity_max / innodb_io_capacity ) = 2,000 / 200 = 10 -- Capacity: max/plain
-- Recommend 2. Max should be about equal to the IOPs your I/O subsystem can handle. (If the drive type is unknown 2000/200 may be a reasonable pair.)
( Innodb_log_writes ) = 33,145,091 / 599188 = 55 /sec
( Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group / innodb_log_file_size ) = 67,002,682,368 / (599188 / 3600) / 2 / 48M = 4 -- Ratio
-- (see minutes)
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 599,188 / 60 * 48M / 67002682368 = 7.5 -- Minutes between InnoDB log rotations Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf.
-- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size (now 50331648). (Cannot change in AWS.)
( innodb_flush_method ) = innodb_flush_method = fsync -- How InnoDB should ask the OS to write blocks. Suggest O_DIRECT or O_ALL_DIRECT (Percona) to avoid double buffering. (At least for Unix.) See chrischandler for caveat about O_ALL_DIRECT
( default_tmp_storage_engine ) = default_tmp_storage_engine =
( innodb_flush_neighbors ) = 1 -- A minor optimization when writing blocks to disk.
-- Use 0 for SSD drives; 1 for HDD.
( ( Innodb_pages_read + Innodb_pages_written ) / Uptime / innodb_io_capacity ) = ( 107837817 + 14228075 ) / 599188 / 200 = 101.9% -- If > 100%, need more io_capacity.
-- Increase innodb_io_capacity (now 200) if the drives can handle it.
( innodb_io_capacity ) = 200 -- I/O ops per second the disk is capable of. 100 for slow drives; 200 for spinning drives; 1000-2000 for SSDs; multiply by RAID factor.
( sync_binlog ) = 0 -- Use 1 for added security, at some cost of I/O. =1 may lead to lots of "query end"; =0 may lead to "binlog at impossible position" and lost transactions in a crash, but is faster. 0 is OK for Galera.
( innodb_adaptive_hash_index ) = innodb_adaptive_hash_index = ON -- Usually should be ON.
-- There are cases where OFF is better. See also innodb_adaptive_hash_index_parts (now 8) (after 5.7.9) and innodb_adaptive_hash_index_partitions (MariaDB and Percona). ON has been implicated in rare crashes (bug 73890). 10.5.0 decided to default OFF.
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( myisam_sort_buffer_size ) = 134,216,704 / 3865470566.4 = 3.5% -- Used for ALTER, CREATE INDEX, OPTIMIZE, LOAD DATA; set when you need it. Also for MyISAM's REPAIR TABLE.
-- Decrease myisam_sort_buffer_size (now 134216704) to keep from blowing out RAM.
( innodb_ft_result_cache_limit ) = 2,000,000,000 / 3865470566.4 = 51.7% -- Byte limit on FULLTEXT resultset. (Possibly not preallocated, but grows?)
-- Lower the setting.
( innodb_autoextend_increment * 1048576 ) = (64 * 1048576) / 3865470566.4 = 1.7% -- How much to increase ibdata1 by (when needed).
-- Decrease setting to avoid premature swapping.
( character_set_server ) = character_set_server = latin1
-- Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = local_infile = ON
-- local_infile (now ON) = ON is a potential security issue
( Created_tmp_disk_tables / Created_tmp_tables ) = 542,381 / 1084764 = 50.0% -- Percent of temp tables that spilled to disk
-- Maybe increase tmp_table_size (now 16777216) and max_heap_table_size (now 16777216); improve indexes; avoid blobs, etc.
( Com_delete / Com_insert ) = 2,294,352 / 1521534 = 150.8% -- Deletes / Inserts (as a pct). (Ignores LOAD, REPLACE, etc.)
( Com_insert + Com_delete + Com_delete_multi + Com_replace + Com_update + Com_update_multi ) = (1521534 + 2294352 + 21366 + 0 + 45590666 + 0) / 599188 = 82 /sec -- writes/sec
-- 50 writes/sec + log flushes will probably max out I/O write capacity of normal drives
( Com__biggest ) = Com__biggest = Com_stmt_execute -- Which of the "Com_" metrics is biggest.
-- Normally it is Com_select (now 34545111). If something else, then it may be a sloppy platform, or may be something else.
( binlog_format ) = binlog_format = MIXED -- STATEMENT/ROW/MIXED.
-- ROW is preferred by 5.7 (10.3)
( slow_query_log ) = slow_query_log = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( back_log ) = 80 -- (Autosized as of 5.6.6; based on max_connections)
-- Raising to min(150, max_connections (now 151)) may help when doing lots of connections.
( thread_cache_size / Max_used_connections ) = 151 / 51 = 296.1%
-- There is no advantage in having the thread cache bigger than your likely number of connections. Wasting space is the disadvantage.
You have the Query Cache half-off. You should set both query_cache_type = OFF and query_cache_size = 0. There is (according to a rumor) a 'bug' in the QC code that leaves some code on unless you turn off both of those settings.
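Expressed as my.cnf settings (both lines together fully disable the Query Cache):
query_cache_type = OFF
query_cache_size = 0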
Abnormally small:
(Com_select + Qcache_hits) / (Com_insert + Com_update + Com_delete + Com_replace) = 0.699
Com_show_tables = 0
Innodb_buffer_pool_bytes_data = 193 /sec
Table_locks_immediate = 0.53 /HR
eq_range_index_dive_limit = 0
innodb_spin_wait_delay = 4
Abnormally large:
Com_delete_multi = 0.036 /sec
Com_stmt_close = 141 /sec
Com_stmt_execute = 141 /sec
Com_stmt_prepare = 141 /sec
Handler_discover = 0.94 /HR
Innodb_buffer_pool_read_ahead = 162 /sec
Innodb_buffer_pool_reads * innodb_page_size / innodb_buffer_pool_size = 114837.2%
Innodb_data_pending_fsyncs = 2
Innodb_os_log_fsyncs = 55 /sec
Opened_plugin_libraries = 0.006 /HR
Table_open_cache_active_instances = 4
Tc_log_page_size = 4,096
Abnormal strings:
aria_recover_options = BACKUP,QUICK
innodb_fast_shutdown = 1
log_slow_admin_statements = ON
myisam_stats_method = NULLS_UNEQUAL
old_alter_table = DEFAULT

I have an array in a database field and want to search for devices where 5G has the value "Yes" using Laravel Query Builder, and display the products in a Laravel view

I have a column in the database in which I store device specifications as an array (I think it is an array within an array). Now I want to get the devices where, for example, the technology 5G has the value "Yes". Below are the code and the database entry of the array.
Array of specs in the database:
a:12:{s:6:"Launch";a:2:{s:9:"Announced";s:12:"2019, August";s:6:"Status";s:9:"Available";}s:7:"Network";a:7:{s:10:"Technology";s:35:"GSM / CDMA / HSPA / EVDO / LTE / 5G";s:2:"3G";s:3:"Yes";s:2:"4G";s:3:"Yes";s:2:"5G";s:3:"Yes";s:5:"Speed";s:69:"HSPA 42.2/5.76 Mbps, LTE-A (7CA) Cat20 2048/150 Mbps, 5G (2+ Gbps DL)";s:4:"Edge";s:3:"Yes";s:4:"GPRS";s:3:"Yes";}s:4:"Body";a:5:{s:10:"Dimensions";s:45:"162.3 x 77.2 x 7.9 mm (6.39 x 3.04 x 0.31 in)";s:6:"Weight";s:15:"198 g (6.98 oz)";s:3:"SIM";s:3:"Yes";s:5:"Build";s:75:"Glass front (Gorilla Glass 6), glass back (Gorilla Glass 6), aluminum frame";s:6:"Others";s:143:"Samsung Pay (Visa, MasterCard certified) IP68 dust/water resistant (up to 1.5m for 30 mins) Stylus (Bluetooth integration, accelerometer, gyro)";}s:7:"Display";a:6:{s:4:"Type";s:49:"Dynamic AMOLED capacitive touchscreen, 16M colors";s:4:"Size";s:51:"6.8 inches, 114.0 cm2 (~91.0% screen-to-body ratio)";s:10:"Resolution";s:49:"1440 x 3040 pixels, 19:9 ratio (~498 ppi density)";s:10:"Multitouch";s:3:"Yes";s:10:"Protection";s:23:"Corning Gorilla Glass 6";s:6:"Others";s:25:"HDR10+, Always-on display";}s:8:"Platform";a:4:{s:2:"OS";s:60:"Android 9.0 (Pie), planned upgrade to Android 10.0; One UI 2";s:7:"Chipset";s:31:"Exynos 9825 (7 nm) - EMEA/LATAM";s:3:"CPU";s:80:"Octa-core (2x2.73 GHz Mongoose M4 & 2x2.4 GHz Cortex-A75 & 4x1.9 GHz Cortex-A55)";s:3:"GPU";s:26:"Mali-G76 MP12 - EMEA/LATAM";}s:6:"Memory";a:2:{s:9:"Card Slot";s:36:"microSD, up to 1 TB (dedicated slot)";s:8:"Internal";s:30:"256GB 12GB RAM, 512GB 12GB RAM";}s:6:"Camera";a:4:{s:7:"Primary";s:68:"12 MP, f/1.5-2.4, 27mm (wide), 1/2.55", 1.4µm, Dual Pixel PDAF, OIS";s:8:"Features";s:29:"LED flash, auto-HDR, panorama";s:5:"Video";s:107:"2160p#30/60fps, 1080p#30/60/240fps, 720p#960fps, HDR10+, dual-video rec., stereo sound rec., gyro-EIS & OIS";s:9:"Secondary";s:57:"10 MP, f/2.2, 26mm (wide), 1/3", 1.22µm, Dual Pixel PDAF";}s:5:"Sound";a:4:{s:11:"Alert types";N;s:11:"Loudspeaker";s:3:"Yes";s:9:"3.5mmJack";s:3:"Yes";s:6:"Others";s:95:"32-bit/384kHz audio Active noise cancellation with dedicated mic Dolby Atmos sound Tuned by AKG";}s:5:"Comms";a:6:{s:4:"WLAN";s:60:"Wi-Fi 802.11 a/b/g/n/ac/ax, dual-band, Wi-Fi Direct, hotspot";s:9:"Bluetooth";s:19:"5.0, A2DP, LE, aptX";s:3:"GPS";s:38:"Yes, with A-GPS, GLONASS, BDS, GALILEO";s:3:"NFC";s:3:"Yes";s:5:"Radio";s:28:"FM radio (USA & Canada only)";s:3:"USB";s:36:"3.1, Type-C 1.0 reversible connector";}s:8:"Features";a:5:{s:7:"Sensors";s:91:"Fingerprint (under display, ultrasonic), accelerometer, gyro, proximity, compass, barometer";s:9:"Messaging";N;s:7:"Browser";N;s:4:"Java";N;s:6:"Others";N;}s:7:"Battery";a:4:{s:7:"Battery";s:37:"Non-removable Li-Ion 4300 mAh battery";s:8:"Stand-by";N;s:9:"Talk time";N;s:10:"Music play";N;}s:4:"Misc";a:5:{s:6:"Colors";s:33:"Aura Glow, Aura White, Aura Black";s:6:"SAR US";N;s:6:"SAR EU";N;s:11:"Price group";s:43:"€ 1,232.07 / $ 1,257.20 / £ 780.00";s:7:"Website";N;}}
Now, the query that I used in the controller:
public function technologynetwork(Request $request)
{
$tech = $request->a;
// return $tech;
$devices = DB::table('devices')
->select('devices.*')
->from('devices', 'specs_array')
->where('specs_array','===', array_search("Yes",specs_array))
->orderBy('release_year', 'desc')
->orderBy('release_month', 'desc')
->orderBy('id', 'desc')
->paginate(30);
return view('frontend/'.$this->config->template.'/devices', [
// global variables
'config' => $this->config,
'template_path' => $this->template_path,
'logged_user_role' => $this->logged_user_role ?? NULL,
// page variables
'devices' => $devices,
'count_all' => $devices->total(),
]);
}
and my route is in web.php
Route::get('/spec/{a}', 'Frontend\DevicesController@technologynetwork');
and its frontend
<dd>5G Phones</dd>
Can anyone guide me on how I can get devices by any spec, or how to search within such an array?
I could not decode your array of specs in the database.
Let's assume you save the specs in a column called specs_array, and that it contains pairs in this format: "5G":"Yes". (If the column actually holds the PHP-serialized string shown above, the pair appears as s:2:"5G";s:3:"Yes"; and the LIKE pattern below would need to match that format instead.)
public function technologynetwork(Request $request)
{
$tech = $request->a;
$search_string = '%"' . $tech . '":"Yes"%'; // = %"5G":"Yes"%
$devices = DB::table('devices')
->select('devices.*') // should remove this, not necessary
->from('devices', 'specs_array') // should remove this, not necessary
->where('specs_array','LIKE', $search_string) // <= use LIKE for query
->orderBy('release_year', 'desc')
->orderBy('release_month', 'desc')
->orderBy('id', 'desc')
->paginate(30);
return view('frontend/'.$this->config->template.'/devices', [
// global variables
'config' => $this->config,
'template_path' => $this->template_path,
'logged_user_role' => $this->logged_user_role ?? NULL,
// page variables
'devices' => $devices,
'count_all' => $devices->total(),
]);
}
When saved to the database, specs_array becomes a string and should not be treated as an array.
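For reference, the raw SQL that this builder call produces is roughly the following (table and column names are those from the question, and the pattern assumes the "5G":"Yes" format discussed above):
SELECT * FROM devices
WHERE specs_array LIKE '%"5G":"Yes"%'
ORDER BY release_year DESC, release_month DESC, id DESC;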

How do I record a JANUS signal as a wav file?

I am testing interoperability between modems. One of my modems supports JANUS, and I believe the UnetStack-based Subnero modem phy[3] also supports JANUS. How can I send and record a JANUS signal that I can use for preliminary testing of the other modem? Can someone please provide a basic snippet?
UnetStack indeed has an implementation of JANUS that is, by default, configured on phy[3].
You can check this on your modem (the sample outputs here are from unet audio SDOAM, and so your modem parameters might vary somewhat):
> phy[3]
« PHY »
[org.arl.unet.phy.PhysicalChannelParam]
fec = 7
fecList ⤇ [LDPC1, LDPC2, LDPC3, LDPC4, LDPC5, LDPC6, ICONV2]
frameDuration ⤇ 1.1
frameLength = 8
janus = true
[org.arl.yoda.FhbfskParam]
chiplen = 1
fmin = 9520.0
fstep = 160.0
hops = 13
scrambler = 0
sync = true
tukey = true
[org.arl.yoda.ModemChannelParam]
modulation = fhbfsk
preamble = (2400 samples)
threshold = 0.0
(I have dropped a few parameters that are not relevant to the discussion here to keep the output concise)
The key parameters to take note of:
modulation = fhbfsk and janus = true setup the modulation for JANUS
fmin = 9520.0, fstep = 160.0 and hops = 13 are the modulation parameters to setup fhbfsk as required by JANUS
fec = 7 chooses ICONV2 from the fecList, as required by JANUS
threshold = 0.0 indicates that reception of JANUS frames is disabled
NOTE: If your modem is a Subnero M25 series, the standard JANUS band is out of the modem's ~20-30 kHz operating band. In that case, the JANUS scheme is auto-configured to a higher frequency (which you will see as fmin in your modem). Do note that this frequency is important to match for interop with any other modem that might support JANUS at a higher frequency band.
To enable JANUS reception, you need to:
phy[3].threshold = 0.3
To avoid any other detections from CONTROL and DATA packets, we might want to disable those:
phy[1].threshold = 0
phy[2].threshold = 0
At this point, you could make a transmission by typing phy << new TxJanusFrameReq() and putting a hydrophone next to the modem to record the transmitted signal as a wav file.
However, I'm assuming you would prefer to record on the modem itself, rather than with an external hydrophone. To do that, you can enable the loopback mode on the modem, and set up the modem to record the received signal:
phy.loopback = true # enable loopback
phy.fullduplex = true # enable full duplex so we can record while transmitting
phy[3].basebandRx = true # enable capture of received baseband signal
subscribe phy # show notifications from phy on shell
Now if you do a transmission, you should see a RxBasebandSignalNtf with the captured signal:
> phy << new TxJanusFrameReq()
AGREE
phy >> RxFrameStartNtf:INFORM[type:#3 rxTime:492455709 rxDuration:1100000 detector:0.96]
phy >> TxFrameNtf:INFORM[type:#3 txTime:492456016]
phy >> RxJanusFrameNtf:INFORM[type:#3 classUserID:0 appType:0 appData:0 mobility:false canForward:true txRxFlag:true rxTime:492455708 rssi:-44.2 cfo:0.0]
phy >> RxBasebandSignalNtf:INFORM[adc:1 rxTime:492455708 rssi:-44.2 preamble:3 fc:12000.0 fs:12000.0 (13200 baseband samples)]
That notification has your signal in baseband complex format. You can save it to a file:
save 'x.txt', ntf.signal, 2
To convert to a wav file, you'll need to load this signal and convert to passband. Here's some example Python code to do this:
import numpy as np
import scipy.io.wavfile as wav
import arlpy.signal as asig
x = np.genfromtxt('x.txt', delimiter=',')  # two columns: real and imaginary parts
x = x[:,0] + 1j * x[:,1]                   # reassemble the complex baseband signal
x = asig.bb2pb(x, 12000, 12000, 96000)     # baseband (fd=12000, fc=12000) to passband at 96 kSa/s
wav.write('x.wav', 96000, x)               # write out as a 96 kSa/s wav file
NOTE: You will need to replace the fd and fc values of 12000 with whatever the fs and fc fields in your modem's RxBasebandSignalNtf are. For Unet audio it is 12000 for both, but for Subnero M25 series modems it is probably 24000.
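For example, on a modem whose RxBasebandSignalNtf reports fs and fc of 24000 (as expected for an M25 series modem), only the conversion line changes (an assumption; check the fields in your own notification):
x = asig.bb2pb(x, 24000, 24000, 96000)     # fd=24000, fc=24000, passband at 96 kSa/s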
Now you have your wav file at 96 kSa/s!
You could also plot a spectrogram to check if you wanted to:
import arlpy.plot as plt
plt.specgram(x, fs=96000)
I have an issue while recording the signal: the modem refuses to send the JANUS frame. It looks like something is not correctly set on my end, especially fmin = 12000.0, fstep = 160.0 and hops = 13. The actual modem won't let me set fmin to 9520.0 and automatically configures the lowest fmin = 12000. How can I calculate the corresponding parameters for fmin = 12000?
Your suggestions do work on Unet audio, though.
Here are my modem logs:
> phy[3]
« PHY »
[org.arl.unet.DatagramParam]
MTU ⤇ 0
RTU ⤇ 0
[org.arl.unet.phy.PhysicalChannelParam]
dataRate ⤇ 64.0
errorDetection ⤇ true
fec = 7
fecList ⤇ [LDPC1, LDPC2, LDPC3, LDPC4, LDPC5, LDPC6, ICONV2]
frameDuration ⤇ 1.0
frameLength = 8
janus = true
llr = false
maxFrameLength ⤇ 56
powerLevel = -10.0
[org.arl.yoda.FhbfskParam]
chiplen = 1
fmin = 12000.0
fstep = 160.0
hops = 13
scrambler = 0
sync = true
tukey = true
[org.arl.yoda.ModemChannelParam]
basebandExtra = 0
basebandRx = true
modulation = fhbfsk
preamble = (2400 samples)
test = false
threshold = 0.3
valid ⤇ false
> phy << new TxJanusFrameReq()
REFUSE: Frame type not setup correctly
phy >> FAILURE: Timed out

Get current Payara MaxHeapSize and MetaspaceSize

I have a running Payara 4 instance for which I set the MaxHeapSize and MetaspaceSize as described here, to be production ready. How can I check that those values were correctly set?
You could check this using jmap -heap <pid> on the PID of the Payara process. jmap is contained in the JDK bin directory.
On JDK9+ you need to use jhsdb jmap --heap --pid <PID> to get the needed information.
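A hedged usage example (the PID shown is hypothetical; jps also ships in the JDK bin directory and lists running JVMs):
jps -l                            # find the Payara process PID
jmap -heap 12345                  # JDK 8 and earlier
jhsdb jmap --heap --pid 12345     # JDK 9 and later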
The output should contain the needed information, e.g.:
Heap Configuration:
MinHeapFreeRatio = 0
MaxHeapFreeRatio = 100
MaxHeapSize = 268435456 (256.0MB)
NewSize = 89128960 (85.0MB)
MaxNewSize = 89128960 (85.0MB)
OldSize = 179306496 (171.0MB)
NewRatio = 2
SurvivorRatio = 8
MetaspaceSize = 21807104 (20.796875MB)
CompressedClassSpaceSize = 1073741824 (1024.0MB)
MaxMetaspaceSize = 17592186044415 MB
G1HeapRegionSize = 0 (0.0MB)
