I am using OpenEBS with Jiva. I have a MySQL pod running on OpenEBS with 3 replicas. The DB is around 10 GB, with an actual volume size of ~30 GB.
After I lose a replica, a new replica spins up. Assuming that it starts replicating data immediately:
1) How do I know the rebuild is done and it is safe to rely on the new replica?
2) What is the average time to complete a replica rebuild on AWS (using EBS volumes) per 10 GB of data?
You need to run this command on your openebs-apiserver:
mayactl --namespace stolon --volname my-data-my-service-0-1559496748 volume stats
And I got this result:
Executing volume stats...
Portal Details :
---------------
IQN : iqn.2016-09.com.openebs.jiva:my-data-my-service-0-1559496748
Volume : my-data-my-service-0-1559496748
Portal : 10.43.111.28:3260
Size : 70Gi
Replica Stats :
----------------
REPLICA STATUS DATAUPDATEINDEX
-------- ------- ----------------
10.42.7.56 running 1784
10.42.9.13 running 1266322
10.42.3.13 running 1266322
Performance Stats :
--------------------
r/s w/s r(MB/s) w(MB/s) rLat(ms) wLat(ms)
---- ---- -------- -------- --------- ---------
0 22 0.000 0.188 0.000 7.625
Capacity Stats :
---------------
LOGICAL(GB) USED(GB)
------------ ---------
77.834 65.966
From this example you can see from the DATAUPDATEINDEX column that this replica is not ready yet:
10.42.7.56 running 1784
Its index is still far behind the other two replicas (1266322 here); once the values catch up, the rebuild is done.
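If you want to check this programmatically, below is a rough sketch that polls those stats and reports when the replicas' DATAUPDATEINDEX values have (nearly) converged, which is the same signal as reading the table by eye. It assumes kubectl access and the output format shown above; the pod name, namespaces and tolerance are placeholders, not anything OpenEBS-specific:

# Rough sketch: poll `mayactl ... volume stats` and report when all replicas
# are running with (almost) the same DATAUPDATEINDEX, i.e. the rebuild caught up.
# Pod name, namespaces, volume name and tolerance are placeholders.
import subprocess
import time

CMD = [
    "kubectl", "exec", "-n", "openebs", "maya-apiserver-xxxx", "--",
    "mayactl", "--namespace", "stolon",
    "--volname", "my-data-my-service-0-1559496748", "volume", "stats",
]

def replica_indexes():
    out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
    indexes = {}
    for line in out.splitlines():
        parts = line.split()
        # Replica rows look like: "10.42.7.56  running  1784"
        if len(parts) == 3 and parts[1].lower() == "running" and parts[2].isdigit():
            indexes[parts[0]] = int(parts[2])
    return indexes

while True:
    idx = replica_indexes()
    print(idx)
    # With live writes the indexes keep moving, so allow a small spread
    # instead of requiring strict equality.
    if len(idx) == 3 and max(idx.values()) - min(idx.values()) < 100:
        print("All replicas report (almost) the same DATAUPDATEINDEX -- rebuild looks done.")
        break
    time.sleep(30)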
I have a simple table:
create table if not exists keyspace_test.table_test
(
id int,
date text,
val float,
primary key (id, date)
)
with caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
and compaction = {'class': 'SizeTieredCompactionStrategy'}
and compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
and dclocal_read_repair_chance = 0
and speculative_retry = '99.0PERCENTILE'
and read_repair_chance = 1;
After that I import 12 million rows. Then I want to run a simple calculation: count the rows and sum the val column, with this query:
SELECT COUNT(*), SUM(val)
FROM keyspace_test.table_test
but it shows this error:
Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded)
I already added USING TIMEOUT 180s; but it shows this error:
Timed out waiting for server response
The servers I use are spread across 2 datacenters. Each datacenter has 4 servers.
# docker exec -it scylla-120 nodetool status
Datacenter: dc2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.3.192.25 79.04 GB 256 ? 5975a143fec6 Rack1
UN 10.3.192.24 74.2 GB 256 ? 61dc1cfd3e92 Rack1
UN 10.3.192.22 88.21 GB 256 ? 0d24d52d6b0a Rack1
UN 10.3.192.23 63.41 GB 256 ? 962s266518ee Rack1
Datacenter: dc3
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 34.77.78.21 83.5 GB 256 ? 5112f248dd38 Rack1
UN 34.77.78.20 59.87 GB 256 ? e8db897ca33b Rack1
UN 34.77.78.48 81.32 GB 256 ? cb88bd9326db Rack1
UN 34.77.78.47 79.8 GB 256 ? 562a721d4b77 Rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
And I created the keyspace with:
CREATE KEYSPACE keyspace_test WITH replication = { 'class' : 'NetworkTopologyStrategy', 'dc2' : 3, 'dc3' : 3};
How do I properly configure this Scylla cluster for millions of rows of data?
Not sure about SUM, but you could use DSBulk to count the rows in a table.
dsbulk count \
-k keyspace_test \
-t table_test \
-u username \
-p password \
-h 10.3.192.25
DSBulk takes token range ownership into account, so it's not as stressful on the cluster.
As explained in the Scylla documentation (https://docs.scylladb.com/stable/kb/count-all-rows.html), a COUNT requires scanning the entire table, which can take a long time, so using USING TIMEOUT like you did is indeed the right thing.
I don't know whether 180 seconds is long enough to scan 12 million rows in your table. To be sure, you can try increasing the timeout to 3600 seconds and see if it ever finishes, or run a full-table scan (not just a count) to see how fast it progresses and estimate how long a count() might take. A count() should take less time than an actual scan returning data, but not much less: it still does all the same I/O.
Also, it is important to note that until very recently, COUNT was implemented inefficiently: it proceeded sequentially instead of utilizing all the shards in the system. This was fixed in https://github.com/scylladb/scylladb/commit/fe65122ccd40a2a3577121aebdb9a5b50deb4a90, but the fix only reached Scylla 5.1 (or the master branch). Are you using an older version of Scylla? The example in that commit suggests that the new implementation may be as much as 30 times faster than the old one!
So hopefully, on Scylla 5.1 a much lower timeout will be enough for your COUNT operation to finish. On older versions you can emulate what Scylla 5.1 does manually: divide the token range into parts, invoke a partial COUNT on each of those token ranges in parallel, and then sum up the results from all the ranges (see the sketch below).
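For illustration, here is a minimal sketch of that manual emulation using the Python cassandra-driver; the contact point, number of splits, thread count and per-query timeout are arbitrary assumptions, not values taken from your cluster:

# Minimal sketch: split the Murmur3 token range into sub-ranges and run a
# partial COUNT/SUM on each of them in parallel, then add up the partial results.
from concurrent.futures import ThreadPoolExecutor
from cassandra.cluster import Cluster

MIN_TOKEN = -2**63        # Murmur3 partitioner token boundaries
MAX_TOKEN = 2**63 - 1
N_SPLITS = 256            # how many sub-ranges to count in parallel

cluster = Cluster(['10.3.192.25'])   # any contact point of the cluster
session = cluster.connect()

query = ("SELECT COUNT(*) AS c, SUM(val) AS s FROM keyspace_test.table_test "
         "WHERE token(id) >= %s AND token(id) <= %s")

def count_range(bounds):
    lo, hi = bounds
    row = session.execute(query, (lo, hi), timeout=120).one()
    return row.c, (row.s or 0.0)

step = (MAX_TOKEN - MIN_TOKEN) // N_SPLITS
ranges = [(MIN_TOKEN + i * step,
           MAX_TOKEN if i == N_SPLITS - 1 else MIN_TOKEN + (i + 1) * step - 1)
          for i in range(N_SPLITS)]

with ThreadPoolExecutor(max_workers=16) as pool:
    partials = list(pool.map(count_range, ranges))

print("rows:", sum(c for c, _ in partials))
print("sum(val):", sum(s for _, s in partials))

Each sub-range query only touches a fraction of the data, so it should finish well within the timeout, and the many smaller queries spread the load across shards much like the Scylla 5.1 implementation does.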
I have a hardware server (2x Intel E5 CPUs with 6 cores each, 128 GB DDR3 RAM, NVMe SSDs).
The RAM-related settings in the config file are:
dbms.memory.heap.initial_size=25g
dbms.memory.heap.max_size=25g
dbms.memory.pagecache.size=40g
dbms.memory.transaction.global_max_size=20g
I have a tree structure. For test purposes I generated a tree where each parent has 15 children, with a depth of 4 (nodes per depth: 0 => 15, 1 => 225, 2 => 3,375, 3 => ~50k, 4 => ~760k).
If I want to output a lot of nodes the query gets stuck.
Nodes are persons with only a few attributes:
-id (from neo4j)
-status
-customerid
-level (depth of tree)
The Nodes are linked in two ways:
:UPLINE from deeper level pointing at root
:DOWNLINE from root to leaves
My Query for getting the tree:
MATCH p = (r:Person {VM:1})-[:DOWNLINE *]->(x)
RETURN DISTINCT nodes(p) AS Customer
LIMIT 5000
Started streaming 5000 records after 2 ms and completed after 42 ms, displaying first 1000 rows.
Displaying the data takes so long even for 5000 records. It takes like 30s to display the nodes or show them in a table.
For a test I created a tree structure in a MySQL and SQL-Server database with the schema:
ID | ManagerID | Status
---+-----------+---------
 1 | [-]       | active
 2 | 1         | active
...
15 | 1         | active
16 | 2         | inactive
17 | 2         | active
...
If I query that with a recursive CTE, I get faster times than on Neo4j.
Am I wrong in thinking that Neo4j should be faster at this task?
Thank you for your help!
I've found this post about the usual size of a SonarQube database:
How big is a sonar database?
In our case, we have 3,584,947 LOC to analyze. If every 1,000 LOC stores 350 KB of data, it should use about 1.2 GB, but we've found that our SonarQube database actually stores more than 20 GB...
The official documentation (https://docs.sonarqube.org/display/SONAR/Requirements) says that for 30 million LOC with 4 years of history, they use less than 20 GB...
In our General Settings > Database Cleaner we have all default values except for "Delete all analyses after", which is set to 360 instead of 260.
What can create so much data in our case?
We use SonarQube version 6.7.1.
EDIT
As @simonbrandhof asked, here are our biggest tables:
| Table Name | # Records | Data (KB) |
|`dbo.project_measures` | 12'334'168 | 6'038'384 |
|`dbo.ce_scanner_context`| 116'401 | 12'258'560 |
|`dbo.issues` | 2'175'244 | 2'168'496 |
20 GB of disk sounds way too big for 3.5M lines of code. For comparison, the internal PostgreSQL schema at SonarSource is 2.1 GB for 1M lines of code.
I recommend cleaning up the db in order to refresh statistics and reclaim dead storage. The command is VACUUM FULL on PostgreSQL; there are probably similar commands on other databases. If it's not better after that, then please provide the list of your biggest tables.
EDIT
The unexpected size of table ce_scanner_context is due to https://jira.sonarsource.com/browse/SONAR-10658. This bug is going to be fixed in 6.7.4 and 7.2.
I have a SQL Server table in which I need to store daily interest rate data.
Each day, rates are published for multiple time periods. For example, a 1-day rate might be 0.12%, a 180-day rate might be 0.070%, and so on.
I am considering 2 options for storing the data.
One option is to create columns for date, "days" and rate:
Date | Days | Rate
=========================
11/16/2015 | 1 | 0.12
11/16/2015 | 90 | 0.12
11/16/2015 | 180 | 0.7
11/16/2015 | 365 | 0.97
The other option is to store the "days" and rates as a JSON string (or XML):
Date | Rates
=============================================================
11/16/2015 | { {1,0.12}, {90,0.12}, {180, 0.7}, {365, 0.97} }
Data will only be imported via bulk insert; when we need to delete, we'll just delete all the records and re-import; there is no need for updates. So my need is mostly to read rates for a specified date or range of dates into a .NET application for processing.
I like option 2 (JSON) - it will be easier to create objects in my application; but I also like option 1 because I have more control over the data - data types and constraints.
Any similar experience out there on what might be the best approach or does anyone care to chime in with their thoughts?
I would do option 1. MS SQL Server is a relational database, and storing key:value pairs as in option 2 is not normalized and is not efficient for SQL Server to deal with. If you really want option 2, I would use something other than SQL Server.
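To illustrate why option 1 stays convenient on the application side, here is a rough sketch that reads a date range of rates and groups them into one {days: rate} dictionary per date (shown in Python with pyodbc rather than .NET, and the table/column names dbo.DailyRates, RateDate, Days, Rate are made up for the example):

# Sketch: read all rates for a date range from the normalized (option 1) table
# and group them per date -- roughly the shape option 2 would have stored as JSON.
from collections import defaultdict
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=Rates;Trusted_Connection=yes;"
)

sql = """
    SELECT RateDate, Days, Rate
    FROM dbo.DailyRates
    WHERE RateDate BETWEEN ? AND ?
    ORDER BY RateDate, Days
"""

curve_by_date = defaultdict(dict)
for rate_date, days, rate in conn.cursor().execute(sql, "2015-11-16", "2015-11-20"):
    curve_by_date[rate_date][days] = float(rate)

print(curve_by_date)

Building those per-date objects takes a few lines of code, and you keep proper data types, constraints and indexing on the relational side.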
I am running Apache2 on Linux (Ubuntu 9.10).
I am trying to monitor the load on my server using mod_status.
There are 2 things that puzzle me (see cut-and-paste below):
1) The CPU load is reported as a ridiculously small number, whereas "uptime" reports a number between 0.05 and 0.15 at the same time.
2) The "requests/sec" is also ridiculously low (0.06), when I know there are at least 10 requests coming in per second right now.
(You can see there are close to a quarter million "accesses" - this sounds right.)
I am wondering whether this is a bug (if so, is there a fix/workaround),
or maybe a configuration error (but I can't imagine how).
Any insights would be appreciated.
-- David Jones
- - - - -
Current Time: Friday, 07-Jan-2011 13:48:09 PST
Restart Time: Thursday, 25-Nov-2010 14:50:59 PST
Parent Server Generation: 0
Server uptime: 42 days 22 hours 57 minutes 10 seconds
Total accesses: 238015 - Total Traffic: 91.5 MB
CPU Usage: u2.15 s1.54 cu0 cs0 - 9.94e-5% CPU load
.0641 requests/sec - 25 B/second - 402 B/request
11 requests currently being processed, 2 idle workers
- - - - -
After I restarted my Apache server, I realized what was going on. The "requests/sec" is calculated over the lifetime of the server. So if your Apache server has been running for 3 months, this tells you nothing at all about the current load on your server. Instead, it reports the total number of requests divided by the total number of seconds of uptime.
It would be nice if there was a way to see the current load on your server. Any ideas?
Anyway, ... answered my own question.
-- David Jones
The Apache status value "Total Accesses" is the total access count since the server started; the per-second delta of that counter is exactly what we mean by "requests per second".
Here is one way to get it:
1) Apache monitor script for zabbix
https://github.com/lorf/zapache/blob/master/zapache
2) Install & configure zabbix_agentd:
UserParameter=apache.status[*],/bin/bash /path/apache_status.sh $1 $2
3) Zabbix - Create apache template - Create Monitor item
Key: apache.status[{$APACHE_STATUS_URL}, TotalAccesses]
Type: Numeric(float)
Update interval: 20
Store value: Delta (speed per second) --this is the key option
Zabbix will calculate the increment of the Apache request counter and store the delta value, which is the "requests per second".
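If you just want a quick ad-hoc number without setting up Zabbix, the same delta idea can be done by sampling mod_status twice and dividing by the interval; a minimal sketch, assuming the machine-readable ?auto endpoint is enabled at the URL below:

# Minimal sketch: sample "Total Accesses" from mod_status twice and divide the
# increment by the interval to get the current requests/sec.
import time
import urllib.request

STATUS_URL = "http://localhost/server-status?auto"   # adjust to your server
INTERVAL = 20                                         # seconds between samples

def total_accesses():
    with urllib.request.urlopen(STATUS_URL) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("Total Accesses:"):
                return int(line.split(":")[1])
    raise RuntimeError("Total Accesses not found in mod_status output")

first = total_accesses()
time.sleep(INTERVAL)
second = total_accesses()
print(f"current load: {(second - first) / INTERVAL:.2f} requests/sec")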