How to calculate the redis database size?

How can we calculate the size of a Redis database in MB? For example, if there are 1,000 records in the Redis database, how can we calculate their size in MB?
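One way to answer this is to ask Redis itself rather than computing it by hand. A minimal sketch with redis-cli against a default local instance (the key name is a placeholder):

# Total memory the dataset occupies; look at used_memory_human for an MB figure
redis-cli INFO memory
# Number of keys in the currently selected database
redis-cli DBSIZE
# Approximate memory used by one key, in bytes (Redis 4.0+)
redis-cli MEMORY USAGE user:1000

Dividing used_memory by the DBSIZE key count gives a rough average size per record, which you can then scale to your 1,000 records.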

Related

Improve PostgreSQL pg_restore Performance from 130 hours

I am trying to improve the time taken to restore a PostgreSQL database backup using pg_restore. The 29 GB gzip-compressed backup file is created from a 380 GB PostgreSQL database using pg_dump -Z0 -Fc piped into pigz.
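Spelled out, that dump pipeline looks roughly like this (the database name and output path are assumptions):

# -Z0 disables pg_dump's built-in compression so pigz can compress in parallel
pg_dump -Z0 -Fc my_database | pigz > /var/lib/postgresql/backups/backup_2020-02-29.gz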
During pg_restore, the database size is increasing at a rate of about 50 MB/minute, estimated with the SELECT pg_size_pretty(pg_database_size(current_database())) query. At this rate the restore will take approximately 130 hours, which is far too long.
On further investigation, it appears that the CPU usage is low despite setting pg_restore to use 4 workers.
The disk write speed and IOPS are also very low.
Benchmarking the system's IO using fio has shown that it can do 300 MB/s writes and 2000 IOPS, so we are utilizing only about 20% of the potential IO capabilities.
Is there any way to speed up the database restore?
System
Ubuntu 18.04.3
1 vCPU, 2 GB RAM, 4GB Swap
500 GB ZFS (2-way mirror array)
PostgreSQL 11.6
TimescaleDB 1.6.0
Steps taken to perform restore:
Decompress the .gz file to /var/lib/postgresql/backups/backup_2020-02-29 (takes ~40 minutes)
Modify postgresql.conf settings
work_mem = 32MB
shared_buffers = 1GB
maintenance_work_mem = 1GB
full_page_writes = off
autovacuum = off
wal_buffers = -1
pg_ctl restart
Run the following commands inside psql:
CREATE DATABASE database_development;
\c database_development
CREATE EXTENSION timescaledb;
SELECT timescaledb_pre_restore();
\! time pg_restore -j 4 -Fc -d database_development /var/lib/postgresql/backups/backup_2020-02-29
SELECT timescaledb_post_restore();
Your database system is I/O bound, as you can see from the %iowait value of 63.62.
Increasing maintenance_work_mem might improve the situation a little, but essentially you need faster storage.
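To watch that I/O wait yourself while the restore runs, a quick check with iostat from the sysstat package (the 63.62 figure presumably came from output of this kind):

# %iowait in the CPU summary is time spent waiting on disk;
# -x adds per-device write throughput and utilization, refreshed every 5 seconds
iostat -x 5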

How much data can be stored in a single instance of MongoDB

I have a single-node MongoDB instance with 8 GB RAM and a 500 GB hard disk.
What is the maximum amount of data that can be stored in each of the following?
1) Per collection
2) Per database
3) Per MongoDB instance
Thanks,
Harry
If you are using the WiredTiger storage engine, there is no hard cap on the number or size of collections and databases on a single instance.
The only constraint is that the maximum BSON document size is 16 MB (16,777,216 bytes).
From that limit you can estimate an upper bound for a collection or database, depending on how many documents and collections each one holds.
For more information on the limits for your storage engine and MongoDB version, see the MongoDB Limits and Thresholds documentation.
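To see what your data actually occupies right now, you can ask the server directly. A minimal sketch using mongosh against a local instance (the database and collection names are placeholders):

# Data, index and storage sizes for one database
mongosh --eval 'db.getSiblingDB("mydb").stats()'
# The same breakdown for a single collection
mongosh --eval 'db.getSiblingDB("mydb").mycollection.stats()'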

Best way to store data in SQLite: Database with many tables VS many databases?

I'm storing historical stock data for 200 stocks (each stock with ~1 GB of data), and I will be adding about 1 MB of data per day to each stock.
SQLite allows for very large database file sizes, but I'm wondering if as the file size grows, the database's performance will suffer?
Should I keep database file sizes small (~1GB) by using a separate database for each stock? OR should I just set up one LARGE database (~200GB) with a separate table for each stock?
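Whichever layout you choose, you can track each file's size from inside SQLite itself. A minimal sketch, assuming a file named stocks.db and, for the many-databases option, a per-stock file with a hypothetical prices table:

# File size in bytes = page_count * page_size
sqlite3 stocks.db "SELECT page_count * page_size AS bytes FROM pragma_page_count(), pragma_page_size();"
# With separate databases, ATTACH still lets one connection query across files
sqlite3 stocks.db "ATTACH 'AAPL.db' AS aapl; SELECT count(*) FROM aapl.prices;"

Note that ATTACH is limited to 10 databases per connection by default (raisable to 125 at compile time), which matters if you go the 200-file route.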

PostgreSQL - calculate database size

I need to build a deployment plan for a medium-sized application with many PostgreSQL databases (720). Most of them share a similar model, but I have to keep them separate for manageability and performance. Each database holds about 400,000 to 1,000,000 records and serves both reads and writes.
I have three questions:
1. How can I calculate how many databases each machine (CentOS, 2.08 GHz CPU, 4 GB RAM) can handle, i.e. how many databases can I deploy per machine? I expect about 10 concurrent connections.
2. Is there a tutorial for calculating database size?
3. Can PostgreSQL run in an active-standby configuration?
I don't think that your server can possibly handle such a load (if these are your true numbers).
A simple calculation: let's round your 720 databases up to 1,000.
Let's also round your average row width of 7,288 bytes up to 10,000 (10 KB).
Assume that every database will hold 1 million rows.
Considering all these facts, total size of database in bytes can be estimated as:
1,000 * 10,000 * 1,000,000 = 10,000,000,000,000 = 10 TB
In other words, you will need at least a few of the biggest hard drives money can buy (probably 4 TB each), and then hardware or software RAID to get adequate reliability out of them.
Note that I did not account for indexes. Depending on the nature of your data and your queries, indexes can take anything from 10% to 100% of your data size. Full-text search indexes can take 5x more than the raw data.
At any rate, your server with just 4 GB of RAM will barely be able to move while trying to serve such a huge installation.
It should, however, be able to serve not 1,000 but probably 10 or slightly more databases of your size.
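Once some representative data is loaded, you can measure rather than estimate. A minimal sketch in psql (database and table names are placeholders):

-- One database, human-readable
SELECT pg_size_pretty(pg_database_size('mydb'));
-- One table including its indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('mytable'));
-- Average row width, to sanity-check the arithmetic above
SELECT avg(pg_column_size(t.*)) FROM mytable AS t;

Loading one representative database and multiplying by 720 will give a far better estimate than any back-of-the-envelope calculation.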

Problem with size of my SQL Server Log file

My database is on SQL Server 2008 Express Advanced:
Recovery model is full
AutoShrink is false
When I ran sp_spaceused, I got the following:
database_name      db_size     unallocated_space
FreeLearningTuts   1398.13 MB  0.73 MB

reserved    data       index_size  unused
211216 KB   207024 KB  2944 KB     1248 KB
The tables themselves account for about 150 MB, but the database size shows 1398.13 MB; the rest is probably the log file. What should I do to reduce the size of the database?
Does anything look wrong with my DB from the figures above, or are these the figures a healthy database shows?
Do you do regular log backups? If not, the log will keep growing until you hit the maximum db size for SQL Express.
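If the log is indeed the culprit, the usual remedy under the full recovery model is a log backup followed by a one-time shrink. A minimal sketch (the backup path and the logical log file name are assumptions; check the real name with sp_helpfile):

-- Back up the log so SQL Server can release the space inside it
BACKUP LOG FreeLearningTuts TO DISK = 'C:\Backups\FreeLearningTuts_log.trn';
-- One-time shrink of the log file to roughly 100 MB
DBCC SHRINKFILE (FreeLearningTuts_log, 100);

After that, schedule regular log backups (or switch to the simple recovery model if point-in-time restore is not needed) so the log does not grow back.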
