Scheduled backup never runs in Plesk 12.0.18

My scheduled backup never runs in Plesk 12.0.18.
The cron file has the right user, group, and permissions:
ll /etc/cron.d/plesk-backup-manager-task
-rw-r--r-- 1 root root 111 Nov 19 15:54 /etc/cron.d/plesk-backup-manager-task
cat /etc/cron.d/plesk-backup-manager-task
10,25,40,55 * * * * root [ -x /opt/psa/admin/sbin/backupmng ] && /opt/psa/admin/sbin/backupmng >/dev/null 2>&1
The configuration in the web admin panel looks correct, and the record in the database also seems right:
mysql> select * from psa.BackupsScheduled;
+----+--------+----------+------------+---------------------+--------+--------+-----------+----------+--------+-------+------------+---------+--------------+------------+-------------+------------------------------+
| id | obj_id | obj_type | repository | last | period | active | processed | rotation | prefix | email | split_size | suspend | with_content | backup_day | backup_time | content_type |
+----+--------+----------+------------+---------------------+--------+--------+-----------+----------+--------+-------+------------+---------+--------------+------------+-------------+------------------------------+
| 1 | 6 | domain | ftp | 2016-01-05 19:03:41 | 604800 | true | false | 4 | abc | | 0 | false | true | 2 | 03:00:00 | backup_content_all_at_domain |
+----+--------+----------+------------+---------------------+--------+--------+-----------+----------+--------+-------+------------+---------+--------------+------------+-------------+------------------------------+
1 row in set (0.00 sec)
The BackupsSettings table also has the right values; indeed, backups work properly when I run them manually.
I also checked the log files related to the Plesk backup manager (http://kb.odin.com/it/111283), but I can only see the backups that were executed manually.
When I run /opt/psa/admin/sbin/backupmng by hand, nothing happens.
That is probably normal: the cron job runs every 15 minutes, so I assume backupmng reads the scheduled tasks from the database and only executes one when it is actually due.
Now I don't know whether I should change the cron job so that it matches the task scheduled for 3:00 AM every Tuesday:
0,15,30,45 * * * * root [ -x /opt/psa/admin/sbin/backupmng ] && /opt/psa/admin/sbin/backupmng >/dev/null 2>&1
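For what it's worth, one way to check whether the scheduler should consider the task due is to compare last + period against the current time. A minimal sketch, assuming (as the dump above suggests, though this is undocumented) that last is a DATETIME and period is in seconds:
SELECT id, obj_id, last, period,
       FROM_UNIXTIME(UNIX_TIMESTAMP(last) + period) AS next_run,
       NOW() AS now
FROM psa.BackupsScheduled
WHERE active = 'true' AND processed = 'false';
-- If next_run is in the past, backupmng should pick the task up on its next cron invocation.
If next_run is already long past and the task still never fires, the problem is more likely in backupmng itself than in the cron schedule.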

Related

How to list a stage in Snowflake?

Look at this session:
greendatasvc#COMPUTE_WH@POS_DATA.BLAZE>CREATE STAGE IF NOT EXISTS NDJSON_STAGE FILE_FORMAT = NDJSON;
+---------------------------------------------------+
| status |
|---------------------------------------------------|
| NDJSON_STAGE already exists, statement succeeded. |
+---------------------------------------------------+
1 Row(s) produced. Time Elapsed: 0.182s
greendatasvc#COMPUTE_WH@POS_DATA.BLAZE>SHOW FILE FORMATS;
greendatasvc#COMPUTE_WH@POS_DATA.BLAZE>LIST @NDJSON_STAGE;
+------+------+-----+---------------+
| name | size | md5 | last_modified |
|------+------+-----+---------------|
+------+------+-----+---------------+
0 Row(s) produced. Time Elapsed: 0.192s
greendatasvc#COMPUTE_WH@POS_DATA.BLAZE>SHOW STAGES;
+-------------------------------+--------------+---------------+-------------+-----+-----------------+--------------------+----------+---------+--------+----------+-------+----------------------+---------------------+
| created_on | name | database_name | schema_name | url | has_credentials | has_encryption_key | owner | comment | region | type | cloud | notification_channel | storage_integration |
|-------------------------------+--------------+---------------+-------------+-----+-----------------+--------------------+----------+---------+--------+----------+-------+----------------------+---------------------|
| 2021-10-19 12:31:31.043 -0700 | NDJSON_STAGE | POS_DATA | BLAZE | | N | N | SYSADMIN | | NULL | INTERNAL | NULL | NULL | NULL |
+-------------------------------+--------------+---------------+-------------+-----+-----------------+--------------------+----------+---------+--------+----------+-------+----------------------+---------------------+
1 Row(s) produced. Time Elapsed: 0.159s
I believe I already have a stage named NDJSON_STAGE, based on the output when I try to create one. However, when I try to list it, I get no results. Am I using the LIST command incorrectly?
Your stage exists; that's confirmed both by the 'already exists' response and by the fact that you didn't receive any error when trying to list files from it.
If LIST @NDJSON_STAGE; returns nothing, that's probably because there are no files in the stage yet. Upload a file with a PUT command and you should then be able to list the staged files.
Just to be clear, LIST @stagename returns the list of files that have been staged on that stage.
In your case the stage is empty.
If you want to display the stages you have access to, use SHOW STAGES; it lists all the stages for which you have access privileges.
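For illustration, a minimal sketch of staging and then listing a file (the local path /tmp/pos.ndjson is hypothetical):
PUT file:///tmp/pos.ndjson @NDJSON_STAGE;
LIST @NDJSON_STAGE;
Note that PUT compresses files by default (AUTO_COMPRESS = TRUE), so the listing will show pos.ndjson.gz rather than the original file name.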

Best design for refactoring multiple tables with the same columns but different FK

I currently have a database with multiple log tables, each used to record the states of a process. A process log has three basic columns: the table ID, the process State, and ProcessID, a foreign key to the respective Process table.
Let's say I have these tables:
ProcessALog
ID | State | ProcessID
---|------------|---------
1 | Created | 24
2 | Created | 32
3 | Processing | 24
4 | Canceled | 24
5 | Processing | 32
ProcessBLog
ID | State | ProcessID
---|------------|---------
1 | Created | 12
2 | Processing | 12
3 | Deleted | 12
But I found a problem with this implementation: I would need to create yet another table each time I need to log a new process. I figured I could simplify this by having a central log table with an extra column, ProcessName, to distinguish the different processes, like so:
Log
ID | State | ProcessID | ProcessName
---|------------|-----------|-------------
1 | Created | 24 | ProcessA
2 | Created | 32 | ProcessA
3 | Processing | 24 | ProcessA
4 | Canceled | 24 | ProcessA
5 | Processing | 32 | ProcessA
1 | Created | 12 | ProcessB
2 | Processing | 12 | ProcessB
3 | Deleted | 12 | ProcessB
But having a central log table would mean that my ProcessID can't be a foreign key anymore.
How can I retain my foreign keys? Is this a good database design?
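One common way to keep a real foreign key is a supertype/subtype design: introduce a shared Process table that every concrete process row points to, and let the single log table reference that. This is only a sketch; the table and column names are illustrative:
CREATE TABLE Process (
    ProcessID   INT PRIMARY KEY AUTO_INCREMENT,
    ProcessName VARCHAR(50) NOT NULL   -- 'ProcessA', 'ProcessB', ...
);

CREATE TABLE ProcessA (
    ProcessID INT PRIMARY KEY,          -- same value as the supertype row
    -- ProcessA-specific columns would go here
    FOREIGN KEY (ProcessID) REFERENCES Process(ProcessID)
);

CREATE TABLE Log (
    ID        INT PRIMARY KEY AUTO_INCREMENT,
    State     VARCHAR(20) NOT NULL,
    ProcessID INT NOT NULL,
    FOREIGN KEY (ProcessID) REFERENCES Process(ProcessID)
);
This also removes an ambiguity in the central-table example above, where ProcessA row 24 and ProcessB row 12 live in unrelated ID spaces: with a supertype, every ProcessID is unique across all process kinds.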

GAE leaving mysql connections open

Sometimes the GAE App Engine instance fails to respond successfully, for requests that apparently do not cause exceptions in the Django app.
When I then check the processlist on the MySQL instance, I see many unnecessary processes opened by localhost; presumably the app tries to open yet another connection and hits the connection limit.
Why does the server create new processes but fail to close the connections when it's done? How can I close these connections programmatically?
If I restart the App Engine instance, the 500 errors (and MySQL threads) disappear.
| 7422 | root | localhost | prova2 | Sleep | 1278 | | NULL
| 7436 | root | localhost | prova2 | Sleep | 703 | | NULL
| 7440 | root | localhost | prova2 | Sleep | 699 | | NULL
| 7442 | root | localhost | prova2 | Sleep | 697 | | NULL
| 7446 | root | localhost | prova2 | Sleep | 694 | | NULL
| 7448 | root | localhost | prova2 | Sleep | 694 | | NULL
| 7450 | root | localhost | prova2 | Sleep | 693 | | NULL
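As a stopgap while debugging, the idle connections can be inspected and killed from the MySQL side. A sketch, assuming you can connect as a sufficiently privileged user (the 600-second threshold is arbitrary):
SELECT id, user, db, time
FROM information_schema.PROCESSLIST
WHERE command = 'Sleep' AND time > 600;

KILL 7422;  -- kill one of the sleeping threads by its id
This doesn't fix the leak, but it keeps the instance under the connection limit while you look for the code that holds connections open.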
In the end, the problematic code turned out to be middleware that stores the queries and produces summary data about requests. The sleeping-connection problem disappears when I remove this section from appengine_config.py:
def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    app = recording.appstats_wsgi_middleware(app)
    return app

MySQL import into an InnoDB table severely spikes at a certain point

I'm trying to migrate a 30GB database from one server to another.
The short story is that at a certain point in the process, the time it takes to import records spikes severely. The following is from using the SOURCE command to import a chunk of 500k records (out of roughly 25-30 million across the database) that was exported as an SQL file and sent over an SSH tunnel to the new server:
...
Query OK, 2871 rows affected (0.73 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2870 rows affected (0.98 sec)
Records: 2870 Duplicates: 0 Warnings: 0
Query OK, 2865 rows affected (0.80 sec)
Records: 2865 Duplicates: 0 Warnings: 0
Query OK, 2871 rows affected (0.87 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (2.60 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2866 rows affected (7.53 sec)
Records: 2866 Duplicates: 0 Warnings: 0
Query OK, 2879 rows affected (8.70 sec)
Records: 2879 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (7.53 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2873 rows affected (10.06 sec)
Records: 2873 Duplicates: 0 Warnings: 0
...
The spikes eventually average out to 16-18 seconds per ~2,800 rows. Granted, I don't usually use SOURCE for a large import, but for the sake of showing legitimate output I used it to see when the spikes happen. Using the mysql command or mysqlimport yields the same results, and even piping the dump directly into the new database instead of going through an SQL file shows the same spikes.
As far as I can tell, this happens after a certain number of records have been inserted into a table. The first time I boot the server and import a chunk of that size, it goes through just fine; the spikes only start somewhere around that point, though I haven't replicated the issue consistently enough to pin the threshold down. There are ~20 tables with under 500,000 records each that all imported fine in a single command, so this seems to only affect tables with a very large amount of data.
The solutions I've come across so far seem to address the gradual slowdown that naturally accumulates over the course of an import; in my case I expected that by the end of importing 500k records it would take 2-3 seconds per ~2,800 rows, not the 16-18 seconds I'm seeing. The data comes from a single SugarCRM table called 'campaign_log', which has ~9 million records. I was able to import it in chunks of 500k back onto the old server I'm migrating off of without these spikes, so I assume this has to do with my new server's configuration.
One more oddity: whenever these spikes occur, the table being imported into reports its row count strangely. I know InnoDB gives count estimates, but the number shown isn't preceded by the ~ that indicates an estimate; it's usually accurate, and it doesn't change when the table is refreshed (this is based on what PhpMyAdmin reports).
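For anyone chasing a similar pattern: these standard MySQL status counters expose redo-log and checkpoint pressure during an import, which is a common cause of periodic stalls like this (a diagnostic sketch, not specific to this setup):
SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';        -- non-zero and growing suggests the redo log/log buffer is too small
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW ENGINE INNODB STATUS\G                        -- the LOG section shows checkpoint age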
Here are the InnoDB system variables, system specs, and export command on the new server:
INNODB System Vars:
+---------------------------------+------------------------+
| Variable_name | Value |
+---------------------------------+------------------------+
| have_innodb | YES |
| ignore_builtin_innodb | OFF |
| innodb_adaptive_flushing | ON |
| innodb_adaptive_hash_index | ON |
| innodb_additional_mem_pool_size | 8388608 |
| innodb_autoextend_increment | 8 |
| innodb_autoinc_lock_mode | 1 |
| innodb_buffer_pool_instances | 1 |
| innodb_buffer_pool_size | 8589934592 |
| innodb_change_buffering | all |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_data_home_dir | |
| innodb_doublewrite | ON |
| innodb_fast_shutdown | 1 |
| innodb_file_format | Antelope |
| innodb_file_format_check | ON |
| innodb_file_format_max | Antelope |
| innodb_file_per_table | OFF |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | fsync |
| innodb_force_load_corrupted | OFF |
| innodb_force_recovery | 0 |
| innodb_io_capacity | 200 |
| innodb_large_prefix | OFF |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_buffer_size | 8388608 |
| innodb_log_file_size | 5242880 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 75 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_old_blocks_pct | 37 |
| innodb_old_blocks_time | 0 |
| innodb_open_files | 300 |
| innodb_print_all_deadlocks | OFF |
| innodb_purge_batch_size | 20 |
| innodb_purge_threads | 1 |
| innodb_random_read_ahead | OFF |
| innodb_read_ahead_threshold | 56 |
| innodb_read_io_threads | 8 |
| innodb_replication_delay | 0 |
| innodb_rollback_on_timeout | OFF |
| innodb_rollback_segments | 128 |
| innodb_spin_wait_delay | 6 |
| innodb_stats_method | nulls_equal |
| innodb_stats_on_metadata | ON |
| innodb_stats_sample_pages | 8 |
| innodb_strict_mode | OFF |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 30 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 0 |
| innodb_thread_sleep_delay | 10000 |
| innodb_use_native_aio | ON |
| innodb_use_sys_malloc | ON |
| innodb_version | 5.5.39 |
| innodb_write_io_threads | 8 |
+---------------------------------+------------------------+
System Specs:
Intel Xeon E5-2680 v2 (Ivy Bridge) 8 Processors
15GB Ram
2x80 SSDs
CMD to Export:
mysqldump -u <olduser> -p<oldpw> <olddb> <table> --verbose --disable-keys --opt | ssh -i <privatekey> <newserver> "cat > <nameoffile>"
Thank you for any assistance. Let me know if there's any other information I can provide.
I figured it out: I increased innodb_log_file_size from 5MB to 1024MB. Not only did it significantly speed up the import (never above 1 second per ~3,000 rows), it also fixed the spikes. There were only two spikes across all the records I imported, and after each one the import immediately went back to taking under a second.
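For anyone repeating this on MySQL 5.5, innodb_log_file_size is not dynamic, so the resize takes a restart. A rough sketch of the procedure (back up first; the size is illustrative):
-- 1. Ensure a clean shutdown so the redo logs are fully applied
--    (innodb_fast_shutdown must be 0 or 1, not 2):
SET GLOBAL innodb_fast_shutdown = 1;
-- 2. Stop mysqld, then move ib_logfile0 and ib_logfile1 out of the data directory.
-- 3. In my.cnf, set: innodb_log_file_size = 1024M
-- 4. Start mysqld; it recreates the log files at the new size. Verify with:
SHOW VARIABLES LIKE 'innodb_log_file_size';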

Data restore procedure failed in OrientDB

Last night I received the following error after inserting ~500k records:
2014-07-03 22:10:50:056 SEVE Internal server error:
java.lang.IllegalArgumentException: Cannot get allocation information
for database 'pumpup' because it is not a disk-based database
[ONetworkProtocolHttpDb]
My OrientDB server.sh froze, so I rebooted my computer. Now when I try to access the database, I get the following output from server.sh:
2014-07-04 13:52:35:331 INFO OrientDB Server v1.7.3 is active. [OServer]
2014-07-04 13:52:38:784 WARN segment file 'database.ocf' was not closed correctly last time [OSingleFileSegment]
2014-07-04 13:52:38:879 WARN Storage pumpup was not closed properly. Will try to restore from write ahead log. [OLocalPaginatedStorage]
2014-07-04 13:52:38:879 INFO Looking for last checkpoint... [OLocalPaginatedStorage]
2014-07-04 13:52:38:879 INFO Checkpoints are absent, the restore will start from the beginning. [OLocalPaginatedStorage]
2014-07-04 13:52:38:880 INFO Data restore procedure is started. [OLocalPaginatedStorage]
2014-07-04 13:53:15:080 INFO Heap memory is low apply batch of operations are read from WAL. [OLocalPaginatedStorage]Exception during storage data restore.
null
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreWALBatch(OLocalPaginatedStorage.java:1842)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFrom(OLocalPaginatedStorage.java:1802)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromBegging(OLocalPaginatedStorage.java:1772)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromWAL(OLocalPaginatedStorage.java:1611)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreIfNeeded(OLocalPaginatedStorage.java:1578)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.open(OLocalPaginatedStorage.java:245)
-> com.orientechnologies.orient.core.db.raw.ODatabaseRaw.open(ODatabaseRaw.java:100)
-> com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.open(ODatabaseRecordAbstract.java:268)
-> com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
-> com.orientechnologies.orient.server.OServer.openDatabase(OServer.java:557)
-> com.orientechnologies.orient.server.network.protocol.http.command.OServerCommandAuthenticatedDbAbstract.authenticate(OServerCommandAuthenticatedDbAbstract.java:126)
-> com.orientechnologies.orient.server.network.protocol.http.command.OServerCommandAuthenticatedDbAbstract.beforeExecute(OServerCommandAuthenticatedDbAbstract.java:87)
-> com.orientechnologies.orient.server.network.protocol.http.command.get.OServerCommandGetConnect.beforeExecute(OServerCommandGetConnect.java:46)
-> com.orientechnologies.orient.server.network.protocol.http.ONetworkProtocolHttpAbstract.service(ONetworkProtocolHttpAbstract.java:173)
-> com.orientechnologies.orient.server.network.protocol.http.ONetworkProtocolHttpAbstract.execute(ONetworkProtocolHttpAbstract.java:572)
-> com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:45)
2014-07-04 13:53:15:082 SEVE Internal server error:
com.orientechnologies.orient.core.exception.OStorageException: Cannot open local storage '/Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup' with mode=rw
--> java.lang.NullPointerException [ONetworkProtocolHttpDb]
When I try to connect every subsequent time, I get the following:
--> com.orientechnologies.common.concur.lock.OLockException: File '/Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup/database.ocf'
is locked by another process, maybe the database is in use by another
process. Use the remote mode with a OrientDB server to allow multiple
access to the same database. [ONetworkProtocolHttpDb]
I can't connect to the database. I'm going to update from 1.7.3 to 1.7.4, recreate the database, and try again. For now, here's some output from dserver.sh as it seems to be trying to perform a data restore procedure:
2014-07-04 14:01:09:168 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] Address[192.168.1.8]:2434 is STARTED [LifecycleService]
2014-07-04 14:01:09:198 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] Initializing cluster partition table first arrangement... [InternalPartitionService]
2014-07-04 14:01:09:212 INFO [node1404496844581] found no previous messages in queue orientdb.node.node1404496844581.response [OHazelcastDistributedMessageService]
2014-07-04 14:01:09:230 WARN [node1404496844581] opening database 'pumpup'... [OHazelcastPlugin]
2014-07-04 14:01:09:231 INFO [node1404496844581] loaded database configuration from disk: /Users/gsquare567/Databases/orientdb-community-1.7.3/config/default-distributed-db-config.json [OHazelcastPlugin]
2014-07-04 14:01:09:238 INFO updated distributed configuration for database: pumpup:
----------
{
"autoDeploy":true,
"hotAlignment":false,
"readQuorum":1,
"writeQuorum":2,
"failureAvailableNodesLessQuorum":false,
"readYourWrites":true,"clusters":{
"internal":{
},
"index":{
},
"*":{
"servers":["<NEW_NODE>"]
}
},
"version":0
}
---------- [OHazelcastPlugin]
2014-07-04 14:01:09:243 INFO updated distributed configuration for database: pumpup:
----------
{
"version":0,
"autoDeploy":true,
"hotAlignment":false,
"readQuorum":1,
"writeQuorum":2,
"failureAvailableNodesLessQuorum":false,
"readYourWrites":true,"clusters":{
"internal":null,
"index":null,
"*":{
"servers":["<NEW_NODE>"]
}
}
}
---------- [OHazelcastPlugin]
2014-07-04 14:01:09:243 INFO Saving distributed configuration file for database 'pumpup' to: /Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup/distributed-config.json [OHazelcastPlugin]
2014-07-04 14:01:09:246 INFO [node1404496844581] adding node 'node1404496844581' in partition: db=pumpup [*] [OHazelcastDistributedDatabase]
2014-07-04 14:01:09:246 INFO updated distributed configuration for database: pumpup:
----------
{
"version":1,
"autoDeploy":true,
"hotAlignment":false,
"readQuorum":1,
"writeQuorum":2,
"failureAvailableNodesLessQuorum":false,
"readYourWrites":true,"clusters":{
"internal":null,
"index":null,
"*":{
"servers":["<NEW_NODE>","node1404496844581"]
}
}
}
---------- [OHazelcastPlugin]
2014-07-04 14:01:09:247 INFO Saving distributed configuration file for database 'pumpup' to: /Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup/distributed-config.json [OHazelcastPlugin]
2014-07-04 14:01:09:247 INFO [node1404496844581] received added status node1404496844581.pumpup=OFFLINE [OHazelcastPlugin]
2014-07-04 14:01:09:249 INFO [node1404496844581] found no previous messages in queue orientdb.node.node1404496844581.pumpup.request [OHazelcastDistributedMessageService]
2014-07-04 14:01:09:288 WARN segment file 'database.ocf' was not closed correctly last time [OSingleFileSegment]
2014-07-04 14:01:09:378 WARN Storage pumpup was not closed properly. Will try to restore from write ahead log. [OLocalPaginatedStorage]
2014-07-04 14:01:09:378 INFO Looking for last checkpoint... [OLocalPaginatedStorage]
2014-07-04 14:01:09:378 INFO Checkpoints are absent, the restore will start from the beginning. [OLocalPaginatedStorage]
2014-07-04 14:01:09:379 INFO Data restore procedure is started. [OLocalPaginatedStorage]
2014-07-04 14:01:35:724 INFO Heap memory is low apply batch of operations are read from WAL. [OLocalPaginatedStorage]Exception during storage data restore.
null
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreWALBatch(OLocalPaginatedStorage.java:1842)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFrom(OLocalPaginatedStorage.java:1802)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromBegging(OLocalPaginatedStorage.java:1772)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromWAL(OLocalPaginatedStorage.java:1611)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreIfNeeded(OLocalPaginatedStorage.java:1578)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.open(OLocalPaginatedStorage.java:245)
-> com.orientechnologies.orient.core.db.raw.ODatabaseRaw.open(ODatabaseRaw.java:100)
-> com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.open(ODatabaseRecordAbstract.java:268)
-> com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
-> com.orientechnologies.orient.server.OServer.openDatabase(OServer.java:557)
-> com.orientechnologies.orient.server.hazelcast.OHazelcastDistributedDatabase.initDatabaseInstance(OHazelcastDistributedDatabase.java:283)
-> com.orientechnologies.orient.server.hazelcast.OHazelcastDistributedDatabase.setOnline(OHazelcastDistributedDatabase.java:295)
-> com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.loadDistributedDatabases(OHazelcastPlugin.java:742)
-> com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.startup(OHazelcastPlugin.java:194)
-> com.orientechnologies.orient.server.OServer.registerPlugins(OServer.java:720)
-> com.orientechnologies.orient.server.OServer.activate(OServer.java:241)
-> com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:32)Exception in thread "main" com.orientechnologies.orient.core.exception.OStorageException: Cannot open local storage '/Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup' with mode=rw
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.open(OLocalPaginatedStorage.java:251)
at com.orientechnologies.orient.core.db.raw.ODatabaseRaw.open(ODatabaseRaw.java:100)
at com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
at com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.open(ODatabaseRecordAbstract.java:268)
at com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
at com.orientechnologies.orient.server.OServer.openDatabase(OServer.java:557)
at com.orientechnologies.orient.server.hazelcast.OHazelcastDistributedDatabase.initDatabaseInstance(OHazelcastDistributedDatabase.java:283)
at com.orientechnologies.orient.server.hazelcast.OHazelcastDistributedDatabase.setOnline(OHazelcastDistributedDatabase.java:295)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.loadDistributedDatabases(OHazelcastPlugin.java:742)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.startup(OHazelcastPlugin.java:194)
at com.orientechnologies.orient.server.OServer.registerPlugins(OServer.java:720)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:241)
at com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:32)
Caused by: java.lang.NullPointerException
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreWALBatch(OLocalPaginatedStorage.java:1842)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFrom(OLocalPaginatedStorage.java:1802)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromBegging(OLocalPaginatedStorage.java:1772)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromWAL(OLocalPaginatedStorage.java:1611)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreIfNeeded(OLocalPaginatedStorage.java:1578)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.open(OLocalPaginatedStorage.java:245)
... 12 more
2014-07-04 14:01:39:184 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] memory.used=1.3G, memory.free=155.9M, memory.total=1.4G, memory.max=1.8G, memory.used/total=89.32%, memory.used/max=71.63%, load.process=37.00%, load.system=41.00%, load.systemAverage=170.00%, thread.count=59, thread.peakCount=59, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.operation.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=0, operations.running.size=0, proxy.count=5, clientEndpoint.count=0, connection.active.count=0, connection.count=0 [HealthMonitor]
2014-07-04 14:02:09:195 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] memory.used=1.3G, memory.free=155.2M, memory.total=1.4G, memory.max=1.8G, memory.used/total=89.37%, memory.used/max=71.68%, load.process=0.00%, load.system=4.00%, load.systemAverage=162.00%, thread.count=59, thread.peakCount=59, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.operation.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=0, operations.running.size=0, proxy.count=5, clientEndpoint.count=0, connection.active.count=0, connection.count=0 [HealthMonitor]
2014-07-04 14:02:39:207 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] memory.used=1.3G, memory.free=149.3M, memory.total=1.4G, memory.max=1.8G, memory.used/total=89.77%, memory.used/max=72.00%, load.process=0.00%, load.system=5.00%, load.systemAverage=124.00%, thread.count=61, thread.peakCount=61, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.operation.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=0, operations.running.size=0, proxy.count=5, clientEndpoint.count=0, connection.active.count=0, connection.count=0 [HealthMonitor]
2014-07-04 14:03:09:218 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] memory.used=1.3G, memory.free=149.2M, memory.total=1.4G, memory.max=1.8G, memory.used/total=89.78%, memory.used/max=72.00%, load.process=0.00%, load.system=6.00%, load.systemAverage=151.00%, thread.count=61, thread.peakCount=61, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.operation.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=0, operations.running.size=0, proxy.count=5, clientEndpoint.count=0, connection.active.count=0, connection.count=0 [HealthMonitor]
EDIT
Here is my OrientDB info:
CLUSTERS
----------------------------------------------+-------+---------------------+---------+-----------------+
NAME | ID | TYPE | DATASEG | RECORDS |
----------------------------------------------+-------+---------------------+---------+-----------------+
default | 3 | PHYSICAL | -1 | 0 |
e | 10 | PHYSICAL | -1 | 0 |
index | 1 | PHYSICAL | -1 | 4 |
internal | 0 | PHYSICAL | -1 | 3 |
manindex | 2 | PHYSICAL | -1 | 1 |
ofunction | 7 | PHYSICAL | -1 | 0 |
orids | 6 | PHYSICAL | -1 | 0 |
orole | 4 | PHYSICAL | -1 | 3 |
oschedule | 8 | PHYSICAL | -1 | 0 |
ouser | 5 | PHYSICAL | -1 | 3 |
post | 12 | PHYSICAL | -1 | 1312295 |
user | 11 | PHYSICAL | -1 | 205795 |
v | 9 | PHYSICAL | -1 | 0 |
----------------------------------------------+-------+---------------------+---------+-----------------+
TOTAL = 13 | | 1518104 |
----------------------------------------------------------------------------+---------+-----------------+
CLASSES
----------------------------------------------+------------------------------------+------------+----------------+
NAME | SUPERCLASS | CLUSTERS | RECORDS |
----------------------------------------------+------------------------------------+------------+----------------+
E | | 10 | 0 |
OFunction | | 7 | 0 |
OIdentity | | - | 0 |
ORestricted | | - | 0 |
ORIDs | | 6 | 0 |
ORole | OIdentity | 4 | 3 |
OSchedule | | 8 | 0 |
OTriggered | | - | 0 |
OUser | OIdentity | 5 | 3 |
ParseObject | | - | 0 |
Post | ParseObject | 12 | 1312295 |
User | ParseObject | 11 | 205795 |
V | | 9 | 0 |
----------------------------------------------+------------------------------------+------------+----------------+
TOTAL = 13 1518096 |
----------------------------------------------+------------------------------------+------------+----------------+
INDEXES
----------------------------------------------+------------+-----------------------+----------------+------------+
NAME | TYPE | CLASS | FIELDS | RECORDS |
----------------------------------------------+------------+-----------------------+----------------+------------+
dictionary | DICTIONARY | | | 0 |
ORole.name | UNIQUE | ORole | name | 3 |
OUser.name | UNIQUE | OUser | name | 3 |
Post.objectId | UNIQUE_... | Post | objectId | 1312295 |
User.objectId | UNIQUE_... | User | objectId | 205795 |
----------------------------------------------+------------+-----------------------+----------------+------------+
TOTAL = 5 1518096 |
-----------------------------------------------------------------------------------------------------------------+
