Integration of Apereo CAS with Keycloak (as identity provider) via OpenID Connect or SAML 2.0

I am getting the warnings and errors below while trying to connect via SAML:
2017-08-01 11:11:59,912 ERROR [org.apereo.cas.support.pac4j.web.flow.DelegatedClientAuthenticationAction] - https://cas.example.org:8443/cas/login?client_name=SAML2Client | urlResolver: org.pac4j.core.http.DefaultUrlResolver#5ebbc212 | ajaxRequestResolver: org.pac4j.core.http.DefaultAjaxRequestResolver#7fc6bf26 | includeClientNameInCallbackUrl: true | redirectActionBuilder: null | credentialsExtractor: null | authenticator: null | profileCreator: org.pac4j.core.profile.creator.AuthenticatorProfileCreator#437895ac | logoutActionBuilder: org.pac4j.core.logout.NoLogoutActionBuilder#2326b11c | authorizationGenerators: [] |]>
.... ....
2017-08-01 11:14:37,859 WARN [org.apereo.cas.authentication.PolicyBasedAuthenticationManager] -
2017-08-01 11:14:37,861 INFO [org.apereo.inspektr.audit.support.Slf4jLoggingAuditTrailManager] -
WHO: harshad
WHAT: Supplied credentials: [harshad]
ACTION: AUTHENTICATION_FAILED
APPLICATION: CAS
WHEN: Tue Aug 01 11:14:37 IST 2017
CLIENT IP ADDRESS: 192.168.56.1
SERVER IP ADDRESS: 192.168.56.1
>
2017-08-01 11:14:37,876 ERROR [org.apereo.cas.web.flow.AuthenticationExceptionHandlerAction] -
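For reference, in CAS 5.x the delegated SAML client is typically configured through cas.properties. A minimal sketch, assuming a Keycloak realm named example; all paths, passwords, and hostnames below are illustrative placeholders, not values taken from the question:

```properties
# Illustrative values only -- adjust keystore paths, passwords, and URLs.
cas.authn.pac4j.saml[0].keystorePath=/etc/cas/config/samlKeystore.jks
cas.authn.pac4j.saml[0].keystorePassword=changeit
cas.authn.pac4j.saml[0].privateKeyPassword=changeit
cas.authn.pac4j.saml[0].serviceProviderEntityId=https://cas.example.org:8443/cas/samlsp
cas.authn.pac4j.saml[0].serviceProviderMetadataPath=/etc/cas/config/sp-metadata.xml
# Keycloak publishes a SAML IdP descriptor per realm:
cas.authn.pac4j.saml[0].identityProviderMetadataPath=https://keycloak.example.org/auth/realms/example/protocol/saml/descriptor
```

If the IdP metadata path is wrong or unreachable, the SAML2Client is constructed but authentication fails much like the log above shows.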


How to download files with GET and preserve directory structure?

Note: this is about the same situation as "Unload data to a stage with partitions 'flattened'"; one of these might be an X-Y question. I also asked on the Snowflake Community forum: https://community.snowflake.com/s/question/0D53r0000BZbU70CQF/prevent-filename-conflicts-when-unloading-files-with-partition-by.
I have some files in a Snowflake stage called mystage:
+------------------------------------------------------------------------------------------------------------+---------+----------------------------------+------------------------------+
| name | size | md5 | last_modified |
|------------------------------------------------------------------------------------------------------------+---------+----------------------------------+------------------------------|
| mystage/00/data_01a4c9ff-0602-1baa-0070-30030078eb32_1112_4_0.snappy.parquet | 8645856 | 2e3fe0e95b3b4d39f1b2c2cd2a5e5fd3 | Tue, 7 Jun 2022 13:51:40 GMT |
| mystage/01/data_01a4c9ff-0602-1baa-0070-30030078eb32_712_1_0.snappy.parquet | 8743968 | 4d59bb8cfa355c8d55f05b88aa91002f | Tue, 7 Jun 2022 13:51:44 GMT |
| mystage/02/data_01a4c9ff-0602-1baa-0070-30030078eb32_1412_1_0.snappy.parquet | 9719360 | c723aae191fb46c1c83eb660e3dd41cc | Tue, 7 Jun 2022 13:51:43 GMT |
| mystage/03/data_01a4c9ff-0602-1baa-0070-30030078eb32_712_2_0.snappy.parquet | 7786640 | b63ed8a15f18cd97ed8aaeb4fea89dee | Tue, 7 Jun 2022 13:51:44 GMT |
| mystage/04/data_01a4c9ff-0602-1baa-0070-30030078eb32_712_1_0.snappy.parquet | 9249120 | 47471b0b0fb72b8d9ae5982176f29af1 | Tue, 7 Jun 2022 13:52:00 GMT |
| mystage/05/data_01a4c9ff-0602-1baa-0070-30030078eb32_612_2_0.snappy.parquet | 9385936 | 18b286beb69b876aa3b4e08e857f69e7 | Tue, 7 Jun 2022 13:51:44 GMT |
| mystage/06/data_01a4c9ff-0602-1baa-0070-30030078eb32_612_2_0.snappy.parquet | 9187536 | 2dd0e99c7fa22fe42c624e4298c9d8f1 | Tue, 7 Jun 2022 13:51:58 GMT |
| mystage/07/data_01a4c9ff-0602-1baa-0070-30030078eb32_712_4_0.snappy.parquet | 8804096 | aaf089af92251957502665fb7ab144f5 | Tue, 7 Jun 2022 13:51:44 GMT |
| mystage/08/data_01a4c9ff-0602-1baa-0070-30030078eb32_312_5_0.snappy.parquet | 9214480 | 7969198b4e826d791e6f99455d01d72c | Tue, 7 Jun 2022 13:51:44 GMT |
| mystage/09/data_01a4c9ff-0602-1baa-0070-30030078eb32_512_5_0.snappy.parquet | 8986608 | 159c831c75439dcdc2743dabd91a66bb | Tue, 7 Jun 2022 13:51:41 GMT |
| mystage/10/data_01a4c9ff-0602-1baa-0070-30030078eb32_112_7_0.snappy.parquet | 9063392 | 5410728fe0723f350322075f89a7e37f | Tue, 7 Jun 2022 13:51:41 GMT |
| mystage/11/data_01a4c9ff-0602-1baa-0070-30030078eb32_912_7_0.snappy.parquet | 9197600 | 68f6c4f81c3d3890a9eb269bfcde97ec | Tue, 7 Jun 2022 13:51:39 GMT |
| mystage/12/data_01a4c9ff-0602-1baa-0070-30030078eb32_1512_0_0.snappy.parquet | 9613440 | 124fdd720b104d4e6a119140409454a3 | Tue, 7 Jun 2022 13:51:44 GMT |
| mystage/13/data_01a4c9ff-0602-1baa-0070-30030078eb32_112_6_0.snappy.parquet | 8524336 | 17ace7467c50a89d6558a263b40f4036 | Tue, 7 Jun 2022 13:51:40 GMT |
| mystage/14/data_01a4c9ff-0602-1baa-0070-30030078eb32_112_5_0.snappy.parquet | 8830192 | 83bf9a6e027e812583b69d735adda12c | Tue, 7 Jun 2022 13:51:41 GMT |
| mystage/15/data_01a4c9ff-0602-1baa-0070-30030078eb32_1112_3_0.snappy.parquet | 9051568 | a75520c1713b6e4cc7a3f28ac4589f42 | Tue, 7 Jun 2022 13:51:40 GMT |
+------------------------------------------------------------------------------------------------------------+---------+----------------------------------+------------------------------+
I tried to download these files:
GET @mystage file:///data;
But the files were downloaded without the 00, 01, etc. directory structure. At least two of the files have non-unique names (here data_01a4c9ff-0602-1baa-0070-30030078eb32_612_2_0.snappy.parquet), so I can't actually download all of the files!
In this particular case, it causes a "file not found" error, but in general I want some way to preserve the directory structure when downloading. Is there some other Snowflake command I can use for this purpose?
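One workaround, absent a native option, is to issue one GET per partition prefix, so that each stage subdirectory is downloaded into its own local directory and identically named files cannot collide. A minimal sketch (the helper name and the idea of feeding it the LIST output are mine, not a Snowflake feature):

```python
# Sketch: build one GET command per distinct partition directory,
# assuming stage paths look like "mystage/<partition>/<file>" as in
# the LIST output above.
from pathlib import PurePosixPath

def build_get_commands(stage_names, local_root="file:///data"):
    """One GET per partition directory, so duplicate file names in
    different partitions land in distinct local directories."""
    partitions = sorted({PurePosixPath(name).parent.name for name in stage_names})
    return [f"GET @mystage/{p} {local_root}/{p};" for p in partitions]

names = [
    "mystage/05/data_01a4c9ff-0602-1baa-0070-30030078eb32_612_2_0.snappy.parquet",
    "mystage/06/data_01a4c9ff-0602-1baa-0070-30030078eb32_612_2_0.snappy.parquet",
]
for cmd in build_get_commands(names):
    print(cmd)
```

You may need to create each local directory before running the corresponding GET, and the commands could then be executed through a connector cursor or SnowSQL.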

GAE leaving mysql connections open

Sometimes the GAE App Engine instance fails to respond successfully to requests that apparently do not cause exceptions in the Django app.
When I then check the process list on the MySQL instance, I see many unnecessary processes left open by localhost; presumably the server app tries to open a new connection and hits the process limit.
Why is the server creating new processes but failing to close the connections at the end? How can I close these connections programmatically?
If I restart the App Engine instance, the 500 errors (and MySQL threads) disappear.
| 7422 | root | localhost | prova2 | Sleep | 1278 | | NULL
| 7436 | root | localhost | prova2 | Sleep | 703 | | NULL
| 7440 | root | localhost | prova2 | Sleep | 699 | | NULL
| 7442 | root | localhost | prova2 | Sleep | 697 | | NULL
| 7446 | root | localhost | prova2 | Sleep | 694 | | NULL
| 7448 | root | localhost | prova2 | Sleep | 694 | | NULL
| 7450 | root | localhost | prova2 | Sleep | 693 | | NULL
It turned out the problematic code was middleware that stores the queries and produces some summary data for requests. The problem of sleeping connections disappears when I remove this section from appengine_config.py:
def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    app = recording.appstats_wsgi_middleware(app)
    return app
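If you do want query-recording middleware back, a generic fix is to make sure the database connection is released whenever a request finishes. A sketch of the idea as a plain WSGI wrapper; close_fn is whatever your stack uses to release connections (in modern Django, for example, django.db.close_old_connections), which here is an assumption:

```python
# Sketch: WSGI middleware that releases DB connections when each request
# finishes, so they don't pile up in Sleep state on the MySQL side.
def connection_closing_middleware(app, close_fn):
    def wrapped(environ, start_response):
        try:
            return app(environ, start_response)
        finally:
            close_fn()  # release the connection instead of leaving it open
    return wrapped
```

Note that for apps returning lazy iterables this runs before iteration completes, so it is a sketch of the pattern rather than a drop-in fix.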

Scheduled backup never run in Plesk 12.0.18

The scheduled backup never runs in Plesk 12.0.18.
The cron file has the right user, group, and permissions:
ll /etc/cron.d/plesk-backup-manager-task
-rw-r--r-- 1 root root 111 Nov 19 15:54 /etc/cron.d/plesk-backup-manager-task
cat /etc/cron.d/plesk-backup-manager-task
10,25,40,55 * * * * root [ -x /opt/psa/admin/sbin/backupmng ] && /opt/psa/admin/sbin/backupmng >/dev/null 2>&1
Here is the configuration on web admin panel:
On database the record seems to be correct.
mysql> select * from psa.BackupsScheduled;
+----+--------+----------+------------+---------------------+--------+--------+-----------+----------+--------+-------+------------+---------+--------------+------------+-------------+------------------------------+
| id | obj_id | obj_type | repository | last | period | active | processed | rotation | prefix | email | split_size | suspend | with_content | backup_day | backup_time | content_type |
+----+--------+----------+------------+---------------------+--------+--------+-----------+----------+--------+-------+------------+---------+--------------+------------+-------------+------------------------------+
| 1 | 6 | domain | ftp | 2016-01-05 19:03:41 | 604800 | true | false | 4 | abc | | 0 | false | true | 2 | 03:00:00 | backup_content_all_at_domain |
+----+--------+----------+------------+---------------------+--------+--------+-----------+----------+--------+-------+------------+---------+--------------+------------+-------------+------------------------------+
1 row in set (0.00 sec)
The BackupsSettings table has the right values too; indeed, backups work properly when I run them manually.
I also checked the log files related to the Plesk backup manager (http://kb.odin.com/it/111283), but I can see only the backups that were executed manually.
Running /opt/psa/admin/sbin/backupmng manually, nothing happens.
That should be normal: the cron job is executed every 15 minutes, so I think /opt/psa/admin/sbin/backupmng reads the tasks from the database and executes a task only if one is scheduled.
Now I don't know whether I should change the cron job to match the task scheduled for 3:00 AM each Tuesday:
0,15,30,45 * * * * root [ -x /opt/psa/admin/sbin/backupmng ] && /opt/psa/admin/sbin/backupmng >/dev/null 2>&1
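For what it's worth, the cron entry should not need to match the backup time: backupmng appears to be a poller that fires pending tasks. Decoding the BackupsScheduled row above (a sketch; the field semantics are my assumption from the values shown) confirms the task itself is weekly:

```python
# Sketch: decode the BackupsScheduled row shown above. Field meanings
# (period in seconds, day number, time of day) are assumptions.
def describe_schedule(period_s, backup_day, backup_time):
    days = period_s / 86400  # 604800 s -> 7 days, i.e. a weekly task
    return f"every {days:g} days, day {backup_day}, at {backup_time}"

print(describe_schedule(604800, 2, "03:00:00"))
```

So the row describes a weekly backup; the question is why the cron-driven poller never picks it up.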

How to set up stunserver and turnserver in icedemo?

I am using the PJSIP project, version 2.3, and I want to use ICE to implement my P2P connection.
So I compiled icedemo.c; the command line is "-s stunserver.org".
But when I run the demo, it does not work.
The dump info looks like this:
11:46:45.343 os_core_win32. pjlib 1.4 for win32 initialized
11:46:45.359 pjlib select() I/O Queue created (00A338E4)
+----------------------------------------------------------------------+
| M E N U |
+---+------------------------------------------------------------------+
| c | create Create the instance |
| d | destroy Destroy the instance |
| i | init o|a Initialize ICE session as offerer or answerer |
| e | stop End/stop ICE session |
| s | show Display local ICE info |
| r | remote Input remote ICE info |
| b | start Begin ICE negotiation |
| x | send <compid> .. Send data to remote |
+---+------------------------------------------------------------------+
| h | help * Help! * |
| q | quit Quit |
+----------------------------------------------------------------------+
Input: c
11:46:49.703 icedemo Creating ICE stream transport with 1 component(s)
11:46:49.734 icedemo Comp 1: srflx candidate starts Binding discovery
11:46:50.000 stuntp00A34390 TX 36 bytes STUN message to 132.177.123.13:3478:
--- begin STUN message ---
STUN Binding request
Hdr: length=16, magic=2112a442, tsx_id=6784482372ae3d6c00015f90
Attributes:
SOFTWARE: length=10, value="pjnath-1.4"
--- end of STUN message ---
11:46:50.000 stuntsx00A3BCF STUN client transaction created
11:46:50.000 stuntsx00A3BCF STUN sending message (transmit count=1)
11:46:50.015 icedemo Comp 1: host candidate 192.168.2.146:7033 added
11:46:50.015 icedemo ICE stream transport created
11:46:50.015 icedemo.c ICE instance successfully created
+----------------------------------------------------------------------+
| M E N U |
+---+------------------------------------------------------------------+
| c | create Create the instance |
| d | destroy Destroy the instance |
| i | init o|a Initialize ICE session as offerer or answerer |
| e | stop End/stop ICE session |
| s | show Display local ICE info |
| r | remote Input remote ICE info |
| b | start Begin ICE negotiation |
| x | send <compid> .. Send data to remote |
+---+------------------------------------------------------------------+
| h | help * Help! * |
| q | quit Quit |
+----------------------------------------------------------------------+
Input: 11:46:50.359 stuntsx00A3BCF STUN sending message (transmit count=2)
11:46:50.562 stuntsx00A3BCF STUN sending message (transmit count=3)
11:46:50.968 stuntsx00A3BCF STUN sending message (transmit count=4)
11:46:51.781 stuntsx00A3BCF STUN sending message (transmit count=5)
11:46:53.390 stuntsx00A3BCF STUN sending message (transmit count=6)
11:46:56.593 stuntsx00A3BCF STUN sending message (transmit count=7)
11:46:58.203 stuntsx00A3BCF STUN timeout waiting for response
11:46:58.203 stuntp00A34390 Session failed because STUN Binding request failed
: STUN transaction has timed out (PJNATH_ESTUNTIMEDOUT)
11:46:58.203 icedemo STUN binding request failed: STUN transaction has
timed out (PJNATH_ESTUNTIMEDOUT)
11:46:58.203 icedemo.c ICE initialization failed: STUN transaction has ti
med out (PJNATH_ESTUNTIMEDOUT)
11:47:00.203 stuntsx00A3BCF STUN client transaction destroyed
I have been having similar issues with my PJSIP-based application. Something seems to be wrong with the STUN server at stunserver.org; it keeps going down from time to time.
Try using one of the STUN servers provided by Google. I have found them to be pretty reliable:
stun.l.google.com:19302
stun1.l.google.com:19302
stun2.l.google.com:19302
stun3.l.google.com:19302
stun4.l.google.com:19302
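To check whether a given STUN server is reachable at all, independent of PJSIP, you can hand-roll the same Binding request the dump shows (message type 0x0001, magic cookie 0x2112A442). A sketch; the commented-out UDP send is optional and uses one of the Google servers listed above:

```python
# Sketch: build the 20-byte STUN Binding request header seen in the
# pjnath dump (RFC 5389 framing: type, length, magic cookie, tsx id).
import os
import struct

def stun_binding_request():
    msg_type = 0x0001        # Binding request
    msg_length = 0           # no attributes
    magic_cookie = 0x2112A442
    tsx_id = os.urandom(12)  # 96-bit transaction ID
    return struct.pack("!HHI", msg_type, msg_length, magic_cookie) + tsx_id

req = stun_binding_request()
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); s.settimeout(3)
# s.sendto(req, ("stun.l.google.com", 19302)); print(s.recv(2048))
```

If the server is up, you get a Binding response back; a timeout here reproduces the PJNATH_ESTUNTIMEDOUT failure without involving ICE at all.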

Data restore procedure failed in OrientDB

Last night I received the following error after inserting ~500k records:
2014-07-03 22:10:50:056 SEVE Internal server error:
java.lang.IllegalArgumentException: Cannot get allocation information
for database 'pumpup' because it is not a disk-based database
[ONetworkProtocolHttpDb]
My OrientDB server.sh froze, so I rebooted my computer. Now when I try to access the database, I get the following output from server.sh:
2014-07-04 13:52:35:331 INFO OrientDB Server v1.7.3 is active. [OServer]
2014-07-04 13:52:38:784 WARN segment file 'database.ocf' was not closed correctly last time [OSingleFileSegment]
2014-07-04 13:52:38:879 WARN Storage pumpup was not closed properly. Will try to restore from write ahead log. [OLocalPaginatedStorage]
2014-07-04 13:52:38:879 INFO Looking for last checkpoint... [OLocalPaginatedStorage]
2014-07-04 13:52:38:879 INFO Checkpoints are absent, the restore will start from the beginning. [OLocalPaginatedStorage]
2014-07-04 13:52:38:880 INFO Data restore procedure is started. [OLocalPaginatedStorage]
2014-07-04 13:53:15:080 INFO Heap memory is low apply batch of operations are read from WAL. [OLocalPaginatedStorage]Exception during storage data restore.
null
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreWALBatch(OLocalPaginatedStorage.java:1842)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFrom(OLocalPaginatedStorage.java:1802)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromBegging(OLocalPaginatedStorage.java:1772)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromWAL(OLocalPaginatedStorage.java:1611)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreIfNeeded(OLocalPaginatedStorage.java:1578)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.open(OLocalPaginatedStorage.java:245)
-> com.orientechnologies.orient.core.db.raw.ODatabaseRaw.open(ODatabaseRaw.java:100)
-> com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.open(ODatabaseRecordAbstract.java:268)
-> com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
-> com.orientechnologies.orient.server.OServer.openDatabase(OServer.java:557)
-> com.orientechnologies.orient.server.network.protocol.http.command.OServerCommandAuthenticatedDbAbstract.authenticate(OServerCommandAuthenticatedDbAbstract.java:126)
-> com.orientechnologies.orient.server.network.protocol.http.command.OServerCommandAuthenticatedDbAbstract.beforeExecute(OServerCommandAuthenticatedDbAbstract.java:87)
-> com.orientechnologies.orient.server.network.protocol.http.command.get.OServerCommandGetConnect.beforeExecute(OServerCommandGetConnect.java:46)
-> com.orientechnologies.orient.server.network.protocol.http.ONetworkProtocolHttpAbstract.service(ONetworkProtocolHttpAbstract.java:173)
-> com.orientechnologies.orient.server.network.protocol.http.ONetworkProtocolHttpAbstract.execute(ONetworkProtocolHttpAbstract.java:572)
-> com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:45)
2014-07-04 13:53:15:082 SEVE Internal server error:
com.orientechnologies.orient.core.exception.OStorageException: Cannot open local storage '/Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup' with mode=rw
--> java.lang.NullPointerException [ONetworkProtocolHttpDb]
When I try to connect every subsequent time, I get the following:
--> com.orientechnologies.common.concur.lock.OLockException: File '/Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup/database.ocf'
is locked by another process, maybe the database is in use by another
process. Use the remote mode with a OrientDB server to allow multiple
access to the same database. [ONetworkProtocolHttpDb]
I can't connect to the database. I'm going to update from 1.7.3 to 1.7.4, recreate the database, and try again. For now, here's some output from dserver.sh as it seems to be trying to perform a data restore procedure:
2014-07-04 14:01:09:168 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] Address[192.168.1.8]:2434 is STARTED [LifecycleService]
2014-07-04 14:01:09:198 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] Initializing cluster partition table first arrangement... [InternalPartitionService]
2014-07-04 14:01:09:212 INFO [node1404496844581] found no previous messages in queue orientdb.node.node1404496844581.response [OHazelcastDistributedMessageService]
2014-07-04 14:01:09:230 WARN [node1404496844581] opening database 'pumpup'... [OHazelcastPlugin]
2014-07-04 14:01:09:231 INFO [node1404496844581] loaded database configuration from disk: /Users/gsquare567/Databases/orientdb-community-1.7.3/config/default-distributed-db-config.json [OHazelcastPlugin]
2014-07-04 14:01:09:238 INFO updated distributed configuration for database: pumpup:
----------
{
"autoDeploy":true,
"hotAlignment":false,
"readQuorum":1,
"writeQuorum":2,
"failureAvailableNodesLessQuorum":false,
"readYourWrites":true,"clusters":{
"internal":{
},
"index":{
},
"*":{
"servers":["<NEW_NODE>"]
}
},
"version":0
}
---------- [OHazelcastPlugin]
2014-07-04 14:01:09:243 INFO updated distributed configuration for database: pumpup:
----------
{
"version":0,
"autoDeploy":true,
"hotAlignment":false,
"readQuorum":1,
"writeQuorum":2,
"failureAvailableNodesLessQuorum":false,
"readYourWrites":true,"clusters":{
"internal":null,
"index":null,
"*":{
"servers":["<NEW_NODE>"]
}
}
}
---------- [OHazelcastPlugin]
2014-07-04 14:01:09:243 INFO Saving distributed configuration file for database 'pumpup' to: /Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup/distributed-config.json [OHazelcastPlugin]
2014-07-04 14:01:09:246 INFO [node1404496844581] adding node 'node1404496844581' in partition: db=pumpup [*] [OHazelcastDistributedDatabase]
2014-07-04 14:01:09:246 INFO updated distributed configuration for database: pumpup:
----------
{
"version":1,
"autoDeploy":true,
"hotAlignment":false,
"readQuorum":1,
"writeQuorum":2,
"failureAvailableNodesLessQuorum":false,
"readYourWrites":true,"clusters":{
"internal":null,
"index":null,
"*":{
"servers":["<NEW_NODE>","node1404496844581"]
}
}
}
---------- [OHazelcastPlugin]
2014-07-04 14:01:09:247 INFO Saving distributed configuration file for database 'pumpup' to: /Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup/distributed-config.json [OHazelcastPlugin]
2014-07-04 14:01:09:247 INFO [node1404496844581] received added status node1404496844581.pumpup=OFFLINE [OHazelcastPlugin]
2014-07-04 14:01:09:249 INFO [node1404496844581] found no previous messages in queue orientdb.node.node1404496844581.pumpup.request [OHazelcastDistributedMessageService]
2014-07-04 14:01:09:288 WARN segment file 'database.ocf' was not closed correctly last time [OSingleFileSegment]
2014-07-04 14:01:09:378 WARN Storage pumpup was not closed properly. Will try to restore from write ahead log. [OLocalPaginatedStorage]
2014-07-04 14:01:09:378 INFO Looking for last checkpoint... [OLocalPaginatedStorage]
2014-07-04 14:01:09:378 INFO Checkpoints are absent, the restore will start from the beginning. [OLocalPaginatedStorage]
2014-07-04 14:01:09:379 INFO Data restore procedure is started. [OLocalPaginatedStorage]
2014-07-04 14:01:35:724 INFO Heap memory is low apply batch of operations are read from WAL. [OLocalPaginatedStorage]Exception during storage data restore.
null
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreWALBatch(OLocalPaginatedStorage.java:1842)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFrom(OLocalPaginatedStorage.java:1802)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromBegging(OLocalPaginatedStorage.java:1772)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromWAL(OLocalPaginatedStorage.java:1611)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreIfNeeded(OLocalPaginatedStorage.java:1578)
-> com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.open(OLocalPaginatedStorage.java:245)
-> com.orientechnologies.orient.core.db.raw.ODatabaseRaw.open(ODatabaseRaw.java:100)
-> com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.open(ODatabaseRecordAbstract.java:268)
-> com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
-> com.orientechnologies.orient.server.OServer.openDatabase(OServer.java:557)
-> com.orientechnologies.orient.server.hazelcast.OHazelcastDistributedDatabase.initDatabaseInstance(OHazelcastDistributedDatabase.java:283)
-> com.orientechnologies.orient.server.hazelcast.OHazelcastDistributedDatabase.setOnline(OHazelcastDistributedDatabase.java:295)
-> com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.loadDistributedDatabases(OHazelcastPlugin.java:742)
-> com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.startup(OHazelcastPlugin.java:194)
-> com.orientechnologies.orient.server.OServer.registerPlugins(OServer.java:720)
-> com.orientechnologies.orient.server.OServer.activate(OServer.java:241)
-> com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:32)Exception in thread "main" com.orientechnologies.orient.core.exception.OStorageException: Cannot open local storage '/Users/gsquare567/Databases/orientdb-community-1.7.3/databases/pumpup' with mode=rw
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.open(OLocalPaginatedStorage.java:251)
at com.orientechnologies.orient.core.db.raw.ODatabaseRaw.open(ODatabaseRaw.java:100)
at com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
at com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.open(ODatabaseRecordAbstract.java:268)
at com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49)
at com.orientechnologies.orient.server.OServer.openDatabase(OServer.java:557)
at com.orientechnologies.orient.server.hazelcast.OHazelcastDistributedDatabase.initDatabaseInstance(OHazelcastDistributedDatabase.java:283)
at com.orientechnologies.orient.server.hazelcast.OHazelcastDistributedDatabase.setOnline(OHazelcastDistributedDatabase.java:295)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.loadDistributedDatabases(OHazelcastPlugin.java:742)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.startup(OHazelcastPlugin.java:194)
at com.orientechnologies.orient.server.OServer.registerPlugins(OServer.java:720)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:241)
at com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:32)
Caused by: java.lang.NullPointerException
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreWALBatch(OLocalPaginatedStorage.java:1842)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFrom(OLocalPaginatedStorage.java:1802)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromBegging(OLocalPaginatedStorage.java:1772)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreFromWAL(OLocalPaginatedStorage.java:1611)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.restoreIfNeeded(OLocalPaginatedStorage.java:1578)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.open(OLocalPaginatedStorage.java:245)
... 12 more
2014-07-04 14:01:39:184 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] memory.used=1.3G, memory.free=155.9M, memory.total=1.4G, memory.max=1.8G, memory.used/total=89.32%, memory.used/max=71.63%, load.process=37.00%, load.system=41.00%, load.systemAverage=170.00%, thread.count=59, thread.peakCount=59, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.operation.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=0, operations.running.size=0, proxy.count=5, clientEndpoint.count=0, connection.active.count=0, connection.count=0 [HealthMonitor]
2014-07-04 14:02:09:195 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] memory.used=1.3G, memory.free=155.2M, memory.total=1.4G, memory.max=1.8G, memory.used/total=89.37%, memory.used/max=71.68%, load.process=0.00%, load.system=4.00%, load.systemAverage=162.00%, thread.count=59, thread.peakCount=59, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.operation.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=0, operations.running.size=0, proxy.count=5, clientEndpoint.count=0, connection.active.count=0, connection.count=0 [HealthMonitor]
2014-07-04 14:02:39:207 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] memory.used=1.3G, memory.free=149.3M, memory.total=1.4G, memory.max=1.8G, memory.used/total=89.77%, memory.used/max=72.00%, load.process=0.00%, load.system=5.00%, load.systemAverage=124.00%, thread.count=61, thread.peakCount=61, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.operation.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=0, operations.running.size=0, proxy.count=5, clientEndpoint.count=0, connection.active.count=0, connection.count=0 [HealthMonitor]
2014-07-04 14:03:09:218 INFO [192.168.1.8]:2434 [orientdb] [3.2.2] memory.used=1.3G, memory.free=149.2M, memory.total=1.4G, memory.max=1.8G, memory.used/total=89.78%, memory.used/max=72.00%, load.process=0.00%, load.system=6.00%, load.systemAverage=151.00%, thread.count=61, thread.peakCount=61, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.operation.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=0, operations.running.size=0, proxy.count=5, clientEndpoint.count=0, connection.active.count=0, connection.count=0 [HealthMonitor]
EDIT
Here is my OrientDB info:
CLUSTERS
----------------------------------------------+-------+---------------------+---------+-----------------+
NAME | ID | TYPE | DATASEG | RECORDS |
----------------------------------------------+-------+---------------------+---------+-----------------+
default | 3 | PHYSICAL | -1 | 0 |
e | 10 | PHYSICAL | -1 | 0 |
index | 1 | PHYSICAL | -1 | 4 |
internal | 0 | PHYSICAL | -1 | 3 |
manindex | 2 | PHYSICAL | -1 | 1 |
ofunction | 7 | PHYSICAL | -1 | 0 |
orids | 6 | PHYSICAL | -1 | 0 |
orole | 4 | PHYSICAL | -1 | 3 |
oschedule | 8 | PHYSICAL | -1 | 0 |
ouser | 5 | PHYSICAL | -1 | 3 |
post | 12 | PHYSICAL | -1 | 1312295 |
user | 11 | PHYSICAL | -1 | 205795 |
v | 9 | PHYSICAL | -1 | 0 |
----------------------------------------------+-------+---------------------+---------+-----------------+
TOTAL = 13 | | 1518104 |
----------------------------------------------------------------------------+---------+-----------------+
CLASSES
----------------------------------------------+------------------------------------+------------+----------------+
NAME | SUPERCLASS | CLUSTERS | RECORDS |
----------------------------------------------+------------------------------------+------------+----------------+
E | | 10 | 0 |
OFunction | | 7 | 0 |
OIdentity | | - | 0 |
ORestricted | | - | 0 |
ORIDs | | 6 | 0 |
ORole | OIdentity | 4 | 3 |
OSchedule | | 8 | 0 |
OTriggered | | - | 0 |
OUser | OIdentity | 5 | 3 |
ParseObject | | - | 0 |
Post | ParseObject | 12 | 1312295 |
User | ParseObject | 11 | 205795 |
V | | 9 | 0 |
----------------------------------------------+------------------------------------+------------+----------------+
TOTAL = 13 1518096 |
----------------------------------------------+------------------------------------+------------+----------------+
INDEXES
----------------------------------------------+------------+-----------------------+----------------+------------+
NAME | TYPE | CLASS | FIELDS | RECORDS |
----------------------------------------------+------------+-----------------------+----------------+------------+
dictionary | DICTIONARY | | | 0 |
ORole.name | UNIQUE | ORole | name | 3 |
OUser.name | UNIQUE | OUser | name | 3 |
Post.objectId | UNIQUE_... | Post | objectId | 1312295 |
User.objectId | UNIQUE_... | User | objectId | 205795 |
----------------------------------------------+------------+-----------------------+----------------+------------+
TOTAL = 5 1518096 |
-----------------------------------------------------------------------------------------------------------------+
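Given the "Heap memory is low" warning immediately before the NullPointerException, one thing worth trying before recreating the database is to rerun the WAL restore with a larger JVM heap. A sketch, assuming a server.sh that honors ORIENTDB_OPTS_MEMORY (if yours does not, edit its -Xmx flag directly; the 4g value is just an example):

```shell
# Example only: give the server more heap before retrying the WAL restore.
export ORIENTDB_OPTS_MEMORY="-Xms512m -Xmx4g"
./server.sh
```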
