wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
When I run the above command to import the MongoDB public GPG key, I get the error below.
The MongoDB documentation lists steps like these for installation. How do I install the latest version of MongoDB through a PPA?
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
gpg: invalid key resource URL '/tmp/apt-key-gpghome.y7PLc7iFDy/home:manuelschneid3r.asc.gpg'
gpg: keyblock resource '(null)': General error
gpg: key 7721F63BD38B4796: 2 signatures not checked due to missing keys
gpg: key 8C718D3B5072E1F5: 75 signatures not checked due to missing keys
gpg: key 7721F63BD38B4796: 2 signatures not checked due to missing keys
gpg: key 1488EB46E192A257: 1 signature not checked due to a missing key
gpg: key 1488EB46E192A257: 1 signature not checked due to a missing key
gpg: key D94AA3F0EFE21092: 3 signatures not checked due to missing keys
gpg: key 871920D1991BC93C: 1 signature not checked due to a missing key
gpg: Total number processed: 17
gpg: skipped new keys: 17
Could you please try the following? Two people had a similar issue when installing MongoDB, and they solved it by removing the GPG key:
sudo rm /etc/apt/trusted.gpg.d/home:manuelschneid3r.gpg
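Since apt-key itself is deprecated (the warning above says so), you could also switch to a dedicated keyring file. A minimal sketch, assuming MongoDB 5.0 on Ubuntu focal; the codename and repo line are assumptions, so adjust them to your release per the MongoDB docs:
wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | \
  sudo gpg --dearmor -o /usr/share/keyrings/mongodb-server-5.0.gpg
# point apt at the keyring explicitly via signed-by instead of apt-key
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-5.0.gpg ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
sudo apt-get update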
source: askubuntu.com
I'm trying to create a UUID id column in a table with PostgreSQL. I tried:
id uuid PRIMARY KEY DEFAULT uuid_generate_v4()
But I get:
ERROR: function uuid_generate_v4() does not exist
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I tried adding the schema, like id uuid PRIMARY KEY DEFAULT public.uuid_generate_v4() (as seen in a comment here).
I also checked whether the extension is there (SELECT * FROM pg_available_extensions;), and yes, it is installed in the PostgreSQL database.
I read that if Postgres is running in --single mode this may not work, but I don't know how to test that or whether there is a way around it.
Does anybody know how I can resolve this problem? Or is there another option?
Is it a good idea to use something like this instead:
SET DEFAULT uuid_in(md5(random()::text || now()::text)::cstring);
Because the function uuid_generate_v4 is not found, the extension uuid-ossp is probably not loaded.
pg_available_extensions lists the extensions available, but not necessarily the ones loaded.
To see the list of loaded extensions, query the view pg_extension like so:
select * from pg_extension;
To load the uuid-ossp extension run the following:
CREATE EXTENSION "uuid-ossp";
Note: this requires superuser privileges.
After the uuid-ossp extension is successfully loaded, you should see it in the pg_extension view, and the function uuid_generate_v4 should be available.
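Put together, a minimal sketch of the whole check-and-load sequence from the shell; the database name mydb and the postgres user are illustrative:
psql -U postgres -d mydb -c 'SELECT extname FROM pg_extension;'            # list loaded extensions
psql -U postgres -d mydb -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'  # load it (superuser)
psql -U postgres -d mydb -c 'SELECT uuid_generate_v4();'                   # should now return a UUID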
In my case I needed to schema-qualify the function call, like this: app.uuid_generate_v4()
instead of this: uuid_generate_v4()
I found the schema of each extension by running this query:
SELECT
    pge.extname,
    pge.extversion,
    pn.nspname AS schema
FROM pg_extension pge
JOIN pg_catalog.pg_namespace pn ON pge.extnamespace = pn.oid;
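Alternatively, instead of qualifying every call, you can put the extension's schema on the search path. A hedged sketch, assuming the extension lives in a schema named app and a database named mydb (both names are illustrative):
psql -d mydb -c "ALTER DATABASE mydb SET search_path TO public, app;"  # new sessions then find uuid_generate_v4() unqualified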
My DSE version is 4.7.3.
I got error "Corrupt sstable /var/lib/cassandra/data/solr_admin/solr_resources-a31c76040e40393b82d7ba3d910ad50a/solr_admin-solr_resources-ka-9808=[TOC.txt, Index.db, Digest.sha1, Filter.db, CompressionInfo.db, Statistics.db, Data.db]; skipping table"
Because of this I am getting timeout errors while inserting records. After restarting the node the issue is temporarily fixed, but after some hours I get the timeout error again when inserting records.
Kindly help me fix the issue.
You can get this if the server is being killed and not allowed to shut down cleanly. Caused by https://issues.apache.org/jira/browse/CASSANDRA-10501. I would recommend updating to 4.8.11 or 5.0.4 (or later) to rule that out.
Follow the steps below (a command-level sketch follows the list):
1) Try to rebuild the sstable on the node using "nodetool scrub"
http://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsScrub.html
If the issue is still not solved, follow these steps:
2) Shutdown the dse node.
3) Scrub the sstable using "sstablescrub [options]"
http://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsSSTableScrub_t.html
4) Remove the corrupt SSTable
5) Start the DSE service on the node
6) Repair using "nodetool repair"
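A hedged sketch of those steps as shell commands; the keyspace and table names come from the error above, while the service name is an assumption, so adjust for your install:
nodetool scrub solr_admin solr_resources   # step 1: online scrub
# if the issue persists, take the node down and scrub offline:
nodetool drain                             # flush memtables before stopping
sudo service dse stop                      # step 2
sstablescrub solr_admin solr_resources     # step 3: offline scrub
# step 4: remove any sstable files that scrub reports as unrecoverable
sudo service dse start                     # step 5
nodetool repair solr_admin                 # step 6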
So I am configuring MySQL to run multiple instances.
When I try to initialize the data directories I get these errors.
# mysql_install_db --user=mysql -- datadir=/var/lib/mysql/mysql1
[WARNING] mysql_install_db is deprecated. Please consider switching to mysqld --initialize
[ERROR] The data directory '/var/lib/mysql/mysql1' already exist and is not empty.
# mysql_install_db --user=mysql -- datadir=/var/lib/mysql/mysql2
[WARNING] mysql_install_db is deprecated. Please consider switching to mysqld --initialize
[WARNING] The bootstrap log isn't empty:
[WARNING] 2015-12-14T15:15:33.485673Z 0 [Warning] -- bootstrap is deprecated. Please consider using --initialize instead
[Warning] Changed limits: max_open_files: 1024 (requested 5000)
[Warning] Changed limits: table_open_cache: 431 (requested 2000)
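Two things stand out: there is a stray space in "-- datadir=" above (the usual spelling is "--datadir="), and the warning itself suggests the replacement. A minimal sketch using mysqld --initialize, with the paths from the question:
# each datadir must be empty or nonexistent before --initialize
sudo mysqld --initialize --user=mysql --datadir=/var/lib/mysql/mysql1
sudo mysqld --initialize --user=mysql --datadir=/var/lib/mysql/mysql2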
I get this error message when I try to index files inside one directory. The steps to reproduce this error are the following:
$ bin/solr stop -all
$ bin/solr start -e cloud
How many solr nodes ...? 2
Please enter the port for node1 [8983] 8983
Please enter the port for node2 [7574] 7574
Please provide a name for your new collection [gettingstarted] test
How many shards would you like to split test into [2] 2
How many replicas per shard .... [2] 2
Please choose a configuration for test collection .... basic_configs
Then, if I go to localhost:8983/solr/#/ under the Core Admin tab, I see two shards of the new collection test, which we have just created. I then want to index one of my folders and associate that index with the test collection. I do it like this:
bin/post -c test ~/Projects/
As a result, I see the files being indexed, but among all this information I see a lot of warnings like these:
SimplePostTool: WARNING: Response Solr returned an error #400 (Bad request) for url
http://localhost:8983/solr/test/update
....
SimplePostTool: WARNING: IOException while reading response: java.io.IOException;
Server returned HTTP response code: 400 for URL http://localhost:8983/solr/test/update
What am I doing wrong?
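(Not a confirmed diagnosis, but one thing worth checking: 400 responses from /update often come from files the handler cannot parse. bin/post can restrict what is sent with its -filetypes flag; the extension list here is illustrative:)
bin/post -c test -filetypes pdf,html,txt ~/Projects/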
We introduced a new model to our Datastore a few days ago. Surprisingly, I still get index warnings:
W 2014-02-09 03:38:28.480
suspended generator run_to_queue(query.py:938) raised NeedIndexError(no matching index found.
The suggested index for this query is:
- kind: FeelTrackerRecord
ancestor: yes
properties:
- name: timestamp)
W 2014-02-09 03:38:28.480
suspended generator helper(context.py:814) raised NeedIndexError(no matching index found.
The suggested index for this query is:
- kind: FeelTrackerRecord
ancestor: yes
properties:
- name: timestamp)
even though the index is served under Datastore Indexes:
indexes:
# AUTOGENERATED
# This index.yaml is automatically updated whenever the dev_appserver
# detects that a new type of query is run. If you want to manage the
# index.yaml file manually, remove the above marker line (the line
# saying "# AUTOGENERATED"). If you want to manage some indexes
# manually, move them above the marker line. The index.yaml file is
# automatically uploaded to the admin console when you next deploy
# your application using appcfg.py.
- kind: FeelTrackerRecord
ancestor: yes
properties:
- name: record_date
- name: timestamp
What am I missing please?
I finally found the problem.
The best way to solve this is to make sure the local index.yaml is empty (delete all the indices). Then simply run your GAE app on localhost and access your app as you normally would.
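A sketch of the emptied file, keeping only the marker line from the file shown above so the dev server can repopulate it:
indexes:
# AUTOGENERATED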
HTTP access is pretty straightforward in a browser, and if GET/POST over REST is required you can use curl from a terminal:
GET:
curl --user test@gmail.com:test123 http://localhost:8080/api/v1.0/records/1391944029
POST:
curl --user test@gmail.com:test123 http://localhost:8080/api/v1.0/records/1391944029 -d '{"records": [
{
"notes": "update",
"record_date": "2014-02-02",
"timestamp": 1391944929
}
], "server_sync_timestamp": null}' -X POST -v -H "Accept: application/json" -H "Content-type: application/json"
GAE will now update index.yaml automatically and add the correct indices.
After deploying your app, you need to clean up the old indices.
This is done through a terminal:
appcfg.py vacuum_indexes src
After logging in with your credentials, it will ask you about the old indices that are no longer in your index.yaml and whether they should be deleted. Press y and continue.
As I mentioned in a comment, your indexes don't match what is required. The error says:
raised NeedIndexError(no matching index found.
The suggested index for this query is:
- kind: FeelTrackerRecord
ancestor: yes
properties:
- name: timestamp)
However, the index you list is different:
- kind: FeelTrackerRecord
ancestor: yes
properties:
- name: record_date
- name: timestamp
Do you see the difference?
Just add the suggested index and update your indexes, as sketched below.
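A sketch of the entry to add to index.yaml, taken directly from the suggested index in the error message:
- kind: FeelTrackerRecord
  ancestor: yes
  properties:
  - name: timestamp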