I'm following the Heroku Django tutorial. I believe I followed it exactly. I ran no additional commands besides what they asked for.
However, when I get to the part where I sync the Celery and Kombu tables (under the "Running a Worker" section), I hit an error.
Typing in their command, python hellodjango/manage.py syncdb, gives me the following:
...
File "/Users/Alex/Coding/getcelery/venv/lib/python2.7/site-packages/django/db/backends/dummy/base.py", line 15, in complain
raise ImproperlyConfigured("You haven't set the database ENGINE setting yet.")
django.core.exceptions.ImproperlyConfigured: You haven't set the database ENGINE setting yet.
Anybody run into this problem before? Should I be doing something that's not explicit in the tutorial?
Any hints would be greatly appreciated!
Your output is from running syncdb locally. Enabling the database add-on sets DATABASE_URL in your app's config, and hence in the environment of the dynos (see heroku config). What it won't do is set DATABASE_URL locally; you'll need to do that yourself (or set up some other local database).
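For example, a locally set DATABASE_URL might look something like this (the credentials and database name are illustrative, not from the tutorial):

export DATABASE_URL=postgres://myuser:mypass@localhost:5432/hellodjango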
It's likely because your DATABASES dictionary is undefined. Try adding the code below, which reads your database configuration from the DATABASE_URL environment variable; the Celery tables can then be set up from it:
import os
import sys
import urlparse

# Register database schemes in URLs.
urlparse.uses_netloc.append('postgres')
urlparse.uses_netloc.append('mysql')

try:
    # Check to make sure DATABASES is set in settings.py file.
    # If not default to {}
    if 'DATABASES' not in locals():
        DATABASES = {}

    if 'DATABASE_URL' in os.environ:
        url = urlparse.urlparse(os.environ['DATABASE_URL'])

        # Ensure default database exists.
        DATABASES['default'] = DATABASES.get('default', {})

        # Update with environment configuration.
        DATABASES['default'].update({
            'NAME': url.path[1:],
            'USER': url.username,
            'PASSWORD': url.password,
            'HOST': url.hostname,
            'PORT': url.port,
        })

        if url.scheme == 'postgres':
            DATABASES['default']['ENGINE'] = 'django.db.backends.postgresql_psycopg2'

        if url.scheme == 'mysql':
            DATABASES['default']['ENGINE'] = 'django.db.backends.mysql'
except Exception:
    print 'Unexpected error:', sys.exc_info()
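To illustrate what the snippet above does, here is roughly how a local Postgres URL would be parsed (the URL is made up; Python 2 syntax to match the code above):

import os
import urlparse

# Register the postgres scheme so urlparse splits the netloc correctly.
urlparse.uses_netloc.append('postgres')

# Illustrative value only; substitute your own credentials and database name.
os.environ['DATABASE_URL'] = 'postgres://myuser:mypass@localhost:5432/hellodjango'

url = urlparse.urlparse(os.environ['DATABASE_URL'])
print url.scheme    # postgres    -> selects the ENGINE
print url.path[1:]  # hellodjango -> NAME
print url.username  # myuser      -> USER
print url.hostname  # localhost   -> HOST
print url.port      # 5432        -> PORT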
I have followed the various tutorials from philly and others to set up Django with django-mssql-backend, but have had no luck. I think the connection is working, but when it tries to introspect the tables I get a collation error that I cannot get past. The specs of what I'm running are as follows:
django-mssql-backend: 2.8.1
django: 3.2
pyodbc: 4.0.30
'server': {
    'ENGINE': 'sql_server.pyodbc',
    'NAME': 'database',
    'USER': 'username',
    'PASSWORD': 'password',
    'HOST': 'hostname of server',
    'PORT': '',
    'OPTIONS': {
        'driver': 'ODBC Driver 17 for SQL Server',
        'unicode_results': True,
    },
}
When I attempt to run the migration class creator, or:
python manage.py inspectdb --database=server
I get the following error in the output:
# This is an auto-generated Django model module.
# You'll have to do the following manually to clean this up:
# * Rearrange models' order
# * Make sure each model has one field with primary_key=True
# * Make sure each ForeignKey and OneToOneField has `on_delete` set to the desired behavior
# * Remove `managed = False` lines if you wish to allow Django to create, modify, and delete the table
# Feel free to rename the models, but don't rename db_table values or field names.
from django.db import models
# Unable to inspect table 'ADObjectMemberships'
# The error was: __new__() missing 1 required positional argument: 'collation'
# Unable to inspect table 'ADObjects'
# The error was: __new__() missing 1 required positional argument: 'collation'
I am sure this is possible, because I spun up a different venv with Django 1.8 and the old django-pyodbc-azure module installed, and it connects to the tables and pulls their information. The biggest problem I have with that setup is that it stops with ~15 tables left in the DB and throws a memory error no matter what I do.
Any thoughts or help on the issue are greatly appreciated!
I confirmed through independent testing that the issue does appear to be a bug in version 3.2 and will file a bug report. However, I did come up with a workaround for now:
Create new temporary virtual environment
Install django==3.0 pyodbc==4.0 django-mssql-backend==1.8
Create the database entry for the SQL Server in settings.py
Run python manage.py inspectdb --database=yourentry > yourentry.py
Once you have created all the models for the existing database you want to use in your website, you can grab each of the yourentry.py files, copy them into a submodel folder, and import them into the main models.py file (see the sketch below).
When the virtual environment is no longer needed it can be deleted.
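For example, pulling one of the generated files into your project might look something like this (the app, folder, and file names are made up for illustration):

# myapp/models.py
# Re-export the models that inspectdb generated into a submodule, so the
# rest of the project can keep importing them from myapp.models.
from myapp.inspected.yourentry import *  # noqa: F401,F403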
I've been holding off posting here because I feel like this issue could be too vague. I will try my best to explain. I have been through all of the existing questions but they don't seem relevant to what I am doing.
Basically, in my new role I have inherited 3 EC2 instances that are the Dev / Staging / Live web applications. I use Ansible playbooks to migrate the database between all environments. We recently had a new website deployed onto all three existing instances.
The Dev box recently died, so I blew it away and launched a new one. The website looks fine; however, exporting and importing the database no longer works (on the new instance).
Below is the Ansible output:
TASK: [Export database to migrate] ********************************************
failed: [172.**.**.***] => {"changed": true, "cmd": "wp db export dbv2.sql --tables=t*******0_links,t*******0_options,t*******0_postmeta,t*******0_posts,taxlt4ws0_rg_form,taxlt4ws0_rg_form_meta,taxlt4ws0_rg_form_view,t*******0_term_relationships,t*******0_term_taxonomy,t*******0_termmeta,t*******0_terms,t*******0_usermeta,t*******0_users", "delta": "0:00:00.001594", "end": "2017-09-01 10:21:25.225355", "rc": 127, "start": "2017-09-01 10:21:25.223761", "warnings": []}
stderr: /bin/sh: 1: wp: not found
FATAL: all hosts have already failed -- aborting
Things I've checked:
Permissions (chmod) on the folders it imports/exports to/from.
IAM Role is set
Used Shell instead of Command in the Playbook
Configs for each environment
I'm really stumped; my Ansible knowledge is quite limited, as I only picked it up a couple of months ago and hadn't run into any issues (even with a new website) until the Dev box had to be replaced.
I think Ansible is referring to WP-CLI; it is not able to find its executable.
If this is the case, you need to install it with another task before that one.
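A minimal sketch of such a task, assuming you fetch the standard WP-CLI Phar build and that /usr/local/bin is on the PATH of the user running the export (adjust names and paths to your playbook's conventions):

# Hypothetical task; place it before "Export database to migrate".
- name: Install WP-CLI so the wp command is available
  get_url:
    url: https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
    dest: /usr/local/bin/wp
    mode: '0755'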
Basically, what this is complaining about is that whatever script your "Export database" task is running cannot find a wp script or executable.
stderr: /bin/sh: 1: wp: not found
I would recommend checking which wp, or maybe doing a find, on the staging or live instances to see what it is, and then installing/copying it over to the Dev instance.
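For example, on the staging or live box something like this should show where wp lives (output paths will vary):

which wp
find / -name wp -type f 2>/dev/null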
You can test this hypothesis by using a small test script:
#!/bin/sh
wp
Create this script, say test.sh, give it executable permissions, and run it on all the environments to see where it fails.
After configuring my DSpace server, it's working correctly, but when I look at the OAI identify page (http://repositorio.puce.edu.ec/oai/request?verb=Identify) so we can be harvested, it says that the repository is localhost instead of my URL. I investigated and found out that to update this I have to run the command dspace/bin/dspace oai import -c, but when I run that command it gives me the following error: Solr server (http://repositorio.puce.edu.ec/solr/oai) is down, turn it on.
I can see the Solr admin UI (it can't be reached from the outside for security reasons), so I don't know what should be turned on or how to do it.
Thanks for the help.
I encountered this error in the past.
Looking at my oai.cfg file, I used localhost for some settings and my public URL for others.
solr.url=http://localhost/solr/oai
# OAI persistent identifier prefix.
# Format - oai:PREFIX:HANDLE
identifier.prefix = repository.library.georgetown.edu
# Base url for bitstreams
bitstream.baseUrl = https://repository.library.georgetown.edu
If you need to make a config change, be sure to clear the OAI cache after restarting the service.
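Applied to the repository in the question, the corresponding oai.cfg values might look something like this (assuming Solr runs on the same machine as DSpace; the exact localhost port depends on your setup):

solr.url = http://localhost/solr/oai
identifier.prefix = repositorio.puce.edu.ec
bitstream.baseUrl = http://repositorio.puce.edu.ec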
I'm new to Heroku, so I tried following the instructions literally and I'm lost with this error.
So, when I included the settings.py configuration described in "Getting Started with Django on Heroku", I can't run the local server anymore unless I comment out South from my INSTALLED_APPS.
This is the error:
from south.db import DEFAULT_DB_ALIAS
File "/home/alejandro/Proyectos/olenv/local/lib/python2.7/site-packages/south/db/__init__.py", line 83, in <module>
db = dbs[DEFAULT_DB_ALIAS]
KeyError: 'default'
These are the relevant settings in settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbolib',
        'USER': 'alejandro',
        'PASSWORD': 'zzzzz'
    }
}

# --- HEROKU --- #
# Parse database configuration from $DATABASE_URL
import dj_database_url
DATABASES['default'] = dj_database_url.config()
Maybe it is supposed to be this way, since South is a tool for production and therefore conflicts with Heroku. Maybe not; since I'm new to Heroku, any clarification will help.
Edit
When I comment out South and try to run syncdb, I get this error:
File "/home/alejandro/Proyectos/olenv/local/lib/python2.7/site-packages/django/db/backends/dummy/base.py", line 15, in complain
raise ImproperlyConfigured("settings.DATABASES is improperly configured. "
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.
requirements.txt:
Django==1.6
South==0.8.4
argparse==1.2.1
dj-database-url==0.3.0
dj-static==0.0.5
django-crispy-forms==1.4.0
django-debug-toolbar==1.2
django-endless-pagination==2.0
django-extensions==1.3.5
django-toolbelt==0.0.1
gunicorn==18.0
psycopg2==2.5.2
pystache==0.5.4
six==1.6.1
sqlparse==0.1.11
static==1.0.2
wsgiref==0.1.2
When you comment out South in your INSTALLED_APPS, does it let you run python manage.py syncdb locally?
I think the problem is that you're overwriting your default database with your dj_database_url lines in your settings.py. When you're on Heroku, there's an environment variable named DATABASE_URL, which is what dj_database_url uses to config your database. However, if you don't have a similar environment variable locally, you won't be configuring your database correctly. The reason South is throwing an error is because it tries to connect to your database upon initialization (note how the error comes from South's __init__). If you tried to actually use your database locally after commenting out South, I'm guessing you'd get an error.
There are two ways to fix this. The first, and easiest, is to set a DATABASE_URL variable on your local machine. Given the settings you listed above, set your DATABASE_URL to postgres://alejandro:zzzzz@localhost/dbolib. The second, and more difficult, is to add something to your settings that essentially checks whether you're on Heroku and, if not, skips the dj_database_url configuration. Basically, nest your dj_database_url lines inside an if statement that is True on Heroku and False if not.
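A minimal sketch of that second approach, assuming you create a custom config var (ON_HEROKU here is just an illustrative name you would set yourself with heroku config:set ON_HEROKU=1):

import os

# Only let dj_database_url overwrite the default database when running on
# Heroku; locally, the DATABASES dict defined above is left untouched.
if os.environ.get('ON_HEROKU'):
    import dj_database_url
    DATABASES['default'] = dj_database_url.config()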
EDIT:
It gets a little messy, but if you want to keep it all in one file, you could do something like the following:
First, try to set your default database using dj_database_url. This means moving the following code ABOVE the DATABASES setting in your settings:
import dj_database_url
DATABASES = {}
DATABASES['default'] = dj_database_url.config()
This code sets your default database equal to the result from dj_database_url. However, there may not be a result. You'll want to test for this (which would indicate local development) and set a database accordingly. So, AFTER THE PREVIOUS CODE, add the following code:
if len(DATABASES['default']) == 0:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'dbolib',
            'USER': 'alejandro',
            'PASSWORD': 'zzzzz'
        }
    }
This checks to see if you have anything in your DATABASES['default'] and, if not, sets your default database using your local settings.
This is a little messy, but it'll work. The better option may be to use different settings files, which adds another layer of complexity. Truly, the best option is to set your DATABASE_URL environment variable on all of your machines so that dj_database_url works as it should.
I needed to test some changes on my local dev server before pushing to production. Doing so required having the full dataset on my local machine.
A colleague directed me to:
https://developers.google.com/appengine/docs/python/tools/uploadingdata?csw=1
I downloaded the data using an administrator's username and password, but unfortunately, I was unable to upload the data to my localhost "dev" app engine server.
I ran this command from the command line:
appcfg.py upload_data --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./
Where:
9080 was my app port on my localhost copy of the app
I was running this command from my app directory
I had the downloaded data stored in the relative directory
../data/data1.dat
Received this error:
raise _ToDatastoreError(err)
google.appengine.api.datastore_errors.BadRequestError: app "dev~appname" cannot access app "appname"'s data
UPDATE: It seems that the answer was as simple as adding the following to my upload_data call:
--application="dev~appname"
Thanks @DavidBennett.
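For reference, combining that flag with the original call looks something like this (same port and filename as above):

appcfg.py upload_data --application="dev~appname" --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./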
ORIGINAL ANSWER: (which also works)
After a ton of searching on SO and code.google.com, the solution I found that worked was a comment on this question:
devappserver2, remote_api, and --default_partition
I used my original command as described in the question:
appcfg.py upload_data --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./
The username and password I entered when prompted were my app's username (in my case, my email) and the corresponding password. (If that doesn't work, you might want to try blank or test@example.com, based on other comments I've read, but I have not tested that theory.)
I also restarted my App Engine server with the following flag (don't forget to remove the flag the next time you restart the server). You might want to try without this flag, since I can't confirm that it affects anything; I'm including it here because it was a setting that I used:
--clear_datastore=yes
The commenter recommends deleting "dev~" in your local server code, on line 84 of this file:
google/appengine/tools/devappserver2/application_configuration.py, line 84
Where:
that base directory 'google' is located inside of:
/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/
assuming your GoogleAppEngineLauncher.app directory is in your Applications directory on your Mac
IMPORTANT: Restart your local app engine server for the changes to take effect.