I'm trying to get my head around the proper design of my resources in the Azure "universe".
I have done the following as pre-reqs for future deployments:
+ Created a resource group for SQL Server (let's call it RG-dev-SQL)
+ Created a SQL server
At the moment I have built deployment templates that kick off the following:
+ Creates resource group RG-webapp-dev-someappName
+ Creates App Service plans (1 Basic / 1 Shared): AppSp-someappname-B1 | AppSp-someappname-B1
+ Creates a web app called webapp-dev-someappname
+ Uses one of the App Service plans created above for the new web app
+ Performs the deployment
This works. However, my question is whether this is the way to go: using a resource group per application that I deploy, and so repeating the process above for App1...App33, for example?
I'm interested in how other people approach this.
Thanks!
That's totally up to you.
The advantage of having related entities in the same resource group is the guarantee that all of those entities run in the same region. So if your web site uses the SQL server you've created and you want response time to be minimal, it's good to use the same resource group.
I'd say that if you plan to use multiple services and combine their functionality, it's better to keep them in the same resource group.
Even if you later decide to split your resources among different resource groups, you can still do so.
Here's a more detailed article about it:
https://azure.microsoft.com/en-us/documentation/articles/resource-group-move-resources/
I've read a pile of other related questions... nothing really seems to answer the question I have.
My application will integrate with several different third party sites (eBay, PayPal, Google, Amazon...). It is a product management system and it pushes products all over the place...
Of course, since it interacts with all these sites, it needs usernames, passwords, tokens, etc. Now I don't think it's really a good idea to store these things raw, but I still need to be able to get them raw, so I can embed them in the XML I send, or in the HTTP header.
Does anyone have a suggestion on how to store the info? Is there a Rails gem for this?
Storing credentials in server environment variables is the best practice for DB credentials, third-party credentials, etc., according to the Twelve-Factor App methodology. How to set them depends on what you are using and how you have it set up. This keeps creds out of source control, out of the database, and local to the server environment. To access an environment variable, you can use ENV, e.g.:
ENV['something']
Concerns about limitations and security:
For those storing thousands or more passwords/credentials in env vars, here are some things to help you decide whether or not to use them, in terms of feasibility and security:
Suppose the OS user running the web application or service has read access to the Rails application root directory and its subdirectories, and therefore to a credentials/secrets file at a well-known (relative or absolute) path. If a developer accidentally writes a service that uses a request param as part of the pathname of a file that is read and returned to the client, a user of the application could potentially dump all of your creds remotely. Putting those creds somewhere much less accessible to the OS user running the application, at a pathname that is not easily guessable, reduces the risk of that exploit being used successfully to dump them.
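To make that scenario concrete, here is a minimal sketch of the vulnerable pattern (written in Python purely for illustration; the parameter and directory names are hypothetical):
import os

def render_report(template_param):
    # BUG: a request parameter is used directly as part of a pathname, so a
    # value like "../../config/secrets.yml" walks out of the reports directory
    # and the secrets file's contents are returned to the client.
    with open(os.path.join("app/reports", template_param)) as fh:
        return fh.read()

# An attacker-controlled request could effectively call:
# render_report("../../config/secrets.yml")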
You should also do what you can to make it harder to use those credentials outside of the server environment. That way, even if an attacker dumps all the credentials via an app/service exploit, they are worth much less if they cannot be used outside that environment.
The limit on how much can be stored in env variables is likely higher than you might suppose. For example, on macOS with RVM loaded (which uses a fair amount of environment space for bash functions, etc.), I was able to store 4278 credentials of 53 characters each (e.g. bcrypt hashes):
test.sh
#!/bin/bash
set -ev
for i in `seq 1 4278`;
do
export CRED$i='...........................................'
done
ruby -e 'puts "#{ENV.size} env vars in Ruby. First cred=#{ENV["CRED1"]}"'
output:
$ time ./test.sh
for i in `seq 1 4278`;
do
export CRED$i='...........................................'
done
seq 1 4278
ruby -e 'puts "#{ENV.size} env vars in Ruby. First cred=#{ENV["CRED1"]}"'
4319 env vars in Ruby. First cred=...........................................
real 0m0.342s
user 0m0.297s
sys 0m0.019s
When I exceeded that, I got ruby: Argument list too long.
If you were to have a service in your app that could spit out any environment variable value, then you'd obviously NOT want to store creds in env vars, as it would be less secure. But in my experience I've never encountered a development situation where ENV was exposed intentionally, except for something like a Java administrative console that might spit out all system properties and env vars.
If you store creds in the DB, you're at more of a risk, since SQL injection exploits are typically much more common. This is one reason why usually only password hashes are stored in the DB, and not encrypted creds for other services.
If an attacker logs into the server itself and has access to the environment of the user running the web app/service or can find and read files containing the creds, you are out of luck.
I'd like my Play app to use different databases for test, local and production (production is Heroku) environments.
In application.conf I have:
db.default.driver=org.postgresql.Driver
%dev.db.default.url="jdbc:postgresql://localhost/foobar"
%test.db.default.url="jdbc:postgresql://localhost/foobar-test"
%prod.db.default.url=${DATABASE_URL}
This doesn't seem to work. When I run play test or play run,
all DB access fails with:
Configuration error [Missing configuration [db.default.url]] (Configuration.scala:258)
I have a few questions about this:
In general, I'm a little confused about how databases are configured in Play: it looks like there are plain db, db.[DBNAME] and db.[DBNAME].url, and different tutorials make different choices among those. Certain expressions that seem like they should work (e.g. db.default.url = "jdbc:...") fail with an error that a string was provided where an object was expected.
I've seen other people suggest that I create separate prod.conf, dev.conf and test.conf files that each include application.conf and then contain DB-specific configuration. But in that case, how do I specify what database to use when I run test from the Play console?
Is the %env syntax supposed to work in Play 2?
What's the correct way to specify an environment for play test to use?
In Play 2 there aren't different config environments. Instead you just set or override the config parameters in the conf/application.conf file. One way to do it is on the play command line, like:
play -Ddb.default.driver=org.postgresql.Driver -Ddb.default.url=$DATABASE_URL ~run
You can also tell Play to use a different config file:
play -Dconfig.file=conf/prod.conf ~run
For an example Procfile for Heroku, see:
https://github.com/jamesward/play2bars/blob/scala-anorm/Procfile
More details in the Play Docs:
http://www.playframework.org/documentation/2.0/Configuration
At least in Play 2.1.1 it is possible to override configuration values with environment variables, if they are set. (For details see: http://www.playframework.com/documentation/2.1.1/ProductionConfiguration)
So you can set the following in your conf/application.conf:
db.default.url="jdbc:mysql://localhost:3306/my-db-name"
db.default.url=${?DATABASE_URL_DB}
By default it will use the JDBC URL defined, unless the environment variable DATABASE_URL_DB provides a value for it.
So you just set your development database in the configuration, and for production or staging you define the environment variable.
But beware, this substitution does NOT WORK if you put your variable reference inside quoted strings:
db.default.url="jdbc:${?DATABASE_URL_DB}"
Instead, just unquote the section to be substituted, for example:
database_host = "localhost"
database_host = ${?ENV_DATABASE_HOST}
db.default.url="jdbc:mysql://"${?database_host}":3306/my-db-name"
In this example, localhost will be used by default if the environment variable ENV_DATABASE_HOST is not set. (For details see: https://www.playframework.com/documentation/2.5.x/ConfigFile#substitutions)
You can actually still use the Play 1.0 config value naming convention in Play 2: when you load config values, check Play.isTest and, if so, prefix the properties you load with 'test.'. Here's a snippet:
def configPrefix = if (play.api.Play.isTest) "test." else ""

def configStr(path: String) =
  Play.configuration.getString(configPrefix + path) getOrElse
    die(s"Config value missing: $configPrefix$path")

new RelDb(
  server = configStr("pgsql.server"),
  port = configStr("pgsql.port"),
  database = configStr("pgsql.database"),
  user = ...,
  password = ...)
And the related config snippet:
pgsql.server="192.168.0.123"
pgsql.port="5432"
pgsql.database="prod"
...
test.pgsql.server="192.168.0.123"
test.pgsql.port="5432"
test.pgsql.database="test"
...
Now you don't need to remember to set any system properties when you run your e2e test suite, and you won't accidentally connect to the prod database.
I suppose you could optionally place the test. values in a separate file, which you would then include at the end of the main config file.
There is another approach, which is to override the Global / GlobalSettings method onLoadConfig; from there you can set up the application configuration by combining a generic config with environment-specific configuration, like below:
conf/application.conf --> configuration common to all environments
conf/dev/application.conf --> configuration for the development environment
conf/test/application.conf --> configuration for the testing environment
conf/prod/application.conf --> configuration for the production environment
You can check http://bit.ly/1AiZvX5 for my sample implementation.
Hope this helps.
Off-topic, but if you follow the 12-factor app methodology, then having separate configurations named after environments is bad:
Another aspect of config management is grouping. Sometimes apps batch config into named groups (often called “environments”) named after specific deploys, such as the development, test, and production environments in Rails. This method does not scale cleanly: as more deploys of the app are created, new environment names are necessary, such as staging or qa. As the project grows further, developers may add their own special environments like joes-staging, resulting in a combinatorial explosion of config which makes managing deploys of the app very brittle
source: http://12factor.net/config
OK, so I did the dumb thing and released production code (C#, VS2010) that targeted our development database (SQL Server 2008 R2). Luckily we are not using the production database yet so I didn't have the pain of trying to recover and synchronize everything...
But, I want to prevent this from happening again when it could be much more painful. My idea is to add a table I can query at startup and determine what database I am connected to by the value returned. Production would return "PROD" and dev and test would return other values, for example.
If it makes any difference, the application talks to a WCF service to access the database so I have endpoints in the config file, not actual connection strings.
Does this make sense? How have others addressed this problem?
Thanks,
Dave
The easiest way to solve this is to not have access to production accounts. Those are stored in the Machine.config file for our .net applications. In non-.net applications this is easily duplicated, by having a config file in a common location, or (dare I say) a registry entry which holds the account information.
Most of our servers are accessed through aliases too, so no one really needs to change the connection string from environment to environment. Just grab the user from the config, and the server alias in the hosts file points you to the correct server. This also removes the headache of us having to update all our config files when we switch db instances (change hardware, etc.).
So even with the ClickOnce deployment and the endpoints, you can publish a new endpoint URI in a machine config on the end user's desktop (I'm assuming this is an internal application), and then reference that in the code.
If you absolutely can't do this, as it might be a lot of work (the last place I worked had 2000 call center people, so this push was a lot more difficult, but still possible), you can always have an automated build server set up which modifies the app.config file for you as the last step of building the application. You then ALWAYS publish the compiled code from the automated build server. Never make the change in the app.config for something like this a manual step in the developer's process; that will always lead to problems at some point.
Now if none of this works, your final option (I've done this one too), which I hated, but which worked, is to look up the value off of a mapped drive. Essentially, everyone in the company has a mapped drive to, say, R:. This is where you have your production configuration files, etc. The prod account people map to one drive location with the production values, and the devs etc. map to another with the development values. I hate this option compared to the others, but it works, and it can save you in a pinch when the others become tedious and difficult (due to, say, office politics, setting up a build server, etc.).
I'm assuming your production server has a different name than your development server, so you could simply SELECT @@SERVERNAME AS ServerName.
Not sure if this answer helps you in an assumed .NET environment, but within a *nix/PHP environment, this is how I handle the same situation.
OK, so I did the dumb thing and released production code
There are times when some app behavior is environment dependent, as you alluded to. In order to provide the ability to check between development and production environments, I added the following line to the global /etc/profile.d/custom.sh config (CentOS):
SERVICE_ENV=dev
And in code I have a wrapper method which will grab an environment variable by name and localize its value, making it accessible to my application code. Below is a snippet demonstrating how to check the current environment and react accordingly (in PHP):
public function __call($method, $params)
{
    // Reduce chatter on production envs
    // Only display debug messages if override told us to
    if (($method === 'debug') &&
        (CoreLib_Api_Environment_Package::getValue(CoreLib_Api_Environment::VAR_LABEL_SERVICE) === CoreLib_Api_Environment::PROD) &&
        (!in_array(CoreLib_Api_Log::DEBUG_ON_PROD_OVERRIDE, $params))) {
        return;
    }
}
Remember, you don't want to pepper your application logic with environment checks, save for a few extreme use cases such as the one demonstrated in the snippet. Rather, you should be controlling access to your production databases using DNS. For example, within your development environment the db hostname mydatabase-db would resolve to a local server instead of your actual production server. And when you push your code to the production environment, your DNS will correctly resolve the hostname, so your code should "just work" without any environment checks.
After hours of wading through textbooks and tutorials on MSBuild and app.config manipulation, I stumbled across something called SlowCheetah - XML Transforms http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5 that did what I needed it to do in less than an hour after I first found it. Definitely recommended! From the article:
This package enables you to transform your app.config or any other XML file based on the build configuration. It also adds additional tooling to help you create XML transforms.
This package is created by Sayed Ibrahim Hashimi, Chuck England and Bill Heibert, the same Hashimi who authored THE book on MSBuild. If you're looking for a simple, ubiquitous way to transform your app.config, web.config or any other XML file based on the build configuration, look no further -- this VS package will do the job.
Yeah I know I answered my own question but I already gave points to the answer that eventually pointed me to the real answer. Now I need to go back and edit the question based on my new understanding of the problem...
Dave
I'm assuming your production server has a different IP address. You can simply use:
SELECT CONNECTIONPROPERTY('local_net_address') AS local_net_address
I recently had a hard drive crashed and lost all of my source code. Is it possible to pull/checkout the code that I have already uploaded to Google App Engine (like the most recent version)?
Since I just went to all the trouble of figuring out how to do this, I figure I may as well include it as an answer, even if it doesn't apply to you:
Before continuing, swear on your mother's grave that next time you will back your code up, or better, use source control. I mean it: Repeat after me "next time I will use source control". Okay, with that done, let's see if it's possible to recover your code for you...
If your app was written in Java, I'm afraid you're out of luck - the source code isn't even uploaded to App Engine, for Java apps.
If your app was written in Python, and had both the remote_api and deferred handlers defined, it's possible to recover your source code through the interaction of these two APIs. The basic trick goes like this:
Start the remote_api_shell
Create a new deferred task that reads in all your files and writes them to the datastore
Wait for that task to execute
Extract your data from the datastore, using remote_api
Looking at them in order:
Starting the remote_api_shell
Simply type the following from a command line:
remote_api_shell.py your_app_id
If the shell isn't in your path, prefix the command with the path to the App Engine SDK directory.
Writing your source to the datastore
Here we're going to take advantage of the fact that you have the deferred handler installed, that you can use remote_api to enqueue tasks for deferred, and that you can defer an invocation of the Python built-in function 'eval'.
This is made slightly trickier by the fact that 'eval' evaluates only a single expression, not an arbitrary block of code, so we need to formulate our entire program as a single expression. Here it is:
expr = """
[type(
'CodeFile',
(__import__('google.appengine.ext.db').appengine.ext.db.Expando,),
{})(
name=dp+'/'+fn,
data=__import__('google.appengine.ext.db').appengine.ext.db.Text(
open(dp + '/' + fn).read()
)
).put()
for dp, dns, fns in __import__('os').walk('.')
for fn in fns]
"""
from google.appengine.ext.deferred import defer
defer(eval, expr)
Quite the hack. Let's look at it a bit at a time:
First, we use the 'type' builtin function to dynamically create a new subclass of db.Expando. The three arguments to type() are the name of the new class, the list of parent classes, and the dict of class variables. The entire first 4 lines of the expression are equivalent to this:
from google.appengine.ext import db
class CodeFile(db.Expando): pass
The use of 'import' here is another workaround for the fact that we can't use statements: The expression __import__('google.appengine.ext.db') imports the referenced module, and returns the top-level module (google).
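If the expression-versus-statement distinction is fuzzy, a quick check in a plain Python interpreter (nothing App Engine specific) shows why the __import__ trick is needed:
>>> eval("1 + 1")                      # expressions are fine
2
>>> eval("__import__('os').getcwd()")  # so is a call to __import__
'/home/me'                             # (whatever your working directory is)
>>> eval("import os")                  # statements are not
Traceback (most recent call last):
  ...
SyntaxError: invalid syntax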
Since type() returns the new class, we now have an Expando subclass we can use to store data to the datastore. Next, we call its constructor, passing it two arguments, 'name' and 'data'. The name we construct from the concatenation of the directory and file we're currently dealing with, while the data is the result of opening that filename and reading its content, wrapped in a db.Text object so it can be arbitrarily long. Finally, we call .put() on the returned instance to store it to the datastore.
In order to read and store all the source, instead of just one file, this whole expression takes place inside a list comprehension, which iterates first over the result of os.walk, which conveniently returns all the directories and files under a base directory, then over each file in each of those directories. The return value of this expression - a list of keys that were written to the datastore - is simply discarded by the deferred module. That doesn't matter, though, since it's only the side-effects we care about.
Finally, we call the defer function, deferring an invocation of eval, with the expression we just described as its argument.
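Once the deferred task has had a chance to run (step 3 above), a quick sanity check from the same remote_api shell confirms the entities were written; the result should roughly match the number of files in your app:
>>> from google.appengine.ext import db
>>> db.GqlQuery("SELECT * FROM CodeFile").count()
(This queries the kind directly via GqlQuery, so it works before you define the local CodeFile model below.)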
Reading out the data
After executing the above, and waiting for it to complete, we can extract the data from the datastore, again using remote_api. First, we need a local version of the CodeFile model:
import os
from google.appengine.ext import db
class CodeFile(db.Model):
    name = db.StringProperty(required=True)
    data = db.TextProperty(required=True)
Now, we can fetch all its entities, storing them to disk:
for cf in CodeFile.all():
    dirname = os.path.dirname(cf.name)
    if dirname and not os.path.isdir(dirname):
        os.makedirs(dirname)
    fh = open(cf.name, "w")
    fh.write(cf.data)
    fh.close()
That's it! Your local filesystem should now contain your source code.
One caveat: The downloaded code will only contain your code and datafiles. Static files aren't included, though you should be able to simply download them over HTTP, if you remember what they all are. Configuration files, such as app.yaml, are similarly not included, and can't be recovered - you'll need to rewrite them. Still, a lot better than rewriting your whole app, right?
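If you do remember the static file paths, a small script along these lines can pull them back down (the app URL and the path list are assumptions to be replaced with your own values):
import os
import urllib2  # Python 2, to match the App Engine SDK era of this answer

APP_URL = "http://your_app_id.appspot.com"                    # hypothetical
STATIC_PATHS = ["/static/css/main.css", "/static/js/app.js"]  # hypothetical

for path in STATIC_PATHS:
    local = path.lstrip("/")
    dirname = os.path.dirname(local)
    if dirname and not os.path.isdir(dirname):
        os.makedirs(dirname)
    data = urllib2.urlopen(APP_URL + path).read()
    fh = open(local, "wb")
    fh.write(data)
    fh.close()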
Update: Google App Engine now allows you to download the code (for Python, Java, PHP and Go apps).
Tool documentation here.
Unfortunately the answer is no. This is a common question on SO and the app engine boards.
See here and here for example.
I'm sure you'll be OK though, because you do keep all your code in source control, right? ;)
If you want this to be an option in the future, you can upload a zip of your src, with a link to it somewhere in your web app, as part of your build/deploy process.
There are also projects out there like this one that automate that process for you.
Found that you can run the following in your console (command line / terminal). Just make sure that appcfg.py is accessible via your $PATH.
locate appcfg.py
By default the code below prints out each file and the download progress.
appcfg.py download_app -A APP_ID -V VERSION_ID ~/Downloads
You CAN get your code, even in Java. It just requires a bit of reverse engineering. You can download the war file using the appengine SDK by following these instructions: https://developers.google.com/appengine/docs/java/tools/uploadinganapp
Then you at least have the class files that you can run through JAD to get back to the source files (close to it, at least).
If you're using Python... you might be able to write a script that opens all the files in its current directory and child directories and adds them to a zipfile for you to download; see the sketch below.
I don't know much about App Engine or its permissions, but it seems like that could be possible.
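As a rough sketch of that idea (it only helps if something like this is deployed before the code is lost): a handler bundled with the app that zips up the application directory in memory and serves it as a download. The route and handler names are hypothetical; the old Python 2 / webapp2 App Engine runtime is assumed.
import os
import StringIO
import zipfile

import webapp2

class SourceZipHandler(webapp2.RequestHandler):
    def get(self):
        buf = StringIO.StringIO()
        zf = zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED)
        # Walk the application directory and add every file to the archive.
        for dirpath, dirnames, filenames in os.walk("."):
            for filename in filenames:
                zf.write(os.path.join(dirpath, filename))
        zf.close()
        self.response.headers["Content-Type"] = "application/zip"
        self.response.headers["Content-Disposition"] = 'attachment; filename="source.zip"'
        self.response.out.write(buf.getvalue())

app = webapp2.WSGIApplication([("/_dump_source", SourceZipHandler)])
You would also want to restrict that URL to admins in app.yaml, since it exposes your entire codebase.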
You have to revert to an earlier SDK; appcfg.py is not in the latest SDK. Kind of a pain, but it works. It should be far more prominent in the literature. It cost me an entire day.
Update as of October 2020:
The current version of the Google App Engine SDK still includes the appcfg.py script; however, when trying to download the files from your site, the script will attempt to download them into the root folder of your system.
Example:
/images/some_site_image.png
This is probably related to changes in App Engine: your files might have been in a relative directory before, but they no longer are with the new versions of the system.
To fix the problem you will have to edit the appcfg.py file in:
<path_to_cloud_install_dir>/google-cloud-sdk/platform/google_appengine/google/appengine/tools/appcfg.py
Around line 1634 you will find something that looks like:
full_path = os.path.join(out_dir, path)
The problem is that the path argument is, for most files, an absolute path (it starts with '/'). This causes the join method to ignore the out_dir argument.
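You can see the behavior in a plain Python session (the paths are just examples):
>>> import os.path
>>> os.path.join("/home/me/Downloads", "/images/some_site_image.png")
'/images/some_site_image.png'
>>> os.path.join("/home/me/Downloads", "images/some_site_image.png")
'/home/me/Downloads/images/some_site_image.png'
Because the second argument is absolute, everything before it is discarded, which is why the files end up at the root of the filesystem instead of inside out_dir.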
To fix this on *nix and macOS type systems, you will need to add a line before the above-mentioned statement that looks like:
path = re.sub(r'^/', '', path)
This removes the '/' prefix from the path and allows the join method to properly
connect the strings.
Now you should be able to run:
google-cloud-sdk/platform/google_appengine/appcfg.py download_app -A <app> -V <version> <your_directory>
where <version> is a version id such as 20200813t184800.