How to pass deployment settings to an application? - qooxdoo

I am trying to deploy a Qooxdoo web application backed by CherryPy-hosted web services onto a server. However, I need to configure the client-side Qooxdoo application with the hostname of the server on which the application resides, so that the Ajax callbacks resolve to the right host. I have a feeling I can use the capabilities of the generate.py Qooxdoo script to generate client-side code with this set appropriately, but reading through the docs hasn't made it clear how yet. Anyone have any tips?
(FWIW, I know how I'd approach this using something like PHP and a different client-side framework like Echo 3: I'd have the index file be a PHP file that reads a local system configuration file before sending back the client-side code. In this case, however, the generate.py file is a necessary part of the toolchain, so I can't see how to do it that simply.)

You can use the qx.core.Environment class to add and read configuration for your project. The recommended way is to set it at compilation time only, but there is a hack if you want to configure your application at run time.
Configuration during compilation time
If you want to configure the environment during compilation time, see the qooxdoo manual on environment settings.
In both cases, after you add an environment key to your application it can be accessed using the qx.core.Environment.get method.
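For example, once the key has been defined (at compile time, or via the run-time hack below), reading it and building the Ajax base URL could look roughly like this; the "/services/" path is just a placeholder for wherever your CherryPy endpoints live:
var hostname = qx.core.Environment.get("myawsomeapp.hostname");
// e.g. "example.org" becomes "http://example.org/services/"
var baseUrl = "http://" + hostname + "/services/";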
On run time
WARNING: this method isn't supported/documented by qooxdoo. Basically, it's a hack.
If you want to make some environment configuration available at run time, you have to do it before qooxdoo loads. To do that, you could add some JavaScript to your web page, e.g.
window.qx = {};
window.qx.$$environment = {
  "myawsomeapp.hostname": "example.org"
};
This should be added somewhere in your page before qooxdoo starts loading, otherwise it will not have the desired effect. The advantage of this method is that you can push configuration to the client, e.g. API keys that may differ between instances of your application.

The easiest way is to compose your AJAX URL on the fly from window.location; ideally you would be able to use window.location.origin, which for the Stack Overflow website would be "https://stackoverflow.com", but there are issues with that on IE.
A cross platform solution is:
var urlRoot = window.location.protocol + "//" + window.location.hostname +
    (window.location.port ? ':' + window.location.port : '');
This means your URL will always be correct, even if the server name changes (e.g. you're on a test server instead of production).
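In a qooxdoo application, one way to use that root (a sketch only, assuming qx.io.request.Xhr and a made-up "/api/games" path) would be:
var req = new qx.io.request.Xhr(urlRoot + "/api/games");
req.addListener("success", function(e) {
  // the parsed JSON response from the server
  var data = req.getResponse();
});
req.send();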
See here for more details:
https://tosbourn.com/a-fix-for-window-location-origin-in-internet-explorer/

Related

Cefpython app with html/js files in local filesystem

I'm trying to make a hybrid python-js application with cefpython.
I would like to have:
JS and HTML files local to the cef python app (e.g. in './html', './js', etc)
Load one of the HTML files as the initial page
Avoid any CORS issues with files accessing each other (e.g. between directories)
The following seems to work to load the first page:
browser = cef.CreateBrowserSync(url='file:///html/index.html',
                                window_title="Rulr 2.0")
However, I then hit CORS issues.
Do I need to run a webserver also? Or is there an effective pattern for working with local files?
Try passing "disable-web-security" switch to cef.Initialize or set BrowserSettings.web_security_disabled.
Try also setting BrowserSettings.file_access_from_file_urls_allowed and BrowserSettings.universal_access_from_file_urls_allowed.
There are a few options in CEF for loading custom content that can be used to load filesystem content without any security restrictions: a resource handler, a scheme handler and a resource manager. In CEF Python only the resource handler is currently available; see the wxpython-response.py example on the README-Examples.md page.
The resource manager is a very easy API for loading various content; it is to be implemented in Issue #418 (a PR is welcome):
https://github.com/cztomczak/cefpython/issues/418
For scheme handler see Issue #50:
https://github.com/cztomczak/cefpython/issues/50
There is also GetResourceResponseFilter in upstream CEF, which is an easier option than a resource handler; it is to be implemented via Issue #229:
https://github.com/cztomczak/cefpython/issues/229
You could also run an internal web server inside your app (easy to do with Python) and serve files that way. Upstream CEF also has built-in web server functionality, but I don't think this will be exposed in cefpython, as it's already easy to set up a web server in Python.

How to specify different api URL in Azure deployment vs running locally?

So my setup is like this.
I have a solution with two projects. The first project is an ASP.NET WebAPI project that represents a REST API. It is completely view-less and returns only JSON responses for the API calls.
The second project is an AngularJS client. I started by creating an empty Web app in Visual Studio. So this project does have a Web.Config and an Azure publish profile but no C# controllers, routes, app_start, etc. It is all JavaScript and HTML.
The two projects are deployed as two independent Web Apps in Azure. Project_API and Project_Web.
My question is: in my Angular app, how do I gracefully detect or set the URL used by the service responsible for communicating with the REST API, based on whether I am deployed in Azure or running locally?
// Use this api URL when running locally
var BaseURL = 'http://localhost:15774/api/games/';
// Use this api URL when deployed to Azure
// var BaseURL = 'http://Project_API.azurewebsites.net/api/games/';
It is similar to how, inside the Project_API project, I can set a different connection string for my local vs production database. That part I understand, because the C# code can read the database connection string from Web.Config, and I can override that value in the Azure application settings for the deployed app. I just don't know the right way to do the equivalent for a JavaScript client web app.
The actual solution I went with was to create an ASP.NET 5 project, which has built-in support for the gulp task runner. It still feels a little unnatural to use Visual Studio to develop an AngularJS client in the first place, but at least the task runner support brings it closer to a typical front-end webdev workflow.
The other suggested solution surely works as well. It just seems to me that if you choose:
to separate your REST API and client front-end into independent projects, rather than serving both the client and the server from a single project, and
to write your front-end client as an Angular SPA,
then it is undesirable to have to use C# and Razor in the Angular client. That might be common in traditional ASP.NET development, but it is not standard in most Angular client development; using task runners for Angular clients is closer to general practice. However, as the other answer points out, Visual Studio support for this is brand new.
The rest of the details of my solution:
Pull a new module/dependency into gulp: gulp-ng-constant
Create app/config.json:
{
  "development": { "ApiEndpoint": "http://localhost:15774/api/games/" },
  "production":  { "ApiEndpoint": "http://myapp.azurewebsites.net/api/games/" }
}
Setup new gulp task: "gulp config"
Set this task to call the ngConstant function that comes with gulp-ng-constant. Have it load the development settings from the config file:
var myConfig = require(paths.webroot + 'js/app/config.json');
var envConfig = myConfig["development"];
Follow the gulp-ng-constant documentation to specify any options and the name of the Angular module in which you want the constants to be registered.
Bind this new task to the after-build event in Task Runner Explorer, so it will run after every build. (A sketch of such a task is shown after these steps.)
Set up another gulp task: gulp production:config
Make it exactly the same as the "gulp config" task above, except that it loads myConfig["production"].
Don't bind it to the after-build event; instead, add it to the prepublish tasks in your project.json file:
"prepublish": [ "npm install", "bower install", "gulp clean", "gulp production:config", "gulp min" ]
Now whenever you build and/or publish, the gulp task will automatically generate a file /app/ngConstants.js. If you set up the task correctly, the file will contain the Angular code to register the constants with the correct module.
angular.module("game", [])
.constant("ApiEndpoint", "http://localhost:15774/api/games/")
The only thing I don't really like about this solution is that there is no obvious way to tell in gulp whether a build is "Debug" or "Release". Reading some forums, it sounds like the VS team is aware of this issue and planning to fix it in the future; some way of exposing the build configuration to the task runner is needed. In my solution the "development" constants are written on every build and then overwritten with the "production" values on publish. This works for the API endpoint case, but other constants might have different requirements and need that Release vs Debug distinction, or you would be forced to run the release tasks by hand, which might be acceptable depending on how often you run a release build locally.
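One workaround, not specific to Visual Studio, is to pass the environment on the gulp command line and pick the config block from that; a sketch using the minimist package (the --env flag name is made up):
var minimist = require('minimist');

// e.g. `gulp config --env production`; defaults to "development"
var argv = minimist(process.argv.slice(2));
var environment = argv.env || 'development';
var envConfig = myConfig[environment];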
In your case, you would have a cshtml file that lets the server provide additional information to the page. You will need MVC if you intend to deploy this with IIS; otherwise, your options would be different with something like Node.
Whether you read that information from a registry value, environment variable, database, web config, or whatever is up to you.
At the end of the day, you will have something that sets that value, which you generate in the cshtml with Razor:
<script>window.ENDPOINT = '@someEndpoint'</script>
And then you can either just read that off the window in your JavaScript, or you can make a constant in your app and use it that way:
app.constant('myAppGlobal', window.ENDPOINT || {});
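Either way, the value can then be injected wherever the API calls are made; a small sketch (the gamesApi service and the relative path are made up):
app.factory('gamesApi', ['$http', 'myAppGlobal', function ($http, myAppGlobal) {
  return {
    list: function () {
      // myAppGlobal holds the base URL emitted by the server
      return $http.get(myAppGlobal + 'api/games/');
    }
  };
}]);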

Access Sitecore DB from API in Console application

I would like to access the Sitecore DB and items from a console application, like
Sitecore.Data.Database db = Sitecore.Context.Database
or
Sitecore.Data.Database db = Sitecore.Data.Database.GetDatabase("master")
How do I configure and set up my console application to access the DB as above?
Thanks everyone for the suggestions; I am really interested in the config-changes approach. I used the web service, but it has very limited methods. For example, if I would like to create an item from a template and insert it with prepopulated values, there is no such option. The reason I am looking for the console approach is that I would like to import content from XML or an Excel sheet and push it into the Sitecore tree, eventually using a scheduled task to run the console app periodically. I do not want to copy the entire web.config and App_Config. If anyone has already done this, could you please post your steps and the necessary config changes?
You have two options I think:
1) Import the Sitecore bits of a website's web.config into your console application's app.config, so that the Sitecore API "just works"
I'm sure I read a blog post about this, but I can't find the reference right now (I will have another look). I think the simple but long-winded approach is to copy the whole <sitecore/> element and all the separate files it references. I'm fairly sure you can whittle this down to the subset of config required for data access with a bit of thinking.
2) Don't use the Sitecore API directly; instead, connect to a web service that exposes access to it remotely.
There are a few of these that already exist. Sitecore itself exposes one, Sitecore Rocks has one, and Hedgehog TDS has one too. And you can always write your own (since any web service running inside the Sitecore ASP.Net app can make database calls and report values back and forth - just remember to consider security if this web service might end up exposed externally for any reason)
John West links to some relevant stuff here:
http://www.sitecore.net/Learn/Blogs/Technical-Blogs/John-West-Sitecore-Blog/Posts/2013/09/Getting-Data-Out-of-the-Sitecore-ASPNET-CMS.aspx
-- Edited to add --
I've not found the blog post I remember. But I came across this SO thread:
Accessing Sitecore API from a CLI tool
which refers to this blog post:
http://www.experimentsincode.com/?p=232
which I think gives the info you'll need for option 1.
(And it reminds me that, of course, when you copy the config stuff you have to copy the Sitecore binaries into your app's folder as well)
I would just like to expand on #JermDavis' post and note that Sitecore isn't a big fan of being accessed when not in a web application. However, if you still want to do this, you will need to make sure that you have all of the necessary configuration settings from the web.config and App_Config of your site in your console application's app.config file.
Moreover, you will never be able to call Sitecore.Context in a console application, as the Sitecore context sits on top of the HttpContext, which means there must be a web application and a valid request for you to use it. What you are looking for is something more along the lines of Sitecore.Configuration.Factory.GetDatabase("master").
Good luck and happy coding :)
This sounds like a job for the Sitecore Item Web API. I use the Sitecore Item Web API whenever I need to access Sitecore data from the master database outside the context of the Content Management server or outside of the context of the Sitecore application. The Web API definitely does not allow you to do everything that the standard Sitecore API does but it can act as a good base and I now extend upon the Web API instead of writing my own custom web services whenever possible.
Thanks to JermDavis's advice.
After I copied the configuration, made changes to the config sections to get rid of conflicts, and copied almost all of the Sitecore, Analytics and Lucene DLLs, it worked great.
The only thing you have to remember is to copy the App_Config folder to the same location as your DLLs.
Thanks again, JermDavis.

How to work with authentication in local Google App Engine tests written in Go?

I'm building a webapp in Go that requires authentication. I'd like to run local tests using appengine/aetest that validate the authentication behavior. However, I do not see any way to create an aetest.Context with a dummy user. Am I missing something?
I had a similar issue with the Python SDK. The gist of the solution is to bypass authentication when tests run locally.
You should have access to the [web] app object at test setup time: create a user object and save it into the app (or wherever your get_current_user() method will check).
This will let you unit test all application functions except authentication itself. For the latter part you can deploy your latest changes as an unpublished App Engine version, test authentication there, and if all works, publish that version.
I've discovered some header values that seem to do the trick. appengine/user/user_dev.go has the following:
X-AppEngine-Internal-User-Email
X-AppEngine-Internal-User-Federated-Identity
X-AppEngine-Internal-User-Federated-Provider
X-AppEngine-Internal-User-Id
X-AppEngine-Internal-User-Is-Admin
If I set those headers on the Context's Request when doing in-process tests, things seem to work as expected. If I set the headers on a request that I create separately, things are less successful, since the 'user.Current()' call consults the Context's Request.
These headers might work in a Python environment as well.

Unit Testing the Server Interface for a Silverlight-Facebook Application

I have a Silverlight 4 client running on a Facebook page hosted on Google App Engine. It's using gminifb to communicate with the Facebook API. The Silverlight client uses POST calls to the URIs for each method and passes the session information from Facebook with each call.
The project's growing, and it would be super useful if I could set up a unit-testing system that exercises a variety of the server calls, so that when I make changes I can ensure everything else still works. I've worked with NUnit before and I like what I've read of Pex, but I'm not sure how to apply them to this situation.
What are the choices for creating a test system for this? Pros/cons of each?
How do I get started setting something like this up?
Solved. I did it as follows:
Created a special user account to be used for testing on the server that bypasses authentication. I only did this in the test environment, by checking a debug flag in that environment's settings. This avoided creating any security hole in the live site (since the same debug flag will be false there).
Created a C#.NET solution to test each API call. The host project is a console app (no need for a GUI) with three reusable synchronous methods:
SendFormRequest(WebRequest request, Dictionary<string,string> pairs),
GetJsonFromResponse(HttpWebResponse response),
and ResetAccount().
These three methods allow the host project to make HTTP requests on the server and to read the JSON responses.
Wrapped each server API call inside a method call in the host project.
Created an nUnit test project in the solution. Then simply created tests that call each wrapper method in the host project, using different parameters and changing values on the server.
Created a series of tests to verify correct error handling for invalid parameters and data.
It's working perfectly and has already identified a few minor issues. The result is immensely useful and will check for breaking changes on new deployments.
