Can QGIS Server read projects from a database?

I have QGIS Server working. The standard functionality is to add ?map=path/to/projectfile.qgs to the server URL, so you can dynamically switch between projects in your web application. Currently I create my projects and upload the project file to the server to get them working in my application. That works fine.
But I can also store the project in the database. It would be much nicer if I could tell my application to use a project from my database, skipping the cumbersome file-update procedure.
Researching this, I came across this info in the QGIS documentation:
https://docs.qgis.org/3.16/en/docs/server_manual/config.html
In the section 5.2. Environment variables, I see the following info:
QGIS_PROJECT_FILE
The .qgs or .qgz project file, normally passed as a parameter in the query string (with MAP), you can also set it as an environment
variable (for example by using mod_rewrite Apache module).
postgresql://localhost:5432?sslmode=disable&dbname=mydb&schema=myschema&project=myproject
So you can point to a project file in the database as the default project file. But that's not what I want; I want to do it dynamically.
What I want is to have something like ?map=projectfile_in_my_database, and to specify in my conf / environment on the server where these are stored in the DB.
Is this possible?

You can use project files stored in a database (in my case it is Postgres). Here is how I did it.
I created a pg_service file in a home directory which contains the database connection credentials and lets us connect to the database by specifying just the service name (for example, with psql you can connect using psql service=myservicename), and set the fastcgi param in nginx:
fastcgi_param PGSERVICEFILE /home/qgis/.pg_service.conf;
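For reference, a minimal .pg_service.conf could look like this (a sketch only; the host, database name and credentials below are placeholders for your own setup):
# /home/qgis/.pg_service.conf
[myservicename]
host=localhost
port=5432
dbname=mydb
user=qgis
password=secret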
I connected from QGIS Desktop to the Postgres database, specifying the service name which I set in the service file.
Saved the project file in the database. Done this way, the project file will contain the database connection via the service name.
Set the fastcgi_param for the project name:
location / {
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
    gzip off;
    include fastcgi_params;
    fastcgi_param QGIS_PROJECT_FILE "postgresql:///?service=myservicename&schema=public&project=testproject";
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
}
The default QGIS project you can set in the frontend of your web application via the URL; that is not difficult.
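For example (a hypothetical URL; I am assuming a QGIS Server version that accepts Postgres project URIs in the MAP parameter), the frontend could select a database project per request like this, with the & characters inside the MAP value URL-encoded as %26 so they are not read as query-string separators:
https://myserver.example/qgisserver?SERVICE=WMS&REQUEST=GetCapabilities&MAP=postgresql:///?service=myservicename%26schema=public%26project=testproject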

Related

Plesk Webserver - New domain redirects to server IP rather than index.html

I'm running Plesk 12.0.18 on Ubuntu 12.04.5 and have a problem with a recently added domain.
The domain was added to Plesk for a new website and has its own directory (httpdocs/goodsnap). When I visit the domain http://goodsnap.co.uk, it redirects to the default domain on the server IP rather than opening the index.html file within httpdocs/goodsnap.
I can open the index file manually by going to http://goodsnap.co.uk/index.html and also get into WP by going to http://goodsnap.co.uk/wordpress
Can anyone advise how to rectify this? I have tried setting a redirect to the wordpress subdirectory by placing a .htaccess in httpdocs/goodsnap, but it makes no difference.
Many Thanks
Tom

PrestaShop Database Error

I am trying to migrate my website from the server to localhost in PrestaShop, and I have tried the following steps.
I assume you've configured PHP, Apache and MySQL on your local machine. If you don't have those things installed, find some information about how to do it. If you are using Windows, I can suggest installing the XAMPP application.
Download all website files from your FTP and put them in a local directory.
Next, let's export the database from phpMyAdmin to a .sql file and download it. Import that file to your local database.
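If you prefer the command line over phpMyAdmin, the same export and import can be done roughly like this (the database names and users are placeholders):
# On the old server: dump the shop database to a file.
mysqldump -u produser -p prestashop_db > shop.sql
# On your local machine: import the dump into a freshly created database.
mysql -u root -p prestashop_local < shop.sql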
Now it's time to make some changes in your local database, files and BackOffice.
Database:
Go to the table PS_SHOP_URL and change the values of the following columns:
– domain: localhost
– domain_ssl: localhost
– If your PrestaShop is located in some additional directory, set the value of physical_uri (for example, if it's in the 'shop' directory, write /shop/ there)
In the PS_CONFIGURATION table, change the values of PS_SHOP_DOMAIN and PS_SHOP_DOMAIN_SSL.
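If you would rather make these changes in SQL than through phpMyAdmin's editor, the updates would look roughly like this (a sketch; it assumes the default ps_ table prefix and a single shop row):
-- Point the shop URL at localhost; set physical_uri only if the shop is in a subdirectory.
UPDATE ps_shop_url
   SET domain = 'localhost',
       domain_ssl = 'localhost',
       physical_uri = '/shop/';
-- Update the matching configuration entries.
UPDATE ps_configuration
   SET value = 'localhost'
 WHERE name IN ('PS_SHOP_DOMAIN', 'PS_SHOP_DOMAIN_SSL');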
Files:
1. Set the debug mode ON in config/defines.inc.php:
define('_PS_MODE_DEV_', true);
2. Set your local database parameters in config/settings.inc.php.
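The relevant constants in config/settings.inc.php look roughly like this (the values shown are placeholders for your local setup):
// config/settings.inc.php: local database parameters (placeholder values)
define('_DB_SERVER_', 'localhost');
define('_DB_NAME_', 'prestashop');
define('_DB_USER_', 'root');
define('_DB_PASSWD_', '');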
3. If your PrestaShop is located in some additional directory (for example /shop/), edit the .htaccess file located in the PrestaShop main folder. Add that directory to this part:
RewriteRule . - [E=REWRITEBASE:
The complete line should look like this:
RewriteRule . - [E=REWRITEBASE:/shop/]
BackOffice:
Advanced Parameters -> Performance
Select "Force compilation" in the Smarty settings, disable the cache, and clear the cache using the button located in the upper-right header of the page.
This successfully transfers my site to localhost and I am able to access the BackOffice,
but when I access the FrontOffice there are errors, as shown in the screenshots.
Check in the backend under Preferences -> SEO & URLs, the section Set Shop URL, to see if you have the correct URL. As far as I can see in the screenshot, your website is still using the localhost URL. Also, after checking in the backend, try accessing your website in another browser.

Deploying a decoupled front + backend of an application

I've written a web app using two completely decoupled components:
An API that is based on the Play Framework and serves requests of type /api/* to any client.
A decoupled front end based on AngularJS built using grunt build
Now, the front end talks to the API, but I'd like both of these units to be deployed behind a proxy, something like nginx, that can route incoming requests to the respective component. For example, I'd like all the /web/* requests to be served off a web directory containing all the client-side source (js/html/etc.), and all the /api/* requests to be proxied to my Play Framework server (we will need to pass the path on to the server to make sure the right paths are served back) to return all the API-related data. For example, a request like GET domain.com/api/users should be internally proxied to GET 127.0.0.1:9000/api/users.
I've seen some discussions online about this and I'd still like to run it through you guys to see which is the best approach for this kind of deployment.
Eventually, I'd like a service oriented architecture and I'd like the flexibility to decouple things even further.
I have built and deployed Play Framework + AngularJS apps and found nginx to be a great approach.
Nginx also gives you a growth path to handle more services as your app architecture grows. For example, you might add a dedicated service for /api/user/* while keeping the standard service for all other /api/* routes.
At some point you might need to go to a commercial product but for my needs for now and the foreseeable future, nginx is amazing.
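As an illustration of that growth path, the split would look something like this (the dedicated service and its port 9001 are hypothetical):
# Route user-related API calls to a dedicated service.
location /api/user/ {
    proxy_pass http://localhost:9001;
    proxy_set_header X-Real-IP $remote_addr;
}
# Everything else under /api/ keeps going to the standard service.
location /api/ {
    proxy_pass http://localhost:9000;
    proxy_set_header X-Real-IP $remote_addr;
}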
The relevant part of my nginx config is:
server {
    listen 80;
    # Without this, Play serves the assets from within its bundled jar. That's
    # fine and works but seems unnecessary when nginx can serve the files directly.
    location /assets {
        alias /app/live/my-play-app-here/active/public;
    }
    location / {
        proxy_pass http://localhost:9000;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
The key part here is the /assets URI-space. Yours will probably be different because you package your AngularJS app completely independently. My Angular app is within the Play app's /app/assets/javascripts folder. There are pros and cons to this (I quite like your idea of keeping it completely separate). What I've done with the /assets block is allow nginx to serve the static content directly, as it seems pretty silly for Play to serve that when nginx does a fine job.
It's not so relevant in your scenario, but for others who have everything within Play: for the above serving-static-assets strategy to work, the deployment process needs to unpack the public directory from the archive made by play dist, something like this (an excerpt from my bash deployment script):
unzip lib/$SERVICE_BASE_NAME.$SERVICE_BASE_NAME-$VERSION.jar "public/*"
For your particular scenario, something like the below is probably a good start:
server {
    listen 80;
    location /api {
        proxy_pass http://localhost:9000;
        proxy_set_header X-Real-IP $remote_addr;
    }
    location / {
        alias /app/live/my-angularjs-app-here/active/public;
    }
}
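One caveat, assuming your AngularJS app uses HTML5-mode routing: deep links such as /users/42 need to fall back to index.html, which a bare alias won't do. A sketch of that fallback (root is used instead of alias because try_files and alias do not combine cleanly):
location / {
    root /app/live/my-angularjs-app-here/active/public;
    # Unknown paths fall back to the app shell so Angular can route them client-side.
    try_files $uri $uri/ /index.html;
}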

Configure Shibboleth native Service Provider and Apache

I have a simple web application. I want to set a Shibboleth native SP in front of my web app so that it handles the SAML-related assertions and forwards requests to my web app. Is there a complete tutorial on how to achieve that?
Use TestShib to test your app; it makes things much easier.
Follow the steps
download and install the SP on your machine
include Shibboleth's configuration into your Apache
2.1. into the httpd.conf file add include "PATH/opt/path/etc/apache22" (if the version is Apache 2.2, otherwise the appropriate one)
in the apache22.config file, add the location you want to secure; it would be /secure by default (see the sketch after these steps)
in your shibboleth2.xml file (in the etc folder), put your entity ID (in the ApplicationDefaults element), e.g. https://mywebsite.com/shibboleth; this can be anything, not necessarily a real path
put the entity ID of your IdP in the SSO element; in the case of TestShib it would be https://idp.testshib.org/idp/shibboleth
in the MetadataProvider, point to your IdP's metadata URL; in the case of TestShib it would be http://www.testshib.org/metadata/testshib-providers.xml
Download your metadata from https://mywebsitehost.com/Shibboleth.sso/Metadata; here mywebsitehost would be a real host, and the rest of the path is configured automatically by Shibboleth. This URL will download your SP's metadata file.
Upload your metadata file to TestShib via its register page.
You are ready to go. Go to https://mywebsitehost.com/secure and you should be redirected to the IdP to authenticate.
NOTE: Make sure you have a domain name configured with SSL (https).
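For reference, the protection block from step 3 typically looks like this (a sketch based on the stock apache22.config shipped with Shibboleth SP 2.x):
<Location /secure>
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    require valid-user
</Location>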

How do I make my Apache 2 server force a browser to open a file transfer dialogue?

How do I make my Apache 2 server force a browser to open a file transfer dialogue if the URL points to a file with a .pln or .psa extension?
I have a simple LAMP server with CentOS 5, Apache 2, MySQL 5 and PHP 5, built recently from CentOS 5.2 i386 installation CDs. My web application generates files to be downloaded and imported into a custom application. The file extensions are .psa and .pln. How do I make my server force the browser to open a file transfer dialogue? If I point my browser to a .psa or .pln file on the Apache 2 server, the file's content is displayed in a pop-up window as simple text. I want a file transfer dialogue.
The web-app I am working on is deployed on another web-server and handles the .pln and .psa files as desired. I cannot compare server configuration files because I do not have administrator access to the working server.
How do I change my server's behavior? Does this require code changes to my web-app code (such as sending explicit headers)? If so, why does it work against the other server? Can code changes be avoided by configuring the server's default behavior?
You should be able to use the FilesMatch directive to add the necessary header (note that the Header directive requires mod_headers to be loaded). The pattern should match your .pln and .psa extensions:
<FilesMatch "\.(?i:pln|psa)$">
    Header set Content-Disposition attachment
</FilesMatch>
I tried several configuration changes which had no apparent effect.
I added the following line to my /etc/httpd/conf/httpd.conf file:
AddType application/octet-stream .pln .psa
I restarted the Apache server and it had no effect.
I added the following lines to my /etc/httpd/conf/httpd.conf file:
ForceType application/octet-stream
Header set Content-Disposition attachment
I restarted the Apache server and it had no effect.
If you have Firefox (and if not, why not?), install Chris Pederick's Web Developer toolbar and check that the headers are actually being set correctly. If so, it may be the fault of the browser. As I said, you can't be certain that any given browser will "correctly" interpret the response headers. What browser are we talking about here anyway?
If the headers aren't being set correctly, you may need to re-check your httpd.conf file. Possibly the directives you added aren't in the correct section? (e.g. under the wrong <Location> directive)
Forcing a browser to do something is always a tricky proposition, since the browser can ignore you and do what the hell it likes.
That said, most browsers will prompt the user with a "save as" dialog box if the Content-Type header is set to application/octet-stream. Either write a simple wrapper CGI that serves up the requested file with the correct headers, or fiddle with Apache 2's mime-types (look in the config directory).
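If you go the wrapper-CGI route, a minimal PHP sketch could look like this (the script name and export directory are hypothetical; adjust to your layout):
<?php
// download.php: force a download of a generated .pln/.psa file.
// The export directory below is hypothetical.
$name = isset($_GET['file']) ? basename($_GET['file']) : ''; // basename() drops any path components
$path = '/var/www/exports/' . $name;
if ($name === '' || !is_file($path)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $name . '"');
header('Content-Length: ' . filesize($path));
readfile($path);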
