I need multiple sites to all point to a common application, varying by host-header.
While the code / content for each site is identical, each site does need a unique config for things like connection strings.
What would be the best approach to set this up?
(The site is actually a Silverlight / WCF application, although I don't think that should matter.)
Either use an MSI installation package and let the installation wizard set all of these values, or use the web.config transformation syntax introduced in .NET 4.0 (you will have a separate config and build target for each host header).
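For example, a transform file such as Web.Site1.config could overwrite the connection string for one host's build configuration (a minimal sketch; the file name, connection string name and server values are only placeholders):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- replaces the connectionString attribute of the entry named "Default" in the base Web.config -->
    <add name="Default"
         connectionString="Data Source=sql01;Initial Catalog=Site1Db;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>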
Edit - I didn't understand your question at first:
You will have to install the application multiple times. You can't have a single site with multiple different configs. But you don't have to copy the libraries multiple times - you can use links (mklink.exe). That means you will have one central directory holding your shared content, such as the bin directory, and a separate directory for each site. Each site's directory will contain its own web.config and whatever content sits in the root of the site, plus links to the central directory. You then create a separate application for each site in IIS and map a single host header to each application.
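As a sketch (the paths here are made up), each per-site directory would link back to the shared directories:

rem each site folder keeps its own web.config alongside these links
mklink /D C:\Sites\site1\bin C:\Shared\WebApp\bin
mklink /D C:\Sites\site1\ClientBin C:\Shared\WebApp\ClientBin
mklink /D C:\Sites\site2\bin C:\Shared\WebApp\bin
mklink /D C:\Sites\site2\ClientBin C:\Shared\WebApp\ClientBin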
The other possibility is handling this in your code and keeping everything in a single web.config, but IMO that is a pretty bad and dangerous solution.
Related
I am working on a React web application which may require multi-language support. I am using i18next, which internally loads the required configuration file from a specific directory based on the language selected by the user.
The words or sentences that need to be translated may grow as new screens are added, and if a new language folder is added we need to load those languages into our application as well.
What is the best way (I mean scalable, easy to configure, platform provided...) to satisfy this requirement?
( :( All I can think of is mounting an external locales folder onto a folder inside the container. Is that the only way, or is there something else?)
Note: Kubernetes and Rancher are there to manage it. Please provide a solution/suggestion around that.
Thank you.
If you can pull the files from the storage bucket in CI/CD, add them to the Docker image, and manage them inside it, that would be one way.
Going this way helps when scaling up the application, since there is no external locales folder to manage or worry about.
If by external locales folder you mean using a hostPath on the node: what if Kubernetes moves your workload to another node during maintenance? How will you add the files to each node, or manage that?
If you use a PVC you might run into the ReadWriteOnce issue; once you scale the replicas you need ReadWriteMany. Try to keep your containers as stateless as possible.
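For reference, the access mode is just a field on the claim; a minimal sketch (the names and storage class are placeholders, and the cluster needs a provisioner that actually supports ReadWriteMany):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: locales-pvc
spec:
  accessModes:
    - ReadWriteMany   # required once the deployment runs more than one replica
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-client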
If you can create and add the directory inside the Docker image and use it directly, that would be perfect; otherwise you could use an NFS-style option such as MinIO or GlusterFS, which also supports ReadWriteMany.
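A rough sketch of the image-baking approach (assuming an nginx-served production build, and that the CI job has already copied the translation files from the bucket into a locales/ directory next to the Dockerfile):

# build the React app
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# serve the static build plus the baked-in translations
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY locales/ /usr/share/nginx/html/locales/

Every replica then ships identical locale files, so nothing has to be mounted at runtime.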
I have a web app (browser based) which needs to access a folder full of icons that resides outside the web folder.
This folder MUST be outside the web folder, and would ideally exist outside the project folder altogether.
However, when specifying the path to the folder, neither "../" nor a symlink works.
When the page attempts to load the image I always get
"[web] GET /Project|web/icons/img.png => Could not find asset Project|web/icons/img.png."
even though I set the image source to "../icons/img.png".
How can I get Dart to access this file properly?
PS: I attempted a symlink to another part of the filesystem (where the images would ideally be kept); however, this did not work either.
The web server integrated into DartEditor or pub serve only serves directories that are added as folders to the files view. When you add the folder to DartEditor you should be able to access the files. This is just for development.
You also have to find a solution for when you deploy your server app. It would be a hazardous security issue if you could access files outside the project directory. Where should the server draw the line? If this were possible, your entire server would be accessible to the world.
Like @Robert asked, I also have a hard time imagining why the files must not be in the project folder.
If you want to reuse the icons/images between different projects you could create a resource package that contains only those images and add them as a dependency to your project.
If you want a better answer you need to provide more information about your requirements.
If you wrote your own server (using the HttpServer class) it may be possible to use VirtualDirectory to serve your external files.
Looking at the dartiverse_search example may give you some ideas.
You could put them in the lib directory and refer to them via /packages/Project/...
Or in another package, in which case they would be in a different place in the file system. But as other people have said, your requirement seems odd.
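For example, if the icons live under lib/icons/ of the Project package (or of a separate resource package), the HTML could reference them roughly like this:

<img src="packages/Project/icons/img.png">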
I have a bunch of text, XML and other files (i.e. resources) that I need to access using servlets in a Java web app. For example, there is an XML file, part of which is returned by a servlet in response to a user query. I am using Tomcat. What is the best practice for storing these files and accessing them from Java code?
1) What are the default folders where I should put them? Do I need to put them into the web archive or into one of the JARs?
2) How do I access the files from Java code? How can I set the path to them so that it will work in any environment?
P.S. I've read a number of posts related to this topic, most of which recommend storing resources in JARs and accessing them using java.lang.Class.getResourceAsStream(String). That seems strange to me because classes and data should be separated.
It's perfectly fine to load static resources using the classloader. That's what ResourceBundle does to load the internationalized properties files for example.
Put them in WEB-INF/classes along with your class files, or in a jar inside WEB-INF/lib, and load them with the ClassLoader as indicated by the answers you already read.
That doesn't prevent you from placing these files in a directory separate from the Java source files in your project. The build process just has to make sure they end up in the appropriate location at runtime. The Maven and Gradle convention is to put source files under src/main/java and resource files under src/main/resources.
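As a sketch (the servlet and resource names are made up), reading such a packaged resource from a servlet looks roughly like this, assuming the file ends up on the classpath, e.g. as WEB-INF/classes/data/catalog.xml:

// javax.servlet imports; on Tomcat 10+ these packages live under jakarta.servlet instead
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CatalogServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Leading slash = path resolved from the classpath root, i.e. WEB-INF/classes
        try (InputStream in = getClass().getResourceAsStream("/data/catalog.xml")) {
            if (in == null) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND, "Resource not packaged");
                return;
            }
            resp.setContentType("application/xml");
            in.transferTo(resp.getOutputStream()); // transferTo requires Java 9+
        }
    }
}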
I have an application for a huge business, which needs many pages, controls, etc. The .xap file easily goes up to 50 MB. I notice that every time I load the page, the .xap file gets downloaded to my local machine. However, my users may connect over a 3G network, so it will be very slow if we download the app every time they open the page. So I was wondering if there is some way I can do the deployment similar to WPF, which only downloads locally when the version has changed....
Any other suggestions to improve the loading speed are welcome.
Thanks a lot
First and foremost, get your web server caching headers sorted. Typically you open the ClientBin folder in IIS Manager and go into the HTTP Response Headers section. Set the expiry to something like 1 day (or, if you update during normal working hours, set it to 15 minutes). Note that just because the content has expired doesn't mean it will be re-downloaded, but it does mean it will be re-checked with the server before being used. Once it has expired, the browser will inform the server of the version it currently has, allowing the server to simply respond with "go ahead and use that, it hasn't changed since the last time you checked".
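If you prefer configuration over clicking through IIS Manager, the same expiry can be set with a web.config scoped to the ClientBin folder (the one-day value here is just an example):

<configuration>
  <system.webServer>
    <staticContent>
      <!-- cacheControlMaxAge uses d.hh:mm:ss, so this is one day -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>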
For such a large system you should seriously consider dividing the app up into multiple DLL projects. Then use the Application Library Caching feature found in the main app project's properties. You need to create the appropriate .extmap.xml files for each of your DLLs; many of the SDK and Toolkit DLLs have them already. This results in separate .zip files for these DLLs being placed in the ClientBin folder instead of being incorporated into one large XAP. That lets you put slow-moving / never-changing code into one set of zips and more frequently changing business code into another. When you update the app you then only update the changed zips, reducing the download burden of a new version. (Note this only works with in-browser apps.)
In the Silverlight project options, check "Reduce XAP size by using application library caching".
I've got a CakePHP install running six different web sites, each with their own webroot. All of the base code is the same (controllers, models, etc.), just the css, images, js and so forth are split into the separate webroots (app/webroot, app/webroot_second_site, app/webroot_third_site, etc.)
My question is: Is there a way to share common resources among the webroots? So we don't have six different copies of TinyMCE and jQuery cluttering up our project, and more importantly to me, so that we can make a change in a common CSS file instead of having to copy/paste a change across six different sites' folders?
If these sites were running on a Linux box, I think it could be fairly easily accomplished with a symlink from each of the webroots to a common folder higher up in the directory tree, but we're running Windows Server 2003 / IIS 6. Any suggestions?
Turns out you can do directory symlinks in NTFS file systems. Or at least close enough for practical purposes. "NTFS Junctions" will work for what you want.
Grab the Sysinternals "Junction" program for a simple command-line program to create/delete these junctions.
Then you can link whatever common directories you need to a single master directory.
For example, if you have
webroot1/
webroot2/
webroot3/
each with their own "js/" directory, then you could create
webroot_common/js/
and then symlink... er, "create junctions" to that new directory like so:
junction webroot1/js/common webroot_common/js
junction webroot2/js/common webroot_common/js
junction webroot3/js/common webroot_common/js
(yes, the "junction" program takes its inputs backwards from Linux "ln -s")
Then you can put whatever common js files you need, like jQuery, in that common folder, and leave any site-specific js files in "webrootX/js".
You could make a static server. Add a DNS entry to something like static.yoursite.com. Link to those files from your other sites -- probably you could just modify the HTML helper so that it will automatically create links to the other domain.
This can help with performance, because you can run something like nginx to serve these static files. It will also parallelize the resource retrievals -- most browsers only allow 2 connections to a given server, so on a single domain the static assets compete for the same connections that the dynamic requests need. With a separate static domain, the user effectively gets 2 connections to your dynamic content plus another 2 connections to the static resources.
Works pretty well IME.
This will work. You will need to adjust the directory paths for a Windows server, but you will get the idea well enough.
First, put your APP and CAKE directories a level above the public_html.
/var/www/app
/var/www/cake
Make sure that the folder cake has all of the cake folders in it (cake, vendors, etc.)
Point your sites to their public_html directories.
/var/www/html/site1
/var/www/html/site2
The webroot content will sit in each of the public_html directories. Now, modify your index.php file in each of the webroots to point to the same app:
if (!defined('ROOT')) {
    // ROOT is the directory that contains the shared app folder
    define('ROOT', DS.'var'.DS.'www');
}
if (!defined('APP_DIR')) {
    // APP_DIR is the name of the shared app folder under ROOT
    define('APP_DIR', 'app');
}
if (!defined('CAKE_CORE_INCLUDE_PATH')) {
    // the directory that holds the cake core folders (cake, vendors, etc.)
    define('CAKE_CORE_INCLUDE_PATH', DS.'var'.DS.'www'.DS.'cake');
}
Make sure that URL rewriting is turned on, of course. Then it will all run off the same code but use the webroot that the index.php is being served from.