Undertow ResourceHandler to return the same file when a path cannot be found - reactjs

I am using undertow to statically serve a react single page application. For client side routing to work correctly, I need to return the same index file for routes which do not exist on the server. (For a better explanation of the problem click here.)
It's currently implemented with the following ResourceHandler:
ResourceHandler(resourceManager, { exchange ->
    val handler = FileErrorPageHandler({ _: HttpServerExchange -> }, Paths.get(config.publicResourcePath + "/index.html"), arrayOf(OK))
    handler.handleRequest(exchange)
}).setDirectoryListingEnabled(false)
It works, but it's hacky. I feel there must be a more elegant way of achieving this?

I could not find what I needed in the Undertow documentation and had to experiment to arrive at a solution. This solution is for an embedded web server, since that is what I was after. I was trying to do this for an Angular 2+ single page application with routing. This is what I arrived at:
masterPathHandler
    .addPrefixPath("/MY_PREFIX_PATH_", myCustomServiceHandler)
    .addPrefixPath("/MY_PREFIX_PATH",
        new ResourceHandler(
            new FileResourceManager(new File(rootDirectory + "/MY_PREFIX_PATH"), 4096, true, "/"),
            new FileErrorPageHandler(Paths.get(rootDirectory + "/MY_PREFIX_PATH/index.html"), StatusCodes.NOT_FOUND)));
Here is what it does:
- 'myCustomServiceHandler' provides the handler for the server-side logic that processes queries sent to the server.
- The 'ResourceManager/FileResourceManager' delivers the files located in the (Angular) root path of the application.
- The 'FileErrorPageHandler' serves the 'index.html' page of the application whenever the request targets a client-side route path instead of a real file. It also serves this file for bad file requests.
Note the underscore '_' after the first 'MY_PREFIX_PATH'. I wanted the application's API URL to be the same as the web path, but to avoid extra logic I settled on the underscore instead. A fuller wiring sketch follows below.
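To make the snippet above self-contained, here is a hedged sketch of the full embedded-server wiring it describes. The listener address, the rootDirectory value and the body of myCustomServiceHandler are placeholders added for illustration, not part of the original answer:
import io.undertow.Undertow;
import io.undertow.server.HttpHandler;
import io.undertow.server.handlers.PathHandler;
import io.undertow.server.handlers.error.FileErrorPageHandler;
import io.undertow.server.handlers.resource.FileResourceManager;
import io.undertow.server.handlers.resource.ResourceHandler;
import io.undertow.util.StatusCodes;
import java.io.File;
import java.nio.file.Paths;

public class EmbeddedSpaServer {
    public static void main(String[] args) {
        String rootDirectory = "/var/www";               // placeholder document root
        HttpHandler myCustomServiceHandler = exchange -> // placeholder server-side API handler
                exchange.getResponseSender().send("API response");

        PathHandler masterPathHandler = new PathHandler()
                .addPrefixPath("/MY_PREFIX_PATH_", myCustomServiceHandler)
                .addPrefixPath("/MY_PREFIX_PATH",
                        new ResourceHandler(
                                new FileResourceManager(new File(rootDirectory + "/MY_PREFIX_PATH"), 4096, true, "/"),
                                // Fallback described in the answer: unknown paths get index.html.
                                new FileErrorPageHandler(
                                        Paths.get(rootDirectory + "/MY_PREFIX_PATH/index.html"),
                                        StatusCodes.NOT_FOUND)));

        Undertow.builder()
                .addHttpListener(8080, "localhost")      // assumed listener
                .setHandler(masterPathHandler)
                .build()
                .start();
    }
}
Here the FileErrorPageHandler acts as the fallback the answer describes, serving index.html for requests that do not map to a real file.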

I check the MIME type for null and serve index.html in such a case as follows:
.setHandler(exchange -> {
    ResourceManager manager = new PathResourceManager(Paths.get(args[2]));
    Resource resource = manager.getResource(exchange.getRelativePath());
    // Fall back to index.html when the path is missing or has no known MIME type
    // (i.e. it is most likely a client-side route, not a real file).
    if (resource == null || resource.getContentType(MimeMappings.DEFAULT) == null) {
        resource = manager.getResource("/index.html");
    }
    exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, resource.getContentType(MimeMappings.DEFAULT));
    resource.serve(exchange.getResponseSender(), exchange, IoCallback.END_EXCHANGE);
})
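For context, here is a hedged sketch of the builder chain that the .setHandler(...) fragment above presumably hangs off, with the PathResourceManager created once instead of on every request; the listener address and the use of args[2] as the document root are assumptions carried over from the snippet:
import io.undertow.Undertow;
import io.undertow.io.IoCallback;
import io.undertow.server.handlers.resource.PathResourceManager;
import io.undertow.server.handlers.resource.Resource;
import io.undertow.server.handlers.resource.ResourceManager;
import io.undertow.util.Headers;
import io.undertow.util.MimeMappings;
import java.nio.file.Paths;

public class SpaFallbackServer {
    public static void main(String[] args) {
        // Document root of the built SPA; args[2] mirrors the snippet above.
        ResourceManager manager = new PathResourceManager(Paths.get(args[2]));

        Undertow.builder()
                .addHttpListener(8080, "localhost") // assumed listener
                .setHandler(exchange -> {
                    Resource resource = manager.getResource(exchange.getRelativePath());
                    // Missing files and extension-less client-side routes fall back to index.html.
                    if (resource == null || resource.getContentType(MimeMappings.DEFAULT) == null) {
                        resource = manager.getResource("/index.html");
                    }
                    exchange.getResponseHeaders().put(Headers.CONTENT_TYPE,
                            resource.getContentType(MimeMappings.DEFAULT));
                    resource.serve(exchange.getResponseSender(), exchange, IoCallback.END_EXCHANGE);
                })
                .build()
                .start();
    }
}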

Related

404 page not found error when passing getMapping( value="/") and requestParameters

My app is a Java Spring Boot microservice deployed in Kubernetes. Locally it works fine, since I do not need the proxy and kubernetes.yml; however, when I deploy to the int environment, I run into a "404 page not found" error.
Please advise if there is any other way to call the service endpoint with this path /api/v1/primary-domain/domain-objects?parameter1=645&parameter2=363&parameter3=2023-02-01
Why is it not able to find the resource? Are there any options to make this work while keeping the same naming pattern for the path?
application.properties
server.servlet.context-path=/api/v1/primary-domain/domain-objects
management.endpoints.web.base-path=/
management.endpoint.health.show-details=always
management.endpoints.web.path-mapping.health=health
#API Registry
springdoc.api-docs.path=/doc
controller:
@GetMapping(value = "", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<List<DomainObjectsMsResponseDTO>> getDomainObjects(
        @RequestParam(name = "parameter1", required = false) String parameter1,
        @RequestParam(name = "parameter2", required = false) String parameter2,
        @RequestParam(name = "parameter3", required = false) String parameter3
) throws Exception {..}
The path variable in proxy is /v1/primary-domain/domain-objects
proxy.yml:
...
spec:
  path: /v1/primary-domain/domain-objects
  target: /api/v1/primary-domain/domain-objects
...
Due to the architecture direction, the paths for the health, doc and actual service endpoints should follow a certain naming convention, hence this issue.
If I use the proxy.yml path variable v1/primary-domain and @GetMapping(value="/domain-objects"...), then it works fine, but the domain-objects/doc endpoint returns a server array with url: api/v1/primary-domain, which is not what I want, since multiple microservices will be created under primary-domain, each in its own Git repo, with paths like /primary-domain/points and /primary-domain/positions.
I tried @GetMapping(value="/"...) and removing the value attribute entirely from @GetMapping, but I still get the same error.
I tried changing the label variables, the path target and the paths in Kubernetes and the proxy, and matched application.properties and the controller to go with them. However, with every approach something gets broken; none of the approaches satisfies the service endpoint, the health endpoint and the doc (OpenAPI) endpoint at the same time.
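For reference, a minimal sketch of the variant the question says does work (context path ending at the domain, resource name on the mapping); the controller class name and the empty response body are placeholders, and the DTO type is taken from the question:
// application.properties (sketch): server.servlet.context-path=/api/v1/primary-domain
import java.util.List;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DomainObjectsController {

    // Resulting URL: /api/v1/primary-domain/domain-objects?parameter1=...&parameter2=...&parameter3=...
    @GetMapping(value = "/domain-objects", produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<List<DomainObjectsMsResponseDTO>> getDomainObjects(
            @RequestParam(name = "parameter1", required = false) String parameter1,
            @RequestParam(name = "parameter2", required = false) String parameter2,
            @RequestParam(name = "parameter3", required = false) String parameter3) {
        return ResponseEntity.ok(List.of());
    }
}
This keeps the service endpoint reachable, but, as the question notes, it changes the server URL reported by the doc (OpenAPI) endpoint, which is the trade-off being asked about.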

Blocked a frame with origin "https://example.com" from accessing a frame with origin "https://www.herokucdn.com". Protocols, domains, and ports [duplicate]

I am loading an <iframe> in my HTML page and trying to access the elements within it using JavaScript, but when I try to execute my code, I get the following error:
SecurityError: Blocked a frame with origin "http://www.example.com" from accessing a cross-origin frame.
How can I access the elements in the frame?
I am using this code for testing, but in vain:
$(document).ready(function() {
    var iframeWindow = document.getElementById("my-iframe-id").contentWindow;
    iframeWindow.addEventListener("load", function() {
        var doc = iframeWindow.document;
        var target = doc.getElementById("my-target-id");
        target.innerHTML = "Found it!";
    });
});
Same-origin policy
You can't access an <iframe> with a different origin using JavaScript; it would be a huge security flaw if you could. Because of the same-origin policy, browsers block scripts that try to access a frame with a different origin.
An origin is considered different if any of the following parts of the address differs:
protocol://hostname:port/...
Protocol, hostname and port must be the same as those of your page if you want to access a frame.
NOTE: Internet Explorer is known to not strictly follow this rule, see here for details.
Examples
Here's what would happen trying to access the following URLs from http://www.example.com/home/index.html
URL RESULT
http://www.example.com/home/other.html -> Success
http://www.example.com/dir/inner/another.php -> Success
http://www.example.com:80 -> Success (default port for HTTP)
http://www.example.com:2251 -> Failure: different port
http://data.example.com/dir/other.html -> Failure: different hostname
https://www.example.com:80/home/index.html -> Failure: different protocol
ftp://www.example.com:21 -> Failure: different protocol & port
https://google.com/search?q=james+bond -> Failure: different protocol, port & hostname
Workaround
Even though the same-origin policy blocks scripts from accessing the content of sites with a different origin, if you own both pages, you can work around this problem using window.postMessage and the corresponding message event to send messages between the two pages, like this:
In your main page:
const frame = document.getElementById('your-frame-id');
frame.contentWindow.postMessage(/*any variable or object here*/, 'https://your-second-site.example');
The second argument to postMessage() can be '*' to indicate no preference about the origin of the destination. A target origin should always be provided when possible, to avoid disclosing the data you send to any other site.
In your <iframe> (contained in the main page):
window.addEventListener('message', event => {
    // IMPORTANT: check the origin of the data!
    if (event.origin === 'https://your-first-site.example') {
        // The data was sent from your site.
        // Data sent with postMessage is stored in event.data:
        console.log(event.data);
    } else {
        // The data was NOT sent from your site!
        // Be careful! Do not use it. This else branch is
        // here just for clarity, you usually shouldn't need it.
        return;
    }
});
This method can be applied in both directions, creating a listener in the main page too, and receiving responses from the frame. The same logic can also be implemented in pop-ups and basically any new window generated by the main page (e.g. using window.open()) as well, without any difference.
Disabling same-origin policy in your browser
There already are some good answers about this topic (I just found them googling), so, for the browsers where this is possible, I'll link the relevant answer. However, please remember that disabling the same-origin policy will only affect your browser. Also, running a browser with same-origin security settings disabled grants any website access to cross-origin resources, so it's very unsafe and should NEVER be done if you do not know exactly what you are doing (e.g. for development purposes).
Google Chrome
Mozilla Firefox
Safari
Opera: same as Chrome
Microsoft Edge: same as Chrome
Brave: same as Chrome
Microsoft Edge (old non-Chromium version): not possible
Microsoft Internet Explorer
Complementing Marco Bonelli's answer: the best current way of interacting between frames/iframes is using window.postMessage, supported by all browsers
Check the domain's web server for http://www.example.com configuration for X-Frame-Options
It is a security feature designed to prevent clickjacking attacks.
How does clickjacking work?
The evil page looks exactly like the victim page.
It then tricks users into entering their username and password.
Technically, the evil page has an iframe whose source is the victim page.
<html>
  <iframe src='victim-domain.example'></iframe>
  <input id="username" type="text" style="display: none;"/>
  <input id="password" type="text" style="display: none;"/>
  <script>
    // some JS code that clickjacks the username and password input from inside the iframe...
  </script>
</html>
How the security feature works
If you want to prevent your pages from being rendered inside an iframe, add the X-Frame-Options header:
X-Frame-Options: DENY
The options are:
SAMEORIGIN: only allow my own domain to render my HTML inside an iframe.
DENY: do not allow my HTML to be rendered inside any iframe.
ALLOW-FROM https://example.com/: allow a specific domain to render my HTML inside an iframe.
This is an IIS config example:
<httpProtocol>
  <customHeaders>
    <add name="X-Frame-Options" value="SAMEORIGIN" />
  </customHeaders>
</httpProtocol>
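For comparison with the IIS snippet, here is a hedged sketch of emitting the same header from Java Spring Security (this ties in with the Spring configuration mentioned in a later answer; the class name is illustrative and it assumes the older WebSecurityConfigurerAdapter style):
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class FrameOptionsConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Sends X-Frame-Options: SAMEORIGIN; use .deny() instead to block all framing.
        http.headers()
            .frameOptions()
            .sameOrigin();
    }
}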
The solution to the question
If the web server has this security feature enabled, it may cause a client-side SecurityError, as it should.
In my case I wanted to implement a two-way handshake, meaning:
- the parent window will load faster than the iframe
- the iframe should talk to the parent window as soon as it's ready
- the parent is ready to receive the iframe's message and reply
This code is used to set a white label in the iframe using a CSS custom property.
code:
iframe
$(function() {
    window.onload = function() {
        // create listener
        function receiveMessage(e) {
            document.documentElement.style.setProperty('--header_bg', e.data.wl.header_bg);
            document.documentElement.style.setProperty('--header_text', e.data.wl.header_text);
            document.documentElement.style.setProperty('--button_bg', e.data.wl.button_bg);
            //alert(e.data.data.header_bg);
        }
        window.addEventListener('message', receiveMessage);
        // call parent
        parent.postMessage("GetWhiteLabel", "*");
    }
});
parent
$(function() {
    // create listener
    var eventMethod = window.addEventListener ? "addEventListener" : "attachEvent";
    var eventer = window[eventMethod];
    var messageEvent = eventMethod == "attachEvent" ? "onmessage" : "message";
    eventer(messageEvent, function (e) {
        // reply to child (iframe)
        document.getElementById('wrapper-iframe').contentWindow.postMessage(
            {
                event_id: 'white_label_message',
                wl: {
                    header_bg: $('#Header').css('background-color'),
                    header_text: $('#Header .HoverMenu a').css('color'),
                    button_bg: $('#Header .HoverMenu a').css('background-color')
                }
            },
            '*'
        );
    }, false);
});
Naturally you can limit the origins and the text; this is easy-to-work-with code.
I found this example to be helpful:
Cross-Domain Messaging With postMessage
There is a workaround, actually, for specific scenarios.
If you have two processes running on the same domain but on different ports, the two windows can interact without limitations (e.g. localhost:3000 and localhost:2000). To make this work, each window needs to change its domain to the shared origin:
document.domain = 'localhost'
This also works in the scenario that you are working with different subdomains on the same second-level domain, i.e. you are on john.site.example trying to access peter.site.example or just site.example
document.domain = 'site.example'
By explicitly setting document.domain, the browser will ignore the hostname difference and the windows can be treated as coming from the same origin. Now, in a parent window, you can reach into the iframe: frame.contentWindow.document.body.classList.add('happyDev')
If you have control over the content of the iframe - that is, if it is merely loaded in a cross-origin setup such as on Amazon Mechanical Turk - you can circumvent this problem with the <body onload='my_func(my_arg)'> attribute for the inner html.
For example, for the inner html, pass this in the onload attribute (yes, this is defined there and refers to the window containing the inner body element):
<body onload='changeForm(this)'>
In the inner html:
<script>
function changeForm(window) {
    console.log('inner window loaded: do whatever you want with the inner html');
    window.document.getElementById('mturk_form').style.display = 'none';
}
</script>
I experienced this error when trying to embed an iframe and then opening the site with Brave. The error went away when I changed to "Shields Down" for the site in question. Obviously, this is not a full solution, since anyone else visiting the site with Brave will run into the same issue. To actually resolve it I would need to do one of the other things listed on this page. But at least I now know where the problem lies.
I would like to add a Java Spring specific configuration that can affect this.
In a web site or gateway application there is a contentSecurityPolicy setting.
In Spring you can find it in the implementation of your WebSecurityConfigurerAdapter subclass:
http.headers()
    .contentSecurityPolicy("script-src 'self' [URLDomain]/scripts; "
        + "style-src 'self' [URLDomain]/styles; "
        + "frame-src 'self' [URLDomain]/frameUrl ...")
    .and()
    .referrerPolicy(ReferrerPolicyHeaderWriter.ReferrerPolicy.STRICT_ORIGIN_WHEN_CROSS_ORIGIN);
The browser will block your content if you have not declared the external content as safe here.
Open the Start menu
Press Windows+R or open "Run"
Execute the following command:
chrome.exe --user-data-dir="C://Chrome dev session" --disable-web-security

Next.js: How can we have dynamic routing redirect to static pages?

Using Next.js , I currently have an app with a single entry point in the form of /pages/[...slug]/index.ts
It contains a getServerSideProps function which analyses the slug and decides whether to redirect.
In some cases a redirection is needed, but it will always be towards a page that can be statically rendered. Example: redirect /fr/uid towards /fr/blog/uid which can be static.
In other cases the slug already is the url of a page that can be static.
How can I mix this dynamic element with a static generation of all pages?
Thanks a lot for your help!
If I understood your problem correctly, you cannot use getServerSideProps if you are going to export a static site.
You have two solutions:
Configure your redirection rules in your web hosting solution (i.e. Amazon S3/CloudFront).
Create client-side redirects (when _app.tsx mounts, you can check if router.asPath matches any of the redirections you would like to have configured).
Please remember that the first solution is more correct for SEO purposes (it issues real 301 redirects rather than client-side ones).
EDIT: @juliomalves rightly pointed out the OP is looking at two different things: redirection, and hybrid builds.
However, the question should be clarified a bit more to really be able to solve the problem.
Because you will need to host a web server for SSR, you can leverage Next.js 9.5's built-in redirection system to have permanent server-side redirects.
When it comes to SSR vs SSG, Next.js allows you to adopt a hybrid approach, by giving you the possibility of choosing which Data Fetching strategy to adopt.
In case you are using AWS CloudFront, then you can redirect with CloudFront Functions.
CloudFront Functions is ideal for lightweight, short-running functions for use cases like the following:
URL redirects or rewrites – You can redirect viewers to other pages based on information in the request, or rewrite all requests from one path to another.
Here is what we are using to redirect clients (e.g. Native App, Google search index, etc.) to new location when NextJS page was moved or removed.
// NOTE: Choose "viewer request" for event trigger when you associate this function with CloudFront distribution.
function makeRedirectResponse(location) {
    var response = {
        statusCode: 301,
        statusDescription: 'Moved Permanently',
        headers: {
            'location': { value: location }
        }
    };
    return response;
}

function handler(event) {
    var mappings = [
        { from: "/products/decode/app.html", to: '/products/decode.html' },
        { from: "/products/decode/privacy/2021_01_25.html", to: '/products/decode/privacy.html' }
    ];
    var request = event.request;
    var uri = request.uri;
    for (var i = 0; i < mappings.length; i++) {
        var mapping = mappings[i];
        if (mapping.from === uri) {
            return makeRedirectResponse(mapping.to);
        }
    }
    return request;
}

How to post a project with Gitlab API that sets also a default_branch

I am trying to create a project in GitLab via their API,
with a request (in Angular) like this:
$http.post(
    "https://gitlab.com/api/v3/projects",
    {private_token: <token>}
)
But then, as returned data, I get a project document with default_branch: null ... and then it is impossible to update the project, for example by posting files with the API "https://gitlab.com/api/v3/projects//repository/files", because GitLab returns an error saying that I need to be on a specified branch.
Unfortunately, posting a branch with
$http.post(
    "https://gitlab.com/api/v3/projects/<projectId>/repository/branches",
    {
        private_token: <token>,
        branch_name: "master"
    }
)
also returns an error... because I need to specify a ref parameter as well, but that would not make any sense when I don't yet have an origin master branch!
If your project is empty (contains no git objects), you should be able to post to "https://gitlab.com/api/v3/projects/repository/files", and specify a branch name, and that will become your default branch. If that isn't working, open a bug on: https://gitlab.com/gitlab-org/gitlab-ce/issues
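To illustrate that answer, here is a hedged sketch of creating the first file (and thereby the default branch) in one call. It assumes the v3 create-file endpoint takes file_path, branch_name, content and commit_message as form parameters (v3 only because that is what the question uses), and it uses Java's HttpClient purely for illustration since the asker's Angular code is not the point here; the project id and token are placeholders:
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class GitlabCreateFirstFile {
    public static void main(String[] args) throws Exception {
        String projectId = "12345";   // placeholder project id
        String token = "YOUR_TOKEN";  // placeholder private token

        // The branch named in the first commit to an empty repository becomes the default branch.
        String form = "file_path=" + URLEncoder.encode("README.md", StandardCharsets.UTF_8)
                + "&branch_name=" + URLEncoder.encode("master", StandardCharsets.UTF_8)
                + "&content=" + URLEncoder.encode("# Hello", StandardCharsets.UTF_8)
                + "&commit_message=" + URLEncoder.encode("Initial commit", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://gitlab.com/api/v3/projects/" + projectId + "/repository/files"))
                .header("PRIVATE-TOKEN", token)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}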

How can I upload a file to a Sharepoint Document Library using Silverlight and client web-services?

Most of the solutions I've come across for Sharepoint doc library uploads use the HTTP "PUT" method, but I'm having trouble finding a way to do this in Silverlight because it has restrictions on the HTTP Methods. I visited this http://msdn.microsoft.com/en-us/library/dd920295(VS.95).aspx to see how to allow PUT in my code, but I can't find how that helps you use an HTTP "PUT".
I am using client web-services, so that limits some of the Sharepoint functions available.
That leaves me with these questions:
Can I do an HTTP PUT in Silverlight?
If I can't, or there is a better way to upload a file, what is it?
Thanks
Figured it out! It works like a charm:
public void UploadFile(String fileName, byte[] file)
{
    // format the destination URL
    string[] destinationUrls = { "http://qa.sp.dca/sites/silverlight/Answers/" + fileName };

    // fill out the metadata
    // remark: don't set the Name field, because this is the name of the document
    SharepointCopy.FieldInformation titleInformation = new SharepointCopy.FieldInformation
    {
        DisplayName = fileName,
        InternalName = fileName,
        Type = SharepointCopy.FieldType.Text,
        Value = fileName
    };

    // to specify the content type
    SharepointCopy.FieldInformation ctInformation = new SharepointCopy.FieldInformation
    {
        DisplayName = "XML Answer Doc",
        InternalName = "ContentType",
        Type = SharepointCopy.FieldType.Text,
        Value = "xml"
    };

    SharepointCopy.FieldInformation[] metadata = { titleInformation };

    // initialize the web service
    SharepointCopy.CopySoapClient copyws = new SharepointCopy.CopySoapClient();

    // execute the CopyIntoItems method
    copyws.CopyIntoItemsCompleted += copyws_CopyIntoItemsCompleted;
    copyws.CopyIntoItemsAsync("http://null", destinationUrls, metadata, file);
}
Many Thanks to Karine Bosch for the solution here: http://social.msdn.microsoft.com/Forums/en/sharepointdevelopment/thread/f135aaa2-3345-483f-ade4-e4fd597d50d4
What type of SharePoint deployment, and what version of Silverlight? If, say, it is an intranet deployment, you could use UNC paths to access your document library in SharePoint and the SaveFileDialog/OpenFileDialog available in Silverlight 3.
http://progproblems.blogspot.com/2009/11/saveread-file-from-silverlight-30-in.html
or
http://www.kirupa.com/blend_silverlight/saving_file_locally_pg1.htm
Silverlight has restrictions on what it can do with local files, though I've read that Silverlight 4 has some changes.
http://www.wintellect.com/CS/blogs/jprosise/archive/2009/12/16/silverlight-4-s-new-local-file-system-support.aspx
