I'm trying to make a hybrid python-js application with cefpython.
I would like to have:
JS and HTML files local to the cefpython app (e.g. in './html', './js', etc.)
Load one of the HTML files as the initial page
Avoid any CORS issues with files accessing each other (e.g. between directories)
The following seems to work to load the first page:
browser = cef.CreateBrowserSync(url='file:///html/index.html',
                                window_title="Rulr 2.0")
However, I then hit CORS issues.
Do I need to run a webserver also? Or is there an effective pattern for working with local files?
Try passing the "disable-web-security" switch to cef.Initialize, or set BrowserSettings.web_security_disabled.
Try also setting BrowserSettings.file_access_from_file_urls_allowed and BrowserSettings.universal_access_from_file_urls_allowed.
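Putting those two suggestions together, a minimal sketch might look like the following (the switch and BrowserSettings names are taken from the cefpython docs, so verify them against your cefpython version; also note that disabling web security has obvious security implications):

import os
from cefpython3 import cefpython as cef

# Chromium command-line switches are passed to Initialize();
# boolean switches take an empty string as their value.
cef.Initialize(switches={"disable-web-security": ""})

# Per-browser settings are passed via the settings dict.
browser = cef.CreateBrowserSync(
    url="file://" + os.path.abspath("html/index.html"),
    window_title="Rulr 2.0",
    settings={
        "web_security_disabled": True,
        "file_access_from_file_urls_allowed": True,
        "universal_access_from_file_urls_allowed": True,
    })

cef.MessageLoop()
cef.Shutdown()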
There are a few options in CEF for loading custom content that can be used to serve filesystem content without any security restrictions: a resource handler, a scheme handler and a resource manager. In CEF Python only the resource handler is currently available; see the wxpython-response.py example on the README-Examples.md page.
The resource manager is a very easy API for loading various content; it is to be implemented in Issue #418 (a PR is welcome):
https://github.com/cztomczak/cefpython/issues/418
For the scheme handler, see Issue #50:
https://github.com/cztomczak/cefpython/issues/50
Additionally, there is GetResourceResponseFilter in upstream CEF, which is an easier option than a resource handler; it is to be implemented via Issue #229:
https://github.com/cztomczak/cefpython/issues/229
You could also run an internal web server inside your app (easy to do with Python) and serve files that way. Upstream CEF also has built-in web server functionality; however, I don't think this will be exposed in cefpython, as it's already easy to set up a web server in Python.
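If you go the internal web server route, here is a minimal sketch using the standard library's http.server on a background thread (Python 3.7+ for the directory argument; the port and paths are illustrative assumptions):

import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler
from cefpython3 import cefpython as cef

# Serve the app directory (./html, ./js, ...) on localhost in a background thread.
handler = partial(SimpleHTTPRequestHandler, directory=".")
server = HTTPServer(("127.0.0.1", 8000), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Load pages over http:// instead of file://, so the file-URL
# security restrictions never apply.
cef.Initialize()
browser = cef.CreateBrowserSync(url="http://127.0.0.1:8000/html/index.html",
                                window_title="Rulr 2.0")
cef.MessageLoop()
cef.Shutdown()
server.shutdown()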
My JMeter version is apache-jmeter-5.4.1.
I am trying to set up an HTTP Request sampler, something like this, on a React-based website:
HTTP Request - GET http://YYY.YYY.YYYY/141719 (With Retrieve Embedded Resources checked)
When I run this, I can see that JMeter captures embedded resource requests (secondary requests) such as *.css and *.js files.
Second set of embedded resource requests:
However, one of these secondary requests, bundle.xxxxxxx.js, triggers another set of embedded resource requests to the server which retrieve further *.js files as part of the request initiator chain.
The name of this file itself is randomly generated, e.g. bundle.0787f963ab0ac67dd7d4.js.
The browser, of course, parses this bundle.xxxxxxx.js immediately and fetches all of its embedded resources/requests (including the chunk.*.js files).
My problem is how to replicate this behaviour in JMeter so that the second set of embedded resource requests is also triggered. At the moment I can only capture the first set of embedded resource requests, which does not give me true load-test results, as the second set generates more traffic to the server. Is there a way to recursively retrieve all embedded resources?
Our application under test is based on React JS.
As per JMeter project main page:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
There are several initiator types:
parser (index, styles, fonts, etc.) - these will be captured by the embedded-resources downloading functionality; all of the types below will need to be handled some other way
redirect
script
other
So if you need to mimic HTTP requests that originate from JavaScript, you will have to replicate the logic of that JavaScript code in JMeter.
There is a minimal chance that you will be able to copy and paste the JavaScript into a JSR223 PostProcessor; however, it most likely relies on objects like navigator, window or XMLHttpRequest, so it is highly likely that you will have to rewrite it in Groovy. Once you have enough data to properly build the HTTP Request samplers, you will most probably need to put them under a Parallel Controller.
Another option is to go for the WebDriver Sampler and use real browsers for your tests, but be aware that browsers are very resource-intensive compared to HTTP Request samplers.
I am developing a client-only ReactJS application (only for local usage) where I need to save and load images by filepaths (URL and local file system).
The paths are stored in local storage, and images from URLs can be used. However, local images cannot, since ReactJS uses the project directory and I cannot escape it.
Is there a way to open files by absolute path from the local file system, or do I have to copy them into the project directory?
Are you running this through a browser? If so, JavaScript in browsers does not yet have the ability to access the local file system. I haven't tried this, but you could run Node locally and use ExpressJS for client-server communication.
As stated here:
you'll need two pieces:
A browser piece, which does the UI with the user.
A Node piece, which runs on the user's machine (and thus can access the file system) and which the browser piece uses to do the actual file operations.
Probably the easiest way for the pieces to interact would be HTTP, which you can trivially support using ExpressJS.
So for instance, if the user wants to delete a file:
User clicks something to say "delete this file"
Browser JavaScript sends the command to the Node process over HTTP via ajax
Node process does the deletion and reports success/failure
Browser JavaScript displays the result
I am trying to deploy a Qooxdoo web application backed by CherryPy-hosted web services onto a server. However, I need to configure the client-side Qooxdoo application with the hostname of the server on which the application resides so that the Ajax callbacks resolve to the right host. I have a feeling I can use the capabilities of the generate.py Qooxdoo script to generate client-side code with this appropriately set, but reading through the docs hasn't made it clear how yet. Anyone have any tips?
(FWIW, I know how I'd approach this using something like PHP and a different client-side framework like Echo 3--I'd have the index file be a PHP file that reads a local system configuration file prior to sending back client-side code. In this case, however, the generate.py file is a necessary part of the toolchain, so I can't see how to do it so simply.)
You can use the qx.core.Environment class to add/get configuration for your project. The recommended way is to do this only at compile time, but there is a hack if you want to configure your application at run time.
Configuration at compile time
If you want to configure the environment at compile time, see this.
In both cases, after you add any environment variable to your application, it can be accessed using the qx.core.Environment.get method.
Configuration at run time
WARNING: this method isn't supported/documented by qooxdoo. Basically, it's a hack.
If you want to make some environment configuration available at run time, you have to do this before qooxdoo loads. In order to do this, you could add some JavaScript to your web page, e.g.:
window.qx = { };
window.qx.$$environment = {
  "myawsomeapp.hostname": "example.org",
};
This should be added somewhere in your page before qooxdoo starts loading, otherwise it will not have the desired effect. The advantage of this method is that you can push configuration to the client, e.g. some API keys that may differ between instances of your application.
The easiest way will be to compose your Ajax URL on the fly from window.location; ideally, you would be able to use window.location.origin, which for this Stack Overflow website would be "https://stackoverflow.com", but there are issues with that on IE.
A cross platform solution is:
var urlRoot = window.location.protocol + "//" +
  window.location.hostname +
  (window.location.port ? ':' + window.location.port : '');
This means your URL will always be correct, even if the server name changes (e.g. you're on a test server instead of production).
See here for more details:
https://tosbourn.com/a-fix-for-window-location-origin-in-internet-explorer/
I have a website that used to have .dsp file extensions for all pages. There are a lot of other sites that reference my pages with that extension, but my pages are all actually .aspx pages. In IIS5, I was able to configure this to work.
My problem is I've recently switched from IIS5 to IIS7, and I have no idea how to map these requests (.dsp) to the real file (.aspx) without the server telling me the file doesn't exist.
In IIS, go to Handler Mappings under your website/application/vdir and add a new managed handler.
Specify your extension (.dsp) and the executable (in this case the same as ASP.NET's, which is %windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll for 64-bit ASP.NET 2.0).
The company I work for has proxies/WAN accelerators between our international sites to cache Intranet web content. I have a Silverlight application being hosted on a server at one location, but being accessed by clients in another location. When the users access the web page hosting the Silverlight app, they get the stale xap file being cached by the proxy and not the latest version from the server. Local users always get the latest xap as their requests are not going through a proxy.
I've tried the various header/metadata techniques mentioned elsewhere to prevent caching, and the containing web page itself is being served up fresh, but I still get the old .xap file. Short of getting our IT admin to disable proxy caching for my site, is there anything I can do to make sure the latest xap file gets retrieved from the server instead of the proxy? The containing page is ASP.NET.
What I do is just add a query string to the end of the path to the xap file. Then when you change the query string value, the proxies etc. should see it as a request for a new file. So far this has worked fine for me.
So basically, when embedding an .xap in a straight-up HTML file, you would do this:
<param name="source" value="ClientBin/SilverlightApplication1.xap?cachepreventer=whatevervalue"/>
And then when you deploy a new version, just change "whatevervalue" to something else.
EDIT
If you need to use this technique in many places in your app, I would read the query string value from config and just write it to the page using ASP.NET. That way you only need to update it in one place when you deploy.
If you want to make sure the xap file is retrieved every time and you don't want to worry about it, just use:
<param name="source" value="ClientBin/YourSilverlightapp.xap?<%=Guid.NewGuid().ToString() %>"/>
Of course, this lends itself to a heavier load, since the xap is re-downloaded on every request. I do like the approach above, though, if you only want changes to be propagated to the client.