Considering the sandboxing factor, and OTOH that MS appears to be giving renewed attention to the value of the desktop in the aftermath of the UWP/Win 8 failings, could a WinUI 3 app, granted some level of trust below corporate/institutional, be permitted to do IPC with a Win32 Explorer shell extension
AND
have all components, extension dll included, packaged in such a way that it may be a Store app
AND
participate in native Store ecommerce, thus not having to be
kicked out into the cold of 3rd-party payment/registration processing for the customer to buy the app (after a trial period etc.)?
Follow-on question: is there some intrinsic security reason that would prevent MS from offering Store payment APIs/services to anything but sandboxed applications or those with a level of corporate/institutional trust? Or is it maybe just not a priority?
thanks
This is a wrap-up of several open questions in the Vaadin forum (which is moving to Stack Overflow) - see 2, 3, 4, ...
The basic question is: "How can one run a (recent V14 LTS or V19/20) Vaadin Flow application reliably on the Google App Engine (GAE, manual or automatic scaling) standard environment (i.e. no Docker, no Google Compute Engine), without experiencing constant refreshing of components?"
Vaadin has a tutorial for deploying Vaadin applications to GAE flexible (not standard, as asked in this question). This tutorial doesn't mention that one might run into trouble when GAE switches the server instance. Even GAE mentions Vaadin as one of the supported frameworks.
Old Vaadin 8 release notes state that support for GAE has been dropped.
According to the questions in the Vaadin forum, Vaadin Flow on GAE will lead (or at least has led, in older versions of Vaadin) to constant refreshing of components and/or loss of session state.
If one uses manual scaling, one's application can rely on the state of the memory over time, so Vaadin Flow applications should not be bothered by switches of the server instance (which will eventually occur when an instance is shut down due to an error or for maintenance reasons).
So the first question is: "When running a Vaadin Flow application on GAE standard with manual scaling, will it lead to constant refreshing of components and/or loss of session state even when the instance is not switched?"
If that works, then Vaadin Flow is fine on GAE standard when one needs neither dynamic scaling nor high availability. If that does not work, Vaadin Flow on GAE standard would be a no-go for any type of application (and one would need to switch to another provider, or to GAE flexible and Docker).
The next question is: "What has to be done to make a Vaadin Flow application on GAE standard run reliably, even when instances are scaled or switched due to maintenance?"
The following suggestions were stated in the forum - but no one ever confirmed that they work:
One could have set <sessions-enabled>true</sessions-enabled> for the Java 8 runtime (this setting no longer exists for Java 11). Even when the number of instances changes or instances are restarted, this could have been the solution, since session data is stored in Memcache, which is available across all instances. (See the configuration snippet below the list.)
When instances are moved or shut down, Google sends a shutdown notification -> one could implement a shutdown hook and try to serialize all session state (if Vaadin provides a way to serialize it manually and automatically de-serialize it when another instance takes over); a sketch follows below.
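For reference, on the Java 8 runtime the first suggestion is a one-line setting in appengine-web.xml (removed in the Java 11 runtime):

```xml
<!-- appengine-web.xml (Java 8 runtime only) -->
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <sessions-enabled>true</sessions-enabled>
</appengine-web-app>
```

For the second suggestion, the GAE Java SDK exposes shutdown notifications via LifecycleManager (for manual/basic scaling). A minimal sketch; persistAllSessions() is hypothetical, since whether Vaadin state can be serialized on demand and restored on another instance is exactly the unconfirmed part:

```java
import com.google.appengine.api.LifecycleManager;
import com.google.appengine.api.LifecycleManager.ShutdownHook;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Registers a GAE shutdown hook when the webapp starts.
public class GaeShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        LifecycleManager.getInstance().setShutdownHook(new ShutdownHook() {
            @Override
            public void shutdown() {
                // Hypothetical: walk all active sessions and write their
                // serialized form to a shared store (Datastore/Memcache)
                // so that a fresh instance could restore them.
                persistAllSessions();
            }
        });
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) { }

    private void persistAllSessions() {
        // Intentionally left open - this is the unanswered question.
    }
}
```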
Has anyone found a reliable solution for this?
I think it's not possible.
According to https://stackoverflow.com/a/10640357/377320, GAE uses the Datastore to store session information and only synchronizes objects set via session.setAttribute(). However, according to https://mvysny.github.io/vaadin-14-session-replication/, Vaadin doesn't call setAttribute(), and it does that on purpose. An illustration of the difference follows below.
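To illustrate why that matters: containers that replicate on setAttribute() only pick up values that are explicitly written back, while mutating an object already stored in the session is invisible to them. A minimal sketch with a plain servlet session (the class and attribute names are made up):

```java
import java.util.ArrayList;
import java.util.List;
import javax.servlet.http.HttpSession;

public class SessionWriteExample {

    @SuppressWarnings("unchecked")
    static void addItem(HttpSession session, String item) {
        List<String> items = (List<String>) session.getAttribute("items");
        if (items == null) {
            items = new ArrayList<>();
        }
        items.add(item); // in-place mutation alone is NOT seen by GAE's replication

        // Only this explicit write-back is synchronized across instances;
        // per the links above, Vaadin deliberately skips this step.
        session.setAttribute("items", items);
    }
}
```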
That means that GAE won't synchronize the state properly, but requests will land on random nodes (since session affinity/sticky sessions are not supported on GAE), so some requests will hit nodes holding obsolete Vaadin state. That leads me to believe that the most likely outcome is that Vaadin components will constantly lose state, and Vaadin will frequently attempt to resync the UI state by performing browser reloads.
Vaadin Flow works best with sticky sessions and long-running servers, while GAE sounds like the opposite of that.
I believe your best bet is to use Vaadin Fusion with a stateless server.
Regarding serializing all session state (e.g. on a shutdown hook): this requires that all of your stateful objects - all of your data beans, Vaadin Flow views, compositions etc. - are serializable. In other words, all classes must implement java.io.Serializable. This can be done, as all classes from Vaadin should be serializable, but it's up to the application developer to make sure that all custom instantiable classes in the codebase (and any instantiable classes in the dependencies) can be serialized and deserialized; a small example follows below. Based on practical experience, this is not a trivial requirement - it usually drives the application's architecture by limiting or changing the design patterns used in the code. Making an existing codebase fully serializable will likely incur significant refactoring work. This is one of the big reasons why sticky sessions (which are not available in GAE) are the recommended approach for deploying Vaadin Flow applications in multi-node environments.
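A minimal sketch of what "fully serializable" means for a Flow view (the view and service names are made up for illustration): every field reachable from the view must itself be Serializable, or be declared transient and re-acquired after deserialization:

```java
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.textfield.TextField;
import com.vaadin.flow.router.Route;

@Route("profile")
public class ProfileView extends VerticalLayout {

    // Vaadin components are Serializable, so plain component fields are fine.
    private final TextField name = new TextField("Name");

    // A hypothetical non-serializable dependency: it must be transient and
    // looked up again after deserialization (e.g. in onAttach) instead of
    // being carried along with the session.
    private transient UserService userService;

    public ProfileView() {
        add(name);
    }

    // Stand-in for some non-serializable backend service.
    interface UserService { }
}
```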
My application has been running for months now and I must say that it has worked just fine. BUT we never needed more than one instance (basic_scaling: max_instances: 1), so no sticky sessions were needed, and my conclusions might not be valid if one needs more than one instance:
both F- and B-instance classes worked fine - not a single switch has occurred so far, and no sessions were lost
if the instance needs to be restarted due to an idle timeout, this is just a matter of ~10 seconds, which is OK for the test instance. For production I would recommend manual scaling with 24 instance-hours per day (so the instance will never be restarted)
Session affinity can be set in app.yaml according to https://cloud.google.com/appengine/docs/flexible/java/using-websockets-and-session-affinity, so it might well work with multiple instances (see the snippet below)
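For reference, the relevant setting from that page (GAE flexible environment) is a one-liner in app.yaml:

```yaml
# app.yaml (flexible environment): route a client's requests to the same instance
network:
  session_affinity: true
```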
I'd like to retrieve a log of crashes for my mobile app from IBM Mobile Quality Assurance (MQA) so that I can filter/process these crashes based on certain characteristics. In particular, I want to exclude crashes that occur in the device simulator, since these are most likely happening during automated tests and are not actual user crashes.
Is there a REST API or other means to obtain a list of crashes (with details) for my app from IBM MQA?
We are working to improve the MQA filtering features; however, at this time we don't have a way to filter crashes by device. We recently announced that MQA in Bluemix now supports integration with the following bug tracking systems: Jira, TFS, HP Quality Center, GitHub, FogBugz.
If you use one of these BTSs, it's possible to use their filtering process, if any.
I will speak to the MQA Product Manager about taking this as a future enhancement request.
Currently clouds are mushrooming like crazy and people are starting to deploy everything to the cloud, including CMS systems, but so far I have not seen people who have succeeded in deploying popular CMS systems to a load-balanced cluster in the cloud. Some performance hurdles seem to prevent standard open-source CMS systems from being deployed to the cloud like this.
CLOUD: A cloud, or better, a load-balanced cluster, has at least one frontend server, one network-connected(!) database server and one cloud-storage server. This fits well with Amazon Beanstalk and Google App Engine. (This specifically excludes a CMS on a single computer or a Linux server with MySQL on the same "CPU".)
Deploying a standard CMS in such a load-balanced cluster requires a cloud-ready CMS with the following characteristics:
The CMS must deal with the latency of queries to still be responsive, and render pages in less than a second so they can be cached (or use a precaching strategy)
The filesystem probably must be connected to remote storage (Amazon S3, Google Cloud Storage, etc.)
Currently I know of Python/Django and WordPress having middleware modules or plugins that can connect to cloud storage instead of a filesystem, but there might be other cloud-ready CMS implementations (Java, PHP, ?) and systems.
I myself have failed to deploy django-CMS to the cloud, ultimately due to the query latency of the remote DB. So here is my question:
Did you deploy an open-source CMS that still performs well in rendering pages and backend admin? Please post your average page-rendering stats in milliseconds for uncached pages.
IMPORTANT: Please describe your configuration, the problems you encountered, and which modules had to be optimized in the CMS to make it work. Don't post a simple "this works"; contribute your experience and knowledge.
Such a CMS probably has to make fewer than 10 queries per page (if more, the queries must be made in parallel), and deal with filesystem access times of 100ms for a stat and query delays of 40ms; see the sketch below.
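To make the arithmetic concrete: 10 sequential queries at 40ms each already cost 400ms of a sub-second budget, while issuing them in parallel costs roughly one roundtrip. A minimal sketch in Java (the remote query is stubbed with a 40ms sleep):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelQueries {

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(10);

        // Fire all 10 queries at once instead of one after another:
        // ~40ms total instead of ~400ms.
        List<CompletableFuture<String>> futures = IntStream.range(0, 10)
                .mapToObj(i -> CompletableFuture.supplyAsync(() -> runQuery(i), pool))
                .collect(Collectors.toList());

        List<String> rows = futures.stream()
                .map(CompletableFuture::join) // waits about as long as the slowest query
                .collect(Collectors.toList());

        System.out.println(rows.size() + " queries answered");
        pool.shutdown();
    }

    // Stub standing in for a remote DB call with ~40ms roundtrip latency.
    private static String runQuery(int i) {
        try {
            Thread.sleep(40);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "row-" + i;
    }
}
```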
Related:
Slow MySQL Remote Connection
Have you tried Umbraco?
It relies on a database, but it keeps layers of cache so you aren't doing selects on every request.
http://umbraco.com/azure
It works great on Azure too!
I have found an excellent performance test of WordPress on App Engine. It appears that Google has spent some time optimizing this system for load-balanced cluster and remote-DB deployment:
http://www.syseleven.de/blog/4118/google-app-engine-php/
Scaling test from the report (results per number of parallel hits; decimal commas as in the original):

| parallel hits | GAE  | 1&1 | Sys11 |
|---------------|------|-----|-------|
| 1             | 1,5  | 2,6 | 8,5   |
| 10            | 9,8  | 8,5 | 69,4  |
| 100           | 14,9 | -   | 146,1 |
Conclusion from the report: the system is slower than on traditional hosting, but scales much better.
http://developers.google.com/appengine/articles/wordpress
We have managed to deploy Python django-CMS (www.django-cms.org) on Google App Engine with Cloud SQL as the DB and Cloud Storage as the filesystem. Cloud Storage was attached by forking and fixing a django.storage module by Christos Kopanos: http://github.com/locandy/django-google-cloud-storage
After that, the second set of problems came up as we discovered we had access times of up to 17s for a single page access. We investigated this and found that easy-thumbnails 1.4 accessed the normal file system for mod_time requests while writing results to the store (rendering all thumbnail images on every request). We switched to the development version, where that was already fixed.
Then we worked with SmileyChris to fix the unnecessary access of mod_times (stat'ing the file) on every request for every image, by tracing and posting issues to http://github.com/SmileyChris/easy-thumbnails
This reduced access times from 12-17s to 4-6s per public page on the CMS, basically eliminating all storage/"file"-system access. Once that was fixed, easy-thumbnails replaced (by design) file-system accesses with queries to the DB, to check on every request whether a thumbnail's source image has changed.
One thing for the web designer: if she uses an image.width statement in the template, this forces an ugly, slow read on the "filesystem", because image widths are not cached.
Further investigation led to the conclusion that DB accesses are very costly too, taking about 40ms per roundtrip.
Up to now the deployment is unsuccessful, mostly due to DB access times in the cloud leading to 4-5s delays when rendering a page before it is cached (at ~40ms per roundtrip, that corresponds to on the order of a hundred serial queries per page).
I am aiming to simulate a large number of 'real users' hitting and realistically using our site at the same time, and to ensure they can all get through their use cases. I am looking for a framework that combines some EC2 grid management with a web automation tool (such as Geb/Watir). Ideal 'pushbutton' operation would do all of this (a start/teardown sketch follows the list):
Start up a configurable number of EC2 instances (using a specified AMI preconfigured with my browser automation framework and test scripts)
Start the web automation framework test(s) running on all of them, in parallel. I guess they would have to be headless.
Wait for completion
Aggregate results
Shut down EC2 instances.
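For reference, the start/teardown bookends of such a pushbutton script are straightforward with the AWS SDK for Java (v1 shown; the AMI ID, instance type and count are placeholders, and the test-running/aggregation steps in the middle are left out):

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;
import com.amazonaws.services.ec2.model.TerminateInstancesRequest;
import java.util.List;
import java.util.stream.Collectors;

public class GridRunner {

    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // 1. Start N instances from an AMI prebaked with the test scripts.
        RunInstancesResult result = ec2.runInstances(new RunInstancesRequest()
                .withImageId("ami-0123456789abcdef0") // placeholder AMI
                .withInstanceType("t3.medium")
                .withMinCount(5)
                .withMaxCount(5));

        List<String> ids = result.getReservation().getInstances().stream()
                .map(Instance::getInstanceId)
                .collect(Collectors.toList());

        // 2.-4. Kick off the headless tests (e.g. via SSH or user-data),
        // wait for completion and pull the results - omitted here.

        // 5. Tear the grid down again.
        ec2.terminateInstances(new TerminateInstancesRequest().withInstanceIds(ids));
    }
}
```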
While not a framework per se, I've been really happy with http://loader.io/
It has an API for your own custom integration, plus reporting and analytics for analysis.
PS. I'm not affiliated with them, just a happy customer.
However, in my experience, you need to do both load testing and actual client testing. Even loader.io will only hit your service from a handful of hosts. And it skips a major part: the client-side performance from a number of different clients' browsers.
This video has more on that topic:
http://www.youtube.com/watch?v=Il4swGfTOSM&feature=youtu.be
BrowserMob used to offer such a service. Looks like they got acquired.
If we had the possibility to run a GAE app without any code change on our own servlet engine, that would be great because:
in case Google changes their billing policy, or their current policy doesn't fit our app's needs, we can just jump to our own server
we can do stuff which is not allowed on GAE, compromising on 1 JVM and 1 DB
We don't actually need a distributed system, but more of a realtime system with synchronization, true locking mechanisms, other servers/software installed on the server machine, a socket interface, etc.
Such a package should include at least:
Tomcat (or equivalent)
DataNucleus Access Platform
(Task Queue service)
Any idea if it's easy to get such a thing, or if it already exists somewhere?
Thanks
Good question - GAE is excellent, but it has considerable limitations, so I think it is a good idea to keep your options open. With that in mind, here are some options.
http://appscale.cs.ucsb.edu/
"AppScale is a platform that allows users to deploy and host their own Google App Engine applications. It executes automatically over Amazon EC2 and Eucalyptus as well as Xen and KVM. It has been developed and is maintained by the RACELab at UC Santa Barbara.
There is also TyphoonAE, but it is Python-specific, so probably not useful for you.
Also take note of the Siena project...
http://www.sienaproject.com/index.html
This is supposed to provide GAE/J users with a persistence API that is better suited to the GAE Datastore than JDO/JPA, but is still portable to other platforms.