Is Google really blocking authentication from embedded frameworks (e.g. CEF)? - chromium-embedded

This blog post - https://security.googleblog.com/2019/04/better-protection-against-man-in-middle.html - indicates that Google is planning to block authentication from embedded frameworks in June 2019.
This is a short time frame for our desktop applications that utilize CEF.
I haven't been able to find any contact information to better understand the exact timing of this change. Information in the forums indicates that I should post to Stack Overflow - so here I am.
Can anyone shed light on the exact timing? We are scrambling to try and determine a path forward.

Related

MEAN.JS, high latency / ways to find bottlenecks in web development

:)
I recently came across MEAN.JS. I'm still a beginner in web development, but everything has worked really well so far. Except for one thing.
Unfortunately, all requests seem to take a huge amount of time - 300 - 4000(!) ms for a single call (have a look at the screenshot). I'm developing locally on a state-of-the-art computer and wonder where the bottleneck might be. Does anyone have the same issues? Could you give me a hint on how to attack this problem?
I've had a look at this and similar posts, but couldn't find a way to tackle it.
What are the ways to find bottlenecks in a web application?
The framework uses MongoDB, ExpressJS, AngularJS, and Node.js. Could you give me a hint on how to track down the source of those latencies in a JavaScript-based application? (Maybe a tool, plugin, or best-practice approach in development?) Have you experienced similar issues?
Greetings,
Tea
It's hard to guess what's wrong, as that latency can originate from many sources. However, if we put aside computer and network problems/configurations, and assuming you don't have any other processes running that could affect your app's performance, the first thing I would check is the Express configuration, i.e. the order in which the middleware is loaded. A misplaced middleware can indeed hurt the app's performance.
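To make the ordering point concrete - this is just a minimal sketch with hypothetical routes, not your actual setup - put a cheap timing middleware first so you can see where the time goes, serve static assets before the heavier middleware, and keep body parsing and the like below that:

    import express from "express";

    const app = express();

    // Per-request timing: log how long each request takes so the slow ones stand out.
    app.use((req, res, next) => {
      const start = Date.now();
      res.on("finish", () => {
        console.log(`${req.method} ${req.originalUrl} -> ${res.statusCode} in ${Date.now() - start}ms`);
      });
      next();
    });

    // Static assets first, so they don't pay for body parsing, sessions, auth, etc.
    app.use(express.static("public"));

    // Heavier middleware only after the cheap paths have been handled.
    app.use(express.json());

    // Hypothetical route, just for illustration.
    app.get("/api/ping", (_req, res) => {
      res.json({ ok: true });
    });

    app.listen(3000);

If the static files and trivial routes come back fast but your API routes don't, the time is more likely going into the MongoDB queries than into Express itself.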

Difference between Salesforce and Salesforce1?

If I were to explain to a lay-person, who is somewhat familiar with Salesforce, the difference between Salesforce.com and Salesforce1, without delving much into the technical aspects, what would be the best way to do so?
This?
Did you try googling your question?
[EDIT]
This post, and this article sum it up better than I can.
The main points, boiled down are:
1) Developers can develop on any platform and it will work with SF1. Previously, it took sort of a specialized knowledge to develop in SF
2) There's a command line that can be used to perform functions or get data, and it is specifically useful and less time-consuming for SF specialists and devs
3) "Committing to open source" means that the SF1 product will incorporate the best of the tech community into their product, for free
4) Integration with another cloud database company/system, Heroku, allows a broader range of data access - not just through the SF application
5) All of the SF product features will be available on mobile in SF1.
IDK - that's my shot at simplifying the explanations I've read. I could be misstating a technical detail slightly (which I guess is fine based on what you need) and I'm sure there are better ways of reducing the topic to more common language.
Hope this helps.
Both Salesforce.com and Salesforce1 can be accessed via a web client running on a PC, tablet, or phone.
Salesforce1 is optimized to work on a small touchscreen and can, via 3rd-party apps, access iOS or Android features like GPS location data.

Should I use GAE + Lift for my Scala based webapp?

Such questions have been asked before - but all of the answers are outdated now.
I am looking to work on a Scala-based webapp. I understand this question can be split into two, but I am posting them as one because they rely on the same context, there being a dependency between the hosting platform and the frameworks used.
I have read multiple (awesome) debates on Play! and Lift, but cannot find a good comparison between Play! 2.1 and Lift. How do I decide which one is better for my scenario (a social network website)?
Similarly, this discussion has some very good arguments as to which platform to use if I go with Lift, but it's from 2010 and seems outdated. The recommended provider (stax.net) is dead (or, I guess, merged with cloudbees.com). I am personally inclined towards GAE, as it is quick to start with, but I am unsure whether these issues still prevail:
Support for actors (I am not sure if Akka helps us solve this problem)
Requests for a given session being served by different JVMs without notice to the running app
Quoting David Pollak (lead author of Lift):
GAE is slow and non-scalable, despite Google's claims (everyone I've spoken with that have tried to scale GAE apps have failed and gone elsewhere). GAE locks you into a tremendously suboptimal storage mechanism. GAE is free, but so is Stax and there are many inexpensive options including SliceHost. Next up, you've got Amazon EC2 and RackSpace. So, I haven't found a good reason for anyone to use GAE. And if there's no good reason to use GAE, devoting a pile of resources to code around the GAE JVM incompatibilities (e.g., no new threads) seems like a waste.
Another issue if I go with GAE is the lack of Play! 2.1 support - I still don't see a module for that. There is also the difficulty of migrating to other databases in the future (although I hear migrating to MongoDB should be relatively easy). The worst case would be to move out of GAE and use AppScale.
Personally, I use Lift, Cloudbees, and MongoLab as my first choice for most of my projects. I tried several cloud hosting services to no avail (Heroku and RedHat in particular; I don't think I tried GAE due to the post from David Pollak that you have already referenced). To use Cloudbees, you just need an sbt plugin. Then it is as easy as running the cloudbees-deploy target. Within a minute, your code is up and running. I was floored by how easy it was. I went with Mongo primarily because of this excellent g8 template (note: there is now an SQL equivalent).
Another thing I really like about Cloudbees and MongoLab is they both have free services. It's great for me because I only work on these projects in my free time, so I don't want to spend any money while my ideas are half-baked.
As for Lift, I can't compare it much to Play. I downloaded/installed Play and was immediately turned off by how MVC it is. The view-first approach, albeit foreign to me, seemed a much more intuitive and powerful way to build web applications. I love how Lift doesn't obscure the fact that I am indeed developing a web application; I often feel that MVC frameworks try to keep all of the HTML/CSS/JS etc. at arm's length.
The question is quite open so I will share my experience and opinion regarding Scala web app development as it might help you with your decision.
I built my first Scala web application using Scalatra and Scalate, with Jetty as the server. The app is hosted on an Amazon EC2 instance and I've had no problems with it... it's been running since the end of 2011 with only one small blip that took 10 minutes to resolve. I found it a good experience for learning to use Scala in web applications.
http://www.scalatra.org/
Typesafe (http://typesafe.com) appears to have opted for the Play Framework, so for my next Scala-based web app I am likely to go for Play. A book I have been reading on the Play Framework is "Play for Scala"; it was just published this month (Oct 2013).
http://www.manning.com/hilton/
My impression is that Lift was the go-to framework in the past but that this has shifted to the Play Framework.

Silverlight Multi-User application with synchronization

I am wondering if it's possible to create a graphical application in Silverlight which supports synchronisation between the different clients.
To be a bit more precise, I am sketching out concepts for developing a Silverlight game. Visitors would log in and see, live and synchronised, what the other visitors are doing.
If it is possible to implement this, I would like to know what is needed to create a fully synched Silverlight environment between multiple peers. Anything from links, code snippets, ideas and/or alternatives is more than appreciated!
Please do not suggest Flash, as I do not own a valid Flash building license, I prefer to have this created within Visual Studio 2010.
Edit:
I want it to be as lightweight for the clients as possible (I don't care much about the server) and low on bandwidth consumption. I don't know whether a broadcasting principle is the only option to have all the events take place at the same time?
You may want to take a look at the Polling Duplex protocol of WCF. This is the publish/subscribe concept. Support in SL has been around since version 2, so there are plenty of articles out there. An article I referenced for a message broadcast system we put in place at work can be found here...
http://tomasz.janczuk.org/2009/07/pubsub-sample-using-http-polling-duplex.html
which also mentions an interesting project on codeplex (I've not used)...
http://laharsub.codeplex.com/
A simple and working (but rather inefficient) solution would be for all clients to ask a WCF/RIA service on the server for status updates at regular intervals, perhaps once every X seconds or so, letting the server keep track of changes relevant to the calling clients.
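To make the polling idea concrete (a minimal sketch in TypeScript rather than Silverlight/C#, with a made-up /api/updates endpoint and version counter - not an actual WCF client):

    // Each client periodically asks the server for changes it hasn't seen yet,
    // identified by a simple version counter the server increments on every change.
    interface Update {
      version: number;
      payload: unknown;
    }

    let lastSeenVersion = 0;

    async function poll(): Promise<void> {
      const res = await fetch(`https://example.com/api/updates?since=${lastSeenVersion}`);
      if (!res.ok) return; // skip this round on errors and try again next tick
      const updates = (await res.json()) as Update[];
      for (const u of updates) {
        lastSeenVersion = Math.max(lastSeenVersion, u.version);
        // apply the update to the local game state here
        console.log("applying update", u.version, u.payload);
      }
    }

    // Poll every 2 seconds; a real client would back off on errors and stop when idle.
    setInterval(() => void poll(), 2000);

The shorter the interval, the closer to "live" it feels, but the more load and bandwidth you burn - which is exactly why the duplex/push approach above is the better fit for a game.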

Looking for an example of when screen scraping might be worthwhile

Screen scraping seems like a useful tool - you can go onto someone else's site and steal their data - how wonderful!
But I'm having a hard time seeing how useful this could actually be.
Most application data is pretty specific to that application even on the web. For example, let's say I scrape all of the questions and answers off of StackOverflow or all of the results off of Google (assuming this were possible) - I'm left with data that is not very useful unless I either have a competing question and answer site (in which case the stolen data will be immediately obvious) or a competing search engine (in which case, unless I have an algorithm of my own, my data is going to be stale pretty quickly).
So my question is, under what circumstances could the data from one app be useful to some external app? I'm looking for a practical example to illustrate the point.
It's useful when a site publicly provides data that is (still) not available as an XML service. I had a client who used scraping to pull flight tracking data into one of his company's intranet applications.
The technique is also used for research. I had a client who wanted to compare the contents of several online dictionaries by part of speech, and all of these sites had to be scraped.
It is not a technique for "stealing" data. All ordinary usage restrictions apply. Many sites implement CAPTCHA mechanisms to prevent scraping, and it is inappropriate to work around these.
A good example is StackOverflow - no need to scrape data as they've released it under a CC license. Already the community is crunching statistics and creating interesting graphs.
There's a whole bunch of popular mashup examples on ProgrammableWeb. You can even meet up with fellow mashupers (O_o) at events like BarCamps and Hack Days (take a sleeping bag). Have a look at the wealth of information available from Yahoo APIs (particularly Pipes) and see what developers are doing with it.
Don't steal and republish, build something even better with the data - new ways of understanding, searching or exploring it. Always cite your data sources and thank those who helped you. Use it to learn a new language or understand data or help promote the semantic web. Remember it's for fun not profit!
Hope that helps :)
If the site has data that would benefit from being accessible through an API (and it would be free and legal to do so), but they just haven't implemented one yet, screen scraping is a way of essentially creating that functionality for yourself.
Practical example -- screen scraping would allow you to create some sort of mashup that combines information from the entire SO family of sites, since there's currently no API.
Well, to collect data from a mainframe. That's one reason why some people use screen scraping. Mainframes are still in use in the financial world, and they are often running software that was written in the previous century. The people who wrote it might already be retired, and since this software is very critical for these organizations, they really hate it when new code needs to be added. So screen scraping offers an easy interface to communicate with the mainframe, to collect information from it and then send it onwards to any process that needs it.
Rewrite the mainframe application, you say? Well, software on mainframes can be very old. I've seen software on mainframes that was over 30 years old, written in COBOL. Often, those applications work just fine, and companies don't want to risk rewriting parts because it might break code that has been working for over 30 years! Don't fix things if they're not broken, please. Of course, additional code could be written, but it takes a long time for mainframe code to be used in a production environment. And experienced mainframe developers are hard to find.
I myself had to use screen scraping in a software project too. It was a scheduling application which had to capture the console output of every child process it started. That's the simplest form of screen scraping, actually; many people don't even realize that if you redirect the output of one application to the input of another, it's still a kind of screen scraping. :)
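A minimal sketch of that simplest case (Node.js/TypeScript here, with a stand-in command - not the original scheduler): start a child process, collect whatever it writes to stdout, and pass the text on.

    import { spawn } from "node:child_process";

    // Run a command and capture everything it prints to stdout.
    function captureOutput(command: string, args: string[]): Promise<string> {
      return new Promise((resolve, reject) => {
        const child = spawn(command, args);
        let output = "";
        child.stdout.on("data", (chunk: Buffer) => {
          output += chunk.toString();
        });
        child.on("error", reject);
        child.on("close", (code) => {
          if (code === 0) {
            resolve(output);
          } else {
            reject(new Error(`child exited with code ${code}`));
          }
        });
      });
    }

    // "ls -l" is just a stand-in for whatever job the scheduler would start.
    captureOutput("ls", ["-l"]).then((text) => {
      // The captured text can now be parsed, logged, or forwarded to another process.
      console.log("child produced", text.length, "bytes of output");
    });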
Basically, screen scraping allows you to connect one (web) application with another one. It's often a quick solution, used when other solutions would cost too much time. Everyone hates it, but the amount of time it saves still makes it very efficient.
Let's say you wanted to get scores from a popular sports site that did not make the information available via an XML feed or API.
For one project we found a (cheap) commercial vendor that offered translation services for a specific file format. The vendor didn't offer an API (it was, after all, a cheap vendor) and instead had a web form to upload and download from.
With hundreds of files a day, the only way to do this was to use WWW::Mechanize in Perl, screen scrape its way through the login and upload forms, submit the file, and save the returned file. It's ugly and definitely fragile (if the vendor changes the site in the least, it could break the app), but it works. It's been working now for over a year.
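The flow itself is simple, even if the result is fragile. Roughly - a TypeScript sketch with made-up URLs, form fields, and credentials; the real thing was WWW::Mechanize in Perl - it logs in, keeps the session cookie, posts the file, and saves whatever comes back:

    import { readFile, writeFile } from "node:fs/promises";

    // Hypothetical endpoints and field names, for illustration only.
    const LOGIN_URL = "https://vendor.example.com/login";
    const UPLOAD_URL = "https://vendor.example.com/translate";

    async function translateFile(inputPath: string, outputPath: string): Promise<void> {
      // 1. Log in with a plain form POST and keep the session cookie.
      const login = await fetch(LOGIN_URL, {
        method: "POST",
        body: new URLSearchParams({ user: "me", password: "secret" }),
        redirect: "manual",
      });
      const cookie = login.headers.get("set-cookie") ?? "";

      // 2. Upload the file contents as if submitting the site's upload form.
      const contents = await readFile(inputPath);
      const upload = await fetch(UPLOAD_URL, {
        method: "POST",
        headers: { cookie, "content-type": "application/octet-stream" },
        body: contents,
      });
      if (!upload.ok) throw new Error(`upload failed: ${upload.status}`);

      // 3. Save whatever the site returns (the translated file).
      await writeFile(outputPath, Buffer.from(await upload.arrayBuffer()));
    }

    translateFile("in.dat", "out.dat").catch(console.error);

Any change to the login page or the form fields breaks the script, which is exactly the fragility described above.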
One example from my experience.
I needed a list of major cities throughout the world with their latitude and longitude for an iPhone app I was building. The app would use that data along with the geolocation feature on the iPhone to show which major city each user of the app was closest to (so as not to show exact location), and plot them on a 3D globe of the earth.
I couldn't find an appropriate list in XML/Excel/CSV format anywhere, but I did find this Wikipedia page with (roughly) the info I needed. So I wrote a quick script to scrape that page and load the data into a database.
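That script was nothing fancy. Roughly - a TypeScript sketch in which the URL, the table layout, and the regex are all made up, and a real scraper would use a proper HTML parser - it fetches the page, pulls a name and coordinates out of each row, and dumps the result to a file for loading into the database:

    import { writeFile } from "node:fs/promises";

    // Hypothetical source page; the real page's markup dictates the parsing below.
    const PAGE_URL = "https://en.wikipedia.org/wiki/List_of_largest_cities";

    interface City {
      name: string;
      lat: number;
      lon: number;
    }

    async function scrapeCities(): Promise<City[]> {
      const html = await (await fetch(PAGE_URL)).text();
      const cities: City[] = [];
      // Naive illustration: assume each row contains a city cell followed by
      // latitude/longitude spans. A robust scraper would use an HTML parser instead.
      const row = /<td[^>]*>([^<]+)<\/td>[\s\S]*?class="latitude">([\d.-]+)<[\s\S]*?class="longitude">([\d.-]+)</g;
      for (const m of html.matchAll(row)) {
        cities.push({ name: m[1].trim(), lat: Number(m[2]), lon: Number(m[3]) });
      }
      return cities;
    }

    scrapeCities()
      .then((cities) => writeFile("cities.json", JSON.stringify(cities, null, 2)))
      .catch(console.error);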
Any time you need a computer to read the data on a website. Screen scraping is useful in exactly the same instances that any website API is useful. Some websites, however, don't have the resources to create an API themselves; screen scraping is the developer's way around that.
For instance, in the earlier days of Stack Overflow, someone built a tool to track changes to your reputation over time, before Stack Overflow itself provided that feature. The only way to do that, since Stack Overflow had no API at the time, was to screen scrape.
The obvious case is when a webservice doesn't offer reverse search. You can implement that reverse search over the same data set, but it requires scraping the entire dataset.
This may be fair use if the reverse search also requires significant pre-processing, e.g. because you need to support partial matching. The data source may not have the technical skills or computing resources to provide the reverse search option.
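As a toy illustration of that pre-processing (made-up data, and a 3-gram index is just one way to do it): index every 3-character substring of the scraped records so that partial-match queries become cheap lookups instead of full scans.

    // Toy reverse-search index over a scraped dataset.
    const records = new Map<number, string>([
      [1, "Alpha Widget Co."],
      [2, "Beta Gadget Ltd."],
      [3, "Gamma Widget Works"],
    ]);

    // Map every lowercase 3-gram to the set of record IDs containing it.
    const index = new Map<string, Set<number>>();
    for (const [id, text] of records) {
      const t = text.toLowerCase();
      for (let i = 0; i + 3 <= t.length; i++) {
        const gram = t.slice(i, i + 3);
        if (!index.has(gram)) index.set(gram, new Set());
        index.get(gram)!.add(id);
      }
    }

    // Partial-match search: intersect the candidate sets for each 3-gram of the
    // query, then confirm with a substring check to drop false positives.
    function search(query: string): number[] {
      const q = query.toLowerCase();
      let candidates: Set<number> | undefined;
      for (let i = 0; i + 3 <= q.length; i++) {
        const ids = index.get(q.slice(i, i + 3)) ?? new Set<number>();
        candidates = candidates
          ? new Set([...candidates].filter((id) => ids.has(id)))
          : new Set(ids);
      }
      return [...(candidates ?? records.keys())].filter((id) =>
        records.get(id)!.toLowerCase().includes(q)
      );
    }

    console.log(search("widget")); // [1, 3]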
I use screen scraping daily. I run some eCommerce sites and have screen-scraping scripts running daily to gather product lists automatically from my suppliers' wholesale sites. This allows me to have up-to-date information on all the products available to me from several suppliers, and lets me flag non-economical margins due to price changes.
