How to utilize intranet bandwidth - intranet

Hi
Is it possible to measure which web pages are visited most and download their content so
that people can access them offline?
Basic scheme is:
There will be client software on each user's PC which extracts domain information from HTTP requests and decides whether the page is already available on the server or not.
On the server side there will be another piece of software which keeps the downloaded web pages up to date.
Do you think this is a good way of utilizing intranet bandwidth?
thanks

Squid: web browsers can use a local Squid cache as a proxy HTTP server, reducing access time as well as bandwidth consumption.
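As an illustration, a minimal squid.conf along those lines might look like this; the port, cache size, and intranet range are assumptions to adjust for your LAN:

    # Minimal squid.conf sketch (values are assumptions, adjust for your LAN)
    http_port 3128                              # port browsers use as their proxy
    cache_dir ufs /var/spool/squid 10000 16 256 # ~10 GB on-disk cache
    acl localnet src 192.168.0.0/16             # hypothetical intranet range
    http_access allow localnet
    http_access deny all

Browsers on the intranet are then pointed at the proxy on port 3128, and repeat requests for the same pages are served from the local cache instead of going over the internet link.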

Related

Enable an ASP.NET Core web application to work both with and without internet?

I have been developing an ASP.NET Core web application, published in production mode (on an online server); users access it with a specific domain name, log in, and do data entry from three different countries.
The problem is that sometimes, in one specific country, there is no internet access. My client wants the application to work both online and offline: if there is no internet access, the local branch must still be able to do data entry, and when the internet connection returns, the data should be sent to the online server database.
What is the best way to achieve this goal?
Please share your view or add a link to a good forum thread below.
Rationally, it is not possible for you to access a web app without internet; web apps are meant for network usage. However, I believe there is a workaround for such requirements. You can create a clone of your database for the branch that has no internet access, perform all transactions on the local machine, and when the connection comes back online, replicate the data from the local SQL Server into the online server database.
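A minimal sketch of that pattern, with the local store reduced to an in-memory queue for brevity; the DataEntry shape, the flush logic, and the replication URL are all hypothetical:

    // Sketch: store entries locally while offline, replicate once online.
    interface DataEntry { id: string; payload: unknown; createdAt: number; }

    const pending: DataEntry[] = [];   // stand-in for the local SQL clone

    function saveEntry(entry: DataEntry): void {
      pending.push(entry);             // always write locally first
      void flush();                    // then try to replicate
    }

    async function flush(): Promise<void> {
      while (pending.length > 0) {
        try {
          // hypothetical replication endpoint on the online server
          await fetch("https://example.com/api/replicate", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(pending[0]),
          });
          pending.shift();             // confirmed, drop from the queue
        } catch {
          break;                       // still offline; retry later
        }
      }
    }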
And then there is something called Progressive Web Apps, which give you the following:
Reliable - Load instantly and never show the downasaur, even in uncertain network conditions.
Fast - Respond quickly to user interactions with silky smooth animations and no janky scrolling.
Engaging - Feel like a natural app on the device, with an immersive user experience.
What are Progressive Web Applications? Google has more to discuss here.
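The offline behaviour of a PWA comes from its service worker. A minimal cache-first sketch in TypeScript; the cache name and pre-cached file list are assumptions:

    /// <reference lib="webworker" />
    declare const self: ServiceWorkerGlobalScope;

    // Pre-cache the app shell on install.
    self.addEventListener("install", (event) => {
      event.waitUntil(
        caches.open("app-v1").then((cache) => cache.addAll(["/", "/index.html"]))
      );
    });

    // Serve from cache first, falling back to the network.
    self.addEventListener("fetch", (event) => {
      event.respondWith(
        caches.match(event.request).then((cached) => cached ?? fetch(event.request))
      );
    });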

With AngularJS-based single-page apps hosted on premises, how to connect to AWS cloud servers

Maybe this is a really basic question, but how do you architect your system so that your single-page application is hosted on premises with some hostname, say mydogs.com, while your application services code (as well as the database) is hosted in the cloud? For example, let's say you spin up an Amazon EC2 Container Service cluster using Docker, running a NodeJS server. The hostnames will all be of the form ec2_some_id.amazon.com. What system sits in front of the Amazon EC2 instance that my AngularJS app connects to? What architecture facilitates this type of app? Especially AWS-based services.
One of the important aspects of setting up the web application and the backend is to serve them from a single domain, avoiding cross-origin requests (CORS). To do this, you can use AWS CloudFront as a proxy, where routing happens based on URL paths.
For example, you can point the root domain to index.html while routing /api/* requests to the backend endpoint running in EC2.
It is also important for your Angular application to use full URL paths. One challenge with these is that for routes such as /home, /about, etc., a reload will request that path from the backend. Since it is a single-page application, there are no server pages for /home, /about, and so on. This is where you can set up error pages in CloudFront so that all not-found routes are forwarded to index.html (which serves the AngularJS app).
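In CloudFormation terms, the two pieces described above (the /api/* behaviour and the SPA fallback) look roughly like this; the origin IDs and the choice of error code are assumptions:

    # Rough sketch of the relevant CloudFront DistributionConfig pieces
    DefaultCacheBehavior:
      TargetOriginId: spa-s3                 # hypothetical static-site origin
      ViewerProtocolPolicy: redirect-to-https
    CacheBehaviors:
      - PathPattern: "/api/*"
        TargetOriginId: backend-ec2          # hypothetical EC2/ALB origin
        ViewerProtocolPolicy: https-only
    CustomErrorResponses:
      - ErrorCode: 404                       # route unknown to the origin
        ResponseCode: 200
        ResponsePagePath: /index.html        # hand unknown routes to the SPA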
The only thing you need to take care of is CORS on whatever server you use to host your backend in AWS.
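If your API does end up on a different origin, the headers boil down to something like the following Express sketch; the allowed origin and the route are placeholders:

    import express from "express";

    const app = express();

    // Allow the SPA's origin to call this API (origin is a placeholder).
    app.use((req, res, next) => {
      res.header("Access-Control-Allow-Origin", "https://mydogs.com");
      res.header("Access-Control-Allow-Methods", "GET,POST,PUT,DELETE,OPTIONS");
      res.header("Access-Control-Allow-Headers", "Content-Type,Authorization");
      if (req.method === "OPTIONS") return res.sendStatus(204); // preflight
      next();
    });

    app.get("/api/health", (_req, res) => res.json({ ok: true }));
    app.listen(3000);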
More Doc on CORS:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
Hope it helps.
A good approach is to have two separate instances: one instance to serve your API (Application Programming Interface) and another to serve your SPA (Single Page Application).
For the API server you may want a more robust service, because it is the one that will suffer the most, receiving tons of requests from all client instances, so it needs more performance, bandwidth, etc. In addition, you probably want your API server to be scalable when needed (depending on the load on it); maybe not, but it is something to keep in mind if your application is supposed to grow fast. So you may invest a little bit more in this one.
The SPA server, on the other hand, only serves static resources (if you're not using server-side rendering), so it can be cheaper (if not free). Furthermore, all it does is serve the application resources once; the application actually runs on the client, and most files end up being cached by the browser. So you don't need to invest much in this one.
Anyhow, your question about which service fits this type of application best can't really be answered, because it doesn't say much about how your application will be consumed by the clients: how many requests, how much download traffic, and how much storage your app needs. Once you know those requirements, you can find the service that fits them.
Amazon EC2 instance types

Advantage of deploying an AngularJS application with RESTful services on different servers

I want to deploy an AngularJS app on some web server and RESTful services on application servers like Tomcat.
Can anyone please let me know what the advantages and disadvantages are of deploying an AngularJS app and RESTful services on different servers versus the same server?
Which option will be better, considering authorization and performance?
Since the HTML/AngularJS code is downloaded to the clients' devices and the web service is then called by those clients, there is no gain in response time from having the app and the web service on the same server.
For the rest, it all depends on the load on your website. Distributing the HTML code to the clients does not create much load, but you will have an Apache (or nginx or whatever) + a Tomcat + your database running on the same server. That will be OK for most cases; it depends on the success of your website, but usually by the time you have to ask yourself how to manage such a load, you have the means to rethink the architecture!
The most important thing is to have your DB and your Tomcat on the same server!
For authorization, if you use a REST web service you will have to deal with those damn CORS headers, whether or not the app and the web service are on the same server.
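That said, one way to avoid CORS entirely is to put the web service behind the same Apache that serves the static app, so the browser only ever sees one origin. A minimal sketch, assuming mod_proxy and mod_proxy_http are enabled and Tomcat listens on port 8080 (paths are made up):

    # httpd.conf sketch: static Angular files plus a proxied REST API
    DocumentRoot "/var/www/angular-app"
    ProxyPass        "/api" "http://localhost:8080/api"
    ProxyPassReverse "/api" "http://localhost:8080/api"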
Overall, having two servers will be more flexible and share the load more evenly, but it will also increase the cost, so you will probably be fine with only one!

How did Google Cloud achieve scalability through virtualization?

I have a question about how Google App Engine achieves scalability through virtualization. For example, when we deploy a cloud app to Google App Engine and the number of users of our app increases, I think Google will automatically spin up a new virtual server to handle the requests. At first the cloud app runs on one virtual server, and later it runs on two. As I have read, Google achieved scalability through virtualization so that any one system in the Google infrastructure can run an application’s code—even two consecutive requests posted to the same application may not go to the same server.
Does anyone know how an application can run on two virtual servers at Google? How does it send requests to the two virtual servers, synchronize data, share CPU resources, and so on?
Is there any document from Google that explains this and the virtualization implementation?
This is in no way a specific answer, since we have no idea how Google does this. But I can explain how a load balancer works in Apache, which operates on a similar concept. Heck, maybe Google is using a variant of Apache load balancing. Read more here.
Basically, a simple Apache load-balancing structure consists of at least 3 servers: 1 head load balancer & 2 mirrored servers. The load balancer is basically the traffic cop for outside-world traffic. Any public request made to a website that uses load balancing will actually be requesting the “head” machine.
On that load balancing machine, configuration options basically determine which slave servers behind the scenes send content back to the load balancer for delivery. These “slave” machines are basically regular Apache web servers that are—perhaps—IP restricted to only deliver content to the main head load balancer machine.
So, assuming both slave servers in a load-balancing structure are 100% the same, the load balancer will randomly choose one to grab content from, and if it can grab the content in a reasonable amount of time, that “slave” becomes the source. If for some reason the slave machine is slow, the load balancer decides, “Too slow, moving on!” and goes to the next machine. It basically makes a decision like that for each request.
The net result is that the faster & more accessible server is the one whose content gets served. But because all the content is proxied behind the load balancer the public accesses, nobody in the outside world knows the difference.
Now let’s say the site behind a load balancer is so heavily trafficked that more servers need to be added to the cluster. No problem! Just clone the existing slave setup onto as many new machines as needed, adjust the load balancer to know that these slaves exist & let it manage the proxying.
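In Apache terms, the setup described above is only a few lines of mod_proxy_balancer configuration. A sketch, assuming mod_proxy, mod_proxy_http, mod_proxy_balancer & mod_lbmethod_byrequests are loaded (the hostnames are made up):

    # Head load balancer: proxy all public traffic to the slave pool
    <Proxy "balancer://mycluster">
        BalancerMember "http://slave1.internal"
        BalancerMember "http://slave2.internal"
        # clone a slave and add another BalancerMember line to grow the pool
    </Proxy>
    ProxyPass        "/" "balancer://mycluster/"
    ProxyPassReverse "/" "balancer://mycluster/"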
Now the hard part is really keeping all the machines in sync, and that all depends on site needs & usage. A DB-heavy website might use MySQL replication for each DB on each machine, or maybe have a completely separate DB server that is itself mirrored & clustered to other DBs.
All that said, Google’s key to success is how their load-balancing infrastructure works. It’s not easy & I have no clue what they do. But I am sure the basic concepts outlined above are applied in some way.

Google App Engine, Amazon EC2 and sockets

As far as I know, GAE does not support the use of raw TCP/IP sockets, i.e. java.net.ServerSocket. Is there any other well-known cloud service where I can use them, e.g. Amazon EC2?
My client application needs a permanent TCP connection to the server...
Thanks a lot
STeN
Any IaaS provider will allow you to do that. IaaS is Infrastructure as a Service, and Amazon EC2 is the best-known provider. With IaaS you can do all the same things you could do with a dedicated server; the only difference is that it uses virtualization, so you can deploy and undeploy servers within minutes. You can find a number of IaaS providers at cloudorado.com.
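For example, on an EC2 instance nothing stops you from running your own TCP listener. A minimal Node.js sketch (the port is arbitrary; the equivalent with java.net.ServerSocket works the same way on any IaaS machine):

    import { createServer } from "net";

    // Minimal TCP echo server holding a permanent connection per client.
    const server = createServer((socket) => {
      socket.on("data", (chunk) => socket.write(chunk)); // echo bytes back
    });

    server.listen(4000, () => console.log("listening on :4000"));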
GAE is PaaS - Platform as a Service. You don't deal with servers there at all; you don't even know how many servers your application is using. You just put your app (e.g. a WAR) into the service and it hosts it. The platform will take care of scaling, distributing, etc. But there is an expense: you need to limit yourself, since the application needs to be almost stateless (apart from the session object). You cannot start your own services, DB servers, threads, etc.
EDIT: It appears now to be possible with GAE Managed VMs: https://cloud.google.com/appengine/docs/managed-vms/
Sockets in GAE are a coming-soon feature.
I read it here: http://code.google.com/p/googleappengine/wiki/SdkForGoReleaseNotes
For now you need to sign up as a trusted tester to use this feature, but I guess this will be available to the public in the future.
