How can I transfer a DigitalOcean droplet to another host - ubuntu-18.04

I would like to switch to a different VPS host. How would I transfer a DigitalOcean droplet to another host?

It's a pain. I use DigitalOcean, AWS, Google Cloud, and Vultr. It's hypothetically possible, and the best explanation is here; however, as they point out, while it may be difficult for servers that have been active for a very long time, a fresh start is at least worth considering, given that the big cloud services use incompatible file formats for their snapshots.
Also, keep in mind that even if you can get the snapshot to boot on AWS or Azure or wherever you're going, the virtual network configuration will likely be totally different. You're going to be stuck redoing network configuration and possibly new reverse proxies and the like, probably with the machine only reachable through the slow browser-based console your new provider offers. I don't recommend it, from experience.
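If you do go the fresh-start route, the data move itself can stay simple. Here is a hedged sketch (run it from the old droplet; the host, user, and paths are placeholders, and it assumes rsync plus SSH key access to the new box):

```python
# Hedged sketch of the "fresh start" route: provision a new VM at the target
# provider, reinstall your stack there, then push the application data across.
# Run this on the old droplet. Host, user, and paths are placeholders.
import subprocess

NEW_HOST = "deploy@new-vps.example.com"          # placeholder
PATHS = ["/var/www", "/etc/nginx/sites-available", "/home/deploy/backups"]

for path in PATHS:
    # -a preserves permissions/ownership/timestamps, -z compresses over the wire,
    # --delete makes the destination mirror the source on repeated runs
    subprocess.run(
        ["rsync", "-az", "--delete", path + "/", f"{NEW_HOST}:{path}/"],
        check=True,
    )
```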

Related

How to access terabytes of data sitting in the cloud quickly?

We have terabytes of data sitting on Google Cloud disks. Initially, since we were using Google Cloud VMs, we were doing development work in the cloud and were able to access the data.
Now we have bought our own servers where our application runs, and we are bringing the data to our local disks, which will be accessed by our application. The thing is, transferring the data over the network using scp, especially terabytes of it, is quite slow. Can anyone suggest a way to fix this issue?
What I am thinking is: isn't there a way we can keep a script running on the Google Cloud instance waiting for requests (it sends the requested data over HTTP!), and from the local server we request the data one piece at a time?
I know this is again happening over the network, but I think we can scale with this approach, though I could be wrong! It's kind of a client-server (1:1) layout, like the interaction between a frontend and a backend. Any suggestions?
Would that be slow? Slower than bringing the data over with scp?
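Something like the following minimal sketch is what I have in mind (the host, port, and file names are placeholders); I realize it moves the same bytes over the same network, so it may not beat scp unless the requests are parallelized:

```python
# Rough sketch of the "script waiting for requests" idea. On the GCE instance,
# serve the data directory over HTTP, e.g.:
#
#   python3 -m http.server 8000 --directory /mnt/data
#
# On the local server, pull one file at a time with streaming so memory stays flat.
import requests

GCE_HOST = "http://INSTANCE_IP:8000"                 # placeholder
FILES = ["part-0001.bin", "part-0002.bin"]           # placeholder chunk names

for name in FILES:
    with requests.get(f"{GCE_HOST}/{name}", stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(name, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                out.write(chunk)
```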
You could download the full VM disk and mount it on your servers, or download the disk, copy the data off it, and then delete the VM disk. In either case, follow these steps:
Create a snapshot of your VM which will have all the data.
Build and export the VM image to your servers.
Run the image on your servers according to GCE requirements.
It would take a lot less time, since you're doing the copy on premises and avoiding network traffic.
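If it helps, here is a rough sketch of those steps driven from Python through the Cloud SDK. The project, zone, disk, snapshot, image, and bucket names are placeholders, and it assumes gcloud/gsutil are installed and authenticated (image export may also need the Cloud Build API enabled):

```python
# Hedged sketch: export a GCE disk as a VMDK you can run on premises.
# All resource names below are placeholders.
import subprocess

PROJECT = "my-project"
ZONE = "us-central1-a"
DISK = "my-vm-disk"            # the persistent disk holding the data
SNAPSHOT = "data-snapshot"
IMAGE = "data-image"
BUCKET = "gs://my-export-bucket"

def run(*args):
    """Run a gcloud/gsutil command and fail loudly if it errors."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1. Snapshot the disk so the export is consistent.
run("gcloud", "compute", "disks", "snapshot", DISK,
    "--project", PROJECT, "--zone", ZONE, "--snapshot-names", SNAPSHOT)

# 2. Turn the snapshot into an image.
run("gcloud", "compute", "images", "create", IMAGE,
    "--project", PROJECT, "--source-snapshot", SNAPSHOT)

# 3. Export the image to Cloud Storage in a format your hypervisor understands.
run("gcloud", "compute", "images", "export",
    "--project", PROJECT, "--image", IMAGE,
    "--destination-uri", f"{BUCKET}/{IMAGE}.vmdk", "--export-format", "vmdk")

# 4. Pull the exported image down to the local server (the one big network copy).
run("gsutil", "cp", f"{BUCKET}/{IMAGE}.vmdk", ".")
```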

Tomcat via Apache Server going down after too many connections

I have an Apache (2.4) Server that serves content through the AJP connector on a Tomcat 7 Server.
One of my clients manages to kill the Tomcat instance by running too many concurrent connections to a JSP JSON API service. (Apache still works, but Tomcat falls over; restarting Tomcat brings it back up.) There are no errors in Tomcat's logs.
I would like to protect the site from falling over like that, but I am not sure what configurations to change.
I do not want to limit the number of concurrent connections, as there are legitimate use cases for that.
My Tomcat memory settings are:
Initial memory pool: 1280 MB
Maximum memory pool: 2560 MB
which I assumed was plenty.
It might be worth mentioning that the API service relies on multiple, possibly heavy MySQL connections.
Any advice would be most appreciated.
Why don't you slowly switch your most-used/important application features to a microservices architecture and dockerize your Tomcat servers, so you can manage multiple instances of your application? This will hopefully help your application handle many connections without impacting the overall performance of the servers doing the work.
If you are talking about scaling, you need to do horizontal scaling here with multiple Tomcat servers.
If you cannot limit user connections and still want the app to run smoothly, then you need to scale. An architectural change to microservices is an option, but it may not always be feasible for a production solution.
The best thing to consider is running multiple Tomcat instances sharing the load. There are various ways to do this. With your tech stack, I feel the Apache 2 load balancer plugin in combination with Tomcat will do best.
There is an example here.
Now, with respect to server capacity, DB connection capacity, etc., you might also need to think about vertical scaling.

How can a number of Angular clients communicate among themselves even when they lose connection to a central server?

So the scenario is like this...
I have a number of different users in an organization. Each has his own session of an AngularJS app running in their browser. They share an internet connection over a local LAN.
I need them to continue working together (data, notifications, ... etc) even when they lose internet i.e. server side communication.
What is the best architecture for solving this?
Having clients communicate directly, without a server, requires peer-to-peer connections.
If your users are updating data that should be reflected in the database, then you will have to cache that data locally on the client until the server is available again. But if you want to first send that data to other peers, then you need to think carefully about which client will then update the database when the server comes back up: should it be the original client that made the edit, who may not be online anymore, or the first client that re-establishes a server connection? There is a lot to consider in your architecture.
To cope with this scenario you need the Angular service worker library, which you can read about here.
If you just want the clients/users to communicate without persisting data in the database (e.g. simple chat messages), then you don't have to worry about the above complexity.
Refer to this example, which shows how to use the simple-peer library with Angular 2.
An assisting answer (it doesn't fit in a comment) was provided here: https://github.com/amark/gun/issues/506
Here it is:
Since GUN can connect to multiple peers, you can have the browser connect to both outside/external servers AND peers running on your local area network. All you have to do is npm install gun and then npm start it on a few machines within your LAN, and then hardcode/refresh/update their local IPs in the browser app (you could perhaps even use GUN to do that, by storing/syncing a table of local IPs as they update/change).
Ideally we would all use WebRTC and have our browsers connect to each other directly. This is possible, but it has a big problem: WebRTC depends upon a relay/signalling server every time the browser is refreshed. This is kinda stupid and is the browser/WebRTC's fault, not GUN's (or other P2P systems'). So you'd have to also do (1) either way.
If you are on the same computer, in the same browser, in the same browser session, it is possible to relay changes (although I didn't bother to code for this, as it is kinda useless behavior) - it wouldn't work with other machines in your LAN.
Summary: As long as you are running some local peers within your network, and can access them locally, then you can do "offline" (where offline here is referencing external/outside network) sync with GUN.
GUN is also offline-first in that, even if 2 machines are truly disconnected, if they make local edits while they are offline, they will sync properly when the machines eventually come back online/reconnect.
I hope this helps.

How should I set up my Azure network for this particular web scenario

We're looking at moving off our Hosted company and onto Azure.
We're not sure what type of network setup we need with Azure, e.g. availability sets, etc.
We currently have
1x VM IIS Website (main site)
1x VM IIS Website (totally separate site with different UI/content etc).
1x VM IIS Website (JSON API).
1x dedicated SQL Server 2012 box, all tricked out big time with RAID 10, SSDs, and 24 GB of RAM.
(no IIS VM's are load balanced or scaled).
We're not doing -anything- special with IIS (eg. custom sections unlocked, etc) so we're hoping to move these over to WAWS so we can scale when needs be. (eg. add more instances).
SQL Server 2012 uses FTS (oh! le sigh!) so we'll probably go and get an A6 2012 R1 VM with SQL Standard (we need to be able to profile if a failure happens in production).
So, what we're hoping to setup is something like the following
SQL Server in Azure. IP-whitelist it for a) the Azure website private VLAN thingy (is this possible?) and b) about 3 public IPs.
3x WAWS for our IIS sites.
But we want to be able to update, say, the main website and not incur any downtime for the users. (NOTE: let's assume we're not doing any DB maintenance.)
So, is there something special we can do here to have, say, 1 instance up while the 2nd gets auto-updated, and then it does the other one? Do we need to worry about load balancing?
E.g. put the web servers on one subnet, 192.168.1.x, the DB on a 2nd subnet, 192.168.2.x, and then do this and that, etc.
Incidentally, I'm not sure if that's possible.
Lastly, I'm hoping to avoid using VMs or web workers for the websites, because I've found WAWS so nice to use, with less support/maintenance required.
You loaded that up with a lot of questions. I'll avoid the opinion-based ones (such as what you should do to set this up), and tackle the objective ones:
Azure Web Sites: Very easy to push code to, and simple to update without downtime, assuming you have more than one instance running (the changes are propagated, and not all at the exact same time to all instances). However: Azure Web Sites does not offer dedicated outbound IP addresses (only dedicated incoming, if you purchase an ssl cert). Therefore, you cannot include a site hosted in WAWS within a virtual network, nor can you add it to an IP whitelist on a VM's endpoint ACL.
Web Sites will take care of load-balancing for you, assuming you scale to multiple instances. By the way: those same instances would host all of your websites. Just like, with Cloud Services, you can deploy multiple websites to the same Web Role.
If you want to IP-whitelist your website, you'd need to go with cloud services (web role), or VM. Web Roles are fairly straightforward to construct; underneath, they're just Windows Server VMs. You have no OS maintenance to worry about; you just maintain the code project in Visual Studio, and push up a deployment package when it's time to update the app.
Also keep this in mind: with either Web Sites or Cloud Services (or VMs, for that matter), if you have static content such as CSS, images, Javascript, etc., you can store that in blob storage and update this content independent of your deployed code (assuming you've adjusted your app to point to blob storage for the source of such content).
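On the blob storage point, here is a hedged sketch of publishing static assets to a container with the current azure-storage-blob Python package (which postdates this thread); the connection string, container name, and local folder are placeholders:

```python
# Minimal sketch: push static assets (CSS/JS/images) to Azure Blob Storage so
# they can be updated independently of the deployed app.
# pip install azure-storage-blob
import mimetypes
from pathlib import Path
from azure.storage.blob import BlobServiceClient, ContentSettings

CONNECTION_STRING = "DefaultEndpointsProtocol=...;AccountName=...;AccountKey=..."  # placeholder
CONTAINER = "static"                      # placeholder container name
LOCAL_DIR = Path("wwwroot/static")        # placeholder local folder

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client(CONTAINER)

for path in LOCAL_DIR.rglob("*"):
    if path.is_file():
        blob_name = path.relative_to(LOCAL_DIR).as_posix()
        content_type, _ = mimetypes.guess_type(path.name)
        with open(path, "rb") as data:
            # overwrite=True lets you re-publish assets without redeploying the site
            container.upload_blob(
                name=blob_name,
                data=data,
                overwrite=True,
                content_settings=ContentSettings(
                    content_type=content_type or "application/octet-stream"),
            )
        print("uploaded", blob_name)
```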
Regarding availability sets: This is a mechanism for combining multiple virtual machines into a high-availability configuration: the VMs are spread out across racks, removing single-point-of-failure (e.g. top-of-rack router fails; you don't want all your VMs knocked out because of that). VMs in an availability set are also updated separately when it comes time for Host OS update (the OS running beneath the VMs). Otherwise, they'd all have the potential to be updated simultaneously.

Fastest Open Source Content Management System for Cloud/Cluster deployment

Currently clouds are mushrooming like crazy and people are starting to deploy everything to the cloud, including CMS systems, but so far I have not seen anyone succeed in deploying a popular CMS system to a load-balanced cluster in the cloud. Some performance hurdles seem to prevent standard open-source CMS systems from being deployed to the cloud like this.
CLOUD: A cloud, or rather a load-balanced cluster, has at least one frontend server, one network-connected(!) database server, and one cloud-storage server. This fits Amazon Beanstalk and Google App Engine well. (This specifically excludes a CMS on a single computer or a Linux server with MySQL on the same "CPU".)
To deploy a standard CMS in such a load balanced cluster needs a cloud-ready CMS with the following characteristics:
The CMS must deal with the latency of queries to still be responsive and render pages in less than a second to be cached (or use a precaching strategy)
The filesystem probably must be connected to a remote storage (Amazon S3, Google cloudstorage, etc.)
Currently I know of python/django and Wordpress having middleware modules or plugins that can connect to cloud storages instead of a filesystem, but there might be other cloud-ready CMS implementations (Java, PHP, ?) and systems.
I myself have failed to deploy django-CMS to the cloud, ultimately due to the query latency of the remote DB. So here is my question:
Did you deploy an open-source CMS that still performs well in rendering pages and backend admin? Please post your average page rendering access stats in microseconds for uncached pages.
IMPORTANT: Please describe your configuration, the problems you have encountered, which modules had to be optimized in the CMS to make it work, don't post simple "this works", contribute your experience and knowledge.
Such a CMS probably has to make fewer than 10 queries per page (if it makes more, the queries must be issued in parallel), and it has to cope with filesystem access times of around 100 ms for a stat and query delays of around 40 ms.
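To make the latency arithmetic concrete: at roughly 40 ms per round trip, 10 sequential queries already spend about 400 ms just waiting on the DB, while issuing them in parallel costs roughly one round trip. A small self-contained simulation (the sleep just stands in for a remote query):

```python
# Back-of-the-envelope check of the numbers above; fetch() simulates a 40 ms query.
import time
from concurrent.futures import ThreadPoolExecutor

QUERY_LATENCY = 0.040  # ~40 ms round trip to a remote DB

def fetch(query):
    time.sleep(QUERY_LATENCY)          # stand-in for the network round trip
    return f"result of {query}"

queries = [f"SELECT ... /* {i} */" for i in range(10)]

t0 = time.perf_counter()
sequential = [fetch(q) for q in queries]
t1 = time.perf_counter()

with ThreadPoolExecutor(max_workers=len(queries)) as pool:
    parallel = list(pool.map(fetch, queries))
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.3f}s")   # ~0.4 s
print(f"parallel:   {t2 - t1:.3f}s")   # ~0.04 s plus thread overhead
```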
Related:
Slow MySQL Remote Connection
Have you tried Umbraco?
It relies on a database, but it keeps layers of cache so you aren't doing selects on every request.
http://umbraco.com/azure
It works great on Azure too!
I have found an excellent performance test of WordPress on App Engine. It appears that Google has spent some time optimizing this system for load-balanced cluster and remote-DB deployment:
http://www.syseleven.de/blog/4118/google-app-engine-php/
Scaling test from the report:
parallel hits    GAE     1&1    Sys11
1                1.5     2.6      8.5
10               9.8     8.5     69.4
100             14.9      -     146.1
Conclusion from the report: the system is slower than on traditional hosting, but scales much better.
http://developers.google.com/appengine/articles/wordpress
We have managed to deploy the Python django-CMS (www.django-cms.org) on Google App Engine with Cloud SQL as the DB and Cloud Storage as the filesystem. Cloud Storage was attached by forking and fixing a Django storage module by Christos Kopanos (http://github.com/locandy/django-google-cloud-storage).
After that, a second set of problems came up as we discovered we had access times of up to 17 s for a single page access. We investigated this and found that easy-thumbnails 1.4 accessed the normal file system for mod_time requests while writing results to the store (rendering all thumbnail images on every request). We switched to the development version, where that was already fixed.
Then we worked with SmileyChris to fix unnecessary mod_time accesses (a stat of the file) on every request for every image, by tracing the problem and posting issues to http://github.com/SmileyChris/easy-thumbnails
This reduced access times from 12-17 s to 4-6 s per public page on the CMS, basically eliminating all storage/"file"-system access. Once that was fixed, easy-thumbnails replaced (by design) file-system accesses with queries to the DB to check, on every request, whether a thumbnail's source image has changed.
One thing for the web designer: if she uses an image.width statement in the template, this forces an ugly, slow read from the "filesystem", because image widths are not cached.
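If anyone else hits the image.width issue, one possible workaround (plain Django, not something django-CMS ships; the model and field names below are made up) is to let ImageField cache the dimensions in database columns so the template never touches the storage backend:

```python
# Hypothetical model for illustration: ImageField's width_field/height_field
# are populated when the file is saved, so later reads are plain DB columns.
from django.db import models

class MediaItem(models.Model):
    image = models.ImageField(
        upload_to="uploads/",
        width_field="image_width",     # cached on save
        height_field="image_height",
    )
    image_width = models.PositiveIntegerField(null=True, editable=False)
    image_height = models.PositiveIntegerField(null=True, editable=False)
```

In the template you would then read {{ item.image_width }} instead of {{ item.image.width }}, which is a plain column access rather than a stat against the remote storage backend.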
Further investigation led to the conclusion that DB accesses are very costly too, taking about 40 ms per round trip.
Up to now the deployment is unsuccessful, mostly due to DB access times in the cloud leading to 4-5 s delays when rendering a page before it is cached.
