I have heard that Docker solves the "works on my machine" issue for application deployment, and that SQL Server can be run inside a Docker container using Docker for Windows.
I have a C# WinForms application that I would like to deploy without DLL Hell.
Is it possible to use Docker for this?
Sort of, but I wouldn't.
Docker isn't meant for interactive / GUI-based applications at this point. There are some workarounds, but all of them are difficult from what I've read.
It's better to think of Docker as a server. You don't have a person sitting at a server all day long, clicking things to respond to requests that come in; you have code that runs, listening for requests and doing things in response.
Docker apps should be that type of app: one that runs on its own, exposes an API, and responds to requests.
... I would bet that this becomes possible in the not-so-distant future, but right now I don't think it's something Docker officially supports.
I have developed some Shiny apps which I want to make available to a few selected internal users for testing purposes and continued development.
Deploying the apps on the cloud or on shinyapps.io is not an option, as the apps are handling sensitive internal data.
Using Shiny Server is unfortunately also not an option, as we have a strict Microsoft-only IT architecture, and all I thus have available is
a virtual machine with Windows Server 2012 R2 on it.
I have been doing some web search and have found out the following:
i.) I could host my apps on the Windows machine as explained here: https://stackoverflow.com/a/44584982/7306540 . This seems rather hackish and not very elegant. It would only allow hosting one app at a time, and I am not sure whether it would allow several concurrent users at all.
ii.) I could use shinyproxy.io which would possibly work on the Windows machine but involves a fair amount of quite complex installation
and configuration work that I am not particularly keen on doing.
iii.) SQL Server 2016 seems to feature some sort of R integration. We are currently using SQL Server 2014 and it would be possible to upgrade to 2016 in principle. However, I don't know whether the "R features" of SQL Server 2016 would allow hosting of Shiny apps. I found this forum thread, https://social.technet.microsoft.com/Forums/windowsserver/en-US/1cf94cbb-c45d-4f8d-8b5e-9d208bfe369a/microsoft-r-server-can-i-host-shiny-apps-yet?forum=MicrosoftR , but it has no answer.
Does anyone know more about the capabilities of SQL Server 2016 in this regard?
What about other options? Is there any other way to host my apps on the Windows Server? Do the makers of RStudio plan to add a Windows version of ShinyServer? Is anyone else working on this?
I would appreciate any insights into this topic!
EDIT:
Additional hosting options:
iv.) We can install a VM on the Windows Server, e.g. with VirtualBox or VMware Player, install Linux and Shiny Server in it, and host from there. We might run into problems with this variant if the Shiny apps need to access SQL Server DBs on the Windows machine.
i.) This variant could possibly be improved by using (quoting @gregL): "pm2.keymetrics.io, a process manager typically used for Node.js in production. The plumber docs describe how you can use pm2 with R: rplumber.io/docs/hosting.html#pm2"
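For illustration only, the pm2 route could look roughly like this (the paths and file names are made up, pm2 requires Node.js, and this assumes app.R itself calls shiny::runApp() so that a plain Rscript app.R starts the server):

npm install -g pm2
pm2 start C:/apps/myapp/app.R --interpreter="Rscript"
pm2 save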
Hosting of Shiny Apps is possible on Windows!
At work we host several production Shiny dashboards, so it is definitely possible. You can host several Shiny apps by extending solution i.) that you mentioned and using a different port for each app. The steps you need to take (a command-line sketch follows the list):
make sure the port is open in the local (and possibly also the remote) firewall for TCP/IP connections
run a "scheduled task" on the local machine that starts a local R session as described in i.), make sure that the task does time-out and restarts if needed
Once these settings are in place, you can test the Shiny app, first locally and then from a remote machine. The Shiny app can even be edited live as far as the GUI is concerned, but if you want to refresh the data you will have to restart the R process.
Tip: You should also have an index web page that lists all running apps with their ports.
I love using docker & docker-compose for both development and production environments.
But in my workflow, I keep treating containers as disposable:
if I need to add a feature to an image, I edit my Dockerfile, run docker-compose build and docker-compose up -d, and I'm done.
But this time, the production DB is also in a Docker container.
I still need to make some changes in my environment (e.g. configuring backups), but now I can't rerun docker-compose build because that would wipe all the data... It means I need to enter the container (docker-compose run web /bin/bash) and run the commands inside it, while still copying them into my local Dockerfile to keep track of my changes.
Are there any best practices regarding this situation?
I thought of setting up a process that would dump the DB to an S3 bucket before container destruction, but it doesn't really scale to large DBs...
I thought of making a container indestructible (how?), though that means losing the disposability of containers.
I thought of having a special partition that would be in charge of storing the data only and that would not get destroyed when rebuilding the container, though that feels hard to set up and insecure.
So what should I do?
Thanks
This is what data volumes are for. There is a whole page on the Docker documentation site covering this.
The idea is that when you destroy the container, the data volume persists with data on it, and when you restart it the data hasn't gone anywhere.
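As a minimal sketch (the service name, image, and mount path are assumptions; the data directory shown is Postgres-specific and will differ for other databases), a named volume in docker-compose.yml could look like:

version: "3"
services:
  db:
    image: postgres:13
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume holding the database files
volumes:
  db-data:

With this in place, docker-compose build and docker-compose down leave db-data untouched; only docker-compose down -v (or docker volume rm) deletes it.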
I will say, though, that putting databases in Docker containers is hard. People have done it and suffered severe data loss (and severe job loss).
I would recommend reading extensively on this topic before trusting your production data to docker containers. This is a great article explaining the perils of doing this.
I have an AngularJS site consuming an API written in Sinatra.
I'm simply trying to deploy these 2 components together on an AWS EC2 instance.
How would one go about doing that? What tools do you recommend? What structure do you think is most suitable?
Cheers
This is based upon my experience of using the HashiCorp line of tools.
Manual: Launch an Ubuntu image, gem install sinatra, and deploy your code. Take a snapshot for safekeeping. This one-off approach is good for a development box to iron out the configuration process. Write down the commands you run and any options you may need.
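For example, the manual pass on a fresh Ubuntu instance might look roughly like this (package names and the app.rb entry point are only illustrative):

sudo apt-get update
sudo apt-get install -y ruby ruby-dev build-essential
sudo gem install sinatra
# copy your code onto the instance (scp, git clone, ...) and start the API
ruby app.rb -o 0.0.0.0 -p 4567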
Automated: Use the Packer EC2 Builder and Shell Provisioner to automate your commands from the previous manual approach. This will give you a configured AMI that can be launched.
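Once those commands are captured in the template's shell provisioner, building the AMI is a single step (the template filename here is made up):

packer validate sinatra-ami.json
packer build sinatra-ami.json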
You can apply different methods and toolsets to get to an AMI. However, in the end you want a single immutable image that can be deployed repeatedly.
I have developed an app with Twilio which I would like to run from the cloud. I tried learning about AWS and Google App Engine but am quite confused at this stage.
I have 2 questions which I hope to get your help on:
1) How can I store my scripts and database in the cloud? Right now everything is running on my local machine, but I would like to transfer the scripts and DB to another server and run my app at a predetermined time of day. What would be the best way to do this?
2) How can I write a batch file to run my app at a predetermined time of day in the cloud?
I understand this does not have code, but I really hope someone can point me in the right direction. I have spent lots of time trying to understand this myself but am still unsure. Thanks in advance.
Update: The application is a Twilio app that makes calls to people; the script simply applies an algorithm to make calls in a certain fashion, and the database is a MySQL DB that provides the details of the people to be called.
It is quite difficult to provide an exact answer without understanding what the application is, what the DB is, or what the script you wish to run does.
I can give you a couple of ideas that might be helpful in such cases.
OpsWorks (http://aws.amazon.com/opsworks/) is a managed service for deploying and managing applications. You can define your stack (multiple layers like web, workers, DB...) and which Chef recipes should run at various points in the life of the instances in each layer (startup, shutdown, app deployment, or stack modification...). Then you can use the ability to add instances to each layer on specific days and at specific hours to implement the "run at predetermined times" functionality you asked about.
In such a solution you can either keep some of your instances (like the DB) always on, or even bootstrap them with the Chef recipes every day, restoring from a snapshot on start and creating a snapshot on shutdown.
Another AWS service that you can use is Data Pipeline (http://aws.amazon.com/datapipeline/). It is designed to move data periodically between data sources, for example from a MySQL database to Amazon Redshift, the data warehouse service. But you can also use it to trigger arbitrary shell scripts (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html) and schedule them to run under various conditions, such as every hour/day or at specific times (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-schedules.html).
A simple path here would be just to create an EC2 instance in AWS and put the components needed to run your app there. A thorough walkthrough is here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html
Essentially you will create an EC2 virtual machine, which for most purposes you can treat just like any other Linux server. You can install MySQL on it, copy your script there, and run it. Of course, whatever runtime or support libraries your code requires will need to be installed as well.
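As a rough sketch (assuming an Ubuntu instance and a Python script called caller.py; every name, path, and time below is made up), the setup plus the "run at a predetermined time" part could look like:

# on the EC2 instance: install the database and a runtime for the script
sudo apt-get update && sudo apt-get install -y mysql-server python3
# from your local machine: copy the script up
scp -i mykey.pem caller.py ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/home/ubuntu/
# on the instance: schedule it for 09:00 every day (add this line via crontab -e)
0 9 * * * /usr/bin/python3 /home/ubuntu/caller.py >> /home/ubuntu/caller.log 2>&1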
You don't say what OS you are using locally, but if it is Mac or Linux, you should be able to follow almost the same process to get your script running on an EC2 instance that you used on your local machine.
As you get to know AWS, there are more sophisticated services you can use for deployment, infrastructure orchestration, database services, and so on. But just to get started, running a script from a virtual machine should be pretty straightforward.
I recently developed a Twilio application using Ruby on Rails for the backend and found Heroku extremely simple to set up and launch on. While Heroku does cost more than AWS, I found that the time I saved using Heroku more than made up for it. As an early-stage startup, we wanted to spend our time developing important features, not "wasting" time optimizing our AWS cloud.
However, while I believe Heroku is ideal for early-stage websites/startups, I do believe hosting should be re-evaluated once a company reaches a certain size. At some point it becomes economically viable to devote resources to optimizing an AWS cloud solution, because it will be cheaper than Heroku in the long run.
We use UI Automation and NUnit to create UI tests for a WPF application.
We've created tests that work fine when you run them from a local machine, but those tests never run successfully on our build server (using TeamCity). The build always hangs after the application window opens. However, if I am logged in to the build server (via Remote Desktop), all the UI Automation tests run successfully.
So I am guessing it probably has something to do with having an active Windows session. Any ideas how to convince our build server to create an active Windows session, or any other solutions for making these tests run on the build server?
You don't have many options. I will list the two I know, the most preferred option first:
Set up a virtual machine on your build server. Your builds execute in the virtual machine. You can lock the host (i.e. your build server), keeping things secure.
Keep someone logged on all the time. This of course creates a security problem. You can alleviate this problem a little by removing the mouse, keyboard, and screen, and only accessing the build server through RDP or something similar.
Edit
Take a look at this TestComplete FAQ item: Can TestComplete execute scripts when the computer is locked?
OK, I'm just guessing here.
Try running the TeamCity service as a local build server user instead of the system account.
Maybe you have to log in with that account once before starting a new build.
It definitely sounds like you need to run your tests in an interactive session as opposed to a service. Enabling "Allow service to interact with desktop" might help, but apparently this is no longer supported from Vista onwards.
If you can run your builds interactively from a command line, not as a service, that should work too.
We used to run our UI Automation tests using the Visual Studio 2008 load agent to distribute them, running as a command-line tool on VMs, with no problems.
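For example, rather than installing the TeamCity build agent as a Windows service, you can start it from the interactive (auto-logon) session; the install path here is only an assumption:

rem run from the logged-in session, e.g. via a shortcut in the Startup folder
cd C:\BuildAgent\bin
agent.bat start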
I also agree that you probably shouldn't be running UI tests on a build server as part of your daily build.
"The build always hangs after the application window opens."
Tests that instantiate the UI? That's not going to work; if you get a modal dialog, for example, the build will hang. This is the reason the MVP pattern was invented: to isolate the active presentation code from a concrete view.
Are you using a mock view in your automated tests?