I have successfully deployed the MobileFirst Platform to Bluemix a number of times. I am able to create the container group for the operations console (MFP Server) and then push an app to this server. From there, I am able to install and run the app on my physical devices (a few Apple devices and a couple of Android devices). At this point everything works fine.
However, after some period of inactivity (about one day), the MFP runtime is no longer available on the server. I can still log into the operations console, but the runtime is missing. FYI, I am using a Cloudant DB service.
One thing I have noticed is that the container's "Created" time is shorter than the time that has actually passed since I created the container. For example, I created a container 6 days ago, but the Bluemix console shows that the container was created 4 days ago. Is it possible the container is getting recreated, thus losing connectivity to the DB or otherwise corrupting the MFP runtime?
I deployed my own personal website to Google Cloud's App Engine last week, and since then it has run up a bill of around $12. But very few people, if any, actually visit the site; I'm really the only one who knows about it.
It is a basic React app that uses React Router to display three different pages. The code is very simple, so I can't imagine it's complex enough to be driving up the bill.
The logging shows health, liveness, and readiness checks being completed on the app constantly. Is this something that would add to the cost of the app?
I'm relatively new to deploying apps in the cloud. Is there anything I can do to reduce the cost of keeping this app deployed?
Is this Flex or Standard? A Flex instance never scales down to 0 (which means you're always being billed), whereas a Standard instance scales to 0 when there's no traffic (which means no billing for that period).
Do you have multiple versions of your app running? When deploying, if you do not specify a version number, gcloud will generate one for you. This means that multiple deployments can leave you with several versions, and if you do not delete them or migrate traffic to the newer version, multiple versions may keep running and drive up your bill. Check the number of versions in your console and delete any you don't need.
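If you prefer the command line, something like this should do it (VERSION_ID and the service name are placeholders; take the real values from the list output):

    # List all deployed versions across services, with their serving status
    gcloud app versions list

    # Stop a version you no longer want running (it can be restarted later)
    gcloud app versions stop VERSION_ID --service=default

    # Or delete it outright once you're sure it's not needed
    gcloud app versions delete VERSION_ID --service=default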
Are you using manual scaling, or what type of scaling settings do you have?
Check out the following tips (short Twitter threads from our account) to see if they help: tip 1, tip 2
I have recently upgraded my website to DotNetNuke (DNN) version 9.4.1, but I am now having a performance issue: the website runs slowly. I have searched for this, applied the performance configuration in the server settings, and also configured caching at the page level.
I have minified the JS and CSS files and updated the setting value in the host settings table.
Thanks a lot in advance.
Check the DNN Scheduler to see if there are any active jobs that are taking longer than they should. For example, if the Site Crawler scheduler is constantly running, check the Portals folder to make sure all of the files located there actually belong there. The crawler rebuilds the search index, and if you have a lot of files it can take hours to complete. If the files should be there, disable the crawler scheduler and run it during your slowest time of day (1:00 AM?). I ran into this problem on a server that had hundreds of thousands of documents in the Portals folder, and ended up solving it by running the crawler between 1:00 AM and 5:00 AM for a few days until it had indexed all of the files. Once the files are indexed it only has to index changed and new files, so it should only be a burden the first time it runs.
Another possible cause is exceptions. If your site is throwing a large number of exceptions, it will slow down. Handling the exceptions and then logging them (to the DNN EventLog table in the database and the Log4Net files) can be brutal if your site is constantly throwing them. If your site is also running in DEBUG mode, the performance hit is multiplied by at least 30 times, because .NET collects a great deal of additional information about each exception in debug mode. That will be brutal to your site's performance.
Check the server logs to see how often IIS is recycling the application pool for your DNN site. If it's happening often, that is also a sign of a large number of exceptions being thrown, assuming you are using the default IIS application pool settings. By default, IIS will recycle your application pool if too many exceptions are thrown within a short period of time. If you also have the option set to bring up a new instance of your site and run it side by side before IIS terminates the existing instance, then while your site is throwing exceptions this can cause a bottleneck and cripple performance. For this situation, I usually stop IIS from recycling the application pool when too many exceptions are thrown within a short period. That may not be the best option for you, but if you are on top of the exceptions being thrown on the site, you can disable that behavior and let IIS run instances side by side after an app recycle. (This is nice to have when you recycle during active periods: all existing traffic completes on the old instance while all new traffic is sent to the new one, and once all traffic is hitting the new instance IIS terminates the older one.)
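If you'd rather script that change than click through IIS Manager, something along these lines should work with appcmd; this is a sketch assuming the setting in question is IIS's rapid-fail protection, and "MyDnnPool" is a placeholder pool name:

    REM Stop IIS from shutting down/recycling the pool after a burst of failures
    REM ("MyDnnPool" is a placeholder -- use your DNN site's application pool name)
    %windir%\system32\inetsrv\appcmd.exe set apppool "MyDnnPool" /failure.rapidFailProtection:false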
If none of the above helps, run SQL Profiler against your database to see if there is any extreme database activity going on. Also check for any DB locks.
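As a lighter-weight first check before a full Profiler trace (assuming SQL Server, since you mention SQL Profiler), you can query the dynamic management views directly:

    -- Sessions currently blocked by another session
    SELECT session_id, blocking_session_id, wait_type, wait_time, command
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;

    -- Locks currently held or requested in this database
    SELECT request_session_id, resource_type, request_mode, request_status
    FROM sys.dm_tran_locks;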
There are a lot of possible causes that can slow down DNN. The best way to find out what is going on is to run a profiler on the server (Red Gate ANTS Profiler or Telerik/Progress JustTrace).
In our development environment, we have been charged about USD 100 every month for an instance we didn't know existed (and of course are not using), and we can't find it anywhere in App Engine or the rest of the console.
Also, the usage report shows no activity for the whole month, but we are still getting the charges.
The instance is: Flex Instance Core Hours Sao Paulo
I found similar posts on Stack Overflow, so here are my questions:
- Is this some bad strategy on Google's part?
- Where can I see this instance, so I can stop or delete it?
- Where can I see who started this instance, and when?
Of course, I called Google support and received no answer.
Many thanks!
Google Cloud Platform Support here! I found your ticket and see that you were already provided an answer there. In addition to what Dan described in his answer: if your app currently has the "Serving" status, it will keep running its corresponding instances regardless of whether any requests are coming in. As long as the version is serving, you will continue to be billed for the instance hours you are using. Also, if you are using automatic scaling with a minimum number of instances:
that specified number of instances run as resident instances while any additional instances are dynamic
(Instance scaling description in the GCP docs)
You can use basic or manual scaling if this is not what you're interested in.
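For reference, here is roughly what those options look like in a standard-environment app.yaml (the numbers are illustrative, not recommendations):

    # Automatic scaling: min_instances > 0 keeps resident instances running
    # (and billing) even with no traffic; 0 lets the app scale down to nothing.
    automatic_scaling:
      min_instances: 0
      max_instances: 2

    # Alternatively, basic scaling shuts instances down after an idle timeout
    # (use one scaling block or the other, not both):
    # basic_scaling:
    #   max_instances: 1
    #   idle_timeout: 10m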
Check the App Engine Versions page for each of your projects; you should find at least one version using the Flexible environment. The Deployed column should indicate who deployed it and when.
Based on that information you can decide to keep or delete the respective version(s). Simply stopping the instance may not be sufficient; depending on the scaling configuration for that service version, GAE may automatically start one or more new instances.
You should also check the App Engine Instances page for your projects and cross-reference it with the version info to make sure no undesired instances are accidentally left behind. (At least in the standard environment, instances are normally stopped when the respective versions are deleted; I'm not entirely certain the same is true for the flex environment.)
The running flexible environment instances are billed by the hour, even if they receive no requests, which could explain why you're seeing charges without any activity.
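To stop the bleeding while you investigate, you can stop the serving version from the command line (IDs are placeholders):

    # See which instances are actually running, per service and version
    gcloud app instances list

    # Stop the offending version; stopped flex versions no longer bill for instance hours
    gcloud app versions stop VERSION_ID --service=SERVICE_NAME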
Apparently, the source of this instance was a Firebase setting we made for some tests, which automatically created the instance. I shut off the billing account for this project, and instantly received an email from Firebase saying it had detected changes that would make some functions unavailable.
I have developed a Twilio app which I would like to run from the cloud. I tried learning about AWS and Google App Engine, but I am quite confused at this stage.
I have 2 questions which I hope to get your help on:
1) How can I store my scripts and database in the cloud? Right now, everything is running on my local machine, but I would like to transfer the scripts and DB to another server and run my app at a predetermined time of day. What would be the best way to do this?
2) How can I write a batch file to run my app at a predetermined time of day in the cloud?
I understand this does not have code, but I really hope someone can point me in the right direction. I have spent lots of time trying to understand this myself but am still unsure. Thanks in advance.
Update: The application is a Twilio app that makes calls to people. The script simply applies an algorithm to make calls in a certain fashion, and the database is a MySQL DB that provides the details of the people to be called.
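To make that concrete, here is a minimal sketch of what such a script could look like, using the twilio and PyMySQL packages; the credentials, the people table and its columns, the phone number, and the TwiML URL are all hypothetical placeholders:

    import pymysql
    from twilio.rest import Client

    # Placeholder credentials -- in production, read these from the environment
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")

    # Hypothetical MySQL database holding the call list
    conn = pymysql.connect(host="localhost", user="app",
                           password="secret", database="calllist")
    with conn.cursor() as cur:
        cur.execute("SELECT name, phone FROM people WHERE call_today = 1")
        rows = cur.fetchall()
    conn.close()

    for name, phone in rows:
        # Twilio fetches TwiML from `url` when the callee answers
        call = client.calls.create(
            to=phone,
            from_="+15551234567",             # your Twilio number (placeholder)
            url="https://example.com/twiml",  # placeholder TwiML endpoint
        )
        print(f"Started call {call.sid} to {name}")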
It is quite difficult to provide an exact answer without understanding what the application is, what the DB is, or what the script you wish to run does.
I can give you a couple of ideas that might be helpful in such cases.
OpsWorks (http://aws.amazon.com/opsworks/) is a managed service for deploying and operating applications. You can define your stack (multiple layers like web, workers, DB...) and which Chef recipes should run at various points in the life of the instances in each layer (startup, shutdown, app deployment, or stack modification). You can then add instances to each layer on specific days and at specific hours, to implement the run-at-predetermined-times functionality you asked for.
In such a solution you can either keep some of your instances (like the DB) always on, or even bootstrap them using the Chef recipes every day, restoring from a snapshot on start and creating a snapshot on shutdown.
Another AWS service that you can use is Data Pipeline (http://aws.amazon.com/datapipeline/). It is designed to move data periodically between data sources, for example from a MySQL database to Amazon Redshift, the data warehouse service. But you can also use it to trigger arbitrary shell scripts (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html) and schedule them to run under various conditions, like every hour/day or at specific times (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-schedules.html).
A simple path here would be just to create an EC2 instance in AWS and put the components needed to run your app there. A thorough walkthrough is here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html
Essentially you will create an EC2 virtual machine, which you can for most purposes treat just like any other Linux server. You can install MySQL on it, copy your script there, and run it. Of course whatever container or support libraries your code requires will need to be installed as well.
You don't say what OS you are using locally, but if it is Mac or Linux, you should be able to follow almost the same process to get your script running on an EC2 instance that you used on your local machine.
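For the "predetermined time of day" part of your question, cron on the EC2 instance is the simplest tool. A sketch, with placeholder paths:

    # Added via `crontab -e`: run the calling script every day at 9:00 AM server time,
    # appending output to a log file
    0 9 * * * /usr/bin/python3 /home/ec2-user/app/make_calls.py >> /home/ec2-user/app/cron.log 2>&1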
As you get to know AWS, there are sophisticated services you can use for deployment, infrastructure orchestration, database services, and so on. But just to get started, running a script from a virtual machine should be pretty straightforward.
I recently developed a Twilio application using Ruby on Rails for the backend and found Heroku extremely simple to set up and launch on. While Heroku does cost more than AWS, I found that the time I saved using Heroku more than made up for it. As an early-stage startup, we wanted to spend our time developing important features, not "wasting" time optimizing an AWS cloud.
However, while I believe Heroku is ideal for early-stage websites/startups, I do believe hosting should be reevaluated once a company reaches a certain size. At some point it becomes economically viable to devote resources to optimizing an AWS cloud solution, because it will be cheaper than Heroku in the long run.
My WPF application currently only shows a screen with some controls; it doesn't connect to a DB or have any other functionality. It's a simple UI screen.
When I tested it on some computers (WinXP SP2), I found that it took more than 15 seconds to start up. They were all in our domain.
I grabbed a similar computer with only Windows installed, and the application took 2 seconds to start up.
Then I added that computer to our domain, and testing with a domain user showed that it also took 15 seconds to start up. I tested again with the previous (local) user and it was still fast. I created another local user, but it takes the same 15 seconds as the domain user.
I added other local users, but they were also slow.
To summarize: the application starts fast (2 sec) for only one user, the first one I tested. For all other users (domain or local) it is slow (15 sec).
I've been reading Improving WPF applications startup time, but my problem seems to need a different approach. Does anyone have an idea what could be happening?
I found another solution to this problem in this documentation from Microsoft.
Adding the following configuration to the app.config file will also solve the problem:
<configuration>
    <runtime>
        <generatePublisherEvidence enabled="false"/>
    </runtime>
</configuration>
This way, you don't need to change any computer configuration; it's just configuration of the application.
UPDATE:
Seems that .NET 4.0 fixed this issue, as documented here on MSDN.
Is the system connected to a network but unable to reach the internet because the proxy is not configured? If so, go to Internet Settings (i.e. Internet Explorer properties), then Advanced, and look in the tree view under Security for a checkbox like "check revoked certificates" or something similar (I'm using a German Windows, so I don't have the English label at hand). Uncheck it and test again.
If this fixes the problem, you have a signed assembly that is not from Microsoft, for which the .NET Framework checks for revocations and times out after 15 seconds. If you disable the check or configure the internet connection properly, you won't have to wait.
Does it open a file or interact with the network in some way? If not, I would suggest that whether you're logged into a domain or running as a local user is probably a red herring.
Are you building in debug or release mode? It's worth trying release mode if you haven't already, because running in debug does a load of extra error checking.
Have you checked if there are any domain policies that can affect this scenario?
I still had this problem (.NET 4.5). In my case the issue was that the computer was not connected to the internet, but there were other devices (cameras, etc.) connected via GigE. The startup of every .NET application was delayed by about 20 seconds.
The solution was quite easy: just connect the computer to the internet once and start any .NET application (the first startup took about 7 seconds); after that, every startup was quite fast, even when the computer was no longer connected to the internet. In addition, I had to disable the TCP/IPv6 protocol (it caused a 3-5 second delay).
Another possible solution is to open the properties for "Internet Protocol Version 4 (TCP/IPv4)", select Advanced, go to the "WINS" tab, and set "Disable NetBIOS over TCP/IP".