I migrated a project to HRD using the migration tool.
This seems to have had various side effects, some of which might be related to unanchored entities or queries. The app doesn't send the notification emails it is supposed to, which may have something to do with billing needing to be re-enabled and Google waiting seven days before applying the mail quota.
Anyway, I am wondering if it is possible to roll back to Master/Slave - disable or delete the HRD app and re-enable the old one? That would buy me time to prepare better for this migration and try again later.
I'm afraid you can't go back, because the Master/Slave Datastore has been deprecated since April 4, 2012. You could contact support, but I would suggest spending some time trying to fix the issues on HRD.
We have a PHP application running on GAE. It connects to Cloud Datastore using the Google PHP library (v0.6.7).
In the last few days Google introduced a new version of App Engine, v1.9.0 (not officially released), which apparently was running fine, just as 1.8.9 was. However, we have been experiencing some issues with Cloud Datastore. Sometimes, all operations that update entities are simply ignored. Queries that retrieve information work perfectly, but if we try to create a new entity or update any property, no action is performed. I have checked the responses returned by the Cloud API, but there are no errors or warnings at all.
This situation happened for the first time on the 31st of January, and it is happening again today. Requests started failing at 3am (GMT+1), and according to the instance log, the latency of all requests increased significantly at the same time (from 1-3 seconds to 5-10 seconds). The first time, the system started working properly again after a few hours, but this time the problem is lasting much longer.
Has anyone experienced anything similar?
Thank you for the report, we're investigating the issue now.
Update: We've addressed the issue. Please join the Google Cloud Datastore downtime notify mailing list for future updates.
https://groups.google.com/forum/?fromgroups=#!topic/gcd-downtime-notify/sNXCFJYFNQU
For future reports about production issues, please refer to the Contact support section of our documentation.
So I was recently hired by a big department of a Fortune 50 company, straight out of college. I'll be supporting a brand new ASP.NET MVC app - over a million lines of code written by contractors over 4 years. The system works great with up to 3 or 4 simultaneous requests, but becomes very slow with more. It's supposed to go live in 2 weeks ... I'm looking for practical advice on how to drastically improve the scalability.
The advice I was given at uni is to always run a profiler first. I've already secured a sizeable tools budget with my manager, so price won't be a problem. What is a good, or even the best, profiler for ASP.NET MVC?
I'm also looking at adding caching. There is currently no second-level or query cache configured for NHibernate; my current thinking is to use Redis for that purpose. I'm also looking at output caching, but unfortunately the majority of users will log in to the site. Is there a way to still cache parts of the pages served by MVC?
Do you have any monitoring or instrumentation set up for the application? If not, I would highly recommend starting there. I've been using New Relic for a few years with ASP.NET apps and have been very happy with it.
Right off the bat you get a nice graph of request response times, broken down into the 3 kinds of tasks that contribute to the response time:
.NET CLR - Time spent running .NET code
Database - Time spent waiting on SQL requests
Request Queue - Time spent waiting for application workers to become available
It also breaks down performance by MVC action so you can see which ones are the slowest. You also get a breakdown of performance per database query. I've used this many times to detect procedures that were way too slow for heavy production loads.
If you want to, you can have New Relic add some unobtrusive JavaScript to your pages that instruments browser load times. This helps you figure out things like "my users outside North America spend 500ms on average loading images. I need to move my images to a CDN!"
I would highly recommend you use some instrumentation software like this. It will definitely get you pointed in the right direction and help you keep your app available and healthy.
Profiler is a handy tool for watching how apps communicate with your database and for debugging odd behaviour. It's not a long-term solution for performance instrumentation, though: it puts a load on your server, and the results require quite a bit of laborious processing and digestion before they paint a clear picture.
Random thought: check your application pool configuration and keep an eye out in the event log for excessive recycling events. When an application pool recycles, it takes a long time to become responsive again. It's one of those things that can kill performance while you rip your hair out trying to track it down. Improper recycling settings bit me recently, which is why I mention it.
For NHibernate analysis (session queries, caching, execution time) you could use NHibernate Profiler from Hibernating Rhinos. It's developed by the people behind NHibernate, so you know it will work really well with it.
Here is the URL for it:
http://hibernatingrhinos.com/products/nhprof
You could give it a try and decide if it helps you or not.
Almost a week ago I migrated my (paid) app from Master/Slave to HRD. Since that time my app has been restricted to 100 emails/day with a warning indicating "Resource is currently experiencing a short-term quota limit". I know the documentation mentions a limitation until the first successful billing, so I was hoping the limit would disappear once that happened - but alas, it has not! I have also filled out the "request additional resources" form, hoping that might help.
Has anyone encountered this problem migrating from Master/Slave? Any suggestions on who I can contact or how I can recover from this limit? The migration process was relatively smooth - except for this problem, which is having a significant impact on my customers.
I am having the same problem right now. I just finished my HRD migration yesterday and only realized today that my Mail API requests are failing because the mail quota on my new HRD app is only 100 messages per day. I did not have this limitation before, and I find it pretty disappointing that such a triviality is causing my users trouble despite a successful migration. I submitted a quota increase request and am hoping I don't also have to wait a week before it's applied.
Anyone waiting until the last minute to migrate their app to HRD, be warned: make sure you apply for a mail quota increase on your new app, because your old setting will not carry over, even if you have billing enabled on both apps.
I created my application using "High Replication" option. Now I want to switch to "Master/Slave" option because I'm hitting my daily CPU quota.
It turns out High Replication incurs "approximately three times the storage and CPU cost of Master/Slave".
Is there any way I can do this without recreating my app? It's not in the Application Settings page.
You can't - once you've chosen a particular type of datastore, that application is bound to it. The only way to change it is exactly the way you suggested - you'd have to create a new app with the Master/Slave datastore and port your data to it.
You may want to profile your app and optimize it to use less CPU, although in the general case that may be easier said than done.
Take a look at the first answer to this question: Have you experienced DataStore downtime in AppEngine? What are the odds?
As #mihai said: I would recommend using HRD, as Google has said they will make M/S more expensive than HRD by the end of the year, and may even remove the M/S option entirely, as they are looking to "force" businesses and developers to take advantage of all the HRD goodies. The real reason is that maintaining a single type of infrastructure is cheaper than maintaining both HRD and M/S, so Google picked HRD. Source: Google I/O 2011.
I want some ideas on best practices for implementing an activity stream for a social network I'm building on App Engine (Python).
First, I want to keep a log of all activities of each user, so that we have a history - e.g. someone became a friend, added a picture, changed their address, etc. This way we have a user's history available should we need it. It also means we can remove friendship joins or change user data and still have a historical log.
I also want to stream a user's activity to their friends. For this, only the last X activities need to be kept - that is, in the scenario where messages are sent to friends when an activity occurs.
It's pretty straightforward to design a history log - i.e. when, what, where. Something like the sketch below is what I have in mind for it.
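A minimal sketch, assuming the standard google.appengine.ext.db API - the Activity kind and its property names are placeholders, not a fixed design:

    from google.appengine.ext import db

    class Activity(db.Model):
        actor = db.StringProperty(required=True)          # who
        verb = db.StringProperty(required=True)           # what, e.g. 'added_photo'
        target = db.StringProperty()                      # what it applies to
        created = db.DateTimeProperty(auto_now_add=True)  # when

    def log_activity(actor_id, verb, target=None):
        # One entity per event gives a permanent history, independent of
        # whether the friendship or user data later changes.
        Activity(actor=actor_id, verb=verb, target=target).put()

    def user_history(actor_id, limit=50):
        # A user's history, newest first (needs a composite index on
        # actor + -created in index.yaml).
        return (Activity.all()
                .filter('actor =', actor_id)
                .order('-created')
                .fetch(limit))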
The complication is how we notify a user's friends of their activity. In our app friendships are not mutual - i.e. they are based on the Twitter following model. Some accounts could have thousands of followers.
What is the best approach to model this?
Using a many-to-many join table and doing a costly query?
Using a feed class that fires a copy of the activity to all subscribers - maybe into memcache? As there may be a need to fire thousands of messages, I would imagine a cron job would need to be used.
Any help, ideas, or thoughts on this?
Thx
There's a great talk by Brett Slatkin called Building Scalable, Complex Apps on App Engine from last year's Google I/O, in which the example is a Twitter-like application, where users' updates are pushed to their followers. Basically exactly what you're trying to do.
I highly recommend the video for anyone writing an App Engine app, it's really helpful.
Don't do joins. They're too expensive, you'll burn through your quota in no time.
You can use a task queue; it's a bit like a cron job (i.e. stuff happens outside of the original request) but you can start tasks at will. memcache would be good if you're OK with losing some activity whenever the cache is flushed...
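For example, here's a rough sketch of that fan-out, assuming the deferred library and a hypothetical get_follower_ids() lookup standing in for however you query your follower relationships - an illustration of the idea, not tested code:

    from google.appengine.api import memcache
    from google.appengine.ext import deferred

    FEED_LEN = 50  # keep only the last X activities per follower

    def fan_out(actor_id, activity_text):
        # Runs on the task queue, outside the original request.
        # get_follower_ids() is a placeholder for your own follower query;
        # with thousands of followers you'd fetch and process in batches.
        for fid in get_follower_ids(actor_id):
            key = 'feed:%s' % fid
            feed = memcache.get(key) or []
            feed.insert(0, activity_text)
            # Not atomic: concurrent fan-outs can drop an item, which is
            # part of the lossy trade-off mentioned above.
            memcache.set(key, feed[:FEED_LEN])

    def on_activity(actor_id, activity_text):
        # Called from the request handler; returns immediately while the
        # fan-out happens in the background.
        deferred.defer(fan_out, actor_id, activity_text)

If losing items on a cache flush isn't acceptable, the same fan-out can write each follower's feed entries to the datastore instead, which is closer to what Brett's talk describes.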