Testing kubegres promotion feature - database

I'm trying to check how the failover transition works when I crash my leader database (I do this by killing the pod), but Kubegres never promotes the follower database.
I have checked my settings and auto-promotion is enabled. Any idea why it doesn't promote?
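For reference, here is the shape of my Kubegres resource (a minimal sketch; the names, image tag and sizes are placeholders). As I understand it, failover is enabled by default and is only switched off by spec.failover.isDisabled, and there has to be at least one replica (replicas >= 2) for Kubegres to have anything to promote:

    # Minimal sketch -- metadata names, image tag and sizes are placeholders.
    apiVersion: kubegres.reactive-tech.io/v1
    kind: Kubegres
    metadata:
      name: mypostgres
      namespace: default
    spec:
      replicas: 3              # 1 primary + 2 replicas; needs >= 2 for failover
      image: postgres:14.1
      database:
        size: 200Mi
      failover:
        isDisabled: false      # must not be true, or no auto-promotion happens
        # promotePod: ""       # manual promotion hook; leave unset for automatic
      env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mypostgres-secret
              key: superUserPassword
        - name: POSTGRES_REPLICATION_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mypostgres-secret
              key: replicationUserPassword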


Under what circumstances (if any) can I continue to run "out of date" GWT clients when I update my GAEJ version?

Following on from this question:
GWT detect GAE version changes and reload
I would like to further clarify some things.
I have an enterprise app (GWT 2.4 and GAEJ 1.6.4, using GWT-RPC) that my users typically run all day in their browsers; indeed, some don't bother refreshing the browser from day to day. I make new releases on a pretty regular basis, so I am trying to streamline the process for minimal impact on my users. Not all releases concern all users, so I'd like to minimize the number of restarts.
I was hoping it might be possible to do the following. Categorize my releases as follows:
1) releases that will cause an IncompatibleRemoteServiceException to be thrown,
and 2) those that don't, i.e. releases that only affect the server, or the client, but not the RPC interface.
Then I could make lots of changes to the client and server without affecting the interface between the two. As long as I don't modify the RPC interface, presumably I can change server code and/or client code and the exception won't be thrown, right? Or will any redeployment of GAE cause an old client to get an IncompatibleRemoteServiceException?
If I were able to do that, I could batch up the interface-busting changes into fairly infrequent releases and notify my users that a restart will be required.
Many thanks for any help.
I needed an answer pretty quickly, so I thought I'd just do some good old-fashioned testing to see what's possible. Hopefully this will be useful for others with production systems using GWT-RPC.
The goal is to be able to release updates and fixes without requiring all connected browsers to refresh, and it turns out there is quite a lot you can do.
So, after my testing, here's what you can and can't do:
No problem:
add a new call to a RemoteService
just update some code on the server e.g. simple bug fix, redeploy
just update some client (GWT) code and redeploy (of course anyone wanting new client functionality will have to refresh browser, but others are unaffected)
Limited problems:
add a parameter to an existing RemoteService method - this one is interesting: that particular call will throw an IncompatibleRemoteServiceException (of course), but all other calls to the same RemoteService, or to other RemoteServices (Impls), are unaffected.
add a new type (as a parameter) to any method within a RemoteService - this is the most interesting one, and is what led me to do this testing. It will render the whole RemoteService out of date for existing clients, which get an IncompatibleRemoteServiceException. However, you can still use the other RemoteServices. I need to do some more testing here to fully understand it, or perhaps someone else knows more?
So if you know what you're doing, you can do quite a lot without having to bother your users with refreshes or release announcements.
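To make the safe case concrete, here's a sketch of the kind of change that didn't break my old clients: adding a brand-new method to an existing RemoteService. The service and method names below are made up for illustration; only the pattern matters.

    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // Hypothetical service, for illustration only.
    @RemoteServiceRelativePath("orders")
    public interface OrderService extends RemoteService {

        // Existing call: deployed clients keep using this unchanged.
        String getOrderStatus(String orderId);

        // Safe addition: a brand-new method. Old clients don't know it
        // exists, so it cannot break them.
        String getOrderHistory(String orderId);

        // NOT safe: adding a parameter to getOrderStatus(...), or adding a
        // new custom type to any signature here; per the testing above, that
        // gets existing clients an IncompatibleRemoteServiceException for
        // this service (though other RemoteServices keep working).
    }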

How to rollback/tear down/clear the database changes after a system test runs?

I have a test method, using NUnit and Selenium, which opens a browser on our website which is on the Production Server and registers a user and verifies that the registration is successful.
(I know ideally the system tests should run on a separate Test Server rather than production but here they want to test whether the prod system works!)
The problem is how to roll back the database changes made by this test. For example, the state of my database should be the same before and after running the test.
I thought of 3 possible options but none is practical:
1) Writing SQL queries to delete from the affected tables before the test starts (SetUp) and after it runs (TearDown); this is my current approach.
The problem with this approach is that I have to know exactly which tables were involved for each system test that runs, and this can quickly become very complex as a test may impact more than one table.
2) Writing transactional code.
This is not an option, since the changes are made by the website, not by the test code that was written.
3) Taking a snapshot of the existing database (SQL Server 2008 R2) before each test starts, then restoring the snapshot once the test has finished (the T-SQL involved is sketched below).
This idea sounds good to me if we could run the tests only in a staging environment, but the problem is that the tests have to run on production and may take around 5 minutes in total, so rolling back by restoring the snapshot would be a stupid idea: any changes made by real users during those 5 minutes would be lost!
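For reference, the snapshot mechanics are only a few statements of T-SQL on SQL Server 2008 R2 (database name, logical file name and path below are placeholders; note that database snapshots require the Enterprise/Developer edition):

    -- take the snapshot before the test
    CREATE DATABASE MyAppDb_TestSnapshot
    ON ( NAME = MyAppDb_Data,   -- logical name of the source data file
         FILENAME = 'C:\Snapshots\MyAppDb_TestSnapshot.ss' )
    AS SNAPSHOT OF MyAppDb;

    -- ... run the system test ...

    -- reverting discards EVERYTHING written since the snapshot, which is
    -- exactly why this is unusable on a live production database
    RESTORE DATABASE MyAppDb
    FROM DATABASE_SNAPSHOT = 'MyAppDb_TestSnapshot';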
Please advise what the best possible option to resolve this problem would be; perhaps there is a 4th option?
Thanks,
Option 4: never, ever, ever run tests on a production server. It's a recipe for disaster (see the thousands of stories on the internet, funny only if you are not the protagonist, about how this can go horribly wrong). The right thing to do would be to configure the test and production servers in the same way.
There is a fifth option. If the website receives a registration for user "WeAreTestingOutSite", it does everything except actually adding the user to the database.
To be honest, as was said, there are better ways to test whether a production site is still operational than to run bots that register a user to make sure it is working.
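To sketch that fifth option (the site under test here is presumably .NET; this is written in Java purely to show the shape, and every name in it is hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: a "magic" test account exercises the whole
    // registration path but is never persisted.
    public class RegistrationService {
        static final String TEST_USER = "WeAreTestingOutSite";
        private final Map<String, String> userStore = new HashMap<String, String>();

        public boolean register(String username, String email) {
            // Normal validation still runs, so the test still covers it.
            if (username == null || username.isEmpty() || !email.contains("@")) {
                return false;
            }
            if (TEST_USER.equals(username)) {
                return true; // report success, but skip the database write
            }
            userStore.put(username, email);
            return true;
        }

        public static void main(String[] args) {
            RegistrationService svc = new RegistrationService();
            System.out.println(svc.register("WeAreTestingOutSite", "qa@example.com")); // true, nothing stored
            System.out.println(svc.register("alice", "alice@example.com"));            // true, stored
        }
    }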
I would recommend going with a 4th option: introduce a new feature which allows deleting the user. Probably not exposed to the user himself/herself, but to the system admins (back-office users). That way you can test that a user can be registered, and delete the user afterwards, without caring that much about the SQL scripts.

NHibernate will not insert a record

I have an application that is now 4+ years old that is exhibiting some odd behavior on our latest deployment. The application uses nHibernate for all inserts / updates / selects, etc. We are currently using .NET 2.0, and nHibernate 1.2 (I know, we need to upgrade)
This deployment is on Windows Server 2008 x64, IIS 7.5. What I have seen so far is that the application runs, but is unable to insert or update records in the DB; reads seem fine so far, but writes are a problem. SOME writes actually work (inserts into some small tables), but most never even make it to the DB.
Using SQL Profiler, the inserts/updates never make it to the server. With log4net turned up to DEBUG and show_sql set to true, the select statements appear, but the insert/update statements never make it into the log at all, and never show up at the server.
What's even more odd is that the application seems to be oblivious to this: the commit and close run without exception (open session in view with an HttpModule), the domain objects come back with UUIDs generated, etc., but they never get persisted.
Certainly an upgrade is due, but I would hate to try it during a deployment, and without time to accurately test the app. Any ideas?
My guess is that the default ISession FlushMode has been changed from Auto to Never or Commit. Never means that the session only flushes when Flush() is called explicitly by the application; Commit means that the session flushes when a transaction is committed; Auto, the default, also flushes automatically, e.g. before queries. Check what FlushMode the session opened by your HttpModule ends up with, and whether its transaction is actually being committed.
Back out your current deployment and return to what you had before. Then look for the mistake someone made. If it used to insert and now does not, then something is wrong with your current code. If it isn't creating the insert/update statements, I'd look first at where they are supposed to be created. Did the current deployment actually insert or update records in dev? Did anybody test that, or were you relying on the fact that it didn't pop up an error? If it did work in dev and doesn't work in prod, I'd look at the environmental differences between dev and prod.
Both good answers; the problem was in the deployment. The web.config was set up for IIS 6, and the deployment to IIS 7 did not properly set up the open-session-in-view HttpModule that is used to commit the transaction. Changing the application pool's pipeline mode from Integrated to Classic solved the problem.

Scheduling a RichCopy Job

Has anyone used the timer feature of RichCopy? I have a job that works fine when I start it manually. However, when I schedule the job and click run, the app appears to wait for the scheduled time to elapse, yet never fires. Interestingly enough, when I stop the job the copy starts.
Does anyone have experience with the RichCopy timer?
IanB
Try creating a batch file with command-line options, then use the Windows Task Scheduler to launch the batch.
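Something like the sketch below (all paths, names and times are placeholders). RichCopy's exact command-line switches depend on your version and saved profile, so check its help file; robocopy, which ships with Server 2008, stands in here just to show the shape:

    rem -- copyjob.bat: the copy itself (robocopy standing in for RichCopy)
    robocopy "D:\Source" "\\fileserver\Dest" /MIR /R:2 /W:5 /LOG+:C:\Jobs\copyjob.log

Then register the batch with the Task Scheduler, for example nightly at 02:00. Note the /RU account: a scheduled task runs under whatever credentials you give it, which is exactly where the policy issues described below tend to bite:

    schtasks /Create /TN "NightlyCopy" /TR "C:\Jobs\copyjob.bat" /SC DAILY /ST 02:00 /RU MYDOMAIN\svc_copy /RP *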
OMBG (Bill Gates). You need to read and understand security policy, and the respect it has to pay to a hierarchy of upstream objects and credentials. Well, that's the MS answer and attitude...
The reality is that if you are working with server OSs you need to understand their security and policy frameworks, and how to debug them :). If your process loses the necessary file permissions or rights (2 different things) you should ask: "Hot damn, why didn't I fix that in the config/setup?" People who blast the vendor/project (or even ####&$! MS) are just blinding themselves to the solution(s).
In most cases this kind of issue is due to Windows AD removing the right of a local administrator user to run a scheduled task. It is a common security setting in corporate networks (implemented with glee by domain admins to upset developers), though it is really a default setting these days. It happens because the machine updates against an upstream policy (after you've scheduled a task) and decides that all of a sudden it won't trust you to run it (even though it previously let you set it up). In a perfect world it wouldn't let you set it up in the first place, but that isn't the way policy applies in Windows... (####&$! MS). LOL
Wow, it only took 5 months to get an answer! (But here they are, for the next person at least!)

Best Practice for seeing live data on the dev server?

Assumption: live/production web app suppresses errors being shown to end-users.
Suppose your tech support team wants to see live data, but through the eyes of the development side of the application (maybe you want to see what errors are occurring, or to confirm that you've fixed an issue using an end-user's data).
Right now we've got one database serving both the dev and live boxes (not my idea - I know it's gross).
Ideas?
Edit: Best/handy tools for implementing your suggestion?
We replicate the data back to a different database. Yes, there is a delay, but it keeps people's hands out of the production servers. This also allows us to "hide" information that tech support (and other people, for that matter) aren't supposed to see.
In addition to replicating data down, on production we watch who's logged into the application, and if it's a member of the company, we send them to the real error page instead of the happy kitten playing with a ball of yarn apologizing.
Back up and restore from live to dev on a regular basis (once or twice a day). It doesn't need to be realtime (you might be entering data from the dev side anyway, which could cause problems).
If you have PCI or HIPAA data, make sure you don't put that in your dev environment -- that might break laws.
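A sketch of that refresh in T-SQL (database name, logical file names and paths are placeholders); the WITH MOVE clauses are what let the restored copy land under the dev server's own file locations:

    -- on production: take the copy
    BACKUP DATABASE MyAppDb
    TO DISK = '\\backupshare\MyAppDb_daily.bak'
    WITH INIT;

    -- on dev: overwrite yesterday's copy
    RESTORE DATABASE MyAppDb
    FROM DISK = '\\backupshare\MyAppDb_daily.bak'
    WITH REPLACE,
         MOVE 'MyAppDb_Data' TO 'D:\DevData\MyAppDb.mdf',
         MOVE 'MyAppDb_Log'  TO 'D:\DevData\MyAppDb.ldf';
    -- then scrub or mask any PCI/HIPAA columns before letting anyone in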
I generally like to have a 3-tier system for web development:
Development
Testing
Live
Most of the time testing is an exact copy of the live system, except that error reporting is turned on. When a new version is about to be moved live, testing is replaced with the new version BEFORE live is, to detect upgrade issues.
Development is completely separate from live, to allow for major changes to things like the database, or changes to the production environment.
I would firstly make sure errors are either emailed to someone with details of how the user got there, or at minimum logged, so you can watch the error log while you perform similar actions and see if you get the same messages.
And yes, copying the database to the dev server/site is probably your only option. You don't want any changes made by the development team hitting live data, and you'll probably also have changes that won't work with the production database at some point.
I wouldn't recommend doing a nightly copy, as a developer might be in the middle of some new feature for which they have added data, and then it's erased that night. I usually copy the production database(s) to dev each time a major version is released. This also allows me to do speed testing with a lot of live data. On some systems I also change everyone's password to a default so I can log in easily as any user.
If your configuration permits it:
a. Add a logging function (if there isn't one already) to write messages of interest to a log file.
b. Run the Unix command
tail -f logfile.txt
which will stream the growing log file to your console.
http://www.monkey.org/cgi-bin/man2html?tail
If you have Windows, you might try this:
http://tailforwin32.sourceforge.net/
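For step (a), a minimal sketch using java.util.logging (the logger and file names are placeholders; any logging framework that writes to a file works the same way). The file it produces is what you would then tail:

    import java.io.IOException;
    import java.util.logging.FileHandler;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    // Write messages of interest to a log file that can be tailed.
    public class AppLog {
        private static final Logger LOG = Logger.getLogger("webapp");

        public static void init() throws IOException {
            FileHandler fh = new FileHandler("logfile.txt", true); // append mode
            fh.setFormatter(new SimpleFormatter());                // plain text entries
            LOG.addHandler(fh);
        }

        public static void main(String[] args) throws IOException {
            init();
            LOG.info("user 42 reached checkout");   // lines like these are what
            LOG.warning("payment gateway is slow"); // show up under tail -f
        }
    }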
