Container STATE is Error when the Test Result has Failure - selenium-webdriver

I have set up a Kubernetes environment locally using minikube. I create a Job which has 3 containers:
Hub
Chrome
App (Selenium-TestNG)
When I apply/create the Job, it sets up Hub/Chrome/App and executes the Selenium tests.
If all the tests complete successfully, the container status is listed like below.
The result from the webAutomation1 app:
The above is as expected; the container listing looks good.
Now, if the app completes (execution of tests) with some failures like below,
then my container listing shows its STATE as Error.
I use ITestListener to write logs to the console for now. What is making the container STATE turn to Error? Is there anything I'm not seeing in terms of the integration between the container and the app?
It would be greatly appreciated if someone could help me with this.

As per TestNG exit codes:
When TestNG completes execution, it exits with a return code. This
return code can be inspected to get an idea on the nature of failures
(if there were any). The following table summarises the different exit
codes that TestNG currently uses.
/**
 * |---------------------|---------|--------|-------------|------------------------------------------|
 * | FailedWithinSuccess | Skipped | Failed | Status Code | Remarks                                  |
 * |---------------------|---------|--------|-------------|------------------------------------------|
 * | 0                   | 0       | 0      | 0           | Passed tests                             |
 * | 0                   | 0       | 1      | 1           | Failed tests                             |
 * | 0                   | 1       | 0      | 2           | Skipped tests                            |
 * | 0                   | 1       | 1      | 3           | Skipped/Failed tests                     |
 * | 1                   | 0       | 0      | 4           | FailedWithinSuccess tests                |
 * | 1                   | 0       | 1      | 5           | FailedWithinSuccess/Failed tests         |
 * | 1                   | 1       | 0      | 6           | FailedWithinSuccess/Skipped tests        |
 * | 1                   | 1       | 1      | 7           | FailedWithinSuccess/Skipped/Failed tests |
 * |---------------------|---------|--------|-------------|------------------------------------------|
 */
Your container is probably running TestNG as its main process, and any run that is not considered "Passed tests" (i.e., an exit code other than 0) will result in a pod in the terminated/Error state.
You can confirm this by Determining the Reason for Pod Failure.
e.g., you can check your pod state; the output will be something like this:
$ kubectl get pod my-pod-name -o=json | jq .status.containerStatuses[].state
{
  "terminated": {
    "containerID": "docker://9bc2497ec0d2bc3b1b62483c217aaaaa1027102a5f7ff1688f47b94254",
    "exitCode": 1,
    "finishedAt": "2019-10-28T02:00:10Z",
    "reason": "Error",
    "startedAt": "2019-10-28T02:00:05Z"
  }
}
and check if the exit code matches your TestNG status code.
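If you want to see how that exit code is produced (or control it yourself), here is a minimal sketch, assuming your container entrypoint launches TestNG programmatically instead of via the testng command line (the class name and suite file below are hypothetical):
import java.util.Collections;
import org.testng.TestNG;

public class RunSuite {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        // Hypothetical suite file; point this at your own testng.xml
        testng.setTestSuites(Collections.singletonList("testng.xml"));
        testng.run();

        // getStatus() returns the same bit mask as the table above
        // (0 = passed, 1 = failed, 2 = skipped, ...).
        int status = testng.getStatus();
        System.out.println("TestNG status: " + status);

        // A non-zero exit code is what makes Kubernetes mark the container
        // as Error; exit with 0 here instead if test failures should not
        // fail the Job.
        System.exit(status);
    }
}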

Related

Alert Before Running Query Consisting Of Large Size Data

Do we have any mechanism in Snowflake to alert users who run a query against large tables? That way the user would know that Snowflake will consume many warehouse credits if they run the query against a large dataset.
There is no alert mechanism for this, but users may run the EXPLAIN command before running the actual query to estimate the bytes/partitions read:
explain select c_name from "SAMPLE_DATA"."TPCH_SF10000"."CUSTOMER";
+-------------+----+--------+-----------+-----------------------------------+-------+-----------------+-----------------+--------------------+---------------+
| step        | id | parent | operation | objects                           | alias | expressions     | partitionsTotal | partitionsAssigned | bytesAssigned |
+-------------+----+--------+-----------+-----------------------------------+-------+-----------------+-----------------+--------------------+---------------+
| GlobalStats |    |        |           |                                   |       |                 | 6585            | 6585               | 109081790976  |
| 1           | 0  |        | Result    |                                   |       | CUSTOMER.C_NAME |                 |                    |               |
| 1           | 1  | 0      | TableScan | SAMPLE_DATA.TPCH_SF10000.CUSTOMER |       | C_NAME          | 6585            | 6585               | 109081790976  |
+-------------+----+--------+-----------+-----------------------------------+-------+-----------------+-----------------+--------------------+---------------+
https://docs.snowflake.com/en/sql-reference/sql/explain.html
You can also assign users to specific warehouses, and use resource monitors to limit credits on those warehouses.
https://docs.snowflake.com/en/user-guide/resource-monitors.html#assignment-of-resource-monitors
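For example, a sketch (the monitor name, warehouse name, and quota below are placeholders):
create resource monitor reporting_monitor with credit_quota = 100
  triggers on 90 percent do notify
           on 100 percent do suspend;
alter warehouse reporting_wh set resource_monitor = reporting_monitor;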
As the third alternative, you may set STATEMENT_TIMEOUT_IN_SECONDS to prevent long running queries.
https://docs.snowflake.com/en/sql-reference/parameters.html#statement-timeout-in-seconds
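For example (placeholder warehouse name; the parameter can also be set at the account, user, or session level):
alter warehouse reporting_wh set statement_timeout_in_seconds = 3600;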

Why 'Lost connection to MySQL server during query' errors occur time-to-time (timeouts raised, pinging before query)

I'm creating an experiment in which the server side is a libuv-based C/C++ application that runs queries on a MySQL server (on localhost).
In most cases it just works, but sometimes I get 'Lost connection to MySQL server during query' at various places in the process.
I have tried raising all timeouts, but I think this is unrelated: if the server gets bombarded with requests (like one every second), the same error gets thrown.
+-------------------------------------+----------+
| Variable_name                       | Value    |
+-------------------------------------+----------+
| connect_timeout                     | 31536000 |
| deadlock_timeout_long               | 50000000 |
| deadlock_timeout_short              | 10000    |
| delayed_insert_timeout              | 300      |
| innodb_flush_log_at_timeout         | 1        |
| innodb_lock_wait_timeout            | 50       |
| innodb_print_lock_wait_timeout_info | OFF      |
| innodb_rollback_on_timeout          | OFF      |
| interactive_timeout                 | 31536000 |
| lock_wait_timeout                   | 31536000 |
| net_read_timeout                    | 31536000 |
| net_write_timeout                   | 31536000 |
| slave_net_timeout                   | 31536000 |
| thread_pool_idle_timeout            | 60       |
| wait_timeout                        | 31536000 |
+-------------------------------------+----------+
I'm pinging the server before doing queries.
// this->con was set up somewhere before, just like down below in the retry section
char query[] = "SELECT * FROM data WHERE id = 12;";
mysql_ping(this->con);
if (!mysql_query(this->con, query)) {
    // OK
    return 0;
} else {
    // reconnect usually helps
    // here I get the error message
    mysql_close(this->con);
    this->con = mysql_init(NULL);
    if (mysql_real_connect(this->con, this->host.c_str(), this->user.c_str(), this->password.c_str(), this->db.c_str(), 0, NULL, 0) == NULL) {
        // no DB, goodbye
        exit(6);
    }
    // retry
    if (!mysql_query(this->con, query)) {
        return 0;
    } else {
        // DB fail
        return 1;
    }
}
I have tried the reconnect option (see the sketch below); the problem is the same.
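For reference, a minimal sketch of what the reconnect option usually means with the MySQL C API (this is an assumption about how it was enabled, not the actual code above):
// Enable automatic reconnection before mysql_real_connect().
// Note that an implicit reconnect loses session state (temporary tables,
// user variables, open transactions), and the statement that was in
// flight when the connection dropped still fails once.
my_bool reconnect = 1;
mysql_options(this->con, MYSQL_OPT_RECONNECT, &reconnect);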
In my understanding this flow should be possible with single-threaded libuv and mysql:
1. set up db connection
2. run event loop
-> make queries based on IO events, get results
3. event loop ends
4. db close
What am I missing?

GAE leaving mysql connections open

Sometimes the GAE App Engine instance fails to respond successfully, for requests that apparently do not cause exceptions in the Django app.
Then I check the processlist in the MySQL instance and see that there are many unnecessary processes opened by localhost; probably the server app is trying to open a new connection and hits the process limit.
Why is the server creating new processes but failing to close the connections at the end? How can I close these connections programmatically?
If I restart the App Engine instance, the 500 errors (and MySQL threads) disappear.
| 7422 | root | localhost | prova2 | Sleep | 1278 | | NULL
| 7436 | root | localhost | prova2 | Sleep | 703 | | NULL
| 7440 | root | localhost | prova2 | Sleep | 699 | | NULL
| 7442 | root | localhost | prova2 | Sleep | 697 | | NULL
| 7446 | root | localhost | prova2 | Sleep | 694 | | NULL
| 7448 | root | localhost | prova2 | Sleep | 694 | | NULL
| 7450 | root | localhost | prova2 | Sleep | 693 | | NULL
Actually, the problematic code was middleware that stores the queries and produces some summary data about requests. The problem of sleeping connections disappears when I remove this section from appengine_config.py:
def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    app = recording.appstats_wsgi_middleware(app)
    return app

MySQL Import into Innodb table severely spikes at a certain point

I'm trying to migrate a 30GB database from one server to another.
The short story is that at a certain point in the process, the amount of time it takes to import records spikes severely. The following is from using the SOURCE command to import a chunk of 500k records (out of about ~25-30 million throughout the database) that was exported as an SQL file and tunnelled over SSH to the new server:
...
Query OK, 2871 rows affected (0.73 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2870 rows affected (0.98 sec)
Records: 2870 Duplicates: 0 Warnings: 0
Query OK, 2865 rows affected (0.80 sec)
Records: 2865 Duplicates: 0 Warnings: 0
Query OK, 2871 rows affected (0.87 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (2.60 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2866 rows affected (7.53 sec)
Records: 2866 Duplicates: 0 Warnings: 0
Query OK, 2879 rows affected (8.70 sec)
Records: 2879 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (7.53 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2873 rows affected (10.06 sec)
Records: 2873 Duplicates: 0 Warnings: 0
...
The spikes eventually average 16-18 seconds per ~2800 rows affected. Granted, I don't usually use SOURCE for a large import, but for the sake of showing legitimate output, I used it to understand when the spikes happen. Using the mysql command or mysqlimport yields the same results. Even piping the results directly into the new database instead of going through an SQL file shows these spikes.
As far as I can tell, this happens after a certain number of records are inserted into a table. The first time I boot up a server and import a chunk that size, it goes through just fine, give or take the estimated amount it handles until these spikes occur. I can't pin that down because I haven't replicated the issue consistently enough to conclude anything. There are ~20 tables that have under 500,000 records each, and they all imported just fine when those 20 tables were imported through a single command. This seems to only happen to tables that have an excessive amount of data.
Granted, the solutions I've come across so far seem to only address the natural DR that occurs as you import over time (the expected outcome in my case was that, by the end of importing 500k records, it would take 2-3 seconds per ~2800 rows, whereas those questions were addressing that it shouldn't take that long at the end). This comes from a single SugarCRM table called 'campaign_log', which has ~9 million records. I was able to import it in chunks of 500k back onto the old server I'm migrating off of without these spikes occurring, so I assume this has to do with my new server configuration.
Another thing is that whenever these spikes occur, the table being imported into seems to display its record count oddly. I know InnoDB gives count estimates, but the number isn't preceded by the ~ that indicates an estimate. It is usually accurate, and each time you refresh the table it doesn't change the amount it displays (this is based on what is reported through phpMyAdmin).
Here are the InnoDB system variables I have on the new server:
+---------------------------------+------------------------+
| Variable_name                   | Value                  |
+---------------------------------+------------------------+
| have_innodb                     | YES                    |
| ignore_builtin_innodb           | OFF                    |
| innodb_adaptive_flushing        | ON                     |
| innodb_adaptive_hash_index      | ON                     |
| innodb_additional_mem_pool_size | 8388608                |
| innodb_autoextend_increment     | 8                      |
| innodb_autoinc_lock_mode        | 1                      |
| innodb_buffer_pool_instances    | 1                      |
| innodb_buffer_pool_size         | 8589934592             |
| innodb_change_buffering         | all                    |
| innodb_checksums                | ON                     |
| innodb_commit_concurrency       | 0                      |
| innodb_concurrency_tickets      | 500                    |
| innodb_data_file_path           | ibdata1:10M:autoextend |
| innodb_data_home_dir            |                        |
| innodb_doublewrite              | ON                     |
| innodb_fast_shutdown            | 1                      |
| innodb_file_format              | Antelope               |
| innodb_file_format_check        | ON                     |
| innodb_file_format_max          | Antelope               |
| innodb_file_per_table           | OFF                    |
| innodb_flush_log_at_trx_commit  | 1                      |
| innodb_flush_method             | fsync                  |
| innodb_force_load_corrupted     | OFF                    |
| innodb_force_recovery           | 0                      |
| innodb_io_capacity              | 200                    |
| innodb_large_prefix             | OFF                    |
| innodb_lock_wait_timeout        | 50                     |
| innodb_locks_unsafe_for_binlog  | OFF                    |
| innodb_log_buffer_size          | 8388608                |
| innodb_log_file_size            | 5242880                |
| innodb_log_files_in_group       | 2                      |
| innodb_log_group_home_dir       | ./                     |
| innodb_max_dirty_pages_pct      | 75                     |
| innodb_max_purge_lag            | 0                      |
| innodb_mirrored_log_groups      | 1                      |
| innodb_old_blocks_pct           | 37                     |
| innodb_old_blocks_time          | 0                      |
| innodb_open_files               | 300                    |
| innodb_print_all_deadlocks      | OFF                    |
| innodb_purge_batch_size         | 20                     |
| innodb_purge_threads            | 1                      |
| innodb_random_read_ahead        | OFF                    |
| innodb_read_ahead_threshold     | 56                     |
| innodb_read_io_threads          | 8                      |
| innodb_replication_delay        | 0                      |
| innodb_rollback_on_timeout      | OFF                    |
| innodb_rollback_segments        | 128                    |
| innodb_spin_wait_delay          | 6                      |
| innodb_stats_method             | nulls_equal            |
| innodb_stats_on_metadata        | ON                     |
| innodb_stats_sample_pages       | 8                      |
| innodb_strict_mode              | OFF                    |
| innodb_support_xa               | ON                     |
| innodb_sync_spin_loops          | 30                     |
| innodb_table_locks              | ON                     |
| innodb_thread_concurrency       | 0                      |
| innodb_thread_sleep_delay       | 10000                  |
| innodb_use_native_aio           | ON                     |
| innodb_use_sys_malloc           | ON                     |
| innodb_version                  | 5.5.39                 |
| innodb_write_io_threads         | 8                      |
+---------------------------------+------------------------+
System Specs:
Intel Xeon E5-2680 v2 (Ivy Bridge) 8 Processors
15GB RAM
2x80 SSDs
CMD to Export:
mysqldump -u <olduser> -p<oldpw> <olddb> <table> --verbose --disable-keys --opt | ssh -i <privatekey> <newserver> "cat > <nameoffile>"
Thank you for any assistance. Let me know if there's any other information I can provide.
I figured it out. I increased innodb_log_file_size from 5MB to 1024MB. Not only did it significantly speed up the rate at which records were imported (it never went above 1 second per ~3000 rows), it also fixed the spikes. There were only 2 spikes across all the records I imported, and after they happened, the times immediately went back to under 1 second.
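For reference, a sketch of the change (the file path is an assumption; on MySQL 5.5 you also need a clean shutdown and to move the old ib_logfile* files aside before restarting with the new size):
# /etc/my.cnf (hypothetical location)
[mysqld]
# larger redo log reduces checkpoint stalls during bulk imports
innodb_log_file_size = 1024M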

SilverLight Datagrid

I am building a time management system where I'm using a DataGrid to list tasks/days, so you can register how much time you spend on a given task on a given day.
What I would like is to show a summary at the bottom of each day.
| Tasks  | Monday | Tuesday | Wednesday | Thursday | Friday |
| Task 1 | 1      | 1       | 0         | 0        | 0      |
| Task 2 | 2      | 0       | 0         | 0        | 0      |
| Task 3 | 3      | 2       | 0         | 0        | 0      |
|        | 6      | 3       | 0         | 0        | 0      |
Does anyone have any hints on how to add a footer-like effect like this?
I suggest you take a look at the agDataGridSuite from DevExpress. It supports summaries. They offer this grid as a free download. See link text
