Dynamic Excel array based on input

I am looking for a little help in making a formula-based dynamic array in Excel.
KPI | Tgt | number | Weight
FCR | 0% | 1 | 45%
FCR | 60% | 2 | 45%
FCR | 80% | 3 | 45%
Leads | 45% | 4 | 25%
Leads | 50% | 5 | 25%
Leads | 200% | 6 | 25%
Attrition | 8% | 7 | 10%
Attrition | 12% | 8 | 10%
Attrition | 100% | 9 | 10%
Abandon | 1% | 10 | 20%
Abandon | 5% | 11 | 20%
Abandon | 200% | 12 | 20%
So if I have a Leads score of 3% in cell E2, then I want the output in F2 to be the number 4, because 3% is below the 45% target, hence 4.
PS: I have a spreadsheet but don't know how to attach it.

Try this in F2:
=INDEX(C:C,AGGREGATE(15,6,ROW(B:B)/((B:B>=E2)*(A:A="Leads")),1))
This will only return matches for rows with "Leads" in column A. The hard-coded "Leads" can be swapped for a cell reference if you would rather drive it from another cell.
EDIT:
Based on your comment below, this formula works as well if entered as an array formula (CTRL + SHIFT + ENTER):
=INDEX(C:C,MATCH(1,(A:A="Leads")*(B:B>=E2),0))
EDIT 2:
We can cover our bases for an unsorted list in column B by combining the two solutions:
{=INDEX(C:C,MATCH(1,(A:A="Leads")*(B:B=AGGREGATE(15,6,B:B/((B:B>=E2)*(A:A="Leads")),1)),0))}
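For anyone who finds the AGGREGATE/MATCH combination hard to read, here is the same lookup logic spelled out in plain code. This is a Python sketch for illustration only; the band values are copied from the table above and the function name is made up:

```python
# Each entry mirrors a sheet row: (KPI, target, number, weight)
TABLE = [
    ("FCR", 0.00, 1, 0.45), ("FCR", 0.60, 2, 0.45), ("FCR", 0.80, 3, 0.45),
    ("Leads", 0.45, 4, 0.25), ("Leads", 0.50, 5, 0.25), ("Leads", 2.00, 6, 0.25),
    ("Attrition", 0.08, 7, 0.10), ("Attrition", 0.12, 8, 0.10), ("Attrition", 1.00, 9, 0.10),
    ("Abandon", 0.01, 10, 0.20), ("Abandon", 0.05, 11, 0.20), ("Abandon", 2.00, 12, 0.20),
]

def lookup(kpi: str, score: float) -> int:
    """Return the Number of the first row for `kpi` whose target meets or exceeds the score."""
    for row_kpi, target, number, _weight in TABLE:
        if row_kpi == kpi and target >= score:
            return number
    raise ValueError("no matching band")

print(lookup("Leads", 0.03))  # -> 4, matching the E2 = 3% example
```

Because the targets are listed in ascending order within each KPI, "first matching row" is exactly what the AGGREGATE/MATCH formulas pick out.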

Related

How to model a table to save many metrics for the same timestamp in QuestDB?

I have a huge number of sensors which tick every minute with some floating-point data, and I want to save that data in QuestDB. I see two options:
Option 1 is to create a wide table with one column per sensor and one row per minute:
| Time | Sensor1 | Sensor2 | .... | Sensor1232123 |
| 10:01 | 3.4 | 0.0 | .... | 23.4 |
| 10:02 | 5.46 | 23.987 | .... | 0.0 |
...
And option 2:
| Time | Id | Value |
| 10:01 | 1 | 3.4 |
| 10:01 | 2 | 0.0 |
...
| 10:01 | 123123 | 23.4 |
| 10:02 | 1 | 5.46 |
| 10:02 | 2 | 23.987 |
...
| 10:02 | 3 | 0.0 |
...
Since my data comes from individual sensors independently, I'm inclined to use option 2, but QuestDB requires the designated timestamp column to be ascending, so I cannot have duplicate values in the Time column.
It sounds like a pretty common case, but I cannot figure out how to store my sensor data in one table.
You should use option 2 as you described, where timestamp, sensor id, and value are saved in the table.
Repeated timestamps are allowed, so it is valid to have all the sensors as individual rows at 10:01, 10:02, etc.
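For illustration, here is a rough sketch of option 2 driven from Python over QuestDB's REST /exec endpoint. The table name, column names, and the default localhost:9000 address are assumptions for the example, not something from the question:

```python
import requests

QUESTDB_EXEC = "http://localhost:9000/exec"  # assumed default REST endpoint

def run(sql: str) -> None:
    """Send a single SQL statement to QuestDB over the REST API."""
    resp = requests.get(QUESTDB_EXEC, params={"query": sql})
    resp.raise_for_status()

# Narrow table (option 2): designated timestamp + sensor id + value.
run("""
    CREATE TABLE IF NOT EXISTS sensor_readings (
        ts TIMESTAMP,
        sensor_id INT,
        value DOUBLE
    ) TIMESTAMP(ts) PARTITION BY DAY
""")

# Duplicate timestamps are fine: every sensor gets its own row per minute.
run("INSERT INTO sensor_readings VALUES ('2024-01-01T10:01:00.000000Z', 1, 3.4)")
run("INSERT INTO sensor_readings VALUES ('2024-01-01T10:01:00.000000Z', 2, 0.0)")
run("INSERT INTO sensor_readings VALUES ('2024-01-01T10:02:00.000000Z', 1, 5.46)")
```

With the sensor id in its own column, querying one sensor over time or all sensors at a given minute is just a WHERE filter, and many rows can share the same designated timestamp.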

Algorithm or pseudo code for finding combinations from table

I have a table with thousands of items with a lot of attributes (approx. 15+). I would like to select the following results:
1. Select all combinations of items that together reach at least 100% of each attribute. Exactly 100% would be nice but that's not necessary, so it can go over a little or be a little under (maybe +-2%).
2. All combinations would be a big dataset, so I think it would be better to sort them by price and select only the 10 cheapest ones.
3. Also, what if I would like to modify the selection so that one or several attributes can't get over some value, like 50% for example?
| item name | attribute 1 | attribute 2 | attribute 3 | price |
| --------- | ----------- | ----------- | ----------- | ----- |
| item 1 | 25% | 1% | 5% | 1€ |
| item 2 | 10% | 10% | 10% | 2€ |
| item 3 | 5% | 20% | 5% | 3€ |
| item 4 | 20% | 15% | 50% | 12€ |
I don't know if there is an existing algorithm for my problem (I hope so), or whether my problem has a name I can google, but I would be thankful for any tips on how I should proceed.
The only way I could think of so far is to brute-force all the combinations and drop the unusable ones (a rough sketch of this is shown after the edit below), but I don't think that's the right way (maybe I'm wrong and that's the only way).
The number of items, prices and attribute values can change over time. If they were static, I would just run the brute-force option once and be done with it.
Sorry if this question was already asked.
EDIT:
As an example I can provide nutritional information about food (all the numbers are made up).
The daily intake of carbohydrates/fat/protein is 225g/30g/65g:
| item name | carbohydrates | fat | protein | sodium | price |
| --------- | ------------- | --- | ------- | ------ | ----- |
| apple | 10g | 1g | 5g | 1mg | 1€ |
| banana | 20g | 2g | 10g | 1mg | 2€ |
| pear | 15g | 3g | 5g | 5mg | 3€ |
1. Find me a combination of foods which will reach the daily intake.
2. Now I want the same as in 1., but sorted by price / selecting the cheapest.
3. I want only combinations with sodium not exceeding 30mg.
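For reference, here is a rough sketch of the brute-force approach mentioned above, run against the made-up nutrition numbers from the edit. The tolerance handling, the sodium cap, and the maximum combination size are all assumptions for illustration; the point is mainly to show how quickly the search space explodes:

```python
from itertools import combinations

# Made-up items from the edit: (name, carbohydrates g, fat g, protein g, sodium mg, price in EUR)
ITEMS = [
    ("apple",  10, 1,  5, 1, 1.0),
    ("banana", 20, 2, 10, 1, 2.0),
    ("pear",   15, 3,  5, 5, 3.0),
]
TARGETS = (225, 30, 65)   # daily intake of carbohydrates / fat / protein
TOLERANCE = 0.02          # the "+-2%" slack from point 1
SODIUM_CAP = 30           # extra constraint from point 3 (mg)

def valid(combo):
    """True if the combo reaches every target (within the 2% slack) and respects the sodium cap."""
    totals = [sum(item[col] for item in combo) for col in (1, 2, 3)]
    sodium = sum(item[4] for item in combo)
    reaches_targets = all(total >= target * (1 - TOLERANCE)
                          for total, target in zip(totals, TARGETS))
    return reaches_targets and sodium <= SODIUM_CAP

def cheapest_combinations(items, max_size=10, top=10):
    """Brute force: try every subset of up to max_size items, keep the `top` cheapest valid ones."""
    hits = []
    for size in range(1, max_size + 1):
        for combo in combinations(items, size):
            if valid(combo):
                price = sum(item[5] for item in combo)
                hits.append((price, [item[0] for item in combo]))
    return sorted(hits)[:top]

print(cheapest_combinations(ITEMS))  # [] here -- three fruits cannot reach 225 g of carbohydrates
```

With thousands of items the number of subsets grows exponentially, which is exactly why this loop stops being practical very quickly.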

Solr poor performance on date range query

I'm currently struggling to get decent performance from a date range query against a ~18M document core with NRT indexing, in Solr 4.10 (CDH 5.14).
I tried multiple strategies but everything seems to fail.
Each document has multiple versions (10 to 100), each valid over a different, non-overlapping period of time (startTime/endTime).
The query pattern is the following: query on the referenceNumber (or other criteria) but only return documents valid at a referenceDate (day precision). 75% of queries select a referenceDate within the last 30 days.
If we query without the referenceDate, we have very good performance, but a 100x slowdown with the additional referenceDate filter, even when forcing it as a postfilter.
Here are some perf tests from a Python script executing HTTP queries and computing the QTime for 100 distinct referenceNumbers.
+----+-------------------------------------+----------------------+--------------------------+
| ID | Query | Results | Comment |
+----+-------------------------------------+----------------------+--------------------------+
| 1 | q=referenceNumber:{referenceNumber} | 100 calls in <10ms | Performance OK |
+----+-------------------------------------+----------------------+--------------------------+
| 2 | q=referenceNumber:{referenceNumber} | 99 calls in <10ms | 1 call to warm up |
| | &fq=startDate:[* to NOW/DAY] | 1 call in >=1000ms | the cache then all |
| | AND endDate:[NOW/DAY to *] | | queries hit the filter |
| | | | cache. Problem: as |
| | | | soon as new documents |
| | | | come in, they invalidate |
| | | | the cache. |
+----+-------------------------------------+----------------------+--------------------------+
| 3 | q=referenceNumber:{referenceNumber} | 99 calls in >=500ms | The average of |
| | &fq={!cache=false cost=200} | 1 call in >=1000ms | calls is 734.5ms. |
| | startDate:[* to NOW/DAY] | | |
| | AND endDate:[NOW/DAY to *] | | |
+----+-------------------------------------+----------------------+--------------------------+
How is it possible that the additional date range filter query creates a 100x slowdown? From this blog, I would have expected similar performance from the daterange query as without the additional filter: http://yonik.com/advanced-filter-caching-in-solr/
Or is the only option to change the softCommit/hardCommit delays, create 30 warmup fq for the past 30 days, and tolerate poor performance on 25% of our queries?
Edit 1: Thanks for the answer. Unfortunately, using integers instead of tdate does not seem to provide any performance gains; it can only leverage caching, like query ID 2 above. That means we would need a warmup strategy of 30+ fq.
+----+-------------------------------------+----------------------+--------------------------+
| ID | Query | Results | Comment |
+----+-------------------------------------+----------------------+--------------------------+
| 4 | fq={!cache=false} | 35 calls in <10ms | |
| | referenceNumber:{referenceNumber} | 65 calls in >10ms | |
+----+-------------------------------------+----------------------+--------------------------+
| 5 | fq={!cache=false} | 9 calls in >100ms | |
| | referenceNumber:{referenceNumber} | 6 calls in >500ms | |
| | AND versionNumber:[2 TO *] | 85 calls in >1000ms | |
+----+-------------------------------------+----------------------+--------------------------+
Edit 2: It seems that moving my referenceNumber from q to fq and setting different costs improves the query time (not perfect, but better). What's weird, though, is that a cost >= 100 is supposed to be executed as a postFilter, yet setting the cost from 20 to 200 does not seem to impact performance at all. Does anyone know how to see whether an fq param is executed as a post filter?
+----+-------------------------------------+----------------------+--------------------------+
| 6 | fq={!cache=false cost=0} | 89 calls in >100ms | |
| | referenceNumber:{referenceNumber} | 11 calls in >500ms | |
| | &fq={!cache=false cost=200} | | |
| | startDate:[* TO NOW] AND | | |
| | endDate:[NOW TO *] | | |
+----+-------------------------------------+----------------------+--------------------------+
| 7 | fq={!cache=false cost=0} | 36 calls in >100ms | |
| | referenceNumber:{referenceNumber} | 64 calls in >500ms | |
| | &fq={!cache=false cost=20} | | |
| | startDate:[* TO NOW] AND | | |
| | endDate:[NOW TO *] | | |
+----+-------------------------------------+----------------------+--------------------------+
Hi, I have another solution for you; it should give good performance for the same query in Solr.
My suggestion is to store the dates in int format. Please find an example below.
Your start date: 2017-03-01
Your end date: 2029-03-01
Suggested int format:
Start date: 20170301
End date: 20290301
When you fire the same query with int values instead of dates, it works faster, as expected.
So your query will be:
q=referenceNumber:{referenceNumber}
&fq=startNewDate:[* TO YYYYMMDD]
AND endNewDate:[YYYYMMDD TO *]
Hope it will help you.
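For illustration, a small client-side sketch of the int-date idea. The startNewDate/endNewDate field names come from the answer above; the requests call, the Solr URL, the core name, and the reference number are placeholders I made up:

```python
from datetime import date
import requests

def to_int(d: date) -> int:
    """Encode a date as a YYYYMMDD integer, e.g. 2017-03-01 -> 20170301."""
    return d.year * 10000 + d.month * 100 + d.day

reference_date = to_int(date.today())

params = {
    "q": "referenceNumber:12345",  # hypothetical reference number
    "fq": f"startNewDate:[* TO {reference_date}] AND endNewDate:[{reference_date} TO *]",
    "wt": "json",
}
# Assumed Solr URL and core name; adjust to your deployment.
resp = requests.get("http://localhost:8983/solr/mycore/select", params=params)
print(resp.json()["response"]["numFound"])
```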

MySQL Import into Innodb table severely spikes at a certain point

I'm trying to migrate a 30GB database from one server to another.
The short story is that at a certain point in the process, the time it takes to import records spikes severely. The following is from using the SOURCE command to import a chunk of 500k records (out of roughly 25-30 million throughout the database) that was exported as an SQL file and ssh-tunnelled over to the new server:
...
Query OK, 2871 rows affected (0.73 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2870 rows affected (0.98 sec)
Records: 2870 Duplicates: 0 Warnings: 0
Query OK, 2865 rows affected (0.80 sec)
Records: 2865 Duplicates: 0 Warnings: 0
Query OK, 2871 rows affected (0.87 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (2.60 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2866 rows affected (7.53 sec)
Records: 2866 Duplicates: 0 Warnings: 0
Query OK, 2879 rows affected (8.70 sec)
Records: 2879 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (7.53 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2873 rows affected (10.06 sec)
Records: 2873 Duplicates: 0 Warnings: 0
...
The spikes eventually average out to 16-18 seconds per ~2800 rows affected. Granted, I don't usually use SOURCE for a large import, but for the sake of showing legitimate output, I used it to understand when the spikes happen. Using the mysql command or mysqlimport yields the same results. Even piping the results directly into the new database instead of going through an SQL file produces these spikes.
As far as I can tell, this happens after a certain amount of records has been inserted into a table. The first time I boot up a server and import a chunk that size, it goes through just fine; it handles roughly the same amount, give or take, before these spikes occur. I can't correlate it exactly because I haven't replicated the issue consistently enough to conclude anything definite. There are ~20 tables with under 500,000 records each that all imported just fine when those 20 tables were imported through a single command, so this seems to only happen to tables that have an excessive amount of data.

Granted, the solutions I've come across so far seem to only address the natural DR that occurs when you import over time (the expected output in my case was that, at the end of importing 500k records, it would eventually take 2-3 seconds per ~2800 rows, whereas those questions seem to address that it shouldn't take that long at the end). This comes from a single SugarCRM table called 'campaign_log', which has ~9 million records. I was able to import it in chunks of 500k back onto the old server I'm migrating off of without these spikes occurring, so I assume this has to do with my new server configuration.

Another thing: whenever these spikes occur, the table being imported into displays its record count in an odd way. I know InnoDB gives count estimates, but the number is not shown with the ~ that indicates an estimate; it is usually accurate, and refreshing the table does not change the amount displayed (this is based on what it reports through PHPMyAdmin).
Here are the relevant commands and InnoDB system variables on the new server:
INNODB System Vars:
+---------------------------------+------------------------+
| Variable_name | Value |
+---------------------------------+------------------------+
| have_innodb | YES |
| ignore_builtin_innodb | OFF |
| innodb_adaptive_flushing | ON |
| innodb_adaptive_hash_index | ON |
| innodb_additional_mem_pool_size | 8388608 |
| innodb_autoextend_increment | 8 |
| innodb_autoinc_lock_mode | 1 |
| innodb_buffer_pool_instances | 1 |
| innodb_buffer_pool_size | 8589934592 |
| innodb_change_buffering | all |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_data_home_dir | |
| innodb_doublewrite | ON |
| innodb_fast_shutdown | 1 |
| innodb_file_format | Antelope |
| innodb_file_format_check | ON |
| innodb_file_format_max | Antelope |
| innodb_file_per_table | OFF |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | fsync |
| innodb_force_load_corrupted | OFF |
| innodb_force_recovery | 0 |
| innodb_io_capacity | 200 |
| innodb_large_prefix | OFF |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_buffer_size | 8388608 |
| innodb_log_file_size | 5242880 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 75 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_old_blocks_pct | 37 |
| innodb_old_blocks_time | 0 |
| innodb_open_files | 300 |
| innodb_print_all_deadlocks | OFF |
| innodb_purge_batch_size | 20 |
| innodb_purge_threads | 1 |
| innodb_random_read_ahead | OFF |
| innodb_read_ahead_threshold | 56 |
| innodb_read_io_threads | 8 |
| innodb_replication_delay | 0 |
| innodb_rollback_on_timeout | OFF |
| innodb_rollback_segments | 128 |
| innodb_spin_wait_delay | 6 |
| innodb_stats_method | nulls_equal |
| innodb_stats_on_metadata | ON |
| innodb_stats_sample_pages | 8 |
| innodb_strict_mode | OFF |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 30 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 0 |
| innodb_thread_sleep_delay | 10000 |
| innodb_use_native_aio | ON |
| innodb_use_sys_malloc | ON |
| innodb_version | 5.5.39 |
| innodb_write_io_threads | 8 |
+---------------------------------+------------------------+
System Specs:
Intel Xeon E5-2680 v2 (Ivy Bridge) 8 Processors
15GB Ram
2x80 SSDs
CMD to Export:
mysqldump -u <olduser> -p<oldpw> <olddb> <table> --verbose --disable-keys --opt | ssh -i <privatekey> <newserver> "cat > <nameoffile>"
Thank you for any assistance. Let me know if there's any other information I can provide.
I figured it out. I increased innodb_log_file_size from 5MB to 1024MB. While it significantly increased the rate at which records were imported (it never went above 1 second per ~3000 rows), it also fixed the spikes. There were only 2 spikes across all the records I imported, and after each one the import immediately went back to taking under 1 second.
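For anyone applying the same fix: on MySQL 5.5, innodb_log_file_size is not a dynamic variable, so it has to be changed in my.cnf and the old ib_logfile* files moved away after a clean shutdown before the server will start with the new size. A small sketch of checking the result from Python (mysql-connector-python assumed; connection details are placeholders):

```python
# Verify that the new redo log size is in effect after setting
# innodb_log_file_size = 1024M in my.cnf and restarting MySQL.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()
cur.execute("SHOW VARIABLES LIKE 'innodb_log_file_size'")
name, value = cur.fetchone()
print(f"{name} = {int(value) / (1024 * 1024):.0f} MB")  # expect 1024 MB after the change
cur.close()
conn.close()
```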

Difficult kind of Hierarchical Data in a Relational Database

I have "components" which can be assembled in different ways into a "system". I want my database to hold all these "components", their type specific data and define how they are connected to each other to form a "system".
The systems are typically gearboxes and they can have rather complex branched designs. Let's start with an easy example:
This system is built up out of Masses (horizontal lines) and Stiffnesses (vertical lines). Gears and clutches are types of masses and come in pairs. Colors represent different branch speeds due to gear ratios. Here's a (bad) example of how I could store everything from this particular illustration:
ID | Type | Clutch | Ends | DrivenBy | NoOfTeeth| Mass | Stiffness
--- | ---- | ------ | ---- | --------- | -------- | ---- | ---------
1 | Mass | | Input1 | | | 5 |
2 | Stiffness | | | | | | 15
3 | Mass | 1.1 | | | | 2 |
4 | Mass | 1.2 | | | | 3 |
5 | Stiffness | | | | | | 20
6 | Gear | | | | 10 | 4 |
7 | Stiffness | | | | | | 30
8 | Gear | | | | 4 | 5 |
9 | Gear | | | 8 | 7 | 2 |
10 | Stiffness | | | | | | 40
11 | Mass | | | | | 4 |
12 | Stiffness | | Output1 | | | | 10
13 | Gear | | | 6 | 5 | 4 |
14 | Stiffness | | | | | | 20
15 | Mass | 2.1 | | | | 4 |
16 | Mass | 2.2 | | | | 3 |
17 | Stiffness | | | | | | 30
18 | Mass | | Output2 | | | 2 |
Obviously, this is not a very good way to store the data. This design pattern somewhat resembles "repeated attributes", since each component type has different attributes to be filled. I could create a table for each type of component, but things become more complex when looking at other examples, such as this 2-stage gearbox:
There are also examples with more than 1 input and several outputs, but I can't post more links due to low reputation.
Either way, you will see that the usual hierarchical data storage doesn't apply here, because the data is not purely "tree-shaped" with everything branching off from one main branch.
I think that even though I could store the data in the above-mentioned way, I will run into huge difficulties when it comes to the programming stage.
To add to the complexity, these gearboxes are actually sub-systems to a much bigger system.
So, any suggestions on a good way to store this type of data?
Perhaps this is a possible way of doing it?
Here you will see that there is a "main" table called GearboxBranch, keeping track of all elements in the gearbox, giving them an id and identifying which branch each element belongs to.
Then the elements themselves: masses are defined in their own dedicated table, and so are stiffnesses. Gears and clutches (which are types of masses) are then defined in their respective tables. There is a recursive relationship in the gear table, since one gear has to be driven by at least one other gear.
Furthermore, the table with Shaft Ends defines which of the elements in the gearbox are input or output and what number they have.
I can't seem to see any problems with this method, but I'm a little unsure how to get the data out of the database. There will be considerable coding involved, I'm afraid.
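To make that concrete, here is a minimal sketch of the described tables in SQLite via Python. All table and column names are my reading of the answer above rather than a confirmed design, and the recursive query at the end is just one example of "getting data out" by walking the DrivenBy relationship (no rows are inserted, so it prints an empty list):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per element, tied to a branch (speed group) of the gearbox.
    CREATE TABLE GearboxBranch (
        element_id INTEGER PRIMARY KEY,
        branch_id  INTEGER NOT NULL
    );

    -- Type-specific tables, one per component kind.
    CREATE TABLE Mass (
        element_id INTEGER PRIMARY KEY REFERENCES GearboxBranch(element_id),
        mass       REAL NOT NULL
    );
    CREATE TABLE Stiffness (
        element_id INTEGER PRIMARY KEY REFERENCES GearboxBranch(element_id),
        stiffness  REAL NOT NULL
    );
    CREATE TABLE Gear (
        element_id  INTEGER PRIMARY KEY REFERENCES Mass(element_id),
        no_of_teeth INTEGER NOT NULL,
        driven_by   INTEGER REFERENCES Gear(element_id)  -- recursive relationship
    );
    CREATE TABLE Clutch (
        element_id INTEGER PRIMARY KEY REFERENCES Mass(element_id),
        pair_label TEXT NOT NULL          -- e.g. '1.1' / '1.2'
    );
    CREATE TABLE ShaftEnd (
        element_id INTEGER PRIMARY KEY REFERENCES GearboxBranch(element_id),
        kind       TEXT NOT NULL,         -- 'Input' or 'Output'
        number     INTEGER NOT NULL
    );
""")

# Example of "getting data out": walk the chain of gears driven, directly or
# indirectly, by a chosen gear (id 8 is just a sample value), using a
# recursive common table expression.
rows = conn.execute("""
    WITH RECURSIVE train(element_id) AS (
        SELECT element_id FROM Gear WHERE element_id = 8
        UNION ALL
        SELECT g.element_id FROM Gear g JOIN train t ON g.driven_by = t.element_id
    )
    SELECT element_id FROM train;
""").fetchall()
print(rows)
```

Recursive queries like this one are how most of the traversal logic could stay in the database rather than in application code, which may reduce the coding effort you are worried about.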
