LiveCycle expanding table requiring total to equal 100 - livecycle

LiveCycle -- novice user. I have an expanding table where people enter the percentage of time they spend on a job duty. The bottom line of the table adds up all the percentages. I want an error to pop up if the total doesn't equal 100.

Related

Oracle index barely speeding up aggregate calculations

I created a table with two columns, a and b. Column a is simply the numbers 1 to 100 million. Column b is a random integer between 0 and 999 inclusive. I wanted to use this table to check how indexes improve calculations. So I checked the following:
select count(*) from my_table where b = 332
select avg(a) from my_table where b = 387
The 332 and 387 are just random integers; I wanted to make sure nothing was being cached, so I switched the value between queries.
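For reference, a setup along these lines reproduces the test. The table and column names match the question, but the population query itself is an assumption (and in practice a table this large would probably be loaded in batches):
create table my_table as
select level as a,                              -- 1 to 100 million
       trunc(dbms_random.value(0, 1000)) as b   -- random integer 0..999
from dual
connect by level <= 100000000;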
Then I created an index:
create bitmap index myindx1 on my_table (b);
commit;
This brought the count(*) down from 14 seconds to 75 milliseconds, success!
But the avg(a) didn't fare so well. It actually got worse, going from 8 seconds to 10 seconds. I didn't test this a ton of times, and based on the plans it looks to be a fluke, but at the very least it doesn't seem to be doing as much better as I expected it to.
The explain plan without the index looks like:
The explain plan with the index looks like:
So it looks like it's helping a bit, but is it really that much more expensive to average numbers than count them? And way more expensive to average numbers than to do a full table scan? I thought this index would cut my query into a fraction of the original cost rather than just shaving off a little bit of time. Is there something else I can do to speed up this query?
Thanks.
The problem is the way you set up your test - it isn't realistic and it is bad for indexes.
First: you have just two integer columns in your table, so each row is VERY small. So, Oracle can fit a lot of rows into each database block -- like a few thousand rows per block.
Second: you created your indexed data randomly, with values between 0 and 999.
Put those two facts together and what can we guess? Answer: just about every single database block is going to have at least one row with any given value of column B.
So, no matter what value of B you look for, you are going to wind up reading every block in your table one at a time (i.e.: "sequential read").
Compare that to the plan using no index -- a full table scan -- where Oracle will still read every single block, but it will read them several blocks at a time (i.e., "scattered read").
No wonder your index didn't help.
If you want a better test, add column C to your test table that is just a string of 200-300 characters (e.g., "XXXXXXXXX..."). This will reduce the number of rows per block to a more realistic value and you should see better gains from your index.
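For example, a sketch of that wider test table (the 250-character filler value is arbitrary):
create table my_table2 as
select a, b, rpad('X', 250, 'X') as c   -- ~250 bytes of padding per row
from my_table;

create bitmap index myindx2 on my_table2 (b);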
LAST NOTE: be very careful about using a BITMAP index. They are all but unusable on tables that have any sort of DML (insert, update, delete) happening on them! Read all about them before using one!
UPDATE
Clarification on this:
So it looks like it's helping a bit, but is it really that much more expensive to average numbers than count them? And way more expensive to average numbers than to do a full table scan?
The reason your index helped your COUNT(*) query is that the index by itself will tell Oracle how many rows meet the condition B=332, so it does not need to read the table blocks and therefore does not suffer from the problem I described above (i.e., reading each table block one-by-one).
It's not that COUNT() is "faster" than AVG(). It's just that, in your test, the COUNT could be computed using only the index, whereas AVG needed information from the table.
Bitmap indexes should not be used in OLTP systems. Their maintenance cost is too high.
IMHO a plain B*tree index will be enough. An INDEX RANGE SCAN traverses from the root to the leftmost leaf having the value "332" and then iterates from left to right, visiting all the leaves with the same value of "B". This is all you want.
If you want to speed it up even more, you can create a so-called covering index. Put both columns "B" and "A" (in this order) into the index. Then you avoid the lookup into the table for the value of "A" when "B" matches. This is especially helpful if the table contains many columns you do not care about.
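A sketch of that covering index for the queries in the question (the index name is arbitrary):
create index myidx2 on my_table (b, a);
-- "select avg(a) from my_table where b = 387" can now be answered
-- from the index alone (an INDEX RANGE SCAN), with no table access.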

How to calculate actual data size in every MariaDB table row?

Are there any techniques to calculate the actual data size used per SQL table row, including enabled indexes and log records?
Summing the field sizes would not be correct, because some fields can be empty or the data can be shorter than the field size.
The goal is to know exactly how much data is used per user.
Perhaps I could do this on the handler side.
With the word "exactly", I have to say "no".
Change that to "approximately", and I say
SHOW TABLE STATUS
and look at Avg_row_length. This info is also available in information_schema.TABLES.
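For example, the information_schema equivalent (the schema and table names are placeholders):
SELECT table_name, avg_row_length, data_length, index_length
FROM information_schema.TABLES
WHERE table_schema = 'my_db' AND table_name = 'my_table';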
But, that is just an average. And not a very accurate average at that.
Do you care about a hundred bytes here or there? Do users own rows in a single table? What the heck is going on?
There are some crude formulas for computing the size of Data rows and Index rows, but nothing on Log records. One of the problems is that if there is a "block split" in a BTree because someone else inserted a row, do you divvy up the new block evenly across all users? Or what?

Allow user to enter data in one currency

I have created four TM1 cubes: Rate for hour, Hours, Rate of exchange and Revenue.
In the first one, the user enters rates (costs) in different currencies.
In the second one, the user enters customer hours (for example, how much time a customer consultation took).
In the third, the user enters the rate of exchange for every currency.
In Revenue, based on the data in the previous cubes, I calculate all revenue in euros.
The problem is when the user enters the same rate in more than one currency. Then the revenue in the Revenue cube is bigger than it should be.
My question: is there a way to prevent users from entering rates in more than one currency? All the approaches I have tried end up with a circular reference error.
Your question is almost impossible to answer in specific terms because you've provided no specific details of your cubes, dimensions, elements or rules.
In general terms, however... TM1 is not a relational database and other than picklists has few input restrictions. There are usually at least a couple of ways that you can work around that, though. In this case I assume (again, in the absence of specifics) that the relevant dimension in the first cube has an input element for each currency.
Instead of that you could have two input elements; one for the amount, and another for the currency code (regulated by picklist). Your rule in the Revenue cube then evaluates the relevant currency element by looking at the currency code input. That will allow it to look up the relevant exchange rate from the third cube via a DB() function. That rate is multiplied by the work rate that has been entered into the first cube and the hours entered into the second cube to calculate the revenue.

Large number of entries: how to calculate quickly the total?

I am writing a rather large application that allows people to send text messages and emails. I will charge 7c per SMS and 2c per email sent. I will allow people to "recharge" their account. So, the end result is likely to be a database table with a few small entries like +100 and many, many entries like -0.02 and -0.07.
I need to check a person's balance immediately when they are trying to send an email or a message.
The obvious answer is to have a cached "total" somewhere and update it whenever something is added or taken out. However, as always in programming, there is more to it: what about monthly statements, where the balance needs to be carried forward from the previous month? My "intuitive" solution is to have two levels of cache: one for the current month, and one entry for each month (or billing period) with three entries (sketched after the list):
The total added
The total taken out
The balance to that point
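A minimal sketch of that per-period summary table; the names and types are assumptions, not an established schema:
CREATE TABLE account_period_summary (
    account_id      INT           NOT NULL,
    period_start    DATE          NOT NULL,             -- first day of the billing period
    total_added     DECIMAL(12,2) NOT NULL DEFAULT 0,   -- recharges
    total_taken_out DECIMAL(12,2) NOT NULL DEFAULT 0,   -- SMS/email charges
    closing_balance DECIMAL(12,2) NOT NULL DEFAULT 0,   -- balance carried forward
    PRIMARY KEY (account_id, period_start)
);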
Are there better, established ways to deal with this problem?
Largely depends on the RDBMS.
If it were SQL Server, one solution is to create an Indexed view (or views) to automatically incrementally calculate and hold the aggregated values.
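A rough sketch of the indexed-view approach, assuming a hypothetical dbo.transactions table with non-nullable account_id and amount columns (recharges positive, charges negative):
CREATE VIEW dbo.v_account_balance
WITH SCHEMABINDING
AS
SELECT account_id,
       SUM(amount)  AS balance,    -- running balance per account
       COUNT_BIG(*) AS row_count   -- required in an indexed view that uses SUM
FROM dbo.transactions
GROUP BY account_id;
GO
CREATE UNIQUE CLUSTERED INDEX IX_v_account_balance
    ON dbo.v_account_balance (account_id);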
Another solution is to use triggers on the finest-granularity detail table to maintain the aggregates whenever a row is inserted.
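And a corresponding trigger sketch, again using the hypothetical dbo.transactions and dbo.account_balance tables (it assumes a balance row already exists for each account):
CREATE TRIGGER trg_transactions_insert
ON dbo.transactions
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- add the newly inserted amounts to each affected account's cached balance
    UPDATE b
    SET b.balance = b.balance + i.total
    FROM dbo.account_balance AS b
    JOIN (SELECT account_id, SUM(amount) AS total
          FROM inserted
          GROUP BY account_id) AS i
      ON i.account_id = b.account_id;
END;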

How to decrease the search time in Sql Server

I have created a Windows form that uses a SQL Server database. The form contains a search grid which brings up all the bank account information for a person. The search grid contains a special field, "Number of Accounts", which displays the number of accounts a person has with a bank.
There are more than 100,000 records in the table from which the data is fetched. I just want to know how I should decrease the response time, or the search time, when getting the data from the table into the search grid.
When I run the page it takes a very long time for the records to be displayed in the search grid. Moreover, it does not fetch the data unless I provide search criteria (a To and From date).
Is there any possible way to decrease the search time so that the data gets displayed in the grid quickly?
There are a few things that you can do:
Only fetch the minimum amount of data that you need for your results - this means only select the needed columns and limit the number of rows.
In addition to the above, consider using paging in the UI so you can further limit the amount of data returned; there is no point in showing a user 100,000 rows (see the sketch after this list).
If you haven't done so already, add indexes to the table (though at 100,000 rows, things shouldn't be that slow anyway). I can't go into detail about how to do that.
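A rough sketch of the paging and index ideas; the table, column, and index names below are made up for illustration, and @FromDate, @ToDate, and @PageNumber are assumed parameters:
-- a nonclustered index to support the date-range filter
CREATE NONCLUSTERED INDEX IX_AccountSearch_TransactionDate
    ON dbo.AccountSearch (TransactionDate)
    INCLUDE (PersonName, AccountNumber, NumberOfAccounts);

-- fetch one page of 50 rows at a time (SQL Server 2012 or later)
SELECT PersonName, AccountNumber, NumberOfAccounts
FROM dbo.AccountSearch
WHERE TransactionDate BETWEEN @FromDate AND @ToDate
ORDER BY TransactionDate
OFFSET @PageNumber * 50 ROWS
FETCH NEXT 50 ROWS ONLY;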
