Let's say I have a table with the following columns and values:
| BucketID | Value |
|:-----------|------------:|
| 1 | 3 |
| 1 | 2 |
| 1 | 1 |
| 2 | 0 |
| 2 | 1 |
| 2 | 5 |
Let's pretend I want to partition over BucketId and multiply the values in each partition:
SELECT DISTINCT BucketId, MULT(Value) OVER (PARTITION BY BucketId)
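The expected output would then be (3 * 2 * 1 = 6 for bucket 1, and 0 * 1 * 5 = 0 for bucket 2):
| BucketID | MULT(Value) |
|:-----------|------------:|
| 1 | 6 |
| 2 | 0 |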
Now, there is no built-in MULT aggregate function, so I am writing my own using SQL CLR:
using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

[Serializable]
[SqlUserDefinedAggregate(Format.Native, Name = "MULT")]
public struct MULT
{
    private int runningSum;

    public void Init()
    {
        runningSum = 1;
    }

    public void Accumulate(SqlInt32 value)
    {
        runningSum *= (int)value;
    }

    public void Merge(MULT other)
    {
        runningSum *= other.runningSum;
    }

    public SqlInt32 Terminate()
    {
        return new SqlInt32(runningSum);
    }
}
What my question boils down to is this: if I hit a 0 in Accumulate or Merge, there is no point in continuing. If that is the case, how can I return 0 as soon as I hit a 0?
You cannot force an early termination since there is no way to control the workflow: SQL Server will call the Terminate() method as each group finishes processing its set.
However, since the state of the UDA is maintained across each row processed in a group, you can simply check runningSum to see if it is already 0 and, if so, skip any computation. This saves a slight amount of processing time.
In the Accumulate() method, the first step is to check whether runningSum is 0 and, if so, simply return;. You should also check whether value is NULL (something you are not currently doing). After that, check the incoming value: if it is 0, set runningSum to 0 and return;.
public void Accumulate(SqlInt32 value)
{
    if (runningSum == 0 || value.IsNull)
    {
        return;
    }

    if (value.Value == 0)
    {
        runningSum = 0;
        return;
    }

    runningSum *= value.Value;
}
Note: Do not cast value to int. All Sql* types have a Value property that returns the expected native .NET type.
Finally, in the Merge() method, check runningSum to see if it is 0 and, if so, simply return;. Then check other.runningSum to see if it is 0 and, if so, set runningSum to 0.
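Putting that together, a sketch of what that Merge() might look like (mirroring the checks in the Accumulate() above):

public void Merge(MULT other)
{
    // The group is already pinned to 0; nothing left to do.
    if (runningSum == 0)
    {
        return;
    }

    // The other partial aggregate hit a 0, so this group is 0 as well.
    if (other.runningSum == 0)
    {
        runningSum = 0;
        return;
    }

    runningSum *= other.runningSum;
}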
I want to create a simple dashboard where I want to show the number of orders in different statuses. The statuses can be New/Cancelled/Finished/etc.
Where should I implement these criteria? If I add a filter in the Cube Browser then it applies to the whole dashboard. Should I do that in a KPI? Or should I add a calculated column with 1/0 values?
My expected output is something like:
--------------------------------------
| Total | New | Finished | Cancelled |
--------------------------------------
| 1000 | 100 | 800 | 100 |
--------------------------------------
I'd use measures for that, something like:
CountTotal = COUNT('Orders'[OrderID])
CountNew = CALCULATE(COUNT('Orders'[OrderID]), 'Orders'[Status] = "New")
CountFinished = CALCULATE(COUNT('Orders'[OrderID]), 'Orders'[Status] = "Finished")
CountCancelled = CALCULATE(COUNT('Orders'[OrderID]), 'Orders'[Status] = "Cancelled")
I have an HTML table showing outage start and end times for different types of outages. Currently I am sorting the outages in order of outage type, but I would like to be able to sort them from the earliest to the latest start time. The times within each type are already in order, but I am trying to get them in order regardless of type. I know that for sorting by value you usually use some sort of value compare like "sort { $h{$a} <=> $h{$b} } keys(%h);"
Currently they sort like:
1 | phone | 00:00:00 | 04:08:03
2 | phone | 14:26:03 | 18:00:00
3 | television | 12:34:19 | 12:34:25
But it should be like:
1 | phone | 00:00:00 | 04:08:03
2 | television | 12:34:19 | 12:34:25
3 | phone | 14:26:03 | 18:00:00
This is my code.
my %outages;
my @outage_times = qw(start end);
my %outage_reasons = (
    'tv' => 'television',
    'p'  => 'phone'
);

foreach my $outage_reason (values %outage_reasons) {
    foreach my $outage (@outage_times) {
        $outages{$outage_reason}{$outage} = [];
    }
}

$outages{television}{start} = ['00:00:00', '14:26:03'];
$outages{television}{end}   = ['04:08:03', '18:00:00'];
$outages{phone}{start}      = ['12:32:02'];
$outages{phone}{end}        = ['12:38:09'];

my $outage_number = 1;
foreach my $outage (sort keys %outages) {
    for my $i (0 .. scalar(@{$outages{$outage}{start}}) - 1) {
        my $outage_start_time = $outages{$outage}{start}[$i];
        my $outage_end_time   = $outages{$outage}{end}[$i];

        my $row_html = "<tr><td>$outage_number</td><td>$outage</td>";
        $row_html .= "<td>$outage_start_time</td>";
        $row_html .= "<td>$outage_end_time</td></tr>";

        $outage_number += 1;
    }
}
I think this is a situation where you're making life difficult for yourself because your data structure is unnecessarily complicated. I don't know where your data comes from, but it would be far easier if you could get an array of hashes like this:
my @outages = ({
    type  => 'phone',
    start => '00:00:00',
    end   => '04:08:03',
}, {
    type  => 'phone',
    start => '14:26:03',
    end   => '18:00:00',
}, {
    type  => 'television',
    start => '12:34:19',
    end   => '12:34:25',
});
The code to sort and print these then becomes almost trivial.
my $number = 1;
for (sort { $a->{start} cmp $b->{start} } @outages) {
    my $row_html = '<tr>'
        . "<td>$number</td>"
        . "<td>$_->{type}</td>"
        . "<td>$_->{start}</td>"
        . "<td>$_->{end}</td>"
        . "</tr>\n";
    $number++;
    print $row_html;
}
It's worth noting that this only works because your timestamps can be treated as strings which sort correctly. If the timestamps were more complicated and included dates, then you would probably want to convert them to sortable data using something like Time::Piece or DateTime.
I'd also mention that one day you'll discover that including raw HTML tags in your Perl code is a recipe for disaster. Far better to use a templating system like the Template Toolkit.
Don't store the timestamps as strings but as seconds since the epoch. Then you can use a normal numeric compare (see the sketch after the list below):
foreach my $outage (sort { $a->{start} <=> $b->{start} } values %outages) {
EDIT: The SOP for timestamp processing in any language/program, unless you have some really out-of-this-world requirements, is:

- Parse the input format to convert timestamps to "X since epoch":
  - always convert to UTC, i.e. determine the time zone if it is not given
  - determine the resolution (seconds, milliseconds, microseconds) provided by the input
  - Date::Manip can be your friend here
- Process timestamps in your algorithm as numerical values:
  - compare: a < b means a happens before b
  - differences: a - b at your given resolution
- Convert timestamps to the desired output format:
  - if you have control of the output format, always opt for a precise one, e.g. use the UTC timestamp directly or the ISO 8601 format
  - again, the Date::Manip::Date printf() method can be your friend here
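For the HH:MM:SS values in this question there are no dates involved, so a minimal sketch of that approach can get by with a small, hypothetical to_seconds() helper instead of a full date module (Date::Manip or Time::Piece would take over once real dates and time zones appear):

use strict;
use warnings;

# Hypothetical helper: convert 'HH:MM:SS' to seconds since midnight.
sub to_seconds {
    my ($hms) = @_;
    my ($h, $m, $s) = split /:/, $hms;
    return $h * 3600 + $m * 60 + $s;
}

my @outages = (
    { type => 'phone',      start => '00:00:00', end => '04:08:03' },
    { type => 'phone',      start => '14:26:03', end => '18:00:00' },
    { type => 'television', start => '12:34:19', end => '12:34:25' },
);

# Numeric compare on the converted start times.
for my $outage (sort { to_seconds($a->{start}) <=> to_seconds($b->{start}) } @outages) {
    print "$outage->{type}: $outage->{start} - $outage->{end}\n";
}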
I want to convert a set of rows in a SQL Server database (in the form of rules) to a single if-else condition without hardcoding any values in the code. The code will be written in Scala and I am trying to figure out the logic to do this, but I cannot think of a good approach.
Sample SQL Server Rows:
TAG | CONDITION | MIN VALUE | MAX VALUE | STATUS
ABC | = | 0 | NULL | GOOD
ABC | = | 1 | NULL | BAD
ABC | = | 2 | NULL | ERROR
ABC | >= | 3 | NULL | IGNORE
Similar to tag ABC, there could be any number of tags; the conditions will vary with the tag column, and each tag will have conditions spread over multiple rows. If anyone has dealt with a similar problem, any suggestions would be appreciated.
The question doesn't seem clear to me as currently written. What do you mean by "a single if-else condition without hardcoding any values in the code"?
Would the following work?
sealed trait Condition
object Eq extends Condition // =
object Ge extends Condition // >=

sealed trait Status
object Good extends Status
object Bad extends Status
object Error extends Status
object Ignore extends Status

case class Rule(tag: String,
                condition: Condition,
                min: Int,
                max: Int,
                status: Status)

def handle(input: Int, rules: List[Rule]): Status =
  rules
    .view // lazily iterate the rules
    .filter { // find matching rules
      case Rule(_, Eq, x, _, _) if input == x => true
      case Rule(_, Ge, x, _, _) if input >= x => true
      case _ => false
    }
    .map { matchingRule => matchingRule.status } // return the status
    .head // find the status of the first matching rule, or throw

// Tests
val rules = List(
  Rule("abc", Eq, 0, 0, Good),
  Rule("abc", Eq, 1, 0, Bad),
  Rule("abc", Eq, 2, 0, Error),
  Rule("abc", Ge, 3, 0, Ignore))

assert(handle(0, rules) == Good)
assert(handle(1, rules) == Bad)
assert(handle(2, rules) == Error)
assert(handle(3, rules) == Ignore)
assert(handle(4, rules) == Ignore)
I have a little problem with the performance of one of my applications, basically:
An external system gives me a big structure as an Object(,).
This structure only has one column per row.
MyData(0,0) = 'COL1-ROW1 | COL2-ROW1 | COL3-ROW1'
MyData(1,0) = 'COL1-ROW2 | COL2-ROW2 | COL3-ROW2'
MyData(2,0) = 'COL1-ROW3 | COL2-ROW3 | COL3-ROW3'
MyData(3,0) = 'COL1-ROW4 | COL2-ROW4 | COL3-ROW4'
MyData(0,1) ' Doesn't exist.
Is there some method in LINQ to convert this structure into a one-dimensional array of strings?
It would be awesome if it could also be split into columns on a specific character.
Something like this:
NewData(0,0) = COL1-ROW1
NewData(0,1) = COL2-ROW1
NewData(0,2) = COL3-ROW1
NewData(1,0) = COL1-ROW2
...
NewData(3,2) = COL3-ROW4
It seems I found the answer myself; here is my solution:
Dim vMyData(1000, 0) As Object

For x = 0 To 1000
    vMyData(x, 0) = String.Format("ROW{0}COL1|ROW{0}COL2|ROW{0}COL3|ROW{0}COL4", x)
Next

Dim vQuery = From TempResult In vMyData
             Select Value = TempResult.ToString.Split("|")

Dim vMyNewArray As New ArrayList(vQuery.ToArray)
Now, is there some method to trim each value produced by the Split("|")?
[UPDATE TO THE PREVIOUS QUESTION]:
From TempResult In vMyData Select Value = Array.ConvertAll(TempResult.ToString.Split("|"), Function(vVal) vVal.ToString.Trim)
Using:
- Ruby 1.9.3-p194
- Rails 3.2.8
Here's what I need.
Count the different human resources (human_resource_id) and divide this by the total number of assignments (assignment_id).
So, the answer for the dummy-data as given below should be:
1.5 assignments per human resource
But I just don't know where to go anymore.
Here's what I tried:
Table name: Assignments
id | human_resource_id | assignment_id | assignment_start_date | assignment_expected_end_date
80101780 | 20200132 | 80101780 | 2012-10-25 | 2012-10-31
80101300 | 20200132 | 80101300 | 2012-07-07 | 2012-07-31
80101308 | 21100066 | 80101308 | 2012-07-09 | 2012-07-17
At first I need to make a selection for the period I need to look at. This is always at most one year back.
a = Assignment.find(:all, :conditions => { :assignment_expected_end_date => (DateTime.now - 1.year)..DateTime.now })
=> [
#<Assignment id: 80101780, human_resource_id: "20200132", assignment_id: "80101780", assignment_start_date: "2012-10-25", assignment_expected_end_date: "2012-10-31">,
#<Assignment id: 80101300, human_resource_id: "20200132", assignment_id: "80101300", assignment_start_date: "2012-07-07", assignment_expected_end_date: "2012-07-31">,
#<Assignment id: 80101308, human_resource_id: "21100066", assignment_id: "80101308", assignment_start_date: "2012-07-09", assignment_expected_end_date: "2012-07-17">
]
foo = a.group_by(&:human_resource_id)
Now I've got a beautiful 'hash of arrays of objects' and I just don't know what to do next.
Can someone help me?
You can try to execute the query directly in SQL:
ActiveRecord::Base.connection.select_value('SELECT count(distinct human_resource_id) / count(distinct assignment_id) AS ratio FROM assignments');
You could do something like:
human_resource_count = assignments.collect { |a| a.human_resource_id }.uniq.count
assignment_count = assignments.collect { |a| a.assignment_id }.uniq.count
result = human_resource_count.to_f / assignment_count # to_f avoids integer division