How to create a report for the back-propagation neural network below - artificial-intelligence

I am trying to understand artificial neural networks as a self-learner, and I hope someone can help me understand how to solve this problem.
If this post does not belong here, please comment instead of downvoting it. I would appreciate that as well.
I am totally confused about how to solve this question. I encountered it online but was unable to work out how to approach it. I have added the question below; I hope you can provide some help.
The data set contains 4 observations for 4 input variables (Temp, Pres, Flow, and Process) and an output variable (Rejects). The first column "No" is simply an identifier. The table below reproduces the first 4 observations:
No    Temp     Pres     Flow    Process    Rejects
1     53.39    10.52    4.82    0          1.88
2     46.23    15.13    5.31    0          2.13
3     42.85    18.79    3.59    0          2.66
4     53.09    18.33    3.67    0          2.03
Train a back-propagation neural network on approximately 80% of the observations, randomly selected. Test the trained network using the remaining 20% observations.
Question:
Based on this, how do I define a fixed neural network with output values and backpropagate an expected output pattern? Here there is only one output, the "Rejects" column.
Which error values need to be calculated?
Is a hidden layer required here, and how can we define it?
What kind of "tool" can be used to create a report for the above inputs and get the expected output? If there is no suitable tool, could you provide a program instead? A tool would be preferable.
Create a figure that plots the actual and predicted values of the output "Rejects" for the training and test data sets.
Does this mean creating a chart similar to the plot we create for a Support Vector Machine? Is it possible to create that in the tool used for the question above?
How do I compute the sum of squared errors for the training and test data sets?
I would really appreciate your help.

Firstly, this dataset is far too small to train a network on. Still, this is how you would approach this kind of problem, assuming there is much more data.
import pandas as pd
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dropout, Dense

# Reproduce the four observations from the question
data = {
    'No.': [1, 2, 3, 4],
    'Temp': [53.39, 46.23, 42.85, 53.09],
    'Pres': [10.52, 15.13, 18.79, 18.33],
    'Flow': [4.82, 5.31, 3.59, 3.67],
    'Process': [0, 0, 0, 0],
    'Rejects': [1.88, 2.13, 2.66, 2.03]
}
df = pd.DataFrame(data)
df = df.drop(['No.'], axis=1)            # 'No.' is just an identifier
features = df.drop(['Rejects'], axis=1)
labels = df['Rejects']

model = Sequential()
# input_shape must match the number of feature columns
model.add(Dense(1000, activation='relu', input_shape=(features.shape[1],)))
model.add(Dropout(0.2))
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(50, activation='relu'))
model.add(Dropout(0.2))
# ReLU works here since Rejects is positive; 'linear' is the usual choice for regression
model.add(Dense(1, activation='relu'))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mse', 'mae', 'mape'])
model.fit(features, labels, epochs=10)
model.evaluate(features, labels)
The results are not good, but that is only due to the quantity of data.
1/1 [==============================] - 0s 316ms/step - loss: 2.4341 - mse: 2.4341 - mae: 1.4419 - mape: 67.6981
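The sum of squared errors the question asks for is simply the squared residuals summed, and it relates directly to the loss Keras reports (the MSE is the SSE divided by the number of observations). A minimal sketch using the four Rejects values from the question; the y_pred values are made up for illustration, not real model output:

```python
import numpy as np

# Actual Rejects values from the question's four observations
y_true = np.array([1.88, 2.13, 2.66, 2.03])
# Hypothetical model predictions, for illustration only
y_pred = np.array([1.95, 2.10, 2.50, 2.20])

sse = np.sum((y_true - y_pred) ** 2)  # sum of squared errors
mse = sse / len(y_true)               # mean squared error, what Keras reports as loss
```

With a real model, y_pred would come from model.predict on the training or test features, giving the training and test SSE respectively.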

So I'm writing a fresh answer, because this code is tuned to the dataset at the URL you provided after my first answer. This time the accuracy is clearly much better, thanks to the quantity of data available.
import pandas as pd
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dropout, Dense
from sklearn.model_selection import train_test_split

df = pd.read_csv('data.txt', sep='\t')
df = df.drop(['No.'], axis=1)            # 'No.' is just an identifier
features = df.drop(['Rejects'], axis=1)
labels = df['Rejects']

# The task asks for an 80/20 split; use test_size=0.2 for that
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3)

model = Sequential()
# input_shape must match the number of feature columns
model.add(Dense(2000, activation='relu', input_shape=(features.shape[1],)))
model.add(Dropout(0.2))
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='relu'))   # 'linear' is the usual choice for regression
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mse', 'mae', 'mape'])
history = model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test))
import matplotlib.pyplot as plt
plt.plot(history.history['mape'])
plt.plot(history.history['val_mape'])
plt.xlabel('Epochs')
plt.ylabel('Percentage Loss')
plt.legend(['MAPE','Val_MAPE'])
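The question also asks for a figure of actual vs. predicted Rejects for the training and test sets. A sketch of that plot is below; the arrays are made-up stand-ins so it runs standalone, and with the real model you would replace them with y_train / model.predict(X_train) and y_test / model.predict(X_test):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render without a display
import matplotlib.pyplot as plt

# Hypothetical stand-ins for the actual labels and model predictions
y_train_actual = np.array([1.88, 2.13, 2.66])
y_train_pred = np.array([1.92, 2.18, 2.55])
y_test_actual = np.array([2.03])
y_test_pred = np.array([2.10])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(y_train_actual, 'o-', label='Actual')
ax1.plot(y_train_pred, 'x--', label='Predicted')
ax1.set_title('Training set')
ax1.legend()
ax2.plot(y_test_actual, 'o-', label='Actual')
ax2.plot(y_test_pred, 'x--', label='Predicted')
ax2.set_title('Test set')
ax2.legend()
fig.savefig('rejects_actual_vs_predicted.png')
```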
And the model training went something like this:
Epoch 1/50
7/7 [==============================] - 1s 73ms/step - loss: 10.9066 - mse: 10.9066 - mae: 2.4855 - mape: 111.4739 - val_loss: 5.2550 - val_mse: 5.2550 - val_mae: 2.2529 - val_mape: 99.9000
Epoch 2/50
7/7 [==============================] - 0s 23ms/step - loss: 4.9923 - mse: 4.9923 - mae: 2.1328 - mape: 95.1877 - val_loss: 3.1912 - val_mse: 3.1912 - val_mae: 1.6590 - val_mape: 74.3683
Epoch 3/50
7/7 [==============================] - 0s 24ms/step - loss: 4.0993 - mse: 4.0993 - mae: 1.9074 - mape: 84.9316 - val_loss: 3.0207 - val_mse: 3.0207 - val_mae: 1.6149 - val_mape: 72.3441
Epoch 4/50
7/7 [==============================] - 0s 23ms/step - loss: 3.5641 - mse: 3.5641 - mae: 1.6932 - mape: 75.8205 - val_loss: 3.0053 - val_mse: 3.0053 - val_mae: 1.5755 - val_mape: 68.8496
Epoch 5/50
7/7 [==============================] - 0s 23ms/step - loss: 2.9217 - mse: 2.9217 - mae: 1.5616 - mape: 69.8578 - val_loss: 2.4539 - val_mse: 2.4539 - val_mae: 1.4140 - val_mape: 62.5867
Epoch 6/50
7/7 [==============================] - 0s 21ms/step - loss: 2.4518 - mse: 2.4518 - mae: 1.4247 - mape: 63.5009 - val_loss: 2.0144 - val_mse: 2.0144 - val_mae: 1.2820 - val_mape: 56.4856
Epoch 7/50
7/7 [==============================] - 0s 21ms/step - loss: 1.9910 - mse: 1.9910 - mae: 1.2630 - mape: 56.0590 - val_loss: 1.6839 - val_mse: 1.6839 - val_mae: 1.1723 - val_mape: 50.4525
Epoch 8/50
7/7 [==============================] - 0s 20ms/step - loss: 1.1813 - mse: 1.1813 - mae: 0.9188 - mape: 40.1967 - val_loss: 0.7452 - val_mse: 0.7452 - val_mae: 0.7067 - val_mape: 29.3356
Epoch 9/50
7/7 [==============================] - 0s 20ms/step - loss: 0.8689 - mse: 0.8689 - mae: 0.7326 - mape: 32.4377 - val_loss: 0.3546 - val_mse: 0.3546 - val_mae: 0.4791 - val_mape: 21.1433
Epoch 10/50
7/7 [==============================] - 0s 21ms/step - loss: 1.0251 - mse: 1.0251 - mae: 0.8172 - mape: 36.6930 - val_loss: 0.5519 - val_mse: 0.5519 - val_mae: 0.6279 - val_mape: 28.5509
Epoch 11/50
7/7 [==============================] - 0s 20ms/step - loss: 0.8735 - mse: 0.8735 - mae: 0.7236 - mape: 32.9642 - val_loss: 1.0568 - val_mse: 1.0568 - val_mae: 0.8415 - val_mape: 36.3284
Epoch 12/50
7/7 [==============================] - 0s 20ms/step - loss: 0.7933 - mse: 0.7933 - mae: 0.6918 - mape: 30.8646 - val_loss: 0.5851 - val_mse: 0.5851 - val_mae: 0.5987 - val_mape: 25.3339
Epoch 13/50
7/7 [==============================] - 0s 19ms/step - loss: 0.5194 - mse: 0.5194 - mae: 0.5638 - mape: 24.8541 - val_loss: 0.2628 - val_mse: 0.2628 - val_mae: 0.4087 - val_mape: 17.7300
Epoch 14/50
7/7 [==============================] - 0s 20ms/step - loss: 0.4954 - mse: 0.4954 - mae: 0.5518 - mape: 24.4398 - val_loss: 0.3021 - val_mse: 0.3021 - val_mae: 0.4256 - val_mape: 17.8031
Epoch 15/50
7/7 [==============================] - 0s 20ms/step - loss: 0.4629 - mse: 0.4629 - mae: 0.5339 - mape: 23.4556 - val_loss: 0.2119 - val_mse: 0.2119 - val_mae: 0.3771 - val_mape: 16.3196
Epoch 16/50
7/7 [==============================] - 0s 20ms/step - loss: 0.4563 - mse: 0.4563 - mae: 0.5222 - mape: 23.0115 - val_loss: 0.2919 - val_mse: 0.2919 - val_mae: 0.4207 - val_mape: 17.4477
Epoch 17/50
7/7 [==============================] - 0s 21ms/step - loss: 0.4153 - mse: 0.4153 - mae: 0.5046 - mape: 22.5874 - val_loss: 0.5661 - val_mse: 0.5661 - val_mae: 0.6011 - val_mape: 25.0547
Epoch 18/50
7/7 [==============================] - 0s 21ms/step - loss: 0.4056 - mse: 0.4056 - mae: 0.4932 - mape: 21.9288 - val_loss: 0.4406 - val_mse: 0.4406 - val_mae: 0.5216 - val_mape: 21.5496
Epoch 19/50
7/7 [==============================] - 0s 20ms/step - loss: 0.4677 - mse: 0.4677 - mae: 0.5323 - mape: 23.2442 - val_loss: 0.2383 - val_mse: 0.2383 - val_mae: 0.3868 - val_mape: 16.3032
Epoch 20/50
7/7 [==============================] - 0s 23ms/step - loss: 0.3991 - mse: 0.3991 - mae: 0.4907 - mape: 21.4421 - val_loss: 0.2270 - val_mse: 0.2270 - val_mae: 0.3835 - val_mape: 16.4031
Epoch 21/50
7/7 [==============================] - 0s 20ms/step - loss: 0.4039 - mse: 0.4039 - mae: 0.5030 - mape: 22.4905 - val_loss: 0.3142 - val_mse: 0.3142 - val_mae: 0.4375 - val_mape: 18.1178
Epoch 22/50
7/7 [==============================] - 0s 21ms/step - loss: 0.3628 - mse: 0.3628 - mae: 0.4799 - mape: 21.5093 - val_loss: 0.3639 - val_mse: 0.3639 - val_mae: 0.4683 - val_mape: 19.1954
Epoch 23/50
7/7 [==============================] - 0s 20ms/step - loss: 0.3455 - mse: 0.3455 - mae: 0.4649 - mape: 20.5179 - val_loss: 0.2378 - val_mse: 0.2378 - val_mae: 0.3864 - val_mape: 16.1129
Epoch 24/50
7/7 [==============================] - 0s 20ms/step - loss: 0.3276 - mse: 0.3276 - mae: 0.4523 - mape: 19.8604 - val_loss: 0.2182 - val_mse: 0.2182 - val_mae: 0.3768 - val_mape: 15.9712
Epoch 25/50
7/7 [==============================] - 0s 21ms/step - loss: 0.3175 - mse: 0.3175 - mae: 0.4485 - mape: 20.0487 - val_loss: 0.3083 - val_mse: 0.3083 - val_mae: 0.4336 - val_mape: 17.9253
Epoch 26/50
7/7 [==============================] - 0s 21ms/step - loss: 0.3289 - mse: 0.3289 - mae: 0.4514 - mape: 20.1608 - val_loss: 0.3361 - val_mse: 0.3361 - val_mae: 0.4495 - val_mape: 18.4325
Epoch 27/50
7/7 [==============================] - 0s 19ms/step - loss: 0.3233 - mse: 0.3233 - mae: 0.4471 - mape: 19.8604 - val_loss: 0.2534 - val_mse: 0.2534 - val_mae: 0.4036 - val_mape: 17.1102
Epoch 28/50
7/7 [==============================] - 0s 19ms/step - loss: 0.3226 - mse: 0.3226 - mae: 0.4441 - mape: 19.3694 - val_loss: 0.2483 - val_mse: 0.2483 - val_mae: 0.3982 - val_mape: 16.7979
Epoch 29/50
7/7 [==============================] - 0s 20ms/step - loss: 0.3188 - mse: 0.3188 - mae: 0.4439 - mape: 19.5424 - val_loss: 0.2392 - val_mse: 0.2392 - val_mae: 0.3908 - val_mape: 16.4567
Epoch 30/50
7/7 [==============================] - 0s 21ms/step - loss: 0.3109 - mse: 0.3109 - mae: 0.4457 - mape: 19.7457 - val_loss: 0.2292 - val_mse: 0.2292 - val_mae: 0.3859 - val_mape: 16.4228
Epoch 31/50
7/7 [==============================] - 0s 20ms/step - loss: 0.2999 - mse: 0.2999 - mae: 0.4337 - mape: 19.1884 - val_loss: 0.2527 - val_mse: 0.2527 - val_mae: 0.3966 - val_mape: 16.4614
Epoch 32/50
7/7 [==============================] - 0s 20ms/step - loss: 0.3091 - mse: 0.3091 - mae: 0.4313 - mape: 18.9708 - val_loss: 0.2601 - val_mse: 0.2601 - val_mae: 0.4023 - val_mape: 16.6517
Epoch 33/50
7/7 [==============================] - 0s 20ms/step - loss: 0.2974 - mse: 0.2974 - mae: 0.4363 - mape: 19.4105 - val_loss: 0.2839 - val_mse: 0.2839 - val_mae: 0.4175 - val_mape: 17.1926
Epoch 34/50
7/7 [==============================] - 0s 20ms/step - loss: 0.2786 - mse: 0.2786 - mae: 0.4177 - mape: 18.4687 - val_loss: 0.1865 - val_mse: 0.1865 - val_mae: 0.3689 - val_mape: 16.4617
Epoch 35/50
7/7 [==============================] - 0s 19ms/step - loss: 0.3164 - mse: 0.3164 - mae: 0.4466 - mape: 19.8367 - val_loss: 0.3088 - val_mse: 0.3088 - val_mae: 0.4362 - val_mape: 18.0655
Epoch 36/50
7/7 [==============================] - 0s 21ms/step - loss: 0.3097 - mse: 0.3097 - mae: 0.4339 - mape: 19.2173 - val_loss: 0.2615 - val_mse: 0.2615 - val_mae: 0.4002 - val_mape: 16.4560
Epoch 37/50
7/7 [==============================] - 0s 20ms/step - loss: 0.2861 - mse: 0.2861 - mae: 0.4249 - mape: 18.7808 - val_loss: 0.2223 - val_mse: 0.2223 - val_mae: 0.3794 - val_mape: 16.0339
Epoch 38/50
7/7 [==============================] - 0s 20ms/step - loss: 0.2967 - mse: 0.2967 - mae: 0.4334 - mape: 19.1338 - val_loss: 0.1935 - val_mse: 0.1935 - val_mae: 0.3679 - val_mape: 16.0777
Epoch 39/50
7/7 [==============================] - 0s 19ms/step - loss: 0.3012 - mse: 0.3012 - mae: 0.4307 - mape: 18.8958 - val_loss: 0.2027 - val_mse: 0.2027 - val_mae: 0.3718 - val_mape: 16.1167
Epoch 40/50
7/7 [==============================] - 0s 21ms/step - loss: 0.2926 - mse: 0.2926 - mae: 0.4204 - mape: 18.5881 - val_loss: 0.2174 - val_mse: 0.2174 - val_mae: 0.3810 - val_mape: 16.5433
Epoch 41/50
7/7 [==============================] - 0s 20ms/step - loss: 0.2947 - mse: 0.2947 - mae: 0.4214 - mape: 18.9445 - val_loss: 0.3573 - val_mse: 0.3573 - val_mae: 0.4648 - val_mape: 19.0649
Epoch 42/50
7/7 [==============================] - 0s 20ms/step - loss: 0.3088 - mse: 0.3088 - mae: 0.4332 - mape: 19.3028 - val_loss: 0.2762 - val_mse: 0.2762 - val_mae: 0.4090 - val_mape: 16.7506
Epoch 43/50
7/7 [==============================] - 0s 21ms/step - loss: 0.2898 - mse: 0.2898 - mae: 0.4235 - mape: 18.5388 - val_loss: 0.2007 - val_mse: 0.2007 - val_mae: 0.3747 - val_mape: 16.6556
Epoch 44/50
7/7 [==============================] - 0s 20ms/step - loss: 0.2835 - mse: 0.2835 - mae: 0.4168 - mape: 18.4563 - val_loss: 0.2329 - val_mse: 0.2329 - val_mae: 0.3871 - val_mape: 16.3445
Epoch 45/50
7/7 [==============================] - 0s 19ms/step - loss: 0.2685 - mse: 0.2685 - mae: 0.4109 - mape: 18.3725 - val_loss: 0.2807 - val_mse: 0.2807 - val_mae: 0.4141 - val_mape: 16.9569
Epoch 46/50
7/7 [==============================] - 0s 20ms/step - loss: 0.2783 - mse: 0.2783 - mae: 0.4205 - mape: 18.4501 - val_loss: 0.2055 - val_mse: 0.2055 - val_mae: 0.3726 - val_mape: 16.0784
Epoch 47/50
7/7 [==============================] - 0s 21ms/step - loss: 0.2712 - mse: 0.2712 - mae: 0.4225 - mape: 18.8953 - val_loss: 0.2424 - val_mse: 0.2424 - val_mae: 0.3906 - val_mape: 16.3056
Epoch 48/50
7/7 [==============================] - 0s 17ms/step - loss: 0.2623 - mse: 0.2623 - mae: 0.4113 - mape: 18.3200 - val_loss: 0.2274 - val_mse: 0.2274 - val_mae: 0.3821 - val_mape: 16.0680
Epoch 49/50
7/7 [==============================] - 0s 17ms/step - loss: 0.2629 - mse: 0.2629 - mae: 0.4026 - mape: 17.8561 - val_loss: 0.2516 - val_mse: 0.2516 - val_mae: 0.3948 - val_mape: 16.2785
Epoch 50/50
7/7 [==============================] - 0s 18ms/step - loss: 0.2641 - mse: 0.2641 - mae: 0.3969 - mape: 17.4429 - val_loss: 0.2531 - val_mse: 0.2531 - val_mae: 0.4031 - val_mape: 17.0205

Related

Converting seconds in date form Numpy Python

The delta_s computation takes the time difference between consecutive dates, in seconds. Then the average, median, and max of the differences are calculated. I am trying to convert those average, median, and max values, but it does not work. I want to express them in the form of x days x hours x minutes x seconds.
Code:
import numpy as np
dates = np.array(['2017-09-15 07:11:00', '2017-09-15 11:25:30', '2017-09-15 12:11:10',
                  '2021-04-07 22:43:12', '2021-04-08 00:49:18'],
                 dtype="datetime64[ns]")
delta_s = np.diff(dates) // 1e9 # nanoseconds to seconds
delta_s = delta_s.astype(np.float64)
delta_avg = np.average(delta_s)
delta_median= np.median(delta_s)
delta_max = np.max(delta_s)
delta_max_index= np.argmax(delta_s)
The line delta_s = np.diff(dates) // 1e9 does not actually convert nanoseconds to seconds. It merely divides the timedelta values by 1e9; the time unit is still timedelta64[ns].
>>> np.diff(dates)
array([ 15270000000000, 2740000000000, 112357922000000000,
7566000000000], dtype='timedelta64[ns]')
>>> np.diff(dates) // 1e9
array([ 15270, 2740, 112357922, 7566],
dtype='timedelta64[ns]')
This may interfere with any calculations you're doing.
Use
delta_s = np.array([np.timedelta64(td, 's') for td in np.diff(dates) ])
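An equivalent approach, assuming NumPy's standard timedelta64 casting rules, is to cast the differences through timedelta64[s] and land directly in float seconds without the list comprehension:

```python
import numpy as np

dates = np.array(['2017-09-15 07:11:00', '2017-09-15 11:25:30',
                  '2017-09-15 12:11:10'], dtype='datetime64[ns]')
# Cast the nanosecond differences to whole seconds, then to plain floats
delta_s = np.diff(dates).astype('timedelta64[s]').astype(np.float64)
```

The first gap (07:11:00 to 11:25:30) comes out as 15270.0 seconds, matching the // 1e9 value the question expected.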
Currently there are no built-in functions for formatting a timedelta as a string, but you can use something like the following.
# Function to convert seconds to Human readable Timedelta string
def seconds_to_tdstring(total_seconds):
    days, remainder = divmod(total_seconds, 60 * 60 * 24)
    hours, remainder = divmod(remainder, 60 * 60)
    minutes, seconds = divmod(remainder, 60)
    return '{:02} Days {:02} Hours {:02} Minutes {:02} Seconds'.format(
        int(days), int(hours), int(minutes), int(seconds))
print(seconds_to_tdstring(delta_avg))
Output:
325 Days 04 Hours 24 Minutes 34 Seconds
I have modified the answer from a similar question.

Swift - Sort array by date and time in one go

I have an array with a property of string that resembles a date (yyyy-MM-dd) and another property of string that resembles a time (HH:mm).
I am trying to sort the array by date and then time in 1 sweep.
Example:
Array[0].date = 2019-11-18
Array[0].time = 19:00
Array[1].date = 2019-11-18
Array[1].time = 22:00
Array[2].date = 2019-10-14
Array[2].time = 16:00
Array[3].date = 2019-11-16
Array[3].time = 13:00
Array[4].date = 2019-11-16
Array[4].time = 14:00
and I want to achieve
Array[0].date = 2019-11-18
Array[0].time = 22:00
Array[1].date = 2019-11-18
Array[1].time = 19:00
Array[2].date = 2019-10-16
Array[2].time = 14:00
Array[3].date = 2019-10-16
Array[3].time = 13:00
Array[4].date = 2019-11-14
Array[4].time = 16:00.
How can I achieve this using Swift?
Thank you so much for your time!
This answer picks up on the refinement to the question in the comment below from the OP, in response to the answer from @vadian. The actual requirement is to sort football goal times provided by an API. The solution below creates a struct for this data with a computed property for the actual goal time and then sorts by that.
struct Goal {
    let matchDate: String
    let matchTime: String
    let goalTime: String

    var timeOfGoal: Date {
        let goalComponents = goalTime.components(separatedBy: "+").map {
            $0.trimmingCharacters(in: CharacterSet.whitespacesAndNewlines.union(CharacterSet.decimalDigits.inverted))
        }
        let goalSeconds = TimeInterval(60 * goalComponents.compactMap({ Int($0) }).reduce(0, +))
        let dateFormatter = DateFormatter()
        dateFormatter.dateFormat = "yyyy-MM-dd HH:mm"
        let startTime = dateFormatter.date(from: matchDate + " " + matchTime)!
        return startTime.addingTimeInterval(goalSeconds)
    }
}
I tested this as below
let goals = [
Goal(matchDate: "2019-11-18", matchTime: "22:00", goalTime: "90 +7"),
Goal(matchDate: "2019-11-18", matchTime: "19:00", goalTime: "22"),
Goal(matchDate: "2019-11-18", matchTime: "22:00", goalTime: "99"),
Goal(matchDate: "2019-11-18", matchTime: "19:00", goalTime: "45 + 3"),
Goal(matchDate: "2019-11-18", matchTime: "19:00", goalTime: "45+6"),
Goal(matchDate: "2019-11-18", matchTime: "22:00", goalTime: "90+6"),
Goal(matchDate: "2019-11-18", matchTime: "22:00", goalTime: "35"),
Goal(matchDate: "2019-11-18", matchTime: "22:00", goalTime: "85"),
Goal(matchDate: "2019-11-18", matchTime: "22:00", goalTime: "90"),
Goal(matchDate: "2019-11-18", matchTime: "22:00", goalTime: "90+ 8"),
Goal(matchDate: "2019-11-18", matchTime: "19:00", goalTime: "44")]
let ordered = goals.sorted{$0.timeOfGoal > $1.timeOfGoal}
ordered.forEach{print("\($0.matchDate) - \($0.matchTime) - \($0.goalTime) ")}
and it correctly produced:
2019-11-18 - 22:00 - 99
2019-11-18 - 22:00 - 90+ 8
2019-11-18 - 22:00 - 90 +7
2019-11-18 - 22:00 - 90+6
2019-11-18 - 22:00 - 90
2019-11-18 - 22:00 - 85
2019-11-18 - 22:00 - 35
2019-11-18 - 19:00 - 45+6
2019-11-18 - 19:00 - 45 + 3
2019-11-18 - 19:00 - 44
2019-11-18 - 19:00 - 22
There is room for improvement by not force unwrapping the Date?, although the string cleaning makes this reasonably safe, and by using a class-level static DateFormatter. But I'll leave that refinement for the implementation :-)
You could append the date and time strings: because the largest time interval (year) is on the left and the smallest (minute) is on the right, sorting by standard lexicographical means will put the "largest" date/time combination first.
let sortedArray = myArray.sorted(by: { ($0.date + $0.time) > ($1.date + $1.time) })
First of all, please name variables with a starting lowercase letter (array).
You can simply sort the array by concatenating the strings, because the format yyyy-MM-dd HH:mm is sortable.
array.sort{"\($0.date) \($0.time)" > "\($1.date) \($1.time)"}
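The lexicographic trick above is language-independent; as a quick cross-check of the idea in Python, sorting the zero-padded yyyy-MM-dd HH:mm strings directly gives the same order as parsing them into real dates first:

```python
from datetime import datetime

stamps = ['2019-11-18 19:00', '2019-11-18 22:00', '2019-10-14 16:00',
          '2019-11-16 13:00', '2019-11-16 14:00']

by_string = sorted(stamps, reverse=True)
by_datetime = sorted(stamps,
                     key=lambda s: datetime.strptime(s, '%Y-%m-%d %H:%M'),
                     reverse=True)
assert by_string == by_datetime  # identical orderings
```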

How do I return a value that's dependent on another value?

I have a table of products, and a table of rates. Each product has a set of different rates, and each set has a headline rate. How do I return the headline rate for each product?
Here's an example of the tables
Products pp
Id    Product
--    --------
P1    Product1
P2    Product2
P3    Product3
Rates rr
Id    Productid    Headlinetier    Tier1    Tier2    Tier3
--    ---------    ------------    -----    -----    -----
1     P1           3               0.1      0.2      0.3
2     P2           1               0.4      0.5      0.6
3     P3           2               0.7      0.8      0.9
How do I get the following results?
pp.Product    rr.Headlinerate
----------    ---------------
P1            0.3
P2            0.4
P3            0.8
You need to join the tables and use a CASE expression to choose between the 3 tiers:
select
    p.product,
    case r.headlinetier
        when 1 then r.tier1
        when 2 then r.tier2
        when 3 then r.tier3
    end headlinerate
from products p
inner join rates r on r.productid = p.id
If your version is SQL Server 2012+ you can use choose():
select
    p.product,
    choose(r.headlinetier, r.tier1, r.tier2, r.tier3) headlinerate
from products p
inner join rates r on r.productid = p.id
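As a quick sanity check of the CASE version, here is the same query run against an in-memory SQLite database via Python's sqlite3 module (table and column names taken from the question; choose() is SQL Server-only, so only the CASE form applies here):

```python
import sqlite3

# In-memory SQLite stand-in for the tables in the question
con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE products (id TEXT, product TEXT);
    CREATE TABLE rates (id INT, productid TEXT, headlinetier INT,
                        tier1 REAL, tier2 REAL, tier3 REAL);
    INSERT INTO products VALUES ('P1','Product1'),('P2','Product2'),('P3','Product3');
    INSERT INTO rates VALUES (1,'P1',3,0.1,0.2,0.3),
                             (2,'P2',1,0.4,0.5,0.6),
                             (3,'P3',2,0.7,0.8,0.9);
""")
rows = con.execute("""
    SELECT p.product,
           CASE r.headlinetier
               WHEN 1 THEN r.tier1
               WHEN 2 THEN r.tier2
               WHEN 3 THEN r.tier3
           END AS headlinerate
    FROM products p
    INNER JOIN rates r ON r.productid = p.id
    ORDER BY p.id
""").fetchall()
```

The headline rates come back as 0.3, 0.4, 0.8, matching the expected results in the question.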

How to split a multiple delimiter string to an array converting only the numbers to integers in Ruby

I have a string like the one below and I want to convert it to an array so that I can sort the list by the donation amount.
What'd be the most efficient way to do it in Ruby?
names = "
Andres - Donation: $230 - Time: 568
Sarah - Donation: $345 - Time: 600
James - Donation: $134 - Time: 340
"
I'm trying to get each line into an Array as below and sort by Donation amount.
array = [["Andres - Donation: $230 - Time: 568"],
         ["Sarah - Donation: $345 - Time: 600"],
         ["James - Donation: $134 - Time: 340"]]
but I'm struggling to find a way to do it. I could do it only by modifying the order of the items in each array and use the sort method adjusted to sort in descending order.
array2 = [["Donation: $230 - Time: 568 - Andres"],
          ["Donation: $345 - Time: 600 - Sarah"],
          ["Donation: $134 - Time: 340 - James"]]
array2.sort! { |a,b| b <=> a }
For splitting the Array I've tried
names_Array = names.split(/\n/)
I assume you split your long string into an array of strings as you already did:
array = names.split(/\n/)
I leave it to you to clean out the empty lines.
You don't want to compare the whole strings, you want to compare the donation values. For that, use a function that extracts the value from a string, like
def donation(s)
  regex = /.*: \$(\d+) -.*$/
  m = s.match(regex)
  m ? m.captures.first.to_i : 0
end
Here, I use a regular expression to extract the decimal number between ": $" and " -". If there is no match, the function returns 0; otherwise it returns the amount.
Now, use the function to compare two strings:
array.sort {|a,b| donation(b) <=> donation(a)}
and you get your list sorted by donation amount (descending):
[" Sarah - Donation: $345 - Time: 600", " Andres - Donation: $230 - Time: 568", " James - Donation: $134 - Time: 340", "", "", "", " "]
Again, it's up to you to process it further.
r = /
(?<=\$) # match a dollar sign in a positive lookbehind
\d+ # match >= 1 digit
(?=\s-) # match a space followed by a dash in a positive lookahead
/x # free-spacing regex definition mode
names.strip.split(/\n+/).sort_by { |s| s[r].to_i }
#=> [" James - Donation: $134 - Time: 340",
# "Andres - Donation: $230 - Time: 568",
# " Sarah - Donation: $345 - Time: 600"]
If you want to remove the extra spaces:
names.strip.split(/\n+/).map(&:strip).sort_by { |s| s[r].to_i }
#=> ["James - Donation: $134 - Time: 340",
# "Andres - Donation: $230 - Time: 568",
# "Sarah - Donation: $345 - Time: 600"]
If, as specified in the question, you want to put each string in a one-element array:
names.strip.split(/\n+/).map(&:strip).sort_by { |s| s[r].to_i }.map { |s| [s] }
#=> [["James - Donation: $134 - Time: 340"],
# ["Andres - Donation: $230 - Time: 568"],
# ["Sarah - Donation: $345 - Time: 600"]]
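For comparison, the same extract-and-sort idea is easy to express in Python: a regex pulls the dollar amount out of each line, and sorted uses it as the key (names and values taken from the question):

```python
import re

names = """
Andres - Donation: $230 - Time: 568
Sarah - Donation: $345 - Time: 600
James - Donation: $134 - Time: 340
"""

def donation(s):
    # Extract the integer after the dollar sign; 0 if no match
    m = re.search(r'\$(\d+)', s)
    return int(m.group(1)) if m else 0

lines = [line.strip() for line in names.strip().splitlines()]
ordered = sorted(lines, key=donation, reverse=True)
```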

Sort an PowerShell array according to a part of content in each element

I am working with a PowerShell script where I have an array containing the following data; each row below is an element in the array.
Each element has two parts separated by "/": a number (which can be 1-12 digits) and a date-time.
I need to sort the array according to the date and time given in each element.
201410212339/21-Oct-2014 23:50 -
2251/27-Sep-2014 23:02 -
0436/22-Oct-2014 04:47 -
091342/09-Oct-2014 13:53 -
2220743/22-Oct-2014 07:53 -
20140/22-Sep-2014 07:41 -
2190446/19-Oct-2014 04:56 -
2014258/21-Aug-2014 23:21 -
22110/22-Oct-2014 14:21 -
1410221721/22-Jun-2014 17:33 -
130/23-Jul-2014 11:42 -
10231426/23-Feb-2014 14:38 -
231731/23-Jan-2014 17:43 -
0232039/23-Mar-2014 20:51 -
Can anyone help me with this? I want to sort the array to access the latest or second-latest entry and use the number associated with it. I could split each element into the number and date-time and sort on those, but I am looking for a much simpler way.
Thanks in advance.
You can pass a code block to sort to make a custom sort property, without rebuilding the array. (Pinching the datetime parse from Jan Chrbolka):
$getDate = { [datetime]::Parse($_.split("/")[1].replace(' -','')) }
$data | sort $getDate
Or sort -Descending to reverse it.
But you aren't going to be able to use the date and number without splitting the line, the search for a "much simpler way" seems a bit fruitless.
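The custom-key approach above translates directly to other languages; as an illustrative Python sketch of the same idea (split on "/", strip the trailing " -", parse the date, sort on it):

```python
from datetime import datetime

# A few of the sample lines from the question
data = ["201410212339/21-Oct-2014 23:50 -",
        "2251/27-Sep-2014 23:02 -",
        "0436/22-Oct-2014 04:47 -",
        "091342/09-Oct-2014 13:53 -"]

def stamp(line):
    # "number/DD-Mon-YYYY HH:MM -" -> datetime parsed from the part after '/'
    return datetime.strptime(line.split('/')[1].rstrip(' -'), '%d-%b-%Y %H:%M')

ordered = sorted(data, key=stamp)          # oldest first
latest_number = ordered[-1].split('/')[0]  # number from the latest entry
```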
If you want to sort by date and time, this is one way of doing it.
This is your data as an array
$data = @("201410212339/21-Oct-2014 23:50 -",
"2251/27-Sep-2014 23:02 -",
"0436/22-Oct-2014 04:47 -",
"091342/09-Oct-2014 13:53 -",
"2220743/22-Oct-2014 07:53 -",
"20140/22-Sep-2014 07:41 -",
"2190446/19-Oct-2014 04:56 -",
"2014258/21-Aug-2014 23:21 -",
"22110/22-Oct-2014 14:21 -",
"1410221721/22-Jun-2014 17:33 -",
"130/23-Jul-2014 11:42 -",
"10231426/23-Feb-2014 14:38 -",
"231731/23-Jan-2014 17:43 -",
"0232039/23-Mar-2014 20:51 -")
Strip extra characters from the end of each line
$data.replace(" -","")
Pre-pend each line by [datetime] representation of date in ticks
$data.replace(" -","") | % { [string]([datetime]::Parse($_.split("/")[1]).ticks) + "#" + $_}
Sort
$data.replace(" -","") | % { [string]([datetime]::Parse($_.split("/")[1]).ticks) + "#" + $_} | sort-object
Remove pre-pended date string and restore the garbage on the end if you want.
$data.replace(" -","") | % { [string]([datetime]::Parse($_.split("/")[1]).ticks) + "#" + $_} | sort-object | %{$_.split("#")[1] + " -"}
Here is the result:
231731/23-Jan-2014 17:43 -
10231426/23-Feb-2014 14:38 -
0232039/23-Mar-2014 20:51 -
1410221721/22-Jun-2014 17:33 -
130/23-Jul-2014 11:42 -
2014258/21-Aug-2014 23:21 -
20140/22-Sep-2014 07:41 -
2251/27-Sep-2014 23:02 -
091342/09-Oct-2014 13:53 -
2190446/19-Oct-2014 04:56 -
201410212339/21-Oct-2014 23:50 -
0436/22-Oct-2014 04:47 -
2220743/22-Oct-2014 07:53 -
22110/22-Oct-2014 14:21 -
EDIT:
My initial attempt at sorting by [datetime] did not work properly
[string]([datetime]::Parse($_.split("/")[1]))
This not suitable for sorting, as it does not sort by year or time.
Representing [datetime] in ticks fixes the problem.
[string]([datetime]::Parse($_.split("/")[1]).ticks)
I have edited the code above to reflect this.
You can create properties from each string, sort by the one, and then just re-expand the original... Something like:
@"
201410212339/21-Oct-2014 23:50 -
2251/27-Sep-2014 23:02 -
0436/22-Oct-2014 04:47 -
091342/09-Oct-2014 13:53 -
2220743/22-Oct-2014 07:53 -
20140/22-Sep-2014 07:41 -
2190446/19-Oct-2014 04:56 -
2014258/21-Aug-2014 23:21 -
22110/22-Oct-2014 14:21 -
1410221721/22-Jun-2014 17:33 -
130/23-Jul-2014 11:42 -
10231426/23-Feb-2014 14:38 -
231731/23-Jan-2014 17:43 -
0232039/23-Mar-2014 20:51 -
"@ -split "`r`n"|Select @{l='SortMe';e={[int64]$_.split('/')[0]}},@{l='Value';e={$_}}|sort SortMe|Select -Expand Value
That will output:
130/23-Jul-2014 11:42 -
0436/22-Oct-2014 04:47 -
2251/27-Sep-2014 23:02 -
20140/22-Sep-2014 07:41 -
22110/22-Oct-2014 14:21 -
091342/09-Oct-2014 13:53 -
231731/23-Jan-2014 17:43 -
0232039/23-Mar-2014 20:51 -
2014258/21-Aug-2014 23:21 -
2190446/19-Oct-2014 04:56 -
2220743/22-Oct-2014 07:53 -
10231426/23-Feb-2014 14:38 -
1410221721/22-Jun-2014 17:33 -
201410212339/21-Oct-2014 23:50 -
If you don't want them sorted numerically and instead want the numbers sorted as strings, remove the [int64] from it.
