Which file format is this and how to open it?
1435708910.94,state,{"CLOSE_TIME": 1435752000, "SERVICE_STATES": {"LED1": {"USED_MINS": 96.91667685111364, "FAIL_MINS": 0, "PAYG_CREDITS_USED": 0, "LAST_USE": [1435708747.965771, 9], "LAST_AVAIL": [1435708731.860505, true], "FREE_MINS": 374.18972703218475, "CHARGE": 96.91667685111364, "LVD_MINS": 0, "NODATA_MINS": 0}, "LED2": {"USED_MINS": 96.99929953018822, "FAIL_MINS": 0, "PAYG_CREDITS_USED": 0, "LAST_USE": [1435708728.138294, 0], "LAST_AVAIL": [1435708721.04487, true], "FREE_MINS": 141.17060283819833, "CHARGE": 96.99929953018822, "LVD_MINS": 0, "NODATA_MINS": 0}, "USB1": {"USED_MINS": 0, "FAIL_MINS": 0, "PAYG_CREDITS_USED": 0, "LAST_USE": [1435708713.425554, 0], "LAST_AVAIL": [1435708702.763062, true], "FREE_MINS": 0, "CHARGE": 0, "LVD_MINS": 0, "NODATA_MINS": 0}}, "OPEN_TIME": 1435665600, "PAYG_CREDITS_USED": 0, "KERNEL_VERSION": "4.0", "LED_QUOTA_USED": 193.9159763813019, "USB_QUOTA_USED": 0, "USB_QUOTA_END": 1435665621.729028, "KERNEL_AUTHOR": "CA", "LED_QUOTA_END": 1435686998.772021}
If you remove the 1435708910.94,state, prefix from the beginning, the rest is a JSON string. You can open the file in any text editor, such as Notepad, although a specialized JSON editor gives you more features, like formatting it to be easier to read. You can also try an online JSON editor.
Here is the formatted version:
{
"CLOSE_TIME": 1435752000,
"SERVICE_STATES": {
"LED1": {
"USED_MINS": 96.91667685111364,
"FAIL_MINS": 0,
"PAYG_CREDITS_USED": 0,
"LAST_USE": [
1435708747.965771,
9
],
"LAST_AVAIL": [
1435708731.860505,
true
],
"FREE_MINS": 374.18972703218475,
"CHARGE": 96.91667685111364,
"LVD_MINS": 0,
"NODATA_MINS": 0
},
"LED2": {
"USED_MINS": 96.99929953018822,
"FAIL_MINS": 0,
"PAYG_CREDITS_USED": 0,
"LAST_USE": [
1435708728.138294,
0
],
"LAST_AVAIL": [
1435708721.04487,
true
],
"FREE_MINS": 141.17060283819833,
"CHARGE": 96.99929953018822,
"LVD_MINS": 0,
"NODATA_MINS": 0
},
"USB1": {
"USED_MINS": 0,
"FAIL_MINS": 0,
"PAYG_CREDITS_USED": 0,
"LAST_USE": [
1435708713.425554,
0
],
"LAST_AVAIL": [
1435708702.763062,
true
],
"FREE_MINS": 0,
"CHARGE": 0,
"LVD_MINS": 0,
"NODATA_MINS": 0
}
},
"OPEN_TIME": 1435665600,
"PAYG_CREDITS_USED": 0,
"KERNEL_VERSION": "4.0",
"LED_QUOTA_USED": 193.9159763813019,
"USB_QUOTA_USED": 0,
"USB_QUOTA_END": 1435665621.729028,
"KERNEL_AUTHOR": "CA",
"LED_QUOTA_END": 1435686998.772021
}
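To work with such a line programmatically, you can split off the timestamp and record-type prefix and parse the remainder as JSON. A minimal Python sketch (the payload here is shortened to two of the fields from the line above):

```python
import json

line = '1435708910.94,state,{"CLOSE_TIME": 1435752000, "OPEN_TIME": 1435665600}'

# The line is "<timestamp>,<record type>,<JSON payload>".
# Split on the first two commas only, so commas inside the JSON survive.
timestamp, record_type, payload = line.split(",", 2)

data = json.loads(payload)
print(float(timestamp), record_type, data["CLOSE_TIME"])
```

The same pattern works line by line over the whole file, since each record appears to follow the same prefix-then-JSON layout.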
Related
build grid from 2 arrays in Ruby. Javascript to Ruby
I'm trying to replicate a solution from JavaScript in Ruby, where the idea is to build a grid from two arrays and then, running a search on both arrays, change certain elements in the grid. I think my Ruby solution is fairly similar, but the grid doesn't look similar. Any help is much appreciated.

Arrays:

nums1 = [1,2,3,2,1]
nums2 = [3,2,1,4,7]

JavaScript grid:

const dp = new Array(nums1.length + 1).fill(0).map(() => new Array(nums2.length + 1).fill(0));
console.log(dp)

output:

[ [ 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0 ] ]

Ruby grid attempts:

1st attempt:

dp = Array.new(nums1.length + 1) { Array.new(nums2.length + 1, 0) }
p dp

2nd attempt:

dp = (0..n).map { |el| el = (0..m).to_a.map { |el| el = 0 } }
p dp

The output is the same for both:

[[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]

Thank you
Most efficient way to select elements in n dimension array in Ruby with indexes
I want to select elements, together with their indexes, based on a given condition, for an n-dimensional array. I have a solution for a two-dimensional array, like below:

ary = [
  ['A', 'B', 'C'],
  ['D', 'E', 'F'],
  ['G', 'H', 'I']
]

new_ary = ary.collect.with_index do |row, index_r|
  row.collect.with_index { |col, index_c| [col, index_r, index_c] if index_c == 0 }.compact
end
new_ary.flatten(1)
=> [["A", 0, 0], ["D", 1, 0], ["G", 2, 0]]

I want this solution for n dimensions. It would be great if a method worked like this:

ary.select_with_indexes { |val, x, y, ...| x == 0 }
# x, y, ... are the indexes
# val is the value at index x, y, ...
# x == 0 is only an example condition for selecting elements; any condition could go there

Three-dimensional array, like below:

ary = [[
  ['A1', 'B1', 'C1'],
  ['D1', 'E1', 'F1'],
  ['G1', 'H1', 'I1']
],[
  ['A2', 'B2', 'C2'],
  ['D2', 'E2', 'F2'],
  ['G2', 'H2', 'I2']
],[
  ['A3', 'B3', 'C3'],
  ['D3', 'E3', 'F3'],
  ['G3', 'H3', 'I3']
]]

new_ary = ary.collect.with_index do |row, index_r|
  row.collect.with_index do |col, index_c|
    col.collect.with_index do |val, index_d|
      [val, index_r, index_c, index_d] if index_d == 0
    end.compact
  end
end
new_ary.flatten(1)
=> [[["A1", 0, 0, 0]], [["D1", 0, 1, 0]], [["G1", 0, 2, 0]],
    [["A2", 1, 0, 0]], [["D2", 1, 1, 0]], [["G2", 1, 2, 0]],
    [["A3", 2, 0, 0]], [["D3", 2, 1, 0]], [["G3", 2, 2, 0]]]

Again, index_d == 0 is only an example condition; any condition could go there.
Code

In the examples in the question the index of the innermost array is always zero. I have made that an argument, most_inner_index.

def doit(ary, most_inner_index)
  first, *rest = nested_indices(ary)
  first.product(*rest).map do |indices|
    [ary.dig(*indices, most_inner_index), *indices, most_inner_index]
  end
end

def nested_indices(ary)
  sizes = []
  while ary.first.is_a?(Array)
    sizes << ary.size.times.to_a
    ary = ary.first
  end
  sizes
end

Examples

ary2 = [
  ['A', 'B', 'C'],
  ['D', 'E', 'F'],
  ['G', 'H', 'I']
]

doit(ary2, 0)
#=> [["A", 0, 0], ["D", 1, 0], ["G", 2, 0]]
doit(ary2, 1)
#=> [["B", 0, 1], ["E", 1, 1], ["H", 2, 1]]
doit(ary2, 2)
#=> [["C", 0, 2], ["F", 1, 2], ["I", 2, 2]]

ary3 = [[
  ['A1', 'B1', 'C1'], ['D1', 'E1', 'F1'], ['G1', 'H1', 'I1']
],[
  ['A2', 'B2', 'C2'], ['D2', 'E2', 'F2'], ['G2', 'H2', 'I2']
],[
  ['A3', 'B3', 'C3'], ['D3', 'E3', 'F3'], ['G3', 'H3', 'I3']
]]

doit(ary3, 0)
#=> [["A1", 0, 0, 0], ["D1", 0, 1, 0], ["G1", 0, 2, 0],
#    ["A2", 1, 0, 0], ["D2", 1, 1, 0], ["G2", 1, 2, 0],
#    ["A3", 2, 0, 0], ["D3", 2, 1, 0], ["G3", 2, 2, 0]]
doit(ary3, 1)
#=> [["B1", 0, 0, 1], ["E1", 0, 1, 1], ["H1", 0, 2, 1],
#    ["B2", 1, 0, 1], ["E2", 1, 1, 1], ["H2", 1, 2, 1],
#    ["B3", 2, 0, 1], ["E3", 2, 1, 1], ["H3", 2, 2, 1]]
doit(ary3, 2)
#=> [["C1", 0, 0, 2], ["F1", 0, 1, 2], ["I1", 0, 2, 2],
#    ["C2", 1, 0, 2], ["F2", 1, 1, 2], ["I2", 1, 2, 2],
#    ["C3", 2, 0, 2], ["F3", 2, 1, 2], ["I3", 2, 2, 2]]

Notice that the return values are not quite in the form desired. That is because I could not figure out from the question how many nested arrays were desired.
ary4 = [
  [
    [ [['A1', 'B1'], ['C1', 'D1']], [['E1', 'F1'], ['G1', 'H1']] ],
    [ [['I1', 'J1'], ['K1', 'L1']], [['M1', 'N1'], ['O1', 'P1']] ]
  ],
  [
    [ [['A2', 'B2'], ['C2', 'D2']], [['E2', 'F2'], ['G2', 'H2']] ],
    [ [['I2', 'J2'], ['K2', 'L2']], [['M2', 'N2'], ['O2', 'P2']] ]
  ],
  [
    [ [['A3', 'B3'], ['C3', 'D3']], [['E3', 'F3'], ['G3', 'H3']] ],
    [ [['I3', 'J3'], ['K3', 'L3']], [['M3', 'N3'], ['O3', 'P3']] ]
  ]
]

doit(ary4, 0)
#=> [["A1", 0, 0, 0, 0, 0], ["C1", 0, 0, 0, 1, 0], ["E1", 0, 0, 1, 0, 0],
#    ["G1", 0, 0, 1, 1, 0], ["I1", 0, 1, 0, 0, 0], ["K1", 0, 1, 0, 1, 0],
#    ["M1", 0, 1, 1, 0, 0], ["O1", 0, 1, 1, 1, 0], ["A2", 1, 0, 0, 0, 0],
#    ["C2", 1, 0, 0, 1, 0], ["E2", 1, 0, 1, 0, 0], ["G2", 1, 0, 1, 1, 0],
#    ["I2", 1, 1, 0, 0, 0], ["K2", 1, 1, 0, 1, 0], ["M2", 1, 1, 1, 0, 0],
#    ["O2", 1, 1, 1, 1, 0], ["A3", 2, 0, 0, 0, 0], ["C3", 2, 0, 0, 1, 0],
#    ["E3", 2, 0, 1, 0, 0], ["G3", 2, 0, 1, 1, 0], ["I3", 2, 1, 0, 0, 0],
#    ["K3", 2, 1, 0, 1, 0], ["M3", 2, 1, 1, 0, 0], ["O3", 2, 1, 1, 1, 0]]
doit(ary4, 1)
#=> [["B1", 0, 0, 0, 0, 1], ["D1", 0, 0, 0, 1, 1], ["F1", 0, 0, 1, 0, 1],
#    ["H1", 0, 0, 1, 1, 1], ["J1", 0, 1, 0, 0, 1], ["L1", 0, 1, 0, 1, 1],
#    ["N1", 0, 1, 1, 0, 1], ["P1", 0, 1, 1, 1, 1], ["B2", 1, 0, 0, 0, 1],
#    ["D2", 1, 0, 0, 1, 1], ["F2", 1, 0, 1, 0, 1], ["H2", 1, 0, 1, 1, 1],
#    ["J2", 1, 1, 0, 0, 1], ["L2", 1, 1, 0, 1, 1], ["N2", 1, 1, 1, 0, 1],
#    ["P2", 1, 1, 1, 1, 1], ["B3", 2, 0, 0, 0, 1], ["D3", 2, 0, 0, 1, 1],
#    ["F3", 2, 0, 1, 0, 1], ["H3", 2, 0, 1, 1, 1], ["J3", 2, 1, 0, 0, 1],
#    ["L3", 2, 1, 0, 1, 1], ["N3", 2, 1, 1, 0, 1], ["P3", 2, 1, 1, 1, 1]]

Explanation

The steps are as follows for ary3 and most_inner_index = 0.

a = nested_indices(ary3)
#=> [[0, 1, 2], [0, 1, 2]]
first, *rest = a
#=> [[0, 1, 2], [0, 1, 2]]

Ruby applies array decomposition to obtain the following.
first #=> [0, 1, 2]
rest  #=> [[0, 1, 2]]

Continuing,

b = first.product(*rest)
#=> [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1], [2, 2]]
c = b.map do |indices|
  [ary.dig(*indices, most_inner_index), *indices, most_inner_index]
end
#=> [["A1", 0, 0, 0], ["D1", 0, 1, 0], ["G1", 0, 2, 0],
#    ["A2", 1, 0, 0], ["D2", 1, 1, 0], ["G2", 1, 2, 0],
#    ["A3", 2, 0, 0], ["D3", 2, 1, 0], ["G3", 2, 2, 0]]

See Array#product and Array#dig.
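For readers coming from Python, the same product-of-indices idea can be sketched with itertools.product. This is an illustrative analogue written by the editor, not part of the Ruby answer; the function name doit mirrors the Ruby version:

```python
from functools import reduce
from itertools import product

def doit(ary, most_inner_index):
    # Collect one index range per nesting level, mirroring nested_indices:
    # descend through the first element until it is no longer a list.
    sizes, level = [], ary
    while isinstance(level[0], list):
        sizes.append(range(len(level)))
        level = level[0]
    out = []
    for idx in product(*sizes):
        # Equivalent of Ruby's ary.dig(*indices, most_inner_index).
        val = reduce(lambda a, i: a[i], (*idx, most_inner_index), ary)
        out.append([val, *idx, most_inner_index])
    return out
```

For a 2-D list like the ary2 example, doit(ary2, 0) yields the same [value, row, column] triples as the Ruby answer.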
Constructing Diagonal and Off-Diagonal Matrix Elements
Mathematica Code

In Mathematica, I was able to write out the desired matrix with diagonal and off-diagonal values. I was wondering what the best way to do this is in Python, using NumPy?
A virtual clone of your code:

In [146]: arr = np.zeros((5,5), int)
In [147]: arr[np.arange(5), np.arange(5)] = 2
In [148]: arr[np.arange(4), np.arange(1,5)] = -1
In [149]: arr[np.arange(1,5), np.arange(4)] = -1
In [150]: arr
Out[150]:
array([[ 2, -1,  0,  0,  0],
       [-1,  2, -1,  0,  0],
       [ 0, -1,  2, -1,  0],
       [ 0,  0, -1,  2, -1],
       [ 0,  0,  0, -1,  2]])

or with a diag function:

In [151]: np.diag(np.ones(5,int)*2, 0) + np.diag(np.ones(4,int)*-1, -1) + np.diag(np.ones(4,int)*-1, 1)
Out[151]:
array([[ 2, -1,  0,  0,  0],
       [-1,  2, -1,  0,  0],
       [ 0, -1,  2, -1,  0],
       [ 0,  0, -1,  2, -1],
       [ 0,  0,  0, -1,  2]])
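The same tridiagonal matrix can also be written compactly with np.eye, using its k parameter to shift the diagonal; a small sketch:

```python
import numpy as np

# 2 on the main diagonal, -1 on the first super- and sub-diagonals.
n = 5
arr = 2 * np.eye(n, dtype=int) - np.eye(n, k=1, dtype=int) - np.eye(n, k=-1, dtype=int)
print(arr)
```

For large n, dense construction like this wastes memory on zeros; a sparse representation would be the usual next step, but the dense form matches the question.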
My GridDB nodes do not want to join into the same cluster
I have successfully started three nodes on three different Azure CentOS instances. Each node is pointing to the default notification address (239.0.0.1) and are on the same virtual network on Azure (address space 10.2.0.0/24). The nodes are all joined in on the same cluster name ("temperature" in my specific case). Based on this, the nodes should all be in the same cluster; the problem is, when I run gs_stat, they're all clearly joined into individual clusters: -bash-4.2$ gs_stat -u admin/password { "checkpoint": { "endTime": 1542823670774, "mode": "NORMAL_CHECKPOINT", "normalCheckpointOperation": 1, "pendingPartition": 0, "requestedCheckpointOperation": 0, "startTime": 1542823670486 }, "cluster": { "activeCount": 1, "clusterName": "temperature", "clusterStatus": "MASTER", "designatedCount": 1, "loadBalancer": "ACTIVE", "master": { "address": "10.2.0.5", "port": 10040 }, "nodeList": [ { "address": "10.2.0.5", "port": 10040 } ], "nodeStatus": "ACTIVE", "notificationMode": "MULTICAST", "partitionStatus": "NORMAL", "startupTime": "2018-11-21T18:06:49Z", "syncCount": 2 }, "currentTime": "2018-11-21T18:08:33Z", "performance": { "backupCount": 0, "batchFree": 0, "checkpointFileAllocateSize": 262144, "checkpointFileSize": 262144, "checkpointFileUsageRate": 0, "checkpointMemory": 0, "checkpointMemoryLimit": 1073741824, "checkpointWriteSize": 0, "checkpointWriteTime": 0, "currentCheckpointWriteBufferSize": 0, "currentTime": 1542823713412, "numBackground": 0, "numConnection": 2, "numNoExpireTxn": 0, "numSession": 0, "numTxn": 0, "ownerCount": 128, "peakProcessMemory": 72777728, "processMemory": 72777728, "recoveryReadSize": 262144, "recoveryReadTime": 32, "storeCompressionMode": "NO_BLOCK_COMPRESSION", "storeDetail": { "batchFreeMapData": { "storeMemory": 0, "storeUse": 0, "swapRead": 0, "swapWrite": 0 }, "batchFreeRowData": { "storeMemory": 0, "storeUse": 0, "swapRead": 0, "swapWrite": 0 }, "mapData": { "storeMemory": 0, "storeUse": 0, "swapRead": 0, "swapWrite": 0 }, 
"metaData": { "storeMemory": 0, "storeUse": 0, "swapRead": 0, "swapWrite": 0 }, "rowData": { "storeMemory": 0, "storeUse": 0, "swapRead": 0, "swapWrite": 0 } }, "storeMemory": 0, "storeMemoryLimit": 1073741824, "storeTotalUse": 0, "swapRead": 0, "swapReadSize": 0, "swapReadTime": 0, "swapWrite": 0, "swapWriteSize": 0, "swapWriteTime": 0, "syncReadSize": 0, "syncReadTime": 0, "totalBackupLsn": 0, "totalLockConflictCount": 0, "totalOtherLsn": 0, "totalOwnerLsn": 0, "totalReadOperation": 0, "totalRowRead": 0, "totalRowWrite": 0, "totalWriteOperation": 0 }, "recovery": { "progressRate": 1 }, "version": "4.0.0-33128 CE" } Is there a proper way to troubleshoot this? Is there a reason the nodes can't communicate?
It looks like you're using GridDB with multicast. This works if you've got local machines, but it doesn't seem to work on Azure (or other cloud services), where multicast traffic is generally not supported on virtual networks. The solution is to change to fixed-list mode, which gives explicit addresses for the GridDB nodes to join on as a cluster. More info here: https://griddb.net/en/blog/griddb-using-fixed-list-or-multicast-clustering/
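As a rough illustration only: in fixed-list mode, the cluster section of each node's gs_cluster.json lists every member explicitly, along the lines of the sketch below. The 10.2.0.5 address comes from the question's gs_stat output; the second member's address is hypothetical, the port numbers are GridDB defaults, and the exact key names should be verified against the documentation for your GridDB version:

```json
{
  "cluster": {
    "clusterName": "temperature",
    "replicationNum": 2,
    "notificationMember": [
      {
        "cluster":     {"address": "10.2.0.5", "port": 10010},
        "sync":        {"address": "10.2.0.5", "port": 10020},
        "system":      {"address": "10.2.0.5", "port": 10040},
        "transaction": {"address": "10.2.0.5", "port": 10001}
      },
      {
        "cluster":     {"address": "10.2.0.6", "port": 10010},
        "sync":        {"address": "10.2.0.6", "port": 10020},
        "system":      {"address": "10.2.0.6", "port": 10040},
        "transaction": {"address": "10.2.0.6", "port": 10001}
      }
    ]
  }
}
```

The same file, with the same member list, must be present on every node before the cluster is started.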
Numpy Swap/Substitute NoneType Entry with Numpy Array (i.e. vector)
Given a NumPy array containing two types of elements, "numpy.ndarray" entries and "NoneType" entries, how do I replace all "NoneType" entries with e.g. np.zeros(some_shape)? Could this also be done for any other type of single element, like a scalar, instead of NoneType?

Example:

test_array = array([[array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), ..., None, None, None],
       [array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), ..., None, None, None],
       [array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), ...,
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), None, None],
       ...,
       [array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), ...,
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), None],
       [array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), ...,
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), None],
       [array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), ...,
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
        array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8)]], dtype=object)

where an array within test_array might look like this:

test_array[323] = array([array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8), None, None], dtype=object)

And I want
to replace those "None" entries with a zero-vector of the same length as the other vectors (here positions 0 to 3), so that each array test_array[i] within test_array would look like this:

test_array[131] = array([array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
       array([[0, 0, 0, ..., 0, 0, 0]], dtype=uint8)], dtype=object)

So I would like to fill all None entries with np.zeros arrays. There does exist the NumPy function np.nan_to_num, but this does not help me, because I would need something like "np.nan_to_array". Thanks!
Normally I would not use a for loop with NumPy, but in your case you have an object array, which is not very efficient anyway, and dealing with the combination of None and sub-arrays stored as objects is very tricky. So, keep it simple:

prototype = a[0]
for i, x in enumerate(a):
    if x is None:
        a[i] = np.zeros_like(prototype)

Of course, you'll need to find a different prototype if a[0] is itself None. That's left as an exercise.
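A small self-contained example of the loop above, using a short object array (the shapes and values are made up for illustration):

```python
import numpy as np

# Object array mixing uint8 row vectors and None, like in the question.
a = np.empty(4, dtype=object)
a[0] = np.zeros((1, 3), dtype=np.uint8)
a[1] = np.ones((1, 3), dtype=np.uint8)
a[2] = None
a[3] = None

prototype = a[0]  # assumes a[0] is not None
for i, x in enumerate(a):
    if x is None:
        # Same shape and dtype as the prototype, filled with zeros.
        a[i] = np.zeros_like(prototype)

print([x.shape for x in a])
```

After the loop, every slot holds a (1, 3) uint8 array, with the former None entries replaced by zero vectors.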