SNMP table with objects as columns - AgentX subagent (net-snmp)

I have a table in my MIB file in which some of the columns are not terminal values but intermediate object-identifiers.
I can't find documentation on how to handle this case in a NET-SNMP AgentX subagent (with the C libraries).
Visually, this is what I mean:
Row:  +------+------+--------+------+
         |      |       |       |
        val1   val2   object   val3
                         |
                  -------+-------
                  |      |      |
                 val    val    val

I am not quite sure, but mib2c can generate both scalar and non-scalar (table) handlers from the same MIB file, and the generated files can then be combined to make up the whole MIB tree.
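If it helps, the usual workflow (just a sketch; the MIB module and object names below are placeholders) is to run mib2c separately for the table part and for the embedded objects, then compile all of the generated .c/.h files into the same subagent:

mib2c -c mib2c.iterate.conf MY-MIB::myTable
mib2c -c mib2c.scalar.conf MY-MIB::myEmbeddedObjects

Each generated init_*() function registers its own OID region, so together the handlers cover the whole subtree, including the branch hanging off the table column.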

Efficient data retention policy other than time in timescaledb

I have a hypertable which looks like this:
   Column    |  Type  | Collation | Nullable | Default | Storage  | Compression | Stats target | Description
-------------+--------+-----------+----------+---------+----------+-------------+--------------+-------------
 state       | text   |           |          |         | extended |             |              |
 device      | text   |           |          |         | extended |             |              |
 time        | bigint |           | not null |         | plain    |             |              |
Indexes:
"device_state_time" btree ("time")
Triggers:
ts_insert_blocker BEFORE INSERT ON "device_state" FOR EACH ROW EXECUTE FUNCTION _timescaledb_internal.insert_blocker()
Child tables: _timescaledb_internal._hyper_4_2_chunk
Access method: heap
I have 100k devices, each sending its state at different time intervals. For example, device1 sends state every second, device2 every day, device3 every 5 days, etc. And I MUST keep at least the 10 latest states for a device. So I can't really use the default data retention policy provided by Timescale.
Is there any way to achieve this efficiently other than manually selecting the latest 10 entries for each device and deleting the rest?
Thanks!
That sounds like a corner case because the chunks are time-based. Can you categorize these devices in advance?
Maybe you can insert data into different hypertables based on the insert timeframe if you still want to use the retention policies.
For example, Promscale uses one table per metric, allowing users to redefine the retention policy for every metric.
It will depend on how you read the data later; maybe fragmenting it into several hypertables will make it harder.
Also, consider experimenting with the optional arguments to hypertable creation; maybe you can get something out of partitioning_func and time_partitioning_func.
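For reference, the "manual" route mentioned in the question could look roughly like the sketch below (untested; it assumes the hypertable is the device_state table shown above). It keeps the 10 newest rows per device, but unlike a chunk-based retention policy it has to scan and delete row by row, which is why it does not scale well:

-- keep only the 10 most recent states per device
DELETE FROM device_state d
USING (
    SELECT device, "time",
           ROW_NUMBER() OVER (PARTITION BY device ORDER BY "time" DESC) AS rn
    FROM device_state
) ranked
WHERE d.device = ranked.device
  AND d."time" = ranked."time"
  AND ranked.rn > 10;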

ad-hoc slowly-changing dimensions materialization from external table of timestamped csvs in a data lake

Question
main question
How can I ephemerally materialize a type 2 slowly changing dimension from a folder of daily extracts, where each csv is one full extract of a table from a source system?
rationale
We're designing ephemeral data warehouses as data marts for end users that can be spun up and burned down without consequence. This requires we have all data in a lake/blob/bucket.
We're ripping daily full extracts because:
we couldn't reliably extract just the changeset (for reasons out of our control), and
we'd like to maintain a data lake with the "rawest" possible data.
challenge question
Is there a solution that could give me the state as of a specific date and not just the "newest" state?
existential question
Am I thinking about this completely backwards and there's a much easier way to do this?
Possible Approaches
custom dbt materialization
There's an insert_by_period dbt materialization in the dbt-utils package that I think might be exactly what I'm looking for? But I'm confused, as what I want is essentially dbt snapshot, but:
run dbt snapshot for each file incrementally, all at once; and,
built directly off of an external table?
Delta Lake
I don't know much about Databricks's Delta Lake, but it seems like it should be possible with Delta Tables?
Fix the extraction job
Is our problem solved if we can make our extracts contain only what has changed since the previous extract?
Example
Suppose the following three files are in a folder of a data lake. (Gist with the 3 csvs and desired table outcome as csv).
I added the Extracted column in case parsing the timestamp from the filename is too tricky.
2020-09-14_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/14 |
| 2 | B | 3 - Propose | | 9/12 | 9/14 |
2020-09-15_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/15 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/15 |
| 3 | C | 1 - Lead | | 9/14 | 9/15 |
2020-09-16_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/16 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/16 |
| 3 | C | 2 - Qualify | | 9/15 | 9/16 |
End Result
Below is the SCD-II for the three files as of 9/16. SCD-II as of 9/15 would be the same, except OppId=3 would have only one row, with valid_from=9/15 and valid_to=null.
| OppId | CustId | Stage | Won | LastModified | valid_from | valid_to |
|-------|--------|-------------|-----|--------------|------------|----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/14 | null |
| 2 | B | 3 - Propose | | 9/12 | 9/14 | 9/15 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/15 | null |
| 3 | C | 1 - Lead | | 9/14 | 9/15 | 9/16 |
| 3 | C | 2 - Qualify | | 9/15 | 9/16 | null |
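For what it's worth, here is one possible sketch (untested) of the ephemeral build in plain SQL over the stacked extracts, assuming they are exposed as a single external table hypothetically named crm_extracts with the columns shown above; uncommenting the date filter gives the "as of" variant from the challenge question:

WITH snapshots AS (
    SELECT OppId, CustId, Stage, Won, LastModified, Extracted,
           -- a row starts a new version when it differs from the previous extract;
           -- change detection is keyed on LastModified here, but comparing all
           -- tracked columns (or a hash of them) is safer if LastModified is unreliable
           CASE WHEN LAG(LastModified) OVER (PARTITION BY OppId ORDER BY Extracted)
                     IS DISTINCT FROM LastModified
                THEN 1 ELSE 0 END AS is_new_version
    FROM crm_extracts
    -- WHERE Extracted <= '2020-09-15'
)
SELECT OppId, CustId, Stage, Won, LastModified,
       Extracted AS valid_from,
       LEAD(Extracted) OVER (PARTITION BY OppId ORDER BY Extracted) AS valid_to
FROM snapshots
WHERE is_new_version = 1;

Run as a dbt model over an external table, this is essentially the snapshot logic applied to all files at once rather than incrementally.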
Interesting concept, and of course it would take a longer conversation than is possible in this forum to fully understand your business, stakeholders, data, etc. I can see that it might work if you had a relatively small volume of data, your source systems rarely changed, your reporting requirements (and hence, datamarts) also rarely changed, and you only needed to spin up these datamarts very infrequently.
My concerns would be:
If your source or target requirements change how are you going to handle this? You will need to spin up your datamart, do full regression testing on it, apply your changes and then test them. If you do this as/when the changes are known then it's a lot of effort for a Datamart that's not being used - especially if you need to do this multiple times between uses; if you do this when the datamart is needed then you're not meeting your objective of having the datamart available for "instant" use.
I'm not sure your statement "we have a DW as code that can be deleted, updated, and recreated without the complexity that goes along with traditional DW change management" is true. How are you going to test updates to your code without spinning up the datamart(s) and going through a standard test cycle with data - and then how is this different from traditional DW change management?
What happens if there is corrupt/unexpected data in your source systems? In a "normal" DW where you are loading data daily, this would normally be noticed and fixed on the day. In your solution the dodgy data might have occurred days/weeks ago and, assuming it loaded into your datamart rather than erroring on load, you would need processes in place to spot it and then potentially have to unravel days of SCD records to fix the problem.
(Only relevant if you have a significant volume of data) Given the low cost of storage, I'm not sure I see the benefit of spinning up a datamart when needed as opposed to just holding the data so it's ready for use. Loading large volumes of data every time you spin up a datamart is going to be time-consuming and expensive. A possible hybrid approach might be to only run incremental loads when the datamart is needed, rather than running them every day - so you have the data from when the datamart was last used ready to go at all times, and you just add the records created/updated since the last load.
I don't know whether this is the best or not, but I've seen it done. When you build your initial SCD-II table, add a column that is a stored HASH() value of all of the values of the record (you can exclude the primary key). Then, you can create an External Table over your incoming full data set each day, which includes the same HASH() function. Now, you can execute a MERGE or INSERT/UPDATE against your SCD-II based on primary key and whether the HASH value has changed.
Your main advantage doing things this way is you avoid loading all of the data into Snowflake each day to do the comparison, but it will be slower to execute this way. You could also load to a temp table with the HASH() function included in your COPY INTO statement and then update your SCD-II and then drop the temp table, which could actually be faster.
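A rough sketch of that hash-compare MERGE (untested; scd_opportunity and ext_crm_extract are hypothetical names) might look like the following. Note that a single MERGE can close the current version but cannot also insert its replacement, so a second INSERT ... SELECT keyed on the same hash comparison is still needed for the new version of changed rows:

MERGE INTO scd_opportunity tgt
USING (
    SELECT OppId, CustId, Stage, Won, LastModified,
           HASH(CustId, Stage, Won, LastModified) AS row_hash
    FROM ext_crm_extract   -- external table over today's csv
) src
ON tgt.OppId = src.OppId AND tgt.valid_to IS NULL
WHEN MATCHED AND tgt.row_hash <> src.row_hash THEN
    UPDATE SET valid_to = CURRENT_DATE()   -- close the current version
WHEN NOT MATCHED THEN
    INSERT (OppId, CustId, Stage, Won, LastModified, row_hash, valid_from, valid_to)
    VALUES (src.OppId, src.CustId, src.Stage, src.Won, src.LastModified,
            src.row_hash, CURRENT_DATE(), NULL);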

SQL Server NEWSEQUENTIALID() - clarification for super fast .net core implementation

Currently I'm trying to reimplement SQL Server's NEWSEQUENTIALID() in .NET Core 2.2. It should run really fast and allocate the minimum possible amount of memory, but I need clarification on how to calculate the UUID version and where it goes (which byte to place it in, or what bit shift is needed). So far I have generated a timestamp, retrieved the MAC address, and copied bytes 8 and 9 from a base randomly generated GUID, but surely I'm missing something, because the results don't match the output of the original algorithm.
var guidArray = new byte[16];
// mac
guidArray[15] = macBytes[5];
guidArray[14] = macBytes[4];
guidArray[13] = macBytes[3];
guidArray[12] = macBytes[2];
guidArray[11] = macBytes[1];
guidArray[10] = macBytes[0];
// base guid
guidArray[9] = baseGuidBytes[9];
guidArray[8] = baseGuidBytes[8];
// time
guidArray[7] = ticksDiffBytes[0];
guidArray[6] = ticksDiffBytes[1];
guidArray[5] = ticksDiffBytes[2];
guidArray[4] = ticksDiffBytes[3];
guidArray[3] = ticksDiffBytes[4];
guidArray[2] = ticksDiffBytes[5];
guidArray[1] = ticksDiffBytes[6];
guidArray[0] = ticksDiffBytes[7];
var guid = new Guid(guidArray);
Current benchmark results:
| Method | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|--------------------------- |----------:|---------:|---------:|------:|--------:|-------:|------:|------:|----------:|
| SqlServerNewSequentialGuid | 37.31 ns | 0.680 ns | 0.636 ns | 1.00 | 0.00 | 0.0127 | - | - | 80 B |
| Guid_Standard | 63.29 ns | 0.435 ns | 0.386 ns | 1.70 | 0.03 | - | - | - | - |
| Guid_Comb | 299.57 ns | 2.902 ns | 2.715 ns | 8.03 | 0.13 | 0.0162 | - | - | 104 B |
| Guid_Comb_New | 266.92 ns | 3.173 ns | 2.813 ns | 7.16 | 0.11 | 0.0162 | - | - | 104 B |
| MyFastGuid | 70.08 ns | 1.011 ns | 0.946 ns | 1.88 | 0.05 | 0.0050 | - | - | 32 B |
Update:
Here are the latest results of benchmarking common id generators written in .net core.
As you can see, my implementation NewSequentialGuid_PureNetCore is at worst about 2x slower than the wrapper around rpcrt4.dll (which was my baseline), but my implementation eats less memory (30 B).
Here is a sequence of the first 10 sample GUIDs:
492bea01-456f-3166-0001-e0d55e8cb96a
492bea01-456f-37a5-0002-e0d55e8cb96a
492bea01-456f-aca5-0003-e0d55e8cb96a
492bea01-456f-bba5-0004-e0d55e8cb96a
492bea01-456f-c5a5-0005-e0d55e8cb96a
492bea01-456f-cea5-0006-e0d55e8cb96a
492bea01-456f-d7a5-0007-e0d55e8cb96a
492bea01-456f-dfa5-0008-e0d55e8cb96a
492bea01-456f-e8a5-0009-e0d55e8cb96a
492bea01-456f-f1a5-000a-e0d55e8cb96a
If you want the code, then give me a sign ;)
The official documentation states it quite clearly:
NEWSEQUENTIALID is a wrapper over the Windows UuidCreateSequential
function, with some byte shuffling applied.
There are also links in the quoted paragraph which might be of interest for you. However, considering that the original code is written in C / C++, I somehow doubt that .NET can outperform it, so reusing the same approach might be a more prudent choice (even though it would involve unmanaged calls).
Having said that, I sincerely hope that you have researched the behaviour of this function and considered all its side effects before deciding to pursue this approach. And I certainly hope you aren't going to use this output as a clustered index for your table(s). The reason for this is also mentioned in the docs (as a warning, no less):
The UuidCreateSequential function has hardware dependencies. On SQL
Server, clusters of sequential values can develop when databases (such
as contained databases) are moved to other computers. When using
Always On and on SQL Database, clusters of sequential values can
develop if the database fails over to a different computer.
Basically, the function generates a monotonic sequence only while the database stays in the same hosting environment. When:
a network card gets changed on the bare metal (or whatever else the function depends upon), or
a backup is restored someplace else (think Prod-to-Dev refresh, or simply prod migration / upgrade), or
a failover happens, whether in a cluster or in an AlwaysOn configuration
, the new SQL Server instance will have its own range of generated values, which is supposed not to overlap the ranges of other instances on other machines. If that new range comes "before" the existing values, you'll end up with fragmentation issues for absolutely no good reason. Oh, and top (1) to get the latest value won't work anymore.
Indeed, if all you need is a non-exhaustible monotonic sequence, follow Greg Low's advice and just stick to bigint. It's half as wide, and no, you can't possibly exhaust it.
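If you go that route, a minimal sketch (table and column names here are made up) is simply:

CREATE TABLE dbo.Orders
(
    Id      bigint IDENTITY(1,1) NOT NULL,
    Payload nvarchar(100) NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (Id)
);

An ever-increasing bigint gives the same append-at-the-end insert pattern without the hardware dependency, at half the width of a uniqueidentifier.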

Mapping Tables of Database in NiFi

Here is my requirement.
I have a big table in Vertica say base_table as follows.
base_table
ID | path | service | experience
20 | /abc/xyz | trz | moderate
22 | /wer/cmz | brd | professional
Mapping Tables
map_table1
path_id | path
1 | /abc/xyz
map_table2
exp_id | experience
1 | beginner
Final Table
ID | path_id | service | exp_id
20 | 1 | trz | -
22 | - | brd | 2
In the first case, I need to get the id as 1, since the path is present in map_table1 as well as in the base table, and insert that record into the final table.
In the second case, I need to insert the id as 2 into map_table2, since the experience 'professional' is not present in that table, and then insert the record into the final table as well.
Which processors should I go for, or how should the flow look in NiFi?
I am not sure if I understand your question correctly, but if I generalize the situation, you want to insert a record if it does not exist, and then get the value of the corresponding ID (which may or may not have existed before).
The good news is that NiFi can easily work with a database like Vertica, have a look at the QueryDatabaseTable processor.
The challenge here, however, is that NiFi is designed to efficiently handle many individual messages and is therefore designed not to be very context-aware. For your use case you would probably want to use a tool that is built to work with tables. In general, the solution for this would be Spark, or perhaps it can be built into your database with some procedures.
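If you do end up pushing it into the database (the "procedures" option above), a rough SQL sketch using the table names from the question could look like this: first add any missing experiences to map_table2 (the same pattern applies to map_table1 and path), then build the final table by joining on the natural keys. The final_table name is an assumption here:

-- assign new ids to experiences that are not mapped yet
INSERT INTO map_table2 (exp_id, experience)
SELECT (SELECT COALESCE(MAX(exp_id), 0) FROM map_table2)
         + ROW_NUMBER() OVER (ORDER BY experience),
       experience
FROM (SELECT DISTINCT experience FROM base_table) AS new_vals
WHERE NOT EXISTS (SELECT 1 FROM map_table2 m WHERE m.experience = new_vals.experience);

-- resolve the base table against both mapping tables
INSERT INTO final_table (ID, path_id, service, exp_id)
SELECT b.ID, p.path_id, b.service, e.exp_id
FROM base_table b
LEFT JOIN map_table1 p ON p.path = b.path
LEFT JOIN map_table2 e ON e.experience = b.experience;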

Pulling ill-formatted data in Libre Calc: What Function will work with this?

I am working on a project where I am pulling tables from a Fandom Wikia page and feeding them into a spreadsheet named 'WikiPullSheet'. The data in the wiki tables is irregular in format, sometimes using multiple rows for the same entry.
Here is an example of some rows as described above from the sheet:
Name | Power | Stamina | Agility
Townsman Shield | 2 | 1 | 2
Starter | | |
Broken Shield | 4(+1) | 2(+1) | 2(+1)
Z1 | | |
Heater | 2(+1) | 4(+1) | 2(+1)
Z1 | | |
Wood Elf Shield | 2(+1) | 2(+1) | 4(+1)
Z1 | | |
Shiv | 4 | 4 | 3
Z1 Shop | | |
Deimos* | 26 | 16 | 26
| 34 | 22 | 34
I want the sheet to auto-update from the wikia page but this format will not allow me to reference items as the sheet expands. For instance, if on another sheet I want to have a drop down list of all the names for items in this list, I would be referencing the blank and starter cells even though they are not actually unique items in the table. I have done research on VLOOKUP, COUNTIF, REGEX options, MATCH, and more, but none of these seem to work for the issue I am having.
How would I take this input and either create a formula to reformat it or pull from the sheet as is and use the columns appropriately for a drop-down box containing only the item names from the NAME column?
Desired Output:
I need the data to end up formatted with each row representing a different unique item. Since the data comes in with rows that contain the item's location in the Name column ('Z1', for instance), this is proving difficult. I could simply remove the rows that cause problems, such as 'Z1' & 'Z1 Shop' in the above example; however, this does not help when an item has multiple upgrade paths, as in the case of the 'Deimos' row entry.
If you insert a pivot table (there is an icon to do so; select ColumnA first) based on ColumnA (assuming that is where Name is to be found), you should get a sorted list of the unique names.
It is far from a complete solution (you don't show exactly what the desired output should be), but I thought a sorted list, with each entry unique and the blanks at least out of the way, might be a start.
