Too many languages in solr config - solr

We have a Solr configuration based on Apache Solr 8.5.2.
We use the installation from the TYPO3 extension ext:solr 10.0.3.
This way we have multiple (39) languages and multiple cores.
As we do not need most of the languages (we certainly need one, maybe two more), I tried to remove them by deleting (moving to another folder) all the configurations I identified as other languages, leaving only these folders and files in the Solr folder:
server/
+-solr/
| +-configsets/
| | +-ext_solr_10_0_0/
| |   +-conf/
| |   | +-english/
| |   | +-_schema_analysis_stopwords_english.json
| |   | +-admin-extra.html
| |   | :
| |   | +-solrconfig.xml
| |   +-typo3lib/
| |     +-solr-typo3-plugin-4.0.0.jar
| +-cores/
| | +-english/
| |   +-core.properties
| +-data/
| | +-english/
:   :   :
I thought that after restarting the server it would only present one language and one core. This was correct.
But on startup it reported all the other languages as missing, with errors like:
core_es: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Could not load conf for core core_es: Error loading schema resource spanish/schema.xml
Where does solr get this information about all these languages I don't need?
How can I avoid this long list of warnings?

First of all, it does not hurt to have those cores. As long as they are empty and not loaded, they do not take much RAM or CPU.
But if you still want to get rid of them, you need to do it correctly. Just moving a core's data directory does not delete the core, because the Solr server also needs to adjust its config files. The best way is to use curl, like this:
curl 'http://localhost:8983/solr/admin/cores?action=UNLOAD&core=core_es&deleteInstanceDir=true'
That would remove the core and all its data.
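If many cores have to go, the same call can be repeated per core, for example with a small shell loop (the core names below are placeholders; substitute the ones from your startup warnings):
# Unload every core you no longer need; deleteInstanceDir=true also removes its files on disk.
for core in core_es core_fr core_it; do
  curl "http://localhost:8983/solr/admin/cores?action=UNLOAD&core=${core}&deleteInstanceDir=true"
done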

Related

Efficient data retention policy other than time in timescaledb

I have a hypertable which looks like this:
 Column |  Type  | Collation | Nullable | Default | Storage  | Compression | Stats target | Description
--------+--------+-----------+----------+---------+----------+-------------+--------------+-------------
 state  | text   |           |          |         | extended |             |              |
 device | text   |           |          |         | extended |             |              |
 time   | bigint |           | not null |         | plain    |             |              |
Indexes:
"device_state_time" btree ("time")
Triggers:
ts_insert_blocker BEFORE INSERT ON "device_state" FOR EACH ROW EXECUTE FUNCTION _timescaledb_internal.insert_blocker()
Child tables: _timescaledb_internal._hyper_4_2_chunk
Access method: heap
I have 100k devices, each sending their state at different time intervals. For example, device1 sends its state every second, device2 every day, device3 every 5 days, etc. And I MUST keep at least the 10 latest states for each device. So I can't really use the default data retention policy provided by Timescale.
Is there any way to achieve this efficiently other than manually selecting the latest 10 entries for each device and deleting the rest?
Thanks!
That sounds like a corner case because the chunks are time-based. Can you categorize these devices in advance?
Maybe you can insert data into different hypertables based on the insert timeframe if you still want to use the retention policies.
For example, Promscale solves this by using one table per metric, allowing users to redefine the retention policy for every metric.
It will depend on how you read the data later; maybe fragmenting it into several hypertables will make that harder.
Also, consider the optional arguments of hypertable creation; maybe you can get something out of partitioning_func and time_partitioning_func.
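As a sketch of that idea with TimescaleDB 2.x (the table names, the intervals and the timestamptz time column are all assumptions; the bigint time column from the question would additionally need set_integer_now_func before retention policies can run):
-- One hypertable per reporting cadence, each with its own retention policy.
CREATE TABLE device_state_fast (device text, state text, time timestamptz NOT NULL);
SELECT create_hypertable('device_state_fast', 'time');
SELECT add_retention_policy('device_state_fast', INTERVAL '1 day');

CREATE TABLE device_state_slow (device text, state text, time timestamptz NOT NULL);
SELECT create_hypertable('device_state_slow', 'time');
SELECT add_retention_policy('device_state_slow', INTERVAL '90 days');
If the devices can't be categorized up front, the fallback is still the manual route from the question: a periodic DELETE that keeps only the 10 newest rows per device, e.g. with a row_number() window per device.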

Combining fields in Google Data Studio

I have a CSV file of the form (unimportant columns hidden)
player,game1,game2,game3,game4,game5,game6,game7,game8
Example data:
Alice,0,-10,-30,-60,-30,-50,-10,30
Bob,10,20,30,40,50,60,70,80
Charlie,20,0,20,0,20,0,20,0
Derek,1,2,3,4,5,6,7,8
Emily,-40,-30,-20,-10,10,20,30,40
Francine,1,4,9,16,25,36,49,64
Gina,0,0,0,0,0,0,0,0
Hank,-50,50,-50,50,-50,50,-50,50
Irene,-20,-20,-20,50,50,-20,-20,-20
I am looking for a way to make a Data Studio view where I can see a chart of all the results of a certain player. How would I make a custom field that combines the data from game1 to game8 so I can make a chart of it?
| Name | Scores |
|----------|---------------------------------|
| Alice | [0,-10,-30,-60,-30,-50,-10,30] |
| Bob | [10,20,30,40,50,60,70,80] |
| Charlie | [20,0,20,0,20,0,20,0] |
| Derek | [1,2,3,4,5,6,7,8] |
| Emily | [-40,-30,-20,-10,10,20,30,40] |
| Francine | [1,4,9,16,25,36,49,64] |
| Gina | [0,0,0,0,0,0,0,0] |
| Hank | [-50,50,-50,50,-50,50,-50,50] |
| Irene | [-20,-20,-20,50,50,-20,-20,-20] |
The goal of the resulting chart would be something like this, where game1 is the first point and so on.
If this is not possible, how would I best represent the data so what I am looking for can work in Data Studio? I currently have it implemented in a Google Sheet, but the issue is there's no way to make views, so when someone selects a row it changes for everyone viewing it.
If you have the two game files as data sources, I guess that you want to combine them by name, right?
You can do that with the data blending option; Resource > Manage blends, I think, is where it lives.
Then you can create a blended data source, merging them by name.
You can also add both score fields, with different labels.
Here is some documentation about it: https://support.google.com/datastudio/answer/9061420?hl=en

ad-hoc slowly-changing dimensions materialization from external table of timestamped csvs in a data lake

Question
main question
How can I ephemerally materialize a slowly changing dimension type 2 table from a folder of daily extracts, where each csv is one full extract of a table from a source system?
rationale
We're designing ephemeral data warehouses as data marts for end users that can be spun up and burned down without consequence. This requires we have all data in a lake/blob/bucket.
We're ripping daily full extracts because:
we couldn't reliably extract just the changeset (for reasons out of our control), and
we'd like to maintain a data lake with the "rawest" possible data.
challenge question
Is there a solution that could give me the state as of a specific date and not just the "newest" state?
existential question
Am I thinking about this completely backwards and there's a much easier way to do this?
Possible Approaches
custom dbt materialization
There's an insert_by_period dbt materialization in the dbt-utils package that I think might be exactly what I'm looking for? But I'm confused, as what I want is essentially dbt snapshot, but:
run dbt snapshot for each file incrementally, all at once; and,
built directly off of an external table?
Delta Lake
I don't know much about Databricks's Delta Lake, but it seems like it should be possible with Delta Tables?
Fix the extraction job
Is our problem solved if we can make our extracts contain only what has changed since the previous extract?
Example
Suppose the following three files are in a folder of a data lake. (Gist with the 3 csvs and desired table outcome as csv).
I added the Extracted column in case parsing the timestamp from the filename is too tricky.
2020-09-14_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/14 |
| 2 | B | 3 - Propose | | 9/12 | 9/14 |
2020-09-15_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/15 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/15 |
| 3 | C | 1 - Lead | | 9/14 | 9/15 |
2020-09-16_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/16 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/16 |
| 3 | C | 2 - Qualify | | 9/15 | 9/16 |
End Result
Below is the SCD-II for the three files as of 9/16. The SCD-II as of 9/15 would be the same, except that OppId=3 would have only one row, with valid_from=9/15 and valid_to=null.
| OppId | CustId | Stage | Won | LastModified | valid_from | valid_to |
|-------|--------|-------------|-----|--------------|------------|----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/14 | null |
| 2 | B | 3 - Propose | | 9/12 | 9/14 | 9/15 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/15 | null |
| 3 | C | 1 - Lead | | 9/14 | 9/15 | 9/16 |
| 3 | C | 2 - Qualify | | 9/15 | 9/16 | null |
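For illustration, the table above can be derived on the fly with window functions, along these lines (a sketch only; it assumes all the daily csvs are readable as one external table, here called crm_extracts, with the columns from the example):
-- Collapse the stacked daily extracts into SCD-II rows.
-- crm_extracts is a hypothetical external table over all the csv files.
WITH changes AS (
    SELECT *,
           LAG(Stage) OVER (PARTITION BY OppId ORDER BY Extracted) AS prev_stage,
           LAG(Won)   OVER (PARTITION BY OppId ORDER BY Extracted) AS prev_won
    FROM crm_extracts
),
versions AS (   -- keep only the rows where something actually changed
    SELECT OppId, CustId, Stage, Won, LastModified, Extracted AS valid_from
    FROM changes
    WHERE prev_stage IS DISTINCT FROM Stage
       OR prev_won   IS DISTINCT FROM Won
)
SELECT OppId, CustId, Stage, Won, LastModified, valid_from,
       LEAD(valid_from) OVER (PARTITION BY OppId ORDER BY valid_from) AS valid_to
FROM versions
ORDER BY OppId, valid_from;
Filtering the extracts with WHERE Extracted <= some date before the window functions would give the "as of a specific date" view from the challenge question.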
Interesting concept, and of course it would take a longer conversation than is possible in this forum to fully understand your business, stakeholders, data, etc. I can see that it might work if you had a relatively small volume of data, your source systems rarely changed, your reporting requirements (and hence, datamarts) also rarely changed, and you only needed to spin up these datamarts very infrequently.
My concerns would be:
If your source or target requirements change how are you going to handle this? You will need to spin up your datamart, do full regression testing on it, apply your changes and then test them. If you do this as/when the changes are known then it's a lot of effort for a Datamart that's not being used - especially if you need to do this multiple times between uses; if you do this when the datamart is needed then you're not meeting your objective of having the datamart available for "instant" use.
I'm not sure your statement "we have a DW as code that can be deleted, updated, and recreated without the complexity that goes along with traditional DW change management" is true. How are you going to test updates to your code without spinning up the datamart(s) and going through a standard test cycle with data - and then how is this different from traditional DW change management?
What happens if there is corrupt/unexpected data in your source systems? In a "normal" DW where you are loading data daily this would normally be noticed and fixed on the day. In your solution the dodgy data might have occurred days/weeks ago and, assuming it loaded into your datamart rather than erroring on load, you would need processes in place to spot it and then potentially have to unravel days of SCD records to fix the problem.
(Only relevant if you have a significant volume of data) Given the low cost of storage, I'm not sure I see the benefit of spinning up a datamart when needed as opposed to just holding the data so it's ready for use. Loading large volumes of data every time you spin up a datamart is going to be time-consuming and expensive. A possible hybrid approach might be to only run incremental loads when the datamart is needed rather than running them every day - so you have the data from when the datamart was last used ready to go at all times, and you just add the records created/updated since the last load.
I don't know whether this is the best or not, but I've seen it done. When you build your initial SCD-II table, add a column that is a stored HASH() value of all of the values of the record (you can exclude the primary key). Then, you can create an External Table over your incoming full data set each day, which includes the same HASH() function. Now, you can execute a MERGE or INSERT/UPDATE against your SCD-II based on primary key and whether the HASH value has changed.
Your main advantage doing things this way is you avoid loading all of the data into Snowflake each day to do the comparison, but it will be slower to execute this way. You could also load to a temp table with the HASH() function included in your COPY INTO statement and then update your SCD-II and then drop the temp table, which could actually be faster.
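A rough sketch of that approach (assuming Snowflake, a hypothetical external table ext_crm_extract over the lake files, and an SCD-II target dim_opportunity that stores the hash in a row_hash column) could look like:
-- Close current versions whose hash no longer matches the incoming extract,
-- and insert rows for OppIds that have never been seen before.
MERGE INTO dim_opportunity d
USING (
    SELECT OppId, CustId, Stage, Won, LastModified, Extracted,
           HASH(CustId, Stage, Won, LastModified) AS row_hash
    FROM ext_crm_extract
) s
ON d.OppId = s.OppId AND d.valid_to IS NULL
WHEN MATCHED AND d.row_hash <> s.row_hash THEN
    UPDATE SET valid_to = s.Extracted
WHEN NOT MATCHED THEN
    INSERT (OppId, CustId, Stage, Won, LastModified, row_hash, valid_from, valid_to)
    VALUES (s.OppId, s.CustId, s.Stage, s.Won, s.LastModified, s.row_hash, s.Extracted, NULL);
-- A second INSERT ... SELECT is still needed to open the new version of the
-- rows that were just closed, since one MERGE cannot do both for the same key.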

Best database storage for matching products from offers

I have the following problem. I have products, offers and their parameters (about 300,000,000 rows in MySQL). Based on the offer parameters and their rating (parameters are dynamic and every parameter type has a different rating), I must join offers to products. Of course there will be a lot of updates, deletes and inserts (for example around 5000 req/s).
The second piece of functionality will be sending this combined information via an API. Does anyone have recommendations on what NoSQL database, relational database or something similar to use for storage?
Edit
I'll show my example on a small sample of data in MySQL:
Offer
+----------+-----------------+
| offer_id | name |
+----------+-----------------+
| 1 | iphone_se_black |
| 2 | iphone_se_red |
| 3 | iphone_se_white |
+----------+-----------------+
Parameter_rating
+--------------+----------------+--------+
| parameter_id | parameter_name | rating |
+--------------+----------------+--------+
| 1 | os | 10 |
| 2 | processor | 10 |
| 3 | ram | 10 |
| 4 | color | 1 |
+--------------+----------------+--------+
Parameter_value
+----+--------------+----------------+
| id | parameter_id | value |
+----+--------------+----------------+
| 1 | 1 | iOS |
| 2 | 2 | some_processor |
| 3 | 3 | 2GB |
| 4 | 4 | black |
| 5 | 4 | red |
| 6 | 4 | white |
+----+--------------+----------------+
Parameter_to_value
+----------+--------------------+
| offer_id | parameter_value_id |
+----------+--------------------+
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 1 | 4 |
| 2 | 1 |
| 2 | 2 |
| 2 | 3 |
| 2 | 5 |
| 3 | 1 |
| 3 | 2 |
| 3 | 3 |
| 3 | 6 |
+----------+--------------------+
and based on this data I must return that offers 1, 2 and 3 are one product.
The biggest problem is that the data changes often; for example, changing prices, removing offers, etc. Therefore, I do not think that MySQL is the most suitable technology, and I am trying to choose another.
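For illustration of what "join offers to product" means on this sample, a grouping like the following would do it in MySQL (the rating >= 10 cut-off that decides which parameters define a product is an assumption):
-- Offers that agree on every highly rated parameter value (os, processor, ram)
-- get the same signature; color (rating 1) is ignored.
SELECT GROUP_CONCAT(offer_id ORDER BY offer_id) AS offers_in_product
FROM (
    SELECT ptv.offer_id,
           GROUP_CONCAT(pv.id ORDER BY pv.id) AS signature
    FROM Parameter_to_value ptv
    JOIN Parameter_value  pv ON pv.id = ptv.parameter_value_id
    JOIN Parameter_rating pr ON pr.parameter_id = pv.parameter_id
    WHERE pr.rating >= 10
    GROUP BY ptv.offer_id
) sigs
GROUP BY signature;
-- On the sample data this returns a single row: offers_in_product = '1,2,3'
Whether that holds up at 300,000,000 rows and 5000 req/s is exactly what the question is about.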
Platform
Does anyone have recommendations on what NoSQL database, relational database or something similar to use for storage?
Therefore, I do not think that MySQL is the most suitable technology, and I am trying to choose another.
All that is ordinary fare for a Relational database. Tens of thousands of banks run trading and pricing systems that are extremely active from hundreds of thousands of users, on such systems. Every day. The changes you allude to are normal on such systems (eg. pricing and pricing basis, change all the time, in response to Buys & Sells).
But they use genuine SQL platforms. Freeware/shareware/vapourware/nowhere suites such as MySQL and PostgreSQL are neither SQL-compliant, nor viable platforms for high-throughput OLTP systems (no server architecture; no ACID Transactions; etc). They are still implementing the basics that SQL platforms have had since 1984, which is very difficult (impossible!) because they do not have a server architecture.
Therefore MySQL and PostgreSQL are not suitable for the reason of abject performance; zero concurrency; etc, and not for any database design concerns.
For an appreciation of the value of a genuine OLTP Server Architecture, refer to Oracle vs Sybase ASE. Although the article deals with Oracle explicitly, it applies to all freeware because all freeware has the same non-architecture that Oracle has. Actually, even less than Oracle. You get what you pay for.
Data Analysis
This answer is limited to Relational databases; SQL, its designated data sublanguage; and a genuine, commercially viable, SQL platform.
It appears the system supports an auction of some kind, which means you have to maintain an inventory of available/sold items. The database design that is required is quite ordinary.
However, your question is not clear enough to be answered. You are making many assumptions, that we are not party to. Allow me to ask some leading questions, which you need to consider and answer (update your Question):
what are the fundamental things that the system transacts operations against ?
(products such as phones ?)
how are those things identified ?
(Not the ID but how do humans identify each thing)
what are the properties of those things ?
(please, not "parameter" ... maybe OS; RAM; Processor; Colour) ?
Then property values can be understood
(You can't mess with the attributes of a thing unless you hold and maintain the thing)
what are the operations or transactions against those things
(a) internal or admin transactions
(eg. AddProperty; AddPropertyValue; AddProduct; etc)
(b) external or online user transactions
(eg. BidProduct [offer to buy]; CloseBid; etc)
who are the operators, to which those transactions are permitted ?
(eg. Admins; product suppliers; online bidders; etc)
I can't make any sense of your Parameter_to_value, please explain
What is rating ? Some kind of weighting for the property vs the other properties, or something the bidders declare ?
Database Design • Tentative
This might take a few iterations.
Don't worry about ID fields on each and every file: first we have to understand the data, how it relates to other data, and how it is identified. We can add ID fields at the end.
Note
All my data models are rendered in IDEF1X, the Standard for modelling Relational databases since 1993
My IDEF1X Introduction is essential reading for beginners.
The IDEF1X Anatomy is a refresher for those who have lapsed.
If you have trouble reading the Predicates from the Data Model, let me know and I will produce them in text form.

Spare parts Database (structure)

There is a database of spare parts for cars, with an online search by the name of the spare part. The user can type in the search, for example, "safety cushion" or "airbag" - and the search result should be the same.
Therefore, I need to somehow implement aliases for the names of spare parts, and the question is how to store them in the database. Until now I have only one option that comes to mind - to create an additional table:
| id | name of part   | alias_id |
|----|----------------|----------|
| 1  | airbag         | 10       |
| 2  | safety cushion | 10       |
And add an additional field "alias_id" to the table containing all the spare parts, and search by this field...
Are there other better options?
If I have understood correctly, it's best to have 3 tables in a many-to-many situation (if multiple parts can have multiple aliases):
Table - Parts
| id | name of part   |
|----|----------------|
| 1  | airbag         |
| 2  | safety cushion |
Table - Aliases
| id | name of alias |
|----|---------------|
| 10 | AliasName     |
Table - PartToAliases
| id | PartId | AliasId |
|----|--------|---------|
| 1  | 1      | 10      |
| 2  | 2      | 10      |
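A minimal sketch of that layout plus the lookup query (column names are illustrative; the search returns every part linked to the same alias as any part whose name matches the term):
CREATE TABLE Parts (
    id   INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE Aliases (
    id   INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE PartToAliases (
    part_id  INT NOT NULL,
    alias_id INT NOT NULL,
    PRIMARY KEY (part_id, alias_id),
    FOREIGN KEY (part_id)  REFERENCES Parts (id),
    FOREIGN KEY (alias_id) REFERENCES Aliases (id)
);

-- All parts that share an alias with any part whose name matches the search term.
SELECT DISTINCT p2.*
FROM Parts p1
JOIN PartToAliases pa1 ON pa1.part_id  = p1.id
JOIN PartToAliases pa2 ON pa2.alias_id = pa1.alias_id
JOIN Parts p2          ON p2.id        = pa2.part_id
WHERE p1.name LIKE CONCAT('%', 'airbag', '%');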
Your solution looks fine for the exact problem you described.
BUT what if someone writes safetycushion? Or safety cuschion? With these kinds of variations your alias lookup table will soon become huge, and manually maintaining it will not be feasible.
At that point you'll need a completely different approach (think full text search engine).
So if you are still sure you only need a couple of aliases, your approach seems fine.
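If it comes to that, a first step inside MySQL itself could be a full-text index (a sketch; it copes with word-level matches such as "cushion safety", but not with misspellings like "cuschion" - that is where a dedicated search engine comes in):
-- Hypothetical full-text index on the part names (InnoDB, MySQL 5.6+).
ALTER TABLE Parts ADD FULLTEXT INDEX ft_parts_name (name);

SELECT *
FROM Parts
WHERE MATCH(name) AGAINST('safety cushion' IN NATURAL LANGUAGE MODE);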
