Is there a delay between a SET and a GET with the same key in Redis?

I have three processes on one computer:
A test (T)
An nginx server with my own module (M) --- the test starts and stops this process between test case sections
A Redis server (R), which is always running --- the test does not handle the start/stop sequence of this service (I'm testing my nginx module, not Redis)
Here is a diagram of the various events:
T             M             R
|             |             |
O-------------------------->+   FLUSHDB
|             |             |
+<--------------------------O   (FLUSHDB acknowledged as successful)
|             |             |
O-------------------------->+   SET key value
|             |             |
+<--------------------------O   (SET acknowledged as successful)
|             |             |
O------------>+             |   Start nginx including my module
|             |             |
|             O------------>+   GET key
|             |             |
|             +<------------O   (SUCCESS 80% and FAILURE 20%)
|             |             |
The test clears the Redis database with FLUSHDB, then adds a key with SET key value. The test then starts nginx, including my module. Then, once in a while, the nginx module's GET key action fails.
Note 1: I am not using the async implementation of Redis.
Note 2: I am using the C library hiredis.
Is it possible that there is a delay between a SET and a following GET with the same key, which would explain why this process fails once in a while? Is there a way for me to ensure that the SET is really done once the redisCommand() function returns?
IMPORTANT NOTE: if I run one such test and the GET fails in my nginx module, the key appears in my Redis:
redis-cli
127.0.0.1:6379> KEYS *
1) "8b95d48d13e379f1ccbcdfc39fee4acc5523a"
127.0.0.1:6379> GET "8b95d48d13e379f1ccbcdfc39fee4acc5523a"
"the expected value"
So the
SET "8b95d48d13e379f1ccbcdfc39fee4acc5523a" "the expected value"
worked as expected. Only the GET failed and I would assume that it is because it somehow occurred too quickly. Any idea how to tackle this problem?
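(For reference, a minimal sketch of how the SET step and its reply check could look with the synchronous hiredis API; the key, value, and connection details here are placeholders rather than the actual test code.)
#include <stdio.h>
#include <string.h>
#include <hiredis/hiredis.h>

int main(void)
{
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) {
        fprintf(stderr, "connect failed: %s\n", c ? c->errstr : "out of memory");
        return 1;
    }

    /* With the synchronous API, redisCommand() does not return until the
     * server has sent its reply, so the SET is complete once +OK arrives. */
    redisReply *reply = redisCommand(c, "SET %s %s",
                                     "8b95d48d13e379f1ccbcdfc39fee4acc5523a",
                                     "the expected value");
    if (reply == NULL) {
        fprintf(stderr, "SET failed: %s\n", c->errstr);
        redisFree(c);
        return 1;
    }
    if (reply->type != REDIS_REPLY_STATUS || strcmp(reply->str, "OK") != 0) {
        fprintf(stderr, "unexpected SET reply\n");
    }
    freeReplyObject(reply);

    /* ... only start nginx (and therefore the GET) after this point ... */

    redisFree(c);
    return 0;
}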

No, there is no delay between SET and GET. What you are doing should work.
Try running the MONITOR command in a separate window. When it fails, does the SET command appear before or after the GET command?
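For example, in a separate terminal (the timestamps and client addresses below are only illustrative):
redis-cli MONITOR
OK
1522852800.123456 [0 127.0.0.1:45678] "FLUSHDB"
1522852800.234567 [0 127.0.0.1:45678] "SET" "8b95d48d13e379f1ccbcdfc39fee4acc5523a" "the expected value"
1522852800.345678 [0 127.0.0.1:45690] "GET" "8b95d48d13e379f1ccbcdfc39fee4acc5523a"
If the GET line appears before the SET line, the module is reading before the test has finished writing; if it appears after, the problem is elsewhere.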

Related

Chaining rows in a SQL Server table in a distributed system

Let's say that I have the following SQL table where each value has a reference to the previous one:
ChainedTable
+------------------+--------------------------------------+------------+--------------------------------------+
| SequentialNumber | GUID | CustomData | LastGUID |
+------------------+--------------------------------------+------------+--------------------------------------+
| 1 | 792c9583-12a1-4c95-93a4-3206855d284f | OtherData1 | 0 |
+------------------+--------------------------------------+------------+--------------------------------------+
| 2 | 1022ffd3-afda-4e20-9d45-eec884bc2a50 | OtherData2 | 792c9583-12a1-4c95-93a4-3206855d284f |
+------------------+--------------------------------------+------------+--------------------------------------+
| 3 | 83729ad4-2564-4146-b451-00d82585bd96 | OtherData3 | 1022ffd3-afda-4e20-9d45-eec884bc2a50 |
+------------------+--------------------------------------+------------+--------------------------------------+
| 4 | d7197e87-d7d6-4175-8172-12656043a69d | OtherData4 | 83729ad4-2564-4146-b451-00d82585bd96 |
+------------------+--------------------------------------+------------+--------------------------------------+
| 5 | c1d3d751-ef34-4079-a73c-8952f93d17db | OtherData5 | d7197e87-d7d6-4175-8172-12656043a69d |
+------------------+--------------------------------------+------------+--------------------------------------+
If I were to insert the sixth row, I would retrieve the data of the last row using a query like this:
SELECT TOP (1) SequentialNumber, GUID FROM ChainedTable ORDER BY SequentialNumber DESC;
After that selection and before the insertion of the next row, an operation outside the database will take place.
That would suffice if it were ensured that only one entity uses the table at a time. However, if more entities can perform this same operation, there is a risk of a race condition: one entity can read the last row and, before it performs its insert, a second entity can read the same last row and insert first, so both new rows end up chained to the same parent.
At first, I thought of creating a new table with a value that indicates whether the main table is in use (the value can be null or the identifier of the process that currently has access to the table). In that solution, an entity would not start reading the last row if the value indicates that the table is being used by another process. However, one of the things that can happen in this scenario is that the process using the table dies without releasing it, blocking the whole system.
I'm sure this is a "typical" computer science problem and that there are well known solutions to implement this. Can anyone point me in the right direction, please?
I think using a transaction in SQL may solve the problem. For example, if you create a transaction that will add a new row, no one else will be able to run the same transaction until the first one is completed.
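A minimal T-SQL sketch of that idea, assuming SequentialNumber is an identity column and the GUID columns are stored as text (both assumptions); the UPDLOCK and HOLDLOCK hints keep the read of the current tail locked until COMMIT, so a concurrent writer waits instead of chaining to the same row:
BEGIN TRANSACTION;

DECLARE @LastGUID nvarchar(36);

-- Read the current tail of the chain and hold an update lock on it until COMMIT.
SELECT TOP (1) @LastGUID = GUID
FROM ChainedTable WITH (UPDLOCK, HOLDLOCK)
ORDER BY SequentialNumber DESC;

-- The operation outside the database would take place here (note that the lock is held meanwhile).

INSERT INTO ChainedTable (GUID, CustomData, LastGUID)
VALUES (CONVERT(nvarchar(36), NEWID()), 'OtherData6', @LastGUID);

COMMIT TRANSACTION;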

Too many languages in solr config

We have a Solr configuration based on Apache Solr 8.52.
We use the installation from the TYPO3 extension ext:solr 10.0.3.
In this way we have multiple (39) languages and multiple cores.
As we do not need most of the languages (we definitely need one, maybe two more), I tried to remove most of them by deleting (moving to another folder) all the configurations I identified as other languages, leaving only these folders and files in the solr folders:
server/
+-solr/
| +-configsets/
| | +-ext_solr_10_0_0/
| | +-conf/
| | | +-english/
| | | +-_schema_analysis_stopwords_english.json
| | | +-admin-extra.html
| | | :
| | | +-solrconfig.xml
| | +-typo3lib
| | +-solr-typo3-plugin-4.0.0.jar
| +-cores/
| | +-english/
| | +-core.properties
| +-data/
| | +-english/
: : :
I thought that after restarting the server it would only present one language and one core. This was correct.
But on startup it reported all the other languages as missing, like:
core_es: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Could not load conf for core core_es: Error loading schema resource spanish/schema.xml
Where does solr get this information about all these languages I don't need?
How can I avoid this long list of warnings?
First of all, it does not hurt to have those cores. As long as they are empty and not loaded, they do not take much RAM or CPU.
But if you still want to get rid of them, you need to do it correctly. Just moving a core's data directory does not mean the core is deleted, because the Solr server also needs to adjust its config files. The best way is to use curl like this:
curl 'http://localhost:8983/solr/admin/cores?action=UNLOAD&core=core_en&deleteInstanceDir=true'
That would remove the core and all its data.
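The unwanted language cores from the warnings can be unloaded the same way; for example, for the Spanish core (the name core_es is taken from the error message above):
curl 'http://localhost:8983/solr/admin/cores?action=UNLOAD&core=core_es&deleteInstanceDir=true'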

PostgreSQL + pgpool replication with miss balancing

I have a PostgreSQL master-slave replication setup with pgpool as a load balancer on the master server only. The replication is working fine and there is no delay in the process. The problem is that the master server is receiving more requests than the slave, even when I have configured a balance different from 50% for each server.
This is the pgpool show_pool_nodes output with backend weights M(1)-S(2):
 node_id | hostname    | port | status | lb_weight | role    | select_cnt | load_balance_node | replication_delay
---------+-------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | master-ip   | 9999 | up     | 0.333333  | primary | 56348331   | false             | 0
 1       | slave-ip    | 9999 | up     | 0.666667  | standby | 3691734    | true              | 0
As you can see, the master server is receiving more than 10 times as many requests as the slave.
This is the pgpool show_pool_nodes output with backend weights M(1)-S(5):
 node_id | hostname    | port | status | lb_weight | role    | select_cnt | load_balance_node | replication_delay
---------+-------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | master-ip   | 9999 | up     | 0.166667  | primary | 10542201   | false             | 0
 1       | slave-ip    | 9999 | up     | 0.833333  | standby | 849494     | true              | 0
The behavior is quite similar when I assign M(1)-S(1).
Now I wonder if I have misunderstood how pgpool works:
Pgpool only balances read queries (as write queries are always sent to the master).
The backend weight parameter is used to calculate the distribution only in load-balancing mode. The greater the value, the more likely the node is to be chosen, so a server with a greater lb_weight should be selected more often than others with lower values.
If I'm right, why is this happening?
Is there a way for me to actually get the distribution of SELECT queries (select_cnt) that I configured? My intention is to load the slave with read queries and leave the master only a "few" read queries, since it is already handling all the writes.
You are right about pgpool load balancing. There could be several reasons why this doesn't seem to work. For a start, notice that you have the same port number for both backends. Try configuring your backend connection settings as shown in the sample pgpool.conf: https://github.com/pgpool/pgpool2/blob/master/src/sample/pgpool.conf.sample (lines 66-87), where you also set the weights to your needs, and assign different port numbers to each backend.
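A hedged sketch of those backend entries in pgpool.conf, using the M(1)-S(5) weights from the question (the host names and PostgreSQL ports are placeholders for your setup):
backend_hostname0 = 'master-ip'
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = 'ALLOW_TO_FAILOVER'

backend_hostname1 = 'slave-ip'
backend_port1 = 5433
backend_weight1 = 5
backend_flag1 = 'ALLOW_TO_FAILOVER'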
Also check (assuming your running mode is master/slave):
load_balance_mode = on
master_slave_mode = on
-- changes require restart
There is a relevant FAQ entry, "It seems my pgpool-II does not do load balancing. Why?", here: https://www.pgpool.net/mediawiki/index.php/FAQ (if you are on pgpool version 4.1, also consider statement_level_load_balance). So far, I have assumed that the general conditions for load balancing (https://www.pgpool.net/docs/latest/en/html/runtime-config-load-balancing.html) are met.
You can try to adjust the following config in the pgpool.conf file:
1. WAL lag delay size
delay_threshold = 10000000
It is used to let pgpool know whether the slave's PostgreSQL WAL is too delayed to be used. Make it larger and more queries can be passed to the slave; make it smaller and more queries will go to the master.
Besides, the pgbench testing parameters are also key. Use the -C option so that a new connection is established for each transaction; otherwise one connection is used for the whole session.
Pgpool's load-balancing decision depends on a combination of parameters, not only a single one.
Here is a reference:
https://www.pgpool.net/docs/latest/en/html/runtime-config-load-balancing.html#GUC-LOAD-BALANCE-MODE
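For instance, a hedged pgbench invocation for a read-only test through pgpool with one connection per transaction (the host, port, user, and database name are placeholders):
pgbench -h pgpool-host -p 9999 -U postgres -C -S -c 10 -T 60 testdb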

Opendaylight Boron : Config Shard not getting created and Circuit Breaker Timed out

We are using ODL Boron SR2. We observe strange behavior: the "Config" shard does not get created when we start ODL in cluster mode on RHEL 6.9, and we see a Circuit Breaker Timed Out exception. However, the "Operational" shard is created without any issues. Due to the unavailability of the "Config" shard we are unable to persist anything in the "Config" tree. We checked in the JMX console and "Shards" is missing.
This is consistently reproducible on RHEL; however, it works on CentOS.
2018-04-04 08:00:38,396 | WARN | saction-29-31'}} | 168 - org.opendaylight.controller.config-manager - 0.5.2.Boron-SR2 | DeadlockMonitor$DeadlockMonitorRunnable | ModuleIdentifier{factoryName='runtime-generated-mapping', instanceName='runtime-mapping-singleton'} did not finish after 26697 ms
2018-04-04 08:00:40,690 | ERROR | lt-dispatcher-30 | 216 - com.typesafe.akka.slf4j - 2.4.7 | Slf4jLogger$$anonfun$receive$1$$anonfun$applyOrElse$1 | Failed to persist event type [org.opendaylight.controller.cluster.raft.persisted.UpdateElectionTerm] with sequence number [4] for persistenceId [member-2-shard-default-config].
akka.pattern.CircuitBreaker$$anon$1: Circuit Breaker Timed out.
This is an issue with akka persistence where it times out trying to write to the disk. See the discussion in https://lists.opendaylight.org/pipermail/controller-dev/2017-August/013781.html.
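If the underlying problem is just a slow disk, one hedged workaround is to raise the journal's circuit-breaker timeout in ODL's Akka configuration (in Boron this is typically configuration/initial/akka.conf; the file location and the 60s value are assumptions, while the keys are standard Akka persistence settings):
akka {
  persistence {
    journal-plugin-fallback {
      circuit-breaker {
        max-failures = 10
        call-timeout = 60s   # default is 10s; raised to tolerate slow disk writes
        reset-timeout = 30s
      }
    }
  }
}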

Watson Discovery Service Issue

(Screenshots in the original post: "Right Way - it's working" and "Wrong Way - isn't working how it should".)
I'd like your help with an issue. I'm using WDS (Watson Discovery Service), and I created a collection to which I uploaded several pieces of a manual. Once I did that, on the Conversation service I also created, I put some descriptions on the intents that Discovery should use. Now, when I try to match these descriptions on the Discovery Service, it does not recognize them unless I write exactly the same text in my test. Any suggestion about what I can do to fix it?
E.g. I uploaded a metadata TXT file with the following fields:
+---------------------+------------+-------------+-----------------------+---------+------+
| Document | DocumentID | Chapter | Session | Title | Page |
+---------------------+------------+-------------+-----------------------+---------+------+
| Instructions Manual | BR_1 | Maintenance | Long Period of Disuse | Chassis | 237 |
+---------------------+------------+-------------+-----------------------+---------+------+
Now, when I search on Discovery, I need to use exactly the word I put in the intent's description (Chassis). Otherwise Discovery does not find anything with a query like the one below:
metadata.Title:chas*|metadata.Chapter:chas*|metadata.Session:chas*
Any idea??
Please check whether the syntax is right or wrong by trying it in the Discovery tooling.
Sometimes we need the quotation marks escaped with a backslash.
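A hedged illustration, assuming the query is placed inside a JSON request body where the inner quotation marks have to be escaped (the field names are taken from the metadata above):
metadata.Title:\"Chassis\"|metadata.Chapter:\"Chassis\"|metadata.Session:\"Chassis\"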
