Does Hazelcast always block when hazelcast.initial.min.cluster.size is not reached? If not, in which situations does it not?
Details:
I use the following code to initialize Hazelcast:
Config cfg = new Config();
cfg.setProperty("hazelcast.initial.min.cluster.size",
        Integer.toString(minimumInitialMembersInHazelCluster)); // 2 in this case
cfg.getGroupConfig().setName(clusterName);
NetworkConfig network = cfg.getNetworkConfig();
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig()
        .addMember("192.168.0.1").addMember("192.168.0.2")
        .addMember("192.168.0.3").addMember("192.168.0.4")
        .addMember("192.168.0.5").addMember("192.168.0.6")
        .addMember("192.168.0.7")
        .setRequiredMember(null).setEnabled(true);
network.getInterfaces().setEnabled(true).addInterface("192.168.0.*");
join.getMulticastConfig().setMulticastTimeoutSeconds(MCSOCK_TIMEOUT / 100);
hazelInst = Hazelcast.newHazelcastInstance(cfg);
distrDischargedTTGs = hazelInst.getList(clusterName);
and get log messages like
debug: starting Hazel pullExternal from Hazelcluster with 1 members.
Does that definitely mean there was another member that joined and left already? From the log files of the other instance, it does not look like that is the case. Hence I wonder whether there are situations where hazelInst = Hazelcast.newHazelcastInstance(cfg); does not block even though it is the only instance in the Hazelcast cluster.
newHazelcastInstance blocks until the cluster has reached the required number of members.
See the code below for how it is implemented:
private static void awaitMinimalClusterSize(HazelcastInstanceImpl hazelcastInstance, Node node, boolean firstMember)
        throws InterruptedException {
    final int initialMinClusterSize = node.groupProperties.INITIAL_MIN_CLUSTER_SIZE.getInteger();
    while (node.getClusterService().getSize() < initialMinClusterSize) {
        try {
            hazelcastInstance.logger.info("HazelcastInstance waiting for cluster size of " + initialMinClusterSize);
            //noinspection BusyWait
            Thread.sleep(TimeUnit.SECONDS.toMillis(1));
        } catch (InterruptedException ignored) {
        }
    }
    if (initialMinClusterSize > 1) {
        if (firstMember) {
            node.partitionService.firstArrangement();
        } else {
            Thread.sleep(TimeUnit.SECONDS.toMillis(3));
        }
        hazelcastInstance.logger.info("HazelcastInstance starting after waiting for cluster size of "
                + initialMinClusterSize);
    }
}
If you set the logging to debug level, you may be able to see better what is happening. Member joins and leaves should already be visible at info level.
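For example, here is a minimal sketch of forcing more verbose Hazelcast output through JDK logging (hazelcast.logging.type is Hazelcast's logging selector property; the handler loop assumes the default JDK logging setup):

// route Hazelcast through java.util.logging (its default) and lower the threshold
cfg.setProperty("hazelcast.logging.type", "jdk");
java.util.logging.Logger hzLogger = java.util.logging.Logger.getLogger("com.hazelcast");
hzLogger.setLevel(java.util.logging.Level.FINEST);
// the root handlers must also pass fine-grained records through
for (java.util.logging.Handler h : java.util.logging.Logger.getLogger("").getHandlers()) {
    h.setLevel(java.util.logging.Level.FINEST);
}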
I have a large contract and I am in the process of splitting it into two. The goal is to separate out the functions that are common (and will be used by many other contracts), for efficiency.
One of these functions compares the items in the arrays "ownedSymbols" and "targetAssets". It produces a list "sellSymbols" of any items in "ownedSymbols" that are not in "targetAssets".
The code below works fine while "sellSymbols" is stored as a string[] in storage. As this function will become common, I need it to run in memory so the results aren't confused by calls from different contracts.
pragma solidity >0.8.0;

contract compareArrays {
    string[] public ownedSymbols = ["A","B","C"];
    string[] public targetAssets = ["A","B"];
    string[] sellSymbols;

    event sellListEvent(string[]);

    function sellList(string[] memory _ownedSymbols, string[] memory _targetAssetsList) internal {
        sellSymbols = _ownedSymbols;
        for (uint256 i = 0; i < _targetAssetsList.length; i++) {
            for (uint256 x = 0; x < sellSymbols.length; x++) {
                if (
                    keccak256(abi.encodePacked((sellSymbols[x]))) ==
                    keccak256(abi.encodePacked((_targetAssetsList[i])))
                ) {
                    if (x < sellSymbols.length) {
                        sellSymbols[x] = sellSymbols[sellSymbols.length - 1];
                        sellSymbols.pop();
                    } else {
                        delete sellSymbols;
                    }
                }
            }
        }
        emit sellListEvent(sellSymbols);
    }

    function runSellList() public {
        sellList(ownedSymbols, targetAssets);
    }
}
Ideally the function would run with "string[] memory sellSymbols"; however, this kicks back an error.
pragma solidity >0.8.0;

contract compareArrays {
    string[] public ownedSymbols = ["A","B","C"];
    string[] public targetAssets = ["A","B"];

    event sellListEvent(string[]);

    function sellList(string[] memory _ownedSymbols, string[] memory _targetAssetsList) internal {
        string[] memory sellSymbols = _ownedSymbols;
        for (uint256 i = 0; i < _targetAssetsList.length; i++) {
            for (uint256 x = 0; x < sellSymbols.length; x++) {
                if (
                    keccak256(abi.encodePacked((sellSymbols[x]))) ==
                    keccak256(abi.encodePacked((_targetAssetsList[i])))
                ) {
                    if (x < sellSymbols.length) {
                        sellSymbols[x] = sellSymbols[sellSymbols.length - 1];
                        sellSymbols.pop();
                    } else {
                        delete sellSymbols;
                    }
                }
            }
        }
        emit sellListEvent(sellSymbols);
    }

    function runSellList() public {
        sellList(ownedSymbols, targetAssets);
    }
}
The error:
TypeError: Member "pop" is not available in string memory[] memory outside of storage.
--> contracts/sellSymbols.sol:20:25:
|
20 | sellSymbols.pop();
| ^^^^^^^^^^^^^^^
Two questions from me:
Is there a way to do this in memory so that the function can be common (i.e. used by multiple contracts at the same time)?
Is there a better way? The approach above is expensive to run, but it is the only way I have been able to achieve it.
One final comment: I know this would be much easier/cheaper to run off-chain. That is not something I am willing to consider, as I want this project to be decentralized.
If you want to keep the existing system, the best solution is the swap-and-pop approach described here: https://stackoverflow.com/a/49054593/11628256. Note that the linked answer predates Solidity 0.6, where array length became read-only; with your >0.8.0 pragma, shrink the array with pop() instead of length--:
if (x < sellSymbols.length) {
    // overwrite the match with the last element, then shrink the array
    sellSymbols[x] = sellSymbols[sellSymbols.length - 1];
    sellSymbols.pop();
} else {
    delete sellSymbols;
}
If all you care about is the presence or absence of a particular asset (and not enumerating them), what you're going to want to do to really reduce gas costs is something called "lazy evaluation." Lazy evaluation means that instead of computing all results at once (like increasing all balances by 50% by iterating over an array), you modify the getters so that their return value reflects the operation (such as multiplying an internal multiplier variable by 1.5 and multiplying the original result of getBalance by that variable).
So, if this is the case, what you want to do is use something like the following function instead. (Mappings cannot be passed around in memory; they can only be handed to internal functions as storage references, and string parameters need an explicit data location in Solidity >0.8.0.)
function except(
    string memory _item,
    mapping(string => bool) storage _ownedSymbols,
    mapping(string => bool) storage _targetAssets
) internal view returns (bool) {
    return _ownedSymbols[_item] && !_targetAssets[_item];
}
<pet peeve>
Finally, I know you say you want this to be decentralized, but I really do feel the urge to say this: if this is a system that doesn't need to be decentralized, don't decentralize it! Decentralization is great for projects that other people rely on - for example, DNS or any sort of token.
From your variable names, it seems that this is probably some sort of system similar to a trading bot. The incentive is therefore on you to keep it running, as you are the one who gets the benefits. None of the problems that decentralization solves (censorship, conflict of interest, etc.) apply to your program, since the person running it is incentivized to keep it running and keep a copy of the program. It's cheaper for the user running it to not have security they don't need. You don't need a bank-grade vault to store a $1 bill!
</pet peeve>
I am building up some code to request multiple pieces of information from a database in order to build a timetable in my front end, involving multiple DB requests.
The problem is that in one particular request, where I am using SwiftKuery and DispatchGroups, I occasionally (but not always) receive an error message in Xcode. It cannot be reproduced on demand; it just sometimes happens.
Here is a snippet of my code:
var profWorkDaysBreak = [time_workbreaks]()
let groupServiceWorkDayBreaks = DispatchGroup()
...
/// WorkdaysBreak ENTER async call
// Unreliable code?
profWorkDays.forEach { workDay in
    groupServiceWorkDayBreaks.enter()
    time_workbreaks.getAll(weekDayId: workDay.id) { results, error in
        if let error = error {
            print(error)
        }
        if let results = results {
            profWorkDaysBreak.append(contentsOf: results) // The error happens here!
        }
        groupServiceWorkDayBreaks.leave()
    }
}
...
groupServiceWorkDayBreaks.wait()
The results and profWorkDaysBreak variables are the same, just sometimes I receive the message:
Fatal error: Insufficient space allocated to copy array contents
This stops the execution.
I assume that the loop might sometimes finish an earlier execution within the DispatchGroup, but that is the only idea I have...
Most likely this is caused by a race condition: you modify the array from multiple threads, and if two threads happen to try to alter the array at the same time, you get into problems.
Make sure you serialize access to the array; this should solve the problem. You can use a semaphore for that:
var profWorkDaysBreak = [time_workbreaks]()
let groupServiceWorkDayBreaks = DispatchGroup()
// value: 1 makes this a binary semaphore (a mutex); with value: 0
// the first wait() would block forever
let semaphore = DispatchSemaphore(value: 1)
...
profWorkDays.forEach { workDay in
    groupServiceWorkDayBreaks.enter()
    time_workbreaks.getAll(weekDayId: workDay.id) { results, error in
        if let error = error {
            print(error)
        }
        if let results = results {
            // acquire the lock, perform the non-thread-safe operation, release the lock
            semaphore.wait()
            profWorkDaysBreak.append(contentsOf: results) // The error happened here!
            semaphore.signal()
        }
        groupServiceWorkDayBreaks.leave()
    }
}
...
groupServiceWorkDayBreaks.wait()
The semaphore here acts like a mutex, allowing at most one thread at a time to operate on the array. I would also like to emphasize that the lock should be held for the least amount of time possible, so that the other threads don't have to wait too long.
Here is the only way I got my code running reliably so far:
I skipped append(contentsOf:) completely and just went with a forEach loop.
var profWorkDaysBreak = [time_workbreaks]()
let groupServiceWorkDayBreaks = DispatchGroup()
...
/// WorkdaysBreak ENTER async call
// Unreliable code?
profWorkDays.forEach { workDay in
    groupServiceWorkDayBreaks.enter()
    time_workbreaks.getAll(weekDayId: workDay.id) { results, error in
        if let error = error {
            print(error)
        }
        if let results = results {
            results.forEach { profWorkDayBreak in
                profWorkDaysBreak.append(profWorkDayBreak)
            }
            /*
            // Alternative causes the error!
            profWorkDaysBreak.append(contentsOf: results)
            */
        }
        groupServiceWorkDayBreaks.leave()
    }
}
...
groupServiceWorkDayBreaks.wait()
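Note that appending element by element still mutates the array from multiple threads, so this narrows the race window rather than closing it. A dedicated serial queue is another common way to actually serialize the writes; a minimal sketch, assuming the same time_workbreaks.getAll API:

var profWorkDaysBreak = [time_workbreaks]()
let groupServiceWorkDayBreaks = DispatchGroup()
// every mutation of profWorkDaysBreak is funneled through this one serial queue
let arrayAccessQueue = DispatchQueue(label: "profWorkDaysBreak.access")

profWorkDays.forEach { workDay in
    groupServiceWorkDayBreaks.enter()
    time_workbreaks.getAll(weekDayId: workDay.id) { results, error in
        if let error = error {
            print(error)
        }
        if let results = results {
            arrayAccessQueue.sync {
                profWorkDaysBreak.append(contentsOf: results)
            }
        }
        groupServiceWorkDayBreaks.leave()
    }
}
groupServiceWorkDayBreaks.wait()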
I'm trying to resolve a problem with the search bar. It works, but the problem is that if I press two keys almost at the same time, the app only searches for the words with the first key pressed.
Here are the logs:
In this one, it works when I press the P then R:
[EDT] 0:4:9,283 - p
[EDT] 0:4:9,348 - 10
[EDT] 0:4:9,660 - pr
[EDT] 0:4:9,722 - 3
The second one doesn't because I press P and R nearly at the same time:
[EDT] 0:4:35,237 - p
[EDT] 0:4:35,269 - pr
[EDT] 0:4:35,347 - 0
[EDT] 0:4:35,347 - 10
The logs here show the String searched and the result size. As you can see, in the first case the results come back before the next char is typed, while in the second case all the results come back after both chars have been typed.
The main problem is that in the second case, the results for the 'p' String are shown instead of those for 'pr'.
I'm using the search bar from the Toolbar API with addSearchCommand and an InfiniteContainer to show the result data.
Could it be a problem with the order in which the events from addSearchCommand are processed?
EDIT: Here is the client-side code. Server-side, it's just a simple REST service call which fetches the data from the database.
public static ArrayList<Patient> getSearchedPatient(int index, int amount, String word) {
    ArrayList<Patient> listPatient = null;
    Response reponse;
    try {
        reponse = RestManager.executeRequest(
                Rest.get(server + "/patients/search")
                        .queryParam("index", String.valueOf(index))
                        .queryParam("amount", String.valueOf(amount))
                        .queryParam("word", word),
                RequestResult.ENTITIES_LIST,
                Patient.class);
        listPatient = (ArrayList<Patient>) reponse.getResponseData();
        Log.p("" + listPatient.size());
    } catch (RestManagerException e) {
        LogError("", e);
    }
    return listPatient;
}
private static Response executeRequest(RequestBuilder req, RequestResult type, Class objectClass)
        throws RestManagerException {
    Response response = null;
    try {
        switch (type) {
            case BYTES:
                response = req.getAsBytes();
                break;
            case JSON_MAP:
                response = req.acceptJson().getAsJsonMap();
                break;
            case ENTITY:
                response = req.acceptJson().getAsProperties(objectClass);
                break;
            case ENTITIES_LIST:
                response = req.acceptJson().getAsPropertyList(objectClass);
                break;
            default:
            case STRING:
                response = req.getAsString();
                break;
        }
    } catch (Exception e) {
        log().error("Error executing the request", e);
        response = null;
    }
    if (response == null) {
        return null;
    }
    return response;
}
So the trick here is a simple one: don't make a request on every key press. Most users type fast enough to saturate your network connection speed, so you would see completion suggestions referring to things that are no longer relevant.
This is a non-trivial implementation which I discuss in depth in the Uber book, where such a feature is implemented.
The solution is to send a request after a delay, while caching responses to avoid double requests and ideally canceling requests in progress when applicable. The solution in the Uber book does all three; I'll try to cover just the basics in this mockup code. First you need fields for the timer and the current request. Ideally you would also have a Map containing cached data:
private UITimer delayedRequest;
private String currentSearch;
private Map<String, String> searchCache = new HashMap<>();
Then you need to bind a listener like this:
tb.addSearchCommand(e -> {
    String s = (String) e.getSource();
    if (s == null) {
        if (delayedRequest != null) {
            delayedRequest.cancel();
            delayedRequest = null;
        }
        return;
    }
    if (currentSearch != null && s.equals(currentSearch)) {
        return;
    }
    if (delayedRequest != null) {
        delayedRequest.cancel();
        delayedRequest = null;
    }
    currentSearch = s;
    delayedRequest = UITimer.timer(100, false, () -> {
        doSearchCode();
    });
});
I didn't include the usage of the cache here; you would need to check it within the search method and fill it in the result handling code. I also didn't implement canceling requests already in progress.
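For illustration, a minimal sketch of what the cache check inside doSearchCode() might look like (updateResults(..) and fetchFromServer(..) are hypothetical helpers, not Codename One APIs; the cache stores the raw response string):

private void doSearchCode() {
    final String query = currentSearch;
    String cached = searchCache.get(query);
    if (cached != null) {
        // cache hit: no network call needed
        updateResults(query, cached); // hypothetical helper that refreshes the InfiniteContainer
        return;
    }
    // cache miss: fetch, then store the response before showing it
    String response = fetchFromServer(query); // hypothetical wrapper around the REST call
    searchCache.put(query, response);
    // only show the results if the user hasn't typed something newer in the meantime
    if (query.equals(currentSearch)) {
        updateResults(query, response);
    }
}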
When trying to run the code below I'm getting javax.naming.OperationNotSupportedException with the message:
[LDAP: error code 12 - 00000057: LdapErr: DSID-0C09079A, comment: Error processing control, data 0, v2580].
The first page is successfully retrieved and the exception is thrown only at the second loop iteration.
public void pagedResults() {
    PagedResultsCookie cookie = null;
    SearchControls searchControls = new SearchControls();
    searchControls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    int page = 1;
    do {
        logger.info("Starting Page: " + page);
        PagedResultsDirContextProcessor processor = new PagedResultsDirContextProcessor(20, cookie);
        List<String> lastNames = ldapTemplate.search("", initialFilter.encode(), searchControls,
                UserMapper.USER_MAPPER_VNT, processor);
        for (String l : lastNames) {
            logger.info(l);
        }
        cookie = processor.getCookie();
        page = page + 1;
    } while (null != cookie.getCookie());
}
However, when I remove Spring LDAP and use the pure JNDI implementation below, it works!
try {
    LdapContext ctx = new InitialLdapContext(env, null);
    // Activate paged results
    int pageSize = 5;
    byte[] cookie = null;
    ctx.setRequestControls(new Control[] { new PagedResultsControl(pageSize, Control.CRITICAL) });
    int total;
    do {
        /* perform the search */
        NamingEnumeration results = ctx.search("",
                "(&(objectCategory=person)(objectClass=user)(SAMAccountName=vnt*))",
                searchCtls);
        /* for each entry print out name + all attrs and values */
        while (results != null && results.hasMore()) {
            SearchResult entry = (SearchResult) results.next();
            System.out.println(entry.getName());
        }
        // Examine the paged results control response
        Control[] controls = ctx.getResponseControls();
        if (controls != null) {
            for (int i = 0; i < controls.length; i++) {
                if (controls[i] instanceof PagedResultsResponseControl) {
                    PagedResultsResponseControl prrc = (PagedResultsResponseControl) controls[i];
                    total = prrc.getResultSize();
                    if (total != 0) {
                        System.out.println("***************** END-OF-PAGE "
                                + "(total : " + total + ") *****************\n");
                    } else {
                        System.out.println("***************** END-OF-PAGE "
                                + "(total: unknown) ***************\n");
                    }
                    cookie = prrc.getCookie();
                }
            }
        } else {
            System.out.println("No controls were sent from the server");
        }
        // Re-activate paged results
        ctx.setRequestControls(new Control[] { new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });
    } while (cookie != null);
    ctx.close();
} catch (NamingException e) {
    System.err.println("PagedSearch failed.");
    e.printStackTrace();
} catch (IOException ie) {
    System.err.println("PagedSearch failed.");
    ie.printStackTrace();
}
Any hints?
The bad thing about LDAP paged results is that they only work if the same underlying connection is used for all requests. The internals of Spring LDAP get a new connection for each LdapTemplate operation, unless you use the transactional support.
The easiest way to make sure the same connection is used for a sequence of LdapTemplate operations is to use the transaction support, i.e. configure transactions for Spring LDAP and wrap the target method with a @Transactional annotation.
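A minimal sketch of that wiring, assuming Spring LDAP's compensating transaction support (bean names are illustrative):

// assumes an existing ContextSource bean
@Bean
public ContextSourceTransactionManager ldapTransactionManager(ContextSource contextSource) {
    ContextSourceTransactionManager tm = new ContextSourceTransactionManager();
    tm.setContextSource(contextSource);
    return tm;
}

// with the transaction manager in place, every LdapTemplate call inside
// this method runs on the same DirContext, so the paging cookie stays valid
@Transactional
public void pagedResults() {
    // ... the paged search loop from above ...
}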
I managed to make my example above work using the SingleContextSource.doWithSingleContext approach.
However, my scenario is different: my app is service-oriented, and the paged results as well as the cookie should be sent to an external client so that it can decide whether or not to request the next pages.
So as far as I can tell, spring-ldap does not support such a case. I must use a pure implementation so that I can keep track of the underlying connection across requests. Transaction support could help, as could SingleContextSource, but not across different requests.
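For reference, a minimal sketch of the doWithSingleContext approach mentioned above (contextSource is the configured ContextSource; the filter, controls, and mapper are the ones from the example earlier):

List<String> names = SingleContextSource.doWithSingleContext(
        contextSource, new LdapOperationsCallback<List<String>>() {
    @Override
    public List<String> doWithLdapOperations(LdapOperations operations) {
        // every search issued through `operations` reuses the same underlying
        // DirContext, so the paging cookie remains valid between pages
        List<String> result = new ArrayList<>();
        PagedResultsCookie cookie = null;
        do {
            PagedResultsDirContextProcessor processor =
                    new PagedResultsDirContextProcessor(20, cookie);
            result.addAll(operations.search("", initialFilter.encode(), searchControls,
                    UserMapper.USER_MAPPER_VNT, processor));
            cookie = processor.getCookie();
        } while (cookie.getCookie() != null);
        return result;
    }
});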
@marthursson: is there any plan in Spring LDAP to support this in the future?
I found I could use your first example (Spring) as long as I set the ignorePartialResultException property to true in my ldapTemplate configuration and put @Transactional on my method as suggested.
You can replace the ldapTemplate DirContext like this:
ldapTemplate.setContextSource(new SingleContextSource(ldapContextSource().getReadWriteContext()));
Note that this pins every subsequent LdapTemplate operation to that single connection, which is exactly what keeps the paging cookie valid across calls.
The question is: does ds.put(employee) happen in a transaction? Or does the outer transaction get erased/overridden by the transaction in saveRecord(..)?
Once an error is thrown at the line datastore.put(..) at some point in the for-loop (let's say at i == 5), will the previous puts originating from the same line get rolled back?
What about the puts happening in saveRecord(..)? I suppose those will not get rolled back.
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Transaction txn = datastore.beginTransaction();
try {
    for (int i = 0; i < 10; i++) {
        Key employeeKey = KeyFactory.createKey("Employee", "Joe");
        Entity employee = datastore.get(employeeKey);
        employee.setProperty("vacationDays", 10);
        datastore.put(employee);
        Entity employeeRecord = createRecord("record", employeeKey);
        saveRecord(employeeRecord);
    }
    txn.commit();
} finally {
    if (txn.isActive()) {
        txn.rollback();
    }
}

public void saveRecord(Entity entity) {
    datastore.beginTransaction();
    try {
        // do some logic in here, delete activity and commit txn
        datastore.put(entity);
    } finally {
        if (datastore.getCurrentTransaction().isActive()) {
            datastore.getCurrentTransaction().rollback();
        }
    }
}
OK, I'll assume you are using the low-level Datastore API. Note that getTransaction() does not exist; I'll assume you meant datastoreService.getCurrentTransaction().
DatastoreService.beginTransaction() returns a Transaction, which is considered the current transaction on the same thread until you call beginTransaction() again. Since you call beginTransaction() in a loop inside the "outer" transaction, it breaks your "outer" code: after the loop finishes, ds.getCurrentTransaction() no longer returns the same transaction. Also, put() implicitly uses the current transaction.
So first you must fix the outer code to keep a reference to the transaction, as shown in this example:
public void put(EventPlan eventPlan) {
    Transaction txn = ds.beginTransaction();
    try {
        for (final Activity activity : eventPlan.getActivities()) {
            // IMPORTANT - pass the transaction along and use it in the helper,
            // e.g. ds.put(txn, entity), so each put joins this transaction
            // (I assume save(..) is some internal helper method)
            save(txn, activity, getPlanKey(eventPlan));
        }
        txn.commit();
    } finally {
        if (txn.isActive()) {
            txn.rollback();
        }
    }
}
Now on to the questions:
Yes, because all puts are now part of the same transaction (after you fix the code) and you call txn.rollback() on it in case of errors.
No, of course not: they are part of different transactions.
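Applying the same rule to the inner helper, a minimal sketch of a saveRecord(..) that manages its own transaction explicitly instead of going through getCurrentTransaction() (it still commits independently of the outer transaction):

public void saveRecord(Entity entity) {
    // keep a reference instead of relying on getCurrentTransaction()
    Transaction txn = datastore.beginTransaction();
    try {
        datastore.put(txn, entity); // explicitly joins this transaction
        txn.commit();
    } finally {
        if (txn.isActive()) {
            txn.rollback();
        }
    }
}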