How to handle keys pressed almost at the same time? - codenameone

I'm trying to resolve a problem with the search bar. It works, but if I press two keys almost at the same time, the app only shows search results for the first key pressed.
Here are the logs:
In this first one it works: I press P, then R:
[EDT] 0:4:9,283 - p
[EDT] 0:4:9,348 - 10
[EDT] 0:4:9,660 - pr
[EDT] 0:4:9,722 - 3
The second one doesn't work, because I press P and R nearly at the same time:
[EDT] 0:4:35,237 - p
[EDT] 0:4:35,269 - pr
[EDT] 0:4:35,347 - 0
[EDT] 0:4:35,347 - 10
The logs show the String searched and the result size. As you can see, in the first case the results come back before the next char is typed, while in the second case both sets of results arrive after the two chars have been typed.
The main problem is that in the second case the results for the 'p' String are shown instead of those for 'pr'.
I'm using the searchbar from the Toolbar API with addSearchCommand and an InfiniteContainer to show result data.
Could it be a problem with the order in which the events from addSearchCommand are processed?
EDIT: Here is the client-side code. Server side it's just a simple REST service call which fetches the data from the database.
public static ArrayList<Patient> getSearchedPatient(int index, int amount, String word)
{
    ArrayList<Patient> listPatient = null;
    Response response;
    try {
        response = RestManager.executeRequest(
                Rest.get(server + "/patients/search")
                        .queryParam("index", String.valueOf(index))
                        .queryParam("amount", String.valueOf(amount))
                        .queryParam("word", word),
                RequestResult.ENTITIES_LIST,
                Patient.class);
        listPatient = (ArrayList<Patient>) response.getResponseData();
        Log.p("" + listPatient.size());
    } catch (RestManagerException e) {
        LogError("", e);
    }
    return listPatient;
}
private static Response executeRequest(RequestBuilder req, RequestResult type, Class objectClass) throws RestManagerException
{
    Response response = null;
    try {
        switch (type) {
            case BYTES:
                response = req.getAsBytes();
                break;
            case JSON_MAP:
                response = req.acceptJson().getAsJsonMap();
                break;
            case ENTITY:
                response = req.acceptJson().getAsProperties(objectClass);
                break;
            case ENTITIES_LIST:
                response = req.acceptJson().getAsPropertyList(objectClass);
                break;
            default:
            case STRING:
                response = req.getAsString();
                break;
        }
    } catch (Exception e) {
        log().error("Error executing the request", e);
        response = null;
    }
    return response;
}

So the trick here is a simple one: don't make a request on every key press. Most users type fast enough to saturate your network connection speed, so you would see completion suggestions referring to queries that are no longer relevant.
This is a non-trivial implementation which I discuss in depth in the Uber book, where such a feature is implemented.
The solution is to send the request after a short delay, while caching responses to avoid duplicate requests, and ideally canceling a request that is still in progress when applicable. The solution in the Uber book does all three; I'll try to cover just the basics in this mockup code. First you need a field for the timer and the current request. Ideally you would also have a Map containing cached data:
private UITimer delayedRequest;
private String currentSearch;
private Map<String, String> searchCache = new HashMap<>();
Then you need to bind a listener like this:
tb.addSearchCommand(e -> {
    String s = (String)e.getSource();
    if(s == null) {
        // search was closed, cancel any pending request
        if(delayedRequest != null) {
            delayedRequest.cancel();
            delayedRequest = null;
        }
        return;
    }
    if(currentSearch != null && s.equals(currentSearch)) {
        return;
    }
    if(delayedRequest != null) {
        delayedRequest.cancel();
        delayedRequest = null;
    }
    currentSearch = s;
    delayedRequest = UITimer.timer(100, false, () -> {
        doSearchCode();
    });
});
I didn't include here the usage of the cache, which you would need to check within the search method and fill in from the result code. I also didn't implement canceling requests that are already in progress.
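To give an idea of the cache part, here is a minimal sketch of what the hypothetical doSearchCode() could look like. None of this is the book's code: the cache is assumed to map the query string to its result list (so its type would be Map<String, ArrayList<Patient>> rather than the Map<String, String> placeholder above), showResults() stands in for whatever refreshes your InfiniteContainer, and getSearchedPatient() is the method from the question:
private void doSearchCode() {
    final String query = currentSearch;
    ArrayList<Patient> cached = searchCache.get(query);
    if (cached != null) {
        showResults(cached); // hypothetical method that refreshes the InfiniteContainer
        return;
    }
    // fetch off the EDT, then hop back onto it to update the UI
    Display.getInstance().scheduleBackgroundTask(() -> {
        ArrayList<Patient> results = getSearchedPatient(0, 10, query);
        Display.getInstance().callSerially(() -> {
            // ignore the response if the user has typed something else since
            if (query.equals(currentSearch) && results != null) {
                searchCache.put(query, results);
                showResults(results);
            }
        });
    });
}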

Related

Dart - HTTPClient download file to string

In the Flutter/Dart app that I am currently working on I need to download large files from my servers. However, instead of storing the file in local storage, what I need to do is parse its contents and consume them one-off. I thought the best way to accomplish this was by implementing my own StreamConsumer and overriding the relevant methods. Here is what I have done thus far:
import 'dart:io';
import 'dart:async';

class Accumulator extends StreamConsumer<List<int>>
{
  String text = '';
  @override
  Future<void> addStream(Stream<List<int>> s) async
  {
    print('Adding');
    //print(s.length);
    return;
  }
  @override
  Future<dynamic> close() async
  {
    print('closed');
    return Future.value(text);
  }
}
Future<String> fileFetch() async
{
  String url = 'https://file.io/bse4moAYc7gW';
  final HttpClientRequest request = await HttpClient().getUrl(Uri.parse(url));
  final HttpClientResponse response = await request.close();
  return await response.pipe(Accumulator());
}

Future<void> simpleFetch() async
{
  String url = 'https://file.io/bse4moAYc7gW';
  final HttpClientRequest request = await HttpClient().getUrl(Uri.parse(url));
  final HttpClientResponse response = await request.close();
  await response.pipe(File('sample.txt').openWrite());
  print('Simple done!!');
}

void main() async
{
  print('Starting');
  await simpleFetch();
  String text = await fileFetch();
  print('Finished! $text');
}
When I run this in VSCode here is the output I get:
Starting
Simple done!!   //the contents of the file at https://file.io/bse4moAYc7gW are duly saved in sample.txt
Adding   //clearly addStream is being called
Instance of 'Future<int>'   //I had expected to see the length of the available data here
closed   //close is clearly being called BUT
Finished!   //back in main()
My understanding of the underlying issues here is still rather limited. My expectation was that I would use addStream to accumulate the contents being downloaded until there is nothing more to download, at which point close would be called and the program would exit.
Why is addStream showing Instance of 'Future<int>' rather than the length of the available content?
Although the VSCode debug console does display exited, this happens several seconds after closed is displayed. I thought this might be an issue with having to call super.close() but not so. What am I doing wrong here?
I was going to delete this question but decided to leave it here with an answer for the benefit of anyone else trying to do something similar.
The key point to note is that the call to Accumulator.addStream does just that: it furnishes a stream to be listened to, not actual data to be read. What you do next is this:
void whenData(List<int> data)
{
  // you will typically get a sequence of one or more bytes here
  for(int value in data)
  {
    // accumulate the incoming data, e.g.
    text += String.fromCharCode(value);
  }
}

void whenDone()
{
  // now that you have all the file data *accumulated*, do what you like with it
}

@override
Future<void> addStream(Stream<List<int>> s) async
{
  // process each chunk and wait until the stream is done; without the
  // await, close() could run before all the data has been received
  await s.forEach(whenData);
  whenDone();
  // errors will surface through the awaited future; alternatively use
  // s.listen(whenData, ...) with an `onError` handler
}

Codename One EasyThread implementation that repeats a runnable if its result is false

Note for the readers: this question is specific to Codename One only.
I'm developing an app that needs some initial data from a server to run properly. The first Form shown doesn't need this data and there is also a splash screen on the first run, so if the Internet connection is good there is enough time to retrieve the data... but the Internet connection can be slow or absent.
I have in the init a call to this method:
private void getStartData() {
    Runnable getBootData = () -> {
        if (serverAPI.getSomething() && serverAPI.getXXX() && ...) {
            isAllDataFetched = true;
        } else {
            Log.p("Connection ERROR in fetching initial data");
        }
    };
    EasyThread appInfo = EasyThread.start("APPINFO");
    appInfo.run(getBootData);
}
Each serverAPI method in this example is a synchronous method that returns true on success, false otherwise. My question is how to change this EasyThread so that it repeats all the calls to (serverAPI.getSomething() && serverAPI.getXXX() && ...) after one second if the result is false, and again after another second and so on, until the result is true.
I don't want to show an error or an alert to the user: I'll show an alert only if the static boolean isAllDataFetched is false when the requested data is strictly necessary.
I read the documentation of EasyThread and of Runnable carefully, but I didn't understand how to handle this use case.
Since this runs on its own thread you can easily use Thread.sleep(1000), or more simply Util.sleep(1000), which just swallows the InterruptedException. So something like this would work:
while(!isAllDataFetched) {
    if (serverAPI.getSomething() && serverAPI.getXXX() && ...) {
        isAllDataFetched = true;
    } else {
        Log.p("Connection ERROR in fetching initial data");
        Util.sleep(1000);
    }
}
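Folded back into the original getStartData() this might look like the following sketch (the serverAPI calls and the isAllDataFetched flag are the ones from the question, with the elided calls omitted):
private void getStartData() {
    Runnable getBootData = () -> {
        // keep retrying once per second until all the data is fetched
        while (!isAllDataFetched) {
            if (serverAPI.getSomething() && serverAPI.getXXX()) {
                isAllDataFetched = true;
            } else {
                Log.p("Connection ERROR in fetching initial data");
                Util.sleep(1000); // wait a second before the next attempt
            }
        }
    };
    EasyThread appInfo = EasyThread.start("APPINFO");
    appInfo.run(getBootData);
}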

Spring LDAP AD paging support not working - LDAP: error code 12 - 00000057: LdapErr: DSID-0C09079A

When trying to run the code below I'm getting javax.naming.OperationNotSupportedException with the message:
[LDAP: error code 12 - 00000057: LdapErr: DSID-0C09079A, comment: Error processing control, data 0, v2580].
The first page is successfully retrieved and the exception is thrown only at the second loop iteration.
public void pagedResults() {
    PagedResultsCookie cookie = null;
    SearchControls searchControls = new SearchControls();
    searchControls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    int page = 1;
    do {
        logger.info("Starting Page: " + page);
        PagedResultsDirContextProcessor processor = new PagedResultsDirContextProcessor(20, cookie);
        List<String> lastNames = ldapTemplate.search("", initialFilter.encode(), searchControls, UserMapper.USER_MAPPER_VNT, processor);
        for (String l : lastNames) {
            logger.info(l);
        }
        cookie = processor.getCookie();
        page = page + 1;
    } while (null != cookie.getCookie());
}
However, when I remove Spring LDAP and use the pure JNDI implementation below, it works!
try {
    LdapContext ctx = new InitialLdapContext(env, null);
    // Activate paged results
    int pageSize = 5;
    byte[] cookie = null;
    ctx.setRequestControls(new Control[] { new PagedResultsControl(pageSize, Control.CRITICAL) });
    int total;
    do {
        /* perform the search */
        NamingEnumeration results = ctx.search("",
                "(&(objectCategory=person)(objectClass=user)(SAMAccountName=vnt*))",
                searchCtls);
        /* for each entry print out name + all attrs and values */
        while (results != null && results.hasMore()) {
            SearchResult entry = (SearchResult) results.next();
            System.out.println(entry.getName());
        }
        // Examine the paged results control response
        Control[] controls = ctx.getResponseControls();
        if (controls != null) {
            for (int i = 0; i < controls.length; i++) {
                if (controls[i] instanceof PagedResultsResponseControl) {
                    PagedResultsResponseControl prrc = (PagedResultsResponseControl) controls[i];
                    total = prrc.getResultSize();
                    if (total != 0) {
                        System.out.println("***************** END-OF-PAGE "
                                + "(total : " + total
                                + ") *****************\n");
                    } else {
                        System.out.println("***************** END-OF-PAGE "
                                + "(total: unknown) ***************\n");
                    }
                    cookie = prrc.getCookie();
                }
            }
        } else {
            System.out.println("No controls were sent from the server");
        }
        // Re-activate paged results
        ctx.setRequestControls(new Control[] { new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });
    } while (cookie != null);
    ctx.close();
} catch (NamingException e) {
    System.err.println("PagedSearch failed.");
    e.printStackTrace();
} catch (IOException ie) {
    System.err.println("PagedSearch failed.");
    ie.printStackTrace();
}
Any hints?
The bad thing about LDAP paged results is that they only work if the same underlying connection is used for all requests. The internals of Spring LDAP get a new connection for each LdapTemplate operation, unless you use the transactional support.
The easiest way to make sure the same connection is used for a sequence of LdapTemplate operations is to use the transaction support, i.e. configure transactions for Spring LDAP and annotate the target method with @Transactional.
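In outline, and only as a sketch (it assumes transactions are configured for Spring LDAP, e.g. with a ContextSourceTransactionManager around the same ContextSource the template uses):
@Transactional // every LdapTemplate call inside shares one connection
public void pagedResults() {
    // the paging loop from the question, unchanged
}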
I managed to make my example above work using the SingleContextSource.doWithSingleContext approach.
However my scenario is different: my app is service oriented, and the paged results as well as the cookie should be sent to an external client so that it can decide whether to request the next pages or not.
So as far as I can tell, spring-ldap does not support such a case. I must use the pure implementation so that I can keep track of the underlying connection across requests. Transaction support could help, as could SingleContextSource, but not across different requests.
@marthursson
Is there any plan in spring-ldap for such support in the future?
I found I could use your first example (Spring) as long as I set the ignorePartialResultException property to true in my ldapTemplate configuration and put @Transactional on my method as suggested.
You can make ldapTemplate reuse a single DirContext like this:
ldapTemplate.setContextSource(new SingleContextSource(ldapContextSource().getReadWriteContext()));
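For completeness, here is a sketch (assuming Spring LDAP 2.x) of the doWithSingleContext approach mentioned above; ldapContextSource(), initialFilter and UserMapper.USER_MAPPER_VNT are the names from the question:
// runs the whole paging loop against one DirContext, so the
// paged-results cookie stays valid across iterations
SingleContextSource.doWithSingleContext(ldapContextSource(), operations -> {
    SearchControls searchControls = new SearchControls();
    searchControls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    PagedResultsCookie cookie = null;
    do {
        PagedResultsDirContextProcessor processor = new PagedResultsDirContextProcessor(20, cookie);
        List<String> lastNames = operations.search("", initialFilter.encode(),
                searchControls, UserMapper.USER_MAPPER_VNT, processor);
        lastNames.forEach(System.out::println);
        cookie = processor.getCookie();
    } while (cookie.getCookie() != null);
    return null;
});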

Behavior of initial.min.cluster.size

Does Hazelcast always block when initial.min.cluster.size is not reached? If not, in which situations does it not?
Details:
I use the following code to initialize Hazelcast:
Config cfg = new Config();
cfg.setProperty("hazelcast.initial.min.cluster.size",
        Integer.toString(minimumInitialMembersInHazelCluster)); // 2 in this case
cfg.getGroupConfig().setName(clusterName);
NetworkConfig network = cfg.getNetworkConfig();
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().addMember("192.168.0.1").addMember("192.168.0.2")
        .addMember("192.168.0.3").addMember("192.168.0.4")
        .addMember("192.168.0.5").addMember("192.168.0.6")
        .addMember("192.168.0.7").setRequiredMember(null).setEnabled(true);
network.getInterfaces().setEnabled(true).addInterface("192.168.0.*");
join.getMulticastConfig().setMulticastTimeoutSeconds(MCSOCK_TIMEOUT / 100);
hazelInst = Hazelcast.newHazelcastInstance(cfg);
distrDischargedTTGs = hazelInst.getList(clusterName);
and get log messages like
debug: starting Hazel pullExternal from Hazelcluster with 1 members.
Does that definitely mean there was another member that joined and left already? It does not look like that is the case from the log files of the other instance. Hence I wonder whether there are situations where hazelInst = Hazelcast.newHazelcastInstance(cfg); does not block even though it is the only instance in the Hazelcast cluster.
newHazelcastInstance blocks till the cluster has the required number of members.
See the code below for how it is implemented:
private static void awaitMinimalClusterSize(HazelcastInstanceImpl hazelcastInstance, Node node, boolean firstMember)
        throws InterruptedException {
    final int initialMinClusterSize = node.groupProperties.INITIAL_MIN_CLUSTER_SIZE.getInteger();
    while (node.getClusterService().getSize() < initialMinClusterSize) {
        try {
            hazelcastInstance.logger.info("HazelcastInstance waiting for cluster size of " + initialMinClusterSize);
            //noinspection BusyWait
            Thread.sleep(TimeUnit.SECONDS.toMillis(1));
        } catch (InterruptedException ignored) {
        }
    }
    if (initialMinClusterSize > 1) {
        if (firstMember) {
            node.partitionService.firstArrangement();
        } else {
            Thread.sleep(TimeUnit.SECONDS.toMillis(3));
        }
        hazelcastInstance.logger.info("HazelcastInstance starting after waiting for cluster size of "
                + initialMinClusterSize);
    }
}
If you set the logging to debug then perhaps you can see better what is happening. Members joining and leaving should already be visible at the info level.
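As a quick way to observe the blocking behaviour, here is a minimal standalone sketch (not from the question's codebase): the first newHazelcastInstance() call only returns once a second member joins.
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;

public class MinClusterSizeDemo {
    public static void main(String[] args) throws InterruptedException {
        Config cfg = new Config();
        cfg.setProperty("hazelcast.initial.min.cluster.size", "2");

        Thread first = new Thread(() -> {
            // blocks here until the cluster reaches two members
            Hazelcast.newHazelcastInstance(cfg);
            System.out.println("first member unblocked");
        });
        first.start();

        Thread.sleep(5000); // the first thread stays blocked during this pause
        Hazelcast.newHazelcastInstance(cfg); // second member joins; both proceed
        first.join();
    }
}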

Database problem: how to diagnose and fix the problem?

I created an application which stores values into the database and retrieves the stored data. When running the application normally everything seems to work fine (the values are stored and retrieved successfully), but when I run it in debug mode the process throws an IllegalStateException, and so far I haven't found the cause.
The method which retrieves an object Recording is the following:
public Recording getRecording(String filename) {
    Recording recording = null;
    String where = RECORDING_FILENAME + "='" + filename + "'";
    Log.v(TAG, "retrieving recording: filename = " + filename);
    try {
        cursor = db.query(DATABASE_TABLE_RECORDINGS, new String[]{RECORDING_FILENAME, RECORDING_TITLE, RECORDING_TAGS, RECORDING_PRIVACY_LEVEL, RECORDING_LOCATION, RECORDING_GEO_TAGS, RECORDING_GEO_TAGGING_ENABLED, RECORDING_TIME_SECONDS, RECORDING_SELECTED_COMMUNITY}, where, null, null, null, null);
        if (cursor.getCount() > 0) {
            cursor.moveToFirst();
            //String filename = c.getString(0);
            String title = cursor.getString(1);
            String tags = cursor.getString(2);
            int privacyLevel = cursor.getInt(3);
            String location = cursor.getString(4);
            String geoTags = cursor.getString(5);
            int iGeoTaggingEnabled = cursor.getInt(6);
            String recordingTime = cursor.getString(7);
            String communityID = cursor.getString(8);
            cursor.close();
            recording = new Recording(filename, title, tags, privacyLevel, location, geoTags, iGeoTaggingEnabled, recordingTime, communityID);
        }
    }
    catch (SQLException e) {
        String msg = e.getMessage();
        Log.w(TAG, msg);
        recording = null;
    }
    return recording;
}
and it is called from another class (Settings):
private Recording getRecording(String filename) {
    dbAdapter = dbAdapter.open();
    Recording recording = dbAdapter.getRecording(filename);
    dbAdapter.close();
    return recording;
}
While stepping through the code above everything works fine, but then I notice an exception in another thread:
(screenshot: http://img509.imageshack.us/img509/862/illegalstateexception.jpg)
and I know neither the possible cause of this exception nor how to debug the code from that thread to diagnose it.
I would be very thankful if anyone knew what the possible issue is here.
Thank you!
Looks like that cursor.close() is inside an "if" - that's when SQLiteCursor.finalize() will throw an IllegalStateException (I googled for it). You might be getting an empty result set, for instance, if some other process/thread didn't have the time to commit.
Close it always, even if its result set is empty.
I'd also advise you to access fields by names, not indices, for future compatibility, and to do both close()s in finally{} blocks.
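Put together, a sketch of getRecording() along those lines, reusing the question's constants: the cursor is closed in a finally{} block, columns are read by name, and the filename is passed as a selection argument instead of being concatenated into the WHERE clause:
public Recording getRecording(String filename) {
    Recording recording = null;
    Cursor cursor = null;
    try {
        cursor = db.query(DATABASE_TABLE_RECORDINGS,
                new String[]{RECORDING_FILENAME, RECORDING_TITLE, RECORDING_TAGS,
                        RECORDING_PRIVACY_LEVEL, RECORDING_LOCATION, RECORDING_GEO_TAGS,
                        RECORDING_GEO_TAGGING_ENABLED, RECORDING_TIME_SECONDS,
                        RECORDING_SELECTED_COMMUNITY},
                RECORDING_FILENAME + "=?", new String[]{filename}, null, null, null);
        if (cursor.moveToFirst()) {
            recording = new Recording(filename,
                    cursor.getString(cursor.getColumnIndexOrThrow(RECORDING_TITLE)),
                    cursor.getString(cursor.getColumnIndexOrThrow(RECORDING_TAGS)),
                    cursor.getInt(cursor.getColumnIndexOrThrow(RECORDING_PRIVACY_LEVEL)),
                    cursor.getString(cursor.getColumnIndexOrThrow(RECORDING_LOCATION)),
                    cursor.getString(cursor.getColumnIndexOrThrow(RECORDING_GEO_TAGS)),
                    cursor.getInt(cursor.getColumnIndexOrThrow(RECORDING_GEO_TAGGING_ENABLED)),
                    cursor.getString(cursor.getColumnIndexOrThrow(RECORDING_TIME_SECONDS)),
                    cursor.getString(cursor.getColumnIndexOrThrow(RECORDING_SELECTED_COMMUNITY)));
        }
    } catch (SQLException e) {
        Log.w(TAG, e.getMessage());
    } finally {
        if (cursor != null) {
            cursor.close(); // always close, even when the result set is empty
        }
    }
    return recording;
}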
