After reading the Throttling documentation https://docs.developer.amazonservices.com/en_US/products/Products_Throttling.html and https://docs.developer.amazonservices.com/en_US/dev_guide/DG_Throttling.html , I've started honoring the quotaRemaining and the quotaResetsAt response headers so that I don't go beyond the quota limit. However, whenever I fire a few requests in quick succession, I get the following exception.
The documentation doesn't mention anything about burst limits. It talks about a maximum request quota, but I don't know how that applies to my case. I'm invoking the ListMatchingProducts API.
Caused by: com.amazonservices.mws.client.MwsException: Request is throttled
at com.amazonservices.mws.client.MwsAQCall.invoke(MwsAQCall.java:312)
at com.amazonservices.mws.client.MwsConnection.call(MwsConnection.java:422)
... 19 more
I guess I figured it out.
ListMatchingProducts mentions that the Maximum Request Quota is 20. Practically this means that you can fire at most 20 requests in quick succession, but after that you must wait until the Restore Rate "replenishes" your request "credits" (i.e. in my case 1 request every 5 seconds).
The Restore Rate then refills the quota, one request every 5 seconds in my case, up to the maximum of 20 requests. The following code worked for me...
class Client {
    // Stay one below the documented maximum request quota of 20, as a safety margin
    private final int maxRequestQuota = 19
    private final Semaphore maximumRequestQuotaSemaphore = new Semaphore(maxRequestQuota)
    private volatile boolean done = false

    Client() {
        new EveryFiveSecondRefiller().start()
    }

    ListMatchingProductsResponse fetch(String searchString) {
        // Blocks until a permit is available, so bursts never exceed the quota
        maximumRequestQuotaSemaphore.acquire()
        // .....
    }

    class EveryFiveSecondRefiller extends Thread {
        @Override
        void run() {
            while (!done()) {
                int availablePermits = maximumRequestQuotaSemaphore.availablePermits()
                if (availablePermits == maxRequestQuota) {
                    log.debug("Max permits reached. Waiting for 5 seconds")
                    sleep(5000)
                    continue
                }
                log.debug("Releasing a single permit. Current available permits are $availablePermits")
                maximumRequestQuotaSemaphore.release()
                sleep(5000)
            }
        }

        boolean done() {
            done
        }
    }

    void close() {
        done = true
    }
}
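For completeness, a minimal usage sketch (searchStrings is a hypothetical list of search terms; the elided fetch body is assumed to actually perform the MWS call):
def client = new Client()
try {
    searchStrings.each { term ->
        // Each call blocks until a permit is free, so bursts never exceed the quota
        ListMatchingProductsResponse response = client.fetch(term)
        // ... process response ...
    }
} finally {
    client.close()
}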
Note for readers: this question is specific to Codename One.
I'm developing an app that needs some initial data from a server to run properly. The first Form shown doesn't need this data, and there is also a splash screen on the first run, so if the Internet connection is good there is enough time to retrieve the data... but the connection can be slow or absent.
In the init I have a call to this method:
private void getStartData() {
    Runnable getBootData = () -> {
        if (serverAPI.getSomething() && serverAPI.getXXX() && ...) {
            isAllDataFetched = true;
        } else {
            Log.p("Connection ERROR in fetching initial data");
        }
    };
    EasyThread appInfo = EasyThread.start("APPINFO");
    appInfo.run(getBootData);
}
Each serverAPI method in this example is a synchronous method that returns true on success, false otherwise. My question is how to change this EasyThread so that it repeats all the calls to (serverAPI.getSomething() && serverAPI.getXXX() && ...) after one second if the result is false, and again after another second and so on, until the result is true.
I don't want to show an error or an alert to the user: I'll show an alert only if the static boolean isAllDataFetched is still false when the requested data is strictly necessary.
I tried to read the documentation of EasyThread and of Runnable carefully, but I didn't understand how to handle this use case.
Since this is a thread, you could easily use Thread.sleep(1000) or, more simply, Util.sleep(1000), which just swallows the InterruptedException. So something like this would work:
while (!isAllDataFetched) {
    if (serverAPI.getSomething() && serverAPI.getXXX() && ...) {
        isAllDataFetched = true;
    } else {
        Log.p("Connection ERROR in fetching initial data");
        Util.sleep(1000);
    }
}
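Putting that together with the getStartData() from the question (a sketch; serverAPI, isAllDataFetched, and the elided calls are the question's own):
private void getStartData() {
    Runnable getBootData = () -> {
        // Retry every second until all of the initial data has been fetched
        while (!isAllDataFetched) {
            if (serverAPI.getSomething() && serverAPI.getXXX() && ...) {
                isAllDataFetched = true;
            } else {
                Log.p("Connection ERROR in fetching initial data");
                Util.sleep(1000);
            }
        }
    };
    EasyThread appInfo = EasyThread.start("APPINFO");
    appInfo.run(getBootData);
}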
I'm writing some networking code in Swift that prevents initiating a download that is already in progress. I do this by keeping track of the identity of the network request along with the associated completion handlers in a (synchronized) array A. When a network call finishes it calls the completion handlers associated with that resource and subsequently removes those handlers from the array A.
I want to make sure there is no way for threads to access the array in certain cases. For example, consider the following scenario:
1. A request to download resource X is started.
2. Verify whether the request has already been made.
3. Add the completion handler to the array A.
4. If the request has not been made, start the download.
What if resource X was already downloading, and the completion handler for that download interrupts the thread between steps 2 and 3? It has been verified that the request has been made, so the download will not be started, but the new completion handler will be added to array A and will now never be called.
How would I block this from happening? Can I lock the array for writing while I do steps 2 and 3?
The simple solution is to run everything on the main thread except the actual downloading. All you need to do is make the completion handler a stub that places a block on the main queue to do all the work.
The pseudo code for what you want is something like
assert(Thread.current == Thread.main)
handlerArray.append(myHandler)
if !requestAlreadyRunning
{
    requestAlreadyRunning = true
    startDownloadRequest(completionHandler: {
        whatever in
        DispatchQueue.main.async // This is the only line of code that does not run on the main thread
        {
            for handler in handlerArray
            {
                handler()
            }
            handlerArray = []
            requestAlreadyRunning = false
        }
    })
}
This works because all the work that might result in race conditions and synchronisation conflicts runs on one thread, the main thread, so the completion handler can't possibly be running while you are adding new completion handlers to the queue, and vice versa.
Note that, for the above solution to work, your application needs to be in a run loop. This will be true for any Cocoa based application on Mac OS or iOS but not necessarily true for a command line tool. If that is the case or if you don't want any of the work to happen on the main thread, set up a serial queue and run the connection initiation and the completion handler on it instead of the main queue.
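For instance, a sketch of the serial queue variant (the queue label is made up here; handlerArray, requestAlreadyRunning, and startDownloadRequest are the names from the pseudo code above):
let syncQueue = DispatchQueue(label: "com.example.download.sync") // serial by default

func requestDownload(handler: @escaping () -> Void) {
    syncQueue.async {
        handlerArray.append(handler)
        guard !requestAlreadyRunning else { return }
        requestAlreadyRunning = true
        startDownloadRequest(completionHandler: { whatever in
            // Hop back onto the same serial queue before touching shared state
            syncQueue.async {
                handlerArray.forEach { $0() }
                handlerArray = []
                requestAlreadyRunning = false
            }
        })
    }
}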
I'm working on the assumption that you want to be able to add multiple callbacks that will all be run when the latest request completes, whether it was already in-flight or not.
Here's a sketch of a solution. The basic point is to take a lock before touching the array(s) of handlers, whether to add one or to invoke them after the request has completed. You must also synchronize the determination of whether to start a new request, with the exact same lock.
If the lock is already held in the public method where the handlers are added, and the request's own completion runs, then the latter must wait for the former, and you will have deterministic behavior (the new handler will be invoked).
class WhateverRequester
{
    typealias SuccessHandler = (Whatever) -> Void
    typealias FailureHandler = (Error) -> Void

    private var successHandlers: [SuccessHandler] = []
    private var failureHandlers: [FailureHandler] = []

    private let mutex = NSLock() // Or your favorite locking mechanism.

    /** Flag indicating whether there's something in flight */
    private var isIdle: Bool = true

    func requestWhatever(succeed: @escaping SuccessHandler,
                         fail: @escaping FailureHandler)
    {
        self.mutex.lock()
        defer { self.mutex.unlock() }

        self.successHandlers.append(succeed)
        self.failureHandlers.append(fail)

        // Nothing to do, unlock and wait for request to finish
        guard self.isIdle else { return }

        self.isIdle = false
        self.enqueueRequest()
    }

    private func enqueueRequest()
    {
        // Make a request however you do, with callbacks to the methods below
    }

    private func requestDidSucceed(whatever: Whatever)
    {
        // Synchronize again before touching the list of handlers and the flag
        self.mutex.lock()
        defer { self.mutex.unlock() }

        for handler in self.successHandlers {
            handler(whatever)
        }

        self.successHandlers = []
        self.failureHandlers = []
        self.isIdle = true
    }

    private func requestDidFail(error: Error)
    {
        // As the "did succeed" method, but call failure handlers
        // Again, lock before touching the arrays and idle flag.
    }
}
This is so broadly applicable that you can actually extract the callback storage, locking, and invocation into its own generic component, which a "Requester" type can create, own, and use.
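A minimal sketch of such a component (the names here are my own invention, not part of the code above):
import Foundation

final class CallbackStore<Value>
{
    private var callbacks: [(Value) -> Void] = []
    private let mutex = NSLock()

    /// Appends a callback; returns true if it is the first one,
    /// i.e. the caller should start the underlying work.
    func append(_ callback: @escaping (Value) -> Void) -> Bool
    {
        self.mutex.lock()
        defer { self.mutex.unlock() }
        self.callbacks.append(callback)
        return self.callbacks.count == 1
    }

    /// Invokes all stored callbacks with the result, then clears them.
    func complete(with value: Value)
    {
        self.mutex.lock()
        defer { self.mutex.unlock() }
        self.callbacks.forEach { $0(value) }
        self.callbacks = []
    }
}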
Based on Josh's answer I created a generic Request & Requester below. It has a few more specific requirements than I described in the question above. I want a Request instance to manage only requests with a certain ID (which I made a String for now, but I guess this could also be made more generic). Different IDs require different Request instances. I created the Requester class for this purpose.
The Requester class manages an array of Requests. For example, one could choose T = UIImage and ID = image URL. This would give us an image downloader. Or one could choose T = User and ID = user id. This would fetch a user object only once, even when requested several times.
I also wanted to be able to cancel requests from individual callers. The Request tags each completion handler with a unique ID that is passed back to the caller, which can use it to cancel that request. If all callers cancel, the request is removed from the Requester.
(The code below has not been tested so I cannot guarantee it to be bug free. Use at your own risk.)
import Foundation

typealias RequestWork<T> = (Request<T>) -> ()
typealias RequestCompletionHandler<T> = (Result<T>) -> ()
typealias RequestCompletedCallback<T> = (Request<T>) -> ()

struct UniqueID {
    private static var ID: Int = 0
    static func getID() -> Int {
        ID = ID + 1
        return ID
    }
}

enum RequestError: Error {
    case canceled
}

enum Result<T> {
    case success(T)
    case failure(Error)
}

protocol CancelableOperation: class {
    func cancel()
}

final class Request<T> {
    private lazy var completionHandlers = [(invokerID: Int,
                                            completion: RequestCompletionHandler<T>)]()
    private let mutex = NSLock()
    // To inform the requester that the request has finished
    private let completedCallback: RequestCompletedCallback<T>!
    private var isIdle = true
    // After work is executed, operation should be set so the request can be
    // canceled if possible
    var operation: CancelableOperation?
    let ID: String!

    init(ID: String,
         completedCallback: @escaping RequestCompletedCallback<T>) {
        self.ID = ID
        self.completedCallback = completedCallback
    }

    // Cancel the request for a single invoker and invoke its completion
    // handler with a cancel error. If the only remaining invoker cancels,
    // the request will attempt to cancel the associated operation.
    func cancel(invokerID: Int) {
        self.mutex.lock()
        defer { self.mutex.unlock() }
        if let index = self.completionHandlers.index(where: { $0.invokerID == invokerID }) {
            self.completionHandlers[index].completion(Result.failure(RequestError.canceled))
            self.completionHandlers.remove(at: index)
            if self.completionHandlers.isEmpty {
                self.isIdle = true
                operation?.cancel()
                self.completedCallback(self)
            }
        }
    }

    // Request work to be done. It will only be done if it hasn't been done yet.
    // The work block should set the operation on this request if possible. The
    // work block should call requestFinished(result:) when the work has finished.
    func request(work: @escaping RequestWork<T>,
                 completion: @escaping RequestCompletionHandler<T>) -> Int {
        self.mutex.lock()
        defer { self.mutex.unlock() }
        let ID = UniqueID.getID()
        self.completionHandlers.append((invokerID: ID, completion: completion))
        guard self.isIdle else { return ID }
        work(self)
        self.isIdle = false
        return ID
    }

    // This method should be called from the work block when the work has
    // completed. It will pass the result to all completion handlers and call
    // the Requester class to inform it that this request has finished.
    func requestFinished(result: Result<T>) {
        self.mutex.lock()
        defer { self.mutex.unlock() }
        completionHandlers.forEach { $0.completion(result) }
        completionHandlers = []
        self.completedCallback(self)
        self.isIdle = true
    }
}

final class Requester<T> {
    private lazy var requests = [Request<T>]()
    private let mutex = NSLock()

    init() { }

    // requestFinished(request:) should be called after a single Request has
    // finished its work. It removes the request from the array of requests.
    func requestFinished(request: Request<T>) {
        self.mutex.lock()
        defer { self.mutex.unlock() }
        if let index = requests.index(where: { $0.ID == request.ID }) {
            requests.remove(at: index)
        }
    }

    // request(ID:work:completion:) will create a request, or add a completion
    // handler to an existing request if a request with the supplied ID already
    // exists. When a request is created, it passes a closure that removes the
    // request. It returns the invoker ID to the invoker for cancelation purposes.
    func request(ID: String,
                 work: @escaping RequestWork<T>,
                 completion: @escaping RequestCompletionHandler<T>) -> (Int, Request<T>) {
        self.mutex.lock()
        defer { self.mutex.unlock() }
        if let existingRequest = requests.first(where: { $0.ID == ID }) {
            let invokerID = existingRequest.request(work: work, completion: completion)
            return (invokerID, existingRequest)
        } else {
            let request = Request<T>(ID: ID) { [weak self] (request) in
                self?.requestFinished(request: request)
            }
            let invokerID = request.request(work: work, completion: completion)
            // Track the new request so later callers with the same ID join it
            requests.append(request)
            return (invokerID, request)
        }
    }
}
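For example, a hypothetical image downloader on top of this (downloadImage is a stand-in for your networking layer, not part of the code above; it is assumed to return a CancelableOperation and to report back with a Result<UIImage>):
let imageRequester = Requester<UIImage>()

func fetchImage(url: URL,
                completion: @escaping (Result<UIImage>) -> ()) -> (Int, Request<UIImage>) {
    return imageRequester.request(ID: url.absoluteString, work: { request in
        // Keep the operation around so the request can be canceled
        request.operation = downloadImage(url) { result in
            request.requestFinished(result: result)
        }
    }, completion: completion)
}
The returned pair gives the caller both its invoker ID and the request, so it can later call request.cancel(invokerID:).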
Is Hazelcast always blocking if initial.min.cluster.size is not reached? If not, in which situations is it not?
Details:
I use the following code to initialize hazelcast:
Config cfg = new Config();
cfg.setProperty("hazelcast.initial.min.cluster.size",
        Integer.toString(minimumInitialMembersInHazelCluster)); // 2 in this case
cfg.getGroupConfig().setName(clusterName);
NetworkConfig network = cfg.getNetworkConfig();
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().addMember("192.168.0.1").addMember("192.168.0.2")
        .addMember("192.168.0.3").addMember("192.168.0.4")
        .addMember("192.168.0.5").addMember("192.168.0.6")
        .addMember("192.168.0.7").setRequiredMember(null).setEnabled(true);
network.getInterfaces().setEnabled(true).addInterface("192.168.0.*");
join.getMulticastConfig().setMulticastTimeoutSeconds(MCSOCK_TIMEOUT/100);
hazelInst = Hazelcast.newHazelcastInstance(cfg);
distrDischargedTTGs = hazelInst.getList(clusterName);
and get log messages like
debug: starting Hazel pullExternal from Hazelcluster with 1 members.
Does that definitely mean there was another member that joined and left already? It does not look like that is the case from the log files of the other instance. Hence I wonder whether there are situations where hazelInst = Hazelcast.newHazelcastInstance(cfg); does not block even though it is the only instance in the Hazelcast cluster.
The newHazelcastInstance call blocks until the cluster has the required number of members.
See the code below for how it is implemented:
private static void awaitMinimalClusterSize(HazelcastInstanceImpl hazelcastInstance, Node node, boolean firstMember)
        throws InterruptedException {
    final int initialMinClusterSize = node.groupProperties.INITIAL_MIN_CLUSTER_SIZE.getInteger();
    while (node.getClusterService().getSize() < initialMinClusterSize) {
        try {
            hazelcastInstance.logger.info("HazelcastInstance waiting for cluster size of " + initialMinClusterSize);
            //noinspection BusyWait
            Thread.sleep(TimeUnit.SECONDS.toMillis(1));
        } catch (InterruptedException ignored) {
        }
    }
    if (initialMinClusterSize > 1) {
        if (firstMember) {
            node.partitionService.firstArrangement();
        } else {
            Thread.sleep(TimeUnit.SECONDS.toMillis(3));
        }
        hazelcastInstance.logger.info("HazelcastInstance starting after waiting for cluster size of "
                + initialMinClusterSize);
    }
}
If you set the logging to debug then perhaps you can see better what is happening. Members joining and leaving should already be visible at info level.
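For example, assuming you route Hazelcast through log4j (the property and logger name below are the standard ones, but verify them against your Hazelcast version):
cfg.setProperty("hazelcast.logging.type", "log4j");
// and then, in log4j.properties:
// log4j.logger.com.hazelcast=DEBUG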
Some time ago we implemented a warehouse management app that keeps track of the quantity of each product we have in the store. We solved the problem of concurrent access to the data with database locks (SELECT FOR UPDATE), but this approach led to poor performance when many clients try to consume product quantities from the same store. Note that we manage only a small set of product types (fewer than 10), so the degree of concurrency can be heavy (also, we don't care about stock re-fill). We thought about splitting each resource quantity into smaller "buckets", but this approach could lead to starvation for clients that try to consume a quantity bigger than each bucket's capacity: we would have to manage bucket merges, and so on...
My question is: are there any broadly accepted solutions to this problem? I also looked for academic articles but the topic seems too wide.
P.S. 1:
our application runs in a clustered environment, so we cannot rely on in-application concurrency control. The question aims to find an algorithm that structures and manages the data in a different way than a single row, while keeping all the advantages that a DB transaction (with or without locks) has.
P.S. 2: for your info, we manage a large number of similar warehouses; the example focuses on a single one, but we keep all the data in one DB (prices are all the same, etc.).
Edit: The setup below will still work on a cluster if you use a queueing program that can coordinate among multiple processes / servers, e.g. RabbitMQ.
You can also use a simpler queueing algorithm that only uses the database, with the downside that it requires polling (whereas a system like RabbitMQ allows threads to block until a message is available). Create a Requests table with a column for unique requestIds (e.g. a random UUID) that acts as the primary key, a timestamp column, a resourceType column, and an integer requestedQuantity column. You'll also need a Logs table with a unique requestId column that acts as the primary key, a timestamp column, a resourceType column, an integer requestedQuantity column, and a boolean/tinyint/whatever success column.
When a client requests a quantity of ResourceX it generates a random UUID and adds a row to the Requests table using the UUID as the requestId, and then polls the Logs table for that requestId. If the success column is true then the request succeeded, else it failed.
The server with the database assigns one thread or process to each resource, e.g. ProcessX is in charge of ResourceX. ProcessX retrieves all rows from the Requests table where resourceType = ResourceX, sorted by timestamp, and then deletes them from Requests; it then processes each request in order, decrementing an in-memory counter for each successful request, and at the end of processing the requests it updates the quantity of ResourceX in the Resources table. It then writes each request and its success status to the Logs table. It then retrieves all of the requests from Requests where resourceType = ResourceX again, etc.
It may be slightly more efficient to use an autoincrement integer as the Requests primary key, and to have ProcessX sort by primary key instead of by timestamp.
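A client-side sketch of that flow in JDBC (the ts column name and the 100ms polling interval are placeholders; error handling and imports omitted):
boolean requestQuantity(Connection conn, String resourceType, int quantity)
        throws SQLException, InterruptedException {
    String requestId = UUID.randomUUID().toString();
    try (PreparedStatement insert = conn.prepareStatement(
            "INSERT INTO Requests (requestId, ts, resourceType, requestedQuantity) VALUES (?, ?, ?, ?)")) {
        insert.setString(1, requestId);
        insert.setTimestamp(2, new Timestamp(System.currentTimeMillis()));
        insert.setString(3, resourceType);
        insert.setInt(4, quantity);
        insert.executeUpdate();
    }
    // Poll the Logs table until the resource's process has written our result
    while (true) {
        try (PreparedStatement query = conn.prepareStatement(
                "SELECT success FROM Logs WHERE requestId = ?")) {
            query.setString(1, requestId);
            try (ResultSet rs = query.executeQuery()) {
                if (rs.next()) {
                    return rs.getBoolean("success");
                }
            }
        }
        Thread.sleep(100); // back off briefly before polling again
    }
}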
One option is to assign one DAOThread per resource - this thread is the only thing that accesses that resource's database table so that there's no locking at the database level. Workers (e.g. web sessions) request resource quantities using a concurrent queue - the example below uses a Java BlockingQueue, but most languages will have some sort of concurrent queue implementation you can use.
public class Request {
    final int value;
    final BlockingQueue<ReturnMessage> queue;
    public Request(int value, BlockingQueue<ReturnMessage> queue) {
        this.value = value;
        this.queue = queue;
    }
}
public class ReturnMessage {
    final int value;
    final String resourceType;
    final boolean isSuccess;
    public ReturnMessage(int value, String resourceType, boolean isSuccess) {
        this.value = value;
        this.resourceType = resourceType;
        this.isSuccess = isSuccess;
    }
}
public class DAOThread implements Runnable {
    private final int MAX_CHANGES = 10;
    final String resourceType; // package-private so BufferThreads can read it
    private int quantity;
    private int changeCount = 0;
    private DBTable table;
    final BlockingQueue<Request> queue; // package-private so BufferThreads can forward requests

    public DAOThread(DBTable table, BlockingQueue<Request> queue) {
        this.table = table;
        this.resourceType = table.select("resource_type");
        this.quantity = table.select("quantity");
        this.queue = queue;
    }

    public void run() {
        try {
            while (true) {
                Request request = queue.take();
                if (request.value <= quantity) {
                    quantity -= request.value;
                    // Only flush to the database every MAX_CHANGES updates, to reduce IO
                    if (++changeCount > MAX_CHANGES) {
                        changeCount = 0;
                        table.update("quantity", quantity);
                    }
                    request.queue.offer(new ReturnMessage(request.value, resourceType, true));
                } else {
                    request.queue.offer(new ReturnMessage(request.value, resourceType, false));
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
public class Worker {
    final Map<String, BlockingQueue<Request>> dbMap;
    final SynchronousQueue<ReturnMessage> queue = new SynchronousQueue<>();

    public Worker(Map<String, BlockingQueue<Request>> dbMap) {
        this.dbMap = dbMap;
    }

    public boolean request(String resourceType, int value) throws InterruptedException {
        dbMap.get(resourceType).offer(new Request(value, queue));
        return queue.take().isSuccess;
    }
}
The Workers send resource requests to the appropriate DAOThread's queue; the DAOThread processes these requests in order, either updating the local resource quantity if the request's value doesn't exceed the quantity and returning a Success, or else leaving the quantity unchanged and returning a Failure. The database is only updated after ten changes to reduce the amount of IO; the larger MAX_CHANGES is, the more complicated it will be to recover from a system failure. You can also have a dedicated IOThread that does all of the database writes - this way you don't need to duplicate any logging or timing (e.g. there ought to be a Timer that flushes the current quantity to the database every few seconds).
The Worker uses a SynchronousQueue to wait for a response from the DAOThread (a SynchronousQueue is a BlockingQueue that can only hold one item); if the Worker is running in its own thread then you may want to replace this with a standard multi-item BlockingQueue so that the Worker can process the ReturnMessages in any order.
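For example, that swap is a one-line change in the Worker above:
// An unbounded queue instead of the SynchronousQueue, so several
// ReturnMessages can be pending at once and drained in any order
final BlockingQueue<ReturnMessage> queue = new LinkedBlockingQueue<>();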
There are some databases e.g. Riak that have native support for counters, so this might improve your IO thoughput and reduce or eliminate the need for a MAX_CHANGES.
You can further increase throughput by introducing BufferThreads to buffer the requests to the DAOThreads.
public class BufferThread implements Runnable {
    final SynchronousQueue<ReturnMessage> returnQueue = new SynchronousQueue<>();
    final int BUFFERSIZE = 10;
    private DAOThread daoThread;
    private BlockingQueue<Request> queue;
    private ArrayList<Request> buffer = new ArrayList<>(BUFFERSIZE);
    private int tempTotal = 0;

    public BufferThread(DAOThread daoThread, BlockingQueue<Request> queue) {
        this.daoThread = daoThread;
        this.queue = queue;
    }

    public void run() {
        try {
            while (true) {
                Request request = queue.poll(100, TimeUnit.MILLISECONDS);
                if (request != null) {
                    tempTotal += request.value;
                    buffer.add(request);
                }
                // Flush when the buffer is full, or when a poll times out with requests pending
                if (!buffer.isEmpty() && (buffer.size() == BUFFERSIZE || request == null)) {
                    daoThread.queue.offer(new Request(tempTotal, returnQueue));
                    ReturnMessage message = returnQueue.take();
                    if (message.isSuccess) {
                        for (Request buffered : buffer) {
                            buffered.queue.offer(new ReturnMessage(buffered.value, daoThread.resourceType, message.isSuccess));
                        }
                    } else {
                        // send unbuffered requests to DAOThread to see if any can be satisfied
                        for (Request buffered : buffer) {
                            daoThread.queue.offer(buffered);
                        }
                    }
                    buffer.clear();
                    tempTotal = 0;
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
The Workers send their requests to the BufferThreads, which wait until they've buffered BUFFERSIZE requests or have waited 100ms for a request to come through the buffer (Request request = queue.poll(100, TimeUnit.MILLISECONDS)), at which point they forward the buffered message to the DAOThread. You can have multiple buffers per DAOThread - rather than sending a Map<String, BlockingQueue<Request>> to the Workers you instead send a Map<String, ArrayList<BlockingQueue<Request>>>, one queue per BufferThread, with the Worker either using a counter or a random number generator to determine which BufferThread to send a request to, as sketched below. Note that if BUFFERSIZE is too large and/or if you have too many BufferThreads then Workers will suffer from long pause times as they wait for the buffer to fill up.
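A sketch of that fan-out in the Worker, using a random pick (the field names mirror the Worker above):
final Map<String, ArrayList<BlockingQueue<Request>>> dbMap;
final Random random = new Random();

public boolean request(String resourceType, int value) throws InterruptedException {
    ArrayList<BlockingQueue<Request>> buffers = dbMap.get(resourceType);
    // Spread requests across the resource's BufferThreads
    buffers.get(random.nextInt(buffers.size())).offer(new Request(value, queue));
    return queue.take().isSuccess;
}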
My problem is that I have an AlarmManager that should go off every 60 minutes at a defined time.
This, however, only works the first time.
With every hour that passes, the AlarmManager delays its work by 2 or 3 minutes.
Here is an example:
hour is set to 4 p.m.
minute is set to 32
timer is set to 60 minutes
Calendar timeOff9 = Calendar.getInstance();
timeOff9.set(Calendar.HOUR_OF_DAY, hour);
timeOff9.set(Calendar.MINUTE, minute);
am.setRepeating(AlarmManager.RTC_WAKEUP, timeOff9.getTimeInMillis(), timer*60000, pi);
Maybe someone knows why that is?
I am using API level 15. According to the documentation, as of API 19 setRepeating == setInexactRepeating.
Thank you very much!
You almost answered your own question by referring to the documentation.
As of API 19, calls to setRepeating are indeed treated as setInexactRepeating.
The way to handle this is to set a new alarm when you handle receiving an alarm (>= API 19).
e.g.
@SuppressLint("NewApi")
private void setAlarm(AlarmManager alarmManager, long time,
        PendingIntent pIntent, boolean repeat) {
    if (android.os.Build.VERSION.SDK_INT >= 19) {
        alarmManager.setExact(AlarmManager.RTC_WAKEUP, time, pIntent);
    } else {
        if (repeat) {
            alarmManager.setRepeating(AlarmManager.RTC_WAKEUP, time,
                    AlarmManager.INTERVAL_DAY * 7, pIntent);
        } else {
            alarmManager.set(AlarmManager.RTC_WAKEUP, time, pIntent);
        }
    }
}
And then when you receive the alarm...
/***
 * From version 19 (KitKat), notifications that are set repeating aren't
 * exact, so the notification needs to be scheduled again each time it is
 * received
 */
private void scheduleNextNotification() {
    if (android.os.Build.VERSION.SDK_INT < 19) {
        return;
    }
    // set alarm that would have otherwise been repeating
}