dart:io sync vs async file operations

There are a number of sync and async operations for files in dart:io:
file.deleteSync() and file.delete()
file.readAsStringSync() and file.readAsString()
file.writeAsBytesSync(bytes) and file.writeAsBytes(bytes)
and many, many more.
What are the considerations that I should keep in mind when choosing between the sync and async options? I seem to recall seeing somewhere that the sync option is faster if you have to wait for it to finish anyway (await file.delete() for example). But I can't remember where I saw that or if it is true.
Is there any difference between this method:
Future deleteFile(File file) async {
  await file.delete();
  print('deleted');
}
and this method:
Future deleteFile(File file) async {
  file.deleteSync();
  print('deleted');
}

Let me try to summarize an answer based on the comments to my question. Correct me where I'm wrong.
Running code in an async method doesn't make it run on another thread.
Dart is a single threaded system.
Code gets run on an event loop.
Performing a long-running synchronous task will block the system whether it is in an async method or not.
An isolate is a single thread.
If you want to run tasks on another thread, then you need to run them on another isolate.
Starting another isolate is called spawning the isolate.
There are a few options for running tasks on another isolate including compute and IsolateChannel and writing your own isolate communication code.
For File IO, the synchronous versions are faster than the asynchronous versions.
For heavy file IO, prefer the asynchronous versions because they do the work on a separate thread.
For light file IO (like file.exists()), using the synchronous version is an option since it is likely to be fast.
Further reading
Isolates and Event Loops
Single Thread Dart, What? — Part 1
Single Thread Dart, What? — Part 2
avoid_slow_async_io lint

The sync variants, unlike the async ones, stop the event loop from executing any other event handlers until the operation is complete.
Using sync:
import 'dart:io';

void main() {
  final file = File('...');
  Future(() => print('1')); // Adding to the event queue
  file.readAsBytesSync();
  print('2');
}
Output:
2
1
Using async:
import 'dart:io';

void main() async {
  final file = File('...');
  Future(() => print('1')); // Adding to the event queue
  await file.readAsBytes();
  print('2');
}
Output:
1
2

Related

How to build a async rest endpoint that calls blocking action in worker thread and replies instantly (Quarkus)

I checked the docs and Stack Overflow but didn't find a suitable approach.
E.g. this post seems very close: Dispatch a blocking service in a Reactive REST GET endpoint with Quarkus/Mutiny
However, I don't want so much unnecessary boilerplate code in my service; at best, no service code change at all.
I generally just want to call a service method which uses the entity manager and is thus a blocking action, but I want to return a string to the caller immediately, like "query started" or something. I don't need a callback object; it's just a fire-and-forget approach.
I tried something like this
@NonBlocking
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public Uni<String> triggerQuery() {
    return Uni.createFrom()
        .item("query started")
        .call(() -> service.startLongRunningQuery());
}
But it's not working. The error message returned to the caller:
You have attempted to perform a blocking operation on a IO thread. This is not allowed, as blocking the IO thread will cause major performance issues with your application. If you want to perform blocking EntityManager operations make sure you are doing it from a worker thread.
I actually expected Quarkus to take care of distributing the tasks accordingly, that is, the REST call to an IO thread and the blocking entity manager operations to a worker thread.
So I must be using it wrong.
UPDATE:
I also tried a proposed workaround that I found in https://github.com/quarkusio/quarkus/issues/11535, changing the method body to
return Uni.createFrom()
    .item("query started")
    .emitOn(Infrastructure.getDefaultWorkerPool())
    .invoke(() -> service.startLongRunningQuery());
Now I don't get an error, but service.startLongRunningQuery() is not invoked, so there are no logs and no query is actually sent to the DB.
Same with (How to call long running blocking void returning method with Mutiny reactive programming?):
return Uni.createFrom()
    .item(() -> service.startLongRunningQuery())
    .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
Same with (How to run blocking codes on another thread and make http request return immediately):
ExecutorService executor = Executors.newFixedThreadPool(10, r -> new Thread(r, "CUSTOM_THREAD"));
return Uni.createFrom()
    .item(() -> service.startLongRunningQuery())
    .runSubscriptionOn(executor);
Any idea why service.startLongRunningQuery() is not called at all, and how to achieve fire-and-forget behaviour, with the REST call handled on an IO thread and the service call handled by a worker thread?
It depends on whether you want to return immediately (before your startLongRunningQuery operation is effectively executed), or whether you want to wait until the operation completes.
In the first case, use something like:
@Inject EventBus bus;

@NonBlocking
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public void triggerQuery() {
    bus.send("some-address", "my payload");
}

@Blocking // Will be called on a worker thread
@ConsumeEvent("some-address")
public void executeQuery(String payload) {
    service.startLongRunningQuery();
}
In the second case, you need to execute the query on a worker thread.
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public Uni<String> triggerQuery() {
    return Uni.createFrom().item(() -> service.startLongRunningQuery())
        .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
}
Note that you need RESTEasy Reactive for this to work (and not classic RESTEasy). If you use classic RESTEasy, you would need the quarkus-resteasy-mutiny extension (but I would recommend using RESTEasy Reactive, it will be way more efficient).
Use the EventBus for that https://quarkus.io/guides/reactive-event-bus
Send and forget is the way to go.

Run while loop in parallel

I have a large collection (90,000+ objects) and I would like to run a while loop on it in parallel; the source of my function is below:
val context = newSingleThreadAsyncContext()
return KtxAsync.async(context) {
    val fields = regularMazeService.generateFields(colsNo, rowsNo)
    val time = measureTimeMillis {
        withContext(newAsyncContext(10)) {
            while (availableFieldsWrappers.isNotEmpty()) {
                val wrapper = getFirstShuffled(availableFieldsWrappers.lastIndex)
                    .let { availableFieldsWrappers[it] }
                if (wrapper.neighborsIndexes.isEmpty()) {
                    availableFieldsWrappers.remove(wrapper)
                    continue
                }
                val nextFieldIndex = getFirstShuffled(wrapper.neighborsIndexes.lastIndex)
                    .let {
                        val fieldIndex = wrapper.neighborsIndexes[it]
                        wrapper.neighborsIndexes.removeAt(it)
                        fieldIndex
                    }
                if (visitedFieldsIndexes.contains(nextFieldIndex)) {
                    wrapper.neighborsIndexes.remove(nextFieldIndex)
                    fields[nextFieldIndex].neighborFieldsIndexes.remove(wrapper.index)
                    continue
                }
                val nextField = fields[nextFieldIndex]
                availableFieldsWrappers.add(FieldWrapper(nextField, nextFieldIndex))
                visitedFieldsIndexes.add(nextFieldIndex)
                wrapper.field.removeNeighborWall(nextFieldIndex)
                nextField.removeNeighborWall(wrapper.index)
            }
        }
    }
    Gdx.app.log("maze-time", "$time")
}
At the top of the class:
private val availableFieldsWrappers = Collections.synchronizedList(mutableListOf<FieldWrapper>())
private val visitedFieldsIndexes = Collections.synchronizedList(mutableListOf<Int>())
I tested it a few times; the results are below:
1 thread - 21213ms
5 threads - 27894ms
10 threads - 21494ms
15 threads - 20986ms
What am I doing wrong?
1. You are using Collections.synchronizedList from the Java standard library, which returns a list wrapper that leverages the blocking synchronized mechanism to ensure thread safety. This mechanism is not compatible with coroutines, in that it blocks the other threads from accessing the collection until the operation is finished. You should generally use non-blocking concurrent collections when accessing data from multiple coroutines, or protect the shared data with a non-blocking mutex.
2. List.contains will become slower and slower (O(n)) as more and more elements are added. Instead of a list, you should use a set for visitedFieldsIndexes. Just make sure to either protect it with a mutex or use a concurrent variant. Similarly, removal of values at random indices from availableFieldsWrappers is pretty costly - instead, you can shuffle the list once and use simple iteration.
3. You are not reusing the coroutine contexts. In general, you can create an asynchronous context once and reuse its instance instead of creating a new thread pool each time you need coroutines. You should invoke and assign the result of newAsyncContext(10) just once and reuse it throughout your application.
4. The code you have currently written does not leverage coroutines very well. Instead of thinking of a coroutine dispatcher as a thread pool where you can launch N big tasks in parallel (i.e. your while availableFieldsWrappers.isNotEmpty loop), you should think of it as an executor of hundreds or thousands of small tasks, and adjust your code accordingly. I think you could avoid the available/visited collections altogether by rewriting your code with the introduction of e.g. Kotlin flows, or just multiple KtxAsync.async/KtxAsync.launch calls that each handle a smaller portion of the logic.
5. Unless some of the functions are suspending or use coroutines underneath, you're not really leveraging the multiple threads of an asynchronous context at all. withContext(newAsyncContext(10)) launches a single coroutine that handles the whole logic sequentially, leveraging only a single thread. See 4. for some ideas on how you can rewrite the code. Try collecting (or just printing) the thread hashes and names to see if you are using all of the threads well.

Why does async method block MVVM Light Relay Command

I'm new to async and need to consume an API that uses it. I've read I should "go async all the way" back to the UI command. So far I've propagated async back to my view model.
The code below blocks the Upload button in my UI. Is this because the RelayCommand's implementation calls it using await?
// In the ViewModel:
public MyViewModel()
{
    ...
    UploadRelayCommand = new RelayCommand(mUpload, () => CanUpload);
    ...
}

private async void mUpload()
{
    ...
    await mModel.Upload();
    ...
}

// In the model:
public async Task UploadToDatabase()
{
    ...
    projectToUse = await api.CreateProjectAsync(ProjectName);
    ...
}

// In the API
public async Task<Project> CreateProjectAsync(Project project){}
Update: Sven's comment led me to find that CreateProjectAsync was running in a simulation mode that synchronously wrote to memory. When I wrapped that end code in Task.Run, it no longer blocked my Upload button. When not running in simulation mode, the API natively makes asynchronous calls to interact with a web server, so those also don't block.
Thanks.
The await itself will not block your UI. It is more likely that your Upload() method does not do any real asynchronous work.
(As Jim suggested, Task.Run() can be used in such a case. It will use the thread pool to run the operation in the background. Generally speaking, for IO-bound operations like uploads/downloads you should check if your API supports asynchronous calls natively. If such an implementation exists, it may make more efficient use of system resources than using a thread.)

Netty synchronous client with asynchronous callers

I am creating a server which consumes commands from numerous sources such as JMS, SNMP, HTTP etc. These are all asynchronous and are working fine. The server maintains a single connection to a single item of legacy hardware which has a request/reply architecture with a custom TCP protocol.
Ideally I would like a single command like this blocking type method
public Response issueCommandToLegacyHardware(Command command)
or this asynchronous type method
public Future<Response> issueCommandToLegacyHardware(Command command)
I am relatively new to Netty and asynchronous programming, basically learning it as I go along. My current thought is that my LegacyHardwareClient class will have a public synchronized issueCommandToLegacyHardware(Command command) method, which will write to the client channel to the legacy hardware and then take() from a SynchronousQueue<Response>, which will block. The ChannelInboundHandler in the pipeline will offer() a Response to the SynchronousQueue<Response>, which will allow the take() to unblock and receive the data.
Is this too convoluted? Are there any examples around of synchronous Netty client implementations that I can look at? Are there any best practices for Netty?
I could obviously use just standard Java sockets; however, the power of Netty for parsing custom protocols, along with the ease of maintainability, is far too great to give up.
UPDATE:
Just regarding the implementation: I used an ArrayBlockingQueue<>() with put() and remove() rather than offer() and remove(), because I wanted to ensure that subsequent requests to the legacy hardware were only sent once any active request had been replied to, as the legacy hardware behaviour is otherwise not known with certainty.
The reason offer() and remove() did not work for me was that offer() would not pass anything if there was not an actively blocking take() request on the other side. The converse is true: remove() would not return anything unless there was a blocking put() call inserting data.
I couldn't use put()/remove() since the remove() statement would never be reached, because no request had been written to the channel to trigger the event from which remove() would be called. I couldn't use offer()/take() since offer() would return false because the take() call hadn't been executed yet.
Using the ArrayBlockingQueue<>() with a capacity of 1, it ensured that only one command could be executed at once. Any other commands would block until there was sufficient room to insert, with a capacity of 1 this meant it had to be empty. The emptying of the queue was done once a response had been received from the legacy hardware. This ensured a nice synchronous behaviour toward the legacy hardware but provided an asynchronous API to the users of the legacy hardware, for which there are many.
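For illustration, the capacity-1 queue approach described in this update might look roughly like the sketch below. It is only an interpretation of the description above, not the exact code: Command and Response are the question's own types, and the CompletableFuture used as the in-flight placeholder is an assumption.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

// Sketch: a capacity-1 queue serialises commands to the legacy hardware;
// the inbound handler frees the slot and completes the caller when the reply arrives.
public class LegacyHardwareClient {
    // Holds at most one in-flight request; put() blocks further callers until it is emptied.
    private final BlockingQueue<CompletableFuture<Response>> inFlight = new ArrayBlockingQueue<>(1);
    private final Channel channel;

    public LegacyHardwareClient(Channel channel) {
        this.channel = channel;
    }

    public Response issueCommandToLegacyHardware(Command command) throws InterruptedException {
        CompletableFuture<Response> pending = new CompletableFuture<>();
        inFlight.put(pending);               // blocks while another command is outstanding
        channel.writeAndFlush(command);
        return pending.join();               // wait for the reply delivered by the handler below
    }

    // Placed last in the pipeline.
    public class ResponseHandler extends SimpleChannelInboundHandler<Response> {
        @Override
        protected void channelRead0(ChannelHandlerContext ctx, Response msg) {
            inFlight.remove().complete(msg); // empty the queue and unblock the caller
        }
    }
}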
Instead of designing your application in a blocking manner using SynchronousQueue<Response>, design it in a nonblocking manner using SynchronousQueue<Promise<Response>>.
Your public Future<Response> issueCommandToLegacyHardware(Command command) should then use offer() to add a DefaultPromise<>() to the queue, and the netty pipeline can use remove() to get the response for that request. Notice I used remove() instead of take(), since only under exceptional circumstances is there no element present.
A quick implementation of this might be:
public class MyLastHandler extends SimpleChannelInboundHandler<Response> {

    private final SynchronousQueue<Promise<Response>> queue;

    public MyLastHandler(SynchronousQueue<Promise<Response>> queue) {
        super();
        this.queue = queue;
    }

    // The following is called messageReceived(ChannelHandlerContext, Response) in 5.0.
    @Override
    public void channelRead0(ChannelHandlerContext ctx, Response msg) {
        this.queue.remove().setSuccess(msg); // Or setFailure(Throwable)
    }
}
The above handler should be placed last in the chain.
The implementation of public Future<Response> issueCommandToLegacyHardware(Command command) can look like this:
Channel channel = ....;
SynchronousQueue<Promise<Response>> queue = ....;

public Future<Response> issueCommandToLegacyHardware(Command command) {
    return issueCommandToLegacyHardware(command, channel.eventLoop().newPromise());
}

public Future<Response> issueCommandToLegacyHardware(Command command, Promise<Response> promise) {
    queue.offer(promise);
    channel.write(command);
    return promise;
}
Using the approach with the overload on issueCommandToLegacyHardware is also the design pattern used for Channel.write; this makes it really flexible.
This design pattern can be used as follows in client code:
issueCommandToLegacyHardware(
    Command.TAKE_OVER_THE_WORLD_WITH_FIRE,
    channel.eventLoop().newPromise()
).addListener(
    (Future<Response> f) -> {
        System.out.println("We have taken over the world: " + f.get());
    }
);
The advantage of this design pattern is that no unneeded blocking is used anywhere, just plain async logic.
Appendix I: Javadoc:
Promise, Future, DefaultPromise

Store data every minute: should I use a Service or an AsyncTask?

I want to store data in a database every minute. What should I use for this: a Service, an AsyncTask, or something else? I have gone through various links, which made me more confused.
I read the developer guide and came across this about getWritableDatabase:
Database upgrade may take a long time, you should not call this method from the application main thread,
At first I thought I would use an AsyncTask, but then I read this:
AsyncTasks should ideally be used for short operations (a few seconds at the most.)
After that I thought I could use a Service, but then I read this about Service:
A Service is not a thread. It is not a means itself to do work off of the main thread (to avoid Application Not Responding errors).
Here I am not able to understand what I should use to store data in the database periodically. Please help me, as I am badly stuck.
Thanks in advance
You can't do a lot of work on the UI thread, so for database operations you can choose between different approaches; a few that I prefer to use are listed below.
Create a thread pool and execute each database operation on one of its threads; this reduces the load on the UI thread and avoids initializing a lot of threads.
You can use a Service for the database operations. Since a Service runs on the UI thread, you can't write your operations directly in the Service, so you have to create a separate thread inside the service method. Or you can use an IntentService directly, since it does not do its work on the UI thread (a sketch of this is shown below).
Here is the developer documentation on thread pools in Android,
and this is the documentation for IntentService.
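To make the IntentService option above concrete, here is a minimal sketch. The class name SaveDataService, the DbHelper type and its saveData() method are hypothetical placeholders, not part of the question; only the IntentService mechanics (onHandleIntent running on a worker thread) are the point.

import android.app.IntentService;
import android.content.Intent;

// onHandleIntent is invoked on a worker thread, so the database write
// does not block the UI thread. Register the service in the manifest.
public class SaveDataService extends IntentService {

    public SaveDataService() {
        super("SaveDataService"); // name of the worker thread
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        DbHelper dbHelper = new DbHelper(this);              // hypothetical DB helper
        dbHelper.saveData(intent.getStringExtra("payload")); // runs off the main thread
    }
}

You would start it with startService(new Intent(context, SaveDataService.class)) from wherever the periodic trigger fires.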
UPDATE
This will send an intent to your service every minute without using any processor time in your activity in between
Intent myIntent = new Intent(context, MyServiceReceiver.class);
PendingIntent pendingIntent = PendingIntent.getBroadcast(context, 0, myIntent, 0);
AlarmManager alarmManager = (AlarmManager)context.getSystemService(Context.ALARM_SERVICE);
Calendar calendar = Calendar.getInstance();
calendar.setTimeInMillis(System.currentTimeMillis());
calendar.add(Calendar.SECOND, 60); // first time
long frequency = 60 * 1000; // in ms
alarmManager.setRepeating(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), frequency, pendingIntent);
Before that, check whether you really need a new service to be started every minute, or whether you can have one service that checks for data changes every minute; starting a new service each time may consume more resources than checking from within a single running one.
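The snippet above targets MyServiceReceiver with PendingIntent.getBroadcast, but the receiver itself is not shown. A minimal sketch could look like the following; it simply forwards to the hypothetical SaveDataService from the earlier sketch, and it must be declared in the manifest.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// Receiver fired by the AlarmManager every minute; it hands the work to a service
// so the database write happens off the main thread.
public class MyServiceReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        context.startService(new Intent(context, SaveDataService.class));
    }
}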
UPDATE 2
private android.os.Handler mHandler;

private void ping() {
    // periodic action here.
    scheduleNext();
}

private void scheduleNext() {
    mHandler.postDelayed(new Runnable() {
        public void run() { ping(); }
    }, 60000);
}

@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    mHandler = new android.os.Handler();
    ping();
    return START_STICKY;
}
This is a simple example of how you can do it.
