Lua - Socket receive with timer for other event

I am trying to implement a script with a server socket that will also periodically poll data from several sensors (e.g. on the 59th second of every minute). I do not want to serialize the data to disk, but rather keep it in a table which the socket will respond with when polled.
Here's a sketch of the code to illustrate what I am trying to do (I've not included the client code that accesses this server, but that part is OK):
#!/usr/bin/env lua
local socket = require("socket")
local server = assert(socket.bind("*", 0))
local ip, port = server:getsockname()
local data = {}
local count = 1

local function pollSensors()
    -- I do the sensor polling here and add it to the table, e.g. os.time()
    table.insert(data, os.time() .. "\t" .. tostring(count))
    count = count + 1
end

while true do
    local client = server:accept()
    client:settimeout(2)
    local line, err = client:receive()
    -- I process the received line to determine the response;
    -- for illustration I'll just send the number of items in the table
    if not err then client:send("Records: " .. #data .. "\n") end -- table.getn is deprecated; use #
    client:close()
    if os.date("*t").sec == 59 then -- os.time() returns a number, so reading .sec needs os.date("*t")
        pollSensors()
    end
end
I am concerned that the server may occasionally block and that I'll therefore miss the 59th second.
Is this a good way to implement this, or is there a (simpler) better way, say using coroutines? If coroutines would be better, how do I implement them for my scenario?

To accomplish this you need some sort of multitasking.
I'd use a network-aware scheduler; e.g. with cqueues it would look like this:
local cqueues = require "cqueues"
local cs = require "cqueues.socket"

local data = {}
local count = 1

local function pollSensors()
    -- I do the sensor polling here and add it to the table, e.g. os.time()
    table.insert(data, os.time() .. "\t" .. tostring(count))
    count = count + 1
end

local function handle_client(client)
    client:setmode("b", "bn") -- turn on binary mode for the socket and turn off buffering
    -- ported code from the question:
    client:settimeout(2) -- I'm not sure why you chose a 2 second timeout
    local line, err = client:read("*l") -- with cqueues, this read will not block the whole program, but just yield the current coroutine until data arrives
    -- I process the received line to determine the response;
    -- for illustration I'll just send the number of items in the table
    if not err then
        assert(client:write(string.format("Records: %d\n", #data)))
    end
    client:close()
end
local cq = cqueues.new() -- create a new scheduler

-- create the first coroutine, which waits for incoming clients
cq:wrap(function()
    local server = cs.listen{host = "0.0.0.0"; port = "0"}
    local fam, ip, port = server:localname()
    print(string.format("Now listening on ip=%s port=%d", ip, port))
    for client in server:clients() do -- iterates over `accept`ed clients
        -- create a new coroutine for each client, passing the client in
        cqueues.running():wrap(handle_client, client)
    end
end)

-- create the second coroutine, which reads the sensors
cq:wrap(function()
    while true do
        -- I assume you just want to read every 60 seconds, rather than actually *on* the 59th second of each minute
        pollSensors()
        cqueues.sleep(60)
    end
end)

-- run the scheduler until all threads exit
assert(cq:loop())

Periodically launching apps/code like this is usually well served by 'cron' libraries, which exist for many languages.
For instance, a cron library for Lua can be downloaded here.

Related

How to prevent overwriting of database for requests from different instances (Google App Engine using NDB)

My Google App Engine application (Python 3, standard environment) serves requests from users: if there is no wanted record in the database, it creates one.
Here is the problem with database overwriting:
When one user (via browser) sends a request to the database, the running GAE instance may temporarily fail to respond, and GAE then creates a new process to respond to the request. The result is that two instances respond to the same request. Both instances query the database at almost the same time, each finds there is no wanted record, and each creates a new one. The result is two duplicate records.
Another scenario is that, for some reason, the user's browser sends the request twice within less than 0.01 second; the two requests are processed by two instances on the server side, and duplicate records are again created.
I am wondering how one instance can temporarily lock the database to prevent it being overwritten by another instance.
I have considered the following schemes but have no idea whether they are efficient:
For Python 2, Google App Engine provides "memcache", which can be used to mark the status of a query for the purpose of database locking. For Python 3, it seems one has to set up a Redis server to rapidly exchange database status among instances. How efficient is database locking via Redis?
The session module of Flask. It can be used to share data (in most cases, the login status of users) among different requests and thus different instances. I am wondering about the speed of exchanging data between instances this way.
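For scheme 1, the pattern I have in mind is roughly the following (a sketch using the redis-py package; the host and the lock timeout are placeholders). Redis's SET with nx/ex is a single atomic operation, so only one instance can create the lock key:
import time
import redis

r = redis.Redis(host="10.0.0.1", port=6379)  # placeholder Memorystore/Redis address

def with_lock(lock_key, fn, timeout=10):
    # SET NX EX is atomic: exactly one instance succeeds in creating the key
    while not r.set(lock_key, "LOCKED", nx=True, ex=timeout):
        time.sleep(0.01)  # another instance holds the lock; retry shortly
    try:
        return fn()
    finally:
        r.delete(lock_key)  # release; ex=timeout guards against a crashed holder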
Appended information (1)
I followed the advice to use a transaction, but it did not work. Below is the code I used to verify this.
The reason for the failure may be that the transaction only works for the CURRENT client. For multiple requests arriving at the same time, the GAE server side will create different processes or instances to respond, and each process or instance has its own independent client.
@staticmethod
def get_test(test_key_id, unique_user_id, course_key_id, make_new=False):
    client = ndb.Client()
    with client.context():
        from google.cloud import datastore
        from datetime import datetime
        client2 = datastore.Client()
        print("transaction started at: ", datetime.utcnow())
        with client2.transaction():
            print("query started at: ", datetime.utcnow())
            my_test = MyTest.query(MyTest.test_key_id == test_key_id, MyTest.unique_user_id == unique_user_id).get()
            import time
            time.sleep(5)
            if make_new and not my_test:
                print("data to create started at: ", datetime.utcnow())
                my_test = MyTest(test_key_id=test_key_id, unique_user_id=unique_user_id, course_key_id=course_key_id, status="")
                my_test.put()
                print("data created at: ", datetime.utcnow())
        print("transaction ended at: ", datetime.utcnow())
        return my_test
Appended information (2)
Here is new information about the usage of memcache (Python 3).
I tried the following code to lock the database using memcache, but it still failed to prevent overwriting.
@user_student.route("/run_test/<test_key_id>/<user_key_id>/")
def run_test(test_key_id, user_key_id=0):
    from google.appengine.api import memcache
    import time
    cache_key_id = test_key_id + "_" + user_key_id
    print("cache_key_id", cache_key_id)
    counter = 0
    client = memcache.Client()
    while True:  # retry loop
        result = client.gets(cache_key_id)
        if result is None or result == "":
            client.cas(cache_key_id, "LOCKED")
            print("memcache added new value: counter = ", counter)
            break
        time.sleep(0.01)
        counter += 1
        if counter > 500:
            print("failed after 500 tries.")
            break
    my_test = MyTest.get_test(int(test_key_id), current_user.unique_user_id, current_user.course_key_id, make_new=True)
    client.cas(cache_key_id, "")
    memcache.delete(cache_key_id)
If the problem is duplication rather than overwriting, maybe you should specify the data id when creating new entries, instead of letting GAE generate a random one for you. Then the application will write to the same entry twice, instead of creating two entries. The data id can be anything unique, such as a session id, a timestamp, etc.
The problem with a transaction is that it prevents you from modifying the same entry in parallel, but it does not stop you from creating two new entries in parallel.
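For example, a rough sketch against the question's MyTest model (assuming test_key_id plus unique_user_id is enough to identify the record): ndb's get_or_insert looks the key up and creates the entity inside a transaction, so two concurrent requests with the same id end up with one entity instead of two:
# inside `with client.context():`, as in the question's code
key_id = "%s_%s" % (test_key_id, unique_user_id)  # deterministic id instead of an auto-generated one
my_test = MyTest.get_or_insert(key_id,
                               test_key_id=test_key_id,
                               unique_user_id=unique_user_id,
                               course_key_id=course_key_id,
                               status="")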
I used memcache in the following way (using get/set) and succeeded in locking the database writes.
It seems that gets/cas does not work well. In a test, I set the value with cas() but then failed to read it back with gets() later.
Memcache API: https://cloud.google.com/appengine/docs/standard/python3/reference/services/bundled/google/appengine/api/memcache
@user_student.route("/run_test/<test_key_id>/<user_key_id>/")
def run_test(test_key_id, user_key_id=0):
    from google.appengine.api import memcache
    import time
    cache_key_id = test_key_id + "_" + user_key_id
    print("cache_key_id", cache_key_id)
    counter = 0
    client = memcache.Client()
    while True:  # retry loop
        result = client.get(cache_key_id)
        if result is None or result == "":
            client.set(cache_key_id, "LOCKED")
            print("memcache added new value: counter = ", counter)
            break
        time.sleep(0.01)
        counter += 1
        if counter > 500:
            return "failed after 500 tries of memcache checking."
    my_test = MyTest.get_test(int(test_key_id), current_user.unique_user_id, current_user.course_key_id, make_new=True)
    client.delete(cache_key_id)
...
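A caveat on the get/set variant above: get followed by set is not atomic, so two instances can both read an empty value and both believe they hold the lock; if it works in practice, it is because the race window is tiny, not because it is safe. memcache's add() refuses to write when the key already exists, in one atomic step, so a safer retry loop (a sketch, reusing the names above) would be:
while True:  # retry loop
    if client.add(cache_key_id, "LOCKED", time=30):  # atomic: fails if the key already exists
        break  # we own the lock; time=30 expires it if this instance crashes
    time.sleep(0.01)
    counter += 1
    if counter > 500:
        return "failed after 500 tries of memcache checking."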
Transactions:
https://developers.google.com/appengine/docs/python/datastore/transactions
When two or more transactions simultaneously attempt to modify entities in one or more common entity groups, only the first transaction to commit its changes can succeed; all the others will fail on commit.
You should be updating your values inside a transaction. App Engine's transactions will prevent two updates from overwriting each other as long as your read and write are within a single transaction. Be sure to pay attention to the discussion about entity groups.
You have two options:
Implement your own logic for transaction failures (how many times to retry, etc.), as in the sketch below.
Instead of writing to the datastore directly, create a task to modify an entity. Run a transaction inside the task. If it fails, App Engine will retry the task until it succeeds.
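A minimal sketch of the first option (assuming the google.cloud.ndb client; retries is the number of commit conflicts to absorb before raising):
from google.cloud import ndb

@ndb.transactional(retries=3)  # re-run automatically on commit conflicts
def update_status(key, new_status):
    ent = key.get()         # read inside the transaction...
    ent.status = new_status
    ent.put()               # ...so a concurrent commit forces a retry instead of a silent overwrite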

GCP PubSub - How to enqueue asynchronous message?

I would like information about publisher settings in the GCP Pub/Sub environment. I would like to enqueue messages that will be consumed via a Google Cloud Function. To achieve this, publication should trigger when a number of messages is reached or after a certain time.
I set the topic as follows:
topic.PublishSettings = pubsub.PublishSettings{
    ByteThreshold:  1e6,              // Publish a batch when its size in bytes reaches this value (1e6 bytes = 1 MB).
    CountThreshold: 100,              // Publish a batch when it has this many messages.
    DelayThreshold: 10 * time.Second, // Publish a non-empty batch after this delay has passed.
}
When I call the publish function, I get a 10 second delay on each call. Messages do not seem to be added to the queue...
for _, v := range list {
    ctx := context.Background()
    res := a.Topic.Publish(ctx, &pubsub.Message{Data: v})
    // Block until the result is returned and a server-generated
    // ID is returned for the published message.
    serverID, err = res.Get(ctx)
    if err != nil {
        return "", err
    }
}
Can someone help me?
Cheers
Batching on the publisher side is designed to allow for more cost efficiency when sending messages to Google Cloud Pub/Sub. Given that the minimum billing unit for the service is 1KB, it can be cheaper to send multiple messages in the same Publish request. For example, sending two 0.5KB messages as separate Publish requests would result in being charged for sending 2KB of data (1KB for each). If one were to batch them into a single Publish request, it would be charged as 1KB of data.
The tradeoff with batching is latency: in order to fill up batches, the publisher has to wait to receive more messages to batch together. The three batching properties (ByteThreshold, CountThreshold, and DelayThreshold) allow one to control the level of that tradeoff. The first two properties control how much data or how many messages we put in a single batch. The last property controls how long the publisher should wait to send a batch.
As an example, imagine you have CountThreshold set to 100. If you are publishing few messages, it could take awhile to receive 100 messages to send as a batch. This means that the latency for messages in that batch will be higher because they are sitting in the client waiting to be sent. With DelayThreshold set to 10 seconds, that means that a batch would be sent if it had 100 messages in it or if the first message in the batch was received at least 10 seconds ago. Therefore, this is putting a limit on the amount of latency to introduce in order to have more data in an individual batch.
The code as you have it is going to result in batches with only a single message that each take 10 seconds to publish. The reason is the call to res.Get(ctx), which will block until the message has been successfully sent to the server. With CountThreshold set to 100 and DelayThreshold set to 10 seconds, the sequence that is happening inside your loop is:
A call to Publish puts a message in a batch to publish.
That batch is waiting to receive 99 more messages or for 10 seconds to pass before sending the batch to the server.
The code is waiting for this message to be sent to the server and return with a serverID.
Given the code doesn't call Publish again until res.Get(ctx) returns, it waits 10 seconds to send the batch.
res.Get(ctx) returns with a serverID for the single message.
Go back to 1.
If you actually want to batch messages together, you can't call res.Get(ctx) before the next Publish call. You'll want either to call Publish inside a goroutine (one goroutine per message) or to amass the res objects in a list and then call Get on them outside the loop, e.g.:
var res []*pubsub.PublishResult
ctx := context.Background()
for _, v := range list {
    res = append(res, a.Topic.Publish(ctx, &pubsub.Message{Data: v}))
}
for _, r := range res {
    serverID, err = r.Get(ctx)
    if err != nil {
        return "", err
    }
}
Something to keep in mind is that batching will optimize cost on the publish side, not on the subscribe side. Cloud Functions is built with push subscriptions. This means that messages must be delivered to the subscriber one at a time (since the response code is what is used to ack or nack each message), which means there is no batching of messages delivered to the subscriber.
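For comparison, the Python publisher client exposes the same three knobs; a sketch assuming the google-cloud-pubsub package and placeholder project/topic names, collecting the futures first and resolving them afterwards, mirroring the corrected Go loop above:
from google.cloud import pubsub_v1

batch_settings = pubsub_v1.types.BatchSettings(
    max_bytes=1000000,  # ~ByteThreshold
    max_messages=100,   # ~CountThreshold
    max_latency=10,     # seconds, ~DelayThreshold
)
publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
topic_path = publisher.topic_path("my-project", "my-topic")  # placeholders

futures = [publisher.publish(topic_path, data=payload) for payload in payloads]
ids = [f.result() for f in futures]  # block only after every message is handed to the batcher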

Appengine looping across large datasets

I need to loop over a large dataset within App Engine. Of course, as the datastore times out after a small amount of time, I decided to use tasks to solve this problem. Here's an attempt to explain the method I'm trying to use:
Initialization of task via http post
0) Create query (entity.query()), and set a batch_size limit (i.e. 500)
1) Check if there are any cursors--if this is the first time running, there won't be any.
2a) If there are no cursors, use iter() with the following options: produce_cursors = true, limit = batch_size
2b) If there are cursors, use iter() with the same options as 2a, plus start_cursor set to the cursor.
3) Do a for loop to iterate through the results pulled by iter()
4) Get cursor_after()
5) Queue new task (basically re-run the task that was running) passing the cursor into the payload.
So if this code worked the way I wanted, there'd only be one task running at any particular time in the queue. However, I started running the task this morning, and when I looked at the queue three hours later, there were four tasks in it! This is weird because a new task should only be launched at the end of the task that launches it.
Here's the actual code with no edits:
class send_missed_swipes(BaseHandler):  # disabled
    def post(self):
        """Loops across entire database (as filtered)"""
        # Settings
        BATCH_SIZE = 500
        cursor = self.request.get('cursor')
        start = datetime.datetime(2014, 2, 13, 0, 0, 0, 0)
        end = datetime.datetime(2014, 3, 5, 0, 0, 0, 0)
        # Filters
        swipes = responses.query()
        swipes = swipes.filter(responses.date > start)
        if cursor:
            num_updated = int(self.request.get('num_updated'))
            cursor = ndb.Cursor.from_websafe_string(cursor)
            swipes = swipes.iter(produce_cursors=True, limit=BATCH_SIZE, start_cursor=cursor)
        else:
            num_updated = 0
            swipes = swipes.iter(produce_cursors=True, limit=BATCH_SIZE)
        count = 0
        for swipe in swipes:
            count += 1
            if swipe.date > end:
                pass
            else:
                uKey = str(swipe.uuId.urlsafe())
                pKey = str(swipe.pId.urlsafe())
                act = swipe.act
                taskqueue.add(queue_name="analyzeData", url="/admin/analyzeData/send_swipes", params={'act': act, 'uKey': uKey, 'pKey': pKey})
                num_updated += 1
        logging.info('count = ' + str(count))
        logging.info('num updated = ' + str(num_updated))
        cursor = swipes.cursor_after().to_websafe_string()
        taskqueue.add(queue_name="default", url="/admin/analyzeData/send_missed_swipes", params={'cursor': cursor, 'num_updated': num_updated})
This is a bit of a complicated question, so please let me know if I need to explain it better. And thanks for the help!
p.s. Threadsafe is false in app.yaml
I believe a task can be executed multiple times; therefore it is important to make your process idempotent.
From the docs (https://developers.google.com/appengine/docs/python/taskqueue/overview-push):
Note that this example is not idempotent. It is possible for the task queue to execute a task more than once. In this case, the counter is incremented each time the task is run, possibly skewing the results.
You can create a task with a name to handle this:
https://developers.google.com/appengine/docs/python/taskqueue/#Python_Task_names
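For instance, naming the continuation task after the cursor makes a second enqueue of the same continuation fail instead of duplicating work (a sketch with a hypothetical naming scheme; task names are restricted to letters, digits, hyphens and underscores, hence the hash):
import hashlib
from google.appengine.api import taskqueue

task_name = "send-missed-swipes-" + hashlib.md5(cursor).hexdigest()
try:
    taskqueue.add(name=task_name,
                  queue_name="default",
                  url="/admin/analyzeData/send_missed_swipes",
                  params={'cursor': cursor, 'num_updated': num_updated})
except (taskqueue.TaskAlreadyExistsError, taskqueue.TombstonedTaskError):
    pass  # this continuation was already enqueued; don't add a duplicate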
I'm curious why threadsafe=False in your yaml?
A bit off-topic (since I'm not addressing your problems), but this sounds like a job for MapReduce.
On topic: you can create a custom queue with max_concurrent_requests=1 (see the queue.yaml sketch below). You could still have multiple tasks in the queue, but only one would execute at a time.
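The custom queue is plain configuration; a sketch of the queue.yaml entry (queue name assumed):
queue:
- name: send-missed-swipes
  rate: 5/s
  max_concurrent_requests: 1  # at most one task from this queue runs at a time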

Task Parallel Library and SQL Connections

I'm hoping someone can confirm what is actually happening here with TPL and SQL connections.
Basically, I have a large application which, in essence, reads a table from SQL Server and then processes each row serially. The processing of each row can take quite some time. So I thought to change this to use the Task Parallel Library, with a Parallel.ForEach across the rows in the DataTable. This seems to work for a little while (minutes), then it all goes pear-shaped with...
"The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
Now, I surmised the following (which may of course be entirely wrong).
The "ForEach" creates tasks for each row, up to some limit based on the number of cores (or whatever). Lets say 4 for want of a better idea. Each of the four tasks gets a row, and goes off to process it. TPL waits until the machine is not too busy, and fires up some more. I'm expecting a max of four.
But that's not what I observe - and not what I think is happening.
So... I wrote a quick test (see below):
Sub Main()
    Dim tbl As New DataTable()
    FillTable(tbl)
    Parallel.ForEach(tbl.AsEnumerable(), AddressOf ProcessRow)
End Sub

Private n As Integer = 0

Sub ProcessRow(row As DataRow, state As ParallelLoopState)
    n += 1 ' I know... not thread safe
    Console.WriteLine("Starting thread {0}({1})", n, Thread.CurrentThread.ManagedThreadId)
    Using cnx As SqlConnection = New SqlConnection(My.Settings.ConnectionString)
        cnx.Open()
        Thread.Sleep(TimeSpan.FromMinutes(5))
        cnx.Close()
    End Using
    Console.WriteLine("Closing thread {0}({1})", n, Thread.CurrentThread.ManagedThreadId)
    n -= 1
End Sub
This creates way more than my guess at the number of tasks. So I surmise that TPL fires up tasks to the limit it thinks will keep my machine busy, notices we're not very busy, so starts some more; still not very busy, so... etc. (it seems like roughly one new task per second).
This is reasonable-ish, but I'd expect it to go pop 30 seconds (the SQL connection timeout) after it reaches 100 open SQL connections (the default connection pool size), which it doesn't.
So, to scale it back a bit, I changed my connection string to limit the max pool size:
Sub Main()
    Dim tbl As New DataTable()
    Dim csb As New SqlConnectionStringBuilder(My.Settings.ConnectionString)
    csb.MaxPoolSize = 10
    csb.ApplicationName = "Test 1"
    My.Settings("ConnectionString") = csb.ToString()
    FillTable(tbl)
    Parallel.ForEach(tbl.AsEnumerable(), AddressOf ProcessRow)
End Sub
I count the real number of connections to the SQL Server, and as expected it's 10. But my application has fired up 26 tasks, and then it hangs. So, setting the max pool size for SQL somehow limited the number of tasks to 26, but why not 27? And especially, why doesn't it fall over at 11 because the pool is full?
Obviously, somewhere along the line I'm asking for more work than my machine can do, and I can add MaxDegreeOfParallelism to the ForEach, but I'm interested in what's actually going on here.
PS.
Actually, after sitting with 26 tasks for (I'm guessing) 5 minutes, it does fall over with the original (max pool size reached) error. Huh ?
Thanks.
Edit 1:
Actually, what I now think happens in the tasks (my ProcessRow method) is that after 10 successful connections/tasks, the 11th blocks for the connection timeout and then gets the original exception, as do any subsequent tasks.
So I conclude that the TPL is creating tasks at about one per second, and it has enough time to create about 26/27 before task 11 throws an exception. All subsequent tasks then also throw exceptions (about a second apart), and the TPL stops creating new tasks (because it gets unhandled exceptions in one or more tasks?).
For some reason (as yet undetermined), the ForEach then hangs for a while. If I modify my ProcessRow method to use the state to say "stop", it appears to have no effect.
Sub ProcessRow(row As DataRow, state As ParallelLoopState)
    n += 1
    Console.WriteLine("Starting thread {0}({1})", n, Thread.CurrentThread.ManagedThreadId)
    Try
        Using cnx As SqlConnection = fnNewConnection()
            Thread.Sleep(TimeSpan.FromMinutes(5))
        End Using
    Catch ex As Exception
        Console.WriteLine("Exception on thread {0}", Thread.CurrentThread.ManagedThreadId)
        state.Stop()
        Throw
    End Try
    Console.WriteLine("Closing thread {0}({1})", n, Thread.CurrentThread.ManagedThreadId)
    n -= 1
End Sub
Edit 2:
Dur... The reason for the long delay is that, while tasks 11 onwards all crash and burn, tasks 1 to 10 don't; they all sit there sleeping for 5 minutes. The TPL has stopped creating new tasks (because of the unhandled exception in one or more of the tasks it has created) and then waits for the un-crashed tasks to complete.
The edits to the original question add more detail and, eventually, the answer becomes apparent.
TPL creates tasks repeatedly because the tasks it has created are (basically) idle. This is fine until the connection pool is exhausted, at which point the tasks which want a new connection wait for one to become available, and timeout. In the meantime, the TPL is still creating more tasks, all doomed to fail. After the connection timeout, the tasks start failing, and the ensuing exception(s) cause the TPL to stop creating new tasks. The TPL then waits for the tasks that did get connections to complete, before an AggregateException is thrown.
The TPL is not made for IO-bound work. It has heuristics which it uses to steer the count of threads being active. These heuristics fail for long-running and/or IO-bound tasks, causing it to inject more and more threads without a practical limit.
Use PLINQ to set a fixed number of threads, using WithDegreeOfParallelism; you should probably test different amounts. I have written much more about this topic on SO, but I can't find it at the moment.
I have no idea why you are seeing exactly 26 threads in your example. Note that when the pool is depleted, a request for a connection fails only after a timeout. This entire system is very non-deterministic, and I'd consider any number of threads plausible.
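Whatever the stack, the practical fix is the same: pin the number of concurrent workers at or below the connection pool size instead of letting the scheduler guess. A rough sketch of that principle in Python (an analogy, not the .NET API; names are placeholders):
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 10  # match the database connection pool size

def process_row(row):
    # open a pooled connection, do the long-running work, close the connection
    pass

with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:  # never more workers than connections
    list(pool.map(process_row, rows))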

Erlang mnesia database access

I have designed a mnesia database with 5 different tables. The idea is to simulate queries from many nodes (computers), not just one. At the moment I can execute a query from the terminal, but I need help making it such that I am requesting information from many computers. I am testing for scalability and want to investigate the performance of mnesia versus other databases. Any ideas will be highly appreciated.
The best way to test mnesia is by running an intensive threaded job both on the local Erlang node where mnesia is running and on remote nodes. Usually, you want remote nodes to use RPC calls in which reads and writes are executed on the mnesia tables. Of course, with high concurrency comes a trade-off: transaction speed will drop, and many transactions may be retried since locks may be many at a given time. But mnesia will ensure that all processes receive an {atomic,ok} for each transactional call they make.
The Concept
I propose that we have a non-blocking overload, with both writes and reads directed at each mnesia table by as many processes as possible. We measure the time difference between the call to the write function and the time it takes our massive mnesia subscriber to get a write event. These events are sent by mnesia after every successful transaction, so we need not interrupt the working/overloading processes, but rather let a "strong" mnesia subscriber wait for asynchronous events reporting successful deletes and writes as soon as they occur.
The technique is as follows: we take a timestamp just before calling the write function and note down the record key and the write CALL timestamp. Our mnesia subscriber then notes down the record key and the write/read EVENT timestamp. The difference between these two timestamps (call it the CALL-to-EVENT time) gives us a rough idea of how loaded, or how efficient, we are. As locks increase with concurrency, we should register an increasing CALL-to-EVENT time. Processes doing writes (unlimited) will do so concurrently, while those doing reads will also continue without interruption. We will choose the number of processes for each operation, but let's first lay the ground for the entire test case.
All of the above concept is for local operations (processes running on the same node as mnesia).
--> Simulating Many Nodes
Well, I have personally not simulated nodes in Erlang; I have always worked with real Erlang nodes on the same box or on several different machines in a networked environment. However, I advise that you look closely at this module: http://www.erlang.org/doc/man/slave.html, concentrate more on this one: http://www.erlang.org/doc/man/ct_slave.html, and look at the following links as they talk about creating, simulating and controlling many nodes under another parent node (http://www.erlang.org/doc/man/pool.html, Erlang: starting slave node, https://support.process-one.net/doc/display/ERL/Starting+a+set+of+Erlang+cluster+nodes, http://www.berabera.info/oldblog/lenglet/howtos/erlangkerberosremctl/index.html). I will not dive into a jungle of Erlang nodes here because that is another complicated topic; instead I will concentrate on tests on the same node running mnesia. I have come up with the above mnesia test concept, so let's start implementing it.
Now, first of all, you need to make a test plan for each table (separately). This should include both writes and reads. Then you need to decide whether you want to do dirty operations or transactional operations on the tables. You also need to test the speed of traversing a mnesia table in relation to its size. Let's take the example of a simple mnesia table:
-record(key_value,{key,value,instanceId,pid}).
We would want to have a general function for writing into our table, as below:
write(Record) ->
    %% Use mnesia:activity/4 to test several activity
    %% contexts (and if your table is fragmented),
    %% like the commented code below:
    %%
    %% mnesia:activity(
    %%     transaction, %% sync_transaction | async_dirty | ets | sync_dirty
    %%     fun(Y) -> mnesia:write(Y) end,
    %%     [Record],
    %%     mnesia_frag
    %% )
    mnesia:transaction(fun() -> ok = mnesia:write(Record) end).
And for our reads, we will have:
read(Key) ->
    %% Use mnesia:activity/4 to test several activity
    %% contexts (and if your table is fragmented),
    %% like the commented code below:
    %%
    %% mnesia:activity(
    %%     transaction, %% sync_transaction | async_dirty | ets | sync_dirty
    %%     fun(Y) -> mnesia:read({key_value,Y}) end,
    %%     [Key],
    %%     mnesia_frag
    %% )
    mnesia:transaction(fun() -> mnesia:read({key_value, Key}) end).
Now, we want to write very many records into our small table. We need a key generator, which will be our own pseudo-random string generator. However, we need the generator to tell us the instant it generates a key, so we can record it. We want to see how long it takes to write a generated key. Let's put it down like this:
timestamp() -> erlang:now().

str(XX) -> integer_to_list(XX).

generate_instance_id() ->
    %% note: the original version called guid() here, but guid/0 calls
    %% generate_instance_id/0, which would recurse forever
    random:seed(now()),
    str(crypto:rand_uniform(1, 65536 * 65536)) ++ str(erlang:phash2({self(), make_ref(), time()})).

guid() ->
    random:seed(now()),
    MD5 = erlang:md5(term_to_binary({self(), time(), node(), now(), make_ref()})),
    MD5List = binary_to_list(MD5),
    F = fun(N) -> io_lib:format("~2.16.0B", [N]) end,
    L = lists:flatten([F(N) || N <- MD5List]),
    %% tell our massive mnesia subscriber about this generation
    InstanceId = generate_instance_id(),
    mnesia_subscriber ! {self(), {key, write, L, timestamp(), InstanceId}},
    {L, InstanceId}.
To make very many concurrent writes, we need a function which will be executed by the many processes we will spawn. In this function, it's desirable NOT to put any blocking functions such as sleep/1, usually implemented as sleep(T) -> receive after T -> true end. Such a function makes a process hang for the specified milliseconds. mnesia_tm does the lock control, retries, blocking, etc. on behalf of the processes to avoid deadlocks. Let's say we want each process to write an unlimited number of records. Here is our function:
-define(NO_OF_PROCESSES, 20).

start_write_jobs() ->
    [spawn(?MODULE, generate_and_write, []) || _ <- lists:seq(1, ?NO_OF_PROCESSES)],
    ok.

generate_and_write() ->
    %% Remember that in the function ?MODULE:guid/0,
    %% we inform our mnesia_subscriber about our generated key
    %% together with the timestamp of the generation, just before
    %% a write is made.
    %% The subscriber will note this down in an ETS table and then
    %% wait for the mnesia event about the write operation. Then it will
    %% take the event timestamp and calculate the time difference,
    %% from which we can judge performance.
    %% In this case, we make the processes do unlimited writes
    %% into our mnesia tables. Our subscriber will trap the events as soon as
    %% a successful write is made in mnesia.
    %% For all keys we just write a zero as the value.
    {Key, Instance} = guid(),
    write(#key_value{key = Key, value = 0, instanceId = Instance, pid = self()}),
    generate_and_write().
Likewise, let's see how the read jobs will be done.
We will have a key provider; this key provider keeps rotating around the mnesia table, picking only keys, up and down the table. Here is its code:
first() -> mnesia:dirty_first(key_value).

next(FromKey) -> mnesia:dirty_next(key_value, FromKey).

start_key_picker() -> register(key_picker, spawn(fun() -> key_picker() end)).

key_picker() ->
    try ?MODULE:first() of
        '$end_of_table' ->
            io:format("\n\tTable is empty, my dear !~n", []),
            %% let's throw something in there to start with
            {NewKey, _} = guid(), %% guid/0 returns {Key, InstanceId}
            ?MODULE:write(#key_value{key = NewKey, value = 0}),
            key_picker();
        Key -> wait_key_reqs(Key)
    catch
        EXIT:REASON ->
            error_logger:error_report(["Key Picker dies", {EXIT, REASON}]),
            exit({EXIT, REASON})
    end.

wait_key_reqs('$end_of_table') ->
    receive
        {From, <<"get_key">>} ->
            Key = ?MODULE:first(),
            From ! {self(), Key},
            wait_key_reqs(?MODULE:next(Key));
        {_, <<"stop">>} -> exit(normal)
    end;
wait_key_reqs(Key) ->
    receive
        {From, <<"get_key">>} ->
            From ! {self(), Key},
            NextKey = ?MODULE:next(Key),
            wait_key_reqs(NextKey);
        {_, <<"stop">>} -> exit(normal)
    end.
key_picker_rpc(Command) ->
    try erlang:send(key_picker, {self(), Command}) of
        _ ->
            receive
                {_, Reply} -> Reply
            after timer:seconds(60) ->
                %% key_picker hung, or is too busy
                erlang:throw({key_picker, hanged})
            end
    catch
        _:_ ->
            %% key_picker dead
            start_key_picker(),
            timer:sleep(timer:seconds(5)),
            key_picker_rpc(Command)
    end.
%% Now, this is where the reader processes will be
%% accessing keys. It will appear to them as though
%% it's random, because one process is doing the
%% traversal. It will all be a game of chance,
%% depending on the scheduler's choice of
%% who gets the next read. Okay, let's get going below :)

get_key() ->
    Key = key_picker_rpc(<<"get_key">>),
    %% let's report to our "massive" mnesia subscriber
    %% about a read which is about to happen,
    %% together with a timestamp
    Instance = generate_instance_id(),
    mnesia_subscriber ! {self(), {key, read, Key, timestamp(), Instance}},
    {Key, Instance}.
Wow!!! Now we need to create the function that will start all the readers.
-define(NO_OF_READERS, 10).

start_read_jobs() ->
    [spawn(?MODULE, constant_reader, []) || _ <- lists:seq(1, ?NO_OF_READERS)],
    ok.

constant_reader() ->
    {Key, InstanceId} = ?MODULE:get_key(),
    {atomic, [Record]} = ?MODULE:read(Key), %% read/1 returns {atomic, Records}; assumes the key still exists
    %% tell mnesia_subscriber that a read has been done, so it creates a timestamp
    mnesia:report_event({read_success, Record, self(), InstanceId}),
    constant_reader().
Now, the biggest part: mnesia_subscriber!!! This is a simple process that subscribes
to simple events. Get the mnesia events documentation from the mnesia User's Guide.
Here is the mnesia subscriber:
-record(read_instance, {
    instance_id,
    before_read_time,
    after_read_time,
    read_time %% after_read_time - before_read_time
}).

-record(write_instance, {
    instance_id,
    before_write_time,
    after_write_time,
    write_time %% after_write_time - before_write_time
}).

-record(benchmark, {
    id, %% {pid(),Key}
    read_instances = [],
    write_instances = []
}).

subscriber() ->
    mnesia:subscribe({table, key_value, simple}),
    %% let's also subscribe for system
    %% events, because events passed through
    %% mnesia:report_event/1 go via
    %% system events
    mnesia:subscribe(system),
    wait_events().

-include_lib("stdlib/include/qlc.hrl").

wait_events() ->
    receive
        {From, {key, write, Key, TimeStamp, InstanceId}} ->
            %% A process is just about to call
            %% mnesia:write/1, so we note this down
            Fun = fun() ->
                case qlc:e(qlc:q([X || X <- mnesia:table(benchmark), X#benchmark.id == {From, Key}])) of
                    [] ->
                        ok = mnesia:write(#benchmark{
                            id = {From, Key},
                            write_instances = [
                                #write_instance{
                                    instance_id = InstanceId,
                                    before_write_time = TimeStamp
                                }]
                        }),
                        ok;
                    [Here] ->
                        WIs = Here#benchmark.write_instances,
                        NewInstance = #write_instance{
                            instance_id = InstanceId,
                            before_write_time = TimeStamp
                        },
                        ok = mnesia:write(Here#benchmark{write_instances = [NewInstance | WIs]}),
                        ok
                end
            end,
            mnesia:transaction(Fun),
            wait_events();
        {mnesia_table_event, {write, #key_value{key = Key, instanceId = I, pid = From}, _ActivityId}} ->
            %% A process has successfully made a write. So we look it up,
            %% get the timestamp difference, and finish benchmarking that write
            WriteTimeStamp = timestamp(),
            F = fun() ->
                [Here] = mnesia:read({benchmark, {From, Key}}),
                WIs = Here#benchmark.write_instances,
                {_, WriteInstance} = lists:keysearch(I, 2, WIs),
                BeforeTmStmp = WriteInstance#write_instance.before_write_time,
                NewWI = WriteInstance#write_instance{
                    after_write_time = WriteTimeStamp,
                    write_time = time_diff(WriteTimeStamp, BeforeTmStmp)
                },
                ok = mnesia:write(Here#benchmark{write_instances = [NewWI | lists:keydelete(I, 2, WIs)]}),
                ok
            end,
            mnesia:transaction(F),
            wait_events();
        {From, {key, read, Key, TimeStamp, InstanceId}} ->
            %% A process is just about to do a read
            %% using mnesia:read/1, so we note this down
            Fun = fun() ->
                case qlc:e(qlc:q([X || X <- mnesia:table(benchmark), X#benchmark.id == {From, Key}])) of
                    [] ->
                        ok = mnesia:write(#benchmark{
                            id = {From, Key},
                            read_instances = [
                                #read_instance{
                                    instance_id = InstanceId,
                                    before_read_time = TimeStamp
                                }]
                        }),
                        ok;
                    [Here] ->
                        RIs = Here#benchmark.read_instances,
                        NewInstance = #read_instance{
                            instance_id = InstanceId,
                            before_read_time = TimeStamp
                        },
                        ok = mnesia:write(Here#benchmark{read_instances = [NewInstance | RIs]}),
                        ok
                end
            end,
            mnesia:transaction(Fun),
            wait_events();
        {mnesia_system_event, {mnesia_user, {read_success, #key_value{key = Key}, From, I}}} ->
            %% A process has successfully made a read. So we look it up,
            %% get the timestamp difference, and finish benchmarking that read
            ReadTimeStamp = timestamp(),
            F = fun() ->
                [Here] = mnesia:read({benchmark, {From, Key}}),
                RIs = Here#benchmark.read_instances,
                {_, ReadInstance} = lists:keysearch(I, 2, RIs),
                BeforeTmStmp = ReadInstance#read_instance.before_read_time,
                NewRI = ReadInstance#read_instance{
                    after_read_time = ReadTimeStamp,
                    read_time = time_diff(ReadTimeStamp, BeforeTmStmp)
                },
                ok = mnesia:write(Here#benchmark{read_instances = [NewRI | lists:keydelete(I, 2, RIs)]}),
                ok
            end,
            mnesia:transaction(F),
            wait_events();
        _ -> wait_events() %% note: no trailing ';' before `end`
    end.
time_diff(After, Before) ->
    %% element-wise subtraction of erlang:now() tuples is not a meaningful
    %% duration; timer:now_diff/2 returns the difference in microseconds
    timer:now_diff(After, Before).
Alright! That was huge :) So we are done with the subscriber. We need to put together the code that will crown it all and run the necessary tests.
install() ->
    mnesia:stop(), %% was `mnesia:stop().` — a period here would end the function early
    mnesia:delete_schema([node()]),
    mnesia:create_schema([node()]),
    mnesia:start(),
    {atomic, ok} = mnesia:create_table(key_value, [
        {attributes, record_info(fields, key_value)},
        {disc_copies, [node()]}
    ]),
    {atomic, ok} = mnesia:create_table(benchmark, [
        {attributes, record_info(fields, benchmark)},
        {disc_copies, [node()]}
    ]),
    mnesia:stop(),
    ok.
start() ->
    mnesia:start(),
    ok = mnesia:wait_for_tables([key_value, benchmark], timer:seconds(120)),
    %% boot up our subscriber
    register(mnesia_subscriber, spawn(?MODULE, subscriber, [])),
    start_write_jobs(),
    start_key_picker(),
    start_read_jobs(),
    ok.
Now, with proper analysis of the benchmark table records, you will get a record of average read times, average write times, etc. You can draw a graph of these times against an increasing number of processes; as the number of processes increases, the read and write times increase too. Get the code, read it and make use of it. You may not use all of it, but I am sure you could pick up new concepts from it as others send in their solutions. Using mnesia events is the best way to test mnesia reads and writes without blocking the processes doing the actual writing or reading. In the example above, the reading and writing processes are out of any control; in fact, they will run forever until you terminate the VM. You can traverse the benchmark table with a good formula to make use of the read and write times per read or write instance, and then calculate averages, variances, etc.
Testing from remote computers, simulating nodes, and benchmarking against other DBMSs may not be very relevant, for many reasons. The concepts, motivations and goals of mnesia are very different from those of several existing database types: document-oriented DBs, RDBMSs, object-oriented DBs, etc. In fact, mnesia ought to be compared with a database such as this one. It is a distributed DBMS with hybrid/unstructured data structures which belong to the language Erlang. Benchmarking mnesia against another type of database may not be right, because its purpose is very different from many, as is its tight coupling with Erlang/OTP. However, knowledge of how mnesia works, of transaction contexts, indexing, concurrency and distribution, can be key to a good database design. Mnesia can store very complex data structures. Remember, the more complex a data structure is, with nested information, the more work is required to unpack it and extract the information you need at run-time, which means more CPU cycles and memory. Sometimes, normalization with mnesia may simply result in poor performance, so the implementation of its concepts is far away from other databases.
It's good that you are interested in mnesia performance across several machines (distributed); however, the performance is only as good as Distributed Erlang is. The great thing is that atomicity is ensured for every transaction. Concurrent requests from remote nodes can still be sent via RPC calls. Remember that if you have multiple replicas of mnesia on different machines, processes running on each node will write to that very node, and mnesia will carry on from there with its replication. Mnesia is very fast at replication, unless the network is really doing badly and/or the nodes are not connected, or the network is partitioned at runtime.
Mnesia ensures consistency and atomicity of CRUD operations. For this reason, replicated mnesia databases depend highly on network availability for good performance. As long as the Erlang nodes remain connected, two or more mnesia nodes will always have the same data. Reads on one node will ensure that you get the most recent information. Problems arise when a disconnection occurs and each node registers the other as though it were down. More information on mnesia's performance can be found by following these links:
http://igorrs.blogspot.com/2010/05/mnesia-one-year-later.html
http://igorrs.blogspot.com/2010/05/mnesia-one-year-later-part-2.html
http://igorrs.blogspot.com/2010/05/mnesia-one-year-later-part-3.html
http://igorrs.blogspot.com/2009/11/consistent-hashing-for-mnesia-fragments.html
As a consequence, the concepts behind mnesia can only be compared with Ericsson's NDB database, found here: http://ww.dolphinics.no/papers/abstract/ericsson.html, but not with existing RDBMSs or document-oriented databases, etc. Those are my thoughts :) let's wait for what others have to say...
You start additional nodes using a command like this:
erl -name test1@127.0.0.1 -cookie devel \
    -mnesia extra_db_nodes "['devel@127.0.0.1']" \
    -s mnesia start
where 'devel@127.0.0.1' is the node where mnesia is already set up. In this case all tables will be accessed from the remote node, but you can make local copies with mnesia:add_table_copy/3.
Then you can use spawn/2 or spawn/4 to start load generation on all nodes with something like:
lists:foreach(fun(N) ->
                  spawn(N, fun() ->
                      %% generate some load
                      ok
                  end)
              end,
              ['test1@127.0.0.1', 'test2@127.0.0.1']).
