Querying real-time data from a SQL Server database: sudden latency problem - sql-server

We are testing an application that is supposed to display real-time data for multiple users at a 1-second interval. Every second the server application inserts 128 new rows into a SQL Server database; each user then has to query those rows along with another 128 older, referential rows.
We tested the query time and it didn't exceed 30 milliseconds; the interface function that invokes the query, including processing of the data, didn't take more than 50 milliseconds.
We developed a test client that creates a thread and a SQL connection for each user. Each user issues 7 queries every second. Everything starts fine, and no user takes more than 300 milliseconds for the 7 data series (queries). However, after 10 minutes the latency exceeds 1 second and keeps increasing. We don't know whether the problem is SQL Server 2008 handling multiple concurrent requests, or how to overcome such a problem.
Here's our test client, in case it helps. Note that the client and server run on the same 8-CPU machine with 8 GB of RAM. We're now questioning whether a database is the right solution for us at all.
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Enter Number of threads");
        int threads = int.Parse(Console.ReadLine());
        ArrayList l = new ArrayList();
        for (int i = 0; i < threads; i++)
        {
            User u = new User();
            Thread th = new Thread(u.Start);
            th.IsBackground = true;
            th.Start();
            l.Add(u);
            l.Add(th);
        }
        Thread.CurrentThread.Join();
        GC.KeepAlive(l);
    }
}

class User
{
    BusinessServer client; // the data base interface dll
    public static int usernumber = 0;
    static TextWriter log;

    public User()
    {
        client = new BusinessServer(); // creates an SQL connection in the constructor
        Interlocked.Increment(ref usernumber);
    }

    public static void SetLog(int processnumber)
    {
        log = TextWriter.Synchronized(new StreamWriter(processnumber + ".txt"));
    }

    public void Start()
    {
        Dictionary<short, symbolStruct> companiesdic = client.getSymbolData();
        short[] symbolids = companiesdic.Keys.ToArray();
        Stopwatch sw = new Stopwatch();
        while (true)
        {
            int current;
            sw.Start();
            current = client.getMaxCurrentBarTime();
            for (int j = 0; j < 7; j++)
            {
                // this is the function that has the queries
                client.getValueAverage(dataType.mv, symbolids,
                    action.Add, actionType.Buy,
                    calculationType.type1,
                    weightType.freeFloatingShares, null, 10, current, functionBehaviour.difference);
            }
            sw.Stop();
            Console.WriteLine(DateTime.Now.ToString("hh:mm:ss") + "\t" + sw.ElapsedMilliseconds);
            if (sw.ElapsedMilliseconds > 1000)
            {
                Console.WriteLine("warning");
            }
            sw.Reset();
            long diff = 0; //(1000 - sw.ElapsedMilliseconds);
            long sleep = diff > 0 ? diff : 1000;
            Thread.Sleep((int)sleep);
        }
    }
}

Warning: this answer is based on knowledge of MSSQL 2000 - not sure if it is still correct.
If you do a lot of inserts, the indexes will eventually get out of date and the server will automatically switch to table scans until the indexes are rebuilt. Some of this is done automatically, but you may want to force reindexing periodically if this kind of performance is critical.
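If you do go down that route, a minimal sketch of such a maintenance step might look like this (the table name dbo.BarData and the connection string are placeholders I've made up, not taken from the question; REORGANIZE is the lighter, online-friendly alternative to REBUILD):
    // Hedged sketch: periodic index maintenance for the table receiving the per-second inserts.
    // dbo.BarData is a hypothetical table name; run this from a scheduled job or timer, not per query.
    using System.Data.SqlClient;

    static class IndexMaintenance
    {
        public static void RebuildIndexes(string connectionString)
        {
            const string sql =
                "ALTER INDEX ALL ON dbo.BarData REBUILD; " + // or REORGANIZE for lighter maintenance
                "UPDATE STATISTICS dbo.BarData;";
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                cmd.CommandTimeout = 300; // rebuilds can take a while on large tables
                cmd.ExecuteNonQuery();
            }
        }
    }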

I would suspect the query itself. While it may not take much time on an empty database, as the amount of data grows it may require more and more time depending on how the lookup is done. Have you examined the query plan to make sure that it is doing index lookups instead of table scans to find the data? If not, perhaps introducing some indexes would help.
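For example, if the per-second query filters on a bar-time column, a covering index along these lines could let the optimizer seek instead of scan. This is only a sketch: dbo.BarData, BarTime, SymbolId and Value are hypothetical names and should be matched to the real schema and to whatever the actual query plan shows being scanned.
    using System.Data.SqlClient;

    static class IndexSetup
    {
        // Hedged sketch: create a covering index for the per-second lookup (hypothetical names).
        public static void EnsureIndex(string connectionString)
        {
            const string ddl =
                "IF NOT EXISTS (SELECT 1 FROM sys.indexes WHERE name = 'IX_BarData_BarTime') " +
                "CREATE NONCLUSTERED INDEX IX_BarData_BarTime " +
                "ON dbo.BarData (BarTime, SymbolId) INCLUDE (Value);";
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }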

Related

Cloudsim- How to calculate totalCurrentRequestedRam of a VM?

I am working on SLA violation Time per Active Host (SLATAH). I want to consider three parameters, CPU, RAM and Bw, when calculating this metric. To take these three parameters into account, I need to calculate the totalCurrentRequestedRam.
The code below returns the currentRequestedRam of a single VM; now I need the totalCurrentRequestedRam. How should I calculate it?
public int getCurrentRequestedRam() {
    if (isBeingInstantiated()) {
        return getRam();
    }
    return (int) (getCloudletScheduler().getCurrentRequestedUtilizationOfRam() * getRam());
}

How to add time delay to process more than 15 second in Actionscript?

So I have the following script to get all combinations of an array:
'''
var value = new Array(40)
for (var i = 0; i < value.length; i++){
    value[i] = i;
}
var comb_list = getAllComb(value, 24);
trace(comb_list)

function getAllComb(values:Array, r:int):Array{
    var n = values.length;
    var result = new Array();
    var a = new Array(r);
    // initialize first combination
    for (var i = 0; i < r; i++) {
        a[i] = i;
    }
    i = r - 1; // Index to keep track of maximum unsaturated element in array
    // a[0] can only be n-r+1 exactly once - our termination condition!
    var count = 0;
    while (a[0] < n - r + 1) {
        // If outer elements are saturated, keep decrementing i till you find unsaturated element
        while (i > 0 && a[i] == n - r + i) {
            i--;
        }
        result.push(a.slice()) // save a copy of the current combination
        count++;
        a[i]++;
        // Reset each outer element to prev element + 1
        while (i < r - 1) {
            a[i + 1] = a[i] + 1;
            i++;
        }
    }
    return result;
}
'''
Running the above script gets me:
Error: Error #1502: A script has executed for longer than the default timeout period of 15 seconds.
How can I add a time delay every 14 seconds so that the script can keep running? That is, after 14 seconds have passed, the program should wait 50 ms and then continue.
Any help appreciated.
So, here's a simple (well, pretty much so) and working example of how to separate the heavy-calculations part from the main thread, so that the main thread (which also handles UI and external events like user input) runs smoothly while still being able to read the progress and the results of the heavy calculations going on under the hood. It comes in the form of a single class, which could be a bit confusing (until you understand how it works) but is still easy to handle and modify.
Although the background AVM follows the same execution flow (code execution > graphics rendering > code execution > graphics rendering > and so on), there are no graphics to render, hence there's no need to limit the code execution time. As a result the Worker thread is not subject to the 15-second limit, which solves the problem.
package
{
    import flash.events.Event;
    import flash.display.Sprite;
    import flash.utils.ByteArray;
    import flash.concurrent.Mutex;
    import flash.system.Worker;
    import flash.system.WorkerDomain;

    public class MultiThreading extends Sprite
    {
        // These variables are needed by both the main and
        // subservient threads and will actually point to
        // the very same object instances, though from
        // the different sides of this application.
        private var B:ByteArray;
        private var W:Worker;
        private var M:Mutex;

        // Constructor method.
        public function MultiThreading()
        {
            super();
            // This property is 'true' for the main thread
            // and 'false' for any Worker instance created.
            if (Worker.current.isPrimordial)
            {
                prepareProgress();
                prepareThread();
                startMain();
            }
            else
            {
                startWorker();
            }
        }

        // *** THE MAIN THREAD *** //
        private var P:Sprite;
        private var F:Sprite;

        // Prepares the progress bar graphics.
        private function prepareProgress():void
        {
            F = new Sprite;
            P = new Sprite;
            P.graphics.beginFill(0x0000FF);
            P.graphics.drawRect(0, 0, 100, 10);
            P.graphics.endFill();
            P.scaleX = 0;
            F.graphics.lineStyle(0, 0x000000);
            F.graphics.drawRect(0, 0, 100, 10);
            F.x = 10;
            F.y = 10;
            P.x = 10;
            P.y = 10;
            addChild(P);
            addChild(F);
        }

        // Prepares the subservient thread and shares
        // the ByteArray (the way to pass messages)
        // and the Mutex (the way to access the shared
        // resources in a multi-thread environment
        // without stepping on each others' toes).
        private function prepareThread():void
        {
            M = new Mutex;
            B = new ByteArray;
            B.shareable = true;
            B.writeObject(incomingMessage);
            W = WorkerDomain.current.createWorker(loaderInfo.bytes);
            W.setSharedProperty("message", B);
            W.setSharedProperty("lock", M);
        }

        // Starts listening to what the background thread has to say
        // and also starts the background thread itself.
        private function startMain():void
        {
            addEventListener(Event.ENTER_FRAME, onFrame);
            W.start();
        }

        private var incomingMessage:Object = {ready:0, total:100};

        private function onFrame(e:Event):void
        {
            // This method runs only 20-25 times a second.
            // We need to set a lock on the Mutex in order
            // to read the shared data without any risks
            // of colliding with the thread writing the
            // same data at the same moment of time.
            M.lock();
            B.position = 0;
            incomingMessage = B.readObject();
            M.unlock();
            // Display the current data.
            P.scaleX = incomingMessage.ready / incomingMessage.total;
            P.alpha = 1 - 0.5 * P.scaleX;
            // Kill the thread if it signalled it is done calculating.
            if (incomingMessage.terminate)
            {
                removeEventListener(Event.ENTER_FRAME, onFrame);
                W.terminate();
                B.clear();
                B = null;
                M = null;
                W = null;
            }
        }

        // *** THE BACKGROUND WORKER PART *** //
        // I will use the same W, M and B variables to refer
        // the same Worker, Mutex and ByteArray respectively,
        // but you must keep in mind that this part of the code
        // runs on a different virtual machine, so it is the
        // different class instance thus its fields are not
        // the same quite as well.

        // Initialization.
        private function startWorker():void
        {
            W = Worker.current;
            M = W.getSharedProperty("lock");
            B = W.getSharedProperty("message");
            // Before starting the heavy calculations loop
            // we need to release the main thread which is
            // presently on W.start() instruction. I tried
            // without it and it gives a huuuge lag before
            // actually proceeding to intended work.
            addEventListener(Event.ENTER_FRAME, onWorking);
        }

        private function onWorking(e:Event):void
        {
            removeEventListener(Event.ENTER_FRAME, onWorking);
            var aMax:int = 10000000;
            // Very very long loop which might run
            // over the course of several seconds.
            for (var i:int = 0; i < aMax; i++)
            {
                // This subservient thread does not actually need to
                // write its status every single loop, so lets don't
                // explicitly lock the shared resources for they
                // might be in use by the main thread.
                if (M.tryLock())
                {
                    B.position = 0;
                    B.writeObject({ready:i, total:aMax});
                    M.unlock();
                }
            }
            // Let's notify the main thread that
            // the calculations are finally done.
            M.lock();
            B.position = 0;
            B.writeObject({ready:i, total:aMax, terminate:true});
            M.unlock();
            // Release the used variables and prepare to be terminated.
            M = null;
            B = null;
            W = null;
        }
    }
}
The error is not about your script needing a time delay; the problem is that your while loops keep the script busy for more than 15 seconds, which triggers the script timeout error. ActionScript only allows a script 15 seconds to execute.
Your first while loop looks problematic, and it's unclear how the value of a[0] changes to end the loop. Add a break to the loop, or make sure the condition changes so the loop can end, and you should solve your problem. You could also consider adding continue statements to your nested while loops if they are only supposed to run once after they find an unsaturated value.
Personally, since you are using ActionScript, I'd suggest using objects and listeners for value changes instead of iterating over arrays checking for changes.
You could also add a manual timeout for your while loop, but would need to include logic for it to pick up where it left off.
//Set timer to 14 seconds
var timeout:int = getTimer() + 14000;
while (true && timeout > getTimer()) {
    trace("No Error");
}
If you are using Adobe Animate (Flash), you can change the "Script Time Limit" on the Publish Settings page.

JDBC Connection pooling for SQL Server: DBCP vs C3P0 vs No Pooling

I have a Java webapp that communicates heavily with a SQL Server database. I want to decide how to manage the connections to this DB efficiently. The first option that comes to mind is using a third-party connection pool. I chose C3P0 and DBCP and prepared some test cases to compare these approaches, as follows:
No Pooling:
public static void main(String[] args) {
    long startTime = System.currentTimeMillis();
    try {
        for (int i = 0; i < 100; i++) {
            Connection conn = ConnectionManager_SQL.getInstance().getConnection();
            String query = "SELECT * FROM MyTable;";
            PreparedStatement prest = conn.prepareStatement(query);
            ResultSet rs = prest.executeQuery();
            if (rs.next()) {
                System.out.println(i + ": " + rs.getString("CorpName"));
            }
            conn.close();
        }
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    System.out.println("Finished in: " + (System.currentTimeMillis() - startTime) + " milli secs");
}
DBCP:
public static void main(String[] args) {
    long startTime = System.currentTimeMillis();
    try {
        for (int i = 0; i < 100; i++) {
            Connection conn = ConnectionManager_SQL_DBCP.getInstance().getConnection();
            String query = "SELECT * FROM MyTable;";
            PreparedStatement prest = conn.prepareStatement(query);
            ResultSet rs = prest.executeQuery();
            if (rs.next()) {
                System.out.println(i + ": " + rs.getString("CorpName"));
            }
            conn.close();
        }
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    System.out.println("Finished in: " + (System.currentTimeMillis() - startTime) + " milli secs");
}
C3P0:
public static void main(String[] args) {
    long startTime = System.currentTimeMillis();
    try {
        for (int i = 0; i < 100; i++) {
            Connection conn = ConnectionManager_SQL_C3P0.getInstance().getConnection();
            String query = "SELECT * FROM MyTable;";
            PreparedStatement prest = conn.prepareStatement(query);
            ResultSet rs = prest.executeQuery();
            if (rs.next()) {
                System.out.println(i + ": " + rs.getString("CorpName"));
            }
            conn.close();
        }
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    System.out.println("Finished in: " + (System.currentTimeMillis() - startTime) + " milli secs");
}
And here are the results:
Max Pool size for c3p0 and dbcp=10
c3p0: 5534 milli secs
dbcp: 4807 milli secs
No Pooling: 2660 milli secs
__
Max Pool size for c3p0 and dbcp=100
c3p0: 4937 milli secs
dbcp: 4798 milli secs
No Pooling: 2660 milli secs
One might say the initialization and startup time of the pooling libraries could affect the results of these test cases, but I have repeated them with larger loop counts and the results are almost the same.
Surprisingly, the no-pooling approach is much faster than the connection pooling methods, even though I would assume that when a connection is physically closed, getting a new one should be more time-consuming.
So, what's going on here?
EDIT_01: c3p0 and dbcp configurations
c3p0:
cpds.setMinPoolSize(5);
cpds.setAcquireIncrement(5);
cpds.setMaxPoolSize(100);
cpds.setMaxStatements(1000);
dbcp:
basicDataSource.setMinIdle(5);
basicDataSource.setMaxIdle(30);
basicDataSource.setMaxTotal(100);
basicDataSource.setMaxOpenPreparedStatements(180);
The rest of the configuration is left at the defaults. It's worth mentioning that all connections are established to a DB on localhost.
c3p0 is not deader than a doornail. It's old but (somewhat) actively maintained. Whether newer alternatives better suit your application is for you to decide.
What version of c3p0 are you using? If you think it is deader than a doornail, are you using an old version? You should be using 0.9.5.2.
The outcome of the test as you've defined it will be highly dependent on lots of things that are difficult to evaluate with the information you've provided. As Mark Rotteveel points out, you've not shown any information about your config. You've not said anything about the location of the SQL Server. You'll notice greater benefit from a Connection pool when the database is remote than when it is local, as some of the performance improvement comes from amortizing the network latency of Connection acquisition over multiple client uses. Your test executes a query and iterates through the result set. The longer the result set, the more the overhead of the Connection pool (which must proxy the ResultSet) overtakes the benefit of faster Connection acquisition. (The numbers you are getting look unusually bad, though. c3p0 typically has very fast ResultSet passthrough performance.) With sufficiently long queries the cost of Connection acquisition becomes negligible, while the ResultSet-proxying overhead of the pooling library keeps growing, making a Connection pool not so useful.
But this is far from the typical use case for web or mobile clients, which usually make short queries, inserts, and updates. For short queries, inserts, and updates, the cost of a de novo Connection acquisition can be very large relative to the execution of the query. This is the use-case for which Connection pools offer a large improvement. That may not be what you are testing; it depends on how big MyTable is.

Algorithm for concurrent access to resource(s) on database

Some time ago we implemented a warehouse management app that keeps track of the quantity of each product we have in the store. We solved the problem of concurrent access to the data with database locks (select for update), but this approach leads to poor performance when many clients try to consume product quantities from the same store. Note that we manage only a small set of product types (fewer than 10), so the degree of concurrency can be heavy (also, we don't care about stock re-fill). We thought about splitting each resource quantity into smaller "buckets", but this approach could lead to starvation for clients that try to consume a quantity bigger than each bucket's capacity: we would have to manage bucket merges and so on.
My question is: are there any broadly accepted solutions to this problem? I have also looked for academic articles, but the topic seems too broad.
P.S. 1:
our application runs in a clustered environment, so we cannot rely on application-level concurrency control. The question aims to find an algorithm that structures and manages the data differently than as a single row, while keeping all the advantages that a DB transaction (with or without locks) has.
P.S. 2: for your information, we manage a large number of similar warehouses; the example focuses on a single one, but we keep all the data in one DB (prices are all the same, etc.).
Edit: The setup below will still work on a cluster if you use a queueing program that can coordinate among multiple processes / servers, e.g. RabbitMQ.
You can also use a simpler queueing algorithm that only uses the database, with the downside that it requires polling (whereas a system like RabbitMQ allows threads to block until a message is available). Create a Requests table with a column for unique requestIds (e.g. a random UUID) that acts as the primary key, a timestamp column, a resourceType column, and an integer requestedQuantity column. You'll also need a Logs table with a unique requestId column that acts as the primary key, a timestamp column, a resourceType column, an integer requestedQuantity column, and a boolean/tinyint/whatever success column.
When a client requests a quantity of ResourceX it generates a random UUID and adds a row to the Requests table using the UUID as the requestId, and then polls the Logs table for the requestId. If the success column is true then the request succeeded, else it failed.
The server with the database assigns one thread or process to each resource, e.g. ProcessX is in charge of ResourceX. ProcessX retrieves all rows from the Requests table where resourceType = ResourceX, sorted by timestamp, and then deletes them from Requests; it then processes each request in order, decrementing an in-memory counter for each successful request, and at the end of processing the requests it updates the quantity of ResourceX in the Resources table. It then writes each request and its success status to the Logs table. It then retrieves all of the requests from Requests where resourceType = ResourceX again, and so on.
It may be slightly more efficient to use an autoincrement integer as the Requests primary key, and to have ProcessX sort by primary key instead of by timestamp.
One option is to assign one DAOThread per resource - this thread is the only thing that accesses that resource's database table so that there's no locking at the database level. Workers (e.g. web sessions) request resource quantities using a concurrent queue - the example below uses a Java BlockingQueue, but most languages will have some sort of concurrent queue implementation you can use.
public class Request {
    final int value;
    final BlockingQueue<ReturnMessage> queue;

    public Request(int value, BlockingQueue<ReturnMessage> queue) {
        this.value = value;
        this.queue = queue;
    }
}

public class ReturnMessage {
    final int value;
    final String resourceType;
    final boolean isSuccess;

    public ReturnMessage(int value, String resourceType, boolean isSuccess) {
        this.value = value;
        this.resourceType = resourceType;
        this.isSuccess = isSuccess;
    }
}

public class DAOThread implements Runnable {
    private final int MAX_CHANGES = 10;
    private String resourceType;
    private int quantity;
    private int changeCount = 0;
    private DBTable table;
    private BlockingQueue<Request> queue;

    public DAOThread(DBTable table, BlockingQueue<Request> queue) {
        this.table = table;
        this.resourceType = table.select("resource_type");
        this.quantity = table.select("quantity");
        this.queue = queue;
    }

    public void run() {
        while (true) {
            Request request;
            try {
                request = queue.take();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            if (request.value <= quantity) {
                quantity -= request.value;
                if (++changeCount > MAX_CHANGES) {
                    changeCount = 0;
                    table.update("quantity", quantity);
                }
                request.queue.offer(new ReturnMessage(request.value, resourceType, true));
            } else {
                request.queue.offer(new ReturnMessage(request.value, resourceType, false));
            }
        }
    }
}

public class Worker {
    final Map<String, BlockingQueue<Request>> dbMap;
    final SynchronousQueue<ReturnMessage> queue = new SynchronousQueue<>();

    public Worker(Map<String, BlockingQueue<Request>> dbMap) {
        this.dbMap = dbMap;
    }

    public boolean request(String resourceType, int value) throws InterruptedException {
        dbMap.get(resourceType).offer(new Request(value, queue));
        return queue.take().isSuccess;
    }
}
The Workers send resource requests to the appropriate DAOThread's queue; the DAOThread processes these requests in order, either updating the local resource quantity if the request's value doesn't exceed the quantity and returning a Success, else leaving the quantity unchanged and returning a Failure. The database is only updated after ten updates to reduce the amount of IO; the larger MAX_CHANGES is, the more complicated it will be to recover from system failure. You can also have a dedicated IOThread that does all of the database writes - this way you don't need to duplicate any logging or timing (e.g. there ought to be a Timer that flushes the current quantity to the database after every few seconds).
The Worker uses a SynchronousQueue to wait for a response from the DAOThread (a SynchronousQueue is a BlockingQueue with no internal capacity: each offer must be met by a matching take); if the Worker is running in its own thread then you may want to replace this with a standard multi-item BlockingQueue so that the Worker can process the ReturnMessages in any order.
Some databases, e.g. Riak, have native support for counters, so using one of them might improve your IO throughput and reduce or eliminate the need for MAX_CHANGES.
You can further increase throughput by introducing BufferThreads to buffer the requests to the DAOThreads.
public class BufferThread implements Runnable {
    final SynchronousQueue<ReturnMessage> returnQueue = new SynchronousQueue<>();
    final int BUFFERSIZE = 10;
    private DAOThread daoThread;
    private BlockingQueue<Request> queue;
    private ArrayList<Request> buffer = new ArrayList<>(BUFFERSIZE);
    private int tempTotal = 0;

    public BufferThread(DAOThread daoThread, BlockingQueue<Request> queue) {
        this.daoThread = daoThread;
        this.queue = queue;
    }

    public void run() {
        while (true) {
            try {
                Request request = queue.poll(100, TimeUnit.MILLISECONDS);
                if (request != null) {
                    tempTotal += request.value;
                    buffer.add(request);
                }
                if (buffer.size() == BUFFERSIZE || request == null) {
                    daoThread.queue.offer(new Request(tempTotal, returnQueue));
                    ReturnMessage message = returnQueue.take();
                    if (message.isSuccess) {
                        for (Request buffered : buffer) {
                            buffered.queue.offer(new ReturnMessage(buffered.value, daoThread.resourceType, message.isSuccess));
                        }
                    } else {
                        // send unbuffered requests to DAOThread to see if any can be satisfied individually
                        for (Request buffered : buffer) {
                            daoThread.queue.offer(buffered);
                        }
                    }
                    buffer.clear();
                    tempTotal = 0;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
The Workers send their requests to the BufferThreads, who then wait until they've buffered BUFFERSIZE requests or have waited for 100ms for a request to come through the buffer (Request request = queue.poll(100, TimeUnit.MILLISECONDS)), at which point they forward the buffered message to the DAOThread. You can have multiple buffers per DAOThread - rather than sending a Map<String, BlockingQueue<Request>> to the Workers you instead send a Map<String, ArrayList<BlockingQueue<Request>>>, one queue per BufferThread, with the Worker either using a counter or a random number generator to determine which BufferThread to send a request to. Note that if BUFFERSIZE is too large and/or if you have too many BufferThreads then Workers will suffer from long pause times as they wait for the buffer to fill up.

How to update silverlight UI while processing

I have gone through several examples posted online, but I can't find an answer to my question.
I have a 'p' variable that is incremented by 1 in the for loop. I want the UI to display the progress of the calculation (to show how 'p' increases from 0 to 1000000). I do the calculation on a separate thread and then call the Dispatcher to update the ResultBox in the UI. Example:
int p = 0;
...
private void GO(object sender, System.Windows.RoutedEventArgs e)
{
    new Thread(delegate()
    {
        DoWork();
    }).Start();
}

void DoWork()
{
    for (int i = 0; i < 1000; i++)
    {
        for (int j = 0; j < 10000; j++)
        {
            p++;
            this.Dispatcher.BeginInvoke(delegate { ResultBox.Text = p.ToString(); });
        }
    }
}
For some reason this doesn't work. However, when I put Thread.Sleep(1) just before this.Dispatcher... it works as intended. Does that mean the UI update (Dispatcher) is called too frequently and therefore the UI freezes?
Is there any other way to do it?
Thank you
Why not bind a property to your TextBox and then update the property value, instead of poking at the TextBox directly?
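A minimal sketch of that idea, assuming a hypothetical ProgressViewModel set as the page's DataContext with Text="{Binding Progress}" in the XAML; in Silverlight you still have to raise the change notification on the UI thread (and throttle it), but the code-behind no longer touches the TextBox directly:
    // Hedged sketch: expose the counter as a bindable property instead of writing to the TextBox.
    // ProgressViewModel and Progress are made-up names for illustration only.
    using System.ComponentModel;

    public class ProgressViewModel : INotifyPropertyChanged
    {
        private int progress;

        public int Progress
        {
            get { return progress; }
            set
            {
                if (progress != value)
                {
                    progress = value;
                    var handler = PropertyChanged;
                    if (handler != null)
                    {
                        handler(this, new PropertyChangedEventArgs("Progress"));
                    }
                }
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }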
Yes. Doing only p++ in your loop takes hardly any time, and in Silverlight the Dispatcher is essentially a simple queue of delegates; before Silverlight can even update and process its UI, you are pumping too many values onto that queue. Imagine what happens if you keep enqueuing far faster than the queue is dequeued: eventually it hits its limit and simply stops. If your p++ were replaced with a more time-consuming task, you would get better results.
Bear in mind that the eye can usually only perceive about 30 updates per second; more than 30 updates per second is of no use at all. I would suggest reducing view updates to at most 10 per second for best performance.
And for showing progress, I think even 1 update per second is enough. Throttle the display updates, for example like this:
void DoWork()
{
    for (int i = 0; i < 1000; i++)
    {
        for (int j = 0; j < 10000; j++)
        {
            p++;
            if ((p % 1000) == 0)
            {
                this.Dispatcher.BeginInvoke(delegate
                    { ResultBox.Text = p.ToString(); });
            }
        }
    }
}
Now you can increase or decrease the 1000 to a suitable multiple of 10 to adjust how often the view updates.
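If you'd rather throttle by elapsed time than by iteration count, a sketch along these lines (a variation of the DoWork above; the 100 ms threshold is just an example, not a magic number) posts at most about ten updates per second no matter how fast the loop runs:
    // Hedged sketch: time-based throttling of Dispatcher updates (at most one every ~100 ms).
    void DoWork()
    {
        int lastUpdate = Environment.TickCount;
        for (int i = 0; i < 1000; i++)
        {
            for (int j = 0; j < 10000; j++)
            {
                p++;
                if (Environment.TickCount - lastUpdate >= 100)
                {
                    lastUpdate = Environment.TickCount;
                    int snapshot = p; // capture the value the UI should show
                    this.Dispatcher.BeginInvoke(delegate { ResultBox.Text = snapshot.ToString(); });
                }
            }
        }
        // push the final value once the loop finishes
        this.Dispatcher.BeginInvoke(delegate { ResultBox.Text = p.ToString(); });
    }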
