With Gatling 2, is it possible to repeat with connection re-use? How?
I have the code below, but it appears to open a new connection every time. I want to maintain x connections for some time.
val httpProtocol = http
  .baseURL("http://mysrv.pvt")
  .inferHtmlResources()

val uri1 = "http://mysrv.pvt"

val scn = scenario("Simulation").repeat(50) {
  pause(2 seconds, 20 seconds).
    exec(http("request_0")
      .get("/s1/serve.html"))
}

setUp(scn.inject(
  atOnceUsers(20000)
).protocols(httpProtocol))
First, your question is not accurate enough.
By default, Gatling has one connection pool per virtual user, so each of them re-uses connections between sequential requests and can have more than one concurrent connection when fetching resources, which you do since you enabled inferHtmlResources. This way, virtual users behave like independent browsers.
You can change this behavior and share a common connection pool, see the doc. However, make sure this makes sense in your case: your workload profile will be very different, and the toll on the TCP stack on both the client/Gatling side and the server/your app will be much lower, so check that this matches how your application is actually used in production.
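If a shared pool is really what you want, the only change is on the protocol; a minimal sketch against the Gatling 2 DSL (double-check the method name against the docs for your exact version):
val httpProtocol = http
  .baseURL("http://mysrv.pvt")
  .inferHtmlResources()
  .shareConnections // all virtual users draw from one shared connection pool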
Any chance I could get a tip on the proper way to build an agent that reads multiple points from multiple devices on a BACnet system? I am viewing the actuator agent code, trying to learn how to make the proper RPC call.
So I am going through the agent development procedure with the agent creation wizard.
In the init I have this just hard coded at the moment:
def __init__(self, **kwargs):
    super(Setteroccvav, self).__init__(**kwargs)
    _log.debug("vip_identity: " + self.core.identity)
    self.default_config = {}
    self.agent_id = "dr_event_setpoint_adj_agent"
    self.topic = "slipstream_internal/slipstream_hq/"
    self.jci_zonetemp_string = "/ZN-T"
So the BACnet system in the building has JCI VAV boxes, all with the same zone temperature sensor point (self.jci_zonetemp_string), and self.topic is the prefix under which I pulled them into the VOLTTRON config store through the BACnet discovery process.
In my actuate point function (copied from the CSV driver example), am I at all close on how to make the RPC call named reads using get_multiple_points? I am hoping to scrape the zone temperature sensor readings on BACnet device IDs 6, 7, 8, 9, and 10, which are all the same VAV box controller with the same points/BAS program running.
def actuate_point(self):
    """
    Request that the Actuator set a point on the CSV device
    """
    # Create a start and end timestep to serve as the times we reserve to communicate with the CSV Device
    _now = get_aware_utc_now()
    str_now = format_timestamp(_now)
    _end = _now + td(seconds=10)
    str_end = format_timestamp(_end)
    # Wrap the timestamps and device topic (used by the Actuator to identify the device) into an actuator request
    schedule_request = [[self.ahu_topic, str_now, str_end]]
    # Use a remote procedure call to ask the actuator to schedule us some time on the device
    result = self.vip.rpc.call(
        'platform.actuator', 'request_new_schedule', self.agent_id, 'my_test', 'HIGH', schedule_request).get(
        timeout=4)
    _log.info(f'*** [INFO] *** - SCHEDULED TIME ON ACTUATOR From "actuate_point" method success')
    reads = publish_agent.vip.rpc.call(
        'platform.actuator',
        'get_multiple_points',
        self.agent_id,
        [(('self.topic'+'6', self.jci_zonetemp_string)),
         (('self.topic'+'7', self.jci_zonetemp_string)),
         (('self.topic'+'8', self.jci_zonetemp_string)),
         (('self.topic'+'9', self.jci_zonetemp_string)),
         (('self.topic'+'10', self.jci_zonetemp_string))]).get(timeout=10)
Any tips before I break something on the live system are greatly appreciated :)
The basic form of an RPC call to the actuator is as follows:
# use the agent's VIP connection to make an RPC call to the actuator agent
result = self.vip.rpc.call('platform.actuator', <RPC exported function>, <args>).get(timeout=<seconds>)
Because we're working with devices, we need to know which devices we're interested in and what their topics are. We also need to know which points on the devices we're interested in.
device_map = {
    'device1': '201201',
    'device2': '201202',
    'device3': '201203',
    'device4': '201204',
}
building_topic = 'campus/building'
all_device_points = ['point1', 'point2', 'point3']
Getting points with the actuator requires a list of point topics, or device/point topic pairs.
# we only need one of the following:
point_topics = []
for device in device_map.values():
    for point in all_device_points:
        point_topics.append('/'.join([building_topic, device, point]))

device_point_pairs = []
for device in device_map.values():
    for point in all_device_points:
        device_point_pairs.append(('/'.join([building_topic, device]), point,))
Now we send our RPC request to the actuator:
# device_point_pairs can be used here instead of point_topics
point_results = self.vip.rpc.call('platform.actuator', 'get_multiple_points', point_topics).get(timeout=3)
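Applied to the topics in the question, a hedged sketch might look like this; it assumes self.topic is the device-topic prefix from the config store and that the leading '/' in self.jci_zonetemp_string should be dropped when passing (device, point) pairs:
# hypothetical adaptation: BACnet device IDs 6-10, all exposing the same zone-temperature point
zone_temp_point = self.jci_zonetemp_string.lstrip('/')  # e.g. 'ZN-T'
device_point_pairs = [(self.topic + str(device_id), zone_temp_point)
                      for device_id in (6, 7, 8, 9, 10)]

zone_temps = self.vip.rpc.call(
    'platform.actuator',
    'get_multiple_points',
    device_point_pairs).get(timeout=10)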
maybe it's just my interpretation of your question, but it seems a little open-ended - so I shall respond in a similar vein - general (& I'll try to keep it short).
First, you need the list of info for targeting each device in turn; i.e. it might consist of just an IP(v4) address (for the physical device) and the (logical) device's BOIN (BACnet Object Instance Number), or, if the request is being routed/forwarded on by/via a BACnet router/BACnet gateway, then maybe also the DNET # and the DADR too.
Then you probably want, for each device one at a time, to retrieve the first/0-element value of the device's Object-List property in order to get the number of objects it contains, so you know how many objects (including the logical device/device-type object) you need to retrieve from it and iterate over. NOTE: in the real world, as much as it's common for the device-type object to be the first one in the list, there's no guarantee it will always be the case.
As much as the BACnet standard now allows retrieval of the Property-List property from each and every object, most equipment doesn't yet support it, so you might need your own idea of what properties (at least the ones of interest to you) each different object type supports; at the very least, know which object types support the Present-Value property and which don't.
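For instance, that "own idea" can be as simple as a small mapping from object type to the properties you expect to be readable; a hypothetical sketch, not tied to any particular BACnet stack:
# extend/adjust this for the object types and properties your equipment actually exposes
supported_properties = {
    'device':        ['object-name', 'object-list', 'vendor-name'],
    'analog-input':  ['object-name', 'present-value', 'units'],
    'analog-value':  ['object-name', 'present-value', 'units'],
    'binary-output': ['object-name', 'present-value'],
}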
One ideal would be to have the following mocked facets as fakes for testing purposes, instead of testing against a live/important device (or at least consider testing against a noddy BACnet-enabled Raspberry Pi or the hardware-based like):
a mock for your BACnet service
a mock for the BACnet communication stack
a mock for your device as a whole (- if you can't write your own one, then maybe even start with the YABE 'Room Control Simulator' as a starting point)
Hope this helps (in some way).
I have some fairly simple stream code that aggregates data via time windows. The windows are on the large side (1 hour, with a 2-hour bound), and the values in the streams are metrics coming from hundreds of servers. I keep running out of memory, so I added the RocksDBStateBackend. This caused the JVM to segfault. Next I tried the FsStateBackend. Neither of these backends ever wrote any data to disk; they simply created a directory with the JobID. I'm running this code in standalone mode, not deployed. Any thoughts as to why the state backends aren't writing data, and why it runs out of memory even when given 8GB of heap?
final SingleOutputStreamOperator<Metric> metricStream = objectStream
        .map(node -> new Metric(node.get("_ts").asLong(), node.get("_value").asDouble(), node.get("tags")))
        .name("metric stream");

final WindowedStream<Metric, String, TimeWindow> hourlyMetricStream = metricStream
        .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<Metric>(Time.hours(2)) { // how long metrics may arrive late
            @Override
            public long extractTimestamp(final Metric metric) {
                return metric.get_ts() * 1000; // needs to be in ms since the Java epoch
            }
        })
        .keyBy(metric -> metric.getMetricName()) // key the stream so we can run the windowing in parallel
        .timeWindow(Time.hours(1)); // set up the one-hour bucket window

// create a stream for each type of aggregation
hourlyMetricStream.sum("_value") // we want to sum by the _value
        .addSink(new MetricStoreSinkFunction(parameters, "sum"))
        .name("hourly sum stream")
        .setParallelism(6);

hourlyMetricStream.aggregate(new MeanAggregator())
        .addSink(new MetricStoreSinkFunction(parameters, "mean"))
        .name("hourly mean stream")
        .setParallelism(6);

hourlyMetricStream.aggregate(new ReMedianAggregator())
        .addSink(new MetricStoreSinkFunction(parameters, "remedian"))
        .name("hourly remedian stream")
        .setParallelism(6);

env.execute("flink test");
It is tough to say why you would run out of memory unless you have a very large number of metric names (that is the only explanation I can come up with based on the code you posted).
With respect to the disk writing, RocksDB will actually use a temporary directory by default for its actual database files. You can also pass an explicit directory during configuration; you would do this by calling setDbStoragePath(someDirectory) on the backend.
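For example, a sketch of pointing the backend at an explicit directory; the paths here are placeholders, and this assumes the flink-statebackend-rocksdb dependency is on the classpath:
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;

// placeholder paths - adjust for your environment; the constructor can throw IOException
RocksDBStateBackend rocksDbBackend = new RocksDBStateBackend("file:///tmp/flink-checkpoints"); // where checkpoints go
rocksDbBackend.setDbStoragePath("/tmp/flink-rocksdb"); // where the working RocksDB files are kept
env.setStateBackend(rocksDbBackend);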
Somewhat confusingly, the FsStateBackend in fact only writes to disk during checkpointing; otherwise it is entirely heap-based. So you likely did not see anything in the directory because checkpointing was not enabled, and that also explains why you might still run out of memory when the FsStateBackend is used.
Assuming you do have the RocksDB (or any) state backend working, you can enable checkpointing by doing:
env.enableCheckpointing(5000); // interval in ms, so however frequently you want to checkpoint
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(5000); // ensures the job can still make progress between checkpoints; for large state a single checkpoint can take several seconds
My server is listening on port 1234 for incoming connections. I made an array of sockets, and I loop through that array looking for an already free socket (Closed = 0), growing the array to hold a new incoming socket if no free socket is available. That index later identifies each connection on its own. I'm not sure whether that is a good approach, but it works fine and stays stable until I stress the server with about 15 clients opened at the same time. The problem I am facing is that some of the client apps get a connection time-out, and the server becomes unstable handling those multiple connections at the same time. I am not sending much data, only 20 bytes on each connect event received from the server. I have tried to increase the Backlog value on the listen call, but that didn't help either.
I have to wait longer for those clients to connect, but they connect eventually, and the server still responds to new connections later on (if I close all those client apps, for example, or open a new client app, it will connect immediately). Also, these connections stay open and I do not close the socket from the client side.
I am using CSocketPlus Class
Connection request event
Private Sub Sockets_ConnectionRequest(ByVal Index As Variant, ByVal requestID As Long)
    Dim sckIndex As Integer
    sckIndex = GetFreeSocketIndex(Sockets)
    Sockets.Accept sckIndex, requestID
End Sub
Function GetFreeSocketIndex(Sockets As CSocketPlus) As Integer
    Dim i As Integer, blnFound As Boolean
    If Sockets.ArrayCount = 1 Then 'First we check if we have not loaded any arrays yet (First connector)
        Sockets.ArrayAdd 1
        GetFreeSocketIndex = 1
        ReDim UserInfo(1)
        Exit Function
    Else
        'Else we loop through all arrays and find a free (Not connected) one
        For i = 1 To Sockets.ArrayCount - 1
            If Sockets.State(i) <> sckConnected Then
                'Found one, use it
                blnFound = True
                Sockets.CloseSck i
                GetFreeSocketIndex = i
                If UBound(UserInfo) < i Then
                    ReDim Preserve UserInfo(i)
                End If
                Exit Function
            End If
        Next i
        'Didn't find any, load one and use it
        If blnFound = False Then
            Sockets.ArrayAdd i
            Sockets.CloseSck i
            GetFreeSocketIndex = i
            ReDim Preserve UserInfo(i)
        End If
    End If
End Function
Does anyone know why there are performance issues when multiple connections occur at the same time? Why does the server become slow to respond?
What makes the server accept faster?
I have also tried not accepting the connection, i.e. not calling Accept in the connection request event, but it is still the same issue.
EDIT: I have tried putting in a debug variable to print the socket number in the FD_ACCEPT event of the CSocket class, and it seems that WndProc is delayed in posting the messages when there are a lot of connections.
OK, the problem seems to be my connection. I have moved my server to my RDP host, which has a 250 Mbps download / 200 Mbps upload connection, and it works very well there. Tested it with 100 clients making connections and every one of them connected immediately! I wonder why I have such issues when my home connection is 40/40 Mbps... hmm. Anyone know why that happens?
Edit: Seems to be these router options!
Enable Firewall
Enable DoS protection
Disabled the firewall (just for testing purposes) and everything works flawlessly!
So basically the router thinks there is some kind of DoS attack and slows down the traffic.
We're migrating SQL to Azure. Our DAL is Entity Framework 4.x based. We're wanting to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 rule (or maybe more of a 95/5, but you get the point) - we're not looking to spend weeks refactoring/rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but not all of the code written and generated against it any more than we have to, since this is only here to address a minority case. Mitigation >>> elimination of this edge case for us.
Looking at the possible options explained here at MSDN, it seems Case #3 there is the "quickest" to implement, but only at first glance. Upon pondering this solution a bit, it struck me that we might have problems with connection management, since this circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems to me that the "solution" is to make sure 100% of the Contexts we instantiate use using blocks, but with our architecture, this would be difficult.
So my question: Going with Case #3 from that link, are hanging connections a problem or is there some magic somewhere that's going on that I don't know about?
I've done some experimenting and it turns out that this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" similarly.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
    const int maxRetries = 4;
    const int initialDelayInMilliseconds = 100;
    const int maxDelayInMilliseconds = 5000;
    const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;

    var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
        TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
        TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
        TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));

    policy.ExecuteAction(() =>
    {
        try
        {
            Connection.Open();
            var storeConnection = (SqlConnection)((EntityConnection)Connection).StoreConnection;
            new SqlCommand("declare @i int", storeConnection).ExecuteNonQuery();
            //Connection.Close();
            // throw new ApplicationException("Test only");
        }
        catch (Exception e)
        {
            Connection.Close();
            Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
            throw;
        }
    });
}
In this scenario, we forcibly open the Connection (which was the goal here). Because of this, the Context keeps it open across many calls. Because of that, we must tell the Context when to close the connection. Our primary mechanism for doing that is calling the Dispose method on the Context. So if we just allow garbage collection to clean up our contexts, then we allow connections to remain hanging open.
I tested this by toggling the comments on the Connection.Close() in the try block and running a bunch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective). By calling Close, that number hovered at ~12. I then reproduced with a small number of unit tests both with and without a using block for the Context and reproduced the same result (different numbers - I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
s.last_request_end_time, s.cpu_time,
e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
ON s.session_id = e.session_id
WHERE login_name='myuser'
ORDER BY s.login_name
Conclusion: If you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all contexts you work with, otherwise you will have problems (problems that, with SQL Azure, will cause your database to be "throttled" and ultimately taken offline for hours!).
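To make that concrete, here is a minimal sketch of what "managing Contexts" then looks like in practice; MyEntities and the query are placeholder names, not taken from the code above:
// hypothetical context type and query; the point is the using block, which
// guarantees Dispose() and therefore closes the underlying connection
using (var context = new MyEntities())
{
    var activeCustomers = context.Customers.Where(c => c.IsActive).ToList();
    // ... work with the results ...
} // connection is closed here, even if an exception was thrown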
The problem with this approach is it only takes care of connection retries and not command retries.
If you use Entity Framework 6 (currently in alpha) then there is some new in-built support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling block without needing to change every database call - generally you will only need to change your config file and possibly one or two lines of code.
This allows you to use it for Entity Framework or Linq To Sql.
https://github.com/robdmoore/ReliableDbProvider
In my Java code, I am processing a huge amount of data, so I moved the code into a servlet run as an App Engine cron job. Some days it works fine; as the amount of data has increased, the cron job stopped working and shows the following error message.
2012-09-26 04:18:40.627
'ServletName' 'MethodName': Inside SQLExceptionjava.sql.SQLRecoverableException:
Connection is already in use.
I 2012-09-26 04:18:40.741
This request caused a new process to be started for your application, and thus caused
your application code to be loaded for the first time. This request may thus take
longer and use more CPU than a typical request for your application.
W 2012-09-26 04:18:40.741
A problem was encountered with the process that handled this request, causing it to
exit. This is likely to cause a new process to be used for the next request to your
application. If you see this message frequently, you may be throwing exceptions during
the initialization of your application. (Error code 104)
How to handle this problem?
This exception is typical when a single connection is shared between multiple threads. This will in turn happen when your code does not follow the standard JDBC idiom of acquiring and closing the DB resources in the shortest possible scope in the very same try-finally block like so:
public Entity find(Long id) throws SQLException {
Connection connection = null;
// ...
try {
connection = dataSource.getConnection();
// ...
} finally {
// ...
if (connection != null) try { connection.close(); } catch (SQLException ignore) {}
}
return entity;
}
Your comment on the question,
@TejasArjun i used connection pooling with servlet Init() method.
doesn't give me the impression that you're doing it the right way. This suggests that you're obtaining a DB connection in the servlet's init() method and reusing the same one across all HTTP requests in all HTTP sessions. This is absolutely not right. A servlet instance is created/initialized only once during the webapp's startup and reused throughout the entire remainder of the application's lifetime. This would at least explain the exception you're facing.
Just rewrite your JDBC code according to the standard try-finally idiom as demonstrated above and you should be all set.
See also:
Is it safe to use a static java.sql.Connection instance in a multithreaded system?