Diagnosing Response Times - angularjs

I don't have too much experience in diagnosing the source of longer response times, so I was curious to find out about some methods from the more experienced.
I have a simple index page of paginated records, and each time I click to load a new page, the response time is a little over 300ms for 25 records. The query that pulls the records isn't complicated, and maybe 300ms isn't that long, but I'd still like to know how to find out where the majority of that time goes, because the page feels slower than it should.
What tools or methods could I use to find out any bottlenecks? Thanks in advance.

DebugBar is a great way to get more visibility into your Laravel App's execution. It will break out all the SQL queries and provide timings.
In addition to DebugBar, adding a simple benchmarking class is another tactic I often use: start a timer at the very beginning of your app and record named events as execution proceeds. Here is a sample class I've used in the past (I made a couple of tweaks and didn't test the code below):
<?php

class Benchmark
{
    public static $events = [];

    public static function start()
    {
        static::$events = [];
        static::bench('start');
    }

    public static function end()
    {
        static::bench('end');
    }

    public static function bench($event)
    {
        static::$events[$event] = static::time();
    }

    public static function time()
    {
        // The 'start' event stores an absolute timestamp; every later event
        // stores the number of seconds elapsed since 'start'.
        $now = microtime(true);

        return isset(static::$events['start'])
            ? $now - static::$events['start']
            : $now;
    }

    public static function report()
    {
        print_r(static::$events);
    }
}
Implementation would be:
Benchmark::start();
// ...run the code you want to measure...
Benchmark::bench('finished big query');
Benchmark::end();
then print out the benchmark somewhere at the end of execution:
<?php Benchmark::report() ?>
Check MySQL
Most of the time the slowdown is the database.
If you find a particular query that is slowing the page load down, use the MySQL slow query log to isolate the problem further, and use the EXPLAIN command to dissect the query and see where it can be improved.

Related

Flink integration test(s) with Testcontainers

I have a simple Apache Flink job that looks very much like this:
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public final class Application {

    public static void main(final String... args) throws Exception {
        final var env = StreamExecutionEnvironment.getExecutionEnvironment();
        final var executionConfig = env.getConfig();
        final var params = ParameterTool.fromArgs(args);
        executionConfig.setGlobalJobParameters(params);
        executionConfig.setParallelism(params.getInt("application.parallelism"));

        final var source = KafkaSource.<CustomKafkaMessage>builder()
                .setBootstrapServers(params.get("application.kafka.bootstrap-servers"))
                .setGroupId(params.get("application.kafka.consumer.group-id"))
                // .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setTopics(params.get("application.kafka.listener.topics"))
                .setValueOnlyDeserializer(new MessageDeserializationSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
                .uid("custom.kafka-source")
                .rebalance()
                .flatMap(new CustomFlatMapFunction())
                .uid("custom.flatmap-function")
                .filter(new CustomFilterFunction())
                .uid("custom.filter-function")
                .addSink(new CustomDiscardSink()) // Will be a Kafka sink in the future
                .uid("custom.discard-sink");

        env.execute(params.get("application.job-name"));
    }
}
The problem is that I would like to provide an integration test for the entire application — sort of like an end-to-end (set of) test(s) for the entire job. I'm using Testcontainers, but I'm not really sure how to move forward with this. For instance, this is what the test looks like (for now):
@Testcontainers
final class ApplicationTest {

    private static final DockerImageName DOCKER_IMAGE = DockerImageName.parse("confluentinc/cp-kafka:7.0.1");

    @Container
    private static final KafkaContainer KAFKA_CONTAINER = new KafkaContainer(DOCKER_IMAGE);

    @ClassRule // How come this work in JUnit Jupiter? :/
    public static MiniClusterResource cluster;

    @BeforeAll
    static void init() {
        KAFKA_CONTAINER.start();
        // ...probably need to wait and create the topic(s) as well

        final var config = new MiniClusterResourceConfiguration.Builder()
                .setNumberSlotsPerTaskManager(2)
                .setNumberTaskManagers(1)
                .build();
        cluster = new MiniClusterResource(config);
    }

    @Test
    void main() throws Exception {
        // new Application(); // ...what's next?
    }
}
I'm not sure how to implement what's required to trigger the job as-is from that point on. Basically, I would like to execute what was defined before, without (almost) any modifications — I've seen plenty of examples that practically build the entire job again, so that's not an option.
Can somebody provide any pointers here?
MessageDeserializationSchema is unbounded, so isEndOfStream returns false. Not sure if that's an impediment.
In order to make the pipeline more testable, I suggest you create a method on your Application class that takes a source and a sink as parameters, and creates and executes the pipeline, using those connectors.
In your tests you can call that method with special sources and sinks that you use for testing. In particular, you will want to use a KafkaSource that uses .setBounded(...) in the tests so that it cleanly handles just the range of data intended for the test(s).
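A rough sketch of that refactoring, under the assumption that CustomFlatMapFunction and CustomFilterFunction keep the element type as CustomKafkaMessage (the method and parameter names below are mine, not from the original job; adjust the sink's type parameter if the flatMap changes the type):
// Additional imports on top of the ones the job already uses:
import org.apache.flink.api.connector.source.Source;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

// Builds and executes the same pipeline as main(), but with the connectors injected,
// so the integration test can pass a bounded KafkaSource and a collecting test sink.
static void runPipeline(final StreamExecutionEnvironment env,
                        final Source<CustomKafkaMessage, ?, ?> source,
                        final SinkFunction<CustomKafkaMessage> sink,
                        final String jobName) throws Exception {
    env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
            .uid("custom.kafka-source")
            .rebalance()
            .flatMap(new CustomFlatMapFunction())
            .uid("custom.flatmap-function")
            .filter(new CustomFilterFunction())
            .uid("custom.filter-function")
            .addSink(sink)
            .uid("custom.discard-sink");
    env.execute(jobName);
}
main() would then only build the production KafkaSource and CustomDiscardSink and delegate to runPipeline(...), while the test builds its source with .setBounded(OffsetsInitializer.latest()) against KAFKA_CONTAINER.getBootstrapServers(), so that execute() returns once the test records have been processed.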
The solutions and tests for the Apache Flink training exercises are organized along these lines; for example, see RideCleansingSolution.java and RideCleansingIntegrationTest.java. These examples don't use kafka or test containers, but hopefully they'll still be helpful.
I would suggest you instrument your application as an opaque-box test by interacting with it through its public API. This can be done either as an out-of-process test (e.g. by running your application in a container as well, using Testcontainers) or as an in-process test (by creating your Application and calling its main() method).
In your comments you explained that you want to check for the side effects of interacting with your application (Kafka messages being published). To check this, connect to the KafkaContainer with your own KafkaConsumer from within the test and use a library such as Awaitility to wait until the messages have been received.
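A minimal sketch of that verification step, assuming Awaitility 4 and AssertJ are on the test classpath; the topic name, group id, and expected count are placeholders, not values from the original job:
import static org.assertj.core.api.Assertions.assertThat;

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.awaitility.Awaitility;

// Hypothetical helper for the test class above: polls the container until the
// expected side effect shows up, or fails after the timeout.
static void assertMessagesPublished(String bootstrapServers, String topic, int expectedCount) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "it-verifier");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    List<String> received = new ArrayList<>();
    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        consumer.subscribe(List.of(topic));
        Awaitility.await()
                .atMost(Duration.ofSeconds(30))
                .untilAsserted(() -> {
                    consumer.poll(Duration.ofMillis(250)).forEach(r -> received.add(r.value()));
                    assertThat(received).hasSize(expectedCount);
                });
    }
}
It would be called from the test, after the job has been triggered, as assertMessagesPublished(KAFKA_CONTAINER.getBootstrapServers(), "output-topic", 1).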

How to slow down the speed of execution?

I am using Selenium WebDriver for testing. I want to slow down the speed of execution.
Here is the sample code:
@Parameters({ "provider_name", "branch", "address", "clientId", "website", "UserName", "Password", "Dpid" })
public void addDematAccount(String provider_name, String branch, String address, String clientId, String website,
        String UserName, String Password, String Dpid) {
    driver.findElement(By.xpath("//a[contains(@href, '#/app/DematAccount/Add')]")).click();
    setParameter(provider_name, branch, address, clientId, website, UserName, Password, Dpid);
    driver.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
}
I have used driver.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS); and Thread.sleep(2000); but neither is helping.
There is no longer any way to control the speed of each "step" in Selenium WebDriver. At one time, there was a setSpeed() method on the Options interface (in the Java bindings; other bindings had similar constructs on their appropriately-named objects), but it was deprecated long, long ago. The theory behind this is that you should not need to a priori slow down every single step of your WebDriver code. If you need to wait for something to happen in the application you're automating, you should be using an implicit or explicit wait routine.
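For example, a minimal explicit wait, reusing the locator from the question (the 10-second timeout is an arbitrary choice, and the constructor shown is the Selenium 3 one; Selenium 4 takes a java.time.Duration instead of a long):
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Wait until the link is actually clickable, then click it, instead of
// slowing every step of the test down.
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement addLink = wait.until(
        ExpectedConditions.elementToBeClickable(
                By.xpath("//a[contains(@href, '#/app/DematAccount/Add')]")));
addLink.click();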
If you want to view the execution and it's too fast, I would think you could record your test being executed and then review it afterwards.
See here : http://www.seleniummonster.com/boost-up-your-selenium-tests-with-video-recording-capability/
And here : http://unmesh.me/2012/01/13/recording-screencast-of-selenium-tests-in-java/
Here are some examples from the links above:
public void startRecording() throws Exception {
    GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
            .getDefaultScreenDevice().getDefaultConfiguration();
    this.screenRecorder = new ScreenRecorder(gc,
            // container format: an AVI file
            new Format(MediaTypeKey, MediaType.FILE, MimeTypeKey, MIME_AVI),
            // video track: TechSmith screen capture, 24-bit colour, 15 fps
            new Format(MediaTypeKey, MediaType.VIDEO,
                    EncodingKey, ENCODING_AVI_TECHSMITH_SCREEN_CAPTURE,
                    CompressorNameKey, ENCODING_AVI_TECHSMITH_SCREEN_CAPTURE,
                    DepthKey, 24, FrameRateKey, Rational.valueOf(15),
                    QualityKey, 1.0f, KeyFrameIntervalKey, 15 * 60),
            // mouse cursor track at 30 fps
            new Format(MediaTypeKey, MediaType.VIDEO, EncodingKey, "black",
                    FrameRateKey, Rational.valueOf(30)),
            // no audio track
            null);
    this.screenRecorder.start();
}

public void stopRecording() throws Exception {
    this.screenRecorder.stop();
}
The whole purpose of automated tests (in my opinion) is that they can run in the background without user interaction and without being watched. Also, if you want to run as many tests as possible in a given amount of time, speed and parallelized testing are essential. If you want to view your tests being executed, I think the approach above is a good way to avoid hurting Selenium's performance: you can review the execution once it has completed and have full control over the video to replay it, etc.
If you really want to execute your program slowly or even step by step, you can try the following approaches:
execute your program in debug mode one step at a time;
refactor your code into function blocks and execute one block at a time; the code won't literally run more slowly, but it becomes easier to associate each block with its results.

'Concurrent saves are not allowed' with manager.enableSaveQueuing(true)

Why does Breeze keep throwing 'Concurrent saves are not allowed' even with the manager.enableSaveQueuing(true) option enabled?
Simply because you're trying to issue multiple saves at the same time.
Breeze's default save option is queuing data for saving.
In your case, you can override the option to allow concurrent saves as follows:
var so = new breeze.SaveOptions({ allowConcurrentSaves: true });
return manager.saveChanges(null, so)
    .then(saveSucceeded)
    .fail(saveFailed);
EDIT
Since you are using the "saveQueuing" plugin, ignore my first answer; it only applies to concurrent saves.
I don't know how your code works, but there are a couple of things to consider with save queuing:
Issue manager.saveChanges() only once in your code.
On the server side, override the BeforeSaveEntity() method as needed, and put a mutual-exclusion lock statement in your SaveChanges() method; your code may look something like this:
private static readonly object __lock = new object();

public void SaveChanges(SaveWorkState saveWorkState)
{
    lock (__lock) // this will block any attempt to issue concurrent saves on the same row
    {
        // Saving operations go here
    }
}
You might want to look at it in the NoDB Sample.

Using/Searching AsyncDataProvider with Objectify / Google App Engine

I currently have an application which uses the activities/places and an AsyncDataProvider.
Right now, every time the activity loads up, it uses the request factory to retrieve the data (currently not a lot, but it will get very large soon) and passes it to the View to update the DataGrid. Before the DataGrid is updated, the data is filtered based on a search box.
Right now I have implemented updating the DataGrid as follows (this code isn't the prettiest):
private void updateData() {
    final AsyncDataProvider<EquipmentTypeProxy> provider = new AsyncDataProvider<EquipmentTypeProxy>() {
        @Override
        protected void onRangeChanged(HasData<EquipmentTypeProxy> display) {
            int start = display.getVisibleRange().getStart();
            int end = start + display.getVisibleRange().getLength();
            final List<EquipmentTypeProxy> subList = getSubList(start, end);
            end = (end >= subList.size()) ? subList.size() : end;
            if (subList.size() < DATAGRID_PAGE_SIZE) {
                updateRowCount(subList.size(), true);
            } else {
                updateRowCount(data.size(), true);
            }
            updateRowData(start, subList);
        }

        private List<EquipmentTypeProxy> getSubList(int start, int end) {
            final List<EquipmentTypeProxy> filteredEquipment;
            if (searchString == null || searchString.equals("")) {
                if (data.isEmpty() == false && data.size() > (end - start)) {
                    filteredEquipment = data.subList(start, end);
                } else {
                    filteredEquipment = data;
                }
            } else {
                filteredEquipment = new ArrayList<EquipmentTypeProxy>();
                for (final EquipmentTypeProxy equipmentType : data) {
                    if (equipmentType.getName().contains(searchString)) {
                        filteredEquipment.add(equipmentType);
                    }
                }
            }
            return filteredEquipment;
        }
    };
    provider.addDataDisplay(dataGrid);
}
Ultimately - what I would like to do is only load up the necessary data at first (the default page size in this application is 25).
Unfortunately, to my current understanding, with Google App Engine there is no order to the IDs (one entry has an ID of 3, the next has an ID of 4203).
What I'm wondering, what is the best way to go about retrieving a subset of data from Google App Engine when using Objectify?
I was looking into using Offset and limit but another stack overflow post (http://stackoverflow.com/questions/9726232/achieve-good-paging-using-objectify) basically said this is inefficient.
The best information I've found is the following link (http://stackoverflow.com/questions/7027202/objectify-paging-with-cursors). The answer here says to use Cursors but also says this is inefficient. I'm also using Request Factory so I will have to store the Cursor in my user Session (if that is incorrect please let me know).
Currently, since there isn't likely to be a lot of data (maybe 200 rows total for the next few months), I am just pulling the entire set back to the client as a temporary hack. I know this is the worst way to do it, but I'd like input on the best way before wasting time implementing another hack. I'm worried because every post I've read on this makes it seem like there isn't really a solid way to do it.
I am also wondering about the following: currently my searching / page loading is lightning fast because all the data is already on the client side. I use a KeyUpEvent handler in the search box to filter the data. I don't think there is any way to keep this speed while making a call to the server - is there any accepted solution to this problem?
Thank you very much
Go with Cursors. They are as efficient as it gets: a cursor stores the point where the last query ended and continues from there. The answer you linked actually does not discuss the efficiency of cursors vs offset (there is a comment there that is wrong).
You can use limit with Cursors - it does not affect efficiency.
Also, Cursors can be serialized via cursor.toWebSafeString() and sent to client via RPC. This way you do not need to save them in session. Actually you can also use them as fragment identifier (aka history token in GWT parlance) - this way a certain "page" of your result set can be bookmarked.
(Offset is "inefficient" because it actually loads, and charges you for, all entities up to offset+limit, but it only returns limit entities.)
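A sketch of what cursor-based paging could look like with Objectify (EquipmentType, the page size, and the PageResult holder are placeholders based on the question, not code from it; the Cursor class shown is the App Engine datastore one used by Objectify 4/5):
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.google.appengine.api.datastore.Cursor;
import com.google.appengine.api.datastore.QueryResultIterator;
import com.googlecode.objectify.cmd.Query;
import java.util.ArrayList;
import java.util.List;

// Loads one page of results; pass null for the first page, then pass the returned
// web-safe cursor string back in to get the next page.
public static PageResult<EquipmentType> loadPage(String webSafeCursor, int pageSize) {
    Query<EquipmentType> query = ofy().load().type(EquipmentType.class).limit(pageSize);
    if (webSafeCursor != null) {
        query = query.startAt(Cursor.fromWebSafeString(webSafeCursor));
    }

    QueryResultIterator<EquipmentType> iterator = query.iterator();
    List<EquipmentType> page = new ArrayList<EquipmentType>();
    while (iterator.hasNext()) {
        page.add(iterator.next());
    }

    // The cursor marks where this query stopped; it is web-safe, so it can go
    // straight back to the client (or into a GWT history token).
    String nextCursor = iterator.getCursor().toWebSafeString();
    return new PageResult<EquipmentType>(page, nextCursor);
}
PageResult here is just a hypothetical pair of (items, cursor) that your RequestFactory proxy or DTO would carry back to the AsyncDataProvider.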
OTOH, if you already know the query parameters when the page is loaded, then just do the query at page generation time instead of invoking it via RPC. Also, if you have a small set of data (<1000) you could just preload all entity IDs as part of the page HTML.

NHibernate Performance Optimization | Suggestions invited!

I’m facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a short summary of my application architecture.
I have a Windows service which is listening to a messaging bus. On receiving a message, the service creates an object, one property of which is the received xml snippet, and saves the message to the DB (using NH). There is a WPF UI with a read-only connection to the DB, and on refresh the UI displays the objects on the screen.
While the UI does a refresh, it retrieves the xml and deserializes it, from which the object's properties are derived and bound to the screen.
For example, assume an xml XXX is received by the service: it deserializes the xml, creates the book object and saves it to the DB, where a property/column SCHEMA contains the xml snippet.
The UI, when refreshed, searches all book objects by ID and recreates the book objects from the saved xml (yes, the xml is the constructor param).
Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent in the DB is negligible, but the time spent creating the entities is proportionally huge (10 ms : 1990 ms). I guess it's due to the fairly large size of the xml snippet and its deserialization.
My question is, how can I improve the performance? I dispose sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe just one of them. Can I implement some sort of caching mechanism in this case?
Thanks in advance for any suggestions.
Regards,
-Mike
The entire list of 50 books could be kept in a singleton class meant for caching, like a cache manager. You could also use, say, the Enterprise Library cache, but I would suggest an in-memory cache. If a book gets added, you update the cache. The cache would hold the entire xml, so no deserialization would happen. You could also update the DB in an asynchronous thread to reduce the time.
Here is the pseudo code
On the service, whenever I receive a message:
public void OnMessage(string message)
{
    // deserialize the message
    DeserializedObject schema = deserializationFactory.Deserialize(message);
    var book = new Book(schema, message);

    // save the book using a new session
    repository.Save(book);
}
The book object:
public class Book
{
    public DeserializedObject Schema { get; set; }

    private string xml;

    public string Xml
    {
        get { return xml; }
    }

    // Parameterless constructor, required for the :this() chain (and by NHibernate).
    protected Book()
    {
    }

    public Book(DeserializedObject schema, string xml) : this(schema)
    {
        this.xml = xml;
    }

    public Book(DeserializedObject schema) : this()
    {
        this.Schema = schema;
    }

    public virtual XmlDocument XmlSchema
    {
        get
        {
            var doc = new XmlDocument();
            if (Schema != null)
            {
                var serializer = new XmlSerializer(typeof(DeserializedObject));
                var stream = new MemoryStream();
                serializer.Serialize(stream, Schema);
                stream.Position = 0;
                doc.Load(stream);
            }
            return doc;
        }
    }

    public virtual string SerializedSchema
    {
        get { return XmlSchema.OuterXml; }
        set
        {
            if (value != null)
                Schema = value.Deserialize<DeserializedObject>();
        }
    }

    public string Author
    {
        get { return Schema.Author; }
    }
}
Now the mapping for Book (uses FNH):
public class BookMap : ClassMap<Book>
{
    public BookMap()
    {
        LazyLoad();
        Table("Books");
        IdGenerator.Instance.GenerateId(this, "book_id_seq", book => book.Id);
        Map(book => book.SerializedSchema, "SERIALIZED_SCHEMA")
            .CustomSqlType("Clob")
            .CustomType("StringClob");
    }
}
On UI:
public void OnRefresh()
{
    // In reality the call to the DB runs on a background worker and the records
    // are bound to the grid after a context switch.
    // GetByCriteria creates a new session every time a refresh happens;
    // allBooksForToday is an ICriterion selecting today's books.
    datagrid.DataContext = repository.GetByCriteria(allBooksForToday);
}
The important thing to note here is that the Book type is shared between the service and the UI. However, only the service can write to the DB, whereas the UI can update the trade object (basically the xml) and send it over the messaging bus (again as xml). The service, once it receives it, updates the DB.
The xml size will be approximately 20 KB, so if I'm loading, say, 50 books, I'll be loading close to 1 MB of data.
Thanks, -Mike
