How do you properly benchmark ColdFusion execution times?

1) What settings in the ColdFusion Administrator should be turned off/on?
2) What ColdFusion code should you use to properly benchmark execution time like getTickCount()?
3) What system information should you also provide, such as CF engine, version, Standard/Enterprise, database, etc.?

What we do is:
In Application.cfc's onRequestStart() -> set tick count value, add to REQUEST scope.
In Application.cfc's onRequestEnd() -> set tick count value, subtract first value from it to get total processing time in ms
We then have a set threshold (say 200ms) and if that threshold is reached we'll log a record in a database table
Typically we'll log the URL query string, the script name, the server name, etc.
This can give very useful information over time on how particular pages are performing. This can also be easily graphed so you can see if a page suddenly started taking 5000ms where before it was taking 300ms, and then you can check SVN to see what change did it :)
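The onRequestStart/onRequestEnd pattern above is CFML-specific, but the threshold-logging idea is generic. Here is a minimal sketch of the same pattern in Python (the names `timed_request` and `log_slow_request` are invented for illustration; the real version would INSERT into the logging table instead of printing):

```python
import time

SLOW_THRESHOLD_MS = 200  # log anything slower than this

def log_slow_request(script_name, query_string, elapsed_ms):
    # stand-in for logging a record to the database table described above
    print(f"SLOW: {script_name}?{query_string} took {elapsed_ms:.0f} ms")

def timed_request(handler, script_name, query_string):
    start = time.monotonic()           # onRequestStart: record the tick count
    result = handler()                 # run the actual page/request
    elapsed_ms = (time.monotonic() - start) * 1000  # onRequestEnd: subtract
    if elapsed_ms > SLOW_THRESHOLD_MS:
        log_slow_request(script_name, query_string, elapsed_ms)
    return result
```

Only requests over the threshold produce a log record, so the table stays small while still catching the pages worth graphing.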
Hope that helps!

1) In the CF Administrator, in Debug Settings, you can turn on Enable Request Debugging Output, which outputs runtime and other debugging information at the bottom of every page. This can be useful if you want to see queries as well. If you want to use timers, you must select Timer Information in the Debug Settings (got hung up on that for a hot minute).
2) You can use timers for custom benchmarks of execution times. There are four types (inline, outline, comment, and debug), each corresponding to where the output will appear. With inline, it will draw a little box around your code's output (if it's a .cfm) and print the total runtime. The others will print in the bottom debug output that you turned on in CF Admin.
3) I don't really know what you should provide. Wish I could help more. In my opinion, the more information the better, so that's what I would say :P

With respect to mbseid's answer: request debugging adds a significant amount of processing time to any request, especially if you use CFCs. I would recommend you turn request debugging off and use getTickCount() at the top and bottom of the page, then take the difference to get the time to render that page. This will give you a much closer reflection of how the code will perform in production.


Convert FileTime to DateTime in Azure Logic App

I'm pretty new to Logic App so still learning my way around custom expressions. One thing I cannot seem to figure out is how to convert a FileTime value to a DateTime value.
FileTime value example: 133197984000000000
I don't have a desired output format as long as Logic App can understand that this is a DateTime value and can be able to run before/after date logic.
To achieve your requirement, I converted the Windows FileTime to Unix time, then converted that to a datetime by adding it as seconds to the base date 1970-01-01T00:00:00Z. Here is the official documentation that I followed. Below is the expression that worked for me.
addSeconds('1970-01-01T00:00:00Z', div(sub(133197984000000000,116444736000000000),10000000))
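The arithmetic in that expression can be checked independently: a Windows FileTime counts 100-nanosecond ticks since 1601-01-01, and 116444736000000000 is the tick offset between 1601-01-01 and the Unix epoch. A quick Python sketch of the same math (the helper name `filetime_to_datetime` is just for illustration):

```python
from datetime import datetime, timedelta, timezone

EPOCH_DIFF_TICKS = 116444736000000000  # ticks between 1601-01-01 and 1970-01-01
TICKS_PER_SECOND = 10_000_000          # FileTime ticks are 100 ns each

def filetime_to_datetime(filetime):
    # same math as the Logic App expression: subtract the epoch offset,
    # divide down to seconds, then add to 1970-01-01T00:00:00Z
    seconds = (filetime - EPOCH_DIFF_TICKS) // TICKS_PER_SECOND
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=seconds)

print(filetime_to_datetime(133197984000000000))  # 2023-02-02 08:00:00+00:00
```

So the example FileTime value 133197984000000000 works out to 2023-02-02T08:00:00Z.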
This isn't likely to float your boat but the Advanced Data Operations connector can do it for you.
The unfortunate piece of the puzzle is that (at this stage) it doesn't just work as is, but rest assured that this functionality is coming.
Meaning, you need to do some trickery if you want to use it to do what you want.
By this I mean, if you use the Xml to Json operation, you can use the built in functions that come with the conversion to do it for you.
This is an example of what I mean ...
You can see that I have constructed some XML that is then passed into the Data parameter. That XML contains your Windows file time value.
I have then setup the Map Object to then take that value and use the built in ado function FromWindowsFileTime to convert it to a date time value.
The Primary Loop at Element is the XPath query that will make the selection to return the relevant values to loop over.
The result is this ...
Disclaimer: I should point out, this is due to drop in preview sometime in the middle of Jan 2023.
They have another operation in development that will allow you to do this a lot more easily, but for now, this is your easiest and cheapest option.
This kind of thing is also available in the Transform and Expert operations but that's the next tier level of pricing.

In SSMS, what is the number that appears in brackets after the username?

Just out of curiosity, but I notice it changes seemingly at random, and I cannot find any documentation online for it at all. I have tried Googling it, and I fear I am poorly wording my search terms to find it.
I'm trying to work out what the number in brackets after the username executing a query in SSMS is for.
In the tab name or the window title, it'll appear like this:
SQLQuery19.sql - server_name.db_name (NETWORK\user (525))
The number seemingly changes when I open a new query at different times, and it doesn't seem to increase or decrease with any sort of pattern. Any links to documentation or brief explanation would be great.
That's the session_id - you can confirm by running SELECT @@SPID; in that window.
See https://learn.microsoft.com/en-us/sql/t-sql/functions/spid-transact-sql

"?" character in MSSQL DB getting replaced with (capital A with grave accennt) when displayed by ASP script

I'm attempting to provide support for a legacy ASP/MSSQL web application - I wasn't involved in the development of the software (the company that built it no longer exists) & I'm not the admin of the server where it's hosted, I just manage the hosting for the owners of the site via a reseller account. I'm also not an ASP developer (more a PHP guy), and am not that familiar with it beyond the basics - updating DB connection strings after server migrations, etc.
The issue is that the site in question stores the content of individual pages in an MSSQL database, and much of the content includes links. Almost all of the internal links on the site are formatted like "main.asp?123" (with "123" being the ID of a database row). The problem is, starting sometime in the last 8 months or so*, something caused the links in the DB content to show up as "main.aspÀ123" instead - in other words, the "?" character is being replaced by the "À" character (capital A with grave accent). Which, of course, breaks all of those links. Note that Stackoverflow won't allow me to include that character in the post title, because it seems to think that it indicates I'm posting in Spanish...?
(*unfortunately I don't know the timing beyond that, the site owners didn't know when the issue started occurring, so all I have to go by is an archive.org snapshot from last October, where it was working)
I attempted to manually change the "?" character in one of the relevant DB records to "&#63;" (the HTML entity for the question mark), but that didn't make any difference. I also checked the character encoding of the HTML code used to display the content, but that doesn't seem to be the cause either - the same ASP files contain hard-coded links to some of the same pages (formatted exactly the same way), and those work correctly: the "?" doesn't get replaced.
I've also connected to the database directly with the MSSQL Management Studio Express application, but couldn't find any charset/character encoding options for either the database or the table.
And I've tried contacting the hosting provider, but they (M247 UK, in case anyone is curious) have been laughably unhelpful. The responses from them have been along the lines of "durrrrrr, we checked a totally different link that wasn't actually the one that you clearly described AND highlighted in a screenshot, and it works when we check the wrong link, so the problem must be resolved, right?" Suffice it to say, I wouldn't recommend them - we used to be a customer of RedFox hosting, and the quality of customer service has dropped off substantially since M247 bought them.
Any suggestions? If this were PHP/MySQL, I'd probably start by creating a small test script that did nothing but fetch one of the relevant records and display its contents, to narrow down the issue - but I'm not familiar enough with ASP to do that here, at least not without a fair amount of googl'ing (and most of the info I can find is specific to ASP.net instead).
Edit: the thread suggested as a solution appears to be for character encoding issues when writing to MSSQL, not reading from it - and I've tried the solutions suggested in that thread, none make any difference.
Looks like you're converting from UNICODE to ASCII somewhere along the line...
Have a look at this to get a quick demo of what happens. In particular, pay attention to the ascii derived from int, versus the ascii derived from unicode...
SELECT
    t.n,
    ascii_char = CHAR(t.n),
    unicode_char = NCHAR(t.n),
    unicode_to_ascii = CONVERT(varchar(10), NCHAR(t.n))
FROM (
    SELECT TOP (1024)
        n = ROW_NUMBER() OVER (ORDER BY ao.object_id)
    FROM
        sys.all_objects ao
) t
WHERE 1 = 1
    --AND CONVERT(varchar(10), NCHAR(t.n)) = 'À'
;
I found a workaround that appears to do the trick: I was previously trying to replace the ? in the code with &#63 (took out the ; so that it will show the code rather than the output), which didn't work. BUT it seems to work if I use &quest instead.
One thing to note: it seems that I was originally incorrect in thinking that the issue was only affecting content being read/displayed from the MSSQL DB. Rather, it looks like the same problem was also occurring with static content being "echo'd" by code in the ASP scripts (I'm more of a PHP guy, not sure what the correct term for ASP's equivalent to echo is). The links that were hardcoded as static HTML (rather than HTML being dynamically output by ASP) were unaffected. Changing the ? to &quest worked for those ones too (the hardest part was tracking down the file I needed to edit).
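Both entities decode to the same character, which is easy to confirm with any HTML entity decoder; for example, a quick check in Python (shown here just to illustrate the workaround, not part of the original ASP code):

```python
import html

# both the numeric entity and the named entity decode to a plain question mark
print(html.unescape("&#63;"))               # ?
print(html.unescape("&quest;"))             # ?
print(html.unescape("main.asp&quest;123"))  # main.asp?123
```

So either entity form should render as "?" in the browser; the advantage of the entity is that it survives whatever encoding conversion is mangling the literal character.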

ADO ADDBTIMESTAMP format changes

OK, so, here's an odd one, which is causing me to lose what little hair I have left.
We have some code that uses ADO to pull data from SQL Server. The code's been in place for 7 or 8 years now, and hasn't been touched for quite a while.
In the function, where we check the returned field's type for some conversion, we have this:
case ( fieldType = ADDBTIMESTAMP$ )
// A date/time stamp (yyyymmddhhmmss plus a fraction in billionths)
// Looks like we're just getting MM/DD/YYYY
// ooh no sometimes we get 6/25/2010 11:35:00 AM
Basically, this is saying that when the field's type is ADDBTIMESTAMP (or 135), then for whatever reason, the date is being returned MM/DD/YYYY sometimes with, and sometimes without a time stamp.
This morning, all date fields are now returning values YYYY-MM-DD (dashes included).
I haven't changed this code. The network people swear up and down that they haven't updated or modified SQL Server. My workstation is Win10, so who knows what's changed on that, but I don't see any indication of updates for the past few days.
Obviously, something's changed, considering we're now getting the data back in what should be the correct format, but for the life of me, I can't see what could have happened.
Any help or tips or psychiatric recommendations would be appreciated.
Thanks.
Basically, it's a configuration issue on the workstation I'm using. No other workstation appears to have the problem. I don't think I'll ever find out what happened. To be honest, now that I've looked at the code (which I've never needed to before), I'm more concerned that it's always been returning the wrong format on every machine for the last 7 or 8 years.
It's things like this that make me want to go into a less stressful line of work, like opening a restaurant.
Thanks.

Solr optimize command status

I have run the Solr optimize command using update?optimize=true. Can anyone please tell me how to check the status of the Solr optimize command? I am using Solr 3.5. Thanks in advance.
While the optimize is running, you can run the top command, type M to sort by memory usage, and watch the RES and SHR columns increase for the Solr java process. Also keep an eye on Mem: free at the top of the screen. As long as RES and SHR are increasing, optimize is working. In fact, the only thing that will stop it would be if Mem: free goes down to zero.
If that happens to you, rerun optimize with a LARGER number for maxSegments. For instance if 5 segments runs out of RAM, try 9. Then run again with 8, then again with 7, then try 5 again, and 3 and 1.
The easiest way to check the status of the index after an optimize, is to browse to http://<your instance & core>/admin/stats.jsp. This is also the same as clicking [Statistics] link off of the Solr Admin page.
If you look in the stats: section once on that page, typically after an optimize, the numDocs and maxDoc values will be the same as all pending deletes will have occurred. Also the reader value should show a value that contains segments=1 at the end. This will be the case as the optimize command will force the index to be merged into one section as explained below in this excerpt from the UpdateXmlMessages section for optimize in the Solr Wiki.
An optimize is like a hard commit except that it forces all of the
index segments to be merged into a single segment first.
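The numDocs/maxDoc check described above can be scripted against the stats output. The sketch below uses a simplified, illustrative XML snippet (it is not the exact stats.jsp schema, just the shape of the check):

```python
import xml.etree.ElementTree as ET

# simplified, illustrative stand-in for a stats.jsp response
stats_xml = """
<solr>
  <stat name="numDocs">1000</stat>
  <stat name="maxDoc">1000</stat>
</solr>
"""

root = ET.fromstring(stats_xml)
stats = {s.get("name"): int(s.text) for s in root.iter("stat")}

# after a completed optimize, pending deletes are gone, so the two match
fully_merged = stats["numDocs"] == stats["maxDoc"]
print(fully_merged)  # True
```

If the two numbers differ, there are still pending deletes, meaning the optimize has not completed (or has not been run).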
