I want to add EXIF support to the libjpeg library. I skimmed the Exif 2.2 specification, but it contains no information about whether an Exif tag's value can be located outside the APP1 marker segment (for example, after the end-of-file marker). Is that situation possible?
I maintain the metadata-extractor library, in Java and C#, and have never seen Exif data outside of APP1. There may be more than one APP1 segment, though, so you should check for, then skip, the six-byte preamble: Exif\0\0.
If you find a JPEG image that has Exif data outside of the APP1 segment, I'd very much like to know about it!
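If you want to verify this yourself, here is a minimal sketch (in PHP, purely for illustration; the file name is a placeholder) that walks a JPEG's marker segments and reports each APP1 segment along with whether it carries the Exif preamble:
<?php
$fp = fopen('photo.jpg', 'rb'); // placeholder file name
fread($fp, 2); // skip the SOI marker (0xFF 0xD8)
while (!feof($fp)) {
    $head = fread($fp, 2);
    if (strlen($head) < 2 || ord($head[0]) !== 0xFF) break; // lost marker sync, stop
    $marker = ord($head[1]);
    if ($marker === 0xD9) break; // EOI: end of image
    if ($marker === 0xDA) break; // SOS: entropy-coded data follows; Exif sits before it
    $lenBytes = fread($fp, 2);
    if (strlen($lenBytes) < 2) break;
    $len = unpack('n', $lenBytes)[1]; // big-endian segment length, includes these 2 bytes
    $payload = $len > 2 ? fread($fp, $len - 2) : '';
    if ($marker === 0xE1) { // APP1
        $hasExif = substr($payload, 0, 6) === "Exif\0\0";
        echo 'APP1, ', $len - 2, ' payload bytes, Exif preamble: ', $hasExif ? 'yes' : 'no', "\n";
    }
}
fclose($fp);
?>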
We are looking to speed up our website; more specifically, we are looking to lower TTFB. Our website consists mainly of pages that are dynamically generated based on the URL path (from which a subject is extracted) and on parameters in that path.
These values are put into an SQL query, executed with PHP, that pulls all the right data from our database.
Here is the issue:
These queries work perfectly to generate the pages and all the information associated with them (e.g. tags).
However, rerunning the code every time a visitor opens a page takes a significant amount of time, resulting in a high TTFB/server response time. In essence, these pages only need to be regenerated from the SQL queries once a month. In between, it should be possible to serve them as pregenerated static HTML pages (until we trigger a refresh). We are currently using Cloudflare as a CDN, which has already been great for speeding up the website. But even with a page rule set to "Cache Everything", we can see that it still reruns the PHP code, including the SQL queries.
The question:
Does anybody know a good way to accomplish this goal of caching the dynamic part of the website, whether with Cloudflare or via some other route? I know that Akamai offers this service, but evidently there is some switching cost to moving the website to another CDN, and we are rather satisfied with Cloudflare.
Thanks in advance!
If your website has hundreds of pages and many visitors every day, you might want to implement some sort of caching mechanism to speed up page loading. Each client request triggers multiple database queries plus server-side processing, and that work adds up in the overall page load time. The most common solution is to make copies of the dynamic pages, called cache files, store them in a separate directory, and later serve them as static pages instead of regenerating the dynamic pages again and again.
Understanding Dynamic Pages & Cache Files
Cache files are static copies generated from dynamic pages. Each file is generated once and stored in a separate folder until it expires; when a user requests the content, the static file is served instead of the dynamically generated page, bypassing the need to regenerate the HTML and query the database over and over. Running several database queries and processing PHP code into HTML output takes time, increasing the overall load time of a dynamic page, whereas a cached file consists of plain HTML, which you can open in any text editor or browser, and requires almost no processing time at all.
Dynamic page: The picture below shows how a dynamic page is generated. As the name says, it is completely dynamic: it talks to the database and generates the HTML output according to the variables the user provides with the request. For example, a user might want to list all the books by a particular author; the page does this by querying the database and generating fresh HTML. Each request takes time to process and uses some server memory, which is not a big deal if the website receives very few visitors. But with hundreds of visitors requesting dynamic pages over and over again, the load increases considerably, resulting in delayed output and HTTP errors in clients' browsers.
[Figure: dynamic page example]
Cached file: The picture below illustrates how cached files are served instead of dynamic pages. As explained above, cached files are nothing but static web pages containing plain HTML; the only way their content changes is if the web developer edits the file. Cached files require neither database connectivity nor processing time, which makes them an ideal way to reduce server load and page loading time.
[Figure: cached file example]
PHP Caching
There are several ways to cache dynamic pages in PHP, but the most common method combines the PHP output buffer with the filesystem functions; together they make a very capable caching system.
PHP output buffer: Output buffering improves performance and reduces download time, because the output is not sent to the browser in pieces; the whole HTML page is collected as one value. The method is simple; take a look at the code below:
<?php
ob_start(); // start the output buffer
/* the content */
$content = ob_get_contents(); // get the contents of the output buffer
ob_end_flush(); // Send the output and turn off output buffering
?>
When you call ob_start() at the top of the code, it turns output buffering on, which means anything after this is stored in the buffer instead of being sent to the browser. The content of the buffer can be retrieved using ob_get_contents(). You should call ob_end_flush() at the end of the code to send the output to the browser and turn buffering off.
PHP filesystem: You may be familiar with the PHP filesystem functions; they are part of the PHP core and let us read and write files. Have a look at the following code.
$fp = fopen('/path/to/file.txt', 'w'); //open file for writing
fwrite($fp, 'I want to write this'); //write
fclose($fp); //Close file pointer
As you can see, fopen() on the first line opens the file for writing; mode 'w' places the file pointer at the beginning of the file and, if the file does not exist, attempts to create it. On the second line, fwrite() writes the string to the opened file, and finally fclose() closes the file that was opened at the beginning of the code.
Implementing PHP caching
Now that the output buffer and the filesystem functions are clear, we can combine the two to create our PHP caching system. Please have a look at the picture below; the flowchart gives the basic idea of our cache system.
[Figure: PHP cache system flowchart]
The cycle starts when a user requests the content. We check whether a cached copy exists for the requested page; if it doesn't, we generate the page, create a cache copy, and then output the result. If the cache already exists, we just read the file and send it to the user's browser.
Take a look at the full PHP cache code below; you can copy and paste it into your PHP projects, and it should work as depicted in the flowchart above. You can play with the settings in the code: the cache expiry time, the cache file extension, ignored pages, and so on.
<?php
//settings
$cache_ext = '.html'; //file extension
$cache_time = 3600; //Cache file expires after this many seconds (1 hour = 3600 sec)
$cache_folder = 'cache/'; //folder to store Cache files
$ignore_pages = array('', ''); //full URLs of pages that should never be cached (placeholders here)
$dynamic_url = 'http://'.$_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']; // requested dynamic page (full url; REQUEST_URI already includes the query string)
$cache_file = $cache_folder.md5($dynamic_url).$cache_ext; // construct a cache file
$ignore = in_array($dynamic_url, $ignore_pages); //check if url is in ignore list
if (!$ignore && file_exists($cache_file) && time() - $cache_time < filemtime($cache_file)) { //check Cache exists and has not expired.
ob_start('ob_gzhandler'); //Turn on output buffering, "ob_gzhandler" for the compressed page with gzip.
readfile($cache_file); //read Cache file
echo '<!-- cached page - '.date('l jS \of F Y h:i:s A', filemtime($cache_file)).', Page : '.$dynamic_url.' -->';
ob_end_flush(); //Flush and turn off output buffering
exit(); //no need to proceed further, exit the flow.
}
//Turn on output buffering with gzip compression.
ob_start('ob_gzhandler');
######## Your Website Content Starts Below #########
?>
<!DOCTYPE html>
<html>
<head>
<title>Page to Cache</title>
</head>
<body>
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer ut tellus libero.
</body>
</html>
<?php
######## Your Website Content Ends here #########
if (!is_dir($cache_folder)) { //create a new folder if we need to
mkdir($cache_folder);
}
if(!$ignore){
$fp = fopen($cache_file, 'w'); //open file for writing
fwrite($fp, ob_get_contents()); //write contents of the output buffer in Cache file
fclose($fp); //Close file pointer
}
ob_end_flush(); //Flush and turn off output buffering
?>
You must place your page content between the marked comment lines. In fact, I'd suggest splitting the two halves into separate header and footer files, so that the same code can generate and serve cache files for all your different dynamic pages. If you read the comments in the code carefully, you should find it pretty much self-explanatory.
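For example, the split might look like this (the file names are hypothetical):
<?php
// page.php: the cache logic lives in two included files
require 'cache_header.php'; // settings + cache check: serves a valid cached copy and exits, or starts output buffering
?>
<!DOCTYPE html>
<html>
<head><title>Page to Cache</title></head>
<body>Your dynamically generated content here.</body>
</html>
<?php
require 'cache_footer.php'; // writes the output buffer to the cache file, then flushes it to the browser
?>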
You can do this at the edge with Cloudflare to get better performance. The nice thing is that you can put the domain into Development Mode at any time to see code changes, without having to change a server-side mechanism.
Your Page Rule would look something like the sketch below. Note the URL variable with a wildcard: any URL with that variable name will be cached at the edge. You should easily see TTFB plus the entire HTML download come in under 50 ms.
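A hypothetical rule (the domain and variable name are placeholders; the setting names are Cloudflare's own):
URL pattern: example.com/*subject=*
Cache Level: Cache Everything
Edge Cache TTL: a month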
Also note the expiration (Edge Cache TTL). You said a month, so I chose that setting, but I'd probably make it "1 day", just so I can keep an eye on things.
I want to set up data-driven subscriptions to mass-output PNG files. The problem is that adding a new Extension for PNG in rsreportserver.config under Configurations/Extensions/Render only gives one fixed size of PNG file.
Report A really ought to output a 6in x 3in PNG file, and Report B ought to output a 6in x 4in PNG file.
Yes, I could create multiple entries in rsreportserver.config, but this is confusing for end users, as they all show up needlessly in every user's export dropdown.
I proposed doing the mass image generation with an external program that builds a custom URL for each PNG (DeviceInfo settings can be part of the URL) and calls WebClient.DownloadFile() in a loop, but my supervisor is currently locked into the idea of data-driven subscriptions.
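For reference, such a URL might look like the line below (the server and report names are placeholders; the rc: parameters correspond to the IMAGE renderer's DeviceInfo settings):
http://myserver/ReportServer?/ReportA&rs:Format=IMAGE&rc:OutputFormat=PNG&rc:PageWidth=6in&rc:PageHeight=3in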
Per @iamdave's suggestion, just setting the overarching page dimensions in Report Designer does give a suitably sized PNG file via a data-driven subscription, without having to hardcode PNG dimensions in rsreportserver.config.
The reason I didn't initially notice this was the reports in question were graphs that were only ever used as subreports on an encompassing megareport and had never really been run as individual standalone reports. When used as a subreport, page dimensions never came into play, so they were left at the default 8.5x11.
"I would recommend switching to another edit control like SynEdit(it can load 80 mb of text file in few miliseconds)." - more memory for TMemo / TRichEdit
Is it possible?
Loading 1.5 MB takes me 8 seconds...
My previous post: Delphi: Form becomes Frozen while assigning strings in thread
I have Delphi 2010 and UniSynEdit for Delphi 2009.
Thanks!
The backend of SynEdit is a TStrings descendant which loads EVERYTHING IN ONE GO if you just use
ASynEdit.Lines.LoadFromFile
You could use on-demand loading (i.e. load only the visible lines, perhaps with 10 lines before and after), but you would have to handle it yourself.
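Here is the idea, sketched in PHP purely for illustration (the function and its parameters are made up; the Delphi wiring is up to you):
<?php
// Fetch only the lines currently in the viewport, plus 10 lines of context
// on each side, without ever holding the whole file in memory.
function loadWindow($path, $topLine, $visibleCount) {
    $file = new SplFileObject($path, 'r');
    $start = max(0, $topLine - 10); // 10 lines of context above the viewport
    $file->seek($start);            // jump to that line
    $lines = array();
    $wanted = $visibleCount + 20;   // viewport plus both margins
    while (!$file->eof() && count($lines) < $wanted) {
        $lines[] = rtrim($file->current(), "\r\n");
        $file->next();
    }
    return $lines; // hand these to the edit control as the user scrolls
}
?>
Note that SplFileObject::seek() still scans forward through the file line by line; a real implementation would index the line offsets once and seek directly, but the memory footprint stays small either way.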
I have a PDF file where every page is a (LZW) TIFF file. I know this because I created it. I want to be able to load it and save it as a bunch of TIFF files.
I can open the PDF file with CGPDFDocumentCreateWithURL, and get a page. I can even draw the page onto the screen.
What I WANT to do is draw the page into a bitmap context, so that I can use CGBitmapContextCreateImage to get the image into a CGImageRef. However, in order to create a bitmap context, I need to know the size and resolution of the image. I can't seem to find a way to get either a CGPDFDocument or a CGPDFPage to tell me the resolution of the image object on that page.
Is there an easier way to do this that I'm not realizing?
Thanks.
Ghostscript will work for you here:
gs -sDEVICE=tiff32nc -sOutputFile=foo-Page%d.tif foo.pdf
For a two-page document foo.pdf you should get:
foo-Page1.tif
foo-Page2.tif
From memory, I think the output resolution from GS is that of the containing page, not necessarily the resolution of the embedded file (unless these are the same to begin with).
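If you need the pages rendered at a particular resolution, Ghostscript's -r switch sets the output DPI (300 here is just an example):
gs -r300 -sDEVICE=tiff32nc -sOutputFile=foo-Page%d.tif foo.pdf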
If that's a problem and you want to recover each image exactly as it was embedded, resolution-wise, you can use iText (Java) or iTextSharp (.NET) to get at the image content stream (i.e. the bytes) and write it out to disk in the format of your choice, after converting the content stream into a PdfImage, IIRC.
Hope the Ghostscript option is applicable, to save writing yet another utility...