Reading HTML data from a database is slow? Need a better approach?

We have an 18 GB table in MySQL with a column "html_view" that stores HTML source data, which we display on the page. It now takes too much time to fetch the HTML from the "html_view" column, which makes the page load slowly.
We want an approach that simplifies our existing structure so the HTML data loads faster, from the DB or any other way.
One idea we are considering is to store the HTML data in .txt files and keep only the path of each txt file in the DB, then fetch the data by reading that particular file. But we fear this will cause extensive read/write operations on our server and may slow the server down instead.
Is there a better approach for making this faster?

First of all, why store HTML in the database at all? Why not render it on demand?
For big text tables, you could store the text compressed in a byte array, or compressed and base64-encoded if the column must stay plain text.
When you have a table with a large text column, how many other columns does it have? If it's not too many, you could partition the table and create a two-column key-value store. That should be faster and simpler than reading files from disk.
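
As a sketch of that compression idea in Python (the function names are made up; zlib and base64 are just one reasonable codec choice):

```python
import base64
import zlib

def compress_html(html: str) -> bytes:
    """Compress HTML so it fits a BLOB/VARBINARY column."""
    return zlib.compress(html.encode("utf-8"), 9)

def compress_html_text(html: str) -> str:
    """Compress and base64-encode, if the column must stay plain text."""
    return base64.b64encode(zlib.compress(html.encode("utf-8"), 9)).decode("ascii")

def decompress_html(blob: bytes) -> str:
    """Reverse of compress_html."""
    return zlib.decompress(blob).decode("utf-8")
```

HTML typically compresses well, so an 18 GB table could shrink substantially; note that base64 adds roughly 33% overhead on top of the compressed bytes, so prefer the raw byte-array column where the schema allows it.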

Have a look at the Apache Caching guide.
It explains disk and memory caching. From my POV, if the content is static (as the database table suggests), you should use Apache's capabilities instead of writing your own, slower mechanisms, because you would only add more layers on top.
The usual "measure instead of estimating" advice still applies though ;-).

Related

fileserver vs DB query speed

I have very simple data that I need to retrieve as quickly as possible:
I have JSON data that is associated with a hash of an email, so the table looks like this:
email_sha256, json
and has millions of rows.
I was wondering if one of the following two options would be faster:
1. Split the single large table into many smaller ones (split by alphabetical order).
2. Do not use a DB at all and serve the data as files, i.e. every email hash is the name of a separate file that contains the JSON data.
Creating a file for each user (for each email address) looks wrong in many respects:
If you need good performance, you need to keep the number of files per directory small.
DBs were created for exactly this: an index lets you retrieve the information very fast.
Without a DB, you need your own lock/synchronization mechanism.
If you are using a DB, why use JSON to store the data?
If you are looking for performance, do not serialize the data to JSON.
What do you mean by "fast"? Can you quantify this duration/delay?
Unless (maybe) the information associated with each user is huge (the size would have to be much larger than one disk sector). But again, in that case, what do you mean by fast?
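
To make the index point concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table matches the email_sha256/json layout from the question, while the file name and helper names are made up:

```python
import hashlib
import json
import sqlite3

conn = sqlite3.connect("emails.db")
# The hash is the PRIMARY KEY, so lookups go straight through the B-tree index.
conn.execute("""
    CREATE TABLE IF NOT EXISTS email_data (
        email_sha256 TEXT PRIMARY KEY,
        json         TEXT NOT NULL
    )
""")

def put(email: str, payload: dict) -> None:
    key = hashlib.sha256(email.lower().encode()).hexdigest()
    conn.execute(
        "INSERT OR REPLACE INTO email_data VALUES (?, ?)",
        (key, json.dumps(payload)),
    )
    conn.commit()

def get(email: str) -> dict | None:
    key = hashlib.sha256(email.lower().encode()).hexdigest()
    row = conn.execute(
        "SELECT json FROM email_data WHERE email_sha256 = ?", (key,)
    ).fetchone()
    return json.loads(row[0]) if row else None
```

Even with millions of rows, a primary-key lookup like this stays fast, which is exactly the "DBs were created for that" point above.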

Best Way to Relate a File to a Database

So, for example let's say I wanted to setup a SQLite database that contains some data on invoices. Let's say each invoice has a date, invoice number, and company associated with it for simplicity. Is there a good way for the database to be able to access or store a PDF file(~300-700kb/file) of the specified invoice? If this wouldn't work any alternative ideas on what might work well?
Any help is greatly appreciated
You could store the data (each file) as a BLOB, which is a byte array/stream, so the file can basically be stored as-is within the database.
However, it may be more efficient (see the linked article) to just store the path to the file, or perhaps just the file name (depending upon standards), and then use that to retrieve and view the invoice.
Up to around 100 KB, though, it can be more efficient to store files as BLOBs. You may find this document useful: SQLite 35% Faster Than The Filesystem.
SQLite does support a BLOB data type, which stores data exactly as it is entered. From the documentation:
The current implementation will only support a string or BLOB length up to 2^31 - 1, or 2147483647 bytes.
This limit is much larger than your expected need of 300-700 KB per file, so what you want should be possible. The other thing to consider is the size of your database. Unless you expect to have well north of around 100 TB, then the database size limit also should not pose a problem.
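
For illustration, a minimal sqlite3 sketch in Python of the BLOB approach, using the simplified invoice schema from the question (the file name and column choices are assumptions):

```python
import sqlite3

conn = sqlite3.connect("invoices.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS invoices (
        invoice_number TEXT PRIMARY KEY,
        invoice_date   TEXT NOT NULL,
        company        TEXT NOT NULL,
        pdf            BLOB           -- the file itself, stored byte-for-byte
    )
""")

def store_invoice(number: str, date: str, company: str, pdf_path: str) -> None:
    """Read the PDF from disk and store it in the BLOB column."""
    with open(pdf_path, "rb") as f:
        pdf_bytes = f.read()
    conn.execute(
        "INSERT INTO invoices VALUES (?, ?, ?, ?)",
        (number, date, company, sqlite3.Binary(pdf_bytes)),
    )
    conn.commit()

def load_invoice_pdf(number: str) -> bytes | None:
    """Fetch the stored PDF bytes back, ready to write to a file or viewer."""
    row = conn.execute(
        "SELECT pdf FROM invoices WHERE invoice_number = ?", (number,)
    ).fetchone()
    return row[0] if row else None
```

At 300-700 KB per file this sits above the ~100 KB BLOB sweet spot mentioned earlier, so benchmarking both this and the store-the-path variant against your actual files is worthwhile.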

What's the best way to display a large amount of data in a Table View?

I'm a newbie to app development. I'm using Xcode 4.3.2. I'm attempting to develop an app using a tab bar with a table view. In the table view I need to list about 100 cities and info about those 100 cities when the user selects one. Basically, I already have that data about the cities in a Excel spreadsheet.
I can't really find good examples of what I want to achieve. I've heard the terms parsing XML, SQLite, Core Data, database, etc, and I'm not sure if that is what I need to do.
I'd thankfully accept any suggestions.
If the data in the table changes or gets edited, then by using a database you will avoid rolling a new patch for those minor changes (you just change the values in the DB).
If the data is the same and won't change for a long time, and you plan to patch the application anyway, then you just need a source for that data (the spreadsheet).
For parsing the data you can use anything; when talking about showing 100 cities, it depends how big the total data you will be querying is and how fast it needs to be, and you just need to benchmark it.
If you are querying about 500k records, need to do some 'figuring out', and it takes too long to load, then transforming your data into XML and parsing that may give you better performance.
You have to at least design your way toward what you want to achieve. Check the performance and tweak it to find the sweet spot.
Right now I look at it as tackling an unknown problem. Spend some time and build something; this will help you see the potential problems better.
While databases are good, for a few hundred elements you can tolerate inefficiency. If your existing data are in an Excel spreadsheet, the easiest way to get them into your app is to export the Excel spreadsheet to Comma-Separated-Values (CSV), then make your app read CSV files. (If your Excel spreadsheet has multiple worksheets, you'll need to convert each separately.)
How do you parse CSV? See iPhone : How to convert CSV format into NSData or NSString?
You'll end up with arrays of arrays of NSString. You'll probably need to define a new class for your city data, and convert each row in the imported data to one city element.
If you need to know more, posting a few rows from your spreadsheet may help.
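
The linked answer is Objective-C, but the idea is language-independent. As a hedged illustration, here is the same parse-CSV-into-records step sketched in Python, with an assumed three-column layout (name, population, info):

```python
import csv
from dataclasses import dataclass

@dataclass
class City:
    name: str
    population: int
    info: str

def load_cities(path: str) -> list[City]:
    """Read the exported spreadsheet: one row per city, columns as assumed above."""
    cities = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row, if the export has one
        for name, population, info in reader:
            cities.append(City(name, int(population), info))
    return cities
```

In the app itself you would do the equivalent conversion from each imported row to one city object, as the answer describes.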

How to store videos in a PostgreSQL database?

I am storing image files (like jpg, png) in a PostgreSQL database. I found information on how to do that here.
Likewise, I want to store videos in a PostgreSQL database. I searched the net - some say one should use a data type such as bytea to store binary data.
Can you tell me how to use a bytea column to store videos?
I would generally not recommend storing huge blobs (binary large objects) inside PostgreSQL if referential integrity is not your paramount requirement. Storing huge files in the filesystem is much more efficient:
Much faster, less disk space used, easier backups.
I have written a more comprehensive assessment of the options you've got in a previous answer to a similar question. (With deep links to the manual.)
We did some tests on the practical limits of the bytea data type. There is a theoretical limit of 1 GB, but the practical limit is about 20 MB: processing larger bytea values eats too much RAM, and encoding and decoding take time too. Personally I don't think storing videos this way is a good idea, but if you need it, then use large objects (LOBs) instead.
Without knowing what programming language you are using, I can only give a general approach:
1. Create a table with a column of type bytea.
2. Get the contents of the video file into a variable.
3. Insert a row into that table, with that variable as the data for the bytea column.
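
As a hedged sketch of those three steps in Python with psycopg2 (the table name, file name, and connection string are placeholders):

```python
import psycopg2

conn = psycopg2.connect("dbname=media user=postgres")  # placeholder DSN
with conn, conn.cursor() as cur:
    # Step 1: a table with a bytea column.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS videos (
            id   serial PRIMARY KEY,
            name text   NOT NULL,
            data bytea  NOT NULL
        )
    """)
    # Step 2: read the video file into a variable
    # (keep it well under the ~20 MB practical limit mentioned above).
    with open("clip.mp4", "rb") as f:
        video_bytes = f.read()
    # Step 3: insert a row; psycopg2.Binary wraps the bytes for the bytea column.
    cur.execute(
        "INSERT INTO videos (name, data) VALUES (%s, %s)",
        ("clip.mp4", psycopg2.Binary(video_bytes)),
    )
```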

Silverlight Isolated Storage and loading big files

In a Windows Phone 7 application, I would like to query a big XML file (a list of cities) stored in Isolated Storage. If I do it this way, will the whole file be loaded into memory (> 5 MB)? If so, what other solutions do I have?
Edit:
More details: I want to use AutoCompleteBox (http://www.jeff.wilcox.name/2008/10/introducing-autocompletebox/), but instead of using a web service (this is fixed data, no need to be online), I want to query a file/database/isolated storage... I have a fixed list of cities. I said in the comments it's 40k, but it actually seems closer to 1k rows.
Instead of using isolated storage for this, would it be an option for you to use a web service instead... or are you designing your app for an offline approach?
Querying a WCF or JSON-enabled web service is really simple, and it will be easier for you to maintain :)
Rather than have a big file containing all the data, can you not break it down into lots of smaller files? (One for each city?)
You could have a separate file to keep an index of them all if need be. Alternatively, depending on the naming of the files, you may be able to use IsolatedStorageFile.GetFileNames to get a list of all files.
I would create my own file format, using, for example, a separator between fields, with one row for each record.
That way you can read your file line-by-line to fill your data structure with these advantages:
no need to pull the whole file into memory
no XML overhead (in a desktop application this may not matter, but on the phone, dropping the XML can make a 5 MB file quite a bit smaller)
Dumb example:
New York City; 12345
Berlin; 25635
...
EDIT: given that the volume is not that large, you don't need any form of indexing or on-demand loading. I would store the cities as stated above (one record per line), load them into a list, and use LINQ to select the items you need. This will probably be fast and will keep your application very responsive.
In this case, in my opinion, XML is not the best tool for the job. Your structure is very simple, and storing it as XML would probably double the file size, which is a concern on a mobile device, and would also slow the parsing, another concern in this case.
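
The answer above has WP7 and LINQ in mind, but the load-then-filter idea translates directly. A Python sketch using the semicolon-separated format from the example (the helper names are made up):

```python
def load_records(path: str) -> list[tuple[str, str]]:
    """One 'city; code' record per line -- no XML overhead, no full-file DOM."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:  # the file is streamed line by line
            line = line.strip()
            if not line:
                continue
            city, code = [part.strip() for part in line.split(";", 1)]
            records.append((city, code))
    return records

def autocomplete(records: list[tuple[str, str]], prefix: str) -> list[str]:
    """The equivalent of the LINQ query: filter the in-memory list by typed prefix."""
    prefix = prefix.lower()
    return [city for city, _ in records if city.lower().startswith(prefix)]
```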
