How to store images in your filesystem - database

Currently, I've got images (max. 6 MB) stored as BLOBs in an InnoDB table.
As the size of the data grows, the nightly backup is taking longer and longer, hindering normal performance.
So, the binary data needs to go to the file system. (pointers to the files will be kept in the DB.)
The data has a tree-like relation:
- main site
  - user_0
    - album_0
    - album_1
    - album_n
  - user_1
  - user_n
etc...
Now I want the data to be distributed evenly through the directory structure. How should I accomplish this?
I guess I could try MD5('userId, albumId, imageId'); and slice up the resulting string to get my directory path:
/var/imageStorage/f/347e/013b/c042/51cf/985f7ad0daa987d.jpeg
This would allow me to map the first character to a server and evenly distribute the directory structure over multiple servers.
This would, however, not keep images organised per user and would likely spread the images for one album over multiple servers.
My question is:
What is the best way to store the image data in the file system in a balanced way, while keeping user/album data together?
Am I thinking in the right direction, or is this the wrong way of doing things altogether?
Update:
I will go for the md5(user_id) string slicing for the split at the highest level.
And then put all user data in that same bucket. This will ensure an even distribution of data while keeping user data stored close together.
/var
  - imageStorage
    - f/347e/013b
      - f347e013bc04251cf985f7ad0daa987d
        - 0
          - album1_10
            - picture_1.jpeg
        - 1
          - album1_1
            - picture_2.jpeg
            - picture_3.jpeg
          - album1_11
            - picture_n.jpeg
        - n
          - album1_n
I think I will use the albumId split up from behind (I like that idea!) so as to keep the number of albums per directory smaller (although it won't be necessary for most users).
Thanks!

Just split your userid from behind. e.g.
UserID = 6435624
Path = /images/24/56/6435624
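For illustration, a minimal Python sketch of that split-from-behind layout (the /images root and the two two-digit levels just mirror the example above; adjust as needed):

import os

def user_path(user_id, root='/images'):
    # Pad to at least four digits, then take two 2-digit buckets from the end:
    # 6435624 -> buckets '24' and '56' -> /images/24/56/6435624
    s = str(user_id).zfill(4)
    return os.path.join(root, s[-2:], s[-4:-2], str(user_id))

print(user_path(6435624))   # /images/24/56/6435624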
As for the backup, you could use MySQL replication and back up the slave
database to avoid problems (e.g. locks) while backing up.

One thing about distributing the filenames into different directories: if you consider splitting your MD5 filenames into different subdirectories (which is generally a good idea), I would suggest keeping the complete hash as the filename and duplicating the first few characters as directory names. This way you make it easier to identify files, e.g. when you have to move directories.
e.g.
abcdefgh.jpg -> a/ab/abc/abcdefgh.jpg
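As a rough Python sketch of that prefix-duplication layout (the root directory and the three levels are just taken from the a/ab/abc example):

import os

def hashed_path(filename, root='/var/imageStorage', depth=3):
    # abcdefgh.jpg -> <root>/a/ab/abc/abcdefgh.jpg
    # The directories only duplicate the hash prefix; the full hash stays in the filename.
    stem = os.path.splitext(filename)[0]
    prefixes = [stem[:i] for i in range(1, depth + 1)]
    return os.path.join(root, *prefixes, filename)

print(hashed_path('abcdefgh.jpg'))   # /var/imageStorage/a/ab/abc/abcdefgh.jpg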
If your filenames are not evenly distributed (not a hash), try to choose a splitting method that gives an even distribution, e.g. the last characters if it is an incrementing user ID.

I'm using this strategy given a unique picture ID:
- reverse the string
- zero-fill it with a leading zero if there's an odd number of digits
- chunk the string into two-digit substrings
- build the path as below
17 >> 71 >> /71.jpg
163 >> 0361 >> /03/61.jpg
6978 >> 8796 >> /87/96.jpg
1687941 >> 01497861 >> /01/49/78/61.jpg
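A small Python sketch of this mapping, reproducing the examples above (the .jpg extension and leading slash come straight from them):

def picture_path(picture_id):
    # 1687941 -> reversed '1497861' -> zero-filled '01497861'
    # -> two-digit chunks -> /01/49/78/61.jpg
    s = str(picture_id)[::-1]
    if len(s) % 2:
        s = '0' + s
    chunks = [s[i:i + 2] for i in range(0, len(s), 2)]
    return '/' + '/'.join(chunks) + '.jpg'

for pid in (17, 163, 6978, 1687941):
    print(pid, picture_path(pid))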
This method ensures that each folder contains up to 100 pictures and 100 sub-folders and the load is evenly distributed between the left-most folders.
Moreover, you just need the ID of the picture to reach the file; there is no need to read the picture table containing other metadata.
User data is admittedly not stored close together, and the ID-to-path relation is predictable; whether that matters depends on your needs.

Related

MarkLogic - How to know the size of the database, the size of indexes, and the total number of indexes

We are using MarkLogic 9.0.8.2
We have set up a MarkLogic cluster and ingested around 18M XML documents; a few indexes have been created (fields, path ranges and so on).
Now, while setting up another environment with the same configuration, indexes and number of records, I am not able to understand why the total size on the database status page is different from the previous environment.
So I started comparing the database status pages of both clusters, where I can see the size per forest/replica forest.
In this case, I would like to know the size of each:
- Database
- Index
I would also like to know (without expanding each one through the admin interface) the total number of indexes in a given database.
An option within the Admin interface or through XQuery would also do.
MarkLogic does not break down the index sizes separately from the Database size. One reason for this is because the data is stored together with the Universal Index.
You could approximate the size of the other indexes by creating them one at a time, and checking the size before and after the reindexer runs and the deleted fragments are merged out. We usually don't find a lot of benefit in trying to determine the exact index sizes, since the benefits they provide typically outweigh the cost of storage.
It's hard to say exactly why there is a size discrepancy. One common cause would be the number of deleted fragments in each database. Deleted fragments are pieces of data that have been marked for deletion (usually due to an update, delete or other change). Deleted fragments will continue to consume database space until they are merged out. This happens by default, or it can be manually started at the forest or database level.
The database size, and configured indexes can be determined through the Admin UI, Query Console (QConsole) or via the MarkLogic REST Management API (RMA) endpoints. QConsole supports a number of languages, but server side Javascript and XQuery are the most common. RMA can return results in XML or JSON.
Database Size:
REST: http://[host-name]:8002/manage/v2/databases/[database-name]?view=status
QConsole: Sum the disk-size elements for the stands from xdmp.forestStatus (JavaScript) or xdmp:forest-status (XQuery) for all the forests in the database.
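For instance, a hedged Python sketch of calling that status endpoint (host, database name and credentials are placeholders; the Management API uses digest authentication by default and can return JSON when asked):

import requests
from requests.auth import HTTPDigestAuth

# Placeholder host, database and credentials - adjust to your cluster.
url = 'http://localhost:8002/manage/v2/databases/Documents'
resp = requests.get(url,
                    params={'view': 'status', 'format': 'json'},
                    auth=HTTPDigestAuth('admin', 'admin'))
resp.raise_for_status()
status = resp.json()
# The size-related counts sit under the database status properties; the exact
# key names can vary by version, so inspect the payload rather than hard-coding paths.
print(list(status.keys()))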
Configured Indexes:
REST: http://[host-name]:8002/manage/v2/databases/[database-name]?view=package
QConsole: Use xdmp.getConfiguration (JavaScript) or xdmp:get-configuration (XQuery) in conjunction with the xdmp.databaseGet[index type] or xdmp:database-get-[index type] functions, for example:
xquery version "1.0-ml";
(: forest-status elements are returned in this namespace :)
declare namespace forest = "http://marklogic.com/xdmp/status/forest";

for $db-id in xdmp:databases()
let $db-name := xdmp:database-name($db-id)
let $db-size :=
  fn:sum(
    for $f-id in xdmp:database-forests($db-id)
    let $f-status := xdmp:forest-status($f-id)
    let $f-size :=
      fn:sum(
        for $stand in $f-status/forest:stands/forest:stand
        let $stand-size := $stand/forest:disk-size/fn:data(.)
        return $stand-size
      )
    return $f-size
  )
order by $db-size descending
return $db-name || " = " || $db-size

Why use an external database with Matlab?

Why should you use an external database (e.g. MySQL) when working with (large/growing) data?
I know of some projects which use SQL databases, but I can't see the advantage you get from doing this compared to just storing everything in .mat files (as, for example, stated here: http://www.matlabtips.com/how-to-store-large-datasets/).
Where is this necessary? Where does this approach simplify things?
Regarding growing data, let's take an example where, on a production line, you would measure different sources with different sensors:
Experiment.Date = '2014-07-18 # 07h28';
Experiment.SensorType = 'A';
Experiment.SensorSerial = 'SENSOR-00012-A';
Experiment.SourceType = 'C';
Experiment.SourceSerial = 'SOURCE-00143-C';
Experiment.SensorPositions = 180 * linspace(0, 359, 360) / pi;
Experiment.SensorResponse = rand(1, 360);
And store these experiments on disk using .mat files:
experiment.2013-01-02.0001.mat
experiment.2013-01-02.0002.mat
experiment.2013-01-02.0003.mat
experiment.2013-01-03.0004.mat
...
experiment.2014-07-18.0001.mat
experiment.2014-07-18.0002.mat
So now, if I ask you:
"what is the typical response of sensors of type B when the source is of type E" ?
Or:
"Which sensor has best performances to measure sources of type C ? Sensors A or sensors B ?"
"How performances of these sensors degrade with time ?"
"Did modification we made last july to production line improved lifetime of sensors A ?"
Loading all these .mat files into memory just to check whether the date, sensor and source type are correct, and then calculating min/mean/max responses and other statistics, is going to be very painful and time consuming, plus it means writing custom code for file selection!
Building a data-base on top of these .mat files can be very useful to "SELECT/JOIN/..." elements of interest and then perform further statistic or operations.
NB: The database does not replace the .mat files (i.e. the information); it is just a practical and standard way to quickly select some of them based on conditions, without having to load and parse everything.
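To make that concrete, here is a rough Python sketch of such a metadata index (the field names mirror the Experiment struct above; reading the .mat files with scipy.io.loadmat and the variable name 'Experiment' are assumptions about how they were saved):

import glob
import sqlite3
import scipy.io

# Index only the small metadata; the bulky sensor responses stay in the .mat files.
conn = sqlite3.connect('experiments.db')
conn.execute("""CREATE TABLE IF NOT EXISTS experiment
                (path TEXT PRIMARY KEY, date TEXT,
                 sensor_type TEXT, sensor_serial TEXT,
                 source_type TEXT, source_serial TEXT)""")

for path in glob.glob('experiment.*.mat'):
    m = scipy.io.loadmat(path, squeeze_me=True, struct_as_record=False)['Experiment']
    conn.execute("INSERT OR REPLACE INTO experiment VALUES (?, ?, ?, ?, ?, ?)",
                 (path, str(m.Date), str(m.SensorType), str(m.SensorSerial),
                  str(m.SourceType), str(m.SourceSerial)))
conn.commit()

# "Which files hold sensors of type B measuring sources of type E?" is now a cheap SELECT;
# only the matching .mat files then need to be loaded for the actual statistics.
rows = conn.execute("""SELECT path FROM experiment
                       WHERE sensor_type = 'B' AND source_type = 'E'""").fetchall()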

MATLAB Database Searching

I have to compare certain stored images (saved in .dat format) in my MATLAB Fingerprint Recognition System. I can perform 1:1 matching; however, it isn't feasible for a relatively large database of, say, 300 employees.
Any suggestions for what I could do?
(a) Something like adding the address of where the complete database is stored
(b) Functions to employ the best possible matching algorithm
c = [';#3EA4:aei7]ced.CFHE;4\T>*Y>,dL0,HOQQMJLJE9PX[[Q.ZF.\JTCA1dd'
'<A;FB:;bfj8^df//DGIF<5]UF+ZH-eM>-IorRPNMPIE-Y\\R8[I8]SUDW2e+'
'=4BGC;<cgk9_e00DEOJG=6^VG,[I.fN?5jpsSQPNQPF.Z,]S9`S9cTWVX:+,'
':5CHD<=4hlh`f11EFPKHA7&WH-\J/gOC?kqtTRRORQJ8--^TB+T=dWYWY;,_'
';6D3E=>7imiag2IFOQLID8''XI.]K0"PD#l32UZhP//P988_WC,U>+Z^Y\<2`'
'<82BF>?8jnjbhLJGPRMJE9/YJ/`L1#QMC$;;V[iv09QE99,XD.YB,[_\]=3a'
'>9;CG?#9kokc2MKHQSOKF:0ZL0aM2$RNG%AAW\jw9E.FEE-_G8aG.d`]_W5+'
'?:CDH#A:lpld3NLIRTPLG=1[M1bN3%SOH4BBX]kx:J9LLL8`H9bJ/+d_dX6,'
'#;DEIAB;mqmePOMJSUQMJ>2\N2cO4&TPP#HCY^lyDKEMMN9+I#+S8,+deY7^'
'8#EFJBC<4rnfQPNPTVRNKB3]O3dP5''UQQCIDZ_mzEPFNNOE,RA,T9/,++\8_'
'9A2G3CD=544gRQPQUWUOLE4^P4"Q6(VRRIJE[`n{KQKOOPK-SE.W:F/,,]Z+'
':BDH4DE>655hSRQRVXVPMF5_Q5#R>)eSSJKF\ao0L.L-WUL.VF8XCH001_[,'
';3EI<EO?766iTSRSWYWQNG6$R6''S?*fTTlLQ]bp1M/P.XVP8[H9]DIDA=`\]'
'?4D3=FP#877jUTSTXZXROK7%S7(TF+gUUmMR^cq:N9Q8YZQ9_I>cIJEB>d_^'
'#5E#>GQA98b3VUTUY*YSPL8&T>)UI,hVhnNS_dr;PE.9Z[RCaR?+JTFC?e`+'
'79FA?HRB:9c4WVUVZ+ZWQM=,WG*VJ-"gi4OT`es<QL9E[\TD+SA,SWUVW+d,'
'8:3B#JSX;:dVXWVW[,[XRN>-XH+bK.#hj#PUvftDRMEF,]UH,UB.TYVWX,e\'
'9;ECAKTY<;eWYXWX\:)YSOE.YI,cL/$ikCqV1guE/PFL-^XI-YG/WZWXY1+]'
':AFDBLUZ=<fXZYXY,;*ZTPF/ZJ-dM0%j#Jrt2hxH0QKM8,YJ.ZI8[^YY\2,,'
';B3ECMV[>jgY[ZYZ-<7[XQG0[K.eN1&"$K2u:iyO9.PN9-_K8aJ9\_]\]82['
'?CEFDNW\?khZ\[Z[==8\YRH1\M/!O2''#%m31Bw0PE/QXE8+R9bS;da^]_93\'
'#2FGEOX]ali[]\[\>>9(ZSL2]N0"P3($&n;2Cx1QN9--L9,SA+T<+d__`:4,'
'A3GHFPY^bmj\^]\]??:)[TM3^O1%Q4)%''oA:D0:0OE.8ME-TE,XB,+`da;5['
'643IGQZ_cnk]_^]^##;5\UN4_P2&R6*&(3B;E1<1PN99NL8WF.^C/,a+bY6,'
'7:F3HR[`dol^`_^_AA<6]VO5`Q3''S>+'');CBF:=:QOEEOO9_G8aH6/d,cZ[Y'
'8;G4IS\aep4_a`_-BD=7''XP6aR4(T?,(5#DCHCC;RPFLPPD`H9bJ70+0d\\Z'
'9BH>JT^bf45`ba`.CE#8(YQ7#S5)UD-)?AEDIDDD/QKMVQJ+S?cSDF,1e]a,'
':C3?K4_cg5[acbaADFA92ZR8$T6*VE.*#JFEJEEE0.NNWTK,U#+TEG0?+_bX'
';2D#L9`dh6\bdcbBEGD:3[S=)U7+cK/+CKGFLIKI9/OWZUL-VA,WIHB#,`cY'];
i = double(c(:)-32);
j = cumsum(diff([0; i])<=0) + 1;
S = sparse(i,j,1)';
spy(S)

How to generate large files (PDF and CSV) using AppEngine and Datastore?

When I first started developing this project, there was no requirement for generating large files, however it is now a deliverable.
Long story short, GAE just doesn't play nice with any large-scale data manipulation or content generation. The lack of file storage aside, even something as simple as generating a PDF with ReportLab from 1500 records seems to hit a DeadlineExceededError. This is just a simple PDF comprising a table.
I am using the following code:
self.response.headers['Content-Type'] = 'application/pdf'
self.response.headers['Content-Disposition'] = 'attachment; filename=output.pdf'
doc = SimpleDocTemplate(self.response.out, pagesize=landscape(letter))
elements = []
dataset = Voter.all().order('addr_str')
data = [['#', 'STREET', 'UNIT', 'PROFILE', 'PHONE', 'NAME', 'REPLY', 'YS', 'VOL', 'NOTES', 'MAIN ISSUE']]
i = 0
r = 1
s = 100
while i < 1500:
    voters = dataset.fetch(s, offset=i)
    for voter in voters:
        data.append([voter.addr_num, voter.addr_str, voter.addr_unit_num, '', voter.phone,
                     voter.firstname + ' ' + voter.middlename + ' ' + voter.lastname])
        r = r + 1
    i = i + s
t = Table(data, '', r * [0.4 * inch], repeatRows=1)
t.setStyle(TableStyle([('ALIGN', (0, 0), (-1, -1), 'CENTER'),
                       ('INNERGRID', (0, 0), (-1, -1), 0.15, colors.black),
                       ('BOX', (0, 0), (-1, -1), 0.15, colors.black),
                       ('FONTSIZE', (0, 0), (-1, -1), 8)
                       ]))
elements.append(t)
doc.build(elements)
Nothing particularly fancy, but it chokes. Is there a better way to do this? If I could write to some kind of file system and generate the file in bits, and then rejoin them that might work, but I think the system precludes this.
I need to do the same thing for a CSV file, however the limit is obviously a bit higher since it's just raw output.
self.response.headers['Content-Type'] = 'application/csv'
self.response.headers['Content-Disposition'] = 'attachment; filename=output.csv'
dataset = Voter.all().order('addr_str')
writer = csv.writer(self.response.out, dialect='excel')
writer.writerow(['#', 'STREET', 'UNIT', 'PROFILE', 'PHONE', 'NAME', 'REPLY', 'YS', 'VOL', 'NOTES', 'MAIN ISSUE'])
i = 0
s = 100
while i < 2000:
    last_cursor = memcache.get('db_cursor')
    if last_cursor:
        dataset.with_cursor(last_cursor)
    voters = dataset.fetch(s)
    for voter in voters:
        writer.writerow([voter.addr_num, voter.addr_str, voter.addr_unit_num, '', voter.phone,
                         voter.firstname + ' ' + voter.middlename + ' ' + voter.lastname])
    memcache.set('db_cursor', dataset.cursor())
    i = i + s
memcache.delete('db_cursor')
Any suggestions would be very much appreciated.
Edit:
Below I have documented three possible solutions based on my research, plus suggestions, etc.
They aren't necessarily mutually exclusive, and could be a slight variation or combination of any of the three; however, the gist of the solutions is there. Let me know which one you think makes the most sense and might perform the best.
Solution A: Using mapreduce (or tasks), serialize each record, and create a memcache entry for each individual record keyed with the keyname. Then process these items individually into the pdf/xls file. (use get_multi and set_multi)
Solution B: Using tasks, serialize groups of records, and load them into the db as a blob. Then trigger a task once all records are processed that will load each blob, deserialize them and then load the data into the final file.
Solution C: Using mapreduce, retrieve the keynames and store them as a list, or serialized blob. Then load the records by key, which would be faster than the current loading method. If I were to do this, which would be better, storing them as a list (and what would the limitations be...I presume a list of 100,000 would be beyond the capabilities of the datastore) or as a serialized blob (or small chunks which I then concatenate or process)
Thanks in advance for any advice.
Here is one quick thought, assuming it is crapping out fetching from the datastore. You could use tasks and cursors to fetch the data in smaller chunks, then do the generation at the end.
Start a task which does the initial query and fetches 300 (arbitrary number) records, then enqueues a named(!important) task that you pass the cursor to. That one in turn queries [your arbitrary number] records, and then passes the cursor to a new named task as well. Continue that until you have enough records.
Within each task process the entities, then store the serialized result in a text or blob property on a 'processing' model. I would make the model's key_name the same as the task that created it. Keep in mind the serialized data will need to be under the API call size limit.
To serialize your table pretty fast you could use:
serialized_data = "\x1e".join("\x1f".join(voter) for voter in data)
Have the last task (when you have enough records) kick off the PDF or CSV generation. If you use key_names for your models, you should be able to grab all of the entities with encoded data by key. Fetches by key are pretty fast, and you'll know the models' keys since you know the last task name. Again, you'll want to be mindful of the size of your fetches from the datastore!
To deserialize:
list(voter.split('\x1f') for voter in serialized_data.split('\x1e'))
Now run your PDF / CSV generation on the data. If splitting up the datastore fetches alone does not help you'll have to look into doing more of the processing in each task.
Don't forget in the 'build' task you'll want to raise an exception if any of the interim models are not yet present. Your final task will automatically retry.
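Putting those pieces together, a rough sketch on the old Python 2 runtime using the db API and taskqueue (the handler URL, batch size and the Processing model are invented for illustration; Voter is the model from the question):

from google.appengine.api import taskqueue
from google.appengine.ext import db, webapp

BATCH = 300  # arbitrary; tune so each task finishes well inside the deadline

class Processing(db.Model):
    # Interim storage for one serialized chunk; key_name identifies the chunk.
    data = db.TextProperty()

class ProcessChunk(webapp.RequestHandler):
    def post(self):
        cursor = self.request.get('cursor')
        chunk = int(self.request.get('chunk', '0'))
        q = Voter.all().order('addr_str')
        if cursor:
            q.with_cursor(cursor)
        voters = q.fetch(BATCH)
        if voters:
            rows = [[v.addr_num, v.addr_str, v.addr_unit_num, '', v.phone,
                     v.firstname + ' ' + v.middlename + ' ' + v.lastname]
                    for v in voters]
            serialized = "\x1e".join("\x1f".join(unicode(c) for c in r) for r in rows)
            Processing(key_name='chunk-%05d' % chunk, data=serialized).put()
            # Named task, so the chain cannot accidentally fork or repeat a chunk.
            taskqueue.add(url='/tasks/process',
                          name='process-chunk-%05d' % (chunk + 1),
                          params={'cursor': q.cursor(), 'chunk': chunk + 1})
        else:
            # No more records: hand off to the task that assembles the PDF/CSV.
            taskqueue.add(url='/tasks/build', name='build-output')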
Some time ago I faced the same problem with GAE. After many attempts I simply moved to another web host, since I was able to. Nevertheless, before moving I had two ideas for how to resolve it. I haven't implemented them, but you may try to.
The first idea is to use a SOA/RESTful service on another server, if that is possible. You can even create another application on GAE in Java, do all the work there (I guess with Java's PDFBox it will take much less time to generate the PDF), and return the result to Python. But this option requires you to know Java and also to divide your app into several parts, with terrible modularity.
So, there's another approach: you can create a "ping-pong" game with the user's browser. The idea is that if you cannot do everything in a single request, force the browser to send you several. During the first request do only the part of the work that fits the 30-second limit, then save the state and generate a 'ticket' - a unique identifier of the 'job'. Finally, send the user a response which is a simple page that redirects back to your app, parametrized by the job ticket. When you receive it, just restore the state and proceed with the next part of the job.
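A very rough sketch of that ping-pong flow on the old webapp framework (the JobState model, do_some_work and TOTAL_WORK are hypothetical names, only there to show the ticket mechanics):

import uuid
from google.appengine.ext import db, webapp

class JobState(db.Model):
    # key_name is the ticket; progress is whatever is needed to resume the job.
    progress = db.IntegerProperty(default=0)
    done = db.BooleanProperty(default=False)

class PingPong(webapp.RequestHandler):
    def get(self):
        ticket = self.request.get('ticket') or uuid.uuid4().hex
        job = JobState.get_by_key_name(ticket) or JobState(key_name=ticket)
        job.progress = do_some_work(job.progress)   # hypothetical: one slice of work under the limit
        job.done = job.progress >= TOTAL_WORK       # hypothetical completion check
        job.put()
        if job.done:
            self.response.out.write('Finished, download ready.')
        else:
            # Bounce the browser straight back, carrying the ticket, to continue the job.
            self.response.out.write(
                '<meta http-equiv="refresh" content="0;url=/pingpong?ticket=%s">' % ticket)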

key-value store for time series data?

I've been using SQL Server to store historical time series data for a couple hundred thousand objects, observed about 100 times per day. I'm finding that queries (give me all values for object XYZ between time t1 and time t2) are too slow (for my needs, slow is more than a second). I'm indexing by timestamp and object ID.
I've entertained the thought of using something like a key-value store such as MongoDB instead, but I'm not sure whether this is an "appropriate" use of this sort of thing, and I couldn't find any mention of using such a database for time series data. Ideally, I'd be able to do the following queries:
retrieve all the data for object XYZ between time t1 and time t2
do the above, but return one data point per day (first, last, closest to time t...)
retrieve all data for all objects for a particular timestamp
The data should be ordered, and ideally it should be fast to write new data as well as update existing data.
It seems like my desire to query by object ID as well as by timestamp might necessitate having two copies of the database, indexed in different ways, to get optimal performance... Does anyone have experience building a system like this with a key-value store, HDF5, or something else? Or is this totally doable in SQL Server and I'm just not doing it right?
It sounds like MongoDB would be a very good fit. Updates and inserts are super fast, so you might want to create a document for every event, such as:
{
    object : XYZ,
    ts : new Date()
}
Then you can index the ts field and queries will also be fast. (By the way, you can create multiple indexes on a single database.)
How to do your three queries:
retrieve all the data for object XYZ between time t1 and time t2
db.data.find({object : XYZ, ts : {$gt : t1, $lt : t2}})
do the above, but return one data point per day (first, last, closest to time t...)
// first
db.data.find({object : XYZ, ts : {$gt : new Date(/* start of day */)}}).sort({ts : 1}).limit(1)
// last
db.data.find({object : XYZ, ts : {$lt : new Date(/* end of day */)}}).sort({ts : -1}).limit(1)
For closest to some time, you'd probably need a custom JavaScript function, but it's doable.
retrieve all data for all objects for a particular timestamp
db.data.find({ts : timestamp})
Feel free to ask on the user list if you have any questions, someone else might be able to think of an easier way of getting closest-to-a-time events.
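If you drive this from Python, a minimal pymongo sketch of the compound index plus the range query (the database and field names just follow the examples above; treat it as illustrative):

from datetime import datetime
from pymongo import MongoClient, ASCENDING

db = MongoClient()['mydb']   # database name is an assumption
# Compound index so object + time-range queries and per-object sorts stay fast.
db.data.create_index([('object', ASCENDING), ('ts', ASCENDING)])

t1, t2 = datetime(2011, 1, 1), datetime(2011, 1, 2)
# All data for object XYZ between t1 and t2, oldest first.
rows = db.data.find({'object': 'XYZ', 'ts': {'$gt': t1, '$lt': t2}}).sort('ts', ASCENDING)
# First reading on or after t1 for XYZ.
first = db.data.find({'object': 'XYZ', 'ts': {'$gte': t1}}).sort('ts', ASCENDING).limit(1)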
This is why databases specific to time series data exist - relational databases simply aren't fast enough for large time series.
I've used Fame quite a lot at investment banks. It's very fast but I imagine very expensive. However, if your application requires the speed, it might be worth looking into.
There is an open source time series database under active development (.NET only for now) that I wrote. It can store massive amounts (terabytes) of uniform data in a "binary flat file" fashion. All usage is stream-oriented (forward or reverse). We actively use it for stock tick storage and analysis at our company.
I am not sure this will be exactly what you need, but it will allow you to get the first two points - get values from t1 to t2 for any series (one series per file) or just take one data point.
https://code.google.com/p/timeseriesdb/
// Create a new file for MyStruct data.
// Use BinCompressedFile<,> for compressed storage of deltas
using (var file = new BinSeriesFile<UtcDateTime, MyStruct>("data.bts"))
{
    file.UniqueIndexes = true;   // enforces index uniqueness
    file.InitializeNewFile();    // create file and write header
    file.AppendData(data);       // append data (stream of ArraySegment<>)
}

// Read needed data.
using (var file = (IEnumerableFeed<UtcDateTime, MyStruct>) BinaryFile.Open("data.bts", false))
{
    // Enumerate one item at a time, maximum 10 items, starting at 2011-1-1
    // (can also get one segment at a time with StreamSegments)
    foreach (var val in file.Stream(new UtcDateTime(2011, 1, 1), maxItemCount: 10))
        Console.WriteLine(val);
}
I recently tried something similar in F#. I started with the 1-minute bar format for the symbol in question in a space-delimited file which has roughly 80,000 1-minute bar readings. The code to load and parse from disk was under 1ms. The code to calculate a 100-minute SMA for every period in the file was 530ms. I can pull any slice I want from the SMA sequence, once calculated, in under 1ms. I am just learning F#, so there are probably ways to optimize. Note this was after multiple test runs, so the file was already in the Windows cache, but even when loaded from disk it never adds more than 15ms to the load.
date,time,open,high,low,close,volume
01/03/2011,08:00:00,94.38,94.38,93.66,93.66,3800
To reduce the recalculation time I save the entire calculated indicator sequence to disk in a single \n-delimited file, and it generally takes less than 0.5ms to load and parse when it is in the Windows file cache. Simple iteration across the full time series data to return the set of records inside a date range is a sub-3ms operation with a full year of 1-minute bars. I also keep the daily bars in a separate file, which loads even faster because of the lower data volume.
I use the .NET 4 System.Runtime.Caching layer to cache the serialized representation of the pre-calculated series, and with a couple of gigs of RAM dedicated to the cache I get nearly a 100% cache hit rate, so my access to any pre-computed indicator set for any symbol generally runs under 1ms.
Pulling any slice of data I want from the indicator is typically less than 1ms so advanced queries simply do not make sense. Using this strategy I could easily load 10 years of 1 minute bar in less than 20ms.
// Parse a \n delimited file into RAM, then split each line
// on spaces into an array of tokens. Return the entire
// array as string[][].
let readSpaceDelimFile fname =
    System.IO.File.ReadAllLines(fname)
    |> Array.map (fun line -> line.Split [|' '|])

// Based on a two-dimensional array, pull out the single
// column for the bar close, convert every value in every
// row to a float and return the array of floats.
let GetArrClose(tarr : string[][]) =
    [| for aLine in tarr do
           //printfn "aLine=%A" aLine
           let closep = float(aLine.[5])
           yield closep |]
I use HDF5 as my time series repository. It has a number of effective and fast compression styles which can be mixed and matched. It can be used with a number of different programming languages.
I use boost::date_time for the timestamp field.
In the financial realm, I then create specific data structures for each of bars, ticks, trades, quotes, ...
I created a number of custom iterators and used standard template library features to be able to efficiently search for specific values or ranges of time-based records.
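For comparison, the same idea sketched from Python with h5py (the dataset layout, dtype and gzip compression are illustrative choices, not the structures the answer above uses in C++):

import numpy as np
import h5py

# One compound row per bar: epoch timestamp plus OHLCV fields.
bar = np.dtype([('ts', 'i8'), ('open', 'f8'), ('high', 'f8'),
                ('low', 'f8'), ('close', 'f8'), ('volume', 'i8')])

with h5py.File('bars.h5', 'w') as f:
    f.create_dataset('AAPL/1min', shape=(0,), maxshape=(None,),
                     dtype=bar, chunks=True, compression='gzip')

with h5py.File('bars.h5', 'a') as f:
    ds = f['AAPL/1min']
    new = np.zeros(3, dtype=bar)             # stand-in for real bars
    old = ds.shape[0]
    ds.resize(old + len(new), axis=0)
    ds[old:] = new
    # A sorted ts column turns a time-range query into a binary search plus a slice.
    ts = ds[:]['ts']
    lo, hi = np.searchsorted(ts, (0, 10**18))
    window = ds[lo:hi]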
