Multiple small files as input to MapReduce

I have lots of small files, say more than 20,000.
I want to save the time spent on mapper initialization, so is it possible to use just 500 mappers, each processing 40 small files as its input?
I need guidance on how to implement this kind of InputFormat, if it is possible. Thanks!
BTW, I know I should merge these small files; that step is also needed.

CombineFileInputFormat can be used. It is available in both the old and the new MR API. Here is a nice blog entry on how to use it.
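For example, with the new MR API a minimal driver sketch could look like this (the mapper setup and paths are placeholders; CombineTextInputFormat is the built-in text-reading subclass of CombineFileInputFormat in newer Hadoop releases, so no custom RecordReader is needed):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SmallFilesDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        job.setJarByClass(SmallFilesDriver.class);
        // set your mapper/reducer and output key/value classes as usual here

        // Pack many small files into each split so far fewer mappers are launched.
        job.setInputFormatClass(CombineTextInputFormat.class);
        // Upper bound on split size (bytes); tune it so roughly 40 files land in one split.
        FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}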

How to repeat a command on different values of the same variable using SPSS LOOP?

Probably an easy question:
I want to run this piece of syntax:
SUMMARIZE
/TABLES=AGENCY
PIN
AGE
GENDER
DISABILITY
MAINSERVICE
MRESAGENCY
MRESSUPPORT
/FORMAT=LIST NOCASENUM TOTAL
/TITLE='Case Summaries'
/MISSING=VARIABLE
/CELLS=COUNT.
for 264 different agencies which are all values contained in the variable 'AGENCY'.
I want to create a different table for each agency outlining the above information for them.
I think I can do this using DO REPEAT or LOOP in SPSS.
Any advice would be much appreciated.
Thank you :)
Note: I have Googled and read endless amounts about looping; I am just a little unsure as to which method is what I am looking for.
Take a look at SPLIT FILE, which meets your needs.
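For example, a minimal sketch reusing your SUMMARIZE block (SPLIT FILE needs the file sorted by the grouping variable first):

SORT CASES BY AGENCY.
SPLIT FILE SEPARATE BY AGENCY.
SUMMARIZE
  /TABLES=AGENCY PIN AGE GENDER DISABILITY MAINSERVICE MRESAGENCY MRESSUPPORT
  /FORMAT=LIST NOCASENUM TOTAL
  /TITLE='Case Summaries'
  /MISSING=VARIABLE
  /CELLS=COUNT.
SPLIT FILE OFF.

SEPARATE BY produces one output table per agency; LAYERED BY would instead stack the groups within a single table.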

Most efficient way to pull values that may or may not change?

I am not a trained programmer, but I assist in developing/maintaining macros within our VBA-based systems to expedite various tasks our employees do manually. For instance, copying data from one screen to another. By hand, any instance of this could take 30 seconds to 2 minutes, but with a macro, it could take 2-3 seconds.
Most of the macros we develop rely on the ability to accurately pull data as displayed (not from its relative field!) based on a row/column format for each character. As such, we employ a custom command (let's call it, say... Instance.Grab) that pulls what we need from the screen using row x/column y coordinates and the length of what we want to pull. For example, where we would normally pull an 8-character string from coordinates 1,1:
Dim PulledValue As String
PulledValue = Instance.Grab(1, 1, 8)
If I ran that code on my question so far, the returned value for our macro would have been "I am not"
Unfortunately, our systems are getting their displays altered to handle values of an increased character length. As such, the coordinates of the data we're pulling are getting altered significantly. Rather than go through our macros and change the coordinates and length manually in each macro (which would need to be repeated if the screen formats change again), I'm converting our macros so that any time they need to pull the needed string, we can simply change the needed coordinate/length from a central location.
My question is, what would be the best way to handle this task? I've thought of a few ideas, but want to maximize effectiveness and minimize the time I spend developing it, given my limited programming experience. For the sake of this, let's call what I need to make happen CoorGrab, and where an array is needed, make an array called CoorArray:
1) Creating Public Function CoorGrab(ThisField As Variant) - if I did it this way, then I would simply list all the needed coordinate/length sets based on the variant I enter, then pull whichever set is needed using a 3-dimensional array. For instance: CoorGrab(situationA) would return CoorArray(5, 7, 15). This would be easy enough to edit for one of us who knows something about programming, but if we're not around for any reason, there could be issues.
2) creating all the needed coordinates in public arrays in the module. I'm not overly familiar with how to implement this, but I think I read up on something called public constants? I kinda like this idea for its simplicity, but am hesitant to use any variable or array as public.
3) Creating a .txt file that has all the needed data and a label to identify each entry, saved to a shared drive that any terminal can access when running these macros. This would be the easiest for a non-programmer to jump in and edit in case I or one of our other programming-savvy employees aren't available, but it seems like far more work than is needed, and I fear what could happen if the .txt file got a typo or was accidentally deleted.
Any thoughts on how I should proceed? Is one of the above options inherently better/easier than the others? Or is there another way to handle this situation that I didn't cover? Any info or advice you all can provide would be greatly appreciated!
8/2/15 Note - I should probably mention the VBA is used as part of a terminal emulator with custom applications for the needs of our department. I don't manage the emulator or its applications, nor do I have system admin access; I just create/edit macros used within it to streamline some of the ways our users handle their workloads. Of the three of us who do this, I'm the least skilled at programming, but I'm also the only one who could be pulled to update them before the changes take effect.
Your approach is not bad. I would:
Use a string label as the parameter for CoorGrab.
Return a Range instead of a String (you can still use a single-cell range as text, and you keep a trace of where your data lives):
Public Function CoorGrab(ByVal label As String) As Range
Create an Excel sheet with 3 rows: 1 = label, 2 = x, 3 = y (you could add a 4th if you need to search in another sheet).
CoorGrab() finds the label in the Excel sheet and returns X/Y.
If the developers aren't available, others just have to edit the Excel sheet.
You could also keep the Excel file in a shared location so coordinates are read from outside the local file, or use it to update everybody's files (read the file from the server, then add/update any label that is in the server file but not in the local file).
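A minimal sketch of that idea, assuming the coordinates live in a workbook on a shared drive with labels in row 1, x in row 2, y in row 3, and length in row 4 (the path, sheet name, and layout here are placeholders to adapt):

' Looks up a label in row 1 of the coordinates sheet and returns the
' x / y / length cells stored beneath it (rows 2 to 4).
Public Function CoorGrab(ByVal label As String) As Range
    Dim ws As Object, hit As Object
    ' Late-bound, so no Excel reference is required in the emulator's VBA.
    Set ws = GetObject("\\shared\macros\ScreenCoords.xlsx").Worksheets("Coords")
    Set hit = ws.Rows(1).Find(What:=label, LookAt:=1)   ' 1 = xlWhole
    If hit Is Nothing Then
        Err.Raise vbObjectError + 1, "CoorGrab", "Unknown label: " & label
    End If
    Set CoorGrab = ws.Range(ws.Cells(2, hit.Column), ws.Cells(4, hit.Column))
End Function

A macro would then do something like Set c = CoorGrab("CustomerName") and pass c.Cells(1, 1).Value, c.Cells(2, 1).Value, and c.Cells(3, 1).Value to Instance.Grab, so a screen change only ever means editing the sheet.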

How to make training and testing sets from a dataset?

What's the best method:
splitting my data into training and testing sets, with 70% of the data for training and 30% for testing, or
using similar data for both the training and testing sets?
A - Is the second method correct, and what are its disadvantages?
B - My dataset contains 3 attributes and 1000 objects; is this good for selecting the training and testing sets from this dataset?
The second method is wrong (at least if by 'similar' you mean 'same').
You shouldn't use the test set for training.
If you used just one data set for both, you could achieve perfect accuracy simply by memorizing that set (with the risk of overfitting).
Generally, this isn't what you want, because the algorithm should learn the general concept behind the examples. A way of testing whether this happens is to use separate datasets for training and testing.
The test set gives you a forecast of the performance of your model in the "real world", because it's independent (during the training/validation phase you don't make any choices based on the test data).
The second option is wrong; the first option is the best.
Using the LingPipe classifier, we can train and test on news data. But if you provide the same data used in training for testing purposes, it will of course show accurate output. What we want is to predict the output for unknown cases; that's how we test accuracy.
So what you have to do is:
1) Train on your data.
2) Build a model.
3) Apply test data to the model to get output for unknown sets/cases too.
Building a model is nothing but writing the trained object to a file. Then each time you run the program, you load that model instead of training again. This saves you time. I hope my answer will help you. Best regards.
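As a rough illustration of "writing the trained object to a file", here is plain Java object serialization (a generic sketch; LingPipe also ships its own helpers for compiling models to files, and the actual classifier type is whatever you trained):

import java.io.*;

public class ModelStore {
    // Save any serializable trained model to disk once.
    public static void save(Serializable model, String path) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
            out.writeObject(model);
        }
    }

    // Load it back on later runs instead of retraining.
    public static Object load(String path) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(path))) {
            return in.readObject();
        }
    }
}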
You can create a train/test split from a dataset on the command line:
java -cp weka.jar weka.filters.unsupervised.instance.RemovePercentage -P 30 -i dataset.arff -o train.arff
java -cp weka.jar weka.filters.unsupervised.instance.RemovePercentage -P 30 -V -i dataset.arff -o test.arff
The first command removes 30% of the instances, leaving 70% for training; the second inverts the selection (-V) so the test file receives exactly the 30% that was held out.
And A): unless "all" possible future data combinations already exist in your dataset, using the same data for training and testing is a bad solution. It does not assess how well your model handles new, different cases, and it can't tell you whether you are overfitting (fitting your current data without reusable logic). Why don't you use cross-validation? It is very effective if you want to use the same dataset: it automatically splits the data into parts, tests each part against the rest, and then computes the average result (see the command-line example below).
B) If you mean 3 attributes and 1000 instances, that could be OK as long as you don't have too many different output types (classes) to predict and the instances cover the use cases well.
FYI: if you want to try many different classifiers on your data to find the best one, use the Weka Experimenter.
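For instance, 10-fold cross-validation of a single classifier can be run straight from the command line (J48 is just an example classifier):
java -cp weka.jar weka.classifiers.trees.J48 -t dataset.arff -x 10
Here -t names the data file and -x the number of folds; Weka prints the results averaged over all folds.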

Improving performance when looping over a big data set

I am doing some spatio-temporal analysis (with MATLAB) on a quite big data set, and I am not sure which strategy is best in terms of performance for my script.
Actually, the data set is split into 10 yearly arrays of dimension (latitude, longitude, time) = (50, 60, 8760).
The general structure of my analysis is:
for iteration = 1:BigNumber
    1. Select a specific site of spatial reference (i,j).
    2. Do some calculations on the whole time series of site (i,j).
    3. Store the result in an archive array.
end
My question is:
Is it better (in terms of general performance) to have
1) All data in big yearly (50,60,8760) arrays, as global variables loaded once. At each iteration the script has to extract one particular "site" (i,j,:) from those arrays for processing.
2) 50*60 distinct files stored in a folder, each file containing the time series of one particular site (a vector of dimension (total time range, 1)). The script then has to open, process, and close a specific file from the folder at each iteration.
Because your computations operate on the entire time series, I would suggest storing the data as a 3000x8760 matrix and doing the computations that way.
Your accesses then will be more cache-friendly.
You can reformat your data using the reshape function:
newdata = reshape(olddata,50*60,8760);
Now, instead of accessing olddata(i,j,:), you need to access newdata(sub2ind([50 60],i,j),:).
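A minimal sketch of the resulting loop (mean here is just a stand-in for your real per-site calculation):
newdata = reshape(olddata, 50*60, 8760);   % one row per site
archive = zeros(size(newdata, 1), 1);
for site = 1:size(newdata, 1)
    series = newdata(site, :);             % whole time series of this site
    archive(site) = mean(series);          % replace with the actual analysis
end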
After doing some experiments, it is clear that the second proposal with 3000 distinct files is much slower than manipulating big arrays loaded in the workspace. But I didn't try loading all 3000 files into the workspace before computing (a tad too much).
It looks like reshaping the data helps a little bit.
Thanks to all contributors for your suggestions.

How to go about creating a Prolog program that can work backwards to determine the steps needed to reach a goal

I'm not sure what exactly I'm trying to ask. I want to be able to make some code that can easily take an initial and final state and some rules, and determine paths/choices to get there.
So think, for example, of a game like Starcraft. To build a factory I need to have a barracks and a command center already built. So if I have nothing and I want a factory, I might say -> Command Center -> Barracks -> Factory. Each thing takes time and resources, and that should be noted and considered in the path. If I want my factory at 5 minutes, there are fewer options than if I want it at 10.
Also, the engine should be able to calculate available resources and utilize them effectively. Those three buildings might cost 600 total minerals, but the engine should plan the Command Center for when it would have 200 (or whatever it costs).
This would ultimately have requirements similar to: 10 marines @ 5 minutes, infantry weapons upgrade at 6:30, 30 marines at 10 minutes, factory @ 11, etc.
So, how do I go about doing something like this? My first thought was to use some procedural language and make all the decisions from the ground up. I could simulate the system, branching and making different choices. Ultimately, some choices are quickly going to make it impossible to reach goals later (if I build 20 Supply Depots, I'm probably not going to make that factory on time).
So then I thought, weren't functional languages designed for this? I tried to write some Prolog, but I've been having trouble with things like time and distance calculations. And I'm not sure of the best way to return the "plan".
I was thinking I could write:
depends_on(factory, barracks).
depends_on(barracks, command_center).
builds_from(marine, barracks).
build_time(command_center, 60).
build_time(barracks, 45).
build_time(factory, 30).
minerals(command_center, 400).
...
build(X) :-
    depends_on(X, Y),
    build_time(X, T),
    minerals(X, M),
    ...
Here's where I get confused. I'm not sure how to construct this predicate and a query to get anything even close to what I want. I would have to somehow account for the rate at which minerals are gathered during the time spent building, and for other possible paths with extra resources. If I only want 1 marine in 10 minutes, I would want the engine to generate lots of plans, because there are lots of ways to end up with 1 marine at 10 minutes (maybe cut it off after so many; I'm not sure how you do that in Prolog).
I'm looking for advice on how to continue down this path, or advice about other options. I haven't been able to find anything more useful than Towers of Hanoi and ancestry examples for AI, so even some good articles explaining how to use Prolog to DO REAL THINGS would be amazing. And if I somehow get these rules set up in a useful way, how do I get the "plans" Prolog came up with (the ways it solved the query), other than writing to stdout like all the Towers of Hanoi examples do? Or is that the preferred way?
My other question is: my main code is in Ruby (and potentially other languages), and the options for communicating with Prolog are calling my Prolog program from within Ruby, accessing a virtual file system from within Prolog, or some kind of database structure (unlikely). I'm using SWI-Prolog at the moment. Would I be better off doing this procedurally in Ruby, or would constructing this in a functional/logic language like Prolog or Haskell be worth the extra integration effort?
I'm sorry if this is unclear, I appreciate any attempt to help, and I'll re-word things that are unclear.
Your question is typical and very common for users of procedural languages who first try Prolog. It is very easy to solve: You need to think in terms of relations between successive states of your world. A state of your world consists for example of the time elapsed, the minerals available, the things you already built etc. Such a state can be easily represented with a Prolog term, and could look for example like time_minerals_buildings(10, 10000, [barracks,factory]). Given such a state, you need to describe what the state's possible successor states look like. For example:
state_successor(State0, State) :-
    State0 = time_minerals_buildings(Time0, Minerals0, Buildings0),
    Time is Time0 + 1,
    can_build_new_building(Buildings0, Building),
    building_minerals(Building, MB),
    Minerals is Minerals0 - MB,
    Minerals >= 0,
    State = time_minerals_buildings(Time, Minerals, [Building|Buildings0]).
I am using the explicit naming convention (State0 -> State) to make clear that we are talking about successive states. You can of course also pull the unifications into the clause head. The example code is purely hypothetical and could look rather different in your final application. In this case, I am describing that the new state's elapsed time is the old state's time + 1, that the new amount of minerals decreases by the amount required to build Building, and that I have a predicate can_build_new_building(Bs, B), which is true when a new building B can be built assuming that the buildings given in Bs are already built. I assume it is a non-deterministic predicate in general, and will yield all possible answers (= new buildings that can be built) on backtracking, and I leave it as an exercise for you to define such a predicate.
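As a rough starting point for that exercise (only a hypothetical sketch, reusing depends_on/2 from the question and building_minerals/2 from the code above), such a predicate could look like:
% A building can be built if it is not built yet and everything it
% depends on is already present. building_minerals/2 enumerates the
% known building types here.
can_build_new_building(Built, Building) :-
    building_minerals(Building, _),
    \+ memberchk(Building, Built),
    forall(depends_on(Building, Dep), memberchk(Dep, Built)).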
Given such a predicate state_successor/2, which relates a state of the world to its direct possible successors, you can easily define a path of states that lead to a desired final state. In its simplest form, it will look similar to the following DCG that describes a list of successive states:
states(State0) -->
    (   { final_state(State0) } -> []
    ;   [State0],
        { state_successor(State0, State1) },
        states(State1)
    ).
You can then use for example iterative deepening to search for solutions:
?- initial_state(S0), length(Path, _), phrase(states(S0), Path).
Also, you can keep track of states you already considered and avoid re-exploring them etc.
The reason you get confused with the example code you posted is essentially that build/1 does not have enough arguments to describe what you want. You need at least two arguments: One is the current state of the world, and the other is a possible successor to this given state. Given such a relation, everything else you need can be described easily. I hope this answers your question.
Caveat: my Prolog is rusty and shallow, so this may be off base
Perhaps a 'difference engine' approach would be appropriate:
given a goal like 'build factory',
backwards-chaining relations would check for has-barracks and tell you first to build-barracks,
which would check for has-command-center and tell you to build-command-center,
and so on,
accumulating a plan (and costs) along the way
If this is practical, it may be more flexible than a state-based approach... or it may be the same thing wearing a different t-shirt!
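A tiny Prolog sketch of that backward-chaining idea, assuming the depends_on/2 and build_time/2 facts from the question and at most one prerequisite per building (it ignores minerals and timing; purely illustrative):
% Plan the prerequisite chain first, then the goal itself, summing build times.
plan(Building, Plan, Cost) :-
    (   depends_on(Building, Dep)
    ->  plan(Dep, PrereqPlan, PrereqCost)
    ;   PrereqPlan = [], PrereqCost = 0
    ),
    build_time(Building, Time),
    append(PrereqPlan, [Building], Plan),
    Cost is PrereqCost + Time.

% ?- plan(factory, Plan, Cost).
% Plan = [command_center, barracks, factory], Cost = 135.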
