peewee bulk_update not updating any records

I'm running the following, and while I'm not receiving any errors, the database (MySQL) doesn't update. I'm running peewee v3.14.10 and PyMySQL v1.0.2 on Python v3.9.7. All the other calls using peewee (e.g. query, update, etc.) work without any issues.
# query Word model with only result words
words = Word.select(Word.word, Word.queried, Word.updated).where(Word.word << result_words)

# update word in words
for word in words:
    word.queried = word.queried + 1
    word.updated = datetime.now()

# debug
for word in words:
    print(word.word, word.queried, word.updated)

# bulk update queried counters on Words model
do_atomic = db.database
batch_size = 50
with do_atomic.atomic():
    Word.bulk_update(words, fields=[Word.word, Word.queried, Word.updated], batch_size=batch_size)

There is also no need to do this in Python. Why don't you use a proper update query?
(Word
 .update(queried=Word.queried + 1, updated=datetime.now())
 .where(Word.word << list_of_words)
 .execute())
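Note that execute() on an update query returns the number of rows modified, so you can confirm whether the write actually happened. A minimal sketch, assuming result_words is the list from your question:

n = (Word
     .update(queried=Word.queried + 1, updated=datetime.now())
     .where(Word.word << result_words)
     .execute())
print('rows updated:', n)  # 0 here would explain "no errors, but nothing changes"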
bulk_update() also works fine as far as I can tell:
import datetime
from peewee import *

db = MySQLDatabase('peewee_test')

class Word(Model):
    word = CharField()
    queried = IntegerField()
    updated = DateTimeField()
    class Meta:
        database = db

db.create_tables([Word])

Word.insert_many([
    ('w%d' % i, i, datetime.datetime.now())
    for i in range(10)],
    fields=[Word.word, Word.queried, Word.updated]).execute()

words = Word.select().where(Word.word << ['w1', 'w3', 'w5', 'w7', 'w9'])
for word in words:
    word.queried += 1
    word.updated = datetime.datetime.now()

with db.atomic():
    Word.bulk_update(words, fields=[Word.word, Word.queried, Word.updated],
                     batch_size=3)

for w in Word.select().order_by(Word.word):
    print(w.word, w.queried, w.updated)
Output:
w0 0 2022-06-28 10:47:43
w1 2 2022-06-28 10:47:43
w2 2 2022-06-28 10:47:43
w3 4 2022-06-28 10:47:43
w4 4 2022-06-28 10:47:43
w5 6 2022-06-28 10:47:43
w6 6 2022-06-28 10:47:43
w7 8 2022-06-28 10:47:43
w8 8 2022-06-28 10:47:43
w9 10 2022-06-28 10:47:43


SympifyError: SympifyError: index when using a loop

I am having trouble using sympify when changing the parameters in a loop. Before adding the loop it worked just fine, so I am a bit confused about what is going wrong. The idea is to calculate the fixed points of the equations below while varying a parameter. I determined the parameters beforehand using a random algorithm.
data used
index c1 c2 c3 c4 c5
2 0.182984 2.016811 0.655393 1.581344 1000.0
3 0.481093 3.696431 0.174021 2.604066 1000.0
4 2.651888 0.665661 2.010521 1.004902 1000.0
5 4.356905 3.805205 0.169469 0.188154 1000.0
6 0.618898 1.205760 0.394822 0.624573 1000.0
7 1.628458 0.908339 0.117855 0.801636 1000.0
8 1.084346 0.251490 5.008077 4.606338 1000.0
9 0.314420 4.553279 0.279103 1.136288 1000.0
10 0.309323 3.447195 0.769426 1.058890 1000.0
11 1.353905 5.034620 3.025668 0.136687 1000.0
12 0.294230 0.590507 0.203964 0.105073 1000.0
13 0.433693 1.040195 0.197015 0.214636 1000.0
14 5.597691 2.734779 0.298786 6.869852 1000.0
15 0.106748 0.329506 1.642285 2.259433 1000.0
16 7.065243 0.138986 6.280275 0.265305 1000.0
17 0.676381 0.263757 6.540224 2.890927 1000.0
18 0.646750 2.573060 0.157341 1.779078 1000.0
19 2.829030 0.208247 0.102454 0.117786 1000.0
20 3.973703 0.134666 1.099034 4.255214 1000.0
df1 = df[df.columns[1]]
df2 = df[df.columns[2]]
df3 = df[df.columns[3]]
df4 = df[df.columns[4]]

EQ = []
for i in df[:5]:
    a = df["c1"]
    b = df["c2"]
    c = df["c3"]
    d = df["c4"]
    Q = 1
    a1 = 0
    b1 = 0
    c1 = 0
    d1 = 0
    u, v = sm.symbols('u,v', negative=False)
    # equations
    U = a * u - a1 * v**2 - b * v + b1 * v + Q
    V = c * u - c1 * u * v - d * v + d1 + Q
    # use sympy's way of setting equations to zero
    Uqual = sm.Eq(U, 0)
    Vqual = sm.Eq(V, 0)
    # compute fixed points
    equilibria = sm.solve((Uqual, Vqual), u, v)
    print('The fixed point(s) of this system are: %s' % equilibria)
    equilibria.append(equilibria)
SympifyError Traceback (most recent call last)
<ipython-input-81-7104e05ced6a> in <module>
16 V = c * u -c1*u*v- d*v + d1 + Q
17 # use sympy's way of setting equations to zero
---> 18 Uqual = sm.Eq(U, 0)
19 Vqual = sm.Eq(V, 0)
20
~\anaconda3\lib\site-packages\sympy\core\relational.py in __new__(cls, lhs, rhs, **options)
501 rhs = 0
502 evaluate = options.pop('evaluate', global_parameters.evaluate)
--> 503 lhs = _sympify(lhs)
504 rhs = _sympify(rhs)
505 if evaluate:
~\anaconda3\lib\site-packages\sympy\core\sympify.py in _sympify(a)
510
511 """
--> 512 return sympify(a, strict=True)
513
514
~\anaconda3\lib\site-packages\sympy\core\sympify.py in sympify(a, locals, convert_xor, strict, rational, evaluate)
431
432 if strict:
--> 433 raise SympifyError(a)
434
435 if iterable(a):
SympifyError: SympifyError: index
For reference, U itself prints as a pandas Series of 100 SymPy expressions:
1 0.32539361355594*u - 0.153951771353544*v + 1
2 0.111286178007145*u - 0.211620881593914*v + 1
3 0.410704332996077*u - 0.338148622964363*v + 1
4 1.39126513227539*u - 0.715390758416011*v + 1
5 0.289981428632838*u - 3.76334113661812*v + 1
...
96 0.450838908230239*u - 7.00849756407416*v + 1
97 4.59646738213032*u - 1.45107766000711*v + 1
98 6.28779804684458*u - 0.395831415205476*v + 1
99 0.196464087698782*u - 0.205057919337616*v + 1
100 1.69031014508742*u - 0.140571509904066*v + 1
Length: 100, dtype: object
In an isympy session:
Make a sample dataframe:
In [11]: import pandas as pd
In [12]: df = pd.DataFrame(np.arange(12).reshape(3,4))
In [13]: df
Out[13]:
0 1 2 3
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
Set up a non-iterative case:
In [15]: u,v = symbols('u,v', negative=False)
In [16]: a,a1,b,b1 = 1,2,3,4
In [17]: U = a * u -a1* v**2 - b*v+b1*v
In [18]: U
Out[18]:
u - 2⋅v² + v
Versus with dataframe values:
In [19]: a,b = df[0],df[1]
In [20]: a,b
Out[20]:
(0 0
1 4
2 8
Name: 0, dtype: int64,
0 1
1 5
2 9
Name: 1, dtype: int64)
In [21]: U = a * u -a1* v**2 - b*v+b1*v
In [22]: U
Out[22]:
0 -2*v**2 + 3*v
1 4*u - 2*v**2 - v
2 8*u - 2*v**2 - 5*v
dtype: object
This U is a pandas Series with object elements (which are SymPy expressions), but U itself is not a SymPy object.
Eq applied to the simple expression:
In [23]: Eq(Out[18],0)
Out[23]:
u - 2⋅v² + v = 0
Your error - Eq applied to the Series:
In [24]: Eq(Out[22],0)
---------------------------------------------------------------------------
SympifyError Traceback (most recent call last)
Input In [24], in <cell line: 1>()
----> 1 Eq(Out[22],0)
File /usr/local/lib/python3.8/dist-packages/sympy/core/relational.py:626, in Equality.__new__(cls, lhs, rhs, **options)
624 rhs = 0
625 evaluate = options.pop('evaluate', global_parameters.evaluate)
--> 626 lhs = _sympify(lhs)
627 rhs = _sympify(rhs)
628 if evaluate:
File /usr/local/lib/python3.8/dist-packages/sympy/core/sympify.py:528, in _sympify(a)
502 def _sympify(a):
503 """
504 Short version of :func:`~.sympify` for internal usage for ``__add__`` and
505 ``__eq__`` methods where it is ok to allow some things (like Python
(...)
526
527 """
--> 528 return sympify(a, strict=True)
File /usr/local/lib/python3.8/dist-packages/sympy/core/sympify.py:449, in sympify(a, locals, convert_xor, strict, rational, evaluate)
446 continue
448 if strict:
--> 449 raise SympifyError(a)
451 if iterable(a):
452 try:
SympifyError: SympifyError: 0 -2*v**2 + 3*v
1 4*u - 2*v**2 - v
2 8*u - 2*v**2 - 5*v
dtype: object
Eq() has no ability to iterate over a Series (or even over a list).
We can iterate ourselves (a list comprehension) and apply Eq to each term of the Series:
In [25]: [Eq(U[i],0) for i in range(3)]
Out[25]:
[-2⋅v² + 3⋅v = 0, 4⋅u - 2⋅v² - v = 0, 8⋅u - 2⋅v² - 5⋅v = 0]
As a general rule, SymPy and pandas/numpy do not mix well.
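If you do need to evaluate SymPy expressions over NumPy/pandas data, the usual bridge is lambdify, which compiles an expression into a vectorized numeric function. A minimal sketch, reusing the expression from Out[18] above:

import numpy as np
from sympy import symbols, lambdify

u, v = symbols('u, v')
expr = u - 2*v**2 + v
f = lambdify((u, v), expr, 'numpy')  # compile expr into a numpy-aware function
print(f(np.array([0.0, 4.0, 8.0]), np.array([1.0, 1.0, 1.0])))  # [-1.  3.  7.]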
It's hard to understand what you are trying to achieve with the code you posted above. So, the following represents my guess:
# NOTE: the following variables are of type
# pandas.core.series.Series. They are iterables
# (think of them as arrays)
a = df["c1"]
b = df["c2"]
c = df["c3"]
d = df["c4"]
# constants
Q = 1
a1 = 0
b1 = 0
c1 = 0
d1 = 0
# symbols
u, v = symbols('u, v', negative=False)
# equations
# NOTE: because a, b, c, d are iterables, then U, V
# will be iterables too. Each element will be a SymPy
# expression because you used the symbols u and v.
U = a * u - a1 * v**2 - b * v + b1 * v + Q
V = c * u - c1 * u * v - d * v + d1 + Q
EQ = []
# loop over the equations and solve them
for u_eq, v_eq in zip(U, V):
    # here we are asking to solve u_eq=0 and v_eq=0
    equilibria = solve((u_eq, v_eq), (u, v))
    EQ.append(equilibria)
    print('The fixed point(s) of this system are: %s' % equilibria)

Drop columns from a data frame but I keep getting this error below

No matter how I try to code this in R, I cannot drop my columns so that I can build my logistic regression model. I tried to run it two different ways:
cols<-c("EmployeeCount","Over18","StandardHours")
Trainingmodel1 <- DAT_690_Attrition_Proj1EmpAttrTrain[-cols,]
Error in -cols : invalid argument to unary operator
cols<-c("EmployeeCount","Over18","StandardHours")
Trainingmodel1 <- DAT_690_Attrition_Proj1EmpAttrTrain[!cols,]
Error in !cols : invalid argument type
This may solve your problem:
Trainingmodel1 <- DAT_690_Attrition_Proj1EmpAttrTrain[ , !colnames(DAT_690_Attrition_Proj1EmpAttrTrain) %in% cols]
Please note that if you want to drop columns, you should put your code inside [ on the right side of the comma, not on the left side.
So [, your_code] not [your_code, ].
Here is an example of dropping columns using the code above.
cols <- c("cyl", "hp", "wt")
mtcars[, !colnames(mtcars) %in% cols]
# mpg disp drat qsec vs am gear carb
# Mazda RX4 21.0 160.0 3.90 16.46 0 1 4 4
# Mazda RX4 Wag 21.0 160.0 3.90 17.02 0 1 4 4
# Datsun 710 22.8 108.0 3.85 18.61 1 1 4 1
# Hornet 4 Drive 21.4 258.0 3.08 19.44 1 0 3 1
# Hornet Sportabout 18.7 360.0 3.15 17.02 0 0 3 2
# Valiant 18.1 225.0 2.76 20.22 1 0 3 1
#...
Edit to Reproduce the Error
The error message you got indicates that there is a column that holds only one identical value in all rows.
To show this, let's try a logistic regression on a subset of the mtcars data in which the cyl column holds a single identical value, using that column as a predictor.
mtcars_cyl4 <- mtcars |> subset(cyl == 4)
mtcars_cyl4
# mpg cyl disp hp drat wt qsec vs am gear carb
# Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
# Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
# Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
# Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
# Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2
# Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
# Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1
# Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1
# Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2
# Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2
# Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2
glm(am ~ as.factor(cyl) + mpg + disp, data = mtcars_cyl4, family = "binomial")
#Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
# contrasts can be applied only to factors with 2 or more levels
Now, compare it with the same logistic regression using the full mtcars data, whose cyl column has several distinct values.
glm(am ~ as.factor(cyl) + mpg + disp, data = mtcars, family = "binomial")
# Call: glm(formula = am ~ as.factor(cyl) + mpg + disp, family = "binomial",
# data = mtcars)
#
# Coefficients:
# (Intercept) as.factor(cyl)6 as.factor(cyl)8 mpg disp
# -5.08552 2.40868 6.41638 0.37957 -0.02864
#
# Degrees of Freedom: 31 Total (i.e. Null); 27 Residual
# Null Deviance: 43.23
# Residual Deviance: 25.28 AIC: 35.28
It is likely that, even though you have dropped the three columns that hold a single identical value in all rows, there is another column in Trainingmodel1 with only one value. That single-valued column probably resulted from filtering the data frame and splitting the data into training and test groups. It is better to check with summary(Trainingmodel1).
Further edit
I have checked the summary(Trainingmodel1) result, and it is clear that EmployeeNumber has only one value (called a "level" for a factor) in all rows. To run your regression properly, either drop it from your model, or, if EmployeeNumber has another level that you want to include, make sure that the training data contains at least two levels. One way to achieve that during splitting is to repeat the random sampling until the randomly selected EmployeeNumber samples contain at least two levels, by looping with for, while, or repeat; a sketch of the idea follows below. It is possible, but I don't know how appropriate the repeated sampling is for your study.
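That resampling idea, sketched in Python/pandas for concreteness (the function name, the 70/30 split, and max_tries are made up for illustration; the same loop can be written in R with repeat and nlevels()):

import pandas as pd

def split_with_two_levels(df, factor_col, frac=0.7, max_tries=100, seed=0):
    # resample the training split until factor_col contains >= 2 distinct values
    for attempt in range(max_tries):
        train = df.sample(frac=frac, random_state=seed + attempt)
        if train[factor_col].nunique() >= 2:
            test = df.drop(train.index)
            return train, test
    raise RuntimeError('no split with two levels found in %d tries' % max_tries)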
As for your question about subsetting on more than one variable, you can use subset with conditionals. For example, to get a subset of mtcars that has cyl == 4 and mpg > 20:
mtcars |> subset(cyl == 4 & mpg > 20 )
If you want a subset that has cyl == 4 or mpg > 20:
mtcars |> subset(cyl == 4 | mpg > 20 )
You can also subset by using more columns as subset criteria:
mtcars |> subset((cyl > 4 & cyl <8) | (mpg > 20 & gear > 4 ))

Loop through each observation in SAS

Let's say I have a table of 10000 observations:
Obs X Y Z
1
2
3
...
10000
For each observation, I create a macro call mymacro(X, Y, Z), where X, Y, Z are the inputs. The macro creates a table with one observation and four new variables: var1, var2, var3, var4.
I would like to know how to loop through the 10000 observations in my initial set so that the result looks like:
Obs X Y Z Var1 Var2 Var3 Var4
1
2
3
...
10000
Update:
The calculation of Var1, Var2, Var3, Var4:
I have a reference table:
Z       25      26      27      28      29      30
0   10,000  10,000  10,000  10,000  10,000  10,000
1   10,000  10,000  10,000  10,000  10,000  10,000
2   10,000  10,000  10,000  10,000  10,000  10,000
3   10,000  10,000  10,000  10,000  10,000  10,000
4    9,269   9,322   9,322   9,381   9,381   9,436
5    8,508   8,619   8,619   8,743   8,743   8,850
6    7,731   7,914   7,914   8,102   8,102   8,258
7    6,805   7,040   7,040   7,280   7,280   7,484
8    5,864   6,137   6,137   6,421   6,421   6,655
9    5,025   5,328   5,328   5,629   5,629   5,929
10   4,359   4,648   4,648   4,934   4,934   5,320
And my HAVE data set is like:
Obs X Y Z
1 27 4 9
2
3
10000
So for the first observation (27, 4, 9):
Var1 = (8,619 + 7,914 + 7,040 + 6,137 + 5,328) / 9,322
Var2 = (8,743 + 8,102 + 7,280 + 6,421 + 5,629) / 9,381
So that:
Var1 = the sum of the values in column 27 (X), from row 5 (Y+1) through row 9 (Z), divided by the value in column 27 (X) at row 4 (Y)
Var2 = the sum of the values in column 28 (X+1), from row 5 (Y+1) through row 9 (Z), divided by the value in column 28 (X+1) at row 4 (Y)
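For concreteness, here is that first-observation arithmetic checked in Python (the two dictionaries are just the relevant slice of the reference table, keyed by Z):

col27 = {4: 9322, 5: 8619, 6: 7914, 7: 7040, 8: 6137, 9: 5328}
col28 = {4: 9381, 5: 8743, 6: 8102, 7: 7280, 8: 6421, 9: 5629}

var1 = sum(col27[z] for z in range(5, 10)) / col27[4]  # rows Y+1..Z over row Y
var2 = sum(col28[z] for z in range(5, 10)) / col28[4]
print(round(var1, 5), round(var2, 5))  # 3.75864 3.8562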
I would convert the reference table to a form that lets you do the calculations for all observations at once. So make your reference table into a tall structure, either by transposing the existing table or just reading it that way to start with:
data ref_tall;
  input z @;
  do col=25 to 30;
    input value :comma9. @;
    output;
  end;
datalines;
0 10,000 10,000 10,000 10,000 10,000 10,000
1 10,000 10,000 10,000 10,000 10,000 10,000
2 10,000 10,000 10,000 10,000 10,000 10,000
3 10,000 10,000 10,000 10,000 10,000 10,000
4 9,269 9,322 9,322 9,381 9,381 9,436
5 8,508 8,619 8,619 8,743 8,743 8,850
6 7,731 7,914 7,914 8,102 8,102 8,258
7 6,805 7,040 7,040 7,280 7,280 7,484
8 5,864 6,137 6,137 6,421 6,421 6,655
9 5,025 5,328 5,328 5,629 5,629 5,929
10 4,359 4,648 4,648 4,934 4,934 5,320
;
Now take your list table HAVE:
data have;
  input id x y z;
datalines;
1 27 4 9
2 25 2 4
;
And combine it with the reference table and make your calculations:
proc sql;
  create table want1 as
  select a.id
       , sum(b.value)/min(c.value) as var1
  from have a
  left join ref_tall b
    on a.x = b.col
   and b.z between a.y+1 and a.z
  left join ref_tall c
    on a.x = c.col
   and c.z = a.y
  group by a.id
  ;
  create table want2 as
  select a.id
       , sum(d.value)/min(e.value) as var2
  from have a
  left join ref_tall d
    on a.x+1 = d.col
   and d.z between a.y+1 and a.z
  left join ref_tall e
    on a.x+1 = e.col
   and e.z = a.y
  group by a.id
  ;
  create table want as
  select *
  from want1 natural join want2 natural join have
  ;
quit;
Results:
Obs id x y z var1 var2
1 1 27 4 9 3.75864 3.85620
2 2 25 2 4 1.92690 1.93220
The reference table can be loaded into an array that makes performing the specified computations easy. The reference values can then be accessed using direct addressing.
Example
The reference table data was moved into a data set so the values can be changed over time or reloaded from some source such as Excel. The reference values can be loaded into an array for use during a DATA step.
* reference information in data set, x property column names are _<num>;
data ref;
  input z (_25-_30) (comma9. &);
datalines;
0 10,000 10,000 10,000 10,000 10,000 10,000
1 10,000 10,000 10,000 10,000 10,000 10,000
2 10,000 10,000 10,000 10,000 10,000 10,000
3 10,000 10,000 10,000 10,000 10,000 10,000
4 9,269 9,322 9,322 9,381 9,381 9,436
5 8,508 8,619 8,619 8,743 8,743 8,850
6 7,731 7,914 7,914 8,102 8,102 8,258
7 6,805 7,040 7,040 7,280 7,280 7,484
8 5,864 6,137 6,137 6,421 6,421 6,655
9 5,025 5,328 5,328 5,629 5,629 5,929
10 4,359 4,648 4,648 4,934 4,934 5,320
;

* computation parameters, might be a thousand of them specified;
data have;
  input id x y z;
datalines;
1 27 4 9
;

* perform computation for each parameters specified;
data want;
  set have;

  array ref[0:10,1:30] _temporary_;

  if _n_ = 1 then do ref_row = 0 by 1 until (last_ref);
    * load reference data into an array for direct addressing during computation;
    set ref end=last_ref;
    array ref_cols _25-_30;
    do index = 1 to dim(ref_cols);
      colname = vname(ref_cols[index]);
      colnum = input(substr(colname,2),8.);
      ref[ref_row,colnum] = ref_cols[index];
    end;
  end;

  * perform computation for parameters specified;
  array vars var1-var4;
  do index = 1 to dim(vars);
    ref_column = x + index - 1;  * column x, then x+1, then x+2, then x+3;
    numerator = 0;               * algorithm against reference data;
    do ref_row = y+1 to z;
      numerator + ref[ref_row,ref_column];
    end;
    denominator = ref[y,ref_column];
    vars[index] = numerator / denominator;  * result;
  end;

  keep id x y z numerator denominator var1-var4;
run;

SQL Server: how to map decimals to corrected values

I have a situation where I get trip data from another company. The other company measures fuel with a precision of ⅛ gallon.
I get data from the other company and store it in my SQL Server table. The aggregated fuel amounts aren't right. I discovered that while the other company stores fuel in 1/8 gallons, it was sending me only one decimal place.
Furthermore, thanks to this post, I've determined that the company isn't rounding the values to the nearest tenth but is instead truncating them.
Query:
/** Fuel Fractions **/
SELECT DISTINCT ([TotalFuelUsed] % 1) AS [TotalFuelUsedDecimals]
FROM [Raw]
ORDER BY [TotalFuelUsedDecimals]
Results:
TotalFuelUsedDecimals
0.00
0.10
0.20
0.30
0.50
0.60
0.70
0.80
What I'd like is an efficient way to add a corrected fuel column to my views which would map as follows:
0.00 → 0.000
0.10 → 0.125
0.20 → 0.250
0.30 → 0.375
0.50 → 0.500
0.60 → 0.625
0.70 → 0.750
0.80 → 0.875
1.80 → 1.875
and so on
I'm new to SQL so please be kind.
Server is running Microsoft SQL Server 2008. But if you know a way better function only supported by newer SQL Server, please post it too because we may upgrade someday soon and it may help others.
Also, if it makes any difference, there are several different fuel columns in the table that I'll be correcting.
While writing up the question, I tried the following method using a temp table and multiple joins, which seemed to work. I expect there are better solutions out there.
CREATE TABLE #TempMap
([from] decimal(18,2), [to] decimal(18,3))
;
INSERT INTO #TempMap
([from], [to])
VALUES
(0.0, 0.000),
(0.1, 0.125),
(0.2, 0.250),
(0.3, 0.375),
(0.5, 0.500),
(0.6, 0.625),
(0.7, 0.750),
(0.8, 0.875)
;
SELECT [TotalFuelUsed]
,[TotalFuelCorrect].[to] + ROUND([TotalFuelUsed], 0, 1) AS [TotalFuelUsedCorrected]
,[IdleFuelUsed]
,[IdleFuelCorrect].[to] + ROUND([IdleFuelUsed], 0, 1) AS [IdleFuelUsedCorrected]
FROM [Raw]
JOIN [#TempMap] AS [TotalFuelCorrect] ON [TotalFuelUsed] % 1 = [TotalFuelCorrect].[from]
JOIN [#TempMap] AS [IdleFuelCorrect] ON [IdleFuelUsed] % 1 = [IdleFuelCorrect].[from]
ORDER BY [TotalFuelUsed] DESC
DROP TABLE #TempMap;
Try adding a column as:
select ....
     , case when right(cast([TotalFuelUsed] as decimal(12,1)), 1) = 1 then [TotalFuelUsed] + 0.025
            when right(cast([TotalFuelUsed] as decimal(12,1)), 1) = 2 then [TotalFuelUsed] + 0.05
            when right(cast([TotalFuelUsed] as decimal(12,1)), 1) = 3 then [TotalFuelUsed] + 0.075
            when right(cast([TotalFuelUsed] as decimal(12,1)), 1) = 6 then [TotalFuelUsed] + 0.025
            when right(cast([TotalFuelUsed] as decimal(12,1)), 1) = 7 then [TotalFuelUsed] + 0.05
            when right(cast([TotalFuelUsed] as decimal(12,1)), 1) = 8 then [TotalFuelUsed] + 0.075
            else [TotalFuelUsed]
       end as updatedTotalFuelUsed
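As an aside, the whole lookup collapses to one arithmetic rule: since the sender truncates eighths to one decimal place, the corrected fraction is the smallest eighth at or above the truncated value, i.e. CEILING(fraction * 8) / 8. In T-SQL that would be something like FLOOR([TotalFuelUsed]) + CEILING(([TotalFuelUsed] % 1) * 8) / 8.0 (an untested sketch). A quick Python check of the rule against the mapping above:

import math

# truncated tenths seen in the data -> corrected eighths expected
expected = {0.0: 0.000, 0.1: 0.125, 0.2: 0.250, 0.3: 0.375,
            0.5: 0.500, 0.6: 0.625, 0.7: 0.750, 0.8: 0.875}

for frac, want in expected.items():
    got = math.ceil(frac * 8) / 8
    assert got == want, (frac, got, want)
print('ceil(frac * 8) / 8 reproduces the mapping')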

combining the arrays

I have written some code and I'm pretty much stuck.
In the following code, I have split the complete image into a 3×3 grid of blocks.
Say I alter each of the child arrays (1-8) individually, within that child array only.
Is there a way to combine these arrays (mother & child 1-8) to get the complete image back, with the alterations I've already made?
pd_x=imread(name_doc);
[pd_m,pd_n]=size(pd_x);
di_m=pd_m;
di_n=pd_n;
pd_rv=ceil(pd_m/3);
pd_cv=ceil(pd_n/3);
mother1=pd_x(1:pd_rv,1:pd_cv);
child1=pd_x(1:pd_rv,pd_cv:(pd_cv+pd_cv));
child2=pd_x(1:pd_rv,(pd_cv+pd_cv):pd_n);
child3=pd_x(pd_rv:(pd_rv+pd_rv),1:pd_cv);
child4=pd_x(pd_rv:(pd_rv+pd_rv),pd_cv:(pd_cv+pd_cv));
child5=pd_x(pd_rv:(pd_rv+pd_rv),(pd_cv+pd_cv):pd_n);
child6=pd_x((pd_rv+pd_rv):pd_m,1:pd_cv);
child7=pd_x((pd_rv+pd_rv):pd_m,pd_cv:(pd_cv+pd_cv));
child8=pd_x((pd_rv+pd_rv):pd_m,(pd_cv+pd_cv):pd_n);
The syntax for concatenation is the following:
A = [12 62 93 -8 22; 16 2 87 43 91; -4 17 -72 95 6]
A =
12 62 93 -8 22
16 2 87 43 91
-4 17 -72 95 6
Taken from http://www.mathworks.com/help/techdoc/math/f1-84864.html
I've also made a basic example, first defining v, v2 and v3:
>> v
v =
1 2
>> v2
v2 =
3 4
>> v3
v3 =
5 6
If I do the following concatenation, the result will be:
>> m = [v v2 v3; v3 v2 v];
>> m
m =
1 2 3 4 5 6
5 6 3 4 1 2
Hope it helps you understand how it works!!
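Applied to the question, the same bracket syntax reassembles the nine blocks, e.g. mother = [mother1 child1 child2; child3 child4 child5; child6 child7 child8] (provided the block index ranges don't overlap). For reference, here is the equivalent block reassembly sketched in Python/NumPy, whose np.block mirrors MATLAB's bracket concatenation; the 6x6 example data is made up:

import numpy as np

# split a 6x6 'image' into a 3x3 grid of 2x2 tiles, alter one tile,
# then reassemble; np.block mirrors MATLAB's [a b; c d] syntax
img = np.arange(36).reshape(6, 6)
tiles = [[img[r:r+2, c:c+2].copy() for c in (0, 2, 4)] for r in (0, 2, 4)]
tiles[1][1] *= 0                   # alter the middle 'child'
rebuilt = np.block(tiles)          # back to a 6x6 array
print(rebuilt.shape)               # (6, 6)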
If you used the method I posted in this answer and still have the 4-D matrix created, just do this:
mother1 = permute( mat4d, [ 1 3 2 4 ] );
mother1 = reshape( mother1, [ pd_rv pd_cv ] );
But pd_rv and pd_cv should be calculated with floor, and not with ceil, shouldn't they?
