I would like to solve the transient diffusion equation for two compounds, A and B, as shown in the image below. I think the image is the clearest way to state my problem.
Diffusion equations and boundary conditions.
As you can see, the reaction occurs only at the surface, and the flux of A is equal to the flux of B, so the two equations are coupled only at the surface. The boundary condition is similar to the Robin boundary condition explained in the FiPy manual; the main difference is that a second variable appears in the boundary condition. Does anybody have an idea how to formulate this boundary condition in FiPy?
I guess I need to add an extra term to the Robin boundary condition, but I couldn't figure it out.
I really appreciate your help.
This is the code which solves the mentioned equations with an ordinary Robin boundary condition at x = 0:
-D (dC_A/dx) = -k C_A
-D (dC_B/dx) = -k C_B
With this condition I can easily use the Robin boundary condition to solve the equations, and the results seem reasonable for this boundary condition.
"""
Question for StackOverflow
"""
#%%
from fipy import Variable, FaceVariable, CellVariable, Grid1D, TransientTerm, DiffusionTerm, Viewer, ImplicitSourceTerm
from fipy.tools import numerix
#%%
##### Model parameters
L= 8.4853e-4 # m boundary layer thickness
dx= 1e-8 # mesh size
nx = int(L/dx)+1 # number of meshes
D = 1e-9 # m^2/s diffusion coefficient
k = 1e-4 # m/s reaction coefficient R = k [c_A],
c_inf = 0. # ROBIN general condition, once can think R = k ([c_A]-[c_inf])
c_init = 1. # Initial concentration of compound A, mol/m^3
#%%
###### Meshing and variable definition
mesh = Grid1D(nx=nx, dx=dx)
c_A = CellVariable(name="c_A", hasOld=True,
                   mesh=mesh,
                   value=c_init)
c_B = CellVariable(name="c_B", hasOld=True,
                   mesh=mesh,
                   value=0.)
#%%
##### Right boundary condition
valueRight = c_init
c_A.constrain(valueRight, mesh.facesRight)
c_B.constrain(0., mesh.facesRight)
#%%
### ROBIN BC requirements, defining cellDistanceVectors
## This code is for fixing celldistance via this link:
## https://stackoverflow.com/questions/60073399/fipy-problem-with-grid2d-celltofacedistancevectors-gives-error-uniformgrid2d
MA = numerix.MA
tmp = MA.repeat(mesh._faceCenters[..., numerix.NewAxis,:], 2, 1)
cellToFaceDistanceVectors = tmp - numerix.take(mesh._cellCenters, mesh.faceCellIDs, axis=1)
tmp = numerix.take(mesh._cellCenters, mesh.faceCellIDs, axis=1)
tmp = tmp[..., 1,:] - tmp[..., 0,:]
cellDistanceVectors = MA.filled(MA.where(MA.getmaskarray(tmp), cellToFaceDistanceVectors[:, 0], tmp))
#%%
##### Defining mask and Robin BC at left boundary
mask = mesh.facesLeft
Gamma0 = D
Gamma = FaceVariable(mesh=mesh, value=Gamma0)
Gamma.setValue(0., where=mask)
dPf = FaceVariable(mesh=mesh,
                   value=mesh._faceToCellDistanceRatio * cellDistanceVectors)
n = mesh.faceNormals
a = FaceVariable(mesh=mesh, value=k, rank=1)
b = FaceVariable(mesh=mesh, value=D, rank=0)
g = FaceVariable(mesh=mesh, value= k * c_inf, rank=0)
RobinCoeff = (mask * Gamma0 * n / (-dPf.dot(a)+b))
#%%
#### Making a plot
viewer = Viewer(vars=(c_A, c_B),
                datamin=-0.2, datamax=c_init * 1.4)
viewer.plot()
#%% Time step and simulation time definition
time = Variable()
t_simulation = 4 # seconds
timeStepDuration = .05
steps = int(t_simulation/timeStepDuration)
#%% PDE Equations
eqcA = (TransientTerm(var=c_A) == DiffusionTerm(var=c_A, coeff=Gamma)
        + (RobinCoeff * g).divergence
        - ImplicitSourceTerm(var=c_A, coeff=(RobinCoeff * a.dot(-n)).divergence))
eqcB = (TransientTerm(var=c_B) == DiffusionTerm(var=c_B, coeff=Gamma)
        - (RobinCoeff * g).divergence
        + ImplicitSourceTerm(var=c_B, coeff=(RobinCoeff * a.dot(-n)).divergence))
#%% A loop for solving PDE equations
while time() <= (t_simulation):
    time.setValue(time() + timeStepDuration)
    c_B.updateOld()
    c_A.updateOld()
    res1 = res2 = 1e10
    viewer.plot()
    while (res1 > 1e-6) & (res2 > 1e-6):
        res1 = eqcA.sweep(var=c_A, dt=timeStepDuration)
        res2 = eqcB.sweep(var=c_B, dt=timeStepDuration)
It's possible to solve this as a fully implicit system. The code below simplifies the problem to a unit domain size and unit diffusion coefficient, with k set to 0.2. It captures the analytical solution quite well, with some caveats (see below).
from fipy import (
    CellVariable,
    TransientTerm,
    DiffusionTerm,
    ImplicitSourceTerm,
    Grid1D,
    Viewer,
)
L = 1.0
nx = 1000
dx = L / nx
konstant = 0.2
coeff = 1.0
mesh = Grid1D(nx=nx, dx=dx)
var_a = CellVariable(mesh=mesh, value=1.0, hasOld=True)
var_b = CellVariable(mesh=mesh, value=0.0, hasOld=True)
var_a.constrain(1.0, mesh.facesRight)
var_b.constrain(0.0, mesh.facesRight)
coeff_mask = ~mesh.facesLeft * coeff
boundary_coeff = konstant * (mesh.facesLeft * mesh.faceNormals).divergence
eqn_a = TransientTerm(var=var_a) == DiffusionTerm(
    coeff_mask, var=var_a
) - ImplicitSourceTerm(boundary_coeff, var=var_a) + ImplicitSourceTerm(
    boundary_coeff, var=var_b
)
eqn_b = TransientTerm(var=var_b) == DiffusionTerm(
    coeff_mask, var=var_b
) - ImplicitSourceTerm(boundary_coeff, var=var_b) + ImplicitSourceTerm(
    boundary_coeff, var=var_a
)
eqn = eqn_a & eqn_b
for _ in range(5):
    var_a.updateOld()
    var_b.updateOld()
    eqn.sweep(dt=1e10)

Viewer((var_a, var_b)).plot()
print("var_a[0] (expected):", (1 + konstant) / (1 + 2 * konstant))
print("var_b[0] (expected):", konstant / (1 + 2 * konstant))
print("var_a[0] (actual):", var_a[0])
print("var_b[0] (actual):", var_b[0])
input("wait")
Note the following:
As written, the boundary condition is only first-order accurate, which doesn't really matter for this problem, but might hurt you in higher dimensions. There might be ways to fix this, such as using a small cell near the boundary or adding an explicit second-order correction for the boundary condition.
The equations are coupled here. If solved uncoupled, it would probably require many iterations to reach equilibrium.
It did require a few sweeps to reach equilibrium, but it shouldn't. That's probably due to the solver not converging adequately in a single try; the coupled equations may be somewhat badly conditioned.
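For reference, here is a quick back-of-the-envelope check (my own sketch, not part of the original answer) of the expected steady-state values printed by the script, for the simplified unit domain with D = 1 and surface rate constant k. At steady state both profiles are linear, c_A(x) = c_A0 + (1 - c_A0)*x and c_B(x) = c_B0*(1 - x), and since the surface fluxes of A and B are equal and opposite, c_A + c_B = 1 everywhere, so c_B0 = 1 - c_A0. The surface balance D*dc_A/dx = k*(c_A0 - c_B0) then gives 1 - c_A0 = k*(2*c_A0 - 1), i.e. c_A0 = (1 + k)/(1 + 2k) and c_B0 = k/(1 + 2k), which for k = 0.2 are roughly 0.857 and 0.143, matching the values in the print statements.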
Given that:
Y = 0.299R + 0.587G + 0.114B
What values do we put in for R, G, and B? I'm assuming 0-255. For argument's sake, if R, G, and B are each 50, does that mean Y = 0.299(50) + 0.587(50) + 0.114(50)?
The next two are also confusing. How can B - Y even make sense? If Y already contains blue, isn't B - Y just subtracting part of itself?
Cb = 0.564(B − Y)
Cr = 0.713(R − Y)
It's just simple (confusing) math ...
Remark: there are a few standards of YCbCr; the following formulas apply to BT.601 with "full range":
Equation (1): Y = 0.299R + 0.587G + 0.114B
The common definition of YCbCr assumes that R, G, and B are 8 bits unsigned integers in range [0, 255].
There are cases where R, G, B are floating point values in range [0, 1] (normalized values).
There are also HDR cases where range is [0, 1023] for example.
In case R=50, G=50, B=50, you just need to assign the values:
Y = 0.299*50 + 0.587*50 + 0.114*50
Result: Y = 50.
Since Y represents the luma (the brightness component), and RGB = (50, 50, 50) is a gray pixel, it makes sense that Y = 50.
The following equations Cb = 0.564(B - Y), Cr = 0.713(R - Y) are incorrect.
Instead of Cb, and Cr they should be named Pb and Pr.
Equation (2): Pb = 0.564(B - Y)
Equation (3): Pr = 0.713(R - Y)
The equations mean that you can compute Y first, and then use the result for computing Pb and Pr.
Remark: don't round the value of Y when you are using it for computing Pb and Pr.
You can also substitute Equation (1) into (2) and (3):
Pb = 0.564(B - Y) = 0.564(B - (0.299R + 0.587G + 0.114B)) = 0.4997*B - 0.3311*G - 0.1686*R
Pr = 0.713(R - Y) = 0.713(R - (0.299R + 0.587G + 0.114B)) = 0.4998*R - 0.4185*G - 0.0813*B
There are some small inaccuracies.
Wikipedia is more accurate (but still just a result of mathematical assignments):
Y = 0.299*R + 0.587*G + 0.114*B
Pb = -0.168736*R - 0.331264*G + 0.5*B
Pr = 0.5*R - 0.418688*G - 0.081312*B
In the above formulas the range of Pb, Pr is [-127.5, 127.5].
In the "full range" formula of YCbCr (not YPbPr), an offset of 128 is added to Pb and Pr (so result is always positive).
In case of 8 bits, the final result is limited to range [0, 255] and rounded.
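To make the arithmetic concrete, here is a minimal Python sketch of the conversion described above (my own illustration, assuming 8-bit BT.601 "full range" input; the function name is made up):
def rgb_to_ycbcr(R, G, B):
    Y = 0.299 * R + 0.587 * G + 0.114 * B   # Equation (1)
    Pb = 0.564 * (B - Y)                     # Equation (2), computed from the unrounded Y
    Pr = 0.713 * (R - Y)                     # Equation (3)
    # YCbCr "full range": add the 128 offset, then round and clip to [0, 255]
    Cb = min(max(int(round(Pb + 128)), 0), 255)
    Cr = min(max(int(round(Pr + 128)), 0), 255)
    return int(round(Y)), Cb, Cr

print(rgb_to_ycbcr(50, 50, 50))   # (50, 128, 128): a gray pixel has no chroma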
What you're referencing is the conversion of RGB to YPbPr.
Conversion to YCbCr is as follows:
Y = 0.299 * R + 0.587 * G + 0.114 * B
Cb = -0.1687 * R - 0.3313 * G + 0.5 * B + 128
Cr = 0.5 * R - 0.4187 * G - 0.0813 * B + 128
Yours is YPbPr (which is better for JPEG Compression, see below):
delta = 0.5
Y = 0.299 * R + 0.587 * G + 0.114 * B (same as above)
Pb: (B - Y) * 0.564 + delta
Pr: (R - Y) * 0.713 + delta
The above answer did a better job of explaining this.
I've been looking into JPEG Compression for implementation in Pytorch and found this thread (https://photo.stackexchange.com/a/8357) useful to explain why we use YPbPr for compression over YCbCr.
The Pb and Pr versions are better for image compression because the luminance information (which carries the most detail) is kept in a single channel (Y), while Pb and Pr carry the chrominance information. So when you down-sample later in the pipeline, less valuable information is lost, as the small sketch below illustrates.
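As a rough illustration of that point (my own sketch, using NumPy with made-up array names): the luma plane is kept at full resolution while the chroma planes can be stored at quarter resolution (4:2:0) with little visible loss.
import numpy as np

ycbcr = np.zeros((480, 640, 3))      # placeholder YCbCr image, H x W x 3
Y = ycbcr[:, :, 0]                   # luma kept at full resolution
Cb = ycbcr[::2, ::2, 1]              # chroma subsampled 2x in each direction
Cr = ycbcr[::2, ::2, 2]
print(Y.shape, Cb.shape, Cr.shape)   # (480, 640) (240, 320) (240, 320)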
The starting data: a 2D array (620x480) containing an image of a human face, and a 2D array (30x20) containing an eye image. The face image includes the eye image.
How can I expand the eye image to 36x60 so that it includes the surrounding pixels from the face image? Are there ready-made solutions?
Another similar task: the eye image has size 37x27. How can I expand the eye image to the closest size that keeps the target 36x60 aspect ratio, e.g. 39x65, i.e. maintain the required aspect ratio before resizing, and then resize to 36x60?
Code for testing (the project is available via the link):
import dlib
import cv2 as cv
from imutils.face_utils import shape_to_np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('res/model.dat')

frame = cv.imread('photo.jpg')
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
img = frame.copy()

dets = detector(gray, 0)
for i, det in enumerate(dets):
    shape = shape_to_np(predictor(gray, det))
    shape_left_eye = shape[36:42]
    x, y, w, h = cv.boundingRect(shape_left_eye)
    cv.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
    cv.imwrite('file.png', frame[y: y+h, x: x+w])
The image 42x13:
For the first part you can use cv2.matchTemplate to find the eye region in the face and then enlarge it to the size you want. You can read more about it here.
FACE IMAGE USED
EYE IMAGE USED
The size of the eye image I have is (12, 32).
import cv2

face = cv2.imread('face.jpg', 0)
eye = cv2.imread('eye.jpg', 0)
w, h = eye.shape[::-1]
res = cv2.matchTemplate(face,eye,cv2.TM_CCOEFF)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
cv2.rectangle(face, top_left, bottom_right, 255, 2)
cv2.imshow('image', face)
cv2.waitKey(0)
cv2.destroyAllWindows()
The result with this code is:
Now I have the top-left and bottom-right coordinates of the matched eye: top_left = (112, 108) and bottom_right = (144, 120). To expand this region to 36x60 I simply subtract the required values from top_left and add the required values to bottom_right, as sketched below.
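In code, this symmetric expansion looks roughly like the following (my own sketch using the coordinates above and the face image loaded earlier; clamping to the image borders is omitted):
top_left = (112, 108)
bottom_right = (144, 120)
target_w, target_h = 36, 60

w = bottom_right[0] - top_left[0]   # 32
h = bottom_right[1] - top_left[1]   # 12
pad_x = (target_w - w) // 2         # 2 pixels added on each side
pad_y = (target_h - h) // 2         # 24 pixels added on each side

top_left = (top_left[0] - pad_x, top_left[1] - pad_y)
bottom_right = (bottom_right[0] + pad_x, bottom_right[1] + pad_y)
eye_36x60 = face[top_left[1]:bottom_right[1], top_left[0]:bottom_right[0]]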
EDIT 1
The question has been edited which suggests that dlib has been used along with a model trained to perform left eye detection. Using the same code I obtained
After that, as proposed above, I find top_left = (x, y) and bottom_right = (x + w, y + h).
Now, if the eye region is smaller than 36x60, we just take the surrounding area to expand it to 36x60; otherwise we have to expand it so that the aspect ratio is not disturbed and then resize it, and that cannot be hard-coded. The full code used is:
import cv2
import dlib
from imutils.face_utils import shape_to_np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('res/model.dat')

face = cv2.imread('face.jpg', 0)
img = face.copy()

dets = detector(img, 0)
for i, det in enumerate(dets):
    shape = shape_to_np(predictor(img, det))
    shape_left_eye = shape[36:42]
    x, y, w, h = cv2.boundingRect(shape_left_eye)
    cv2.rectangle(face, (x, y), (x + w, y + h), (255, 255, 255), 1)

top_left = (x, y)
bottom_right = (x + w, y + h)

if w <= 36 and h <= 60:
    x = int((36 - w)/2)
    y = int((60 - h)/2)
else:
    x1 = w - 36
    y1 = h - 60
    if x1 > y1:
        x = int((w % 3)/2)
        req = (w + x) * 5 / 3
        y = int((req - h)/2)
    else:
        y = int((h % 5)/2)
        req = (y + h) * 3 / 5
        x = int((req - w)/2)

top_left = (top_left[0] - x, top_left[1] - y)
bottom_right = (bottom_right[0] + x, bottom_right[1] + y)

extracted = face[top_left[1]:bottom_right[1], top_left[0]:bottom_right[0]]
result = cv2.resize(extracted, (36, 60), interpolation=cv2.INTER_LINEAR)

cv2.imshow('image', face)
cv2.imshow('imag', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Which gives us a 36x60 region of the eye:
This takes care of the case when the size of the eye is smaller than 36x60. For the second case, when the size of the eye is larger than the 36x60 region, I used face = cv2.resize(face, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC). The result was:
The size of the eye detected is (95, 33) and the extracted region is (97, 159), which is very close to the 3:5 aspect ratio before resizing, so the second task is also satisfied.
Does anybody know how to create a water material like the one in the badger example from Apple?
There is a "geotherm_01" object in "scene.scn", and this object has the materials "_1_terrasses_orange_water" and "_1_terrasses_eau", which create water with a slow animation that looks realistic.
I've tried to recreate the same materials in my test project but can't get the same result.
This water effect was achieved using shader modifiers as shown in the SceneKit Scene Editor within Xcode. The most important material property is the normal map that will be used to affect lighting in order to simulate wavelets on a perfectly flat geometry.
More specifically the modifier for the .surface entry point is
float waterSpeed = u_time * -0.1;
vec2 uvs = _surface.normalTexcoord;
uvs.x *= 2;
vec3 tn = texture2D(u_normalTexture, vec2(uvs.x, uvs.y + waterSpeed)).xyz;
tn = tn * 2 - 1;
vec3 tn2 = texture2D(u_normalTexture, vec2(uvs.x + 0.35 , uvs.y + 0.35 + (waterSpeed * 1.3))).xyz;
tn2 = tn2 * 2 - 1;
vec3 rn = (tn + tn2) * 0.5;
mat3 ts = mat3(_surface.tangent, _surface.bitangent, _surface.geometryNormal);
_surface.normal = normalize(ts * rn);
Here's a commented version of the code:
// Elapsed time in seconds
float waterSpeed = u_time * -0.1;
// Texture coordinates that will be used to sample the normal map
vec2 uvs = _surface.normalTexcoord;
uvs.x *= 2;
// Sample the normal map
vec3 tn = texture2D(u_normalTexture, vec2(uvs.x, uvs.y + waterSpeed)).xyz;
// The texture stores values in the [0, 1] range
// Express the coordinates of the normal vector in the [-1, +1] range
tn = tn * 2 - 1;
// Sample the normal map a second time, with a spatial offset and a slightly
// faster scroll speed, so the two layers drift relative to each other
vec3 tn2 = texture2D(u_normalTexture, vec2(uvs.x + 0.35 , uvs.y + 0.35 + (waterSpeed * 1.3))).xyz;
tn2 = tn2 * 2 - 1;
// Average the two normal samples to produce a richer (more complex and less uniform) effect
vec3 rn = (tn + tn2) * 0.5;
// Normals in the normal map are expressed in tangent space
// Convert them to object space and override the surface normal to simulate wavelets
mat3 ts = mat3(_surface.tangent, _surface.bitangent, _surface.geometryNormal);
_surface.normal = normalize(ts * rn);
I am trying to model heat conduction within a wood cylinder using implicit finite difference methods. The general heat equation that I'm using for cylindrical and spherical shapes is
(1/alpha) * dT/dt = d^2T/dr^2 + (p/r) * dT/dr   for r ~= 0
(1/alpha) * dT/dt = (1 + p) * d^2T/dr^2         for r = 0
where p is the shape factor, p = 1 for a cylinder and p = 2 for a sphere. Boundary conditions include convection at the surface. For more details about the model, please see the comments in the MATLAB code below.
The main m-file is:
%--- main parameters
rhow = 650; % density of wood, kg/m^3
d = 0.02; % wood particle diameter, m
Ti = 300; % initial particle temp, K
Tinf = 673; % ambient temp, K
h = 60; % heat transfer coefficient, W/m^2*K
% A = pre-exponential factor, 1/s and E = activation energy, kJ/mol
A1 = 1.3e8; E1 = 140; % wood -> gas
A2 = 2e8; E2 = 133; % wood -> tar
A3 = 1.08e7; E3 = 121; % wood -> char
R = 0.008314; % universal gas constant, kJ/mol*K
%--- initial calculations
b = 1; % shape factor, b = 1 cylinder, b = 2 sphere
r = d/2; % particle radius, m
nt = 1000; % number of time steps
tmax = 840; % max time, s
dt = tmax/nt; % time step spacing, delta t
t = 0:dt:tmax; % time vector, s
m = 20; % number of radius nodes
steps = m-1; % number of radius steps
dr = r/steps; % radius step spacing, delta r
%--- build initial vectors for temperature and thermal properties
i = 1:m;
T(i,1) = Ti; % column vector of temperatures
TT(1,i) = Ti; % row vector to store temperatures
pw(1,i) = rhow; % initial density at each node is wood density, rhow
pg(1,i) = 0; % initial density of gas
pt(1,i) = 0;      % initial density of tar
pc(1,i) = 0; % initial density of char
%--- solve system of equations [A][T]=[C] where T = A\C
for i = 2:nt+1
    % kinetics at n
    [rww, rwg, rwt, rwc] = funcY(A1,E1,A2,E2,A3,E3,R,T',pw(i-1,:));
    pw(i,:) = pw(i-1,:) + rww.*dt;      % update wood density
    pg(i,:) = pg(i-1,:) + rwg.*dt;      % update gas density
    pt(i,:) = pt(i-1,:) + rwt.*dt;      % update tar density
    pc(i,:) = pc(i-1,:) + rwc.*dt;      % update char density
    Yw = pw(i,:)./(pw(i,:) + pc(i,:));  % wood fraction
    Yc = pc(i,:)./(pw(i,:) + pc(i,:));  % char fraction

    % thermal properties at n
    cpw = 1112.0 + 4.85.*(T'-273.15);   % wood heat capacity, J/(kg*K)
    kw = 0.13 + (3e-4).*(T'-273.15);    % wood thermal conductivity, W/(m*K)
    cpc = 1003.2 + 2.09.*(T'-273.15);   % char heat capacity, J/(kg*K)
    kc = 0.08 - (1e-4).*(T'-273.15);    % char thermal conductivity, W/(m*K)
    cpbar = Yw.*cpw + Yc.*cpc;          % effective heat capacity
    kbar = Yw.*kw + Yc.*kc;             % effective thermal conductivity
    pbar = pw(i,:) + pc(i,:);           % effective density

    % temperature at n+1
    Tn = funcACbar(pbar,cpbar,kbar,h,Tinf,b,m,dr,dt,T);

    % kinetics at n+1
    [rww, rwg, rwt, rwc] = funcY(A1,E1,A2,E2,A3,E3,R,Tn',pw(i-1,:));
    pw(i,:) = pw(i-1,:) + rww.*dt;
    pg(i,:) = pg(i-1,:) + rwg.*dt;
    pt(i,:) = pt(i-1,:) + rwt.*dt;
    pc(i,:) = pc(i-1,:) + rwc.*dt;
    Yw = pw(i,:)./(pw(i,:) + pc(i,:));
    Yc = pc(i,:)./(pw(i,:) + pc(i,:));

    % thermal properties at n+1
    cpw = 1112.0 + 4.85.*(Tn'-273.15);
    kw = 0.13 + (3e-4).*(Tn'-273.15);
    cpc = 1003.2 + 2.09.*(Tn'-273.15);
    kc = 0.08 - (1e-4).*(Tn'-273.15);
    cpbar = Yw.*cpw + Yc.*cpc;
    kbar = Yw.*kw + Yc.*kc;
    pbar = pw(i,:) + pc(i,:);

    % revise temperature at n+1
    Tn = funcACbar(pbar,cpbar,kbar,h,Tinf,b,m,dr,dt,T);

    % store temperature at n+1
    T = Tn;
    TT(i,:) = T';
end
%--- plot data
figure(1)
plot(t./60,TT(:,1),'-b',t./60,TT(:,m),'-r')
hold on
plot([0 tmax/60],[Tinf Tinf],':k')
hold off
xlabel('Time (min)'); ylabel('Temperature (K)');
sh = num2str(h); snt = num2str(nt); sm = num2str(m);
title(['Cylinder Model, d = 20mm, h = ',sh,', nt = ',snt,', m = ',sm])
legend('Tcenter','Tsurface',['T\infty = ',num2str(Tinf),'K'],'location','southeast')
figure(2)
plot(t./60,pw(:,1),'--',t./60,pw(:,m),'-','color',[0 0.7 0])
hold on
plot(t./60,pg(:,1),'--b',t./60,pg(:,m),'b')
hold on
plot(t./60,pt(:,1),'--k',t./60,pt(:,m),'k')
hold on
plot(t./60,pc(:,1),'--r',t./60,pc(:,m),'r')
hold off
xlabel('Time (min)'); ylabel('Density (kg/m^3)');
The function m-file, funcACbar, that creates the system of equations to solve is:
% Finite difference equations for cylinder and sphere
% for 1D transient heat conduction with convection at surface
% general equation is:
% 1/alpha*dT/dt = d^2T/dr^2 + p/r*dT/dr for r ~= 0
% 1/alpha*dT/dt = (1 + p)*d^2T/dr^2 for r = 0
% where p is shape factor, p = 1 for cylinder, p = 2 for sphere
function T = funcACbar(pbar,cpbar,kbar,h,Tinf,b,m,dr,dt,T)
alpha = kbar./(pbar.*cpbar); % effective thermal diffusivity
Fo = alpha.*dt./(dr^2); % effective Fourier number
Bi = h.*dr./kbar; % effective Biot number
% [A] is coefficient matrix at time level n+1
% {C} is column vector at time level n
A(1,1) = 1 + 2*(1+b)*Fo(1);
A(1,2) = -2*(1+b)*Fo(2);
C(1,1) = T(1);
for k = 2:m-1
    A(k,k-1) = -Fo(k-1)*(1 - b/(2*(k-1)));  % Tm-1
    A(k,k)   = 1 + 2*Fo(k);                 % Tm
    A(k,k+1) = -Fo(k+1)*(1 + b/(2*(k-1)));  % Tm+1
    C(k,1) = T(k);
end
A(m,m-1) = -2*Fo(m-1);
A(m,m) = 1 + 2*Fo(m)*(1 + Bi(m) + (b/(2*m))*Bi(m));
C(m,1) = T(m) + 2*Fo(m)*Bi(m)*(1 + b/(2*m))*Tinf;
% solve system of equations [A]{T} = {C} where temperature T = [A]\{C}
T = A\C;
end
And finally the function that deals with the kinetic reactions, funcY, is:
% Kinetic equations for reactions of wood, first-order, Arrhenius-type equations
% K = A*exp(-E/RT) where A = pre-exponential factor, 1/s
% and E = activation energy, kJ/mol
function [rww, rwg, rwt, rwc] = funcY(A1,E1,A2,E2,A3,E3,R,T,pww)
K1 = A1.*exp(-E1./(R.*T)); % wood -> gas (1/s)
K2 = A2.*exp(-E2./(R.*T)); % wood -> tar (1/s)
K3 = A3.*exp(-E3./(R.*T)); % wood -> char (1/s)
rww = -(K1+K2+K3).*pww; % rate of wood consumption (rho/s)
rwg = K1.*pww; % rate of gas production from wood (rho/s)
rwt = K2.*pww; % rate of tar production from wood (rho/s)
rwc = K3.*pww; % rate of char production from wood (rho/s)
end
Running the above code gives a temperature profile at the center and surface of the wood cylinder:
As you can see from this plot, for some reason the center and surface temperatures rapidly converge at the 2 min mark, which isn't correct.
Any suggestions on how to fix this or create a more efficient way to solve the problem?
It looks like you are using a backward Euler implicit discretization of a diffusion PDE. A more accurate approach is the Crank-Nicolson method, sketched below. Both methods are unconditionally stable.
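For the interior nodes, and ignoring the p/r term for brevity, a Crank-Nicolson step averages the spatial operator between time levels n and n+1 (my notation, using the same Fourier number Fo = alpha*dt/dr^2 as in funcACbar):
T(k,n+1) - T(k,n) = (Fo/2)*[T(k+1,n+1) - 2*T(k,n+1) + T(k-1,n+1)] + (Fo/2)*[T(k+1,n) - 2*T(k,n) + T(k-1,n)]
Compared with backward Euler, the coefficient matrix carries Fo/2 instead of Fo, and the right-hand side vector gains the explicit half of the stencil instead of just T(k,n).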
The introduction of a T-dependent diffusion coefficient requires special treatment, best probably in the form of linearization, as explained briefly here. It would be useful to identify stability criteria to ensure that the time and distance step lengths are appropriate following introduction of T-dependent coefficients.
Note that MATLAB offers a PDE Toolbox which might be useful to you, although I have not checked how you might use it in detail.