
Egwald Economics: Microeconomics

The Duality of Production and Cost Functions

by

Elmer G. Wiens


Q. Duality of Production and Cost Functions Using the Implicit Function Theorem.

The theory of duality links the production function models to the cost function models by way of a minimization or maximization framework. The cost function is derived from the production function by choosing the combination of factor quantities that minimize the cost of producing levels of output at given factor prices. Conversely, the production function is derived from the cost function by calculating the maximum level of output that can be obtained from specified combinations of inputs.

I. Profit (Wealth) Maximizing Firm.

Production and cost functions (and profit functions) can be used to model how a profit (wealth) maximizing firm hires or purchases inputs (factors), such as labour, capital (structures and machinery), and materials and supplies, and combines these inputs through its production process to produce the products (outputs) that the firm sells (supplies) to its customers.

II. The Production Function.

The production function describes the maximum output that can be produced from given quantities of factor inputs with the firm's existing technological expertise. Let the variables q, L, K, and M represent the quantity of output, and the input quantities of labour, capital, and materials and supplies, respectively.

Mathematically, the production function, f, relates output, q, to inputs, L, K, and M, written as:

q = f(L, K, M)

with the function f having certain desirable properties.

III. Example: The CES production function

q = A * [alpha * (L^-rho) + beta * (K^-rho) + gamma * (M^-rho)]^(-nu/rho) = f(L, K, M).

The coefficients of the production function, A, alpha, beta, gamma, nu, and rho are positive, real numbers. The production function's inputs, L, K, and M, are non-negative real numbers.
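A minimal sketch of this function in Python (the specific parameter values are the ones used in the numerical example later on this page, not part of the general definition):

```python
# CES production function, with the parameter values of the numerical
# example later on this page (A = 1, nu = 1, sigma = 0.85).
A, alpha, beta, gamma, nu, rho = 1.0, 0.35, 0.4, 0.25, 1.0, 0.17647

def f(L, K, M):
    """q = A * [alpha*L^-rho + beta*K^-rho + gamma*M^-rho]^(-nu/rho)."""
    s = alpha * L**-rho + beta * K**-rho + gamma * M**-rho
    return A * s ** (-nu / rho)
```

At the least-cost input bundle found later on this page (L = 36.89, K = 24.42, M = 31.59), f returns approximately the target output q = 30.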

IV. The Total Cost of Production.

Let the variables wL, wK, and wM represent the unit prices of the factors L, K, and M, respectively. For any given combination of factor inputs, L, K, and M, the total cost of using these inputs is:

TC = wL * L + wK * K + wM * M

i.e. the sum of the quantities of factor inputs weighted by their respective factor prices.

V. The Cost Function.

The cost function describes the total cost of producing any given output quantity, using the cost minimizing quantity of inputs.

Mathematically, the cost function, C, relates the total cost, TC, to output, q, and factor prices wL, wK, and wM, if the cost minimizing combination of factor inputs is used, written as:

TC = C(q; wL, wK, wM)

with the function C having certain desirable properties.

VI. Example: The CES cost function:

C(q;wL,wK,wM) = h(q) * c(wL,wK,wM) = (q/A)^(1/nu) * [alpha^(1/(1+rho)) * wL^(rho/(1+rho)) + beta^(1/(1+rho)) * wK^(rho/(1+rho)) + gamma^(1/(1+rho)) * wM^(rho/(1+rho))]^((1+rho)/rho)

The cost function's factor prices, wL, wK, and wM, are positive real numbers.
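A matching sketch of the cost function, again with the numerical example's parameter values (h(q) = (q/A)^(1/nu), and c is the unit cost function):

```python
# CES cost function C(q; w) = h(q) * c(w), with the parameter values of
# the numerical example later on this page.
A, alpha, beta, gamma, nu, rho = 1.0, 0.35, 0.4, 0.25, 1.0, 0.17647
e = 1.0 / (1.0 + rho)      # exponent on the share parameters
r = rho / (1.0 + rho)      # exponent on the factor prices

def c(wL, wK, wM):
    """Unit cost function c(wL, wK, wM)."""
    s = alpha**e * wL**r + beta**e * wK**r + gamma**e * wM**r
    return s ** ((1.0 + rho) / rho)

def C(q, wL, wK, wM):
    """Total cost: C(q; wL, wK, wM) = (q/A)^(1/nu) * c(wL, wK, wM)."""
    return (q / A) ** (1.0 / nu) * c(wL, wK, wM)
```

Note that c is linear homogeneous in the factor prices, so doubling all three prices doubles total cost.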




VII. The Least-Cost Combination of Inputs: Production Function to Cost Function.

The entrepreneur, management, and employees of the profit maximizing firm choose the factor proportions and quantities, and output levels given the prices of factor inputs, and products. For any specified combination of positive factor prices, wL, wK, and wM, what combination of factor inputs, L, K, and M, will minimize the cost of producing any given level of positive output, q?

C*(q; wL, wK, wM) = min{L,K,M} { wL * L + wK * K + wM * M   :   q - f(L,K,M) = 0,   q > 0, wL > 0, wK > 0, and wM > 0 }

VIII. Constrained Optimization (Minimum): The Method of Lagrange.

Define the Lagrangian function, G, of the least-cost problem of (VII):

G(q; wL, wK, wM, L, K, M, μ) = wL * L + wK * K + wM * M + μ * (q - f(L,K,M))

where the new variable, μ, is called the Lagrange multiplier.

a.   First Order Necessary Conditions:

0.   Gµ(q; wL, wK, wM, L, K, M, μ) = q - f(L, K, M) = 0
1.   GL(q; wL, wK, wM, L, K, M, μ) = wL - µ * fL(L, K, M) = 0
2.   GK(q; wL, wK, wM, L, K, M, μ) = wK - µ * fK(L, K, M) = 0
3.   GM(q; wL, wK, wM, L, K, M, μ) = wM - µ * fM(L, K, M) = 0

b.   Solution Functions:

We want to solve, simultaneously, these four equations for the variables L, K, M, and µ as continuously differentiable functions of the variables q, wL, wK, and wM, and the parameters of the production function.

L = L(q; wL, wK, wM)
K = K(q; wL, wK, wM)
M = M(q; wL, wK, wM)
µ = µ(q; wL, wK, wM)

Suppose that the point Z = (q, wL, wK, wM, L, K, M, µ) satisfies equations 0. to 3., and that each of the functions Gµ, GL, GK, and GM has continuous partial derivatives with respect to each of the variables q, wL, wK, wM, L, K, M, and μ at the point Z.  Also, suppose the determinant of the Jacobian matrix, defined below, when evaluated at the point Z is not equal to zero.

According to the Implicit Function Theorem, functions L, K, M, and µ exist that express the variables L, K, M, and µ as continuously differentiable functions of the variables q, wL, wK, and wM.

Moreover:

L = L(q; wL, wK, wM)
K = K(q; wL, wK, wM)
M = M(q; wL, wK, wM)
µ = µ(q; wL, wK, wM)

and:

0'.   Gµ(q; wL, wK, wM, L, K, M, μ) = q - f(L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM)) = 0
1'.   GL(q; wL, wK, wM, L, K, M, μ) = wL - µ(q; wL, wK, wM) * fL(L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM)) = 0
2'.   GK(q; wL, wK, wM, L, K, M, μ) = wK - µ(q; wL, wK, wM) * fK(L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM)) = 0
3'.   GM(q; wL, wK, wM, L, K, M, μ) = wM - µ(q; wL, wK, wM) * fM(L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM)) = 0

for points (q; wL, wK, wM) in a neighborhood of the point (q; wL, wK, wM). That is, the points (q, wL, wK, wM, L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM), µ(q; wL, wK, wM)) also satisfy the first order conditions.

c.   Jacobian Matrices:

The Jacobian matrix (bordered Hessian of G) of the four functions, Gµ, GL, GK, GM, with respect to the choice variables, μ, L, K, and M:

J3   =  

| Gµµ  GµL  GµK  GµM |       | 0    -fL        -fK        -fM       |
| GLµ  GLL  GLK  GLM |   =   | -fL  -µ * fLL   -µ * fLK   -µ * fLM  |
| GKµ  GKL  GKK  GKM |       | -fK  -µ * fKL   -µ * fKK   -µ * fKM  |
| GMµ  GML  GMK  GMM |       | -fM  -µ * fML   -µ * fMK   -µ * fMM  |

  =   Jµ, L, K, M

The bordered principal minor of the bordered Hessian of the Lagrangian function, G:

J2   =  

| 0    -fL        -fK       |
| -fL  -µ * fLL   -µ * fLK  |
| -fK  -µ * fKL   -µ * fKK  |

d.   Second Order Necessary Conditions:

The second order necessary conditions require that the Hessian of G be positive definite subject to the constraint at Z = (q, wL, wK, wM, L, K, M, µ). With a single constraint this holds if the determinants of the bordered principal minors, J2 and J3, are both negative.

e.   Sufficient Conditions:

If the second order necessary conditions are satisfied, then the first order necessary conditions are sufficient for a minimum at Z. Therefore:

C*(q; wL, wK, wM) = wL * L(q; wL, wK, wM) + wK * K(q; wL, wK, wM) + wM * M(q; wL, wK, wM)

Moreover, since the sixteen functions that comprise the components of J3 are continuous, the determinants of J2 and J3 remain negative at points (q; wL, wK, wM) in a neighborhood of the point (q; wL, wK, wM). That is, the points (q, wL, wK, wM, L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM), µ(q; wL, wK, wM)) also satisfy the second order conditions.

Consequently, for points (q; wL, wK, wM) in a neighborhood of the point (q; wL, wK, wM):

C*(q; wL, wK, wM) = wL * L(q; wL, wK, wM) + wK * K(q; wL, wK, wM) + wM * M(q; wL, wK, wM) = G(q; wL, wK, wM, L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM), μ(q; wL, wK, wM))

IX. Marginal Cost Function.

MC*(q; wL, wK, wM) = ∂C*(q; wL, wK, wM)/∂q = ∂G(q; wL, wK, wM, L, K, M, μ)/∂q = μ(q; wL, wK, wM)

Example: CES Marginal Cost
MC(q; wL, wK, wM) = ∂C(q; wL, wK, wM)/∂q = h'(q) * c(wL,wK,wM) = (1/nu) * (1/A) * (q/A)^((1 - nu)/nu) * c(wL,wK,wM)
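A quick numerical check of this marginal cost formula, comparing the analytic expression with a central-difference derivative of the cost function (parameter values from this page's numerical example):

```python
A, alpha, beta, gamma, nu, rho = 1.0, 0.35, 0.4, 0.25, 1.0, 0.17647
e, r = 1.0 / (1.0 + rho), rho / (1.0 + rho)

def c(wL, wK, wM):
    s = alpha**e * wL**r + beta**e * wK**r + gamma**e * wM**r
    return s ** ((1.0 + rho) / rho)

def C(q, wL, wK, wM):
    return (q / A) ** (1.0 / nu) * c(wL, wK, wM)

def MC(q, wL, wK, wM):
    """Analytic CES marginal cost: (1/nu) * (1/A) * (q/A)^((1-nu)/nu) * c(w)."""
    return (1.0 / nu) * (1.0 / A) * (q / A) ** ((1.0 - nu) / nu) * c(wL, wK, wM)

# Central-difference approximation of dC/dq at q = 30, w = (7, 13, 6).
h = 1e-6
mc_fd = (C(30 + h, 7, 13, 6) - C(30 - h, 7, 13, 6)) / (2 * h)
```

With nu = 1 the technology has constant returns to scale, so marginal cost is independent of q and equals the unit cost c(w), which is the Lagrange multiplier µ of the least-cost problem.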

X. Factor Demand Functions.

If the cost function C*(q; wL, wK, wM) satisfies certain properties (Hotelling-Shephard's lemma), properties derived from the properties of the production function f(L, K, M):

L(q; wL, wK, wM) = ∂C*(q; wL, wK, wM)/∂wL = L
K(q; wL, wK, wM) = ∂C*(q; wL, wK, wM)/∂wK = K
M(q; wL, wK, wM) = ∂C*(q; wL, wK, wM)/∂wM = M

where (L, K, M) are the factor proportions that minimize the cost of producing q units of output at specific factor prices (wL, wK, wM) for a given output quantity q.

Example: CES Factor Demand Functions
L(q;wL,wK,wM)   =   h(q) * l(wL,wK,wM)   =   h(q) * [(alpha / wL) * c(wL,wK,wM)]^(1/(1+rho))
K(q;wL,wK,wM)   =   h(q) * k(wL,wK,wM)   =   h(q) * [(beta / wK) * c(wL,wK,wM)]^(1/(1+rho))
M(q;wL,wK,wM)   =   h(q) * m(wL,wK,wM)   =   h(q) * [(gamma / wM) * c(wL,wK,wM)]^(1/(1+rho))
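These demand functions can be evaluated directly. A sketch with the example's parameter values, also checking that factor expenditure reproduces total cost C = h(q) * c(w):

```python
A, alpha, beta, gamma, nu, rho = 1.0, 0.35, 0.4, 0.25, 1.0, 0.17647
e, r = 1.0 / (1.0 + rho), rho / (1.0 + rho)

def c(wL, wK, wM):
    s = alpha**e * wL**r + beta**e * wK**r + gamma**e * wM**r
    return s ** ((1.0 + rho) / rho)

def demands(q, wL, wK, wM):
    """CES factor demands: x_i = h(q) * [(share_i / w_i) * c(w)]^(1/(1+rho))."""
    hq = (q / A) ** (1.0 / nu)
    cw = c(wL, wK, wM)
    L = hq * ((alpha / wL) * cw) ** (1.0 / (1.0 + rho))
    K = hq * ((beta / wK) * cw) ** (1.0 / (1.0 + rho))
    M = hq * ((gamma / wM) * cw) ** (1.0 / (1.0 + rho))
    return L, K, M

L, K, M = demands(30, 7, 13, 6)
expenditure = 7 * L + 13 * K + 6 * M   # should equal C = h(q) * c(w)
```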

If the cost function C*(q; wL, wK, wM) satisfies the properties required by the Hotelling-Shephard's lemma, then the factor demand functions satisfy:

wL * ∂ L(q;wL,wK,wM) / ∂wL   +   wK * ∂ K(q;wL,wK,wM) / ∂wL   +   wM * ∂ M(q;wL,wK,wM) / ∂wL   =   0
wL * ∂ L(q;wL,wK,wM) / ∂wK   +   wK * ∂ K(q;wL,wK,wM) / ∂wK   +   wM * ∂ M(q;wL,wK,wM) / ∂wK   =   0
wL * ∂ L(q;wL,wK,wM) / ∂wM   +   wK * ∂ K(q;wL,wK,wM) / ∂wM   +   wM * ∂ M(q;wL,wK,wM) / ∂wM   =   0

If the first order conditions are sufficient for a minimum, then:

q   ≡   f(L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM))   →
0   ≡   fL * ∂L(q; wL, wK, wM)/∂wL + fK * ∂K(q; wL, wK, wM)/∂wL + fM * ∂M(q; wL, wK, wM)/∂wL, and
wL = μ(q; wL, wK, wM) * fL(L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM))
wK = μ(q; wL, wK, wM) * fK(L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM))
wM = μ(q; wL, wK, wM) * fM(L(q; wL, wK, wM), K(q; wL, wK, wM), M(q; wL, wK, wM))

so:

∂C*(q; wL, wK, wM)/∂wL

=   ∂ { wL * L(q; wL, wK, wM) + wK * K(q; wL, wK, wM) + wM * M(q; wL, wK, wM) } / ∂wL

 

=   L(q; wL, wK, wM) + wL * ∂L(q; wL, wK, wM)/∂wL + wK * ∂K(q; wL, wK, wM)/∂wL + wM * ∂M(q; wL, wK, wM)/∂wL

 

=   L(q; wL, wK, wM) + μ(q; wL, wK, wM) * (fL * ∂L(q; wL, wK, wM)/∂wL + fK * ∂K(q; wL, wK, wM)/∂wL + fM * ∂M(q; wL, wK, wM)/∂wL)

  =   L(q; wL, wK, wM)
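This derivation can be checked numerically: differentiating the cost function with respect to each factor price by central differences should reproduce the cost-minimizing input quantities (a sketch with the example's parameter values):

```python
A, alpha, beta, gamma, nu, rho = 1.0, 0.35, 0.4, 0.25, 1.0, 0.17647
e, r = 1.0 / (1.0 + rho), rho / (1.0 + rho)

def C(q, wL, wK, wM):
    s = alpha**e * wL**r + beta**e * wK**r + gamma**e * wM**r
    return (q / A) ** (1.0 / nu) * s ** ((1.0 + rho) / rho)

def dC_dw(q, w, i, h=1e-6):
    """Central difference for ∂C/∂w_i; by Shephard's lemma, factor demand i."""
    up, dn = list(w), list(w)
    up[i] += h
    dn[i] -= h
    return (C(q, *up) - C(q, *dn)) / (2 * h)

w = (7.0, 13.0, 6.0)
L, K, M = (dC_dw(30, w, i) for i in range(3))
```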

XI. Example. Duality: CES production function to CES cost function.

XII. The Solution Functions' Comparative Statics.

The Jacobian matrix of the four functions, Gµ, GL, GK, GM, with respect to the variables, q, wL, wK, and wM:

Jq, wL, wK, wM   =  

| Gµq  GµwL  GµwK  GµwM |       | 1  0  0  0 |
| GLq  GLwL  GLwK  GLwM |   =   | 0  1  0  0 |
| GKq  GKwL  GKwK  GKwM |       | 0  0  1  0 |
| GMq  GMwL  GMwK  GMwM |       | 0  0  0  1 |

The Jacobian matrix of the four solution functions, Φ = {µ, L, K, M}, with respect to the variables, q, wL, wK, and wM:

JΦ   =  

| µq  µwL  µwK  µwM |
| Lq  LwL  LwK  LwM |
| Kq  KwL  KwK  KwM |
| Mq  MwL  MwK  MwM |

From the Implicit Function Theorem:

Jq, wL, wK, wM   +   Jµ, L, K, M * JΦ   =   0 (zero matrix)   →

JΦ   =   - (Jµ, L, K, M)^-1

for points (q; wL, wK, wM) in a neighborhood of the point (q; wL, wK, wM), and L = L(q; wL, wK, wM), K = K(q; wL, wK, wM), M = M(q; wL, wK, wM), and µ = µ(q; wL, wK, wM).

XIII. Numerical Example: Production Function to Cost Function.

The CES production function as specified:

f(L, K, M) = 1 * [0.35 * (L^- 0.17647) + 0.4 * (K^- 0.17647) + 0.25 *(M^- 0.17647)]^(-1/0.17647)

with A = 1, alpha = 0.35, beta = 0.4, gamma = 0.25, rho = 0.17647 (sigma = 0.85), and nu = 1. The CES production function has continuous first and second order partial derivatives with respect to its arguments.

fL(L,K,M) = 1 * 0.35 * 1^(-0.17647/1) * L ^-(1 + 0.17647) * f(L,K,M)^(1 + 0.17647/1),
fK(L,K,M) = 1 * 0.4 * 1^(-0.17647/1) * K ^-(1 + 0.17647) * f(L,K,M)^(1 + 0.17647/1),
fM(L,K,M) = 1 * 0.25 * 1^(-0.17647/1) * M ^-(1 + 0.17647) * f(L,K,M)^(1 + 0.17647/1).

The factor prices are set: wL = 7, wK = 13, wM = 6.   Output is set: q = 30.

The Least-Cost Combination of Inputs: Production Function to Cost Function.

The dual cost function, C*, is obtained from the production function, f, by:

C*(q; wL, wK, wM) = min{L,K,M} { wL * L + wK * K + wM * M   :   q - f(L,K,M) = 0,   q > 0, wL > 0, wK > 0, and wM > 0 }

C*(30; 7, 13, 6) = min{L,K,M} { 7 * L + 13 * K + 6 * M   :   30 - f(L,K,M) = 0 }

Constrained Optimization (Minimum): The Method of Lagrange.

G(q; wL, wK, wM, L, K, M, μ) = wL * L + wK * K + wM * M + μ * (q - f(L,K,M))

G(30; 7, 13, 6, L, K, M, μ) = 7 * L + 13 * K + 6 * M   + μ * (30 - f(L,K,M))

First Order Necessary Conditions:

0.   Gµ(30; 7, 13, 6, L, K, M, μ) = 30 - f(L, K, M) = 0
1.   GL(30; 7, 13, 6, L, K, M, μ) = 7 - µ * fL(L, K, M) = 0
2.   GK(30; 7, 13, 6, L, K, M, μ) = 13 - µ * fK(L, K, M) = 0
3.   GM(30; 7, 13, 6, L, K, M, μ) = 6 - µ * fM(L, K, M) = 0

Solve these four equations simultaneously (say, using Newton's Method) for L, K, M, and µ. With L = 36.89, K = 24.42, M = 31.59, and µ = 25.506,

f(L,K,M) = 30,   fL(L,K,M) = 0.2744,   fK(L,K,M) = 0.5097,   fM(L,K,M) = 0.2352,

0.   Gµ(30; 7, 13, 6, L, K, M, μ) = 30 - 30 = 0
1.   GL(30; 7, 13, 6, L, K, M, μ) = 7 - 25.506 * 0.2744 ≈ 0
2.   GK(30; 7, 13, 6, L, K, M, μ) = 13 - 25.506 * 0.5097 ≈ 0
3.   GM(30; 7, 13, 6, L, K, M, μ) = 6 - 25.506 * 0.2352 ≈ 0
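The simultaneous solve can be sketched with a basic Newton iteration. The analytic CES gradient, the finite-difference Jacobian, and the small Gaussian-elimination solver below are choices of this sketch, not necessarily the method used to produce the numbers above:

```python
# Newton's method for the first order conditions 0. - 3. of the example.
A, alpha, beta, gamma, nu, rho = 1.0, 0.35, 0.4, 0.25, 1.0, 0.17647
q, wL, wK, wM = 30.0, 7.0, 13.0, 6.0

def f(L, K, M):
    s = alpha * L**-rho + beta * K**-rho + gamma * M**-rho
    return A * s ** (-nu / rho)

def grad_f(L, K, M):
    # fi = nu * share_i * A^(-rho/nu) * x_i^-(1+rho) * f^(1 + rho/nu)
    common = nu * A ** (-rho / nu) * f(L, K, M) ** (1.0 + rho / nu)
    return (common * alpha * L ** -(1.0 + rho),
            common * beta * K ** -(1.0 + rho),
            common * gamma * M ** -(1.0 + rho))

def G(x):
    L, K, M, mu = x
    fL, fK, fM = grad_f(L, K, M)
    return [q - f(L, K, M), wL - mu * fL, wK - mu * fK, wM - mu * fM]

def solve4(J, b):
    # Gaussian elimination with partial pivoting for the 4x4 Newton step.
    n = 4
    aug = [J[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(aug[row][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for row in range(col + 1, n):
            fac = aug[row][col] / aug[col][col]
            for k in range(col, n + 1):
                aug[row][k] -= fac * aug[col][k]
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(aug[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (aug[row][n] - s) / aug[row][row]
    return x

x = [30.0, 30.0, 30.0, 20.0]                 # starting guess (L, K, M, mu)
for _ in range(100):
    g = G(x)
    if max(abs(v) for v in g) < 1e-10:
        break
    h = 1e-6
    J = [[0.0] * 4 for _ in range(4)]
    for j in range(4):                       # finite-difference Jacobian
        xp, xm = x[:], x[:]
        xp[j] += h
        xm[j] -= h
        gp, gm = G(xp), G(xm)
        for i in range(4):
            J[i][j] = (gp[i] - gm[i]) / (2 * h)
    step = solve4(J, g)
    t = 1.0                                  # damp to keep L, K, M positive
    while min(x[i] - t * step[i] for i in range(3)) <= 0:
        t /= 2
    x = [x[i] - t * step[i] for i in range(4)]

L, K, M, mu = x
```

The iteration reproduces the values reported above: L ≈ 36.89, K ≈ 24.42, M ≈ 31.59, and µ ≈ 25.506.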

Second Order Necessary Conditions:

The second order necessary conditions require that the Hessian of G be positive definite subject to the constraint at Z = (q, wL, wK, wM, L, K, M, µ) = (30, 7, 13, 6, 36.89, 24.42, 31.59, 25.506). With a single constraint this holds if the determinants of J2 and J3 are both negative.

Jµ, L, K, M   =   J3   =  

| 0    -fL        -fK        -fM       |       | 0       -0.274   -0.51    -0.235 |
| -fL  -µ * fLL   -µ * fLK   -µ * fLM  |   =   | -0.274   0.148   -0.14    -0.065 |
| -fK  -µ * fKL   -µ * fKK   -µ * fKM  |       | -0.51   -0.14     0.367   -0.12  |
| -fM  -µ * fML   -µ * fMK   -µ * fMM  |       | -0.235  -0.065   -0.12     0.168 |
  Determinant(J3) = -0.0312

 

J2   =  

| 0    -fL        -fK       |       | 0       -0.274   -0.51  |
| -fL  -µ * fLL   -µ * fLK  |   =   | -0.274   0.148   -0.14  |
| -fK  -µ * fKL   -µ * fKK  |       | -0.51   -0.14     0.367 |

  Determinant(J2) = -0.1052

The Least-Cost Combination of Inputs.

C*(30; 7, 13, 6) = 7 * L + 13 * K + 6 * M = 7 * 36.89 + 13 * 24.42 + 6 * 31.59   =   765.17

The Solution Functions' Comparative Statics.

From the Implicit Function Theorem:

JΦ   =   - (Jµ, L, K, M)^-1

JΦ   =  

| µq  µwL  µwK  µwM |       | 0       1.23     0.814    1.053 |
| Lq  LwL  LwK  LwM |   =   | 1.23   -2.968    1        1.295 |
| Kq  KwL  KwK  KwM |       | 0.814   1       -0.934    0.857 |
| Mq  MwL  MwK  MwM |       | 1.053   1.295    0.857   -3.367 |



XIV. Maximum Output: Cost Function to Production Function.

The entrepreneur, management, and employees of the profit maximizing firm can investigate the technology (production function) available in the firm's cost function, C(q; wL, wK, wM), by determining the factor prices, wL, wK, and wM, consistent with the maximum level of output, q, for a given combination of factor inputs, L, K, and M.

f*(L, K, M) = max{q} {q   :   C(q; wL, wK, wM)   <=   wL * L + wK * K + wM * M,   L > 0, K > 0, M > 0,   for all wL >= 0, wK >= 0, wM >= 0}

Question: f*   ==   f (original production function)?

Consider the case where the cost function, C(q; wL, wK, wM), factors:

C(q; wL, wK, wM) = q^(1/nu) * c(wL, wK, wM)

Setting:

f*(L, K, M) = max{q} {q   :   q^(1/nu) * c(wL, wK, wM) <= wL * L + wK * K + wM * M,   L > 0, K > 0, M > 0,   for all wL >= 0, wK >= 0, wM >= 0}

Since c(wL, wK, wM) and wL * L + wK * K + wM * M are both linear homogeneous in the factor prices, the prices can be normalized so that wL * L + wK * K + wM * M = 1:

f*(L, K, M) = max{q} {q   :   q^(1/nu) * c(wL, wK, wM) <= 1,   L > 0, K > 0, M > 0,   wL * L + wK * K + wM * M = 1}

f*(L, K, M) = max{q} {q   :   q^(1/nu) <= 1 / c(wL, wK, wM),   L > 0, K > 0, M > 0,   wL * L + wK * K + wM * M = 1}

Rewrite this as (Diewert, 1974, 157):

f*(L, K, M)^(1/nu) = min{wL,wK,wM} {1 / c(wL, wK, wM)   :   wL * L + wK * K + wM * M = 1,   wL >= 0, wK >= 0, wM >= 0}

f*(L, K, M)^(1/nu) = 1 / max{wL,wK,wM} {c(wL, wK, wM)   :   wL * L + wK * K + wM * M = 1,   wL >= 0, wK >= 0, wM >= 0}, since c(wL, wK, wM) >= 0

XV. Constrained Optimization (Maximum): The Method of Lagrange.

Define the Lagrangian function, H, of the output maximization problem of (XIV):

H(L, K, M, wL, wK, wM, λ) = c(wL, wK, wM) + λ * (1 - (wL * L + wK * K + wM * M))

where the new variable, λ, is called the Lagrange multiplier.

a.   First Order Necessary Conditions:

0.   Hλ(L, K, M, wL, wK, wM, λ) = 1 - (wL * L + wK * K + wM * M) = 0
1.   HwL(L, K, M, wL, wK, wM, λ) = ∂c(wL, wK, wM)/∂wL - λ * L = 0
2.   HwK(L, K, M, wL, wK, wM, λ) = ∂c(wL, wK, wM)/∂wK - λ * K = 0
3.   HwM(L, K, M, wL, wK, wM, λ) = ∂c(wL, wK, wM)/∂wM - λ * M = 0

b.   Solution Functions:

We want to solve, simultaneously, these four equations for the variables wL, wK, and wM, and λ as continuously differentiable functions of the variables L, K, and M, and the parameters of the cost function.

λ = λ(L, K, M)
wL = wL(L, K, M)
wK = wK(L, K, M)
wM = wM(L, K, M)

Suppose that the point W = (L, K, M, wL, wK, wM, λ) satisfies equations 0. to 3., and that each of the functions Hλ, HwL, HwK, and HwM has continuous partial derivatives with respect to each of the variables L, K, M, wL, wK, wM, and λ at the point W.  Also, suppose the determinant of the Jacobian matrix, defined below, when evaluated at the point W is not equal to zero.

According to the Implicit Function Theorem, functions wL, wK, wM, and λ exist that express the variables wL, wK, wM, and λ as continuously differentiable functions of the variables L, K, and M.

Moreover:

λ = λ(L, K, M)
wL = wL(L, K, M)
wK = wK(L, K, M)
wM = wM(L, K, M)

and:

0'.   Hλ(L, K, M, wL, wK, wM, λ) = 1 - ( wL(L, K, M) * L + wK(L, K, M) * K + wM(L, K, M) * M ) = 0
1'.   HwL(L, K, M, wL, wK, wM, λ) = ∂c(wL(L, K, M), wK(L, K, M), wM(L, K, M))/∂wL - λ(L, K, M) * L = 0
2'.   HwK(L, K, M, wL, wK, wM, λ) = ∂c(wL(L, K, M), wK(L, K, M), wM(L, K, M))/∂wK - λ(L, K, M) * K = 0
3'.   HwM(L, K, M, wL, wK, wM, λ) = ∂c(wL(L, K, M), wK(L, K, M), wM(L, K, M))/∂wM - λ(L, K, M) * M = 0

for points (L, K, M) in a neighborhood of the point (L, K, M). That is, the points (L, K, M, wL(L, K, M), wK(L, K, M), wM(L, K, M), λ(L, K, M)) also satisfy the first order conditions.

c.   Jacobian Matrices:

The Jacobian matrix (bordered Hessian of H) of the four functions, Hλ, HwL, HwK, HwM, with respect to the choice variables, λ, wL, wK, wM :

J3   =  

| Hλλ    HλwL    HλwK    HλwM   |       | 0    -L      -K      -M     |
| HwLλ   HwLwL   HwLwK   HwLwM  |   =   | -L   cwLwL   cwLwK   cwLwM  |
| HwKλ   HwKwL   HwKwK   HwKwM  |       | -K   cwKwL   cwKwK   cwKwM  |
| HwMλ   HwMwL   HwMwK   HwMwM  |       | -M   cwMwL   cwMwK   cwMwM  |

  =   Jλ, wL, wK, wM

The bordered principal minor of the bordered Hessian of the Lagrangian function, H:

J2   =  

| 0    -L      -K     |
| -L   cwLwL   cwLwK  |
| -K   cwKwL   cwKwK  |

d.   Second Order Necessary Conditions:

The second order necessary conditions require that the Hessian of H be negative definite subject to the constraint at W = (L, K, M, wL, wK, wM, λ). With a single constraint this holds if the determinant of J2 is positive and the determinant of J3 is negative.

e.   Sufficient Conditions:

If the second order necessary conditions are satisfied, then the first order necessary conditions are sufficient for a maximum at W. Therefore:

f*(L, K, M)^(1/nu) = q(L, K, M)^(1/nu) = 1 / c(wL(L, K, M), wK(L, K, M), wM(L, K, M))

Moreover, since the sixteen functions that comprise the components of J3 are continuous, the determinants of J2 and J3 retain their signs at points (L, K, M) in a neighborhood of the point (L, K, M). That is, the points (L, K, M, wL(L, K, M), wK(L, K, M), wM(L, K, M), λ(L, K, M)) also satisfy the second order conditions.

Consequently, for points (L, K, M) in a neighborhood of the point (L, K, M):

f*(L, K, M)^(1/nu) = q(L, K, M)^(1/nu) = 1 / H(L, K, M, wL(L, K, M), wK(L, K, M), wM(L, K, M), λ(L, K, M))

XVI. The Lagrange Multiplier.

The first order necessary conditions (XV. a. 1-3.) imply:

λ(L, K, M) = (∂c(wL, wK, wM)/∂wL) / L = (∂c(wL, wK, wM)/∂wK) / K = (∂c(wL, wK, wM)/∂wM) / M

where:

wL = wL(L, K, M)
wK = wK(L, K, M)
wM = wM(L, K, M)

for points (L, K, M) in a neighborhood of the point (L, K, M).

XVII. Factor Demand Functions.

If the specified (or derived) cost function, C(q; wL, wK, wM) = q^(1/nu) * c(wL, wK, wM), satisfies the Hotelling-Shephard properties, then the factor demand functions are given by:

L(q; wL, wK, wM)  =  ∂C(q; wL, wK, wM)/∂wL = q^(1/nu) * ∂c(wL, wK, wM)/∂wL = q^(1/nu) * l(wL, wK, wM)
K(q; wL, wK, wM)  =  ∂C(q; wL, wK, wM)/∂wK = q^(1/nu) * ∂c(wL, wK, wM)/∂wK = q^(1/nu) * k(wL, wK, wM)
M(q; wL, wK, wM)  =  ∂C(q; wL, wK, wM)/∂wM = q^(1/nu) * ∂c(wL, wK, wM)/∂wM = q^(1/nu) * m(wL, wK, wM)

XVIII. Example. Duality: CES cost function to CES production function.

XIX. The Solution Functions' Comparative Statics.

The Jacobian matrix of the four functions, Hλ, HwL, HwK, HwM, with respect to the variables, L, K, M:

JL, K, M   =  

| HλL    HλK    HλM   |       | -wL   -wK   -wM |
| HwLL   HwLK   HwLM  |   =   | -λ     0     0  |
| HwKL   HwKK   HwKM  |       |  0    -λ     0  |
| HwML   HwMK   HwMM  |       |  0     0    -λ  |

The Jacobian matrix of the four solution functions, Φ = {λ, wL, wK, wM}, with respect to the variables, L, K, and M:

JΦ   =  

| λL    λK    λM   |
| wLL   wLK   wLM  |
| wKL   wKK   wKM  |
| wML   wMK   wMM  |

From the Implicit Function Theorem:

JL, K, M   +   Jλ, wL, wK, wM * JΦ   =   0 (zero matrix)   →

JΦ   =   - (Jλ, wL, wK, wM)^-1   *   JL, K, M

for points (L, K, M) in a neighborhood of the point (L, K, M), and wL = wL(L, K, M), wK = wK(L, K, M), wM = wM(L, K, M), and λ = λ(L, K, M),

with:

f*(L, K, M) = (1 / c(wL(L, K, M), wK(L, K, M), wM(L, K, M)))^nu.

XX. Numerical Example: Cost Function to Production Function.

The CES cost function as specified:

C(q;wL,wK,wM) = h(q) * c(wL,wK,wM) = (q/1)^(1/1) * [0.35^(1/(1+0.17647)) * wL^(0.17647/(1+0.17647)) + 0.4^(1/(1+0.17647)) * wK^(0.17647/(1+0.17647)) + 0.25^(1/(1+0.17647)) * wM^(0.17647/(1+0.17647))]^((1+0.17647)/0.17647)

with A = 1, alpha = 0.35, beta = 0.4, gamma = 0.25, rho = 0.17647 (sigma = 0.85), and nu = 1. The CES cost function has continuous first and second order partial derivatives with respect to its arguments.

∂C(q;wL,wK,wM) / ∂wL = h(q) * 0.35^(1/(1+0.17647)) * wL^(-1/(1+0.17647)) * c(wL, wK, wM)^(1/(1+0.17647));
∂C(q;wL,wK,wM) / ∂wK = h(q) * 0.4^(1/(1+0.17647)) * wK^(-1/(1+0.17647)) * c(wL, wK, wM)^(1/(1+0.17647));
∂C(q;wL,wK,wM) / ∂wM = h(q) * 0.25^(1/(1+0.17647)) * wM^(-1/(1+0.17647)) * c(wL, wK, wM)^(1/(1+0.17647)).

The factor inputs are set: L = 36.89, K = 24.42, M = 31.59.

Maximum Output: Cost Function to Production Function.

f*(L, K, M)^(1/nu) = 1 / max{wL,wK,wM} {c(wL, wK, wM)   :   wL * L + wK * K + wM * M = 1,   wL >= 0, wK >= 0, wM >= 0}

f*(36.89, 24.42, 31.59)^(1/1) = 1 / max{wL,wK,wM} {c(wL, wK, wM)   :   wL * 36.89 + wK * 24.42 + wM * 31.59 = 1,   wL >= 0, wK >= 0, wM >= 0}

Constrained Optimization (Maximum): The Method of Lagrange.

H(L, K, M, wL, wK, wM, λ) = c(wL, wK, wM) + λ * (1 - (wL * L + wK * K + wM * M))

H(36.89, 24.42, 31.59, wL, wK, wM, λ) = c(wL, wK, wM) + λ * (1 - (wL * 36.89 + wK * 24.42 + wM * 31.59))

First Order Necessary Conditions:

0.   Hλ(36.89, 24.42, 31.59, wL, wK, wM, λ) = 1 - (wL * 36.89 + wK * 24.42 + wM * 31.59) = 0
1.   HwL(36.89, 24.42, 31.59, wL, wK, wM, λ) = ∂c(wL, wK, wM)/∂wL - λ * 36.89 = 0
2.   HwK(36.89, 24.42, 31.59, wL, wK, wM, λ) = ∂c(wL, wK, wM)/∂wK - λ * 24.42 = 0
3.   HwM(36.89, 24.42, 31.59, wL, wK, wM, λ) = ∂c(wL, wK, wM)/∂wM - λ * 31.59 = 0

Solve these four equations simultaneously (say, using Newton's Method) for wL, wK, wM, and λ. With wL = 0.00915, wK = 0.01699, wM = 0.00784, and λ = 0.0333,

0.   Hλ(36.89, 24.42, 31.59, wL, wK, wM, λ) = 1 - (wL * 36.89 + wK * 24.42 + wM * 31.59) ≈ 0
1.   HwL(36.89, 24.42, 31.59, wL, wK, wM, λ) = 1.23 - 0.0333 * 36.89 ≈ 0
2.   HwK(36.89, 24.42, 31.59, wL, wK, wM, λ) = 0.81 - 0.0333 * 24.42 ≈ 0
3.   HwM(36.89, 24.42, 31.59, wL, wK, wM, λ) = 1.05 - 0.0333 * 31.59 ≈ 0

Second Order Necessary Conditions:

The second order necessary conditions require that the Hessian of H be negative definite subject to the constraint at W = (L, K, M, wL, wK, wM, λ) = (36.89, 24.42, 31.59, 0.00915, 0.01699, 0.00784, 0.0333). With a single constraint this holds if the determinant of J2 is positive and the determinant of J3 is negative.

Jλ, wL, wK, wM   =   J3   =  

| 0    -L      -K      -M     |       | 0        -36.888   -24.415   -31.592 |
| -L   cwLwL   cwLwK   cwLwM  |   =   | -36.888  -75.693    25.518    33.019 |
| -K   cwKwL   cwKwK   cwKwM  |       | -24.415   25.518   -23.827    21.855 |
| -M   cwMwL   cwMwK   cwMwM  |       | -31.592   33.019    21.855   -85.874 |

  Determinant(J3) = -18741780.5534

 

J2   =  

| 0        -36.888   -24.415 |
| -36.888  -75.693    25.518 |
| -24.415   25.518   -23.827 |

  Determinant(J2) = 123508.5984

Maximum Output:

With L = 36.89, K = 24.42, M = 31.59,   set wL = 0.00915, wK = 0.01699, wM = 0.00784,

f*(L, K, M) = (1 / c(wL, wK, wM))^(nu),   →

f*(36.89, 24.42, 31.59) = (1 / c(0.00915, 0.01699, 0.00784))^1 = (1 / 0.0333)^1 = 30
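This round trip can be confirmed directly: scaling the original factor prices (7, 13, 6) by the minimized cost normalizes the budget constraint to 1, and 1/c at the normalized prices recovers the output level (a sketch with the example's parameter values):

```python
# Cost function to production function: with prices normalized so that
# wL*L + wK*K + wM*M = 1, maximum output is (1 / c(wL, wK, wM))^nu.
A, alpha, beta, gamma, nu, rho = 1.0, 0.35, 0.4, 0.25, 1.0, 0.17647
e, r = 1.0 / (1.0 + rho), rho / (1.0 + rho)

def c(wL, wK, wM):
    s = alpha**e * wL**r + beta**e * wK**r + gamma**e * wM**r
    return s ** ((1.0 + rho) / rho)

L, K, M = 36.89, 24.42, 31.59
total = 7 * L + 13 * K + 6 * M          # minimized cost at prices (7, 13, 6)
wL, wK, wM = 7 / total, 13 / total, 6 / total

budget = wL * L + wK * K + wM * M       # = 1 by construction
q_star = (1.0 / c(wL, wK, wM)) ** nu    # recovered output f*(L, K, M)
```

The normalized prices come out near the wL = 0.00915, wK = 0.01699, wM = 0.00784 reported above, and q_star recovers the original output of 30.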

Compare f*(L,K,M) with f(L,K,M) at L = 36.89, K = 24.42, M = 31.59:

f(36.89, 24.42, 31.59) = 30

The Solution Functions' Comparative Statics.

JL, K, M   =  

| -wL   -wK   -wM |       | -0.00915   -0.01699   -0.00784 |
| -λ     0     0  |   =   | -0.0333     0          0       |
|  0    -λ     0  |       |  0         -0.0333     0       |
|  0     0    -λ  |       |  0          0         -0.0333  |

From the Implicit Function Theorem:

JΦ   =   - (Jλ, wL, wK, wM)^-1   *   JL, K, M

JΦ   =  

| λL    λK    λM   |       | -0.000305   -0.000566   -0.000261 |
| wLL   wLK   wLM  |   =   | -0.00028     3.0E-5      1.0E-5   |
| wKL   wKK   wKM  |       |  3.0E-5     -0.00077     2.0E-5   |
| wML   wMK   wMM  |       |  1.0E-5      2.0E-5     -0.00028  |

The Partial Derivatives of f*(L,K,M).

Since f*(L,K,M) = (1 / c(wL,wK,wM))^nu = (1 / λ)^nu:

f*L(L,K,M) = nu * (-λL / λ^2) * (1 / λ)^(nu-1),
f*K(L,K,M) = nu * (-λK / λ^2) * (1 / λ)^(nu-1),
f*M(L,K,M) = nu * (-λM / λ^2) * (1 / λ)^(nu-1),   →

f*L(36.89,24.42,31.59) = 1 * (0.000305 / 0.0333^2) * (1 / 0.0333)^(1-1) = 0.274,
f*K(36.89,24.42,31.59) = 1 * (0.000566 / 0.0333^2) * (1 / 0.0333)^(1-1) = 0.51,
f*M(36.89,24.42,31.59) = 1 * (0.000261 / 0.0333^2) * (1 / 0.0333)^(1-1) = 0.235.

Partial Derivatives of f(L,K,M).

fL(36.89,24.42,31.59) = 0.274,
fK(36.89,24.42,31.59) = 0.51,
fM(36.89,24.42,31.59) = 0.235.

Conclude: f*   ==   f.

 



CES Production/ Cost Functions Numerical Example

CES Production Function:

q = A * [alpha * (L^-rho) + beta * (K^-rho) + gamma * (M^-rho)]^(-nu/rho) = f(L,K,M).

where L = labour, K = capital, M = materials and supplies, and q = product. The parameter nu is a measure of the economies of scale, while the parameter rho yields the elasticity of substitution:

sigma = 1/(1 + rho).

The CES Cost Function:

C(q;wL,wK,wM) = h(q) * c(wL,wK,wM) = (q/A)^(1/nu) * [alpha^(1/(1+rho)) * wL^(rho/(1+rho)) + beta^(1/(1+rho)) * wK^(rho/(1+rho)) + gamma^(1/(1+rho)) * wM^(rho/(1+rho))]^((1+rho)/rho)

The cost function's factor prices, wL, wK, and wM, are positive real numbers.




Curvature:

The CES production function, q = f(L,K,M), is (quasi)concave to the origin of the 3-dimensional (L, K, M) space if its Hessian matrix, F, is negative (semi)definite.

At the specified parameters of the production function, with q = 30, wL = 7, wK = 13, wM = 6, and with L = 36.89, K = 24.42, M = 31.59:

F   =  

| fLL   fLK   fLM |       | -0.0058    0.0055    0.0025 |
| fKL   fKK   fKM |   =   |  0.0055   -0.0144    0.0047 |
| fML   fMK   fMM |       |  0.0025    0.0047   -0.0066 |

The Hessian matrix, F, is negative definite if its eigenvalues are all negative, and negative semidefinite if its eigenvalues are all nonpositive. If one or more eigenvalues of F are positive, f(L,K,M) is NOT concave.

The eigenvalues of F are e1 = -0.018, e2 = -0.00876, e3 = 0. The zero eigenvalue, e3, reflects the linear homogeneity (nu = 1) of f: the Hessian of a linearly homogeneous function is singular, so F is negative semidefinite and f is concave.
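An equivalent curvature check uses the leading principal minors of F instead of its eigenvalues: for a negative (semi)definite matrix they alternate in sign, and the near-zero 3x3 determinant corresponds to the zero eigenvalue e3 (the entries below are the Hessian values reported above):

```python
# Leading principal minors of the Hessian F at the example point.
F = [[-0.0058,  0.0055,  0.0025],
     [ 0.0055, -0.0144,  0.0047],
     [ 0.0025,  0.0047, -0.0066]]

m1 = F[0][0]
m2 = F[0][0] * F[1][1] - F[0][1] * F[1][0]
m3 = (F[0][0] * (F[1][1] * F[2][2] - F[1][2] * F[2][1])
      - F[0][1] * (F[1][0] * F[2][2] - F[1][2] * F[2][0])
      + F[0][2] * (F[1][0] * F[2][1] - F[1][1] * F[2][0]))
```

m1 < 0 and m2 > 0 give the alternating signs required for concavity, while m3 is essentially zero because f is homogeneous of degree nu = 1, making its Hessian singular.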

The CES cost function, C(q; wL, wK, wM) is (quasi)concave to the origin of the 3-dimensional (wL, wK, wM) space if the Hessian matrix, C, of its unit cost function, c(wL, wK, wM), is negative (semi)definite.

At the specified parameters of the cost function, with q = 30, wL = 7, wK = 13, wM = 6:

C   =  

| cwLwL   cwLwK   cwLwM |       | -0.0989    0.0333    0.0432 |
| cwKwL   cwKwK   cwKwM |   =   |  0.0333   -0.0311    0.0286 |
| cwMwL   cwMwK   cwMwM |       |  0.0432    0.0286   -0.1122 |

The Hessian matrix, C, is negative definite if its eigenvalues are all negative, and negative semidefinite if its eigenvalues are all nonpositive. If one or more eigenvalues of C are positive, C(q; wL, wK, wM) is NOT concave.

The eigenvalues of C are e1 = -0.14924, e2 = -0.09305, e3 = 0. Again, the zero eigenvalue reflects the linear homogeneity of the unit cost function c, so C is negative semidefinite and c is concave in the factor prices.

 

 



Mathematical Notes

Implicit Function Theorem.

Fitzpatrick, Patrick M. Advanced Calculus. Boston: PWS Publishing, 1996.

Given N + M variables, x1, ..., xN, y1, ..., yM, a system of N equations expressed as:

F1(x1, ..., xN, y1, ..., yM) = 0,
F2(x1, ..., xN, y1, ..., yM) = 0,
. . . . . . . . . .
FN(x1, ..., xN, y1, ..., yM) = 0,

and a vector (point) Z = (a1, ..., aN, b1, ..., bM) that satisfies the system of equations.

Under what conditions can the system of equations be solved for the M variables y1, ..., yM as continuously differentiable functions of the N variables x1, ..., xN in a neighborhood of Z:

y1 = Φ1(x1, ..., xN),
y2 = Φ2(x1, ..., xN),
. . . . . . . . . .
yM = ΦM(x1, ..., xN),

such that

b1 = Φ1(a1, ..., aN),
b2 = Φ2(a1, ..., aN),
. . . . . . . . . .
bM = ΦM(a1, ..., aN),

and such that the equations

F1(x1, ..., xN, Φ1(x1, ..., xN), ..., ΦM(x1, ..., xN)) = 0,
F2(x1, ..., xN, Φ1(x1, ..., xN), ..., ΦM(x1, ..., xN)) = 0,
. . . . . . . . . .
FN(x1, ..., xN, Φ1(x1, ..., xN), ..., ΦM(x1, ..., xN)) = 0,

are satisfied for all (x1, ..., xN) in a neighborhood of (a1, ..., aN)?

The M continuously differentiable functions, Φ1, Φ2, ..., ΦM, exist if each of the N functions, F1, F2, ... , FN, has continuous partial derivatives with respect to each of the N + M variables, x1, ..., xN, y1, ..., yM, near Z, and if the Jacobian determinant of the N functions F1, F2, ... , FN with respect to the M variables, y1, ..., yM, is not equal to zero when evaluated at Z.

The Jacobian determinant of the N functions F1, F2, ... , FN with respect to the M variables, y1, ..., yM is the determinant of the Jacobian matrix, Jy, of partial derivatives of F1, F2, ... , FN with respect to y1, ..., yM. This is written as the matrix:

    Jy   =  

| ∂F1/∂y1   ∂F1/∂y2   ...   ∂F1/∂yM |
| ∂F2/∂y1   ∂F2/∂y2   ...   ∂F2/∂yM |
| ...                               |
| ∂FN/∂y1   ∂FN/∂y2   ...   ∂FN/∂yM |

The Jacobian matrix, Jx, of the N functions F1, F2, ... , FN with respect to the N variables, x1, ..., xN is the matrix of partial derivatives of F1, F2, ... , FN with respect to x1, ..., xN. This is written as the matrix:

    Jx   =  

| ∂F1/∂x1   ∂F1/∂x2   ...   ∂F1/∂xN |
| ∂F2/∂x1   ∂F2/∂x2   ...   ∂F2/∂xN |
| ...                               |
| ∂FN/∂x1   ∂FN/∂x2   ...   ∂FN/∂xN |

The Jacobian matrix, JΦ, of the M functions Φ1, Φ2, ... , ΦM with respect to the N variables, x1, ..., xN is the matrix of partial derivatives of Φ1, Φ2, ... , ΦM with respect to x1, ..., xN. This is written as the matrix:

    JΦ   =  

| ∂Φ1/∂x1   ∂Φ1/∂x2   ...   ∂Φ1/∂xN |
| ∂Φ2/∂x1   ∂Φ2/∂x2   ...   ∂Φ2/∂xN |
| ...                               |
| ∂ΦM/∂x1   ∂ΦM/∂x2   ...   ∂ΦM/∂xN |

Moreover, the Jacobian matrices, Jy, Jx, and JΦ satisfy

Jx + Jy * JΦ = 0 (zero matrix)

for (x1, ..., xN) in a neighborhood of (a1, ..., aN), and y1 = Φ1(x1, ..., xN), y2 = Φ2(x1, ..., xN), . . . . . . . . . . , and yM = ΦM(x1, ..., xN).
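A one-equation illustration of the theorem (N = M = 1; the circle is an example chosen here, not taken from this page): F(x, y) = x^2 + y^2 - 1 = 0 defines y = Φ(x) near (0.6, 0.8), and Jx + Jy * JΦ = 0 reduces to dy/dx = -Fx/Fy.

```python
import math

def F(x, y):
    return x * x + y * y - 1.0

x0, y0 = 0.6, 0.8                       # a point on the circle F = 0

Fx, Fy = 2 * x0, 2 * y0                 # Jx and Jy are 1x1 matrices here
dydx_ift = -Fx / Fy                     # JΦ = -Jy^-1 * Jx

# Compare with a finite difference along the branch y = sqrt(1 - x^2).
h = 1e-6
dydx_fd = (math.sqrt(1 - (x0 + h)**2) - math.sqrt(1 - (x0 - h)**2)) / (2 * h)
```

Both values agree: the implicit derivative is -x0/y0 = -0.75, matching the direct differentiation of the solution branch.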

 



Continuously Differentiable Functions

A function, f(x1, ..., xN), is called continuously differentiable if its partial derivatives, ∂f/∂x1, ..., ∂f/∂xN, are continuous functions.

 



Young's Theorem

If it is possible to interchange the order of taking the first two partial derivatives of a function, the function satisfies Young's Theorem.

If the function, f(x1, ..., xN), has continuous second-order derivatives, it satisfies Young's Theorem. Thus,

∂²f/∂xi∂xj = ∂²f/∂xj∂xi,   i, j = 1, ..., N

 



Euler's Theorem

If the function f(x1, ..., xN) is homogeneous of degree r, then:

∂f/∂x1 * x1 + ... + ∂f/∂xN * xN = r * f(x1, ..., xN)
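As a numerical illustration, the CES production function used on this page is homogeneous of degree nu, so the theorem can be checked with central differences at the example's input bundle:

```python
A, alpha, beta, gamma, nu, rho = 1.0, 0.35, 0.4, 0.25, 1.0, 0.17647

def f(L, K, M):
    s = alpha * L**-rho + beta * K**-rho + gamma * M**-rho
    return A * s ** (-nu / rho)

def partial(g, args, i, h=1e-6):
    # central-difference partial derivative of g in argument i
    up, dn = list(args), list(args)
    up[i] += h
    dn[i] -= h
    return (g(*up) - g(*dn)) / (2 * h)

pt = (36.89, 24.42, 31.59)
euler_sum = sum(partial(f, pt, i) * pt[i] for i in range(3))
# Euler's theorem: euler_sum should equal nu * f(pt), since r = nu here.
```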

 



Positive Definite Matrix.

A symmetric, real matrix is a positive definite matrix if all of its eigenvalues are positive.

Positive Semidefinite Matrix.

A symmetric, real matrix is a positive semidefinite matrix if all of its eigenvalues are nonnegative.

Negative Definite Matrix.

A symmetric, real matrix is a negative definite matrix if all of its eigenvalues are negative.

Negative Semidefinite Matrix.

A symmetric, real matrix is a negative semidefinite matrix if all of its eigenvalues are nonpositive.

 

 
   

Copyright © Elmer G. Wiens:   Egwald Web Services.   All Rights Reserved.