Answer:
Tikhonov regularization is a larger set than ridge regression. Here is my attempt to spell out exactly how they differ.
Suppose that for a known matrix $A$ and vector $b$, we wish to find a vector $x$ such that
$$Ax = b.$$
The standard approach is ordinary least squares linear regression. However, if no $x$ satisfies the equation, or more than one does (that is, the solution is not unique), the problem is said to be ill-posed. Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as
$$\|Ax - b\|^2,$$
where $\|\cdot\|$ is the Euclidean norm. In matrix notation the solution, denoted by $\hat{x}$, is given by
$$\hat{x} = (A^\top A)^{-1} A^\top b.$$
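To make the formula concrete, here is a minimal NumPy sketch (the matrix $A$ and vector $b$ are made-up illustration data, not from the text); it checks the normal-equations form against a standard least-squares solver:

```python
import numpy as np

# Hypothetical illustration data: 20 observations, 3 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

# Normal-equations form of the OLS solution: x_hat = (A^T A)^{-1} A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Numerically preferable equivalent: a dedicated least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))  # the two solutions agree
```

In practice one uses a least-squares solver rather than forming $A^\top A$ explicitly, since the normal equations square the condition number.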
Tikhonov regularization minimizes
$$\|Ax - b\|^2 + \|\Gamma x\|^2$$
for some suitably chosen Tikhonov matrix $\Gamma$. An explicit matrix-form solution, denoted by $\hat{x}_\Gamma$, is given by
$$\hat{x}_\Gamma = (A^\top A + \Gamma^\top \Gamma)^{-1} A^\top b.$$
For $\Gamma = 0$ this reduces to the unregularized least squares solution, provided that $(A^\top A)^{-1}$ exists.
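The reduction to ordinary least squares at $\Gamma = 0$ is easy to verify numerically. A small NumPy sketch (again with made-up data, and assuming $A^\top A$ is invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))  # tall, generically full-rank
b = rng.standard_normal(20)

def tikhonov(A, b, Gamma):
    """Closed-form Tikhonov solution: (A^T A + Gamma^T Gamma)^{-1} A^T b."""
    return np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)

# With Gamma = 0 the Tikhonov solution collapses to ordinary least squares.
x_ols, *_ = np.linalg.lstsq(A, b, rcond=None)
x_tik0 = tikhonov(A, b, np.zeros((3, 3)))
print(np.allclose(x_ols, x_tik0))  # True: the two coincide
```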
Typically for ridge regression, two departures from Tikhonov regularization are described. First, the Tikhonov matrix is replaced by a multiple of the identity matrix,
$$\Gamma = \alpha I,$$
giving preference to solutions with smaller norm, i.e., the $L_2$ norm. Then $\Gamma^\top \Gamma$ becomes $\alpha^2 I$, leading to
$$\hat{x}_{\alpha I} = (A^\top A + \alpha^2 I)^{-1} A^\top b.$$
Finally, for ridge regression, it is typically assumed that the variables of $A$ are scaled so that $A^\top A$ has the form of a correlation matrix, and $A^\top b$ is the vector of correlations between the variables of $A$ and $b$, leading to
$$\hat{x}_R = (A^\top A + \lambda I)^{-1} A^\top b.$$
Note that in this form the Lagrange multiplier $\alpha^2$ is usually replaced by $k$, $\lambda$, or some other symbol, but retains the property $\lambda \geq 0$.
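A short NumPy sketch of the ridge closed form (illustration data made up; $\lambda$ chosen arbitrarily), which also exhibits the characteristic shrinkage of the solution norm:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
lam = 0.5  # ridge penalty (the alpha^2 / lambda above; must be >= 0)

# Closed form: x_R = (A^T A + lambda I)^{-1} A^T b
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)

# Shrinkage: for lambda > 0 the ridge solution has smaller norm than OLS.
x_ols, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_ridge) < np.linalg.norm(x_ols))  # True
```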
In formulating this answer, I acknowledge borrowing liberally from Wikipedia and from *Ridge estimation of transfer function weights*.
Carl has given a thorough answer that nicely explains the mathematical differences between Tikhonov regularization vs. ridge regression. Inspired by the historical discussion here, I thought it might be useful to add a short example demonstrating how the more general Tikhonov framework can be useful.
First a brief note on context. Ridge regression arose in statistics, and while regularization is now widespread in statistics & machine learning, Tikhonov's approach was originally motivated by inverse problems arising in model-based data assimilation (particularly in geophysics). The simplified example below is in this category (more complex versions are used for paleoclimate reconstructions).
Imagine we want to reconstruct past temperatures $u_0$ from present-day measurements $u_{\mathrm{fwd}}$. In our simplified model we will assume that temperature evolves according to the heat equation
$$u_t = u_{xx}.$$
Tikhonov regularization can solve this problem by solving
$$\min_u \; \|Au - u_{\mathrm{fwd}}\|^2 + \omega^2 \|Lu\|^2,$$
where $A$ is the forward diffusion operator, $L$ is the discrete Laplacian, and $\omega$ controls the regularization strength.
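One convenient way to carry out this minimization (and the way the backslash line in the Matlab code below does it) is to note that the penalized objective is itself an ordinary least-squares problem on a stacked system:

```latex
\|Au - u_{\mathrm{fwd}}\|^2 + \omega^2 \|Lu\|^2
  \;=\;
  \left\| \begin{bmatrix} A \\ \omega L \end{bmatrix} u
        - \begin{bmatrix} u_{\mathrm{fwd}} \\ 0 \end{bmatrix} \right\|^2 ,
```

so a single call to a least-squares solver on the augmented matrix returns the regularized solution.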
Below is a comparison of the results:
We can see that the original temperature $u_0$ has a smooth profile, which is smoothed still further by diffusion to give $u_{\mathrm{fwd}}$. Direct inversion fails to recover $u_0$: the solution $u_{\mathrm{inv}}$ shows strong "checkerboarding" artifacts. The Tikhonov solution $u_{\mathrm{reg}}$, however, recovers $u_0$ with quite good accuracy.
Note that in this example, ridge regression would always push our solution towards an "ice age" (i.e. uniform zero temperatures). Tikhonov regularization allows us a more flexible, physically based prior constraint: here our penalty essentially says the reconstruction should be only slowly evolving, i.e. $Lu \approx 0$.
Matlab code for the example is below.
% Tikhonov Regularization Example: Inverse Heat Equation
n=15; t=2e1; w=1e-2; % grid size, # time steps, regularization
L=toeplitz(sparse([-2,1,zeros(1,n-3),1]/2)); % laplacian (periodic BCs)
A=(speye(n)+L)^t; % forward operator (diffusion)
x=(0:n-1)'; u0=sin(2*pi*x/n); % initial condition (periodic & smooth)
ufwd=A*u0; % forward model
uinv=A\ufwd; % inverse model
ureg=[A;w*L]\[ufwd;zeros(n,1)]; % regularized inverse
plot(x,u0,'k.-',x,ufwd,'k:',x,uinv,'r.:',x,ureg,'ro');
set(legend('u_0','u_{fwd}','u_{inv}','u_{reg}'),'box','off');