Packages

import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)

Likelihood Function

We can write the (scaled) log-likelihood function like this,

$$ l(\beta_0, \beta_1) = \frac{1}{n} \sum_{i=1}^n \big( y_i (x_i^T \beta) - \exp(x_i^T \beta) - \log(y_i!) \big) \\ \beta = (\beta_0, \beta_1)^T, \quad x_i = (1, x_{i1})^T $$

A likelihood function is always tied to a specific distributional assumption (most commonly the normal distribution). In this example, we will use the Poisson distribution.
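
For reference, the expression above comes straight from the Poisson probability mass function: taking the log of $P(Y_i = y_i) = \lambda_i^{y_i} e^{-\lambda_i} / y_i!$ and substituting the model $\lambda_i = \exp(x_i^T \beta)$ gives each term of the sum.

$$ \log P(Y_i = y_i) = y_i \log \lambda_i - \lambda_i - \log(y_i!) = y_i (x_i^T \beta) - \exp(x_i^T \beta) - \log(y_i!) $$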

p = 2
n = 1000
true_beta = np.array([[1], [0.5]])

Here, we generate 1000 samples from a normal distribution with mean 0 and standard deviation 0.2. We also prepend a column of ones to the data because the regression needs an intercept term.

x = np.random.normal(loc=0, scale=0.2, size=(n, 1))
x = np.hstack((np.ones((n, 1)), x))
x[:5, :]
array([[ 1.        ,  0.32486907],
       [ 1.        , -0.12235128],
       [ 1.        , -0.10563435],
       [ 1.        , -0.21459372],
       [ 1.        ,  0.17308153]])

We build the Poisson model with the exponential function. Strictly speaking, the log is the link function here and the exponential is its inverse, mapping the linear predictor to the Poisson mean.

$$ \lambda(x_i) = \mathbb{E}(Y \vert X = x_i) = \exp(x_i^T \beta) $$

param = np.exp(x @ true_beta)
param[:5, :]
array([[3.19770875],
       [2.55697357],
       [2.57843551],
       [2.44172105],
       [2.96400313]])
y = np.random.poisson(param)
y[:5, :]
array([[4],
       [2],
       [2],
       [3],
       [1]])

Gradient Descent

So, how can we find the true beta from the data samples? Usually we rely on some kind of optimization method; here, we will use gradient descent. The quantity we compute at each step is,

$$ -\nabla l(\beta_0^{(t)}, \beta_1^{(t)}) $$

Note the minus sign in front of the log-likelihood. We want to maximize the log-likelihood, which is equivalent to minimizing its negative, and gradient descent moves in the direction of minimization. Plugging the Poisson model from above into the log-likelihood, its negative gradient is,

$$ -\nabla l(\beta_0, \beta_1) = -\frac{1}{n} \sum_{i=1}^n (y_i \cdot x_i - \exp(x_i^T \beta) \cdot x_i) \in \mathbb{R}^2 $$

beta = np.array([.5, 0.5]).reshape((p, 1))

param = np.exp(x @ beta)
grad = -np.mean(y * x - param * x, axis=0).reshape((p, 1))
grad
array([[-1.07590006],
       [-0.03620562]])

Both components of this gradient are negative, so $\beta_0$ and $\beta_1$ must move in the opposite direction, i.e. increase $(+)$. To control how far each update moves, a learning rate $\eta$ is used.
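
Each iteration then updates beta with the rule below, where $\eta$ scales the step size (this is exactly what the code computes with the negative gradient stored in grad):

$$ \beta^{(t+1)} = \beta^{(t)} - \eta \cdot \big( -\nabla l(\beta_0^{(t)}, \beta_1^{(t)}) \big) $$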

learning_rate = 0.7

# initial beta 
beta = np.zeros((p, 1))
beta_0 = []
beta_1 = []

for i in range(500):
    param = np.exp(x @ beta)
    grad = -np.mean(y * x - param * x, axis=0).reshape((p, 1))
    
    # beta update
    beta_new = beta - learning_rate * grad
    converged = np.sum(np.abs(beta_new - beta)) < 1e-8
    
    beta = beta_new
    beta_0.append(beta[0])
    beta_1.append(beta[1])
    print('Iteration {} beta: '.format(i))
    print(beta_new, '\n')
    
    if converged:
        break
Iteration 0 beta: 
[[1.2173    ]
 [0.05128819]] 

Iteration 1 beta: 
[[0.76890038]
 [0.08497731]] 

Iteration 2 beta: 
[[1.17481907]
 [0.12502591]] 

Iteration 3 beta: 
[[0.82294632]
 [0.15322112]] 

Iteration 4 beta: 
[[1.14358892]
 [0.18814092]] 

Iteration 5 beta: 
[[0.85957336]
 [0.2118515 ]] 

Iteration 6 beta: 
[[1.11921502]
 [0.24220564]] 

Iteration 7 beta: 
[[0.88634236]
 [0.26222014]] 

Iteration 8 beta: 
[[1.09956325]
 [0.28854142]] 

Iteration 9 beta: 
[[0.90677746]
 [0.30548869]] 

Iteration 10 beta: 
[[1.0834115 ]
 [0.32826791]] 

Iteration 11 beta: 
[[0.92281526]
 [0.34265599]] 

Iteration 12 beta: 
[[1.06997847]
 [0.36233765]] 

Iteration 13 beta: 
[[0.93563883]
 [0.37458085]] 

Iteration 14 beta: 
[[1.05872319]
 [0.39156291]] 

Iteration 15 beta: 
[[0.94602687]
 [0.40200151]] 

Iteration 16 beta: 
[[1.04924815]
 [0.41663734]] 

Iteration 17 beta: 
[[0.95452114]
 [0.42555254]] 

Iteration 18 beta: 
[[1.04124817]
 [0.43815396]] 

Iteration 19 beta: 
[[0.96151472]
 [0.44577932]] 

Iteration 20 beta: 
[[1.03448127]
 [0.45662009]] 

Iteration 21 beta: 
[[0.96730224]
 [0.46315052]] 

Iteration 22 beta: 
[[1.02875119]
 [0.47247006]] 

Iteration 23 beta: 
[[0.97211023]
 [0.47806886]] 

Iteration 24 beta: 
[[1.02389612]
 [0.48607586]] 

Iteration 25 beta: 
[[0.97611626]
 [0.49088038]] 

Iteration 26 beta: 
[[1.01978125]
 [0.49775622]] 

Iteration 27 beta: 
[[0.9794617 ]
 [0.50188239]] 

Iteration 28 beta: 
[[1.01629337]
 [0.50778435]] 

Iteration 29 beta: 
[[0.98226044]
 [0.51133028]] 

Iteration 30 beta: 
[[1.01333703]
 [0.51639449]] 

Iteration 31 beta: 
[[0.98460507]
 [0.51944347]] 

Iteration 32 beta: 
[[1.01083146]
 [0.52378753]] 

Iteration 33 beta: 
[[0.98657143]
 [0.52641041]] 

Iteration 34 beta: 
[[1.00870819]
 [0.53013581]] 

Iteration 35 beta: 
[[0.98822201]
 [0.53239298]] 

Iteration 36 beta: 
[[1.00690917]
 [0.53558717]] 

Iteration 37 beta: 
[[0.98960848]
 [0.53753023]] 

Iteration 38 beta: 
[[1.0053851 ]
 [0.54026848]] 

Iteration 39 beta: 
[[0.99077375]
 [0.54194156]] 

Iteration 40 beta: 
[[1.00409414]
 [0.54428862]] 

Iteration 41 beta: 
[[0.99175358]
 [0.54572952]] 

Iteration 42 beta: 
[[1.00300078]
 [0.54774106]] 

Iteration 43 beta: 
[[0.99257776]
 [0.54898219]] 

Iteration 44 beta: 
[[1.00207488]
 [0.55070602]] 

Iteration 45 beta: 
[[0.99327123]
 [0.55177521]] 

Iteration 46 beta: 
[[1.00129088]
 [0.55325236]] 

Iteration 47 beta: 
[[0.99385487]
 [0.55417351]] 

Iteration 48 beta: 
[[1.00062708]
 [0.55543923]] 

Iteration 49 beta: 
[[0.99434615]
 [0.55623289]] 

Iteration 50 beta: 
[[1.0000651 ]
 [0.55731739]] 

Iteration 51 beta: 
[[0.99475977]
 [0.55800123]] 

Iteration 52 beta: 
[[0.99958935]
 [0.55893043]] 

Iteration 53 beta: 
[[0.99510803]
 [0.55951967]] 

Iteration 54 beta: 
[[0.99918662]
 [0.56031579]] 

Iteration 55 beta: 
[[0.9954013 ]
 [0.56082352]] 

Iteration 56 beta: 
[[0.99884571]
 [0.56150561]] 

Iteration 57 beta: 
[[0.99564828]
 [0.5619431 ]] 

Iteration 58 beta: 
[[0.99855715]
 [0.5625275 ]] 

Iteration 59 beta: 
[[0.99585628]
 [0.56290447]] 

Iteration 60 beta: 
[[0.9983129 ]
 [0.56340516]] 

Iteration 61 beta: 
[[0.99603147]
 [0.56372997]] 

Iteration 62 beta: 
[[0.99810616]
 [0.56415895]] 

Iteration 63 beta: 
[[0.99617902]
 [0.56443881]] 

Iteration 64 beta: 
[[0.99793117]
 [0.56480636]] 

Iteration 65 beta: 
[[0.99630329]
 [0.56504748]] 

Iteration 66 beta: 
[[0.99778305]
 [0.56536239]] 

Iteration 67 beta: 
[[0.99640797]
 [0.56557013]] 

Iteration 68 beta: 
[[0.99765769]
 [0.56583995]] 

Iteration 69 beta: 
[[0.99649613]
 [0.56601892]] 

Iteration 70 beta: 
[[0.99755157]
 [0.56625011]] 

Iteration 71 beta: 
[[0.99657039]
 [0.56640429]] 

Iteration 72 beta: 
[[0.99746175]
 [0.56660239]] 

Iteration 73 beta: 
[[0.99663293]
 [0.5667352 ]] 

Iteration 74 beta: 
[[0.99738573]
 [0.56690494]] 

Iteration 75 beta: 
[[0.9966856 ]
 [0.56701935]] 

Iteration 76 beta: 
[[0.99732137]
 [0.5671648 ]] 

Iteration 77 beta: 
[[0.99672996]
 [0.56726335]] 

Iteration 78 beta: 
[[0.9972669 ]
 [0.56738798]] 

Iteration 79 beta: 
[[0.99676732]
 [0.56747286]] 

Iteration 80 beta: 
[[0.99722079]
 [0.56757967]] 

Iteration 81 beta: 
[[0.99679878]
 [0.56765277]] 

Iteration 82 beta: 
[[0.99718176]
 [0.5677443 ]] 

Iteration 83 beta: 
[[0.99682527]
 [0.56780726]] 

Iteration 84 beta: 
[[0.99714872]
 [0.56788569]] 

Iteration 85 beta: 
[[0.99684758]
 [0.56793992]] 

Iteration 86 beta: 
[[0.99712074]
 [0.56800713]] 

Iteration 87 beta: 
[[0.99686637]
 [0.56805383]] 

Iteration 88 beta: 
[[0.99709706]
 [0.56811144]] 

Iteration 89 beta: 
[[0.99688218]
 [0.56815164]] 

Iteration 90 beta: 
[[0.99707702]
 [0.56820102]] 

Iteration 91 beta: 
[[0.9968955 ]
 [0.56823564]] 

Iteration 92 beta: 
[[0.99706004]
 [0.56827795]] 

Iteration 93 beta: 
[[0.99690671]
 [0.56830776]] 

Iteration 94 beta: 
[[0.99704567]
 [0.56834403]] 

Iteration 95 beta: 
[[0.99691614]
 [0.5683697 ]] 

Iteration 96 beta: 
[[0.99703351]
 [0.56840078]] 

Iteration 97 beta: 
[[0.99692409]
 [0.56842288]] 

Iteration 98 beta: 
[[0.99702321]
 [0.56844952]] 

Iteration 99 beta: 
[[0.99693078]
 [0.56846855]] 

Iteration 100 beta: 
[[0.99701448]
 [0.56849139]] 

Iteration 101 beta: 
[[0.9969364 ]
 [0.56850776]] 

Iteration 102 beta: 
[[0.9970071 ]
 [0.56852734]] 

Iteration 103 beta: 
[[0.99694114]
 [0.56854144]] 

Iteration 104 beta: 
[[0.99700084]
 [0.56855822]] 

Iteration 105 beta: 
[[0.99694513]
 [0.56857035]] 

Iteration 106 beta: 
[[0.99699555]
 [0.56858474]] 

Iteration 107 beta: 
[[0.99694848]
 [0.56859518]] 

Iteration 108 beta: 
[[0.99699106]
 [0.56860752]] 

Iteration 109 beta: 
[[0.9969513 ]
 [0.56861651]] 

Iteration 110 beta: 
[[0.99698727]
 [0.56862708]] 

Iteration 111 beta: 
[[0.99695368]
 [0.56863482]] 

Iteration 112 beta: 
[[0.99698405]
 [0.56864388]] 

Iteration 113 beta: 
[[0.99695567]
 [0.56865054]] 

Iteration 114 beta: 
[[0.99698132]
 [0.56865831]] 

Iteration 115 beta: 
[[0.99695736]
 [0.56866404]] 

Iteration 116 beta: 
[[0.99697902]
 [0.5686707 ]] 

Iteration 117 beta: 
[[0.99695877]
 [0.56867563]] 

Iteration 118 beta: 
[[0.99697706]
 [0.56868135]] 

Iteration 119 beta: 
[[0.99695996]
 [0.56868559]] 

Iteration 120 beta: 
[[0.99697541]
 [0.56869049]] 

Iteration 121 beta: 
[[0.99696096]
 [0.56869414]] 

Iteration 122 beta: 
[[0.99697401]
 [0.56869834]] 

Iteration 123 beta: 
[[0.9969618 ]
 [0.56870148]] 

Iteration 124 beta: 
[[0.99697282]
 [0.56870508]] 

Iteration 125 beta: 
[[0.99696251]
 [0.56870778]] 

Iteration 126 beta: 
[[0.99697181]
 [0.56871087]] 

Iteration 127 beta: 
[[0.9969631]
 [0.5687132]] 

Iteration 128 beta: 
[[0.99697096]
 [0.56871584]] 

Iteration 129 beta: 
[[0.9969636 ]
 [0.56871785]] 

Iteration 130 beta: 
[[0.99697024]
 [0.56872012]] 

Iteration 131 beta: 
[[0.99696402]
 [0.56872184]] 

Iteration 132 beta: 
[[0.99696963]
 [0.56872378]] 

Iteration 133 beta: 
[[0.99696437]
 [0.56872527]] 

Iteration 134 beta: 
[[0.99696911]
 [0.56872693]] 

Iteration 135 beta: 
[[0.99696467]
 [0.56872821]] 

Iteration 136 beta: 
[[0.99696867]
 [0.56872964]] 

Iteration 137 beta: 
[[0.99696492]
 [0.56873074]] 

Iteration 138 beta: 
[[0.9969683 ]
 [0.56873196]] 

Iteration 139 beta: 
[[0.99696513]
 [0.56873291]] 

Iteration 140 beta: 
[[0.99696798]
 [0.56873396]] 

Iteration 141 beta: 
[[0.99696531]
 [0.56873477]] 

Iteration 142 beta: 
[[0.99696771]
 [0.56873567]] 

Iteration 143 beta: 
[[0.99696545]
 [0.56873637]] 

Iteration 144 beta: 
[[0.99696749]
 [0.56873715]] 

Iteration 145 beta: 
[[0.99696558]
 [0.56873775]] 

Iteration 146 beta: 
[[0.99696729]
 [0.56873841]] 

Iteration 147 beta: 
[[0.99696568]
 [0.56873893]] 

Iteration 148 beta: 
[[0.99696713]
 [0.5687395 ]] 

Iteration 149 beta: 
[[0.99696577]
 [0.56873994]] 

Iteration 150 beta: 
[[0.99696699]
 [0.56874043]] 

Iteration 151 beta: 
[[0.99696584]
 [0.56874081]] 

Iteration 152 beta: 
[[0.99696688]
 [0.56874123]] 

Iteration 153 beta: 
[[0.9969659 ]
 [0.56874156]] 

Iteration 154 beta: 
[[0.99696678]
 [0.56874192]] 

Iteration 155 beta: 
[[0.99696596]
 [0.5687422 ]] 

Iteration 156 beta: 
[[0.99696669]
 [0.56874251]] 

Iteration 157 beta: 
[[0.996966  ]
 [0.56874275]] 

Iteration 158 beta: 
[[0.99696662]
 [0.56874302]] 

Iteration 159 beta: 
[[0.99696604]
 [0.56874323]] 

Iteration 160 beta: 
[[0.99696656]
 [0.56874345]] 

Iteration 161 beta: 
[[0.99696607]
 [0.56874363]] 

Iteration 162 beta: 
[[0.99696651]
 [0.56874383]] 

Iteration 163 beta: 
[[0.99696609]
 [0.56874398]] 

Iteration 164 beta: 
[[0.99696647]
 [0.56874415]] 

Iteration 165 beta: 
[[0.99696611]
 [0.56874428]] 

Iteration 166 beta: 
[[0.99696643]
 [0.56874442]] 

Iteration 167 beta: 
[[0.99696613]
 [0.56874454]] 

Iteration 168 beta: 
[[0.9969664 ]
 [0.56874466]] 

Iteration 169 beta: 
[[0.99696615]
 [0.56874476]] 

Iteration 170 beta: 
[[0.99696637]
 [0.56874486]] 

Iteration 171 beta: 
[[0.99696616]
 [0.56874495]] 

Iteration 172 beta: 
[[0.99696635]
 [0.56874504]] 

Iteration 173 beta: 
[[0.99696617]
 [0.56874511]] 

Iteration 174 beta: 
[[0.99696633]
 [0.56874519]] 

Iteration 175 beta: 
[[0.99696618]
 [0.56874525]] 

Iteration 176 beta: 
[[0.99696632]
 [0.56874532]] 

Iteration 177 beta: 
[[0.99696619]
 [0.56874537]] 

Iteration 178 beta: 
[[0.9969663 ]
 [0.56874543]] 

Iteration 179 beta: 
[[0.99696619]
 [0.56874547]] 

Iteration 180 beta: 
[[0.99696629]
 [0.56874552]] 

Iteration 181 beta: 
[[0.9969662 ]
 [0.56874556]] 

Iteration 182 beta: 
[[0.99696628]
 [0.56874561]] 

Iteration 183 beta: 
[[0.9969662 ]
 [0.56874564]] 

Iteration 184 beta: 
[[0.99696627]
 [0.56874568]] 

Iteration 185 beta: 
[[0.99696621]
 [0.5687457 ]] 

Iteration 186 beta: 
[[0.99696627]
 [0.56874574]] 

Iteration 187 beta: 
[[0.99696621]
 [0.56874576]] 

Iteration 188 beta: 
[[0.99696626]
 [0.56874579]] 

Iteration 189 beta: 
[[0.99696621]
 [0.56874581]] 

Iteration 190 beta: 
[[0.99696625]
 [0.56874583]] 

Iteration 191 beta: 
[[0.99696621]
 [0.56874585]] 

Iteration 192 beta: 
[[0.99696625]
 [0.56874587]] 

Iteration 193 beta: 
[[0.99696622]
 [0.56874589]] 

Iteration 194 beta: 
[[0.99696625]
 [0.5687459 ]] 

Iteration 195 beta: 
[[0.99696622]
 [0.56874592]] 

Iteration 196 beta: 
[[0.99696624]
 [0.56874593]] 

Iteration 197 beta: 
[[0.99696622]
 [0.56874594]] 

Iteration 198 beta: 
[[0.99696624]
 [0.56874596]] 

Iteration 199 beta: 
[[0.99696622]
 [0.56874597]] 

Iteration 200 beta: 
[[0.99696624]
 [0.56874598]] 

Iteration 201 beta: 
[[0.99696622]
 [0.56874598]] 

Iteration 202 beta: 
[[0.99696624]
 [0.56874599]] 

Iteration 203 beta: 
[[0.99696622]
 [0.568746  ]] 

Iteration 204 beta: 
[[0.99696623]
 [0.56874601]] 

Iteration 205 beta: 
[[0.99696622]
 [0.56874602]] 

Iteration 206 beta: 
[[0.99696623]
 [0.56874602]] 

Iteration 207 beta: 
[[0.99696622]
 [0.56874603]] 

Iteration 208 beta: 
[[0.99696623]
 [0.56874603]] 

Iteration 209 beta: 
[[0.99696622]
 [0.56874604]] 

Iteration 210 beta: 
[[0.99696623]
 [0.56874604]] 

Iteration 211 beta: 
[[0.99696622]
 [0.56874605]] 

Iteration 212 beta: 
[[0.99696623]
 [0.56874605]] 

Iteration 213 beta: 
[[0.99696622]
 [0.56874606]] 

With this, we find that beta converges after 213 iterations to the maximum-likelihood estimate, which is close to the true beta. Here is how each component of beta changes over the iterations.

fig, ax = plt.subplots(2, 1, figsize=(16, 10))
ax[0].set_ylim(0, 1.3)
ax[0].plot(beta_0, label='beta')
ax[0].axhline(true_beta[0], color='red', linestyle='--', label='true')
ax[0].legend(loc='best')
ax[1].set_ylim(0, 1.3)
ax[1].plot(beta_1, label='beta')
ax[1].axhline(true_beta[1], color='red', linestyle='--', label='true')
ax[1].legend(loc='best')
plt.show()
grad
array([[ 8.8316704e-09],
       [-5.0488693e-09]])

Of course, we can adjust the learning rate to trade off how quickly and how smoothly the estimates converge.

Newton-Raphson Method

Unlike gradient descent, the Newton-Raphson method uses second-order derivatives. So in order to use it, we need to calculate the Hessian matrix,

$$ \begin{aligned} H &= - \nabla^2 l (\beta_0^{(t)}, \beta_1^{(t)}) \\ &= \frac{1}{n} \sum_{i=1}^n \exp(x_i^T \beta) \cdot x_i x_i^T \end{aligned}$$
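
The Newton-Raphson update then replaces the fixed learning rate with the inverse of this Hessian, which is exactly the step the code below performs:

$$ \beta^{(t+1)} = \beta^{(t)} - H^{-1} \big( -\nabla l(\beta_0^{(t)}, \beta_1^{(t)}) \big) $$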
beta = np.array([.5, .5]).reshape((p, 1))

param = np.exp(x @ beta)
D = np.diag(np.squeeze(param))
D
array([[1.9395084 , 0.        , 0.        , ..., 0.        , 0.        ,
        0.        ],
       [0.        , 1.55088286, 0.        , ..., 0.        , 0.        ,
        0.        ],
       [0.        , 0.        , 1.56390019, ..., 0.        , 0.        ,
        0.        ],
       ...,
       [0.        , 0.        , 0.        , ..., 1.63728199, 0.        ,
        0.        ],
       [0.        , 0.        , 0.        , ..., 0.        , 1.70810923,
        0.        ],
       [0.        , 0.        , 0.        , ..., 0.        , 0.        ,
        1.61818394]])
H = x.T @ D @ x / n
H
array([[1.66309994, 0.04482571],
       [0.04482571, 0.06487878]])
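
As an aside, the same Hessian can be computed without materializing the $n \times n$ diagonal matrix, since param * x already scales each row $x_i$ by $\exp(x_i^T \beta)$:

# equivalent to x.T @ D @ x / n, but without building the n x n matrix D
H_alt = (param * x).T @ x / n
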
beta = np.zeros((p, 1))
beta_0 = []
beta_1 = []

for i in range(500):
    param = np.exp(x @ beta)
    grad = -np.mean(y * x - param * x, axis=0).reshape((p, 1))
    D = np.diag(np.squeeze(param))
    H = x.T @ D @ x / n
    
    # Newton-Raphson update
    beta_new = beta - np.linalg.inv(H) @ grad
    converged = np.sum(np.abs(beta_new - beta)) < 1e-8
    
    beta = beta_new
    beta_0.append(beta[0])
    beta_1.append(beta[1])
    print('Iteration {} beta: '.format(i))
    print(beta_new, '\n')
    
    if converged:
        break
Iteration 0 beta: 
[[1.72694737]
 [1.55267496]] 

Iteration 1 beta: 
[[1.21680451]
 [1.10098661]] 

Iteration 2 beta: 
[[1.02354269]
 [0.68242648]] 

Iteration 3 beta: 
[[0.99755109]
 [0.57208561]] 

Iteration 4 beta: 
[[0.99696661]
 [0.56874834]] 

Iteration 5 beta: 
[[0.99696623]
 [0.5687461 ]] 

Iteration 6 beta: 
[[0.99696623]
 [0.5687461 ]] 

fig, ax = plt.subplots(2, 1, figsize=(16, 10))
ax[0].set_ylim(0, 2)
ax[0].plot(beta_0, label='beta')
ax[0].axhline(true_beta[0], color='red', linestyle='--', label='true')
ax[0].legend(loc='best')
ax[1].set_ylim(0, 2)
ax[1].plot(beta_1, label='beta')
ax[1].axhline(true_beta[1], color='red', linestyle='--', label='true')
ax[1].legend(loc='best')
plt.show()
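
As a sanity check (a minimal sketch, assuming the statsmodels package is available; it is not used elsewhere in this post), we can fit the same Poisson regression with a GLM and compare the coefficients with the beta found above.

import statsmodels.api as sm

# Poisson GLM with the default log link; x already contains the intercept column
glm_result = sm.GLM(y, x, family=sm.families.Poisson()).fit()
glm_result.params  # should closely match the Newton-Raphson estimate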

Summary

We found the maximum of the likelihood function (equivalently, the minimum of the negative log-likelihood) with optimization methods such as gradient descent and Newton-Raphson. The key point is that differentiation is required for the optimization. In this post we calculated the gradient (and Hessian) by hand, but automatic-differentiation libraries such as autograd or JAX can compute them for us.
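
For example, here is a minimal sketch with JAX (assuming it is installed); jnp_x and jnp_y are just JAX copies of the arrays used above, and the constant log(y!) term is dropped since it does not depend on beta.

import jax
import jax.numpy as jnp

def neg_log_likelihood(beta, x, y):
    # negative (scaled) Poisson log-likelihood, constants dropped
    eta = x @ beta
    return -jnp.mean(y * eta - jnp.exp(eta))

jnp_x = jnp.asarray(x)
jnp_y = jnp.asarray(y).ravel()

grad_fn = jax.grad(neg_log_likelihood)     # gradient via automatic differentiation
hess_fn = jax.hessian(neg_log_likelihood)  # Hessian via automatic differentiation

grad_fn(jnp.zeros(p), jnp_x, jnp_y)
hess_fn(jnp.zeros(p), jnp_x, jnp_y)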